This article collects typical usage examples of the testTokenization function from vs/editor/standalone-languages/test/testUtil in TypeScript. If you have been wondering what exactly testTokenization does and how to use it, the curated code samples below may help.
Ten code examples of the testTokenization function are shown below, sorted by popularity by default. You can upvote the examples you like or find useful; your feedback helps the system recommend better TypeScript code samples.
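Before the examples, here is a compact sketch of the calling convention they all share. This is a minimal illustration rather than code taken verbatim from the vscode repository: the import paths are assumptions inferred from the module path in the title and the source-file names listed under each example, and the expected tokens for the sample line are assumptions based on the token types that appear in Example 1.

// Minimal sketch (assumed import paths; verify against the vscode source tree).
import { testTokenization } from 'vs/editor/standalone-languages/test/testUtil';
import { language } from 'vs/editor/standalone-languages/sql';

// Each outer array element is one test case. Each inner object describes a single
// source line together with the tokens the tokenizer is expected to produce for it,
// given as (startIndex, type) pairs. A test case containing several inner objects
// checks that tokenizer state is carried correctly across line breaks.
testTokenization('sql', language, [
    [{
        line: 'SELECT 1;',
        tokens: [
            { startIndex: 0, type: 'keyword.sql' },   // SELECT
            { startIndex: 6, type: 'white.sql' },     // space
            { startIndex: 7, type: 'number.sql' },    // 1
            { startIndex: 8, type: 'delimiter.sql' }  // ;
        ]
    }]
]);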
Example 1: testTokenization
testTokenization('sql', language, [
// Comments
[{
line: '-- a comment',
tokens: [
{ startIndex: 0, type: 'comment.sql' }
]}],
[{
line: '---sticky -- comment',
tokens: [
{ startIndex: 0, type: 'comment.sql' }
]}],
[{
line: '-almost a comment',
tokens: [
{ startIndex: 0, type: 'operator.sql' },
{ startIndex: 1, type: 'identifier.sql' },
{ startIndex: 7, type: 'white.sql' },
{ startIndex: 8, type: 'identifier.sql' },
{ startIndex: 9, type: 'white.sql' },
{ startIndex: 10, type: 'identifier.sql' }
]}],
[{
line: '/* a full line comment */',
tokens: [
{ startIndex: 0, type: 'comment.quote.sql' },
{ startIndex: 2, type: 'comment.sql' },
{ startIndex: 23, type: 'comment.quote.sql' }
]}],
[{
line: '/* /// *** /// */',
tokens: [
{ startIndex: 0, type: 'comment.quote.sql' },
{ startIndex: 2, type: 'comment.sql' },
{ startIndex: 15, type: 'comment.quote.sql' }
]}],
[{
line: 'declare @x int = /* a simple comment */ 1;',
tokens: [
{ startIndex: 0, type: 'keyword.sql' },
{ startIndex: 7, type: 'white.sql' },
{ startIndex: 8, type: 'identifier.sql' },
{ startIndex: 10, type: 'white.sql' },
{ startIndex: 11, type: 'keyword.sql' },
{ startIndex: 14, type: 'white.sql' },
{ startIndex: 15, type: 'operator.sql' },
{ startIndex: 16, type: 'white.sql' },
{ startIndex: 17, type: 'comment.quote.sql' },
{ startIndex: 19, type: 'comment.sql' },
{ startIndex: 37, type: 'comment.quote.sql' },
{ startIndex: 39, type: 'white.sql' },
{ startIndex: 40, type: 'number.sql' },
{ startIndex: 41, type: 'delimiter.sql' }
]}],
// Not supporting nested comments, as nested comments seem to not be standard?
// i.e. http://stackoverflow.com/questions/728172/are-there-multiline-comment-delimiters-in-sql-that-are-vendor-agnostic
[{
line: '@x=/* a /* nested comment 1*/;',
tokens: [
{ startIndex: 0, type: 'identifier.sql' },
{ startIndex: 2, type: 'operator.sql' },
{ startIndex: 3, type: 'comment.quote.sql' },
{ startIndex: 5, type: 'comment.sql' },
{ startIndex: 28, type: 'comment.quote.sql' },
{ startIndex: 30, type: 'delimiter.sql' }
]}],
[{
line: '@x=/* another comment */ 1*/;',
tokens: [
{ startIndex: 0, type: 'identifier.sql' },
{ startIndex: 2, type: 'operator.sql' },
{ startIndex: 3, type: 'comment.quote.sql' },
{ startIndex: 5, type: 'comment.sql' },
{ startIndex: 22, type: 'comment.quote.sql' },
{ startIndex: 24, type: 'white.sql' },
{ startIndex: 25, type: 'number.sql' },
{ startIndex: 26, type: 'operator.sql' },
{ startIndex: 28, type: 'delimiter.sql' }
]}],
[{
line: '@x=/*/;',
tokens: [
{ startIndex: 0, type: 'identifier.sql' },
{ startIndex: 2, type: 'operator.sql' },
{ startIndex: 3, type: 'comment.quote.sql' },
{ startIndex: 5, type: 'comment.sql' }
]}],
// Numbers
[{
line: '123',
tokens: [
//......... the remainder of this code has been omitted .........
Contributor ID: 13572293130, Project: vscode, Lines: 101, Source file: sql.test.ts
Example 2: testTokenization
testTokenization('lua', language, [
// Keywords
[{
line: 'local x, y = 1, 10',
tokens: [
{ startIndex: 0, type: 'keyword.local.lua' },
{ startIndex: 5, type: '' },
{ startIndex: 6, type: 'identifier.lua' },
{ startIndex: 7, type: 'delimiter.lua' },
{ startIndex: 8, type: '' },
{ startIndex: 9, type: 'identifier.lua' },
{ startIndex: 10, type: '' },
{ startIndex: 11, type: 'delimiter.lua' },
{ startIndex: 12, type: '' },
{ startIndex: 13, type: 'number.lua' },
{ startIndex: 14, type: 'delimiter.lua' },
{ startIndex: 15, type: '' },
{ startIndex: 16, type: 'number.lua' }
]}],
[{
line: 'foo = "Hello" .. "World"; local foo = foo',
tokens: [
{ startIndex: 0, type: 'identifier.lua' },
{ startIndex: 3, type: '' },
{ startIndex: 4, type: 'delimiter.lua' },
{ startIndex: 5, type: '' },
{ startIndex: 6, type: 'string.lua' },
{ startIndex: 13, type: '' },
{ startIndex: 14, type: 'delimiter.lua' },
{ startIndex: 16, type: '' },
{ startIndex: 17, type: 'string.lua' },
{ startIndex: 24, type: 'delimiter.lua' },
{ startIndex: 25, type: '' },
{ startIndex: 26, type: 'keyword.local.lua' },
{ startIndex: 31, type: '' },
{ startIndex: 32, type: 'identifier.lua' },
{ startIndex: 35, type: '' },
{ startIndex: 36, type: 'delimiter.lua' },
{ startIndex: 37, type: '' },
{ startIndex: 38, type: 'identifier.lua' }
]}],
// Comments
[{
line: '--[[ text ]] x',
tokens: [
{ startIndex: 0, type: 'comment.lua' },
{ startIndex: 12, type: '' },
{ startIndex: 13, type: 'identifier.lua' }
]}],
[{
line: '--[===[ text ]===] x',
tokens: [
{ startIndex: 0, type: 'comment.lua' },
{ startIndex: 18, type: '' },
{ startIndex: 19, type: 'identifier.lua' }
]}],
[{
line: '--[===[ text ]==] x',
tokens: [
{ startIndex: 0, type: 'comment.lua' }
]}]
]);
Contributor ID: 13572293130, Project: vscode, Lines: 67, Source file: lua.test.ts
Example 3: testTokenization
testTokenization('powershell', language, [
// Comments - single line
[{
line: '#',
tokens: null}],
[{
line: ' # a comment',
tokens: [
{ startIndex: 0, type: '' },
{ startIndex: 4, type: 'comment.ps1' }
]}],
[{
line: '# a comment',
tokens: [
{ startIndex: 0, type: 'comment.ps1' }
]}],
[{
line: '#sticky comment',
tokens: [
{ startIndex: 0, type: 'comment.ps1' }
]}],
[{
line: '##still a comment',
tokens: [
{ startIndex: 0, type: 'comment.ps1' }
]}],
[{
line: '1 / 2 /# comment',
tokens: [
{ startIndex: 0, type: 'number.ps1' },
{ startIndex: 1, type: '' },
{ startIndex: 2, type: 'delimiter.ps1' },
{ startIndex: 3, type: '' },
{ startIndex: 4, type: 'number.ps1' },
{ startIndex: 5, type: '' },
{ startIndex: 6, type: 'delimiter.ps1' },
{ startIndex: 7, type: 'comment.ps1' }
]}],
[{
line: '$x = 1 # my comment # is a nice one',
tokens: [
{ startIndex: 0, type: 'variable.ps1' },
{ startIndex: 2, type: '' },
{ startIndex: 3, type: 'delimiter.ps1' },
{ startIndex: 4, type: '' },
{ startIndex: 5, type: 'number.ps1' },
{ startIndex: 6, type: '' },
{ startIndex: 7, type: 'comment.ps1' }
]}],
// Comments - range comment, single line
[{
line: '<# a simple comment #>',
tokens: [
{ startIndex: 0, type: 'comment.ps1' }
]}],
[{
line: '$x = <# a simple comment #> 1',
tokens: [
{ startIndex: 0, type: 'variable.ps1' },
{ startIndex: 2, type: '' },
{ startIndex: 3, type: 'delimiter.ps1' },
{ startIndex: 4, type: '' },
{ startIndex: 5, type: 'comment.ps1' },
{ startIndex: 27, type: '' },
{ startIndex: 28, type: 'number.ps1' }
]}],
[{
line: '$yy = <# comment #> 14',
tokens: [
{ startIndex: 0, type: 'variable.ps1' },
{ startIndex: 3, type: '' },
{ startIndex: 4, type: 'delimiter.ps1' },
{ startIndex: 5, type: '' },
{ startIndex: 6, type: 'comment.ps1' },
{ startIndex: 19, type: '' },
{ startIndex: 20, type: 'number.ps1' }
]}],
[{
line: '$x = <##>7',
tokens: [
{ startIndex: 0, type: 'variable.ps1' },
{ startIndex: 2, type: '' },
{ startIndex: 3, type: 'delimiter.ps1' },
{ startIndex: 4, type: '' },
{ startIndex: 5, type: 'comment.ps1' },
{ startIndex: 9, type: 'number.ps1' }
]}],
[{
line: '$x = <#<85',
//......... the remainder of this code has been omitted .........
Contributor ID: Huachao, Project: vscode, Lines: 101, Source file: powershell.test.ts
Example 4: testTokenization
testTokenization('cpp', language, [
// Keywords
[{
line: 'int _tmain(int argc, _TCHAR* argv[])',
tokens: [
{ startIndex: 0, type: 'keyword.int.cpp' },
{ startIndex: 3, type: '' },
{ startIndex: 4, type: 'identifier.cpp' },
{ startIndex: 10, type: 'delimiter.parenthesis.cpp' },
{ startIndex: 11, type: 'keyword.int.cpp' },
{ startIndex: 14, type: '' },
{ startIndex: 15, type: 'identifier.cpp' },
{ startIndex: 19, type: 'delimiter.cpp' },
{ startIndex: 20, type: '' },
{ startIndex: 21, type: 'identifier.cpp' },
{ startIndex: 27, type: 'delimiter.cpp' },
{ startIndex: 28, type: '' },
{ startIndex: 29, type: 'identifier.cpp' },
{ startIndex: 33, type: 'delimiter.square.cpp' },
{ startIndex: 35, type: 'delimiter.parenthesis.cpp' }
]}],
// Comments - single line
[{
line: '//',
tokens: [
{ startIndex: 0, type: 'comment.cpp' }
]}],
[{
line: ' // a comment',
tokens: [
{ startIndex: 0, type: '' },
{ startIndex: 4, type: 'comment.cpp' }
]}],
[{
line: '// a comment',
tokens: [
{ startIndex: 0, type: 'comment.cpp' }
]}],
[{
line: '//sticky comment',
tokens: [
{ startIndex: 0, type: 'comment.cpp' }
]}],
[{
line: '/almost a comment',
tokens: [
{ startIndex: 0, type: 'delimiter.cpp' },
{ startIndex: 1, type: 'identifier.cpp' },
{ startIndex: 7, type: '' },
{ startIndex: 8, type: 'identifier.cpp' },
{ startIndex: 9, type: '' },
{ startIndex: 10, type: 'identifier.cpp' }
]}],
[{
line: '/* //*/ a',
tokens: [
{ startIndex: 0, type: 'comment.cpp' },
{ startIndex: 7, type: '' },
{ startIndex: 8, type: 'identifier.cpp' }
]}],
[{
line: '1 / 2; /* comment',
tokens: [
{ startIndex: 0, type: 'number.cpp' },
{ startIndex: 1, type: '' },
{ startIndex: 2, type: 'delimiter.cpp' },
{ startIndex: 3, type: '' },
{ startIndex: 4, type: 'number.cpp' },
{ startIndex: 5, type: 'delimiter.cpp' },
{ startIndex: 6, type: '' },
{ startIndex: 7, type: 'comment.cpp' }
]}],
[{
line: 'int x = 1; // my comment // is a nice one',
tokens: [
{ startIndex: 0, type: 'keyword.int.cpp' },
{ startIndex: 3, type: '' },
{ startIndex: 4, type: 'identifier.cpp' },
{ startIndex: 5, type: '' },
{ startIndex: 6, type: 'delimiter.cpp' },
{ startIndex: 7, type: '' },
{ startIndex: 8, type: 'number.cpp' },
{ startIndex: 9, type: 'delimiter.cpp' },
{ startIndex: 10, type: '' },
{ startIndex: 11, type: 'comment.cpp' }
]}],
// Comments - range comment, single line
[{
line: '/* a simple comment */',
tokens: [
{ startIndex: 0, type: 'comment.cpp' }
//......... the remainder of this code has been omitted .........
Contributor ID: 13572293130, Project: vscode, Lines: 101, Source file: cpp.test.ts
Example 5: testTokenization
testTokenization('python', language, [
// Keywords
[{
line: 'def func():',
tokens: [
{ startIndex: 0, type: 'keyword.python' },
{ startIndex: 3, type: 'white.python' },
{ startIndex: 4, type: 'identifier.python' },
{ startIndex: 8, type: 'delimiter.parenthesis.python' },
{ startIndex: 10, type: 'delimiter.python' }
]}],
[{
line: 'func(str Y3)',
tokens: [
{ startIndex: 0, type: 'identifier.python' },
{ startIndex: 4, type: 'delimiter.parenthesis.python' },
{ startIndex: 5, type: 'keyword.python' },
{ startIndex: 8, type: 'white.python' },
{ startIndex: 9, type: 'identifier.python' },
{ startIndex: 11, type: 'delimiter.parenthesis.python' }
]}],
[{
line: '@Dec0_rator:',
tokens: [
{ startIndex: 0, type: 'tag.python' },
{ startIndex: 11, type: 'delimiter.python' }
]}],
// Comments
[{
line: ' # Comments! ## "jfkd" ',
tokens: [
{ startIndex: 0, type: 'white.python' },
{ startIndex: 1, type: 'comment.python' }
]}],
// Strings
[{
line: '\'s0\'',
tokens: [
{ startIndex: 0, type: 'string.escape.python' },
{ startIndex: 1, type: 'string.python' },
{ startIndex: 3, type: 'string.escape.python' }
]}],
[{
line: '"\' " "',
tokens: [
{ startIndex: 0, type: 'string.escape.python' },
{ startIndex: 1, type: 'string.python' },
{ startIndex: 3, type: 'string.escape.python' },
{ startIndex: 4, type: 'white.python' },
{ startIndex: 5, type: 'string.escape.python' }
]}],
[{
line: '\'\'\'Lots of string\'\'\'',
tokens: [
{ startIndex: 0, type: 'string.python' }
]}],
[{
line: '"""Lots \'\'\' \'\'\'"""',
tokens: [
{ startIndex: 0, type: 'string.python' }
]}],
[{
line: '\'\'\'Lots \'\'\'0.3e-5',
tokens: [
{ startIndex: 0, type: 'string.python' },
{ startIndex: 11, type: 'number.python' }
]}],
// Numbers
[{
line: '0xAcBFd',
tokens: [
{ startIndex: 0, type: 'number.hex.python' }
]}],
[{
line: '0x0cH',
tokens: [
{ startIndex: 0, type: 'number.hex.python' },
{ startIndex: 4, type: 'identifier.python' }
]}],
[{
line: '456.7e-7j',
tokens: [
{ startIndex: 0, type: 'number.python' }
]}]
]);
Contributor ID: 13572293130, Project: vscode, Lines: 96, Source file: python.test.ts
Example 6: testTokenization
testTokenization('r', language, [
// Keywords
[{
line: 'function(a) { a }',
tokens: [
{ startIndex: 0, type: 'keyword.r' },
{ startIndex: 8, type: 'delimiter.parenthesis.r' },
{ startIndex: 9, type: 'identifier.r' },
{ startIndex: 10, type: 'delimiter.parenthesis.r' },
{ startIndex: 11, type: 'white.r' },
{ startIndex: 12, type: 'delimiter.curly.r' },
{ startIndex: 13, type: 'white.r' },
{ startIndex: 14, type: 'identifier.r' },
{ startIndex: 15, type: 'white.r' },
{ startIndex: 16, type: 'delimiter.curly.r' }
]}],
[{
line: 'while(FALSE) { break }',
tokens: [
{ startIndex: 0, type: 'keyword.r' },
{ startIndex: 5, type: 'delimiter.parenthesis.r' },
{ startIndex: 6, type: 'constant.r' },
{ startIndex: 11, type: 'delimiter.parenthesis.r' },
{ startIndex: 12, type: 'white.r' },
{ startIndex: 13, type: 'delimiter.curly.r' },
{ startIndex: 14, type: 'white.r' },
{ startIndex: 15, type: 'keyword.r' },
{ startIndex: 20, type: 'white.r' },
{ startIndex: 21, type: 'delimiter.curly.r' }
]}],
[{
line: 'if (a) { b } else { d }',
tokens: [
{ startIndex: 0, type: 'keyword.r' },
{ startIndex: 2, type: 'white.r' },
{ startIndex: 3, type: 'delimiter.parenthesis.r' },
{ startIndex: 4, type: 'identifier.r' },
{ startIndex: 5, type: 'delimiter.parenthesis.r' },
{ startIndex: 6, type: 'white.r' },
{ startIndex: 7, type: 'delimiter.curly.r' },
{ startIndex: 8, type: 'white.r' },
{ startIndex: 9, type: 'identifier.r' },
{ startIndex: 10, type: 'white.r' },
{ startIndex: 11, type: 'delimiter.curly.r' },
{ startIndex: 12, type: 'white.r' },
{ startIndex: 13, type: 'keyword.r' },
{ startIndex: 17, type: 'white.r' },
{ startIndex: 18, type: 'delimiter.curly.r' },
{ startIndex: 19, type: 'white.r' },
{ startIndex: 20, type: 'identifier.r' },
{ startIndex: 21, type: 'white.r' },
{ startIndex: 22, type: 'delimiter.curly.r' }
]}],
// Identifiers
[{
line: 'a',
tokens: [
{ startIndex: 0, type: 'identifier.r' }
]}],
// Comments
[{
line: ' # comment #',
tokens: [
{ startIndex: 0, type: 'white.r' },
{ startIndex: 1, type: 'comment.r' }
]}],
// Roxygen comments
[{
line: ' #\' @author: me ',
tokens: [
{ startIndex: 0, type: 'white.r' },
{ startIndex: 1, type: 'comment.doc.r' },
{ startIndex: 4, type: 'tag.r' },
{ startIndex: 11, type: 'comment.doc.r' }
]}],
// Strings
[{
line: '"a\\n"',
tokens: [
{ startIndex: 0, type: 'string.escape.r' },
{ startIndex: 1, type: 'string.r' },
{ startIndex: 4, type: 'string.escape.r' }
]}],
// '\\s' is not a special character
[{
line: '"a\\s"',
tokens: [
{ startIndex: 0, type: 'string.escape.r' },
{ startIndex: 1, type: 'string.r' },
{ startIndex: 2, type: 'error-token.r' },
{ startIndex: 4, type: 'string.escape.r' }
]}],
//......... the remainder of this code has been omitted .........
Contributor ID: 13572293130, Project: vscode, Lines: 101, Source file: r.test.ts
Example 7: testTokenization
testTokenization('xml', language, [
// Complete Start Tag with Whitespace
[{
line: '<person>',
tokens: [
{ startIndex: 0, type: 'delimiter.start.xml' },
{ startIndex: 1, type: 'tag.tag-person.xml' },
{ startIndex: 7, type: 'delimiter.start.xml' }
]}],
[{
line: '<person/>',
tokens: [
{ startIndex: 0, type: 'delimiter.start.xml' },
{ startIndex: 1, type: 'tag.tag-person.xml' },
{ startIndex: 8, type: 'delimiter.start.xml' }
]}],
[{
line: '<person >',
tokens: [
{ startIndex: 0, type: 'delimiter.start.xml' },
{ startIndex: 1, type: 'tag.tag-person.xml' },
{ startIndex: 7, type: '' },
{ startIndex: 8, type: 'delimiter.start.xml' }
]}],
[{
line: '<person />',
tokens: [
{ startIndex: 0, type: 'delimiter.start.xml' },
{ startIndex: 1, type: 'tag.tag-person.xml' },
{ startIndex: 7, type: '' },
{ startIndex: 8, type: 'tag.tag-person.xml' },
{ startIndex: 9, type: 'delimiter.start.xml' }
]}],
// Incomplete Start Tag
[{
line: '<',
tokens: [
{ startIndex: 0, type: '' }
]}],
[{
line: '<person',
tokens: [
{ startIndex: 0, type: 'delimiter.start.xml' },
{ startIndex: 1, type: 'tag.tag-person.xml' }
]}],
[{
line: '<input',
tokens: [
{ startIndex: 0, type: 'delimiter.start.xml' },
{ startIndex: 1, type: 'tag.tag-input.xml' }
]}],
// Invalid Open Start Tag
[{
line: '< person',
tokens: [
{ startIndex: 0, type: '' }
]}],
[{
line: '< person>',
tokens: [
{ startIndex: 0, type: '' }
]}],
[{
line: 'i <person;',
tokens: [
{ startIndex: 0, type: '' },
{ startIndex: 2, type: 'delimiter.start.xml' },
{ startIndex: 3, type: 'tag.tag-person.xml' },
{ startIndex: 9, type: '' }
]}],
// Tag with Attribute
[{
line: '<tool name="">',
tokens: [
{ startIndex: 0, type: 'delimiter.start.xml' },
{ startIndex: 1, type: 'tag.tag-tool.xml' },
{ startIndex: 5, type: '' },
{ startIndex: 6, type: 'attribute.name.xml' },
{ startIndex: 10, type: '' },
{ startIndex: 11, type: 'attribute.value.xml' },
{ startIndex: 13, type: 'delimiter.start.xml' }
]}],
[{
line: '<tool name="Monaco">',
tokens: [
{ startIndex: 0, type: 'delimiter.start.xml' },
{ startIndex: 1, type: 'tag.tag-tool.xml' },
{ startIndex: 5, type: '' },
{ startIndex: 6, type: 'attribute.name.xml' },
//......... the remainder of this code has been omitted .........
Contributor ID: 13572293130, Project: vscode, Lines: 101, Source file: xml.test.ts
Example 8: testTokenization
testTokenization('dockerfile', language, [
// All
[{
line: 'FROM mono:3.12',
tokens: [
{ startIndex: 0, type: 'keyword.dockerfile' },
{ startIndex: 4, type: '' }
]}, {
line: '',
tokens: [
]}, {
line: 'ENV KRE_FEED https://www.myget.org/F/aspnetvnext/api/v2',
tokens: [
{ startIndex: 0, type: 'keyword.dockerfile' },
{ startIndex: 3, type: '' },
{ startIndex: 4, type: 'variable.dockerfile' },
{ startIndex: 12, type: '' }
]}, {
line: 'ENV KRE_USER_HOME /opt/kre',
tokens: [
{ startIndex: 0, type: 'keyword.dockerfile' },
{ startIndex: 3, type: '' },
{ startIndex: 4, type: 'variable.dockerfile' },
{ startIndex: 17, type: '' }
]}, {
line: '',
tokens: [
]}, {
line: 'RUN apt-get -qq update && apt-get -qqy install unzip ',
tokens: [
{ startIndex: 0, type: 'keyword.dockerfile' },
{ startIndex: 3, type: '' }
]}, {
line: '',
tokens: [
]}, {
line: 'ONBUILD RUN curl -sSL https://raw.githubusercontent.com/aspnet/Home/dev/kvminstall.sh | sh',
tokens: [
{ startIndex: 0, type: 'keyword.dockerfile' },
{ startIndex: 7, type: '' },
{ startIndex: 8, type: 'keyword.dockerfile' },
{ startIndex: 11, type: '' }
]}, {
line: 'ONBUILD RUN bash -c "source $KRE_USER_HOME/kvm/kvm.sh \\',
tokens: [
{ startIndex: 0, type: 'keyword.dockerfile' },
{ startIndex: 7, type: '' },
{ startIndex: 8, type: 'keyword.dockerfile' },
{ startIndex: 11, type: '' },
{ startIndex: 20, type: 'string.dockerfile' },
{ startIndex: 28, type: 'variable.dockerfile' },
{ startIndex: 42, type: 'string.dockerfile' }
]}, {
line: ' && kvm install latest -a default \\',
tokens: [
{ startIndex: 0, type: 'string.dockerfile' }
]}, {
line: ' && kvm alias default | xargs -i ln -s $KRE_USER_HOME/packages/{} $KRE_USER_HOME/packages/default"',
tokens: [
{ startIndex: 0, type: 'string.dockerfile' },
{ startIndex: 42, type: 'variable.dockerfile' },
{ startIndex: 56, type: 'string.dockerfile' },
{ startIndex: 69, type: 'variable.dockerfile' },
{ startIndex: 83, type: 'string.dockerfile' }
]}, {
line: '',
tokens: [
]}, {
line: '# Install libuv for Kestrel from source code (binary is not in wheezy and one in jessie is still too old)',
tokens: [
{ startIndex: 0, type: 'comment.dockerfile' }
]}, {
line: 'RUN apt-get -qqy install \\',
tokens: [
{ startIndex: 0, type: 'keyword.dockerfile' },
{ startIndex: 3, type: '' }
]}, {
line: ' autoconf \\',
tokens: [
{ startIndex: 0, type: '' }
]}, {
line: ' automake \\',
tokens: [
{ startIndex: 0, type: '' }
]}, {
line: ' build-essential \\',
tokens: [
{ startIndex: 0, type: '' }
]}, {
line: ' libtool ',
tokens: [
{ startIndex: 0, type: '' }
]}, {
line: 'RUN LIBUV_VERSION=1.0.0-rc2 \\',
tokens: [
{ startIndex: 0, type: 'keyword.dockerfile' },
//......... the remainder of this code has been omitted .........
Contributor ID: 13572293130, Project: vscode, Lines: 101, Source file: dockerfile.test.ts
Example 9: testTokenization
testTokenization('jade', language, [
// Tags [Jade]
[{
line: 'p 5',
tokens: [
{ startIndex: 0, type: 'tag.jade' },
{ startIndex: 1, type: '' }
]}],
[{
line: 'div#container.stuff',
tokens: [
{ startIndex: 0, type: 'tag.jade' },
{ startIndex: 3, type: 'tag.id.jade' },
{ startIndex: 13, type: 'tag.class.jade' }
]}],
[{
line: 'div.container#stuff',
tokens: [
{ startIndex: 0, type: 'tag.jade' },
{ startIndex: 3, type: 'tag.class.jade' },
{ startIndex: 13, type: 'tag.id.jade' }
]}],
[{
line: 'div.container#stuff .container',
tokens: [
{ startIndex: 0, type: 'tag.jade' },
{ startIndex: 3, type: 'tag.class.jade' },
{ startIndex: 13, type: 'tag.id.jade' },
{ startIndex: 19, type: '' }
]}],
[{
line: '#tag-id-1',
tokens: [
{ startIndex: 0, type: 'tag.id.jade' }
]}],
[{
line: '.tag-id-1',
tokens: [
{ startIndex: 0, type: 'tag.class.jade' }
]}],
// Attributes - Single Line [Jade]
[{
line: 'input(type="checkbox")',
tokens: [
{ startIndex: 0, type: 'tag.jade' },
{ startIndex: 5, type: 'delimiter.parenthesis.jade' },
{ startIndex: 6, type: 'attribute.name.jade' },
{ startIndex: 10, type: 'delimiter.jade' },
{ startIndex: 11, type: 'attribute.value.jade' },
{ startIndex: 21, type: 'delimiter.parenthesis.jade' }
]}],
[{
line: 'input (type="checkbox")',
tokens: [
{ startIndex: 0, type: 'tag.jade' },
{ startIndex: 5, type: '' }
]}],
[{
line: 'input(type="checkbox",name="agreement",checked)',
tokens: [
{ startIndex: 0, type: 'tag.jade' },
{ startIndex: 5, type: 'delimiter.parenthesis.jade' },
{ startIndex: 6, type: 'attribute.name.jade' },
{ startIndex: 10, type: 'delimiter.jade' },
{ startIndex: 11, type: 'attribute.value.jade' },
{ startIndex: 21, type: 'attribute.delimiter.jade' },
{ startIndex: 22, type: 'attribute.name.jade' },
{ startIndex: 26, type: 'delimiter.jade' },
{ startIndex: 27, type: 'attribute.value.jade' },
{ startIndex: 38, type: 'attribute.delimiter.jade' },
{ startIndex: 39, type: 'attribute.name.jade' },
{ startIndex: 46, type: 'delimiter.parenthesis.jade' }
]}],
[{
line: 'input(type="checkbox"',
tokens: [
{ startIndex: 0, type: 'tag.jade' },
{ startIndex: 5, type: 'delimiter.parenthesis.jade' },
{ startIndex: 6, type: 'attribute.name.jade' },
{ startIndex: 10, type: 'delimiter.jade' },
{ startIndex: 11, type: 'attribute.value.jade' }
]}, {
line: 'name="agreement"',
tokens: [
{ startIndex: 0, type: 'attribute.name.jade' },
{ startIndex: 4, type: 'delimiter.jade' },
{ startIndex: 5, type: 'attribute.value.jade' }
]}, {
line: 'checked)',
tokens: [
{ startIndex: 0, type: 'attribute.name.jade' },
//......... the remainder of this code has been omitted .........
Contributor ID: 13572293130, Project: vscode, Lines: 101, Source file: jade.test.ts
Example 10: testTokenization
testTokenization('ruby', language, [
// Keywords
[{
line: 'class Klass def init() end',
tokens: [
{ startIndex: 0, type: 'keyword.class.ruby' },
{ startIndex: 5, type: '' },
{ startIndex: 6, type: 'constructor.identifier.ruby' },
{ startIndex: 11, type: '' },
{ startIndex: 12, type: 'keyword.def.ruby' },
{ startIndex: 15, type: '' },
{ startIndex: 16, type: 'identifier.ruby' },
{ startIndex: 20, type: 'delimiter.parenthesis.ruby' },
{ startIndex: 22, type: '' },
{ startIndex: 23, type: 'keyword.def.ruby' }
]}],
// Single digit
[{
line: 'x == 1 ',
tokens: [
{ startIndex: 0, type: 'identifier.ruby' },
{ startIndex: 1, type: '' },
{ startIndex: 2, type: 'operator.ruby' },
{ startIndex: 4, type: '' },
{ startIndex: 5, type: 'number.ruby' },
{ startIndex: 6, type: '' }
]}],
// Regex
[{
line: 'text =~ /Ruby/',
tokens: [
{ startIndex: 0, type: 'identifier.ruby' },
{ startIndex: 4, type: '' },
{ startIndex: 5, type: 'operator.ruby' },
{ startIndex: 7, type: '' },
{ startIndex: 8, type: 'regexp.delim.ruby' },
{ startIndex: 9, type: 'regexp.ruby' },
{ startIndex: 13, type: 'regexp.delim.ruby' }
]}],
[{
line: 'text.sub!(/Rbuy/, "Ruby")',
tokens: [
{ startIndex: 0, type: 'identifier.ruby' },
{ startIndex: 4, type: '' },
{ startIndex: 5, type: 'identifier.ruby' },
{ startIndex: 9, type: 'delimiter.parenthesis.ruby' },
{ startIndex: 10, type: 'regexp.delim.ruby' },
{ startIndex: 11, type: 'regexp.ruby' },
{ startIndex: 15, type: 'regexp.delim.ruby' },
{ startIndex: 16, type: 'delimiter.ruby' },
{ startIndex: 17, type: '' },
{ startIndex: 18, type: 'string.d.delim.ruby' },
{ startIndex: 19, type: 'string.$S2.ruby' },
{ startIndex: 23, type: 'string.d.delim.ruby' },
{ startIndex: 24, type: 'delimiter.parenthesis.ruby' }
]}],
// make sure that division does not match regex
[{
line: 'a / b',
tokens: [
{ startIndex: 0, type: 'identifier.ruby' },
{ startIndex: 1, type: '' },
{ startIndex: 2, type: 'operator.ruby' },
{ startIndex: 3, type: '' },
{ startIndex: 4, type: 'identifier.ruby' }
]}],
// Heredoc
[{
line: '<<HERE',
tokens: [
{ startIndex: 0, type: 'string.heredoc.delimiter.ruby' }
]}, {
line: 'do some string',
tokens: [
{ startIndex: 0, type: 'string.heredoc.ruby' }
]}, {
line: 'HERE',
tokens: [
{ startIndex: 0, type: 'string.heredoc.delimiter.ruby' }
]}],
[{
line: 'x <<HERE',
tokens: [
{ startIndex: 0, type: 'identifier.ruby' },
{ startIndex: 1, type: 'string.heredoc.delimiter.ruby' }
]}, {
line: 'do some string',
tokens: [
{ startIndex: 0, type: 'string.heredoc.ruby' }
]}, {
line: 'HERE',
tokens: [
{ startIndex: 0, type: 'string.heredoc.delimiter.ruby' }
]}],
//......... the remainder of this code has been omitted .........
Contributor ID: 13572293130, Project: vscode, Lines: 101, Source file: ruby.test.ts
Note: The vs/editor/standalone-languages/test/testUtil.testTokenization examples in this article were compiled from source code and documentation platforms such as GitHub and MSDocs. The snippets are taken from open-source projects contributed by their respective authors, and copyright remains with those authors; consult each project's license before redistributing or reusing the code. Do not reproduce without permission.