:mod:`tokenize` --- Tokenizer for Python source
===============================================

.. module:: tokenize
   :synopsis: Lexical scanner for Python source code.

**Source code:** :source:`Lib/tokenize.py`
The :mod:`tokenize` module provides a lexical scanner for Python source code,
implemented in Python. The scanner in this module returns comments as tokens
as well, making it useful for implementing "pretty-printers", including
colorizers for on-screen displays.

To simplify token stream handling, all operator and delimiter tokens are
returned using the generic :data:`~token.OP` token type. The exact type can be
determined by checking the ``exact_type`` property on the
:term:`named tuple` returned from :func:`tokenize.tokenize`.
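As a small sketch of that distinction, the loop below tokenizes a one-line
expression held in an in-memory buffer and prints, for every generic
:data:`~token.OP` token, the specific name reported by ``exact_type``:

```python
import io
import token
import tokenize

source = b"x = (1 + 2) * 3\n"

# Operators and delimiters all report the generic OP token type;
# exact_type distinguishes them (EQUAL, LPAR, PLUS, ...).
for tok in tokenize.tokenize(io.BytesIO(source).readline):
    if tok.type == token.OP:
        print(tok.string, token.tok_name[tok.exact_type])
```

Running this prints ``= EQUAL``, ``( LPAR``, ``+ PLUS``, ``) RPAR`` and
``* STAR``, one per line.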
.. function:: tokenize(readline)

   The :func:`.tokenize` generator requires one argument, *readline*, which
   must be a callable object which provides the same interface as the
   :meth:`io.IOBase.readline` method of file objects. Each call to the
   function should return one line of input as bytes.

   :func:`.tokenize` determines the source encoding of the file by looking for a
   UTF-8 BOM or encoding cookie, according to :pep:`263`.
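A minimal sketch of the generator in action, using an ``io.BytesIO`` buffer so
that its ``readline`` method stands in for a real file's:

```python
import io
import tokenize

source = b"answer = 42\n"

tokens = list(tokenize.tokenize(io.BytesIO(source).readline))

# The very first token reports the encoding detected for the source.
first = tokens[0]
print(first.type == tokenize.ENCODING, first.string)  # True utf-8

# Each token is a named tuple: (type, string, start, end, line).
for tok in tokens:
    print(tokenize.tok_name[tok.type], repr(tok.string), tok.start, tok.end)
```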
.. function:: generate_tokens(readline)

   Tokenize a source reading unicode strings instead of bytes.

   Like :func:`.tokenize`, the *readline* argument is a callable returning
   a single line of input. However, :func:`generate_tokens` expects *readline*
   to return a str object rather than bytes.

   The result is an iterator yielding named tuples, exactly like
   :func:`.tokenize`. It does not yield an :data:`~token.ENCODING` token.
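Because *readline* must return str here, an ``io.StringIO`` buffer is the
natural in-memory stand-in; note the absence of an ENCODING token:

```python
import io
import tokenize

source = "total = 1 + 2\n"

# generate_tokens() expects readline to return str, so StringIO fits.
tokens = list(tokenize.generate_tokens(io.StringIO(source).readline))

# No ENCODING token appears; the stream opens with the first real token.
print(tokens[0].string)  # total
```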
All constants from the :mod:`token` module are also exported from
:mod:`tokenize`.
.. function:: untokenize(iterable)

   Converts tokens back into Python source code. Another function is provided
   to reverse the tokenization process, which is
   useful for creating tools that tokenize a script, modify the token stream, and
   write back the modified script.

   The *iterable* must return sequences with at least two elements: the token
   type and the token string. The result is
   guaranteed to tokenize back to match the input so that the conversion is
   lossless and round-trips are assured. The guarantee applies only to the
   token type and token string, as the spacing between tokens (column
   positions) may change.

   It returns bytes, encoded using the :data:`~token.ENCODING` token, which
   is the first token sequence output by :func:`.tokenize`. If there is no
   encoding token in the input, it returns a str instead.
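The round-trip guarantee can be checked directly: tokenizing a buffer and
feeding the unmodified token stream back through :func:`untokenize` yields the
original bytes (here the full named tuples preserve column positions too):

```python
import io
import tokenize

source = b"result = value + 1\n"
tokens = list(tokenize.tokenize(io.BytesIO(source).readline))

# The stream starts with an ENCODING token, so untokenize() returns
# bytes encoded with that encoding.
rebuilt = tokenize.untokenize(tokens)
print(rebuilt == source)  # True
```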
.. function:: detect_encoding(readline)

   :func:`.tokenize` needs to detect the encoding of source files it tokenizes. The
   function it uses to do this is available as :func:`detect_encoding`. It is
   used to detect the encoding that should be used to decode a Python source
   file, and requires one argument,
   readline, in the same way as the :func:`.tokenize` generator.

   It will call readline a maximum of twice, and return the encoding used
   (as a string) and a list of any lines (not decoded from bytes) it has
   read in.
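A short sketch with a buffer whose first line carries a :pep:`263` encoding
cookie; note that the cookie name is normalized before being returned:

```python
import io
import tokenize

# A file beginning with a PEP 263 encoding cookie.
source = b"# -*- coding: latin-1 -*-\nname = 'hi'\n"

encoding, lines = tokenize.detect_encoding(io.BytesIO(source).readline)
print(encoding)  # iso-8859-1  (the normalized name for latin-1)
print(lines)     # the raw, undecoded line(s) consumed while detecting
```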
Command-Line Usage
------------------

The :mod:`tokenize` module can be executed as a script from the command line.
It is as simple as:

.. code-block:: sh

   python -m tokenize [-e] [filename.py]

.. program:: tokenize

The following options are accepted:

.. option:: -h, --help

   show this help message and exit

.. option:: -e, --exact

   display token names using the exact type

If :file:`filename.py` is specified its contents are tokenized to stdout.
Otherwise, tokenization is performed on stdin.
Examples
--------

Example of a script rewriter that transforms float literals into Decimal
objects::

    from tokenize import tokenize, untokenize, NUMBER, STRING, NAME, OP
    from io import BytesIO

    def decistmt(s):
        """Substitute Decimals for floats in a string of statements."""
        result = []
        g = tokenize(BytesIO(s.encode('utf-8')).readline)  # tokenize the string
        for toknum, tokval, _, _, _ in g:
            if toknum == NUMBER and '.' in tokval:  # replace NUMBER tokens
                result.extend([
                    (NAME, 'Decimal'),
                    (OP, '('),
                    (STRING, repr(tokval)),
                    (OP, ')')
                ])
            else:
                result.append((toknum, tokval))
        return untokenize(result).decode('utf-8')
Example of tokenizing a file :file:`hello.py` from the command line:

.. code-block:: sh

   $ python -m tokenize hello.py

The exact token type names can be displayed using the :option:`-e` option:

.. code-block:: sh

   $ python -m tokenize -e hello.py
Example of tokenizing a file programmatically, reading unicode strings instead
of bytes with :func:`generate_tokens`::

    import tokenize

    with tokenize.open('hello.py') as f:
        tokens = tokenize.generate_tokens(f.readline)
        for token in tokens:
            print(token)

Or reading bytes directly with :func:`.tokenize`::
    import tokenize

    with open('hello.py', 'rb') as f:
        tokens = tokenize.tokenize(f.readline)
        for token in tokens:
            print(token)