[Python](https://www.python.org/) » zh\_CN 3.7.3 [Documentation](../index.xhtml) » [The Python Standard Library](index.xhtml) » [Python Language Services](language.xhtml)

# [`tabnanny`](#module-tabnanny "tabnanny: Tool for detecting white space related problems in Python source files in a directory tree.") --- Detection of ambiguous indentation

**Source code:** [Lib/tabnanny.py](https://github.com/python/cpython/tree/3.7/Lib/tabnanny.py)

- - - - - -

For the time being this module is intended to be called as a script. However it is possible to import it into an IDE and use the [`check()`](#tabnanny.check "tabnanny.check") function described below.

Note

The API provided by this module is likely to change in future releases; such changes may not be backward compatible.

`tabnanny.check(file_or_dir)`

If *file\_or\_dir* is a directory and not a symbolic link, then recursively descend the directory tree named by *file\_or\_dir*, checking all `.py` files along the way. If *file\_or\_dir* is an ordinary Python source file, it is checked for whitespace related problems. The diagnostic messages are written to standard output using the [`print()`](functions.xhtml#print "print") function.

`tabnanny.verbose`

Flag indicating whether to print verbose messages. This is incremented by the `-v` option if called as a script.

`tabnanny.filename_only`

Flag indicating whether to print only the filenames of files containing whitespace related problems. This is set to true by the `-q` option if called as a script.

*exception* `tabnanny.NannyNag`

Raised by [`process_tokens()`](#tabnanny.process_tokens "tabnanny.process_tokens") if detecting an ambiguous indent. Captured and handled in [`check()`](#tabnanny.check "tabnanny.check").
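As a minimal sketch of using `check()` from code rather than as a script: the helper below writes a source string to a temporary file, runs `tabnanny.check()` on it, and captures whatever diagnostics were printed to standard output. The helper name `nanny_report` and both source strings are made up for this illustration; the mixed tab/space example is ambiguous because a tab on one line and eight spaces on the next only line up at a tab size of 8.

```python
import os
import tempfile
import tabnanny
from contextlib import redirect_stdout
from io import StringIO

# Indentation that only agrees at a tab size of 8: line 2 uses a tab,
# line 3 uses eight spaces at what should be the same indent level.
ambiguous_src = "def f():\n\tx = 1\n        y = 2\n"
clean_src = "def f():\n    x = 1\n    y = 2\n"

def nanny_report(source):
    """Run tabnanny.check() on `source` and return what it printed (hypothetical helper)."""
    with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as tmp:
        tmp.write(source)
        path = tmp.name
    try:
        buf = StringIO()
        with redirect_stdout(buf):  # check() writes diagnostics to stdout
            tabnanny.check(path)
        return buf.getvalue()
    finally:
        os.unlink(path)

print(nanny_report(clean_src) == "")       # a clean file produces no output
print(nanny_report(ambiguous_src) != "")   # an ambiguous file produces a report line
```

A clean file yields no output at the default verbosity, so an empty return value can serve as a pass/fail signal.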
`tabnanny.process_tokens(tokens)`

This function is used by [`check()`](#tabnanny.check "tabnanny.check") to process tokens generated by the [`tokenize`](tokenize.xhtml#module-tokenize "tokenize: Lexical scanner for Python source code.") module.

See also

Module [`tokenize`](tokenize.xhtml#module-tokenize "tokenize: Lexical scanner for Python source code."): Lexical scanner for Python source code.

© [Copyright](../copyright.xhtml) 2001-2019, Python Software Foundation. The Python Software Foundation is a non-profit corporation. [Please donate.](https://www.python.org/psf/donations/) Last updated on May 21, 2019. [Found a bug](../bugs.xhtml)? Created using [Sphinx](http://sphinx.pocoo.org/) 1.8.4.
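The `process_tokens()` hook described above can also be driven directly with a token stream from `tokenize.generate_tokens()`; unlike `check()`, it lets the `NannyNag` exception propagate, so the caller can inspect it. A small sketch, using a made-up source string with a tab/eight-space mismatch:

```python
import io
import tokenize
import tabnanny

# Made-up example source: a tab indent followed by an eight-space indent
# at the same logical level is ambiguous at tab sizes other than 8.
src = "def f():\n\tx = 1\n        y = 2\n"

tokens = tokenize.generate_tokens(io.StringIO(src).readline)
try:
    tabnanny.process_tokens(tokens)
    result = "clean"
except tabnanny.NannyNag as nag:
    # NannyNag carries the offending line number and a human-readable message.
    result = "ambiguous indent at line %d: %s" % (nag.get_lineno(), nag.get_msg())

print(result)
```

Here `check()`'s file handling and error reporting are bypassed, which is useful when the source lives in memory rather than on disk.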