Logging Cookbook
Author:

Vinay Sajip
This page contains a number of recipes related to logging, which have been found useful in the past. For links to tutorial and reference information, please see Other resources.
Using logging in multiple modules
Multiple calls to logging.getLogger('someLogger') return a reference to the same logger object. This is true not only within the same module, but also across modules as long as it is in the same Python interpreter process. It is true for references to the same object; additionally, application code can define and configure a parent logger in one module and create (but not configure) a child logger in a separate module, and all logger calls to the child will pass up to the parent. Here is a main module:
import logging
import auxiliary_module

# create logger with 'spam_application'
logger = logging.getLogger('spam_application')
logger.setLevel(logging.DEBUG)
# create file handler which logs even debug messages
fh = logging.FileHandler('spam.log')
fh.setLevel(logging.DEBUG)
# create console handler with a higher log level
ch = logging.StreamHandler()
ch.setLevel(logging.ERROR)
# create formatter and add it to the handlers
formatter = logging.Formatter('%(asctime)s - %(name)s - %(levelname)s - %(message)s')
fh.setFormatter(formatter)
ch.setFormatter(formatter)
# add the handlers to the logger
logger.addHandler(fh)
logger.addHandler(ch)

logger.info('creating an instance of auxiliary_module.Auxiliary')
a = auxiliary_module.Auxiliary()
logger.info('created an instance of auxiliary_module.Auxiliary')
logger.info('calling auxiliary_module.Auxiliary.do_something')
a.do_something()
logger.info('finished auxiliary_module.Auxiliary.do_something')
logger.info('calling auxiliary_module.some_function()')
auxiliary_module.some_function()
logger.info('done with auxiliary_module.some_function()')
Here is the auxiliary module:
import logging

# create logger
module_logger = logging.getLogger('spam_application.auxiliary')

class Auxiliary:
    def __init__(self):
        self.logger = logging.getLogger('spam_application.auxiliary.Auxiliary')
        self.logger.info('creating an instance of Auxiliary')

    def do_something(self):
        self.logger.info('doing something')
        a = 1 + 1
        self.logger.info('done doing something')

def some_function():
    module_logger.info('received a call to "some_function"')
The output looks like this:
2005-03-23 23:47:11,663 - spam_application - INFO -
   creating an instance of auxiliary_module.Auxiliary
2005-03-23 23:47:11,665 - spam_application.auxiliary.Auxiliary - INFO -
   creating an instance of Auxiliary
2005-03-23 23:47:11,665 - spam_application - INFO -
   created an instance of auxiliary_module.Auxiliary
2005-03-23 23:47:11,668 - spam_application - INFO -
   calling auxiliary_module.Auxiliary.do_something
2005-03-23 23:47:11,668 - spam_application.auxiliary.Auxiliary - INFO -
   doing something
2005-03-23 23:47:11,669 - spam_application.auxiliary.Auxiliary - INFO -
   done doing something
2005-03-23 23:47:11,670 - spam_application - INFO -
   finished auxiliary_module.Auxiliary.do_something
2005-03-23 23:47:11,671 - spam_application - INFO -
   calling auxiliary_module.some_function()
2005-03-23 23:47:11,672 - spam_application.auxiliary - INFO -
   received a call to 'some_function'
2005-03-23 23:47:11,673 - spam_application - INFO -
   done with auxiliary_module.some_function()
Logging from multiple threads
Logging from multiple threads requires no special effort. The following example shows logging from the main (initial) thread and another thread:
import logging
import threading
import time

def worker(arg):
    while not arg['stop']:
        logging.debug('Hi from myfunc')
        time.sleep(0.5)

def main():
    logging.basicConfig(level=logging.DEBUG, format='%(relativeCreated)6d %(threadName)s %(message)s')
    info = {'stop': False}
    thread = threading.Thread(target=worker, args=(info,))
    thread.start()
    while True:
        try:
            logging.debug('Hello from main')
            time.sleep(0.75)
        except KeyboardInterrupt:
            info['stop'] = True
            break
    thread.join()

if __name__ == '__main__':
    main()
When run, the script should print something like the following:
     0 Thread-1 Hi from myfunc
     3 MainThread Hello from main
   505 Thread-1 Hi from myfunc
   755 MainThread Hello from main
  1007 Thread-1 Hi from myfunc
  1507 MainThread Hello from main
  1508 Thread-1 Hi from myfunc
  2010 Thread-1 Hi from myfunc
  2258 MainThread Hello from main
  2512 Thread-1 Hi from myfunc
  3009 MainThread Hello from main
  3013 Thread-1 Hi from myfunc
  3515 Thread-1 Hi from myfunc
  3761 MainThread Hello from main
  4017 Thread-1 Hi from myfunc
  4513 MainThread Hello from main
  4518 Thread-1 Hi from myfunc
This shows the logging output interspersed as one might expect. This approach works for more threads than shown here, of course.
Multiple handlers and formatters
Loggers are plain Python objects. The addHandler() method has no minimum or maximum quota for the number of handlers you may add. Sometimes it will be beneficial for an application to log all messages of all severities to a text file while simultaneously logging errors or above to the console. To set this up, simply configure the appropriate handlers. The logging calls in the application code will remain unchanged. Here is a slight modification to the previous simple module-based configuration example:
import logging

logger = logging.getLogger('simple_example')
logger.setLevel(logging.DEBUG)
# create file handler which logs even debug messages
fh = logging.FileHandler('spam.log')
fh.setLevel(logging.DEBUG)
# create console handler with a higher log level
ch = logging.StreamHandler()
ch.setLevel(logging.ERROR)
# create formatter and add it to the handlers
formatter = logging.Formatter('%(asctime)s - %(name)s - %(levelname)s - %(message)s')
ch.setFormatter(formatter)
fh.setFormatter(formatter)
# add the handlers to logger
logger.addHandler(ch)
logger.addHandler(fh)

# 'application' code
logger.debug('debug message')
logger.info('info message')
logger.warning('warn message')
logger.error('error message')
logger.critical('critical message')
Notice that the "application" code does not care about multiple handlers. All that changed was the addition and configuration of a new handler named fh.
The ability to create new handlers with higher- or lower-severity filters can be very helpful when writing and testing an application. Instead of using many print statements for debugging, use logger.debug: unlike the print statements, which you will have to delete or comment out later, the logger.debug statements can remain intact in the source code and stay dormant until you need them again. At that time, the only change that needs to happen is to modify the severity level of the logger and/or handler.
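For example, here is a minimal, self-contained sketch of that toggle; the logger name and the message text are only illustrative:

import logging

logger = logging.getLogger('simple_example')
logger.setLevel(logging.DEBUG)
ch = logging.StreamHandler()
ch.setLevel(logging.ERROR)          # debug output is dormant on the console
logger.addHandler(ch)

logger.debug('entering expensive_calculation')   # silent: below the handler's level

# To debug again, lower the handler's threshold instead of editing or
# removing the logging calls themselves:
ch.setLevel(logging.DEBUG)
logger.debug('entering expensive_calculation')   # now printed to the console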
Logging to multiple destinations
Let's say you want to log to console and file with different message formats and in differing circumstances. Say you want to log messages with levels of DEBUG and higher to file, and those messages at level INFO and higher to the console. Let's also assume that the file should contain timestamps, but the console messages should not. Here's how you can achieve this:
import logging

# set up logging to file - see previous section for more details
logging.basicConfig(level=logging.DEBUG,
                    format='%(asctime)s %(name)-12s %(levelname)-8s %(message)s',
                    datefmt='%m-%d %H:%M',
                    filename='/tmp/myapp.log',
                    filemode='w')
# define a Handler which writes INFO messages or higher to the sys.stderr
console = logging.StreamHandler()
console.setLevel(logging.INFO)
# set a format which is simpler for console use
formatter = logging.Formatter('%(name)-12s: %(levelname)-8s %(message)s')
# tell the handler to use this format
console.setFormatter(formatter)
# add the handler to the root logger
logging.getLogger('').addHandler(console)

# Now, we can log to the root logger, or any other logger. First the root...
logging.info('Jackdaws love my big sphinx of quartz.')

# Now, define a couple of other loggers which might represent areas in your
# application:

logger1 = logging.getLogger('myapp.area1')
logger2 = logging.getLogger('myapp.area2')

logger1.debug('Quick zephyrs blow, vexing daft Jim.')
logger1.info('How quickly daft jumping zebras vex.')
logger2.warning('Jail zesty vixen who grabbed pay from quack.')
logger2.error('The five boxing wizards jump quickly.')
When you run this, on the console you will see
root        : INFO     Jackdaws love my big sphinx of quartz.
myapp.area1 : INFO     How quickly daft jumping zebras vex.
myapp.area2 : WARNING  Jail zesty vixen who grabbed pay from quack.
myapp.area2 : ERROR    The five boxing wizards jump quickly.
and in the file you will see something like
10-22 22:19 root         INFO     Jackdaws love my big sphinx of quartz.
10-22 22:19 myapp.area1  DEBUG    Quick zephyrs blow, vexing daft Jim.
10-22 22:19 myapp.area1  INFO     How quickly daft jumping zebras vex.
10-22 22:19 myapp.area2  WARNING  Jail zesty vixen who grabbed pay from quack.
10-22 22:19 myapp.area2  ERROR    The five boxing wizards jump quickly.
As you can see, the DEBUG message only shows up in the file. The other messages are sent to both destinations.
This example uses console and file handlers, but you can use any number and combination of handlers you choose.
Note that the above choice of log filename /tmp/myapp.log implies use of a standard location for temporary files on POSIX systems. On Windows, you may need to choose a different directory name for the log - just ensure that the directory exists and that you have the permissions to create and update files in it.
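If portability matters, one simple option (a sketch only, reusing the configuration above) is to derive the log path from the platform's temporary directory rather than hard-coding /tmp; the myapp.log filename is just illustrative:

import logging
import os
import tempfile

# Put the log file in the platform's temporary directory (e.g. /tmp on
# POSIX, %TEMP% on Windows) instead of a POSIX-only path.
logfile = os.path.join(tempfile.gettempdir(), 'myapp.log')

logging.basicConfig(level=logging.DEBUG,
                    format='%(asctime)s %(name)-12s %(levelname)-8s %(message)s',
                    datefmt='%m-%d %H:%M',
                    filename=logfile,
                    filemode='w')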
Custom handling of levels
Sometimes, you might want to do something slightly different from the standard handling of levels in handlers, where all levels above a threshold get processed by a handler. To do this, you need to use filters. Let’s look at a scenario where you want to arrange things as follows:
- Send messages of severity INFO and WARNING to sys.stdout
- Send messages of severity ERROR and above to sys.stderr
- Send messages of severity DEBUG and above to the file app.log
Suppose you configure logging with the following JSON:
{"version": 1,"disable_existing_loggers": false,"formatters": {"simple": {"format": "%(levelname)-8s - %(message)s"}},"handlers": {"stdout": {"class": "logging.StreamHandler","level": "INFO","formatter": "simple","stream": "ext://sys.stdout",},"stderr": {"class": "logging.StreamHandler","level": "ERROR","formatter": "simple","stream": "ext://sys.stderr"},"file": {"class": "logging.FileHandler","formatter": "simple","filename": "app.log","mode": "w"}},"root": {"level": "DEBUG","handlers": ["stderr","stdout","file"]}}
This configuration does almost what we want, except that sys.stdout would show messages of severity ERROR and above as well as INFO and WARNING messages. To prevent this, we can set up a filter which excludes those messages and add it to the relevant handler. This can be configured by adding a filters section parallel to formatters and handlers:
"filters": {"warnings_and_below": {"()" : "__main__.filter_maker","level": "WARNING"}}
and changing the section on the stdout handler to add it:
"stdout": {"class": "logging.StreamHandler","level": "INFO","formatter": "simple","stream": "ext://sys.stdout","filters": ["warnings_and_below"]}
A filter is just a function, so we can define the filter_maker (a factory function) as follows:
def filter_maker(level):
    level = getattr(logging, level)

    def filter(record):
        return record.levelno <= level

    return filter
This converts the string argument passed in to a numeric level, and returns a function which only returns True if the level of the passed in record is at or below the specified level. Note that in this example I have defined the filter_maker in a test script main.py that I run from the command line, so its module will be __main__ - hence the __main__.filter_maker in the filter configuration. You will need to change that if you define it in a different module.
With the filter added, we can run main.py, which in full is:
import json
import logging
import logging.config

CONFIG = '''
{
    "version": 1,
    "disable_existing_loggers": false,
    "formatters": {
        "simple": {
            "format": "%(levelname)-8s - %(message)s"
        }
    },
    "filters": {
        "warnings_and_below": {
            "()" : "__main__.filter_maker",
            "level": "WARNING"
        }
    },
    "handlers": {
        "stdout": {
            "class": "logging.StreamHandler",
            "level": "INFO",
            "formatter": "simple",
            "stream": "ext://sys.stdout",
            "filters": ["warnings_and_below"]
        },
        "stderr": {
            "class": "logging.StreamHandler",
            "level": "ERROR",
            "formatter": "simple",
            "stream": "ext://sys.stderr"
        },
        "file": {
            "class": "logging.FileHandler",
            "formatter": "simple",
            "filename": "app.log",
            "mode": "w"
        }
    },
    "root": {
        "level": "DEBUG",
        "handlers": [
            "stderr",
            "stdout",
            "file"
        ]
    }
}
'''

def filter_maker(level):
    level = getattr(logging, level)

    def filter(record):
        return record.levelno <= level

    return filter

logging.config.dictConfig(json.loads(CONFIG))
logging.debug('A DEBUG message')
logging.info('An INFO message')
logging.warning('A WARNING message')
logging.error('An ERROR message')
logging.critical('A CRITICAL message')
And after running it like this:
python main.py 2>stderr.log >stdout.log
We can see the results are as expected:
$ more *.log
::::::::::::::
app.log
::::::::::::::
DEBUG    - A DEBUG message
INFO     - An INFO message
WARNING  - A WARNING message
ERROR    - An ERROR message
CRITICAL - A CRITICAL message
::::::::::::::
stderr.log
::::::::::::::
ERROR    - An ERROR message
CRITICAL - A CRITICAL message
::::::::::::::
stdout.log
::::::::::::::
INFO     - An INFO message
WARNING  - A WARNING message
Configuration server example
Here is an example of a module using the logging configuration server:
import logging
import logging.config
import time
import os

# read initial config file
logging.config.fileConfig('logging.conf')

# create and start listener on port 9999
t = logging.config.listen(9999)
t.start()

logger = logging.getLogger('simpleExample')

try:
    # loop through logging calls to see the difference
    # new configurations make, until Ctrl+C is pressed
    while True:
        logger.debug('debug message')
        logger.info('info message')
        logger.warning('warn message')
        logger.error('error message')
        logger.critical('critical message')
        time.sleep(5)
except KeyboardInterrupt:
    # cleanup
    logging.config.stopListening()
    t.join()
And here is a script that takes a filename and sends that file to the server, properly preceded with the binary-encoded length, as the new logging configuration:
#!/usr/bin/env python
import socket, sys, struct

with open(sys.argv[1], 'rb') as f:
    data_to_send = f.read()

HOST = 'localhost'
PORT = 9999
s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
print('connecting...')
s.connect((HOST, PORT))
print('sending config...')
s.send(struct.pack('>L', len(data_to_send)))
s.send(data_to_send)
s.close()
print('complete')
Dealing with handlers that block
Sometimes you have to get your logging handlers to do their work without blocking the thread you're logging from. This is common in web applications, though of course it also occurs in other scenarios.
A common culprit which demonstrates sluggish behaviour is the SMTPHandler: sending emails can take a long time, for a number of reasons outside the developer's control (for example, a poorly performing mail or network infrastructure). But almost any network-based handler can block: even a SocketHandler operation may do a DNS query under the hood which is too slow (and this query can be deep in the socket library code, below the Python layer, and outside your control).
One solution is to use a two-part approach. For the first part, attach only a QueueHandler to those loggers which are accessed from performance-critical threads. They simply write to their queue, which can be sized to a large enough capacity or initialized with no upper bound to its size. The write to the queue will typically be accepted quickly, though you will probably need to catch the queue.Full exception as a precaution in your code. If you are a library developer who has performance-critical threads in your code, be sure to document this (together with a suggestion to attach only QueueHandlers to your loggers) for the benefit of other developers who will use your code.
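As one possible sketch of such a precaution (the DroppingQueueHandler subclass below is hypothetical, not part of the standard library), you can bound the queue and silently drop records when it is full, so the logging thread never blocks:

import logging
import queue
from logging.handlers import QueueHandler

class DroppingQueueHandler(QueueHandler):
    """Hypothetical QueueHandler variant: if the bounded queue is full,
    discard the record instead of blocking or raising in the caller."""
    def enqueue(self, record):
        try:
            self.queue.put_nowait(record)
        except queue.Full:
            pass  # deliberately drop the record

que = queue.Queue(maxsize=1000)   # bounded, unlike the unbounded queue used below
logging.getLogger().addHandler(DroppingQueueHandler(que))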
The second part of the solution is QueueListener, which has been designed as the counterpart to QueueHandler. A QueueListener is very simple: it's passed a queue and some handlers, and it fires up an internal thread which listens to its queue for LogRecords sent from QueueHandlers (or any other source of LogRecords, for that matter). The LogRecords are removed from the queue and passed to the handlers for processing.
The advantage of having a separate QueueListener class is that you can use the same instance to service multiple QueueHandlers. This is more resource-friendly than, say, having threaded versions of the existing handler classes, which would eat up one thread per handler for no particular benefit.
An example of using these two classes follows (imports omitted):
que = queue.Queue(-1)  # no limit on size
queue_handler = QueueHandler(que)
handler = logging.StreamHandler()
listener = QueueListener(que, handler)
root = logging.getLogger()
root.addHandler(queue_handler)
formatter = logging.Formatter('%(threadName)s: %(message)s')
handler.setFormatter(formatter)
listener.start()
# The log output will display the thread which generated
# the event (the main thread) rather than the internal
# thread which monitors the internal queue. This is what
# you want to happen.
root.warning('Look out!')
listener.stop()
which, when run, will produce:
MainThread: Look out!
Note
Although the earlier discussion wasn’t specifically talking about async code, but rather about slow logging handlers, it should be noted that when logging from async code, network and even file handlers could lead to problems (blocking the event loop) because some logging is done from asyncio internals. It might be best, if any async code is used in an application, to use the above approach for logging, so that any blocking code runs only in the QueueListener thread.
Changed in version 3.5: Prior to Python 3.5, the QueueListener always passed every message received from the queue to every handler it was initialized with. (This was because it was assumed that level filtering was all done on the other side, where the queue is filled.) From 3.5 onwards, this behaviour can be changed by passing a keyword argument respect_handler_level=True to the listener's constructor. When this is done, the listener compares the level of each message with the handler's level, and only passes a message to a handler if it's appropriate to do so.
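A brief sketch of that keyword argument (the handler choices and the errors.log filename here are only illustrative):

import logging
import queue
from logging.handlers import QueueHandler, QueueListener

que = queue.Queue(-1)
console = logging.StreamHandler()
console.setLevel(logging.WARNING)
errfile = logging.FileHandler('errors.log')
errfile.setLevel(logging.ERROR)

# With respect_handler_level=True the listener checks each record's level
# against each handler's level before delivering it; without the flag,
# every record would be passed to both handlers regardless of level.
listener = QueueListener(que, console, errfile, respect_handler_level=True)

root = logging.getLogger()
root.setLevel(logging.DEBUG)
root.addHandler(QueueHandler(que))
listener.start()
root.warning('shown on the console only')
root.error('shown on the console and written to errors.log')
listener.stop()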
Sending and receiving logging events across a network
Let's say you want to send logging events across a network, and handle them at the receiving end. A simple way of doing this is attaching a SocketHandler instance to the root logger at the sending end:
import logging, logging.handlers

rootLogger = logging.getLogger('')
rootLogger.setLevel(logging.DEBUG)
socketHandler = logging.handlers.SocketHandler('localhost',
                    logging.handlers.DEFAULT_TCP_LOGGING_PORT)
# don't bother with a formatter, since a socket handler sends the event as
# an unformatted pickle
rootLogger.addHandler(socketHandler)

# Now, we can log to the root logger, or any other logger. First the root...
logging.info('Jackdaws love my big sphinx of quartz.')

# Now, define a couple of other loggers which might represent areas in your
# application:

logger1 = logging.getLogger('myapp.area1')
logger2 = logging.getLogger('myapp.area2')

logger1.debug('Quick zephyrs blow, vexing daft Jim.')
logger1.info('How quickly daft jumping zebras vex.')
logger2.warning('Jail zesty vixen who grabbed pay from quack.')
logger2.error('The five boxing wizards jump quickly.')
At the receiving end, you can set up a receiver using the socketserver module. Here is a basic working example:
import pickle
import logging
import logging.handlers
import socketserver
import struct


class LogRecordStreamHandler(socketserver.StreamRequestHandler):
    """Handler for a streaming logging request.

    This basically logs the record using whatever logging policy is
    configured locally.
    """

    def handle(self):
        """
        Handle multiple requests - each expected to be a 4-byte length,
        followed by the LogRecord in pickle format. Logs the record
        according to whatever policy is configured locally.
        """
        while True:
            chunk = self.connection.recv(4)
            if len(chunk) < 4:
                break
            slen = struct.unpack('>L', chunk)[0]
            chunk = self.connection.recv(slen)
            while len(chunk) < slen:
                chunk = chunk + self.connection.recv(slen - len(chunk))
            obj = self.unPickle(chunk)
            record = logging.makeLogRecord(obj)
            self.handleLogRecord(record)

    def unPickle(self, data):
        return pickle.loads(data)

    def handleLogRecord(self, record):
        # if a name is specified, we use the named logger rather than the one
        # implied by the record.
        if self.server.logname is not None:
            name = self.server.logname
        else:
            name = record.name
        logger = logging.getLogger(name)
        # N.B. EVERY record gets logged. This is because Logger.handle
        # is normally called AFTER logger-level filtering. If you want
        # to do filtering, do it at the client end to save wasting
        # cycles and network bandwidth!
        logger.handle(record)

class LogRecordSocketReceiver(socketserver.ThreadingTCPServer):
    """
    Simple TCP socket-based logging receiver suitable for testing.
    """

    allow_reuse_address = True

    def __init__(self, host='localhost',
                 port=logging.handlers.DEFAULT_TCP_LOGGING_PORT,
                 handler=LogRecordStreamHandler):
        socketserver.ThreadingTCPServer.__init__(self, (host, port), handler)
        self.abort = 0
        self.timeout = 1
        self.logname = None

    def serve_until_stopped(self):
        import select
        abort = 0
        while not abort:
            rd, wr, ex = select.select([self.socket.fileno()],
                                       [], [],
                                       self.timeout)
            if rd:
                self.handle_request()
            abort = self.abort

def main():
    logging.basicConfig(
        format='%(relativeCreated)5d %(name)-15s %(levelname)-8s %(message)s')
    tcpserver = LogRecordSocketReceiver()
    print('About to start TCP server...')
    tcpserver.serve_until_stopped()

if __name__ == '__main__':
    main()