.. _logging-cookbook:

================
Logging Cookbook
================

:Author: Vinay Sajip <vinay_sajip at red-dove dot com>

This page contains a number of recipes related to logging, which have been found
useful in the past.

.. currentmodule:: logging

Using logging in multiple modules
---------------------------------

Multiple calls to ``logging.getLogger('someLogger')`` return a reference to the
same logger object.  This is true not only within the same module, but also
across modules as long as it is in the same Python interpreter process.  It is
true for references to the same object; additionally, application code can
define and configure a parent logger in one module and create (but not
configure) a child logger in a separate module, and all logger calls to the
child will pass up to the parent.  Here is a main module::

    import logging
    import auxiliary_module

    # create logger with 'spam_application'
    logger = logging.getLogger('spam_application')
    logger.setLevel(logging.DEBUG)
    # create file handler which logs even debug messages
    fh = logging.FileHandler('spam.log')
    fh.setLevel(logging.DEBUG)
    # create console handler with a higher log level
    ch = logging.StreamHandler()
    ch.setLevel(logging.ERROR)
    # create formatter and add it to the handlers
    formatter = logging.Formatter('%(asctime)s - %(name)s - %(levelname)s - %(message)s')
    fh.setFormatter(formatter)
    ch.setFormatter(formatter)
    # add the handlers to the logger
    logger.addHandler(fh)
    logger.addHandler(ch)

    logger.info('creating an instance of auxiliary_module.Auxiliary')
    a = auxiliary_module.Auxiliary()
    logger.info('created an instance of auxiliary_module.Auxiliary')
    logger.info('calling auxiliary_module.Auxiliary.do_something')
    a.do_something()
    logger.info('finished auxiliary_module.Auxiliary.do_something')
    logger.info('calling auxiliary_module.some_function()')
    auxiliary_module.some_function()
    logger.info('done with auxiliary_module.some_function()')

Here is the auxiliary module::

    import logging

    # create logger
    module_logger = logging.getLogger('spam_application.auxiliary')

    class Auxiliary:
        def __init__(self):
            self.logger = logging.getLogger('spam_application.auxiliary.Auxiliary')
            self.logger.info('creating an instance of Auxiliary')

        def do_something(self):
            self.logger.info('doing something')
            a = 1 + 1
            self.logger.info('done doing something')

    def some_function():
        module_logger.info('received a call to "some_function"')

The output looks like this::

    2005-03-23 23:47:11,663 - spam_application - INFO -
       creating an instance of auxiliary_module.Auxiliary
    2005-03-23 23:47:11,665 - spam_application.auxiliary.Auxiliary - INFO -
       creating an instance of Auxiliary
    2005-03-23 23:47:11,665 - spam_application - INFO -
       created an instance of auxiliary_module.Auxiliary
    2005-03-23 23:47:11,668 - spam_application - INFO -
       calling auxiliary_module.Auxiliary.do_something
    2005-03-23 23:47:11,668 - spam_application.auxiliary.Auxiliary - INFO -
       doing something
    2005-03-23 23:47:11,669 - spam_application.auxiliary.Auxiliary - INFO -
       done doing something
    2005-03-23 23:47:11,670 - spam_application - INFO -
       finished auxiliary_module.Auxiliary.do_something
    2005-03-23 23:47:11,671 - spam_application - INFO -
       calling auxiliary_module.some_function()
    2005-03-23 23:47:11,672 - spam_application.auxiliary - INFO -
       received a call to "some_function"
    2005-03-23 23:47:11,673 - spam_application - INFO -
       done with auxiliary_module.some_function()

Logging from multiple threads
-----------------------------

Logging from multiple threads requires no special effort. The following example
shows logging from the main (initial) thread and another thread::

    import logging
    import threading
    import time

    def worker(arg):
        while not arg['stop']:
            logging.debug('Hi from myfunc')
            time.sleep(0.5)

    def main():
        logging.basicConfig(level=logging.DEBUG, format='%(relativeCreated)6d %(threadName)s %(message)s')
        info = {'stop': False}
        thread = threading.Thread(target=worker, args=(info,))
        thread.start()
        while True:
            try:
                logging.debug('Hello from main')
                time.sleep(0.75)
            except KeyboardInterrupt:
                info['stop'] = True
                break
        thread.join()

    if __name__ == '__main__':
        main()

When run, the script should print something like the following::

     0 Thread-1 Hi from myfunc
     3 MainThread Hello from main
   505 Thread-1 Hi from myfunc
   755 MainThread Hello from main
  1007 Thread-1 Hi from myfunc
  1507 MainThread Hello from main
  1508 Thread-1 Hi from myfunc
  2010 Thread-1 Hi from myfunc
  2258 MainThread Hello from main
  2512 Thread-1 Hi from myfunc
  3009 MainThread Hello from main
  3013 Thread-1 Hi from myfunc
  3515 Thread-1 Hi from myfunc
  3761 MainThread Hello from main
  4017 Thread-1 Hi from myfunc
  4513 MainThread Hello from main
  4518 Thread-1 Hi from myfunc

This shows the logging output interspersed as one might expect. This approach
works for more threads than shown here, of course.

Multiple handlers and formatters
--------------------------------

Loggers are plain Python objects.  The :meth:`~Logger.addHandler` method has no
minimum or maximum quota for the number of handlers you may add.  Sometimes it
will be beneficial for an application to log all messages of all severities to a
text file while simultaneously logging errors or above to the console.  To set
this up, simply configure the appropriate handlers.  The logging calls in the
application code will remain unchanged.  Here is a slight modification to the
previous simple module-based configuration example::

    import logging

    logger = logging.getLogger('simple_example')
    logger.setLevel(logging.DEBUG)
    # create file handler which logs even debug messages
    fh = logging.FileHandler('spam.log')
    fh.setLevel(logging.DEBUG)
    # create console handler with a higher log level
    ch = logging.StreamHandler()
    ch.setLevel(logging.ERROR)
    # create formatter and add it to the handlers
    formatter = logging.Formatter('%(asctime)s - %(name)s - %(levelname)s - %(message)s')
    ch.setFormatter(formatter)
    fh.setFormatter(formatter)
    # add the handlers to logger
    logger.addHandler(ch)
    logger.addHandler(fh)

    # 'application' code
    logger.debug('debug message')
    logger.info('info message')
    logger.warning('warn message')
    logger.error('error message')
    logger.critical('critical message')

Notice that the 'application' code does not care about multiple handlers.  All
that changed was the addition and configuration of a new handler named *fh*.

The ability to create new handlers with higher- or lower-severity filters can be
very helpful when writing and testing an application.  Instead of using many
``print`` statements for debugging, use ``logger.debug``: unlike the print
statements, which you will have to delete or comment out later, the
``logger.debug`` statements can remain intact in the source code and stay
dormant until you need them again.  At that time, the only change needed is to
modify the severity level of the logger and/or handler to ``DEBUG``.

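For example, to turn the console output from the snippet above up to full
verbosity, only the handler threshold needs to change (a minimal sketch,
assuming the ``ch`` handler from the example above is still in scope)::

    # lower the console handler's threshold so that the previously
    # dormant debug and info messages now reach the console too
    ch.setLevel(logging.DEBUG)
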
.. _multiple-destinations:

Logging to multiple destinations
--------------------------------

Let's say you want to log to console and file with different message formats and
in differing circumstances. Say you want to log messages with levels of DEBUG
and higher to file, and those messages at level INFO and higher to the console.
Let's also assume that the file should contain timestamps, but the console
messages should not. Here's how you can achieve this::

   import logging

   # set up logging to file - see previous section for more details
   logging.basicConfig(level=logging.DEBUG,
                       format='%(asctime)s %(name)-12s %(levelname)-8s %(message)s',
                       datefmt='%m-%d %H:%M',
                       filename='/tmp/myapp.log',
                       filemode='w')
   # define a Handler which writes INFO messages or higher to the sys.stderr
   console = logging.StreamHandler()
   console.setLevel(logging.INFO)
   # set a format which is simpler for console use
   formatter = logging.Formatter('%(name)-12s: %(levelname)-8s %(message)s')
   # tell the handler to use this format
   console.setFormatter(formatter)
   # add the handler to the root logger
   logging.getLogger('').addHandler(console)

   # Now, we can log to the root logger, or any other logger. First the root...
   logging.info('Jackdaws love my big sphinx of quartz.')

   # Now, define a couple of other loggers which might represent areas in your
   # application:

   logger1 = logging.getLogger('myapp.area1')
   logger2 = logging.getLogger('myapp.area2')

   logger1.debug('Quick zephyrs blow, vexing daft Jim.')
   logger1.info('How quickly daft jumping zebras vex.')
   logger2.warning('Jail zesty vixen who grabbed pay from quack.')
   logger2.error('The five boxing wizards jump quickly.')

When you run this, on the console you will see ::

   root        : INFO     Jackdaws love my big sphinx of quartz.
   myapp.area1 : INFO     How quickly daft jumping zebras vex.
   myapp.area2 : WARNING  Jail zesty vixen who grabbed pay from quack.
   myapp.area2 : ERROR    The five boxing wizards jump quickly.

and in the file you will see something like ::

   10-22 22:19 root         INFO     Jackdaws love my big sphinx of quartz.
   10-22 22:19 myapp.area1  DEBUG    Quick zephyrs blow, vexing daft Jim.
   10-22 22:19 myapp.area1  INFO     How quickly daft jumping zebras vex.
   10-22 22:19 myapp.area2  WARNING  Jail zesty vixen who grabbed pay from quack.
   10-22 22:19 myapp.area2  ERROR    The five boxing wizards jump quickly.

As you can see, the DEBUG message only shows up in the file. The other messages
are sent to both destinations.

This example uses console and file handlers, but you can use any number and
combination of handlers you choose.

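For instance, a third destination could capture only errors in a size-bounded
file alongside the two destinations above. A minimal sketch (the filename and
limits here are illustrative assumptions)::

   import logging.handlers

   errors = logging.handlers.RotatingFileHandler('errors.log',
                                                 maxBytes=100000, backupCount=3)
   errors.setLevel(logging.ERROR)
   errors.setFormatter(logging.Formatter('%(asctime)s %(name)s %(message)s'))
   logging.getLogger('').addHandler(errors)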

Configuration server example
----------------------------

Here is an example of a module using the logging configuration server::

    import logging
    import logging.config
    import time
    import os

    # read initial config file
    logging.config.fileConfig('logging.conf')

    # create and start listener on port 9999
    t = logging.config.listen(9999)
    t.start()

    logger = logging.getLogger('simpleExample')

    try:
        # loop through logging calls to see the difference
        # new configurations make, until Ctrl+C is pressed
        while True:
            logger.debug('debug message')
            logger.info('info message')
            logger.warning('warn message')
            logger.error('error message')
            logger.critical('critical message')
            time.sleep(5)
    except KeyboardInterrupt:
        # cleanup
        logging.config.stopListening()
        t.join()

And here is a script that takes a filename and sends that file to the server,
properly preceded with the binary-encoded length, as the new logging
configuration::

    #!/usr/bin/env python
    import socket, sys, struct

    with open(sys.argv[1], 'rb') as f:
        data_to_send = f.read()

    HOST = 'localhost'
    PORT = 9999
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    print('connecting...')
    s.connect((HOST, PORT))
    print('sending config...')
    s.sendall(struct.pack('>L', len(data_to_send)))
    s.sendall(data_to_send)
    s.close()
    print('complete')

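For reference, a configuration file in the :func:`fileConfig` format, which
could serve both as the initial ``logging.conf`` and as a file sent by the
script above, might look like the following sketch (the console handler and
formatter chosen here for the ``simpleExample`` logger are illustrative
assumptions)::

    [loggers]
    keys=root,simpleExample

    [handlers]
    keys=consoleHandler

    [formatters]
    keys=simpleFormatter

    [logger_root]
    level=DEBUG
    handlers=consoleHandler

    [logger_simpleExample]
    level=DEBUG
    handlers=consoleHandler
    qualname=simpleExample
    propagate=0

    [handler_consoleHandler]
    class=StreamHandler
    level=DEBUG
    formatter=simpleFormatter
    args=(sys.stdout,)

    [formatter_simpleFormatter]
    format=%(asctime)s - %(name)s - %(levelname)s - %(message)s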

.. _network-logging:

Sending and receiving logging events across a network
-----------------------------------------------------

Let's say you want to send logging events across a network, and handle them at
the receiving end. A simple way of doing this is attaching a
:class:`SocketHandler` instance to the root logger at the sending end::

   import logging, logging.handlers

   rootLogger = logging.getLogger('')
   rootLogger.setLevel(logging.DEBUG)
   socketHandler = logging.handlers.SocketHandler('localhost',
                       logging.handlers.DEFAULT_TCP_LOGGING_PORT)
   # don't bother with a formatter, since a socket handler sends the event as
   # an unformatted pickle
   rootLogger.addHandler(socketHandler)

   # Now, we can log to the root logger, or any other logger. First the root...
   logging.info('Jackdaws love my big sphinx of quartz.')

   # Now, define a couple of other loggers which might represent areas in your
   # application:

   logger1 = logging.getLogger('myapp.area1')
   logger2 = logging.getLogger('myapp.area2')

   logger1.debug('Quick zephyrs blow, vexing daft Jim.')
   logger1.info('How quickly daft jumping zebras vex.')
   logger2.warning('Jail zesty vixen who grabbed pay from quack.')
   logger2.error('The five boxing wizards jump quickly.')

At the receiving end, you can set up a receiver using the :mod:`SocketServer`
module. Here is a basic working example::

   import pickle
   import logging
   import logging.handlers
   import SocketServer
   import struct


   class LogRecordStreamHandler(SocketServer.StreamRequestHandler):
       """Handler for a streaming logging request.

       This basically logs the record using whatever logging policy is
       configured locally.
       """

       def handle(self):
           """
           Handle multiple requests - each expected to be a 4-byte length,
           followed by the LogRecord in pickle format. Logs the record
           according to whatever policy is configured locally.
           """
           while True:
               chunk = self.connection.recv(4)
               if len(chunk) < 4:
                   break
               slen = struct.unpack('>L', chunk)[0]
               chunk = self.connection.recv(slen)
               while len(chunk) < slen:
                   chunk = chunk + self.connection.recv(slen - len(chunk))
               obj = self.unPickle(chunk)
               record = logging.makeLogRecord(obj)
               self.handleLogRecord(record)

       def unPickle(self, data):
           return pickle.loads(data)

       def handleLogRecord(self, record):
           # if a name is specified, we use the named logger rather than the one
           # implied by the record.
           if self.server.logname is not None:
               name = self.server.logname
           else:
               name = record.name
           logger = logging.getLogger(name)
           # N.B. EVERY record gets logged. This is because Logger.handle
           # is normally called AFTER logger-level filtering. If you want
           # to do filtering, do it at the client end to save wasting
           # cycles and network bandwidth!
           logger.handle(record)

   class LogRecordSocketReceiver(SocketServer.ThreadingTCPServer):
       """
       Simple TCP socket-based logging receiver suitable for testing.
       """

       allow_reuse_address = 1

       def __init__(self, host='localhost',
                    port=logging.handlers.DEFAULT_TCP_LOGGING_PORT,
                    handler=LogRecordStreamHandler):
           SocketServer.ThreadingTCPServer.__init__(self, (host, port), handler)
           self.abort = 0
           self.timeout = 1
           self.logname = None

       def serve_until_stopped(self):
           import select
           abort = 0
           while not abort:
               rd, wr, ex = select.select([self.socket.fileno()],
                                          [], [],
                                          self.timeout)
               if rd:
                   self.handle_request()
               abort = self.abort

   def main():
       logging.basicConfig(
           format='%(relativeCreated)5d %(name)-15s %(levelname)-8s %(message)s')
       tcpserver = LogRecordSocketReceiver()
       print('About to start TCP server...')
       tcpserver.serve_until_stopped()

   if __name__ == '__main__':
       main()

First run the server, and then the client. On the client side, nothing is
printed on the console; on the server side, you should see something like::

   About to start TCP server...
      59 root            INFO     Jackdaws love my big sphinx of quartz.
      59 myapp.area1     DEBUG    Quick zephyrs blow, vexing daft Jim.
      69 myapp.area1     INFO     How quickly daft jumping zebras vex.
      69 myapp.area2     WARNING  Jail zesty vixen who grabbed pay from quack.
      69 myapp.area2     ERROR    The five boxing wizards jump quickly.

Note that there are some security issues with pickle in some scenarios. If
these affect you, you can use an alternative serialization scheme by overriding
the :meth:`~handlers.SocketHandler.makePickle` method and implementing your
alternative there, as well as adapting the above script to use your alternative
serialization.

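For example, a JSON-based alternative might look like the following sketch. It
keeps :meth:`~handlers.SocketHandler.makePickle`'s contract of returning a
length-prefixed byte payload; the receiver's ``unPickle`` method would need a
matching change to ``json.loads``, and the ``default=str`` fallback (which
stringifies anything JSON can't handle, such as exception info) is a
simplification for illustration::

   import json
   import struct
   import logging.handlers

   class JSONSocketHandler(logging.handlers.SocketHandler):
       def makePickle(self, record):
           # serialize the record's attribute dict as UTF-8 JSON, keeping
           # the same 4-byte big-endian length prefix as the pickle version
           data = json.dumps(record.__dict__, default=str).encode('utf-8')
           return struct.pack('>L', len(data)) + data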

.. _context-info:

Adding contextual information to your logging output
----------------------------------------------------

.. currentmodule:: logging

Sometimes you want logging output to contain contextual information in
addition to the parameters passed to the logging call. For example, in a
networked application, it may be desirable to log client-specific information
(e.g. the remote client's username or IP address). Although you could use the
*extra* parameter to achieve this, it's not always convenient to pass the
information in this way. While it might be tempting to create :class:`Logger`
instances on a per-connection basis, this is not a good idea because these
instances are not garbage collected. Even though that is not a problem in
practice, the number of :class:`Logger` instances depends on the level of
granularity you want to use in logging an application, and such instances
could become hard to manage if their number becomes effectively unbounded.


Using LoggerAdapters to impart contextual information
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

An easy way in which you can pass contextual information to be output along
with logging event information is to use the :class:`LoggerAdapter` class.
This class is designed to look like a :class:`Logger`, so that you can call
:meth:`debug`, :meth:`info`, :meth:`warning`, :meth:`error`,
:meth:`exception`, :meth:`critical` and :meth:`log`. These methods have the
same signatures as their counterparts in :class:`Logger`, so you can use the
two types of instances interchangeably.

When you create an instance of :class:`LoggerAdapter`, you pass it a
:class:`Logger` instance and a dict-like object which contains your contextual
information. When you call one of the logging methods on an instance of
:class:`LoggerAdapter`, it delegates the call to the underlying instance of
:class:`Logger` passed to its constructor, and arranges to pass the contextual
information in the delegated call. Here's a snippet from the code of
:class:`LoggerAdapter`::

    def debug(self, msg, *args, **kwargs):
        """
        Delegate a debug call to the underlying logger, after adding
        contextual information from this adapter instance.
        """
        msg, kwargs = self.process(msg, kwargs)
        self.logger.debug(msg, *args, **kwargs)

The :meth:`~LoggerAdapter.process` method of :class:`LoggerAdapter` is where the
contextual information is added to the logging output. It's passed the message
and keyword arguments of the logging call, and it passes back (potentially)
modified versions of these to use in the call to the underlying logger. The
default implementation of this method leaves the message alone, but inserts
an 'extra' key in the keyword arguments whose value is the dict-like object
passed to the constructor. Of course, if you pass an 'extra' keyword argument
in the call to the adapter, it will be silently overwritten.

The advantage of using 'extra' is that the values in the dict-like object are
merged into the :class:`LogRecord` instance's __dict__, allowing you to use
customized strings with your :class:`Formatter` instances which know about
the keys of the dict-like object. If you need a different method, e.g. if you
want to prepend or append the contextual information to the message string,
you just need to subclass :class:`LoggerAdapter` and override
:meth:`~LoggerAdapter.process` to do what you need. Here is a simple example::

    class CustomAdapter(logging.LoggerAdapter):
        """
        This example adapter expects the passed in dict-like object to have a
        'connid' key, whose value in brackets is prepended to the log message.
        """
        def process(self, msg, kwargs):
            return '[%s] %s' % (self.extra['connid'], msg), kwargs

which you can use like this::

    logger = logging.getLogger(__name__)
    adapter = CustomAdapter(logger, {'connid': some_conn_id})

Then any events that you log to the adapter will have the value of
``some_conn_id`` prepended to the log messages.

Using objects other than dicts to pass contextual information
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

You don't need to pass an actual dict to a :class:`LoggerAdapter` - you could
pass an instance of a class which implements ``__getitem__`` and ``__iter__`` so
that it looks like a dict to logging. This would be useful if you want to
generate values dynamically (whereas the values in a dict would be constant).

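For example, a minimal sketch of such a dict-like class is shown below (the
class name and the use of random values are illustrative assumptions)::

    import logging
    from random import choice

    class ConnInfo(object):
        """
        Looks like a dict to logging, but generates its values dynamically
        each time one of them is looked up.
        """
        def __getitem__(self, name):
            if name == 'ip':
                return choice(['127.0.0.1', '192.168.0.1'])
            if name == 'user':
                return choice(['jim', 'fred', 'sheila'])
            raise KeyError(name)

        def __iter__(self):
            return iter(['ip', 'user'])

    adapter = logging.LoggerAdapter(logging.getLogger(__name__), ConnInfo())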

.. _filters-contextual:

Using Filters to impart contextual information
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

You can also add contextual information to log output using a user-defined
:class:`Filter`. ``Filter`` instances are allowed to modify the ``LogRecords``
passed to them, including adding additional attributes which can then be output
using a suitable format string, or if needed a custom :class:`Formatter`.

For example, in a web application, the request being processed (or at least,
the interesting parts of it) can be stored in a threadlocal
(:class:`threading.local`) variable, and then accessed from a ``Filter`` to
add information from the request - say, the remote IP address and the remote
user's username - to the ``LogRecord``, using the attribute names 'ip' and
'user'. A format string which references those attributes can then be used to
include them in every message. Here's an example script::

    import logging
    from random import choice

    class ContextFilter(logging.Filter):
        """
        This is a filter which injects contextual information into the log.

        Rather than use actual contextual information, we just use random
        data in this demo.
        """

        USERS = ['jim', 'fred', 'sheila']
        IPS = ['123.231.231.123', '127.0.0.1', '192.168.0.1']

        def filter(self, record):
            record.ip = choice(ContextFilter.IPS)
            record.user = choice(ContextFilter.USERS)
            return True

    if __name__ == '__main__':
        levels = (logging.DEBUG, logging.INFO, logging.WARNING, logging.ERROR, logging.CRITICAL)
        logging.basicConfig(level=logging.DEBUG,
                            format='%(asctime)-15s %(name)-5s %(levelname)-8s IP: %(ip)-15s User: %(user)-8s %(message)s')
        a1 = logging.getLogger('a.b.c')
        a2 = logging.getLogger('d.e.f')

        f = ContextFilter()
        a1.addFilter(f)
        a2.addFilter(f)
        a1.debug('A debug message')
        a1.info('An info message with %s', 'some parameters')
        for x in range(10):
            lvl = choice(levels)
            lvlname = logging.getLevelName(lvl)
            a2.log(lvl, 'A message at %s level with %d %s', lvlname, 2, 'parameters')

which, when run, produces something like::

    2010-09-06 22:38:15,292 a.b.c DEBUG    IP: 123.231.231.123 User: fred     A debug message
    2010-09-06 22:38:15,300 a.b.c INFO     IP: 192.168.0.1     User: sheila   An info message with some parameters
    2010-09-06 22:38:15,300 d.e.f CRITICAL IP: 127.0.0.1       User: sheila   A message at CRITICAL level with 2 parameters
    2010-09-06 22:38:15,300 d.e.f ERROR    IP: 127.0.0.1       User: jim      A message at ERROR level with 2 parameters
    2010-09-06 22:38:15,300 d.e.f DEBUG    IP: 127.0.0.1       User: sheila   A message at DEBUG level with 2 parameters
    2010-09-06 22:38:15,300 d.e.f ERROR    IP: 123.231.231.123 User: fred     A message at ERROR level with 2 parameters
    2010-09-06 22:38:15,300 d.e.f CRITICAL IP: 192.168.0.1     User: jim      A message at CRITICAL level with 2 parameters
    2010-09-06 22:38:15,300 d.e.f CRITICAL IP: 127.0.0.1       User: sheila   A message at CRITICAL level with 2 parameters
    2010-09-06 22:38:15,300 d.e.f DEBUG    IP: 192.168.0.1     User: jim      A message at DEBUG level with 2 parameters
    2010-09-06 22:38:15,301 d.e.f ERROR    IP: 127.0.0.1       User: sheila   A message at ERROR level with 2 parameters
    2010-09-06 22:38:15,301 d.e.f DEBUG    IP: 123.231.231.123 User: fred     A message at DEBUG level with 2 parameters
    2010-09-06 22:38:15,301 d.e.f INFO     IP: 123.231.231.123 User: fred     A message at INFO level with 2 parameters


.. _multiple-processes:

Logging to a single file from multiple processes
------------------------------------------------

Although logging is thread-safe, and logging to a single file from multiple
threads in a single process *is* supported, logging to a single file from
*multiple processes* is *not* supported, because there is no standard way to
serialize access to a single file across multiple processes in Python. If you
need to log to a single file from multiple processes, one way of doing this is
to have all the processes log to a :class:`~handlers.SocketHandler`, and have a
separate process which implements a socket server which reads from the socket
and logs to file. (If you prefer, you can dedicate one thread in one of the
existing processes to perform this function.)
:ref:`This section <network-logging>` documents this approach in more detail and
includes a working socket receiver which can be used as a starting point for you
to adapt in your own applications.

If you are using a recent version of Python which includes the
:mod:`multiprocessing` module, you could write your own handler which uses the
:class:`~multiprocessing.Lock` class from this module to serialize access to the
file from your processes. The existing :class:`FileHandler` and subclasses do
not make use of :mod:`multiprocessing` at present, though they may do so in the
future. Note that at present, the :mod:`multiprocessing` module does not provide
working lock functionality on all platforms (see
https://bugs.python.org/issue3770).

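If you do take that route, a sketch of such a handler might look like the
following (illustrative only: it inherits the platform caveats just mentioned,
and every process must use the *same* lock instance, e.g. one created in the
parent before the worker processes are started)::

    import logging
    import multiprocessing

    class LockedFileHandler(logging.FileHandler):
        """A FileHandler which serializes writes with a multiprocessing lock."""

        def __init__(self, filename, lock, mode='a'):
            logging.FileHandler.__init__(self, filename, mode)
            self._mp_lock = lock

        def emit(self, record):
            # hold the shared cross-process lock around the actual write
            with self._mp_lock:
                logging.FileHandler.emit(self, record)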

Using file rotation
-------------------

.. sectionauthor:: Doug Hellmann, Vinay Sajip (changes)
.. (see <http://blog.doughellmann.com/2007/05/pymotw-logging.html>)

Sometimes you want to let a log file grow to a certain size, then open a new
file and log to that. You may want to keep a certain number of these files, and
when that many files have been created, rotate the files so that the number of
files and the size of the files both remain bounded. For this usage pattern, the
logging package provides a :class:`~handlers.RotatingFileHandler`::

   import glob
   import logging
   import logging.handlers

   LOG_FILENAME = 'logging_rotatingfile_example.out'

   # Set up a specific logger with our desired output level
   my_logger = logging.getLogger('MyLogger')
   my_logger.setLevel(logging.DEBUG)

   # Add the log message handler to the logger
   handler = logging.handlers.RotatingFileHandler(
                 LOG_FILENAME, maxBytes=20, backupCount=5)

   my_logger.addHandler(handler)

   # Log some messages
   for i in range(20):
       my_logger.debug('i = %d', i)

   # See what files are created
   logfiles = glob.glob('%s*' % LOG_FILENAME)

   for filename in logfiles:
       print(filename)

The result should be 6 separate files, each with part of the log history for the
application::

   logging_rotatingfile_example.out
   logging_rotatingfile_example.out.1
   logging_rotatingfile_example.out.2
   logging_rotatingfile_example.out.3
   logging_rotatingfile_example.out.4
   logging_rotatingfile_example.out.5

The most current file is always :file:`logging_rotatingfile_example.out`,
and each time it reaches the size limit it is renamed with the suffix
``.1``. Each of the existing backup files is renamed to increment the suffix
(``.1`` becomes ``.2``, etc.) and the oldest file, ``.5``, is erased.

Obviously this example sets the log size much too small as an extreme
example.  In practice, you would want to set *maxBytes* to an appropriate value.

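If you would rather roll files over based on time than on size, the
:class:`~handlers.TimedRotatingFileHandler` from the same module can be used in
just the same way. A minimal sketch (the rollover interval and backup count are
arbitrary choices)::

   import logging.handlers

   # roll the log over at midnight, keeping the last seven days' files
   handler = logging.handlers.TimedRotatingFileHandler(
                 'timed_example.log', when='midnight', backupCount=7)
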
An example dictionary-based configuration
-----------------------------------------

Below is an example of a logging configuration dictionary - it's taken from
the `documentation on the Django project <https://docs.djangoproject.com/en/1.9/topics/logging/#configuring-logging>`_.
This dictionary is passed to :func:`~config.dictConfig` to put the configuration into effect::

    LOGGING = {
        'version': 1,
        'disable_existing_loggers': True,
        'formatters': {
            'verbose': {
                'format': '%(levelname)s %(asctime)s %(module)s %(process)d %(thread)d %(message)s'
            },
            'simple': {
                'format': '%(levelname)s %(message)s'
            },
        },
        'filters': {
            'special': {
                '()': 'project.logging.SpecialFilter',
                'foo': 'bar',
            }
        },
        'handlers': {
            'null': {
                'level': 'DEBUG',
                'class': 'django.utils.log.NullHandler',
            },
            'console': {
                'level': 'DEBUG',
                'class': 'logging.StreamHandler',
                'formatter': 'simple'
            },
            'mail_admins': {
                'level': 'ERROR',
                'class': 'django.utils.log.AdminEmailHandler',
                'filters': ['special']
            }
        },
        'loggers': {
            'django': {
                'handlers': ['null'],
                'propagate': True,
                'level': 'INFO',
            },
            'django.request': {
                'handlers': ['mail_admins'],
                'level': 'ERROR',
                'propagate': False,
            },
            'myproject.custom': {
                'handlers': ['console', 'mail_admins'],
                'level': 'INFO',
                'filters': ['special']
            }
        }
    }

For more information about this configuration, you can see the `relevant
section <https://docs.djangoproject.com/en/1.9/topics/logging/#configuring-logging>`_
of the Django documentation.

Inserting a BOM into messages sent to a SysLogHandler
-----------------------------------------------------

`RFC 5424 <https://tools.ietf.org/html/rfc5424>`_ requires that a
Unicode message be sent to a syslog daemon as a set of bytes which have the
following structure: an optional pure-ASCII component, followed by a UTF-8 Byte
Order Mark (BOM), followed by Unicode encoded using UTF-8. (See the `relevant
section of the specification <https://tools.ietf.org/html/rfc5424#section-6>`_.)

In Python 2.6 and 2.7, code was added to
:class:`~logging.handlers.SysLogHandler` to insert a BOM into the message, but
unfortunately, it was implemented incorrectly, with the BOM appearing at the
beginning of the message and hence not allowing any pure-ASCII component to
appear before it.

As this behaviour is broken, the incorrect BOM insertion code is being removed
from Python 2.7.4 and later. However, it is not being replaced, and if you
want to produce RFC 5424-compliant messages which include a BOM, an optional
pure-ASCII sequence before it and arbitrary Unicode after it, encoded using
UTF-8, then you need to do the following:

#. Attach a :class:`~logging.Formatter` instance to your
   :class:`~logging.handlers.SysLogHandler` instance, with a format string
   such as::

      u'ASCII section\ufeffUnicode section'

   The Unicode code point ``u'\ufeff'``, when encoded using UTF-8, will be
   encoded as a UTF-8 BOM -- the byte-string ``'\xef\xbb\xbf'``.

#. Replace the ASCII section with whatever placeholders you like, but make sure
   that the data that appears in there after substitution is always ASCII (that
   way, it will remain unchanged after UTF-8 encoding).

#. Replace the Unicode section with whatever placeholders you like; if the data
   which appears there after substitution contains characters outside the ASCII
   range, that's fine -- it will be encoded using UTF-8.

If the formatted message is Unicode, it *will* be encoded using UTF-8 encoding
by ``SysLogHandler``. If you follow the above rules, you should be able to
produce RFC 5424-compliant messages. If you don't, logging may not complain,
but your messages will not be RFC 5424-compliant, and your syslog daemon may
complain.

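Putting these rules together, a configuration along the following lines should
work (a sketch: the use of a local ``/dev/log`` socket and the choice of
``%(levelname)s`` as the pure-ASCII placeholder are assumptions for
illustration)::

    import logging
    import logging.handlers

    handler = logging.handlers.SysLogHandler('/dev/log')
    # ASCII-only content before the BOM, arbitrary Unicode after it
    formatter = logging.Formatter(u'%(levelname)s\ufeff%(message)s')
    handler.setFormatter(formatter)
    logging.getLogger().addHandler(handler)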

Implementing structured logging
-------------------------------

Although most logging messages are intended for reading by humans, and thus not
readily machine-parseable, there might be circumstances where you want to output
messages in a structured format which *is* capable of being parsed by a program
(without needing complex regular expressions to parse the log message). This is
straightforward to achieve using the logging package. There are a number of
ways in which this could be done, but the following is a simple approach
which uses JSON to serialise the event in a machine-parseable manner::

    import json
    import logging

    class StructuredMessage(object):
        def __init__(self, message, **kwargs):
            self.message = message
            self.kwargs = kwargs

        def __str__(self):
            return '%s >>> %s' % (self.message, json.dumps(self.kwargs))

    _ = StructuredMessage   # optional, to improve readability

    logging.basicConfig(level=logging.INFO, format='%(message)s')
    logging.info(_('message 1', foo='bar', bar='baz', num=123, fnum=123.456))

If the above script is run, it prints::

    message 1 >>> {"fnum": 123.456, "num": 123, "bar": "baz", "foo": "bar"}

Note that the order of items might be different according to the version of
Python used.

If you need more specialised processing, you can use a custom JSON encoder,
as in the following complete example::

    from __future__ import unicode_literals

    import json
    import logging

    # This next bit is to ensure the script runs unchanged on 2.x and 3.x
    try:
        unicode
    except NameError:
        unicode = str

    class Encoder(json.JSONEncoder):
        def default(self, o):
            if isinstance(o, set):
                return tuple(o)
            elif isinstance(o, unicode):
                return o.encode('unicode_escape').decode('ascii')
            return super(Encoder, self).default(o)

    class StructuredMessage(object):
        def __init__(self, message, **kwargs):
            self.message = message
            self.kwargs = kwargs

        def __str__(self):
            s = Encoder().encode(self.kwargs)
            return '%s >>> %s' % (self.message, s)

    _ = StructuredMessage   # optional, to improve readability

    def main():
        logging.basicConfig(level=logging.INFO, format='%(message)s')
        logging.info(_('message 1', set_value=set([1, 2, 3]), snowman='\u2603'))

    if __name__ == '__main__':
        main()

When the above script is run, it prints::

    message 1 >>> {"snowman": "\u2603", "set_value": [1, 2, 3]}

Note that the order of items might be different according to the version of
Python used.


.. _custom-handlers:

.. currentmodule:: logging.config

Customizing handlers with :func:`dictConfig`
--------------------------------------------

There are times when you want to customize logging handlers in particular ways,
and if you use :func:`dictConfig` you may be able to do this without
subclassing. As an example, consider that you may want to set the ownership of a
log file. On POSIX, this is easily done using :func:`os.chown`, but the file
handlers in the stdlib don't offer built-in support. You can customize handler
creation using a plain function such as::

    def owned_file_handler(filename, mode='a', encoding=None, owner=None):
        if owner:
            import os, pwd, grp
            # convert user and group names to uid and gid
            uid = pwd.getpwnam(owner[0]).pw_uid
            gid = grp.getgrnam(owner[1]).gr_gid
            owner = (uid, gid)
            if not os.path.exists(filename):
                open(filename, 'a').close()
            os.chown(filename, *owner)
        return logging.FileHandler(filename, mode, encoding)

You can then specify, in a logging configuration passed to :func:`dictConfig`,
that a logging handler be created by calling this function::

    LOGGING = {
        'version': 1,
        'disable_existing_loggers': False,
        'formatters': {
            'default': {
                'format': '%(asctime)s %(levelname)s %(name)s %(message)s'
            },
        },
        'handlers': {
            'file': {
                # The values below are popped from this dictionary and
                # used to create the handler, set the handler's level and
                # its formatter.
                '()': owned_file_handler,
                'level': 'DEBUG',
                'formatter': 'default',
                # The values below are passed to the handler creator callable
                # as keyword arguments.
                'owner': ['pulse', 'pulse'],
                'filename': 'chowntest.log',
                'mode': 'w',
                'encoding': 'utf-8',
            },
        },
        'root': {
            'handlers': ['file'],
            'level': 'DEBUG',
        },
    }

In this example I am setting the ownership using the ``pulse`` user and group,
just for the purposes of illustration. Putting it together into a working
script, ``chowntest.py``::

    import logging, logging.config, os, shutil

    def owned_file_handler(filename, mode='a', encoding=None, owner=None):
        if owner:
            if not os.path.exists(filename):
                open(filename, 'a').close()
            shutil.chown(filename, *owner)
        return logging.FileHandler(filename, mode, encoding)

    LOGGING = {
        'version': 1,
        'disable_existing_loggers': False,
        'formatters': {
            'default': {
                'format': '%(asctime)s %(levelname)s %(name)s %(message)s'
            },
        },
        'handlers': {
            'file': {
                # The values below are popped from this dictionary and
                # used to create the handler, set the handler's level and
                # its formatter.
                '()': owned_file_handler,
                'level': 'DEBUG',
                'formatter': 'default',
                # The values below are passed to the handler creator callable
                # as keyword arguments.
                'owner': ['pulse', 'pulse'],
                'filename': 'chowntest.log',
                'mode': 'w',
                'encoding': 'utf-8',
            },
        },
        'root': {
            'handlers': ['file'],
            'level': 'DEBUG',
        },
    }

    logging.config.dictConfig(LOGGING)
    logger = logging.getLogger('mylogger')
    logger.debug('A debug message')

To run this, you will probably need to run as ``root``:

.. code-block:: shell-session

    $ sudo python3.3 chowntest.py
    $ cat chowntest.log
    2013-11-05 09:34:51,128 DEBUG mylogger A debug message
    $ ls -l chowntest.log
    -rw-r--r-- 1 pulse pulse 55 2013-11-05 09:34 chowntest.log

Note that this example uses Python 3.3 because that's where :func:`shutil.chown`
makes an appearance. This approach should work with any Python version that
supports :func:`dictConfig` - namely, Python 2.7, 3.2 or later. With pre-3.3
versions, you would need to implement the actual ownership change using e.g.
:func:`os.chown`.

In practice, the handler-creating function may be in a utility module somewhere
in your project. Instead of the line in the configuration::

    '()': owned_file_handler,

you could use e.g.::

    '()': 'ext://project.util.owned_file_handler',

where ``project.util`` can be replaced with the actual name of the package
where the function resides. In the above working script, using
``'ext://__main__.owned_file_handler'`` should work. Here, the actual callable
is resolved by :func:`dictConfig` from the ``ext://`` specification.

This example hopefully also points the way to how you could implement other
types of file change - e.g. setting specific POSIX permission bits - in the
same way, using :func:`os.chmod`.

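For instance, a sketch of such a creator function might look like this (the
function name and the octal mode here are illustrative assumptions)::

    import logging, os

    def secure_file_handler(filename, mode='a', encoding=None):
        if not os.path.exists(filename):
            open(filename, 'a').close()
        # restrict the log file to owner read/write only
        os.chmod(filename, 0o600)
        return logging.FileHandler(filename, mode, encoding)
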
Of course, the approach could also be extended to types of handler other than a
:class:`~logging.FileHandler` - for example, one of the rotating file handlers,
or a different type of handler altogether.


.. _filters-dictconfig:

Configuring filters with :func:`dictConfig`
-------------------------------------------

You *can* configure filters using :func:`~logging.config.dictConfig`, though it
might not be obvious at first glance how to do it (hence this recipe). Since
:class:`~logging.Filter` is the only filter class included in the standard
library, and it is unlikely to cater to many requirements (it's only there as a
base class), you will typically need to define your own :class:`~logging.Filter`
subclass with an overridden :meth:`~logging.Filter.filter` method. To do this,
specify the ``()`` key in the configuration dictionary for the filter,
specifying a callable which will be used to create the filter (a class is the
most obvious, but you can provide any callable which returns a
:class:`~logging.Filter` instance). Here is a complete example::

    import logging
    import logging.config
    import sys

    class MyFilter(logging.Filter):
        def __init__(self, param=None):
            self.param = param

        def filter(self, record):
            if self.param is None:
                allow = True
            else:
                allow = self.param not in record.msg
            if allow:
                record.msg = 'changed: ' + record.msg
            return allow

    LOGGING = {
        'version': 1,
        'filters': {
            'myfilter': {
                '()': MyFilter,
                'param': 'noshow',
            }
        },
        'handlers': {
            'console': {
                'class': 'logging.StreamHandler',
                'filters': ['myfilter']
            }
        },
        'root': {
            'level': 'DEBUG',
            'handlers': ['console']
        },
    }

    if __name__ == '__main__':
        logging.config.dictConfig(LOGGING)
        logging.debug('hello')
        logging.debug('hello - noshow')

This example shows how you can pass configuration data to the callable which
constructs the instance, in the form of keyword parameters. When run, the above
script will print::

    changed: hello

which shows that the filter is working as configured.

A couple of extra points to note:

* If you can't refer to the callable directly in the configuration (e.g. if it
  lives in a different module, and you can't import it directly where the
  configuration dictionary is), you can use the form ``ext://...`` as described
  in :ref:`logging-config-dict-externalobj`. For example, you could have used
  the text ``'ext://__main__.MyFilter'`` instead of ``MyFilter`` in the above
  example.

* As well as for filters, this technique can also be used to configure custom
  handlers and formatters. See :ref:`logging-config-dict-userdef` for more
  information on how logging supports using user-defined objects in its
  configuration, and see the other cookbook recipe :ref:`custom-handlers` above.


.. _custom-format-exception:

Customized exception formatting
-------------------------------

There might be times when you want to do customized exception formatting - for
argument's sake, let's say you want exactly one line per logged event, even
when exception information is present. You can do this with a custom formatter
class, as shown in the following example::

    import logging

    class OneLineExceptionFormatter(logging.Formatter):
        def formatException(self, exc_info):
            """
            Format an exception so that it prints on a single line.
            """
            result = super(OneLineExceptionFormatter, self).formatException(exc_info)
            return repr(result)  # or format into one line however you want to

        def format(self, record):
            s = super(OneLineExceptionFormatter, self).format(record)
            if record.exc_text:
                s = s.replace('\n', '') + '|'
            return s

    def configure_logging():
        fh = logging.FileHandler('output.txt', 'w')
        f = OneLineExceptionFormatter('%(asctime)s|%(levelname)s|%(message)s|',
                                      '%d/%m/%Y %H:%M:%S')
        fh.setFormatter(f)
        root = logging.getLogger()
        root.setLevel(logging.DEBUG)
        root.addHandler(fh)

    def main():
        configure_logging()
        logging.info('Sample message')
        try:
            x = 1 / 0
        except ZeroDivisionError as e:
            logging.exception('ZeroDivisionError: %s', e)

    if __name__ == '__main__':
        main()

When run, this produces a file with exactly two lines::

    28/01/2015 07:21:23|INFO|Sample message|
    28/01/2015 07:21:23|ERROR|ZeroDivisionError: integer division or modulo by zero|'Traceback (most recent call last):\n  File "logtest7.py", line 30, in main\n    x = 1 / 0\nZeroDivisionError: integer division or modulo by zero'|

While the above treatment is simplistic, it points the way to how exception
information can be formatted to your liking. The :mod:`traceback` module may be
helpful for more specialized needs.

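For example, :func:`traceback.format_exception` gives you the traceback as a
list of strings, which you could join into a single line yourself instead of
using :func:`repr`. A sketch of such a variant (the class name is hypothetical,
and it would still be combined with the ``format`` override shown above)::

    import logging
    import traceback

    class JoinedOneLineExceptionFormatter(logging.Formatter):
        def formatException(self, exc_info):
            # join the individual traceback lines with a separator instead
            # of wrapping the whole thing in repr()
            lines = traceback.format_exception(*exc_info)
            return ' | '.join(line.strip() for line in lines)
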
.. _spoken-messages:

Speaking logging messages
-------------------------

There might be situations when it is desirable to have logging messages rendered
in an audible rather than a visible format. This is easy to do if you have
text-to-speech (TTS) functionality available in your system, even if it doesn't
have a Python binding. Most TTS systems have a command line program you can run,
and this can be invoked from a handler using :mod:`subprocess`. It's assumed
here that TTS command line programs won't expect to interact with users or take
a long time to complete, that the frequency of logged messages will not be so
high as to swamp the user with messages, and that it's acceptable to have the
messages spoken one at a time rather than concurrently. The example
implementation below waits for one message to be spoken before the next is
processed, and this might cause other handlers to be kept waiting. Here is a
short example showing the approach, which assumes that the ``espeak`` TTS
package is available::

    import logging
    import subprocess
    import sys

    class TTSHandler(logging.Handler):
        def emit(self, record):
            msg = self.format(record)
            # Speak slowly in a female English voice
            cmd = ['espeak', '-s150', '-ven+f3', msg]
            p = subprocess.Popen(cmd, stdout=subprocess.PIPE,
                                 stderr=subprocess.STDOUT)
            # wait for the program to finish
            p.communicate()

    def configure_logging():
        h = TTSHandler()
        root = logging.getLogger()
        root.addHandler(h)
        # the default formatter just returns the message
        root.setLevel(logging.DEBUG)

    def main():
        logging.info('Hello')
        logging.debug('Goodbye')

    if __name__ == '__main__':
        configure_logging()
        sys.exit(main())

When run, this script should say "Hello" and then "Goodbye" in a female voice.

The above approach can, of course, be adapted to other TTS systems and even
other systems altogether which can process messages via external programs run
from a command line.

.. _buffered-logging:

Buffering logging messages and outputting them conditionally
------------------------------------------------------------

There might be situations where you want to log messages in a temporary area
and only output them if a certain condition occurs. For example, you may want to
start logging debug events in a function, and if the function completes without
errors, you don't want to clutter the log with the collected debug information,
but if there is an error, you want all the debug information to be output as well
as the error.

Here is an example which shows how you could do this using a decorator for your
functions where you want logging to behave this way. It makes use of the
:class:`logging.handlers.MemoryHandler`, which allows buffering of logged events
until some condition occurs, at which point the buffered events are ``flushed``
- passed to another handler (the ``target`` handler) for processing. By default,
the ``MemoryHandler`` is flushed when its buffer gets filled up or an event whose
level is greater than or equal to a specified threshold is seen. You can use this
recipe with a more specialised subclass of ``MemoryHandler`` if you want custom
flushing behavior.

The example script has a simple function, ``foo``, which just cycles through
all the logging levels, writing to ``sys.stderr`` to say what level it's about
to log at, and then actually logging a message at that level. You can pass a
parameter to ``foo`` which, if true, will log at ERROR and CRITICAL levels -
otherwise, it only logs at DEBUG, INFO and WARNING levels.

The script just arranges to decorate ``foo`` with a decorator which will do the
conditional logging that's required. The decorator takes a logger as a parameter
and attaches a memory handler for the duration of the call to the decorated
function. The decorator can be additionally parameterised using a target handler,
a level at which flushing should occur, and a capacity for the buffer. These
default to a :class:`~logging.StreamHandler` which writes to ``sys.stderr``,
``logging.ERROR`` and ``100`` respectively.

Here's the script::

    import logging
    from logging.handlers import MemoryHandler
    import sys

    logger = logging.getLogger(__name__)
    logger.addHandler(logging.NullHandler())

    def log_if_errors(logger, target_handler=None, flush_level=None, capacity=None):
        if target_handler is None:
            target_handler = logging.StreamHandler()
        if flush_level is None:
            flush_level = logging.ERROR
        if capacity is None:
            capacity = 100
        handler = MemoryHandler(capacity, flushLevel=flush_level, target=target_handler)

        def decorator(fn):
            def wrapper(*args, **kwargs):
                logger.addHandler(handler)
                try:
                    return fn(*args, **kwargs)
                except Exception:
                    logger.exception('call failed')
                    raise
                finally:
                    # discard any records still buffered, by calling
                    # BufferingHandler.flush (which just clears the buffer)
                    # rather than MemoryHandler.flush (which would send
                    # them to the target handler)
                    super(MemoryHandler, handler).flush()
                    logger.removeHandler(handler)
1294            return wrapper
1295
1296        return decorator
1297
1298    def write_line(s):
1299        sys.stderr.write('%s\n' % s)
1300
1301    def foo(fail=False):
1302        write_line('about to log at DEBUG ...')
1303        logger.debug('Actually logged at DEBUG')
1304        write_line('about to log at INFO ...')
1305        logger.info('Actually logged at INFO')
1306        write_line('about to log at WARNING ...')
1307        logger.warning('Actually logged at WARNING')
1308        if fail:
1309            write_line('about to log at ERROR ...')
1310            logger.error('Actually logged at ERROR')
1311            write_line('about to log at CRITICAL ...')
1312            logger.critical('Actually logged at CRITICAL')
1313        return fail
1314
1315    decorated_foo = log_if_errors(logger)(foo)
1316
1317    if __name__ == '__main__':
1318        logger.setLevel(logging.DEBUG)
1319        write_line('Calling undecorated foo with False')
1320        assert not foo(False)
1321        write_line('Calling undecorated foo with True')
1322        assert foo(True)
1323        write_line('Calling decorated foo with False')
1324        assert not decorated_foo(False)
1325        write_line('Calling decorated foo with True')
1326        assert decorated_foo(True)
1327
When this script is run, the following output should be observed::

    Calling undecorated foo with False
    about to log at DEBUG ...
    about to log at INFO ...
    about to log at WARNING ...
    Calling undecorated foo with True
    about to log at DEBUG ...
    about to log at INFO ...
    about to log at WARNING ...
    about to log at ERROR ...
    about to log at CRITICAL ...
    Calling decorated foo with False
    about to log at DEBUG ...
    about to log at INFO ...
    about to log at WARNING ...
    Calling decorated foo with True
    about to log at DEBUG ...
    about to log at INFO ...
    about to log at WARNING ...
    about to log at ERROR ...
    Actually logged at DEBUG
    Actually logged at INFO
    Actually logged at WARNING
    Actually logged at ERROR
    about to log at CRITICAL ...
    Actually logged at CRITICAL

As you can see, actual logging output only occurs when an event is logged whose
severity is ERROR or greater, but in that case, any previous events at lower
severities are also logged.

You can of course use the conventional means of decoration::

    @log_if_errors(logger)
    def foo(fail=False):
        ...

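The decorator's parameters also let you buffer more records and send them
somewhere other than ``sys.stderr``. For example, the following sketch (the
``app_errors.log`` filename and the ``risky_operation`` name are purely
illustrative) buffers up to 50 records and flushes them to a file only when
a CRITICAL event occurs::

    @log_if_errors(logger,
                   target_handler=logging.FileHandler('app_errors.log'),
                   flush_level=logging.CRITICAL,
                   capacity=50)
    def risky_operation():
        ...
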
.. _utc-formatting:

Formatting times using UTC (GMT) via configuration
--------------------------------------------------

Sometimes you want to format times using UTC, which can be done using a class
such as ``UTCFormatter``, shown below::

    import logging
    import time

    class UTCFormatter(logging.Formatter):
        converter = time.gmtime

and you can then use the ``UTCFormatter`` in your code instead of
:class:`~logging.Formatter`. If you want to do that via configuration, you can
use the :func:`~logging.config.dictConfig` API with an approach illustrated by
the following complete example::

    import logging
    import logging.config
    import time

    class UTCFormatter(logging.Formatter):
        converter = time.gmtime

    LOGGING = {
        'version': 1,
        'disable_existing_loggers': False,
        'formatters': {
            'utc': {
                '()': UTCFormatter,
                'format': '%(asctime)s %(message)s',
            },
            'local': {
                'format': '%(asctime)s %(message)s',
            }
        },
        'handlers': {
            'console1': {
                'class': 'logging.StreamHandler',
                'formatter': 'utc',
            },
            'console2': {
                'class': 'logging.StreamHandler',
                'formatter': 'local',
            },
        },
        'root': {
            'handlers': ['console1', 'console2'],
        }
    }

    if __name__ == '__main__':
        logging.config.dictConfig(LOGGING)
        logging.warning('The local time is %s', time.asctime())

When this script is run, it should print something like::

    2015-10-17 12:53:29,501 The local time is Sat Oct 17 13:53:29 2015
    2015-10-17 13:53:29,501 The local time is Sat Oct 17 13:53:29 2015

showing how the time is formatted both as local time and UTC, one for each
handler.

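If you want every formatter in the process to use UTC, you don't even need a
subclass or any configuration: since ``converter`` is a class attribute, you
can assign to :class:`~logging.Formatter` itself. It's a blunt instrument, but
sometimes convenient::

    import logging
    import time

    # Affects all Formatter instances which don't set their own converter.
    logging.Formatter.converter = time.gmtime
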
.. _context-manager:

Using a context manager for selective logging
---------------------------------------------

There are times when it would be useful to temporarily change the logging
configuration and revert it after doing something. For this, a context
manager is the most obvious way of saving and restoring the logging context.
Here is a simple example of such a context manager, which allows you to
optionally change the logging level and add a logging handler purely in the
scope of the context manager::

    import logging
    import sys

    class LoggingContext(object):
        def __init__(self, logger, level=None, handler=None, close=True):
            self.logger = logger
            self.level = level
            self.handler = handler
            self.close = close

        def __enter__(self):
            if self.level is not None:
                self.old_level = self.logger.level
                self.logger.setLevel(self.level)
            if self.handler:
                self.logger.addHandler(self.handler)

        def __exit__(self, et, ev, tb):
            if self.level is not None:
                self.logger.setLevel(self.old_level)
            if self.handler:
                self.logger.removeHandler(self.handler)
            if self.handler and self.close:
                self.handler.close()
            # implicit return of None => don't swallow exceptions

If you specify a level value, the logger's level is set to that value in the
scope of the ``with`` block covered by the context manager. If you specify a
handler, it is added to the logger on entry to the block and removed on exit
from the block. You can also ask the manager to close the handler for you on
block exit - you could do this if you don't need the handler any more.

To illustrate how it works, we can add the following block of code to the
above::

    if __name__ == '__main__':
        logger = logging.getLogger('foo')
        logger.addHandler(logging.StreamHandler())
        logger.setLevel(logging.INFO)
        logger.info('1. This should appear just once on stderr.')
        logger.debug('2. This should not appear.')
        with LoggingContext(logger, level=logging.DEBUG):
            logger.debug('3. This should appear once on stderr.')
        logger.debug('4. This should not appear.')
        h = logging.StreamHandler(sys.stdout)
        with LoggingContext(logger, level=logging.DEBUG, handler=h, close=True):
            logger.debug('5. This should appear twice - once on stderr and once on stdout.')
        logger.info('6. This should appear just once on stderr.')
        logger.debug('7. This should not appear.')

We initially set the logger's level to ``INFO``, so message #1 appears and
message #2 doesn't. We then change the level to ``DEBUG`` temporarily in the
following ``with`` block, and so message #3 appears. After the block exits, the
logger's level is restored to ``INFO`` and so message #4 doesn't appear. In the
next ``with`` block, we set the level to ``DEBUG`` again but also add a handler
writing to ``sys.stdout``. Thus, message #5 appears twice on the console (once
via ``stderr`` and once via ``stdout``). After the ``with`` statement's
completion, the status is as it was before so message #6 appears (like message
#1) whereas message #7 doesn't (just like message #2).

If we run the resulting script, the result is as follows:

.. code-block:: shell-session

    $ python logctx.py
    1. This should appear just once on stderr.
    3. This should appear once on stderr.
    5. This should appear twice - once on stderr and once on stdout.
    5. This should appear twice - once on stderr and once on stdout.
    6. This should appear just once on stderr.

If we run it again, but pipe ``stderr`` to ``/dev/null``, we see the following,
which is the only message written to ``stdout``:

.. code-block:: shell-session

    $ python logctx.py 2>/dev/null
    5. This should appear twice - once on stderr and once on stdout.

Once again, but piping ``stdout`` to ``/dev/null``, we get:

.. code-block:: shell-session

    $ python logctx.py >/dev/null
    1. This should appear just once on stderr.
    3. This should appear once on stderr.
    5. This should appear twice - once on stderr and once on stdout.
    6. This should appear just once on stderr.

In this case, the message #5 printed to ``stdout`` doesn't appear, as expected.

Of course, the approach described here can be generalised, for example to attach
logging filters temporarily. Note that the above code works in Python 2 as well
as Python 3.

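As a hint of that generalisation, here is a sketch of a companion context
manager, built on :func:`contextlib.contextmanager`, which attaches a filter
to a logger just for the duration of a ``with`` block (the ``temporary_filter``
and ``NoParsingFilter`` names are illustrative)::

    import contextlib
    import logging

    @contextlib.contextmanager
    def temporary_filter(logger, filter_):
        logger.addFilter(filter_)
        try:
            yield
        finally:
            # Remove the filter even if the block raises.
            logger.removeFilter(filter_)

which you might use like this::

    class NoParsingFilter(logging.Filter):
        def filter(self, record):
            # Drop records whose message starts with 'parsing'.
            return not record.getMessage().startswith('parsing')

    with temporary_filter(logging.getLogger('foo'), NoParsingFilter()):
        logging.getLogger('foo').info('parsing started')    # suppressed
        logging.getLogger('foo').info('something else')     # gets through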