
Lines Matching full:logging

4 Logging Cookbook
9 This page contains a number of recipes related to logging, which have been found
12 .. currentmodule:: logging
14 Using logging in multiple modules
17 Multiple calls to ``logging.getLogger('someLogger')`` return a reference to the
25 import logging
29 logger = logging.getLogger('spam_application')
30 logger.setLevel(logging.DEBUG)
32 fh = logging.FileHandler('spam.log')
33 fh.setLevel(logging.DEBUG)
35 ch = logging.StreamHandler()
36 ch.setLevel(logging.ERROR)
38 formatter = logging.Formatter('%(asctime)s - %(name)s - %(levelname)s - %(message)s')
57 import logging
60 module_logger = logging.getLogger('spam_application.auxiliary')
64 self.logger = logging.getLogger('spam_application.auxiliary.Auxiliary')
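A minimal, self-contained sketch of the point being made (reusing the ``spam_application.auxiliary`` name from the excerpt above): repeated ``getLogger()`` calls with the same name hand back the very same object, so handlers configured in one module also apply to loggers obtained elsewhere::

    import logging

    # getLogger() with the same name always returns the same Logger object.
    a = logging.getLogger('spam_application.auxiliary')
    b = logging.getLogger('spam_application.auxiliary')
    assert a is b   # the very same object

    # Child loggers propagate to their ancestors by default, so a handler
    # attached to 'spam_application' also sees records from
    # 'spam_application.auxiliary'.
    parent = logging.getLogger('spam_application')
    parent.addHandler(logging.StreamHandler())
    a.warning('visible via the parent handler')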
100 Logging from multiple threads
103 Logging from multiple threads requires no special effort. The following example
104 shows logging from the main (initial) thread and another thread::
106 import logging
112 logging.debug('Hi from myfunc')
116 logging.basicConfig(level=logging.DEBUG, format='%(relativeCreated)6d %(threadName)s %(message)s')
122 logging.debug('Hello from main')
154 This shows the logging output interspersed as one might expect. This approach
163 text file while simultaneously logging errors or above to the console. To set
164 this up, simply configure the appropriate handlers. The logging calls in the
168 import logging
170 logger = logging.getLogger('simple_example')
171 logger.setLevel(logging.DEBUG)
173 fh = logging.FileHandler('spam.log')
174 fh.setLevel(logging.DEBUG)
176 ch = logging.StreamHandler()
177 ch.setLevel(logging.ERROR)
179 formatter = logging.Formatter('%(asctime)s - %(name)s - %(levelname)s - %(message)s')
206 Logging to multiple destinations
215 import logging
217 # set up logging to file - see previous section for more details
218 logging.basicConfig(level=logging.DEBUG,
224 console = logging.StreamHandler()
225 console.setLevel(logging.INFO)
227 formatter = logging.Formatter('%(name)-12s: %(levelname)-8s %(message)s')
231 logging.getLogger('').addHandler(console)
234 logging.info('Jackdaws love my big sphinx of quartz.')
239 logger1 = logging.getLogger('myapp.area1')
240 logger2 = logging.getLogger('myapp.area2')
276 Here is an example of a module using the logging configuration server::
278 import logging
279 import logging.config
284 logging.config.fileConfig('logging.conf')
287 t = logging.config.listen(9999)
290 logger = logging.getLogger('simpleExample')
293 # loop through logging calls to see the difference
304 logging.config.stopListening()
308 properly preceded with the binary-encoded length, as the new logging
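On the sending side, ``logging.config.listen`` expects each new configuration to be preceded by its length packed as a 4-byte big-endian integer. A rough sketch of such a client, where the host and file name are assumptions for illustration and the port matches the ``listen(9999)`` call above::

    import socket
    import struct
    import sys

    # Read the new configuration file to send to the listener.
    with open(sys.argv[1], 'rb') as f:
        data_to_send = f.read()

    HOST, PORT = 'localhost', 9999   # must match logging.config.listen(9999)
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.connect((HOST, PORT))
    # The listener reads a 4-byte big-endian length, then that many bytes of config.
    s.send(struct.pack('>L', len(data_to_send)))
    s.send(data_to_send)
    s.close()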
332 .. currentmodule:: logging.handlers
334 Sometimes you have to get your logging handlers to do their work without
335 blocking the thread you're logging from. This is common in web applications,
374 handler = logging.StreamHandler()
376 root = logging.getLogger()
378 formatter = logging.Formatter('%(threadName)s: %(message)s')
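The non-blocking recipe is built around :class:`QueueHandler` and :class:`QueueListener`: the handler attached to the logger only enqueues records, and a listener thread passes them on to the potentially slow handler. A minimal sketch, assuming a simple in-process ``queue.Queue``::

    import logging
    import queue
    from logging.handlers import QueueHandler, QueueListener

    que = queue.Queue(-1)                      # no size limit
    queue_handler = QueueHandler(que)          # fast: just enqueues the record
    stream_handler = logging.StreamHandler()   # the handler that might block
    listener = QueueListener(que, stream_handler)

    root = logging.getLogger()
    root.addHandler(queue_handler)
    listener.start()            # a background thread services the queue
    root.warning('Look out!')
    listener.stop()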
404 .. _network-logging:
406 Sending and receiving logging events across a network
409 Let's say you want to send logging events across a network, and handle them at
413 import logging, logging.handlers
415 rootLogger = logging.getLogger('')
416 rootLogger.setLevel(logging.DEBUG)
417 socketHandler = logging.handlers.SocketHandler('localhost',
418 logging.handlers.DEFAULT_TCP_LOGGING_PORT)
424 logging.info('Jackdaws love my big sphinx of quartz.')
429 logger1 = logging.getLogger('myapp.area1')
430 logger2 = logging.getLogger('myapp.area2')
441 import logging
442 import logging.handlers
448 """Handler for a streaming logging request.
450 This basically logs the record using whatever logging policy is
469 record = logging.makeLogRecord(obj)
482 logger = logging.getLogger(name)
491 Simple TCP socket-based logging receiver suitable for testing.
497 port=logging.handlers.DEFAULT_TCP_LOGGING_PORT,
516 logging.basicConfig(
544 Running a logging socket listener in production
547 To run a logging listener in production, you may need to use a process-management tool
557 Adding contextual information to your logging output
560 Sometimes you want logging output to contain contextual information in
561 addition to the parameters passed to the logging call. For example, in a
569 level of granularity you want to use in logging an application, it could
578 with logging event information is to use the :class:`LoggerAdapter` class.
587 information. When you call one of the logging methods on an instance of
602 contextual information is added to the logging output. It's passed the message
603 and keyword arguments of the logging call, and it passes back (potentially)
618 class CustomAdapter(logging.LoggerAdapter):
628 logger = logging.getLogger(__name__)
639 that it looks like a dict to logging. This would be useful if you want to
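A short sketch of such an adapter, assuming the contextual value is a connection id stored under a ``connid`` key in the ``extra`` dict (the key name and value are only illustrative)::

    import logging

    class CustomAdapter(logging.LoggerAdapter):
        def process(self, msg, kwargs):
            # process() receives the message and kwargs of every logging call
            # and must return a (message, kwargs) pair; here it prepends the
            # connection id supplied via the adapter's 'extra' dict.
            return '[%s] %s' % (self.extra['connid'], msg), kwargs

    logging.basicConfig(level=logging.INFO, format='%(message)s')
    logger = logging.getLogger(__name__)
    adapter = CustomAdapter(logger, {'connid': 'abc123'})
    adapter.info('request received')    # -> [abc123] request received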
662 import logging
665 class ContextFilter(logging.Filter):
683 levels = (logging.DEBUG, logging.INFO, logging.WARNING, logging.ERROR, logging.CRITICAL)
684 logging.basicConfig(level=logging.DEBUG,
686 a1 = logging.getLogger('a.b.c')
687 a2 = logging.getLogger('d.e.f')
696 lvlname = logging.getLevelName(lvl)
719 Logging to a single file from multiple processes
722 Although logging is thread-safe, and logging to a single file from multiple
723 threads in a single process *is* supported, logging to a single file from
731 :ref:`This section <network-logging>` documents this approach in more detail and
743 .. currentmodule:: logging.handlers
746 all logging events to one of the processes in your multi-process application.
749 them according to its own logging configuration. Although the example only
752 analogous) it does allow for completely different logging configurations for
757 import logging
758 import logging.handlers
766 # Because you'll want to define the logging configurations for listener and workers, the
768 # for configuring logging for that process. These functions are also passed the queue,
778 root = logging.getLogger()
779 h = logging.handlers.RotatingFileHandler('mptest.log', 'a', 300, 10)
780 f = logging.Formatter('%(asctime)s %(processName)-10s %(name)s %(levelname)-8s %(message)s')
784 # This is the listener process top-level loop: wait for logging events
794 logger = logging.getLogger(record.name)
803 LEVELS = [logging.DEBUG, logging.INFO, logging.WARNING,
804 logging.ERROR, logging.CRITICAL]
816 # will run the logging configuration code when it starts.
818 h = logging.handlers.QueueHandler(queue) # Just the one handler needed
819 root = logging.getLogger()
822 root.setLevel(logging.DEBUG)
833 logger = logging.getLogger(choice(LOGGERS))
861 A variant of the above script keeps the logging in the main process, in a
864 import logging
865 import logging.config
866 import logging.handlers
877 logger = logging.getLogger(record.name)
882 qh = logging.handlers.QueueHandler(q)
883 root = logging.getLogger()
884 root.setLevel(logging.DEBUG)
886 levels = [logging.DEBUG, logging.INFO, logging.WARNING, logging.ERROR,
887 logging.CRITICAL]
892 logger = logging.getLogger(random.choice(loggers))
901 'class': 'logging.Formatter',
907 'class': 'logging.StreamHandler',
911 'class': 'logging.FileHandler',
917 'class': 'logging.FileHandler',
923 'class': 'logging.FileHandler',
945 logging.config.dictConfig(d)
952 # And now tell the logging thread to finish up, too
958 ``foo`` subsystem in a file ``mplog-foo.log``. This will be used by the logging
959 machinery in the main process (even though the logging events are generated in
1005 `Running a logging socket listener in production`_ for more details.
1012 .. (see <https://pymotw.com/3/logging/>)
1018 logging package provides a :class:`~handlers.RotatingFileHandler`::
1021 import logging
1022 import logging.handlers
1027 my_logger = logging.getLogger('MyLogger')
1028 my_logger.setLevel(logging.DEBUG)
1031 handler = logging.handlers.RotatingFileHandler(
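The call is truncated above; a complete, minimal sketch looks roughly like this (the file name, ``maxBytes`` and ``backupCount`` values are illustrative)::

    import logging
    import logging.handlers

    my_logger = logging.getLogger('MyLogger')
    my_logger.setLevel(logging.DEBUG)

    # Roll the file over once it reaches maxBytes, keeping backupCount old
    # copies (spam.log.1 ... spam.log.5).
    handler = logging.handlers.RotatingFileHandler(
        'spam.log', maxBytes=2000, backupCount=5)
    my_logger.addHandler(handler)

    for i in range(20):
        my_logger.debug('i = %d' % i)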
1071 When logging was added to the Python standard library, the only way of
1077 Logging (as of 3.2) provides improved support for these two additional
1089 >>> import logging
1090 >>> root = logging.getLogger()
1091 >>> root.setLevel(logging.DEBUG)
1092 >>> handler = logging.StreamHandler()
1093 >>> bf = logging.Formatter('{asctime} {name} {levelname:8s} {message}',
1097 >>> logger = logging.getLogger('foo.bar')
1102 >>> df = logging.Formatter('$asctime $name ${levelname} $message',
1111 Note that the formatting of logging messages for final output to logs is
1112 completely independent of how an individual logging message is constructed.
1119 Logging calls (``logger.debug()``, ``logger.info()`` etc.) only take
1120 positional parameters for the actual logging message itself, with keyword
1122 logging call (e.g. the ``exc_info`` keyword parameter to indicate that
1125 you cannot directly make logging calls using :meth:`str.format` or
1126 :class:`string.Template` syntax, because internally the logging package
1129 all logging calls which are out there in existing code will be using %-format
1134 arbitrary object as a message format string, and that the logging package will
1191 approach: the actual formatting happens not when you make the logging call, but
1201 import logging
1211 class StyleAdapter(logging.LoggerAdapter):
1220 logger = StyleAdapter(logging.getLogger(__name__))
1226 logging.basicConfig(level=logging.DEBUG)
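Alongside the adapter shown above, the same deferred-formatting idea can be expressed with a small wrapper object whose ``__str__`` applies :meth:`str.format` only when a handler actually emits the record. The ``BraceMessage`` name and ``__`` alias follow the cookbook's convention; the sample format string is illustrative::

    import logging

    class BraceMessage:
        def __init__(self, fmt, /, *args, **kwargs):
            self.fmt = fmt
            self.args = args
            self.kwargs = kwargs

        def __str__(self):
            # Formatting is deferred until the record is emitted and the
            # handler calls str() on the message object.
            return self.fmt.format(*self.args, **self.kwargs)

    __ = BraceMessage   # short alias so logging calls stay compact

    logging.basicConfig(level=logging.DEBUG, format='%(message)s')
    logger = logging.getLogger(__name__)
    logger.debug(__('Message with {0} {name}', 2, name='placeholders'))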
1233 .. currentmodule:: logging
1240 Every logging event is represented by a :class:`LogRecord` instance.
1248 logging an event. This invoked :class:`LogRecord` directly to create an
1260 :meth:`Logger.makeRecord`, and set it using :func:`~logging.setLoggerClass`
1277 logger = logging.getLogger(__name__)
1280 could also add the filter to a :class:`~logging.NullHandler` attached to their
1285 In Python 3.2 and later, :class:`~logging.LogRecord` creation is done through a
1287 :func:`~logging.setLogRecordFactory`, and interrogate with
1288 :func:`~logging.getLogRecordFactory`. The factory is invoked with the same
1289 signature as the :class:`~logging.LogRecord` constructor, as :class:`LogRecord`
1296 old_factory = logging.getLogRecordFactory()
1303 logging.setLogRecordFactory(record_factory)
1309 overhead to all logging operations, and the technique should only be used when
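Putting those fragments together, a minimal factory that decorates every record with an extra attribute might look like this (the attribute name and value are illustrative)::

    import logging

    old_factory = logging.getLogRecordFactory()

    def record_factory(*args, **kwargs):
        # Delegate to the previous factory, then attach an attribute that
        # formatters can reference as %(custom_attribute)x.
        record = old_factory(*args, **kwargs)
        record.custom_attribute = 0xdecafbad
        return record

    logging.setLogRecordFactory(record_factory)
    logging.basicConfig(format='%(custom_attribute)x %(message)s', level=logging.INFO)
    logging.info('factory in action')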
1370 return logging.makeLogRecord(msg)
1375 Module :mod:`logging`
1376 API reference for the logging module.
1378 Module :mod:`logging.config`
1379 Configuration API for the logging module.
1381 Module :mod:`logging.handlers`
1382 Useful handlers included with the logging module.
1384 :ref:`A basic logging tutorial <logging-basic-tutorial>`
1386 :ref:`A more advanced logging tutorial <logging-advanced-tutorial>`
1392 Below is an example of a logging configuration dictionary - it's taken from
1393 …he Django project <https://docs.djangoproject.com/en/stable/topics/logging/#configuring-logging>`_.
1396 LOGGING = {
1409 '()': 'project.logging.SpecialFilter',
1420 'class':'logging.StreamHandler',
1449 section <https://docs.djangoproject.com/en/stable/topics/logging/#configuring-logging>`_
1471 rh = logging.handlers.RotatingFileHandler(...)
1482 The following working example shows how logging can be used with multiprocessing
1490 see logging in the main process, how the workers log to a QueueHandler and how
1491 the listener implements a QueueListener and a more complex logging
1500 import logging
1501 import logging.config
1502 import logging.handlers
1510 A simple handler for logging events. It runs in the listener process and
1512 which then get dispatched, by the logging system, to the handlers
1518 logger = logging.getLogger()
1520 logger = logging.getLogger(record.name)
1524 # doing the logging to files and console
1533 This initialises logging according to the specified configuration,
1537 logging.config.dictConfig(config)
1538 listener = logging.handlers.QueueListener(q, MyHandler())
1547 logger = logging.getLogger('setup')
1558 This initialises logging according to the specified configuration,
1566 logging.config.dictConfig(config)
1567 levels = [logging.DEBUG, logging.INFO, logging.WARNING, logging.ERROR,
1568 logging.CRITICAL]
1578 logger = logging.getLogger('setup')
1582 logger = logging.getLogger(random.choice(loggers))
1593 'class': 'logging.StreamHandler',
1612 'class': 'logging.handlers.QueueHandler',
1622 # logging configuration is available to dispatch events to handlers however
1632 'class': 'logging.Formatter',
1636 'class': 'logging.Formatter',
1642 'class': 'logging.StreamHandler',
1647 'class': 'logging.FileHandler',
1653 'class': 'logging.FileHandler',
1659 'class': 'logging.FileHandler',
1676 # Log some initial events, just to show that logging in the parent works
1678 logging.config.dictConfig(config_initial)
1679 logger = logging.getLogger('setup')
1698 # Logging in the parent still works normally.
1718 :class:`~logging.handlers.SysLogHandler` to insert a BOM into the message, but
1729 #. Attach a :class:`~logging.Formatter` instance to your
1730 :class:`~logging.handlers.SysLogHandler` instance, with a format string
1748 :rfc:`5424`-compliant messages. If you don't, logging may not complain, but your
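A rough sketch of the arrangement, assuming a syslog daemon listening on the conventional UDP port 514 (the address and the literal text around the placeholders are illustrative)::

    import logging
    from logging.handlers import SysLogHandler

    handler = SysLogHandler(address=('localhost', 514))
    # U+FEFF is encoded as a UTF-8 BOM, separating an ASCII-only prefix from
    # the part of the message that may contain arbitrary Unicode.
    handler.setFormatter(logging.Formatter('ASCII section\ufeffUnicode section: %(message)s'))

    logger = logging.getLogger('syslog-demo')
    logger.addHandler(handler)
    logger.error('message with a non-ASCII character: \u00e9')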
1752 Implementing structured logging
1755 Although most logging messages are intended for reading by humans, and thus not
1759 straightforward to achieve using the logging package. There are a number of
1764 import logging
1776 logging.basicConfig(level=logging.INFO, format='%(message)s')
1777 logging.info(_('message 1', foo='bar', bar='baz', num=123, fnum=123.456))
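A sketch of the kind of wrapper class behind the ``_`` calls above, which serialises the keyword arguments as JSON when the record is formatted (the class name and the ``>>>`` separator follow the cookbook's example)::

    import json
    import logging

    class StructuredMessage:
        def __init__(self, message, /, **kwargs):
            self.message = message
            self.kwargs = kwargs

        def __str__(self):
            # A human-readable event name plus a machine-parseable payload.
            return '%s >>> %s' % (self.message, json.dumps(self.kwargs))

    _ = StructuredMessage   # optional shorthand, to keep the logging calls short

    logging.basicConfig(level=logging.INFO, format='%(message)s')
    logging.info(_('message 1', foo='bar', bar='baz', num=123, fnum=123.456))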
1794 import logging
1822 logging.basicConfig(level=logging.INFO, format='%(message)s')
1823 logging.info(_('message 1', set_value={1, 2, 3}, snowman='\u2603'))
1840 .. currentmodule:: logging.config
1845 There are times when you want to customize logging handlers in particular ways,
1857 return logging.FileHandler(filename, mode, encoding)
1859 You can then specify, in a logging configuration passed to :func:`dictConfig`,
1860 that a logging handler be created by calling this function::
1862 LOGGING = {
1896 import logging, logging.config, os, shutil
1903 return logging.FileHandler(filename, mode, encoding)
1905 LOGGING = {
1935 logging.config.dictConfig(LOGGING)
1936 logger = logging.getLogger('mylogger')
1974 :class:`~logging.FileHandler` - for example, one of the rotating file handlers,
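Stripped of the ownership handling, the core of that configuration is the ``'()'`` key, which tells :func:`dictConfig` to call a factory instead of instantiating a handler class; the remaining keys are passed to the factory as keyword arguments. A minimal sketch, with an illustrative file name and logger name::

    import logging
    import logging.config

    def owned_file_handler(filename, mode='a', encoding=None):
        # The full recipe also changes the file's ownership here; returning a
        # plain FileHandler is enough to show the factory mechanism.
        return logging.FileHandler(filename, mode, encoding)

    LOGGING = {
        'version': 1,
        'disable_existing_loggers': False,
        'handlers': {
            'file': {
                '()': owned_file_handler,   # call this instead of a handler class
                'filename': 'mylog.log',
                'mode': 'w',
            },
        },
        'root': {'handlers': ['file'], 'level': 'DEBUG'},
    }

    logging.config.dictConfig(LOGGING)
    logging.getLogger('mylogger').debug('written via the factory-made handler')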
1978 .. currentmodule:: logging
1985 In Python 3.2, the :class:`~logging.Formatter` gained a ``style`` keyword
1989 governs the formatting of logging messages for final output to logs, and is
1990 completely orthogonal to how an individual logging message is constructed.
1992 Logging calls (:meth:`~Logger.debug`, :meth:`~Logger.info` etc.) only take
1993 positional parameters for the actual logging message itself, with keyword
1994 parameters used only for determining options for how to handle the logging call
1998 logging calls using :meth:`str.format` or :class:`string.Template` syntax,
1999 because internally the logging package uses %-formatting to merge the format
2001 backward compatibility, since all logging calls which are out there in existing
2008 For logging to work interoperably between any third-party libraries and your
2010 individual logging call. This opens up a couple of ways in which alternative
2017 In Python 3.2, along with the :class:`~logging.Formatter` changes mentioned
2018 above, the logging package gained the ability to allow users to set their own
2037 :ref:`arbitrary-object-messages`) that when logging you can use an arbitrary
2038 object as a message format string, and that the logging package will call
2090 approach: the actual formatting happens not when you make the logging call, but
2100 .. currentmodule:: logging.config
2105 You *can* configure filters using :func:`~logging.config.dictConfig`, though it
2107 :class:`~logging.Filter` is the only filter class included in the standard
2109 base class), you will typically need to define your own :class:`~logging.Filter`
2110 subclass with an overridden :meth:`~logging.Filter.filter` method. To do this,
2114 :class:`~logging.Filter` instance). Here is a complete example::
2116 import logging
2117 import logging.config
2120 class MyFilter(logging.Filter):
2133 LOGGING = {
2143 'class': 'logging.StreamHandler',
2154 logging.config.dictConfig(LOGGING)
2155 logging.debug('hello')
2156 logging.debug('hello - noshow')
2173 in :ref:`logging-config-dict-externalobj`. For example, you could have used
2178 handlers and formatters. See :ref:`logging-config-dict-userdef` for more
2179 information on how logging supports using user-defined objects in its
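Pulled together, the pieces excerpted above amount to something like the following: the filter class is referenced via the ``'()'`` key in the ``filters`` section, and any remaining keys (here ``param``) are passed to its constructor. The filtering rule itself is illustrative::

    import logging
    import logging.config

    class MyFilter(logging.Filter):
        def __init__(self, param=None):
            self.param = param

        def filter(self, record):
            # Drop any record whose message contains the configured substring.
            return self.param is None or self.param not in record.msg

    LOGGING = {
        'version': 1,
        'filters': {
            'myfilter': {
                '()': MyFilter,      # dictConfig calls this factory
                'param': 'noshow',   # passed to MyFilter as a keyword argument
            },
        },
        'handlers': {
            'console': {
                'class': 'logging.StreamHandler',
                'filters': ['myfilter'],
            },
        },
        'root': {'level': 'DEBUG', 'handlers': ['console']},
    }

    logging.config.dictConfig(LOGGING)
    logging.debug('hello')            # emitted
    logging.debug('hello - noshow')   # filtered out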
2193 import logging
2195 class OneLineExceptionFormatter(logging.Formatter):
2210 fh = logging.FileHandler('output.txt', 'w')
2214 root = logging.getLogger()
2215 root.setLevel(logging.DEBUG)
2220 logging.info('Sample message')
2224 logging.exception('ZeroDivisionError: %s', e)
2242 Speaking logging messages
2245 There might be situations when it is desirable to have logging messages rendered
2258 import logging
2262 class TTSHandler(logging.Handler):
2274 root = logging.getLogger()
2277 root.setLevel(logging.DEBUG)
2280 logging.info('Hello')
2281 logging.debug('Goodbye')
2294 .. _buffered-logging:
2296 Buffering logging messages and outputting them conditionally
2301 start logging debug events in a function, and if the function completes without
2307 functions where you want logging to behave this way. It makes use of the
2308 :class:`logging.handlers.MemoryHandler`, which allows buffering of logged events
2317 all the logging levels, writing to ``sys.stderr`` to say what level it's about
2318 to log at, and then actually logging a message at that level. You can pass a
2323 conditional logging that's required. The decorator takes a logger as a parameter
2327 records buffered). These default to a :class:`~logging.StreamHandler` which
2328 writes to ``sys.stderr``, ``logging.ERROR`` and ``100`` respectively.
2332 import logging
2333 from logging.handlers import MemoryHandler
2336 logger = logging.getLogger(__name__)
2337 logger.addHandler(logging.NullHandler())
2341 target_handler = logging.StreamHandler()
2343 flush_level = logging.ERROR
2383 logger.setLevel(logging.DEBUG)
2423 As you can see, actual logging output only occurs when an event is logged whose
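The decorator above is built on plain :class:`MemoryHandler` wiring, which on its own looks roughly like this (the capacity and level values are illustrative)::

    import logging
    from logging.handlers import MemoryHandler

    target = logging.StreamHandler()                  # where buffered records end up
    memory = MemoryHandler(capacity=100,              # flush after 100 records ...
                           flushLevel=logging.ERROR,  # ... or on ERROR and above
                           target=target)

    logger = logging.getLogger('buffered')
    logger.setLevel(logging.DEBUG)
    logger.addHandler(memory)

    logger.debug('buffered, not yet written')
    logger.info('still buffered')
    logger.error('this flushes everything buffered so far')
    memory.close()                                    # flushes any remaining records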
2442 import logging
2445 class UTCFormatter(logging.Formatter):
2449 :class:`~logging.Formatter`. If you want to do that via configuration, you can
2450 use the :func:`~logging.config.dictConfig` API with an approach illustrated by
2453 import logging
2454 import logging.config
2457 class UTCFormatter(logging.Formatter):
2460 LOGGING = {
2474 'class': 'logging.StreamHandler',
2478 'class': 'logging.StreamHandler',
2488 logging.config.dictConfig(LOGGING)
2489 logging.warning('The local time is %s', time.asctime())
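The part of the formatter subclass that does the work is a single class attribute: :class:`Formatter` uses ``time.localtime`` by default, and pointing its ``converter`` at ``time.gmtime`` makes ``%(asctime)s`` render in UTC. A minimal sketch::

    import logging
    import time

    class UTCFormatter(logging.Formatter):
        converter = time.gmtime   # the default is time.localtime

    handler = logging.StreamHandler()
    handler.setFormatter(UTCFormatter('%(asctime)s %(message)s'))
    root = logging.getLogger()
    root.addHandler(handler)
    root.warning('this timestamp is rendered in UTC')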
2504 Using a context manager for selective logging
2507 There are times when it would be useful to temporarily change the logging
2509 manager is the most obvious way of saving and restoring the logging context.
2511 optionally change the logging level and add a logging handler purely in the
2514 import logging
2550 logger = logging.getLogger('foo')
2551 logger.addHandler(logging.StreamHandler())
2552 logger.setLevel(logging.INFO)
2555 with LoggingContext(logger, level=logging.DEBUG):
2558 h = logging.StreamHandler(sys.stdout)
2559 with LoggingContext(logger, level=logging.DEBUG, handler=h, close=True):
2606 logging filters temporarily. Note that the above code works in Python 2 as well
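A sketch of a context manager with the signature used above (``level``, ``handler``, ``close``), which saves and restores the logger's state on exit; the attribute names follow the cookbook's version::

    import logging

    class LoggingContext:
        def __init__(self, logger, level=None, handler=None, close=True):
            self.logger = logger
            self.level = level
            self.handler = handler
            self.close = close

        def __enter__(self):
            if self.level is not None:
                self.old_level = self.logger.level
                self.logger.setLevel(self.level)
            if self.handler:
                self.logger.addHandler(self.handler)

        def __exit__(self, et, ev, tb):
            # Restore whatever was changed, and optionally close the handler.
            if self.level is not None:
                self.logger.setLevel(self.old_level)
            if self.handler:
                self.logger.removeHandler(self.handler)
            if self.handler and self.close:
                self.handler.close()
            # implicit return of None => any exception propagates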
2617 * Use a logging level based on command-line arguments
2618 * Dispatch to multiple subcommands in separate files, all logging at the same
2627 command-line argument, defaulting to ``logging.INFO``. Here's one way that
2632 import logging
2664 logging.basicConfig(level=options.log_level,
2675 import logging
2677 logger = logging.getLogger(__name__)
2687 import logging
2689 logger = logging.getLogger(__name__)
2708 import logging
2710 logger = logging.getLogger(__name__)
2739 The first word is the logging level, and the second word is the module or
2742 If we change the logging level, then we can change the information sent to the
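The level-selection part of that recipe boils down to a few lines: accept a level *name* on the command line and hand it to :func:`logging.basicConfig`, which accepts level names as well as numeric constants (the option name and format string are illustrative)::

    import argparse
    import logging

    parser = argparse.ArgumentParser()
    parser.add_argument('--log-level', default='INFO',
                        choices=['DEBUG', 'INFO', 'WARNING', 'ERROR', 'CRITICAL'])
    options = parser.parse_args()

    # basicConfig accepts a level name such as 'DEBUG' as well as logging.DEBUG.
    logging.basicConfig(level=options.log_level,
                        format='%(levelname)s %(name)s %(message)s')
    logging.getLogger(__name__).info('effective level: %s', options.log_level)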
2772 A Qt GUI for logging
2784 can log to the GUI from both the UI itself (via a button for manual logging)
2785 as well as a worker thread doing work in the background (here, just logging
2799 import logging
2815 logger = logging.getLogger(__name__)
2823 signal = Signal(str, logging.LogRecord)
2836 class QtHandler(logging.Handler):
2856 # Used to generate random levels for logging.
2858 LEVELS = (logging.DEBUG, logging.INFO, logging.WARNING, logging.ERROR,
2859 logging.CRITICAL)
2900 logging.DEBUG: 'black',
2901 logging.INFO: 'blue',
2902 logging.WARNING: 'orange',
2903 logging.ERROR: 'red',
2904 logging.CRITICAL: 'purple',
2923 formatter = logging.Formatter(fs)
2974 @Slot(str, logging.LogRecord)
2996 logging.getLogger().setLevel(logging.DEBUG)
3038 * Logging output can be garbled because multiple threads or processes try to
3039 write to the same file. Although logging guards against concurrent use of the
3057 given logger instance by name using ``logging.getLogger(name)``, so passing
3067 Configuring logging by adding handlers, formatters and filters is the
3070 loggers other than a :class:`~logging.NullHandler` instance.