• Home
  • Line#
  • Scopes#
  • Navigate#
  • Raw
  • Download
1:mod:`multiprocessing` --- Process-based "threading" interface
2==============================================================
3
4.. module:: multiprocessing
5   :synopsis: Process-based "threading" interface.
6
7.. versionadded:: 2.6
8
9
10Introduction
11----------------------
12
13:mod:`multiprocessing` is a package that supports spawning processes using an
14API similar to the :mod:`threading` module.  The :mod:`multiprocessing` package
15offers both local and remote concurrency, effectively side-stepping the
16:term:`Global Interpreter Lock` by using subprocesses instead of threads.  Due
17to this, the :mod:`multiprocessing` module allows the programmer to fully
18leverage multiple processors on a given machine.  It runs on both Unix and
19Windows.
20
21The :mod:`multiprocessing` module also introduces APIs which do not have
22analogs in the :mod:`threading` module.  A prime example of this is the
23:class:`Pool` object which offers a convenient means of parallelizing the
24execution of a function across multiple input values, distributing the
25input data across processes (data parallelism).  The following example
26demonstrates the common practice of defining such functions in a module so
27that child processes can successfully import that module.  This basic example
28of data parallelism using :class:`Pool`, ::
29
30   from multiprocessing import Pool
31
32   def f(x):
33       return x*x
34
35   if __name__ == '__main__':
36       p = Pool(5)
37       print(p.map(f, [1, 2, 3]))
38
39will print to standard output ::
40
41   [1, 4, 9]
42
43
44The :class:`Process` class
45~~~~~~~~~~~~~~~~~~~~~~~~~~
46
47In :mod:`multiprocessing`, processes are spawned by creating a :class:`Process`
48object and then calling its :meth:`~Process.start` method.  :class:`Process`
49follows the API of :class:`threading.Thread`.  A trivial example of a
50multiprocess program is ::
51
52    from multiprocessing import Process
53
54    def f(name):
55        print 'hello', name
56
57    if __name__ == '__main__':
58        p = Process(target=f, args=('bob',))
59        p.start()
60        p.join()
61
62To show the individual process IDs involved, here is an expanded example::
63
64    from multiprocessing import Process
65    import os
66
67    def info(title):
68        print title
69        print 'module name:', __name__
70        if hasattr(os, 'getppid'):  # only available on Unix
71            print 'parent process:', os.getppid()
72        print 'process id:', os.getpid()
73
74    def f(name):
75        info('function f')
76        print 'hello', name
77
78    if __name__ == '__main__':
79        info('main line')
80        p = Process(target=f, args=('bob',))
81        p.start()
82        p.join()
83
84For an explanation of why (on Windows) the ``if __name__ == '__main__'`` part is
85necessary, see :ref:`multiprocessing-programming`.
86
87
88Exchanging objects between processes
89~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
90
91:mod:`multiprocessing` supports two types of communication channel between
92processes:
93
94**Queues**
95
96   The :class:`~multiprocessing.Queue` class is a near clone of :class:`Queue.Queue`.  For
97   example::
98
99      from multiprocessing import Process, Queue
100
101      def f(q):
102          q.put([42, None, 'hello'])
103
104      if __name__ == '__main__':
105          q = Queue()
106          p = Process(target=f, args=(q,))
107          p.start()
108          print q.get()    # prints "[42, None, 'hello']"
109          p.join()
110
111   Queues are thread and process safe.
112
113**Pipes**
114
115   The :func:`Pipe` function returns a pair of connection objects connected by a
116   pipe which by default is duplex (two-way).  For example::
117
118      from multiprocessing import Process, Pipe
119
120      def f(conn):
121          conn.send([42, None, 'hello'])
122          conn.close()
123
124      if __name__ == '__main__':
125          parent_conn, child_conn = Pipe()
126          p = Process(target=f, args=(child_conn,))
127          p.start()
128          print parent_conn.recv()   # prints "[42, None, 'hello']"
129          p.join()
130
131   The two connection objects returned by :func:`Pipe` represent the two ends of
132   the pipe.  Each connection object has :meth:`~Connection.send` and
133   :meth:`~Connection.recv` methods (among others).  Note that data in a pipe
134   may become corrupted if two processes (or threads) try to read from or write
135   to the *same* end of the pipe at the same time.  Of course there is no risk
136   of corruption from processes using different ends of the pipe at the same
137   time.
138
139
140Synchronization between processes
141~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
142
143:mod:`multiprocessing` contains equivalents of all the synchronization
144primitives from :mod:`threading`.  For instance one can use a lock to ensure
145that only one process prints to standard output at a time::
146
147   from multiprocessing import Process, Lock
148
149   def f(l, i):
150       l.acquire()
151       print 'hello world', i
152       l.release()
153
154   if __name__ == '__main__':
155       lock = Lock()
156
157       for num in range(10):
158           Process(target=f, args=(lock, num)).start()
159
160Without using the lock output from the different processes is liable to get all
161mixed up.
162
163
164Sharing state between processes
165~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
166
167As mentioned above, when doing concurrent programming it is usually best to
168avoid using shared state as far as possible.  This is particularly true when
169using multiple processes.
170
171However, if you really do need to use some shared data then
172:mod:`multiprocessing` provides a couple of ways of doing so.
173
174**Shared memory**
175
176   Data can be stored in a shared memory map using :class:`Value` or
177   :class:`Array`.  For example, the following code ::
178
179      from multiprocessing import Process, Value, Array
180
181      def f(n, a):
182          n.value = 3.1415927
183          for i in range(len(a)):
184              a[i] = -a[i]
185
186      if __name__ == '__main__':
187          num = Value('d', 0.0)
188          arr = Array('i', range(10))
189
190          p = Process(target=f, args=(num, arr))
191          p.start()
192          p.join()
193
194          print num.value
195          print arr[:]
196
197   will print ::
198
199      3.1415927
200      [0, -1, -2, -3, -4, -5, -6, -7, -8, -9]
201
202   The ``'d'`` and ``'i'`` arguments used when creating ``num`` and ``arr`` are
203   typecodes of the kind used by the :mod:`array` module: ``'d'`` indicates a
204   double precision float and ``'i'`` indicates a signed integer.  These shared
205   objects will be process and thread-safe.
206
207   For more flexibility in using shared memory one can use the
208   :mod:`multiprocessing.sharedctypes` module which supports the creation of
209   arbitrary ctypes objects allocated from shared memory.
210
211**Server process**
212
213   A manager object returned by :func:`Manager` controls a server process which
214   holds Python objects and allows other processes to manipulate them using
215   proxies.
216
217   A manager returned by :func:`Manager` will support types :class:`list`,
218   :class:`dict`, :class:`~managers.Namespace`, :class:`Lock`, :class:`RLock`,
219   :class:`Semaphore`, :class:`BoundedSemaphore`, :class:`Condition`,
220   :class:`Event`, :class:`~multiprocessing.Queue`, :class:`Value` and :class:`Array`.  For
221   example, ::
222
223      from multiprocessing import Process, Manager
224
225      def f(d, l):
226          d[1] = '1'
227          d['2'] = 2
228          d[0.25] = None
229          l.reverse()
230
231      if __name__ == '__main__':
232          manager = Manager()
233
234          d = manager.dict()
235          l = manager.list(range(10))
236
237          p = Process(target=f, args=(d, l))
238          p.start()
239          p.join()
240
241          print d
242          print l
243
244   will print ::
245
246       {0.25: None, 1: '1', '2': 2}
247       [9, 8, 7, 6, 5, 4, 3, 2, 1, 0]
248
249   Server process managers are more flexible than using shared memory objects
250   because they can be made to support arbitrary object types.  Also, a single
251   manager can be shared by processes on different computers over a network.
252   They are, however, slower than using shared memory.
253
254
255Using a pool of workers
256~~~~~~~~~~~~~~~~~~~~~~~
257
258The :class:`~multiprocessing.pool.Pool` class represents a pool of worker
259processes.  It has methods which allows tasks to be offloaded to the worker
260processes in a few different ways.
261
262For example::
263
264   from multiprocessing import Pool, TimeoutError
265   import time
266   import os
267
268   def f(x):
269       return x*x
270
271   if __name__ == '__main__':
272       pool = Pool(processes=4)              # start 4 worker processes
273
274       # print "[0, 1, 4,..., 81]"
275       print pool.map(f, range(10))
276
277       # print same numbers in arbitrary order
278       for i in pool.imap_unordered(f, range(10)):
279           print i
280
281       # evaluate "f(20)" asynchronously
282       res = pool.apply_async(f, (20,))      # runs in *only* one process
283       print res.get(timeout=1)              # prints "400"
284
285       # evaluate "os.getpid()" asynchronously
286       res = pool.apply_async(os.getpid, ()) # runs in *only* one process
287       print res.get(timeout=1)              # prints the PID of that process
288
289       # launching multiple evaluations asynchronously *may* use more processes
290       multiple_results = [pool.apply_async(os.getpid, ()) for i in range(4)]
291       print [res.get(timeout=1) for res in multiple_results]
292
293       # make a single worker sleep for 10 secs
294       res = pool.apply_async(time.sleep, (10,))
295       try:
296           print res.get(timeout=1)
297       except TimeoutError:
298           print "We lacked patience and got a multiprocessing.TimeoutError"
299
300Note that the methods of a pool should only ever be used by the
301process which created it.
302
303.. note::
304
305   Functionality within this package requires that the ``__main__`` module be
306   importable by the children. This is covered in :ref:`multiprocessing-programming`
307   however it is worth pointing out here. This means that some examples, such
308   as the :class:`Pool` examples will not work in the interactive interpreter.
309   For example::
310
311      >>> from multiprocessing import Pool
312      >>> p = Pool(5)
313      >>> def f(x):
314      ...     return x*x
315      ...
316      >>> p.map(f, [1,2,3])
317      Process PoolWorker-1:
318      Process PoolWorker-2:
319      Process PoolWorker-3:
320      Traceback (most recent call last):
321      Traceback (most recent call last):
322      Traceback (most recent call last):
323      AttributeError: 'module' object has no attribute 'f'
324      AttributeError: 'module' object has no attribute 'f'
325      AttributeError: 'module' object has no attribute 'f'
326
327   (If you try this it will actually output three full tracebacks
328   interleaved in a semi-random fashion, and then you may have to
329   stop the master process somehow.)
330
331
332Reference
333---------
334
335The :mod:`multiprocessing` package mostly replicates the API of the
336:mod:`threading` module.
337
338
339:class:`Process` and exceptions
340~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
341
342.. class:: Process(group=None, target=None, name=None, args=(), kwargs={})
343
344   Process objects represent activity that is run in a separate process. The
345   :class:`Process` class has equivalents of all the methods of
346   :class:`threading.Thread`.
347
348   The constructor should always be called with keyword arguments. *group*
349   should always be ``None``; it exists solely for compatibility with
350   :class:`threading.Thread`.  *target* is the callable object to be invoked by
351   the :meth:`run()` method.  It defaults to ``None``, meaning nothing is
352   called. *name* is the process name.  By default, a unique name is constructed
353   of the form 'Process-N\ :sub:`1`:N\ :sub:`2`:...:N\ :sub:`k`' where N\
354   :sub:`1`,N\ :sub:`2`,...,N\ :sub:`k` is a sequence of integers whose length
355   is determined by the *generation* of the process.  *args* is the argument
356   tuple for the target invocation.  *kwargs* is a dictionary of keyword
357   arguments for the target invocation.  By default, no arguments are passed to
358   *target*.
359
360   If a subclass overrides the constructor, it must make sure it invokes the
361   base class constructor (:meth:`Process.__init__`) before doing anything else
362   to the process.
363
364   .. method:: run()
365
366      Method representing the process's activity.
367
368      You may override this method in a subclass.  The standard :meth:`run`
369      method invokes the callable object passed to the object's constructor as
370      the target argument, if any, with sequential and keyword arguments taken
371      from the *args* and *kwargs* arguments, respectively.
372
373   .. method:: start()
374
375      Start the process's activity.
376
377      This must be called at most once per process object.  It arranges for the
378      object's :meth:`run` method to be invoked in a separate process.
379
380   .. method:: join([timeout])
381
382      Block the calling thread until the process whose :meth:`join` method is
383      called terminates or until the optional timeout occurs.
384
385      If *timeout* is ``None`` then there is no timeout.
386
387      A process can be joined many times.
388
389      A process cannot join itself because this would cause a deadlock.  It is
390      an error to attempt to join a process before it has been started.
391
392   .. attribute:: name
393
394      The process's name.
395
396      The name is a string used for identification purposes only.  It has no
397      semantics.  Multiple processes may be given the same name.  The initial
398      name is set by the constructor.
399
400   .. method:: is_alive
401
402      Return whether the process is alive.
403
404      Roughly, a process object is alive from the moment the :meth:`start`
405      method returns until the child process terminates.
406
407   .. attribute:: daemon
408
409      The process's daemon flag, a Boolean value.  This must be set before
410      :meth:`start` is called.
411
412      The initial value is inherited from the creating process.
413
414      When a process exits, it attempts to terminate all of its daemonic child
415      processes.
416
417      Note that a daemonic process is not allowed to create child processes.
418      Otherwise a daemonic process would leave its children orphaned if it gets
419      terminated when its parent process exits. Additionally, these are **not**
420      Unix daemons or services, they are normal processes that will be
421      terminated (and not joined) if non-daemonic processes have exited.
422
423   In addition to the  :class:`threading.Thread` API, :class:`Process` objects
424   also support the following attributes and methods:
425
426   .. attribute:: pid
427
428      Return the process ID.  Before the process is spawned, this will be
429      ``None``.
430
431   .. attribute:: exitcode
432
433      The child's exit code.  This will be ``None`` if the process has not yet
434      terminated.  A negative value *-N* indicates that the child was terminated
435      by signal *N*.
436
437   .. attribute:: authkey
438
439      The process's authentication key (a byte string).
440
441      When :mod:`multiprocessing` is initialized the main process is assigned a
442      random string using :func:`os.urandom`.
443
444      When a :class:`Process` object is created, it will inherit the
445      authentication key of its parent process, although this may be changed by
446      setting :attr:`authkey` to another byte string.
447
448      See :ref:`multiprocessing-auth-keys`.
449
450   .. method:: terminate()
451
452      Terminate the process.  On Unix this is done using the ``SIGTERM`` signal;
453      on Windows :c:func:`TerminateProcess` is used.  Note that exit handlers and
454      finally clauses, etc., will not be executed.
455
456      Note that descendant processes of the process will *not* be terminated --
457      they will simply become orphaned.
458
459      .. warning::
460
461         If this method is used when the associated process is using a pipe or
462         queue then the pipe or queue is liable to become corrupted and may
463         become unusable by other process.  Similarly, if the process has
464         acquired a lock or semaphore etc. then terminating it is liable to
465         cause other processes to deadlock.
466
467   Note that the :meth:`start`, :meth:`join`, :meth:`is_alive`,
468   :meth:`terminate` and :attr:`exitcode` methods should only be called by
469   the process that created the process object.
470
471   Example usage of some of the methods of :class:`Process`:
472
473   .. doctest::
474
475       >>> import multiprocessing, time, signal
476       >>> p = multiprocessing.Process(target=time.sleep, args=(1000,))
477       >>> print p, p.is_alive()
478       <Process(Process-1, initial)> False
479       >>> p.start()
480       >>> print p, p.is_alive()
481       <Process(Process-1, started)> True
482       >>> p.terminate()
483       >>> time.sleep(0.1)
484       >>> print p, p.is_alive()
485       <Process(Process-1, stopped[SIGTERM])> False
486       >>> p.exitcode == -signal.SIGTERM
487       True
488
489
490.. exception:: BufferTooShort
491
492   Exception raised by :meth:`Connection.recv_bytes_into()` when the supplied
493   buffer object is too small for the message read.
494
495   If ``e`` is an instance of :exc:`BufferTooShort` then ``e.args[0]`` will give
496   the message as a byte string.
497
498
499Pipes and Queues
500~~~~~~~~~~~~~~~~
501
502When using multiple processes, one generally uses message passing for
503communication between processes and avoids having to use any synchronization
504primitives like locks.
505
506For passing messages one can use :func:`Pipe` (for a connection between two
507processes) or a queue (which allows multiple producers and consumers).
508
509The :class:`~multiprocessing.Queue`, :class:`multiprocessing.queues.SimpleQueue` and :class:`JoinableQueue` types are multi-producer,
510multi-consumer FIFO queues modelled on the :class:`Queue.Queue` class in the
511standard library.  They differ in that :class:`~multiprocessing.Queue` lacks the
512:meth:`~Queue.Queue.task_done` and :meth:`~Queue.Queue.join` methods introduced
513into Python 2.5's :class:`Queue.Queue` class.
514
515If you use :class:`JoinableQueue` then you **must** call
516:meth:`JoinableQueue.task_done` for each task removed from the queue or else the
517semaphore used to count the number of unfinished tasks may eventually overflow,
518raising an exception.
519
520Note that one can also create a shared queue by using a manager object -- see
521:ref:`multiprocessing-managers`.
522
523.. note::
524
525   :mod:`multiprocessing` uses the usual :exc:`Queue.Empty` and
526   :exc:`Queue.Full` exceptions to signal a timeout.  They are not available in
527   the :mod:`multiprocessing` namespace so you need to import them from
528   :mod:`Queue`.
529
530.. note::
531
532   When an object is put on a queue, the object is pickled and a
533   background thread later flushes the pickled data to an underlying
534   pipe.  This has some consequences which are a little surprising,
535   but should not cause any practical difficulties -- if they really
536   bother you then you can instead use a queue created with a
537   :ref:`manager <multiprocessing-managers>`.
538
539   (1) After putting an object on an empty queue there may be an
540       infinitesimal delay before the queue's :meth:`~Queue.empty`
541       method returns :const:`False` and :meth:`~Queue.get_nowait` can
542       return without raising :exc:`Queue.Empty`.
543
544   (2) If multiple processes are enqueuing objects, it is possible for
545       the objects to be received at the other end out-of-order.
546       However, objects enqueued by the same process will always be in
547       the expected order with respect to each other.
548
549.. warning::
550
551   If a process is killed using :meth:`Process.terminate` or :func:`os.kill`
552   while it is trying to use a :class:`~multiprocessing.Queue`, then the data in the queue is
553   likely to become corrupted.  This may cause any other process to get an
554   exception when it tries to use the queue later on.
555
556.. warning::
557
558   As mentioned above, if a child process has put items on a queue (and it has
559   not used :meth:`JoinableQueue.cancel_join_thread
560   <multiprocessing.Queue.cancel_join_thread>`), then that process will
561   not terminate until all buffered items have been flushed to the pipe.
562
563   This means that if you try joining that process you may get a deadlock unless
564   you are sure that all items which have been put on the queue have been
565   consumed.  Similarly, if the child process is non-daemonic then the parent
566   process may hang on exit when it tries to join all its non-daemonic children.
567
568   Note that a queue created using a manager does not have this issue.  See
569   :ref:`multiprocessing-programming`.
570
571For an example of the usage of queues for interprocess communication see
572:ref:`multiprocessing-examples`.
573
574
575.. function:: Pipe([duplex])
576
577   Returns a pair ``(conn1, conn2)`` of :class:`Connection` objects representing
578   the ends of a pipe.
579
580   If *duplex* is ``True`` (the default) then the pipe is bidirectional.  If
581   *duplex* is ``False`` then the pipe is unidirectional: ``conn1`` can only be
582   used for receiving messages and ``conn2`` can only be used for sending
583   messages.
584
585
586.. class:: Queue([maxsize])
587
588   Returns a process shared queue implemented using a pipe and a few
589   locks/semaphores.  When a process first puts an item on the queue a feeder
590   thread is started which transfers objects from a buffer into the pipe.
591
592   The usual :exc:`Queue.Empty` and :exc:`Queue.Full` exceptions from the
593   standard library's :mod:`Queue` module are raised to signal timeouts.
594
595   :class:`~multiprocessing.Queue` implements all the methods of :class:`Queue.Queue` except for
596   :meth:`~Queue.Queue.task_done` and :meth:`~Queue.Queue.join`.
597
598   .. method:: qsize()
599
600      Return the approximate size of the queue.  Because of
601      multithreading/multiprocessing semantics, this number is not reliable.
602
603      Note that this may raise :exc:`NotImplementedError` on Unix platforms like
604      Mac OS X where ``sem_getvalue()`` is not implemented.
605
606   .. method:: empty()
607
608      Return ``True`` if the queue is empty, ``False`` otherwise.  Because of
609      multithreading/multiprocessing semantics, this is not reliable.
610
611   .. method:: full()
612
613      Return ``True`` if the queue is full, ``False`` otherwise.  Because of
614      multithreading/multiprocessing semantics, this is not reliable.
615
616   .. method:: put(obj[, block[, timeout]])
617
618      Put obj into the queue.  If the optional argument *block* is ``True``
619      (the default) and *timeout* is ``None`` (the default), block if necessary until
620      a free slot is available.  If *timeout* is a positive number, it blocks at
621      most *timeout* seconds and raises the :exc:`Queue.Full` exception if no
622      free slot was available within that time.  Otherwise (*block* is
623      ``False``), put an item on the queue if a free slot is immediately
624      available, else raise the :exc:`Queue.Full` exception (*timeout* is
625      ignored in that case).
626
627   .. method:: put_nowait(obj)
628
629      Equivalent to ``put(obj, False)``.
630
631   .. method:: get([block[, timeout]])
632
633      Remove and return an item from the queue.  If optional args *block* is
634      ``True`` (the default) and *timeout* is ``None`` (the default), block if
635      necessary until an item is available.  If *timeout* is a positive number,
636      it blocks at most *timeout* seconds and raises the :exc:`Queue.Empty`
637      exception if no item was available within that time.  Otherwise (block is
638      ``False``), return an item if one is immediately available, else raise the
639      :exc:`Queue.Empty` exception (*timeout* is ignored in that case).
640
641   .. method:: get_nowait()
642
643      Equivalent to ``get(False)``.
644
645   :class:`~multiprocessing.Queue` has a few additional methods not found in
646   :class:`Queue.Queue`.  These methods are usually unnecessary for most
647   code:
648
649   .. method:: close()
650
651      Indicate that no more data will be put on this queue by the current
652      process.  The background thread will quit once it has flushed all buffered
653      data to the pipe.  This is called automatically when the queue is garbage
654      collected.
655
656   .. method:: join_thread()
657
658      Join the background thread.  This can only be used after :meth:`close` has
659      been called.  It blocks until the background thread exits, ensuring that
660      all data in the buffer has been flushed to the pipe.
661
662      By default if a process is not the creator of the queue then on exit it
663      will attempt to join the queue's background thread.  The process can call
664      :meth:`cancel_join_thread` to make :meth:`join_thread` do nothing.
665
666   .. method:: cancel_join_thread()
667
668      Prevent :meth:`join_thread` from blocking.  In particular, this prevents
669      the background thread from being joined automatically when the process
670      exits -- see :meth:`join_thread`.
671
672      A better name for this method might be
673      ``allow_exit_without_flush()``.  It is likely to cause enqueued
674      data to lost, and you almost certainly will not need to use it.
675      It is really only there if you need the current process to exit
676      immediately without waiting to flush enqueued data to the
677      underlying pipe, and you don't care about lost data.
678
679   .. note::
680
681      This class's functionality requires a functioning shared semaphore
682      implementation on the host operating system. Without one, the
683      functionality in this class will be disabled, and attempts to
684      instantiate a :class:`Queue` will result in an :exc:`ImportError`. See
685      :issue:`3770` for additional information.  The same holds true for any
686      of the specialized queue types listed below.
687
688
689.. class:: multiprocessing.queues.SimpleQueue()
690
691   It is a simplified :class:`~multiprocessing.Queue` type, very close to a locked :class:`Pipe`.
692
693   .. method:: empty()
694
695      Return ``True`` if the queue is empty, ``False`` otherwise.
696
697   .. method:: get()
698
699      Remove and return an item from the queue.
700
701   .. method:: put(item)
702
703      Put *item* into the queue.
704
705
706.. class:: JoinableQueue([maxsize])
707
708   :class:`JoinableQueue`, a :class:`~multiprocessing.Queue` subclass, is a queue which
709   additionally has :meth:`task_done` and :meth:`join` methods.
710
711   .. method:: task_done()
712
713      Indicate that a formerly enqueued task is complete. Used by queue consumer
714      threads.  For each :meth:`~Queue.get` used to fetch a task, a subsequent
715      call to :meth:`task_done` tells the queue that the processing on the task
716      is complete.
717
718      If a :meth:`~Queue.Queue.join` is currently blocking, it will resume when all
719      items have been processed (meaning that a :meth:`task_done` call was
720      received for every item that had been :meth:`~Queue.put` into the queue).
721
722      Raises a :exc:`ValueError` if called more times than there were items
723      placed in the queue.
724
725
726   .. method:: join()
727
728      Block until all items in the queue have been gotten and processed.
729
730      The count of unfinished tasks goes up whenever an item is added to the
731      queue.  The count goes down whenever a consumer thread calls
732      :meth:`task_done` to indicate that the item was retrieved and all work on
733      it is complete.  When the count of unfinished tasks drops to zero,
734      :meth:`~Queue.Queue.join` unblocks.
735
736
737Miscellaneous
738~~~~~~~~~~~~~
739
740.. function:: active_children()
741
742   Return list of all live children of the current process.
743
744   Calling this has the side effect of "joining" any processes which have
745   already finished.
746
747.. function:: cpu_count()
748
749   Return the number of CPUs in the system.  May raise
750   :exc:`NotImplementedError`.
751
752.. function:: current_process()
753
754   Return the :class:`Process` object corresponding to the current process.
755
756   An analogue of :func:`threading.current_thread`.
757
758.. function:: freeze_support()
759
760   Add support for when a program which uses :mod:`multiprocessing` has been
761   frozen to produce a Windows executable.  (Has been tested with **py2exe**,
762   **PyInstaller** and **cx_Freeze**.)
763
764   One needs to call this function straight after the ``if __name__ ==
765   '__main__'`` line of the main module.  For example::
766
767      from multiprocessing import Process, freeze_support
768
769      def f():
770          print 'hello world!'
771
772      if __name__ == '__main__':
773          freeze_support()
774          Process(target=f).start()
775
776   If the ``freeze_support()`` line is omitted then trying to run the frozen
777   executable will raise :exc:`RuntimeError`.
778
779   Calling ``freeze_support()`` has no effect when invoked on any operating
780   system other than Windows.  In addition, if the module is being run
781   normally by the Python interpreter on Windows (the program has not been
782   frozen), then ``freeze_support()`` has no effect.
783
784.. function:: set_executable()
785
786   Sets the path of the Python interpreter to use when starting a child process.
787   (By default :data:`sys.executable` is used).  Embedders will probably need to
788   do some thing like ::
789
790      set_executable(os.path.join(sys.exec_prefix, 'pythonw.exe'))
791
792   before they can create child processes.  (Windows only)
793
794
795.. note::
796
797   :mod:`multiprocessing` contains no analogues of
798   :func:`threading.active_count`, :func:`threading.enumerate`,
799   :func:`threading.settrace`, :func:`threading.setprofile`,
800   :class:`threading.Timer`, or :class:`threading.local`.
801
802
803Connection Objects
804~~~~~~~~~~~~~~~~~~
805
806Connection objects allow the sending and receiving of picklable objects or
807strings.  They can be thought of as message oriented connected sockets.
808
809Connection objects are usually created using :func:`Pipe` -- see also
810:ref:`multiprocessing-listeners-clients`.
811
812.. class:: Connection
813
814   .. method:: send(obj)
815
816      Send an object to the other end of the connection which should be read
817      using :meth:`recv`.
818
819      The object must be picklable.  Very large pickles (approximately 32 MB+,
820      though it depends on the OS) may raise a :exc:`ValueError` exception.
821
822   .. method:: recv()
823
824      Return an object sent from the other end of the connection using
825      :meth:`send`.  Blocks until there its something to receive.  Raises
826      :exc:`EOFError` if there is nothing left to receive
827      and the other end was closed.
828
829   .. method:: fileno()
830
831      Return the file descriptor or handle used by the connection.
832
833   .. method:: close()
834
835      Close the connection.
836
837      This is called automatically when the connection is garbage collected.
838
839   .. method:: poll([timeout])
840
841      Return whether there is any data available to be read.
842
843      If *timeout* is not specified then it will return immediately.  If
844      *timeout* is a number then this specifies the maximum time in seconds to
845      block.  If *timeout* is ``None`` then an infinite timeout is used.
846
847   .. method:: send_bytes(buffer[, offset[, size]])
848
849      Send byte data from an object supporting the buffer interface as a
850      complete message.
851
852      If *offset* is given then data is read from that position in *buffer*.  If
853      *size* is given then that many bytes will be read from buffer.  Very large
854      buffers (approximately 32 MB+, though it depends on the OS) may raise a
855      :exc:`ValueError` exception
856
857   .. method:: recv_bytes([maxlength])
858
859      Return a complete message of byte data sent from the other end of the
860      connection as a string.  Blocks until there is something to receive.
861      Raises :exc:`EOFError` if there is nothing left
862      to receive and the other end has closed.
863
864      If *maxlength* is specified and the message is longer than *maxlength*
865      then :exc:`IOError` is raised and the connection will no longer be
866      readable.
867
868   .. method:: recv_bytes_into(buffer[, offset])
869
870      Read into *buffer* a complete message of byte data sent from the other end
871      of the connection and return the number of bytes in the message.  Blocks
872      until there is something to receive.  Raises
873      :exc:`EOFError` if there is nothing left to receive and the other end was
874      closed.
875
876      *buffer* must be an object satisfying the writable buffer interface.  If
877      *offset* is given then the message will be written into the buffer from
878      that position.  Offset must be a non-negative integer less than the
879      length of *buffer* (in bytes).
880
881      If the buffer is too short then a :exc:`BufferTooShort` exception is
882      raised and the complete message is available as ``e.args[0]`` where ``e``
883      is the exception instance.
884
885
886For example:
887
888.. doctest::
889
890    >>> from multiprocessing import Pipe
891    >>> a, b = Pipe()
892    >>> a.send([1, 'hello', None])
893    >>> b.recv()
894    [1, 'hello', None]
895    >>> b.send_bytes('thank you')
896    >>> a.recv_bytes()
897    'thank you'
898    >>> import array
899    >>> arr1 = array.array('i', range(5))
900    >>> arr2 = array.array('i', [0] * 10)
901    >>> a.send_bytes(arr1)
902    >>> count = b.recv_bytes_into(arr2)
903    >>> assert count == len(arr1) * arr1.itemsize
904    >>> arr2
905    array('i', [0, 1, 2, 3, 4, 0, 0, 0, 0, 0])
906
907
908.. warning::
909
910    The :meth:`Connection.recv` method automatically unpickles the data it
911    receives, which can be a security risk unless you can trust the process
912    which sent the message.
913
914    Therefore, unless the connection object was produced using :func:`Pipe` you
915    should only use the :meth:`~Connection.recv` and :meth:`~Connection.send`
916    methods after performing some sort of authentication.  See
917    :ref:`multiprocessing-auth-keys`.
918
919.. warning::
920
921    If a process is killed while it is trying to read or write to a pipe then
922    the data in the pipe is likely to become corrupted, because it may become
923    impossible to be sure where the message boundaries lie.
924
925
926Synchronization primitives
927~~~~~~~~~~~~~~~~~~~~~~~~~~
928
929Generally synchronization primitives are not as necessary in a multiprocess
930program as they are in a multithreaded program.  See the documentation for
931:mod:`threading` module.
932
933Note that one can also create synchronization primitives by using a manager
934object -- see :ref:`multiprocessing-managers`.
935
936.. class:: BoundedSemaphore([value])
937
938   A bounded semaphore object: a close analog of
939   :class:`threading.BoundedSemaphore`.
940
941   A solitary difference from its close analog exists: its ``acquire`` method's
942   first argument is named *block* and it supports an optional second argument
943   *timeout*, as is consistent with :meth:`Lock.acquire`.
944
945   .. note::
946      On Mac OS X, this is indistinguishable from :class:`Semaphore` because
947      ``sem_getvalue()`` is not implemented on that platform.
948
949.. class:: Condition([lock])
950
951   A condition variable: a clone of :class:`threading.Condition`.
952
953   If *lock* is specified then it should be a :class:`Lock` or :class:`RLock`
954   object from :mod:`multiprocessing`.
955
956.. class:: Event()
957
958   A clone of :class:`threading.Event`.
959   This method returns the state of the internal semaphore on exit, so it
960   will always return ``True`` except if a timeout is given and the operation
961   times out.
962
963   .. versionchanged:: 2.7
964      Previously, the method always returned ``None``.
965
966
967.. class:: Lock()
968
969   A non-recursive lock object: a close analog of :class:`threading.Lock`.
970   Once a process or thread has acquired a lock, subsequent attempts to
971   acquire it from any process or thread will block until it is released;
972   any process or thread may release it.  The concepts and behaviors of
973   :class:`threading.Lock` as it applies to threads are replicated here in
974   :class:`multiprocessing.Lock` as it applies to either processes or threads,
975   except as noted.
976
977   Note that :class:`Lock` is actually a factory function which returns an
978   instance of ``multiprocessing.synchronize.Lock`` initialized with a
979   default context.
980
981   :class:`Lock` supports the :term:`context manager` protocol and thus may be
982   used in :keyword:`with` statements.
983
984   .. method:: acquire(block=True, timeout=None)
985
986      Acquire a lock, blocking or non-blocking.
987
988      With the *block* argument set to ``True`` (the default), the method call
989      will block until the lock is in an unlocked state, then set it to locked
990      and return ``True``.  Note that the name of this first argument differs
991      from that in :meth:`threading.Lock.acquire`.
992
993      With the *block* argument set to ``False``, the method call does not
994      block.  If the lock is currently in a locked state, return ``False``;
995      otherwise set the lock to a locked state and return ``True``.
996
997      When invoked with a positive, floating-point value for *timeout*, block
998      for at most the number of seconds specified by *timeout* as long as
999      the lock can not be acquired.  Invocations with a negative value for
1000      *timeout* are equivalent to a *timeout* of zero.  Invocations with a
1001      *timeout* value of ``None`` (the default) set the timeout period to
1002      infinite.  The *timeout* argument has no practical implications if the
1003      *block* argument is set to ``False`` and is thus ignored.  Returns
1004      ``True`` if the lock has been acquired or ``False`` if the timeout period
1005      has elapsed.  Note that the *timeout* argument does not exist in this
1006      method's analog, :meth:`threading.Lock.acquire`.
1007
1008   .. method:: release()
1009
1010      Release a lock.  This can be called from any process or thread, not only
1011      the process or thread which originally acquired the lock.
1012
1013      Behavior is the same as in :meth:`threading.Lock.release` except that
1014      when invoked on an unlocked lock, a :exc:`ValueError` is raised.
1015
1016
1017.. class:: RLock()
1018
1019   A recursive lock object: a close analog of :class:`threading.RLock`.  A
1020   recursive lock must be released by the process or thread that acquired it.
1021   Once a process or thread has acquired a recursive lock, the same process
1022   or thread may acquire it again without blocking; that process or thread
1023   must release it once for each time it has been acquired.
1024
1025   Note that :class:`RLock` is actually a factory function which returns an
1026   instance of ``multiprocessing.synchronize.RLock`` initialized with a
1027   default context.
1028
1029   :class:`RLock` supports the :term:`context manager` protocol and thus may be
1030   used in :keyword:`with` statements.
1031
1032
1033   .. method:: acquire(block=True, timeout=None)
1034
1035      Acquire a lock, blocking or non-blocking.
1036
1037      When invoked with the *block* argument set to ``True``, block until the
1038      lock is in an unlocked state (not owned by any process or thread) unless
1039      the lock is already owned by the current process or thread.  The current
1040      process or thread then takes ownership of the lock (if it does not
1041      already have ownership) and the recursion level inside the lock increments
1042      by one, resulting in a return value of ``True``.  Note that there are
1043      several differences in this first argument's behavior compared to the
1044      implementation of :meth:`threading.RLock.acquire`, starting with the name
1045      of the argument itself.
1046
1047      When invoked with the *block* argument set to ``False``, do not block.
1048      If the lock has already been acquired (and thus is owned) by another
1049      process or thread, the current process or thread does not take ownership
1050      and the recursion level within the lock is not changed, resulting in
1051      a return value of ``False``.  If the lock is in an unlocked state, the
1052      current process or thread takes ownership and the recursion level is
1053      incremented, resulting in a return value of ``True``.
1054
1055      Use and behaviors of the *timeout* argument are the same as in
1056      :meth:`Lock.acquire`.  Note that the *timeout* argument does
1057      not exist in this method's analog, :meth:`threading.RLock.acquire`.
1058
1059
1060   .. method:: release()
1061
1062      Release a lock, decrementing the recursion level.  If after the
1063      decrement the recursion level is zero, reset the lock to unlocked (not
1064      owned by any process or thread) and if any other processes or threads
1065      are blocked waiting for the lock to become unlocked, allow exactly one
1066      of them to proceed.  If after the decrement the recursion level is still
1067      nonzero, the lock remains locked and owned by the calling process or
1068      thread.
1069
1070      Only call this method when the calling process or thread owns the lock.
1071      An :exc:`AssertionError` is raised if this method is called by a process
1072      or thread other than the owner or if the lock is in an unlocked (unowned)
1073      state.  Note that the type of exception raised in this situation
1074      differs from the implemented behavior in :meth:`threading.RLock.release`.
1075
1076
1077.. class:: Semaphore([value])
1078
1079   A semaphore object: a close analog of :class:`threading.Semaphore`.
1080
1081   A solitary difference from its close analog exists: its ``acquire`` method's
1082   first argument is named *block* and it supports an optional second argument
1083   *timeout*, as is consistent with :meth:`Lock.acquire`.
1084
1085.. note::
1086
1087   The :meth:`acquire` method of :class:`BoundedSemaphore`, :class:`Lock`,
1088   :class:`RLock` and :class:`Semaphore` has a timeout parameter not supported
1089   by the equivalents in :mod:`threading`.  The signature is
1090   ``acquire(block=True, timeout=None)`` with keyword parameters being
1091   acceptable.  If *block* is ``True`` and *timeout* is not ``None`` then it
1092   specifies a timeout in seconds.  If *block* is ``False`` then *timeout* is
1093   ignored.
1094
1095   On Mac OS X, ``sem_timedwait`` is unsupported, so calling ``acquire()`` with
1096   a timeout will emulate that function's behavior using a sleeping loop.
1097
1098.. note::
1099
1100   If the SIGINT signal generated by :kbd:`Ctrl-C` arrives while the main thread is
1101   blocked by a call to :meth:`BoundedSemaphore.acquire`, :meth:`Lock.acquire`,
1102   :meth:`RLock.acquire`, :meth:`Semaphore.acquire`, :meth:`Condition.acquire`
1103   or :meth:`Condition.wait` then the call will be immediately interrupted and
1104   :exc:`KeyboardInterrupt` will be raised.
1105
1106   This differs from the behaviour of :mod:`threading` where SIGINT will be
1107   ignored while the equivalent blocking calls are in progress.
1108
1109.. note::
1110
1111   Some of this package's functionality requires a functioning shared semaphore
1112   implementation on the host operating system. Without one, the
1113   :mod:`multiprocessing.synchronize` module will be disabled, and attempts to
1114   import it will result in an :exc:`ImportError`. See
1115   :issue:`3770` for additional information.
1116
1117
1118Shared :mod:`ctypes` Objects
1119~~~~~~~~~~~~~~~~~~~~~~~~~~~~
1120
1121It is possible to create shared objects using shared memory which can be
1122inherited by child processes.
1123
1124.. function:: Value(typecode_or_type, *args[, lock])
1125
1126   Return a :mod:`ctypes` object allocated from shared memory.  By default the
1127   return value is actually a synchronized wrapper for the object.
1128
1129   *typecode_or_type* determines the type of the returned object: it is either a
1130   ctypes type or a one character typecode of the kind used by the :mod:`array`
1131   module.  *\*args* is passed on to the constructor for the type.
1132
1133   If *lock* is ``True`` (the default) then a new recursive lock
1134   object is created to synchronize access to the value.  If *lock* is
1135   a :class:`Lock` or :class:`RLock` object then that will be used to
1136   synchronize access to the value.  If *lock* is ``False`` then
1137   access to the returned object will not be automatically protected
1138   by a lock, so it will not necessarily be "process-safe".
1139
1140   Operations like ``+=`` which involve a read and write are not
1141   atomic.  So if, for instance, you want to atomically increment a
1142   shared value it is insufficient to just do ::
1143
1144       counter.value += 1
1145
1146   Assuming the associated lock is recursive (which it is by default)
1147   you can instead do ::
1148
1149       with counter.get_lock():
1150           counter.value += 1
1151
1152   Note that *lock* is a keyword-only argument.
1153
1154.. function:: Array(typecode_or_type, size_or_initializer, *, lock=True)
1155
1156   Return a ctypes array allocated from shared memory.  By default the return
1157   value is actually a synchronized wrapper for the array.
1158
1159   *typecode_or_type* determines the type of the elements of the returned array:
1160   it is either a ctypes type or a one character typecode of the kind used by
1161   the :mod:`array` module.  If *size_or_initializer* is an integer, then it
1162   determines the length of the array, and the array will be initially zeroed.
1163   Otherwise, *size_or_initializer* is a sequence which is used to initialize
1164   the array and whose length determines the length of the array.
1165
1166   If *lock* is ``True`` (the default) then a new lock object is created to
1167   synchronize access to the value.  If *lock* is a :class:`Lock` or
1168   :class:`RLock` object then that will be used to synchronize access to the
1169   value.  If *lock* is ``False`` then access to the returned object will not be
1170   automatically protected by a lock, so it will not necessarily be
1171   "process-safe".
1172
1173   Note that *lock* is a keyword only argument.
1174
1175   Note that an array of :data:`ctypes.c_char` has *value* and *raw*
1176   attributes which allow one to use it to store and retrieve strings.
1177
1178
1179The :mod:`multiprocessing.sharedctypes` module
1180>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
1181
1182.. module:: multiprocessing.sharedctypes
1183   :synopsis: Allocate ctypes objects from shared memory.
1184
1185The :mod:`multiprocessing.sharedctypes` module provides functions for allocating
1186:mod:`ctypes` objects from shared memory which can be inherited by child
1187processes.
1188
1189.. note::
1190
1191   Although it is possible to store a pointer in shared memory remember that
1192   this will refer to a location in the address space of a specific process.
1193   However, the pointer is quite likely to be invalid in the context of a second
1194   process and trying to dereference the pointer from the second process may
1195   cause a crash.
1196
1197.. function:: RawArray(typecode_or_type, size_or_initializer)
1198
1199   Return a ctypes array allocated from shared memory.
1200
1201   *typecode_or_type* determines the type of the elements of the returned array:
1202   it is either a ctypes type or a one character typecode of the kind used by
1203   the :mod:`array` module.  If *size_or_initializer* is an integer then it
1204   determines the length of the array, and the array will be initially zeroed.
1205   Otherwise *size_or_initializer* is a sequence which is used to initialize the
1206   array and whose length determines the length of the array.
1207
1208   Note that setting and getting an element is potentially non-atomic -- use
1209   :func:`Array` instead to make sure that access is automatically synchronized
1210   using a lock.
1211
1212.. function:: RawValue(typecode_or_type, *args)
1213
1214   Return a ctypes object allocated from shared memory.
1215
1216   *typecode_or_type* determines the type of the returned object: it is either a
1217   ctypes type or a one character typecode of the kind used by the :mod:`array`
1218   module.  *\*args* is passed on to the constructor for the type.
1219
1220   Note that setting and getting the value is potentially non-atomic -- use
1221   :func:`Value` instead to make sure that access is automatically synchronized
1222   using a lock.
1223
1224   Note that an array of :data:`ctypes.c_char` has ``value`` and ``raw``
1225   attributes which allow one to use it to store and retrieve strings -- see
1226   documentation for :mod:`ctypes`.
1227
1228.. function:: Array(typecode_or_type, size_or_initializer, *args[, lock])
1229
1230   The same as :func:`RawArray` except that depending on the value of *lock* a
1231   process-safe synchronization wrapper may be returned instead of a raw ctypes
1232   array.
1233
1234   If *lock* is ``True`` (the default) then a new lock object is created to
1235   synchronize access to the value.  If *lock* is a
1236   :class:`~multiprocessing.Lock` or :class:`~multiprocessing.RLock` object
1237   then that will be used to synchronize access to the
1238   value.  If *lock* is ``False`` then access to the returned object will not be
1239   automatically protected by a lock, so it will not necessarily be
1240   "process-safe".
1241
1242   Note that *lock* is a keyword-only argument.
1243
1244.. function:: Value(typecode_or_type, *args[, lock])
1245
1246   The same as :func:`RawValue` except that depending on the value of *lock* a
1247   process-safe synchronization wrapper may be returned instead of a raw ctypes
1248   object.
1249
1250   If *lock* is ``True`` (the default) then a new lock object is created to
1251   synchronize access to the value.  If *lock* is a :class:`~multiprocessing.Lock` or
1252   :class:`~multiprocessing.RLock` object then that will be used to synchronize access to the
1253   value.  If *lock* is ``False`` then access to the returned object will not be
1254   automatically protected by a lock, so it will not necessarily be
1255   "process-safe".
1256
1257   Note that *lock* is a keyword-only argument.
1258
1259.. function:: copy(obj)
1260
1261   Return a ctypes object allocated from shared memory which is a copy of the
1262   ctypes object *obj*.
1263
1264.. function:: synchronized(obj[, lock])
1265
1266   Return a process-safe wrapper object for a ctypes object which uses *lock* to
1267   synchronize access.  If *lock* is ``None`` (the default) then a
1268   :class:`multiprocessing.RLock` object is created automatically.
1269
1270   A synchronized wrapper will have two methods in addition to those of the
1271   object it wraps: :meth:`get_obj` returns the wrapped object and
1272   :meth:`get_lock` returns the lock object used for synchronization.
1273
1274   Note that accessing the ctypes object through the wrapper can be a lot slower
1275   than accessing the raw ctypes object.
1276
1277
1278The table below compares the syntax for creating shared ctypes objects from
1279shared memory with the normal ctypes syntax.  (In the table ``MyStruct`` is some
1280subclass of :class:`ctypes.Structure`.)
1281
1282==================== ========================== ===========================
1283ctypes               sharedctypes using type    sharedctypes using typecode
1284==================== ========================== ===========================
1285c_double(2.4)        RawValue(c_double, 2.4)    RawValue('d', 2.4)
1286MyStruct(4, 6)       RawValue(MyStruct, 4, 6)
1287(c_short * 7)()      RawArray(c_short, 7)       RawArray('h', 7)
1288(c_int * 3)(9, 2, 8) RawArray(c_int, (9, 2, 8)) RawArray('i', (9, 2, 8))
1289==================== ========================== ===========================
1290
1291
1292Below is an example where a number of ctypes objects are modified by a child
1293process::
1294
1295   from multiprocessing import Process, Lock
1296   from multiprocessing.sharedctypes import Value, Array
1297   from ctypes import Structure, c_double
1298
1299   class Point(Structure):
1300       _fields_ = [('x', c_double), ('y', c_double)]
1301
1302   def modify(n, x, s, A):
1303       n.value **= 2
1304       x.value **= 2
1305       s.value = s.value.upper()
1306       for a in A:
1307           a.x **= 2
1308           a.y **= 2
1309
1310   if __name__ == '__main__':
1311       lock = Lock()
1312
1313       n = Value('i', 7)
1314       x = Value(c_double, 1.0/3.0, lock=False)
1315       s = Array('c', 'hello world', lock=lock)
1316       A = Array(Point, [(1.875,-6.25), (-5.75,2.0), (2.375,9.5)], lock=lock)
1317
1318       p = Process(target=modify, args=(n, x, s, A))
1319       p.start()
1320       p.join()
1321
1322       print n.value
1323       print x.value
1324       print s.value
1325       print [(a.x, a.y) for a in A]
1326
1327
1328.. highlightlang:: none
1329
1330The results printed are ::
1331
1332    49
1333    0.1111111111111111
1334    HELLO WORLD
1335    [(3.515625, 39.0625), (33.0625, 4.0), (5.640625, 90.25)]
1336
1337.. highlightlang:: python
1338
1339
1340.. _multiprocessing-managers:
1341
1342Managers
1343~~~~~~~~
1344
1345Managers provide a way to create data which can be shared between different
1346processes. A manager object controls a server process which manages *shared
1347objects*.  Other processes can access the shared objects by using proxies.
1348
1349.. function:: multiprocessing.Manager()
1350
1351   Returns a started :class:`~multiprocessing.managers.SyncManager` object which
1352   can be used for sharing objects between processes.  The returned manager
1353   object corresponds to a spawned child process and has methods which will
1354   create shared objects and return corresponding proxies.
1355
1356.. module:: multiprocessing.managers
1357   :synopsis: Share data between process with shared objects.
1358
1359Manager processes will be shutdown as soon as they are garbage collected or
1360their parent process exits.  The manager classes are defined in the
1361:mod:`multiprocessing.managers` module:
1362
1363.. class:: BaseManager([address[, authkey]])
1364
1365   Create a BaseManager object.
1366
1367   Once created one should call :meth:`start` or ``get_server().serve_forever()`` to ensure
1368   that the manager object refers to a started manager process.
1369
1370   *address* is the address on which the manager process listens for new
1371   connections.  If *address* is ``None`` then an arbitrary one is chosen.
1372
1373   *authkey* is the authentication key which will be used to check the validity
1374   of incoming connections to the server process.  If *authkey* is ``None`` then
1375   ``current_process().authkey``.  Otherwise *authkey* is used and it
1376   must be a string.
1377
1378   .. method:: start([initializer[, initargs]])
1379
1380      Start a subprocess to start the manager.  If *initializer* is not ``None``
1381      then the subprocess will call ``initializer(*initargs)`` when it starts.
1382
1383   .. method:: get_server()
1384
1385      Returns a :class:`Server` object which represents the actual server under
1386      the control of the Manager. The :class:`Server` object supports the
1387      :meth:`serve_forever` method::
1388
1389      >>> from multiprocessing.managers import BaseManager
1390      >>> manager = BaseManager(address=('', 50000), authkey='abc')
1391      >>> server = manager.get_server()
1392      >>> server.serve_forever()
1393
1394      :class:`Server` additionally has an :attr:`address` attribute.
1395
1396   .. method:: connect()
1397
1398      Connect a local manager object to a remote manager process::
1399
1400      >>> from multiprocessing.managers import BaseManager
1401      >>> m = BaseManager(address=('127.0.0.1', 5000), authkey='abc')
1402      >>> m.connect()
1403
1404   .. method:: shutdown()
1405
1406      Stop the process used by the manager.  This is only available if
1407      :meth:`start` has been used to start the server process.
1408
1409      This can be called multiple times.
1410
1411   .. method:: register(typeid[, callable[, proxytype[, exposed[, method_to_typeid[, create_method]]]]])
1412
1413      A classmethod which can be used for registering a type or callable with
1414      the manager class.
1415
1416      *typeid* is a "type identifier" which is used to identify a particular
1417      type of shared object.  This must be a string.
1418
1419      *callable* is a callable used for creating objects for this type
1420      identifier.  If a manager instance will be created using the
1421      :meth:`from_address` classmethod or if the *create_method* argument is
1422      ``False`` then this can be left as ``None``.
1423
1424      *proxytype* is a subclass of :class:`BaseProxy` which is used to create
1425      proxies for shared objects with this *typeid*.  If ``None`` then a proxy
1426      class is created automatically.
1427
1428      *exposed* is used to specify a sequence of method names which proxies for
1429      this typeid should be allowed to access using
1430      :meth:`BaseProxy._callmethod`.  (If *exposed* is ``None`` then
1431      :attr:`proxytype._exposed_` is used instead if it exists.)  In the case
1432      where no exposed list is specified, all "public methods" of the shared
1433      object will be accessible.  (Here a "public method" means any attribute
1434      which has a :meth:`~object.__call__` method and whose name does not begin
1435      with ``'_'``.)
1436
1437      *method_to_typeid* is a mapping used to specify the return type of those
1438      exposed methods which should return a proxy.  It maps method names to
1439      typeid strings.  (If *method_to_typeid* is ``None`` then
1440      :attr:`proxytype._method_to_typeid_` is used instead if it exists.)  If a
1441      method's name is not a key of this mapping or if the mapping is ``None``
1442      then the object returned by the method will be copied by value.
1443
1444      *create_method* determines whether a method should be created with name
1445      *typeid* which can be used to tell the server process to create a new
1446      shared object and return a proxy for it.  By default it is ``True``.
1447
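      A short sketch of registering a plain class with an explicit *exposed*
      list (the ``Account`` class and its method names are illustrative)::

         from multiprocessing.managers import BaseManager

         class Account(object):
             def __init__(self):
                 self._balance = 0
             def deposit(self, amount):
                 self._balance += amount
             def balance(self):
                 return self._balance

         class AccountManager(BaseManager):
             pass

         # proxies for 'Account' may only call deposit() and balance()
         AccountManager.register('Account', Account,
                                 exposed=('deposit', 'balance'))
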
1448   :class:`BaseManager` instances also have one read-only property:
1449
1450   .. attribute:: address
1451
1452      The address used by the manager.
1453
1454
1455.. class:: SyncManager
1456
1457   A subclass of :class:`BaseManager` which can be used for the synchronization
1458   of processes.  Objects of this type are returned by
1459   :func:`multiprocessing.Manager`.
1460
1461   It also supports creation of shared lists and dictionaries.
1462
1463   .. method:: BoundedSemaphore([value])
1464
1465      Create a shared :class:`threading.BoundedSemaphore` object and return a
1466      proxy for it.
1467
1468   .. method:: Condition([lock])
1469
1470      Create a shared :class:`threading.Condition` object and return a proxy for
1471      it.
1472
1473      If *lock* is supplied then it should be a proxy for a
1474      :class:`threading.Lock` or :class:`threading.RLock` object.
1475
1476   .. method:: Event()
1477
1478      Create a shared :class:`threading.Event` object and return a proxy for it.
1479
1480   .. method:: Lock()
1481
1482      Create a shared :class:`threading.Lock` object and return a proxy for it.
1483
1484   .. method:: Namespace()
1485
1486      Create a shared :class:`Namespace` object and return a proxy for it.
1487
1488   .. method:: Queue([maxsize])
1489
1490      Create a shared :class:`Queue.Queue` object and return a proxy for it.
1491
1492   .. method:: RLock()
1493
1494      Create a shared :class:`threading.RLock` object and return a proxy for it.
1495
1496   .. method:: Semaphore([value])
1497
1498      Create a shared :class:`threading.Semaphore` object and return a proxy for
1499      it.
1500
1501   .. method:: Array(typecode, sequence)
1502
1503      Create an array and return a proxy for it.
1504
1505   .. method:: Value(typecode, value)
1506
1507      Create an object with a writable ``value`` attribute and return a proxy
1508      for it.
1509
1510   .. method:: dict()
1511               dict(mapping)
1512               dict(sequence)
1513
1514      Create a shared ``dict`` object and return a proxy for it.
1515
1516   .. method:: list()
1517               list(sequence)
1518
1519      Create a shared ``list`` object and return a proxy for it.
1520
1521   .. note::
1522
1523      Modifications to mutable values or items in dict and list proxies will not
1524      be propagated through the manager, because the proxy has no way of knowing
1525      when its values or items are modified.  To modify such an item, you can
1526      re-assign the modified object to the container proxy::
1527
1528         # create a list proxy and append a mutable object (a dictionary)
1529         lproxy = manager.list()
1530         lproxy.append({})
1531         # now mutate the dictionary
1532         d = lproxy[0]
1533         d['a'] = 1
1534         d['b'] = 2
1535         # at this point, the changes to d are not yet synced, but by
1536         # reassigning the dictionary, the proxy is notified of the change
1537         lproxy[0] = d
1538
1539
1540.. class:: Namespace
1541
    A type that can be registered with :class:`SyncManager`.
1543
1544    A namespace object has no public methods, but does have writable attributes.
1545    Its representation shows the values of its attributes.
1546
1547    However, when using a proxy for a namespace object, an attribute beginning with
1548    ``'_'`` will be an attribute of the proxy and not an attribute of the referent:
1549
1550    .. doctest::
1551
1552       >>> manager = multiprocessing.Manager()
1553       >>> Global = manager.Namespace()
1554       >>> Global.x = 10
1555       >>> Global.y = 'hello'
1556       >>> Global._z = 12.3    # this is an attribute of the proxy
1557       >>> print Global
1558       Namespace(x=10, y='hello')
1559
1560
1561Customized managers
1562>>>>>>>>>>>>>>>>>>>
1563
1564To create one's own manager, one creates a subclass of :class:`BaseManager` and
1565uses the :meth:`~BaseManager.register` classmethod to register new types or
1566callables with the manager class.  For example::
1567
1568   from multiprocessing.managers import BaseManager
1569
1570   class MathsClass(object):
1571       def add(self, x, y):
1572           return x + y
1573       def mul(self, x, y):
1574           return x * y
1575
1576   class MyManager(BaseManager):
1577       pass
1578
1579   MyManager.register('Maths', MathsClass)
1580
1581   if __name__ == '__main__':
1582       manager = MyManager()
1583       manager.start()
1584       maths = manager.Maths()
1585       print maths.add(4, 3)         # prints 7
1586       print maths.mul(7, 8)         # prints 56
1587
1588
1589Using a remote manager
1590>>>>>>>>>>>>>>>>>>>>>>
1591
1592It is possible to run a manager server on one machine and have clients use it
1593from other machines (assuming that the firewalls involved allow it).
1594
1595Running the following commands creates a server for a single shared queue which
1596remote clients can access::
1597
1598   >>> from multiprocessing.managers import BaseManager
1599   >>> import Queue
1600   >>> queue = Queue.Queue()
1601   >>> class QueueManager(BaseManager): pass
1602   >>> QueueManager.register('get_queue', callable=lambda:queue)
1603   >>> m = QueueManager(address=('', 50000), authkey='abracadabra')
1604   >>> s = m.get_server()
1605   >>> s.serve_forever()
1606
1607One client can access the server as follows::
1608
1609   >>> from multiprocessing.managers import BaseManager
1610   >>> class QueueManager(BaseManager): pass
1611   >>> QueueManager.register('get_queue')
1612   >>> m = QueueManager(address=('foo.bar.org', 50000), authkey='abracadabra')
1613   >>> m.connect()
1614   >>> queue = m.get_queue()
1615   >>> queue.put('hello')
1616
1617Another client can also use it::
1618
1619   >>> from multiprocessing.managers import BaseManager
1620   >>> class QueueManager(BaseManager): pass
1621   >>> QueueManager.register('get_queue')
1622   >>> m = QueueManager(address=('foo.bar.org', 50000), authkey='abracadabra')
1623   >>> m.connect()
1624   >>> queue = m.get_queue()
1625   >>> queue.get()
1626   'hello'
1627
1628Local processes can also access that queue, using the code from above on the
1629client to access it remotely::
1630
1631    >>> from multiprocessing import Process, Queue
1632    >>> from multiprocessing.managers import BaseManager
1633    >>> class Worker(Process):
1634    ...     def __init__(self, q):
1635    ...         self.q = q
1636    ...         super(Worker, self).__init__()
1637    ...     def run(self):
1638    ...         self.q.put('local hello')
1639    ...
1640    >>> queue = Queue()
1641    >>> w = Worker(queue)
1642    >>> w.start()
1643    >>> class QueueManager(BaseManager): pass
1644    ...
1645    >>> QueueManager.register('get_queue', callable=lambda: queue)
1646    >>> m = QueueManager(address=('', 50000), authkey='abracadabra')
1647    >>> s = m.get_server()
1648    >>> s.serve_forever()
1649
1650Proxy Objects
1651~~~~~~~~~~~~~
1652
1653A proxy is an object which *refers* to a shared object which lives (presumably)
1654in a different process.  The shared object is said to be the *referent* of the
1655proxy.  Multiple proxy objects may have the same referent.
1656
1657A proxy object has methods which invoke corresponding methods of its referent
1658(although not every method of the referent will necessarily be available through
1659the proxy).  A proxy can usually be used in most of the same ways that its
1660referent can:
1661
1662.. doctest::
1663
1664   >>> from multiprocessing import Manager
1665   >>> manager = Manager()
1666   >>> l = manager.list([i*i for i in range(10)])
1667   >>> print l
1668   [0, 1, 4, 9, 16, 25, 36, 49, 64, 81]
1669   >>> print repr(l)
1670   <ListProxy object, typeid 'list' at 0x...>
1671   >>> l[4]
1672   16
1673   >>> l[2:5]
1674   [4, 9, 16]
1675
1676Notice that applying :func:`str` to a proxy will return the representation of
1677the referent, whereas applying :func:`repr` will return the representation of
1678the proxy.
1679
1680An important feature of proxy objects is that they are picklable so they can be
1681passed between processes.  Note, however, that if a proxy is sent to the
1682corresponding manager's process then unpickling it will produce the referent
1683itself.  This means, for example, that one shared object can contain a second:
1684
1685.. doctest::
1686
1687   >>> a = manager.list()
1688   >>> b = manager.list()
1689   >>> a.append(b)         # referent of a now contains referent of b
1690   >>> print a, b
1691   [[]] []
1692   >>> b.append('hello')
1693   >>> print a, b
1694   [['hello']] ['hello']
1695
1696.. note::
1697
1698   The proxy types in :mod:`multiprocessing` do nothing to support comparisons
1699   by value.  So, for instance, we have:
1700
1701   .. doctest::
1702
1703       >>> manager.list([1,2,3]) == [1,2,3]
1704       False
1705
1706   One should just use a copy of the referent instead when making comparisons.
1707
1708.. class:: BaseProxy
1709
1710   Proxy objects are instances of subclasses of :class:`BaseProxy`.
1711
1712   .. method:: _callmethod(methodname[, args[, kwds]])
1713
1714      Call and return the result of a method of the proxy's referent.
1715
1716      If ``proxy`` is a proxy whose referent is ``obj`` then the expression ::
1717
1718         proxy._callmethod(methodname, args, kwds)
1719
1720      will evaluate the expression ::
1721
1722         getattr(obj, methodname)(*args, **kwds)
1723
1724      in the manager's process.
1725
1726      The returned value will be a copy of the result of the call or a proxy to
1727      a new shared object -- see documentation for the *method_to_typeid*
1728      argument of :meth:`BaseManager.register`.
1729
      If an exception is raised by the call, then it is re-raised by
1731      :meth:`_callmethod`.  If some other exception is raised in the manager's
1732      process then this is converted into a :exc:`RemoteError` exception and is
1733      raised by :meth:`_callmethod`.
1734
1735      Note in particular that an exception will be raised if *methodname* has
1736      not been *exposed*.
1737
1738      An example of the usage of :meth:`_callmethod`:
1739
1740      .. doctest::
1741
1742         >>> l = manager.list(range(10))
1743         >>> l._callmethod('__len__')
1744         10
1745         >>> l._callmethod('__getslice__', (2, 7))   # equiv to `l[2:7]`
1746         [2, 3, 4, 5, 6]
1747         >>> l._callmethod('__getitem__', (20,))     # equiv to `l[20]`
1748         Traceback (most recent call last):
1749         ...
1750         IndexError: list index out of range
1751
1752   .. method:: _getvalue()
1753
1754      Return a copy of the referent.
1755
1756      If the referent is unpicklable then this will raise an exception.
1757
1758   .. method:: __repr__
1759
1760      Return a representation of the proxy object.
1761
1762   .. method:: __str__
1763
1764      Return the representation of the referent.
1765
1766
1767Cleanup
1768>>>>>>>
1769
1770A proxy object uses a weakref callback so that when it gets garbage collected it
1771deregisters itself from the manager which owns its referent.
1772
1773A shared object gets deleted from the manager process when there are no longer
1774any proxies referring to it.
1775
1776
1777Process Pools
1778~~~~~~~~~~~~~
1779
1780.. module:: multiprocessing.pool
1781   :synopsis: Create pools of processes.
1782
1783One can create a pool of processes which will carry out tasks submitted to it
1784with the :class:`Pool` class.
1785
1786.. class:: multiprocessing.Pool([processes[, initializer[, initargs[, maxtasksperchild]]]])
1787
1788   A process pool object which controls a pool of worker processes to which jobs
1789   can be submitted.  It supports asynchronous results with timeouts and
1790   callbacks and has a parallel map implementation.
1791
1792   *processes* is the number of worker processes to use.  If *processes* is
1793   ``None`` then the number returned by :func:`cpu_count` is used.  If
1794   *initializer* is not ``None`` then each worker process will call
1795   ``initializer(*initargs)`` when it starts.
1796
1797   Note that the methods of the pool object should only be called by
1798   the process which created the pool.
1799
1800   .. versionadded:: 2.7
1801      *maxtasksperchild* is the number of tasks a worker process can complete
1802      before it will exit and be replaced with a fresh worker process, to enable
1803      unused resources to be freed. The default *maxtasksperchild* is ``None``, which
1804      means worker processes will live as long as the pool.
1805
1806   .. note::
1807
1808      Worker processes within a :class:`Pool` typically live for the complete
1809      duration of the Pool's work queue. A frequent pattern found in other
1810      systems (such as Apache, mod_wsgi, etc) to free resources held by
1811      workers is to allow a worker within a pool to complete only a set
      amount of work before exiting, being cleaned up and a new
1813      process spawned to replace the old one. The *maxtasksperchild*
1814      argument to the :class:`Pool` exposes this ability to the end user.
1815
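   For example, a pool whose workers are replaced after completing ten tasks
   each could be created as follows (the numbers are illustrative)::

      pool = Pool(processes=4, maxtasksperchild=10)
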
1816   .. method:: apply(func[, args[, kwds]])
1817
1818      Equivalent of the :func:`apply` built-in function.  It blocks until the
1819      result is ready, so :meth:`apply_async` is better suited for performing
1820      work in parallel. Additionally, *func* is only executed in one of the
1821      workers of the pool.
1822
1823   .. method:: apply_async(func[, args[, kwds[, callback]]])
1824
1825      A variant of the :meth:`apply` method which returns a result object.
1826
1827      If *callback* is specified then it should be a callable which accepts a
1828      single argument.  When the result becomes ready *callback* is applied to
1829      it (unless the call failed).  *callback* should complete immediately since
1830      otherwise the thread which handles the results will get blocked.
1831
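      A minimal sketch of collecting results through a callback (the names
      ``results``, ``square`` and ``collect`` are illustrative)::

         from multiprocessing import Pool

         results = []

         def square(x):
             return x * x

         def collect(value):
             # runs in the result-handling thread of the parent process
             results.append(value)

         if __name__ == '__main__':
             pool = Pool(processes=2)
             for i in range(5):
                 pool.apply_async(square, (i,), callback=collect)
             pool.close()
             pool.join()
             print sorted(results)     # prints "[0, 1, 4, 9, 16]"
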
1832   .. method:: map(func, iterable[, chunksize])
1833
1834      A parallel equivalent of the :func:`map` built-in function (it supports only
1835      one *iterable* argument though).  It blocks until the result is ready.
1836
1837      This method chops the iterable into a number of chunks which it submits to
1838      the process pool as separate tasks.  The (approximate) size of these
1839      chunks can be specified by setting *chunksize* to a positive integer.
1840
1841   .. method:: map_async(func, iterable[, chunksize[, callback]])
1842
1843      A variant of the :meth:`.map` method which returns a result object.
1844
1845      If *callback* is specified then it should be a callable which accepts a
1846      single argument.  When the result becomes ready *callback* is applied to
1847      it (unless the call failed).  *callback* should complete immediately since
1848      otherwise the thread which handles the results will get blocked.
1849
1850   .. method:: imap(func, iterable[, chunksize])
1851
1852      An equivalent of :func:`itertools.imap`.
1853
1854      The *chunksize* argument is the same as the one used by the :meth:`.map`
1855      method.  For very long iterables using a large value for *chunksize* can
1856      make the job complete **much** faster than using the default value of
1857      ``1``.
1858
1859      Also if *chunksize* is ``1`` then the :meth:`!next` method of the iterator
1860      returned by the :meth:`imap` method has an optional *timeout* parameter:
1861      ``next(timeout)`` will raise :exc:`multiprocessing.TimeoutError` if the
1862      result cannot be returned within *timeout* seconds.
1863
1864   .. method:: imap_unordered(func, iterable[, chunksize])
1865
1866      The same as :meth:`imap` except that the ordering of the results from the
1867      returned iterator should be considered arbitrary.  (Only when there is
1868      only one worker process is the order guaranteed to be "correct".)
1869
1870   .. method:: close()
1871
1872      Prevents any more tasks from being submitted to the pool.  Once all the
1873      tasks have been completed the worker processes will exit.
1874
1875   .. method:: terminate()
1876
1877      Stops the worker processes immediately without completing outstanding
1878      work.  When the pool object is garbage collected :meth:`terminate` will be
1879      called immediately.
1880
1881   .. method:: join()
1882
1883      Wait for the worker processes to exit.  One must call :meth:`close` or
1884      :meth:`terminate` before using :meth:`join`.
1885
1886
1887.. class:: AsyncResult
1888
1889   The class of the result returned by :meth:`Pool.apply_async` and
1890   :meth:`Pool.map_async`.
1891
1892   .. method:: get([timeout])
1893
1894      Return the result when it arrives.  If *timeout* is not ``None`` and the
1895      result does not arrive within *timeout* seconds then
1896      :exc:`multiprocessing.TimeoutError` is raised.  If the remote call raised
1897      an exception then that exception will be reraised by :meth:`get`.
1898
1899   .. method:: wait([timeout])
1900
1901      Wait until the result is available or until *timeout* seconds pass.
1902
1903   .. method:: ready()
1904
1905      Return whether the call has completed.
1906
1907   .. method:: successful()
1908
1909      Return whether the call completed without raising an exception.  Will
1910      raise :exc:`AssertionError` if the result is not ready.
1911
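   For instance, one can poll for a result instead of blocking on :meth:`get`
   (a sketch, assuming ``pool``, ``f`` and ``time`` as in the example below)::

      result = pool.apply_async(f, (10,))
      while not result.ready():
          time.sleep(0.1)
      if result.successful():
          print result.get()    # prints "100"
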
1912The following example demonstrates the use of a pool::
1913
1914   from multiprocessing import Pool
1915   import time
1916
1917   def f(x):
1918       return x*x
1919
1920   if __name__ == '__main__':
1921       pool = Pool(processes=4)              # start 4 worker processes
1922
1923       result = pool.apply_async(f, (10,))   # evaluate "f(10)" asynchronously in a single process
1924       print result.get(timeout=1)           # prints "100" unless your computer is *very* slow
1925
1926       print pool.map(f, range(10))          # prints "[0, 1, 4,..., 81]"
1927
1928       it = pool.imap(f, range(10))
1929       print it.next()                       # prints "0"
1930       print it.next()                       # prints "1"
1931       print it.next(timeout=1)              # prints "4" unless your computer is *very* slow
1932
1933       result = pool.apply_async(time.sleep, (10,))
1934       print result.get(timeout=1)           # raises multiprocessing.TimeoutError
1935
1936
1937.. _multiprocessing-listeners-clients:
1938
1939Listeners and Clients
1940~~~~~~~~~~~~~~~~~~~~~
1941
1942.. module:: multiprocessing.connection
1943   :synopsis: API for dealing with sockets.
1944
1945Usually message passing between processes is done using queues or by using
1946:class:`~multiprocessing.Connection` objects returned by
1947:func:`~multiprocessing.Pipe`.
1948
1949However, the :mod:`multiprocessing.connection` module allows some extra
1950flexibility.  It basically gives a high level message oriented API for dealing
1951with sockets or Windows named pipes, and also has support for *digest
1952authentication* using the :mod:`hmac` module.
1953
1954
1955.. function:: deliver_challenge(connection, authkey)
1956
1957   Send a randomly generated message to the other end of the connection and wait
1958   for a reply.
1959
1960   If the reply matches the digest of the message using *authkey* as the key
1961   then a welcome message is sent to the other end of the connection.  Otherwise
1962   :exc:`AuthenticationError` is raised.
1963
1964.. function:: answer_challenge(connection, authkey)
1965
1966   Receive a message, calculate the digest of the message using *authkey* as the
1967   key, and then send the digest back.
1968
1969   If a welcome message is not received, then :exc:`AuthenticationError` is
1970   raised.
1971
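These two functions are used internally when authentication is requested, but
they can also be called directly.  A minimal sketch of challenging the other
end of a pipe (the key ``'secret'`` and the ``child`` function are
illustrative)::

   from multiprocessing import Pipe, Process
   from multiprocessing.connection import deliver_challenge, answer_challenge

   def child(conn):
       answer_challenge(conn, 'secret')
       conn.send('authenticated hello')

   if __name__ == '__main__':
       parent_conn, child_conn = Pipe()
       p = Process(target=child, args=(child_conn,))
       p.start()
       deliver_challenge(parent_conn, 'secret')   # AuthenticationError on mismatch
       print parent_conn.recv()                   # prints "authenticated hello"
       p.join()
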
1972.. function:: Client(address[, family[, authenticate[, authkey]]])
1973
1974   Attempt to set up a connection to the listener which is using address
1975   *address*, returning a :class:`~multiprocessing.Connection`.
1976
   The type of the connection is determined by the *family* argument, but this can
1978   generally be omitted since it can usually be inferred from the format of
1979   *address*. (See :ref:`multiprocessing-address-formats`)
1980
1981   If *authenticate* is ``True`` or *authkey* is a string then digest
1982   authentication is used.  The key used for authentication will be either
   *authkey* or ``current_process().authkey`` if *authkey* is ``None``.
1984   If authentication fails then :exc:`AuthenticationError` is raised.  See
1985   :ref:`multiprocessing-auth-keys`.
1986
1987.. class:: Listener([address[, family[, backlog[, authenticate[, authkey]]]]])
1988
1989   A wrapper for a bound socket or Windows named pipe which is 'listening' for
1990   connections.
1991
1992   *address* is the address to be used by the bound socket or named pipe of the
1993   listener object.
1994
1995   .. note::
1996
1997      If an address of '0.0.0.0' is used, the address will not be a connectable
      end point on Windows. If you require a connectable end point,
1999      you should use '127.0.0.1'.
2000
2001   *family* is the type of socket (or named pipe) to use.  This can be one of
2002   the strings ``'AF_INET'`` (for a TCP socket), ``'AF_UNIX'`` (for a Unix
2003   domain socket) or ``'AF_PIPE'`` (for a Windows named pipe).  Of these only
2004   the first is guaranteed to be available.  If *family* is ``None`` then the
2005   family is inferred from the format of *address*.  If *address* is also
2006   ``None`` then a default is chosen.  This default is the family which is
2007   assumed to be the fastest available.  See
2008   :ref:`multiprocessing-address-formats`.  Note that if *family* is
2009   ``'AF_UNIX'`` and address is ``None`` then the socket will be created in a
2010   private temporary directory created using :func:`tempfile.mkstemp`.
2011
2012   If the listener object uses a socket then *backlog* (1 by default) is passed
2013   to the :meth:`~socket.socket.listen` method of the socket once it has been
2014   bound.
2015
2016   If *authenticate* is ``True`` (``False`` by default) or *authkey* is not
2017   ``None`` then digest authentication is used.
2018
2019   If *authkey* is a string then it will be used as the authentication key;
2020   otherwise it must be ``None``.
2021
2022   If *authkey* is ``None`` and *authenticate* is ``True`` then
2023   ``current_process().authkey`` is used as the authentication key.  If
2024   *authkey* is ``None`` and *authenticate* is ``False`` then no
2025   authentication is done.  If authentication fails then
2026   :exc:`AuthenticationError` is raised.  See :ref:`multiprocessing-auth-keys`.
2027
2028   .. method:: accept()
2029
2030      Accept a connection on the bound socket or named pipe of the listener
2031      object and return a :class:`~multiprocessing.Connection` object.  If
2032      authentication is attempted and fails, then
2033      :exc:`~multiprocessing.AuthenticationError` is raised.
2034
2035   .. method:: close()
2036
2037      Close the bound socket or named pipe of the listener object.  This is
2038      called automatically when the listener is garbage collected.  However it
2039      is advisable to call it explicitly.
2040
2041   Listener objects have the following read-only properties:
2042
2043   .. attribute:: address
2044
2045      The address which is being used by the Listener object.
2046
2047   .. attribute:: last_accepted
2048
2049      The address from which the last accepted connection came.  If this is
2050      unavailable then it is ``None``.
2051
2052
The module defines the following exception:
2054
2055.. exception:: AuthenticationError
2056
2057   Exception raised when there is an authentication error.
2058
2059
2060**Examples**
2061
2062The following server code creates a listener which uses ``'secret password'`` as
2063an authentication key.  It then waits for a connection and sends some data to
2064the client::
2065
2066   from multiprocessing.connection import Listener
2067   from array import array
2068
2069   address = ('localhost', 6000)     # family is deduced to be 'AF_INET'
2070   listener = Listener(address, authkey='secret password')
2071
2072   conn = listener.accept()
2073   print 'connection accepted from', listener.last_accepted
2074
2075   conn.send([2.25, None, 'junk', float])
2076
2077   conn.send_bytes('hello')
2078
2079   conn.send_bytes(array('i', [42, 1729]))
2080
2081   conn.close()
2082   listener.close()
2083
2084The following code connects to the server and receives some data from the
2085server::
2086
2087   from multiprocessing.connection import Client
2088   from array import array
2089
2090   address = ('localhost', 6000)
2091   conn = Client(address, authkey='secret password')
2092
2093   print conn.recv()                 # => [2.25, None, 'junk', float]
2094
2095   print conn.recv_bytes()            # => 'hello'
2096
2097   arr = array('i', [0, 0, 0, 0, 0])
2098   print conn.recv_bytes_into(arr)     # => 8
2099   print arr                         # => array('i', [42, 1729, 0, 0, 0])
2100
2101   conn.close()
2102
2103
2104.. _multiprocessing-address-formats:
2105
2106Address Formats
2107>>>>>>>>>>>>>>>
2108
2109* An ``'AF_INET'`` address is a tuple of the form ``(hostname, port)`` where
2110  *hostname* is a string and *port* is an integer.
2111
2112* An ``'AF_UNIX'`` address is a string representing a filename on the
2113  filesystem.
2114
2115* An ``'AF_PIPE'`` address is a string of the form
  :samp:`r'\\\\.\\pipe\\{PipeName}'`.  To use :func:`Client` to connect to a named
  pipe on a remote computer called *ServerName*, one should use an address of the
  form :samp:`r'\\\\{ServerName}\\pipe\\{PipeName}'` instead.
2119
2120Note that any string beginning with two backslashes is assumed by default to be
2121an ``'AF_PIPE'`` address rather than an ``'AF_UNIX'`` address.
2122
2123
2124.. _multiprocessing-auth-keys:
2125
2126Authentication keys
2127~~~~~~~~~~~~~~~~~~~
2128
2129When one uses :meth:`Connection.recv <multiprocessing.Connection.recv>`, the
2130data received is automatically
2131unpickled.  Unfortunately unpickling data from an untrusted source is a security
2132risk.  Therefore :class:`Listener` and :func:`Client` use the :mod:`hmac` module
2133to provide digest authentication.
2134
2135An authentication key is a string which can be thought of as a password: once a
2136connection is established both ends will demand proof that the other knows the
2137authentication key.  (Demonstrating that both ends are using the same key does
2138**not** involve sending the key over the connection.)
2139
2140If authentication is requested but no authentication key is specified then the
2141return value of ``current_process().authkey`` is used (see
2142:class:`~multiprocessing.Process`).  This value will be automatically inherited by
2143any :class:`~multiprocessing.Process` object that the current process creates.
2144This means that (by default) all processes of a multi-process program will share
2145a single authentication key which can be used when setting up connections
2146between themselves.
2147
2148Suitable authentication keys can also be generated by using :func:`os.urandom`.
2149
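For example (a sketch; the variable name ``authkey`` is illustrative)::

   import os

   authkey = os.urandom(32)    # 32 random bytes, suitable as an authentication key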
2150
2151Logging
2152~~~~~~~
2153
2154Some support for logging is available.  Note, however, that the :mod:`logging`
2155package does not use process shared locks so it is possible (depending on the
2156handler type) for messages from different processes to get mixed up.
2157
2158.. currentmodule:: multiprocessing
2159.. function:: get_logger()
2160
2161   Returns the logger used by :mod:`multiprocessing`.  If necessary, a new one
2162   will be created.
2163
2164   When first created the logger has level :data:`logging.NOTSET` and no
2165   default handler. Messages sent to this logger will not by default propagate
2166   to the root logger.
2167
2168   Note that on Windows child processes will only inherit the level of the
2169   parent process's logger -- any other customization of the logger will not be
2170   inherited.
2171
2172.. currentmodule:: multiprocessing
2173.. function:: log_to_stderr()
2174
2175   This function performs a call to :func:`get_logger` but in addition to
   returning the logger created by :func:`get_logger`, it adds a handler which sends
   output to :data:`sys.stderr` using the format
2178   ``'[%(levelname)s/%(processName)s] %(message)s'``.
2179
2180Below is an example session with logging turned on::
2181
2182    >>> import multiprocessing, logging
2183    >>> logger = multiprocessing.log_to_stderr()
2184    >>> logger.setLevel(logging.INFO)
2185    >>> logger.warning('doomed')
2186    [WARNING/MainProcess] doomed
2187    >>> m = multiprocessing.Manager()
2188    [INFO/SyncManager-...] child process calling self.run()
2189    [INFO/SyncManager-...] created temp directory /.../pymp-...
2190    [INFO/SyncManager-...] manager serving at '/.../listener-...'
2191    >>> del m
2192    [INFO/MainProcess] sending shutdown message to manager
2193    [INFO/SyncManager-...] manager exiting with exitcode 0
2194
In addition to having these two logging functions, the multiprocessing module
also exposes two additional logging level attributes: :const:`SUBWARNING`
and :const:`SUBDEBUG`.  The table below illustrates where these fit in the
normal level hierarchy.
2199
2200+----------------+----------------+
2201| Level          | Numeric value  |
2202+================+================+
2203| ``SUBWARNING`` | 25             |
2204+----------------+----------------+
2205| ``SUBDEBUG``   | 5              |
2206+----------------+----------------+
2207
2208For a full table of logging levels, see the :mod:`logging` module.
2209
2210These additional logging levels are used primarily for certain debug messages
2211within the multiprocessing module. Below is the same example as above, except
2212with :const:`SUBDEBUG` enabled::
2213
2214    >>> import multiprocessing, logging
2215    >>> logger = multiprocessing.log_to_stderr()
2216    >>> logger.setLevel(multiprocessing.SUBDEBUG)
2217    >>> logger.warning('doomed')
2218    [WARNING/MainProcess] doomed
2219    >>> m = multiprocessing.Manager()
2220    [INFO/SyncManager-...] child process calling self.run()
2221    [INFO/SyncManager-...] created temp directory /.../pymp-...
2222    [INFO/SyncManager-...] manager serving at '/.../pymp-djGBXN/listener-...'
2223    >>> del m
2224    [SUBDEBUG/MainProcess] finalizer calling ...
2225    [INFO/MainProcess] sending shutdown message to manager
2226    [DEBUG/SyncManager-...] manager received shutdown message
2227    [SUBDEBUG/SyncManager-...] calling <Finalize object, callback=unlink, ...
2228    [SUBDEBUG/SyncManager-...] finalizer calling <built-in function unlink> ...
2229    [SUBDEBUG/SyncManager-...] calling <Finalize object, dead>
2230    [SUBDEBUG/SyncManager-...] finalizer calling <function rmtree at 0x5aa730> ...
2231    [INFO/SyncManager-...] manager exiting with exitcode 0
2232
2233The :mod:`multiprocessing.dummy` module
2234~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
2235
2236.. module:: multiprocessing.dummy
2237   :synopsis: Dumb wrapper around threading.
2238
2239:mod:`multiprocessing.dummy` replicates the API of :mod:`multiprocessing` but is
2240no more than a wrapper around the :mod:`threading` module.
2241
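In particular, ``multiprocessing.dummy.Pool`` provides a pool with the same
interface as :class:`multiprocessing.pool.Pool` but backed by threads rather
than processes, which can be convenient for I/O-bound tasks.  A minimal
sketch::

   from multiprocessing.dummy import Pool    # thread-based, same API as Pool

   def f(x):
       return x * x        # stand-in for some I/O-bound work

   if __name__ == '__main__':
       pool = Pool(4)
       print pool.map(f, range(5))    # prints "[0, 1, 4, 9, 16]"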
2242
2243.. _multiprocessing-programming:
2244
2245Programming guidelines
2246----------------------
2247
2248There are certain guidelines and idioms which should be adhered to when using
2249:mod:`multiprocessing`.
2250
2251
2252All platforms
2253~~~~~~~~~~~~~
2254
2255Avoid shared state
2256
2257    As far as possible one should try to avoid shifting large amounts of data
2258    between processes.
2259
2260    It is probably best to stick to using queues or pipes for communication
2261    between processes rather than using the lower level synchronization
2262    primitives from the :mod:`threading` module.
2263
2264Picklability
2265
2266    Ensure that the arguments to the methods of proxies are picklable.
2267
2268Thread safety of proxies
2269
2270    Do not use a proxy object from more than one thread unless you protect it
2271    with a lock.
2272
2273    (There is never a problem with different processes using the *same* proxy.)
2274
2275Joining zombie processes
2276
2277    On Unix when a process finishes but has not been joined it becomes a zombie.
2278    There should never be very many because each time a new process starts (or
2279    :func:`~multiprocessing.active_children` is called) all completed processes
2280    which have not yet been joined will be joined.  Also calling a finished
2281    process's :meth:`Process.is_alive <multiprocessing.Process.is_alive>` will
2282    join the process.  Even so it is probably good
2283    practice to explicitly join all the processes that you start.
2284
2285Better to inherit than pickle/unpickle
2286
2287    On Windows many types from :mod:`multiprocessing` need to be picklable so
2288    that child processes can use them.  However, one should generally avoid
2289    sending shared objects to other processes using pipes or queues.  Instead
2290    you should arrange the program so that a process which needs access to a
2291    shared resource created elsewhere can inherit it from an ancestor process.
2292
2293Avoid terminating processes
2294
2295    Using the :meth:`Process.terminate <multiprocessing.Process.terminate>`
2296    method to stop a process is liable to
2297    cause any shared resources (such as locks, semaphores, pipes and queues)
2298    currently being used by the process to become broken or unavailable to other
2299    processes.
2300
2301    Therefore it is probably best to only consider using
2302    :meth:`Process.terminate <multiprocessing.Process.terminate>` on processes
2303    which never use any shared resources.
2304
2305Joining processes that use queues
2306
2307    Bear in mind that a process that has put items in a queue will wait before
2308    terminating until all the buffered items are fed by the "feeder" thread to
2309    the underlying pipe.  (The child process can call the
2310    :meth:`~multiprocessing.Queue.cancel_join_thread` method of the queue to avoid this behaviour.)
2311
2312    This means that whenever you use a queue you need to make sure that all
2313    items which have been put on the queue will eventually be removed before the
2314    process is joined.  Otherwise you cannot be sure that processes which have
2315    put items on the queue will terminate.  Remember also that non-daemonic
2316    processes will be joined automatically.
2317
2318    An example which will deadlock is the following::
2319
2320        from multiprocessing import Process, Queue
2321
2322        def f(q):
2323            q.put('X' * 1000000)
2324
2325        if __name__ == '__main__':
2326            queue = Queue()
2327            p = Process(target=f, args=(queue,))
2328            p.start()
2329            p.join()                    # this deadlocks
2330            obj = queue.get()
2331
2332    A fix here would be to swap the last two lines (or simply remove the
2333    ``p.join()`` line).
2334
2335Explicitly pass resources to child processes
2336
2337    On Unix a child process can make use of a shared resource created in a
2338    parent process using a global resource.  However, it is better to pass the
2339    object as an argument to the constructor for the child process.
2340
2341    Apart from making the code (potentially) compatible with Windows this also
2342    ensures that as long as the child process is still alive the object will not
2343    be garbage collected in the parent process.  This might be important if some
2344    resource is freed when the object is garbage collected in the parent
2345    process.
2346
2347    So for instance ::
2348
2349        from multiprocessing import Process, Lock
2350
2351        def f():
2352            ... do something using "lock" ...
2353
2354        if __name__ == '__main__':
2355            lock = Lock()
2356            for i in range(10):
2357                Process(target=f).start()
2358
2359    should be rewritten as ::
2360
2361        from multiprocessing import Process, Lock
2362
2363        def f(l):
2364            ... do something using "l" ...
2365
2366        if __name__ == '__main__':
2367            lock = Lock()
2368            for i in range(10):
2369                Process(target=f, args=(lock,)).start()
2370
2371Beware of replacing :data:`sys.stdin` with a "file like object"
2372
2373    :mod:`multiprocessing` originally unconditionally called::
2374
2375        os.close(sys.stdin.fileno())
2376
2377    in the :meth:`multiprocessing.Process._bootstrap` method --- this resulted
2378    in issues with processes-in-processes. This has been changed to::
2379
2380        sys.stdin.close()
2381        sys.stdin = open(os.devnull)
2382
    This solves the fundamental issue of processes colliding with each other
    resulting in a bad file descriptor error, but introduces a potential danger
    to applications which replace :data:`sys.stdin` with a "file-like object"
    with output buffering.  The danger is that if multiple processes call
    :meth:`~io.IOBase.close` on this file-like object, it could result in the same
    data being flushed to the object multiple times, resulting in corruption.
2389
2390    If you write a file-like object and implement your own caching, you can
2391    make it fork-safe by storing the pid whenever you append to the cache,
2392    and discarding the cache when the pid changes. For example::
2393
2394       @property
2395       def cache(self):
2396           pid = os.getpid()
2397           if pid != self._pid:
2398               self._pid = pid
2399               self._cache = []
2400           return self._cache
2401
    For more information, see :issue:`5155`, :issue:`5313` and :issue:`5331`.
2403
2404Windows
2405~~~~~~~
2406
2407Since Windows lacks :func:`os.fork` it has a few extra restrictions:
2408
2409More picklability
2410
2411    Ensure that all arguments to :meth:`Process.__init__` are picklable.  This
2412    means, in particular, that bound or unbound methods cannot be used directly
2413    as the ``target`` argument on Windows --- just define a function and use
2414    that instead.
2415
2416    Also, if you subclass :class:`~multiprocessing.Process` then make sure that
2417    instances will be picklable when the :meth:`Process.start
2418    <multiprocessing.Process.start>` method is called.
2419
2420Global variables
2421
2422    Bear in mind that if code run in a child process tries to access a global
2423    variable, then the value it sees (if any) may not be the same as the value
2424    in the parent process at the time that :meth:`Process.start
2425    <multiprocessing.Process.start>` was called.
2426
2427    However, global variables which are just module level constants cause no
2428    problems.
2429
2430Safe importing of main module
2431
2432    Make sure that the main module can be safely imported by a new Python
    interpreter without causing unintended side effects (such as starting a new
2434    process).
2435
2436    For example, under Windows running the following module would fail with a
2437    :exc:`RuntimeError`::
2438
2439        from multiprocessing import Process
2440
2441        def foo():
2442            print 'hello'
2443
2444        p = Process(target=foo)
2445        p.start()
2446
2447    Instead one should protect the "entry point" of the program by using ``if
2448    __name__ == '__main__':`` as follows::
2449
2450       from multiprocessing import Process, freeze_support
2451
2452       def foo():
2453           print 'hello'
2454
2455       if __name__ == '__main__':
2456           freeze_support()
2457           p = Process(target=foo)
2458           p.start()
2459
2460    (The ``freeze_support()`` line can be omitted if the program will be run
2461    normally instead of frozen.)
2462
2463    This allows the newly spawned Python interpreter to safely import the module
2464    and then run the module's ``foo()`` function.
2465
2466    Similar restrictions apply if a pool or manager is created in the main
2467    module.
2468
2469
2470.. _multiprocessing-examples:
2471
2472Examples
2473--------
2474
2475Demonstration of how to create and use customized managers and proxies:
2476
2477.. literalinclude:: ../includes/mp_newtype.py
2478
2479
2480Using :class:`~multiprocessing.pool.Pool`:
2481
2482.. literalinclude:: ../includes/mp_pool.py
2483
2484
2485Synchronization types like locks, conditions and queues:
2486
2487.. literalinclude:: ../includes/mp_synchronize.py
2488
2489
2490An example showing how to use queues to feed tasks to a collection of worker
2491processes and collect the results:
2492
2493.. literalinclude:: ../includes/mp_workers.py
2494
2495
2496An example of how a pool of worker processes can each run a
2497:class:`SimpleHTTPServer.HttpServer` instance while sharing a single listening
2498socket.
2499
2500.. literalinclude:: ../includes/mp_webserver.py
2501
2502
2503Some simple benchmarks comparing :mod:`multiprocessing` with :mod:`threading`:
2504
2505.. literalinclude:: ../includes/mp_benchmarks.py
2506
2507