1:mod:`multiprocessing` --- Process-based "threading" interface
2==============================================================
3
4.. module:: multiprocessing
5   :synopsis: Process-based "threading" interface.
6
7.. versionadded:: 2.6
8
9
10Introduction
11----------------------
12
13:mod:`multiprocessing` is a package that supports spawning processes using an
14API similar to the :mod:`threading` module.  The :mod:`multiprocessing` package
15offers both local and remote concurrency, effectively side-stepping the
16:term:`Global Interpreter Lock` by using subprocesses instead of threads.  Due
17to this, the :mod:`multiprocessing` module allows the programmer to fully
18leverage multiple processors on a given machine.  It runs on both Unix and
19Windows.
20
21The :mod:`multiprocessing` module also introduces APIs which do not have
22analogs in the :mod:`threading` module.  A prime example of this is the
23:class:`Pool` object which offers a convenient means of parallelizing the
24execution of a function across multiple input values, distributing the
25input data across processes (data parallelism).  The following example
26demonstrates the common practice of defining such functions in a module so
27that child processes can successfully import that module.  This basic example
28of data parallelism using :class:`Pool`, ::
29
30   from multiprocessing import Pool
31
32   def f(x):
33       return x*x
34
35   if __name__ == '__main__':
36       p = Pool(5)
37       print(p.map(f, [1, 2, 3]))
38
39will print to standard output ::
40
41   [1, 4, 9]
42
43
44The :class:`Process` class
45~~~~~~~~~~~~~~~~~~~~~~~~~~
46
47In :mod:`multiprocessing`, processes are spawned by creating a :class:`Process`
48object and then calling its :meth:`~Process.start` method.  :class:`Process`
49follows the API of :class:`threading.Thread`.  A trivial example of a
50multiprocess program is ::
51
52    from multiprocessing import Process
53
54    def f(name):
55        print 'hello', name
56
57    if __name__ == '__main__':
58        p = Process(target=f, args=('bob',))
59        p.start()
60        p.join()
61
62To show the individual process IDs involved, here is an expanded example::
63
64    from multiprocessing import Process
65    import os
66
67    def info(title):
68        print title
69        print 'module name:', __name__
70        if hasattr(os, 'getppid'):  # only available on Unix
71            print 'parent process:', os.getppid()
72        print 'process id:', os.getpid()
73
74    def f(name):
75        info('function f')
76        print 'hello', name
77
78    if __name__ == '__main__':
79        info('main line')
80        p = Process(target=f, args=('bob',))
81        p.start()
82        p.join()
83
84For an explanation of why (on Windows) the ``if __name__ == '__main__'`` part is
85necessary, see :ref:`multiprocessing-programming`.
86
87
88Exchanging objects between processes
89~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
90
91:mod:`multiprocessing` supports two types of communication channel between
92processes:
93
94**Queues**
95
96   The :class:`~multiprocessing.Queue` class is a near clone of :class:`Queue.Queue`.  For
97   example::
98
99      from multiprocessing import Process, Queue
100
101      def f(q):
102          q.put([42, None, 'hello'])
103
104      if __name__ == '__main__':
105          q = Queue()
106          p = Process(target=f, args=(q,))
107          p.start()
108          print q.get()    # prints "[42, None, 'hello']"
109          p.join()
110
111   Queues are thread and process safe.
112
113**Pipes**
114
115   The :func:`Pipe` function returns a pair of connection objects connected by a
116   pipe which by default is duplex (two-way).  For example::
117
118      from multiprocessing import Process, Pipe
119
120      def f(conn):
121          conn.send([42, None, 'hello'])
122          conn.close()
123
124      if __name__ == '__main__':
125          parent_conn, child_conn = Pipe()
126          p = Process(target=f, args=(child_conn,))
127          p.start()
128          print parent_conn.recv()   # prints "[42, None, 'hello']"
129          p.join()
130
131   The two connection objects returned by :func:`Pipe` represent the two ends of
132   the pipe.  Each connection object has :meth:`~Connection.send` and
133   :meth:`~Connection.recv` methods (among others).  Note that data in a pipe
134   may become corrupted if two processes (or threads) try to read from or write
135   to the *same* end of the pipe at the same time.  Of course there is no risk
136   of corruption from processes using different ends of the pipe at the same
137   time.
138
139
140Synchronization between processes
141~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
142
143:mod:`multiprocessing` contains equivalents of all the synchronization
144primitives from :mod:`threading`.  For instance one can use a lock to ensure
145that only one process prints to standard output at a time::
146
147   from multiprocessing import Process, Lock
148
149   def f(l, i):
150       l.acquire()
151       print 'hello world', i
152       l.release()
153
154   if __name__ == '__main__':
155       lock = Lock()
156
157       for num in range(10):
158           Process(target=f, args=(lock, num)).start()
159
Without using the lock, output from the different processes is liable to get
all mixed up.
162
163
164Sharing state between processes
165~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
166
167As mentioned above, when doing concurrent programming it is usually best to
168avoid using shared state as far as possible.  This is particularly true when
169using multiple processes.
170
171However, if you really do need to use some shared data then
172:mod:`multiprocessing` provides a couple of ways of doing so.
173
174**Shared memory**
175
176   Data can be stored in a shared memory map using :class:`Value` or
177   :class:`Array`.  For example, the following code ::
178
179      from multiprocessing import Process, Value, Array
180
181      def f(n, a):
182          n.value = 3.1415927
183          for i in range(len(a)):
184              a[i] = -a[i]
185
186      if __name__ == '__main__':
187          num = Value('d', 0.0)
188          arr = Array('i', range(10))
189
190          p = Process(target=f, args=(num, arr))
191          p.start()
192          p.join()
193
194          print num.value
195          print arr[:]
196
197   will print ::
198
199      3.1415927
200      [0, -1, -2, -3, -4, -5, -6, -7, -8, -9]
201
202   The ``'d'`` and ``'i'`` arguments used when creating ``num`` and ``arr`` are
203   typecodes of the kind used by the :mod:`array` module: ``'d'`` indicates a
204   double precision float and ``'i'`` indicates a signed integer.  These shared
205   objects will be process and thread-safe.
206
207   For more flexibility in using shared memory one can use the
208   :mod:`multiprocessing.sharedctypes` module which supports the creation of
209   arbitrary ctypes objects allocated from shared memory.
210
211**Server process**
212
213   A manager object returned by :func:`Manager` controls a server process which
214   holds Python objects and allows other processes to manipulate them using
215   proxies.
216
217   A manager returned by :func:`Manager` will support types :class:`list`,
218   :class:`dict`, :class:`~managers.Namespace`, :class:`Lock`, :class:`RLock`,
219   :class:`Semaphore`, :class:`BoundedSemaphore`, :class:`Condition`,
220   :class:`Event`, :class:`~multiprocessing.Queue`, :class:`Value` and :class:`Array`.  For
221   example, ::
222
223      from multiprocessing import Process, Manager
224
225      def f(d, l):
226          d[1] = '1'
227          d['2'] = 2
228          d[0.25] = None
229          l.reverse()
230
231      if __name__ == '__main__':
232          manager = Manager()
233
234          d = manager.dict()
235          l = manager.list(range(10))
236
237          p = Process(target=f, args=(d, l))
238          p.start()
239          p.join()
240
241          print d
242          print l
243
244   will print ::
245
246       {0.25: None, 1: '1', '2': 2}
247       [9, 8, 7, 6, 5, 4, 3, 2, 1, 0]
248
249   Server process managers are more flexible than using shared memory objects
250   because they can be made to support arbitrary object types.  Also, a single
251   manager can be shared by processes on different computers over a network.
252   They are, however, slower than using shared memory.
253
254
255Using a pool of workers
256~~~~~~~~~~~~~~~~~~~~~~~
257
258The :class:`~multiprocessing.pool.Pool` class represents a pool of worker
processes.  It has methods which allow tasks to be offloaded to the worker
260processes in a few different ways.
261
262For example::
263
264   from multiprocessing import Pool, TimeoutError
265   import time
266   import os
267
268   def f(x):
269       return x*x
270
271   if __name__ == '__main__':
272       pool = Pool(processes=4)              # start 4 worker processes
273
274       # print "[0, 1, 4,..., 81]"
275       print pool.map(f, range(10))
276
277       # print same numbers in arbitrary order
278       for i in pool.imap_unordered(f, range(10)):
279           print i
280
281       # evaluate "f(20)" asynchronously
282       res = pool.apply_async(f, (20,))      # runs in *only* one process
283       print res.get(timeout=1)              # prints "400"
284
285       # evaluate "os.getpid()" asynchronously
286       res = pool.apply_async(os.getpid, ()) # runs in *only* one process
287       print res.get(timeout=1)              # prints the PID of that process
288
289       # launching multiple evaluations asynchronously *may* use more processes
290       multiple_results = [pool.apply_async(os.getpid, ()) for i in range(4)]
291       print [res.get(timeout=1) for res in multiple_results]
292
293       # make a single worker sleep for 10 secs
294       res = pool.apply_async(time.sleep, (10,))
295       try:
296           print res.get(timeout=1)
297       except TimeoutError:
298           print "We lacked patience and got a multiprocessing.TimeoutError"
299
300Note that the methods of a pool should only ever be used by the
301process which created it.
302
303.. note::
304
305   Functionality within this package requires that the ``__main__`` module be
   importable by the children.  This is covered in
   :ref:`multiprocessing-programming`; however, it is worth pointing out here.
   This means that some examples, such as the :class:`Pool` examples, will not
   work in the interactive interpreter.
309   For example::
310
311      >>> from multiprocessing import Pool
312      >>> p = Pool(5)
313      >>> def f(x):
314      ...     return x*x
315      ...
316      >>> p.map(f, [1,2,3])
317      Process PoolWorker-1:
318      Process PoolWorker-2:
319      Process PoolWorker-3:
320      Traceback (most recent call last):
321      Traceback (most recent call last):
322      Traceback (most recent call last):
323      AttributeError: 'module' object has no attribute 'f'
324      AttributeError: 'module' object has no attribute 'f'
325      AttributeError: 'module' object has no attribute 'f'
326
327   (If you try this it will actually output three full tracebacks
328   interleaved in a semi-random fashion, and then you may have to
329   stop the master process somehow.)
330
331
332Reference
333---------
334
335The :mod:`multiprocessing` package mostly replicates the API of the
336:mod:`threading` module.
337
338
339:class:`Process` and exceptions
340~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
341
342.. class:: Process(group=None, target=None, name=None, args=(), kwargs={})
343
344   Process objects represent activity that is run in a separate process. The
345   :class:`Process` class has equivalents of all the methods of
346   :class:`threading.Thread`.
347
348   The constructor should always be called with keyword arguments. *group*
349   should always be ``None``; it exists solely for compatibility with
350   :class:`threading.Thread`.  *target* is the callable object to be invoked by
351   the :meth:`run()` method.  It defaults to ``None``, meaning nothing is
352   called. *name* is the process name.  By default, a unique name is constructed
353   of the form 'Process-N\ :sub:`1`:N\ :sub:`2`:...:N\ :sub:`k`' where N\
354   :sub:`1`,N\ :sub:`2`,...,N\ :sub:`k` is a sequence of integers whose length
355   is determined by the *generation* of the process.  *args* is the argument
356   tuple for the target invocation.  *kwargs* is a dictionary of keyword
357   arguments for the target invocation.  By default, no arguments are passed to
358   *target*.
359
360   If a subclass overrides the constructor, it must make sure it invokes the
361   base class constructor (:meth:`Process.__init__`) before doing anything else
362   to the process.
363
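   For illustration, here is a minimal sketch of subclassing :class:`Process`
   (the ``Worker`` class and its *count* argument are purely illustrative)::

      from multiprocessing import Process

      class Worker(Process):
          def __init__(self, count):
              # Invoke the base class constructor before anything else.
              Process.__init__(self)
              self.count = count

          def run(self):
              # run() executes in the child process once start() is called.
              print 'counting to', self.count

      if __name__ == '__main__':
          w = Worker(5)
          w.start()
          w.join()
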
364   .. method:: run()
365
366      Method representing the process's activity.
367
368      You may override this method in a subclass.  The standard :meth:`run`
369      method invokes the callable object passed to the object's constructor as
370      the target argument, if any, with sequential and keyword arguments taken
371      from the *args* and *kwargs* arguments, respectively.
372
373   .. method:: start()
374
375      Start the process's activity.
376
377      This must be called at most once per process object.  It arranges for the
378      object's :meth:`run` method to be invoked in a separate process.
379
380   .. method:: join([timeout])
381
382      Block the calling thread until the process whose :meth:`join` method is
383      called terminates or until the optional timeout occurs.
384
385      If *timeout* is ``None`` then there is no timeout.
386
387      A process can be joined many times.
388
389      A process cannot join itself because this would cause a deadlock.  It is
390      an error to attempt to join a process before it has been started.
391
392   .. attribute:: name
393
394      The process's name.
395
396      The name is a string used for identification purposes only.  It has no
397      semantics.  Multiple processes may be given the same name.  The initial
398      name is set by the constructor.
399
   .. method:: is_alive()
401
402      Return whether the process is alive.
403
404      Roughly, a process object is alive from the moment the :meth:`start`
405      method returns until the child process terminates.
406
407   .. attribute:: daemon
408
409      The process's daemon flag, a Boolean value.  This must be set before
410      :meth:`start` is called.
411
412      The initial value is inherited from the creating process.
413
414      When a process exits, it attempts to terminate all of its daemonic child
415      processes.
416
417      Note that a daemonic process is not allowed to create child processes.
418      Otherwise a daemonic process would leave its children orphaned if it gets
      terminated when its parent process exits.  Additionally, these are **not**
      Unix daemons or services; they are normal processes that will be
      terminated (and not joined) if non-daemonic processes have exited.
422
423   In addition to the  :class:`threading.Thread` API, :class:`Process` objects
424   also support the following attributes and methods:
425
426   .. attribute:: pid
427
428      Return the process ID.  Before the process is spawned, this will be
429      ``None``.
430
431   .. attribute:: exitcode
432
433      The child's exit code.  This will be ``None`` if the process has not yet
434      terminated.  A negative value *-N* indicates that the child was terminated
435      by signal *N*.
436
437   .. attribute:: authkey
438
439      The process's authentication key (a byte string).
440
441      When :mod:`multiprocessing` is initialized the main process is assigned a
442      random string using :func:`os.urandom`.
443
444      When a :class:`Process` object is created, it will inherit the
445      authentication key of its parent process, although this may be changed by
446      setting :attr:`authkey` to another byte string.
447
448      See :ref:`multiprocessing-auth-keys`.
449
450   .. method:: terminate()
451
452      Terminate the process.  On Unix this is done using the ``SIGTERM`` signal;
453      on Windows :c:func:`TerminateProcess` is used.  Note that exit handlers and
454      finally clauses, etc., will not be executed.
455
456      Note that descendant processes of the process will *not* be terminated --
457      they will simply become orphaned.
458
459      .. warning::
460
461         If this method is used when the associated process is using a pipe or
462         queue then the pipe or queue is liable to become corrupted and may
         become unusable by other processes.  Similarly, if the process has
464         acquired a lock or semaphore etc. then terminating it is liable to
465         cause other processes to deadlock.
466
467   Note that the :meth:`start`, :meth:`join`, :meth:`is_alive`,
468   :meth:`terminate` and :attr:`exitcode` methods should only be called by
469   the process that created the process object.
470
471   Example usage of some of the methods of :class:`Process`:
472
473   .. doctest::
474
475       >>> import multiprocessing, time, signal
476       >>> p = multiprocessing.Process(target=time.sleep, args=(1000,))
477       >>> print p, p.is_alive()
478       <Process(Process-1, initial)> False
479       >>> p.start()
480       >>> print p, p.is_alive()
481       <Process(Process-1, started)> True
482       >>> p.terminate()
483       >>> time.sleep(0.1)
484       >>> print p, p.is_alive()
485       <Process(Process-1, stopped[SIGTERM])> False
486       >>> p.exitcode == -signal.SIGTERM
487       True
488
489
490.. exception:: BufferTooShort
491
492   Exception raised by :meth:`Connection.recv_bytes_into()` when the supplied
493   buffer object is too small for the message read.
494
495   If ``e`` is an instance of :exc:`BufferTooShort` then ``e.args[0]`` will give
496   the message as a byte string.
497
498
499Pipes and Queues
500~~~~~~~~~~~~~~~~
501
502When using multiple processes, one generally uses message passing for
503communication between processes and avoids having to use any synchronization
504primitives like locks.
505
506For passing messages one can use :func:`Pipe` (for a connection between two
507processes) or a queue (which allows multiple producers and consumers).
508
509The :class:`~multiprocessing.Queue`, :class:`multiprocessing.queues.SimpleQueue` and :class:`JoinableQueue` types are multi-producer,
510multi-consumer FIFO queues modelled on the :class:`Queue.Queue` class in the
511standard library.  They differ in that :class:`~multiprocessing.Queue` lacks the
512:meth:`~Queue.Queue.task_done` and :meth:`~Queue.Queue.join` methods introduced
513into Python 2.5's :class:`Queue.Queue` class.
514
515If you use :class:`JoinableQueue` then you **must** call
516:meth:`JoinableQueue.task_done` for each task removed from the queue or else the
517semaphore used to count the number of unfinished tasks may eventually overflow,
518raising an exception.
519
520Note that one can also create a shared queue by using a manager object -- see
521:ref:`multiprocessing-managers`.
522
523.. note::
524
525   :mod:`multiprocessing` uses the usual :exc:`Queue.Empty` and
526   :exc:`Queue.Full` exceptions to signal a timeout.  They are not available in
527   the :mod:`multiprocessing` namespace so you need to import them from
528   :mod:`Queue`.
529
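For example, a brief sketch of handling a timeout (the queue and the one
second timeout are illustrative)::

   import Queue                              # the standard library module
   import multiprocessing

   if __name__ == '__main__':
       q = multiprocessing.Queue()
       try:
           item = q.get(timeout=1)           # blocks for at most one second
       except Queue.Empty:
           print 'no item was available within the timeout'
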
530.. note::
531
532   When an object is put on a queue, the object is pickled and a
533   background thread later flushes the pickled data to an underlying
534   pipe.  This has some consequences which are a little surprising,
535   but should not cause any practical difficulties -- if they really
536   bother you then you can instead use a queue created with a
537   :ref:`manager <multiprocessing-managers>`.
538
539   (1) After putting an object on an empty queue there may be an
540       infinitesimal delay before the queue's :meth:`~Queue.empty`
541       method returns :const:`False` and :meth:`~Queue.get_nowait` can
542       return without raising :exc:`Queue.Empty`.
543
544   (2) If multiple processes are enqueuing objects, it is possible for
545       the objects to be received at the other end out-of-order.
546       However, objects enqueued by the same process will always be in
547       the expected order with respect to each other.
548
549.. warning::
550
551   If a process is killed using :meth:`Process.terminate` or :func:`os.kill`
552   while it is trying to use a :class:`~multiprocessing.Queue`, then the data in the queue is
553   likely to become corrupted.  This may cause any other process to get an
554   exception when it tries to use the queue later on.
555
556.. warning::
557
558   As mentioned above, if a child process has put items on a queue (and it has
559   not used :meth:`JoinableQueue.cancel_join_thread
560   <multiprocessing.Queue.cancel_join_thread>`), then that process will
561   not terminate until all buffered items have been flushed to the pipe.
562
563   This means that if you try joining that process you may get a deadlock unless
564   you are sure that all items which have been put on the queue have been
565   consumed.  Similarly, if the child process is non-daemonic then the parent
566   process may hang on exit when it tries to join all its non-daemonic children.
567
568   Note that a queue created using a manager does not have this issue.  See
569   :ref:`multiprocessing-programming`.
570
571For an example of the usage of queues for interprocess communication see
572:ref:`multiprocessing-examples`.
573
574
575.. function:: Pipe([duplex])
576
577   Returns a pair ``(conn1, conn2)`` of :class:`Connection` objects representing
578   the ends of a pipe.
579
580   If *duplex* is ``True`` (the default) then the pipe is bidirectional.  If
581   *duplex* is ``False`` then the pipe is unidirectional: ``conn1`` can only be
582   used for receiving messages and ``conn2`` can only be used for sending
583   messages.
584
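   As a brief sketch (the variable names are illustrative), a one-way pipe can
   be created and used within a single process like this::

      from multiprocessing import Pipe

      # With duplex=False, recv_end can only receive and send_end can only send.
      recv_end, send_end = Pipe(duplex=False)
      send_end.send('ping')
      print recv_end.recv()        # prints "ping"
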
585
586.. class:: Queue([maxsize])
587
588   Returns a process shared queue implemented using a pipe and a few
589   locks/semaphores.  When a process first puts an item on the queue a feeder
590   thread is started which transfers objects from a buffer into the pipe.
591
592   The usual :exc:`Queue.Empty` and :exc:`Queue.Full` exceptions from the
593   standard library's :mod:`Queue` module are raised to signal timeouts.
594
595   :class:`~multiprocessing.Queue` implements all the methods of :class:`Queue.Queue` except for
596   :meth:`~Queue.Queue.task_done` and :meth:`~Queue.Queue.join`.
597
598   .. method:: qsize()
599
600      Return the approximate size of the queue.  Because of
601      multithreading/multiprocessing semantics, this number is not reliable.
602
603      Note that this may raise :exc:`NotImplementedError` on Unix platforms like
604      Mac OS X where ``sem_getvalue()`` is not implemented.
605
606   .. method:: empty()
607
608      Return ``True`` if the queue is empty, ``False`` otherwise.  Because of
609      multithreading/multiprocessing semantics, this is not reliable.
610
611   .. method:: full()
612
613      Return ``True`` if the queue is full, ``False`` otherwise.  Because of
614      multithreading/multiprocessing semantics, this is not reliable.
615
616   .. method:: put(obj[, block[, timeout]])
617
618      Put obj into the queue.  If the optional argument *block* is ``True``
619      (the default) and *timeout* is ``None`` (the default), block if necessary until
620      a free slot is available.  If *timeout* is a positive number, it blocks at
621      most *timeout* seconds and raises the :exc:`Queue.Full` exception if no
622      free slot was available within that time.  Otherwise (*block* is
623      ``False``), put an item on the queue if a free slot is immediately
624      available, else raise the :exc:`Queue.Full` exception (*timeout* is
625      ignored in that case).
626
627   .. method:: put_nowait(obj)
628
629      Equivalent to ``put(obj, False)``.
630
631   .. method:: get([block[, timeout]])
632
633      Remove and return an item from the queue.  If optional args *block* is
634      ``True`` (the default) and *timeout* is ``None`` (the default), block if
635      necessary until an item is available.  If *timeout* is a positive number,
636      it blocks at most *timeout* seconds and raises the :exc:`Queue.Empty`
637      exception if no item was available within that time.  Otherwise (block is
638      ``False``), return an item if one is immediately available, else raise the
639      :exc:`Queue.Empty` exception (*timeout* is ignored in that case).
640
641   .. method:: get_nowait()
642
643      Equivalent to ``get(False)``.
644
645   :class:`~multiprocessing.Queue` has a few additional methods not found in
646   :class:`Queue.Queue`.  These methods are usually unnecessary for most
647   code:
648
649   .. method:: close()
650
651      Indicate that no more data will be put on this queue by the current
652      process.  The background thread will quit once it has flushed all buffered
653      data to the pipe.  This is called automatically when the queue is garbage
654      collected.
655
656   .. method:: join_thread()
657
658      Join the background thread.  This can only be used after :meth:`close` has
659      been called.  It blocks until the background thread exits, ensuring that
660      all data in the buffer has been flushed to the pipe.
661
662      By default if a process is not the creator of the queue then on exit it
663      will attempt to join the queue's background thread.  The process can call
664      :meth:`cancel_join_thread` to make :meth:`join_thread` do nothing.
665
666   .. method:: cancel_join_thread()
667
668      Prevent :meth:`join_thread` from blocking.  In particular, this prevents
669      the background thread from being joined automatically when the process
670      exits -- see :meth:`join_thread`.
671
672      A better name for this method might be
673      ``allow_exit_without_flush()``.  It is likely to cause enqueued
      data to be lost, and you almost certainly will not need to use it.
675      It is really only there if you need the current process to exit
676      immediately without waiting to flush enqueued data to the
677      underlying pipe, and you don't care about lost data.
678
679   .. note::
680
681      This class's functionality requires a functioning shared semaphore
682      implementation on the host operating system. Without one, the
683      functionality in this class will be disabled, and attempts to
684      instantiate a :class:`Queue` will result in an :exc:`ImportError`. See
685      :issue:`3770` for additional information.  The same holds true for any
686      of the specialized queue types listed below.
687
688
689.. class:: multiprocessing.queues.SimpleQueue()
690
691   It is a simplified :class:`~multiprocessing.Queue` type, very close to a locked :class:`Pipe`.
692
693   .. method:: empty()
694
695      Return ``True`` if the queue is empty, ``False`` otherwise.
696
697   .. method:: get()
698
699      Remove and return an item from the queue.
700
701   .. method:: put(item)
702
703      Put *item* into the queue.
704
705
706.. class:: JoinableQueue([maxsize])
707
708   :class:`JoinableQueue`, a :class:`~multiprocessing.Queue` subclass, is a queue which
709   additionally has :meth:`task_done` and :meth:`join` methods.
710
711   .. method:: task_done()
712
713      Indicate that a formerly enqueued task is complete. Used by queue consumer
714      threads.  For each :meth:`~Queue.get` used to fetch a task, a subsequent
715      call to :meth:`task_done` tells the queue that the processing on the task
716      is complete.
717
718      If a :meth:`~Queue.Queue.join` is currently blocking, it will resume when all
719      items have been processed (meaning that a :meth:`task_done` call was
720      received for every item that had been :meth:`~Queue.put` into the queue).
721
722      Raises a :exc:`ValueError` if called more times than there were items
723      placed in the queue.
724
725
726   .. method:: join()
727
728      Block until all items in the queue have been gotten and processed.
729
730      The count of unfinished tasks goes up whenever an item is added to the
731      queue.  The count goes down whenever a consumer thread calls
732      :meth:`task_done` to indicate that the item was retrieved and all work on
733      it is complete.  When the count of unfinished tasks drops to zero,
734      :meth:`~Queue.Queue.join` unblocks.
735
736
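   A hedged sketch of typical usage (the ``worker`` function and the ``None``
   sentinel are illustrative, not part of the API)::

      from multiprocessing import Process, JoinableQueue

      def worker(q):
          while True:
              item = q.get()
              if item is None:          # sentinel value: stop the worker
                  q.task_done()
                  break
              print 'processing', item
              q.task_done()             # one task_done() call per get()

      if __name__ == '__main__':
          q = JoinableQueue()
          p = Process(target=worker, args=(q,))
          p.start()
          for i in range(5):
              q.put(i)
          q.put(None)
          q.join()                      # blocks until every item is task_done()
          p.join()
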
737Miscellaneous
738~~~~~~~~~~~~~
739
740.. function:: active_children()
741
   Return a list of all live children of the current process.
743
744   Calling this has the side effect of "joining" any processes which have
745   already finished.
746
747.. function:: cpu_count()
748
749   Return the number of CPUs in the system.  May raise
750   :exc:`NotImplementedError`.
751
752.. function:: current_process()
753
754   Return the :class:`Process` object corresponding to the current process.
755
756   An analogue of :func:`threading.current_thread`.
757
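A brief, illustrative sketch of these helpers (the exact output will vary)::

   import multiprocessing

   if __name__ == '__main__':
       print multiprocessing.cpu_count()                # e.g. 4
       print multiprocessing.current_process().name     # "MainProcess"
       p = multiprocessing.Process(target=abs, args=(-1,))
       p.start()
       print multiprocessing.active_children()          # typically includes p
       p.join()
       multiprocessing.active_children()                # joins finished children
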
758.. function:: freeze_support()
759
760   Add support for when a program which uses :mod:`multiprocessing` has been
761   frozen to produce a Windows executable.  (Has been tested with **py2exe**,
762   **PyInstaller** and **cx_Freeze**.)
763
764   One needs to call this function straight after the ``if __name__ ==
765   '__main__'`` line of the main module.  For example::
766
767      from multiprocessing import Process, freeze_support
768
769      def f():
770          print 'hello world!'
771
772      if __name__ == '__main__':
773          freeze_support()
774          Process(target=f).start()
775
776   If the ``freeze_support()`` line is omitted then trying to run the frozen
777   executable will raise :exc:`RuntimeError`.
778
779   Calling ``freeze_support()`` has no effect when invoked on any operating
780   system other than Windows.  In addition, if the module is being run
781   normally by the Python interpreter on Windows (the program has not been
782   frozen), then ``freeze_support()`` has no effect.
783
784.. function:: set_executable()
785
786   Sets the path of the Python interpreter to use when starting a child process.
787   (By default :data:`sys.executable` is used).  Embedders will probably need to
   do something like ::
789
790      set_executable(os.path.join(sys.exec_prefix, 'pythonw.exe'))
791
792   before they can create child processes.  (Windows only)
793
794
795.. note::
796
797   :mod:`multiprocessing` contains no analogues of
798   :func:`threading.active_count`, :func:`threading.enumerate`,
799   :func:`threading.settrace`, :func:`threading.setprofile`,
800   :class:`threading.Timer`, or :class:`threading.local`.
801
802
803Connection Objects
804~~~~~~~~~~~~~~~~~~
805
806.. currentmodule:: None
807
808Connection objects allow the sending and receiving of picklable objects or
809strings.  They can be thought of as message oriented connected sockets.
810
811Connection objects are usually created using
812:func:`Pipe <multiprocessing.Pipe>` -- see also
813:ref:`multiprocessing-listeners-clients`.
814
815.. class:: Connection
816
817   .. method:: send(obj)
818
819      Send an object to the other end of the connection which should be read
820      using :meth:`recv`.
821
822      The object must be picklable.  Very large pickles (approximately 32 MB+,
823      though it depends on the OS) may raise a :exc:`ValueError` exception.
824
825   .. method:: recv()
826
827      Return an object sent from the other end of the connection using
828      :meth:`send`.  Blocks until there is something to receive.  Raises
829      :exc:`EOFError` if there is nothing left to receive
830      and the other end was closed.
831
832   .. method:: fileno()
833
834      Return the file descriptor or handle used by the connection.
835
836   .. method:: close()
837
838      Close the connection.
839
840      This is called automatically when the connection is garbage collected.
841
842   .. method:: poll([timeout])
843
844      Return whether there is any data available to be read.
845
846      If *timeout* is not specified then it will return immediately.  If
847      *timeout* is a number then this specifies the maximum time in seconds to
848      block.  If *timeout* is ``None`` then an infinite timeout is used.
849
850   .. method:: send_bytes(buffer[, offset[, size]])
851
852      Send byte data from an object supporting the buffer interface as a
853      complete message.
854
855      If *offset* is given then data is read from that position in *buffer*.  If
856      *size* is given then that many bytes will be read from buffer.  Very large
857      buffers (approximately 32 MB+, though it depends on the OS) may raise a
      :exc:`ValueError` exception.
859
860   .. method:: recv_bytes([maxlength])
861
862      Return a complete message of byte data sent from the other end of the
863      connection as a string.  Blocks until there is something to receive.
864      Raises :exc:`EOFError` if there is nothing left
865      to receive and the other end has closed.
866
867      If *maxlength* is specified and the message is longer than *maxlength*
868      then :exc:`IOError` is raised and the connection will no longer be
869      readable.
870
871   .. method:: recv_bytes_into(buffer[, offset])
872
873      Read into *buffer* a complete message of byte data sent from the other end
874      of the connection and return the number of bytes in the message.  Blocks
875      until there is something to receive.  Raises
876      :exc:`EOFError` if there is nothing left to receive and the other end was
877      closed.
878
879      *buffer* must be an object satisfying the writable buffer interface.  If
880      *offset* is given then the message will be written into the buffer from
881      that position.  Offset must be a non-negative integer less than the
882      length of *buffer* (in bytes).
883
884      If the buffer is too short then a :exc:`BufferTooShort` exception is
885      raised and the complete message is available as ``e.args[0]`` where ``e``
886      is the exception instance.
887
888
889For example:
890
891.. doctest::
892
893    >>> from multiprocessing import Pipe
894    >>> a, b = Pipe()
895    >>> a.send([1, 'hello', None])
896    >>> b.recv()
897    [1, 'hello', None]
898    >>> b.send_bytes('thank you')
899    >>> a.recv_bytes()
900    'thank you'
901    >>> import array
902    >>> arr1 = array.array('i', range(5))
903    >>> arr2 = array.array('i', [0] * 10)
904    >>> a.send_bytes(arr1)
905    >>> count = b.recv_bytes_into(arr2)
906    >>> assert count == len(arr1) * arr1.itemsize
907    >>> arr2
908    array('i', [0, 1, 2, 3, 4, 0, 0, 0, 0, 0])
909
910
911.. warning::
912
913    The :meth:`Connection.recv` method automatically unpickles the data it
914    receives, which can be a security risk unless you can trust the process
915    which sent the message.
916
917    Therefore, unless the connection object was produced using :func:`Pipe` you
918    should only use the :meth:`~Connection.recv` and :meth:`~Connection.send`
919    methods after performing some sort of authentication.  See
920    :ref:`multiprocessing-auth-keys`.
921
922.. warning::
923
924    If a process is killed while it is trying to read or write to a pipe then
925    the data in the pipe is likely to become corrupted, because it may become
926    impossible to be sure where the message boundaries lie.
927
928
929Synchronization primitives
930~~~~~~~~~~~~~~~~~~~~~~~~~~
931
932.. currentmodule:: multiprocessing
933
934Generally synchronization primitives are not as necessary in a multiprocess
935program as they are in a multithreaded program.  See the documentation for
936:mod:`threading` module.
937
938Note that one can also create synchronization primitives by using a manager
939object -- see :ref:`multiprocessing-managers`.
940
941.. class:: BoundedSemaphore([value])
942
943   A bounded semaphore object: a close analog of
944   :class:`threading.BoundedSemaphore`.
945
   The only difference from its close analog is that its ``acquire`` method's
   first argument is named *block* and it supports an optional second argument
   *timeout*, consistent with :meth:`Lock.acquire`.
949
950   .. note::
951      On Mac OS X, this is indistinguishable from :class:`Semaphore` because
952      ``sem_getvalue()`` is not implemented on that platform.
953
954.. class:: Condition([lock])
955
956   A condition variable: a clone of :class:`threading.Condition`.
957
958   If *lock* is specified then it should be a :class:`Lock` or :class:`RLock`
959   object from :mod:`multiprocessing`.
960
961.. class:: Event()
962
   A clone of :class:`threading.Event`.

   Its :meth:`wait` method returns the state of the internal semaphore on
   exit, so it will always return ``True`` except if a timeout is given and
   the operation times out.

   .. versionchanged:: 2.7
      Previously, :meth:`wait` always returned ``None``.
970
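   A minimal sketch (the ``wait_for_event`` function name is illustrative)::

      from multiprocessing import Process, Event

      def wait_for_event(e):
          print 'waiting for the event to be set'
          e.wait()                  # blocks until the event is set
          print 'event is set'

      if __name__ == '__main__':
          e = Event()
          p = Process(target=wait_for_event, args=(e,))
          p.start()
          e.set()                   # wakes up the waiting child
          p.join()
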
971
972.. class:: Lock()
973
974   A non-recursive lock object: a close analog of :class:`threading.Lock`.
975   Once a process or thread has acquired a lock, subsequent attempts to
976   acquire it from any process or thread will block until it is released;
977   any process or thread may release it.  The concepts and behaviors of
978   :class:`threading.Lock` as it applies to threads are replicated here in
979   :class:`multiprocessing.Lock` as it applies to either processes or threads,
980   except as noted.
981
982   Note that :class:`Lock` is actually a factory function which returns an
983   instance of ``multiprocessing.synchronize.Lock`` initialized with a
984   default context.
985
986   :class:`Lock` supports the :term:`context manager` protocol and thus may be
987   used in :keyword:`with` statements.
988
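   For example, a rough sketch (run within a single process for brevity)::

      from multiprocessing import Lock

      lock = Lock()
      with lock:                    # acquired on entry, released on exit
          print 'only one process at a time runs this block'

      # Spelled out, with a timeout (threading.Lock.acquire has no timeout):
      if lock.acquire(block=True, timeout=5):
          try:
              print 'got the lock within five seconds'
          finally:
              lock.release()
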
989   .. method:: acquire(block=True, timeout=None)
990
991      Acquire a lock, blocking or non-blocking.
992
993      With the *block* argument set to ``True`` (the default), the method call
994      will block until the lock is in an unlocked state, then set it to locked
995      and return ``True``.  Note that the name of this first argument differs
996      from that in :meth:`threading.Lock.acquire`.
997
998      With the *block* argument set to ``False``, the method call does not
999      block.  If the lock is currently in a locked state, return ``False``;
1000      otherwise set the lock to a locked state and return ``True``.
1001
1002      When invoked with a positive, floating-point value for *timeout*, block
1003      for at most the number of seconds specified by *timeout* as long as
1004      the lock can not be acquired.  Invocations with a negative value for
1005      *timeout* are equivalent to a *timeout* of zero.  Invocations with a
1006      *timeout* value of ``None`` (the default) set the timeout period to
1007      infinite.  The *timeout* argument has no practical implications if the
1008      *block* argument is set to ``False`` and is thus ignored.  Returns
1009      ``True`` if the lock has been acquired or ``False`` if the timeout period
1010      has elapsed.  Note that the *timeout* argument does not exist in this
1011      method's analog, :meth:`threading.Lock.acquire`.
1012
1013   .. method:: release()
1014
1015      Release a lock.  This can be called from any process or thread, not only
1016      the process or thread which originally acquired the lock.
1017
1018      Behavior is the same as in :meth:`threading.Lock.release` except that
1019      when invoked on an unlocked lock, a :exc:`ValueError` is raised.
1020
1021
1022.. class:: RLock()
1023
1024   A recursive lock object: a close analog of :class:`threading.RLock`.  A
1025   recursive lock must be released by the process or thread that acquired it.
1026   Once a process or thread has acquired a recursive lock, the same process
1027   or thread may acquire it again without blocking; that process or thread
1028   must release it once for each time it has been acquired.
1029
1030   Note that :class:`RLock` is actually a factory function which returns an
1031   instance of ``multiprocessing.synchronize.RLock`` initialized with a
1032   default context.
1033
1034   :class:`RLock` supports the :term:`context manager` protocol and thus may be
1035   used in :keyword:`with` statements.
1036
1037
1038   .. method:: acquire(block=True, timeout=None)
1039
1040      Acquire a lock, blocking or non-blocking.
1041
1042      When invoked with the *block* argument set to ``True``, block until the
1043      lock is in an unlocked state (not owned by any process or thread) unless
1044      the lock is already owned by the current process or thread.  The current
1045      process or thread then takes ownership of the lock (if it does not
1046      already have ownership) and the recursion level inside the lock increments
1047      by one, resulting in a return value of ``True``.  Note that there are
1048      several differences in this first argument's behavior compared to the
1049      implementation of :meth:`threading.RLock.acquire`, starting with the name
1050      of the argument itself.
1051
1052      When invoked with the *block* argument set to ``False``, do not block.
1053      If the lock has already been acquired (and thus is owned) by another
1054      process or thread, the current process or thread does not take ownership
1055      and the recursion level within the lock is not changed, resulting in
1056      a return value of ``False``.  If the lock is in an unlocked state, the
1057      current process or thread takes ownership and the recursion level is
1058      incremented, resulting in a return value of ``True``.
1059
1060      Use and behaviors of the *timeout* argument are the same as in
1061      :meth:`Lock.acquire`.  Note that the *timeout* argument does
1062      not exist in this method's analog, :meth:`threading.RLock.acquire`.
1063
1064
1065   .. method:: release()
1066
1067      Release a lock, decrementing the recursion level.  If after the
1068      decrement the recursion level is zero, reset the lock to unlocked (not
1069      owned by any process or thread) and if any other processes or threads
1070      are blocked waiting for the lock to become unlocked, allow exactly one
1071      of them to proceed.  If after the decrement the recursion level is still
1072      nonzero, the lock remains locked and owned by the calling process or
1073      thread.
1074
1075      Only call this method when the calling process or thread owns the lock.
1076      An :exc:`AssertionError` is raised if this method is called by a process
1077      or thread other than the owner or if the lock is in an unlocked (unowned)
1078      state.  Note that the type of exception raised in this situation
1079      differs from the implemented behavior in :meth:`threading.RLock.release`.
1080
1081
1082.. class:: Semaphore([value])
1083
1084   A semaphore object: a close analog of :class:`threading.Semaphore`.
1085
   The only difference from its close analog is that its ``acquire`` method's
   first argument is named *block* and it supports an optional second argument
   *timeout*, consistent with :meth:`Lock.acquire`.
1089
1090.. note::
1091
1092   The :meth:`acquire` method of :class:`BoundedSemaphore`, :class:`Lock`,
1093   :class:`RLock` and :class:`Semaphore` has a timeout parameter not supported
1094   by the equivalents in :mod:`threading`.  The signature is
1095   ``acquire(block=True, timeout=None)`` with keyword parameters being
1096   acceptable.  If *block* is ``True`` and *timeout* is not ``None`` then it
1097   specifies a timeout in seconds.  If *block* is ``False`` then *timeout* is
1098   ignored.
1099
1100   On Mac OS X, ``sem_timedwait`` is unsupported, so calling ``acquire()`` with
1101   a timeout will emulate that function's behavior using a sleeping loop.
1102
1103.. note::
1104
1105   If the SIGINT signal generated by :kbd:`Ctrl-C` arrives while the main thread is
1106   blocked by a call to :meth:`BoundedSemaphore.acquire`, :meth:`Lock.acquire`,
1107   :meth:`RLock.acquire`, :meth:`Semaphore.acquire`, :meth:`Condition.acquire`
1108   or :meth:`Condition.wait` then the call will be immediately interrupted and
1109   :exc:`KeyboardInterrupt` will be raised.
1110
1111   This differs from the behaviour of :mod:`threading` where SIGINT will be
1112   ignored while the equivalent blocking calls are in progress.
1113
1114.. note::
1115
1116   Some of this package's functionality requires a functioning shared semaphore
1117   implementation on the host operating system. Without one, the
1118   :mod:`multiprocessing.synchronize` module will be disabled, and attempts to
1119   import it will result in an :exc:`ImportError`. See
1120   :issue:`3770` for additional information.
1121
1122
1123Shared :mod:`ctypes` Objects
1124~~~~~~~~~~~~~~~~~~~~~~~~~~~~
1125
1126It is possible to create shared objects using shared memory which can be
1127inherited by child processes.
1128
1129.. function:: Value(typecode_or_type, *args[, lock])
1130
1131   Return a :mod:`ctypes` object allocated from shared memory.  By default the
1132   return value is actually a synchronized wrapper for the object.
1133
1134   *typecode_or_type* determines the type of the returned object: it is either a
1135   ctypes type or a one character typecode of the kind used by the :mod:`array`
1136   module.  *\*args* is passed on to the constructor for the type.
1137
1138   If *lock* is ``True`` (the default) then a new recursive lock
1139   object is created to synchronize access to the value.  If *lock* is
1140   a :class:`Lock` or :class:`RLock` object then that will be used to
1141   synchronize access to the value.  If *lock* is ``False`` then
1142   access to the returned object will not be automatically protected
1143   by a lock, so it will not necessarily be "process-safe".
1144
1145   Operations like ``+=`` which involve a read and write are not
1146   atomic.  So if, for instance, you want to atomically increment a
1147   shared value it is insufficient to just do ::
1148
1149       counter.value += 1
1150
1151   Assuming the associated lock is recursive (which it is by default)
1152   you can instead do ::
1153
1154       with counter.get_lock():
1155           counter.value += 1
1156
1157   Note that *lock* is a keyword-only argument.
1158
1159.. function:: Array(typecode_or_type, size_or_initializer, *, lock=True)
1160
1161   Return a ctypes array allocated from shared memory.  By default the return
1162   value is actually a synchronized wrapper for the array.
1163
1164   *typecode_or_type* determines the type of the elements of the returned array:
1165   it is either a ctypes type or a one character typecode of the kind used by
1166   the :mod:`array` module.  If *size_or_initializer* is an integer, then it
1167   determines the length of the array, and the array will be initially zeroed.
1168   Otherwise, *size_or_initializer* is a sequence which is used to initialize
1169   the array and whose length determines the length of the array.
1170
1171   If *lock* is ``True`` (the default) then a new lock object is created to
1172   synchronize access to the value.  If *lock* is a :class:`Lock` or
1173   :class:`RLock` object then that will be used to synchronize access to the
1174   value.  If *lock* is ``False`` then access to the returned object will not be
1175   automatically protected by a lock, so it will not necessarily be
1176   "process-safe".
1177
1178   Note that *lock* is a keyword only argument.
1179
1180   Note that an array of :data:`ctypes.c_char` has *value* and *raw*
1181   attributes which allow one to use it to store and retrieve strings.
1182
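   For instance, a brief sketch::

      from multiprocessing import Array

      s = Array('c', 10)           # ten c_char elements, initially zeroed
      s.value = 'hello'            # store a string (it must fit in the array)
      print s.value                # prints "hello"
      print s.raw                  # the raw bytes, including trailing NULs
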
1183
1184The :mod:`multiprocessing.sharedctypes` module
1185>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
1186
1187.. module:: multiprocessing.sharedctypes
1188   :synopsis: Allocate ctypes objects from shared memory.
1189
1190The :mod:`multiprocessing.sharedctypes` module provides functions for allocating
1191:mod:`ctypes` objects from shared memory which can be inherited by child
1192processes.
1193
1194.. note::
1195
1196   Although it is possible to store a pointer in shared memory remember that
1197   this will refer to a location in the address space of a specific process.
1198   However, the pointer is quite likely to be invalid in the context of a second
1199   process and trying to dereference the pointer from the second process may
1200   cause a crash.
1201
1202.. function:: RawArray(typecode_or_type, size_or_initializer)
1203
1204   Return a ctypes array allocated from shared memory.
1205
1206   *typecode_or_type* determines the type of the elements of the returned array:
1207   it is either a ctypes type or a one character typecode of the kind used by
1208   the :mod:`array` module.  If *size_or_initializer* is an integer then it
1209   determines the length of the array, and the array will be initially zeroed.
1210   Otherwise *size_or_initializer* is a sequence which is used to initialize the
1211   array and whose length determines the length of the array.
1212
1213   Note that setting and getting an element is potentially non-atomic -- use
1214   :func:`Array` instead to make sure that access is automatically synchronized
1215   using a lock.
1216
1217.. function:: RawValue(typecode_or_type, *args)
1218
1219   Return a ctypes object allocated from shared memory.
1220
1221   *typecode_or_type* determines the type of the returned object: it is either a
1222   ctypes type or a one character typecode of the kind used by the :mod:`array`
1223   module.  *\*args* is passed on to the constructor for the type.
1224
1225   Note that setting and getting the value is potentially non-atomic -- use
1226   :func:`Value` instead to make sure that access is automatically synchronized
1227   using a lock.
1228
1229   Note that an array of :data:`ctypes.c_char` has ``value`` and ``raw``
1230   attributes which allow one to use it to store and retrieve strings -- see
1231   documentation for :mod:`ctypes`.
1232
1233.. function:: Array(typecode_or_type, size_or_initializer, *args[, lock])
1234
1235   The same as :func:`RawArray` except that depending on the value of *lock* a
1236   process-safe synchronization wrapper may be returned instead of a raw ctypes
1237   array.
1238
1239   If *lock* is ``True`` (the default) then a new lock object is created to
1240   synchronize access to the value.  If *lock* is a
1241   :class:`~multiprocessing.Lock` or :class:`~multiprocessing.RLock` object
1242   then that will be used to synchronize access to the
1243   value.  If *lock* is ``False`` then access to the returned object will not be
1244   automatically protected by a lock, so it will not necessarily be
1245   "process-safe".
1246
1247   Note that *lock* is a keyword-only argument.
1248
1249.. function:: Value(typecode_or_type, *args[, lock])
1250
1251   The same as :func:`RawValue` except that depending on the value of *lock* a
1252   process-safe synchronization wrapper may be returned instead of a raw ctypes
1253   object.
1254
1255   If *lock* is ``True`` (the default) then a new lock object is created to
1256   synchronize access to the value.  If *lock* is a :class:`~multiprocessing.Lock` or
1257   :class:`~multiprocessing.RLock` object then that will be used to synchronize access to the
1258   value.  If *lock* is ``False`` then access to the returned object will not be
1259   automatically protected by a lock, so it will not necessarily be
1260   "process-safe".
1261
1262   Note that *lock* is a keyword-only argument.
1263
1264.. function:: copy(obj)
1265
1266   Return a ctypes object allocated from shared memory which is a copy of the
1267   ctypes object *obj*.
1268
1269.. function:: synchronized(obj[, lock])
1270
1271   Return a process-safe wrapper object for a ctypes object which uses *lock* to
1272   synchronize access.  If *lock* is ``None`` (the default) then a
1273   :class:`multiprocessing.RLock` object is created automatically.
1274
1275   A synchronized wrapper will have two methods in addition to those of the
1276   object it wraps: :meth:`get_obj` returns the wrapped object and
1277   :meth:`get_lock` returns the lock object used for synchronization.
1278
1279   Note that accessing the ctypes object through the wrapper can be a lot slower
1280   than accessing the raw ctypes object.
1281
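   As a rough sketch (the variable names are illustrative)::

      from ctypes import c_double
      from multiprocessing.sharedctypes import RawValue, synchronized

      raw = RawValue(c_double, 0.0)
      shared = synchronized(raw)        # wrapped with a multiprocessing.RLock

      with shared.get_lock():           # the lock used for synchronization
          shared.get_obj().value += 1.0 # the wrapped ctypes object
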
1282
1283The table below compares the syntax for creating shared ctypes objects from
1284shared memory with the normal ctypes syntax.  (In the table ``MyStruct`` is some
1285subclass of :class:`ctypes.Structure`.)
1286
1287==================== ========================== ===========================
1288ctypes               sharedctypes using type    sharedctypes using typecode
1289==================== ========================== ===========================
1290c_double(2.4)        RawValue(c_double, 2.4)    RawValue('d', 2.4)
1291MyStruct(4, 6)       RawValue(MyStruct, 4, 6)
1292(c_short * 7)()      RawArray(c_short, 7)       RawArray('h', 7)
1293(c_int * 3)(9, 2, 8) RawArray(c_int, (9, 2, 8)) RawArray('i', (9, 2, 8))
1294==================== ========================== ===========================
1295
1296
1297Below is an example where a number of ctypes objects are modified by a child
1298process::
1299
1300   from multiprocessing import Process, Lock
1301   from multiprocessing.sharedctypes import Value, Array
1302   from ctypes import Structure, c_double
1303
1304   class Point(Structure):
1305       _fields_ = [('x', c_double), ('y', c_double)]
1306
1307   def modify(n, x, s, A):
1308       n.value **= 2
1309       x.value **= 2
1310       s.value = s.value.upper()
1311       for a in A:
1312           a.x **= 2
1313           a.y **= 2
1314
1315   if __name__ == '__main__':
1316       lock = Lock()
1317
1318       n = Value('i', 7)
1319       x = Value(c_double, 1.0/3.0, lock=False)
1320       s = Array('c', 'hello world', lock=lock)
1321       A = Array(Point, [(1.875,-6.25), (-5.75,2.0), (2.375,9.5)], lock=lock)
1322
1323       p = Process(target=modify, args=(n, x, s, A))
1324       p.start()
1325       p.join()
1326
1327       print n.value
1328       print x.value
1329       print s.value
1330       print [(a.x, a.y) for a in A]
1331
1332
1333.. highlightlang:: none
1334
1335The results printed are ::
1336
1337    49
1338    0.1111111111111111
1339    HELLO WORLD
1340    [(3.515625, 39.0625), (33.0625, 4.0), (5.640625, 90.25)]
1341
1342.. highlightlang:: python
1343
1344
1345.. _multiprocessing-managers:
1346
1347Managers
1348~~~~~~~~
1349
1350Managers provide a way to create data which can be shared between different
1351processes. A manager object controls a server process which manages *shared
1352objects*.  Other processes can access the shared objects by using proxies.
1353
1354.. function:: multiprocessing.Manager()
1355
1356   Returns a started :class:`~multiprocessing.managers.SyncManager` object which
1357   can be used for sharing objects between processes.  The returned manager
1358   object corresponds to a spawned child process and has methods which will
1359   create shared objects and return corresponding proxies.
1360
1361.. module:: multiprocessing.managers
1362   :synopsis: Share data between process with shared objects.
1363
Manager processes will be shut down as soon as they are garbage collected or
1365their parent process exits.  The manager classes are defined in the
1366:mod:`multiprocessing.managers` module:
1367
1368.. class:: BaseManager([address[, authkey]])
1369
1370   Create a BaseManager object.
1371
1372   Once created one should call :meth:`start` or ``get_server().serve_forever()`` to ensure
1373   that the manager object refers to a started manager process.
1374
1375   *address* is the address on which the manager process listens for new
1376   connections.  If *address* is ``None`` then an arbitrary one is chosen.
1377
1378   *authkey* is the authentication key which will be used to check the validity
1379   of incoming connections to the server process.  If *authkey* is ``None`` then
   ``current_process().authkey`` is used.  Otherwise *authkey* is used and it
1381   must be a string.
1382
1383   .. method:: start([initializer[, initargs]])
1384
1385      Start a subprocess to start the manager.  If *initializer* is not ``None``
1386      then the subprocess will call ``initializer(*initargs)`` when it starts.
1387
1388   .. method:: get_server()
1389
1390      Returns a :class:`Server` object which represents the actual server under
1391      the control of the Manager. The :class:`Server` object supports the
1392      :meth:`serve_forever` method::
1393
1394      >>> from multiprocessing.managers import BaseManager
1395      >>> manager = BaseManager(address=('', 50000), authkey='abc')
1396      >>> server = manager.get_server()
1397      >>> server.serve_forever()
1398
1399      :class:`Server` additionally has an :attr:`address` attribute.
1400
1401   .. method:: connect()
1402
1403      Connect a local manager object to a remote manager process::
1404
1405      >>> from multiprocessing.managers import BaseManager
1406      >>> m = BaseManager(address=('127.0.0.1', 5000), authkey='abc')
1407      >>> m.connect()
1408
1409   .. method:: shutdown()
1410
1411      Stop the process used by the manager.  This is only available if
1412      :meth:`start` has been used to start the server process.
1413
1414      This can be called multiple times.
1415
1416   .. method:: register(typeid[, callable[, proxytype[, exposed[, method_to_typeid[, create_method]]]]])
1417
1418      A classmethod which can be used for registering a type or callable with
1419      the manager class.
1420
1421      *typeid* is a "type identifier" which is used to identify a particular
1422      type of shared object.  This must be a string.
1423
1424      *callable* is a callable used for creating objects for this type
1425      identifier.  If a manager instance will be created using the
1426      :meth:`from_address` classmethod or if the *create_method* argument is
1427      ``False`` then this can be left as ``None``.
1428
1429      *proxytype* is a subclass of :class:`BaseProxy` which is used to create
1430      proxies for shared objects with this *typeid*.  If ``None`` then a proxy
1431      class is created automatically.
1432
1433      *exposed* is used to specify a sequence of method names which proxies for
1434      this typeid should be allowed to access using
1435      :meth:`BaseProxy._callmethod`.  (If *exposed* is ``None`` then
1436      :attr:`proxytype._exposed_` is used instead if it exists.)  In the case
1437      where no exposed list is specified, all "public methods" of the shared
1438      object will be accessible.  (Here a "public method" means any attribute
1439      which has a :meth:`~object.__call__` method and whose name does not begin
1440      with ``'_'``.)
1441
1442      *method_to_typeid* is a mapping used to specify the return type of those
1443      exposed methods which should return a proxy.  It maps method names to
1444      typeid strings.  (If *method_to_typeid* is ``None`` then
1445      :attr:`proxytype._method_to_typeid_` is used instead if it exists.)  If a
1446      method's name is not a key of this mapping or if the mapping is ``None``
1447      then the object returned by the method will be copied by value.
1448
1449      *create_method* determines whether a method should be created with name
1450      *typeid* which can be used to tell the server process to create a new
1451      shared object and return a proxy for it.  By default it is ``True``.
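
      As a minimal, hypothetical sketch (the ``Counter`` class and ``'counter'``
      typeid are invented for illustration), *exposed* can be used to restrict
      which methods are reachable through the proxy::

         from multiprocessing.managers import BaseManager

         class Counter(object):
             def __init__(self):
                 self._n = 0
             def increment(self):
                 self._n += 1
             def value(self):
                 return self._n

         class CounterManager(BaseManager):
             pass

         # only increment() and value() may be called through the proxy
         CounterManager.register('counter', Counter,
                                 exposed=('increment', 'value'))

         if __name__ == '__main__':
             manager = CounterManager()
             manager.start()
             c = manager.counter()    # method created since create_method is True
             c.increment()
             print c.value()          # prints 1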
1452
1453   :class:`BaseManager` instances also have one read-only property:
1454
1455   .. attribute:: address
1456
1457      The address used by the manager.
1458
1459
1460.. class:: SyncManager
1461
1462   A subclass of :class:`BaseManager` which can be used for the synchronization
1463   of processes.  Objects of this type are returned by
1464   :func:`multiprocessing.Manager`.
1465
1466   It also supports creation of shared lists and dictionaries.
1467
1468   .. method:: BoundedSemaphore([value])
1469
1470      Create a shared :class:`threading.BoundedSemaphore` object and return a
1471      proxy for it.
1472
1473   .. method:: Condition([lock])
1474
1475      Create a shared :class:`threading.Condition` object and return a proxy for
1476      it.
1477
1478      If *lock* is supplied then it should be a proxy for a
1479      :class:`threading.Lock` or :class:`threading.RLock` object.
1480
1481   .. method:: Event()
1482
1483      Create a shared :class:`threading.Event` object and return a proxy for it.
1484
1485   .. method:: Lock()
1486
1487      Create a shared :class:`threading.Lock` object and return a proxy for it.
1488
1489   .. method:: Namespace()
1490
1491      Create a shared :class:`Namespace` object and return a proxy for it.
1492
1493   .. method:: Queue([maxsize])
1494
1495      Create a shared :class:`Queue.Queue` object and return a proxy for it.
1496
1497   .. method:: RLock()
1498
1499      Create a shared :class:`threading.RLock` object and return a proxy for it.
1500
1501   .. method:: Semaphore([value])
1502
1503      Create a shared :class:`threading.Semaphore` object and return a proxy for
1504      it.
1505
1506   .. method:: Array(typecode, sequence)
1507
1508      Create an array and return a proxy for it.
1509
1510   .. method:: Value(typecode, value)
1511
1512      Create an object with a writable ``value`` attribute and return a proxy
1513      for it.
1514
1515   .. method:: dict()
1516               dict(mapping)
1517               dict(sequence)
1518
1519      Create a shared ``dict`` object and return a proxy for it.
1520
1521   .. method:: list()
1522               list(sequence)
1523
1524      Create a shared ``list`` object and return a proxy for it.
1525
1526   .. note::
1527
1528      Modifications to mutable values or items in dict and list proxies will not
1529      be propagated through the manager, because the proxy has no way of knowing
1530      when its values or items are modified.  To modify such an item, you can
1531      re-assign the modified object to the container proxy::
1532
1533         # create a list proxy and append a mutable object (a dictionary)
1534         lproxy = manager.list()
1535         lproxy.append({})
1536         # now mutate the dictionary
1537         d = lproxy[0]
1538         d['a'] = 1
1539         d['b'] = 2
1540         # at this point, the changes to d are not yet synced, but by
1541         # reassigning the dictionary, the proxy is notified of the change
1542         lproxy[0] = d
1543
1544
1545.. class:: Namespace
1546
    A type that can be registered with :class:`SyncManager`.
1548
1549    A namespace object has no public methods, but does have writable attributes.
1550    Its representation shows the values of its attributes.
1551
1552    However, when using a proxy for a namespace object, an attribute beginning with
1553    ``'_'`` will be an attribute of the proxy and not an attribute of the referent:
1554
1555    .. doctest::
1556
1557       >>> manager = multiprocessing.Manager()
1558       >>> Global = manager.Namespace()
1559       >>> Global.x = 10
1560       >>> Global.y = 'hello'
1561       >>> Global._z = 12.3    # this is an attribute of the proxy
1562       >>> print Global
1563       Namespace(x=10, y='hello')
1564
1565
1566Customized managers
1567>>>>>>>>>>>>>>>>>>>
1568
1569To create one's own manager, one creates a subclass of :class:`BaseManager` and
1570uses the :meth:`~BaseManager.register` classmethod to register new types or
1571callables with the manager class.  For example::
1572
1573   from multiprocessing.managers import BaseManager
1574
1575   class MathsClass(object):
1576       def add(self, x, y):
1577           return x + y
1578       def mul(self, x, y):
1579           return x * y
1580
1581   class MyManager(BaseManager):
1582       pass
1583
1584   MyManager.register('Maths', MathsClass)
1585
1586   if __name__ == '__main__':
1587       manager = MyManager()
1588       manager.start()
1589       maths = manager.Maths()
1590       print maths.add(4, 3)         # prints 7
1591       print maths.mul(7, 8)         # prints 56
1592
1593
1594Using a remote manager
1595>>>>>>>>>>>>>>>>>>>>>>
1596
1597It is possible to run a manager server on one machine and have clients use it
1598from other machines (assuming that the firewalls involved allow it).
1599
1600Running the following commands creates a server for a single shared queue which
1601remote clients can access::
1602
1603   >>> from multiprocessing.managers import BaseManager
1604   >>> import Queue
1605   >>> queue = Queue.Queue()
1606   >>> class QueueManager(BaseManager): pass
1607   >>> QueueManager.register('get_queue', callable=lambda:queue)
1608   >>> m = QueueManager(address=('', 50000), authkey='abracadabra')
1609   >>> s = m.get_server()
1610   >>> s.serve_forever()
1611
1612One client can access the server as follows::
1613
1614   >>> from multiprocessing.managers import BaseManager
1615   >>> class QueueManager(BaseManager): pass
1616   >>> QueueManager.register('get_queue')
1617   >>> m = QueueManager(address=('foo.bar.org', 50000), authkey='abracadabra')
1618   >>> m.connect()
1619   >>> queue = m.get_queue()
1620   >>> queue.put('hello')
1621
1622Another client can also use it::
1623
1624   >>> from multiprocessing.managers import BaseManager
1625   >>> class QueueManager(BaseManager): pass
1626   >>> QueueManager.register('get_queue')
1627   >>> m = QueueManager(address=('foo.bar.org', 50000), authkey='abracadabra')
1628   >>> m.connect()
1629   >>> queue = m.get_queue()
1630   >>> queue.get()
1631   'hello'
1632
1633Local processes can also access that queue, using the code from above on the
1634client to access it remotely::
1635
1636    >>> from multiprocessing import Process, Queue
1637    >>> from multiprocessing.managers import BaseManager
1638    >>> class Worker(Process):
1639    ...     def __init__(self, q):
1640    ...         self.q = q
1641    ...         super(Worker, self).__init__()
1642    ...     def run(self):
1643    ...         self.q.put('local hello')
1644    ...
1645    >>> queue = Queue()
1646    >>> w = Worker(queue)
1647    >>> w.start()
1648    >>> class QueueManager(BaseManager): pass
1649    ...
1650    >>> QueueManager.register('get_queue', callable=lambda: queue)
1651    >>> m = QueueManager(address=('', 50000), authkey='abracadabra')
1652    >>> s = m.get_server()
1653    >>> s.serve_forever()
1654
1655Proxy Objects
1656~~~~~~~~~~~~~
1657
1658A proxy is an object which *refers* to a shared object which lives (presumably)
1659in a different process.  The shared object is said to be the *referent* of the
1660proxy.  Multiple proxy objects may have the same referent.
1661
1662A proxy object has methods which invoke corresponding methods of its referent
1663(although not every method of the referent will necessarily be available through
the proxy).  A proxy can be used in most of the same ways that its
referent can:
1666
1667.. doctest::
1668
1669   >>> from multiprocessing import Manager
1670   >>> manager = Manager()
1671   >>> l = manager.list([i*i for i in range(10)])
1672   >>> print l
1673   [0, 1, 4, 9, 16, 25, 36, 49, 64, 81]
1674   >>> print repr(l)
1675   <ListProxy object, typeid 'list' at 0x...>
1676   >>> l[4]
1677   16
1678   >>> l[2:5]
1679   [4, 9, 16]
1680
1681Notice that applying :func:`str` to a proxy will return the representation of
1682the referent, whereas applying :func:`repr` will return the representation of
1683the proxy.
1684
1685An important feature of proxy objects is that they are picklable so they can be
1686passed between processes.  Note, however, that if a proxy is sent to the
1687corresponding manager's process then unpickling it will produce the referent
1688itself.  This means, for example, that one shared object can contain a second:
1689
1690.. doctest::
1691
1692   >>> a = manager.list()
1693   >>> b = manager.list()
1694   >>> a.append(b)         # referent of a now contains referent of b
1695   >>> print a, b
1696   [[]] []
1697   >>> b.append('hello')
1698   >>> print a, b
1699   [['hello']] ['hello']
1700
1701.. note::
1702
1703   The proxy types in :mod:`multiprocessing` do nothing to support comparisons
1704   by value.  So, for instance, we have:
1705
1706   .. doctest::
1707
1708       >>> manager.list([1,2,3]) == [1,2,3]
1709       False
1710
1711   One should just use a copy of the referent instead when making comparisons.
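
   For instance, a slice of a list proxy returns an ordinary list, which can
   then be compared as usual (a minimal sketch, reusing the ``manager`` object
   from the examples above):

   .. doctest::

       >>> manager.list([1, 2, 3])[:] == [1, 2, 3]
       True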
1712
1713.. class:: BaseProxy
1714
1715   Proxy objects are instances of subclasses of :class:`BaseProxy`.
1716
1717   .. method:: _callmethod(methodname[, args[, kwds]])
1718
1719      Call and return the result of a method of the proxy's referent.
1720
1721      If ``proxy`` is a proxy whose referent is ``obj`` then the expression ::
1722
1723         proxy._callmethod(methodname, args, kwds)
1724
1725      will evaluate the expression ::
1726
1727         getattr(obj, methodname)(*args, **kwds)
1728
1729      in the manager's process.
1730
1731      The returned value will be a copy of the result of the call or a proxy to
1732      a new shared object -- see documentation for the *method_to_typeid*
1733      argument of :meth:`BaseManager.register`.
1734
      If an exception is raised by the call, then it is re-raised by
1736      :meth:`_callmethod`.  If some other exception is raised in the manager's
1737      process then this is converted into a :exc:`RemoteError` exception and is
1738      raised by :meth:`_callmethod`.
1739
1740      Note in particular that an exception will be raised if *methodname* has
1741      not been *exposed*.
1742
1743      An example of the usage of :meth:`_callmethod`:
1744
1745      .. doctest::
1746
1747         >>> l = manager.list(range(10))
1748         >>> l._callmethod('__len__')
1749         10
1750         >>> l._callmethod('__getslice__', (2, 7))   # equiv to `l[2:7]`
1751         [2, 3, 4, 5, 6]
1752         >>> l._callmethod('__getitem__', (20,))     # equiv to `l[20]`
1753         Traceback (most recent call last):
1754         ...
1755         IndexError: list index out of range
1756
1757   .. method:: _getvalue()
1758
1759      Return a copy of the referent.
1760
1761      If the referent is unpicklable then this will raise an exception.
1762
1763   .. method:: __repr__
1764
1765      Return a representation of the proxy object.
1766
1767   .. method:: __str__
1768
1769      Return the representation of the referent.
1770
1771
1772Cleanup
1773>>>>>>>
1774
1775A proxy object uses a weakref callback so that when it gets garbage collected it
1776deregisters itself from the manager which owns its referent.
1777
1778A shared object gets deleted from the manager process when there are no longer
1779any proxies referring to it.
1780
1781
1782Process Pools
1783~~~~~~~~~~~~~
1784
1785.. module:: multiprocessing.pool
1786   :synopsis: Create pools of processes.
1787
1788One can create a pool of processes which will carry out tasks submitted to it
1789with the :class:`Pool` class.
1790
1791.. class:: multiprocessing.Pool([processes[, initializer[, initargs[, maxtasksperchild]]]])
1792
1793   A process pool object which controls a pool of worker processes to which jobs
1794   can be submitted.  It supports asynchronous results with timeouts and
1795   callbacks and has a parallel map implementation.
1796
1797   *processes* is the number of worker processes to use.  If *processes* is
1798   ``None`` then the number returned by :func:`cpu_count` is used.  If
1799   *initializer* is not ``None`` then each worker process will call
1800   ``initializer(*initargs)`` when it starts.
1801
1802   Note that the methods of the pool object should only be called by
1803   the process which created the pool.
1804
1805   .. versionadded:: 2.7
1806      *maxtasksperchild* is the number of tasks a worker process can complete
1807      before it will exit and be replaced with a fresh worker process, to enable
1808      unused resources to be freed. The default *maxtasksperchild* is ``None``, which
1809      means worker processes will live as long as the pool.
1810
1811   .. note::
1812
      Worker processes within a :class:`Pool` typically live for the complete
      duration of the Pool's work queue.  A frequent pattern found in other
      systems (such as Apache, mod_wsgi, etc.) to free resources held by
      workers is to allow a worker within a pool to complete only a set
      amount of work before exiting, being cleaned up and having a new
      process spawned to replace the old one.  The *maxtasksperchild*
      argument to the :class:`Pool` exposes this ability to the end user.
1820
1821   .. method:: apply(func[, args[, kwds]])
1822
1823      Equivalent of the :func:`apply` built-in function.  It blocks until the
1824      result is ready, so :meth:`apply_async` is better suited for performing
1825      work in parallel. Additionally, *func* is only executed in one of the
1826      workers of the pool.
1827
1828   .. method:: apply_async(func[, args[, kwds[, callback]]])
1829
1830      A variant of the :meth:`apply` method which returns a result object.
1831
1832      If *callback* is specified then it should be a callable which accepts a
1833      single argument.  When the result becomes ready *callback* is applied to
1834      it (unless the call failed).  *callback* should complete immediately since
1835      otherwise the thread which handles the results will get blocked.
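
      A minimal sketch of using *callback* to collect results as they become
      ready (the helper names are illustrative)::

         from multiprocessing import Pool

         def f(x):
             return x*x

         results = []

         def collect(result):
             # runs in a thread of the parent process as each result arrives
             results.append(result)

         if __name__ == '__main__':
             pool = Pool(processes=2)
             for i in range(5):
                 pool.apply_async(f, (i,), callback=collect)
             pool.close()
             pool.join()
             print sorted(results)    # prints "[0, 1, 4, 9, 16]"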
1836
1837   .. method:: map(func, iterable[, chunksize])
1838
1839      A parallel equivalent of the :func:`map` built-in function (it supports only
1840      one *iterable* argument though).  It blocks until the result is ready.
1841
1842      This method chops the iterable into a number of chunks which it submits to
1843      the process pool as separate tasks.  The (approximate) size of these
1844      chunks can be specified by setting *chunksize* to a positive integer.
1845
1846   .. method:: map_async(func, iterable[, chunksize[, callback]])
1847
1848      A variant of the :meth:`.map` method which returns a result object.
1849
1850      If *callback* is specified then it should be a callable which accepts a
1851      single argument.  When the result becomes ready *callback* is applied to
1852      it (unless the call failed).  *callback* should complete immediately since
1853      otherwise the thread which handles the results will get blocked.
1854
1855   .. method:: imap(func, iterable[, chunksize])
1856
1857      An equivalent of :func:`itertools.imap`.
1858
1859      The *chunksize* argument is the same as the one used by the :meth:`.map`
1860      method.  For very long iterables using a large value for *chunksize* can
1861      make the job complete **much** faster than using the default value of
1862      ``1``.
1863
1864      Also if *chunksize* is ``1`` then the :meth:`!next` method of the iterator
1865      returned by the :meth:`imap` method has an optional *timeout* parameter:
1866      ``next(timeout)`` will raise :exc:`multiprocessing.TimeoutError` if the
1867      result cannot be returned within *timeout* seconds.
1868
1869   .. method:: imap_unordered(func, iterable[, chunksize])
1870
1871      The same as :meth:`imap` except that the ordering of the results from the
1872      returned iterator should be considered arbitrary.  (Only when there is
1873      only one worker process is the order guaranteed to be "correct".)
1874
1875   .. method:: close()
1876
1877      Prevents any more tasks from being submitted to the pool.  Once all the
1878      tasks have been completed the worker processes will exit.
1879
1880   .. method:: terminate()
1881
1882      Stops the worker processes immediately without completing outstanding
1883      work.  When the pool object is garbage collected :meth:`terminate` will be
1884      called immediately.
1885
1886   .. method:: join()
1887
1888      Wait for the worker processes to exit.  One must call :meth:`close` or
1889      :meth:`terminate` before using :meth:`join`.
1890
1891
1892.. class:: AsyncResult
1893
1894   The class of the result returned by :meth:`Pool.apply_async` and
1895   :meth:`Pool.map_async`.
1896
1897   .. method:: get([timeout])
1898
1899      Return the result when it arrives.  If *timeout* is not ``None`` and the
1900      result does not arrive within *timeout* seconds then
1901      :exc:`multiprocessing.TimeoutError` is raised.  If the remote call raised
1902      an exception then that exception will be reraised by :meth:`get`.
1903
1904   .. method:: wait([timeout])
1905
1906      Wait until the result is available or until *timeout* seconds pass.
1907
1908   .. method:: ready()
1909
1910      Return whether the call has completed.
1911
1912   .. method:: successful()
1913
1914      Return whether the call completed without raising an exception.  Will
1915      raise :exc:`AssertionError` if the result is not ready.
1916
1917The following example demonstrates the use of a pool::
1918
1919   from multiprocessing import Pool
1920   import time
1921
1922   def f(x):
1923       return x*x
1924
1925   if __name__ == '__main__':
1926       pool = Pool(processes=4)              # start 4 worker processes
1927
1928       result = pool.apply_async(f, (10,))   # evaluate "f(10)" asynchronously in a single process
1929       print result.get(timeout=1)           # prints "100" unless your computer is *very* slow
1930
1931       print pool.map(f, range(10))          # prints "[0, 1, 4,..., 81]"
1932
1933       it = pool.imap(f, range(10))
1934       print it.next()                       # prints "0"
1935       print it.next()                       # prints "1"
1936       print it.next(timeout=1)              # prints "4" unless your computer is *very* slow
1937
1938       result = pool.apply_async(time.sleep, (10,))
1939       print result.get(timeout=1)           # raises multiprocessing.TimeoutError
1940
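The *initializer* and *maxtasksperchild* arguments can be combined with the
``close()``/``join()`` shutdown sequence described above.  The following is a
minimal sketch (the ``setup`` helper is illustrative) in which every worker
runs an initializer when it starts, is replaced after completing two tasks,
and the pool is shut down explicitly::

   from multiprocessing import Pool, current_process

   def setup():
       # runs once in each worker process when the worker starts
       print 'starting', current_process().name

   def f(x):
       return x*x

   if __name__ == '__main__':
       pool = Pool(processes=2, initializer=setup, maxtasksperchild=2)
       print pool.map(f, range(10))      # prints "[0, 1, 4,..., 81]"
       pool.close()                      # no more tasks may be submitted
       pool.join()                       # wait for the workers to exit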
1941
1942.. _multiprocessing-listeners-clients:
1943
1944Listeners and Clients
1945~~~~~~~~~~~~~~~~~~~~~
1946
1947.. module:: multiprocessing.connection
1948   :synopsis: API for dealing with sockets.
1949
1950Usually message passing between processes is done using queues or by using
1951:class:`Connection` objects returned by :func:`~multiprocessing.Pipe`.
1952
1953However, the :mod:`multiprocessing.connection` module allows some extra
flexibility.  It basically gives a high-level message-oriented API for dealing
1955with sockets or Windows named pipes, and also has support for *digest
1956authentication* using the :mod:`hmac` module.
1957
1958
1959.. function:: deliver_challenge(connection, authkey)
1960
1961   Send a randomly generated message to the other end of the connection and wait
1962   for a reply.
1963
1964   If the reply matches the digest of the message using *authkey* as the key
1965   then a welcome message is sent to the other end of the connection.  Otherwise
1966   :exc:`AuthenticationError` is raised.
1967
1968.. function:: answer_challenge(connection, authkey)
1969
1970   Receive a message, calculate the digest of the message using *authkey* as the
1971   key, and then send the digest back.
1972
1973   If a welcome message is not received, then :exc:`AuthenticationError` is
1974   raised.
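
The following is a minimal sketch of calling these two functions directly over
the two ends of a :func:`~multiprocessing.Pipe` (the ``'secret'`` key and the
``child`` helper are illustrative); normally :class:`Listener` and
:func:`Client` take care of this handshake when authentication is requested::

   from multiprocessing import Pipe, Process
   from multiprocessing.connection import deliver_challenge, answer_challenge

   def child(conn, key):
       # prove to the other end that we know the key
       answer_challenge(conn, key)
       conn.send('authenticated')

   if __name__ == '__main__':
       parent_conn, child_conn = Pipe()
       p = Process(target=child, args=(child_conn, 'secret'))
       p.start()
       deliver_challenge(parent_conn, 'secret')   # raises AuthenticationError
                                                  # if the keys do not match
       print parent_conn.recv()                   # prints "authenticated"
       p.join()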
1975
1976.. function:: Client(address[, family[, authenticate[, authkey]]])
1977
1978   Attempt to set up a connection to the listener which is using address
1979   *address*, returning a :class:`Connection`.
1980
   The type of the connection is determined by the *family* argument, but this can
1982   generally be omitted since it can usually be inferred from the format of
1983   *address*. (See :ref:`multiprocessing-address-formats`)
1984
1985   If *authenticate* is ``True`` or *authkey* is a string then digest
1986   authentication is used.  The key used for authentication will be either
   *authkey* or ``current_process().authkey`` if *authkey* is ``None``.
1988   If authentication fails then :exc:`AuthenticationError` is raised.  See
1989   :ref:`multiprocessing-auth-keys`.
1990
1991.. class:: Listener([address[, family[, backlog[, authenticate[, authkey]]]]])
1992
1993   A wrapper for a bound socket or Windows named pipe which is 'listening' for
1994   connections.
1995
1996   *address* is the address to be used by the bound socket or named pipe of the
1997   listener object.
1998
1999   .. note::
2000
      If an address of '0.0.0.0' is used, the address will not be a connectable
      end point on Windows.  If you require a connectable end point,
      you should use '127.0.0.1'.
2004
2005   *family* is the type of socket (or named pipe) to use.  This can be one of
2006   the strings ``'AF_INET'`` (for a TCP socket), ``'AF_UNIX'`` (for a Unix
2007   domain socket) or ``'AF_PIPE'`` (for a Windows named pipe).  Of these only
2008   the first is guaranteed to be available.  If *family* is ``None`` then the
2009   family is inferred from the format of *address*.  If *address* is also
2010   ``None`` then a default is chosen.  This default is the family which is
2011   assumed to be the fastest available.  See
2012   :ref:`multiprocessing-address-formats`.  Note that if *family* is
2013   ``'AF_UNIX'`` and address is ``None`` then the socket will be created in a
2014   private temporary directory created using :func:`tempfile.mkstemp`.
2015
2016   If the listener object uses a socket then *backlog* (1 by default) is passed
2017   to the :meth:`~socket.socket.listen` method of the socket once it has been
2018   bound.
2019
2020   If *authenticate* is ``True`` (``False`` by default) or *authkey* is not
2021   ``None`` then digest authentication is used.
2022
2023   If *authkey* is a string then it will be used as the authentication key;
2024   otherwise it must be ``None``.
2025
2026   If *authkey* is ``None`` and *authenticate* is ``True`` then
2027   ``current_process().authkey`` is used as the authentication key.  If
2028   *authkey* is ``None`` and *authenticate* is ``False`` then no
2029   authentication is done.  If authentication fails then
2030   :exc:`AuthenticationError` is raised.  See :ref:`multiprocessing-auth-keys`.
2031
2032   .. method:: accept()
2033
2034      Accept a connection on the bound socket or named pipe of the listener
2035      object and return a :class:`Connection` object.
2036      If authentication is attempted and fails, then
2037      :exc:`~multiprocessing.AuthenticationError` is raised.
2038
2039   .. method:: close()
2040
2041      Close the bound socket or named pipe of the listener object.  This is
2042      called automatically when the listener is garbage collected.  However it
2043      is advisable to call it explicitly.
2044
2045   Listener objects have the following read-only properties:
2046
2047   .. attribute:: address
2048
2049      The address which is being used by the Listener object.
2050
2051   .. attribute:: last_accepted
2052
2053      The address from which the last accepted connection came.  If this is
2054      unavailable then it is ``None``.
2055
2056
2057The module defines the following exceptions:
2058
2059.. exception:: ProcessError
2060
2061   The base class of all :mod:`multiprocessing` exceptions.
2062
2063.. exception:: BufferTooShort
2064
2065   Exception raised by :meth:`Connection.recv_bytes_into()` when the supplied
2066   buffer object is too small for the message read.
2067
2068.. exception:: AuthenticationError
2069
2070   Raised when there is an authentication error.
2071
2072.. exception:: TimeoutError
2073
2074   Raised by methods with a timeout when the timeout expires.
2075
2076
2077**Examples**
2078
2079The following server code creates a listener which uses ``'secret password'`` as
2080an authentication key.  It then waits for a connection and sends some data to
2081the client::
2082
2083   from multiprocessing.connection import Listener
2084   from array import array
2085
2086   address = ('localhost', 6000)     # family is deduced to be 'AF_INET'
2087   listener = Listener(address, authkey='secret password')
2088
2089   conn = listener.accept()
2090   print 'connection accepted from', listener.last_accepted
2091
2092   conn.send([2.25, None, 'junk', float])
2093
2094   conn.send_bytes('hello')
2095
2096   conn.send_bytes(array('i', [42, 1729]))
2097
2098   conn.close()
2099   listener.close()
2100
2101The following code connects to the server and receives some data from the
2102server::
2103
2104   from multiprocessing.connection import Client
2105   from array import array
2106
2107   address = ('localhost', 6000)
2108   conn = Client(address, authkey='secret password')
2109
2110   print conn.recv()                 # => [2.25, None, 'junk', float]
2111
2112   print conn.recv_bytes()            # => 'hello'
2113
2114   arr = array('i', [0, 0, 0, 0, 0])
2115   print conn.recv_bytes_into(arr)     # => 8
2116   print arr                         # => array('i', [42, 1729, 0, 0, 0])
2117
2118   conn.close()
2119
2120
2121.. _multiprocessing-address-formats:
2122
2123Address Formats
2124>>>>>>>>>>>>>>>
2125
2126* An ``'AF_INET'`` address is a tuple of the form ``(hostname, port)`` where
2127  *hostname* is a string and *port* is an integer.
2128
2129* An ``'AF_UNIX'`` address is a string representing a filename on the
2130  filesystem.
2131
* An ``'AF_PIPE'`` address is a string of the form
  :samp:`r'\\\\.\\pipe\\{PipeName}'`.  To use :func:`Client` to connect to a named
  pipe on a remote computer called *ServerName* one should use an address of the
  form :samp:`r'\\\\{ServerName}\\pipe\\{PipeName}'` instead.
2136
2137Note that any string beginning with two backslashes is assumed by default to be
2138an ``'AF_PIPE'`` address rather than an ``'AF_UNIX'`` address.
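
As a minimal sketch (the concrete addresses are illustrative, and each family
is only available on the platforms noted), the family is normally inferred
from the format of the address::

   from multiprocessing.connection import Listener

   # a (hostname, port) tuple implies an 'AF_INET' listener
   inet_listener = Listener(('localhost', 6000))

   # a plain filename implies an 'AF_UNIX' listener (Unix only); the path
   # must not already exist
   unix_listener = Listener('/tmp/example-listener')

   inet_listener.close()
   unix_listener.close()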
2139
2140
2141.. _multiprocessing-auth-keys:
2142
2143Authentication keys
2144~~~~~~~~~~~~~~~~~~~
2145
When one uses :meth:`Connection.recv`, the data received is automatically
unpickled.  Unfortunately unpickling data from an untrusted source is a security
risk.  Therefore :class:`Listener` and :func:`Client` use the :mod:`hmac` module
to provide digest authentication.
2151
2152An authentication key is a string which can be thought of as a password: once a
2153connection is established both ends will demand proof that the other knows the
2154authentication key.  (Demonstrating that both ends are using the same key does
2155**not** involve sending the key over the connection.)
2156
If authentication is requested but no authentication key is specified then the
value of ``current_process().authkey`` is used (see
2159:class:`~multiprocessing.Process`).  This value will be automatically inherited by
2160any :class:`~multiprocessing.Process` object that the current process creates.
2161This means that (by default) all processes of a multi-process program will share
2162a single authentication key which can be used when setting up connections
2163between themselves.
2164
2165Suitable authentication keys can also be generated by using :func:`os.urandom`.
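
For example, a minimal sketch of generating a fresh key (the port number is
illustrative)::

   import os
   from multiprocessing.connection import Listener

   key = os.urandom(32)     # 32 random bytes used as the authentication key

   listener = Listener(('localhost', 6000), authkey=key)
   # any process that wants to connect must supply the same key, e.g.
   # Client(('localhost', 6000), authkey=key)
   listener.close()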
2166
2167
2168Logging
2169~~~~~~~
2170
2171Some support for logging is available.  Note, however, that the :mod:`logging`
2172package does not use process shared locks so it is possible (depending on the
2173handler type) for messages from different processes to get mixed up.
2174
2175.. currentmodule:: multiprocessing
2176.. function:: get_logger()
2177
2178   Returns the logger used by :mod:`multiprocessing`.  If necessary, a new one
2179   will be created.
2180
2181   When first created the logger has level :data:`logging.NOTSET` and no
2182   default handler. Messages sent to this logger will not by default propagate
2183   to the root logger.
2184
2185   Note that on Windows child processes will only inherit the level of the
2186   parent process's logger -- any other customization of the logger will not be
2187   inherited.
2188
2189.. currentmodule:: multiprocessing
2190.. function:: log_to_stderr()
2191
   This function performs a call to :func:`get_logger` but in addition to
   returning the logger created by :func:`get_logger`, it adds a handler which
   sends output to :data:`sys.stderr` using the format
   ``'[%(levelname)s/%(processName)s] %(message)s'``.
2196
2197Below is an example session with logging turned on::
2198
2199    >>> import multiprocessing, logging
2200    >>> logger = multiprocessing.log_to_stderr()
2201    >>> logger.setLevel(logging.INFO)
2202    >>> logger.warning('doomed')
2203    [WARNING/MainProcess] doomed
2204    >>> m = multiprocessing.Manager()
2205    [INFO/SyncManager-...] child process calling self.run()
2206    [INFO/SyncManager-...] created temp directory /.../pymp-...
2207    [INFO/SyncManager-...] manager serving at '/.../listener-...'
2208    >>> del m
2209    [INFO/MainProcess] sending shutdown message to manager
2210    [INFO/SyncManager-...] manager exiting with exitcode 0
2211
In addition to having these two logging functions, the :mod:`multiprocessing`
module also exposes two additional logging level attributes.  These are
:const:`SUBWARNING` and :const:`SUBDEBUG`.  The table below illustrates where
these fit in the normal level hierarchy.
2216
2217+----------------+----------------+
2218| Level          | Numeric value  |
2219+================+================+
2220| ``SUBWARNING`` | 25             |
2221+----------------+----------------+
2222| ``SUBDEBUG``   | 5              |
2223+----------------+----------------+
2224
2225For a full table of logging levels, see the :mod:`logging` module.
2226
2227These additional logging levels are used primarily for certain debug messages
2228within the multiprocessing module. Below is the same example as above, except
2229with :const:`SUBDEBUG` enabled::
2230
2231    >>> import multiprocessing, logging
2232    >>> logger = multiprocessing.log_to_stderr()
2233    >>> logger.setLevel(multiprocessing.SUBDEBUG)
2234    >>> logger.warning('doomed')
2235    [WARNING/MainProcess] doomed
2236    >>> m = multiprocessing.Manager()
2237    [INFO/SyncManager-...] child process calling self.run()
2238    [INFO/SyncManager-...] created temp directory /.../pymp-...
2239    [INFO/SyncManager-...] manager serving at '/.../pymp-djGBXN/listener-...'
2240    >>> del m
2241    [SUBDEBUG/MainProcess] finalizer calling ...
2242    [INFO/MainProcess] sending shutdown message to manager
2243    [DEBUG/SyncManager-...] manager received shutdown message
2244    [SUBDEBUG/SyncManager-...] calling <Finalize object, callback=unlink, ...
2245    [SUBDEBUG/SyncManager-...] finalizer calling <built-in function unlink> ...
2246    [SUBDEBUG/SyncManager-...] calling <Finalize object, dead>
2247    [SUBDEBUG/SyncManager-...] finalizer calling <function rmtree at 0x5aa730> ...
2248    [INFO/SyncManager-...] manager exiting with exitcode 0
2249
2250The :mod:`multiprocessing.dummy` module
2251~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
2252
2253.. module:: multiprocessing.dummy
2254   :synopsis: Dumb wrapper around threading.
2255
2256:mod:`multiprocessing.dummy` replicates the API of :mod:`multiprocessing` but is
2257no more than a wrapper around the :mod:`threading` module.
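
Because its "processes" are really threads, :mod:`multiprocessing.dummy` can
be convenient for I/O-bound tasks or for testing code written against the
:mod:`multiprocessing` API without the cost of starting processes.  A minimal
sketch (the helper function is illustrative) using its ``Pool``::

   from multiprocessing.dummy import Pool

   def f(x):
       return x*x

   if __name__ == '__main__':
       pool = Pool(4)                    # 4 worker threads, not processes
       print pool.map(f, range(10))      # prints "[0, 1, 4,..., 81]"
       pool.close()
       pool.join()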
2258
2259
2260.. _multiprocessing-programming:
2261
2262Programming guidelines
2263----------------------
2264
2265There are certain guidelines and idioms which should be adhered to when using
2266:mod:`multiprocessing`.
2267
2268
2269All platforms
2270~~~~~~~~~~~~~
2271
2272Avoid shared state
2273
2274    As far as possible one should try to avoid shifting large amounts of data
2275    between processes.
2276
2277    It is probably best to stick to using queues or pipes for communication
2278    between processes rather than using the lower level synchronization
2279    primitives from the :mod:`threading` module.
2280
2281Picklability
2282
2283    Ensure that the arguments to the methods of proxies are picklable.
2284
2285Thread safety of proxies
2286
2287    Do not use a proxy object from more than one thread unless you protect it
2288    with a lock.
2289
2290    (There is never a problem with different processes using the *same* proxy.)
2291
2292Joining zombie processes
2293
2294    On Unix when a process finishes but has not been joined it becomes a zombie.
2295    There should never be very many because each time a new process starts (or
2296    :func:`~multiprocessing.active_children` is called) all completed processes
2297    which have not yet been joined will be joined.  Also calling a finished
2298    process's :meth:`Process.is_alive <multiprocessing.Process.is_alive>` will
2299    join the process.  Even so it is probably good
2300    practice to explicitly join all the processes that you start.
2301
2302Better to inherit than pickle/unpickle
2303
2304    On Windows many types from :mod:`multiprocessing` need to be picklable so
2305    that child processes can use them.  However, one should generally avoid
2306    sending shared objects to other processes using pipes or queues.  Instead
2307    you should arrange the program so that a process which needs access to a
2308    shared resource created elsewhere can inherit it from an ancestor process.
2309
2310Avoid terminating processes
2311
2312    Using the :meth:`Process.terminate <multiprocessing.Process.terminate>`
2313    method to stop a process is liable to
2314    cause any shared resources (such as locks, semaphores, pipes and queues)
2315    currently being used by the process to become broken or unavailable to other
2316    processes.
2317
2318    Therefore it is probably best to only consider using
2319    :meth:`Process.terminate <multiprocessing.Process.terminate>` on processes
2320    which never use any shared resources.
2321
2322Joining processes that use queues
2323
2324    Bear in mind that a process that has put items in a queue will wait before
2325    terminating until all the buffered items are fed by the "feeder" thread to
2326    the underlying pipe.  (The child process can call the
2327    :meth:`~multiprocessing.Queue.cancel_join_thread` method of the queue to avoid this behaviour.)
2328
2329    This means that whenever you use a queue you need to make sure that all
2330    items which have been put on the queue will eventually be removed before the
2331    process is joined.  Otherwise you cannot be sure that processes which have
2332    put items on the queue will terminate.  Remember also that non-daemonic
2333    processes will be joined automatically.
2334
2335    An example which will deadlock is the following::
2336
2337        from multiprocessing import Process, Queue
2338
2339        def f(q):
2340            q.put('X' * 1000000)
2341
2342        if __name__ == '__main__':
2343            queue = Queue()
2344            p = Process(target=f, args=(queue,))
2345            p.start()
2346            p.join()                    # this deadlocks
2347            obj = queue.get()
2348
2349    A fix here would be to swap the last two lines (or simply remove the
2350    ``p.join()`` line).
2351
2352Explicitly pass resources to child processes
2353
2354    On Unix a child process can make use of a shared resource created in a
2355    parent process using a global resource.  However, it is better to pass the
2356    object as an argument to the constructor for the child process.
2357
2358    Apart from making the code (potentially) compatible with Windows this also
2359    ensures that as long as the child process is still alive the object will not
2360    be garbage collected in the parent process.  This might be important if some
2361    resource is freed when the object is garbage collected in the parent
2362    process.
2363
2364    So for instance ::
2365
2366        from multiprocessing import Process, Lock
2367
2368        def f():
2369            ... do something using "lock" ...
2370
2371        if __name__ == '__main__':
2372            lock = Lock()
2373            for i in range(10):
2374                Process(target=f).start()
2375
2376    should be rewritten as ::
2377
2378        from multiprocessing import Process, Lock
2379
2380        def f(l):
2381            ... do something using "l" ...
2382
2383        if __name__ == '__main__':
2384            lock = Lock()
2385            for i in range(10):
2386                Process(target=f, args=(lock,)).start()
2387
2388Beware of replacing :data:`sys.stdin` with a "file like object"
2389
2390    :mod:`multiprocessing` originally unconditionally called::
2391
2392        os.close(sys.stdin.fileno())
2393
2394    in the :meth:`multiprocessing.Process._bootstrap` method --- this resulted
2395    in issues with processes-in-processes. This has been changed to::
2396
2397        sys.stdin.close()
2398        sys.stdin = open(os.devnull)
2399
    This solves the fundamental issue of processes colliding with each other
    and resulting in a bad file descriptor error, but it introduces a potential
    danger to applications which replace :data:`sys.stdin` with a "file-like
    object" with output buffering.  The danger is that if multiple processes
    call :meth:`~io.IOBase.close` on this file-like object, it could result in
    the same data being flushed to the object multiple times, resulting in
    corruption.
2406
2407    If you write a file-like object and implement your own caching, you can
2408    make it fork-safe by storing the pid whenever you append to the cache,
2409    and discarding the cache when the pid changes. For example::
2410
2411       @property
2412       def cache(self):
2413           pid = os.getpid()
2414           if pid != self._pid:
2415               self._pid = pid
2416               self._cache = []
2417           return self._cache
2418
    For more information, see :issue:`5155`, :issue:`5313` and :issue:`5331`.
2420
2421Windows
2422~~~~~~~
2423
2424Since Windows lacks :func:`os.fork` it has a few extra restrictions:
2425
2426More picklability
2427
2428    Ensure that all arguments to :meth:`Process.__init__` are picklable.  This
2429    means, in particular, that bound or unbound methods cannot be used directly
2430    as the ``target`` argument on Windows --- just define a function and use
2431    that instead.
2432
2433    Also, if you subclass :class:`~multiprocessing.Process` then make sure that
2434    instances will be picklable when the :meth:`Process.start
2435    <multiprocessing.Process.start>` method is called.
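
    For example, the following minimal sketch (the ``Worker`` class is
    illustrative) uses a module-level function instead of a bound method as
    the target::

        from multiprocessing import Process

        class Worker(object):
            def run(self):
                print 'hello'

        def run_worker():
            Worker().run()

        if __name__ == '__main__':
            # Process(target=Worker().run) would fail on Windows because
            # bound methods cannot be pickled
            p = Process(target=run_worker)
            p.start()
            p.join()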
2436
2437Global variables
2438
2439    Bear in mind that if code run in a child process tries to access a global
2440    variable, then the value it sees (if any) may not be the same as the value
2441    in the parent process at the time that :meth:`Process.start
2442    <multiprocessing.Process.start>` was called.
2443
2444    However, global variables which are just module level constants cause no
2445    problems.
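
    A minimal sketch (the variable name is illustrative) of the pitfall::

        from multiprocessing import Process

        counter = 0       # module-level global

        def show():
            # under Windows the child re-imports this module, so the
            # rebinding below is not seen and this prints 0; under Unix
            # (fork) it prints 1
            print 'counter =', counter

        if __name__ == '__main__':
            counter = 1
            p = Process(target=show)
            p.start()
            p.join()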
2446
2447Safe importing of main module
2448
2449    Make sure that the main module can be safely imported by a new Python
    interpreter without causing unintended side effects (such as starting a new
2451    process).
2452
2453    For example, under Windows running the following module would fail with a
2454    :exc:`RuntimeError`::
2455
2456        from multiprocessing import Process
2457
2458        def foo():
2459            print 'hello'
2460
2461        p = Process(target=foo)
2462        p.start()
2463
2464    Instead one should protect the "entry point" of the program by using ``if
2465    __name__ == '__main__':`` as follows::
2466
2467       from multiprocessing import Process, freeze_support
2468
2469       def foo():
2470           print 'hello'
2471
2472       if __name__ == '__main__':
2473           freeze_support()
2474           p = Process(target=foo)
2475           p.start()
2476
2477    (The ``freeze_support()`` line can be omitted if the program will be run
2478    normally instead of frozen.)
2479
2480    This allows the newly spawned Python interpreter to safely import the module
2481    and then run the module's ``foo()`` function.
2482
2483    Similar restrictions apply if a pool or manager is created in the main
2484    module.
2485
2486
2487.. _multiprocessing-examples:
2488
2489Examples
2490--------
2491
2492Demonstration of how to create and use customized managers and proxies:
2493
2494.. literalinclude:: ../includes/mp_newtype.py
2495
2496
2497Using :class:`~multiprocessing.pool.Pool`:
2498
2499.. literalinclude:: ../includes/mp_pool.py
2500
2501
2502Synchronization types like locks, conditions and queues:
2503
2504.. literalinclude:: ../includes/mp_synchronize.py
2505
2506
2507An example showing how to use queues to feed tasks to a collection of worker
2508processes and collect the results:
2509
2510.. literalinclude:: ../includes/mp_workers.py
2511
2512
2513An example of how a pool of worker processes can each run a
:class:`BaseHTTPServer.HTTPServer` instance while sharing a single listening
2515socket.
2516
2517.. literalinclude:: ../includes/mp_webserver.py
2518
2519
2520Some simple benchmarks comparing :mod:`multiprocessing` with :mod:`threading`:
2521
2522.. literalinclude:: ../includes/mp_benchmarks.py
2523
2524