:mod:`multiprocessing` --- Process-based parallelism
====================================================

.. module:: multiprocessing
   :synopsis: Process-based parallelism.

**Source code:** :source:`Lib/multiprocessing/`

--------------

Introduction
------------

:mod:`multiprocessing` is a package that supports spawning processes using an
API similar to the :mod:`threading` module.  The :mod:`multiprocessing` package
offers both local and remote concurrency, effectively side-stepping the
:term:`Global Interpreter Lock` by using subprocesses instead of threads.  Due
to this, the :mod:`multiprocessing` module allows the programmer to fully
leverage multiple processors on a given machine.  It runs on both Unix and
Windows.

The :mod:`multiprocessing` module also introduces APIs which do not have
analogs in the :mod:`threading` module.  A prime example of this is the
:class:`~multiprocessing.pool.Pool` object which offers a convenient means of
parallelizing the execution of a function across multiple input values,
distributing the input data across processes (data parallelism).  The following
example demonstrates the common practice of defining such functions in a module
so that child processes can successfully import that module.  This basic example
of data parallelism using :class:`~multiprocessing.pool.Pool`, ::

   from multiprocessing import Pool

   def f(x):
       return x*x

   if __name__ == '__main__':
       with Pool(5) as p:
           print(p.map(f, [1, 2, 3]))

will print to standard output ::

   [1, 4, 9]


The :class:`Process` class
~~~~~~~~~~~~~~~~~~~~~~~~~~

In :mod:`multiprocessing`, processes are spawned by creating a :class:`Process`
object and then calling its :meth:`~Process.start` method.  :class:`Process`
follows the API of :class:`threading.Thread`.  A trivial example of a
multiprocess program is ::

   from multiprocessing import Process

   def f(name):
       print('hello', name)

   if __name__ == '__main__':
       p = Process(target=f, args=('bob',))
       p.start()
       p.join()

To show the individual process IDs involved, here is an expanded example::

    from multiprocessing import Process
    import os

    def info(title):
        print(title)
        print('module name:', __name__)
        print('parent process:', os.getppid())
        print('process id:', os.getpid())

    def f(name):
        info('function f')
        print('hello', name)

    if __name__ == '__main__':
        info('main line')
        p = Process(target=f, args=('bob',))
        p.start()
        p.join()

For an explanation of why the ``if __name__ == '__main__'`` part is
necessary, see :ref:`multiprocessing-programming`.



Contexts and start methods
~~~~~~~~~~~~~~~~~~~~~~~~~~

.. _multiprocessing-start-methods:

Depending on the platform, :mod:`multiprocessing` supports three ways
to start a process.  These *start methods* are

  *spawn*
    The parent process starts a fresh Python interpreter process.  The
    child process will only inherit those resources necessary to run
    the process object's :meth:`~Process.run` method.  In particular,
    unnecessary file descriptors and handles from the parent process
    will not be inherited.  Starting a process using this method is
    rather slow compared to using *fork* or *forkserver*.

    Available on Unix and Windows.  The default on Windows and macOS.

  *fork*
    The parent process uses :func:`os.fork` to fork the Python
    interpreter.  The child process, when it begins, is effectively
    identical to the parent process.  All resources of the parent are
    inherited by the child process.  Note that safely forking a
    multithreaded process is problematic.

    Available on Unix only.  The default on Unix.

  *forkserver*
    When the program starts and selects the *forkserver* start method,
    a server process is started.  From then on, whenever a new process
    is needed, the parent process connects to the server and requests
    that it fork a new process.  The fork server process is single
    threaded so it is safe for it to use :func:`os.fork`.  No
    unnecessary resources are inherited.

    Available on Unix platforms which support passing file descriptors
    over Unix pipes.

.. versionchanged:: 3.8

   On macOS, the *spawn* start method is now the default.  The *fork* start
   method should be considered unsafe as it can lead to crashes of the
   subprocess. See :issue:`33725`.

.. versionchanged:: 3.4
   *spawn* added on all Unix platforms, and *forkserver* added for
   some Unix platforms.
   Child processes no longer inherit all of the parent's inheritable
   handles on Windows.

On Unix using the *spawn* or *forkserver* start methods will also
start a *resource tracker* process which tracks the unlinked named
system resources (such as named semaphores or
:class:`~multiprocessing.shared_memory.SharedMemory` objects) created
by processes of the program.  When all processes
have exited, the resource tracker unlinks any remaining tracked object.
Usually there should be none, but if a process was killed by a signal
there may be some "leaked" resources.  (Neither leaked semaphores nor shared
memory segments will be automatically unlinked until the next reboot. This is
problematic for both objects because the system allows only a limited number of
named semaphores, and shared memory segments occupy some space in the main
memory.)

To select a start method, use :func:`set_start_method` in
the ``if __name__ == '__main__'`` clause of the main module.  For
example::

       import multiprocessing as mp

       def foo(q):
           q.put('hello')

       if __name__ == '__main__':
           mp.set_start_method('spawn')
           q = mp.Queue()
           p = mp.Process(target=foo, args=(q,))
           p.start()
           print(q.get())
           p.join()

:func:`set_start_method` should not be used more than once in the
program.

Alternatively, you can use :func:`get_context` to obtain a context
object.  Context objects have the same API as the multiprocessing
module, and allow one to use multiple start methods in the same
program. ::

       import multiprocessing as mp

       def foo(q):
           q.put('hello')

       if __name__ == '__main__':
           ctx = mp.get_context('spawn')
           q = ctx.Queue()
           p = ctx.Process(target=foo, args=(q,))
           p.start()
           print(q.get())
           p.join()

Note that objects related to one context may not be compatible with
processes for a different context.  In particular, locks created using
the *fork* context cannot be passed to processes started using the
*spawn* or *forkserver* start methods.

A library which wants to use a particular start method should probably
use :func:`get_context` to avoid interfering with the choice of the
library user.

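As a brief sketch (the helper names ``_work`` and ``parallel_square`` are
illustrative only), such a library can keep its choice of start method
private by using a context instead of calling :func:`set_start_method`::

   import multiprocessing as mp

   def _work(x):
       return x * x

   def parallel_square(values):
       # A private 'spawn' context; calling mp.set_start_method() here
       # would instead affect the whole program.
       ctx = mp.get_context('spawn')
       with ctx.Pool(2) as pool:
           return pool.map(_work, values)

   if __name__ == '__main__':
       print(parallel_square([1, 2, 3]))   # prints "[1, 4, 9]"
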
.. warning::

   The ``'spawn'`` and ``'forkserver'`` start methods cannot currently
   be used with "frozen" executables (i.e., binaries produced by
   packages like **PyInstaller** and **cx_Freeze**) on Unix.
   The ``'fork'`` start method does work.


Exchanging objects between processes
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

:mod:`multiprocessing` supports two types of communication channel between
processes:

**Queues**

   The :class:`Queue` class is a near clone of :class:`queue.Queue`.  For
   example::

      from multiprocessing import Process, Queue

      def f(q):
          q.put([42, None, 'hello'])

      if __name__ == '__main__':
          q = Queue()
          p = Process(target=f, args=(q,))
          p.start()
          print(q.get())    # prints "[42, None, 'hello']"
          p.join()

   Queues are thread and process safe.

**Pipes**

   The :func:`Pipe` function returns a pair of connection objects connected by a
   pipe which by default is duplex (two-way).  For example::

      from multiprocessing import Process, Pipe

      def f(conn):
          conn.send([42, None, 'hello'])
          conn.close()

      if __name__ == '__main__':
          parent_conn, child_conn = Pipe()
          p = Process(target=f, args=(child_conn,))
          p.start()
          print(parent_conn.recv())   # prints "[42, None, 'hello']"
          p.join()

   The two connection objects returned by :func:`Pipe` represent the two ends of
   the pipe.  Each connection object has :meth:`~Connection.send` and
   :meth:`~Connection.recv` methods (among others).  Note that data in a pipe
   may become corrupted if two processes (or threads) try to read from or write
   to the *same* end of the pipe at the same time.  Of course there is no risk
   of corruption from processes using different ends of the pipe at the same
   time.


Synchronization between processes
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

:mod:`multiprocessing` contains equivalents of all the synchronization
primitives from :mod:`threading`.  For instance one can use a lock to ensure
that only one process prints to standard output at a time::

   from multiprocessing import Process, Lock

   def f(l, i):
       l.acquire()
       try:
           print('hello world', i)
       finally:
           l.release()

   if __name__ == '__main__':
       lock = Lock()

       for num in range(10):
           Process(target=f, args=(lock, num)).start()

Without using the lock, output from the different processes is liable to get
all mixed up.


Sharing state between processes
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

As mentioned above, when doing concurrent programming it is usually best to
avoid using shared state as far as possible.  This is particularly true when
using multiple processes.

However, if you really do need to use some shared data then
:mod:`multiprocessing` provides a couple of ways of doing so.

**Shared memory**

   Data can be stored in a shared memory map using :class:`Value` or
   :class:`Array`.  For example, the following code ::

      from multiprocessing import Process, Value, Array

      def f(n, a):
          n.value = 3.1415927
          for i in range(len(a)):
              a[i] = -a[i]

      if __name__ == '__main__':
          num = Value('d', 0.0)
          arr = Array('i', range(10))

          p = Process(target=f, args=(num, arr))
          p.start()
          p.join()

          print(num.value)
          print(arr[:])

   will print ::

      3.1415927
      [0, -1, -2, -3, -4, -5, -6, -7, -8, -9]

   The ``'d'`` and ``'i'`` arguments used when creating ``num`` and ``arr`` are
   typecodes of the kind used by the :mod:`array` module: ``'d'`` indicates a
   double precision float and ``'i'`` indicates a signed integer.  These shared
   objects will be process and thread-safe.

   For more flexibility in using shared memory one can use the
   :mod:`multiprocessing.sharedctypes` module which supports the creation of
   arbitrary ctypes objects allocated from shared memory.

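   As a brief sketch of that flexibility (the ``Point`` structure and the
   ``move`` function are illustrative only), a ctypes structure can itself
   live in shared memory::

      from ctypes import Structure, c_double
      from multiprocessing import Process
      from multiprocessing.sharedctypes import Value

      class Point(Structure):
          _fields_ = [('x', c_double), ('y', c_double)]

      def move(point):
          point.x += 1.0
          point.y -= 1.0

      if __name__ == '__main__':
          pt = Value(Point, 1.0, 2.0)    # a synchronized Point in shared memory
          p = Process(target=move, args=(pt,))
          p.start()
          p.join()
          print(pt.x, pt.y)              # prints "2.0 1.0"
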
**Server process**

   A manager object returned by :func:`Manager` controls a server process which
   holds Python objects and allows other processes to manipulate them using
   proxies.

   A manager returned by :func:`Manager` will support types
   :class:`list`, :class:`dict`, :class:`~managers.Namespace`, :class:`Lock`,
   :class:`RLock`, :class:`Semaphore`, :class:`BoundedSemaphore`,
   :class:`Condition`, :class:`Event`, :class:`Barrier`,
   :class:`Queue`, :class:`Value` and :class:`Array`.  For example, ::

      from multiprocessing import Process, Manager

      def f(d, l):
          d[1] = '1'
          d['2'] = 2
          d[0.25] = None
          l.reverse()

      if __name__ == '__main__':
          with Manager() as manager:
              d = manager.dict()
              l = manager.list(range(10))

              p = Process(target=f, args=(d, l))
              p.start()
              p.join()

              print(d)
              print(l)

   will print ::

       {0.25: None, 1: '1', '2': 2}
       [9, 8, 7, 6, 5, 4, 3, 2, 1, 0]

   Server process managers are more flexible than using shared memory objects
   because they can be made to support arbitrary object types.  Also, a single
   manager can be shared by processes on different computers over a network.
   They are, however, slower than using shared memory.

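   As a brief sketch of the networked case (the address, port and
   ``authkey`` below are illustrative only), one process can serve a queue
   that clients on other machines connect to through a
   :class:`~managers.BaseManager` subclass::

      from multiprocessing.managers import BaseManager
      from queue import Queue

      queue = Queue()

      class QueueManager(BaseManager):
          pass

      # Expose the queue to clients under the name 'get_queue'.
      QueueManager.register('get_queue', callable=lambda: queue)

      if __name__ == '__main__':
          m = QueueManager(address=('', 50000), authkey=b'abracadabra')
          server = m.get_server()
          server.serve_forever()

   A client would register ``'get_queue'`` without a callable, connect to the
   same address with the same ``authkey``, and then use the proxy returned by
   ``get_queue()`` like an ordinary queue.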

Using a pool of workers
~~~~~~~~~~~~~~~~~~~~~~~

The :class:`~multiprocessing.pool.Pool` class represents a pool of worker
processes.  It has methods which allow tasks to be offloaded to the worker
processes in a few different ways.

For example::

   from multiprocessing import Pool, TimeoutError
   import time
   import os

   def f(x):
       return x*x

   if __name__ == '__main__':
       # start 4 worker processes
       with Pool(processes=4) as pool:

           # print "[0, 1, 4,..., 81]"
           print(pool.map(f, range(10)))

           # print same numbers in arbitrary order
           for i in pool.imap_unordered(f, range(10)):
               print(i)

           # evaluate "f(20)" asynchronously
           res = pool.apply_async(f, (20,))      # runs in *only* one process
           print(res.get(timeout=1))             # prints "400"

           # evaluate "os.getpid()" asynchronously
           res = pool.apply_async(os.getpid, ()) # runs in *only* one process
           print(res.get(timeout=1))             # prints the PID of that process

           # launching multiple evaluations asynchronously *may* use more processes
           multiple_results = [pool.apply_async(os.getpid, ()) for i in range(4)]
           print([res.get(timeout=1) for res in multiple_results])

           # make a single worker sleep for 10 secs
           res = pool.apply_async(time.sleep, (10,))
           try:
               print(res.get(timeout=1))
           except TimeoutError:
               print("We lacked patience and got a multiprocessing.TimeoutError")

           print("For the moment, the pool remains available for more work")

       # exiting the 'with'-block has stopped the pool
       print("Now the pool is closed and no longer available")

Note that the methods of a pool should only ever be used by the
process which created it.

.. note::

   Functionality within this package requires that the ``__main__`` module be
   importable by the children. This is covered in :ref:`multiprocessing-programming`
   however it is worth pointing out here. This means that some examples, such
   as the :class:`multiprocessing.pool.Pool` examples will not work in the
   interactive interpreter. For example::

      >>> from multiprocessing import Pool
      >>> p = Pool(5)
      >>> def f(x):
      ...     return x*x
      ...
      >>> with p:
      ...   p.map(f, [1,2,3])
      Process PoolWorker-1:
      Process PoolWorker-2:
      Process PoolWorker-3:
      Traceback (most recent call last):
      Traceback (most recent call last):
      Traceback (most recent call last):
      AttributeError: 'module' object has no attribute 'f'
      AttributeError: 'module' object has no attribute 'f'
      AttributeError: 'module' object has no attribute 'f'

   (If you try this it will actually output three full tracebacks
   interleaved in a semi-random fashion, and then you may have to
   stop the parent process somehow.)


Reference
---------

The :mod:`multiprocessing` package mostly replicates the API of the
:mod:`threading` module.


:class:`Process` and exceptions
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

.. class:: Process(group=None, target=None, name=None, args=(), kwargs={}, \
                   *, daemon=None)

   Process objects represent activity that is run in a separate process. The
   :class:`Process` class has equivalents of all the methods of
   :class:`threading.Thread`.

   The constructor should always be called with keyword arguments. *group*
   should always be ``None``; it exists solely for compatibility with
   :class:`threading.Thread`.  *target* is the callable object to be invoked by
   the :meth:`run()` method.  It defaults to ``None``, meaning nothing is
   called. *name* is the process name (see :attr:`name` for more details).
   *args* is the argument tuple for the target invocation.  *kwargs* is a
   dictionary of keyword arguments for the target invocation.  If provided,
   the keyword-only *daemon* argument sets the process :attr:`daemon` flag
   to ``True`` or ``False``.  If ``None`` (the default), this flag will be
   inherited from the creating process.

   By default, no arguments are passed to *target*.

   If a subclass overrides the constructor, it must make sure it invokes the
   base class constructor (:meth:`Process.__init__`) before doing anything else
   to the process.

   .. versionchanged:: 3.3
      Added the *daemon* argument.

   .. method:: run()

      Method representing the process's activity.

      You may override this method in a subclass.  The standard :meth:`run`
      method invokes the callable object passed to the object's constructor as
      the target argument, if any, with sequential and keyword arguments taken
      from the *args* and *kwargs* arguments, respectively.

   .. method:: start()

      Start the process's activity.

      This must be called at most once per process object.  It arranges for the
      object's :meth:`run` method to be invoked in a separate process.

   .. method:: join([timeout])

      If the optional argument *timeout* is ``None`` (the default), the method
      blocks until the process whose :meth:`join` method is called terminates.
      If *timeout* is a positive number, it blocks at most *timeout* seconds.
      Note that the method returns ``None`` if its process terminates or if the
      method times out.  Check the process's :attr:`exitcode` to determine if
      it terminated.

      A process can be joined many times.

      A process cannot join itself because this would cause a deadlock.  It is
      an error to attempt to join a process before it has been started.

   .. attribute:: name

      The process's name.  The name is a string used for identification purposes
      only.  It has no semantics.  Multiple processes may be given the same
      name.

      The initial name is set by the constructor.  If no explicit name is
      provided to the constructor, a name of the form
      'Process-N\ :sub:`1`:N\ :sub:`2`:...:N\ :sub:`k`' is constructed, where
      each N\ :sub:`k` is the N-th child of its parent.

   .. method:: is_alive

      Return whether the process is alive.

      Roughly, a process object is alive from the moment the :meth:`start`
      method returns until the child process terminates.

   .. attribute:: daemon

      The process's daemon flag, a Boolean value.  This must be set before
      :meth:`start` is called.

      The initial value is inherited from the creating process.

      When a process exits, it attempts to terminate all of its daemonic child
      processes.

      Note that a daemonic process is not allowed to create child processes.
      Otherwise a daemonic process would leave its children orphaned if it gets
      terminated when its parent process exits. Additionally, these are **not**
      Unix daemons or services, they are normal processes that will be
      terminated (and not joined) if non-daemonic processes have exited.

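      As a brief sketch (the ``pause`` helper is illustrative only)::

         import time
         from multiprocessing import Process

         def pause():
             time.sleep(60)

         if __name__ == '__main__':
             p = Process(target=pause, daemon=True)
             p.start()
             # The main process exits almost immediately; the daemonic
             # child is terminated rather than left running or joined.
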
   In addition to the :class:`threading.Thread` API, :class:`Process` objects
   also support the following attributes and methods:

   .. attribute:: pid

      Return the process ID.  Before the process is spawned, this will be
      ``None``.

   .. attribute:: exitcode

      The child's exit code.  This will be ``None`` if the process has not yet
      terminated.  A negative value *-N* indicates that the child was terminated
      by signal *N*.

   .. attribute:: authkey

      The process's authentication key (a byte string).

      When :mod:`multiprocessing` is initialized the main process is assigned a
      random string using :func:`os.urandom`.

      When a :class:`Process` object is created, it will inherit the
      authentication key of its parent process, although this may be changed by
      setting :attr:`authkey` to another byte string.

      See :ref:`multiprocessing-auth-keys`.

   .. attribute:: sentinel

      A numeric handle of a system object which will become "ready" when
      the process ends.

      You can use this value if you want to wait on several events at
      once using :func:`multiprocessing.connection.wait`.  Otherwise
      calling :meth:`join()` is simpler.

      On Windows, this is an OS handle usable with the ``WaitForSingleObject``
      and ``WaitForMultipleObjects`` family of API calls.  On Unix, this is
      a file descriptor usable with primitives from the :mod:`select` module.

      .. versionadded:: 3.3

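      As a brief sketch of waiting for several processes at once through
      their sentinels (the ``slow`` helper is illustrative only)::

         from multiprocessing import Process
         from multiprocessing.connection import wait
         import time

         def slow():
             time.sleep(1)

         if __name__ == '__main__':
             procs = [Process(target=slow) for _ in range(3)]
             for p in procs:
                 p.start()
             pending = [p.sentinel for p in procs]
             while pending:
                 # wait() returns whichever sentinels have become "ready",
                 # i.e. whose processes have ended.
                 ready = wait(pending)
                 pending = [s for s in pending if s not in ready]
             print([p.exitcode for p in procs])   # prints "[0, 0, 0]"
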
   .. method:: terminate()

      Terminate the process.  On Unix this is done using the ``SIGTERM`` signal;
      on Windows :c:func:`TerminateProcess` is used.  Note that exit handlers and
      finally clauses, etc., will not be executed.

      Note that descendant processes of the process will *not* be terminated --
      they will simply become orphaned.

      .. warning::

         If this method is used when the associated process is using a pipe or
         queue then the pipe or queue is liable to become corrupted and may
         become unusable by other processes.  Similarly, if the process has
         acquired a lock or semaphore etc. then terminating it is liable to
         cause other processes to deadlock.

   .. method:: kill()

      Same as :meth:`terminate()` but using the ``SIGKILL`` signal on Unix.

      .. versionadded:: 3.7

   .. method:: close()

      Close the :class:`Process` object, releasing all resources associated
      with it.  :exc:`ValueError` is raised if the underlying process
      is still running.  Once :meth:`close` returns successfully, most
      other methods and attributes of the :class:`Process` object will
      raise :exc:`ValueError`.

      .. versionadded:: 3.7

   Note that the :meth:`start`, :meth:`join`, :meth:`is_alive` and
   :meth:`terminate` methods and the :attr:`exitcode` attribute should only be
   used by the process that created the process object.

   Example usage of some of the methods of :class:`Process`:

   .. doctest::
      :options: +ELLIPSIS

       >>> import multiprocessing, time, signal
       >>> p = multiprocessing.Process(target=time.sleep, args=(1000,))
       >>> print(p, p.is_alive())
       <Process ... initial> False
       >>> p.start()
       >>> print(p, p.is_alive())
       <Process ... started> True
       >>> p.terminate()
       >>> time.sleep(0.1)
       >>> print(p, p.is_alive())
       <Process ... stopped exitcode=-SIGTERM> False
       >>> p.exitcode == -signal.SIGTERM
       True

.. exception:: ProcessError

   The base class of all :mod:`multiprocessing` exceptions.

.. exception:: BufferTooShort

   Exception raised by :meth:`Connection.recv_bytes_into()` when the supplied
   buffer object is too small for the message read.

   If ``e`` is an instance of :exc:`BufferTooShort` then ``e.args[0]`` will give
   the message as a byte string.

.. exception:: AuthenticationError

   Raised when there is an authentication error.

.. exception:: TimeoutError

   Raised by methods with a timeout when the timeout expires.

Pipes and Queues
~~~~~~~~~~~~~~~~

When using multiple processes, one generally uses message passing for
communication between processes and avoids having to use any synchronization
primitives like locks.

For passing messages one can use :func:`Pipe` (for a connection between two
processes) or a queue (which allows multiple producers and consumers).

The :class:`Queue`, :class:`SimpleQueue` and :class:`JoinableQueue` types
are multi-producer, multi-consumer :abbr:`FIFO (first-in, first-out)`
queues modelled on the :class:`queue.Queue` class in the
standard library.  They differ in that :class:`Queue` lacks the
:meth:`~queue.Queue.task_done` and :meth:`~queue.Queue.join` methods introduced
into Python 2.5's :class:`queue.Queue` class.

If you use :class:`JoinableQueue` then you **must** call
:meth:`JoinableQueue.task_done` for each task removed from the queue or else the
semaphore used to count the number of unfinished tasks may eventually overflow,
raising an exception.

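As a brief sketch of that pattern (the ``worker`` function and the items are
illustrative only)::

   from multiprocessing import JoinableQueue, Process

   def worker(q):
       while True:
           item = q.get()
           print('processed', item)
           q.task_done()          # one call for every item fetched with get()

   if __name__ == '__main__':
       q = JoinableQueue()
       p = Process(target=worker, args=(q,), daemon=True)
       p.start()
       for item in range(5):
           q.put(item)
       q.join()                   # blocks until every item has been marked done
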
Note that one can also create a shared queue by using a manager object -- see
:ref:`multiprocessing-managers`.

.. note::

   :mod:`multiprocessing` uses the usual :exc:`queue.Empty` and
   :exc:`queue.Full` exceptions to signal a timeout.  They are not available in
   the :mod:`multiprocessing` namespace so you need to import them from
   :mod:`queue`.

.. note::

   When an object is put on a queue, the object is pickled and a
   background thread later flushes the pickled data to an underlying
   pipe.  This has some consequences which are a little surprising,
   but should not cause any practical difficulties -- if they really
   bother you then you can instead use a queue created with a
   :ref:`manager <multiprocessing-managers>`.

   (1) After putting an object on an empty queue there may be an
       infinitesimal delay before the queue's :meth:`~Queue.empty`
       method returns :const:`False` and :meth:`~Queue.get_nowait` can
       return without raising :exc:`queue.Empty`.

   (2) If multiple processes are enqueuing objects, it is possible for
       the objects to be received at the other end out-of-order.
       However, objects enqueued by the same process will always be in
       the expected order with respect to each other.

.. warning::

   If a process is killed using :meth:`Process.terminate` or :func:`os.kill`
   while it is trying to use a :class:`Queue`, then the data in the queue is
   likely to become corrupted.  This may cause any other process to get an
   exception when it tries to use the queue later on.

.. warning::

   As mentioned above, if a child process has put items on a queue (and it has
   not used :meth:`JoinableQueue.cancel_join_thread
   <multiprocessing.Queue.cancel_join_thread>`), then that process will
   not terminate until all buffered items have been flushed to the pipe.

   This means that if you try joining that process you may get a deadlock unless
   you are sure that all items which have been put on the queue have been
   consumed.  Similarly, if the child process is non-daemonic then the parent
   process may hang on exit when it tries to join all its non-daemonic children.

   Note that a queue created using a manager does not have this issue.  See
   :ref:`multiprocessing-programming`.

For an example of the usage of queues for interprocess communication see
:ref:`multiprocessing-examples`.


.. function:: Pipe([duplex])

   Returns a pair ``(conn1, conn2)`` of
   :class:`~multiprocessing.connection.Connection` objects representing the
   ends of a pipe.

   If *duplex* is ``True`` (the default) then the pipe is bidirectional.  If
   *duplex* is ``False`` then the pipe is unidirectional: ``conn1`` can only be
   used for receiving messages and ``conn2`` can only be used for sending
   messages.

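   As a brief sketch of a one-way pipe (the ``writer`` function is
   illustrative only)::

      from multiprocessing import Pipe, Process

      def writer(conn):
          conn.send('ping')
          conn.close()

      if __name__ == '__main__':
          r, w = Pipe(duplex=False)   # r can only receive, w can only send
          p = Process(target=writer, args=(w,))
          p.start()
          print(r.recv())             # prints "ping"
          p.join()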

.. class:: Queue([maxsize])

   Returns a process shared queue implemented using a pipe and a few
   locks/semaphores.  When a process first puts an item on the queue a feeder
   thread is started which transfers objects from a buffer into the pipe.

   The usual :exc:`queue.Empty` and :exc:`queue.Full` exceptions from the
   standard library's :mod:`queue` module are raised to signal timeouts.

   :class:`Queue` implements all the methods of :class:`queue.Queue` except for
   :meth:`~queue.Queue.task_done` and :meth:`~queue.Queue.join`.

   .. method:: qsize()

      Return the approximate size of the queue.  Because of
      multithreading/multiprocessing semantics, this number is not reliable.

      Note that this may raise :exc:`NotImplementedError` on Unix platforms like
      Mac OS X where ``sem_getvalue()`` is not implemented.

   .. method:: empty()

      Return ``True`` if the queue is empty, ``False`` otherwise.  Because of
      multithreading/multiprocessing semantics, this is not reliable.

   .. method:: full()

      Return ``True`` if the queue is full, ``False`` otherwise.  Because of
      multithreading/multiprocessing semantics, this is not reliable.

   .. method:: put(obj[, block[, timeout]])

      Put *obj* into the queue.  If the optional argument *block* is ``True``
      (the default) and *timeout* is ``None`` (the default), block if necessary until
      a free slot is available.  If *timeout* is a positive number, it blocks at
      most *timeout* seconds and raises the :exc:`queue.Full` exception if no
      free slot was available within that time.  Otherwise (*block* is
      ``False``), put an item on the queue if a free slot is immediately
      available, else raise the :exc:`queue.Full` exception (*timeout* is
      ignored in that case).

      .. versionchanged:: 3.8
         If the queue is closed, :exc:`ValueError` is raised instead of
         :exc:`AssertionError`.

   .. method:: put_nowait(obj)

      Equivalent to ``put(obj, False)``.

   .. method:: get([block[, timeout]])

      Remove and return an item from the queue.  If the optional argument
      *block* is ``True`` (the default) and *timeout* is ``None`` (the default),
      block if necessary until an item is available.  If *timeout* is a positive
      number, it blocks at most *timeout* seconds and raises the
      :exc:`queue.Empty` exception if no item was available within that time.
      Otherwise (*block* is ``False``), return an item if one is immediately
      available, else raise the :exc:`queue.Empty` exception (*timeout* is
      ignored in that case).

      .. versionchanged:: 3.8
         If the queue is closed, :exc:`ValueError` is raised instead of
         :exc:`OSError`.

   .. method:: get_nowait()

      Equivalent to ``get(False)``.

   :class:`multiprocessing.Queue` has a few additional methods not found in
   :class:`queue.Queue`.  These methods are usually unnecessary for most
   code:

   .. method:: close()

      Indicate that no more data will be put on this queue by the current
      process.  The background thread will quit once it has flushed all buffered
      data to the pipe.  This is called automatically when the queue is garbage
      collected.

   .. method:: join_thread()

      Join the background thread.  This can only be used after :meth:`close` has
      been called.  It blocks until the background thread exits, ensuring that
      all data in the buffer has been flushed to the pipe.

      By default if a process is not the creator of the queue then on exit it
      will attempt to join the queue's background thread.  The process can call
      :meth:`cancel_join_thread` to make :meth:`join_thread` do nothing.

   .. method:: cancel_join_thread()

      Prevent :meth:`join_thread` from blocking.  In particular, this prevents
      the background thread from being joined automatically when the process
      exits -- see :meth:`join_thread`.

      A better name for this method might be
      ``allow_exit_without_flush()``.  It is likely to cause enqueued
      data to be lost, and you almost certainly will not need to use it.
      It is really only there if you need the current process to exit
      immediately without waiting to flush enqueued data to the
      underlying pipe, and you don't care about lost data.

   .. note::

      This class's functionality requires a functioning shared semaphore
      implementation on the host operating system. Without one, the
      functionality in this class will be disabled, and attempts to
      instantiate a :class:`Queue` will result in an :exc:`ImportError`. See
      :issue:`3770` for additional information.  The same holds true for any
      of the specialized queue types listed below.

.. class:: SimpleQueue()

   It is a simplified :class:`Queue` type, very close to a locked :class:`Pipe`.

   .. method:: empty()

      Return ``True`` if the queue is empty, ``False`` otherwise.

   .. method:: get()

      Remove and return an item from the queue.

   .. method:: put(item)

      Put *item* into the queue.

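   As a brief sketch (the function ``f`` is illustrative only), it can be
   used just like :class:`Queue` for simple message passing::

      from multiprocessing import Process, SimpleQueue

      def f(q):
          q.put('hello')

      if __name__ == '__main__':
          q = SimpleQueue()
          p = Process(target=f, args=(q,))
          p.start()
          print(q.get())    # prints "hello"
          p.join()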

.. class:: JoinableQueue([maxsize])

   :class:`JoinableQueue`, a :class:`Queue` subclass, is a queue which
   additionally has :meth:`task_done` and :meth:`join` methods.

   .. method:: task_done()

      Indicate that a formerly enqueued task is complete. Used by queue
      consumers.  For each :meth:`~Queue.get` used to fetch a task, a subsequent
      call to :meth:`task_done` tells the queue that the processing on the task
      is complete.

      If a :meth:`~queue.Queue.join` is currently blocking, it will resume when all
      items have been processed (meaning that a :meth:`task_done` call was
      received for every item that had been :meth:`~Queue.put` into the queue).

      Raises a :exc:`ValueError` if called more times than there were items
      placed in the queue.


   .. method:: join()

      Block until all items in the queue have been gotten and processed.

      The count of unfinished tasks goes up whenever an item is added to the
      queue.  The count goes down whenever a consumer calls
      :meth:`task_done` to indicate that the item was retrieved and all work on
      it is complete.  When the count of unfinished tasks drops to zero,
      :meth:`~queue.Queue.join` unblocks.


Miscellaneous
~~~~~~~~~~~~~

.. function:: active_children()

   Return list of all live children of the current process.

   Calling this has the side effect of "joining" any processes which have
   already finished.

.. function:: cpu_count()

   Return the number of CPUs in the system.

   This number is not equivalent to the number of CPUs the current process can
   use.  The number of usable CPUs can be obtained with
   ``len(os.sched_getaffinity(0))``.

   May raise :exc:`NotImplementedError`.

   .. seealso::
      :func:`os.cpu_count`

.. function:: current_process()

   Return the :class:`Process` object corresponding to the current process.

   An analogue of :func:`threading.current_thread`.

.. function:: parent_process()

   Return the :class:`Process` object corresponding to the parent process of
   the :func:`current_process`. For the main process, ``parent_process`` will
   be ``None``.

   .. versionadded:: 3.8

.. function:: freeze_support()

   Add support for when a program which uses :mod:`multiprocessing` has been
   frozen to produce a Windows executable.  (Has been tested with **py2exe**,
   **PyInstaller** and **cx_Freeze**.)

   One needs to call this function straight after the ``if __name__ ==
   '__main__'`` line of the main module.  For example::

      from multiprocessing import Process, freeze_support

      def f():
          print('hello world!')

      if __name__ == '__main__':
          freeze_support()
          Process(target=f).start()

   If the ``freeze_support()`` line is omitted then trying to run the frozen
   executable will raise :exc:`RuntimeError`.

   Calling ``freeze_support()`` has no effect when invoked on any operating
   system other than Windows.  In addition, if the module is being run
   normally by the Python interpreter on Windows (the program has not been
   frozen), then ``freeze_support()`` has no effect.

.. function:: get_all_start_methods()

   Returns a list of the supported start methods, the first of which
   is the default.  The possible start methods are ``'fork'``,
   ``'spawn'`` and ``'forkserver'``.  On Windows only ``'spawn'`` is
   available.  On Unix ``'fork'`` and ``'spawn'`` are always
   supported, with ``'fork'`` being the default.

   .. versionadded:: 3.4

.. function:: get_context(method=None)

   Return a context object which has the same attributes as the
   :mod:`multiprocessing` module.

   If *method* is ``None`` then the default context is returned.
   Otherwise *method* should be ``'fork'``, ``'spawn'`` or
   ``'forkserver'``.  :exc:`ValueError` is raised if the specified
   start method is not available.

   .. versionadded:: 3.4

.. function:: get_start_method(allow_none=False)

   Return the name of the start method used for starting processes.

   If the start method has not been fixed and *allow_none* is false,
   then the start method is fixed to the default and the name is
   returned.  If the start method has not been fixed and *allow_none*
   is true then ``None`` is returned.

   The return value can be ``'fork'``, ``'spawn'``, ``'forkserver'``
   or ``None``.  ``'fork'`` is the default on Unix, while ``'spawn'`` is
   the default on Windows.

   .. versionadded:: 3.4

.. function:: set_executable()

   Sets the path of the Python interpreter to use when starting a child process.
   (By default :data:`sys.executable` is used).  Embedders will probably need to
   do something like ::

      set_executable(os.path.join(sys.exec_prefix, 'pythonw.exe'))

   before they can create child processes.

   .. versionchanged:: 3.4
      Now supported on Unix when the ``'spawn'`` start method is used.

.. function:: set_start_method(method)

   Set the method which should be used to start child processes.
   *method* can be ``'fork'``, ``'spawn'`` or ``'forkserver'``.

   Note that this should be called at most once, and it should be
   protected inside the ``if __name__ == '__main__'`` clause of the
   main module.

   .. versionadded:: 3.4

.. note::

   :mod:`multiprocessing` contains no analogues of
   :func:`threading.active_count`, :func:`threading.enumerate`,
   :func:`threading.settrace`, :func:`threading.setprofile`,
   :class:`threading.Timer`, or :class:`threading.local`.


Connection Objects
~~~~~~~~~~~~~~~~~~

.. currentmodule:: multiprocessing.connection

Connection objects allow the sending and receiving of picklable objects or
strings.  They can be thought of as message oriented connected sockets.

Connection objects are usually created using
:func:`Pipe <multiprocessing.Pipe>` -- see also
:ref:`multiprocessing-listeners-clients`.

.. class:: Connection

   .. method:: send(obj)

      Send an object to the other end of the connection which should be read
      using :meth:`recv`.

      The object must be picklable.  Very large pickles (approximately 32 MiB+,
      though it depends on the OS) may raise a :exc:`ValueError` exception.

   .. method:: recv()

      Return an object sent from the other end of the connection using
      :meth:`send`.  Blocks until there is something to receive.  Raises
      :exc:`EOFError` if there is nothing left to receive
      and the other end was closed.

   .. method:: fileno()

      Return the file descriptor or handle used by the connection.

   .. method:: close()

      Close the connection.

      This is called automatically when the connection is garbage collected.

   .. method:: poll([timeout])

      Return whether there is any data available to be read.

      If *timeout* is not specified then it will return immediately.  If
      *timeout* is a number then this specifies the maximum time in seconds to
      block.  If *timeout* is ``None`` then an infinite timeout is used.

      Note that multiple connection objects may be polled at once by
      using :func:`multiprocessing.connection.wait`.

   .. method:: send_bytes(buffer[, offset[, size]])

      Send byte data from a :term:`bytes-like object` as a complete message.

      If *offset* is given then data is read from that position in *buffer*.  If
      *size* is given then that many bytes will be read from buffer.  Very large
      buffers (approximately 32 MiB+, though it depends on the OS) may raise a
      :exc:`ValueError` exception.

   .. method:: recv_bytes([maxlength])

      Return a complete message of byte data sent from the other end of the
      connection as a string.  Blocks until there is something to receive.
      Raises :exc:`EOFError` if there is nothing left
      to receive and the other end has closed.

      If *maxlength* is specified and the message is longer than *maxlength*
      then :exc:`OSError` is raised and the connection will no longer be
      readable.

      .. versionchanged:: 3.3
         This function used to raise :exc:`IOError`, which is now an
         alias of :exc:`OSError`.


   .. method:: recv_bytes_into(buffer[, offset])

      Read into *buffer* a complete message of byte data sent from the other end
      of the connection and return the number of bytes in the message.  Blocks
      until there is something to receive.  Raises
      :exc:`EOFError` if there is nothing left to receive and the other end was
      closed.

      *buffer* must be a writable :term:`bytes-like object`.  If
      *offset* is given then the message will be written into the buffer from
      that position.  *offset* must be a non-negative integer less than the
      length of *buffer* (in bytes).

      If the buffer is too short then a :exc:`BufferTooShort` exception is
      raised and the complete message is available as ``e.args[0]`` where ``e``
      is the exception instance.

   .. versionchanged:: 3.3
      Connection objects themselves can now be transferred between processes
      using :meth:`Connection.send` and :meth:`Connection.recv`.

   .. versionadded:: 3.3
      Connection objects now support the context management protocol -- see
      :ref:`typecontextmanager`.  :meth:`~contextmanager.__enter__` returns the
      connection object, and :meth:`~contextmanager.__exit__` calls :meth:`close`.

For example:

.. doctest::

    >>> from multiprocessing import Pipe
    >>> a, b = Pipe()
    >>> a.send([1, 'hello', None])
    >>> b.recv()
    [1, 'hello', None]
    >>> b.send_bytes(b'thank you')
    >>> a.recv_bytes()
    b'thank you'
    >>> import array
    >>> arr1 = array.array('i', range(5))
    >>> arr2 = array.array('i', [0] * 10)
    >>> a.send_bytes(arr1)
    >>> count = b.recv_bytes_into(arr2)
    >>> assert count == len(arr1) * arr1.itemsize
    >>> arr2
    array('i', [0, 1, 2, 3, 4, 0, 0, 0, 0, 0])


.. warning::

    The :meth:`Connection.recv` method automatically unpickles the data it
    receives, which can be a security risk unless you can trust the process
    which sent the message.

    Therefore, unless the connection object was produced using :func:`Pipe` you
    should only use the :meth:`~Connection.recv` and :meth:`~Connection.send`
    methods after performing some sort of authentication.  See
    :ref:`multiprocessing-auth-keys`.

.. warning::

    If a process is killed while it is trying to read or write to a pipe then
    the data in the pipe is likely to become corrupted, because it may become
    impossible to be sure where the message boundaries lie.


Synchronization primitives
~~~~~~~~~~~~~~~~~~~~~~~~~~

.. currentmodule:: multiprocessing

Generally synchronization primitives are not as necessary in a multiprocess
program as they are in a multithreaded program.  See the documentation for the
:mod:`threading` module.

Note that one can also create synchronization primitives by using a manager
object -- see :ref:`multiprocessing-managers`.

.. class:: Barrier(parties[, action[, timeout]])

   A barrier object: a clone of :class:`threading.Barrier`.

   .. versionadded:: 3.3

.. class:: BoundedSemaphore([value])

   A bounded semaphore object: a close analog of
   :class:`threading.BoundedSemaphore`.

   A solitary difference from its close analog exists: its ``acquire`` method's
   first argument is named *block*, as is consistent with :meth:`Lock.acquire`.

   .. note::
      On Mac OS X, this is indistinguishable from :class:`Semaphore` because
      ``sem_getvalue()`` is not implemented on that platform.

.. class:: Condition([lock])

   A condition variable: an alias for :class:`threading.Condition`.

   If *lock* is specified then it should be a :class:`Lock` or :class:`RLock`
   object from :mod:`multiprocessing`.

   .. versionchanged:: 3.3
      The :meth:`~threading.Condition.wait_for` method was added.

.. class:: Event()

   A clone of :class:`threading.Event`.


.. class:: Lock()

   A non-recursive lock object: a close analog of :class:`threading.Lock`.
   Once a process or thread has acquired a lock, subsequent attempts to
   acquire it from any process or thread will block until it is released;
   any process or thread may release it.  The concepts and behaviors of
   :class:`threading.Lock` as it applies to threads are replicated here in
   :class:`multiprocessing.Lock` as it applies to either processes or threads,
   except as noted.

   Note that :class:`Lock` is actually a factory function which returns an
   instance of ``multiprocessing.synchronize.Lock`` initialized with a
   default context.

   :class:`Lock` supports the :term:`context manager` protocol and thus may be
   used in :keyword:`with` statements.

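   As a brief sketch of the context manager form (the ``report`` function
   is illustrative only)::

      from multiprocessing import Lock, Process

      def report(lock, i):
          # Acquires the lock on entry and releases it on exit.
          with lock:
              print('process', i)

      if __name__ == '__main__':
          lock = Lock()
          workers = [Process(target=report, args=(lock, n)) for n in range(3)]
          for w in workers:
              w.start()
          for w in workers:
              w.join()
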
1260   .. method:: acquire(block=True, timeout=None)
1261
1262      Acquire a lock, blocking or non-blocking.
1263
1264      With the *block* argument set to ``True`` (the default), the method call
1265      will block until the lock is in an unlocked state, then set it to locked
1266      and return ``True``.  Note that the name of this first argument differs
1267      from that in :meth:`threading.Lock.acquire`.
1268
1269      With the *block* argument set to ``False``, the method call does not
1270      block.  If the lock is currently in a locked state, return ``False``;
1271      otherwise set the lock to a locked state and return ``True``.
1272
1273      When invoked with a positive, floating-point value for *timeout*, block
1274      for at most the number of seconds specified by *timeout* as long as
1275      the lock can not be acquired.  Invocations with a negative value for
1276      *timeout* are equivalent to a *timeout* of zero.  Invocations with a
1277      *timeout* value of ``None`` (the default) set the timeout period to
1278      infinite.  Note that the treatment of negative or ``None`` values for
1279      *timeout* differs from the implemented behavior in
1280      :meth:`threading.Lock.acquire`.  The *timeout* argument has no practical
1281      implications if the *block* argument is set to ``False`` and is thus
1282      ignored.  Returns ``True`` if the lock has been acquired or ``False`` if
1283      the timeout period has elapsed.
1284
1285
1286   .. method:: release()
1287
1288      Release a lock.  This can be called from any process or thread, not only
1289      the process or thread which originally acquired the lock.
1290
1291      Behavior is the same as in :meth:`threading.Lock.release` except that
1292      when invoked on an unlocked lock, a :exc:`ValueError` is raised.
1293
1294
1295.. class:: RLock()
1296
1297   A recursive lock object: a close analog of :class:`threading.RLock`.  A
1298   recursive lock must be released by the process or thread that acquired it.
1299   Once a process or thread has acquired a recursive lock, the same process
1300   or thread may acquire it again without blocking; that process or thread
1301   must release it once for each time it has been acquired.
1302
1303   Note that :class:`RLock` is actually a factory function which returns an
1304   instance of ``multiprocessing.synchronize.RLock`` initialized with a
1305   default context.
1306
1307   :class:`RLock` supports the :term:`context manager` protocol and thus may be
1308   used in :keyword:`with` statements.
1309
1310
1311   .. method:: acquire(block=True, timeout=None)
1312
1313      Acquire a lock, blocking or non-blocking.
1314
1315      When invoked with the *block* argument set to ``True``, block until the
1316      lock is in an unlocked state (not owned by any process or thread) unless
1317      the lock is already owned by the current process or thread.  The current
1318      process or thread then takes ownership of the lock (if it does not
1319      already have ownership) and the recursion level inside the lock increments
1320      by one, resulting in a return value of ``True``.  Note that there are
1321      several differences in this first argument's behavior compared to the
1322      implementation of :meth:`threading.RLock.acquire`, starting with the name
1323      of the argument itself.
1324
1325      When invoked with the *block* argument set to ``False``, do not block.
1326      If the lock has already been acquired (and thus is owned) by another
1327      process or thread, the current process or thread does not take ownership
1328      and the recursion level within the lock is not changed, resulting in
1329      a return value of ``False``.  If the lock is in an unlocked state, the
1330      current process or thread takes ownership and the recursion level is
1331      incremented, resulting in a return value of ``True``.
1332
1333      Use and behaviors of the *timeout* argument are the same as in
1334      :meth:`Lock.acquire`.  Note that some of these behaviors of *timeout*
1335      differ from the implemented behaviors in :meth:`threading.RLock.acquire`.
1336
1337
1338   .. method:: release()
1339
1340      Release a lock, decrementing the recursion level.  If after the
1341      decrement the recursion level is zero, reset the lock to unlocked (not
1342      owned by any process or thread) and if any other processes or threads
1343      are blocked waiting for the lock to become unlocked, allow exactly one
1344      of them to proceed.  If after the decrement the recursion level is still
1345      nonzero, the lock remains locked and owned by the calling process or
1346      thread.
1347
1348      Only call this method when the calling process or thread owns the lock.
1349      An :exc:`AssertionError` is raised if this method is called by a process
1350      or thread other than the owner or if the lock is in an unlocked (unowned)
1351      state.  Note that the type of exception raised in this situation
1352      differs from the implemented behavior in :meth:`threading.RLock.release`.
1353
1354
1355.. class:: Semaphore([value])
1356
1357   A semaphore object: a close analog of :class:`threading.Semaphore`.
1358
1359   A solitary difference from its close analog exists: its ``acquire`` method's
1360   first argument is named *block*, as is consistent with :meth:`Lock.acquire`.
1361
1362.. note::
1363
1364   On Mac OS X, ``sem_timedwait`` is unsupported, so calling ``acquire()`` with
1365   a timeout will emulate that function's behavior using a sleeping loop.
1366
1367.. note::
1368
1369   If the SIGINT signal generated by :kbd:`Ctrl-C` arrives while the main thread is
1370   blocked by a call to :meth:`BoundedSemaphore.acquire`, :meth:`Lock.acquire`,
1371   :meth:`RLock.acquire`, :meth:`Semaphore.acquire`, :meth:`Condition.acquire`
1372   or :meth:`Condition.wait` then the call will be immediately interrupted and
1373   :exc:`KeyboardInterrupt` will be raised.
1374
1375   This differs from the behaviour of :mod:`threading` where SIGINT will be
1376   ignored while the equivalent blocking calls are in progress.
1377
1378.. note::
1379
1380   Some of this package's functionality requires a functioning shared semaphore
1381   implementation on the host operating system. Without one, the
1382   :mod:`multiprocessing.synchronize` module will be disabled, and attempts to
1383   import it will result in an :exc:`ImportError`. See
1384   :issue:`3770` for additional information.
1385
1386
1387Shared :mod:`ctypes` Objects
1388~~~~~~~~~~~~~~~~~~~~~~~~~~~~
1389
1390It is possible to create shared objects using shared memory which can be
1391inherited by child processes.
1392
1393.. function:: Value(typecode_or_type, *args, lock=True)
1394
1395   Return a :mod:`ctypes` object allocated from shared memory.  By default the
1396   return value is actually a synchronized wrapper for the object.  The object
1397   itself can be accessed via the *value* attribute of a :class:`Value`.
1398
1399   *typecode_or_type* determines the type of the returned object: it is either a
1400   ctypes type or a one character typecode of the kind used by the :mod:`array`
1401   module.  *\*args* is passed on to the constructor for the type.
1402
1403   If *lock* is ``True`` (the default) then a new recursive lock
1404   object is created to synchronize access to the value.  If *lock* is
1405   a :class:`Lock` or :class:`RLock` object then that will be used to
1406   synchronize access to the value.  If *lock* is ``False`` then
1407   access to the returned object will not be automatically protected
1408   by a lock, so it will not necessarily be "process-safe".
1409
1410   Operations like ``+=`` which involve a read and write are not
1411   atomic.  So if, for instance, you want to atomically increment a
1412   shared value it is insufficient to just do ::
1413
1414       counter.value += 1
1415
1416   Assuming the associated lock is recursive (which it is by default)
1417   you can instead do ::
1418
1419       with counter.get_lock():
1420           counter.value += 1
1421
1422   Note that *lock* is a keyword-only argument.
1423
1424.. function:: Array(typecode_or_type, size_or_initializer, *, lock=True)
1425
1426   Return a ctypes array allocated from shared memory.  By default the return
1427   value is actually a synchronized wrapper for the array.
1428
1429   *typecode_or_type* determines the type of the elements of the returned array:
1430   it is either a ctypes type or a one character typecode of the kind used by
1431   the :mod:`array` module.  If *size_or_initializer* is an integer, then it
1432   determines the length of the array, and the array will be initially zeroed.
1433   Otherwise, *size_or_initializer* is a sequence which is used to initialize
1434   the array and whose length determines the length of the array.
1435
1436   If *lock* is ``True`` (the default) then a new lock object is created to
1437   synchronize access to the value.  If *lock* is a :class:`Lock` or
1438   :class:`RLock` object then that will be used to synchronize access to the
1439   value.  If *lock* is ``False`` then access to the returned object will not be
1440   automatically protected by a lock, so it will not necessarily be
1441   "process-safe".
1442
1443   Note that *lock* is a keyword only argument.
1444
1445   Note that an array of :data:`ctypes.c_char` has *value* and *raw*
1446   attributes which allow one to use it to store and retrieve strings.
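
   For example, the following sketch (the variable name is illustrative)
   stores and retrieves a byte string through a shared character array::

      from multiprocessing import Array

      if __name__ == '__main__':
          buf = Array('c', 32)      # zero-filled shared array of 32 bytes
          buf.value = b'hello'      # set through the value attribute
          print(buf.value)          # b'hello'
          print(buf.raw[:8])        # b'hello\x00\x00\x00'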
1447
1448
1449The :mod:`multiprocessing.sharedctypes` module
1450>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
1451
1452.. module:: multiprocessing.sharedctypes
1453   :synopsis: Allocate ctypes objects from shared memory.
1454
1455The :mod:`multiprocessing.sharedctypes` module provides functions for allocating
1456:mod:`ctypes` objects from shared memory which can be inherited by child
1457processes.
1458
1459.. note::
1460
1461   Although it is possible to store a pointer in shared memory remember that
1462   this will refer to a location in the address space of a specific process.
1463   However, the pointer is quite likely to be invalid in the context of a second
1464   process and trying to dereference the pointer from the second process may
1465   cause a crash.
1466
1467.. function:: RawArray(typecode_or_type, size_or_initializer)
1468
1469   Return a ctypes array allocated from shared memory.
1470
1471   *typecode_or_type* determines the type of the elements of the returned array:
1472   it is either a ctypes type or a one character typecode of the kind used by
1473   the :mod:`array` module.  If *size_or_initializer* is an integer then it
1474   determines the length of the array, and the array will be initially zeroed.
1475   Otherwise *size_or_initializer* is a sequence which is used to initialize the
1476   array and whose length determines the length of the array.
1477
1478   Note that setting and getting an element is potentially non-atomic -- use
1479   :func:`Array` instead to make sure that access is automatically synchronized
1480   using a lock.
1481
1482.. function:: RawValue(typecode_or_type, *args)
1483
1484   Return a ctypes object allocated from shared memory.
1485
1486   *typecode_or_type* determines the type of the returned object: it is either a
1487   ctypes type or a one character typecode of the kind used by the :mod:`array`
1488   module.  *\*args* is passed on to the constructor for the type.
1489
1490   Note that setting and getting the value is potentially non-atomic -- use
1491   :func:`Value` instead to make sure that access is automatically synchronized
1492   using a lock.
1493
1494   Note that an array of :data:`ctypes.c_char` has ``value`` and ``raw``
1495   attributes which allow one to use it to store and retrieve strings -- see
1496   documentation for :mod:`ctypes`.
1497
1498.. function:: Array(typecode_or_type, size_or_initializer, *, lock=True)
1499
1500   The same as :func:`RawArray` except that depending on the value of *lock* a
1501   process-safe synchronization wrapper may be returned instead of a raw ctypes
1502   array.
1503
1504   If *lock* is ``True`` (the default) then a new lock object is created to
1505   synchronize access to the value.  If *lock* is a
1506   :class:`~multiprocessing.Lock` or :class:`~multiprocessing.RLock` object
1507   then that will be used to synchronize access to the
1508   value.  If *lock* is ``False`` then access to the returned object will not be
1509   automatically protected by a lock, so it will not necessarily be
1510   "process-safe".
1511
1512   Note that *lock* is a keyword-only argument.
1513
1514.. function:: Value(typecode_or_type, *args, lock=True)
1515
1516   The same as :func:`RawValue` except that depending on the value of *lock* a
1517   process-safe synchronization wrapper may be returned instead of a raw ctypes
1518   object.
1519
1520   If *lock* is ``True`` (the default) then a new lock object is created to
1521   synchronize access to the value.  If *lock* is a :class:`~multiprocessing.Lock` or
1522   :class:`~multiprocessing.RLock` object then that will be used to synchronize access to the
1523   value.  If *lock* is ``False`` then access to the returned object will not be
1524   automatically protected by a lock, so it will not necessarily be
1525   "process-safe".
1526
1527   Note that *lock* is a keyword-only argument.
1528
1529.. function:: copy(obj)
1530
1531   Return a ctypes object allocated from shared memory which is a copy of the
1532   ctypes object *obj*.
1533
1534.. function:: synchronized(obj[, lock])
1535
1536   Return a process-safe wrapper object for a ctypes object which uses *lock* to
1537   synchronize access.  If *lock* is ``None`` (the default) then a
1538   :class:`multiprocessing.RLock` object is created automatically.
1539
1540   A synchronized wrapper will have two methods in addition to those of the
1541   object it wraps: :meth:`get_obj` returns the wrapped object and
1542   :meth:`get_lock` returns the lock object used for synchronization.
1543
1544   Note that accessing the ctypes object through the wrapper can be a lot slower
1545   than accessing the raw ctypes object.
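
   A minimal sketch (the variable names are illustrative) of wrapping a raw
   value by hand::

      from multiprocessing.sharedctypes import RawValue, synchronized

      if __name__ == '__main__':
          raw = RawValue('i', 0)
          sync = synchronized(raw)     # wrapped with a newly created RLock
          with sync:                   # acquires the lock (Python 3.5+)
              sync.value += 1          # same as sync.get_obj().value += 1
          print(sync.get_lock())       # the lock used for synchronization
          print(sync.value)            # 1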
1546
1547   .. versionchanged:: 3.5
1548      Synchronized objects support the :term:`context manager` protocol.
1549
1550
1551The table below compares the syntax for creating shared ctypes objects from
1552shared memory with the normal ctypes syntax.  (In the table ``MyStruct`` is some
1553subclass of :class:`ctypes.Structure`.)
1554
1555==================== ========================== ===========================
1556ctypes               sharedctypes using type    sharedctypes using typecode
1557==================== ========================== ===========================
1558c_double(2.4)        RawValue(c_double, 2.4)    RawValue('d', 2.4)
1559MyStruct(4, 6)       RawValue(MyStruct, 4, 6)
1560(c_short * 7)()      RawArray(c_short, 7)       RawArray('h', 7)
1561(c_int * 3)(9, 2, 8) RawArray(c_int, (9, 2, 8)) RawArray('i', (9, 2, 8))
1562==================== ========================== ===========================
1563
1564
1565Below is an example where a number of ctypes objects are modified by a child
1566process::
1567
1568   from multiprocessing import Process, Lock
1569   from multiprocessing.sharedctypes import Value, Array
1570   from ctypes import Structure, c_double
1571
1572   class Point(Structure):
1573       _fields_ = [('x', c_double), ('y', c_double)]
1574
1575   def modify(n, x, s, A):
1576       n.value **= 2
1577       x.value **= 2
1578       s.value = s.value.upper()
1579       for a in A:
1580           a.x **= 2
1581           a.y **= 2
1582
1583   if __name__ == '__main__':
1584       lock = Lock()
1585
1586       n = Value('i', 7)
1587       x = Value(c_double, 1.0/3.0, lock=False)
1588       s = Array('c', b'hello world', lock=lock)
1589       A = Array(Point, [(1.875,-6.25), (-5.75,2.0), (2.375,9.5)], lock=lock)
1590
1591       p = Process(target=modify, args=(n, x, s, A))
1592       p.start()
1593       p.join()
1594
1595       print(n.value)
1596       print(x.value)
1597       print(s.value)
1598       print([(a.x, a.y) for a in A])
1599
1600
1601.. highlight:: none
1602
1603The results printed are ::
1604
1605    49
1606    0.1111111111111111
1607    HELLO WORLD
1608    [(3.515625, 39.0625), (33.0625, 4.0), (5.640625, 90.25)]
1609
1610.. highlight:: python3
1611
1612
1613.. _multiprocessing-managers:
1614
1615Managers
1616~~~~~~~~
1617
1618Managers provide a way to create data which can be shared between different
1619processes, including sharing over a network between processes running on
1620different machines. A manager object controls a server process which manages
1621*shared objects*.  Other processes can access the shared objects by using
1622proxies.
1623
1624.. function:: multiprocessing.Manager()
1625
1626   Returns a started :class:`~multiprocessing.managers.SyncManager` object which
1627   can be used for sharing objects between processes.  The returned manager
1628   object corresponds to a spawned child process and has methods which will
1629   create shared objects and return corresponding proxies.
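
   As a minimal sketch, a dictionary created by the manager can be passed to a
   child process and modified there::

      from multiprocessing import Manager, Process

      def worker(d, key, value):
          d[key] = value

      if __name__ == '__main__':
          with Manager() as manager:
              d = manager.dict()
              p = Process(target=worker, args=(d, 'answer', 42))
              p.start()
              p.join()
              print(d)          # {'answer': 42}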
1630
1631.. module:: multiprocessing.managers
   :synopsis: Share data between processes with shared objects.
1633
1634Manager processes will be shutdown as soon as they are garbage collected or
1635their parent process exits.  The manager classes are defined in the
1636:mod:`multiprocessing.managers` module:
1637
1638.. class:: BaseManager([address[, authkey]])
1639
1640   Create a BaseManager object.
1641
1642   Once created one should call :meth:`start` or ``get_server().serve_forever()`` to ensure
1643   that the manager object refers to a started manager process.
1644
1645   *address* is the address on which the manager process listens for new
1646   connections.  If *address* is ``None`` then an arbitrary one is chosen.
1647
1648   *authkey* is the authentication key which will be used to check the
1649   validity of incoming connections to the server process.  If
1650   *authkey* is ``None`` then ``current_process().authkey`` is used.
1651   Otherwise *authkey* is used and it must be a byte string.
1652
1653   .. method:: start([initializer[, initargs]])
1654
1655      Start a subprocess to start the manager.  If *initializer* is not ``None``
1656      then the subprocess will call ``initializer(*initargs)`` when it starts.
1657
1658   .. method:: get_server()
1659
1660      Returns a :class:`Server` object which represents the actual server under
1661      the control of the Manager. The :class:`Server` object supports the
1662      :meth:`serve_forever` method::
1663
1664      >>> from multiprocessing.managers import BaseManager
1665      >>> manager = BaseManager(address=('', 50000), authkey=b'abc')
1666      >>> server = manager.get_server()
1667      >>> server.serve_forever()
1668
1669      :class:`Server` additionally has an :attr:`address` attribute.
1670
1671   .. method:: connect()
1672
1673      Connect a local manager object to a remote manager process::
1674
1675      >>> from multiprocessing.managers import BaseManager
1676      >>> m = BaseManager(address=('127.0.0.1', 50000), authkey=b'abc')
1677      >>> m.connect()
1678
1679   .. method:: shutdown()
1680
1681      Stop the process used by the manager.  This is only available if
1682      :meth:`start` has been used to start the server process.
1683
1684      This can be called multiple times.
1685
1686   .. method:: register(typeid[, callable[, proxytype[, exposed[, method_to_typeid[, create_method]]]]])
1687
1688      A classmethod which can be used for registering a type or callable with
1689      the manager class.
1690
1691      *typeid* is a "type identifier" which is used to identify a particular
1692      type of shared object.  This must be a string.
1693
1694      *callable* is a callable used for creating objects for this type
1695      identifier.  If a manager instance will be connected to the
1696      server using the :meth:`connect` method, or if the
1697      *create_method* argument is ``False`` then this can be left as
1698      ``None``.
1699
1700      *proxytype* is a subclass of :class:`BaseProxy` which is used to create
1701      proxies for shared objects with this *typeid*.  If ``None`` then a proxy
1702      class is created automatically.
1703
1704      *exposed* is used to specify a sequence of method names which proxies for
1705      this typeid should be allowed to access using
1706      :meth:`BaseProxy._callmethod`.  (If *exposed* is ``None`` then
1707      :attr:`proxytype._exposed_` is used instead if it exists.)  In the case
1708      where no exposed list is specified, all "public methods" of the shared
1709      object will be accessible.  (Here a "public method" means any attribute
1710      which has a :meth:`~object.__call__` method and whose name does not begin
1711      with ``'_'``.)
1712
1713      *method_to_typeid* is a mapping used to specify the return type of those
1714      exposed methods which should return a proxy.  It maps method names to
1715      typeid strings.  (If *method_to_typeid* is ``None`` then
1716      :attr:`proxytype._method_to_typeid_` is used instead if it exists.)  If a
1717      method's name is not a key of this mapping or if the mapping is ``None``
1718      then the object returned by the method will be copied by value.
1719
1720      *create_method* determines whether a method should be created with name
1721      *typeid* which can be used to tell the server process to create a new
1722      shared object and return a proxy for it.  By default it is ``True``.
1723
1724   :class:`BaseManager` instances also have one read-only property:
1725
1726   .. attribute:: address
1727
1728      The address used by the manager.
1729
1730   .. versionchanged:: 3.3
1731      Manager objects support the context management protocol -- see
1732      :ref:`typecontextmanager`.  :meth:`~contextmanager.__enter__` starts the
1733      server process (if it has not already started) and then returns the
1734      manager object.  :meth:`~contextmanager.__exit__` calls :meth:`shutdown`.
1735
1736      In previous versions :meth:`~contextmanager.__enter__` did not start the
1737      manager's server process if it was not already started.
1738
1739.. class:: SyncManager
1740
1741   A subclass of :class:`BaseManager` which can be used for the synchronization
1742   of processes.  Objects of this type are returned by
1743   :func:`multiprocessing.Manager`.
1744
1745   Its methods create and return :ref:`multiprocessing-proxy_objects` for a
1746   number of commonly used data types to be synchronized across processes.
1747   This notably includes shared lists and dictionaries.
1748
1749   .. method:: Barrier(parties[, action[, timeout]])
1750
1751      Create a shared :class:`threading.Barrier` object and return a
1752      proxy for it.
1753
1754      .. versionadded:: 3.3
1755
1756   .. method:: BoundedSemaphore([value])
1757
1758      Create a shared :class:`threading.BoundedSemaphore` object and return a
1759      proxy for it.
1760
1761   .. method:: Condition([lock])
1762
1763      Create a shared :class:`threading.Condition` object and return a proxy for
1764      it.
1765
1766      If *lock* is supplied then it should be a proxy for a
1767      :class:`threading.Lock` or :class:`threading.RLock` object.
1768
1769      .. versionchanged:: 3.3
1770         The :meth:`~threading.Condition.wait_for` method was added.
1771
1772   .. method:: Event()
1773
1774      Create a shared :class:`threading.Event` object and return a proxy for it.
1775
1776   .. method:: Lock()
1777
1778      Create a shared :class:`threading.Lock` object and return a proxy for it.
1779
1780   .. method:: Namespace()
1781
1782      Create a shared :class:`Namespace` object and return a proxy for it.
1783
1784   .. method:: Queue([maxsize])
1785
1786      Create a shared :class:`queue.Queue` object and return a proxy for it.
1787
1788   .. method:: RLock()
1789
1790      Create a shared :class:`threading.RLock` object and return a proxy for it.
1791
1792   .. method:: Semaphore([value])
1793
1794      Create a shared :class:`threading.Semaphore` object and return a proxy for
1795      it.
1796
1797   .. method:: Array(typecode, sequence)
1798
1799      Create an array and return a proxy for it.
1800
1801   .. method:: Value(typecode, value)
1802
1803      Create an object with a writable ``value`` attribute and return a proxy
1804      for it.
1805
1806   .. method:: dict()
1807               dict(mapping)
1808               dict(sequence)
1809
1810      Create a shared :class:`dict` object and return a proxy for it.
1811
1812   .. method:: list()
1813               list(sequence)
1814
1815      Create a shared :class:`list` object and return a proxy for it.
1816
1817   .. versionchanged:: 3.6
1818      Shared objects are capable of being nested.  For example, a shared
1819      container object such as a shared list can contain other shared objects
1820      which will all be managed and synchronized by the :class:`SyncManager`.
1821
1822.. class:: Namespace
1823
1824   A type that can register with :class:`SyncManager`.
1825
1826   A namespace object has no public methods, but does have writable attributes.
1827   Its representation shows the values of its attributes.
1828
1829   However, when using a proxy for a namespace object, an attribute beginning
1830   with ``'_'`` will be an attribute of the proxy and not an attribute of the
1831   referent:
1832
1833   .. doctest::
1834
1835    >>> manager = multiprocessing.Manager()
1836    >>> Global = manager.Namespace()
1837    >>> Global.x = 10
1838    >>> Global.y = 'hello'
1839    >>> Global._z = 12.3    # this is an attribute of the proxy
1840    >>> print(Global)
1841    Namespace(x=10, y='hello')
1842
1843
1844Customized managers
1845>>>>>>>>>>>>>>>>>>>
1846
1847To create one's own manager, one creates a subclass of :class:`BaseManager` and
1848uses the :meth:`~BaseManager.register` classmethod to register new types or
1849callables with the manager class.  For example::
1850
1851   from multiprocessing.managers import BaseManager
1852
1853   class MathsClass:
1854       def add(self, x, y):
1855           return x + y
1856       def mul(self, x, y):
1857           return x * y
1858
1859   class MyManager(BaseManager):
1860       pass
1861
1862   MyManager.register('Maths', MathsClass)
1863
1864   if __name__ == '__main__':
1865       with MyManager() as manager:
1866           maths = manager.Maths()
1867           print(maths.add(4, 3))         # prints 7
1868           print(maths.mul(7, 8))         # prints 56
1869
1870
1871Using a remote manager
1872>>>>>>>>>>>>>>>>>>>>>>
1873
1874It is possible to run a manager server on one machine and have clients use it
1875from other machines (assuming that the firewalls involved allow it).
1876
1877Running the following commands creates a server for a single shared queue which
1878remote clients can access::
1879
1880   >>> from multiprocessing.managers import BaseManager
1881   >>> from queue import Queue
1882   >>> queue = Queue()
1883   >>> class QueueManager(BaseManager): pass
1884   >>> QueueManager.register('get_queue', callable=lambda:queue)
1885   >>> m = QueueManager(address=('', 50000), authkey=b'abracadabra')
1886   >>> s = m.get_server()
1887   >>> s.serve_forever()
1888
1889One client can access the server as follows::
1890
1891   >>> from multiprocessing.managers import BaseManager
1892   >>> class QueueManager(BaseManager): pass
1893   >>> QueueManager.register('get_queue')
1894   >>> m = QueueManager(address=('foo.bar.org', 50000), authkey=b'abracadabra')
1895   >>> m.connect()
1896   >>> queue = m.get_queue()
1897   >>> queue.put('hello')
1898
1899Another client can also use it::
1900
1901   >>> from multiprocessing.managers import BaseManager
1902   >>> class QueueManager(BaseManager): pass
1903   >>> QueueManager.register('get_queue')
1904   >>> m = QueueManager(address=('foo.bar.org', 50000), authkey=b'abracadabra')
1905   >>> m.connect()
1906   >>> queue = m.get_queue()
1907   >>> queue.get()
1908   'hello'
1909
1910Local processes can also access that queue, using the code from above on the
1911client to access it remotely::
1912
1913    >>> from multiprocessing import Process, Queue
1914    >>> from multiprocessing.managers import BaseManager
1915    >>> class Worker(Process):
1916    ...     def __init__(self, q):
1917    ...         self.q = q
1918    ...         super(Worker, self).__init__()
1919    ...     def run(self):
1920    ...         self.q.put('local hello')
1921    ...
1922    >>> queue = Queue()
1923    >>> w = Worker(queue)
1924    >>> w.start()
1925    >>> class QueueManager(BaseManager): pass
1926    ...
1927    >>> QueueManager.register('get_queue', callable=lambda: queue)
1928    >>> m = QueueManager(address=('', 50000), authkey=b'abracadabra')
1929    >>> s = m.get_server()
1930    >>> s.serve_forever()
1931
1932.. _multiprocessing-proxy_objects:
1933
1934Proxy Objects
1935~~~~~~~~~~~~~
1936
1937A proxy is an object which *refers* to a shared object which lives (presumably)
1938in a different process.  The shared object is said to be the *referent* of the
1939proxy.  Multiple proxy objects may have the same referent.
1940
1941A proxy object has methods which invoke corresponding methods of its referent
1942(although not every method of the referent will necessarily be available through
1943the proxy).  In this way, a proxy can be used just like its referent can:
1944
1945.. doctest::
1946
1947   >>> from multiprocessing import Manager
1948   >>> manager = Manager()
1949   >>> l = manager.list([i*i for i in range(10)])
1950   >>> print(l)
1951   [0, 1, 4, 9, 16, 25, 36, 49, 64, 81]
1952   >>> print(repr(l))
1953   <ListProxy object, typeid 'list' at 0x...>
1954   >>> l[4]
1955   16
1956   >>> l[2:5]
1957   [4, 9, 16]
1958
1959Notice that applying :func:`str` to a proxy will return the representation of
1960the referent, whereas applying :func:`repr` will return the representation of
1961the proxy.
1962
1963An important feature of proxy objects is that they are picklable so they can be
1964passed between processes.  As such, a referent can contain
1965:ref:`multiprocessing-proxy_objects`.  This permits nesting of these managed
1966lists, dicts, and other :ref:`multiprocessing-proxy_objects`:
1967
1968.. doctest::
1969
1970   >>> a = manager.list()
1971   >>> b = manager.list()
1972   >>> a.append(b)         # referent of a now contains referent of b
1973   >>> print(a, b)
1974   [<ListProxy object, typeid 'list' at ...>] []
1975   >>> b.append('hello')
1976   >>> print(a[0], b)
1977   ['hello'] ['hello']
1978
1979Similarly, dict and list proxies may be nested inside one another::
1980
1981   >>> l_outer = manager.list([ manager.dict() for i in range(2) ])
1982   >>> d_first_inner = l_outer[0]
1983   >>> d_first_inner['a'] = 1
1984   >>> d_first_inner['b'] = 2
1985   >>> l_outer[1]['c'] = 3
1986   >>> l_outer[1]['z'] = 26
1987   >>> print(l_outer[0])
1988   {'a': 1, 'b': 2}
1989   >>> print(l_outer[1])
1990   {'c': 3, 'z': 26}
1991
1992If standard (non-proxy) :class:`list` or :class:`dict` objects are contained
1993in a referent, modifications to those mutable values will not be propagated
1994through the manager because the proxy has no way of knowing when the values
1995contained within are modified.  However, storing a value in a container proxy
1996(which triggers a ``__setitem__`` on the proxy object) does propagate through
1997the manager and so to effectively modify such an item, one could re-assign the
1998modified value to the container proxy::
1999
2000   # create a list proxy and append a mutable object (a dictionary)
2001   lproxy = manager.list()
2002   lproxy.append({})
2003   # now mutate the dictionary
2004   d = lproxy[0]
2005   d['a'] = 1
2006   d['b'] = 2
2007   # at this point, the changes to d are not yet synced, but by
2008   # updating the dictionary, the proxy is notified of the change
2009   lproxy[0] = d
2010
2011This approach is perhaps less convenient than employing nested
2012:ref:`multiprocessing-proxy_objects` for most use cases but also
2013demonstrates a level of control over the synchronization.
2014
2015.. note::
2016
2017   The proxy types in :mod:`multiprocessing` do nothing to support comparisons
2018   by value.  So, for instance, we have:
2019
2020   .. doctest::
2021
2022       >>> manager.list([1,2,3]) == [1,2,3]
2023       False
2024
2025   One should just use a copy of the referent instead when making comparisons.
2026
2027.. class:: BaseProxy
2028
2029   Proxy objects are instances of subclasses of :class:`BaseProxy`.
2030
2031   .. method:: _callmethod(methodname[, args[, kwds]])
2032
2033      Call and return the result of a method of the proxy's referent.
2034
2035      If ``proxy`` is a proxy whose referent is ``obj`` then the expression ::
2036
2037         proxy._callmethod(methodname, args, kwds)
2038
2039      will evaluate the expression ::
2040
2041         getattr(obj, methodname)(*args, **kwds)
2042
2043      in the manager's process.
2044
2045      The returned value will be a copy of the result of the call or a proxy to
2046      a new shared object -- see documentation for the *method_to_typeid*
2047      argument of :meth:`BaseManager.register`.
2048
      If an exception is raised by the call, then it is re-raised by
      :meth:`_callmethod`.  If some other exception is raised in the manager's
2051      process then this is converted into a :exc:`RemoteError` exception and is
2052      raised by :meth:`_callmethod`.
2053
2054      Note in particular that an exception will be raised if *methodname* has
2055      not been *exposed*.
2056
2057      An example of the usage of :meth:`_callmethod`:
2058
2059      .. doctest::
2060
2061         >>> l = manager.list(range(10))
2062         >>> l._callmethod('__len__')
2063         10
2064         >>> l._callmethod('__getitem__', (slice(2, 7),)) # equivalent to l[2:7]
2065         [2, 3, 4, 5, 6]
2066         >>> l._callmethod('__getitem__', (20,))          # equivalent to l[20]
2067         Traceback (most recent call last):
2068         ...
2069         IndexError: list index out of range
2070
2071   .. method:: _getvalue()
2072
2073      Return a copy of the referent.
2074
2075      If the referent is unpicklable then this will raise an exception.
2076
2077   .. method:: __repr__
2078
2079      Return a representation of the proxy object.
2080
2081   .. method:: __str__
2082
2083      Return the representation of the referent.
2084
2085
2086Cleanup
2087>>>>>>>
2088
2089A proxy object uses a weakref callback so that when it gets garbage collected it
2090deregisters itself from the manager which owns its referent.
2091
2092A shared object gets deleted from the manager process when there are no longer
2093any proxies referring to it.
2094
2095
2096Process Pools
2097~~~~~~~~~~~~~
2098
2099.. module:: multiprocessing.pool
2100   :synopsis: Create pools of processes.
2101
2102One can create a pool of processes which will carry out tasks submitted to it
2103with the :class:`Pool` class.
2104
2105.. class:: Pool([processes[, initializer[, initargs[, maxtasksperchild [, context]]]]])
2106
2107   A process pool object which controls a pool of worker processes to which jobs
2108   can be submitted.  It supports asynchronous results with timeouts and
2109   callbacks and has a parallel map implementation.
2110
2111   *processes* is the number of worker processes to use.  If *processes* is
2112   ``None`` then the number returned by :func:`os.cpu_count` is used.
2113
2114   If *initializer* is not ``None`` then each worker process will call
2115   ``initializer(*initargs)`` when it starts.
2116
2117   *maxtasksperchild* is the number of tasks a worker process can complete
2118   before it will exit and be replaced with a fresh worker process, to enable
2119   unused resources to be freed. The default *maxtasksperchild* is ``None``, which
2120   means worker processes will live as long as the pool.
2121
2122   *context* can be used to specify the context used for starting
2123   the worker processes.  Usually a pool is created using the
2124   function :func:`multiprocessing.Pool` or the :meth:`Pool` method
2125   of a context object.  In both cases *context* is set
2126   appropriately.
2127
2128   Note that the methods of the pool object should only be called by
2129   the process which created the pool.
2130
2131   .. warning::
2132      :class:`multiprocessing.pool` objects have internal resources that need to be
2133      properly managed (like any other resource) by using the pool as a context manager
2134      or by calling :meth:`close` and :meth:`terminate` manually. Failure to do this
2135      can lead to the process hanging on finalization.
2136
      Note that it is **not correct** to rely on the garbage collector to destroy
      the pool, as CPython does not assure that the finalizer of the pool will be
      called (see :meth:`object.__del__` for more information).
2140
2141   .. versionadded:: 3.2
2142      *maxtasksperchild*
2143
2144   .. versionadded:: 3.4
2145      *context*
2146
2147   .. note::
2148
2149      Worker processes within a :class:`Pool` typically live for the complete
2150      duration of the Pool's work queue. A frequent pattern found in other
2151      systems (such as Apache, mod_wsgi, etc) to free resources held by
2152      workers is to allow a worker within a pool to complete only a set
      amount of work before exiting, being cleaned up and a new process
      being spawned to replace the old one. The *maxtasksperchild*
2155      argument to the :class:`Pool` exposes this ability to the end user.
2156
2157   .. method:: apply(func[, args[, kwds]])
2158
2159      Call *func* with arguments *args* and keyword arguments *kwds*.  It blocks
2160      until the result is ready. Given this blocks, :meth:`apply_async` is
2161      better suited for performing work in parallel. Additionally, *func*
2162      is only executed in one of the workers of the pool.
2163
2164   .. method:: apply_async(func[, args[, kwds[, callback[, error_callback]]]])
2165
2166      A variant of the :meth:`apply` method which returns a
2167      :class:`~multiprocessing.pool.AsyncResult` object.
2168
2169      If *callback* is specified then it should be a callable which accepts a
2170      single argument.  When the result becomes ready *callback* is applied to
2171      it, that is unless the call failed, in which case the *error_callback*
2172      is applied instead.
2173
2174      If *error_callback* is specified then it should be a callable which
2175      accepts a single argument.  If the target function fails, then
2176      the *error_callback* is called with the exception instance.
2177
2178      Callbacks should complete immediately since otherwise the thread which
2179      handles the results will get blocked.
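
      As a minimal sketch (the callback names are illustrative), a result and
      any error can be routed to callbacks::

         from multiprocessing import Pool

         def square(x):
             return x * x

         def on_result(value):
             print('result:', value)

         def on_error(exc):
             print('error:', exc)

         if __name__ == '__main__':
             with Pool(2) as pool:
                 res = pool.apply_async(square, (4,),
                                        callback=on_result,
                                        error_callback=on_error)
                 res.wait()       # 'result: 16' is printed by the callback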
2180
2181   .. method:: map(func, iterable[, chunksize])
2182
2183      A parallel equivalent of the :func:`map` built-in function (it supports only
2184      one *iterable* argument though, for multiple iterables see :meth:`starmap`).
2185      It blocks until the result is ready.
2186
2187      This method chops the iterable into a number of chunks which it submits to
2188      the process pool as separate tasks.  The (approximate) size of these
2189      chunks can be specified by setting *chunksize* to a positive integer.
2190
2191      Note that it may cause high memory usage for very long iterables. Consider
      using :meth:`imap` or :meth:`imap_unordered` with an explicit *chunksize*
      option for better efficiency.
2194
2195   .. method:: map_async(func, iterable[, chunksize[, callback[, error_callback]]])
2196
2197      A variant of the :meth:`.map` method which returns a
2198      :class:`~multiprocessing.pool.AsyncResult` object.
2199
2200      If *callback* is specified then it should be a callable which accepts a
2201      single argument.  When the result becomes ready *callback* is applied to
2202      it, that is unless the call failed, in which case the *error_callback*
2203      is applied instead.
2204
2205      If *error_callback* is specified then it should be a callable which
2206      accepts a single argument.  If the target function fails, then
2207      the *error_callback* is called with the exception instance.
2208
2209      Callbacks should complete immediately since otherwise the thread which
2210      handles the results will get blocked.
2211
2212   .. method:: imap(func, iterable[, chunksize])
2213
2214      A lazier version of :meth:`.map`.
2215
2216      The *chunksize* argument is the same as the one used by the :meth:`.map`
2217      method.  For very long iterables using a large value for *chunksize* can
2218      make the job complete **much** faster than using the default value of
2219      ``1``.
2220
2221      Also if *chunksize* is ``1`` then the :meth:`!next` method of the iterator
2222      returned by the :meth:`imap` method has an optional *timeout* parameter:
2223      ``next(timeout)`` will raise :exc:`multiprocessing.TimeoutError` if the
2224      result cannot be returned within *timeout* seconds.
2225
2226   .. method:: imap_unordered(func, iterable[, chunksize])
2227
2228      The same as :meth:`imap` except that the ordering of the results from the
2229      returned iterator should be considered arbitrary.  (Only when there is
2230      only one worker process is the order guaranteed to be "correct".)
2231
2232   .. method:: starmap(func, iterable[, chunksize])
2233
2234      Like :meth:`map` except that the elements of the *iterable* are expected
2235      to be iterables that are unpacked as arguments.
2236
2237      Hence an *iterable* of ``[(1,2), (3, 4)]`` results in ``[func(1,2),
2238      func(3,4)]``.
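
      For example, as a minimal sketch, the built-in :func:`pow` can be applied
      to pairs of arguments::

         from multiprocessing import Pool

         if __name__ == '__main__':
             with Pool(2) as pool:
                 print(pool.starmap(pow, [(2, 5), (3, 2), (10, 3)]))
                 # prints [32, 9, 1000]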
2239
2240      .. versionadded:: 3.3
2241
2242   .. method:: starmap_async(func, iterable[, chunksize[, callback[, error_callback]]])
2243
2244      A combination of :meth:`starmap` and :meth:`map_async` that iterates over
2245      *iterable* of iterables and calls *func* with the iterables unpacked.
2246      Returns a result object.
2247
2248      .. versionadded:: 3.3
2249
2250   .. method:: close()
2251
2252      Prevents any more tasks from being submitted to the pool.  Once all the
2253      tasks have been completed the worker processes will exit.
2254
2255   .. method:: terminate()
2256
2257      Stops the worker processes immediately without completing outstanding
2258      work.  When the pool object is garbage collected :meth:`terminate` will be
2259      called immediately.
2260
2261   .. method:: join()
2262
2263      Wait for the worker processes to exit.  One must call :meth:`close` or
2264      :meth:`terminate` before using :meth:`join`.
2265
2266   .. versionadded:: 3.3
2267      Pool objects now support the context management protocol -- see
2268      :ref:`typecontextmanager`.  :meth:`~contextmanager.__enter__` returns the
2269      pool object, and :meth:`~contextmanager.__exit__` calls :meth:`terminate`.
2270
2271
2272.. class:: AsyncResult
2273
2274   The class of the result returned by :meth:`Pool.apply_async` and
2275   :meth:`Pool.map_async`.
2276
2277   .. method:: get([timeout])
2278
2279      Return the result when it arrives.  If *timeout* is not ``None`` and the
2280      result does not arrive within *timeout* seconds then
2281      :exc:`multiprocessing.TimeoutError` is raised.  If the remote call raised
2282      an exception then that exception will be reraised by :meth:`get`.
2283
2284   .. method:: wait([timeout])
2285
2286      Wait until the result is available or until *timeout* seconds pass.
2287
2288   .. method:: ready()
2289
2290      Return whether the call has completed.
2291
2292   .. method:: successful()
2293
2294      Return whether the call completed without raising an exception.  Will
2295      raise :exc:`ValueError` if the result is not ready.
2296
2297      .. versionchanged:: 3.7
2298         If the result is not ready, :exc:`ValueError` is raised instead of
2299         :exc:`AssertionError`.
2300
2301The following example demonstrates the use of a pool::
2302
2303   from multiprocessing import Pool
2304   import time
2305
2306   def f(x):
2307       return x*x
2308
2309   if __name__ == '__main__':
2310       with Pool(processes=4) as pool:         # start 4 worker processes
2311           result = pool.apply_async(f, (10,)) # evaluate "f(10)" asynchronously in a single process
2312           print(result.get(timeout=1))        # prints "100" unless your computer is *very* slow
2313
2314           print(pool.map(f, range(10)))       # prints "[0, 1, 4,..., 81]"
2315
2316           it = pool.imap(f, range(10))
2317           print(next(it))                     # prints "0"
2318           print(next(it))                     # prints "1"
2319           print(it.next(timeout=1))           # prints "4" unless your computer is *very* slow
2320
2321           result = pool.apply_async(time.sleep, (10,))
2322           print(result.get(timeout=1))        # raises multiprocessing.TimeoutError
2323
2324
2325.. _multiprocessing-listeners-clients:
2326
2327Listeners and Clients
2328~~~~~~~~~~~~~~~~~~~~~
2329
2330.. module:: multiprocessing.connection
2331   :synopsis: API for dealing with sockets.
2332
2333Usually message passing between processes is done using queues or by using
2334:class:`~Connection` objects returned by
2335:func:`~multiprocessing.Pipe`.
2336
2337However, the :mod:`multiprocessing.connection` module allows some extra
2338flexibility.  It basically gives a high level message oriented API for dealing
2339with sockets or Windows named pipes.  It also has support for *digest
2340authentication* using the :mod:`hmac` module, and for polling
2341multiple connections at the same time.
2342
2343
2344.. function:: deliver_challenge(connection, authkey)
2345
2346   Send a randomly generated message to the other end of the connection and wait
2347   for a reply.
2348
2349   If the reply matches the digest of the message using *authkey* as the key
2350   then a welcome message is sent to the other end of the connection.  Otherwise
2351   :exc:`~multiprocessing.AuthenticationError` is raised.
2352
2353.. function:: answer_challenge(connection, authkey)
2354
2355   Receive a message, calculate the digest of the message using *authkey* as the
2356   key, and then send the digest back.
2357
2358   If a welcome message is not received, then
2359   :exc:`~multiprocessing.AuthenticationError` is raised.
2360
2361.. function:: Client(address[, family[, authkey]])
2362
2363   Attempt to set up a connection to the listener which is using address
2364   *address*, returning a :class:`~Connection`.
2365
   The type of the connection is determined by the *family* argument, but this can
2367   generally be omitted since it can usually be inferred from the format of
2368   *address*. (See :ref:`multiprocessing-address-formats`)
2369
2370   If *authkey* is given and not None, it should be a byte string and will be
2371   used as the secret key for an HMAC-based authentication challenge. No
2372   authentication is done if *authkey* is None.
2373   :exc:`~multiprocessing.AuthenticationError` is raised if authentication fails.
2374   See :ref:`multiprocessing-auth-keys`.
2375
2376.. class:: Listener([address[, family[, backlog[, authkey]]]])
2377
2378   A wrapper for a bound socket or Windows named pipe which is 'listening' for
2379   connections.
2380
2381   *address* is the address to be used by the bound socket or named pipe of the
2382   listener object.
2383
2384   .. note::
2385
2386      If an address of '0.0.0.0' is used, the address will not be a connectable
2387      end point on Windows. If you require a connectable end-point,
2388      you should use '127.0.0.1'.
2389
2390   *family* is the type of socket (or named pipe) to use.  This can be one of
2391   the strings ``'AF_INET'`` (for a TCP socket), ``'AF_UNIX'`` (for a Unix
2392   domain socket) or ``'AF_PIPE'`` (for a Windows named pipe).  Of these only
2393   the first is guaranteed to be available.  If *family* is ``None`` then the
2394   family is inferred from the format of *address*.  If *address* is also
2395   ``None`` then a default is chosen.  This default is the family which is
2396   assumed to be the fastest available.  See
2397   :ref:`multiprocessing-address-formats`.  Note that if *family* is
2398   ``'AF_UNIX'`` and address is ``None`` then the socket will be created in a
2399   private temporary directory created using :func:`tempfile.mkstemp`.
2400
2401   If the listener object uses a socket then *backlog* (1 by default) is passed
2402   to the :meth:`~socket.socket.listen` method of the socket once it has been
2403   bound.
2404
2405   If *authkey* is given and not None, it should be a byte string and will be
2406   used as the secret key for an HMAC-based authentication challenge. No
2407   authentication is done if *authkey* is None.
2408   :exc:`~multiprocessing.AuthenticationError` is raised if authentication fails.
2409   See :ref:`multiprocessing-auth-keys`.
2410
2411   .. method:: accept()
2412
2413      Accept a connection on the bound socket or named pipe of the listener
2414      object and return a :class:`~Connection` object.
2415      If authentication is attempted and fails, then
2416      :exc:`~multiprocessing.AuthenticationError` is raised.
2417
2418   .. method:: close()
2419
2420      Close the bound socket or named pipe of the listener object.  This is
2421      called automatically when the listener is garbage collected.  However it
2422      is advisable to call it explicitly.
2423
2424   Listener objects have the following read-only properties:
2425
2426   .. attribute:: address
2427
2428      The address which is being used by the Listener object.
2429
2430   .. attribute:: last_accepted
2431
2432      The address from which the last accepted connection came.  If this is
2433      unavailable then it is ``None``.
2434
2435   .. versionadded:: 3.3
2436      Listener objects now support the context management protocol -- see
2437      :ref:`typecontextmanager`.  :meth:`~contextmanager.__enter__` returns the
2438      listener object, and :meth:`~contextmanager.__exit__` calls :meth:`close`.
2439
2440.. function:: wait(object_list, timeout=None)
2441
2442   Wait till an object in *object_list* is ready.  Returns the list of
2443   those objects in *object_list* which are ready.  If *timeout* is a
2444   float then the call blocks for at most that many seconds.  If
2445   *timeout* is ``None`` then it will block for an unlimited period.
2446   A negative timeout is equivalent to a zero timeout.
2447
2448   For both Unix and Windows, an object can appear in *object_list* if
2449   it is
2450
2451   * a readable :class:`~multiprocessing.connection.Connection` object;
2452   * a connected and readable :class:`socket.socket` object; or
2453   * the :attr:`~multiprocessing.Process.sentinel` attribute of a
2454     :class:`~multiprocessing.Process` object.
2455
2456   A connection or socket object is ready when there is data available
2457   to be read from it, or the other end has been closed.
2458
   **Unix**: ``wait(object_list, timeout)`` is almost equivalent to
   ``select.select(object_list, [], [], timeout)``.  The difference is
2461   that, if :func:`select.select` is interrupted by a signal, it can
2462   raise :exc:`OSError` with an error number of ``EINTR``, whereas
2463   :func:`wait` will not.
2464
2465   **Windows**: An item in *object_list* must either be an integer
2466   handle which is waitable (according to the definition used by the
2467   documentation of the Win32 function ``WaitForMultipleObjects()``)
2468   or it can be an object with a :meth:`fileno` method which returns a
2469   socket handle or pipe handle.  (Note that pipe handles and socket
2470   handles are **not** waitable handles.)
2471
2472   .. versionadded:: 3.3
2473
2474
2475**Examples**
2476
2477The following server code creates a listener which uses ``'secret password'`` as
2478an authentication key.  It then waits for a connection and sends some data to
2479the client::
2480
2481   from multiprocessing.connection import Listener
2482   from array import array
2483
2484   address = ('localhost', 6000)     # family is deduced to be 'AF_INET'
2485
2486   with Listener(address, authkey=b'secret password') as listener:
2487       with listener.accept() as conn:
2488           print('connection accepted from', listener.last_accepted)
2489
2490           conn.send([2.25, None, 'junk', float])
2491
2492           conn.send_bytes(b'hello')
2493
2494           conn.send_bytes(array('i', [42, 1729]))
2495
2496The following code connects to the server and receives some data from the
2497server::
2498
2499   from multiprocessing.connection import Client
2500   from array import array
2501
2502   address = ('localhost', 6000)
2503
2504   with Client(address, authkey=b'secret password') as conn:
2505       print(conn.recv())                  # => [2.25, None, 'junk', float]
2506
2507       print(conn.recv_bytes())            # => 'hello'
2508
2509       arr = array('i', [0, 0, 0, 0, 0])
2510       print(conn.recv_bytes_into(arr))    # => 8
2511       print(arr)                          # => array('i', [42, 1729, 0, 0, 0])
2512
2513The following code uses :func:`~multiprocessing.connection.wait` to
2514wait for messages from multiple processes at once::
2515
2516   import time, random
2517   from multiprocessing import Process, Pipe, current_process
2518   from multiprocessing.connection import wait
2519
2520   def foo(w):
2521       for i in range(10):
2522           w.send((i, current_process().name))
2523       w.close()
2524
2525   if __name__ == '__main__':
2526       readers = []
2527
2528       for i in range(4):
2529           r, w = Pipe(duplex=False)
2530           readers.append(r)
2531           p = Process(target=foo, args=(w,))
2532           p.start()
2533           # We close the writable end of the pipe now to be sure that
2534           # p is the only process which owns a handle for it.  This
2535           # ensures that when p closes its handle for the writable end,
2536           # wait() will promptly report the readable end as being ready.
2537           w.close()
2538
2539       while readers:
2540           for r in wait(readers):
2541               try:
2542                   msg = r.recv()
2543               except EOFError:
2544                   readers.remove(r)
2545               else:
2546                   print(msg)
2547
2548
2549.. _multiprocessing-address-formats:
2550
2551Address Formats
2552>>>>>>>>>>>>>>>
2553
2554* An ``'AF_INET'`` address is a tuple of the form ``(hostname, port)`` where
2555  *hostname* is a string and *port* is an integer.
2556
2557* An ``'AF_UNIX'`` address is a string representing a filename on the
2558  filesystem.
2559
* An ``'AF_PIPE'`` address is a string of the form
  :samp:`r'\\\\.\\pipe\\{PipeName}'`.  To use :func:`Client` to connect to a named
  pipe on a remote computer called *ServerName* one should use an address of the
  form :samp:`r'\\\\{ServerName}\\pipe\\{PipeName}'` instead.
2564
2565Note that any string beginning with two backslashes is assumed by default to be
2566an ``'AF_PIPE'`` address rather than an ``'AF_UNIX'`` address.
2567
2568
2569.. _multiprocessing-auth-keys:
2570
2571Authentication keys
2572~~~~~~~~~~~~~~~~~~~
2573
2574When one uses :meth:`Connection.recv <Connection.recv>`, the
2575data received is automatically
2576unpickled. Unfortunately unpickling data from an untrusted source is a security
2577risk. Therefore :class:`Listener` and :func:`Client` use the :mod:`hmac` module
2578to provide digest authentication.
2579
2580An authentication key is a byte string which can be thought of as a
2581password: once a connection is established both ends will demand proof
2582that the other knows the authentication key.  (Demonstrating that both
2583ends are using the same key does **not** involve sending the key over
2584the connection.)
2585
2586If authentication is requested but no authentication key is specified then the
2587return value of ``current_process().authkey`` is used (see
2588:class:`~multiprocessing.Process`).  This value will be automatically inherited by
2589any :class:`~multiprocessing.Process` object that the current process creates.
2590This means that (by default) all processes of a multi-process program will share
2591a single authentication key which can be used when setting up connections
2592between themselves.
2593
2594Suitable authentication keys can also be generated by using :func:`os.urandom`.
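
For example, a fresh key can be generated and installed as the current
process's key, so that child processes created afterwards inherit it (a
minimal sketch; the key could equally be passed explicitly to
:class:`Listener` and :func:`Client`)::

   import os
   from multiprocessing import current_process

   current_process().authkey = os.urandom(32)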
2595
2596
2597Logging
2598~~~~~~~
2599
2600Some support for logging is available.  Note, however, that the :mod:`logging`
2601package does not use process shared locks so it is possible (depending on the
2602handler type) for messages from different processes to get mixed up.
2603
2604.. currentmodule:: multiprocessing
2605.. function:: get_logger()
2606
2607   Returns the logger used by :mod:`multiprocessing`.  If necessary, a new one
2608   will be created.
2609
2610   When first created the logger has level :data:`logging.NOTSET` and no
2611   default handler. Messages sent to this logger will not by default propagate
2612   to the root logger.
2613
2614   Note that on Windows child processes will only inherit the level of the
2615   parent process's logger -- any other customization of the logger will not be
2616   inherited.
2617
2618.. currentmodule:: multiprocessing
2619.. function:: log_to_stderr()
2620
2621   This function performs a call to :func:`get_logger` but in addition to
2622   returning the logger created by get_logger, it adds a handler which sends
2623   output to :data:`sys.stderr` using format
2624   ``'[%(levelname)s/%(processName)s] %(message)s'``.
2625
2626Below is an example session with logging turned on::
2627
2628    >>> import multiprocessing, logging
2629    >>> logger = multiprocessing.log_to_stderr()
2630    >>> logger.setLevel(logging.INFO)
2631    >>> logger.warning('doomed')
2632    [WARNING/MainProcess] doomed
2633    >>> m = multiprocessing.Manager()
2634    [INFO/SyncManager-...] child process calling self.run()
2635    [INFO/SyncManager-...] created temp directory /.../pymp-...
2636    [INFO/SyncManager-...] manager serving at '/.../listener-...'
2637    >>> del m
2638    [INFO/MainProcess] sending shutdown message to manager
2639    [INFO/SyncManager-...] manager exiting with exitcode 0
2640
2641For a full table of logging levels, see the :mod:`logging` module.
2642
2643
2644The :mod:`multiprocessing.dummy` module
2645~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
2646
2647.. module:: multiprocessing.dummy
2648   :synopsis: Dumb wrapper around threading.
2649
2650:mod:`multiprocessing.dummy` replicates the API of :mod:`multiprocessing` but is
2651no more than a wrapper around the :mod:`threading` module.
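
This makes it straightforward to swap threads in for processes while keeping
the same code.  For example, a thread-based pool offers the same ``Pool``
interface (a minimal sketch; the function name is illustrative)::

   from multiprocessing.dummy import Pool as ThreadPool

   def fetch(x):
       return x * x          # stand-in for I/O-bound work

   if __name__ == '__main__':
       with ThreadPool(4) as pool:
           print(pool.map(fetch, range(5)))    # [0, 1, 4, 9, 16]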
2652
2653
2654.. _multiprocessing-programming:
2655
2656Programming guidelines
2657----------------------
2658
2659There are certain guidelines and idioms which should be adhered to when using
2660:mod:`multiprocessing`.
2661
2662
2663All start methods
2664~~~~~~~~~~~~~~~~~
2665
2666The following applies to all start methods.
2667
2668Avoid shared state
2669
2670    As far as possible one should try to avoid shifting large amounts of data
2671    between processes.
2672
2673    It is probably best to stick to using queues or pipes for communication
2674    between processes rather than using the lower level synchronization
2675    primitives.
2676
2677Picklability
2678
2679    Ensure that the arguments to the methods of proxies are picklable.
2680
2681Thread safety of proxies
2682
2683    Do not use a proxy object from more than one thread unless you protect it
2684    with a lock.
2685
2686    (There is never a problem with different processes using the *same* proxy.)
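
    For example, threads in one process can share a single proxy if calls
    through it are guarded by a lock (a minimal sketch; the names are
    illustrative)::

        import threading
        from multiprocessing import Manager

        def worker(shared, lock, item):
            with lock:                  # serialize use of the proxy
                shared.append(item)

        if __name__ == '__main__':
            with Manager() as manager:
                shared = manager.list()
                lock = threading.Lock()
                threads = [threading.Thread(target=worker, args=(shared, lock, i))
                           for i in range(4)]
                for t in threads:
                    t.start()
                for t in threads:
                    t.join()
                print(shared)           # [0, 1, 2, 3] in some order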
2687
2688Joining zombie processes
2689
2690    On Unix when a process finishes but has not been joined it becomes a zombie.
2691    There should never be very many because each time a new process starts (or
2692    :func:`~multiprocessing.active_children` is called) all completed processes
2693    which have not yet been joined will be joined.  Also calling a finished
2694    process's :meth:`Process.is_alive <multiprocessing.Process.is_alive>` will
2695    join the process.  Even so it is probably good
2696    practice to explicitly join all the processes that you start.
2697
2698Better to inherit than pickle/unpickle
2699
2700    When using the *spawn* or *forkserver* start methods many types
2701    from :mod:`multiprocessing` need to be picklable so that child
2702    processes can use them.  However, one should generally avoid
2703    sending shared objects to other processes using pipes or queues.
2704    Instead you should arrange the program so that a process which
2705    needs access to a shared resource created elsewhere can inherit it
2706    from an ancestor process.
2707
2708Avoid terminating processes
2709
2710    Using the :meth:`Process.terminate <multiprocessing.Process.terminate>`
2711    method to stop a process is liable to
2712    cause any shared resources (such as locks, semaphores, pipes and queues)
2713    currently being used by the process to become broken or unavailable to other
2714    processes.
2715
2716    Therefore it is probably best to only consider using
2717    :meth:`Process.terminate <multiprocessing.Process.terminate>` on processes
2718    which never use any shared resources.
2719
2720Joining processes that use queues
2721
2722    Bear in mind that a process that has put items in a queue will wait before
2723    terminating until all the buffered items are fed by the "feeder" thread to
2724    the underlying pipe.  (The child process can call the
2725    :meth:`Queue.cancel_join_thread <multiprocessing.Queue.cancel_join_thread>`
2726    method of the queue to avoid this behaviour.)
2727
2728    This means that whenever you use a queue you need to make sure that all
2729    items which have been put on the queue will eventually be removed before the
2730    process is joined.  Otherwise you cannot be sure that processes which have
2731    put items on the queue will terminate.  Remember also that non-daemonic
2732    processes will be joined automatically.
2733
2734    An example which will deadlock is the following::
2735
2736        from multiprocessing import Process, Queue
2737
2738        def f(q):
2739            q.put('X' * 1000000)
2740
2741        if __name__ == '__main__':
2742            queue = Queue()
2743            p = Process(target=f, args=(queue,))
2744            p.start()
2745            p.join()                    # this deadlocks
2746            obj = queue.get()
2747
2748    A fix here would be to swap the last two lines (or simply remove the
2749    ``p.join()`` line).
2750
2751Explicitly pass resources to child processes
2752
    On Unix using the *fork* start method, a child process can make
    use of a shared resource created in a parent process by accessing
    it as a global.  However, it is better to pass the object as an
2756    argument to the constructor for the child process.
2757
2758    Apart from making the code (potentially) compatible with Windows
2759    and the other start methods this also ensures that as long as the
2760    child process is still alive the object will not be garbage
2761    collected in the parent process.  This might be important if some
2762    resource is freed when the object is garbage collected in the
2763    parent process.
2764
2765    So for instance ::
2766
2767        from multiprocessing import Process, Lock
2768
2769        def f():
2770            ... do something using "lock" ...
2771
2772        if __name__ == '__main__':
2773            lock = Lock()
2774            for i in range(10):
2775                Process(target=f).start()
2776
2777    should be rewritten as ::
2778
2779        from multiprocessing import Process, Lock
2780
2781        def f(l):
2782            ... do something using "l" ...
2783
2784        if __name__ == '__main__':
2785            lock = Lock()
2786            for i in range(10):
2787                Process(target=f, args=(lock,)).start()
2788
2789Beware of replacing :data:`sys.stdin` with a "file like object"

    :mod:`multiprocessing` originally unconditionally called::

        os.close(sys.stdin.fileno())

    in the :meth:`multiprocessing.Process._bootstrap` method --- this resulted
    in issues with processes-in-processes. This has been changed to::

        sys.stdin.close()
        sys.stdin = open(os.open(os.devnull, os.O_RDONLY), closefd=False)

    This solves the fundamental issue of processes colliding with each other
    resulting in a bad file descriptor error, but introduces a potential danger
    to applications which replace :data:`sys.stdin` with a "file-like object"
    with output buffering.  The danger is that if multiple processes call
    :meth:`~io.IOBase.close` on this file-like object, the same data may be
    flushed to the object multiple times, resulting in corruption.

    If you write a file-like object and implement your own caching, you can
    make it fork-safe by storing the pid whenever you append to the cache,
    and discarding the cache when the pid changes. For example::

       @property
       def cache(self):
           pid = os.getpid()
           if pid != self._pid:
               # a forked child sees a new pid, so discard any data
               # buffered by the parent and start with an empty cache
               self._pid = pid
               self._cache = []
           return self._cache

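    As a self-contained illustration of the same pattern (the
    ``ForkAwareWriter`` class and its ``write``/``flush`` methods are purely
    hypothetical, not part of :mod:`multiprocessing`), a buffering wrapper
    might look like this::

        import os

        class ForkAwareWriter:
            """Buffer writes and drop data inherited from a forked parent."""

            def __init__(self, target):
                self._target = target
                self._pid = os.getpid()
                self._cache = []

            @property
            def cache(self):
                pid = os.getpid()
                if pid != self._pid:
                    self._pid = pid
                    self._cache = []    # never re-flush the parent's data
                return self._cache

            def write(self, data):
                self.cache.append(data)

            def flush(self):
                for item in self.cache:
                    self._target.write(item)
                self._target.flush()
                self._cache = []
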
    For more information, see :issue:`5155`, :issue:`5313` and :issue:`5331`.

The *spawn* and *forkserver* start methods
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

There are a few extra restrictions which don't apply to the *fork*
start method.

More picklability

    Ensure that all arguments to :meth:`Process.__init__` are picklable.
    Also, if you subclass :class:`~multiprocessing.Process` then make sure that
    instances will be picklable when the :meth:`Process.start
    <multiprocessing.Process.start>` method is called.
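
    For example, the following sketch fails under the *spawn* and *forkserver*
    start methods because a lambda cannot be pickled (a module-level function
    passed in ``args`` would work fine)::

        from multiprocessing import Process

        def work(callback):
            callback()

        if __name__ == '__main__':
            # a lambda is not picklable, so start() raises a pickling error
            # under the *spawn* and *forkserver* start methods
            p = Process(target=work, args=(lambda: print('hello'),))
            p.start()
            p.join()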

Global variables

    Bear in mind that if code run in a child process tries to access a global
    variable, then the value it sees (if any) may not be the same as the value
    in the parent process at the time that :meth:`Process.start
    <multiprocessing.Process.start>` was called.

    However, global variables which are just module-level constants cause no
    problems.
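
    For example, in the following sketch a child started with the *spawn*
    method re-imports the module and therefore sees the value assigned at
    import time, not the value set just before :meth:`Process.start
    <multiprocessing.Process.start>` was called::

        from multiprocessing import Process, set_start_method

        counter = 0                         # value seen by a spawned child

        def show():
            print('child sees', counter)    # prints 0 under *spawn*

        if __name__ == '__main__':
            set_start_method('spawn')
            counter = 42                    # not visible to the spawned child
            p = Process(target=show)
            p.start()
            p.join()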

Safe importing of main module

    Make sure that the main module can be safely imported by a new Python
    interpreter without causing unintended side effects (such as starting a new
    process).

    For example, using the *spawn* or *forkserver* start method,
    running the following module would fail with a
    :exc:`RuntimeError`::

        from multiprocessing import Process

        def foo():
            print('hello')

        p = Process(target=foo)
        p.start()

    Instead one should protect the "entry point" of the program by using ``if
    __name__ == '__main__':`` as follows::

       from multiprocessing import Process, freeze_support, set_start_method

       def foo():
           print('hello')

       if __name__ == '__main__':
           freeze_support()
           set_start_method('spawn')
           p = Process(target=foo)
           p.start()

    (The ``freeze_support()`` line can be omitted if the program will be run
    normally instead of frozen.)

    This allows the newly spawned Python interpreter to safely import the module
    and then run the module's ``foo()`` function.

    Similar restrictions apply if a pool or manager is created in the main
    module.


.. _multiprocessing-examples:

Examples
--------

Demonstration of how to create and use customized managers and proxies:

.. literalinclude:: ../includes/mp_newtype.py
   :language: python3


Using :class:`~multiprocessing.pool.Pool`:

.. literalinclude:: ../includes/mp_pool.py
   :language: python3


An example showing how to use queues to feed tasks to a collection of worker
processes and collect the results:

.. literalinclude:: ../includes/mp_workers.py
   :language: python3