:mod:`multiprocessing` --- Process-based parallelism
====================================================

.. module:: multiprocessing
   :synopsis: Process-based parallelism.

**Source code:** :source:`Lib/multiprocessing/`

--------------

Introduction
------------

:mod:`multiprocessing` is a package that supports spawning processes using an
API similar to the :mod:`threading` module.  The :mod:`multiprocessing` package
offers both local and remote concurrency, effectively side-stepping the
:term:`Global Interpreter Lock <global interpreter lock>` by using
subprocesses instead of threads.  Due
to this, the :mod:`multiprocessing` module allows the programmer to fully
leverage multiple processors on a given machine.  It runs on both Unix and
Windows.

The :mod:`multiprocessing` module also introduces APIs which do not have
analogs in the :mod:`threading` module.  A prime example of this is the
:class:`~multiprocessing.pool.Pool` object which offers a convenient means of
parallelizing the execution of a function across multiple input values,
distributing the input data across processes (data parallelism).  The following
example demonstrates the common practice of defining such functions in a module
so that child processes can successfully import that module.  This basic example
of data parallelism using :class:`~multiprocessing.pool.Pool`, ::

   from multiprocessing import Pool

   def f(x):
       return x*x

   if __name__ == '__main__':
       with Pool(5) as p:
           print(p.map(f, [1, 2, 3]))

will print to standard output ::

   [1, 4, 9]


The :class:`Process` class
~~~~~~~~~~~~~~~~~~~~~~~~~~

In :mod:`multiprocessing`, processes are spawned by creating a :class:`Process`
object and then calling its :meth:`~Process.start` method.  :class:`Process`
follows the API of :class:`threading.Thread`.  A trivial example of a
multiprocess program is ::

   from multiprocessing import Process

   def f(name):
       print('hello', name)

   if __name__ == '__main__':
       p = Process(target=f, args=('bob',))
       p.start()
       p.join()

To show the individual process IDs involved, here is an expanded example::

    from multiprocessing import Process
    import os

    def info(title):
        print(title)
        print('module name:', __name__)
        print('parent process:', os.getppid())
        print('process id:', os.getpid())

    def f(name):
        info('function f')
        print('hello', name)

    if __name__ == '__main__':
        info('main line')
        p = Process(target=f, args=('bob',))
        p.start()
        p.join()

For an explanation of why the ``if __name__ == '__main__'`` part is
necessary, see :ref:`multiprocessing-programming`.



Contexts and start methods
~~~~~~~~~~~~~~~~~~~~~~~~~~

.. _multiprocessing-start-methods:

Depending on the platform, :mod:`multiprocessing` supports three ways
to start a process.  These *start methods* are

  *spawn*
    The parent process starts a fresh Python interpreter process.  The
    child process will only inherit those resources necessary to run
    the process object's :meth:`~Process.run` method.  In particular,
    unnecessary file descriptors and handles from the parent process
    will not be inherited.  Starting a process using this method is
    rather slow compared to using *fork* or *forkserver*.

    Available on Unix and Windows.  The default on Windows and macOS.

  *fork*
    The parent process uses :func:`os.fork` to fork the Python
    interpreter.  The child process, when it begins, is effectively
    identical to the parent process.  All resources of the parent are
    inherited by the child process.  Note that safely forking a
    multithreaded process is problematic.

    Available on Unix only.  The default on Unix.

  *forkserver*
    When the program starts and selects the *forkserver* start method,
    a server process is started.  From then on, whenever a new process
    is needed, the parent process connects to the server and requests
    that it fork a new process.  The fork server process is single
    threaded so it is safe for it to use :func:`os.fork`.  No
    unnecessary resources are inherited.

    Available on Unix platforms which support passing file descriptors
    over Unix pipes.

.. versionchanged:: 3.8

   On macOS, the *spawn* start method is now the default.  The *fork* start
   method should be considered unsafe as it can lead to crashes of the
   subprocess. See :issue:`33725`.

.. versionchanged:: 3.4
   *spawn* added on all Unix platforms, and *forkserver* added for
   some Unix platforms.
   Child processes no longer inherit all of the parent's inheritable
   handles on Windows.

On Unix using the *spawn* or *forkserver* start methods will also
start a *resource tracker* process which tracks the unlinked named
system resources (such as named semaphores or
:class:`~multiprocessing.shared_memory.SharedMemory` objects) created
by processes of the program.  When all processes
have exited the resource tracker unlinks any remaining tracked object.
Usually there should be none, but if a process was killed by a signal
there may be some "leaked" resources.  (Neither leaked semaphores nor shared
memory segments will be automatically unlinked until the next reboot. This is
problematic for both objects because the system allows only a limited number of
named semaphores, and shared memory segments occupy some space in the main
memory.)

To select a start method you use the :func:`set_start_method` in
the ``if __name__ == '__main__'`` clause of the main module.  For
example::

       import multiprocessing as mp

       def foo(q):
           q.put('hello')

       if __name__ == '__main__':
           mp.set_start_method('spawn')
           q = mp.Queue()
           p = mp.Process(target=foo, args=(q,))
           p.start()
           print(q.get())
           p.join()

:func:`set_start_method` should not be used more than once in the
program.

Alternatively, you can use :func:`get_context` to obtain a context
object.  Context objects have the same API as the multiprocessing
module, and allow one to use multiple start methods in the same
program. ::

       import multiprocessing as mp

       def foo(q):
           q.put('hello')

       if __name__ == '__main__':
           ctx = mp.get_context('spawn')
           q = ctx.Queue()
           p = ctx.Process(target=foo, args=(q,))
           p.start()
           print(q.get())
           p.join()

Note that objects related to one context may not be compatible with
processes for a different context.  In particular, locks created using
the *fork* context cannot be passed to processes started using the
*spawn* or *forkserver* start methods.

A library which wants to use a particular start method should probably
use :func:`get_context` to avoid interfering with the choice of the
library user.

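For instance, a library that prefers particular start-method semantics might
accept an optional context rather than calling :func:`set_start_method`
itself.  A rough sketch (the function and parameter names here are only
illustrative)::

   import multiprocessing as mp

   def run_in_subprocess(target, *args, ctx=None):
       # Fall back to the caller's default context if none is supplied.
       ctx = ctx if ctx is not None else mp.get_context()
       p = ctx.Process(target=target, args=args)
       p.start()
       p.join()
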
.. warning::

   The ``'spawn'`` and ``'forkserver'`` start methods cannot currently
   be used with "frozen" executables (i.e., binaries produced by
   packages like **PyInstaller** and **cx_Freeze**) on Unix.
   The ``'fork'`` start method does work.


Exchanging objects between processes
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

:mod:`multiprocessing` supports two types of communication channel between
processes:

**Queues**

   The :class:`Queue` class is a near clone of :class:`queue.Queue`.  For
   example::

      from multiprocessing import Process, Queue

      def f(q):
          q.put([42, None, 'hello'])

      if __name__ == '__main__':
          q = Queue()
          p = Process(target=f, args=(q,))
          p.start()
          print(q.get())    # prints "[42, None, 'hello']"
          p.join()

   Queues are thread and process safe.

**Pipes**

   The :func:`Pipe` function returns a pair of connection objects connected by a
   pipe which by default is duplex (two-way).  For example::

      from multiprocessing import Process, Pipe

      def f(conn):
          conn.send([42, None, 'hello'])
          conn.close()

      if __name__ == '__main__':
          parent_conn, child_conn = Pipe()
          p = Process(target=f, args=(child_conn,))
          p.start()
          print(parent_conn.recv())   # prints "[42, None, 'hello']"
          p.join()

   The two connection objects returned by :func:`Pipe` represent the two ends of
   the pipe.  Each connection object has :meth:`~Connection.send` and
   :meth:`~Connection.recv` methods (among others).  Note that data in a pipe
   may become corrupted if two processes (or threads) try to read from or write
   to the *same* end of the pipe at the same time.  Of course there is no risk
   of corruption from processes using different ends of the pipe at the same
   time.


Synchronization between processes
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

:mod:`multiprocessing` contains equivalents of all the synchronization
primitives from :mod:`threading`.  For instance one can use a lock to ensure
that only one process prints to standard output at a time::

   from multiprocessing import Process, Lock

   def f(l, i):
       l.acquire()
       try:
           print('hello world', i)
       finally:
           l.release()

   if __name__ == '__main__':
       lock = Lock()

       for num in range(10):
           Process(target=f, args=(lock, num)).start()

Without using the lock output from the different processes is liable to get all
mixed up.


Sharing state between processes
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

As mentioned above, when doing concurrent programming it is usually best to
avoid using shared state as far as possible.  This is particularly true when
using multiple processes.

However, if you really do need to use some shared data then
:mod:`multiprocessing` provides a couple of ways of doing so.

**Shared memory**

   Data can be stored in a shared memory map using :class:`Value` or
   :class:`Array`.  For example, the following code ::

      from multiprocessing import Process, Value, Array

      def f(n, a):
          n.value = 3.1415927
          for i in range(len(a)):
              a[i] = -a[i]

      if __name__ == '__main__':
          num = Value('d', 0.0)
          arr = Array('i', range(10))

          p = Process(target=f, args=(num, arr))
          p.start()
          p.join()

          print(num.value)
          print(arr[:])

   will print ::

      3.1415927
      [0, -1, -2, -3, -4, -5, -6, -7, -8, -9]

   The ``'d'`` and ``'i'`` arguments used when creating ``num`` and ``arr`` are
   typecodes of the kind used by the :mod:`array` module: ``'d'`` indicates a
   double precision float and ``'i'`` indicates a signed integer.  These shared
   objects will be process and thread-safe.

   For more flexibility in using shared memory one can use the
   :mod:`multiprocessing.sharedctypes` module which supports the creation of
   arbitrary ctypes objects allocated from shared memory.

**Server process**

   A manager object returned by :func:`Manager` controls a server process which
   holds Python objects and allows other processes to manipulate them using
   proxies.

   A manager returned by :func:`Manager` will support types
   :class:`list`, :class:`dict`, :class:`~managers.Namespace`, :class:`Lock`,
   :class:`RLock`, :class:`Semaphore`, :class:`BoundedSemaphore`,
   :class:`Condition`, :class:`Event`, :class:`Barrier`,
   :class:`Queue`, :class:`Value` and :class:`Array`.  For example, ::

      from multiprocessing import Process, Manager

      def f(d, l):
          d[1] = '1'
          d['2'] = 2
          d[0.25] = None
          l.reverse()

      if __name__ == '__main__':
          with Manager() as manager:
              d = manager.dict()
              l = manager.list(range(10))

              p = Process(target=f, args=(d, l))
              p.start()
              p.join()

              print(d)
              print(l)

   will print ::

       {0.25: None, 1: '1', '2': 2}
       [9, 8, 7, 6, 5, 4, 3, 2, 1, 0]

   Server process managers are more flexible than using shared memory objects
   because they can be made to support arbitrary object types.  Also, a single
   manager can be shared by processes on different computers over a network.
   They are, however, slower than using shared memory.


Using a pool of workers
~~~~~~~~~~~~~~~~~~~~~~~

The :class:`~multiprocessing.pool.Pool` class represents a pool of worker
processes.  It has methods which allow tasks to be offloaded to the worker
processes in a few different ways.

For example::

   from multiprocessing import Pool, TimeoutError
   import time
   import os

   def f(x):
       return x*x

   if __name__ == '__main__':
       # start 4 worker processes
       with Pool(processes=4) as pool:

           # print "[0, 1, 4,..., 81]"
           print(pool.map(f, range(10)))

           # print same numbers in arbitrary order
           for i in pool.imap_unordered(f, range(10)):
               print(i)

           # evaluate "f(20)" asynchronously
           res = pool.apply_async(f, (20,))      # runs in *only* one process
           print(res.get(timeout=1))             # prints "400"

           # evaluate "os.getpid()" asynchronously
           res = pool.apply_async(os.getpid, ()) # runs in *only* one process
           print(res.get(timeout=1))             # prints the PID of that process

           # launching multiple evaluations asynchronously *may* use more processes
           multiple_results = [pool.apply_async(os.getpid, ()) for i in range(4)]
           print([res.get(timeout=1) for res in multiple_results])

           # make a single worker sleep for 10 secs
           res = pool.apply_async(time.sleep, (10,))
           try:
               print(res.get(timeout=1))
           except TimeoutError:
               print("We lacked patience and got a multiprocessing.TimeoutError")

           print("For the moment, the pool remains available for more work")

       # exiting the 'with'-block has stopped the pool
       print("Now the pool is closed and no longer available")

Note that the methods of a pool should only ever be used by the
process which created it.

.. note::

   Functionality within this package requires that the ``__main__`` module be
   importable by the children. This is covered in :ref:`multiprocessing-programming`
   however it is worth pointing out here. This means that some examples, such
   as the :class:`multiprocessing.pool.Pool` examples will not work in the
   interactive interpreter. For example::

      >>> from multiprocessing import Pool
      >>> p = Pool(5)
      >>> def f(x):
      ...     return x*x
      ...
      >>> with p:
      ...   p.map(f, [1,2,3])
      Process PoolWorker-1:
      Process PoolWorker-2:
      Process PoolWorker-3:
      Traceback (most recent call last):
      Traceback (most recent call last):
      Traceback (most recent call last):
      AttributeError: 'module' object has no attribute 'f'
      AttributeError: 'module' object has no attribute 'f'
      AttributeError: 'module' object has no attribute 'f'

   (If you try this it will actually output three full tracebacks
   interleaved in a semi-random fashion, and then you may have to
   stop the parent process somehow.)


Reference
---------

The :mod:`multiprocessing` package mostly replicates the API of the
:mod:`threading` module.


:class:`Process` and exceptions
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

.. class:: Process(group=None, target=None, name=None, args=(), kwargs={}, \
                   *, daemon=None)

   Process objects represent activity that is run in a separate process. The
   :class:`Process` class has equivalents of all the methods of
   :class:`threading.Thread`.

   The constructor should always be called with keyword arguments. *group*
   should always be ``None``; it exists solely for compatibility with
   :class:`threading.Thread`.  *target* is the callable object to be invoked by
   the :meth:`run()` method.  It defaults to ``None``, meaning nothing is
   called. *name* is the process name (see :attr:`name` for more details).
   *args* is the argument tuple for the target invocation.  *kwargs* is a
   dictionary of keyword arguments for the target invocation.  If provided,
   the keyword-only *daemon* argument sets the process :attr:`daemon` flag
   to ``True`` or ``False``.  If ``None`` (the default), this flag will be
   inherited from the creating process.

   By default, no arguments are passed to *target*.

   If a subclass overrides the constructor, it must make sure it invokes the
   base class constructor (:meth:`Process.__init__`) before doing anything else
   to the process.

   .. versionchanged:: 3.3
      Added the *daemon* argument.

   .. method:: run()

      Method representing the process's activity.

      You may override this method in a subclass.  The standard :meth:`run`
      method invokes the callable object passed to the object's constructor as
      the target argument, if any, with sequential and keyword arguments taken
      from the *args* and *kwargs* arguments, respectively.

   .. method:: start()

      Start the process's activity.

      This must be called at most once per process object.  It arranges for the
      object's :meth:`run` method to be invoked in a separate process.

   .. method:: join([timeout])

      If the optional argument *timeout* is ``None`` (the default), the method
      blocks until the process whose :meth:`join` method is called terminates.
      If *timeout* is a positive number, it blocks at most *timeout* seconds.
      Note that the method returns ``None`` if its process terminates or if the
      method times out.  Check the process's :attr:`exitcode` to determine if
      it terminated.

      A process can be joined many times.

      A process cannot join itself because this would cause a deadlock.  It is
      an error to attempt to join a process before it has been started.

   .. attribute:: name

      The process's name.  The name is a string used for identification purposes
      only.  It has no semantics.  Multiple processes may be given the same
      name.

      The initial name is set by the constructor.  If no explicit name is
      provided to the constructor, a name of the form
      'Process-N\ :sub:`1`:N\ :sub:`2`:...:N\ :sub:`k`' is constructed, where
      each N\ :sub:`k` is the N-th child of its parent.

   .. method:: is_alive

      Return whether the process is alive.

      Roughly, a process object is alive from the moment the :meth:`start`
      method returns until the child process terminates.

   .. attribute:: daemon

      The process's daemon flag, a Boolean value.  This must be set before
      :meth:`start` is called.

      The initial value is inherited from the creating process.

      When a process exits, it attempts to terminate all of its daemonic child
      processes.

      Note that a daemonic process is not allowed to create child processes.
      Otherwise a daemonic process would leave its children orphaned if it gets
      terminated when its parent process exits. Additionally, these are **not**
      Unix daemons or services, they are normal processes that will be
      terminated (and not joined) if non-daemonic processes have exited.

   In addition to the :class:`threading.Thread` API, :class:`Process` objects
   also support the following attributes and methods:

   .. attribute:: pid

      Return the process ID.  Before the process is spawned, this will be
      ``None``.

   .. attribute:: exitcode

      The child's exit code.  This will be ``None`` if the process has not yet
      terminated.

      If the child's :meth:`run` method returned normally, the exit code
      will be 0.  If it terminated via :func:`sys.exit` with an integer
      argument *N*, the exit code will be *N*.

      If the child terminated due to an exception not caught within
      :meth:`run`, the exit code will be 1.  If it was terminated by
      signal *N*, the exit code will be the negative value *-N*.

   .. attribute:: authkey

      The process's authentication key (a byte string).

      When :mod:`multiprocessing` is initialized the main process is assigned a
      random string using :func:`os.urandom`.

      When a :class:`Process` object is created, it will inherit the
      authentication key of its parent process, although this may be changed by
      setting :attr:`authkey` to another byte string.

      See :ref:`multiprocessing-auth-keys`.

   .. attribute:: sentinel

      A numeric handle of a system object which will become "ready" when
      the process ends.

      You can use this value if you want to wait on several events at
      once using :func:`multiprocessing.connection.wait`.  Otherwise
      calling :meth:`join()` is simpler.

      On Windows, this is an OS handle usable with the ``WaitForSingleObject``
      and ``WaitForMultipleObjects`` family of API calls.  On Unix, this is
      a file descriptor usable with primitives from the :mod:`select` module.

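      As an illustrative sketch (the worker function here is made up), the
      sentinels of several processes can be waited on at once::

         from multiprocessing import Process
         from multiprocessing.connection import wait
         import time

         def worker(delay):
             time.sleep(delay)

         if __name__ == '__main__':
             procs = [Process(target=worker, args=(d,)) for d in (1, 2, 3)]
             for p in procs:
                 p.start()
             sentinels = [p.sentinel for p in procs]
             while sentinels:
                 # wait() returns the sentinels that have become ready
                 for s in wait(sentinels):
                     sentinels.remove(s)
             print('all workers have exited')
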
      .. versionadded:: 3.3

   .. method:: terminate()

      Terminate the process.  On Unix this is done using the ``SIGTERM`` signal;
      on Windows :c:func:`TerminateProcess` is used.  Note that exit handlers and
      finally clauses, etc., will not be executed.

      Note that descendant processes of the process will *not* be terminated --
      they will simply become orphaned.

      .. warning::

         If this method is used when the associated process is using a pipe or
         queue then the pipe or queue is liable to become corrupted and may
         become unusable by other processes.  Similarly, if the process has
         acquired a lock or semaphore etc. then terminating it is liable to
         cause other processes to deadlock.

   .. method:: kill()

      Same as :meth:`terminate()` but using the ``SIGKILL`` signal on Unix.

      .. versionadded:: 3.7

   .. method:: close()

      Close the :class:`Process` object, releasing all resources associated
      with it.  :exc:`ValueError` is raised if the underlying process
      is still running.  Once :meth:`close` returns successfully, most
      other methods and attributes of the :class:`Process` object will
      raise :exc:`ValueError`.

      .. versionadded:: 3.7

   Note that the :meth:`start`, :meth:`join`, :meth:`is_alive`,
   :meth:`terminate` and :attr:`exitcode` methods should only be called by
   the process that created the process object.

   Example usage of some of the methods of :class:`Process`:

   .. doctest::
      :options: +ELLIPSIS

       >>> import multiprocessing, time, signal
       >>> p = multiprocessing.Process(target=time.sleep, args=(1000,))
       >>> print(p, p.is_alive())
       <Process ... initial> False
       >>> p.start()
       >>> print(p, p.is_alive())
       <Process ... started> True
       >>> p.terminate()
       >>> time.sleep(0.1)
       >>> print(p, p.is_alive())
       <Process ... stopped exitcode=-SIGTERM> False
       >>> p.exitcode == -signal.SIGTERM
       True

.. exception:: ProcessError

   The base class of all :mod:`multiprocessing` exceptions.

.. exception:: BufferTooShort

   Exception raised by :meth:`Connection.recv_bytes_into()` when the supplied
   buffer object is too small for the message read.

   If ``e`` is an instance of :exc:`BufferTooShort` then ``e.args[0]`` will give
   the message as a byte string.

.. exception:: AuthenticationError

   Raised when there is an authentication error.

.. exception:: TimeoutError

   Raised by methods with a timeout when the timeout expires.

Pipes and Queues
~~~~~~~~~~~~~~~~

When using multiple processes, one generally uses message passing for
communication between processes and avoids having to use any synchronization
primitives like locks.

For passing messages one can use :func:`Pipe` (for a connection between two
processes) or a queue (which allows multiple producers and consumers).

The :class:`Queue`, :class:`SimpleQueue` and :class:`JoinableQueue` types
are multi-producer, multi-consumer :abbr:`FIFO (first-in, first-out)`
queues modelled on the :class:`queue.Queue` class in the
standard library.  They differ in that :class:`Queue` lacks the
:meth:`~queue.Queue.task_done` and :meth:`~queue.Queue.join` methods introduced
into Python 2.5's :class:`queue.Queue` class.

If you use :class:`JoinableQueue` then you **must** call
:meth:`JoinableQueue.task_done` for each task removed from the queue or else the
semaphore used to count the number of unfinished tasks may eventually overflow,
raising an exception.

Note that one can also create a shared queue by using a manager object -- see
:ref:`multiprocessing-managers`.

.. note::

   :mod:`multiprocessing` uses the usual :exc:`queue.Empty` and
   :exc:`queue.Full` exceptions to signal a timeout.  They are not available in
   the :mod:`multiprocessing` namespace so you need to import them from
   :mod:`queue`.

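The following is a small illustrative sketch of catching one of these
exceptions::

   from multiprocessing import Queue
   import queue

   q = Queue()
   try:
       q.get(timeout=0.1)      # nothing was put, so this times out
   except queue.Empty:
       print('no item arrived within the timeout')
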
.. note::

   When an object is put on a queue, the object is pickled and a
   background thread later flushes the pickled data to an underlying
   pipe.  This has some consequences which are a little surprising,
   but should not cause any practical difficulties -- if they really
   bother you then you can instead use a queue created with a
   :ref:`manager <multiprocessing-managers>`.

   (1) After putting an object on an empty queue there may be an
       infinitesimal delay before the queue's :meth:`~Queue.empty`
       method returns :const:`False` and :meth:`~Queue.get_nowait` can
       return without raising :exc:`queue.Empty`.

   (2) If multiple processes are enqueuing objects, it is possible for
       the objects to be received at the other end out-of-order.
       However, objects enqueued by the same process will always be in
       the expected order with respect to each other.

.. warning::

   If a process is killed using :meth:`Process.terminate` or :func:`os.kill`
   while it is trying to use a :class:`Queue`, then the data in the queue is
   likely to become corrupted.  This may cause any other process to get an
   exception when it tries to use the queue later on.

.. warning::

   As mentioned above, if a child process has put items on a queue (and it has
   not used :meth:`JoinableQueue.cancel_join_thread
   <multiprocessing.Queue.cancel_join_thread>`), then that process will
   not terminate until all buffered items have been flushed to the pipe.

   This means that if you try joining that process you may get a deadlock unless
   you are sure that all items which have been put on the queue have been
   consumed.  Similarly, if the child process is non-daemonic then the parent
   process may hang on exit when it tries to join all its non-daemonic children.

   Note that a queue created using a manager does not have this issue.  See
   :ref:`multiprocessing-programming`.

For an example of the usage of queues for interprocess communication see
:ref:`multiprocessing-examples`.


.. function:: Pipe([duplex])

   Returns a pair ``(conn1, conn2)`` of
   :class:`~multiprocessing.connection.Connection` objects representing the
   ends of a pipe.

   If *duplex* is ``True`` (the default) then the pipe is bidirectional.  If
   *duplex* is ``False`` then the pipe is unidirectional: ``conn1`` can only be
   used for receiving messages and ``conn2`` can only be used for sending
   messages.

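   For instance, a one-way pipe used within a single process (an informal
   sketch)::

      from multiprocessing import Pipe

      recv_end, send_end = Pipe(duplex=False)
      send_end.send('ping')
      print(recv_end.recv())       # prints "ping"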

.. class:: Queue([maxsize])

   Returns a process shared queue implemented using a pipe and a few
   locks/semaphores.  When a process first puts an item on the queue a feeder
   thread is started which transfers objects from a buffer into the pipe.

   The usual :exc:`queue.Empty` and :exc:`queue.Full` exceptions from the
   standard library's :mod:`queue` module are raised to signal timeouts.

   :class:`Queue` implements all the methods of :class:`queue.Queue` except for
   :meth:`~queue.Queue.task_done` and :meth:`~queue.Queue.join`.

   .. method:: qsize()

      Return the approximate size of the queue.  Because of
      multithreading/multiprocessing semantics, this number is not reliable.

      Note that this may raise :exc:`NotImplementedError` on Unix platforms like
      macOS where ``sem_getvalue()`` is not implemented.

   .. method:: empty()

      Return ``True`` if the queue is empty, ``False`` otherwise.  Because of
      multithreading/multiprocessing semantics, this is not reliable.

   .. method:: full()

      Return ``True`` if the queue is full, ``False`` otherwise.  Because of
      multithreading/multiprocessing semantics, this is not reliable.

   .. method:: put(obj[, block[, timeout]])

      Put *obj* into the queue.  If the optional argument *block* is ``True``
      (the default) and *timeout* is ``None`` (the default), block if necessary until
      a free slot is available.  If *timeout* is a positive number, it blocks at
      most *timeout* seconds and raises the :exc:`queue.Full` exception if no
      free slot was available within that time.  Otherwise (*block* is
      ``False``), put an item on the queue if a free slot is immediately
      available, else raise the :exc:`queue.Full` exception (*timeout* is
      ignored in that case).

      .. versionchanged:: 3.8
         If the queue is closed, :exc:`ValueError` is raised instead of
         :exc:`AssertionError`.

   .. method:: put_nowait(obj)

      Equivalent to ``put(obj, False)``.

   .. method:: get([block[, timeout]])

      Remove and return an item from the queue.  If the optional argument *block*
      is ``True`` (the default) and *timeout* is ``None`` (the default), block if
      necessary until an item is available.  If *timeout* is a positive number,
      it blocks at most *timeout* seconds and raises the :exc:`queue.Empty`
      exception if no item was available within that time.  Otherwise (*block* is
      ``False``), return an item if one is immediately available, else raise the
      :exc:`queue.Empty` exception (*timeout* is ignored in that case).

      .. versionchanged:: 3.8
         If the queue is closed, :exc:`ValueError` is raised instead of
         :exc:`OSError`.

   .. method:: get_nowait()

      Equivalent to ``get(False)``.

   :class:`multiprocessing.Queue` has a few additional methods not found in
   :class:`queue.Queue`.  These methods are usually unnecessary for most
   code:

   .. method:: close()

      Indicate that no more data will be put on this queue by the current
      process.  The background thread will quit once it has flushed all buffered
      data to the pipe.  This is called automatically when the queue is garbage
      collected.

   .. method:: join_thread()

      Join the background thread.  This can only be used after :meth:`close` has
      been called.  It blocks until the background thread exits, ensuring that
      all data in the buffer has been flushed to the pipe.

      By default if a process is not the creator of the queue then on exit it
      will attempt to join the queue's background thread.  The process can call
      :meth:`cancel_join_thread` to make :meth:`join_thread` do nothing.

   .. method:: cancel_join_thread()

      Prevent :meth:`join_thread` from blocking.  In particular, this prevents
      the background thread from being joined automatically when the process
      exits -- see :meth:`join_thread`.

      A better name for this method might be
      ``allow_exit_without_flush()``.  It is likely to cause enqueued
      data to be lost, and you almost certainly will not need to use it.
      It is really only there if you need the current process to exit
      immediately without waiting to flush enqueued data to the
      underlying pipe, and you don't care about lost data.

   .. note::

      This class's functionality requires a functioning shared semaphore
      implementation on the host operating system. Without one, the
      functionality in this class will be disabled, and attempts to
      instantiate a :class:`Queue` will result in an :exc:`ImportError`. See
      :issue:`3770` for additional information.  The same holds true for any
      of the specialized queue types listed below.

.. class:: SimpleQueue()

   It is a simplified :class:`Queue` type, very close to a locked :class:`Pipe`.

   .. method:: close()

      Close the queue: release internal resources.

      A queue must not be used anymore after it is closed. For example,
      :meth:`get`, :meth:`put` and :meth:`empty` methods must no longer be
      called.

      .. versionadded:: 3.9

   .. method:: empty()

      Return ``True`` if the queue is empty, ``False`` otherwise.

   .. method:: get()

      Remove and return an item from the queue.

   .. method:: put(item)

      Put *item* into the queue.

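   A tiny illustrative sketch of passing a single result back to the parent
   process::

      from multiprocessing import Process, SimpleQueue

      def f(q):
          q.put('finished')

      if __name__ == '__main__':
          q = SimpleQueue()
          p = Process(target=f, args=(q,))
          p.start()
          print(q.get())           # prints "finished"
          p.join()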

.. class:: JoinableQueue([maxsize])

   :class:`JoinableQueue`, a :class:`Queue` subclass, is a queue which
   additionally has :meth:`task_done` and :meth:`join` methods.

   .. method:: task_done()

      Indicate that a formerly enqueued task is complete. Used by queue
      consumers.  For each :meth:`~Queue.get` used to fetch a task, a subsequent
      call to :meth:`task_done` tells the queue that the processing on the task
      is complete.

      If a :meth:`~queue.Queue.join` is currently blocking, it will resume when all
      items have been processed (meaning that a :meth:`task_done` call was
      received for every item that had been :meth:`~Queue.put` into the queue).

      Raises a :exc:`ValueError` if called more times than there were items
      placed in the queue.


   .. method:: join()

      Block until all items in the queue have been gotten and processed.

      The count of unfinished tasks goes up whenever an item is added to the
      queue.  The count goes down whenever a consumer calls
      :meth:`task_done` to indicate that the item was retrieved and all work on
      it is complete.  When the count of unfinished tasks drops to zero,
      :meth:`~queue.Queue.join` unblocks.

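   The following is a rough sketch of the intended pattern (the worker
   function is only illustrative)::

      from multiprocessing import JoinableQueue, Process

      def worker(q):
          while True:
              item = q.get()
              print('processed', item)
              q.task_done()

      if __name__ == '__main__':
          q = JoinableQueue()
          p = Process(target=worker, args=(q,), daemon=True)
          p.start()
          for item in range(3):
              q.put(item)
          q.join()    # blocks until task_done() has been called for every item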

Miscellaneous
~~~~~~~~~~~~~

.. function:: active_children()

   Return list of all live children of the current process.

   Calling this has the side effect of "joining" any processes which have
   already finished.

.. function:: cpu_count()

   Return the number of CPUs in the system.

   This number is not equivalent to the number of CPUs the current process can
   use.  The number of usable CPUs can be obtained with
   ``len(os.sched_getaffinity(0))``

   When the number of CPUs cannot be determined a :exc:`NotImplementedError`
   is raised.

   .. seealso::
      :func:`os.cpu_count`

.. function:: current_process()

   Return the :class:`Process` object corresponding to the current process.

   An analogue of :func:`threading.current_thread`.

.. function:: parent_process()

   Return the :class:`Process` object corresponding to the parent process of
   the :func:`current_process`. For the main process, ``parent_process`` will
   be ``None``.

   .. versionadded:: 3.8

.. function:: freeze_support()

   Add support for when a program which uses :mod:`multiprocessing` has been
   frozen to produce a Windows executable.  (Has been tested with **py2exe**,
   **PyInstaller** and **cx_Freeze**.)

   One needs to call this function straight after the ``if __name__ ==
   '__main__'`` line of the main module.  For example::

      from multiprocessing import Process, freeze_support

      def f():
          print('hello world!')

      if __name__ == '__main__':
          freeze_support()
          Process(target=f).start()

   If the ``freeze_support()`` line is omitted then trying to run the frozen
   executable will raise :exc:`RuntimeError`.

   Calling ``freeze_support()`` has no effect when invoked on any operating
   system other than Windows.  In addition, if the module is being run
   normally by the Python interpreter on Windows (the program has not been
   frozen), then ``freeze_support()`` has no effect.

.. function:: get_all_start_methods()

   Returns a list of the supported start methods, the first of which
   is the default.  The possible start methods are ``'fork'``,
   ``'spawn'`` and ``'forkserver'``.  On Windows only ``'spawn'`` is
   available.  On Unix ``'fork'`` and ``'spawn'`` are always
   supported, with ``'fork'`` being the default.

   .. versionadded:: 3.4

.. function:: get_context(method=None)

   Return a context object which has the same attributes as the
   :mod:`multiprocessing` module.

   If *method* is ``None`` then the default context is returned.
   Otherwise *method* should be ``'fork'``, ``'spawn'`` or
   ``'forkserver'``.  :exc:`ValueError` is raised if the specified
   start method is not available.

   .. versionadded:: 3.4

.. function:: get_start_method(allow_none=False)

   Return the name of the start method used for starting processes.

   If the start method has not been fixed and *allow_none* is false,
   then the start method is fixed to the default and the name is
   returned.  If the start method has not been fixed and *allow_none*
   is true then ``None`` is returned.

   The return value can be ``'fork'``, ``'spawn'``, ``'forkserver'``
   or ``None``.  ``'fork'`` is the default on Unix, while ``'spawn'`` is
   the default on Windows and macOS.

.. versionchanged:: 3.8

   On macOS, the *spawn* start method is now the default.  The *fork* start
   method should be considered unsafe as it can lead to crashes of the
   subprocess. See :issue:`33725`.

   .. versionadded:: 3.4

.. function:: set_executable(executable)

   Set the path of the Python interpreter to use when starting a child process.
   (By default :data:`sys.executable` is used).  Embedders will probably need to
   do something like ::

      set_executable(os.path.join(sys.exec_prefix, 'pythonw.exe'))

   before they can create child processes.

   .. versionchanged:: 3.4
      Now supported on Unix when the ``'spawn'`` start method is used.

.. function:: set_start_method(method)

   Set the method which should be used to start child processes.
   *method* can be ``'fork'``, ``'spawn'`` or ``'forkserver'``.

   Note that this should be called at most once, and it should be
   protected inside the ``if __name__ == '__main__'`` clause of the
   main module.

   .. versionadded:: 3.4

.. note::

   :mod:`multiprocessing` contains no analogues of
   :func:`threading.active_count`, :func:`threading.enumerate`,
   :func:`threading.settrace`, :func:`threading.setprofile`,
   :class:`threading.Timer`, or :class:`threading.local`.


Connection Objects
~~~~~~~~~~~~~~~~~~

.. currentmodule:: multiprocessing.connection

Connection objects allow the sending and receiving of picklable objects or
strings.  They can be thought of as message oriented connected sockets.

Connection objects are usually created using
:func:`Pipe <multiprocessing.Pipe>` -- see also
:ref:`multiprocessing-listeners-clients`.

.. class:: Connection

   .. method:: send(obj)

      Send an object to the other end of the connection which should be read
      using :meth:`recv`.

      The object must be picklable.  Very large pickles (approximately 32 MiB+,
      though it depends on the OS) may raise a :exc:`ValueError` exception.

   .. method:: recv()

      Return an object sent from the other end of the connection using
      :meth:`send`.  Blocks until there is something to receive.  Raises
      :exc:`EOFError` if there is nothing left to receive
      and the other end was closed.

   .. method:: fileno()

      Return the file descriptor or handle used by the connection.

   .. method:: close()

      Close the connection.

      This is called automatically when the connection is garbage collected.

   .. method:: poll([timeout])

      Return whether there is any data available to be read.

      If *timeout* is not specified then it will return immediately.  If
      *timeout* is a number then this specifies the maximum time in seconds to
      block.  If *timeout* is ``None`` then an infinite timeout is used.

      Note that multiple connection objects may be polled at once by
      using :func:`multiprocessing.connection.wait`.

   .. method:: send_bytes(buffer[, offset[, size]])

      Send byte data from a :term:`bytes-like object` as a complete message.

      If *offset* is given then data is read from that position in *buffer*.  If
      *size* is given then that many bytes will be read from buffer.  Very large
      buffers (approximately 32 MiB+, though it depends on the OS) may raise a
      :exc:`ValueError` exception

   .. method:: recv_bytes([maxlength])

      Return a complete message of byte data sent from the other end of the
      connection as a string.  Blocks until there is something to receive.
      Raises :exc:`EOFError` if there is nothing left
      to receive and the other end has closed.

      If *maxlength* is specified and the message is longer than *maxlength*
      then :exc:`OSError` is raised and the connection will no longer be
      readable.

      .. versionchanged:: 3.3
         This function used to raise :exc:`IOError`, which is now an
         alias of :exc:`OSError`.


   .. method:: recv_bytes_into(buffer[, offset])

      Read into *buffer* a complete message of byte data sent from the other end
      of the connection and return the number of bytes in the message.  Blocks
      until there is something to receive.  Raises
      :exc:`EOFError` if there is nothing left to receive and the other end was
      closed.

      *buffer* must be a writable :term:`bytes-like object`.  If
      *offset* is given then the message will be written into the buffer from
      that position.  Offset must be a non-negative integer less than the
      length of *buffer* (in bytes).

      If the buffer is too short then a :exc:`BufferTooShort` exception is
      raised and the complete message is available as ``e.args[0]`` where ``e``
      is the exception instance.

   .. versionchanged:: 3.3
      Connection objects themselves can now be transferred between processes
      using :meth:`Connection.send` and :meth:`Connection.recv`.

   .. versionadded:: 3.3
      Connection objects now support the context management protocol -- see
      :ref:`typecontextmanager`.  :meth:`~contextmanager.__enter__` returns the
      connection object, and :meth:`~contextmanager.__exit__` calls :meth:`close`.

For example:

.. doctest::

    >>> from multiprocessing import Pipe
    >>> a, b = Pipe()
    >>> a.send([1, 'hello', None])
    >>> b.recv()
    [1, 'hello', None]
    >>> b.send_bytes(b'thank you')
    >>> a.recv_bytes()
    b'thank you'
    >>> import array
    >>> arr1 = array.array('i', range(5))
    >>> arr2 = array.array('i', [0] * 10)
    >>> a.send_bytes(arr1)
    >>> count = b.recv_bytes_into(arr2)
    >>> assert count == len(arr1) * arr1.itemsize
    >>> arr2
    array('i', [0, 1, 2, 3, 4, 0, 0, 0, 0, 0])

.. _multiprocessing-recv-pickle-security:

.. warning::

    The :meth:`Connection.recv` method automatically unpickles the data it
    receives, which can be a security risk unless you can trust the process
    which sent the message.

    Therefore, unless the connection object was produced using :func:`Pipe` you
    should only use the :meth:`~Connection.recv` and :meth:`~Connection.send`
    methods after performing some sort of authentication.  See
    :ref:`multiprocessing-auth-keys`.

.. warning::

    If a process is killed while it is trying to read or write to a pipe then
    the data in the pipe is likely to become corrupted, because it may become
    impossible to be sure where the message boundaries lie.


Synchronization primitives
~~~~~~~~~~~~~~~~~~~~~~~~~~

.. currentmodule:: multiprocessing

Generally synchronization primitives are not as necessary in a multiprocess
program as they are in a multithreaded program.  See the documentation for
the :mod:`threading` module.

Note that one can also create synchronization primitives by using a manager
object -- see :ref:`multiprocessing-managers`.

.. class:: Barrier(parties[, action[, timeout]])

   A barrier object: a clone of :class:`threading.Barrier`.

   .. versionadded:: 3.3

.. class:: BoundedSemaphore([value])

   A bounded semaphore object: a close analog of
   :class:`threading.BoundedSemaphore`.

   A solitary difference from its close analog exists: its ``acquire`` method's
   first argument is named *block*, as is consistent with :meth:`Lock.acquire`.

   .. note::
      On macOS, this is indistinguishable from :class:`Semaphore` because
      ``sem_getvalue()`` is not implemented on that platform.

.. class:: Condition([lock])

   A condition variable: an alias for :class:`threading.Condition`.

   If *lock* is specified then it should be a :class:`Lock` or :class:`RLock`
   object from :mod:`multiprocessing`.

   .. versionchanged:: 3.3
      The :meth:`~threading.Condition.wait_for` method was added.

.. class:: Event()

   A clone of :class:`threading.Event`.


.. class:: Lock()

   A non-recursive lock object: a close analog of :class:`threading.Lock`.
   Once a process or thread has acquired a lock, subsequent attempts to
   acquire it from any process or thread will block until it is released;
   any process or thread may release it.  The concepts and behaviors of
   :class:`threading.Lock` as it applies to threads are replicated here in
   :class:`multiprocessing.Lock` as it applies to either processes or threads,
   except as noted.

   Note that :class:`Lock` is actually a factory function which returns an
   instance of ``multiprocessing.synchronize.Lock`` initialized with a
   default context.

   :class:`Lock` supports the :term:`context manager` protocol and thus may be
   used in :keyword:`with` statements.

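   For instance, the printing example from the introduction can be written
   more compactly by using the lock as a context manager (an equivalent
   sketch)::

      from multiprocessing import Process, Lock

      def f(lock, i):
          with lock:
              print('hello world', i)

      if __name__ == '__main__':
          lock = Lock()
          for num in range(10):
              Process(target=f, args=(lock, num)).start()
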
1286   .. method:: acquire(block=True, timeout=None)
1287
1288      Acquire a lock, blocking or non-blocking.
1289
1290      With the *block* argument set to ``True`` (the default), the method call
1291      will block until the lock is in an unlocked state, then set it to locked
1292      and return ``True``.  Note that the name of this first argument differs
1293      from that in :meth:`threading.Lock.acquire`.
1294
1295      With the *block* argument set to ``False``, the method call does not
1296      block.  If the lock is currently in a locked state, return ``False``;
1297      otherwise set the lock to a locked state and return ``True``.
1298
1299      When invoked with a positive, floating-point value for *timeout*, block
1300      for at most the number of seconds specified by *timeout* as long as
1301      the lock can not be acquired.  Invocations with a negative value for
1302      *timeout* are equivalent to a *timeout* of zero.  Invocations with a
1303      *timeout* value of ``None`` (the default) set the timeout period to
1304      infinite.  Note that the treatment of negative or ``None`` values for
1305      *timeout* differs from the implemented behavior in
1306      :meth:`threading.Lock.acquire`.  The *timeout* argument has no practical
1307      implications if the *block* argument is set to ``False`` and is thus
1308      ignored.  Returns ``True`` if the lock has been acquired or ``False`` if
1309      the timeout period has elapsed.
1310
1311
1312   .. method:: release()
1313
1314      Release a lock.  This can be called from any process or thread, not only
1315      the process or thread which originally acquired the lock.
1316
1317      Behavior is the same as in :meth:`threading.Lock.release` except that
1318      when invoked on an unlocked lock, a :exc:`ValueError` is raised.
1319
1320
1321.. class:: RLock()
1322
1323   A recursive lock object: a close analog of :class:`threading.RLock`.  A
1324   recursive lock must be released by the process or thread that acquired it.
1325   Once a process or thread has acquired a recursive lock, the same process
1326   or thread may acquire it again without blocking; that process or thread
1327   must release it once for each time it has been acquired.
1328
1329   Note that :class:`RLock` is actually a factory function which returns an
1330   instance of ``multiprocessing.synchronize.RLock`` initialized with a
1331   default context.
1332
1333   :class:`RLock` supports the :term:`context manager` protocol and thus may be
1334   used in :keyword:`with` statements.
1335
1336
1337   .. method:: acquire(block=True, timeout=None)
1338
1339      Acquire a lock, blocking or non-blocking.
1340
1341      When invoked with the *block* argument set to ``True``, block until the
1342      lock is in an unlocked state (not owned by any process or thread) unless
1343      the lock is already owned by the current process or thread.  The current
1344      process or thread then takes ownership of the lock (if it does not
1345      already have ownership) and the recursion level inside the lock increments
1346      by one, resulting in a return value of ``True``.  Note that there are
1347      several differences in this first argument's behavior compared to the
1348      implementation of :meth:`threading.RLock.acquire`, starting with the name
1349      of the argument itself.
1350
1351      When invoked with the *block* argument set to ``False``, do not block.
1352      If the lock has already been acquired (and thus is owned) by another
1353      process or thread, the current process or thread does not take ownership
1354      and the recursion level within the lock is not changed, resulting in
1355      a return value of ``False``.  If the lock is in an unlocked state, the
1356      current process or thread takes ownership and the recursion level is
1357      incremented, resulting in a return value of ``True``.
1358
1359      Use and behaviors of the *timeout* argument are the same as in
1360      :meth:`Lock.acquire`.  Note that some of these behaviors of *timeout*
1361      differ from the implemented behaviors in :meth:`threading.RLock.acquire`.
1362
1363
1364   .. method:: release()
1365
1366      Release a lock, decrementing the recursion level.  If after the
1367      decrement the recursion level is zero, reset the lock to unlocked (not
1368      owned by any process or thread) and if any other processes or threads
1369      are blocked waiting for the lock to become unlocked, allow exactly one
1370      of them to proceed.  If after the decrement the recursion level is still
1371      nonzero, the lock remains locked and owned by the calling process or
1372      thread.
1373
1374      Only call this method when the calling process or thread owns the lock.
1375      An :exc:`AssertionError` is raised if this method is called by a process
1376      or thread other than the owner or if the lock is in an unlocked (unowned)
1377      state.  Note that the type of exception raised in this situation
1378      differs from the implemented behavior in :meth:`threading.RLock.release`.
1379
1380
1381.. class:: Semaphore([value])
1382
1383   A semaphore object: a close analog of :class:`threading.Semaphore`.
1384
1385   A solitary difference from its close analog exists: its ``acquire`` method's
1386   first argument is named *block*, as is consistent with :meth:`Lock.acquire`.
1387
1388.. note::
1389
1390   On macOS, ``sem_timedwait`` is unsupported, so calling ``acquire()`` with
1391   a timeout will emulate that function's behavior using a sleeping loop.
1392
1393.. note::
1394
1395   If the SIGINT signal generated by :kbd:`Ctrl-C` arrives while the main thread is
1396   blocked by a call to :meth:`BoundedSemaphore.acquire`, :meth:`Lock.acquire`,
1397   :meth:`RLock.acquire`, :meth:`Semaphore.acquire`, :meth:`Condition.acquire`
1398   or :meth:`Condition.wait` then the call will be immediately interrupted and
1399   :exc:`KeyboardInterrupt` will be raised.
1400
1401   This differs from the behaviour of :mod:`threading` where SIGINT will be
1402   ignored while the equivalent blocking calls are in progress.
1403
1404.. note::
1405
1406   Some of this package's functionality requires a functioning shared semaphore
1407   implementation on the host operating system. Without one, the
1408   :mod:`multiprocessing.synchronize` module will be disabled, and attempts to
1409   import it will result in an :exc:`ImportError`. See
1410   :issue:`3770` for additional information.
1411
1412
1413Shared :mod:`ctypes` Objects
1414~~~~~~~~~~~~~~~~~~~~~~~~~~~~
1415
1416It is possible to create shared objects using shared memory which can be
1417inherited by child processes.
1418
1419.. function:: Value(typecode_or_type, *args, lock=True)
1420
1421   Return a :mod:`ctypes` object allocated from shared memory.  By default the
1422   return value is actually a synchronized wrapper for the object.  The object
1423   itself can be accessed via the *value* attribute of a :class:`Value`.
1424
1425   *typecode_or_type* determines the type of the returned object: it is either a
1426   ctypes type or a one character typecode of the kind used by the :mod:`array`
1427   module.  *\*args* is passed on to the constructor for the type.
1428
1429   If *lock* is ``True`` (the default) then a new recursive lock
1430   object is created to synchronize access to the value.  If *lock* is
1431   a :class:`Lock` or :class:`RLock` object then that will be used to
1432   synchronize access to the value.  If *lock* is ``False`` then
1433   access to the returned object will not be automatically protected
1434   by a lock, so it will not necessarily be "process-safe".
1435
1436   Operations like ``+=`` which involve a read and write are not
1437   atomic.  So if, for instance, you want to atomically increment a
1438   shared value it is insufficient to just do ::
1439
1440       counter.value += 1
1441
1442   Assuming the associated lock is recursive (which it is by default)
1443   you can instead do ::
1444
1445       with counter.get_lock():
1446           counter.value += 1
1447
1448   Note that *lock* is a keyword-only argument.
1449
1450.. function:: Array(typecode_or_type, size_or_initializer, *, lock=True)
1451
1452   Return a ctypes array allocated from shared memory.  By default the return
1453   value is actually a synchronized wrapper for the array.
1454
1455   *typecode_or_type* determines the type of the elements of the returned array:
1456   it is either a ctypes type or a one character typecode of the kind used by
1457   the :mod:`array` module.  If *size_or_initializer* is an integer, then it
1458   determines the length of the array, and the array will be initially zeroed.
1459   Otherwise, *size_or_initializer* is a sequence which is used to initialize
1460   the array and whose length determines the length of the array.
1461
1462   If *lock* is ``True`` (the default) then a new lock object is created to
1463   synchronize access to the value.  If *lock* is a :class:`Lock` or
1464   :class:`RLock` object then that will be used to synchronize access to the
1465   value.  If *lock* is ``False`` then access to the returned object will not be
1466   automatically protected by a lock, so it will not necessarily be
1467   "process-safe".
1468
   Note that *lock* is a keyword-only argument.
1470
1471   Note that an array of :data:`ctypes.c_char` has *value* and *raw*
1472   attributes which allow one to use it to store and retrieve strings.
1473
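   A small, hedged example of those attributes (the contents are arbitrary)::

      from multiprocessing import Array

      buf = Array('c', 12)            # twelve zero bytes, with a lock
      buf.value = b'hello world'      # must fit in the allocated space
      print(buf.value)                # b'hello world' (stops at the first NUL)
      print(buf.raw)                  # b'hello world\x00' (the whole buffer)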
1474
1475The :mod:`multiprocessing.sharedctypes` module
1476>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
1477
1478.. module:: multiprocessing.sharedctypes
1479   :synopsis: Allocate ctypes objects from shared memory.
1480
1481The :mod:`multiprocessing.sharedctypes` module provides functions for allocating
1482:mod:`ctypes` objects from shared memory which can be inherited by child
1483processes.
1484
1485.. note::
1486
1487   Although it is possible to store a pointer in shared memory remember that
1488   this will refer to a location in the address space of a specific process.
1489   However, the pointer is quite likely to be invalid in the context of a second
1490   process and trying to dereference the pointer from the second process may
1491   cause a crash.
1492
1493.. function:: RawArray(typecode_or_type, size_or_initializer)
1494
1495   Return a ctypes array allocated from shared memory.
1496
1497   *typecode_or_type* determines the type of the elements of the returned array:
1498   it is either a ctypes type or a one character typecode of the kind used by
1499   the :mod:`array` module.  If *size_or_initializer* is an integer then it
1500   determines the length of the array, and the array will be initially zeroed.
1501   Otherwise *size_or_initializer* is a sequence which is used to initialize the
1502   array and whose length determines the length of the array.
1503
1504   Note that setting and getting an element is potentially non-atomic -- use
1505   :func:`Array` instead to make sure that access is automatically synchronized
1506   using a lock.
1507
1508.. function:: RawValue(typecode_or_type, *args)
1509
1510   Return a ctypes object allocated from shared memory.
1511
1512   *typecode_or_type* determines the type of the returned object: it is either a
1513   ctypes type or a one character typecode of the kind used by the :mod:`array`
1514   module.  *\*args* is passed on to the constructor for the type.
1515
1516   Note that setting and getting the value is potentially non-atomic -- use
1517   :func:`Value` instead to make sure that access is automatically synchronized
1518   using a lock.
1519
1520   Note that an array of :data:`ctypes.c_char` has ``value`` and ``raw``
1521   attributes which allow one to use it to store and retrieve strings -- see
1522   documentation for :mod:`ctypes`.
1523
1524.. function:: Array(typecode_or_type, size_or_initializer, *, lock=True)
1525
1526   The same as :func:`RawArray` except that depending on the value of *lock* a
1527   process-safe synchronization wrapper may be returned instead of a raw ctypes
1528   array.
1529
1530   If *lock* is ``True`` (the default) then a new lock object is created to
1531   synchronize access to the value.  If *lock* is a
1532   :class:`~multiprocessing.Lock` or :class:`~multiprocessing.RLock` object
1533   then that will be used to synchronize access to the
1534   value.  If *lock* is ``False`` then access to the returned object will not be
1535   automatically protected by a lock, so it will not necessarily be
1536   "process-safe".
1537
1538   Note that *lock* is a keyword-only argument.
1539
1540.. function:: Value(typecode_or_type, *args, lock=True)
1541
1542   The same as :func:`RawValue` except that depending on the value of *lock* a
1543   process-safe synchronization wrapper may be returned instead of a raw ctypes
1544   object.
1545
1546   If *lock* is ``True`` (the default) then a new lock object is created to
1547   synchronize access to the value.  If *lock* is a :class:`~multiprocessing.Lock` or
1548   :class:`~multiprocessing.RLock` object then that will be used to synchronize access to the
1549   value.  If *lock* is ``False`` then access to the returned object will not be
1550   automatically protected by a lock, so it will not necessarily be
1551   "process-safe".
1552
1553   Note that *lock* is a keyword-only argument.
1554
1555.. function:: copy(obj)
1556
1557   Return a ctypes object allocated from shared memory which is a copy of the
1558   ctypes object *obj*.
1559
1560.. function:: synchronized(obj[, lock])
1561
1562   Return a process-safe wrapper object for a ctypes object which uses *lock* to
1563   synchronize access.  If *lock* is ``None`` (the default) then a
1564   :class:`multiprocessing.RLock` object is created automatically.
1565
1566   A synchronized wrapper will have two methods in addition to those of the
1567   object it wraps: :meth:`get_obj` returns the wrapped object and
1568   :meth:`get_lock` returns the lock object used for synchronization.
1569
1570   Note that accessing the ctypes object through the wrapper can be a lot slower
1571   than accessing the raw ctypes object.
1572
1573   .. versionchanged:: 3.5
1574      Synchronized objects support the :term:`context manager` protocol.
1575
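As a rough sketch of :func:`copy` and :func:`synchronized` together (the
variable names are illustrative)::

   from ctypes import c_double
   from multiprocessing.sharedctypes import RawValue, copy, synchronized

   raw = RawValue(c_double, 2.4)
   dup = copy(raw)               # an independent shared-memory copy of raw
   sync = synchronized(dup)      # wrapped with an automatically created RLock
   with sync:                    # context manager support (since 3.5)
       sync.value += 1.0
   print(sync.get_obj().value)   # 3.4
   print(sync.get_lock())        # the RLock used for synchronization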
1576
1577The table below compares the syntax for creating shared ctypes objects from
1578shared memory with the normal ctypes syntax.  (In the table ``MyStruct`` is some
1579subclass of :class:`ctypes.Structure`.)
1580
1581==================== ========================== ===========================
1582ctypes               sharedctypes using type    sharedctypes using typecode
1583==================== ========================== ===========================
1584c_double(2.4)        RawValue(c_double, 2.4)    RawValue('d', 2.4)
1585MyStruct(4, 6)       RawValue(MyStruct, 4, 6)
1586(c_short * 7)()      RawArray(c_short, 7)       RawArray('h', 7)
1587(c_int * 3)(9, 2, 8) RawArray(c_int, (9, 2, 8)) RawArray('i', (9, 2, 8))
1588==================== ========================== ===========================
1589
1590
1591Below is an example where a number of ctypes objects are modified by a child
1592process::
1593
1594   from multiprocessing import Process, Lock
1595   from multiprocessing.sharedctypes import Value, Array
1596   from ctypes import Structure, c_double
1597
1598   class Point(Structure):
1599       _fields_ = [('x', c_double), ('y', c_double)]
1600
1601   def modify(n, x, s, A):
1602       n.value **= 2
1603       x.value **= 2
1604       s.value = s.value.upper()
1605       for a in A:
1606           a.x **= 2
1607           a.y **= 2
1608
1609   if __name__ == '__main__':
1610       lock = Lock()
1611
1612       n = Value('i', 7)
1613       x = Value(c_double, 1.0/3.0, lock=False)
1614       s = Array('c', b'hello world', lock=lock)
1615       A = Array(Point, [(1.875,-6.25), (-5.75,2.0), (2.375,9.5)], lock=lock)
1616
1617       p = Process(target=modify, args=(n, x, s, A))
1618       p.start()
1619       p.join()
1620
1621       print(n.value)
1622       print(x.value)
1623       print(s.value)
1624       print([(a.x, a.y) for a in A])
1625
1626
1627.. highlight:: none
1628
1629The results printed are ::
1630
1631    49
1632    0.1111111111111111
1633    HELLO WORLD
1634    [(3.515625, 39.0625), (33.0625, 4.0), (5.640625, 90.25)]
1635
1636.. highlight:: python3
1637
1638
1639.. _multiprocessing-managers:
1640
1641Managers
1642~~~~~~~~
1643
1644Managers provide a way to create data which can be shared between different
1645processes, including sharing over a network between processes running on
1646different machines. A manager object controls a server process which manages
1647*shared objects*.  Other processes can access the shared objects by using
1648proxies.
1649
1650.. function:: multiprocessing.Manager()
1651
1652   Returns a started :class:`~multiprocessing.managers.SyncManager` object which
1653   can be used for sharing objects between processes.  The returned manager
1654   object corresponds to a spawned child process and has methods which will
1655   create shared objects and return corresponding proxies.
1656
1657.. module:: multiprocessing.managers
   :synopsis: Share data between processes with shared objects.
1659
Manager processes will be shut down as soon as they are garbage collected or
1661their parent process exits.  The manager classes are defined in the
1662:mod:`multiprocessing.managers` module:
1663
1664.. class:: BaseManager([address[, authkey]])
1665
1666   Create a BaseManager object.
1667
1668   Once created one should call :meth:`start` or ``get_server().serve_forever()`` to ensure
1669   that the manager object refers to a started manager process.
1670
1671   *address* is the address on which the manager process listens for new
1672   connections.  If *address* is ``None`` then an arbitrary one is chosen.
1673
1674   *authkey* is the authentication key which will be used to check the
1675   validity of incoming connections to the server process.  If
1676   *authkey* is ``None`` then ``current_process().authkey`` is used.
1677   Otherwise *authkey* is used and it must be a byte string.
1678
1679   .. method:: start([initializer[, initargs]])
1680
1681      Start a subprocess to start the manager.  If *initializer* is not ``None``
1682      then the subprocess will call ``initializer(*initargs)`` when it starts.
1683
1684   .. method:: get_server()
1685
1686      Returns a :class:`Server` object which represents the actual server under
1687      the control of the Manager. The :class:`Server` object supports the
1688      :meth:`serve_forever` method::
1689
1690      >>> from multiprocessing.managers import BaseManager
1691      >>> manager = BaseManager(address=('', 50000), authkey=b'abc')
1692      >>> server = manager.get_server()
1693      >>> server.serve_forever()
1694
1695      :class:`Server` additionally has an :attr:`address` attribute.
1696
1697   .. method:: connect()
1698
1699      Connect a local manager object to a remote manager process::
1700
1701      >>> from multiprocessing.managers import BaseManager
1702      >>> m = BaseManager(address=('127.0.0.1', 50000), authkey=b'abc')
1703      >>> m.connect()
1704
1705   .. method:: shutdown()
1706
1707      Stop the process used by the manager.  This is only available if
1708      :meth:`start` has been used to start the server process.
1709
1710      This can be called multiple times.
1711
1712   .. method:: register(typeid[, callable[, proxytype[, exposed[, method_to_typeid[, create_method]]]]])
1713
1714      A classmethod which can be used for registering a type or callable with
1715      the manager class.
1716
1717      *typeid* is a "type identifier" which is used to identify a particular
1718      type of shared object.  This must be a string.
1719
1720      *callable* is a callable used for creating objects for this type
1721      identifier.  If a manager instance will be connected to the
1722      server using the :meth:`connect` method, or if the
1723      *create_method* argument is ``False`` then this can be left as
1724      ``None``.
1725
1726      *proxytype* is a subclass of :class:`BaseProxy` which is used to create
1727      proxies for shared objects with this *typeid*.  If ``None`` then a proxy
1728      class is created automatically.
1729
1730      *exposed* is used to specify a sequence of method names which proxies for
1731      this typeid should be allowed to access using
1732      :meth:`BaseProxy._callmethod`.  (If *exposed* is ``None`` then
1733      :attr:`proxytype._exposed_` is used instead if it exists.)  In the case
1734      where no exposed list is specified, all "public methods" of the shared
1735      object will be accessible.  (Here a "public method" means any attribute
1736      which has a :meth:`~object.__call__` method and whose name does not begin
1737      with ``'_'``.)
1738
1739      *method_to_typeid* is a mapping used to specify the return type of those
1740      exposed methods which should return a proxy.  It maps method names to
1741      typeid strings.  (If *method_to_typeid* is ``None`` then
1742      :attr:`proxytype._method_to_typeid_` is used instead if it exists.)  If a
1743      method's name is not a key of this mapping or if the mapping is ``None``
1744      then the object returned by the method will be copied by value.
1745
1746      *create_method* determines whether a method should be created with name
1747      *typeid* which can be used to tell the server process to create a new
1748      shared object and return a proxy for it.  By default it is ``True``.
1749
1750   :class:`BaseManager` instances also have one read-only property:
1751
1752   .. attribute:: address
1753
1754      The address used by the manager.
1755
1756   .. versionchanged:: 3.3
1757      Manager objects support the context management protocol -- see
1758      :ref:`typecontextmanager`.  :meth:`~contextmanager.__enter__` starts the
1759      server process (if it has not already started) and then returns the
1760      manager object.  :meth:`~contextmanager.__exit__` calls :meth:`shutdown`.
1761
1762      In previous versions :meth:`~contextmanager.__enter__` did not start the
1763      manager's server process if it was not already started.
1764
1765.. class:: SyncManager
1766
1767   A subclass of :class:`BaseManager` which can be used for the synchronization
1768   of processes.  Objects of this type are returned by
1769   :func:`multiprocessing.Manager`.
1770
1771   Its methods create and return :ref:`multiprocessing-proxy_objects` for a
1772   number of commonly used data types to be synchronized across processes.
1773   This notably includes shared lists and dictionaries.
1774
1775   .. method:: Barrier(parties[, action[, timeout]])
1776
1777      Create a shared :class:`threading.Barrier` object and return a
1778      proxy for it.
1779
1780      .. versionadded:: 3.3
1781
1782   .. method:: BoundedSemaphore([value])
1783
1784      Create a shared :class:`threading.BoundedSemaphore` object and return a
1785      proxy for it.
1786
1787   .. method:: Condition([lock])
1788
1789      Create a shared :class:`threading.Condition` object and return a proxy for
1790      it.
1791
1792      If *lock* is supplied then it should be a proxy for a
1793      :class:`threading.Lock` or :class:`threading.RLock` object.
1794
1795      .. versionchanged:: 3.3
1796         The :meth:`~threading.Condition.wait_for` method was added.
1797
1798   .. method:: Event()
1799
1800      Create a shared :class:`threading.Event` object and return a proxy for it.
1801
1802   .. method:: Lock()
1803
1804      Create a shared :class:`threading.Lock` object and return a proxy for it.
1805
1806   .. method:: Namespace()
1807
1808      Create a shared :class:`Namespace` object and return a proxy for it.
1809
1810   .. method:: Queue([maxsize])
1811
1812      Create a shared :class:`queue.Queue` object and return a proxy for it.
1813
1814   .. method:: RLock()
1815
1816      Create a shared :class:`threading.RLock` object and return a proxy for it.
1817
1818   .. method:: Semaphore([value])
1819
1820      Create a shared :class:`threading.Semaphore` object and return a proxy for
1821      it.
1822
1823   .. method:: Array(typecode, sequence)
1824
1825      Create an array and return a proxy for it.
1826
1827   .. method:: Value(typecode, value)
1828
1829      Create an object with a writable ``value`` attribute and return a proxy
1830      for it.
1831
1832   .. method:: dict()
1833               dict(mapping)
1834               dict(sequence)
1835
1836      Create a shared :class:`dict` object and return a proxy for it.
1837
1838   .. method:: list()
1839               list(sequence)
1840
1841      Create a shared :class:`list` object and return a proxy for it.
1842
1843   .. versionchanged:: 3.6
1844      Shared objects are capable of being nested.  For example, a shared
1845      container object such as a shared list can contain other shared objects
1846      which will all be managed and synchronized by the :class:`SyncManager`.
1847
1848.. class:: Namespace
1849
1850   A type that can register with :class:`SyncManager`.
1851
1852   A namespace object has no public methods, but does have writable attributes.
1853   Its representation shows the values of its attributes.
1854
1855   However, when using a proxy for a namespace object, an attribute beginning
1856   with ``'_'`` will be an attribute of the proxy and not an attribute of the
1857   referent:
1858
1859   .. doctest::
1860
1861    >>> manager = multiprocessing.Manager()
1862    >>> Global = manager.Namespace()
1863    >>> Global.x = 10
1864    >>> Global.y = 'hello'
1865    >>> Global._z = 12.3    # this is an attribute of the proxy
1866    >>> print(Global)
1867    Namespace(x=10, y='hello')
1868
1869
1870Customized managers
1871>>>>>>>>>>>>>>>>>>>
1872
1873To create one's own manager, one creates a subclass of :class:`BaseManager` and
1874uses the :meth:`~BaseManager.register` classmethod to register new types or
1875callables with the manager class.  For example::
1876
1877   from multiprocessing.managers import BaseManager
1878
1879   class MathsClass:
1880       def add(self, x, y):
1881           return x + y
1882       def mul(self, x, y):
1883           return x * y
1884
1885   class MyManager(BaseManager):
1886       pass
1887
1888   MyManager.register('Maths', MathsClass)
1889
1890   if __name__ == '__main__':
1891       with MyManager() as manager:
1892           maths = manager.Maths()
1893           print(maths.add(4, 3))         # prints 7
1894           print(maths.mul(7, 8))         # prints 56
1895
1896
1897Using a remote manager
1898>>>>>>>>>>>>>>>>>>>>>>
1899
1900It is possible to run a manager server on one machine and have clients use it
1901from other machines (assuming that the firewalls involved allow it).
1902
1903Running the following commands creates a server for a single shared queue which
1904remote clients can access::
1905
1906   >>> from multiprocessing.managers import BaseManager
1907   >>> from queue import Queue
1908   >>> queue = Queue()
1909   >>> class QueueManager(BaseManager): pass
1910   >>> QueueManager.register('get_queue', callable=lambda:queue)
1911   >>> m = QueueManager(address=('', 50000), authkey=b'abracadabra')
1912   >>> s = m.get_server()
1913   >>> s.serve_forever()
1914
1915One client can access the server as follows::
1916
1917   >>> from multiprocessing.managers import BaseManager
1918   >>> class QueueManager(BaseManager): pass
1919   >>> QueueManager.register('get_queue')
1920   >>> m = QueueManager(address=('foo.bar.org', 50000), authkey=b'abracadabra')
1921   >>> m.connect()
1922   >>> queue = m.get_queue()
1923   >>> queue.put('hello')
1924
1925Another client can also use it::
1926
1927   >>> from multiprocessing.managers import BaseManager
1928   >>> class QueueManager(BaseManager): pass
1929   >>> QueueManager.register('get_queue')
1930   >>> m = QueueManager(address=('foo.bar.org', 50000), authkey=b'abracadabra')
1931   >>> m.connect()
1932   >>> queue = m.get_queue()
1933   >>> queue.get()
1934   'hello'
1935
1936Local processes can also access that queue, using the code from above on the
1937client to access it remotely::
1938
1939    >>> from multiprocessing import Process, Queue
1940    >>> from multiprocessing.managers import BaseManager
1941    >>> class Worker(Process):
1942    ...     def __init__(self, q):
1943    ...         self.q = q
1944    ...         super().__init__()
1945    ...     def run(self):
1946    ...         self.q.put('local hello')
1947    ...
1948    >>> queue = Queue()
1949    >>> w = Worker(queue)
1950    >>> w.start()
1951    >>> class QueueManager(BaseManager): pass
1952    ...
1953    >>> QueueManager.register('get_queue', callable=lambda: queue)
1954    >>> m = QueueManager(address=('', 50000), authkey=b'abracadabra')
1955    >>> s = m.get_server()
1956    >>> s.serve_forever()
1957
1958.. _multiprocessing-proxy_objects:
1959
1960Proxy Objects
1961~~~~~~~~~~~~~
1962
1963A proxy is an object which *refers* to a shared object which lives (presumably)
1964in a different process.  The shared object is said to be the *referent* of the
1965proxy.  Multiple proxy objects may have the same referent.
1966
1967A proxy object has methods which invoke corresponding methods of its referent
1968(although not every method of the referent will necessarily be available through
1969the proxy).  In this way, a proxy can be used just like its referent can:
1970
1971.. doctest::
1972
1973   >>> from multiprocessing import Manager
1974   >>> manager = Manager()
1975   >>> l = manager.list([i*i for i in range(10)])
1976   >>> print(l)
1977   [0, 1, 4, 9, 16, 25, 36, 49, 64, 81]
1978   >>> print(repr(l))
1979   <ListProxy object, typeid 'list' at 0x...>
1980   >>> l[4]
1981   16
1982   >>> l[2:5]
1983   [4, 9, 16]
1984
1985Notice that applying :func:`str` to a proxy will return the representation of
1986the referent, whereas applying :func:`repr` will return the representation of
1987the proxy.
1988
1989An important feature of proxy objects is that they are picklable so they can be
1990passed between processes.  As such, a referent can contain
1991:ref:`multiprocessing-proxy_objects`.  This permits nesting of these managed
1992lists, dicts, and other :ref:`multiprocessing-proxy_objects`:
1993
1994.. doctest::
1995
1996   >>> a = manager.list()
1997   >>> b = manager.list()
1998   >>> a.append(b)         # referent of a now contains referent of b
1999   >>> print(a, b)
2000   [<ListProxy object, typeid 'list' at ...>] []
2001   >>> b.append('hello')
2002   >>> print(a[0], b)
2003   ['hello'] ['hello']
2004
2005Similarly, dict and list proxies may be nested inside one another::
2006
2007   >>> l_outer = manager.list([ manager.dict() for i in range(2) ])
2008   >>> d_first_inner = l_outer[0]
2009   >>> d_first_inner['a'] = 1
2010   >>> d_first_inner['b'] = 2
2011   >>> l_outer[1]['c'] = 3
2012   >>> l_outer[1]['z'] = 26
2013   >>> print(l_outer[0])
2014   {'a': 1, 'b': 2}
2015   >>> print(l_outer[1])
2016   {'c': 3, 'z': 26}
2017
2018If standard (non-proxy) :class:`list` or :class:`dict` objects are contained
2019in a referent, modifications to those mutable values will not be propagated
2020through the manager because the proxy has no way of knowing when the values
2021contained within are modified.  However, storing a value in a container proxy
2022(which triggers a ``__setitem__`` on the proxy object) does propagate through
2023the manager and so to effectively modify such an item, one could re-assign the
2024modified value to the container proxy::
2025
2026   # create a list proxy and append a mutable object (a dictionary)
2027   lproxy = manager.list()
2028   lproxy.append({})
2029   # now mutate the dictionary
2030   d = lproxy[0]
2031   d['a'] = 1
2032   d['b'] = 2
2033   # at this point, the changes to d are not yet synced, but by
2034   # updating the dictionary, the proxy is notified of the change
2035   lproxy[0] = d
2036
2037This approach is perhaps less convenient than employing nested
2038:ref:`multiprocessing-proxy_objects` for most use cases but also
2039demonstrates a level of control over the synchronization.
2040
2041.. note::
2042
2043   The proxy types in :mod:`multiprocessing` do nothing to support comparisons
2044   by value.  So, for instance, we have:
2045
2046   .. doctest::
2047
2048       >>> manager.list([1,2,3]) == [1,2,3]
2049       False
2050
2051   One should just use a copy of the referent instead when making comparisons.
2052
2053.. class:: BaseProxy
2054
2055   Proxy objects are instances of subclasses of :class:`BaseProxy`.
2056
2057   .. method:: _callmethod(methodname[, args[, kwds]])
2058
2059      Call and return the result of a method of the proxy's referent.
2060
2061      If ``proxy`` is a proxy whose referent is ``obj`` then the expression ::
2062
2063         proxy._callmethod(methodname, args, kwds)
2064
2065      will evaluate the expression ::
2066
2067         getattr(obj, methodname)(*args, **kwds)
2068
2069      in the manager's process.
2070
2071      The returned value will be a copy of the result of the call or a proxy to
2072      a new shared object -- see documentation for the *method_to_typeid*
2073      argument of :meth:`BaseManager.register`.
2074
      If an exception is raised by the call, then it is re-raised by
2076      :meth:`_callmethod`.  If some other exception is raised in the manager's
2077      process then this is converted into a :exc:`RemoteError` exception and is
2078      raised by :meth:`_callmethod`.
2079
2080      Note in particular that an exception will be raised if *methodname* has
2081      not been *exposed*.
2082
2083      An example of the usage of :meth:`_callmethod`:
2084
2085      .. doctest::
2086
2087         >>> l = manager.list(range(10))
2088         >>> l._callmethod('__len__')
2089         10
2090         >>> l._callmethod('__getitem__', (slice(2, 7),)) # equivalent to l[2:7]
2091         [2, 3, 4, 5, 6]
2092         >>> l._callmethod('__getitem__', (20,))          # equivalent to l[20]
2093         Traceback (most recent call last):
2094         ...
2095         IndexError: list index out of range
2096
2097   .. method:: _getvalue()
2098
2099      Return a copy of the referent.
2100
2101      If the referent is unpicklable then this will raise an exception.
2102
2103   .. method:: __repr__
2104
2105      Return a representation of the proxy object.
2106
2107   .. method:: __str__
2108
2109      Return the representation of the referent.
2110
2111
2112Cleanup
2113>>>>>>>
2114
2115A proxy object uses a weakref callback so that when it gets garbage collected it
2116deregisters itself from the manager which owns its referent.
2117
2118A shared object gets deleted from the manager process when there are no longer
2119any proxies referring to it.
2120
2121
2122Process Pools
2123~~~~~~~~~~~~~
2124
2125.. module:: multiprocessing.pool
2126   :synopsis: Create pools of processes.
2127
2128One can create a pool of processes which will carry out tasks submitted to it
2129with the :class:`Pool` class.
2130
.. class:: Pool([processes[, initializer[, initargs[, maxtasksperchild[, context]]]]])
2132
2133   A process pool object which controls a pool of worker processes to which jobs
2134   can be submitted.  It supports asynchronous results with timeouts and
2135   callbacks and has a parallel map implementation.
2136
2137   *processes* is the number of worker processes to use.  If *processes* is
2138   ``None`` then the number returned by :func:`os.cpu_count` is used.
2139
2140   If *initializer* is not ``None`` then each worker process will call
2141   ``initializer(*initargs)`` when it starts.
2142
2143   *maxtasksperchild* is the number of tasks a worker process can complete
2144   before it will exit and be replaced with a fresh worker process, to enable
2145   unused resources to be freed. The default *maxtasksperchild* is ``None``, which
2146   means worker processes will live as long as the pool.
2147
2148   *context* can be used to specify the context used for starting
2149   the worker processes.  Usually a pool is created using the
2150   function :func:`multiprocessing.Pool` or the :meth:`Pool` method
2151   of a context object.  In both cases *context* is set
2152   appropriately.
2153
2154   Note that the methods of the pool object should only be called by
2155   the process which created the pool.
2156
2157   .. warning::
      :class:`multiprocessing.pool.Pool` objects have internal resources that need to be
2159      properly managed (like any other resource) by using the pool as a context manager
2160      or by calling :meth:`close` and :meth:`terminate` manually. Failure to do this
2161      can lead to the process hanging on finalization.
2162
2163      Note that it is **not correct** to rely on the garbage collector to destroy the pool
2164      as CPython does not assure that the finalizer of the pool will be called
2165      (see :meth:`object.__del__` for more information).
2166
2167   .. versionadded:: 3.2
2168      *maxtasksperchild*
2169
2170   .. versionadded:: 3.4
2171      *context*
2172
2173   .. note::
2174
2175      Worker processes within a :class:`Pool` typically live for the complete
2176      duration of the Pool's work queue. A frequent pattern found in other
2177      systems (such as Apache, mod_wsgi, etc) to free resources held by
2178      workers is to allow a worker within a pool to complete only a set
      amount of work before exiting, being cleaned up and a new
2180      process spawned to replace the old one. The *maxtasksperchild*
2181      argument to the :class:`Pool` exposes this ability to the end user.
2182
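   A hedged sketch showing *initializer*, *initargs* and *maxtasksperchild*
   together (the worker set-up is illustrative only)::

      from multiprocessing import Pool

      def init_worker(prefix):
          global _prefix           # simple per-worker state, set once per process
          _prefix = prefix

      def label(n):
          return _prefix + str(n)

      if __name__ == '__main__':
          with Pool(processes=2, initializer=init_worker,
                    initargs=('item-',), maxtasksperchild=10) as pool:
              print(pool.map(label, range(3)))   # ['item-0', 'item-1', 'item-2']
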
2183   .. method:: apply(func[, args[, kwds]])
2184
2185      Call *func* with arguments *args* and keyword arguments *kwds*.  It blocks
2186      until the result is ready. Given this blocks, :meth:`apply_async` is
2187      better suited for performing work in parallel. Additionally, *func*
2188      is only executed in one of the workers of the pool.
2189
2190   .. method:: apply_async(func[, args[, kwds[, callback[, error_callback]]]])
2191
2192      A variant of the :meth:`apply` method which returns a
2193      :class:`~multiprocessing.pool.AsyncResult` object.
2194
2195      If *callback* is specified then it should be a callable which accepts a
2196      single argument.  When the result becomes ready *callback* is applied to
2197      it, that is unless the call failed, in which case the *error_callback*
2198      is applied instead.
2199
2200      If *error_callback* is specified then it should be a callable which
2201      accepts a single argument.  If the target function fails, then
2202      the *error_callback* is called with the exception instance.
2203
2204      Callbacks should complete immediately since otherwise the thread which
2205      handles the results will get blocked.
2206
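      For example, a brief sketch of both callbacks (the helper functions are
      purely illustrative)::

         from multiprocessing import Pool

         def square(x):
             return x * x

         def on_result(value):
             print('result:', value)

         def on_error(exc):
             print('failed:', exc)

         if __name__ == '__main__':
             with Pool(2) as pool:
                 res = pool.apply_async(square, (3,), callback=on_result,
                                        error_callback=on_error)
                 res.wait()        # the result thread prints 'result: 9'
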
2207   .. method:: map(func, iterable[, chunksize])
2208
2209      A parallel equivalent of the :func:`map` built-in function (it supports only
2210      one *iterable* argument though, for multiple iterables see :meth:`starmap`).
2211      It blocks until the result is ready.
2212
2213      This method chops the iterable into a number of chunks which it submits to
2214      the process pool as separate tasks.  The (approximate) size of these
2215      chunks can be specified by setting *chunksize* to a positive integer.
2216
2217      Note that it may cause high memory usage for very long iterables. Consider
      using :meth:`imap` or :meth:`imap_unordered` with an explicit *chunksize*
2219      option for better efficiency.
2220
2221   .. method:: map_async(func, iterable[, chunksize[, callback[, error_callback]]])
2222
2223      A variant of the :meth:`.map` method which returns a
2224      :class:`~multiprocessing.pool.AsyncResult` object.
2225
2226      If *callback* is specified then it should be a callable which accepts a
2227      single argument.  When the result becomes ready *callback* is applied to
2228      it, that is unless the call failed, in which case the *error_callback*
2229      is applied instead.
2230
2231      If *error_callback* is specified then it should be a callable which
2232      accepts a single argument.  If the target function fails, then
2233      the *error_callback* is called with the exception instance.
2234
2235      Callbacks should complete immediately since otherwise the thread which
2236      handles the results will get blocked.
2237
2238   .. method:: imap(func, iterable[, chunksize])
2239
2240      A lazier version of :meth:`.map`.
2241
2242      The *chunksize* argument is the same as the one used by the :meth:`.map`
2243      method.  For very long iterables using a large value for *chunksize* can
2244      make the job complete **much** faster than using the default value of
2245      ``1``.
2246
2247      Also if *chunksize* is ``1`` then the :meth:`!next` method of the iterator
2248      returned by the :meth:`imap` method has an optional *timeout* parameter:
2249      ``next(timeout)`` will raise :exc:`multiprocessing.TimeoutError` if the
2250      result cannot be returned within *timeout* seconds.
2251
2252   .. method:: imap_unordered(func, iterable[, chunksize])
2253
2254      The same as :meth:`imap` except that the ordering of the results from the
2255      returned iterator should be considered arbitrary.  (Only when there is
2256      only one worker process is the order guaranteed to be "correct".)
2257
2258   .. method:: starmap(func, iterable[, chunksize])
2259
2260      Like :meth:`~multiprocessing.pool.Pool.map` except that the
2261      elements of the *iterable* are expected to be iterables that are
2262      unpacked as arguments.
2263
2264      Hence an *iterable* of ``[(1,2), (3, 4)]`` results in ``[func(1,2),
2265      func(3,4)]``.
2266
2267      .. versionadded:: 3.3
2268
2269   .. method:: starmap_async(func, iterable[, chunksize[, callback[, error_callback]]])
2270
2271      A combination of :meth:`starmap` and :meth:`map_async` that iterates over
2272      *iterable* of iterables and calls *func* with the iterables unpacked.
2273      Returns a result object.
2274
2275      .. versionadded:: 3.3
2276
2277   .. method:: close()
2278
2279      Prevents any more tasks from being submitted to the pool.  Once all the
2280      tasks have been completed the worker processes will exit.
2281
2282   .. method:: terminate()
2283
2284      Stops the worker processes immediately without completing outstanding
2285      work.  When the pool object is garbage collected :meth:`terminate` will be
2286      called immediately.
2287
2288   .. method:: join()
2289
2290      Wait for the worker processes to exit.  One must call :meth:`close` or
2291      :meth:`terminate` before using :meth:`join`.
2292
2293   .. versionadded:: 3.3
2294      Pool objects now support the context management protocol -- see
2295      :ref:`typecontextmanager`.  :meth:`~contextmanager.__enter__` returns the
2296      pool object, and :meth:`~contextmanager.__exit__` calls :meth:`terminate`.
2297
2298
2299.. class:: AsyncResult
2300
2301   The class of the result returned by :meth:`Pool.apply_async` and
2302   :meth:`Pool.map_async`.
2303
2304   .. method:: get([timeout])
2305
2306      Return the result when it arrives.  If *timeout* is not ``None`` and the
2307      result does not arrive within *timeout* seconds then
2308      :exc:`multiprocessing.TimeoutError` is raised.  If the remote call raised
2309      an exception then that exception will be reraised by :meth:`get`.
2310
2311   .. method:: wait([timeout])
2312
2313      Wait until the result is available or until *timeout* seconds pass.
2314
2315   .. method:: ready()
2316
2317      Return whether the call has completed.
2318
2319   .. method:: successful()
2320
2321      Return whether the call completed without raising an exception.  Will
2322      raise :exc:`ValueError` if the result is not ready.
2323
2324      .. versionchanged:: 3.7
2325         If the result is not ready, :exc:`ValueError` is raised instead of
2326         :exc:`AssertionError`.
2327
2328The following example demonstrates the use of a pool::
2329
2330   from multiprocessing import Pool
2331   import time
2332
2333   def f(x):
2334       return x*x
2335
2336   if __name__ == '__main__':
2337       with Pool(processes=4) as pool:         # start 4 worker processes
2338           result = pool.apply_async(f, (10,)) # evaluate "f(10)" asynchronously in a single process
2339           print(result.get(timeout=1))        # prints "100" unless your computer is *very* slow
2340
2341           print(pool.map(f, range(10)))       # prints "[0, 1, 4,..., 81]"
2342
2343           it = pool.imap(f, range(10))
2344           print(next(it))                     # prints "0"
2345           print(next(it))                     # prints "1"
2346           print(it.next(timeout=1))           # prints "4" unless your computer is *very* slow
2347
2348           result = pool.apply_async(time.sleep, (10,))
2349           print(result.get(timeout=1))        # raises multiprocessing.TimeoutError
2350
2351
2352.. _multiprocessing-listeners-clients:
2353
2354Listeners and Clients
2355~~~~~~~~~~~~~~~~~~~~~
2356
2357.. module:: multiprocessing.connection
2358   :synopsis: API for dealing with sockets.
2359
2360Usually message passing between processes is done using queues or by using
2361:class:`~Connection` objects returned by
2362:func:`~multiprocessing.Pipe`.
2363
2364However, the :mod:`multiprocessing.connection` module allows some extra
2365flexibility.  It basically gives a high level message oriented API for dealing
2366with sockets or Windows named pipes.  It also has support for *digest
2367authentication* using the :mod:`hmac` module, and for polling
2368multiple connections at the same time.
2369
2370
2371.. function:: deliver_challenge(connection, authkey)
2372
2373   Send a randomly generated message to the other end of the connection and wait
2374   for a reply.
2375
2376   If the reply matches the digest of the message using *authkey* as the key
2377   then a welcome message is sent to the other end of the connection.  Otherwise
2378   :exc:`~multiprocessing.AuthenticationError` is raised.
2379
2380.. function:: answer_challenge(connection, authkey)
2381
2382   Receive a message, calculate the digest of the message using *authkey* as the
2383   key, and then send the digest back.
2384
2385   If a welcome message is not received, then
2386   :exc:`~multiprocessing.AuthenticationError` is raised.
2387
2388.. function:: Client(address[, family[, authkey]])
2389
2390   Attempt to set up a connection to the listener which is using address
2391   *address*, returning a :class:`~Connection`.
2392
   The type of the connection is determined by the *family* argument, but this can
2394   generally be omitted since it can usually be inferred from the format of
2395   *address*. (See :ref:`multiprocessing-address-formats`)
2396
2397   If *authkey* is given and not None, it should be a byte string and will be
2398   used as the secret key for an HMAC-based authentication challenge. No
2399   authentication is done if *authkey* is None.
2400   :exc:`~multiprocessing.AuthenticationError` is raised if authentication fails.
2401   See :ref:`multiprocessing-auth-keys`.
2402
2403.. class:: Listener([address[, family[, backlog[, authkey]]]])
2404
2405   A wrapper for a bound socket or Windows named pipe which is 'listening' for
2406   connections.
2407
2408   *address* is the address to be used by the bound socket or named pipe of the
2409   listener object.
2410
2411   .. note::
2412
2413      If an address of '0.0.0.0' is used, the address will not be a connectable
      end point on Windows. If you require a connectable end point,
2415      you should use '127.0.0.1'.
2416
2417   *family* is the type of socket (or named pipe) to use.  This can be one of
2418   the strings ``'AF_INET'`` (for a TCP socket), ``'AF_UNIX'`` (for a Unix
2419   domain socket) or ``'AF_PIPE'`` (for a Windows named pipe).  Of these only
2420   the first is guaranteed to be available.  If *family* is ``None`` then the
2421   family is inferred from the format of *address*.  If *address* is also
2422   ``None`` then a default is chosen.  This default is the family which is
2423   assumed to be the fastest available.  See
2424   :ref:`multiprocessing-address-formats`.  Note that if *family* is
2425   ``'AF_UNIX'`` and address is ``None`` then the socket will be created in a
2426   private temporary directory created using :func:`tempfile.mkstemp`.
2427
2428   If the listener object uses a socket then *backlog* (1 by default) is passed
2429   to the :meth:`~socket.socket.listen` method of the socket once it has been
2430   bound.
2431
2432   If *authkey* is given and not None, it should be a byte string and will be
2433   used as the secret key for an HMAC-based authentication challenge. No
2434   authentication is done if *authkey* is None.
2435   :exc:`~multiprocessing.AuthenticationError` is raised if authentication fails.
2436   See :ref:`multiprocessing-auth-keys`.
2437
2438   .. method:: accept()
2439
2440      Accept a connection on the bound socket or named pipe of the listener
2441      object and return a :class:`~Connection` object.
2442      If authentication is attempted and fails, then
2443      :exc:`~multiprocessing.AuthenticationError` is raised.
2444
2445   .. method:: close()
2446
2447      Close the bound socket or named pipe of the listener object.  This is
2448      called automatically when the listener is garbage collected.  However it
2449      is advisable to call it explicitly.
2450
2451   Listener objects have the following read-only properties:
2452
2453   .. attribute:: address
2454
2455      The address which is being used by the Listener object.
2456
2457   .. attribute:: last_accepted
2458
2459      The address from which the last accepted connection came.  If this is
2460      unavailable then it is ``None``.
2461
2462   .. versionadded:: 3.3
2463      Listener objects now support the context management protocol -- see
2464      :ref:`typecontextmanager`.  :meth:`~contextmanager.__enter__` returns the
2465      listener object, and :meth:`~contextmanager.__exit__` calls :meth:`close`.
2466
2467.. function:: wait(object_list, timeout=None)
2468
2469   Wait till an object in *object_list* is ready.  Returns the list of
2470   those objects in *object_list* which are ready.  If *timeout* is a
2471   float then the call blocks for at most that many seconds.  If
2472   *timeout* is ``None`` then it will block for an unlimited period.
2473   A negative timeout is equivalent to a zero timeout.
2474
2475   For both Unix and Windows, an object can appear in *object_list* if
2476   it is
2477
2478   * a readable :class:`~multiprocessing.connection.Connection` object;
2479   * a connected and readable :class:`socket.socket` object; or
2480   * the :attr:`~multiprocessing.Process.sentinel` attribute of a
2481     :class:`~multiprocessing.Process` object.
2482
2483   A connection or socket object is ready when there is data available
2484   to be read from it, or the other end has been closed.
2485
   **Unix**: ``wait(object_list, timeout)`` is almost equivalent to
   ``select.select(object_list, [], [], timeout)``.  The difference is
2488   that, if :func:`select.select` is interrupted by a signal, it can
2489   raise :exc:`OSError` with an error number of ``EINTR``, whereas
2490   :func:`wait` will not.
2491
2492   **Windows**: An item in *object_list* must either be an integer
2493   handle which is waitable (according to the definition used by the
2494   documentation of the Win32 function ``WaitForMultipleObjects()``)
2495   or it can be an object with a :meth:`fileno` method which returns a
2496   socket handle or pipe handle.  (Note that pipe handles and socket
2497   handles are **not** waitable handles.)
2498
2499   .. versionadded:: 3.3
2500
2501
2502**Examples**
2503
2504The following server code creates a listener which uses ``'secret password'`` as
2505an authentication key.  It then waits for a connection and sends some data to
2506the client::
2507
2508   from multiprocessing.connection import Listener
2509   from array import array
2510
2511   address = ('localhost', 6000)     # family is deduced to be 'AF_INET'
2512
2513   with Listener(address, authkey=b'secret password') as listener:
2514       with listener.accept() as conn:
2515           print('connection accepted from', listener.last_accepted)
2516
2517           conn.send([2.25, None, 'junk', float])
2518
2519           conn.send_bytes(b'hello')
2520
2521           conn.send_bytes(array('i', [42, 1729]))
2522
2523The following code connects to the server and receives some data from the
2524server::
2525
2526   from multiprocessing.connection import Client
2527   from array import array
2528
2529   address = ('localhost', 6000)
2530
2531   with Client(address, authkey=b'secret password') as conn:
2532       print(conn.recv())                  # => [2.25, None, 'junk', float]
2533
2534       print(conn.recv_bytes())            # => 'hello'
2535
2536       arr = array('i', [0, 0, 0, 0, 0])
2537       print(conn.recv_bytes_into(arr))    # => 8
2538       print(arr)                          # => array('i', [42, 1729, 0, 0, 0])
2539
2540The following code uses :func:`~multiprocessing.connection.wait` to
2541wait for messages from multiple processes at once::
2542
2543   import time, random
2544   from multiprocessing import Process, Pipe, current_process
2545   from multiprocessing.connection import wait
2546
2547   def foo(w):
2548       for i in range(10):
2549           w.send((i, current_process().name))
2550       w.close()
2551
2552   if __name__ == '__main__':
2553       readers = []
2554
2555       for i in range(4):
2556           r, w = Pipe(duplex=False)
2557           readers.append(r)
2558           p = Process(target=foo, args=(w,))
2559           p.start()
2560           # We close the writable end of the pipe now to be sure that
2561           # p is the only process which owns a handle for it.  This
2562           # ensures that when p closes its handle for the writable end,
2563           # wait() will promptly report the readable end as being ready.
2564           w.close()
2565
2566       while readers:
2567           for r in wait(readers):
2568               try:
2569                   msg = r.recv()
2570               except EOFError:
2571                   readers.remove(r)
2572               else:
2573                   print(msg)
2574
2575
2576.. _multiprocessing-address-formats:
2577
2578Address Formats
2579>>>>>>>>>>>>>>>
2580
2581* An ``'AF_INET'`` address is a tuple of the form ``(hostname, port)`` where
2582  *hostname* is a string and *port* is an integer.
2583
2584* An ``'AF_UNIX'`` address is a string representing a filename on the
2585  filesystem.
2586
2587* An ``'AF_PIPE'`` address is a string of the form
2588  :samp:`r'\\\\.\\pipe\\{PipeName}'`.  To use :func:`Client` to connect to a named
2589  pipe on a remote computer called *ServerName* one should use an address of the
2590  form :samp:`r'\\\\{ServerName}\\pipe\\{PipeName}'` instead.
2591
2592Note that any string beginning with two backslashes is assumed by default to be
2593an ``'AF_PIPE'`` address rather than an ``'AF_UNIX'`` address.
2594
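For instance, the three families look like this (all values are hypothetical)::

   ('foo.example.org', 50000)      # an 'AF_INET' address
   '/tmp/example-listener'         # an 'AF_UNIX' address
   r'\\.\pipe\ExamplePipe'         # an 'AF_PIPE' address on the local machine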
2595
2596.. _multiprocessing-auth-keys:
2597
2598Authentication keys
2599~~~~~~~~~~~~~~~~~~~
2600
2601When one uses :meth:`Connection.recv <Connection.recv>`, the
2602data received is automatically
2603unpickled. Unfortunately unpickling data from an untrusted source is a security
2604risk. Therefore :class:`Listener` and :func:`Client` use the :mod:`hmac` module
2605to provide digest authentication.
2606
2607An authentication key is a byte string which can be thought of as a
2608password: once a connection is established both ends will demand proof
2609that the other knows the authentication key.  (Demonstrating that both
2610ends are using the same key does **not** involve sending the key over
2611the connection.)
2612
2613If authentication is requested but no authentication key is specified then the
2614return value of ``current_process().authkey`` is used (see
2615:class:`~multiprocessing.Process`).  This value will be automatically inherited by
2616any :class:`~multiprocessing.Process` object that the current process creates.
2617This means that (by default) all processes of a multi-process program will share
2618a single authentication key which can be used when setting up connections
2619between themselves.
2620
2621Suitable authentication keys can also be generated by using :func:`os.urandom`.
2622
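A hedged sketch of generating a fresh key and supplying it to a listener (the
address is arbitrary; a real client must be given the same key out of band)::

   import os
   from multiprocessing.connection import Listener

   key = os.urandom(32)                        # a random 32-byte authentication key
   with Listener(('localhost', 6000), authkey=key) as listener:
       with listener.accept() as conn:         # connecting clients must present key
           print(conn.recv())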
2623
2624Logging
2625~~~~~~~
2626
2627Some support for logging is available.  Note, however, that the :mod:`logging`
2628package does not use process shared locks so it is possible (depending on the
2629handler type) for messages from different processes to get mixed up.
2630
2631.. currentmodule:: multiprocessing
2632.. function:: get_logger()
2633
2634   Returns the logger used by :mod:`multiprocessing`.  If necessary, a new one
2635   will be created.
2636
2637   When first created the logger has level :data:`logging.NOTSET` and no
2638   default handler. Messages sent to this logger will not by default propagate
2639   to the root logger.
2640
2641   Note that on Windows child processes will only inherit the level of the
2642   parent process's logger -- any other customization of the logger will not be
2643   inherited.
2644
2645.. currentmodule:: multiprocessing
2646.. function:: log_to_stderr(level=None)
2647
   This function performs a call to :func:`get_logger` but in addition to
   returning the logger created by :func:`get_logger`, it adds a handler which
   sends output to :data:`sys.stderr` using the format
   ``'[%(levelname)s/%(processName)s] %(message)s'``.
   You can modify the level of the logger by passing a *level* argument.
2653
2654Below is an example session with logging turned on::
2655
2656    >>> import multiprocessing, logging
2657    >>> logger = multiprocessing.log_to_stderr()
2658    >>> logger.setLevel(logging.INFO)
2659    >>> logger.warning('doomed')
2660    [WARNING/MainProcess] doomed
2661    >>> m = multiprocessing.Manager()
2662    [INFO/SyncManager-...] child process calling self.run()
2663    [INFO/SyncManager-...] created temp directory /.../pymp-...
2664    [INFO/SyncManager-...] manager serving at '/.../listener-...'
2665    >>> del m
2666    [INFO/MainProcess] sending shutdown message to manager
2667    [INFO/SyncManager-...] manager exiting with exitcode 0
2668
2669For a full table of logging levels, see the :mod:`logging` module.
2670
2671
2672The :mod:`multiprocessing.dummy` module
2673~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
2674
2675.. module:: multiprocessing.dummy
2676   :synopsis: Dumb wrapper around threading.
2677
2678:mod:`multiprocessing.dummy` replicates the API of :mod:`multiprocessing` but is
2679no more than a wrapper around the :mod:`threading` module.
2680
2681.. currentmodule:: multiprocessing.pool
2682
2683In particular, the ``Pool`` function provided by :mod:`multiprocessing.dummy`
2684returns an instance of :class:`ThreadPool`, which is a subclass of
2685:class:`Pool` that supports all the same method calls but uses a pool of
2686worker threads rather than worker processes.
2687
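A minimal sketch of that thread-backed pool (the worker function is
illustrative only)::

   from multiprocessing.dummy import Pool

   def word_length(word):
       return len(word)

   if __name__ == '__main__':
       with Pool(4) as pool:                               # four worker threads
           print(pool.map(word_length, ['a', 'bb', 'ccc']))   # [1, 2, 3]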
2688
2689.. class:: ThreadPool([processes[, initializer[, initargs]]])
2690
2691   A thread pool object which controls a pool of worker threads to which jobs
2692   can be submitted.  :class:`ThreadPool` instances are fully interface
2693   compatible with :class:`Pool` instances, and their resources must also be
2694   properly managed, either by using the pool as a context manager or by
2695   calling :meth:`~multiprocessing.pool.Pool.close` and
2696   :meth:`~multiprocessing.pool.Pool.terminate` manually.
2697
2698   *processes* is the number of worker threads to use.  If *processes* is
2699   ``None`` then the number returned by :func:`os.cpu_count` is used.
2700
   If *initializer* is not ``None`` then each worker thread will call
2702   ``initializer(*initargs)`` when it starts.
2703
2704   Unlike :class:`Pool`, *maxtasksperchild* and *context* cannot be provided.
2705
   .. note::

      A :class:`ThreadPool` shares the same interface as :class:`Pool`, which
      is designed around a pool of processes and predates the introduction of
      the :class:`concurrent.futures` module.  As such, it inherits some
      operations that don't make sense for a pool backed by threads, and it
      has its own type for representing the status of asynchronous jobs,
      :class:`AsyncResult`, that is not understood by any other libraries.

      Users should generally prefer to use
      :class:`concurrent.futures.ThreadPoolExecutor`, which has a simpler
      interface that was designed around threads from the start, and which
      returns :class:`concurrent.futures.Future` instances that are
      compatible with many other libraries, including :mod:`asyncio`.
2720
2721
2722.. _multiprocessing-programming:
2723
2724Programming guidelines
2725----------------------
2726
2727There are certain guidelines and idioms which should be adhered to when using
2728:mod:`multiprocessing`.
2729
2730
2731All start methods
2732~~~~~~~~~~~~~~~~~
2733
2734The following applies to all start methods.
2735
2736Avoid shared state
2737
2738    As far as possible one should try to avoid shifting large amounts of data
2739    between processes.
2740
2741    It is probably best to stick to using queues or pipes for communication
2742    between processes rather than using the lower level synchronization
2743    primitives.
2744
2745Picklability
2746
2747    Ensure that the arguments to the methods of proxies are picklable.
2748
2749Thread safety of proxies
2750
2751    Do not use a proxy object from more than one thread unless you protect it
2752    with a lock.
2753
2754    (There is never a problem with different processes using the *same* proxy.)
2755
2756Joining zombie processes
2757
2758    On Unix when a process finishes but has not been joined it becomes a zombie.
2759    There should never be very many because each time a new process starts (or
2760    :func:`~multiprocessing.active_children` is called) all completed processes
2761    which have not yet been joined will be joined.  Also calling a finished
2762    process's :meth:`Process.is_alive <multiprocessing.Process.is_alive>` will
2763    join the process.  Even so it is probably good
2764    practice to explicitly join all the processes that you start.
2765
2766Better to inherit than pickle/unpickle
2767
2768    When using the *spawn* or *forkserver* start methods many types
2769    from :mod:`multiprocessing` need to be picklable so that child
2770    processes can use them.  However, one should generally avoid
2771    sending shared objects to other processes using pipes or queues.
2772    Instead you should arrange the program so that a process which
2773    needs access to a shared resource created elsewhere can inherit it
2774    from an ancestor process.
2775
Avoid terminating processes

    Using the :meth:`Process.terminate <multiprocessing.Process.terminate>`
    method to stop a process is liable to
    cause any shared resources (such as locks, semaphores, pipes and queues)
    currently being used by the process to become broken or unavailable to other
    processes.

    Therefore it is probably best to only consider using
    :meth:`Process.terminate <multiprocessing.Process.terminate>` on processes
    which never use any shared resources.

Joining processes that use queues

    Bear in mind that a process that has put items in a queue will wait before
    terminating until all the buffered items are fed by the "feeder" thread to
    the underlying pipe.  (The child process can call the
    :meth:`Queue.cancel_join_thread <multiprocessing.Queue.cancel_join_thread>`
    method of the queue to avoid this behaviour.)

    This means that whenever you use a queue you need to make sure that all
    items which have been put on the queue will eventually be removed before the
    process is joined.  Otherwise you cannot be sure that processes which have
    put items on the queue will terminate.  Remember also that non-daemonic
    processes will be joined automatically.

    An example which will deadlock is the following::

        from multiprocessing import Process, Queue

        def f(q):
            q.put('X' * 1000000)

        if __name__ == '__main__':
            queue = Queue()
            p = Process(target=f, args=(queue,))
            p.start()
            p.join()                    # this deadlocks
            obj = queue.get()

    A fix here would be to swap the last two lines (or simply remove the
    ``p.join()`` line).

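    For example, one possible corrected version that drains the queue before
    joining the child process::

        from multiprocessing import Process, Queue

        def f(q):
            q.put('X' * 1000000)

        if __name__ == '__main__':
            queue = Queue()
            p = Process(target=f, args=(queue,))
            p.start()
            obj = queue.get()           # drain the queue first
            p.join()                    # joining is now safe
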
Explicitly pass resources to child processes

    On Unix using the *fork* start method, a child process can make
    use of a shared resource created in a parent process by simply
    referring to it as a global variable.  However, it is better to pass
    the object as an argument to the constructor for the child process.

    Apart from making the code (potentially) compatible with Windows
    and the other start methods this also ensures that as long as the
    child process is still alive the object will not be garbage
    collected in the parent process.  This might be important if some
    resource is freed when the object is garbage collected in the
    parent process.

    So for instance ::

        from multiprocessing import Process, Lock

        def f():
            # relies on the global "lock" inherited from the parent (fork only)
            with lock:
                print('hello world')

        if __name__ == '__main__':
            lock = Lock()
            for i in range(10):
                Process(target=f).start()

    should be rewritten as ::

        from multiprocessing import Process, Lock

        def f(l):
            # the lock is received explicitly as an argument
            with l:
                print('hello world')

        if __name__ == '__main__':
            lock = Lock()
            for i in range(10):
                Process(target=f, args=(lock,)).start()

Beware of replacing :data:`sys.stdin` with a "file-like object"

    :mod:`multiprocessing` originally unconditionally called::

        os.close(sys.stdin.fileno())

    in the :meth:`multiprocessing.Process._bootstrap` method --- this resulted
    in issues with processes-in-processes. This has been changed to::

        sys.stdin.close()
        sys.stdin = open(os.open(os.devnull, os.O_RDONLY), closefd=False)

    This solves the fundamental issue of processes colliding with each other
    and raising a bad file descriptor error, but it introduces a potential
    danger for applications which replace :data:`sys.stdin` with a
    "file-like object" that buffers output.  The danger is that if multiple
    processes call :meth:`~io.IOBase.close` on this file-like object, the
    same data could be flushed to the object multiple times, resulting in
    corruption.

    If you write a file-like object and implement your own caching, you can
    make it fork-safe by storing the pid whenever you append to the cache,
    and discarding the cache when the pid changes. For example::

       @property
       def cache(self):
           pid = os.getpid()
           if pid != self._pid:
               self._pid = pid
               self._cache = []
           return self._cache

    For more information, see :issue:`5155`, :issue:`5313` and :issue:`5331`.

The *spawn* and *forkserver* start methods
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

There are a few extra restrictions which don't apply to the *fork*
start method.

More picklability

    Ensure that all arguments to :meth:`Process.__init__` are picklable.
    Also, if you subclass :class:`~multiprocessing.Process` then make sure that
    instances will be picklable when the :meth:`Process.start
    <multiprocessing.Process.start>` method is called.

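    For example, a minimal sketch of a :class:`~multiprocessing.Process`
    subclass whose instances stay picklable because only simple, picklable
    attributes are stored on ``self``::

        from multiprocessing import Process

        class Worker(Process):
            def __init__(self, value):
                super().__init__()
                self.value = value      # a plain, picklable attribute

            def run(self):
                print('working on', self.value)

        if __name__ == '__main__':
            w = Worker(42)
            w.start()                   # pickled here under spawn/forkserver
            w.join()
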
Global variables

    Bear in mind that if code run in a child process tries to access a global
    variable, then the value it sees (if any) may not be the same as the value
    in the parent process at the time that :meth:`Process.start
    <multiprocessing.Process.start>` was called.

    However, global variables which are just module level constants cause no
    problems.

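    For example, a minimal sketch in which a child started with the *spawn*
    start method re-imports the module and therefore sees the value assigned
    at import time rather than the value set later in the parent::

        from multiprocessing import Process, set_start_method

        counter = 0

        def show():
            print('child sees', counter)    # prints 0, not 1

        if __name__ == '__main__':
            set_start_method('spawn')
            counter = 1                     # not visible in the spawned child
            p = Process(target=show)
            p.start()
            p.join()
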
Safe importing of main module

    Make sure that the main module can be safely imported by a new Python
    interpreter without causing unintended side effects (such as starting a new
    process).

    For example, using the *spawn* or *forkserver* start method
    running the following module would fail with a
    :exc:`RuntimeError`::

        from multiprocessing import Process

        def foo():
            print('hello')

        p = Process(target=foo)
        p.start()

    Instead one should protect the "entry point" of the program by using ``if
    __name__ == '__main__':`` as follows::

       from multiprocessing import Process, freeze_support, set_start_method

       def foo():
           print('hello')

       if __name__ == '__main__':
           freeze_support()
           set_start_method('spawn')
           p = Process(target=foo)
           p.start()

    (The ``freeze_support()`` line can be omitted if the program will be run
    normally instead of frozen.)

    This allows the newly spawned Python interpreter to safely import the module
    and then run the module's ``foo()`` function.

    Similar restrictions apply if a pool or manager is created in the main
    module.


.. _multiprocessing-examples:

Examples
--------

Demonstration of how to create and use customized managers and proxies:

.. literalinclude:: ../includes/mp_newtype.py
   :language: python3


Using :class:`~multiprocessing.pool.Pool`:

.. literalinclude:: ../includes/mp_pool.py
   :language: python3


An example showing how to use queues to feed tasks to a collection of worker
processes and collect the results:

.. literalinclude:: ../includes/mp_workers.py
