.. _multiprocessing-doc:
Multiprocessing package - torch.multiprocessing
===============================================
.. _multiprocessing-cuda-sharing-details:
Sharing CUDA tensors
--------------------
Unlike CPU tensors, the sending process is required to keep the original tensor
as long as the receiving process retains a copy of the tensor. It is implemented
under the hood but requires users to follow the best practices for the program
to run correctly:
1. Release memory as soon as possible in the consumer.

   ::

       ## Good
       x = queue.get()
       # do something with x
       del x

   ::

       ## Bad
       x = queue.get()
       # do something with x
       # do everything else (the producer has to keep x in memory)
2. Keep the producer process running until all consumers exit. This will prevent
   the situation where the producer process releases memory which is still in use
   by a consumer.
3. Don't pass received tensors onward.

   ::

       # not going to work
       x = queue.get()
       queue_2.put(x)

   ::

       # you need to create a process-local copy
       x = queue.get()
       x_clone = x.clone()
       queue_2.put(x_clone)
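The reason the process-local copy is required can be checked directly:
``clone()`` allocates fresh storage, so the copy is independent of the shared
original. A minimal sketch using a CPU tensor (the same call applies to CUDA
tensors):

```python
import torch

x = torch.ones(4)        # stands in for a tensor received from a queue
x_clone = x.clone()      # process-local copy with its own storage
x_clone[0] = 5.0         # modifying the copy leaves the original untouched
print(x[0].item(), x_clone[0].item())  # 1.0 5.0
```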
Sharing strategies
------------------

This section provides a brief overview of how the different sharing strategies
work. Note that it applies only to CPU tensors - CUDA tensors will always use
the CUDA API, as that's the only way they can be shared.
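The active strategy can be inspected and changed with the package's
``get_all_sharing_strategies``, ``get_sharing_strategy``, and
``set_sharing_strategy`` functions:

```python
import torch.multiprocessing as mp

# Strategies supported on this system, e.g. {'file_descriptor', 'file_system'}
# on Linux, and the one currently in use.
print(mp.get_all_sharing_strategies())
print(mp.get_sharing_strategy())

# Applies to CPU tensors shared from this point on.
mp.set_sharing_strategy('file_system')
```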
File descriptor - ``file_descriptor``
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Whenever a storage is moved to shared memory, a file descriptor obtained from
``shm_open`` is cached with the object, and when it's going to be sent to other
processes, the file descriptor will be transferred (e.g. via UNIX sockets) to
them.
Note that if a large number of tensors are shared, this strategy will keep a
large number of file descriptors open most of the time. If your system has low
limits for the number of open file descriptors, and you can't raise them, you
should use the ``file_system`` strategy instead.
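A sketch of how to check the per-process limit from Python using the
standard-library ``resource`` module (Unix only); the ``setrlimit`` call is
shown commented out because the permitted ceiling is system-dependent:

```python
import resource

# Under the file_descriptor strategy each shared tensor holds one open
# file descriptor, so the open-file limit caps how many tensors can be
# shared at once.
soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE)
print(f"open-file limit: soft={soft}, hard={hard}")

# An unprivileged process may raise its soft limit up to the hard limit:
# resource.setrlimit(resource.RLIMIT_NOFILE, (hard, hard))
# If even the hard limit is too low, switch to the file_system strategy.
```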
File system - ``file_system``
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
If a process crashes or is killed without calling the storage destructor, the
backing shared memory files will remain in the system. This is very serious,
because they keep using up the memory until the system is restarted, or the
files are freed manually.
To counter this problem, :mod:`torch.multiprocessing` will spawn a daemon named
``torch_shm_manager`` that will isolate itself from
the current process group, and will keep track of all shared memory allocations.
Once all processes connected to it exit, it will clean up all shared memory
files allocated by the group.
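Storages end up in shared memory either implicitly, when a tensor is sent
through a queue, or explicitly via :meth:`~torch.Tensor.share_memory_`. A
minimal sketch:

```python
import torch
import torch.multiprocessing as mp

mp.set_sharing_strategy('file_system')  # shared storages become files in shared memory

x = torch.zeros(3)
x.share_memory_()      # move the storage into shared memory in place
print(x.is_shared())   # True
```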
Spawning subprocesses
---------------------

This package also provides a :func:`~torch.multiprocessing.spawn` function that
launches a number of subprocesses, joins them, and propagates an error raised
in any of them while terminating the rest.
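A minimal sketch of :func:`~torch.multiprocessing.spawn`; the spawned function
receives the process index as its first argument:

```python
import torch.multiprocessing as mp

def worker(rank):
    # each spawned process runs this function with its index 0..nprocs-1
    print(f"hello from process {rank}")

if __name__ == "__main__":
    # launches 2 processes, joins them, and re-raises any worker error
    mp.spawn(worker, nprocs=2)
```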