• Home
  • Line#
  • Scopes#
  • Navigate#
  • Raw
  • Download
1#!/usr/bin/env/python3
2
3# Copyright (c) Facebook, Inc. and its affiliates.
4# All rights reserved.
5#
6# This source code is licensed under the BSD-style license found in the
7# LICENSE file in the root directory of this source tree.
8
9"""
10
11Torchelastic agent and user worker failover contract:
12
13**TL;DR;**:
14
15* TE(torchelastic) expects user workers to finish with the 5 minutes drift
16* It is better to design DDP app to fail for all workers, rather than a single one.
17* TE does not synchronize number of restarts between agents
18* TE re-rendezvous does not trigger restart decrease
19* When a single agent finishes its job(successfully or not), it will close rendezvous.
20  If other agents still have workers in progress, they will be terminated.
21* Based on above, scale down does not work if at least single agent finishes the job.
22* When Scale up is detected by agents, it will not decrease ``max_restarts``
23
24
25In general TE(torchelastic) can launch arbitrary user code, but there is some
26clarifications need to be done around what failover mechanism torchelastic
27provides and what failover mechanism it expects from user workers.
28
29Torchelastic currently supports DDP style applications.  That means that
30TE expects *ALL* workers finish approximately at the same time. In practice,
31it is nearly to impossible to guarantee that all workers in arbitrary
32DDP application finish at the time, so TE provides a finalization barrier
33that waits for TIMEOUT(5 minutes) for worker finalization.
34
35**Worker Failure**
36
37When worker fails, TE will check the number of restarts
38available, if there is more than 0 restarts, TE will start a new rendezvous
39round and restart the worker process. New rendezvous round will other
40TE agents to terminate their workers.
41
42.. note:: The TE agent does not synchronize restarts between themselves.
43          When a single agent performs restart, it will trigger a local ``max_restarts``
44          decrease, other agent will not decrease their ``max_restarts``.
45          the user to run the distributed application locally on a dev host.
46
47A single worker failure can cause the whole cluster to fail:
48If a single worker is constantly failing, it will cause the TE agent
49``max_restarts``  to go to zero. This will cause an agent to finish its
50work and close rendezvous. If there are any other workers on different
51agents, they will be terminated.
52
53
54**Re-Rendezvous**
55
56Re-rendezvous occurs when TE agents detect a new node
57trying to joint a cluster. TE will not decrease ``max_restarts``. TE agents
58will terminate its workers and start a new rendezvous round.
59
60Note about DynamicRendezvous(etcd-v2, c10d-experimental): If the rendezvous
61has already max_nodes, the new node won't be added to the wait list right
62away since there is no need to tear down a rendezvous that is already fully
63utilized. The new node will wait until its timeout (600 secs by default)
64and periodically check the number of participants. If the number becomes
65less than max_nodes, it will be added to the wait list; otherwise, it will time out after 600 secs.
66
67*Scale up event*. When scale up event happens, torchelastic rendezvous
68will detect that there are new nodes trying to join. Torchelastic agent
69will stop all workers and perform re-rendezvous. Note: when scale up event
70happens, *``max_restarts``* will *not* decrease.
71
72*Scale down event*. When scale down event happens, rendezvous will not
73notify the torchelastic agent about it. If TE agent launched with ``max_restarts=0`` ,
74it relies on the underlying scheduler to handle job restart. If the ``max_restarts>0`` ,
75TE agent will terminate workers and start a new rdzv round, which is a *Scale up event*.
76
77"""
78