# xDS (Load-Balancing) Interop Test Case Descriptions

Client and server use [test.proto](../src/proto/grpc/testing/test.proto).

## Server

The code for the xDS test server can be found at:
[Java](https://github.com/grpc/grpc-java/blob/master/interop-testing/src/main/java/io/grpc/testing/integration/XdsTestServer.java) (other language implementations are in progress).

Server should accept these arguments:

*   --port=PORT
    *   The port the server will run on.

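The server's only required behavior beyond serving the unary call is to
identify itself, since the client records the remote-peer distribution from
responses. A minimal sketch, assuming grpc-java and the generated
`TestServiceGrpc`/`Messages` classes for test.proto (whose `SimpleResponse`
carries a `hostname` field); this illustrates the idea and is not the
reference implementation linked above.

```
import java.io.IOException;
import java.net.InetAddress;

import io.grpc.Server;
import io.grpc.ServerBuilder;
import io.grpc.stub.StreamObserver;

final class XdsTestServerSketch {
  public static void main(String[] args) throws IOException, InterruptedException {
    int port = 8080; // parsed from --port=PORT in practice
    String hostname = InetAddress.getLocalHost().getHostName();
    Server server = ServerBuilder.forPort(port)
        .addService(new TestServiceGrpc.TestServiceImplBase() {
          @Override
          public void unaryCall(Messages.SimpleRequest req,
              StreamObserver<Messages.SimpleResponse> obs) {
            // Reporting the hostname lets the client attribute each
            // response to a specific backend.
            obs.onNext(Messages.SimpleResponse.newBuilder()
                .setHostname(hostname)
                .build());
            obs.onCompleted();
          }
        })
        .build()
        .start();
    server.awaitTermination();
  }
}
```
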
## Client

The base behavior of the xDS test client is to send a constant QPS of unary
messages and record the remote-peer distribution of the responses. Further, the
client must expose an implementation of the `LoadBalancerStatsService` gRPC
service to allow the test driver to validate the load balancing behavior for a
particular test case (see below for more details).

The code for the xDS test client can be found at:
[Java](https://github.com/grpc/grpc-java/blob/master/interop-testing/src/main/java/io/grpc/testing/integration/XdsTestClient.java) (other language implementations are in progress).

Clients should accept these arguments:

*   --fail_on_failed_rpcs=BOOL
    *   If true, the client should exit with a non-zero return code if any RPCs
        fail after at least one RPC has succeeded, indicating a valid xDS config
        was received. This accounts for any startup-related delays in receiving
        an initial config from the load balancer. Default is false.
*   --num_channels=CHANNELS
    *   The number of channels to create to the server.
*   --qps=QPS
    *   The QPS per channel.
*   --server=HOSTNAME:PORT
    *   The server host to connect to. For example, "localhost:8080".
*   --stats_port=PORT
    *   The port on which to expose the client's `LoadBalancerStatsService`
        implementation.

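A minimal sketch of the send-and-record loop described above, under the same
grpc-java assumptions as the server sketch (`TestServiceGrpc`, `Messages`);
the per-channel scheduler and class names are illustrative:

```
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.LongAdder;

import io.grpc.ManagedChannel;
import io.grpc.ManagedChannelBuilder;
import io.grpc.stub.StreamObserver;

final class XdsTestClientSketch {
  // Completed-RPC counts keyed by remote peer (backend hostname).
  private final Map<String, LongAdder> rpcsByPeer = new ConcurrentHashMap<>();
  private final LongAdder failures = new LongAdder();

  void run(String target, int numChannels, int qps) {
    ScheduledExecutorService scheduler =
        Executors.newScheduledThreadPool(numChannels);
    for (int i = 0; i < numChannels; i++) {
      ManagedChannel channel =
          ManagedChannelBuilder.forTarget(target).usePlaintext().build();
      TestServiceGrpc.TestServiceStub stub = TestServiceGrpc.newStub(channel);
      // Issue one async unary RPC every 1/qps seconds on this channel.
      scheduler.scheduleAtFixedRate(
          () -> stub.unaryCall(
              Messages.SimpleRequest.getDefaultInstance(),
              new StreamObserver<Messages.SimpleResponse>() {
                @Override
                public void onNext(Messages.SimpleResponse resp) {
                  // The backend reports its hostname, identifying the peer.
                  rpcsByPeer
                      .computeIfAbsent(resp.getHostname(), h -> new LongAdder())
                      .increment();
                }

                @Override
                public void onError(Throwable t) {
                  failures.increment(); // failed RPC: no remote peer recorded
                }

                @Override
                public void onCompleted() {}
              }),
          0, 1_000_000 / qps, TimeUnit.MICROSECONDS);
    }
  }
}
```
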
## Test Driver

Note that, unlike our other interop tests, neither the client nor the server has
any notion of which of the following test scenarios is under test. Instead, a
separate test driver is responsible for configuring the load balancer and the
server backends, running the client, and then querying the client's
`LoadBalancerStatsService` to validate load balancer behavior for each of the
tests described below.

## LoadBalancerStatsService

The service is defined as:

```
message LoadBalancerStatsRequest {
  // Request stats for the next num_rpcs sent by client.
  int32 num_rpcs = 1;
  // If num_rpcs have not completed within timeout_sec, return partial results.
  int32 timeout_sec = 2;
}

message LoadBalancerStatsResponse {
  // The number of completed RPCs for each peer.
  map<string, int32> rpcs_by_peer = 1;
  // The number of RPCs that failed to record a remote peer.
  int32 num_failures = 2;
}

service LoadBalancerStatsService {
  // Gets the backend distribution for RPCs sent by a test client.
  rpc GetClientStats(LoadBalancerStatsRequest)
      returns (LoadBalancerStatsResponse) {}
}
```

Note that the `LoadBalancerStatsResponse` contains the remote peer distribution
of the next `num_rpcs` *sent* by the client after receiving the
`LoadBalancerStatsRequest`. It is important that the remote peer distribution be
recorded for a block of consecutive outgoing RPCs, to validate the intended
distribution from the load balancer, rather than just looking at the next
`num_rpcs` responses received from backends, as different backends may respond
at different rates.

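One way to implement this block recording, sketched below assuming the
generated Java classes for the proto above: when a `GetClientStats` request
arrives, the client registers a watcher, and the send loop reports the outcome
of each subsequently *started* RPC to it, so the recorded block is a run of
consecutive outgoing RPCs. The watcher returns partial results if the block
does not fill within `timeout_sec`. The class below is hypothetical.

```
import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.TimeUnit;

// Records the outcomes of the next num_rpcs RPCs *sent* after it is
// registered with the client's send loop.
final class StatsWatcher {
  private final Map<String, Integer> rpcsByPeer = new HashMap<>();
  private final CountDownLatch remaining;
  private int failures;

  StatsWatcher(int numRpcs) {
    remaining = new CountDownLatch(numRpcs);
  }

  // Called by the send loop once per completed RPC in the watched block;
  // peer is null when the RPC failed to record a remote peer.
  synchronized void onRpcDone(String peer) {
    if (remaining.getCount() == 0) {
      return; // block already full
    }
    if (peer == null) {
      failures++;
    } else {
      rpcsByPeer.merge(peer, 1, Integer::sum);
    }
    remaining.countDown();
  }

  // Waits until num_rpcs complete or timeout_sec elapses, then builds the
  // (possibly partial) response, matching the timeout_sec semantics above.
  LoadBalancerStatsResponse waitAndBuildResponse(int timeoutSec)
      throws InterruptedException {
    remaining.await(timeoutSec, TimeUnit.SECONDS);
    synchronized (this) {
      return LoadBalancerStatsResponse.newBuilder()
          .putAllRpcsByPeer(rpcsByPeer)
          .setNumFailures(failures)
          .build();
    }
  }
}
```

A `GetClientStats` handler would then create a watcher for `num_rpcs`, register
it with the send loop, and return
`watcher.waitAndBuildResponse(request.getTimeoutSec())`.
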
## Test Cases

### ping_pong

This test verifies that every backend receives traffic.

Client parameters:

1.  --num_channels=1
1.  --qps=100
1.  --fail_on_failed_rpcs=true

Load balancer configuration:

1.  4 backends are created in a single managed instance group (MIG).

Test driver asserts:

1.  All backends receive at least one RPC.

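As an illustration, the driver can verify this assertion by polling
`GetClientStats` until every expected backend appears in the peer
distribution. The sketch below assumes generated Java stubs for the service
defined above; the block size and poll interval are arbitrary choices.

```
import java.util.Set;

import io.grpc.ManagedChannel;
import io.grpc.ManagedChannelBuilder;

final class DriverChecks {
  // Polls the client's stats service until every expected backend has
  // received at least one RPC, returning the last observed distribution.
  static LoadBalancerStatsResponse waitUntilAllBackendsSeen(
      String clientStatsTarget, Set<String> expectedBackends)
      throws InterruptedException {
    ManagedChannel channel =
        ManagedChannelBuilder.forTarget(clientStatsTarget).usePlaintext().build();
    LoadBalancerStatsServiceGrpc.LoadBalancerStatsServiceBlockingStub stub =
        LoadBalancerStatsServiceGrpc.newBlockingStub(channel);
    while (true) {
      // Ask the client to record its next 100 RPCs (partial after 10s);
      // both values are illustrative.
      LoadBalancerStatsResponse stats = stub.getClientStats(
          LoadBalancerStatsRequest.newBuilder()
              .setNumRpcs(100)
              .setTimeoutSec(10)
              .build());
      if (stats.getRpcsByPeerMap().keySet().containsAll(expectedBackends)) {
        return stats; // every backend received at least one RPC
      }
      Thread.sleep(1_000); // not converged yet; record another block
    }
  }
}
```
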
### round_robin

This test verifies that RPCs are evenly routed according to an unweighted round
robin policy.

Client parameters:

1.  --num_channels=1
1.  --qps=100
1.  --fail_on_failed_rpcs=true

Load balancer configuration:

1.  4 backends are created in a single MIG.

Test driver asserts:

1.  Once all backends receive at least one RPC, the following 100 RPCs are
    evenly distributed across the 4 backends.

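A sketch of how the even-distribution assertion might be checked; the
tolerance parameter is an assumption (a strict check would pass 0):

```
import java.util.Map;

final class RoundRobinCheck {
  // Asserts that rpcsByPeer is an even split of totalRpcs across all peers,
  // within an absolute per-peer tolerance.
  static void assertEvenDistribution(
      Map<String, Integer> rpcsByPeer, int totalRpcs, int tolerance) {
    int expectedPerPeer = totalRpcs / rpcsByPeer.size(); // e.g. 100 / 4 = 25
    for (Map.Entry<String, Integer> e : rpcsByPeer.entrySet()) {
      if (Math.abs(e.getValue() - expectedPerPeer) > tolerance) {
        throw new AssertionError("uneven distribution for " + e.getKey()
            + ": got " + e.getValue() + ", expected ~" + expectedPerPeer);
      }
    }
  }
}
```
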
### backends_restart

This test verifies that the load balancer will resume sending traffic to a set
of backends that is stopped and then resumed.

Client parameters:

1.  --num_channels=1
1.  --qps=100

Load balancer configuration:

1.  4 backends are created in a single MIG.

Test driver asserts:

1.  All backends receive at least one RPC.

The test driver records the peer distribution for a subsequent block of 100
RPCs, then stops the backends.

Test driver asserts:

1.  No RPCs from the client are successful.

The test driver resumes the backends.

Test driver asserts:

1.  Once all backends receive at least one RPC, the distribution for a block of
    100 RPCs is the same as the distribution recorded prior to restart.

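Comparing the pre- and post-restart distributions reduces to a per-peer
comparison of the two count maps; the tolerance parameter below is an
assumption, since the assertion above states the distributions are the same:

```
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

final class DistributionCompare {
  // Asserts that two 100-RPC peer distributions agree within a per-peer
  // tolerance (0 demands identical counts).
  static void assertSameDistribution(Map<String, Integer> before,
      Map<String, Integer> after, int tolerancePerPeer) {
    Set<String> peers = new HashSet<>(before.keySet());
    peers.addAll(after.keySet());
    for (String peer : peers) {
      int b = before.getOrDefault(peer, 0);
      int a = after.getOrDefault(peer, 0);
      if (Math.abs(a - b) > tolerancePerPeer) {
        throw new AssertionError("distribution changed for " + peer
            + ": before=" + b + " after=" + a);
      }
    }
  }
}
```
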
### secondary_locality_gets_requests_on_primary_failure

This test verifies that backends in a secondary locality receive traffic when
all backends in the primary locality fail.

Client parameters:

1.  --num_channels=1
1.  --qps=100

Load balancer configuration:

1.  The primary MIG with 2 backends in the same zone as the client
1.  The secondary MIG with 2 backends in a different zone

Test driver asserts:

1.  All backends in the primary locality receive at least 1 RPC.
1.  No backends in the secondary locality receive RPCs.

The test driver stops the backends in the primary locality.

Test driver asserts:

1.  All backends in the secondary locality receive at least 1 RPC.

The test driver resumes the backends in the primary locality.

Test driver asserts:

1.  All backends in the primary locality receive at least 1 RPC.
1.  No backends in the secondary locality receive RPCs.

### secondary_locality_gets_no_requests_on_partial_primary_failure

This test verifies that backends in a failover locality do not receive traffic
when at least one of the backends in the primary locality remains healthy.

**Note:** Future Traffic Director (TD) features may change the expected
behavior and require changes to this test case.

Client parameters:

1.  --num_channels=1
1.  --qps=100

Load balancer configuration:

1.  The primary MIG with 2 backends in the same zone as the client
1.  The secondary MIG with 2 backends in a different zone

Test driver asserts:

1.  All backends in the primary locality receive at least 1 RPC.
1.  No backends in the secondary locality receive RPCs.

The test driver stops one of the backends in the primary locality.

Test driver asserts:

1.  All remaining backends in the primary locality receive at least 1 RPC.
1.  No backends in the secondary locality receive RPCs.

### new_instance_group_receives_traffic

This test verifies that new instance groups added to a backend service in the
same zone receive traffic.

Client parameters:

1.  --num_channels=1
1.  --qps=100
1.  --fail_on_failed_rpcs=true

Load balancer configuration:

1.  One MIG with two backends, using rate balancing mode.

Test driver asserts:

1.  All backends receive at least one RPC.

The test driver adds a new MIG with two backends in the same zone.

Test driver asserts:

1.  All backends in each MIG receive at least one RPC.

### remove_instance_group

This test verifies that a remaining instance group can successfully serve RPCs
after removal of another instance group in the same zone.

Client parameters:

1.  --num_channels=1
1.  --qps=100

Load balancer configuration:

1.  Two MIGs with two backends each, using rate balancing mode.

Test driver asserts:

1.  All backends receive at least one RPC.

The test driver removes one MIG.

Test driver asserts:

1.  All RPCs are directed to the two remaining backends (no RPC failures).

### change_backend_service

This test verifies that the backend service can be replaced and traffic routed
to the new backends.

Client parameters:

1.  --num_channels=1
1.  --qps=100
1.  --fail_on_failed_rpcs=true

Load balancer configuration:

1.  One MIG with two backends.

Test driver asserts:

1.  All backends receive at least one RPC.

The test driver creates a new backend service containing a MIG with two backends
and changes the TD URL map to point to this new backend service.

Test driver asserts:

1.  All RPCs are directed to the new backend service.

### traffic_splitting

This test verifies that traffic will be distributed between backend services
with the correct weights when the route action is set to weighted backend
services.

Client parameters:

1.  --num_channels=1
1.  --qps=100

Load balancer configuration:

1.  One MIG (MIG_a) with one backend

Test driver asserts:

1.  Once all backends receive at least one RPC, the following 1000 RPCs are
    all sent to MIG_a.

The test driver adds a new MIG (MIG_b) with one backend, and changes the route
action to weighted backend services with weights {MIG_a: 20, MIG_b: 80}.

Test driver asserts:

1.  Once all backends receive at least one RPC, the following 1000 RPCs are
    distributed across the two backends as MIG_a: 20%, MIG_b: 80%.

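A sketch of the weighted-split check. Peer stats are keyed by backend
hostname, so the driver is assumed to have aggregated counts per MIG before
comparing against the configured weights; the relative tolerance is an
assumption:

```
import java.util.Map;

final class TrafficSplitCheck {
  // Asserts that observed per-MIG RPC counts match the configured weights
  // within a relative tolerance (e.g. 0.05 allows a 5% deviation of total).
  static void assertWeightedSplit(Map<String, Integer> rpcsByMig,
      Map<String, Integer> weights, double tolerance) {
    int total = rpcsByMig.values().stream().mapToInt(Integer::intValue).sum();
    int weightSum = weights.values().stream().mapToInt(Integer::intValue).sum();
    for (Map.Entry<String, Integer> w : weights.entrySet()) {
      double expected = total * (w.getValue() / (double) weightSum);
      int actual = rpcsByMig.getOrDefault(w.getKey(), 0);
      if (Math.abs(actual - expected) > tolerance * total) {
        throw new AssertionError("split mismatch for " + w.getKey()
            + ": got " + actual + ", expected ~" + Math.round(expected));
      }
    }
  }
}
```

For 1000 recorded RPCs and weights {MIG_a: 20, MIG_b: 80}, the expected counts
are roughly 200 for MIG_a and 800 for MIG_b.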