gRPC Manual Flow Control Example
================================

Flow control is relevant for streaming RPC calls.

By default, gRPC handles flow control for you. However, for specific use
cases, you may wish to take explicit control.

The default, unless you disable auto requests, is for the gRPC framework to
automatically request 1 message at startup and to request 1 more after each
onNext call. With manual flow control, you make each request explicitly. Note
that acknowledgements (which are what let the server know that the receiver
can handle more data) are sent after an onNext call. The onNext method is
called when there is both an undelivered message and an outstanding request.

The most common use case for manual flow control is to avoid requiring your
asynchronous onNext method to block while the read message is processed
somewhere else.

Another, minor use case for manual flow control is when there are lots of small
messages and you are using Netty. To avoid switching back and forth between the
application and network threads, you can specify a larger initial value (such
as 5) so that the application thread can have values waiting for it rather than
constantly having to block and wait for the network thread to provide the next
value.

### Outgoing Flow Control

The underlying layer (such as Netty) makes the write wait when there is no
space to write the next message. This puts the request stream into a not-ready
state, and the outgoing onNext invocation waits. You can explicitly check that
the stream is ready for writing before calling onNext, via
`CallStreamObserver.isReady()`, to avoid blocking. While the stream is not
ready, you can use the time to do reads, which may allow the other side of the
channel to complete a write and then do its own reads, thereby avoiding
deadlock.

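As a sketch of this ready-check pattern, the hypothetical `MiniCallObserver`
below is a stdlib-only stand-in for gRPC's `CallStreamObserver`: `isReady()`,
`setOnReadyHandler(...)`, and `onNext(...)` mirror real grpc-java method names,
while `drain()` simulates the network thread emptying the transport buffer and
re-running the on-ready handler. This is an illustrative model, not grpc-java
code.

```java
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.Iterator;
import java.util.List;

// Simplified stand-in for gRPC's CallStreamObserver (not grpc-java code).
class MiniCallObserver<T> {
    private final int bufferCapacity;
    private final ArrayDeque<T> transportBuffer = new ArrayDeque<>();
    private Runnable onReadyHandler = () -> {};

    MiniCallObserver(int bufferCapacity) { this.bufferCapacity = bufferCapacity; }

    // "Ready" means there is still space to write the next message.
    boolean isReady() { return transportBuffer.size() < bufferCapacity; }

    void setOnReadyHandler(Runnable handler) { this.onReadyHandler = handler; }

    void onNext(T value) { transportBuffer.add(value); }

    // Simulates the network thread draining the buffer, after which gRPC
    // would invoke the registered on-ready handler.
    List<T> drain() {
        List<T> sent = new ArrayList<>(transportBuffer);
        transportBuffer.clear();
        onReadyHandler.run();
        return sent;
    }
}

public class OutgoingFlowControlDemo {
    public static void main(String[] args) {
        MiniCallObserver<Integer> observer = new MiniCallObserver<>(3);
        Iterator<Integer> replies = List.of(1, 2, 3, 4, 5, 6, 7).iterator();

        // Write only while the stream is ready, instead of letting onNext
        // block when the transport has no space left.
        Runnable writeWhileReady = () -> {
            while (observer.isReady() && replies.hasNext()) {
                observer.onNext(replies.next());
            }
        };
        observer.setOnReadyHandler(writeWhileReady);

        writeWhileReady.run();                // writes 1, 2, 3, then stops
        System.out.println(observer.drain()); // [1, 2, 3]; handler writes 4, 5, 6
        System.out.println(observer.drain()); // [4, 5, 6]; handler writes 7
        System.out.println(observer.drain()); // [7]
    }
}
```

In real grpc-java code the on-ready handler is invoked by the library itself
whenever the transport becomes writable again; the application never calls it
directly.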
### Incoming Manual Flow Control

An example use case: your onNext method places values from the stream into a
buffer. Manual flow control can be used to avoid buffer overflows. You could
use a blocking buffer, but you may not want the thread running onNext to
block.

By default, gRPC configures a stream to request one value at startup and then
to request one more message at the completion of each onNext invocation. You
can take control of this by disabling AutoRequest on the request stream. If
you do so, then you are responsible for asynchronously telling the stream each
time that you would like a new message to be delivered to onNext when one is
available. This is done by calling a method on the request stream to request
messages (while this method takes a count, you generally request 1). Putting
this request at the end of your onNext method essentially duplicates the
default behavior.

#### Client side (server or bidi streaming)

In the `ClientResponseObserver.beforeStart` method, call
`requestStream.disableAutoRequestWithInitial(1)`.

When you are ready to begin processing the next value from the stream, call
`requestStream.request(1)`.

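To show how these two calls interact, the hypothetical `MiniRequestStream`
below models request-driven delivery in plain Java: `disableAutoRequestWithInitial`
and `request` mirror the grpc-java method names above, while `receive(...)`
simulates a response arriving from the network. It is a sketch of the
mechanism, not the grpc-java implementation.

```java
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.List;

// Simplified stand-in for gRPC's ClientCallStreamObserver (not grpc-java code).
class MiniRequestStream<T> {
    private boolean autoRequest = true;
    private int outstanding = 1;           // default: gRPC requests 1 at startup
    private final ArrayDeque<T> undelivered = new ArrayDeque<>();
    private final List<T> delivered = new ArrayList<>();

    // In the real API, called from ClientResponseObserver.beforeStart.
    void disableAutoRequestWithInitial(int initial) {
        autoRequest = false;
        outstanding = initial;
    }

    // Called whenever the application is ready for the next value.
    void request(int count) {
        outstanding += count;
        deliver();
    }

    // Simulates a response message arriving from the network.
    void receive(T value) {
        undelivered.add(value);
        deliver();
    }

    List<T> deliveredSoFar() { return delivered; }

    // onNext fires only when there is both an undelivered message and an
    // outstanding request.
    private void deliver() {
        while (outstanding > 0 && !undelivered.isEmpty()) {
            outstanding--;
            delivered.add(undelivered.poll());  // stands in for onNext(...)
            if (autoRequest) outstanding++;     // default: 1 more per onNext
        }
    }
}

public class ClientFlowControlDemo {
    public static void main(String[] args) {
        MiniRequestStream<String> stream = new MiniRequestStream<>();
        stream.disableAutoRequestWithInitial(1);

        stream.receive("a");  // delivered: covered by the initial request
        stream.receive("b");  // held back: no outstanding request yet
        System.out.println(stream.deliveredSoFar());  // [a]

        stream.request(1);    // now ready for the next value
        System.out.println(stream.deliveredSoFar());  // [a, b]
    }
}
```
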
#### Server side (client or bidi streaming)

In your stub methods supporting streaming, add the following at the top:

1. cast `StreamObserver<> responseObserver`
   to `ServerCallStreamObserver<> serverCallStreamObserver`
1. call `serverCallStreamObserver.disableAutoRequest()`

When you are ready to begin processing the next value from the stream, call
`serverCallStreamObserver.request(1)`.

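The steps above, together with the earlier note that requesting at the end of
onNext duplicates the default behavior, can be sketched with a simplified
stand-in. The `ServerCallStreamObserver` class below shares its name and the
`disableAutoRequest()`/`request(int)` methods with the grpc-java class, but it
is a stdlib-only model; `receive(...)` and `setOnNext(...)` are hypothetical
helpers simulating the transport side.

```java
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.List;
import java.util.function.Consumer;

// Simplified stand-in for gRPC's ServerCallStreamObserver (not grpc-java code).
class ServerCallStreamObserver<T> {
    private boolean autoRequest = true;
    private int outstanding = 1;              // gRPC requests 1 at startup
    private final ArrayDeque<T> inbound = new ArrayDeque<>();
    private Consumer<T> onNext = v -> {};

    void disableAutoRequest() { autoRequest = false; }
    void setOnNext(Consumer<T> handler) { onNext = handler; }
    void request(int count) { outstanding += count; deliver(); }
    void receive(T value) { inbound.add(value); deliver(); } // message arrives

    private void deliver() {
        // onNext fires only with an undelivered message AND an outstanding request.
        while (outstanding > 0 && !inbound.isEmpty()) {
            outstanding--;
            onNext.accept(inbound.poll());
            if (autoRequest) outstanding++;   // default: 1 more per onNext
        }
    }
}

public class ServerFlowControlDemo {
    public static void main(String[] args) {
        // Default behavior: auto-request keeps one request outstanding.
        ServerCallStreamObserver<String> auto = new ServerCallStreamObserver<>();
        List<String> autoSeen = new ArrayList<>();
        auto.setOnNext(autoSeen::add);

        // Manual: disable auto-request, then request(1) at the end of onNext.
        ServerCallStreamObserver<String> manual = new ServerCallStreamObserver<>();
        manual.disableAutoRequest();
        List<String> manualSeen = new ArrayList<>();
        manual.setOnNext(value -> {
            manualSeen.add(value);
            manual.request(1);  // re-request as the last step of onNext
        });

        for (String m : List.of("m1", "m2", "m3")) {
            auto.receive(m);
            manual.receive(m);
        }
        // Both observers see the same messages in the same order.
        System.out.println(autoSeen.equals(manualSeen));  // true
    }
}
```

The value of manual control is that `request(1)` does not have to happen inside
onNext at all: it can be deferred until a downstream consumer has actually
drained the value.
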
### Related documents

Also see [gRPC Flow Control Users Guide][user guide]

[user guide]: https://grpc.io/docs/guides/flow-control