.. _libc_gpu_rpc:

======================
Remote Procedure Calls
======================

.. contents:: Table of Contents
  :depth: 4
  :local:

Remote Procedure Call Implementation
====================================

Traditionally, the C library abstracts over several functions that interface
with the platform's operating system through system calls. The GPU, however,
does not provide an operating system that can handle target-dependent
operations. Instead, we implemented remote procedure calls to interface with
the host's operating system while executing on a GPU.

We implemented remote procedure calls using unified virtual memory to create a
shared communication channel between the two processes. This memory is often
pinned memory that can be accessed asynchronously and atomically by multiple
processes simultaneously. This means that we can simply provide mutual
exclusion on a shared buffer to swap work back and forth between the host
system and the GPU. We can then use this to create a simple client-server
protocol using this shared memory.

This work treats the GPU as a client and the host as a server. The client
initiates communications while the server listens for them. In order to
communicate between the host and the device, we simply maintain a buffer of
memory and two mailboxes. One mailbox is write-only while the other is
read-only. This exposes three primitive operations: using the buffer, giving
away ownership, and waiting for ownership. This is implemented as a half-duplex
transmission channel between the two sides. We decided to assign ownership of
the buffer to the client when the inbox and outbox bits are equal and to the
server when they are not.

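The ownership rule can be expressed as a simple predicate over the two mailbox
bits, as the sketch below shows. It is illustrative only; the names used here
are not the actual ``libc`` data structures.

.. code-block:: c++

  #include <cstdint>

  // Illustrative only: the mailbox values are single bits stored in shared,
  // pinned memory. The inbox is written by the other process and the outbox
  // by this one.
  bool client_owns_buffer(uint32_t in, uint32_t out) { return in == out; }
  bool server_owns_buffer(uint32_t in, uint32_t out) { return in != out; }

  // Toggling the outbox bit hands ownership of the buffer to the other side.
  uint32_t give_away_ownership(uint32_t out) { return out ^ 1; }
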
In order to make this transmission channel thread-safe, we abstract ownership
of the given mailbox pair and buffer around a port, effectively acting as a
lock and an index into the allocated buffer slice. The server and device have
independent locks around the given port. In this scheme, the buffer can be used
to communicate intent and data generically with the server. We then simply
provide multiple copies of this protocol and expose them as multiple ports.

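As a rough mental model, a port combines a lock with an index selecting one
mailbox pair and packet slot. The sketch below is a simplification under that
assumption, not the interface used by ``libc``.

.. code-block:: c++

  #include <atomic>
  #include <cstdint>

  // Simplified sketch of the port abstraction; the real implementation differs.
  struct Port {
    uint32_t index;              // which mailbox pair / packet slot is owned
    std::atomic<uint32_t> *lock; // this side's lock guarding the slot

    // Attempt to claim the slot; returns true if the lock was acquired.
    bool try_claim() {
      uint32_t expected = 0;
      return lock->compare_exchange_strong(expected, 1,
                                           std::memory_order_acquire);
    }

    // Release the lock so another thread on this side may reuse the slot.
    void close() { lock->store(0, std::memory_order_release); }
  };
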
If this were simply a standard CPU system, this would be sufficient. However,
GPUs have many unique architectural challenges. First, GPU threads execute in
lock-step with each other in groups typically called warps or wavefronts. We
need to target the smallest unit of independent parallelism, so the RPC
interface needs to handle an entire group of threads at once. This is done by
increasing the size of the buffer and adding a thread mask argument so the
server knows which threads are active when it handles the communication.
Second, GPUs generally have no forward progress guarantees. In order to
guarantee we do not encounter deadlocks while executing, it is required that
the number of ports matches the maximum amount of hardware parallelism on the
device. It is also very important that the thread mask remains consistent while
interfacing with the port.

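The sketch below illustrates how a packet sized for a whole warp or wavefront
might look. The names and the lane count are assumptions made for illustration
and do not reflect the exact ``libc`` layout.

.. code-block:: c++

  #include <cstdint>

  // Illustrative only: one fixed-size slot per lane plus the mask of active
  // lanes, so the server knows which slots to service. The lane count is
  // assumed to be 32 here; AMD wavefronts may use 64.
  constexpr uint32_t kLaneSize = 32;

  struct Buffer {
    uint64_t data[8]; // fixed-size payload for a single lane
  };

  struct Packet {
    uint64_t activemask;     // bit i is set if lane i participates in the call
    Buffer lanes[kLaneSize]; // one payload slot per lane in the group
  };
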
.. image:: ./rpc-diagram.svg
   :width: 75%
   :align: center

The above diagram outlines the architecture of the RPC interface. For clarity,
the following lists explain the operations performed by the client and the
server, respectively, when initiating a communication.

First, a communication from the perspective of the client:

* The client searches for an available port and claims the lock.
* The client checks that the port is still available to the current device and
  continues if so.
* The client writes its data to the fixed-size packet and toggles its outbox.
* The client waits until its inbox matches its outbox.
* The client reads the data from the fixed-size packet.
* The client closes the port and continues executing.

Now, the same communication from the perspective of the server:

* The server searches for an available port with pending work and claims the
  lock.
* The server checks that the port is still available to the current device.
* The server reads the opcode to perform the expected operation, in this
  case a receive and then send.
* The server reads the data from the fixed-size packet.
* The server writes its data to the fixed-size packet and toggles its outbox.
* The server closes the port and continues searching for ports that need to be
  serviced.

This architecture currently requires that the host periodically checks the RPC
server's buffer for ports with pending work. Note that a port can be closed
without waiting for its submitted work to be completed. This allows us to model
asynchronous operations that do not need to wait until the server has completed
them. If an operation requires more data than fits in the fixed-size buffer, we
simply send multiple packets back and forth in a streaming fashion.

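As a rough illustration of such streaming, the loop below chunks an arbitrary
payload into fixed-size pieces. It is a conceptual sketch rather than the
actual ``send_n`` implementation, and the packet size is an assumed value.

.. code-block:: c++

  #include <algorithm>
  #include <cstddef>
  #include <cstdint>

  // Conceptual sketch of streaming a payload larger than a single packet:
  // split it into fixed-size chunks and hand each chunk to a send callback.
  template <typename Send>
  void stream_bytes(const void *data, size_t size, Send &&send_packet) {
    constexpr size_t kPacketSize = 64; // assumed payload capacity of one packet
    const auto *src = static_cast<const uint8_t *>(data);
    for (size_t offset = 0; offset < size; offset += kPacketSize) {
      size_t len = std::min(kPacketSize, size - offset);
      send_packet(src + offset, len); // e.g. one send per chunk over the port
    }
  }
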
Server Library
--------------

The RPC server's basic functionality is provided by the LLVM C library. A
static library called ``libllvmlibc_rpc_server.a`` includes handling for the
basic operations, such as printing or exiting. It exposes a small API for
setting up the unified buffer and an interface for checking the opcodes.

Some operations are too divergent to provide generic implementations for, such
as allocating device-accessible memory. For these cases, we provide a callback
registration scheme to add a custom handler for any given opcode through the
port API. More information can be found in the installed header
``<install>/include/llvmlibc_rpc_server.h``.

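A registration under this scheme might look roughly like the sketch below. The
handler signature, the ``rpc_port_t`` name, and the ``rpc_register_callback``
entry point are assumptions made for illustration; the installed
``llvmlibc_rpc_server.h`` header is the authoritative source for the exact
names and signatures.

.. code-block:: c++

  #include <cstdint>

  #include <llvmlibc_rpc_server.h>

  // A user-chosen opcode for the custom operation (see the Extensions section).
  constexpr uint32_t MY_OPCODE = ('u' << 24) | 1;

  // Handler the server would invoke whenever a port with MY_OPCODE has pending
  // work. The signature is an assumption; the real callback type is declared in
  // the installed header.
  void handle_my_opcode(rpc_port_t port, void *data) {
    // Service the port here using the exported send/recv port API.
  }

  void register_my_handler(rpc_device_t device) {
    // Assumed registration entry point associating the handler with the opcode.
    rpc_register_callback(device, MY_OPCODE, handle_my_opcode, /*data=*/nullptr);
  }
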
Client Example
--------------

The Client API is not currently exported by the LLVM C library. This is
primarily due to being written in C++ and relying on internal data structures.
It uses a simple send and receive interface with a fixed-size packet. The
following example uses the RPC interface to call a function pointer on the
server.

This code first opens a port with the given opcode to facilitate the
communication. It then copies over the argument struct to the server using the
``send_n`` interface to stream arbitrary bytes. The next send operation provides
the server with the function pointer that will be executed. The final receive
operation is a no-op and simply forces the client to wait until the server is
done. It can be omitted if asynchronous execution is desired.

.. code-block:: c++

  void rpc_host_call(void *fn, void *data, size_t size) {
    rpc::Client::Port port = rpc::client.open<RPC_HOST_CALL>();
    port.send_n(data, size);
    port.send([=](rpc::Buffer *buffer) {
      buffer->data[0] = reinterpret_cast<uintptr_t>(fn);
    });
    port.recv([](rpc::Buffer *) {});
    port.close();
  }

Server Example
--------------

This example shows the server-side handling of the previous client example.
When the server is checked, if there are any ports with pending work, it will
check the opcode and perform the appropriate action. In this case, the action
is to call a function pointer provided by the client.

In this example, the server simply runs forever in a separate thread for
brevity's sake. Because the client is a GPU potentially handling several threads
at once, the server needs to loop over all the active threads on the GPU. We
abstract this into the ``lane_size`` variable, which is simply the device's warp
or wavefront size. The identifier is simply the thread's index into the current
warp or wavefront. We allocate memory to copy the struct data into, and then
call the given function pointer with that copied data. The final send simply
signals completion and uses the implicit thread mask to delete the temporary
data.

.. code-block:: c++

  for(;;) {
    auto port = server.try_open(index);
    if (!port)
      continue;

    switch(port->get_opcode()) {
    case RPC_HOST_CALL: {
      uint64_t sizes[LANE_SIZE];
      void *args[LANE_SIZE];
      port->recv_n(args, sizes, [&](uint64_t size) { return new char[size]; });
      port->recv([&](rpc::Buffer *buffer, uint32_t id) {
        reinterpret_cast<void (*)(void *)>(buffer->data[0])(args[id]);
      });
      port->send([&](rpc::Buffer *, uint32_t id) {
        delete[] reinterpret_cast<uint8_t *>(args[id]);
      });
      break;
    }
    default:
      port->recv([](rpc::Buffer *) {});
      break;
    }
    port->close();
  }

CUDA Server Example
-------------------

The following code shows an example of using the exported RPC interface along
with the C library to manually configure a working server using the CUDA
language. Other runtimes can use the presence of the ``__llvm_libc_rpc_client``
symbol in the GPU executable as an indicator for whether or not the server can
be checked. These details should ideally be handled by the GPU language runtime,
but the following example shows how it can be used by a standard user.

.. _libc_gpu_cuda_server:

.. code-block:: cuda

  #include <cstdio>
  #include <cstdlib>
  #include <cuda_runtime.h>

  #include <llvmlibc_rpc_server.h>

  [[noreturn]] void handle_error(cudaError_t err) {
    fprintf(stderr, "CUDA error: %s\n", cudaGetErrorString(err));
    exit(EXIT_FAILURE);
  }

  [[noreturn]] void handle_error(rpc_status_t err) {
    fprintf(stderr, "RPC error: %d\n", err);
    exit(EXIT_FAILURE);
  }

  // The handle to the RPC client provided by the C library.
  extern "C" __device__ void *__llvm_libc_rpc_client;

  __global__ void get_client_ptr(void **ptr) { *ptr = __llvm_libc_rpc_client; }

  // Obtain the RPC client's handle from the device. The CUDA language cannot look
  // up the symbol directly like the driver API, so we launch a kernel to read it.
  void *get_rpc_client() {
    void *rpc_client = nullptr;
    void **rpc_client_d = nullptr;

    if (cudaError_t err = cudaMalloc(&rpc_client_d, sizeof(void *)))
      handle_error(err);
    get_client_ptr<<<1, 1>>>(rpc_client_d);
    if (cudaError_t err = cudaDeviceSynchronize())
      handle_error(err);
    if (cudaError_t err = cudaMemcpy(&rpc_client, rpc_client_d, sizeof(void *),
                                     cudaMemcpyDeviceToHost))
      handle_error(err);
    return rpc_client;
  }

  // Routines to allocate mapped memory that both the host and the device can
  // access asynchronously to communicate with each other.
  void *alloc_host(size_t size, void *) {
    void *sharable_ptr;
    if (cudaError_t err = cudaMallocHost(&sharable_ptr, size))
      handle_error(err);
    return sharable_ptr;
  }

  void free_host(void *ptr, void *) {
    if (cudaError_t err = cudaFreeHost(ptr))
      handle_error(err);
  }

  // The device-side overload of the standard C function to call.
  extern "C" __device__ int puts(const char *);

  // Calls the C library function from the GPU C library.
  __global__ void hello() { puts("Hello world!"); }

  int main() {
    // Initialize the RPC server to run on the given device.
    rpc_device_t device;
    if (rpc_status_t err =
            rpc_server_init(&device, RPC_MAXIMUM_PORT_COUNT,
                            /*warp_size=*/32, alloc_host, /*data=*/nullptr))
      handle_error(err);

    // Initialize the RPC client by copying the buffer to the device's handle.
    void *rpc_client = get_rpc_client();
    if (cudaError_t err =
            cudaMemcpy(rpc_client, rpc_get_client_buffer(device),
                       rpc_get_client_size(), cudaMemcpyHostToDevice))
      handle_error(err);

    cudaStream_t stream;
    if (cudaError_t err = cudaStreamCreate(&stream))
      handle_error(err);

    // Execute the kernel.
    hello<<<1, 1, 0, stream>>>();

    // While the kernel is executing, check the RPC server for work to do.
    // Requires non-blocking CUDA kernels but avoids a separate thread.
    while (cudaStreamQuery(stream) == cudaErrorNotReady)
      if (rpc_status_t err = rpc_handle_server(device))
        handle_error(err);

    // Shut down the server running on the given device.
    if (rpc_status_t err =
            rpc_server_shutdown(device, free_host, /*data=*/nullptr))
      handle_error(err);

    return EXIT_SUCCESS;
  }

The above code must be compiled in CUDA's relocatable device code mode and with
the advanced offloading driver to link in the library. Currently, this can be
done with the following invocation. Using LTO avoids the overhead normally
associated with relocatable device code linking.

.. code-block:: sh

  $> clang++ -x cuda rpc.cpp --offload-arch=native -fgpu-rdc -lcudart -lcgpu-nvptx \
       -I<install-path>/include -L<install-path>/lib -lllvmlibc_rpc_server \
       -O3 -foffload-lto -o hello
  $> ./hello
  Hello world!

Extensions
----------

The opcode is a 32-bit integer that must be unique to the requested operation.
All opcodes used by ``libc`` internally have the character ``c`` in the most
significant byte.

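For instance, a user-defined opcode such as the hypothetical ``MY_OPCODE`` from
the callback registration sketch above only needs to keep ``c`` out of its most
significant byte.

.. code-block:: c++

  #include <cstdint>

  // Hypothetical user-defined opcode: any 32-bit value whose most significant
  // byte is not the character 'c' avoids colliding with libc's internal opcodes.
  constexpr uint32_t MY_OPCODE = ('u' << 24) | 1;
  static_assert((MY_OPCODE >> 24) != 'c', "the 'c' byte is reserved by libc");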