.. _module-pw_transfer:

===========
pw_transfer
===========

.. attention::

   ``pw_transfer`` is under construction and so is its documentation.

-----
Usage
-----

C++
===
The transfer service is defined and registered with an RPC server like any
other RPC service.

To know how to read data from or write data to a device, the transfer service
relies on the ``TransferHandler`` interface, defined in
``pw_transfer/public/pw_transfer/handler.h``. Transfer handlers wrap a stream
reader and/or writer with initialization and completion code. Custom transfer
handler implementations should derive from ``ReadOnlyHandler``,
``WriteOnlyHandler``, or ``ReadWriteHandler`` as appropriate and override the
``Prepare`` and ``Finalize`` methods if necessary.

A transfer handler should be implemented and instantiated for each unique data
transfer to or from a device. These handlers are then registered with the
transfer service using their transfer IDs.

**Example**

.. code-block:: cpp

   #include <cstddef>

   #include "pw_transfer/transfer.h"

   namespace {

   // Simple transfer handler which reads data from an in-memory buffer.
   class SimpleBufferReadHandler : public pw::transfer::ReadOnlyHandler {
    public:
     SimpleBufferReadHandler(uint32_t transfer_id, pw::ConstByteSpan data)
         : ReadOnlyHandler(transfer_id), reader_(data) {
       set_reader(reader_);
     }

    private:
     pw::stream::MemoryReader reader_;
   };

   // The maximum amount of data that can be sent in a single chunk, excluding
   // transport layer overhead.
   constexpr size_t kMaxChunkSizeBytes = 256;

   // In a write transfer, the maximum number of bytes to receive at one time
   // (potentially across multiple chunks), unless specified otherwise by the
   // transfer handler's stream::Writer.
   constexpr size_t kDefaultMaxBytesToReceive = 1024;

   // Instantiate a static transfer service.
   // The service requires a work queue and a buffer to store data from a chunk.
   // The helper class TransferServiceBuffer comes with a built-in buffer.
   pw::transfer::TransferServiceBuffer<kMaxChunkSizeBytes> transfer_service(
       GetSystemWorkQueue(), kDefaultMaxBytesToReceive);

   // Instantiate a handler for the data to be transferred.
   constexpr uint32_t kBufferTransferId = 1;
   std::byte buffer_to_transfer[256] = { /* ... */ };
   SimpleBufferReadHandler buffer_handler(kBufferTransferId, buffer_to_transfer);

   }  // namespace

   void InitTransfer() {
     // Register the handler with the transfer service, then the transfer
     // service with an RPC server.
     transfer_service.RegisterHandler(buffer_handler);
     GetSystemRpcServer().RegisterService(transfer_service);
   }

Module Configuration Options
----------------------------
The following configurations can be adjusted via compile-time configuration of
this module; see the
:ref:`module documentation <module-structure-compile-time-configuration>` for
more details.

.. c:macro:: PW_TRANSFER_DEFAULT_MAX_RETRIES

   The default maximum number of times a transfer should retry sending a chunk
   when no response is received. This can later be configured per transfer.

.. c:macro:: PW_TRANSFER_DEFAULT_TIMEOUT_MS

   The default amount of time, in milliseconds, to wait for a chunk to arrive
   before retrying. This can later be configured per transfer.

.. c:macro:: PW_TRANSFER_DEFAULT_EXTEND_WINDOW_DIVISOR

   The fractional position within a window at which a receive transfer should
   extend its window size to minimize the amount of time the transmitter
   spends blocked.

   For example, a divisor of 2 extends the window once half of the requested
   data has been received, a divisor of 3 at a third of the window, and so on.

Python
======
.. automodule:: pw_transfer
   :members: ProgressStats, Manager, Error

**Example**

.. code-block:: python

   import pw_transfer

   # Initialize a Pigweed RPC client; see the pw_rpc docs for more info.
   rpc_client = CustomRpcClient()
   rpcs = rpc_client.channel(1).rpcs

   transfer_service = rpcs.pw.transfer.Transfer
   transfer_manager = pw_transfer.Manager(transfer_service)

   try:
       # Read transfer_id 3 from the server.
       data = transfer_manager.read(3)
   except pw_transfer.Error as err:
       print('Failed to read:', err.status)

   try:
       # Send some data to the server. The transfer manager does not have to
       # be reinitialized.
       transfer_manager.write(2, b'hello, world')
   except pw_transfer.Error as err:
       print('Failed to write:', err.status)
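Transfers report progress while they run. The snippet below is a minimal
sketch that assumes ``Manager.read()`` and ``Manager.write()`` accept an
optional progress callback invoked with ``ProgressStats`` updates, mirroring
the TypeScript API shown later; consult the generated API reference above for
the exact signature.

.. code-block:: python

   import pw_transfer

   def print_progress(stats: pw_transfer.ProgressStats) -> None:
       # ProgressStats describes how much of the transfer has completed.
       print('Transfer progress:', stats)

   try:
       # `transfer_manager` is the Manager created in the example above;
       # the `progress_callback` keyword argument is assumed here.
       data = transfer_manager.read(3, progress_callback=print_progress)
   except pw_transfer.Error as err:
       print('Failed to read:', err.status)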
TypeScript
==========

Provides a simple interface for transferring bulk data over pw_rpc.

**Example**

.. code-block:: typescript

   import {Manager} from '@pigweed/pw_transfer';

   const client = new CustomRpcClient();
   const service = client.channel()!.service('pw.transfer.Transfer')!;

   const manager = new Manager(service, DEFAULT_TIMEOUT_S);
   const textEncoder = new TextEncoder();

   manager.read(3, (stats: ProgressStats) => {
     console.log(`Progress Update: ${stats}`);
   }).then((data: Uint8Array) => {
     console.log(`Completed read: ${data}`);
   }).catch(error => {
     console.log(`Failed to read: ${error.status}`);
   });

   manager.write(2, textEncoder.encode('hello world'))
     .catch(error => {
       console.log(`Failed to write: ${error.status}`);
     });

--------
Protocol
--------

Protocol buffer definition
==========================
.. literalinclude:: transfer.proto
   :language: protobuf
   :lines: 14-

Server to client transfer (read)
================================
.. image:: read.svg

Client to server transfer (write)
=================================
.. image:: write.svg

Errors
======

Protocol errors
---------------
At any point, either the client or server may terminate the transfer with a
status code. The chunk carrying the status code is the final chunk of the
transfer.

The following table describes the meaning of each status code when sent by the
sender or the receiver (see `Transfer roles`_).

.. cpp:namespace-push:: pw::stream

+-------------------------+-------------------------+-------------------------+
| Status                  | Sent by sender          | Sent by receiver        |
+=========================+=========================+=========================+
| ``OK``                  | (not sent)              | All data was received   |
|                         |                         | and handled             |
|                         |                         | successfully.           |
+-------------------------+-------------------------+-------------------------+
| ``ABORTED``             | The service aborted the transfer because the      |
|                         | client restarted it. This status is passed to the |
|                         | transfer handler, but not sent to the client      |
|                         | because it restarted the transfer.                |
+-------------------------+---------------------------------------------------+
| ``CANCELLED``           | The client cancelled the transfer.                |
+-------------------------+-------------------------+-------------------------+
| ``DATA_LOSS``           | Failed to read the data | Failed to write the     |
|                         | to send. The            | received data. The      |
|                         | :cpp:class:`Reader`     | :cpp:class:`Writer`     |
|                         | returned an error.      | returned an error.      |
+-------------------------+-------------------------+-------------------------+
| ``FAILED_PRECONDITION`` | Received chunk for transfer that is not active.   |
+-------------------------+-------------------------+-------------------------+
| ``INVALID_ARGUMENT``    | Received a malformed packet.                      |
+-------------------------+-------------------------+-------------------------+
| ``INTERNAL``            | An assumption of the protocol was violated.       |
|                         | Encountering ``INTERNAL`` indicates that there is |
|                         | a bug in the service or client implementation.    |
+-------------------------+-------------------------+-------------------------+
| ``PERMISSION_DENIED``   | The transfer does not support the requested       |
|                         | operation (either reading or writing).            |
+-------------------------+-------------------------+-------------------------+
| ``RESOURCE_EXHAUSTED``  | The receiver requested  | Storage is full.        |
|                         | zero bytes, indicating  |                         |
|                         | their storage is full,  |                         |
|                         | but there is still data |                         |
|                         | to send.                |                         |
+-------------------------+-------------------------+-------------------------+
| ``UNAVAILABLE``         | The service is busy with other transfers and      |
|                         | cannot begin a new transfer at this time.         |
+-------------------------+-------------------------+-------------------------+
| ``UNIMPLEMENTED``       | Out-of-order chunk was  | (not sent)              |
|                         | requested, but seeking  |                         |
|                         | is not supported.       |                         |
+-------------------------+-------------------------+-------------------------+

.. cpp:namespace-pop::

Client errors
-------------
``pw_transfer`` clients may immediately return certain errors if they cannot
start a transfer; a sketch of handling these statuses from the Python client
follows this table.

.. list-table::

   * - **Status**
     - **Reason**
   * - ``ALREADY_EXISTS``
     - A transfer with the requested ID is already pending on this client.
   * - ``DATA_LOSS``
     - Sending the initial transfer chunk failed.
   * - ``RESOURCE_EXHAUSTED``
     - The client has insufficient resources to start an additional transfer
       at this time.
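Errors raised by the Python client surface as ``pw_transfer.Error``, as in the
usage example above. The following is a minimal sketch of branching on the
status, assuming ``Error.status`` is a ``pw_status.Status`` value.

.. code-block:: python

   from pw_status import Status

   import pw_transfer

   try:
       # `transfer_manager` is the Manager created in the Python example above.
       data = transfer_manager.read(3)
   except pw_transfer.Error as err:
       if err.status is Status.ALREADY_EXISTS:
           # A transfer with ID 3 is already pending on this client.
           pass
       elif err.status is Status.RESOURCE_EXHAUSTED:
           # Too many concurrent transfers; retry after one completes.
           pass
       else:
           # Any other status, including one sent by the server to terminate
           # the transfer, is treated as a failure here.
           raise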
Transfer roles
==============
Every transfer has two participants: the sender and the receiver. The sender
transmits data to the receiver. The receiver controls how the data is
transferred and sends the final status when the transfer is complete.

In read transfers, the client is the receiver and the service is the sender. In
write transfers, the client is the sender and the service is the receiver.

Sender flow
-----------
.. mermaid::

   graph TD
      start([Client initiates<br>transfer]) -->data_request
      data_request[Receive transfer<br>parameters]-->send_chunk

      send_chunk[Send chunk]-->sent_all

      sent_all{Sent final<br>chunk?} -->|yes|wait
      sent_all-->|no|sent_requested

      sent_requested{Sent all<br>pending?}-->|yes|data_request
      sent_requested-->|no|send_chunk

      wait[Wait for receiver]-->is_done

      is_done{Received<br>final chunk?}-->|yes|done
      is_done-->|no|data_request

      done([Transfer complete])
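The same flow, written out as pseudocode. This is purely an illustrative
restatement of the diagram above, not the actual ``pw_transfer``
implementation; every helper on the hypothetical ``transfer`` object is
invented for the sketch.

.. code-block:: python

   def run_sender(transfer):
       """Illustrative pseudocode for the sender side of a transfer."""
       while True:
           # Receive transfer parameters (offset, pending bytes, chunk size)
           # from the receiver.
           params = transfer.receive_parameters()

           while True:
               chunk = transfer.next_chunk(params)
               transfer.send(chunk)

               if chunk.is_final:
                   # Wait for the receiver to respond with its final chunk.
                   if transfer.received_final_chunk():
                       return  # Transfer complete.
                   break  # Receiver wants more data; get new parameters.

               if transfer.sent_all_pending(params):
                   break  # Window exhausted; wait for new parameters.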
Receiver flow
-------------
.. mermaid::

   graph TD
      start([Client initiates<br>transfer]) -->request_bytes
      request_bytes[Set transfer<br>parameters]-->wait

      wait[Wait for chunk]-->received_chunk

      received_chunk{Received<br>chunk by<br>deadline?}-->|no|request_bytes
      received_chunk-->|yes|check_chunk

      check_chunk{Correct<br>offset?} -->|yes|process_chunk
      check_chunk --> |no|request_bytes

      process_chunk[Process chunk]-->final_chunk

      final_chunk{Final<br>chunk?}-->|yes|signal_completion
      final_chunk-->|no|received_requested

      received_requested{Received all<br>pending?}-->|yes|request_bytes
      received_requested-->|no|wait

      signal_completion[Signal completion]-->done

      done([Transfer complete])
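And the receiver side, again as an illustrative pseudocode restatement of the
diagram above rather than the actual implementation; the helpers on
``transfer`` are invented for the sketch.

.. code-block:: python

   def run_receiver(transfer):
       """Illustrative pseudocode for the receiver side of a transfer."""
       while True:
           # Tell the sender what to send next (offset, window size, etc.).
           transfer.send_transfer_parameters()

           while True:
               chunk = transfer.wait_for_chunk()
               if chunk is None:
                   break  # Deadline expired; resend the transfer parameters.

               if chunk.offset != transfer.expected_offset():
                   break  # Wrong offset; request retransmission.

               transfer.process(chunk)  # e.g. hand the data to the Writer.

               if chunk.is_final:
                   transfer.signal_completion()  # Send the final status chunk.
                   return

               if transfer.received_all_pending():
                   break  # Window consumed; extend it with new parameters.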