curl internals
==============

 - [Intro](#intro)
 - [git](#git)
 - [Portability](#Portability)
 - [Windows vs Unix](#winvsunix)
 - [Library](#Library)
 - [`Curl_connect`](#Curl_connect)
 - [`multi_do`](#multi_do)
 - [`Curl_readwrite`](#Curl_readwrite)
 - [`multi_done`](#multi_done)
 - [`Curl_disconnect`](#Curl_disconnect)
 - [HTTP(S)](#http)
 - [FTP](#ftp)
 - [Kerberos](#kerberos)
 - [TELNET](#telnet)
 - [FILE](#file)
 - [SMB](#smb)
 - [LDAP](#ldap)
 - [E-mail](#email)
 - [General](#general)
 - [Persistent Connections](#persistent)
 - [multi interface/non-blocking](#multi)
 - [SSL libraries](#ssl)
 - [Library Symbols](#symbols)
 - [Return Codes and Informationals](#returncodes)
 - [API/ABI](#abi)
 - [Client](#client)
 - [Memory Debugging](#memorydebug)
 - [Test Suite](#test)
 - [Asynchronous name resolves](#asyncdns)
 - [c-ares](#cares)
 - [`curl_off_t`](#curl_off_t)
 - [curlx](#curlx)
 - [Content Encoding](#contentencoding)
 - [`hostip.c` explained](#hostip)
 - [Track Down Memory Leaks](#memoryleak)
 - [`multi_socket`](#multi_socket)
 - [Structs in libcurl](#structs)
 - [Curl_easy](#Curl_easy)
 - [connectdata](#connectdata)
 - [Curl_multi](#Curl_multi)
 - [Curl_handler](#Curl_handler)
 - [conncache](#conncache)
 - [Curl_share](#Curl_share)
 - [CookieInfo](#CookieInfo)

<a name="intro"></a>
Intro
=====

 This project is split in two: the library and the client. The client part
 uses the library, but the library is designed to allow other applications to
 use it.

 The largest amount of code and complexity is in the library part.

<a name="git"></a>
git
===

 All changes to the sources are committed to the git repository as soon as
 they're somewhat verified to work. Changes shall be committed as
 independently as possible so that individual changes can be easily spotted
 and tracked afterwards.

 Tagging shall be used extensively, and by the time we release new archives we
 should tag the sources with a name similar to the released version number.

<a name="Portability"></a>
Portability
===========

 We write curl and libcurl to compile with C89 compilers, on 32-bit and up
 machines. Most of libcurl assumes more or less POSIX compliance but that's
 not a requirement.

 We write libcurl to build and work with lots of third party tools, and we
 want it to remain functional and buildable with these and later versions
 (older versions may still work but are not what we work hard to maintain):

Dependencies
------------

 - OpenSSL 0.9.7
 - GnuTLS 3.1.10
 - zlib 1.1.4
 - libssh2 1.0
 - c-ares 1.6.0
 - libidn2 2.0.0
 - wolfSSL 2.0.0
 - openldap 2.0
 - MIT Kerberos 1.2.4
 - GSKit V5R3M0
 - NSS 3.14.x
 - Heimdal ?
 - nghttp2 1.12.0
 - WinSock 2.2 (on Windows 95+ and Windows CE .NET 4.1+)

Operating Systems
-----------------

 On systems where configure runs, we aim at working on them all - if they have
 a suitable C compiler. On systems that don't run configure, we strive to keep
 curl running correctly on:

 - Windows 98
 - AS/400 V5R3M0
 - Symbian 9.1
 - Windows CE ?
 - TPF ?

Build tools
-----------

 When writing code (mostly for generating stuff included in release tarballs)
 we use a few "build tools" and we make sure that we remain functional with
 these versions:

 - GNU Libtool 1.4.2
 - GNU Autoconf 2.57
 - GNU Automake 1.7
 - GNU M4 1.4
 - perl 5.004
 - roffit 0.5
 - groff ? (any version that supports `groff -Tps -man [in] [out]`)
 - ps2pdf (gs) ?

<a name="winvsunix"></a>
Windows vs Unix
===============

 There are a few differences in how to program curl the Unix way compared to
 the Windows way. Perhaps the four most notable details are:

 1. Different function names for socket operations.

    In curl, this is solved with defines and macros, so that the source looks
    the same in all places except for the header file that defines them. The
    macros in use are `sclose()`, `sread()` and `swrite()`.

 2. Windows requires a couple of init calls for the socket stuff.

    That's taken care of by the `curl_global_init()` call, but if other libs
    also do it etc there might be reasons for applications to alter that
    behavior.

    We require WinSock version 2.2 and load this version during global init.

 3. The file descriptors for network communication and file operations are
    not as easily interchangeable as in Unix.

    We avoid this by not trying any funny tricks on file descriptors.

 4. When writing data to stdout, Windows makes end-of-lines the DOS way, thus
    destroying binary data, although you do want that conversion if it is
    text coming through... (sigh)

    We set stdout to binary mode under Windows.

 Inside the source code, we make an effort to avoid `#ifdef [Your OS]`. All
 conditionals that deal with features *should* instead be in the format
 `#ifdef HAVE_THAT_WEIRD_FUNCTION`. Since Windows can't run configure scripts,
 we maintain a `curl_config-win32.h` file in the lib directory that is
 supposed to look exactly as a `curl_config.h` file would look on a Windows
 machine.

 Generally speaking: always remember that this will be compiled on dozens of
 operating systems. Don't walk on the edge!

<a name="Library"></a>
Library
=======

 (See [Structs in libcurl](#structs) for the separate section describing all
 major internal structs and their purposes.)

 There are plenty of entry points to the library, namely each publicly defined
 function that libcurl offers to applications. All of those functions are
 rather small and easy-to-follow. All the ones prefixed with `curl_easy` are
 put in the `lib/easy.c` file.

 `curl_global_init()` and `curl_global_cleanup()` should be called by the
 application to initialize and clean up global stuff in the library. As of
 today, it can handle the global SSL initialization if SSL is enabled and it
 can initialize the socket layer on Windows machines. libcurl itself has no
 "global" scope.

 All printf()-style functions use the supplied clones in `lib/mprintf.c`. This
 makes sure we stay absolutely platform independent.

 [`curl_easy_init()`][2] allocates an internal struct and makes some
 initializations. The returned handle does not reveal internals. This is the
 `Curl_easy` struct which works as an "anchor" struct for all `curl_easy`
 functions. All connections performed will get connect-specific data allocated
 that should be used for things related to particular connections/requests.
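
 As a minimal sketch of how an application typically drives these entry points
 (`curl_easy_setopt()` and `curl_easy_perform()` are described next; the URL
 is only an example):

```c
#include <curl/curl.h>

int main(void)
{
  CURL *handle;
  CURLcode result = CURLE_FAILED_INIT;

  curl_global_init(CURL_GLOBAL_DEFAULT); /* global SSL/WinSock setup */
  handle = curl_easy_init();             /* allocates the internal Curl_easy */
  if(handle) {
    curl_easy_setopt(handle, CURLOPT_URL, "https://example.com/");
    result = curl_easy_perform(handle);  /* runs the whole transfer */
    curl_easy_cleanup(handle);
  }
  curl_global_cleanup();
  return (int)result;
}
```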

 [`curl_easy_setopt()`][1] takes three arguments, where the option stuff must
 be passed in pairs: the parameter-ID and the parameter-value. The list of
 options is documented in the man page. This function mainly sets things in
 the `Curl_easy` struct.

 `curl_easy_perform()` is just a wrapper function that makes use of the multi
 API. It basically calls `curl_multi_init()`, `curl_multi_add_handle()`,
 `curl_multi_wait()`, and `curl_multi_perform()` until the transfer is done
 and then returns.

 Some of the most important key functions in `url.c` are called from
 `multi.c` when certain key steps are to be made in the transfer operation.

<a name="Curl_connect"></a>
Curl_connect()
--------------

 Analyzes the URL. It separates the different components and connects to the
 remote host. This may involve using a proxy and/or using SSL. The
 `Curl_resolv()` function in `lib/hostip.c` is used for looking up host
 names (it does then use the proper underlying method, which may vary
 between platforms and builds).

 When `Curl_connect` is done, we are connected to the remote site. Then it
 is time to tell the server to get a document/file. `Curl_do()` arranges
 this.

 This function makes sure there's an allocated and initiated `connectdata`
 struct that is used for this particular connection only (although there may
 be several requests performed on the same connect). A bunch of things are
 initialized/inherited from the `Curl_easy` struct.

<a name="multi_do"></a>
multi_do()
----------

 `multi_do()` makes sure the proper protocol-specific function is called.
 The functions are named after the protocols they handle.

 The protocol-specific functions of course deal with protocol-specific
 negotiations and setup. When they're ready to start the actual file
 transfer they call the `Curl_setup_transfer()` function (in
 `lib/transfer.c`) to set up the transfer and return.

 If this DO function fails and the connection is being re-used, libcurl will
 then close this connection, set up a new connection and re-issue the DO
 request on that. This is because there is no way to be perfectly sure that
 we have discovered a dead connection before the DO function and thus we
 might wrongly be re-using a connection that was closed by the remote peer.

<a name="Curl_readwrite"></a>
Curl_readwrite()
----------------

 Called during the transfer of the actual protocol payload.

 During transfer, the progress functions in `lib/progress.c` are called at
 frequent intervals (or at the user's choice, a specified callback might get
 called). The speedcheck functions in `lib/speedcheck.c` are also used to
 verify that the transfer is as fast as required.

<a name="multi_done"></a>
multi_done()
------------

 Called after a transfer is done. This function takes care of everything that
 has to be done after a transfer and attempts to leave matters in a state so
 that `multi_do()` should be possible to call again on the same connection
 (in a persistent connection case). The connection might also soon be closed
 with `Curl_disconnect()`.
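
 The `curl_easy_perform()` wrapper described earlier roughly corresponds to a
 loop like the following (a simplified sketch rather than the actual `easy.c`
 code; error handling omitted):

```c
#include <curl/curl.h>

/* 'easy' is an already configured easy handle */
static CURLcode perform_via_multi(CURL *easy)
{
  CURLM *multi = curl_multi_init();
  int still_running = 1;

  curl_multi_add_handle(multi, easy);
  while(still_running) {
    int numfds;
    curl_multi_perform(multi, &still_running);        /* drive the transfer */
    if(still_running)
      curl_multi_wait(multi, NULL, 0, 1000, &numfds); /* wait for activity */
  }
  curl_multi_remove_handle(multi, easy);
  curl_multi_cleanup(multi);
  /* the real wrapper picks up the transfer result via the multi message list */
  return CURLE_OK;
}
```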

<a name="Curl_disconnect"></a>
Curl_disconnect()
-----------------

 When doing normal connections and transfers, no one ever tries to close any
 connections so this is not normally called when `curl_easy_perform()` is
 used. This function is only used when we are certain that no more transfers
 are going to be made on the connection. It can also be called to close a
 connection by force, or to make sure that libcurl doesn't keep too many
 connections alive at the same time.

 This function cleans up all resources that are associated with a single
 connection.

<a name="http"></a>
HTTP(S)
=======

 HTTP offers a lot and is the protocol in curl that uses the most lines of
 code. There is a special file `lib/formdata.c` that offers all the
 multipart post functions.

 base64 functions for user+password stuff (and more) are in `lib/base64.c`
 and all functions for parsing and sending cookies are found in
 `lib/cookie.c`.

 HTTPS uses in almost every case the same procedure as HTTP, with only two
 exceptions: the connect procedure is different and the function used to read
 or write from the socket is different, although the latter fact is hidden in
 the source by the use of `Curl_read()` for reading and `Curl_write()` for
 writing data to the remote server.

 `http_chunks.c` contains functions that understand HTTP 1.1 chunked transfer
 encoding.

 An interesting detail with the HTTP(S) request is the `Curl_add_buffer()`
 series of functions we use. They append data to one single buffer, and when
 the building is finished the entire request is sent off in one single write.
 This is done this way to overcome problems with flawed firewalls and lame
 servers.

<a name="ftp"></a>
FTP
===

 The `Curl_if2ip()` function can be used for getting the IP number of a
 specified network interface, and it resides in `lib/if2ip.c`.

 `Curl_ftpsendf()` is used for sending FTP commands to the remote server. It
 was made a separate function to prevent us programmers from forgetting that
 the commands must be CRLF terminated. They must also be sent in one single
 `write()` to make firewalls and similar happy.

<a name="kerberos"></a>
Kerberos
========

 Kerberos support is mainly in `lib/krb5.c` but also `curl_sasl_sspi.c` and
 `curl_sasl_gssapi.c` for the email protocols and `socks_gssapi.c` and
 `socks_sspi.c` for SOCKS5 proxy specifics.

<a name="telnet"></a>
TELNET
======

 Telnet is implemented in `lib/telnet.c`.

<a name="file"></a>
FILE
====

 The `file://` protocol is dealt with in `lib/file.c`.

<a name="smb"></a>
SMB
===

 The `smb://` protocol is dealt with in `lib/smb.c`.

<a name="ldap"></a>
LDAP
====

 Everything LDAP is in `lib/ldap.c` and `lib/openldap.c`.

<a name="email"></a>
E-mail
======

 The e-mail related source code is in `lib/imap.c`, `lib/pop3.c` and
 `lib/smtp.c`.

<a name="general"></a>
General
=======

 URL encoding and decoding, called escaping and unescaping in the source code,
 is found in `lib/escape.c`.

 While transferring data in `Transfer()` a few functions might get used.
 `curl_getdate()` in `lib/parsedate.c` is for HTTP date comparisons (and
 more).

 `lib/getenv.c` offers `curl_getenv()` which is for reading environment
 variables in a neat platform independent way. That's used in the client, but
 also in `lib/url.c` when checking the proxy environment variables. Note that
 contrary to the normal Unix `getenv()`, this returns an allocated buffer that
 must be `free()`ed after use.
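
 A small sketch of the difference compared to plain `getenv()` (the variable
 name is only an example):

```c
#include <stdio.h>
#include <stdlib.h>
#include <curl/curl.h>

int main(void)
{
  char *proxy = curl_getenv("http_proxy"); /* allocated copy, or NULL */
  if(proxy) {
    printf("proxy: %s\n", proxy);
    free(proxy); /* unlike getenv(), the returned buffer must be freed */
  }
  return 0;
}
```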

 `lib/netrc.c` holds the `.netrc` parser.

 `lib/timeval.c` features replacement functions for systems that don't have
 `gettimeofday()` and a few support functions for timeval conversions.

 A function named `curl_version()` that returns the full curl version string
 is found in `lib/version.c`.

<a name="persistent"></a>
Persistent Connections
======================

 The persistent connection support in libcurl requires some considerations on
 how to do things inside of the library.

 - The `Curl_easy` struct returned in the [`curl_easy_init()`][2] call
   must never hold connection-oriented data. It is meant to hold the root data
   as well as all the options etc that the library-user may choose.

 - The `Curl_easy` struct holds the "connection cache" (an array of
   pointers to `connectdata` structs).

 - This enables the 'curl handle' to be reused on subsequent transfers.

 - When libcurl is told to perform a transfer, it first checks for an already
   existing connection in the cache that we can use. Otherwise it creates a
   new one and adds that to the cache. If the cache is full already when a new
   connection is added, it will first close the oldest unused one.

 - When the transfer operation is complete, the connection is left
   open. Particular options may tell libcurl not to, and protocols may signal
   closure on connections and then they won't be kept open, of course.

 - When `curl_easy_cleanup()` is called, we close all still opened
   connections, unless of course the multi interface "owns" the connections.

 The curl handle must be re-used in order for the persistent connections to
 work.

<a name="multi"></a>
multi interface/non-blocking
============================

 The multi interface is a non-blocking interface to the library. To make that
 interface work as well as possible, no low-level functions within libcurl
 must be written to work in a blocking manner. (There are still a few spots
 violating this rule.)

 One of the primary reasons we introduced c-ares support was to allow the name
 resolve phase to be perfectly non-blocking as well.

 The FTP and the SFTP/SCP protocols are examples of how we adapt and adjust
 the code to allow non-blocking operations even on multi-stage command-
 response protocols. They are built around state machines that return when
 they would otherwise block waiting for data. The DICT, LDAP and TELNET
 protocols are crappy examples and they are subject to rewrite in the future
 to better fit the libcurl protocol family.

<a name="ssl"></a>
SSL libraries
=============

 Originally libcurl supported SSLeay for SSL/TLS transports. That support was
 extended to its successor OpenSSL and has since also been extended to several
 other SSL/TLS libraries, and we expect and hope to further extend the support
 in future libcurl versions.

 To deal with this internally in the best way possible, we have a generic SSL
 function API as provided by the `vtls/vtls.[ch]` system, and these are the
 only SSL functions we must use from within libcurl. vtls is then crafted to
 use the appropriate lower-level function calls to whatever SSL library that
 is in use. For example `vtls/openssl.[ch]` for the OpenSSL library.
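
 The idea can be sketched as a per-backend table of function pointers that the
 generic vtls code calls through. The names below are illustrative only, not
 the actual vtls types:

```c
#include <stddef.h>

struct tls_ctx; /* opaque per-connection TLS state, owned by the backend */

/* one such table per supported TLS library (OpenSSL, GnuTLS, ...) */
struct tls_backend_ops {
  int (*connect_step)(struct tls_ctx *ctx);  /* drive the TLS handshake */
  int (*send)(struct tls_ctx *ctx, const void *buf, size_t len);
  int (*recv)(struct tls_ctx *ctx, void *buf, size_t len);
  void (*close)(struct tls_ctx *ctx);
};
```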

<a name="symbols"></a>
Library Symbols
===============

 All symbols used internally in libcurl must use a `Curl_` prefix if they're
 used in more than a single file. Single-file symbols must be made static.
 Public ("exported") symbols must use a `curl_` prefix. (There are exceptions,
 but they are to be changed to follow this pattern in future versions.) Public
 API functions are marked with `CURL_EXTERN` in the public header files so
 that all others can be hidden on platforms where this is possible.

<a name="returncodes"></a>
Return Codes and Informationals
===============================

 I've made things simple. Almost every function in libcurl returns a CURLcode,
 that must be `CURLE_OK` if everything is OK or otherwise a suitable error
 code as the `curl/curl.h` include file defines. The very spot that detects an
 error must use the `Curl_failf()` function to set the human-readable error
 description.

 In aiding the user to understand what's happening and to debug curl usage, we
 must supply a fair number of informational messages by using the
 `Curl_infof()` function. Those messages are only displayed when the user
 explicitly asks for them. They are best used when revealing information that
 isn't otherwise obvious.

<a name="abi"></a>
API/ABI
=======

 We make an effort to not export or show internals or how internals work, as
 that makes it easier to keep a solid API/ABI over time. See docs/libcurl/ABI
 for our promise to users.

<a name="client"></a>
Client
======

 `main()` resides in `src/tool_main.c`.

 `src/tool_hugehelp.c` is automatically generated by the `mkhelp.pl` perl
 script to display the complete "manual" and the `src/tool_urlglob.c` file
 holds the functions used for the URL-"globbing" support. Globbing in the
 sense that the `{}` and `[]` expansion stuff is there.

 The client mostly sets up its `config` struct properly, then
 it calls the `curl_easy_*()` functions of the library and when it gets back
 control after the `curl_easy_perform()` it cleans up the library, checks
 status and exits.

 When the operation is done, the `ourWriteOut()` function in `src/writeout.c`
 may be called to report about the operation. That function is mostly using
 the `curl_easy_getinfo()` function to extract useful information from the
 curl session.

 It may loop and do all this several times if many URLs were specified on the
 command line or config file.

<a name="memorydebug"></a>
Memory Debugging
================

 The file `lib/memdebug.c` contains debug-versions of a few functions, such as
 `malloc()`, `free()`, `fopen()`, `fclose()` etc, that somehow deal with
 resources that might give us problems if we "leak" them. The functions in the
 memdebug system do nothing fancy: they do their normal job and then log
 information about what they just did. The logged data can then be analyzed
 after a complete session.

 `memanalyze.pl` is the perl script present in `tests/` that analyzes a log
 file generated by the memory tracking system. It detects if resources are
 allocated but never freed and other kinds of errors related to resource
 management.
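
 The wrap-and-log idea can be sketched like this (a conceptual example only,
 not the actual `memdebug.c` code):

```c
#include <stdio.h>
#include <stdlib.h>

static FILE *memlog; /* opened by a curl_dbg_memdebug()-style call */

/* do the real allocation, then log where it happened so a script can later
   pair every allocation with its corresponding free */
void *wrap_malloc(size_t size, const char *source, int line)
{
  void *ptr = malloc(size);
  if(memlog)
    fprintf(memlog, "MEM %s:%d malloc(%lu) = %p\n",
            source, line, (unsigned long)size, ptr);
  return ptr;
}
```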

 Internally, the preprocessor symbol `DEBUGBUILD` guards code that is only
 compiled for debug-enabled builds, while the symbol `CURLDEBUG` is used to
 mark code that is _only_ used for memory tracking/debugging.

 Use `-DCURLDEBUG` when compiling to enable memory debugging; this is also
 switched on by running configure with `--enable-curldebug`. Use
 `-DDEBUGBUILD` when compiling to enable a debug build or run configure with
 `--enable-debug`.

 `curl --version` will list the 'Debug' feature for debug-enabled builds, and
 will list the 'TrackMemory' feature for curl builds capable of debug memory
 tracking. These features are independent and can be controlled when running
 the configure script. When `--enable-debug` is given, both features will be
 enabled, unless some restriction prevents memory tracking from being used.

<a name="test"></a>
Test Suite
==========

 The test suite is placed in its own subdirectory directly off the root in the
 curl archive tree, and it contains a bunch of scripts and a lot of test case
 data.

 The main test script is `runtests.pl` that will invoke test servers like
 `httpserver.pl` and `ftpserver.pl` before all the test cases are performed.
 The test suite currently only runs on Unix-like platforms.

 You'll find a description of the test suite in the `tests/README` file, and
 the test case data file format in the `tests/FILEFORMAT` file.

 The test suite automatically detects if curl was built with the memory
 debugging enabled, and if it was, it will detect memory leaks, too.

<a name="asyncdns"></a>
Asynchronous name resolves
==========================

 libcurl can be built to do name resolves asynchronously, using either the
 normal resolver in a threaded manner or by using c-ares.

<a name="cares"></a>
[c-ares][3]
-----------

### Build libcurl to use c-ares

 1. `./configure --enable-ares=/path/to/ares/install`
 2. `make`

### c-ares on win32

 First I compiled c-ares. I changed the default C runtime library to be the
 single-threaded rather than the multi-threaded (this seems to be required to
 prevent linking errors later on). Then I simply built the areslib project
 (the other projects adig/ahost seem to fail under MSVC).

 Next was libcurl. I opened `lib/config-win32.h` and added:
 `#define USE_ARES 1`

 Next thing I did was add the path for the ares includes to the include
 path, and the libares.lib to the libraries.

 Lastly, I also changed libcurl to be single-threaded rather than
 multi-threaded, again this was to prevent some duplicate symbol errors. I'm
 not sure why I needed to change everything to single-threaded, but when I
 didn't I got redefinition errors for several CRT functions (`malloc()`,
 `stricmp()`, etc.)

<a name="curl_off_t"></a>
`curl_off_t`
============

 `curl_off_t` is a data type provided by the external libcurl include
 headers. It is the type meant to be used for the [`curl_easy_setopt()`][1]
 options that end with LARGE. The type is 64-bit large on most modern
 platforms.

<a name="curlx"></a>
curlx
=====

 The libcurl source code offers a few functions by source only. They are not
 part of the official libcurl API, but the source files might be useful for
 others so apps can optionally compile/build with these sources to gain
 additional functions.

 We provide them through a single header file for easy access for apps:
 `curlx.h`

`curlx_strtoofft()`
-------------------

 A macro that converts a string containing a number to a `curl_off_t` number.
 This might use the `curlx_strtoll()` function which is provided as source
 code in strtoofft.c. Note that the function is only provided if no
 `strtoll()` (or equivalent) function exists on your platform. If `curl_off_t`
 is only a 32-bit number on your platform, this macro uses `strtol()`.

Future
------

 Several functions will be removed from the public `curl_` name space in a
 future libcurl release. They will then only become available as `curlx_`
 functions instead. To make the transition easier, we already today provide
 these functions with the `curlx_` prefix to allow sources to be built
 properly with the new function names. The concerned functions are:

 - `curlx_getenv`
 - `curlx_strequal`
 - `curlx_strnequal`
 - `curlx_mvsnprintf`
 - `curlx_msnprintf`
 - `curlx_maprintf`
 - `curlx_mvaprintf`
 - `curlx_msprintf`
 - `curlx_mprintf`
 - `curlx_mfprintf`
 - `curlx_mvsprintf`
 - `curlx_mvprintf`
 - `curlx_mvfprintf`

<a name="contentencoding"></a>
Content Encoding
================

## About content encodings

 [HTTP/1.1][4] specifies that a client may request that a server encode its
 response. This is usually used to compress a response using one (or more)
 encodings from a set of commonly available compression techniques. These
 schemes include `deflate` (the zlib algorithm), `gzip`, `br` (brotli) and
 `compress`. A client requests that the server perform an encoding by
 including an `Accept-Encoding` header in the request. The value of the header
 should be one of the recognized tokens `deflate`, ... (there's a way to
 register new schemes/tokens, see sec 3.5 of the spec). A server MAY honor
 the client's encoding request. When a response is encoded, the server
 includes a `Content-Encoding` header in the response. The value of the
 `Content-Encoding` header indicates which encodings were used to encode the
 data, in the order in which they were applied.

 It's also possible for a client to attach priorities to different schemes so
 that the server knows which it prefers. See sec 14.3 of RFC 2616 for more
 information on the `Accept-Encoding` header. See sec
 [3.1.2.2 of RFC 7231][15] for more information on the `Content-Encoding`
 header.

## Supported content encodings

 The `deflate`, `gzip` and `br` content encodings are supported by libcurl.
 Both regular and chunked transfers work fine. The zlib library is required
 for the `deflate` and `gzip` encodings, while the brotli decoding library is
 required for the `br` encoding.

## The libcurl interface

 To cause libcurl to request a content encoding use:

  [`curl_easy_setopt`][1](curl, [`CURLOPT_ACCEPT_ENCODING`][5], string)

 where string is the intended value of the `Accept-Encoding` header.

 Currently, libcurl does support multiple encodings but only understands how
 to process responses that use the `deflate`, `gzip` and/or `br` content
 encodings, so the only values for [`CURLOPT_ACCEPT_ENCODING`][5] that will
 work (besides `identity`, which does nothing) are `deflate`, `gzip` and
 `br`. If a response is encoded using `compress` or some other unsupported
 method, libcurl will return an error indicating that the response could not
 be decoded. If `string` is NULL no `Accept-Encoding` header is generated.
 If `string` is a zero-length string, then an `Accept-Encoding` header
 containing all supported encodings will be generated.
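
 For example (a sketch, assuming `curl` is an initialized easy handle):

```c
#include <curl/curl.h>

static void ask_for_compression(CURL *curl)
{
  /* "" asks for all encodings this libcurl build supports; a string like
     "gzip" limits the request, and NULL disables the Accept-Encoding header */
  curl_easy_setopt(curl, CURLOPT_ACCEPT_ENCODING, "");
}
```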

 The [`CURLOPT_ACCEPT_ENCODING`][5] option must be set to any non-NULL value
 for content to be automatically decoded. If it is not set and the server
 still sends encoded content (despite not having been asked), the data is
 returned in its raw form and the `Content-Encoding` type is not checked.

## The curl interface

 Use the [`--compressed`][6] option with curl to cause it to ask servers to
 compress responses using any format supported by curl.

<a name="hostip"></a>
`hostip.c` explained
====================

 The main compile-time defines to keep in mind when reading the `host*.c`
 source files are these:

## `CURLRES_IPV6`

 this host has `getaddrinfo()` and family, and thus we use that. The host may
 not be able to resolve IPv6, but we don't really have to take that into
 account. Hosts that aren't IPv6-enabled have `CURLRES_IPV4` defined.

## `CURLRES_ARES`

 is defined if libcurl is built to use c-ares for asynchronous name
 resolves. This can be Windows or \*nix.

## `CURLRES_THREADED`

 is defined if libcurl is built to use threading for asynchronous name
 resolves. The name resolve will be done in a new thread, and the supported
 asynch API will be the same as for ares-builds. This is the default under
 (native) Windows.

 If any of the two previous are defined, `CURLRES_ASYNCH` is defined too. If
 libcurl is not built to use an asynchronous resolver, `CURLRES_SYNCH` is
 defined.

## `host*.c` sources

 The `host*.c` source files are split up like this:

 - `hostip.c` - method-independent resolver functions and utility functions
 - `hostasyn.c` - functions for asynchronous name resolves
 - `hostsyn.c` - functions for synchronous name resolves
 - `asyn-ares.c` - functions for asynchronous name resolves using c-ares
 - `asyn-thread.c` - functions for asynchronous name resolves using threads
 - `hostip4.c` - IPv4 specific functions
 - `hostip6.c` - IPv6 specific functions

 The `hostip.h` is the single united header file for all this. It defines the
 `CURLRES_*` defines based on the `config*.h` and `curl_setup.h` defines.

<a name="memoryleak"></a>
Track Down Memory Leaks
=======================

## Single-threaded

 Please note that this memory leak system is not adjusted to work in more
 than one thread. If you want/need to use it in a multi-threaded app, please
 adjust accordingly.

## Build

 Rebuild libcurl with `-DCURLDEBUG` (usually, rerunning configure with
 `--enable-debug` fixes this). `make clean` first, then `make` so that all
 files are actually rebuilt properly. It will also make sense to build
 libcurl with the debug option (usually `-g` to the compiler) so that
 debugging it will be easier if you actually do find a leak in the library.

 This will create a library that has memory debugging enabled.

## Modify Your Application

 Add a line in your application code:

```c
 curl_dbg_memdebug("dump");
```

 This will make the malloc debug system output a full trace of all resource
 using functions to the given file name. Make sure you rebuild your program
 and that you link with the same libcurl you built for this purpose as
 described above.

## Run Your Application

 Run your program as usual. Watch the specified memory trace file grow.

 Make your program exit and use the proper libcurl cleanup functions etc, so
 that all non-leaks are returned/freed properly.

## Analyze the Flow

 Use the `tests/memanalyze.pl` perl script to analyze the dump file:

    tests/memanalyze.pl dump

 This now outputs a report on what resources were allocated but never freed
 etc. This report is very fine for posting to the list!

 If this doesn't produce any output, no leak was detected in libcurl. Then
 the leak is most likely to be in your code.

<a name="multi_socket"></a>
`multi_socket`
==============

 Implementation of the `curl_multi_socket` API.

 The main ideas of this API are simply:

 1. The application can use whatever event system it likes as it gets info
    from libcurl about what file descriptors libcurl waits for what action
    on. (The previous API returns `fd_sets` which is very
    `select()`-centric).

 2. When the application discovers action on a single socket, it calls
    libcurl and informs that there was action on this particular socket and
    libcurl can then act on that socket/transfer only and not care about
    any other transfers. (The previous API always had to scan through all
    the existing transfers.)

 The idea is that [`curl_multi_socket_action()`][7] calls a given callback
 with information about what socket to wait for what action on, and the
 callback only gets called if the status of that socket has changed.

 We also added a timer callback that makes libcurl call the application when
 the timeout value changes, and you set that with [`curl_multi_setopt()`][9]
 and the [`CURLMOPT_TIMERFUNCTION`][10] option. To get this to work, there is
 internally an added struct to each easy handle in which we store an "expire
 time" (if any). The structs are then "splay sorted" so that we can add and
 remove times from the linked list and yet somewhat swiftly figure out both
 how long there is until the next nearest timer expires and which timer
 (handle) we should take care of now. Of course, the upside of all this is
 that we get a [`curl_multi_timeout()`][8] that should also work with
 old-style applications that use [`curl_multi_perform()`][11].

 We created an internal "socket to easy handles" hash table that given
 a socket (file descriptor) returns the easy handle that waits for action on
 that socket. This hash is made using the already existing hash code
 (previously only used for the DNS cache).

 To make libcurl able to report plain sockets in the socket callback, we had
 to re-organize the internals of the [`curl_multi_fdset()`][12] etc so that
 the conversion from sockets to `fd_sets` for that function is only done in
 the last step before the data is returned. I also had to extend c-ares to
 get a function that can return plain sockets, as that library too returned
 only `fd_sets` and that is no longer good enough. The changes done to c-ares
 are available in c-ares 1.3.1 and later.
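
 The application side of this API can be sketched like this. The event-loop
 hooks `watch_socket()` and `arm_timer()` are hypothetical placeholders for
 whatever event system the application uses:

```c
#include <curl/curl.h>

static int sock_cb(CURL *easy, curl_socket_t s, int what, void *userp,
                   void *sockp)
{
  /* 'what' is CURL_POLL_IN/OUT/INOUT/REMOVE: tell the event loop which
     socket to watch for which action, or to stop watching it */
  /* watch_socket(s, what); */
  (void)easy; (void)s; (void)what; (void)userp; (void)sockp;
  return 0;
}

static int timer_cb(CURLM *multi, long timeout_ms, void *userp)
{
  /* when the timer fires, the application calls
     curl_multi_socket_action(multi, CURL_SOCKET_TIMEOUT, 0, &running) */
  /* arm_timer(timeout_ms); */
  (void)multi; (void)timeout_ms; (void)userp;
  return 0;
}

static void setup_callbacks(CURLM *multi)
{
  curl_multi_setopt(multi, CURLMOPT_SOCKETFUNCTION, sock_cb);
  curl_multi_setopt(multi, CURLMOPT_TIMERFUNCTION, timer_cb);
}
```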

<a name="structs"></a>
Structs in libcurl
==================

This section should cover 7.32.0 pretty accurately, but will make sense even
for older and later versions as things don't change drastically that often.

<a name="Curl_easy"></a>
## Curl_easy

 The `Curl_easy` struct is the one returned to the outside in the external API
 as a `CURL *`. This is usually known as an easy handle in API documentations
 and examples.

 Information and state that is related to the actual connection is in the
 `connectdata` struct. When a transfer is about to be made, libcurl will
 either create a new connection or re-use an existing one. The particular
 connectdata that is used by this handle is pointed out by
 `Curl_easy->easy_conn`.

 Data and information that regard this particular single transfer is put in
 the `SingleRequest` sub-struct.

 When the `Curl_easy` struct is added to a multi handle, as it must be in
 order to do any transfer, the `->multi` member will point to the `Curl_multi`
 struct it belongs to. The `->prev` and `->next` members will then be used by
 the multi code to keep a linked list of `Curl_easy` structs that are added to
 that same multi handle. libcurl always uses multi so `->multi` *will* point
 to a `Curl_multi` when a transfer is in progress.

 `->mstate` is the multi state of this particular `Curl_easy`. When
 `multi_runsingle()` is called, it will act on this handle according to which
 state it is in. The mstate is also what tells which sockets to return for a
 specific `Curl_easy` when [`curl_multi_fdset()`][12] is called etc.

 The libcurl source code generally uses the name `data` for the variable that
 points to the `Curl_easy`.

 When doing multiplexed HTTP/2 transfers, each `Curl_easy` is associated with
 an individual stream, sharing the same connectdata struct. Multiplexing
 makes it even more important to keep things associated with the right thing!

<a name="connectdata"></a>
## connectdata

 A general idea in libcurl is to keep connections around in a connection
 "cache" after they have been used in case they will be used again, and then
 re-use an existing one instead of creating a new one, as this gives a
 significant performance boost.

 Each `connectdata` identifies a single physical connection to a server. If
 the connection can't be kept alive, the connection will be closed after use
 and then this struct can be removed from the cache and freed.

 Thus, the same `Curl_easy` can be used multiple times and each time select
 another `connectdata` struct to use for the connection. Keep this in mind,
 as it is then important to consider if options or choices are based on the
 connection or the `Curl_easy`.

 Functions in libcurl will assume that `connectdata->data` points to the
 `Curl_easy` that uses this connection (for the moment).

 As a special complexity, some protocols supported by libcurl require a
 special disconnect procedure that is more than just shutting down the
 socket. It can involve sending one or more commands to the server before
 doing so. Since connections are kept in the connection cache after use, the
 original `Curl_easy` may no longer be around when the time comes to shut down
 a particular connection.

 For this purpose, libcurl holds a special dummy `closure_handle` `Curl_easy`
 in the `Curl_multi` struct to use when needed.

 FTP uses two TCP connections for a typical transfer but it keeps both in
 this single struct and thus can be considered a single connection for most
 internal concerns.

 The libcurl source code generally uses the name `conn` for the variable that
 points to the connectdata.

<a name="Curl_multi"></a>
## Curl_multi

 Internally, the easy interface is implemented as a wrapper around multi
 interface functions. This makes everything use the multi interface.

 `Curl_multi` is the multi handle struct exposed as `CURLM *` in external
 APIs.

 This struct holds a list of `Curl_easy` structs that have been added to this
 handle with [`curl_multi_add_handle()`][13]. The start of the list is
 `->easyp` and `->num_easy` is a counter of added `Curl_easy`s.

 `->msglist` is a linked list of messages to send back when
 [`curl_multi_info_read()`][14] is called. Basically a node is added to that
 list when an individual `Curl_easy`'s transfer has completed.

 `->hostcache` points to the name cache. It is a hash table for looking up
 name to IP. The nodes have a limited life time in there and this cache is
 meant to reduce the time for when the same name is wanted within a short
 period of time.

 `->timetree` points to a tree of `Curl_easy`s, sorted by the remaining time
 until it should be checked - normally some sort of timeout. Each `Curl_easy`
 has one node in the tree.

 `->sockhash` is a hash table to allow fast lookups of socket descriptor for
 which `Curl_easy` uses that descriptor. This is necessary for the
 `multi_socket` API.

 `->conn_cache` points to the connection cache. It keeps track of all
 connections that are kept after use. The cache has a maximum size.

 `->closure_handle` is described in the `connectdata` section.

 The libcurl source code generally uses the name `multi` for the variable that
 points to the `Curl_multi` struct.

<a name="Curl_handler"></a>
## Curl_handler

 Each unique protocol that is supported by libcurl needs to provide at least
 one `Curl_handler` struct. It defines what the protocol is called and what
 functions the main code should call to deal with protocol specific issues.
 In general, there's a source file named `[protocol].c` in which there's a
 `struct Curl_handler Curl_handler_[protocol]` declared. In `url.c` there's
 then the main array with pointers to all individual `Curl_handler` structs,
 which is scanned through when a URL is given to libcurl to work with.

 The concrete function pointer prototypes can be found in `lib/urldata.h`.

 `->scheme` is the URL scheme name, usually spelled out in uppercase. That's
 "HTTP" or "FTP" etc. SSL versions of the protocol need their own
 `Curl_handler` setup so HTTPS is separate from HTTP.

 `->setup_connection` is called to allow the protocol code to allocate
 protocol specific data that then gets associated with that `Curl_easy` for
 the rest of this transfer. It gets freed again at the end of the transfer.
 It will be called before the `connectdata` for the transfer has been
 selected/created. Most protocols will allocate their private `struct
 [PROTOCOL]` here and assign `Curl_easy->req.p.[protocol]` to it.
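
 To make the shape of this concrete before walking through the remaining
 callbacks, here is a toy illustration of the pattern only; the real `struct
 Curl_handler` has more members and lives in `lib/urldata.h`:

```c
/* illustrative stand-in for the real struct Curl_handler */
#define TOY_PROTOPT_NONETWORK (1<<0)   /* toy flag mirroring PROTOPT_NONETWORK */

struct toy_handler {
  const char *scheme;                  /* "HTTP", "FTP", ... */
  int (*setup_connection)(void *data); /* allocate per-transfer protocol data */
  int (*do_it)(void *conn, int *done); /* issue the request - the DO action */
  int (*done)(void *conn, int status); /* called when the transfer is DONE */
  long defport;                        /* default port */
  unsigned int flags;                  /* PROTOPT_* style bits */
};

static int toy_file_do(void *conn, int *done)
{
  (void)conn;
  *done = 1;
  return 0;
}

/* each protocol source file declares one handler; url.c keeps an array of
   pointers to all of them and scans it when a URL is handed to libcurl */
static const struct toy_handler toy_handler_file = {
  "FILE", NULL, toy_file_do, NULL, 0, TOY_PROTOPT_NONETWORK
};
```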

 `->connect_it` allows a protocol to do some specific actions after the TCP
 connect is done, that can still be considered part of the connection phase.

 Some protocols will alter the `connectdata->recv[]` and
 `connectdata->send[]` function pointers in this function.

 `->connecting` is similarly a function that keeps getting called as long as
 the protocol considers itself still in the connecting phase.

 `->do_it` is the function called to issue the transfer request. What we call
 the DO action internally. If the DO is not enough and things need to be kept
 getting done for the entire DO sequence to complete, `->doing` is then
 usually also provided. Each protocol that needs to do multiple commands or
 similar for do/doing needs to implement its own state machine (see SCP,
 SFTP, FTP). Some protocols (only FTP, and only due to historical reasons)
 have a separate piece of the DO state called `DO_MORE`.

 `->doing` keeps getting called while issuing the transfer request
 command(s).

 `->done` gets called when the transfer is complete and DONE. That's after the
 main data has been transferred.

 `->do_more` gets called during the `DO_MORE` state. The FTP protocol uses
 this state when setting up the second connection.

 `->proto_getsock`, `->doing_getsock`, `->domore_getsock` and
 `->perform_getsock` are functions that return socket information: which
 socket(s) to wait for which I/O action(s) during the particular multi state.

 `->disconnect` is called immediately before the TCP connection is shut down.

 `->readwrite` gets called during transfer to allow the protocol to do extra
 reads/writes.

 `->attach` attaches a transfer to the connection.

 `->defport` is the default TCP or UDP port this protocol uses.

 `->protocol` is one or more bits in the `CURLPROTO_*` set. The SSL versions
 have their "base" protocol set and then the SSL variation. Like
 "HTTP|HTTPS".

 `->flags` is a bitmask with additional information about the protocol that
 will make it get treated differently by the generic engine:

 - `PROTOPT_SSL` - will make it connect and negotiate SSL

 - `PROTOPT_DUAL` - this protocol uses two connections

 - `PROTOPT_CLOSEACTION` - this protocol has actions to do before closing the
   connection. This flag is no longer used by code, yet still set for a bunch
   of protocol handlers.

 - `PROTOPT_DIRLOCK` - "direction lock". The SSH protocols set this bit to
   limit which "direction" of socket actions that the main engine will
   concern itself with.

 - `PROTOPT_NONETWORK` - a protocol that doesn't use the network (read
   `file:`)

 - `PROTOPT_NEEDSPWD` - this protocol needs a password and will use a default
   one unless one is provided

 - `PROTOPT_NOURLQUERY` - this protocol can't handle a query part on the URL
   (?foo=bar)

<a name="conncache"></a>
## conncache

 A hash table with connections for later re-use. Each `Curl_easy` has a
 pointer to its connection cache. Each multi handle sets up a connection
 cache that all added `Curl_easy`s share by default.

<a name="Curl_share"></a>
## Curl_share

 The libcurl share API allocates a `Curl_share` struct, exposed to the
 external API as `CURLSH *`.

 The idea is that the struct can have a set of its own versions of caches and
 pools and then by providing this struct in the `CURLOPT_SHARE` option, those
 specific `Curl_easy`s will use the caches/pools that this share handle
 holds.

 Then individual `Curl_easy` structs can be made to share specific things
 that they otherwise wouldn't, such as cookies.

 The `Curl_share` struct can currently hold cookies, DNS cache and the SSL
 session cache.
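
 For example, an application can let two easy handles share one cookie store
 (a sketch; `e1` and `e2` are assumed to be already created easy handles):

```c
#include <curl/curl.h>

static CURLSH *share_cookies(CURL *e1, CURL *e2)
{
  CURLSH *share = curl_share_init();
  curl_share_setopt(share, CURLSHOPT_SHARE, CURL_LOCK_DATA_COOKIE);
  curl_easy_setopt(e1, CURLOPT_SHARE, share);
  curl_easy_setopt(e2, CURLOPT_SHARE, share);
  return share; /* call curl_share_cleanup() only when no handle uses it */
}
```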

<a name="CookieInfo"></a>
## CookieInfo

 This is the main cookie struct. It holds all known cookies and related
 information. Each `Curl_easy` has its own private `CookieInfo` even when
 they are added to a multi handle. They can be made to share cookies by using
 the share API.


[1]: https://curl.se/libcurl/c/curl_easy_setopt.html
[2]: https://curl.se/libcurl/c/curl_easy_init.html
[3]: https://c-ares.haxx.se/
[4]: https://tools.ietf.org/html/rfc7230 "RFC 7230"
[5]: https://curl.se/libcurl/c/CURLOPT_ACCEPT_ENCODING.html
[6]: https://curl.se/docs/manpage.html#--compressed
[7]: https://curl.se/libcurl/c/curl_multi_socket_action.html
[8]: https://curl.se/libcurl/c/curl_multi_timeout.html
[9]: https://curl.se/libcurl/c/curl_multi_setopt.html
[10]: https://curl.se/libcurl/c/CURLMOPT_TIMERFUNCTION.html
[11]: https://curl.se/libcurl/c/curl_multi_perform.html
[12]: https://curl.se/libcurl/c/curl_multi_fdset.html
[13]: https://curl.se/libcurl/c/curl_multi_add_handle.html
[14]: https://curl.se/libcurl/c/curl_multi_info_read.html
[15]: https://tools.ietf.org/html/rfc7231#section-3.1.2.2