//! Native threads.
//!
//! ## The threading model
//!
//! An executing Rust program consists of a collection of native OS threads,
//! each with their own stack and local state. Threads can be named, and
//! provide some built-in support for low-level synchronization.
//!
//! Communication between threads can be done through
//! [channels], Rust's message-passing types, along with [other forms of thread
//! synchronization](../../std/sync/index.html) and shared-memory data
//! structures. In particular, types that are guaranteed to be
//! thread-safe are easily shared between threads using the
//! atomically-reference-counted container, [`Arc`].
//!
//! Fatal logic errors in Rust cause *thread panic*, during which
//! a thread will unwind the stack, running destructors and freeing
//! owned resources. While not meant as a 'try/catch' mechanism, panics
//! in Rust can nonetheless be caught (unless compiling with `panic=abort`) with
//! [`catch_unwind`](../../std/panic/fn.catch_unwind.html) and recovered
//! from, or alternatively be resumed with
//! [`resume_unwind`](../../std/panic/fn.resume_unwind.html). If the panic
//! is not caught the thread will exit, but the panic may optionally be
//! detected from a different thread with [`join`]. If the main thread panics
//! without the panic being caught, the application will exit with a
//! non-zero exit code.
//!
//! When the main thread of a Rust program terminates, the entire program shuts
//! down, even if other threads are still running. However, this module provides
//! convenient facilities for automatically waiting for the termination of a
//! thread (i.e., join).
//!
//! ## Spawning a thread
//!
//! A new thread can be spawned using the [`thread::spawn`][`spawn`] function:
//!
//! ```rust
//! use std::thread;
//!
//! thread::spawn(move || {
//!     // some work here
//! });
//! ```
//!
//! In this example, the spawned thread is "detached," which means that there is
//! no way for the program to learn when the spawned thread completes or otherwise
//! terminates.
//!
//! To learn when a thread completes, it is necessary to capture the [`JoinHandle`]
//! object that is returned by the call to [`spawn`], which provides
//! a `join` method that allows the caller to wait for the completion of the
//! spawned thread:
//!
//! ```rust
//! use std::thread;
//!
//! let thread_join_handle = thread::spawn(move || {
//!     // some work here
//! });
//! // some work here
//! let res = thread_join_handle.join();
//! ```
//!
//! The [`join`] method returns a [`thread::Result`] containing [`Ok`] of the final
//! value produced by the spawned thread, or [`Err`] of the value given to
//! a call to [`panic!`] if the thread panicked.
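//!
//! For example, both outcomes can be observed like this (the closures here are
//! only illustrative):
//!
//! ```rust
//! use std::thread;
//!
//! // A thread that finishes normally yields `Ok` with its return value.
//! let ok = thread::spawn(|| 3 + 4).join();
//! assert_eq!(ok.unwrap(), 7);
//!
//! // A thread that panics yields `Err` containing the panic payload.
//! let err = thread::spawn(|| panic!("boom")).join();
//! assert!(err.is_err());
//! ```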
//!
//! Note that there is no parent/child relationship between a thread that spawns a
//! new thread and the thread being spawned. In particular, the spawned thread may or
//! may not outlive the spawning thread, unless the spawning thread is the main thread.
//!
//! ## Configuring threads
//!
//! A new thread can be configured before it is spawned via the [`Builder`] type,
//! which currently allows you to set the name and stack size for the thread:
//!
//! ```rust
//! # #![allow(unused_must_use)]
//! use std::thread;
//!
//! thread::Builder::new().name("thread1".to_string()).spawn(move || {
//!     println!("Hello, world!");
//! });
//! ```
//!
//! ## The `Thread` type
//!
//! Threads are represented via the [`Thread`] type, which you can get in one of
//! two ways:
//!
//! * By spawning a new thread, e.g., using the [`thread::spawn`][`spawn`]
//!   function, and calling [`thread`][`JoinHandle::thread`] on the [`JoinHandle`].
//! * By requesting the current thread, using the [`thread::current`] function.
//!
//! The [`thread::current`] function is available even for threads not spawned
//! by the APIs of this module.
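//!
//! A short sketch of requesting the current thread's handle (the printed id and
//! name depend on how the thread was created, so no particular output is
//! guaranteed):
//!
//! ```rust
//! use std::thread;
//!
//! let this = thread::current();
//! // A `Thread` handle exposes the thread's id and optional name.
//! println!("id: {:?}, name: {:?}", this.id(), this.name());
//! ```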
//!
//! ## Thread-local storage
//!
//! This module also provides an implementation of thread-local storage for Rust
//! programs. Thread-local storage is a method of storing data into a global
//! variable that each thread in the program will have its own copy of.
//! Threads do not share this data, so accesses do not need to be synchronized.
//!
//! A thread-local key owns the value it contains and will destroy the value when the
//! thread exits. It is created with the [`thread_local!`] macro and can contain any
//! value that is `'static` (no borrowed pointers). It provides an accessor function,
//! [`with`], that yields a shared reference to the value to the specified
//! closure. Thread-local keys allow only shared access to values, as there would be no
//! way to guarantee uniqueness if mutable borrows were allowed. Most values
//! will want to make use of some form of **interior mutability** through the
//! [`Cell`] or [`RefCell`] types.
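//!
//! A minimal sketch of a thread-local counter (the key name `COUNTER` is
//! illustrative):
//!
//! ```rust
//! use std::cell::Cell;
//! use std::thread;
//!
//! thread_local!(static COUNTER: Cell<u32> = Cell::new(0));
//!
//! // Mutate this thread's copy through interior mutability.
//! COUNTER.with(|c| c.set(c.get() + 1));
//!
//! thread::spawn(|| {
//!     // The spawned thread starts with its own fresh copy.
//!     COUNTER.with(|c| assert_eq!(c.get(), 0));
//! }).join().unwrap();
//!
//! // The main thread's copy is unaffected by the spawned thread.
//! COUNTER.with(|c| assert_eq!(c.get(), 1));
//! ```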
//!
//! ## Naming threads
//!
//! Threads are able to have associated names for identification purposes. By default, spawned
//! threads are unnamed. To specify a name for a thread, build the thread with [`Builder`] and pass
//! the desired thread name to [`Builder::name`]. To retrieve the thread name from within the
//! thread, use [`Thread::name`]. A couple of examples where the name of a thread gets used:
//!
//! * If a panic occurs in a named thread, the thread name will be printed in the panic message.
//! * The thread name is provided to the OS where applicable (e.g., `pthread_setname_np` on
//!   Unix-like platforms).
//!
//! ## Stack size
//!
//! The default stack size is platform-dependent and subject to change.
//! Currently, it is 2 MiB on all Tier-1 platforms.
//!
//! There are two ways to manually specify the stack size for spawned threads:
//!
//! * Build the thread with [`Builder`] and pass the desired stack size to [`Builder::stack_size`].
//! * Set the `RUST_MIN_STACK` environment variable to an integer representing the desired stack
//!   size (in bytes). Note that setting [`Builder::stack_size`] will override this. Be aware that
//!   changes to `RUST_MIN_STACK` may be ignored after program start.
//!
//! Note that the stack size of the main thread is *not* determined by Rust.
//!
//! [channels]: crate::sync::mpsc
//! [`join`]: JoinHandle::join
//! [`Result`]: crate::result::Result
//! [`Ok`]: crate::result::Result::Ok
//! [`Err`]: crate::result::Result::Err
//! [`thread::current`]: current
//! [`thread::Result`]: Result
//! [`unpark`]: Thread::unpark
//! [`thread::park_timeout`]: park_timeout
//! [`Cell`]: crate::cell::Cell
//! [`RefCell`]: crate::cell::RefCell
//! [`with`]: LocalKey::with
//! [`thread_local!`]: crate::thread_local

#![stable(feature = "rust1", since = "1.0.0")]
#![deny(unsafe_op_in_unsafe_fn)]
// Under `test`, `__FastLocalKeyInner` seems unused.
#![cfg_attr(test, allow(dead_code))]

#[cfg(all(test, not(target_os = "emscripten")))]
mod tests;

use crate::any::Any;
use crate::cell::UnsafeCell;
use crate::ffi::{CStr, CString};
use crate::fmt;
use crate::io;
use crate::marker::PhantomData;
use crate::mem::{self, forget};
use crate::num::NonZeroU64;
use crate::num::NonZeroUsize;
use crate::panic;
use crate::panicking;
use crate::pin::Pin;
use crate::ptr::addr_of_mut;
use crate::str;
use crate::sync::Arc;
use crate::sys::thread as imp;
use crate::sys_common::thread;
use crate::sys_common::thread_info;
use crate::sys_common::thread_parking::Parker;
use crate::sys_common::{AsInner, IntoInner};
use crate::time::Duration;

#[stable(feature = "scoped_threads", since = "1.63.0")]
mod scoped;

#[stable(feature = "scoped_threads", since = "1.63.0")]
pub use scoped::{scope, Scope, ScopedJoinHandle};

////////////////////////////////////////////////////////////////////////////////
// Thread-local storage
////////////////////////////////////////////////////////////////////////////////

#[macro_use]
mod local;

cfg_if::cfg_if! {
    if #[cfg(test)] {
        // Avoid duplicating the global state associated with thread-locals between this crate and
        // realstd. Miri relies on this.
        pub use realstd::thread::{local_impl, AccessError, LocalKey};
    } else {
        #[stable(feature = "rust1", since = "1.0.0")]
        pub use self::local::{AccessError, LocalKey};

        // Implementation details used by the thread_local!{} macro.
        #[doc(hidden)]
        #[unstable(feature = "thread_local_internals", issue = "none")]
        pub mod local_impl {
            pub use crate::sys::common::thread_local::{thread_local_inner, Key, abort_on_dtor_unwind};
        }
    }
}

////////////////////////////////////////////////////////////////////////////////
// Builder
////////////////////////////////////////////////////////////////////////////////

/// Thread factory, which can be used in order to configure the properties of
/// a new thread.
///
/// Methods can be chained on it in order to configure it.
///
/// The two configurations available are:
///
/// - [`name`]: specifies an [associated name for the thread][naming-threads]
/// - [`stack_size`]: specifies the [desired stack size for the thread][stack-size]
///
/// The [`spawn`] method will take ownership of the builder and return an
/// [`io::Result`] containing the thread handle with the given configuration.
///
/// The [`thread::spawn`] free function uses a `Builder` with default
/// configuration and [`unwrap`]s its return value.
///
/// You may want to use [`spawn`] instead of [`thread::spawn`] when you want
/// to recover from a failure to launch a thread: the free function will
/// panic where the `Builder` method will return an [`io::Result`].
///
/// # Examples
///
/// ```
/// use std::thread;
///
/// let builder = thread::Builder::new();
///
/// let handler = builder.spawn(|| {
///     // thread code
/// }).unwrap();
///
/// handler.join().unwrap();
/// ```
///
/// [`stack_size`]: Builder::stack_size
/// [`name`]: Builder::name
/// [`spawn`]: Builder::spawn
/// [`thread::spawn`]: spawn
/// [`io::Result`]: crate::io::Result
/// [`unwrap`]: crate::result::Result::unwrap
/// [naming-threads]: ./index.html#naming-threads
/// [stack-size]: ./index.html#stack-size
#[must_use = "must eventually spawn the thread"]
#[stable(feature = "rust1", since = "1.0.0")]
#[derive(Debug)]
pub struct Builder {
    // A name for the thread-to-be, for identification in panic messages
    name: Option<String>,
    // The size of the stack for the spawned thread in bytes
    stack_size: Option<usize>,
}

impl Builder {
    /// Generates the base configuration for spawning a thread, from which
    /// configuration methods can be chained.
    ///
    /// # Examples
    ///
    /// ```
    /// use std::thread;
    ///
    /// let builder = thread::Builder::new()
    ///                               .name("foo".into())
    ///                               .stack_size(32 * 1024);
    ///
    /// let handler = builder.spawn(|| {
    ///     // thread code
    /// }).unwrap();
    ///
    /// handler.join().unwrap();
    /// ```
    #[stable(feature = "rust1", since = "1.0.0")]
    pub fn new() -> Builder {
        Builder { name: None, stack_size: None }
    }

    /// Names the thread-to-be. Currently the name is used for identification
    /// only in panic messages.
    ///
    /// The name must not contain null bytes (`\0`).
    ///
    /// For more information about named threads, see
    /// [this module-level documentation][naming-threads].
    ///
    /// # Examples
    ///
    /// ```
    /// use std::thread;
    ///
    /// let builder = thread::Builder::new()
    ///     .name("foo".into());
    ///
    /// let handler = builder.spawn(|| {
    ///     assert_eq!(thread::current().name(), Some("foo"))
    /// }).unwrap();
    ///
    /// handler.join().unwrap();
    /// ```
    ///
    /// [naming-threads]: ./index.html#naming-threads
    #[stable(feature = "rust1", since = "1.0.0")]
    pub fn name(mut self, name: String) -> Builder {
        self.name = Some(name);
        self
    }

    /// Sets the size of the stack (in bytes) for the new thread.
    ///
    /// The actual stack size may be greater than this value if
    /// the platform specifies a minimal stack size.
    ///
    /// For more information about the stack size for threads, see
    /// [this module-level documentation][stack-size].
    ///
    /// # Examples
    ///
    /// ```
    /// use std::thread;
    ///
    /// let builder = thread::Builder::new().stack_size(32 * 1024);
    /// ```
    ///
    /// [stack-size]: ./index.html#stack-size
    #[stable(feature = "rust1", since = "1.0.0")]
    pub fn stack_size(mut self, size: usize) -> Builder {
        self.stack_size = Some(size);
        self
    }

    /// Spawns a new thread by taking ownership of the `Builder`, and returns an
    /// [`io::Result`] to its [`JoinHandle`].
    ///
    /// The spawned thread may outlive the caller (unless the caller thread
    /// is the main thread; the whole process is terminated when the main
    /// thread finishes). The join handle can be used to block on
    /// termination of the spawned thread, including recovering its panics.
    ///
    /// For a more complete documentation see [`thread::spawn`][`spawn`].
    ///
    /// # Errors
    ///
    /// Unlike the [`spawn`] free function, this method yields an
    /// [`io::Result`] to capture any failure to create the thread at
    /// the OS level.
    ///
    /// [`io::Result`]: crate::io::Result
    ///
    /// # Panics
    ///
    /// Panics if a thread name was set and it contained null bytes.
    ///
    /// # Examples
    ///
    /// ```
    /// use std::thread;
    ///
    /// let builder = thread::Builder::new();
    ///
    /// let handler = builder.spawn(|| {
    ///     // thread code
    /// }).unwrap();
    ///
    /// handler.join().unwrap();
    /// ```
    #[stable(feature = "rust1", since = "1.0.0")]
    pub fn spawn<F, T>(self, f: F) -> io::Result<JoinHandle<T>>
    where
        F: FnOnce() -> T,
        F: Send + 'static,
        T: Send + 'static,
    {
        unsafe { self.spawn_unchecked(f) }
    }

    /// Spawns a new thread without any lifetime restrictions by taking ownership
    /// of the `Builder`, and returns an [`io::Result`] to its [`JoinHandle`].
    ///
    /// The spawned thread may outlive the caller (unless the caller thread
    /// is the main thread; the whole process is terminated when the main
    /// thread finishes). The join handle can be used to block on
    /// termination of the spawned thread, including recovering its panics.
    ///
    /// This method is identical to [`thread::Builder::spawn`][`Builder::spawn`],
    /// except for the relaxed lifetime bounds, which render it unsafe.
    /// For a more complete documentation see [`thread::spawn`][`spawn`].
    ///
    /// # Errors
    ///
    /// Unlike the [`spawn`] free function, this method yields an
    /// [`io::Result`] to capture any failure to create the thread at
    /// the OS level.
    ///
    /// # Panics
    ///
    /// Panics if a thread name was set and it contained null bytes.
    ///
    /// # Safety
    ///
    /// The caller has to ensure that the spawned thread does not outlive any
    /// references in the supplied thread closure and its return type.
    /// This can be guaranteed in two ways:
    ///
    /// - ensure that [`join`][`JoinHandle::join`] is called before any referenced
    ///   data is dropped
    /// - use only types with `'static` lifetime bounds, i.e., those with no or only
    ///   `'static` references (both [`thread::Builder::spawn`][`Builder::spawn`]
    ///   and [`thread::spawn`][`spawn`] enforce this property statically)
    ///
    /// # Examples
    ///
    /// ```
    /// #![feature(thread_spawn_unchecked)]
    /// use std::thread;
    ///
    /// let builder = thread::Builder::new();
    ///
    /// let x = 1;
    /// let thread_x = &x;
    ///
    /// let handler = unsafe {
    ///     builder.spawn_unchecked(move || {
    ///         println!("x = {}", *thread_x);
    ///     }).unwrap()
    /// };
    ///
    /// // caller has to ensure `join()` is called, otherwise
    /// // it is possible to access freed memory if `x` gets
    /// // dropped before the thread closure is executed!
    /// handler.join().unwrap();
    /// ```
    ///
    /// [`io::Result`]: crate::io::Result
    #[unstable(feature = "thread_spawn_unchecked", issue = "55132")]
    pub unsafe fn spawn_unchecked<'a, F, T>(self, f: F) -> io::Result<JoinHandle<T>>
    where
        F: FnOnce() -> T,
        F: Send + 'a,
        T: Send + 'a,
    {
        Ok(JoinHandle(unsafe { self.spawn_unchecked_(f, None) }?))
    }

    unsafe fn spawn_unchecked_<'a, 'scope, F, T>(
        self,
        f: F,
        scope_data: Option<Arc<scoped::ScopeData>>,
    ) -> io::Result<JoinInner<'scope, T>>
    where
        F: FnOnce() -> T,
        F: Send + 'a,
        T: Send + 'a,
        'scope: 'a,
    {
        let Builder { name, stack_size } = self;

        let stack_size = stack_size.unwrap_or_else(thread::min_stack);

        let my_thread = Thread::new(name.map(|name| {
            CString::new(name).expect("thread name may not contain interior null bytes")
        }));
        let their_thread = my_thread.clone();

        let my_packet: Arc<Packet<'scope, T>> = Arc::new(Packet {
            scope: scope_data,
            result: UnsafeCell::new(None),
            _marker: PhantomData,
        });
        let their_packet = my_packet.clone();

        let output_capture = crate::io::set_output_capture(None);
        crate::io::set_output_capture(output_capture.clone());

        // Pass `f` in `MaybeUninit` because actually that closure might *run longer than the lifetime of `F`*.
        // See <https://github.com/rust-lang/rust/issues/101983> for more details.
        // To prevent leaks we use a wrapper that drops its contents.
        #[repr(transparent)]
        struct MaybeDangling<T>(mem::MaybeUninit<T>);
        impl<T> MaybeDangling<T> {
            fn new(x: T) -> Self {
                MaybeDangling(mem::MaybeUninit::new(x))
            }
            fn into_inner(self) -> T {
                // SAFETY: we are always initialized.
                let ret = unsafe { self.0.assume_init_read() };
                // Make sure we don't drop.
                mem::forget(self);
                ret
            }
        }
        impl<T> Drop for MaybeDangling<T> {
            fn drop(&mut self) {
                // SAFETY: we are always initialized.
                unsafe { self.0.assume_init_drop() };
            }
        }

        let f = MaybeDangling::new(f);
        let main = move || {
            if let Some(name) = their_thread.cname() {
                imp::Thread::set_name(name);
            }

            crate::io::set_output_capture(output_capture);

            // SAFETY: we constructed `f` initialized.
            let f = f.into_inner();
            // SAFETY: the stack guard passed is the one for the current thread.
            // This means the current thread's stack and the new thread's stack
            // are properly set and protected from each other.
            thread_info::set(unsafe { imp::guard::current() }, their_thread);
            let try_result = panic::catch_unwind(panic::AssertUnwindSafe(|| {
                crate::sys_common::backtrace::__rust_begin_short_backtrace(f)
            }));
            // SAFETY: `their_packet` has been built just above and moved by the
            // closure (it is an Arc<...>) and `my_packet` will be stored in the
            // same `JoinInner` as this closure, meaning the mutation will be
            // safe (not modify it and affect a value far away).
            unsafe { *their_packet.result.get() = Some(try_result) };
            // Here `their_packet` gets dropped, and if this is the last `Arc` for that packet that
            // will call `decrement_num_running_threads` and therefore signal that this thread is
            // done.
            drop(their_packet);
            // Here, the lifetime `'a` and even `'scope` can end. `main` keeps running for a bit
            // after that before returning itself.
        };

        if let Some(scope_data) = &my_packet.scope {
            scope_data.increment_num_running_threads();
        }

        Ok(JoinInner {
            // SAFETY:
            //
            // `imp::Thread::new` takes a closure with a `'static` lifetime, since it's passed
            // through FFI or otherwise used with low-level threading primitives that have no
            // notion of or way to enforce lifetimes.
            //
            // As mentioned in the `Safety` section of this function's documentation, the caller of
            // this function needs to guarantee that the passed-in lifetime is sufficiently long
            // for the lifetime of the thread.
            //
            // Similarly, the `sys` implementation must guarantee that no references to the closure
            // exist after the thread has terminated, which is signaled by `Thread::join`
            // returning.
            native: unsafe {
                imp::Thread::new(
                    stack_size,
                    mem::transmute::<Box<dyn FnOnce() + 'a>, Box<dyn FnOnce() + 'static>>(
                        Box::new(main),
                    ),
                )?
            },
            thread: my_thread,
            packet: my_packet,
        })
    }
}

////////////////////////////////////////////////////////////////////////////////
// Free functions
////////////////////////////////////////////////////////////////////////////////

/// Spawns a new thread, returning a [`JoinHandle`] for it.
///
/// The join handle provides a [`join`] method that can be used to join the spawned
/// thread. If the spawned thread panics, [`join`] will return an [`Err`] containing
/// the argument given to [`panic!`].
///
/// If the join handle is dropped, the spawned thread will implicitly be *detached*.
/// In this case, the spawned thread may no longer be joined.
/// (It is the responsibility of the program to either eventually join threads it
/// creates or detach them; otherwise, a resource leak will result.)
///
/// This call will create a thread using default parameters of [`Builder`]; if you
/// want to specify the stack size or the name of the thread, use [`Builder`]
/// instead.
///
/// As you can see in the signature of `spawn` there are two constraints on
/// both the closure given to `spawn` and its return value; let's explain them:
///
/// - The `'static` constraint means that the closure and its return value
///   must have a lifetime of the whole program execution. The reason for this
///   is that threads can outlive the lifetime they have been created in.
///
///   Indeed if the thread, and by extension its return value, can outlive their
///   caller, we need to make sure that they will be valid afterwards, and since
///   we *can't* know when it will return we need to have them valid as long as
///   possible, that is until the end of the program, hence the `'static`
///   lifetime.
/// - The [`Send`] constraint is because the closure will need to be passed
///   *by value* from the thread where it is spawned to the new thread. Its
///   return value will need to be passed from the new thread to the thread
///   where it is `join`ed.
///   As a reminder, the [`Send`] marker trait expresses that it is safe to be
///   passed from thread to thread. [`Sync`] expresses that it is safe to have a
///   reference be passed from thread to thread.
///
/// # Panics
///
/// Panics if the OS fails to create a thread; use [`Builder::spawn`]
/// to recover from such errors.
///
/// # Examples
///
/// Creating a thread.
///
/// ```
/// use std::thread;
///
/// let handler = thread::spawn(|| {
///     // thread code
/// });
///
/// handler.join().unwrap();
/// ```
///
/// As mentioned in the module documentation, threads are usually made to
/// communicate using [`channels`], here is how it usually looks.
///
/// This example also shows how to use `move`, in order to give ownership
/// of values to a thread.
///
/// ```
/// use std::thread;
/// use std::sync::mpsc::channel;
///
/// let (tx, rx) = channel();
///
/// let sender = thread::spawn(move || {
///     tx.send("Hello, thread".to_owned())
///         .expect("Unable to send on channel");
/// });
///
/// let receiver = thread::spawn(move || {
///     let value = rx.recv().expect("Unable to receive from channel");
///     println!("{value}");
/// });
///
/// sender.join().expect("The sender thread has panicked");
/// receiver.join().expect("The receiver thread has panicked");
/// ```
///
/// A thread can also return a value through its [`JoinHandle`], you can use
/// this to make asynchronous computations (futures might be more appropriate
/// though).
///
/// ```
/// use std::thread;
///
/// let computation = thread::spawn(|| {
///     // Some expensive computation.
///     42
/// });
///
/// let result = computation.join().unwrap();
/// println!("{result}");
/// ```
///
/// [`channels`]: crate::sync::mpsc
/// [`join`]: JoinHandle::join
/// [`Err`]: crate::result::Result::Err
#[stable(feature = "rust1", since = "1.0.0")]
pub fn spawn<F, T>(f: F) -> JoinHandle<T>
where
    F: FnOnce() -> T,
    F: Send + 'static,
    T: Send + 'static,
{
    Builder::new().spawn(f).expect("failed to spawn thread")
}

/// Gets a handle to the thread that invokes it.
///
/// # Examples
///
/// Getting a handle to the current thread with `thread::current()`:
///
/// ```
/// use std::thread;
///
/// let handler = thread::Builder::new()
///     .name("named thread".into())
///     .spawn(|| {
///         let handle = thread::current();
///         assert_eq!(handle.name(), Some("named thread"));
///     })
///     .unwrap();
///
/// handler.join().unwrap();
/// ```
#[must_use]
#[stable(feature = "rust1", since = "1.0.0")]
pub fn current() -> Thread {
    thread_info::current_thread().expect(
        "use of std::thread::current() is not possible \
         after the thread's local data has been destroyed",
    )
}

/// Cooperatively gives up a timeslice to the OS scheduler.
///
/// This calls the underlying OS scheduler's yield primitive, signaling
/// that the calling thread is willing to give up its remaining timeslice
/// so that the OS may schedule other threads on the CPU.
///
/// A drawback of yielding in a loop is that if the OS does not have any
/// other ready threads to run on the current CPU, the thread will effectively
/// busy-wait, which wastes CPU time and energy.
///
/// Therefore, when waiting for events of interest, a programmer's first
/// choice should be to use synchronization devices such as [`channel`]s,
/// [`Condvar`]s, [`Mutex`]es or [`join`] since these primitives are
/// implemented in a blocking manner, giving up the CPU until the event
/// of interest has occurred, which avoids repeated yielding.
///
/// `yield_now` should thus be used only rarely, mostly in situations where
/// repeated polling is required because there is no other suitable way to
/// learn when an event of interest has occurred.
///
/// # Examples
///
/// ```
/// use std::thread;
///
/// thread::yield_now();
/// ```
///
/// [`channel`]: crate::sync::mpsc
/// [`join`]: JoinHandle::join
/// [`Condvar`]: crate::sync::Condvar
/// [`Mutex`]: crate::sync::Mutex
#[stable(feature = "rust1", since = "1.0.0")]
pub fn yield_now() {
    imp::Thread::yield_now()
}

/// Determines whether the current thread is unwinding because of panic.
///
/// A common use of this feature is to poison shared resources when writing
/// unsafe code, by checking `panicking` when the `drop` is called.
///
/// This is usually not needed when writing safe code, as [`Mutex`es][Mutex]
/// already poison themselves when a thread panics while holding the lock.
///
/// This can also be used in multithreaded applications, in order to send a
/// message to other threads warning that a thread has panicked (e.g., for
/// monitoring purposes).
///
/// # Examples
///
/// ```should_panic
/// use std::thread;
///
/// struct SomeStruct;
///
/// impl Drop for SomeStruct {
///     fn drop(&mut self) {
///         if thread::panicking() {
///             println!("dropped while unwinding");
///         } else {
///             println!("dropped while not unwinding");
///         }
///     }
/// }
///
/// {
///     print!("a: ");
///     let a = SomeStruct;
/// }
///
/// {
///     print!("b: ");
///     let b = SomeStruct;
///     panic!()
/// }
/// ```
///
/// [Mutex]: crate::sync::Mutex
#[inline]
#[must_use]
#[stable(feature = "rust1", since = "1.0.0")]
pub fn panicking() -> bool {
    panicking::panicking()
}
802 
803 /// Use [`sleep`].
804 ///
805 /// Puts the current thread to sleep for at least the specified amount of time.
806 ///
807 /// The thread may sleep longer than the duration specified due to scheduling
808 /// specifics or platform-dependent functionality. It will never sleep less.
809 ///
810 /// This function is blocking, and should not be used in `async` functions.
811 ///
812 /// # Platform-specific behavior
813 ///
814 /// On Unix platforms, the underlying syscall may be interrupted by a
815 /// spurious wakeup or signal handler. To ensure the sleep occurs for at least
816 /// the specified duration, this function may invoke that system call multiple
817 /// times.
818 ///
819 /// # Examples
820 ///
821 /// ```no_run
822 /// use std::thread;
823 ///
824 /// // Let's sleep for 2 seconds:
825 /// thread::sleep_ms(2000);
826 /// ```
827 #[stable(feature = "rust1", since = "1.0.0")]
828 #[deprecated(since = "1.6.0", note = "replaced by `std::thread::sleep`")]
sleep_ms(ms: u32)829 pub fn sleep_ms(ms: u32) {
830     sleep(Duration::from_millis(ms as u64))
831 }
832 
833 /// Puts the current thread to sleep for at least the specified amount of time.
834 ///
835 /// The thread may sleep longer than the duration specified due to scheduling
836 /// specifics or platform-dependent functionality. It will never sleep less.
837 ///
838 /// This function is blocking, and should not be used in `async` functions.
839 ///
840 /// # Platform-specific behavior
841 ///
842 /// On Unix platforms, the underlying syscall may be interrupted by a
843 /// spurious wakeup or signal handler. To ensure the sleep occurs for at least
844 /// the specified duration, this function may invoke that system call multiple
845 /// times.
846 /// Platforms which do not support nanosecond precision for sleeping will
847 /// have `dur` rounded up to the nearest granularity of time they can sleep for.
848 ///
849 /// Currently, specifying a zero duration on Unix platforms returns immediately
850 /// without invoking the underlying [`nanosleep`] syscall, whereas on Windows
851 /// platforms the underlying [`Sleep`] syscall is always invoked.
852 /// If the intention is to yield the current time-slice you may want to use
853 /// [`yield_now`] instead.
854 ///
855 /// [`nanosleep`]: https://linux.die.net/man/2/nanosleep
856 /// [`Sleep`]: https://docs.microsoft.com/en-us/windows/win32/api/synchapi/nf-synchapi-sleep
857 ///
858 /// # Examples
859 ///
860 /// ```no_run
861 /// use std::{thread, time};
862 ///
863 /// let ten_millis = time::Duration::from_millis(10);
864 /// let now = time::Instant::now();
865 ///
866 /// thread::sleep(ten_millis);
867 ///
868 /// assert!(now.elapsed() >= ten_millis);
869 /// ```
870 #[stable(feature = "thread_sleep", since = "1.4.0")]
sleep(dur: Duration)871 pub fn sleep(dur: Duration) {
872     imp::Thread::sleep(dur)
873 }
874 
875 /// Used to ensure that `park` and `park_timeout` do not unwind, as that can
876 /// cause undefined behaviour if not handled correctly (see #102398 for context).
877 struct PanicGuard;
878 
879 impl Drop for PanicGuard {
drop(&mut self)880     fn drop(&mut self) {
881         rtabort!("an irrecoverable error occurred while synchronizing threads")
882     }
883 }
884 
885 /// Blocks unless or until the current thread's token is made available.
886 ///
887 /// A call to `park` does not guarantee that the thread will remain parked
888 /// forever, and callers should be prepared for this possibility. However,
889 /// it is guaranteed that this function will not panic (it may abort the
890 /// process if the implementation encounters some rare errors).
891 ///
892 /// # `park` and `unpark`
893 ///
894 /// Every thread is equipped with some basic low-level blocking support, via the
895 /// [`thread::park`][`park`] function and [`thread::Thread::unpark`][`unpark`]
896 /// method. [`park`] blocks the current thread, which can then be resumed from
897 /// another thread by calling the [`unpark`] method on the blocked thread's
898 /// handle.
899 ///
900 /// Conceptually, each [`Thread`] handle has an associated token, which is
901 /// initially not present:
902 ///
903 /// * The [`thread::park`][`park`] function blocks the current thread unless or
904 ///   until the token is available for its thread handle, at which point it
905 ///   atomically consumes the token. It may also return *spuriously*, without
906 ///   consuming the token. [`thread::park_timeout`] does the same, but allows
907 ///   specifying a maximum time to block the thread for.
908 ///
909 /// * The [`unpark`] method on a [`Thread`] atomically makes the token available
910 ///   if it wasn't already. Because the token is initially absent, [`unpark`]
911 ///   followed by [`park`] will result in the second call returning immediately.
912 ///
913 /// The API is typically used by acquiring a handle to the current thread,
914 /// placing that handle in a shared data structure so that other threads can
915 /// find it, and then `park`ing in a loop. When some desired condition is met, another
916 /// thread calls [`unpark`] on the handle.
917 ///
918 /// The motivation for this design is twofold:
919 ///
920 /// * It avoids the need to allocate mutexes and condvars when building new
921 ///   synchronization primitives; the threads already provide basic
922 ///   blocking/signaling.
923 ///
924 /// * It can be implemented very efficiently on many platforms.
925 ///
926 /// # Memory Ordering
927 ///
928 /// Calls to `park` _synchronize-with_ calls to `unpark`, meaning that memory
929 /// operations performed before a call to `unpark` are made visible to the thread that
930 /// consumes the token and returns from `park`. Note that all `park` and `unpark`
931 /// operations for a given thread form a total order and `park` synchronizes-with
932 /// _all_ prior `unpark` operations.
933 ///
934 /// In atomic ordering terms, `unpark` performs a `Release` operation and `park`
935 /// performs the corresponding `Acquire` operation. Calls to `unpark` for the same
936 /// thread form a [release sequence].
937 ///
938 /// Note that being unblocked does not imply a call was made to `unpark`, because
939 /// wakeups can also be spurious. For example, a valid, but inefficient,
940 /// implementation could have `park` and `unpark` return immediately without doing anything,
941 /// making *all* wakeups spurious.
942 ///
/// # Examples
///
/// ```
/// use std::thread;
/// use std::sync::{Arc, atomic::{Ordering, AtomicBool}};
/// use std::time::Duration;
///
/// let flag = Arc::new(AtomicBool::new(false));
/// let flag2 = Arc::clone(&flag);
///
/// let parked_thread = thread::spawn(move || {
///     // We want to wait until the flag is set. We *could* just spin, but using
///     // park/unpark is more efficient.
///     while !flag2.load(Ordering::Relaxed) {
///         println!("Parking thread");
///         thread::park();
///         // We *could* get here spuriously, i.e., way before the 10ms below are over!
///         // But that is no problem, we are in a loop until the flag is set anyway.
///         println!("Thread unparked");
///     }
///     println!("Flag received");
/// });
///
/// // Let some time pass for the thread to be spawned.
/// thread::sleep(Duration::from_millis(10));
///
/// // Set the flag, and let the thread wake up.
/// // There is no race condition here, if `unpark`
/// // happens first, `park` will return immediately.
/// // Hence there is no risk of a deadlock.
/// flag.store(true, Ordering::Relaxed);
/// println!("Unpark the thread");
/// parked_thread.thread().unpark();
///
/// parked_thread.join().unwrap();
/// ```
///
/// [`unpark`]: Thread::unpark
/// [`thread::park_timeout`]: park_timeout
/// [release sequence]: https://en.cppreference.com/w/cpp/atomic/memory_order#Release_sequence
#[stable(feature = "rust1", since = "1.0.0")]
pub fn park() {
    let guard = PanicGuard;
    // SAFETY: park is called on the parker owned by this thread.
    unsafe {
        current().inner.as_ref().parker().park();
    }
    // No panic occurred, do not abort.
    forget(guard);
}

/// Use [`park_timeout`].
///
/// Blocks unless or until the current thread's token is made available or
/// the specified duration has been reached (may wake spuriously).
///
/// The semantics of this function are equivalent to [`park`] except
/// that the thread will be blocked for roughly no longer than `dur`. This
/// method should not be used for precise timing due to anomalies such as
/// preemption or platform differences that might not cause the maximum
/// amount of time waited to be precisely `ms` long.
///
/// See the [park documentation][`park`] for more detail.
#[stable(feature = "rust1", since = "1.0.0")]
#[deprecated(since = "1.6.0", note = "replaced by `std::thread::park_timeout`")]
pub fn park_timeout_ms(ms: u32) {
    park_timeout(Duration::from_millis(ms as u64))
}

/// Blocks unless or until the current thread's token is made available or
/// the specified duration has been reached (may wake spuriously).
///
/// The semantics of this function are equivalent to [`park`][park] except
/// that the thread will be blocked for roughly no longer than `dur`. This
/// method should not be used for precise timing due to anomalies such as
/// preemption or platform differences that might not cause the maximum
/// amount of time waited to be precisely `dur` long.
///
/// See the [park documentation][park] for more details.
///
/// # Platform-specific behavior
///
/// Platforms which do not support nanosecond precision for sleeping will have
/// `dur` rounded up to the nearest granularity of time they can sleep for.
///
/// # Examples
///
/// Waiting for the complete expiration of the timeout:
///
/// ```rust,no_run
/// use std::thread::park_timeout;
/// use std::time::{Instant, Duration};
///
/// let timeout = Duration::from_secs(2);
/// let beginning_park = Instant::now();
///
/// let mut timeout_remaining = timeout;
/// loop {
///     park_timeout(timeout_remaining);
///     let elapsed = beginning_park.elapsed();
///     if elapsed >= timeout {
///         break;
///     }
///     println!("restarting park_timeout after {elapsed:?}");
///     timeout_remaining = timeout - elapsed;
/// }
/// ```
#[stable(feature = "park_timeout", since = "1.4.0")]
pub fn park_timeout(dur: Duration) {
    let guard = PanicGuard;
    // SAFETY: park_timeout is called on the parker owned by this thread.
    unsafe {
        current().inner.as_ref().parker().park_timeout(dur);
    }
    // No panic occurred, do not abort.
    forget(guard);
}

////////////////////////////////////////////////////////////////////////////////
// ThreadId
////////////////////////////////////////////////////////////////////////////////

/// A unique identifier for a running thread.
///
/// A `ThreadId` is an opaque object that uniquely identifies each thread
/// created during the lifetime of a process. `ThreadId`s are guaranteed not to
/// be reused, even when a thread terminates. `ThreadId`s are under the control
/// of Rust's standard library and there may not be any relationship between
/// `ThreadId` and the underlying platform's notion of a thread identifier --
/// the two concepts cannot, therefore, be used interchangeably. A `ThreadId`
/// can be retrieved from the [`id`] method on a [`Thread`].
///
/// # Examples
///
/// ```
/// use std::thread;
///
/// let other_thread = thread::spawn(|| {
///     thread::current().id()
/// });
///
/// let other_thread_id = other_thread.join().unwrap();
/// assert!(thread::current().id() != other_thread_id);
/// ```
///
/// [`id`]: Thread::id
#[stable(feature = "thread_id", since = "1.19.0")]
#[derive(Eq, PartialEq, Clone, Copy, Hash, Debug)]
pub struct ThreadId(NonZeroU64);

impl ThreadId {
    // Generate a new unique thread ID.
    fn new() -> ThreadId {
        #[cold]
        fn exhausted() -> ! {
            panic!("failed to generate unique thread ID: bitspace exhausted")
        }

        cfg_if::cfg_if! {
            if #[cfg(target_has_atomic = "64")] {
                use crate::sync::atomic::{AtomicU64, Ordering::Relaxed};

                static COUNTER: AtomicU64 = AtomicU64::new(0);

                let mut last = COUNTER.load(Relaxed);
                loop {
                    let Some(id) = last.checked_add(1) else {
                        exhausted();
                    };

                    match COUNTER.compare_exchange_weak(last, id, Relaxed, Relaxed) {
                        Ok(_) => return ThreadId(NonZeroU64::new(id).unwrap()),
                        Err(id) => last = id,
                    }
                }
            } else {
                use crate::sync::{Mutex, PoisonError};

                static COUNTER: Mutex<u64> = Mutex::new(0);

                let mut counter = COUNTER.lock().unwrap_or_else(PoisonError::into_inner);
                let Some(id) = counter.checked_add(1) else {
                    // in case the panic handler ends up calling `ThreadId::new()`,
                    // avoid reentrant lock acquire.
                    drop(counter);
                    exhausted();
                };

                *counter = id;
                drop(counter);
                ThreadId(NonZeroU64::new(id).unwrap())
            }
        }
    }

    /// This returns a numeric identifier for the thread identified by this
    /// `ThreadId`.
    ///
    /// As noted in the documentation for the type itself, it is essentially an
    /// opaque ID, but is guaranteed to be unique for each thread. The returned
    /// value is entirely opaque -- only equality testing is stable. Note that
    /// it is not guaranteed which values new threads will return, and this may
    /// change across Rust versions.
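    ///
    /// # Examples
    ///
    /// A minimal sketch of retrieving the numeric value (requires the
    /// unstable `thread_id_value` feature):
    ///
    /// ```
    /// #![feature(thread_id_value)]
    /// use std::thread;
    ///
    /// let id = thread::current().id().as_u64();
    /// // Only equality is meaningful; the numeric value itself is opaque.
    /// assert_eq!(id, thread::current().id().as_u64());
    /// ```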
    #[must_use]
    #[unstable(feature = "thread_id_value", issue = "67939")]
    pub fn as_u64(&self) -> NonZeroU64 {
        self.0
    }
}

////////////////////////////////////////////////////////////////////////////////
// Thread
////////////////////////////////////////////////////////////////////////////////

/// The internal representation of a `Thread` handle
struct Inner {
    name: Option<CString>, // Guaranteed to be UTF-8
    id: ThreadId,
    parker: Parker,
}

impl Inner {
    fn parker(self: Pin<&Self>) -> Pin<&Parker> {
        unsafe { Pin::map_unchecked(self, |inner| &inner.parker) }
    }
}

#[derive(Clone)]
#[stable(feature = "rust1", since = "1.0.0")]
/// A handle to a thread.
///
/// Threads are represented via the `Thread` type, which you can get in one of
/// two ways:
///
/// * By spawning a new thread, e.g., using the [`thread::spawn`][`spawn`]
///   function, and calling [`thread`][`JoinHandle::thread`] on the
///   [`JoinHandle`].
/// * By requesting the current thread, using the [`thread::current`] function.
///
/// The [`thread::current`] function is available even for threads not spawned
/// by the APIs of this module.
///
/// There is usually no need to create a `Thread` struct yourself; one
/// should instead use a function like `spawn` to create new threads. See the
/// docs of [`Builder`] and [`spawn`] for more details.
///
/// [`thread::current`]: current
pub struct Thread {
    inner: Pin<Arc<Inner>>,
}

impl Thread {
    // Used only internally to construct a thread object without spawning
    // Panics if the name contains nuls.
    pub(crate) fn new(name: Option<CString>) -> Thread {
        // We have to use `unsafe` here to construct the `Parker` in-place,
        // which is required for the UNIX implementation.
        //
        // SAFETY: We pin the Arc immediately after creation, so its address never
        // changes.
        let inner = unsafe {
            let mut arc = Arc::<Inner>::new_uninit();
            let ptr = Arc::get_mut_unchecked(&mut arc).as_mut_ptr();
            addr_of_mut!((*ptr).name).write(name);
            addr_of_mut!((*ptr).id).write(ThreadId::new());
            Parker::new_in_place(addr_of_mut!((*ptr).parker));
            Pin::new_unchecked(arc.assume_init())
        };

        Thread { inner }
    }

    /// Atomically makes the handle's token available if it is not already.
    ///
    /// Every thread is equipped with some basic low-level blocking support, via
    /// the [`park`][park] function and the `unpark()` method. These can be
    /// used as a more CPU-efficient implementation of a spinlock.
    ///
    /// See the [park documentation][park] for more details.
    ///
    /// # Examples
    ///
    /// ```
    /// use std::thread;
    /// use std::time::Duration;
    ///
    /// let parked_thread = thread::Builder::new()
    ///     .spawn(|| {
    ///         println!("Parking thread");
    ///         thread::park();
    ///         println!("Thread unparked");
    ///     })
    ///     .unwrap();
    ///
    /// // Let some time pass for the thread to be spawned.
    /// thread::sleep(Duration::from_millis(10));
    ///
    /// println!("Unpark the thread");
    /// parked_thread.thread().unpark();
    ///
    /// parked_thread.join().unwrap();
    /// ```
    #[stable(feature = "rust1", since = "1.0.0")]
    #[inline]
    pub fn unpark(&self) {
        self.inner.as_ref().parker().unpark();
    }

    /// Gets the thread's unique identifier.
    ///
    /// # Examples
    ///
    /// ```
    /// use std::thread;
    ///
    /// let other_thread = thread::spawn(|| {
    ///     thread::current().id()
    /// });
    ///
    /// let other_thread_id = other_thread.join().unwrap();
    /// assert!(thread::current().id() != other_thread_id);
    /// ```
    #[stable(feature = "thread_id", since = "1.19.0")]
    #[must_use]
    pub fn id(&self) -> ThreadId {
        self.inner.id
    }

    /// Gets the thread's name.
    ///
    /// For more information about named threads, see
    /// [this module-level documentation][naming-threads].
    ///
    /// # Examples
    ///
    /// Threads by default have no name specified:
    ///
    /// ```
    /// use std::thread;
    ///
    /// let builder = thread::Builder::new();
    ///
    /// let handler = builder.spawn(|| {
    ///     assert!(thread::current().name().is_none());
    /// }).unwrap();
    ///
    /// handler.join().unwrap();
    /// ```
    ///
    /// Thread with a specified name:
    ///
    /// ```
    /// use std::thread;
    ///
    /// let builder = thread::Builder::new()
    ///     .name("foo".into());
    ///
    /// let handler = builder.spawn(|| {
    ///     assert_eq!(thread::current().name(), Some("foo"))
    /// }).unwrap();
    ///
    /// handler.join().unwrap();
    /// ```
    ///
    /// [naming-threads]: ./index.html#naming-threads
    #[stable(feature = "rust1", since = "1.0.0")]
    #[must_use]
    pub fn name(&self) -> Option<&str> {
        self.cname().map(|s| unsafe { str::from_utf8_unchecked(s.to_bytes()) })
    }

    fn cname(&self) -> Option<&CStr> {
        self.inner.name.as_deref()
    }
}

#[stable(feature = "rust1", since = "1.0.0")]
impl fmt::Debug for Thread {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        f.debug_struct("Thread")
            .field("id", &self.id())
            .field("name", &self.name())
            .finish_non_exhaustive()
    }
}

////////////////////////////////////////////////////////////////////////////////
// JoinHandle
////////////////////////////////////////////////////////////////////////////////

/// A specialized [`Result`] type for threads.
///
/// Indicates the manner in which a thread exited.
///
/// The value contained in the `Result::Err` variant
/// is the value the thread panicked with;
/// that is, the argument the `panic!` macro was called with.
/// Unlike with normal errors, this value doesn't implement
/// the [`Error`](crate::error::Error) trait.
///
/// Thus, a sensible way to handle a thread panic is to either:
///
/// 1. propagate the panic with [`std::panic::resume_unwind`]
/// 2. or in case the thread is intended to be a subsystem boundary
///    that is supposed to isolate system-level failures,
///    match on the `Err` variant and handle the panic in an appropriate way
///
/// A thread that completes without panicking is considered to exit successfully.
///
/// # Examples
///
/// Matching on the result of a joined thread:
///
/// ```no_run
/// use std::{fs, thread, panic};
///
/// fn copy_in_thread() -> thread::Result<()> {
///     thread::spawn(|| {
///         fs::copy("foo.txt", "bar.txt").unwrap();
///     }).join()
/// }
///
/// fn main() {
///     match copy_in_thread() {
///         Ok(_) => println!("copy succeeded"),
///         Err(e) => panic::resume_unwind(e),
///     }
/// }
/// ```
///
/// [`Result`]: crate::result::Result
/// [`std::panic::resume_unwind`]: crate::panic::resume_unwind
#[stable(feature = "rust1", since = "1.0.0")]
pub type Result<T> = crate::result::Result<T, Box<dyn Any + Send + 'static>>;

// This packet is used to communicate the return value between the spawned
// thread and the rest of the program. It is shared through an `Arc` and
// there's no need for a mutex here because synchronization happens with `join()`
// (the caller will never read this packet until the thread has exited).
//
// An Arc to the packet is stored into a `JoinInner` which in turn is placed
// in `JoinHandle`.
struct Packet<'scope, T> {
    scope: Option<Arc<scoped::ScopeData>>,
    result: UnsafeCell<Option<Result<T>>>,
    _marker: PhantomData<Option<&'scope scoped::ScopeData>>,
}

// Due to the usage of `UnsafeCell` we need to manually implement Sync.
// The type `T` should already always be Send (otherwise the thread could not
// have been created) and the Packet is Sync because all access to the
// `UnsafeCell` is synchronized (by the `join()` boundary), and `ScopeData` is Sync.
unsafe impl<'scope, T: Sync> Sync for Packet<'scope, T> {}

impl<'scope, T> Drop for Packet<'scope, T> {
    fn drop(&mut self) {
        // If this packet was for a thread that ran in a scope, the thread
        // panicked, and nobody consumed the panic payload, we make sure
        // the scope function will panic.
        let unhandled_panic = matches!(self.result.get_mut(), Some(Err(_)));
        // Drop the result without causing unwinding.
        // This is only relevant for threads that aren't join()ed, as
        // join() will take the `result` and set it to None, such that
        // there is nothing left to drop here.
        // If this panics, we should handle that, because we're outside the
        // outermost `catch_unwind` of our thread.
        // We just abort in that case, since there's nothing else we can do.
        // (And even if we tried to handle it somehow, we'd also need to handle
        // the case where the panic payload we get out of it also panics on
        // drop, and so on. See issue #86027.)
        if let Err(_) = panic::catch_unwind(panic::AssertUnwindSafe(|| {
            *self.result.get_mut() = None;
        })) {
            rtabort!("thread result panicked on drop");
        }
        // Book-keeping so the scope knows when it's done.
        if let Some(scope) = &self.scope {
            // Now that there will be no more user code running on this thread
            // that can use 'scope, mark the thread as 'finished'.
            // It's important we only do this after the `result` has been dropped,
            // since dropping it might still use things it borrowed from 'scope.
            scope.decrement_num_running_threads(unhandled_panic);
        }
    }
}

/// Inner representation for JoinHandle
struct JoinInner<'scope, T> {
    native: imp::Thread,
    thread: Thread,
    packet: Arc<Packet<'scope, T>>,
}

impl<'scope, T> JoinInner<'scope, T> {
    fn join(mut self) -> Result<T> {
        self.native.join();
        Arc::get_mut(&mut self.packet).unwrap().result.get_mut().take().unwrap()
    }
}

/// An owned permission to join on a thread (block on its termination).
///
/// A `JoinHandle` *detaches* the associated thread when it is dropped, which
/// means that there is no longer any handle to the thread and no way to `join`
/// on it.
///
/// Due to platform restrictions, it is not possible to [`Clone`] this
/// handle: the ability to join a thread is a uniquely-owned permission.
///
/// This `struct` is created by the [`thread::spawn`] function and the
/// [`thread::Builder::spawn`] method.
///
/// # Examples
///
/// Creation from [`thread::spawn`]:
///
/// ```
/// use std::thread;
///
/// let join_handle: thread::JoinHandle<_> = thread::spawn(|| {
///     // some work here
/// });
/// ```
///
/// Creation from [`thread::Builder::spawn`]:
///
/// ```
/// use std::thread;
///
/// let builder = thread::Builder::new();
///
/// let join_handle: thread::JoinHandle<_> = builder.spawn(|| {
///     // some work here
/// }).unwrap();
/// ```
///
/// A thread being detached and outliving the thread that spawned it:
///
/// ```no_run
/// use std::thread;
/// use std::time::Duration;
///
/// let original_thread = thread::spawn(|| {
///     let _detached_thread = thread::spawn(|| {
///         // Here we sleep to make sure that the first thread returns before this one does.
1488 ///         thread::sleep(Duration::from_millis(10));
1489 ///         // This will be called, even though the JoinHandle is dropped.
1490 ///         println!("♫ Still alive ♫");
1491 ///     });
1492 /// });
1493 ///
1494 /// original_thread.join().expect("The thread being joined has panicked");
1495 /// println!("Original thread is joined.");
1496 ///
1497 /// // We make sure that the new thread has time to run, before the main
1498 /// // thread returns.
1499 ///
1500 /// thread::sleep(Duration::from_millis(1000));
1501 /// ```
1502 ///
1503 /// [`thread::Builder::spawn`]: Builder::spawn
1504 /// [`thread::spawn`]: spawn
1505 #[stable(feature = "rust1", since = "1.0.0")]
1506 pub struct JoinHandle<T>(JoinInner<'static, T>);
1507 
1508 #[stable(feature = "joinhandle_impl_send_sync", since = "1.29.0")]
1509 unsafe impl<T> Send for JoinHandle<T> {}
1510 #[stable(feature = "joinhandle_impl_send_sync", since = "1.29.0")]
1511 unsafe impl<T> Sync for JoinHandle<T> {}
1512 
1513 impl<T> JoinHandle<T> {
1514     /// Extracts a handle to the underlying thread.
1515     ///
1516     /// # Examples
1517     ///
1518     /// ```
1519     /// use std::thread;
1520     ///
1521     /// let builder = thread::Builder::new();
1522     ///
1523     /// let join_handle: thread::JoinHandle<_> = builder.spawn(|| {
1524     ///     // some work here
1525     /// }).unwrap();
1526     ///
1527     /// let thread = join_handle.thread();
1528     /// println!("thread id: {:?}", thread.id());
1529     /// ```
1530     #[stable(feature = "rust1", since = "1.0.0")]
1531     #[must_use]
thread(&self) -> &Thread1532     pub fn thread(&self) -> &Thread {
1533         &self.0.thread
1534     }
1535 
1536     /// Waits for the associated thread to finish.
1537     ///
1538     /// This function will return immediately if the associated thread has already finished.
1539     ///
1540     /// In terms of [atomic memory orderings],  the completion of the associated
1541     /// thread synchronizes with this function returning. In other words, all
1542     /// operations performed by that thread [happen
1543     /// before](https://doc.rust-lang.org/nomicon/atomics.html#data-accesses) all
1544     /// operations that happen after `join` returns.
1545     ///
    /// If the associated thread panics, [`Err`] is returned with the parameter given
    /// to [`panic!`].
    ///
    /// [`Err`]: crate::result::Result::Err
    /// [atomic memory orderings]: crate::sync::atomic
    ///
    /// # Panics
    ///
    /// This function may panic on some platforms if a thread attempts to join
    /// itself or otherwise may create a deadlock with joining threads.
    ///
    /// # Examples
    ///
    /// ```
    /// use std::thread;
    ///
    /// let builder = thread::Builder::new();
    ///
    /// let join_handle: thread::JoinHandle<_> = builder.spawn(|| {
    ///     // some work here
    /// }).unwrap();
    /// join_handle.join().expect("Couldn't join on the associated thread");
    /// ```
    #[stable(feature = "rust1", since = "1.0.0")]
    pub fn join(self) -> Result<T> {
        self.0.join()
    }

    /// Checks if the associated thread has finished running its main function.
    ///
    /// `is_finished` supports implementing a non-blocking join operation, by checking
    /// `is_finished`, and calling `join` if it returns `true`. This function does not block. To
    /// block while waiting on the thread to finish, use [`join`][Self::join].
    ///
    /// This might return `true` for a brief moment after the thread's main
    /// function has returned, but before the thread itself has stopped running.
    /// However, once this returns `true`, [`join`][Self::join] can be expected
    /// to return quickly, without blocking for any significant amount of time.
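    ///
    /// # Examples
    ///
    /// A non-blocking poll loop (an illustrative sketch, not part of the
    /// original docs):
    ///
    /// ```
    /// use std::thread;
    /// use std::time::Duration;
    ///
    /// let handle = thread::spawn(|| {
    ///     thread::sleep(Duration::from_millis(10));
    /// });
    ///
    /// // Poll without blocking; `join` will return quickly once this is true.
    /// while !handle.is_finished() {
    ///     thread::sleep(Duration::from_millis(1));
    /// }
    /// handle.join().unwrap();
    /// ```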
    #[stable(feature = "thread_is_running", since = "1.61.0")]
    pub fn is_finished(&self) -> bool {
        Arc::strong_count(&self.0.packet) == 1
    }
}

impl<T> AsInner<imp::Thread> for JoinHandle<T> {
    fn as_inner(&self) -> &imp::Thread {
        &self.0.native
    }
}

impl<T> IntoInner<imp::Thread> for JoinHandle<T> {
    fn into_inner(self) -> imp::Thread {
        self.0.native
    }
}

#[stable(feature = "std_debug", since = "1.16.0")]
impl<T> fmt::Debug for JoinHandle<T> {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        f.debug_struct("JoinHandle").finish_non_exhaustive()
    }
}

fn _assert_sync_and_send() {
    fn _assert_both<T: Send + Sync>() {}
    _assert_both::<JoinHandle<()>>();
    _assert_both::<Thread>();
}

/// Returns an estimate of the default amount of parallelism a program should use.
///
/// Parallelism is a resource. A given machine provides a certain capacity for
/// parallelism, i.e., a bound on the number of computations it can perform
/// simultaneously. This number often corresponds to the number of CPUs a
/// computer has, but it may diverge in various cases.
///
/// Host environments such as VMs or container orchestrators may want to
/// restrict the amount of parallelism made available to programs in them. This
/// is often done to limit the potential impact of (unintentionally)
/// resource-intensive programs on other programs running on the same machine.
///
/// # Limitations
///
/// The purpose of this API is to provide an easy and portable way to query
/// the default amount of parallelism the program should use. Among other things it
/// does not expose information on NUMA regions, does not account for
/// differences in (co)processor capabilities or current system load,
/// and will not modify the program's global state in order to more accurately
/// query the amount of available parallelism.
///
/// Where both fixed steady-state and burst limits are available the steady-state
/// capacity will be used to ensure more predictable latencies.
///
/// Resource limits can be changed during the runtime of a program, therefore the value is
/// not cached and instead recomputed every time this function is called. It should not be
/// called from hot code.
///
/// The value returned by this function should be considered a simplified
/// approximation of the actual amount of parallelism available at any given
/// time. To get a more detailed or precise overview of the amount of
/// parallelism available to the program, you may wish to use
/// platform-specific APIs as well. The following platform limitations currently
/// apply to `available_parallelism`:
///
/// On Windows:
/// - It may undercount the amount of parallelism available on systems with more
///   than 64 logical CPUs. However, programs typically need specific support to
///   take advantage of more than 64 logical CPUs, and in the absence of such
///   support, the number returned by this function accurately reflects the
///   number of logical CPUs the program can use by default.
/// - It may overcount the amount of parallelism available on systems limited by
///   process-wide affinity masks, or job object limitations.
///
/// On Linux:
/// - It may overcount the amount of parallelism available when limited by a
///   process-wide affinity mask or cgroup quotas and `sched_getaffinity()` or cgroup fs can't be
///   queried, e.g. due to sandboxing.
/// - It may undercount the amount of parallelism if the current thread's affinity mask
///   does not reflect the process' cpuset, e.g. due to pinned threads.
/// - If the process is in a cgroup v1 cpu controller, this may need to
///   scan mountpoints to find the corresponding cgroup v1 controller,
///   which may take time on systems with large numbers of mountpoints.
///   (This does not apply to cgroup v2, or to processes not in a
///   cgroup.)
///
/// On all targets:
/// - It may overcount the amount of parallelism available when running in a VM
///   with CPU usage limits (e.g. an overcommitted host).
///
/// # Errors
///
/// This function will return errors in the following cases, among others:
///
/// - If the amount of parallelism is not known for the target platform.
/// - If the program lacks permission to query the amount of parallelism made
///   available to it.
///
/// # Examples
///
/// ```
/// # #![allow(dead_code)]
/// use std::{io, thread};
///
/// fn main() -> io::Result<()> {
///     let count = thread::available_parallelism()?.get();
///     assert!(count >= 1_usize);
///     Ok(())
/// }
/// ```
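///
/// A common follow-up (an illustrative sketch, not part of the original docs)
/// is to fall back to a single worker when the query fails:
///
/// ```
/// use std::thread;
///
/// // Size a worker pool, defaulting to one worker if the
/// // parallelism query returns an error.
/// let workers = thread::available_parallelism()
///     .map(|n| n.get())
///     .unwrap_or(1);
/// assert!(workers >= 1);
/// ```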
#[doc(alias = "available_concurrency")] // Alias for a previous name we gave this API on unstable.
#[doc(alias = "hardware_concurrency")] // Alias for C++ `std::thread::hardware_concurrency`.
#[doc(alias = "num_cpus")] // Alias for a popular ecosystem crate which provides similar functionality.
#[stable(feature = "available_parallelism", since = "1.59.0")]
pub fn available_parallelism() -> io::Result<NonZeroUsize> {
    imp::available_parallelism()
}