// Copyright 2018 The Fuchsia Authors
//
// Licensed under the 2-Clause BSD License <LICENSE-BSD or
// https://opensource.org/license/bsd-2-clause>, Apache License, Version 2.0
// <LICENSE-APACHE or https://www.apache.org/licenses/LICENSE-2.0>, or the MIT
// license <LICENSE-MIT or https://opensource.org/licenses/MIT>, at your option.
// This file may not be copied, modified, or distributed except according to
// those terms.

// After updating the following doc comment, make sure to run the following
// command to update `README.md` based on its contents:
//
//   cargo -q run --manifest-path tools/Cargo.toml -p generate-readme > README.md

//! *<span style="font-size: 100%; color:grey;">Need more out of zerocopy?
//! Submit a [customer request issue][customer-request-issue]!</span>*
//!
//! ***<span style="font-size: 140%">Fast, safe, <span
//! style="color:red;">compile error</span>. Pick two.</span>***
//!
//! Zerocopy makes zero-cost memory manipulation effortless. We write `unsafe`
//! so you don't have to.
//!
//! *Thanks for using zerocopy 0.8! For an overview of what changed from 0.7,
//! check out our [release notes][release-notes], which include a step-by-step
//! guide for upgrading from 0.7.*
//!
//! *Have questions? Need help? Ask the maintainers on [GitHub][github-q-a] or
//! on [Discord][discord]!*
//!
//! [customer-request-issue]: https://github.com/google/zerocopy/issues/new/choose
//! [release-notes]: https://github.com/google/zerocopy/discussions/1680
//! [github-q-a]: https://github.com/google/zerocopy/discussions/categories/q-a
//! [discord]: https://discord.gg/MAvWH2R6zk
//!
//! # Overview
//!
//! ##### Conversion Traits
//!
//! Zerocopy provides four derivable traits for zero-cost conversions:
//! - [`TryFromBytes`] indicates that a type may safely be converted from
//!   certain byte sequences (conditional on runtime checks)
//! - [`FromZeros`] indicates that a sequence of zero bytes represents a valid
//!   instance of a type
//! - [`FromBytes`] indicates that a type may safely be converted from an
//!   arbitrary byte sequence
//! - [`IntoBytes`] indicates that a type may safely be converted *to* a byte
//!   sequence
//!
//! These traits support sized types, slices, and [slice DSTs][slice-dsts].
//!
//! [slice-dsts]: KnownLayout#dynamically-sized-types
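//!
//! As a minimal sketch (`PacketHeader` is a hypothetical type, and this
//! example assumes the `derive` feature is enabled), a type deriving
//! `FromBytes` and `IntoBytes` can round-trip through raw bytes:
//!
//! ```
//! use zerocopy::*;
//! # use zerocopy_derive::*;
//!
//! #[derive(FromBytes, IntoBytes, Immutable)]
//! #[repr(C)]
//! struct PacketHeader {
//!     src_port: [u8; 2],
//!     dst_port: [u8; 2],
//! }
//!
//! // `read_from_bytes` (from `FromBytes`) copies the bytes into a new value.
//! let header = PacketHeader::read_from_bytes(&[0u8, 80, 1, 187][..]).unwrap();
//! assert_eq!(header.src_port, [0, 80]);
//! // `as_bytes` (from `IntoBytes`) views the value as its underlying bytes.
//! assert_eq!(header.as_bytes(), &[0, 80, 1, 187]);
//! ```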
//!
//! ##### Marker Traits
//!
//! Zerocopy provides three derivable marker traits that do not provide any
//! functionality themselves, but are required to call certain methods provided
//! by the conversion traits:
//! - [`KnownLayout`] indicates that zerocopy can reason about certain layout
//!   qualities of a type
//! - [`Immutable`] indicates that a type is free from interior mutability,
//!   except by ownership or an exclusive (`&mut`) borrow
//! - [`Unaligned`] indicates that a type's alignment requirement is 1
//!
//! You should generally derive these marker traits whenever possible.
//!
//! ##### Conversion Macros
//!
//! Zerocopy provides six macros for safe casting between types:
//!
//! - ([`try_`][try_transmute])[`transmute`] (conditionally) converts a value of
//!   one type to a value of another type of the same size
//! - ([`try_`][try_transmute_mut])[`transmute_mut`] (conditionally) converts a
//!   mutable reference of one type to a mutable reference of another type of
//!   the same size
//! - ([`try_`][try_transmute_ref])[`transmute_ref`] (conditionally) converts a
//!   mutable or immutable reference of one type to an immutable reference of
//!   another type of the same size
//!
//! These macros perform *compile-time* size and alignment checks, meaning that
//! unconditional casts have zero cost at runtime. Conditional casts do not need
//! to validate size or alignment at runtime, but do need to validate contents.
//!
//! These macros cannot be used in generic contexts. For generic conversions,
//! use the methods defined by the [conversion traits](#conversion-traits).
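//!
//! For instance, a value-level cast with `transmute!` is checked entirely at
//! compile time; a size mismatch between the two types would be rejected
//! before the program ever runs:
//!
//! ```
//! // Flatten a 2x4 array of bytes into an 8-byte array. Both types have the
//! // same size, so this compiles and costs nothing at runtime.
//! let flat: [u8; 8] = zerocopy::transmute!([[1u8, 2, 3, 4], [5, 6, 7, 8]]);
//! assert_eq!(flat, [1, 2, 3, 4, 5, 6, 7, 8]);
//! ```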
//!
//! ##### Byteorder-Aware Numerics
//!
//! Zerocopy provides byte-order aware integer types that support these
//! conversions; see the [`byteorder`] module. These types are especially useful
//! for network parsing.
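//!
//! As a brief sketch, a big-endian `U16` stores its value in network byte
//! order regardless of the host's endianness, and has alignment 1, making it
//! suitable for use in packet format definitions:
//!
//! ```
//! use zerocopy::byteorder::{BigEndian, U16};
//!
//! let port = U16::<BigEndian>::new(443);
//! // `get` converts back to the host's native byte order.
//! assert_eq!(port.get(), 443);
//! ```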
//!
//! # Cargo Features
//!
//! - **`alloc`**
//!   By default, `zerocopy` is `no_std`. When the `alloc` feature is enabled,
//!   the `alloc` crate is added as a dependency, and some allocation-related
//!   functionality is added.
//!
//! - **`std`**
//!   By default, `zerocopy` is `no_std`. When the `std` feature is enabled, the
//!   `std` crate is added as a dependency (i.e., `no_std` is disabled), and
//!   support for some `std` types is added. `std` implies `alloc`.
//!
//! - **`derive`**
//!   Provides derives for the core marker traits via the `zerocopy-derive`
//!   crate. These derives are re-exported from `zerocopy`, so it is not
//!   necessary to depend on `zerocopy-derive` directly.
//!
//!   However, you may experience better compile times if you instead directly
//!   depend on both `zerocopy` and `zerocopy-derive` in your `Cargo.toml`,
//!   since doing so will allow Rust to compile these crates in parallel. To do
//!   so, do *not* enable the `derive` feature, and list both dependencies in
//!   your `Cargo.toml` with the same leading non-zero version number; e.g.:
//!
//!   ```toml
//!   [dependencies]
//!   zerocopy = "0.X"
//!   zerocopy-derive = "0.X"
//!   ```
//!
//!   To avoid the risk of [duplicate import errors][duplicate-import-errors] if
//!   one of your dependencies enables zerocopy's `derive` feature, import
//!   derives as `use zerocopy_derive::*` rather than by name (e.g., `use
//!   zerocopy_derive::FromBytes`).
//!
//! - **`simd`**
//!   When the `simd` feature is enabled, `FromZeros`, `FromBytes`, and
//!   `IntoBytes` impls are emitted for all stable SIMD types which exist on the
//!   target platform. Note that the layout of SIMD types is not yet stabilized,
//!   so these impls may be removed in the future if layout changes make them
//!   invalid. For more information, see the Unsafe Code Guidelines Reference
//!   page on the [layout of packed SIMD vectors][simd-layout].
//!
//! - **`simd-nightly`**
//!   Enables the `simd` feature and adds support for SIMD types which are only
//!   available on nightly. Since these types are unstable, support for any type
//!   may be removed at any point in the future.
//!
//! - **`float-nightly`**
//!   Adds support for the unstable `f16` and `f128` types. These types are
//!   not yet fully implemented and may not be supported on all platforms.
//!
//! [duplicate-import-errors]: https://github.com/google/zerocopy/issues/1587
//! [simd-layout]: https://rust-lang.github.io/unsafe-code-guidelines/layout/packed-simd-vectors.html
//!
//! # Security Ethos
//!
//! Zerocopy is expressly designed for use in security-critical contexts. We
//! strive to ensure that zerocopy code is sound under Rust's current memory
//! model, and *any future memory model*. We ensure this by:
//! - **...not 'guessing' about Rust's semantics.**
//!   We annotate `unsafe` code with a precise rationale for its soundness that
//!   cites a relevant section of Rust's official documentation. When Rust's
//!   documented semantics are unclear, we work with the Rust Operational
//!   Semantics Team to clarify Rust's documentation.
//! - **...rigorously testing our implementation.**
//!   We run tests using [Miri], ensuring that zerocopy is sound across a wide
//!   array of supported target platforms of varying endianness and pointer
//!   width, and across both current and experimental memory models of Rust.
//! - **...formally proving the correctness of our implementation.**
//!   We apply formal verification tools like [Kani][kani] to prove zerocopy's
//!   correctness.
//!
//! For more information, see our full [soundness policy].
//!
//! [Miri]: https://github.com/rust-lang/miri
//! [kani]: https://github.com/model-checking/kani
//! [soundness policy]: https://github.com/google/zerocopy/blob/main/POLICIES.md#soundness
//!
//! # Relationship to Project Safe Transmute
//!
//! [Project Safe Transmute] is an official initiative of the Rust Project to
//! develop language-level support for safer transmutation. The Project consults
//! with crates like zerocopy to identify aspects of safer transmutation that
//! would benefit from compiler support, and has developed an [experimental,
//! compiler-supported analysis][mcp-transmutability] which determines whether,
//! for a given type, any value of that type may be soundly transmuted into
//! another type. Once this functionality is sufficiently mature, zerocopy
//! intends to replace its internal transmutability analysis (implemented by our
//! custom derives) with the compiler-supported one. This change will likely be
//! an implementation detail that is invisible to zerocopy's users.
//!
//! Project Safe Transmute will not replace the need for most of zerocopy's
//! higher-level abstractions. The experimental compiler analysis is a tool for
//! checking the soundness of `unsafe` code, not a tool to avoid writing
//! `unsafe` code altogether. For the foreseeable future, crates like zerocopy
//! will still be required in order to provide higher-level abstractions on top
//! of the building block provided by Project Safe Transmute.
//!
//! [Project Safe Transmute]: https://rust-lang.github.io/rfcs/2835-project-safe-transmute.html
//! [mcp-transmutability]: https://github.com/rust-lang/compiler-team/issues/411
//!
//! # MSRV
//!
//! See our [MSRV policy].
//!
//! [MSRV policy]: https://github.com/google/zerocopy/blob/main/POLICIES.md#msrv
//!
//! # Changelog
//!
//! Zerocopy uses [GitHub Releases].
//!
//! [GitHub Releases]: https://github.com/google/zerocopy/releases
//!
//! # Thanks
//!
//! Zerocopy is maintained by engineers at Google and Amazon with help from
//! [many wonderful contributors][contributors]. Thank you to everyone who has
//! lent a hand in making Rust a little more secure!
//!
//! [contributors]: https://github.com/google/zerocopy/graphs/contributors

// Sometimes we want to use lints which were added after our MSRV.
// `unknown_lints` is `warn` by default and we deny warnings in CI, so without
// this attribute, any unknown lint would cause a CI failure when testing with
// our MSRV.
#![allow(unknown_lints, non_local_definitions, unreachable_patterns)]
#![deny(renamed_and_removed_lints)]
#![deny(
    anonymous_parameters,
    deprecated_in_future,
    late_bound_lifetime_arguments,
    missing_copy_implementations,
    missing_debug_implementations,
    missing_docs,
    path_statements,
    patterns_in_fns_without_body,
    rust_2018_idioms,
    trivial_numeric_casts,
    unreachable_pub,
    unsafe_op_in_unsafe_fn,
    unused_extern_crates,
    // We intentionally choose not to deny `unused_qualifications`. When items
    // are added to the prelude (e.g., `core::mem::size_of`), this has the
    // consequence of making some uses trigger this lint on the latest toolchain
    // (e.g., `mem::size_of`), but fixing it (e.g. by replacing with `size_of`)
    // does not work on older toolchains.
    //
    // We tested a more complicated fix in #1413, but ultimately decided that,
    // since this lint is just a minor style lint, the complexity isn't worth it
    // - it's fine to occasionally have unused qualifications slip through,
    // especially since these do not affect our user-facing API in any way.
    variant_size_differences
)]
#![cfg_attr(
    __ZEROCOPY_INTERNAL_USE_ONLY_NIGHTLY_FEATURES_IN_TESTS,
    deny(fuzzy_provenance_casts, lossy_provenance_casts)
)]
#![deny(
    clippy::all,
    clippy::alloc_instead_of_core,
    clippy::arithmetic_side_effects,
    clippy::as_underscore,
    clippy::assertions_on_result_states,
    clippy::as_conversions,
    clippy::correctness,
    clippy::dbg_macro,
    clippy::decimal_literal_representation,
    clippy::double_must_use,
    clippy::get_unwrap,
    clippy::indexing_slicing,
    clippy::missing_inline_in_public_items,
    clippy::missing_safety_doc,
    clippy::must_use_candidate,
    clippy::must_use_unit,
    clippy::obfuscated_if_else,
    clippy::perf,
    clippy::print_stdout,
    clippy::return_self_not_must_use,
    clippy::std_instead_of_core,
    clippy::style,
    clippy::suspicious,
    clippy::todo,
    clippy::undocumented_unsafe_blocks,
    clippy::unimplemented,
    clippy::unnested_or_patterns,
    clippy::unwrap_used,
    clippy::use_debug
)]
#![allow(clippy::type_complexity)]
#![deny(
    rustdoc::bare_urls,
    rustdoc::broken_intra_doc_links,
    rustdoc::invalid_codeblock_attributes,
    rustdoc::invalid_html_tags,
    rustdoc::invalid_rust_codeblocks,
    rustdoc::missing_crate_level_docs,
    rustdoc::private_intra_doc_links
)]
// In test code, it makes sense to weight more heavily towards concise, readable
// code over correct or debuggable code.
#![cfg_attr(any(test, kani), allow(
    // In tests, you get line numbers and have access to source code, so panic
    // messages are less important. You also often unwrap a lot, which would
    // make expect'ing instead very verbose.
    clippy::unwrap_used,
    // In tests, there's no harm to "panic risks" - the worst that can happen is
    // that your test will fail, and you'll fix it. By contrast, panic risks in
    // production code introduce the possibility of code panicking unexpectedly
    // "in the field".
    clippy::arithmetic_side_effects,
    clippy::indexing_slicing,
))]
#![cfg_attr(not(any(test, feature = "std")), no_std)]
#![cfg_attr(
    all(feature = "simd-nightly", any(target_arch = "x86", target_arch = "x86_64")),
    feature(stdarch_x86_avx512)
)]
#![cfg_attr(
    all(feature = "simd-nightly", target_arch = "arm"),
    feature(stdarch_arm_dsp, stdarch_arm_neon_intrinsics)
)]
#![cfg_attr(
    all(feature = "simd-nightly", any(target_arch = "powerpc", target_arch = "powerpc64")),
    feature(stdarch_powerpc)
)]
#![cfg_attr(feature = "float-nightly", feature(f16, f128))]
#![cfg_attr(doc_cfg, feature(doc_cfg))]
#![cfg_attr(
    __ZEROCOPY_INTERNAL_USE_ONLY_NIGHTLY_FEATURES_IN_TESTS,
    feature(layout_for_ptr, coverage_attribute)
)]

// This is a hack to allow zerocopy-derive derives to work in this crate. They
// assume that zerocopy is linked as an extern crate, so they access items from
// it as `zerocopy::Xxx`. This makes that still work.
#[cfg(any(feature = "derive", test))]
extern crate self as zerocopy;

#[doc(hidden)]
#[macro_use]
pub mod util;

pub mod byte_slice;
pub mod byteorder;
mod deprecated;
// This module is `pub` so that zerocopy's error types and error handling
// documentation is grouped together in a cohesive module. In practice, we
// expect most users to use the re-export of `error`'s items to avoid identifier
// stuttering.
pub mod error;
mod impls;
#[doc(hidden)]
pub mod layout;
mod macros;
#[doc(hidden)]
pub mod pointer;
mod r#ref;
// TODO(#252): If we make this pub, come up with a better name.
mod wrappers;

pub use crate::byte_slice::*;
pub use crate::byteorder::*;
pub use crate::error::*;
pub use crate::r#ref::*;
pub use crate::wrappers::*;

use core::{
    cell::UnsafeCell,
    cmp::Ordering,
    fmt::{self, Debug, Display, Formatter},
    hash::Hasher,
    marker::PhantomData,
    mem::{self, ManuallyDrop, MaybeUninit as CoreMaybeUninit},
    num::{
        NonZeroI128, NonZeroI16, NonZeroI32, NonZeroI64, NonZeroI8, NonZeroIsize, NonZeroU128,
        NonZeroU16, NonZeroU32, NonZeroU64, NonZeroU8, NonZeroUsize, Wrapping,
    },
    ops::{Deref, DerefMut},
    ptr::{self, NonNull},
    slice,
};

#[cfg(feature = "std")]
use std::io;

use crate::pointer::invariant::{self, BecauseExclusive};

#[cfg(any(feature = "alloc", test))]
extern crate alloc;
#[cfg(any(feature = "alloc", test))]
use alloc::{boxed::Box, vec::Vec};

#[cfg(any(feature = "alloc", test, kani))]
use core::alloc::Layout;

// Used by `TryFromBytes::is_bit_valid`.
#[doc(hidden)]
pub use crate::pointer::{invariant::BecauseImmutable, Maybe, MaybeAligned, Ptr};
// Used by `KnownLayout`.
#[doc(hidden)]
pub use crate::layout::*;

// For each trait polyfill, as soon as the corresponding feature is stable, the
// polyfill import will be unused because method/function resolution will prefer
// the inherent method/function over a trait method/function. Thus, we suppress
// the `unused_imports` warning.
//
// See the documentation on `util::polyfills` for more information.
#[allow(unused_imports)]
use crate::util::polyfills::{self, NonNullExt as _, NumExt as _};

#[rustversion::nightly]
#[cfg(all(test, not(__ZEROCOPY_INTERNAL_USE_ONLY_NIGHTLY_FEATURES_IN_TESTS)))]
const _: () = {
    #[deprecated = "some tests may be skipped due to missing RUSTFLAGS=\"--cfg __ZEROCOPY_INTERNAL_USE_ONLY_NIGHTLY_FEATURES_IN_TESTS\""]
    const _WARNING: () = ();
    #[warn(deprecated)]
    _WARNING
};

// These exist so that code which was written against the old names will get
// less confusing error messages when they upgrade to a more recent version of
// zerocopy. On our MSRV toolchain, the error messages read, for example:
//
//   error[E0603]: trait `FromZeroes` is private
//       --> examples/deprecated.rs:1:15
//        |
//   1    | use zerocopy::FromZeroes;
//        |               ^^^^^^^^^^ private trait
//        |
//   note: the trait `FromZeroes` is defined here
//       --> /Users/josh/workspace/zerocopy/src/lib.rs:1845:5
//        |
//   1845 | use FromZeros as FromZeroes;
//        |     ^^^^^^^^^^^^^^^^^^^^^^^
//
// The "note" provides enough context to make it easy to figure out how to fix
// the error.
#[allow(unused)]
use {FromZeros as FromZeroes, IntoBytes as AsBytes, Ref as LayoutVerified};

/// Implements [`KnownLayout`].
///
/// This derive analyzes various aspects of a type's layout that are needed for
/// some of zerocopy's APIs. It can be applied to structs, enums, and unions;
/// e.g.:
///
/// ```
/// # use zerocopy_derive::KnownLayout;
/// #[derive(KnownLayout)]
/// struct MyStruct {
/// # /*
///     ...
/// # */
/// }
///
/// #[derive(KnownLayout)]
/// enum MyEnum {
/// #   V00,
/// # /*
///     ...
/// # */
/// }
///
/// #[derive(KnownLayout)]
/// union MyUnion {
/// #   variant: u8,
/// # /*
///     ...
/// # */
/// }
/// ```
///
/// # Limitations
///
/// This derive cannot currently be applied to unsized structs without an
/// explicit `repr` attribute.
///
/// Some invocations of this derive run afoul of a [known bug] in Rust's type
/// privacy checker. For example, this code:
///
/// ```compile_fail,E0446
/// use zerocopy::*;
/// # use zerocopy_derive::*;
///
/// #[derive(KnownLayout)]
/// #[repr(C)]
/// pub struct PublicType {
///     leading: Foo,
///     trailing: Bar,
/// }
///
/// #[derive(KnownLayout)]
/// struct Foo;
///
/// #[derive(KnownLayout)]
/// struct Bar;
/// ```
///
/// ...results in a compilation error:
///
/// ```text
/// error[E0446]: private type `Bar` in public interface
///  --> examples/bug.rs:3:10
///    |
/// 3  | #[derive(KnownLayout)]
///    |          ^^^^^^^^^^^ can't leak private type
/// ...
/// 14 | struct Bar;
///    | ---------- `Bar` declared as private
///    |
///    = note: this error originates in the derive macro `KnownLayout` (in Nightly builds, run with -Z macro-backtrace for more info)
/// ```
///
/// This issue arises when `#[derive(KnownLayout)]` is applied to `repr(C)`
/// structs whose trailing field type is less public than the enclosing struct.
///
/// To work around this, mark the trailing field type `pub` and annotate it with
/// `#[doc(hidden)]`; e.g.:
///
/// ```no_run
/// use zerocopy::*;
/// # use zerocopy_derive::*;
///
/// #[derive(KnownLayout)]
/// #[repr(C)]
/// pub struct PublicType {
///     leading: Foo,
///     trailing: Bar,
/// }
///
/// #[derive(KnownLayout)]
/// struct Foo;
///
/// #[doc(hidden)]
/// #[derive(KnownLayout)]
/// pub struct Bar; // <- `Bar` is now also `pub`
/// ```
///
/// [known bug]: https://github.com/rust-lang/rust/issues/45713
#[cfg(any(feature = "derive", test))]
#[cfg_attr(doc_cfg, doc(cfg(feature = "derive")))]
pub use zerocopy_derive::KnownLayout;

/// Indicates that zerocopy can reason about certain aspects of a type's layout.
///
/// This trait is required by many of zerocopy's APIs. It supports sized types,
/// slices, and [slice DSTs](#dynamically-sized-types).
///
/// # Implementation
///
/// **Do not implement this trait yourself!** Instead, use
/// [`#[derive(KnownLayout)]`][derive]; e.g.:
///
/// ```
/// # use zerocopy_derive::KnownLayout;
/// #[derive(KnownLayout)]
/// struct MyStruct {
/// # /*
///     ...
/// # */
/// }
///
/// #[derive(KnownLayout)]
/// enum MyEnum {
/// #   V00,
/// # /*
///     ...
/// # */
/// }
///
/// #[derive(KnownLayout)]
/// union MyUnion {
/// #   variant: u8,
/// # /*
///     ...
/// # */
/// }
/// ```
///
/// This derive performs a sophisticated analysis to deduce the layout
/// characteristics of types. You **must** implement this trait via the derive.
///
/// # Dynamically-sized types
///
/// `KnownLayout` supports slice-based dynamically sized types ("slice DSTs").
///
/// A slice DST is a type whose trailing field is either a slice or another
/// slice DST, rather than a type with fixed size. For example:
///
/// ```
/// #[repr(C)]
/// struct PacketHeader {
/// # /*
///     ...
/// # */
/// }
///
/// #[repr(C)]
/// struct Packet {
///     header: PacketHeader,
///     body: [u8],
/// }
/// ```
///
/// It can be useful to think of slice DSTs as a generalization of slices - in
/// other words, a normal slice is just the special case of a slice DST with
/// zero leading fields. In particular:
/// - Like slices, slice DSTs can have different lengths at runtime
/// - Like slices, slice DSTs cannot be passed by-value, but only by reference
///   or via other indirection such as `Box`
/// - Like slices, a reference (or `Box`, or other pointer type) to a slice DST
///   encodes the number of elements in the trailing slice field
///
/// ## Slice DST layout
///
/// Just like other composite Rust types, the layout of a slice DST is not
/// well-defined unless it is specified using an explicit `#[repr(...)]`
/// attribute such as `#[repr(C)]`. [Other representations are
/// supported][reprs], but in this section, we'll use `#[repr(C)]` as our
/// example.
///
/// A `#[repr(C)]` slice DST is laid out [just like sized `#[repr(C)]`
/// types][repr-c-structs], but the presence of a variable-length field
/// introduces the possibility of *dynamic padding*. In particular, it may be
/// necessary to add trailing padding *after* the trailing slice field in order
/// to satisfy the outer type's alignment, and the amount of padding required
/// may be a function of the length of the trailing slice field. This is just a
/// natural consequence of the normal `#[repr(C)]` rules applied to slice DSTs,
/// but it can result in surprising behavior. For example, consider the
/// following type:
///
/// ```
/// #[repr(C)]
/// struct Foo {
///     a: u32,
///     b: u8,
///     z: [u16],
/// }
/// ```
///
/// Assuming that `u32` has alignment 4 (this is not true on all platforms),
/// then `Foo` has alignment 4 as well. Here is the smallest possible value for
/// `Foo`:
///
/// ```text
/// byte offset | 01234567
///       field | aaaab---
///                    ><
/// ```
///
/// In this value, `z` has length 0. Abiding by `#[repr(C)]`, the lowest offset
/// that we can place `z` at is 5, but since `z` has alignment 2, we need to
/// round up to offset 6. This means that there is one byte of padding between
/// `b` and `z`, then 0 bytes of `z` itself (denoted `><` in this diagram), and
/// then two bytes of padding after `z` in order to satisfy the overall
/// alignment of `Foo`. The size of this instance is 8 bytes.
///
/// What about if `z` has length 1?
///
/// ```text
/// byte offset | 01234567
///       field | aaaab-zz
/// ```
///
/// In this instance, `z` has length 1, and thus takes up 2 bytes. That means
/// that we no longer need padding after `z` in order to satisfy `Foo`'s
/// alignment. We've now seen two different values of `Foo` with two different
/// lengths of `z`, but they both have the same size - 8 bytes.
///
/// What about if `z` has length 2?
///
/// ```text
/// byte offset | 012345678901
///       field | aaaab-zzzz--
/// ```
///
/// Now `z` has length 2, and thus takes up 4 bytes. This brings our un-padded
/// size to 10, and so we now need another 2 bytes of padding after `z` to
/// satisfy `Foo`'s alignment.
///
/// Again, all of this is just a logical consequence of the `#[repr(C)]` rules
/// applied to slice DSTs, but it can be surprising that the amount of trailing
/// padding becomes a function of the trailing slice field's length, and thus
/// can only be computed at runtime.
///
/// [reprs]: https://doc.rust-lang.org/reference/type-layout.html#representations
/// [repr-c-structs]: https://doc.rust-lang.org/reference/type-layout.html#reprc-structs
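///
/// The sizes derived above can be checked with `core::mem::size_of_val`. This
/// sketch uses a hypothetical generic `FooSized` (not part of zerocopy's API)
/// so that `Foo`-shaped values can be constructed via unsizing coercion, and
/// assumes, as above, that `u32` has alignment 4:
///
/// ```
/// use core::mem::size_of_val;
///
/// #[repr(C)]
/// struct FooSized<Z: ?Sized> {
///     a: u32,
///     b: u8,
///     z: Z,
/// }
///
/// // Coerce sized values to the slice DST `FooSized<[u16]>`.
/// let len0: &FooSized<[u16]> = &FooSized { a: 0, b: 0, z: [0u16; 0] };
/// let len1: &FooSized<[u16]> = &FooSized { a: 0, b: 0, z: [0u16; 1] };
/// let len2: &FooSized<[u16]> = &FooSized { a: 0, b: 0, z: [0u16; 2] };
///
/// // Lengths 0 and 1 both produce a total size of 8; length 2 requires
/// // trailing padding, for a total size of 12.
/// assert_eq!(size_of_val(len0), 8);
/// assert_eq!(size_of_val(len1), 8);
/// assert_eq!(size_of_val(len2), 12);
/// ```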
///
/// ## What is a valid size?
///
/// There are two places in zerocopy's API that we refer to "a valid size" of a
/// type. In normal casts or conversions, where the source is a byte slice, we
/// need to know whether the source byte slice is a valid size of the
/// destination type. In prefix or suffix casts, we need to know whether *there
/// exists* a valid size of the destination type which fits in the source byte
/// slice and, if so, what the largest such size is.
///
/// As outlined above, a slice DST's size is defined by the number of elements
/// in its trailing slice field. However, there is not necessarily a 1-to-1
/// mapping between trailing slice field length and overall size. As we saw in
/// the previous section with the type `Foo`, instances with both 0 and 1
/// elements in the trailing `z` field result in a `Foo` whose size is 8 bytes.
///
/// When we say "x is a valid size of `T`", we mean one of two things:
/// - If `T: Sized`, then we mean that `x == size_of::<T>()`
/// - If `T` is a slice DST, then we mean that there exists a `len` such that
///   the instance of `T` with `len` trailing slice elements has size `x`
///
/// When we say "largest possible size of `T` that fits in a byte slice", we
/// mean one of two things:
/// - If `T: Sized`, then we mean `size_of::<T>()` if the byte slice is at least
///   `size_of::<T>()` bytes long
/// - If `T` is a slice DST, then we mean to consider all values, `len`, such
///   that the instance of `T` with `len` trailing slice elements fits in the
///   byte slice, and to choose the largest such `len`, if any
///
/// # Safety
///
/// This trait does not convey any safety guarantees to code outside this crate.
///
/// You must not rely on the `#[doc(hidden)]` internals of `KnownLayout`. Future
/// releases of zerocopy may make backwards-breaking changes to these items,
/// including changes that only affect soundness, which may cause code which
/// uses those items to silently become unsound.
///
#[cfg_attr(feature = "derive", doc = "[derive]: zerocopy_derive::KnownLayout")]
#[cfg_attr(
    not(feature = "derive"),
    doc = concat!("[derive]: https://docs.rs/zerocopy/", env!("CARGO_PKG_VERSION"), "/zerocopy/derive.KnownLayout.html"),
)]
#[cfg_attr(
    zerocopy_diagnostic_on_unimplemented_1_78_0,
    diagnostic::on_unimplemented(note = "Consider adding `#[derive(KnownLayout)]` to `{Self}`")
)]
pub unsafe trait KnownLayout {
    // The `Self: Sized` bound makes it so that `KnownLayout` can still be
    // object safe. It's not currently object safe thanks to `const LAYOUT`, and
    // it likely won't be in the future, but there's no reason not to be
    // forwards-compatible with object safety.
    #[doc(hidden)]
    fn only_derive_is_allowed_to_implement_this_trait()
    where
        Self: Sized;

    /// The type of metadata stored in a pointer to `Self`.
    ///
    /// This is `()` for sized types and `usize` for slice DSTs.
    type PointerMetadata: PointerMetadata;

    /// A maybe-uninitialized analog of `Self`
    ///
    /// # Safety
    ///
    /// `Self::LAYOUT` and `Self::MaybeUninit::LAYOUT` are identical.
    /// `Self::MaybeUninit` admits uninitialized bytes in all positions.
    #[doc(hidden)]
    type MaybeUninit: ?Sized + KnownLayout<PointerMetadata = Self::PointerMetadata>;

    /// The layout of `Self`.
    ///
    /// # Safety
    ///
    /// Callers may assume that `LAYOUT` accurately reflects the layout of
    /// `Self`. In particular:
    /// - `LAYOUT.align` is equal to `Self`'s alignment
    /// - If `Self: Sized`, then `LAYOUT.size_info == SizeInfo::Sized { size }`
    ///   where `size == size_of::<Self>()`
    /// - If `Self` is a slice DST, then `LAYOUT.size_info ==
    ///   SizeInfo::SliceDst(slice_layout)` where:
    ///   - The size, `size`, of an instance of `Self` with `elems` trailing
    ///     slice elements is equal to `slice_layout.offset +
    ///     slice_layout.elem_size * elems` rounded up to the nearest multiple
    ///     of `LAYOUT.align`
    ///   - For such an instance, any bytes in the range `[slice_layout.offset +
    ///     slice_layout.elem_size * elems, size)` are padding and must not be
    ///     assumed to be initialized
    #[doc(hidden)]
    const LAYOUT: DstLayout;

    /// SAFETY: The returned pointer has the same address and provenance as
    /// `bytes`. If `Self` is a DST, the returned pointer's referent has `elems`
    /// elements in its trailing slice.
    #[doc(hidden)]
    fn raw_from_ptr_len(bytes: NonNull<u8>, meta: Self::PointerMetadata) -> NonNull<Self>;

    /// Extracts the metadata from a pointer to `Self`.
    ///
    /// # Safety
    ///
    /// `pointer_to_metadata` always returns the correct metadata stored in
    /// `ptr`.
    #[doc(hidden)]
    fn pointer_to_metadata(ptr: *mut Self) -> Self::PointerMetadata;

    /// Computes the length of the byte range addressed by `ptr`.
    ///
    /// Returns `None` if the resulting length would not fit in a `usize`.
    ///
    /// # Safety
    ///
    /// Callers may assume that `size_of_val_raw` always returns the correct
    /// size.
    ///
    /// Callers may assume that, if `ptr` addresses a byte range whose length
    /// fits in a `usize`, this will return `Some`.
    #[doc(hidden)]
    #[must_use]
    #[inline(always)]
    fn size_of_val_raw(ptr: NonNull<Self>) -> Option<usize> {
        let meta = Self::pointer_to_metadata(ptr.as_ptr());
        // SAFETY: `size_for_metadata` promises to only return `None` if the
        // resulting size would not fit in a `usize`.
        meta.size_for_metadata(Self::LAYOUT)
    }
}

/// The metadata associated with a [`KnownLayout`] type.
#[doc(hidden)]
pub trait PointerMetadata: Copy + Eq + Debug {
    /// Constructs a `Self` from an element count.
    ///
    /// If `Self = ()`, this returns `()`. If `Self = usize`, this returns
    /// `elems`. No other types are currently supported.
    fn from_elem_count(elems: usize) -> Self;

    /// Computes the size of the object with the given layout and pointer
    /// metadata.
    ///
    /// # Panics
    ///
    /// If `Self = ()`, `layout` must describe a sized type. If `Self = usize`,
    /// `layout` must describe a slice DST. Otherwise, `size_for_metadata` may
    /// panic.
    ///
    /// # Safety
    ///
    /// `size_for_metadata` promises to only return `None` if the resulting size
    /// would not fit in a `usize`.
    fn size_for_metadata(&self, layout: DstLayout) -> Option<usize>;
}

impl PointerMetadata for () {
    #[inline]
    #[allow(clippy::unused_unit)]
    fn from_elem_count(_elems: usize) -> () {}

    #[inline]
    fn size_for_metadata(&self, layout: DstLayout) -> Option<usize> {
        match layout.size_info {
            SizeInfo::Sized { size } => Some(size),
            // NOTE: This branch is unreachable, but we return `None` rather
            // than `unreachable!()` to avoid generating panic paths.
            SizeInfo::SliceDst(_) => None,
        }
    }
}

impl PointerMetadata for usize {
    #[inline]
    fn from_elem_count(elems: usize) -> usize {
        elems
    }

    #[inline]
    fn size_for_metadata(&self, layout: DstLayout) -> Option<usize> {
        match layout.size_info {
            SizeInfo::SliceDst(TrailingSliceLayout { offset, elem_size }) => {
                let slice_len = elem_size.checked_mul(*self)?;
                let without_padding = offset.checked_add(slice_len)?;
                without_padding.checked_add(util::padding_needed_for(without_padding, layout.align))
            }
            // NOTE: This branch is unreachable, but we return `None` rather
            // than `unreachable!()` to avoid generating panic paths.
            SizeInfo::Sized { .. } => None,
        }
    }
}
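
// A hedged sanity check of the size arithmetic above, assuming the internal
// `DstLayout::for_slice` constructor used later in this file: for `[u16]`, the
// trailing slice starts at offset 0 and each element is 2 bytes, so 3 elements
// occupy `0 + 2 * 3 = 6` bytes and no trailing padding is needed.
#[cfg(test)]
mod pointer_metadata_size_example {
    use super::*;

    #[test]
    fn size_for_metadata_examples() {
        // `usize` metadata counts trailing slice elements.
        assert_eq!(3usize.size_for_metadata(DstLayout::for_slice::<u16>()), Some(6));
        // `()` metadata describes sized types; for a slice DST layout it
        // returns `None` rather than panicking.
        assert_eq!(().size_for_metadata(DstLayout::for_slice::<u16>()), None);
    }
}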

// SAFETY: Delegates safety to `DstLayout::for_slice`.
unsafe impl<T> KnownLayout for [T] {
    #[allow(clippy::missing_inline_in_public_items)]
    #[cfg_attr(
        all(coverage_nightly, __ZEROCOPY_INTERNAL_USE_ONLY_NIGHTLY_FEATURES_IN_TESTS),
        coverage(off)
    )]
    fn only_derive_is_allowed_to_implement_this_trait()
    where
        Self: Sized,
    {
    }

    type PointerMetadata = usize;

    // SAFETY: `CoreMaybeUninit<T>::LAYOUT` and `T::LAYOUT` are identical
    // because `CoreMaybeUninit<T>` has the same size and alignment as `T` [1].
    // Consequently, `[CoreMaybeUninit<T>]::LAYOUT` and `[T]::LAYOUT` are
    // identical, because they both lack a fixed-size prefix and because they
    // inherit the alignments of their inner element type (which are identical)
    // [2][3].
    //
    // `[CoreMaybeUninit<T>]` admits uninitialized bytes at all positions
    // because `CoreMaybeUninit<T>` admits uninitialized bytes at all positions
    // and because the inner elements of `[CoreMaybeUninit<T>]` are laid out
    // back-to-back [2][3].
    //
    // [1] Per https://doc.rust-lang.org/1.81.0/std/mem/union.MaybeUninit.html#layout-1:
    //
    //   `MaybeUninit<T>` is guaranteed to have the same size, alignment, and ABI as
    //   `T`
    //
    // [2] Per https://doc.rust-lang.org/1.82.0/reference/type-layout.html#slice-layout:
    //
    //   Slices have the same layout as the section of the array they slice.
    //
    // [3] Per https://doc.rust-lang.org/1.82.0/reference/type-layout.html#array-layout:
    //
    //   An array of `[T; N]` has a size of `size_of::<T>() * N` and the same
    //   alignment of `T`. Arrays are laid out so that the zero-based `nth`
    //   element of the array is offset from the start of the array by `n *
    //   size_of::<T>()` bytes.
    type MaybeUninit = [CoreMaybeUninit<T>];

    const LAYOUT: DstLayout = DstLayout::for_slice::<T>();

    // SAFETY: `.cast` preserves address and provenance. The returned pointer
    // refers to an object with `elems` elements by construction.
    #[inline(always)]
    fn raw_from_ptr_len(data: NonNull<u8>, elems: usize) -> NonNull<Self> {
        // TODO(#67): Remove this allow. See NonNullExt for more details.
        #[allow(unstable_name_collisions)]
        NonNull::slice_from_raw_parts(data.cast::<T>(), elems)
    }

    #[inline(always)]
    fn pointer_to_metadata(ptr: *mut [T]) -> usize {
        #[allow(clippy::as_conversions)]
        let slc = ptr as *const [()];

        // SAFETY:
        // - `()` has alignment 1, so `slc` is trivially aligned.
        // - `slc` was derived from a non-null pointer.
        // - The size is 0 regardless of the length, so it is sound to
        //   materialize a reference regardless of location.
        // - By invariant, `self.ptr` has valid provenance.
        let slc = unsafe { &*slc };

        // This is correct because the preceding `as` cast preserves the number
        // of slice elements. [1]
        //
        // [1] Per https://doc.rust-lang.org/reference/expressions/operator-expr.html#pointer-to-pointer-cast:
        //
        //   For slice types like `[T]` and `[U]`, the raw pointer types `*const
        //   [T]`, `*mut [T]`, `*const [U]`, and `*mut [U]` encode the number of
        //   elements in this slice. Casts between these raw pointer types
        //   preserve the number of elements. ... The same holds for `str` and
        //   any compound type whose unsized tail is a slice type, such as
        //   struct `Foo(i32, [u8])` or `(u64, Foo)`.
        slc.len()
    }
}

#[rustfmt::skip]
impl_known_layout!(
    (),
    u8, i8, u16, i16, u32, i32, u64, i64, u128, i128, usize, isize, f32, f64,
    bool, char,
    NonZeroU8, NonZeroI8, NonZeroU16, NonZeroI16, NonZeroU32, NonZeroI32,
    NonZeroU64, NonZeroI64, NonZeroU128, NonZeroI128, NonZeroUsize, NonZeroIsize
);
#[rustfmt::skip]
#[cfg(feature = "float-nightly")]
impl_known_layout!(
    #[cfg_attr(doc_cfg, doc(cfg(feature = "float-nightly")))]
    f16,
    #[cfg_attr(doc_cfg, doc(cfg(feature = "float-nightly")))]
    f128
);
#[rustfmt::skip]
impl_known_layout!(
    T         => Option<T>,
    T: ?Sized => PhantomData<T>,
    T         => Wrapping<T>,
    T         => CoreMaybeUninit<T>,
    T: ?Sized => *const T,
    T: ?Sized => *mut T,
    T: ?Sized => &'_ T,
    T: ?Sized => &'_ mut T,
);
impl_known_layout!(const N: usize, T => [T; N]);

safety_comment! {
    /// SAFETY:
    /// `str`, `ManuallyDrop<[T]>` [1], and `UnsafeCell<T>` [2] have the same
    /// representations as `[u8]`, `[T]`, and `T` respectively. `str` has
    /// different bit validity than `[u8]`, but that doesn't affect the
    /// soundness of this impl.
    ///
    /// [1] Per https://doc.rust-lang.org/nightly/core/mem/struct.ManuallyDrop.html:
    ///
    ///   `ManuallyDrop<T>` is guaranteed to have the same layout and bit
    ///   validity as `T`
    ///
    /// [2] Per https://doc.rust-lang.org/core/cell/struct.UnsafeCell.html#memory-layout:
    ///
    ///   `UnsafeCell<T>` has the same in-memory representation as its inner
    ///   type `T`.
    ///
    /// TODO(#429):
    /// - Add quotes from docs.
    /// - Once [1] (added in https://github.com/rust-lang/rust/pull/115522) is
    ///   available on stable, quote the stable docs instead of the nightly
    ///   docs.
    unsafe_impl_known_layout!(#[repr([u8])] str);
    unsafe_impl_known_layout!(T: ?Sized + KnownLayout => #[repr(T)] ManuallyDrop<T>);
    unsafe_impl_known_layout!(T: ?Sized + KnownLayout => #[repr(T)] UnsafeCell<T>);
}

safety_comment! {
    /// SAFETY:
    /// - By consequence of the invariant on `T::MaybeUninit` that `T::LAYOUT`
    ///   and `T::MaybeUninit::LAYOUT` are equal, `T` and `T::MaybeUninit`
    ///   have the same:
    ///   - Fixed prefix size
    ///   - Alignment
    ///   - (For DSTs) trailing slice element size
    /// - By consequence of the above, the referents `T::MaybeUninit` and `T`
    ///   require the same kind of pointer metadata, and thus it is valid to
    ///   perform an `as` cast from `*mut T` to `*mut T::MaybeUninit`, and this
    ///   operation preserves referent size (ie, `size_of_val_raw`).
    unsafe_impl_known_layout!(T: ?Sized + KnownLayout => #[repr(T::MaybeUninit)] MaybeUninit<T>);
}

/// Analyzes whether a type is [`FromZeros`].
///
/// This derive analyzes, at compile time, whether the annotated type satisfies
/// the [safety conditions] of `FromZeros` and implements `FromZeros` and its
/// supertraits if it is sound to do so. This derive can be applied to structs,
/// enums, and unions; e.g.:
///
/// ```
/// # use zerocopy_derive::{FromZeros, Immutable};
/// #[derive(FromZeros)]
/// struct MyStruct {
/// # /*
///     ...
/// # */
/// }
///
/// #[derive(FromZeros)]
/// #[repr(u8)]
/// enum MyEnum {
/// #   Variant0,
/// # /*
///     ...
/// # */
/// }
///
/// #[derive(FromZeros, Immutable)]
/// union MyUnion {
/// #   variant: u8,
/// # /*
///     ...
/// # */
/// }
/// ```
///
/// [safety conditions]: trait@FromZeros#safety
///
/// # Analysis
///
/// *This section describes, roughly, the analysis performed by this derive to
/// determine whether it is sound to implement `FromZeros` for a given type.
/// Unless you are modifying the implementation of this derive, or attempting to
/// manually implement `FromZeros` for a type yourself, you don't need to read
/// this section.*
///
/// If a type has the following properties, then this derive can implement
/// `FromZeros` for that type:
///
/// - If the type is a struct, all of its fields must be `FromZeros`.
/// - If the type is an enum:
///   - It must have a defined representation (`repr`s `C`, `u8`, `u16`, `u32`,
///     `u64`, `usize`, `i8`, `i16`, `i32`, `i64`, or `isize`).
///   - It must have a variant with a discriminant/tag of `0`. See [the
///     reference] for a description of how discriminant values are specified.
///   - The fields of that variant must be `FromZeros`.
///
/// This analysis is subject to change. Unsafe code may *only* rely on the
/// documented [safety conditions] of `FromZeros`, and must *not* rely on the
/// implementation details of this derive.
///
/// [the reference]: https://doc.rust-lang.org/reference/items/enumerations.html#custom-discriminant-values-for-fieldless-enumerations
///
/// ## Why isn't an explicit representation required for structs?
///
/// Neither this derive, nor the [safety conditions] of `FromZeros`, requires
/// that structs are marked with `#[repr(C)]`.
///
/// Per the [Rust reference][reference],
///
/// > The representation of a type can change the padding between fields, but
/// > does not change the layout of the fields themselves.
///
/// [reference]: https://doc.rust-lang.org/reference/type-layout.html#representations
///
/// Since the layout of structs only consists of padding bytes and field bytes,
/// a struct is soundly `FromZeros` if:
/// 1. its padding is soundly `FromZeros`, and
/// 2. its fields are soundly `FromZeros`.
///
/// The answer to the first question is always yes: padding bytes do not have
/// any validity constraints. A [discussion] of this question in the Unsafe Code
/// Guidelines Working Group concluded that it would be virtually unimaginable
/// for future versions of rustc to add validity constraints to padding bytes.
///
/// [discussion]: https://github.com/rust-lang/unsafe-code-guidelines/issues/174
///
/// Whether a struct is soundly `FromZeros` therefore solely depends on whether
/// its fields are `FromZeros`.
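///
/// For example, a struct whose fields are all `FromZeros` can be instantiated
/// from the all-zeros bit pattern (a sketch using a hypothetical `Point` type
/// and the `new_zeroed` constructor provided by `FromZeros`):
///
/// ```
/// # use zerocopy::FromZeros;
/// # use zerocopy_derive::FromZeros;
/// #[derive(FromZeros)]
/// #[repr(C)]
/// struct Point {
///     x: u32,
///     y: u32,
/// }
///
/// // Every field is zero in the zeroed instance.
/// let p = Point::new_zeroed();
/// assert_eq!((p.x, p.y), (0, 0));
/// ```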
// TODO(#146): Document why we don't require an enum to have an explicit `repr`
// attribute.
#[cfg(any(feature = "derive", test))]
#[cfg_attr(doc_cfg, doc(cfg(feature = "derive")))]
pub use zerocopy_derive::FromZeros;

/// Analyzes whether a type is [`Immutable`].
///
/// This derive analyzes, at compile time, whether the annotated type satisfies
/// the [safety conditions] of `Immutable` and implements `Immutable` if it is
/// sound to do so. This derive can be applied to structs, enums, and unions;
/// e.g.:
///
/// ```
/// # use zerocopy_derive::Immutable;
/// #[derive(Immutable)]
/// struct MyStruct {
/// # /*
///     ...
/// # */
/// }
///
/// #[derive(Immutable)]
/// enum MyEnum {
/// #   Variant0,
/// # /*
///     ...
/// # */
/// }
///
/// #[derive(Immutable)]
/// union MyUnion {
/// #   variant: u8,
/// # /*
///     ...
/// # */
/// }
/// ```
///
/// # Analysis
///
/// *This section describes, roughly, the analysis performed by this derive to
/// determine whether it is sound to implement `Immutable` for a given type.
/// Unless you are modifying the implementation of this derive, you don't need
/// to read this section.*
///
/// If a type has the following properties, then this derive can implement
/// `Immutable` for that type:
///
/// - All fields must be `Immutable`.
///
/// This analysis is subject to change. Unsafe code may *only* rely on the
/// documented [safety conditions] of `Immutable`, and must *not* rely on the
/// implementation details of this derive.
///
/// [safety conditions]: trait@Immutable#safety
#[cfg(any(feature = "derive", test))]
#[cfg_attr(doc_cfg, doc(cfg(feature = "derive")))]
pub use zerocopy_derive::Immutable;

/// Types which are free from interior mutability.
///
/// `T: Immutable` indicates that `T` does not permit interior mutation, except
/// by ownership or an exclusive (`&mut`) borrow.
///
/// # Implementation
///
/// **Do not implement this trait yourself!** Instead, use
/// [`#[derive(Immutable)]`][derive] (requires the `derive` Cargo feature);
/// e.g.:
///
/// ```
/// # use zerocopy_derive::Immutable;
/// #[derive(Immutable)]
/// struct MyStruct {
/// # /*
///     ...
/// # */
/// }
///
/// #[derive(Immutable)]
/// enum MyEnum {
/// # /*
///     ...
/// # */
/// }
///
/// #[derive(Immutable)]
/// union MyUnion {
/// #   variant: u8,
/// # /*
///     ...
/// # */
/// }
/// ```
///
/// This derive performs a sophisticated, compile-time safety analysis to
/// determine whether a type is `Immutable`.
///
/// # Safety
///
/// Unsafe code outside of this crate must not make any assumptions about `T`
/// based on `T: Immutable`. We reserve the right to relax the requirements for
/// `Immutable` in the future, and if unsafe code outside of this crate makes
/// assumptions based on `T: Immutable`, future relaxations may cause that code
/// to become unsound.
///
// # Safety (Internal)
//
// If `T: Immutable`, unsafe code *inside of this crate* may assume that, given
// `t: &T`, `t` does not contain any [`UnsafeCell`]s at any byte location
// within the byte range addressed by `t`. This includes ranges of length 0
// (e.g., `UnsafeCell<()>` and `[UnsafeCell<u8>; 0]`). If a type implements
// `Immutable` but violates these assumptions, it may cause this crate to
// exhibit [undefined behavior].
//
// [`UnsafeCell`]: core::cell::UnsafeCell
// [undefined behavior]: https://raphlinus.github.io/programming/rust/2018/08/17/undefined-behavior.html
#[cfg_attr(
    feature = "derive",
    doc = "[derive]: zerocopy_derive::Immutable",
    doc = "[derive-analysis]: zerocopy_derive::Immutable#analysis"
)]
#[cfg_attr(
    not(feature = "derive"),
    doc = concat!("[derive]: https://docs.rs/zerocopy/", env!("CARGO_PKG_VERSION"), "/zerocopy/derive.Immutable.html"),
    doc = concat!("[derive-analysis]: https://docs.rs/zerocopy/", env!("CARGO_PKG_VERSION"), "/zerocopy/derive.Immutable.html#analysis"),
)]
#[cfg_attr(
    zerocopy_diagnostic_on_unimplemented_1_78_0,
    diagnostic::on_unimplemented(note = "Consider adding `#[derive(Immutable)]` to `{Self}`")
)]
pub unsafe trait Immutable {
    // The `Self: Sized` bound makes it so that `Immutable` is still object
    // safe.
    #[doc(hidden)]
    fn only_derive_is_allowed_to_implement_this_trait()
    where
        Self: Sized;
}

/// Implements [`TryFromBytes`].
///
/// This derive synthesizes the runtime checks required to check whether a
/// sequence of initialized bytes corresponds to a valid instance of a type.
/// This derive can be applied to structs, enums, and unions; e.g.:
///
/// ```
/// # use zerocopy_derive::{TryFromBytes, Immutable};
/// #[derive(TryFromBytes)]
/// struct MyStruct {
/// # /*
///     ...
/// # */
/// }
///
/// #[derive(TryFromBytes)]
/// #[repr(u8)]
/// enum MyEnum {
/// #   V00,
/// # /*
///     ...
/// # */
/// }
///
/// #[derive(TryFromBytes, Immutable)]
/// union MyUnion {
/// #   variant: u8,
/// # /*
///     ...
/// # */
/// }
/// ```
///
/// # Portability
///
/// To ensure consistent endianness for enums with multi-byte representations,
/// explicitly specify and convert each discriminant using `.to_le()` or
/// `.to_be()`; e.g.:
///
/// ```
/// # use zerocopy_derive::TryFromBytes;
/// // `DataStoreVersion` is encoded in little-endian.
/// #[derive(TryFromBytes)]
/// #[repr(u32)]
/// pub enum DataStoreVersion {
///     /// Version 1 of the data store.
///     V1 = 9u32.to_le(),
///
///     /// Version 2 of the data store.
///     V2 = 10u32.to_le(),
/// }
/// ```
///
/// [safety conditions]: trait@TryFromBytes#safety
#[cfg(any(feature = "derive", test))]
#[cfg_attr(doc_cfg, doc(cfg(feature = "derive")))]
pub use zerocopy_derive::TryFromBytes;

/// Types for which some bit patterns are valid.
///
/// A memory region of the appropriate length which contains initialized bytes
/// can be viewed as a `TryFromBytes` type so long as the runtime value of those
/// bytes corresponds to a [*valid instance*] of that type. For example,
/// [`bool`] is `TryFromBytes`, so zerocopy can transmute a [`u8`] into a
/// [`bool`] so long as it first checks that the value of the [`u8`] is `0` or
/// `1`.
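///
/// For example (a sketch assuming the `try_read_from_bytes` method provided by
/// this trait), the byte `1` is accepted as a [`bool`] while `2` is rejected:
///
/// ```
/// # use zerocopy::TryFromBytes;
/// // `1` is a valid `bool`...
/// assert!(bool::try_read_from_bytes(&[1]).unwrap());
/// // ...but `2` is not.
/// assert!(bool::try_read_from_bytes(&[2]).is_err());
/// ```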
///
/// # Implementation
///
/// **Do not implement this trait yourself!** Instead, use
/// [`#[derive(TryFromBytes)]`][derive]; e.g.:
///
/// ```
/// # use zerocopy_derive::{TryFromBytes, Immutable};
/// #[derive(TryFromBytes)]
/// struct MyStruct {
/// # /*
///     ...
/// # */
/// }
///
/// #[derive(TryFromBytes)]
/// #[repr(u8)]
/// enum MyEnum {
/// #   V00,
/// # /*
///     ...
/// # */
/// }
///
/// #[derive(TryFromBytes, Immutable)]
/// union MyUnion {
/// #   variant: u8,
/// # /*
///     ...
/// # */
/// }
/// ```
///
/// This derive ensures that the runtime check of whether bytes correspond to a
/// valid instance is sound. You **must** implement this trait via the derive.
///
/// # What is a "valid instance"?
///
/// In Rust, each type has *bit validity*, which refers to the set of bit
/// patterns which may appear in an instance of that type. It is impossible for
/// safe Rust code to produce values which violate bit validity (ie, values
/// outside of the "valid" set of bit patterns). If `unsafe` code produces an
/// invalid value, this is considered [undefined behavior].
///
/// Rust's bit validity rules are currently being decided, which means that some
/// types have three classes of bit patterns: those which are definitely valid,
/// and whose validity is documented in the language; those which may or may not
/// be considered valid at some point in the future; and those which are
/// definitely invalid.
///
/// Zerocopy takes a conservative approach, and only considers a bit pattern to
/// be valid if its validity is a documented guarantee provided by the
/// language.
///
/// For most use cases, Rust's current guarantees align with programmers'
/// intuitions about what ought to be valid. As a result, zerocopy's
/// conservatism should not affect most users.
///
/// If you are negatively affected by lack of support for a particular type,
/// we encourage you to let us know by [filing an issue][github-repo].
///
/// # `TryFromBytes` is not symmetrical with [`IntoBytes`]
///
/// There are some types which implement both `TryFromBytes` and [`IntoBytes`],
/// but for which `TryFromBytes` is not guaranteed to accept all byte sequences
/// produced by `IntoBytes`. In other words, for some `T: TryFromBytes +
/// IntoBytes`, there exist values of `t: T` such that
/// `TryFromBytes::try_ref_from_bytes(t.as_bytes()) == None`. Code should not
/// generally assume that values produced by `IntoBytes` will necessarily be
/// accepted as valid by `TryFromBytes`.
///
/// # Safety
///
/// On its own, `T: TryFromBytes` does not make any guarantees about the layout
/// or representation of `T`. It merely provides the ability to perform a
/// validity check at runtime via methods like [`try_ref_from_bytes`].
///
/// You must not rely on the `#[doc(hidden)]` internals of `TryFromBytes`.
/// Future releases of zerocopy may make backwards-breaking changes to these
/// items, including changes that only affect soundness, which may cause code
/// which uses those items to silently become unsound.
///
/// [undefined behavior]: https://raphlinus.github.io/programming/rust/2018/08/17/undefined-behavior.html
/// [github-repo]: https://github.com/google/zerocopy
/// [`try_ref_from_bytes`]: TryFromBytes::try_ref_from_bytes
/// [*valid instance*]: #what-is-a-valid-instance
#[cfg_attr(feature = "derive", doc = "[derive]: zerocopy_derive::TryFromBytes")]
#[cfg_attr(
    not(feature = "derive"),
    doc = concat!("[derive]: https://docs.rs/zerocopy/", env!("CARGO_PKG_VERSION"), "/zerocopy/derive.TryFromBytes.html"),
)]
#[cfg_attr(
    zerocopy_diagnostic_on_unimplemented_1_78_0,
    diagnostic::on_unimplemented(note = "Consider adding `#[derive(TryFromBytes)]` to `{Self}`")
)]
pub unsafe trait TryFromBytes {
    // The `Self: Sized` bound makes it so that `TryFromBytes` is still object
    // safe.
    #[doc(hidden)]
    fn only_derive_is_allowed_to_implement_this_trait()
    where
        Self: Sized;

    /// Does a given memory range contain a valid instance of `Self`?
    ///
    /// # Safety
    ///
    /// Unsafe code may assume that, if `is_bit_valid(candidate)` returns true,
    /// `*candidate` contains a valid `Self`.
    ///
    /// # Panics
    ///
    /// `is_bit_valid` may panic. Callers are responsible for ensuring that any
    /// `unsafe` code remains sound even in the face of `is_bit_valid`
    /// panicking. (We support user-defined validation routines; so long as
    /// these routines are not required to be `unsafe`, there is no way to
    /// ensure that these do not generate panics.)
    ///
    /// Besides user-defined validation routines panicking, `is_bit_valid` will
    /// either panic or fail to compile if called on a pointer with [`Shared`]
    /// aliasing when `Self: !Immutable`.
    ///
    /// [`UnsafeCell`]: core::cell::UnsafeCell
    /// [`Shared`]: invariant::Shared
    #[doc(hidden)]
    fn is_bit_valid<A: invariant::Reference>(candidate: Maybe<'_, Self, A>) -> bool;

    /// Attempts to interpret the given `source` as a `&Self`.
    ///
    /// If the bytes of `source` are a valid instance of `Self`, this method
    /// returns a reference to those bytes interpreted as a `Self`. If the
    /// length of `source` is not a [valid size of `Self`][valid-size], or if
    /// `source` is not appropriately aligned, or if `source` is not a valid
    /// instance of `Self`, this returns `Err`. If [`Self:
    /// Unaligned`][self-unaligned], you can [infallibly discard the alignment
    /// error][ConvertError::from].
    ///
    /// `Self` may be a sized type, a slice, or a [slice DST][slice-dst].
    ///
    /// [valid-size]: crate::KnownLayout#what-is-a-valid-size
    /// [self-unaligned]: Unaligned
    /// [slice-dst]: KnownLayout#dynamically-sized-types
    ///
    /// # Compile-Time Assertions
    ///
    /// This method cannot yet be used on unsized types whose dynamically-sized
    /// component is zero-sized. Attempting to use this method on such types
    /// results in a compile-time assertion error; e.g.:
    ///
    /// ```compile_fail,E0080
    /// use zerocopy::*;
    /// # use zerocopy_derive::*;
    ///
    /// #[derive(TryFromBytes, Immutable, KnownLayout)]
    /// #[repr(C)]
    /// struct ZSTy {
    ///     leading_sized: u16,
    ///     trailing_dst: [()],
    /// }
    ///
    /// let _ = ZSTy::try_ref_from_bytes(0u16.as_bytes()); // ⚠ Compile Error!
    /// ```
    ///
    /// # Examples
    ///
    /// ```
    /// use zerocopy::TryFromBytes;
    /// # use zerocopy_derive::*;
    ///
    /// // The only valid value of this type is the byte `0xC0`
    /// #[derive(TryFromBytes, KnownLayout, Immutable)]
    /// #[repr(u8)]
    /// enum C0 { xC0 = 0xC0 }
    ///
    /// // The only valid value of this type is the byte sequence `0xC0C0`.
    /// #[derive(TryFromBytes, KnownLayout, Immutable)]
    /// #[repr(C)]
    /// struct C0C0(C0, C0);
    ///
    /// #[derive(TryFromBytes, KnownLayout, Immutable)]
    /// #[repr(C)]
    /// struct Packet {
    ///     magic_number: C0C0,
    ///     mug_size: u8,
    ///     temperature: u8,
    ///     marshmallows: [[u8; 2]],
    /// }
    ///
    /// let bytes = &[0xC0, 0xC0, 240, 77, 0, 1, 2, 3, 4, 5][..];
    ///
    /// let packet = Packet::try_ref_from_bytes(bytes).unwrap();
    ///
    /// assert_eq!(packet.mug_size, 240);
    /// assert_eq!(packet.temperature, 77);
    /// assert_eq!(packet.marshmallows, [[0, 1], [2, 3], [4, 5]]);
    ///
    /// // These bytes are not a valid instance of `Packet`.
    /// let bytes = &[0x10, 0xC0, 240, 77, 0, 1, 2, 3, 4, 5][..];
    /// assert!(Packet::try_ref_from_bytes(bytes).is_err());
    /// ```
    #[must_use = "has no side effects"]
    #[inline]
    fn try_ref_from_bytes(source: &[u8]) -> Result<&Self, TryCastError<&[u8], Self>>
    where
        Self: KnownLayout + Immutable,
    {
        static_assert_dst_is_not_zst!(Self);
1528         match Ptr::from_ref(source).try_cast_into_no_leftover::<Self, BecauseImmutable>(None) {
1529             Ok(source) => {
1530                 // This call may panic. If that happens, it doesn't cause any soundness
1531                 // issues, as we have not generated any invalid state which we need to
1532                 // fix before returning.
1533                 //
1534                 // Note that one panic or post-monomorphization error condition is
1535                 // calling `try_into_valid` (and thus `is_bit_valid`) with a shared
1536                 // pointer when `Self: !Immutable`. Since `Self: Immutable`, this panic
1537                 // condition will not happen.
1538                 match source.try_into_valid() {
1539                     Ok(valid) => Ok(valid.as_ref()),
1540                     Err(e) => {
1541                         Err(e.map_src(|src| src.as_bytes::<BecauseImmutable>().as_ref()).into())
1542                     }
1543                 }
1544             }
1545             Err(e) => Err(e.map_src(Ptr::as_ref).into()),
1546         }
1547     }
1548 
    /// Attempts to interpret the prefix of the given `source` as a `&Self`.
    ///
    /// This method computes the [largest possible size of `Self`][valid-size]
    /// that can fit in the leading bytes of `source`. If that prefix is a valid
    /// instance of `Self`, this method returns a reference to those bytes
    /// interpreted as `Self`, and a reference to the remaining bytes. If there
    /// are insufficient bytes, or if `source` is not appropriately aligned, or
    /// if those bytes are not a valid instance of `Self`, this returns `Err`.
    /// If [`Self: Unaligned`][self-unaligned], you can [infallibly discard the
    /// alignment error][ConvertError::from].
    ///
    /// `Self` may be a sized type, a slice, or a [slice DST][slice-dst].
    ///
    /// [valid-size]: crate::KnownLayout#what-is-a-valid-size
    /// [self-unaligned]: Unaligned
    /// [slice-dst]: KnownLayout#dynamically-sized-types
    ///
    /// # Compile-Time Assertions
    ///
    /// This method cannot yet be used on unsized types whose dynamically-sized
    /// component is zero-sized. Attempting to use this method on such types
    /// results in a compile-time assertion error; e.g.:
    ///
    /// ```compile_fail,E0080
    /// use zerocopy::*;
    /// # use zerocopy_derive::*;
    ///
    /// #[derive(TryFromBytes, Immutable, KnownLayout)]
    /// #[repr(C)]
    /// struct ZSTy {
    ///     leading_sized: u16,
    ///     trailing_dst: [()],
    /// }
    ///
    /// let _ = ZSTy::try_ref_from_prefix(0u16.as_bytes()); // ⚠ Compile Error!
    /// ```
    ///
    /// # Examples
    ///
    /// ```
    /// use zerocopy::TryFromBytes;
    /// # use zerocopy_derive::*;
    ///
    /// // The only valid value of this type is the byte `0xC0`.
    /// #[derive(TryFromBytes, KnownLayout, Immutable)]
    /// #[repr(u8)]
    /// enum C0 { xC0 = 0xC0 }
    ///
    /// // The only valid value of this type is the bytes `0xC0C0`.
    /// #[derive(TryFromBytes, KnownLayout, Immutable)]
    /// #[repr(C)]
    /// struct C0C0(C0, C0);
    ///
    /// #[derive(TryFromBytes, KnownLayout, Immutable)]
    /// #[repr(C)]
    /// struct Packet {
    ///     magic_number: C0C0,
    ///     mug_size: u8,
    ///     temperature: u8,
    ///     marshmallows: [[u8; 2]],
    /// }
    ///
    /// // These are more bytes than are needed to encode a `Packet`.
    /// let bytes = &[0xC0, 0xC0, 240, 77, 0, 1, 2, 3, 4, 5, 6][..];
    ///
    /// let (packet, suffix) = Packet::try_ref_from_prefix(bytes).unwrap();
    ///
    /// assert_eq!(packet.mug_size, 240);
    /// assert_eq!(packet.temperature, 77);
    /// assert_eq!(packet.marshmallows, [[0, 1], [2, 3], [4, 5]]);
    /// assert_eq!(suffix, &[6u8][..]);
    ///
    /// // These bytes are not a valid instance of `Packet`.
    /// let bytes = &[0x10, 0xC0, 240, 77, 0, 1, 2, 3, 4, 5, 6][..];
    /// assert!(Packet::try_ref_from_prefix(bytes).is_err());
    /// ```
    #[must_use = "has no side effects"]
    #[inline]
    fn try_ref_from_prefix(source: &[u8]) -> Result<(&Self, &[u8]), TryCastError<&[u8], Self>>
    where
        Self: KnownLayout + Immutable,
    {
        static_assert_dst_is_not_zst!(Self);
        try_ref_from_prefix_suffix(source, CastType::Prefix, None)
    }

    /// Attempts to interpret the suffix of the given `source` as a `&Self`.
    ///
    /// This method computes the [largest possible size of `Self`][valid-size]
    /// that can fit in the trailing bytes of `source`. If that suffix is a
    /// valid instance of `Self`, this method returns a reference to those bytes
    /// interpreted as `Self`, and a reference to the preceding bytes. If there
    /// are insufficient bytes, or if the suffix of `source` would not be
    /// appropriately aligned, or if the suffix is not a valid instance of
    /// `Self`, this returns `Err`. If [`Self: Unaligned`][self-unaligned], you
    /// can [infallibly discard the alignment error][ConvertError::from].
    ///
    /// `Self` may be a sized type, a slice, or a [slice DST][slice-dst].
    ///
    /// [valid-size]: crate::KnownLayout#what-is-a-valid-size
    /// [self-unaligned]: Unaligned
    /// [slice-dst]: KnownLayout#dynamically-sized-types
    ///
    /// # Compile-Time Assertions
    ///
    /// This method cannot yet be used on unsized types whose dynamically-sized
    /// component is zero-sized. Attempting to use this method on such types
    /// results in a compile-time assertion error; e.g.:
    ///
    /// ```compile_fail,E0080
    /// use zerocopy::*;
    /// # use zerocopy_derive::*;
    ///
    /// #[derive(TryFromBytes, Immutable, KnownLayout)]
    /// #[repr(C)]
    /// struct ZSTy {
    ///     leading_sized: u16,
    ///     trailing_dst: [()],
    /// }
    ///
    /// let _ = ZSTy::try_ref_from_suffix(0u16.as_bytes()); // ⚠ Compile Error!
    /// ```
    ///
    /// # Examples
    ///
    /// ```
    /// use zerocopy::TryFromBytes;
    /// # use zerocopy_derive::*;
    ///
    /// // The only valid value of this type is the byte `0xC0`.
    /// #[derive(TryFromBytes, KnownLayout, Immutable)]
    /// #[repr(u8)]
    /// enum C0 { xC0 = 0xC0 }
    ///
    /// // The only valid value of this type is the bytes `0xC0C0`.
    /// #[derive(TryFromBytes, KnownLayout, Immutable)]
    /// #[repr(C)]
    /// struct C0C0(C0, C0);
    ///
    /// #[derive(TryFromBytes, KnownLayout, Immutable)]
    /// #[repr(C)]
    /// struct Packet {
    ///     magic_number: C0C0,
    ///     mug_size: u8,
    ///     temperature: u8,
    ///     marshmallows: [[u8; 2]],
    /// }
    ///
    /// // These are more bytes than are needed to encode a `Packet`.
    /// let bytes = &[0, 0xC0, 0xC0, 240, 77, 2, 3, 4, 5, 6, 7][..];
    ///
    /// let (prefix, packet) = Packet::try_ref_from_suffix(bytes).unwrap();
    ///
    /// assert_eq!(packet.mug_size, 240);
    /// assert_eq!(packet.temperature, 77);
    /// assert_eq!(packet.marshmallows, [[2, 3], [4, 5], [6, 7]]);
    /// assert_eq!(prefix, &[0u8][..]);
    ///
    /// // These bytes are not a valid instance of `Packet`.
    /// let bytes = &[0, 1, 2, 3, 4, 5, 6, 77, 240, 0xC0, 0x10][..];
    /// assert!(Packet::try_ref_from_suffix(bytes).is_err());
    /// ```
    #[must_use = "has no side effects"]
    #[inline]
    fn try_ref_from_suffix(source: &[u8]) -> Result<(&[u8], &Self), TryCastError<&[u8], Self>>
    where
        Self: KnownLayout + Immutable,
    {
        static_assert_dst_is_not_zst!(Self);
        try_ref_from_prefix_suffix(source, CastType::Suffix, None).map(swap)
    }

    /// Attempts to interpret the given `source` as a `&mut Self` without
    /// copying.
    ///
    /// If the bytes of `source` are a valid instance of `Self`, this method
    /// returns a reference to those bytes interpreted as a `Self`. If the
    /// length of `source` is not a [valid size of `Self`][valid-size], or if
    /// `source` is not appropriately aligned, or if `source` is not a valid
    /// instance of `Self`, this returns `Err`. If [`Self:
    /// Unaligned`][self-unaligned], you can [infallibly discard the alignment
    /// error][ConvertError::from].
    ///
    /// `Self` may be a sized type, a slice, or a [slice DST][slice-dst].
    ///
    /// [valid-size]: crate::KnownLayout#what-is-a-valid-size
    /// [self-unaligned]: Unaligned
    /// [slice-dst]: KnownLayout#dynamically-sized-types
    ///
    /// # Compile-Time Assertions
    ///
    /// This method cannot yet be used on unsized types whose dynamically-sized
    /// component is zero-sized. Attempting to use this method on such types
    /// results in a compile-time assertion error; e.g.:
    ///
    /// ```compile_fail,E0080
    /// use zerocopy::*;
    /// # use zerocopy_derive::*;
    ///
    /// #[derive(TryFromBytes, IntoBytes, KnownLayout)]
    /// #[repr(C, packed)]
    /// struct ZSTy {
    ///     leading_sized: [u8; 2],
    ///     trailing_dst: [()],
    /// }
    ///
    /// let mut source = [85, 85];
    /// let _ = ZSTy::try_mut_from_bytes(&mut source[..]); // ⚠ Compile Error!
    /// ```
    ///
    /// # Examples
    ///
    /// ```
    /// use zerocopy::TryFromBytes;
    /// # use zerocopy_derive::*;
    ///
    /// // The only valid value of this type is the byte `0xC0`.
    /// #[derive(TryFromBytes, IntoBytes, KnownLayout)]
    /// #[repr(u8)]
    /// enum C0 { xC0 = 0xC0 }
    ///
    /// // The only valid value of this type is the bytes `0xC0C0`.
    /// #[derive(TryFromBytes, IntoBytes, KnownLayout)]
    /// #[repr(C)]
    /// struct C0C0(C0, C0);
    ///
    /// #[derive(TryFromBytes, IntoBytes, KnownLayout)]
    /// #[repr(C, packed)]
    /// struct Packet {
    ///     magic_number: C0C0,
    ///     mug_size: u8,
    ///     temperature: u8,
    ///     marshmallows: [[u8; 2]],
    /// }
    ///
    /// let bytes = &mut [0xC0, 0xC0, 240, 77, 0, 1, 2, 3, 4, 5][..];
    ///
    /// let packet = Packet::try_mut_from_bytes(bytes).unwrap();
    ///
    /// assert_eq!(packet.mug_size, 240);
    /// assert_eq!(packet.temperature, 77);
    /// assert_eq!(packet.marshmallows, [[0, 1], [2, 3], [4, 5]]);
    ///
    /// packet.temperature = 111;
    ///
    /// assert_eq!(bytes, [0xC0, 0xC0, 240, 111, 0, 1, 2, 3, 4, 5]);
    ///
    /// // These bytes are not a valid instance of `Packet`.
    /// let bytes = &mut [0x10, 0xC0, 240, 77, 0, 1, 2, 3, 4, 5, 6][..];
    /// assert!(Packet::try_mut_from_bytes(bytes).is_err());
    /// ```
    #[must_use = "has no side effects"]
    #[inline]
    fn try_mut_from_bytes(bytes: &mut [u8]) -> Result<&mut Self, TryCastError<&mut [u8], Self>>
    where
        Self: KnownLayout + IntoBytes,
    {
        static_assert_dst_is_not_zst!(Self);
        match Ptr::from_mut(bytes).try_cast_into_no_leftover::<Self, BecauseExclusive>(None) {
            Ok(source) => {
                // This call may panic. If that happens, it doesn't cause any soundness
                // issues, as we have not generated any invalid state which we need to
                // fix before returning.
                //
                // Note that one panic or post-monomorphization error condition is
                // calling `try_into_valid` (and thus `is_bit_valid`) with a shared
                // pointer when `Self: !Immutable`. Since we call `try_into_valid`
                // with an exclusive pointer, this panic condition will not happen.
                match source.try_into_valid() {
                    Ok(source) => Ok(source.as_mut()),
                    Err(e) => {
                        Err(e.map_src(|src| src.as_bytes::<BecauseExclusive>().as_mut()).into())
                    }
                }
            }
            Err(e) => Err(e.map_src(Ptr::as_mut).into()),
        }
    }

    /// Attempts to interpret the prefix of the given `source` as a `&mut
    /// Self`.
    ///
    /// This method computes the [largest possible size of `Self`][valid-size]
    /// that can fit in the leading bytes of `source`. If that prefix is a valid
    /// instance of `Self`, this method returns a reference to those bytes
    /// interpreted as `Self`, and a reference to the remaining bytes. If there
    /// are insufficient bytes, or if `source` is not appropriately aligned, or
    /// if the bytes are not a valid instance of `Self`, this returns `Err`. If
    /// [`Self: Unaligned`][self-unaligned], you can [infallibly discard the
    /// alignment error][ConvertError::from].
    ///
    /// `Self` may be a sized type, a slice, or a [slice DST][slice-dst].
    ///
    /// [valid-size]: crate::KnownLayout#what-is-a-valid-size
    /// [self-unaligned]: Unaligned
    /// [slice-dst]: KnownLayout#dynamically-sized-types
    ///
    /// # Compile-Time Assertions
    ///
    /// This method cannot yet be used on unsized types whose dynamically-sized
    /// component is zero-sized. Attempting to use this method on such types
    /// results in a compile-time assertion error; e.g.:
    ///
    /// ```compile_fail,E0080
    /// use zerocopy::*;
    /// # use zerocopy_derive::*;
    ///
    /// #[derive(TryFromBytes, IntoBytes, KnownLayout)]
    /// #[repr(C, packed)]
    /// struct ZSTy {
    ///     leading_sized: [u8; 2],
    ///     trailing_dst: [()],
    /// }
    ///
    /// let mut source = [85, 85];
    /// let _ = ZSTy::try_mut_from_prefix(&mut source[..]); // ⚠ Compile Error!
    /// ```
    ///
    /// # Examples
    ///
    /// ```
    /// use zerocopy::TryFromBytes;
    /// # use zerocopy_derive::*;
    ///
    /// // The only valid value of this type is the byte `0xC0`.
    /// #[derive(TryFromBytes, IntoBytes, KnownLayout)]
    /// #[repr(u8)]
    /// enum C0 { xC0 = 0xC0 }
    ///
    /// // The only valid value of this type is the bytes `0xC0C0`.
    /// #[derive(TryFromBytes, IntoBytes, KnownLayout)]
    /// #[repr(C)]
    /// struct C0C0(C0, C0);
    ///
    /// #[derive(TryFromBytes, IntoBytes, KnownLayout)]
    /// #[repr(C, packed)]
    /// struct Packet {
    ///     magic_number: C0C0,
    ///     mug_size: u8,
    ///     temperature: u8,
    ///     marshmallows: [[u8; 2]],
    /// }
    ///
    /// // These are more bytes than are needed to encode a `Packet`.
    /// let bytes = &mut [0xC0, 0xC0, 240, 77, 0, 1, 2, 3, 4, 5, 6][..];
    ///
    /// let (packet, suffix) = Packet::try_mut_from_prefix(bytes).unwrap();
    ///
    /// assert_eq!(packet.mug_size, 240);
    /// assert_eq!(packet.temperature, 77);
    /// assert_eq!(packet.marshmallows, [[0, 1], [2, 3], [4, 5]]);
    /// assert_eq!(suffix, &[6u8][..]);
    ///
    /// packet.temperature = 111;
    /// suffix[0] = 222;
    ///
    /// assert_eq!(bytes, [0xC0, 0xC0, 240, 111, 0, 1, 2, 3, 4, 5, 222]);
    ///
    /// // These bytes are not a valid instance of `Packet`.
    /// let bytes = &mut [0x10, 0xC0, 240, 77, 0, 1, 2, 3, 4, 5, 6][..];
    /// assert!(Packet::try_mut_from_prefix(bytes).is_err());
    /// ```
    #[must_use = "has no side effects"]
    #[inline]
    fn try_mut_from_prefix(
        source: &mut [u8],
    ) -> Result<(&mut Self, &mut [u8]), TryCastError<&mut [u8], Self>>
    where
        Self: KnownLayout + IntoBytes,
    {
        static_assert_dst_is_not_zst!(Self);
        try_mut_from_prefix_suffix(source, CastType::Prefix, None)
    }

    /// Attempts to interpret the suffix of the given `source` as a `&mut
    /// Self`.
    ///
    /// This method computes the [largest possible size of `Self`][valid-size]
    /// that can fit in the trailing bytes of `source`. If that suffix is a
    /// valid instance of `Self`, this method returns a reference to those bytes
    /// interpreted as `Self`, and a reference to the preceding bytes. If there
    /// are insufficient bytes, or if the suffix of `source` would not be
    /// appropriately aligned, or if the suffix is not a valid instance of
    /// `Self`, this returns `Err`. If [`Self: Unaligned`][self-unaligned], you
    /// can [infallibly discard the alignment error][ConvertError::from].
    ///
    /// `Self` may be a sized type, a slice, or a [slice DST][slice-dst].
    ///
    /// [valid-size]: crate::KnownLayout#what-is-a-valid-size
    /// [self-unaligned]: Unaligned
    /// [slice-dst]: KnownLayout#dynamically-sized-types
    ///
    /// # Compile-Time Assertions
    ///
    /// This method cannot yet be used on unsized types whose dynamically-sized
    /// component is zero-sized. Attempting to use this method on such types
    /// results in a compile-time assertion error; e.g.:
    ///
    /// ```compile_fail,E0080
    /// use zerocopy::*;
    /// # use zerocopy_derive::*;
    ///
    /// #[derive(TryFromBytes, IntoBytes, KnownLayout)]
    /// #[repr(C, packed)]
    /// struct ZSTy {
    ///     leading_sized: u16,
    ///     trailing_dst: [()],
    /// }
    ///
    /// let mut source = [85, 85];
    /// let _ = ZSTy::try_mut_from_suffix(&mut source[..]); // ⚠ Compile Error!
    /// ```
    ///
    /// # Examples
    ///
    /// ```
    /// use zerocopy::TryFromBytes;
    /// # use zerocopy_derive::*;
    ///
    /// // The only valid value of this type is the byte `0xC0`.
    /// #[derive(TryFromBytes, IntoBytes, KnownLayout)]
    /// #[repr(u8)]
    /// enum C0 { xC0 = 0xC0 }
    ///
    /// // The only valid value of this type is the bytes `0xC0C0`.
    /// #[derive(TryFromBytes, IntoBytes, KnownLayout)]
    /// #[repr(C)]
    /// struct C0C0(C0, C0);
    ///
    /// #[derive(TryFromBytes, IntoBytes, KnownLayout)]
    /// #[repr(C, packed)]
    /// struct Packet {
    ///     magic_number: C0C0,
    ///     mug_size: u8,
    ///     temperature: u8,
    ///     marshmallows: [[u8; 2]],
    /// }
    ///
    /// // These are more bytes than are needed to encode a `Packet`.
    /// let bytes = &mut [0, 0xC0, 0xC0, 240, 77, 2, 3, 4, 5, 6, 7][..];
    ///
    /// let (prefix, packet) = Packet::try_mut_from_suffix(bytes).unwrap();
    ///
    /// assert_eq!(packet.mug_size, 240);
    /// assert_eq!(packet.temperature, 77);
    /// assert_eq!(packet.marshmallows, [[2, 3], [4, 5], [6, 7]]);
    /// assert_eq!(prefix, &[0u8][..]);
    ///
    /// prefix[0] = 111;
    /// packet.temperature = 222;
    ///
    /// assert_eq!(bytes, [111, 0xC0, 0xC0, 240, 222, 2, 3, 4, 5, 6, 7]);
    ///
    /// // These bytes are not a valid instance of `Packet`.
    /// let bytes = &mut [0, 1, 2, 3, 4, 5, 6, 77, 240, 0xC0, 0x10][..];
    /// assert!(Packet::try_mut_from_suffix(bytes).is_err());
    /// ```
    #[must_use = "has no side effects"]
    #[inline]
    fn try_mut_from_suffix(
        source: &mut [u8],
    ) -> Result<(&mut [u8], &mut Self), TryCastError<&mut [u8], Self>>
    where
        Self: KnownLayout + IntoBytes,
    {
        static_assert_dst_is_not_zst!(Self);
        try_mut_from_prefix_suffix(source, CastType::Suffix, None).map(swap)
    }

    /// Attempts to interpret the given `source` as a `&Self` with a DST length
    /// equal to `count`.
    ///
    /// This method attempts to return a reference to `source` interpreted as a
    /// `Self` with `count` trailing elements. If the length of `source` is not
    /// equal to the size of `Self` with `count` elements, if `source` is not
    /// appropriately aligned, or if `source` does not contain a valid instance
    /// of `Self`, this returns `Err`. If [`Self: Unaligned`][self-unaligned],
    /// you can [infallibly discard the alignment error][ConvertError::from].
    ///
    /// [self-unaligned]: Unaligned
    /// [slice-dst]: KnownLayout#dynamically-sized-types
    ///
    /// # Examples
    ///
    /// ```
    /// # #![allow(non_camel_case_types)] // For C0::xC0
    /// use zerocopy::TryFromBytes;
    /// # use zerocopy_derive::*;
    ///
    /// // The only valid value of this type is the byte `0xC0`.
    /// #[derive(TryFromBytes, KnownLayout, Immutable)]
    /// #[repr(u8)]
    /// enum C0 { xC0 = 0xC0 }
    ///
    /// // The only valid value of this type is the bytes `0xC0C0`.
    /// #[derive(TryFromBytes, KnownLayout, Immutable)]
    /// #[repr(C)]
    /// struct C0C0(C0, C0);
    ///
    /// #[derive(TryFromBytes, KnownLayout, Immutable)]
    /// #[repr(C)]
    /// struct Packet {
    ///     magic_number: C0C0,
    ///     mug_size: u8,
    ///     temperature: u8,
    ///     marshmallows: [[u8; 2]],
    /// }
    ///
    /// let bytes = &[0xC0, 0xC0, 240, 77, 2, 3, 4, 5, 6, 7][..];
    ///
    /// let packet = Packet::try_ref_from_bytes_with_elems(bytes, 3).unwrap();
    ///
    /// assert_eq!(packet.mug_size, 240);
    /// assert_eq!(packet.temperature, 77);
    /// assert_eq!(packet.marshmallows, [[2, 3], [4, 5], [6, 7]]);
    ///
    /// // These bytes are not a valid instance of `Packet`.
    /// let bytes = &[0, 1, 2, 3, 4, 5, 6, 77, 240, 0xC0, 0xC0][..];
    /// assert!(Packet::try_ref_from_bytes_with_elems(bytes, 3).is_err());
    /// ```
    ///
    /// Since an explicit `count` is provided, this method supports types with
    /// zero-sized trailing slice elements. Methods such as [`try_ref_from_bytes`]
    /// which do not take an explicit count do not support such types.
    ///
    /// ```
    /// use core::num::NonZeroU16;
    /// use zerocopy::*;
    /// # use zerocopy_derive::*;
    ///
    /// #[derive(TryFromBytes, Immutable, KnownLayout)]
    /// #[repr(C)]
    /// struct ZSTy {
    ///     leading_sized: NonZeroU16,
    ///     trailing_dst: [()],
    /// }
    ///
    /// let src = 0xCAFEu16.as_bytes();
    /// let zsty = ZSTy::try_ref_from_bytes_with_elems(src, 42).unwrap();
    /// assert_eq!(zsty.trailing_dst.len(), 42);
    /// ```
    ///
    /// [`try_ref_from_bytes`]: TryFromBytes::try_ref_from_bytes
    #[must_use = "has no side effects"]
    #[inline]
    fn try_ref_from_bytes_with_elems(
        source: &[u8],
        count: usize,
    ) -> Result<&Self, TryCastError<&[u8], Self>>
    where
        Self: KnownLayout<PointerMetadata = usize> + Immutable,
    {
        match Ptr::from_ref(source).try_cast_into_no_leftover::<Self, BecauseImmutable>(Some(count))
        {
            Ok(source) => {
                // This call may panic. If that happens, it doesn't cause any soundness
                // issues, as we have not generated any invalid state which we need to
                // fix before returning.
                //
                // Note that one panic or post-monomorphization error condition is
                // calling `try_into_valid` (and thus `is_bit_valid`) with a shared
                // pointer when `Self: !Immutable`. Since `Self: Immutable`, this panic
                // condition will not happen.
                match source.try_into_valid() {
                    Ok(source) => Ok(source.as_ref()),
                    Err(e) => {
                        Err(e.map_src(|src| src.as_bytes::<BecauseImmutable>().as_ref()).into())
                    }
                }
            }
            Err(e) => Err(e.map_src(Ptr::as_ref).into()),
        }
    }

    /// Attempts to interpret the prefix of the given `source` as a `&Self` with
    /// a DST length equal to `count`.
    ///
    /// This method attempts to return a reference to the prefix of `source`
    /// interpreted as a `Self` with `count` trailing elements, and a reference
    /// to the remaining bytes. If the length of `source` is less than the size
    /// of `Self` with `count` elements, if `source` is not appropriately
    /// aligned, or if the prefix of `source` does not contain a valid instance
    /// of `Self`, this returns `Err`. If [`Self: Unaligned`][self-unaligned],
    /// you can [infallibly discard the alignment error][ConvertError::from].
    ///
    /// [self-unaligned]: Unaligned
    /// [slice-dst]: KnownLayout#dynamically-sized-types
    ///
    /// # Examples
    ///
    /// ```
    /// # #![allow(non_camel_case_types)] // For C0::xC0
    /// use zerocopy::TryFromBytes;
    /// # use zerocopy_derive::*;
    ///
    /// // The only valid value of this type is the byte `0xC0`.
    /// #[derive(TryFromBytes, KnownLayout, Immutable)]
    /// #[repr(u8)]
    /// enum C0 { xC0 = 0xC0 }
    ///
    /// // The only valid value of this type is the bytes `0xC0C0`.
    /// #[derive(TryFromBytes, KnownLayout, Immutable)]
    /// #[repr(C)]
    /// struct C0C0(C0, C0);
    ///
    /// #[derive(TryFromBytes, KnownLayout, Immutable)]
    /// #[repr(C)]
    /// struct Packet {
    ///     magic_number: C0C0,
    ///     mug_size: u8,
    ///     temperature: u8,
    ///     marshmallows: [[u8; 2]],
    /// }
    ///
    /// let bytes = &[0xC0, 0xC0, 240, 77, 2, 3, 4, 5, 6, 7, 8][..];
    ///
    /// let (packet, suffix) = Packet::try_ref_from_prefix_with_elems(bytes, 3).unwrap();
    ///
    /// assert_eq!(packet.mug_size, 240);
    /// assert_eq!(packet.temperature, 77);
    /// assert_eq!(packet.marshmallows, [[2, 3], [4, 5], [6, 7]]);
    /// assert_eq!(suffix, &[8u8][..]);
    ///
    /// // These bytes are not a valid instance of `Packet`.
    /// let bytes = &[0, 1, 2, 3, 4, 5, 6, 7, 8, 77, 240, 0xC0, 0xC0][..];
    /// assert!(Packet::try_ref_from_prefix_with_elems(bytes, 3).is_err());
    /// ```
    ///
    /// Since an explicit `count` is provided, this method supports types with
    /// zero-sized trailing slice elements. Methods such as [`try_ref_from_prefix`]
    /// which do not take an explicit count do not support such types.
    ///
    /// ```
    /// use core::num::NonZeroU16;
    /// use zerocopy::*;
    /// # use zerocopy_derive::*;
    ///
    /// #[derive(TryFromBytes, Immutable, KnownLayout)]
    /// #[repr(C)]
    /// struct ZSTy {
    ///     leading_sized: NonZeroU16,
    ///     trailing_dst: [()],
    /// }
    ///
    /// let src = 0xCAFEu16.as_bytes();
    /// let (zsty, _) = ZSTy::try_ref_from_prefix_with_elems(src, 42).unwrap();
    /// assert_eq!(zsty.trailing_dst.len(), 42);
    /// ```
    ///
    /// [`try_ref_from_prefix`]: TryFromBytes::try_ref_from_prefix
    #[must_use = "has no side effects"]
    #[inline]
    fn try_ref_from_prefix_with_elems(
        source: &[u8],
        count: usize,
    ) -> Result<(&Self, &[u8]), TryCastError<&[u8], Self>>
    where
        Self: KnownLayout<PointerMetadata = usize> + Immutable,
    {
        try_ref_from_prefix_suffix(source, CastType::Prefix, Some(count))
    }

2211     /// Attempts to interpret the suffix of the given `source` as a `&Self` with
2212     /// a DST length equal to `count`.
2213     ///
2214     /// This method attempts to return a reference to the suffix of `source`
2215     /// interpreted as a `Self` with `count` trailing elements, and a reference
2216     /// to the preceding bytes. If the length of `source` is less than the size
2217     /// of `Self` with `count` elements, if the suffix of `source` is not
2218     /// appropriately aligned, or if the suffix of `source` does not contain a
2219     /// valid instance of `Self`, this returns `Err`. If [`Self:
2220     /// Unaligned`][self-unaligned], you can [infallibly discard the alignment
2221     /// error][ConvertError::from].
2222     ///
2223     /// [self-unaligned]: Unaligned
2224     /// [slice-dst]: KnownLayout#dynamically-sized-types
2225     ///
2226     /// # Examples
2227     ///
2228     /// ```
2229     /// # #![allow(non_camel_case_types)] // For C0::xC0
2230     /// use zerocopy::TryFromBytes;
2231     /// # use zerocopy_derive::*;
2232     ///
2233     /// // The only valid value of this type is the byte `0xC0`
2234     /// #[derive(TryFromBytes, KnownLayout, Immutable)]
2235     /// #[repr(u8)]
2236     /// enum C0 { xC0 = 0xC0 }
2237     ///
2238     /// // The only valid value of this type is the bytes `0xC0C0`.
2239     /// #[derive(TryFromBytes, KnownLayout, Immutable)]
2240     /// #[repr(C)]
2241     /// struct C0C0(C0, C0);
2242     ///
2243     /// #[derive(TryFromBytes, KnownLayout, Immutable)]
2244     /// #[repr(C)]
2245     /// struct Packet {
2246     ///     magic_number: C0C0,
2247     ///     mug_size: u8,
2248     ///     temperature: u8,
2249     ///     marshmallows: [[u8; 2]],
2250     /// }
2251     ///
2252     /// let bytes = &[123, 0xC0, 0xC0, 240, 77, 2, 3, 4, 5, 6, 7][..];
2253     ///
2254     /// let (prefix, packet) = Packet::try_ref_from_suffix_with_elems(bytes, 3).unwrap();
2255     ///
2256     /// assert_eq!(packet.mug_size, 240);
2257     /// assert_eq!(packet.temperature, 77);
2258     /// assert_eq!(packet.marshmallows, [[2, 3], [4, 5], [6, 7]]);
2259     /// assert_eq!(prefix, &[123u8][..]);
2260     ///
2261     /// // These bytes are not valid instance of `Packet`.
2262     /// let bytes = &[0, 1, 2, 3, 4, 5, 6, 7, 8, 77, 240, 0xC0, 0xC0][..];
2263     /// assert!(Packet::try_ref_from_suffix_with_elems(bytes, 3).is_err());
2264     /// ```
2265     ///
2266     /// Since an explicit `count` is provided, this method supports types with
2267     /// zero-sized trailing slice elements. Methods such as [`try_ref_from_prefix`]
2268     /// which do not take an explicit count do not support such types.
2269     ///
2270     /// ```
2271     /// use core::num::NonZeroU16;
2272     /// use zerocopy::*;
2273     /// # use zerocopy_derive::*;
2274     ///
2275     /// #[derive(TryFromBytes, Immutable, KnownLayout)]
2276     /// #[repr(C)]
2277     /// struct ZSTy {
2278     ///     leading_sized: NonZeroU16,
2279     ///     trailing_dst: [()],
2280     /// }
2281     ///
2282     /// let src = 0xCAFEu16.as_bytes();
2283     /// let (_, zsty) = ZSTy::try_ref_from_suffix_with_elems(src, 42).unwrap();
2284     /// assert_eq!(zsty.trailing_dst.len(), 42);
2285     /// ```
2286     ///
2287     /// [`try_ref_from_prefix`]: TryFromBytes::try_ref_from_prefix
2288     #[must_use = "has no side effects"]
2289     #[inline]
    fn try_ref_from_suffix_with_elems(
        source: &[u8],
        count: usize,
    ) -> Result<(&[u8], &Self), TryCastError<&[u8], Self>>
    where
        Self: KnownLayout<PointerMetadata = usize> + Immutable,
    {
        try_ref_from_prefix_suffix(source, CastType::Suffix, Some(count)).map(swap)
    }

    /// Attempts to interpret the given `source` as a `&mut Self` with a DST
    /// length equal to `count`.
    ///
    /// This method attempts to return a reference to `source` interpreted as a
    /// `Self` with `count` trailing elements. If the length of `source` is not
    /// equal to the size of `Self` with `count` elements, if `source` is not
    /// appropriately aligned, or if `source` does not contain a valid instance
    /// of `Self`, this returns `Err`. If [`Self: Unaligned`][self-unaligned],
    /// you can [infallibly discard the alignment error][ConvertError::from].
    ///
    /// [self-unaligned]: Unaligned
    /// [slice-dst]: KnownLayout#dynamically-sized-types
    ///
    /// # Examples
    ///
    /// ```
    /// # #![allow(non_camel_case_types)] // For C0::xC0
    /// use zerocopy::TryFromBytes;
    /// # use zerocopy_derive::*;
    ///
    /// // The only valid value of this type is the byte `0xC0`
    /// #[derive(TryFromBytes, IntoBytes, KnownLayout)]
    /// #[repr(u8)]
    /// enum C0 { xC0 = 0xC0 }
    ///
    /// // The only valid value of this type is the bytes `0xC0C0`.
    /// #[derive(TryFromBytes, IntoBytes, KnownLayout)]
    /// #[repr(C)]
    /// struct C0C0(C0, C0);
    ///
    /// #[derive(TryFromBytes, IntoBytes, KnownLayout)]
    /// #[repr(C, packed)]
    /// struct Packet {
    ///     magic_number: C0C0,
    ///     mug_size: u8,
    ///     temperature: u8,
    ///     marshmallows: [[u8; 2]],
    /// }
    ///
    /// let bytes = &mut [0xC0, 0xC0, 240, 77, 2, 3, 4, 5, 6, 7][..];
    ///
    /// let packet = Packet::try_mut_from_bytes_with_elems(bytes, 3).unwrap();
    ///
    /// assert_eq!(packet.mug_size, 240);
    /// assert_eq!(packet.temperature, 77);
    /// assert_eq!(packet.marshmallows, [[2, 3], [4, 5], [6, 7]]);
    ///
    /// packet.temperature = 111;
    ///
    /// assert_eq!(bytes, [0xC0, 0xC0, 240, 111, 2, 3, 4, 5, 6, 7]);
    ///
    /// // These bytes are not a valid instance of `Packet`.
    /// let bytes = &mut [0, 1, 2, 3, 4, 5, 6, 77, 240, 0xC0, 0xC0][..];
    /// assert!(Packet::try_mut_from_bytes_with_elems(bytes, 3).is_err());
    /// ```
    ///
    /// Since an explicit `count` is provided, this method supports types with
    /// zero-sized trailing slice elements. Methods such as
    /// [`try_mut_from_bytes`], which do not take an explicit count, do not
    /// support such types.
    ///
    /// ```
    /// use core::num::NonZeroU16;
    /// use zerocopy::*;
    /// # use zerocopy_derive::*;
    ///
    /// #[derive(TryFromBytes, IntoBytes, KnownLayout)]
    /// #[repr(C, packed)]
    /// struct ZSTy {
    ///     leading_sized: NonZeroU16,
    ///     trailing_dst: [()],
    /// }
    ///
    /// let mut src = 0xCAFEu16;
    /// let src = src.as_mut_bytes();
    /// let zsty = ZSTy::try_mut_from_bytes_with_elems(src, 42).unwrap();
    /// assert_eq!(zsty.trailing_dst.len(), 42);
    /// ```
    ///
    /// [`try_mut_from_bytes`]: TryFromBytes::try_mut_from_bytes
    #[must_use = "has no side effects"]
    #[inline]
    fn try_mut_from_bytes_with_elems(
        source: &mut [u8],
        count: usize,
    ) -> Result<&mut Self, TryCastError<&mut [u8], Self>>
    where
        Self: KnownLayout<PointerMetadata = usize> + IntoBytes,
    {
        match Ptr::from_mut(source).try_cast_into_no_leftover::<Self, BecauseExclusive>(Some(count))
        {
            Ok(source) => {
                // This call may panic. If that happens, it doesn't cause any
                // soundness issues, as we have not generated any invalid state
                // which we need to fix before returning.
                //
                // Note that one panic or post-monomorphization error condition
                // is calling `try_into_valid` (and thus `is_bit_valid`) with a
                // shared pointer when `Self: !Immutable`. Since `source` is an
                // exclusive pointer, this panic condition will not happen.
                match source.try_into_valid() {
                    Ok(source) => Ok(source.as_mut()),
                    Err(e) => {
                        Err(e.map_src(|src| src.as_bytes::<BecauseExclusive>().as_mut()).into())
                    }
                }
            }
            Err(e) => Err(e.map_src(Ptr::as_mut).into()),
        }
    }

    /// Attempts to interpret the prefix of the given `source` as a `&mut Self`
    /// with a DST length equal to `count`.
    ///
    /// This method attempts to return a reference to the prefix of `source`
    /// interpreted as a `Self` with `count` trailing elements, and a reference
    /// to the remaining bytes. If the length of `source` is less than the size
    /// of `Self` with `count` elements, if `source` is not appropriately
    /// aligned, or if the prefix of `source` does not contain a valid instance
    /// of `Self`, this returns `Err`. If [`Self: Unaligned`][self-unaligned],
    /// you can [infallibly discard the alignment error][ConvertError::from].
    ///
    /// [self-unaligned]: Unaligned
    /// [slice-dst]: KnownLayout#dynamically-sized-types
    ///
    /// # Examples
    ///
    /// ```
    /// # #![allow(non_camel_case_types)] // For C0::xC0
    /// use zerocopy::TryFromBytes;
    /// # use zerocopy_derive::*;
    ///
    /// // The only valid value of this type is the byte `0xC0`
    /// #[derive(TryFromBytes, IntoBytes, KnownLayout)]
    /// #[repr(u8)]
    /// enum C0 { xC0 = 0xC0 }
    ///
    /// // The only valid value of this type is the bytes `0xC0C0`.
    /// #[derive(TryFromBytes, IntoBytes, KnownLayout)]
    /// #[repr(C)]
    /// struct C0C0(C0, C0);
    ///
    /// #[derive(TryFromBytes, IntoBytes, KnownLayout)]
    /// #[repr(C, packed)]
    /// struct Packet {
    ///     magic_number: C0C0,
    ///     mug_size: u8,
    ///     temperature: u8,
    ///     marshmallows: [[u8; 2]],
    /// }
    ///
    /// let bytes = &mut [0xC0, 0xC0, 240, 77, 2, 3, 4, 5, 6, 7, 8][..];
    ///
    /// let (packet, suffix) = Packet::try_mut_from_prefix_with_elems(bytes, 3).unwrap();
    ///
    /// assert_eq!(packet.mug_size, 240);
    /// assert_eq!(packet.temperature, 77);
    /// assert_eq!(packet.marshmallows, [[2, 3], [4, 5], [6, 7]]);
    /// assert_eq!(suffix, &[8u8][..]);
    ///
    /// packet.temperature = 111;
    /// suffix[0] = 222;
    ///
    /// assert_eq!(bytes, [0xC0, 0xC0, 240, 111, 2, 3, 4, 5, 6, 7, 222]);
    ///
    /// // These bytes are not a valid instance of `Packet`.
    /// let bytes = &mut [0, 1, 2, 3, 4, 5, 6, 7, 8, 77, 240, 0xC0, 0xC0][..];
    /// assert!(Packet::try_mut_from_prefix_with_elems(bytes, 3).is_err());
    /// ```
    ///
    /// Since an explicit `count` is provided, this method supports types with
    /// zero-sized trailing slice elements. Methods such as
    /// [`try_mut_from_prefix`], which do not take an explicit count, do not
    /// support such types.
    ///
    /// ```
    /// use core::num::NonZeroU16;
    /// use zerocopy::*;
    /// # use zerocopy_derive::*;
    ///
    /// #[derive(TryFromBytes, IntoBytes, KnownLayout)]
    /// #[repr(C, packed)]
    /// struct ZSTy {
    ///     leading_sized: NonZeroU16,
    ///     trailing_dst: [()],
    /// }
    ///
    /// let mut src = 0xCAFEu16;
    /// let src = src.as_mut_bytes();
    /// let (zsty, _) = ZSTy::try_mut_from_prefix_with_elems(src, 42).unwrap();
    /// assert_eq!(zsty.trailing_dst.len(), 42);
    /// ```
    ///
    /// [`try_mut_from_prefix`]: TryFromBytes::try_mut_from_prefix
    #[must_use = "has no side effects"]
    #[inline]
    fn try_mut_from_prefix_with_elems(
        source: &mut [u8],
        count: usize,
    ) -> Result<(&mut Self, &mut [u8]), TryCastError<&mut [u8], Self>>
    where
        Self: KnownLayout<PointerMetadata = usize> + IntoBytes,
    {
        try_mut_from_prefix_suffix(source, CastType::Prefix, Some(count))
    }

    /// Attempts to interpret the suffix of the given `source` as a `&mut Self`
    /// with a DST length equal to `count`.
    ///
    /// This method attempts to return a reference to the suffix of `source`
    /// interpreted as a `Self` with `count` trailing elements, and a reference
    /// to the preceding bytes. If the length of `source` is less than the size
    /// of `Self` with `count` elements, if the suffix of `source` is not
    /// appropriately aligned, or if the suffix of `source` does not contain a
    /// valid instance of `Self`, this returns `Err`. If [`Self:
    /// Unaligned`][self-unaligned], you can [infallibly discard the alignment
    /// error][ConvertError::from].
    ///
    /// [self-unaligned]: Unaligned
    /// [slice-dst]: KnownLayout#dynamically-sized-types
    ///
    /// # Examples
    ///
    /// ```
    /// # #![allow(non_camel_case_types)] // For C0::xC0
    /// use zerocopy::TryFromBytes;
    /// # use zerocopy_derive::*;
    ///
    /// // The only valid value of this type is the byte `0xC0`
    /// #[derive(TryFromBytes, IntoBytes, KnownLayout)]
    /// #[repr(u8)]
    /// enum C0 { xC0 = 0xC0 }
    ///
    /// // The only valid value of this type is the bytes `0xC0C0`.
    /// #[derive(TryFromBytes, IntoBytes, KnownLayout)]
    /// #[repr(C)]
    /// struct C0C0(C0, C0);
    ///
    /// #[derive(TryFromBytes, IntoBytes, KnownLayout)]
    /// #[repr(C, packed)]
    /// struct Packet {
    ///     magic_number: C0C0,
    ///     mug_size: u8,
    ///     temperature: u8,
    ///     marshmallows: [[u8; 2]],
    /// }
    ///
    /// let bytes = &mut [123, 0xC0, 0xC0, 240, 77, 2, 3, 4, 5, 6, 7][..];
    ///
    /// let (prefix, packet) = Packet::try_mut_from_suffix_with_elems(bytes, 3).unwrap();
    ///
    /// assert_eq!(packet.mug_size, 240);
    /// assert_eq!(packet.temperature, 77);
    /// assert_eq!(packet.marshmallows, [[2, 3], [4, 5], [6, 7]]);
    /// assert_eq!(prefix, &[123u8][..]);
    ///
    /// prefix[0] = 111;
    /// packet.temperature = 222;
    ///
    /// assert_eq!(bytes, [111, 0xC0, 0xC0, 240, 222, 2, 3, 4, 5, 6, 7]);
    ///
    /// // These bytes are not a valid instance of `Packet`.
    /// let bytes = &mut [0, 1, 2, 3, 4, 5, 6, 7, 8, 77, 240, 0xC0, 0xC0][..];
    /// assert!(Packet::try_mut_from_suffix_with_elems(bytes, 3).is_err());
    /// ```
    ///
    /// Since an explicit `count` is provided, this method supports types with
    /// zero-sized trailing slice elements. Methods such as
    /// [`try_mut_from_suffix`], which do not take an explicit count, do not
    /// support such types.
    ///
    /// ```
    /// use core::num::NonZeroU16;
    /// use zerocopy::*;
    /// # use zerocopy_derive::*;
    ///
    /// #[derive(TryFromBytes, IntoBytes, KnownLayout)]
    /// #[repr(C, packed)]
    /// struct ZSTy {
    ///     leading_sized: NonZeroU16,
    ///     trailing_dst: [()],
    /// }
    ///
    /// let mut src = 0xCAFEu16;
    /// let src = src.as_mut_bytes();
    /// let (_, zsty) = ZSTy::try_mut_from_suffix_with_elems(src, 42).unwrap();
    /// assert_eq!(zsty.trailing_dst.len(), 42);
    /// ```
    ///
    /// [`try_mut_from_suffix`]: TryFromBytes::try_mut_from_suffix
    #[must_use = "has no side effects"]
    #[inline]
    fn try_mut_from_suffix_with_elems(
        source: &mut [u8],
        count: usize,
    ) -> Result<(&mut [u8], &mut Self), TryCastError<&mut [u8], Self>>
    where
        Self: KnownLayout<PointerMetadata = usize> + IntoBytes,
    {
        try_mut_from_prefix_suffix(source, CastType::Suffix, Some(count)).map(swap)
    }

    /// Attempts to read the given `source` as a `Self`.
    ///
    /// If `source.len() != size_of::<Self>()` or the bytes are not a valid
    /// instance of `Self`, this returns `Err`.
    ///
    /// # Examples
    ///
    /// ```
    /// # #![allow(non_camel_case_types)] // For C0::xC0
    /// use zerocopy::TryFromBytes;
    /// # use zerocopy_derive::*;
    ///
    /// // The only valid value of this type is the byte `0xC0`
    /// #[derive(TryFromBytes)]
    /// #[repr(u8)]
    /// enum C0 { xC0 = 0xC0 }
    ///
    /// // The only valid value of this type is the bytes `0xC0C0`.
    /// #[derive(TryFromBytes)]
    /// #[repr(C)]
    /// struct C0C0(C0, C0);
    ///
    /// #[derive(TryFromBytes)]
    /// #[repr(C)]
    /// struct Packet {
    ///     magic_number: C0C0,
    ///     mug_size: u8,
    ///     temperature: u8,
    /// }
    ///
    /// let bytes = &[0xC0, 0xC0, 240, 77][..];
    ///
    /// let packet = Packet::try_read_from_bytes(bytes).unwrap();
    ///
    /// assert_eq!(packet.mug_size, 240);
    /// assert_eq!(packet.temperature, 77);
    ///
    /// // These bytes are not a valid instance of `Packet`.
    /// let bytes = &[0x10, 0xC0, 240, 77][..];
    /// assert!(Packet::try_read_from_bytes(bytes).is_err());
    /// ```
    #[must_use = "has no side effects"]
    #[inline]
    fn try_read_from_bytes(source: &[u8]) -> Result<Self, TryReadError<&[u8], Self>>
    where
        Self: Sized,
    {
        let candidate = match CoreMaybeUninit::<Self>::read_from_bytes(source) {
            Ok(candidate) => candidate,
            Err(e) => {
                return Err(TryReadError::Size(e.with_dst()));
            }
        };
        // SAFETY: `candidate` was copied from `source: &[u8]`, so all of its
        // bytes are initialized.
        unsafe { try_read_from(source, candidate) }
    }

    /// Attempts to read a `Self` from the prefix of the given `source`.
    ///
    /// This attempts to read a `Self` from the first `size_of::<Self>()` bytes
    /// of `source`, returning that `Self` and any remaining bytes. If
    /// `source.len() < size_of::<Self>()` or the bytes are not a valid instance
    /// of `Self`, it returns `Err`.
    ///
    /// # Examples
    ///
    /// ```
    /// # #![allow(non_camel_case_types)] // For C0::xC0
    /// use zerocopy::TryFromBytes;
    /// # use zerocopy_derive::*;
    ///
    /// // The only valid value of this type is the byte `0xC0`
    /// #[derive(TryFromBytes)]
    /// #[repr(u8)]
    /// enum C0 { xC0 = 0xC0 }
    ///
    /// // The only valid value of this type is the bytes `0xC0C0`.
    /// #[derive(TryFromBytes)]
    /// #[repr(C)]
    /// struct C0C0(C0, C0);
    ///
    /// #[derive(TryFromBytes)]
    /// #[repr(C)]
    /// struct Packet {
    ///     magic_number: C0C0,
    ///     mug_size: u8,
    ///     temperature: u8,
    /// }
    ///
    /// // These are more bytes than are needed to encode a `Packet`.
    /// let bytes = &[0xC0, 0xC0, 240, 77, 0, 1, 2, 3, 4, 5, 6][..];
    ///
    /// let (packet, suffix) = Packet::try_read_from_prefix(bytes).unwrap();
    ///
    /// assert_eq!(packet.mug_size, 240);
    /// assert_eq!(packet.temperature, 77);
    /// assert_eq!(suffix, &[0u8, 1, 2, 3, 4, 5, 6][..]);
    ///
    /// // These bytes are not a valid instance of `Packet`.
    /// let bytes = &[0x10, 0xC0, 240, 77, 0, 1, 2, 3, 4, 5, 6][..];
    /// assert!(Packet::try_read_from_prefix(bytes).is_err());
    /// ```
    #[must_use = "has no side effects"]
    #[inline]
    fn try_read_from_prefix(source: &[u8]) -> Result<(Self, &[u8]), TryReadError<&[u8], Self>>
    where
        Self: Sized,
    {
        let (candidate, suffix) = match CoreMaybeUninit::<Self>::read_from_prefix(source) {
            Ok(candidate) => candidate,
            Err(e) => {
                return Err(TryReadError::Size(e.with_dst()));
            }
        };
        // SAFETY: `candidate` was copied from `source: &[u8]`, so all of its
        // bytes are initialized.
        unsafe { try_read_from(source, candidate).map(|slf| (slf, suffix)) }
    }

    /// Attempts to read a `Self` from the suffix of the given `source`.
    ///
    /// This attempts to read a `Self` from the last `size_of::<Self>()` bytes
    /// of `source`, returning that `Self` and any preceding bytes. If
    /// `source.len() < size_of::<Self>()` or the bytes are not a valid instance
    /// of `Self`, it returns `Err`.
    ///
    /// # Examples
    ///
    /// ```
    /// # #![allow(non_camel_case_types)] // For C0::xC0
    /// use zerocopy::TryFromBytes;
    /// # use zerocopy_derive::*;
    ///
    /// // The only valid value of this type is the byte `0xC0`
    /// #[derive(TryFromBytes)]
    /// #[repr(u8)]
    /// enum C0 { xC0 = 0xC0 }
    ///
    /// // The only valid value of this type is the bytes `0xC0C0`.
    /// #[derive(TryFromBytes)]
    /// #[repr(C)]
    /// struct C0C0(C0, C0);
    ///
    /// #[derive(TryFromBytes)]
    /// #[repr(C)]
    /// struct Packet {
    ///     magic_number: C0C0,
    ///     mug_size: u8,
    ///     temperature: u8,
    /// }
    ///
    /// // These are more bytes than are needed to encode a `Packet`.
    /// let bytes = &[0, 1, 2, 3, 4, 5, 0xC0, 0xC0, 240, 77][..];
    ///
    /// let (prefix, packet) = Packet::try_read_from_suffix(bytes).unwrap();
    ///
    /// assert_eq!(packet.mug_size, 240);
    /// assert_eq!(packet.temperature, 77);
    /// assert_eq!(prefix, &[0u8, 1, 2, 3, 4, 5][..]);
    ///
    /// // These bytes are not a valid instance of `Packet`.
    /// let bytes = &[0, 1, 2, 3, 4, 5, 0x10, 0xC0, 240, 77][..];
    /// assert!(Packet::try_read_from_suffix(bytes).is_err());
    /// ```
    #[must_use = "has no side effects"]
    #[inline]
    fn try_read_from_suffix(source: &[u8]) -> Result<(&[u8], Self), TryReadError<&[u8], Self>>
    where
        Self: Sized,
    {
        let (prefix, candidate) = match CoreMaybeUninit::<Self>::read_from_suffix(source) {
            Ok(candidate) => candidate,
            Err(e) => {
                return Err(TryReadError::Size(e.with_dst()));
            }
        };
        // SAFETY: `candidate` was copied from `source: &[u8]`, so all of its
        // bytes are initialized.
        unsafe { try_read_from(source, candidate).map(|slf| (prefix, slf)) }
    }
}

#[inline(always)]
fn try_ref_from_prefix_suffix<T: TryFromBytes + KnownLayout + Immutable + ?Sized>(
    source: &[u8],
    cast_type: CastType,
    meta: Option<T::PointerMetadata>,
) -> Result<(&T, &[u8]), TryCastError<&[u8], T>> {
    match Ptr::from_ref(source).try_cast_into::<T, BecauseImmutable>(cast_type, meta) {
        Ok((source, prefix_suffix)) => {
            // This call may panic. If that happens, it doesn't cause any
            // soundness issues, as we have not generated any invalid state
            // which we need to fix before returning.
            //
            // Note that one panic or post-monomorphization error condition is
            // calling `try_into_valid` (and thus `is_bit_valid`) with a shared
            // pointer when `T: !Immutable`. Since `T: Immutable`, this panic
            // condition will not happen.
            match source.try_into_valid() {
                Ok(valid) => Ok((valid.as_ref(), prefix_suffix.as_ref())),
                Err(e) => Err(e.map_src(|src| src.as_bytes::<BecauseImmutable>().as_ref()).into()),
            }
        }
        Err(e) => Err(e.map_src(Ptr::as_ref).into()),
    }
}

#[inline(always)]
fn try_mut_from_prefix_suffix<T: IntoBytes + TryFromBytes + KnownLayout + ?Sized>(
    candidate: &mut [u8],
    cast_type: CastType,
    meta: Option<T::PointerMetadata>,
) -> Result<(&mut T, &mut [u8]), TryCastError<&mut [u8], T>> {
    match Ptr::from_mut(candidate).try_cast_into::<T, BecauseExclusive>(cast_type, meta) {
        Ok((candidate, prefix_suffix)) => {
            // This call may panic. If that happens, it doesn't cause any
            // soundness issues, as we have not generated any invalid state
            // which we need to fix before returning.
            //
            // Note that one panic or post-monomorphization error condition is
            // calling `try_into_valid` (and thus `is_bit_valid`) with a shared
            // pointer when `T: !Immutable`. Since `candidate` is an exclusive
            // pointer, this panic condition will not happen.
            match candidate.try_into_valid() {
                Ok(valid) => Ok((valid.as_mut(), prefix_suffix.as_mut())),
                Err(e) => Err(e.map_src(|src| src.as_bytes::<BecauseExclusive>().as_mut()).into()),
            }
        }
        Err(e) => Err(e.map_src(Ptr::as_mut).into()),
    }
}

#[inline(always)]
fn swap<T, U>((t, u): (T, U)) -> (U, T) {
    (u, t)
}

/// # Safety
///
/// All bytes of `candidate` must be initialized.
#[inline(always)]
unsafe fn try_read_from<S, T: TryFromBytes>(
    source: S,
    mut candidate: CoreMaybeUninit<T>,
) -> Result<T, TryReadError<S, T>> {
    // We use `from_mut` despite not mutating via `c_ptr` so that we don't need
    // to add a `T: Immutable` bound.
    let c_ptr = Ptr::from_mut(&mut candidate);
    let c_ptr = c_ptr.transparent_wrapper_into_inner();
    // SAFETY: `c_ptr` has no uninitialized sub-ranges because it is derived
    // from `candidate`, which the caller promises is entirely initialized.
    let c_ptr = unsafe { c_ptr.assume_validity::<invariant::Initialized>() };

    // This call may panic. If that happens, it doesn't cause any soundness
    // issues, as we have not generated any invalid state which we need to
    // fix before returning.
    //
    // Note that one panic or post-monomorphization error condition is
    // calling `is_bit_valid` with a shared pointer when `T: !Immutable`.
    // Since `c_ptr` is derived from an exclusive pointer, this panic
    // condition will not happen.
    if !T::is_bit_valid(c_ptr.forget_aligned()) {
        return Err(ValidityError::new(source).into());
    }

    // SAFETY: We just validated that `candidate` contains a valid `T`.
    Ok(unsafe { candidate.assume_init() })
}

/// Types for which a sequence of bytes all set to zero represents a valid
/// instance of the type.
///
/// Any memory region of the appropriate length which is guaranteed to contain
/// only zero bytes can be viewed as any `FromZeros` type with no runtime
/// overhead. This is useful whenever memory is known to be in a zeroed state,
/// such as memory returned from some allocation routines.
///
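/// For example, a zeroed instance of a `FromZeros` type can be constructed
/// directly (a minimal sketch using [`FromZeros::new_zeroed`]; `u32` is
/// `FromZeros`, and its all-zeros bit pattern is the value `0`):
///
/// ```
/// use zerocopy::FromZeros;
///
/// // Constructing from zeroed bytes is zero-cost: no validation is needed.
/// let x: u32 = u32::new_zeroed();
/// assert_eq!(x, 0);
/// ```
///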
/// # Warning: Padding bytes
///
/// Note that, when a value is moved or copied, only the non-padding bytes of
/// that value are guaranteed to be preserved. It is unsound to assume that
/// values written to padding bytes are preserved after a move or copy. For more
/// details, see the [`FromBytes` docs][frombytes-warning-padding-bytes].
///
/// [frombytes-warning-padding-bytes]: FromBytes#warning-padding-bytes
///
/// # Implementation
///
/// **Do not implement this trait yourself!** Instead, use
/// [`#[derive(FromZeros)]`][derive]; e.g.:
///
/// ```
/// # use zerocopy_derive::{FromZeros, Immutable};
/// #[derive(FromZeros)]
/// struct MyStruct {
/// # /*
///     ...
/// # */
/// }
///
/// #[derive(FromZeros)]
/// #[repr(u8)]
/// enum MyEnum {
/// #   Variant0,
/// # /*
///     ...
/// # */
/// }
///
/// #[derive(FromZeros, Immutable)]
/// union MyUnion {
/// #   variant: u8,
/// # /*
///     ...
/// # */
/// }
/// ```
///
/// This derive performs a sophisticated, compile-time safety analysis to
/// determine whether a type is `FromZeros`.
///
/// # Safety
///
/// *This section describes what is required in order for `T: FromZeros`, and
/// what unsafe code may assume of such types. If you don't plan on implementing
/// `FromZeros` manually, and you don't plan on writing unsafe code that
/// operates on `FromZeros` types, then you don't need to read this section.*
///
/// If `T: FromZeros`, then unsafe code may assume that it is sound to produce a
/// `T` whose bytes are all initialized to zero. If a type is marked as
/// `FromZeros` which violates this contract, it may cause undefined behavior.
///
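/// For example, unsafe code relying on this contract may construct a `T:
/// FromZeros` from zeroed storage (a sketch of the invariant, not zerocopy's
/// actual implementation):
///
/// ```
/// use core::mem::MaybeUninit;
///
/// fn make_zeroed<T: zerocopy::FromZeros>() -> T {
///     // SAFETY: `T: FromZeros` guarantees that the all-zeros bit pattern is
///     // a valid `T`, and `MaybeUninit::zeroed` produces exactly that bit
///     // pattern.
///     unsafe { MaybeUninit::<T>::zeroed().assume_init() }
/// }
///
/// assert_eq!(make_zeroed::<u16>(), 0);
/// ```
///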
/// `#[derive(FromZeros)]` only permits [types which satisfy these
/// requirements][derive-analysis].
///
#[cfg_attr(
    feature = "derive",
    doc = "[derive]: zerocopy_derive::FromZeros",
    doc = "[derive-analysis]: zerocopy_derive::FromZeros#analysis"
)]
#[cfg_attr(
    not(feature = "derive"),
    doc = concat!("[derive]: https://docs.rs/zerocopy/", env!("CARGO_PKG_VERSION"), "/zerocopy/derive.FromZeros.html"),
    doc = concat!("[derive-analysis]: https://docs.rs/zerocopy/", env!("CARGO_PKG_VERSION"), "/zerocopy/derive.FromZeros.html#analysis"),
)]
#[cfg_attr(
    zerocopy_diagnostic_on_unimplemented_1_78_0,
    diagnostic::on_unimplemented(note = "Consider adding `#[derive(FromZeros)]` to `{Self}`")
)]
pub unsafe trait FromZeros: TryFromBytes {
    // The `Self: Sized` bound makes it so that `FromZeros` is still object
    // safe.
    #[doc(hidden)]
    fn only_derive_is_allowed_to_implement_this_trait()
    where
        Self: Sized;

    /// Overwrites `self` with zeros.
    ///
    /// Sets every byte in `self` to 0. While this is similar to doing `*self =
    /// Self::new_zeroed()`, it differs in that `zero` does not semantically
    /// drop the current value and replace it with a new one: it simply
    /// modifies the bytes of the existing value.
    ///
    /// # Examples
    ///
    /// ```
    /// # use zerocopy::FromZeros;
    /// # use zerocopy_derive::*;
    /// #
    /// #[derive(FromZeros)]
    /// #[repr(C)]
    /// struct PacketHeader {
    ///     src_port: [u8; 2],
    ///     dst_port: [u8; 2],
    ///     length: [u8; 2],
    ///     checksum: [u8; 2],
    /// }
    ///
    /// let mut header = PacketHeader {
    ///     src_port: 100u16.to_be_bytes(),
    ///     dst_port: 200u16.to_be_bytes(),
    ///     length: 300u16.to_be_bytes(),
    ///     checksum: 400u16.to_be_bytes(),
    /// };
    ///
    /// header.zero();
    ///
    /// assert_eq!(header.src_port, [0, 0]);
    /// assert_eq!(header.dst_port, [0, 0]);
    /// assert_eq!(header.length, [0, 0]);
    /// assert_eq!(header.checksum, [0, 0]);
    /// ```
    #[inline(always)]
    fn zero(&mut self) {
        let slf: *mut Self = self;
        let len = mem::size_of_val(self);
        // SAFETY:
        // - `self` is guaranteed by the type system to be valid for writes of
        //   size `size_of_val(self)`.
        // - `u8`'s alignment is 1, and thus `self` is guaranteed to be aligned
        //   as required by `u8`.
        // - Since `Self: FromZeros`, the all-zeros instance is a valid instance
        //   of `Self`.
        //
        // TODO(#429): Add references to docs and quotes.
        unsafe { ptr::write_bytes(slf.cast::<u8>(), 0, len) };
    }
3007     /// Creates an instance of `Self` from zeroed bytes.
3008     ///
3009     /// # Examples
3010     ///
3011     /// ```
3012     /// # use zerocopy::FromZeros;
3013     /// # use zerocopy_derive::*;
3014     /// #
3015     /// #[derive(FromZeros)]
3016     /// #[repr(C)]
3017     /// struct PacketHeader {
3018     ///     src_port: [u8; 2],
3019     ///     dst_port: [u8; 2],
3020     ///     length: [u8; 2],
3021     ///     checksum: [u8; 2],
3022     /// }
3023     ///
3024     /// let header: PacketHeader = FromZeros::new_zeroed();
3025     ///
3026     /// assert_eq!(header.src_port, [0, 0]);
3027     /// assert_eq!(header.dst_port, [0, 0]);
3028     /// assert_eq!(header.length, [0, 0]);
3029     /// assert_eq!(header.checksum, [0, 0]);
3030     /// ```
3031     #[must_use = "has no side effects"]
3032     #[inline(always)]
new_zeroed() -> Self where Self: Sized,3033     fn new_zeroed() -> Self
3034     where
3035         Self: Sized,
3036     {
3037         // SAFETY: `FromZeros` says that the all-zeros bit pattern is legal.
3038         unsafe { mem::zeroed() }
3039     }
3040 
3041     /// Creates a `Box<Self>` from zeroed bytes.
3042     ///
3043     /// This function is useful for allocating large values on the heap and
3044     /// zero-initializing them, without ever creating a temporary instance of
3045     /// `Self` on the stack. For example, `<[u8; 1048576]>::new_box_zeroed()`
3046     /// will allocate `[u8; 1048576]` directly on the heap; it does not require
3047     /// storing `[u8; 1048576]` in a temporary variable on the stack.
3048     ///
3049     /// On systems that use a heap implementation that supports allocating from
3050     /// pre-zeroed memory, using `new_box_zeroed` (or related functions) may
3051     /// have performance benefits.
3052     ///
3053     /// # Errors
3054     ///
3055     /// Returns an error on allocation failure. Allocation failure is guaranteed
3056     /// never to cause a panic or an abort.
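    ///
    /// # Examples
    ///
    /// A sketch of typical usage (assuming the `alloc` feature is enabled; the
    /// struct shown is illustrative):
    ///
    /// ```
    /// # use zerocopy::FromZeros;
    /// # use zerocopy_derive::*;
    /// #
    /// #[derive(FromZeros)]
    /// #[repr(C)]
    /// struct PacketHeader {
    ///     src_port: [u8; 2],
    ///     dst_port: [u8; 2],
    ///     length: [u8; 2],
    ///     checksum: [u8; 2],
    /// }
    ///
    /// // The header is zero-initialized directly on the heap.
    /// let header = PacketHeader::new_box_zeroed().unwrap();
    ///
    /// assert_eq!(header.src_port, [0, 0]);
    /// assert_eq!(header.checksum, [0, 0]);
    /// ```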
    #[must_use = "has no side effects (other than allocation)"]
    #[cfg(any(feature = "alloc", test))]
    #[cfg_attr(doc_cfg, doc(cfg(feature = "alloc")))]
    #[inline]
    fn new_box_zeroed() -> Result<Box<Self>, AllocError>
    where
        Self: Sized,
    {
        // If `T` is a ZST, then return a proper boxed instance of it. There is
        // no allocation, but `Box` does require a correct dangling pointer.
        let layout = Layout::new::<Self>();
        if layout.size() == 0 {
            // Construct the `Box` from a dangling pointer to avoid calling
            // `Self::new_zeroed`. This ensures that stack space is never
            // allocated for `Self` even on lower opt-levels where this branch
            // might not get optimized out.

            // SAFETY: Per [1], when `T` is a ZST, `Box<T>`'s only validity
            // requirements are that the pointer is non-null and sufficiently
            // aligned. Per [2], `NonNull::dangling` produces a pointer which
            // is sufficiently aligned. Since the produced pointer is a
            // `NonNull`, it is non-null.
            //
            // [1] Per https://doc.rust-lang.org/nightly/std/boxed/index.html#memory-layout:
            //
            //   For zero-sized values, the `Box` pointer has to be non-null and sufficiently aligned.
            //
            // [2] Per https://doc.rust-lang.org/std/ptr/struct.NonNull.html#method.dangling:
            //
            //   Creates a new `NonNull` that is dangling, but well-aligned.
            return Ok(unsafe { Box::from_raw(NonNull::dangling().as_ptr()) });
        }

        // TODO(#429): Add a "SAFETY" comment and remove this `allow`.
        #[allow(clippy::undocumented_unsafe_blocks)]
        let ptr = unsafe { alloc::alloc::alloc_zeroed(layout).cast::<Self>() };
        if ptr.is_null() {
            return Err(AllocError);
        }
        // TODO(#429): Add a "SAFETY" comment and remove this `allow`.
        #[allow(clippy::undocumented_unsafe_blocks)]
        Ok(unsafe { Box::from_raw(ptr) })
    }

    /// Creates a `Box<[Self]>` (a boxed slice) from zeroed bytes.
    ///
    /// This function is useful for allocating large values of `[Self]` on the
    /// heap and zero-initializing them, without ever creating a temporary
    /// instance of `[Self; _]` on the stack. For example,
    /// `<[u8]>::new_box_zeroed_with_elems(1048576)` will allocate the slice
    /// directly on the heap; it does not require storing the slice on the
    /// stack.
    ///
    /// On systems that use a heap implementation that supports allocating from
    /// pre-zeroed memory, using `new_box_zeroed_with_elems` may have
    /// performance benefits.
    ///
    /// If `Self` is a zero-sized type, then this function will return a
    /// `Box<[Self]>` that has the correct `len`. Such a box cannot contain any
    /// actual information, but its `len()` property will report the correct
    /// value.
    ///
    /// # Errors
    ///
    /// Returns an error on allocation failure. Allocation failure is
    /// guaranteed never to cause a panic or an abort.
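    ///
    /// # Examples
    ///
    /// A sketch of typical usage, allocating a zeroed boxed slice directly on
    /// the heap:
    ///
    /// ```
    /// # use zerocopy::FromZeros;
    /// #
    /// let bytes = <[u8]>::new_box_zeroed_with_elems(4).unwrap();
    ///
    /// assert_eq!(&*bytes, [0, 0, 0, 0]);
    /// ```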
    #[must_use = "has no side effects (other than allocation)"]
    #[cfg(feature = "alloc")]
    #[cfg_attr(doc_cfg, doc(cfg(feature = "alloc")))]
    #[inline]
    fn new_box_zeroed_with_elems(count: usize) -> Result<Box<Self>, AllocError>
    where
        Self: KnownLayout<PointerMetadata = usize>,
    {
        // SAFETY: `alloc::alloc::alloc_zeroed` is a valid argument of
        // `new_box`. The referent of the pointer returned by `alloc_zeroed`
        // (and, consequently, the `Box` derived from it) is a valid instance of
        // `Self`, because `Self` is `FromZeros`.
        unsafe { crate::util::new_box(count, alloc::alloc::alloc_zeroed) }
    }

    #[deprecated(since = "0.8.0", note = "renamed to `FromZeros::new_box_zeroed_with_elems`")]
    #[doc(hidden)]
    #[cfg(feature = "alloc")]
    #[cfg_attr(doc_cfg, doc(cfg(feature = "alloc")))]
    #[must_use = "has no side effects (other than allocation)"]
    #[inline(always)]
    fn new_box_slice_zeroed(len: usize) -> Result<Box<[Self]>, AllocError>
    where
        Self: Sized,
    {
        <[Self]>::new_box_zeroed_with_elems(len)
    }

    /// Creates a `Vec<Self>` from zeroed bytes.
    ///
    /// This function is useful for allocating large `Vec`s and
    /// zero-initializing them, without ever creating a temporary instance of
    /// `[Self; _]` (or many temporary instances of `Self`) on the stack. For
    /// example, `u8::new_vec_zeroed(1048576)` will allocate directly on the
    /// heap; it does not require storing intermediate values on the stack.
    ///
    /// On systems that use a heap implementation that supports allocating from
    /// pre-zeroed memory, using `new_vec_zeroed` may have performance benefits.
    ///
    /// If `Self` is a zero-sized type, then this function will return a
    /// `Vec<Self>` that has the correct `len`. Such a `Vec` cannot contain any
    /// actual information, but its `len()` property will report the correct
    /// value.
    ///
    /// # Errors
    ///
    /// Returns an error on allocation failure. Allocation failure is
    /// guaranteed never to cause a panic or an abort.
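    ///
    /// # Examples
    ///
    /// A sketch of typical usage (assuming the `alloc` feature is enabled):
    ///
    /// ```
    /// # use zerocopy::FromZeros;
    /// #
    /// let v: Vec<u8> = u8::new_vec_zeroed(4).unwrap();
    ///
    /// assert_eq!(v, [0, 0, 0, 0]);
    /// ```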
    #[must_use = "has no side effects (other than allocation)"]
    #[cfg(feature = "alloc")]
    #[cfg_attr(doc_cfg, doc(cfg(feature = "alloc")))]
    #[inline(always)]
    fn new_vec_zeroed(len: usize) -> Result<Vec<Self>, AllocError>
    where
        Self: Sized,
    {
        <[Self]>::new_box_zeroed_with_elems(len).map(Into::into)
    }

    /// Extends a `Vec<Self>` by pushing `additional` new items onto the end of
    /// the vector. The new items are initialized with zeros.
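    ///
    /// # Errors
    ///
    /// Returns an error on allocation failure. Allocation failure is
    /// guaranteed never to cause a panic or an abort.
    ///
    /// # Examples
    ///
    /// A sketch of typical usage:
    ///
    /// ```
    /// # use zerocopy::FromZeros;
    /// #
    /// let mut v = vec![100u8, 200];
    /// u8::extend_vec_zeroed(&mut v, 3).unwrap();
    ///
    /// assert_eq!(v, [100, 200, 0, 0, 0]);
    /// ```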
    #[cfg(zerocopy_panic_in_const_and_vec_try_reserve_1_57_0)]
    #[cfg(feature = "alloc")]
    #[cfg_attr(doc_cfg, doc(cfg(all(rust = "1.57.0", feature = "alloc"))))]
    #[inline(always)]
    fn extend_vec_zeroed(v: &mut Vec<Self>, additional: usize) -> Result<(), AllocError>
    where
        Self: Sized,
    {
        // PANICS: We pass `v.len()` for `position`, so the `position > v.len()`
        // panic condition is not satisfied.
        <Self as FromZeros>::insert_vec_zeroed(v, v.len(), additional)
    }

    /// Inserts `additional` new items into `Vec<Self>` at `position`. The new
    /// items are initialized with zeros.
    ///
    /// # Panics
    ///
    /// Panics if `position > v.len()`.
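    ///
    /// # Errors
    ///
    /// Returns an error on allocation failure. Allocation failure is
    /// guaranteed never to cause a panic or an abort.
    ///
    /// # Examples
    ///
    /// A sketch of typical usage:
    ///
    /// ```
    /// # use zerocopy::FromZeros;
    /// #
    /// let mut v = vec![100u8, 200];
    /// // Insert two zeroed items at index 1.
    /// u8::insert_vec_zeroed(&mut v, 1, 2).unwrap();
    ///
    /// assert_eq!(v, [100, 0, 0, 200]);
    /// ```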
    #[cfg(zerocopy_panic_in_const_and_vec_try_reserve_1_57_0)]
    #[cfg(feature = "alloc")]
    #[cfg_attr(doc_cfg, doc(cfg(all(rust = "1.57.0", feature = "alloc"))))]
    #[inline]
    fn insert_vec_zeroed(
        v: &mut Vec<Self>,
        position: usize,
        additional: usize,
    ) -> Result<(), AllocError>
    where
        Self: Sized,
    {
        assert!(position <= v.len());
        // We only conditionally compile on versions on which `try_reserve` is
        // stable; the Clippy lint is a false positive.
        #[allow(clippy::incompatible_msrv)]
        v.try_reserve(additional).map_err(|_| AllocError)?;
        // SAFETY: The `try_reserve` call guarantees that these cannot overflow:
        // * `ptr.add(position)`
        // * `position + additional`
        // * `v.len() + additional`
        //
        // `v.len() - position` cannot overflow because we asserted that
        // `position <= v.len()`.
        unsafe {
            // This is a potentially overlapping copy.
            let ptr = v.as_mut_ptr();
            #[allow(clippy::arithmetic_side_effects)]
            ptr.add(position).copy_to(ptr.add(position + additional), v.len() - position);
            ptr.add(position).write_bytes(0, additional);
            #[allow(clippy::arithmetic_side_effects)]
            v.set_len(v.len() + additional);
        }

        Ok(())
    }
}

/// Analyzes whether a type is [`FromBytes`].
///
/// This derive analyzes, at compile time, whether the annotated type satisfies
/// the [safety conditions] of `FromBytes` and implements `FromBytes` and its
/// supertraits if it is sound to do so. This derive can be applied to structs,
/// enums, and unions; e.g.:
///
/// ```
/// # use zerocopy_derive::{FromBytes, FromZeros, Immutable};
/// #[derive(FromBytes)]
/// struct MyStruct {
/// # /*
///     ...
/// # */
/// }
///
/// #[derive(FromBytes)]
/// #[repr(u8)]
/// enum MyEnum {
/// #   V00, V01, V02, V03, V04, V05, V06, V07, V08, V09, V0A, V0B, V0C, V0D, V0E,
/// #   V0F, V10, V11, V12, V13, V14, V15, V16, V17, V18, V19, V1A, V1B, V1C, V1D,
/// #   V1E, V1F, V20, V21, V22, V23, V24, V25, V26, V27, V28, V29, V2A, V2B, V2C,
/// #   V2D, V2E, V2F, V30, V31, V32, V33, V34, V35, V36, V37, V38, V39, V3A, V3B,
/// #   V3C, V3D, V3E, V3F, V40, V41, V42, V43, V44, V45, V46, V47, V48, V49, V4A,
/// #   V4B, V4C, V4D, V4E, V4F, V50, V51, V52, V53, V54, V55, V56, V57, V58, V59,
/// #   V5A, V5B, V5C, V5D, V5E, V5F, V60, V61, V62, V63, V64, V65, V66, V67, V68,
/// #   V69, V6A, V6B, V6C, V6D, V6E, V6F, V70, V71, V72, V73, V74, V75, V76, V77,
/// #   V78, V79, V7A, V7B, V7C, V7D, V7E, V7F, V80, V81, V82, V83, V84, V85, V86,
/// #   V87, V88, V89, V8A, V8B, V8C, V8D, V8E, V8F, V90, V91, V92, V93, V94, V95,
/// #   V96, V97, V98, V99, V9A, V9B, V9C, V9D, V9E, V9F, VA0, VA1, VA2, VA3, VA4,
/// #   VA5, VA6, VA7, VA8, VA9, VAA, VAB, VAC, VAD, VAE, VAF, VB0, VB1, VB2, VB3,
/// #   VB4, VB5, VB6, VB7, VB8, VB9, VBA, VBB, VBC, VBD, VBE, VBF, VC0, VC1, VC2,
/// #   VC3, VC4, VC5, VC6, VC7, VC8, VC9, VCA, VCB, VCC, VCD, VCE, VCF, VD0, VD1,
/// #   VD2, VD3, VD4, VD5, VD6, VD7, VD8, VD9, VDA, VDB, VDC, VDD, VDE, VDF, VE0,
/// #   VE1, VE2, VE3, VE4, VE5, VE6, VE7, VE8, VE9, VEA, VEB, VEC, VED, VEE, VEF,
/// #   VF0, VF1, VF2, VF3, VF4, VF5, VF6, VF7, VF8, VF9, VFA, VFB, VFC, VFD, VFE,
/// #   VFF,
/// # /*
///     ...
/// # */
/// }
///
/// #[derive(FromBytes, Immutable)]
/// union MyUnion {
/// #   variant: u8,
/// # /*
///     ...
/// # */
/// }
/// ```
///
/// [safety conditions]: trait@FromBytes#safety
///
/// # Analysis
///
/// *This section describes, roughly, the analysis performed by this derive to
/// determine whether it is sound to implement `FromBytes` for a given type.
/// Unless you are modifying the implementation of this derive, or attempting to
/// manually implement `FromBytes` for a type yourself, you don't need to read
/// this section.*
///
/// If a type has the following properties, then this derive can implement
/// `FromBytes` for that type:
///
/// - If the type is a struct, all of its fields must be `FromBytes`.
/// - If the type is an enum:
///   - It must have a defined representation (`repr`s `C`, `u8`, `u16`, `u32`,
///     `u64`, `usize`, `i8`, `i16`, `i32`, `i64`, or `isize`).
///   - The maximum number of discriminants must be used (so that every possible
///     bit pattern is a valid one). Be very careful when using the `C`,
///     `usize`, or `isize` representations, as their size is
///     platform-dependent.
///   - Its fields must be `FromBytes`.
///
/// This analysis is subject to change. Unsafe code may *only* rely on the
/// documented [safety conditions] of `FromBytes`, and must *not* rely on the
/// implementation details of this derive.
///
/// ## Why isn't an explicit representation required for structs?
///
/// Neither this derive, nor the [safety conditions] of `FromBytes`, requires
/// that structs are marked with `#[repr(C)]`.
///
/// Per the [Rust reference][reference],
///
/// > The representation of a type can change the padding between fields, but
/// > does not change the layout of the fields themselves.
///
/// [reference]: https://doc.rust-lang.org/reference/type-layout.html#representations
///
/// Since the layout of structs only consists of padding bytes and field bytes,
/// a struct is soundly `FromBytes` if:
/// 1. its padding is soundly `FromBytes`, and
/// 2. its fields are soundly `FromBytes`.
///
/// The first condition is always satisfied: padding bytes do not have any
/// validity constraints. A [discussion] of this question in the Unsafe Code
/// Guidelines Working Group concluded that it would be virtually unimaginable
/// for future versions of rustc to add validity constraints to padding bytes.
///
/// [discussion]: https://github.com/rust-lang/unsafe-code-guidelines/issues/174
///
/// Whether a struct is soundly `FromBytes` therefore solely depends on whether
/// its fields are `FromBytes`.
// TODO(#146): Document why we don't require an enum to have an explicit `repr`
// attribute.
#[cfg(any(feature = "derive", test))]
#[cfg_attr(doc_cfg, doc(cfg(feature = "derive")))]
pub use zerocopy_derive::FromBytes;
3350 
3351 /// Types for which any bit pattern is valid.
3352 ///
3353 /// Any memory region of the appropriate length which contains initialized bytes
3354 /// can be viewed as any `FromBytes` type with no runtime overhead. This is
3355 /// useful for efficiently parsing bytes as structured data.
3356 ///
3357 /// # Warning: Padding bytes
3358 ///
3359 /// Note that, when a value is moved or copied, only the non-padding bytes of
3360 /// that value are guaranteed to be preserved. It is unsound to assume that
3361 /// values written to padding bytes are preserved after a move or copy. For
3362 /// example, the following is unsound:
3363 ///
3364 /// ```rust,no_run
3365 /// use core::mem::{size_of, transmute};
3366 /// use zerocopy::FromZeros;
3367 /// # use zerocopy_derive::*;
3368 ///
3369 /// // Assume `Foo` is a type with padding bytes.
3370 /// #[derive(FromZeros, Default)]
3371 /// struct Foo {
3372 /// # /*
3373 ///     ...
3374 /// # */
3375 /// }
3376 ///
3377 /// let mut foo: Foo = Foo::default();
3378 /// FromZeros::zero(&mut foo);
3379 /// // UNSOUND: Although `FromZeros::zero` writes zeros to all bytes of `foo`,
3380 /// // those writes are not guaranteed to be preserved in padding bytes when
3381 /// // `foo` is moved, so this may expose padding bytes as `u8`s.
3382 /// let foo_bytes: [u8; size_of::<Foo>()] = unsafe { transmute(foo) };
3383 /// ```
3384 ///
3385 /// # Implementation
3386 ///
3387 /// **Do not implement this trait yourself!** Instead, use
3388 /// [`#[derive(FromBytes)]`][derive]; e.g.:
3389 ///
3390 /// ```
3391 /// # use zerocopy_derive::{FromBytes, Immutable};
3392 /// #[derive(FromBytes)]
3393 /// struct MyStruct {
3394 /// # /*
3395 ///     ...
3396 /// # */
3397 /// }
3398 ///
3399 /// #[derive(FromBytes)]
3400 /// #[repr(u8)]
3401 /// enum MyEnum {
3402 /// #   V00, V01, V02, V03, V04, V05, V06, V07, V08, V09, V0A, V0B, V0C, V0D, V0E,
3403 /// #   V0F, V10, V11, V12, V13, V14, V15, V16, V17, V18, V19, V1A, V1B, V1C, V1D,
3404 /// #   V1E, V1F, V20, V21, V22, V23, V24, V25, V26, V27, V28, V29, V2A, V2B, V2C,
3405 /// #   V2D, V2E, V2F, V30, V31, V32, V33, V34, V35, V36, V37, V38, V39, V3A, V3B,
3406 /// #   V3C, V3D, V3E, V3F, V40, V41, V42, V43, V44, V45, V46, V47, V48, V49, V4A,
3407 /// #   V4B, V4C, V4D, V4E, V4F, V50, V51, V52, V53, V54, V55, V56, V57, V58, V59,
3408 /// #   V5A, V5B, V5C, V5D, V5E, V5F, V60, V61, V62, V63, V64, V65, V66, V67, V68,
3409 /// #   V69, V6A, V6B, V6C, V6D, V6E, V6F, V70, V71, V72, V73, V74, V75, V76, V77,
3410 /// #   V78, V79, V7A, V7B, V7C, V7D, V7E, V7F, V80, V81, V82, V83, V84, V85, V86,
3411 /// #   V87, V88, V89, V8A, V8B, V8C, V8D, V8E, V8F, V90, V91, V92, V93, V94, V95,
3412 /// #   V96, V97, V98, V99, V9A, V9B, V9C, V9D, V9E, V9F, VA0, VA1, VA2, VA3, VA4,
3413 /// #   VA5, VA6, VA7, VA8, VA9, VAA, VAB, VAC, VAD, VAE, VAF, VB0, VB1, VB2, VB3,
3414 /// #   VB4, VB5, VB6, VB7, VB8, VB9, VBA, VBB, VBC, VBD, VBE, VBF, VC0, VC1, VC2,
3415 /// #   VC3, VC4, VC5, VC6, VC7, VC8, VC9, VCA, VCB, VCC, VCD, VCE, VCF, VD0, VD1,
3416 /// #   VD2, VD3, VD4, VD5, VD6, VD7, VD8, VD9, VDA, VDB, VDC, VDD, VDE, VDF, VE0,
3417 /// #   VE1, VE2, VE3, VE4, VE5, VE6, VE7, VE8, VE9, VEA, VEB, VEC, VED, VEE, VEF,
3418 /// #   VF0, VF1, VF2, VF3, VF4, VF5, VF6, VF7, VF8, VF9, VFA, VFB, VFC, VFD, VFE,
3419 /// #   VFF,
3420 /// # /*
3421 ///     ...
3422 /// # */
3423 /// }
3424 ///
3425 /// #[derive(FromBytes, Immutable)]
3426 /// union MyUnion {
3427 /// #   variant: u8,
3428 /// # /*
3429 ///     ...
3430 /// # */
3431 /// }
3432 /// ```
3433 ///
3434 /// This derive performs a sophisticated, compile-time safety analysis to
3435 /// determine whether a type is `FromBytes`.
3436 ///
3437 /// # Safety
3438 ///
3439 /// *This section describes what is required in order for `T: FromBytes`, and
3440 /// what unsafe code may assume of such types. If you don't plan on implementing
3441 /// `FromBytes` manually, and you don't plan on writing unsafe code that
3442 /// operates on `FromBytes` types, then you don't need to read this section.*
3443 ///
3444 /// If `T: FromBytes`, then unsafe code may assume that it is sound to produce a
3445 /// `T` whose bytes are initialized to any sequence of valid `u8`s (in other
3446 /// words, any byte value which is not uninitialized). If a type is marked as
3447 /// `FromBytes` which violates this contract, it may cause undefined behavior.
3448 ///
3449 /// `#[derive(FromBytes)]` only permits [types which satisfy these
3450 /// requirements][derive-analysis].
3451 ///
3452 #[cfg_attr(
3453     feature = "derive",
3454     doc = "[derive]: zerocopy_derive::FromBytes",
3455     doc = "[derive-analysis]: zerocopy_derive::FromBytes#analysis"
3456 )]
3457 #[cfg_attr(
3458     not(feature = "derive"),
3459     doc = concat!("[derive]: https://docs.rs/zerocopy/", env!("CARGO_PKG_VERSION"), "/zerocopy/derive.FromBytes.html"),
3460     doc = concat!("[derive-analysis]: https://docs.rs/zerocopy/", env!("CARGO_PKG_VERSION"), "/zerocopy/derive.FromBytes.html#analysis"),
3461 )]
3462 #[cfg_attr(
3463     zerocopy_diagnostic_on_unimplemented_1_78_0,
3464     diagnostic::on_unimplemented(note = "Consider adding `#[derive(FromBytes)]` to `{Self}`")
3465 )]
3466 pub unsafe trait FromBytes: FromZeros {
3467     // The `Self: Sized` bound makes it so that `FromBytes` is still object
3468     // safe.
3469     #[doc(hidden)]
only_derive_is_allowed_to_implement_this_trait() where Self: Sized3470     fn only_derive_is_allowed_to_implement_this_trait()
3471     where
3472         Self: Sized;
3473 
3474     /// Interprets the given `source` as a `&Self`.
3475     ///
3476     /// This method attempts to return a reference to `source` interpreted as a
3477     /// `Self`. If the length of `source` is not a [valid size of
3478     /// `Self`][valid-size], or if `source` is not appropriately aligned, this
3479     /// returns `Err`. If [`Self: Unaligned`][self-unaligned], you can
3480     /// [infallibly discard the alignment error][size-error-from].
3481     ///
3482     /// `Self` may be a sized type, a slice, or a [slice DST][slice-dst].
3483     ///
3484     /// [valid-size]: crate::KnownLayout#what-is-a-valid-size
3485     /// [self-unaligned]: Unaligned
3486     /// [size-error-from]: error/struct.SizeError.html#method.from-1
3487     /// [slice-dst]: KnownLayout#dynamically-sized-types
3488     ///
3489     /// # Compile-Time Assertions
3490     ///
3491     /// This method cannot yet be used on unsized types whose dynamically-sized
3492     /// component is zero-sized. Attempting to use this method on such types
3493     /// results in a compile-time assertion error; e.g.:
3494     ///
3495     /// ```compile_fail,E0080
3496     /// use zerocopy::*;
3497     /// # use zerocopy_derive::*;
3498     ///
3499     /// #[derive(FromBytes, Immutable, KnownLayout)]
3500     /// #[repr(C)]
3501     /// struct ZSTy {
3502     ///     leading_sized: u16,
3503     ///     trailing_dst: [()],
3504     /// }
3505     ///
3506     /// let _ = ZSTy::ref_from_bytes(0u16.as_bytes()); // ⚠ Compile Error!
3507     /// ```
3508     ///
3509     /// # Examples
3510     ///
3511     /// ```
3512     /// use zerocopy::FromBytes;
3513     /// # use zerocopy_derive::*;
3514     ///
3515     /// #[derive(FromBytes, KnownLayout, Immutable)]
3516     /// #[repr(C)]
3517     /// struct PacketHeader {
3518     ///     src_port: [u8; 2],
3519     ///     dst_port: [u8; 2],
3520     ///     length: [u8; 2],
3521     ///     checksum: [u8; 2],
3522     /// }
3523     ///
3524     /// #[derive(FromBytes, KnownLayout, Immutable)]
3525     /// #[repr(C)]
3526     /// struct Packet {
3527     ///     header: PacketHeader,
3528     ///     body: [u8],
3529     /// }
3530     ///
3531     /// // These bytes encode a `Packet`.
3532     /// let bytes = &[0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11][..];
3533     ///
3534     /// let packet = Packet::ref_from_bytes(bytes).unwrap();
3535     ///
3536     /// assert_eq!(packet.header.src_port, [0, 1]);
3537     /// assert_eq!(packet.header.dst_port, [2, 3]);
3538     /// assert_eq!(packet.header.length, [4, 5]);
3539     /// assert_eq!(packet.header.checksum, [6, 7]);
3540     /// assert_eq!(packet.body, [8, 9, 10, 11]);
3541     /// ```
3542     #[must_use = "has no side effects"]
3543     #[inline]
ref_from_bytes(source: &[u8]) -> Result<&Self, CastError<&[u8], Self>> where Self: KnownLayout + Immutable,3544     fn ref_from_bytes(source: &[u8]) -> Result<&Self, CastError<&[u8], Self>>
3545     where
3546         Self: KnownLayout + Immutable,
3547     {
3548         static_assert_dst_is_not_zst!(Self);
3549         match Ptr::from_ref(source).try_cast_into_no_leftover::<_, BecauseImmutable>(None) {
3550             Ok(ptr) => Ok(ptr.bikeshed_recall_valid().as_ref()),
3551             Err(err) => Err(err.map_src(|src| src.as_ref())),
3552         }
3553     }
3554 
3555     /// Interprets the prefix of the given `source` as a `&Self` without
3556     /// copying.
3557     ///
3558     /// This method computes the [largest possible size of `Self`][valid-size]
3559     /// that can fit in the leading bytes of `source`, then attempts to return
3560     /// both a reference to those bytes interpreted as a `Self`, and a reference
3561     /// to the remaining bytes. If there are insufficient bytes, or if `source`
3562     /// is not appropriately aligned, this returns `Err`. If [`Self:
3563     /// Unaligned`][self-unaligned], you can [infallibly discard the alignment
3564     /// error][size-error-from].
3565     ///
3566     /// `Self` may be a sized type, a slice, or a [slice DST][slice-dst].
3567     ///
3568     /// [valid-size]: crate::KnownLayout#what-is-a-valid-size
3569     /// [self-unaligned]: Unaligned
3570     /// [size-error-from]: error/struct.SizeError.html#method.from-1
3571     /// [slice-dst]: KnownLayout#dynamically-sized-types
3572     ///
3573     /// # Compile-Time Assertions
3574     ///
3575     /// This method cannot yet be used on unsized types whose dynamically-sized
3576     /// component is zero-sized. See [`ref_from_prefix_with_elems`], which does
3577     /// support such types. Attempting to use this method on such types results
3578     /// in a compile-time assertion error; e.g.:
3579     ///
3580     /// ```compile_fail,E0080
3581     /// use zerocopy::*;
3582     /// # use zerocopy_derive::*;
3583     ///
3584     /// #[derive(FromBytes, Immutable, KnownLayout)]
3585     /// #[repr(C)]
3586     /// struct ZSTy {
3587     ///     leading_sized: u16,
3588     ///     trailing_dst: [()],
3589     /// }
3590     ///
3591     /// let _ = ZSTy::ref_from_prefix(0u16.as_bytes()); // ⚠ Compile Error!
3592     /// ```
3593     ///
3594     /// [`ref_from_prefix_with_elems`]: FromBytes::ref_from_prefix_with_elems
3595     ///
3596     /// # Examples
3597     ///
3598     /// ```
3599     /// use zerocopy::FromBytes;
3600     /// # use zerocopy_derive::*;
3601     ///
3602     /// #[derive(FromBytes, KnownLayout, Immutable)]
3603     /// #[repr(C)]
3604     /// struct PacketHeader {
3605     ///     src_port: [u8; 2],
3606     ///     dst_port: [u8; 2],
3607     ///     length: [u8; 2],
3608     ///     checksum: [u8; 2],
3609     /// }
3610     ///
3611     /// #[derive(FromBytes, KnownLayout, Immutable)]
3612     /// #[repr(C)]
3613     /// struct Packet {
3614     ///     header: PacketHeader,
3615     ///     body: [[u8; 2]],
3616     /// }
3617     ///
3618     /// // These are more bytes than are needed to encode a `Packet`.
3619     /// let bytes = &[0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14][..];
3620     ///
    /// let (packet, suffix) = Packet::ref_from_prefix(bytes).unwrap();
    ///
    /// assert_eq!(packet.header.src_port, [0, 1]);
    /// assert_eq!(packet.header.dst_port, [2, 3]);
    /// assert_eq!(packet.header.length, [4, 5]);
    /// assert_eq!(packet.header.checksum, [6, 7]);
    /// assert_eq!(packet.body, [[8, 9], [10, 11], [12, 13]]);
    /// assert_eq!(suffix, &[14u8][..]);
    /// ```
    #[must_use = "has no side effects"]
    #[inline]
    fn ref_from_prefix(source: &[u8]) -> Result<(&Self, &[u8]), CastError<&[u8], Self>>
    where
        Self: KnownLayout + Immutable,
    {
        static_assert_dst_is_not_zst!(Self);
        ref_from_prefix_suffix(source, None, CastType::Prefix)
    }

    /// Interprets the suffix of the given bytes as a `&Self`.
    ///
    /// This method computes the [largest possible size of `Self`][valid-size]
    /// that can fit in the trailing bytes of `source`, then attempts to return
    /// both a reference to those bytes interpreted as a `Self`, and a reference
    /// to the preceding bytes. If there are insufficient bytes, or if that
    /// suffix of `source` is not appropriately aligned, this returns `Err`. If
    /// [`Self: Unaligned`][self-unaligned], you can [infallibly discard the
    /// alignment error][size-error-from].
    ///
    /// `Self` may be a sized type, a slice, or a [slice DST][slice-dst].
    ///
    /// [valid-size]: crate::KnownLayout#what-is-a-valid-size
    /// [self-unaligned]: Unaligned
    /// [size-error-from]: error/struct.SizeError.html#method.from-1
    /// [slice-dst]: KnownLayout#dynamically-sized-types
    ///
    /// # Compile-Time Assertions
    ///
    /// This method cannot yet be used on unsized types whose dynamically-sized
    /// component is zero-sized. See [`ref_from_suffix_with_elems`], which does
    /// support such types. Attempting to use this method on such types results
    /// in a compile-time assertion error; e.g.:
    ///
    /// ```compile_fail,E0080
    /// use zerocopy::*;
    /// # use zerocopy_derive::*;
    ///
    /// #[derive(FromBytes, Immutable, KnownLayout)]
    /// #[repr(C)]
    /// struct ZSTy {
    ///     leading_sized: u16,
    ///     trailing_dst: [()],
    /// }
    ///
    /// let _ = ZSTy::ref_from_suffix(0u16.as_bytes()); // ⚠ Compile Error!
    /// ```
    ///
    /// [`ref_from_suffix_with_elems`]: FromBytes::ref_from_suffix_with_elems
    ///
    /// # Examples
    ///
    /// ```
    /// use zerocopy::FromBytes;
    /// # use zerocopy_derive::*;
    ///
    /// #[derive(FromBytes, Immutable, KnownLayout)]
    /// #[repr(C)]
    /// struct PacketTrailer {
    ///     frame_check_sequence: [u8; 4],
    /// }
    ///
    /// // These are more bytes than are needed to encode a `PacketTrailer`.
    /// let bytes = &[0, 1, 2, 3, 4, 5, 6, 7, 8, 9][..];
    ///
    /// let (prefix, trailer) = PacketTrailer::ref_from_suffix(bytes).unwrap();
    ///
    /// assert_eq!(prefix, &[0, 1, 2, 3, 4, 5][..]);
    /// assert_eq!(trailer.frame_check_sequence, [6, 7, 8, 9]);
    /// ```
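    ///
    /// The docs above note that when [`Self: Unaligned`][self-unaligned], the
    /// alignment error can be infallibly discarded. A sketch of what that looks
    /// like, assuming a hypothetical `#[derive(Unaligned)]` is added to
    /// `PacketTrailer` (which has no alignment requirement):
    ///
    /// ```ignore
    /// // Hypothetical: requires `#[derive(Unaligned)]` on `PacketTrailer`.
    /// use zerocopy::error::SizeError;
    ///
    /// // `CastError` converts into `SizeError` because alignment can never fail.
    /// let res: Result<_, SizeError<_, PacketTrailer>> =
    ///     PacketTrailer::ref_from_suffix(bytes).map_err(Into::into);
    /// ```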
    #[must_use = "has no side effects"]
    #[inline]
    fn ref_from_suffix(source: &[u8]) -> Result<(&[u8], &Self), CastError<&[u8], Self>>
    where
        Self: Immutable + KnownLayout,
    {
        static_assert_dst_is_not_zst!(Self);
        ref_from_prefix_suffix(source, None, CastType::Suffix).map(swap)
    }

    /// Interprets the given `source` as a `&mut Self`.
    ///
    /// This method attempts to return a reference to `source` interpreted as a
    /// `Self`. If the length of `source` is not a [valid size of
    /// `Self`][valid-size], or if `source` is not appropriately aligned, this
    /// returns `Err`. If [`Self: Unaligned`][self-unaligned], you can
    /// [infallibly discard the alignment error][size-error-from].
    ///
    /// `Self` may be a sized type, a slice, or a [slice DST][slice-dst].
    ///
    /// [valid-size]: crate::KnownLayout#what-is-a-valid-size
    /// [self-unaligned]: Unaligned
    /// [size-error-from]: error/struct.SizeError.html#method.from-1
    /// [slice-dst]: KnownLayout#dynamically-sized-types
    ///
    /// # Compile-Time Assertions
    ///
    /// This method cannot yet be used on unsized types whose dynamically-sized
    /// component is zero-sized. See [`mut_from_bytes_with_elems`], which does
    /// support such types. Attempting to use this method on such types results
    /// in a compile-time assertion error; e.g.:
    ///
    /// ```compile_fail,E0080
    /// use zerocopy::*;
    /// # use zerocopy_derive::*;
    ///
    /// #[derive(FromBytes, Immutable, IntoBytes, KnownLayout)]
    /// #[repr(C, packed)]
    /// struct ZSTy {
    ///     leading_sized: [u8; 2],
    ///     trailing_dst: [()],
    /// }
    ///
    /// let mut source = [85, 85];
    /// let _ = ZSTy::mut_from_bytes(&mut source[..]); // ⚠ Compile Error!
    /// ```
    ///
    /// [`mut_from_bytes_with_elems`]: FromBytes::mut_from_bytes_with_elems
    ///
    /// # Examples
    ///
    /// ```
    /// use zerocopy::FromBytes;
    /// # use zerocopy_derive::*;
    ///
    /// #[derive(FromBytes, IntoBytes, KnownLayout, Immutable)]
    /// #[repr(C)]
    /// struct PacketHeader {
    ///     src_port: [u8; 2],
    ///     dst_port: [u8; 2],
    ///     length: [u8; 2],
    ///     checksum: [u8; 2],
    /// }
    ///
    /// // These bytes encode a `PacketHeader`.
    /// let bytes = &mut [0, 1, 2, 3, 4, 5, 6, 7][..];
    ///
    /// let header = PacketHeader::mut_from_bytes(bytes).unwrap();
    ///
    /// assert_eq!(header.src_port, [0, 1]);
    /// assert_eq!(header.dst_port, [2, 3]);
    /// assert_eq!(header.length, [4, 5]);
    /// assert_eq!(header.checksum, [6, 7]);
    ///
    /// header.checksum = [0, 0];
    ///
    /// assert_eq!(bytes, [0, 1, 2, 3, 4, 5, 0, 0]);
    /// ```
    #[must_use = "has no side effects"]
    #[inline]
    fn mut_from_bytes(source: &mut [u8]) -> Result<&mut Self, CastError<&mut [u8], Self>>
    where
        Self: IntoBytes + KnownLayout,
    {
        static_assert_dst_is_not_zst!(Self);
        match Ptr::from_mut(source).try_cast_into_no_leftover::<_, BecauseExclusive>(None) {
            Ok(ptr) => Ok(ptr.bikeshed_recall_valid().as_mut()),
            Err(err) => Err(err.map_src(|src| src.as_mut())),
        }
    }

    /// Interprets the prefix of the given `source` as a `&mut Self` without
    /// copying.
    ///
    /// This method computes the [largest possible size of `Self`][valid-size]
    /// that can fit in the leading bytes of `source`, then attempts to return
    /// both a reference to those bytes interpreted as a `Self`, and a reference
    /// to the remaining bytes. If there are insufficient bytes, or if `source`
    /// is not appropriately aligned, this returns `Err`. If [`Self:
    /// Unaligned`][self-unaligned], you can [infallibly discard the alignment
    /// error][size-error-from].
    ///
    /// `Self` may be a sized type, a slice, or a [slice DST][slice-dst].
    ///
    /// [valid-size]: crate::KnownLayout#what-is-a-valid-size
    /// [self-unaligned]: Unaligned
    /// [size-error-from]: error/struct.SizeError.html#method.from-1
    /// [slice-dst]: KnownLayout#dynamically-sized-types
    ///
    /// # Compile-Time Assertions
    ///
    /// This method cannot yet be used on unsized types whose dynamically-sized
    /// component is zero-sized. See [`mut_from_prefix_with_elems`], which does
    /// support such types. Attempting to use this method on such types results
    /// in a compile-time assertion error; e.g.:
    ///
    /// ```compile_fail,E0080
    /// use zerocopy::*;
    /// # use zerocopy_derive::*;
    ///
    /// #[derive(FromBytes, Immutable, IntoBytes, KnownLayout)]
    /// #[repr(C, packed)]
    /// struct ZSTy {
    ///     leading_sized: [u8; 2],
    ///     trailing_dst: [()],
    /// }
    ///
    /// let mut source = [85, 85];
    /// let _ = ZSTy::mut_from_prefix(&mut source[..]); // ⚠ Compile Error!
    /// ```
    ///
    /// [`mut_from_prefix_with_elems`]: FromBytes::mut_from_prefix_with_elems
    ///
    /// # Examples
    ///
    /// ```
    /// use zerocopy::FromBytes;
    /// # use zerocopy_derive::*;
    ///
    /// #[derive(FromBytes, IntoBytes, KnownLayout, Immutable)]
    /// #[repr(C)]
    /// struct PacketHeader {
    ///     src_port: [u8; 2],
    ///     dst_port: [u8; 2],
    ///     length: [u8; 2],
    ///     checksum: [u8; 2],
    /// }
    ///
    /// // These are more bytes than are needed to encode a `PacketHeader`.
    /// let bytes = &mut [0, 1, 2, 3, 4, 5, 6, 7, 8, 9][..];
    ///
    /// let (header, body) = PacketHeader::mut_from_prefix(bytes).unwrap();
    ///
    /// assert_eq!(header.src_port, [0, 1]);
    /// assert_eq!(header.dst_port, [2, 3]);
    /// assert_eq!(header.length, [4, 5]);
    /// assert_eq!(header.checksum, [6, 7]);
    /// assert_eq!(body, &[8, 9][..]);
    ///
    /// header.checksum = [0, 0];
    /// body.fill(1);
    ///
    /// assert_eq!(bytes, [0, 1, 2, 3, 4, 5, 0, 0, 1, 1]);
    /// ```
    #[must_use = "has no side effects"]
    #[inline]
    fn mut_from_prefix(
        source: &mut [u8],
    ) -> Result<(&mut Self, &mut [u8]), CastError<&mut [u8], Self>>
    where
        Self: IntoBytes + KnownLayout,
    {
        static_assert_dst_is_not_zst!(Self);
        mut_from_prefix_suffix(source, None, CastType::Prefix)
    }

    /// Interprets the suffix of the given `source` as a `&mut Self` without
    /// copying.
    ///
    /// This method computes the [largest possible size of `Self`][valid-size]
    /// that can fit in the trailing bytes of `source`, then attempts to return
    /// both a reference to those bytes interpreted as a `Self`, and a reference
    /// to the preceding bytes. If there are insufficient bytes, or if that
    /// suffix of `source` is not appropriately aligned, this returns `Err`. If
    /// [`Self: Unaligned`][self-unaligned], you can [infallibly discard the
    /// alignment error][size-error-from].
    ///
    /// `Self` may be a sized type, a slice, or a [slice DST][slice-dst].
    ///
    /// [valid-size]: crate::KnownLayout#what-is-a-valid-size
    /// [self-unaligned]: Unaligned
    /// [size-error-from]: error/struct.SizeError.html#method.from-1
    /// [slice-dst]: KnownLayout#dynamically-sized-types
    ///
    /// # Compile-Time Assertions
    ///
    /// This method cannot yet be used on unsized types whose dynamically-sized
    /// component is zero-sized. See [`mut_from_suffix_with_elems`], which does
    /// support such types. Attempting to use this method on such types results
    /// in a compile-time assertion error; e.g.:
    ///
    /// ```compile_fail,E0080
    /// use zerocopy::*;
    /// # use zerocopy_derive::*;
    ///
    /// #[derive(FromBytes, Immutable, IntoBytes, KnownLayout)]
    /// #[repr(C, packed)]
    /// struct ZSTy {
    ///     leading_sized: [u8; 2],
    ///     trailing_dst: [()],
    /// }
    ///
    /// let mut source = [85, 85];
    /// let _ = ZSTy::mut_from_suffix(&mut source[..]); // ⚠ Compile Error!
    /// ```
    ///
    /// [`mut_from_suffix_with_elems`]: FromBytes::mut_from_suffix_with_elems
    ///
    /// # Examples
    ///
    /// ```
    /// use zerocopy::FromBytes;
    /// # use zerocopy_derive::*;
    ///
    /// #[derive(FromBytes, IntoBytes, KnownLayout, Immutable)]
    /// #[repr(C)]
    /// struct PacketTrailer {
    ///     frame_check_sequence: [u8; 4],
    /// }
    ///
    /// // These are more bytes than are needed to encode a `PacketTrailer`.
    /// let bytes = &mut [0, 1, 2, 3, 4, 5, 6, 7, 8, 9][..];
    ///
    /// let (prefix, trailer) = PacketTrailer::mut_from_suffix(bytes).unwrap();
    ///
    /// assert_eq!(prefix, &[0u8, 1, 2, 3, 4, 5][..]);
    /// assert_eq!(trailer.frame_check_sequence, [6, 7, 8, 9]);
    ///
    /// prefix.fill(0);
    /// trailer.frame_check_sequence.fill(1);
    ///
    /// assert_eq!(bytes, [0, 0, 0, 0, 0, 0, 1, 1, 1, 1]);
    /// ```
    #[must_use = "has no side effects"]
    #[inline]
    fn mut_from_suffix(
        source: &mut [u8],
    ) -> Result<(&mut [u8], &mut Self), CastError<&mut [u8], Self>>
    where
        Self: IntoBytes + KnownLayout,
    {
        static_assert_dst_is_not_zst!(Self);
        mut_from_prefix_suffix(source, None, CastType::Suffix).map(swap)
    }

    /// Interprets the given `source` as a `&Self` with a DST length equal to
    /// `count`.
    ///
    /// This method attempts to return a reference to `source` interpreted as a
    /// `Self` with `count` trailing elements. If the length of `source` is not
    /// equal to the size of `Self` with `count` elements, or if `source` is not
    /// appropriately aligned, this returns `Err`. If [`Self:
    /// Unaligned`][self-unaligned], you can [infallibly discard the alignment
    /// error][size-error-from].
    ///
    /// [self-unaligned]: Unaligned
    /// [size-error-from]: error/struct.SizeError.html#method.from-1
    ///
    /// # Examples
    ///
    /// ```
    /// use zerocopy::FromBytes;
    /// # use zerocopy_derive::*;
    ///
    /// # #[derive(Debug, PartialEq, Eq)]
    /// #[derive(FromBytes, Immutable)]
    /// #[repr(C)]
    /// struct Pixel {
    ///     r: u8,
    ///     g: u8,
    ///     b: u8,
    ///     a: u8,
    /// }
    ///
    /// let bytes = &[0, 1, 2, 3, 4, 5, 6, 7][..];
    ///
    /// let pixels = <[Pixel]>::ref_from_bytes_with_elems(bytes, 2).unwrap();
    ///
    /// assert_eq!(pixels, &[
    ///     Pixel { r: 0, g: 1, b: 2, a: 3 },
    ///     Pixel { r: 4, g: 5, b: 6, a: 7 },
    /// ]);
    /// ```
    ///
    /// Since an explicit `count` is provided, this method supports types with
    /// zero-sized trailing slice elements. Methods such as [`ref_from_bytes`]
    /// which do not take an explicit count do not support such types.
    ///
    /// ```
    /// use zerocopy::*;
    /// # use zerocopy_derive::*;
    ///
    /// #[derive(FromBytes, Immutable, KnownLayout)]
    /// #[repr(C)]
    /// struct ZSTy {
    ///     leading_sized: [u8; 2],
    ///     trailing_dst: [()],
    /// }
    ///
    /// let src = &[85, 85][..];
    /// let zsty = ZSTy::ref_from_bytes_with_elems(src, 42).unwrap();
    /// assert_eq!(zsty.trailing_dst.len(), 42);
    /// ```
    ///
    /// [`ref_from_bytes`]: FromBytes::ref_from_bytes
    #[must_use = "has no side effects"]
    #[inline]
    fn ref_from_bytes_with_elems(
        source: &[u8],
        count: usize,
    ) -> Result<&Self, CastError<&[u8], Self>>
    where
        Self: KnownLayout<PointerMetadata = usize> + Immutable,
    {
        let source = Ptr::from_ref(source);
        let maybe_slf = source.try_cast_into_no_leftover::<_, BecauseImmutable>(Some(count));
        match maybe_slf {
            Ok(slf) => Ok(slf.bikeshed_recall_valid().as_ref()),
            Err(err) => Err(err.map_src(|s| s.as_ref())),
        }
    }

    /// Interprets the prefix of the given `source` as a DST `&Self` with length
    /// equal to `count`.
    ///
    /// This method attempts to return a reference to the prefix of `source`
    /// interpreted as a `Self` with `count` trailing elements, and a reference
    /// to the remaining bytes. If there are insufficient bytes, or if `source`
    /// is not appropriately aligned, this returns `Err`. If [`Self:
    /// Unaligned`][self-unaligned], you can [infallibly discard the alignment
    /// error][size-error-from].
    ///
    /// [self-unaligned]: Unaligned
    /// [size-error-from]: error/struct.SizeError.html#method.from-1
    ///
    /// # Examples
    ///
    /// ```
    /// use zerocopy::FromBytes;
    /// # use zerocopy_derive::*;
    ///
    /// # #[derive(Debug, PartialEq, Eq)]
    /// #[derive(FromBytes, Immutable)]
    /// #[repr(C)]
    /// struct Pixel {
    ///     r: u8,
    ///     g: u8,
    ///     b: u8,
    ///     a: u8,
    /// }
    ///
    /// // These are more bytes than are needed to encode two `Pixel`s.
    /// let bytes = &[0, 1, 2, 3, 4, 5, 6, 7, 8, 9][..];
    ///
    /// let (pixels, suffix) = <[Pixel]>::ref_from_prefix_with_elems(bytes, 2).unwrap();
    ///
    /// assert_eq!(pixels, &[
    ///     Pixel { r: 0, g: 1, b: 2, a: 3 },
    ///     Pixel { r: 4, g: 5, b: 6, a: 7 },
    /// ]);
    ///
    /// assert_eq!(suffix, &[8, 9]);
    /// ```
    ///
    /// Since an explicit `count` is provided, this method supports types with
    /// zero-sized trailing slice elements. Methods such as [`ref_from_prefix`]
    /// which do not take an explicit count do not support such types.
    ///
    /// ```
    /// use zerocopy::*;
    /// # use zerocopy_derive::*;
    ///
    /// #[derive(FromBytes, Immutable, KnownLayout)]
    /// #[repr(C)]
    /// struct ZSTy {
    ///     leading_sized: [u8; 2],
    ///     trailing_dst: [()],
    /// }
    ///
    /// let src = &[85, 85][..];
    /// let (zsty, _) = ZSTy::ref_from_prefix_with_elems(src, 42).unwrap();
    /// assert_eq!(zsty.trailing_dst.len(), 42);
    /// ```
    ///
    /// [`ref_from_prefix`]: FromBytes::ref_from_prefix
    #[must_use = "has no side effects"]
    #[inline]
    fn ref_from_prefix_with_elems(
        source: &[u8],
        count: usize,
    ) -> Result<(&Self, &[u8]), CastError<&[u8], Self>>
    where
        Self: KnownLayout<PointerMetadata = usize> + Immutable,
    {
        ref_from_prefix_suffix(source, Some(count), CastType::Prefix)
    }

    /// Interprets the suffix of the given `source` as a DST `&Self` with length
    /// equal to `count`.
    ///
    /// This method attempts to return a reference to the suffix of `source`
    /// interpreted as a `Self` with `count` trailing elements, and a reference
    /// to the preceding bytes. If there are insufficient bytes, or if that
    /// suffix of `source` is not appropriately aligned, this returns `Err`. If
    /// [`Self: Unaligned`][self-unaligned], you can [infallibly discard the
    /// alignment error][size-error-from].
    ///
    /// [self-unaligned]: Unaligned
    /// [size-error-from]: error/struct.SizeError.html#method.from-1
    ///
    /// # Examples
    ///
    /// ```
    /// use zerocopy::FromBytes;
    /// # use zerocopy_derive::*;
    ///
    /// # #[derive(Debug, PartialEq, Eq)]
    /// #[derive(FromBytes, Immutable)]
    /// #[repr(C)]
    /// struct Pixel {
    ///     r: u8,
    ///     g: u8,
    ///     b: u8,
    ///     a: u8,
    /// }
    ///
    /// // These are more bytes than are needed to encode two `Pixel`s.
    /// let bytes = &[0, 1, 2, 3, 4, 5, 6, 7, 8, 9][..];
    ///
    /// let (prefix, pixels) = <[Pixel]>::ref_from_suffix_with_elems(bytes, 2).unwrap();
    ///
    /// assert_eq!(prefix, &[0, 1]);
    ///
    /// assert_eq!(pixels, &[
    ///     Pixel { r: 2, g: 3, b: 4, a: 5 },
    ///     Pixel { r: 6, g: 7, b: 8, a: 9 },
    /// ]);
    /// ```
    ///
    /// Since an explicit `count` is provided, this method supports types with
    /// zero-sized trailing slice elements. Methods such as [`ref_from_suffix`]
    /// which do not take an explicit count do not support such types.
    ///
    /// ```
    /// use zerocopy::*;
    /// # use zerocopy_derive::*;
    ///
    /// #[derive(FromBytes, Immutable, KnownLayout)]
    /// #[repr(C)]
    /// struct ZSTy {
    ///     leading_sized: [u8; 2],
    ///     trailing_dst: [()],
    /// }
    ///
    /// let src = &[85, 85][..];
    /// let (_, zsty) = ZSTy::ref_from_suffix_with_elems(src, 42).unwrap();
    /// assert_eq!(zsty.trailing_dst.len(), 42);
    /// ```
    ///
    /// [`ref_from_suffix`]: FromBytes::ref_from_suffix
    #[must_use = "has no side effects"]
    #[inline]
    fn ref_from_suffix_with_elems(
        source: &[u8],
        count: usize,
    ) -> Result<(&[u8], &Self), CastError<&[u8], Self>>
    where
        Self: KnownLayout<PointerMetadata = usize> + Immutable,
    {
        ref_from_prefix_suffix(source, Some(count), CastType::Suffix).map(swap)
    }

    /// Interprets the given `source` as a `&mut Self` with a DST length equal
    /// to `count`.
    ///
    /// This method attempts to return a reference to `source` interpreted as a
    /// `Self` with `count` trailing elements. If the length of `source` is not
    /// equal to the size of `Self` with `count` elements, or if `source` is not
    /// appropriately aligned, this returns `Err`. If [`Self:
    /// Unaligned`][self-unaligned], you can [infallibly discard the alignment
    /// error][size-error-from].
    ///
    /// [self-unaligned]: Unaligned
    /// [size-error-from]: error/struct.SizeError.html#method.from-1
    ///
    /// # Examples
    ///
    /// ```
    /// use zerocopy::FromBytes;
    /// # use zerocopy_derive::*;
    ///
    /// # #[derive(Debug, PartialEq, Eq)]
    /// #[derive(KnownLayout, FromBytes, IntoBytes, Immutable)]
    /// #[repr(C)]
    /// struct Pixel {
    ///     r: u8,
    ///     g: u8,
    ///     b: u8,
    ///     a: u8,
    /// }
    ///
    /// let bytes = &mut [0, 1, 2, 3, 4, 5, 6, 7][..];
    ///
    /// let pixels = <[Pixel]>::mut_from_bytes_with_elems(bytes, 2).unwrap();
    ///
    /// assert_eq!(pixels, &[
    ///     Pixel { r: 0, g: 1, b: 2, a: 3 },
    ///     Pixel { r: 4, g: 5, b: 6, a: 7 },
    /// ]);
    ///
    /// pixels[1] = Pixel { r: 0, g: 0, b: 0, a: 0 };
    ///
    /// assert_eq!(bytes, [0, 1, 2, 3, 0, 0, 0, 0]);
    /// ```
    ///
    /// Since an explicit `count` is provided, this method supports types with
    /// zero-sized trailing slice elements. Methods such as [`mut_from_bytes`]
    /// which do not take an explicit count do not support such types.
    ///
    /// ```
    /// use zerocopy::*;
    /// # use zerocopy_derive::*;
    ///
    /// #[derive(FromBytes, IntoBytes, Immutable, KnownLayout)]
    /// #[repr(C, packed)]
    /// struct ZSTy {
    ///     leading_sized: [u8; 2],
    ///     trailing_dst: [()],
    /// }
    ///
    /// let src = &mut [85, 85][..];
    /// let zsty = ZSTy::mut_from_bytes_with_elems(src, 42).unwrap();
    /// assert_eq!(zsty.trailing_dst.len(), 42);
    /// ```
    ///
    /// [`mut_from_bytes`]: FromBytes::mut_from_bytes
    #[must_use = "has no side effects"]
    #[inline]
    fn mut_from_bytes_with_elems(
        source: &mut [u8],
        count: usize,
    ) -> Result<&mut Self, CastError<&mut [u8], Self>>
    where
        Self: IntoBytes + KnownLayout<PointerMetadata = usize> + Immutable,
    {
        let source = Ptr::from_mut(source);
        let maybe_slf = source.try_cast_into_no_leftover::<_, BecauseImmutable>(Some(count));
        match maybe_slf {
            Ok(slf) => Ok(slf.bikeshed_recall_valid().as_mut()),
            Err(err) => Err(err.map_src(|s| s.as_mut())),
        }
    }

    /// Interprets the prefix of the given `source` as a `&mut Self` with DST
    /// length equal to `count`.
    ///
    /// This method attempts to return a reference to the prefix of `source`
    /// interpreted as a `Self` with `count` trailing elements, and a reference
    /// to the remaining bytes. If there are insufficient bytes, or if `source`
    /// is not appropriately aligned, this returns `Err`. If [`Self:
    /// Unaligned`][self-unaligned], you can [infallibly discard the alignment
    /// error][size-error-from].
    ///
    /// [self-unaligned]: Unaligned
    /// [size-error-from]: error/struct.SizeError.html#method.from-1
    ///
    /// # Examples
    ///
    /// ```
    /// use zerocopy::FromBytes;
    /// # use zerocopy_derive::*;
    ///
    /// # #[derive(Debug, PartialEq, Eq)]
    /// #[derive(KnownLayout, FromBytes, IntoBytes, Immutable)]
    /// #[repr(C)]
    /// struct Pixel {
    ///     r: u8,
    ///     g: u8,
    ///     b: u8,
    ///     a: u8,
    /// }
    ///
    /// // These are more bytes than are needed to encode two `Pixel`s.
    /// let bytes = &mut [0, 1, 2, 3, 4, 5, 6, 7, 8, 9][..];
    ///
    /// let (pixels, suffix) = <[Pixel]>::mut_from_prefix_with_elems(bytes, 2).unwrap();
    ///
    /// assert_eq!(pixels, &[
    ///     Pixel { r: 0, g: 1, b: 2, a: 3 },
    ///     Pixel { r: 4, g: 5, b: 6, a: 7 },
    /// ]);
    ///
    /// assert_eq!(suffix, &[8, 9]);
    ///
    /// pixels[1] = Pixel { r: 0, g: 0, b: 0, a: 0 };
    /// suffix.fill(1);
    ///
    /// assert_eq!(bytes, [0, 1, 2, 3, 0, 0, 0, 0, 1, 1]);
    /// ```
    ///
    /// Since an explicit `count` is provided, this method supports types with
    /// zero-sized trailing slice elements. Methods such as [`mut_from_prefix`]
    /// which do not take an explicit count do not support such types.
    ///
    /// ```
    /// use zerocopy::*;
    /// # use zerocopy_derive::*;
    ///
    /// #[derive(FromBytes, IntoBytes, Immutable, KnownLayout)]
    /// #[repr(C, packed)]
    /// struct ZSTy {
    ///     leading_sized: [u8; 2],
    ///     trailing_dst: [()],
    /// }
    ///
    /// let src = &mut [85, 85][..];
    /// let (zsty, _) = ZSTy::mut_from_prefix_with_elems(src, 42).unwrap();
    /// assert_eq!(zsty.trailing_dst.len(), 42);
    /// ```
    ///
    /// [`mut_from_prefix`]: FromBytes::mut_from_prefix
    #[must_use = "has no side effects"]
    #[inline]
    fn mut_from_prefix_with_elems(
        source: &mut [u8],
        count: usize,
    ) -> Result<(&mut Self, &mut [u8]), CastError<&mut [u8], Self>>
    where
        Self: IntoBytes + KnownLayout<PointerMetadata = usize>,
    {
        mut_from_prefix_suffix(source, Some(count), CastType::Prefix)
    }

    /// Interprets the suffix of the given `source` as a `&mut Self` with DST
    /// length equal to `count`.
    ///
    /// This method attempts to return a reference to the suffix of `source`
    /// interpreted as a `Self` with `count` trailing elements, and a reference
    /// to the preceding bytes. If there are insufficient bytes, or if that
    /// suffix of `source` is not appropriately aligned, this returns `Err`. If
    /// [`Self: Unaligned`][self-unaligned], you can [infallibly discard the
    /// alignment error][size-error-from].
    ///
    /// [self-unaligned]: Unaligned
    /// [size-error-from]: error/struct.SizeError.html#method.from-1
    ///
    /// # Examples
    ///
    /// ```
    /// use zerocopy::FromBytes;
    /// # use zerocopy_derive::*;
    ///
    /// # #[derive(Debug, PartialEq, Eq)]
    /// #[derive(FromBytes, IntoBytes, Immutable)]
    /// #[repr(C)]
    /// struct Pixel {
    ///     r: u8,
    ///     g: u8,
    ///     b: u8,
    ///     a: u8,
    /// }
    ///
    /// // These are more bytes than are needed to encode two `Pixel`s.
    /// let bytes = &mut [0, 1, 2, 3, 4, 5, 6, 7, 8, 9][..];
    ///
    /// let (prefix, pixels) = <[Pixel]>::mut_from_suffix_with_elems(bytes, 2).unwrap();
    ///
    /// assert_eq!(prefix, &[0, 1]);
    ///
    /// assert_eq!(pixels, &[
    ///     Pixel { r: 2, g: 3, b: 4, a: 5 },
    ///     Pixel { r: 6, g: 7, b: 8, a: 9 },
    /// ]);
    ///
    /// prefix.fill(9);
    /// pixels[1] = Pixel { r: 0, g: 0, b: 0, a: 0 };
    ///
    /// assert_eq!(bytes, [9, 9, 2, 3, 4, 5, 0, 0, 0, 0]);
    /// ```
    ///
    /// Since an explicit `count` is provided, this method supports types with
    /// zero-sized trailing slice elements. Methods such as [`mut_from_suffix`]
    /// which do not take an explicit count do not support such types.
    ///
    /// ```
    /// use zerocopy::*;
    /// # use zerocopy_derive::*;
    ///
    /// #[derive(FromBytes, IntoBytes, Immutable, KnownLayout)]
    /// #[repr(C, packed)]
    /// struct ZSTy {
    ///     leading_sized: [u8; 2],
    ///     trailing_dst: [()],
    /// }
    ///
    /// let src = &mut [85, 85][..];
    /// let (_, zsty) = ZSTy::mut_from_suffix_with_elems(src, 42).unwrap();
    /// assert_eq!(zsty.trailing_dst.len(), 42);
    /// ```
    ///
    /// [`mut_from_suffix`]: FromBytes::mut_from_suffix
    #[must_use = "has no side effects"]
    #[inline]
    fn mut_from_suffix_with_elems(
        source: &mut [u8],
        count: usize,
    ) -> Result<(&mut [u8], &mut Self), CastError<&mut [u8], Self>>
    where
        Self: IntoBytes + KnownLayout<PointerMetadata = usize>,
    {
        mut_from_prefix_suffix(source, Some(count), CastType::Suffix).map(swap)
    }

    /// Reads a copy of `Self` from the given `source`.
    ///
    /// If `source.len() != size_of::<Self>()`, `read_from_bytes` returns `Err`.
    ///
    /// # Examples
    ///
    /// ```
    /// use zerocopy::FromBytes;
    /// # use zerocopy_derive::*;
    ///
    /// #[derive(FromBytes)]
    /// #[repr(C)]
    /// struct PacketHeader {
    ///     src_port: [u8; 2],
    ///     dst_port: [u8; 2],
    ///     length: [u8; 2],
    ///     checksum: [u8; 2],
    /// }
    ///
    /// // These bytes encode a `PacketHeader`.
    /// let bytes = &[0, 1, 2, 3, 4, 5, 6, 7][..];
    ///
    /// let header = PacketHeader::read_from_bytes(bytes).unwrap();
    ///
    /// assert_eq!(header.src_port, [0, 1]);
    /// assert_eq!(header.dst_port, [2, 3]);
    /// assert_eq!(header.length, [4, 5]);
    /// assert_eq!(header.checksum, [6, 7]);
    /// ```
    #[must_use = "has no side effects"]
    #[inline]
    fn read_from_bytes(source: &[u8]) -> Result<Self, SizeError<&[u8], Self>>
    where
        Self: Sized,
    {
        match Ref::<_, Unalign<Self>>::sized_from(source) {
            Ok(r) => Ok(Ref::read(&r).into_inner()),
            Err(CastError::Size(e)) => Err(e.with_dst()),
            Err(CastError::Alignment(_)) => {
                // SAFETY: `Unalign<Self>` is trivially aligned, so
                // `Ref::sized_from` cannot fail due to unmet alignment
                // requirements.
                unsafe { core::hint::unreachable_unchecked() }
            }
            Err(CastError::Validity(i)) => match i {},
        }
    }

    /// Reads a copy of `Self` from the prefix of the given `source`.
    ///
    /// This attempts to read a `Self` from the first `size_of::<Self>()` bytes
    /// of `source`, returning that `Self` and any remaining bytes. If
    /// `source.len() < size_of::<Self>()`, it returns `Err`.
    ///
    /// # Examples
    ///
    /// ```
    /// use zerocopy::FromBytes;
    /// # use zerocopy_derive::*;
    ///
    /// #[derive(FromBytes)]
    /// #[repr(C)]
    /// struct PacketHeader {
4484     ///     src_port: [u8; 2],
4485     ///     dst_port: [u8; 2],
4486     ///     length: [u8; 2],
4487     ///     checksum: [u8; 2],
4488     /// }
4489     ///
4490     /// // These are more bytes than are needed to encode a `PacketHeader`.
4491     /// let bytes = &[0, 1, 2, 3, 4, 5, 6, 7, 8, 9][..];
4492     ///
4493     /// let (header, body) = PacketHeader::read_from_prefix(bytes).unwrap();
4494     ///
4495     /// assert_eq!(header.src_port, [0, 1]);
4496     /// assert_eq!(header.dst_port, [2, 3]);
4497     /// assert_eq!(header.length, [4, 5]);
4498     /// assert_eq!(header.checksum, [6, 7]);
4499     /// assert_eq!(body, [8, 9]);
4500     /// ```
4501     #[must_use = "has no side effects"]
4502     #[inline]
read_from_prefix(source: &[u8]) -> Result<(Self, &[u8]), SizeError<&[u8], Self>> where Self: Sized,4503     fn read_from_prefix(source: &[u8]) -> Result<(Self, &[u8]), SizeError<&[u8], Self>>
4504     where
4505         Self: Sized,
4506     {
4507         match Ref::<_, Unalign<Self>>::sized_from_prefix(source) {
4508             Ok((r, suffix)) => Ok((Ref::read(&r).into_inner(), suffix)),
4509             Err(CastError::Size(e)) => Err(e.with_dst()),
4510             Err(CastError::Alignment(_)) => {
4511                 // SAFETY: `Unalign<Self>` is trivially aligned, so
4512                 // `Ref::sized_from_prefix` cannot fail due to unmet alignment
4513                 // requirements.
4514                 unsafe { core::hint::unreachable_unchecked() }
4515             }
4516             Err(CastError::Validity(i)) => match i {},
4517         }
4518     }
4519 
4520     /// Reads a copy of `Self` from the suffix of the given `source`.
4521     ///
4522     /// This attempts to read a `Self` from the last `size_of::<Self>()` bytes
4523     /// of `source`, returning that `Self` and any preceding bytes. If
4524     /// `source.len() < size_of::<Self>()`, it returns `Err`.
4525     ///
4526     /// # Examples
4527     ///
4528     /// ```
4529     /// use zerocopy::FromBytes;
4530     /// # use zerocopy_derive::*;
4531     ///
4532     /// #[derive(FromBytes)]
4533     /// #[repr(C)]
4534     /// struct PacketTrailer {
4535     ///     frame_check_sequence: [u8; 4],
4536     /// }
4537     ///
4538     /// // These are more bytes than are needed to encode a `PacketTrailer`.
4539     /// let bytes = &[0, 1, 2, 3, 4, 5, 6, 7, 8, 9][..];
4540     ///
4541     /// let (prefix, trailer) = PacketTrailer::read_from_suffix(bytes).unwrap();
4542     ///
4543     /// assert_eq!(prefix, [0, 1, 2, 3, 4, 5]);
4544     /// assert_eq!(trailer.frame_check_sequence, [6, 7, 8, 9]);
4545     /// ```
4546     #[must_use = "has no side effects"]
4547     #[inline]
read_from_suffix(source: &[u8]) -> Result<(&[u8], Self), SizeError<&[u8], Self>> where Self: Sized,4548     fn read_from_suffix(source: &[u8]) -> Result<(&[u8], Self), SizeError<&[u8], Self>>
4549     where
4550         Self: Sized,
4551     {
4552         match Ref::<_, Unalign<Self>>::sized_from_suffix(source) {
4553             Ok((prefix, r)) => Ok((prefix, Ref::read(&r).into_inner())),
4554             Err(CastError::Size(e)) => Err(e.with_dst()),
4555             Err(CastError::Alignment(_)) => {
4556                 // SAFETY: `Unalign<Self>` is trivially aligned, so
4557                 // `Ref::sized_from_suffix` cannot fail due to unmet alignment
4558                 // requirements.
4559                 unsafe { core::hint::unreachable_unchecked() }
4560             }
4561             Err(CastError::Validity(i)) => match i {},
4562         }
4563     }
4564 
    /// Reads a copy of `Self` from an `io::Read`.
    ///
    /// This is useful for interfacing with operating system byte sources
    /// (files, sockets, etc.).
    ///
    /// # Examples
    ///
    /// ```no_run
    /// use zerocopy::{byteorder::big_endian::*, FromBytes};
    /// use std::fs::File;
    /// # use zerocopy_derive::*;
    ///
    /// #[derive(FromBytes)]
    /// #[repr(C)]
    /// struct BitmapFileHeader {
    ///     signature: [u8; 2],
    ///     size: U32,
    ///     reserved: U64,
    ///     offset: U64,
    /// }
    ///
    /// let mut file = File::open("image.bin").unwrap();
    /// let header = BitmapFileHeader::read_from_io(&mut file).unwrap();
    /// ```
    #[cfg(feature = "std")]
    #[inline(always)]
    fn read_from_io<R>(mut src: R) -> io::Result<Self>
    where
        Self: Sized,
        R: io::Read,
    {
        // NOTE(#2319, #2320): We do `buf.zero()` separately rather than
        // constructing `let buf = CoreMaybeUninit::zeroed()` because, if `Self`
        // contains padding bytes, then a typed copy of `CoreMaybeUninit<Self>`
        // will not necessarily preserve zeros written to those padding byte
        // locations, and so `buf` could contain uninitialized bytes.
        let mut buf = CoreMaybeUninit::<Self>::uninit();
        buf.zero();

        let ptr = Ptr::from_mut(&mut buf);
        // SAFETY: After `buf.zero()`, `buf` consists entirely of initialized,
        // zeroed bytes.
        let ptr = unsafe { ptr.assume_validity::<invariant::Initialized>() };
        let ptr = ptr.as_bytes::<BecauseExclusive>();
        src.read_exact(ptr.as_mut())?;
        // SAFETY: `buf` entirely consists of initialized bytes, and `Self` is
        // `FromBytes`.
        Ok(unsafe { buf.assume_init() })
    }

    #[deprecated(since = "0.8.0", note = "renamed to `FromBytes::ref_from_bytes`")]
    #[doc(hidden)]
    #[must_use = "has no side effects"]
    #[inline(always)]
    fn ref_from(source: &[u8]) -> Option<&Self>
    where
        Self: KnownLayout + Immutable,
    {
        Self::ref_from_bytes(source).ok()
    }

    #[deprecated(since = "0.8.0", note = "renamed to `FromBytes::mut_from_bytes`")]
    #[doc(hidden)]
    #[must_use = "has no side effects"]
    #[inline(always)]
    fn mut_from(source: &mut [u8]) -> Option<&mut Self>
    where
        Self: KnownLayout + IntoBytes,
    {
        Self::mut_from_bytes(source).ok()
    }

    #[deprecated(since = "0.8.0", note = "renamed to `FromBytes::ref_from_prefix_with_elems`")]
    #[doc(hidden)]
    #[must_use = "has no side effects"]
    #[inline(always)]
    fn slice_from_prefix(source: &[u8], count: usize) -> Option<(&[Self], &[u8])>
    where
        Self: Sized + Immutable,
    {
        <[Self]>::ref_from_prefix_with_elems(source, count).ok()
    }

    #[deprecated(since = "0.8.0", note = "renamed to `FromBytes::ref_from_suffix_with_elems`")]
    #[doc(hidden)]
    #[must_use = "has no side effects"]
    #[inline(always)]
    fn slice_from_suffix(source: &[u8], count: usize) -> Option<(&[u8], &[Self])>
    where
        Self: Sized + Immutable,
    {
        <[Self]>::ref_from_suffix_with_elems(source, count).ok()
    }

    #[deprecated(since = "0.8.0", note = "renamed to `FromBytes::mut_from_prefix_with_elems`")]
    #[doc(hidden)]
    #[must_use = "has no side effects"]
    #[inline(always)]
    fn mut_slice_from_prefix(source: &mut [u8], count: usize) -> Option<(&mut [Self], &mut [u8])>
    where
        Self: Sized + IntoBytes,
    {
        <[Self]>::mut_from_prefix_with_elems(source, count).ok()
    }

    #[deprecated(since = "0.8.0", note = "renamed to `FromBytes::mut_from_suffix_with_elems`")]
    #[doc(hidden)]
    #[must_use = "has no side effects"]
    #[inline(always)]
    fn mut_slice_from_suffix(source: &mut [u8], count: usize) -> Option<(&mut [u8], &mut [Self])>
    where
        Self: Sized + IntoBytes,
    {
        <[Self]>::mut_from_suffix_with_elems(source, count).ok()
    }

    #[deprecated(since = "0.8.0", note = "renamed to `FromBytes::read_from_bytes`")]
    #[doc(hidden)]
    #[must_use = "has no side effects"]
    #[inline(always)]
    fn read_from(source: &[u8]) -> Option<Self>
    where
        Self: Sized,
    {
        Self::read_from_bytes(source).ok()
    }
}

/// Interprets the given affix of the given bytes as a `&T`.
///
/// This function computes the largest possible size of `T` that can fit in the
/// prefix or suffix bytes of `source`, then attempts to return both a reference
/// to those bytes interpreted as a `T`, and a reference to the excess bytes. If
/// there are insufficient bytes, or if that affix of `source` is not
/// appropriately aligned, this returns `Err`.
#[inline(always)]
fn ref_from_prefix_suffix<T: FromBytes + KnownLayout + Immutable + ?Sized>(
    source: &[u8],
    meta: Option<T::PointerMetadata>,
    cast_type: CastType,
) -> Result<(&T, &[u8]), CastError<&[u8], T>> {
    let (slf, prefix_suffix) = Ptr::from_ref(source)
        .try_cast_into::<_, BecauseImmutable>(cast_type, meta)
        .map_err(|err| err.map_src(|s| s.as_ref()))?;
    Ok((slf.bikeshed_recall_valid().as_ref(), prefix_suffix.as_ref()))
}

/// Interprets the given affix of the given bytes as a `&mut T` without
/// copying.
///
/// This function computes the largest possible size of `T` that can fit in the
/// prefix or suffix bytes of `source`, then attempts to return both a reference
/// to those bytes interpreted as a `T`, and a reference to the excess bytes. If
/// there are insufficient bytes, or if that affix of `source` is not
/// appropriately aligned, this returns `Err`.
#[inline(always)]
fn mut_from_prefix_suffix<T: FromBytes + KnownLayout + ?Sized>(
    source: &mut [u8],
    meta: Option<T::PointerMetadata>,
    cast_type: CastType,
) -> Result<(&mut T, &mut [u8]), CastError<&mut [u8], T>> {
    let (slf, prefix_suffix) = Ptr::from_mut(source)
        .try_cast_into::<_, BecauseExclusive>(cast_type, meta)
        .map_err(|err| err.map_src(|s| s.as_mut()))?;
    Ok((slf.bikeshed_recall_valid().as_mut(), prefix_suffix.as_mut()))
}

/// Analyzes whether a type is [`IntoBytes`].
///
/// This derive analyzes, at compile time, whether the annotated type satisfies
/// the [safety conditions] of `IntoBytes` and implements `IntoBytes` if it is
/// sound to do so. This derive can be applied to structs and enums (see below
/// for union support); e.g.:
///
/// ```
/// # use zerocopy_derive::{IntoBytes};
/// #[derive(IntoBytes)]
/// #[repr(C)]
/// struct MyStruct {
/// # /*
///     ...
/// # */
/// }
///
/// #[derive(IntoBytes)]
/// #[repr(u8)]
/// enum MyEnum {
/// #   Variant,
/// # /*
///     ...
/// # */
/// }
/// ```
///
/// [safety conditions]: trait@IntoBytes#safety
///
/// # Error Messages
///
/// On Rust toolchains prior to 1.78.0, due to the way that the custom derive
/// for `IntoBytes` is implemented, you may get an error like this:
///
/// ```text
/// error[E0277]: the trait bound `(): PaddingFree<Foo, true>` is not satisfied
///   --> lib.rs:23:10
///    |
///  1 | #[derive(IntoBytes)]
///    |          ^^^^^^^^^ the trait `PaddingFree<Foo, true>` is not implemented for `()`
///    |
///    = help: the following implementations were found:
///                   <() as PaddingFree<T, false>>
/// ```
///
/// This error indicates that the type being annotated has padding bytes, which
/// is illegal for `IntoBytes` types. Consider reducing the alignment of some
/// fields by using types in the [`byteorder`] module, wrapping field types in
/// [`Unalign`], adding explicit struct fields where those padding bytes would
/// be, or using `#[repr(packed)]`. See the Rust Reference's page on [type
/// layout] for more information about type layout and padding.
///
/// [type layout]: https://doc.rust-lang.org/reference/type-layout.html
///
/// # Unions
///
/// Currently, union bit validity is [up in the air][union-validity], and so
/// zerocopy does not support `#[derive(IntoBytes)]` on unions by default.
/// However, implementing `IntoBytes` on a union type is likely sound on all
/// existing Rust toolchains - it's just that it may become unsound in the
/// future. You can opt-in to `#[derive(IntoBytes)]` support on unions by
/// passing the unstable `zerocopy_derive_union_into_bytes` cfg:
///
/// ```shell
/// $ RUSTFLAGS='--cfg zerocopy_derive_union_into_bytes' cargo build
/// ```
///
/// However, it is your responsibility to ensure that this derive is sound on
/// the specific versions of the Rust toolchain you are using! We make no
/// stability or soundness guarantees regarding this cfg, and may remove it at
/// any point.
///
/// We are actively working with Rust to stabilize the necessary language
/// guarantees to support this in a forwards-compatible way, which will enable
/// us to remove the cfg gate. As part of this effort, we need to know how much
/// demand there is for this feature. If you would like to use `IntoBytes` on
/// unions, [please let us know][discussion].
///
/// [union-validity]: https://github.com/rust-lang/unsafe-code-guidelines/issues/438
/// [discussion]: https://github.com/google/zerocopy/discussions/1802
///
/// # Analysis
///
/// *This section describes, roughly, the analysis performed by this derive to
/// determine whether it is sound to implement `IntoBytes` for a given type.
/// Unless you are modifying the implementation of this derive, or attempting to
/// manually implement `IntoBytes` for a type yourself, you don't need to read
/// this section.*
///
/// If a type has the following properties, then this derive can implement
/// `IntoBytes` for that type:
///
/// - If the type is a struct, its fields must be [`IntoBytes`]. Additionally:
///     - if the type is `repr(transparent)` or `repr(packed)`, it is
///       [`IntoBytes`] if its fields are [`IntoBytes`]; else,
///     - if the type is `repr(C)` with at most one field, it is [`IntoBytes`]
///       if its field is [`IntoBytes`]; else,
///     - if the type has no generic parameters, it is [`IntoBytes`] if the type
///       is sized and has no padding bytes; else,
///     - if the type is `repr(C)`, its fields must be [`Unaligned`].
/// - If the type is an enum:
///   - It must have a defined representation (`repr`s `C`, `u8`, `u16`, `u32`,
///     `u64`, `usize`, `i8`, `i16`, `i32`, `i64`, or `isize`).
///   - It must have no padding bytes.
///   - Its fields must be [`IntoBytes`].
///
/// This analysis is subject to change. Unsafe code may *only* rely on the
/// documented [safety conditions] of `IntoBytes`, and must *not* rely on the
/// implementation details of this derive.
///
/// [Rust Reference]: https://doc.rust-lang.org/reference/type-layout.html
#[cfg(any(feature = "derive", test))]
#[cfg_attr(doc_cfg, doc(cfg(feature = "derive")))]
pub use zerocopy_derive::IntoBytes;

/// Types that can be converted to an immutable slice of initialized bytes.
///
/// Any `IntoBytes` type can be converted to a slice of initialized bytes of the
/// same size. This is useful for efficiently serializing structured data as raw
/// bytes.
///
/// # Implementation
///
/// **Do not implement this trait yourself!** Instead, use
/// [`#[derive(IntoBytes)]`][derive]; e.g.:
///
/// ```
/// # use zerocopy_derive::IntoBytes;
/// #[derive(IntoBytes)]
/// #[repr(C)]
/// struct MyStruct {
/// # /*
///     ...
/// # */
/// }
///
/// #[derive(IntoBytes)]
/// #[repr(u8)]
/// enum MyEnum {
/// #   Variant0,
/// # /*
///     ...
/// # */
/// }
/// ```
///
/// This derive performs a sophisticated, compile-time safety analysis to
/// determine whether a type is `IntoBytes`. See the [derive
/// documentation][derive] for guidance on how to interpret error messages
/// produced by the derive's analysis.
///
/// # Safety
///
/// *This section describes what is required in order for `T: IntoBytes`, and
/// what unsafe code may assume of such types. If you don't plan on implementing
/// `IntoBytes` manually, and you don't plan on writing unsafe code that
/// operates on `IntoBytes` types, then you don't need to read this section.*
///
/// If `T: IntoBytes`, then unsafe code may assume that it is sound to treat any
/// `t: T` as an immutable `[u8]` of length `size_of_val(t)`. If a type is
/// marked as `IntoBytes` which violates this contract, it may cause undefined
/// behavior.
///
/// `#[derive(IntoBytes)]` only permits [types which satisfy these
/// requirements][derive-analysis].
///
#[cfg_attr(
    feature = "derive",
    doc = "[derive]: zerocopy_derive::IntoBytes",
    doc = "[derive-analysis]: zerocopy_derive::IntoBytes#analysis"
)]
#[cfg_attr(
    not(feature = "derive"),
    doc = concat!("[derive]: https://docs.rs/zerocopy/", env!("CARGO_PKG_VERSION"), "/zerocopy/derive.IntoBytes.html"),
    doc = concat!("[derive-analysis]: https://docs.rs/zerocopy/", env!("CARGO_PKG_VERSION"), "/zerocopy/derive.IntoBytes.html#analysis"),
)]
#[cfg_attr(
    zerocopy_diagnostic_on_unimplemented_1_78_0,
    diagnostic::on_unimplemented(note = "Consider adding `#[derive(IntoBytes)]` to `{Self}`")
)]
pub unsafe trait IntoBytes {
    // The `Self: Sized` bound makes it so that this function doesn't prevent
    // `IntoBytes` from being object safe. Note that other `IntoBytes` methods
    // prevent object safety, but those provide a benefit in exchange for object
    // safety. If at some point we remove those methods, change their type
    // signatures, or move them out of this trait so that `IntoBytes` is object
    // safe again, it's important that this function not prevent object safety.
    #[doc(hidden)]
    fn only_derive_is_allowed_to_implement_this_trait()
    where
        Self: Sized;

    /// Gets the bytes of this value.
    ///
    /// # Examples
    ///
    /// ```
    /// use zerocopy::IntoBytes;
    /// # use zerocopy_derive::*;
    ///
    /// #[derive(IntoBytes, Immutable)]
    /// #[repr(C)]
    /// struct PacketHeader {
    ///     src_port: [u8; 2],
    ///     dst_port: [u8; 2],
    ///     length: [u8; 2],
    ///     checksum: [u8; 2],
    /// }
    ///
    /// let header = PacketHeader {
    ///     src_port: [0, 1],
    ///     dst_port: [2, 3],
    ///     length: [4, 5],
    ///     checksum: [6, 7],
    /// };
    ///
    /// let bytes = header.as_bytes();
    ///
    /// assert_eq!(bytes, [0, 1, 2, 3, 4, 5, 6, 7]);
    /// ```
    #[must_use = "has no side effects"]
    #[inline(always)]
    fn as_bytes(&self) -> &[u8]
    where
        Self: Immutable,
    {
        // Note that this method does not have a `Self: Sized` bound;
        // `size_of_val` works for unsized values too.
        let len = mem::size_of_val(self);
        let slf: *const Self = self;

        // SAFETY:
        // - `slf.cast::<u8>()` is valid for reads for `len * size_of::<u8>()`
        //   many bytes because...
        //   - `slf` is the same pointer as `self`, and `self` is a reference
        //     which points to an object whose size is `len`. Thus...
        //     - The entire region of `len` bytes starting at `slf` is contained
        //       within a single allocation.
        //     - `slf` is non-null.
        //   - `slf` is trivially aligned to `align_of::<u8>() == 1`.
        // - `Self: IntoBytes` ensures that all of the bytes of `slf` are
        //   initialized.
        // - Since `slf` is derived from `self`, and `self` is an immutable
        //   reference, the only other references to this memory region that
        //   could exist are other immutable references, and those don't allow
        //   mutation. `Self: Immutable` prohibits types which contain
        //   `UnsafeCell`s, which are the only types for which this rule
        //   wouldn't be sufficient.
        // - The total size of the resulting slice is no larger than
        //   `isize::MAX` because no allocation produced by safe code can be
        //   larger than `isize::MAX`.
        //
        // TODO(#429): Add references to docs and quotes.
        unsafe { slice::from_raw_parts(slf.cast::<u8>(), len) }
    }

    /// Gets the bytes of this value mutably.
    ///
    /// # Examples
    ///
    /// ```
    /// use zerocopy::IntoBytes;
    /// # use zerocopy_derive::*;
    ///
    /// # #[derive(Eq, PartialEq, Debug)]
    /// #[derive(FromBytes, IntoBytes, Immutable)]
    /// #[repr(C)]
    /// struct PacketHeader {
    ///     src_port: [u8; 2],
    ///     dst_port: [u8; 2],
    ///     length: [u8; 2],
    ///     checksum: [u8; 2],
    /// }
    ///
    /// let mut header = PacketHeader {
    ///     src_port: [0, 1],
    ///     dst_port: [2, 3],
    ///     length: [4, 5],
    ///     checksum: [6, 7],
    /// };
    ///
    /// let bytes = header.as_mut_bytes();
    ///
    /// assert_eq!(bytes, [0, 1, 2, 3, 4, 5, 6, 7]);
    ///
    /// bytes.reverse();
    ///
    /// assert_eq!(header, PacketHeader {
    ///     src_port: [7, 6],
    ///     dst_port: [5, 4],
    ///     length: [3, 2],
    ///     checksum: [1, 0],
    /// });
    /// ```
    #[must_use = "has no side effects"]
    #[inline(always)]
    fn as_mut_bytes(&mut self) -> &mut [u8]
    where
        Self: FromBytes,
    {
        // Note that this method does not have a `Self: Sized` bound;
        // `size_of_val` works for unsized values too.
        let len = mem::size_of_val(self);
        let slf: *mut Self = self;

        // SAFETY:
        // - `slf.cast::<u8>()` is valid for reads and writes for `len *
        //   size_of::<u8>()` many bytes because...
        //   - `slf` is the same pointer as `self`, and `self` is a reference
        //     which points to an object whose size is `len`. Thus...
        //     - The entire region of `len` bytes starting at `slf` is contained
        //       within a single allocation.
        //     - `slf` is non-null.
        //   - `slf` is trivially aligned to `align_of::<u8>() == 1`.
        // - `Self: IntoBytes` ensures that all of the bytes of `slf` are
        //   initialized.
        // - `Self: FromBytes` ensures that no write to this memory region
        //   could result in it containing an invalid `Self`.
        // - Since `slf` is derived from `self`, and `self` is a mutable
        //   reference, no other references to this memory region can exist.
        // - The total size of the resulting slice is no larger than
        //   `isize::MAX` because no allocation produced by safe code can be
        //   larger than `isize::MAX`.
        //
        // TODO(#429): Add references to docs and quotes.
        unsafe { slice::from_raw_parts_mut(slf.cast::<u8>(), len) }
    }
    /// Writes a copy of `self` to `dst`.
    ///
    /// If `dst.len() != size_of_val(self)`, `write_to` returns `Err`.
    ///
    /// # Examples
    ///
    /// ```
    /// use zerocopy::IntoBytes;
    /// # use zerocopy_derive::*;
    ///
    /// #[derive(IntoBytes, Immutable)]
    /// #[repr(C)]
    /// struct PacketHeader {
    ///     src_port: [u8; 2],
    ///     dst_port: [u8; 2],
    ///     length: [u8; 2],
    ///     checksum: [u8; 2],
    /// }
    ///
    /// let header = PacketHeader {
    ///     src_port: [0, 1],
    ///     dst_port: [2, 3],
    ///     length: [4, 5],
    ///     checksum: [6, 7],
    /// };
    ///
    /// let mut bytes = [0, 0, 0, 0, 0, 0, 0, 0];
    ///
    /// header.write_to(&mut bytes[..]).unwrap();
    ///
    /// assert_eq!(bytes, [0, 1, 2, 3, 4, 5, 6, 7]);
    /// ```
    ///
    /// If too many or too few target bytes are provided, `write_to` returns
    /// `Err` and leaves the target bytes unmodified:
    ///
    /// ```
    /// # use zerocopy::IntoBytes;
    /// # let header = u128::MAX;
    /// let mut excessive_bytes = &mut [0u8; 128][..];
    ///
    /// let write_result = header.write_to(excessive_bytes);
    ///
    /// assert!(write_result.is_err());
    /// assert_eq!(excessive_bytes, [0u8; 128]);
    /// ```
    #[must_use = "callers should check the return value to see if the operation succeeded"]
    #[inline]
    fn write_to(&self, dst: &mut [u8]) -> Result<(), SizeError<&Self, &mut [u8]>>
    where
        Self: Immutable,
    {
        let src = self.as_bytes();
        if dst.len() == src.len() {
            // SAFETY: Within this branch of the conditional, we have ensured
            // that `dst.len()` is equal to `src.len()`. Neither the size of the
            // source nor the size of the destination change between the above
            // size check and the invocation of `copy_unchecked`.
            unsafe { util::copy_unchecked(src, dst) }
            Ok(())
        } else {
            Err(SizeError::new(self))
        }
    }
5124 
5125     /// Writes a copy of `self` to the prefix of `dst`.
5126     ///
5127     /// `write_to_prefix` writes `self` to the first `size_of_val(self)` bytes
5128     /// of `dst`. If `dst.len() < size_of_val(self)`, it returns `Err`.
5129     ///
5130     /// # Examples
5131     ///
5132     /// ```
5133     /// use zerocopy::IntoBytes;
5134     /// # use zerocopy_derive::*;
5135     ///
5136     /// #[derive(IntoBytes, Immutable)]
5137     /// #[repr(C)]
5138     /// struct PacketHeader {
5139     ///     src_port: [u8; 2],
5140     ///     dst_port: [u8; 2],
5141     ///     length: [u8; 2],
5142     ///     checksum: [u8; 2],
5143     /// }
5144     ///
5145     /// let header = PacketHeader {
5146     ///     src_port: [0, 1],
5147     ///     dst_port: [2, 3],
5148     ///     length: [4, 5],
5149     ///     checksum: [6, 7],
5150     /// };
5151     ///
5152     /// let mut bytes = [0, 0, 0, 0, 0, 0, 0, 0, 0, 0];
5153     ///
5154     /// header.write_to_prefix(&mut bytes[..]);
5155     ///
5156     /// assert_eq!(bytes, [0, 1, 2, 3, 4, 5, 6, 7, 0, 0]);
5157     /// ```
5158     ///
    /// If insufficient target bytes are provided, `write_to_prefix` returns
    /// `Err` and leaves the target bytes unmodified:
    ///
    /// ```
    /// # use zerocopy::IntoBytes;
    /// # let header = u128::MAX;
    /// let mut insufficient_bytes = &mut [0, 0][..];
    ///
    /// let write_result = header.write_to_prefix(insufficient_bytes);
    ///
    /// assert!(write_result.is_err());
    /// assert_eq!(insufficient_bytes, [0, 0]);
    /// ```
    #[must_use = "callers should check the return value to see if the operation succeeded"]
    #[inline]
    fn write_to_prefix(&self, dst: &mut [u8]) -> Result<(), SizeError<&Self, &mut [u8]>>
    where
        Self: Immutable,
    {
        let src = self.as_bytes();
        match dst.get_mut(..src.len()) {
            Some(dst) => {
                // SAFETY: Within this branch of the `match`, we have ensured
                // through fallible subslicing that `dst.len()` is equal to
                // `src.len()`. Neither the size of the source nor the size of
                // the destination change between the above subslicing operation
                // and the invocation of `copy_unchecked`.
                unsafe { util::copy_unchecked(src, dst) }
                Ok(())
            }
            None => Err(SizeError::new(self)),
        }
    }

    /// Writes a copy of `self` to the suffix of `dst`.
    ///
    /// `write_to_suffix` writes `self` to the last `size_of_val(self)` bytes of
    /// `dst`. If `dst.len() < size_of_val(self)`, it returns `Err`.
    ///
    /// # Examples
    ///
    /// ```
    /// use zerocopy::IntoBytes;
    /// # use zerocopy_derive::*;
    ///
    /// #[derive(IntoBytes, Immutable)]
    /// #[repr(C)]
    /// struct PacketHeader {
    ///     src_port: [u8; 2],
    ///     dst_port: [u8; 2],
    ///     length: [u8; 2],
    ///     checksum: [u8; 2],
    /// }
    ///
    /// let header = PacketHeader {
    ///     src_port: [0, 1],
    ///     dst_port: [2, 3],
    ///     length: [4, 5],
    ///     checksum: [6, 7],
    /// };
    ///
    /// let mut bytes = [0, 0, 0, 0, 0, 0, 0, 0, 0, 0];
    ///
    /// header.write_to_suffix(&mut bytes[..]);
    ///
    /// assert_eq!(bytes, [0, 0, 0, 1, 2, 3, 4, 5, 6, 7]);
    /// ```
    ///
    /// If insufficient target bytes are provided, `write_to_suffix` returns
    /// `Err` and leaves the target bytes unmodified:
    ///
    /// ```
    /// # use zerocopy::IntoBytes;
    /// # let header = u128::MAX;
    /// let mut insufficient_bytes = &mut [0, 0][..];
    ///
    /// let write_result = header.write_to_suffix(insufficient_bytes);
    ///
    /// assert!(write_result.is_err());
    /// assert_eq!(insufficient_bytes, [0, 0]);
    /// ```
    #[must_use = "callers should check the return value to see if the operation succeeded"]
    #[inline]
    fn write_to_suffix(&self, dst: &mut [u8]) -> Result<(), SizeError<&Self, &mut [u8]>>
    where
        Self: Immutable,
    {
        let src = self.as_bytes();
        let start = if let Some(start) = dst.len().checked_sub(src.len()) {
            start
        } else {
            return Err(SizeError::new(self));
        };
        let dst = if let Some(dst) = dst.get_mut(start..) {
            dst
        } else {
            // get_mut() should never return None here. We return a `SizeError`
            // rather than .unwrap() because in the event the branch is not
            // optimized away, returning a value is generally lighter-weight
            // than panicking.
            return Err(SizeError::new(self));
        };
        // SAFETY: Through fallible subslicing of `dst`, we have ensured that
        // `dst.len()` is equal to `src.len()`. Neither the size of the source
        // nor the size of the destination change between the above subslicing
        // operation and the invocation of `copy_unchecked`.
        unsafe {
            util::copy_unchecked(src, dst);
        }
        Ok(())
    }

    /// Writes a copy of `self` to an `io::Write`.
    ///
    /// This is a shorthand for `dst.write_all(self.as_bytes())`, and is useful
    /// for interfacing with operating system byte sinks (files, sockets, etc.).
    ///
    /// # Examples
    ///
    /// ```no_run
    /// use zerocopy::{byteorder::big_endian::U16, FromBytes, IntoBytes};
    /// use std::fs::File;
    /// # use zerocopy_derive::*;
    ///
    /// #[derive(FromBytes, IntoBytes, Immutable, KnownLayout)]
    /// #[repr(C, packed)]
    /// struct GrayscaleImage {
    ///     height: U16,
    ///     width: U16,
    ///     pixels: [U16],
    /// }
    ///
    /// let image = GrayscaleImage::ref_from_bytes(&[0, 0, 0, 0][..]).unwrap();
    /// let mut file = File::create("image.bin").unwrap();
    /// image.write_to_io(&mut file).unwrap();
    /// ```
    ///
    /// If the write fails, `write_to_io` returns `Err` and a partial write may
    /// have occurred; e.g.:
    ///
    /// ```
    /// # use zerocopy::IntoBytes;
    ///
    /// let src = u128::MAX;
    /// let mut dst = [0u8; 2];
    ///
    /// let write_result = src.write_to_io(&mut dst[..]);
    ///
    /// assert!(write_result.is_err());
    /// assert_eq!(dst, [255, 255]);
    /// ```
    #[cfg(feature = "std")]
    #[inline(always)]
    fn write_to_io<W>(&self, mut dst: W) -> io::Result<()>
    where
        Self: Immutable,
        W: io::Write,
    {
        dst.write_all(self.as_bytes())
    }

    #[deprecated(since = "0.8.0", note = "`IntoBytes::as_bytes_mut` was renamed to `as_mut_bytes`")]
    #[doc(hidden)]
    #[inline]
    fn as_bytes_mut(&mut self) -> &mut [u8]
    where
        Self: FromBytes,
    {
        self.as_mut_bytes()
    }
}

/// Analyzes whether a type is [`Unaligned`].
///
/// This derive analyzes, at compile time, whether the annotated type satisfies
/// the [safety conditions] of `Unaligned` and implements `Unaligned` if it is
/// sound to do so. This derive can be applied to structs, enums, and unions;
/// e.g.:
///
/// ```
/// # use zerocopy_derive::Unaligned;
/// #[derive(Unaligned)]
/// #[repr(C)]
/// struct MyStruct {
/// # /*
///     ...
/// # */
/// }
///
/// #[derive(Unaligned)]
/// #[repr(u8)]
/// enum MyEnum {
/// #   Variant0,
/// # /*
///     ...
/// # */
/// }
///
/// #[derive(Unaligned)]
/// #[repr(packed)]
/// union MyUnion {
/// #   variant: u8,
/// # /*
///     ...
/// # */
/// }
/// ```
///
/// # Analysis
///
/// *This section describes, roughly, the analysis performed by this derive to
/// determine whether it is sound to implement `Unaligned` for a given type.
/// Unless you are modifying the implementation of this derive, or attempting to
/// manually implement `Unaligned` for a type yourself, you don't need to read
/// this section.*
///
/// If a type has the following properties, then this derive can implement
/// `Unaligned` for that type:
///
/// - If the type is a struct or union:
///   - If `repr(align(N))` is provided, `N` must equal 1.
///   - If the type is `repr(C)` or `repr(transparent)`, all fields must be
///     [`Unaligned`].
///   - If the type is not `repr(C)` or `repr(transparent)`, it must be
///     `repr(packed)` or `repr(packed(1))`.
/// - If the type is an enum:
///   - If `repr(align(N))` is provided, `N` must equal 1.
///   - It must be a field-less enum (meaning that all variants have no fields).
///   - It must be `repr(i8)` or `repr(u8)`.
///
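/// For instance, the following field-less `repr(u8)` enum satisfies every
/// property in the list above, so the derive accepts it (a sketch; `Opcode` is
/// a hypothetical example type, not part of this crate):
///
/// ```
/// # use zerocopy_derive::Unaligned;
/// #[derive(Unaligned)]
/// #[repr(u8)]
/// enum Opcode {
///     Load,
///     Store,
/// }
/// ```
///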
/// [safety conditions]: trait@Unaligned#safety
#[cfg(any(feature = "derive", test))]
#[cfg_attr(doc_cfg, doc(cfg(feature = "derive")))]
pub use zerocopy_derive::Unaligned;

/// Types with no alignment requirement.
///
/// If `T: Unaligned`, then `align_of::<T>() == 1`.
///
/// # Implementation
///
/// **Do not implement this trait yourself!** Instead, use
/// [`#[derive(Unaligned)]`][derive]; e.g.:
///
/// ```
/// # use zerocopy_derive::Unaligned;
/// #[derive(Unaligned)]
/// #[repr(C)]
/// struct MyStruct {
/// # /*
///     ...
/// # */
/// }
///
/// #[derive(Unaligned)]
/// #[repr(u8)]
/// enum MyEnum {
/// #   Variant0,
/// # /*
///     ...
/// # */
/// }
///
/// #[derive(Unaligned)]
/// #[repr(packed)]
/// union MyUnion {
/// #   variant: u8,
/// # /*
///     ...
/// # */
/// }
/// ```
///
/// This derive performs a sophisticated, compile-time safety analysis to
/// determine whether a type is `Unaligned`.
///
/// # Safety
///
/// *This section describes what is required in order for `T: Unaligned`, and
/// what unsafe code may assume of such types. If you don't plan on implementing
/// `Unaligned` manually, and you don't plan on writing unsafe code that
/// operates on `Unaligned` types, then you don't need to read this section.*
///
/// If `T: Unaligned`, then unsafe code may assume that it is sound to produce a
/// reference to `T` at any memory location regardless of alignment. If a type
/// that violates this contract is marked as `Unaligned`, it may cause undefined
/// behavior.
///
/// `#[derive(Unaligned)]` only permits [types which satisfy these
/// requirements][derive-analysis].
///
#[cfg_attr(
    feature = "derive",
    doc = "[derive]: zerocopy_derive::Unaligned",
    doc = "[derive-analysis]: zerocopy_derive::Unaligned#analysis"
)]
#[cfg_attr(
    not(feature = "derive"),
    doc = concat!("[derive]: https://docs.rs/zerocopy/", env!("CARGO_PKG_VERSION"), "/zerocopy/derive.Unaligned.html"),
    doc = concat!("[derive-analysis]: https://docs.rs/zerocopy/", env!("CARGO_PKG_VERSION"), "/zerocopy/derive.Unaligned.html#analysis"),
)]
#[cfg_attr(
    zerocopy_diagnostic_on_unimplemented_1_78_0,
    diagnostic::on_unimplemented(note = "Consider adding `#[derive(Unaligned)]` to `{Self}`")
)]
pub unsafe trait Unaligned {
    // The `Self: Sized` bound makes it so that `Unaligned` is still object
    // safe.
    #[doc(hidden)]
    fn only_derive_is_allowed_to_implement_this_trait()
    where
        Self: Sized;
}

/// Derives an optimized implementation of [`Hash`] for types that implement
/// [`IntoBytes`] and [`Immutable`].
///
/// The standard library's derive for `Hash` generates a recursive descent
/// into the fields of the type it is applied to. Instead, the implementation
/// derived by this macro makes a single call to [`Hasher::write()`] for both
/// [`Hash::hash()`] and [`Hash::hash_slice()`], feeding the hasher the bytes
/// of the type or slice all at once.
///
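/// # Examples
///
/// A minimal sketch (`Point` is a hypothetical example type; the derive
/// requires that the type also implement [`IntoBytes`] and [`Immutable`], so
/// those are derived alongside it):
///
/// ```
/// # use zerocopy_derive::{ByteHash, Immutable, IntoBytes};
/// #[derive(ByteHash, IntoBytes, Immutable)]
/// #[repr(C)]
/// struct Point {
///     x: u32,
///     y: u32,
/// }
/// ```
///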
/// [`Hash`]: core::hash::Hash
/// [`Hash::hash()`]: core::hash::Hash::hash()
/// [`Hash::hash_slice()`]: core::hash::Hash::hash_slice()
/// [`Hasher::write()`]: core::hash::Hasher::write()
#[cfg(any(feature = "derive", test))]
#[cfg_attr(doc_cfg, doc(cfg(feature = "derive")))]
pub use zerocopy_derive::ByteHash;

/// Derives an optimized implementation of [`PartialEq`] and [`Eq`] for types
/// that implement [`IntoBytes`] and [`Immutable`].
///
/// The standard library's derive for [`PartialEq`] generates a recursive
/// descent into the fields of the type it is applied to. Instead, the
/// implementation derived by this macro performs a single slice comparison of
/// the bytes of the two values being compared.
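///
/// # Examples
///
/// A minimal sketch (`Point` is a hypothetical example type; the derive
/// requires that the type also implement [`IntoBytes`] and [`Immutable`], so
/// those are derived alongside it):
///
/// ```
/// # use zerocopy_derive::{ByteEq, Immutable, IntoBytes};
/// #[derive(ByteEq, IntoBytes, Immutable)]
/// #[repr(C)]
/// struct Point {
///     x: u32,
///     y: u32,
/// }
///
/// assert!(Point { x: 0, y: 1 } == Point { x: 0, y: 1 });
/// ```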
#[cfg(any(feature = "derive", test))]
#[cfg_attr(doc_cfg, doc(cfg(feature = "derive")))]
pub use zerocopy_derive::ByteEq;

#[cfg(feature = "alloc")]
#[cfg_attr(doc_cfg, doc(cfg(feature = "alloc")))]
#[cfg(zerocopy_panic_in_const_and_vec_try_reserve_1_57_0)]
mod alloc_support {
    use super::*;

    /// Extends a `Vec<T>` by pushing `additional` new items onto the end of the
    /// vector. The new items are initialized with zeros.
    #[cfg(zerocopy_panic_in_const_and_vec_try_reserve_1_57_0)]
    #[doc(hidden)]
    #[deprecated(since = "0.8.0", note = "moved to `FromZeros`")]
    #[inline(always)]
    pub fn extend_vec_zeroed<T: FromZeros>(
        v: &mut Vec<T>,
        additional: usize,
    ) -> Result<(), AllocError> {
        <T as FromZeros>::extend_vec_zeroed(v, additional)
    }

    /// Inserts `additional` new items into `Vec<T>` at `position`. The new
    /// items are initialized with zeros.
    ///
    /// # Panics
    ///
    /// Panics if `position > v.len()`.
    #[cfg(zerocopy_panic_in_const_and_vec_try_reserve_1_57_0)]
    #[doc(hidden)]
    #[deprecated(since = "0.8.0", note = "moved to `FromZeros`")]
    #[inline(always)]
    pub fn insert_vec_zeroed<T: FromZeros>(
        v: &mut Vec<T>,
        position: usize,
        additional: usize,
    ) -> Result<(), AllocError> {
        <T as FromZeros>::insert_vec_zeroed(v, position, additional)
    }
}

#[cfg(feature = "alloc")]
#[cfg(zerocopy_panic_in_const_and_vec_try_reserve_1_57_0)]
#[doc(hidden)]
pub use alloc_support::*;

#[cfg(test)]
#[allow(clippy::assertions_on_result_states, clippy::unreadable_literal)]
mod tests {
    use static_assertions::assert_impl_all;

    use super::*;
    use crate::util::testutil::*;

    // An unsized type.
    //
    // This is used to test the custom derives of our traits. The `[u8]` type
    // gets a hand-rolled impl, so it doesn't exercise our custom derives.
    #[derive(Debug, Eq, PartialEq, FromBytes, IntoBytes, Unaligned, Immutable)]
    #[repr(transparent)]
    struct Unsized([u8]);

    impl Unsized {
        fn from_mut_slice(slc: &mut [u8]) -> &mut Unsized {
            // SAFETY: This is *probably* sound - since the layouts of `[u8]` and
            // `Unsized` are the same, so are the layouts of `&mut [u8]` and
            // `&mut Unsized`. [1] Even if it turns out that this isn't actually
            // guaranteed by the language spec, we can just change this since
            // it's in test code.
            //
            // [1] https://github.com/rust-lang/unsafe-code-guidelines/issues/375
            unsafe { mem::transmute(slc) }
        }
    }

    #[test]
    fn test_known_layout() {
        // Test that `$ty` and `ManuallyDrop<$ty>` have the expected layout.
        // Test that `PhantomData<$ty>` has the same layout as `()` regardless
        // of `$ty`.
        macro_rules! test {
            ($ty:ty, $expect:expr) => {
                let expect = $expect;
                assert_eq!(<$ty as KnownLayout>::LAYOUT, expect);
                assert_eq!(<ManuallyDrop<$ty> as KnownLayout>::LAYOUT, expect);
                assert_eq!(<PhantomData<$ty> as KnownLayout>::LAYOUT, <() as KnownLayout>::LAYOUT);
            };
        }

        let layout = |offset, align, _trailing_slice_elem_size| DstLayout {
            align: NonZeroUsize::new(align).unwrap(),
            size_info: match _trailing_slice_elem_size {
                None => SizeInfo::Sized { size: offset },
                Some(elem_size) => SizeInfo::SliceDst(TrailingSliceLayout { offset, elem_size }),
            },
        };

        test!((), layout(0, 1, None));
        test!(u8, layout(1, 1, None));
        // Use `align_of` because `u64` alignment may be smaller than 8 on some
        // platforms.
        test!(u64, layout(8, mem::align_of::<u64>(), None));
        test!(AU64, layout(8, 8, None));

        test!(Option<&'static ()>, usize::LAYOUT);

        test!([()], layout(0, 1, Some(0)));
        test!([u8], layout(0, 1, Some(1)));
        test!(str, layout(0, 1, Some(1)));
    }

5615     #[cfg(feature = "derive")]
5616     #[test]
test_known_layout_derive()5617     fn test_known_layout_derive() {
5618         // In this and other files (`late_compile_pass.rs`,
5619         // `mid_compile_pass.rs`, and `struct.rs`), we test success and failure
5620         // modes of `derive(KnownLayout)` for the following combination of
5621         // properties:
5622         //
5623         // +------------+--------------------------------------+-----------+
5624         // |            |      trailing field properties       |           |
5625         // | `repr(C)`? | generic? | `KnownLayout`? | `Sized`? | Type Name |
5626         // |------------+----------+----------------+----------+-----------|
5627         // |          N |        N |              N |        N |      KL00 |
5628         // |          N |        N |              N |        Y |      KL01 |
5629         // |          N |        N |              Y |        N |      KL02 |
5630         // |          N |        N |              Y |        Y |      KL03 |
5631         // |          N |        Y |              N |        N |      KL04 |
5632         // |          N |        Y |              N |        Y |      KL05 |
5633         // |          N |        Y |              Y |        N |      KL06 |
5634         // |          N |        Y |              Y |        Y |      KL07 |
5635         // |          Y |        N |              N |        N |      KL08 |
5636         // |          Y |        N |              N |        Y |      KL09 |
5637         // |          Y |        N |              Y |        N |      KL10 |
5638         // |          Y |        N |              Y |        Y |      KL11 |
5639         // |          Y |        Y |              N |        N |      KL12 |
5640         // |          Y |        Y |              N |        Y |      KL13 |
5641         // |          Y |        Y |              Y |        N |      KL14 |
5642         // |          Y |        Y |              Y |        Y |      KL15 |
5643         // +------------+----------+----------------+----------+-----------+
5644 
5645         struct NotKnownLayout<T = ()> {
5646             _t: T,
5647         }
5648 
5649         #[derive(KnownLayout)]
5650         #[repr(C)]
5651         struct AlignSize<const ALIGN: usize, const SIZE: usize>
5652         where
5653             elain::Align<ALIGN>: elain::Alignment,
5654         {
5655             _align: elain::Align<ALIGN>,
5656             size: [u8; SIZE],
5657         }
5658 
5659         type AU16 = AlignSize<2, 2>;
5660         type AU32 = AlignSize<4, 4>;
5661 
5662         fn _assert_kl<T: ?Sized + KnownLayout>(_: &T) {}
5663 
5664         let sized_layout = |align, size| DstLayout {
5665             align: NonZeroUsize::new(align).unwrap(),
5666             size_info: SizeInfo::Sized { size },
5667         };
5668 
5669         let unsized_layout = |align, elem_size, offset| DstLayout {
5670             align: NonZeroUsize::new(align).unwrap(),
5671             size_info: SizeInfo::SliceDst(TrailingSliceLayout { offset, elem_size }),
5672         };
5673 
5674         // | `repr(C)`? | generic? | `KnownLayout`? | `Sized`? | Type Name |
5675         // |          N |        N |              N |        Y |      KL01 |
5676         #[allow(dead_code)]
5677         #[derive(KnownLayout)]
5678         struct KL01(NotKnownLayout<AU32>, NotKnownLayout<AU16>);
5679 
5680         let expected = DstLayout::for_type::<KL01>();
5681 
5682         assert_eq!(<KL01 as KnownLayout>::LAYOUT, expected);
5683         assert_eq!(<KL01 as KnownLayout>::LAYOUT, sized_layout(4, 8));
5684 
5685         // ...with `align(N)`:
5686         #[allow(dead_code)]
5687         #[derive(KnownLayout)]
5688         #[repr(align(64))]
5689         struct KL01Align(NotKnownLayout<AU32>, NotKnownLayout<AU16>);
5690 
5691         let expected = DstLayout::for_type::<KL01Align>();
5692 
5693         assert_eq!(<KL01Align as KnownLayout>::LAYOUT, expected);
5694         assert_eq!(<KL01Align as KnownLayout>::LAYOUT, sized_layout(64, 64));
5695 
5696         // ...with `packed`:
5697         #[allow(dead_code)]
5698         #[derive(KnownLayout)]
5699         #[repr(packed)]
5700         struct KL01Packed(NotKnownLayout<AU32>, NotKnownLayout<AU16>);
5701 
5702         let expected = DstLayout::for_type::<KL01Packed>();
5703 
5704         assert_eq!(<KL01Packed as KnownLayout>::LAYOUT, expected);
5705         assert_eq!(<KL01Packed as KnownLayout>::LAYOUT, sized_layout(1, 6));
5706 
5707         // ...with `packed(N)`:
5708         #[allow(dead_code)]
5709         #[derive(KnownLayout)]
5710         #[repr(packed(2))]
5711         struct KL01PackedN(NotKnownLayout<AU32>, NotKnownLayout<AU16>);
5712 
5713         assert_impl_all!(KL01PackedN: KnownLayout);
5714 
5715         let expected = DstLayout::for_type::<KL01PackedN>();
5716 
5717         assert_eq!(<KL01PackedN as KnownLayout>::LAYOUT, expected);
5718         assert_eq!(<KL01PackedN as KnownLayout>::LAYOUT, sized_layout(2, 6));
5719 
5720         // | `repr(C)`? | generic? | `KnownLayout`? | `Sized`? | Type Name |
5721         // |          N |        N |              Y |        Y |      KL03 |
5722         #[allow(dead_code)]
5723         #[derive(KnownLayout)]
5724         struct KL03(NotKnownLayout, u8);
5725 
5726         let expected = DstLayout::for_type::<KL03>();
5727 
5728         assert_eq!(<KL03 as KnownLayout>::LAYOUT, expected);
5729         assert_eq!(<KL03 as KnownLayout>::LAYOUT, sized_layout(1, 1));
5730 
5731         // ... with `align(N)`
5732         #[allow(dead_code)]
5733         #[derive(KnownLayout)]
5734         #[repr(align(64))]
5735         struct KL03Align(NotKnownLayout<AU32>, u8);
5736 
5737         let expected = DstLayout::for_type::<KL03Align>();
5738 
5739         assert_eq!(<KL03Align as KnownLayout>::LAYOUT, expected);
5740         assert_eq!(<KL03Align as KnownLayout>::LAYOUT, sized_layout(64, 64));
5741 
5742         // ... with `packed`:
5743         #[allow(dead_code)]
5744         #[derive(KnownLayout)]
5745         #[repr(packed)]
5746         struct KL03Packed(NotKnownLayout<AU32>, u8);
5747 
5748         let expected = DstLayout::for_type::<KL03Packed>();
5749 
5750         assert_eq!(<KL03Packed as KnownLayout>::LAYOUT, expected);
5751         assert_eq!(<KL03Packed as KnownLayout>::LAYOUT, sized_layout(1, 5));
5752 
5753         // ... with `packed(N)`
5754         #[allow(dead_code)]
5755         #[derive(KnownLayout)]
5756         #[repr(packed(2))]
5757         struct KL03PackedN(NotKnownLayout<AU32>, u8);
5758 
5759         assert_impl_all!(KL03PackedN: KnownLayout);
5760 
5761         let expected = DstLayout::for_type::<KL03PackedN>();
5762 
5763         assert_eq!(<KL03PackedN as KnownLayout>::LAYOUT, expected);
5764         assert_eq!(<KL03PackedN as KnownLayout>::LAYOUT, sized_layout(2, 6));
5765 
5766         // | `repr(C)`? | generic? | `KnownLayout`? | `Sized`? | Type Name |
5767         // |          N |        Y |              N |        Y |      KL05 |
5768         #[allow(dead_code)]
5769         #[derive(KnownLayout)]
5770         struct KL05<T>(u8, T);
5771 
5772         fn _test_kl05<T>(t: T) -> impl KnownLayout {
5773             KL05(0u8, t)
5774         }
5775 
5776         // | `repr(C)`? | generic? | `KnownLayout`? | `Sized`? | Type Name |
5777         // |          N |        Y |              Y |        Y |      KL07 |
5778         #[allow(dead_code)]
5779         #[derive(KnownLayout)]
5780         struct KL07<T: KnownLayout>(u8, T);
5781 
5782         fn _test_kl07<T: KnownLayout>(t: T) -> impl KnownLayout {
5783             let _ = KL07(0u8, t);
5784         }
5785 
5786         // | `repr(C)`? | generic? | `KnownLayout`? | `Sized`? | Type Name |
5787         // |          Y |        N |              Y |        N |      KL10 |
5788         #[allow(dead_code)]
5789         #[derive(KnownLayout)]
5790         #[repr(C)]
5791         struct KL10(NotKnownLayout<AU32>, [u8]);
5792 
5793         let expected = DstLayout::new_zst(None)
5794             .extend(DstLayout::for_type::<NotKnownLayout<AU32>>(), None)
5795             .extend(<[u8] as KnownLayout>::LAYOUT, None)
5796             .pad_to_align();
5797 
5798         assert_eq!(<KL10 as KnownLayout>::LAYOUT, expected);
5799         assert_eq!(<KL10 as KnownLayout>::LAYOUT, unsized_layout(4, 1, 4));
5800 
5801         // ...with `align(N)`:
5802         #[allow(dead_code)]
5803         #[derive(KnownLayout)]
5804         #[repr(C, align(64))]
5805         struct KL10Align(NotKnownLayout<AU32>, [u8]);
5806 
5807         let repr_align = NonZeroUsize::new(64);
5808 
5809         let expected = DstLayout::new_zst(repr_align)
5810             .extend(DstLayout::for_type::<NotKnownLayout<AU32>>(), None)
5811             .extend(<[u8] as KnownLayout>::LAYOUT, None)
5812             .pad_to_align();
5813 
5814         assert_eq!(<KL10Align as KnownLayout>::LAYOUT, expected);
5815         assert_eq!(<KL10Align as KnownLayout>::LAYOUT, unsized_layout(64, 1, 4));
5816 
5817         // ...with `packed`:
5818         #[allow(dead_code)]
5819         #[derive(KnownLayout)]
5820         #[repr(C, packed)]
5821         struct KL10Packed(NotKnownLayout<AU32>, [u8]);
5822 
5823         let repr_packed = NonZeroUsize::new(1);
5824 
5825         let expected = DstLayout::new_zst(None)
5826             .extend(DstLayout::for_type::<NotKnownLayout<AU32>>(), repr_packed)
5827             .extend(<[u8] as KnownLayout>::LAYOUT, repr_packed)
5828             .pad_to_align();
5829 
5830         assert_eq!(<KL10Packed as KnownLayout>::LAYOUT, expected);
5831         assert_eq!(<KL10Packed as KnownLayout>::LAYOUT, unsized_layout(1, 1, 4));
5832 
5833         // ...with `packed(N)`:
5834         #[allow(dead_code)]
5835         #[derive(KnownLayout)]
5836         #[repr(C, packed(2))]
5837         struct KL10PackedN(NotKnownLayout<AU32>, [u8]);
5838 
5839         let repr_packed = NonZeroUsize::new(2);
5840 
5841         let expected = DstLayout::new_zst(None)
5842             .extend(DstLayout::for_type::<NotKnownLayout<AU32>>(), repr_packed)
5843             .extend(<[u8] as KnownLayout>::LAYOUT, repr_packed)
5844             .pad_to_align();
5845 
5846         assert_eq!(<KL10PackedN as KnownLayout>::LAYOUT, expected);
5847         assert_eq!(<KL10PackedN as KnownLayout>::LAYOUT, unsized_layout(2, 1, 4));
5848 
5849         // | `repr(C)`? | generic? | `KnownLayout`? | `Sized`? | Type Name |
5850         // |          Y |        N |              Y |        Y |      KL11 |
5851         #[allow(dead_code)]
5852         #[derive(KnownLayout)]
5853         #[repr(C)]
5854         struct KL11(NotKnownLayout<AU64>, u8);
5855 
5856         let expected = DstLayout::new_zst(None)
5857             .extend(DstLayout::for_type::<NotKnownLayout<AU64>>(), None)
5858             .extend(<u8 as KnownLayout>::LAYOUT, None)
5859             .pad_to_align();
5860 
5861         assert_eq!(<KL11 as KnownLayout>::LAYOUT, expected);
5862         assert_eq!(<KL11 as KnownLayout>::LAYOUT, sized_layout(8, 16));
5863 
5864         // ...with `align(N)`:
5865         #[allow(dead_code)]
5866         #[derive(KnownLayout)]
5867         #[repr(C, align(64))]
5868         struct KL11Align(NotKnownLayout<AU64>, u8);
5869 
5870         let repr_align = NonZeroUsize::new(64);
5871 
5872         let expected = DstLayout::new_zst(repr_align)
5873             .extend(DstLayout::for_type::<NotKnownLayout<AU64>>(), None)
5874             .extend(<u8 as KnownLayout>::LAYOUT, None)
5875             .pad_to_align();
5876 
5877         assert_eq!(<KL11Align as KnownLayout>::LAYOUT, expected);
5878         assert_eq!(<KL11Align as KnownLayout>::LAYOUT, sized_layout(64, 64));
5879 
5880         // ...with `packed`:
5881         #[allow(dead_code)]
5882         #[derive(KnownLayout)]
5883         #[repr(C, packed)]
5884         struct KL11Packed(NotKnownLayout<AU64>, u8);
5885 
5886         let repr_packed = NonZeroUsize::new(1);
5887 
5888         let expected = DstLayout::new_zst(None)
5889             .extend(DstLayout::for_type::<NotKnownLayout<AU64>>(), repr_packed)
5890             .extend(<u8 as KnownLayout>::LAYOUT, repr_packed)
5891             .pad_to_align();
5892 
5893         assert_eq!(<KL11Packed as KnownLayout>::LAYOUT, expected);
5894         assert_eq!(<KL11Packed as KnownLayout>::LAYOUT, sized_layout(1, 9));
5895 
5896         // ...with `packed(N)`:
5897         #[allow(dead_code)]
5898         #[derive(KnownLayout)]
5899         #[repr(C, packed(2))]
5900         struct KL11PackedN(NotKnownLayout<AU64>, u8);
5901 
5902         let repr_packed = NonZeroUsize::new(2);
5903 
5904         let expected = DstLayout::new_zst(None)
5905             .extend(DstLayout::for_type::<NotKnownLayout<AU64>>(), repr_packed)
5906             .extend(<u8 as KnownLayout>::LAYOUT, repr_packed)
5907             .pad_to_align();
5908 
5909         assert_eq!(<KL11PackedN as KnownLayout>::LAYOUT, expected);
5910         assert_eq!(<KL11PackedN as KnownLayout>::LAYOUT, sized_layout(2, 10));
5911 
5912         // | `repr(C)`? | generic? | `KnownLayout`? | `Sized`? | Type Name |
5913         // |          Y |        Y |              Y |        N |      KL14 |
5914         #[allow(dead_code)]
5915         #[derive(KnownLayout)]
5916         #[repr(C)]
5917         struct KL14<T: ?Sized + KnownLayout>(u8, T);
5918 
5919         fn _test_kl14<T: ?Sized + KnownLayout>(kl: &KL14<T>) {
5920             _assert_kl(kl)
5921         }
5922 
5923         // | `repr(C)`? | generic? | `KnownLayout`? | `Sized`? | Type Name |
5924         // |          Y |        Y |              Y |        Y |      KL15 |
5925         #[allow(dead_code)]
5926         #[derive(KnownLayout)]
5927         #[repr(C)]
5928         struct KL15<T: KnownLayout>(u8, T);
5929 
5930         fn _test_kl15<T: KnownLayout>(t: T) -> impl KnownLayout {
5931             let _ = KL15(0u8, t);
5932         }
5933 
        // Test a variety of combinations of field types:
        //  - ()
        //  - u8
        //  - AU16
        //  - [()]
        //  - [u8]
        //  - [AU16]

        #[allow(clippy::upper_case_acronyms, dead_code)]
        #[derive(KnownLayout)]
        #[repr(C)]
        struct KLTU<T, U: ?Sized>(T, U);

        assert_eq!(<KLTU<(), ()> as KnownLayout>::LAYOUT, sized_layout(1, 0));

        assert_eq!(<KLTU<(), u8> as KnownLayout>::LAYOUT, sized_layout(1, 1));

        assert_eq!(<KLTU<(), AU16> as KnownLayout>::LAYOUT, sized_layout(2, 2));

        assert_eq!(<KLTU<(), [()]> as KnownLayout>::LAYOUT, unsized_layout(1, 0, 0));

        assert_eq!(<KLTU<(), [u8]> as KnownLayout>::LAYOUT, unsized_layout(1, 1, 0));

        assert_eq!(<KLTU<(), [AU16]> as KnownLayout>::LAYOUT, unsized_layout(2, 2, 0));

        assert_eq!(<KLTU<u8, ()> as KnownLayout>::LAYOUT, sized_layout(1, 1));

        assert_eq!(<KLTU<u8, u8> as KnownLayout>::LAYOUT, sized_layout(1, 2));

        assert_eq!(<KLTU<u8, AU16> as KnownLayout>::LAYOUT, sized_layout(2, 4));

        assert_eq!(<KLTU<u8, [()]> as KnownLayout>::LAYOUT, unsized_layout(1, 0, 1));

        assert_eq!(<KLTU<u8, [u8]> as KnownLayout>::LAYOUT, unsized_layout(1, 1, 1));

        assert_eq!(<KLTU<u8, [AU16]> as KnownLayout>::LAYOUT, unsized_layout(2, 2, 2));

        assert_eq!(<KLTU<AU16, ()> as KnownLayout>::LAYOUT, sized_layout(2, 2));

        assert_eq!(<KLTU<AU16, u8> as KnownLayout>::LAYOUT, sized_layout(2, 4));

        assert_eq!(<KLTU<AU16, AU16> as KnownLayout>::LAYOUT, sized_layout(2, 4));

        assert_eq!(<KLTU<AU16, [()]> as KnownLayout>::LAYOUT, unsized_layout(2, 0, 2));

        assert_eq!(<KLTU<AU16, [u8]> as KnownLayout>::LAYOUT, unsized_layout(2, 1, 2));

        assert_eq!(<KLTU<AU16, [AU16]> as KnownLayout>::LAYOUT, unsized_layout(2, 2, 2));

        // Test a variety of field counts.

        #[derive(KnownLayout)]
        #[repr(C)]
        struct KLF0;

        assert_eq!(<KLF0 as KnownLayout>::LAYOUT, sized_layout(1, 0));

        #[derive(KnownLayout)]
        #[repr(C)]
        struct KLF1([u8]);

        assert_eq!(<KLF1 as KnownLayout>::LAYOUT, unsized_layout(1, 1, 0));

        #[derive(KnownLayout)]
        #[repr(C)]
        struct KLF2(NotKnownLayout<u8>, [u8]);

        assert_eq!(<KLF2 as KnownLayout>::LAYOUT, unsized_layout(1, 1, 1));

        #[derive(KnownLayout)]
        #[repr(C)]
        struct KLF3(NotKnownLayout<u8>, NotKnownLayout<AU16>, [u8]);

        assert_eq!(<KLF3 as KnownLayout>::LAYOUT, unsized_layout(2, 1, 4));

        #[derive(KnownLayout)]
        #[repr(C)]
        struct KLF4(NotKnownLayout<u8>, NotKnownLayout<AU16>, NotKnownLayout<AU32>, [u8]);

        assert_eq!(<KLF4 as KnownLayout>::LAYOUT, unsized_layout(4, 1, 8));
    }

    #[test]
    fn test_object_safety() {
        fn _takes_no_cell(_: &dyn Immutable) {}
        fn _takes_unaligned(_: &dyn Unaligned) {}
    }

    #[test]
    fn test_from_zeros_only() {
        // Test types that implement `FromZeros` but not `FromBytes`.

        assert!(!bool::new_zeroed());
        assert_eq!(char::new_zeroed(), '\0');

        #[cfg(feature = "alloc")]
        {
            assert_eq!(bool::new_box_zeroed(), Ok(Box::new(false)));
            assert_eq!(char::new_box_zeroed(), Ok(Box::new('\0')));

            assert_eq!(
                <[bool]>::new_box_zeroed_with_elems(3).unwrap().as_ref(),
                [false, false, false]
            );
            assert_eq!(
                <[char]>::new_box_zeroed_with_elems(3).unwrap().as_ref(),
                ['\0', '\0', '\0']
            );

            assert_eq!(bool::new_vec_zeroed(3).unwrap().as_ref(), [false, false, false]);
            assert_eq!(char::new_vec_zeroed(3).unwrap().as_ref(), ['\0', '\0', '\0']);
        }

        let mut string = "hello".to_string();
        let s: &mut str = string.as_mut();
        assert_eq!(s, "hello");
        s.zero();
        assert_eq!(s, "\0\0\0\0\0");
    }

    #[test]
    fn test_zst_count_preserved() {
        // Test that, when an explicit count is provided for a type with a
        // ZST trailing slice element, that count is preserved. This is
        // important since, for such types, all element counts result in objects
        // of the same size, and so the correct behavior is ambiguous. However,
        // preserving the count as requested by the user is the behavior that we
        // document publicly.

        // FromZeros methods
        #[cfg(feature = "alloc")]
        assert_eq!(<[()]>::new_box_zeroed_with_elems(3).unwrap().len(), 3);
        #[cfg(feature = "alloc")]
        assert_eq!(<()>::new_vec_zeroed(3).unwrap().len(), 3);

        // FromBytes methods
        assert_eq!(<[()]>::ref_from_bytes_with_elems(&[][..], 3).unwrap().len(), 3);
        assert_eq!(<[()]>::ref_from_prefix_with_elems(&[][..], 3).unwrap().0.len(), 3);
        assert_eq!(<[()]>::ref_from_suffix_with_elems(&[][..], 3).unwrap().1.len(), 3);
        assert_eq!(<[()]>::mut_from_bytes_with_elems(&mut [][..], 3).unwrap().len(), 3);
        assert_eq!(<[()]>::mut_from_prefix_with_elems(&mut [][..], 3).unwrap().0.len(), 3);
        assert_eq!(<[()]>::mut_from_suffix_with_elems(&mut [][..], 3).unwrap().1.len(), 3);
    }

    #[test]
    fn test_read_write() {
        const VAL: u64 = 0x12345678;
        #[cfg(target_endian = "big")]
        const VAL_BYTES: [u8; 8] = VAL.to_be_bytes();
        #[cfg(target_endian = "little")]
        const VAL_BYTES: [u8; 8] = VAL.to_le_bytes();
        const ZEROS: [u8; 8] = [0u8; 8];

        // Test `FromBytes::{read_from_bytes, read_from_prefix, read_from_suffix}`.

        assert_eq!(u64::read_from_bytes(&VAL_BYTES[..]), Ok(VAL));
        // The first 8 bytes are from `VAL_BYTES` and the second 8 bytes are all
        // zeros.
        let bytes_with_prefix: [u8; 16] = transmute!([VAL_BYTES, [0; 8]]);
        assert_eq!(u64::read_from_prefix(&bytes_with_prefix[..]), Ok((VAL, &ZEROS[..])));
        assert_eq!(u64::read_from_suffix(&bytes_with_prefix[..]), Ok((&VAL_BYTES[..], 0)));
        // The first 8 bytes are all zeros and the second 8 bytes are from
        // `VAL_BYTES`.
        let bytes_with_suffix: [u8; 16] = transmute!([[0; 8], VAL_BYTES]);
        assert_eq!(u64::read_from_prefix(&bytes_with_suffix[..]), Ok((0, &VAL_BYTES[..])));
        assert_eq!(u64::read_from_suffix(&bytes_with_suffix[..]), Ok((&ZEROS[..], VAL)));

        // Test `IntoBytes::{write_to, write_to_prefix, write_to_suffix}`.

        let mut bytes = [0u8; 8];
        assert_eq!(VAL.write_to(&mut bytes[..]), Ok(()));
        assert_eq!(bytes, VAL_BYTES);
        let mut bytes = [0u8; 16];
        assert_eq!(VAL.write_to_prefix(&mut bytes[..]), Ok(()));
        let want: [u8; 16] = transmute!([VAL_BYTES, [0; 8]]);
        assert_eq!(bytes, want);
        let mut bytes = [0u8; 16];
        assert_eq!(VAL.write_to_suffix(&mut bytes[..]), Ok(()));
        let want: [u8; 16] = transmute!([[0; 8], VAL_BYTES]);
        assert_eq!(bytes, want);
    }

    #[test]
    #[cfg(feature = "std")]
    fn test_read_io_with_padding_soundness() {
        // This test is designed to exhibit potential UB in
        // `FromBytes::read_from_io` (see #2319, #2320).

        // On most platforms (where `align_of::<u16>() == 2`), `WithPadding`
        // will have inter-field padding between `x` and `y`.
        #[derive(FromBytes)]
        #[repr(C)]
        struct WithPadding {
            x: u8,
            y: u16,
        }
        struct ReadsInRead;
        impl std::io::Read for ReadsInRead {
            fn read(&mut self, buf: &mut [u8]) -> std::io::Result<usize> {
                // This body branches on every byte of `buf`, ensuring that it
                // exhibits UB if any byte of `buf` is uninitialized.
                if buf.iter().all(|&x| x == 0) {
                    Ok(buf.len())
                } else {
                    buf.iter_mut().for_each(|x| *x = 0);
                    Ok(buf.len())
                }
            }
        }
        assert!(matches!(WithPadding::read_from_io(ReadsInRead), Ok(WithPadding { x: 0, y: 0 })));
    }

    #[test]
    #[cfg(feature = "std")]
    fn test_read_write_io() {
        let mut long_buffer = [0, 0, 0, 0];
        assert!(matches!(u16::MAX.write_to_io(&mut long_buffer[..]), Ok(())));
        assert_eq!(long_buffer, [255, 255, 0, 0]);
        assert!(matches!(u16::read_from_io(&long_buffer[..]), Ok(u16::MAX)));

        let mut short_buffer = [0, 0];
        assert!(u32::MAX.write_to_io(&mut short_buffer[..]).is_err());
        assert_eq!(short_buffer, [255, 255]);
        assert!(u32::read_from_io(&short_buffer[..]).is_err());
    }

    #[test]
    fn test_try_from_bytes_try_read_from() {
        assert_eq!(<bool as TryFromBytes>::try_read_from_bytes(&[0]), Ok(false));
        assert_eq!(<bool as TryFromBytes>::try_read_from_bytes(&[1]), Ok(true));

        assert_eq!(<bool as TryFromBytes>::try_read_from_prefix(&[0, 2]), Ok((false, &[2][..])));
        assert_eq!(<bool as TryFromBytes>::try_read_from_prefix(&[1, 2]), Ok((true, &[2][..])));

        assert_eq!(<bool as TryFromBytes>::try_read_from_suffix(&[2, 0]), Ok((&[2][..], false)));
        assert_eq!(<bool as TryFromBytes>::try_read_from_suffix(&[2, 1]), Ok((&[2][..], true)));

        // If we don't pass enough bytes, it fails.
        assert!(matches!(
            <u8 as TryFromBytes>::try_read_from_bytes(&[]),
            Err(TryReadError::Size(_))
        ));
        assert!(matches!(
            <u8 as TryFromBytes>::try_read_from_prefix(&[]),
            Err(TryReadError::Size(_))
        ));
        assert!(matches!(
            <u8 as TryFromBytes>::try_read_from_suffix(&[]),
            Err(TryReadError::Size(_))
        ));

        // If we pass too many bytes, it fails.
        assert!(matches!(
            <u8 as TryFromBytes>::try_read_from_bytes(&[0, 0]),
            Err(TryReadError::Size(_))
        ));

        // If we pass an invalid value, it fails.
        assert!(matches!(
            <bool as TryFromBytes>::try_read_from_bytes(&[2]),
            Err(TryReadError::Validity(_))
        ));
        assert!(matches!(
            <bool as TryFromBytes>::try_read_from_prefix(&[2, 0]),
            Err(TryReadError::Validity(_))
        ));
        assert!(matches!(
            <bool as TryFromBytes>::try_read_from_suffix(&[0, 2]),
            Err(TryReadError::Validity(_))
        ));

        // Reading from a misaligned buffer should still succeed. Since `AU64`'s
        // alignment is 8, and since we read from two adjacent addresses one
        // byte apart, it is guaranteed that at least one of them (though
        // possibly both) will be misaligned.
        let bytes: [u8; 9] = [0, 0, 0, 0, 0, 0, 0, 0, 0];
        assert_eq!(<AU64 as TryFromBytes>::try_read_from_bytes(&bytes[..8]), Ok(AU64(0)));
        assert_eq!(<AU64 as TryFromBytes>::try_read_from_bytes(&bytes[1..9]), Ok(AU64(0)));

        assert_eq!(
            <AU64 as TryFromBytes>::try_read_from_prefix(&bytes[..8]),
            Ok((AU64(0), &[][..]))
        );
        assert_eq!(
            <AU64 as TryFromBytes>::try_read_from_prefix(&bytes[1..9]),
            Ok((AU64(0), &[][..]))
        );

        assert_eq!(
            <AU64 as TryFromBytes>::try_read_from_suffix(&bytes[..8]),
            Ok((&[][..], AU64(0)))
        );
        assert_eq!(
            <AU64 as TryFromBytes>::try_read_from_suffix(&bytes[1..9]),
            Ok((&[][..], AU64(0)))
        );
    }

    #[test]
    fn test_ref_from_mut_from() {
        // Test `FromBytes::{ref_from, mut_from}{,_prefix,_suffix}` success cases.
        // Exhaustive coverage for these methods is covered by the `Ref` tests above,
        // which these helper methods defer to.

        let mut buf =
            Align::<[u8; 16], AU64>::new([0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15]);

        assert_eq!(
            AU64::ref_from_bytes(&buf.t[8..]).unwrap().0.to_ne_bytes(),
            [8, 9, 10, 11, 12, 13, 14, 15]
        );
        let suffix = AU64::mut_from_bytes(&mut buf.t[8..]).unwrap();
        suffix.0 = 0x0101010101010101;
        // The `[u8; 9]` is a non-half size of the full buffer, which would catch
        // `from_prefix` having the same implementation as `from_suffix` (issues #506, #511).
        assert_eq!(
            <[u8; 9]>::ref_from_suffix(&buf.t[..]).unwrap(),
            (&[0, 1, 2, 3, 4, 5, 6][..], &[7u8, 1, 1, 1, 1, 1, 1, 1, 1])
        );
        let (prefix, suffix) = AU64::mut_from_suffix(&mut buf.t[1..]).unwrap();
        assert_eq!(prefix, &mut [1u8, 2, 3, 4, 5, 6, 7][..]);
        suffix.0 = 0x0202020202020202;
        let (prefix, suffix) = <[u8; 10]>::mut_from_suffix(&mut buf.t[..]).unwrap();
        assert_eq!(prefix, &mut [0u8, 1, 2, 3, 4, 5][..]);
        suffix[0] = 42;
        assert_eq!(
            <[u8; 9]>::ref_from_prefix(&buf.t[..]).unwrap(),
            (&[0u8, 1, 2, 3, 4, 5, 42, 7, 2], &[2u8, 2, 2, 2, 2, 2, 2][..])
        );
        <[u8; 2]>::mut_from_prefix(&mut buf.t[..]).unwrap().0[1] = 30;
        assert_eq!(buf.t, [0, 30, 2, 3, 4, 5, 42, 7, 2, 2, 2, 2, 2, 2, 2, 2]);
    }

    #[test]
    fn test_ref_from_mut_from_error() {
        // Test `FromBytes::{ref_from, mut_from}{,_prefix,_suffix}` error cases.

        // Fail because the buffer is too large.
        let mut buf = Align::<[u8; 16], AU64>::default();
        // `buf.t` should be aligned to 8, so only the length check should fail.
        assert!(AU64::ref_from_bytes(&buf.t[..]).is_err());
        assert!(AU64::mut_from_bytes(&mut buf.t[..]).is_err());
        assert!(<[u8; 8]>::ref_from_bytes(&buf.t[..]).is_err());
        assert!(<[u8; 8]>::mut_from_bytes(&mut buf.t[..]).is_err());

        // Fail because the buffer is too small.
        let mut buf = Align::<[u8; 4], AU64>::default();
        assert!(AU64::ref_from_bytes(&buf.t[..]).is_err());
        assert!(AU64::mut_from_bytes(&mut buf.t[..]).is_err());
        assert!(<[u8; 8]>::ref_from_bytes(&buf.t[..]).is_err());
        assert!(<[u8; 8]>::mut_from_bytes(&mut buf.t[..]).is_err());
        assert!(AU64::ref_from_prefix(&buf.t[..]).is_err());
        assert!(AU64::mut_from_prefix(&mut buf.t[..]).is_err());
        assert!(AU64::ref_from_suffix(&buf.t[..]).is_err());
        assert!(AU64::mut_from_suffix(&mut buf.t[..]).is_err());
        assert!(<[u8; 8]>::ref_from_prefix(&buf.t[..]).is_err());
        assert!(<[u8; 8]>::mut_from_prefix(&mut buf.t[..]).is_err());
        assert!(<[u8; 8]>::ref_from_suffix(&buf.t[..]).is_err());
        assert!(<[u8; 8]>::mut_from_suffix(&mut buf.t[..]).is_err());

        // Fail because the alignment is insufficient.
        let mut buf = Align::<[u8; 13], AU64>::default();
        assert!(AU64::ref_from_bytes(&buf.t[1..]).is_err());
        assert!(AU64::mut_from_bytes(&mut buf.t[1..]).is_err());
        assert!(AU64::ref_from_prefix(&buf.t[1..]).is_err());
        assert!(AU64::mut_from_prefix(&mut buf.t[1..]).is_err());
        assert!(AU64::ref_from_suffix(&buf.t[..]).is_err());
        assert!(AU64::mut_from_suffix(&mut buf.t[..]).is_err());
    }

    #[test]
    fn test_to_methods() {
        /// Run a series of tests by calling `IntoBytes` methods on `t`.
        ///
        /// `bytes` is the expected byte sequence returned from `t.as_bytes()`
        /// before `t` has been modified. `post_mutation` is the expected
        /// sequence returned from `t.as_bytes()` after `t.as_mut_bytes()[0]`
        /// has had its bits flipped (by applying `^= 0xFF`).
        ///
        /// `N` is the size of `t` in bytes.
        fn test<T: FromBytes + IntoBytes + Immutable + Debug + Eq + ?Sized, const N: usize>(
            t: &mut T,
            bytes: &[u8],
            post_mutation: &T,
        ) {
            // Test that we can access the underlying bytes, and that we get the
            // right bytes and the right number of bytes.
            assert_eq!(t.as_bytes(), bytes);

            // Test that changes to the underlying byte slices are reflected in
            // the original object.
            t.as_mut_bytes()[0] ^= 0xFF;
            assert_eq!(t, post_mutation);
            t.as_mut_bytes()[0] ^= 0xFF;

            // `write_to` rejects slices that are too small or too large.
            assert!(t.write_to(&mut vec![0; N - 1][..]).is_err());
            assert!(t.write_to(&mut vec![0; N + 1][..]).is_err());

            // `write_to` works as expected.
            let mut bytes = [0; N];
            assert_eq!(t.write_to(&mut bytes[..]), Ok(()));
            assert_eq!(bytes, t.as_bytes());

            // `write_to_prefix` rejects slices that are too small.
            assert!(t.write_to_prefix(&mut vec![0; N - 1][..]).is_err());

            // `write_to_prefix` works with exact-sized slices.
            let mut bytes = [0; N];
            assert_eq!(t.write_to_prefix(&mut bytes[..]), Ok(()));
            assert_eq!(bytes, t.as_bytes());

            // `write_to_prefix` works with too-large slices, and any bytes past
            // the prefix aren't modified.
            let mut too_many_bytes = vec![0; N + 1];
            too_many_bytes[N] = 123;
            assert_eq!(t.write_to_prefix(&mut too_many_bytes[..]), Ok(()));
            assert_eq!(&too_many_bytes[..N], t.as_bytes());
            assert_eq!(too_many_bytes[N], 123);

            // `write_to_suffix` rejects slices that are too small.
            assert!(t.write_to_suffix(&mut vec![0; N - 1][..]).is_err());

            // `write_to_suffix` works with exact-sized slices.
            let mut bytes = [0; N];
            assert_eq!(t.write_to_suffix(&mut bytes[..]), Ok(()));
            assert_eq!(bytes, t.as_bytes());

            // `write_to_suffix` works with too-large slices, and any bytes
            // before the suffix aren't modified.
            let mut too_many_bytes = vec![0; N + 1];
            too_many_bytes[0] = 123;
            assert_eq!(t.write_to_suffix(&mut too_many_bytes[..]), Ok(()));
            assert_eq!(&too_many_bytes[1..], t.as_bytes());
            assert_eq!(too_many_bytes[0], 123);
        }

        #[derive(Debug, Eq, PartialEq, FromBytes, IntoBytes, Immutable)]
        #[repr(C)]
        struct Foo {
            a: u32,
            b: Wrapping<u32>,
            c: Option<NonZeroU32>,
        }

        let expected_bytes: Vec<u8> = if cfg!(target_endian = "little") {
            vec![1, 0, 0, 0, 2, 0, 0, 0, 0, 0, 0, 0]
        } else {
            vec![0, 0, 0, 1, 0, 0, 0, 2, 0, 0, 0, 0]
        };
        let post_mutation_expected_a =
            if cfg!(target_endian = "little") { 0x00_00_00_FE } else { 0xFF_00_00_01 };
        test::<_, 12>(
            &mut Foo { a: 1, b: Wrapping(2), c: None },
            expected_bytes.as_bytes(),
            &Foo { a: post_mutation_expected_a, b: Wrapping(2), c: None },
        );
        test::<_, 3>(
            Unsized::from_mut_slice(&mut [1, 2, 3]),
            &[1, 2, 3],
            Unsized::from_mut_slice(&mut [0xFE, 2, 3]),
        );
    }

    #[test]
    fn test_array() {
        #[derive(FromBytes, IntoBytes, Immutable)]
        #[repr(C)]
        struct Foo {
            a: [u16; 33],
        }

        let foo = Foo { a: [0xFFFF; 33] };
        let expected = [0xFFu8; 66];
        assert_eq!(foo.as_bytes(), &expected[..]);
    }

    #[test]
    fn test_new_zeroed() {
        assert!(!bool::new_zeroed());
        assert_eq!(u64::new_zeroed(), 0);
        // This test exists in order to exercise unsafe code, especially when
        // running under Miri.
        #[allow(clippy::unit_cmp)]
        {
            assert_eq!(<()>::new_zeroed(), ());
        }
    }

    #[test]
    fn test_transparent_packed_generic_struct() {
        #[derive(IntoBytes, FromBytes, Unaligned)]
        #[repr(transparent)]
        #[allow(dead_code)] // We never construct this type
        struct Foo<T> {
            _t: T,
            _phantom: PhantomData<()>,
        }

        assert_impl_all!(Foo<u32>: FromZeros, FromBytes, IntoBytes);
        assert_impl_all!(Foo<u8>: Unaligned);

        #[derive(IntoBytes, FromBytes, Unaligned)]
        #[repr(C, packed)]
        #[allow(dead_code)] // We never construct this type
        struct Bar<T, U> {
            _t: T,
            _u: U,
        }

        assert_impl_all!(Bar<u8, AU64>: FromZeros, FromBytes, IntoBytes, Unaligned);
    }

    #[cfg(feature = "alloc")]
    mod alloc {
        use super::*;

        #[cfg(zerocopy_panic_in_const_and_vec_try_reserve_1_57_0)]
        #[test]
        fn test_extend_vec_zeroed() {
            // Test extending when there is an existing allocation.
            let mut v = vec![100u16, 200, 300];
            FromZeros::extend_vec_zeroed(&mut v, 3).unwrap();
            assert_eq!(v.len(), 6);
            assert_eq!(&*v, &[100, 200, 300, 0, 0, 0]);
            drop(v);

            // Test extending when there is no existing allocation.
            let mut v: Vec<u64> = Vec::new();
            FromZeros::extend_vec_zeroed(&mut v, 3).unwrap();
            assert_eq!(v.len(), 3);
            assert_eq!(&*v, &[0, 0, 0]);
            drop(v);
        }

        #[cfg(zerocopy_panic_in_const_and_vec_try_reserve_1_57_0)]
        #[test]
        fn test_extend_vec_zeroed_zst() {
            // Test extending when there is an existing (fake) allocation.
            let mut v = vec![(), (), ()];
            <()>::extend_vec_zeroed(&mut v, 3).unwrap();
            assert_eq!(v.len(), 6);
            assert_eq!(&*v, &[(), (), (), (), (), ()]);
            drop(v);

            // Test extending when there is no existing (fake) allocation.
            let mut v: Vec<()> = Vec::new();
            <()>::extend_vec_zeroed(&mut v, 3).unwrap();
            assert_eq!(&*v, &[(), (), ()]);
            drop(v);
        }

        #[cfg(zerocopy_panic_in_const_and_vec_try_reserve_1_57_0)]
        #[test]
        fn test_insert_vec_zeroed() {
            // Insert at start (no existing allocation).
            let mut v: Vec<u64> = Vec::new();
            u64::insert_vec_zeroed(&mut v, 0, 2).unwrap();
            assert_eq!(v.len(), 2);
            assert_eq!(&*v, &[0, 0]);
            drop(v);

            // Insert at start.
            let mut v = vec![100u64, 200, 300];
            u64::insert_vec_zeroed(&mut v, 0, 2).unwrap();
            assert_eq!(v.len(), 5);
            assert_eq!(&*v, &[0, 0, 100, 200, 300]);
            drop(v);

            // Insert at middle.
            let mut v = vec![100u64, 200, 300];
            u64::insert_vec_zeroed(&mut v, 1, 1).unwrap();
            assert_eq!(v.len(), 4);
            assert_eq!(&*v, &[100, 0, 200, 300]);
            drop(v);

            // Insert at end.
            let mut v = vec![100u64, 200, 300];
            u64::insert_vec_zeroed(&mut v, 3, 1).unwrap();
            assert_eq!(v.len(), 4);
            assert_eq!(&*v, &[100, 200, 300, 0]);
            drop(v);
        }

        #[cfg(zerocopy_panic_in_const_and_vec_try_reserve_1_57_0)]
        #[test]
        fn test_insert_vec_zeroed_zst() {
            // Insert at start (no existing fake allocation).
            let mut v: Vec<()> = Vec::new();
            <()>::insert_vec_zeroed(&mut v, 0, 2).unwrap();
            assert_eq!(v.len(), 2);
            assert_eq!(&*v, &[(), ()]);
            drop(v);

            // Insert at start.
            let mut v = vec![(), (), ()];
            <()>::insert_vec_zeroed(&mut v, 0, 2).unwrap();
            assert_eq!(v.len(), 5);
            assert_eq!(&*v, &[(), (), (), (), ()]);
            drop(v);

            // Insert at middle.
            let mut v = vec![(), (), ()];
            <()>::insert_vec_zeroed(&mut v, 1, 1).unwrap();
            assert_eq!(v.len(), 4);
            assert_eq!(&*v, &[(), (), (), ()]);
            drop(v);

            // Insert at end.
            let mut v = vec![(), (), ()];
            <()>::insert_vec_zeroed(&mut v, 3, 1).unwrap();
            assert_eq!(v.len(), 4);
            assert_eq!(&*v, &[(), (), (), ()]);
            drop(v);
        }

        #[test]
        fn test_new_box_zeroed() {
            assert_eq!(u64::new_box_zeroed(), Ok(Box::new(0)));
        }

        #[test]
        fn test_new_box_zeroed_array() {
            drop(<[u32; 0x1000]>::new_box_zeroed());
        }

        #[test]
        fn test_new_box_zeroed_zst() {
            // This test exists in order to exercise unsafe code, especially
            // when running under Miri.
            #[allow(clippy::unit_cmp)]
            {
                assert_eq!(<()>::new_box_zeroed(), Ok(Box::new(())));
            }
        }

        #[test]
        fn test_new_box_zeroed_with_elems() {
            let mut s: Box<[u64]> = <[u64]>::new_box_zeroed_with_elems(3).unwrap();
            assert_eq!(s.len(), 3);
            assert_eq!(&*s, &[0, 0, 0]);
            s[1] = 3;
            assert_eq!(&*s, &[0, 3, 0]);
        }

        #[test]
        fn test_new_box_zeroed_with_elems_empty() {
            let s: Box<[u64]> = <[u64]>::new_box_zeroed_with_elems(0).unwrap();
            assert_eq!(s.len(), 0);
        }

        #[test]
        fn test_new_box_zeroed_with_elems_zst() {
            let mut s: Box<[()]> = <[()]>::new_box_zeroed_with_elems(3).unwrap();
            assert_eq!(s.len(), 3);
            assert!(s.get(10).is_none());
            // This test exists in order to exercise unsafe code, especially
            // when running under Miri.
            #[allow(clippy::unit_cmp)]
            {
                assert_eq!(s[1], ());
            }
            s[2] = ();
        }

        #[test]
        fn test_new_box_zeroed_with_elems_zst_empty() {
            let s: Box<[()]> = <[()]>::new_box_zeroed_with_elems(0).unwrap();
            assert_eq!(s.len(), 0);
        }

        #[test]
        fn new_box_zeroed_with_elems_errors() {
            assert_eq!(<[u16]>::new_box_zeroed_with_elems(usize::MAX), Err(AllocError));

            let max = <usize as core::convert::TryFrom<_>>::try_from(isize::MAX).unwrap();
            assert_eq!(
                <[u16]>::new_box_zeroed_with_elems((max / mem::size_of::<u16>()) + 1),
                Err(AllocError)
            );
        }
    }
}

#[cfg(kani)]
mod proofs {
    use super::*;

    impl kani::Arbitrary for DstLayout {
        fn any() -> Self {
            let align: NonZeroUsize = kani::any();
            let size_info: SizeInfo = kani::any();

            kani::assume(align.is_power_of_two());
            kani::assume(align < DstLayout::THEORETICAL_MAX_ALIGN);

            // For testing purposes, we care most about instantiations of
            // `DstLayout` that can correspond to actual Rust types. We use
            // `Layout` to verify that our `DstLayout` satisfies the validity
            // conditions of Rust layouts.
            kani::assume(
                match size_info {
                    SizeInfo::Sized { size } => Layout::from_size_align(size, align.get()),
                    SizeInfo::SliceDst(TrailingSliceLayout { offset, elem_size: _ }) => {
                        // `SliceDst` cannot encode an exact size, but we know
                        // it is at least `offset` bytes.
                        Layout::from_size_align(offset, align.get())
                    }
                }
                .is_ok(),
            );

            Self { align, size_info }
        }
    }

6652     impl kani::Arbitrary for SizeInfo {
any() -> Self6653         fn any() -> Self {
6654             let is_sized: bool = kani::any();
6655 
6656             match is_sized {
6657                 true => {
6658                     let size: usize = kani::any();
6659 
6660                     kani::assume(size <= isize::MAX as _);
6661 
6662                     SizeInfo::Sized { size }
6663                 }
6664                 false => SizeInfo::SliceDst(kani::any()),
6665             }
6666         }
6667     }

    impl kani::Arbitrary for TrailingSliceLayout {
        fn any() -> Self {
            let elem_size: usize = kani::any();
            let offset: usize = kani::any();

            kani::assume(elem_size < isize::MAX as _);
            kani::assume(offset < isize::MAX as _);

            TrailingSliceLayout { elem_size, offset }
        }
    }

    #[kani::proof]
    fn prove_dst_layout_extend() {
        use crate::util::{max, min, padding_needed_for};

        let base: DstLayout = kani::any();
        let field: DstLayout = kani::any();
        let packed: Option<NonZeroUsize> = kani::any();

        if let Some(max_align) = packed {
            kani::assume(max_align.is_power_of_two());
            kani::assume(base.align <= max_align);
        }

        // The base can only be extended if it's sized.
        kani::assume(matches!(base.size_info, SizeInfo::Sized { .. }));
        let base_size = if let SizeInfo::Sized { size } = base.size_info {
            size
        } else {
            unreachable!();
        };

        // Under the above conditions, `DstLayout::extend` will not panic.
        let composite = base.extend(field, packed);

        // The field's alignment is clamped by `max_align` (i.e., the
        // `packed` attribute, if any) [1].
        //
        // [1] Per https://doc.rust-lang.org/reference/type-layout.html#the-alignment-modifiers:
        //
        //   The alignments of each field, for the purpose of positioning
        //   fields, is the smaller of the specified alignment and the
        //   alignment of the field's type.
        let field_align = min(field.align, packed.unwrap_or(DstLayout::THEORETICAL_MAX_ALIGN));

        // The struct's alignment is the maximum of its previous alignment and
        // `field_align`.
        assert_eq!(composite.align, max(base.align, field_align));

        // Compute the minimum amount of inter-field padding needed to
        // satisfy the field's alignment, and the offset of the trailing
        // field [1].
        //
        // [1] Per https://doc.rust-lang.org/reference/type-layout.html#the-alignment-modifiers:
        //
        //   Inter-field padding is guaranteed to be the minimum required in
        //   order to satisfy each field's (possibly altered) alignment.
        let padding = padding_needed_for(base_size, field_align);
        let offset = base_size + padding;
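        // For intuition, a hypothetical worked example (not drawn from the
        // proof itself): with `base_size = 5`, a field whose type has
        // alignment 8, and `packed(2)`, the effective `field_align` is
        // `min(8, 2) = 2`, the inter-field padding is
        // `padding_needed_for(5, 2) = 1`, and the field lands at
        // `offset = 5 + 1 = 6`.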

        // For testing purposes, we'll also construct `alloc::Layout`
        // stand-ins for `DstLayout`, and show that `extend` behaves
        // comparably on both types.
        let base_analog = Layout::from_size_align(base_size, base.align.get()).unwrap();

        match field.size_info {
            SizeInfo::Sized { size: field_size } => {
                if let SizeInfo::Sized { size: composite_size } = composite.size_info {
                    // If the trailing field is sized, the resulting layout will
                    // be sized. Its size will be the sum of the preceding
                    // layout, the size of the new field, and the size of
                    // inter-field padding between the two.
                    assert_eq!(composite_size, offset + field_size);

                    let field_analog =
                        Layout::from_size_align(field_size, field_align.get()).unwrap();

                    if let Ok((actual_composite, actual_offset)) = base_analog.extend(field_analog)
                    {
                        assert_eq!(actual_offset, offset);
                        assert_eq!(actual_composite.size(), composite_size);
                        assert_eq!(actual_composite.align(), composite.align.get());
                    } else {
                        // An error here reflects that the composite of `base`
                        // and `field` cannot correspond to a real Rust type
                        // fragment, because such a fragment would violate
                        // the basic invariants of a valid Rust layout. At
                        // the time of writing, `DstLayout` is a little more
                        // permissive than `Layout`, so we don't assert
                        // anything in this branch (e.g., unreachability).
                    }
                } else {
                    panic!("The composite of two sized layouts must be sized.")
                }
            }
            SizeInfo::SliceDst(TrailingSliceLayout {
                offset: field_offset,
                elem_size: field_elem_size,
            }) => {
                if let SizeInfo::SliceDst(TrailingSliceLayout {
                    offset: composite_offset,
                    elem_size: composite_elem_size,
                }) = composite.size_info
                {
                    // The offset of the trailing slice component is the sum
                    // of the offset of the trailing field and the trailing
                    // slice offset within that field.
                    assert_eq!(composite_offset, offset + field_offset);
                    // The elem size is unchanged.
                    assert_eq!(composite_elem_size, field_elem_size);

                    let field_analog =
                        Layout::from_size_align(field_offset, field_align.get()).unwrap();

                    if let Ok((actual_composite, actual_offset)) = base_analog.extend(field_analog)
                    {
                        assert_eq!(actual_offset, offset);
                        assert_eq!(actual_composite.size(), composite_offset);
                        assert_eq!(actual_composite.align(), composite.align.get());
                    } else {
                        // An error here reflects that the composite of `base`
                        // and `field` cannot correspond to a real Rust type
                        // fragment, because such a fragment would violate
                        // the basic invariants of a valid Rust layout. At
                        // the time of writing, `DstLayout` is a little more
                        // permissive than `Layout`, so we don't assert
                        // anything in this branch (e.g., unreachability).
                    }
                } else {
                    panic!("The extension of a layout with a DST must result in a DST.")
                }
            }
        }
    }

    #[kani::proof]
    #[kani::should_panic]
    fn prove_dst_layout_extend_dst_panics() {
        let base: DstLayout = kani::any();
        let field: DstLayout = kani::any();
        let packed: Option<NonZeroUsize> = kani::any();

        if let Some(max_align) = packed {
            kani::assume(max_align.is_power_of_two());
            kani::assume(base.align <= max_align);
        }

        kani::assume(matches!(base.size_info, SizeInfo::SliceDst(..)));

        let _ = base.extend(field, packed);
    }

    #[kani::proof]
    fn prove_dst_layout_pad_to_align() {
        use crate::util::padding_needed_for;

        let layout: DstLayout = kani::any();

        let padded: DstLayout = layout.pad_to_align();

        // Calling `pad_to_align` does not alter the `DstLayout`'s alignment.
        assert_eq!(padded.align, layout.align);

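        // For intuition, a hypothetical worked example (not drawn from the
        // proof itself): a sized layout with `size = 5` and `align = 4` gains
        // `padding_needed_for(5, 4) = 3` trailing bytes, for a padded size of
        // 8, matching `Layout::from_size_align(5, 4).unwrap().pad_to_align()`.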
        if let SizeInfo::Sized { size: unpadded_size } = layout.size_info {
            if let SizeInfo::Sized { size: padded_size } = padded.size_info {
                // If the layout is sized, it will remain sized after padding
                // is added. Its size will be the sum of its unpadded size and
                // the trailing padding needed to satisfy its alignment
                // requirements.
                let padding = padding_needed_for(unpadded_size, layout.align);
                assert_eq!(padded_size, unpadded_size + padding);

                // Prove that calling `DstLayout::pad_to_align` behaves
                // identically to `Layout::pad_to_align`.
                let layout_analog =
                    Layout::from_size_align(unpadded_size, layout.align.get()).unwrap();
                let padded_analog = layout_analog.pad_to_align();
                assert_eq!(padded_analog.align(), layout.align.get());
                assert_eq!(padded_analog.size(), padded_size);
            } else {
                panic!("The padding of a sized layout must result in a sized layout.")
            }
        } else {
            // If the layout is a DST, padding cannot be statically added.
            assert_eq!(padded.size_info, layout.size_info);
        }
    }
}