1 /*!
2 Managing the scope stack. The scopes are tied to lexical scopes, so as
3 we descend the THIR, we push a scope on the stack, build its
4 contents, and then pop it off. Every scope is named by a
5 `region::Scope`.
6 
7 ### SEME Regions
8 
9 When pushing a new [Scope], we record the current point in the graph (a
10 basic block); this marks the entry to the scope. We then generate more
11 stuff in the control-flow graph. Whenever the scope is exited, either
12 via a `break` or `return` or just by fallthrough, that marks an exit
13 from the scope. Each lexical scope thus corresponds to a single-entry,
14 multiple-exit (SEME) region in the control-flow graph.
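
To make this concrete, here is a small, purely illustrative example (the helper
`compute` is a made-up stand-in): the loop body is a single lexical scope, entered
at one point and exited through each `break`.

```
# let cond = true;
# fn compute() -> i32 { 3 }
let _result = loop {
    let v = compute();    // entry: the loop body's scope begins here
    if cond { break v; }  // one exit from the scope
    if v < 0 { break 0; } // another exit from the scope
};
```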
15 
16 For now, we record the `region::Scope` associated with each SEME region for later
17 reference (see the caveat in the next paragraph). This is because destruction scopes
18 are tied to them. This may change in the future so that MIR lowering determines its
19 own destruction scopes.
20 
21 ### Not so SEME Regions
22 
23 In the course of building matches, it sometimes happens that certain code
24 (namely guards) gets executed multiple times. This means that a single lexical
25 scope may in fact correspond to multiple, disjoint SEME regions. So in fact our
26 mapping is from one scope to a vector of SEME regions. Since the SEME regions
27 are disjoint, the mapping is still one-to-one for the set of SEME regions that
28 we're currently in.
29 
30 Also in matches, the scopes assigned to arms are not even always SEME regions!
31 Each arm has a single region with one entry for each pattern. We manually
32 manipulate the scheduled drops in this scope to avoid dropping things multiple
33 times.
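
As a purely illustrative example (`guard` is a made-up predicate), the single arm
below has two patterns, so its scope has two entries, and the guard may be
evaluated once per candidate pattern:

```
# let pair = (1, 2);
# fn guard(a: i32, b: i32) -> bool { a < b }
let _which = match pair {
    // One arm, two patterns: one entry into the arm scope per pattern.
    (x, 0) | (0, x) if guard(x, 1) => x,
    _ => -1,
};
```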
34 
35 ### Drops
36 
37 The primary purpose for scopes is to insert drops: while building
38 the contents, we also accumulate places that need to be dropped upon
39 exit from each scope. This is done by calling `schedule_drop`. Once a
40 drop is scheduled, whenever we branch out we will insert drops of all
41 those places onto the outgoing edge. Note that we don't know the full
42 set of scheduled drops up front, and so whenever we exit from the
43 scope we only drop the values scheduled thus far. For example, consider
44 the scope S corresponding to this loop:
45 
46 ```
47 # let cond = true;
48 loop {
49     let x = ..;
50     if cond { break; }
51     let y = ..;
52 }
53 ```
54 
55 When processing the `let x`, we will add one drop to the scope for
56 `x`. The break will then insert a drop for `x`. When we process `let
57 y`, we will add another drop (in fact, to a subscope, but let's ignore
58 that for now); any later drops would also drop `y`.
59 
60 ### Early exit
61 
62 There are numerous "normal" ways to early exit a scope: `break`,
63 `continue`, `return` (panics are handled separately). Whenever an
64 early exit occurs, the method `break_scope` is called. It is given the
65 current point in execution where the early exit occurs, as well as the
66 scope you want to branch to (note that all early exits branch to some
67 other enclosing scope). `break_scope` will record the set of drops currently
68 scheduled in a [DropTree]. Later, before `in_breakable_scope` exits, the drops
69 will be added to the CFG.
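
As an illustrative sketch (the `Guard` type is made up), the `break` below is an
early exit that must drop everything scheduled so far in the scopes it leaves:

```
# struct Guard;
# impl Drop for Guard { fn drop(&mut self) {} }
# let cond = true;
loop {
    let _guard = Guard;  // a drop is scheduled in the loop body's scope
    if cond {
        break;           // `break_scope` records `drop(_guard)` for this exit edge
    }
    // falling through to the next iteration also drops `_guard`
}
```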
70 
71 Panics are handled in a similar fashion, except that the drops are added to the
72 MIR once the rest of the function has finished being lowered. If a terminator
73 can panic, call `diverge_from(block)`, where `block` is the block containing
74 that terminator.
75 
76 ### Breakable scopes
77 
78 In addition to the normal scope stack, we track a stack of breakable scopes,
79 which contains only loops and breakable blocks. It tracks where a `break`,
80 `continue` or `return` should branch to.
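
For example (purely illustrative), both the labeled block and the `for` loop below
push a breakable scope; the labeled `break` targets the outer one:

```
# let values = [1, 2, 3];
'search: {
    for v in values.iter() {
        if *v == 2 {
            break 'search; // exits the labeled block's breakable scope
        }
    }
    // falling through here is the block's other exit
}
```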
81 
82 */
83 
84 use std::mem;
85 
86 use crate::build::{BlockAnd, BlockAndExtension, BlockFrame, Builder, CFG};
87 use rustc_data_structures::fx::FxHashMap;
88 use rustc_hir::HirId;
89 use rustc_index::{IndexSlice, IndexVec};
90 use rustc_middle::middle::region;
91 use rustc_middle::mir::*;
92 use rustc_middle::thir::{Expr, LintLevel};
93 
94 use rustc_middle::ty::Ty;
95 use rustc_span::{Span, DUMMY_SP};
96 
97 #[derive(Debug)]
98 pub struct Scopes<'tcx> {
99     scopes: Vec<Scope>,
100 
101     /// The current set of breakable scopes. See module comment for more details.
102     breakable_scopes: Vec<BreakableScope<'tcx>>,
103 
104     /// The scope of the innermost if-then currently being lowered.
105     if_then_scope: Option<IfThenScope>,
106 
107     /// Drops that need to be done on unwind paths. See the comment on
108     /// [DropTree] for more details.
109     unwind_drops: DropTree,
110 
111     /// Drops that need to be done on paths to the `GeneratorDrop` terminator.
112     generator_drops: DropTree,
113 }
114 
115 #[derive(Debug)]
116 struct Scope {
117     /// The source scope this scope was created in.
118     source_scope: SourceScope,
119 
120     /// the region span of this scope within source code.
121     region_scope: region::Scope,
122 
123     /// set of places to drop when exiting this scope. This starts
124     /// out empty but grows as variables are declared during the
125     /// building process. This is a stack, so we always drop from the
126     /// end of the vector (top of the stack) first.
127     drops: Vec<DropData>,
128 
    /// Locals that have been moved out as operands (see `record_operands_moved`);
    /// their scheduled `Value` drops are skipped on non-unwind paths.
129     moved_locals: Vec<Local>,
130 
131     /// The drop index that will drop everything in and below this scope on an
132     /// unwind path.
133     cached_unwind_block: Option<DropIdx>,
134 
135     /// The drop index that will drop everything in and below this scope on a
136     /// generator drop path.
137     cached_generator_drop_block: Option<DropIdx>,
138 }
139 
140 #[derive(Clone, Copy, Debug)]
141 struct DropData {
142     /// The `Span` where the drop obligation was incurred (typically where the
143     /// place was declared).
144     source_info: SourceInfo,
145 
146     /// The local to drop.
147     local: Local,
148 
149     /// Whether this is a value Drop or a StorageDead.
150     kind: DropKind,
151 }
152 
153 #[derive(Debug, Clone, Copy, PartialEq, Eq, Hash)]
154 pub(crate) enum DropKind {
155     Value,
156     Storage,
157 }
158 
159 #[derive(Debug)]
160 struct BreakableScope<'tcx> {
161     /// Region scope of the loop or breakable block.
162     region_scope: region::Scope,
163     /// The destination of the loop/block expression itself (i.e., where to put
164     /// the result of a `break` or `return` expression)
165     break_destination: Place<'tcx>,
166     /// Drops that happen on the `break`/`return` path.
167     break_drops: DropTree,
168     /// Drops that happen on the `continue` path.
169     continue_drops: Option<DropTree>,
170 }
171 
172 #[derive(Debug)]
173 struct IfThenScope {
174     /// The if-then scope or arm scope
175     region_scope: region::Scope,
176     /// Drops that happen on the `else` path.
177     else_drops: DropTree,
178 }
179 
180 /// The target of an expression that breaks out of a scope
181 #[derive(Clone, Copy, Debug)]
182 pub(crate) enum BreakableTarget {
183     Continue(region::Scope),
184     Break(region::Scope),
185     Return,
186 }
187 
188 rustc_index::newtype_index! {
189     struct DropIdx {}
190 }
191 
192 const ROOT_NODE: DropIdx = DropIdx::from_u32(0);
193 
194 /// A tree of drops that we have deferred lowering. It's used for:
195 ///
196 /// * Drops on unwind paths
197 /// * Drops on generator drop paths (when a suspended generator is dropped)
198 /// * Drops on return and loop exit paths
199 /// * Drops on the else path in an `if let` chain
200 ///
201 /// Once no more nodes can be added to the tree, we lower it to MIR in one go
202 /// in `build_mir`.
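///
/// For intuition, an illustrative (non-runnable) sketch of how a chain is built;
/// `drop_data_a`, `drop_data_b` and `exit_block` are placeholder names:
///
/// ```ignore (illustrative)
/// let mut tree = DropTree::new();
/// // The outer scope's drop hangs off the root; the inner scope's drop points at it.
/// let drop_b = tree.add_drop(drop_data_b, ROOT_NODE);
/// let drop_a = tree.add_drop(drop_data_a, drop_b);
/// // An early exit from the inner scope runs drop `a`, then drop `b`, then the root block.
/// tree.add_entry(exit_block, drop_a);
/// ```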
203 #[derive(Debug)]
204 struct DropTree {
205     /// Drops in the tree.
206     drops: IndexVec<DropIdx, (DropData, DropIdx)>,
207     /// Map for finding the inverse of the `next_drop` relation:
208     ///
209     /// `previous_drops[(drops[i].1, drops[i].0.local, drops[i].0.kind)] == i`
210     previous_drops: FxHashMap<(DropIdx, Local, DropKind), DropIdx>,
211     /// Edges into the `DropTree` that need to be added once it's lowered.
212     entry_points: Vec<(DropIdx, BasicBlock)>,
213 }
214 
215 impl Scope {
216     /// Whether there's anything to do for the cleanup path, that is,
217     /// when unwinding through this scope. This includes destructors,
218     /// but not StorageDead statements, which don't get emitted at all
219     /// for unwinding, for several reasons:
220     ///  * clang doesn't emit llvm.lifetime.end for C++ unwinding
221     ///  * LLVM's memory dependency analysis can't handle it atm
222     ///  * polluting the cleanup MIR with StorageDead creates
223     ///    landing pads even though there are no actual destructors
224     ///  * freeing up stack space has no effect during unwinding
225     /// Note that for generators we do emit StorageDeads, for the
226     /// benefit of optimizations in the MIR generator transform.
227     fn needs_cleanup(&self) -> bool {
228         self.drops.iter().any(|drop| match drop.kind {
229             DropKind::Value => true,
230             DropKind::Storage => false,
231         })
232     }
233 
234     fn invalidate_cache(&mut self) {
235         self.cached_unwind_block = None;
236         self.cached_generator_drop_block = None;
237     }
238 }
239 
240 /// A trait that determines how [DropTree] creates its blocks and
241 /// links to any entry nodes.
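///
/// For orientation, a hypothetical implementor might look like the sketch below
/// (`UnwindExit` is an invented name; the real implementors are defined elsewhere
/// in the compiler):
///
/// ```ignore (illustrative)
/// struct UnwindExit;
/// impl<'tcx> DropTreeBuilder<'tcx> for UnwindExit {
///     fn make_block(cfg: &mut CFG<'tcx>) -> BasicBlock {
///         cfg.start_new_cleanup_block()
///     }
///     fn add_entry(cfg: &mut CFG<'tcx>, from: BasicBlock, to: BasicBlock) {
///         // Patch the terminator of `from` so that its unwind edge targets `to`.
///     }
/// }
/// ```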
242 trait DropTreeBuilder<'tcx> {
243     /// Create a new block for the tree. This should call either
244     /// `cfg.start_new_block()` or `cfg.start_new_cleanup_block()`.
245     fn make_block(cfg: &mut CFG<'tcx>) -> BasicBlock;
246 
247     /// Links a block outside the drop tree, `from`, to the block `to` inside
248     /// the drop tree.
249     fn add_entry(cfg: &mut CFG<'tcx>, from: BasicBlock, to: BasicBlock);
250 }
251 
252 impl DropTree {
253     fn new() -> Self {
254         // The root node of the tree doesn't represent a drop, but instead
255         // represents the block in the tree that should be jumped to once all
256         // of the required drops have been performed.
257         let fake_source_info = SourceInfo::outermost(DUMMY_SP);
258         let fake_data =
259             DropData { source_info: fake_source_info, local: Local::MAX, kind: DropKind::Storage };
260         let drop_idx = DropIdx::MAX;
261         let drops = IndexVec::from_elem_n((fake_data, drop_idx), 1);
262         Self { drops, entry_points: Vec::new(), previous_drops: FxHashMap::default() }
263     }
264 
265     fn add_drop(&mut self, drop: DropData, next: DropIdx) -> DropIdx {
266         let drops = &mut self.drops;
267         *self
268             .previous_drops
269             .entry((next, drop.local, drop.kind))
270             .or_insert_with(|| drops.push((drop, next)))
271     }
272 
273     fn add_entry(&mut self, from: BasicBlock, to: DropIdx) {
274         debug_assert!(to < self.drops.next_index());
275         self.entry_points.push((to, from));
276     }
277 
278     /// Builds the MIR for a given drop tree.
279     ///
280     /// `blocks` should have the same length as `self.drops`, and may have its
281     /// first value set to some already existing block.
282     fn build_mir<'tcx, T: DropTreeBuilder<'tcx>>(
283         &mut self,
284         cfg: &mut CFG<'tcx>,
285         blocks: &mut IndexVec<DropIdx, Option<BasicBlock>>,
286     ) {
287         debug!("DropTree::build_mir(drops = {:#?})", self);
288         assert_eq!(blocks.len(), self.drops.len());
289 
290         self.assign_blocks::<T>(cfg, blocks);
291         self.link_blocks(cfg, blocks)
292     }
293 
294     /// Assign blocks for all of the drops in the drop tree that need them.
295     fn assign_blocks<'tcx, T: DropTreeBuilder<'tcx>>(
296         &mut self,
297         cfg: &mut CFG<'tcx>,
298         blocks: &mut IndexVec<DropIdx, Option<BasicBlock>>,
299     ) {
300         // StorageDead statements can share blocks with each other and also with
301         // a Drop terminator. We iterate through the drops to find which drops
302         // need their own block.
303         #[derive(Clone, Copy)]
304         enum Block {
305             // This drop is unreachable
306             None,
307             // This drop is only reachable through the `StorageDead` with the
308             // specified index.
309             Shares(DropIdx),
310             // This drop has more than one way of being reached, or it is
311             // branched to from outside the tree, or its predecessor is a
312             // `Value` drop.
313             Own,
314         }
315 
316         let mut needs_block = IndexVec::from_elem(Block::None, &self.drops);
317         if blocks[ROOT_NODE].is_some() {
318             // In some cases (such as drops for `continue`) the root node
319             // already has a block. In this case, make sure that we don't
320             // override it.
321             needs_block[ROOT_NODE] = Block::Own;
322         }
323 
324         // Sort so that we only need to check the last value.
325         let entry_points = &mut self.entry_points;
326         entry_points.sort();
327 
328         for (drop_idx, drop_data) in self.drops.iter_enumerated().rev() {
329             if entry_points.last().is_some_and(|entry_point| entry_point.0 == drop_idx) {
330                 let block = *blocks[drop_idx].get_or_insert_with(|| T::make_block(cfg));
331                 needs_block[drop_idx] = Block::Own;
332                 while entry_points.last().is_some_and(|entry_point| entry_point.0 == drop_idx) {
333                     let entry_block = entry_points.pop().unwrap().1;
334                     T::add_entry(cfg, entry_block, block);
335                 }
336             }
337             match needs_block[drop_idx] {
338                 Block::None => continue,
339                 Block::Own => {
340                     blocks[drop_idx].get_or_insert_with(|| T::make_block(cfg));
341                 }
342                 Block::Shares(pred) => {
343                     blocks[drop_idx] = blocks[pred];
344                 }
345             }
346             if let DropKind::Value = drop_data.0.kind {
347                 needs_block[drop_data.1] = Block::Own;
348             } else if drop_idx != ROOT_NODE {
349                 match &mut needs_block[drop_data.1] {
350                     pred @ Block::None => *pred = Block::Shares(drop_idx),
351                     pred @ Block::Shares(_) => *pred = Block::Own,
352                     Block::Own => (),
353                 }
354             }
355         }
356 
357         debug!("assign_blocks: blocks = {:#?}", blocks);
358         assert!(entry_points.is_empty());
359     }
360 
361     fn link_blocks<'tcx>(
362         &self,
363         cfg: &mut CFG<'tcx>,
364         blocks: &IndexSlice<DropIdx, Option<BasicBlock>>,
365     ) {
366         for (drop_idx, drop_data) in self.drops.iter_enumerated().rev() {
367             let Some(block) = blocks[drop_idx] else { continue };
368             match drop_data.0.kind {
369                 DropKind::Value => {
370                     let terminator = TerminatorKind::Drop {
371                         target: blocks[drop_data.1].unwrap(),
372                         // The caller will handle this if needed.
373                         unwind: UnwindAction::Terminate,
374                         place: drop_data.0.local.into(),
375                         replace: false,
376                     };
377                     cfg.terminate(block, drop_data.0.source_info, terminator);
378                 }
379                 // Root nodes don't correspond to a drop.
380                 DropKind::Storage if drop_idx == ROOT_NODE => {}
381                 DropKind::Storage => {
382                     let stmt = Statement {
383                         source_info: drop_data.0.source_info,
384                         kind: StatementKind::StorageDead(drop_data.0.local),
385                     };
386                     cfg.push(block, stmt);
387                     let target = blocks[drop_data.1].unwrap();
388                     if target != block {
389                         // Diagnostics don't use this `Span` but debuginfo
390                         // might. Since we don't want breakpoints to be placed
391                         // here, especially when this is on an unwind path, we
392                         // use `DUMMY_SP`.
393                         let source_info = SourceInfo { span: DUMMY_SP, ..drop_data.0.source_info };
394                         let terminator = TerminatorKind::Goto { target };
395                         cfg.terminate(block, source_info, terminator);
396                     }
397                 }
398             }
399         }
400     }
401 }
402 
403 impl<'tcx> Scopes<'tcx> {
404     pub(crate) fn new() -> Self {
405         Self {
406             scopes: Vec::new(),
407             breakable_scopes: Vec::new(),
408             if_then_scope: None,
409             unwind_drops: DropTree::new(),
410             generator_drops: DropTree::new(),
411         }
412     }
413 
414     fn push_scope(&mut self, region_scope: (region::Scope, SourceInfo), vis_scope: SourceScope) {
415         debug!("push_scope({:?})", region_scope);
416         self.scopes.push(Scope {
417             source_scope: vis_scope,
418             region_scope: region_scope.0,
419             drops: vec![],
420             moved_locals: vec![],
421             cached_unwind_block: None,
422             cached_generator_drop_block: None,
423         });
424     }
425 
426     fn pop_scope(&mut self, region_scope: (region::Scope, SourceInfo)) -> Scope {
427         let scope = self.scopes.pop().unwrap();
428         assert_eq!(scope.region_scope, region_scope.0);
429         scope
430     }
431 
432     fn scope_index(&self, region_scope: region::Scope, span: Span) -> usize {
433         self.scopes
434             .iter()
435             .rposition(|scope| scope.region_scope == region_scope)
436             .unwrap_or_else(|| span_bug!(span, "region_scope {:?} does not enclose", region_scope))
437     }
438 
439     /// Returns the topmost active scope, which is known to be alive until
440     /// the next scope expression.
441     fn topmost(&self) -> region::Scope {
442         self.scopes.last().expect("topmost_scope: no scopes present").region_scope
443     }
444 }
445 
446 impl<'a, 'tcx> Builder<'a, 'tcx> {
447     // Adding and removing scopes
448     // ==========================
449 
450     ///  Start a breakable scope, which tracks where `continue`, `break` and
451     ///  `return` should branch to.
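    ///
    /// An illustrative (non-runnable) sketch of how a caller might use this when
    /// lowering a loop; `builder`, `loop_head` and `destination` are placeholder
    /// names:
    ///
    /// ```ignore (illustrative)
    /// builder.in_breakable_scope(Some(loop_head), destination, span, |this| {
    ///     // ... build the loop body with `this` ...
    ///     // A plain `loop` has no normal exit, so return `None`; any `break`
    ///     // expressions will have registered themselves via `break_scope`.
    ///     None
    /// });
    /// ```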
452     pub(crate) fn in_breakable_scope<F>(
453         &mut self,
454         loop_block: Option<BasicBlock>,
455         break_destination: Place<'tcx>,
456         span: Span,
457         f: F,
458     ) -> BlockAnd<()>
459     where
460         F: FnOnce(&mut Builder<'a, 'tcx>) -> Option<BlockAnd<()>>,
461     {
462         let region_scope = self.scopes.topmost();
463         let scope = BreakableScope {
464             region_scope,
465             break_destination,
466             break_drops: DropTree::new(),
467             continue_drops: loop_block.map(|_| DropTree::new()),
468         };
469         self.scopes.breakable_scopes.push(scope);
470         let normal_exit_block = f(self);
471         let breakable_scope = self.scopes.breakable_scopes.pop().unwrap();
472         assert!(breakable_scope.region_scope == region_scope);
473         let break_block =
474             self.build_exit_tree(breakable_scope.break_drops, region_scope, span, None);
475         if let Some(drops) = breakable_scope.continue_drops {
476             self.build_exit_tree(drops, region_scope, span, loop_block);
477         }
478         match (normal_exit_block, break_block) {
479             (Some(block), None) | (None, Some(block)) => block,
480             (None, None) => self.cfg.start_new_block().unit(),
481             (Some(normal_block), Some(exit_block)) => {
482                 let target = self.cfg.start_new_block();
483                 let source_info = self.source_info(span);
484                 self.cfg.terminate(
485                     unpack!(normal_block),
486                     source_info,
487                     TerminatorKind::Goto { target },
488                 );
489                 self.cfg.terminate(
490                     unpack!(exit_block),
491                     source_info,
492                     TerminatorKind::Goto { target },
493                 );
494                 target.unit()
495             }
496         }
497     }
498 
499     /// Start an if-then scope which tracks drops for `if` expressions and `if`
500     /// guards.
501     ///
502     /// For an if-let chain:
503     ///
504     /// if let Some(x) = a && let Some(y) = b && let Some(z) = c { ... }
505     ///
506     /// There are three possible ways the condition can be false and we may have
507     /// to drop `x`, `x` and `y`, or neither depending on which binding fails.
508     /// To handle this correctly we use a `DropTree` in a similar way to a
509     /// `loop` expression and 'break' out on all of the 'else' paths.
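    ///
    /// As a purely illustrative example with droppable bindings (`a`, `b` and `c`
    /// are made-up functions):
    ///
    /// ```ignore (illustrative)
    /// if let Some(x) = a() && let Some(y) = b() && c(&x, &y) {
    ///     // ...
    /// }
    /// // If `b()` returns `None`, only `x` is dropped on the else path;
    /// // if `c(..)` returns `false`, both `x` and `y` are dropped there.
    /// ```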
510     ///
511     /// Notes:
512     /// - We don't need to keep a stack of scopes in the `Builder` because the
513     ///   'else' paths will only leave the innermost scope.
514     /// - This is also used for match guards.
515     pub(crate) fn in_if_then_scope<F>(
516         &mut self,
517         region_scope: region::Scope,
518         span: Span,
519         f: F,
520     ) -> (BasicBlock, BasicBlock)
521     where
522         F: FnOnce(&mut Builder<'a, 'tcx>) -> BlockAnd<()>,
523     {
524         let scope = IfThenScope { region_scope, else_drops: DropTree::new() };
525         let previous_scope = mem::replace(&mut self.scopes.if_then_scope, Some(scope));
526 
527         let then_block = unpack!(f(self));
528 
529         let if_then_scope = mem::replace(&mut self.scopes.if_then_scope, previous_scope).unwrap();
530         assert!(if_then_scope.region_scope == region_scope);
531 
532         let else_block = self
533             .build_exit_tree(if_then_scope.else_drops, region_scope, span, None)
534             .map_or_else(|| self.cfg.start_new_block(), |else_block_and| unpack!(else_block_and));
535 
536         (then_block, else_block)
537     }
538 
539     pub(crate) fn in_opt_scope<F, R>(
540         &mut self,
541         opt_scope: Option<(region::Scope, SourceInfo)>,
542         f: F,
543     ) -> BlockAnd<R>
544     where
545         F: FnOnce(&mut Builder<'a, 'tcx>) -> BlockAnd<R>,
546     {
547         debug!("in_opt_scope(opt_scope={:?})", opt_scope);
548         if let Some(region_scope) = opt_scope {
549             self.push_scope(region_scope);
550         }
551         let mut block;
552         let rv = unpack!(block = f(self));
553         if let Some(region_scope) = opt_scope {
554             unpack!(block = self.pop_scope(region_scope, block));
555         }
556         debug!("in_scope: exiting opt_scope={:?} block={:?}", opt_scope, block);
557         block.and(rv)
558     }
559 
560     /// Convenience wrapper that pushes a scope and then executes `f`
561     /// to build its contents, popping the scope afterwards.
562     #[instrument(skip(self, f), level = "debug")]
563     pub(crate) fn in_scope<F, R>(
564         &mut self,
565         region_scope: (region::Scope, SourceInfo),
566         lint_level: LintLevel,
567         f: F,
568     ) -> BlockAnd<R>
569     where
570         F: FnOnce(&mut Builder<'a, 'tcx>) -> BlockAnd<R>,
571     {
572         let source_scope = self.source_scope;
573         if let LintLevel::Explicit(current_hir_id) = lint_level {
574             let parent_id =
575                 self.source_scopes[source_scope].local_data.as_ref().assert_crate_local().lint_root;
576             self.maybe_new_source_scope(region_scope.1.span, None, current_hir_id, parent_id);
577         }
578         self.push_scope(region_scope);
579         let mut block;
580         let rv = unpack!(block = f(self));
581         unpack!(block = self.pop_scope(region_scope, block));
582         self.source_scope = source_scope;
583         debug!(?block);
584         block.and(rv)
585     }
586 
587     /// Push a scope onto the stack. You can then build code in this
588     /// scope and call `pop_scope` afterwards. Note that these two
589     /// calls must be paired; using `in_scope` as a convenience
590     /// wrapper may be preferable.
591     pub(crate) fn push_scope(&mut self, region_scope: (region::Scope, SourceInfo)) {
592         self.scopes.push_scope(region_scope, self.source_scope);
593     }
594 
595     /// Pops a scope, which should have region scope `region_scope`,
596     /// adding any drops onto the end of `block` that are needed.
597     /// This must match 1-to-1 with `push_scope`.
598     pub(crate) fn pop_scope(
599         &mut self,
600         region_scope: (region::Scope, SourceInfo),
601         mut block: BasicBlock,
602     ) -> BlockAnd<()> {
603         debug!("pop_scope({:?}, {:?})", region_scope, block);
604 
605         block = self.leave_top_scope(block);
606 
607         self.scopes.pop_scope(region_scope);
608 
609         block.unit()
610     }
611 
612     /// Sets up the drops for breaking from `block` to `target`.
613     pub(crate) fn break_scope(
614         &mut self,
615         mut block: BasicBlock,
616         value: Option<&Expr<'tcx>>,
617         target: BreakableTarget,
618         source_info: SourceInfo,
619     ) -> BlockAnd<()> {
620         let span = source_info.span;
621 
622         let get_scope_index = |scope: region::Scope| {
623             // find the loop-scope by its `region::Scope`.
624             self.scopes
625                 .breakable_scopes
626                 .iter()
627                 .rposition(|breakable_scope| breakable_scope.region_scope == scope)
628                 .unwrap_or_else(|| span_bug!(span, "no enclosing breakable scope found"))
629         };
630         let (break_index, destination) = match target {
631             BreakableTarget::Return => {
632                 let scope = &self.scopes.breakable_scopes[0];
633                 if scope.break_destination != Place::return_place() {
634                     span_bug!(span, "`return` in item with no return scope");
635                 }
636                 (0, Some(scope.break_destination))
637             }
638             BreakableTarget::Break(scope) => {
639                 let break_index = get_scope_index(scope);
640                 let scope = &self.scopes.breakable_scopes[break_index];
641                 (break_index, Some(scope.break_destination))
642             }
643             BreakableTarget::Continue(scope) => {
644                 let break_index = get_scope_index(scope);
645                 (break_index, None)
646             }
647         };
648 
649         match (destination, value) {
650             (Some(destination), Some(value)) => {
651                 debug!("stmt_expr Break val block_context.push(SubExpr)");
652                 self.block_context.push(BlockFrame::SubExpr);
653                 unpack!(block = self.expr_into_dest(destination, block, value));
654                 self.block_context.pop();
655             }
656             (Some(destination), None) => {
657                 self.cfg.push_assign_unit(block, source_info, destination, self.tcx)
658             }
659             (None, Some(_)) => {
660                 panic!("`return`, `become` and `break` with a value must have a destination")
661             }
662             (None, None) if self.tcx.sess.instrument_coverage() => {
663                 // Unlike `break` and `return`, which push an `Assign` statement to MIR, from which
664                 // a Coverage code region can be generated, `continue` needs no `Assign`; but
665                 // without one, the `InstrumentCoverage` MIR pass cannot generate a code region for
666                 // `continue`. Coverage will be missing unless we add a dummy `Assign` to MIR.
667                 self.add_dummy_assignment(span, block, source_info);
668             }
669             (None, None) => {}
670         }
671 
672         let region_scope = self.scopes.breakable_scopes[break_index].region_scope;
673         let scope_index = self.scopes.scope_index(region_scope, span);
674         let drops = if destination.is_some() {
675             &mut self.scopes.breakable_scopes[break_index].break_drops
676         } else {
677             self.scopes.breakable_scopes[break_index].continue_drops.as_mut().unwrap()
678         };
679 
680         let drop_idx = self.scopes.scopes[scope_index + 1..]
681             .iter()
682             .flat_map(|scope| &scope.drops)
683             .fold(ROOT_NODE, |drop_idx, &drop| drops.add_drop(drop, drop_idx));
684 
685         drops.add_entry(block, drop_idx);
686 
687         // `build_drop_trees` doesn't have access to our source_info, so we
688         // create a dummy terminator now. `TerminatorKind::Resume` is used
689         // because MIR type checking will panic if it hasn't been overwritten.
690         self.cfg.terminate(block, source_info, TerminatorKind::Resume);
691 
692         self.cfg.start_new_block().unit()
693     }
694 
695     pub(crate) fn break_for_else(
696         &mut self,
697         block: BasicBlock,
698         target: region::Scope,
699         source_info: SourceInfo,
700     ) {
701         let scope_index = self.scopes.scope_index(target, source_info.span);
702         let if_then_scope = self
703             .scopes
704             .if_then_scope
705             .as_mut()
706             .unwrap_or_else(|| span_bug!(source_info.span, "no if-then scope found"));
707 
708         assert_eq!(if_then_scope.region_scope, target, "breaking to incorrect scope");
709 
710         let mut drop_idx = ROOT_NODE;
711         let drops = &mut if_then_scope.else_drops;
712         for scope in &self.scopes.scopes[scope_index + 1..] {
713             for drop in &scope.drops {
714                 drop_idx = drops.add_drop(*drop, drop_idx);
715             }
716         }
717         drops.add_entry(block, drop_idx);
718 
719         // `build_drop_trees` doesn't have access to our source_info, so we
720         // create a dummy terminator now. `TerminatorKind::Resume` is used
721         // because MIR type checking will panic if it hasn't been overwritten.
722         self.cfg.terminate(block, source_info, TerminatorKind::Resume);
723     }
724 
725     // Add a dummy `Assign` statement to the CFG, with the span for the source code's `continue`
726     // statement.
727     fn add_dummy_assignment(&mut self, span: Span, block: BasicBlock, source_info: SourceInfo) {
728         let local_decl = LocalDecl::new(Ty::new_unit(self.tcx), span).internal();
729         let temp_place = Place::from(self.local_decls.push(local_decl));
730         self.cfg.push_assign_unit(block, source_info, temp_place, self.tcx);
731     }
732 
733     fn leave_top_scope(&mut self, block: BasicBlock) -> BasicBlock {
734         // If we are emitting a `drop` statement, we need to have the cached
735         // diverge cleanup pads ready in case that drop panics.
736         let needs_cleanup = self.scopes.scopes.last().is_some_and(|scope| scope.needs_cleanup());
737         let is_generator = self.generator_kind.is_some();
738         let unwind_to = if needs_cleanup { self.diverge_cleanup() } else { DropIdx::MAX };
739 
740         let scope = self.scopes.scopes.last().expect("leave_top_scope called with no scopes");
741         unpack!(build_scope_drops(
742             &mut self.cfg,
743             &mut self.scopes.unwind_drops,
744             scope,
745             block,
746             unwind_to,
747             is_generator && needs_cleanup,
748             self.arg_count,
749         ))
750     }
751 
752     /// Possibly creates a new source scope if `current_root` and `parent_root`
753     /// are different, or if -Zmaximal-hir-to-mir-coverage is enabled.
754     pub(crate) fn maybe_new_source_scope(
755         &mut self,
756         span: Span,
757         safety: Option<Safety>,
758         current_id: HirId,
759         parent_id: HirId,
760     ) {
761         let (current_root, parent_root) =
762             if self.tcx.sess.opts.unstable_opts.maximal_hir_to_mir_coverage {
763                 // Some consumers of rustc need to map MIR locations back to HIR nodes. Currently
764                 // the only part of rustc that tracks MIR -> HIR is the `SourceScopeLocalData::lint_root`
765                 // field that tracks lint levels for MIR locations. Normally the number of source scopes
766                 // is limited to the set of nodes with lint annotations. The -Zmaximal-hir-to-mir-coverage
767                 // flag changes this behavior to maximize the number of source scopes, increasing the
768                 // granularity of the MIR->HIR mapping.
769                 (current_id, parent_id)
770             } else {
771                 // Use `maybe_lint_level_root_bounded` with `self.hir_id` as a bound
772                 // to avoid adding Hir dependencies on our parents.
773                 // We estimate the true lint roots here to avoid creating a lot of source scopes.
774                 (
775                     self.tcx.maybe_lint_level_root_bounded(current_id, self.hir_id),
776                     self.tcx.maybe_lint_level_root_bounded(parent_id, self.hir_id),
777                 )
778             };
779 
780         if current_root != parent_root {
781             let lint_level = LintLevel::Explicit(current_root);
782             self.source_scope = self.new_source_scope(span, lint_level, safety);
783         }
784     }
785 
786     /// Creates a new source scope, nested in the current one.
787     pub(crate) fn new_source_scope(
788         &mut self,
789         span: Span,
790         lint_level: LintLevel,
791         safety: Option<Safety>,
792     ) -> SourceScope {
793         let parent = self.source_scope;
794         debug!(
795             "new_source_scope({:?}, {:?}, {:?}) - parent({:?})={:?}",
796             span,
797             lint_level,
798             safety,
799             parent,
800             self.source_scopes.get(parent)
801         );
802         let scope_local_data = SourceScopeLocalData {
803             lint_root: if let LintLevel::Explicit(lint_root) = lint_level {
804                 lint_root
805             } else {
806                 self.source_scopes[parent].local_data.as_ref().assert_crate_local().lint_root
807             },
808             safety: safety.unwrap_or_else(|| {
809                 self.source_scopes[parent].local_data.as_ref().assert_crate_local().safety
810             }),
811         };
812         self.source_scopes.push(SourceScopeData {
813             span,
814             parent_scope: Some(parent),
815             inlined: None,
816             inlined_parent_scope: None,
817             local_data: ClearCrossCrate::Set(scope_local_data),
818         })
819     }
820 
821     /// Given a span and the current source scope, make a SourceInfo.
822     pub(crate) fn source_info(&self, span: Span) -> SourceInfo {
823         SourceInfo { span, scope: self.source_scope }
824     }
825 
826     // Finding scopes
827     // ==============
828 
829     /// Returns the scope that we should use as the lifetime of an
830     /// operand. Basically, an operand must live until it is consumed.
831     /// This is similar to, but not quite the same as, the temporary
832     /// scope (which can be larger or smaller).
833     ///
834     /// Consider:
835     /// ```ignore (illustrative)
836     /// let x = foo(bar(X, Y));
837     /// ```
838     /// We wish to pop the storage for X and Y after `bar()` is
839     /// called, not after the whole `let` is completed.
840     ///
841     /// As another example, if the second argument diverges:
842     /// ```ignore (illustrative)
843     /// foo(Box::new(2), panic!())
844     /// ```
845     /// We would allocate the box but then free it on the unwinding
846     /// path; we would also emit a free on the 'success' path from
847     /// panic, but that will turn out to be removed as dead-code.
848     pub(crate) fn local_scope(&self) -> region::Scope {
849         self.scopes.topmost()
850     }
851 
852     // Scheduling drops
853     // ================
854 
855     pub(crate) fn schedule_drop_storage_and_value(
856         &mut self,
857         span: Span,
858         region_scope: region::Scope,
859         local: Local,
860     ) {
861         self.schedule_drop(span, region_scope, local, DropKind::Storage);
862         self.schedule_drop(span, region_scope, local, DropKind::Value);
863     }
864 
865     /// Indicates that `local` should be dropped on exit from `region_scope`.
866     ///
867     /// When called with `DropKind::Storage`, `local` shouldn't be the return
868     /// place or a function parameter.
869     pub(crate) fn schedule_drop(
870         &mut self,
871         span: Span,
872         region_scope: region::Scope,
873         local: Local,
874         drop_kind: DropKind,
875     ) {
876         let needs_drop = match drop_kind {
877             DropKind::Value => {
878                 if !self.local_decls[local].ty.needs_drop(self.tcx, self.param_env) {
879                     return;
880                 }
881                 true
882             }
883             DropKind::Storage => {
884                 if local.index() <= self.arg_count {
885                     span_bug!(
886                         span,
887                         "`schedule_drop` called with local {:?} and arg_count {}",
888                         local,
889                         self.arg_count,
890                     )
891                 }
892                 false
893             }
894         };
895 
896         // When building drops, we try to cache chains of drops to reduce the
897         // number of `DropTree::add_drop` calls. This, however, means that
898         // whenever we add a drop into a scope which already had some entries
899         // in the drop tree built (and thus, cached) for it, we must invalidate
900         // all caches which might branch into the scope which had a drop just
901         // added to it. This is necessary, because otherwise some other code
902         // might use the cache to branch into already built chain of drops,
903         // essentially ignoring the newly added drop.
904         //
905         // For example, consider two scopes with a drop in each. These
906         // are built and thus the caches are filled:
907         //
908         // +--------------------------------------------------------+
909         // | +---------------------------------+                    |
910         // | | +--------+     +-------------+  |  +---------------+ |
911         // | | | return | <-+ | drop(outer) | <-+ |  drop(middle) | |
912         // | | +--------+     +-------------+  |  +---------------+ |
913         // | +------------|outer_scope cache|--+                    |
914         // +------------------------------|middle_scope cache|------+
915         //
916         // Now, a new, inner-most scope is added along with a new drop into
917         // both inner-most and outer-most scopes:
918         //
919         // +------------------------------------------------------------+
920         // | +----------------------------------+                       |
921         // | | +--------+      +-------------+  |   +---------------+   | +-------------+
922         // | | | return | <+   | drop(new)   | <-+  |  drop(middle) | <--+| drop(inner) |
923         // | | +--------+  |   | drop(outer) |  |   +---------------+   | +-------------+
924         // | |             +-+ +-------------+  |                       |
925         // | +---|invalid outer_scope cache|----+                       |
926         // +----=----------------|invalid middle_scope cache|-----------+
927         //
928         // If, when adding `drop(new)` we do not invalidate the cached blocks for both
929         // outer_scope and middle_scope, then, when building drops for the inner (right-most)
930         // scope, the old, cached blocks, without `drop(new)` will get used, producing the
931         // wrong results.
932         //
933         // Note that this code iterates scopes from the inner-most to the outer-most,
934         // invalidating caches of each scope visited. This way the bare minimum of the
935         // caches gets invalidated; i.e., if a new drop is added into the middle scope, the
936         // cache of outer scope stays intact.
937         //
938         // Since we only cache drops for the unwind path and the generator drop
939         // path, we only need to invalidate the cache for drops that happen on
940         // the unwind or generator drop paths. This means that for
941         // non-generators we don't need to invalidate caches for `DropKind::Storage`.
942         let invalidate_caches = needs_drop || self.generator_kind.is_some();
943         for scope in self.scopes.scopes.iter_mut().rev() {
944             if invalidate_caches {
945                 scope.invalidate_cache();
946             }
947 
948             if scope.region_scope == region_scope {
949                 let region_scope_span = region_scope.span(self.tcx, &self.region_scope_tree);
950                 // Attribute scope exit drops to scope's closing brace.
951                 let scope_end = self.tcx.sess.source_map().end_point(region_scope_span);
952 
953                 scope.drops.push(DropData {
954                     source_info: SourceInfo { span: scope_end, scope: scope.source_scope },
955                     local,
956                     kind: drop_kind,
957                 });
958 
959                 return;
960             }
961         }
962 
963         span_bug!(span, "region scope {:?} not in scope to drop {:?}", region_scope, local);
964     }
965 
966     /// Indicates that the "local operand" stored in `local` is
967     /// *moved* at some point during execution (see `local_scope` for
968     /// more information about what a "local operand" is -- in short,
969     /// it's an intermediate operand created as part of preparing some
970     /// MIR instruction). We use this information to suppress
971     /// redundant drops on the non-unwind paths. This results in less
972     /// MIR, but also avoids spurious borrow check errors
973     /// (c.f. #64391).
974     ///
975     /// Example: when compiling the call to `foo` here:
976     ///
977     /// ```ignore (illustrative)
978     /// foo(bar(), ...)
979     /// ```
980     ///
981     /// we would evaluate `bar()` to an operand `_X`. We would also
982     /// schedule `_X` to be dropped when the expression scope for
983     /// `foo(bar())` is exited. This is relevant, for example, if the
984     /// later arguments should unwind (it would ensure that `_X` gets
985     /// dropped). However, if no unwind occurs, then `_X` will be
986     /// unconditionally consumed by the `call`:
987     ///
988     /// ```ignore (illustrative)
989     /// bb {
990     ///   ...
991     ///   _R = CALL(foo, _X, ...)
992     /// }
993     /// ```
994     ///
995     /// However, `_X` is still registered to be dropped, and so if we
996     /// do nothing else, we would generate a `DROP(_X)` that occurs
997     /// after the call. This will later be optimized out by the
998     /// drop-elaboration code, but in the meantime it can lead to
999     /// spurious borrow-check errors -- the problem, ironically, is
1000     /// not the `DROP(_X)` itself, but the (spurious) unwind pathways
1001     /// that it creates. See #64391 for an example.
1002     pub(crate) fn record_operands_moved(&mut self, operands: &[Operand<'tcx>]) {
1003         let local_scope = self.local_scope();
1004         let scope = self.scopes.scopes.last_mut().unwrap();
1005 
1006         assert_eq!(scope.region_scope, local_scope, "local scope is not the topmost scope!",);
1007 
1008         // look for moves of a local variable, like `MOVE(_X)`
1009         let locals_moved = operands.iter().flat_map(|operand| match operand {
1010             Operand::Copy(_) | Operand::Constant(_) => None,
1011             Operand::Move(place) => place.as_local(),
1012         });
1013 
1014         for local in locals_moved {
1015             // check if we have a Drop for this operand and -- if so
1016             // -- add it to the list of moved operands. Note that this
1017             // local might not have been an operand created for this
1018             // call, it could come from other places too.
1019             if scope.drops.iter().any(|drop| drop.local == local && drop.kind == DropKind::Value) {
1020                 scope.moved_locals.push(local);
1021             }
1022         }
1023     }
1024 
1025     // Other
1026     // =====
1027 
1028     /// Returns the [DropIdx] for the innermost drop if the function unwound at
1029     /// this point. The `DropIdx` will be created if it doesn't already exist.
1030     fn diverge_cleanup(&mut self) -> DropIdx {
1031         // It is okay to use a dummy span because getting the scope index of the
1032         // topmost scope must always succeed.
1033         self.diverge_cleanup_target(self.scopes.topmost(), DUMMY_SP)
1034     }
1035 
1036     /// This is similar to [diverge_cleanup](Self::diverge_cleanup) except its target is set to
1037     /// some ancestor scope instead of the current scope.
1038     /// It is possible to unwind to some ancestor scope if some drop panics as
1039     /// the program breaks out of an if-then scope.
1040     fn diverge_cleanup_target(&mut self, target_scope: region::Scope, span: Span) -> DropIdx {
1041         let target = self.scopes.scope_index(target_scope, span);
1042         let (uncached_scope, mut cached_drop) = self.scopes.scopes[..=target]
1043             .iter()
1044             .enumerate()
1045             .rev()
1046             .find_map(|(scope_idx, scope)| {
1047                 scope.cached_unwind_block.map(|cached_block| (scope_idx + 1, cached_block))
1048             })
1049             .unwrap_or((0, ROOT_NODE));
1050 
1051         if uncached_scope > target {
1052             return cached_drop;
1053         }
1054 
1055         let is_generator = self.generator_kind.is_some();
1056         for scope in &mut self.scopes.scopes[uncached_scope..=target] {
1057             for drop in &scope.drops {
1058                 if is_generator || drop.kind == DropKind::Value {
1059                     cached_drop = self.scopes.unwind_drops.add_drop(*drop, cached_drop);
1060                 }
1061             }
1062             scope.cached_unwind_block = Some(cached_drop);
1063         }
1064 
1065         cached_drop
1066     }
1067 
1068     /// Prepares to create a path that performs all required cleanup for a
1069     /// terminator that can unwind at the given basic block.
1070     ///
1071     /// This path terminates in Resume. The path isn't created until after all
1072     /// of the non-unwind paths in this item have been lowered.
1073     pub(crate) fn diverge_from(&mut self, start: BasicBlock) {
1074         debug_assert!(
1075             matches!(
1076                 self.cfg.block_data(start).terminator().kind,
1077                 TerminatorKind::Assert { .. }
1078                     | TerminatorKind::Call { .. }
1079                     | TerminatorKind::Drop { .. }
1080                     | TerminatorKind::FalseUnwind { .. }
1081                     | TerminatorKind::InlineAsm { .. }
1082             ),
1083             "diverge_from called on block with terminator that cannot unwind."
1084         );
1085 
1086         let next_drop = self.diverge_cleanup();
1087         self.scopes.unwind_drops.add_entry(start, next_drop);
1088     }
1089 
1090     /// Sets up a path that performs all required cleanup for dropping a
1091     /// generator, starting from the given block that ends in
1092     /// [TerminatorKind::Yield].
1093     ///
1094     /// This path terminates in GeneratorDrop.
1095     pub(crate) fn generator_drop_cleanup(&mut self, yield_block: BasicBlock) {
1096         debug_assert!(
1097             matches!(
1098                 self.cfg.block_data(yield_block).terminator().kind,
1099                 TerminatorKind::Yield { .. }
1100             ),
1101             "generator_drop_cleanup called on block with non-yield terminator."
1102         );
1103         let (uncached_scope, mut cached_drop) = self
1104             .scopes
1105             .scopes
1106             .iter()
1107             .enumerate()
1108             .rev()
1109             .find_map(|(scope_idx, scope)| {
1110                 scope.cached_generator_drop_block.map(|cached_block| (scope_idx + 1, cached_block))
1111             })
1112             .unwrap_or((0, ROOT_NODE));
1113 
1114         for scope in &mut self.scopes.scopes[uncached_scope..] {
1115             for drop in &scope.drops {
1116                 cached_drop = self.scopes.generator_drops.add_drop(*drop, cached_drop);
1117             }
1118             scope.cached_generator_drop_block = Some(cached_drop);
1119         }
1120 
1121         self.scopes.generator_drops.add_entry(yield_block, cached_drop);
1122     }
1123 
1124     /// Utility function for *non*-scope code to build its own drops.
1125     /// Forces a drop at this point in the MIR by creating a new block.
1126     pub(crate) fn build_drop_and_replace(
1127         &mut self,
1128         block: BasicBlock,
1129         span: Span,
1130         place: Place<'tcx>,
1131         value: Rvalue<'tcx>,
1132     ) -> BlockAnd<()> {
1133         let source_info = self.source_info(span);
1134 
1135         // create the new block for the assignment
1136         let assign = self.cfg.start_new_block();
1137         self.cfg.push_assign(assign, source_info, place, value.clone());
1138 
1139         // create the new block for the assignment in the case of unwinding
1140         let assign_unwind = self.cfg.start_new_cleanup_block();
1141         self.cfg.push_assign(assign_unwind, source_info, place, value.clone());
1142 
1143         self.cfg.terminate(
1144             block,
1145             source_info,
1146             TerminatorKind::Drop {
1147                 place,
1148                 target: assign,
1149                 unwind: UnwindAction::Cleanup(assign_unwind),
1150                 replace: true,
1151             },
1152         );
1153         self.diverge_from(block);
1154 
1155         assign.unit()
1156     }
1157 
1158     /// Creates an `Assert` terminator and returns the success block.
1159     /// If the boolean condition operand is not the expected value,
1160     /// a runtime panic will be caused with the given message.
1161     pub(crate) fn assert(
1162         &mut self,
1163         block: BasicBlock,
1164         cond: Operand<'tcx>,
1165         expected: bool,
1166         msg: AssertMessage<'tcx>,
1167         span: Span,
1168     ) -> BasicBlock {
1169         let source_info = self.source_info(span);
1170         let success_block = self.cfg.start_new_block();
1171 
1172         self.cfg.terminate(
1173             block,
1174             source_info,
1175             TerminatorKind::Assert {
1176                 cond,
1177                 expected,
1178                 msg: Box::new(msg),
1179                 target: success_block,
1180                 unwind: UnwindAction::Continue,
1181             },
1182         );
1183         self.diverge_from(block);
1184 
1185         success_block
1186     }
1187 
1188     /// Unschedules any drops in the top scope.
1189     ///
1190     /// This is only needed for `match` arm scopes, because they have one
1191     /// entrance per pattern, but only one exit.
1192     pub(crate) fn clear_top_scope(&mut self, region_scope: region::Scope) {
1193         let top_scope = self.scopes.scopes.last_mut().unwrap();
1194 
1195         assert_eq!(top_scope.region_scope, region_scope);
1196 
1197         top_scope.drops.clear();
1198         top_scope.invalidate_cache();
1199     }
1200 }
1201 
1202 /// Builds drops for `pop_scope` and `leave_top_scope`.
1203 fn build_scope_drops<'tcx>(
1204     cfg: &mut CFG<'tcx>,
1205     unwind_drops: &mut DropTree,
1206     scope: &Scope,
1207     mut block: BasicBlock,
1208     mut unwind_to: DropIdx,
1209     storage_dead_on_unwind: bool,
1210     arg_count: usize,
1211 ) -> BlockAnd<()> {
1212     debug!("build_scope_drops({:?} -> {:?})", block, scope);
1213 
1214     // Build up the drops in evaluation order. The end result will
1215     // look like:
1216     //
1217     // [SDs, drops[n]] --..> [SDs, drop[1]] -> [SDs, drop[0]] -> [[SDs]]
1218     //               |                    |                 |
1219     //               :                    |                 |
1220     //                                    V                 V
1221     // [drop[n]] -...-> [drop[1]] ------> [drop[0]] ------> [last_unwind_to]
1222     //
1223     // The horizontal arrows represent the execution path when the drops return
1224     // successfully. The downwards arrows represent the execution path when the
1225     // drops panic (panicking while unwinding will abort, so there's no need for
1226     // another set of arrows).
1227     //
1228     // For generators, we unwind from a drop on a local to its StorageDead
1229     // statement. For other functions we don't worry about StorageDead. The
1230     // drops for the unwind path should have already been generated by
1231     // `diverge_cleanup_gen`.
1232 
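    // Walk the scheduled drops in reverse (i.e. in drop order), threading
    // `block` through a chain of `Drop` terminators while keeping `unwind_to`
    // pointed at the matching node of the unwind drop tree.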
    for drop_data in scope.drops.iter().rev() {
        let source_info = drop_data.source_info;
        let local = drop_data.local;

        match drop_data.kind {
            DropKind::Value => {
                // `unwind_to` should drop the value that we're about to
                // schedule. If dropping this value panics, then we continue
                // with the *next* value on the unwind path.
                debug_assert_eq!(unwind_drops.drops[unwind_to].0.local, drop_data.local);
                debug_assert_eq!(unwind_drops.drops[unwind_to].0.kind, drop_data.kind);
                unwind_to = unwind_drops.drops[unwind_to].1;

                // If the operand has been moved, and we are not on an unwind
                // path, then don't generate the drop. (We only take this into
                // account for non-unwind paths so as not to disturb the
                // caching mechanism.)
                if scope.moved_locals.iter().any(|&o| o == local) {
                    continue;
                }

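                // Record this block as an entry into the unwind drop tree:
                // when that tree is lowered, the unwind edge of the `Drop`
                // below is redirected to the tree node `unwind_to`.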
                unwind_drops.add_entry(block, unwind_to);

                let next = cfg.start_new_block();
                cfg.terminate(
                    block,
                    source_info,
                    TerminatorKind::Drop {
                        place: local.into(),
                        target: next,
                        unwind: UnwindAction::Continue,
                        replace: false,
                    },
                );
                block = next;
            }
            DropKind::Storage => {
                if storage_dead_on_unwind {
                    debug_assert_eq!(unwind_drops.drops[unwind_to].0.local, drop_data.local);
                    debug_assert_eq!(unwind_drops.drops[unwind_to].0.kind, drop_data.kind);
                    unwind_to = unwind_drops.drops[unwind_to].1;
                }
                // Only temps and vars need their storage dead.
                assert!(local.index() > arg_count);
                cfg.push(block, Statement { source_info, kind: StatementKind::StorageDead(local) });
            }
        }
    }
    block.unit()
}

impl<'a, 'tcx: 'a> Builder<'a, 'tcx> {
    /// Build a drop tree for a breakable scope.
    ///
    /// If `continue_block` is `Some`, then the tree is for `continue` inside a
    /// loop. Otherwise this is for `break` or `return`.
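    ///
    /// Returns the root block of the tree: the block that control reaches once
    /// the drops for a given exit have run (for `continue`, this is
    /// `continue_block` itself), or `None` if no block was needed.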
    fn build_exit_tree(
        &mut self,
        mut drops: DropTree,
        else_scope: region::Scope,
        span: Span,
        continue_block: Option<BasicBlock>,
    ) -> Option<BlockAnd<()>> {
        let mut blocks = IndexVec::from_elem(None, &drops.drops);
        blocks[ROOT_NODE] = continue_block;

        drops.build_mir::<ExitScopes>(&mut self.cfg, &mut blocks);
        let is_generator = self.generator_kind.is_some();

        // Link the exit drop tree to the unwind drop tree.
        if drops.drops.iter().any(|(drop, _)| drop.kind == DropKind::Value) {
            let unwind_target = self.diverge_cleanup_target(else_scope, span);
            let mut unwind_indices = IndexVec::from_elem_n(unwind_target, 1);
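            // `unwind_indices` maps each node of the exit tree to its
            // counterpart in the unwind tree; the root (index 0) maps to the
            // unwind target of `else_scope`.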
            for (drop_idx, drop_data) in drops.drops.iter_enumerated().skip(1) {
                match drop_data.0.kind {
                    DropKind::Storage => {
                        if is_generator {
                            let unwind_drop = self
                                .scopes
                                .unwind_drops
                                .add_drop(drop_data.0, unwind_indices[drop_data.1]);
                            unwind_indices.push(unwind_drop);
                        } else {
                            unwind_indices.push(unwind_indices[drop_data.1]);
                        }
                    }
                    DropKind::Value => {
                        let unwind_drop = self
                            .scopes
                            .unwind_drops
                            .add_drop(drop_data.0, unwind_indices[drop_data.1]);
                        self.scopes
                            .unwind_drops
                            .add_entry(blocks[drop_idx].unwrap(), unwind_indices[drop_data.1]);
                        unwind_indices.push(unwind_drop);
                    }
                }
            }
        }
        blocks[ROOT_NODE].map(BasicBlock::unit)
    }

    /// Build the unwind and generator drop trees.
    pub(crate) fn build_drop_trees(&mut self) {
        if self.generator_kind.is_some() {
            self.build_generator_drop_trees();
        } else {
            Self::build_unwind_tree(
                &mut self.cfg,
                &mut self.scopes.unwind_drops,
                self.fn_span,
                &mut None,
            );
        }
    }

    fn build_generator_drop_trees(&mut self) {
        // Build the drop tree for dropping the generator while it's suspended.
        let drops = &mut self.scopes.generator_drops;
        let cfg = &mut self.cfg;
        let fn_span = self.fn_span;
        let mut blocks = IndexVec::from_elem(None, &drops.drops);
        drops.build_mir::<GeneratorDrop>(cfg, &mut blocks);
        if let Some(root_block) = blocks[ROOT_NODE] {
            cfg.terminate(
                root_block,
                SourceInfo::outermost(fn_span),
                TerminatorKind::GeneratorDrop,
            );
        }

        // Build the drop tree for unwinding in the normal control flow paths.
        let resume_block = &mut None;
        let unwind_drops = &mut self.scopes.unwind_drops;
        Self::build_unwind_tree(cfg, unwind_drops, fn_span, resume_block);

        // Build the drop tree for unwinding when dropping a suspended
        // generator.
        //
        // This is a separate tree from the standard unwind paths, to prevent
        // drop elaboration from creating drop flags that would have to be
        // captured by the generator. I'm not sure how important this
        // optimization is, but it is here.
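        // Each value drop in the generator drop tree becomes an entry point
        // into this second unwind tree, so a panic while dropping the
        // suspended generator continues unwinding through the remaining drops.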
        for (drop_idx, drop_data) in drops.drops.iter_enumerated() {
            if let DropKind::Value = drop_data.0.kind {
                debug_assert!(drop_data.1 < drops.drops.next_index());
                drops.entry_points.push((drop_data.1, blocks[drop_idx].unwrap()));
            }
        }
        Self::build_unwind_tree(cfg, drops, fn_span, resume_block);
    }

    fn build_unwind_tree(
        cfg: &mut CFG<'tcx>,
        drops: &mut DropTree,
        fn_span: Span,
        resume_block: &mut Option<BasicBlock>,
    ) {
        let mut blocks = IndexVec::from_elem(None, &drops.drops);
        blocks[ROOT_NODE] = *resume_block;
        drops.build_mir::<Unwind>(cfg, &mut blocks);
        if let (None, Some(resume)) = (*resume_block, blocks[ROOT_NODE]) {
            cfg.terminate(resume, SourceInfo::outermost(fn_span), TerminatorKind::Resume);

            *resume_block = blocks[ROOT_NODE];
        }
    }
}

// DropTreeBuilder implementations.

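// Each implementation tells `DropTree::build_mir` how to create the blocks for
// a tree (`make_block`) and how to redirect the terminator of an entry block
// into it (`add_entry`): exit trees use plain `Goto`s, generator drop trees
// fill in the `drop` edge of a `Yield`, and unwind trees use cleanup blocks
// and `UnwindAction::Cleanup` edges.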
struct ExitScopes;

impl<'tcx> DropTreeBuilder<'tcx> for ExitScopes {
    fn make_block(cfg: &mut CFG<'tcx>) -> BasicBlock {
        cfg.start_new_block()
    }
    fn add_entry(cfg: &mut CFG<'tcx>, from: BasicBlock, to: BasicBlock) {
        cfg.block_data_mut(from).terminator_mut().kind = TerminatorKind::Goto { target: to };
    }
}

struct GeneratorDrop;

impl<'tcx> DropTreeBuilder<'tcx> for GeneratorDrop {
    fn make_block(cfg: &mut CFG<'tcx>) -> BasicBlock {
        cfg.start_new_block()
    }
    fn add_entry(cfg: &mut CFG<'tcx>, from: BasicBlock, to: BasicBlock) {
        let term = cfg.block_data_mut(from).terminator_mut();
        if let TerminatorKind::Yield { ref mut drop, .. } = term.kind {
            *drop = Some(to);
        } else {
            span_bug!(
                term.source_info.span,
                "cannot enter generator drop tree from {:?}",
                term.kind
            )
        }
    }
}

struct Unwind;

impl<'tcx> DropTreeBuilder<'tcx> for Unwind {
    fn make_block(cfg: &mut CFG<'tcx>) -> BasicBlock {
        cfg.start_new_cleanup_block()
    }
    fn add_entry(cfg: &mut CFG<'tcx>, from: BasicBlock, to: BasicBlock) {
        let term = &mut cfg.block_data_mut(from).terminator_mut();
        match &mut term.kind {
            TerminatorKind::Drop { unwind, .. } => {
                if let UnwindAction::Cleanup(unwind) = *unwind {
                    let source_info = term.source_info;
                    cfg.terminate(unwind, source_info, TerminatorKind::Goto { target: to });
                } else {
                    *unwind = UnwindAction::Cleanup(to);
                }
            }
            TerminatorKind::FalseUnwind { unwind, .. }
            | TerminatorKind::Call { unwind, .. }
            | TerminatorKind::Assert { unwind, .. }
            | TerminatorKind::InlineAsm { unwind, .. } => {
                *unwind = UnwindAction::Cleanup(to);
            }
            TerminatorKind::Goto { .. }
            | TerminatorKind::SwitchInt { .. }
            | TerminatorKind::Resume
            | TerminatorKind::Terminate
            | TerminatorKind::Return
            | TerminatorKind::Unreachable
            | TerminatorKind::Yield { .. }
            | TerminatorKind::GeneratorDrop
            | TerminatorKind::FalseEdge { .. } => {
                span_bug!(term.source_info.span, "cannot unwind from {:?}", term.kind)
            }
        }
    }
}