Lines Matching full:as_strided
501 "vmap: Calling Tensor.as_strided is not supported unless the batch dims being ", in checkBatchDimsAtFrontInLayout()
505 "express the as_strided operation in terms of PyTorch view operations"); in checkBatchDimsAtFrontInLayout()
522 // x.as_strided(sizes, strides, maybe_storage_offset)
544 "result = tensor.as_strided(", sizes, ",", strides, ",", storage_offset, ")", in checkBasicAsStridedValidForSlice()
548 "`as_strided` call as a sequence of PyTorch view operations"); in checkBasicAsStridedValidForSlice()
553 "result = tensor.as_strided(", sizes, ",", strides, ",", storage_offset, ")", in checkBasicAsStridedValidForSlice()
558 "rewrite the `as_strided` call as a sequence of PyTorch view operations"); in checkBasicAsStridedValidForSlice()
586 // tensor because using as_strided to access storage locations not indexable in _has_same_storage_numel_batching_rule()
591 // What are the semantics of as_strided inside of vmap?
592 // y = vmap(lambda x: x.as_strided(sizes, strides, offset))(xs)
599 // offset equal to xs.offset() and called as_strided(sizes, strides, offset).
600 // (that is equivalent to x[i].as_strided(
603 // Note that this *may* be different from actually running as_strided
604 // in a for-loop. This is due to how as_strided takes in `offset` to be
606 // >>> x = torch.tensor([0., 1., 2., 3., 4.]).as_strided([4], [1], 1)
607 // >>> z = [x[i].as_strided([1], [1], 1) for i in range(4)]
610 // a user should have written the following if they wanted to use as_strided
612 // >>> z = [x[i].as_strided([1], [1], 1 + x[i].storage_offset() - 1) for i in range(4)]
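The matched comments above describe the vmap semantics of as_strided: the naive per-sample loop aliases one storage location for every i (because `offset` is absolute), while vmap adjusts the offset per sample. A minimal pure-Python sketch of those storage-location semantics, with a hypothetical helper `as_strided_locations` (not part of PyTorch) standing in for the view:

```python
import itertools

def as_strided_locations(sizes, strides, storage_offset):
    """Storage indices read by tensor.as_strided(sizes, strides, storage_offset)."""
    return [
        storage_offset + sum(i * s for i, s in zip(idx, strides))
        for idx in itertools.product(*(range(n) for n in sizes))
    ]

# Model of the comment's example: storage holds [0., 1., 2., 3., 4.] and
# x = base.as_strided([4], [1], 1), so x views [1., 2., 3., 4.] at offset 1.
x_offset = 1

# Naive loop from the comment: every x[i].as_strided([1], [1], 1) lands on
# storage index 1, because `offset` is absolute into the storage.
naive = [as_strided_locations([1], [1], 1)[0] for _ in range(4)]

# vmap semantics: use offset + x[i].storage_offset() - x.storage_offset(),
# where x[i].storage_offset() == x_offset + i.
vmapped = [as_strided_locations([1], [1], 1 + (x_offset + i) - x_offset)[0]
           for i in range(4)]

print(naive)    # all four slices alias the same storage index
print(vmapped)  # one distinct storage index per sample
```

Running this prints `[1, 1, 1, 1]` for the naive loop and `[1, 2, 3, 4]` for the per-sample-adjusted version, matching the comment's claim that each z[i] in the naive loop is the same view.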
623 // We can't rely on the physical as_strided call to do this for us because in as_strided_batching_rule()
624 // we do some sanity checks on the size/strides before calling into as_strided. in as_strided_batching_rule()
626 "Tensor.as_strided(size, stride, ...): size and stride must have the ", in as_strided_batching_rule()
632 // 2. as_strided(sizes, strides, storage_offset + tensor[i].offset() - tensor.offset()) in as_strided_batching_rule()
634 // See Note: [When will the as_strided batching rule fail?] for details. in as_strided_batching_rule()
648 // If zi = xs[i].as_strided(sizes, strides, offset + xs[i].offset() - xs.offset()) in as_strided_batching_rule()
650 // xs.as_strided(physical_sizes, physical_strides, offset) always succeeds in as_strided_batching_rule()
652 // locations as zi. See NOTE: [When will the as_strided batching rule fail?] in as_strided_batching_rule()
653 auto result = physical_view.tensor().as_strided( in as_strided_batching_rule()
658 // NOTE: [When will the as_strided batching rule fail?]
659 // If zi = xs[i].as_strided(sizes, strides, offset + xs[i].offset() - xs.offset())
661 // xs.as_strided(physical_sizes, physical_strides, offset) always succeeds and
664 // Let's say we have xs[i].as_strided(sizes, strides, offset + xs[i].offset() - xs.offset()).
665 // Furthermore, let's say that as a part of being "valid" this as_strided call
681 // xs[i].as_strided(sizes, strides, offset + xs[i].offset() - xs.offset()) has:
686 // x.as_strided itself checks that:
691 // Claim 1: if xs[i].as_strided(sizes, strides, offset + xs[i].offset() - xs.offset())
695 // If we have the claim, then xs.as_strided([B] + sizes, [S] + strides, offset)
699 // xs.as_strided(physical_sizes, physical_strides, offset) is equivalent to
700 // xs.as_strided([B] + sizes, [S] + strides, offset)
702 // xs.as_strided([B] + sizes, [S] + strides, offset) has:
707 // xs.as_strided([B] + sizes, [S] + strides, offset)[i] has:
712 // so the xs.as_strided([B] + sizes, [S] + strides, offset) call is valid.
715 // Part of our definition of being valid is that xs[i].as_strided(...)
720 // (the largest-index memory location of xs[i].as_strided(...) must be \leq
735 // (the largest-index memory location of xs.as_strided(size, stride, offset)
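The validity argument in the matched NOTE bounds the largest-index memory location of the view. A small sketch of that bound, assuming non-negative strides as the note does (the numbers and the helper name `max_index` are illustrative only): the largest index touched by the batched call equals the largest index touched by any per-sample call, so if every per-sample call stays in bounds, the batched call does too.

```python
def max_index(sizes, strides, offset):
    # Largest storage index touched by as_strided(sizes, strides, offset):
    # offset + sum_j (sizes[j] - 1) * strides[j], valid for non-negative strides.
    return offset + sum((n - 1) * s for n, s in zip(sizes, strides))

B, S = 4, 8                       # batch size and batch stride
sizes, strides, offset = [2, 3], [3, 1], 2

# Largest index over all per-sample calls
# xs[i].as_strided(sizes, strides, offset + i * S) (xs assumed to have offset 0).
per_sample_max = max(max_index(sizes, strides, offset + i * S) for i in range(B))

# Largest index of the single batched call.
batched_max = max_index([B] + sizes, [S] + strides, offset)

assert batched_max == per_sample_max
print("batched max index:", batched_max)
```

The equality holds because the batched formula just adds the term (B - 1) * S, which is exactly the offset shift of the last sample's call.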
1101 m.impl("as_strided", as_strided_batching_rule); in TORCH_LIBRARY_IMPL()