| /third_party/typescript/tests/baselines/reference/ |
| D | conditionalOperatorConditionIsNumberType.js | 23 condNumber ? exprString1 : exprBoolean1; // Union 29 - 10000000000000 ? exprString1 : exprString2; 31 10000 ? exprString1 : exprBoolean1; // Union 35 var array = [1, 2, 3]; variable 41 foo() / array[1] ? exprIsObject1 : exprIsObject2; 42 foo() ? exprString1 : exprBoolean1; // Union 50 var resultIsStringOrBoolean1 = condNumber ? exprString1 : exprBoolean1; // Union 55 var resultIsString2 = - 10000000000000 ? exprString1 : exprString2; 57 var resultIsStringOrBoolean2 = 10000 ? exprString1 : exprBoolean1; // Union 63 var resultIsObject3 = foo() / array[1] ? exprIsObject1 : exprIsObject2; [all …]
|
| D | conditionalOperatorConditionIsNumberType.types | 67 condNumber ? exprString1 : exprBoolean1; // Union 92 - 10000000000000 ? exprString1 : exprString2; 93 >- 10000000000000 ? exprString1 : exprString2 : string 94 >- 10000000000000 : -10000000000000 105 10000 ? exprString1 : exprBoolean1; // Union 116 var array = [1, 2, 3]; 117 >array : number[] 154 foo() / array[1] ? exprIsObject1 : exprIsObject2; 155 >foo() / array[1] ? exprIsObject1 : exprIsObject2 : Object 156 >foo() / array[1] : number [all …]
|
| D | conditionalOperatorConditoinIsStringType.symbols | 20 >Object : Symbol(Object, Decl(lib.es5.d.ts, --, --), Decl(lib.es5.d.ts, --, --)) 36 >Object : Symbol(Object, Decl(lib.es5.d.ts, --, --), Decl(lib.es5.d.ts, --, --)) 64 condString ? exprString1 : exprBoolean1; // union 90 "hello " ? exprString1 : exprBoolean1; // union 98 var array = ["1", "2", "3"]; 99 >array : Symbol(array, Decl(conditionalOperatorConditoinIsStringType.ts, 33, 3)) 107 >condString.toUpperCase : Symbol(String.toUpperCase, Decl(lib.es5.d.ts, --, --)) 109 >toUpperCase : Symbol(String.toUpperCase, Decl(lib.es5.d.ts, --, --)) 123 array[1] ? exprIsObject1 : exprIsObject2; 124 >array : Symbol(array, Decl(conditionalOperatorConditoinIsStringType.ts, 33, 3)) [all …]
|
| D | conditionalOperatorConditionIsNumberType.symbols | 20 >Object : Symbol(Object, Decl(lib.es5.d.ts, --, --), Decl(lib.es5.d.ts, --, --)) 36 >Object : Symbol(Object, Decl(lib.es5.d.ts, --, --), Decl(lib.es5.d.ts, --, --)) 64 condNumber ? exprString1 : exprBoolean1; // Union 82 - 10000000000000 ? exprString1 : exprString2; 90 10000 ? exprString1 : exprBoolean1; // Union 98 var array = [1, 2, 3]; 99 >array : Symbol(array, Decl(conditionalOperatorConditionIsNumberType.ts, 33, 3)) 110 >"string".length : Symbol(String.length, Decl(lib.es5.d.ts, --, --)) 111 >length : Symbol(String.length, Decl(lib.es5.d.ts, --, --)) 120 foo() / array[1] ? exprIsObject1 : exprIsObject2; [all …]
|
| /third_party/skia/third_party/externals/swiftshader/third_party/llvm-10.0/llvm/include/llvm/CodeGen/ |
| D | LiveIntervalUnion.h | 1 //===- LiveIntervalUnion.h - Live interval union data struct ---*- C++ -*--===// 5 // SPDX-License-Identifier: Apache-2.0 WITH LLVM-exception 7 //===----------------------------------------------------------------------===// 9 // LiveIntervalUnion is a union of live segments across multiple live virtual 14 //===----------------------------------------------------------------------===// 38 /// Union of live intervals that are strong candidates for coalescing into a 41 /// eventually make exceptions to handle value-based interference. 62 LiveSegments Segments; // union of virtual reg segments 67 // Iterate over all segments in the union of live virtual registers ordered 84 /// getTag - Return an opaque tag representing the current state of the union. [all …]
|
| /third_party/mindspore/mindspore-src/source/docs/api/api_python/nn/ |
| D | mindspore.nn.Conv2dTranspose.rst | 9 …, and `pad_mode` is set to "pad", it pads :math:`dilation * (kernel\_size - 1) - padding` zeros along the height and width of the input, in which case it… 13 - **in_channels** (int) - Spatial dimension of the input Tensor of the Conv2dTranspose layer. 14 - **out_channels** (int) - Spatial dimension of the output Tensor of the Conv2dTranspose layer. 15 …- **kernel_size** (Union[int, tuple[int]]) - Height and width of the 2D convolution kernel. An int or a tuple of two ints. A single int means the kernel height and width are both that value.… 16 …- **stride** (Union[int, tuple[int]]) - Stride of the 2D convolution kernel. An int or a tuple of two ints. A single int means the stride along both height and width is that value. Two ints… 17 …- **pad_mode** (str, optional) - Padding mode; the padding value is 0. Can be ``"same"``, ``"valid"`` or ``"pad"``. Default: ``"same"``. 19 …- ``"same"``: Pads around the input so that, when `stride` is ``1``, input and output have the same shape. The amount of padding is computed by the operator; if it is even it is distributed evenly on all sides, if it is odd the extra padding… 20 - ``"valid"``: No padding; returns the largest possible output height and width, and extra pixels that cannot form a full stride are discarded. If this mode is set, `padding` must be 0. 21 - ``"pad"``: Pads the input by a specified amount. In this mode the amount of padding along the height and width of the input is given by the `padding` parameter. If this mode is set, `padding` must be greater than or equal to 0. 23 …- **padding** (Union[int, tuple[int]]) - Amount of padding along the height and width of the input. An int or a tuple of four ints. If `padding` is an int,… [all …]
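The hit above lists the constructor arguments of `mindspore.nn.Conv2dTranspose`. A minimal sketch of how they fit together; the input shape and hyper-parameter values are illustrative assumptions, not taken from the file:

```python
import numpy as np
import mindspore as ms
from mindspore import nn, Tensor

# Assumed layer configuration: 3 -> 8 channels, 4x4 kernel, stride 2, explicit padding.
net = nn.Conv2dTranspose(in_channels=3, out_channels=8, kernel_size=4,
                         stride=2, pad_mode="pad", padding=1)
x = Tensor(np.ones((1, 3, 16, 16)), ms.float32)   # assumed NCHW input
y = net(x)
print(y.shape)  # (1, 8, 32, 32): stride=2 doubles height and width here
```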
|
| D | mindspore.nn.Conv3dTranspose.rst | 9 …`pad_mode` is set to "pad", it pads :math:`dilation * (kernel\_size - 1) - padding` zeros along the depth, height and width of the input, in which case it… 13 - **in_channels** (int) - Spatial dimension of the input Tensor of the Conv3dTranspose layer. 14 - **out_channels** (int) - Spatial dimension of the output Tensor of the Conv3dTranspose layer. 15 …- **kernel_size** (Union[int, tuple[int]]) - Depth, height and width of the 3D convolution kernel. An int or a tuple of three ints. A single int means the kernel depth, height… 16 …- **stride** (Union[int, tuple[int]]) - Stride of the 3D convolution kernel. An int or a tuple of three ints. A single int means the stride along depth, height and width is that value. Three… 17 …- **pad_mode** (str, optional) - Padding mode; the padding value is 0. Can be ``"same"``, ``"valid"`` or ``"pad"``. Default: ``"same"``. 19 …- ``"same"``: Pads along the depth, height and width dimensions of the input so that, when `stride` is ``1``, input and output have the same shape. The amount of padding is computed by the operator; if it is even it is distributed evenly, if… 20 … - ``"valid"``: No padding; returns the largest possible output depth, height and width, and extra pixels that cannot form a full stride are discarded. If this mode is set, `padding` must be 0. 21 … - ``"pad"``: Pads the input by a specified amount. In this mode the amount of padding along the depth, height and width of the input is given by the `padding` parameter. If this mode is set, `padding` must be greater than or equal to 0. 23 …- **padding** (Union(int, tuple[int])) - Amount of padding along the depth, height and width of the input. An int or a tuple of 6 ints. If `padding` is an… [all …]
|
| D | mindspore.nn.Conv2d.rst | 15 … \sum_{k = 0}^{C_{in} - 1} \text{ccor}({\text{weight}(C_{\text{out}_j}, k), \text{X}(N_i, k)}) 17 …where :math:`bias` is the output bias and :math:`ccor` is the `cross-correlation <https://en.wikipedia.org/wiki/Cross-corr… 20 - :math:`i` indexes the batch, in the range :math:`[0, N-1]`, where :math:`N` is the input batch size. 22 - :math:`j` indexes the output channel, in the range :math:`[0, C_{out}-1]`, where :math:`C_{out}` is the number of output channels, which also equals the number of kernels. 24 - :math:`k` indexes the input channel, in the range :math:`[0, C_{in}-1]`, where :math:`C_{in}` is the number of input channels, which also equals the number of kernel channels. 39 - **in_channels** (int) - Spatial dimension of the input Tensor of the Conv2d layer. 40 - **out_channels** (int) - Spatial dimension of the output Tensor of the Conv2d layer. 41 …- **kernel_size** (Union[int, tuple[int]]) - Height and width of the 2D convolution kernel. An int or a tuple of two ints. A single int means the kernel height and width are both that value.… 42 …- **stride** (Union[int, tuple[int]], optional) - Stride of the 2D convolution kernel. An int or a tuple of two or four ints. A single int means the stride along both height and width… 43 …- **pad_mode** (str, optional) - Padding mode; the padding value is 0. Can be ``"same"``, ``"valid"`` or ``"pad"``. Default: ``"same"``. [all …]
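The Conv2d hit documents the same `pad_mode` choices; a small sketch contrasting ``"same"`` and ``"valid"``, with an assumed input shape and kernel size:

```python
import numpy as np
import mindspore as ms
from mindspore import nn, Tensor

x = Tensor(np.ones((1, 3, 32, 32)), ms.float32)                # assumed NCHW input
same = nn.Conv2d(3, 16, kernel_size=3, stride=1, pad_mode="same")
valid = nn.Conv2d(3, 16, kernel_size=3, stride=1, pad_mode="valid")
print(same(x).shape)   # (1, 16, 32, 32): padding keeps H and W unchanged at stride 1
print(valid(x).shape)  # (1, 16, 30, 30): no padding, H and W shrink by kernel_size - 1
```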
|
| D | mindspore.nn.Conv3d.rst | 15 … \sum_{k = 0}^{C_{in} - 1} \text{ccor}({\text{weight}(C_{\text{out}_j}, k), \text{X}(N_i, k)}) 17 …where :math:`bias` is the output bias and :math:`ccor` is the `cross-correlation <https://en.wikipedia.org/wiki/Cross-corr… 20 - :math:`i` indexes the batch, in the range :math:`[0, N-1]`, where :math:`N` is the input batch size. 22 - :math:`j` indexes the output channel, in the range :math:`[0, C_{out}-1]`, where :math:`C_{out}` is the number of output channels, which also equals the number of kernels. 24 - :math:`k` indexes the input channel, in the range :math:`[0, C_{in}-1]`, where :math:`C_{in}` is the number of input channels, which also equals the number of kernel channels. 40 - **in_channels** (int) - Spatial dimension of the input Tensor of the Conv3d layer. 41 - **out_channels** (int) - Spatial dimension of the output Tensor of the Conv3d layer. 42 …- **kernel_size** (Union[int, tuple[int]]) - Depth, height and width of the 3D convolution kernel. A single int or a tuple of 3 ints. A single int means the value applies to the kernel… 43 …- **stride** (Union[int, tuple[int]], optional) - Stride of the 3D convolution kernel. An int or a tuple of three ints. A single int means the stride along depth, height and width is that… 44 …- **pad_mode** (str, optional) - Padding mode; the padding value is 0. Can be ``"same"``, ``"valid"`` or ``"pad"``. Default: ``"same"``. [all …]
|
| /third_party/mindspore/mindspore-src/source/mindspore/python/mindspore/numpy/ |
| D | array_creations.py | 1 # Copyright 2020-2021 Huawei Technologies Co., Ltd 7 # http://www.apache.org/licenses/LICENSE-2.0 15 """array operations, the function docs are adapted from Numpy API.""" 49 # According to official numpy reference, the dimension of a numpy array must be less 60 def array(obj, dtype=None, copy=True, ndmin=0): function 64 This function creates tensors from an array-like object. 67 obj (Union[int, float, bool, list, tuple]): Input data, in any form that 69 dtype (Union[:class:`mindspore.dtype`, str], optional): Designated tensor dtype, can 75 tensor should have. Ones will be pre-pended to the shape as needed to 90 >>> print(np.array([1,2,3])) [all …]
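The `array()` docstring above names `dtype` and `ndmin`; a brief sketch under assumed literal inputs:

```python
import mindspore.numpy as np

a = np.array([1, 2, 3])                    # Tensor of shape (3,)
b = np.array([1, 2, 3], dtype="float32")   # explicit dtype
c = np.array([1, 2, 3], ndmin=2)           # ones are pre-pended to the shape: (1, 3)
print(a.shape, b.dtype, c.shape)
```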
|
| D | math_ops.py | 1 # Copyright 2020-2024 Huawei Technologies Co., Ltd 7 # http://www.apache.org/licenses/LICENSE-2.0 67 _concat = P.Concat(-1) 76 Calculates the absolute value element-wise. 100 >>> x = np.asarray([1, 2, 3, -4, -5], np.float32) 119 Counts the number of non-zero values in the tensor `x`. 122 x (Tensor): The tensor for which to count non-zeros. 123 axis (Union[int,tuple], optional): Axis or tuple of axes along which to 124 count non-zeros. Default is None, meaning that non-zeros will be counted 131 Tensor, indicating number of non-zero values in the `x` along a given axis. [all …]
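The `count_nonzero` docstring above takes an optional `axis`; a tiny sketch with made-up values:

```python
import mindspore.numpy as np

x = np.array([[0, 1, 2], [3, 0, 0]])
print(np.count_nonzero(x))          # 3: non-zeros in the whole tensor
print(np.count_nonzero(x, axis=0))  # [1 1 1]: counted down each column
```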
|
| D | logic_ops.py | 7 # http://www.apache.org/licenses/LICENSE-2.0 32 Returns (x1 != x2) element-wise. 45 Tensor or scalar, element-wise comparison of `x1` and `x2`. Typically of type 69 Returns the truth value of ``(x1 <= x2)`` element-wise. 76 x1 (Tensor): Input array. 77 x2 (Tensor): Input array. If ``x1.shape != x2.shape``, they must be 83 Tensor or scalar, element-wise comparison of `x1` and `x2`. Typically of type 92 >>> output = np.less_equal(np.array([4, 2, 1]), np.array([2, 2, 2])) 102 Returns the truth value of ``(x1 < x2)`` element-wise. 109 x1 (Tensor): input array. [all …]
|
| D | array_ops.py | 1 # Copyright 2020-2021 Huawei Technologies Co., Ltd 7 # http://www.apache.org/licenses/LICENSE-2.0 15 """array operations, the function docs are adapted from Numpy API.""" 39 # According to official numpy reference, the dimension of a numpy array must be less 51 a (Tensor): Input tensor array. 52 axis (Union[int, list(int), tuple(int)]): Position in the expanded axes where 86 Removes single-dimensional entries from the shape of a tensor. 89 a (Tensor): Input tensor array. 90 axis (Union[None, int, list(int), tuple(list)]): The axis(axes) to squeeze, 120 axes (Union[None, tuple, list]): the axes order, if `axes` is `None`, transpose [all …]
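The `expand_dims` / `squeeze` / `transpose` hits above describe axis manipulation; a short sketch with assumed shapes:

```python
import mindspore.numpy as np

a = np.ones((2, 3))
b = np.expand_dims(a, axis=0)   # shape (1, 2, 3)
c = np.squeeze(b, axis=0)       # back to (2, 3)
d = np.transpose(a)             # axes=None reverses the order: (3, 2)
print(b.shape, c.shape, d.shape)
```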
|
| /third_party/typescript/tests/cases/conformance/expressions/conditonalOperator/ |
| D | conditionalOperatorConditionIsNumberType.ts | 22 condNumber ? exprString1 : exprBoolean1; // Union 28 - 10000000000000 ? exprString1 : exprString2; 30 10000 ? exprString1 : exprBoolean1; // Union 34 var array = [1, 2, 3]; variable 40 foo() / array[1] ? exprIsObject1 : exprIsObject2; 41 foo() ? exprString1 : exprBoolean1; // Union 49 var resultIsStringOrBoolean1 = condNumber ? exprString1 : exprBoolean1; // Union 54 var resultIsString2 = - 10000000000000 ? exprString1 : exprString2; 56 var resultIsStringOrBoolean2 = 10000 ? exprString1 : exprBoolean1; // Union 62 var resultIsObject3 = foo() / array[1] ? exprIsObject1 : exprIsObject2; [all …]
|
| /third_party/mindspore/mindspore-src/source/mindspore/python/mindspore/ops/function/ |
| D | clip_func.py | 7 # http://www.apache.org/licenses/LICENSE-2.0 74 x (Union(Tensor, list[Tensor], tuple[Tensor])): Input that wishes to be clipped. 75 … max_norm (Union(float, int)): The upper limit of the norm for this group of network parameters. 76 norm_type (Union(float, int)): Norm type. Default: ``2.0``. 78 … is nan, inf or -inf. If it is ``False``, no exception will be thrown.Default: ``False`` . 84 RuntimeError: If the total norm from the `x` is nan, inf or -inf. 104 …se RuntimeError(f"For clip_by_norm, the total norm of order {norm_type} from input is non-finite.") 105 clip_coef = max_norm / (total_norm + 1e-6) 125 \begin{array}{align} 129 \end{array}\right. [all …]
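The `clip_by_norm` snippet above shows the scaling coefficient `max_norm / (total_norm + 1e-6)`; a sketch with an assumed input whose L2 norm is 5:

```python
import numpy as np
import mindspore as ms
from mindspore import Tensor, ops

x = Tensor(np.array([[3.0, 4.0]]), ms.float32)   # L2 norm is 5.0
clipped = ops.clip_by_norm(x, max_norm=1.0)      # rescaled so the total norm is <= max_norm
print(clipped)                                   # roughly [[0.6, 0.8]]
```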
|
| /third_party/skia/third_party/externals/swiftshader/third_party/llvm-subzero/include/llvm/Support/ |
| D | AlignOf.h | 1 //===--- AlignOf.h - Portable calculation of type alignment -----*- C++ -*-===// 8 //===----------------------------------------------------------------------===// 12 //===----------------------------------------------------------------------===// 23 /// \brief Helper for building an aligned character array type. 26 /// character array types. We have to build these up using a macro and explicit 48 // a member of a by-value function argument in MSVC, even if the alignment 49 // request is something reasonably like 8-byte or 16-byte. Note that we can't 50 // even include the declspec with the union that forces the alignment because 51 // MSVC warns on the existence of the declspec despite the union member forcing 56 union { [all …]
|
| /third_party/mindspore/mindspore-src/source/mindspore/python/mindspore/ |
| D | amp.py | 7 # http://www.apache.org/licenses/LICENSE-2.0 100 return 1 - status.all() 120 status_finite = ~AllFinite()(inputs) # pylint: disable=invalid-unary-operand-type 140 inputs (Union(tuple(Tensor), list(Tensor))): a iterable Tensor. 151 >>> x = (Tensor(np.array([np.log(-1), 1, np.log(0)])), Tensor(np.array([1.0]))) 155 - `Automatic Mix Precision - Loss Scaling 156 <https://mindspore.cn/tutorials/en/master/advanced/mixed_precision.html#loss-scaling>`_ 173 mixed_precision.html#loss-scaling>`_. 209 inputs (Union(Tensor, tuple(Tensor))): the input loss value or gradients. 219 inputs (Union(Tensor, tuple(Tensor))): the input loss value or gradients. [all …]
|
| /third_party/mindspore/mindspore-src/source/docs/api/api_python/ops/ |
| D | mindspore.ops.ApplyAdamWithAmsgradV2.rst | 11 \begin{array}{ll} \\ 12 lr_t:=learning\_rate*\sqrt{1-\beta_2^t}/(1-\beta_1^t) \\ 13 m_t:=\beta_1*m_{t-1}+(1-\beta_1)*g \\ 14 v_t:=\beta_2*v_{t-1}+(1-\beta_2)*g*g \\ 15 \hat v_t:=\max(\hat v_{t-1}, v_t) \\ 16 var:=var-lr_t*m_t/(\sqrt{\hat v_t}+\epsilon) \\ 17 \end{array} 24 …- **use_locking** (bool) - If ``True``, updates to `var`, `m` and `v` are protected by a lock. Otherwise the behavior is undefined, but will probably exhibit less contention. Default: `… 27 - **var** (Parameter) - Network parameter to be updated, of any dimension. Data type is float16, float32 or float64. 28 - **m** (Parameter) - First moment, with the same shape as `var`. [all …]
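Since the full update (snippet lines 12-16) is visible here, a plain-NumPy restatement may help make the order of operations concrete; the values fed in are assumptions, and this is a sketch of the quoted equations rather than the operator's actual implementation:

```python
import numpy as np

def amsgrad_step(var, m, v, vhat, g, lr, beta1, beta2, eps, t):
    # lr_t := lr * sqrt(1 - beta2^t) / (1 - beta1^t)
    lr_t = lr * np.sqrt(1 - beta2**t) / (1 - beta1**t)
    m = beta1 * m + (1 - beta1) * g          # first moment
    v = beta2 * v + (1 - beta2) * g * g      # second moment
    vhat = np.maximum(vhat, v)               # AMSGrad: keep the running maximum
    var = var - lr_t * m / (np.sqrt(vhat) + eps)
    return var, m, v, vhat

var = np.array([1.0, 2.0]); m = v = vhat = np.zeros(2)
var, m, v, vhat = amsgrad_step(var, m, v, vhat, g=np.array([0.1, -0.2]),
                               lr=0.01, beta1=0.9, beta2=0.999, eps=1e-8, t=1)
print(var)
```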
|
| D | mindspore.ops.ApplyAddSign.rst | 9 \begin{array}{ll} \\ 10 m_{t+1} = \beta * m_{t} + (1 - \beta) * g \\ 12 var = var - lr_{t+1} * \text{update} 13 \end{array} 22 - **var** (Parameter) - Weights to be updated. Any dimension, with shape :math:`(N, *)`, where :math:`*` is any number of additional dimensions. 23 - **m** (Parameter) - Weights to be updated, with the same shape as `var`. 24 - **lr** (Union[Number, Tensor]) - Learning rate, must be a Scalar. 25 - **sign_decay** (Union[Number, Tensor]) - Must be a Scalar. 26 - **alpha** (Union[Number, Tensor]) - Must be a Scalar. 27 - **beta** (Union[Number, Tensor]) - Exponential decay rate, must be a Scalar. [all …]
|
| D | mindspore.ops.ApplyAdaMax.rst | 13 \begin{array}{ll} \\ 14 m_{t+1} = \beta_1 * m_{t} + (1 - \beta_1) * g \\ 16 var = var - \frac{l}{1 - \beta_1^{t+1}} * \frac{m_{t+1}}{v_{t+1} + \epsilon} 17 \end{array} 24 …- **var** (Parameter) - Network parameter to be updated, of any dimension. Data type is float32 or float16. Its shape is :math:`(N, *)`, where :math:`*` is… 25 - **m** (Parameter) - First moment, with the same shape as `var`. Data type is float32 or float16. 26 - **v** (Parameter) - Second moment, with the same shape as `var`. Data type is float32 or float16. 27 … - **beta1_power** (Union[Number, Tensor]) - :math:`beta_1^t`, must be a Scalar. Data type is float32 or float16. 28 - **lr** (Union[Number, Tensor]) - Learning rate, :math:`l` in the formula, must be a Scalar. Data type is float32 or float16. 29 - **beta1** (Union[Number, Tensor]) - Exponential decay rate of the first moment, must be a Scalar. Data type is float32 or float16. [all …]
|
| D | mindspore.ops.ApplyPowerSign.rst | 11 \begin{array}{ll} \\ 12 m_{t+1} = \beta * m_{t} + (1 - \beta) * g \\ 14 var = var - lr_{t+1} * \text{update} 15 \end{array} 25 …- **var** (Parameter) - Variable to be updated. Data type is float64, float32 or float16. If the data type of `var` is float16, the data type of all inputs must be the same as `… 26 - **m** (Parameter) - Variable to be updated, with the same shape as `var`. 27 - **lr** (Union[Number, Tensor]) - Learning rate, should be a Scalar or Tensor with data type float64, float32 or float16. 28 - **logbase** (Union[Number, Tensor]) - Should be a Scalar or Tensor with data type float64, float32 or float16. 29 - **sign_decay** (Union[Number, Tensor]) - Should be a Scalar or Tensor with data type float64, float32 or float16. 30 - **beta** (Union[Number, Tensor]) - Exponential decay rate, should be a Scalar or Tensor with data type float64, float32 or float16. [all …]
|
| D | mindspore.ops.func_clip_by_value.rst | 12 \begin{array}{align} 16 \end{array}\right. 19 - `clip_value_min` and `clip_value_max` cannot both be None; 20 …- When `clip_value_min` is None and `clip_value_max` is not None, elements of the Tensor greater than `clip_value_max` become `clip_value_… 21 …- When `clip_value_min` is not None and `clip_value_max` is None, elements of the Tensor smaller than `clip_value_min` become `clip_value_… 22 - When `clip_value_min` is greater than `clip_value_max`, all elements of the Tensor are set to `clip_value_max`; 23 - The data types of `x`, `clip_value_min` and `clip_value_max` must support implicit type conversion and cannot be boolean. 26 …- **x** (Union(Tensor, list[Tensor], tuple[Tensor])) - Input of `clip_by_value`, a Tensor or a list or tuple of Tensors.… 27 - **clip_value_min** (Union(Tensor, float, int)) - The specified minimum value. Default: ``None``. 28 - **clip_value_max** (Union(Tensor, float, int)) - The specified maximum value. Default: ``None``. [all …]
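The notes above spell out how a `None` bound is handled; a sketch with assumed input values:

```python
import numpy as np
import mindspore as ms
from mindspore import Tensor, ops

x = Tensor(np.array([-5.0, 0.5, 8.0]), ms.float32)
# Only the upper bound is applied when clip_value_min is None.
print(ops.clip_by_value(x, clip_value_min=None, clip_value_max=1.0))   # [-5.   0.5  1. ]
print(ops.clip_by_value(x, clip_value_min=-1.0, clip_value_max=1.0))   # [-1.   0.5  1. ]
```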
|
| /third_party/mindspore/mindspore-src/source/mindspore/python/mindspore/train/metrics/ |
| D | auc.py | 7 # http://www.apache.org/licenses/LICENSE-2.0 24 curve, for computing the area under the ROC-curve. 27 …x (Union[np.array, list]): From the ROC curve(fpr), np.array with false positive rates. If multicl… 28 … this is a list of such np.array, one for each class. The shape :math:`(N)`. 29 …y (Union[np.array, list]): From the ROC curve(tpr), np.array with true positive rates. If multicla… 30 … this is a list of such np.array, one for each class. The shape :math:`(N)`. 35 float, the area under the ROC-curve. 44 >>> y_pred = np.array([[3, 0, 1], [1, 3, 0], [1, 0, 2]]) 45 >>> y = np.array([[0, 2, 1], [1, 2, 1], [0, 0, 1]]) 73 direction = -1 [all …]
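The `auc` docstring above integrates true positive rates over false positive rates taken from a ROC curve; a plain-NumPy trapezoid sketch of that integration, with made-up points:

```python
import numpy as np

fpr = np.array([0.0, 0.25, 0.5, 1.0])   # false positive rates, ascending
tpr = np.array([0.0, 0.6, 0.8, 1.0])    # true positive rates
area = np.trapz(tpr, fpr)               # trapezoidal area under the ROC curve
print(area)                             # 0.70 for these made-up points
```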
|
| /third_party/skia/third_party/externals/swiftshader/third_party/llvm-10.0/llvm/include/llvm/BinaryFormat/ |
| D | MsgPackReader.h | 1 //===- MsgPackReader.h - Simple MsgPack reader ------------------*- C++ -*-===// 5 // SPDX-License-Identifier: Apache-2.0 WITH LLVM-exception 7 //===----------------------------------------------------------------------===// 31 //===----------------------------------------------------------------------===// 47 /// The types map onto corresponding union members of the \c Object struct. 56 Array, enumerator 61 /// Extension types are composed of a user-defined type ID and an uninterpreted 64 /// User-defined extension type. 70 /// MessagePack object, represented as a tagged union of C++ types. 73 /// completely represented by the \c Kind itself) map to a exactly one union [all …]
|
| /third_party/mindspore/mindspore-src/source/mindspore/python/mindspore/ops/function/grad/ |
| D | grad_func.py | 7 # http://www.apache.org/licenses/LICENSE-2.0 104 fn (Union[Cell, Function]): Function to do GradOperation. 105 …grad_position (Union[NoneType, int, tuple[int]]): Index to specify which inputs to be differentiat… 110 …weights (Union[ParameterTuple, Parameter, list[Parameter]]): The parameters of the training networ… 147 >>> y = Tensor([-2, 3], mindspore.float32) 153 Tensor(shape=[2], dtype=Float32, value=[-2.00000000e+00, 6.00000000e+00])) 213 >>> y = Tensor([-2, 3], mindspore.float32) 219 (2, Tensor(shape=[2], dtype=Float32, value=[-2.00000000e+00, 6.00000000e+00]))) 244 fn (Union[Cell, Function]): Function to do GradOperation. 245 …grad_position (Union[NoneType, int, tuple[int]]): Index to specify which inputs to be differentiat… [all …]
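The `grad` docstring above uses `grad_position` to select which inputs are differentiated; a sketch with an assumed function and inputs:

```python
import mindspore as ms
from mindspore import Tensor, grad

def fn(x, y):
    return (x * y).sum()

x = Tensor([1.0, 2.0], ms.float32)
y = Tensor([-2.0, 3.0], ms.float32)
print(grad(fn, grad_position=0)(x, y))        # gradient w.r.t. x only: equals y
print(grad(fn, grad_position=(0, 1))(x, y))   # tuple of gradients w.r.t. x and y
```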
|