# MindSpore Release Notes

[View the Chinese version](./RELEASE_CN.md)
4
5## MindSpore 2.3.0 Release Notes
6
7### Major Features and Improvements
8
9#### AutoParallel
10
- [STABLE] Extend functional parallelism. [mindspore.shard](https://www.mindspore.cn/docs/en/r2.3.0/api_python/mindspore/mindspore.shard.html) now supports Graph mode. In Graph mode, the parallel sharding strategy of inputs and weights can be set for nn.Cell/function, and the parallel strategy of other operators can be configured automatically through "sharding_propagation". Added the [mindspore.reshard](https://www.mindspore.cn/docs/en/r2.3.0/api_python/mindspore/mindspore.reshard.html) interface, which supports manually rearranging tensors and setting a precise sharding strategy ([mindspore.Layout](https://www.mindspore.cn/docs/en/r2.3.0/api_python/mindspore/mindspore.Layout.html)) for them (see the sketch after this list).
- [STABLE] Added the Callback interface [mindspore.train.FlopsUtilizationCollector](https://www.mindspore.cn/docs/en/r2.3.0/api_python/train/mindspore.train.FlopsUtilizationCollector.html), which collects the model FLOPs utilization (MFU) and hardware FLOPs utilization (HFU) statistics.
- [STABLE] Added the functional communication API [mindspore.communication.comm_func](https://www.mindspore.cn/docs/en/r2.3.0/api_python/mindspore.communication.comm_func.html).
- [BETA] Optimized the memory usage of the interleaved pipeline in O0 and O1 modes.
- [BETA] AutoParallel supports automatic pipeline strategy generation in multi-node scenarios (not supported in the single-node scenario). Set `parallel_mode` to `auto_parallel` and `search_mode` to `recursive_programming`.
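
A minimal sketch of the functional parallel interfaces above, assuming an 8-device Ascend environment launched with msrun; the cell, shapes, and axis names are illustrative, not taken from the release notes:

```python
import mindspore as ms
from mindspore import nn, ops
from mindspore.communication import init

ms.set_context(mode=ms.GRAPH_MODE)   # mindspore.shard is now usable in Graph mode
init()
ms.set_auto_parallel_context(parallel_mode="auto_parallel",
                             search_mode="sharding_propagation")

class MatMulCell(nn.Cell):
    def __init__(self):
        super().__init__()
        self.matmul = ops.MatMul()

    def construct(self, x, w):
        return self.matmul(x, w)

# Shard the cell's two inputs across a 2x4 device arrangement; strategies for
# the remaining operators are derived by sharding propagation.
sharded_cell = ms.shard(MatMulCell(), in_strategy=((2, 4), (4, 1)))

# Describe the device arrangement once and reuse the named axes for tensor layouts.
layout = ms.Layout((2, 4), ("dp", "mp"))

def rearrange(x):
    # Precisely reshard x: dim 0 split along "dp", dim 1 split along "mp".
    return ms.reshard(x, layout("dp", "mp"))
```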
16
17#### PyNative
18
- [STABLE] Optimized the basic data structures of PyNative to improve operator API performance.
- [STABLE] Tensor supports [register_hook](https://www.mindspore.cn/docs/en/r2.3.0/api_python/mindspore/Tensor/mindspore.Tensor.register_hook.html), so that users can print or modify the gradient with respect to the tensor (see the sketch after this list).
- [STABLE] The PyNative mode supports the recompute function. You can use the recompute interface to reduce the peak device memory of the network.
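
A minimal PyNative sketch of the register_hook interface above; the gradient-doubling hook is purely illustrative:

```python
import mindspore as ms
from mindspore import Tensor

ms.set_context(mode=ms.PYNATIVE_MODE)

def double_grad(grad):
    print("grad:", grad)
    return grad * 2          # the returned value replaces the original gradient

x = Tensor([1.0, 2.0, 3.0], ms.float32)
handle = x.register_hook(double_grad)   # the handle can later remove the hook

def fn(t):
    return (t * t).sum()

grad = ms.grad(fn)(x)        # the hook fires when the gradient of x is produced
handle.remove()
```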
22
23#### FrontEnd
24
- [STABLE] Optimized the basic checkpoint saving and loading processes, improving performance by 20%.
- [STABLE] Support CRC verification of checkpoint files during the saving and loading processes to enhance security (see the sketch after this list).
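
A minimal sketch of checkpoint saving and loading with CRC verification. The `crc_check` flag is my assumption of how the feature is exposed; check the mindspore.save_checkpoint / mindspore.load_checkpoint documentation for the exact argument:

```python
import mindspore as ms
from mindspore import nn

net = nn.Dense(16, 8)
ms.save_checkpoint(net, "./dense.ckpt", crc_check=True)            # embed a CRC when saving
param_dict = ms.load_checkpoint("./dense.ckpt", crc_check=True)    # verify the CRC on loading
ms.load_param_into_net(net, param_dict)
```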
27
28#### Dataset
29
- [STABLE] Support the Ascend processing backend for the following transforms: Equalize, Rotate, AutoContrast, Posterize, AdjustSharpness, Invert, Solarize, ConvertColor, Erase.
- [STABLE] Support reading and parsing video files. For more detailed information, see the APIs: [mindspore.dataset.vision.DecodeVideo](https://www.mindspore.cn/docs/en/r2.3.0/api_python/dataset_vision/mindspore.dataset.vision.DecodeVideo.html), [mindspore.dataset.vision.read_video](https://www.mindspore.cn/docs/en/r2.3.0/api_python/dataset_vision/mindspore.dataset.vision.read_video.html#mindspore.dataset.vision.read_video), and [mindspore.dataset.vision.read_video_timestamps](https://www.mindspore.cn/docs/en/r2.3.0/api_python/dataset_vision/mindspore.dataset.vision.read_video_timestamps.html#mindspore.dataset.vision.read_video_timestamps).
- [STABLE] Support specifying the `max_rowsize` parameter as -1 in the `mindspore.dataset.GeneratorDataset`, `mindspore.dataset.Dataset.map` and `mindspore.dataset.Dataset.batch` interfaces. The size of the shared memory used by dataset multiprocessing is then allocated dynamically according to the size of the data, so `max_rowsize` no longer needs to be tuned manually (see the sketch after this list).
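
A minimal sketch of letting dataset multiprocessing size its shared memory dynamically via `max_rowsize=-1`; the generator and transform are illustrative:

```python
import numpy as np
import mindspore.dataset as ds

def gen():
    for _ in range(100):
        # large rows no longer require tuning max_rowsize by hand
        yield (np.random.rand(512, 32).astype(np.float32),)

def scale(x):
    return x * 2

data = ds.GeneratorDataset(gen, column_names=["feature"],
                           num_parallel_workers=4, max_rowsize=-1)
data = data.map(operations=[scale], input_columns=["feature"],
                python_multiprocessing=True, num_parallel_workers=4, max_rowsize=-1)
data = data.batch(8, num_parallel_workers=4, max_rowsize=-1)
```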
33
34#### Inference
35
- [STABLE] Added 14 large models, such as LLaMa2, LLaMa3, and Qwen1.5, to the integrated training and inference architecture, which unifies scripts, distributed strategies, and the runtime. The period from training to inference deployment of typical large models is reduced to days. Fused large operators reduce inference latency and effectively improve network throughput.
37
38#### PIJIT
39
- [BETA] Support bytecode parsing for Python 3.8 and Python 3.10 to expand the range of supported Python versions.
- [BETA] Support dynamic shapes and symbolic shapes as input to enable dynamic input scenarios.
- [BETA] Enable the single-step composition capability to optimize compile time.
- [BETA] Support capturing bytecode with side effects (STORE_ATTR, STORE_GLOBAL, LIST_APPEND, dict.pop) by bytecode tuning, enabling auto mixed precision, fewer graph breaks, and improved performance.
44
45#### Profiler
46
- [STABLE] Provides a hierarchical profiling function; the profiler_level parameter controls the level of performance data collection (see the sketch after this list).
- [STABLE] The Profiler analyse interface adds a new mode parameter for configuring asynchronous parsing, so that performance data parsing and training can run in parallel.
- [STABLE] The Profiler adds a new data_simplification parameter, which lets users control whether redundant data is deleted after the performance data is parsed, to save disk space.
- [STABLE] The Profiler enhances the memory analysis function. Users can collect the memory allocation and release information of the framework, CANN, and hardware through the profile_memory parameter, and visualize and analyze the information through the [MindStudio tool](https://www.hiascend.com/forum/thread-0230130822583032044-1-1.html).
- [BETA] In PyNative mode, the timeline integrates host profiling information, including task time consumption and user-side stack information.
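
A hedged sketch of the profiling controls above. The enum and keyword values (`ProfilerLevel.Level1`, `mode="async"`) are assumptions based on the parameter names in this list, not verbatim from the release notes:

```python
import mindspore as ms
from mindspore import Profiler
from mindspore.profiler import ProfilerLevel   # assumed location of the level enum

profiler = Profiler(output_path="./profiler_data",
                    profiler_level=ProfilerLevel.Level1,   # hierarchical collection level
                    profile_memory=True,                   # framework/CANN/hardware memory events
                    data_simplification=True)              # drop redundant files after parsing

# ... run a few training steps here ...

profiler.analyse(mode="async")   # parse performance data in parallel with training
```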
52
53#### Dump
54
- [STABLE] Enhanced synchronous and asynchronous dump functionality: statistics dumps now include L2Norm information, and the statistic_category field allows users to customize which statistics to save, improving dump usability. For details about the support for synchronous/asynchronous dump, see [Dump Introduction](https://www.mindspore.cn/tutorials/experts/en/r2.3.0/debug/dump.html#dump-introduction).
- [STABLE] Improved synchronous dump functionality: overflow and exception dumps are enabled through the op_debug_mode field.
- [STABLE] Enhanced synchronous dump functionality: the stat_calc_mode field enables device-side computation of statistics (the default is host-side), and the sample_mode field configures sample-based dumps, improving dump performance (see the sketch after this list).
- [STABLE] Enhanced asynchronous dump functionality: data can now be saved in the complex64 and complex128 formats.
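
A hedged sketch of a synchronous dump configuration using the fields mentioned above (statistic_category, stat_calc_mode, op_debug_mode). The surrounding JSON layout is an assumption based on the Dump Introduction page linked above; verify the exact fields and values there:

```python
import json
import os

dump_cfg = {
    "common_dump_settings": {
        "op_debug_mode": 0,      # non-zero values enable overflow/exception dumps (see the docs)
        "dump_mode": 0,
        "path": "/tmp/ms_dump",
        "net_name": "Net",
        "iteration": "0|5-8",
        "saved_data": "statistic",
        "input_output": 0,
        "kernels": [],
        "support_device": [0, 1, 2, 3, 4, 5, 6, 7],
        "statistic_category": ["max", "min", "l2norm"],   # customize which statistics to save
    },
    "e2e_dump_settings": {
        "enable": True,
        "trans_flag": True,
        "stat_calc_mode": "device"                        # compute statistics on the device side
    }
}

with open("/tmp/dump_config.json", "w") as f:
    json.dump(dump_cfg, f)

os.environ["MINDSPORE_DUMP_CONFIG"] = "/tmp/dump_config.json"  # set before building the graph
```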
59
60#### Runtime
61
- [STABLE] Supports multi-level compilation of the static graph by setting [mindspore.set_context(jit_config={"jit_level": "O0/O1/O2"})](https://www.mindspore.cn/docs/en/r2.3.0/api_python/mindspore/mindspore.set_context.html). The default value is empty; the framework automatically selects the optimization level according to the product category: O2 for Atlas training products and O0 for the rest (see the sketch after this list).
- [STABLE] Static graphs support multi-stream concurrent execution of communication and computation in O0/O1 modes.
- [STABLE] Added the memory management API [mindspore.hal.memory](https://www.mindspore.cn/docs/en/r2.3.0/api_python/mindspore.hal.html#memory).
- [BETA] The memory pool supports virtual memory defragmentation, and virtual memory is enabled by default under graph O0/O1.
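
A minimal sketch of selecting the compilation level and querying device memory; the specific helper names are taken from the mindspore.hal memory API linked above and assume an Ascend backend:

```python
import mindspore as ms

ms.set_context(jit_config={"jit_level": "O0"})   # O0/O1/O2 multi-level compilation

# ... build and run a network here ...

print(ms.hal.memory_stats())            # pool statistics from the memory management API
print(ms.hal.max_memory_allocated())    # peak memory allocated from the pool
```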
66
67#### Ascend
68
- [STABLE] Provide an operator out-of-bounds memory access detection switch on the Ascend platform. Users can detect out-of-bounds memory accesses inside operators by setting `mindspore.set_context(ascend_config={"op_debug_option": "oom"})`.
- [BETA] The environment variable [MS_SIMULATION_LEVEL](https://www.mindspore.cn/docs/en/r2.3.0/note/env_var_list.html) supports the graph compilation O0 execution mode on the Ascend platform, which can be used for compilation performance and runtime memory analysis (see the sketch after this list).
- [BETA] The Ascend platform supports [AscendC custom operators](https://www.mindspore.cn/tutorials/experts/en/r2.3.0/operation/op_custom_ascendc.html) through AOT.
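
A minimal sketch of the Ascend debugging switches above; the simulation level value `"0"` is an assumption, so check the MS_SIMULATION_LEVEL entry in the environment variable list:

```python
import os
import mindspore as ms

# Detect operator out-of-bounds memory accesses on Ascend.
ms.set_context(ascend_config={"op_debug_option": "oom"})

# Dry-run graph compilation in O0 mode without launching kernels, for
# compile-time and runtime memory analysis; in practice the variable must be
# set before the MindSpore process starts.
os.environ["MS_SIMULATION_LEVEL"] = "0"
```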
72
73### API Change
74
75#### New APIs
76
- [STABLE] Added the [mindspore.mint](https://www.mindspore.cn/docs/en/r2.3.0/api_python/mindspore.mint.html) API, which provides a large number of functional, nn, and optimizer interfaces. Their usage and behavior are consistent with mainstream industry conventions, making them easy to look up and use. The mint interfaces are currently experimental and perform better than ops under `jit_level="O0"` and in PyNative mode. The graph sinking mode and the CPU/GPU backends are not supported yet; support will be improved gradually. A short usage sketch follows the tables below.
78
79  | mindspore.mint  |  |   | |
80  |:----|:----|:----|:----|
81  | mindspore.mint.eye |mindspore.mint.rand_like|mindspore.mint.isfinite|mindspore.mint.any|
82  | mindspore.mint.ones |mindspore.mint.rand|mindspore.mint.log|mindspore.mint.greater_equal|
83  | mindspore.mint.ones_like |mindspore.mint.gather|mindspore.mint.logical_and|mindspore.mint.all|
84  | mindspore.mint.zeros |mindspore.mint.permute|mindspore.mint.logical_not|mindspore.mint.mean|
85  | mindspore.mint.zeros_like |mindspore.mint.repeat_interleave|mindspore.mint.logical_or|mindspore.mint.prod|
86  | mindspore.mint.arange |mindspore.mint.abs|mindspore.mint.mul|mindspore.mint.sum|
87  | mindspore.mint.broadcast_to |mindspore.mint.add|mindspore.mint.neg|mindspore.mint.eq|
88  | mindspore.mint.cat |mindspore.mint.clamp|mindspore.mint.negative|mindspore.mint.ne|
89  | mindspore.mint.index_select |mindspore.mint.cumsum|mindspore.mint.pow|mindspore.mint.greater|
90  | mindspore.mint.max |mindspore.mint.atan2|mindspore.mint.reciprocal|mindspore.mint.gt|
91  | mindspore.mint.min |mindspore.mint.arctan2|mindspore.mint.rsqrt|mindspore.mint.isclose|
92  | mindspore.mint.scatter_add |mindspore.mint.ceil|mindspore.mint.sigmoid|mindspore.mint.le|
93  | mindspore.mint.narrow |mindspore.mint.unique|mindspore.mint.sin|mindspore.mint.less_equal|
94  | mindspore.mint.nonzero |mindspore.mint.div|mindspore.mint.sqrt|mindspore.mint.lt|
95  | mindspore.mint.normal |mindspore.mint.divide|mindspore.mint.square|mindspore.mint.maximum|
96  | mindspore.mint.tile |mindspore.mint.erf|mindspore.mint.sub|mindspore.mint.minimum|
97  | mindspore.mint.topk |mindspore.mint.erfinv|mindspore.mint.tanh|mindspore.mint.inverse|
98  | mindspore.mint.sort |mindspore.mint.exp|mindspore.mint.bmm|mindspore.mint.searchsorted|
99  | mindspore.mint.stack |mindspore.mint.floor|mindspore.mint.matmul|mindspore.mint.argmax|
100  | mindspore.mint.where |mindspore.mint.flip|mindspore.mint.split|mindspore.mint.cos|
101  | mindspore.mint.less |||
102
103  | mindspore.mint.nn|
104  |:----|
105  | mindspore.mint.nn.Dropout  |
106  | mindspore.mint.nn.Unfold |
107  | mindspore.mint.nn.Fold |
108  | mindspore.mint.nn.Linear|
109  | mindspore.mint.nn.BCEWithLogitsLoss |
110
111  | mindspore.mint.nn.functional||
112  |:----|:----|
113  |mindspore.mint.nn.functional.batch_norm |mindspore.mint.nn.functional.group_norm|
114  |mindspore.mint.nn.functional.fold |mindspore.mint.nn.functional.layer_norm|
115  |mindspore.mint.nn.functional.max_pool2d |mindspore.mint.nn.functional.linear|
116  |mindspore.mint.nn.functional.binary_cross_entropy |mindspore.mint.nn.functional.unfold|
117  |mindspore.mint.nn.functional.sigmoid |mindspore.mint.nn.functional.one_hot|
118  |mindspore.mint.nn.functional.tanh |mindspore.mint.nn.functional.elu|
119  |mindspore.mint.nn.functional.binary_cross_entropy_with_logits |mindspore.mint.nn.functional.gelu|
120  |mindspore.mint.nn.functional.dropout|mindspore.mint.nn.functional.leaky_relu|
121  |mindspore.mint.nn.functional.embedding  |mindspore.mint.nn.functional.silu|
122  |mindspore.mint.nn.functional.grid_sample|mindspore.mint.nn.functional.softplus|
123  |mindspore.mint.nn.functional.relu|mindspore.mint.nn.functional.softmax|
124  |mindspore.mint.nn.functional.pad||
125
126  | mindspore.mint.optim |
127  |:----|
128  | mindspore.mint.optim.AdamW |
129
130  | mindspore.mint.linalg |
131  |:----|
132  | mindspore.mint.linalg.inv |
133
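  A minimal sketch of the mint interfaces listed above, in PyNative mode on an Ascend backend; torch-style argument names such as `dim` are assumptions based on the mint convention, so check the API page for the exact signatures:

  ```python
  import mindspore as ms
  from mindspore import mint, nn

  ms.set_context(mode=ms.PYNATIVE_MODE)

  x = mint.rand(4, 8)
  w = mint.ones((8, 2))
  y = mint.matmul(x, w)
  prob = mint.nn.functional.softmax(y, dim=-1)
  print(mint.sum(prob, dim=-1))    # each row sums to 1

  # The optimizer interfaces follow the same convention.
  opt = mint.optim.AdamW(nn.Dense(8, 2).trainable_params(), lr=1e-3)
  ```
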
#### Non-compatible Interface Changes
135
- Interface name: `Profiler`

  Changes: The performance data files generated by parsing are streamlined to save space. The FRAMEWORK directory data and other redundant data are deleted after the performance data is exported; only the profiler deliverables and the original performance data in the PROF_XXX directory are retained. Data simplification mode can be turned off by setting the `data_simplification` parameter to `False`, which keeps the performance data files consistent with those generated by historical versions.
- Interface name: the dump function, when the `saved_data` field in the configuration file is `"tensor"`.

  Changes: The name of the file dumped to disk is changed: `"/"` is replaced with `"_"`, and the operator name is changed to the global name of the operator.
142
143  <table>
144  <tr>
  <td style="text-align:center"> Original interface </td> <td style="text-align:center"> v2.3 interface </td>
146  </tr>
147  <tr>
148  <td><pre>
149  File name format:
150  {op_type}.{op_name}.{task_id}.{stream_id}.
151  {timestamp}.{input_output_index}.{slot}.{format}.npy
152  </br>
153  Example:
154  Conv2D.Conv2D-op12.0.0.1623124369613540.
155  output.0.DefaultFormat.npy
156  </pre>
157  </td>
158  <td><pre>
159  File name format:
160  {op_type}.{op_name}.{task_id}.{stream_id}.
161  {timestamp}.{input_output_index}.{slot}.{format}.npy
162  </br>
163  Example:
164  Conv2D.Default_network-WithLossCell__backbone-AlexNet_conv3
165  -Conv2d_Conv2D-op12.0.0.1623124369613540.output.0.DefaultFormat.npy
166  </pre>
167  </td>
168  </tr>
169  </table>
- Interface name: the dump function, when the `saved_data` field in the configuration file is `"statistic"`.

  Changes: Previously, the `'max'`, `'min'`, `'avg'`, `'count'`, `'negative zero count'`, `'positive zero count'`, `'nan count'`, `'negative inf count'`, `'positive inf count'`, `'zero count'` and `'md5'` statistics were all saved by default. In version 2.3, only the `'max'`, `'min'`, and `'l2norm'` statistical items are saved by default. You can customize the statistical items by configuring `statistic_category`.
173
174### Contributors
175
176caifubi;candanzg;ccsszz;chaiyouheng;changzherui;chenfei_mindspore;chengbin;chengfeng27;Chong;dairenjie;DavidFFFan;DeshiChen;dingjinshan;douzhixing;emmmmtang;Erpim;fary86;fengyixing;fuhouyu;gaoyong10;GuoZhibin;guozhijian;halo;haozhang;hejianheng;Henry Shi;horcham;huandong1;huangbingjian;Jackson_Wong;jiangchenglin3;jiangshanfeng;jiangzhenguang;jiaorui;bantao;jiaxueyu;jijiarong;JuiceZ;jxl;kairui_kou;lanzhineng;LiangZhibo;lichen;limingqi107;linqingke;liubuyu;liujunzhu;liuluobin;liyan2022;liyejun;LLLRT;looop5;lujiale;luochao60;luoyang;lvxudong;machenggui;maning202007;Margaret_wangrui;master_2;mengyuanli;moran;Mrtutu;NaCN;nomindcarry;panzhihui;pengqi;qiuyufeng;qiuzhongya;Renyuan Zhang;shaoshengqi;Shawny;shen_haochen;shenhaojing;shenwei41;shij1anhan;shilishan;shiziyang;shunyuanhan;shuqian0;TAJh;tanghuikang;tan-wei-cheng;Thibaut;tianxiaodong;TronZhang;TuDouNi;VectorSL;wang_ziqi;wanghenchang;wangjie;weiyang;wudawei;wujiangming;wujueying;XianglongZeng;xiaotianci;xiaoxin_zhang;xiaoxiongzhu;xiaoyao;XinDu;xuxinglei;yangchen;yanghaoran;yanglong;yangruoqi713;yangzhenzhang;yangzishuo;Yanzhi_YI;yao_yf;yefeng;yide12;YijieChen;YingLai Lin;yuchaojie;YuJianfeng;zangqx;zhaiyukun;zhangminli;zhangqinghua;ZhangZGC;zhengxinQian;zhengzuohe;zhouyaqiang0;zhuguodong;zhupuxu;zichun_ye;zjun;zlq2020;ZPaC;zuochuanyong;zyli2020;阿琛;狄新凯;范吉斌;冯一航;胡彬;宦晓玲;黄勇;康伟;雷仪婧;李良灿;李林杰;刘崇鸣;刘力力;刘勇琪;刘子涵;吕浩宇;王禹程;熊攀;徐安越;徐永飞;俞涵;张王泽;张栩浩;郑裔;周莉莉;周先琪;朱家兴;邹文祥
177
178Contributions of any kind are welcome!
179
180## MindSpore 2.3.0-rc2 Release Notes
181
182### Major Features and Improvements
183
184#### AutoParallel
185
- [STABLE] Transpose/Sub/Add/Mul/Div/ReLU/Softmax/Sigmoid support layout configuration.
- [STABLE] Collective communication precision affects network convergence, so the configuration item [force_fp32_communication](https://www.mindspore.cn/docs/en/r2.3.0rc2/api_python/mindspore/mindspore.set_auto_parallel_context.html) is provided in the mindspore.set_auto_parallel_context interface. When it is set to True, the communication type of reduce communication operators is forced to float32 (see the sketch after this list).
- [BETA] Pipeline parallelism supports interleaving, optimizing performance when the number of micro batches is limited.
- [BETA] Optimized checkpoint transformation speed when using pipeline parallelism; single-stage transformation is supported.
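
A minimal sketch of forcing reduce-type communication to float32, as described above; it assumes the context is set inside an already-initialized distributed job:

```python
import mindspore as ms

ms.set_auto_parallel_context(parallel_mode="semi_auto_parallel",
                             force_fp32_communication=True)   # reduce ops communicate in float32
```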
190
191#### PyNative
192
- [BETA] Support [recompute](https://www.mindspore.cn/docs/en/r2.3.0rc2/api_python/mindspore/mindspore.recompute.html) in PyNative mode.
- [STABLE] Support [register_hook](https://www.mindspore.cn/docs/en/r2.3.0rc2/api_python/mindspore/Tensor/mindspore.Tensor.register_hook.html#mindspore.Tensor.register_hook) in PyNative mode.
195
196### API Change
197
Added timeout environment variables in [dynamic networking](https://www.mindspore.cn/tutorials/experts/en/r2.3.0rc2/parallel/dynamic_cluster.html) scenarios:

- `MS_TOPO_TIMEOUT`: Cluster networking phase timeout, in seconds.
- `MS_NODE_TIMEOUT`: Node heartbeat timeout, in seconds.
- `MS_RECEIVE_MSG_TIMEOUT`: Node timeout for receiving messages, in seconds.

Added the new environment variable `MS_ENABLE_LCCL` to support the LCCL communication library (see the sketch below).
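
A minimal sketch of raising the dynamic-networking timeouts and enabling LCCL; the concrete values are illustrative, and the variables must be set before the distributed job (e.g. msrun) starts:

```python
import os

os.environ["MS_TOPO_TIMEOUT"] = "600"          # cluster networking phase timeout (s)
os.environ["MS_NODE_TIMEOUT"] = "300"          # node heartbeat timeout (s)
os.environ["MS_RECEIVE_MSG_TIMEOUT"] = "300"   # message receiving timeout (s)
os.environ["MS_ENABLE_LCCL"] = "on"            # illustrative value; enables the LCCL library
```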
205
206### Bug Fixes
207
- [#I9CR96](https://gitee.com/mindspore/mindspore/issues/I9CR96) Fixed the issue where an insufficient timeout caused dynamic networking startup failures in large-scale clusters.
- [#I94AQQ](https://gitee.com/mindspore/mindspore/issues/I94AQQ) Fixed the problem of the ops.Addcdiv operator producing an incorrect output shape in graph mode.
210
211### Contributors
212
213Thanks goes to these wonderful people:
214
215bantao,caifubi,changzherui,chenfei_mindspore,chenweifeng,dairenjie,dingjinshan,fangzehua,fanyi20,fary86,GuoZhibin,hanhuifeng,haozhang,hedongdong,Henry Shi,huandong1,huangbingjian,huoxinyou,jiangchenglin3,jiangshanfeng,jiaorui,jiaxueyu,jxl,kairui_kou,lichen,limingqi107,liuluobin,LLLRT,looop5,luochao60,luojianing,maning202007,NaCN,niyuxin94520,nomindcarry,shiziyang,tanghuikang,TronZhang,TuDouNi,VectorSL,wang_ziqi,wanghenchang,wudawei,XianglongZeng,xiaoxiongzhu,xiaoyao,yanghaoran,Yanzhi_YI,yao_yf,yide12,YijieChen,YingLai Lin,yuchaojie,YuJianfeng,zangqx,zhanghanLeo,ZhangZGC,zhengzuohe,zhouyaqiang0,zichun_ye,zjun,ZPaC,zyli2020,冯一航,李林杰,刘力力,王禹程,俞涵,张栩浩,朱家兴,邹文祥
216
217Contributions of any kind are welcome!
218
219## MindSpore Lite 2.3.0-rc2 Release Notes
220
221### Major Features and Improvements
222
- [STABLE] Support the configuration of FlashAttention-related properties in the configuration file used by the cloud-side conversion tool.
- [STABLE] Support multi-device memory sharing.
225
226### Contributors
227
228Thanks goes to these wonderful people:
229
230emmmmtang,熊攀
231
232Contributions of any kind are welcome!
233
234## MindSpore 2.3.0-rc1 Release Notes
235
236### Major Features and Improvements
237
238#### DataSet
239
- [STABLE] Support integrity, encryption, and decryption checks for MindRecord to protect the integrity and security of user data.
- [STABLE] MindRecord API changes: FileWriter.open_and_set_header is deprecated since its functionality has been integrated into FileWriter; if old code reports an error, delete this call. Added type checking for data in FileWriter to ensure that the data type defined by the schema matches the real data type. The return values of all methods under MindRecord are removed and replaced by an exception when a processing error occurs.
- [STABLE] Support the Ascend processing backend for the following transforms: ResizedCrop, HorizontalFlip, VerticalFlip, Perspective, Crop, Pad, GaussianBlur, Affine.
- [STABLE] Optimized the data processing content of the model migration guide, providing more examples for comparison with third-party frameworks.
- [STABLE] Optimized the parsing efficiency of TFRecordDataset in multiple-data-column scenarios, improving parsing performance by 20%.
245
246#### PIJIT
247
- [BETA] PIJit analyzes and adjusts Python bytecode and performs graph capture and graph optimization on the execution flow. Supported Python code is executed in static graph mode, and unsupported code is divided into subgraphs and executed in dynamic graph mode, automatically unifying dynamic and static graphs. Users can enable the PIJit function by decorating a function with @jit(mode="PIJit", jit_config={options:value}) (see the sketch below).
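
A minimal sketch of enabling PIJit with the decorator form quoted above; the `jit_config` options are omitted rather than guessing concrete option names:

```python
import mindspore as ms
from mindspore import jit, Tensor

@jit(mode="PIJit")
def fused_op(x, y):
    # supported Python code is captured into a static graph; unsupported pieces
    # fall back to PyNative subgraphs automatically
    return x * y + x

out = fused_op(Tensor([1.0, 2.0], ms.float32), Tensor([3.0, 4.0], ms.float32))
```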
249
250#### Inference
251
- [DEMO] The integrated training and inference architecture for large models unifies scripts, distributed strategies, and the runtime. The period from training to inference deployment of typical large models is reduced to days. Fused large operators reduce inference latency and effectively improve network throughput.
253
254#### AutoParallel
255
256- [STABLE] Add msrun startup method to launch distributed job with single instruction.
- [STABLE] Added a deprecation hint for the RankTable startup method.
- [STABLE] Eliminate redundant constants in graph mode to improve compilation performance and reduce memory overhead.
- [STABLE] In subgraph scenarios, the optimizer-parallel first subgraph is inlined, allowing some computation and communication to be overlapped under pipeline parallelism.
- [STABLE] Communication information export: export model communication information (communication domain, communication volume) during compilation, and feed it to the cluster as the basis for communication scheduling.
- [STABLE] Pipeline parallel inference is optimized by eliminating the forwarding of shared weights between stages, improving execution performance. Supports automatic broadcast of pipeline inference results, improving the usability of autoregressive inference.
262- [STABLE] Operator-level parallel sharding supports the configuration of the mapping between the device layout and tensor layout during MatMul/Add/LayerNorm/GeLU/BiasAdd operator sharding.
263- [STABLE] Supports gradient communication and backward calculation overlapping in the data parallel dimension.
- [STABLE] Single-device simulated compilation, used to simulate the compilation process of a given device in multi-device distributed training, assisting in analyzing the compilation processes and memory usage on the frontend and backend.
265- [STABLE] Implement ops.Tril sharding to reduce the memory and performance requirements on a single device.
266- [BETA] Supports the fusion between communication operators and computing operators, in order to overlap communication overheads with computation and improve network performance.
267- [BETA] Load checkpoints and compile graphs in parallel to accelerate fault recovery.
268
269#### Runtime
270
271- [BETA] Support O0/O1/O2 multi-level compilation to improve static graph debugging and tuning capabilities.
272
273#### FrontEnd
274
275- [STABLE] The framework supports the bfloat16 data type. dtype=mindspore.bfloat16 can be specified when a tensor is created.
- [STABLE] The syntax support capability of the rewrite component is optimized; syntax such as class variables, functions, and control flow statements can now be parsed.
- [STABLE] New context setting: debug_level. Users can use mindspore.set_context(debug_level=mindspore.DEBUG) to get more debug information (see the sketch after this list).
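
A minimal sketch of the frontend features above: creating a bfloat16 tensor and requesting more debug information during compilation:

```python
import numpy as np
import mindspore as ms
from mindspore import Tensor

ms.set_context(debug_level=ms.DEBUG)              # emit more debug information

x = Tensor(np.ones((2, 2)), dtype=ms.bfloat16)    # bfloat16 tensors can now be created
print(x.dtype)
```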
278
279#### Profiler
280
281- [BETA] Dynamically start and stop profiling. Users can collect profiling data in real time according to the training situation, reducing the amount of data collected.
- [BETA] Profiles the communication operator time-consumption matrix. Users can find cluster communication performance bottlenecks by analyzing this matrix.
- [BETA] Improved the performance of parsing profiling data in the Ascend environment.
284- [BETA] Supports offline analysis of data generated by Profiling. Users can collect data first and then parse the data as needed.
285- [BETA] Supports collecting performance data of HBM, PCIe, and l2_cache to enrich performance analysis indicators.
286
287#### Dump
288
289- [BETA] The statistical information saved by Dump records MD5 values, and users can determine small differences in tensor values through MD5 values.
290- [BETA] Dump supports the float16 data type and supports users to locate float16 type operator accuracy issues.
291
292#### PyNative
293
294- [STABLE] Reconstruct the single operator calling process for dynamic graphs to improve the performance of dynamic graphs.
295
296#### Ascend
297
- [BETA] Support setting CANN configuration options, which are divided into two categories, global and session. Users can configure them through mindspore.set_context(ascend_config={"ge_options": {"global": {"global_option": "option_value"}, "session": {"session_option": "option_value"}}}).
299
300#### API Change
301
302- Add mindspore.hal API to support stream, event, and device management capabilities.
303- Add mindspore.multiprocessing API to provide the capability of creating multiple processes.
304
305#### Operators
306
307- [BETA] mindspore.ops.TopK now supports the second input k as an int32 type tensor.
308
309### Bug Fixes
310
311- [#I92H93] Fixed the issue of 'Launch kernel failed' when using the Print operator to print string objects on the Ascend platform.
- [#I8S6LY] Fixed a `RuntimeError: Attribute dyn_input_sizes of Default/AddN-op1 is [const vector]{}, of which size is less than 0` error raised by variable-length input operators, such as AddN or Concat, during dynamic shape processing in graph mode on the Ascend platform.
313- [#I9ADZS] Fixed the data timeout issue in network training due to inefficient dataset recovery in the fault recovery scenario.
314
315### Contributors
316
317Thanks goes to these wonderful people:
318
319AlanCheng511,AlanCheng712,bantao,Bingliang,BJ-WANG,Bokai Li,Brian-K,caifubi,cao1zhg,CaoWenbin,ccsszz,chaiyouheng,changzherui,chenfei_mindspore,chengbin,chengfeng27,chengxb7532,chenjianping,chenkang,chenweifeng,Chong,chuht,chujinjin,Cynthia叶,dairenjie,DavidFFFan,DeshiChen,douzhixing,emmmmtang,Erpim,fangzhou0329,fary86,fengxun,fengyixing,fuhouyu,gaoshuanglong,gaoyong10,GaoZhenlong,gengdongjie,gent1e,Greatpan,GTT,guoqi,guoxiaokang1,GuoZhibin,guozhijian,hangq,hanhuifeng,haozhang,hedongdong,hejianheng,Henry Shi,heyingjiao,HighCloud,Hongxing,huandong1,huangbingjian,HuangLe02,huangxinjing,huangziling,hujiahui8,huoxinyou,jiangchenglin3,jianghui58,jiangshanfeng,jiaorui,jiaxueyu,JichenZhao,jijiarong,jjfeing,JoeyLin,JuiceZ,jxl,kairui_kou,kate,KevinYi,kisnwang,lanzhineng,liangchenghui,LiangZhibo,lianliguang,lichen,ligan,lihao,limingqi107,ling,linqingke,liruyu,liubuyu,liuchao,liuchengji,liujunzhu,liuluobin,liutongtong9,liuzhuoran2333,liyan2022,liyejun,LLLRT,looop5,luochao60,luojianing,luoyang,LV,machenggui,maning202007,Margaret_wangrui,MaZhiming,mengyuanli,MooYeh,moran,Mrtutu,NaCN,nomindcarry,panshaowu,panzhihui,PingqiLi,qinzheng,qiuzhongya,Rice,shaojunsong,Shawny,shenwei41,shenyaxin,shunyuanhan,silver,Songyuanwei,tangdezhi_123,tanghuikang,tan-wei-cheng,TingWang,TronZhang,TuDouNi,VectorSL,WANG Cong,wang_ziqi,wanghenchang,wangpingan,wangshaocong,wangtongyu6,weiyang,WinXPQAQ,wtcheng,wudawei,wujiangming,wujueying,wuweikang,wwwbby,XianglongZeng,xiaosh,xiaotianci,xiaoxin_zhang,xiaoxiongzhu,xiaoyao,XinDu,xingzhongfan,yanghaoran,yangluhang,yangruoqi713,yangzhenzhang,yangzishuo,yanjiaming,Yanzhi_YI,yao_yf,yefeng,yeyunpeng2020,yide12,YijieChen,YingLai Lin,YingtongHu,youshu,yuchaojie,YuJianfeng,zangqx,zby,zhaiyukun,zhangdanyang,zhanghaibo,zhanghanLeo,zhangminli,zhangqinghua,zhangyanhui,zhangyifan,zhangyinxia,zhangyongxian,ZhangZGC,zhanzhan,zhaoting,zhengyafei,zhengzuohe,ZhihaoLi,zhouyaqiang0,zhuguodong,zhumingming,zhupuxu,zichun_ye,zjun,zlq2020,ZPaC,zuochuanyong,zyli2020,陈宇,代宇鑫,狄新凯,范吉斌,冯一航,胡彬,宦晓玲,黄勇,康伟,李良灿,李林杰,刘崇鸣,刘力力,刘勇琪,吕浩宇,没有窗户的小巷,王禹程,吴蕴溥,熊攀,徐安越,徐永飞,许哲纶,俞涵,张峻源,张树仁,张王泽,张栩浩,郑裔,周莉莉,周先琪,朱家兴,邹文祥
320
321Contributions of any kind are welcome!
322
323## MindSpore 2.2.13 Release Notes
324
325### API Change
326
327Add timeout environment variables in dynamic networking scenarios:
328
329- `MS_TOPO_TIMEOUT`: Cluster networking phase timeout time in seconds.
330- `MS_CLUSTER_RETRY_NUM`: Number of node's retrying registration during cluster networking phase.
331- `MS_NODE_TIMEOUT`: Node heartbeat timeout in seconds.
332- `MS_RECEIVE_MSG_TIMEOUT`: Node timeout for receiving messages in seconds.
333
334### Bug Fixes
335
336- [#I9CR96] Fix the issue of insufficient timeout time causing failure for dynamic networking startup in large-scale clusters.
337
338### Contributors
339
340Thanks goes to these wonderful people:
341
342ZPaC, limingqi107, lizhenyu, jiangshanfeng
343
344Contributions of any kind are welcome!
345
346## MindSpore 2.2.12 Release Notes
347
348### Major Features and Improvements
349
- [STABLE] Optimized scenarios where network parameters are initialized as fp32 and optimizer parallel mode is enabled, reducing the number of Cast operators.
- [STABLE] Added detection and processing capabilities for silent faults. Silent faults may lead to errors during training; this feature helps users prevent silent faults or lowers the cost of locating them.
352
353### Bug Fixes
354
- [#I97D1L] Fixed sample errors in the ReduceLROnPlateau, LRScheduler, and CosineAnnealingWarmRestarts dynamic learning rate interfaces.
- [#I970HV] Fixed the problem where the order of AllGather/ReduceScatter operations between two cards was not preserved.
- [#I99JPI] Fixed checkpoint loading of bfloat16 parameters in vague (fuzzy) load mode.
358
359### Contributors
360
361Thanks goes to these wonderful people:
362
363yao_yf, YijieChen, 冯一航, yuchaojie, 李良灿, YuJianfeng, huangxinjing, GuoZhibin, looop5
364
365Contributions of any kind are welcome!
366
367## MindSpore 2.2.11 Release Notes
368
369### Major Features and Improvements
370
371#### scipy
372
373- [STABLE] Add new API mindspore.scipy.optimize.linear_sum_assignment in scipy module to solve the linear sum assignment problem. It can find the least-cost assignment based on a given cost matrix.
374
375### Bug Fixes
376
377- [#I8JVRU] Fixed the problem where the results of the bernoulli random operator running twice on the GPU are probabilistically consistent.
378- [#I8OC32] Fixed the segmentation fault error because the MatrixSetDiagV3 operator does not verify abnormal input.
379
380### Contributors
381
382Thanks goes to these wonderful people:
383
384fary86, wanghenchang, haozhang, mengyuanli, emmmmtang, luoyang, zhupuxu, zhangyongxian, liuluobin, LLLRT, TuDouNi, hujiahui8, wangtongyu6, ligan, zhuguodong, yanghaoran, YingtongHu, liyejun, zjun, 徐永飞, chuht, 张树仁, 徐安越, DeshiChen, shenyaxin, liujunzhu, shunyuanhan, yuchaojie, yao_yf, 没有窗户的小巷, yeyunpeng2020, weiyang, KevinYi, hedongdong, zhouyaqiang0, Margaret_wangrui, zhanghaibo, moran, huangziling, 朱家兴, GuoZhibin, 李良灿, jiaxueyu, gaoyong10, Greatpan, 宦晓玲, melody, 俞涵, jiangshanfeng, XinDu, ling, caifubi, zhangyinxia, gengdongjie, Erpim, XianglongZeng, zhangminli, fengyixing, 冯一航, 黄勇, panzhihui, 胡彬, linqingke, wangshaocong
385
386Contributions of any kind are welcome!
387
388## MindSpore Lite 2.2.11 Release Notes
389
390### Bug Fixes
391
- [#I8TPLY] Fixed an SSD MobileNetV2 FPN network inference error on Atlas inference series products (configured with Ascend 310P AI processor).
393
394### Contributors
395
396Thanks goes to these wonderful people:
397
398wangtongyu6, zhuguodong, 徐永飞, 徐安越, yeyunpeng2020, moran, XinDu, gengdongjie.
399
400Contributions of any kind are welcome!
401
402## MindSpore 2.2.10 Release Notes
403
404### Major Features and Improvements
405
406#### Operators
407
- [STABLE] FastGelu, BatchMatMul, AllReduce, AllGather, Broadcast, and ReduceScatter support the bfloat16 data type.
- [STABLE] AllGather supports the uint8 data type.
410
411### Bug Fixes
412
- [#I8ALW3] Fixed an issue where networks including Faster R-CNN, DeepText, and MaskRCNN-ResNet50 reported errors when training with the RandomChoiceWithMask operator in the Ascend 910 8P scenario.
- [#I8LKG7] Fixed a graph compilation error of UNet-2D in the Ascend 910 1P/8P scenario.
- [#I8KU3X] Fixed an issue where the CRNN-ResNet34 network got stuck in the training phase in Ascend 910 1P/8P PyNative mode.
- [#I8KTHH] Fixed a BERT network error when training without AllReduce grouped fusion and with enable_parallel_optimizer=True in the Ascend 910 8P scenario.
417
418### Contributors
419
420Thanks goes to these wonderful people:
421
422李林杰, TuDouNi, chengxb7532, Henry Shi, rms-infer-type, 朱家兴, zhouyaqiang0, tanghuikang, gaoyong10, gengdongjie, yao_yf, hujiahui8, hanhuifeng, shenyaxin, KevinYi, 冯一航, chengfeng27, JuiceZ, zhangyanhui, jijiarong, xiaoxiongzhu, 没有窗户的小巷, ling, liyan2022, haozhang, zangqx, xiaoyao, liujunzhu, 胡彬, panzhihui, wangshaocong, linqingke, jianghui58, qiuzhongya, yangruoqi713, zhangminli, moran, 王禹程, shaojunsong, wangtongyu6, zhupuxu, luoyang, 徐安越, qinzheng, caifubi, 徐永飞, chenkang, youshu, XinDu, liubuyu, jxl, yeyunpeng2020, huoxinyou, yefeng, jiaorui, wangpingan, cao1zhg, zjun, zyli2020, yanjiaming, Cynthia叶, 胡安东, 李良灿, liruyu, liuluobin, lihao, huangbingjian, YijieChen, jjfeing, looop5, 刘力力, xiaoxin_zhang, yangluhang, chenweifeng, jiangshanfeng, zichun_ye, 陈宇, NaCN, ligan, YingLai Lin, huangziling, chenjianping, DeshiChen, chengbin, kairui_kou, ccsszz, yanghaoran, zhangdanyang, Yanzhi_YI, zhengzuohe, hangq, TronZhang, wanghenchang, HighCloud, 吕浩宇, VectorSL, ZPaC, mengyuanli, maning202007, 刘勇琪, r1chardf1d0, fary86, 刘崇鸣, yuchaojie, douzhixing, fengyixing
423
424Contributions of any kind are welcome!
425
426## MindSpore Lite 2.2.10 Release Notes
427
428### Bug Fixes
429
430- [#I8K7CC] Optimize error message when non-string segments are passed to get_model_info.
431
432### Contributors
433
434Thanks goes to these wonderful people:
435
436gengdongjie, zhangyanhui, xiaoxiongzhu, wangshaocong, jianghui58, moran, wangtongyu6, 徐安越, qinzheng, 徐永飞, youshu, XinDu, yeyunpeng2020, yefeng, wangpingan, zjun, 胡安东, 刘力力, 陈宇, chenjianping, kairui_kou, zhangdanyang, hangq, mengyuanli, 刘崇鸣
437
438Contributions of any kind are welcome!
439
440## MindSpore 2.2.1 Release Notes
441
442### Bug Fixes
443
444- [#I7R3R5] Fixed the problem that the network precision of the ResNet-50 on the Ascend platform deteriorates.
445- [#I8A9RH] Fixed an issue where the DBNet(ResNet-50) network precision on the Ascend platform deteriorates.
- [#I8B8IW] Fixed a segmentation fault caused by out-of-bounds multi-dimensional tensor assignment.
- [#I8J0F4] Fixed an issue where extending the dimensions of a multidimensional Tensor fails to execute in dynamic graph mode.
448- [#I87P3P] Fixed an issue where the compilation cache fails to be loaded during secondary training on the Ascend platform.
449- [#I86GP9] Fixed an issue where the UNet3D network inference precision deteriorates on the Ascend platform.
- [#I89B4K] Fixed an issue where dynamic rank execution of dynamic graphs hangs on the Windows platform.
- [#I8CX0C] Fixed an issue where dynamic graphs occasionally fail in mixed precision mode on the Ascend platform.
- [#I8BGCF] Fixed a segmentation fault that occurs when the AirNet network is executed in dynamic graph mode on the Ascend platform.
- [#I8L5DS] Fixed an issue where the ResNet-50 image segmentation network executes slowly as a dynamic graph on the Ascend platform.
454
455### Contributors
456
457Thanks goes to these wonderful people:
458
459yufan, dingcheng, lvzhangcheng, zhunaipan, fangwenyi, weiyang, changzherui, chujinjin, zangqingxiang, yuchaojie, wuweikang, tanghuikang, xiaoyao, huangbinjian, zhoupeichen, chenfei_mindspore, hedongdong, wangnan, zhengzuohe, yanghaoran, zouliqin, luoyang, liuchongmin, lujiale, machenggui, wangcong, lixiangyi, wangting, huangyong
460
461Contributions of any kind are welcome!
462
463## MindSpore Lite 2.2.1 Release Notes
464
465### Bug Fixes
466
467- [#I88055] Fixed a function issue caused by incorrect format setting of the gridsample operator in MindSpore Lite inference.
- [#I8D80Y] Fixed an issue where the MindSpore Lite single-operator invocation process did not release resources and exited abnormally.
469
470### Contributors
471
472Thanks goes to these wonderful people:
473
474zhanghaibo, wangsiyuan, wangshaocong, chenjianping
475
476Contributions of any kind are welcome!
477
478## MindSpore 2.2.0 Release Notes
479
480### Major Features and Improvements
481
482#### DataSet
483
484- [STABLE] The `row_size` parameter of data operation map/batch is extended to support passing list, which stands for [Input Shared Memory, Output Shared Memory], so as to flexibly control the size of shared memory in multi-process mode.
- [STABLE] Provide samples covering 100% of the mindspore.dataset and mindspore.dataset.transforms APIs for reference.
- [STABLE] ConcatDataset supports global sampling. After combining data from multiple sources using the concat operation, data can be globally and randomly sampled to enhance data diversity.
- [STABLE] When the model.train API is used for training, TimeMonitor(.., data_time=True) can be used to monitor data processing performance in real time.
- [STABLE] Introduced the jemalloc library to solve the problem of slow, continuous memory growth caused by untimely recovery of memory fragments in extreme scenarios.
489
490#### FrontEnd
491
- [STABLE] Support the @lazy_inline decorator, which makes the graph generated from a cell be inlined lazily and can effectively improve compilation performance (see the sketch after this list).
- [STABLE] Optimized the mixed precision training function: support automatic rewriting of Python scripts through rewrite to implement mixed precision strategies, and support automatic parsing of functions, branch statements, and other syntax.
- [STABLE] Mixed precision function optimization: ReWrite supports syntax parsing of class functions and branch statements, and extends the O1 functionality.
- [STABLE] Optimized the dynamic learning rate function and added APIs such as MultiStepLR; get_lr is decoupled from global_step, extending the optimizer module's functionality.
496- [STABLE] Optimize API code samples, API difference tables, and tutorials for using higher-order functions.
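
A minimal sketch of @lazy_inline on a repeated block, as described above; the Block cell is illustrative. The decorator marks the cell's graph to be inlined lazily so that networks with many identical blocks compile faster:

```python
import mindspore as ms
from mindspore import nn, lazy_inline

class Block(nn.Cell):
    @lazy_inline
    def __init__(self, hidden):
        super().__init__()
        self.dense = nn.Dense(hidden, hidden)
        self.act = nn.ReLU()

    def construct(self, x):
        return self.act(self.dense(x))

class Net(nn.Cell):
    def __init__(self, hidden, num_blocks):
        super().__init__()
        self.blocks = nn.SequentialCell([Block(hidden) for _ in range(num_blocks)])

    def construct(self, x):
        return self.blocks(x)
```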
497
498#### Operator
499
500- [STABLE] Add new operator primitive `mindspore.ops.Dense`.
- [STABLE] Added the random number operator state management feature, which allows random number operators to save their state and reproduce it stably in scenarios such as model parallelism and recomputation. Currently only the CPU/GPU platforms are supported, and the involved random number operators include: `mindspore.ops.Multinomial`, `mindspore.ops.MultinomialWithReplacement`, `mindspore.ops.ParameterizedTruncatedNormal`, `mindspore.ops.StandardLaplace`, `mindspore.ops.Uniform`, `mindspore.ops.UniformInt`, `mindspore.ops.UniformReal`, `mindspore.ops.Dropout`, `mindspore.ops.RandomChoiceWithMask`, `mindspore.ops.RandomCategorical`, `mindspore.ops.RandomShuffle`, `mindspore.ops.RandomGamma`, `mindspore.ops.RandomPoisson` and `mindspore.ops.TruncatedNormal`.
- [STABLE] When a GPU operator encounters an illegal input, it supports asynchronously printing error logs from the operator's CUDA kernel to the host side and interrupting the execution of the current CUDA stream, improving the efficiency of locating operator problems.
503
504#### PyNative
505
- [STABLE] Support the view mechanism in PyNative mode.
507- [STABLE] Function enhancement in PyNative mode: sens supports dict input type.
508
509#### Ascend
510
511- [STABLE] Supports user configurable operator high-precision/high-performance mode, users can use `context.set_context(ascend_config={"op_precision_mode": "/path/to/op_precision_config_file"})` to configure high-precision/high-performance modes for some TBE operators.
512- [BETA] Supports user configurable operators for fp16-in and fp32-out, users can use `context.set_context(ascend_config={"precision_mode": "force_fp32"})` to configure fp16-in and fp32-out for the TBE Cube operators.
513- [BETA] Remove the strong binding between `jit_level="O3"` and GE processes, so users no longer need to set `jit_level="O3"` when executing GE processes.
514
515#### Parallel
516
517- [STABLE] Support the gradient accumulation feature in non-pipeline parallel scenarios in semi-automatic/fully automatic mode. Users can enable gradient accumulation by writing `net = GradAccumulationCell(net, micro_size)`. The gradient accumulation feature is compatible with the  lazy_inline feature.
518
519#### Inference
520
521Since version 2.2, the MindSpore main release package does not provide the inference interface enabling for the Ascend 310. If you need to use the inference interface, install the MindSpore Lite release package or download the MindSpore version earlier than 2.0. For details about how to install and use MindSpore Lite, see <https://www.mindspore.cn/lite/en>. HUAWEI Ascend 310 (Ascend) is an energy-efficient and highly integrated AI processor for edge scenarios. It supports inference on MindIR models. In the earlier version, MindSpore provides two methods for enabling inference on the Ascend 310 hardware:
522
5231. The MindSpore main release package provides the matching Ascend 310 version that supports C++ inference interfaces.
5242. The MindSpore Lite release package provides the matching Ascend version and supports C++ and Java inference.
525
The C++ APIs provided by the two solutions are basically the same. In the future, MindSpore Lite will be used exclusively, instead of building and maintaining two sets of interfaces. The original Ascend 310 inference service built on the MindSpore main release package can be switched to MindSpore Lite with a few modifications. For details, see <https://www.mindspore.cn/docs/en/master/faq/inference.html>.
527
528### Bug fixes
529
530- [I7SDA0] Fixed an issue where the accuracy of the CRNN network deteriorates on the NES platform.
531- [I7T4QK] Fixed an issue where the inference precision of the WGAN network deteriorates on the OptiX OSN 8800 platform.
532- [I7TJ8Z] Fixed an issue where the inference precision of the LGTM network deteriorates on the OptiX OSN 8800 platform.
533- [I7M58O] Fixed ASR-dynamic network training core dump issue on Ascend platform.
534- [I7L6B6] Fixed an issue where child processes do not exit in some scenarios when dataset is in multi-process mode.
535- [I7L7AE] Fixed an issue where dataset pipeline contains repeat operations and dynamic batchinfo.get_epoch_num() is incorrectly used in dataset.batch.
536- [I7UY7G] Rectify the file permission modification error in OBSMindDataset.
537
538### Contributors
539
540Thanks goes to these wonderful people:
541bantao, Bingliang, BJ-WANG, Brian-K, caifubi, ccsszz, changzherui, chenfei_mindspore, chengfeng27, chenhaozhe, chenjianping, chenkang, chenweifeng, chuht, chujinjin, CShu0507, Cynthia叶, DeshiChen, douzhixing, Erpim, Etienne, fary86, fengxun, fengyixing, gaoshuanglong, Gaoxiong, gaoyong10, GaoZhenlong, Greatpan, GuoZhibin, guozhijian, hangq, hanhuifeng, haozhang, hedongdong, Henry Shi, HighCloud, Hongxing, huangbingjian, huanghui, huangxinjing, huangziling, hujiahui8, huoxinyou, HWalkingMan, jianghui58, jiangshanfeng, jiaorui, jijiarong, jjfeing, JuiceZ, jxl, KevinYi, kisnwang, KXiong, lanzhineng, Li Qingguo, LiangZhibo, lianliguang, ligan, lihao, Lihoon, limingqi107, ling, linqingke, liruyu, liubuyu, liuchao, liujunzhu, liuluobin, liupeng303, liutongtong9, liyan2022, liyejun, looop5, luochao60, luojianing, luoyang, machenggui, maning202007, Margaret_wangrui, MaZhiming, mengyuanli, moran, NaCN, nomindcarry, panshaowu, panzhihui, qinzheng, qiuzhongya, r1chardf1d0, shaojunsong, shenwei41, shenyaxin, shenzhangyi, Shira Zaloshinski, shunyuanhan, tangdezhi_123, tanghuikang, tan-wei-cheng, tan-wei-cheng-3260, TronZhang, TuDouNi, VectorSL, wang_ziqi, wanghenchang, wangpingan, wangshaocong, wangtongyu6, wtcheng, wujueying, XianglongZeng, xiaotianci, xiaoxin_zhang, xiaoxiongzhu, xiaoyao, xiaoyuanyuan, XinDu, xujinliang, xupan, yanghaoran, yangluhang, yangruoqi713, yangsijia, yangzhenzhang, yangzishuo, yanjiaming, Yanzhi_YI, yao_yf, yefeng, yeyunpeng2020, yide12, YijieChen, YingLai Lin, YingtongHu, yonibaehr, youshu, yuchaojie, YuJianfeng, zangqx, zhaizhiqiang, zhangbuxue, zhangchunlei, zhangdanyang, zhangdong, zhanghaibo, zhangminli, zhangqi, zhangqinghua, zhangyanhui, zhangyifan, zhangyongxian, zhangzhen, zhangzheng, zhanzhan, zhengzuohe, ZhihaoLi, zhoufeng, zhouyaqiang0, zhuguodong, zhupuxu, zichun_ye, zjun, ZPaC, zuochuanyong, zyli2020, 陈宇, 程超, 范吉斌, 冯浩, 冯一航, 胡彬, 宦晓玲, 黄勇, 雷元哲, 黎冠新, 李良灿, 李林杰, 刘崇鸣, 刘力力, 刘思铭, 刘勇琪, 吕浩宇, 没有窗户的小巷, 沈竞兴, 王禹程, 王振邦, 徐安越, 徐永飞, 俞涵, 张澍坤, 周超, 朱家兴
542
543Contributions of any kind are welcome!
544
545## MindSpore Lite 2.2.0 Release Notes
546
547### Major Features and Improvements
548
549#### FlashAttention Operator Fusion
550
551- [STABLE] The OptiX OSN Ascend 910 series supports the FlashAttention large operator fusion of the LLAMA and stable diffusion models.
552
553## MindSpore 2.1.1 Release Notes
554
555### Bug fixes
556
557- [I7Q9RX] The Ascend platform supports adaptive identification of different hardware types.
558- [I7SDA0] Fixed an issue where the accuracy of the CRNN network deteriorates on the NES platform.
559- [I7T4QK] Fixed an issue where the inference precision of the WGAN network deteriorates on the OptiX OSN 8800 platform.
560- [I7TJ8Z] Fixed an issue where the inference precision of the LGTM network deteriorates on the OptiX OSN 8800 platform.
561
562### Contributors
563
564Thanks goes to these wonderful people:
565
566changzherui, chenfei_mindspore, chenjianping, chenkang, chenweifeng, chujinjin, fangwenyi, GuoZhibin, guozhijian, hangq, hanhuifeng, haozhang, hedongdong, You Shu, Zhou Feng, Dai Yuxin
567
568Contributions of any kind are welcome!
569
570## MindSpore Lite 2.1.1 Release Notes
571
572### Major Features and Improvements
573
574- [STABLE] MindSpore Lite Cloud Inference adds support for Python 3.8 and Python 3.9
575
576## MindSpore 2.1.0 Release Notes
577
578### Major Features and Improvements
579
580#### FrontEnd
581
582- [BETA] JIT Fallback supports variable scenarios. In static graph mode, JIT Fallback supports return of Dict type and Scalar type, supports property setting of non-Parameter type objects, supports partial in-place modification operations of List, and supports third-party libraries such as NumPy. Moreover, it supports related operations of user-defined classes and supports Python basic operators and built-in functions to use more data types. It is compatible with features like control flow, side effects, automatic differentiation. For more details, please refer to [Static Graph Syntax Support](https://www.mindspore.cn/docs/en/r2.1/note/static_graph_syntax_support.html).
583
584- [BETA] In static graph mode, the error message of using undefined variables in the control flow scene is optimized. When using variables defined in if, while, and for control flow branches, the variables need to be initialized and defined before the control flow.
585
- [STABLE] Added the ReWrite module, which supports modifying multiple networks in batches based on customized rules.

- [BETA] Added the optim_ex module for optimizers, extending the current functionality: it supports parameter grouping for every parameter in the optimizer and supports modifying parameters by assignment during training.
589
590- [STABLE] Optimize PyTorch and MindSpore API Mapping Table, specify the differences between APIs among functionality, parameter, input, output and specialized cases.
591
592#### PyNative
593
594- Optimize the performance of dynamic shape scenes in PyNative mode.
595
596#### DataSet
597
598- [STABLE] Optimize the memory structure of MindRecord data files. Memory consumption can be reduced 60% when loading 100TB+ data for training.
599- [STABLE] Support single-thread execution of data processing pipeline, and users can add code in the data pipeline for debugging.
- [STABLE] Optimize the performance of TFRecordDataset to improve dataset loading performance by 60%+. Optimize the performance of batch to improve performance by 30% in large-batch scenarios.
601- [STABLE] Optimize API documentation of [mindspore.dataset](https://www.mindspore.cn/docs/en/r2.1/api_python/mindspore.dataset.html) and [mindspore.dataset.transforms](https://www.mindspore.cn/docs/en/r2.1/api_python/mindspore.dataset.transforms.html). Four new sample libraries have been added to show the effect of data enhancement, namely: [Load & Process Datasets Using Data Pipeline](https://www.mindspore.cn/docs/en/r2.1/api_python/mindspore.dataset.html#quick-start-of-dataset-pipeline), [Visual Transformation Sample Library](https://www.mindspore.cn/docs/en/r2.1/api_python/mindspore.dataset.transforms.html#module-mindspore.dataset.vision), [Text Transform Sample Library](https://www.mindspore.cn/docs/en/r2.1/api_python/mindspore.dataset.transforms.html#module-mindspore.dataset.text), [Audio Transform Sample Library](https://www.mindspore.cn/docs/en/r2.1/api_python/mindspore.dataset.transforms.html#module-mindspore.dataset.audio)
602
603#### AutoParallel
604
- [STABLE] Support offloading parameters or intermediate activations to CPU or NVMe storage during training. Users can enable this offload feature by configuring the context to scale up the trainable model size.
606
607- [STABLE] Enhanced automatic parallel capability including:
608
609  1. Performance of automatic strategy for typical networks is no less than 90% of default configuration.
610
611  2. Support 3D hybrid parallel training: automatic operator-level strategy generation combined with manual configured pipeline partition.
612
613#### Runtime
614
615- [STABLE] Upgrade OpenMPI version to 4.1.4.
616- [STABLE] Upgrade NCCL version to 2.16.5.
617- [STABLE] Assign rank id continuously in same node when using dynamic cluster to launch distributed jobs.
618- [STABLE] No adaptation code is required for Scheduler node. The script of Scheduler could be identical to that of Worker.
619
620#### Ascend
621
622- [STABLE] Support dump assisted debug information for operator AIC Error scenario. The information includes the operator task name, stream ID, input/output/workspace address and so on.
- [STABLE] Provide a default processing mechanism for CANN operators in empty-Tensor-output scenarios, which skips their execution.
- [STABLE] Supplement debug information when a network model fails to execute in graph mode. The debug information will be saved in a CSV file in rank_${id}/exec_order/, recording the task ID and stream ID of each task.
625
626#### Profiler
627
628- [STABLE] The Profiler supports the collection of time-consuming data from all phases on the Host side.
629- [BETA] The Profiler supports the collection of memory data from all phases on the Host side.
630- [BETA] The Profiler supports the collection of data processing operator time consumption.
631
632### API Change
633
- `mindspore.dataset.GraphData`, `mindspore.dataset.Graph`, `mindspore.dataset.InMemoryGraphDataset`, and `mindspore.dataset.ArgoverseDataset` are no longer evolved and are deprecated. Use [MindSpore Graph Learning](https://gitee.com/mindspore/graphlearning) for related functional replacements. When replacing networks in model repositories that use this API, please refer to [GCN](https://gitee.com/mindspore/graphlearning/tree/master/model_zoo/gcn) for GCN and [GAT](https://gitee.com/mindspore/graphlearning/tree/master/model_zoo/gat) for GAT.
- `mindspore.set_context` adds the `jit_syntax_level` option, which is used to set the JIT syntax support level. For more details, please refer to [set_context](https://www.mindspore.cn/docs/en/r2.1/api_python/mindspore/mindspore.set_context.html) (see also the sketch after this list).
636- Change the `model.infer_predict_layout` interface, which has a new parameter skip_backend_compile with a default value of False. Set to True when the user wants to skip the backend compilation process to get the parameter slicing strategy.
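
A minimal sketch of the new `jit_syntax_level` option; the STRICT/LAX constants are my assumption of the accepted values, so see the linked set_context documentation:

```python
import mindspore as ms

# LAX keeps the extended (JIT Fallback) syntax support; STRICT restricts graphs
# to the basic static graph syntax.
ms.set_context(jit_syntax_level=ms.LAX)
```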
637
638#### Operators
639
640- Add operator primitive for `mindspore.ops.ApplyAdamWithAmsgradV2`. It is recommended to call this operator through API `mindspore.nn.Adam`.
641- Add operator primitive for `mindspore.ops.UpsampleTrilinear3D`. It is recommended to call this operator through API `mindspore.ops.interpolate`.
642- Add operator primitive for `mindspore.ops.UpsampleNearest3D`. It is recommended to call this operator through API `mindspore.ops.interpolate`.
643
644#### API Deprecation
645
646- Deprecate operator primitive `mindspore.ops.ScatterNonAliasingAdd`. It is recommended to use operator primitive `mindspore.ops.TensorScatterAdd` as a replacement.
647
648#### Backwards Incompatible Change
649
650- Interface name: `mindspore.nn.Dense`, `mindspore.nn.Conv1d`, `mindspore.nn.Conv1dTranspose`, `mindspore.nn.Conv2d`, `mindspore.nn.Conv2dTranspose`, `mindspore.nn.Conv3d`, `mindspore.nn.Conv3dTranspose`
651
652  Changes: Change initialization parameter strategy. The default value of weight_init is changed from "normal" to None, and the default value of bias_init is changed from "zeros" to None.
653
654  Description: The default initialization method for weights has been changed from "normal" to internal HeUniform initialization. The default initialization method of bias is changed from "zeros" to internal Uniform initialization.
655
656  <table>
657  <tr>
658  <td style="text-align:center"> Original interface </td> <td style="text-align:center"> v2.1 interface </td>
659  </tr>
660  <tr>
661  <td><pre>
662  mindspore.nn.Dense(in_channels,
663                     out_channels,
664                     weight_init='normal',
665                     bias_init='zeros',
666                     has_bias=True,
667                     activation=None)
668  </pre>
669  </td>
670  <td><pre>
671  mindspore.nn.Dense(in_channels,
672                     out_channels,
673                     weight_init=None,
674                     bias_init=None,
675                     has_bias=True,
676                     activation=None)
677  </pre>
678  </td>
679  </tr>
680  <tr>
681  <td><pre>
682  mindspore.nn.Conv1d(in_channels,
683                      out_channels,
684                      kernel_size,
685                      stride=1,
686                      pad_mode='same',
687                      padding=0,
688                      dilation=1,
689                      group=1,
690                      has_bias=False,
691                      weight_init='normal',
692                      bias_init='zeros')
693  </pre>
694  </td>
695  <td><pre>
696  mindspore.nn.Conv1d(in_channels,
697                      out_channels,
698                      kernel_size,
699                      stride=1,
700                      pad_mode='same',
701                      padding=0,
702                      dilation=1,
703                      group=1,
704                      has_bias=False,
705                      weight_init=None,
706                      bias_init=None)
707  </pre>
708  </td>
709  </tr>
710  <tr>
711  <td><pre>
712  mindspore.nn.Conv1dTranspose(in_channels,
713                               out_channels,
714                               kernel_size,
715                               stride=1,
716                               pad_mode='same',
717                               padding=0,
718                               dilation=1,
719                               group=1,
720                               has_bias=False,
721                               weight_init='normal',
722                               bias_init='zeros')
723  </pre>
724  </td>
725  <td><pre>
726  mindspore.nn.Conv1dTranspose(in_channels,
727                               out_channels,
728                               kernel_size,
729                               stride=1,
730                               pad_mode='same',
731                               padding=0,
732                               dilation=1,
733                               group=1,
734                               has_bias=False,
735                               weight_init=None,
736                               bias_init=None)
737  </pre>
738  </td>
739  </tr>
740  <tr>
741  <td><pre>
742  mindspore.nn.Conv2d(in_channels,
743                      out_channels, kernel_size,
744                      stride=1,
745                      pad_mode='same',
746                      padding=0,
747                      dilation=1,
748                      group=1,
749                      has_bias=False,
750                      weight_init='normal',
751                      bias_init='zeros',
752                      data_format='NCHW')
753  </pre>
754  </td>
755  <td><pre>
756  mindspore.nn.Conv2d(in_channels,
757                      out_channels,
758                      kernel_size,
759                      stride=1,
760                      pad_mode='same',
761                      padding=0,
762                      dilation=1,
763                      group=1,
764                      has_bias=False,
765                      weight_init=None,
766                      bias_init=None,
767                      data_format='NCHW')
768  </pre>
769  </td>
770  </tr>
771  <tr>
772  <td><pre>
773  mindspore.nn.Conv2dTranspose(in_channels,
774                               out_channels,
775                               kernel_size,
776                               stride=1,
777                               pad_mode='same',
778                               padding=0,
779                               output_padding=0,
780                               dilation=1,
781                               group=1,
782                               has_bias=False,
783                               weight_init='normal',
784                               bias_init='zeros')
785  </pre>
786  </td>
787  <td><pre>
788  mindspore.nn.Conv2dTranspose(in_channels,
789                               out_channels,
790                               kernel_size,
791                               stride=1,
792                               pad_mode='same',
793                               padding=0,
794                               output_padding=0,
795                               dilation=1,
796                               group=1,
797                               has_bias=False,
798                               weight_init=None,
799                               bias_init=None)
800  </pre>
801  </td>
802  </tr>
803  <tr>
804  <td><pre>
805  mindspore.nn.Conv3d(in_channels,
806                      out_channels,
807                      kernel_size,
808                      stride=1,
809                      pad_mode='same',
810                      padding=0,
811                      dilation=1,
812                      group=1,
813                      has_bias=False,
814                      weight_init='normal',
815                      bias_init='zeros',
816                      data_format='NCDHW')
817  </pre>
818  </td>
819  <td><pre>
820  mindspore.nn.Conv3d(in_channels,
821                      out_channels,
822                      kernel_size,
823                      stride=1,
824                      pad_mode='same',
825                      padding=0,
826                      dilation=1,
827                      group=1,
828                      has_bias=False,
829                      weight_init=None,
830                      bias_init=None,
831                      data_format='NCDHW')
832  </pre>
833  </td>
834  </tr>
835  <tr>
836  <td><pre>
837  mindspore.nn.Conv3dTranspose(in_channels,
838                               out_channels,
839                               kernel_size,
840                               stride=1,
841                               pad_mode='same',
842                               padding=0,
843                               dilation=1,
844                               group=1,
845                               output_padding=0,
846                               has_bias=False,
847                               weight_init='normal',
848                               bias_init='zeros',
849                               data_format='NCDHW')
850  </pre>
851  </td>
852  <td><pre>
853  mindspore.nn.Conv3dTranspose(in_channels,
854                               out_channels,
855                               kernel_size,
856                               stride=1,
857                               pad_mode='same',
858                               padding=0,
859                               dilation=1,
860                               group=1,
861                               output_padding=0,
862                               has_bias=False,
863                               weight_init=None,
864                               bias_init=None,
865                               data_format='NCDHW')
866  </pre>
867  </td>
868  </tr>
869  </table>
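
  If a network relies on the former defaults, the previous behavior can still be requested explicitly. A minimal sketch (channel and kernel sizes are illustrative):

  ```python
  import mindspore.nn as nn

  # Explicitly pass the pre-2.1 defaults instead of the new HeUniform/Uniform initialization.
  dense = nn.Dense(16, 8, weight_init='normal', bias_init='zeros')
  conv = nn.Conv2d(3, 16, kernel_size=3, has_bias=True, weight_init='normal', bias_init='zeros')
  ```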
870
871### Bug Fixes
872
873- [I6TKLW] Fix the issue of MobileNetV2 network performance degradation on the Ascend platform.
874- [I7CP5H] Fix the issue where ASR network training failed on the Ascend platform.
- [I7I3EZ] Fix the run_check() failure caused by changes to the enumeration interface in Pillow 10.0.0. If you encounter this issue on an earlier MindSpore version, install a Pillow version below 10.0.0 to avoid it.
876- [I7IZ8K] Fix accuracy issues with the assignsub interface in PyNative mode.
- [I7HGY0] Fix the issue that the functional programming loss does not converge in the PyNative data_sink mode.
- [I7J4N3] Fix the issue that Step Trace generation failed in Profiler dynamic shape mode.
879- [I7J4N3] Fix the issue that there is no data displayed in the MindInsight parallel strategy view.
- [I79YY4] Fix the SiLU operator error in high-order differentiation in PyNative mode.
881- [I6NQJQ] Fix the issue of probabilistic failure in dynamic shape scenarios of the ScatterUpdate operator in PyNative mode.
- [I6Y4G5] Fix the failure of the Conv3D operator in dynamic shape scenarios in Graph mode.
883
884### Contributors
885
886Thanks goes to these wonderful people:
887
888alashkari,anzhengqi,archer2049,B.L.LAN,baihuawei,bichaoyang,BJ-WANG,Bokai Li,Brian-K,caifubi,caiyimeng,cathwong,changzherui,ChenDonYY,chenfei_mindspore,chengang,chengbin,chenhaozhe,chenjianping,chenkang,chenweifeng,chuht,chujinjin,davidanugraha,DavidFFFan,DeshiChen,douzhixing,emmmmtang,Erpim,Ethan,fangwenyi,fangzehua,fangzhou0329,fary86,fengyixing,gaoshuanglong,Gaoxiong,gaoyong10,gengdongjie,gongdaguo1,Greatpan,GuoZhibin,guozhijian,hangq,hanhuifeng,haozhang,hedongdong,Henry Shi,heterogeneous_to_backoff_2_0,huangbingjian,huanghui,huangxinjing,hujiahui8,hujingsong,huoxinyou,jachua,jiahongQian,jianghui58,jiangzhenguang,jiaorui,jiaoy1224,jijiarong,jjfeing,JoeyLin,json,JuiceZ,jxl,kairui_kou,KevinYi,kisnwang,KXiong,laiyongqiang,lanzhineng,liangchenghui,liangzelang,LiangZhibo,lianliguang,lichen,ligan,lijunbin,limingqi107,ling,linqingke,liubuyu,liuchao,liuchuting,liujunzhu,liuluobin,liutongtong9,liuyang811,lixiao,liyan2022,liyejun,liyuxia,looop5,luochao60,luojianing,luoyang,luoyuan,lyqlola,maning202007,maoyaomin,Margaret_wangrui,mayadong,MaZhiming,melody,mengyuanli,michaelzhu_70ab,Mohammad Motallebi,moran,NaCN,nomindcarry,OwenSec,panfengfeng,panshaowu,panzhihui,pkuliuliu,qinzheng,qiuzhongya,qujianwei,r1chardf1d0,Renyuan Zhang,RobinGrosman,shaojunsong,shenwei41,Soaringfish,tangdezhi_123,tanghuikang,tan-wei-cheng,TinaMengtingZhang,TronZhang,TuDouNi,VectorSL,wang_ziqi,wanghenchang,wangnan39,wangpingan,wangshaocong,wangshengnan123,wangtongyu6,weichaoran,wind-zyx,wqx,wtcheng,wujueying,wYann,XianglongZeng,xiaohanzhang,xiaotianci,xiaoyao,XinDu,xulei,xumengjuan1,xupan,xwkgch,yanghaoran,yangluhang,yangruoqi713,yangshuo,yangsijia,yangzhenzhang,yanzhenxiang2020,Yanzhi_YI,yao_yf,yefeng,yeyunpeng2020,Yi_zhang95,yide12,YijieChen,YingLai Lin,YingtongHu,youshu,yuchaojie,yuedongli,YuJianfeng,zangqx,ZengZitao,zhangbuxue,zhangdanyang,zhangdong,zhangfanghe,zhangqi,zhangqinghua,zhangyanhui,zhangyinxia,zhangyongxian,zhangzhaoju,zhanzhan,zhengzuohe,ZhidanLiu,zhixinaa,zhoufeng,zhouyaqiang0,zhuguodong,zhupuxu,zhuyuxiao,zichun_ye,zjun,zlq2020,zong_shuai,ZPaC,zuochuanyong,zyli2020,陈宇,范吉斌,冯一航,胡彬,宦晓玲,黄勇,雷元哲,李良灿,李林杰,刘崇鸣,刘力力,刘勇琪,吕浩宇,吕昱峰(Nate.River),没有窗户的小巷,沈竞兴,十六夜,王程浩,王禹程,王振邦,徐安越,徐永飞,杨旭华,于振华,俞涵,张清华,张澍坤,张栩浩,张学同,赵英灼,周超,周洪叶,朱家兴
889
890Contributions of any kind are welcome!
891
892## MindSpore Lite 2.1.0 Release Notes
893
894### Major Features and Improvements
895
896#### MindSpore Lite Cloud Inference
897
- [STABLE] Supports high-performance inference for single-device large models and single-node multi-device distributed large models on the Ascend backend.
899- [STABLE] Python API Ascend backend supports multiple models sharing workspace memory.
900- [STABLE] [The weights can be shared by multiple models through ModelGroup](https://mindspore.cn/lite/docs/en/r2.1/use/cloud_infer/runtime_cpp.html#multiple-models-sharing-weights). For example, weights can be shared between full models and incremental models in the large model scenario.
901
902#### API
903
The [Python](https://www.mindspore.cn/lite/api/en/r2.1/mindspore_lite/mindspore_lite.ModelGroup.html) and [C++](https://mindspore.cn/lite/api/en/r2.1/generate/classmindspore_ModelGroup.html) ModelGroup interfaces are added. The interface definitions are as follows:
905
906```python
class ModelGroup:
    def __init__(self, flags=ModelGroupFlag.SHARE_WORKSPACE): ...
    def add_model(self, models): ...
    def cal_max_size_of_workspace(self, model_type, context): ...
911```
912
913```C++
914// class ModelGroup
915ModelGroup(ModelGroupFlag flags = ModelGroupFlag::kShareWorkspace);
916Status AddModel(const std::vector<std::string> &model_path_list);
917Status AddModel(const std::vector<std::pair<const void *, size_t>> &model_buff_list);
Status AddModel(const std::vector &model_list);
920```
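
A minimal Python usage sketch based on the interface listed above; the model paths, the Ascend context configuration, and the model type are illustrative placeholders:

```python
import mindspore_lite as mslite

# Assumption: two MindIR models that should share workspace memory on Ascend.
context = mslite.Context()
context.target = ["ascend"]

model_group = mslite.ModelGroup(flags=mslite.ModelGroupFlag.SHARE_WORKSPACE)
model_group.add_model(["model_a.mindir", "model_b.mindir"])
model_group.cal_max_size_of_workspace(mslite.ModelType.MINDIR, context)
```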
921
922## MindSpore 2.0.0 Release Notes
923
924### Major Features and Improvements
925
926#### PyNative
927
928- [STABLE] Dynamic shape is fully supported on framework. For detailed operator support, refer to [Dynamic Shape Support Status of nn Interface](https://www.mindspore.cn/docs/en/master/note/dynamic_shape_nn.html), [Dynamic Shape Support Status of ops Interface](https://www.mindspore.cn/docs/en/master/note/dynamic_shape_func.html), and [Dynamic Shape Support Status of primitive Interface](https://www.mindspore.cn/docs/en/master/note/dynamic_shape_primitive.html).
929
930#### AutoParallel
931
- [STABLE] Build the new independent MindFormers repository, which provides a distributed parallel suite and replaces the mindspore.nn.transformer module.
933- [DEMO] Distributed parallel operator Gather supports the BatchDim attribute.
- [DEMO] Pipeline parallelism supports specifying any dimension of the input data as the batch dimension.
935
936### API Change
937
938#### operator
939
940- Add operator primitive for `mindspore.ops.AdaptiveAvgPool2D` .
941- Add operator primitive for `mindspore.ops.BatchToSpaceNDV2` .
942- Add operator primitive for `mindspore.ops.CeLU` .
943- Add operator primitive for `mindspore.ops.ExtractVolumePatches` .
944- Add operator primitive for `mindspore.ops.FFTWithSize` .
945- Add operator primitive for `mindspore.ops.FillDiagonal` .
946- Add operator primitive for `mindspore.ops.FractionalMaxPool3DWithFixedKsize` .
947- Add operator primitive for `mindspore.ops.Im2Col` .
948- Add operator primitive for `mindspore.ops.MaskedScatter` .
949- Add operator primitive for `mindspore.ops.MatrixBandPart` .
950- Add operator primitive for `mindspore.ops.MatrixInverse` .
951- Add operator primitive for `mindspore.ops.MaxPoolWithArgmaxV2` .
952- Add operator primitive for `mindspore.ops.Ormqr` .
953- Add operator primitive for `mindspore.ops.RandpermV2` .
954- Add operator primitive for `mindspore.ops.ResizeBicubic` .
955- Add operator primitive for `mindspore.ops.Triu` .
956- Add operator primitive for `mindspore.ops.Zeta` .
957
958#### Backwards Incompatible Change
959
960- Interface: mindspore.ops.MultitypeFuncGraph
961
  Change: The interface parameter doc_url was used as a test feature in MindSpore 2.0.0.rc1. After the optimization in MindSpore 2.0.0, users no longer need to configure this parameter, so it is deleted in MindSpore 2.0.0.
963
964  <table>
965  <tr>
966  <td style="text-align:center"> Original Interface </td> <td style="text-align:center"> Interface v2.0.0 </td>
967  </tr>
968  <tr>
969  <td><pre>
970  mindspore.ops.MultitypeFuncGraph(name, read_value=False, doc_url="")
971  </pre>
972  </td>
973  <td><pre>
974  mindspore.ops.MultitypeFuncGraph(name, read_value=False)
975  </pre>
976  </td>
977  </tr>
978  </table>
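
  A minimal sketch of the v2.0.0 usage without the removed doc_url parameter (the registered branch is illustrative):

  ```python
  from mindspore.ops import MultitypeFuncGraph

  add = MultitypeFuncGraph('add')

  @add.register("Number", "Number")
  def add_scalars(x, y):
      # Dispatch branch used when both inputs are Python numbers.
      return x + y

  print(add(1, 2))  # 3
  ```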
979
980- Interface: mindspore.set_context(auto_tune_mode="GA,RL")
981
  Change: The AutoTune tool has been deprecated and the auto_tune_mode option is deleted; new tuning tools will be planned in the future.
983
984- Interface: mindspore.set_context(mode=PYNATIVE_MODE)
985
986  Change: The default value is changed from GRAPH_MODE to PYNATIVE_MODE.
987
  Description: If the running mode is not set and graph mode needs to be set, use the following method:
  mindspore.set_context(mode=GRAPH_MODE).
990
991  <table>
992  <tr>
993  <td style="text-align:center"> Original Interface </td> <td style="text-align:center"> Interface v2.0.0-rc1 </td>
994  </tr>
995  <tr>
996  <td><pre>
997  mindspore.set_context(mode=GRAPH_MODE)
998  </pre>
999  </td>
1000  <td><pre>
1001  mindspore.set_context(mode=PYNATIVE_MODE)
1002  </pre>
1003  </td>
1004  </tr>
1005  </table>
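
  A minimal sketch for scripts that depend on graph mode under the new default:

  ```python
  import mindspore as ms

  # PYNATIVE_MODE is now the default; request graph mode explicitly when it is needed.
  ms.set_context(mode=ms.GRAPH_MODE)
  ```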
1006
1007- Interface: mindspore.train.Model.train
1008
1009  Change: The default value of dataset_sink_mode is changed from True to False.
1010
1011  Description: If dataset_sink_mode is not set and the data sinking mode needs to be set, use the following method:
1012  Model.train(dataset_sink_mode=True).
1013
1014  <table>
1015  <tr>
1016  <td style="text-align:center"> Original Interface </td> <td style="text-align:center"> Interface v2.0.0-rc1 </td>
1017  </tr>
1018  <tr>
1019  <td><pre>
1020  Model.train(dataset_sink_mode=True)
1021  </pre>
1022  </td>
1023  <td><pre>
1024  Model.train(dataset_sink_mode=False)
1025  </pre>
1026  </td>
1027  </tr>
1028  </table>
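
  A minimal sketch assuming an existing `model` (mindspore.train.Model) and `train_dataset`; with the new default, data sinking has to be enabled explicitly:

  ```python
  # dataset_sink_mode now defaults to False; enable it explicitly if sinking is required.
  model.train(epoch=1, train_dataset=train_dataset, dataset_sink_mode=True)
  ```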
1029
1030- Interface: mindspore.export
1031
  Change: The default value of the file_format parameter is changed from "AIR" to no default value.
1033
  Description: If file_format was not set explicitly before, you now need to specify it. In this case, use the following method:
1035  mindspore.export(net, *inputs, file_name, file_format="AIR", **kwargs).
1036
1037  <table>
1038  <tr>
1039  <td style="text-align:center"> Original Interface </td> <td style="text-align:center"> Interface v2.0.0-rc1 </td>
1040  </tr>
1041  <tr>
1042  <td><pre>
1043  mindspore.export(net, *inputs, file_name,
1044                   file_format="AIR", **kwargs)
1045  </pre>
1046  </td>
1047  <td><pre>
1048  mindspore.export(net, *inputs, file_name,
1049                   file_format, **kwargs)
1050  </pre>
1051  </td>
1052  </tr>
1053  </table>
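
  A minimal sketch assuming an existing `net`; the dummy input shape and file names are illustrative, and file_format now has to be passed explicitly:

  ```python
  import numpy as np
  import mindspore as ms

  # file_format no longer has a default value and must be specified.
  dummy_input = ms.Tensor(np.ones([1, 3, 224, 224]), ms.float32)
  ms.export(net, dummy_input, file_name="net", file_format="MINDIR")
  ```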
1054
1055- Interface: mindspore.ops.norm
1056
1057  Change: The ord parameter function is extended to support multiple forms.
1058
1059  <table>
1060  <tr>
1061  <td style="text-align:center"> Original Interface </td> <td style="text-align:center"> Interface v2.0.0-rc1 </td>
1062  </tr>
1063  <tr>
1064  <td><pre>
1065  ops.norm(input_x, axis, p=2, keep_dims=False, epsilon=1e-12)
1066  >>> # Example:
1067  >>> input = Tensor(np.array([[[1.0, 2.0], [3.0, 4.0]],
1068  ...                          [[5.0, 6.0], [7.0, 8.0]]]).astype(np.float32))
1069  >>> output = ops.norm(input, [0, 1], p=2)
1070  </pre></td>
1071  <td><pre>
1072  ops.norm(A, ord=None, dim=None, keepdim=False, *, dtype=None)
1073  >>> # Example:
1074  >>> input = Tensor(np.array([[[1.0, 2.0], [3.0, 4.0]],
1075  ...                          [[5.0, 6.0], [7.0, 8.0]]]).astype(np.float32))
1076  >>> output = ops.norm(input, ord=2, dim=(0, 1))
1077  </pre>
1078  </td>
1079  </tr>
1080  </table>
1081
1082- Interface: mindspore.Tensor.norm
1083
1084  Change: The ord parameter function is extended to support multiple forms.
1085
1086  Description: For details, see the example of ops.norm.
1087
1088  <table>
1089  <tr>
1090  <td style="text-align:center"> Original Interface </td> <td style="text-align:center"> Interface v2.0.0-rc1 </td>
1091  </tr>
1092  <tr>
1093  <td><pre>
1094  Tensor.norm(axis, p=2, keep_dims=False, epsilon=1e-12)
1095  </pre>
1096  </td>
1097  <td><pre>
1098  Tensor.norm(ord=None, dim=None, keepdim=False, *, dtype=None)
1099  </pre>
1100  </td>
1101  </tr>
1102  </table>
1103
1104- Interface: mindspore.ops.dropout
1105
1106  Change: The seed0 and seed1 parameters are deleted and seed=None parameter is added. Instead of returning Tensors and masks, only Tensors are returned. The input parameter training=True is added.
1107
1108  <table>
1109  <tr>
1110  <td style="text-align:center"> Original Interface </td> <td style="text-align:center"> Interface v2.0.0-rc1 </td>
1111  </tr>
1112  <tr>
1113  <td><pre>
1114  ops.dropout(x, p=0.5, seed0=0, seed1=0)
1115  >>> # Example:
1116  >>> input = Tensor(((20, 16), (50, 50)),
1117  ...                mindspore.float32)
1118  >>> output, mask = dropout(x, p=0.5)
1119  </pre>
1120  </td>
1121  <td><pre>
1122  ops.dropout(input, p=0.5, training=True, seed=None)
1123  >>> # Example:
1124  >>> input = Tensor(((20, 16), (50, 50)),
1125  ...                mindspore.float32)
  >>> output = ops.dropout(input, p=0.5, training=True)
1127  </pre>
1128  </td>
1129  </tr>
1130  </table>
1131
1132- Interface: mindspore.ops.dropout2d
1133
1134  Change: Return value is changed from Tensor and mask to Tensor only. The input parameter training=True is added.
1135
1136  <table>
1137  <tr>
1138  <td style="text-align:center"> Original Interface </td> <td style="text-align:center"> Interface v2.0.0-rc1 </td>
1139  </tr>
1140  <tr>
1141  <td><pre>
1142  ops.dropout2d(x, p=0.5)
1143  >>> # Example:
1144  >>> input = Tensor(np.ones([2, 1, 2, 3]),
1145  ...                mindspore.float32)
1146  >>> output, mask = dropout2d(input, 0.5)
1147  </pre>
1148  </td>
1149  <td><pre>
1150  ops.dropout2d(input, p=0.5, training=True)
1151  >>> # Example:
1152  >>> input = Tensor(np.ones([2, 1, 2, 3]),
1153  ...                mindspore.float32)
1154  >>> output = ops.dropout2d(input, 0.5, training=True)
1155  </pre>
1156  </td>
1157  </tr>
1158  </table>
1159
1160- Interface: mindspore.ops.dropout3d
1161
1162  Change: Return value is changed from Tensor and mask to Tensor only. The input parameter training=True is added.
1163
1164  <table>
1165  <tr>
1166  <td style="text-align:center"> Original Interface </td> <td style="text-align:center"> Interface v2.0.0-rc1 </td>
1167  </tr>
1168  <tr>
1169  <td><pre>
1170  ops.dropout3d(x, p=0.5)
1171  >>> # Example:
1172  >>> input = Tensor(np.ones([2, 1, 2, 3]),
1173  ...                mindspore.float32)
1174  >>> output, mask = dropout3d(input, 0.5)
1175  </pre>
1176  </td>
1177  <td><pre>
1178  ops.dropout3d(input, p=0.5, training=True)
1179  >>> # Example:
1180  >>> input = Tensor(np.ones([2, 1, 2, 3]),
1181  ...                mindspore.float32)
1182  >>> output = ops.dropout3d(input, 0.5, training=True)
1183  </pre>
1184  </td>
1185  </tr>
1186  </table>
1187
1188- Interface: mindspore.ops.std
1189
1190  Change: The interface is reconstructed, and the interface usage mode is more consistent with user habits.
1191
1192  Description: If parameter `unbiased` has been set, use the following alternative: `unbiased=False` -> `ddof=0`, `unbiased=True` -> `ddof=1`.
1193
1194  <table>
1195  <tr>
1196  <td style="text-align:center"> Original Interface </td> <td style="text-align:center"> Interface v2.0.0-rc1 </td>
1197  </tr>
1198  <tr>
1199  <td><pre>
1200  ops.std(input_x, axis=(), unbiased=True, keep_dims=False)
1201  </pre>
1202  </td>
1203  <td><pre>
1204  ops.std(input, axis=None, ddof=0, keepdims=False)
1205  </pre>
1206  </td>
1207  </tr>
1208  </table>
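
  A minimal migration sketch for the unbiased -> ddof mapping described above (the input values are illustrative):

  ```python
  import numpy as np
  import mindspore as ms
  from mindspore import ops

  x = ms.Tensor(np.array([[1.0, 2.0], [3.0, 4.0]], dtype=np.float32))
  # Old: ops.std(x, axis=0, unbiased=True)  -> New: ddof=1
  out_unbiased = ops.std(x, axis=0, ddof=1)
  # Old: ops.std(x, axis=0, unbiased=False) -> New: ddof=0
  out_biased = ops.std(x, axis=0, ddof=0)
  ```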
1209
1210- Interface: mindspore.load_param_into_net
1211
1212  Change: Parameters that are not loaded in the ckpt are added as return values.
1213
1214  <table>
1215  <tr>
1216  <td style="text-align:center"> Original Interface </td> <td style="text-align:center"> Interface v2.0.0-rc1 </td>
1217  </tr>
1218  <tr>
1219  <td><pre>
1220  net_param = load_param_into_net()
1221  </pre>
1222  </td>
1223  <td><pre>
1224  net_param, ckpt_param = load_param_into_net()
1225  </pre>
1226  </td>
1227  </tr>
1228  </table>
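
  A minimal sketch assuming an existing `net` and a checkpoint file (the path is a placeholder); the two return values list the parameters that were not loaded on the network side and on the checkpoint side:

  ```python
  import mindspore as ms

  param_dict = ms.load_checkpoint("net.ckpt")
  # v2.0.0-rc1 returns the parameters that were not loaded instead of a single value.
  net_param, ckpt_param = ms.load_param_into_net(net, param_dict)
  print(net_param, ckpt_param)
  ```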
1229
1230- Interface: mindspore.nn.BCELoss
1231
1232  Change: The default value of `reduction` is changed from 'none' to 'mean'.
1233
1234  <table>
1235  <tr>
1236  <td style="text-align:center"> Original Interface </td> <td style="text-align:center"> Interface v2.0.0-rc1 </td>
1237  </tr>
1238  <tr>
1239  <td><pre>
1240  BCELoss(weight=None, reduction='none')
1241  >>> # Example:
1242  >>> weight = Tensor(np.array([[1.0, 2.0, 3.0],
1243  ...                           [4.0, 3.3, 2.2]]),
1244  ...                 mindspore.float32)
1245  >>> loss = nn.BCELoss(weight=weight, reduction='mean')
1246  >>> logits = Tensor(np.array([[0.1, 0.2, 0.3],
1247  ...                           [0.5, 0.7, 0.9]]),
1248  ...                 mindspore.float32)
1249  >>> labels = Tensor(np.array([[0, 1, 0], [0, 0, 1]]),
1250  ...                 mindspore.float32)
1251  >>> output = loss(logits, labels)
1252  >>> print(output)
1253  >>> 1.8952923
1254  </pre>
1255  </td>
1256  <td><pre>
1257  BCELoss(weight=None, reduction='mean')
1258  >>> # Example:
1259  >>> weight = Tensor(np.array([[1.0, 2.0, 3.0],
1260  ...                           [4.0, 3.3, 2.2]]),
1261  ...                 mindspore.float32)
1262  >>> loss = nn.BCELoss(weight=weight)
1263  >>> logits = Tensor(np.array([[0.1, 0.2, 0.3],
1264  ...                           [0.5, 0.7, 0.9]]),
1265  ...                 mindspore.float32)
1266  >>> labels = Tensor(np.array([[0, 1, 0], [0, 0, 1]]),
1267  ...                 mindspore.float32)
1268  >>> output = loss(logits, labels)
1269  >>> print(output)
1270  >>> 1.8952923
1271  </pre>
1272  </td>
1273  </tr>
1274  </table>
1275
1276- Interface: mindspore.ops.split
1277
1278  Change: The interface is reconstructed. The interface usage mode is more suitable for users. The sequence of the second and third parameters is adjusted, and the split_size_or_sections function is modified and extended.
1279
1280  <table>
1281  <tr>
1282  <td style="text-align:center"> Original Interface </td> <td style="text-align:center"> Interface v2.0.0-rc1 </td>
1283  </tr>
1284  <tr>
1285  <td><pre>
1286  ops.split(input_x, axis=0, output_num=1)
1287  >>> # Example:
1288  >>> input = Tensor(np.array([[1, 1, 1, 1], [2, 2, 2, 2]]),
1289  ...                mindspore.int32)
1290  >>> output = ops.split(input, axis=1, output_num=4)
1291  </pre>
1292  </td>
1293  <td><pre>
1294  ops.split(tensor, split_size_or_sections, axis=0)
1295  >>> # Example:
1296  >>> input = Tensor(np.array([[1, 1, 1, 1], [2, 2, 2, 2]]),
1297  ...                mindspore.int32)
1298  >>> output = ops.split(input, split_size_or_sections=1, axis=1)
1299  </pre>
1300  </td>
1301  </tr>
1302  </table>
1303
1304- Interface: mindspore.Tensor.split
1305
  Change: The interface is reconstructed. The interface usage mode is more suitable for users. The positions of the two parameters are adjusted, and the split_size_or_sections function is modified and extended.
1307
1308  Description: For details, see the example of ops.split.
1309
1310  <table>
1311  <tr>
1312  <td style="text-align:center"> Original Interface </td> <td style="text-align:center"> Interface v2.0.0-rc1 </td>
1313  </tr>
1314  <tr>
1315  <td><pre>
1316  Tensor.split(axis=0, output_num=1)
1317  </pre>
1318  </td>
1319  <td><pre>
1320  Tensor.split(split_size_or_sections, axis=0)
1321  </pre>
1322  </td>
1323  </tr>
1324  </table>
1325
1326- Interface: mindspore.ops.pad
1327
  Change: The parameter name paddings is changed to padding, and the mode and value parameters are added.
1329
1330  <table>
1331  <tr>
1332  <td style="text-align:center"> Original Interface </td> <td style="text-align:center"> Interface v2.0.0-rc1 </td>
1333  </tr>
1334  <tr>
1335  <td><pre>
1336  ops.pad(input_x, paddings)
1337  >>> # Example:
1338  >>> input_x = Tensor(np.array([[-0.1, 0.3, 3.6],
1339  ...                            [0.4, 0.5, -3.2]]),
1340  ...                  mindspore.float32)
1341  >>> paddings = ((1, 2), (2, 1))
1342  >>> output = ops.pad(input_x, paddings)
1343  </pre>
1344  </td>
1345  <td><pre>
1346  ops.pad(input_x, padding, mode='constant', value=None)
1347  >>> # Example:
1348  >>> input_x = Tensor(np.array([[-0.1, 0.3, 3.6],
1349  ...                            [0.4, 0.5, -3.2]]),
1350  ...                  mindspore.float32)
1351  >>> paddings = (2, 1, 1, 2)
1352  >>> output = ops.pad(input_x, paddings)
1353  </pre>
1354  </td>
1355  </tr>
1356  </table>
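
  A minimal sketch of the new flat padding format together with the added mode and value options (values are illustrative):

  ```python
  import numpy as np
  import mindspore as ms
  from mindspore import ops

  x = ms.Tensor(np.array([[-0.1, 0.3, 3.6], [0.4, 0.5, -3.2]]), ms.float32)
  # padding is now a flat tuple (left, right, top, bottom); mode and value are new options.
  output = ops.pad(x, (2, 1, 1, 2), mode='constant', value=0.5)
  ```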
1357
1358- Interface: mindspore.ops.meshgrid
1359
  Change: The input parameter is changed from `inputs` to `*inputs`.
1361
1362  <table>
1363  <tr>
1364  <td style="text-align:center"> Original Interface </td> <td style="text-align:center"> Interface v2.0.0-rc1 </td>
1365  </tr>
1366  <tr>
1367  <td><pre>
1368  ops.meshgrid(inputs, indexing='xy')
1369  >>> # Example:
1370  >>> x = Tensor(np.array([1, 2, 3, 4]).astype(np.int32))
1371  >>> y = Tensor(np.array([5, 6, 7]).astype(np.int32))
1372  >>> z = Tensor(np.array([8, 9, 0, 1, 2]).astype(np.int32))
  >>> output = ops.meshgrid((x, y, z), indexing='xy')
1374  </pre>
1375  </td>
1376  <td><pre>
1377  ops.meshgrid(*inputs, indexing='xy')
1378  >>> # Example:
1379  >>> x = Tensor(np.array([1, 2, 3, 4]).astype(np.int32))
1380  >>> y = Tensor(np.array([5, 6, 7]).astype(np.int32))
1381  >>> z = Tensor(np.array([8, 9, 0, 1, 2]).astype(np.int32))
  >>> output = ops.meshgrid(x, y, z, indexing='xy')
1383  </pre>
1384  </td>
1385  </tr>
1386  </table>
1387
1388- Interface: mindspore.ops.max
1389
  Change: The order of the return values is changed from "index, value" to "value, index".
1391
1392  <table>
1393  <tr>
1394  <td style="text-align:center"> Original Interface </td> <td style="text-align:center"> Interface v2.0.0-rc1 </td>
1395  </tr>
1396  <tr>
1397  <td><pre>
1398  ops.max(x, axis=0, keep_dims=False)
1399  >>> # Example:
1400  >>> input = Tensor(np.array([0.0, 0.4, 0.6, 0.7, 0.1]),
1401  ...                mindspore.float32)
1402  >>> index, output = ops.max(input)
1403  >>> print(index, output)
1404  >>> 3 0.7
1405  </pre>
1406  </td>
1407  <td><pre>
1408  ops.max(input, axis=None, keepdims=False, *, initial=None, where=True, return_indices=False)
1409  >>> # Example:
1410  >>> input = Tensor(np.array([0.0, 0.4, 0.6, 0.7, 0.1]),
1411  ...                mindspore.float32)
1412  >>> output, index = ops.max(input, axis=0)
1413  >>> print(output, index)
1414  </pre>
1415  </td>
1416  </tr>
1417  </table>
1418
1419- Interface: mindspore.ops.min
1420
  Change: The order of the return values is changed from "index, value" to "value, index".
1422
1423  <table>
1424  <tr>
1425  <td style="text-align:center"> Original Interface </td> <td style="text-align:center"> Interface v2.0.0-rc1 </td>
1426  </tr>
1427  <tr>
1428  <td><pre>
1429  ops.min(x, axis=0, keep_dims=False)
1430  >>> # Example:
1431  >>> input = Tensor(np.array([0.0, 0.4, 0.6, 0.7, 0.1]),
1432  ...                mindspore.float32)
  >>> index, output = ops.min(input)
  >>> print(index, output)
  >>> 0 0.0
1435  </pre>
1436  </td>
1437  <td><pre>
1438  ops.min(input, axis=None, keepdims=False, *, initial=None, where=True, return_indices=False)
1439  >>> # Example:
1440  >>> input = Tensor(np.array([0.0, 0.4, 0.6, 0.7, 0.1]),
1441  ...                mindspore.float32)
  >>> output, index = ops.min(input, keepdims=True)
  >>> print(output, index)
  >>> 0.0 0
1444  </pre>
1445  </td>
1446  </tr>
1447  </table>
1448
1449- Interface: mindspore.ops.random_gamma
1450
1451  Change: The seed2 parameter is deleted and seed=0 is changed to None. The framework behavior is unified and complies with the actual application scenarios and habits of users.
1452
1453  <table>
1454  <tr>
1455  <td style="text-align:center"> Original Interface </td> <td style="text-align:center"> Interface v2.0.0-rc1 </td>
1456  </tr>
1457  <tr>
1458  <td><pre>
1459  ops.random_gamma(shape, alpha, seed=0, seed2=0)
1460  </pre>
1461  </td>
1462  <td><pre>
1463  ops.random_gamma(shape, alpha, seed=None)
1464  </pre>
1465  </td>
1466  </tr>
1467  </table>
1468
1469- Interface: mindspore.ops.standard_laplace
1470
1471  Change: The seed2 parameter is deleted and seed=0 is changed to None. The framework behavior is unified and complies with the actual application scenarios and habits of users.
1472
1473  <table>
1474  <tr>
1475  <td style="text-align:center"> Original Interface </td> <td style="text-align:center"> Interface v2.0.0-rc1 </td>
1476  </tr>
1477  <tr>
1478  <td><pre>
1479  ops.standard_laplace(shape, seed=0, seed2=0)
1480  </pre>
1481  </td>
1482  <td><pre>
1483  ops.standard_laplace(shape, seed=None)
1484  </pre>
1485  </td>
1486  </tr>
1487  </table>
1488
1489- Interface: mindspore.ops.standard_normal
1490
1491  Change: The seed2 parameter is deleted and seed=0 is changed to None. The framework behavior is unified and complies with the actual application scenarios and habits of users.
1492
1493  <table>
1494  <tr>
1495  <td style="text-align:center"> Original Interface </td> <td style="text-align:center"> Interface v2.0.0-rc1 </td>
1496  </tr>
1497  <tr>
1498  <td><pre>
1499  ops.standard_normal(shape, seed=0, seed2=0)
1500  </pre>
1501  </td>
1502  <td><pre>
1503  ops.standard_normal(shape, seed=None)
1504  </pre>
1505  </td>
1506  </tr>
1507  </table>
1508
1509- Interface: mindspore.ops.bernoulli
1510
  Change: The default value of seed is changed from -1 to None, which better matches actual application scenarios.
1512
1513  <table>
1514  <tr>
1515  <td style="text-align:center"> Original Interface </td> <td style="text-align:center"> Interface v2.0.0-rc1 </td>
1516  </tr>
1517  <tr>
1518  <td><pre>
1519  ops.bernoulli(x, p=0.5, seed=-1)
1520  </pre>
1521  </td>
1522  <td><pre>
1523  ops.bernoulli(input, p=0.5, seed=None)
1524  </pre>
1525  </td>
1526  </tr>
1527  </table>
1528
1529- Interface: mindspore.data_sink
1530
  Change: The steps parameter is deleted, the parameter name jit is changed to jit_config, and a new input_signature parameter is added. The usability is improved to meet the requirements of actual application scenarios.
1532
1533  <table>
1534  <tr>
1535  <td style="text-align:center"> Original Interface </td> <td style="text-align:center"> Interface v2.0.0-rc1 </td>
1536  </tr>
1537  <tr>
1538  <td><pre>
1539  mindspore.data_sink(fn, dataset, steps,
1540                      sink_size=1, jit=False)
1541  </pre>
1542  </td>
1543  <td><pre>
1544  mindspore.data_sink(fn, dataset, sink_size=1,
1545                      jit_config=None, input_signature=None)
1546  </pre>
1547  </td>
1548  </tr>
1549  </table>
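
  A minimal sketch assuming an existing `train_step` function and `dataset`; the sink_size and JitConfig settings are illustrative:

  ```python
  import mindspore as ms

  # steps is removed, and jit is replaced by jit_config.
  sink_process = ms.data_sink(train_step, dataset, sink_size=4, jit_config=ms.JitConfig())
  loss = sink_process()
  ```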
1550
1551- Interface: mindspore.ops.conv2d
1552
  Change: Extend the interface function: the bias parameter is added, and the parameter names and parameter order are adjusted.
1554
1555  <table>
1556  <tr>
1557  <td style="text-align:center"> Original Interface </td> <td style="text-align:center"> Interface v2.0.0-rc1 </td>
1558  </tr>
1559  <tr>
1560  <td><pre>
1561  conv2d(inputs, weight, pad_mode="valid",
1562         padding=0, stride=1, dilation=1, group=1)
1563  </pre>
1564  </td>
1565  <td><pre>
1566  conv2d(input, weight, bias=None, stride=1,
1567         pad_mode="valid", padding=0, dilation=1, groups=1)
1568  </pre>
1569  </td>
1570  </tr>
1571  </table>
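
  A minimal sketch of the extended interface with the new bias parameter (shapes are illustrative):

  ```python
  import numpy as np
  import mindspore as ms
  from mindspore import ops

  x = ms.Tensor(np.ones([1, 3, 8, 8]), ms.float32)
  weight = ms.Tensor(np.ones([4, 3, 3, 3]), ms.float32)
  bias = ms.Tensor(np.zeros([4]), ms.float32)
  # bias is new; note the renamed parameters and the adjusted parameter order.
  output = ops.conv2d(x, weight, bias=bias, stride=1, pad_mode="valid")
  ```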
1572
1573- Interface: mindspore.dataset.vision.Pad
1574
  Change: Adjust the input parameter padding of Pad, RandomCrop, and RandomCropWithBbox. When the length of padding is 2, the first value previously filled the left/top boundary and the second value filled the right/bottom boundary; now the first value fills the left/right boundaries and the second value fills the top/bottom boundaries.

  Description: A padding parameter of size 2 is not compatible with the behavior of earlier versions. The padding now needs to be represented explicitly as (left, right, top, bottom).
1578
1579  <table>
1580  <tr>
1581  <td style="text-align:center"> Original Interface </td> <td style="text-align:center"> Interface v2.0.0-rc1 </td>
1582  </tr>
1583  <tr>
1584  <td><pre>
1585  mindspore.dataset.vision.Pad(padding=(1,2))
  Indicates that the left/top part of the image is filled with 1 pixel,
  and the right/bottom part is filled with 2 pixels.
1588  </pre>
1589  </td>
1590  <td><pre>
1591  mindspore.dataset.vision.Pad(padding=(1,2,1,2))
  Indicates that the left/top part of the image is filled with 1 pixel,
  and the right/bottom part is filled with 2 pixels.
1594  </pre>
1595  </td>
1596  </tr>
1597  </table>
1598
1599- Interface: mindspore.dataset.Dataset.map
1600
  Change: Delete the column_order parameter. In most cases, output_columns and column_order have the same value, so column_order does not need to be passed. To adjust the order of the data columns, use mindspore.dataset.Dataset.project.
1602
1603  Description:
1604
1605  1. If the column sequence does not need to be changed, delete the column_order parameter.
1606  2. If you need to specify the data column sequence, delete the column_order parameter and add a project method to the end of the parameter for column transformation (as in the following example).
1607
1608  <table>
1609  <tr>
1610  <td style="text-align:center"> Original Interface </td> <td style="text-align:center"> Interface v2.0.0-rc1 </td>
1611  </tr>
1612  <tr>
1613  <td><pre>
1614  >>> dataset = dataset.map(operations=[transforms],
1615  ...                       input_columns=["column_a"],
1616  ...                       output_columns=["column_b", "column_c"],
1617  ...                       column_order=["column_c", "column_b"])
1618  </pre>
1619  </td>
1620  <td><pre>
1621  >>> dataset = dataset.map(operations=[transforms],
1622  ...                       input_columns=["column_a"],
1623  ...                       output_columns=["column_b", "column_c"])
  >>> dataset = dataset.project(["column_c", "column_b"])
1625  </pre>
1626  </td>
1627  </tr>
1628  </table>
1629
1630- Interface: mindspore.dataset.Dataset.batch
1631
  Change: Delete the column_order parameter. In most cases, output_columns and column_order have the same value, so column_order does not need to be passed. To adjust the order of the data columns, use mindspore.dataset.Dataset.project.
1633
1634  Description:
1635
1636  1. If the column sequence does not need to be changed, delete the column_order parameter.
1637  2. If you need to specify the data column sequence, delete the column_order parameter and add a project method to the end of the parameter for column transformation (as in the following example).
1638
1639  <table>
1640  <tr>
1641  <td style="text-align:center"> Original Interface </td> <td style="text-align:center"> Interface v2.0.0-rc1 </td>
1642  </tr>
1643  <tr>
1644  <td><pre>
1645  >>> dataset = dataset.batch(batch_size=4,
1646  ...                         input_columns=["column_a"],
1647  ...                         output_columns=["column_b", "column_c"],
1648  ...                         column_order=["column_c", "column_b"])
1649  </pre>
1650  </td>
1651  <td><pre>
  >>> dataset = dataset.batch(batch_size=4, input_columns=["column_a"],
  ...                         output_columns=["column_b", "column_c"])
  >>> dataset = dataset.project(["column_c", "column_b"])
1655  </pre>
1656  </td>
1657  </tr>
1658  </table>
1659
1660- Interface: mindspore.dataset.Dataset.batch
1661
1662  Change: Split the batch method into two methods: batch and padded_batch. The pad_info parameter is moved from the batch method to the padded_batch method.
1663
1664  Description: To use the pad_info parameter, use the padded_batch method instead.
1665
1666  <table>
1667  <tr>
1668  <td style="text-align:center"> Original Interface </td> <td style="text-align:center"> Interface v2.0.0-rc1 </td>
1669  </tr>
1670  <tr>
1671  <td><pre>
1672  >>> dataset = dataset.batch(batch_size=4,
1673  ...                         drop_remainder=True, pad_info=...)
1674  </pre>
1675  </td>
1676  <td><pre>
1677  >>> dataset = dataset.padded_batch(batch_size=4,
1678  ...                                drop_remainder=True, pad_info=...)
1679  </pre>
1680  </td>
1681  </tr>
1682  </table>
1683
1684### Bug fixes
1685
- [I62I3J] Fix the inference failure of the BGCF network on Ascend 310.
- [I7C2W3] Fix the null pointer error when enabling multiple losses in parallel pipeline scenarios.
1688
1689### Contributors
1690
1691Thanks goes to these wonderful people:
1692
1693alashkari,anzhengqi,archer2049,B.L.LAN,baihuawei,bichaoyang,BJ-WANG,Bokai Li,Brian-K,caifubi,caiyimeng,cathwong,changzherui,ChenDonYY,chenfei_mindspore,chengang,chengbin,chenhaozhe,chenjianping,chenkang,chenweifeng,chuht,chujinjin,davidanugraha,DavidFFFan,DeshiChen,douzhixing,emmmmtang,Erpim,Ethan,fangwenyi,fangzehua,fangzhou0329,fary86,fengyixing,gaoshuanglong,Gaoxiong,gaoyong10,gengdongjie,gongdaguo1,Greatpan,GuoZhibin,guozhijian,hangq,hanhuifeng,haozhang,hedongdong,Henry Shi,heterogeneous_to_backoff_2_0,huangbingjian,huanghui,huangxinjing,hujiahui8,hujingsong,huoxinyou,jachua,jiahongQian,jianghui58,jiangzhenguang,jiaorui,jiaoy1224,jijiarong,jjfeing,JoeyLin,json,JuiceZ,jxl,kairui_kou,KevinYi,kisnwang,KXiong,laiyongqiang,lanzhineng,liangchenghui,liangzelang,LiangZhibo,lianliguang,lichen,ligan,lijunbin,limingqi107,ling,linqingke,liubuyu,liuchao,liuchuting,liujunzhu,liuluobin,liutongtong9,liuyang811,lixiao,liyan2022,liyejun,liyuxia,looop5,luochao60,luojianing,luoyang,luoyuan,lyqlola,maning202007,maoyaomin,Margaret_wangrui,mayadong,MaZhiming,melody,mengyuanli,michaelzhu_70ab,Mohammad Motallebi,moran,NaCN,nomindcarry,OwenSec,panfengfeng,panshaowu,panzhihui,pkuliuliu,qinzheng,qiuzhongya,qujianwei,r1chardf1d0,Renyuan Zhang,RobinGrosman,shaojunsong,shenwei41,Soaringfish,tangdezhi_123,tanghuikang,tan-wei-cheng,TinaMengtingZhang,TronZhang,TuDouNi,VectorSL,wang_ziqi,wanghenchang,wangnan39,wangpingan,wangshaocong,wangshengnan123,wangtongyu6,weichaoran,wind-zyx,wqx,wtcheng,wujueying,wYann,XianglongZeng,xiaohanzhang,xiaotianci,xiaoyao,XinDu,xulei,xumengjuan1,xupan,xwkgch,yanghaoran,yangluhang,yangruoqi713,yangshuo,yangsijia,yangzhenzhang,yanzhenxiang2020,Yanzhi_YI,yao_yf,yefeng,yeyunpeng2020,Yi_zhang95,yide12,YijieChen,YingLai Lin,YingtongHu,youshu,yuchaojie,yuedongli,YuJianfeng,zangqx,ZengZitao,zhangbuxue,zhangdanyang,zhangdong,zhangfanghe,zhangqi,zhangqinghua,zhangyanhui,zhangyinxia,zhangyongxian,zhangzhaoju,zhanzhan,zhengzuohe,ZhidanLiu,zhixinaa,zhoufeng,zhouyaqiang0,zhuguodong,zhupuxu,zhuyuxiao,zichun_ye,zjun,zlq2020,zong_shuai,ZPaC,zuochuanyong,zyli2020,陈宇,范吉斌,冯一航,胡彬,宦晓玲,黄勇,雷元哲,李良灿,李林杰,刘崇鸣,刘力力,刘勇琪,吕浩宇,吕昱峰(Nate.River),没有窗户的小巷,沈竞兴,十六夜,王程浩,王禹程,王振邦,徐安越,徐永飞,杨旭华,于振华,俞涵,张清华,张澍坤,张栩浩,张学同,赵英灼,周超,周洪叶,朱家兴
1694
1695Contributions of any kind are welcome!
1696
1697## MindSpore 2.0.0-rc1 Release Notes
1698
1699### Major Features and Improvements
1700
1701#### FrontEnd
1702
1703- [BETA] Statement with "return", "return None" and with no return of function are supported in `GRAPH_MODE`.
1704- [BETA] Object with `list` type are supported in `GRAPH_MODE`.
1705- [BETA] Statement with "raise" are supported in variable condition situation in `GRAPH_MODE`.
1706- [STABLE] Functional call supports data sinking mode.
- [BETA] The Transformer layer is added to the nn module to provide easy-to-use Transformer APIs. batch_size does not need to be defined, and dynamic seq_length is supported.
1708
1709#### DataSet
1710
- [STABLE] In the Ascend environment, the default timeout in data sink mode is adjusted to 1900s. This solves the problem that the GetNext operator may time out due to environment resource competition and a large computing workload in data sinking mode.
- [STABLE] MindRecord supports querying the schemas and the number of samples. MindRecord provides a multi-process writing mode, allowing users to generate MindRecord data files in parallel.
1713- [STABLE] The Dataset pipeline can process any Python object. For details, see [Supporting Python Objects in Dataset Pipeline](https://www.mindspore.cn/tutorials/en/r2.0/advanced/dataset/python_objects.html).
1714
1715#### AutoParallel
1716
- [STABLE] The strategies of all parameters can be saved when saving the parallel strategy.
- [STABLE] The Conv3D/MaxPool3D/AvgPool3D distributed operators are supported.
- [STABLE] Support operator-level parallelism and optimizer-level parallelism in PyNative mode with shard: parallel training and the Model API are decoupled to provide basic parallel expression capabilities.
- [STABLE] Support operator-level parallelism and optimizer-level parallelism in Graph mode: parallel training and the Model API are decoupled to provide basic parallel expression capabilities.
1721- [BETA] Supports customized distributed graph segmentation, improving the flexibility of distributed training.
1722
1723#### Runtime
1724
1725- [STABLE] Control flow supports subgraph sink.
1726- [STABLE] Support CUDA 11.6.
- [STABLE] Support operator selection and execution of List/Tuple/Scalar type kernels to match native Python expressions.
- [STABLE] Kernels that are not supported by the hardware can automatically fall back to CPU kernels.
1729- [STABLE] Support heterogeneous execution within subgraph.
1730
1731#### Ascend
1732
1733- [STABLE] Support overflow detection scheme and HCCL runtime overflow check.
1734- [STABLE] Support dump of communication operators.
1735
1736#### Profiler
1737
- [STABLE] Enrich the Profiler collection item configuration so that users can collect performance data in more detail.
1739
1740#### Dump
1741
- [BETA] A single card in PyNative mode supports operator overflow detection.
- [BETA] Graph mode supports HCCL operator dump.
1744
1745### API Change
1746
1747- [STABLE] Add computing APIs, such as MaxUnpool, ReplicationPad, and GaussianNLLLoss.
1748  For details, visit <https://www.mindspore.cn/docs/en/r2.0/api_python/mindspore.html>.
- [STABLE] Extend existing API functions, such as AvgPool, pad, norm, and interpolate.
1750
1751#### operator
1752
1753- [BETA] Add operator primitive for `mindspore.ops.AdaptiveAvgPool3D`.
1754- [BETA] Add operator primitive for `mindspore.ops.AffineGrid`.
1755- [BETA] Add operator primitive for `mindspore.ops.Angle`.
1756- [BETA] Add operator primitive for `mindspore.ops.BartlettWindow`.
1757- [BETA] Add operator primitive for `mindspore.ops.Bernoulli`.
1758- [BETA] Add operator primitive for `mindspore.ops.BesselI0`.
1759- [BETA] Add operator primitive for `mindspore.ops.BesselI1`.
1760- [BETA] Add operator primitive for `mindspore.ops.BesselJ0`.
1761- [BETA] Add operator primitive for `mindspore.ops.BesselJ1`.
1762- [BETA] Add operator primitive for `mindspore.ops.BesselK0`.
1763- [BETA] Add operator primitive for `mindspore.ops.BesselK0e`.
1764- [BETA] Add operator primitive for `mindspore.ops.BesselK1`.
1765- [BETA] Add operator primitive for `mindspore.ops.BesselK1e`.
1766- [BETA] Add operator primitive for `mindspore.ops.BesselY0`.
1767- [BETA] Add operator primitive for `mindspore.ops.BesselY1`.
1768- [BETA] Add operator primitive for `mindspore.ops.Bincount`.
1769- [BETA] Add operator primitive for `mindspore.ops.BlackmanWindow`.
1770- [BETA] Add operator primitive for `mindspore.ops.ChannelShuffle`.
1771- [BETA] Add operator primitive for `mindspore.ops.Cholesky`.
1772- [BETA] Add operator primitive for `mindspore.ops.Col2Im`.
1773- [BETA] Add operator primitive for `mindspore.ops.Complex`.
1774- [BETA] Add operator primitive for `mindspore.ops.ComplexAbs`.
1775- [BETA] Add operator primitive for `mindspore.ops.Cross`.
1776- [BETA] Add operator primitive for `mindspore.ops.CTCLossV2`.
1777- [BETA] Add operator primitive for `mindspore.ops.Cummin`.
1778- [BETA] Add operator primitive for `mindspore.ops.Diag`.
1779- [BETA] Add operator primitive for `mindspore.ops.Digamma`.
1780- [BETA] Add operator primitive for `mindspore.ops.Expand`.
1781- [BETA] Add operator primitive for `mindspore.ops.Fmax`.
1782- [BETA] Add operator primitive for `mindspore.ops.Gcd`.
1783- [BETA] Add operator primitive for `mindspore.ops.Geqrf`.
1784- [BETA] Add operator primitive for `mindspore.ops.GLU`.
1785- [BETA] Add operator primitive for `mindspore.ops.GridSampler2D`.
1786- [BETA] Add operator primitive for `mindspore.ops.GridSampler3D`.
1787- [BETA] Add operator primitive for `mindspore.ops.HammingWindow`.
1788- [BETA] Add operator primitive for `mindspore.ops.Heaviside`.
1789- [BETA] Add operator primitive for `mindspore.ops.Hypot`.
1790- [BETA] Add operator primitive for `mindspore.ops.Igamma`.
1791- [BETA] Add operator primitive for `mindspore.ops.IndexFill`.
1792- [BETA] Add operator primitive for `mindspore.ops.InplaceIndexAdd`.
1793- [BETA] Add operator primitive for `mindspore.ops.InplaceUpdateV2`.
1794- [BETA] Add operator primitive for `mindspore.ops.Lcm`.
1795- [BETA] Add operator primitive for `mindspore.ops.LeftShift`.
1796- [BETA] Add operator primitive for `mindspore.ops.LogicalXor`.
1797- [BETA] Add operator primitive for `mindspore.ops.Logit`.
1798- [BETA] Add operator primitive for `mindspore.ops.LogSpace`.
1799- [BETA] Add operator primitive for `mindspore.ops.LuUnpack`.
1800- [BETA] Add operator primitive for `mindspore.ops.MatrixDiagPartV3`.
1801- [BETA] Add operator primitive for `mindspore.ops.MatrixDiagV3`.
1802- [BETA] Add operator primitive for `mindspore.ops.MatrixSetDiagV3`.
1803- [BETA] Add operator primitive for `mindspore.ops.MaxPool3DWithArgmax`.
1804- [BETA] Add operator primitive for `mindspore.ops.MaxUnpool2D`.
1805- [BETA] Add operator primitive for `mindspore.ops.MaxUnpool3D`.
1806- [BETA] Add operator primitive for `mindspore.ops.MultiMarginLoss`.
1807- [BETA] Add operator primitive for `mindspore.ops.MultinomialWithReplacement`.
1808- [BETA] Add operator primitive for `mindspore.ops.Mvlgamma`.
1809- [BETA] Add operator primitive for `mindspore.ops.NanToNum`.
1810- [BETA] Add operator primitive for `mindspore.ops.NextAfter`.
1811- [BETA] Add operator primitive for `mindspore.ops.Orgqr`.
1812- [BETA] Add operator primitive for `mindspore.ops.Polygamma`.
1813- [BETA] Add operator primitive for `mindspore.ops.ResizeBilinearV2`.
1814- [BETA] Add operator primitive for `mindspore.ops.RightShift`.
1815- [BETA] Add operator primitive for `mindspore.ops.ScatterNdDiv`.
1816- [BETA] Add operator primitive for `mindspore.ops.ScatterNdMul`.
1817- [BETA] Add operator primitive for `mindspore.ops.SearchSorted`.
1818- [BETA] Add operator primitive for `mindspore.ops.Sinc`.
1819- [BETA] Add operator primitive for `mindspore.ops.Trace`.
1820- [BETA] Add operator primitive for `mindspore.ops.Tril`.
1821- [BETA] Add operator primitive for `mindspore.ops.TrilIndices`.
1822- [BETA] Add operator primitive for `mindspore.ops.TriuIndices`.
1823- [BETA] Add operator primitive for `mindspore.ops.UniqueConsecutive`.
1824- [STABLE] Add operator primitive for `mindspore.ops.Cummax`.
1825- [STABLE] Add operator primitive for `mindspore.ops.FillV2`.
1826- [STABLE] Add operator primitive for `mindspore.ops.IsClose`.
1827- [STABLE] Add operator primitive for `mindspore.ops.MatrixSolve`.
1828- [STABLE] Add operator primitive for `mindspore.ops.Median`.
1829- [STABLE] Add operator primitive for `mindspore.ops.MultilabelMarginLoss`.
1830- [STABLE] Add operator primitive for `mindspore.ops.NonZero`.
1831- [STABLE] Add operator primitive for `mindspore.ops.Pdist`.
1832- [STABLE] Add operator primitive for `mindspore.ops.Polar`.
1833- [STABLE] Add operator primitive for `mindspore.ops.RandomGamma`.
1834- [STABLE] Add operator primitive for `mindspore.ops.RandomPoisson`.
1835- [STABLE] Add operator primitive for `mindspore.ops.RandomShuffle`.
1836- [STABLE] Add operator primitive for `mindspore.ops.Renorm`.
1837- [STABLE] Add operator primitive for `mindspore.ops.ScatterNdMax`.
1838- [STABLE] Add operator primitive for `mindspore.ops.ScatterNdMin`.
1839- [STABLE] Add operator primitive for `mindspore.ops.Svd`.
1840- [STABLE] Add operator primitive for `mindspore.ops.TripletMarginLoss`.
1841
1842#### Deleted APIs
1843
- The `mindspore.compression` feature was deprecated in MindSpore 1.8 and is removed in this version.
  The following `mindspore.nn.quant` interfaces are also removed simultaneously: `mindspore.nn.FakeQuantWithMinMaxObserver`, `mindspore.nn.Conv2dBnFoldQuantOneConv`, `mindspore.nn.Conv2dBnFoldQuant`, `mindspore.nn.Conv2dBnWithoutFoldQuant`, `mindspore.nn.Conv2dQuant`, `mindspore.nn.DenseQuant`, `mindspore.nn.ActQuant`, `mindspore.nn.TensorAddQuant`, `mindspore.nn.MulQuant`. Please use [MindSpore Golden Stick](https://gitee.com/mindspore/golden-stick) instead to implement QuantAwareTraining in MindSpore.
- The `mindspore.dataset.close_pool`, `mindspore.dataset.to_device`, and `mindspore.dataset.set_dynamic_columns` interfaces were deprecated in earlier versions and are removed in this version.
1847
1848#### Backwards Incompatible Change
1849
1850- Interface: mindspore.set_context(mode=PYNATIVE_MODE)
1851
1852  Change: The default value is changed from GRAPH_MODE to PYNATIVE_MODE.
1853
  Description: If the running mode is not set and graph mode needs to be set, use the following method:
  mindspore.set_context(mode=GRAPH_MODE).
1856
1857  <table>
1858  <tr>
1859  <td style="text-align:center"> Original Interface </td> <td style="text-align:center"> Interface v2.0.0-rc1 </td>
1860  </tr>
1861  <tr>
1862  <td><pre>
1863  mindspore.set_context(mode=GRAPH_MODE)
1864  </pre>
1865  </td>
1866  <td><pre>
1867  mindspore.set_context(mode=PYNATIVE_MODE)
1868  </pre>
1869  </td>
1870  </tr>
1871  </table>
1872
1873- Interface: mindspore.train.Model.train
1874
1875  Change: The default value of dataset_sink_mode is changed from True to False.
1876
1877  Description: If dataset_sink_mode is not set and the data sinking mode needs to be set, use the following method:
1878  Model.train(dataset_sink_mode=True).
1879
1880  <table>
1881  <tr>
1882  <td style="text-align:center"> Original Interface </td> <td style="text-align:center"> Interface v2.0.0-rc1 </td>
1883  </tr>
1884  <tr>
1885  <td><pre>
1886  Model.train(dataset_sink_mode=True)
1887  </pre>
1888  </td>
1889  <td><pre>
1890  Model.train(dataset_sink_mode=False)
1891  </pre>
1892  </td>
1893  </tr>
1894  </table>
1895
1896- Interface: mindspore.export
1897
  Change: The default value of the file_format parameter is changed from "AIR" to no default value.
1899
  Description: If file_format was not set explicitly before, you now need to specify it. In this case, use the following method:
1901  mindspore.export(net, *inputs, file_name, file_format="AIR", **kwargs).
1902
1903  <table>
1904  <tr>
1905  <td style="text-align:center"> Original Interface </td> <td style="text-align:center"> Interface v2.0.0-rc1 </td>
1906  </tr>
1907  <tr>
1908  <td><pre>
1909  mindspore.export(net, *inputs, file_name,
1910                   file_format="AIR", **kwargs)
1911  </pre>
1912  </td>
1913  <td><pre>
1914  mindspore.export(net, *inputs, file_name,
1915                   file_format, **kwargs)
1916  </pre>
1917  </td>
1918  </tr>
1919  </table>
1920
1921- Interface: mindspore.ops.norm
1922
1923  Change: The ord parameter function is extended to support multiple forms.
1924
1925  <table>
1926  <tr>
1927  <td style="text-align:center"> Original Interface </td> <td style="text-align:center"> Interface v2.0.0-rc1 </td>
1928  </tr>
1929  <tr>
1930  <td><pre>
1931  ops.norm(input_x, axis, p=2, keep_dims=False, epsilon=1e-12)
1932  >>> # Example:
1933  >>> input = Tensor(np.array([[[1.0, 2.0], [3.0, 4.0]],
1934  ...                          [[5.0, 6.0], [7.0, 8.0]]]).astype(np.float32))
1935  >>> output = ops.norm(input, [0, 1], p=2)
1936  </pre></td>
1937  <td><pre>
1938  ops.norm(A, ord=None, dim=None, keepdim=False, *, dtype=None)
1939  >>> # Example:
1940  >>> input = Tensor(np.array([[[1.0, 2.0], [3.0, 4.0]],
1941  ...                          [[5.0, 6.0], [7.0, 8.0]]]).astype(np.float32))
1942  >>> output = ops.norm(input, ord=2, dim=(0, 1))
1943  </pre>
1944  </td>
1945  </tr>
1946  </table>
1947
1948- Interface: mindspore.Tensor.norm
1949
1950  Change: The ord parameter function is extended to support multiple forms.
1951
1952  Description: For details, see the example of ops.norm.
1953
1954  <table>
1955  <tr>
1956  <td style="text-align:center"> Original Interface </td> <td style="text-align:center"> Interface v2.0.0-rc1 </td>
1957  </tr>
1958  <tr>
1959  <td><pre>
1960  Tensor.norm(axis, p=2, keep_dims=False, epsilon=1e-12)
1961  </pre>
1962  </td>
1963  <td><pre>
1964  Tensor.norm(ord=None, dim=None, keepdim=False, *, dtype=None)
1965  </pre>
1966  </td>
1967  </tr>
1968  </table>
1969
1970- Interface: mindspore.ops.dropout
1971
1972  Change: The seed0 and seed1 parameters are deleted and seed=None parameter is added. Instead of returning Tensors and masks, only Tensors are returned. The input parameter training=True is added.
1973
1974  <table>
1975  <tr>
1976  <td style="text-align:center"> Original Interface </td> <td style="text-align:center"> Interface v2.0.0-rc1 </td>
1977  </tr>
1978  <tr>
1979  <td><pre>
1980  ops.dropout(x, p=0.5, seed0=0, seed1=0)
1981  >>> # Example:
1982  >>> input = Tensor(((20, 16), (50, 50)),
1983  ...                mindspore.float32)
1984  >>> output, mask = dropout(x, p=0.5)
1985  </pre>
1986  </td>
1987  <td><pre>
1988  ops.dropout(input, p=0.5, training=True, seed=None)
1989  >>> # Example:
1990  >>> input = Tensor(((20, 16), (50, 50)),
1991  ...                mindspore.float32)
  >>> output = ops.dropout(input, p=0.5, training=True)
1993  </pre>
1994  </td>
1995  </tr>
1996  </table>
1997
1998- Interface: mindspore.ops.dropout2d
1999
2000  Change: Return value is changed from Tensor and mask to Tensor only. The input parameter training=True is added.
2001
2002  <table>
2003  <tr>
2004  <td style="text-align:center"> Original Interface </td> <td style="text-align:center"> Interface v2.0.0-rc1 </td>
2005  </tr>
2006  <tr>
2007  <td><pre>
2008  ops.dropout2d(x, p=0.5)
2009  >>> # Example:
2010  >>> input = Tensor(np.ones([2, 1, 2, 3]),
2011  ...                mindspore.float32)
2012  >>> output, mask = dropout2d(input, 0.5)
2013  </pre>
2014  </td>
2015  <td><pre>
2016  ops.dropout2d(input, p=0.5, training=True)
2017  >>> # Example:
2018  >>> input = Tensor(np.ones([2, 1, 2, 3]),
2019  ...                mindspore.float32)
2020  >>> output = ops.dropout2d(input, 0.5, training=True)
2021  </pre>
2022  </td>
2023  </tr>
2024  </table>
2025
2026- Interface: mindspore.ops.dropout3d
2027
2028  Change: Return value is changed from Tensor and mask to Tensor only. The input parameter training=True is added.
2029
2030  <table>
2031  <tr>
2032  <td style="text-align:center"> Original Interface </td> <td style="text-align:center"> Interface v2.0.0-rc1 </td>
2033  </tr>
2034  <tr>
2035  <td><pre>
2036  ops.dropout3d(x, p=0.5)
2037  >>> # Example:
2038  >>> input = Tensor(np.ones([2, 1, 2, 3]),
2039  ...                mindspore.float32)
2040  >>> output, mask = dropout3d(input, 0.5)
2041  </pre>
2042  </td>
2043  <td><pre>
2044  ops.dropout3d(input, p=0.5, training=True)
2045  >>> # Example:
2046  >>> input = Tensor(np.ones([2, 1, 2, 3]),
2047  ...                mindspore.float32)
2048  >>> output = ops.dropout3d(input, 0.5, training=True)
2049  </pre>
2050  </td>
2051  </tr>
2052  </table>
2053
2054- Interface: mindspore.ops.std
2055
2056  Change: The interface is reconstructed, and the interface usage mode is more consistent with user habits.
2057
2058  Description: If parameter `unbiased` has been set, use the following alternative: `unbiased=False` -> `ddof=0`, `unbiased=True` -> `ddof=1`.
2059
2060  <table>
2061  <tr>
2062  <td style="text-align:center"> Original Interface </td> <td style="text-align:center"> Interface v2.0.0-rc1 </td>
2063  </tr>
2064  <tr>
2065  <td><pre>
2066  ops.std(input_x, axis=(), unbiased=True, keep_dims=False)
2067  </pre>
2068  </td>
2069  <td><pre>
2070  ops.std(input, axis=None, ddof=0, keepdims=False)
2071  </pre>
2072  </td>
2073  </tr>
2074  </table>
2075
2076- Interface: mindspore.load_param_into_net
2077
2078  Change: Parameters that are not loaded in the ckpt are added as return values.
2079
2080  <table>
2081  <tr>
2082  <td style="text-align:center"> Original Interface </td> <td style="text-align:center"> Interface v2.0.0-rc1 </td>
2083  </tr>
2084  <tr>
2085  <td><pre>
2086  net_param = load_param_into_net()
2087  </pre>
2088  </td>
2089  <td><pre>
2090  net_param, ckpt_param = load_param_into_net()
2091  </pre>
2092  </td>
2093  </tr>
2094  </table>
2095
2096- Interface: mindspore.nn.BCELoss
2097
2098  Change: The default value of `reduction` is changed from 'none' to 'mean'.
2099
2100  <table>
2101  <tr>
2102  <td style="text-align:center"> Original Interface </td> <td style="text-align:center"> Interface v2.0.0-rc1 </td>
2103  </tr>
2104  <tr>
2105  <td><pre>
2106  BCELoss(weight=None, reduction='none')
2107  >>> # Example:
2108  >>> weight = Tensor(np.array([[1.0, 2.0, 3.0],
2109  ...                           [4.0, 3.3, 2.2]]),
2110  ...                 mindspore.float32)
2111  >>> loss = nn.BCELoss(weight=weight, reduction='mean')
2112  >>> logits = Tensor(np.array([[0.1, 0.2, 0.3],
2113  ...                           [0.5, 0.7, 0.9]]),
2114  ...                 mindspore.float32)
2115  >>> labels = Tensor(np.array([[0, 1, 0], [0, 0, 1]]),
2116  ...                 mindspore.float32)
2117  >>> output = loss(logits, labels)
2118  >>> print(output)
2119  >>> 1.8952923
2120  </pre>
2121  </td>
2122  <td><pre>
2123  BCELoss(weight=None, reduction='mean')
2124  >>> # Example:
2125  >>> weight = Tensor(np.array([[1.0, 2.0, 3.0],
2126  ...                           [4.0, 3.3, 2.2]]),
2127  ...                 mindspore.float32)
2128  >>> loss = nn.BCELoss(weight=weight)
2129  >>> logits = Tensor(np.array([[0.1, 0.2, 0.3],
2130  ...                           [0.5, 0.7, 0.9]]),
2131  ...                 mindspore.float32)
2132  >>> labels = Tensor(np.array([[0, 1, 0], [0, 0, 1]]),
2133  ...                 mindspore.float32)
2134  >>> output = loss(logits, labels)
2135  >>> print(output)
2136  >>> 1.8952923
2137  </pre>
2138  </td>
2139  </tr>
2140  </table>
2141
2142- Interface: mindspore.ops.split
2143
2144  Change: The interface is refactored for easier use. The order of the second and third parameters is swapped, and the split_size_or_sections parameter is modified and extended.
2145
2146  <table>
2147  <tr>
2148  <td style="text-align:center"> Original Interface </td> <td style="text-align:center"> Interface v2.0.0-rc1 </td>
2149  </tr>
2150  <tr>
2151  <td><pre>
2152  ops.split(input_x, axis=0, output_num=1)
2153  >>> # Example:
2154  >>> input = Tensor(np.array([[1, 1, 1, 1], [2, 2, 2, 2]]),
2155  ...                mindspore.int32)
2156  >>> output = ops.split(input, axis=1, output_num=4)
2157  </pre>
2158  </td>
2159  <td><pre>
2160  ops.split(tensor, split_size_or_sections, axis=0)
2161  >>> # Example:
2162  >>> input = Tensor(np.array([[1, 1, 1, 1], [2, 2, 2, 2]]),
2163  ...                mindspore.int32)
2164  >>> output = ops.split(input, split_size_or_sections=1, axis=1)
2165  </pre>
2166  </td>
2167  </tr>
2168  </table>
2169
2170- Interface: mindspore.Tensor.split
2171
2172  Change: The interface is refactored for easier use. The positions of the two parameters are swapped, and the split_size_or_sections parameter is modified and extended.
2173
2174  Description: For details, see the example of ops.split.
2175
2176  <table>
2177  <tr>
2178  <td style="text-align:center"> Original Interface </td> <td style="text-align:center"> Interface v2.0.0-rc1 </td>
2179  </tr>
2180  <tr>
2181  <td><pre>
2182  Tensor.split(axis=0, output_num=1)
2183  </pre>
2184  </td>
2185  <td><pre>
2186  Tensor.split(split_size_or_sections, axis=0)
2187  </pre>
2188  </td>
2189  </tr>
2190  </table>
2191
2192- Interface: mindspore.ops.pad
2193
2194  Change: The parameter paddings is renamed to padding, and the mode and value parameters are added.
2195
2196  <table>
2197  <tr>
2198  <td style="text-align:center"> Original Interface </td> <td style="text-align:center"> Interface v2.0.0-rc1 </td>
2199  </tr>
2200  <tr>
2201  <td><pre>
2202  ops.pad(input_x, paddings)
2203  >>> # Example:
2204  >>> input_x = Tensor(np.array([[-0.1, 0.3, 3.6],
2205  ...                            [0.4, 0.5, -3.2]]),
2206  ...                  mindspore.float32)
2207  >>> paddings = ((1, 2), (2, 1))
2208  >>> output = ops.pad(input_x, paddings)
2209  </pre>
2210  </td>
2211  <td><pre>
2212  ops.pad(input_x, padding, mode='constant', value=None)
2213  >>> # Example:
2214  >>> input_x = Tensor(np.array([[-0.1, 0.3, 3.6],
2215  ...                            [0.4, 0.5, -3.2]]),
2216  ...                  mindspore.float32)
2217  >>> paddings = (2, 1, 1, 2)
2218  >>> output = ops.pad(input_x, paddings)
2219  </pre>
2220  </td>
2221  </tr>
2222  </table>
2223
2224- Interface: mindspore.ops.meshgrid
2225
2226  Change: The input parameter is changed from `inputs` to `*inputs`.
2227
2228  <table>
2229  <tr>
2230  <td style="text-align:center"> Original Interface </td> <td style="text-align:center"> Interface v2.0.0-rc1 </td>
2231  </tr>
2232  <tr>
2233  <td><pre>
2234  ops.meshgrid(inputs, indexing='xy')
2235  >>> # Example:
2236  >>> x = Tensor(np.array([1, 2, 3, 4]).astype(np.int32))
2237  >>> y = Tensor(np.array([5, 6, 7]).astype(np.int32))
2238  >>> z = Tensor(np.array([8, 9, 0, 1, 2]).astype(np.int32))
2239  >>> output = ops.meshgrid((x, y, z), indexing='xy')
2240  </pre>
2241  </td>
2242  <td><pre>
2243  ops.meshgrid(*inputs, indexing='xy')
2244  >>> # Example:
2245  >>> x = Tensor(np.array([1, 2, 3, 4]).astype(np.int32))
2246  >>> y = Tensor(np.array([5, 6, 7]).astype(np.int32))
2247  >>> z = Tensor(np.array([8, 9, 0, 1, 2]).astype(np.int32))
2248  >>> output = ops.meshgrid(x, y, z, indexing='xy')
2249  </pre>
2250  </td>
2251  </tr>
2252  </table>
2253
2254- Interface: mindspore.ops.max
2255
2256  Change: The order of the return values is swapped from "index, value" to "value, index".
2257
2258  <table>
2259  <tr>
2260  <td style="text-align:center"> Original Interface </td> <td style="text-align:center"> Interface v2.0.0-rc1 </td>
2261  </tr>
2262  <tr>
2263  <td><pre>
2264  ops.max(x, axis=0, keep_dims=False)
2265  >>> # Example:
2266  >>> input = Tensor(np.array([0.0, 0.4, 0.6, 0.7, 0.1]),
2267  ...                mindspore.float32)
2268  >>> index, output = ops.max(input)
2269  >>> print(index, output)
2270  >>> 3 0.7
2271  </pre>
2272  </td>
2273  <td><pre>
2274  ops.max(input, axis=None, keepdims=False, *, initial=None, where=True, return_indices=False)
2275  >>> # Example:
2276  >>> input = Tensor(np.array([0.0, 0.4, 0.6, 0.7, 0.1]),
2277  ...                mindspore.float32)
2278  >>> output, index = ops.max(input, axis=0)
2279  >>> print(output, index)
2280  </pre>
2281  </td>
2282  </tr>
2283  </table>
2284
2285- Interface: mindspore.ops.min
2286
2287  Change: The order of the return values is swapped from "index, value" to "value, index".
2288
2289  <table>
2290  <tr>
2291  <td style="text-align:center"> Original Interface </td> <td style="text-align:center"> Interface v2.0.0-rc1 </td>
2292  </tr>
2293  <tr>
2294  <td><pre>
2295  ops.min(x, axis=0, keep_dims=False)
2296  >>> # Example:
2297  >>> input = Tensor(np.array([0.0, 0.4, 0.6, 0.7, 0.1]),
2298  ...                mindspore.float32)
2299  >>> index, output = ops.min(input)
2300  >>> 0 0.0
2301  </pre>
2302  </td>
2303  <td><pre>
2304  ops.min(input, axis=None, keepdims=False, *, initial=None, where=True, return_indices=False)
2305  >>> # Example:
2306  >>> input = Tensor(np.array([0.0, 0.4, 0.6, 0.7, 0.1]),
2307  ...                mindspore.float32)
2308  >>> output, index = ops.min(input, keepdims=True)
2309  >>> 0.0 0
2310  </pre>
2311  </td>
2312  </tr>
2313  </table>
2314
2315- Interface: mindspore.ops.random_gamma
2316
2317  Change: The seed2 parameter is removed, and the default value of seed is changed from 0 to None. This unifies the framework behavior and better matches actual usage.
2318
2319  <table>
2320  <tr>
2321  <td style="text-align:center"> Original Interface </td> <td style="text-align:center"> Interface v2.0.0-rc1 </td>
2322  </tr>
2323  <tr>
2324  <td><pre>
2325  ops.random_gamma(shape, alpha, seed=0, seed2=0)
2326  </pre>
2327  </td>
2328  <td><pre>
2329  ops.random_gamma(shape, alpha, seed=None)
2330  </pre>
2331  </td>
2332  </tr>
2333  </table>
2334
2335- Interface: mindspore.ops.standard_laplace
2336
2337  Change: The seed2 parameter is removed, and the default value of seed is changed from 0 to None. This unifies the framework behavior and better matches actual usage.
2338
2339  <table>
2340  <tr>
2341  <td style="text-align:center"> Original Interface </td> <td style="text-align:center"> Interface v2.0.0-rc1 </td>
2342  </tr>
2343  <tr>
2344  <td><pre>
2345  ops.standard_laplace(shape, seed=0, seed2=0)
2346  </pre>
2347  </td>
2348  <td><pre>
2349  ops.standard_laplace(shape, seed=None)
2350  </pre>
2351  </td>
2352  </tr>
2353  </table>
2354
2355- Interface: mindspore.ops.standard_normal
2356
2357  Change: The seed2 parameter is removed, and the default value of seed is changed from 0 to None. This unifies the framework behavior and better matches actual usage.
2358
2359  <table>
2360  <tr>
2361  <td style="text-align:center"> Original Interface </td> <td style="text-align:center"> Interface v2.0.0-rc1 </td>
2362  </tr>
2363  <tr>
2364  <td><pre>
2365  ops.standard_normal(shape, seed=0, seed2=0)
2366  </pre>
2367  </td>
2368  <td><pre>
2369  ops.standard_normal(shape, seed=None)
2370  </pre>
2371  </td>
2372  </tr>
2373  </table>
2374
2375- Interface: mindspore.ops.bernoulli
2376
2377  Change: The default value of seed is changed from -1 to None, which better matches actual application scenarios.
2378
2379  <table>
2380  <tr>
2381  <td style="text-align:center"> Original Interface </td> <td style="text-align:center"> Interface v2.0.0-rc1 </td>
2382  </tr>
2383  <tr>
2384  <td><pre>
2385  ops.bernoulli(x, p=0.5, seed=-1)
2386  </pre>
2387  </td>
2388  <td><pre>
2389  ops.bernoulli(input, p=0.5, seed=None)
2390  </pre>
2391  </td>
2392  </tr>
2393  </table>
2394
2395- Interface: mindspore.data_sink
2396
2397  Change: The steps parameter is removed, the jit parameter is renamed to jit_config, and a new input_signature parameter is added. These changes improve usability for actual application scenarios.
2398
2399  <table>
2400  <tr>
2401  <td style="text-align:center"> Original Interface </td> <td style="text-align:center"> Interface v2.0.0-rc1 </td>
2402  </tr>
2403  <tr>
2404  <td><pre>
2405  mindspore.data_sink(fn, dataset, steps,
2406                      sink_size=1, jit=False)
2407  </pre>
2408  </td>
2409  <td><pre>
2410  mindspore.data_sink(fn, dataset, sink_size=1,
2411                      jit_config=None, input_signature=None)
2412  </pre>
2413  </td>
2414  </tr>
2415  </table>
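
  A minimal usage sketch of the new signature (the toy dataset and step function below are assumptions for illustration only):

  ```python
  import numpy as np
  import mindspore as ms
  import mindspore.dataset as ds

  # Toy dataset with a single float32 column.
  dataset = ds.NumpySlicesDataset({"x": np.ones((4, 2), dtype=np.float32)}, shuffle=False)

  def step(x):
      # Any per-step computation; here the input is simply scaled.
      return x * 2

  # v2.0.0-rc1: no steps parameter; jit is replaced by jit_config.
  sink_step = ms.data_sink(step, dataset, sink_size=1, jit_config=ms.JitConfig())
  output = sink_step()
  ```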
2416
2417- Interface: mindspore.ops.conv2d
2418
2419  Change: The interface is extended: the bias parameter is added, and parameter names and their order are adjusted.
2420
2421  <table>
2422  <tr>
2423  <td style="text-align:center"> Original Interface </td> <td style="text-align:center"> Interface v2.0.0-rc1 </td>
2424  </tr>
2425  <tr>
2426  <td><pre>
2427  conv2d(inputs, weight, pad_mode="valid",
2428         padding=0, stride=1, dilation=1, group=1)
2429  </pre>
2430  </td>
2431  <td><pre>
2432  conv2d(input, weight, bias=None, stride=1,
2433         pad_mode="valid", padding=0, dilation=1, groups=1)
2434  </pre>
2435  </td>
2436  </tr>
2437  </table>
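
  A minimal sketch of the new calling convention (shapes and values are illustrative only):

  ```python
  import numpy as np
  import mindspore as ms
  from mindspore import ops, Tensor

  x = Tensor(np.ones((1, 3, 8, 8)), ms.float32)        # NCHW input
  weight = Tensor(np.ones((4, 3, 3, 3)), ms.float32)   # (C_out, C_in, kH, kW)
  bias = Tensor(np.zeros(4), ms.float32)
  # v2.0.0-rc1: bias can be passed directly, and group is renamed to groups.
  output = ops.conv2d(x, weight, bias=bias, stride=1, pad_mode="valid",
                      padding=0, dilation=1, groups=1)
  ```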
2438
2439- Interface: mindspore.dataset.vision.Pad
2440
2441  Change: Adjust the input parameter padding of Pad, RandomCrop, and RandomCropWithBbox. When padding is a sequence of length 2, the first value previously filled the left/upper boundary and the second value filled the right/lower boundary; now the first value fills the left/right boundaries and the second value fills the upper/lower boundaries.
2442
2443  Description: A padding parameter of length 2 is no longer compatible with the behavior of earlier versions. To keep the earlier behavior, the padding needs to be specified explicitly as (left, right, top, bottom).
2444
2445  <table>
2446  <tr>
2447  <td style="text-align:center"> Original Interface </td> <td style="text-align:center"> Interface v2.0.0-rc1 </td>
2448  </tr>
2449  <tr>
2450  <td><pre>
2451  mindspore.dataset.vision.Pad(padding=(1,2))
2452  Indicates that the left/upper part of the image is filled with 1 pixel,
2453  and the right/down part is filled with 2 pixels.
2454  </pre>
2455  </td>
2456  <td><pre>
2457  mindspore.dataset.vision.Pad(padding=(1,2,1,2))
2458  Indicates that the left/upper part of the image is filled with 1 pixel,
2459  and the right/down part is filled with 2 pixels.
2460  </pre>
2461  </td>
2462  </tr>
2463  </table>
2464
2465- Interface: mindspore.dataset.Dataset.map
2466
2467  Change: The column_order parameter is deleted. In most cases, output_columns and column_order have the same value, so column_order does not need to be passed. To adjust the order of data columns, use mindspore.dataset.Dataset.project.
2468
2469  Description:
2470
2471  1. If the column order does not need to be changed, simply delete the column_order parameter.
2472  2. If you need to specify the column order, delete the column_order parameter and append a project operation after the map operation to reorder the columns (as in the following example).
2473
2474  <table>
2475  <tr>
2476  <td style="text-align:center"> Original Interface </td> <td style="text-align:center"> Interface v2.0.0-rc1 </td>
2477  </tr>
2478  <tr>
2479  <td><pre>
2480  >>> dataset = dataset.map(operations=[transforms],
2481  ...                       input_columns=["column_a"],
2482  ...                       output_columns=["column_b", "column_c"],
2483  ...                       column_order=["column_c", "column_b"])
2484  </pre>
2485  </td>
2486  <td><pre>
2487  >>> dataset = dataset.map(operations=[transforms],
2488  ...                       input_columns=["column_a"],
2489  ...                       output_columns=["column_b", "column_c"])
2490  >>> dataset = dataset.project(["column_c", "column_b"])
2491  </pre>
2492  </td>
2493  </tr>
2494  </table>
2495
2496- Interface: mindspore.dataset.Dataset.batch
2497
2498  Change: The column_order parameter is deleted. In most cases, output_columns and column_order have the same value, so column_order does not need to be passed. To adjust the order of data columns, use mindspore.dataset.Dataset.project.
2499
2500  Description:
2501
2502  1. If the column order does not need to be changed, simply delete the column_order parameter.
2503  2. If you need to specify the column order, delete the column_order parameter and append a project operation after the batch operation to reorder the columns (as in the following example).
2504
2505  <table>
2506  <tr>
2507  <td style="text-align:center"> Original Interface </td> <td style="text-align:center"> Interface v2.0.0-rc1 </td>
2508  </tr>
2509  <tr>
2510  <td><pre>
2511  >>> dataset = dataset.batch(batch_size=4,
2512  ...                         input_columns=["column_a"],
2513  ...                         output_columns=["column_b", "column_c"],
2514  ...                         column_order=["column_c", "column_b"])
2515  </pre>
2516  </td>
2517  <td><pre>
2518  >>> dataset = dataset.batch(batch_size=4, input_columns=["column_a"],
2519  ...                         output_columns=["column_b", "column_c"])
2520  >>> dataset = dataset.project(["column_c", "column_b"])
2521  </pre>
2522  </td>
2523  </tr>
2524  </table>
2525
2526- Interface: mindspore.dataset.Dataset.batch
2527
2528  Change: Split the batch method into two methods: batch and padded_batch. The pad_info parameter is moved from the batch method to the padded_batch method.
2529
2530  Description: To use the pad_info parameter, use the padded_batch method instead.
2531
2532  <table>
2533  <tr>
2534  <td style="text-align:center"> Original Interface </td> <td style="text-align:center"> Interface v2.0.0-rc1 </td>
2535  </tr>
2536  <tr>
2537  <td><pre>
2538  >>> dataset = dataset.batch(batch_size=4,
2539  ...                         drop_remainder=True, pad_info=...)
2540  </pre>
2541  </td>
2542  <td><pre>
2543  >>> dataset = dataset.padded_batch(batch_size=4,
2544  ...                                drop_remainder=True, pad_info=...)
2545  </pre>
2546  </td>
2547  </tr>
2548  </table>
2549
2550### Bug fixes
2551
2552- [I66PE6] Fix the issue that abnormal input to the AssignSub primitive leads to a coredump.
2553
2554- [I6F5E6] Fix the data_sink function timeout on Ascend.
2555
2556### Others
2557
2558- Windows support is still being optimized. This version does not support Windows yet; it will be available for download in version 2.0.
2559
2560### Contributors
2561
2562Thanks goes to these wonderful people:
2563
2564alashkari,anzhengqi,archer2049,B.L.LAN,baihuawei,bichaoyang,BJ-WANG,Bokai Li,Brian-K,caifubi,caiyimeng,cathwong,changzherui,ChenDonYY,chenfei_mindspore,chengang,chengbin,chenhaozhe,chenjianping,chenkang,chenweifeng,chuht,chujinjin,davidanugraha,DavidFFFan,DeshiChen,douzhixing,emmmmtang,Erpim,Ethan,fangwenyi,fangzehua,fangzhou0329,fary86,fengyixing,gaoshuanglong,Gaoxiong,gaoyong10,gengdongjie,gongdaguo1,Greatpan,GuoZhibin,guozhijian,hangq,hanhuifeng,haozhang,hedongdong,Henry Shi,heterogeneous_to_backoff_2_0,huangbingjian,huanghui,huangxinjing,hujiahui8,hujingsong,huoxinyou,jachua,jiahongQian,jianghui58,jiangzhenguang,jiaorui,jiaoy1224,jijiarong,jjfeing,JoeyLin,json,JuiceZ,jxl,kairui_kou,KevinYi,kisnwang,KXiong,laiyongqiang,lanzhineng,liangchenghui,liangzelang,LiangZhibo,lianliguang,lichen,ligan,lijunbin,limingqi107,ling,linqingke,liubuyu,liuchao,liuchuting,liujunzhu,liuluobin,liutongtong9,liuyang811,lixiao,liyan2022,liyejun,liyuxia,looop5,luochao60,luojianing,luoyang,luoyuan,lyqlola,maning202007,maoyaomin,Margaret_wangrui,mayadong,MaZhiming,melody,mengyuanli,michaelzhu_70ab,Mohammad Motallebi,moran,NaCN,nomindcarry,OwenSec,panfengfeng,panshaowu,panzhihui,pkuliuliu,qinzheng,qiuzhongya,qujianwei,r1chardf1d0,Renyuan Zhang,RobinGrosman,shaojunsong,shenwei41,Soaringfish,tangdezhi_123,tanghuikang,tan-wei-cheng,TinaMengtingZhang,TronZhang,TuDouNi,VectorSL,wang_ziqi,wanghenchang,wangnan39,wangpingan,wangshaocong,wangshengnan123,wangtongyu6,weichaoran,wind-zyx,wqx,wtcheng,wujueying,wYann,XianglongZeng,xiaohanzhang,xiaotianci,xiaoyao,XinDu,xulei,xumengjuan1,xupan,xwkgch,yanghaoran,yangluhang,yangruoqi713,yangshuo,yangsijia,yangzhenzhang,yanzhenxiang2020,Yanzhi_YI,yao_yf,yefeng,yeyunpeng2020,Yi_zhang95,yide12,YijieChen,YingLai Lin,YingtongHu,youshu,yuchaojie,yuedongli,YuJianfeng,zangqx,ZengZitao,zhangbuxue,zhangdanyang,zhangdong,zhangfanghe,zhangqi,zhangqinghua,zhangyanhui,zhangyinxia,zhangyongxian,zhangzhaoju,zhanzhan,zhengzuohe,ZhidanLiu,zhixinaa,zhoufeng,zhouyaqiang0,zhuguodong,zhupuxu,zhuyuxiao,zichun_ye,zjun,zlq2020,zong_shuai,ZPaC,zuochuanyong,zyli2020,陈宇,范吉斌,冯一航,胡彬,宦晓玲,黄勇,雷元哲,李良灿,李林杰,刘崇鸣,刘力力,刘勇琪,吕浩宇,吕昱峰(Nate.River),没有窗户的小巷,沈竞兴,十六夜,王程浩,王禹程,王振邦,徐安越,徐永飞,杨旭华,于振华,俞涵,张清华,张澍坤,张栩浩,张学同,赵英灼,周超,周洪叶,朱家兴
2565
2566Contributions of any kind are welcome!
2567
2568## MindSpore Lite 2.0.0-rc1 Release Notes
2569
2570### Major Features and Improvements
2571
2572#### MindSpore Lite Cloud Inference
2573
2574The original MindSpore Lite is mainly used for edge devices such as mobile phones and head units. Cloud inference is now added to support scenarios with multiple backend hardware resources on the cloud; it supports Ascend and NVIDIA GPU inference cards and efficiently utilizes multi-core resources on the cloud.
2575
2576Cloud inference that was originally integrated through the MindSpore training framework can be switched to MindSpore Lite. For details, see [Quick Start to Cloud-side Inference](https://mindspore.cn/lite/docs/en/r2.0/quick_start/one_hour_introduction_cloud.html). To retain the original integration method, see [Inference](https://mindspore.cn/docs/en/r2.0/faq/inference.html).
2577
2578- [STABLE] Support MindIR model files.
2579- [STABLE] Third-party Onnx, TensorFlow, and Caffe models can be converted to MindIR model files using the MindSpore Lite conversion tool.
2580- [STABLE] One release package supports multiple hardware backends: Ascend 310/310P/910, NVIDIA GPU, CPU.
2581- [STABLE] Supports the `Model` interface and `ModelParallelRunner` concurrent inference interface.
2582- [STABLE] Supports C++, Python, and Java inference interfaces.
2583
2584#### API
2585
2586- The original Python APIs had too many configuration parameters and were complex to use, so their usability is optimized in version 2.0. The optimizations include adjustments to class construction methods and class attributes. In addition, the Python APIs in version 2.0 and later will be integrated into the cloud-side inference scenario and are incompatible with the Python APIs of earlier versions. For details, see [Python API](https://www.mindspore.cn/lite/api/en/r2.0/mindspore_lite.html).
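
  As a rough illustration of the 2.0-style workflow (the model path is a placeholder, and the exact class and method names should be confirmed against the linked Python API documentation):

  ```python
  import mindspore_lite as mslite

  # Build an inference context that targets the CPU backend.
  context = mslite.Context()
  context.target = ["cpu"]

  # Load a MindIR model and run inference.
  model = mslite.Model()
  model.build_from_file("model.mindir", mslite.ModelType.MINDIR, context)

  inputs = model.get_inputs()   # fill the input tensors with real data before predicting
  outputs = model.predict(inputs)
  ```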
2587
2588## MindSpore 2.0.0-alpha Release Notes
2589
2590### Major Features and Improvements
2591
2592#### PyNative
2593
2594- The default mode of MindSpore is switched to PyNative. If you want to manually set the mode, please refer to [Computational Graph](https://www.mindspore.cn/tutorials/en/r2.0.0-alpha/advanced/compute_graph.html).
2595- Support dynamic shape without padding; three networks are supported as demos: Transformer-GPU, YOLOV5-GPU, ASR-Ascend. Transformer-GPU and YOLOV5-GPU can be downloaded from [models](https://gitee.com/mindspore/models/tree/dynamic_shape). Only the following operators are available on the Ascend backend: Add, Assign, BatchMatMul, BiasAdd, BiasAddGrad, Cast, Conv2D, Conv2DBackpropFilter, Conv2DBackpropInput, CTCLoss, Div, Dropout, DropoutDoMask, Equal, ExpandDims, Gather, GetNext, LayerNorm, LayerNormGrad, LessEqual, Load, Log, LogicalAnd, LogicalNot, LogicalOr, LogSoftmax, LogSoftmaxGrad, MatMul, Maximum, Mul, Neg, NotEqual, NPUAllocFloatStatus, NPUClearFloatStatus, OneHot, RealDiv, Reciprocal, ReduceMean, ReduceSum, ReLU, ReluGrad, Reshape, Select, Softmax, StridedSlice, Sub, Tile, Transpose, UnsortedSegmentSum, ZerosLike. The remaining operators have not been fully verified, please use them as appropriate.
2596
2597#### DataSet
2598
2599- The TFRecordDataset API can directly read TFRecord files compressed by GZIP or ZLIB.
2600- The NumpySlicesDataset API can process data of different dimensions at the same time.
2601- Optimize the structure of the error log to display clearer call stack information for debugging.
2602- Fixed the issue that `mindspore.dataset.config.set_seed` does not take effect for random seeds in distributed training scenarios.
2603
2604#### AutoParallel
2605
2606- Supports distributed implementations for more operators.
2607
2608  Element Wise Operators: AddN, BitwiseAnd, BitwiseOr, BitwiseXor, CumProd, HShrink, HSigmoid, IsFinite, Mish, MulNoNan, Rint, SeLU, SoftShrink, TruncateDiv, TruncateMod, Xdivy, Xlogy, InplaceAdd, InplaceSub, InplaceUpdate, Cdist, L2Loss, Lerp.
2609
2610  Math Operators: SquaredDifference, Erfinv, MaskedFill, SplitV, Gamma, KLDivLoss, LinSpace.
2611
2612  Scatter Operators: ScatterAdd, ScatterDiv, ScatterMax, ScatterMul, ScatterNdAdd, ScatterNdSub, ScatterNdUpdate, ScatterSub, TensorScatterAdd, TensorScatterDiv, TensorScatterMax, TensorScatterMul, TensorScatterUpdate.
2613
2614- Add new APIs `transform_checkpoints` and `transform_checkpoint_by_rank` to transform distributed checkpoint files according to strategy files. Please refer to [Distributed Resilience Training and Inference](https://www.mindspore.cn/tutorials/experts/en/r2.0.0-alpha/parallel/resilience_train_and_predict.html).
2615
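  For example, a minimal sketch of converting checkpoints between two sharding strategies (all paths and the prefix are placeholders, and the exact import location should be confirmed against the API documentation):

  ```python
  from mindspore.parallel import transform_checkpoints

  # Convert checkpoints saved under the source sharding strategy into
  # checkpoints that match the destination sharding strategy.
  transform_checkpoints(src_checkpoints_dir="./src_ckpt",
                        dst_checkpoints_dir="./dst_ckpt",
                        ckpt_prefix="checkpoint_",
                        src_strategy_file="./src_strategy.ckpt",
                        dst_strategy_file="./dst_strategy.ckpt")
  ```
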
2616### API Change
2617
2618#### operator
2619
2620- [STABLE] Add operator primitive for `mindspore.ops.AdaptiveMaxPool3D`.
2621- [STABLE] Add operator primitive for `mindspore.ops.AdjustHue`.
2622- [STABLE] Add operator primitive for `mindspore.ops.BartlettWindow`.
2623- [STABLE] Add operator primitive for `mindspore.ops.BesselJ0`.
2624- [STABLE] Add operator primitive for `mindspore.ops.BesselJ1`.
2625- [STABLE] Add operator primitive for `mindspore.ops.BesselK0`.
2626- [STABLE] Add operator primitive for `mindspore.ops.BesselK0e`.
2627- [STABLE] Add operator primitive for `mindspore.ops.BesselK1`.
2628- [STABLE] Add operator primitive for `mindspore.ops.BesselK1e`.
2629- [STABLE] Add operator primitive for `mindspore.ops.BesselY0`.
2630- [STABLE] Add operator primitive for `mindspore.ops.BesselY1`.
2631- [STABLE] Add operator primitive for `mindspore.ops.Betainc`.
2632- [STABLE] Add operator primitive for `mindspore.ops.Bincount`.
2633- [STABLE] Add operator primitive for `mindspore.ops.BlackmanWindow`.
2634- [STABLE] Add operator primitive for `mindspore.ops.Bucketize`.
2635- [STABLE] Add operator primitive for `mindspore.ops.CombinedNonMaxSuppression`.
2636- [STABLE] Add operator primitive for `mindspore.ops.CompareAndBitpack`.
2637- [STABLE] Add operator primitive for `mindspore.ops.Complex`.
2638- [STABLE] Add operator primitive for `mindspore.ops.DataFormatVecPermute`.
2639- [STABLE] Add operator primitive for `mindspore.ops.EuclideanNorm`.
2640- [STABLE] Add operator primitive for `mindspore.ops.Expand`.
2641- [STABLE] Add operator primitive for `mindspore.ops.ExtractGlimpse`.
2642- [STABLE] Add operator primitive for `mindspore.ops.FillDiagonal`.
2643- [STABLE] Add operator primitive for `mindspore.ops.FractionalAvgPool`.
2644- [STABLE] Add operator primitive for `mindspore.ops.FractionalMaxPool`.
2645- [STABLE] Add operator primitive for `mindspore.ops.Gcd`.
2646- [STABLE] Add operator primitive for `mindspore.ops.HammingWindow`.
2647- [STABLE] Add operator primitive for `mindspore.ops.Histogram`.
2648- [STABLE] Add operator primitive for `mindspore.ops.HSVToRGB`.
2649- [STABLE] Add operator primitive for `mindspore.ops.Lcm`.
2650- [STABLE] Add operator primitive for `mindspore.ops.LeftShift`.
2651- [STABLE] Add operator primitive for `mindspore.ops.ListDiff`.
2652- [STABLE] Add operator primitive for `mindspore.ops.LogSpace`.
2653- [STABLE] Add operator primitive for `mindspore.ops.Lstsq`.
2654- [STABLE] Add operator primitive for `mindspore.ops.MatrixDiagPartV3`.
2655- [STABLE] Add operator primitive for `mindspore.ops.MatrixDiagV3`.
2656- [STABLE] Add operator primitive for `mindspore.ops.MatrixExp`.
2657- [STABLE] Add operator primitive for `mindspore.ops.MatrixPower`.
2658- [STABLE] Add operator primitive for `mindspore.ops.MaxPool3DWithArgmax`.
2659- [STABLE] Add operator primitive for `mindspore.ops.MaxUnpool2D`.
2660- [STABLE] Add operator primitive for `mindspore.ops.MultilabelMarginLoss`.
2661- [STABLE] Add operator primitive for `mindspore.ops.NextAfter`.
2662- [STABLE] Add operator primitive for `mindspore.ops.Orgqr`.
2663- [STABLE] Add operator primitive for `mindspore.ops.ReduceStd`.
2664- [STABLE] Add operator primitive for `mindspore.ops.RGBToHSV`.
2665- [STABLE] Add operator primitive for `mindspore.ops.RightShift`.
2666- [STABLE] Add operator primitive for `mindspore.ops.SampleDistortedBoundingBoxV2`.
2667- [STABLE] Add operator primitive for `mindspore.ops.ScaleAndTranslate`.
2668- [STABLE] Add operator primitive for `mindspore.ops.ScatterAddWithAxis`.
2669- [STABLE] Add operator primitive for `mindspore.ops.ScatterNdDiv`.
2670- [STABLE] Add operator primitive for `mindspore.ops.ScatterNdMax`.
2671- [STABLE] Add operator primitive for `mindspore.ops.ScatterNdMul`.
2672- [STABLE] Add operator primitive for `mindspore.ops.STFT`.
2673- [STABLE] Add operator primitive for `mindspore.ops.Trace`.
2674- [STABLE] Add operator primitive for `mindspore.ops.UpsampleNearest3D`.
2675- [STABLE] Add operator primitive for `mindspore.ops.UpsampleTrilinear3D`.
2676- [STABLE] Add distributed weight conversion interface `mindspore.parallel.transform_checkpoints`.
2677- [STABLE] Add distributed weight conversion interface `mindspore.parallel.transform_checkpoint_by_rank`.
2678
2679#### Backwards Incompatible Change
2680
2681##### Python API
2682
2683- The `mindspore.ms_function` interface is renamed to `mindspore.jit`, and `mindspore.ms_function` will be deprecated and removed in a future version (see the migration sketch after this list).
2684- The `mindspore.ms_class` interface is renamed to `mindspore.jit_class`, and `mindspore.ms_class` will be deprecated and removed in a future version.
2685- The `mindspore.ops.ms_kernel` interface is renamed to `mindspore.ops.kernel`, and `mindspore.ops.ms_kernel` will be deprecated and removed in a future version.
2686- The `mindspore.dataset.map` interface parameter `column_order` does not take effect; use `mindspore.dataset.project` instead.
2687- The `mindspore.dataset.close_pool`, `mindspore.dataset.to_device`, and `mindspore.dataset.set_dynamic_columns` interfaces are deprecated and removed in this version.
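
For example, a minimal migration sketch for the renamed decorator:

```python
import mindspore as ms

# Previously decorated with @ms.ms_function, which is deprecated.
@ms.jit
def add(x, y):
    return x + y
```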
2688
2689### Bug fixes
2690
2691- Fixed an issue where the mixed precision functional interface could not modify the backend driver in graph mode.
2692- Fixed the device_id passing issue so that users can automatically pass device_id in the single-card scenario for the following networks: mobilenetv1/fasterrcnn/yolov3/yolov4/yolov5/unet/openpose/simplepose/crnn/gnmtv2/faceattribute/facequality/facedetection.
2693
2694### Contributors
2695
2696Thanks goes to these wonderful people:
2697
2698AGroupofProbiotocs, anzhengqi, askmiao, baihuawei, baiyangfan, bai-yangfan, bingyaweng, BowenK, buxue, caifubi, CaoJian, caojian05, caozhou, Cathy, changzherui, chenbo116, chenfei, chengxianbin, chenhaozhe, chenjianping, chenzomi, chenzupeng, chujinjin, cj, cjh9368, Corleone, damon0626, danish, Danish, davidmc, dayschan, doitH, dong-li001, fary86, fuzhiye, Gaoxiong, GAO_HYP_XYJ, gengdongjie, Gogery, gongdaguo, gray0v0, gukecai, guoqi, gzhcv, hangq, hanhuifeng2020, Harshvardhan, He, heleiwang, hesham, hexia, Hoai, HuangBingjian, huangdongrun, huanghui, huangxinjing, huqi, huzhifeng, hwjiaorui, Jiabin Liu, jianghui58, Jiaqi, jin-xiulang, jinyaohui, jjfeing, John, jonyguo, JulyAi, jzg, kai00, kingfo, kingxian, kpy, kswang, liuyongqi, laiyongqiang, leonwanghui, liangchenghui, liangzelang, lichen_101010, lichenever, lihongkang, lilei, limingqi107, ling, linqingke, Lin Xh, liubuyu, liuwenhao4, liuxiao78, liuxiao93, liuyang_655, liuzhongkai, Lixia, lixian, liyanliu, liyong, lizhenyu, luopengting, lvchangquan, lvliang, lz, maning202007, Margaret_wangrui, mengyuanli, Ming_blue, ms_yan, ougongchang, panfengfeng, panyifeng, Payne, Peilin, peixu_ren, Pengyongrong, qianlong, qianjiahong, r1chardf1d0, riemann_penn, rmdyh, Sheng, shenwei41, simson, Simson, Su, sunsuodong, tao_yunhao, tinazhang, VectorSL, , Wan, wandongdong, wangdongxu, wangmin,  wangyue01, wangzhe, wanyiming, Wei, wenchunjiang, wilfChen, WilliamLian, wsc, wudenggang, wukesong, wuweikang, wuxuejian, Xiao Tianci, Xiaoda, xiefangqi, xinyunfan, xuanyue, xuyongfei, yanghaitao, yanghaitao1, yanghaoran, YangLuo, yangruoqi713, yankai, yanzhenxiang2020, yao_yf, yepei6, yeyunpeng, Yi, yoni, yoonlee666, yuchaojie, yujianfeng, yuximiao, zengzitao, Zhang,  zhanghuiyao, zhanghui_china, zhangxinfeng3, zhangyihui, zhangz0911gm, zhanke, zhanyuan, zhaodezan, zhaojichen, zhaoting, zhaozhenlong, zhengjun10, zhiqwang, zhoufeng, zhousiyi, zhouyaqiang, zhouyifengCode, Zichun, Ziyan, zjun, ZPaC, wangfengwfwf, zymaa, gerayking, shu-kun-zhang.
2699
2700Contributions of any kind are welcome!
2701
2702## MindSpore 1.10.1 Release Notes
2703
2704### Bug fixes
2705
2706- Fixed the issue that the specified axis is not considered in logsumexp anti-overflow processing
2707- Fixed the compilation dependency of the proto file
2708- Fixed the issue that the print operator produces abnormal output
2709- Fixed the issue that the equal operator is out of range
2710- Fixed the problem that the cell id is incorrect when a function is wrapped by @jit
2711- Fixed the data type verification error in the GNN scenario
2712- Fixed the problem that dataset.map multiprocessing degenerates into multithreading
2713
2714### Contributors
2715
2716Thanks goes to these wonderful people:
2717
2718archer2049, caifubi, chenfei_mindspore, gaoshuanglong, Greatpan, guozhijian, huoxinyou, Kxiong, lanzhineng, lijunbin, liubuyu, liuchuting, luochao60, lyqlola, nomindcarry, TuDouNi, xiaotianci, xupan, yangshuo, yefeng, YingtongHu, yuchaojie, zhoufeng, ZPaC, 刘勇琪, 吕昱峰, 王禹程, 于振华.
2719
2720Contributions of any kind are welcome!
2721
2722## MindSpore 1.10.0 Release Notes
2723
2724### Major Features and Improvements
2725
2726#### DataSet
2727
2728- [STABLE] The timeout waiting time in data sinking mode is adjusted; the default value is now 600s. This solves the issue that the GetNext operator may time out due to environment resource competition and a large computing workload when training in sink mode.
2729
2730### Bug fixes
2731
2732- Fixed an issue where some Primitive operators in AMP cannot be instantiated in graph mode and the interface is unavailable.
2733- Fixed an issue of DynamicRNN execution failure in the LSTM network under the scenario of computational force segmentation on the Ascend platform.
2734- Fixed the issue that DEVICE_ID cannot be set through single-card training script parameters in mobilenet, fasterrcnn, yolo, etc.
2735
2736### Contributors
2737
2738Thanks goes to these wonderful people:
2739
2740AGroupofProbiotocs, anzhengqi, askmiao, baihuawei, baiyangfan, bai-yangfan, bingyaweng, BowenK, buxue, caifubi, CaoJian, caojian05, caozhou, Cathy, changzherui, chenbo116, chenfei, chengxianbin, chenhaozhe, chenjianping, chenzomi, chenzupeng, chujinjin, cj, cjh9368, Corleone, damon0626, danish, Danish, davidmc, dayschan, doitH, dong-li001, fary86, fuzhiye, Gaoxiong, GAO_HYP_XYJ, gengdongjie, Gogery, gongdaguo, gray0v0, gukecai, guoqi, gzhcv, hangq, hanhuifeng2020, Harshvardhan, He, heleiwang, hesham, hexia, Hoai, HuangBingjian, huangdongrun, huanghui, huangxinjing, huqi, huzhifeng, hwjiaorui, Jiabin Liu, jianghui58, Jiaqi, jin-xiulang, jinyaohui, jjfeing, John, jonyguo, JulyAi, jzg, kai00, kingfo, kingxian, kpy, kswang, liuyongqi, laiyongqiang, leonwanghui, liangchenghui, liangzelang, lichen_101010, lichenever, lihongkang, lilei, limingqi107, ling, linqingke, Lin Xh, liubuyu, liuwenhao4, liuxiao78, liuxiao93, liuyang_655, liuzhongkai, Lixia, lixian, liyanliu, liyong, lizhenyu, luopengting, lvchangquan, lvliang, lz, maning202007, Margaret_wangrui, mengyuanli, Ming_blue, ms_yan, ougongchang, panfengfeng, panyifeng, Payne, Peilin, peixu_ren, Pengyongrong, qianlong, qianjiahong, r1chardf1d0, riemann_penn, rmdyh, Sheng, shenwei41, simson, Simson, Su, sunsuodong, tao_yunhao, tinazhang, VectorSL, , Wan, wandongdong, wangdongxu, wangmin,  wangyue01, wangzhe, wanyiming, Wei, wenchunjiang, wilfChen, WilliamLian, wsc, wudenggang, wukesong, wuweikang, wuxuejian, Xiao Tianci, Xiaoda, xiefangqi, xinyunfan, xuanyue, xuyongfei, yanghaitao, yanghaitao1, yanghaoran, YangLuo, yangruoqi713, yankai, yanzhenxiang2020, yao_yf, yepei6, yeyunpeng, Yi, yoni, yoonlee666, yuchaojie, yujianfeng, yuximiao, zengzitao, Zhang,  zhanghuiyao, zhanghui_china, zhangxinfeng3, zhangyihui, zhangz0911gm, zhanke, zhanyuan, zhaodezan, zhaojichen, zhaoting, zhaozhenlong, zhengjun10, zhiqwang, zhoufeng, zhousiyi, zhouyaqiang, zhouyifengCode, Zichun, Ziyan, zjun, ZPaC, wangfengwfwf, zymaa, gerayking, shu-kun-zhang.
2741
2742Contributions of any kind are welcome!
2743
2744## MindSpore Lite 1.10.0 Release Notes
2745
2746### Bug fixes
2747
2748- Fixed a potential accuracy problem of arithmetic-type CPU kernels in the dynamic shape case.
2749- Fixed the incorrect write address of the Deconv quantization operator.
2750
2751## MindSpore 1.9.0 Release Notes
2752
2753### Major Features and Improvements
2754
2755#### FrontEnd
2756
2757- [STABLE] Add the combined object-oriented and functional programming paradigm, and add mixed-precision APIs for this combined paradigm, such as `mindspore.amp.LossScaler`, `mindspore.amp.DynamicLossScaler`, `mindspore.amp.StaticLossScaler`, `mindspore.amp.auto_mixed_precision`, and `mindspore.amp.all_finite`.
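
  For example, a minimal sketch of the dynamic loss scaling workflow (the loss value and gradient tuple below are stand-ins for real training values):

  ```python
  import numpy as np
  import mindspore as ms
  from mindspore import amp, Tensor

  loss_scaler = amp.DynamicLossScaler(scale_value=2**10, scale_factor=2, scale_window=50)

  loss = Tensor(np.array(0.5), ms.float32)
  scaled_loss = loss_scaler.scale(loss)                  # scale the loss before computing gradients

  grads = (Tensor(np.array([1.0, 2.0]), ms.float32),)    # stand-in for real gradients
  grads = loss_scaler.unscale(grads)                     # unscale gradients before the optimizer update
  is_finite = amp.all_finite(grads)                      # check the gradients for inf/nan overflow
  loss_scaler.adjust(is_finite)                          # update the loss scale accordingly
  ```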
2758
2759### API Change
2760
2761#### operator
2762
2763- [STABLE] Add nn interface for `nn.AdaptiveAvgPool3d`.
2764- [STABLE] Add functional interface for `ops.adaptive_avg_pool3d`.
2765- [STABLE] Add functional interface for `ops.addcdiv`.
2766- [STABLE] Add functional interface for `ops.addcmul`.
2767- [STABLE] Add GPU and CPU support for `ops.approximate_equal`.
2768- [STABLE] Add GPU support for `ops.atanh`.
2769- [STABLE] Add GPU support for `ops.bessel_i0`.
2770- [STABLE] Add Ascend support for `ops.bessel_i0e`.
2771- [STABLE] Add GPU support for `ops.bessel_i1`.
2772- [STABLE] Add Ascend and GPU support for `ops.bessel_i1e`.
2773- [STABLE] Add GPU support for `ops.bessel_j0`.
2774- [STABLE] Add GPU support for `ops.bessel_j1`.
2775- [STABLE] Add GPU support for `ops.bessel_k0`.
2776- [STABLE] Add GPU support for `ops.bessel_k0e`.
2777- [STABLE] Add GPU support for `ops.bessel_k1`.
2778- [STABLE] Add GPU support for `ops.bessel_k1e`.
2779- [STABLE] Add GPU support for `ops.bessel_y0`.
2780- [STABLE] Add GPU support for `ops.bessel_y1`.
2781- [STABLE] Add functional interface for `ops.bias_add`.
2782- [STABLE] Add GPU support for `ops.bitwise_and`.
2783- [STABLE] Add GPU support for `ops.bitwise_or`.
2784- [STABLE] Add GPU support for `ops.bitwise_xor`.
2785- [STABLE] Add Ascend support for `ops.grid_sample`.
2786- [STABLE] Add CPU support for `ops.inplace_update`.
2787- [STABLE] Add Ascend and GPU support for `ops.isclose`.
2788- [STABLE] Add Ascend support for `ops.isnan`.
2789- [STABLE] Add GPU support for `ops.lerp`.
2790- [STABLE] Add functional interface for `ops.random_poisson`.
2791- [STABLE] Add functional interface for `ops.reverse_sequence`.
2792- [STABLE] Add GPU support for `ops.scatter_mul`.
2793- [STABLE] Add functional interface for `ops.scatter_nd_max`.
2794- [STABLE] Add functional interface for `ops.scatter_nd_min`.
2795- [STABLE] Add GPU support for `ops.SparseToDense`.
2796- [STABLE] Add functional interface for `ops.square`.
2797- [STABLE] Add GPU support for `ops.standard_laplace`.
2798- [STABLE] Add functional interface for `ops.std`.
2799- [STABLE] Add Ascend and GPU support for `ops.trunc`.
2800- [STABLE] Add functional interface for `ops.unsorted_segment_sum`.
2801- [STABLE] Add functional interface for `ops.xdivy`.
2802- [STABLE] Add GPU support for `ops.xlogy`.
2803- Deprecate `ops.poisson` and use `ops.random_poisson` instead.
2804- Deprecate `ops.SparseApplyAdagrad` and use `ops.SparseApplyAdagradV2` instead.
2805
2806### Bug fixes
2807
2808- [BUGFIX] The logic of the auto mixed precision (amp) O2 level is revised. In addition to the `BatchNorm1d` and `BatchNorm2d` operators, `BatchNorm3d` and `LayerNorm` are now also included. These four operators still use the float32 data type when calculating.
2809
2810- [BUGFIX] Fix the problem that, when processing string type data, the obtained data is of type `numpy.bytes_` if `output_numpy=True` is specified when calling the `create_dict_iterator` or `create_tuple_iterator` interface. After this fix, these interfaces directly return `numpy.str_` type data, and users do not need to perform string decoding on it. Likewise, user-defined processing functions also receive data of type `numpy.str_` directly, matching the original source data type.
2811
2812### Contributors
2813
2814Thanks goes to these wonderful people:
2815
2816AGroupofProbiotocs, anzhengqi, askmiao, baihuawei, baiyangfan, bai-yangfan, bingyaweng, BowenK, buxue, caifubi, CaoJian, caojian05, caozhou, Cathy, changzherui, chenbo116, chenfei, chengxianbin, chenhaozhe, chenjianping, chenzomi, chenzupeng, chujinjin, cj, cjh9368, Corleone, damon0626, danish, Danish, davidmc, dayschan, doitH, dong-li001, fary86, fuzhiye, Gaoxiong, GAO_HYP_XYJ, gengdongjie, Gogery, gongdaguo, gray0v0, gukecai, guoqi, gzhcv, hangq, hanhuifeng2020, Harshvardhan, He, hesham, hexia, Hoai, HuangBingjian, huangdongrun, huanghui, huangxinjing, huqi, huzhifeng, hwjiaorui, Jiabin Liu, jianghui58, Jiaqi, jin-xiulang, jinyaohui, jjfeing, John, jonyguo, JulyAi, jzg, kai00, kingfo, kingxian, kpy, kswang, liuyongqi, laiyongqiang, leonwanghui, liangchenghui, liangzelang, lichen_101010, lichenever, lihongkang, lilei, limingqi107, ling, linqingke, Lin Xh, liubuyu, liuwenhao4, liuxiao78, liuxiao93, liuyang_655, liuzhongkai, liyanliu, lizhenyu, lvchangquan, lvliang, lz, maning202007, Margaret_wangrui, mengyuanli, Ming_blue, ms_yan, panfengfeng, panyifeng, Payne, peixu_ren, Pengyongrong, qianjiahong, r1chardf1d0, riemann_penn, rmdyh, Sheng, shenwei41, simson, Simson, Su, sunsuodong, tao_yunhao, tinazhang, VectorSL, Wan, wandongdong, wangdongxu, wangmin,  wangyue01, wangzhe, wanyiming, Wei, wenchunjiang, wilfChen, WilliamLian, wsc, wudenggang, wukesong, wuweikang, Xiao Tianci, Xiaoda, xiefangqi, xinyunfan, xuanyue, xuyongfei, yanghaitao, yanghaoran, YangLuo, yangruoqi713, yankai, yanzhenxiang2020, yao_yf, yepei6, yeyunpeng, Yi, yoni, yoonlee666, yuchaojie, yujianfeng, yuximiao, zengzitao, Zhang,  zhanghuiyao, zhanghui_china, zhangxinfeng3, zhangyihui, zhangz0911gm, zhanyuan, zhaojichen, zhaoting, zhaozhenlong, zhengjun10, zhiqwang, zhoufeng, zhousiyi, zhouyaqiang, zhouyifengCode, Zichun, Ziyan, zjun, ZPaC, wangfengwfwf, zymaa, gerayking, shu-kun-zhang.
2817
2818Contributions of any kind are welcome!
2819
2820## MindSpore 1.8.1 Release Notes
2821
2822### API Change
2823
2824#### operator
2825
2826- [STABLE] Add GPU and CPU support for ops.ApplyAdagradDA.
2827- [STABLE] Add CPU support for ops.ApplyAdagradV2.
2828- [STABLE] Add Ascend dynamic shape support for ops.ApplyCenteredRmsProp.
2829- [STABLE] Add CPU support for ops.ApplyFtrl.
2830- [STABLE] Add CPU support for ops.ApplyGradientDescent.
2831- [STABLE] Add CPU support for ops.ApplyPowerSign.
2832- [STABLE] Add GPU and CPU support for ops.ApplyProximalAdagrad.
2833- [STABLE] Add Ascend dynamic shape support for ops.ApplyRmsProp.
2834- [STABLE] Add functional interface for ops.max.
2835- [STABLE] Add functional interface for ops.atan2.
2836- [STABLE] Add GPU support for ops.cummax.
2837- [STABLE] Add GPU and CPU support for ops.cummin.
2838- [STABLE] Add GPU support for ops.diag.
2839- [STABLE] Add functional interface for ops.expand_dims.
2840- [STABLE] Add functional interface for ops.gather_elements.
2841- [STABLE] Add GPU support for ops.grid_sample.
2842- [STABLE] Add Ascend support for ops.hardswish.
2843- [BETA] Add GPU support for ops.index_fill.
2844- [BETA] Add CPU support for ops.inplace_update.
2845- [BETA] Add GPU support for nn.InstanceNorm1d.
2846- [BETA] Add GPU support for nn.InstanceNorm2d.
2847- [BETA] Add GPU support for nn.InstanceNorm3d.
2848- [STABLE] Add functional interface for ops.log1p.
2849- [STABLE] Add GPU and CPU support for ops.masked_fill.
2850- [BETA] Add GPU support for ops.matrix_diag_part.
2851- [BETA] Add GPU support for ops.matrix_diag.
2852- [BETA] Add GPU support for ops.matrix_set_diag.
2853- [STABLE] Add GPU support for ops.max_pool3d.
2854- [STABLE] Add functional interface for ops.nll_loss.
2855- [STABLE] Add functional interface for ops.one_hot.
2856- [STABLE] Add functional interface for ops.pad.
2857- [STABLE] Add CPU support for ops.random_gamma.
2858- [STABLE] Add functional interface for ops.amax.
2859- [STABLE] Add functional interface for ops.mean.
2860- [STABLE] Add functional interface for ops.amin.
2861- [STABLE] Add functional interface for ops.prod.
2862- [STABLE] Add Ascend, GPU, and CPU support for ops.renorm.
2863- [BETA] Add Ascend, GPU, and CPU support for ops.tensor_scatter_elements.
2864- [STABLE] Add GPU support for ops.scatter_max.
2865- [STABLE] Add GPU support for ops.scatter_min.
2866- [STABLE] Add functional interface for ops.scatter_nd.
2867- [STABLE] Add GPU support for ops.scatter_nd_max.
2868- [STABLE] Add functional interface for ops.scatter_update.
2869- [STABLE] Add CPU support for ops.binary_cross_entropy_with_logits.
2870- [STABLE] Add functional interface for ops.smooth_l1_loss.
2871- [STABLE] Add CPU support for ops.space_to_batch_nd.
2872- [STABLE] Add GPU and CPU support for ops.SparseApplyAdagrad.
2873- [STABLE] Add GPU and CPU support for ops.sparse_segment_mean.
2874- [STABLE] Add functional interface for ops.squeeze.
2875- [STABLE] Add CPU support for ops.standard_laplace.
2876- [BETA] Add Ascend, GPU, and CPU support for nn.ReflectionPad1d.
2877- [BETA] Add Ascend, GPU, and CPU support for nn.ReflectionPad2d.
2878- [STABLE] Add Ascend, GPU, and CPU support for nn.SiLU.
2879- [STABLE] Add functional interface for ops.transpose.
2880- [STABLE] Add CPU support for ops.uniform_candidate_sampler.
2881- [STABLE] Add functional interface for ops.uniform.
2882- [STABLE] Add GPU support for ops.unique_with_pad.
2883- [STABLE] Add functional interface for ops.unstack.
2884- [BETA] Add GPU and CPU support for ops.interpolate.
2885- [STABLE] Add CPU support for ops.xdivy.
2886- [STABLE] Add CPU support for ops.xlogy.
2887
2888## MindSpore 1.8.0 Release Notes
2889
2890### Major Features and Improvements
2891
2892#### FrontEnd
2893
2894- [BETA] Add `mindspore.train.Model.fit` API, add `mindspore.train.callback.EarlyStopping` and `mindspore.train.callback.ReduceLROnPlateau` in Callback.
2895- [BETA] Support custom operator implemented by Julia.
2896- [BETA] Support custom operator implemented by MindSpore Hybrid DSL.
2897- [STABLE] The export() interface supports the export of a model using a custom encryption algorithm, and the load() interface supports the import of a model using a custom decryption algorithm.
2898- [BETA] [Unified_Dynamic_and_Static_Graphs] [Usability] Constant-type data (tuple/list/dict is supported in Version 1.8) can be set to be variable during graph compiling.
2899- [BETA] [Unified_Dynamic_and_Static_Graphs] JIT fallback is used to support the control flow capability in the constant scenario.
2900- [STABLE] [Unified_Dynamic_and_Static_Graphs] The Python raise statement is supported in the graph mode constant scenario.
2901- [STABLE] [Unified_Dynamic_and_Static_Graphs] The Python assert statement is supported in the graph mode constant scenario.
2902- [STABLE] [Unified_Dynamic_and_Static_Graphs] The Python print statement is supported in the graph mode constant scenario.
2903- [STABLE] [Unified_Dynamic_and_Static_Graphs] The str.format() method is supported in the graph mode.
2904- [STABLE] [Unified_Dynamic_and_Static_Graphs] The slice method can be used to assign a value to the list in the graph mode.
2905- [STABLE] [Unified_Dynamic_and_Static_Graphs] The instances of custom classes can be created and invoked in the graph mode.
2906- [STABLE] [Unified_Dynamic_and_Static_Graphs] Obtaining the properties of a class from the Cell array and the custom class array is supported.
2907- [STABLE] [Unified_Dynamic_and_Static_Graphs] isinstance supports scenario expanding in the graph mode.
2908- [STABLE] Rename the custom operator decorator 'ms_hybrid' to 'ms_kernel'.
2909- [BETA] Custom operator Hybrid DSL is supported on the backend of CPU.
2910- [BETA] Custom operator Ascend backend adds custom scheduling primitive syntax support.
2911
2912#### PyNative
2913
2914- [STABLE] Implement the AdamWeightDecay operator to replace the original small operator combination mode.
2915- [STABLE] In PyNative mode, execute the optimizer by unifying the dynamic and static graphs.
2916- [STABLE] Optimize the execution performance of PyNative bprop graph and ms_function.
2917
2918#### Auto Parallel
2919
2920- [STABLE] Support the AllToAll single-operator mode. The AllToAll operator is supported at graph compilation level O0.
2921- [STABLE] Whole-graph offloading supports launching with MPI.
2922- [STABLE] Seeds of model weights support parallel interface configuration. If the random seed is not set through the mindspore.set_seed command, the weights initialized by each parameter are determined by the current fragment index. If the random seed is configured, parameters with the same shape and the same sharding strategy are initialized to the same values.
2923- [STABLE] HCCL shields the difference between internal full-mesh and non-full-mesh connections, so both fully-connected AllToAllv and hierarchical AllToAllv are allowed in one training session.
2924- [BETA] CPU optimizer fusion. Multiple optimizer operators are combined by data type through cross-parameter fusion, improving performance. Currently, it has been verified on the CPU AdamWeightDecay optimizer. You can use the flatten_weights method in the network cell class to enable this function.
2925
2926#### Executor
2927
2928- [STABLE] Provide southbound API.
2929- [STABLE] Multi-actor fusion execution to optimize the execution performance during runtime.
2930- [STABLE] No-op operators (e.g. reshape) are eliminated from execution.
2931- [STABLE] The embedded cache architecture is switched to the unified distributed runtime.
2932- [STABLE] The Parameter Server is switched to the unified distributed runtime.
2933- [STABLE] Support Parameter Server mode training on CPU.
2934
2935#### DataSet
2936
2937- [STABLE] When the map operation is used on dataset objects with parameters such as num_parallel_workers > 1 and python_multiprocessing=True, the multi-process mechanism is optimized so that data channels and child processes are mapped one to one, avoiding excessive file handle occupation; the closing_pool interface is also deleted.
2938- [STABLE] Add a batch of Vision, Text and Audio data augmentation operations.
2939- [STABLE] Fix a bug where the flat_map method of the Dataset class does not flatten the result.
2940- [STABLE] Unify the import paths of dataset augmentation APIs to provide an easier way to use them. Refer to the [latest API usage](https://www.mindspore.cn/docs/en/r1.8/api_python/mindspore.dataset.vision.html).
2941
2942### API Change
2943
2944#### operator
2945
2946- [STABLE] Add GPU support for ops.adaptive_avg_pool2d.
2947- [BETA] Add Ascend, GPU, and CPU support for ops.adaptive_max_pool2d .
2948- [BETA] Add CPU support for ops.approximate_equal.
2949- [STABLE] Add CPU support for ops.argmin.
2950- [BETA] Add CPU support for ops.assign_sub.
2951- [STABLE] Add GPU support for ops.bernoulli.
2952- [BETA] Add CPU support for ops.bessel_i0.
2953- [BETA] Add CPU support for ops.bessel_i0e.
2954- [BETA] Add CPU support for ops.bessel_i1.
2955- [BETA] Add CPU support for ops.bessel_i1e.
2956- [STABLE] Add CPU support for ops.bessel_j0.
2957- [STABLE] Add CPU support for ops.bessel_j1.
2958- [STABLE] Add CPU support for ops.bessel_k0.
2959- [STABLE] Add CPU support for ops.bessel_k0e.
2960- [BETA] Add CPU support for ops.bessel_k1.
2961- [BETA] Add CPU support for ops.bessel_k1e.
2962- [STABLE] Add CPU support for ops.bessel_y0.
2963- [STABLE] Add CPU support for ops.bessel_y1.
2964- [STABLE] Add CPU support for ops.bitwise_and.
2965- [STABLE] Add CPU support for ops.bitwise_or.
2966- [STABLE] Add CPU support for ops.bitwise_xor.
2967- [STABLE] Add functional interface for ops.broadcast_to.
2968- [BETA] Add GPU and CPU support for ops.ceil.
2969- [BETA] Add GPU support for ops.col2im.
2970- [BETA] Add functional interface for ops.concat.
2971- [STABLE] Add GPU support for ops.cosh.
2972- [STABLE] Add Ascend and CPU support for ops.ctc_greedy_decoder.
2973- [BETA] Add GPU and CPU support for ops.DataFormatDimMap.
2974- [BETA] Add GPU and CPU support for ops.dropout2d.
2975- [BETA] Add CPU support for ops.dropout3d.
2976- [BETA] Add CPU support for ops.erf.
2977- [BETA] Add CPU support for ops.erfc.
2978- [STABLE] Add functional interface for ops.expand_dims.
2979- [STABLE] Add GPU and CPU support for ops.fast_gelu.
2980- [STABLE] Add Ascend dynamic shape support for ops.flatten.
2981- [BETA] Add GPU and CPU support for ops.ger.
2982- [STABLE] Add Ascend, GPU, and CPU support for ops.gumbel_softmax.
2983- [BETA] Add GPU and CPU support for ops.hardshrink.
2984- [BETA] Add CPU support for ops.index_add.
2985- [BETA] Add CPU support for ops.inplace_add.
2986- [BETA] Add CPU support for ops.inplace_sub.
2987- [STABLE] Add CPU support for ops.intopk.
2988- [STABLE] Add GPU and CPU support for ops.inv.
2989- [STABLE] Add GPU and CPU support for ops.invert.
2990- [BETA] Add CPU support for ops.isclose.
2991- [STABLE] Add CPU support for ops.lerp.
2992- [BETA] Add CPU support for ops.linspace.
2993- [BETA] Add functional interface for ops.log_softmax.
2994- [BETA] Add Ascend, GPU, and CPU support for ops.norm.
2995- [BETA] Add CPU support for ops.lrn.
2996- [BETA] Add GPU support for ops.masked_select.
2997- [BETA] Add GPU and CPU support for ops.matrix_band_part.
2998- [BETA] Add GPU and CPU support for ops.matrix_solve.
2999- [BETA] Add CPU support for ops.meshgrid.
3000- [STABLE] Add CPU support for ops.mish.
3001- [BETA] Add GPU support for ops.nonzero.
3002- [STABLE] Add GPU and CPU support for ops.padding.
3003- [BETA] Add Ascend dynamic shape support for ops.pow.
3004- [BETA] Add functional interface for ops.range.
3005- [BETA] Add Ascend dynamic shape support for ops.round.
3006- [STABLE] Add Ascend dynamic shape support for ops.scatter_add.
3007- [STABLE] Add Ascend dynamic shape support for ops.scatter_div.
3008- [BETA] Add GPU support for ops.scatter_max.
3009- [BETA] Add GPU support for ops.scatter_min.
3010- [BETA] Add CPU support for ops.scatter_nd_add.
3011- [STABLE] Add GPU and CPU support for ops.scatter_nd_div.
3012- [STABLE] Add GPU and CPU support for ops.scatter_nd_min.
3013- [STABLE] Add GPU and CPU support for ops.scatter_nd_mul.
3014- [BETA] Add CPU support for ops.scatter_nd_sub.
3015- [STABLE] Add Ascend dynamic shape support for ops.scatter_update.
3016- [BETA] Add Ascend dynamic shape support for ops.select.
3017- [BETA] Add GPU and CPU support for ops.selu.
3018- [BETA] Add GPU and CPU support for ops.soft_shrink.
3019- [BETA] Add CPU support for ops.softsign.
3020- [STABLE] Add GPU support for ops.tan.
3021- [BETA] Add Ascend and CPU support for ops.tensor_scatter_add.
3022- [STABLE] Add GPU and CPU support for ops.tensor_scatter_div.
3023- [STABLE] Add GPU and CPU support for ops.tensor_scatter_mul.
3024- [BETA] Add Ascend and CPU support for ops.tensor_scatter_sub.
3025- [STABLE] Add Ascend, GPU, and CPU support for nn.AdaptiveAvgPool1d.
3026- [STABLE] Add Ascend, GPU, and CPU support for nn.AdaptiveMaxPool1d.
3027- [BETA] Add Ascend, GPU, and CPU support for nn.BiDense.
3028- [STABLE] Add Ascend, GPU, and CPU support for nn.ConstantPad1d.
3029- [STABLE] Add Ascend, GPU, and CPU support for nn.ConstantPad2d.
3030- [STABLE] Add Ascend, GPU, and CPU support for nn.ConstantPad3d.
3031- [STABLE] Add Ascend, GPU, and CPU support for nn.Hardtanh.
3032- [STABLE] Add Ascend, GPU, and CPU support for nn.HuberLoss.
3033- [STABLE] Add Ascend, GPU, and CPU support for nn.RReLU.
3034- [STABLE] Add Ascend, GPU, and CPU support for nn.Tanhshrink.
3035- [STABLE] Add Ascend, GPU, and CPU support for nn.Threshold.
3036- [STABLE] Add Ascend, GPU, and CPU support for nn.ZeroPad2d.
3037- [BETA] Add GPU support for ops.unique_consecutive.
3038- [STABLE] Add CPU support for ops.unsorted_segment_max.
3039- [STABLE] Add CPU support for ops.unsorted_segment_min.
3040- [STABLE] Add GPU support for ops.unsorted_segment_prod.
3041
3042#### Backwards Incompatible Change
3043
3044##### Python API
3045
3046- DVPP simulation algorithm is no longer supported. Remove `mindspore.dataset.vision.c_transforms.SoftDvppDecodeRandomCropResizeJpeg` and `mindspore.dataset.vision.c_transforms.SoftDvppDecodeResizeJpeg` interfaces.
3047- Add `on_train_epoch_end` method in LossMonitor, which implements printing metric information in the epoch level when it is used in `mindspore.train.Model.fit`.
3048- The content printed by TimeMonitor is changed: "train" or "eval" is added to distinguish between the training and inference phases.
3049- `filter_prefix` of `mindspore.load_checkpoint` interface: empty string ("") is no longer supported, and the matching rules are changed from strong matching to fuzzy matching.
3050
3051#### Import Optimization
3052
3053APIs in `mindspore.context`, `mindspore.parallel`, `mindspore.profiler` and `mindspore.train` can be directly used in `mindspore`. The original usage is still supported.
3054
3055For examples:
3056
3057- `mindspore.context.set_context` can be simplified to `mindspore.set_context`.
3058- `mindspore.parallel.set_algo_parameters` can be simplified to `mindspore.set_algo_parameters`.
3059- `mindspore.profiler.Profiler` can be simplified to `mindspore.Profiler`.
3060- `mindspore.train.callback.Callback` can be simplified to `mindspore.train.Callback`.
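
For instance, a minimal sketch of the simplified import style:

```python
import mindspore as ms

# Formerly: mindspore.context.set_context(...)
ms.set_context(mode=ms.GRAPH_MODE, device_target="CPU")
```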
3061
3062The API pages are aggregated to <https://www.mindspore.cn/docs/en/r1.8/api_python/mindspore.html>.
3063
3064### Contributors
3065
3066Thanks goes to these wonderful people:
3067
3068AGroupofProbiotocs, anzhengqi, askmiao, baihuawei, baiyangfan, bai-yangfan, bingyaweng, BowenK, buxue, caifubi, CaoJian, caojian05, caozhou, Cathy, changzherui, chenbo116, chenfei, chengxianbin, chenhaozhe, chenjianping, chenzomi, chenzupeng, chujinjin, cj, cjh9368, Corleone, damon0626, danish, Danish, davidmc, dayschan, doitH, dong-li001, fary86, fuzhiye, Gaoxiong, GAO_HYP_XYJ, gengdongjie, Gogery, gongdaguo, gray0v0, gukecai, guoqi, gzhcv, hangq, hanhuifeng2020, Harshvardhan, He, heleiwang, hesham, hexia, Hoai, HuangBingjian, huangdongrun, huanghui, huangxinjing, huqi, huzhifeng, hwjiaorui, Jiabin Liu, jianghui58, Jiaqi, jin-xiulang, jinyaohui, jjfeing, John, jonyguo, JulyAi, jzg, kai00, kingfo, kingxian, kpy, kswang, liuyongqi, laiyongqiang, leonwanghui, liangchenghui, liangzelang, lichen_101010, lichenever, lihongkang, lilei, limingqi107, ling, linqingke, Lin Xh, liubuyu, liuwenhao4, liuxiao78, liuxiao93, liuyang_655, liuzhongkai, Lixia, lixian, liyanliu, liyong, lizhenyu, luopengting, lvchangquan, lvliang, lz, maning202007, Margaret_wangrui, mengyuanli, Ming_blue, ms_yan, ougongchang, panfengfeng, panyifeng, Payne, Peilin, peixu_ren, Pengyongrong, qianlong, qianjiahong, r1chardf1d0, riemann_penn, rmdyh, Sheng, shenwei41, simson, Simson, Su, sunsuodong, tao_yunhao, tinazhang, VectorSL, , Wan, wandongdong, wangdongxu, wangmin,  wangyue01, wangzhe, wanyiming, Wei, wenchunjiang, wilfChen, WilliamLian, wsc, wudenggang, wukesong, wuweikang, wuxuejian, Xiao Tianci, Xiaoda, xiefangqi, xinyunfan, xuanyue, xuyongfei, yanghaitao, yanghaitao1, yanghaoran, YangLuo, yangruoqi713, yankai, yanzhenxiang2020, yao_yf, yepei6, yeyunpeng, Yi, yoni, yoonlee666, yuchaojie, yujianfeng, yuximiao, zengzitao, Zhang,  zhanghuiyao, zhanghui_china, zhangxinfeng3, zhangyihui, zhangz0911gm, zhanke, zhanyuan, zhaodezan, zhaojichen, zhaoting, zhaozhenlong, zhengjun10, zhiqwang, zhoufeng, zhousiyi, zhouyaqiang, zhouyifengCode, Zichun, Ziyan, zjun, ZPaC, wangfengwfwf, zymaa, gerayking, shu-kun-zhang.
3069
3070Contributions of any kind are welcome!
3071
3072## MindSpore Lite 1.8.0 Release Notes
3073
3074### Major Features and Improvements
3075
3076#### API
3077
3078- [STABLE] Add C++ and Python APIs for model conversion.
3079- [STABLE] Add Python APIs for model inference.
3080
3081#### Post-Training Quantization
3082
3083- [STABLE] Support perlayer quantization, and built-in CLE to optimize perlayer quantization accuracy.
3084
3085## MindSpore 1.7.0 Release Notes
3086
3087### Major Features and Improvements
3088
3089#### OS
3090
3091- [STABLE] Support Python 3.8 (Linux/Windows/Mac).
3092- [STABLE] Installation improved with more detailed install guide and automated shell scripts.
3093- [STABLE] Support operator computing with multi-thread under Windows.
3094- [STABLE] Compatible with GCC from version 7.3 to 9.x.
3095
3096#### FrontEnd
3097
- [STABLE] Support dynamic weight decay for optimizers, that is, the weight decay value changes with the training step.
- [STABLE] Add four methods to create Tensors: `mindspore.numpy.rand()`, `mindspore.numpy.randn()`, `mindspore.numpy.randint()`, and `mindspore.ops.arange()` (see the sketch after this list).
- [STABLE] Add `mindspore.train.callback.History` in Callback.
- [BETA] Support custom operators implemented in Julia.
- [STABLE] Support accessing attributes and methods of user-defined classes through the `mindspore.ms_class` class decorator.
- [STABLE] Support training when a network has side effect operations and control flow statements at the same time.
- [STABLE] Support more complex control flow syntax, such as a for loop statement in the body of a while loop.
- [STABLE] Improve the performance of networks with complex control flow syntax by decreasing the number of subgraphs.
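
A minimal sketch of the new Tensor-creation helpers (shapes and ranges are illustrative, and the start/stop/step convention of `mindspore.ops.arange` is assumed):

```python
import mindspore.numpy as mnp
from mindspore import ops

x = mnp.rand(2, 3)            # uniform samples in [0, 1)
y = mnp.randn(2, 3)           # standard normal samples
z = mnp.randint(0, 10, (4,))  # random integers in [0, 10)
r = ops.arange(0, 10, 2)      # 0, 2, 4, 6, 8
```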
3106
3107#### PyNative
3108
- [STABLE] Add hook functions in PyNative mode, including the forward hook interfaces register_forward_pre_hook and register_forward_hook, and the backward hook interface register_backward_hook (a short sketch follows this list).
3110- [STABLE] Optimize the execution performance of PyNative mode, and execute the front-end Python and the back-end C++ in parallel.
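
A minimal, hedged sketch of registering a forward hook on a Cell (the hook body only traces the call because the exact hook-function signature may vary between versions):

```python
import mindspore.nn as nn
from mindspore import context

context.set_context(mode=context.PYNATIVE_MODE)  # hooks take effect in PyNative mode

def forward_hook(*args):
    # Only record that the hook fired; the precise argument layout
    # (cell identifier, inputs, output) is version-dependent.
    print("forward hook fired with", len(args), "arguments")

net = nn.Dense(3, 2)
net.register_forward_hook(forward_hook)
```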
3111
3112#### Auto Parallel
3113
- [STABLE] Support TopK routing, data parallel and optimizer state parallel when MoE is enabled.
- [STABLE] Support AllGather/ReduceScatter communication operator fusion. Support AllReduce fusion by data volume size in DATA_PARALLEL mode.
3116- [STABLE] Support ops.clip_by_global_norm in the parallel mode.
3117- [STABLE] Support AdaSum optimizer in the parallel mode.
3118- [STABLE] Support automatic optimizer state parallel.
- [STABLE] Support configurable AlltoAll. Support automatically adding the VirtualDataset cell.
- [STABLE] Support automatically inferring trainable parameters in pipeline parallel training.
- [STABLE] Support clusters where the device number is not a power of 2.
3122- [STABLE] Support sharding propagation in auto-parallel mode.
3123- [STABLE] Support optimizer offload under the unified runtime.
3124- [STABLE] Support Adafactor operator on CPU.
- [STABLE] Support sharding at the H/W axis for the Conv2d/Conv2DTranspose operator. Support operators such as ResizeBilinear, ROIAlign, CropAndResize, BoundingBoxEncode, IOU and RandomChoiceWithMask.
3126
3127#### Executor
3128
- [BETA] Support automatic failure recovery in data parallel training mode. See [Failure Recovery Under Data Parallel Training](https://www.mindspore.cn/tutorials/experts/en/r1.7/parallel/train_gpu.html).
- [BETA] Support searching for the number of CPU threads to obtain the optimal number of threads for execution. The entire search process takes 50 steps, and the overall performance reaches a stable state after 50 steps. When testing performance, data after 50 steps needs to be used as the standard.
3131
3132#### DataSet
3133
3134- [STABLE] Add dataset operations mapping between TensorFlow.data module and MindSpore.dataset module, [check list](https://www.mindspore.cn/docs/en/r1.7/note/api_mapping/tensorflow_api_mapping.html#tf-data).
- [STABLE] Optimize Python multiprocessing so that processes exit normally.
3136- [STABLE] Support [Dataset Autotune](https://www.mindspore.cn/tutorials/experts/en/master/dataset/dataset_autotune.html) for tuning the speed of dataset pipeline automatically.
3137- [BETA]  [Dataset Offload](https://www.mindspore.cn/tutorials/experts/en/master/dataset/dataset_offload.html) support new data augmentation operations: RandomColorAdjust, RandomSharpness, TypeCast.
3138- Output a single data column when `__getitem__/__next__` methods of GeneratorDataset return a single NumPy object.
- Use `ulimit -u 10240` to increase the number of threads/processes available to the current user when specifying too many processes or threads for dataset loading causes `RuntimeError: can't start new thread`.
3140
3141### API Change
3142
3143#### Backwards Incompatible Change
3144
3145##### Python API
3146
- The gradient return value type of the hook registered by the register_backward_hook function is modified: the gradient return value is now uniformly of tuple type. ([!31876](https://gitee.com/mindspore/mindspore/pulls/31876))
- Deprecated usage: `import mindspore.dataset.engine.datasets as ds`. Use `import mindspore.dataset as ds` instead, as recommended in the [mindspore doc](https://www.mindspore.cn/docs/en/r1.7/api_python/mindspore.dataset.html).
- Add the `mindspore.ms_class` interface as a class decorator for user-defined classes. It allows MindSpore to identify user-defined classes and access their attributes and methods (see the sketch after this list). ([!30855](https://gitee.com/mindspore/mindspore/pulls/30855))
- Deprecate `mindspore.SparseTensor` and use `mindspore.COOTensor` instead (see the sketch after this list). ([!28505](https://gitee.com/mindspore/mindspore/pulls/28505))
- Add Tensor init arg `internal` for internal use.
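
A minimal sketch of the two interfaces mentioned above (the tensor values and the `Config` class are illustrative only):

```python
import mindspore as ms
from mindspore import Tensor

# mindspore.COOTensor replaces the deprecated mindspore.SparseTensor.
indices = Tensor([[0, 1], [1, 2]], ms.int32)
values = Tensor([1.0, 2.0], ms.float32)
coo = ms.COOTensor(indices, values, (3, 4))

# mindspore.ms_class lets MindSpore access a plain user-defined class.
@ms.ms_class
class Config:
    def __init__(self):
        self.hidden_size = 128
```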
3152
3153### Contributors
3154
3155Thanks goes to these wonderful people:
3156
3157AGroupofProbiotocs, anzhengqi, askmiao, baihuawei, baiyangfan, bai-yangfan, bingyaweng, BowenK, buxue, caifubi, CaoJian, caojian05, caozhou, Cathy, changzherui, chenbo116, chenfei, chengxianbin, chenhaozhe, chenjianping, chenzomi, chenzupeng, chujinjin, cj, cjh9368, Corleone, damon0626, danish, Danish, davidmc, dayschan, doitH, dong-li001, fary86, fuzhiye, Gaoxiong, GAO_HYP_XYJ, gengdongjie, Gogery, gongdaguo, gray0v0, gukecai, guoqi, gzhcv, hangq, hanhuifeng2020, Harshvardhan, He, heleiwang, hesham, hexia, Hoai, HuangBingjian, huangdongrun, huanghui, huangxinjing, huqi, huzhifeng, hwjiaorui, Jiabin Liu, jianghui58, Jiaqi, jin-xiulang, jinyaohui, jjfeing, John, jonyguo, JulyAi, jzg, kai00, kingfo, kingxian, kpy, kswang, liuyongqi, laiyongqiang, leonwanghui, liangchenghui, liangzelang, lichen_101010, lichenever, lihongkang, lilei, limingqi107, ling, linqingke, Lin Xh, liubuyu, liuwenhao4, liuxiao78, liuxiao93, liuyang_655, liuzhongkai, Lixia, lixian, liyanliu, liyong, lizhenyu, luopengting, lvchangquan, lvliang, lz, maning202007, Margaret_wangrui, mengyuanli, Ming_blue, ms_yan, ougongchang, panfengfeng, panyifeng, Payne, Peilin, peixu_ren, Pengyongrong, qianlong, qianjiahong, r1chardf1d0, riemann_penn, rmdyh, Sheng, shenwei41, simson, Simson, Su, sunsuodong, tao_yunhao, tinazhang, VectorSL, , Wan, wandongdong, wangdongxu, wangmin,  wangyue01, wangzhe, wanyiming, Wei, wenchunjiang, wilfChen, WilliamLian, wsc, wudenggang, wukesong, wuweikang, wuxuejian, Xiao Tianci, Xiaoda, xiefangqi, xinyunfan, xuanyue, xuyongfei, yanghaitao, yanghaitao1, yanghaoran, YangLuo, yangruoqi713, yankai, yanzhenxiang2020, yao_yf, yepei6, yeyunpeng, Yi, yoni, yoonlee666, yuchaojie, yujianfeng, yuximiao, zengzitao, Zhang,  zhanghuiyao, zhanghui_china, zhangxinfeng3, zhangyihui, zhangz0911gm, zhanke, zhanyuan, zhaodezan, zhaojichen, zhaoting, zhaozhenlong, zhengjun10, zhiqwang, zhoufeng, zhousiyi, zhouyaqiang, zhouyifengCode, Zichun, Ziyan, zjun, ZPaC, wangfengwfwf, zymaa, gerayking.
3158
3159Contributions of any kind are welcome!
3160
3161## MindSpore Lite 1.7.0 Release Notes
3162
3163### Major Features and Improvements
3164
3165#### Post quantization
3166
- [STABLE] Support post-training quantization running the dynamic quantization algorithm.
- [BETA] Support running post-quantized models on NVIDIA GPU.
3169
3170# MindSpore 1.6.0
3171
3172## MindSpore 1.6.0 Release Notes
3173
3174### Major Features and Improvements
3175
3176#### OS
3177
- [STABLE] Support macOS with CPU (X86)
- [BETA] Support macOS with CPU (M1)
3180
3181#### FrontEnd
3182
3183- [STABLE] Support JIT Fallback feature in Graph mode.
3184- [STABLE] Support compile cache feature in Graph mode.
3185- [STABLE] Add new optimizers, including ASGD and Rprop.
3186- [STABLE] Add new initializers, including Identity, Orthogonal, Dirac, Sparse and VarianceScaling.
3187- [STABLE] Support resuming training when an exception occurs in the process.
3188- [STABLE] Change `mindspore.nn.LSTMCell` from single-layer LSTM to single-cell LSTM.
- [BETA] Introduce `mindspore.ops.Custom` to customize your own operators for Ascend (AICore, AICPU), GPU, and CPU backends; the custom type can be TBE, AKG, a pure Python function, or a prebuilt binary (called an AOT operator).
3190
3191#### PyNative
3192
3193- [STABLE] Support heterogeneous feature in PyNative mode.
3194- [STABLE] Optimize memory allocation in PyNative mode.
3195
3196#### Auto Parallel
3197
3198- [STABLE] Support configuring the output shard strategy of the MatMul distributed operator.
- [STABLE] Support multi-instance parallelism.
3200- [STABLE] Support activation slice communication and calculation overlap in Transformer.
3201- [STABLE] Support heterogeneous parallel tensor swap.
3202- [STABLE] Add implementations of distributed operator of ResizeNearestNeighbor.
- [STABLE] Add a communication operator named NeighborExchangeV2 that supports data exchange between 8 adjacent rank ids.
- [STABLE] Pipeline parallelism supports the GPU platform.
3205- [STABLE] Add cell-level data parallel interface.
3206- [STABLE] Support gradient AllReduce fusion according to the amount of data.
3207- [STABLE] Support a sharding strategy search algorithm called sharding propagation.
3208
3209#### Executor
3210
3211- [STABLE] Support multigraph sink and subgraph sink of MindRT.
3212- [STABLE] Support memory swap to break the device memory size limit on Ascend platform.
- [STABLE] Support dynamic deployment of distributed training clusters (GPU).
3214- [BETA] Support automatic failover of parameter server.
3215
3216#### DataSet
3217
3218- [STABLE] Support overwrite feature in MindRecord.
- [STABLE] Improve logs to be more user-friendly.
3220- [BETA] Support new feature [Dataset Offload](https://www.mindspore.cn/docs/programming_guide/en/r1.6/enable_dataset_offload.html) to speed up data processing by heterogeneous computing.
3221- [BETA] Support new feature [Dataset Autotune](https://www.mindspore.cn/docs/programming_guide/en/r1.6/enable_auto_tune.html) to adjust parallelism of dataset pipeline automatically.
3222
3223#### GraphKernel Fusion
3224
3225- [STABLE] Support kernel fusion and generation for CPU backend.
3226
3227#### Federated Learning
3228
3229- [STABLE] FL-Client framework and model decoupling.
3230- [BETA] Support Cross-silo federated learning framework.
3231
3232#### Debug
3233
3234- [STABLE] Support dump in cell level(Ascend).
3235- [STABLE] Support dump Tensor statistics(Ascend/GPU).
3236- [STABLE] Support displaying corresponding code lines for fusion nodes.
3237- [STABLE] Support passing dump flag in Ascend backend in order to dump correct operators after fusion transformation.
3238
3239### API Change
3240
3241#### Backwards Incompatible Change
3242
3243##### Python API
3244
3245###### `mindspore.dataset.MindDataset` interface changes input parameter dataset_file([!27542](https://gitee.com/mindspore/mindspore/pulls/27542))
3246
`MindDataset` contains the input parameter `dataset_file`, which is in the singular form. It can receive a single file path or a list that stores multiple file paths. It is therefore preferable to change the input parameter `dataset_file` to the plural form. In addition, the input parameters of most dataset APIs, such as `TFRecordDataset`, are in the plural form (`dataset_files`). To ensure consistency, the input parameter `dataset_file` of MindDataset is changed to the plural form `dataset_files`. We can see the updated version in the API of [mindspore.dataset.MindDataset](https://www.mindspore.cn/docs/en/master/api_python/dataset/mindspore.dataset.MindDataset.html#mindspore.dataset.MindDataset).
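
A minimal sketch of the rename (the MindRecord file names are hypothetical):

```python
import mindspore.dataset as ds

# Before 1.6.0 (singular parameter name):
# dataset = ds.MindDataset(dataset_file=["part0.mindrecord", "part1.mindrecord"])

# Since 1.6.0 (plural parameter name):
dataset = ds.MindDataset(dataset_files=["part0.mindrecord", "part1.mindrecord"])
```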
3248
3249###### Delete `mindspore.Tensor`'s property `virtual_flag`([!26989](https://gitee.com/mindspore/mindspore/pulls/26989))
3250
3251###### Delete `mindspore.Parameter`'s property `is_init`([!26989](https://gitee.com/mindspore/mindspore/pulls/26989))
3252
3253###### Delete `mindspore.nn.ROC`'s interface `roc`([!25713](https://gitee.com/mindspore/mindspore/pulls/25713))
3254
3255###### The `shard()` interface of primitive is changed from `shard(strategy)` to `shard(in_strategy=None, out_strategy=None)`
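
A hedged sketch of the new keyword-argument form (the MatMul sharding strategy shown is illustrative and only takes effect under a parallel context):

```python
from mindspore import ops

matmul = ops.MatMul()
# Before: matmul.shard(((2, 1), (1, 2)))
matmul.shard(in_strategy=((2, 1), (1, 2)), out_strategy=None)
```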
3256
###### The `set_auto_parallel_context()` interface of context is changed from `set_auto_parallel_context(parallel_mode=AUTO_PARALLEL, auto_parallel_search_mode="dynamic_programming")` to `set_auto_parallel_context(parallel_mode=AUTO_PARALLEL, search_mode="dynamic_programming")`
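
A minimal sketch of the renamed keyword (assuming the usual `ParallelMode` enum from `mindspore.context`):

```python
from mindspore import context
from mindspore.context import ParallelMode

# Before: auto_parallel_search_mode="dynamic_programming"
context.set_auto_parallel_context(parallel_mode=ParallelMode.AUTO_PARALLEL,
                                  search_mode="dynamic_programming")
```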
3260
3261#### Collect Data and Create Landscape
3262
3263##### Python API
3264
3265###### `mindspore.train.callback.SummaryCollector` interface's parameter `collect_specified_data` add new operations `collect_landscape` ([!26229](https://gitee.com/mindspore/mindspore/pulls/26229))
3266
`collect_landscape` can collect the parameters needed to create the loss landscape. We can see the updated version in the API of [mindspore.train.callback.SummaryCollector](https://www.mindspore.cn/docs/en/master/api_python/mindspore/mindspore.SummaryCollector.html#mindspore.SummaryCollector).
3268
3269###### `mindspore.train.callback` add new interface `SummaryLandscape` ([!26229](https://gitee.com/mindspore/mindspore/pulls/26229))
3270
`SummaryLandscape` can help you collect loss landscape information. It can create a landscape in the PCA direction or a random direction by calculating the loss. We can see the updated version in the API of [mindspore.train.callback.SummaryLandscape](https://www.mindspore.cn/docs/en/master/api_python/mindspore/mindspore.SummaryLandscape.html#mindspore.SummaryLandscape).
3272
3273### Bug fixes
3274
3275#### Executor
3276
- Fix process hanging while calling MPI_comm_create in the asymmetric pipeline split scenario. ([!28707](https://gitee.com/mindspore/mindspore/pulls/28707))
- Fix the execution error when weights are shared between Graph mode and PyNative mode. ([!26635](https://gitee.com/mindspore/mindspore/pulls/26635))
- Fix probabilistic core dumps when freeing memory in PyNative mode. ([!25472](https://gitee.com/mindspore/mindspore/pulls/25472))
3280
3281#### Dataset
3282
- Fix abnormal memory increase when running a dataset for a long time. ([!26237](https://gitee.com/mindspore/mindspore/pulls/26237))
3284- Fix saving MindRecord files with Chinese path on Windows. ([!28378](https://gitee.com/mindspore/mindspore/pulls/28378))
3285
3286## MindSpore Lite
3287
3288### Major Features and Improvements
3289
3290#### Converter and runtime
3291
3292- [STABLE] Add more fusion patterns in the converter tool to improve runtime performance.
- [STABLE] Support taking OpenGL texture as input and output of inference.
3294- [STABLE] Refactor the JAVA API.
3295- [BETA] Support inference on Ascend310.
3296
3297#### x86 backend optimization
3298
3299- [STABLE] Optimize kernels for x86 using Advanced Vector Extensions(AVX512).
3300
3301#### ARM backend optimization
3302
3303- [STABLE] Support heterogeneous parallel inference, including splitting operators, constructing heterogeneous subgraphs, and heterogeneous parallel scheduling between CPUs and GPUs.
3304- [STABLE] Add more FP16 operators.
3305
3306#### Post quantization
3307
3308- [STABLE] Post quantization supports debugging.
3309- [STABLE] Full quantization supports choosing non-quantized nodes.
3310- [STABLE] Mixed bit quantization supports auto-tune.
3311
3312#### Training on Device
3313
3314- [STABLE] Support user-defined algorithm models to access the federated learning framework.
3315
3316### Contributors
3317
3318Thanks goes to these wonderful people:
3319
3320AGroupofProbiotocs, anzhengqi, askmiao, baihuawei, baiyangfan, bai-yangfan, bingyaweng, BowenK, buxue, caifubi, CaoJian, caojian05, caozhou, Cathy, changzherui, chenbo116, chenfei, chengxianbin, chenhaozhe, chenjianping, chenzomi, chenzupeng, chujinjin, cj, cjh9368, Corleone, damon0626, danish, Danish, davidmc, dayschan, doitH, dong-li001, fary86, fuzhiye, Gaoxiong, GAO_HYP_XYJ, gengdongjie, Gogery, gongdaguo, gray0v0, gukecai, guoqi, gzhcv, hangq, hanhuifeng2020, Harshvardhan, He, heleiwang, hesham, hexia, Hoai, HuangBingjian, huangdongrun, huanghui, huangxinjing, huqi, huzhifeng, hwjiaorui, Jiabin Liu, jianghui58, Jiaqi, jin-xiulang, jinyaohui, jjfeing, John, jonyguo, JulyAi, jzg, kai00, kingfo, kingxian, kpy, kswang, liuyongqi, laiyongqiang, leonwanghui, liangchenghui, liangzelang, lichen_101010, lichenever, lihongkang, lilei, limingqi107, ling, linqingke, Lin Xh, liubuyu, liuwenhao4, liuxiao78, liuxiao93, liuyang_655, liuzhongkai, Lixia, lixian, liyanliu, liyong, lizhenyu, luopengting, lvchangquan, lvliang, lz, maning202007, Margaret_wangrui, mengyuanli, Ming_blue, ms_yan, ougongchang, panfengfeng, panyifeng, Payne, Peilin, peixu_ren, Pengyongrong, qianlong, qianjiahong, r1chardf1d0, riemann_penn, rmdyh, Sheng, shenwei41, simson, Simson, Su, sunsuodong, tao_yunhao, tinazhang, VectorSL, , Wan, wandongdong, wangdongxu, wangmin, [wangnan39@huawei.com](mailto:wangnan39@huawei.com), wangyue01, wangzhe, wanyiming, Wei, wenchunjiang, wilfChen, WilliamLian, wsc, wudenggang, wukesong, wuweikang, wuxuejian, Xiao Tianci, Xiaoda, xiefangqi, xinyunfan, xuanyue, xuyongfei, yanghaitao, yanghaitao1, yanghaoran, YangLuo, yangruoqi713, yankai, yanzhenxiang2020, yao_yf, yepei6, yeyunpeng, Yi, yoni, yoonlee666, yuchaojie, yujianfeng, yuximiao, zengzitao, Zhang, [zhanghaibo5@huawei.com](mailto:zhanghaibo5@huawei.com), zhanghuiyao, zhanghui_china, zhangxinfeng3, zhangyihui, zhangz0911gm, zhanke, zhanyuan, zhaodezan, zhaojichen, zhaoting, zhaozhenlong, zhengjun10, zhiqwang, zhoufeng, zhousiyi, zhouyaqiang, zhouyifengCode, Zichun, Ziyan, zjun, ZPaC, wangfengwfwf, zymaa, gerayking.
3321
3322Contributions of any kind are welcome!
3323
3324# MindSpore 1.5.2
3325
3326## MindSpore 1.5.2 Release Notes
3327
3328### Bug fixes
3329
- Fix code specification, pclint, and codedex alarms.
- Fix abnormal output of the NN graphnorm operator.
- Fixed the problem of poor performance in scenarios with a dynamic rnngrad batch size of 16 times.
3333
3334### Contributors
3335
3336Thanks goes to these wonderful people:
3337
3338Adel, AGroupofProbiotocs, anthonyaje, anzhengqi, askmiao, baihuawei, baiyangfan, bai-yangfan, bingyaweng, BowenK, buxue, caifubi, CaoJian, caojian05, caozhou, Cathy, changzherui, chenbo116, chenfei, chengxianbin, chenhaozhe, chenjianping, chenzomi, chenzupeng, chujinjin, cj, cjh9368, Corleone, damon0626, danish, Danish, davidmc, dayschan, doitH, dong-li001, eric, Eric, fary86, fuzhiye, Gaoxiong, GAO_HYP_XYJ, gengdongjie, Gogery, gongdaguo, gray0v0, gukecai, guoqi, gzhcv, hangq, hanhuifeng2020, Harshvardhan, He, heleiwang, hexia, Hoai, HuangBingjian, huangdongrun, huanghui, huangxinjing, huqi, huzhifeng, hwjiaorui, Islam Amin, Jesse, , Jiabin Liu, jianghui58, jiangzhiwen, Jiaqi, jin-xiulang, jinyaohui, jjfeing, John, Jonathan, jonyguo, JulyAi, jzg, kai00, kingfo, kingxian, kpy, kswang, laiyongqiang, leonwanghui, Li, liangchenghui, liangzelang, lichen_101010, lichenever, lihongkang, lilei, limingqi107, ling, linqingke, Lin Xh, liubuyu, liuwenhao4, liuxiao78, liuxiao93, liuyang_655, liuzhongkai, Lixia, lixian, liyanliu, liyong, lizhenyu, luopengting, luoyang, lvchangquan, lvliang, lz, mahdi, Mahdi, maning202007, Margaret_wangrui, mayang, mengyuanli, Ming_blue, nhussain, ougongchang, panfengfeng, panyifeng, Payne, Peilin, peixu_ren, Pengyongrong, qianlong, qianjiahong, r1chardf1d0, riemann_penn, rmdyh, Sheng, shenwei41, simson, Simson, Su, sunsuodong, tao_yunhao, tinazhang, VectorSL, , Wan, wandongdong, wangdongxu, wangmin, wangnan39@huawei.com, wangyue01, wangzhe, wanyiming, Wei, wenchunjiang, wilfChen, WilliamLian, wsc, wudenggang, wukesong, wuweikang, wuxuejian, Xiao Tianci, Xiaoda, xiefangqi, xinyunfan, xuanyue, xulei2020, Xun, xuyongfei, yanghaitao, yanghaitao1, yanghaoran, YangLuo, yangruoqi713, yankai, yanzhenxiang2020, yao_yf, yepei6, yeyunpeng, Yi, yoni, yoonlee666, yuchaojie, yujianfeng, yuximiao, zengzitao, Zhang, zhanghaibo5@huawei.com, zhanghuiyao, zhanghui_china, zhangxinfeng3, zhangyihui, zhangz0911gm, zhanke, zhanyuan, zhaodezan, zhaojichen, zhaoting, zhaozhenlong, zhengjun10, Zhenglong Li, zhiqwang, zhoufeng, zhousiyi, zhouyaqiang, zhouyifengCode, Zichun, Zirui, Ziyan, zjun, ZPaC, wangfengwfwf, zymaa, gerayking.
3339
3340Contributions of any kind are welcome!
3341
3342# MindSpore 1.5.1
3343
3344## MindSpore 1.5.1 Release Notes
3345
3346### Bug fixes
3347
- Fix code specification, pclint, and codedex alarms.
- Fix probabilistic segmentation fault in the yolov4 network.
3350
3351### Contributors
3352
3353Thanks goes to these wonderful people:
3354
3355Adel, AGroupofProbiotocs, anthonyaje, anzhengqi, askmiao, baihuawei, baiyangfan, bai-yangfan, bingyaweng, BowenK, buxue, caifubi, CaoJian, caojian05, caozhou, Cathy, changzherui, chenbo116, chenfei, chengxianbin, chenhaozhe, chenjianping, chenzomi, chenzupeng, chujinjin, cj, cjh9368, Corleone, damon0626, danish, Danish, davidmc, dayschan, doitH, dong-li001, eric, Eric, fary86, fuzhiye, Gaoxiong, GAO_HYP_XYJ, gengdongjie, Gogery, gongdaguo, gray0v0, gukecai, guoqi, gzhcv, hangq, hanhuifeng2020, Harshvardhan, He, heleiwang, hexia, Hoai, HuangBingjian, huangdongrun, huanghui, huangxinjing, huqi, huzhifeng, hwjiaorui, Islam Amin, Jesse, , Jiabin Liu, jianghui58, jiangzhiwen, Jiaqi, jin-xiulang, jinyaohui, jjfeing, John, Jonathan, jonyguo, JulyAi, jzg, kai00, kingfo, kingxian, kpy, kswang, laiyongqiang, leonwanghui, Li, liangchenghui, liangzelang, lichen_101010, lichenever, lihongkang, lilei, limingqi107, ling, linqingke, Lin Xh, liubuyu, liuwenhao4, liuxiao78, liuxiao93, liuyang_655, liuzhongkai, Lixia, lixian, liyanliu, liyong, lizhenyu, luopengting, luoyang, lvchangquan, lvliang, lz, mahdi, Mahdi, maning202007, Margaret_wangrui, mayang, mengyuanli, Ming_blue, nhussain, ougongchang, panfengfeng, panyifeng, Payne, Peilin, peixu_ren, Pengyongrong, qianlong, qianjiahong, r1chardf1d0, riemann_penn, rmdyh, Sheng, shenwei41, simson, Simson, Su, sunsuodong, tao_yunhao, tinazhang, VectorSL, , Wan, wandongdong, wangdongxu, wangmin, wangnan39@huawei.com, wangyue01, wangzhe, wanyiming, Wei, wenchunjiang, wilfChen, WilliamLian, wsc, wudenggang, wukesong, wuweikang, wuxuejian, Xiao Tianci, Xiaoda, xiefangqi, xinyunfan, xuanyue, xulei2020, Xun, xuyongfei, yanghaitao, yanghaitao1, yanghaoran, YangLuo, yangruoqi713, yankai, yanzhenxiang2020, yao_yf, yepei6, yeyunpeng, Yi, yoni, yoonlee666, yuchaojie, yujianfeng, yuximiao, zengzitao, Zhang, zhanghaibo5@huawei.com, zhanghuiyao, zhanghui_china, zhangxinfeng3, zhangyihui, zhangz0911gm, zhanke, zhanyuan, zhaodezan, zhaojichen, zhaoting, zhaozhenlong, zhengjun10, Zhenglong Li, zhiqwang, zhoufeng, zhousiyi, zhouyaqiang, zhouyifengCode, Zichun, Zirui, Ziyan, zjun, ZPaC, wangfengwfwf, zymaa, gerayking.
3356
3357Contributions of any kind are welcome!
3358
3359# MindSpore 1.5.0
3360
3361## MindSpore 1.5.0 Release Notes
3362
3363### Major Features and Improvements
3364
3365#### NewModels
3366
3367- [STABLE] Add CV model on Ascend: Fast-SCNN
3368- [BETA] Add CV models on Ascend: midas_V2, attgan, FairMOT, CenterNet_resnet101, SEResNext, YOLOV3-tiny, RetinaFace
3369- [STABLE] Add CV models on GPU: ssd_mobilenetv1_fpn, shufflenetv1, tinyDarkNet, CNN-CTC, unet++, DeepText, SqueezeNet
3370- [STABLE] Add NLP models on GPU: GRU, GNMT2, Bert-Squad
3371- [STABLE] Add recommend models on GPU: NCF
- [BETA] Add CV models on GPU: FaceAttribute, FaceDetection, FaceRecognition, SENet
3373- [BETA] Add Audio models on GPU: DeepSpeech2
- [STABLE] `model_zoo` has been separated into an individual repository `models`
3375
3376#### FrontEnd
3377
- [STABLE] Support `while`, `break`, and `continue` statements of training networks in `GRAPH_MODE`.
- [BETA] Support exporting a MindIR file after model training on the cloud side and evaluating on the edge side by importing the MindIR file.
- [STABLE] Support the forward-mode auto-diff interface Jvp (Jacobian-Vector-Product).
- [STABLE] Support the backward-mode auto-diff interface Vjp (Vector-Jacobian-Product).
3382
3383#### Auto Parallel
3384
3385- [STABLE] Support distributed pipeline inference.
3386- [STABLE] Add implementation of the sparse attention and its distributed operator.
3387- [STABLE] Add implementations of distributed operator of Conv2d/Conv2dTranspose/Conv2dBackpropInput/Maxpool/Avgpool/Batchnorm/Gatherd.
3388- [STABLE] Support configuring the dataset strategy on distributed training and inference mode.
3389- [STABLE] Add high level API of the Transformer module.
3390
3391#### Executor
3392
3393- [STABLE] Support AlltoAll operator.
- [STABLE] Optimize CPU operator (Adam) performance, improving it by 50%.
- [BETA] Support the Adam offload feature, reducing the static memory usage of the Pangu large model by 50%.
- [STABLE] The MindSpore Ascend backend supports configuring the operator generation and loading cache path.
- [STABLE] The MindSpore Ascend backend supports lazy build in PyNative mode, improving compilation performance by 10 times.
3398- [STABLE] The function or Cell decorated by ms_function supports gradient calculation in PyNative mode.
- [STABLE] The outermost network supports parameters of non-tensor type in PyNative mode.
3400
3401#### DataSet
3402
- [BETA] Add a new method to the Model class to support automatic data preprocessing in the Ascend 310 inference scenario.
3404- [STABLE] Add a new drawing tool to visualize detection/segmentation datasets.
3405- [STABLE] Support a new tensor operation named ConvertColor to support color space transform of images.
3406- [STABLE] Enhance the following tensor operations to handle multiple columns simultaneously: RandomCrop, RandomHorizontalFlip, RandomResize, RandomResizedCrop, RandomVerticalFlip.
3407- [STABLE] Support electromagnetic simulation dataset loading and data augmentation.
3408- [STABLE] Optimize the error logs of Dataset to make them more friendly to users.
3409
3410#### Federated Learning
3411
3412- [STABLE] Change the deployment environment of FL-Client.
3413
3414#### Running Data Recorder
3415
3416- [STABLE] RDR saves collected data files within directories named by Rank ID on distributed training on Ascend, GPU and CPU.
3417
3418#### GraphKernel Fusion
3419
3420### API Change
3421
3422#### Backwards Incompatible Change
3423
3424##### Python API
3425
3426###### New Recomputation Configuration for AutoParallel and SemiAutoParallel Scenarios
3427
Configure the recomputation of the communication operations generated by model parallel and optimizer parallel to save memory on the devices. Users can pass `mp_comm_recompute` and `parallel_optimizer_comm_recompute` to enable the recomputation of the communication operations.
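
A hedged sketch of enabling these flags on a cell (the block and flag values are illustrative; `Cell.recompute` is assumed to accept the keyword arguments named above):

```python
import mindspore.nn as nn

block = nn.Dense(1024, 1024)
# Recompute the communication ops introduced by model parallel for this block,
# but keep optimizer-parallel communication out of recomputation.
block.recompute(mp_comm_recompute=True, parallel_optimizer_comm_recompute=False)
```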
3430
3431### Bug fixes
3432
3433#### FrontEnd
3434
- Fix the bug of too many subgraphs when the network includes a `for` statement. ([!23669](https://gitee.com/mindspore/mindspore/pulls/23669))
3436
3437#### Executor
3438
- Fix RunTask failure when parameter_broadcast is enabled in PyNative mode. ([!23255](https://gitee.com/mindspore/mindspore/pulls/23255))
- Fix an illegal memory access encountered in dynamic shape networks on GPU.
- Fix tuning failure for DynamicRnn. ([!21081](https://gitee.com/mindspore/mindspore/pulls/21081))
3442
3443#### Dataset
3444
- Optimize thread monitoring to solve the problem of running multiple multiprocessing instances on Windows. ([!23232](https://gitee.com/mindspore/mindspore/pulls/23232))
- Fix bugs of Dataset tensor operations in lite mode. ([!21999](https://gitee.com/mindspore/mindspore/pulls/21999))
- Fix memory increase when using create_dict_iterator in a for loop. ([!22529](https://gitee.com/mindspore/mindspore/pulls/22529))
3448
3449## MindSpore Lite
3450
3451### Major Features and Improvements
3452
3453#### Converter and runtime
3454
1. Optimize TDNN-like streaming models by reusing the result of the last inference.
2. Support dynamic filter Convolution.
3. Support serializing float32 weights into float16 weights to reduce the size of the model file.
4. Provide a unified runtime API for developers to reuse their code between the cloud side and the device side.
5. Developers can now configure built-in passes as custom passes.
6. Users can now specify the format and shape of model inputs while converting a model.
7. Support inference on multiple devices, including CPU, NPU, GPU. Users can set devices in mindspore::Context.
8. Support mixed precision inference. Users can set the inference precision by the LoadConfig API.
9. Support custom operator registration and enable inference on third-party hardware.
3464
3465#### ARM backend optimization
3466
1. Support the NCHW data format of some operators, such as Conv, InstanceNorm, etc. The performance of some models converted from ONNX and Caffe is greatly improved.
2. Fix memory leak bugs on NPU.
3469
3470#### Post quantization
3471
34721. Weight quantization supports mixed bit quantization.
34732. Full quantization supports data pre-processing.
3. Move the quantization parameters from the command line to the configuration file.
3475
3476#### Training on Device
3477
1. Unify the Lite external API with MindSpore.
2. Implement a static memory allocator and a common workspace for TOD, saving 10-20% memory.
3. Provide getgradients and setgradients interfaces, and get/set optimizer params interfaces, to support the MoE model.
4. Support user-specified output nodes when exporting a TOD model.
5. Support more text networks (tinybert, albert) and operators.
3483
3484#### Codegen
3485
1. Support kernel registration for custom ops. Third-party hardware like NNIE can be accessed through it.
3487
3488### API Change
3489
3490#### API Incompatible Change
3491
3492##### C++ API
3493
3494### Contributors
3495
3496Thanks goes to these wonderful people:
3497
3498Adel, AGroupofProbiotocs, anthonyaje, anzhengqi, askmiao, baihuawei, baiyangfan, bai-yangfan, bingyaweng, BowenK, buxue, caifubi, CaoJian, caojian05, caozhou, Cathy, changzherui, chenbo116, chenfei, chengxianbin, chenhaozhe, chenjianping, chenzomi, chenzupeng, chujinjin, cj, cjh9368, Corleone, damon0626, danish, Danish, davidmc, dayschan, doitH, dong-li001, eric, Eric, fary86, fuzhiye, Gaoxiong, GAO_HYP_XYJ, gengdongjie, Gogery, gongdaguo, gray0v0, gukecai, guoqi, gzhcv, hangq, hanhuifeng2020, Harshvardhan, He, heleiwang, hexia, Hoai, HuangBingjian, huangdongrun, huanghui, huangxinjing, huqi, huzhifeng, hwjiaorui, Islam Amin, Jesse, , Jiabin Liu, jianghui58, jiangzhiwen, Jiaqi, jin-xiulang, jinyaohui, jjfeing, John, Jonathan, jonyguo, JulyAi, jzg, kai00, kingfo, kingxian, kpy, kswang, laiyongqiang, leonwanghui, Li, liangchenghui, liangzelang, lichen_101010, lichenever, lihongkang, lilei, limingqi107, ling, linqingke, Lin Xh, liubuyu, liuwenhao4, liuxiao78, liuxiao93, liuyang_655, liuzhongkai, Lixia, lixian, liyanliu, liyong, lizhenyu, luopengting, luoyang, lvchangquan, lvliang, lz, mahdi, Mahdi, maning202007, Margaret_wangrui, mayang, mengyuanli, Ming_blue, nhussain, ougongchang, panfengfeng, panyifeng, Payne, Peilin, peixu_ren, Pengyongrong, qianlong, qianjiahong, r1chardf1d0, riemann_penn, rmdyh, Sheng, shenwei41, simson, Simson, Su, sunsuodong, tao_yunhao, tinazhang, VectorSL, , Wan, wandongdong, wangdongxu, wangmin, wangnan39@huawei.com, wangyue01, wangzhe, wanyiming, Wei, wenchunjiang, wilfChen, WilliamLian, wsc, wudenggang, wukesong, wuweikang, wuxuejian, Xiao Tianci, Xiaoda, xiefangqi, xinyunfan, xuanyue, xulei2020, Xun, xuyongfei, yanghaitao, yanghaitao1, yanghaoran, YangLuo, yangruoqi713, yankai, yanzhenxiang2020, yao_yf, yepei6, yeyunpeng, Yi, yoni, yoonlee666, yuchaojie, yujianfeng, yuximiao, zengzitao, Zhang, zhanghaibo5@huawei.com, zhanghuiyao, zhanghui_china, zhangxinfeng3, zhangyihui, zhangz0911gm, zhanke, zhanyuan, zhaodezan, zhaojichen, zhaoting, zhaozhenlong, zhengjun10, Zhenglong Li, zhiqwang, zhoufeng, zhousiyi, zhouyaqiang, zhouyifengCode, Zichun, Zirui, Ziyan, zjun, ZPaC, wangfengwfwf, zymaa, gerayking.
3499
3500Contributions of any kind are welcome!
3501
3502# MindSpore 1.4.0
3503
3504## MindSpore 1.4.0 Release Notes
3505
3506### Major Features and Improvements
3507
3508#### NewModels
3509
3510#### FrontEnd
3511
3512#### Auto Parallel
3513
3514- Add distributed operators: Conv2D/Conv2DTranspose/Conv2DBackpropInput/MaxPool/AvgPool/BatchNorm/GatherD
3515- Support to configure shard strategy for dataset
3516
3517#### Executor
3518
3519#### DataSet
3520
- Add SlicePatchesOperation for the Remote Sensing feature. ([!18179](https://e.gitee.com/mind_spore/repos/mindspore/mindspore/pulls/18179))

3523#### FederatedLearning
3524
3525#### Running Data Recorder
3526
3527#### GraphKernel Fusion
3528
3529#### Profiler
3530
- [STABLE] Support MS_DIAGNOSTIC_DATA_PATH for the profiler feature. (Ascend/GPU)
3532
3533#### Dump
3534
- [STABLE] Support MS_DIAGNOSTIC_DATA_PATH for the dump feature. (Ascend/GPU/CPU)
3536
3537### API Change
3538
3539#### Backwards Incompatible Change
3540
3541##### Python API
3542
3543##### Command Line Interface
3544
3545###### Dump Config
3546
Previously, we needed to set the dump path in the dump config file. To make the dump feature easier to use on the cloud, we support the new environment variable `MS_DIAGNOSTIC_DATA_PATH`.
3548
3549| 1.3.0                          | 1.4.0                                                                                                                                        |
3550| ------------------------------ | -------------------------------------------------------------------------------------------------------------------------------------------- |
3551| `path` is a mandatory field. | `path` field is optional.  If `path` field is not provided or is empty string, `MS_DIAGNOSTIC_DATA_PATH` should be set in environment. |
3552
3553### Bug fixes
3554
3555#### FrontEnd
3556
3557#### Executor
3558
3559#### Dataset
3560
3561- Fix module 'signal' has no attribute 'SIGCHLD' problem under windows platform. ([!21232](https://gitee.com/mindspore/mindspore/pulls/21232))
3562
3563## MindSpore Lite
3564
3565### Major Features and Improvements
3566
3567#### Converter and runtime
3568
3569#### x86 backend optimization
3570
3571#### ARM backend optimization
3572
3573#### Cuda backend optimization
3574
3575#### OpenCL backend
3576
3577#### Post quantization
3578
3579#### Training on Device
3580
3581#### Codegen
3582
3583### API Change
3584
3585#### API Incompatible Change
3586
3587##### C++ API
3588
3589#### New features
3590
3591##### Java API
3592
3593### Bug fixes
3594
3595#### Deprecations
3596
3597### Contributors
3598
3599Thanks goes to these wonderful people:
3600
3601Adel, AGroupofProbiotocs, anthonyaje, anzhengqi, askmiao, baihuawei, baiyangfan, bai-yangfan, bingyaweng, BowenK, buxue, caifubi, CaoJian, caojian05, caozhou, Cathy, changzherui, chenbo116, chenfei, chengxianbin, chenhaozhe, chenjianping, chenzomi, chenzupeng, chujinjin, cj, cjh9368, Corleone, damon0626, danish, Danish, davidmc, dayschan, doitH, dong-li001, eric, Eric, fary86, fuzhiye, Gaoxiong, GAO_HYP_XYJ, gengdongjie, Gogery, gongdaguo, gray0v0, gukecai, guoqi, gzhcv, hangq, hanhuifeng2020, Harshvardhan, He, heleiwang, hexia, Hoai, HuangBingjian, huangdongrun, huanghui, huangxinjing, huqi, huzhifeng, hwjiaorui, Islam Amin, Jesse, , Jiabin Liu, jianghui58, jiangzhiwen, Jiaqi, jin-xiulang, jinyaohui, jjfeing, John, Jonathan, jonyguo, JulyAi, jzg, kai00, kingfo, kingxian, kpy, kswang, laiyongqiang, leonwanghui, Li, liangchenghui, liangzelang, lichen_101010, lichenever, lihongkang, lilei, limingqi107, ling, linqingke, Lin Xh, liubuyu, liuwenhao4, liuxiao78, liuxiao93, liuyang_655, liuzhongkai, Lixia, lixian, liyanliu, liyong, lizhenyu, luopengting, luoyang, lvchangquan, lvliang, lz, mahdi, Mahdi, maning202007, Margaret_wangrui, mayang, mengyuanli, Ming_blue, nhussain, ougongchang, panfengfeng, panyifeng, Payne, Peilin, peixu_ren, Pengyongrong, qianlong, qianjiahong, r1chardf1d0, riemann_penn, rmdyh, Sheng, shenwei41, simson, Simson, Su, sunsuodong, tao_yunhao, tinazhang, VectorSL, , Wan, wandongdong, wangdongxu, wangmin, wangnan39@huawei.com, wangyue01, wangzhe, wanyiming, Wei, wenchunjiang, wilfChen, WilliamLian, wsc, wudenggang, wukesong, wuweikang, wuxuejian, Xiao Tianci, Xiaoda, xiefangqi, xinyunfan, xuanyue, xulei2020, Xun, xuyongfei, yanghaitao, yanghaitao1, yanghaoran, YangLuo, yangruoqi713, yankai, yanzhenxiang2020, yao_yf, yepei6, yeyunpeng, Yi, yoni, yoonlee666, yuchaojie, yujianfeng, yuximiao, zengzitao, Zhang, zhanghaibo5@huawei.com, zhanghuiyao, zhanghui_china, zhangxinfeng3, zhangyihui, zhangz0911gm, zhanke, zhanyuan, zhaodezan, zhaojichen, zhaoting, zhaozhenlong, zhengjun10, Zhenglong Li, zhiqwang, zhoufeng, zhousiyi, zhouyaqiang, zhouyifengCode, Zichun, Zirui, Ziyan, zjun, ZPaC, wangfengwfwf, zymaa, gerayking.
3602
3603Contributions of any kind are welcome!
3604
3605# MindSpore 1.3.0
3606
3607## MindSpore 1.3.0 Release Notes
3608
3609### Major Features and Improvements
3610
3611#### NewModels
3612
3613- [STABLE] Add CV models on Ascend: CPM, FCN8s, SSD-ResNet50-FPN, EAST, AdvancedEast.
3614- [STABLE] Add NLP models on Ascend: DGU, TextCNN, SentimentNet(LSTM).
3615- [STABLE] Add CV models on GPU: Faster-RCNN, FCN8s, CycleGAN, AdvancedEast.
3616- [BETA] Add CV models on Ascend: CycleGAN, PoseNet, SimCLR.
3617- [BETA] Add NLP models on Ascend: DGU, EmoTect, Senta, KT-Net.
3618- [BETA] Add NLP models on GPU: DGU, EmoTect.
3619- [BETA] Add EPP-MVSNet: a novel deep learning network for 3D reconstruction from multi-view stereo, which has won the first place in Tanks & Temples leaderboard(until April 1, 2021)(GPU).
3620
3621#### FrontEnd
3622
3623- [STABLE] The default running mode of MindSpore is changed to Graph mode.
3624- [STABLE] Support interface `run_check` to check whether MindSpore is working properly or not.
3625- [STABLE] Support saving custom information in the checkpoint file.
- [STABLE] The Normal class adds a mean parameter.
- [STABLE] Support exporting YOLOv3-DarkNet53 and YOLOv4 ONNX models.
- [STABLE] Support exporting ONNX models for 40+ operators.
3629- [STABLE] The Metric module supports `set_indexes` to select the inputs of `update` in the specified order.
3630- [STABLE] Switch `_Loss` to an external API `LossBase` as the base class of losses.
3631
3632#### Auto Parallel
3633
3634- [STABLE] Add distributed operators: Select/GatherNd/ScatterUpdate/TopK.
3635- [STABLE] Support basic pipeline parallelism.
3636- [STABLE] Optimize sharding strategy setting of `Gather`.
3637- [STABLE] Optimize mix precision and shared parameter scenarios.
3638- [STABLE] Optimize distributed prediction scenarios.
3639
3640#### Executor
3641
3642- [STABLE] Support unified runtime in GPU and CPU backend.
3643- [STABLE] MindSpore GPU support CUDA11 with cuDNN8.
3644- [STABLE] MindSpore GPU inference performance optimization by integrating TensorRT.
3645- [STABLE] MindSpore built on one Linux distribution can now be used on multiple Linux distributions with the same CPU architecture (e.g. EulerOS, Ubuntu, CentOS).
3646- [STABLE] MindSpore now supports Ascend310 and Ascend910 environments with one single wheel package and provides an alternate binary package for Ascend310 specifically.
- [STABLE] MindSpore Ascend supports group convolution.
3648
3649#### DataSet
3650
3651- [STABLE] Support caching over MindRecord dataset.
3652- [STABLE] Support new shuffle mode for MindRecord dataset.
- [STABLE] Support a cropper tool for MindSpore Lite to allow users to customize the MindData binary file according to their scripts.
- [STABLE] Support the shared memory mechanism to optimize the multi-processing efficiency of GeneratorDataset/Map/Batch.
3655- [STABLE] Add features for the GNN dataset to support molecular dynamics simulation scenarios.
3656
3657#### FederatedLearning
3658
3659- [STABLE] Support Cross-device federated learning framework.
3660- [STABLE] Support FL-Server distributed networking including TCP and HTTP communication.
- [STABLE] Support FL-Server distributed federated aggregation, supporting autoscaling and fault tolerance.
3662- [STABLE] Develop FL-Client framework.
3663- [STABLE] Supports local differential privacy algorithms.
- [STABLE] Support the MPC-based secure aggregation algorithm.
3665- [STABLE] MindSpore Lite Device-side Inference & Training Interconnection with FL-Client.
3666
3667#### Running Data Recorder
3668
3669- [STABLE] Provide records of multi-stage computational graphs, memory allocation information and graph execution order when a "Launch kernel failed" occurs. (CPU)
3670
3671#### GraphKernel Fusion
3672
3673- [STABLE] Add options to control the optimization level.
- [STABLE] Enhance the generalization ability on GPU. GraphKernel is enabled by default in 40+ networks covering the fields of NLP, CV, Recommender, NAS and Audio. The results show that their throughput is significantly improved, and you are recommended to enable GraphKernel in your network.
3675
3676#### Debug
3677
3678- [STABLE] Unified dump function.
3679
3680### API Change
3681
3682#### Backwards Incompatible Change
3683
3684##### Python API
3685
3686###### `mindspore.dataset.Dataset.device_que` interface removes unused parameter `prefetch_size`([!18973](https://gitee.com/mindspore/mindspore/pulls/18973))
3687
Previously, we had a parameter `prefetch_size` in `device_que` to define the prefetch number of records ahead of the user's request. However, this parameter is never used, which means it is ineffective. Therefore, we removed this parameter in 1.3.0, and users can set this configuration by [mindspore.dataset.config.set_prefetch_size](https://www.mindspore.cn/docs/api/en/r1.3/api_python/mindspore.dataset.config.html#mindspore.dataset.config.set_prefetch_size).
3689
3690<table>
3691<tr>
3692<td style="text-align:center"> 1.2.1 </td> <td style="text-align:center"> 1.3.0 </td>
3693</tr>
3694<tr>
3695<td>
3696
3697```python
3698device_que(prefetch_size=None, send_epoch_end=True, create_data_info_queue=False)
3699```
3700
3701</td>
3702<td>
3703
3704```python
3705device_que(send_epoch_end=True, create_data_info_queue=False)
3706```
3707
3708</td>
3709</tr>
3710</table>
3711
3712###### `mindspore.nn.optim.thor` interface changes to lowercase `thor` and adds two parameters `enable_clip_grad` and `frequency`([!17212](https://gitee.com/mindspore/mindspore/pulls/17212))
3713
The parameter `enable_clip_grad` is used for gradient clipping, and the parameter `frequency` is used to control the update interval of the second-order information matrix.
3715
3716<table>
3717<tr>
3718<td style="text-align:center"> 1.2.1 </td> <td style="text-align:center"> 1.3.0 </td>
3719</tr>
3720<tr>
3721<td>
3722
3723```python
3724THOR(net, learning_rate, damping, momentum, weight_decay=0.0, loss_scale=1.0, batch_size=32,
3725     use_nesterov=False, decay_filter=lambda x: x.name not in [], split_indices=None)
3726```
3727
3728</td>
3729<td>
3730
3731```python
3732thor(net, learning_rate, damping, momentum, weight_decay=0.0, loss_scale=1.0, batch_size=32,
3733     use_nesterov=False, decay_filter=lambda x: x.name not in [], split_indices=None, enable_clip_grad=False,
3734     frequency=100)
3735```
3736
3737</td>
3738</tr>
3739</table>
3740
3741##### Dump Config
3742
3743Previously, we could only dump tensor data for one or all steps. To make the dump feature easier to use, we changed the dump configuration format and dump structure. View the [New Dump Tutorial](https://www.mindspore.cn/tutorials/experts/en/master/debug/dump.html#dump-introduction).
3744
3745| 1.2.1                                                  | 1.3.0                                                                                       |
3746| ------------------------------------------------------ | ------------------------------------------------------------------------------------------- |
3747| `iteration` is an int.                               | `iteration` is a string.                                                                  |
3748| `op_debug_mode` is in `async_dump_settings` field. | `op_debug_mode` is in `common_dump_settings` field. `async_dump_settings` is removed. |
3749
3750### Bug fixes
3751
3752#### FrontEnd
3753
- Fix exception when using an imported module such as 'F.xxx' in a while body. ([!17635](https://e.gitee.com/mind_spore/repos/mindspore/mindspore/pulls/17635))
- Fix the 'exceeding limit call depth' exception in the graph compilation process when using a while expression with a grad operation. ([!18662](https://e.gitee.com/mind_spore/repos/mindspore/mindspore/pulls/18662))
3756
3757#### Executor
3758
- Fix memory reallocation bug for communication ops. ([!14492](https://gitee.com/mindspore/mindspore/pulls/14492))
- Replace the memcpy_async op with the tensor_move op. ([!15204](https://gitee.com/mindspore/mindspore/pulls/15204))
- Fix the build error when multiple python versions are installed in the environment. ([!19165](https://gitee.com/mindspore/mindspore/pulls/19165))
- Optimize the warning when the te/topi/hccl version does not match, and fix the repeated warning. ([!18704](https://gitee.com/mindspore/mindspore/pulls/18704))
- Fix the error in a cluster with more than 8 devices in PyNative mode. ([!16376](https://gitee.com/mindspore/mindspore/pulls/16376))
- Fix the graph ring problem in UB fusion. ([!16109](https://gitee.com/mindspore/mindspore/pulls/16109))
3765- Fix AllGather op select problem when the shape is not divisible by 16. ([!18878](https://gitee.com/mindspore/mindspore/pulls/18878))
3766
3767#### Dataset
3768
3769- Fix an out-of-memory error when ImagefolderDataset gets an illegal directory. ([!16196](https://gitee.com/mindspore/mindspore/pulls/16196))
3770- Fix bugs of vision transformations in lite mode. ([!14722](https://gitee.com/mindspore/mindspore/pulls/14722),[!14774](https://gitee.com/mindspore/mindspore/pulls/14774),[!15050](https://gitee.com/mindspore/mindspore/pulls/15050))
- Fix the default number of parallel workers of MindData for CPUs with fewer cores. ([!15921](https://gitee.com/mindspore/mindspore/pulls/15921))
3772- Fix MindRecord writing failed probabilistically in multiprocessing. ([!15242](https://gitee.com/mindspore/mindspore/pulls/15242))
3773
3774## MindSpore Lite
3775
3776### Major Features and Improvements
3777
3778#### Converter and runtime
3779
37801. Support Caffe model running on Hi3516D.
2. Support the delegate mechanism to run your models (in part or in whole) on a user-specified executor.
37823. Support control flow models.
4. Support cross-compiling for iOS, so that models can be inferred on iOS devices.
3784
3785#### x86 backend optimization
3786
37871. Optimize kernels for x86 using Advanced Vector Extensions(AVX).
3788
3789#### ARM backend optimization
3790
37911. Optimize fp16 kernels.
37922. Support arm32 fp16 instruction acceleration on ARMv8.2.
3793
3794#### Cuda backend optimization
3795
37961. Support NV GPU backend base on delegate mechanism(use TensorRT as delegate).
3797
3798#### OpenCL backend
3799
38001. Optimize the strategy of workgroup and blocksize to improve performance.
38012. Support OpenCL dynamic infershape.
38023. Support INT32 type ops.
3803
3804#### Post quantization
3805
38061. Support fp32 training model converts to quantization training model.
3807
3808#### Training on Device
3809
1. Support exporting an fp32 training model to a quantization model after the training process ends.
2. Unify APIs and the output package name of training and inference.
3. Simplify the implementation of Train Session.
4. Optimize train and infer compilation, reducing libmindspore-lite-train.so memory.
5. Training memory optimization: memory reduced by 10-50% compared with r1.2.
6. Training performance optimization: optimize the Conv2DGradInput and SparseSoftmaxCrossEntropyWithLogits operators for the special 1*1 input shape, improving performance by 10%-20%.
7. Support more networks (transformer, albert).
3817
3818#### Codegen
3819
38201. Support deployment on HarmonyOS for device.
3821
3822### API Change
3823
3824#### API Incompatible Change
3825
3826##### C++ API
3827
###### Unify LiteSession and TrainSession, merging TrainSession into LiteSession. ([!17356](https://gitee.com/mindspore/mindspore/pulls/17356))

Previously, Training on Device used TrainSession while Inference on Device used LiteSession. To simplify the implementation, we moved the TrainSession functions to LiteSession as virtual functions, and moved the APIs previously defined in train_session.h to lite_session.h.
3831
3832```cpp
3833class MS_API LiteSession {
3834...
  static LiteSession *CreateTrainSession(const std::string &filename, const lite::Context *context,
                                         bool train_mode = false, const lite::TrainCfg *cfg = nullptr);
  static LiteSession *CreateTransferSession(const std::string &filename_backbone, const std::string &filename_head,
                                            const lite::Context *context, bool train_mode = false,
                                            const lite::TrainCfg *cfg = nullptr);
  virtual int Train() { return mindspore::lite::RET_ERROR; }
  virtual int Eval() { return mindspore::lite::RET_OK; }
  virtual int SetupVirtualBatch(int virtual_batch_multiplier, float lr = -1.0f, float momentum = -1.0f) {
    return mindspore::lite::RET_ERROR;
  }
  virtual std::vector<tensor::MSTensor *> GetPredictions() const {
    std::vector<tensor::MSTensor *> outputs;
    return outputs;
  }
3849...
3850```
3851
###### Add Export API for Training on device, making the SaveToFile API obsolete. ([!17356](https://gitee.com/mindspore/mindspore/pulls/17356))

Previously, Training on Device used the SaveToFile API to save the training model to a file. The Export API was added in this release to support more formats, more model types (the train or inference part of the model), and saving the weight-quantized model of training.
3855
3856```cpp
3857virtual int Export(const std::string &file_name, lite::ModelType model_type = lite::MT_TRAIN,
3858                     lite::QuantizationType quant_type = lite::QT_DEFAULT, lite::FormatType = lite::FT_FLATBUFFERS) {
3859    return mindspore::lite::RET_ERROR;
3860 }
3861```
3862
3863###### Add GetFeatureMaps and UpdateFeatureMaps interface for Training on device.([!18344](https://gitee.com/mindspore/mindspore/pulls/18344))
3864
When training on the device, we may need to update the model feature maps and get the model feature maps, particularly in the MindSpore Federated scenario.
3866
3867```cpp
3868virtual std::vector<tensor::MSTensor *> GetFeatureMaps() const {
3869    std::vector<tensor::MSTensor *> features;
3870    return features;
3871  }
3872  virtual int UpdateFeatureMaps(const std::vector<tensor::MSTensor *> &features) { return mindspore::lite::RET_ERROR; }
3873```
3874
3875#### New features
3876
3877##### Java API
3878
###### new static method for creating LiteSession by MSConfig in LiteSession.class
3880
Previously, if we wanted to create a LiteSession object, we needed to call two APIs:
3882
```java
3884MSConfig config;
3885// config options ...
3886LiteSession liteSession = new LiteSession();
3887boolean ret = liteSession.init(config);
3888if (!ret) {
3889  // handle init LiteSession failed ...
3890}
3891```
3892
Now we can create a LiteSession object with the new API like this:
3894
```java
3896MSConfig config;
3897// config options ...
3898LiteSession liteSession = createSession(config);
3899if (liteSession == null) {
3900  // handle create LiteSession failed ...
3901}
3902```
3903
###### new static method for creating LiteSession by ModelBuffer and MSConfig in LiteSession.class
3905
Previously, if we wanted to run inference on a model, we needed to call APIs like:
3907
```java
3909MSConfig config;
3910// config options ...
3911LiteSession liteSession = new LiteSession();
3912boolean initSessionRet = liteSession.init(config);
3913if (!initSessionRet) {
3914  // handle init LiteSession failed and return ...
3915}
3916Model model = new Model();
3917boolean loadModelRet = model.loadModel(modelMappedByteBuffer);
3918if (!loadModelRet) {
3919  // handle load model failed and return ...
3920}
3921boolean compileModelRet = liteSession.compileGraph(model);
if (!compileModelRet) {
3923  // handle compile model failed and return ...
3924}
3925model.free();
3926// liteSession is ready to inference model, call runGraph in LiteSession.class ...
3927```
3928
Now we can use the new API like this:
3930
```java
3932MSConfig config;
3933// config options ...
3934LiteSession liteSession = createSession(modelMappedByteBuffer, config);
3935if (liteSession == null) {
3936  // handle init LiteSession failed and return ...
3937}
3938// liteSession is ready to inference model, call runGraph in LiteSession.class ...
3939```
3940
The new createSession method is an API that integrates four old APIs: LiteSession.init, Model.loadModel, LiteSession.compileGraph and model.free. It is simple and efficient as it removes one modelBuffer copy operation.
3942
###### new methods getFeaturesMap and updateFeatures in LiteSession.class
3944
We added a new C++ API in the LiteSession class; correspondingly, we added a new Java API in LiteSession.java.
3946
3947```java
public List<MSTensor> getFeaturesMap() {
    List<Long> ret = this.getFeaturesMap(this.sessionPtr);
    ArrayList<MSTensor> tensors = new ArrayList<MSTensor>();
    for (Long msTensorAddr : ret) {
        MSTensor msTensor = new MSTensor(msTensorAddr);
        tensors.add(msTensor);
    }
    return tensors;
}

public boolean updateFeatures(List<MSTensor> features) {
    long[] inputsArray = new long[features.size()];
    for (int i = 0; i < features.size(); i++) {
        inputsArray[i] = features.get(i).getMSTensorPtr();
    }
    return this.updateFeatures(this.sessionPtr, inputsArray);
}
3964```
3965
###### new method export to replace the saveToFile API in LiteSession.class
3967
We added a new C++ API in the LiteSession class; correspondingly, we added a new Java API in LiteSession.java.
3969
3970```java
3971public boolean export(String modelFileName, int modelType, int quantizationType) {
3972        return this.export(this.sessionPtr, modelFileName, modelType, quantizationType);
3973    }
3974```
3975
###### new train-related API moved to LiteSession.class from TrainSession.class
3977
In line with the update of the C++ API in the LiteSession class, new Java APIs were added to LiteSession.java correspondingly.
3979
3980```java
3981public class LiteSession {
3982...
3983public static LiteSession createTrainSession(String modelName, final MSConfig config, boolean trainMode){...}
3984public boolean train() {...}
3985public boolean eval() {...}
3986...
3987```
3988
3989### Bug fixes
3990
1. Fix the bug that the train session does not release memory because of a refcount bug.
3992
3993#### Deprecations
3994
3995### Contributors
3996
3997Thanks goes to these wonderful people:
3998
3999Adel, AGroupofProbiotocs, anthonyaje, anzhengqi, askmiao, baihuawei, baiyangfan, bai-yangfan, bingyaweng, BowenK, buxue, caifubi, CaoJian, caojian05, caozhou, Cathy, changzherui, chenbo116, chenfei, chengxianbin, chenhaozhe, chenjianping, chenzomi, chenzupeng, chujinjin, cj, cjh9368, Corleone, damon0626, danish, Danish, davidmc, dayschan, doitH, dong-li001, eric, Eric, fary86, fuzhiye, Gaoxiong, GAO_HYP_XYJ, gengdongjie, Gogery, gongdaguo, gray0v0, gukecai, guoqi, gzhcv, hangq, hanhuifeng2020, Harshvardhan, He, heleiwang, hexia, Hoai, HuangBingjian, huangdongrun, huanghui, huangxinjing, huqi, huzhifeng, hwjiaorui, Islam Amin, Jesse, , Jiabin Liu, jianghui58, jiangzhiwen, Jiaqi, jin-xiulang, jinyaohui, jjfeing, John, Jonathan, jonyguo, JulyAi, jzg, kai00, kingfo, kingxian, kpy, kswang, laiyongqiang, leonwanghui, Li, liangchenghui, liangzelang, lichen_101010, lichenever, lihongkang, lilei, limingqi107, ling, linqingke, Lin Xh, liubuyu, liuwenhao4, liuxiao78, liuxiao93, liuyang_655, liuzhongkai, Lixia, lixian, liyanliu, liyong, lizhenyu, luopengting, luoyang, lvchangquan, lvliang, lz, mahdi, Mahdi, maning202007, Margaret_wangrui, mayang, mengyuanli, Ming_blue, nhussain, ougongchang, panfengfeng, panyifeng, Payne, Peilin, peixu_ren, Pengyongrong, qianlong, qianjiahong, r1chardf1d0, riemann_penn, rmdyh, Sheng, shenwei41, simson, Simson, Su, sunsuodong, tao_yunhao, tinazhang, VectorSL, , Wan, wandongdong, wangdongxu, wangmin, wangnan39@huawei.com, wangyue01, wangzhe, wanyiming, Wei, wenchunjiang, wilfChen, WilliamLian, wsc, wudenggang, wukesong, wuweikang, wuxuejian, Xiao Tianci, Xiaoda, xiefangqi, xinyunfan, xuanyue, xulei2020, Xun, xuyongfei, yanghaitao, yanghaitao1, yanghaoran, YangLuo, yangruoqi713, yankai, yanzhenxiang2020, yao_yf, yepei6, yeyunpeng, Yi, yoni, yoonlee666, yuchaojie, yujianfeng, yuximiao, zengzitao, Zhang, zhanghaibo5@huawei.com, zhanghuiyao, zhanghui_china, zhangxinfeng3, zhangyihui, zhangz0911gm, zhanke, zhanyuan, zhaodezan, zhaojichen, zhaoting, zhaozhenlong, zhengjun10, Zhenglong Li, zhiqwang, zhoufeng, zhousiyi, zhouyaqiang, zhouyifengCode, Zichun, Zirui, Ziyan, zjun, ZPaC, wangfengwfwf, zymaa, gerayking.
4000
4001Contributions of any kind are welcome!
4002
4003# MindSpore 1.2.1
4004
4005## MindSpore 1.2.1 Release Notes
4006
4007### Major Features and Improvements
4008
4009#### FrontEnd
4010
4011- [STABLE] Add MaskedSelect aicpu operation.(Ascend)
4012
4013#### Auto Parallel
4014
4015- [STABLE] Support distributed checkpoint loading.(Ascend/GPU)
4016
4017# MindSpore 1.2.0
4018
4019## MindSpore 1.2.0 Release Notes
4020
4021### Major Features and Improvements
4022
4023#### NewModels
4024
4025- [STABLE] Add CV models on Ascend: 3D Unet, Unet++, SSD-Resnet50-fpn, SSD-VGG16, crnn_seq2seq_ocr for BSI, CTPN, resnet18, DPN
4026- [STABLE] Add CV models on GPU: Faster-RCNN
4027- [STABLE] Add NLP models on Ascend: NAML, Fasttext, GRU, LSTM
- [BETA] Add TPRR: Thinking Path Re-Ranker, an original rank-based framework for Multi-Hop Question Answering which won first place on the HotpotQA leaderboard.(Ascend)
4029
4030#### FrontEnd
4031
- [STABLE] Support side-effect expressions to ensure that the execution order of the user's semantics is correct.(Ascend/GPU/CPU)
- [STABLE] Support calculating the gradient for networks that contain non-Tensor input parameters (int, float, bool, mstype.int, mstype.float, mstype.uint, mstype.bool_, tuple, list, dict).(Ascend/GPU/CPU)
4034- [STABLE] Support the inverse of a bool Tensor.(Ascend/GPU/CPU)
- [STABLE] Unify the interface `isinstance`.(Ascend/GPU/CPU)
4036- [STABLE] Support negative indexes.(Ascend/GPU/CPU)
- [STABLE] Support 110+ Numpy-like interfaces in mindspore.numpy, as sketched after this list.(Ascend/GPU/CPU)
4038- [STABLE] Support export/load mindir model with a size greater than 2 GB.
4039- [STABLE] The optimizer supports gradient centralization.(Ascend)
- [STABLE] Support auc metric, rou metric, bleu score metric, confusion matrix metric, cosine similarity metric, dice metric, hausdorff distance metric, occlusion sensitivity metric, perplexity metric, mean surface distance metric, root mean surface distance metric.
4041- [STABLE] Support use EmbeddingLookup with cache.(Ascend)
4042- [STABLE] Add MaskedSelect aicpu operation.(Ascend)
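
A minimal sketch of the mindspore.numpy interfaces mentioned above (the specific functions shown are illustrative examples of the numpy-like API):

```python
>>> import mindspore.numpy as mnp
>>>
>>> a = mnp.ones((2, 3))
>>> b = mnp.zeros((2, 3))
>>> c = mnp.add(a, b)   # numpy-style element-wise add
>>> m = mnp.mean(c)     # numpy-style reduction
```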
4043
4044#### Auto Parallel
4045
4046- [STABLE] Support AllGather and ReduceScatter fusion.(Ascend)
4047- [STABLE] Support gradient accumulation feature in auto parallel mode.(Ascend/GPU)
4048- [STABLE] Support running parallel optimizer with gradient accumulation.(Ascend)
4049- [STABLE] Add the configuration of communication operators' fusion.(Ascend)
4050- [STABLE] Support distributed checkpoint loading.(Ascend/GPU)
4051
4052#### Executor
4053
4054- [STABLE] Support inference with Nvidia GPU.
4055- [STABLE] Support data parallelism in PyNative mode.(Ascend/GPU)
4056- [STABLE] Optimize LSTM inference memory consumption in Graph mode with CPU.
4057
4058#### Sponge
4059
4060- [STABLE] Add SPONGE modules for molecular dynamics simulation, including Bond, Angle, Dihedral, Non Bond 14, NeighborList, Particle Mesh Ewald, Langevin MD and LIUJIAN MD.(GPU)
4061
4062#### DataSet
4063
- [STABLE] If the libnuma library is installed in the environment, you can run `export DATASET_ENABLE_NUMA=True` or `export MS_ENABLE_NUMA=True` to configure NUMA binding. In multi-card training scenarios, the training data processing speed can be improved, thereby improving the network training efficiency (see the sketch after this list).
4065- [STABLE] Unify API Tensor structure of Training/Inference interfaces in C++ SDK.
4066- [STABLE] Optimize duplicated Decode in data preprocess using cache, improve preprocess efficiency.
4067- [STABLE] Support eager mode to run data augmentation in Python & C++.
4068- [STABLE] Support more data augmentation operators(e.g. Affine, Perspective) in MindSpore-Lite.
4069- [STABLE] Support light pipeline to process MindData in MindSpore-Lite training.
- [STABLE] Support more data preprocessing operators based on the DVPP hardware module, which can be used on the Ascend 310 platform.
4071- [STABLE] Support copy-free property for data in Ascend310 inference process scenarios.
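
NUMA binding is controlled purely through the environment variables listed above; they are normally exported in the launch shell script before training starts. A minimal sketch of setting them from Python before the dataset pipeline is built (the dataset path below is a placeholder):

```python
import os

# NUMA binding is read from environment variables; they must be set before the
# dataset pipeline is created (normally they are exported in the launch script).
os.environ["DATASET_ENABLE_NUMA"] = "True"   # or: os.environ["MS_ENABLE_NUMA"] = "True"

import mindspore.dataset as ds

# Hypothetical multi-card training data pipeline.
dataset = ds.Cifar10Dataset("/path/to/cifar-10-batches-bin", num_parallel_workers=8)
```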
4072
4073#### Running Data Recorder
4074
- [STABLE] Support running data recorder (RDR) for exception demarcation.
4076- [STABLE] Provide records of multi-stage computational graphs, memory allocation information, graph execution order, stream execution order and task debug information when a "run task error" or "distribute task failed" occurs. (Ascend)
4077- [STABLE] Provide records of multi-stage computational graphs, memory allocation information and graph execution order when a "SyncStream error" occurs. (GPU)
4078
4079#### 3D Feature
4080
- [STABLE] Support 3D ops: Conv3D, Conv3DBackpropInput, Conv3DBackpropFilter, Conv3DTranspose, BiasAdd, BiasAddGrad, PReLU, Transpose, Reshape, transdata, StrideSlice, MaxPool3D, MaxPool3DGrad, BinaryCrossEntropy, SigmoidCrossEntropyWithLogits, SigmoidCrossEntropyWithLogitsGrad, SoftmaxCrossEntropyWithLogits, BatchNorm3d, BatchNorm3dGrad, Dropout3d.
4082- [STABLE] Support RMSELoss loss function, MAELoss loss function, FocalLoss loss function, DiceLoss binary loss function, and MultiClassDiceLoss multi-type loss function for 2D/3D network.
4083- [STABLE] Add optimizer: AdamApplyOne(3D), ApplyMomentum(3D), SGD(3D).
4084
4085### API Change
4086
4087#### Backwards Incompatible Change
4088
4089##### Python API
4090
4091###### `mindspore.numpy.array()`, `mindspore.numpy.asarray()`, `mindspore.numpy.asfarray()`, `mindspore.numpy.copy()` now support GRAPH mode, but cannot accept `numpy.ndarray` as input arguments anymore([!12726](https://gitee.com/mindspore/mindspore/pulls/12726))
4092
Previously, these interfaces could accept numpy.ndarray as arguments and convert it to Tensor, but could not be used in GRAPH mode.
However, the MindSpore parser currently cannot parse numpy.ndarray in a JIT graph. To support these interfaces in graph mode, we had to remove `numpy.ndarray` support. That said, users can still use `Tensor` to convert a `numpy.ndarray` to a tensor (see the sketch after the table below).
4095
4096<table>
4097<tr>
4098<td style="text-align:center"> 1.1.1 </td> <td style="text-align:center"> 1.2.0 </td>
4099</tr>
4100<tr>
4101<td>
4102
4103```python
4104>>> import mindspore.numpy as mnp
4105>>> import numpy
4106>>>
4107>>> nd_array = numpy.array([1,2,3])
4108>>> tensor = mnp.asarray(nd_array) # this line cannot be parsed in GRAPH mode
4109```
4110
4111</td>
4112<td>
4113
4114```python
4115>>> import mindspore.numpy as mnp
4116>>> import numpy
4117>>>
4118>>> tensor = mnp.asarray([1,2,3]) # this line can be parsed in GRAPH mode
4119```
4120
4121</td>
4122</tr>
4123</table>
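
As shown above, these interfaces no longer accept `numpy.ndarray` inputs; an existing array can still be converted explicitly through `Tensor`. A minimal sketch (array values are illustrative):

```python
>>> import numpy
>>> from mindspore import Tensor
>>>
>>> nd_array = numpy.array([1, 2, 3])
>>> tensor = Tensor(nd_array)  # explicit conversion from numpy.ndarray to Tensor
```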
4124
4125###### mindspore.numpy interfaces remove support for keyword arguments `out` and `where`([!12726](https://gitee.com/mindspore/mindspore/pulls/12726))
4126
Previously, we had incomplete support for the keyword arguments `out` and `where` in mindspore.numpy interfaces: the `out` argument was only functional when the `where` argument was also provided, and `out` could not be used to pass a reference to numpy functions. Therefore, we have removed these two arguments to avoid confusion. Their original functionality can be found in [np.where](https://www.mindspore.cn/docs/en/master/api_python/numpy/mindspore.numpy.where.html#mindspore.numpy.where).
4128
4129<table>
4130<tr>
4131<td style="text-align:center"> 1.1.1 </td> <td style="text-align:center"> 1.2.0 </td>
4132</tr>
4133<tr>
4134<td>
4135
4136```python
4137>>> import mindspore.numpy as np
4138>>>
4139>>> a = np.ones((3,3))
4140>>> b = np.ones((3,3))
4141>>> out = np.zeros((3,3))
4142>>> where = np.asarray([[True, False, True],[False, False, True],[True, True, True]])
4143>>> res = np.add(a, b, out=out, where=where) # `out` cannot be used as a reference, therefore it is misleading
4144```
4145
4146</td>
4147<td>
4148
4149```python
4150>>> import mindspore.numpy as np
4151>>>
4152>>> a = np.ones((3,3))
4153>>> b = np.ones((3,3))
4154>>> out = np.zeros((3,3))
4155>>> where = np.asarray([[True, False, True],[False, False, True],[True, True, True]])
4156>>> res = np.add(a, b)
4157>>> out = np.where(where, x=res, y=out) # instead of np.add(a, b, out=out, where=where)
4158```
4159
4160</td>
4161</tr>
4162</table>
4163
4164###### Turn `ops.MakeRefKey` into an internal interface ([!12010](https://gitee.com/mindspore/mindspore/pulls/12010))
4165
Previously, MakeRefKey was an external interface that was not used; it is now an internal interface with the same usage. We do not recommend using this interface, and we will remove its documentation from the official website.
4167
4168###### `ops.ApplyFtrl`, `ops.ApplyMomentum`, `ops.ApplyRMSProp`, `ops.ApplyCenteredRMSProp` change the output on Ascend backend from multiple to a single. ([!11895](https://gitee.com/mindspore/mindspore/pulls/11895))
4169
Previously, the number of outputs of these operators differed across backends. To unify their definition, we change their output on the Ascend backend from multiple outputs to a single one.
4171
###### `P.FusedBatchNorm`, `P.FusedBatchNormEx` deleted ([!12115](https://gitee.com/mindspore/mindspore/pulls/12115))
4173
The FusedBatchNorm and FusedBatchNormEx interfaces have been deleted. Please use the batch normalization operator instead.
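
A minimal sketch of one possible replacement, using the standard batch-normalization layer (the channel count below is illustrative; `nn.BatchNorm2d` is one option, not necessarily the only replacement):

```python
>>> import mindspore.nn as nn
>>>
>>> # Use the regular BatchNorm layer instead of the deleted fused primitives.
>>> bn = nn.BatchNorm2d(num_features=64)
```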
4175
###### `MetaTensor` deleted ([!10325](https://gitee.com/mindspore/mindspore/pulls/10325))
4177
The MetaTensor interface has been deleted. The function of MetaTensor has been integrated into Tensor.
4179
4180###### `ControlDepend` is deleted, use `Depend` instead. The decorator `@C.add_flags(has_effect=True)` does not work. ([!13793](https://gitee.com/mindspore/mindspore/pulls/13793))
4181
Previously, we used ControlDepend to control the execution order of multiple operators. In version 1.2.0, MindSpore introduces the auto-monad side-effect expression to ensure that the execution order of the user's semantics is correct. Therefore, ControlDepend is deleted and Depend is recommended.
4183
4184In most scenarios, if operators have IO side effects (such as print) or memory side effects (such as assign), they will be executed according to the user's semantics. In some scenarios, if the two operators A and B have no order dependency, and A must be executed before B, we recommend using Depend to specify their execution order. See the API documentation of the Depend operator for specific usage.
4185
4186<table>
4187<tr>
4188<td style="text-align:center"> 1.1.1 </td> <td style="text-align:center"> 1.2.0 </td>
4189</tr>
4190<tr>
4191<td>
4192
4193```python
4194    In some side-effect scenarios, we need to ensure the execution order of operators.
4195    In order to ensure that operator A is executed before operator B, it is recommended
4196    to insert the Depend operator between operators A and B.
4197
4198    Previously, the ControlDepend operator was used to control the execution order.
4199    Since the ControlDepend operator is deprecated from version 1.1, it is recommended
4200    to use the Depend operator instead. The replacement method is as follows::
4201
4202        a = A(x)                --->        a = A(x)
4203        b = B(y)                --->        y = Depend(y, a)
4204        ControlDepend(a, b)     --->        b = B(y)
4205```
4206
4207</td>
4208<td>
4209
4210```python
4211    In most scenarios, if operators have IO side effects or memory side effects,
4212    they will be executed according to the user's semantics. In some scenarios,
4213    if the two operators A and B have no order dependency, and A must be executed
4214    before B, we recommend using Depend to specify their execution order. The
4215    usage method is as follows::
4216
4217        a = A(x)                --->        a = A(x)
4218        b = B(y)                --->        y = Depend(y, a)
4219                                --->        b = B(y)
4220```
4221
4222</td>
4223</tr>
4224</table>
4225
After the introduction of the auto-monad side-effect expression feature, the decorator `@C.add_flags(has_effect=True)` does not work. If the decorator is used in your script, please modify it. Taking the overflow identification operator (which has no side effects) as an example, the modification method is as follows:
4227
4228<table>
4229<tr>
4230<td style="text-align:center"> 1.1.1 </td> <td style="text-align:center"> 1.2.0 </td>
4231</tr>
4232<tr>
4233<td>
4234
4235```python
4236@C.add_flags(has_effect=True)
4237def construct(self, *inputs):
4238    ...
4239    loss = self.network(*inputs)
4240    init = self.allo_status()
4241    self.clear_status(init)
4242    ...
4243```
4244
4245</td>
4246<td>
4247
4248```python
4249def construct(self, *inputs):
4250    ...
4251    loss = self.network(*inputs)
4252    init = self.allo_status()
4253    init = F.depend(init, loss)
4254    clear_status = self.clear_status(init)
4255    ...
4256```
4257
4258</td>
4259</tr>
4260</table>
4261
4262##### C++ API
4263
4264###### C++ API support dual ABI now.([!12432](https://gitee.com/mindspore/mindspore/pulls/12432))
4265
42661.1.1 supports only the old ABI. Currently, both the new and the old are supported.
4267
4268<table>
4269<tr>
4270<td style="text-align:center"> 1.1.1 </td> <td style="text-align:center"> 1.2.0 </td>
4271</tr>
4272<tr>
4273<td>
4274
4275```cmake
4276add_compile_definitions(_GLIBCXX_USE_CXX11_ABI=0)
4277```
4278
4279</td>
4280<td>
4281
4282```cmake
add_compile_definitions(_GLIBCXX_USE_CXX11_ABI=0)  # the old ABI is supported
add_compile_definitions(_GLIBCXX_USE_CXX11_ABI=1)  # the new ABI is supported, too
                                                   # if nothing is set, the new ABI is used by default
4286```
4287
4288</td>
4289</tr>
4290</table>
4291
4292###### Context refactor.([!13515](https://gitee.com/mindspore/mindspore/pulls/13515))
4293
4294The `Context` class is refactored. For details, see the API docs.
4295
4296<table>
4297<tr>
4298<td style="text-align:center"> 1.1.1 </td> <td style="text-align:center"> 1.2.0 </td>
4299</tr>
4300<tr>
4301<td>
4302
4303```cpp
4304GlobalContext::SetGlobalDeviceTarget(kDeviceTypeAscend310);       // set device target is ascend310
4305GlobalContext::SetGlobalDeviceID(0);                              // set device id is 0
4306auto model_context = std::make_shared<ModelContext>();            // create a model context
4307ModelContext::SetInsertOpConfigPath(model_context, "./aipp.cfg")  // set aipp config file is ./aipp.cfg
4308```
4309
4310</td>
4311<td>
4312
4313```cpp
4314auto model_context = std::make_shared<Context>();                 // create a model context
4315auto ascend310_info = std::make_shared<Ascend310DeviceInfo>();
model_context->MutableDeviceInfo().push_back(ascend310_info);      // set device target is ascend310
4317ascend310_info->SetDeviceID(0);                                   // set device id is 0
4318ascend310_info->SetInsertOpConfigPath("./aipp.cfg");              // set aipp config file is ./aipp.cfg
4319```
4320
4321</td>
4322</tr>
4323</table>
4324
4325###### LoadModel interface changes.([!13515](https://gitee.com/mindspore/mindspore/pulls/13515))
4326
`LoadModel` is renamed `Load`. No exception is thrown now, but the return status should be checked.
4328
4329<table>
4330<tr>
4331<td style="text-align:center"> 1.1.1 </td> <td style="text-align:center"> 1.2.0 </td>
4332</tr>
4333<tr>
4334<td>
4335
4336```cpp
4337try {
4338  auto graph = Serialization::LoadModel(model_file_path, kMindIR);
4339} catch (...) { ... }
4340```
4341
4342</td>
4343<td>
4344
4345```cpp
4346Graph graph;
4347auto ret = Serialization::Load(model_file_path, kMindIR, &graph);
4348if (ret != kSuccess) { ... }
4349```
4350
4351</td>
4352</tr>
4353</table>
4354
4355###### Model ctor changes.([!13515](https://gitee.com/mindspore/mindspore/pulls/13515))
4356
`Model` now uses a parameterless constructor, and arguments are passed in through `Build`.
4358
4359<table>
4360<tr>
4361<td style="text-align:center"> 1.1.1 </td> <td style="text-align:center"> 1.2.0 </td>
4362</tr>
4363<tr>
4364<td>
4365
4366```cpp
4367Model net(net_cell, model_context);
4368auto ret = net.Build();
4369if (ret != kSuccess) { ... }
4370```
4371
4372</td>
4373<td>
4374
4375```cpp
4376Model net;
4377auto ret = net.Build(net_cell, model_context);
4378if (ret != kSuccess) { ... }
4379```
4380
4381</td>
4382</tr>
4383</table>
4384
4385###### MSTensor::CreateTensor returns a native pointer now.([!13515](https://gitee.com/mindspore/mindspore/pulls/13515))
4386
`MSTensor::CreateTensor` and `MSTensor::CreateRefTensor` now return a native pointer, which needs to be destroyed by `DestroyTensorPtr`.
4388
4389<table>
4390<tr>
4391<td style="text-align:center"> 1.1.1 </td> <td style="text-align:center"> 1.2.0 </td>
4392</tr>
4393<tr>
4394<td>
4395
4396```cpp
4397auto tensor = MSTensor::CreateTensor(xxx, xxx, ...);
4398auto name = tensor.Name();
4399```
4400
4401</td>
4402<td>
4403
4404```cpp
4405auto tensor = MSTensor::CreateTensor(xxx, xxx, ...);
4406auto name = tensor->Name();
4407MSTensor::DestroyTensorPtr(tensor);
4408```
4409
4410</td>
4411</tr>
4412</table>
4413
4414#### New features
4415
4416##### Python API
4417
- Add SPONGE functions: `mindspore.ops.operations.BondForceWithAtomEnergy`, `mindspore.ops.operations.AngleForceWithAtomEnergy`, `mindspore.ops.operations.DihedralForceWithAtomEnergy`, `mindspore.ops.operations.Dihedral14LJCFForceWithAtomEnergy`, `mindspore.ops.operations.LJForceWithPMEDirectForce`, `mindspore.ops.operations.PMEExcludedForce`, `mindspore.ops.operations.PMEReciprocalForce`, `mindspore.ops.operations.BondEnergy`, `mindspore.ops.operations.AngleEnergy`, `mindspore.ops.operations.DihedralEnergy`, `mindspore.ops.operations.Dihedral14LJEnergy`, `mindspore.ops.operations.Dihedral14CFEnergy`, `mindspore.ops.operations.LJEnergy`, `mindspore.ops.operations.PMEEnergy`. All operators are supported in `GPU`.
4419
4420#### Deprecations
4421
4422##### Python API
4423
4424###### `nn.MatMul` is now deprecated in favor of `ops.matmul` ([!12817](https://gitee.com/mindspore/mindspore/pulls/12817))
4425
4426[ops.matmul](https://www.mindspore.cn/docs/en/master/api_python/ops/mindspore.ops.matmul.html#mindspore.ops.matmul) follows the API of [numpy.matmul](https://numpy.org/doc/stable/reference/generated/numpy.matmul.html) as closely as possible. As a function interface, [ops.matmul](https://www.mindspore.cn/docs/en/master/api_python/ops/mindspore.ops.matmul.html#mindspore.ops.matmul) is applied without instantiation, as opposed to `nn.MatMul`, which should only be used as a class instance.
4427
4428<table>
4429<tr>
4430<td style="text-align:center"> 1.1.1 </td> <td style="text-align:center"> 1.2.0 </td>
4431</tr>
4432<tr>
4433<td>
4434
4435```python
4436>>> import numpy as np
4437>>> from mindspore import Tensor, nn
4438>>>
>>> x = Tensor(np.ones((2, 3)).astype(np.float32))
>>> y = Tensor(np.ones((3, 4)).astype(np.float32))
4441>>> nn.MatMul()(x, y)
4442```
4443
4444</td>
4445<td>
4446
4447```python
4448>>> import numpy as np
4449>>> from mindspore import Tensor, ops
4450>>>
>>> x = Tensor(np.ones((2, 3)).astype(np.float32))
>>> y = Tensor(np.ones((3, 4)).astype(np.float32))
4453>>> ops.matmul(x, y)
4454```
4455
4456</td>
4457</tr>
4458</table>
4459
4460### Bug fixes
4461
4462#### FrontEnd
4463
4464- fix the null pointer problem of evaluator in control flow.([!13312](https://gitee.com/mindspore/mindspore/pulls/13312))
4465- fix parameter naming conflict bug for CellList and SequentialCell. ([!13260](https://gitee.com/mindspore/mindspore/pulls/13260))
4466
4467#### Executor
4468
- fix executor pending tasks not executing in some heterogeneous cases.([!13465](https://gitee.com/mindspore/mindspore/pulls/13465))
4470- add passes to support frontend IR unification, including following operations: SliceGrad([!11783](https://gitee.com/mindspore/mindspore/pulls/11783)), ApplyFtrl, ApplyMomentum, ApplyRMSProp, CenteredRMSProp([!11895](https://gitee.com/mindspore/mindspore/pulls/11895)), AvgPoolGrad([!12813](https://gitee.com/mindspore/mindspore/pulls/12813)), BatchNorm([!12115](https://gitee.com/mindspore/mindspore/pulls/12115))
4471
4472#### Dataset
4473
- Fix getter functions (e.g. GetDatasetSize) terminating abnormally when using Python multiprocessing. ([!13571](https://gitee.com/mindspore/mindspore/pulls/13571), [!13823](https://gitee.com/mindspore/mindspore/pulls/13823))
4475- Fix unclear error log of data augmentation operators. ([!12398](https://gitee.com/mindspore/mindspore/pulls/12398), [!12883](https://gitee.com/mindspore/mindspore/pulls/12883), [!13176](https://gitee.com/mindspore/mindspore/pulls/13176))
- Fix profiling performing abnormally when sink_size = False, as data saving finishes later than profiling analysis. ([!13944](https://gitee.com/mindspore/mindspore/pulls/13944))
4477
4478## MindSpore Lite
4479
4480### Major Features and Improvements
4481
4482#### Converter and runtime
4483
1. Support TensorFlow models in Converter, except aware-training models.
2. Add a fusion pattern for identical horizontal operators in Converter.
3. Support JAR packages on x86_64 systems for convenient integration into servers with a Java backend.
4. Provide a unified runtime API so developers can reuse their code between cloud side and device side. [BETA]
5. Continually improve control-flow capabilities: support GRU fusion in Converter; support weight quantization for control-flow models; support control-flow model inference with half precision; support nested control-flow models. [BETA]
4489
4490#### ARM backend optimization
4491
1. Add NLP-dependent float16 operators (such as LSTM) to enhance inference performance.
2. Optimize operators: lstm, gru, depthwise.
3. Add 6 NPU operators (such as FullConnection), and fix some bugs about buildIR failures.
4495
4496#### OpenCL backend
4497
1. Add more than 10 new ops, bringing the total to 72 ops.
2. Performance optimization: through memory layout optimization and block tiling, performance is improved by 30% compared to version 1.1 on Adreno GPUs.
3. Initialization time optimization: initialization time is improved by 100% compared with MindSpore Lite 1.1 by storing the kernel cache as binary.
4. Support Java calls on Mali or Adreno GPUs.
4502
4503#### Post quantization
4504
45051. Support quantization of gather and lstm ops.
2. Support quantizing TF Lite models with sub-graph nodes.
3. Add a quantization strategy to decide whether to quantize ops, achieving less accuracy loss and a higher compression rate.
4508
4509#### Training on Device
4510
1. Virtual batching: use mini-batches to mimic a large batch with low RAM consumption.
2. Converter unification: the tod and iod converters are no longer compiled separately.
3. Performance optimization for BWD (backward) ops.
4. TrainLoop with off-the-shelf functionality blocks, such as LR scheduler, Loss Monitor, Ckpt Saver, and Accuracy Monitor.
5. Integration of code with MindData Lite.
45166. Support more networks (googlenet, densenet, shufflenetv2, nin, vgg) and operators.
4517
4518#### Codegen
4519
45201. Support 79 ops for the ARM platform and all CMSIS ops for Arm Cortex-M Series.
2. Multi-platform support, including Android and IoT devices.
45223. Support offline model weight preprocessing while compiling.
45234. Support offline memory reuse computing for minimum runtime buffer size.
5. Support kernel registration for custom ops. Third-party hardware such as NNIE can be accessed through it.
4525
4526### API Change
4527
4528#### API Incompatible Change
4529
4530##### C++ API
4531
4532###### Add header file named lite_types.h for some common data structs. ([!12262](https://gitee.com/mindspore/mindspore/pulls/12262))
4533
Previously, some common data structs such as `CpuBindMode` and `DeviceType` were in context.h, which may cause cross-dependency between headers. So we created a new header named lite_types.h for these common data structs and moved `CpuBindMode` and `DeviceType` from context.h into lite_types.h.
4535
4536<table>
4537<tr>
4538<td style="text-align:center"> lite_types.h </td>
4539</tr>
4540<tr>
4541<td>
4542
4543```cpp
4544namespace mindspore::lite {
4545/// \brief CpuBindMode defined for holding bind cpu strategy argument.
4546typedef enum {
4547  NO_BIND,    /**< no bind */
4548  HIGHER_CPU, /**< bind higher cpu first */
4549  MID_CPU     /**< bind middle cpu first */
4550} CpuBindMode;
4551
4552/// \brief DeviceType defined for holding user's preferred backend.
4553typedef enum {
4554  DT_CPU, /**< CPU device type */
4555  DT_GPU, /**< GPU device type */
4556  DT_NPU  /**< NPU device type */
4557} DeviceType;
4558}  // namespace mindspore::lite
4559```
4560
4561</td>
4562</tr>
4563</table>
4564
4565###### Add some new interfaces in ms_tensor.h for unified runtime API.([!13515](https://gitee.com/mindspore/mindspore/pulls/13515))
4566
Previously, users could not create or modify an `MSTensor`; all `MSTensor` objects were created and managed by the framework. However, users sometimes need to create or modify an `MSTensor`, for example when pre-processing input data. So we provide two new interfaces in ms_tensor.h: a `CreateTensor` interface for creating an `MSTensor` by the user and a `set_shape` interface for modifying the shape of an `MSTensor`.
4568
4569<table>
4570<tr>
4571<td style="text-align:center"> CreateTensor </td>
4572</tr>
4573<tr>
4574<td>
4575
4576```cpp
4577/// \brief Create a MSTensor.
4578///
4579/// \return Pointer to an instance of MindSpore Lite MSTensor.
4580static MSTensor *CreateTensor(const std::string &name, TypeId type, const std::vector<int> &shape, const void *data,
4581                                size_t data_len);
4582```
4583
4584</td>
4585</tr>
4586</table>
4587
4588<table>
4589<tr>
4590<td style="text-align:center"> set_shape </td>
4591</tr>
4592<tr>
4593<td>
4594
4595```cpp
4596/// \brief Set the shape of MSTensor.
4597virtual void set_shape(const std::vector<int> &shape) = 0;
4598```
4599
4600</td>
4601</tr>
4602</table>
4603
Previously, users could access the data of an `MSTensor` through the interface named `MutableData`. However, `MutableData` not only returns the data of the tensor but also allocates data for the tensor if its data is nullptr. So we provide a new interface in ms_tensor.h named `data` that returns the data of the tensor without allocating it automatically.
4605
4606<table>
4607<tr>
4608<td style="text-align:center"> data </td>
4609</tr>
4610<tr>
4611<td>
4612
4613```cpp
4614/// \brief Get the pointer of data in MSTensor.
4615///
4616/// \note The data pointer can be used to both write and read data in MSTensor. No memory buffer will be
4617/// allocated.
4618///
4619/// \return the pointer points to data in MSTensor.
4620virtual void *data() = 0;
4621```
4622
4623</td>
4624</tr>
4625</table>
4626
4627###### Delete `DimensionSize()` in ms_tensor.h.([!13515](https://gitee.com/mindspore/mindspore/pulls/13515))
4628
The interface named `DimensionSize` is functionally overlapped with the interface named `shape`. For the simplicity of the interface, we delete `DimensionSize` and recommend users to use the interface named `shape` instead.
4630
4631<table>
4632<tr>
4633<td style="text-align:center"> DimensionSize() </td>
4634</tr>
4635<tr>
4636<td>
4637
4638```cpp
4639/// \brief Get size of the dimension of the MindSpore Lite MSTensor index by the parameter index.
4640///
4641/// \param[in] index Define index of dimension returned.
4642///
4643/// \return Size of dimension of the MindSpore Lite MSTensor.
4644virtual int DimensionSize(size_t index) const = 0;
4645```
4646
4647</td>
4648</tr>
4649</table>
4650
4651###### Move allocator from namespace mindspore::lite to namespace lite for unified runtime API.([!13515](https://gitee.com/mindspore/mindspore/pulls/13515))
4652
Previously, the class `Allocator` was in the namespace mindspore::lite. To provide a unified allocator interface for the unified runtime API, we moved `Allocator` to the namespace mindspore.
4654
4655<table>
4656<tr>
4657<td style="text-align:center"> 1.1.0 </td> <td style="text-align:center"> 1.2.0 </td>
4658</tr>
4659<tr>
4660<td>
4661
4662```cpp
4663namespace mindspore::lite {
4664/// \brief Allocator defined a memory pool for malloc memory and free memory dynamically.
4665///
4666/// \note List public class and interface for reference.
4667class Allocator;
4668}
4669```
4670
4671</td>
4672<td>
4673
4674```cpp
4675namespace mindspore {
4676/// \brief Allocator defined a memory pool for malloc memory and free memory dynamically.
4677///
4678/// \note List public class and interface for reference.
4679class Allocator;
4680}
4681```
4682
4683</td>
4684</tr>
4685</table>
4686
4687### Bug fixes
4688
46891. Fix the bug that the array in kernel registrar is not initialized.
2. Fix a segmentation fault caused by mistakenly releasing OpParameter in the Crop kernel.
3. Fix the bug that the MINDIR aware-training model is eventually interpreted as a weight-quant model.
4692
4693## Contributors
4694
4695Thanks goes to these wonderful people:
4696
4697Adel, AGroupofProbiotocs, anthonyaje, anzhengqi, askmiao, baihuawei, baiyangfan, bai-yangfan, bingyaweng, BowenK, buxue, caifubi, CaoJian, caojian05, caozhou, Cathy, changzherui, chenbo116, chenfei, chengxianbin, chenhaozhe, chenjianping, chenzomi, chenzupeng, chujinjin, cj, cjh9368, Corleone, damon0626, danish, Danish, davidmc, dayschan, doitH, dong-li001, eric, Eric, fary86, fuzhiye, Gaoxiong, GAO_HYP_XYJ, gengdongjie, Gogery, gongdaguo, gray0v0, gukecai, guoqi, gzhcv, hangq, hanhuifeng2020, Harshvardhan, He, heleiwang, hexia, Hoai, HuangBingjian, huangdongrun, huanghui, huangxinjing, huqi, huzhifeng, hwjiaorui, Islam Amin, Jesse, , Jiabin Liu, jianghui58, jiangzhiwen, Jiaqi, jin-xiulang, jinyaohui, jjfeing, John, Jonathan, jonyguo, JulyAi, jzg, kai00, kingfo, kingxian, kpy, kswang, laiyongqiang, leonwanghui, Li, liangchenghui, liangzelang, lichen_101010, lichenever, lihongkang, lilei, limingqi107, ling, linqingke, Lin Xh, liubuyu, liuwenhao4, liuxiao78, liuxiao93, liuyang_655, liuzhongkai, Lixia, lixian, liyanliu, liyong, lizhenyu, luopengting, luoyang, lvchangquan, lvliang, lz, mahdi, Mahdi, maning202007, Margaret_wangrui, mayang, mengyuanli, Ming_blue, nhussain, ougongchang, panfengfeng, panyifeng, Payne, Peilin, peixu_ren, Pengyongrong, qianlong, qianjiahong, r1chardf1d0, riemann_penn, rmdyh, Sheng, shenwei41, simson, Simson, Su, sunsuodong, tao_yunhao, tinazhang, VectorSL, , Wan, wandongdong, wangdongxu, wangmin, wangnan39@huawei.com, wangyue01, wangzhe, wanyiming, Wei, wenchunjiang, wilfChen, WilliamLian, wsc, wudenggang, wukesong, wuweikang, wuxuejian, Xiaoda, xiefangqi, xinyunfan, xuanyue, xulei2020, Xun, xuyongfei, yanghaitao, yanghaitao1, yanghaoran, YangLuo, yangruoqi713, yankai, yanzhenxiang2020, yao_yf, yepei6, yeyunpeng, Yi, yoni, yoonlee666, yuchaojie, yujianfeng, yuximiao, zengzitao, Zhang, zhanghaibo5@huawei.com, zhanghuiyao, zhanghui_china, zhangxinfeng3, zhangyihui, zhangz0911gm, zhanke, zhanyuan, zhaodezan, zhaojichen, zhaoting, zhaozhenlong, zhengjun10, zhiqwang, zhoufeng, zhousiyi, zhouyaqiang, zhouyifengCode, Zichun, Zirui, Ziyan, zjun, ZPaC, zymaa.
4698
4699Contributions of any kind are welcome!
4700
4701# MindSpore 1.1.1 Release Notes
4702
4703## MindSpore
4704
4705### Major Features and Improvements
4706
4707#### NewModels
4708
4709- [STABLE] BGCF: a Bayesian Graph Collaborative Filtering(BGCF) framework used to model the uncertainty in the user-item interaction graph and thus recommend accurate and diverse items on Amazon recommendation dataset.(Ascend)
4710- [STABLE] GRU: a recurrent neural network architecture like the LSTM(Long-Short Term Memory) on Multi30K dataset.(Ascend)
4711- [STABLE] FastText: a simple and efficient text classification algorithm on AG's news topic classification dataset, DBPedia Ontology classification dataset and Yelp Review Polarity dataset.(Ascend)
4712- [STABLE] LSTM: a recurrent neural network architecture used to learn word vectors for sentiment analysis on aclImdb_v1 dataset.(Ascend)
4713- [STABLE] SimplePoseNet: a convolution-based neural network for the task of human pose estimation and tracking on COCO2017 dataset.(Ascend)
4714
4715#### FrontEnd
4716
4717- [BETA] Support Tensor Fancy Index Getitem with tuple and list. (Ascend/GPU/CPU)
4718
4719### Backwards Incompatible Change
4720
4721#### Python API
4722
4723##### `ops.AvgPool`, `ops.MaxPool`, `ops.MaxPoolWithArgmax` change attr name from 'ksize', 'padding' to 'kernel_size', 'pad_mode' ([!11350](https://gitee.com/mindspore/mindspore/pulls/11350))
4724
Previously, the kernel size and pad mode attributes of pooling ops were named "ksize" and "padding", which is a little puzzling and inconsistent with convolution ops. So they are renamed to "kernel_size" and "pad_mode".
4726
4727<table>
4728<tr>
4729<td style="text-align:center"> 1.1.0 </td> <td style="text-align:center"> 1.1.1 </td>
4730</tr>
4731<tr>
4732<td>
4733
4734```python
4735>>> import mindspore.ops as ops
4736>>>
4737>>> avg_pool = ops.AvgPool(ksize=2, padding='same')
4738>>> max_pool = ops.MaxPool(ksize=2, padding='same')
4739>>> max_pool_with_argmax = ops.MaxPoolWithArgmax(ksize=2, padding='same')
4740```
4741
4742</td>
4743<td>
4744
4745```python
4746>>> import mindspore.ops as ops
4747>>>
4748>>> avg_pool = ops.AvgPool(kernel_size=2, pad_mode='same')
4749>>> max_pool = ops.MaxPool(kernel_size=2, pad_mode='same')
4750>>> max_pool_with_argmax = ops.MaxPoolWithArgmax(kernel_size=2, pad_mode='same')
4751```
4752
4753</td>
4754</tr>
4755</table>
4756
4757##### `ops.TensorAdd`, change API name to `ops.Add` ([!11568](https://gitee.com/mindspore/mindspore/pulls/11568))
4758
The operator name TensorAdd is not standardized, so it is changed to Add. The old interface can still be used, but will be deleted in subsequent versions; it is recommended to switch to the new interface.
4760
4761<table>
4762<tr>
4763<td style="text-align:center"> 1.1.0 </td> <td style="text-align:center"> 1.1.1 </td>
4764</tr>
4765<tr>
4766<td>
4767
4768```python
4769>>> import mindspore.ops as ops
4770>>>
4771>>> add = ops.TensorAdd()
4772```
4773
4774</td>
4775<td>
4776
4777```python
4778>>> import mindspore.ops as ops
4779>>>
4780>>> add = ops.Add()
4781```
4782
4783</td>
4784</tr>
4785</table>
4786
4787##### `ops.Gelu`, `ops.GeluGrad`, `ops.FastGelu`, `ops.FastGeluGrad`, change API name to `ops.GeLU`, `ops.GeLUGrad`, `ops.FastGeLU`, `ops.FastGeLUGrad` ([!11603](https://gitee.com/mindspore/mindspore/pulls/11603))
4788
The Gelu, GeluGrad, FastGelu, and FastGeluGrad names are unified to follow the ReLU naming rule: "lu" is changed to the uppercase "LU". The old interfaces can still be used, but will be deleted in subsequent versions; it is recommended to switch to the new interfaces.
4790
4791<table>
4792<tr>
4793<td style="text-align:center"> 1.1.0 </td> <td style="text-align:center"> 1.1.1 </td>
4794</tr>
4795<tr>
4796<td>
4797
4798```python
4799>>> import mindspore.ops as ops
4800>>>
4801>>> gelu = ops.Gelu()
4802>>> gelu_grad = ops.GeluGrad()
4803>>> fast_gelu = ops.FastGelu()
4804>>> fast_gelu_grad = ops.FastGeluGrad()
4805```
4806
4807</td>
4808<td>
4809
4810```python
4811>>> import mindspore.ops as ops
4812>>>
4813>>> gelu = ops.GeLU()
4814>>> gelu_grad = ops.GeLUGrad()
4815>>> fast_gelu = ops.FastGeLU()
4816>>> fast_gelu_grad = ops.FastGeLUGrad()
4817```
4818
4819</td>
4820</tr>
4821</table>
4822
4823##### `ops.GatherV2`, change API name to `ops.Gather` ([!11713](https://gitee.com/mindspore/mindspore/pulls/11713))
4824
GatherV2 is changed to Gather. The old interface can still be used, but will be deleted in subsequent versions; it is recommended to switch to the new interface.
4826
4827<table>
4828<tr>
4829<td style="text-align:center"> 1.1.0 </td> <td style="text-align:center"> 1.1.1 </td>
4830</tr>
4831<tr>
4832<td>
4833
4834```python
4835>>> import mindspore.ops as ops
4836>>>
4837>>> gather = ops.GatherV2()
4838```
4839
4840</td>
4841<td>
4842
4843```python
4844>>> import mindspore.ops as ops
4845>>>
4846>>> gather = ops.Gather()
4847```
4848
4849</td>
4850</tr>
4851</table>
4852
##### `ops.Pack`, `ops.Unpack`, change API name to `ops.Stack`, `ops.Unstack` ([!11828](https://gitee.com/mindspore/mindspore/pulls/11828))
4854
Pack is changed to Stack, and Unpack is changed to Unstack. The old interfaces can still be used, but will be deleted in subsequent versions; it is recommended to switch to the new interfaces.
4856
4857<table>
4858<tr>
4859<td style="text-align:center"> 1.1.0 </td> <td style="text-align:center"> 1.1.1 </td>
4860</tr>
4861<tr>
4862<td>
4863
4864```python
4865>>> import mindspore.ops as ops
4866>>>
4867>>> pack= ops.Pack()
4868>>> unpack= ops.Unpack()
4869```
4870
4871</td>
4872<td>
4873
4874```python
4875>>> import mindspore.ops as ops
4876>>>
4877>>> stack= ops.Stack()
4878>>> unstack= ops.Unstack()
4879```
4880
4881</td>
4882</tr>
4883</table>
4884
4885##### `ops.ControlDepend`, add deprecated to ControlDepend ([!11844](https://gitee.com/mindspore/mindspore/pulls/11844))
4886
4887ControlDepend is deprecated and will be removed in a future version, use Depend instead.
4888
4889<table>
4890<tr>
4891<td style="text-align:center"> 1.1.0 </td> <td style="text-align:center"> 1.1.1 </td>
4892</tr>
4893<tr>
4894<td>
4895
```python
Note:
4898    This operation does not work in `PYNATIVE_MODE`.
4899```
4900
4901</td>
4902<td>
4903
4904```python
4905Note:
4906        This operation does not work in `PYNATIVE_MODE`.
4907        `ControlDepend` is deprecated from version 1.1 and will be removed in a future version, use `Depend` instead.
4908```
4909
4910</td>
4911</tr>
4912</table>
4913
4914##### `ops.Depend`, add operator description and use case ([!11815](https://gitee.com/mindspore/mindspore/pulls/11815)), ([!11879](https://gitee.com/mindspore/mindspore/pulls/11879))
4915
4916Since the ControlDepend operator will be deprecated from version 1.2, it is recommended to use the Depend operator instead.
4917
4918<table>
4919<tr>
4920<td style="text-align:center"> 1.1.0 </td> <td style="text-align:center"> 1.1.1 </td>
4921</tr>
4922<tr>
4923<td>
4924
4925```python
4926Depend is used for processing side-effect operations.
4927
4928Inputs:
4929    - **value** (Tensor) - the real value to return for depend operator.
4930    - **expr** (Expression) - the expression to execute with no outputs.
4931
4932Outputs:
4933    Tensor, the value passed by last operator.
4934
4935Supported Platforms:
4936    ``Ascend`` ``GPU`` ``CPU``
4937```
4938
4939</td>
4940<td>
4941
4942```python
4943Depend is used for processing dependency operations.
4944
4945In some side-effect scenarios, we need to ensure the execution order of operators.
4946In order to ensure that operator A is executed before operator B, it is recommended
4947to insert the Depend operator between operators A and B.
4948
4949Previously, the ControlDepend operator was used to control the execution order.
4950Since the ControlDepend operator will be deprecated from version 1.2, it is
4951recommended to use the Depend operator instead. The replacement method is as follows::
4952
4953    a = A(x)                --->        a = A(x)
4954    b = B(y)                --->        y = Depend(y, a)
4955    ControlDepend(a, b)     --->        b = B(y)
4956
4957Inputs:
4958    - **value** (Tensor) - the real value to return for depend operator.
4959    - **expr** (Expression) - the expression to execute with no outputs.
4960
4961Outputs:
4962    Tensor, the value passed by last operator.
4963
4964Supported Platforms:
4965    ``Ascend`` ``GPU`` ``CPU``
4966
4967Examples:
4968    >>> import numpy as np
4969    >>> import mindspore
4970    >>> import mindspore.nn as nn
4971    >>> import mindspore.ops.operations as P
4972    >>> from mindspore import Tensor
4973    >>> class Net(nn.Cell):
4974    ...     def __init__(self):
4975    ...         super(Net, self).__init__()
4976    ...         self.softmax = P.Softmax()
4977    ...         self.depend = P.Depend()
4978    ...
4979    ...     def construct(self, x, y):
4980    ...         mul = x - y
4981    ...         y = self.depend(y, mul)
4982    ...         ret = self.softmax(y)
4983    ...         return ret
4984    ...
4985    >>> x = Tensor(np.ones([4, 5]), dtype=mindspore.float32)
4986    >>> y = Tensor(np.ones([4, 5]), dtype=mindspore.float32)
4987    >>> net = Net()
4988    >>> output = net(x, y)
4989    >>> print(output)
4990    [[0.2 0.2 0.2 0.2 0.2]
4991     [0.2 0.2 0.2 0.2 0.2]
4992     [0.2 0.2 0.2 0.2 0.2]
4993     [0.2 0.2 0.2 0.2 0.2]]
4994```
4995
4996</td>
4997</tr>
4998</table>
4999
5000#### C++ API
5001
5002##### change namespace from `mindspore::api` to `mindspore` ([!11574](https://gitee.com/mindspore/mindspore/pulls/11574))
5003
5004<table>
5005<tr>
5006<td style="text-align:center"> 1.1.0 </td> <td style="text-align:center"> 1.1.1 </td>
5007</tr>
5008<tr>
5009<td>
5010
5011```c++
5012namespace ms = mindspore::api;
5013```
5014
5015</td>
5016<td>
5017
5018```c++
5019namespace ms = mindspore;
5020```
5021
5022</td>
5023</tr>
5024</table>
5025
5026##### `Context` ([!11574](https://gitee.com/mindspore/mindspore/pulls/11574))
5027
5028<table>
5029<tr>
5030<td style="text-align:center"> 1.1.0 </td> <td style="text-align:center"> 1.1.1 </td>
5031</tr>
5032<tr>
5033<td>
5034
5035```c++
5036ms::Context::Instance().SetDeviceTarget(ms::kDeviceTypeAscend310).SetDeviceID(0);
5037```
5038
5039</td>
5040<td>
5041
5042```c++
5043ms::GlobalContext::SetGlobalDeviceTarget(ms::kDeviceTypeAscend310);
5044ms::GlobalContext::SetGlobalDeviceID(0);
5045```
5046
5047</td>
5048</tr>
5049</table>
5050
5051##### rename `Tensor` to `MSTensor` ([!11574](https://gitee.com/mindspore/mindspore/pulls/11574))
5052
5053<table>
5054<tr>
5055<td style="text-align:center"> 1.1.0 </td> <td style="text-align:center"> 1.1.1 </td>
5056</tr>
5057<tr>
5058<td>
5059
5060```c++
5061ms::Tensor a;
5062```
5063
5064</td>
5065<td>
5066
5067```c++
5068ms::MSTensor a;
5069```
5070
5071</td>
5072</tr>
5073</table>
5074
5075##### `Model` move setting of model options from `Build` to ctor `Model` ([!11574](https://gitee.com/mindspore/mindspore/pulls/11574))
5076
5077<table>
5078<tr>
5079<td style="text-align:center"> 1.1.0 </td> <td style="text-align:center"> 1.1.1 </td>
5080</tr>
5081<tr>
5082<td>
5083
5084```c++
5085ms::Model model(graph_cell);
5086model.Build(model_options);
5087```
5088
5089</td>
5090<td>
5091
5092```c++
5093ms::Model model(graph_cell, model_context);
5094model.Build();
5095```
5096
5097</td>
5098</tr>
5099</table>
5100
5101##### `Model` modify `GetInputsInfo`, `GetOutputsInfo` to `GetInputs`, `GetOutputs` ([!11574](https://gitee.com/mindspore/mindspore/pulls/11574))
5102
5103<table>
5104<tr>
5105<td style="text-align:center"> 1.1.0 </td> <td style="text-align:center"> 1.1.1 </td>
5106</tr>
5107<tr>
5108<td>
5109
5110```c++
5111std::vector<std::string> names;
5112std::vector<ms::DataType> types;
5113std::vector<std::vector<int64_t>> shapes;
5114std::vector<size_t> mem_sizes;
5115model.GetInputsInfo(&names, &types, &shapes, &mem_sizes);
5116std::cout << "Input 0 name: " << names[0] << std::endl;
5117```
5118
5119</td>
5120<td>
5121
5122```c++
5123auto inputs = model.GetInputs();
5124std::cout << "Input 0 name: " << inputs[0].Name() << std::endl;
5125```
5126
5127</td>
5128</tr>
5129</table>
5130
5131##### `Model` modify `Predict` parameters type from `Buffer` to `MSTensor` ([!11574](https://gitee.com/mindspore/mindspore/pulls/11574))
5132
5133<table>
5134<tr>
5135<td style="text-align:center"> 1.1.0 </td> <td style="text-align:center"> 1.1.1 </td>
5136</tr>
5137<tr>
5138<td>
5139
5140```c++
5141std::vector<ms::Buffer> inputs;
5142std::vector<ms::Buffer> outputs;
5143model.Predict(inputs, &outputs);
5144```
5145
5146</td>
5147<td>
5148
5149```c++
5150std::vector<ms::MSTensor> inputs;
5151std::vector<ms::MSTensor> outputs;
5152model.Predict(inputs, &outputs);
5153```
5154
5155</td>
5156</tr>
5157</table>
5158
5159### Deprecations
5160
5161#### Python API
5162
5163##### `ops.SpaceToBatch`, `ops.BatchToSpace` are deprecated in favor of `ops.SpaceToBatchND`, `ops.BatchToSpaceND`([!11527](https://gitee.com/mindspore/mindspore/pulls/11527))
5164
`ops.SpaceToBatchND` and `ops.BatchToSpaceND` are more general and have the same behavior as `ops.SpaceToBatch` and `ops.BatchToSpace` when `block_shape` is an int.
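
A minimal sketch of the replacement (paddings and crops are illustrative, and the keyword names are assumed from the operator definitions):

```python
>>> import mindspore.ops as ops
>>>
>>> # With an int block_shape, these behave like ops.SpaceToBatch / ops.BatchToSpace.
>>> space_to_batch_nd = ops.SpaceToBatchND(block_shape=2, paddings=[[0, 0], [0, 0]])
>>> batch_to_space_nd = ops.BatchToSpaceND(block_shape=2, crops=[[0, 0], [0, 0]])
```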
5166
##### `ops.DepthwiseConv2dNative` is deprecated in favor of `nn.Conv2d` ([!11702](https://gitee.com/mindspore/mindspore/pulls/11702))
5168
`ops.DepthwiseConv2dNative` is only supported on Ascend; it is recommended to use `nn.Conv2d` directly instead. If `group` is equal to both `in_channels` and `out_channels`, the 2D convolution layer is also a 2D depthwise convolution layer.
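
A minimal sketch of a depthwise convolution expressed through the regular convolution layer (channel counts and kernel size are illustrative):

```python
>>> import mindspore.nn as nn
>>>
>>> # group == in_channels == out_channels makes this a 2D depthwise convolution
>>> depthwise_conv = nn.Conv2d(in_channels=16, out_channels=16, kernel_size=3, group=16)
```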
5170
5171## Contributors
5172
5173Thanks goes to these wonderful people:
5174
5175Adel, AGroupofProbiotocs, anthonyaje, anzhengqi, askmiao, baihuawei, baiyangfan, bai-yangfan, bingyaweng, BowenK, buxue, caifubi, CaoJian, caojian05, caozhou, Cathy, changzherui, chenbo116, chenfei, chengxianbin, chenhaozhe, chenjianping, chenzomi, chenzupeng, chujinjin, cj, cjh9368, Corleone, damon0626, danish, Danish, davidmc, dayschan, doitH, eric, Eric, fary86, fuzhiye, Gaoxiong, gengdongjie, Gogery, gongdaguo, gray0v0, gukecai, guoqi, gzhcv, hangq, hanhuifeng2020, Harshvardhan, He, heleiwang, hexia, Hoai, HuangBingjian, huangdongrun, huanghui, huangxinjing, huqi, huzhifeng, hwjiaorui, Jesse, jianghui58, jiangzhiwen, Jiaqi, jin-xiulang, jinyaohui, jjfeing, John, Jonathan, jonyguo, JulyAi, jzg, kai00, kingfo, kingxian, kpy, kswang, laiyongqiang, leonwanghui, Li, liangchenghui, liangzelang, lichen_101010, lichenever, lihongkang, lilei, limingqi107, ling, linqingke, liubuyu, liuwenhao4, liuxiao78, liuxiao93, liuyang_655, liuzhongkai, Lixia, lixian, liyanliu, liyong, lizhenyu, luoyang, lvchangquan, lvliang, lz, mahdi, Mahdi, maning202007, Margaret_wangrui, mayang, mengyuanli, nhussain, ougongchang, panfengfeng, panyifeng, Payne, Peilin, peixu_ren, Pengyongrong, qianlong, r1chardf1d0, riemann_penn, rmdyh, Sheng, shenwei41, simson, Simson, Su, sunsuodong, tao_yunhao, tinazhang, VectorSL, , Wan, wandongdong, wangdongxu, wangmin, wangnan39@huawei.com, wangyue01, wangzhe, wanyiming, Wei, wenchunjiang, wilfChen, WilliamLian, wsc, wukesong, wuweikang, wuxuejian, Xiaoda, xiefangqi, xinyunfan, xuanyue, xulei2020, Xun, xuyongfei, yanghaitao, yanghaitao1, yanghaoran, YangLuo, yangruoqi713, yankai, yanzhenxiang2020, yao_yf, yepei6, yeyunpeng, Yi, yoni, yoonlee666, yuchaojie, yujianfeng, yuximiao, zengzitao, Zhang, zhanghaibo5@huawei.com, zhanghuiyao, zhangyihui, zhangz0911gm, zhanke, zhanyuan, zhaodezan, zhaojichen, zhaoting, zhaozhenlong, zhengjun10, zhoufeng, zhousiyi, zhouyaqiang, zhouyifengCode, Zichun, Zirui, Ziyan, zjun, ZPaC, zymaa
5176
5177Contributions of any kind are welcome!
5178
5179# MindSpore 1.1.0 Release Notes
5180
5181## MindSpore
5182
5183### Major Features and Improvements
5184
5185#### NewModels
5186
- [STABLE] GNMT v2: similar to the model described in Google's Neural Machine Translation System: Bridging the Gap between Human and Machine Translation, which is mainly used for corpus translation, on WMT English-German dataset.(Ascend)
5188- [STABLE] MaskRCNN: a conceptually simple, flexible, and general framework for object instance segmentation on COCO2017 dataset.(Ascend)
5189- [STABLE] YOLOv4: a state-of-the-art detector which is faster and more accurate than all available alternative detectors on MS COCO dataset.(Ascend)
- [STABLE] Openpose: proposes a bottom-up human pose estimation algorithm using Part Affinity Fields on COCO2017 dataset.(Ascend)
- [STABLE] CNN-CTC: proposes three major contributions to address scene text recognition (STR) on MJSynth and SynthText dataset.(Ascend)
5192- [STABLE] CenterFace: a practical anchor-free face detection and alignment method for edge devices on WiderFace dataset.(Ascend)
5193- [STABLE] ShuffleNetV2:  a much faster and more accurate network than the previous networks on ImageNet 2012 dataset.(GPU)
5194- [STABLE] EfficientNet-B0: a new scaling method that uniformly scales all dimensions of depth/width/resolution using a simple yet highly effective compound coefficient on ImageNet 2012 dataset.(GPU)
- [BETA] SSD-GhostNet: based on a Ghost module structure which generates more features from cheap operations on Oxford-IIIT Pet dataset.(Ascend)
5196- [BETA] DS-CNN:  Depthwise separable convolutional neural network on Speech commands dataset.(Ascend)
5197- [BETA] DeepPotentialH2O: A neural network model for molecular dynamics simulations. (Ascend)
5198- [BETA] GOMO: A classical numerical method called GOMO for ocean simulation. (GPU)
5199
5200#### FrontEnd
5201
5202- [STABLE] Refactor the MINDIR to support 310 inference(Ascend).
5203- [STABLE] The execution backend of sparse operations in optimizer can be set through 'target'. (Ascend/GPU/CPU)
5204- [STABLE] Support saving specified network to checkpoint and filtering parameters according to prefix when load checkpoint. (Ascend/GPU/CPU)
5205- [STABLE] Allow users choose whether to load parameter into network strictly.(Ascend/GPU/CPU)
- [STABLE] Before training in graph mode, the parameters on device 0 are broadcast to the other devices so that all devices have the same network initialization parameter values. (Ascend/GPU)
5207- [STABLE] Support if by if of control flow subgraph. (Ascend/GPU)
- [STABLE] Support judging whether a tensor is in a list. (Ascend/GPU/CPU)
- [STABLE] Support getting a value by its key from a dictionary in the network; support getting the keys and values of a dictionary in the network. (Ascend/GPU/CPU)
5210- [STABLE] Support Tensor in enumerate. (Ascend/GPU/CPU)
5211- [STABLE] Support multilevel index assignment. (Ascend/GPU/CPU)
- [STABLE] Support the 'expand_as', 'view', 'abs', and 'mean' methods of Tensor. (Ascend/GPU/CPU)
5213- [STABLE] Support ResizeBilinear operation transfer ratio. (Ascend)
- [STABLE] nn.Matmul supports matrix-vector product and batched matrix multiply. (Ascend/GPU)
5215- [STABLE] nn.Dense supports input tensor whose dimension can be greater than 2. (Ascend/GPU)
5216- [BETA] Support higher order differentiation for partial operators.(CPU/GPU/Ascend)
5217- [STABLE] Support Tensor Augassign.(Ascend/GPU)
5218- [BETA] Support 22 numpy native interfaces.
5219
5220#### Auto Parallel
5221
5222- [STABLE] Support parallel optimizer with weight shard. (Ascend/GPU)
5223- [STABLE] Support distributed operators: element-wise series, UnsortedSegmentSum, UnsortedSegmentMin, Split, BroadcastTo and Unique etc. (Ascend/GPU)
5224- [STABLE] Support distributed model prediction. (Ascend/GPU)
5225- [STABLE] Support auto mixed precision level "O2" in auto and semi auto parallel mode. (Ascend/GPU)
5226- [STABLE] Add MultiFieldEmbeddingLookup high-level interface. (Ascend/GPU)
5227
5228#### Executor
5229
5230- [STABLE] ResNet50 performance optimize. (GPU)
- [STABLE] Support Model Zoo networks in PyNative mode (Ascend 29, GPU 23, CPU 2).(Ascend/GPU/CPU)
5232- [STABLE] Support PyNative mode on CPU.(CPU)
5233- [STABLE] Optimize performance in PyNative mode.(Ascend/GPU/CPU)
5234- [STABLE] Support Safe Optimized Memory Allocation Solver (SOMAS) on Ascend to improve the memory-reuse, the batch size of Bert large model (128 sequence length) is increased from 160 to 208.(Ascend)
5235- [BETA] Support second order differentiation in PyNative mode.(Ascend/GPU)
- [DEMO] Add distributed training in PyNative mode.(Ascend/GPU)
5237
5238#### MDP
5239
5240- [STABLE]  Add new operators for Ascend and GPU: IGamma, LGamma, DiGamma;
5241- [STABLE]  Add new distributions for Ascend and GPU: LogNormal, and Logistic;
5242- [BETA]  Add new distributions for Ascend only: Gumbel, Cauchy, Gamma, Beta, and Poisson; Add Categorical distribution for GPU;
5243- [STABLE]  Add new bijectors for Ascend and GPU: GumbelCDF, Invert;
5244- [STABLE]  Add Bayesian layer realized by local reparameterization method for Ascend and GPU;
5245- [STABLE]  Add Anomaly Detection Toolbox based on VAE for Ascend and GPU.
5246
5247#### DataSet
5248
5249- [STABLE] Support single node multi-p distributed cache data sharing
5250- [STABLE] Support GPU profiling with data processing
5251- [STABLE] Support YOLOV3 dynamic shape in sink mode with dataset
5252- [STABLE] Support unique processing in the data processing pipeline
5253- [STABLE] Python layer parameter verification error information unified
5254
5255### API Change
5256
5257#### Backwards Incompatible Change
5258
5259##### Python API
5260
5261###### Delete shape and dtype of class Initializer ([!7373](https://gitee.com/mindspore/mindspore/pulls/7373/files))
5262
5263Delete shape and dtype attributes of Initializer class.
5264
5265###### Modify the return type of initializer ([!7373](https://gitee.com/mindspore/mindspore/pulls/7373/files))
5266
Previously, the return type of the initializer function could be a string, a number, an instance of class Tensor, or a subclass of class Initializer.

After the modification, the initializer function returns an instance of class MetaTensor, class Tensor, or a subclass of class Initializer.

Note that MetaTensor is forbidden from initializing parameters, so we recommend using a str, a number, or a subclass of Initializer for parameter initialization rather than the initializer functions.
5272
5273<table>
5274<tr>
5275<td style="text-align:center"> 1.0.1 </td> <td style="text-align:center"> 1.1.0 </td>
5276</tr>
5277<tr>
5278<td>
5279
5280```python
5281>>> import mindspore.nn as nn
5282>>> from mindspore.common import initializer
5283>>> from mindspore import dtype as mstype
5284>>>
>>> def conv3x3(in_channels, out_channels):
5286>>>   weight = initializer('XavierUniform', shape=(3, 2, 32, 32), dtype=mstype.float32)
5287>>>   return nn.Conv2d(in_channels, out_channels, weight_init=weight, has_bias=False, pad_mode="same")
5288```
5289
5290</td>
5291<td>
5292
5293```python
5294>>> import mindspore.nn as nn
5295>>> from mindspore.common.initializer import XavierUniform
5296>>>
5297>>> #1) using string
>>> def conv3x3(in_channels, out_channels):
5299>>>   return nn.Conv2d(in_channels, out_channels, weight_init='XavierUniform', has_bias=False, pad_mode="same")
5300>>>
5301>>> #2) using subclass of class Initializer
>>> def conv3x3(in_channels, out_channels):
5303>>>   return nn.Conv2d(in_channels, out_channels, weight_init=XavierUniform(), has_bias=False, pad_mode="same")
5304```
5305
5306</td>
5307</tr>
5308</table>
5309
5310Advantages:
5311After modification, we can use the same instance of Initializer to initialize parameters of different shapes, which was not allowed before.
5312
5313<table>
5314<tr>
5315<td style="text-align:center"> 1.0.1 </td> <td style="text-align:center"> 1.1.0 </td>
5316</tr>
5317<tr>
5318<td>
5319
5320```python
5321>>> import mindspore.nn as nn
5322>>> from mindspore.common import initializer
5323>>> from mindspore.common.initializer import XavierUniform
5324>>>
5325>>> weight_init_1 = XavierUniform(gain=1.1)
5326>>> conv1 = nn.Conv2d(3, 6, weight_init=weight_init_1)
5327>>> weight_init_2 = XavierUniform(gain=1.1)
5328>>> conv2 = nn.Conv2d(6, 10, weight_init=weight_init_2)
5329```
5330
5331</td>
5332<td>
5333
5334```python
5335>>> import mindspore.nn as nn
5336>>> from mindspore.common import initializer
5337>>> from mindspore.common.initializer import XavierUniform
5338>>>
5339>>> weight_init = XavierUniform(gain=1.1)
5340>>> conv1 = nn.Conv2d(3, 6, weight_init=weight_init)
5341>>> conv2 = nn.Conv2d(6, 10, weight_init=weight_init)
5342```
5343
5344</td>
5345</tr>
5346</table>
5347
5348###### Modify get_seed function ([!7429](https://gitee.com/mindspore/mindspore/pulls/7429/files))
5349
5350Modify get_seed function implementation
5351
Previously, if the seed was not set, it took a default value, and parameters initialized by the normal function were the same every time.

After the modification, if the seed is not set, it is generated randomly, and the initialized parameters change according to the random seed.

If you want to fix the initial values of parameters, we suggest setting the seed:
5357
5358```python
5359>>> from mindspore.common import set_seed
5360>>> set_seed(1)
5361```
5362
5363###### `nn.LinSpace` ([!9494](https://gitee.com/mindspore/mindspore/pulls/9494)) has been removed and modify `ops.LinSpace` ([!8920](https://gitee.com/mindspore/mindspore/pulls/8920))
5364
Previously, the `nn.LinSpace` interface only supported passing values through attributes. For convenience, we provide an enhanced `ops.LinSpace` interface, which supports passing values through inputs in the latest version. So there is no need for `nn.LinSpace` anymore.
5366
5367<table>
5368<tr>
5369<td style="text-align:center"> 1.0.1 </td> <td style="text-align:center"> 1.1.0 </td>
5370</tr>
5371<tr>
5372<td>
5373
5374```python
5375>>> from mindspore import nn
5376>>>
5377>>> start = 1
5378>>> stop = 10
5379>>> num = 5
5380>>> linspace = nn.LinSpace(start, stop, num)
5381>>> output = linspace()
5382```
5383
5384</td>
5385<td>
5386
5387```python
5388>>> import mindspore
5389>>> from mindspore import Tensor
5390>>> from mindspore import ops
5391>>>
5392>>> linspace = ops.LinSpace()
5393>>> start = Tensor(1, mindspore.float32)
5394>>> stop = Tensor(10, mindspore.float32)
5395>>> num = 5
5396>>> output = linspace(start, stop, num)
5397```
5398
5399</td>
5400</tr>
5401</table>
5402
5403###### Parts of `Optimizer` add target interface ([!6760](https://gitee.com/mindspore/mindspore/pulls/6760/files))
5404
5405The usage of the sparse optimizer is changed.
5406
5407The target interface is used to set the execution backend of the sparse operator.
5408
The add_primitive_attr interface is no longer supported.

The following optimizers add the target interface: Adam, FTRL, LazyAdam, ProximalAdagrad.
5412
5413<table>
5414<tr>
5415<td style="text-align:center"> 1.0.1 </td> <td style="text-align:center"> 1.1.0 </td>
5416</tr>
5417<tr>
5418<td>
5419
5420```python
5421>>> from mindspore.nn import Adam
5422>>>
5423>>> net = LeNet5()
5424>>> optimizer = Adam(filter(lambda x: x.requires_grad, net.get_parameters()))
5425>>> optimizer.sparse_opt.set_device("CPU")
5426```
5427
5428</td>
5429<td>
5430
5431```python
5432>>> from mindspore.nn import Adam
5433>>>
5434>>> net = LeNet5()
5435>>> optimizer = Adam(filter(lambda x: x.requires_grad, net.get_parameters()))
5436>>> optimizer.target = 'CPU'
5437```
5438
5439</td>
5440</tr>
5441</table>
5442
###### `export` modifies the input parameters and the exported file name ([!7385](https://gitee.com/mindspore/mindspore/pulls/7385), [!9057](https://gitee.com/mindspore/mindspore/pulls/9057/files))

Export the MindSpore prediction model to a file in the specified format.

The parameters include: `net`, `*inputs`, `file_name`, `file_format`, `**kwargs`.

Parameters can be passed according to specific export requirements.

The file name extension is added automatically based on the format.
5452
5453<table>
5454<tr>
5455<td style="text-align:center"> 1.0.1 </td> <td style="text-align:center"> 1.1.0 </td>
5456</tr>
5457<tr>
5458<td>
5459
5460```python
5461>>> from mindspore.train.quant import quant
5462>>>
5463>>> network = LeNetQuant()
5464>>> inputs = Tensor(np.ones([1, 1, 32, 32]), mindspore.float32)
5465>>> quant.export(network, inputs, file_name="lenet_quant.mindir", file_format='MINDIR')
5466lenet_quant.mindir
5467```
5468
5469</td>
5470<td>
5471
5472```python
5473>>> import mindspore as ms
5474>>>
5475>>> network = LeNetQuant()
5476>>> inputs = Tensor(np.ones([1, 1, 32, 32]), mindspore.float32)
5477>>> ms.export(network, inputs, file_name="lenet_quant", file_format='MINDIR', quant_mode='AUTO')
5478lenet_quant.mindir
5479```
5480
5481</td>
5482</tr>
5483</table>
5484
###### `Dense`, `Conv2dBnAct`, `DenseBnAct`, `DenseQuant` support setting the activation attribute as an instance of a class derived from `nn.Cell` or `Primitive` ([!7581](https://gitee.com/mindspore/mindspore/pulls/7581))

activation (Union[str, Cell, Primitive]): activation function applied to the output of the fully connected layer.
5488
5489<table>
5490<tr>
5491<td style="text-align:center"> 1.0.1 </td> <td style="text-align:center"> 1.1.0 </td>
5492</tr>
5493<tr>
5494<td>
5495
5496```python
5497>>> import mindspore.nn as nn
5498>>>
5499>>> dense = nn.Dense(1, 1, activation='relu')
5500```
5501
5502</td>
5503<td>
5504
5505```python
5506>>> import mindspore.nn as nn
5507>>> import mindspore.ops as ops
5508>>>
5509>>> dense = nn.Dense(1, 1, activation=nn.ReLU())
5510>>> dense = nn.Dense(1, 1, activation=ops.ReLU())
5511```
5512
5513</td>
5514</tr>
5515</table>
5516
###### `tensor.dim()` and `tensor.size()` have been renamed to `tensor.ndim` and `tensor.size` ([!10175](https://gitee.com/mindspore/mindspore/pulls/10175))
5518
5519Previously, tensor.size() and tensor.dim() were used for checking the total number of elements/dimensions in the tensor.
5520However, from a user's perspective, tensor.size and tensor.ndim (methods -> properties) are better choices, since they follow the numpy naming convention.
5521
5522<table>
5523<tr>
5524<td style="text-align:center"> 1.0.1 </td> <td style="text-align:center"> 1.1.0 </td>
5525</tr>
5526<tr>
5527<td>
5528
5529```python
5530>>> from mindspore import Tensor
5531>>>
5532>>> Tensor((1,2,3)).size()
5533>>> Tensor((1,2,3)).dim()
5534```
5535
5536</td>
5537<td>
5538
5539```python
5540>>> from mindspore import Tensor
5541>>>
5542>>> Tensor((1,2,3)).size
5543>>> Tensor((1,2,3)).ndim
5544```
5545
5546</td>
5547</tr>
5548</table>
5549
###### `EmbeddingLookup` adds a config to the interface: sparse ([!8202](https://gitee.com/mindspore/mindspore/pulls/8202))

sparse (bool): Whether to use sparse mode. When 'target' is set to 'CPU', 'sparse' has to be True. Default: True.
5553
5554<table>
5555<tr>
5556<td style="text-align:center"> 1.0.1 </td> <td style="text-align:center"> 1.1.0 </td>
5557</tr>
5558<tr>
5559<td>
5560
5561```python
5562>>> from mindspore.nn import EmbeddingLookup
5563>>>
5564>>> input_indices = Tensor(np.array([[1, 0], [3, 2]]), mindspore.int32)
5565>>> result = EmbeddingLookup(4,2)(input_indices)
5566>>> print(result.shape)
5567(2, 2, 2)
5568```
5569
5570</td>
5571<td>
5572
5573```python
5574>>> from mindspore.nn import EmbeddingLookup
5575>>>
5576>>> input_indices = Tensor(np.array([[1, 0], [3, 2]]), mindspore.int32)
5577>>> result = EmbeddingLookup(4,2)(input_indices, sparse=False)
5578>>> print(result.shape)
5579(2, 2, 2)
5580```
5581
5582</td>
5583</tr>
5584</table>
5585
###### `nn.probability.bijector` changes the types of attributes from (int, float) to (float, list, numpy.ndarray, Tensor) ([!8191](https://gitee.com/mindspore/mindspore/pulls/8191))

Attribute types change from (int, float) to (float, list, numpy.ndarray, Tensor).
The int type is no longer supported; parameters of all bijectors should be of type float, list, numpy.ndarray or Tensor.
5590
5591<table>
5592<tr>
5593<td style="text-align:center"> 1.0.1 </td> <td style="text-align:center"> 1.1.0 </td>
5594</tr>
5595<tr>
5596<td>
5597
5598```python
5599>>> import mindspore.nn.probability.bijector as msb
5600>>>
5601>>> power = 2
5602>>> bijector = msb.PowerTransform(power=power)
5603```
5604
5605</td>
5606<td>
5607
5608```python
5609>>> import mindspore.nn.probability.bijector as msb
5610>>>
5611>>> power = 2.0
5612>>> bijector = msb.PowerTransform(power=power)
5613```
5614
5615</td>
5616</tr>
5617</table>
5618
###### `nn.probability.bijector.GumbelCDF` removes the dtype attribute from the interface ([!8191](https://gitee.com/mindspore/mindspore/pulls/8191))
5620
5621dtype is removed from GumbelCDF and is no longer an argument of the class.
5622
5623<table>
5624<tr>
5625<td style="text-align:center"> 1.0.1 </td> <td style="text-align:center"> 1.1.0 </td>
5626</tr>
5627<tr>
5628<td>
5629
5630```python
5631>>> import mindspore.nn.probability.bijector as msb
5632>>> from mindspore import dtype as mstype
5633>>>
5634>>> bijector = msb.GumbelCDF(loc=0.0, scale=1.0, dtype=mstype.float32)
5635```
5636
5637</td>
5638<td>
5639
5640```python
5641>>> import mindspore.nn.probability.bijector as msb
5642>>>
5643>>> bijector = msb.GumbelCDF(loc=0.0, scale=1.0)
5644```
5645
5646</td>
5647</tr>
5648</table>
5649
###### `nn.layer.combined.Conv2dBnAct`, `nn.layer.combined.DenseBnAct` moved from nn.layer.quant to nn.layer.combined ([!8187](https://gitee.com/mindspore/mindspore/pulls/8187))

Previously, Conv2dBnAct and DenseBnAct were in nn.layer.quant. Since they are not quant cells, they have been moved to nn.layer.combined. If you import Conv2dBnAct and DenseBnAct from mindspore.nn, your code does not need any change.
5653
5654<table>
5655<tr>
5656<td style="text-align:center"> 1.0.1 </td> <td style="text-align:center"> 1.1.0 </td>
5657</tr>
5658<tr>
5659<td>
5660
5661```python
5662>>> from mindspore.nn.layer.quant import Conv2dBnAct, DenseBnAct
5663```
5664
5665</td>
5666<td>
5667
5668```python
5669>>> from mindspore.nn import Conv2dBnAct, DenseBnAct
5670```
5671
5672</td>
5673</tr>
5674</table>
5675
###### `nn.layer.conv.Conv2D`, `nn.layer.quant.Conv2dBnFoldQuant`, `nn.layer.quant.Conv2dBnWithoutFoldQuant` change the weight shape when group > 1 on the Ascend platform ([!9723](https://gitee.com/mindspore/mindspore/pulls/9723))

On the Ascend platform, when group > 1, the weight shape of Conv2D changes from [in_channels//group, out_channels, kernel_size, kernel_size] to [out_channels, in_channels//group, kernel_size, kernel_size]. Checkpoints saved by earlier versions for networks that use Conv2D with group > 1, such as MobileNet, can no longer be loaded directly; the first and second axes of the affected weights need to be transposed.
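
A minimal migration sketch is shown below. It is not an official tool; the checkpoint file names and the parameter names in `grouped_conv_weights` are hypothetical and depend on your network. The sketch swaps the first two axes of each grouped-convolution weight in an old checkpoint before the checkpoint is loaded with the new version:

```python
from mindspore import Tensor
from mindspore.train.serialization import load_checkpoint, save_checkpoint

# Hypothetical parameter names of the Conv2D weights that use group > 1 in your network.
grouped_conv_weights = {"features.2.conv.0.weight"}

param_dict = load_checkpoint("mobilenet_old.ckpt")  # checkpoint saved by an earlier version
new_params = []
for name, param in param_dict.items():
    data = param.data.asnumpy()
    if name in grouped_conv_weights:
        # Swap the first and second axes of the grouped-convolution weight.
        data = data.transpose(1, 0, 2, 3)
    new_params.append({"name": name, "data": Tensor(data)})
save_checkpoint(new_params, "mobilenet_new.ckpt")
```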
5679
5680### Bug fixes
5681
5682#### FrontEnd
5683
- [STABLE] Fix the problem of CSE optimization in control flow scenarios. (Ascend/GPU)
5685
5686#### Auto Parallel
5687
5688- [STABLE] Resolve the restriction: input and output layouts of Reshape are restricted in tensor redistribution. (Ascend/GPU)
5689- [STABLE] Resolve the restriction: output strategy should be data parallel in model evaluation. (Ascend/GPU)
5690
5691#### Executor
5692
5693- [STABLE] Fix fusion operator compilation cache. (Ascend)
5694- [STABLE] Fix compilation error of dynamic shape operator. (Ascend)
- [STABLE] Fix the bug that PyNative cannot insert transdata for node output when the node should be split in backend optimization. (Ascend)
- [STABLE] Fix the bug that TensorMove and memcpy_async are merged into one after the backend CSE pass. (Ascend)
5697
5698#### DataSet
5699
5700- [STABLE] Fix cache server hang on RequestFreeTag. (Ascend/GPU/CPU)
- [STABLE] Fix hang when using pyfunc multiprocessing. (Ascend/GPU/CPU)
- [STABLE] Fix core dump caused by adding multiple parent nodes to a tree node. (Ascend/GPU/CPU)
5703
5704## MindSpore Lite
5705
5706### Major Features and Improvements
5707
5708#### Converter and runtime
5709
1. Support dynamic shape in MindSpore Lite Converter.
2. Optimize the sub-graph mechanism by dynamically splitting the entire graph into multiple subgraphs based on the operators supported, the backend hardware and the user configuration.
3. Support TensorList and TensorList operators such as TensorListFromTensor, TensorListGetItem and so on.
4. Support BatchMatMul fusion and LSTM fusion in MindSpore Lite Converter.
5. Support converting models and running inference on the Windows operating system.
6. Support Model (.ms) visualization on Netron.
7. Support TensorFlow models in MindSpore Lite Converter.
8. Add 86 converter parsers.
9. Convert aware-training models without requiring user awareness.
10. Support scalar tensors in MindSpore Lite Converter and Runtime.
11. Support NPU backend on HUAWEI Kirin SoC. [BETA]
12. Merge timeprofiler into benchmark.
5722
5723#### CPU backend optimization
5724
1. Add 50+ new operators, including new op types (such as Adder and Gru).
2. Enhance performance on platforms that support armv8.2, for example by utilizing the sdot instruction more efficiently.
3. Optimize all operators (fp32, fp16, int8) by applying multi-threading and SIMD techniques as much as possible. Model inference time can be reduced by at least 20% after these optimizations.
4. Extend operator support for the x86_64 platform based on the SSE/AVX instruction sets.
5729
5730#### OpenCL backend
5731
1. Add new ops: add 10+ ops, 58 ops in total.
2. Performance optimization: memory layout optimization, Winograd convolution selection strategy optimization, SIMT local size optimization and local cache optimization improve GPU performance by up to 20+% compared with MindSpore Lite 1.0.
3. Add online graph optimization: fusing Convolution/MatMul/FullConnection with add/mul/pad/reshape improves performance by up to 50+% for some networks.
4. Add auto tuning: online tuning in the graph compilation phase improves performance by up to 10%.
5. Add weight quantization support.
6. Add OpenCL kernel binary cache to improve initialization time.
5738
5739#### Post quantization
5740
MindSpore Lite supports both weight quantization and full quantization. Currently, weights can be quantized into 1 to 16 bits according to user configuration. In internal testing, quantization of networks such as classification, detection, segmentation and transformer networks is well supported. To ensure high accuracy of quantized models, MindSpore Lite uses a pipeline quantization method. In the first phase, the weight and activation values are quantized using linear quantization methods such as MIN-MAX. In the second phase, the quantization error is analyzed, and statistical methods are used to compensate for the loss caused by quantizing fp32 values to a fixed-point format such as int8. The features of post-training quantization are:
5742
1. Per-channel asymmetric quantization for weights, such as MAX_MIN and KMEANS.
2. Per-layer symmetric quantization for activations, such as KL and MAX_MIN.
3. Per-layer asymmetric quantization for activations, such as RemoveOutlier.
4. Accuracy loss compensation, such as BiasCorrection.
5747
| mobilenet_v2 | ACC (ImageNet) |
|---|---|
| FP32 | 71.56% |
| A8W8 | 71.16% |
| A8W8 (without BiasCorrection) | 70.74% |
| A8W7 | 71.06% |
| A7W7 | 70.78% |
5755
The above table uses the mobilenet_v2 model from the TF official website. With MindSpore Lite quantization, the accuracy loss of A8W8 (8-bit activation quantization and 8-bit weight quantization) decreases from 0.82% to 0.4% after accuracy loss compensation; for 7-bit quantization, the accuracy loss is still no more than 1%.
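
As a conceptual illustration only (this is a generic sketch of per-channel MIN-MAX linear quantization, not MindSpore Lite's actual implementation), the following example quantizes a weight tensor per output channel with an asymmetric MIN-MAX scheme and then dequantizes it to estimate the quantization error:

```python
import numpy as np

def quantize_per_channel_minmax(weights, num_bits=8):
    """Per-channel asymmetric MIN-MAX quantization (conceptual sketch only)."""
    qmin, qmax = 0, (1 << num_bits) - 1
    flat = weights.reshape(weights.shape[0], -1)            # one row per output channel
    w_min, w_max = flat.min(axis=1), flat.max(axis=1)
    # Avoid a zero range when a channel is constant.
    scale = np.where(w_max > w_min, (w_max - w_min) / (qmax - qmin), 1.0)
    zero_point = np.round(qmin - w_min / scale)
    q = np.clip(np.round(flat / scale[:, None]) + zero_point[:, None], qmin, qmax)
    return q.astype(np.int32).reshape(weights.shape), scale, zero_point

def dequantize_per_channel(q, scale, zero_point):
    flat = q.reshape(q.shape[0], -1).astype(np.float32)
    return ((flat - zero_point[:, None]) * scale[:, None]).reshape(q.shape)

w = np.random.randn(4, 3, 3, 3).astype(np.float32)          # a toy convolution weight
q, scale, zp = quantize_per_channel_minmax(w)
print(np.abs(dequantize_per_channel(q, scale, zp) - w).max())  # maximum quantization error
```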
5757
5758#### Training on Device
5759
Within the MindSpore 1.1 release, MindSpore Lite provides the following Training-on-Device (ToD) capabilities:

1. Learning-from-scratch and transfer learning strategies are supported.
2. MindSpore-based models can be converted and used in training on the device. (Third-party models such as TensorFlow and PyTorch models cannot yet be directly imported into the framework.)
57643. Grad operations are supported for more than 30 operators such as Dense layers, Convolutions and Batch Normalizations. Momentum, SGD, and ADAM optimizers are supported.
57654. Supports networks such as LeNet, Alexnet, Resnet, MobileNetV1/V2/V3, and EffectiveNet, and provides complete model loading, conversion, and Python training scripts on the device side.
5766
5767The MindSpore Lite ToD framework is already in use in the newest Huawei Smart TV, providing a unique and personalized user experience as a family entertainment center.
5768
5769### API Change
5770
5771#### API Incompatible Change
5772
5773##### C++ API
5774
- [Modify] Context now supports multi-context configuration. (Context.h)
- [Modify] Callback is moved from lite_session.h into ms_tensor.h.
- [Modify] GetInputsByName in lite_session.h is changed into GetInputsByTensorName.
- [Add] Add static LiteSession *CreateSession(const char *model_buf, size_t size, const lite::Context *context) in lite_session.h.
- [Add] Add a GetErrorInfo interface that returns the error message in errorcode.h.
- [Delete] Remove model_generated.h, ops_generated.h and the FlatBuffers library headers from the interfaces.
5781
5782##### Java API
5783
- [Add] Implement the JNI layer and add Java APIs for the CPU and GPU backends.
5785
5786#### Deprecations
5787
5788##### C++ API
5789
Deprecate the GetOutputsByNodeName interface.
5791
5792### Bug fixes
5793
- [BUGFIX] Fix the bug in sub-graph segmentation.
- [BUGFIX] Fix the bug in Tensor getitem in which the ellipsis matches the wrong dim-size.
- [BUGFIX] Fix the bug that activation modification after defining Dense does not take effect.
5797
5798## Contributors
5799
5800Thanks goes to these wonderful people:
5801
5802zhouyifengCode, huqi, JulyAi, damon0626, chenbo116, rmdyh, davidmc, gray0v0, doitH, Gogery, zymaa, xinyunfan
5803
5804Adel, AGroupofProbiotocs, anthonyaje, anzhengqi, askmiao, baihuawei, baiyangfan, bai-yangfan, bingyaweng, BowenK, buxue, caifubi, CaoJian, caojian05, caozhou, Cathy, changzherui, chenbo116, chenfei, chengxianbin, chenhaozhe, chenjianping, chenzomi, chenzupeng, chujinjin, cj, cjh9368, Corleone, damon0626, danish, Danish, davidmc, dayschan, doitH, eric, Eric, fary86, fuzhiye, Gaoxiong, gengdongjie, Gogery, gongdaguo, gray0v0, gukecai, guoqi, gzhcv, hangq, hanhuifeng2020, Harshvardhan, He, heleiwang, hexia, Hoai, HuangBingjian, huangdongrun, huanghui, huangxinjing, huqi, huzhifeng, hwjiaorui, Jesse, jianghui58, jiangzhiwen, Jiaqi, jin-xiulang, jinyaohui, jjfeing, John, Jonathan, jonyguo, JulyAi, jzg, kai00, kingfo, kingxian, kpy, kswang, laiyongqiang, leonwanghui, Li, liangchenghui, liangzelang, lichen_101010, lichenever, lihongkang, lilei, limingqi107, ling, linqingke, liubuyu, liuwenhao4, liuxiao78, liuxiao93, liuyang_655, liuzhongkai, Lixia, lixian, liyanliu, liyong, lizhenyu, luoyang, lvchangquan, lvliang, lz, mahdi, Mahdi, maning202007, Margaret_wangrui, mayang, mengyuanli, nhussain, ougongchang, panfengfeng, panyifeng, Payne, Peilin, peixu_ren, Pengyongrong, qianlong, r1chardf1d0, riemann_penn, rmdyh, Sheng, shenwei41, simson, Simson, Su, sunsuodong, tao_yunhao, tinazhang, VectorSL, , Wan, wandongdong, wangdongxu, wangmin, wangnan39@huawei.com, wangyue01, wangzhe, wanyiming, Wei, wenchunjiang, wilfChen, WilliamLian, wsc, wukesong, wuweikang, wuxuejian, Xiaoda, xiefangqi, xinyunfan, xuanyue, xulei2020, Xun, xuyongfei, yanghaitao, yanghaitao1, yanghaoran, YangLuo, yangruoqi713, yankai, yanzhenxiang2020, yao_yf, yepei6, yeyunpeng, Yi, yoni, yoonlee666, yuchaojie, yujianfeng, yuximiao, zengzitao, Zhang, zhanghaibo5@huawei.com, zhanghuiyao, zhangyihui, zhangz0911gm, zhanke, zhanyuan, zhaodezan, zhaojichen, zhaoting, zhaozhenlong, zhengjun10, zhoufeng, zhousiyi, zhouyaqiang, zhouyifengCode, Zichun, Zirui, Ziyan, zjun, ZPaC, zymaa
5805
5806Contributions of any kind are welcome!
5807
5808# MindSpore 1.0.0 Release Notes
5809
5810## Major Features and Improvements
5811
5812### MindSpore Training and Inference Framework
5813
5814#### Ascend 910
5815
5816- New models
5817    - DenseNet121: a dense convolutional neural network, which connects each layer to every other layer in a feed-forward fashion for object recognition on ImageNet dataset.
5818    - UNet2D-Medical: Unet Medical model for 2D image segmentation, Convolutional Networks for Biomedical Image Segmentation on ISBI Challenge database.
5819- Frontend and user interface
5820    - Second-Order Optimization
5821        - Enable second-order optimization for Bert on Ascend 910, which can achieve a masked lm accuracy of 71.3% in 800 seconds using 8 Ascend 910 (Bert-Large @MLPerf v0.7 dataset).
5822    - New GNN model BGCF
5823        - Bayesian Graph Convolutional Filtering network which naturally incorporate the uncertainty in the user-item interaction graph shows excellent recommendation performance on Amazon-Beauty dataset.
5824    - Add append interface for SequentialCell.
5825    - Add a level `auto` for AMP.
5826- Executor and performance optimization
5827    - Support quantitative network (Resnet50 & YoloV3 & MobileNetV2).
    - Project ease-of-use optimization: project compilation time optimization, CMakeLists regularization, and independent compilation and installation of cuDNN and CUDA.
5829- Data processing, augmentation, and save format
    - Support GeneratorDataset returning string type
5831
5832#### Other Hardware Support
5833
5834- GPU platform
    - Enable second-order optimization for resnet50 on GPU, which achieves a 30% improvement in training time compared to SGD with Momentum (Resnet50 @ImageNet).
5836
5837#### User interfaces change log
5838
5839- Remove global object GradOperation in Autodiff([!5011](https://gitee.com/mindspore/mindspore/pulls/5011))
5840- Remove useless attribute 'name' in Autodiff([!5172](https://gitee.com/mindspore/mindspore/pulls/5172))
5841- Rectification distributed init([!5350](https://gitee.com/mindspore/mindspore/pulls/5350))
- Move the setting of ParallelMode from train.parallel_utils to context([!5351](https://gitee.com/mindspore/mindspore/pulls/5351))
5843- Modification of save_checkpoint([!5482](https://gitee.com/mindspore/mindspore/pulls/5482))
5844- Wrap numpy random seed into an api([!5634](https://gitee.com/mindspore/mindspore/pulls/5634))
5845- Delete enable_fused_layernorm in some modelzoo scripts([!5665](https://gitee.com/mindspore/mindspore/pulls/5665))
5846- Move 'multi-subgraphs' interface to internal([!5696](https://gitee.com/mindspore/mindspore/pulls/5696))
5847- Rename mirror_mean to gradient_mean([!5700](https://gitee.com/mindspore/mindspore/pulls/5700))
5848- Remove default value of 'group' of DepthWiseConv2d([!5865](https://gitee.com/mindspore/mindspore/pulls/5865))
5849- Modify interface for function and remove duplicated def([!5958](https://gitee.com/mindspore/mindspore/pulls/5958))
5850- Unify Conv2d and DepthwiseConv2d([!5916](https://gitee.com/mindspore/mindspore/pulls/5916))
5851- Modification of SoftmaxCrossEntropyWithLogits([!5502](https://gitee.com/mindspore/mindspore/pulls/5502))
5852- Change API set_strategy() to shard()([!5991](https://gitee.com/mindspore/mindspore/pulls/5991))
5853- Move batch_size from bert_cfg_cfg to cfg([!6233](https://gitee.com/mindspore/mindspore/pulls/6233))
5854- Remove unused parameters from SummaryRecord __init__([!5548](https://gitee.com/mindspore/mindspore/pulls/5548))
5855- remove sens parameter of TrainOneStepWithLossScaleCell([!5753](https://gitee.com/mindspore/mindspore/pulls/5753))
5856- optimize the TrainOneStepCell for user's define([!6159](https://gitee.com/mindspore/mindspore/pulls/6159))
5857- delete seed0 and seed1 of nn.Dropout([!5735](https://gitee.com/mindspore/mindspore/pulls/5735))
5858- delete DataWrapper([!6101](https://gitee.com/mindspore/mindspore/pulls/6101))
5859- LSTM API optimization([!6374](https://gitee.com/mindspore/mindspore/pulls/6374))
5860- Merge P\C\F of ops([!5645](https://gitee.com/mindspore/mindspore/pulls/5645))
5861- delete SoftmaxCrossEntropyExpand interface([!6607](https://gitee.com/mindspore/mindspore/pulls/6607))
5862- Adjust GroupNorm interface([!6329](https://gitee.com/mindspore/mindspore/pulls/6329))
5863- Modify init interface to internal interface([!6651](https://gitee.com/mindspore/mindspore/pulls/6651))
5864- Log optimization([!5842](https://gitee.com/mindspore/mindspore/pulls/5842))
5865- Remove useless API dataset.set_dataset_size([!5806](https://gitee.com/mindspore/mindspore/pulls/5806))
5866- Some of Dataset API add usage parameter([!5605](https://gitee.com/mindspore/mindspore/pulls/5605))
5867- Change the import path, such as from mindspore.dataset.transforms.vision to mindspore.dataset.vision.transforms([!5384](https://gitee.com/mindspore/mindspore/pulls/5384))
5868- Rename ImageFolderDatasetV2 to ImageFolderDataset([!5384](https://gitee.com/mindspore/mindspore/pulls/5384))
5869- Dataset.map parameter optimization([!5384](https://gitee.com/mindspore/mindspore/pulls/5384))
- Add new api dataset.get_col_names([!5384](https://gitee.com/mindspore/mindspore/pulls/5384))
5872- Remove useless API MindRecord finish([!5580](https://gitee.com/mindspore/mindspore/pulls/5580))
5873
5874### MindSpore Lite
5875
5876- Converter
5877    - Add 6 TFLite op, 7 Caffe op, 1 ONNX op.
5878    - Add support for Windows.
    - Support parallel inference of multiple sessions to adapt to more scenarios.
    - Support 8-bit weight-only quantization; most mainstream models have a small accuracy loss (less than 0.5%) compared with the non-quantized fp32 model.
5881
5882- CPU & GPU
    - Add 20 CPU ops, including FP32, int8/uint8, FP16 and int32 ops.
    - Add FP16 support for GPU and add 14 GPU ops including FP32/FP16.
    - Add Buffer/Image2D transform op for GPU.
5886    - Performance optimization for CPU ops focus on ARM32.
5887    - Performance optimization for GPU Convolution using winograd.
5888
5889- Tool & example
5890    - Add object detection Android Demo.
5891
5892## Bugfixes
5893
5894- Models
5895    - fix the constant folding problem in multiply.([!6092](https://gitee.com/mindspore/mindspore/pulls/6092))
5896    - move batch_size from bert_net_cfg to cfg in bert scripts.([!6233](https://gitee.com/mindspore/mindspore/pulls/6233))
5897    - modify the checkpoint file path.([!6137](https://gitee.com/mindspore/mindspore/pulls/6137))
5898- Python API
5899    - fix semi auto parallel parameter of reshape has another user([!5722](https://gitee.com/mindspore/mindspore/pulls/5722))
5900    - raise ValueError when call hook function in graph mode([!5831](https://gitee.com/mindspore/mindspore/pulls/5831))
5901- Executor
5902    - fix pynative mode to build temporary nn objects.([!6189](https://gitee.com/mindspore/mindspore/pulls/6189))
5903    - fix the accuracy problem of multiple inputs of multi-card communication operator broadcast.([!6522](https://gitee.com/mindspore/mindspore/pulls/5622))
5904    - fix the problem that the sample distribution interface categorical does not support graph mode.([!5772](https://gitee.com/mindspore/mindspore/pulls/5772))
5905    - fix the random seed failure problem of the polynomial downsampling distribution operator.([!5948](https://gitee.com/mindspore/mindspore/pulls/5948))
5906    - fix unnecessary address binding issues in GPU heterogeneous scenarios.([!6232](https://gitee.com/mindspore/mindspore/pulls/6232))
5907- GPU platform
5908    - fix for kernel resource leak([!5315](https://gitee.com/mindspore/mindspore/pulls/5315))
5909    - fix for insufficient memory for continuous unit test running([!5617](https://gitee.com/mindspore/mindspore/pulls/5617))
5910    - fix for the memory leak in the sparse slicer([!5578](https://gitee.com/mindspore/mindspore/pulls/5578))
5911- Data processing
5912    - fix hang when use pyfunc([!6346](https://gitee.com/mindspore/mindspore/pulls/6346))
5913    - fix GPU device queue does not release GIL during resource clean up([!5964](https://gitee.com/mindspore/mindspore/pulls/5964))
    - fix hang if script exits abnormally([!6441](https://gitee.com/mindspore/mindspore/pulls/6441))
5915- Third party
5916    - Sqlite : Update sqlite to 3.32.2 to handle [CVE-2020-11656](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-11656), [CVE-2020-13871](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-13871), [CVE-2020-11655](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-11655), [CVE-2020-9327](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-9327), [CVE-2020-13630](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-13630), [CVE-2020-15358](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-15358), [CVE-2020-13631](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-13631), [CVE-2020-13632](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-13632), [CVE-2020-13434](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-13434), [CVE-2020-13435](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-13435), and [CVE-2020-15358](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-11655).
5917    - Libjpeg-turbo : Update libjpeg-turbo to 2.0.4 to handle [CVE-2020-13790](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-13790).
5918
5919## Contributors
5920
5921Thanks goes to these wonderful people:
5922
5923Adel, AGroupofProbiotocs, anthonyaje, anzhengqi, askmiao, baihuawei, baiyangfan, bai-yangfan, bingyaweng, BowenK, buxue, caifubi, CaoJian, caojian05, caozhou, Cathy, changzherui, chenfei, chengxianbin, chenhaozhe, chenjianping, chenzomi, chenzupeng, chujinjin, cj, cjh9368, Corleone, danish, Danish, dayschan, eric, Eric, fary86, fuzhiye, Gaoxiong, gengdongjie, gongdaguo, gukecai, guoqi, gzhcv, hangq, hanhuifeng2020, Harshvardhan, He, heleiwang, hexia, Hoai, HuangBingjian, huangdongrun, huanghui, huangxinjing, huzhifeng, hwjiaorui, Jesse, jianghui58, jiangzhiwen, Jiaqi, jin-xiulang, jinyaohui, jjfeing, John, Jonathan, jonyguo, jzg, kai00, kingfo, kingxian, kpy, kswang, laiyongqiang, leonwanghui, Li, liangchenghui, liangzelang, lichen_101010, lichenever, lihongkang, lilei, limingqi107, ling, linqingke, liubuyu, liuwenhao4, liuxiao78, liuxiao93, liuyang_655, liuzhongkai, Lixia, lixian, liyanliu, liyong, lizhenyu, luoyang, lvchangquan, lvliang, lz, mahdi, Mahdi, maning202007, Margaret_wangrui, mayang, mengyuanli, nhussain, ougongchang, panfengfeng, panyifeng, Payne, Peilin, peixu_ren, Pengyongrong, qianlong, r1chardf1d0, riemann_penn, root, Sheng, shenwei41, simson, Simson, Su, sunsuodong, tao_yunhao, tinazhang, VectorSL, , Wan, wandongdong, wangdongxu, wangmin, wangnan39@huawei.com, wangyue01, wangzhe, wanyiming, Wei, wenchunjiang, wilfChen, WilliamLian, wsc, wukesong, wuweikang, wuxuejian, Xiaoda, xiefangqi, xuanyue, xulei2020, Xun, xuyongfei, yanghaitao, yanghaitao1, yanghaoran, YangLuo, yangruoqi713, yankai, yanzhenxiang2020, yao_yf, yepei6, yeyunpeng, Yi, yoni, yoonlee666, yuchaojie, yujianfeng, yuximiao, zengzitao, Zhang, zhanghaibo5@huawei.com, zhanghuiyao, zhangyihui, zhangz0911gm, zhanke, zhanyuan, zhaodezan, zhaojichen, zhaoting, zhaozhenlong, zhengjun10, zhoufeng, zhousiyi, zhouyaqiang, Zichun, Zirui, Ziyan, zjun, ZPaC
5924
5925Contributions of any kind are welcome!
5926
5927# MindSpore 0.7.0-beta Release Notes
5928
5929## Major Features and Improvements
5930
5931### MindSpore Training and Inference Framework
5932
5933#### Ascend 910
5934
5935- New models
5936    - TinyBert: a smaller and faster version of BERT using transformer distillation for natural language understanding on GLUE benchmark.
5937    - SE-ResNet50: add Squeeze-and-Excitation blocks(SE-Blocks) to the resnet50 network to improve channel interdependencies for image classification on ImageNet 2012 dataset.
5938    - Inception V3: the third version of Inception convolutional architectures for image classification on ImageNet 2012 dataset.
5939- Frontend and user interface
    - High-level packaging of the embedding operator to support field-segmented embedding for Wide&Deep.
5941    - Load multi-node checkpoint into single-process to support host-device hybrid inference.
5942    - Support Concat/Tile/Strideslice distributed operators.
5943    - Support cumulative gradient and batch training split.
5944    - Support variable parameter input for Cell object.
5945    - Parameter mixed calculation optimization for pynative mode.
5946    - Deep Probabilistic Programming
5947        - Support statistical distributions classes used to generate stochastic tensors.
5948        - Support probabilistic inference algorithms.
5949        - Support BNN layers used to construct BNN in Graph mode.
5950        - Support interfaces for the transformation between BNN and DNN in Graph mode.
5951        - Support uncertainty estimation to estimate epistemic uncertainty and aleatoric uncertainty.
5952    - User interfaces change log
5953        - change base class of parameter([!3473](https://gitee.com/mindspore/mindspore/pulls/3473))
5954        - change binary to mindir([!4258](https://gitee.com/mindspore/mindspore/pulls/4258))
5955        - change export from geir to air([!4269](https://gitee.com/mindspore/mindspore/pulls/4269))
5956        - Init parameter data by default([!3967](https://gitee.com/mindspore/mindspore/pulls/3967))
5957        - change IndexedSlices to RowTensor([!4031](https://gitee.com/mindspore/mindspore/pulls/4031))
5958        - Must set or change parallel mode before any Initializer created([!4801](https://gitee.com/mindspore/mindspore/pulls/4801))
5959- Executor and performance optimization
5960    - MindSpore graph compilation process performance improved by 20%.
5961    - Decoupling C++ and Python modules to achieve separate compilation of core modules.
5962- Data processing, augmentation, and save format
5963    - Support automatic data augmentation
5964    - Support GNN distributed cache in single node
5965    - Support ConcatDataset using distributed sampler
5966
5967#### Other Hardware Support
5968
5969- GPU platform
5970    - New model supported: VGG16, ResNet101, DeepFM.
5971    - Support some distributed operators in ResNet50 and Wide&Deep.
5972    - Support automatic parallel for Wide&Deep.
5973    - Support function funcs[i](*inputs) (such as switch-case).
5974    - Support distributed training with parameter server.
5975    - Support GPU operator profiling.
5976    - Performance optimization of the distributed training with allreduce.
5977    - Performance optimization of the mixed precision training.
5978    - Performance optimization of the pynative mode.
5979    - Performance optimization of the convolution operator, batch normalization operator.
5980- CPU platform
5981    - Support MobileNetV2 Re-Training: Re-train the network with different class number.
5982
5983### MindSpore Lite
5984
5985- Converter
5986    - Support third-party models, including TFLite/Caffe/ONNX.
5987    - Add 93 TFLite op.
5988    - Add 24 Caffe op.
5989    - Add 62 ONNX op.
5990    - Add 11 optimized passes, include fusion/const fold.
5991    - Support aware-training and Post-training quantization.
5992- CPU
    - Add 100+ ops, supporting fp32, int8/uint8 and FP16 ops
5994    - Support fast convolution algorithms: Sliding Window, Img2col + Gemm, Strassen, Winograd
5995    - Support assembly/neon instruction.
5996    - Support CPU fp16 and sdot on ARM v8.2+.
5997- GPU
5998    - Add 20+ ops for OpenCL.
5999    - Support image2D/buffer format.
6000    - Optimize online initialization time.
    - Add optimized convolution 1x1/3x3/depthwise/convolution_transposed for OpenCL.
6002- Tool & example
6003    - Add benchmark and TimeProfile tools.
6004    - Add image classification Android Demo.
6005
6006## Bugfixes
6007
6008- Models
6009    - normalize the readme file([!5410](https://gitee.com/mindspore/mindspore/pulls/5410))
6010    - fix a sink_size bug for transformer([!5393](https://gitee.com/mindspore/mindspore/pulls/5393))
6011    - fix bool type optional for resnet50([!5363](https://gitee.com/mindspore/mindspore/pulls/5363))
6012- Python API
6013    - improve interface '__bool__' for tensor([!4000](https://gitee.com/mindspore/mindspore/pulls/4000))
6014    - fix GPU-ResizeNearestNeighbor([!3760](https://gitee.com/mindspore/mindspore/pulls/3760))
6015    - fix topK multi dimension grad func([!3711](https://gitee.com/mindspore/mindspore/pulls/3711))
6016    - fix scatterop error msg([!3699](https://gitee.com/mindspore/mindspore/pulls/3699))
6017    - fix bug of cast dtype when using mix_presion in pynative mode([!3730](https://gitee.com/mindspore/mindspore/pulls/3730))
6018- Executor
6019    - fix etsnet train error when UnsegmentSum's first input shape is (1,) ([!4573](https://gitee.com/mindspore/mindspore/pulls/4573))
    - fix bug of incorrect result in while control flow because value reference is not supported ([!4103](https://gitee.com/mindspore/mindspore/pulls/4103))
6021    - fix bug of the output tensor does not carry device data type ([!3774](https://gitee.com/mindspore/mindspore/pulls/3774))
6022    - fix bug of avoiding multi attr value are eliminated in pynative mode ([!4225](https://gitee.com/mindspore/mindspore/pulls/4225))
6023    - fix bug of AssignAdd unable to work normally in multi-cases ([!5171](https://gitee.com/mindspore/mindspore/pulls/5171))
6024- GPU platform
6025    - improve the environment variable checking for nvcc compiler path ([!5140](https://gitee.com/mindspore/mindspore/pulls/5140))
6026    - fix bug of error in cast operator conversion from fp16 to fp32 ([!4147](https://gitee.com/mindspore/mindspore/pulls/4147))
6027    - fix bug of the array out of bound in case of make_tuple operator ([!5219](https://gitee.com/mindspore/mindspore/pulls/5219))
6028- Data processing and Pro
6029    - fix GeneratorDataset time out([!3624](https://gitee.com/mindspore/mindspore/pulls/3624))
6030    - fix concat operator get_dataset_size error([!4701](https://gitee.com/mindspore/mindspore/pulls/4701))
6031    - fixing python validator for Repeat Op([!4366](https://gitee.com/mindspore/mindspore/pulls/4366))
6032- Third party
6033    - Sqlite : Update sqlite to 3.32.2 to handle [CVE-2020-11656](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-11656), [CVE-2020-13871](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-13871), [CVE-2020-11655](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-11655), [CVE-2020-9327](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-9327), [CVE-2020-13630](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-13630), [CVE-2020-15358](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-15358), [CVE-2020-13631](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-13631), [CVE-2020-13632](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-13632), [CVE-2020-13434](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-13434), [CVE-2020-13435](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-13435), and [CVE-2020-15358](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-11655).
6034    - Libjpeg-turbo : Update libjpeg-turbo to 2.0.4 to handle [CVE-2020-13790](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-13790).
6035
6036## Contributors
6037
6038Thanks goes to these wonderful people:
6039
6040Adel, Alexey, andy, andy_wangrui, anthonyaje, anzhengqi, askmiao, avakh, baihuawei, bingyaweng, BowenK, buxue, caifubi, CaoJian, caozhou, Cathy, changzherui, chenfei, chengxianbin, chenhaozhe, chenjianping, chentingting, chenzomi, chenzupeng, chujinjin, cjh9368, Corleone, cristoval, danish, dengyutao, eric, Eric, ervinzhang, etone-chan, fangzehua, fary86, fuzhiye, gengdongjie, genglishuai, Giancarlo, gongdaguo, gukecai, guohongzilong, GuoMengHao, hangq, hanhaocheng, hanhuifeng2020, hanjun996, Harshvardhan, He, heleiwang, hesham, hexia, Hoai, hongxing, huangdongrun, huanghui, huangxinjing, islam_amin, Jesse, jianghui58, jiangzhiwen, jin-xiulang, jinyaohui, jjfeing, John, Jonathan, jonyguo, kai00, kingfo, kpy, kswang, laiyongqiang, leilei_snow, leopz, Li, liangzelang, lianliguang, lichen_101010, lichenever, lihongkang, lilei, limingqi107, ling, lingyunli63, linqingke, lirongzhen1, liubuyu, liuwenhao4, liuxiao78, liuxiao93, liuzhongkai, Lixia, lixian, liyong, lizhenyu, looop5, luoyang, lvchangquan, lvliang, lvwenyuan, lyvette, mahdi, Mahdi, mamba_ni, maning202007, Margaret_wangrui, mayang, meixiaowei, meng_chunyang, ms_yan, nhussain, panbingao, panfengfeng, panyifeng, Payne, Peilin, peixu_ren, pengyongrong, Pengyongrong, qianlong, qujianwei, root, shenwei41, shibeiji, simson, songhonglei413, Su, sunsuodong, suteng, tao_yunhao, TFbunny, tinazhang, tom__chen, tony_liu2, tronzhang, VectorSL, wandongdong, wangdongxu, wanghua, wangmin, wangshaocong, wangzhe, wanyiming, Wei, wenchunjiang, wilfChen, WilliamLian, wsc, wukesong, wuweikang, wuxuejian, wuyongkang, xiefangqi, xuanyue, Xun, xutianchun, xuyongfei, yanghaitao, yangjie159, YangLuo, yangruoqi713, yangyongjie, yangzhenzhang, yankai, yao_yf, yelihua, yeyunpeng, Yi, yoni, yoonlee666, yuchaojie, yujianfeng, yuximiao, zhangxuetong, zhaizhiqiang, Zhang, zhangxinfeng3, zhangxuetong, zhangyihui, zhangz0911gm, zhanke, zhanyuan, zhaodezan, zhaoting, zhaozhenlong, zhengjun10, zhongligeng, zhoufeng, zhousiyi, zhouyaqiang, zhouyuanshen, Zichun, Zirui, zjun, zongha, ZPaC, lijiaqi, liangchenghui, wangminggui
6041
6042Contributions of any kind are welcome!
6043
6044# MindSpore 0.6.0-beta Release Notes
6045
6046## Major Features and Improvements
6047
6048### Ascend 910 Training and Inference Framework
6049
6050- New models
6051    - There are official, research and community under modelzoo.
        - Official is maintained with the newest APIs by the MindSpore team; MaskRCNN was added.
        - Research is uploaded by researchers for official review, and APIs may not be updated in time.
        - Community reprints the relevant links of partner research results.
    - Hub is added at the same level as modelzoo, synchronously storing the materials needed for the official hub web pages, which will be launched soon.
    - Support pre-trained models; a few lines of code can download and load a pre-trained model for inference or transfer learning.
6057- Frontend and user interface
6058    - Supports user side operator compilation and graph execution error rendering.
    - Unify dynamic learning rate behavior in optimizers.
    - Support IndexSlice in sparse expression.
    - Support using the parent class's construct method during construct.
    - Support saving checkpoint files asynchronously.
6063    - Support implicit type conversion in pynative mode.
6064    - User interfaces change log
        - uniform learning rate behavior in optimizers([!2755](https://gitee.com/mindspore/mindspore/pulls/2755))
6066        - rename operator of sparse optimizer([!3217](https://gitee.com/mindspore/mindspore/pulls/3217))
6067        - move profiler module from mindinsight to mindspore([!3075](https://gitee.com/mindspore/mindspore/pulls/3075))
6068        - VOCDataset output change to multi-columns([!3093](https://gitee.com/mindspore/mindspore/pulls/3093))
6069        - GetDatasize feature([!3212](https://gitee.com/mindspore/mindspore/pulls/3212))
6070        - dataset: modify config api([!2936](https://gitee.com/mindspore/mindspore/pulls/2936))
6071- Executor and performance optimization
    - Decouple C++ and Python to make the architecture more extensible.
6073    - Parameter Server for distributed deep learning supported.
6074    - Serving: a flexible service deployment framework for deep learning models.
6075    - Memory reuse is enhanced, and the batch size of Bert large model is increased from 96 to 160 on a single server.
6076- Data processing, augmentation, and save format
    - Support MindRecord save operator after data processing
6078    - Support automatic fusion operator, such as decode/resize/crop
6079    - Support CSV dataset loading
6080
6081### Other Hardware Support
6082
6083- GPU platform
6084    - New model supported: ResNext50, WarpCTC and GoogLeNet.
    - Support hyperparameter search and data augmentation AutoML on GPU.
6086    - Support Resnet50 automatic parallel in GPU backend.
6087
6088## Bugfixes
6089
6090- Models
6091    - Improved the performance and accuracy on ResNet50([!3456](https://gitee.com/mindspore/mindspore/pulls/3456))
6092    - Fixed the performance test case of bert([!3486](https://gitee.com/mindspore/mindspore/pulls/3486))
6093- Python API
6094    - Fix assign used in while loop([!2720](https://gitee.com/mindspore/mindspore/pulls/2720))
6095    - Revert optimize the graph output of all nop node.([!2857](https://gitee.com/mindspore/mindspore/pulls/2857))
6096    - Print tensor as numpy.([!2859](https://gitee.com/mindspore/mindspore/pulls/2859))
6097    - Support weight decay for sparse optimizer([!2668](https://gitee.com/mindspore/mindspore/pulls/2668))
6098    - Fix BatchToSpaceND([!2741](https://gitee.com/mindspore/mindspore/pulls/2741))
    - Fixing type check mistakes of InplaceAdd and InplaceSub ops([!2744](https://gitee.com/mindspore/mindspore/pulls/2744))
6100    - Change order param only equal to group param([!2748](https://gitee.com/mindspore/mindspore/pulls/2748))
6101- Executor
6102    - The performance of graph with control flow is optimized([!2931](https://gitee.com/mindspore/mindspore/pulls/2931))
6103    - Fix bug of wrong number of tuple layers([!3390](https://gitee.com/mindspore/mindspore/pulls/3390))
6104    - Fix cpu multi graph memory exception([!3631](https://gitee.com/mindspore/mindspore/pulls/3631))
6105    - Enable data sync when calling operator without defining a cell([!3081](https://gitee.com/mindspore/mindspore/pulls/3081))
6106    - Fix argmaxwith value error in pynative mode on GPU([!3082](https://gitee.com/mindspore/mindspore/pulls/3082))
6107    - Fix precision error with fp16 input on pynative mode([!3196](https://gitee.com/mindspore/mindspore/pulls/3196))
6108- Data processing
6109    - Fix bug of RandomColor and RandomSharpness default parameter checking  ([!2833](https://gitee.com/mindspore/mindspore/pulls/2833))
6110    - Fix process hung when training and eval  ([!3469](https://gitee.com/mindspore/mindspore/pulls/3469))
6111- Third party
6112    - Sqlite : Update sqlite to 3.32.2 to handle [CVE-2020-11656](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-11656), [CVE-2020-13871](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-13871), [CVE-2020-11655](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-11655), [CVE-2020-9327](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-9327), [CVE-2020-13630](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-13630), [CVE-2020-15358](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-15358), [CVE-2020-13631](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-13631), [CVE-2020-13632](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-13632), [CVE-2020-13434](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-13434), [CVE-2020-13435](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-13435), and [CVE-2020-15358](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-11655).
6113    - Libjpeg-turbo : Update libjpeg-turbo to 2.0.4 to handle [CVE-2020-13790](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-13790).
6114
6115## Contributors
6116
6117Thanks goes to these wonderful people:
6118
6119Alexey Shevlyakov, avakh, baihuawei, BowenK, buxue, caifubi, caojian05, Cathy Wong, changzherui, chenfei, chengxianbin, chenhaozhe, chenjianping, chentingting, chenzomi, chujinjin, Danish Farid, dayschan, dengwentao, dinghao, etone-chan, fangzehua, fary86, geekun, Giancarlo Colmenares, gong chen, gukecai, guohongzilong, hangangqiang, heleiwang, hesham, He Wei, hexia, hongxing, huangdongrun, huanghui, islam_amin, Jamie Nisbet, Jesse Lee, jiangjinsheng, jiangzhiwen, jinyaohui, jjfeing, jojobugfree, Jonathan Yan, jonyguo, Junhan Hu, Kang, kingfo, kouzhenzhong, kpy, kswang, laiyongqiang, leopz, liangzelang, lichenever, lihongkang, Li Hongzhang, lilei, limingqi107, lirongzhen1, liubuyu, liuchongming74, liuwenhao4, liuxiao, Lixia Chen, liyanliu, liyong, lizhenyu, lvliang, Mahdi, Margaret_wangrui, meixiaowei, ms_yan, nhussain, ougongchang, panfengfeng, panyifeng, peilinwang, Peilin Wang, pkuliuliu, qianlong, rick_sanchez, shibeiji, Shida He, shijianning, simson, sunsuodong, suteng, Tinazhang, Tron Zhang, unknown, VectorSL, wandongdong, wangcong, wangdongxu, wangdongxu6, wanghua, wangnan39, Wei Luning, wenchunjiang, wenkai, wilfChen, WilliamLian, wukesong, Xian Weizhao, Xiaoda Zhang, xiefangqi, xulei2020, xunxue, xutianchun, Yang, yanghaitao, yanghaitao1, yanghaoran, yangjie, yangjie159, YangLuo, Yanjun Peng, yankai, yanzhenxiang2020, yao_yf, Yi Huaijie, yoonlee666, yuchaojie, yujianfeng, zhangzhongpeng, zhangdengcheng, Zhang Qinghua, zhangyinxia, zhangz0911gm, zhaojichen, zhaoting, zhaozhenlong, zhoufeng, zhouneng, zhousiyi, Zirui Wu, Ziyan, zjun, ZPaC, lihongzhang, wangdongxu
6120
6121Contributions of any kind are welcome!
6122
6123# MindSpore 0.5.2-beta Release Notes
6124
6125## Major Features and Improvements
6126
6127### Ascend 910 Training and Inference Framework
6128
6129- New models
6130    - DenseNet121: a convolution based neural network for the task of image classification on ImageNet 2012 dataset.
6131
6132## Bugfixes
6133
6134- Models
6135    - VGG16,Alexnet,GoogleNet,optimize network for better performance. ([!5539](https://gitee.com/mindspore/mindspore/pulls/5539))
6136    - YOLOV3, fix yolov3_darknet53 dataset bug. ([!5658](https://gitee.com/mindspore/mindspore/pulls/5658))
6137
6138## Contributors
6139
6140Thanks goes to these wonderful people:
6141
6142Alexey Shevlyakov, avakh, baihuawei, BowenK, buxue, caifubi, caojian05, Cathy Wong, changzherui, chenfei, chengxianbin, chenhaozhe, chenjianping, chentingting, chenzomi, chujinjin, Danish Farid, dayschan, dengwentao, dinghao, etone-chan, fangzehua, fary86, geekun, Giancarlo Colmenares, gong chen, gukecai, guohongzilong, hangangqiang, heleiwang, hesham, He Wei, hexia, hongxing, huangdongrun, huanghui, islam_amin, Jamie Nisbet, Jesse Lee, jiangjinsheng, jiangzhiwen, jinyaohui, jjfeing, jojobugfree, Jonathan Yan, jonyguo, Junhan Hu, Kang, kingfo, kouzhenzhong, kpy, kswang, laiyongqiang, leopz, liangzelang, lichenever, lihongkang, Li Hongzhang, lilei, limingqi107, lirongzhen1, liubuyu, liuchongming74, liuwenhao4, liuxiao, Lixia Chen, liyanliu, liyong, lizhenyu, lvliang, Mahdi, Margaret_wangrui, meixiaowei, ms_yan, nhussain, ougongchang, panfengfeng, panyifeng, peilinwang, Peilin Wang, pkuliuliu, qianlong, rick_sanchez, shibeiji, Shida He, shijianning, simson, sunsuodong, suteng, Tinazhang, Tron Zhang, unknown, VectorSL, wandongdong, wangcong, wangdongxu, wangdongxu6, wanghua, wangnan39, Wei Luning, wenchunjiang, wenkai, wilfChen, WilliamLian, wukesong, Xian Weizhao, Xiaoda Zhang, xiefangqi, xulei2020, xunxue, xutianchun, Yang, yanghaitao, yanghaitao1, yanghaoran, yangjie, yangjie159, YangLuo, Yanjun Peng, yankai, yanzhenxiang2020, yao_yf, Yi Huaijie, yoonlee666, yuchaojie, yujianfeng, zhangzhongpeng, zhangdengcheng, Zhang Qinghua, zhangyinxia, zhangz0911gm, zhaojichen, zhaoting, zhaozhenlong, zhoufeng, zhouneng, zhousiyi, Zirui Wu, Ziyan, zjun, ZPaC, lihongzhang, wangdongxu
6143
6144Contributions of any kind are welcome!
6145
6146# MindSpore 0.5.0-beta Release Notes
6147
6148## Major Features and Improvements
6149
6150### Ascend 910 Training and Inference Framework
6151
6152- New models
    - ResNext50: a simple, highly modularized network architecture using aggregated residual transformations for image classification on ImageNet 2012 dataset.
6154    - MASS: a pre-training method for sequence to sequence based language generation tasks on Text Summarization and Conversational Response Generation using News Crawls 2007-2017 dataset, Gigaword corpus and Cornell movie dialog corpus.
6155    - Transformer: a neural network architecture for language understanding on WMT 2014 English-German dataset.
6156    - GCN: Graph Convolutional Networks for the task of classification of nodes in a graph on Cora and Citeseer datasets.
6157    - GAT: an attention-based graph neural network for node classification on Cora and CiteSeer dataset.
6158- Frontend and user interface
6159    - Support tensor value and assignment of mixed tensor index in graph mode.
6160    - Support tensor comparison, len operator, constexpr syntax, value and assignment of tensor index in pynative mode.
6161    - Support converting MindSpore IR to pb format for infer model.
6162    - Support print operator to write data directly on the hard disk.
6163    - Add the double recursive programming solution for very high speed parallel strategy search in automatic parallel.
6164    - User interfaces change log
6165        - Allow the learning rate of AdamWeightDecayDynamicLR and Lamb to be 0([!1826](https://gitee.com/mindspore/mindspore/pulls/1826))
6166        - Restricting the entire network input parameter is Tensor([!1967](https://gitee.com/mindspore/mindspore/pulls/1967))
6167        - Turn shape and dtype into attributes instead of interfaces([!1919](https://gitee.com/mindspore/mindspore/pulls/1919))
6168        - Delete multitypefungraph([!2116](https://gitee.com/mindspore/mindspore/pulls/2116))
        - Refactor the callback module in an encapsulated way, use _CallbackManager instead of _build_callbacks([!2236](https://gitee.com/mindspore/mindspore/pulls/2236))
6170        - Delete EmbeddingLookup([!2163](https://gitee.com/mindspore/mindspore/pulls/2163))
6171        - Checkpoint add model_type([!2517](https://gitee.com/mindspore/mindspore/pulls/2517))
6172- Executor and performance optimization
6173    - Heterogeneous execution on CPU and Ascend devices supported, and is verified in Wide&Deep model.
    - Quantitative training of MobileNetV2, LeNet and ResNet50 on Ascend 910 is supported.
6175    - Support new fusion architecture, which can do fusion optimization across graphs and kernels to improve execution speed.
6176- Data processing, augmentation, and save format
6177    - Support data processing pipeline performance profiling.
6178    - Support public dataset loading, such as CLUE and Coco.
6179    - Support more text processing, such as more tokenizers and vocab data.
6180    - Support MindRecord padded data.
6181
6182### Other Hardware Support
6183
6184- GPU platform
6185    - New model supported: Bert / Wide&Deep.
6186    - Support setting max device memory.
6187- CPU platform
6188    - New model supported: LSTM.
6189
6190## Bugfixes
6191
6192- Models
6193    - Bert, Move Bert from `example` to `model_zoo`, optimize network for better performance. ([!1902](https://gitee.com/mindspore/mindspore/pulls/1902))
6194    - VGG16, Move VGG16 from `example` to `model_zoo`, optimize network for better accuracy. ([!2645](https://gitee.com/mindspore/mindspore/pulls/2645))
6195    - Alexnet, modify parameter setting to improve accuracy ([!1364](https://gitee.com/mindspore/mindspore/pulls/2370))
6196    - Wide&Deep, Move Wide&Deep from `example` to `model_zoo`, optimize network for better performance. ([!2221](https://gitee.com/mindspore/mindspore/pulls/2221))
6197- Python API
6198    - Fix bug in auto cast([!1766](https://gitee.com/mindspore/mindspore/pulls/1766))
6199    - Fix bug of register_backward_hook([!2148](https://gitee.com/mindspore/mindspore/pulls/2148))
6200    - Fix bug of tuple args in pynative mode([!1878](https://gitee.com/mindspore/mindspore/pulls/1878))
6201    - Fix bug of checking numbers of arguments and graph parameters([!1701](https://gitee.com/mindspore/mindspore/pulls/1701))
6202- Executor
6203    - Fix bug of loading input data repeatedly in pynative mode([!1966](https://gitee.com/mindspore/mindspore/pulls/1966))
6204    - Fix bug of list cannot be used as input in pynative mode([!1765](https://gitee.com/mindspore/mindspore/pulls/1765))
6205    - Fix bug of kernel select ([!2103](https://gitee.com/mindspore/mindspore/pulls/2103))
6206    - Fix bug of pattern matching for batchnorm fusion in the case of auto mix precision.([!1851](https://gitee.com/mindspore/mindspore/pulls/1851))
6207    - Fix bug of generate hccl's kernel info.([!2393](https://gitee.com/mindspore/mindspore/pulls/2393))
6208- GPU platform
6209    - Fix bug of summary feature invalid([!2173](https://gitee.com/mindspore/mindspore/pulls/2173))
6210- Data processing
6211    - Fix bug of Cifar dataset reading([!2096](https://gitee.com/mindspore/mindspore/pulls/2096))
6212    - Fix bug of C++ behavior in RandomCropAndResize([!2026](https://gitee.com/mindspore/mindspore/pulls/2026))
6213    - Fix the bug of mindrecord shuffle([!2420](https://gitee.com/mindspore/mindspore/pulls/2420))
6214- Third party
6215    - Sqlite : Update sqlite to 3.32.2 to handle [CVE-2020-11656](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-11656), [CVE-2020-13871](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-13871), [CVE-2020-11655](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-11655), [CVE-2020-9327](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-9327), [CVE-2020-13630](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-13630), [CVE-2020-15358](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-15358), [CVE-2020-13631](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-13631), [CVE-2020-13632](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-13632), [CVE-2020-13434](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-13434), [CVE-2020-13435](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-13435), and [CVE-2020-15358](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-11655).
6216
6217## Contributors
6218
6219Thanks goes to these wonderful people:
6220
6221Alexey Shevlyakov, avakh, baihuawei, BowenK, buxue, caifubi, caojian05, Cathy Wong, changzherui, chenfei, chengxianbin, chenhaozhe, chenjianping, chentingting, chenzomi, chujinjin, Danish Farid, dayschan, dengwentao, dinghao, etone-chan, fangzehua, fary86, geekun, Giancarlo Colmenares, gong chen, gukecai, guohongzilong, hangangqiang, heleiwang, hesham, He Wei, hexia, hongxing, huangdongrun, huanghui, islam_amin, Jamie Nisbet, Jesse Lee, jiangjinsheng, jiangzhiwen, jinyaohui, jjfeing, jojobugfree, Jonathan Yan, jonyguo, Junhan Hu, Kang, kingfo, kouzhenzhong, kpy, kswang, laiyongqiang, leopz, liangzelang, lichenever, lihongkang, Li Hongzhang, lilei, limingqi107, lirongzhen1, liubuyu, liuchongming74, liuwenhao4, liuxiao, Lixia Chen, liyanliu, liyong, lizhenyu, lvliang, Mahdi, Margaret_wangrui, meixiaowei, ms_yan, nhussain, ougongchang, panfengfeng, panyifeng, peilinwang, Peilin Wang, pkuliuliu, qianlong, rick_sanchez, shibeiji, Shida He, shijianning, simson, sunsuodong, suteng, Tinazhang, Tron Zhang, unknown, VectorSL, wandongdong, wangcong, wangdongxu, wangdongxu6, wanghua, wangnan39, Wei Luning, wenchunjiang, wenkai, wilfChen, WilliamLian, wukesong, Xian Weizhao, Xiaoda Zhang, xiefangqi, xulei2020, xunxue, xutianchun, Yang, yanghaitao, yanghaitao1, yanghaoran, yangjie, yangjie159, YangLuo, Yanjun Peng, yankai, yanzhenxiang2020, yao_yf, Yi Huaijie, yoonlee666, yuchaojie, yujianfeng, zhangzhongpeng, zhangdengcheng, Zhang Qinghua, zhangyinxia, zhangz0911gm, zhaojichen, zhaoting, zhaozhenlong, zhoufeng, zhouneng, zhousiyi, Zirui Wu, Ziyan, zjun, ZPaC, lihongzhang, wangdongxu

Contributions of any kind are welcome!

# MindSpore 0.3.1-alpha Release Notes

## Major Features and Improvements

### Ascend 910 Training and Inference Framework

- Frontend and User Interface
    - Independent model init interface.
- Data processing, augmentation, and save format
    - Support sample padding for MindDataset.

## Bugfixes

- Python API
    - Fix bugs in the LARS optimizer([!1894](https://gitee.com/mindspore/mindspore/pulls/1894))
- Data processing
    - Fix accuracy problem of RandomCropDecodeResize ([!2340](https://gitee.com/mindspore/mindspore/pulls/2340))
# MindSpore 0.3.0-alpha Release Notes

## Major Features and Improvements

### Ascend 910 Training and Inference Framework

- New models
    - DeepFM: a factorization-machine based neural network for CTR prediction on the Criteo dataset.
    - DeepLabV3: significantly improves over our previous DeepLab versions without DenseCRF post-processing and attains comparable performance with other state-of-the-art models on the PASCAL VOC 2012 semantic image segmentation benchmark.
    - Faster-RCNN: towards real-time object detection with region proposal networks on the COCO 2017 dataset.
    - SSD: a single-stage object detection method on the COCO 2017 dataset.
    - GoogLeNet: a deep convolutional neural network architecture codenamed Inception V1 for classification and detection on the CIFAR-10 dataset.
    - Wide&Deep: jointly trained wide linear models and deep neural networks for recommender systems on the Criteo dataset.
- Frontend and User Interface
    - Complete NumPy advanced indexing methods. Support value retrieval and assignment through tensor indexing.
    - Some optimizers support separating parameter groups, so different parameter groups can set different `learning_rate` and `weight_decay` values (see the sketch after this list).
    - Support setting a submodule's logging level independently, e.g. you can set the logging level of module `A` to warning and that of module `B` to info.
    - Support compiling weights according to shape to solve the problem of large memory overhead.
    - Add implementations of some operators and syntax support in PyNative mode to keep it consistent with graph mode.
    - User interfaces change log
        - Learning rate and weight decay making group params([!637](https://gitee.com/mindspore/mindspore/pulls/637))
        - Support weights to be compiled according to shape([!1015](https://gitee.com/mindspore/mindspore/pulls/1015))
        - Delete some context params([!1100](https://gitee.com/mindspore/mindspore/pulls/1100))
        - ImageSummary/ScalarSummary/TensorSummary/HistogramSummary([!1329](https://gitee.com/mindspore/mindspore/pulls/1329))([!1425](https://gitee.com/mindspore/mindspore/pulls/1425))
- Executor and Performance Optimization
    - Support evaluation during the training process, so that training accuracy can be easily obtained.
    - Enable second-order optimization for ResNet-50, which can achieve 75.9% accuracy in 45 epochs (ResNet-50 @ ImageNet).
    - Optimize the PyNative implementation and improve its execution performance.
    - Optimize the summary record implementation and improve its performance.
- Data processing, augmentation, and save format
    - Support simple text processing, such as tokenizer/buildvocab/lookup.
    - Support padding batch.
    - Support split or concat dataset.
    - Support MindDataset reading from a file list.
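
Below is a minimal sketch of the grouped-parameter usage mentioned above. The `nn.Dense` stand-in network, the bias-based grouping rule, and the hyper-parameter values are illustrative only; the dictionary keys (`params`, `lr`, `weight_decay`) follow the documented optimizer API.

```python
import mindspore.nn as nn

net = nn.Dense(16, 10)  # stand-in for any nn.Cell

# Split the trainable parameters into groups, each with its own hyper-parameters.
decay_params = [p for p in net.trainable_params() if "bias" not in p.name]
no_decay_params = [p for p in net.trainable_params() if "bias" in p.name]

group_params = [
    {"params": decay_params, "weight_decay": 1e-4, "lr": 0.05},
    {"params": no_decay_params, "lr": 0.1},  # falls back to the default weight_decay below
]

# The group list is passed where a flat parameter list would normally go.
opt = nn.Momentum(group_params, learning_rate=0.1, momentum=0.9, weight_decay=0.0)
```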

### Other Hardware Support

- GPU platform
    - New models supported: MobileNetV2, MobileNetV3.
    - Support mixed precision training (see the sketch after this list).
    - Support device memory swapping.
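
Below is a minimal sketch of mixed precision training through the high-level `Model` wrapper. The stand-in network, loss, and optimizer are placeholders, and the `amp_level` semantics in the comment follow later MindSpore documentation rather than this release specifically.

```python
import mindspore.nn as nn
from mindspore import Model

net = nn.Dense(16, 10)                                   # stand-in for a real network
loss = nn.SoftmaxCrossEntropyWithLogits(sparse=True)
opt = nn.Momentum(net.trainable_params(), learning_rate=0.1, momentum=0.9)

# amp_level="O2" keeps BatchNorm layers in FP32 and casts the rest to FP16;
# "O3" casts the whole network. model.train(...) then runs with mixed precision.
model = Model(net, loss_fn=loss, optimizer=opt, amp_level="O2")
```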

## Bugfixes

- Python API
    - An exception to the broadcast input data type check([!712](https://gitee.com/mindspore/mindspore/pulls/712))
    - Fix issue of AssignSub returning value 0([!1036](https://gitee.com/mindspore/mindspore/pulls/1036))
    - Fix issue that Conv2dBackpropInput bprop should return 3 instead of 2 items([!1001](https://gitee.com/mindspore/mindspore/pulls/1001))
    - Fix sens shape error of TrainOneStepWithLossScaleCell([!1050](https://gitee.com/mindspore/mindspore/pulls/1050))
    - Fix BatchNormGrad operator([!1344](https://gitee.com/mindspore/mindspore/pulls/1344))
- Executor
    - Fix Dropout, TopK, and AddN errors in PyNative mode ([!1285](https://gitee.com/mindspore/mindspore/pulls/1285), [!1138](https://gitee.com/mindspore/mindspore/pulls/1138), [!1033](https://gitee.com/mindspore/mindspore/pulls/1033)).
    - Fix memory leaks after execution in PyNative mode ([!1201](https://gitee.com/mindspore/mindspore/pulls/1201)).
    - Fix HCCL failure in some special scenes ([!1204](https://gitee.com/mindspore/mindspore/pulls/1204), [!1252](https://gitee.com/mindspore/mindspore/pulls/1252)).
    - Fix SSD network failure when kernel selection fails and kernel info cannot be found([!1449](https://gitee.com/mindspore/mindspore/pulls/1449)).
    - Fix TopK operator selection strategy bug between AICore and AICPU([!1367](https://gitee.com/mindspore/mindspore/pulls/1367)).
    - Fix unequal input memory size of the 'assign' op in control sink mode when assigning data from one child graph to another([!802](https://gitee.com/mindspore/mindspore/pulls/802)).
    - Fix AllReduce IR inconsistency([!989](https://gitee.com/mindspore/mindspore/pulls/989)).
- GPU platform
    - Fix summary for gradient collection ([!1364](https://gitee.com/mindspore/mindspore/pulls/1364))
    - Fix the slice operator ([!1489](https://gitee.com/mindspore/mindspore/pulls/1489))
- Data processing
    - Fix memory problems of GeneratorDataset in sub-process mode ([!907](https://gitee.com/mindspore/mindspore/pulls/907))
    - Fix data fetching timeout when training LeNet on the CIFAR-10 dataset([!1391](https://gitee.com/mindspore/mindspore/pulls/1391))

## Contributors

Thanks goes to these wonderful people:

Alexey Shevlyakov, Amir Lashkari, anthony, baihuawei, biffex, buxue, caifubi, candanzg, caojian05, Cathy Wong, changzherui, chenfei, chengxianbin, chenhaozhe, chenzomi, chujinjin, cristoval, dengwentao, eric, etone-chan, fary86, gaojing, gengdongjie, gongchen, guohongzilong, guozhijian, heleiwang, hesham, He Wei, Hoai Linh Tran, hongxing, huangdongrun, huanghui, Jamie Nisbet, Jesse Lee, jiangjinsheng, jiangzhiwen, jinyaohui, jjfeing, jonwe, jonyguo, Junhan Hu, Kang, kingfo, kswang, laiyongqiang, leopz, lichenever, lihongkang, limingqi107, liubuyu, liuliyan2, liuwenhao4, liuxiao, liuxiao, liyong, lizhenyu, lvliang, Margaret_wangrui, meixiaowei, ms_yan, Nat Sutyanyong, ougongchang, panfengfeng, panyifeng, Peilin Wang, peixu_ren, qianlong, rick_sanchez, seatea, sheng, shijianning, simson, sunsuodong, Tinazhang, VectorSL, wandongdong, wangcong, wanghua, wangnan39, Wei Luning, wenchunjiang, wilfChen, WilliamLian, wsc, wukesong, wuxuejian, Xiaoda Zhang, xiefangqi, xulei2020, Yang, yangjie159, yangruoqi713, yangyongjie, yangzhenzhang, Yanjun Peng, yanzhenxiang2020, yao_yf, Yi Huaijie, yoonlee666, yujianfeng, YuJianfeng, yvetteliu, zhangdengcheng, Zhang Qinghua, zhangz0911gm, zhaojichen, zhaoting, zhaozhenlong, zhoufeng, zhouneng, zhousiyi, zhouyuanshen, Zirui Wu, Ziyan, zjun, ZPaC, lihongzhang

Contributions of any kind are welcome!

# MindSpore 0.2.0-alpha Release Notes

## Major Features and Improvements

### Ascend 910 Training and Inference Framework

- New models
    - MobileNetV2: Inverted Residuals and Linear Bottlenecks.
    - ResNet101: Deep Residual Learning for Image Recognition.

- Frontend and User Interface
    - Support for all Python comparison operators.
    - Support for the math operators `**`, `//`, and `%`. Support for other Python operators such as `and`/`or`/`not`/`is`/`is not`/`in`/`not in`.
    - Support for gradients of functions with variable arguments.
    - Support for tensor indexing assignment for certain indexing types.
    - Support for dynamic learning rate (see the sketch after this list).
    - User interfaces change log
        - DepthwiseConv2dNative, DepthwiseConv2dNativeBackpropFilter, DepthwiseConv2dNativeBackpropInput([!424](https://gitee.com/mindspore/mindspore/pulls/424))
        - ReLU6, ReLU6Grad([!224](https://gitee.com/mindspore/mindspore/pulls/224))
        - GeneratorDataset([!183](https://gitee.com/mindspore/mindspore/pulls/183))
        - VOCDataset([!477](https://gitee.com/mindspore/mindspore/pulls/477))
        - MindDataset, PKSampler([!514](https://gitee.com/mindspore/mindspore/pulls/514))
        - map([!506](https://gitee.com/mindspore/mindspore/pulls/506))
        - Conv([!226](https://gitee.com/mindspore/mindspore/pulls/226))
        - Adam([!253](https://gitee.com/mindspore/mindspore/pulls/253))
        - _set_fusion_strategy_by_idx, _set_fusion_strategy_by_size([!189](https://gitee.com/mindspore/mindspore/pulls/189))
        - CheckpointConfig([!122](https://gitee.com/mindspore/mindspore/pulls/122))
        - Constant([!54](https://gitee.com/mindspore/mindspore/pulls/54))
- Executor and Performance Optimization
    - Support parallel execution of data prefetching and forward/backward computing.
    - Support parallel execution of gradient aggregation and forward/backward computing in distributed training scenarios.
    - Support operator fusion optimization.
    - Optimize the compilation process and improve the performance.
- Data processing, augmentation, and save format
    - Support multi-processing of GeneratorDataset/PyFunc for high performance.
    - Support variable batch size.
    - Support new dataset operators, such as filter, skip, take, and TextLineDataset.
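
Below is a minimal sketch of the dynamic learning rate support mentioned above, assuming the documented optimizer behavior that an iterable (or 1-D Tensor) of values is treated as a per-step schedule; the stand-in network and the concrete schedule are placeholders.

```python
import mindspore.nn as nn

net = nn.Dense(16, 10)  # stand-in for a real network

# One learning-rate value per training step: 0.1, decayed 10x every 1000 steps.
lr_each_step = [0.1 * (0.1 ** (step // 1000)) for step in range(3000)]

# Passing a list (or 1-D Tensor) instead of a float makes the optimizer use
# the i-th value at the i-th step.
opt = nn.Momentum(net.trainable_params(), learning_rate=lr_each_step, momentum=0.9)
```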

### Other Hardware Support

- GPU platform
    - Use a dynamic memory pool by default on GPU.
    - Support parallel execution of computation and communication.
    - Support continuous address allocation by the memory pool.
- CPU platform
    - Support for Windows 10 OS.

## Bugfixes

- Models
    - Fix mixed precision bug for the VGG16 model ([!629](https://gitee.com/mindspore/mindspore/pulls/629)).
- Python API
    - Fix ControlDepend operator bugs on CPU and GPU ([!396](https://gitee.com/mindspore/mindspore/pulls/396)).
    - Fix ArgMinWithValue operator bugs ([!338](https://gitee.com/mindspore/mindspore/pulls/338)).
    - Fix Dense operator bugs in PyNative mode ([!276](https://gitee.com/mindspore/mindspore/pulls/276)).
    - Fix MatMul operator bugs in PyNative mode ([!288](https://gitee.com/mindspore/mindspore/pulls/288)).
- Executor
    - Fix operator selection bugs and make it general ([!300](https://gitee.com/mindspore/mindspore/pulls/300)).
    - Fix memory reuse bug for the GetNext op ([!291](https://gitee.com/mindspore/mindspore/pulls/291)).
- GPU platform
    - Fix memory allocation in multi-graph scenarios ([!444](https://gitee.com/mindspore/mindspore/pulls/444)).
    - Fix bias_add_grad under fp16 precision ([!598](https://gitee.com/mindspore/mindspore/pulls/598)).
    - Fix support for fp16 kernels on NVIDIA 1080Ti ([!571](https://gitee.com/mindspore/mindspore/pulls/571)).
    - Fix parsing of tuple type parameters ([!316](https://gitee.com/mindspore/mindspore/pulls/316)).
- Data processing
    - Fix TypeError about not being able to pickle mindspore._c_dataengine.DEPipeline objects ([!434](https://gitee.com/mindspore/mindspore/pulls/434)).
    - Add TFRecord file verification ([!406](https://gitee.com/mindspore/mindspore/pulls/406)).

## Contributors

Thanks goes to these wonderful people:

Alexey_Shevlyakov, Cathy, Chong, Hoai, Jonathan, Junhan, JunhanHu, Peilin, SanjayChan, StrawNoBerry, VectorSL, Wei, WeibiaoYu, Xiaoda, Yanjun, YuJianfeng, ZPaC, Zhang, ZhangQinghua, ZiruiWu, amongo, anthonyaje, anzhengqi, biffex, caifubi, candanzg, caojian05, casgj, cathwong, ch-l, chang, changzherui, chenfei, chengang, chenhaozhe, chenjianping, chentingting, chenzomi, chujinjin, dengwentao, dinghao, fanglei, fary86, flywind, gaojing, geekun, gengdongjie, ghzl, gong, gongchen, gukecai, guohongzilong, guozhijian, gziyan, h.farahat, hesham, huangdongrun, huanghui, jiangzhiwen, jinyaohui, jjfeing, jojobugfree, jonathan_yan, jonyguo, jzw, kingfo, kisnwang, laiyongqiang, leonwanghui, lianliguang, lichen, lichenever, limingqi107, liubuyu, liuxiao, liyong, liyong126, lizhenyu, lupengcheng, lvliang, maoweiyong, ms_yan, mxm, ougongchang, panfengfeng, panyifeng, pengyanjun, penn, qianlong, seatea, simson, suteng, thlinh, vlne-v1, wangchengke, wanghua, wangnan39, wangqiuliang, wenchunjiang, wenkai, wukesong, xiefangqi, xulei, yanghaitao, yanghaoran, yangjie159, yangzhenzhang, yankai10, yanzhenxiang2020, yao_yf, yoonlee666, zhangbuxue, zhangz0911gm, zhangzheng, zhaojichen, zhaoting, zhaozhenlong, zhongligeng, zhoufeng, zhousiyi, zjun, zyli2020, yuhuijun, limingqi107, lizhenyu, chenweifeng.

Contributions of any kind are welcome!

# MindSpore 0.1.0-alpha Release Notes

## Main Features

### Ascend 910 Training and Inference Framework

- Recommended OS: Ubuntu 16.04 (or later) or EulerOS 2.5 or EulerOS 2.8
- Python version: 3.7.5
- Preset models
    - ResNet-50: residual structure-based convolutional neural network (CNN) for image classification, which is widely used.
    - AlexNet: classic CNN for image classification, achieving historical results in ImageNet LSVRC-2012.
    - LeNet: classic CNN for image classification, which was proposed by Yann LeCun.
    - VGG16: classic CNN for image classification, which was proposed by Oxford Visual Geometry Group.
    - YoloV3: real-time object detection network.
    - NEZHA: BERT-based Chinese pre-training network produced by Huawei Noah's Ark Laboratory.
- Execution modes
    - Graph mode: provides graph optimization methods such as memory overcommitment, IR fusion, and buffer fusion to achieve optimal execution performance.
    - PyNative mode: single-step execution mode, facilitating process debugging.
- Debugging capability and methods
    - Save CheckPoints and Summary data during training.
    - Support asynchronous printing.
    - Dump the computing data.
    - Support profiling analysis of the execution process performance.
- Distributed execution
    - Support AllReduce, AllGather, and Broadcast collective communication.
    - AllReduce data parallel: Each device obtains different training data, which accelerates the overall training process.
    - Collective communication-based layerwise parallel: Models are divided and allocated to different devices to solve the problem of insufficient memory for large model processing and improve the training speed.
    - Automatic parallel mode: The better of the data and model parallel modes can be predicted based on the cost model. It is recommended that this mode be used on ResNet series networks.
- Automatic differentiation
    - Implement automatic differentiation based on source-to-source transformation.
    - Support distributed scenarios and automatic insertion of reverse communication operators.
- Data processing, augmentation, and save format
    - Load common datasets such as ImageNet, MNIST, CIFAR-10, and CIFAR-100.
    - Support common data loading pipeline operations, such as shuffle, repeat, batch, map, and sampler.
    - Provide basic operator libraries to cover common CV scenarios.
    - Support users to customize Python data augmentation operators through the PyFunc mechanism.
    - Support the access of user-defined datasets through the GeneratorDataset mechanism (see the sketch after this list).
    - Provide the MindSpore data format, data aggregation and storage, random access example, data partition, efficient parallel read, user-defined index, and dataset search.
    - Convert user datasets to the MindSpore data format.
    - After data processing and augmentation, provide training applications in feed and graph modes.
- FP32/16 mixed precision computation, supporting automatic and manual configuration
- Provide common operators such as nn, math, and array, which can be customized.
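
Below is a minimal sketch of loading a user-defined dataset through GeneratorDataset and chaining the common pipeline operations listed above. The generator, column names, and parameter spellings follow the current `mindspore.dataset` API and are illustrative only.

```python
import numpy as np
import mindspore.dataset as ds

def my_generator():
    """Yield (image, label) pairs; any Python generator or iterable works."""
    for i in range(100):
        yield np.random.randn(32, 32, 3).astype(np.float32), np.array(i % 10, dtype=np.int32)

dataset = ds.GeneratorDataset(my_generator, column_names=["image", "label"])

# Common pipeline operations: map (here a Python callable, i.e. a PyFunc),
# shuffle, batch, and repeat.
dataset = dataset.map(operations=lambda img: img / 255.0, input_columns=["image"])
dataset = dataset.shuffle(buffer_size=10)
dataset = dataset.batch(8)
dataset = dataset.repeat(2)

for batch in dataset.create_dict_iterator():
    pass  # each batch is a dict with "image" and "label" Tensors
```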

### Inference Deployment

- Deploy models in MindSpore format on the Ascend 310 platform for inference.
- Save models in ONNX format (see the sketch after this list).
- Support saving models in LITE format and running models based on the lightweight inference framework.
    - Recommended OS: Android 4.3 or later
    - Supported network type: LeNet
    - Provide the generalization operators generated by TVM and operators generated after specific networks are tuned.
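
Below is a minimal sketch of exporting a network to ONNX. It uses the `mindspore.export` interface from current releases (the 0.1.0-alpha era exposed export under `mindspore.train.serialization`, so names may differ); the stand-in network and file name are placeholders.

```python
import numpy as np
import mindspore as ms
import mindspore.nn as nn

net = nn.Dense(16, 10)                                   # stand-in for a trained network
dummy_input = ms.Tensor(np.ones((1, 16), dtype=np.float32))

# Export the graph with a sample input that fixes the input shapes;
# this writes dense_net.onnx for ONNX-compatible inference runtimes.
ms.export(net, dummy_input, file_name="dense_net", file_format="ONNX")
```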

### Other Hardware Support

- GPU platform training
    - Recommended OS: Ubuntu 16.04
    - CUDA version: 9.2 or 10.1
    - cuDNN version: 7.6 or later
    - Python version: 3.7.5
    - NCCL version: 2.4.8-1
    - OpenMPI version: 3.1.5
    - Supported models: AlexNet, LeNet, and LSTM
    - Supported datasets: MNIST and CIFAR-10
    - Support data parallelism.
- CPU platform training
    - Recommended OS: Ubuntu 16.04
    - Python version: 3.7.5
    - Supported model: LeNet
    - Supported dataset: MNIST
    - Provide only the stand-alone operation version.

## Peripherals and Tools

- [MindSpore Official Website](https://www.mindspore.cn/)
- [MindInsight Visualization Debugging and Optimization](https://gitee.com/mindspore/mindinsight)
- [MindArmour Model Security Hardening Package](https://gitee.com/mindspore/mindarmour)
- [GraphEngine Computational Graph Engine](https://gitee.com/mindspore/graphengine)
