
pytorch/pytorch v1.3.0

Released: 2019-10-11 01:26:52



Breaking Changes

Type Promotion: Mixed dtype operations may return a different dtype and value than in previous versions. (22273, 26981)

Previous versions of PyTorch supported a limited number of mixed dtype operations. These operations could result in loss of precision by, for example, truncating floating-point zero-dimensional tensors or Python numbers.

In version 1.3, PyTorch supports NumPy-style type promotion (with slightly modified rules; see the full documentation). These rules will generally retain precision and be less surprising to users.

Version 1.2:

>>> torch.tensor(1) + 2.5
tensor(3)
>>> torch.tensor([1]) + torch.tensor(2.5)
tensor([3])
>>> torch.tensor(True) + 5
tensor(True)

Version 1.3:

>>> torch.tensor(1) + 2.5
tensor(3.5000)
>>> torch.tensor([1]) + torch.tensor(2.5)
tensor([3.5000])
>>> torch.tensor(True) + 5
tensor(6)

Type Promotion: in-place operations whose result_type is a lower dtype category (bool < integer < floating-point) than the in-place operand now throw an Error. (22273, 26981)

Version 1.2:

>>> int_tensor = torch.tensor(1)
>>> int_tensor.add_(1.5)
tensor(2)
>>> bool_tensor = torch.tensor(True)
>>> bool_tensor.add_(5)
tensor(True)

Version 1.3:

>>> int_tensor = torch.tensor(1)
>>> int_tensor.add_(1.5)
RuntimeError: result type Float cannot be cast to the desired output type Long
>>> bool_tensor = torch.tensor(True)
>>> bool_tensor.add_(5)
RuntimeError: result type Long cannot be cast to the desired output type Bool

These rules can be checked at runtime via torch.can_cast.
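For example (the REPL below mirrors the documented casting rules; floating-point types cannot be cast down to integral types):

>>> torch.can_cast(torch.double, torch.float)
True
>>> torch.can_cast(torch.float, torch.int)
False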

torch.flatten: 0-dimensional inputs now return a 1-dim tensor. (25406).

Version 1.2:

>>> torch.flatten(torch.tensor(0))
tensor(0)

Version 1.3:

>>> torch.flatten(torch.tensor(0))
tensor([0])

nn.functional.affine_grid: when align_corners = True, changed the behavior of 2D affine transforms on 1D data and 3D affine transforms on 2D data (i.e., when one of the spatial dimensions has unit size).

Previously, all grid points along a unit dimension were considered arbitrarily to be at -1, now they are considered to be at 0 (the center of the input image).

torch.gels: removed deprecated operator, use torch.lstsq instead. (26480).

utils.data.DataLoader: made a number of Iterator attributes private (e.g. num_workers, pin_memory). (22273)

[C++] Variable::backward will no longer implicitly create a gradient for non-1-element Variables. Previously, a gradient tensor of all 1s would be implicitly created. This behavior now matches the Python API. (26150)

auto x = torch::randn({5, 5}, torch::requires_grad());
auto y = x * x;
y.backward();
// ERROR: "grad can be implicitly created only for scalar outputs"
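To keep the previous behavior, pass the gradient explicitly (a sketch; torch::ones_like reproduces the all-1s tensor that earlier versions created implicitly):

auto grad = torch::ones_like(y);
y.backward(grad);
// or reduce to a scalar first: y.sum().backward();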

[C++] All option specifiers (e.g. GRUOptions::bidirectional_) are now private, use the function variants (GRUOptions::bidirectional(...)) instead. (26419).

Highlights

[Experimental]: Mobile Support

In PyTorch 1.3, we are launching experimental support for mobile. You can now run any TorchScript model directly, with no conversion step.

We decided not to create a new framework for mobile: you use the same APIs you are already familiar with to run the same TorchScript models on Android/iOS devices, without any format conversion. This gives you the shortest path from research ideas to production-ready mobile apps.

The tutorials, demo apps and download links for prebuilt libraries can be found at: https://pytorch.org/mobile/

This is an experimental release. We are working on other features like customized builds to make PyTorch smaller, faster and better for your specific use cases. Stay tuned and give us your feedback!

[Experimental]: Named Tensor Support

Named Tensors aim to make tensors easier to use by allowing users to associate explicit names with tensor dimensions. In most cases, operations that take dimension parameters will accept dimension names, avoiding the need to track dimensions by position. In addition, named tensors use names to automatically check that APIs are being used correctly at runtime, providing extra safety. Names can also be used to rearrange dimensions, for example, to support "broadcasting by name" rather than "broadcasting by position".

Create a named tensor by passing a names argument into most tensor factory functions.

>>> tensor = torch.zeros(2, 3, names=('C', 'N'))
>>> tensor
    tensor([[0., 0., 0.],
            [0., 0., 0.]], names=('C', 'N'))

Named tensors propagate names across operations.

>>> tensor.abs()
    tensor([[0., 0., 0.],
            [0., 0., 0.]], names=('C', 'N'))
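
Names can also stand in for positional dimension arguments; a minimal sketch (assuming the reduction accepts a name here, as most dim-taking operators do in this release):

>>> tensor.sum('C')
    tensor([0., 0., 0.], names=('N',))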

Rearrange to a desired ordering by using align_to.

>>> tensor = tensor.align_to('N', 'C', 'H', 'W')
>>> tensor.names, tensor.shape
    (('N', 'C', 'H', 'W'), torch.Size([3, 2, 1, 1]))

And more! Please see our documentation on named tensors.

[Experimental]: Quantization support

PyTorch now supports quantization from the ground up, starting with support for quantized tensors. Convert a float tensor to a quantized tensor and back by:

x = torch.rand(10, 1, dtype=torch.float32)
xq = torch.quantize_per_tensor(x, scale=0.5, zero_point=8, dtype=torch.quint8)
# xq is a quantized tensor with data represented as quint8
xdq = xq.dequantize()
# xdq is the floating-point reconstruction of xq

We also support 8-bit quantized implementations of the most common operators in CNNs.

We also support dynamic quantized operators, which take in floating point activations, but use quantized weights (in torch.nn.quantized.dynamic).
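
A minimal sketch of dynamic quantization under these APIs (assuming torch.quantization.quantize_dynamic with its defaults, which target nn.Linear-style modules; the toy model is a placeholder):

import torch
import torch.nn as nn
import torch.quantization

# A placeholder float model; dynamic quantization replaces supported
# modules (e.g. nn.Linear) with dynamically quantized counterparts.
model = nn.Sequential(nn.Linear(8, 8), nn.ReLU())
qmodel = torch.quantization.quantize_dynamic(model)

# Weights are stored quantized; activations stay in floating point.
out = qmodel(torch.randn(2, 8))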

Quantization also requires support for collecting statistics from tensors and calculating quantization parameters (implementing the torch.quantization.Observer interface); we provide several such observer methods.

For quantization-aware training, we support fake-quantization operators and modules that mimic quantization during training.

In addition, torch.quantization supports end-to-end workflows for post-training (static) quantization, dynamic quantization, and quantization-aware training; a sketch of the static flow follows.
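
A minimal sketch of the post-training (static) flow under these APIs, hedged on the defaults (the toy module, calibration batch, and qconfig choice are placeholders):

import torch
import torch.nn as nn
import torch.quantization

class ToyModel(nn.Module):
    def __init__(self):
        super(ToyModel, self).__init__()
        self.quant = torch.quantization.QuantStub()      # float -> quantized boundary
        self.fc = nn.Linear(4, 4)
        self.dequant = torch.quantization.DeQuantStub()  # quantized -> float boundary

    def forward(self, x):
        return self.dequant(self.fc(self.quant(x)))

model = ToyModel().eval()
model.qconfig = torch.quantization.default_qconfig  # observers for weights/activations
model = torch.quantization.prepare(model)           # insert observers
model(torch.randn(8, 4))                            # calibrate on sample data
model = torch.quantization.convert(model)           # swap in quantized modules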

All quantized operators are compatible with TorchScript.

For more details, see the documentation at: https://pytorch.org/docs/master/quantization.html

Type Promotion

Arithmetic and comparison operations may now perform mixed-type operations that promote to a common dtype.

The example below was not allowed in version 1.2; in version 1.3, the same expression returns a tensor with dtype=torch.float32.

>>> torch.tensor([1], dtype=torch.int) + torch.tensor([1], dtype=torch.float32)
tensor([2.])

See the full documentation for more details.

Deprecations

nn.functional.affine_grid / nn.functional.grid_sample: using the default value of align_corners is now deprecated, because it will be changed in the 1.4 release.

The align_corners parameter was added in this release; the behavior in previous releases was equivalent to setting it to True. This is also the current default value, but it will change to False in the 1.4 release. Note that using the default triggers a warning, as demonstrated below; set the value explicitly to silence it.

>>> torch.nn.functional.affine_grid(torch.randn(1,2,3),
                                    (1,3,2,2))
UserWarning: Default grid_sample and affine_grid behavior will be changed
to align_corners=False from 1.4.0. 
See the documentation of grid_sample for details.
...

>>> torch.nn.functional.affine_grid(torch.randn(1,2,3),
                                    (1,3,2,2),
                                    align_corners=True)
# NO WARNING!
...

[C++] Deprecate torch::Tensor::data<T>() in favor of torch::Tensor::data_ptr<T>() (24847, 24886).

New Features

TensorBoard: 3D Mesh and Hyperparameter Support

torch.utils.tensorboard now supports logging 3D meshes and point clouds, as well as hyperparameters. More details can be found in the SummaryWriter documentation for add_mesh and add_hparams.

A simple example exercising both methods:

import torch
from torch.utils.tensorboard import SummaryWriter

# A single tetrahedron: 4 vertices, per-vertex RGB colors, and 4 triangular
# faces. Each tensor gets a leading batch dimension via unsqueeze(0).
vertices_tensor = torch.as_tensor([
    [1, 1, 1],
    [-1, -1, 1],
    [1, -1, -1],
    [-1, 1, -1],
], dtype=torch.float).unsqueeze(0)
colors_tensor = torch.as_tensor([
    [255, 0, 0],
    [0, 255, 0],
    [0, 0, 255],
    [255, 0, 255],
], dtype=torch.int).unsqueeze(0)
faces_tensor = torch.as_tensor([
    [0, 2, 3],
    [0, 3, 1],
    [0, 1, 2],
    [1, 3, 2],
], dtype=torch.int).unsqueeze(0)

with SummaryWriter() as w:
    w.add_mesh('my_mesh', vertices=vertices_tensor, colors=colors_tensor, faces=faces_tensor)
    # Log five hyperparameter configurations, each with its metrics.
    for i in range(5):
        w.add_hparams({'lr': 0.1*i, 'bsize': i},
                      {'hparam/accuracy': 10*i, 'hparam/loss': 10*i})
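
After running this, the mesh and hyperparameter views show up in TensorBoard (e.g. tensorboard --logdir=runs, assuming SummaryWriter's default log directory).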

Distributed

This release adds macOS support for torch.distributed with the Gloo backend. You can more easily switch from development (e.g. on macOS) to deployment (e.g. on Linux) without having to change a single line of code. The prebuilt binaries for macOS (stable and nightly) include support out of the box.
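
For example, a single-process sketch of bringing up the Gloo backend (the rendezvous address, world size, and rank below are placeholder values):

import torch
import torch.distributed as dist

dist.init_process_group(
    backend="gloo",
    init_method="tcp://127.0.0.1:23456",  # placeholder rendezvous address
    world_size=1,
    rank=0,
)
t = torch.ones(4)
dist.all_reduce(t)  # the same call now works on macOS and Linux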

Libtorch Binaries with C++11 ABI

We now provide Libtorch binaries for building applications compatible with the C++11 ABI. The download links for libtorch binaries with the C++11 ABI can be found under "QUICK START LOCALLY" at https://pytorch.org/.

New TorchScript features

Improvements

C++ Frontend Improvements

We are on our way to better API parity between our Python and C++ frontends. Specifically, we made the following improvements:

Autograd

New torch::nn modules

New torch::nn::functional functions

tensor Construction API

Other C++ Improvements

Distributed Improvements

Performance Improvements

JIT Improvements

ONNX Exporter Improvements

In PyTorch 1.3, we have added support for exporting graphs with ONNX IR v4 semantics and made it the default. We have achieved good initial coverage of ONNX Opset 11, which was released recently with ONNX 1.6; further enhancements to Opset 11 coverage will follow in the next release. We have enabled export for about 20 new PyTorch operators. We have also focused on enabling export for all models in torchvision, and have introduced some necessary groundwork for that in this release, e.g., accepting PyTorch models with inputs/outputs of Dict or String. We continue to work on torchvision models, such as FasterRCNN and MaskRCNN, to enable their export.
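
A minimal sketch of exporting with the new opset (torchvision's resnet18 is just a stand-in model here; any traceable module works):

import torch
import torchvision

model = torchvision.models.resnet18(pretrained=False).eval()
dummy_input = torch.randn(1, 3, 224, 224)  # example input used for tracing
torch.onnx.export(model, dummy_input, "resnet18.onnx", opset_version=11)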

Adding Support for ONNX IR v4

Adding Support for ONNX Opset 11

Exporting More Torch Operators/Models to ONNX

Enhancing ONNX Export Infra

Other Improvements

Bug Fixes

TensorBoard Bug Fixes

C++ API Bug fixes

JIT

Other Bug Fixes

Documentation Updates

Distributed

JIT

Other documentation improvements
