
v1.1.0

pytorch/pytorch

Release date: 2019-05-01 08:09:03


Note: CUDA 8.0 is no longer supported

Highlights

TensorBoard (currently experimental)

First-class and native support for visualization and model debugging with TensorBoard, a web application suite for inspecting and understanding training runs, tensors, and graphs. PyTorch now supports TensorBoard logging with a simple from torch.utils.tensorboard import SummaryWriter command. Histograms, embeddings, scalars, images, text, graphs, and more can be visualized across training runs. TensorBoard support is currently experimental. You can browse the docs here.
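As a minimal sketch of the workflow described above (assuming torch and the tensorboard package are installed; the tag name and log directory are illustrative), logging one scalar per training step looks like:

```python
from torch.utils.tensorboard import SummaryWriter

# SummaryWriter logs to ./runs/ by default; log_dir overrides that.
writer = SummaryWriter(log_dir="runs/demo")

for step in range(100):
    # Log a synthetic loss value under the "Loss/train" tag.
    writer.add_scalar("Loss/train", 1.0 / (step + 1), step)

writer.close()
```

Running `tensorboard --logdir=runs` then serves the dashboard; the same writer also exposes add_histogram, add_image, add_text, add_embedding, and add_graph for the other data types mentioned above.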

[JIT] Attributes in ScriptModules

Attributes can be assigned on a ScriptModule by wrapping them with torch.jit.Attribute and specifying the type. Attributes are similar to parameters or buffers, but can be of any type. They will be serialized along with any parameters/buffers when you call torch.jit.save(), so they are a great way to store arbitrary state in your model. See the docs for more info.

Example:

from typing import Dict, List

import torch

class Foo(torch.jit.ScriptModule):
  def __init__(self, a_dict):
    super(Foo, self).__init__(False)
    # Attributes carry a type annotation and are serialized with the module.
    self.words = torch.jit.Attribute([], List[str])
    self.some_dict = torch.jit.Attribute(a_dict, Dict[str, int])

  @torch.jit.script_method
  def forward(self, input: str) -> int:
    self.words.append(input)
    return self.some_dict[input]

[JIT] Dictionary and List Support in TorchScript

TorchScript now has robust support for list and dictionary types. They behave much like Python lists and dictionaries, supporting most built-in methods, as well as simple comprehensions and for…in constructs.
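A small sketch of what this enables (the function names are illustrative, assuming torch is installed): a scripted comprehension over a list, and a for…in loop over a dictionary's items.

```python
from typing import Dict, List

import torch

@torch.jit.script
def squares(xs: List[int]) -> List[int]:
    # A simple list comprehension, compiled by TorchScript.
    return [x * x for x in xs]

@torch.jit.script
def invert(d: Dict[str, int]) -> Dict[int, str]:
    # Dictionaries support for...in iteration via .items().
    out: Dict[int, str] = {}
    for k, v in d.items():
        out[v] = k
    return out
```

Both functions round-trip ordinary Python lists and dicts at the call boundary, e.g. `squares([1, 2, 3])` returns `[1, 4, 9]`.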

[JIT] User-defined classes in TorchScript (experimental)

For more complex stateful operations, TorchScript now supports annotating a class with @torch.jit.script. Classes used this way can be JIT-compiled and loaded in C++ like other TorchScript modules. See the docs for more info.

@torch.jit.script
class Pair:
  def __init__(self, first, second):
    self.first = first
    self.second = second

  def sum(self):
    return self.first + self.second

DistributedDataParallel new functionality and tutorials

nn.parallel.DistributedDataParallel can now wrap multi-GPU modules, enabling use cases such as model parallelism (tutorial) within one server combined with data parallelism (tutorial) across servers. (19271).
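As a minimal, single-process sketch of the wrapping API (assuming a CPU build of torch with the gloo backend; the address, port, and model are illustrative, and a real multi-GPU or multi-server run would launch one process per rank):

```python
import os

import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP

# Single-process process group, purely for illustration.
os.environ.setdefault("MASTER_ADDR", "127.0.0.1")
os.environ.setdefault("MASTER_PORT", "29500")
dist.init_process_group("gloo", rank=0, world_size=1)

model = torch.nn.Linear(4, 2)
# DDP wraps the module; during backward, gradients are all-reduced across ranks.
ddp = DDP(model)
out = ddp(torch.randn(3, 4))

dist.destroy_process_group()
```

With more than one rank, each process wraps its own replica (or its own multi-GPU, model-parallel module) and DDP keeps the replicas synchronized.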

Breaking Changes

New Features

Operators

NN

Tensors / dtypes

Optim

Distributions

Samplers

DistributedDataParallel

TorchScript and Tracer

Experimental Features

Improvements

Bug Fixes

Serious

Other

Deprecations

Performance

Highlights

Other

Documentation

ONNX

Exporting More Torch Operators to ONNX

Extending Existing Exporting Logic

Optimizing Exported ONNX Graph

Adding Utility Functions and Refactoring

Bugfixes

