2.3.0
Release date: 2023-03-23 17:00:38
Latest pyg-team/pytorch_geometric release: 2.5.3 (2024-04-19 19:37:44)
We are thrilled to announce the release of PyG 2.3 🎉
PyG 2.3 is the culmination of work from 59 contributors who have contributed features and bug fixes in over 470 commits since `torch-geometric==2.2.0`.
Highlights
PyTorch 2.0 Support
PyG 2.3 is fully compatible with the next generation release of PyTorch, bringing many new innovations and features such as `torch.compile()` and Python 3.11 support to PyG out-of-the-box. In particular, many PyG models and functions are sped up significantly by `torch.compile()` in `torch >= 2.0.0`.
We have prepared a full tutorial and a set of examples to get you going with `torch.compile()` immediately:
```python
import torch
import torch_geometric
from torch_geometric.nn import GraphSAGE

device = 'cuda' if torch.cuda.is_available() else 'cpu'

# Example dimensions; adjust them to your dataset:
model = GraphSAGE(in_channels=64, hidden_channels=64, num_layers=2, out_channels=7)
model = model.to(device)
model = torch_geometric.compile(model)  # wraps torch.compile() for PyG models
```
Overall, we observed runtime improvements of up to nearly 300%:
| Model | Mode | Forward | Backward | Total | Speedup |
|---|---|---|---|---|---|
| GCN | Eager | 2.6396s | 2.1697s | 4.8093s | |
| GCN | Compiled | 1.1082s | 0.5896s | 1.6978s | 2.83x |
| GraphSAGE | Eager | 1.6023s | 1.6428s | 3.2451s | |
| GraphSAGE | Compiled | 0.7033s | 0.7465s | 1.4498s | 2.24x |
| GIN | Eager | 1.6701s | 1.6990s | 3.3690s | |
| GIN | Compiled | 0.7320s | 0.7407s | 1.4727s | 2.29x |
Please note that `torch.compile()` within PyG is in beta mode and under active development. For example, `torch.compile(model, dynamic=True)` does not yet work seamlessly, but fixes are on their way. We are very eager to improve its support across the whole PyG code base, so do not hesitate to reach out if you notice anything unexpected.
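As a rough illustration of how such numbers can be obtained, the following sketch times an eager versus a compiled forward pass. It uses synthetic data, a CUDA device, and the built-in `GCN` model; all sizes are made up for illustration and the measured numbers will vary by hardware:

```python
import copy
import time

import torch
import torch_geometric
from torch_geometric.nn import GCN

# Synthetic inputs; sizes are arbitrary and only serve as an illustration:
x = torch.randn(10_000, 64, device='cuda')
edge_index = torch.randint(0, 10_000, (2, 200_000), device='cuda')

model = GCN(in_channels=64, hidden_channels=64, num_layers=3).to('cuda')
compiled = torch_geometric.compile(copy.deepcopy(model))

for name, m in [('eager', model), ('compiled', compiled)]:
    m(x, edge_index)  # Warm-up (triggers compilation for the compiled variant).
    torch.cuda.synchronize()
    start = time.perf_counter()
    for _ in range(10):
        m(x, edge_index)
    torch.cuda.synchronize()
    print(f'{name}: {(time.perf_counter() - start) / 10:.4f}s per forward pass')
```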
Infrastructure Changes
With the recent upstreaming of `torch-scatter` and `torch-sparse` functionality into native PyTorch, we are happy to announce that installing the extension packages `torch-scatter`, `torch-sparse`, `torch-cluster` and `torch-spline-conv` is now fully optional. Installing PyG is now encapsulated into a single command

```
pip install torch-geometric
```

which finally resolves a lot of previous installation issues.
Extension packages are still picked up for the following use-cases (if installed):
- `pyg-lib`: Heterogeneous GNN operators and graph sampling routines like `NeighborLoader`
- `torch-scatter`: Accelerated `"min"` and `"max"` reductions
- `torch-sparse`: `SparseTensor` support
- `torch-cluster`: Graph clustering routines like `knn` or `radius`
- `torch-spline-conv`: `SplineConv` support
We recommend starting with a minimal installation, and only installing additional dependencies once PyG notifies you that they are missing during usage.
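If you are unsure which optional extensions are present in your environment, a minimal check using only the Python standard library looks like this:

```python
from importlib.util import find_spec

# The optional extension packages listed above, by their import names:
optional = ['pyg_lib', 'torch_scatter', 'torch_sparse', 'torch_cluster', 'torch_spline_conv']

for name in optional:
    status = 'installed' if find_spec(name) is not None else 'not installed (optional)'
    print(f'{name}: {status}')
```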
Native PyTorch Sparse Tensor Support
With the recent addition of the `torch.sparse_csr_tensor` and `torch.sparse_csc_tensor` classes and accelerated sparse matrix multiplication routines to PyTorch, we finally enable `MessagePassing` on pure PyTorch sparse tensors as well. In particular, you can now use `torch.sparse_csr_tensor` and `torch.sparse_csc_tensor` as a drop-in replacement for `torch_sparse.SparseTensor`:
```python
import torch
import torch_geometric.transforms as T
from torch_geometric.datasets import Planetoid
from torch_geometric.nn import GCN

# Store the adjacency matrix as a native torch.sparse_csr tensor:
transform = T.ToSparseTensor(layout=torch.sparse_csr)
dataset = Planetoid("Planetoid", name="Cora", transform=transform)
data = dataset[0]

model = GCN(dataset.num_features, hidden_channels=64, num_layers=2)
out = model(data.x, data.adj_t)
```
Nearly all of the native PyG layers have been tested to work seamlessly with native PyTorch sparse tensors (#5906, #5944, #6003, #6033, #6514, #6532, #6748, #6847, #6868, #6874, #6897, #6930, #6932, #6936, #6937, #6939, #6947, #6950, #6951, #6957).
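As a small illustration of the new interoperability, here is a hedged sketch that builds a native PyTorch sparse adjacency matrix from an `edge_index` and converts it back via the newly added `utils.to_edge_index` (assuming `to_edge_index` returns an `(edge_index, edge_attr)` tuple):

```python
import torch
from torch_geometric.utils import to_edge_index, to_torch_coo_tensor

edge_index = torch.tensor([[0, 1, 1, 2],
                           [1, 0, 2, 1]])

# Convert to a native torch.sparse tensor (COO), then to CSR layout:
adj = to_torch_coo_tensor(edge_index, size=(3, 3)).to_sparse_csr()

# ... and back to an edge_index plus edge weights:
edge_index2, edge_weight = to_edge_index(adj)
```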
Explainability Framework
In PyG 2.2 we introduced the `torch_geometric.explain` package that provides a flexible interface to generate and visualize GNN explanations using various algorithms. We are happy to add the following key improvements on this front:
- Support for explaining heterogeneous GNNs via `HeteroExplanation`
- New visualization tools `visualize_feature_importance` and `visualize_graph` for explanations
- Many new datasets and metrics to evaluate explanation algorithms
- Several new explanation algorithms such as `CaptumExplainer`, `PGExplainer`, `AttentionExplainer`, `PGMExplainer`, and `GraphMaskExplainer`
- Support for node-level, link-level and graph-level explanations
Using the new explainer interface is as simple as:
```python
from torch_geometric.explain import Explainer
from torch_geometric.explain.algorithm import CaptumExplainer

# `model` is your trained GNN and `data` the graph to explain:
explainer = Explainer(
    model=model,
    algorithm=CaptumExplainer('IntegratedGradients'),
    explanation_type='model',
    model_config=dict(
        mode='multiclass_classification',
        task_level='node',
        return_type='log_probs',
    ),
    node_mask_type='attributes',
    edge_mask_type='object',
)
explanation = explainer(data.x, data.edge_index)
```
Read more about `torch_geometric.explain` in our newly added tutorial and example scripts. We also added a blog post that describes the new interface and functionality in depth.
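Beyond generating explanations, the new metrics can be used to evaluate them. A minimal sketch, assuming the fidelity metric lives under `torch_geometric.explain.metric` and reusing the `explainer` and `explanation` objects from above:

```python
from torch_geometric.explain.metric import fidelity

# Fidelity+ / Fidelity- quantify how much the model prediction changes when
# keeping only the explanation subgraph vs. removing it:
pos_fidelity, neg_fidelity = fidelity(explainer, explanation)
print(f'Fidelity+: {pos_fidelity:.4f}, Fidelity-: {neg_fidelity:.4f}')
```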
Accelerations
Together with Intel and NVIDIA, we are excited about new PyG accelerations:
- [Experimental] Support for native `cugraph`-based GNN layers for commonly used layers such as `CuGraphSAGEConv`, `CuGraphGATConv`, and `CuGraphRGCNConv` (#6278, #6388, #6412):

  `RGCN` with neighbor sampling:

  | Dataset | CuGraphRGCNConv (ms) | FastRGCNConv (ms) | RGCNConv (ms) |
  |---|---|---|---|
  | AIFB | 7.2 | 13.4 | 70 |
  | BGS | 7.1 | 8.8 | 146.9 |
  | MUTAG | 8.3 | 21.8 | 47.6 |
  | AM | 17.5 | 51 | 330.1 |

  Full-batch `GAT`:

  | Dataset | CuGraphGATConv (ms) | GATConv (ms) |
  |---|---|---|
  | Cora | 7 | 8.7 |
  | Citeseer | 7 | 9 |
  | Pubmed | 8.2 | 11.4 |

  `GraphSAGE` with neighbor sampling:

  | Dataset | CuGraphSAGEConv (ms) | SAGEConv (ms) |
  |---|---|---|
  | ogbn-products | 2591.8 | 3040.3 |

- A fast alternative to `HGTConv` via `FastHGTConv` that utilizes `pyg-lib` integration for improved runtimes. Overall, `FastHGTConv` achieves a speed-up of approximately 300% compared to the original implementation (#6178).
- A fast implementation of `HeteroDictLinear` that utilizes `pyg-lib` integration for improved runtimes (#6178).
- GNN inference and training optimizations on CPU within native PyTorch 2.0. Optimizations include:
  - `scatter_reduce`: a performance hotspot in message passing when `edge_index` is stored in coordinate (COO) format.
  - `gather`: the backward pass of `scatter_reduce`, specially tuned for GNN compute when the index is an expanded tensor.
  - `torch.sparse.mm` with a `reduce` flag: a performance hotspot in message passing when `edge_index` is stored in compressed sparse row (CSR) format. Supported reduce flags are `"sum"`, `"mean"`, `"amax"` and `"amin"`.

  On OGB benchmarks, a 1.12x - 4.07x performance speedup is measured (PyTorch 1.13.1 vs. PyTorch 2.0) for single-node inference and training.
- Introduction of `index_sort` via `pyg-lib>=0.2.0`, which implements a (way) faster alternative to sorting one-dimensional indices compared to `torch.sort` (#6554). Overall, this speeds up dataset loading times by up to 600%.
- Introduction of `AffinityMixin` to accelerate PyG workflows on CPU. CPU affinity can be enabled via the `AffinityMixin.enable_cpu_affinity()` method for `num_workers > 0` data loading use-cases, and will guarantee that a separate core is assigned to each worker at initialization. Over all benchmarked model/dataset samples, the average training time is decreased by up to 1.85x. We added an in-depth tutorial on how to speed up your PyG workflows on CPU.
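A minimal sketch of enabling CPU affinity on a `NeighborLoader` (assuming `enable_cpu_affinity()` can be used as a context manager and that `data` is an already loaded `Data` object):

```python
from torch_geometric.loader import NeighborLoader

loader = NeighborLoader(data, num_neighbors=[10, 10], batch_size=1024,
                        num_workers=4)

# Pin each of the four data-loading workers to its own CPU core:
with loader.enable_cpu_affinity():
    for batch in loader:
        ...  # training / inference on `batch`
```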
Additional Highlights
Documentation Revamp
The documentation has undergone a revision of design and structure, making it faster to load and easier to navigate. Take a look at its new design here.
Community Sprint: Improving Code Coverage
We had our third community sprint in the last two weeks of January. The goal was to improve code coverage by writing more thorough tests. Thanks to the efforts of many contributors, the total code coverage went from ~85% to ~92% (#6528, #6523, #6538, #6555, #6558, #6568, #6573, #6578, #6597, #6600, #6618, #6619, #6621, #6623, #6637, #6638, #6640, #6645, #6648, #6647, #6653, #6657, #6662, #6664, #6667, #6668, #6669, #6670, #6671, #6673, #6675, #6676, #6677, #6678, #6681, #6683, #6703, #6720, #6735, #6736, #6763, #6781, #6797, #6799, #6824, #6858)
Breaking Changes
- Temporal sampling in `NeighborLoader` will now also sample nodes with an equal timestamp to the seed time, rather than only nodes with a smaller timestamp (requires `pyg-lib>=0.2.0`) (#6517)
- Changed the interface and implementation of `GraphMultisetTransformer` such that GNN execution is no longer performed inside its module (#6343)
- Unified `Explanation.node_mask` and `Explanation.node_feat_mask` into a single attribute in `Explainer` (#6267)
- Moved `ExplainerConfig` arguments to the `Explainer` class (#6176)
- Moved PyTorch Lightning data modules to `torch_geometric.data.lightning` (#6140)
- Removed the `target_index` argument in the `Explainer` interface (#6270)
- Removed the `Aggregation.set_validate_args` option (#6175)
- Removed the usage of `__dunder__` names in `MessagePassing` (#6999)
Deprecations
- The usage of `datasets.BAShapes` is now deprecated. Use the `BAGraph` graph generator to generate Barabasi-Albert graphs instead (#6072); see the sketch below
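A sketch of the replacement pattern (assuming the `ExplainerDataset`/`BAGraph` combination with a `'house'` motif reproduces the old `BAShapes` setup; parameter values are illustrative):

```python
from torch_geometric.datasets import ExplainerDataset
from torch_geometric.datasets.graph_generator import BAGraph

# A Barabasi-Albert base graph with 'house' motifs attached, as in BAShapes:
dataset = ExplainerDataset(
    graph_generator=BAGraph(num_nodes=300, num_edges=5),
    motif_generator='house',
    num_motifs=80,
)
```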
Features
Layers, Models and Examples
- Added the `DenseGATConv` layer (https://github.com/pyg-team/pytorch_geometric/pull/6928)
- Added the `DistMult` KGE model (https://github.com/pyg-team/pytorch_geometric/pull/6958)
- Added the `ComplEx` KGE model (#6898)
- Added the `TransE` KGE model (#6314)
- Added `HeteroLayerNorm` and `HeteroBatchNorm` layers (#6838)
- Added a `TemporalEncoding` module (#6785)
- Added the `SimpleConv` to perform non-trainable propagation (#6718)
- Added `torch.jit` examples for `example/film.py` and `example/gcn.py` (#6602)
- Added the `AntiSymmetricConv` layer (#6577)
- Added a `PyGModelHubMixin` for Huggingface model hub integration (#5930, #6591)
- Added the `PGMExplainer` (#6149, #6588, #6589)
- Added internal `ToHeteroLinear` and `ToHeteroMessagePassing` modules to accelerate `to_hetero` functionality (#5992, #6456)
- Added the `GraphMaskExplainer` (#6284)
- Added the `GRBCDAttack` and `PRBCDAttack` adversarial attack models (#5972)
- Added the `CaptumExplainer` (#6383, #6387, #6433, #6487)
- Added the `GNNFF` model (#5866)
- Added `MLPAggregation`, `SetTransformerAggregation`, `GRUAggregation`, and `DeepSetsAggregation` as adaptive readout functions (#6301, #6336, #6338, #6331, #6332)
- Added the `GPSConv` Graph Transformer layer (#6326, #6327)
- Added the `PGExplainer` (#6204)
- Added the `AttentionExplainer` (#6279)
- Added the `PointGNNConv` layer (#6194)
- Added the RandLA-Net architecture as classification and segmentation examples (#5117)
Datasets
- Added the `AirfRANS` dataset (#6287)
- Added the `HeterophilousGraphDataset` suite (#6846)
- Added support for the revised version of the `MD17` dataset (#6734)
- Added the `BAMultiShapesDataset` (#6541)
- Added the `Taobao` dataset and a corresponding example (#6144)
- Added the Freebase `FB15k_237` dataset (#3204)
- Added the `BA2MotifDataset` explainer dataset (#6257)
- Added the `CycleMotif` motif generator to generate `n`-node cycle-shaped motifs (#6256)
- Added the `InfectionDataset` to evaluate explanations (#6222)
- Added a `CustomMotif` motif generator (#6179)
- Added the `ERGraph` graph generator to generate Erdős-Rényi (ER) graphs (#6073)
- Added a general `ExplainerDataset` to evaluate explanation methods (#6104)
Loaders
- Enabled `NeighborLoader` to return the number of sampled nodes and edges per hop, and added corresponding `trim_to_layer` functionality for more efficient `NeighborLoader` use-cases (#6661, #6834); see the sketch after this list
- Added a `ZipLoader` to execute multiple `NodeLoader` or `LinkLoader` instances (#6829)
- Added a `seed_time` attribute to temporal `NodeLoader` outputs in case `input_time` is given (#6196)
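As referenced above, here is a hedged sketch of the `trim_to_layer` pattern, assuming the call signature `trim_to_layer(layer, num_sampled_nodes_per_hop, num_sampled_edges_per_hop, x, edge_index)` and that the per-hop counts come from the `num_sampled_nodes`/`num_sampled_edges` attributes of a `NeighborLoader` mini-batch:

```python
import torch
from torch_geometric.nn import SAGEConv
from torch_geometric.utils import trim_to_layer

class TrimmedGNN(torch.nn.Module):
    def __init__(self, in_channels, hidden_channels, num_layers):
        super().__init__()
        self.convs = torch.nn.ModuleList(
            SAGEConv(in_channels if i == 0 else hidden_channels, hidden_channels)
            for i in range(num_layers))

    def forward(self, x, edge_index, num_sampled_nodes_per_hop,
                num_sampled_edges_per_hop):
        for i, conv in enumerate(self.convs):
            # Drop nodes and edges that are no longer reachable at layer `i`:
            x, edge_index, _ = trim_to_layer(
                i, num_sampled_nodes_per_hop, num_sampled_edges_per_hop,
                x, edge_index)
            x = conv(x, edge_index).relu()
        return x
```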
Transformations
- Added the `Pad` transformation (#5940, #6697, #6731, #6758)
- Added a `RemoveDuplicatedEdges` transformation (#6709); a usage sketch follows below
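A small usage sketch of the `RemoveDuplicatedEdges` transformation (assuming its default constructor merges exact duplicate edges):

```python
import torch
import torch_geometric.transforms as T
from torch_geometric.data import Data

# The edge (0, 1) appears twice:
edge_index = torch.tensor([[0, 0, 1, 1],
                           [1, 1, 0, 2]])
data = Data(edge_index=edge_index, num_nodes=3)

data = T.RemoveDuplicatedEdges()(data)
print(data.edge_index)  # the duplicate (0, 1) edge is removed
```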
General Improvements
- Added a memory-efficient `utils.one_hot` implementation (https://github.com/pyg-team/pytorch_geometric/pull/7005)
- Optimized `utils.softmax` implementation (#6113, #6155, #6805)
- Optimized `topk` implementation for graph pooling on large graphs (#6123)
- Added common `utils.select` and `utils.narrow` functionality to support filtering of both tensors and lists (#6162)
- Support `normalization` customization in `get_mesh_laplacian` (#6790)
- Added CPU-optimized `spmm` functionality via CSR format (#6699, #6759)
- Added TorchScript support to the `RECT_L` model (#6727)
- Added TorchScript support to the `Node2Vec` model (#6726)
- Added `utils.to_edge_index` to convert sparse tensors to edge indices and edge attributes (#6728)
- Added TorchScript support to the `LINKX` model (#6712)
- Added `dropout` option to `GraphMultisetTransformer` (#6484)
- Added option to customize loader arguments for evaluation in `LightningNodeData` and `LightningLinkData` (#6450, #6456)
- Added option to customize `num_neighbors` in `NeighborSampler` after instantiation (#6446)
- Added support to define a custom `HeteroData` mini-batch class in remote backends (#6377)
- Allow the usage of `ChebConv` within `GNNExplainer` (https://github.com/pyg-team/pytorch_geometric/pull/6778)
- Added `Dataset.to_datapipe` functionality for converting PyG datasets into a PyTorch `DataPipe` (#6141)
- Added `to_nested_tensor` and `from_nested_tensor` functionality (#6329, #6330, #6331, #6332)
- Added `networkit` conversion utilities (#6321)
- Added `Data.update()` and `HeteroData.update()` functionality (#6313)
- Added `HeteroData.set_value_dict` functionality (https://github.com/pyg-team/pytorch_geometric/pull/6961, https://github.com/pyg-team/pytorch_geometric/pull/6974)
- Added the (un)faithfulness explainability metric (#6090)
- Added the `fidelity` explainability metric (#6116, #6510)
- Added `characterization_score` and `fidelity_curve_auc` explainer metrics (#6188)
- Added subgraph visualization of GNN explanations (#6235, #6271)
- Added a weighted negative sampling option in `LinkNeighborLoader` (#6264)
- Added a `get_embeddings` function (#6201)
- Added `Explanation.visualize_feature_importance` to support node feature importance visualizations (#6094)
- Added heterogeneous graph support to explainers via `HeteroExplanation` (#6091, #6218)
- Added a `summary` method for PyG/PyTorch models (#5859, #6161); see the sketch after this list
- Added an `input_time` option to `LightningNodeData` and `transform_sampler_output` to `NodeLoader` and `LinkLoader` (#6187)
- Added `Data.edge_subgraph` and `HeteroData.edge_subgraph` functionalities (#6193)
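As referenced in the list above, a minimal sketch of the new model `summary` helper (assuming it is exported from `torch_geometric.nn` and accepts example inputs):

```python
import torch
from torch_geometric.nn import GCN, summary

model = GCN(in_channels=32, hidden_channels=64, num_layers=2, out_channels=7)

# Example inputs used to trace shapes through the model:
x = torch.randn(100, 32)
edge_index = torch.randint(0, 100, (2, 500))

print(summary(model, x, edge_index))  # per-layer table of shapes and parameter counts
```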
Bug Fixes
- Fixed a bug in `Data.subgraph()` and `HeteroData.subgraph()` for bipartite graphs (#6613, #6654)
- Fixed a bug in `PNAConv` and `DegreeScalerAggregation` to correctly incorporate degree statistics of isolated nodes (#6609)
- Fixed a bug in which `Data.to_heterogeneous` filtered attributes in the wrong dimension (#6522)
- Fixed a bug in `to_hetero` when using an uninitialized submodule without implementing `reset_parameters` (#6863)
- Fixed a bug in `get_mesh_laplacian` (#6790)
- Fixed a bug in which masks were not properly masked in `GNNExplainer` on link prediction tasks (#6787)
- Fixed the `ImbalancedSampler` when operating on a sliced `InMemoryDataset` (#6374)
- Fixed the approximate PPR variant in `transforms.GDC` to not crash on graphs with isolated nodes (#6242)
- Fixed the filtering of node features in `transforms.RemoveIsolatedNodes` (#6308)
- Fixed a bug in `DimeNet` that caused an output dimension mismatch (#6305)
- Fixed `Data.to_heterogeneous` when used with an empty `edge_index` (#6304)
- Fixed a bug in the output order in `HeteroLinear` for unsorted type vectors (#6198)
Full Changelog
Added
- Added a memory-efficient `utils.one_hot` implementation (#7005)
- Added `HeteroDictLinear` and an optimized `FastHGTConv` module (#6178, #6998)
- Added the `DenseGATConv` module (#6928)
- Added `trim_to_layer` utility function for more efficient `NeighborLoader` use-cases (#6661)
- Added the `DistMult` KGE model (#6958)
- Added `HeteroData.set_value_dict` functionality (#6961, #6974)
- Added PyTorch >= 2.0 support (#6934, #7000)
- Added PyTorch Lightning >= 2.0 support (#6929)
- Added the `ComplEx` KGE model (#6898)
- Added option to write benchmark results to csv (#6888)
- Added `HeteroLayerNorm` and `HeteroBatchNorm` layers (#6838)
- Added the `HeterophilousGraphDataset` suite (#6846)
- Added support for sparse tensor in full batch mode inference benchmark (#6843)
- Enabled `NeighborLoader` to return number of sampled nodes and edges per hop (#6834)
- Added `ZipLoader` to execute multiple `NodeLoader` or `LinkLoader` instances (#6829)
- Added common `utils.select` and `utils.narrow` functionality to support filtering of both tensors and lists (#6162)
- Support `normalization` customization in `get_mesh_laplacian` (#6790)
- Added the `TemporalEncoding` module (#6785)
- Added CPU-optimized `spmm_reduce` functionality via CSR format (#6699, #6759)
- Added support for the revised version of the `MD17` dataset (#6734)
- Added TorchScript support to the `RECT_L` model (#6727)
- Added TorchScript support to the `Node2Vec` model (#6726)
- Added `utils.to_edge_index` to convert sparse tensors to edge indices and edge attributes (#6728)
- Fixed expected data format in `PolBlogs` dataset (#6714)
- Added `SimpleConv` to perform non-trainable propagation (#6718)
- Added a `RemoveDuplicatedEdges` transform (#6709)
- Added TorchScript support to the `LINKX` model (#6712)
- Added `torch.jit` examples for `example/film.py` and `example/gcn.py` (#6602)
- Added `Pad` transform (#5940, #6697, #6731, #6758)
- Added full batch mode to the inference benchmark (#6631)
- Added `cat` aggregation type to the `HeteroConv` class so that features can be concatenated during grouping (#6634)
- Added `torch.compile` support and benchmark study (#6610, #6952, #6953, #6980, #6983, #6984, #6985, #6986, #6989, #7002)
- Added the `AntiSymmetricConv` layer (#6577)
- Added a mixin for Huggingface model hub integration (#5930, #6591)
- Added support for accelerated GNN layers in `nn.conv.cugraph` via `cugraph-ops` (#6278, #6388, #6412)
- Added accelerated `index_sort` function from `pyg-lib` for faster sorting (#6554)
- Fix incorrect device in `EquilibriumAggregation` (#6560)
- Added bipartite graph support in `dense_to_sparse()` (#6546)
- Add CPU affinity support for more data loaders (#6534, #6922)
- Added the `BAMultiShapesDataset` (#6541)
- Added the interfaces of a graph pooling framework (#6540)
- Added automatic `n_id` and `e_id` attributes to mini-batches produced by `NodeLoader` and `LinkLoader` (#6524)
- Added `PGMExplainer` to `torch_geometric.contrib` (#6149, #6588, #6589)
- Added a `NumNeighbors` helper class for specifying the number of neighbors when sampling (#6501, #6505, #6690)
- Added caching to `is_node_attr()` and `is_edge_attr()` calls (#6492)
- Added `ToHeteroLinear` and `ToHeteroMessagePassing` modules to accelerate `to_hetero` functionality (#5992, #6456)
- Added `GraphMaskExplainer` (#6284)
- Added the `GRBCD` and `PRBCD` adversarial attack models (#5972)
- Added `dropout` option to `SetTransformer` and `GraphMultisetTransformer` (#6484)
- Added option to customize loader arguments for evaluation in `LightningNodeData` and `LightningLinkData` (#6450, #6456)
- Added option to customize `num_neighbors` in `NeighborSampler` after instantiation (#6446)
- Added the `Taobao` dataset and a corresponding example for it (#6144)
- Added `pyproject.toml` (#6431)
- Added the `torch_geometric.contrib` sub-package (#6422)
- Warn on using latest documentation (#6418)
- Added basic `pyright` type checker support (#6415)
- Added a new external resource for link prediction (#6396)
- Added `CaptumExplainer` (#6383, #6387, #6433, #6487, #6966)
- Added support for custom `HeteroData` mini-batch class in remote backends (#6377)
- Added the `GNNFF` model (#5866)
- Added `MLPAggregation`, `SetTransformerAggregation`, `GRUAggregation`, and `DeepSetsAggregation` as adaptive readout functions (#6301, #6336, #6338)
- Added `Dataset.to_datapipe` for converting PyG datasets into a torchdata `DataPipe` (#6141)
- Added `to_nested_tensor` and `from_nested_tensor` functionality (#6329, #6330, #6331, #6332)
- Added the `GPSConv` Graph Transformer layer and example (#6326, #6327)
- Added `networkit` conversion utilities (#6321)
- Added global dataset attribute access via `dataset.{attr_name}` (#6319)
- Added the `TransE` KGE model and example (#6314)
- Added the Freebase `FB15k_237` dataset (#3204)
- Added `Data.update()` and `HeteroData.update()` functionality (#6313)
- Added `PGExplainer` (#6204)
- Added the `AirfRANS` dataset (#6287)
- Added `AttentionExplainer` (#6279)
- Added (un)faithfulness explainability metric (#6090)
- Added fidelity explainability metric (#6116, #6510)
- Added subgraph visualization of GNN explanations (#6235, #6271)
- Added weighted negative sampling option in `LinkNeighborLoader` (#6264)
- Added the `BA2MotifDataset` explainer dataset (#6257)
- Added `CycleMotif` motif generator to generate `n`-node cycle-shaped motifs (#6256)
- Added the `InfectionDataset` to evaluate explanations (#6222)
- Added `characterization_score` and `fidelity_curve_auc` explainer metrics (#6188)
- Added `get_message_passing_embeddings` (#6201)
- Added the `PointGNNConv` layer (#6194)
- Added `GridGraph` graph generator to generate grid graphs (#6220)
- Added explainability metrics for when ground truth is available (#6137)
- Added `visualize_feature_importance` to support node feature visualizations (#6094)
- Added heterogeneous graph support to `Explanation` framework (#6091, #6218)
- Added a `CustomMotif` motif generator (#6179)
- Added `ERGraph` graph generator to generate Erdős-Rényi (ER) graphs (#6073)
- Added `BAGraph` graph generator to generate Barabasi-Albert graphs - the usage of `datasets.BAShapes` is now deprecated (#6072)
- Added explainability benchmark dataset framework (#6104)
- Added `seed_time` attribute to temporal `NodeLoader` outputs in case `input_time` is given (#6196)
- Added `Data.edge_subgraph` and `HeteroData.edge_subgraph` functionalities (#6193)
- Added `input_time` option to `LightningNodeData` and `transform_sampler_output` to `NodeLoader` and `LinkLoader` (#6187)
- Added `summary` for PyG/PyTorch models (#5859, #6161)
- Started adding `torch.sparse` support to PyG (#5906, #5944, #6003, #6033, #6514, #6532, #6748, #6847, #6868, #6874, #6897, #6930, #6932, #6936, #6937, #6939, #6947, #6950, #6951, #6957)
- Add `inputs_channels` back in training benchmark (#6154)
- Added support for dropping nodes in `utils.to_dense_batch` in case `max_num_nodes` is smaller than the number of nodes (#6124)
- Added the RandLA-Net architecture as an example (#5117)
Changed
- Drop internal usage of `__dunder__` names (#6999)
- Changed the interface of `sort_edge_index`, `coalesce` and `to_undirected` to only return single `edge_index` information in case the `edge_attr` argument is not specified (#6875, #6879, #6893)
- Fixed a bug in `to_hetero` when using an uninitialized submodule without implementing `reset_parameters` (#6863)
- Fixed a bug in `get_mesh_laplacian` (#6790)
- Fixed a bug in which masks were not properly masked in `GNNExplainer` on link prediction tasks (#6787)
- Allow the usage of `ChebConv` within `GNNExplainer` (#6778)
- Allow setting the `EdgeStorage.num_edges` property (#6710)
- Fixed a bug in `utils.bipartite_subgraph()` and updated docs of `HeteroData.subgraph()` (#6654)
- Properly reset the `data_list` cache of an `InMemoryDataset` when accessing `dataset.data` (#6685)
- Fixed a bug in `Data.subgraph()` and `HeteroData.subgraph()` (#6613)
- Fixed a bug in `PNAConv` and `DegreeScalerAggregation` to correctly incorporate degree statistics of isolated nodes (#6609)
- Improved code coverage (#6523, #6538, #6555, #6558, #6568, #6573, #6578, #6597, #6600, #6618, #6619, #6621, #6623, #6637, #6638, #6640, #6645, #6648, #6647, #6653, #6657, #6662, #6664, #6667, #6668, #6669, #6670, #6671, #6673, #6675, #6676, #6677, #6678, #6681, #6683, #6703, #6720, #6735, #6736, #6763, #6781, #6797, #6799, #6824, #6858)
- Fixed a bug in which `data.to_heterogeneous()` filtered attributes in the wrong dimension (#6522)
- Breaking Change: Temporal sampling will now also sample nodes with an equal timestamp to the seed time (requires `pyg-lib>0.1.0`) (#6517)
- Changed `DataLoader` workers with affinity to start at `cpu0` (#6512)
- Allow 1D input to `global_*_pool` functions (#6504)
- Add information about dynamic shapes in `RGCNConv` (#6482)
- Fixed the use of types removed in `numpy 1.24.0` (#6495)
- Fixed keyword parameters in `examples/mnist_voxel_grid.py` (#6478)
- Unified `LightningNodeData` and `LightningLinkData` code paths (#6473)
- Allow indices with any integer type in `RGCNConv` (#6463)
- Re-structured the documentation (#6420, #6423, #6429, #6440, #6443, #6445, #6452, #6453, #6458, #6459, #6460, #6490, #6491, #6693, #6744)
- Fix the default arguments of the `DataParallel` class (#6376)
- Fix `ImbalancedSampler` on sliced `InMemoryDataset` (#6374)
- Breaking Change: Changed the interface and implementation of `GraphMultisetTransformer` (#6343)
- Fixed the approximate PPR variant in `transforms.GDC` to not crash on graphs with isolated nodes (#6242)
- Added a warning when accessing `InMemoryDataset.data` (#6318)
- Drop `SparseTensor` dependency in `GraphStore` (#5517)
- Replace `NeighborSampler` with `NeighborLoader` in the distributed sampling example (#6204)
- Fixed the filtering of node features in `transforms.RemoveIsolatedNodes` (#6308)
- Fixed a bug in `DimeNet` that caused an output dimension mismatch (#6305)
- Fixed `Data.to_heterogeneous()` with empty `edge_index` (#6304)
- Unify `Explanation.node_mask` and `Explanation.node_feat_mask` (#6267)
- Moved thresholding config of the `Explainer` to `Explanation` (#6215)
- Fixed a bug in the output order in `HeteroLinear` for unsorted type vectors (#6198)
- Breaking Change: Move `ExplainerConfig` arguments to the `Explainer` class (#6176)
- Refactored `NeighborSampler` to be input-type agnostic (#6173)
- Infer correct CUDA device ID in `profileit` decorator (#6164)
- Correctly use edge weights in `GDC` example (#6159)
- Breaking Change: Moved PyTorch Lightning data modules to `torch_geometric.data.lightning` (#6140)
- Make `torch_sparse` an optional dependency (#6132, #6134, #6138, #6139)
- Optimized `utils.softmax` implementation (#6113, #6155, #6805)
- Optimized `topk` implementation for large enough graphs (#6123)
Removed
- `torch-sparse` is now an optional dependency (#6625, #6626, #6627, #6628, #6629, #6630)
- Removed most of the `torch-scatter` dependencies (#6394, #6395, #6399, #6400, #6615, #6617)
- Removed the deprecated classes `GNNExplainer` and `Explainer` from `nn.models` (#6382)
- Removed `target_index` argument in the `Explainer` interface (#6270)
- Removed `Aggregation.set_validate_args` option (#6175)
Full Changelog: https://github.com/pyg-team/pytorch_geometric/compare/2.2.0...2.3.0
New Contributors
- @CharlesGaydon made their first contribution in https://github.com/pyg-team/pytorch_geometric/pull/5117
- @mova made their first contribution in https://github.com/pyg-team/pytorch_geometric/pull/6123
- @Humbertzhang made their first contribution in https://github.com/pyg-team/pytorch_geometric/pull/6143
- @dongyukang1 made their first contribution in https://github.com/pyg-team/pytorch_geometric/pull/6159
- @edwag made their first contribution in https://github.com/pyg-team/pytorch_geometric/pull/6183
- @toenshoff made their first contribution in https://github.com/pyg-team/pytorch_geometric/pull/6198
- @shenoynikhil made their first contribution in https://github.com/pyg-team/pytorch_geometric/pull/6073
- @andreasbinder made their first contribution in https://github.com/pyg-team/pytorch_geometric/pull/6194
- @ZeynepP made their first contribution in https://github.com/pyg-team/pytorch_geometric/pull/6090
- @karimsr4 made their first contribution in https://github.com/pyg-team/pytorch_geometric/pull/6279
- @FlorentExtrality made their first contribution in https://github.com/pyg-team/pytorch_geometric/pull/6287
- @anton-bushuiev made their first contribution in https://github.com/pyg-team/pytorch_geometric/pull/6306
- @marekdedic made their first contribution in https://github.com/pyg-team/pytorch_geometric/pull/6242
- @davidbuterez made their first contribution in https://github.com/pyg-team/pytorch_geometric/pull/6301
- @ken2403 made their first contribution in https://github.com/pyg-team/pytorch_geometric/pull/5866
- @bwroblew made their first contribution in https://github.com/pyg-team/pytorch_geometric/pull/6376
- @manangoel99 made their first contribution in https://github.com/pyg-team/pytorch_geometric/pull/6396
- @binlee52 made their first contribution in https://github.com/pyg-team/pytorch_geometric/pull/6439
- @bartekxk made their first contribution in https://github.com/pyg-team/pytorch_geometric/pull/6463
- @forest1040 made their first contribution in https://github.com/pyg-team/pytorch_geometric/pull/6478
- @sigeisler made their first contribution in https://github.com/pyg-team/pytorch_geometric/pull/5972
- @steveazzolin made their first contribution in https://github.com/pyg-team/pytorch_geometric/pull/6541
- @JinL0 made their first contribution in https://github.com/pyg-team/pytorch_geometric/pull/6557
- @HelgeS made their first contribution in https://github.com/pyg-team/pytorch_geometric/pull/6560
- @jaypmorgan made their first contribution in https://github.com/pyg-team/pytorch_geometric/pull/6562
- @jreniecki made their first contribution in https://github.com/pyg-team/pytorch_geometric/pull/6570
- @tingyu66 made their first contribution in https://github.com/pyg-team/pytorch_geometric/pull/6388
- @gravins made their first contribution in https://github.com/pyg-team/pytorch_geometric/pull/6577
- @karuna-bhaila made their first contribution in https://github.com/pyg-team/pytorch_geometric/pull/6597
- @DylanSand made their first contribution in https://github.com/pyg-team/pytorch_geometric/pull/6634
- @soumik12345 made their first contribution in https://github.com/pyg-team/pytorch_geometric/pull/6641
- @sivonxay made their first contribution in https://github.com/pyg-team/pytorch_geometric/pull/6648
- @kimfalk made their first contribution in https://github.com/pyg-team/pytorch_geometric/pull/6651
- @dsciebu made their first contribution in https://github.com/pyg-team/pytorch_geometric/pull/6710
- @bmarenco made their first contribution in https://github.com/pyg-team/pytorch_geometric/pull/6715
- @RobDHess made their first contribution in https://github.com/pyg-team/pytorch_geometric/pull/6734
- @cemunds made their first contribution in https://github.com/pyg-team/pytorch_geometric/pull/6810
- @berkekisin made their first contribution in https://github.com/pyg-team/pytorch_geometric/pull/6838
- @varun-tandon made their first contribution in https://github.com/pyg-team/pytorch_geometric/pull/6904