Commit ac71234

update docs

1 parent 6e17284 commit ac71234

8 files changed: +88 −22 lines changed

docs/bibliography.bib

Lines changed: 14 additions & 0 deletions

@@ -183,3 +183,17 @@ @inproceedings{Hamilton2017
   title = {Inductive Representation Learning on Large Graphs},
   year = {2017},
 }
+
+@inproceedings{Satorras2021,
+  abstract = {This paper introduces a new model to learn graph neural networks equivariant to rotations, translations, reflections and permutations called E(n)-Equivariant Graph Neural Networks (EGNNs). In contrast with existing methods, our work does not require computationally expensive higher-order representations in intermediate layers while it still achieves competitive or better performance. In addition, whereas existing methods are limited to equivariance on 3 dimensional spaces, our model is easily scaled to higher-dimensional spaces. We demonstrate the effectiveness of our method on dynamical systems modelling, representation learning in graph autoencoders and predicting molecular properties.},
+  author = {Victor Garcia Satorras and Emiel Hoogeboom and Max Welling},
+  editor = {Marina Meila and Tong Zhang},
+  booktitle = {Proceedings of the 38th International Conference on Machine Learning},
+  month = {7},
+  pages = {9323--9332},
+  publisher = {PMLR},
+  title = {E(n) Equivariant Graph Neural Networks},
+  volume = {139},
+  url = {http://arxiv.org/abs/2102.09844},
+  year = {2021},
+}

docs/make.jl

Lines changed: 3 additions & 1 deletion

@@ -42,8 +42,10 @@ makedocs(
         "Dynamic Graph Update" => "dynamicgraph.md",
         "Manual" => [
             "FeaturedGraph" => "manual/featuredgraph.md",
-            "Graph Convolutional Layers" => "manual/conv.md",
+            "Graph Convolutional Layers" => "manual/graph_conv.md",
             "Graph Pooling Layers" => "manual/pool.md",
+            "Group Convolutional Layers" => "manual/group_conv.md",
+            "Positional Encoding Layers" => "manual/positional.md",
             "Embeddings" => "manual/embedding.md",
             "Models" => "manual/models.md",
             "Linear Algebra" => "manual/linalg.md",

docs/src/manual/featuredgraph.md

Lines changed: 2 additions & 0 deletions

@@ -9,6 +9,8 @@ GraphSignals.edge_feature
 GraphSignals.has_edge_feature
 GraphSignals.global_feature
 GraphSignals.has_global_feature
+GraphSignals.positional_feature
+GraphSignals.has_positional_feature
 GraphSignals.subgraph
 GraphSignals.ConcreteFeaturedGraph
 ```
docs/src/manual/{conv.md → graph_conv.md}

File renamed without changes.

docs/src/manual/group_conv.md

Lines changed: 19 additions & 0 deletions

@@ -0,0 +1,19 @@
+# Group Convolutional Layers
+
+## ``E(n)``-equivariant Convolutional Layer
+
+It employs a message-passing scheme and can be defined by the following functions:
+
+- message function (Eq. 3 from the paper): ``m_{ij} = \phi_e(h_i^l, h_j^l, ||x_i^l - x_j^l||^2, a_{ij})``
+- aggregate (Eq. 5 from the paper): ``m_i = \sum_j m_{ij}``
+- update function (Eq. 6 from the paper): ``h_i^{l+1} = \phi_h(h_i^l, m_i)``
+
+where ``h_i^l`` and ``h_j^l`` denote the node features of nodes ``i`` and ``j``, respectively, in the ``l``-th layer, and ``x_i^l`` and ``x_j^l`` denote their positional features. ``a_{ij}`` is the edge feature for edge ``(i,j)``, and ``\phi_e`` and ``\phi_h`` are the neural networks for edges and nodes, respectively.
+
+```@docs
+EEquivGraphConv
+```
+
+Reference: [Satorras2021](@cite)
+
+---
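To make the three functions concrete, here is a minimal plain-Julia sketch of one EGNN step for a single node. It uses Flux only for the `Dense` networks; `phi_e`, `phi_h`, the toy graph, and all dimensions are illustrative assumptions, not the package API:

```julia
using Flux

in_dim, edge_dim, msg_dim, out_dim = 3, 2, 4, 3
phi_e = Dense(2 * in_dim + 1 + edge_dim => msg_dim)  # ϕ_e consumes (h_i, h_j, ‖x_i − x_j‖², a_ij)
phi_h = Dense(in_dim + msg_dim => out_dim)           # ϕ_h consumes (h_i, m_i)

h = [rand(Float32, in_dim) for _ in 1:3]             # node features h^l
x = [rand(Float32, 2) for _ in 1:3]                  # positional features x^l
a = Dict((1, 2) => rand(Float32, edge_dim),          # toy edge features a_ij
         (1, 3) => rand(Float32, edge_dim))

# message function (Eq. 3): m_ij = ϕ_e(h_i, h_j, ‖x_i − x_j‖², a_ij)
message(i, j) = phi_e(vcat(h[i], h[j], sum(abs2, x[i] - x[j]), a[(i, j)]))

# aggregate (Eq. 5) and update (Eq. 6) for node 1 with neighbors 2 and 3
m_1 = sum(message(1, j) for j in (2, 3))
h_1_new = phi_h(vcat(h[1], m_1))
```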

docs/src/manual/positional.md

Lines changed: 19 additions & 0 deletions

@@ -0,0 +1,19 @@
+# Positional Encoding Layers
+
+## ``E(n)``-equivariant Positional Encoding Layer
+
+It employs a message-passing scheme and can be defined by the following functions:
+
+- message function: ``y_{ij}^l = (x_i^l - x_j^l)\phi_x(m_{ij})``
+- aggregate: ``y_i^l = \frac{1}{M} \sum_{j \in \mathcal{N}(i)} y_{ij}^l``
+- update function: ``x_i^{l+1} = x_i^l + y_i^l``
+
+where ``x_i^l`` and ``x_j^l`` denote the positional features of nodes ``i`` and ``j``, respectively, in the ``l``-th layer, ``\phi_x`` is the neural network for positional encoding, and ``m_{ij}`` is the edge feature for edge ``(i,j)``. ``y_{ij}^l`` and ``y_i^l`` represent the encoded and aggregated positional features, respectively, and ``M`` denotes the number of neighbors of node ``i``.
+
+```@docs
+EEquivGraphPE
+```
+
+Reference: [Satorras2021](@cite)
+
+---
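A matching toy sketch of the positional update, again outside the package API; here ϕ_x produces a scalar weight as in the paper, and `phi_x`, the edge features, and the dimensions are all assumptions:

```julia
using Flux
using Statistics: mean

edge_dim, pos_dim = 2, 3
phi_x = Dense(edge_dim => 1)                # ϕ_x maps an edge feature m_ij to a scalar

x = [rand(Float32, pos_dim) for _ in 1:3]   # positional features x^l
m = Dict((1, 2) => rand(Float32, edge_dim), # toy edge features m_ij
         (1, 3) => rand(Float32, edge_dim))

# message: y_ij = (x_i − x_j) ϕ_x(m_ij)
y(i, j) = (x[i] - x[j]) .* only(phi_x(m[(i, j)]))

# aggregate (mean over the M = 2 neighbors) and residual update for node 1
y_1 = mean(y(1, j) for j in (2, 3))
x_1_new = x[1] + y_1
```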

src/layers/group_conv.jl

Lines changed: 7 additions & 21 deletions

@@ -9,34 +9,20 @@ E(n)-equivariant graph neural network layer.
 - `out_dim`: the output of the layer will have dimension `out_dim` + (dimension of input vector - `in_dim`).
 - `pos_dim::Int`: dimension of positional encoding.
 - `edge_dim::Int`: dimension of edge feature.
-- `init`: neural network initialization function, should be compatible with `Flux.Dense`.
+- `init`: neural network initialization function.

 # Examples

 ```jldoctest
-julia> in_dim, int_dim, out_dim = 3,6,5
-(3, 5, 5)
+julia> in_dim, out_dim, pos_dim = 3, 5, 2
+(3, 5, 2)

-julia> egnn = EEquivGraphConv(in_dim, int_dim, out_dim)
-EEquivGraphConv{Dense{typeof(identity), Matrix{Float32}, Vector{Float32}}, Dense{typeof(identity), Matrix{Float32}, Vector{Float32}}, Dense{typeof(identity), Matrix{Float32}, Vector{Float32}}}(Dense(8 => 5), Dense(5 => 1), Dense(8 => 5), 3, 5, 5)
-
-julia> m_len = 2*in_dim + 2
-8
-
-julia> nn_edge = Flux.Dense(m_len, int_dim)
-Dense(8 => 5) # 45 parameters
-
-julia> nn_x = Flux.Dense(int_dim, 1)
-Dense(5 => 1) # 6 parameters
-
-julia> nn_h = Flux.Dense(in_dim + int_dim, out_dim)
-Dense(8 => 5) # 45 parameters
-
-julia> egnn = EEquivGraphConv(in_dim, nn_edge, nn_x, nn_h)
-EEquivGraphConv{Dense{typeof(identity), Matrix{Float32}, Vector{Float32}}, Dense{typeof(identity), Matrix{Float32}, Vector{Float32}}, Dense{typeof(identity), Matrix{Float32}, Vector{Float32}}}(Dense(8 => 5), Dense(5 => 1), Dense(8 => 5), 3, 5, 5)
+julia> egnn = EEquivGraphConv(in_dim=>out_dim, pos_dim, in_dim)
+EEquivGraphConv(ϕ_edge=Dense(10 => 5), ϕ_x=Dense(5 => 2), ϕ_h=Dense(8 => 5))
 ```
-"""

+See also [`WithGraph`](@ref) for training the layer with a static graph and [`EEquivGraphPE`](@ref) for positional encoding.
+"""
 struct EEquivGraphConv{X,E,H}
     pe::X
     nn_edge::E
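As a sanity check, the `Dense` sizes in the new doctest follow from the message-passing definitions in the manual page above. A sketch of that arithmetic, assuming ϕ_edge consumes the concatenation (h_i, h_j, ‖x_i − x_j‖², a_ij) and that messages have size `out_dim`:

```julia
in_dim, out_dim, pos_dim, edge_dim = 3, 5, 2, 3  # as in EEquivGraphConv(3=>5, 2, 3)

m_len = 2 * in_dim + 1 + edge_dim  # ϕ_edge input: h_i, h_j, ‖x_i − x_j‖², a_ij → 10
# ϕ_edge = Dense(10 => 5)          # emits a message of size out_dim
# ϕ_x    = Dense(5 => 2)           # message (out_dim) → positional update (pos_dim)
h_len = in_dim + out_dim           # ϕ_h input: h_i plus the aggregated message → 8
# ϕ_h    = Dense(8 => 5)
```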

src/layers/positional.jl

Lines changed: 24 additions & 0 deletions

@@ -7,6 +7,30 @@ abstract type AbstractPE end

 positional_encode(l::AbstractPE, args...) = throw(ErrorException("positional_encode function for $l is not implemented."))

+"""
+    EEquivGraphPE(in_dim=>out_dim; init=glorot_uniform, bias=true)
+
+E(n)-equivariant positional encoding layer.
+
+# Arguments
+
+- `in_dim::Int`: dimension of input edge feature.
+- `out_dim::Int`: dimension of output positional feature.
+- `init`: neural network initialization function.
+- `bias::Bool`: whether to learn an additive bias.
+
+# Examples
+
+```jldoctest
+julia> in_dim_edge, out_dim = 2, 5
+(2, 5)
+
+julia> l = EEquivGraphPE(in_dim_edge=>out_dim)
+EEquivGraphPE(2 => 5)
+```
+
+See also [`EEquivGraphConv`](@ref).
+"""
 struct EEquivGraphPE{X} <: MessagePassing
     nn::X
 end
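Because `positional_encode` has only a throwing fallback, a custom encoding hooks in by subtyping `AbstractPE` and adding a method. A minimal hypothetical sketch; `IdentityPE` is not part of the package:

```julia
# Hypothetical no-op encoding, for illustration only.
struct IdentityPE <: AbstractPE end

# Adding a method replaces the error fallback for this type.
positional_encode(::IdentityPE, x) = x

# positional_encode(IdentityPE(), rand(Float32, 2, 5))  # returns the input unchanged
```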
