Layers

layers.gcn

GCN layer. Adapted from https://github.com/PetarV-/DGI

class egc.module.layers.gcn.GCN(in_feats: int, out_feats: int, activation: str = 'prelu', bias: bool = True)[source]

Bases: Module

GCN Layer

Parameters:
  • in_feats (int) – input feature dimension

  • out_feats (int) – output feature dimension

  • activation (str) – activation function. Defaults to 'prelu'.

  • bias (bool) – whether to apply a bias after computing \hat{A}XW. Defaults to True.

training: bool
forward(features: Tensor, adj_norm: Tensor, sparse: bool = True) → Tuple[Tensor, Tensor][source]

Forward Propagation

Parameters:
  • features (torch.Tensor) – normalized 3D feature tensor of shape (1, num_nodes, in_feats)

  • adj_norm (torch.Tensor) – symmetrically normalized 2D adjacency tensor

  • sparse (bool) – whether adj_norm is a sparse tensor. Defaults to True.

Returns:

out (\hat{A}XW) and hidden_layer (XW)

Return type:

Tuple[torch.Tensor, torch.Tensor]
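
A minimal usage sketch on a toy graph (the shapes and the normalize_adj helper are illustrative, not part of egc), where \hat{A} = D^{-1/2}(A + I)D^{-1/2} is the symmetrically normalized adjacency with self-loops:

    import torch
    from egc.module.layers.gcn import GCN

    def normalize_adj(adj: torch.Tensor) -> torch.Tensor:
        # Illustrative helper: A_hat = D^{-1/2} (A + I) D^{-1/2}
        adj = adj + torch.eye(adj.size(0))      # add self-loops
        d_inv_sqrt = adj.sum(dim=1).pow(-0.5)   # D^{-1/2}
        return d_inv_sqrt.unsqueeze(1) * adj * d_inv_sqrt.unsqueeze(0)

    num_nodes, in_feats, out_feats = 5, 16, 8
    adj = torch.zeros(num_nodes, num_nodes)
    adj[[0, 1, 2, 3], [1, 2, 3, 4]] = 1.0       # a small path graph
    adj_norm = normalize_adj(adj + adj.T)       # symmetrize, then normalize

    features = torch.rand(1, num_nodes, in_feats)          # (1, nodes, feats)
    layer = GCN(in_feats, out_feats)
    out, hidden = layer(features, adj_norm, sparse=False)  # dense adjacency
    print(out.shape, hidden.shape)  # expected: torch.Size([1, 5, 8]) each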

layers.batch_gcn

GCN layer. Adapted from https://github.com/PetarV-/DGI

class egc.module.layers.batch_gcn.BATCH_GCN(in_ft, out_ft, bias=True)[source]

Bases: Module

GCN Layer

Parameters:
  • in_ft (int) – input feature dimension

  • out_ft (int) – output feature dimension

  • bias (bool) – whether to apply a bias after computing \hat{A}XW. Defaults to True.

forward(seq, adj, sparse=False)[source]

Forward Propagation

Parameters:
  • seq (torch.Tensor) – normalized 3D feature tensor of shape (batch, nodes, features)

  • adj (torch.Tensor) – symmetrically normalized 2D adjacency tensor

  • sparse (bool) – whether adj is a sparse tensor. Defaults to False.

Returns:

out (\hat{A}XW)

Return type:

torch.Tensor

training: bool
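
A minimal usage sketch (toy shapes are illustrative): unlike GCN above, BATCH_GCN takes a genuine batch dimension and applies the same normalized adjacency to every sample in the batch:

    import torch
    from egc.module.layers.batch_gcn import BATCH_GCN

    batch, num_nodes, in_ft, out_ft = 4, 5, 16, 8
    seq = torch.rand(batch, num_nodes, in_ft)   # (batch, nodes, features)

    # Symmetrically normalize a small path graph: D^{-1/2} (A + I) D^{-1/2}
    adj = torch.zeros(num_nodes, num_nodes)
    adj[[0, 1, 2, 3], [1, 2, 3, 4]] = 1.0
    adj = adj + adj.T + torch.eye(num_nodes)
    d_inv_sqrt = adj.sum(dim=1).pow(-0.5)
    adj_norm = d_inv_sqrt.unsqueeze(1) * adj * d_inv_sqrt.unsqueeze(0)

    layer = BATCH_GCN(in_ft, out_ft)
    out = layer(seq, adj_norm)       # sparse=False is the default here
    print(out.shape)                 # expected: torch.Size([4, 5, 8])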

layers.multilayer_dnn

Multilayer DNN

class egc.module.layers.multilayer_dnn.MultiLayerDNN(in_feats: int, out_feats_list: List[int], bias: List[bool] | None = None, activation: List[str] | None = None)[source]

Bases: Module

MultiLayer Deep Neural Networks.

Parameters:
  • in_feats (int) – Input feature dimension.

  • out_feats_list (List[int]) – List of hidden units dimensions.

  • bias (List[bool], optional) – Whether to apply a bias at each layer. Defaults to True for every layer.

  • activation (List[str], optional) – Activation functions to apply at each layer. Defaults to ReLU for every layer.

forward(x)[source]

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

training: bool
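
A minimal usage sketch (dimensions are illustrative): each entry of out_feats_list adds one linear layer, so the model below maps 16 → 32 → 8 with the documented ReLU default after each layer:

    import torch
    from egc.module.layers.multilayer_dnn import MultiLayerDNN

    # Two layers: 16 -> 32 -> 8
    model = MultiLayerDNN(in_feats=16, out_feats_list=[32, 8])
    x = torch.rand(10, 16)   # (num_samples, in_feats)
    out = model(x)
    print(out.shape)         # expected: torch.Size([10, 8])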

layers.multilayer_gnn

MultiLayer GraphSAGE

class egc.module.layers.multilayer_gnn.MultiLayerGNN(in_feats: int, out_feats_list: List[int], aggregator_type: str = 'gcn', bias: bool = True, activation: List[str] | None = None, dropout: float = 0.0)[source]

Bases: Module

MultiLayer GraphSAGE supporting different aggregator types.

Parameters:
  • in_feats (int) – Input feature dimension.

  • out_feats_list (List[int]) – List of hidden units dimensions.

  • aggregator_type (str, optional) – Aggregator type of the SAGE layers. Defaults to 'gcn'.

  • bias (bool, optional) – Whether to apply bias. Defaults to True.

  • activation (List[str], optional) – Activation functions to apply at each layer. Defaults to None.

  • dropout (float, optional) – Dropout rate. Defaults to 0.0.

forward(blocks, x, edge_weight=None)[source]

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

training: bool
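
A full-graph usage sketch. It assumes the layers are DGL SAGEConv modules and that blocks accepts one graph (or one message-flow-graph block from a neighbor sampler) per layer; both points are assumptions about the egc implementation rather than a documented contract:

    import dgl
    import torch
    from egc.module.layers.multilayer_gnn import MultiLayerGNN

    # A tiny undirected path graph on 5 nodes
    src = torch.tensor([0, 1, 2, 3])
    dst = torch.tensor([1, 2, 3, 4])
    g = dgl.graph((torch.cat([src, dst]), torch.cat([dst, src])), num_nodes=5)

    model = MultiLayerGNN(in_feats=16, out_feats_list=[32, 8])
    x = torch.rand(5, 16)    # (num_nodes, in_feats)

    # Assumption: for full-graph inference, one graph per layer stands in for blocks.
    out = model([g, g], x)
    print(out.shape)         # expected: torch.Size([5, 8])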

layers.inner_product_de

Inner product decoder

class egc.module.layers.inner_product_de.InnerProductDecoder(dropout: float = 0.0, act=torch.sigmoid)[source]

Bases: Module

Decoder using the inner product for prediction.

forward(z)[source]

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

training: bool
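
The decoder reconstructs edge scores from node embeddings as act(ZZ^T), with sigmoid as the default activation, as in the standard graph autoencoder. A functionally equivalent sketch (illustrative, not the egc source):

    import torch
    import torch.nn.functional as F

    def inner_product_decode(z: torch.Tensor, dropout: float = 0.0) -> torch.Tensor:
        # Reconstruct the adjacency: sigmoid(Z Z^T), with optional dropout on Z
        z = F.dropout(z, p=dropout, training=True)
        return torch.sigmoid(z @ z.t())

    z = torch.rand(5, 8)              # (num_nodes, embedding_dim)
    adj_rec = inner_product_decode(z)
    print(adj_rec.shape)              # expected: torch.Size([5, 5])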