egc.module.layers package
Submodules
egc.module.layers.batch_gcn module
GCN Layer Adapted from: https://github.com/PetarV-/DGI
- class egc.module.layers.batch_gcn.BATCH_GCN(in_ft, out_ft, bias=True)[source]
Bases: Module
GCN Layer
- Parameters:
in_ft (int) – input feature dimension
out_ft (int) – output feature dimension
bias (bool) – whether to apply bias after computing hat{A}XW. Defaults to True.
- forward(seq, adj, sparse=False)[source]
Forward Propagation
- Parameters:
seq (torch.Tensor) – normalized 3D features tensor. Shape of seq: (batch, nodes, features)
adj (torch.Tensor) – symmetrically normalized 2D adjacency tensor
sparse (bool) – whether the input adjacency is a sparse tensor. Defaults to False.
- Returns:
hat{A}XW
- Return type:
out (torch.Tensor)
- training: bool
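A minimal usage sketch, assuming the class is importable from the path shown above and accepts a dense adjacency when sparse=False; shapes follow the Parameters section:

    # Hypothetical usage sketch of BATCH_GCN; shapes follow the docs above.
    import torch
    from egc.module.layers.batch_gcn import BATCH_GCN

    batch, n_nodes, in_ft, out_ft = 2, 5, 16, 8
    layer = BATCH_GCN(in_ft, out_ft, bias=True)

    seq = torch.randn(batch, n_nodes, in_ft)  # normalized 3D feature tensor (batch, nodes, features)
    adj = torch.eye(n_nodes)                  # toy symmetrically normalized 2D adjacency
    out = layer(seq, adj, sparse=False)       # hat{A}XW, shape (batch, n_nodes, out_ft)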
egc.module.layers.cluster_secomm module
ClusterModel for SEComm
- class egc.module.layers.cluster_secomm.SECommClusterModel(n_hid1, n_hid2, n_class, dropout)[source]
Bases: Module
ClusterModel for SEComm
- forward(x1: Tensor) Tensor[source]
Defines the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for the forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
- training: bool
egc.module.layers.disc_communitygan module
Discriminator Layer Adapted from: https://github.com/SamJia/CommunityGAN
- class egc.module.layers.disc_communitygan.DiscComGAN(n_nodes: int, node_emd_init: Tensor, n_epochs: int, dis_interval: int, update_ratio: float, n_sample_dis: int, lr_dis: float, l2_coef: float, batch_size: int, max_value: int)[source]
Bases: Module
Discriminator of CommunityGAN
- Parameters:
n_nodes (int) – num of nodes.
node_emd_init (torch.Tensor) – node embeddings in AGM format, pretrained in advance.
n_epochs (int) – num of training epochs.
dis_interval (int) – interval for discriminator.
update_ratio (float) – update ratio.
n_sample_dis (int) – num of samples for discriminator.
lr_dis (float) – learning rate.
l2_coef (float) – l2 coef.
batch_size (int) – batch size
max_value (int) – max value for embedding matrix.
- prepare_data_for_d(sampling: Callable, id2motifs: List[List[Tuple]]) Tuple[List[Tuple], List[List]][source]
generate positive and negative samples for the discriminator
- Parameters:
sampling (Callable) – sampling function.
id2motifs (List[List[Tuple]]) – list of motifs indexed by node id.
- Returns:
(list of motifs sampled, list of labels)
- Return type:
Tuple[List[Tuple], List[List]]
- forward(motifs: List[Tuple], label: List[List] | None = None) Tuple[Tensor, Tensor][source]
- Parameters:
motifs (List[Tuple]) – motifs
label (List[List], optional) – labels. Defaults to None.
- Returns:
(loss, reward)
- Return type:
Tuple[torch.Tensor, torch.Tensor]
- get_reward(motifs: List[Tuple], label: List[List] | None = None) ndarray[source]
get reward
- Parameters:
motifs (List[Tuple]) – motifs.
label (List[List], optional) – labels. Defaults to None.
- Returns:
reward.
- Return type:
np.ndarray
- fit(sampling: Callable, id2motifs: List[List[Tuple]]) None[source]
- Parameters:
sampling (Callable) – sampling function.
id2motifs (List[List[Tuple]]) – list of motifs indexed by node id.
- get_embedding() Tensor[source]
Get the embeddings (graph or node level).
- Returns:
embedding.
- Return type:
(torch.Tensor)
- training: bool
egc.module.layers.disc_dgi module
Discriminator Layer Adapted from: https://github.com/PetarV-/DGI
- class egc.module.layers.disc_dgi.DiscDGI(hidden_units: int = 512, bias: bool = True)[source]
Bases: Module
Discriminator for DGI
- Parameters:
hidden_units (int) – hidden units dimension. Defaults to 512.
bias (bool) – whether to apply bias to xWy. Defaults to True.
- forward(g: Tensor, h: Tensor, h_shf: Tensor) Tensor[source]
Forward Propagation
- Parameters:
g (torch.Tensor) – avg readout of whole graph, 1D tensor.
h (torch.Tensor) – node embedding. 3D tensor.
h_shf (torch.Tensor) – shuffled node embedding as corrupted graph node embedding. 3D tensor.
- Returns:
concat of pos and neg disc output.
- Return type:
(torch.Tensor)
- training: bool
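A hedged usage sketch of calling this DGI-style discriminator; the readout construction below is only illustrative (in the actual pipeline g comes from the encoder's readout):

    # Illustrative sketch of calling DiscDGI; tensor shapes follow the docs above.
    import torch
    from egc.module.layers.disc_dgi import DiscDGI

    hidden = 512
    disc = DiscDGI(hidden_units=hidden, bias=True)

    h = torch.randn(1, 10, hidden)                 # node embeddings of the original graph (3D)
    h_shf = h[:, torch.randperm(10), :]            # shuffled embeddings act as the corrupted view
    g = torch.sigmoid(h.mean(dim=1).squeeze(0))    # average readout of the whole graph (1D)
    logits = disc(g, h, h_shf)                     # concatenated positive/negative discriminator scores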
egc.module.layers.disc_gmi module
Discriminator Layer Adapted from: https://github.com/PetarV-/DGI & https://github.com/zpeng27/GMI
- class egc.module.layers.disc_gmi.DiscGMI(in1_features: int, in2_features: int, out_features: int = 1, activation: str = 'sigmoid', bias: bool = True)[source]
Bases: Module
Discriminator Layer
- Parameters:
in1_features (int) – size of each first input sample.
in2_features (int) – size of each second input sample.
out_features (int) – size of each output sample. Defaults to 1.
activation (str) – activation of xWy. Defaults to sigmoid.
bias (bool) – whether to apply bias to xWy. Defaults to True.
- forward(in1_features: Tensor, in2_features: Tensor, neg_sample_list: int | None = None)[source]
Forward Propagation
- Parameters:
in1_features (torch.Tensor) – first input sample in shape of [1, xx, xx].
in2_features (torch.Tensor) – second input sample in shape of [1, xx, xx].
neg_sample_list (List, optional) – list of negative-sample node indices for the first input. Defaults to None.
- Returns:
output of discriminator.
- Return type:
s_c, s_c_neg (torch.Tensor)
- training: bool
egc.module.layers.disc_mvgrl module
Discriminator Layer Adapted from: https://github.com/PetarV-/DGI
- class egc.module.layers.disc_mvgrl.DiscMVGRL(n_h)[source]
Bases: Module
Discriminator for MVGRL and GDCL
- Parameters:
n_h (int) – hidden units dimension.
- forward(c1, c2, h1, h2, h3, h4)[source]
Forward Propagation
- Parameters:
c1 (torch.Tensor) – readout of the raw graph via the readout function
c2 (torch.Tensor) – readout of the diffused graph via the readout function
h1 (torch.Tensor) – node embedding of the raw graph from one GCN layer
h2 (torch.Tensor) – node embedding of the diffused graph from one GCN layer
h3 (torch.Tensor) – node embedding of the raw graph with shuffled features, from one GCN layer
h4 (torch.Tensor) – node embedding of the diffused graph with shuffled features, from one GCN layer
- Returns:
probability of a node being positive or negative
- Return type:
logits (torch.Tensor)
- training: bool
egc.module.layers.gat_daegc module
GAT for DAEGC
- class egc.module.layers.gat_daegc.GAT(num_features, hidden_size, embedding_size, alpha)[source]
Bases: Module
GAT for DAEGC
- Parameters:
num_features (int) – input feature dimension.
hidden_size (int) – number of units in the hidden layer.
embedding_size (int) – output embedding dimension.
alpha (float) – Alpha for the leaky_relu.
- forward(x, adj, M)[source]
Forward Propagation
- Parameters:
x (torch.Tensor) – features of nodes
adj (torch.Tensor) – adj matrix
M (torch.Tensor) – the topological relevance of node j to node i up to t orders.
- Returns:
Reconstructed adj matrix and latent representation
- Return type:
A_pred, z (torch.Tensor, torch.Tensor)
- dot_product_decode(Z)[source]
Dot-product decode: reconstruct the adjacency matrix from node embeddings.
- Parameters:
Z (torch.Tensor) – node embedding.
- Returns:
Reconstructed adj matrix
- Return type:
torch.Tensor
- training: bool
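A sketch of calling the DAEGC GAT encoder. M below is only a stand-in (in DAEGC it is the t-order topological relevance matrix), and the return unpacking follows the Returns section above:

    # Hypothetical usage of the DAEGC GAT encoder.
    import torch
    from egc.module.layers.gat_daegc import GAT

    n_nodes, num_features = 6, 32
    model = GAT(num_features=num_features, hidden_size=256, embedding_size=16, alpha=0.2)

    x = torch.randn(n_nodes, num_features)  # node features
    adj = torch.eye(n_nodes)                # toy adjacency matrix
    M = adj.clone()                         # stand-in for the t-order topological relevance matrix
    A_pred, z = model(x, adj, M)            # reconstructed adjacency and latent representation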
- class egc.module.layers.gat_daegc.GATLayer(in_features, out_features, alpha=0.2)[source]
Bases: Module
Simple GAT layer, similar to https://arxiv.org/abs/1710.10903
- Parameters:
in_features (int) – dim num of input
out_features (int) – dim num of output
alpha (float) – Alpha for the leaky_relu.
- forward(x, adj, M, concat=True)[source]
Forward Propagation
- Parameters:
x (torch.Tensor) – features of nodes
adj (torch.Tensor) – adj matrix
M (torch.Tensor) – the topological relevance of node j to node i up to t orders.
concat (bool, optional) – whether to apply the output nonlinearity, as in the reference GAT implementation. Defaults to True.
- Returns:
latent representation
- Return type:
(torch.Tensor)
- training: bool
egc.module.layers.gcl_sublime module
Graph Contrastive Learning Model for SUBLIME
- class egc.module.layers.gcl_sublime.GCL_SUBLIME(nlayers, in_dim, hidden_dim, emb_dim, proj_dim, dropout, dropout_adj, sparse)[source]
Bases: Module
Graph contrastive learning of SUBLIME
- Parameters:
nlayers (int) – Number of GCN layers
in_dim (int) – Input feature dimension
hidden_dim (int) – Hidden dimension
emb_dim (int) – Embedding dimension
proj_dim (int) – Projection dimension
dropout (float) – Dropout rate
dropout_adj (float) – Edge drop rate
sparse (int) – Whether to use sparse mode
- forward(x, Adj_, branch=None)[source]
Defines the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for the forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
- training: bool
- class egc.module.layers.gcl_sublime.SparseDropout(dprob=0.5)[source]
Bases: Module
Sparse Dropout
- Parameters:
dprob (float) – dropout probability. Defaults to 0.5.
- forward(x)[source]
Defines the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for the forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
- training: bool
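For reference, a generic sparse-dropout computation looks like the sketch below (an illustration of the idea, not necessarily the exact implementation): drop individual non-zero values with probability dprob and rescale the survivors.

    import torch

    def sparse_dropout(x: torch.Tensor, dprob: float = 0.5, training: bool = True) -> torch.Tensor:
        """Drop non-zero entries of a sparse COO tensor with probability dprob."""
        if not training or dprob == 0.0:
            return x
        x = x.coalesce()
        keep = torch.rand(x.values().size(0), device=x.device) > dprob
        values = x.values()[keep] / (1.0 - dprob)   # rescale survivors to keep the expectation
        indices = x.indices()[:, keep]
        return torch.sparse_coo_tensor(indices, values, x.shape)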
- class egc.module.layers.gcl_sublime.GraphEncoder(nlayers, in_dim, hidden_dim, emb_dim, proj_dim, dropout, dropout_adj, sparse)[source]
Bases: Module
Graph Encoder of GSL model
- Parameters:
nlayers (int) – Number of GCN layers
in_dim (int) – Input feature dimension
hidden_dim (int) – Hidden dimension
emb_dim (int) – Embedding dimension
proj_dim (int) – Projection dimension
dropout (float) – Dropout rate
dropout_adj (float) – Edge drop rate
sparse (int) – Whether to use sparse mode
- forward(x, Adj_, branch=None)[source]
Defines the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for the forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
- training: bool
egc.module.layers.gcn module
GCN Layer Adapted from: https://github.com/PetarV-/DGI
- class egc.module.layers.gcn.GCN(in_feats: int, out_feats: int, activation: str = 'prelu', bias: bool = True)[source]
Bases: Module
GCN Layer
- Parameters:
in_feats (int) – input feature dimension
out_feats (int) – output feature dimension
activation (str) – activation function. Defaults to prelu.
bias (bool) – whether to apply bias after computing hat{A}XW. Defaults to True.
- training: bool
- forward(features: Tensor, adj_norm: Tensor, sparse: bool = True) Tuple[Tensor, Tensor][source]
Forward Propagation
- Parameters:
features (torch.Tensor) – normalized 3D features tensor in shape of torch.Size([1, xx, xx])
adj_norm (torch.Tensor) – symmetrically normalized 2D adjacency tensor
sparse (bool) – whether the input adjacency is a sparse tensor. Defaults to True.
- Returns:
hat{A}XW and XW
- Return type:
out, hidden_layer (torch.Tensor, torch.Tensor)
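A usage sketch that also spells out the symmetric normalization hat{A} = D^{-1/2}(A + I)D^{-1/2} mentioned above; it assumes a dense adjacency is accepted when sparse=False:

    import torch
    from egc.module.layers.gcn import GCN

    n_nodes, in_feats, out_feats = 8, 32, 16
    layer = GCN(in_feats, out_feats, activation="prelu", bias=True)

    # Build a toy ring graph and symmetrically normalize it: hat{A} = D^{-1/2} (A + I) D^{-1/2}
    idx = torch.arange(n_nodes)
    A = torch.zeros(n_nodes, n_nodes)
    A[idx, (idx + 1) % n_nodes] = 1.0
    A = A + A.t()                                   # undirected ring graph
    A_hat = A + torch.eye(n_nodes)                  # add self-loops
    d_inv_sqrt = A_hat.sum(dim=1).pow(-0.5)
    adj_norm = d_inv_sqrt.unsqueeze(1) * A_hat * d_inv_sqrt.unsqueeze(0)

    features = torch.randn(1, n_nodes, in_feats)            # shape [1, nodes, feats]
    out, hidden = layer(features, adj_norm, sparse=False)   # hat{A}XW and XW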
egc.module.layers.gcn_sublime module
GCN Layer for SUBLIME model
- class egc.module.layers.gcn_sublime.GCNConv_dgl(input_size, output_size)[source]
Bases: Module
GCN layer using dgl.
- Parameters:
input_size (int) – input size
output_size (int) – output size
- forward(x, g)[source]
Defines the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for the forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
- training: bool
- class egc.module.layers.gcn_sublime.GCNConv_dense(input_size, output_size)[source]
Bases: Module
Dense GCN layer.
- Parameters:
input_size (int) – input size
output_size (int) – output size
- forward(x, A, sparse=False)[source]
Defines the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for the forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
- training: bool
egc.module.layers.gene_communitygan module
Generator Layer Adapted from: https://github.com/SamJia/CommunityGAN
- class egc.module.layers.gene_communitygan.GeneComGAN(n_nodes: int, node_emd_init: Tensor, n_epochs: int, gen_interval: int, update_ratio: float, n_sample_gen: int, lr_gen: float, l2_coef: float, batch_size: int, max_value: int)[source]
Bases: Module
Generator of CommunityGAN
- Parameters:
n_nodes (int) – num of nodes.
node_emd_init (torch.Tensor) – node embeddings in AGM format.
n_epochs (int) – num of training epochs.
gen_interval (int) – interval of generator.
update_ratio (float) – update ratio.
n_sample_gen (int) – num of samples for generator.
lr_gen (float) – learning rate.
l2_coef (float) – l2 coef.
batch_size (int) – batch size.
max_value (int) – max value for embedding matrix.
- prepare_data_for_g(rewardFunc: Callable, sampling: Callable) Tuple[List[Tuple], List[List]][source]
Sample motif subsets for the generator
- Parameters:
rewardFunc (Callable) – function of getting discriminator reward.
sampling (Callable) – sampling function.
- Returns:
(list of motifs sampled, list of labels)
- Return type:
Tuple[List[Tuple], List[List]]
- forward(motifs: List[Tuple], reward: List[List]) Tensor[source]
- Parameters:
motifs (List[Tuple]) – motifs.
reward (List[List]) – reward.
- Returns:
loss.
- Return type:
torch.Tensor
- fit(rewardFunc: Callable, sampling: Callable) None[source]
- Parameters:
rewardFunc (Callable) – function for getting discriminator reward.
sampling (Callable) – sampling function.
- get_embedding() Tensor[source]
Get the embeddings (graph or node level).
- Returns:
embedding.
- Return type:
(torch.Tensor)
- training: bool
egc.module.layers.gmi module
GMI Adapted From: https://github.com/zpeng27/GMI
- egc.module.layers.gmi.avg_neighbor(features: Tensor, adj_orig: csr_matrix) Tensor[source]
Aggregate Neighborhood Using Original Adjacency Matrix
- Parameters:
features (torch.Tensor) – 2D row-normalized features.
adj_orig (scipy.sparse.csr_matrix) – row-averaged adjacency matrix.
- Returns:
row-averaged aggregation of the neighborhood.
- Return type:
(torch.Tensor)
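For intuition, row-averaged neighborhood aggregation amounts to D^{-1}AX; the sketch below illustrates that computation generically (it is not necessarily the exact implementation, which already receives a row-averaged adjacency):

    import numpy as np
    import scipy.sparse as sp
    import torch

    def row_averaged_aggregation(features: torch.Tensor, adj: sp.csr_matrix) -> torch.Tensor:
        """Compute D^{-1} A X: each row becomes the mean of its neighbors' features."""
        deg = np.asarray(adj.sum(axis=1)).ravel()
        deg[deg == 0] = 1.0                        # guard isolated nodes against division by zero
        adj_avg = sp.diags(1.0 / deg) @ adj        # row-averaged adjacency D^{-1} A
        return torch.from_numpy(adj_avg.dot(features.numpy())).float()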
- class egc.module.layers.gmi.GMI(in_features: int, hidden_units: int, gcn_depth: int = 2, activation: str = 'prelu')[source]
Bases: Module
- Parameters:
in_features (int) – input feature dimension.
hidden_units (int) – output hidden units dimension.
gcn_depth (int) – number of stacked GCN layers. Defaults to 2.
activation (str) – activation of the GCN layers. Defaults to prelu.
- forward(features_norm: Tensor, adj_orig: csr_matrix, adj_norm: Tensor, neg_sample_list: List)[source]
Forward Propagation
- Parameters:
features_norm (torch.Tensor) – row-normalized features.
adj_orig (sp.csr_matrix) – row-averaged adjacency matrix.
adj_norm (torch.Tensor) – symmetrically normalized sparse tensor adj.
neg_sample_list (List) – list of (possibly repeated) shuffled node-index lists used for negative sampling.
- Returns:
D_w(h_i, x_i), D_w(h_i, x'_i), D_w(h_i, x_j), D_w(h_i, x'_j), w_{ij}
- Return type:
mi_pos, mi_neg, local_mi_pos, local_mi_neg, adj_rebuilt (torch.Tensor)
- get_embedding(features_norm, adj_norm)[source]
Get Node Embedding
- Parameters:
features_norm (torch.Tensor) – row-normalized features.
adj_norm (torch.Tensor) – symmetrically normalized adj.
- Returns:
node embedding.
- Return type:
(torch.Tensor)
- training: bool
egc.module.layers.grace_secomm module
GraceModel for SEComm
- class egc.module.layers.grace_secomm.SECommEncoder(in_channels: int, out_channels: int, activation, base_model=<class 'dgl.nn.pytorch.conv.graphconv.GraphConv'>, k: int = 2)[source]
Bases: Module
SECommEncoder: a k-layer GCN encoder
- forward(g: DGLGraph, feats: Tensor)[source]
Defines the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for the forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
- training: bool
- class egc.module.layers.grace_secomm.SECommGraceModel(encoder: SECommEncoder, num_hidden: int, num_proj_hidden: int, tau: float = 0.5)[source]
Bases: Module
GraceModel for SEComm
- forward(g: DGLGraph, feats: Tensor) Tensor[source]
Defines the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for the forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
- training: bool
egc.module.layers.graph_learner module
Graph Learners for SUBLIME model
- class egc.module.layers.graph_learner.FGP_learner(features, k, knn_metric, i, sparse)[source]
Bases: Module
FGP learner
- Parameters:
features (torch.Tensor) – node features
k (int) – number of nearest neighbors used to build the kNN graph.
knn_metric (str) – The distance metric used to calculate the k-Neighbors for each sample point.
i (int) – _description_
sparse (int) – Whether to use sparse mode
- forward(h)[source]
Defines the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for the forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
- training: bool
- class egc.module.layers.graph_learner.Attentive(isize)[source]
Bases: Module
- forward(x)[source]
Defines the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for the forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
- training: bool
- class egc.module.layers.graph_learner.ATT_learner(nlayers, isize, k, knn_metric, i, sparse, mlp_act)[source]
Bases: Module
ATT learner
- forward(features)[source]
Defines the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for the forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
- training: bool
- class egc.module.layers.graph_learner.MLP_learner(nlayers, isize, k, knn_metric, i, sparse, act)[source]
Bases: Module
MLP learner
- forward(features)[source]
Defines the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for the forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
- training: bool
- class egc.module.layers.graph_learner.GNN_learner(nlayers, isize, k, knn_metric, i, sparse, mlp_act, adj)[source]
Bases: Module
GNN learner
- forward(features)[source]
Defines the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for the forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
- training: bool
egc.module.layers.inner_product_de module
Inner product decoder layer
- class egc.module.layers.inner_product_de.InnerProductDecoder(dropout: float = 0.0, act=torch.sigmoid)[source]
Bases: Module
Decoder using the inner product for prediction.
- forward(z)[source]
Defines the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for the forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
- training: bool
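The core computation of such a decoder is, in essence, act(ZZ^T): edge scores reconstructed from pairwise inner products of node embeddings. A generic sketch, with the dropout step omitted:

    import torch

    def inner_product_decode(z: torch.Tensor, act=torch.sigmoid) -> torch.Tensor:
        """Reconstruct a dense adjacency matrix from node embeddings as act(Z Z^T)."""
        return act(z @ z.t())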
egc.module.layers.multilayer_dnn module
Multilayer DNN
- class egc.module.layers.multilayer_dnn.MultiLayerDNN(in_feats: int, out_feats_list: List[int], bias: List[bool] | None = None, activation: List[str] | None = None)[source]
Bases: Module
Multi-layer deep neural network.
- Parameters:
in_feats (int) – Input feature dimension.
out_feats_list (List[int]) – List of hidden units dimensions.
bias (List[bool], optional) – Whether to apply bias at each layer. Defaults to True.
activation (List[str], optional) – Activation func list to apply at each layer. Defaults to ReLU.
- forward(x)[source]
Defines the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for the forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
- training: bool
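A hedged usage sketch: each entry of out_feats_list adds one layer, so the example below stacks 128 -> 64 -> 32; the omitted bias/activation lists are assumed to fall back to their documented defaults.

    import torch
    from egc.module.layers.multilayer_dnn import MultiLayerDNN

    mlp = MultiLayerDNN(in_feats=128, out_feats_list=[64, 32])  # two layers: 128 -> 64 -> 32
    x = torch.randn(10, 128)
    h = mlp(x)                                                  # expected shape: (10, 32)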
egc.module.layers.multilayer_gnn module
MultiLayer GraphSAGE
- class egc.module.layers.multilayer_gnn.MultiLayerGNN(in_feats: int, out_feats_list: List[int], aggregator_type: str = 'gcn', bias: bool = True, activation: List[str] | None = None, dropout: float = 0.0)[source]
Bases: Module
Multi-layer GraphSAGE supporting different aggregator types.
- Parameters:
in_feats (int) – Input feature dimension.
out_feats_list (List[int]) – List of hidden units dimensions.
aggregator_type (str, optional) – Aggregator type of SAGE. Defaults to ‘gcn’.
bias (bool, optional) – Whether to apply bias. Defaults to True.
- forward(blocks, x, edge_weight=None)[source]
Defines the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for the forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
- training: bool
egc.module.layers.selfexpr_secomm module
Self-Expressive module for SEComm
- class egc.module.layers.selfexpr_secomm.SECommSelfExpr(n)[source]
Bases: Module
Self-Expressive module for SEComm
- forward(x)[source]
Defines the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for the forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
- training: bool
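Self-expressive layers of this kind typically learn a coefficient matrix C such that X ≈ CX, with the diagonal suppressed so no node reconstructs itself. A generic sketch of that idea (not necessarily the exact SEComm implementation):

    import torch
    import torch.nn as nn

    class SelfExpressiveSketch(nn.Module):
        """Learn C with X ≈ C X; the diagonal is zeroed so a node cannot use itself."""
        def __init__(self, n: int):
            super().__init__()
            self.coef = nn.Parameter(1e-4 * torch.rand(n, n))

        def forward(self, x: torch.Tensor) -> torch.Tensor:
            c = self.coef - torch.diag(torch.diag(self.coef))  # zero the diagonal
            return c @ x                                       # self-expressed reconstruction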
Module contents
Common layers