Models

This module includes neural graph embedding models and support functions.

Knowledge Graph Embedding models (KGE) are neural architectures that encode concepts from a knowledge graph (i.e., entities \(\mathcal{E}\) and relation types \(\mathcal{R}\)) into low-dimensional, continuous vectors living in \(\mathbb{R}^k\), where \(k\) can be specified by the user.

Knowledge Graph Embeddings have applications in knowledge graph completion, entity resolution, and link-based clustering, to name a few [NMTG16].

In AmpliGraph 2, KGE models are implemented in the ScoringBasedEmbeddingModel class, which inherits from the Keras Model class:

ScoringBasedEmbeddingModel(*args, **kwargs)

Class for handling KGE models that follow the ranking-based protocol.

The advantages of inheriting from Keras Model are many. We can use most Keras initializers (HeNormal, GlorotNormal…), regularizers (\(L^1\), \(L^2\)…), optimizers (Adam, AdaGrad…), and callbacks (early stopping, model checkpointing…) without having to reimplement them. From a user perspective, people already acquainted with Keras can seamlessly work with AmpliGraph thanks to the similarity of the APIs.
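
For example, a standard Keras early-stopping callback can be reused as-is. The snippet below is a minimal sketch: the toy triples, the hyper-parameter values, and the 'pairwise' loss identifier are illustrative only, and it assumes fit() forwards the usual Keras callbacks argument.

    import numpy as np
    import tensorflow as tf
    from ampligraph.latent_features import ScoringBasedEmbeddingModel

    # Toy (subject, predicate, object) triples; a real graph would be much larger.
    X = np.array([['a', 'likes', 'b'],
                  ['b', 'likes', 'c'],
                  ['a', 'friendOf', 'c']])

    model = ScoringBasedEmbeddingModel(k=50, eta=5, scoring_type='TransE')
    model.compile(optimizer='adam', loss='pairwise')

    # Off-the-shelf Keras callback: stop when the training loss stops improving.
    early_stop = tf.keras.callbacks.EarlyStopping(monitor='loss', patience=3)
    model.fit(X, epochs=10, callbacks=[early_stop])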

We also provide backward compatibility with the APIs of AmpliGraph 1 by wrapping the older-style APIs around the newer ones.

Anatomy of a Model

Knowledge Graph Embeddings are learned by training a neural architecture over a graph. Although such architectures can be of many different kinds, the training phase always consists of minimizing a loss function \(\mathcal{L}\) that optimizes the scores output by a scoring function \(f_{m}(t)\), i.e., a model-specific function that assigns a score to a triple \(t=(sub,pred,obj)\).

The embedding generation layer, the negatives generation layer, and the scoring layer are included in the ScoringBasedEmbeddingModel class, and they inherit from Keras Layer.

Further, for the scoring layer and the loss function, AmpliGraph features abstract classes that can be extended to design new models:

AbstractScoringLayer(*args, **kwargs)

Abstract class for scoring layer.

Loss([hyperparam_dict, verbose])

Abstract class for the loss function.

Embedding Generation Layer

The embedding generation layer generates the embeddings of the concepts present in the triples. It may be as simple as a shallow encoding (i.e., a lookup of the embedding of an input node or edge type), or as complex as a neural encoder that tokenizes nodes and generates their embeddings (e.g., NodePiece). Currently, AmpliGraph implements the shallow look-up strategy, but it will soon be expanded to include other efficient approaches.

EmbeddingLookupLayer(*args, **kwargs)

Negatives Generation Layer

This layer is responsible for the generation of synthetic negatives. Multiple strategies are possible; in our case, we adopt the local closed-world assumption and implement a simple strategy that randomly corrupts either the subject, the object, or both the subject and the object of a triple to generate a synthetic negative. Further, we allow filtering the true positives out of the generated negatives.

CorruptionGenerationLayerTrain(*args, **kwargs)

Generates corruptions during training.
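
The snippet below is not AmpliGraph's internal code; it is only a small illustration of the strategy just described, generating corruptions of a toy triple by replacing its subject or object with a random entity and filtering out true positives.

    import numpy as np

    rng = np.random.default_rng(0)

    triples = np.array([['a', 'likes', 'b'],
                        ['b', 'likes', 'c'],
                        ['a', 'friendOf', 'c']])
    entities = np.unique(triples[:, [0, 2]])       # all entities seen as subject or object
    positives = {tuple(t) for t in triples}        # true triples, used for filtering

    def corrupt(triple, n_negatives=2):
        """Corrupt the subject or the object of a triple to produce synthetic negatives."""
        negatives = []
        while len(negatives) < n_negatives:
            s, p, o = triple
            if rng.random() < 0.5:
                s = rng.choice(entities)           # corrupt the subject
            else:
                o = rng.choice(entities)           # corrupt the object
            if (s, p, o) not in positives:         # filter out true positives
                negatives.append((s, p, o))
        return negatives

    print(corrupt(('a', 'likes', 'b')))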

Scoring Layer

The scoring layer applies a scoring function \(f\) to a triple \(t=(s,p,o)\). This function combines the embeddings \(\mathbf{e}_{s},\mathbf{r}_{p}, \mathbf{e}_{o} \in \mathbb{R}^k\) (or \(\in \mathbb{C}^k\)) of the subject, predicate, and object of \(t\) into a score representing the plausibility of the triple.

TransE(*args, **kwargs)

Translating Embeddings (TransE) scoring layer.

DistMult(*args, **kwargs)

DistMult scoring layer.

ComplEx(*args, **kwargs)

Complex Embeddings (ComplEx) scoring layer.

HolE(*args, **kwargs)

Holographic Embeddings (HolE) scoring layer.

Different scoring functions are designed according to different intuitions:

  • TransE [BUGD+13] relies on distances. The scoring function computes a similarity between the embedding of the subject translated by the embedding of the predicate and the embedding of the object, using the \(L^1\) or \(L^2\) norm \(||\cdot||\):
    \[f_{TransE}=-||\mathbf{e}_{s} + \mathbf{r}_{p} - \mathbf{e}_{o}||\]
  • DistMult [YYH+14] uses the trilinear dot product:
    \[f_{DistMult}=\langle \mathbf{r}_p, \mathbf{e}_s, \mathbf{e}_o \rangle\]
  • ComplEx [TWR+16] extends DistMult with the Hermitian dot product:
    \[f_{ComplEx}=Re(\langle \mathbf{r}_p, \mathbf{e}_s, \overline{\mathbf{e}_o} \rangle)\]
  • HolE [NRP+16] uses circular correlation (denoted by \(\otimes\)):
    \[f_{HolE}=\mathbf{w}_r \cdot (\mathbf{e}_s \otimes \mathbf{e}_o) = \frac{1}{k}\mathcal{F}(\mathbf{w}_r)\cdot( \overline{\mathcal{F}(\mathbf{e}_s)} \odot \mathcal{F}(\mathbf{e}_o))\]
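
The scores above can be reproduced in a few lines of NumPy. This is only an illustration of the formulas, not the library's implementation; the embeddings are random and \(k=4\) is arbitrary.

    import numpy as np

    k = 4
    rng = np.random.default_rng(0)
    e_s, r_p, e_o = rng.normal(size=(3, k))   # real-valued embeddings for s, p, o

    # TransE: negative distance between the translated subject and the object (L2 here).
    f_transe = -np.linalg.norm(e_s + r_p - e_o, ord=2)

    # DistMult: trilinear dot product.
    f_distmult = np.sum(r_p * e_s * e_o)

    # ComplEx: real part of the Hermitian dot product on complex-valued embeddings.
    c_s, c_p, c_o = rng.normal(size=(3, k)) + 1j * rng.normal(size=(3, k))
    f_complex = np.real(np.sum(c_p * c_s * np.conj(c_o)))

    # HolE: circular correlation of subject and object, scored against the relation,
    # computed via the Fourier identity shown in the formula above.
    corr = np.real(np.fft.ifft(np.conj(np.fft.fft(e_s)) * np.fft.fft(e_o)))
    f_hole = np.dot(r_p, corr)

    print(f_transe, f_distmult, f_complex, f_hole)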

Loss Functions

AmpliGraph includes a number of loss functions commonly used in the literature. Each function can be used with any of the implemented models. Loss functions are passed to models at compilation time as the loss parameter of the compile() method. Below are the loss functions available in AmpliGraph.

PairwiseLoss([loss_params, verbose])

Pairwise, max-margin loss.

AbsoluteMarginLoss([loss_params, verbose])

Absolute margin, max-margin loss.

SelfAdversarialLoss([loss_params, verbose])

Self Adversarial Sampling loss.

NLLLoss([loss_params, verbose])

Negative Log-Likelihood loss.

NLLMulticlass([loss_params, verbose])

Multiclass Negative Log-Likelihood loss.
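
For example, a loss can be given by name or as an instance with explicit hyper-parameters. In the sketch below, the import path, the 'pairwise' string identifier, and the 'margin' key of loss_params are assumptions made for illustration.

    from ampligraph.latent_features import ScoringBasedEmbeddingModel
    from ampligraph.latent_features.loss_functions import PairwiseLoss

    model = ScoringBasedEmbeddingModel(k=100, eta=5, scoring_type='DistMult')

    # Option 1: refer to the loss by name.
    model.compile(optimizer='adam', loss='pairwise')

    # Option 2: pass a loss instance with explicit hyper-parameters
    # (the 'margin' key is an assumption made for illustration).
    model.compile(optimizer='adam', loss=PairwiseLoss(loss_params={'margin': 1.0}))

In practice compile() is called only once; both options are shown together for brevity.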

Regularizers

AmpliGraph includes a number of regularizers that can be used with the loss function. Regularizers can be passed to the entity_relation_regularizer parameter of the compile() method.

LP_regularizer() supports \(L^1\), \(L^2\), and \(L^3\) regularization. AmpliGraph also supports the regularizers available in TensorFlow.

LP_regularizer(trainable_param[, ...])

Norm \(L^{p}\) regularizer.
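
For example, a TensorFlow \(L^2\) regularizer can be applied to both entity and relation embeddings at compilation time. This is a minimal sketch: the regularization strength and the 'multiclass_nll' loss identifier are illustrative only.

    import tensorflow as tf
    from ampligraph.latent_features import ScoringBasedEmbeddingModel

    model = ScoringBasedEmbeddingModel(k=100, eta=5, scoring_type='ComplEx')
    model.compile(optimizer='adam',
                  loss='multiclass_nll',
                  entity_relation_regularizer=tf.keras.regularizers.L2(1e-5))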

Initializers

To initialize embeddings, AmpliGraph supports all the initializers available in TensorFlow. Initializers can be passed to the entity_relation_initializer parameter of the compile() method.
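
For example, a Glorot normal initializer from TensorFlow can be used for both entity and relation embeddings. A minimal sketch; the 'nll' loss identifier and the seed are illustrative only.

    import tensorflow as tf
    from ampligraph.latent_features import ScoringBasedEmbeddingModel

    model = ScoringBasedEmbeddingModel(k=100, eta=5, scoring_type='TransE')
    model.compile(optimizer='adam',
                  loss='nll',
                  entity_relation_initializer=tf.keras.initializers.GlorotNormal(seed=0))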

Optimizers

The goal of the optimization procedure is learning optimal embeddings, such that the scoring function is able to assign high scores to positive statements and low scores to statements unlikely to be true.

We support optimizers available in TensorFlow. They can be specified as the optimizer argument of the compile() method.
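
The optimizer can be passed by name or as a TensorFlow instance when, for instance, a custom learning rate is needed. A minimal sketch; the learning rate and the 'multiclass_nll' loss identifier are illustrative only.

    import tensorflow as tf
    from ampligraph.latent_features import ScoringBasedEmbeddingModel

    model = ScoringBasedEmbeddingModel(k=100, eta=5, scoring_type='DistMult')
    model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=5e-4),
                  loss='multiclass_nll')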

Training

The training procedure follows that of Keras models:

  • The model is initialised as an instance of the ScoringBasedEmbeddingModel class. During its initialisation, we can specify, among other hyper-parameters of the model: the size of the embedding (argument k); the scoring function applied by the model (argument scoring_type); the number of synthetic negatives generated for each triple in the training set (argument eta).

  • The model needs to be compiled through the compile() method. At this stage we define, among other things, the optimizer and the objective (loss) function. These are passed as arguments to the aforementioned method.

  • The model is fitted to the training data using the fit() method. Besides the usual parameters that can be specified at this stage, AmpliGraph also allows specifying:

    • A validation_filter that contains the true positives to be removed from the synthetically corrupted triples used during validation.

    • A focusE option, which enables the FocusE layer [PC21]: this allows handling datasets with a numeric value associated with the edges, which can signify importance, uncertainty, significance, confidence…

    • A partitioning_k argument that specifies whether the data needs to be partitioned in order to make training more efficient with datasets that do not fit in memory.

    For more details and options, check the fit() method. An end-to-end sketch of the three steps above follows.
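
The sketch below strings the three steps together; the toy triples, the batch size, and the 'multiclass_nll' loss identifier are placeholders, not recommended settings.

    import numpy as np
    from ampligraph.latent_features import ScoringBasedEmbeddingModel

    # Toy training set of (subject, predicate, object) triples.
    X_train = np.array([['a', 'likes', 'b'],
                        ['b', 'likes', 'c'],
                        ['c', 'friendOf', 'a'],
                        ['a', 'friendOf', 'c']])

    # 1. Initialise the model: embedding size (k), scoring function, negatives per triple (eta).
    model = ScoringBasedEmbeddingModel(k=150, eta=10, scoring_type='ComplEx')

    # 2. Compile: choose the optimizer and the objective (loss) function.
    model.compile(optimizer='adam', loss='multiclass_nll')

    # 3. Fit the model to the training triples.
    model.fit(X_train, batch_size=2, epochs=20)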

Calibration

Another important feature implemented in AmpliGraph is calibration [TC20]. Such a method leverages a heuristic that significantly enhances the performance of the models. Further, it bounds the scores of the model to the range \([0,1]\), making the predictions more meaningful and interpretable.

CalibrationLayer(*args, **kwargs)

Layer to calibrate the model outputs.
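
A hedged sketch of the calibration workflow follows: the calibrate() and predict_proba() method names and arguments are assumptions to be checked against the API reference, and the toy data and hyper-parameters are placeholders.

    import numpy as np
    from ampligraph.latent_features import ScoringBasedEmbeddingModel

    X = np.array([['a', 'likes', 'b'],
                  ['b', 'likes', 'c'],
                  ['c', 'friendOf', 'a']])

    model = ScoringBasedEmbeddingModel(k=50, eta=5, scoring_type='ComplEx')
    model.compile(optimizer='adam', loss='nll')
    model.fit(X, epochs=10)

    # Calibrate on positive triples only; a base rate of positives is then assumed.
    # (Method name and arguments are assumptions, not a verbatim API reference.)
    model.calibrate(X, positive_base_rate=0.5)

    # After calibration, predictions are probabilities bounded in [0, 1].
    print(model.predict_proba(X))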

Numeric Values on Edges

Numeric values associated with the edges of a knowledge graph have been used to represent uncertainty, edge importance, and even out-of-band knowledge in a growing number of scenarios, ranging from genetic data to social networks. Nevertheless, traditional KGE models (TransE, DistMult, ComplEx, HolE) are not designed to capture such information, to the detriment of predictive power.

AmpliGraph includes FocusE [PC21], a method to inject numeric edge attributes into the scoring layer of a traditional KGE architecture, thus accounting for the precious information embedded in the edge weights. In order to add the FocusE layer, set focusE=True and specify the hyperparameters dictionary focusE_params in the fit() method.

It is possible to load some benchmark knowledge graphs with numeric-enriched edges through the AmpliGraph dataset loaders.
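
A hedged sketch of enabling FocusE at fit time follows: it assumes the numeric values are supplied as a fourth column of the training array, and the focusE_params key shown is purely illustrative.

    import numpy as np
    from ampligraph.latent_features import ScoringBasedEmbeddingModel

    # Triples enriched with a numeric value on each edge (assumed to be the fourth column).
    X_numeric = np.array([['a', 'likes', 'b', '0.9'],
                          ['b', 'likes', 'c', '0.3'],
                          ['c', 'friendOf', 'a', '0.7']])

    model = ScoringBasedEmbeddingModel(k=100, eta=5, scoring_type='TransE')
    model.compile(optimizer='adam', loss='multiclass_nll')

    # Enable the FocusE layer; the focusE_params key below is an assumption.
    model.fit(X_numeric, epochs=20, focusE=True,
              focusE_params={'non_linearity': 'linear'})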

Saving/Restoring Models

The weights of a trained model can be saved to and restored from disk, which is useful to avoid re-training a model. Weights can be saved and restored with the save_weights() and load_weights() methods. When a model is saved and loaded with these methods, however, it is not possible to resume training from where it stopped. AmpliGraph makes that possible by saving and loading the model through the functionalities available in the utils module.
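
A minimal sketch of both options follows; the file paths, toy data, and hyper-parameters are placeholders, and save_model() / restore_model() are assumed to be the relevant functions in ampligraph.utils.

    import numpy as np
    from ampligraph.latent_features import ScoringBasedEmbeddingModel
    from ampligraph.utils import save_model, restore_model

    X = np.array([['a', 'likes', 'b'], ['b', 'likes', 'c']])
    model = ScoringBasedEmbeddingModel(k=50, eta=5, scoring_type='DistMult')
    model.compile(optimizer='adam', loss='nll')
    model.fit(X, epochs=5)

    # Option 1: weights only (training cannot be resumed from this).
    model.save_weights('my_model_weights')
    model.load_weights('my_model_weights')

    # Option 2: save/restore through the utils module, which also allows
    # resuming training from where it stopped (function names assumed).
    save_model(model, 'my_model')
    restored_model = restore_model('my_model')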

Compatibility with AmpliGraph 1.x

Provides backward compatibility with the AmpliGraph 1 APIs.

For those familiar with versions of AmpliGraph 1.x, we have created backward compatible APIs under the ampligraph.compat module.

These APIs act as wrappers around the newer Keras-style APIs and provide a seamless experience for our existing user base.

The first group of APIs defines the classes that wrap the ScoringBasedEmbeddingModel with a specific scoring function.

TransE([k, eta, epochs, batches_count, ...])

Class wrapping around the ScoringBasedEmbeddingModel with the TransE scoring function.

ComplEx([k, eta, epochs, batches_count, ...])

Class wrapping around the ScoringBasedEmbeddingModel with the ComplEx scoring function.

DistMult([k, eta, epochs, batches_count, ...])

Class wrapping around the ScoringBasedEmbeddingModel with the DistMult scoring function.

HolE([k, eta, epochs, batches_count, seed, ...])

Class wrapping around the ScoringBasedEmbeddingModel with the HolE scoring function.

For evaluation, on the other hand, the following API wraps around the new evaluation process of AmpliGraph 2.

evaluate_performance(X, model[, ...])

Evaluate the performance of an embedding model.
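
A hedged sketch of the backward-compatible workflow follows: the import path mirrors the ampligraph.compat module mentioned above, the hyper-parameter values and toy data are placeholders, and the filter_triples argument is assumed from the AmpliGraph 1.x API.

    import numpy as np
    from ampligraph.compat import ComplEx, evaluate_performance

    X_train = np.array([['a', 'likes', 'b'],
                        ['b', 'likes', 'c'],
                        ['c', 'friendOf', 'a']])
    X_test = np.array([['a', 'friendOf', 'c']])

    # AmpliGraph 1.x-style model: hyper-parameters are set at construction time.
    model = ComplEx(k=100, eta=5, epochs=20, batches_count=2)
    model.fit(X_train)

    # AmpliGraph 1.x-style evaluation, wrapping the new evaluation protocol.
    # (The filter_triples argument is an assumption carried over from 1.x.)
    ranks = evaluate_performance(X_test, model=model, filter_triples=X_train)
    print(ranks)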