Models

Knowledge Graph Embedding Models

  • RandomBaseline([seed, verbose]): Random baseline
  • TransE([k, eta, epochs, batches_count, …]): Translating Embeddings (TransE)
  • DistMult([k, eta, epochs, batches_count, …]): The DistMult model
  • ComplEx([k, eta, epochs, batches_count, …]): Complex embeddings (ComplEx)
  • HolE([k, eta, epochs, batches_count, seed, …]): Holographic Embeddings
  • ConvE([k, eta, epochs, batches_count, seed, …]): Convolutional 2D KG Embeddings
  • ConvKB([k, eta, epochs, batches_count, …]): Convolution-based model

Anatomy of a Model

Knowledge graph embeddings are learned by training a neural architecture over a graph. Although such architectures vary, the training phase always consists in minimizing a loss function \(\mathcal{L}\) that includes a scoring function \(f_{m}(t)\), i.e. a model-specific function that assigns a score to a triple \(t=(sub,pred,obj)\).

AmpliGraph models include the following components:

  • a scoring function \(f_{m}(t)\)
  • a loss function \(\mathcal{L}\)
  • a regularizer
  • an embedding initializer
  • an optimization algorithm

AmpliGraph comes with a number of such components. They can be used in any combination to come up with a model that performs sufficiently well for the dataset of choice, as in the sketch below.
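For instance, a ComplEx model can be assembled from a pairwise loss, an LP regularizer, a Xavier initializer and the Adam optimizer. This is a minimal sketch against the 1.x Python API; the keyword arguments and registered component names ('pairwise', 'LP', 'xavier', 'adam') are assumptions if your version differs.

```python
import numpy as np
from ampligraph.latent_features import ComplEx

# Toy training set: an (N, 3) array of (subject, predicate, object) triples.
X = np.array([['a', 'likes', 'b'],
              ['b', 'likes', 'c'],
              ['a', 'knows', 'c']])

# Assemble the model from the components listed above.
model = ComplEx(k=100, eta=5, epochs=20, batches_count=1,
                loss='pairwise', loss_params={'margin': 0.5},
                regularizer='LP', regularizer_params={'p': 3, 'lambda': 1e-5},
                initializer='xavier', initializer_params={'uniform': False},
                optimizer='adam', optimizer_params={'lr': 1e-3},
                seed=0, verbose=False)

model.fit(X)

# Higher scores mean the model considers a triple more plausible.
scores = model.predict(np.array([['a', 'likes', 'c']]))
```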

AmpliGraph features a number of abstract classes that can be extended to design new models:

  • EmbeddingModel([k, eta, epochs, …]): Abstract class for embedding models.
  • Loss(eta, hyperparam_dict[, verbose]): Abstract class for loss functions.
  • Regularizer(hyperparam_dict[, verbose]): Abstract class for regularizers.
  • Initializer([initializer_params, verbose, seed]): Abstract class for initializers.

Scoring functions

Existing models propose scoring functions that combine the embeddings \(\mathbf{e}_{s},\mathbf{r}_{p}, \mathbf{e}_{o} \in \mathbb{R}^k\) of the subject, predicate, and object of a triple \(t=(s,p,o)\) according to different intuitions:

  • TransE [BUGD+13] relies on distances. The scoring function computes a similarity between the embedding of the subject translated by the embedding of the predicate and the embedding of the object, using the \(L_1\) or \(L_2\) norm \(||\cdot||\):

\[f_{TransE}=-||\mathbf{e}_{s} + \mathbf{r}_{p} - \mathbf{e}_{o}||_n\]

  • DistMult [YYH+14] uses the trilinear dot product:

\[f_{DistMult}=\langle \mathbf{r}_p, \mathbf{e}_s, \mathbf{e}_o \rangle\]

  • ComplEx [TWR+16] extends DistMult with the Hermitian dot product:

\[f_{ComplEx}=Re(\langle \mathbf{r}_p, \mathbf{e}_s, \overline{\mathbf{e}_o} \rangle)\]

  • HolE [NRP+16] uses circular correlation (denoted by \(\otimes\)):

\[f_{HolE}=\mathbf{w}_r \cdot (\mathbf{e}_s \otimes \mathbf{e}_o) = \frac{1}{k}\mathcal{F}(\mathbf{w}_r)\cdot( \overline{\mathcal{F}(\mathbf{e}_s)} \odot \mathcal{F}(\mathbf{e}_o))\]

  • ConvE [DMSR18] uses convolutional layers (\(g\) is a non-linear activation function, \(\ast\) is the linear convolution operator, \(vec\) indicates 2D reshaping):

\[f_{ConvE} = \langle \sigma \, (vec \, ( g \, ([ \overline{\mathbf{e}_s} ; \overline{\mathbf{r}_p}] \ast \Omega)) \, \mathbf{W}), \mathbf{e}_o\rangle\]

  • ConvKB [NNNP18] uses convolutional layers and a dot product:

\[f_{ConvKB}= concat \,(g \, ([\mathbf{e}_s, \mathbf{r}_p, \mathbf{e}_o] \ast \Omega)) \cdot W\]
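As a toy illustration of the first two scoring functions (this is not how the library computes them internally; the embeddings below are made up), the TransE and DistMult scores of a single triple can be computed with NumPy:

```python
import numpy as np

k = 4                                    # embedding dimensionality
rng = np.random.default_rng(0)
e_s, r_p, e_o = rng.random(k), rng.random(k), rng.random(k)   # toy embeddings of (s, p, o)

# TransE: negative L1 (or L2) distance between the translated subject and the object.
f_transe = -np.linalg.norm(e_s + r_p - e_o, ord=1)

# DistMult: trilinear dot product of the three embeddings.
f_distmult = np.sum(r_p * e_s * e_o)

print(f_transe, f_distmult)
```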

Loss Functions

AmpliGraph includes a number of loss functions commonly used in the literature. Each function can be used with any of the implemented models. Loss functions are passed to models as a hyperparameter, and can thus be varied during model selection (see the example after the list below).

  • PairwiseLoss(eta[, loss_params, verbose]): Pairwise, max-margin loss.
  • AbsoluteMarginLoss(eta[, loss_params, verbose]): Absolute-margin, max-margin loss.
  • SelfAdversarialLoss(eta[, loss_params, verbose]): Self-adversarial sampling loss.
  • NLLLoss(eta[, loss_params, verbose]): Negative log-likelihood loss.
  • NLLMulticlass(eta[, loss_params, verbose]): Multiclass NLL loss.
  • BCELoss(eta[, loss_params, verbose]): Binary cross-entropy loss.
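For example, a loss is selected by its registered name and configured via loss_params. This is a sketch against the 1.x API; the names 'pairwise' and 'multiclass_nll' and the 'margin' key are assumptions if your version differs.

```python
from ampligraph.latent_features import TransE

# Pairwise max-margin loss with an explicit margin.
model = TransE(k=100, eta=5, loss='pairwise', loss_params={'margin': 1.0})

# Multiclass NLL loss needs no extra parameters.
model = TransE(k=100, eta=5, loss='multiclass_nll')
```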

Regularizers

AmpliGraph includes a number of regularizers that can be used with the loss function. LPRegularizer supports L1, L2, and L3 regularization.

  • LPRegularizer([regularizer_params, verbose]): Performs LP regularization.
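For example, L3 regularization with weight 1e-5 can be requested as follows (a sketch; the 'LP' name and the 'p'/'lambda' keys follow the 1.x API and are assumptions if your version differs):

```python
from ampligraph.latent_features import DistMult

# p selects the norm (1, 2 or 3); lambda is the regularization weight.
model = DistMult(k=100, eta=5,
                 regularizer='LP', regularizer_params={'p': 3, 'lambda': 1e-5})
```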

Initializers

AmpliGraph includes a number of initializers that can be used to initialize the embeddings. They are passed to models as a hyperparameter, and can thus be varied during model selection (see the example after the list below).

  • RandomNormal([initializer_params, verbose, seed]): Initializes from a normal distribution with the specified mean and std.
  • RandomUniform([initializer_params, verbose, …]): Initializes from a uniform distribution with the specified low and high.
  • Xavier([initializer_params, verbose, seed]): Follows the Xavier strategy for initialization of layers [GB10].
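For example, the normal variant of Xavier initialization can be selected like this (a sketch; the 'xavier' name and the 'uniform' key follow the 1.x API and are assumptions if your version differs):

```python
from ampligraph.latent_features import ComplEx

# uniform=False draws the initial embeddings from the normal Xavier/Glorot variant.
model = ComplEx(k=100, eta=5,
                initializer='xavier', initializer_params={'uniform': False})
```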

Optimizers

The goal of the optimization procedure is learning optimal embeddings, such that the scoring function is able to assign high scores to positive statements and low scores to statements unlikely to be true.

We support SGD-based optimizers provided by TensorFlow, selected via the optimizer argument when initializing a model. Best results are currently obtained with Adam.
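For example, Adam with an explicit learning rate (a sketch; the 'adam' name and the 'lr' key follow the 1.x API and are assumptions if your version differs):

```python
from ampligraph.latent_features import ComplEx

# Adam optimizer with an explicit learning rate.
model = ComplEx(k=100, eta=5,
                optimizer='adam', optimizer_params={'lr': 1e-4})
```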

Saving/Restoring Models

Models can be saved and restored from disk. This is useful to avoid re-training a model.
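For example, a save/restore round trip with the save_model and restore_model helpers from ampligraph.utils (the file name below is made up):

```python
from ampligraph.utils import save_model, restore_model

# Persist a trained model (weights and hyperparameters) to disk ...
save_model(model, 'complex_model.pkl')

# ... and load it back later without re-training.
model = restore_model('complex_model.pkl')
```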

More details are available in the utils module.