ComplEx

class ampligraph.latent_features.ComplEx(k=100, eta=2, epochs=100, batches_count=100, seed=0, embedding_model_params={'corrupt_sides': ['s,o'], 'negative_corruption_entities': 'all'}, optimizer='adam', optimizer_params={'lr': 0.0005}, loss='nll', loss_params={}, regularizer=None, regularizer_params={}, initializer='xavier', initializer_params={'uniform': False}, verbose=False)

Complex embeddings (ComplEx)

The ComplEx model [TWR+16] is an extension of the ampligraph.latent_features.DistMult bilinear diagonal model. The ComplEx scoring function is based on the trilinear Hermitian dot product in \(\mathcal{C}\):

\[f_{ComplEx}=Re(\langle \mathbf{r}_p, \mathbf{e}_s, \overline{\mathbf{e}_o} \rangle)\]

ComplEx can be improved if used alongside the nuclear 3-norm (the ComplEx-N3 model [LUO18]), which can be easily added to the loss function via the regularizer hyperparameter with p=3 and a chosen regularisation weight (represented by lambda), as shown in the example below. See also ampligraph.latent_features.LPRegularizer().

Note

Since ComplEx embeddings belong to \(\mathcal{C}\), this model uses twice as many parameters as ampligraph.latent_features.DistMult.
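As an illustration only (this is not the library's internal TensorFlow implementation), the scoring function above can be written in a few lines of NumPy; complex_score below is a hypothetical helper name:

import numpy as np

# Hypothetical sketch of f_ComplEx(s, p, o) = Re(<r_p, e_s, conj(e_o)>)
# for complex-valued embeddings of dimensionality k.
def complex_score(e_s, r_p, e_o):
    return np.real(np.sum(r_p * e_s * np.conj(e_o)))

# Toy usage with random complex embeddings (k=5).
np.random.seed(0)
e_s = np.random.randn(5) + 1j * np.random.randn(5)
r_p = np.random.randn(5) + 1j * np.random.randn(5)
e_o = np.random.randn(5) + 1j * np.random.randn(5)
print(complex_score(e_s, r_p, e_o))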

Examples

>>> import numpy as np
>>> from ampligraph.latent_features import ComplEx
>>>
>>> model = ComplEx(batches_count=2, seed=555, epochs=100, k=20, eta=5,
>>>             loss='pairwise', loss_params={'margin':1},
>>>             regularizer='LP', regularizer_params={'p': 2, 'lambda':0.1})
>>> X = np.array([['a', 'y', 'b'],
>>>               ['b', 'y', 'a'],
>>>               ['a', 'y', 'c'],
>>>               ['c', 'y', 'a'],
>>>               ['a', 'y', 'd'],
>>>               ['c', 'y', 'd'],
>>>               ['b', 'y', 'c'],
>>>               ['f', 'y', 'e']])
>>> model.fit(X)
>>> model.predict(np.array([['f', 'y', 'e'], ['b', 'y', 'd']]))
[[0.019520484], [-0.14998421]]
>>> model.get_embeddings(['f','e'], embedding_type='entity')
array([[-0.33021057,  0.26524785,  0.0446662 , -0.07932718, -0.15453218,
    -0.22342539, -0.03382565,  0.17444217,  0.03009969, -0.33569157,
     0.3200497 ,  0.03803705,  0.05536304, -0.00929996,  0.24446663,
     0.34408194,  0.16192885, -0.15033236, -0.19703785, -0.00783876,
     0.1495124 , -0.3578853 , -0.04975723, -0.03930473,  0.1663541 ,
    -0.24731971, -0.141296  ,  0.03150219,  0.15328223, -0.18549544,
    -0.39240393, -0.10824018,  0.03394471, -0.11075485,  0.1367736 ,
     0.10059565, -0.32808647, -0.00472086,  0.14231135, -0.13876757],
   [-0.09483694,  0.3531292 ,  0.04992269, -0.07774793,  0.1635035 ,
     0.30610007,  0.3666711 , -0.13785957, -0.3143734 , -0.36909637,
    -0.13792469, -0.07069954, -0.0368113 , -0.16743314,  0.4090072 ,
    -0.03407392,  0.3113114 , -0.08418448,  0.21435146,  0.12006859,
     0.08447982, -0.02025972,  0.38752195,  0.11451488, -0.0258422 ,
    -0.10990044, -0.22661531, -0.00478273, -0.0238297 , -0.14207476,
     0.11064807,  0.20135397,  0.22501846, -0.1731076 , -0.2770435 ,
     0.30784574, -0.15043163, -0.11599299,  0.05718031, -0.1300622 ]],
  dtype=float32)

Methods

__init__([k, eta, epochs, batches_count, …]) – Initialize an EmbeddingModel.
fit(X[, early_stopping, …]) – Train a ComplEx model.
get_embeddings(entities[, embedding_type]) – Get the embeddings of entities or relations.
get_hyperparameter_dict() – Returns hyperparameters of the model.
predict(X[, from_idx]) – Predict the scores of triples using a trained embedding model.
calibrate(X_pos[, X_neg, …]) – Calibrate predictions.
predict_proba(X) – Predicts probabilities using the Platt scaling model (after calibration).
__init__(k=100, eta=2, epochs=100, batches_count=100, seed=0, embedding_model_params={'corrupt_sides': ['s,o'], 'negative_corruption_entities': 'all'}, optimizer='adam', optimizer_params={'lr': 0.0005}, loss='nll', loss_params={}, regularizer=None, regularizer_params={}, initializer='xavier', initializer_params={'uniform': False}, verbose=False)

Initialize an EmbeddingModel.

Also creates a new TensorFlow session for training.

Parameters:
  • k (int) – Embedding space dimensionality
  • eta (int) – The number of negatives that must be generated at runtime during training for each positive.
  • epochs (int) – The iterations of the training loop.
  • batches_count (int) – The number of batches in which the training set must be split during the training loop.
  • seed (int) – The seed used by the internal random numbers generator.
  • embedding_model_params (dict) –

    ComplEx-specific hyperparams:

    • 'negative_corruption_entities' - Entities to be used for generation of corruptions while training. It can take the following values: all (default: all entities), batch (entities present in each batch), a list of entities, or an int (indicating how many entities should be used for corruption generation).
    • 'corrupt_sides' - Specifies how to generate corruptions for training. Takes values s, o, s+o, or any combination passed as a list.
    • 'non_linearity' - can be one of the following values: linear, softplus, sigmoid, tanh.
    • 'stop_epoch' - specifies over how many epochs the numeric values decay (linearly) from 1 to their original values.
    • 'structural_wt' - structural influence hyperparameter [0, 1] that modulates the influence of graph topology.
    • 'normalize_numeric_values' - normalize the numeric values, such that they are scaled between [0, 1].

    The last 4 parameters are related to FocusE layers.

  • optimizer (string) – The optimizer used to minimize the loss function. Choose between ‘sgd’, ‘adagrad’, ‘adam’, ‘momentum’.
  • optimizer_params (dict) –

    Arguments specific to the optimizer, passed as a dictionary.

    Supported keys:

    • ’lr’ (float): learning rate (used by all the optimizers). Default: 0.1.
    • ’momentum’ (float): learning momentum (only used when optimizer=momentum). Default: 0.9.

    Example: optimizer_params={'lr': 0.01}

  • loss (string) –

    The type of loss function to use during training.

    • pairwise: the model will use a pairwise margin-based loss function.
    • nll: the model will use a negative log-likelihood loss.
    • absolute_margin: the model will use an absolute margin loss.
    • self_adversarial: the model will use an adversarial sampling loss function.
    • multiclass_nll: the model will use a multiclass NLL loss. Switch to the multiclass loss defined in [aC15] by passing 'corrupt_sides' as ['s','o'] to embedding_model_params. To use the loss defined in [KBK17] pass 'corrupt_sides' as 'o' to embedding_model_params. (A configuration sketch combining these hyperparameters appears after this parameter list.)
  • loss_params (dict) –

    Dictionary of loss-specific hyperparameters. See loss functions documentation for additional details.

    Example: loss_params={'margin': 0.1} if loss='pairwise'.

  • regularizer (string) –

    The regularization strategy to use with the loss function.

    • None: the model will not use any regularizer (default)
    • ’LP’: the model will use L1, L2 or L3 based on the value of regularizer_params['p'] (see below).
  • regularizer_params (dict) –

    Dictionary of regularizer-specific hyperparameters. See the regularizers documentation for additional details.

    Example: regularizer_params={'lambda': 1e-5, 'p': 2} if regularizer='LP'.

  • initializer (string) –

    The type of initializer to use.

    • normal: The embeddings will be initialized from a normal distribution
    • uniform: The embeddings will be initialized from a uniform distribution
    • xavier: The embeddings will be initialized using xavier strategy (default)
  • initializer_params (dict) –

    Dictionary of initializer-specific hyperparameters. See the initializer documentation for additional details.

    Example: initializer_params={'mean': 0, 'std': 0.001} if initializer='normal'.

  • verbose (bool) – Verbose mode.
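For illustration, the sketch below combines several of the hyperparameters documented above; the specific values are arbitrary, not recommendations. It uses the multiclass NLL loss with corruptions on both sides and an LP regularizer with p=3 (i.e. ComplEx-N3):

from ampligraph.latent_features import ComplEx

# Illustrative configuration only: multiclass_nll loss with corruptions on
# both subject and object, plus a nuclear 3-norm (ComplEx-N3) regularizer.
model = ComplEx(k=50, eta=10, epochs=200, batches_count=10, seed=0,
                embedding_model_params={'corrupt_sides': ['s', 'o'],
                                        'negative_corruption_entities': 'all'},
                optimizer='adam', optimizer_params={'lr': 1e-4},
                loss='multiclass_nll',
                regularizer='LP', regularizer_params={'p': 3, 'lambda': 1e-5})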
fit(X, early_stopping=False, early_stopping_params={}, focusE_numeric_edge_values=None, tensorboard_logs_path=None)

Train a ComplEx model.

The model is trained on a training set X using the training protocol described in [TWR+16].

Parameters:
  • X (ndarray, shape [n, 3]) – The training triples
  • early_stopping (bool) –

    Flag to enable early stopping (default: False).

    If set to True, the training loop adopts the following early stopping heuristic:

    • The model will be trained regardless of early stopping for burn_in epochs.
    • Every check_interval epochs the method will compute the metric specified in criteria.

    If such metric decreases for stop_interval checks, we stop training early.

    Note the metric is computed on x_valid. This is usually a validation set that you held out.

    Also, because criteria is a ranking metric, it requires generating negatives. Entities used to generate corruptions can be specified, as well as the side(s) of a triple to corrupt. The method supports filtered metrics, by passing an array of positives to x_filter. This will be used to filter the negatives generated on the fly (i.e. the corruptions).

    Note

    Keep in mind the early stopping criteria may introduce a certain overhead (caused by the metric computation). The goal is to strike a good trade-off between such overhead and saving training epochs.

    A common approach is to use MRR unfiltered:

    early_stopping_params={'x_valid': X['valid'], 'criteria': 'mrr'}
    

    Note the size of validation set also contributes to such overhead. In most cases a smaller validation set would be enough.

  • early_stopping_params (dictionary) –

    Dictionary of hyperparameters for the early stopping heuristics.

    The following string keys are supported:

    • 'x_valid': ndarray, shape [n, 3] : Validation set to be used for early stopping.
    • 'criteria': string : Criteria for early stopping: 'hits10', 'hits3', 'hits1' or 'mrr' (default).
    • 'x_filter': ndarray, shape [n, 3] : Positive triples to use as filter if a 'filtered' early stopping criteria is desired (i.e. filtered-MRR if 'criteria': 'mrr'). Note this will affect training time (no filter by default).
    • 'burn_in': int : Number of epochs to pass before kicking in early stopping (default: 100).
    • 'check_interval': int : Early stopping interval after burn-in (default: 10).
    • 'stop_interval': int : Stop if criteria is performing worse over n consecutive checks (default: 3).
    • 'corruption_entities': List of entities to be used for corruptions. If 'all', it uses all entities (default: 'all').
    • 'corrupt_side': Specifies which side to corrupt: 's', 'o', 's+o' (default).

    Example: early_stopping_params={'x_valid': X['valid'], 'criteria': 'mrr'} (see the usage sketch after this parameter list).

  • focusE_numeric_edge_values (ndarray, shape [n]) –

    If processing a knowledge graph with numeric values associated with links, this is the vector of such numbers. Passing this argument will activate the FocusE layer [PC21]. Semantically, numeric values can signify importance, uncertainty, significance, confidence, etc. Values can be any number, and will be automatically normalised to the [0, 1] range, on a predicate-specific basis. If the numeric value is unknown, pass np.nan; the model will uniformly randomly assign a numeric value.

    Note

    The following toy example shows how to enable the FocusE layer to process edges with numeric literals:

    import numpy as np
    from ampligraph.latent_features import ComplEx
    model = ComplEx(batches_count=1, seed=555, epochs=20,
                   k=10, loss='pairwise',
                   loss_params={'margin':5})
    X = np.array([['a', 'y', 'b'],
                  ['b', 'y', 'a'],
                  ['a', 'y', 'c'],
                  ['c', 'y', 'a'],
                  ['a', 'y', 'd'],
                  ['c', 'y', 'd'],
                  ['b', 'y', 'c'],
                  ['f', 'y', 'e']])
    
    # Numeric values below are associated with each triple in X.
    # They can be any number and will be automatically
    # normalised to the [0, 1] range, on a
    # predicate-specific basis.
    X_edge_values = np.array([5.34, -1.75, 0.33, 5.12,
                              np.nan, 3.17, 2.76, 0.41])
    
    model.fit(X, focusE_numeric_edge_values=X_edge_values)
    
  • tensorboard_logs_path (str or None) – Path to store TensorBoard logs, e.g. average training loss tracking per epoch (default: None, meaning no logs will be collected). When provided, a folder is created under the given path and TensorBoard files are saved there. To then view the loss in the terminal run: tensorboard --logdir <tensorboard_logs_path>.
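As referenced in early_stopping_params above, the following is a hedged sketch of fitting with early stopping on a hypothetical held-out validation split; the hyperparameter values are illustrative only:

import numpy as np
from ampligraph.latent_features import ComplEx

model = ComplEx(batches_count=2, seed=0, epochs=500, k=20, eta=5)

# Hypothetical train/validation split (validation entities must also
# appear in the training set).
X_train = np.array([['a', 'y', 'b'], ['b', 'y', 'a'], ['a', 'y', 'c'],
                    ['c', 'y', 'a'], ['a', 'y', 'd'], ['c', 'y', 'd']])
X_valid = np.array([['b', 'y', 'd'], ['a', 'y', 'b']])

# Compute MRR on X_valid every 5 epochs after a 50-epoch burn-in, and stop
# if the metric keeps getting worse over 3 consecutive checks.
model.fit(X_train,
          early_stopping=True,
          early_stopping_params={'x_valid': X_valid,
                                 'criteria': 'mrr',
                                 'burn_in': 50,
                                 'check_interval': 5,
                                 'stop_interval': 3,
                                 'corrupt_side': 's+o'})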
get_embeddings(entities, embedding_type='entity')

Get the embeddings of entities or relations.

Note

Use ampligraph.utils.create_tensorboard_visualizations() to visualize the embeddings with TensorBoard.

Parameters:
  • entities (array-like, dtype=int, shape=[n]) – The entities (or relations) of interest. Elements of the vector must be the original string literals, and not internal IDs.
  • embedding_type (string) – If ‘entity’, entities argument will be considered as a list of knowledge graph entities (i.e. nodes). If set to ‘relation’, they will be treated as relation types instead (i.e. predicates).
Returns:

embeddings – An array of k-dimensional embeddings.

Return type:

ndarray, shape [n, k]
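Relation-type embeddings can be retrieved in the same way. The snippet below assumes the toy model trained in the Examples section at the top of this page; note that ComplEx returns vectors that concatenate real and imaginary parts, which is consistent with the 40-dimensional rows shown for k=20 in the entity example above.

# Embedding of the relation type 'y' from the toy example above.
rel_emb = model.get_embeddings(['y'], embedding_type='relation')
print(rel_emb.shape)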

get_hyperparameter_dict()

Returns hyperparameters of the model.

Returns: hyperparam_dict – Dictionary of hyperparameters that were used for training.
Return type: dict
predict(X, from_idx=False)

Predict the scores of triples using a trained embedding model. The function returns raw scores generated by the model.

Note

To obtain probability estimates, calibrate the model with calibrate(), then call predict_proba().

Parameters:
  • X (ndarray, shape [n, 3]) – The triples to score.
  • from_idx (bool) – If True, will skip conversion to internal IDs. (default: False).
Returns:

scores_predict – The predicted scores for input triples X.

Return type:

ndarray, shape [n]

calibrate(X_pos, X_neg=None, positive_base_rate=None, batches_count=100, epochs=50)

Calibrate predictions

The method implements the heuristics described in [TC20], using Platt scaling [P+99].

The calibrated predictions can be obtained with predict_proba() after calibration is done.

Ideally, calibration should be performed on a validation set that was not used to train the embeddings.

There are two modes of operation, depending on the availability of negative triples:

  1. Both positive and negative triples are provided via X_pos and X_neg respectively. The optimization is done using a second-order method (limited-memory BFGS), therefore no hyperparameter needs to be specified.
  2. Only positive triples are provided, and the negative triples are generated via corruptions, just as is done during training or evaluation. The optimization is done using a first-order method (ADAM), therefore batches_count and epochs must be specified.

Calibration is highly dependent on the base rate of positive triples. Therefore, for mode (2) of operation, the user is required to provide the positive_base_rate argument. For mode (1), that can be inferred automatically by the relative sizes of the positive and negative sets, but the user can override that by providing a value to positive_base_rate.

Defining the positive base rate is the biggest challenge when calibrating without negatives. It depends on the user's choice of which triples will be evaluated at test time. Let's take WN11 as an example: it has around 50% positive triples in both the validation and test sets, so naturally the positive base rate is 50%. However, should the user resample it to have 75% positives and 25% negatives, the previous calibration will be degraded, and the model must be recalibrated with a 75% positive base rate. Therefore, this parameter depends on how the user handles the dataset and cannot be determined automatically or a priori.

Note

Incompatible with large graph mode (i.e. if self.dealing_with_large_graphs=True).

Parameters:
  • X_pos (ndarray (shape [n, 3])) – Numpy array of positive triples.
  • X_neg (ndarray (shape [n, 3])) –

    Numpy array of negative triples.

    If None, the negative triples are generated via corruptions and the user must provide a positive base rate instead.

  • positive_base_rate (float) –

    Base rate of positive statements.

    For example, if we assume there is a fifty-fifty chance of any query to be true, the base rate would be 50%.

    If X_neg is provided and this is None, the relative sizes of X_pos and X_neg will be used to determine the base rate. For example, if we have 50 positive triples and 200 negative triples, the positive base rate will be assumed to be 50/(50+200) = 1/5 = 0.2.

    This must be a value between 0 and 1.

  • batches_count (int) – Number of batches to complete one epoch of the Platt scaling training. Only applies when X_neg is None.
  • epochs (int) – Number of epochs used to train the Platt scaling model. Only applies when X_neg is None.

Examples

>>> import numpy as np
>>> from sklearn.metrics import brier_score_loss, log_loss
>>> from scipy.special import expit
>>>
>>> from ampligraph.datasets import load_wn11
>>> from ampligraph.latent_features.models import TransE
>>>
>>> X = load_wn11()
>>> X_valid_pos = X['valid'][X['valid_labels']]
>>> X_valid_neg = X['valid'][~X['valid_labels']]
>>>
>>> model = TransE(batches_count=64, seed=0, epochs=500, k=100, eta=20,
>>>                optimizer='adam', optimizer_params={'lr':0.0001},
>>>                loss='pairwise', verbose=True)
>>>
>>> model.fit(X['train'])
>>>
>>> # Raw scores
>>> scores = model.predict(X['test'])
>>>
>>> # Calibrate with positives and negatives
>>> model.calibrate(X_valid_pos, X_valid_neg, positive_base_rate=None)
>>> probas_pos_neg = model.predict_proba(X['test'])
>>>
>>> # Calibrate with just positives and base rate of 50%
>>> model.calibrate(X_valid_pos, positive_base_rate=0.5)
>>> probas_pos = model.predict_proba(X['test'])
>>>
>>> # Calibration evaluation with the Brier score loss (the smaller, the better)
>>> print("Brier scores")
>>> print("Raw scores:", brier_score_loss(X['test_labels'], expit(scores)))
>>> print("Positive and negative calibration:", brier_score_loss(X['test_labels'], probas_pos_neg))
>>> print("Positive only calibration:", brier_score_loss(X['test_labels'], probas_pos))
Brier scores
Raw scores: 0.4925058891371126
Positive and negative calibration: 0.20434617882733366
Positive only calibration: 0.22597599585144656
predict_proba(X)

Predicts probabilities using the Platt scaling model (after calibration).

Model must be calibrated beforehand with the calibrate method.

Parameters: X (ndarray (shape [n, 3])) – Numpy array of triples to be evaluated.
Returns: probas – Probability of each triple to be true according to the Platt scaling calibration.
Return type: ndarray (shape [n])