class ampligraph.latent_features.SelfAdversarialLoss(eta, loss_params=None, verbose=False)

Self adversarial sampling loss.

Introduced in [SDNT19].

\[\mathcal{L} = -\log\, \sigma(\gamma + f_{model} (\mathbf{s},\mathbf{o})) - \sum_{i=1}^{n} p(s_{i}^{'}, r, o_{i}^{'} ) \ \log \ \sigma(-f_{model}(\mathbf{s}_{i}^{'},\mathbf{o}_{i}^{'}) - \gamma)\]

where \(\mathbf{s}, \mathbf{o} \in \mathbb{R}^k\) are the embeddings of the subject and object of a triple \(t=(s,r,o)\), \(\gamma\) is the margin, \(\sigma\) is the sigmoid function, and \(p(s_{i}^{'}, r, o_{i}^{'} )\) is the negative sampling distribution, defined as:

\[p(s'_j, r, o'_j | \{(s_i, r_i, o_i)\}) = \frac{\exp \alpha \, f_{model}(\mathbf{s'_j}, \mathbf{o'_j})} {\sum_i \exp \alpha \, f_{model}(\mathbf{s'_i}, \mathbf{o'_i})}\]

where \(\alpha\) is the temperature of sampling and \(f_{model}\) is the scoring function of the embedding model.
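As an illustrative sketch (not AmpliGraph's internal implementation), the sampling distribution above is a temperature-scaled softmax over the scores of the generated negatives; the helper name below is hypothetical:

```python
import numpy as np

def negative_sampling_weights(neg_scores, alpha=0.5):
    """Temperature-scaled softmax over negative-triple scores.

    neg_scores: scores f_model(s'_i, o'_i) of the n corrupted triples.
    alpha: sampling temperature; larger alpha concentrates more weight
           on high-scoring (i.e. harder) negatives.
    """
    z = alpha * np.asarray(neg_scores, dtype=float)
    z -= z.max()                      # stabilise the exponentials
    w = np.exp(z)
    return w / w.sum()                # p(s'_j, r, o'_j)

# Harder negatives (higher scores) receive larger weights:
weights = negative_sampling_weights([2.0, 0.5, -1.0], alpha=0.5)
```

Note that with \(\alpha = 0\) the distribution degenerates to uniform weighting, recovering ordinary negative sampling.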


__init__(eta, loss_params=None, verbose=False)

Initialize the loss.

  • eta (int) – number of negatives generated per positive triple
  • loss_params (dict) –

    Dictionary of loss-specific hyperparams:

    • 'margin' (float): margin to be used for loss computation (default: 1)
    • 'alpha' (float): temperature of sampling (default: 0.5)

    Example: loss_params={'margin': 1, 'alpha': 0.5}
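Putting the formulas together, here is a minimal NumPy sketch of the loss for a single positive triple and its eta negatives. This is illustrative only, with a hypothetical function name, and is not the library's TensorFlow implementation:

```python
import numpy as np

def self_adversarial_loss(pos_score, neg_scores, margin=1.0, alpha=0.5):
    """Self-adversarial loss for one positive score and its negatives.

    pos_score: f_model(s, o) of the true triple.
    neg_scores: scores of the corrupted triples.
    margin: gamma in the loss formula.
    alpha: sampling temperature.
    """
    neg_scores = np.asarray(neg_scores, dtype=float)

    # p(s'_i, r, o'_i): softmax of alpha * scores over the negatives
    z = alpha * neg_scores
    z -= z.max()                      # numerical stability
    p = np.exp(z) / np.exp(z).sum()

    sigmoid = lambda x: 1.0 / (1.0 + np.exp(-x))

    # -log sigma(gamma + f(s, o))
    pos_term = -np.log(sigmoid(margin + pos_score))
    # -sum_i p_i * log sigma(-f(s'_i, o'_i) - gamma)
    neg_term = -(p * np.log(sigmoid(-neg_scores - margin))).sum()
    return pos_term + neg_term
```

A well-trained model drives the positive score up and the negative scores down, which pushes both terms (and hence the loss) toward zero.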