# Evaluation

The module includes performance metrics for neural graph embedding models, along with model selection routines, negatives generation, and an implementation of the learning-to-rank evaluation protocol used in the literature.

## Metrics

Learning-to-rank metrics to evaluate the performance of neural graph embedding models.

| Function | Description |
| --- | --- |
| `rank_score(y_true, y_pred[, pos_lab])` | Rank of a triple |
| `mrr_score(ranks)` | Mean Reciprocal Rank (MRR) |
| `mr_score(ranks)` | Mean Rank (MR) |
| `hits_at_n_score(ranks, n)` | Hits@N |
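To make the rank-aggregation metrics concrete, here is a minimal NumPy sketch of what `mrr_score`, `mr_score`, and `hits_at_n_score` compute from an array of ranks (this is an illustration of the formulas, not the library's own implementation):

```python
import numpy as np

def mrr_score(ranks):
    # Mean Reciprocal Rank: average of 1/rank over all test triples.
    ranks = np.asarray(ranks, dtype=float)
    return float(np.mean(1.0 / ranks))

def mr_score(ranks):
    # Mean Rank: average rank of the true triple among its corruptions.
    return float(np.mean(np.asarray(ranks, dtype=float)))

def hits_at_n_score(ranks, n):
    # Hits@N: fraction of test triples ranked in the top N.
    ranks = np.asarray(ranks)
    return float(np.mean(ranks <= n))

ranks = [1, 2, 5, 10]
print(mr_score(ranks))           # 4.5
print(hits_at_n_score(ranks, 3)) # 0.5
```

All three take the ranks produced by the evaluation protocol below; MRR is the most common headline number because it is bounded in (0, 1] and insensitive to the total number of corruptions.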

## Negatives Generation

Negatives generation routines. These are corruption strategies based on the Local Closed-World Assumption (LCWA).

| Function | Description |
| --- | --- |
| `generate_corruptions_for_eval(X, …[, …])` | Generate corruptions for evaluation. |
| `generate_corruptions_for_fit(X[, …])` | Generate corruptions for training. |
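Under the LCWA, a negative is obtained from a true triple by replacing either the subject or the object (never both) with another entity. The sketch below illustrates the idea with a hypothetical `generate_corruptions` helper; the library's actual functions differ in signature and, for evaluation, typically enumerate all entities rather than sampling:

```python
import numpy as np

def generate_corruptions(X, entities, eta=5, rng=None):
    # For each true triple (s, p, o), produce eta negatives by replacing
    # either the subject or the object with a randomly drawn entity,
    # following the Local Closed-World Assumption (LCWA).
    rng = np.random.default_rng(rng)
    corruptions = []
    for s, p, o in X:
        for _ in range(eta):
            e = entities[rng.integers(len(entities))]
            if rng.random() < 0.5:
                corruptions.append((e, p, o))  # corrupt the subject side
            else:
                corruptions.append((s, p, e))  # corrupt the object side
    return corruptions

triples = [("alice", "likes", "bob")]
ents = ["alice", "bob", "carol"]
print(len(generate_corruptions(triples, ents, eta=5, rng=0)))  # 5
```

Note that a sampled corruption may coincide with a true triple; filtering such false negatives out is exactly what distinguishes the "filtered" evaluation setting from the "raw" one.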

## Evaluation & Model Selection

Functions to evaluate the predictive power of knowledge graph embedding models, and routines for model selection.

| Function | Description |
| --- | --- |
| `evaluate_performance(X, model[, …])` | Evaluate the performance of an embedding model. |
| `select_best_model_ranking(model_class, X, …)` | Model selection routine for embedding models. |
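The core of the learning-to-rank protocol is simple: score each test triple together with its corruptions, and record the position of the true triple in the resulting ranking. A minimal sketch, using a hypothetical `rank_against_corruptions` helper and a toy scorer standing in for a trained model:

```python
import numpy as np

def rank_against_corruptions(score_fn, triple, corruptions):
    # Rank of the true triple's score among its corruptions' scores
    # (rank 1 = best; a higher score means a more plausible triple).
    true_score = score_fn(*triple)
    corr_scores = np.array([score_fn(*c) for c in corruptions])
    return int(np.sum(corr_scores > true_score)) + 1

# Toy scorer: hypothetical, stands in for a trained embedding model.
def toy_score(s, p, o):
    return {"good": 0.9, "bad": 0.1}.get(o, 0.5)

rank = rank_against_corruptions(toy_score,
                                ("a", "rel", "good"),
                                [("a", "rel", "bad"), ("a", "rel", "other")])
print(rank)  # 1
```

The ranks collected this way over the whole test set are the input to `mrr_score`, `mr_score`, and `hits_at_n_score`; model selection then amounts to repeating the procedure on a validation set for each hyperparameter configuration and keeping the best-scoring one.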

## Helper Functions

Utilities and support functions for evaluation procedures.

| Function | Description |
| --- | --- |
| `train_test_split_no_unseen(X[, test_size, …])` | Split into train and test sets. |
| `create_mappings(X)` | Create string-IDs mappings for entities and relations. |
| `to_idx(X, ent_to_idx, rel_to_idx)` | Convert statements (triples) into integer IDs. |
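The mapping helpers exist because embedding models index entity and relation matrices by integer ID, while datasets arrive as string triples. A self-contained sketch of what `create_mappings` and `to_idx` do (illustrative reimplementation, not the library's code):

```python
def create_mappings(X):
    # Build string-to-integer ID maps for every entity and relation in X.
    ents = sorted({s for s, _, _ in X} | {o for _, _, o in X})
    rels = sorted({p for _, p, _ in X})
    return ({e: i for i, e in enumerate(ents)},
            {r: i for i, r in enumerate(rels)})

def to_idx(X, ent_to_idx, rel_to_idx):
    # Convert string triples into integer-ID triples using the maps above.
    return [(ent_to_idx[s], rel_to_idx[p], ent_to_idx[o]) for s, p, o in X]

X = [("alice", "likes", "bob"), ("bob", "knows", "carol")]
e2i, r2i = create_mappings(X)
print(to_idx(X, e2i, r2i))  # [(0, 1, 1), (1, 0, 2)]
```

`train_test_split_no_unseen` addresses a related pitfall: a naive random split can put an entity only in the test set, leaving it with no trained embedding, so the split must guarantee every test entity and relation also appears in training.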