
Metrics Glossary

This section contains guides for different metrics used to measure model performance.

Each ML use case requires different metrics. Using the right metrics is critical for understanding and meaningfully comparing model performance. In each metrics guide, you can learn about the metric with examples, its limitations and biases, and its intended uses.

  • Accuracy


    Accuracy measures the proportion of a model's predictions that are correct. It's a good metric for assessing model performance in simple cases with balanced data.

  • Average Precision (AP)


    Average precision summarizes a precision-recall (PR) curve into a single threshold-independent value representing a model's performance across all thresholds; the threshold-sweep sketch after this list illustrates the computation.

  • Averaging Methods: Macro, Micro, Weighted


    Different averaging methods for aggregating metrics for multiclass workflows, such as classification and object detection.

  • Confusion Matrix


    A confusion matrix is a table that summarizes classification model performance by counting objects in each predicted class (columns) against each actual class (rows), showing where a model confuses one class for another.

  • F1-score


    F1-score is a metric that combines two competing metrics, precision and recall, with equal weight. It symmetrically represents both precision and recall as one metric.

  • False Positive Rate (FPR)


    False positive rate (FPR) measures the proportion of negative ground truths that a model incorrectly predicts as positive, ranging from 0 to 1. It is useful when the objective is to measure and reduce false positive inferences.

  • Precision


    Precision measures the proportion of positive inferences from a model that are correct. It is useful when the objective is to measure and reduce false positive inferences.

  • Precision-Recall (PR) Curve


    A precision-recall curve is a plot that gauges machine learning model performance by using precision and recall. It is built with precision on the y-axis and recall on the x-axis, computed across many thresholds.

  • Recall (TPR, Sensitivity)


    Recall, also known as true positive rate (TPR) and sensitivity, measures the proportion of all positive ground truths that a model correctly predicts. It is useful when the objective is to measure and reduce false negative ground truths, i.e. model misses.

  • Receiver Operating Characteristic (ROC) Curve


    A receiver operating characteristic (ROC) curve is a plot that is used to evaluate the performance of binary classification models by using the true positive rate (TPR) and the false positive rate (FPR).

  • Specificity (TNR)


    Specificity, also known as true negative rate (TNR), measures the proportion of negative ground truths that a model correctly predicts, ranging from 0 to 1. It is useful when the objective is to measure the model's ability to correctly identify the negative class instances.

  • TP / FP / FN / TN


    The counts of TP, FP, FN, and TN ground truths and inferences are essential for summarizing model performance. They are the building blocks of many other metrics, including accuracy, precision, and recall, as sketched below.
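
As a rough illustration of how these counts combine into the metrics above, here is a minimal Python sketch. The `summary_metrics` helper and the example counts are hypothetical and only meant to show the arithmetic.

```python
# Minimal sketch: deriving common metrics from TP / FP / FN / TN counts.
# The helper name and the example counts are made up for illustration.

def summary_metrics(tp: int, fp: int, fn: int, tn: int) -> dict:
    accuracy = (tp + tn) / (tp + fp + fn + tn)              # correct predictions over all predictions
    precision = tp / (tp + fp) if (tp + fp) else 0.0        # correct positives over predicted positives
    recall = tp / (tp + fn) if (tp + fn) else 0.0           # correct positives over actual positives (TPR)
    f1 = 2 * precision * recall / (precision + recall) if (precision + recall) else 0.0
    fpr = fp / (fp + tn) if (fp + tn) else 0.0              # false positives over actual negatives
    specificity = tn / (tn + fp) if (tn + fp) else 0.0      # correct negatives over actual negatives (TNR)
    return {"accuracy": accuracy, "precision": precision, "recall": recall,
            "f1": f1, "fpr": fpr, "specificity": specificity}

print(summary_metrics(tp=80, fp=10, fn=20, tn=90))
```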
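
The curve-based metrics above (PR curve, AP, ROC curve) come from sweeping a score threshold rather than from fixed counts. The sketch below shows one way to compute them, assuming scikit-learn is available; the labels and scores are made up.

```python
# Minimal threshold-sweep sketch for the PR curve, AP, and ROC curve using scikit-learn.
import numpy as np
from sklearn.metrics import average_precision_score, precision_recall_curve, roc_curve

y_true = np.array([0, 0, 1, 1, 0, 1, 1, 0])                      # ground-truth labels (made up)
y_score = np.array([0.1, 0.4, 0.35, 0.8, 0.2, 0.9, 0.55, 0.5])   # model confidence scores (made up)

precision, recall, _ = precision_recall_curve(y_true, y_score)   # PR curve points across thresholds
ap = average_precision_score(y_true, y_score)                    # single-value AP summary of the PR curve
fpr, tpr, _ = roc_curve(y_true, y_score)                         # ROC curve points across thresholds

print(f"AP = {ap:.3f}")
print("PR points:", list(zip(recall.round(2), precision.round(2))))
print("ROC points:", list(zip(fpr.round(2), tpr.round(2))))
```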

Computer Vision

  • Geometry Matching


    Geometry matching is the process of matching inferences to ground truths for computer vision workflows with a localization component. It is a core building block for metrics such as TP, FP, and FN, and any metrics built on top of these, like precision, recall, and F1-score.

  • Intersection over Union (IoU)


    IoU measures the overlap between two geometries, segmentation masks, sets of labels, or time-series snippets. It is also known as the Jaccard index in classification workflows.
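
For axis-aligned bounding boxes, IoU reduces to a few lines of arithmetic. This is a minimal sketch; the box coordinates are made up, and real workflows also handle polygons, masks, and degenerate boxes.

```python
# Minimal IoU sketch for two axis-aligned boxes in (x1, y1, x2, y2) form.

def bbox_iou(a: tuple, b: tuple) -> float:
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])          # top-left of the intersection
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])          # bottom-right of the intersection
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)    # intersection area (0 if boxes don't overlap)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    union = area_a + area_b - inter
    return inter / union if union else 0.0

print(bbox_iou((0, 0, 10, 10), (5, 5, 15, 15)))          # ~0.143
```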

Large Language Models

  • Contradiction Score


    Natural language inference (NLI) classification measures whether candidate texts and reference texts are contradictory or consistent, providing a useful signal for hallucination detection; a short NLI sketch follows this list.

  • Vectara's HEM Score


    Vectara offers a hallucination detection metric that can live alongside any NLP workflow to measure factual consistency between candidate texts and reference texts.
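
As a rough sketch of the NLI-based contradiction score, the snippet below scores a (reference, candidate) pair with an off-the-shelf NLI model. It assumes the Hugging Face transformers library; "roberta-large-mnli" is used purely as an example model, and the texts are made up.

```python
# Minimal NLI contradiction-score sketch using an example Hugging Face NLI model.
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model_name = "roberta-large-mnli"  # example NLI model; any entailment/contradiction model works similarly
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name)

reference = "The company was founded in 2015 in Berlin."   # made-up reference text
candidate = "The company was founded in 2020."             # made-up candidate text

# Encode the (premise, hypothesis) pair and convert logits to class probabilities.
inputs = tokenizer(reference, candidate, return_tensors="pt", truncation=True)
with torch.no_grad():
    probs = torch.softmax(model(**inputs).logits, dim=-1)[0]

# Read the contradiction probability from the model's own label mapping
# instead of assuming a fixed label order.
label_to_index = {label.lower(): i for i, label in model.config.id2label.items()}
print(f"contradiction score: {probs[label_to_index['contradiction']]:.3f}")
```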

Natural Language Processing

  • BERTScore


    BERTScore is a metric used in NLP workflows to measure textual similarity between candidate texts and reference texts.

  • BLEU


    BLEU is a metric commonly used in a variety of NLP workflows to evaluate the quality of candidate texts. BLEU can be thought of as an analog to precision for text comparisons; a unigram-overlap sketch after this list contrasts it with ROUGE-N.

  • METEOR


    METEOR is a widely used NLP metric that measures the quality of candidate texts against reference texts. Though it is an n-gram based metric, it goes beyond traditional methods by factoring in precision, recall, and word order to provide a more comprehensive measure of text quality.

  • Perplexity


    Perplexity is a metric commonly used in natural language processing to evaluate the quality of language models, particularly in the context of text generation. Unlike metrics such as BLEU or BERTScore, perplexity doesn't directly measure the quality of generated text by comparing it with reference texts. Instead, it assesses the "confidence" or "surprise" of a language model in predicting the next word in a sequence; a toy computation follows this list.

  • ROUGE-N


    ROUGE-N, a metric within the broader ROUGE metric collection, is a vital metric in the field of NLP. It assesses the quality of a candidate text by measuring the overlap of n-grams between the candidate text and reference texts. ROUGE-N can be thought of as an analog to recall for text comparisons.

  • Word, Character, and Match Error Rate


    Word Error Rate (WER), Character Error Rate (CER), and Match Error Rate (MER) are essential metrics used in the evaluation of speech recognition and natural language processing systems. From a high level, they each quantify the similarity between reference and candidate texts, with zero being a perfect score.
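
Word error rate reduces to a word-level edit distance divided by the reference length, and character error rate is the same idea at the character level. The sketch below is a minimal, self-contained version; the reference and candidate strings are made up.

```python
# Minimal WER / CER sketch via Levenshtein edit distance.

def edit_distance(ref, hyp) -> int:
    """Substitutions + insertions + deletions needed to turn ref into hyp."""
    dp = list(range(len(hyp) + 1))
    for i, r in enumerate(ref, start=1):
        prev, dp[0] = dp[0], i
        for j, h in enumerate(hyp, start=1):
            cur = min(dp[j] + 1,                      # deletion
                      dp[j - 1] + 1,                  # insertion
                      prev + (0 if r == h else 1))    # substitution or match
            prev, dp[j] = dp[j], cur
    return dp[-1]

def wer(reference: str, candidate: str) -> float:
    ref_words = reference.split()
    return edit_distance(ref_words, candidate.split()) / len(ref_words)

def cer(reference: str, candidate: str) -> float:
    return edit_distance(list(reference), list(candidate)) / len(reference)

reference = "the quick brown fox jumps over the lazy dog"
candidate = "the quick brown fox jumped over a lazy dog"
print(f"WER = {wer(reference, candidate):.3f}, CER = {cer(reference, candidate):.3f}")
```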
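
The precision-versus-recall contrast between BLEU and ROUGE-N mentioned above can be seen with plain unigram counts. This is only a toy sketch; real BLEU and ROUGE implementations add multiple n-gram orders, brevity penalties, and multi-reference handling.

```python
# Toy unigram-overlap sketch: BLEU-style precision vs. ROUGE-style recall.
from collections import Counter

reference = "the cat sat on the mat".split()
candidate = "the cat is on the mat".split()

overlap = sum((Counter(reference) & Counter(candidate)).values())  # clipped unigram matches
precision_like = overlap / len(candidate)   # BLEU-1 flavor: matches over candidate length
recall_like = overlap / len(reference)      # ROUGE-1 flavor: matches over reference length
print(f"unigram precision = {precision_like:.2f}, unigram recall = {recall_like:.2f}")
```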
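
Perplexity itself is just the exponential of the average negative log-likelihood the model assigns to the observed tokens. The probabilities below are made up to show the arithmetic.

```python
# Toy perplexity sketch: exp(average negative log-likelihood of the observed tokens).
import math

token_probs = [0.20, 0.05, 0.40, 0.10]   # made-up model probabilities for each actual next token
nll = -sum(math.log(p) for p in token_probs) / len(token_probs)
print(f"perplexity = {math.exp(nll):.2f}")   # lower is better; 1.0 means the model was never surprised
```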