Mithril Python Server 0.0.1 documentation - Home
  • Supervised Learning
  • Explainability
  • Data Quality
  • Deployment and Production Testing
  • Abstract Tensor
  • Learn
    • NEPs
  • GitHub


Contents:

  • Performance Metrics
    • Accuracy
    • Precision
    • Recall
    • F1 Score
    • Specificity
    • Balanced Accuracy
    • Matthews Correlation Coefficient (MCC)
    • Cohen’s Kappa
    • Hamming Loss
    • Jaccard Index
    • Logarithmic Loss
    • Cross-Entropy Loss
    • Brier Score
  • Area Under ROC Curve (ROC AUC)
  • Average Precision (AP)
  • Confusion Matrix
  • Wilcoxon Signed-Rank Test/Paired t-test
  • Bootstrap Confidence Intervals
  • Robustness and Generalization
  • Calibration Curves
  • Expected Calibration Error (ECE)
  • Prediction Intervals
  • Bias and Fairness Analysis
  • Privacy Impact Assessment
  • Explainability Requirements
  • Benchmarking Against State-of-the-Art
  • Ablation Studies
  • Extreme Conditions Testing
  • Edge Case Scenarios
  • Catastrophic Forgetting Tests
  • Incremental Learning Performance
  • Bayesian Neural Networks Testing
  • Monte Carlo Dropout
  • Cultural Sensitivity Checks
  • Localization Sensitivity

Specificity
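The body of this section is missing from the captured page. Specificity (the true negative rate) is the proportion of actual negatives that the classifier correctly identifies: TN / (TN + FP). As a minimal sketch of the computation, assuming binary labels where 0 denotes the negative class (the function name and signature below are illustrative, not part of the Mithril API):

```python
def specificity(y_true, y_pred, negative_label=0):
    """Specificity (true negative rate) = TN / (TN + FP).

    Minimal standard-library sketch; `negative_label` marks the
    negative class (an illustrative convention, not a Mithril API).
    """
    tn = sum(1 for t, p in zip(y_true, y_pred)
             if t == negative_label and p == negative_label)
    fp = sum(1 for t, p in zip(y_true, y_pred)
             if t == negative_label and p != negative_label)
    return tn / (tn + fp) if (tn + fp) else 0.0

# 4 actual negatives: 3 predicted negative (TN), 1 predicted positive (FP)
print(specificity([0, 0, 0, 0, 1, 1], [0, 0, 0, 1, 1, 0]))  # → 0.75
```

Note that specificity ignores the positive class entirely; it complements recall (sensitivity), and the two are averaged in the Balanced Accuracy metric covered next.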


© Copyright 2024, Mithril Labs, Inc.

Created using Sphinx 8.1.3.

Built with the PyData Sphinx Theme 0.15.4.