How to use the fairlearn.metrics.mean_underprediction function in fairlearn

To help you get started, we've selected a few fairlearn examples based on popular ways the function is used in public projects; the snippets below come from fairlearn's own test suite.

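Judging from the tests below, mean_underprediction returns the average amount by which predictions fall short of the true values, with samples that are not underpredicted contributing zero; an optional third argument turns this into a weighted average. Here is a minimal usage sketch, assuming a fairlearn version that exposes mean_underprediction under fairlearn.metrics (the exact module path and the keyword name for the weights may differ between releases, so the weights are passed positionally as in the tests below):

import fairlearn.metrics as metrics

y_true = [1, 1, 5, 0, 2]
y_pred = [0, 1, 1, 3, 4]

# Shortfalls max(y_true - y_pred, 0) are [1, 0, 4, 0, 0];
# averaged over all 5 samples this gives 1.0.
print(metrics.mean_underprediction(y_true, y_pred))

y_true_w = [1, 1, 2, 0, 2]
y_pred_w = [0, 1, 5, 3, 1]
weights = [4, 1, 2, 2, 1]

# Shortfalls [1, 0, 0, 0, 1] weighted by [4, 1, 2, 2, 1]
# give (4*1 + 1*1) / (4+1+2+2+1) = 0.5.
print(metrics.mean_underprediction(y_true_w, y_pred_w, weights))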

From fairlearn/fairlearn (test/unit/metrics/test_mean_predictions.py):
def test_mean_underprediction_weighted_single():
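    # A single sample underpredicted by 42; with only one sample the weight cancels out of the weighted mean.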
    y_pred = [0]
    y_true = [42]
    weight = [2]

    result = metrics.mean_underprediction(y_true, y_pred, weight)

    assert result == 42

From fairlearn/fairlearn (test/unit/metrics/test_mean_predictions.py):
def test_mean_underprediction_unweighted_single():
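    # A single sample underpredicted by 1, so the mean shortfall is 1.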
    y_pred = [0]
    y_true = [1]

    result = metrics.mean_underprediction(y_true, y_pred)

    assert result == 1

From fairlearn/fairlearn (test/unit/metrics/test_mean_predictions.py):
def test_mean_underprediction_weighted():
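    # Shortfalls max(y_true - y_pred, 0) are [1, 0, 0, 0, 1]; weighted mean = (4*1 + 1*1) / (4+1+2+2+1) = 0.5.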
    y_pred = [0, 1, 5, 3, 1]
    y_true = [1, 1, 2, 0, 2]
    weight = [4, 1, 2, 2, 1]

    result = metrics.mean_underprediction(y_true, y_pred, weight)

    assert result == 0.5

From fairlearn/fairlearn (test/unit/metrics/test_mean_predictions.py):
def test_mean_underprediction_unweighted():
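    # Shortfalls are [1, 0, 4, 0, 0]; averaged over all 5 samples this gives 1.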
    y_pred = [0, 1, 1, 3, 4]
    y_true = [1, 1, 5, 0, 2]

    result = metrics.mean_underprediction(y_true, y_pred)

    assert result == 1