Calculate all measures for sparse ROC

Usage

getThresholdSummary(
  prediction,
  predictionType = "binary",
  typeColumn = "evaluation"
)

Arguments

prediction

A prediction object

predictionType

The type of prediction (binary or survival)

typeColumn

The name of the column in the prediction used to stratify the results (default "evaluation")

Value

A data.frame with the TP, FP, TN, FN, TPR, FPR, accuracy, PPV, FOR and F-measure at each threshold

Details

Calculates the TP, FP, TN, FN, TPR, FPR, accuracy, PPV, FOR and F-measure from a prediction object
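
For intuition, these measures all derive from the confusion matrix at a given threshold. A minimal base-R sketch of the quantities reported (an illustration of the definitions, not the package internals; the toy `outcome`/`value` vectors and the 0.5 cut-off are assumptions for the example):

```r
# Toy labels and predicted risks (assumed for illustration)
outcome <- c(1, 0, 1, 1, 0, 0, 1, 0)
value   <- c(0.9, 0.8, 0.7, 0.6, 0.4, 0.35, 0.3, 0.1)
threshold <- 0.5

# Confusion-matrix counts at this threshold
predicted <- as.integer(value >= threshold)
tp <- sum(predicted == 1 & outcome == 1)
fp <- sum(predicted == 1 & outcome == 0)
tn <- sum(predicted == 0 & outcome == 0)
fn <- sum(predicted == 0 & outcome == 1)

# Derived measures, as named in the threshold summary
sensitivity       <- tp / (tp + fn)                 # TPR
specificity       <- tn / (tn + fp)
falsePositiveRate <- fp / (fp + tn)
accuracy          <- (tp + tn) / (tp + fp + tn + fn)
ppv               <- tp / (tp + fp)                 # positive predictive value
falseOmissionRate <- fn / (fn + tn)
f1Score           <- 2 * ppv * sensitivity / (ppv + sensitivity)
```

`getThresholdSummary()` repeats this calculation at every threshold rather than at a single cut-off.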

Examples

prediction <- data.frame(rowId = 1:100,
                         outcomeCount = stats::rbinom(100, 1, prob = 0.5),
                         value = runif(100),
                         evaluation = rep("Train", 100))
summary <- getThresholdSummary(prediction)
str(summary)
#> 'data.frame':	100 obs. of  24 variables:
#>  $ evaluation             : chr  "Train" "Train" "Train" "Train" ...
#>  $ predictionThreshold    : num  0.991 0.984 0.975 0.969 0.964 ...
#>  $ preferenceThreshold    : num  0.991 0.984 0.975 0.969 0.964 ...
#>  $ positiveCount          : num  1 2 3 4 5 6 7 8 9 10 ...
#>  $ negativeCount          : num  99 98 97 96 95 94 93 92 91 90 ...
#>  $ trueCount              : num  50 50 50 50 50 50 50 50 50 50 ...
#>  $ falseCount             : num  50 50 50 50 50 50 50 50 50 50 ...
#>  $ truePositiveCount      : num  0 0 0 1 2 2 3 4 4 4 ...
#>  $ trueNegativeCount      : num  49 48 47 47 47 46 46 46 45 44 ...
#>  $ falsePositiveCount     : num  1 2 3 3 3 4 4 4 5 6 ...
#>  $ falseNegativeCount     : num  50 50 50 49 48 48 47 46 46 46 ...
#>  $ f1Score                : num  NaN NaN NaN 0.037 0.0727 ...
#>  $ accuracy               : num  0.49 0.48 0.47 0.48 0.49 0.48 0.49 0.5 0.49 0.48 ...
#>  $ sensitivity            : num  0 0 0 0.02 0.04 0.04 0.06 0.08 0.08 0.08 ...
#>  $ falseNegativeRate      : num  1 1 1 0.98 0.96 0.96 0.94 0.92 0.92 0.92 ...
#>  $ falsePositiveRate      : num  0.02 0.04 0.06 0.06 0.06 0.08 0.08 0.08 0.1 0.12 ...
#>  $ specificity            : num  0.98 0.96 0.94 0.94 0.94 0.92 0.92 0.92 0.9 0.88 ...
#>  $ positivePredictiveValue: num  0 0 0 0.25 0.4 ...
#>  $ falseDiscoveryRate     : num  1 1 1 0.75 0.6 ...
#>  $ negativePredictiveValue: num  0.495 0.49 0.485 0.49 0.495 ...
#>  $ falseOmissionRate      : num  0.505 0.51 0.515 0.51 0.505 ...
#>  $ positiveLikelihoodRatio: num  0 0 0 0.333 0.667 ...
#>  $ negativeLikelihoodRatio: num  1.02 1.04 1.06 1.04 1.02 ...
#>  $ diagnosticOddsRatio    : num  0 0 0 0.32 0.653 ...
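
The `falsePositiveRate` and `sensitivity` columns above trace the sparse ROC curve. A base-R sketch that reconstructs the same points directly from labels and risks, independent of the package (the toy `outcome`/`value` vectors are assumptions for the example):

```r
# Toy labels and predicted risks (assumed for illustration)
outcome <- c(1, 0, 1, 1, 0, 0, 1, 0)
value   <- c(0.9, 0.8, 0.7, 0.6, 0.4, 0.35, 0.3, 0.1)

# One ROC point per distinct predicted value, highest threshold first
thresholds <- sort(unique(value), decreasing = TRUE)
roc <- data.frame(
  predictionThreshold = thresholds,
  sensitivity = sapply(thresholds, function(t) {
    sum(value >= t & outcome == 1) / sum(outcome == 1)
  }),
  falsePositiveRate = sapply(thresholds, function(t) {
    sum(value >= t & outcome == 0) / sum(outcome == 0)
  })
)
# plot(roc$falsePositiveRate, roc$sensitivity, type = "s")  # step ROC curve
```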