R/Metrics.R
computeOhdsiBenchmarkMetrics.Rd
Generate performance metrics for the OHDSI Methods Benchmark
computeOhdsiBenchmarkMetrics(
exportFolder,
mdrr = 1.25,
stratum = "All",
trueEffectSize = "Overall",
calibrated = FALSE,
comparative = FALSE
)
exportFolder: The folder containing the CSV files created using the packageOhdsiBenchmarkResults function. This folder can contain results from various methods, analyses, and databases.

mdrr: The minimum detectable relative risk (MDRR). Only controls with this MDRR will be used to compute the performance metrics. Set to "All" to include all controls.

stratum: The stratum for which to compute the metrics, e.g. 'Acute Pancreatitis'. Set to 'All' to use all controls.

trueEffectSize: Should the analysis be limited to a specific true effect size? Set to "Overall" to include all.

calibrated: Should confidence intervals and p-values be empirically calibrated before computing the metrics?

comparative: Should the methods be evaluated on the task of comparative effect estimation? If FALSE, they will be evaluated on the task of effect estimation.
Value: A data frame with the performance metrics per method - analysisId - database combination.
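
Example (a minimal sketch, not a definitive workflow: the export folder path is hypothetical, the package name MethodEvaluation is assumed as the home of this function, and the folder is assumed to already contain the CSV files written by packageOhdsiBenchmarkResults):

library(MethodEvaluation)

# Compute calibrated, non-comparative performance metrics across all strata,
# using only controls with an MDRR of 1.25:
metrics <- computeOhdsiBenchmarkMetrics(
  exportFolder = "s:/MethodsBenchmark/export",  # hypothetical folder holding the exported CSV files
  mdrr = 1.25,
  stratum = "All",
  trueEffectSize = "Overall",
  calibrated = TRUE,
  comparative = FALSE
)
head(metrics)  # one row per method - analysisId - database combination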