Creates a Shiny app designed to explore any diagnostic results from PhenotypeR. These can include:

* Database diagnostics via `databaseDiagnostics`.
* Diagnostics of the cohort_codelist attribute of the cohort via `codelistDiagnostics`.
* Cohort diagnostics via `cohortDiagnostics`.
* Population diagnostics via `populationDiagnostics`.
* Matched cohort diagnostics via `matchedDiagnostics`.
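The individual diagnostic functions listed above can also be run separately and their outputs combined before being passed to `shinyDiagnostics()`. A minimal sketch, assuming a mock cdm reference with a cohort table `my_cohort`, and assuming `omopgenerics::bind()` for combining summarised results:

```r
library(PhenotypeR)

cdm <- mockPhenotypeR()

# Run a subset of the diagnostics individually; the remaining
# functions listed above follow the same pattern.
db_diag     <- databaseDiagnostics(cdm)
code_diag   <- codelistDiagnostics(cdm$my_cohort)
cohort_diag <- cohortDiagnostics(cdm$my_cohort)

# Combine the summarised results into a single object.
result <- omopgenerics::bind(db_diag, code_diag, cohort_diag)
```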

Usage

shinyDiagnostics(
  result,
  directory,
  minCellCount = 5,
  open = rlang::is_interactive()
)

Arguments

result

A summarised result object, such as the output of `phenotypeDiagnostics()`.

directory

Directory where the report will be saved.

minCellCount

Minimum cell count for suppression when exporting results.

open

If TRUE, the Shiny app will be launched in a new session. If FALSE, the app will be created but not launched.

Value

A Shiny app.

Examples

# \donttest{
library(PhenotypeR)

cdm <- mockPhenotypeR()

result <- phenotypeDiagnostics(cdm$my_cohort)
#> 
#> Warning: Vocabulary version in cdm_source (NA) doesn't match the one in the vocabulary
#> table (mock)
#> 
#> Warning: ! cohort_codelist attribute for cohort is empty
#>  Returning an empty summarised result
#>  You can add a codelist to a cohort with `addCodelistAttribute()`.
#> 
#>  Starting Cohort Diagnostics
#> → Getting cohort attrition
#> → Getting cohort count
#>  summarising data
#>  summarising cohort cohort_1
#>  summarising cohort cohort_2
#>  summariseCharacteristics finished!
#> → Getting cohort overlap
#> → Getting cohort timing
#>  The following estimates will be computed:
#>  days_between_cohort_entries: median, q25, q75, min, max, density
#> ! Table is collected to memory as not all requested estimates are supported on
#>   the database side
#> → Start summary of data, at 2025-06-17 18:42:11.746656
#>  Summary finished, at 2025-06-17 18:42:11.870359
#> → Creating matching cohorts
#> → Sampling cohort `my_cohort`
#> Returning entry cohort as the size of the cohorts to be sampled is equal or
#> smaller than `n`.
#>  Generating an age and sex matched cohort for cohort_1
#> Starting matching
#>  Creating copy of target cohort.
#>  1 cohort to be matched.
#>  Creating controls cohorts.
#>  Excluding cases from controls
#>  Matching by gender_concept_id and year_of_birth
#>  Removing controls that were not in observation at index date
#>  Excluding target records whose pair is not in observation
#>  Adjusting ratio
#> Binding cohorts
#>  Done
#> → Sampling cohort `my_cohort`
#> Returning entry cohort as the size of the cohorts to be sampled is equal or
#> smaller than `n`.
#>  Generating an age and sex matched cohort for cohort_2
#> Starting matching
#>  Creating copy of target cohort.
#>  1 cohort to be matched.
#>  Creating controls cohorts.
#>  Excluding cases from controls
#>  Matching by gender_concept_id and year_of_birth
#>  Removing controls that were not in observation at index date
#>  Excluding target records whose pair is not in observation
#>  Adjusting ratio
#> Binding cohorts
#>  Done
#> → Getting cohorts and indexes
#> → Summarising cohort characteristics
#>  adding demographics columns
#>  adding tableIntersectCount 1/1
#> window names casted to snake_case:
#>  `-365 to -1` -> `365_to_1`
#>  summarising data
#>  summarising cohort cohort_1
#>  summarising cohort cohort_2
#>  summarising cohort cohort_1_sampled
#>  summarising cohort cohort_1_matched
#>  summarising cohort cohort_2_sampled
#>  summarising cohort cohort_2_matched
#>  summariseCharacteristics finished!
#> → Calculating age density
#>  The following estimates will be computed:
#>  age: density
#> → Start summary of data, at 2025-06-17 18:42:36.776259
#>  Summary finished, at 2025-06-17 18:42:37.115428
#> → Run large scale characteristics (including source and standard codes)
#>  Summarising large scale characteristics 
#>  - getting characteristics from table condition_occurrence (1 of 6)
#>  - getting characteristics from table visit_occurrence (2 of 6)
#>  - getting characteristics from table measurement (3 of 6)
#>  - getting characteristics from table procedure_occurrence (4 of 6)
#>  - getting characteristics from table observation (5 of 6)
#>  - getting characteristics from table drug_exposure (6 of 6)
#> Formatting result
#>  Summarising large scale characteristics
#> → Run large scale characteristics (including only standard codes)
#>  Summarising large scale characteristics 
#>  - getting characteristics from table condition_occurrence (1 of 6)
#>  - getting characteristics from table visit_occurrence (2 of 6)
#>  - getting characteristics from table measurement (3 of 6)
#>  - getting characteristics from table procedure_occurrence (4 of 6)
#>  - getting characteristics from table observation (5 of 6)
#>  - getting characteristics from table drug_exposure (6 of 6)
#> Formatting result
#>  Summarising large scale characteristics
#> → Creating death cohort
#>  Cohort tmp_060_death_cohort created.
#> → Estimating single survival event
#> - Getting survival for target cohort 'cohort_1' and outcome cohort
#> 'death_cohort'
#> Getting overall estimates
#> - Getting survival for target cohort 'cohort_2' and outcome cohort
#> 'death_cohort'
#> Getting overall estimates
#> - Getting survival for target cohort 'cohort_1_sampled' and outcome cohort
#> 'death_cohort'
#> Getting overall estimates
#> - Getting survival for target cohort 'cohort_1_matched' and outcome cohort
#> 'death_cohort'
#> Getting overall estimates
#> - Getting survival for target cohort 'cohort_2_sampled' and outcome cohort
#> 'death_cohort'
#> Getting overall estimates
#> - Getting survival for target cohort 'cohort_2_matched' and outcome cohort
#> 'death_cohort'
#> Getting overall estimates
#> `eventgap`, `outcome_washout`, `censor_on_cohort_exit`, `follow_up_days`, and
#> `minimum_survival_days` casted to character.
#> 
#>  Creating denominator for incidence and prevalence
#>  Sampling person table to 1e+06
#>  Creating denominator cohorts
#>  Cohorts created in 0 min and 5 sec
#>  Estimating incidence
#>  Getting incidence for analysis 1 of 14
#>  Getting incidence for analysis 2 of 14
#>  Getting incidence for analysis 3 of 14
#>  Getting incidence for analysis 4 of 14
#>  Getting incidence for analysis 5 of 14
#>  Getting incidence for analysis 6 of 14
#>  Getting incidence for analysis 7 of 14
#>  Getting incidence for analysis 8 of 14
#>  Getting incidence for analysis 9 of 14
#>  Getting incidence for analysis 10 of 14
#>  Getting incidence for analysis 11 of 14
#>  Getting incidence for analysis 12 of 14
#>  Getting incidence for analysis 13 of 14
#>  Getting incidence for analysis 14 of 14
#>  Overall time taken: 0 mins and 13 secs
#>  Estimating prevalence
#>  Getting prevalence for analysis 1 of 14
#>  Getting prevalence for analysis 2 of 14
#>  Getting prevalence for analysis 3 of 14
#>  Getting prevalence for analysis 4 of 14
#>  Getting prevalence for analysis 5 of 14
#>  Getting prevalence for analysis 6 of 14
#>  Getting prevalence for analysis 7 of 14
#>  Getting prevalence for analysis 8 of 14
#>  Getting prevalence for analysis 9 of 14
#>  Getting prevalence for analysis 10 of 14
#>  Getting prevalence for analysis 11 of 14
#>  Getting prevalence for analysis 12 of 14
#>  Getting prevalence for analysis 13 of 14
#>  Getting prevalence for analysis 14 of 14
#>  Time taken: 0 mins and 7 secs
#> 

shinyDiagnostics(result, tempdir())
#>  Creating shiny from provided data
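The non-default arguments can also be set explicitly. A minimal sketch, reusing `result` from above, that writes the app with a stricter suppression threshold and without launching it:

```r
# Build the app with a higher minimum cell count for suppression;
# open = FALSE writes the app files to the directory without launching it.
shinyDiagnostics(
  result = result,
  directory = tempdir(),
  minCellCount = 10,
  open = FALSE
)
```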

CDMConnector::cdmDisconnect(cdm = cdm)
# }