WhiteRabbit is a software tool to help prepare for ETLs (Extraction, Transformation, Loading) of longitudinal health care databases into the Observational Medical Outcomes Partnership (OMOP) Common Data Model (CDM). The source data can be in delimited text files, SAS files, or in a database (MySQL, SQL Server, Oracle, PostgreSQL, Microsoft Access, Amazon RedShift, PDW, Teradata, Google BigQuery, Azure). Note that for support of the OHDSI analytical tooling, the OMOP CDM will need to be in one of a limited set of database platforms (SQL Server, Oracle, PostgreSQL, Amazon RedShift, Google BigQuery, Impala).
WhiteRabbit’s main function is to perform a scan of the source data, providing detailed information on the tables, the fields, and the values that appear in each field. This scan generates a report that can be used as a reference when designing the ETL, for instance with the Rabbit-In-A-Hat tool. WhiteRabbit differs from standard data profiling tools in that it attempts to prevent the display of personally identifiable information (PII) values in the generated output file.
The typical sequence for using this software is to scan the source data in preparation for designing an ETL into an OMOP CDM. Once the scan report is created, it can be used in the Rabbit-In-A-Hat tool or as a stand-alone data profiling document.
Download the latest version, WhiteRabbit_vX.X.X.zip (where X.X.X is the latest version number), from the GitHub releases page and unzip it. Run `bin/whiteRabbit.bat` on Windows to start WhiteRabbit, and `bin/whiteRabbit` on macOS and Linux.

Note: on releases earlier than version 0.8.0, open the respective WhiteRabbit.jar or RabbitInAHat.jar files instead.

Note: WhiteRabbit and RabbitInAHat only work from a path containing only ASCII characters.
WhiteRabbit may fail to start when the memory allocated by the JVM is too large or too small. By default this is set to 1200m. To increase the memory (in this example to 2400m), either set the environment variable `EXTRA_JVM_ARGUMENTS=-Xmx2400m` before starting, or edit the line `%JAVACMD% %JAVA_OPTS% -Xmx2400m...` in `bin/whiteRabbit.bat`. To lower the memory, set one of these variables to e.g. `-Xmx600m`. If you have a 32-bit Java VM installed and problems persist, consider installing 64-bit Java.
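For example, to give WhiteRabbit 2400m of heap via the environment variable (a minimal sketch based on the variable named above; it assumes you start from the unzipped WhiteRabbit folder):

```
# macOS/Linux
export EXTRA_JVM_ARGUMENTS=-Xmx2400m
bin/whiteRabbit
```

```
rem Windows
set EXTRA_JVM_ARGUMENTS=-Xmx2400m
bin\whiteRabbit.bat
```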
The Apache POI library is used to generate the scan report in Excel format. This library creates its own directory for temporary files inside the system temporary directory. As reported in issue 293, this can cause problems in a multi-user environment when multiple users attempt to create this directory with permissions that are too restrictive (read-only for other users). From version 0.10.9, WhiteRabbit attempts to circumvent this automatically, but this workaround can fail due to concurrency problems. To prevent the problem entirely, you can set either the environment variable `ORG_OHDSI_WHITERABBIT_POI_TMPDIR` or the Java system property `org.ohdsi.whiterabbit.poi.tmpdir` to a temporary directory of your choice when starting WhiteRabbit (preferably by adding it to the whiteRabbit or whiteRabbit.bat script). Please note that this directory should exist before you start WhiteRabbit, and that it should be writable by any user that may want to run WhiteRabbit. A separate subdirectory is created for each user, so permission-related conflicts should be avoided. WhiteRabbit also attempts to detect this situation before the scan starts; if it is detected, the scan is not started and the problem is reported up front instead of afterwards.
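For example, to point WhiteRabbit at a shared temporary directory (a sketch; the directory path is a placeholder, and it must exist and be writable for all WhiteRabbit users before starting):

```
# macOS/Linux: via the environment variable
export ORG_OHDSI_WHITERABBIT_POI_TMPDIR=/opt/whiterabbit-tmp
bin/whiteRabbit

# alternatively, pass the Java system property; reusing EXTRA_JVM_ARGUMENTS
# for this is an assumption based on the memory settings described above
export EXTRA_JVM_ARGUMENTS=-Dorg.ohdsi.whiterabbit.poi.tmpdir=/opt/whiterabbit-tmp
bin/whiteRabbit
```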
All source code, descriptions and input/output examples are available on GitHub: https://github.com/OHDSI/WhiteRabbit
Any bugs/issues/enhancements should be posted to the GitHub repository: https://github.com/OHDSI/WhiteRabbit/issues
Any questions/comments/feedback/discussion can be posted on the OHDSI Developer Forum: http://forums.ohdsi.org/c/developers
Any files that WhiteRabbit creates will be exported to this local folder. Use the “Pick Folder” button to select the location in your local environment where you would like the scan document to go.
Here you can specify the location of the source data. The following source types are supported: delimited text files, SAS files, MySQL, SQL Server, Oracle, PostgreSQL, Microsoft Access, Amazon RedShift, PDW, Teradata, Google BigQuery, Azure, and Snowflake. Below are connection instructions for each type of data source. Once you have entered the necessary information, the “Test connection” button can verify that a connection can be made.
Delimited text files: specify the delimiter that separates the columns; enter `tab` for a tab-delimited file. WhiteRabbit will look for the files to scan in the same folder you set up as the working directory.

SAS files: WhiteRabbit will look for `.sas7bdat` files to scan in the same folder you set up as the working directory. Note that it is currently not possible to produce fake data for SAS files from a scan report.
MySQL: the server location is the name or IP address of the machine running MySQL. You can also specify the port (ex: `<host>:<port>`), which defaults to 3306.

Oracle: the server location contains the SID, service name, and optionally the port: `<host>/<sid>`, `<host>:<port>/<sid>`, `<host>/<service name>`, or `<host>:<port>/<service name>`.
SQL Server: the server location is the name or IP address of the machine running SQL Server. You can also specify the port (ex: `<host>:<port>`), which defaults to 1433. If the user is part of a domain, specify the user name as `<domain>/<user>` (e.g. ‘MyDomain/Joe’). When the SQL Server JDBC drivers are installed, you can also use Windows authentication; in this case, user name and password should be left empty. The driver package contains the file _sqljdbc_4.0/enu/auth/x64/sqljdbc_auth.dll_ (64-bit) or _sqljdbc_4.0/enu/auth/x86/sqljdbc_auth.dll_ (32-bit), which needs to be moved to a location on the system path, for example c:/windows/system32.
PostgreSQL: the server location contains the host name and database name (`<host>/<database>`). You can also specify the port (ex: `<host>:<port>/<database>`), which defaults to 5432.

If you want to use a BigQuery instance as the source database, then after installing WhiteRabbit you will need to download a zip file with the BigQuery JDBC driver and unzip it into the `repo` directory of the WhiteRabbit installation. The latest version tested with WhiteRabbit is 1.5.2.1005. The zip file can be downloaded here.
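To summarize the server location formats described above, concrete values might look like the following (all host names, ports, and database names here are hypothetical placeholders, not values from this documentation):

```
MySQL:       db.example.com:3306          (database name is entered in a separate field)
Oracle:      db.example.com:1521/orcl
SQL Server:  db.example.com:1433
PostgreSQL:  db.example.com:5432/ohdsi
```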
Google BigQuery (GBQ) supports two different connection/authentication methods: application default credentials and service account authentication. The former method is considered more secure because it writes auditing events to Stackdriver. The specific method used is determined by the arguments provided in the configuration panel, as described below.
Authentication via application default credentials:
When using application default credentials authentication, you must run the following gcloud command under the user account once: `gcloud auth application-default login`. An application key is written to `~/.config/gcloud/application_default_credentials.json`.
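For example (standard gcloud CLI usage; run this once for the account that will run WhiteRabbit):

```
# writes an application key to ~/.config/gcloud/application_default_credentials.json
gcloud auth application-default login
```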
Authentication via service account credentials:
Azure: the server location contains the server address and database name (e.g. `<project>.database.windows.net:1433;database=<database_name>`).

Please note that the fields Password and Authentication method are mutually exclusive: a value should be supplied for only one of them. A warning is given when a value is supplied for both fields.
If you want to use a Teradata instance as the source database, then after installing WhiteRabbit you will need to download a zip file with the Teradata JDBC driver and unzip it into the `repo` directory of the WhiteRabbit installation. The latest version tested with WhiteRabbit is 20.00.00.16. The zip file can be downloaded here.
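For example, on macOS/Linux (the zip file name below is a hypothetical placeholder; use the file you actually downloaded and your own installation path):

```
# unzip the Teradata JDBC driver into WhiteRabbit's repo directory
unzip terajdbc-20.00.00.16.zip -d /path/to/WhiteRabbit/repo
```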
A scan generates a report containing information on the source data that can be used to help design the ETL. Using the Scan tab in WhiteRabbit you can either select individual tables in the selected source database by clicking on ‘Add’ (Ctrl + mouse click), or automatically select all tables in the database by clicking on ‘Add all in DB’.
The scan itself also has a few settings, such as the minimum cell count and whether to calculate numeric statistics (both described later in this document).
Once all settings are completed, press the ‘Scan tables’ button. After the scan is completed the report will be written to the working folder.
For various reasons one may prefer to run WhiteRabbit from the command line. This is possible by specifying all the options one would normally select in the user interface in an .ini file. Example ini files can be found in the iniFileExamples folder: `WhiteRabbit.ini` is a generic example, and there are also one or more database-specific examples (e.g. `Snowflake.ini`). The ini file can then be referenced when calling WhiteRabbit from the command line, e.g.:
Windows
bin/whiteRabbit.bat -ini WhiteRabbit.ini
Mac/Unix
bin/whiteRabbit -ini WhiteRabbit.ini
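For reference, a minimal ini file might look like the sketch below. It is modeled on the bundled `WhiteRabbit.ini` example; the exact set of keys can differ per WhiteRabbit version and database type, so treat this as an assumption and check the files in iniFileExamples:

```
WORKING_FOLDER = /home/user/whiterabbit
DATA_TYPE = PostgreSQL
SERVER_LOCATION = localhost:5432/source_db
USER_NAME = joe
PASSWORD = secret
DATABASE_NAME = public
TABLES_TO_SCAN = *
SCAN_FIELD_VALUES = yes
MIN_CELL_COUNT = 5
MAX_DISTINCT_VALUES = 1000
ROWS_PER_TABLE = 100000
CALCULATE_NUMERIC_STATS = no
```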
After the scan is completed, a “ScanReport” Excel document will be created in the working folder location selected earlier. The document has multiple tabs. The first two tabs are a “Field Overview” tab and a “Table Overview” tab. The subsequent tabs contain field and value overviews for each database table or delimited text file selected for the scan. The last tab (indicated by "_") contains metadata on the WhiteRabbit settings used to create the scan report. The “Table Overview” and "_" tabs are not present in releases earlier than v0.10.0.
The “Field Overview” tab shows, for each table scanned, the details of each field: for example, the data type, the number of empty rows, and other statistics.
Note that the reported number of unique values in a field is sometimes an upper bound, indicated by a `<=` sign (this column is not present in releases earlier than v0.9.0). The “Table Overview” tab gives information about each of the tables in the data source. Below is an example image of the “Table Overview” tab.
The “Description” column for both the field and table overview was added in v0.10.0. These cells are not populated by WhiteRabbit (except when scanning sas7bdat files that contain labels). Rather, this field provides a way for the data holder to add descriptions to the fields and tables. These descriptions are displayed in Rabbit-In-A-Hat when loading the scan report. This is especially useful when the field names are abbreviations or in a foreign language.
If the values of the table have been scanned (described in Performing the Scan), the scan report will contain a tab for each scanned table. An example for one field is shown below.
The field names from the source table will be across the columns of the Excel tab. Each source field will generate two columns in the Excel file. One column will list all distinct values that have a “Min cell count” greater than what was set at the time of the scan. Next to each distinct value will be a second column containing the frequency, i.e. the number of times that value occurs in the data. These two columns (distinct values and frequency) repeat for all the source columns in the profiled table.
If a list of unique values was truncated, the last value in the list will be "List truncated..."
; this indicates that there are one or more additional unique source values that have a frequency lower than the “Min cell count”.
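As an illustration, one such pair of columns might look like this in the scan report (a GENDER field; the counts match the example discussed below):

| GENDER | Frequency |
|--------|-----------|
| 1      | 104       |
| 2      | 96        |
| List truncated... |   |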
The scan report is a powerful aid to understanding your source data because it highlights what actually exists in it. For example, the values above were retrieved for the “GENDER” column of one of the scanned tables, and we can see that there were two common values (1 and 2) that appeared 104 and 96 times respectively. WhiteRabbit will not define “1” as male and “2” as female; the data holder will typically need to define the source codes unique to the source system. However, these two values (1 and 2) are not the only values present in the data, because we see the list was truncated. The other values appear with very low frequency (below the “Min cell count”) and often represent incorrect or highly suspicious values. When generating an ETL we should plan to handle not only the high-frequency gender values “1” and “2” but also the low-frequency values that exist within this column.
If the option for numerical statistics is checked, then a set of statistics is calculated for all integer, real, and date data types. The following statistics are added to the Field Overview sheet (columns K-Q): the average, the standard deviation, the minimum, the three quartile boundaries (25%, median, 75%), and the maximum.
When selecting the option for scanning numerical statistics, the parameter “Numeric stats reservoir size” can be set. This defines the number of values that will be stored for the calculation of the numeric statistics; these values are randomly sampled from the field values in the scan report. If the number of values is smaller than the set reservoir size, then the standard deviation and the three quartile boundaries are the exact population statistics; otherwise, the statistics are approximated based on a representative sample. The average, minimum, and maximum are always true population statistics. For dates, the standard deviation is given in days; the other date statistics are converted to a date representation.
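The sampling behaviour described above matches classic reservoir sampling. A minimal sketch of the idea in Java (illustrative only; this is not WhiteRabbit's actual implementation):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Random;

// Reservoir sampling (Algorithm R): keeps a uniform random sample of at
// most reservoirSize values from a stream of values of unknown length.
public class ReservoirSketch {
    public static List<Double> sample(Iterable<Double> values, int reservoirSize, Random rnd) {
        List<Double> reservoir = new ArrayList<>(reservoirSize);
        long seen = 0;
        for (double value : values) {
            seen++;
            if (reservoir.size() < reservoirSize) {
                reservoir.add(value); // fill the reservoir first
            } else {
                // keep each new value with probability reservoirSize / seen
                long index = (long) (rnd.nextDouble() * seen);
                if (index < reservoirSize)
                    reservoir.set((int) index, value);
            }
        }
        // If fewer than reservoirSize values were seen, the "sample" is the
        // whole population and statistics derived from it are exact.
        return reservoir;
    }
}
```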
This feature allows one to create a fake dataset based on a WhiteRabbit scan report. The generated fake data can be written directly to database tables (MySQL, Oracle, SQL Server, PostgreSQL) or to delimited text files. The resulting dataset can be used to develop ETL code when direct access to the data is not available.
WhiteRabbit has three modes to generate fake data:
The following options are available for generating fake data: