Added basic transfer learning functionality. See vignette("TransferLearning")
Added a GPU memory cleaner that clears cached memory after an out-of-memory error
The Python module torch is now accessed through an exported function instead of being loaded when the package is loaded
Added gradient accumulation: studies running at different sites on different hardware can now use the same effective batch size by accumulating gradients
Refactored cross-validation out of the hyperparameter tuning
Removed predictions from non-optimal hyperparameter combinations to save space
Switched to HTML-only vignettes
Renamed MLP to MultiLayerPerceptron
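The gradient-accumulation entry above can be illustrated with a minimal pure-Python sketch (the package itself implements this in its PyTorch backend; `grad` and `train` here are hypothetical helpers, not package functions):

```python
# Gradient accumulation: process several small micro-batches before a
# single optimizer step, so a site with limited GPU memory can match
# the effective batch size used elsewhere.

def grad(w, x):
    """Gradient of the per-example loss (w*x - 2*x)**2 w.r.t. w."""
    return 2 * x * (w * x - 2 * x)

def train(batch, w=0.0, lr=0.01, accumulation_steps=4):
    """One effective-batch update, accumulated over micro-batches."""
    micro = len(batch) // accumulation_steps
    acc = 0.0
    for i in range(accumulation_steps):
        chunk = batch[i * micro:(i + 1) * micro]
        # scale by the full batch size so the accumulated sum equals
        # the full-batch average gradient
        acc += sum(grad(w, x) for x in chunk) / len(batch)
    return w - lr * acc  # single optimizer step per effective batch
```

The accumulated update is identical to one step on the full batch, which is why the effective batch size is preserved across hardware.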
DeepPatientLevelPrediction 2.0.3
Hotfix: Fix count for polars v0.20.x
DeepPatientLevelPrediction 2.0.2
Ensured the output from predict_proba is numeric instead of a 1-d array
Refactoring: Move cross-validation to a separate function
Refactoring: Move paramsToTune to a separate function
Linting: enforced the HADES style
Calculate AUC ourselves with torch, removing the scikit-learn dependency
Added Andromeda to the development dependencies
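The in-house AUC mentioned above is equivalent to the Mann-Whitney U statistic: the probability that a random positive case is scored above a random negative one. A minimal pure-Python sketch of that equivalence (illustrative only; the package computes it with torch tensors):

```python
def auc(labels, scores):
    """ROC AUC via pairwise comparisons (Mann-Whitney U).

    Counts the fraction of positive/negative pairs where the positive
    example gets the higher score; ties count as half a win.
    """
    pos = [s for l, s in zip(labels, scores) if l == 1]
    neg = [s for l, s in zip(labels, scores) if l == 0]
    wins = 0.0
    for p in pos:
        for n in neg:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(pos) * len(neg))
```

A vectorized version of the same pairwise count (or a rank-based formulation) avoids the quadratic loop on large prediction sets.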
DeepPatientLevelPrediction 2.0.1
Fixed the connection parameter to be in line with the newest polars
Fixed a bug where LRFinder used a hardcoded batch size
A seed is now used in LRFinder so it's reproducible
Fixed a bug in NumericalEmbedding
Fixed a bug for Transformer and numerical features
Fixed a bug when resuming from a full TrainingCache (thanks Zoey Jiang and Linying Zhang)
Updated installation documentation after feedback from HADES hackathon
Fixed a bug where the order of numeric features wasn't preserved between the training and test sets
TrainingCache now saves the prediction dataframe only for the best-performing model
DeepPatientLevelPrediction 2.0.0
New backend that uses PyTorch through reticulate instead of the torch R package
All models ported to Python
Dataset class now in Python
Estimator class in Python
Learning rate finder in Python
Added input checks and tests for wrong inputs
Added a training cache for single hyperparameter combinations
Fixed an empty test for the training cache
DeepPatientLevelPrediction 1.1.6
Caching and resuming of hyperparameter iterations
DeepPatientLevelPrediction 1.1.5
Fixed a bug where the device function was not working for LRFinder
DeepPatientLevelPrediction 1.1.4
Removed the torchopt dependency since AdamW is now in torch
Updated the torch dependency to >=0.10.0
Allowed device to be a function that is resolved during Estimator initialization
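The device-as-function entry above can be sketched as follows (a Python illustration with a hypothetical `Estimator` class; the names are not the package's actual API):

```python
def resolve_device(device):
    """Accept either a device string or a zero-argument callable."""
    return device() if callable(device) else device

class Estimator:
    def __init__(self, device="cpu"):
        # Resolving at initialization lets the choice depend on the
        # runtime environment, e.g. whether a GPU is visible, instead
        # of being fixed when the analysis specification is created.
        self.device = resolve_device(device)
```

Passing a function defers the decision until the estimator is actually built, which is useful when the same study specification runs on heterogeneous hardware.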