createPrior creates a Cyclops prior object for use with fitCyclopsModel.

createPrior(
  priorType,
  variance = 1,
  exclude = c(),
  graph = NULL,
  neighborhood = NULL,
  useCrossValidation = FALSE,
  forceIntercept = FALSE
)

Arguments

priorType

Character: specifies prior distribution. See below for options

variance

Numeric: prior distribution variance

exclude

A vector of numbers or covariateId names to exclude from prior

graph

Child-to-parent mapping for a hierarchical prior

neighborhood

A list of first-order neighborhoods for a partially fused prior

useCrossValidation

Logical: Perform cross-validation to determine prior variance.

forceIntercept

Logical: Force intercept coefficient into prior

Value

A Cyclops prior object of class inheriting from "cyclopsPrior" for use with fitCyclopsModel.
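For orientation, here is a minimal sketch of a few typical configurations (the covariate ID passed to exclude matches the intercept term in the example data further below and is otherwise arbitrary):

library(Cyclops)

# No regularization: a flat prior on every coefficient
flatPrior <- createPrior("none")

# Laplace (L1) prior with a fixed variance, leaving covariate 0 unpenalized
laplacePrior <- createPrior("laplace", variance = 1, exclude = 0)

# Normal (L2, ridge-like) prior with its variance chosen by cross-validation
normalPrior <- createPrior("normal", useCrossValidation = TRUE)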

Prior types

We specify all priors in terms of their variance parameters. Similar fitting tools for regularized regression often parameterize the Laplace distribution in terms of a rate "lambda" per observation. See "glmnet", for example.

variance = 2 / (nobs * lambda)^2 or lambda = sqrt(2 / variance) / nobs
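As a concrete illustration of this relationship, the helper functions below (not part of the Cyclops API) simply restate the formula; note that the rate printed as Laplace(...) in the example log below appears to be sqrt(2 / variance), without the per-observation scaling:

# Convert between a glmnet-style per-observation rate and a Cyclops prior variance
lambdaToVariance <- function(lambda, nobs) 2 / (nobs * lambda)^2
varianceToLambda <- function(variance, nobs) sqrt(2 / variance) / nobs

# For the 1000-row example below, the starting variance of 0.234 corresponds to
varianceToLambda(0.234, nobs = 1000)      # roughly 0.0029
lambdaToVariance(0.0029235, nobs = 1000)  # roughly 0.234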

Examples

# Generate some simulated data:
sim <- simulateCyclopsData(nstrata = 1, nrows = 1000, ncovars = 2, eCovarsPerRow = 0.5, 
                           model = "poisson")
#> Sparseness = 76.6 %
cyclopsData <- convertToCyclopsData(sim$outcomes, sim$covariates, modelType = "pr", 
                                    addIntercept = TRUE)
#> Sorting covariates by covariateId and rowId

# Define the prior and control objects to use cross-validation for finding the 
# optimal hyperparameter:
prior <- createPrior("laplace", exclude = 0, useCrossValidation = TRUE)
control <- createControl(cvType = "auto", noiseLevel = "quiet")

# Fit the model
fit <- fitCyclopsModel(cyclopsData, prior = prior, control = control)
#> Using cross-validation selector type byRow
#> Performing 10-fold cross-validation [seed = 1698789248] with data partitions of sizes 100 100 100 100 100 100 100 100 100 100
#> Using 1 thread(s)
#> Starting var = 0.234 (default)
#> Running at Laplace(2.92353) None  Grid-point #1 at 0.234 	Fold #1 Rep #1 pred log like = 600.677
#> Running at Laplace(2.92353) None  Grid-point #1 at 0.234 	Fold #2 Rep #1 pred log like = 519.924
#> Running at Laplace(2.92353) None  Grid-point #1 at 0.234 	Fold #3 Rep #1 pred log like = 621.534
#> Running at Laplace(2.92353) None  Grid-point #1 at 0.234 	Fold #4 Rep #1 pred log like = 429.156
#> Running at Laplace(2.92353) None  Grid-point #1 at 0.234 	Fold #5 Rep #1 pred log like = 455.987
#> Running at Laplace(2.92353) None  Grid-point #1 at 0.234 	Fold #6 Rep #1 pred log like = 569.073
#> Running at Laplace(2.92353) None  Grid-point #1 at 0.234 	Fold #7 Rep #1 pred log like = 445.111
#> Running at Laplace(2.92353) None  Grid-point #1 at 0.234 	Fold #8 Rep #1 pred log like = 680.122
#> Running at Laplace(2.92353) None  Grid-point #1 at 0.234 	Fold #9 Rep #1 pred log like = 513.724
#> Running at Laplace(2.92353) None  Grid-point #1 at 0.234 	Fold #10 Rep #1 pred log like = 579.541
#> AvgPred = 541.485 with stdev = 78.608
#> Completed at 0.234
#> Next point at 2.34 with value 0 and continue = 1
#> search[ 0.234 ] = 541.485(78.608)
#> 
#> Running at Laplace(0.9245) None  Grid-point #2 at 2.34 	Fold #1 Rep #1 pred log like = 600.62
#> Running at Laplace(0.9245) None  Grid-point #2 at 2.34 	Fold #2 Rep #1 pred log like = 519.914
#> Running at Laplace(0.9245) None  Grid-point #2 at 2.34 	Fold #3 Rep #1 pred log like = 621.499
#> Running at Laplace(0.9245) None  Grid-point #2 at 2.34 	Fold #4 Rep #1 pred log like = 429.072
#> Running at Laplace(0.9245) None  Grid-point #2 at 2.34 	Fold #5 Rep #1 pred log like = 455.985
#> Running at Laplace(0.9245) None  Grid-point #2 at 2.34 	Fold #6 Rep #1 pred log like = 569.055
#> Running at Laplace(0.9245) None  Grid-point #2 at 2.34 	Fold #7 Rep #1 pred log like = 445.068
#> Running at Laplace(0.9245) None  Grid-point #2 at 2.34 	Fold #8 Rep #1 pred log like = 680.096
#> Running at Laplace(0.9245) None  Grid-point #2 at 2.34 	Fold #9 Rep #1 pred log like = 513.724
#> Running at Laplace(0.9245) None  Grid-point #2 at 2.34 	Fold #10 Rep #1 pred log like = 579.472
#> AvgPred = 541.45 with stdev = 78.6096
#> Completed at 2.34
#> Next point at 0.0234 with value 0 and continue = 1
#> search[ 0.234 ] = 541.485(78.608)
#> search[ 2.34 ] = 541.45(78.6096)
#> 
#> Running at Laplace(9.245) None  Grid-point #3 at 0.0234 	Fold #1 Rep #1 pred log like = 600.805
#> Running at Laplace(9.245) None  Grid-point #3 at 0.0234 	Fold #2 Rep #1 pred log like = 519.924
#> Running at Laplace(9.245) None  Grid-point #3 at 0.0234 	Fold #3 Rep #1 pred log like = 621.649
#> Running at Laplace(9.245) None  Grid-point #3 at 0.0234 	Fold #4 Rep #1 pred log like = 429.416
#> Running at Laplace(9.245) None  Grid-point #3 at 0.0234 	Fold #5 Rep #1 pred log like = 456.004
#> Running at Laplace(9.245) None  Grid-point #3 at 0.0234 	Fold #6 Rep #1 pred log like = 569.073
#> Running at Laplace(9.245) None  Grid-point #3 at 0.0234 	Fold #7 Rep #1 pred log like = 445.152
#> Running at Laplace(9.245) None  Grid-point #3 at 0.0234 	Fold #8 Rep #1 pred log like = 680.134
#> Running at Laplace(9.245) None  Grid-point #3 at 0.0234 	Fold #9 Rep #1 pred log like = 513.725
#> Running at Laplace(9.245) None  Grid-point #3 at 0.0234 	Fold #10 Rep #1 pred log like = 579.753
#> AvgPred = 541.564 with stdev = 78.598
#> Completed at 0.0234
#> Next point at 0.00234 with value 0 and continue = 1
#> search[ 0.0234 ] = 541.564(78.598)
#> search[ 0.234 ] = 541.485(78.608)
#> search[ 2.34 ] = 541.45(78.6096)
#> 
#> Running at Laplace(29.2353) None  Grid-point #4 at 0.00234 	Fold #1 Rep #1 pred log like = 600.805
#> Running at Laplace(29.2353) None  Grid-point #4 at 0.00234 	Fold #2 Rep #1 pred log like = 519.924
#> Running at Laplace(29.2353) None  Grid-point #4 at 0.00234 	Fold #3 Rep #1 pred log like = 621.874
#> Running at Laplace(29.2353) None  Grid-point #4 at 0.00234 	Fold #4 Rep #1 pred log like = 430.035
#> Running at Laplace(29.2353) None  Grid-point #4 at 0.00234 	Fold #5 Rep #1 pred log like = 456.006
#> Running at Laplace(29.2353) None  Grid-point #4 at 0.00234 	Fold #6 Rep #1 pred log like = 569.073
#> Running at Laplace(29.2353) None  Grid-point #4 at 0.00234 	Fold #7 Rep #1 pred log like = 445.152
#> Running at Laplace(29.2353) None  Grid-point #4 at 0.00234 	Fold #8 Rep #1 pred log like = 680.134
#> Running at Laplace(29.2353) None  Grid-point #4 at 0.00234 	Fold #9 Rep #1 pred log like = 513.725
#> Running at Laplace(29.2353) None  Grid-point #4 at 0.00234 	Fold #10 Rep #1 pred log like = 580.157
#> AvgPred = 541.688 with stdev = 78.5522
#> Completed at 0.00234
#> Next point at 0.000234 with value 0 and continue = 1
#> search[ 0.00234 ] = 541.688(78.5522)
#> search[ 0.0234 ] = 541.564(78.598)
#> search[ 0.234 ] = 541.485(78.608)
#> search[ 2.34 ] = 541.45(78.6096)
#> 
#> Running at Laplace(92.45) None  Grid-point #5 at 0.000234 	Fold #1 Rep #1 pred log like = 600.805
#> Running at Laplace(92.45) None  Grid-point #5 at 0.000234 	Fold #2 Rep #1 pred log like = 519.924
#> Running at Laplace(92.45) None  Grid-point #5 at 0.000234 	Fold #3 Rep #1 pred log like = 621.874
#> Running at Laplace(92.45) None  Grid-point #5 at 0.000234 	Fold #4 Rep #1 pred log like = 430.035
#> Running at Laplace(92.45) None  Grid-point #5 at 0.000234 	Fold #5 Rep #1 pred log like = 456.006
#> Running at Laplace(92.45) None  Grid-point #5 at 0.000234 	Fold #6 Rep #1 pred log like = 569.073
#> Running at Laplace(92.45) None  Grid-point #5 at 0.000234 	Fold #7 Rep #1 pred log like = 445.152
#> Running at Laplace(92.45) None  Grid-point #5 at 0.000234 	Fold #8 Rep #1 pred log like = 680.134
#> Running at Laplace(92.45) None  Grid-point #5 at 0.000234 	Fold #9 Rep #1 pred log like = 513.725
#> Running at Laplace(92.45) None  Grid-point #5 at 0.000234 	Fold #10 Rep #1 pred log like = 580.157
#> AvgPred = 541.688 with stdev = 78.5522
#> Completed at 0.000234
#> Next point at 3.23147e-23 with value 542.287 and continue = 0
#> search[ 0.000234 ] = 541.688(78.5522)
#> search[ 0.00234 ] = 541.688(78.5522)
#> search[ 0.0234 ] = 541.564(78.598)
#> search[ 0.234 ] = 541.485(78.608)
#> search[ 2.34 ] = 541.45(78.6096)
#> 
#> 
#> Maximum predicted log likelihood (542.287) estimated at:
#> 	3.23147e-23 (variance)
#> 	2.4878e+11 (lambda)
#> 
#> Fitting model at optimal hyperparameter
#> Using prior: Laplace(2.4878e+11) None 

# Find out what the optimal hyperparameter was:
getHyperParameter(fit)
#> [1] 3.231474e-23

# Extract the current log-likelihood and coefficients:
logLik(fit)
#> 'log Lik.' -2122.09 (df=3)
coef(fit)
#> (Intercept)           1           2 
#>   -3.777662    0.000000    0.000000 

# We can only retrieve confidence intervals for unregularized coefficients:
confint(fit, c(0))
#> Using 1 thread(s)
#> 
#> Warning: BLR gradient is ill-conditioned
#> Enforcing convergence!
# (this warning is emitted 24 times in total; the remaining 23 repetitions are omitted)
#>             covariate     2.5 %    97.5 % evaluations
#> (Intercept)         0 -3.803731 -3.751804          24
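
# (Illustrative follow-up, not part of the original example.) The variance selected
# by cross-validation can be reused as a fixed hyperparameter in a later fit,
# skipping the search:
refitPrior <- createPrior("laplace", variance = getHyperParameter(fit), exclude = 0)
refit <- fitCyclopsModel(cyclopsData, prior = refitPrior)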