Create setting for AdaBoost with Python DecisionTreeClassifier base estimator

setAdaBoost(
  nEstimators = list(10, 50, 200),
  learningRate = list(1, 0.5, 0.1),
  algorithm = list("SAMME.R"),
  seed = sample(1e+06, 1)
)



(list) The maximum number of estimators at which boosting is terminated. In the case of a perfect fit, the learning procedure is stopped early.


(list) Weight applied to each classifier at each boosting iteration. A higher learning rate increases the contribution of each classifier. There is a trade-off between the learningRate and nEstimators parameters.


(list) If ‘SAMME.R’ then use the SAMME.R real boosting algorithm. base_estimator must support calculation of class probabilities. If ‘SAMME’ then use the SAMME discrete boosting algorithm. The SAMME.R algorithm typically converges faster than SAMME, achieving a lower test error with fewer boosting iterations.


A seed for the model.


if (FALSE) {
model.adaBoost <- setAdaBoost(nEstimators = list(10, 50, 200),
                              learningRate = list(1, 0.5, 0.1),
                              algorithm = list('SAMME.R'),
                              seed = sample(1000000, 1))
}