xorbits.xgboost.train(params, dtrain, evals=(), **kwargs)

Train a booster with given parameters.

Parameters

  • params – Booster params.

  • dtrain – Data to train on.

  • num_boost_round (Not supported yet) – Number of boosting iterations.

  • evals – List of validation sets for which metrics will be evaluated during training. Validation metrics help us track the performance of the model.

  • obj – Custom objective function. See Custom Objective for details.

  • feval (Not supported yet) –

    Deprecated since version 1.6.0 (xgboost): Use custom_metric instead.

  • maximize (Not supported yet) – Whether to maximize feval.

  • early_stopping_rounds (Not supported yet) – Activates early stopping. Validation metric needs to improve at least once in every early_stopping_rounds round(s) to continue training. Requires at least one item in evals. The method returns the model from the last iteration (not the best one). Use custom callback or model slicing if the best model is desired. If there’s more than one item in evals, the last entry will be used for early stopping. If there’s more than one metric in the eval_metric parameter given in params, the last metric will be used for early stopping. If early stopping occurs, the model will have two additional fields: bst.best_score, bst.best_iteration.

  • evals_result (Not supported yet) –

    This dictionary stores the evaluation results of all the items in watchlist.

    Example: with a watchlist containing [(dtest, 'eval'), (dtrain, 'train')] and a parameter containing {'eval_metric': 'logloss'}, the evals_result returns

    {'train': {'logloss': ['0.48253', '0.35953']},
     'eval': {'logloss': ['0.480385', '0.357756']}}

  • verbose_eval (Not supported yet) – Requires at least one item in evals. If verbose_eval is True then the evaluation metric on the validation set is printed at each boosting stage. If verbose_eval is an integer then the evaluation metric on the validation set is printed at every given verbose_eval boosting stage. The last boosting stage / the boosting stage found by using early_stopping_rounds is also printed. Example: with verbose_eval=4 and at least one item in evals, an evaluation metric is printed every 4 boosting stages, instead of every boosting stage.

  • xgb_model (Not supported yet) – XGBoost model to be loaded before training (allows training continuation).

  • callbacks (Not supported yet) –

    List of callback functions that are applied at the end of each iteration. It is possible to use predefined callbacks by using the Callback API.


    States in the callback are not preserved during training, which means callback objects cannot be reused for multiple training sessions without reinitialization or deepcopy.

    import xgboost as xgb

    for params in parameters_grid:
        # be sure to (re)initialize the callbacks before each run
        callbacks = [xgb.callback.LearningRateScheduler(custom_rates)]
        xgb.train(params, Xy, callbacks=callbacks)

  • custom_metric (Not supported yet) –

    Custom metric function. See Custom Metric for details.



Returns

a trained booster model

This docstring was copied from xgboost.