I'm trying to understand k-fold cross-validation using the sklearn Python module.
I understand the basic flow:
model = LogisticRegression()
Where I'm confused is using sklearn's KFold with cross_val_score. As I understand it, the cross_val_score function will fit the model and predict on each fold, giving you an accuracy score for each fold.
e.g. using code like this:
kf = KFold(n_splits=5, shuffle=True, random_state=8)
lr = linear_model.LogisticRegression()
accuracies = cross_val_score(lr, X_train, y_train, scoring='accuracy', cv=kf)
So if I have a dataset with training and testing data, and I use the
cross_val_score function with kfolds to determine the accuracy of the algorithm on my training data for each fold, is the
model now fitted and ready for prediction on the testing data?
So in the case above, can I now call lr.predict on my testing data?
No, the model is not fitted. Looking at the source code for cross_val_score, you can see that it clones the estimator before fitting each fold's training data to the clone, so the estimator you pass in is left untouched.
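A quick way to see this for yourself (a minimal sketch using a toy dataset from sklearn.datasets, not your data): the estimator you pass to cross_val_score is still unfitted afterwards, because only its internal clones were ever fitted.

```python
from sklearn.datasets import make_classification
from sklearn.exceptions import NotFittedError
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

X, y = make_classification(n_samples=100, random_state=0)
lr = LogisticRegression()
cross_val_score(lr, X, y, cv=5)  # fits clones of lr, one per fold

try:
    lr.predict(X)  # fails: lr itself was never fitted
except NotFittedError:
    print("lr is still unfitted")
```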
cross_val_score returns an array of scores, one per fold, which you can analyse to see how the estimator performs on different folds of the data and whether it overfits. See the scikit-learn cross-validation documentation for details.
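For example (illustrative only, using a synthetic dataset in place of your X_train/y_train): with five folds you get five scores, and their mean and spread give a sense of how stable the estimator is across folds.

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import KFold, cross_val_score

X_train, y_train = make_classification(n_samples=200, random_state=8)
kf = KFold(n_splits=5, shuffle=True, random_state=8)
lr = LogisticRegression()
accuracies = cross_val_score(lr, X_train, y_train, scoring='accuracy', cv=kf)

print(accuracies)                            # one accuracy per fold
print(accuracies.mean(), accuracies.std())   # summary across folds
```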
Once you are satisfied with the results of cross_val_score, you still need to fit the estimator on the whole training set before you can use it to predict on the test data.
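Putting it together, a sketch of the full workflow (the dataset and split here are stand-ins for your own training/testing data): cross_val_score for evaluation, then an explicit fit on all of the training data before predicting.

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score, train_test_split

X, y = make_classification(n_samples=300, random_state=8)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=8)

lr = LogisticRegression()
cross_val_score(lr, X_train, y_train, cv=5)  # evaluation only; lr unchanged

lr.fit(X_train, y_train)          # now the model is actually fitted
predictions = lr.predict(X_test)  # safe to predict on held-out test data
print(lr.score(X_test, y_test))
```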