#MachineLearning #SupervisedLearning #Classification

By Billy Gustave

FuPont Chemical Company

Business challenge/requirement

FuPont is a leading global chemical company. As part of a CSR (Corporate Social Responsibility) mission, it wants to identify biodegradable products by studying the relationship between chemical structure and the biodegradation of molecules.

Goal:

  • Create an ML model to classify chemical structures as 'Ready Biodegradable' (RB) vs. 'Not Ready Biodegradable' (NRB)
  • Data: bio-degradabale-data.csv
  • Compare several models, including boosting ones

Data Cleaning and Exploration

In [1]:
import numpy as np, pandas as pd, matplotlib.pyplot as plt, seaborn as sns
In [2]:
# feature column names
names = ['SpMax_L', 'J_Dz', 'nHM', 'F01[N-N]', 'F04[C-N]', 'NssssC', 'nCb-', 'C%','nCp','nO','F03[C-N]','SdssC','HyWi_B','LOC','SM6_L','F03[C-O]','Me','Mi','nN-N','nArNO2','nCRX3','SpPosA','nCIR','B01[C-Br]','B03[C-Cl]','N-073','SpMax_A','Psi_i_1d','B04[C-Br]','SdO','TI2_L','nCrt','C-026','F02[C-N]','nHDon','SpMax_B','Psi_i_A','nN','SM6_B','nArCOOR','nX','experimentalclass']
In [3]:
df = pd.read_csv('bio-degradabale-data.csv', header=None, sep=';',names=names)
df.shape
Out[3]:
(1055, 42)
In [4]:
df.head()
Out[4]:
SpMax_L J_Dz nHM F01[N-N] F04[C-N] NssssC nCb- C% nCp nO ... C-026 F02[C-N] nHDon SpMax_B Psi_i_A nN SM6_B nArCOOR nX experimentalclass
0 3.919 2.6909 0 0 0 0 0 31.4 2 0 ... 0 0 0 2.949 1.591 0 7.253 0 0 RB
1 4.170 2.1144 0 0 0 0 0 30.8 1 1 ... 0 0 0 3.315 1.967 0 7.257 0 0 RB
2 3.932 3.2512 0 0 0 0 0 26.7 2 4 ... 0 0 1 3.076 2.417 0 7.601 0 0 RB
3 3.000 2.7098 0 0 0 0 0 20.0 0 2 ... 0 0 1 3.046 5.000 0 6.690 0 0 RB
4 4.236 3.3944 0 0 0 0 0 29.4 2 4 ... 0 0 0 3.351 2.405 0 8.003 0 0 RB

5 rows × 42 columns

In [5]:
df.describe()
Out[5]:
SpMax_L J_Dz nHM F01[N-N] F04[C-N] NssssC nCb- C% nCp nO ... nCrt C-026 F02[C-N] nHDon SpMax_B Psi_i_A nN SM6_B nArCOOR nX
count 1055.000000 1055.000000 1055.000000 1055.000000 1055.000000 1055.000000 1055.000000 1055.000000 1055.000000 1055.000000 ... 1055.000000 1055.000000 1055.000000 1055.000000 1055.000000 1055.000000 1055.000000 1055.000000 1055.000000 1055.000000
mean 4.782644 3.069508 0.716588 0.042654 0.980095 0.290047 1.646445 37.055640 1.376303 1.803791 ... 0.129858 0.883412 1.274882 0.961137 3.918240 2.558417 0.686256 8.629492 0.051185 0.723223
std 0.546916 0.831308 1.462452 0.256010 2.332955 1.073771 2.224822 9.144466 1.963521 1.775435 ... 0.644057 1.520467 2.273994 1.257013 0.999602 0.642765 1.090389 1.241986 0.318970 2.239286
min 2.000000 0.803900 0.000000 0.000000 0.000000 0.000000 0.000000 0.000000 0.000000 0.000000 ... 0.000000 0.000000 0.000000 0.000000 2.267000 1.467000 0.000000 4.917000 0.000000 0.000000
25% 4.481000 2.502750 0.000000 0.000000 0.000000 0.000000 0.000000 30.450000 0.000000 0.000000 ... 0.000000 0.000000 0.000000 0.000000 3.487500 2.103000 0.000000 7.991000 0.000000 0.000000
50% 4.828000 3.046300 0.000000 0.000000 0.000000 0.000000 1.000000 37.500000 1.000000 2.000000 ... 0.000000 0.000000 0.000000 1.000000 3.726000 2.458000 0.000000 8.499000 0.000000 0.000000
75% 5.125000 3.437650 1.000000 0.000000 1.000000 0.000000 3.000000 43.400000 2.000000 3.000000 ... 0.000000 1.000000 2.000000 2.000000 3.987000 2.870500 1.000000 9.020500 0.000000 0.000000
max 6.496000 9.177500 12.000000 3.000000 36.000000 13.000000 18.000000 60.700000 24.000000 12.000000 ... 8.000000 12.000000 18.000000 7.000000 10.695000 5.825000 8.000000 14.700000 4.000000 27.000000

8 rows × 41 columns

In [6]:
df.info()
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 1055 entries, 0 to 1054
Data columns (total 42 columns):
 #   Column             Non-Null Count  Dtype  
---  ------             --------------  -----  
 0   SpMax_L            1055 non-null   float64
 1   J_Dz               1055 non-null   float64
 2   nHM                1055 non-null   int64  
 3   F01[N-N]           1055 non-null   int64  
 4   F04[C-N]           1055 non-null   int64  
 5   NssssC             1055 non-null   int64  
 6   nCb-               1055 non-null   int64  
 7   C%                 1055 non-null   float64
 8   nCp                1055 non-null   int64  
 9   nO                 1055 non-null   int64  
 10  F03[C-N]           1055 non-null   int64  
 11  SdssC              1055 non-null   float64
 12  HyWi_B             1055 non-null   float64
 13  LOC                1055 non-null   float64
 14  SM6_L              1055 non-null   float64
 15  F03[C-O]           1055 non-null   int64  
 16  Me                 1055 non-null   float64
 17  Mi                 1055 non-null   float64
 18  nN-N               1055 non-null   int64  
 19  nArNO2             1055 non-null   int64  
 20  nCRX3              1055 non-null   int64  
 21  SpPosA             1055 non-null   float64
 22  nCIR               1055 non-null   int64  
 23  B01[C-Br]          1055 non-null   int64  
 24  B03[C-Cl]          1055 non-null   int64  
 25  N-073              1055 non-null   int64  
 26  SpMax_A            1055 non-null   float64
 27  Psi_i_1d           1055 non-null   float64
 28  B04[C-Br]          1055 non-null   int64  
 29  SdO                1055 non-null   float64
 30  TI2_L              1055 non-null   float64
 31  nCrt               1055 non-null   int64  
 32  C-026              1055 non-null   int64  
 33  F02[C-N]           1055 non-null   int64  
 34  nHDon              1055 non-null   int64  
 35  SpMax_B            1055 non-null   float64
 36  Psi_i_A            1055 non-null   float64
 37  nN                 1055 non-null   int64  
 38  SM6_B              1055 non-null   float64
 39  nArCOOR            1055 non-null   int64  
 40  nX                 1055 non-null   int64  
 41  experimentalclass  1055 non-null   object 
dtypes: float64(17), int64(24), object(1)
memory usage: 346.3+ KB

Only the target variable, experimentalclass, is not numeric

Missing values

In [7]:
# checking the percentage of missing values in each variable
df.isnull().sum()/len(df)*100
Out[7]:
SpMax_L              0.0
J_Dz                 0.0
nHM                  0.0
F01[N-N]             0.0
F04[C-N]             0.0
NssssC               0.0
nCb-                 0.0
C%                   0.0
nCp                  0.0
nO                   0.0
F03[C-N]             0.0
SdssC                0.0
HyWi_B               0.0
LOC                  0.0
SM6_L                0.0
F03[C-O]             0.0
Me                   0.0
Mi                   0.0
nN-N                 0.0
nArNO2               0.0
nCRX3                0.0
SpPosA               0.0
nCIR                 0.0
B01[C-Br]            0.0
B03[C-Cl]            0.0
N-073                0.0
SpMax_A              0.0
Psi_i_1d             0.0
B04[C-Br]            0.0
SdO                  0.0
TI2_L                0.0
nCrt                 0.0
C-026                0.0
F02[C-N]             0.0
nHDon                0.0
SpMax_B              0.0
Psi_i_A              0.0
nN                   0.0
SM6_B                0.0
nArCOOR              0.0
nX                   0.0
experimentalclass    0.0
dtype: float64
In [8]:
# mapping our target variable
# RB:1, NRB:0
from sklearn.preprocessing import LabelEncoder
le = LabelEncoder()
le.fit(df.experimentalclass)
df['experimentalclass'] = le.transform(df.experimentalclass)
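
A quick sanity check (optional sketch): LabelEncoder assigns labels alphabetically, which confirms the RB:1, NRB:0 mapping in the comment above.

# expected output: {'NRB': 0, 'RB': 1}
print(dict(zip(le.classes_, le.transform(le.classes_))))
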
In [9]:
# Features and Target
X = df.drop(['experimentalclass'], axis=1)
y = df.experimentalclass
In [10]:
from sklearn.model_selection import train_test_split
# random_state guarantees the same output every time the program is run
x_train, x_test, y_train, y_test = train_test_split(X,y, test_size = .2, random_state=7)
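
The split above is unstratified. If preserving the exact RB/NRB ratio in both splits mattered, a stratified variant (not used here) would be:

# stratify=y keeps class proportions identical in train and test
x_train, x_test, y_train, y_test = train_test_split(X, y, test_size=.2, random_state=7, stratify=y)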

No missing values

Constant features

In [11]:
from sklearn.feature_selection import VarianceThreshold
# zero-variance (constant) features
constant_filter = VarianceThreshold(threshold=0)
constant_filter.fit(x_train)
columns_to_remove = [name for name in x_train.columns if name not in x_train.columns[constant_filter.get_support()]]
print('Constant features: ', columns_to_remove)
Constant features:  []

No constant features

Highly correlated features

Threshold: 75%

In [12]:
# Correlation matrix for all independent vars
corrMatrix = x_train.corr()
allVars = corrMatrix.keys()

absCorrWithDep = []
for var in allVars:
    absCorrWithDep.append(abs(y.corr(x_train[var])))
# threshold setting
corrTol = 0.75

# for each column in the corr matrix
for col in corrMatrix:
    
    if col in corrMatrix.keys():
        thisCol = []
        thisVars = []
        temp = corrMatrix[col]
        
        # Store the corr with the dep var for fields that are highly correlated with each other
        for i in range(len(corrMatrix)):
            
            if abs(corrMatrix[col][i]) == 1.0 and col != corrMatrix.keys()[i]:
                thisCorr = 0
            else:
                thisCorr = (1 if abs(corrMatrix[col][i]) > corrTol else -1) * abs(temp[corrMatrix.keys()[i]])
            thisCol.append(thisCorr)
            thisVars.append(corrMatrix.keys()[i])
        
        mask = np.ones(len(thisCol), dtype = bool) # Initialize the mask
        
        ctDelCol = 0 # To keep track of the number of columns deleted
        
        for n, j in enumerate(thisCol):
            # Delete if (a) the var is correlated with others and does not have the best corr with the dep var,
            # or (b) is completely correlated with 'col'
            mask[n] = not (j != max(thisCol) and j >= 0)
            
            if j != max(thisCol) and j >= 0:
                # Delete the column from the corr matrix
                corrMatrix.pop('%s' %thisVars[n])
                ctDelCol += 1
                
        # Delete the corresponding row(s) from the corr matrix
        corrMatrix = corrMatrix[mask]
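
Note: the loop above uses the correlation with the target to decide which member of a correlated pair to keep. A shorter, rougher alternative (a sketch, not what was run here) drops one column from each highly correlated pair using the upper triangle of the absolute correlation matrix, without that tie-breaking:

# columns where any upper-triangle correlation exceeds the threshold
corr = x_train.corr().abs()
upper = corr.where(np.triu(np.ones(corr.shape, dtype=bool), k=1))
to_drop = [c for c in upper.columns if (upper[c] > corrTol).any()]
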
In [13]:
columns_to_keep = corrMatrix.columns
len(columns_to_keep)
Out[13]:
30
In [14]:
x_train_clean = x_train[columns_to_keep]
fig, ax = plt.subplots(figsize=(16,14))
sns.heatmap(x_train_clean.corr(), cmap='Reds',annot=True, linewidths=.5, ax=ax)
Out[14]:
<matplotlib.axes._subplots.AxesSubplot at 0x1a1764dff08>

Keeping the 30 features that fall below the 75% correlation threshold

Feature importance

In [15]:
from sklearn.ensemble import RandomForestClassifier
rfc = RandomForestClassifier()
rfc.fit(x_train_clean, y_train)
features = x_train_clean.columns
importances = rfc.feature_importances_
indices = np.argsort(importances)
fig, ax = plt.subplots(figsize=(16,14))
plt.title('Feature Importances')
plt.barh(range(len(indices)), importances[indices], color='b', align='center')
plt.yticks(range(len(indices)), [features[i] for i in indices])
plt.xlabel('Relative Importance')
plt.show()

Removing the bottom 7 features.

In [16]:
low_importance_features = ['B03[C-Cl]','F01[N-N]','nArNO2','B01[C-Br]','N-073','nN-N','nCRX3']
x_train_clean = x_train_clean.drop(low_importance_features,axis=1)
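
Since the bottom seven were read off the plot by hand, the same list can also be derived programmatically from the importances already computed (the exact set may vary between runs because the forest above is not seeded):

# bottom 7 features by random forest importance (argsort is ascending)
low_importance_features = [features[i] for i in np.argsort(importances)[:7]]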

Renaming features for the XGBoost model, since it doesn't accept column names containing [, ] or <

In [17]:
x_train_clean = x_train_clean.rename(columns = {'F04[C-N]':'F04C-N'})
x_test = x_test.rename(columns = {'F04[C-N]':'F04C-N'})
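
Only F04[C-N] still carries a bracket at this point, but a more general version (a sketch) would strip every offending character from all column names:

import re
# remove the characters '[', ']' and '<' from every column name
sanitize = lambda c: re.sub(r'[\[\]<]', '', c)
x_train_clean = x_train_clean.rename(columns=sanitize)
x_test = x_test.rename(columns=sanitize)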

Recursive feature elimination using RFECV with an XGBoost model

In [18]:
from sklearn.model_selection import cross_val_score, KFold
kfold = KFold(n_splits=10, random_state=17, shuffle=True)
In [19]:
from xgboost import XGBClassifier
from sklearn.feature_selection import RFECV
model = XGBClassifier()
rfecv = RFECV(estimator=model, cv=kfold, scoring='accuracy')
rfecv.fit(x_train_clean, y_train)
Out[19]:
RFECV(cv=KFold(n_splits=10, random_state=17, shuffle=True),
      estimator=XGBClassifier(base_score=0.5, booster='gbtree',
                              colsample_bylevel=1, colsample_bynode=1,
                              colsample_bytree=1, gamma=0, learning_rate=0.1,
                              max_delta_step=0, max_depth=3, min_child_weight=1,
                              missing=None, n_estimators=100, n_jobs=1,
                              nthread=None, objective='binary:logistic',
                              random_state=0, reg_alpha=0, reg_lambda=1,
                              scale_pos_weight=1, seed=None, silent=None,
                              subsample=1, verbosity=1),
      min_features_to_select=1, n_jobs=None, scoring='accuracy', step=1,
      verbose=0)
In [20]:
fig, ax = plt.subplots(figsize=(16,14))
plt.title('XGB CV score vs No of Features')
plt.xlabel("Number of features selected")
plt.ylabel("Cross validation score (nb of correct classifications)")
plt.plot(range(1, len(rfecv.grid_scores_) + 1), rfecv.grid_scores_)
plt.grid()
plt.show()
In [21]:
feature_importance = list(zip(x_train_clean.columns, rfecv.support_))
new_features = []
for key,value in enumerate(feature_importance):
    if(value[1]) == True:
        new_features.append(value[0])
        
print(new_features)
x_train_best = x_train_clean[new_features]
x_test_best = x_test[new_features]
['SpMax_L', 'J_Dz', 'nHM', 'F04C-N', 'NssssC', 'nCb-', 'C%', 'nCp', 'nO', 'SdssC', 'HyWi_B', 'LOC', 'Me', 'Mi', 'Psi_i_1d', 'SdO', 'nCrt', 'nHDon', 'SpMax_B', 'nN', 'nArCOOR', 'nX']
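
rfecv.support_ is already a boolean mask aligned with the columns, so the loop above collapses to a one-liner:

# equivalent to the feature_importance loop
new_features = list(x_train_clean.columns[rfecv.support_])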

Model Selection

Comparing results from 10 different models

In [22]:
# model libraries
from sklearn.linear_model import LogisticRegression
from sklearn.naive_bayes import GaussianNB
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier, GradientBoostingClassifier, AdaBoostClassifier
from sklearn.svm import SVC
from sklearn.neighbors import KNeighborsClassifier
from xgboost import XGBRFClassifier
In [23]:
models = []
models.append(('LR',LogisticRegression(solver = 'newton-cg')))
models.append(('NB',GaussianNB()))
models.append(('DTC',DecisionTreeClassifier()))
models.append(('RFC',RandomForestClassifier()))
models.append(('SVC',SVC()))
models.append(('KNN',KNeighborsClassifier()))
models.append(('GBC',GradientBoostingClassifier()))
models.append(('ABC',AdaBoostClassifier()))
models.append(('XGB',XGBClassifier()))
models.append(('XGBRF',XGBRFClassifier()))

Using KFold cross-validation

In [24]:
names = []
scores = []
for name, model in models:
    score = cross_val_score(model, x_train_best, y_train, cv=kfold, scoring='accuracy').mean()
    names.append(name)
    scores.append(score)
results  = pd.DataFrame({'Model': names,'Accuracy': scores})
results
Out[24]:
Model Accuracy
0 LR 0.866106
1 NB 0.758319
2 DTC 0.817521
3 RFC 0.868473
4 SVC 0.764076
5 KNN 0.812885
6 GBC 0.861429
7 ABC 0.842423
8 XGB 0.864958
9 XGBRF 0.809300
In [25]:
axis = sns.barplot(x = 'Model', y = 'Accuracy', data = results)
axis.set(xlabel='Classifier', ylabel='Accuracy')
for p in axis.patches:
    height = p.get_height()
    axis.text(p.get_x() + p.get_width()/2, height + 0.005, '{:1.4f}'.format(height), ha="center") 
    
plt.show()
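
Mean accuracy alone can hide unstable models; a small extension of the comparison loop (a sketch) records the cross-validation spread as well:

# same comparison, but keeping the standard deviation across folds
rows = []
for name, model in models:
    cv = cross_val_score(model, x_train_best, y_train, cv=kfold, scoring='accuracy')
    rows.append({'Model': name, 'Accuracy': cv.mean(), 'Std': cv.std()})
pd.DataFrame(rows)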

Although Logistic Regression has slightly better accuracy than XGBoost, XGBoost performs a lot faster, so giving up about 1% of accuracy for a large performance boost is worth it. XGBoost is tuned below.

Model tuning

Learning rate and estimators
This function finds the best number of estimators for our starting learning rate via early-stopped cross-validation. We will run it again after tuning the other hyperparameters, to find the best number of estimators for a lower learning rate.

In [26]:
import xgboost as XGB
from sklearn.metrics import accuracy_score, roc_auc_score
def modelfit(alg, useTrainCV=True, cv_folds=5, early_stopping_rounds=50):
    
    if useTrainCV:
        xgb_param = alg.get_xgb_params()
        xgtrain = XGB.DMatrix(x_train_best.values, label=y_train.values)
        cvresult = XGB.cv(xgb_param, xgtrain, num_boost_round=alg.get_params()['n_estimators'], nfold=cv_folds,
                          metrics='auc', early_stopping_rounds=early_stopping_rounds, verbose_eval=-1)
        # set n_estimators to the round count where early stopping ended
        alg.set_params(n_estimators=cvresult.shape[0])
    
    #Fit the algorithm on the data
    alg.fit(x_train_best, y_train, eval_metric='auc')
        
    #Predict training set:
    dtrain_predictions = alg.predict(x_train_best)
    dtrain_predprob = alg.predict_proba(x_train_best)[:,1]
        
    #Print model report:
    print("\nModel Report")
    print("Accuracy (Train): %.4g" % accuracy_score(y_train.values, dtrain_predictions))
    print("AUC Score (Train): %f" % roc_auc_score(y_train, dtrain_predprob))
    
    feat_imp = pd.Series(alg.get_booster().get_fscore()).sort_values(ascending=False)
    feat_imp.plot(kind='bar', title='Feature Importances')
    plt.ylabel('Feature Importance Score')
In [27]:
# initial parameters value (starting point which will change later on)
xgb = XGBClassifier(learning_rate=0.1, max_depth=5, n_estimators=500, subsample=0.8,
                    colsample_bytree=0.8,objective='binary:logistic',seed=7)
modelfit(xgb)
[0]	train-auc:0.919782+0.00490165	test-auc:0.860132+0.0339716
[1]	train-auc:0.938834+0.0093262	test-auc:0.875165+0.034059
[2]	train-auc:0.951924+0.00713621	test-auc:0.886445+0.032343
...
[140]	train-auc:0.999982+2.35508e-05	test-auc:0.928522+0.0273335
[141]	train-auc:0.999984+1.9106e-05	test-auc:0.92799+0.0273525
(per-round CV output truncated; test AUC plateaus around 0.93 and the run stops after round 141)

Model Report
Accuracy (Train): 0.9834
AUC Score (Train): 0.998917

max_depth and min_child_weight

In [28]:
from sklearn.model_selection import GridSearchCV
In [29]:
param = {'max_depth':range(0,15,1),
         'min_child_weight':range(0,15,2)}
In [30]:
gsrch = GridSearchCV(estimator=XGBClassifier(learning_rate=0.1, n_estimators=141, subsample=0.8, colsample_bytree=0.8,
                                             objective='binary:logistic',seed=7), param_grid=param, scoring='accuracy',
                     n_jobs=-1,cv=5)
gsrch.fit(x_train_best, y_train)
max_depth = gsrch.best_params_['max_depth']
min_child_weight = gsrch.best_params_['min_child_weight']
gsrch.best_params_, gsrch.best_score_
Out[30]:
({'max_depth': 7, 'min_child_weight': 0}, 0.8673217807833193)
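
The coarse grid stepped min_child_weight by 2, so a common follow-up (not run here) is a finer pass probing the untested neighbor of the winner:

# refine around the coarse-search winner (max_depth=7, min_child_weight=0)
param = {'max_depth': [max_depth], 'min_child_weight': [min_child_weight, min_child_weight + 1]}
gsrch = GridSearchCV(estimator=XGBClassifier(learning_rate=0.1, n_estimators=141, subsample=0.8,
                                             colsample_bytree=0.8, objective='binary:logistic', seed=7),
                     param_grid=param, scoring='accuracy', n_jobs=-1, cv=5)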

gamma

In [31]:
param = {'gamma':[i/10.0 for i in range(0,101)]}
In [32]:
gsrch = GridSearchCV(estimator=XGBClassifier(learning_rate=0.1, n_estimators=141, subsample=0.8, colsample_bytree=0.8, 
                                             objective='binary:logistic',seed=7, max_depth=max_depth, 
                                             min_child_weight=min_child_weight), param_grid=param, scoring='accuracy', 
                     n_jobs=-1, cv=5)
gsrch.fit(x_train_best, y_train)
gamma = gsrch.best_params_['gamma']
gsrch.best_params_, gsrch.best_score_
Out[32]:
({'gamma': 4.8}, 0.8732248520710059)

subsample and colsample_bytree

In [33]:
param = {'subsample':[i/10.0 for i in range(6,11)],
        'colsample_bytree':[i/10.0 for i in range(6,11)]}
In [34]:
gsrch = GridSearchCV(estimator=XGBClassifier(learning_rate=0.1, n_estimators=141, objective='binary:logistic',seed=7, 
                                             max_depth=max_depth, min_child_weight=min_child_weight, gamma=gamma), 
                     param_grid=param, scoring='accuracy', n_jobs=-1, cv=5)
gsrch.fit(x_train_best, y_train)
colsample_bytree = gsrch.best_params_['colsample_bytree']
subsample = gsrch.best_params_['subsample']
gsrch.best_params_, gsrch.best_score_
Out[34]:
({'colsample_bytree': 0.7, 'subsample': 0.6}, 0.8744012397858552)

reg_alpha

In [35]:
param = {'reg_alpha':[1e-5, 1e-2, 0.1, 1, 100, 0, 0.001, 0.005, 0.01, 0.05]}
In [36]:
gsrch = GridSearchCV(estimator=XGBClassifier(learning_rate=0.1, n_estimators=141, objective='binary:logistic',seed=7, 
                                             max_depth=max_depth, min_child_weight=min_child_weight, gamma=gamma, 
                                             colsample_bytree=colsample_bytree, subsample=subsample), param_grid=param, 
                     scoring='accuracy', n_jobs=-1, cv=5)
gsrch.fit(x_train_best, y_train)
reg_alpha = gsrch.best_params_['reg_alpha']
gsrch.best_params_, gsrch.best_score_
Out[36]:
({'reg_alpha': 1e-05}, 0.8744012397858552)

estimators for learning rate 0.01

In [37]:
xgb = XGBClassifier(learning_rate=0.01, n_estimators=100000, objective='binary:logistic',seed=7, max_depth=max_depth, 
                    min_child_weight=min_child_weight, gamma=gamma, colsample_bytree=colsample_bytree, subsample=subsample)
modelfit(xgb)
[0]	train-auc:0.89053+0.010455	test-auc:0.836774+0.0268395
[1]	train-auc:0.91141+0.00866776	test-auc:0.870156+0.0346771
[2]	train-auc:0.928988+0.00803569	test-auc:0.881438+0.028392
...
[422]	train-auc:0.965926+0.00356288	test-auc:0.91994+0.0329188
[423]	train-auc:0.965953+0.00352145	test-auc:0.919999+0.032855
(per-round CV output truncated; test AUC climbs slowly to ~0.920 and the run stops after round 423)

Model Report
Accuracy (Train): 0.9111
AUC Score (Train): 0.965511

The CV run settles on 423 estimators at learning rate 0.01.

Predicting

In [38]:
xgb = XGBClassifier(learning_rate=0.01, n_estimators=423, objective='binary:logistic',seed=7, max_depth=max_depth, 
                    min_child_weight=min_child_weight, gamma=gamma, colsample_bytree=colsample_bytree, subsample=subsample)
In [39]:
final_score = cross_val_score(xgb, x_train_best, y_train, cv=10, scoring='accuracy').mean()
print("Final train accuracy : {} ".format(final_score))
Final train accuracy : 0.8672689075630252 
In [40]:
xgb.fit(x_train_best, y_train)
y_pred = xgb.predict(x_test_best)
score = accuracy_score(y_test, y_pred)
print("Final test accuracy : {} ".format(score))
Final test accuracy : 0.8815165876777251 
In [41]:
xgb = XGBClassifier()
xgb.fit(x_train_best, y_train)
y_pred = xgb.predict(x_test_best)
score = accuracy_score(y_test, y_pred)
print("Test accuracy without tuning : {} ".format(score))
Test accuracy without tuning : 0.8625592417061612 
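
Accuracy alone doesn't show where the model errs; a short sketch for the last-fitted model above adds a confusion matrix and per-class metrics:

from sklearn.metrics import confusion_matrix, classification_report
# rows are true classes, columns predicted (0 = NRB, 1 = RB)
print(confusion_matrix(y_test, y_pred))
print(classification_report(y_test, y_pred, target_names=le.classes_))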

Note:

Tuning yields only a modest improvement: test accuracy rises from 0.8626 with default parameters to 0.8815 with the tuned ones.

Contact Me

www.linkedin.com/in/billygustave

billygustave.com