Hello, thank you for sharing your excellent work and releasing the accompanying code. I noticed that the scaling step in the preprocessing part appears to be applied to the entire dataset X, which includes both training and test samples:
from sklearn.preprocessing import StandardScaler

for column in X.columns:
    if X[column].dtype in ['float64', 'int64', 'uint8', 'int16']:
        # scaler is fitted on the full dataset X (training and test samples together)
        scaler = StandardScaler()
        scaler.fit(X[column].values.reshape(-1, 1))
        X[column] = scaler.transform(X[column].values.reshape(-1, 1))
        X[column] = X[column].round(1)
Since X here contains both training and testing samples, the scaling parameters (mean and standard deviation) are computed from the whole dataset, including the test set. In practice, this can introduce data leakage, because the scaler carries information about the test distribution into the training phase. The common practice is to fit the scaler on the training data only and then transform the test data with the same fitted parameters.
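For reference, here is a minimal sketch of the leakage-free variant, assuming the data is split with scikit-learn's train_test_split before scaling; the names X_train and X_test and the split parameters are illustrative and not taken from your code:

from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler

# Split first, then scale, so the test split never influences the fitted parameters.
# (X is assumed to be the same DataFrame as in the snippet above.)
X_train, X_test = train_test_split(X, test_size=0.2, random_state=42)
X_train, X_test = X_train.copy(), X_test.copy()

for column in X_train.columns:
    if X_train[column].dtype in ['float64', 'int64', 'uint8', 'int16']:
        scaler = StandardScaler()
        # Mean and standard deviation come from the training split only.
        scaler.fit(X_train[column].values.reshape(-1, 1))
        X_train[column] = scaler.transform(X_train[column].values.reshape(-1, 1)).ravel().round(1)
        # The test split is transformed with the training-set statistics.
        X_test[column] = scaler.transform(X_test[column].values.reshape(-1, 1)).ravel().round(1)

The same effect can also be obtained by wrapping the scaler in a scikit-learn Pipeline, which additionally keeps the scaling leakage-free inside cross-validation.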