
Features fed into label encoders should be fit on the whole dataset instead of just the train set #5

Open
@li-xin-yi

Description


In the wide model part, one-hot encoders are used to encode categorical features with just a few unique values.

# Wide feature 2: one-hot vector of variety categories

import numpy as np
from sklearn.preprocessing import LabelEncoder
from tensorflow import keras

# Use sklearn utility to convert label strings to a numbered index
encoder = LabelEncoder()
encoder.fit(variety_train)
variety_train = encoder.transform(variety_train)
variety_test = encoder.transform(variety_test)
num_classes = np.max(variety_train) + 1

# Convert labels to one-hot vectors
variety_train = keras.utils.to_categorical(variety_train, num_classes)
variety_test = keras.utils.to_categorical(variety_test, num_classes)

However, some values may occur only in the test set (fortunately, no such instance exists in the wine dataset). It is safer to fit the encoder on all possible values. Similarly, the tokenizer used to preprocess descriptions should also learn from as much information as possible, which the full dataset (including the test set) can provide without data leakage, since no target label data is used.
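A minimal sketch of the proposed fix, fitting the encoder on the concatenation of train and test values. The variety strings here are hypothetical stand-ins for the wine dataset's column, and `np.eye` indexing stands in for `keras.utils.to_categorical` to keep the example self-contained:

```python
import numpy as np
from sklearn.preprocessing import LabelEncoder

# Hypothetical variety values; "Malbec" appears only in the test split
variety_train = np.array(["Pinot Noir", "Chardonnay", "Riesling"])
variety_test = np.array(["Chardonnay", "Malbec"])

encoder = LabelEncoder()
# Fit on the union of train and test values, so transform() never
# hits an unseen label at test time
encoder.fit(np.concatenate([variety_train, variety_test]))
variety_train_idx = encoder.transform(variety_train)
variety_test_idx = encoder.transform(variety_test)
num_classes = len(encoder.classes_)  # counts the test-only class too

# One-hot encode by indexing into an identity matrix
variety_train_oh = np.eye(num_classes)[variety_train_idx]
variety_test_oh = np.eye(num_classes)[variety_test_idx]
```

Fitting only on `variety_train` would make `encoder.transform(variety_test)` raise a `ValueError` on `"Malbec"`; fitting on the full set avoids that, and since only feature columns (not targets) are involved, this is not label leakage.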
