scikit-learn Cookbook
Over 80 recipes for machine learning in Python with scikit-learn

Product type: Paperback
Published: Dec 2025 by Packt
ISBN-13: 9781836644453
Length: 388 pages
Edition: 3rd Edition
Author: John Sukup
Table of Contents (17)

Preface
Chapter 1: Common Conventions and API Elements of scikit-learn
Chapter 2: Pre-Model Workflow and Data Preprocessing
Chapter 3: Dimensionality Reduction Techniques
Chapter 4: Building Models with Distance Metrics and Nearest Neighbors
Chapter 5: Linear Models and Regularization
Chapter 6: Advanced Logistic Regression and Extensions
Chapter 7: Support Vector Machines and Kernel Methods
Chapter 8: Tree-Based Algorithms and Ensemble Methods
Chapter 9: Text Processing and Multiclass Classification
Chapter 10: Clustering Techniques
Chapter 11: Novelty and Outlier Detection
Chapter 12: Cross-Validation and Model Evaluation Techniques
Chapter 13: Deploying scikit-learn Models in Production
Chapter 14: Unlock Your Exclusive Benefits
Index
Other Books You May Enjoy

Transformers and the transform() method

In scikit-learn, transformers are tools that modify data by applying transformations such as scaling, normalization, or encoding to prepare it for modeling. Each transformer follows a consistent interface, using the fit() method to learn any necessary parameters from the data and the transform() method to apply those transformations. For instance, StandardScaler() calculates the mean and standard deviation during fit() and uses those values to transform the data by scaling it (as you may recall from high school statistics, this transformed value is called a z-score).

Figure 1.1 – Data transformation in the context of scikit-learn’s Pipeline() class

Data transformations provide several benefits in ML scenarios. First, many models assume the data is normally distributed, free of outliers, and so on. Second, most real-world datasets do not arrive in this neat-and-tidy format and require some massaging before modeling:

from sklearn.preprocessing import StandardScaler
import numpy as np

# Example data
X = np.array([[1, 2], [3, 4], [5, 6]])

# Create a StandardScaler instance
scaler = StandardScaler()

# Fit the scaler on the data (learns the per-column mean and standard deviation)
scaler.fit(X)

# Transform the data
X_scaled = scaler.transform(X)
print(X_scaled)
# Output:
# [[-1.22474487 -1.22474487]
#  [ 0.          0.        ]
#  [ 1.22474487  1.22474487]]
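As a quick sanity check (a sketch, not part of the book's code), the scaled values above are simply z-scores computed with the population standard deviation (ddof=0), which is what StandardScaler uses by default:

```python
import numpy as np
from sklearn.preprocessing import StandardScaler

X = np.array([[1, 2], [3, 4], [5, 6]])

# z-score by hand: (x - column mean) / column standard deviation
manual = (X - X.mean(axis=0)) / X.std(axis=0)  # np.std defaults to ddof=0

scaler = StandardScaler().fit(X)
print(np.allclose(manual, scaler.transform(X)))  # True
```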

Another common shortcut we saw previously, fit_transform(), performs both steps in one call, making preprocessing workflows more concise. Whether to use fit_transform() or fit() followed by transform() depends on the task at hand. Typically, we apply fit_transform() to our training data, since we want to learn the transformation parameters and apply them immediately, something fit() cannot achieve by itself. When transforming our test dataset, however, we should not call fit() again: doing so would learn a different set of parameters, because the test data will differ slightly from the training data. The test set must be transformed using the statistics learned from the training set; refitting on it would break consistency between training and inference and make our model's predictions unreliable in a real-world scenario:

from sklearn.preprocessing import StandardScaler
import numpy as np

# Example data
X = np.array([[1, 2], [3, 4], [5, 6]])

# Create a StandardScaler instance and
# fit_transform the data in one step
scaler = StandardScaler()
X_scaled = scaler.fit_transform(X)
print(X_scaled)
# Output:
# [[-1.22474487 -1.22474487]
#  [ 0.          0.        ]
#  [ 1.22474487  1.22474487]]
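To make the train/test distinction concrete, here is a minimal sketch of that pattern; the held-out X_test array is illustrative data, not from the book:

```python
import numpy as np
from sklearn.preprocessing import StandardScaler

X_train = np.array([[1, 2], [3, 4], [5, 6]])
X_test = np.array([[2, 3], [4, 5]])  # hypothetical held-out data

scaler = StandardScaler()
# Learn the mean and standard deviation from the training data only
X_train_scaled = scaler.fit_transform(X_train)
# Reuse the training statistics on the test data -- no second fit()
X_test_scaled = scaler.transform(X_test)
print(X_test_scaled)
```

Note that the test values are scaled relative to the training mean and standard deviation, so they need not have zero mean themselves.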

This consistency across all transformers allows them to be integrated seamlessly into ML pipelines, ensuring that the same transformation is applied to both the training and test data, which becomes especially important when implementing production-level models.
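As a sketch of that integration (the estimator choice and toy data here are illustrative, not from the book), a Pipeline calls fit_transform() on the scaler during fit() and plain transform() during predict(), so the train/test bookkeeping is handled automatically:

```python
import numpy as np
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression

X_train = np.array([[1, 2], [3, 4], [5, 6], [7, 8]])
y_train = np.array([0, 0, 1, 1])

pipe = Pipeline([
    ("scaler", StandardScaler()),
    ("clf", LogisticRegression()),
])
pipe.fit(X_train, y_train)  # scaler.fit_transform(), then clf.fit()

# New data is scaled with the training statistics before prediction
print(pipe.predict(np.array([[6, 7]])))
```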

We will explore various transformers, including StandardScaler(), MinMaxScaler(), and OneHotEncoder(), in Chapter 2 to demonstrate how they can be used to prepare data for ML models using the fit(), transform(), and fit_transform() methods. Practical examples will be provided to illustrate how you can integrate transformers into workflows to ensure your data is preprocessed consistently.
