Create end-to-end, reproducible feature engineering pipelines that can be deployed into production using open-source Python libraries
Key Features
- Learn and implement feature engineering best practices
- Reinforce your learning with the help of multiple hands-on recipes
- Build end-to-end feature engineering pipelines that are performant and reproducible
Book Description
Feature engineering, the process of transforming variables and creating features, is often time-consuming, yet it is essential for machine learning models to perform well. This second edition of Python Feature Engineering Cookbook takes the struggle out of feature engineering by showing you how to use open source Python libraries to accelerate the process through a wealth of practical, hands-on recipes.
This updated edition begins by addressing fundamental data challenges such as missing data and categorical variables before moving on to strategies for dealing with skewed distributions and outliers. The concluding chapters show you how to develop new features from various types of data, including text, time series, and relational databases. With the help of numerous open source Python libraries, you'll learn how to implement each feature engineering method in a performant, reproducible, and elegant manner.
By the end of this Python book, you will have the tools and expertise needed to confidently build end-to-end and reproducible feature engineering pipelines that can be deployed into production.
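To give a flavour of what such a pipeline looks like, here is a minimal sketch that chains Feature-engine transformers inside a scikit-learn Pipeline. The toy data and column names are hypothetical, and the calls assume a recent (1.x) Feature-engine release; it is an illustration of the style, not a recipe from the book.

```python
import pandas as pd
from sklearn.pipeline import Pipeline
from feature_engine.imputation import CategoricalImputer, MeanMedianImputer
from feature_engine.encoding import OneHotEncoder

# Hypothetical toy data with missing values in a numerical and a categorical column
df = pd.DataFrame({
    "age": [25.0, None, 40.0, 31.0],
    "city": ["London", "Paris", "London", None],
})

pipe = Pipeline([
    # replace missing numbers with the median of the column
    ("impute_num", MeanMedianImputer(imputation_method="median", variables=["age"])),
    # replace missing categories with the string "Missing"
    ("impute_cat", CategoricalImputer(imputation_method="missing", variables=["city"])),
    # expand the categorical column into one binary column per category
    ("encode", OneHotEncoder(variables=["city"])),
])

features = pipe.fit_transform(df)
print(features)
```

Because every step follows the scikit-learn fit/transform API, the same pipeline object can be fitted on training data, applied to new data, and serialized for deployment, which is what makes the workflow reproducible end to end.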
What you will learn
- Impute missing data using various univariate and multivariate methods
- Encode categorical variables with one-hot, ordinal, and count encoding
- Handle categorical variables with high cardinality
- Transform, discretize, and scale your variables
- Create variables from date and time with pandas and Feature-engine
- Combine variables into new features
- Extract features from text as well as from transactional data with Featuretools
- Create features from time series data with tsfresh (see the sketch after this list)
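As a taster of the time series topic, the following is a minimal sketch of automated feature extraction with tsfresh. The long-format toy data and column names are hypothetical; the call uses tsfresh's documented extract_features function with the built-in MinimalFCParameters feature set to keep the output small.

```python
import pandas as pd
from tsfresh import extract_features
from tsfresh.feature_extraction import MinimalFCParameters

# Hypothetical long-format data: two series identified by "id", ordered by "time"
ts = pd.DataFrame({
    "id":    [1, 1, 1, 2, 2, 2],
    "time":  [0, 1, 2, 0, 1, 2],
    "value": [1.0, 2.0, 3.0, 5.0, 4.0, 6.0],
})

# compute a small set of summary features (mean, variance, etc.) per series
features = extract_features(
    ts,
    column_id="id",
    column_sort="time",
    default_fc_parameters=MinimalFCParameters(),
)
print(features.head())
```

The result is one row of engineered features per series, ready to be joined to a training table.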
Who this book is for
This book is for machine learning and data science students and professionals, as well as software engineers working on machine learning model deployment, who want to learn how to transform their data and create new features in order to train better machine learning models.
Table of Contents
- Imputing Missing Data
- Encoding Categorical Variables
- Transforming Numerical Variables
- Performing Variable Discretization
- Working with Outliers
- Extracting Features from Date and Time
- Performing Feature Scaling
- Creating New Features
- Extracting Features from Relational Data with Featuretools
- Creating Features from Time Series with tsfresh
- Extracting Features from Text Variables
About the Author
Soledad Galli is a lead data scientist with more than 10 years of experience in world-class academic institutions and renowned businesses. She has researched, developed, and put into production machine learning models for insurance claims, credit risk assessment, and fraud prevention. Soledad received a Data Science Leaders' award in 2018 and was named one of LinkedIn's voices in data science and analytics in 2019. She is passionate about enabling people to step into and excel in data science, which is why she mentors data scientists and regularly speaks at data science meetings. She also teaches online machine learning courses on a prestigious Massive Open Online Course platform, which have reached more than 10,000 students worldwide.