The Most Overlooked Problem With One-Hot Encoding
Hint: This is NOT about sparse data representation.
With one-hot encoding, we introduce a big problem in the data.
When we one-hot encode categorical data, we unknowingly introduce perfect multicollinearity.
Multicollinearity arises when one feature can be linearly predicted from one or more of the other features.
Since the one-hot encoded features always sum to 1 in every row, any one of them is exactly 1 minus the sum of the rest. That is perfect multicollinearity.
This is often called the Dummy Variable Trap.
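Here is a minimal sketch with pandas (the toy city column is hypothetical, just for illustration):

```python
import pandas as pd

# Toy categorical column (hypothetical data for illustration)
df = pd.DataFrame({"city": ["London", "Paris", "Tokyo", "Paris", "London"]})

# Standard one-hot encoding: one column per category
encoded = pd.get_dummies(df["city"], dtype=int)
print(encoded)
#    London  Paris  Tokyo
# 0       1      0      0
# 1       0      1      0
# 2       0      0      1
# 3       0      1      0
# 4       1      0      0

# Every row sums to exactly 1, so any single column is fully
# determined by the remaining ones: perfect multicollinearity
print(encoded.sum(axis=1).unique())  # [1]
```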
It is bad because:
The model carries redundant features.
Regression coefficients aren't reliable in the presence of multicollinearity.
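To see why the coefficients break down, here is a minimal numpy sketch of the design matrix a regression would solve with, using a made-up intercept-plus-one-hot layout:

```python
import numpy as np

# Intercept column followed by three one-hot columns
X = np.array([
    [1, 1, 0, 0],
    [1, 0, 1, 0],
    [1, 0, 0, 1],
    [1, 0, 1, 0],
])

# The one-hot columns sum to the intercept column,
# so X has 4 columns but only rank 3
print(np.linalg.matrix_rank(X))  # 3

# X'X is singular, so the OLS solution (X'X)^-1 X'y does not
# exist; the coefficients are not uniquely identifiable
print(np.linalg.det(X.T @ X))  # ~0.0
```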
So how do we resolve this?
The solution is simple.
Drop any one column from the one-hot encoded features.
This instantly breaks the linear dependence among the columns and eliminates the perfect multicollinearity.
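Both pandas and scikit-learn support this out of the box. A quick sketch (the sparse_output flag assumes scikit-learn 1.2 or newer):

```python
import pandas as pd
from sklearn.preprocessing import OneHotEncoder

df = pd.DataFrame({"city": ["London", "Paris", "Tokyo", "Paris", "London"]})

# pandas: drop_first=True drops the first category's column
encoded = pd.get_dummies(df["city"], drop_first=True, dtype=int)
print(encoded.columns.tolist())  # ['Paris', 'Tokyo']

# scikit-learn: the equivalent is OneHotEncoder(drop="first")
enc = OneHotEncoder(drop="first", sparse_output=False)
print(enc.fit_transform(df[["city"]]))
```

Note that the dropped category isn't lost: a row of all zeros now represents it, and it acts as the baseline against which the remaining coefficients are interpreted.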
So remember...
Whenever we one-hot encode categorical data, we introduce perfect multicollinearity.
To avoid this, drop one column and proceed.
👉 Over to you: What are some other problems with one-hot encoding?