Why Don't We Call It Logistic Classification Instead?
Start questioning the names of other algorithms too. It will be a fun activity.
Have you ever wondered why logistic regression is called "regression" when we only use it for classification tasks? Why not call it "logistic classification" instead? Here's why.
Most of us interpret logistic regression as a classification algorithm. By nature, however, it is a regression algorithm, because it predicts a continuous outcome: the probability of a class.
It is only when we apply a probability threshold and change the interpretation of its output that the whole pipeline becomes a classifier.
Yet, intrinsically, it is never the algorithm itself performing the classification. The algorithm always performs regression. It is that extra step of applying a probability threshold that classifies a sample.
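The two-step idea above can be sketched from scratch in a few lines of NumPy. The coefficients below are made-up, illustrative values (as if already fitted): the sigmoid of a linear model is the regression step, and thresholding its output is the separate classification step.

```python
import numpy as np

def sigmoid(z):
    """Map a real-valued score to a probability in (0, 1)."""
    return 1.0 / (1.0 + np.exp(-z))

# Hypothetical, already-fitted coefficients for a single feature.
w, b = 1.2, -2.7

X = np.array([0.5, 1.0, 1.5, 3.0, 3.5, 4.0])

# Step 1 -- the regression: a continuous outcome (class probability).
probs = sigmoid(w * X + b)

# Step 2 -- the classification: an extra thresholding step on top.
labels = (probs >= 0.5).astype(int)

print(probs)   # continuous outputs -- this is the "regression"
print(labels)  # thresholded outputs -- this is the "classification"
```

Everything the model itself does happens in step 1; step 2 is just a rule we bolt on afterwards, and we are free to move the threshold (say, to 0.3) without touching the model at all.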
👉 Read what others are saying about this post on LinkedIn and Twitter.
👉 If you liked this post, leave a heart react 🤍.
👉 If you love reading this newsletter, feel free to share it with friends!
Hi There!
Lately, I have been thinking about collecting some feedback on the content I post in this daily newsletter. I would appreciate it if you could take a minute to answer a few questions for me.
This will help me understand how these emails are received at your end and if the content requires any changes.
The responses are anonymous.
If there’s anything else that you want me to consider, please do not hesitate to drop a comment or reply to this email :)
Thanks for answering!
Find the code for my tips here: GitHub.
I like to explore, experiment and write about data science concepts and tools. You can read my articles on Medium. Also, you can connect with me on LinkedIn and Twitter.
I don't like it when you provide a long list of Python libraries to try; I ignore those emails. What I love about your posts is that you provide a single, easily digestible tip. Providing a big list of things to try that would take ages to read through seems to defeat the point.
What I find lacking in all the data science blogs I read concerns the conditions under which an analyst begins to step outside the objective rules of science and places more faith in the model itself than in its general utility at solving a problem. How should analysts deal with the temptation to overfit a model to increase its accuracy? What objective rules can analysts follow to retain negative feedback data in their models rather than adjusting the model to accommodate outliers?