I don't like it when you provide a list of loads of Python libraries to try; I ignore those emails. What I love about your posts is that each one gives a single, easily digestible tip. A big list of things to try that would take ages to read through seems to defeat the point.
Thanks for the feedback. I understand your point. Those posts were primarily written as summaries of tools I had already talked about here. But now that I have pivoted towards data science and Python concepts and write less about tools/libraries, you are unlikely to see such posts anymore. Thanks for mentioning it though :)
What I find lacking in all the data science blogs I read is coverage of the point where an analyst steps outside the objective rules of science and begins to place more faith in the model itself than in its general utility for solving a problem. How should analysts deal with the temptation to overfit a model to increase its accuracy? What objective rules can analysts follow so that they retain negative-feedback data in their models rather than adjusting the model to accommodate outliers?
That's a pretty common problem, and it is why we are repeatedly told to follow procedures like cross-validation and to never look at the test set while developing a model. In fact, it is sometimes even recommended to generate a fresh test set at regular intervals, because when you repeatedly train new models and evaluate them on the same dataset, you (as an ML engineer) may unknowingly leak test-set information into your modeling choices. So as for objective rules: respect the standard machine learning training guidelines, and well-generalizing models will follow :)
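To make that workflow concrete, here is a minimal sketch using scikit-learn; the dataset, model, and split sizes are just placeholder choices for illustration. The key discipline is the same regardless of the specifics: hold out a test set once, do all model selection via cross-validation on the training split only, and touch the test set exactly once at the end.

```python
# Minimal sketch of the "never look at the test set" workflow,
# assuming scikit-learn. Dataset and model are illustrative placeholders.
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score, train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

X, y = load_breast_cancer(return_X_y=True)

# Hold out a test set once, up front, and do not touch it while iterating.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42
)

# All model tuning and comparison happens via cross-validation
# on the training split only.
model = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
cv_scores = cross_val_score(model, X_train, y_train, cv=5)
print(f"cross-val accuracy: {cv_scores.mean():.3f} +/- {cv_scores.std():.3f}")

# Only after settling on a final model do we evaluate on the held-out set,
# and we report that number once rather than tuning against it.
model.fit(X_train, y_train)
print(f"test accuracy: {model.score(X_test, y_test):.3f}")
```

If you find yourself re-running that last evaluation to guide further tweaks, that is exactly the leakage described above, and the point at which regenerating a fresh test set becomes worthwhile.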
Hi Avi, I love the work that you do, and as a Python beginner I find your emails informative. One piece of constructive feedback: reduce the number of posts to once or twice a week, but don't compromise on quality. You may also want to look into breaking a topic down into 2-3 parts suited to different platforms: a shorter version could be a daily drop on LinkedIn, while the email could be the more elaborate, long-form piece. One reason I say this is that while I like the emails, the inbox clutter sometimes makes me skip the older ones. All the best and keep it up!
Thanks, Ramakant, for pointing that out. This has actually been on my mind lately, and I have been developing some plans around it. Expect to hear about this soon :)