Daily Dose of Data Science


Machine Learning

KernelPCA vs. PCA for Dimensionality Reduction
...explained with visuals.
Oct 15 • Avi Chawla

A Memory-efficient Technique to Train Large Models
...that even LLMs like GPTs and LLaMAs use.
Oct 14 • Avi Chawla

Breathing KMeans vs. KMeans
Robustify KMeans with centroid addition and removal.
Oct 7 • Avi Chawla

Activation Pruning for Model Compression (with implementation)
Removing 74% of neurons with a 0.5% accuracy drop.
Oct 2 • Avi Chawla

What is Contrastive Learning?
A popular ML interview question, explained with a use case.
Sep 25 • Avi Chawla

L2 Regularization is NOT Just a Regularization Technique
A lesser-known usage of L2 regularization.
Sep 19 • Avi Chawla

The Ideal Loss Function for Class Imbalance
...a popular ML interview question.
Sep 18 • Avi Chawla

How Dropout Actually Works
A lesser-known detail of Dropout.
Sep 10 • Avi Chawla

6 Graph Feature Engineering Techniques
Must-know for building GNNs.
Jul 31 • Avi Chawla

2 Techniques to Synchronize ML Models in Multi-GPU Training
...explained visually.
Jul 10 • Avi Chawla

DropBlock vs. Dropout for Regularizing CNNs
Addressing a limitation of Dropout when used in CNNs.
Jul 9 • Avi Chawla

A Hands-on Demo on Autoencoders
...along with applications.
Jul 8 • Avi Chawla
© 2025 Avi Chawla