Daily Dose of Data Science
Deep learning
DropBlock vs. Dropout for Regularizing CNNs
Addressing a limitation of Dropout when used in CNNs.
Jul 9 • Avi Chawla
Bias-Variance Tradeoff is Incomplete!
A counterintuitive phenomenon while training ML models.
Jul 7 • Avi Chawla
Scale ML Models to Billions of Parameters
...with 4 simple changes to your PyTorch code.
Jul 3 • Avi Chawla
15 Techniques to Optimize Neural Network Training
...explained in a single frame.
Jun 27 • Avi Chawla
TabM: A Powerful Alternative to MLP Ensemble
A 32x parameter reduction without accuracy loss.
Jun 6 • Avi Chawla
48 Most Popular Open ML Datasets
...summarized in a single frame.
Jun 2 • Avi Chawla
5 Chunking Strategies For RAG
...explained in a single frame.
May 29 • Avi Chawla
Memory Pinning to Accelerate Model Training
A simple technique, and some key considerations.
May 7 • Avi Chawla
Knowledge Distillation using Teacher Assistant
Improved model compression.
Apr 25 • Avi Chawla
Transfer Learning, Fine-tuning, Multitask Learning, and Federated Learning
Four must-know model training paradigms.
Mar 26 • Avi Chawla
Implementing Knowledge Distillation From Scratch
...to compress ML models.
Mar 10 • Avi Chawla
5 LLM Fine-tuning Techniques
...explained visually.
Feb 20 • Avi Chawla