The Supercharged Jupyter Kernel That Was Waiting to be Discovered
Addressing some pain points of the default kernel.
Working in Jupyter Notebooks gets tedious and messy at times.
If you use them regularly, you already know that when a variable is updated, all its dependent cells must be re-executed manually.
And if you have many cells (and, to make things worse, they are spread here and there), execution errors are almost inevitable.
Recently, I learned a pretty cool Jupyter hack that addresses these pain points.
Lately, instead of using the standard Jupyter kernel (ipykernel), I have started using ipyflow.
It is a supercharged kernel for Jupyter that tracks the relationships between cells and variables.
Its magic command enables automatic recursive re-execution of dependent cells whenever a variable is updated.
This is depicted in the demo below:
As demonstrated above, updating the variable x automatically triggers re-execution of all its dependent cells.
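For reference, reactive re-execution is enabled through ipyflow's %flow magic. The sketch below assumes ipyflow's documented reactive mode and only works inside the ipyflow kernel, so treat it as a notebook sketch rather than plain Python:

```python
# Cell 1: enable reactive execution (ipyflow kernel only)
%flow mode reactive

# Cell 2
x = 5

# Cell 3: depends on x
y = x + 1

# Cell 4: depends on y (and transitively on x)
print(y * 2)

# Re-running Cell 2 with, say, x = 10 now automatically
# re-executes Cells 3 and 4 in dependency order.
```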
What’s more, at any point, we can obtain the corresponding code to reconstruct any symbol, as demonstrated below:
This way, we can easily determine the cell executions that generated a specific output.
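ipyflow exposes this through a code() helper importable inside an ipyflow-backed notebook. A minimal sketch (again, this only runs on the ipyflow kernel, and the exact output format may vary by version):

```python
# Inside a notebook running the ipyflow kernel
from ipyflow import code

x = 5
y = x + 1
z = y * 2

# Ask ipyflow for the slice of cell code that reconstructs z;
# it should cover the assignments to x, y, and z above.
print(code(z))
```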
Do note that ipyflow provides a kernel separate from Jupyter's default one.
Thus, after installing ipyflow (pip install ipyflow), select the ipyflow kernel when launching a new notebook:
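Concretely, the setup comes down to the commands below (the kernel's display name may differ slightly depending on your ipyflow version):

```
pip install ipyflow

# confirm the new kernel was registered alongside the default one
jupyter kernelspec list

# then launch Jupyter and pick the ipyflow kernel for the new notebook
jupyter notebook
```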
Isn’t that cool?
Get started here: ipyflow.
On a side note:
When I discovered ipyflow, I felt it was something I never realized I wanted. I had no idea I had a pain point with the default Jupyter kernel until I learned about the solution. I think this is a classic case where you learn about the problem only after first learning about the solution :)
Anyway, if you want to read some more Jupyter hacks, check out this recent issue:
👉 If you liked this post, don’t forget to leave a like ❤️. It helps more people discover this newsletter on Substack and tells me that you appreciate reading these daily insights.
The button is located towards the bottom of this email.
Thanks for reading!
Latest full articles
If you’re not a full subscriber, here’s what you missed last month:
Federated Learning: A Critical Step Towards Privacy-Preserving Machine Learning
You Cannot Build Large Data Projects Until You Learn Data Version Control!
Why Bagging is So Ridiculously Effective At Variance Reduction?
Sklearn Models are Not Deployment Friendly! Supercharge Them With Tensor Computations.
Deploy, Version Control, and Manage ML Models Right From Your Jupyter Notebook with Modelbit
Gaussian Mixture Models (GMMs): The Flexible Twin of KMeans.
To receive all full articles and support the Daily Dose of Data Science, consider subscribing:
👉 Tell the world what makes this newsletter special for you by leaving a review here :)
👉 If you love reading this newsletter, feel free to share it with friends!