Preserve generalization power while reducing run-time.
Nice work again, Avi!
Christoph Molnar also wrote about this in his book Interpretable Machine Learning; he calls it a "global surrogate model": https://christophm.github.io/interpretable-ml-book/global.html
Thanks for sharing this, Marcell. The approximation model they describe strikes me as very similar to knowledge distillation :)
Thanks again for sharing, and for appreciating the work :)
This is a great idea! I'm curious how mathematically sound it would be to create a "global surrogate model" for xgboost or other boosted-tree models.
Intuitively it seems very similar.
Any chance you have sample code in R?
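For readers following along, here is a minimal global-surrogate sketch in Python (the thread asks for R, but the recipe is the same two-step fit in either language): train xgboost as the black box, then fit a shallow, interpretable decision tree on the black box's predictions rather than on the true labels. The synthetic dataset, model settings, and fidelity metric below are illustrative assumptions, not code from the original post.

```python
# Global-surrogate sketch (illustrative): fit a black-box model, then train
# an interpretable tree to mimic the black box's predictions.
from sklearn.datasets import make_regression
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeRegressor
from sklearn.metrics import r2_score
from xgboost import XGBRegressor  # assumes the xgboost package is installed

# Synthetic data, purely for demonstration.
X, y = make_regression(n_samples=2000, n_features=10, noise=0.1, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# 1. The black-box model we want to explain.
black_box = XGBRegressor(n_estimators=200, max_depth=4, random_state=0)
black_box.fit(X_train, y_train)

# 2. The surrogate: trained on the black box's *predictions*, not the true labels.
surrogate = DecisionTreeRegressor(max_depth=3, random_state=0)
surrogate.fit(X_train, black_box.predict(X_train))

# 3. Fidelity: how closely the surrogate tracks the black box on held-out data
#    (the R-squared between the two models' predictions, as in Molnar's book).
fidelity = r2_score(black_box.predict(X_test), surrogate.predict(X_test))
print(f"surrogate fidelity (R^2 vs. black box): {fidelity:.3f}")
```

Only the shallow tree is inspected for interpretation, and the fidelity score says how far to trust that interpretation as a description of the black box.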