About the probabilities computed from a decision tree you say: "Yet, these manipulations do not account for the “true” uncertainty in a prediction. This is because the uncertainty is the same for all predictions that land in the same leaf node." Can you explain this a bit further? What is the problem with this type of uncertainty?
This isn't a "problem" per se; it's just that many different instances can receive the same probabilistic estimate if they end up in the same leaf node. Realistically, this should not be the case: if the inputs are altered, you would expect the probabilistic outputs to vary as well. But if changing the input gives you no sense of how the model's confidence is changing, you cannot draw conclusions about how different variables affect the final outcome. Does that answer your question?
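For concreteness, here is a minimal sketch of that behaviour using scikit-learn's DecisionTreeClassifier (a library choice and synthetic dataset of my own, not from the original discussion; boosted trees like XGBoost and LightGBM behave analogously within each tree). Two distinct inputs that land in the same leaf get exactly the same predicted probabilities:

```python
from sklearn.datasets import make_classification
from sklearn.tree import DecisionTreeClassifier

# Synthetic data, purely for illustration.
X, y = make_classification(n_samples=500, n_features=4, random_state=0)
tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)

# Two different perturbations of the same point.
x_a = (X[0] + 0.01).reshape(1, -1)
x_b = (X[0] + 0.02).reshape(1, -1)

# apply() returns the index of the leaf each sample lands in;
# small perturbations typically end up in the same leaf.
print(tree.apply(x_a)[0], tree.apply(x_b)[0])

# If both inputs fall in the same leaf, predict_proba() returns
# identical class probabilities for both, so the perturbation
# reveals nothing about how the model's confidence changes.
print(tree.predict_proba(x_a))
print(tree.predict_proba(x_b))
```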
Just to confirm: XGBoost and LightGBM are labelling models, right? Can we say that all ensemble-based models are labelling models?