The rapid rise of large language models has dominated much of the conversation around AI in recent months—which is understandable, given LLMs’ novelty and the speed of their integration into the daily workflows of data science and ML professionals.
Longstanding concerns around model performance and the risks models pose remain crucial, however, and explainability sits at the heart of these concerns: how and why do models produce the predictions they offer us? What’s inside the black box?
This week, we’re returning to the topic of model explainability with several recent articles that tackle its intricacies with nuance and offer hands-on approaches for practitioners to experiment with. Happy learning!
- As Vegard Flovik aptly puts it, “for applications within safety-critical heavy-asset industries, where errors can lead to disastrous outcomes, lack of transparency can be a major roadblock for adoption.” To address this challenge, Vegard provides a thorough guide to the open-source Iguanas framework and shows how you can leverage its automated rule-generation capabilities for increased explainability.
- While SHAP values have proven beneficial in many real-world scenarios, they, too, come with limitations. Samuele Mazzanti cautions against placing too much weight (pun intended!) on feature importance and recommends paying equal attention to error contribution, since “the fact that a feature is important doesn’t imply that it is beneficial for the model.” (For a quick feel for the distinction, see the sketch right after this list.)
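If you’d like a hands-on feel for that distinction before reading Samuele’s piece, here is a minimal sketch on a toy regression problem. It is not code from the article: the “error contribution” formula below is just one plausible way to measure how much each feature’s SHAP contribution adds to the absolute error, and the dataset, model, and variable names are all illustrative.

```python
import numpy as np
import pandas as pd
import shap
from sklearn.datasets import make_regression
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import train_test_split

# Toy regression data; feature names are arbitrary.
X, y = make_regression(n_samples=500, n_features=5, noise=10.0, random_state=0)
X = pd.DataFrame(X, columns=[f"f{i}" for i in range(5)])
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = GradientBoostingRegressor(random_state=0).fit(X_train, y_train)

# SHAP values on held-out data: one additive contribution per feature per row.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X_test)            # shape: (n_rows, n_features)
base_value = np.ravel(explainer.expected_value)[0]     # scalar or length-1 array
pred = shap_values.sum(axis=1) + base_value            # ~= model.predict(X_test)

# "Prediction contribution": the usual mean(|SHAP|) feature importance.
prediction_contribution = np.abs(shap_values).mean(axis=0)

# "Error contribution" (one possible formulation): how much larger the absolute
# error is with a feature's SHAP contribution included vs. removed.
# Positive values suggest the feature tends to increase the error on average.
abs_error = np.abs(y_test - pred)
error_contribution = np.array([
    (abs_error - np.abs(y_test - (pred - shap_values[:, j]))).mean()
    for j in range(X.shape[1])
])

summary = pd.DataFrame(
    {
        "prediction_contribution": prediction_contribution,
        "error_contribution": error_contribution,
    },
    index=X.columns,
)
print(summary.sort_values("prediction_contribution", ascending=False))
```

A feature that ranks high on prediction contribution but also shows a positive error contribution under this formulation is precisely the scenario Samuele warns about: important, yet not obviously helping the model.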