Explainability Methods for Legal Judgment Prediction in Switzerland

Introduction

We recently presented a dataset for Legal Judgment Prediction (the task of predicting the outcome of a case from its facts) comprising 85K Swiss Federal Supreme Court decisions [2]. Although we achieved up to 70% Macro-F1, the models still operate as black boxes and are therefore not interpretable. In this project, you will venture into the realm of explainable Machine Learning to better understand the predictions the models make on the Legal Judgment Prediction dataset. There are many explainability methods that can be tried and compared, such as SHAP, LIME, Diverse Counterfactual Explanations (DiCE), Integrated Gradients, attention-based explanations, or probes trained to predict the legal area, canton of origin, or citations.
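As a concrete starting point, the sketch below shows how one of these methods, Integrated Gradients, could be applied to a fine-tuned judgment-prediction classifier using Captum (one of the libraries listed below). It is a minimal sketch, not a reference implementation of the project: the checkpoint path is a placeholder, a binary approval/dismissal label mapping is assumed, and the all-padding baseline is a simplification.

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification
from captum.attr import LayerIntegratedGradients

# Placeholder checkpoint: any classifier fine-tuned on the judgment prediction task.
model_name = "path/to/finetuned-judgment-model"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name)
model.eval()

def forward_func(input_ids, attention_mask):
    # Return the raw logits so Captum can attribute the score of the target class.
    return model(input_ids=input_ids, attention_mask=attention_mask).logits

# Attribute the prediction to the input embedding layer of the transformer.
lig = LayerIntegratedGradients(forward_func, model.get_input_embeddings())

facts = "Die Beschwerde richtet sich gegen das Urteil des Obergerichts ..."  # placeholder facts
encoding = tokenizer(facts, return_tensors="pt", truncation=True, max_length=512)
input_ids, attention_mask = encoding["input_ids"], encoding["attention_mask"]

# Simplified baseline: the same sequence with every token replaced by the pad token.
baseline_ids = torch.full_like(input_ids, tokenizer.pad_token_id)

attributions, delta = lig.attribute(
    inputs=input_ids,
    baselines=baseline_ids,
    additional_forward_args=(attention_mask,),
    target=1,                      # assumed index of the "approval" class
    n_steps=50,
    return_convergence_delta=True,
)

# Collapse the embedding dimension to obtain one relevance score per token.
scores = attributions.sum(dim=-1).squeeze(0)
scores = scores / scores.norm()
tokens = tokenizer.convert_ids_to_tokens(input_ids.squeeze(0))
for token, score in sorted(zip(tokens, scores.tolist()), key=lambda x: -abs(x[1]))[:15]:
    print(f"{token:20s} {score:+.3f}")
```

The same per-token attribution view could then be compared against SHAP or LIME scores, or against attention weights, to study how well the different methods agree on the facts that drive a decision.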

Other libraries that might come in handy: Interpret-text, Captum, and transformers-interpret.
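For a quicker first experiment, transformers-interpret wraps a similar attribution workflow behind a one-call interface for standard Hugging Face sequence-classification models. A minimal sketch, again with a placeholder checkpoint and example input:

```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
from transformers_interpret import SequenceClassificationExplainer

model_name = "path/to/finetuned-judgment-model"  # placeholder checkpoint
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name)

explainer = SequenceClassificationExplainer(model, tokenizer)
# Calling the explainer returns (word, attribution) pairs for the predicted class.
word_attributions = explainer("Die Beschwerde richtet sich gegen das Urteil ...")
for word, attribution in word_attributions:
    print(f"{word:20s} {attribution:+.3f}")
```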