Increasing reliance on Artificial Intelligence for decision making is driving financial institutions, regulators, and supervisors to clarify the sources of risk and how to control them. These risks were either marginal or entirely absent from the usual model risk management framework. As the use of machine learning becomes widespread and industrialized across banks and insurance companies, issues such as interpretability, dynamic monitoring, robustness, ethics, bias, and fairness demand specific attention.
Although all of these topics are active areas of academic research and business innovation, their rigorous analysis from a model risk perspective remains in its early stages. Close collaboration between academics, regulatory experts, and private-sector professionals can accelerate the search for pragmatic answers to many important questions: How should the outputs of black-box models be interpreted? How should machine learning models be monitored over time? When and why do they drift? How can discrimination introduced by algorithms be controlled? How can decisions be protected from the effects of input data changes or data falsification?
This short paper, based on Emerton Data research and analysis, provides an introduction to the emerging issues of machine learning risk and ethics, with a focus on insurance and, more generally, financial services, probably the most mature sectors in this respect, even though these issues will soon affect all industries.