Switzerland currently has no dedicated Artificial Intelligence (AI) legislation. Nevertheless, given the ever-increasing adoption of AI tools across various sectors – in particular in finance – the risks associated with such systems inevitably require thorough scrutiny.
To this end, the Swiss Financial Market Supervisory Authority (FINMA) has recently[i] published a set of findings and observations which adopt a risk-based approach covering operational, data-related, IT and cyber, as well as legal and reputational perspectives. Supervised entities therefore need to identify, assess, monitor, manage and control the risks associated with their AI applications – whether developed in-house or outsourced – and to ensure that these risks are reflected in their respective governance models.
Above all, FINMA highlights operational risks – such as a lack of robustness, correctness, bias and explainability – the risks associated with third-party service providers, and challenges in allocating responsibility and accountability as the most pressing issues.
Once identified, the ‘materiality’ of the risks in question needs to be determined – in other words, whether a given AI application warrants heightened scrutiny because it “…is used to comply with supervisory law or to perform critical functions, or when customers or employees are strongly affected by its results”.
From the perspective of data-related risks, incorrect, inconsistent, incomplete, unrepresentative or outdated data would clearly undermine the credibility and effectiveness of an AI application. Measures therefore need to be put in place to ensure the integrity of input data and to secure the availability of, and access to, data. In addition, FINMA refers to regular checks to detect data drift, and to validation methods to guarantee the ongoing quality of output data.
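FINMA does not prescribe any particular technique for such checks. Purely by way of illustration, a periodic data-drift check of the kind referred to above could be sketched with a two-sample Kolmogorov–Smirnov test comparing live input data against a reference (training-time) sample; the `detect_drift` helper, the significance threshold and the simulated data below are illustrative assumptions, not part of the guidance:

```python
import numpy as np
from scipy.stats import ks_2samp


def detect_drift(reference: np.ndarray, current: np.ndarray,
                 alpha: float = 0.05) -> bool:
    """Flag drift when the current input distribution differs
    significantly from the reference sample, using a two-sample
    Kolmogorov-Smirnov test at significance level alpha."""
    _statistic, p_value = ks_2samp(reference, current)
    return bool(p_value < alpha)


# Simulated feature values: a reference sample and a batch whose
# mean has shifted, i.e. the input data has drifted.
rng = np.random.default_rng(42)
reference = rng.normal(loc=0.0, scale=1.0, size=5_000)
shifted = rng.normal(loc=0.8, scale=1.0, size=5_000)

print(detect_drift(reference, shifted))  # a shift this large is flagged
```

Run regularly (for example on each new batch of production inputs), such a check would give early warning that an application's input data no longer resembles what it was developed on – one simple way of operationalising the ongoing monitoring FINMA describes.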
Lastly, the explainability of results is critical to an effective assessment of an AI application: the drivers of a given application, and its behaviour under varying circumstances and conditions, need to be comprehensible even to non-experts such as clients, investors and supervisory authorities. For applications of higher ‘materiality’, an independent review providing an informed and unbiased opinion on the reliability of the application in question would also need to be taken into account during its development phase.
[i] See https://www.finma.ch/en/news/2024/12/20241218-mm-finma-am-08-24/.