The transparency of generative artificial intelligence (AI) models has been assessed through the "Foundation Model Transparency Index" (FMTI), developed by the Center for Research on Foundation Models (CRFM) at Stanford to evaluate the transparency of major AI model developers. Transparency is fundamental to ensuring accountability, fairness, and user trust in AI systems, particularly for generative models, which produce new content from the data on which they were trained.
The CRFM developed the FMTI to evaluate developers' transparency across three main domains: the resources used to develop the model, the characteristics of the model itself, and the final use of the model, as illustrated in the sketch below.
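To make the structure of such an index more concrete, the following minimal sketch shows one possible way to aggregate a developer's transparency score from binary disclosure indicators grouped into the three domains. The class name, domain keys, indicators, and equal weighting are illustrative assumptions only and do not reproduce the FMTI's actual indicator set or scoring methodology.

```python
# Illustrative sketch (not the official FMTI methodology): scoring a developer
# on hypothetical binary disclosure indicators grouped into the three domains
# described above. Domain names and indicators are assumptions for illustration.

from dataclasses import dataclass, field


@dataclass
class DeveloperAssessment:
    name: str
    # Each domain maps indicator names to True (disclosed) or False (not disclosed).
    indicators: dict[str, dict[str, bool]] = field(default_factory=dict)

    def domain_score(self, domain: str) -> float:
        """Share of satisfied indicators within one domain."""
        checks = self.indicators.get(domain, {})
        return sum(checks.values()) / len(checks) if checks else 0.0

    def overall_score(self) -> float:
        """Unweighted average across the three domains (an assumed aggregation)."""
        domains = ["upstream_resources", "model", "downstream_use"]
        return sum(self.domain_score(d) for d in domains) / len(domains)


# Usage example with purely hypothetical indicators.
assessment = DeveloperAssessment(
    name="ExampleLab",
    indicators={
        "upstream_resources": {"training_data_sources": True, "compute_disclosed": False},
        "model": {"model_size_disclosed": True, "capabilities_documented": True},
        "downstream_use": {"usage_policy_published": True, "impact_reporting": False},
    },
)
print(f"{assessment.name}: {assessment.overall_score():.0%} transparent")
```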
The CRFM's approach, while valuable, is not entirely aligned with the European legal and social context. In particular, the FMTI focuses on transparency but does not thoroughly address explainability, i.e., the ability of an AI model to make its decisions understandable. Explainability is considered fundamental in the EU to ensuring user trust in AI systems, enabling users to grasp not only what a model does but also the decision-making process behind its outputs.