Stanford University researchers released a report on Wednesday examining the transparency of artificial intelligence foundation models from companies such as OpenAI and Google. The report's authors are urging these companies to disclose more information, including details about the data and human labor used to train their models.
The study points out that transparency has been on the decline in the AI field over the last three years, even as the capabilities of these models continue to advance. Stanford professor Percy Liang, one of the researchers behind the Foundation Model Transparency Index, emphasized the concern, saying, “We view this as highly problematic because we’ve seen in other areas like social media that when transparency goes down, bad things can happen as a result.”
Foundation models are AI systems trained on massive datasets, enabling them to perform a wide range of tasks, from generating text to writing code. These models, developed by various companies, are fueling the rapid growth of generative AI, which has gained immense popularity since the launch of Microsoft-backed OpenAI’s ChatGPT.
In a world that increasingly relies on these models for decision-making and automation, understanding their limitations and biases is of paramount importance, according to the report's authors.
The Foundation Model Transparency Index graded ten prominent models on 100 transparency indicators, covering aspects such as disclosure of training data and the computational resources used. All ten models received “unimpressive” scores: even the most transparent, Meta’s Llama 2, scored only 54 out of 100. Amazon’s Titan model ranked lowest at 12 out of 100, while OpenAI’s GPT-4 received a 48.
The authors of the index hope their report will serve as an incentive for companies to improve the transparency of their foundation models, and as a starting point for governments seeking to regulate the rapidly evolving field of AI.
The Foundation Model Transparency Index is an initiative by the Stanford Institute for Human-Centered Artificial Intelligence’s Center for Research on Foundation Models.