Major AI models not very transparent: Report

By IANS | Updated: October 19, 2023 16:35 IST

New York, Oct 19 Artificial intelligence (AI) foundation models such as Meta's Llama 2 and OpenAI's GPT-4 are low on transparency, according to a global report released on Thursday.

The Foundation Model Transparency Index, created by a group of eight AI researchers from Stanford University, MIT Media Lab, and Princeton University, evaluated the 10 most popular AI models on how much their developers disclose about how the models were built and how people use their systems.

The report showed that “no major foundation model developer is close to providing adequate transparency, revealing a fundamental lack of transparency in the AI industry”.

Among the models it tested, Meta’s Llama 2 (54 per cent) scored the highest, closely followed by BloomZ (53 per cent) and then OpenAI’s GPT-4 (48 per cent).

Other models evaluated include Stability’s Stable Diffusion (47 per cent), Google’s PaLM 2 (40 per cent), Anthropic’s Claude (36 per cent), Command from Cohere (34 per cent), AI21 Labs’ Jurassic 2 (25 per cent), Inflection-1 (21 per cent) from Inflection, and Amazon’s Titan (12 per cent).

“While the societal impact of these models is rising, transparency is on the decline. If this trend continues, foundation models could become just as opaque as social media platforms and other previous technologies, replicating their failure modes,” the researchers said.

The report defined transparency based on 100 indicators for information about how the models are built, how they work, and how people use them. The researchers assessed these companies on the basis of their most salient and capable foundation model and systematically gathered information made publicly available by the developer as of September 15.

For each developer, two researchers scored the 100 indicators, assessing whether the developer satisfied the indicator on the basis of public information.

The initial scores were shared with leaders at each company, encouraging them to contest scores they disagreed with.

Although the mean score was just 37 per cent, 82 of the indicators were satisfied by at least one developer. This means that developers can significantly improve transparency by adopting best practices from their competitors, the researchers said.
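The scoring described in the report amounts to simple counting: a developer's transparency score is the share of the 100 indicators it satisfies, and the overall figure is the mean across developers. The Python sketch below illustrates that arithmetic with invented placeholder data; the indicator values and the subset of models shown are assumptions for illustration, not figures from the Index itself.

```python
# Hypothetical illustration of indicator-based scoring.
# Indicator values below are invented placeholders, not data from the
# Foundation Model Transparency Index.

from statistics import mean

# Each developer is scored against binary indicators (1 = satisfied, 0 = not).
indicator_scores = {
    "Llama 2": [1, 1, 0, 1, 0, 1, 1, 0, 1, 0],  # example values only
    "GPT-4":   [1, 0, 0, 1, 0, 1, 0, 0, 1, 1],
    "Titan":   [0, 0, 0, 1, 0, 0, 0, 0, 1, 0],
}

def transparency_score(scores: list[int]) -> float:
    """Percentage of indicators a developer satisfies."""
    return 100 * sum(scores) / len(scores)

per_model = {name: transparency_score(s) for name, s in indicator_scores.items()}
overall_mean = mean(per_model.values())

for name, score in per_model.items():
    print(f"{name}: {score:.0f}%")
print(f"Mean score: {overall_mean:.0f}%")

# An indicator counts as "satisfied by at least one developer" if any model
# scores 1 on it.
num_indicators = len(next(iter(indicator_scores.values())))
satisfied_by_any = sum(
    any(scores[i] for scores in indicator_scores.values())
    for i in range(num_indicators)
)
print(f"Indicators satisfied by at least one developer: {satisfied_by_any}")
```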

“This provides a snapshot of transparency across the AI ecosystem. All developers have significant room for improvement that we will aim to track in the future versions of the Index,” the researchers noted.
