
Vital to remain vigilant about deepfakes in global election year: Wipro’s Global Privacy Officer

By IANS | Updated: March 23, 2024 13:25 IST


New Delhi, March 23 Considering that more than 60 countries, including India, are entering election mode this year, it is vital that we remain vigilant about recent trends in the dynamic digital landscape, especially deepfakes, says Ivana Bartoletti, Global Chief Privacy and AI Governance Officer at Wipro.

With the widespread use of generative AI, we face a new and concerning threat: deepfakes.

“Deepfakes have become accessible to everyone, posing a significant risk as these manipulations allow the creation and dissemination of realistic audio and video content featuring individuals saying and doing things they never actually said or did,” emphasised Bartoletti, also the founder of the ‘Women Leading in AI Network’.

The consequences extend beyond the digital realm, as online disinformation and coordination can spill over into real-world violence.

In India, the government has issued an update to its AI advisory, saying that big digital companies no longer need the government's permission before launching an AI model in the country.

However, big tech companies are advised to label “under-tested and unreliable AI models to inform users of their potential fallibility or unreliability.”

"Under-tested/unreliable Artificial Intelligence foundational model(s)/LLM/Generative AI, software(s) or algorithm(s) or further development on such models should be made available to users in India only after appropriately labelling the possible inherent fallibility or unreliability of the output generated," according to the new MeitY advisory.

All intermediaries or platforms must ensure that the use of AI models /LLM/Generative AI, software or algorithms "does not permit its users to host, display, upload, modify, publish, transmit, store, update or share any unlawful content as outlined in Rule 3(1)(b) of the IT Rules or violate any other provision of the IT Act."

The digital platforms have been asked to comply with new AI guidelines with immediate effect.

According to Bartoletti, to ensure public safety, companies must take responsibility and implement measures to combat deepfakes and disinformation.

“This includes investing in advanced detection technologies to identify and flag deepfake content, as well as collaborating with experts to develop effective debunking methods,” she noted.

Additionally, promoting media literacy and critical thinking among the public is crucial.

--IANS

na/dan

Disclaimer: This post has been auto-published from an agency feed without any modifications to the text and has not been reviewed by an editor


