SDG 9: Industry, Innovation and Infrastructure | SDG 16: Peace, Justice and Strong Institutions
Institutions: Ministry of Electronics & IT (MeitY) | NITI Aayog
The Australian government’s National AI Centre (NAIC) has released new guidance, “Being clear about AI-generated content,” aimed at helping businesses improve transparency and build customer trust around digital content created or modified using artificial intelligence. The NAIC states that transparency is crucial for reducing regulatory and reputational risks in the digital economy. The guidance identifies three practical steps businesses can take to make AI-generated content easy to identify: labelling (adding a visible notice that content is AI-generated, along with its source), watermarking (embedding traceable information in the content itself), and metadata recording (including descriptive information in the file).
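As an illustration of the metadata-recording step, the short Python sketch below writes a provenance note into a PNG file’s text metadata using the Pillow imaging library. The library choice, key names, file paths, and tool name are assumptions made for illustration; they are not drawn from the NAIC guidance, and production systems would more likely follow an industry standard such as C2PA Content Credentials.

# A minimal, illustrative sketch of the metadata-recording step: writing a
# provenance note into a PNG file's text metadata with the Pillow library.
# The key names and file paths are assumptions, not part of the NAIC guidance.
from PIL import Image, PngImagePlugin

def tag_as_ai_generated(src_path: str, dst_path: str, tool_name: str) -> None:
    """Copy an image, adding text metadata that declares it AI-generated."""
    img = Image.open(src_path)
    info = PngImagePlugin.PngInfo()
    info.add_text("AI-Generated", "true")   # hypothetical key name
    info.add_text("Generator", tool_name)   # hypothetical key name
    img.save(dst_path, pnginfo=info)

def read_tags(path: str) -> dict:
    """Return the text metadata stored in a PNG file."""
    return dict(Image.open(path).text)

if __name__ == "__main__":
    tag_as_ai_generated("render.png", "render_tagged.png", "example-image-model")
    print(read_tags("render_tagged.png"))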
Watermarking is a method of embedding information—often invisible to the naked eye but digitally traceable—into content (like images, audio, or text) to help trace its origin or verify its authenticity. This process provides a robust technical mechanism, typically more secure than a simple visible label, to establish accountability for digital content created or modified by AI systems.
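To make the idea concrete, the simplified sketch below hides a short marker string in the least significant bits of an image’s red channel and reads it back. This is an illustrative toy, not a technique recommended by the NAIC: the marker string, file paths, and use of the Pillow library are assumptions, and real AI-provenance watermarks are designed to survive compression, cropping, and other edits, which this example would not.

# A simplified, illustrative sketch of invisible watermarking: a short marker
# string is hidden in the least significant bits of an image's red channel.
# Real provenance watermarks are far more robust; this only demonstrates the
# idea of embedding traceable, invisible information in content.
from PIL import Image

MARKER = "AI-GEN"  # hypothetical marker; 6 bytes = 48 pixels needed

def embed_marker(src_path: str, dst_path: str, marker: str = MARKER) -> None:
    """Hide the marker string in the red-channel LSBs of the image."""
    img = Image.open(src_path).convert("RGB")
    bits = "".join(f"{byte:08b}" for byte in marker.encode("utf-8"))
    width, height = img.size
    if len(bits) > width * height:
        raise ValueError("image too small to hold the marker")
    pixels = img.load()
    for i, bit in enumerate(bits):
        x, y = i % width, i // width
        r, g, b = pixels[x, y]
        pixels[x, y] = ((r & ~1) | int(bit), g, b)  # overwrite the red LSB
    img.save(dst_path)  # save to a lossless format such as PNG

def extract_marker(path: str, length: int = len(MARKER)) -> str:
    """Recover the marker string from the red-channel LSBs."""
    img = Image.open(path).convert("RGB")
    pixels = img.load()
    width, _ = img.size
    bits = "".join(str(pixels[i % width, i // width][0] & 1)
                   for i in range(length * 8))
    return int(bits, 2).to_bytes(length, "big").decode("utf-8")

if __name__ == "__main__":
    embed_marker("render.png", "render_marked.png")
    print(extract_marker("render_marked.png"))  # prints "AI-GEN"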
Businesses are advised to choose the level of transparency appropriate to the content’s context and potential impact, for instance using a simple label for internal documents while applying both watermarking and metadata to official or high-impact publications. The guidance is positioned as aligned with evolving industry best practice and global standards, and is intended to be kept up to date so that it continues to build confidence that digital content is recognizable as AI-derived.
Policy Relevance
The comprehensive approach to labelling, watermarking, and metadata provides a clear operational blueprint for Indian regulators (MeitY, IndiaAI) as they develop domestic guidelines for AI transparency. This framework is crucial for tackling the challenges posed by deepfakes and preserving public trust in digital platforms, aligning India’s governance strategy with global transparency norms.
Relevant Question for Policy Stakeholders: What regulatory framework and technical standards must MeitY adopt to mandate the use of watermarking for high-risk AI-generated content to combat deepfakes effectively?
Follow the full news here: New guidance helps Australians identify AI-generated content

