India has proposed stronger technology rules to tackle the growing risks of AI misuse, including mandatory labelling of AI-generated content such as deepfakes and expanded obligations for major platforms. The framework aims to increase transparency for users, curb the spread of deceptive media, and strengthen accountability for distribution at scale, reflecting emerging global norms around synthetic content.

For creators, compliance would likely involve watermarking or metadata tags (sketched in code below), while platforms may need updated detection pipelines, appeals processes, and clear disclosures at the point of viewing. Policy analysts note that implementation details will be decisive: standardized labelling protocols, interoperability across platforms, and penalties for non-compliance will shape both the rules' effectiveness and their impact on innovation.

Consumers stand to benefit from clearer signals of authenticity, particularly in political communication, celebrity endorsements, and crisis information, where manipulated media can cause real harm. For the startup ecosystem, predictable guidelines could reduce reputational risk and align product design with safety expectations, though smaller firms may need support to meet the technical requirements.

Over the coming weeks, stakeholder feedback and revised drafts may refine thresholds, exemptions, and transition timelines, aiming to balance user protection with a vibrant AI economy. Users should expect more visible “AI-generated” badges, improved media literacy prompts, and easier reporting tools integrated into content feeds.
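To make the labelling idea concrete, here is a minimal sketch in Python of what metadata-based disclosure could look like for a creator and a platform, assuming PNG images and the Pillow library. The tag names (`ai_generated`, `generator`) and file paths are hypothetical illustrations, not anything specified by the proposed rules; production systems would more likely adopt a shared provenance standard such as C2PA Content Credentials than ad hoc keys.

```python
# A minimal sketch of metadata-based AI disclosure, assuming PNG + Pillow.
# The key names below are hypothetical; real deployments would likely
# follow a standardized provenance scheme (e.g., C2PA Content Credentials).
from PIL import Image
from PIL.PngImagePlugin import PngInfo


def label_as_ai_generated(src_path: str, dst_path: str) -> None:
    """Creator side: embed a disclosure tag in the PNG's text metadata."""
    img = Image.open(src_path)
    meta = PngInfo()
    meta.add_text("ai_generated", "true")           # hypothetical key name
    meta.add_text("generator", "example-model-v1")  # hypothetical provenance field
    img.save(dst_path, pnginfo=meta)


def is_labelled_ai_generated(path: str) -> bool:
    """Platform side: read the tag back, e.g. to decide whether to show a badge."""
    img = Image.open(path)
    text = getattr(img, "text", {})  # PNG text chunks; empty dict if absent
    return text.get("ai_generated") == "true"


if __name__ == "__main__":
    # Assumes render.png exists locally; both filenames are placeholders.
    label_as_ai_generated("render.png", "render_labelled.png")
    print(is_labelled_ai_generated("render_labelled.png"))  # True
```

A real detection pipeline could not rely on such tags alone, since metadata is trivially stripped; this is why the debate pairs metadata with watermarking and platform-side detection rather than treating any single signal as sufficient.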