As AI evolves rapidly, regulators are falling behind. The EU leads with a structured code that focuses on transparency, rights, and risk safeguards in generative AI.
The EU’s voluntary code urges firms to document training methods, declare use cases, and flag systemic risks, balancing innovation with oversight.
Copyright compliance is central. The code calls for lawful data use, safeguards against infringing outputs, and respect for content scraped from the web.
Few countries match this effort. India has no AI law yet, and its Digital Personal Data Protection Act, 2023 doesn’t address model training, open-source tools, or cross-border scraping.
Big Tech promises self-regulation, but real checks are missing. Even global AI summits have failed to align companies on the long-term risks of unsafe AI.
Global rules are hard to build, but essential. India, with its deep technical talent, can lead. Risk-based deployment, audit trails, and ethical design are key to making AI accountable.