India Proposes Strict Rules to Label AI-Generated Content Amid Deepfake Concerns
www.reuters.com/business/med...
▶ The Indian government has proposed new rules to label AI-generated content to fight deepfakes and misinformation.
▶ Platforms such as OpenAI, Meta, X, and Google must clearly mark AI-generated visuals and audio.
▶ Labels must cover at least 10% of the image surface or the first 10% of an audio clip's duration (see the sketch after this list).
▶ Companies must also obtain user declarations confirming whether uploaded content is AI-generated.
▶ The draft rules call for metadata traceability and technical safeguards to verify AI-produced content.
▶ India's IT Ministry said the measures address growing risks of AI misuse, election manipulation, and impersonation.
▶ Experts call the 10% labelling rule one of the first quantifiable visibility standards worldwide.
▶ The proposal mirrors similar labelling standards emerging in the EU and China.
▶ India, now the second-largest market for OpenAI, aims to balance innovation with user safety.
▶ Public comments on the draft are open until November 6, 2025.
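A minimal sketch of what the 10% visibility threshold could mean in practice. The draft does not prescribe an implementation; the function names, the banner-style placement, and the reading of "image surface" as raw pixel area are assumptions for illustration only.

```python
# Illustrative only: one possible reading of the proposed 10% rule,
# not an official or final implementation.

def min_label_area_px(width_px: int, height_px: int, ratio: float = 0.10) -> int:
    """Smallest label area (in pixels) covering `ratio` of the image surface."""
    return int(width_px * height_px * ratio)

def audio_disclosure_window(duration_s: float, ratio: float = 0.10) -> tuple[float, float]:
    """(start, end) in seconds for a disclosure over the first `ratio` of a clip."""
    return (0.0, duration_s * ratio)

# A 1920x1080 image would need >= 207,360 px^2 of label area,
# e.g. a full-width banner 108 px tall; a 60 s clip would need
# a disclosure spanning its first 6 seconds.
print(min_label_area_px(1920, 1080))    # 207360
print(audio_disclosure_window(60.0))    # (0.0, 6.0)
```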
China Drafts Cybersecurity Law Amendment to Regulate and Support AI Development
www.chinadaily.com.cn/a/202510/24/...
▶ China's top legislature is considering a draft amendment to the Cybersecurity Law (2017) to govern AI more effectively.
▶ Lawmakers reviewed 162 public submissions after the first draft in September 2025.
▶ The proposal seeks to balance AI innovation with regulation and safety oversight.
▶ It supports core AI research, algorithm development, and AI infrastructure building.
▶ The draft introduces ethical standards and risk-monitoring systems for AI technologies.
▶ The amendment promotes alignment with the Civil Code and the Personal Information Protection Law to strengthen data privacy.
▶ Fines and penalties for cybersecurity and AI-related violations would be increased, with severe cases facing license suspension or revocation.
▶ As of June 2025, China had 515 million generative AI users, doubling since December 2024 (CNNIC data).
▶ The update aims to ensure responsible AI governance amid China's rapidly expanding digital ecosystem of over 1 billion internet users.
UK to Use AI to Predict Drug Side Effects Before Reaching Patients
www.gov.uk/government/n...
▶ The MHRA announced three AI-driven projects to make medicines safer and reach patients faster.
▶ A new study will use AI and NHS data to predict harmful drug interactions before treatments reach the public.
▶ The project, funded with £859,650, focuses on cardiovascular medicines and aims to prevent the side effects behind 1 in 6 hospital admissions.
▶ Scientists from the MHRA, PhaSER Biomedical, and the University of Dundee will develop the model.
▶ AI will identify risky drug combinations, which will then be validated in human-based lab models (a toy example follows below).
▶ The MHRA will also pilot AI-assisted tools for scientific advice, trial assessments, and licensing, funded with £1 million.
▶ A third project, worth £259,250, will test synthetic patient data to support trials for cancer and rare diseases.
▶ Combined, the projects represent over £2 million in UK government investment through the Regulators' Pioneer Fund and the AI Capability Fund.
▶ The MHRA said the program marks a "step change" in drug approval, strengthening the UK's position as a global life sciences leader.
▶ All initiatives aim to maintain human oversight while using AI to speed safe innovation in healthcare.
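The announcement does not describe the model itself, so the following is a purely illustrative toy: a rule-based screen that flags known-risky drug pairs in a prescription list. The drug names and the interaction table are hypothetical placeholders, not MHRA data; the real project would learn such signals from NHS records rather than hard-code them.

```python
# Purely illustrative: the MHRA announcement specifies no method or data.
# The interaction table below is a hypothetical placeholder.
from itertools import combinations

RISKY_PAIRS = {
    frozenset({"warfarin", "aspirin"}): "increased bleeding risk",
    frozenset({"simvastatin", "amiodarone"}): "myopathy risk",
}

def flag_interactions(prescription: list[str]) -> list[str]:
    """Return a warning for every known-risky pair in a prescription."""
    warnings = []
    for a, b in combinations(sorted(set(prescription)), 2):
        note = RISKY_PAIRS.get(frozenset({a, b}))
        if note:
            warnings.append(f"{a} + {b}: {note}")
    return warnings

print(flag_interactions(["warfarin", "aspirin", "metformin"]))
# ['aspirin + warfarin: increased bleeding risk']
```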
UK Government Opens Call for Evidence on New AI Growth Lab
assets.publishing.service.gov.uk/media/68f75b...
▶ The Department for Science, Innovation and Technology (DSIT) has launched a Call for Evidence on the design of the AI Growth Lab, a new regulatory sandbox to support responsible AI innovation.
▶ The Lab will allow AI developers to test products and services in real-world conditions under limited, time-bound regulatory adjustments.
▶ Its goal is to explore how flexible, evidence-based regulation can help accelerate safe AI deployment while maintaining public trust and accountability.
▶ The consultation seeks views on how the Growth Lab should operate, including which sectors to prioritize and how oversight and risk management should work in practice.
▶ DSIT invites input from businesses, researchers, regulators, and the public on how the Lab can support innovation while protecting safety, fairness, and rights.
▶ Responses will help inform how regulators collaborate, evaluate outcomes, and scale successful sandbox approaches across sectors.
▶ Evidence gathered will shape the framework for future AI regulatory pilots in the UK.
▶ Submissions are open until 2 January 2026.
Federal Judges Admit AI Tools Caused Errors in Court Decisions
www.reuters.com/sustainabili...
▶ Two U.S. federal judges said AI-generated content led to factual errors in recent court rulings.
▶ Judge Julien Xavier Neals (NJ) reported that a law intern used ChatGPT for legal research without approval.
▶ The resulting draft opinion in a securities lawsuit was released prematurely and later withdrawn.
▶ Judge Henry Wingate (MS) said a clerk used Perplexity AI to draft a civil rights ruling later replaced due to inaccuracies.
▶ Both judges have now adopted written AI-use policies and enhanced internal review procedures.
▶ Senate Judiciary Chair Chuck Grassley demanded stronger judicial AI guidelines nationwide.
▶ Grassley said courts must ensure AI use does not compromise fair treatment or litigants' rights.
▶ Lawyers have also faced fines and sanctions for unvetted AI-generated filings in recent years.
▶ Sen. Josh Hawley's bill would require AI chatbots to clearly disclose they are not human or licensed professionals.
▶ The bill also seeks to cover AI companions marketed as therapists or emotional support tools.
▶ The legislation aims to protect children's wellbeing and ensure transparency in AI interactions.
▶ Current U.S. law does not regulate how AI chatbots market themselves to minors.
▶ The American Psychological Association warns of reduced socialization and increased loneliness in young users.