
Alisar Mustafa

@alisarmustafa.bsky.social

Author of the AI Policy Newsletter. Subscribe here: Alisarmustafa.substack.com

140 Followers  |  147 Following  |  324 Posts  |  Joined: 16.11.2024

Latest posts by alisarmustafa.bsky.social on Bluesky

▶ India, now the second-largest market for OpenAI, aims to balance innovation with user safety.

30.10.2025 20:03 — 👍 0    🔁 0    💬 0    📌 0

▶ The proposal mirrors similar labelling standards emerging in the EU and China.
▶ Public comments on the draft are open until November 6, 2025.
▶ Experts call the 10% labelling rule one of the first quantifiable visibility standards worldwide.

30.10.2025 20:03 — 👍 0    🔁 0    💬 1    📌 0

▶ The draft rules call for metadata traceability and technical safeguards to verify AI-produced content.
▶ India’s IT Ministry said the measures address growing risks of AI misuse, election manipulation, and impersonation.

30.10.2025 20:03 — 👍 0    🔁 0    💬 1    📌 0

▶ Labels must cover at least 10% of the image surface or the first 10% of an audio clip’s duration.
▶ Companies must also obtain user declarations confirming whether uploaded content is AI-generated.

30.10.2025 20:03 — 👍 0    🔁 0    💬 1    📌 0

▶ The Indian government has proposed new rules to label AI-generated content to fight deepfakes and misinformation.
▶ Platforms such as OpenAI, Meta, X, and Google must clearly mark AI-generated visuals and audio.

30.10.2025 20:03 — 👍 0    🔁 0    💬 1    📌 0

India Proposes Strict Rules to Label AI-Generated Content Amid Deepfake Concerns
www.reuters.com/business/med...

30.10.2025 20:03 — 👍 0    🔁 0    💬 1    📌 0

▶ Lawmakers reviewed 162 public submissions after the first draft in September 2025.
▶ The update aims to ensure responsible AI governance amid China’s rapidly expanding digital ecosystem of over 1 billion internet users.

30.10.2025 17:04 — 👍 0    🔁 0    💬 0    📌 0

▶ The amendment promotes alignment with the Civil Code and Personal Information Protection Law to strengthen data privacy.
▶ Fines and penalties for cybersecurity and AI-related violations would be increased, with severe cases facing license suspension or revocation.

30.10.2025 17:04 — 👍 0    🔁 0    💬 1    📌 0

▶ The draft introduces ethical standards and risk monitoring systems for AI technologies.
▶ As of June 2025, China had 515 million generative AI users, doubling since December 2024 (CNNIC data).

30.10.2025 17:04 — 👍 0    🔁 0    💬 1    📌 0

▶ China’s top legislature is considering a draft amendment to the Cybersecurity Law (2017) to govern AI more effectively.
▶ The proposal seeks to balance AI innovation with regulation and safety oversight.
▶ It supports core AI research, algorithm development, and AI infrastructure building.

30.10.2025 17:04 — 👍 0    🔁 0    💬 1    📌 0
China weighs draft amendment to cybersecurity law to better promote, regulate sound AI development

China Drafts Cybersecurity Law Amendment to Regulate and Support AI Development
www.chinadaily.com.cn/a/202510/24/...

30.10.2025 17:04 — 👍 0    🔁 0    💬 1    📌 0

▶ All initiatives aim to maintain human oversight while using AI to speed safe innovation in healthcare.
▶ The MHRA said the program marks a “step change” in drug approval, strengthening the UK’s position as a global life sciences leader.

29.10.2025 20:03 — 👍 0    🔁 0    💬 0    📌 0

▶ A third project, worth £259,250, will test synthetic patient data to support trials for cancer and rare diseases.
▶ Combined, the projects represent over £2 million in UK government investment through the Regulators’ Pioneer Fund and AI Capability Fund.

29.10.2025 20:03 — 👍 0    🔁 0    💬 1    📌 0

▶ AI will identify risky drug combinations, which will be validated in human-based lab models.
▶ The MHRA will also pilot AI-assisted tools for scientific advice, trial assessments, and licensing, funded with £1 million.

29.10.2025 20:03 — 👍 0    🔁 0    💬 1    📌 0

▶ The project, funded with £859,650, focuses on cardiovascular medicines and aims to prevent the side effects that cause 1 in 6 hospital admissions.
▶ Scientists from the MHRA, PhaSER Biomedical, and the University of Dundee will develop the model.

29.10.2025 20:03 — 👍 0    🔁 0    💬 1    📌 0

▶ The MHRA announced three AI-driven projects to make medicines safer and reach patients faster.
▶ A new study will use AI and NHS data to predict harmful drug interactions before treatments reach the public.

29.10.2025 20:03 — 👍 0    🔁 0    💬 1    📌 0
Side effects from drug interactions to be predicted by AI before reaching patients
The MHRA leads three new government-backed projects using AI-driven approaches to make medicines safer and bring treatments to patients more quickly.

UK to Use AI to Predict Drug Side Effects Before Reaching Patients
www.gov.uk/government/n...

29.10.2025 20:03 — 👍 0    🔁 0    💬 1    📌 0

▶ Evidence gathered will shape the framework for future AI regulatory pilots in the UK.
▶ Submissions are open until 2 January 2026.

29.10.2025 17:03 — 👍 0    🔁 0    💬 0    📌 0

▶ DSIT invites input from businesses, researchers, regulators, and the public on how the Lab can support innovation while protecting safety, fairness, and rights.
▶ Responses will help inform how regulators collaborate, evaluate outcomes, and scale successful sandbox approaches across sectors.

29.10.2025 17:03 — 👍 0    🔁 0    💬 1    📌 0

▶ The consultation seeks views on how the Growth Lab should operate, including which sectors to prioritize and how oversight and risk management should work in practice.

29.10.2025 17:03 — 👍 0    🔁 0    💬 1    📌 0

▶ The Lab will allow AI developers to test products and services in real-world conditions under limited, time-bound regulatory adjustments.
▶ Its goal is to explore how flexible, evidence-based regulation can help accelerate safe AI deployment while maintaining public trust and accountability.

29.10.2025 17:03 — 👍 0    🔁 0    💬 1    📌 0

▶ The Department for Science, Innovation and Technology (DSIT) has launched a Call for Evidence on the design of the AI Growth Lab, a new regulatory sandbox to support responsible AI innovation.

29.10.2025 17:03 — 👍 0    🔁 0    💬 1    📌 0

UK government opens Call for Evidence on new AI Growth Lab
assets.publishing.service.gov.uk/media/68f75b...

29.10.2025 17:03 — 👍 0    🔁 0    💬 1    📌 0

▶ Lawyers have also faced fines and sanctions for unvetted AI-generated filings in recent years.

28.10.2025 20:03 — 👍 0    🔁 0    💬 0    📌 0

▶ Both judges have now adopted written AI-use policies and enhanced internal review procedures.
▶ Senate Judiciary Chair Chuck Grassley demanded stronger judicial AI guidelines nationwide.
▶ Grassley said courts must ensure AI use does not compromise fair treatment or litigants’ rights.

28.10.2025 20:03 — 👍 0    🔁 0    💬 1    📌 0

▶ The resulting draft opinion in a securities lawsuit was released prematurely and later withdrawn.
▶ Judge Henry Wingate (MS) said a clerk used Perplexity AI to draft a civil rights ruling later replaced due to inaccuracies.

28.10.2025 20:03 — 👍 0    🔁 0    💬 1    📌 0

▶ Two U.S. federal judges said AI-generated content led to factual errors in recent court rulings.
▶ Judge Julien Xavier Neals (NJ) reported a law intern used ChatGPT for legal research without approval.

28.10.2025 20:03 — 👍 0    🔁 0    💬 1    📌 0

Federal Judges Admit AI Tools Caused Errors in Court Decisions
www.reuters.com/sustainabili...

28.10.2025 20:03 — 👍 0    🔁 0    💬 1    📌 0

▶ Current U.S. law does not regulate how AI chatbots market themselves to minors.
▶ The legislation aims to protect children’s wellbeing and ensure transparency in AI interactions.
▶ The bill also seeks to cover AI companions marketed as therapists or emotional support tools.

28.10.2025 17:04 — 👍 0    🔁 0    💬 0    📌 0

▶ The American Psychological Association warns of reduced socialization and increased loneliness in young users.
▶ Hawley’s bill would require AI chatbots to clearly disclose they are not human or licensed professionals.

28.10.2025 17:04 — 👍 0    🔁 0    💬 1    📌 0
