The recent international AI Safety Summit was hosted in the UK in November 2023. Governments from 28 countries, along with representatives of 42 industry and 50 academic organisations, took part, including leading AI organisations such as Hugging Face, Meta, OpenAI, Nvidia, and Google DeepMind.
The summit produced an agreement, the Bletchley Declaration, to establish a shared understanding of the risks and opportunities presented by "frontier AI". The declaration, signed by major nations including the United States, China, and members of the European Union, focuses on collaboration to build a scientific, evidence-based understanding of AI risks and to develop risk-based policies to ensure safety.
At the summit, the UK also announced its own AI Safety Institute, and the US announced the formation of an American AI Safety Institute.
EU Artificial Intelligence Act
Among ongoing AI regulation efforts, the EU is leading the way. The final draft of the EU Artificial Intelligence Act was released on 21 January 2024. The act takes a risk-based approach, classifying AI systems by the level of risk they pose.
Unacceptable-risk AI is prohibited (e.g. social scoring systems and manipulative AI).
Most of the text addresses high-risk AI systems, which are regulated.
A smaller section handles limited-risk AI systems, which are subject to lighter transparency obligations: developers and deployers must ensure that end users are aware they are interacting with AI (e.g. chatbots and deepfakes).
Minimal-risk AI is unregulated (this includes the majority of AI applications currently on the EU single market, such as AI-enabled video games and spam filters – at least as of 2021; this is changing with generative AI).
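The four-tier scheme above can be sketched as a simple lookup. This is purely an illustration: the tier names and example systems come from the text, and none of the code reflects the Act's actual legal definitions.

```python
from enum import Enum

class RiskTier(Enum):
    """Illustrative encoding of the AI Act's four risk tiers."""
    UNACCEPTABLE = "prohibited"
    HIGH = "regulated"
    LIMITED = "transparency obligations"
    MINIMAL = "unregulated"

# Example systems from the text, mapped to their tiers.
EXAMPLES = {
    "social scoring system": RiskTier.UNACCEPTABLE,
    "chatbot": RiskTier.LIMITED,
    "deepfake generator": RiskTier.LIMITED,
    "spam filter": RiskTier.MINIMAL,
    "AI-enabled video game": RiskTier.MINIMAL,
}

def treatment(system: str) -> str:
    """Return the regulatory treatment for a known example system."""
    return EXAMPLES[system].value

print(treatment("social scoring system"))  # prohibited
```

The point of the tiering is that regulatory burden scales with risk: the same lookup structure would let a hypothetical compliance tool route a system to the matching set of obligations.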
The majority of obligations fall on providers (developers) of high-risk AI systems. Deployers, i.e. natural or legal persons that deploy an AI system in a professional capacity, bear the second-largest share of obligations.
All general-purpose AI (GPAI) providers must supply technical documentation. Those whose models pose systemic risk must also conduct model evaluations and adversarial testing, track and report serious incidents, and ensure cybersecurity protections.
Prohibited AI systems
Subliminal, manipulative, or deceptive techniques that distort decision-making and cause significant harm.
Exploiting vulnerabilities related to age, disability, or socio-economic circumstances to distort behaviour, causing significant harm.
Biometric categorisation systems that infer sensitive attributes.
Assessing the risk of an individual committing criminal offences based solely on profiling or personality traits.
Compiling facial recognition databases through untargeted scraping of facial images.
Inferring emotions in workplaces or educational institutions (except for medical or safety reasons).
'Real-time' remote biometric identification (RBI) in publicly accessible spaces for law enforcement (except in narrowly defined cases).
High-risk AI systems
Systems used as a safety component of, or as a product covered by, EU laws listed in Annex II AND required to undergo a third-party conformity assessment under those Annex II laws.
Systems listed in Annex III, except in special cases.
Systems that profile individuals.
Most general-purpose AI.
Requirements for providers of high-risk AI systems
Establish a risk management system throughout the high-risk AI system's lifecycle.
Conduct data governance, ensuring that training, validation, and testing datasets are relevant, sufficiently representative and, to the best extent possible, free of errors and complete according to the intended purpose.
Draw up technical documentation to demonstrate compliance and provide authorities with the information needed to assess that compliance.
Design the high-risk AI system for record-keeping, enabling it to automatically record events relevant for identifying national-level risks and substantial modifications throughout the system's lifecycle.
Provide instructions for use to downstream deployers to enable the latter's compliance.
Design the high-risk AI system to allow deployers to implement human oversight.
Design the high-risk AI system to achieve appropriate levels of accuracy, robustness, and cybersecurity.
Establish a quality management system to ensure compliance.
After entry into force, the AI Act will apply:
after 6 months for prohibited AI systems;
after 12 months for GPAI;
after 24 months for high-risk AI systems under Annex III;
after 36 months for high-risk AI systems under Annex II.
Codes of practice must be ready 9 months after entry into force.
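As a quick illustration of this timetable, the sketch below computes the application dates from an entry-into-force date. The date used is hypothetical and chosen only for the example; it is not a claim about the Act's actual timeline.

```python
from datetime import date

def add_months(d: date, months: int) -> date:
    """Add calendar months to a date. Day-of-month clamping is not
    needed here because this sketch only uses day 1."""
    y, m = divmod((d.year * 12 + d.month - 1) + months, 12)
    return date(y, m + 1, d.day)

# Application delays from the text, in months after entry into force.
DELAYS = {
    "prohibited AI systems": 6,
    "codes of practice": 9,
    "GPAI": 12,
    "high-risk AI systems (Annex III)": 24,
    "high-risk AI systems (Annex II)": 36,
}

# Hypothetical entry-into-force date, purely for illustration.
entry_into_force = date(2024, 8, 1)
for item, months in DELAYS.items():
    print(f"{item}: {add_months(entry_into_force, months)}")
```

With an entry into force on 1 August 2024, for example, the prohibitions would apply from 1 February 2025 and the Annex II high-risk rules from 1 August 2027.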
UK white paper and US AI Bill of Rights Blueprint
The UK's AI policy paper, A pro-innovation approach to AI regulation, proposes a framework for AI regulation aimed at fostering innovation while ensuring public trust. It focuses on creating rules proportionate to the risks associated with AI's sectoral use, supporting AI's development and use to drive growth and prosperity while building public trust in AI technologies.
The US AI guideline, Blueprint for an AI Bill of Rights, outlines five principles for the ethical use of AI. It emphasises the need to use AI in ways that uphold democratic values and civil rights, serving as a guide for using technology in a manner that protects all people and reinforces societal values.
Both documents function as guidance, offering suggestions and signalling good initiatives; there is no clear evidence yet that strict regulations will be introduced to enforce their restrictions.