Europe – Commission's legal framework on Artificial Intelligence could impact Talent Acquisition and Assessment

This post was originally published by SIA

The European Commission is proposing the first ever legal framework on Artificial Intelligence, which addresses the risks of AI and aims to develop an ecosystem of trust around AI.

The proposal is based on EU values and fundamental rights and aims to give people the confidence to embrace AI-based solutions, while encouraging businesses to develop them.

The proposed legal framework follows a commitment by European Commission President Ursula von der Leyen to put forward legislation for a coordinated European approach on the human and ethical implications of AI. This was then followed by a White Paper that set out policy options on how to achieve the objective of promoting the uptake of AI and of addressing the risks associated with certain uses of such technology.

“AI should be a tool for people and be a force for good in society with the ultimate aim of increasing human well-being,” the Commission states. “Rules for AI available in the Union market or otherwise affecting people in the Union should therefore be human centric, so that people can trust that the technology is used in a way that is safe and compliant with the law, including the respect of fundamental rights.”

The proposal could have important implications for AI used in talent acquisition and assessment.

The Commission has put forward the proposed regulatory framework on AI with the following specific objectives:

  • ensure that AI systems placed on the Union market and used are safe and respect existing law on fundamental rights and Union values;
  • ensure legal certainty to facilitate investment and innovation in AI;
  • enhance governance and effective enforcement of existing law on fundamental rights and safety requirements applicable to AI systems;
  • facilitate the development of a single market for lawful, safe and trustworthy AI applications and prevent market fragmentation.

“The proposal lays down a solid risk methodology to define ‘high-risk’ AI systems that pose significant risks to the health and safety or fundamental rights of persons. Those AI systems will have to comply with a set of horizontal mandatory requirements for trustworthy AI and follow conformity assessment procedures before those systems can be placed on the Union market.”

Predictable, proportionate and clear obligations are also placed on providers and users of those systems to ensure safety and respect for existing legislation protecting fundamental rights throughout the entire lifecycle of AI systems. For some specific AI systems, only minimum transparency obligations are proposed, in particular when chatbots or ‘deep fakes’ are used.

The proposed rules will be enforced through a governance system at the member state level, building on already existing structures, and a cooperation mechanism at Union level with the establishment of a European Artificial Intelligence Board.

According to law firm Osborne Clarke, the proposal’s goal of ‘trustworthiness’ creates a new layer of regulatory risk and the additional financial and organisational burden of compliance.

Furthermore, Osborne Clarke expects the draft provisions to be subject to extensive lobbying as they make their way through the legislative process, and does not expect the framework to become law before 2023, probably with a further 18 to 24 months before it is fully in force.

In its proposal, the Commission details a list of prohibited AI practices, high-risk AI systems, transparency obligations for certain AI systems, codes of conduct, governance and implementation, and measures to reduce regulatory burdens on SMEs and start-ups.

For the full proposal, click here.  

