
Artificial Intelligence (AI)

Navigating the opportunities and legal risks: exploring AI's impact on your business

Artificial Intelligence (AI) offers a wide range of new opportunities, but it also creates novel risks for companies, governments and individuals across society. These opportunities and risks raise ethical, legal and regulatory challenges.

At CMS we are keen to share our thinking on AI and to contribute to the wider AI debate. CMS has a long history of advising companies large and small on leveraging the benefits of new technology, including AI developments, while limiting the associated legal risks.

Visit our local Artificial Intelligence Insight pages and contact any of the partners listed for more information about our AI-related legal services.

Governments / Public Policy Guidance

  • CMS Netherlands is actively involved with the NL AI Coalition. Katja van Kranenburg is chair of the working group Human Capital. 
  • CMS Hungary is co-organising, together with Microsoft and the Hungarian AI Coalition, the Responsible AI Conference series, held twice a year. The conference is dedicated to discussing the latest trends in, and the regulatory framework for, AI, and features speakers from the AI industry as well as regulatory bodies and experts in the fields of AI ethics and policy.
  • CMS advised on the England and Wales Law Society’s response to the Department for Science, Innovation and Technology’s AI White Paper.
  • Charles Kerrigan and Rachel Free (CMS UK) are Advisory Board Members of the UK All-Party Parliamentary Group on AI.
  • Charles Kerrigan is a recommended advisor for the UK Parliament Innovation Hub on AI and emerging technologies.

AI Events


24/04/2024
Navigating the Future with the EU AI Act - Join Our Responsible AI Series
Join us for the next session of our acclaimed Responsible AI series for an exclusive look at where EU countries and companies stand on AI following the adoption of the EU AI Act. Why attend? 2024 is a pivotal year for AI regulation. This session provides a unique platform to explore the nuances, challenges and opportunities of the current AI ecosystem, directly from a distinguished panel of AI industry leaders, regulatory experts and pioneers in AI ethics and policy. At this event, we will delve into the specific implementation steps of the EU AI Act and provide you with a roadmap of the steps your organisation will need to take to comply with the AI Act, with the first obligations coming into force as early as November 2024. We will discuss the critical role of data, the foundation on which AI algorithms are trained and refined, and the importance of accessible, user-friendly AI systems that prioritise ethical considerations and privacy protection. Please click the button below to register for this event. Should you have any queries about the event, please contact us.
18/04/2024
In View: Life Sciences & Healthcare - What's new in AI Regulation and Data...
 We are delighted to invite you to the CMS In View: Life Sciences & Healthcare - What’s New in AI Regulation and Data Protection? event taking place on Thursday 18 April at our London Cannon Place offices. The seminar will focus on key AI and data protection topics relevant to life sciences and healthcare where you will hear from industry and regulatory experts from the ICO, Health Research Authority, UCL, the Wellcome Trust and CMS UK specialists. If you would like to attend this event, please register via the button below.
09/04/2024
The AI Act and its implications for the automotive industry
Join us for an exclusive webinar where we delve into the intricacies of the AI Act and its implications for the automotive industry. As AI continues to revolutionise the way vehicles are designed, manufactured and operated, it is imperative for industry professionals to understand the regulatory landscape shaping its use. Our webinar brings together a panel of distinguished experts, including legal specialists, representatives from the European Commission, and experts from EU legislative bodies. They will provide invaluable insights, guidance and interpretation of the AI Act's provisions, ensuring that participants gain a comprehensive understanding of its impact on automotive innovation and compliance.

 

Resources and publications


Looking ahead to the EU AI Act
Learn about what companies should be aware of in order to prepare for implementation...
International Digital Regulation Hub
Digital regulation is shaping the future of Europe’s economy. Now is the...
The CMS Intelligent Tech Hub, CMS UK
Brought to you by experts in the CMS UK Finance team specialising in digital...
Digital Generation, The Mobile Century 2024
CMS supports GTWN with their latest publication featuring articles on AI...
Deal Deliberations Series
AI: When it pays to work smarter, CMS UK
AI Assurance: Building Trust in Responsible AI Systems in the UK
AI assurance involves the process of measuring, evaluating, and communicating...
On Point: Human + Machine; exploring AI’s impact on business
CMS Funds Group AI & Tech Interviews
This series of interviews focuses on the meaning of digitalisation, digital...
The use of generative AI in Litigation: Future implications and potential...
In this article, Rebecca Byczok and Reeve Boyd from CMS’ Finance Disputes...
AI in Financial Services - Autumn 2023 update
China Promulgated Framework Regulations on Generative AI

AI Library

Artificial Intelligence and Machine Learning
Markus Kaulartz, Partner, CMS Germany
AI, Machine Learning & Big Data Laws and Regulations 2023
Contributing Editor, Charles Kerrigan, Partner, CMS UK
Artificial Intelligence, Law and Regulation
Edited by Charles Kerrigan, Partner, CMS UK

Feed

23/04/2024
Artificial Intelligence and Occupational Health and Safety – Opportunities...
This article looks at both the opportunities and risks presented by using artificial intelligence with regard to occupational health and safety. AI has become an integral part of the modern working world...
18/04/2024
Transforming the Legal Landscape? The Impact of LLMs
Large Language Models (LLMs) are a branch of artificial intelligence (AI) that can generate human-like text based on deep learning techniques. LLMs are trained on massive amounts of textual data, such...
17/04/2024
Impact of the CJEU's Schufa judgment on the use of AI in HR
This article examines the extent to which the CJEU's Schufa judgment is an obstacle to the use of artificial intelligence (AI) in the HR sector. More and more companies are using AI systems in HR. A key...
08/04/2024
Virtual Influencers: Opportunities and Hurdles
Virtual influencers are a new phenomenon, at least in Germany. They might be used to advertise digital fashion in future, for example, or to interact in the metaverse. What are the special features of virtual influencers? What are the benefits, especially for businesses, and can they be regarded as influencers from a legal viewpoint? Adrian Zarm and Dr Gabriele Stark, both from the Intellectual Property practice, answer these and many other important questions in our new podcast.
27/03/2024
CMS signs global partnership with leading legal GenAI vendor Harvey
International law firm CMS has entered into a global partnership with Harvey, one of the world’s leading generative AI (GenAI) platforms. This partnership puts CMS, operating in 47 countries, at the forefront of using GenAI to enhance the delivery of legal services to clients.

CMS has a strong track record of using AI technology in its legal service delivery across transaction, litigation and advisory practice groups. Adding GenAI technology to support clients around the globe is the next evolutionary step. CMS has been looking at the potential of AI for a number of years, and at generative AI since before ChatGPT hit the news. “CMS believes that GenAI will enhance and support our human knowledge and skills, enabling the firm to deliver even greater benefits to its clients,” said Isabel Scholes, CMS Executive Director.

Backed by OpenAI, Harvey augments productivity and streamlines workflows across different parts of legal work, such as contract analysis, due diligence, litigation and regulatory compliance. Harvey can help produce insights and assist in creating initial drafts, suggestions and forecasts from large amounts of data, which are used to create final deliverables. This helps lawyers provide quicker, better and more affordable solutions to their clients.

In 2023, CMS started a pilot programme with Harvey, involving a large number of CMS lawyers, tax advisors and notaries in several jurisdictions. Now, CMS will introduce Harvey in a phased approach across its member firms, starting in France, Germany, the Netherlands, Portugal and the UK.

Pierre-Sébastien Thill, CMS Chairman, said: “We are very pleased to be collaborating with the GenAI platform Harvey. CMS lawyers will now not only have access to GenAI tools that will help them enhance the delivery of services to our clients, but will also work with the Harvey team to help shape the future of GenAI systems in the legal sector.”

Duncan Weston, CMS Executive Partner, added: “At CMS, we are constantly challenging and innovating the way legal services are delivered. Our focus is on using technology to solve problems and generate value for our clients. Our global partnership with Harvey is a prime example of this.”

“CMS's pioneering spirit in embracing Harvey's AI technology is a powerful move towards a more innovative legal sector. We are honoured to be a part of this journey,” said Gabe Pereyra, Harvey Co-Founder and President.

“By joining forces with CMS, Harvey is helping to chart a new course for legal services that is more efficient, accurate, and client-focused. It's a privilege to support such a dedicated team of professionals,” said Winston Weinberg, Co-Founder and CEO of Harvey.
15/03/2024
Next steps
Following the release of the pre-final text of the AI Act and its adoption by the European Parliament’s Internal Market and Civil Liberties Committees in February 2024, the torch was passed to the European Parliament plenary. The vote took place on 13 March 2024 and approval was given by a large majority. The text is now being revised by the European Parliament’s legal linguists, after which the final text will be formally approved once again by the European Parliament; this is expected to take place on 10/11 April. The final text will then have to be approved by the Council of the European Union. No date has yet been set, but it can be assumed that this will happen soon after the European Parliament's approval, most likely end of April/early May 2024.

The AI Act will enter into force on the 20th day after publication in the EU Official Journal and will be applicable after 24 months. However, some specific provisions have different application dates: the prohibitions on AI will apply 6 months after entry into force, while General Purpose AI models already on the market are given a compliance deadline of 12 months.

The AI Office was established on 21 February 2024, and the European Commission will oversee the issuance of at least 20 delegated acts. The AI Act’s implementation will be supported by an expert group formed to advise and assist the European Commission in avoiding overlaps with other EU regulations. Meanwhile, Member States must appoint at least one notifying authority and one market surveillance authority, and communicate to the European Commission the identity of the competent authorities and the single point of contact.

The next regulatory step appears to be focused on AI liability. On 14 December 2023, EU policymakers reached a political agreement on the amendment of the Product Liability Directive. This proposal aims to accommodate technological developments, notably covering digital products such as software, including AI. The next proposal in line in the AI package is the Directive on the adaptation/harmonisation of the rules on non-contractual civil liability to Artificial Intelligence (AI Liability Directive). Addressing issues of causality and fault related to AI systems, this directive proposal ensures that claimants can enforce appropriate remedies when suffering damage in fault-based scenarios. The draft was published on 28 September 2022 and is still awaiting consideration by the European Parliament and the Council of the European Union. Once adopted, EU Member States will be obliged to transpose its provisions into national law, likely within a two-year timeframe.

The enactment of the AI Act represents a pivotal step towards fostering a regulatory landscape, not only in the EU but worldwide, that balances innovation, trust and accountability, ensuring that AI serves as a driver of progress while safeguarding fundamental rights and societal values.
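The staggered application schedule described above (entry into force 20 days after publication, prohibitions after 6 months, a 12-month deadline for General Purpose AI models already on the market, general applicability after 24 months) can be sketched as a short date calculation. This is purely illustrative: the function name and the publication date used below are placeholders, since the actual dates depend on when the final text appears in the EU Official Journal.

```python
from datetime import date, timedelta

def add_months(d: date, months: int) -> date:
    """Add calendar months to a date, clamping the day where needed."""
    month_index = d.month - 1 + months
    year = d.year + month_index // 12
    month = month_index % 12 + 1
    leap = year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)
    days_in_month = [31, 29 if leap else 28, 31, 30, 31, 30,
                     31, 31, 30, 31, 30, 31][month - 1]
    return date(year, month, min(d.day, days_in_month))

def ai_act_timeline(publication: date) -> dict:
    """Key application dates relative to publication in the EU Official
    Journal, per the schedule described in the text above."""
    entry_into_force = publication + timedelta(days=20)
    return {
        "entry_into_force": entry_into_force,
        "prohibitions_apply": add_months(entry_into_force, 6),
        "gpai_on_market_deadline": add_months(entry_into_force, 12),
        "generally_applicable": add_months(entry_into_force, 24),
    }

# Hypothetical publication date, for illustration only
timeline = ai_act_timeline(date(2024, 7, 12))
```

With this placeholder publication date, entry into force would fall on 1 August 2024 and general applicability on 1 August 2026.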
15/03/2024
Codes of conduct, confidentiality and penalties, delegation of power and...
Codes of conduct (Currently Title IX, Art. 69)

In order to foster ethical and reliable AI systems and to increase AI literacy among those involved in the development, operation and use of AI, the new AI Act mandates the AI Office and Member States to promote the development of codes of conduct for non-high-risk AI systems. These codes of conduct, which should take into account available technical solutions and industry best practices, would promote voluntary compliance with some or all of the mandatory requirements that apply to high-risk AI systems. Such voluntary guidelines should be consistent with EU values and fundamental rights and address issues such as transparency, accountability, fairness, privacy and data governance, and human oversight. Furthermore, to be effective, such codes of conduct should be based on clear objectives and key performance indicators to measure the achievement of those objectives. Codes of conduct may be developed by individual AI system providers, deployers, or organisations representing them, and should be developed in an inclusive manner, involving relevant stakeholders such as business and civil society organisations, academia and others. The European Commission will assess the impact and effectiveness of the codes of conduct within two years of the AI Act entering into application, and every three years thereafter. The aim is to encourage the application of requirements for high-risk AI systems to non-high-risk AI systems, and possibly other additional requirements for such AI systems (including in relation to environmental sustainability).
14/03/2024
Governance and post-market monitoring, information sharing, market surveillance
Governance (Currently Title VI, Art. 55b-59)

The AI Act establishes a governance framework under Title VI, aimed at coordinating and supporting its application at national level, building capabilities at Union level and integrating stakeholders in the field of artificial intelligence. The measures related to governance will apply from 12 months after the entry into force of the AI Act.

To develop Union expertise and capabilities, an AI Office is established within the Commission, with a strong link to the scientific community to support its work, which includes the issuance of guidance. Its establishment should not affect the powers and competences of national competent authorities, or of bodies, offices and agencies of the Union, in the supervision of AI systems.

The newly proposed AI governance structure also includes the establishment of the European AI Board (AI Board), composed of one representative per Member State, designated for a period of three years. Its list of tasks has been extended and includes collecting and sharing technical and regulatory expertise and best practices in the Member States, contributing to their harmonisation, and assisting the AI Office in establishing and developing regulatory sandboxes with national authorities. Upon request of the Commission, the AI Board will issue recommendations and written opinions on any matter related to the implementation of the AI Act. The Board shall establish two standing sub-groups to provide a platform for cooperation and exchange among market surveillance authorities and notifying authorities on issues related to market surveillance and notified bodies.

The final text of the AI Act also introduces two new advisory bodies. An advisory forum (Art. 58a) will be established to provide stakeholder input to the European Commission and the AI Board, preparing opinions, recommendations and written contributions. A scientific panel of independent experts (Art. 58b), selected by the European Commission, will provide technical advice and input to the AI Office and market surveillance authorities. The scientific panel will also be able to alert the AI Office to possible systemic risks at Union level. Member States may call upon experts of the scientific panel to support their enforcement activities under the AI Act and may be required to pay fees for the experts' advice and support.

Each Member State shall establish or designate at least one notifying authority and at least one market surveillance authority as national competent authorities for the purposes of the AI Act. Member States shall ensure that the national competent authorities are provided with adequate technical, financial and human resources, and with the infrastructure, to fulfil their tasks effectively under this regulation, and that they satisfy an adequate level of cybersecurity measures. One market surveillance authority shall also be appointed by each Member State to act as a single point of contact.
13/03/2024
General purpose AI models and measures in support of innovation
General purpose AI models (Currently Title VIIIA, Art. 52a-52e)

The AI Act is founded on a risk-based approach. This regulation, intended to be durable, was initially tied not to the characteristics of any particular model or system but to the risk associated with its intended use. This was the approach when the proposal for the AI Act was drafted and adopted by the European Commission on 22 April 2021, and when the proposal was discussed at the Council of the European Union on 6 December 2022. However, after the great global success of generative AI tools in the months following the Commission’s proposal, regulating AI by focusing only on its intended use came to seem insufficient. In the 14 June 2023 draft, the concept of “foundation models” (much broader than generative AI) was therefore introduced, with associated regulation. During the negotiations in December 2023, further proposals were introduced regarding “very capable foundation models” and “general purpose AI systems built on foundation models and used at scale”. In the final version of the AI Act, there is no reference to “foundation models”; instead, the concept of “general purpose AI models and systems” was adopted.

General Purpose AI models (Arts. 52a to 52e) are distinguished from General Purpose AI systems (Arts. 28 and 63a). General Purpose AI systems are based on General Purpose AI models: “when a general purpose AI model is integrated into or forms part of an AI system, this system should be considered a general purpose AI system” if it has the capability to serve a variety of purposes (Recital 60d). And, of course, General Purpose AI models are themselves the result of the operation of the AI systems that created them.

“General purpose AI model” is defined in Article 3.44b as “an AI model (…) that displays significant generality and is capable to competently perform a wide range of distinct tasks regardless of the way the model is placed on the market and that can be integrated into a variety of downstream systems or applications”. The definition is somewhat circular (a model is “general purpose” if it “displays generality”; Recital 60b helps clarify the concept, stating that “generality” means the use of at least a billion parameters, when the model is trained on “a large amount of data using self-supervision at scale”) and has a remarkable capacity for expansion. Large generative AI models are an example of General Purpose AI models (Recital 60c).

The obligations imposed on providers of General Purpose AI models are limited, provided the models do not pose systemic risk. These obligations include (Art. 52c): (i) drawing up and keeping up to date technical documentation (as described in Annex IXa), available to the national competent authorities as well as to providers of AI systems who intend to integrate the General Purpose AI model into their AI systems; and (ii) taking certain measures to respect EU copyright legislation, namely putting in place a policy to identify reservations of rights and making publicly available a sufficiently detailed summary of the content used. Furthermore, providers should have an authorised representative in the EU (Art. 52ca).

The most important obligations are imposed by Article 52d on providers of General Purpose AI models with systemic risk. The definition of AI models with systemic risk in Article 52a is cast in rather broad and unsatisfactory terms: “high impact capabilities”. Fortunately, there is a presumption in Article 52a.2 that helps: systemic risk is presumed “when the cumulative amount of compute used for its training measured in floating point operations (FLOPs) is greater than 10^25”. The main additional obligations imposed on General Purpose AI models with systemic risk are: (i) performing model evaluation (including adversarial testing); (ii) assessing and mitigating systemic risks at EU level; (iii) documenting and reporting serious incidents and corrective measures; and (iv) ensuring an adequate level of cybersecurity.

Finally, a “General Purpose AI system” is “an AI system which is based on a general purpose AI model, that has the capacity to serve a variety of purposes” (Art. 3.44e). If a General Purpose AI system can be used directly by deployers for at least one purpose classified as high-risk (Art. 57a and Art. 63a), a compliance evaluation will need to be carried out.
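The Article 52a.2 compute presumption is, at its core, a single numerical threshold, which can be expressed as a one-line check. The sketch below is purely illustrative; the function and constant names are our own, not terms from the Act, and the check captures only the presumption itself, not the wider "high impact capabilities" assessment.

```python
# Presumption threshold from Article 52a.2: cumulative training compute
# above 10^25 floating point operations (FLOPs) triggers the presumption
# of systemic risk for a General Purpose AI model.
SYSTEMIC_RISK_FLOP_THRESHOLD = 1e25

def presumed_systemic_risk(training_flops: float) -> bool:
    """Return True if a General Purpose AI model falls under the
    Art. 52a.2 compute presumption of systemic risk.
    Illustrative only: this is one presumption within a broader
    'high impact capabilities' test."""
    return training_flops > SYSTEMIC_RISK_FLOP_THRESHOLD

# A model trained with ~5 x 10^25 FLOPs would fall under the presumption;
# one trained with ~3 x 10^24 FLOPs would not.
presumed_systemic_risk(5e25)   # True
presumed_systemic_risk(3e24)   # False
```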