A 2022 assessment of the scale of AI activity in UK businesses showed that the larger a company, the more likely it is to adopt AI: while only 15% of small companies have adopted at least one AI technology, 68% of large companies have already done so – and they are 17% more likely to adopt multiple AI technologies.[1]

The EU’s legislative efforts to oversee the deployment of artificial intelligence could therefore not be more pertinent. But despite the numerous regulations aiming to give SMEs an equal piece of the AI pie, many of the guidelines will remain elusive to those less skilled in IT legislative jargon – in other words: most decision-makers in these companies.


On 14 June 2023, the European Parliament approved a proposed “regulation laying down harmonised rules on artificial intelligence”, paving the way for the next steps in the EU’s legislative procedure.

In Article 3.1, the Artificial Intelligence Act (AIA) defines an “AI system”[2] such that the degree of system autonomy, i.e. its independence from human intervention, is the defining criterion. Annex I of the AIA text proposed by the Commission goes into more detail on the techniques and approaches referenced, listing machine learning, logic- and knowledge-based concepts and statistical methods[3].

The legislative proposal aims to promote the uptake of “human-centric and trustworthy” AI innovation, support international competition within the EU and protect legal and natural persons against the negative consequences of AI.

The target audience for the AIA comprises providers and deployers of AI-based systems. “Providers” are companies that develop and sell artificial intelligence, whereas “deployers” are organisations that use third-party AI systems under their own responsibility to support their business. Private users are not affected by the impending regulation.


Article 4a lists “general principles” that apply to all AI systems in the EU – regardless of their risk category (see below). They should:

  • be human-centric in their design, i.e. serve humans and be controlled by humans
  • be technically reliable, secure and robust in the event of unexpected issues
  • be developed and deployed in accordance with the EU’s data protection legislation
  • show users that they are interacting with a machine
  • ensure the accountability and explainability of interactions
  • promote diversity, non-discrimination and fairness
  • consider the environmental consequences of operation and malfunction

In addition to these general principles, the text follows a risk-based approach to classify AI systems and makes a separate distinction for general purpose AI (GPAI), which includes generative AI.

Conceptually, the proposal assumes that AI poses a potential risk to health, safety and basic rights and therefore classifies the systems into four risk groups – minimal, low, high and unacceptable – each associated with increasing obligations, culminating in an outright ban.

Minimal risk
  Characteristic: AI-based software
  Examples: computer games, spam filters
  Obligations: none beyond the general principles

Low risk
  Characteristic: AI systems with direct human-AI interaction
  Examples: chatbots
  Obligations: transparency requirement

High risk
  Characteristic: AI systems that (a) are used in products that are subject to EU product safety regulations, (b) are such products themselves or (c) are listed in Annex III of the act
  Examples: AI for critical infrastructure management; AI used for the employment and management of workers; online platforms with over 45 million users
  Obligations: transparency requirement; disclosure requirement[4]; documentation requirement; risk management; human oversight; cyber security; reporting of serious incidents; qualitative criteria for training, validation and test data sets

Unacceptable risk
  Characteristic: AI systems which, if deployed, could infringe basic rights, e.g. for recognising emotions, surveilling and manipulating humans
  Examples: social scoring; biometric identification in public; behaviour manipulation
  Obligations: fundamentally prohibited
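For a deployer making a first rough self-assessment, the tiered logic above can be sketched as a simple lookup. This is a purely illustrative model, not a legal tool: the tier names and obligation lists are taken from the table, while the function and type names are hypothetical and the ordering of checks is an assumption (strictest criterion first).

```python
from enum import Enum


class RiskTier(Enum):
    MINIMAL = "minimal"
    LOW = "low"
    HIGH = "high"
    UNACCEPTABLE = "unacceptable"


# Obligations per tier, as listed in the table above (illustrative only).
OBLIGATIONS = {
    RiskTier.MINIMAL: [],
    RiskTier.LOW: ["transparency"],
    RiskTier.HIGH: [
        "transparency", "disclosure", "documentation", "risk management",
        "human oversight", "cyber security", "incident reporting",
        "data set quality criteria",
    ],
    RiskTier.UNACCEPTABLE: ["prohibited"],
}


def classify(direct_human_interaction: bool,
             safety_regulated_or_annex_iii: bool,
             infringes_basic_rights: bool) -> RiskTier:
    """Naive ordering of the table's criteria, strictest first."""
    if infringes_basic_rights:
        return RiskTier.UNACCEPTABLE
    if safety_regulated_or_annex_iii:
        return RiskTier.HIGH
    if direct_human_interaction:
        return RiskTier.LOW
    return RiskTier.MINIMAL
```

A customer-service chatbot, for instance, would come out as `classify(True, False, False)`, i.e. low risk with only a transparency obligation; the same system embedded in an Annex III use case would move up a tier.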

Furthermore, the text includes limitations for “general purpose AI”, regardless of its risk classification, with special reference made to so-called foundation models (FMs) – such as Midjourney, ChatGPT or Bard – as examples of GPAI. The reasons for highlighting these systems are manifold: not only are they trained using large volumes of data from a broad spectrum of sources, they can also fulfil very general requests and are indefinitely expandable beyond their initial scope. Simply being classified as a foundation model will therefore mean an AI is subject to specific compliance requirements concerning transparency and disclosure[5], which also apply to downstream providers (see Article 28b, Obligations of the provider of a foundation model).

In its current version, the proposal no longer classifies foundation models as being high risk. GPAI or FM providers need only observe the strict criteria of this category if their products are integrated within high-risk AI systems.

Non-compliance with the AI Act is associated with sanctions comparable to those under the GDPR. For high-risk AI systems, the penalty can be up to €30 million or 6% of annual global turnover, whichever is higher.
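To put the exposure in concrete terms, the upper bound of the fine can be computed from turnover. This sketch assumes the GDPR-style “whichever is higher” aggregation used in the Commission proposal; the function name and the interpretation of the rule are the author's assumptions, not part of the act's text.

```python
def max_fine_eur(annual_global_turnover_eur: float) -> float:
    """Upper bound of the fine for high-risk non-compliance:
    the higher of EUR 30 million or 6% of annual global turnover
    (assumption: GDPR-style 'whichever is higher' rule)."""
    return max(30_000_000.0, 0.06 * annual_global_turnover_eur)
```

Under this reading, a group with €1 billion in turnover faces a ceiling of €60 million, while an SME with €50 million in turnover is still exposed to the full €30 million floor – which is precisely why compliance costs weigh so heavily on smaller companies.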


The EU did not only take ethical principles into account when compiling its proposal for the regulation of AI systems. The proposal also contains a number of dedicated provisions to support the operations of small and medium-sized AI innovators and deployers.

A selection of provisions aimed at SMEs under AIA, Articles 53, 55, 58 and 69[6]:

  • Giving SMEs based in the EU free priority access to AI sandboxes[7]
  • Tailoring communications relating to the participation in sandboxes to the legal and administrative capacities of SMEs
  • Organising specific awareness-raising activities concerning the application of this regulation, in consideration of SME requirements – where appropriate, establishing accessible dedicated channels for communicating with SMEs
  • Encouraging SMEs to participate in developing further AI standards
  • Taking the specific interests and needs of small-scale providers into account when setting conformity assessment fees in accordance with Art. 43 by lowering costs proportionately to their development stage, size and market share
  • Regularly assessing the certification and compliance costs for SMEs through transparent consultations with stakeholders
  • Providing an advisory forum, in addition to the AI Office, to include SMEs, which shall be balanced with regard to commercial, non-commercial and SME interests
  • Considering the unique requirements of SMEs when compiling codes of conduct

These SME-friendly guidelines have received mixed reviews, with many stakeholders citing compliance costs as a key concern. Implementing these standards, writes Alessandra Zini in her research piece ‘The AI Act: help or hindrance for SMEs?’[8], “will be costly for companies, especially SMEs. SMEs are at the forefront of innovation in Europe, but they are easily crippled by high costs of compliance, risking bearing an unsustainable burden”. And the proposal still leaves a number of questions unanswered: “Is a company that sells third-party AI as part of its own products a provider or a deployer?”, “Why is there a dual classification system that distinguishes between risks and GPAI?”, “What is the exact definition of foundation models under the act?”, “Are downstream providers subject to obligations in relation to industrial property rights?” – the list goes on. It is crucial that these critical assessments and questions be discussed in the coming trilogue between the Parliament, the Council and the Commission and that the proposal, where necessary, be improved.

If an agreement is reached this year, the act could be formally enshrined in law by mid-2024. SMEs should use the subsequent two-year transition period to get to grips with the AI Act, keep on top of its further development and find out where they stand in relation to industrial property rights, data protection, confidentiality and the terms of use of their AI-based products and services.

[1] See: (accessed on 05.07.2023)

[2] “Machine-based system that is designed to operate with varying levels of autonomy and that can, for explicit or implicit objectives, generate outputs such as predictions, recommendations, or decisions, that influence physical or virtual environments”. See: (accessed on 05.07.2023)

[3] ibid.: in detail: (a)  Machine learning approaches, including supervised, unsupervised and reinforcement learning, using a wide variety of methods including deep learning; (b)  Logic- and knowledge-based approaches, including knowledge representation, inductive (logic) programming, knowledge bases, inference and deductive engines, (symbolic) reasoning and expert systems; (c)  Statistical approaches, Bayesian estimation, search and optimization methods.

[5] For example, informing natural persons that they are engaging with an AI system, or presenting a list of all copyrighted data used for training purposes – criteria which also apply to subsequent providers or deployers. See: (accessed on 05.07.2023)

[6] See ibid.

[7] AI sandbox: “a way to connect innovators and regulators and provide a controlled environment for them to cooperate” that “facilitates the development, testing and validation of innovative AI systems with a view to ensuring compliance with the requirements of the AI Regulation.” See (accessed on 05.07.2023)

[8] See: https:// (accessed on 05.07.2023)
