Artificial intelligence (AI) has long ceased to be a topic for the future and is now an integral part of the modern world of work. In 2025, over 40 per cent of German companies already use AI in their businesses to optimise processes, support decision making and organise work efficiently. At the same time, the use of the technology raises new legal questions: which applications are permitted, what risks must be considered, and how can employers ensure that the use of AI is transparent, fair and, most importantly, legally compliant? Since the EU AI Act (AI Act) came into force on 1 August 2024, companies are for the first time subject to a binding legal framework that addresses these questions and sets out requirements for dealing with AI safely and responsibly.
An initial overview of AI in working life
There is a wide range of uses for AI in HR. On the recruitment side, AI can analyse staffing requirements, create vacancy notices and job profiles, read applications, and assist with pre-screening suitable candidates. AI can recommend targeted training and act as a digital mentor when onboarding new staff. It can recognise risks such as absences and potential dismissals at an early stage and support classic HR processes such as reporting, skills management, risk assessments and ensuring gender-neutral remuneration.
AI is also increasingly used in Microsoft Office applications and in tools such as Microsoft Copilot. Here, AI assists employees with drafting documents, presentations and emails, summarising complex information, automatically generating reports and analysing data. Tools such as DeepL and ChatGPT (as well as other LLMs) are also increasingly being used to translate, formulate or prepare text. In this way, AI is not just a functional tool but also a means of supporting productive work.
Statutory requirements for using AI
The use of AI is subject to various regulatory standards. The requirements of the EU AI Act, the German General Equal Treatment Act (Allgemeines Gleichbehandlungsgesetz, AGG) and the EU General Data Protection Regulation (GDPR) are to be particularly taken into consideration.
The AI Act is the first comprehensive legal act to regulate AI. It stipulates how AI is to be developed and used in the EU and follows a risk-based approach. AI systems are divided into different categories depending on the degree of risk, from minimal risk and limited risk up to high-risk systems. Systems that severely invade privacy or are incompatible with fundamental rights, such as social scoring systems which evaluate people on the basis of their behaviour or social characteristics (Art. 5 (1) (c) AI Act) or systems which recognise and evaluate emotions in the workplace (cf. Art. 5 (1) (f) AI Act), are prohibited altogether.
AI applications are often classified as high-risk (cf. Art. 6 (2) in conjunction with Annex III no. 4 AI Act), especially in the areas of HR and operations, such as applicant selection and performance monitoring.
Art. 26 AI Act sets out clear obligations. Employers must introduce risk management which ensures data quality and documents all measures, outcomes and data used.
As deployers of AI, employers must ensure a sufficient level of AI literacy. They must take measures to ensure that their staff who oversee AI applications are sufficiently trained to use and monitor such applications in accordance with Art. 4 AI Act in conjunction with Art. 3 (56) AI Act.
Art. 50 AI Act covers certain systems which can be classified as limited risk. This includes, for example, chatbots (Art. 50 (1) AI Act), AI-generated synthetic content (Art. 50 (2) AI Act) and deep fakes (Art. 50 (4) AI Act). However, transparency obligations still apply: employees must be able to recognise that they are interacting with AI, and AI-generated content such as deep fakes must be disclosed accordingly.
The AGG and the provisions of the GDPR play a central role alongside the AI Act. Employers must transparently disclose how the AI systems function, what evaluation criteria are used and whether measures have been taken to avoid algorithmic bias. Such bias frequently arises from one-sided or incorrect training data and can lead to certain groups being unknowingly and systematically disadvantaged. In the application process, this can quickly lead to breaches of the AGG, for instance if applicants are assessed more poorly because of their age, gender or origin.
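How such a distortion can be detected in practice can be illustrated with a simple comparison of selection rates. The following minimal sketch in Python works with hypothetical screening results and compares each group's selection rate with that of the best-performing group; the 80 per cent threshold is a common rule of thumb for flagging a possible disparity, not a legal threshold under the AGG.

```python
# Minimal sketch: compare selection rates of an AI pre-screening tool across
# applicant groups. The data, group labels and threshold are illustrative only.
from collections import defaultdict

# Hypothetical screening results: (group, shortlisted_by_ai)
results = [
    ("group_a", True), ("group_a", True), ("group_a", False), ("group_a", True),
    ("group_b", True), ("group_b", False), ("group_b", False), ("group_b", False),
]

totals = defaultdict(int)
selected = defaultdict(int)
for group, shortlisted in results:
    totals[group] += 1
    if shortlisted:
        selected[group] += 1

rates = {group: selected[group] / totals[group] for group in totals}
best_rate = max(rates.values())

for group, rate in rates.items():
    ratio = rate / best_rate if best_rate else 0.0
    # A selection rate well below that of the best-performing group
    # (here: below 80 % of it) should prompt a closer review of the tool.
    flag = "review" if ratio < 0.8 else "ok"
    print(f"{group}: selection rate {rate:.0%}, ratio to best {ratio:.2f} -> {flag}")
```

Such a check does not prove or rule out discrimination, but it gives employers a documented starting point for questioning the training data and evaluation criteria of the tool.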
In relation to data protection law, it must be taken into account that personal data may only be processed for clearly defined purposes and that unnecessary data collection is prohibited (cf. Art. 5 (1) (c) GDPR). Every instance of processing of personal data by an AI system must have a legal basis in accordance with Art. 6 GDPR. Employees and applicants must therefore be made aware of what data is used by AI, for what purpose this data is processed and what implications this has for the process. These transparency obligations apply to the application process as well as to ongoing employment (cf. Art. 13 GDPR).
The prohibition of automated individual decision-making in Art. 22 (1) GDPR also applies here: decisions with legal or similarly significant effect may not be made solely by automated means. AI tools may merely support the decision-making process; the final decision must always be made by a responsible person.
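What "decision support rather than automated decision-making" can look like technically is sketched below. The example is a simplified illustration in Python with hypothetical field names: the AI score is stored only as a recommendation, and an outcome affecting the candidate is recorded only once a named, responsible person has confirmed it.

```python
# Minimal sketch: an AI score is treated as a recommendation only; a decision
# with legal effect is recorded only once a responsible person confirms it.
from dataclasses import dataclass
from typing import Optional

@dataclass
class HiringRecommendation:
    candidate_id: str
    ai_score: float            # output of the AI tool (decision support only)
    ai_rationale: str          # documented for transparency purposes
    reviewer: Optional[str] = None
    final_decision: Optional[str] = None  # set exclusively by a human reviewer

def record_final_decision(rec: HiringRecommendation, reviewer: str, decision: str) -> None:
    # The final decision is only valid if a responsible person is named;
    # the AI score alone never triggers a legally relevant outcome.
    if not reviewer:
        raise ValueError("A responsible human reviewer is required.")
    rec.reviewer = reviewer
    rec.final_decision = decision

rec = HiringRecommendation("c-123", ai_score=0.82, ai_rationale="skills match, experience")
record_final_decision(rec, reviewer="HR manager", decision="invite to interview")
print(rec)
```

The design choice is deliberate: the system has no code path by which the AI output alone produces a final decision, which also makes the human review step easy to document.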
Obligations and responsibilities of companies
Companies are subject to specific obligations as a result of these legal requirements which must be adhered to when using AI in an employment context.
If AI is used as a tool, it must be assessed on a regular basis whether the Works Council has a right of co-determination under section 87 (1) no. 6 of the German Works Constitution Act (Betriebsverfassungsgesetz, BetrVG). Contrary to the wording of the provision, the objective capability of a system to monitor employee behaviour or performance is already sufficient to trigger the co-determination right, so the AI tools used by a company are generally subject to co-determination.
In certain circumstances, the use of AI applications may also be deemed a significant change to the establishment within the meaning of section 111 sentence 3 no. 4 BetrVG. This is the case if either a fundamentally new method of working is introduced or the organisational implications of the AI system represent a significant change to the organisation of the establishment.
Regardless of this, specific co-determination rights may apply to AI tools, for example under section 95 (2a) BetrVG. In addition, the Works Council has the right to be informed about the use of AI and is free to call on the advice of an expert when assessing the introduction or use of AI in accordance with section 80 (3) sentence 2 BetrVG.
Large Language Models (LLMs) such as ChatGPT, Google Gemini and Microsoft Copilot are also being integrated into operational processes. Employees use them for writing text, research, automating administrative tasks and as an assistant in everyday working life. From an employment law perspective, employers should draw up clear policies on the use of LLMs, in particular with regard to confidentiality and data protection. LLMs usually run on external servers, so there is a risk that personal data or confidential company data is processed in a prohibited manner. Without suitable safeguards, transmitting such data to LLMs may breach data protection provisions. In addition, LLM output may contain errors or be incomplete. Decisions based on it therefore always require qualified human review in order to avoid inaccuracies.
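One possible safeguard before prompts leave the company is to remove or pseudonymise obvious personal data. The following minimal sketch in Python is purely illustrative: the regular expression, the list of known names and the placeholder scheme are assumptions, and the actual call to the external service is deliberately left out; such a filter supplements, but does not replace, a proper data protection review.

```python
# Minimal sketch: pseudonymise obvious personal data before a prompt is sent to
# an external LLM. Patterns and placeholder mapping are illustrative only.
import re

def sanitise_prompt(text: str, known_names: list[str]) -> tuple[str, dict[str, str]]:
    mapping: dict[str, str] = {}

    # Replace e-mail addresses with numbered placeholders.
    for i, email in enumerate(set(re.findall(r"[\w.+-]+@[\w-]+\.[\w.]+", text)), start=1):
        placeholder = f"[EMAIL_{i}]"
        mapping[placeholder] = email
        text = text.replace(email, placeholder)

    # Replace names known from the HR system (hypothetical list).
    for i, name in enumerate(known_names, start=1):
        if name in text:
            placeholder = f"[PERSON_{i}]"
            mapping[placeholder] = name
            text = text.replace(name, placeholder)

    return text, mapping

prompt = "Please summarise the appraisal of Max Mustermann (max.mustermann@example.com)."
clean_prompt, mapping = sanitise_prompt(prompt, known_names=["Max Mustermann"])
print(clean_prompt)   # placeholders instead of personal data
# clean_prompt can now be passed to the external service; `mapping` stays in-house.
```

In practice, such filtering is typically combined with contractual safeguards, access restrictions and clear internal policies on which categories of data may be entered into an LLM at all.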
Conclusion and forecast
AI offers considerable benefits in everyday working life: more efficient work, faster application processes, more objective candidate decisions, better documentation and more efficient HR processes. The use of AI can increase employee productivity by reducing routine tasks and supporting processes in a structured manner. As a result, employees gain more time for more demanding tasks and can use their resources more effectively. It is imperative, however, that final responsibility always rests with a person: decisions based on AI results must be reviewed by qualified staff to avoid errors and misinterpretations.
AI must also always be used responsibly and transparently. Employers who draw up clear policies in good time, assess risks and design their systems sustainably can benefit from the potential of AI without running into legal difficulties.
The AI Act provides the legal framework. It is up to companies to implement it in practice and to design their use of AI to be efficient, legally compliant and reliable.
For further information, we are at your disposal at any time – we are happy to support and advise you!
