
A mid-sized wholesaler of industrial supplies has used an AI-powered chatbot for two years to answer questions about delivery times. At the same time, AI automatically generates product descriptions for 12,000 items. Management is pleased with the efficiency gains – until the legal team points to the EU AI Act in June 2026. From 2 August 2026, the Act's enforcement regime applies in full: fines of up to 35 million euros or seven percent of global annual revenue for the most serious violations, and up to 15 million euros or three percent if high-risk AI systems are not operated in compliance with the rules. What many B2B shop operators underestimate: the EU AI Act does not only affect tech companies. It applies to anyone using AI systems in digital sales.
Why the EU AI Act directly affects B2B shop operators
The EU AI Act has been in force since 1 August 2024, but most companies have pushed the topic aside. According to the B2BEST Barometer 2026, 56 percent of B2B companies are watching developments without taking active measures. Among smaller companies, 25 percent see no need to act at all. That attitude is risky: from 2 August 2026, the rules for high-risk AI systems under Annex III and the transparency obligations under Article 50 will be fully enforced.
Imagine your sales team uses AI-based credit scoring to automate lending decisions. Without documentation of the training data, without a risk impact assessment, and without human oversight, you will be in breach of the law from August onwards. The result: heavy fines and serious reputational damage. Especially in B2B, where trust is the currency, such an incident can threaten the survival of the business.
The five key compliance obligations for your B2B shop
Obligation 1: Label AI-generated content

If your B2B shop uses AI chatbots, you must tell customers at the start of each interaction that they are communicating with an automated system. This follows from Article 50 of the EU AI Act. The same applies to AI-generated product text or images: users must be able to see that the content was created by a machine. The label must be clear, understandable, and immediately visible. If you are careless here, you risk fines of up to 15 million euros or three percent of annual revenue.
Obligation 2: Risk classification and documentation

Every AI system you use must be assigned to a risk class. High-risk systems under Annex III include, among other things, credit checks, lending decisions, and automated contract conclusions. These systems require full documentation: What data is processed? How was the AI trained? What error rates exist? Set up an AI register that records all systems, their purpose, and their risk class.
Obligation 3: Human oversight for high-risk systems

High-risk AI systems must not make decisions on their own. There must always be a human who can review decisions and intervene if needed. For example, if your shop uses an automated system to grant credit limits to business customers, an employee must give the final approval. This oversight must be documented, including cases where manual intervention took place.
Obligation 4: Transparency towards regulators

National supervisory authorities have the right to inspect your AI systems. On request, you must be able to prove that your systems meet the requirements of the EU AI Act. This includes technical documentation, risk impact assessments, and evidence of tests performed. Anyone who builds a compliance structure early not only avoids penalties, but also creates trust with customers and business partners.
Obligation 5: Regular review and adjustment

AI systems learn and change. What is compliant today may become problematic tomorrow. That is why the EU AI Act requires continuous monitoring and adjustment of your AI applications. Personalised recommendation engines, for example, must be checked regularly for bias and discrimination. Set up a fixed review process – ideally every quarter.
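One possible building block of such a quarterly review is a simple statistical check, sketched here: comparing positive-outcome rates (e.g. credit approvals or recommendation exposure) between two customer groups. The threshold is an assumed internal policy value, not a figure from the Act:

```python
# Hypothetical sketch of one simple fairness check: the absolute gap in
# positive-outcome rates between two groups ("demographic parity" style).
def outcome_rate_gap(outcomes_a: list[bool], outcomes_b: list[bool]) -> float:
    """Absolute difference in positive-outcome rates between two groups."""
    rate_a = sum(outcomes_a) / len(outcomes_a)
    rate_b = sum(outcomes_b) / len(outcomes_b)
    return abs(rate_a - rate_b)


# Assumed internal threshold: gaps above this trigger a manual review.
THRESHOLD = 0.10
```

A single metric like this will not prove a system is fair, but a fixed metric checked every quarter gives the review process something concrete to document.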
How to implement the obligations in practice
Start with an inventory: Which AI systems are you using today? Chatbots, product text generators, recommendation engines, credit scoring? Assign each system to a risk class. Use the checklist in Annex III of the EU AI Act or bring in external expertise.
For the transparency obligation under Article 50, a simple notice is often enough: “This chat is operated by an AI system. For complex queries, we are happy to connect you with a team member.” What matters is that this notice appears at the start of the interaction. For AI-generated product text, you can add a note such as “This text was created with the support of artificial intelligence.”
For high-risk systems, you should appoint an internal compliance manager. This person coordinates documentation, monitors human oversight, and prepares for inspections by supervisory authorities. Invest in a central AI register – this can be a simple spreadsheet listing the system, provider, function, risk class, and last review.
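The spreadsheet columns above can just as well live in code. A minimal sketch (names are illustrative) of such a register, with a helper that flags entries whose last review is older than an assumed quarterly interval:

```python
# Hypothetical sketch of a minimal AI register with the columns named in
# the text: system, provider, function, risk class, last review.
from dataclasses import dataclass
from datetime import date, timedelta


@dataclass
class RegisterEntry:
    system: str        # e.g. "Delivery-time chatbot"
    provider: str      # external vendor or internal team
    function: str      # business purpose
    risk_class: str    # e.g. "high-risk", "limited-risk", "minimal-risk"
    last_review: date


def overdue_reviews(register: list[RegisterEntry], today: date,
                    interval_days: int = 90) -> list[RegisterEntry]:
    """Return entries not reviewed within the (assumed quarterly) interval."""
    return [e for e in register
            if today - e.last_review > timedelta(days=interval_days)]
```

Whether you keep this in a spreadsheet or a small script matters less than keeping it complete and current.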
Do not underestimate the value of compliance. Customers and business partners are becoming more sensitive to data protection and AI ethics. Anyone who communicates transparently and can prove that their AI systems are operated in compliance gains a competitive advantage.
Conclusion: Act now, not in August
The EU AI Act is no longer a distant threat – from 2 August 2026, the rules will be fully enforced. Anyone not compliant by then risks heavy fines and reputational damage. The five obligations – labelling, risk classification, human oversight, transparency, and continuous review – are not an insurmountable hurdle for B2B shop operators. With a structured inventory, clear responsibilities, and a central AI register, you create the basis for compliant AI use.
Commerce Partner has worked in digital B2B sales since 1999 and supports mid-sized manufacturers and wholesalers in implementing complex compliance requirements. If you need support preparing for the EU AI Act or want to check whether your AI systems meet the requirements, book a free 30-minute strategy call now at www.commerce-partner.com/kontakt. Together, we make sure your digital sales run not only efficiently, but also in full legal compliance.
