The Signal
PwC calls it the "defining challenge of 2026": agentic AI workflows are spreading faster than governance models can keep up. Gartner says governance frameworks will shift from advisory to mandatory by the end of the year. But governance does not have to mean bureaucracy. For smaller businesses, it can be as simple as knowing what data your AI touches and who is responsible for the outputs.
The Story: Responsible AI for the rest of us
If you have been following AI news this year, you have probably seen the word "governance" everywhere. It sounds like something that requires a legal department and a 60-page policy document.
It does not. At least, not for a 20-person company.
What governance actually means at your scale
AI governance is knowing three things: what AI tools your business uses, what data those tools can access, and who is accountable when something goes wrong.
That is it. You do not need an ethics board. You need a clear-eyed view of what is running in your business and a named person who pays attention.
Start with an "AI inventory" — a simple spreadsheet of every AI tool your team uses, what data it touches, and who manages it. Most businesses underestimate how many AI tools are already embedded in their workflows. Your email platform probably uses AI. Your CRM almost certainly does.
The European AI Act: what it means for you
If you serve customers in Europe — or plan to — the EU AI Act is now part of your landscape. It becomes substantially operational on 2 August 2026, with rules for high-risk AI systems, transparency obligations, and oversight requirements taking effect.
Here is what matters for a small business, simplified:
The Act classifies AI systems by risk level. Most tools you use (chatbots, content generators, scheduling assistants) fall into the "limited risk" or "minimal risk" categories. These require basic transparency — essentially, letting people know when they are interacting with AI.
High-risk systems — those making decisions about employment, credit, or critical services — face stricter requirements: documentation, human oversight, and audit trails.
The good news for SMEs: the Act was written with you in mind. SMEs are mentioned 38 times in the legislation, compared to just 7 mentions of "industry." Provisions include regulatory sandboxes with priority SME access (free of charge), simplified compliance documentation, and proportional fees. The European Commission's "AI Omnibus" package further reduces compliance costs, with estimated savings of up to €5 billion for European businesses by 2029.
GDPR and AI: three things to get right
If you are already GDPR-compliant, you have a head start. But AI introduces specific wrinkles (a sketch of a simple record covering all three follows the list):
Data processing transparency. If you feed customer data into an AI tool, your customers need to know. Update your privacy policy to be specific about which tools process what data.
Consent and legal basis. You need a lawful basis for processing personal data through AI. For most small businesses that means consent or legitimate interests, though GDPR recognises six bases in total. Document which one you rely on, and why.
Right to human review. Under GDPR, individuals have the right not to be subject to decisions based solely on automated processing where those decisions have legal or similarly significant effects. If your AI makes that kind of decision about customers (pricing, eligibility, applications), a human must be available to review on request.
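To make the documentation habit concrete, here is a minimal Python sketch of a per-tool processing record. The fields and entries are illustrative assumptions, not legal advice; your formal record of processing may need more.

```python
from dataclasses import dataclass

# One record per AI tool that touches personal data.
# Field names are illustrative, not an official GDPR schema.
@dataclass
class ProcessingRecord:
    tool: str
    personal_data: str          # what personal data the tool processes
    lawful_basis: str           # e.g. "consent" or "legitimate interests"
    justification: str          # why that basis applies, in one sentence
    human_review_contact: str   # who handles right-to-review requests

records = [
    ProcessingRecord(
        tool="Lead scoring",    # hypothetical example
        personal_data="Customer names, emails, deal history",
        lawful_basis="legitimate interests",
        justification="Scoring existing customer relationships; balancing test on file",
        human_review_contact="Priya",
    ),
]
```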
The human-in-the-loop principle
This is perhaps the most practical governance concept for any business: knowing when to let AI run freely and when a person must make the final call.
For routine, low-stakes tasks — drafting social media posts, summarising meeting notes, categorising support tickets — AI can operate with minimal oversight. Review the outputs occasionally, but the risk of a mistake is low and the consequences are manageable.
For anything customer-facing, financial, or legally significant — AI-drafted contracts, automated pricing, hiring recommendations — a human should review before it goes out the door. Every time. No exceptions.
The rule of thumb: if a mistake would embarrass you, cost you money, or harm a customer, a human checks it first.
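The rule of thumb is easy to encode. Below is a small Python sketch of a review gate; the task categories are assumptions you would replace with your own.

```python
# Task categories are illustrative; tune them to your own business.
HIGH_STAKES = {"contract_draft", "pricing_change", "hiring_recommendation"}
LOW_STAKES = {"social_post", "meeting_summary", "ticket_triage"}

def needs_human_review(task_type: str) -> bool:
    """True if a person must sign off before the AI output ships."""
    if task_type in HIGH_STAKES:
        return True       # customer-facing, financial, or legal: review every time
    if task_type in LOW_STAKES:
        return False      # spot-check occasionally instead
    return True           # unknown task types default to human review

# Example: an AI-drafted contract is always held for a named reviewer.
assert needs_human_review("contract_draft")
assert not needs_human_review("meeting_summary")
```

Note the default: anything the gate does not recognise goes to a human, which is exactly the posture you want while your tool list is still growing.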
Why this builds trust
Here is the business case for governance that often gets overlooked: customers care about this. PwC's 2025 Responsible AI survey found that 60% of executives said responsible AI practices boost ROI and efficiency, while 55% reported improvements in customer experience and innovation. Governance is not just risk mitigation — it is a signal to your customers that you take their data and their trust seriously.
Having someone on your team who owns this — who knows the inventory, reviews the policies quarterly, and stays informed — puts you ahead of the vast majority of businesses your size.
The Operator's Toolkit: The one-page AI governance checklist
Print this out. Pin it up. Review it quarterly. (If you would rather track it digitally, a sketch of the checklist as a simple record follows the list.)
What AI tools does our business use? List every tool, including AI features embedded in existing software (CRM, email, accounting). Note what each tool does and who manages it.
What data does each AI tool access? For each tool on the list, document the data scope: customer names, emails, financial records, conversation transcripts, internal documents. Be specific.
Who reviews AI-generated outputs before they reach a customer? Name the person. If no one reviews them, decide whether that is acceptable for that specific use case.
What happens when the AI gets it wrong? Define the escalation path. Who gets notified? What is the correction process? How do you communicate the error to affected customers?
How often do we review and update our AI systems? Set a minimum cadence — quarterly at least. Check for new AI tools that have crept into use, updated terms of service from vendors, and changes in regulation.
Are we compliant with GDPR data processing requirements? If you handle EU customer data, confirm that your privacy policy reflects AI processing, you have documented legal bases, and you can honour right-to-review requests.
Do we have documented consent for any customer data used with AI tools? If your AI tools process personal data, verify that you have the right permissions in place and that they are recorded.
Is there a named person responsible for AI oversight? This does not need to be a new hire. It needs to be someone who owns the checklist above.
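For those who want the checklist in a form a script can nag them about, here is a minimal Python sketch. Every answer, name, and date below is a placeholder.

```python
from datetime import date

# The checklist as a simple record; all values are placeholders.
checklist = {
    "tools_inventoried": True,
    "data_scope_documented": True,
    "output_reviewer_named": True,
    "escalation_path_defined": False,
    "gdpr_privacy_policy_updated": True,
    "consent_records_in_place": False,
    "oversight_owner": "Sam",            # hypothetical name
    "last_reviewed": date(2026, 1, 15),  # placeholder date
}

REVIEW_CADENCE_DAYS = 90  # quarterly at least

# Flag unanswered items and an overdue review.
open_items = [k for k, v in checklist.items() if v is False]
if open_items:
    print("Open governance items:", ", ".join(open_items))
if (date.today() - checklist["last_reviewed"]).days > REVIEW_CADENCE_DAYS:
    print("Quarterly AI governance review is overdue")
```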
The Radar: Three things worth knowing this week
PwC: Responsible AI moves from "talk to traction" in 2026. Their latest predictions report is clear — agentic workflows are spreading faster than governance can keep up. Companies that operationalise responsible AI now will see measurable advantages in ROI and customer trust.
European AI regulation is reshaping the continent. The EU's AI Omnibus simplifies compliance for SMEs — simplified documentation, proportional fees, and regulatory sandboxes. The Act becomes substantially operational in August 2026.
Agent monitoring is a fast-growing investment category. CB Insights reports that AI agent observability and governance is now a strategic M&A target. Even early-stage companies are being asked, "How do you govern your AI?" during fundraising. Roughly 54% of private companies in this space remain in early stages — the market is just getting started.
From the Field
A quick question: does your business have any AI governance in place right now? Even informal — someone who reviews AI outputs, or a list of tools your team uses. Hit reply and let me know — it helps me write more useful editions.
Until next time,
Francois