Apr 07, 2026

77% of Businesses Using AI Have No AI Policy

68% of small businesses use AI regularly. 77% have no written policy. Here's how to write one in two pages, plus the technical guardrails to enforce it.

8 min read

The Number: 77% Have No Policy. 68% Use AI Regularly.

A 2025 survey by the U.S. Chamber of Commerce found that 68% of small businesses use AI tools regularly. A separate survey by Cisco found that 77% of businesses using AI have no written policy governing how it's used. Those two numbers together describe one of the largest unmanaged risks in American business right now.
Your employees are using ChatGPT, Copilot, Gemini, and a dozen other tools. They're pasting customer data into prompts. They're uploading financial documents to get summaries. They're using AI to draft emails that go to clients. And in 77% of companies, nobody has told them what's okay to put into these tools and what isn't.
This article is a practical guide to building an AI policy for a mid-market business. Not a 50-page governance framework. A document your team will actually read.

What Goes Wrong Without a Policy

Samsung, 2023. Engineers pasted proprietary source code into ChatGPT to debug it. That code was now sitting on OpenAI's servers, where the consumer terms at the time allowed it to be used for training. Samsung banned ChatGPT company-wide, but the data was already gone. Samsung is a $200 billion company with a security team. If it happened to them, it can happen to you.
A law firm (anonymized), 2024. An attorney used ChatGPT to draft a legal brief. The AI cited six court cases. None of them existed. The judge sanctioned the attorney and the firm. Total cost: $15,000 in fines, a public reprimand, and a client relationship that didn't survive.
An HR department (anonymized), 2025. A hiring manager used an AI tool to screen resumes. The tool hadn't been audited for bias. It systematically ranked candidates from certain zip codes lower. The company found out when a rejected candidate filed an EEOC complaint. Settlement cost: six figures, plus the legal fees and the PR damage.
In each case, the employee wasn't acting maliciously. They were trying to work faster. Nobody told them the rules because there were no rules.

What an AI Policy Should Cover

Your AI policy needs to answer six questions. That's it. You can cover all six in two pages.
1. Which tools are approved? List the AI tools your company allows. Be specific: "ChatGPT Team or Enterprise (the business tiers), Microsoft Copilot, Grammarly Business." If a tool isn't on the list, employees shouldn't use it for work. The reason: consumer-tier AI tools often train on user inputs. Business-tier versions typically don't, but you need to verify this for each tool.
2. What data can go into AI tools? Create three categories. Green: public information, marketing copy, general questions. Yellow: internal documents, non-sensitive business data. Requires manager approval before inputting. Red: customer PII, financial data, proprietary code, legal documents, health records. Never goes into an AI tool without explicit approval from legal or compliance.
This is the most important part of the policy. If your team remembers nothing else, they need to know: never paste customer data, financial records, or proprietary code into a consumer AI tool. (A minimal sketch of how to write this tiering down appears after this list.)
3. Who reviews AI outputs before they go to customers or partners? Any AI-generated content that goes outside the company needs human review. Emails to clients, proposals, legal documents, financial reports. The AI drafts. A human reviews and sends. No exceptions for time pressure.
4. How do we evaluate new AI vendors? Before signing up for a new AI tool, someone needs to check: Where does the data go? Does the vendor train on our inputs? What's their SOC 2 status? Can we delete our data if we stop using the tool? A simple checklist that takes 30 minutes to complete. It prevents the scenario where marketing signs up for an AI tool that sends customer data to a server in a jurisdiction with no data protection laws.
5. What happens when AI makes a mistake? AI will produce wrong outputs. Your policy needs an incident response plan. Who gets notified? How do you assess the impact? What do you tell the affected party? This doesn't need to be complex. A one-page flowchart: AI error detected -> assess severity -> notify [person] -> correct the output -> document what happened.
6. What training do employees get? New hires need to learn the policy during onboarding. Existing employees need a refresher when the policy changes or when new tools get approved. The training should take 30 minutes, not a full day. Cover: approved tools, data classification, the review requirement, and how to report issues.
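A concrete way to make question 2 stick is to write the tiers down somewhere both people and tooling can reference. Here's a minimal sketch in Python; the tier names, example data types, and the default-to-red rule are our illustration, not a standard, so adapt them to your own business.

```python
from enum import Enum

class Tier(Enum):
    GREEN = "fine to use with approved AI tools"
    YELLOW = "manager approval required first"
    RED = "never enters an AI tool without legal/compliance sign-off"

# Illustrative mapping -- swap in the data types your business actually handles.
DATA_TIERS = {
    "marketing copy": Tier.GREEN,
    "public website content": Tier.GREEN,
    "internal meeting notes": Tier.YELLOW,
    "sales pipeline summaries": Tier.YELLOW,
    "customer pii": Tier.RED,
    "financial records": Tier.RED,
    "proprietary source code": Tier.RED,
    "health records": Tier.RED,
    "legal documents": Tier.RED,
}

def tier_for(data_type: str) -> Tier:
    """Look up a data type; anything not listed defaults to RED (ask first)."""
    return DATA_TIERS.get(data_type.lower(), Tier.RED)

print(tier_for("customer PII").value)  # never enters an AI tool without sign-off
```

The same mapping can seed your DLP rules and your training slides, which keeps the policy and the enforcement from drifting apart.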
We help companies build their AI policy, including the technical guardrails that enforce it. Data loss prevention rules, approved tool lists, monitoring dashboards. One call to start. Schedule your AI policy consultation.

How AI Policy Connects to Cyber Insurance

Cyber insurers are starting to ask about AI governance. In 2025, Coalition (one of the largest cyber insurance providers) added AI-related questions to their application. They want to know: Does the company have an AI use policy? Are employees trained on it? Is there monitoring for AI-related data exposure?
Companies that can answer "yes" to all three get better rates. Companies that can't may face higher premiums, exclusions for AI-related incidents, or difficulty getting coverage at all. The cyber insurance market has already gone through this cycle with ransomware. Five years ago, nobody asked about backup policies. Now it's a standard question. AI governance is following the same path.
If you're paying for cyber insurance (and you should be), having an AI policy isn't just about risk management. It directly affects your premiums.

The EU AI Act and US Companies

The EU AI Act entered into force in 2024, with its requirements phasing in over the following years. If your company has European customers, employees, or partners, parts of this regulation apply to you regardless of where you're headquartered. The same extraterritorial principle that made GDPR relevant to US companies applies here.
The Act classifies AI systems by risk level. High-risk AI (used in employment, credit scoring, insurance, education) requires documentation, human oversight, and transparency about how the AI makes decisions. If you use AI to screen job applicants, score customer credit risk, or make insurance recommendations, and any of those people are in the EU, you need to comply.
For most mid-market US companies, this means two things. First: know which of your AI tools fall into the high-risk category. Second: document how they work and who oversees them. Your AI policy should include a section that maps your AI tools to the Act's risk categories. Even if you don't have EU customers today, you might tomorrow. It's easier to build the documentation now than to retroactively create it under regulatory pressure.

Building the Technical Guardrails

A policy on paper is a start. Technical enforcement makes it real. Here's what we build for clients.
Data Loss Prevention (DLP) rules. Configure your email and cloud platforms to detect when employees send sensitive data patterns (SSNs, credit card numbers, customer IDs) to AI tool domains. Microsoft Purview and Google Workspace DLP both support custom rules. When someone tries to paste a credit card number into ChatGPT, the system blocks it and logs the attempt.
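The rules themselves are configured in the Purview or Workspace admin console rather than in code, but under the hood they amount to pattern matching. Here's a rough Python sketch of the kind of check such a rule performs; the regexes are simplified and the function names are ours, not any vendor's API.

```python
import re

# Simplified detectors -- production DLP classifiers are more precise than this.
SSN_RE = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")
CARD_RE = re.compile(r"\b(?:\d[ -]?){13,16}\b")

def luhn_ok(number: str) -> bool:
    """Luhn checksum, used to cut false positives on card-like numbers."""
    digits = [int(d) for d in re.sub(r"\D", "", number)][::-1]
    total = sum(d if i % 2 == 0 else (d * 2 - 9 if d * 2 > 9 else d * 2)
                for i, d in enumerate(digits))
    return total % 10 == 0

def blocked_reasons(prompt: str) -> list[str]:
    """Reasons to block and log a prompt before it reaches an AI tool's domain."""
    reasons = []
    if SSN_RE.search(prompt):
        reasons.append("possible SSN")
    if any(luhn_ok(m.group()) for m in CARD_RE.finditer(prompt)):
        reasons.append("possible credit card number")
    return reasons

print(blocked_reasons("Summarize: card 4111 1111 1111 1111, SSN 123-45-6789"))
```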
Approved tool enforcement. Use your MDM or endpoint management to control which AI applications can be installed and which browser extensions are allowed. Block unapproved tools at the network level. This sounds draconian, but it's the same approach companies use for any software. You wouldn't let employees install random accounting software. AI tools deserve the same governance.
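Enforcement lives in your MDM profiles and DNS or proxy rules, not in application code, but it helps to keep the approved-tool list as one versioned artifact that the written policy, the MDM configuration, and the blocklist all reference. A hypothetical sketch of that single source of truth; the domains are examples, not recommendations:

```python
# Hypothetical allowlist -- keep one copy under version control and generate
# the proxy/DNS rules and the policy appendix from it.
APPROVED_AI_DOMAINS = {
    "chatgpt.com",             # business-tier accounts only
    "copilot.microsoft.com",
    "grammarly.com",
}

def is_approved(requested_domain: str) -> bool:
    """Allow exact matches or subdomains of an approved AI tool."""
    domain = requested_domain.lower().rstrip(".")
    return any(domain == d or domain.endswith("." + d) for d in APPROVED_AI_DOMAINS)

print(is_approved("random-ai-notetaker.io"))  # False -> block and log
```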
Audit logging. Track which AI tools employees use, how often, and who uses them. Most business-tier AI tools provide admin dashboards with usage data. Aggregate this into your existing security monitoring. You don't need to read every prompt. You need to know the patterns: is someone in accounting sending 200 queries a day to an AI tool? That's worth investigating.
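Most business-tier tools export usage as CSV or through an admin API, and the formats differ, so the column names below are assumptions. A short sketch of the aggregation that surfaces the "200 queries a day" pattern:

```python
import csv
from collections import Counter

DAILY_QUERY_THRESHOLD = 200

def flag_heavy_users(usage_csv_path: str) -> dict[tuple[str, str], int]:
    """Return (user, date) pairs whose query count exceeds the daily threshold.

    Assumes one row per query with 'user' and 'date' columns; adjust the
    column names to whatever your vendor's export actually uses.
    """
    counts: Counter = Counter()
    with open(usage_csv_path, newline="") as f:
        for row in csv.DictReader(f):
            counts[(row["user"], row["date"])] += 1
    return {key: n for key, n in counts.items() if n > DAILY_QUERY_THRESHOLD}

# Feed the result into your existing security monitoring or a weekly review.
```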
Output review workflows. For high-stakes AI outputs (customer communications, financial analysis, legal drafts), build a review queue. The AI generates the draft. It goes into a queue. A human reviews and approves before it goes out. We build these as lightweight apps that plug into your existing workflow tools.
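The queue itself doesn't need to be elaborate; the property that matters is that nothing AI-generated goes out without a recorded human approval. A minimal in-memory sketch of that shape (a real version would persist to your workflow tool or a database, and the field names here are just our illustration):

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class Draft:
    content: str
    source_tool: str
    status: str = "pending"            # pending -> approved or rejected
    reviewer: str | None = None
    reviewed_at: datetime | None = None

@dataclass
class ReviewQueue:
    drafts: list[Draft] = field(default_factory=list)

    def submit(self, content: str, source_tool: str) -> Draft:
        draft = Draft(content=content, source_tool=source_tool)
        self.drafts.append(draft)
        return draft

    def approve(self, draft: Draft, reviewer: str) -> str:
        """Only approved drafts are ever released, and the approval is recorded."""
        draft.status = "approved"
        draft.reviewer = reviewer
        draft.reviewed_at = datetime.now(timezone.utc)
        return draft.content

queue = ReviewQueue()
d = queue.submit("Dear client, ...", source_tool="Copilot")
# A human reads d.content, edits if needed, then releases it:
queue.approve(d, reviewer="jane.doe")
```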

The One-Page Version

If you take nothing else from this article, print this and pin it in the break room:
  • Approved tools: [your list here]. Everything else needs IT approval first.
  • Never put into AI tools: customer personal data, financial records, proprietary code, health records, legal documents.
  • Always review before sending: anything AI-generated that goes to customers, partners, or regulators.
  • When AI gets it wrong: stop, tell your manager, document what happened.
  • Questions? Ask [your IT/compliance contact].
That's five bullet points. Your team can read it in 60 seconds. It covers 90% of the risk.

What to Do This Week

Pick one person to own the AI policy. Doesn't need to be the CTO. Could be the operations manager, the head of IT, or even a senior manager who's tech-literate and organized. Give them a week to draft the two-page policy using the six questions above. Circulate it. Get sign-off. Roll it out with a 30-minute all-hands training. You now have an AI policy. You're ahead of 77% of companies.

Need help building your AI policy?

We write the policy with you and build the technical guardrails that enforce it: DLP rules, tool governance, audit logging, and review workflows. One call to start.