Good Faith Compliance

Fast, Practical AI Governance for Small Business, Public Trust, and Responsible AI Use

The Best First Step: Start With Good Faith Compliance

The fastest and easiest first step is Good Faith Compliance.

If your organization is already using AI, or if your people may be using AI without clear rules, Good Faith Compliance gives you a practical starting point that can be put in place quickly.

It creates a static first defense: policy, training, oversight expectations, verification, documentation, and a clear good-faith position that shows your organization took reasonable steps to govern AI use before problems arise.

For many small businesses, law firms, nonprofits, employers, and public-trust organizations, that first step can address a major share of immediate AI guidance and governance needs.

We estimate that Good Faith Compliance can address approximately 85% of the immediate AI governance, AI guidance, and liability-management concerns most small organizations are facing right now.

That does not mean the work is finished.

It means the organization is no longer standing there with nothing.

It has a position.

It has rules.

It has training.

It has evidence.

It has a better answer if a regulator, insurer, client, employee, board, funder, or member of the public asks:

“What did you do to govern AI use?”

The answer should not be:

“We were thinking about it.”

The answer should be:

“We saw the risk, created rules, trained our people, required oversight, and documented reasonable care.”

That is what Good Faith Compliance is built to do.

For a More Comprehensive Start: Choose Good Faith Compliance Pro

For organizations that want a stronger starting position, Good Faith Compliance Pro adds ongoing support, monitoring, verification, intelligence, and oversight.

Good Faith Compliance Basic gives you the first defense.

Good Faith Compliance Pro helps keep that defense current.

That matters because AI governance is not static.

AI tools change.

Employee behavior changes.

Regulations change.

Court expectations change.

Professional guidance changes.

Client questions change.

Insurance standards change.

Public expectations change.

A static first position is important, but it cannot stay static forever.

As the AI environment changes, your compliance and liability-management system needs to respond.

More importantly, it needs to anticipate.

Good Faith Compliance Pro is designed for organizations that want to move beyond the first step and maintain a stronger, more adaptive position through:

  • DHITL oversight support

  • GovernSeal verification

  • GrayZone monitoring

  • The Practical AI Brief twice per month

  • Ongoing governance updates

  • Enhanced AI compliance evidence documentation

  • Stronger public-facing trust language

  • AI Guardrail Breach awareness and prevention support

  • GAP© — Guardrail Accountability Protocol alignment

Good Faith Compliance is the fast first move.

Good Faith Compliance Pro is the stronger continuing position.

Why Acting Now Matters

AI is already inside organizations.

It is being used in emails, reports, summaries, research, marketing, hiring support, client communication, internal planning, compliance work, document review, and decision support.

Some of that use is approved.

Some of it is not.

That is why AI governance for small business matters now.

The real question is no longer:

“Will our organization use AI?”

The real question is:

“Can we show that AI is being used with policy, oversight, verification, documentation, and human accountability in place?”

Good Faith Compliance helps your organization move quickly from exposure to structure.

It is not a claim of perfection.

It is a defensible starting position.

It is better to act sooner by choice than later through a regulator, insurer, client complaint, employment dispute, privacy incident, AI error, or loss of public confidence.

Close the GAP before AI creates exposure.

Our Position

Trusted By Heroes has been early in anticipating the direction of AI governance, AI guidance, shadow AI, human oversight, documentation, and public trust.

To this point, our concerns have not been theoretical.

They have been borne out by the direction of the market, regulators, professional bodies, courts, insurers, and the public conversation around responsible AI use.

We are happy to help organizations start now, strengthen their position, and avoid waiting until pressure forces the conversation.

The best move is not panic.

The best move is not delay.

The best move is to put a practical first defense in place, then improve it as the environment changes.

That is the purpose of Good Faith Compliance.

A Message From Bob McTaggart

Video Placement: “Who We Are and How We Do What We Do”

Before you decide whether Good Faith Compliance is right for your organization, hear directly from Bob McTaggart, founder of Trusted By Heroes.

In this short video, Bob explains who we are, why Trusted By Heroes was built, and how we approach AI governance differently.

We are not selling AI tools.

We are helping organizations put structure, oversight, verification, and evidence around AI use before unmanaged AI becomes a legal, regulatory, operational, employment, insurance, or public-trust problem.

Good Faith Compliance is built for organizations that know AI is already being used and want a practical first step: policy, training, human oversight, verification, documentation, and a defensible record of reasonable care.

Suggested video title:

Who We Are and How We Do What We Do

Suggested video subtitle:

Veteran-led AI governance, public trust, and practical compliance before AI creates exposure.

Suggested video talking points:

  • Who Bob is: veteran, founder, lived-experience builder, and AI governance advisor

  • Why Trusted By Heroes exists

  • Why AI governance must start before a breach

  • What Good Faith Compliance does

  • How GAP© — Guardrail Accountability Protocol fits in

  • Why human oversight must have authority

  • How GovernSeal, DHITL, Audit Anchor, and GrayZone work together

  • Why public trust requires visible care, real oversight, and human accountability

  • How organizations can start with a defensible first step

What Is Good Faith Compliance?

Good Faith Compliance is a practical AI governance package for organizations that need structure without unnecessary complexity.

It helps establish:

  • AI governance policy

  • AI usage policy

  • Staff AI guidance

  • Human-in-the-loop oversight

  • DHITL review principles

  • AI compliance evidence documentation

  • GovernSeal verification

  • Public-facing trust language

  • Governance improvement roadmap

  • Certificate-style documentation of reasonable governance steps

The purpose is simple:

Show that the organization recognized AI risk, created rules, trained people, required oversight, verified key records, and documented reasonable care.

Good Faith Compliance is not about pretending AI risk is solved.

It is about creating a clear, responsible, evidence-backed first position.

What Is an AI Usage Policy?

One of the most important questions leaders are asking is:

What is an AI usage policy?

An AI usage policy is a plain-language rule set that explains how people inside an organization may use AI tools.

It should answer:

  • Who may use AI?

  • Which AI tools are approved?

  • What information must never be entered into AI?

  • What tasks are allowed?

  • What tasks are prohibited?

  • What requires human review?

  • When must AI use be escalated?

  • Who owns the final decision?

  • What records must be kept?

Without an AI usage policy, people guess.

When people guess, risk grows.

Good Faith Compliance replaces guessing with clear AI guidance.

AI Governance vs AI Ethics

There is a difference between AI governance and AI ethics.

AI ethics speaks to values.

It asks:

  • Is this fair?

  • Is this responsible?

  • Is this transparent?

  • Should AI be used this way?

AI governance turns those values into operating controls.

It asks:

  • What policy exists?

  • Who is responsible?

  • What oversight is required?

  • What evidence is kept?

  • What happens when risk appears?

  • Who has authority to stop the action?

Ethics explains what an organization believes.

Governance shows what the organization actually did.

Good Faith Compliance is built around governance, not empty statements.

Why AI Governance Matters Now

AI rules, expectations, and liability standards are moving quickly.

Professional bodies, regulators, courts, insurers, clients, employers, and the public are all asking harder questions about how AI is being used, who is supervising it, what information is being exposed, and whether organizations can prove they acted responsibly.

For law firms, ABA guidance has already made AI use a professional responsibility issue involving competence, confidentiality, client communication, supervision, verification, and human review. AI is no longer just a technology decision. It is becoming a governance, ethics, and liability issue.

For organizations exposed to European markets or global clients, EU AI Act expectations are already shaping how businesses think about risk classification, transparency, documentation, human oversight, and accountability. Even organizations outside Europe are being pulled toward higher standards because customers, partners, vendors, and insurers are starting to expect proof.

In the United States, the situation is active but unsettled. Federal agencies, state governments, courts, bar associations, employers, and industry regulators are all moving at different speeds. That creates risk because businesses may not have one single rulebook, but they are still expected to show reasonable care, good-faith governance, and defensible decision-making.

Canada is moving in the same direction, with growing pressure around privacy, automated decision-making, workplace use, public trust, and responsible AI adoption. The legal landscape is still developing, but waiting for perfect clarity is not a safe strategy.

That is the real issue.

AI governance now matters because the environment is turbulent, fragmented, and changing faster than most organizations can comfortably manage. A static policy is a start, but it will not be enough forever. Organizations need a defensible first position now, and a system that can evolve as regulation, liability, technology, and public expectations continue to change.

For employers, AI enforcement risks are growing around hiring, workplace decisions, privacy, employee data, discrimination, and automated decision support.

The direction is clear:

AI use must be governed, documented, and kept under human accountability.

Good Faith Compliance helps organizations prepare for that reality with a practical first step.

Why Public Trust Matters

Rory Cory, a military museum director, summarized the issue clearly:

“Museums operate in the public trust. Like courts, post offices, and schools, we are expected to meet a higher standard of care, accuracy, and accountability. Good-Faith Compliance creates a clear defensive position by showing we took reasonable steps to govern AI use before problems arise. It puts policy, oversight, verification, and documentation in place. This reduces liability exposure and reassures the public that AI is being used responsibly. It also sets the right example for safe, controlled, and accountable AI use.”

That public trust standard applies beyond museums.

It applies to small businesses, law firms, nonprofits, schools, associations, employers, professional firms, first responder organizations, veteran organizations, public-facing institutions, and any group people rely on.

The public trust mantra is simple:

Care must be visible. Oversight must be real. Accountability must remain human.

AI does not remove responsibility.

It raises the need for structure.

Reference: Our AI Governance Policy

TrustedByHeroes.com operates under an AI Governance Policy that reflects a good-faith governance posture.

The policy is built around:

  • Responsible AI use

  • Transparency

  • Human accountability

  • Human-in-the-loop oversight

  • Data responsibility

  • AI inventory

  • Risk classification

  • Review and validation

  • Escalation protocols

  • Evidence and documentation

  • Continuous improvement

The policy includes the core operating principle:

Oversight must have authority at the point of execution. If it cannot act, it is not governance.

Good Faith Compliance turns that principle into a usable package for organizations that need to start now.

GAP© — Guardrail Accountability Protocol

GAP© — Guardrail Accountability Protocol is the practical governance protocol behind Good Faith Compliance.

GAP helps prevent AI Guardrail Breaches before they become legal, regulatory, operational, employment, insurance, or public-trust exposure.

The best positioning line is:

AI Guardrail Breach names the risk. Guardrail Accountability Protocol provides the response.

The strongest sales version is:

Shadow AI is a Guardrail Breach. Good Faith Compliance is the first step in the Guardrail Accountability Protocol.

The public-facing call to action is:

Close the GAP before AI creates exposure.

AI Guardrail Breach: The Problem

An AI Guardrail Breach occurs when AI use crosses outside approved policy, oversight, documentation, authority, or evidence requirements.

It may include:

  • Shadow AI

  • Unauthorized AI tools

  • Client or employee data leakage

  • Privilege or confidentiality exposure

  • Hallucinated or unverified outputs

  • AI-generated work with no review trail

  • Unmanaged automation

  • No proof of human oversight

  • AI use outside approved policy

  • AI decisions without assigned human authority

An AI Guardrail Breach does not always begin with bad intent.

Often, it begins with speed, pressure, convenience, unclear rules, or employees trying to be helpful without knowing where the line is.

That is why policy alone is not enough.

Organizations need a working response system.

Guardrail Accountability Protocol: The Response

The Guardrail Accountability Protocol is the system for keeping AI use inside approved policy, human oversight, documentation, authority, and evidence requirements.

It is designed to answer the question that comes after an AI problem:

What did your organization do to prevent this?

Good Faith Compliance is the first-step package inside that protocol.

It helps the organization establish the basics:

  • AI governance policy

  • AI usage policy

  • Staff AI guidance

  • Training and awareness

  • Human-in-the-loop oversight

  • DHITL authority structure

  • GovernSeal verification

  • AI compliance evidence documentation

  • Public-facing trust language where appropriate

The purpose is not to claim AI risk is gone.

The purpose is to show that the organization acted responsibly before the problem occurred.

How the Pieces Work Together

Concept | Role | Plain Meaning
AI Guardrail Breach | The risk/event | AI use crossed the line
GAP© — Guardrail Accountability Protocol | The doctrine/system | How we prevent, prove, and correct it
Good Faith Compliance | The first-step package | Policy, training, governance, and evidence
GovernSeal | Proof layer | Verifies and records governance documents
DHITL | Human oversight layer | Makes sure people with authority review the right decisions
Audit Anchor | Execution layer | Creates stronger evidence at the point of action
GrayZone | Monitoring and visibility layer | Helps identify governance exposure and emerging risk signals
The Practical AI Brief | Ongoing intelligence layer | Keeps organizations current through practical AI governance updates

What the Good Faith Compliance Package Includes

1. AI Governance Policy

The AI governance policy defines the organization’s responsible AI position.

It establishes:

  • Acceptable AI use

  • Prohibited AI use

  • Human review requirements

  • Data responsibility

  • Risk classification

  • Escalation rules

  • Documentation expectations

  • Continuous improvement

This policy is the foundation for responsible AI in compliance, operations, communication, and decision support.

2. AI Usage Policy and AI Guidance

The AI usage policy translates governance into practical staff guidance.

It helps people understand:

  • What they can do with AI

  • What they must not do with AI

  • What information is restricted

  • When to verify AI output

  • When to disclose or escalate AI use

  • When human review is required

  • Who is accountable for the final decision

This is especially important for small teams where AI use may spread quickly through informal habits.

3. Training and Awareness

Policy alone is not enough.

Good Faith Compliance includes practical awareness training.

Training helps staff recognize:

  • AI hallucinations

  • Confidentiality risk

  • Privacy risk

  • Bias

  • Overreliance

  • Shadow AI

  • Public-facing error

  • Employment-related AI risk

  • When to involve a human in the loop

  • When to escalate a concern

The goal is not to turn staff into AI experts.

The goal is to help them use AI safely and responsibly.

Training creates the first line of defense.

4. HITL and Human-in-the-Loop Oversight

HITL stands for human in the loop.

But human-in-the-loop oversight must be real.

It is not enough for a person to casually glance at an AI output after the fact. The human must have authority to review, challenge, stop, correct, or escalate before AI output becomes action.

Good Faith Compliance helps define:

  • What requires HITL review

  • Who performs the review

  • When review must happen

  • What must be verified

  • When escalation is required

  • Who remains accountable

The rule is simple:

A human in the loop without authority is not governance.

5. DHITL — Distributed Human-in-the-Loop Oversight

Trusted By Heroes uses a stronger operating concept: DHITL — Distributed Human-in-the-Loop oversight.

DHITL recognizes that one person cannot carry every AI risk alone.

Oversight must be supported through:

  • Clear roles

  • Defined review points

  • Escalation pathways

  • Peer support

  • Training

  • Human accountability

  • Authority at the point of execution

DHITL helps define:

  • Who reviews AI outputs

  • What type of AI work requires review

  • When a second review is required

  • When escalation is necessary

  • Who has authority to halt or override

  • What gets documented

  • How responsibility remains human

This connects naturally to the service-world understanding that people perform better when supported by structure, teammates, training, and clear authority.

For veteran and first responder organizations, this is also where first responder business resources and peer support thinking meet AI governance: strong systems protect people before failure occurs.

6. GovernSeal Verification

GovernSeal supports the evidence side of Good Faith Compliance.

It helps create verifiable records for key governance documents, statements, and compliance materials.

GovernSeal may support:

  • Document verification

  • Authorship records

  • Ownership records

  • Version control

  • Certificate-style proof lines

  • Public-facing verification language

  • Evidence that a governance document existed at a point in time

This matters because AI governance is not only about having a policy.

It is about proving the policy existed, was controlled, and formed part of a real governance effort.

GovernSeal helps move an organization from:

“We had a policy somewhere.”

to:

“Here is the verified governance record.”

7. AI Compliance Evidence Documentation

Strong governance needs evidence.

AI compliance evidence documentation may include:

  • Policy versions

  • Staff acknowledgments

  • Training records

  • AI use inventories

  • Review logs

  • Escalation records

  • Public-facing governance statements

  • GovernSeal verification records

  • Certificate-style completion records

This evidence helps show that the organization acted before a problem occurred.

The principle is simple:

If you cannot show it, it may not help you.

8. Third-Party Certificate-Style Governance Support

Good Faith Compliance may include third-party certificate-style support showing that the organization completed a baseline AI governance package.

This is not regulatory approval.

It is not legal advice.

It is not a guarantee.

It is a structured record showing that the organization took reasonable steps toward AI governance.

This may be useful for:

  • Clients

  • Boards

  • Regulators

  • Insurers

  • Employers

  • Public-facing reassurance

  • Professional partners

  • Community stakeholders

For many organizations comparing the best AI compliance programs for SMBs, the most useful first step is not a massive enterprise platform. It is a practical package that creates policy, oversight, verification, and documentation.

9. Audit Anchor Execution Layer

Audit Anchor is the stronger execution-layer component of the broader Trusted By Heroes governance system.

Where Good Faith Compliance creates the first-step policy, training, oversight, and documentation baseline, Audit Anchor is designed to strengthen evidence at the point where AI-assisted output becomes action.

Audit Anchor supports the principle:

Oversight must have authority at the point of execution.

It is intended to help organizations move beyond after-the-fact policy and toward stronger evidence at the point of decision, action, review, or approval.

In simple terms:

  • Good Faith Compliance creates the governance baseline.

  • GovernSeal supports proof and verification of governance records.

  • DHITL defines human authority and review.

  • Audit Anchor strengthens evidence at the point of action.

10. Public-Facing Trust Language

Some organizations need internal governance only.

Others also need public-facing reassurance.

Good Faith Compliance can help create plain-language public statements such as:

  • Responsible AI use statement

  • AI governance commitment

  • Public trust statement

  • Human oversight statement

  • Data responsibility statement

  • Good Faith Compliance participation statement

This is especially useful for organizations that operate in the public trust.

Public-facing language must be careful.

It should not overclaim.

It should not imply regulatory approval.

It should not suggest AI is risk-free.

It should say what matters:

We have taken reasonable steps to govern AI use through policy, oversight, verification, documentation, and human accountability.

Good Faith Compliance Packages

Good Faith Compliance is designed to meet organizations where they are.

Some organizations need a simple first step.

Others need ongoing monitoring, stronger oversight, proof layers, and continuing governance intelligence.

Good Faith Compliance Basic

First-Step AI Governance Baseline

Good Faith Compliance Basic is for organizations that need a practical AI governance starting point.

It helps establish the foundation:

  • AI governance policy

  • AI usage policy

  • Staff AI guidance

  • Basic AI awareness training

  • Human-in-the-loop expectations

  • AI risk classification

  • Escalation guidance

  • AI compliance evidence documentation

  • Public-facing responsible AI language where appropriate

  • Good-faith governance position statement

This package helps the organization answer:

What did you put in place to govern AI use before problems arose?

Good Faith Compliance Basic creates the first defensive position.

Good Faith Compliance Pro

Governance, Oversight, Monitoring, Verification, and Intelligence

Good Faith Compliance Pro is for organizations that need more than a starting policy.

It is designed for organizations that want ongoing AI governance support, stronger oversight, public-trust reassurance, monitored risk visibility, and continuing intelligence in a rapidly changing environment.

Good Faith Compliance Pro includes the Good Faith Compliance baseline, plus:

  • DHITL oversight support

  • GovernSeal verification

  • GrayZone monitoring

  • The Practical AI Brief twice per month

  • Ongoing AI governance updates

  • Enhanced AI compliance evidence documentation

  • Stronger public-facing trust language

  • AI Guardrail Breach awareness and prevention support

  • GAP© — Guardrail Accountability Protocol alignment

Good Faith Compliance Basic creates the starting point.

Good Faith Compliance Pro helps maintain the position.

That matters because AI governance is not a one-time project.

A policy written once and never revisited will eventually fall behind the real world.

Good Faith Compliance Pro is designed for organizations that want ongoing support around policy, oversight, verification, monitoring, and intelligence.

It helps organizations stay ahead of:

  • Shadow AI

  • AI Guardrail Breaches

  • Staff misuse or uncertainty

  • AI tool changes

  • Regulatory pressure

  • Client expectations

  • Insurance questions

  • Employment-related AI exposure

  • Public trust concerns

  • Weak documentation

  • Loss of confidence after unmanaged AI use

The goal is simple:

Stay prepared before AI creates exposure.

GFC Pro Components

DHITL Oversight Support

DHITL — Distributed Human-in-the-Loop oversight strengthens the human review layer.

It helps ensure AI oversight is not symbolic.

A human in the loop must have authority to review, question, stop, correct, or escalate before AI output becomes action.

DHITL helps define:

  • Who reviews AI outputs

  • What must be reviewed

  • When escalation is required

  • Who has halt authority

  • What must be documented

  • How oversight responsibility is distributed

  • How the organization avoids relying on one unsupported reviewer

The principle is simple:

Oversight must have authority at the point of execution. If it cannot act, it is not governance.

GovernSeal Verification

GovernSeal supports the proof layer.

It provides the same verification capabilities described in the package section above: document verification, authorship and ownership records, version control, certificate-style proof lines, public-facing verification language, and evidence that a governance document existed at a point in time.

It helps move the organization from "We had a policy somewhere." to "Here is the verified governance record."

GrayZone Monitoring

GrayZone monitoring supports ongoing governance visibility.

AI risk does not stay fixed after a policy is written.

Tools change. Staff behavior changes. Regulations change. Professional guidance changes. Public expectations change. Client questions change. Insurance expectations change.

GrayZone monitoring helps track governance exposure signals and identify where the organization may need to strengthen its position.

This may include monitoring or review around:

  • AI governance maturity

  • Shadow AI risk

  • Policy gaps

  • Documentation gaps

  • Oversight weaknesses

  • Public-facing AI exposure

  • Employer AI risk

  • Professional guidance changes

  • Regulatory pressure

  • Governance readiness indicators

GrayZone monitoring is not a guarantee that every risk is eliminated.

It is an early-warning and visibility layer.

It helps leadership see where attention is needed before the issue becomes a breach, complaint, insurer concern, regulator question, or public-confidence problem.

Ongoing AI Governance Intelligence

AI governance is not standing still.

Rules are changing. Expectations are changing. Tools are changing. Client questions are changing. Regulators, courts, insurers, employers, and professional bodies are all moving toward one basic expectation:

If your organization uses AI, you need to show how it is governed.

That is why Good Faith Compliance Pro includes access to The Practical AI Brief, our twice-monthly AI governance briefing for small firms, professional organizations, employers, and public-trust institutions.

The Practical AI Brief is not hype.

It is not an AI tool newsletter.

It is a practical intelligence briefing focused on helping organizations understand what is changing, what matters, and what should be watched.

Each issue is designed to help leaders stay current on:

  • AI governance trends

  • AI compliance developments

  • ABA and professional guidance

  • EU AI and regulatory pressure

  • AI enforcement risks for employers

  • Shadow AI and AI Guardrail Breaches

  • HITL and human-in-the-loop oversight

  • DHITL doctrine and practical oversight

  • AI tools and practical use cases

  • Risk mitigation

  • Product comparisons

  • Best practices for safe AI integration

  • Predictions and early warning signals

  • Practical steps organizations can take now

The tone is practical, neutral, and useful.

The goal is to help organizations reduce liability exposure, improve governance through best practices, and make better decisions in a fast-changing environment.

The Practical AI Brief — Twice-Monthly AI Governance Briefing

Good Faith Compliance Pro includes The Practical AI Brief, delivered twice per month.

This briefing is built for practical leaders, not AI hype followers.

It covers the same practical ground as the intelligence briefing described above: governance and compliance developments, professional and regulatory guidance, Shadow AI and AI Guardrail Breaches, human oversight practice, AI tools, risk mitigation, and practical steps organizations can take now.

The Practical AI Brief is designed to provide ongoing value after the initial compliance package is complete.

It keeps organizations informed, current, and better prepared as the AI governance environment continues to mature.

Package Comparison

Feature | Good Faith Compliance Basic | Good Faith Compliance Pro
AI governance policy | Included | Included
AI usage policy | Included | Included
Staff AI guidance | Included | Included
Basic AI awareness training | Included | Included
Human-in-the-loop expectations | Included | Included
AI compliance evidence documentation | Included | Enhanced
Public-facing responsible AI language | Optional | Enhanced
GAP© — Guardrail Accountability Protocol alignment | Included | Included
DHITL oversight support | Not included | Included
GovernSeal verification | Optional add-on | Included
GrayZone monitoring | Not included | Included
The Practical AI Brief newsletter | Not included | Included twice monthly
Ongoing governance updates | Limited | Included
Best suited for | First-step governance | Ongoing governance and monitoring

Implementing AI Governance Step by Step

Good Faith Compliance gives organizations a practical path for implementing AI governance step by step.

Step 1: Identify AI Use

Find out where AI is already being used.

This may include internal work, public-facing communication, HR, marketing, client service, research, document drafting, customer support, and decision support.

Step 2: Classify Risk

Separate low-risk, moderate-risk, and high-risk use cases.

Risk Level | Example
Low Risk | Internal brainstorming, formatting, general research support
Moderate Risk | Drafting operational content, client-facing material, reports
High Risk | Legal, financial, employment, safety, privacy, or decision-impacting use

Step 3: Create AI Policy

Put written rules in place.

The organization should define approved use, restricted use, prohibited use, human review requirements, escalation triggers, and documentation expectations.

Step 4: Train People

Make sure staff understand the rules and risks.

Training should be practical, plain-language, and tied to real work.

Step 5: Define Human Oversight

Identify when human review is required.

Make sure the human in the loop has authority to stop, correct, approve, or escalate.

Step 6: Verify and Document

Create evidence that reasonable care was taken.

This may include policy records, training records, acknowledgment records, oversight records, escalation records, and GovernSeal verification.
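To show the shape of the evidence in Step 6, here is a minimal sketch of how training and oversight records could be logged so they can be produced on request. The field names, dates, and entries are hypothetical examples, not a required format:

```python
import csv
import io
from datetime import date

# Hypothetical compliance-evidence records; fields and entries are illustrative only.
FIELDS = ["date", "record_type", "person", "detail"]
records = [
    {"date": str(date(2025, 1, 15)), "record_type": "training",
     "person": "J. Smith", "detail": "Completed AI awareness training"},
    {"date": str(date(2025, 2, 3)), "record_type": "oversight",
     "person": "A. Jones", "detail": "Reviewed and approved client-facing draft"},
]

# Write the evidence log as CSV so it survives staff turnover and tool changes.
buf = io.StringIO()
writer = csv.DictWriter(buf, fieldnames=FIELDS)
writer.writeheader()
writer.writerows(records)
print(buf.getvalue())
```

Any durable, dated record keeps the same value; the format matters far less than the habit of creating the evidence at the time the review happens.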

Step 7: Improve Over Time

Update the governance program as AI tools, regulations, client expectations, insurance standards, and risk conditions change.

This makes Good Faith Compliance a realistic AI governance starting point for small businesses, professional firms, nonprofits, public-trust organizations, and employers.

What Good Faith Compliance Helps Reduce

Good Faith Compliance helps reduce exposure related to:

  • Shadow AI

  • AI Guardrail Breaches

  • Unclear employee AI use

  • Weak AI guidance

  • No AI usage policy

  • Confidentiality mistakes

  • Privacy risk

  • AI hallucinations

  • Unverified public-facing content

  • Employment-related AI risk

  • Weak human-in-the-loop review

  • Lack of AI compliance evidence documentation

  • Missing accountability

  • Inability to show reasonable care

  • Loss of public confidence after unmanaged AI use

It does not eliminate every risk.

Nothing does.

But it creates a stronger position than doing nothing, relying on informal habits, or waiting until a problem occurs.

Why Closing the GAP Matters

If an AI-related issue happens, the organization may be asked:

  • What AI policy was in place?

  • Who was allowed to use AI?

  • What tools were approved?

  • What information was restricted?

  • What required human review?

  • Was there a human in the loop?

  • Did that human have authority?

  • What evidence shows the policy existed?

  • What records show staff were trained?

  • What proof shows reasonable care was taken?

Without structure, those questions become difficult.

With Good Faith Compliance, the organization has a better answer.

Not a perfect answer.

A defensible one.

Who This Is For

Good Faith Compliance is designed for organizations that need AI governance but do not need unnecessary complexity.

It is especially relevant for:

  • Small businesses

  • Law firms

  • Accounting firms

  • Museums

  • Schools

  • Nonprofits

  • Public-trust organizations

  • Professional service firms

  • Associations

  • Veteran organizations

  • First responder organizations

  • Employers using AI in workplace operations

  • Organizations preparing for client, insurer, or regulator questions

This is a practical starting point for organizations that know AI is useful but also understand that unmanaged AI creates exposure.

What This Is Not

Good Faith Compliance is not:

  • Legal advice

  • Regulatory approval

  • A guarantee against liability

  • A replacement for professional judgment

  • A substitute for legal, privacy, employment, or compliance counsel

  • A claim that AI use is risk-free

  • A one-time checkbox

It is a practical governance framework that helps organizations show responsible, documented, good-faith action.

How We Do What We Do

This page should include a “How We Do What We Do” video.

The video can explain:

  • What Good Faith Compliance is

  • What an AI usage policy is

  • Why AI governance for small business matters

  • The difference between AI governance and AI ethics

  • How HITL and human-in-the-loop review work

  • How DHITL strengthens oversight

  • How GovernSeal supports verification

  • How Audit Anchor supports stronger execution-layer evidence

  • Why AI compliance evidence documentation matters

  • How GAP© helps prevent AI Guardrail Breaches

  • How GrayZone monitoring supports ongoing visibility

  • How The Practical AI Brief keeps organizations current

  • How public-facing trust language should be used responsibly

  • How organizations can mature their governance over time

Suggested video title:

How Good Faith Compliance Creates a Defensible AI Governance Starting Point

The Trusted By Heroes Standard

Trusted By Heroes operates under a clear AI governance principle:

Oversight must have authority at the point of execution. If it cannot act, it is not governance.

Our approach prioritizes:

  • Structural control over observation

  • Evidence over assumption

  • Accountability over automation

  • Human judgment over blind trust

  • Documentation over memory

  • Public trust over empty claims

  • Practical governance over performative policy

AI can assist.

AI can accelerate.

AI can improve productivity.

But accountability must remain human.

Better Sooner Than Under Pressure

Most organizations will eventually be forced to answer for AI use.

The only question is whether they answer from a position of preparation or reaction.

Waiting means the trigger may come from:

  • A regulator

  • A client complaint

  • A privacy issue

  • An employment dispute

  • A hallucinated public statement

  • A confidentiality failure

  • An insurer asking harder questions

  • A board or funder asking for proof

  • A loss of public confidence

  • A preventable AI Guardrail Breach

That is the wrong time to start building governance.

It is better to put structure in place sooner by choice than later through pressure, investigation, legal exposure, insurance concern, or public loss of confidence caused by unmanaged AI use.

Close the GAP before AI creates exposure.

Start With a Defensible First Step — Then Keep It Current

If your organization is already using AI, or if your people may be using AI without clear rules, now is the time to act.

Do not wait until an AI Guardrail Breach becomes a legal problem, regulatory issue, employment dispute, client concern, insurer question, board concern, or public-confidence failure.

It is better to act sooner by choice than later, after a regulator, client, insurer, board, funder, employee, or member of the public asks what went wrong.

Good Faith Compliance gives your organization the first step.

Good Faith Compliance Pro adds ongoing oversight, verification, GrayZone monitoring, and The Practical AI Brief twice per month.

Together, they help you close the GAP before AI creates exposure.

Contact TrustedByHeroes.com to discuss:

  • Good Faith Compliance Basic

  • Good Faith Compliance Pro

  • AI governance for small business

  • AI usage policy creation

  • AI guidance for staff

  • GAP© — Guardrail Accountability Protocol

  • HITL and human-in-the-loop oversight

  • DHITL oversight support

  • GovernSeal verification

  • GrayZone monitoring

  • The Practical AI Brief twice-monthly AI governance briefing

  • Audit Anchor execution-layer evidence

  • AI compliance evidence documentation

  • Public-facing responsible AI trust statements

  • AI enforcement risks for employers

  • AI in compliance operations

Close the GAP before AI creates exposure.
Start with Good Faith Compliance and put policy, oversight, verification, and documentation in place before shadow AI becomes a Guardrail Breach.

Better sooner than under pressure.
Put AI governance in place before a regulator, insurer, client, employee, or public-confidence failure forces the conversation.