Chapter 6

AI Utilization Plan

Part 1: Executive Summary

The Restoration Company embraces AI as a tool to enhance our work, not replace the thoughtfulness, expertise, and care that define who we are. This document establishes how we use AI responsibly across every department, ensuring our output reflects our values, protects our data, and maintains the quality our customers expect.

Our Philosophy

AI is powerful, but it is not a shortcut. Every piece of work that leaves this company, whether assisted by AI or not, must meet our standards and reflect our voice. The person using the tool is responsible for the outcome. AI can draft, suggest, and accelerate. It cannot think for us, and it does not absolve us of accountability.

Key Principles

  • Innovate Towards Simplicity — We use AI to remove friction and complexity, not to add layers of generic noise.
  • Ambitiously Responsive and Reliable — AI helps us move faster, but speed never compromises accuracy or dependability. We review, we verify, we own the result.
  • Christ-Centered — We approach AI with honesty and integrity. We are stewards of this technology, responsible for how we use it and what we produce with it.
  • Be Growth-Oriented — AI is a skill to develop, not a crutch to lean on. We invest in learning how to use these tools well.

What's Expected

Every employee who uses AI tools is expected to:

  • Understand what the tool produced before using or sharing it
  • Review and refine AI output to meet our quality standards
  • Never input confidential or sensitive data into unapproved tools
  • Take full ownership of the final work product

Part 2: Company-Wide AI Policy

2.1 Guiding Principles

Our use of AI is governed by the same values that guide every other aspect of our work. These principles apply universally, regardless of department or tool.

Human Accountability

AI is a tool, not a decision-maker. The person who uses AI is fully responsible for the output, its accuracy, its quality, and its appropriateness. "The AI did it" is never an acceptable explanation for substandard work.

Intentional Use

We use AI with purpose, not by default. Before reaching for an AI tool, consider whether it genuinely serves the task at hand. AI should make good work better or faster, not make lazy work possible.

Transparency

We are honest about when and how AI is used. Not every email needs a disclosure, but where it matters (particularly in customer-facing deliverables, creative work, and documentation), we don't obscure AI's role in the process.

Continuous Learning

Proficiency with AI is a skill, and we expect our people to develop it. This means investing time in understanding how these tools work, what they're good at, what they're not, and how to get better results over time.

Stewardship

We treat AI as a resource to be used responsibly. This includes protecting company and customer data, respecting intellectual property, and recognizing that how we use this technology reflects on The Restoration Company as a whole.

2.2 Human Accountability Standard

This section exists to be unambiguous: AI-generated work that has not been reviewed, understood, and refined by a human does not meet our standards. We call this "unrefined AI output," and it is not acceptable at The Restoration Company.

What Is Unrefined AI Output?

Unrefined AI output is content that has been copied from an AI tool and used with little or no human involvement. It typically exhibits: generic or robotic tone, factual errors or hallucinations, lack of specificity, obvious structural tells (e.g., "Here are five ways to..."), or content that could have been written for any company.

The Standard

Before submitting, sharing, or publishing any AI-assisted work, you must be able to answer "yes" to all of the following:

  1. Have I read and understood the entire output?
  2. Have I verified any factual claims, data, or references?
  3. Have I edited the output to reflect our voice, standards, and the specific context of this task?
  4. Could I explain or defend this work if asked about it?
  5. Am I proud to put my name on this?

If the answer to any of these is "no," the work is not ready.

Accountability

The person who submits the work owns the work. If AI-assisted output contains errors, misrepresents information, or falls short of our quality standards, the responsibility lies with the individual, not the tool. Leaders are expected to hold their teams to this standard and address patterns of low-quality AI use directly.

2.3 Data Privacy and Confidentiality

AI tools are powerful, but they come with risk. When you input information into an AI system, that data may be stored, used for training, or otherwise processed in ways outside of our control. Protecting company and customer data is non-negotiable.

Data Classification for AI Use

Never Input (Prohibited)
  • Customer PII: names, addresses, emails, phone numbers, payment information
  • Employee personal data: SSNs, compensation details, performance reviews, health information
  • Financial records: detailed revenue figures, pricing strategies, margin data, banking info
  • Trade secrets: manufacturing methods, vendor agreements, unreleased product designs
  • Access credentials: passwords, API keys, authentication tokens
  • Any data covered by NDA or customer contract

Use Caution (Requires Judgment)
  • Internal communications and strategy documents
  • Customer names or company names without additional sensitive context
  • Aggregated or anonymized data
  • Draft content that references real projects or clients

Generally Acceptable
  • Publicly available information
  • Generic requests for writing assistance, brainstorming, or research
  • Code snippets that contain no proprietary business logic or credentials
  • Hypothetical scenarios and anonymized examples

Tool-Specific Considerations

Different AI tools have different data handling practices. Our approved tools have been vetted, but even within approved tools, exercise judgment. Enterprise versions of AI tools (like our Gemini Pro accounts) typically offer stronger data protections than free consumer versions. Never use a personal account or unapproved tool for company work.
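
For teams that paste code, logs, or drafts into approved tools, a lightweight pre-prompt screen can catch obvious problems before data leaves your hands. The sketch below is a minimal, hypothetical example (the patterns and function names are illustrative, not a company standard), and it supplements rather than replaces the classification rules above.

    # Hypothetical example: flag text that looks like it contains "Never Input"
    # data (Section 2.3) before it is pasted into an approved AI tool.
    # The patterns are illustrative and far from exhaustive.
    import re

    FLAG_PATTERNS = {
        "email address": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
        "possible SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
        "possible API key or token": re.compile(r"\b(?:sk|pk|ghp|AKIA)[A-Za-z0-9_-]{16,}\b"),
        "password assignment": re.compile(r"(?i)\b(password|passwd|secret)\s*[:=]\s*\S+"),
    }

    def screen_prompt(text: str) -> list[str]:
        """Return human-readable warnings for anything that looks sensitive."""
        return [
            f"Possible {label} detected; remove or anonymize before sending."
            for label, pattern in FLAG_PATTERNS.items()
            if pattern.search(text)
        ]

    if __name__ == "__main__":
        draft = "Summarize this ticket for jane.doe@example.com, password: hunter2"
        for warning in screen_prompt(draft):
            print(warning)

A screen like this only catches obvious patterns; it does not detect paraphrased or lightly anonymized sensitive information, which remains prohibited under Section 2.5.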

When in Doubt

If you are unsure whether data is appropriate to input into an AI tool, do not guess. Reach out to [email protected] for guidance.

2.4 Approved Tools

The following AI tools have been reviewed and approved for use at The Restoration Company. Using unapproved tools for company work is prohibited. If you believe a tool should be added to this list, submit a request through the process outlined in Part 4.

Company-Wide Approved Tools

Tool                 Approved For                  Notes
Google Gemini Pro    All employees                 Primary AI assistant. Use company account only.
ChatGPT              Marketing (approved users)    Creative and content work. No personal accounts.
Claude               IT/Engineering                Coding, development, and technical documentation.

What Is Not Approved

  • Free or consumer-tier versions of approved tools (use only company-provisioned accounts)
  • Personal AI accounts for company work
  • Any AI tool not listed above, including: Microsoft Copilot, Midjourney, DALL-E, open-source models run locally, or browser-based AI assistants
  • AI features embedded in other software unless explicitly approved

2.5 Prohibited Uses

Certain uses of AI are off-limits at The Restoration Company, regardless of tool or department. These prohibitions exist to protect our company, our customers, and our integrity.

Data and Privacy Violations

  • Inputting any data classified as "Never Input" in Section 2.3
  • Using AI to process, analyze, or store customer data outside of approved systems
  • Circumventing data protections by paraphrasing or lightly anonymizing sensitive information

Misrepresentation and Deception

  • Presenting AI-generated work as entirely human-created when transparency is required or expected
  • Using AI to fabricate references, citations, testimonials, or qualifications
  • Generating fake communications that impersonate real individuals

Quality and Accountability Failures

  • Submitting unrefined AI output as finished work
  • Using AI to produce work you do not understand or cannot explain
  • Delegating critical decisions entirely to AI without human judgment

Security and Access

  • Using unapproved AI tools for any company work
  • Using personal AI accounts for company business
  • Inputting credentials, API keys, or access tokens into any AI tool
  • Installing or running local AI models on company systems without IT approval

Customer Communications

  • Exercise heightened caution when using AI to draft any communication that will be sent directly to customers
  • AI-assisted customer communications must be carefully reviewed for tone, accuracy, and personalization
  • Never use AI to generate responses to customer complaints, disputes, or sensitive situations without thorough human review

Harmful or Inappropriate Use

  • Generating content that is discriminatory, offensive, or contrary to our values
  • Using AI to surveil, profile, or make employment decisions about individuals
  • Any use that violates applicable laws or regulations

2.6 Training and Competency Expectations

AI is a skill, not a shortcut. We expect employees who use AI tools to develop genuine competency: not just the ability to get output, but the judgment to use these tools well.

Baseline Expectations

All employees who use AI tools in their work should:

  • Complete any company-provided AI training for their approved tools
  • Understand the capabilities and limitations of the tools they use
  • Know what data can and cannot be shared with AI systems
  • Be able to recognize unrefined AI output and understand why it falls short
  • Stay current as tools, policies, and best practices evolve

Manager Responsibilities

Leaders are responsible for ensuring their teams are equipped to use AI effectively:

  • Confirming team members have completed required training before using AI tools
  • Providing ongoing coaching on AI use within their department's workflows
  • Modeling good AI practices and holding the team accountable to this policy
  • Escalating questions or gray areas to [email protected]

Ongoing Development

AI tools and best practices are evolving rapidly. What works today may be outdated in six months. We encourage employees to:

  • Share effective prompts and techniques with teammates
  • Experiment with new approaches while staying within policy guardrails
  • Provide feedback on how AI tools are working in practice
  • Suggest training topics or resources

Current Training Status

As of this policy's publication, approximately half of the company has received formal training on Gemini Pro. Employees who have not yet completed training should coordinate with their manager to do so.

Part 3: Department-Specific Guidelines

3.1 IT (Engineering/Development)

The IT department uses AI as a daily augmentation of our engineering capabilities. Claude is our approved tool for all development work, and we approach it with a spec-driven, plan-first methodology.

Approved Tool

Claude (via Claude Code) is the approved AI tool for engineering work. All other AI coding assistants are prohibited unless explicitly approved.

Primary Use Cases

  • Code generation and implementation
  • Debugging and troubleshooting
  • Technical documentation
  • Specification planning and writing
  • Code review assistance

Engineering-Specific Standards

  • Understand Every Line — If AI generates code, you must understand what that code does before using it: the logic, the language constructs, and how it integrates with the broader system. "It works" is not sufficient. If you cannot explain the code, you cannot commit the code.
  • Plan First, Generate Second — Before generating code, define the specification. Know what you're building, why, and how it should behave.
  • Human Review Required — All AI-generated code must be reviewed by a human before it is used, merged, or deployed. No exceptions.
  • No Blind Trust — AI will confidently produce code that is wrong, insecure, or subtly broken. Treat AI output with professional skepticism. Test it. Question it. Verify it (a brief illustration follows this list).
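
What "test it, question it, verify it" can look like in practice: the sketch below is a hypothetical illustration, not a required workflow. It assumes an AI assistant produced a small helper (apply_discount is a made-up example) and shows the kind of unit tests a reviewer writes to confirm behavior, including edge cases the assistant may have missed.

    # Hypothetical illustration: verifying AI-generated code before committing it.
    # Assume the assistant produced apply_discount(); a human reviewer adds the
    # tests below to confirm typical behavior and the edge cases.
    import unittest

    def apply_discount(price: float, percent: float) -> float:
        """AI-generated helper (illustrative): apply a percentage discount."""
        if percent < 0 or percent > 100:
            raise ValueError("percent must be between 0 and 100")
        return round(price * (1 - percent / 100), 2)

    class TestApplyDiscount(unittest.TestCase):
        def test_typical_discount(self):
            self.assertEqual(apply_discount(100.00, 15), 85.00)

        def test_zero_and_full_discount(self):
            self.assertEqual(apply_discount(50.00, 0), 50.00)
            self.assertEqual(apply_discount(50.00, 100), 0.00)

        def test_rejects_out_of_range_percent(self):
            # Edge cases an assistant can easily get wrong: verify, don't assume.
            with self.assertRaises(ValueError):
                apply_discount(50.00, -5)
            with self.assertRaises(ValueError):
                apply_discount(50.00, 150)

    if __name__ == "__main__":
        unittest.main()

If a test like this is hard to write, that is usually a sign the generated code is not yet understood well enough to commit.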

What AI Does Not Replace

  • Architectural decisions and system design
  • Security review and threat modeling
  • Understanding of the codebase and business logic
  • Judgment calls on technical tradeoffs
  • Accountability for production systems

3.2 Operations (including Manufacturing)

Operations is beginning to integrate AI into daily workflows, with significant potential for documentation, training, and data-driven decision support.

Approved Tool

Google Gemini Pro is the approved AI tool for Operations.

Primary Use Cases

  • Process documentation and standard operating procedures (SOPs)
  • Training material creation and refinement
  • Data analysis and reporting
  • Decision-making assistance and scenario planning
  • Summarizing operational metrics and trends

AI Informs, Humans Decide

AI can analyze data, surface insights, and present options. It cannot make decisions. Personnel decisions, staffing levels, product quality determinations, QA pass/fail decisions, and safety-related decisions require human judgment and cannot be delegated to AI.

3.3 Marketing

Marketing is actively using AI for creative work, from content creation to campaign planning. Everything that reaches a customer should feel like it came from The Restoration Company, not a machine.

Approved Tools

Google Gemini Pro and ChatGPT are approved for Marketing use. Use company-provisioned accounts only.

Primary Use Cases

  • Content creation (social media, email, web copy)
  • Campaign planning and ideation
  • Image generation
  • Product promotion scheduling
  • Brainstorming and creative concepting
  • Editing and proofreading

Brand Voice Is Non-Negotiable

AI does not know our brand. It will produce generic content unless guided carefully. All AI-generated content must be reviewed and edited to align with The Restoration Company's brand voice and guidelines.

AI-Generated Images

AI-generated images are permitted for customer-facing use, but they are held to the same accountability standard as any other AI output. Before using an AI-generated image externally, apply the five-question checklist from Section 2.2.

3.4 Sales

Sales at The Restoration Company is built on relationships and personal touch. AI can support this work but must never replace the human connection that sets us apart.

Approved Tool

Google Gemini Pro is the approved AI tool for Sales.

Primary Use Cases

  • Sales data analysis and performance insights
  • Customer and prospect research
  • Internal communications and documentation
  • Sales skill development and coaching preparation
  • Meeting prep and follow-up summaries

Personal Touch Is Our Advantage

AI should not be used to draft customer communications in Sales. This includes outreach emails to prospects, follow-up messages to customers, responses to customer inquiries, and proposals sent directly to customers. When a customer hears from Sales, it should be authentically from the salesperson.

Where AI Adds Value

AI is encouraged for work that stays internal or supports the salesperson's own development:

  • Analyzing your sales data to identify patterns and opportunities
  • Researching a prospect's company, industry, or challenges before a call
  • Preparing talking points or anticipating objections
  • Summarizing meeting notes for your own records
  • Improving internal updates and reports to leadership

3.5 Finance/HR

Finance and HR are beginning to use AI for documentation and administrative efficiency. Given the sensitive nature of the data these departments handle, extra caution is required.

Approved Tool

Google Gemini Pro is the approved AI tool for Finance/HR.

Primary Use Cases

  • Policy and procedure documentation
  • Training material creation
  • Job descriptions and posting drafts
  • Internal communications and announcements
  • Benefits summaries and explanations
  • Resume screening assistance
  • General data analysis support

AI Assists, Humans Decide

The following decisions require human judgment and cannot be delegated to AI: hiring and candidate selection, termination and disciplinary action, compensation and salary decisions, performance ratings and evaluations, promotion and role changes, and benefits eligibility determinations.

Part 4: Governance and Review

This policy is a living document. As AI tools evolve and our experience with them grows, so will our guidelines.

Policy Ownership

This policy is owned and maintained by Tyler Boyd. All questions, concerns, escalations, and requests for clarification should be directed to [email protected].

Review Cadence

This policy will be reviewed and updated as needed, but no less than annually. Reviews will consider:

  • Changes to available AI tools and their capabilities
  • New use cases emerging across departments
  • Lessons learned from AI use in practice
  • Feedback from employees and department leaders
  • Evolving best practices and industry standards

Requesting New Tools

If you believe a new AI tool should be evaluated for company use, submit a request to [email protected] with:

  • Tool name and description
  • Intended use case and department
  • Why current approved tools do not meet this need
  • Any known information about the tool's data handling and security practices

Gray Areas

AI presents novel situations. If you're facing a use case that feels uncertain (not clearly permitted, not clearly prohibited), pause and ask. It's always better to get guidance than to guess wrong. Contact [email protected].

© 2025 Restoration Apparel Company. Internal use only.