Important: This is an operational template, not legal advice. Adapt it with your campaign’s or firm’s counsel, compliance team, and senior leadership before adopting it.

1. Approved uses

  • Internal drafting and summarization (memos, research, briefings).
  • First-pass drafts of fundraising emails, scripts, and digital copy that will be reviewed by a named human owner.
  • Synthesis of public information (news, polling, opposition research) for staff use.
  • Operational tasks: meeting notes, agendas, scheduling drafts, internal Q&A.
  • Code, data, and analytics support for internal tools where outputs are reviewed.

2. Prohibited uses

  • Generating final public-facing content without a human reviewer of record.
  • Creating deceptive synthetic audio, video, or imagery of any real person.
  • Submitting protected, regulated, or privileged information into third-party tools that are not contractually approved.
  • Producing legal, compliance, or financial conclusions presented as authoritative.
  • Targeting individuals based on protected characteristics in ways that violate law or platform policy.

3. Required human review

  • All public-facing communications must be reviewed and approved by a named human before publication.
  • High-stakes outputs (legal, financial, candidate voice, paid media, donor communications) require a second reviewer; a minimal approval-record sketch follows this list.
  • Reviewers must be empowered to reject, edit, or escalate any AI-assisted output.
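
Where review status is tracked in tooling at all, a small record can encode both rules above. This is a minimal Python sketch: the ApprovalRecord class, its field names, and the high-stakes category list are illustrative assumptions, not part of any standard system.

    from dataclasses import dataclass, field

    # Illustrative category names; map these to your own content taxonomy.
    HIGH_STAKES = {"legal", "financial", "candidate_voice", "paid_media", "donor_communications"}

    @dataclass
    class ApprovalRecord:
        output_id: str
        category: str                                  # e.g. "paid_media", "internal_memo"
        reviewers: list[str] = field(default_factory=list)

        def approved(self) -> bool:
            # Every public-facing output needs one named reviewer of record;
            # high-stakes categories need a second, distinct reviewer.
            required = 2 if self.category in HIGH_STAKES else 1
            return len(set(self.reviewers)) >= required

A routine draft with one named reviewer passes; a paid-media output stays unapproved until a second, distinct reviewer is recorded.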

4. Public-facing content rules

  • Candidate voice content requires explicit candidate or principal sign-off, not just staff approval.
  • Imagery and video involving real people must be authentic or clearly labeled per platform and legal rules.
  • Statistical, factual, and historical claims must be source-checked before release.

5. Sensitive data rules

  • No donor PII, voter file records, or regulated data into non-approved AI tools; a screening sketch follows this list.
  • No client-confidential strategy memos into consumer AI products without contractual data protections.
  • Use approved enterprise tools, with data-use settings configured for the campaign or firm.
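
For teams that route drafts through an intake script before anything reaches an outside tool, a crude screen can stop the most obvious PII at the door. This is a minimal Python sketch assuming such an intake path exists; the patterns and the looks_like_pii helper are illustrative and no substitute for approved enterprise tooling or a real data-loss-prevention product.

    import re

    # Deliberately simple, illustrative patterns; tune them with counsel and IT.
    PII_PATTERNS = [
        re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),             # SSN-shaped
        re.compile(r"\b\d{13,16}\b"),                     # card-number-shaped
        re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),           # email address
        re.compile(r"\b\d{3}[-.\s]\d{3}[-.\s]\d{4}\b"),   # US-phone-shaped
    ]

    def looks_like_pii(text: str) -> bool:
        """True if the text matches any obvious PII pattern."""
        return any(p.search(text) for p in PII_PATTERNS)

    def check_before_submit(text: str) -> None:
        """Block submission to a non-approved tool when a pattern fires."""
        if looks_like_pii(text):
            raise ValueError("Possible PII detected; route through an approved tool.")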

6. Legal and compliance

  • Any FEC, FCC, state election, or platform-policy question raised by AI use must be escalated to designated counsel before action.
  • Disclaimers, paid-for-by language, and required disclosures are the responsibility of human reviewers, not the model.

7. Client/candidate approval

  • Firms must obtain client consent for AI-assisted work that touches the client’s brand, voice, or data.
  • Candidates must be briefed on what AI is used for, by whom, and where the human checkpoints sit.

8. Disclosure and logging expectations

  • Maintain an internal log of AI use categories, tools, and reviewers, proportional to the size of the operation; a minimal log-entry sketch follows this list.
  • Disclose AI use to clients, vendors, and partners where contracts or platform policies require it.
  • Be prepared to answer, in plain language, what AI did and what a human did for any output.
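
For a small operation, a flat append-only file is often proportional. A minimal Python sketch, assuming a JSON-lines log; the field names are illustrative, not a required schema.

    import json
    from datetime import datetime, timezone

    def log_ai_use(path: str, category: str, tool: str, reviewer: str, notes: str = "") -> None:
        """Append one AI-use record: the category of work, the tool, and the reviewer."""
        entry = {
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "category": category,    # e.g. "fundraising_draft", "research_summary"
            "tool": tool,            # the approved tool that was used
            "reviewer": reviewer,    # the named human reviewer of record
            "notes": notes,
        }
        with open(path, "a", encoding="utf-8") as f:
            f.write(json.dumps(entry) + "\n")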

9. Staff training

  • Every staff member with AI access receives role-appropriate training before use.
  • Training covers approved tools, prohibited uses, sensitive data handling, and the human-review checkpoints for their role.
  • Refresh training when tools, policies, or election rules change.

10. Incident response

  • Any incident, including a leak, hallucinated public claim, deceptive content, or data exposure, is reported to the designated AI owner within 24 hours; a report-record sketch follows this list.
  • A short post-incident review identifies root cause, affected systems, and the policy or training change needed.
  • Repeat incidents trigger a tool, vendor, or workflow review.
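
If incidents are captured as structured records rather than email threads, the 24-hour rule and the post-incident review fields can live in one place. A minimal Python sketch with illustrative field names, assuming incidents are logged in code at all.

    from dataclasses import dataclass
    from datetime import datetime, timedelta

    @dataclass
    class Incident:
        occurred_at: datetime
        reported_at: datetime
        kind: str               # e.g. "leak", "hallucinated_claim", "data_exposure"
        summary: str
        root_cause: str = ""    # filled in by the post-incident review
        followup: str = ""      # the policy or training change identified

        def reported_on_time(self) -> bool:
            # The policy above requires reporting within 24 hours.
            return self.reported_at - self.occurred_at <= timedelta(hours=24)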