
I Asked AI to Review My Internal Affairs Investigative Report — And the Results Surprised Me

  • Writer: George Perez
  • Jan 13
  • 5 min read

About the Author:

Retired Assistant Director George Perez is a 25-year veteran of the Miami-Dade Police Department in Florida. During his career he served in investigative assignments ranging from general crimes, robbery, and homicide to internal affairs and public corruption investigations. Drawing on his experience as an investigator and executive command staff member of a large metropolitan police force, he routinely instructs the National Internal Affairs Investigator’s Certification for the Public Agency Training Council (PATC) and consults for local, state, and federal agencies across the United States.


The Need


During a previous voluntary agency audit and training session designated specifically for internal affairs commanders, an IA Unit Commander handed me a draft misconduct report that had been prepared weeks earlier. It contained the necessary points for review, but something felt off: the timeline was thin and lacked depth, the policy violations seemed ambiguous, and a labor attorney would have raised concerns about timeliness, due process, and consistency. I shared these concerns with the IA Commander and he agreed, but a full rewrite could take hours. There had to be a better way. So I tested an AI review process to see how it would do, and the results surprised me.


The Solution


I uploaded a sanitized version of the report to ensure CJIS compliance and asked AI to identify weaknesses, omissions, and clarity problems across a series of managerial and investigative areas well known to internal affairs investigators and command staff. What came back was blunt and detailed. It pointed to issues not noticed in prior reviews: not legal conclusions, not findings, just clear diagnostic feedback.


The Result


The report improved dramatically. The IA Commander had fewer questions, and the final file was stronger, better organized, and easier to defend. That test made one thing clear:


AI will soon be a standard companion for IA investigators—just like digital evidence systems, CAD logs, and early-intervention dashboards. What matters is how you use it.


What AI Flagged Immediately


AI zeroed in on structural issues that commonly weaken IA cases:


• Timeline gaps — Events listed out of order or lacking detail.

• Weak witness summaries — No direct quotes, vague phrasing, leading questions.

• Policy omissions — No citation of the specific policy sections violated or applicable accreditation standards.

• Unaddressed contradictions — Conflicting statements left unexplained.

• Missing rationale — No explanation of why evidence was excluded or why collateral policy violations were not addressed.

• Underdeveloped force analysis — No objective review of proportionality or necessity.

• Credibility blind spots — No articulation of impeachment factors or reliability notes.


These are the same areas that draw scrutiny from attorneys, arbitrators, and oversight bodies.


How IA Units Are Using AI Today


The use of AI by law enforcement agencies in police reports is still emerging and has a long road ahead. Even so, agencies nationwide are integrating AI into misconduct investigations to improve clarity and efficiency and to further unbiased due process.


Common applications include:

Draft Review Tools

o Identifying vague language

o Tightening report structure

o Checking alignment between facts and policy

Policy Cross-Reference Systems

o Linking narrative sections to exact policy numbers

o Flagging missing elements

o Identifying gaps in supervisory action

Digital Evidence Breakdown

o Time-stamped summaries of body-camera footage

o Highlighting force-decision points

o Linking policy and training implementation

Bias and Neutrality Checks

o Detecting subjective phrasing

o Highlighting unsupported assumptions

o Identifying leading questions posed to involved employees

Workload & Early-Intervention Data

o Tracking complaint trends

o Surfacing performance patterns

Legal Guidance Alerts

o Flagging standards tied to Garrity, Loudermill, NLRB rules, and First Amendment protections

o Identifying when a case may implicate Giglio concerns


AI is becoming a quality-control layer inside IA—not the decision-maker.
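

For agencies piloting the draft-review step described above, the sketch below shows one possible way to structure it. It pairs a sanitized draft with a review checklist drawn from the areas discussed in this article and assembles a single review request that an investigator could submit to the agency's approved, CJIS-compliant AI platform. The checklist wording, the file name, and the idea of scripting this step are illustrative assumptions on my part, not a required method or any particular vendor's tool.

```python
# A minimal sketch, assuming the draft has already been sanitized of all
# personally identifying and CJIS-protected information. The checklist items
# and file name below are hypothetical examples, not a prescribed format.

REVIEW_CHECKLIST = [
    "Timeline completeness and chronological order",
    "Witness summaries: direct quotes, neutral (non-leading) questions",
    "Citations to the specific policy sections and accreditation standards at issue",
    "Unresolved contradictions between statements",
    "Rationale for excluded evidence and any collateral policy violations",
    "Force analysis: proportionality and necessity",
    "Credibility factors and potential impeachment issues",
]

def build_review_prompt(sanitized_report: str) -> str:
    """Combine the sanitized draft with the checklist into one review request."""
    checklist = "\n".join(f"- {item}" for item in REVIEW_CHECKLIST)
    return (
        "You are reviewing a DRAFT internal affairs report that has been "
        "sanitized of identifying information.\n"
        "Identify weaknesses, omissions, and clarity problems in each area "
        "below. Do not draw legal conclusions or recommend findings.\n\n"
        f"Review areas:\n{checklist}\n\n"
        f"Draft report:\n{sanitized_report}"
    )

if __name__ == "__main__":
    # The investigator reads the sanitized draft from a file, then submits the
    # assembled prompt to the agency's approved AI platform for feedback.
    with open("sanitized_draft.txt", encoding="utf-8") as f:
        draft = f.read()
    print(build_review_prompt(draft))
```

Whatever the platform returns still has to be verified line by line by the investigator and the reviewing supervisor, which is exactly the human-verification principle the 2025 rulings below reinforce.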


What 2025 Court Rulings Say About AI in Investigations


Of course, courts have weighed in on the subject but have yet to settle on a national standard. Nonetheless, 2025 produced several key developments that IA investigators should know:


AI Evidence Reliability Scrutiny

• Federal courts advanced proposals to regulate AI-generated evidence, including a proposed new Federal Rule of Evidence 707.

o Goal: Increase reliability, prevent hidden AI-generated facts, and require disclosure when AI contributes to evidence review.


Transparency Requirements

• States such as California expanded transparency laws requiring disclosure when police use AI in report drafting or evidence summarization.

o Departments must maintain human-reviewed versions of all AI-assisted work.


Sanctions for AI Errors in Legal Filings

• Federal judges sanctioned attorneys for submitting filings containing unverified AI-generated content, reinforcing a clear principle:

Every AI-assisted document must be verified by a human reviewer.


AI and Digital Evidence

• Several rulings highlighted concerns about AI-modified or AI-altered digital evidence (deepfakes), requiring investigators to authenticate video sources more rigorously in administrative and criminal reviews.


It is important to note that these rulings do not ban the use of AI in public-safety investigations or report writing. They reinforce accountability—and underscore your responsibility to verify every factual statement and every policy conclusion.


Benefits for Internal Affairs Investigators


It is important to focus on both the benefits and the risks of this new frontier as it makes its mark in law enforcement. There is no doubt that AI has its place and will enhance the workflow for everyone involved.


AI strengthens IA investigations when used wisely:

Faster Draft Review

o Immediate detection of missing facts or unclear reasoning.

o Faster detection of missing policy references and accreditation standards.

Stronger Due-Process Protection

o AI checks notice steps, timelines, and interview sequencing against statutory requirements (e.g., Florida’s Law Enforcement Officers’ Bill of Rights).

More Consistent Reports

o Standardized structure across investigators.

o Better alignment between evidence, policy, and findings.

Early Pattern Identification

o AI can surface recurring problems involving performance, documentation, or supervision.

Reduced Supervisory Workload

o Supervisors receive clearer, more complete drafts.


Remember, AI improves precision. It does not replace the investigator’s responsibility to reason, verify, and make findings. It does not replace final supervisory review and disciplinary decision making.


Risks and Operational Limits


AI must be used carefully. Investigators remain accountable for every factual and legal conclusion.


Key risks:

• Over-reliance — AI provides guidance, not final answers.

• Data Security — Uploading IA data into open platforms risks exposure.

• Training Gaps — Staff often misunderstand AI’s role and boundaries.

• Credibility and Intent Assessments — AI cannot evaluate tone, body language, or inconsistencies across interviews.


Why I Think the Future of AI Is Encouraging


Overall, AI will strengthen truth-finding and due process in IA work. I have seen this firsthand: I encourage agencies to adopt, and often provide them with, templates and workflow checklists for their IA investigators to use. This helps create repeatable results in the investigative and report-writing process. AI-generated checklists covering the areas listed below can create a structured workflow process from which we can expect improvements in:


• Report accuracy

• Efficiency

• Policy alignment

• Evidence organization

• Administrative consistency

• Transparency to communities and arbitrators

• Better communication to involved employees and their unions


When integrated responsibly, AI enhances fairness by:

• Reducing errors

• Documenting investigator reasoning

• Supporting consistent decision-making

• Reinforcing procedural safeguards


My Final Thoughts


Internal Affairs work demands clear policies, timelines, strong reasoning, fairness, and solid documentation. AI strengthens each of these areas when you use it with discipline and specific parameters and verify the outcome against the evidence. You still decide what is true, you still assess credibility, and you still protect due process. AI’s role is support—not decision-making. It helps you produce sharper, clearer, and more defensible investigations in a way that is more efficient and free of personal bias. The future of IA isn’t hands-off or automated. It’s you, using smarter tools to deliver stronger work products.


 
 

