The Algorithmic Consistency Initiative, LLC
Bridging Technical Safety and Civil Rights
The Algorithmic Consistency Initiative (ACI) is a policy research organization dedicated to preventing the automation of historical bias. We bridge the gap between technical AI alignment and civil rights law, advocating for safety standards that move beyond "data fairness" to address the fundamental risks of "Behavioral Mimicry" in critical infrastructure.
Current AI policy focuses heavily on "unbiased data." However, our research identifies a deeper, structural flaw in how AI models are trained: The Consistency Paradox.
Human judgment is biologically inconsistent. Research shows that decisions in high-stakes environments (judicial rulings, medical triage) fluctuate wildly based on cognitive fatigue, hunger, and time of day.
Standard training methods, such as inverse reinforcement learning (IRL), teach AI to mimic these human decisions. Because AI does not suffer from fatigue, it does not just copy our skills; it industrializes our cognitive failures, executing flawed or biased judgments with perfect, unblinking consistency.
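The mechanism can be illustrated with a small simulation. The sketch below is entirely hypothetical: a synthetic "human judge" whose approval threshold drifts with fatigue and carries a penalty against one group, and a crude mimicry step that learns one threshold per group from the judge's record. The learned model inherits the penalty, but applies it deterministically, every time.

```python
import random

random.seed(0)

def human_decision(merit, group, hour):
    # Hypothetical human judge: approve if merit clears a threshold,
    # but the threshold drifts with fatigue (hour of day, plus noise)
    # and carries a biased penalty against group "B".
    fatigue = 0.1 * hour + random.gauss(0, 0.2)
    bias = 0.3 if group == "B" else 0.0
    return merit > 0.5 + fatigue * 0.05 + bias

# Synthetic historical record the model is trained to mimic.
history = []
for _ in range(5000):
    merit = random.random()
    group = random.choice(["A", "B"])
    hour = random.randint(0, 8)
    history.append((merit, group, human_decision(merit, group, hour)))

def fit_thresholds(records):
    # Crude behavioral mimicry: for each group, place the learned
    # approval threshold midway between the lowest approved merit
    # and the highest denied merit in the historical record.
    thresholds = {}
    for g in ("A", "B"):
        approved = [m for m, grp, ok in records if grp == g and ok]
        denied = [m for m, grp, ok in records if grp == g and not ok]
        thresholds[g] = (min(approved) + max(denied)) / 2
    return thresholds

model = fit_thresholds(history)
# The human bias was noisy; the learned bias is permanent: the model
# now demands strictly more merit from group "B" on every decision.
assert model["B"] > model["A"]
```

The human judge applied the penalty inconsistently; the fitted model applies it with perfect consistency, which is the Consistency Paradox in miniature.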
We call this dynamic "Bias Laundering": an algorithm takes historical discrimination (e.g., redlining or disparate impact) and launders it through complex mathematics until it appears objective and neutral.
AI doesn't just automate our decisions—it industrializes our inconsistencies and launders our biases through mathematical complexity.
We argue that "human-level performance" is a dangerous safety standard for AI. Instead, we advocate for Constitutional AI (CAI) as the regulatory gold standard.
We must stop asking AI to guess our values from our history (which is flawed) and start explicitly coding our values as constraints.
High-stakes models must operate under a "Constitution"—a set of explicit, non-negotiable rules (e.g., "Do not discriminate based on protected class") that override the training data whenever the two conflict.
Constitutional AI ensures that AI systems align with our stated values, not our historical failures, creating transparent and accountable decision-making in critical infrastructure.
Working with the Senate Commerce Committee to update the NIST AI Risk Management Framework (RMF). We are pushing for "Explicit Normative Constraints" to be classified as a required safety control for high-risk systems.
Partnering with the Congressional Hispanic Caucus (CHC) to identify and ban "Proxy Discrimination" in housing and lending algorithms, ensuring that AI advances equity rather than automating exclusion.
Promoting the use of AI not to replace human judgment, but to audit it—using consistency checks to identify when human decision-makers are deviating from their own ethical ideals.
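A consistency audit of human decision-makers can be sketched in a few lines. The decision log, case identifiers, and hours below are all hypothetical; the audit simply flags materially identical cases that received different rulings and compares morning versus afternoon approval rates, the kind of time-of-day drift described above.

```python
from collections import defaultdict
from statistics import mean

# Hypothetical decision log: (case_profile, hour_of_day, approved).
# Entries sharing a case_profile are materially identical cases.
log = [
    ("parole-1", 9, True), ("parole-1", 16, False),
    ("parole-2", 10, True), ("parole-2", 15, True),
    ("parole-3", 9, True), ("parole-3", 17, False),
]

def consistency_audit(log):
    # Group rulings by case profile; any profile with conflicting
    # outcomes marks a deviation from the decision-maker's own
    # standard. Then compare early vs. late approval rates as a
    # simple fatigue/time-of-day check.
    by_case = defaultdict(list)
    for case, hour, ok in log:
        by_case[case].append(ok)
    inconsistent = sorted(c for c, outs in by_case.items()
                          if len(set(outs)) > 1)
    early = [ok for _, h, ok in log if h < 12]
    late = [ok for _, h, ok in log if h >= 12]
    return {
        "inconsistent_cases": inconsistent,
        "early_approval_rate": mean(early),
        "late_approval_rate": mean(late),
    }

report = consistency_audit(log)
```

Here the tool does not replace the human; it holds up a mirror, showing where the human's rulings diverge from the human's own stated standard.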

Director
Researcher and author of "The Mirror Effect: How AI's Consistency Exposes the Flaw in Human Moral Preference."
Rocha is the author of 19 books on AI and host of the 200-episode podcast "AI and Us: Exploring Our Future." Rocha's forthcoming book, "AI is More Human than Humans: A Mirror to Our Better Angels," offers the first practitioner's guide to implementing the NIST AI RMF and the federal executive mandates on AI (Promoting the Use of Trustworthy Artificial Intelligence in the Federal Government; Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence; Winning the Race: America's AI Action Plan) through Consistency Auditing, a methodology that makes ethical AI governance actionable for the 430+ federal agencies required to comply.
A Congressional appointee, professional speaker with 40 years of experience, and passionate advocate for algorithmic accountability, Rocha connects the civil rights movement's lessons to today's AI challenges.
Our work is informed by a diverse coalition of technical and civil rights leaders, including:
President, Dolores Huerta Foundation
Civil Rights Leader
Aligned with Stuart Russell's principles
Our coalition bridges the gap between civil rights advocacy and technical AI safety, ensuring that policy solutions are both technically sound and socially just.
Reach out to discuss policy research, request briefings, or inquire about collaboration opportunities with congressional offices and federal agencies.
For urgent matters or media inquiries: