About Us

The Algorithmic Consistency Initiative, LLC

Bridging Technical Safety and Civil Rights

Mission

The Algorithmic Consistency Initiative (ACI) is a policy research organization dedicated to preventing the automation of historical bias. We bridge the gap between technical AI alignment and civil rights law, advocating for safety standards that move beyond "data fairness" to address the fundamental risks of "Behavioral Mimicry" in critical infrastructure.

The Core Problem: The Consistency Paradox

Current AI policy focuses heavily on "unbiased data." However, our research identifies a deeper, structural flaw in how AI models are trained: The Consistency Paradox.

Human Error

Human judgment is biologically inconsistent. Research shows that decisions in high-stakes environments (judicial rulings, medical triage) fluctuate wildly based on cognitive fatigue, hunger, and time of day.

AI Scale

Standard training methods, such as Inverse Reinforcement Learning, teach AI to mimic these human decisions. Because AI does not suffer from fatigue, it does not just copy our skills; it industrializes our cognitive failures, executing flawed or biased judgments with perfect, unblinking consistency.
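The toy sketch below (synthetic data, hypothetical feature names, not ACI's methodology) illustrates the point: a model cloned from noisy, biased historical decisions discards the day-to-day noise but applies the embedded bias to every future case, identically.

```python
# Minimal sketch (synthetic data, not ACI's methodology): a model cloned from
# biased historical decisions reproduces that bias with perfect consistency.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5_000

merit = rng.normal(size=n)                # legitimate signal
group = rng.integers(0, 2, size=n)        # protected attribute (0/1)
fatigue = rng.normal(scale=0.5, size=n)   # human noise: time of day, hunger, etc.

# Historical human decisions: driven by merit, but penalizing group 1
# and fluctuating with fatigue.
human_decision = (merit - 0.8 * group + fatigue > 0).astype(int)

# Behavioral cloning: learn to predict the human decision from the
# same features the human saw (merit and group membership).
model = LogisticRegression().fit(np.column_stack([merit, group]), human_decision)

# The clone strips out the noise but keeps the group penalty,
# applying it to every case without exception.
for g in (0, 1):
    rate = model.predict(np.column_stack([merit, np.full(n, g)])).mean()
    print(f"approval rate if everyone were group {g}: {rate:.2f}")
```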

The Result

"Bias Laundering"—where an algorithm takes historical discrimination (e.g., redlining or disparate impact) and launders it through complex mathematics to make it appear objective and neutral.

AI doesn't just automate our decisions—it industrializes our inconsistencies and launders our biases through mathematical complexity.

Our Policy Framework: Constitutional AI

We argue that "human-level performance" is a dangerous safety standard for AI. Instead, we advocate for Constitutional AI (CAI) as the regulatory gold standard.

From Inference to Constraint

We must stop asking AI to guess our values from our history (which is flawed) and start explicitly coding our values as constraints.

The Solution

High-stakes models must operate under a "Constitution"—a set of explicit, non-negotiable rules (e.g., "Do not discriminate based on protected class") that override the training data whenever the two conflict.
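As a rough illustration (hypothetical names and thresholds, not a reference implementation of any deployed system), a constitutional rule can be expressed as a hard check that overrides the learned model whenever protected-class membership would change the outcome:

```python
# Minimal sketch (hypothetical names, not a production system): an explicit,
# non-negotiable rule that overrides the trained model whenever they conflict.
from dataclasses import dataclass

@dataclass
class Applicant:
    credit_score: int
    protected_class: str   # must never drive the outcome

def trained_model_score(applicant: Applicant) -> float:
    """Stand-in for a model learned from historical (possibly biased) data."""
    score = applicant.credit_score / 850
    if applicant.protected_class == "group_b":   # bias absorbed from history
        score -= 0.15
    return score

def constitutional_decision(applicant: Applicant, threshold: float = 0.6) -> bool:
    # Constitutional rule: the decision must be invariant to protected class.
    # Evaluate the model under every value of the protected attribute; if the
    # outcomes differ, the explicit rule overrides the training data.
    outcomes = set()
    for cls in ("group_a", "group_b"):
        hypothetical = Applicant(applicant.credit_score, cls)
        outcomes.add(trained_model_score(hypothetical) >= threshold)
    if len(outcomes) > 1:
        # Conflict detected: fall back to the protected-class-blind criterion.
        return applicant.credit_score / 850 >= threshold
    return outcomes.pop()

print(constitutional_decision(Applicant(credit_score=560, protected_class="group_b")))
```

The design point is that the constraint is checked explicitly at decision time rather than hoped for as an emergent property of the training data.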

Key Principle

Constitutional AI ensures that AI systems align with our stated values, not our historical failures, creating transparent and accountable decision-making in critical infrastructure.

Strategic Focus Areas

1. Legislative Oversight

Working with the Senate Commerce Committee to update the NIST AI Risk Management Framework (RMF). We are pushing for "Explicit Normative Constraints" to be classified as a required safety control for high-risk systems.

2. Civil Rights Protection

Partnering with the Congressional Hispanic Caucus (CHC) to identify and ban "Proxy Discrimination" in housing and lending algorithms, ensuring that AI advances equity rather than automating exclusion.
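One illustrative way to surface proxy discrimination, sketched below on synthetic data with an arbitrary threshold (not ACI's audit standard), is to measure how strongly each nominally neutral feature tracks the protected class before it is allowed into a housing or lending model:

```python
# Minimal sketch (synthetic data, hypothetical threshold): flagging a
# "neutral" feature that functions as a proxy for a protected class.
import numpy as np

rng = np.random.default_rng(1)
n = 10_000

protected = rng.integers(0, 2, size=n)            # protected class (0/1)
# ZIP code that correlates strongly with protected class (legacy of redlining).
zip_segregated = (protected + (rng.random(n) < 0.1)) % 2
income = rng.normal(50_000, 10_000, size=n)       # legitimate feature

def proxy_strength(feature, protected) -> float:
    """Absolute correlation between a candidate feature and the protected class."""
    return abs(np.corrcoef(feature, protected)[0, 1])

for name, feature in [("zip_segregated", zip_segregated), ("income", income)]:
    r = proxy_strength(feature, protected)
    flag = "PROXY - exclude or audit" if r > 0.4 else "ok"
    print(f"{name}: |corr with protected class| = {r:.2f} -> {flag}")
```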

3. The "Mirror" Audit

Promoting the use of AI not to replace human judgment, but to audit it—using consistency checks to identify when human decision-makers are deviating from their own ethical ideals.
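A minimal sketch of the idea (synthetic data, hypothetical review threshold): fit a consistent model to a decision-maker's own past rulings, then flag new rulings that sharply deviate from that revealed standard.

```python
# Minimal sketch (illustrative only): a "mirror" audit that fits a consistent
# model to a decision-maker's own past rulings, then flags new rulings that
# deviate sharply from that same revealed standard.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(2)

# Past rulings: outcome mostly tracks case severity, plus day-to-day noise.
severity = rng.normal(size=2_000)
past_rulings = (severity + rng.normal(scale=0.3, size=2_000) > 0).astype(int)
mirror = LogisticRegression().fit(severity.reshape(-1, 1), past_rulings)

# New rulings to audit; the third looks out of character.
new_severity = np.array([1.8, -1.5, 1.6])
new_rulings = np.array([1, 0, 0])

p = mirror.predict_proba(new_severity.reshape(-1, 1))[:, 1]
for s, ruling, prob in zip(new_severity, new_rulings, p):
    if abs(ruling - prob) > 0.8:    # hypothetical review threshold
        print(f"severity={s:+.1f}: ruling {ruling} deviates from own pattern "
              f"(p={prob:.2f}) -> flag for review")
```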

Leadership

Alberto Rocha, Director

Researcher and author of "The Mirror Effect: How AI's Consistency Exposes the Flaw in Human Moral Preference."

The author of 19 books on AI and host of the 200-episode podcast "AI and Us: Exploring Our Future," Rocha has written the forthcoming "AI is More Human than Humans: A Mirror to Our Better Angels," the first practitioner's guide to implementing the NIST AI RMF and federal AI mandates (the executive orders "Promoting the Use of Trustworthy Artificial Intelligence in the Federal Government" and "Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence," and "Winning the Race: America's AI Action Plan") through Consistency Auditing, a methodology that makes ethical AI governance actionable for the 430+ federal agencies required to comply.

A Congressional appointee, professional speaker with 40 years of experience, and passionate advocate for algorithmic accountability, Rocha connects the civil rights movement's lessons to today's AI challenges.

Congressional Appointee · 19 Books Published · 200-Episode Podcast · 40 Years Experience

Advisory & Coalition Support

Our work is informed by a diverse coalition of technical and civil rights leaders, including:

Dolores Huerta

President, Dolores Huerta Foundation

Angel Luevano

Civil Rights Leader

Technical Safety Community

Aligned with Stuart Russell's principles

Our coalition bridges the gap between civil rights advocacy and technical AI safety, ensuring that policy solutions are both technically sound and socially just.

Get In Touch

Reach out to discuss policy research, request briefings, or inquire about collaboration opportunities with congressional offices and federal agencies.
