About Us

The Algorithmic Consistency Initiative, LLC

Bridging Technical Safety and Civil Rights

Official Launch: California Secretary of State Approval

The Algorithmic Consistency Initiative is now officially registered with the California Secretary of State as a policy research organization and think tank. Led by Angel Luevano, lead plaintiff in the historic Luevano v. Campbell case, ACI brings decades of civil rights advocacy to the challenge of AI governance.

Read the Full Press Release

Mission

The Algorithmic Consistency Initiative (ACI) is a policy research organization dedicated to preventing the automation of historical bias. We bridge the gap between technical AI alignment and civil rights law, advocating for safety standards that move beyond "data fairness" to address the fundamental risks of "Behavioral Mimicry" in critical infrastructure.

The Core Problem: The Consistency Paradox

Current AI policy focuses heavily on "unbiased data." However, our research identifies a deeper, structural flaw in how AI models are trained: The Consistency Paradox.

Human Error

Human judgment is biologically inconsistent. Research shows that decisions in high-stakes environments (judicial rulings, medical triage) fluctuate wildly based on cognitive fatigue, hunger, and time of day.

AI Scale

Standard training methods (Inverse Reinforcement Learning) teach AI to mimic these human decisions. Because AI does not suffer from fatigue, it does not just copy our skills; it industrializes our cognitive failures, executing flawed or biased judgments with perfect, unblinking consistency.

The Result

"Bias Laundering"—where an algorithm takes historical discrimination (e.g., redlining or disparate impact) and launders it through complex mathematics to make it appear objective and neutral.

AI doesn't just automate our decisions—it industrializes our inconsistencies and launders our biases through mathematical complexity.

Strategic Focus Areas

1. Legislative Oversight

Working with the Senate Commerce Committee to update the NIST AI Risk Management Framework (RMF). We are pushing for "Explicit Normative Constraints" to be classified as a required safety control for high-risk systems.

2. Civil Rights Protection

Partnering with the Congressional Hispanic Caucus (CHC) to identify and ban "Proxy Discrimination" in housing and lending algorithms, ensuring that AI advances equity rather than automating exclusion.

3. The "Mirror" Audit

Promoting the use of AI not to replace human judgment, but to audit it—using consistency checks to identify when human decision-makers are deviating from their own ethical ideals.

Our Policy Framework: Constitutional AI

We argue that "human-level performance" is a dangerous safety standard for AI. Instead, we advocate for Constitutional AI (CAI) as the regulatory gold standard.

From Inference to Constraint

We must stop asking AI to guess our values from our history (which is flawed) and start explicitly coding our values as constraints.

The Solution

High-stakes models must operate under a "Constitution"—a set of explicit, non-negotiable rules (e.g., "Do not discriminate based on protected class") that override the training data whenever the two conflict.

Key Principle

Constitutional AI ensures that AI systems align with our stated values, not our historical failures, creating transparent and accountable decision-making in critical infrastructure.

Leadership

Alberto Rocha, Director

Researcher and author of "The Mirror Effect: How AI's Consistency Exposes the Flaw in Human Moral Preference."

Rocha is the author of 19 books on AI and host of the 200-episode podcast "AI and Us: Exploring Our Future." His forthcoming book, "AI is More Human than Humans: A Mirror to Our Better Angels," offers the first practitioner's guide to implementing the NIST AI RMF and the federal AI executive-order mandates ("Promoting the Use of Trustworthy Artificial Intelligence in the Federal Government," "Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence," and "Winning the Race: America's AI Action Plan") through Consistency Auditing, a methodology that makes ethical AI governance actionable for the 430+ federal agencies required to comply.

A Congressional appointee, professional speaker with 40 years of experience, and passionate advocate for algorithmic accountability, Rocha connects the civil rights movement's lessons to today's AI challenges.

Congressional Appointee · 19 Books Published · 200-Episode Podcast · 40 Years of Experience

Angel Luevano, Co-Founder

Angel Luevano is a civil rights figure whose legacy is anchored in his role as the lead plaintiff in Luevano v. Campbell—the landmark class-action that successfully challenged the federal government's "Professional and Administrative Career Examination" (PACE) for its discriminatory impact on minority applicants. The resulting 1981 Luevano Consent Decree eliminated the PACE exam and created equitable hiring pathways, fundamentally reforming federal employment for decades.

Mr. Luevano served a distinguished 30-year career within the very system he helped reform, holding high-ranking federal positions where he drafted regulations on detecting employment discrimination. He now co-leads the Algorithmic Consistency Initiative (ACI), ensuring the legal principle established in his decree—that selection systems must be provably fair—is engineered into the algorithmic tools of the 21st century.

A graduate of UC Law San Francisco, he is authoring a memoir on his historic case and resides in Sacramento.

Civil Rights Pioneer · Luevano v. Campbell · 30 Years Federal Service · UC Law San Francisco

Advisory & Coalition Support

Our work is informed by a diverse coalition of technical and civil rights leaders, including:

Dolores Huerta

President, Dolores Huerta Foundation

Angel Luevano

Civil Rights Leader

Technical Safety Community

Aligned with Stuart Russell's principles

Our coalition bridges the gap between civil rights advocacy and technical AI safety, ensuring that policy solutions are both technically sound and socially just.

Get In Touch

Reach out to discuss policy research, request briefings, or inquire about collaboration opportunities with congressional offices and federal agencies.
