The Day AI Decided Who Gets to Work — and Why That Should Concern All of Us
On July 12, 2024, a U.S. federal court quietly changed how society must think about AI in hiring.
Bob McTaggart
1/24/2026 · 3 min read


The case, Mobley v. Workday, was not about abstract technology policy or hypothetical risk. It was about a person who kept trying to work — and kept being rejected by a system that offered no explanation, no transparency, and no visible human involvement.
According to the allegations, Mr. Mobley applied for more than a hundred roles and received rejection after rejection, often at odd hours. Anyone who has lived through long-term unemployment or repeated rejection understands what that does to a person. When rejection comes without context, it stops being informational and becomes psychological.
This matters.
Repeated, unexplained rejection doesn’t just affect income. It affects dignity. It erodes confidence. It creates anxiety and hypervigilance. A person begins to question not just their qualifications, but their worth.
I speak to this with care because I live with sanctuary trauma. Certain systems and institutional environments can still trigger PTSD responses for me, even years later. What makes those moments most destabilizing isn’t the decision itself — it’s the absence of a human presence. When there is no explanation, no accountability, no way to ask “why,” the nervous system fills in the blanks. Rejection starts to feel personal, permanent, and deserved.
Knowing that, I can only imagine how this experience may have felt for Mr. Mobley.
This is why Mobley v. Workday matters far beyond employment law.
The court’s ruling allowed the case to proceed on the theory that an AI hiring vendor could be treated as an agent of the employer under U.S. employment discrimination law. That distinction is significant. It signals a shift away from treating AI systems as neutral background tools and toward recognizing them as active participants in decisions that shape human lives.
But the deeper issue is not legal theory. It is governance.
The harm described in this case was not caused by AI itself. It was caused by automated decision-making deployed at scale without clear limits, meaningful human oversight, or provable accountability.
When systems quietly decide who advances and who is excluded — without explanation or recourse — the psychological impact compounds quickly. Research increasingly links opaque automated decision systems to elevated stress, anxiety, loss of agency, and long-term erosion of trust. People are being evaluated by mechanisms they do not understand and cannot challenge. Over time, that doesn’t just damage individuals. It damages confidence in institutions and in society itself.
From a governance perspective, this harm was preventable.
There should have been clear policies defining which hiring decisions automation was allowed to make independently and which required mandatory human review. Fully automated rejection decisions, especially where protected characteristics could be implicated, should have been prohibited without documented human confirmation.
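As a sketch of what that could look like, assuming illustrative decision types and a made-up policy table (nothing here reflects any real vendor's configuration): automation consults a decision-rights policy before acting, and anything not explicitly granted autonomy defaults to human review.

```python
# Illustrative decision-rights policy. The decision types and rules are
# assumptions for this sketch, not any real system's configuration.
AUTOMATION_POLICY = {
    "rank_candidates":      {"autonomous": True},   # low impact: ordering only
    "advance_to_screening": {"autonomous": True},   # positive outcome
    "reject_application":   {"autonomous": False},  # adverse outcome: human must confirm
}

def may_act_autonomously(decision_type: str) -> bool:
    """Permit autonomy only where policy explicitly grants it; default to deny."""
    rule = AUTOMATION_POLICY.get(decision_type)
    return bool(rule and rule["autonomous"])
```

Defaulting to deny means a decision type nobody thought to classify can never be automated by accident.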
There should have been binding human-in-the-loop requirements tied to decision impact. When a system recommendation results in rejection, repeated exclusion, or patterned outcomes across roles, a human should be required to pause, review, and take responsibility. That single safeguard could have interrupted what Mr. Mobley experienced early.
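A minimal sketch of that gate, with hypothetical helper names (queue_for_human_review, apply_automatically) and an assumed repeat-rejection threshold standing in for a real policy:

```python
from dataclasses import dataclass

@dataclass
class Recommendation:
    candidate_id: str
    decision: str          # e.g. "advance" or "reject"
    prior_rejections: int  # prior adverse outcomes for this candidate

def queue_for_human_review(rec: Recommendation) -> str:
    # Hypothetical: in practice this would open a case a named person must close.
    return f"review:{rec.candidate_id}"

def apply_automatically(rec: Recommendation) -> str:
    return f"applied:{rec.decision}:{rec.candidate_id}"

def route(rec: Recommendation) -> str:
    """Adverse or patterned outcomes pause for a human; low-impact ones pass."""
    if rec.decision == "reject" or rec.prior_rejections >= 3:
        return queue_for_human_review(rec)
    return apply_automatically(rec)
```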
There should also have been accessible, tamper-evident decision records capable of answering basic questions in real time: which system acted, what inputs were used, which criteria applied, whether a human reviewed the outcome, and which policy governed the decision at that moment. Without that record, there is no meaningful way to challenge a decision short of litigation.
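One plausible shape for such a record, assuming a hash-chained, append-only log (the field names are illustrative): each entry commits to the hash of the previous one, so any quiet edit to history breaks the chain and is detectable.

```python
import hashlib
import json
import time

def append_record(log: list, record: dict) -> dict:
    """Append a decision record that chains to the previous entry's hash,
    making after-the-fact edits to history detectable."""
    entry = {
        "record": record,  # system id, inputs, criteria, reviewer, policy version
        "prev_hash": log[-1]["hash"] if log else "0" * 64,
        "ts": time.time(),
    }
    entry["hash"] = hashlib.sha256(
        json.dumps(entry, sort_keys=True).encode()
    ).hexdigest()
    log.append(entry)
    return entry

log = []
append_record(log, {
    "system": "screening-model-v2",        # illustrative values throughout
    "decision": "reject",
    "human_reviewed": False,
    "policy": "hiring-automation-1.3",
})
```

A record like this can answer "which system, which inputs, which policy, was a human involved" on demand, rather than only under subpoena.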
Finally, there should have been mechanisms to detect and intervene in harmful patterns. Dozens or hundreds of rejections should never accumulate silently. In other regulated environments, that alone would trigger review, suspension of automation, and senior oversight.
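A sketch of that intervention point, with an assumed threshold; in practice the limit and the escalation path would be set by policy, not hard-coded:

```python
from collections import defaultdict

REJECTION_LIMIT = 5  # assumed for illustration; a real limit is a policy choice

_rejections = defaultdict(int)

def record_rejection(candidate_id: str) -> bool:
    """Count adverse outcomes per candidate; True means automation must
    stop and escalate this candidate to senior human review."""
    _rejections[candidate_id] += 1
    return _rejections[candidate_id] >= REJECTION_LIMIT
```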
Absent these safeguards, litigation became the only remaining path.
Courts are not hostile to innovation. They intervene when people are harmed and no other accountability mechanism exists. When systems operate at scale, governance must operate at the same scale — or society will use courts as the last line of defense.
The broader risk here is cultural.
If AI systems are allowed to quietly decide who belongs, who advances, and who is excluded — without explanation or appeal — we should expect rising anxiety, declining trust, and growing resistance to AI across society. People will not trust systems that make life-altering decisions without acknowledging their humanity.
AI did not create this harm.
Unaccountable decision-making did.
If leaders want public trust in AI, they must design for human impact first — not just efficiency, speed, or cost savings. Governance is not bureaucracy. It is the safeguard that protects people from silent harm and organizations from becoming the final repository of accountability.
That lesson is here now. The question is whether we choose to learn it before more people are hurt.
Bob McTaggart
Author (edited with AI)
Bob McTaggart | Veteran-led AI Governance & Trust Infrastructure
#ai #riskmanagement #trustedbyheroes