JUST NOW: Musk Clashes With Omar Over “Deportation Algorithms” — Crowd Gasps as Code Appears on Screen.
Washington, D.C. — What started as a routine oversight hearing on “AI and Immigration Enforcement” turned into a high-voltage showdown today, as Representative Ilhan Omar confronted Elon Musk over his company’s alleged role in building “deportation algorithms” for federal and local agencies.
In a packed committee room lined with cameras, activists, and lobbyists, Omar accused Musk’s firm of selling AI tools that “sort human beings like spam emails,” while Musk insisted his software merely “ranks risk, not people.”
The moment that shook the room came when Omar’s staff projected a large, blurred-out snippet of code onto the screens behind the witnesses, and she demanded:
“Show me where the Constitution fits into this function.”
The line — and the image — instantly became the defining symbol of a roaring national argument: Can AI ever be neutral in a system that decides who gets deported?
“You are sorting human beings like spam emails.”
The clash began when Omar questioned Musk about reports that one of his AI subsidiaries had signed contracts with enforcement agencies to provide “predictive risk scoring” on non-citizens — ranking people by how likely they were to overstay visas, commit crimes, or “violate immigration conditions.”
“Your pitch decks don’t say ‘deportation engine,’” Omar said, leaning over the microphone. “They say ‘risk optimization’ and ‘efficiency at scale.’ But when these systems get deployed, real people lose their jobs, their homes, sometimes their families — not because a judge heard their story, but because a model flagged them as ‘high risk.’”
She paused, then delivered her first stinging line:
“You are sorting human beings like spam emails — except when Gmail gets it wrong, no one gets put on a plane.”
Musk shook his head.
“With respect, that’s not what our software does,” he replied. “We don’t deport anyone. We provide tools that help agencies prioritize limited resources — to focus attention on the highest-risk cases based on data. Our systems rank risk, not people.”
“Ranks risk, not people” — or just a new label?
Omar seized on that phrase.
“You say you ‘rank risk, not people,’” she repeated. “But the person who gets a knock on the door at 5 a.m. doesn’t see a risk score. They see officers. They see cuffs. They see their kids crying in the hallway.”
She raised a stack of printed documents.
“These are internal briefs from agencies using your tools,” she said. “They talk about ‘red list’ cases. They talk about ‘removal priorities’ driven by scores. They talk about ‘flagged subjects.’ That’s not abstract risk. Those are people.”
Musk countered that any large system needs triage.
“Before these tools,” he said, “case officers used hunches, biases, and whatever files happened to be on top of the stack. At least with AI, you can see inputs, you can audit performance, you can tune it over time. It’s more objective than someone’s gut feeling.”
Omar’s reply was sharp:
“Objective compared to what?” she asked. “If you feed a model years of unequal enforcement data, it will learn that inequality as ‘truth.’ You’re not deleting bias. You’re compiling it.”
Code on the big screen
The hearing’s turning point came when Omar signaled to her staff.
“I want to show you something,” she said.
The overhead screens flickered, then displayed a large block of blurred-out pseudo-code: variables, conditionals, weights — all anonymized to protect proprietary details, according to the committee. The exact logic was unreadable, but the structure was unmistakable: an input pipeline, a scoring function, a decision threshold.
“This is a simplified representation of your ‘risk model’ as described in technical documentation,” Omar said. “We’ve removed specifics. But the structure is accurate.”
She turned to Musk.
“You keep telling the public your systems are tools, not verdicts,” she said. “That they’re just one input among many. So let me ask you this.”
She pointed at the floating function on the screen.
“Show me where the Constitution fits into this function.”
The room went still. Even some of the committee members paused, eyes raised to the code.
“Where is the due process variable?” Omar continued. “Where does it check for the right to counsel? Where does it encode the idea that in this country, the government doesn’t get to punish people based solely on a secret score they never see and can’t contest?”
She let the questions hang.
“Because from here,” she said, “it looks like the only things that matter are whatever your engineers decide to feed it — and whatever outcomes your paying clients reward.”
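The structure Omar described on screen (an input pipeline, a scoring function, a decision threshold) is simple enough to sketch in a few lines of Python. The version below is purely hypothetical: every field name, weight, and cutoff is invented for illustration, and none of it comes from the blurred code shown at the hearing or from any company’s actual software.

# Hypothetical illustration only; not the code shown at the hearing.
from dataclasses import dataclass

@dataclass
class CaseRecord:
    # Input pipeline: features an agency might feed into a scoring model.
    prior_contacts: int        # past enforcement encounters
    visa_days_remaining: int   # days until visa expiry
    employment_verified: bool

# Scoring function: a weighted sum of the inputs (weights are made up).
WEIGHTS = {
    "prior_contacts": 0.5,
    "visa_days_remaining": -0.01,
    "employment_verified": -0.3,
}

def risk_score(case: CaseRecord) -> float:
    return (WEIGHTS["prior_contacts"] * case.prior_contacts
            + WEIGHTS["visa_days_remaining"] * case.visa_days_remaining
            + WEIGHTS["employment_verified"] * (1.0 if case.employment_verified else 0.0))

# Decision threshold: anything above the cutoff gets flagged for review.
THRESHOLD = 0.75

def flag_for_review(case: CaseRecord) -> bool:
    return risk_score(case) >= THRESHOLD

# One hypothetical record run through the pipeline.
example = CaseRecord(prior_contacts=2, visa_days_remaining=30, employment_verified=False)
print(risk_score(example), flag_for_review(example))

Whatever the real inputs and weights, the shape is the same: features go in, a single number comes out, and anything above the cutoff is flagged. That narrowness is what Omar’s questions were aimed at.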
Musk: “If you want zero risk models, you’ll get zero innovation.”
Musk defended his company’s approach, arguing that banning such tools would not make systems fairer — just less transparent.
“If you outlaw these models,” he said, “you don’t get a world without triage. You get a world where that triage goes back into filing cabinets and backroom conversations. At least when you have code, you can audit it.”
He emphasized that his contracts include provisions for independent testing and that agencies, not his company, set enforcement policy.
“We’re not writing immigration law in Python,” he said. “We’re optimizing within frameworks governments already created. If you have a problem with those rules, change the law — don’t shoot the calculator.”
He also warned that if U.S. agencies were barred from using advanced tools, others would rush in.
“If we step back,” he said, “companies with fewer ethics and less scrutiny will gladly fill this space. Then you’ll have the same systems — or worse — with zero accountability.”
Omar: “Constitutional rights are not an ‘input parameter’”
Omar responded that “accountability” can’t just mean better dashboards for agency managers.
“Constitutional rights are not an ‘input parameter’ in your model,” she said. “They are limits on what the state is allowed to do — full stop. And when your tools quietly shift who gets targeted, who gets visited, who gets fast-tracked for removal, you are effectively redrawing those lines without a single vote in this chamber.”
She proposed a temporary moratorium on the use of predictive AI in deportation decisions until Congress can establish clear rules on transparency, contestability, and bias audits.
“If software is going to whisper into the government’s ear about who is ‘high risk,’” she said, “then the people being whispered about deserve to know what was said.”
Public opinion: safety vs. secrecy
Outside the committee room, protesters and counter-protesters chanted competing slogans: “NO ALGORITHMS IN ICE” on one side, “USE EVERY TOOL TO STOP CRIME” on the other.
Online, clips of the hearing spread rapidly. The image of ominous code hovering over Musk and Omar, with her demand — “Show me where the Constitution fits into this function” — became an instant meme and a serious rallying cry.
Some viewers sided with Musk, arguing that refusing to use sophisticated tools in enforcement is “tying one hand behind our back” in a dangerous world. Others echoed Omar’s warning that “secret math” deciding who gets targeted is fundamentally incompatible with open justice.
What both sides seemed to recognize is that the debate is bigger than Musk, bigger than Omar, and bigger than immigration policy alone.
It’s about a country trying to decide where to draw the line between efficiency and rights in an era when the state can outsource its gut instincts to a machine.
For now, there is no clear answer — only a question, spelled out in code on a giant screen:
When algorithms enter the courtroom and the border checkpoint, who makes sure the law still lives outside the function?