The DOJ Just Settled Its First DEI Case & IBM Paid $17 Million
The DOJ alleged four specific practices: IBM used a “diversity modifier” that tied employee bonus compensation to hitting demographic targets; altered interview criteria based on race or sex through “diverse interview slates”; set race and sex demographic goals for business units and factored those into employment decisions; and restricted access to certain training, mentorship, and leadership development programs based on race or sex. IBM denied engaging in unlawful conduct and cooperated with the government’s investigation, making early voluntary disclosures from its own internal review. The settlement is not an admission of liability by IBM. IBM also terminated or modified various DEI-related programs as part of the resolution.
The enforcement framework is worth understanding clearly. The False Claims Act (originally a Civil War-era statute) allows the government to recover damages when federal contractors knowingly make false certifications. Here, the DOJ’s theory is that IBM certified compliance with anti-discrimination requirements as a condition of its federal contracts, while maintaining practices the government contends violated those same requirements. Acting Attorney General Todd Blanche framed it directly: companies cannot evade anti-discrimination law by repackaging the same practices as DEI.
For anyone in hiring and recruiting, this changes the compliance conversation. A $17 million payment from a company of IBM's size is a signal. If the DOJ can extract a settlement from IBM with this enforcement framework, every smaller federal contractor with fewer legal resources needs to take note. The Civil Rights Fraud Initiative has explicitly signaled that additional investigations are ongoing. Review your hiring practices, your compensation structures, and any programs with eligibility criteria tied to demographic characteristics, not after you get the inquiry, but now.
Gallup’s New Data Explains Exactly Why Your AI Rollout Isn’t Working
Gallup surveyed 23,717 U.S. employees in February 2026 and found that giving workers access to AI tools is not the same as getting them to use those tools. Among employees at companies that make AI available, 67% of leaders use it frequently, meaning a few times a week or more. That drops to 52% of managers, 50% of project managers, and 46% of individual contributors. Access alone clearly isn’t the differentiator.
The factors that actually drive adoption are integration and manager support. Among employees who strongly agree that AI integrates well with their existing systems, 88% use it frequently, compared with 55% of those who don't strongly agree. When managers actively champion AI use, employees are 9.3 times as likely to say it has transformed how work gets done in their organization, and 7.8 times as likely to say it gives them more opportunities to do their best work. That's the entire story of why most AI rollouts underperform: companies buy licenses, announce the initiative, and expect adoption to follow. It doesn't.
A New Academic Paper Explains Why Every CEO Sees the AI Layoff Cliff and Races Toward It Anyway
Two economists from the University of Pennsylvania and Boston University published a working paper that puts a formal economic model behind something many people have been sensing: companies are replacing workers with AI at a pace that’s damaging the economy, and they can see it happening, and they’re doing it anyway.
The mechanism is a demand externality. When a company lays off workers and replaces them with AI, those workers stop spending money. That lost consumer demand hurts every business in the market, but the company doing the automation captures all of the cost savings while bearing only a fraction of the demand loss it creates. The damage gets distributed across competitors. The AI-attributed layoff numbers we've been tracking, with AI cited as the primary driver in an accelerating share of cuts, are the real-world version of the dynamic the researchers model.
The result is a Prisoner’s Dilemma. Every company’s best individual move is to automate as aggressively as possible, even though restraint would leave everyone (workers and company owners both) better off collectively. No voluntary agreement among companies can hold, because any firm that holds back while competitors don’t simply loses ground. The researchers tested six potential policy fixes. Five fall short: worker retraining helps but doesn’t change the underlying incentive; UBI raises the floor on living standards but leaves automation rates unchanged; capital income taxes don’t alter the per-task automation margin; worker equity stakes are a partial fix at best; and collective bargaining can’t hold because defection is always the dominant strategy. The only fix that works is a Pigouvian automation tax, set equal to the demand damage each company’s layoffs impose on the rest of the market, similar in logic to a carbon tax on emissions.
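To make the incentive structure concrete, here is a toy two-firm version of the game. The numbers (SAVINGS, DEMAND_LOSS, and the even split of demand damage) are made-up illustrative assumptions, not parameters from the paper, and the paper's model is continuous rather than a binary choice, but the payoff logic is the same:

```python
# Hypothetical parameters for illustration only.
SAVINGS = 10.0      # cost savings captured entirely by the automating firm
DEMAND_LOSS = 16.0  # total consumer demand destroyed by its layoffs
N_FIRMS = 2         # demand loss is spread evenly across every firm

def payoff(i_automates: bool, j_automates: bool, tax: float = 0.0) -> float:
    """Firm i's payoff given both firms' choices.

    Each automating firm keeps its full savings (minus any tax), but the
    demand it destroys is split evenly across all firms in the market.
    """
    p = 0.0
    if i_automates:
        p += SAVINGS - tax
    for firm_automates in (i_automates, j_automates):
        if firm_automates:
            p -= DEMAND_LOSS / N_FIRMS
    return p

# Without a tax, automating is the dominant strategy...
assert payoff(True, False) > payoff(False, False)   # +2 > 0
assert payoff(True, True) > payoff(False, True)     # -6 > -8
# ...even though mutual restraint beats mutual automation.
assert payoff(False, False) > payoff(True, True)    #  0 > -6

# A Pigouvian tax equal to the damage a firm imposes on the REST of the
# market (the (N-1)/N share of its demand loss) flips the incentive.
pigou = DEMAND_LOSS * (N_FIRMS - 1) / N_FIRMS       # = 8.0
assert payoff(True, False, tax=pigou) < payoff(False, False)
```

The assertions trace the trap directly: each firm's best reply is always to automate, the resulting equilibrium is worse for both than mutual restraint, and a tax sized to the externalized damage makes restraint individually rational again.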
Two counterintuitive findings: more competition makes the problem worse, because each firm feels a smaller share of the damage. And better AI makes it worse too, because each company perceives a market-share gain from automating faster than rivals, but at the symmetric equilibrium, those gains cancel, leaving only more displacement. This is a working paper, not yet peer-reviewed, but the methodology is rigorous. The argument is going to show up in policy debates, and having the formal framework behind it matters.
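The competition finding drops out of the same toy arithmetic (same made-up SAVINGS and DEMAND_LOSS as above, illustrative only): as the number of firms grows, each automating firm internalizes a shrinking 1/n slice of the demand it destroys, so its private gain rises toward the full savings even though the social effect of each automation decision never changes:

```python
SAVINGS, DEMAND_LOSS = 10.0, 16.0  # hypothetical parameters, as before

for n in (2, 4, 8, 100):
    private_gain = SAVINGS - DEMAND_LOSS / n   # what the automating firm feels
    externalized = DEMAND_LOSS * (n - 1) / n   # damage dumped on rivals
    social_gain = SAVINGS - DEMAND_LOSS        # constant: -6.0 at every n
    print(f"{n:>3} firms: private {private_gain:+.2f}, "
          f"externalized {externalized:.2f}, social {social_gain:+.2f}")
```

At 2 firms the private gain is +2.00; at 100 firms it is +9.84, with 15.84 of the 16.00 in demand damage pushed onto competitors, while the social gain stays at -6.00 throughout. More fragmentation means a stronger private incentive to do something that is collectively just as destructive.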
Frequently Asked Questions
What exactly did the DOJ allege IBM did?
The DOJ alleged IBM used a “diversity modifier” tying employee bonuses to hitting demographic targets, altered interview criteria based on race or sex, set demographic goals for business units, and restricted access to training and mentorship programs based on protected characteristics. IBM denied wrongdoing, and the settlement is not an admission of liability. IBM also cooperated with the investigation and voluntarily changed the practices at issue.
Does this settlement make DEI programs illegal for federal contractors?
Not categorically. The DOJ’s theory targets specific practices (compensation tied to demographic targets, restricted program access based on race or sex, altered hiring criteria) rather than DEI broadly. That said, the enforcement landscape is shifting fast. Federal contractors should review any programs with eligibility or compensation criteria linked to protected characteristics with legal counsel now.
Why aren’t employees using the AI tools their companies provide?
According to Gallup’s February 2026 survey, the biggest barriers are skepticism about relevance (39% of non-users don’t believe AI can help with their work), ethical concerns (43%), and habit (46% prefer doing things the way they always have). The strongest driver of adoption is whether a worker’s direct manager actively supports and models AI use.
Why do companies keep automating if AI-driven layoffs hurt the broader economy?
A working paper from UPenn and Boston University explains it as a structural trap. Each company captures the full cost savings from automation, but only absorbs a fraction of the consumer demand it destroys; the rest falls on competitors. The rational move for every individual company is to automate aggressively, even though collective restraint would leave everyone better off. No voluntary agreement can hold because defecting is always the dominant strategy.
