The Phyllis Schlafly Report
By John and Andy Schlafly
Trump’s spectacular capture of Nicolás Maduro, the Communist dictator of Venezuela, in January was reportedly assisted by artificial intelligence (AI). Specifically, the AI program Claude is used in our military and by 8 of the 10 largest companies.
Secretary of War Pete Hegseth made decisions based on multiple scenarios presented by Palantir using Claude. And this was not the first time that the U.S. Army had benefited from this AI tool.
This stunningly successful military operation involved fewer than 200 American troops, of whom 7 were injured; several of them were visited by President Trump last Friday at Fort Bragg. Three helicopter pilots were badly wounded in their legs by machine gun fire, Trump said, while 83 soldiers defending Maduro were killed, according to Venezuela.
AI company Anthropic licenses Claude under an Acceptable Use Policy (AUP) that limits how it can be used. Rejecting these limitations, War Secretary Pete Hegseth threatens to eradicate Claude not just from our military, but from every vendor that sells products and services to our military, by labeling it a “supply chain risk.”
This type of exclusion is usually invoked only for foreign adversaries of the United States. But the irritation of Hegseth and the top brass in our military at Anthropic’s restrictions on the use of its product has grown to the point where the harsh punishment of complete banishment is being considered.
Just imagine if the scientists who worked on the Manhattan Project to develop the atomic bomb had placed restrictions on its use by the military. The military should be able to use unrestricted AI to advance our national security as our elected president thinks best.
Anthropic is not the only AI company placing restrictions on the use of its tools; other AI companies have also attempted to limit the military applications of their programs.
Several competitors to Anthropic, including OpenAI, Google, and xAI, are champing at the bit to secure a contract with the U.S. Armed Services and may be willing to drop the restrictions Anthropic currently insists on maintaining. But the Trump Administration complains that it would be enormously difficult to eradicate all current uses of Claude, including those by contractors like Palantir, in order to switch to another AI tool.
Military spokesman Sean Parnell stated, “The Department of War’s relationship with Anthropic is being reviewed. Our nation requires that our partners be willing to help our warfighters win in any fight. Ultimately, this is about our troops and the safety of the American people.”
Our military’s contract with Anthropic is worth only about $200 million, a mere pittance compared with Anthropic’s total of $14 billion in annual revenue. Anthropic may be concerned about losing more sizable business if its tool becomes associated with military attacks.
The Vatican released a statement a year ago entitled “Antiqua et Nova” which warned that “autonomous weapons systems, which are capable of identifying and striking targets without direct human intervention, are a cause for grave ethical concern. ... No machine should ever choose to take the life of a human being.”
But there is no guarantee that China would play by the rules of Western Civilization. Our military’s AI needs to be advanced enough to defend against China’s AI in a war, even as it remains reasonable to limit AI so that it never makes an unsupervised decision to kill.
Meanwhile, an improved Chinese AI program created a video of a fistfight between actors Brad Pitt and Tom Cruise that looks as realistic as a Hollywood movie. Though it may violate the copyrights of the movies on which the AI program was trained, the video has gone viral and thrown Hollywood into a tailspin.
The New York Times described this 15-second clip as “more cinematic than anything so far” from AI. “In next to no time, one person is going to be able to sit at a computer and create a movie indistinguishable from what Hollywood now releases,” commented Rhett Reese, screenwriter of “Deadpool” and other movies.
The creator of this rooftop fistfight video between Pitt and Cruise was the Irishman Ruairi Robinson. He said on X that it was generated by merely “a 2 line prompt in Seedance 2,” an AI film-generating tool by the Chinese company ByteDance.
ByteDance is the same Chinese company that developed TikTok, which upended social media platforms in the U.S. with popular short-reel videos.
Suddenly, a flood of potentially copyright-infringing material created by this Chinese AI tool is going viral online, using characters and scenes copied from popular movies. Those who would like to change a movie's ending may be able to do so privately using AI, but posting new endings could violate copyrights.
John and Andy Schlafly are sons of Phyllis Schlafly (1924-2016) and lead the continuing Phyllis Schlafly Eagles organizations with writing and policy work.
These columns are also posted on PhyllisSchlafly.com, pseagles.com, and Townhall.com.