The Pentagon's Silicon Valley Problem

NDAjam | 309 points

The examples in the article are rather cherry-picked. Failures in Vietnam can hardly be blamed on an IBM 360 alone. The Hamas attack might have surprised Israel, but the Iron Dome has been tech that works well in recent years. The US warned anybody who wanted to listen (not many) that Russia was about to attack Ukraine. And it was a bunch of rather theoretical physicists who built the atomic bomb.

dosinga | a month ago

I think - like a lot of media reporting on the space - this overgeneralizes (heh) artificial intelligence. The predictive aspects of ML have been in use in modern militaries for _decades_, and the opening graf handwavily implies that an LLM was a bigger chunk of the perceived intelligence failure of the October 7 attack.

That an LLM is part of a system that includes a large amount of ML is not surprising. It's a great human interface. Do I for a second believe that it played a much larger role, implied to be responsible in any non-negligible way for missing the attack? Of course not.

My point here is that ML continues to play a role, ML continues to both succeed and fail, and ML will continue to be imperfect, even more so as it competes against adversarial ML. Blaming imperfect tools for inevitable failures is not a useful exercise, and certainly not a "problem" considering that the alternative is even more failure-prone humans.

nkozyra | a month ago

”The AI system knows everything about Hamas: what they said, what they published […] it analyzes behavior, predicts risks, and raises alerts.”

”Well aware of this, Hamas members fed their enemy the data that they wanted to hear. The AI system, it turned out, knew everything about the terrorist except what he was thinking.”

When your opponent can see everything you do and hear everything you say, the only defence is privacy. In the novel The Three Body Problem this is taken to an extreme: the only privacy is inside the human mind and so select individuals are allowed to make decisions based on strategies known only to them which they have never said aloud. Science fiction has become reality.

gorgoiler | a month ago

I think the most important lesson is that it’s borderline impossible to design any good system without clear use cases.

Ukraine has these use cases, and high motivation to tackle them. Ukrainians are controlling the battlefield with commodity computers: https://en.defence-ua.com/news/how_the_kropyva_combat_contro... They have sunk multiple Russian warships with long-range naval drones: https://www.bbc.com/news/world-europe-68528761 They recently started large-scale testing of cheap flying drones with computer-vision-based target recognition on board: https://www.forbes.com/sites/davidhambling/2024/03/21/ukrain...

However, the US is at peace. Which is a great thing in itself, but it means it’s too easy for them to waste billions of dollars developing technologies that look awesome in PowerPoint but are useless in practice.

Const-me | a month ago

I can’t speak for Israeli tech, but the Pentagon has an image problem in the valley; I don’t believe they are getting the best recruits, even for contracting companies like Palantir. Our generation is closer to Iraq and Vietnam than to WW2, and many of the bright minds are first-generation immigrants. Despite the more recent image problems ad tech has (now that people are seeing more of how the sausage is made), it’s still sexier to work at big consumer companies than in defense. You’d have to pay my colleagues more to work for the US government, even indirectly; instead it’s often less (and often with fewer freedoms in what they do off the clock).

And now, what I’m reading is that if you do go contract for the military in AI, your function is partly a kind of scapegoat insurance. Blame those eggheads with their computers who can be fooled, not the fools who hired them and acted on that signal above others, I guess?

The idea that a chatGPT model would have been a deciding factor in preventing 10/7 is laughable on its face to anyone who works in the industry, except maybe a consultant selling LLMs to the IDF.

mattnewton | a month ago

'Caught by surprise' is a weird description. Israeli press has repeatedly run stories about how frontline analysts sounding the alarm were ignored.

That could be due to things like sexism, ageism or discrimination against conscripts, or it could be due to the settler organisations having their people in government and a strong wish to resettle the Gaza strip.

Either way, the signals were there; they had been watching the preparations and exercises for a year or so. Even if the resistance groups had kept them secret, even a mediocre officer in intelligence or the army should have been able to conclude from 'first principles' and what they were doing that there would eventually be a violent response.

cess11 | a month ago

I personally think this is the most interesting part of the entire article:

'He then focused on defense work, lamenting that people with the relevant tech skills to build the weapons of the future were “largely refusing to work with the defense sector.”'

I wonder to what extent that is still true. There is clearly a lot of money flowing, and some have definitely followed the money (Palantir exists, after all).

A4ET8a8uTh0 | a month ago

I’m shocked by the amount of taxpayer money that has gone to waste. So many unsuccessful projects; the infamous incompetence of Big Tech looks like nothing compared to the US military-industrial complex’s.

So this is where all the surplus of Western civilization has been going for the last three quarters of a century. Now the surplus is no more, and soon to turn negative as the critical resources and energy sources run out. I hope the US loses its global dominance as soon as possible. I’m sorry, but at no point in time have they been just rulers over planet Earth. Entire countries like mine have been demolished and entire populations killed or forced to migrate, so that you can buy the new Xbox for your child, and your neighbor can buy a new yacht.

Aerbil313 | a month ago

I was looking for a mention of the Strategic Defense Initiative, aka "Star Wars". Among the technical issues the program never overcame was the ability to adequately recognize incoming missiles and guide anti-missile defenses to the target. Much like the Igloo White and Assault Breaker systems mentioned in the article, it failed to distinguish decoys from real targets.

cratermoon | a month ago

It seems difficult to escape from the eternal truth of measure and countermeasure … a fool with a tool is still a fool …

quantum_state | a month ago

At the end of the article, Cockburn complains that asking ChatGPT about Palantir's work with the IDF gets a hallucination in response. I just queried duckduckgo.com with "IDF Palantir" and received links to several news articles from relatively mainstream news sources. If the point is that LLMs are currently unreliable, then sure. If the point is that we can't know whether Palantir is working with the IDF, then there is evidence available.

thyrsus | a month ago

Looks like ops staff were overwhelmed by on-call false alarms.

Yeah the best way to fix errors is to ... just ignore them.

I think any sophisticated system that requires a bureaucratic staff to operate is doomed to fail.

est | a month ago

> Nevertheless, Hamas’s devastating attack on October 7 caught Shin Bet and the rest of Israel’s multibillion-dollar defense system entirely by surprise.

Somebody high up in the Israeli military was probably like, "After very careful consideration, sir, I've come to the conclusion that your new defense system sucks."

https://www.youtube.com/watch?v=fyFB2p1yrQI

bitwize | a month ago

The subtitle is rather telling, when combined with the title:

How Big Tech is losing the wars of the future

The underlying assumption of the article is that we want AI to further centralize military power into the hands of fewer and fewer people.

Whenever that goal has been achieved in the past, it has been disastrous for human rights, scientific progress, and things like life expectancies and food security.

I’d rather Silicon Valley keep producing stuff like the printing press and gutenberg bible, and not work on reducing the costs of operating a new Spanish Inquisition or an S.S.-style surveillance apparatus.

Even if you trust the current Pentagon, there’s some other government that would misuse the technology. Also, you have no way of knowing who will control the Pentagon in 50-100 years.

hedora | a month ago

Any sufficiently advanced technology can be defeated with sticks and stones.

surfingdino | a month ago

I'm not sure why they discussed Israel at all - except for the bullshit accusation of racism, which seems to come automatically nowadays. The Israeli intelligence failure had zero to do with any technology and everything to do with political and ideological delusion. They had fake data, they had real data, they had all the data. They had tons of opinions and options. They had to choose which concept to believe and which conclusion to draw. This is not something that depends on technology, and it can't be; it depends on the person or persons making the decision, and it is as unsolved a problem now as it was millennia ago.

smsm42 | a month ago

It's just the usual technology obsession of military industrial and political types that's been around for decades. The reality is that the most important factor in combat is the human one and every fancy gadget you use just introduces more liability and weak points.

The AI marketing hype and lobbying stuff fills the pockets of a few people but it doesn't make soldiers more effective, "cloud computing controls the battlefield" is such a meme worthy sentence I don't understand how anyone can take someone seriously who says that out loud.

What you could see in the Israel-Hamas conflict mentioned in the article is what you also see with the Houthis or in Ukraine, that the best technology on the battlefield is cheap, resilient and simple enough to be understood and operated by the least competent soldier, not some 10 billion dollar fantasy tool out of a sci-fi novel.

The example in the article of Hamas feeding Israeli informants deliberate misinformation to strengthen the notion that Hamas would not attack, now imagine this amplified by even more gullible LLM powered "intelligence analysts". It's a theme of the "AI age", the people who stand to benefit the most are critically thinking humans able to exploit the tool induced stupidity of everyone else. Hackers, appropriately enough.

Barrin92 | a month ago

I don't understand the headline "problem" of the article. Or the "How Big Tech is losing the wars of the future".

Silicon Valley has always been a part of the US military complex. Maybe there was a period sometime in the '90s of irrational exuberance and "don't be evil". But now we are surely back under manners.

throwaway4good | a month ago

I’m a combat vet from the infantry. I was a lowly enlisted, not an officer. I attended college afterwards and now work in tech out west.

I am amazed at how confused, historically ignorant, yet intelligent my coworkers are. It’s like they’ve never even heard of NATO or World War 1 or 2, and almost every one of them, when they’ve asked what I did, has had the follow-up question “what’s the infantry?” This used to seem cute, but lately I’ve wondered if it’s something to be concerned about.

If the people who are being paid all this money and building all these products are incredibly intelligent but have zero wisdom, what does that look like when war comes? Do they just step aside, point at a stock price, and tell me I’m barbaric and living in a past century? I wonder about it almost daily now as the conflict looks more and more likely to spread and I see an electorate more and more isolationist and looking to repeat the mistakes of the past.

Hmmm, not even sure what my point is. I guess it’s that I find many Americans insanely smart and intelligent, but they lack any wisdom, yet act as though they are all Martin Luther King Jr. when they pronounce a name correctly, or some such morally insignificant thing compared to life and death.

Something is totally fucked with how people are valuing things. What, I’m not sure, but when a US Republican senator named JD Vance (a fucking veteran!) starts going around literally spouting Kremlin propaganda lines about Ukraine, and conservatives agree and repeat it, I just feel sad about my society.

Also, obviously some people do know these things; I’m being loose with language here. But a huge majority of the people have asked me those questions.

oglop | a month ago

I have no idea how Silicon Valley could be held responsible for an Israeli intelligence failure. Israel is not a part of the U.S.

The author exhibits essentially zero knowledge of the advances in military intelligence in the past 10-20 years. He’s talking about problems from the Vietnam War and IBM 360 mainframes as if all of the stuff McNamara dreamed of weren’t daily reality now.

cameldrv | a month ago

What aspects of modern warfare didn't Hideo Kojima foresee?

>Another combat veteran, now with a Pentagon agency working on these issues, told me that the AI developers he works with didn’t seem to understand some of the requirements for the technology’s military application. “I don’t know if AI, or the sensors that feed it for that matter, will ever be capable of spontaneity or recognizing spontaneity,” he said. He cited a DARPA experiment in which a squad of Marines defeated an AI-governed robot that had been trained to detect them simply by altering their physical profiles. Two walked inside a large cardboard box. Others somersaulted. One wore the branches of a fir tree. All were able to approach over open ground and touch the robot without detection.

Oh..

>I was curious about Palantir, whose stock indeed soared amid the 2023 AI frenzy. I had been told that the Israeli security sector’s AI systems might rely on Palantir’s technology. Furthermore, Shin Bet’s humiliating failure to predict the Hamas assault had not blunted the Israeli Defense Force’s appetite for the technology; the unceasing rain of bombs upon densely packed Gaza neighborhoods, according to a well-sourced report by Israeli reporter Yuval Abraham in +972 Magazine, was in fact partly controlled by an AI target-creation platform called the Gospel. The Gospel produces automatic recommendations for where to strike based on what the technology identifies as being connected with Hamas, such as the private home of a suspected rank-and-file member of the organization. It also calculates how many civilians, including women and children, would die in the process—which, as of this writing, amounted to at least twenty-two thousand people, some 70 percent of them women and children. One of Abraham’s intelligence sources termed the technology a “mass assassination factory.” Despite the high-tech gloss on the massacre, the result has been no different than the slaughter inflicted, with comparatively more primitive means, against Dresden and Tokyo during World War II.

musha68k | a month ago

> Nevertheless, Hamas’s devastating attack on October 7 caught Shin Bet and the rest of Israel’s multibillion-dollar defense system entirely by surprise. The intelligence disaster was even more striking considering Hamas carried out much of its preparations in plain sight, including practice assaults on mock-ups of the border fence and Israeli settlements—activities that were openly reported. Hamas-led militant groups even posted videos of their training online. Israelis living close to the border observed and publicized these exercises with mounting alarm, but were ignored in favor of intelligence bureaucracies’ analyses and, by extension, the software that had informed them. Israeli conscripts, mostly young women, monitoring developments through the ubiquitous surveillance cameras along the Gaza border, composed and presented a detailed report on Hamas’s preparations to breach the fence and take hostages, only to have their findings dismissed as “an imaginary scenario.” The Israeli intelligence apparatus had for more than a year been in possession of a Hamas document that detailed the group’s plan for an attack.

> Well aware of Israel’s intelligence methods, Hamas members fed their enemy the data that they wanted to hear, using informants they knew would report to the Israelis. They signaled that the ruling group inside Gaza was concentrating on improving the local economy by gaining access to the Israeli job market, and that Hamas had been deterred from action by Israel’s overwhelming military might. Such reports confirmed that Israel’s intelligence system had rigid assumptions of Hamas behavior, overlaid with a racial arrogance that considered Palestinians incapable of such a large-scale operation. AI, it turned out, knew everything about the terrorist except what he was thinking.

That sounds a lot like a company that's implementing data-driven "best practices" from some expensive management consultants.

It truly is the best system, regardless of how bad the results are. It's best by definition.

tivert | a month ago

Since the article talks about the failure of AI in the context of 10/7, I think it’s worth discussing the situation directly. Everything points to the Israelis not having taken their security seriously beyond the tactical level. I’m certain they thwarted other attacks, but it was inevitable that a major attack would succeed at some point. Such an attack would necessitate a military response. However, the Israelis have no strategic vision. They lacked serious plans for such an eventuality and still lack a serious goal for their invasion of Gaza. They haven’t articulated anything that indicates a vision to meaningfully change the situation from the 10/6 state to something more sustainable. Therefore, "AI failed" doesn’t seem like a reasonable takeaway.

orange_joe | a month ago

> The system knows everything about [the terrorist]: where he went, who his friends are, who his family is, what keeps him busy, what he said and what he published. Using artificial intelligence, the system analyzes behavior, predicts risks, raises alerts.

Where does "the terrorist" end, and where do you, me, and anyone else just minding our own business get inserted instead? And let's say it's not even the government doing this but some private company with public data: what's to stop the government from buying "reports" from that company? 100% legal. That is, no rights being violated, etc.

Anyone who says, "I have nothing to hide" is a fool, at best.

chiefalchemist | a month ago

I'm getting really tired of writers crapping on 'AI' as if it were a static, self-sufficient offering.

Like no, the AI doesn't know everything other than what the terrorist is thinking. It summarizes what it's being fed.

If a chatbot is being fed reports concerned about border activities, then it's going to raise concerns about border activities.

This is an unnecessary and misleading angle to the article, jumping on a bandwagon.

The failure here is a broader failure of human intelligence across Western intelligence services, neglected in favor of contracts with third-party defense contractors. There's a story in that.

There's not much of a story in "AI not knowing the terrorist's mind."

kromem | a month ago

>Nevertheless, Hamas’s devastating attack on October 7 caught Shin Bet and the rest of Israel’s multibillion-dollar defense system entirely by surprise. The intelligence disaster was even more striking considering Hamas carried out much of its preparations in plain sight, including practice assaults on mock-ups of the border fence and Israeli settlements—activities that were openly reported. Hamas-led militant groups even posted videos of their training online. Israelis living close to the border observed and publicized these exercises with mounting alarm, but were ignored in favor of intelligence bureaucracies’ analyses and, by extension, the software that had informed them. Israeli conscripts, mostly young women, monitoring developments through the ubiquitous surveillance cameras along the Gaza border, composed and presented a detailed report on Hamas’s preparations to breach the fence and take hostages, only to have their findings dismissed as “an imaginary scenario.” The Israeli intelligence apparatus had for more than a year been in possession of a Hamas document that detailed the group’s plan for an attack.

At some point you have to hazard the notion that they let it happen on purpose. "Wag the dog" trended around that time, and with Netanyahu's various woes, maybe they went ahead and built the Torment Nexus.

underlipton | a month ago

Finally, someone talking sense about AI.

OhMeadhbh | a month ago

No offense, but this article is MASSIVE BS.

There are issues with innovation in the DoD and DHS, but a lot of this is offloaded to private sector vendors anyhow.

I notice how the article didn't mention any of the companies I personally know doing stuff in the space, nor actually sourced from members of the VC, Business, or Defense community.

The fact that the author took Palantir's marketing at face value is proof enough - the CIA let their contract with Palantir lapse a couple of years ago (and I think they only bought it in the first place because of their stake in In-Q-Tel), and Palantir hasn't had great success selling to the Fed.

I actually work in this space btw.

-----

The bigger stumbling block is procurement.

Software procurement by Federal standards is relatively straightforward, so a Series E+ startup can make it if it spends around $7-10M and 1-1.5 years on a dedicated roadmap for FedRAMP and FIPS compliance.

Once you step out of software, procurement becomes paperwork hell. Throw in the paperwork hell from grantmakers like the DoD and DoE, and you end up with a quasi-Soviet procurement system. Ironically, most of these compliance and regulatory checks were added with good intentions - primarily to minimize corruption and graft - yet they basically clogged up the entire system and dissuade startups and innovators from working directly with the defense community.

Some projects like DIUx and In-Q-Tel are trying to change that, but it's too little, too late, and our defense base is entirely dependent on firms like Microsoft, Cisco, Crowdstrike, Zscaler, etc. acquiring promising startups to evangelize their innovations internally.

Fundamentally, this is why I dislike the New America/Khan/Chopra vision of anti-trust. It doesn't actually help innovation from a federal standpoint, as small companies and startups have no reason to work with the Fed given the amount of red tape that exists.

If the same effort were put into harmonizing and simplifying procurement across the Federal Government, you could directly make demands on competition.

This is what China does, and it is a major reason its MIC was able to grow by leaps and bounds in just 20 years.

alephnerd | a month ago
[deleted]
| a month ago

“no one appears to have noticed that Project Maven fit into the grand tradition of many other high-tech weapons projects: ecstatic claims of prowess coupled with a disregard for real-world experience”

agomez314 | a month ago

The thing we should all really be terrified about is how Trump and Stephen Miller will use all of this technology we have built against us if elected.

outside1234 | a month ago

[flagged]

SpliffnCola | a month ago

[flagged]

mschuster91 | a month ago

[flagged]

burutthrow1234 | a month ago

[flagged]

sweeter | a month ago
[deleted]
| a month ago

[flagged]

AndyMcConachie | a month ago

I don't understand what this has to do with Silicon Valley or AI/AGI.

It's just classic confirmation bias.

j16sdiz | a month ago