We've been here before: AI promised humanlike machines – in 1958

cdme | 33 points

It shouldn’t be this hyped up, but the first AI boom happened because a computer could learn ‘something’ without explicit programming. It was too primitive and computers were not powerful enough; the scientists (like Minsky) were also too optimistic about the growth of computing power. The 80s revival with knowledge systems was never actually feasible as AGI; everyone I spoke to in the field back then (I started university at the beginning of the late-80s AI winter) had been saying for years it was a dead end.

So while I feel the overhyping could be toned down (some hype is warranted, but this much?), this time is different: it’s not a promise. I am chatting to the computer on my lap as if it were my human assistant; it gives me answers faster and usually better than most humans would. That’s simply not the same as the previous instances. Knowledge systems did this too, but they were encyclopedias; they couldn’t come up with anything that wasn’t explicitly put in and categorized, and you often controlled them by answering a tree structure of multiple-choice screens (there are some nice episodes of Computer Chronicles about this on Archive.org). Now you can have that knowledge base in your (vector) db and ask open questions about it in any language; it will answer human-like, lies/bluff and all.
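The vector-db setup the comment describes can be sketched in a few lines: embed the documents and the question as vectors, retrieve the closest document by cosine similarity, and hand that context to the model. The bag-of-words "embedding" below is a toy stand-in for a real learned embedding model, and the document strings are made up for illustration.

```python
import math
from collections import Counter

def embed(text: str) -> Counter:
    # Toy "embedding": a bag-of-words count vector.
    # A real pipeline would use a learned embedding model.
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    # Cosine similarity between two sparse count vectors.
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(question: str, docs: list[str]) -> str:
    # Return the document most similar to the question; in a real
    # system this retrieved context is passed to the LLM as grounding.
    q = embed(question)
    return max(docs, key=lambda d: cosine(q, embed(d)))

docs = [
    "The Viking age began with raids on monasteries.",
    "Perceptrons were an early neural network model.",
]
print(retrieve("history of the viking raids", docs))
# → "The Viking age began with raids on monasteries."
```

A production vector database replaces the linear `max` scan with an approximate nearest-neighbour index, but the retrieval idea is the same.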

anonzzzies | a month ago

Is there a term for this kind of fallacy? The one where someone claims that because something failed before, doing it again means it will fail now as well.

It’s obvious that modern AI is far, far superior to ELIZA or 50s perceptrons. It reminds me of the people who said space travel would never be possible, or that heavier-than-air flight might be developed in ten million years.

joegibbs | a month ago

Except we have human-like machines now. I run an LLM on my laptop which can write poetry, talk about the history of the Vikings, play games, tell jokes, hold conversations, bullshit when it doesn't know something, and make mistakes. All very human-like. It does some human-like things better than humans, and other human-like things not as well.

Goal-post moving? Sure. But I think it just betrays a deep discomfort with what has been achieved. Kind of like the meme that it's all just a giant plagiarism machine.

jonnycomputer | a month ago

This article tries to seem like an enlightened sceptic, but it's actually really cringe because of its lack of insight into the real progress since then. Then again, I don't know what people would expect from a politics editor with a liberal arts background.

sigmoid10 | a month ago

We need thoughts like this, even though the article doesn't bring any new information. The question the author puts forth is basically: "will it work this time?" This is a question that many seem to believe the answer to is "yes".

I am one of those. However, distinguishing between truly autonomous AI systems and those that are simply high-performing can become challenging. The line between the two might become blurred when performance reaches a level that seems autonomous. In this context I believe that another "AI winter" will be hard to identify in the future.

Manheim | a month ago

I avoid the term AI. We have machine learning, and a lot of viable implementations of it. And text calculators, which are predictive-text lossy compressors. Done. No AI hype :)
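The "predictive text" framing above can be made concrete with the simplest possible next-word model: count which word follows which, then predict the most frequent continuation. This bigram toy is only an illustration of the commenter's "text calculator" idea; real LLMs predict over subword tokens with a learned distribution, not raw counts.

```python
from collections import Counter, defaultdict

def train_bigrams(corpus: str) -> dict:
    # Count, for each word, which words follow it and how often.
    counts: dict = defaultdict(Counter)
    words = corpus.lower().split()
    for prev, nxt in zip(words, words[1:]):
        counts[prev][nxt] += 1
    return counts

def predict(model: dict, word: str) -> str:
    # Return the most frequent continuation, or "" if unseen.
    followers = model.get(word.lower())
    return followers.most_common(1)[0][0] if followers else ""

model = train_bigrams("the cat sat on the mat and the cat slept")
print(predict(model, "the"))  # → "cat" ("cat" follows "the" twice, "mat" once)
```

Scaling this from bigram counts to billions of learned parameters is, roughly, the jump from phone-keyboard autocomplete to the chatbots being discussed in this thread.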

nbzso | a month ago

I was reading to see what insights the author had, then all of a sudden the article just ended... I'm not sure I got the point of the article.

fredwu | a month ago

*eye roll* I tell you, the amount of hype nowadays is a shameful display of rational thought

xkcd1963 | a month ago

> We've been here before: AI promised humanlike machines – in 1958

But this time is for real. /s

hulitu | a month ago