Karl Friston: a neuroscientist who might hold the key to true AI

MKais | 190 points

It's worth noting that 'free energy' is just the negative of the 'evidence lower bound' (ELBO) that is optimized by a large portion of today's machine learning algorithms (e.g. variational auto-encoders).

It's also worth noting that 'predictive coding' - a dominant paradigm in neuroscience - is a form of free energy minimization.

Moreover, free energy minimization (as predictive coding) approximates the backpropagation algorithm [1], but in a biologically plausible fashion. In fact, most biologically plausible deep learning approaches use some form of prediction error signal, and are therefore functionally akin to predictive coding.

Which is all just to say that the notion of free energy minimization is somewhat commonplace in both neuroscience and machine learning.
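
For concreteness, a minimal sketch of that correspondence (illustrative PyTorch, assuming a Gaussian approximate posterior and a unit-variance Gaussian decoder): the loss a vanilla VAE minimizes is a variational free energy, i.e. the negative ELBO.

    import torch
    import torch.nn.functional as F

    def variational_free_energy(x, x_recon, mu, logvar):
        # "Accuracy" term: reconstruction error, i.e. the negative
        # log-likelihood of x under a unit-variance Gaussian decoder
        # (up to additive and multiplicative constants).
        recon = F.mse_loss(x_recon, x, reduction='sum')
        # "Complexity" term: KL divergence from the approximate posterior
        # q(z|x) = N(mu, diag(exp(logvar))) to the prior N(0, I),
        # in closed form for Gaussians.
        kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
        # Free energy = complexity - accuracy = negative ELBO: exactly
        # what a standard VAE minimizes during training.
        return recon + kl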

[1] https://www.ncbi.nlm.nih.gov/pubmed/28333583

atschantz | 5 years ago

Clickbait article.

It is noteworthy that Friston has, as of November 2018, neither (1) formalised free energy minimisation (FEM) with sufficient precision that it goes beyond a vague research heuristic, one that can be (and is) adapted in ad-hoc ways; nor (2) produced sufficient empirical evidence for his claim that FEM is how human or animal brains work -- despite the recent revolution in our ability to measure live neurons, and despite having been asked (in private) by working neuroscientists, including some at his own university.

mafribe | 5 years ago

Notes from the last time I tried to understand this - https://www.lesswrong.com/posts/wpZJvgQ4HvJE2bysy/god-help-u...

jamii | 5 years ago

I’ve seen Friston speak a few times. My favorite quote along these lines is that “your arm moves because you predict it will, and your motor system seeks to minimize prediction error.”

He’s been a huge figure in human neuroscience, bringing statistics to all those psychologists with fMRI scanners.

marmaduke | 5 years ago

Most grad-level Deep Learning classes have a week or so devoted to "Approximate Bayes" methods. And it's conceivable that future updates to the popular probabilistic programming languages will include "programmable" rather than "fixed-function" inference methods (a toy sketch of the distinction follows the links below).

"Inference Metaprogramming" paper

https://people.csail.mit.edu/rinard/paper/pldi18.pdf

The latest state-of-the-art research will be presented at the upcoming NeurIPS conference:

Symposium on Advances in Approximate Bayesian Inference

http://approximateinference.org/accepted/
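
A toy illustration of the "programmable inference" idea (nothing to do with the linked paper's actual API; all names here are made up): the same model is paired with two interchangeable inference programs.

    import math
    import random

    # Toy model: unknown bias p of a coin, uniform prior on (0, 1),
    # and an observed sequence of heads (1) and tails (0).
    data = [1, 1, 0, 1, 1, 1, 0, 1]

    def log_likelihood(p):
        return sum(math.log(p if x else 1.0 - p) for x in data)

    def importance_sampling(n=20000):
        # One inference program: sample from the prior, weight by likelihood.
        ps = [random.random() for _ in range(n)]
        ws = [math.exp(log_likelihood(p)) for p in ps]
        return sum(p * w for p, w in zip(ps, ws)) / sum(ws)

    def metropolis_hastings(n=20000, step=0.1):
        # Another inference program for the exact same model.
        p, total = 0.5, 0.0
        for _ in range(n):
            q = p + random.gauss(0.0, step)
            if 0.0 < q < 1.0 and math.log(random.random()) < log_likelihood(q) - log_likelihood(p):
                p = q
            total += p
        return total / n

    # "Programmable" inference: the algorithm is swappable, not baked in.
    for infer in (importance_sampling, metropolis_hastings):
        print(infer.__name__, round(infer(), 3))   # both ~0.7 (posterior mean)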

I think the most fascinating aspect is that Friston and his team are working within the field of Computational and Algorithmic Psychiatry. I mean, this pre-print is really interesting: using video game play to diagnose psychiatric disorders.

Active Inference in OpenAI Gym: A Paradigm for Computational Investigations Into Psychiatric Illness

https://www.biologicalpsychiatrycnni.org/article/S2451-9022(...

ArtWomb | 5 years ago

For anyone who is interested in a tutorial and an actual implementation of active inference (an idea based on the Free Energy Principle), here's one in Python: https://kaiu.me/2017/07/11/introducing-the-deep-active-infer...

I have been trying to understand the FEP, and so far my understanding is that the agent essentially tries to learn the generative model that most closely explains its observations, and then tries to act in ways that are more likely to cause the environment to generate its preferred observations (say, pH and temperature in the right range).

The problem with this approach is the scalability of inference and of candidate model generation. By the time you provide the model for the agent, you as a designer have already encoded much of your knowledge, and hence constrained the agent. True AI will build the model from scratch, not just learn model complexity.
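
To make that concrete, here's a deliberately tiny sketch of the loop described above (my own toy, not Friston's formalism; it drops the epistemic, information-seeking term of expected free energy). Note how the generative model and the preferences are hand-coded by the designer, which is exactly the constraint the previous paragraph complains about.

    # Agent prefers observations near a set-point (say, body temperature)
    # and acts so that the world generates those observations.
    PREFERRED_MEAN, PREFERRED_STD = 37.0, 0.5

    def surprise(obs):
        # Negative log-probability of obs under the agent's preferred
        # (Gaussian) distribution over observations, up to a constant.
        return 0.5 * ((obs - PREFERRED_MEAN) / PREFERRED_STD) ** 2

    def generative_model(state, action):
        # Designer-supplied model of how actions change observations.
        return state + action

    def select_action(state, actions=(-1.0, 0.0, 1.0)):
        # Pick the action whose predicted observation is least surprising.
        return min(actions, key=lambda a: surprise(generative_model(state, a)))

    state = 33.0
    for step in range(6):
        action = select_action(state)
        state = generative_model(state, action)   # env happens to match the model
        print(step, action, state)                # climbs to 37.0, then holds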

paraschopra | 5 years ago

The complex systems people used to discuss the problem of agents with internal models making models of other agents [1].

Similarly, biologists are interested in how a living thing 'organises itself' in the world, maintains its structure, and how its sensing and action are coupled to the environment [2].

This sounds like a similar approach, however fuzzy. Isn't it just asking: can we look for principles that define how living creatures should organise the effort (energy/information) it makes sense to put into recognising, predicting, acting in, and being in the world?

It makes sense that there could be some shared mechanisms, though I'd personally be surprised if they were universal, as differing life-forms seem suited to differing levels of environmental change. This is something lots of people have looked at (it's fun), and I agree the Wired article doesn't give a clear answer.

1. Can't recall the paper, but I think it was Doyne Farmer (or Chris Langton?) arguing that if your agent has complexity N, then you should spend sqrt(N) complexity modelling another agent.

2. e.g. Maturana & Varela; there's a summary of autopoiesis here: http://supergoodtech.com/tomquick/phd/autopoiesis.html but I'm sure lots of other biologists have good theories.

jimduk | 5 years ago

By no means will I ever be able to grasp Friston's theory, but free energy minimisation vaguely reminds me of curiosity-driven reinforcement learning. Can anyone with more understanding than me confirm or deny this apparent resemblance?
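
The resemblance is real, but with the sign flipped: curiosity-driven RL rewards prediction error (seek out surprise), whereas the free energy principle minimizes it; the two meet in active inference's "epistemic value" term, which likewise drives agents toward informative states. A stripped-down sketch of the curiosity side (loosely after Pathak et al.'s Intrinsic Curiosity Module; the forward model here is a made-up stand-in):

    import numpy as np

    def intrinsic_reward(forward_model, state, action, next_state):
        # Curiosity bonus = the agent's own prediction error: novel
        # transitions are poorly predicted and so get a high bonus.
        predicted = forward_model(state, action)
        return float(np.mean((predicted - next_state) ** 2))

    # Hypothetical forward model: assumes actions simply translate the state.
    forward_model = lambda s, a: s + a
    print(intrinsic_reward(forward_model, np.zeros(2), np.ones(2), np.array([1.0, 2.0])))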

snrji | 5 years ago

If you liked this, you may also like "Am I autistic? An intellectual autobiography"[0] by Karl Friston. It doesn't go into his free energy idea at all, but is more about the person behind the idea. My favourite line "I remember being asked [by an educational psychologist] whether I thought the puppets in Thunderbirds ever got hungry".

[0] https://www.aliusresearch.org/uploads/9/1/6/0/91600416/frist...

m-i-l | 5 years ago

The article is too long for the idea it tries to convey. I like to read to broaden my mind, not for reading's sake.

kken | 5 years ago

As far as I can tell, the “free energy principle” just asserts that the brain is approximately Bayesian and is doing some kind of variational inference, right? I’m not sure how revolutionary that is.
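
For reference, the standard identity behind that reading (ordinary variational inference, nothing Friston-specific): minimizing the free energy F simultaneously fits the approximate posterior and tightens a bound on the "surprise" -log p(x).

    F[q] = \mathbb{E}_{q(z)}[\log q(z) - \log p(x, z)]
         = D_{\mathrm{KL}}(q(z) \,\|\, p(z \mid x)) - \log p(x)
         \geq -\log p(x)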

(I’m predisposed not to like Friston because his work in fMRI plays fast and loose with the idea of “causality”.)

currymj | 5 years ago

Intelligence emerging automatically in nature is very likely, and an obvious prerequisite for humans rising to be the dominant species on earth. FEP makes it seem like this is a new idea. How we think about the meta, and reconstruct ideas from our own perspective, has been embedded in human adaptation for as long as recorded history. To model "true" AI from a human perspective using FEP, you need to model AI from the initial frame of reference, where human intelligence emerged automatically. This could perhaps be done by manipulating fundamental components of our brains, or by simulating scenarios where this could have happened.

damnson | 5 years ago

I found this interview with Karl Friston helpful for understanding the free energy principle at a high level:

https://www.youtube.com/watch?v=NIu_dJGyIQI

jchook | 5 years ago

I think true AI will not be a computer program that suddenly becomes human-like. It will be a human that becomes more and more cyborg-like: https://techcrunch.com/2018/11/01/thomas-reardon-and-ctrl-la... Soon humans will have more and more brain surgery, adding cyborg features to their natural (not artificial) intelligence, until at some point they are so machine-like that -- boom -- AI.

andrewfromx | 5 years ago

> Friston found time for other pursuits as well. At age 19, he spent an entire school vacation trying to squeeze all of physics on one page. He failed but did manage to fit all of quantum mechanics.

Is the page available to read?

qwerty456127 | 5 years ago

Thinking about this with my engineering hat on: if I wanted to guide the behaviour of such a system, I would have to influence the prediction somehow -- and then the system would act to change the state of the world to match that prediction, and/or update the prediction with more information about what is actually going on (by actively making observations, etc.). This seems like quite an elegant and neat little lever for high-level control/objective setting. A bit like a Picardian 'make it so' button...
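
A toy rendering of that lever (entirely made up, echoing the active-inference sketch earlier in the thread): the "controller" never issues motor commands, it only edits the prediction/set-point, and surprise minimization does the rest.

    # The only control input is the prediction itself: set what the
    # system expects to observe, and it acts until that comes true.
    def surprise(obs, prefs):
        return 0.5 * ((obs - prefs["mean"]) / prefs["std"]) ** 2

    def act(state, prefs, actions=(-1.0, 0.0, 1.0)):
        # Choose the action predicted to minimize surprise.
        return min(actions, key=lambda a: surprise(state + a, prefs))

    prefs, state = {"mean": 0.0, "std": 0.5}, 0.0
    prefs["mean"] = 3.0               # the 'make it so' button: edit the prediction
    for _ in range(5):
        state += act(state, prefs)    # the system closes the gap on its own
    print(state)                      # -> 3.0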

w_t_payne | 5 years ago

This almost sounds like a special case of Jeremy England's dissipation-driven adaptation theory. Does anyone know the overlap/differences between these theories (other than specificity)?

calebm | 5 years ago

A blog post about trying to understand Friston's ideas; consider also the number of researcher comments it has provoked: http://slatestarcodex.com/2018/03/04/god-help-us-lets-try-to...

DanielleMolloy | 5 years ago

Well, here is what I believe brains do:

https://news.ycombinator.com/item?id=9022206

It makes total sense for the brain's job to be minimizing surprise, because minimizing surprise is the best and most basic strategy for survival.

axilmar | 5 years ago

I see no reference to the dead salmon in the MR scanner showing correlative activation via spam mapping. Those results were something of a hindrance to quite a few PET and fMRI researchers' careers.

hlyshkow | 5 years ago

What I got out of this: in essence, he's saying that "those creatures with the most accurate picture of the world are the ones best prepared to succeed in the world"?

perpetualcrayon | 5 years ago

This is a brilliant portrait of Karl Friston. Thanks for sharing!

DanielleMolloy | 5 years ago

>> He has an h-­index—a metric used to measure the impact of a researcher’s publications—nearly twice the size of Albert Einstein’s.

That can only mean the h-index is a load of rubbish.

YeGoblynQueenne | 5 years ago

It seems he's an ontologist at heart.

https://en.wikipedia.org/wiki/Ontology

perpetualcrayon | 5 years ago

Anyone have a reference relating the free energy minimisation principle / active inference to reinforcement learning type environments?

conjectures | 5 years ago

This article is a complete waste. The title implies that it's about AI, but it turns out to be a portrait of a man's life -- a PR piece. Not only that, but free energy minimization has nothing to do with intelligence, other than vaguely describing one of its most obvious and superficial characteristics.

---

AI is the most important issue in the world. True general AI is an existential threat to humankind. The economics of general AI lead to the extinction of humans no matter how you slice it. Killer robots are just icing on the cake -- the tip of the iceberg.

General AI can be thought of as the keystone in the gateway of automation: it allows the automation of the human mind itself. The AI we have now cannot do this, and better ML algorithms will most likely never threaten the human mind. So people have a very false and dangerous sense of security.

ML experts eagerly correct people like me with a vague notion and a wave of the hand: AI won't be a problem for a long time. As I said, ML is not a threat (as an automation of human thought), because ML has nothing to do with human thought. ML experts don't know anything about human thought, and therefore a complete layman is just as qualified to speculate about general AI as an ML expert is -- or a person with a physics degree, or what have you. You might say that laymen tend to be dumber, or some variation on that, but that's beside the point and irrelevant.

There are many reasons to be worried about the creation of general AI. First, general AI is much broader than it is given credit for: sentience has many more forms than the human mind, and so presents a broader attack surface than usually thought. People imagine discovering it as like finding the human mind as a needle in a haystack; it's a lot easier than that. The algorithm for the kernel of intelligence is probably much simpler than one would initially imagine, and we don't know when we might stumble on it. And even if I'm wrong and it's relatively complex, we will still discover it if we try -- and we are trying. As I said, ML isn't a huge threat on the path to general AI; I think brain research is currently the biggest threat. The resolution of MRI scanning and probing is increasing, as is the computational power to make sense of the readings and to test the algorithms we discover. To those already commenting that computers won't be powerful enough to test such algorithms: you won't need a silicon version of the brain to test them. I guarantee it.

If general AI were to come into existence, it would be able to do any task better than a human. Any group or organization that uses AI to perform a task will overtake anyone who does not. It will be a ratchet effect, where each application of AI spreads across the world like a disease and never goes away; soon, everything is done with AI. A market economy's decentralized nature makes it an absolute powder keg for AI in this respect, because each node in the market is selfish and will adopt AI to gain a short-term advantage -- and, as I've said, once one node does it, all nodes will. This behaviour has historically fueled the success of markets but, as we have seen with global warming, it does not always work out.

The key here is that the only reason human life has value is that humans offer an extremely vital and valuable service that cannot be found anywhere else. Even though this is true, most humans on this planet do not enjoy a high quality of life. It is insane to imagine that, once our only bargaining chip is ripped from our collective hands, the number of people with a high standard of living will go up instead of down. There will be mass unemployment. Humans will be cast aside. And that is all assuming that robots are never made to maliciously target human life for any reason.

People say that automation leads people to better, new jobs. In reality, jobs are not an inexhaustible resource; they just seem to be.

The only solution, in one form or another, is the prohibition of AI. I hope that someone reading this will agree with me or suggest another solution. I am interested in forming some kind of group to prevent all this from happening.

nuguy | 5 years ago

I hate this Wired style of "journalism": 99% hype, hyperbole, and anecdote wrapped around 1% evidence and substance.

diego_moita | 5 years ago

This headline is garbage.

stevew20 | 5 years ago