Explore the limits of AI, Brain, and Cognitive Plausibility

Cognitive plausibility, AI and the brain

In a recent DeepMind paper titled "Reward Is Enough," David Silver, Satinder Singh, Doina Precup, and Richard Sutton argue that "maximizing rewards is enough to drive behaviors that exhibit most, if not all, attributes of intelligence." But reward isn't enough. The claim is circular, simplistic, and vague: it explains very little, because it is meaningful only in environments that are highly structured and controlled. Humans do plenty of things without any reward, including writing nonsense about rewards.

Imagine that you or your team spend your time talking about how clever or plausible your solution is. This kind of argument happens a lot. While you debate the solution, you aren't thinking about the specific problem, or about the people affected by it. Cognitive plausibility matters to business leaders and practitioners mainly as a warning sign: an obsession with it signals the wrong culture. Intelligence exists to solve the problems the real world poses, and the solutions to those problems will rarely, if ever, be cognitively plausible. Insiders may want solutions that share their goals, but your solution doesn't need to "know" that you are solving a particular problem.

If you are trying to achieve a goal, it doesn't matter how "cognitively plausible," or logical, a solution is. If you don't care how a solution is achieved, you are free to try anything. It is more important to focus on the goal and on the best solution to the problem than on how the solution was reached, whether it was self-referencing, or how it looks after the problem has been solved.


AI, the brain, and cognitive plausibility