The future of critical thinking

(main source: https://youtu.be/5wXlmlIXJOI?si=9m02l5E02R5EffEQ)

So I watched one of the new episodes of Steven Bartlett's DOAC podcast on YouTube, which talks about the possible negative impacts that large language models (LLMs) like ChatGPT, Gemini, etc. can have on people's brains in the years to come. The guest speakers were Dr Daniel Amen and Dr Terry Sejnowski.

I would like to pull out some of the key points, or at least the ones I found astonishingly worrying. If you have been following a lot of other content about the developments of AI in recent years, and if you have tried to listen to both the for and against opinions, then I guess the following information will not come as a surprise. Still, people need to start thinking critically, question the global societal changes, take action, and educate or at least lobby for what 'humanitarian' values we still believe in.

The speakers agree that the real issue with these LLMs lies in their long-term use. We have to look at the benefits and the risks, too. It is true that certain technological advancements, like the electronic calculator, can free up cognitive space for people to do other things and speed up certain processes, but that doesn't mean that any of these AI models will automatically enhance our lives and make us more intelligent, brighter humans. In fact, Dr Amen said that because of these models, our brains will likely follow the 'use it or lose it' principle: if people do not engage in lifelong learning, and if they do not engage the neurons in their brains, their brains will become weaker. He also reflected on early brain development in children. Artificial intelligence is critically dangerous to them, because children with weaker brains will likely pick the marshmallow, if you are familiar with the infamous 'marshmallow experiment' (instant vs delayed gratification, and understanding the difference in the first place).

Dr Amen also highlighted that over the years in his clinics he has studied and screened a lot of brains, and that other studies have also found that dementia and Alzheimer's disease are more likely to threaten people whose brain activity has been much less frequent and who did not engage in lifelong learning. That also means that children who drop out of school and stop learning, or people who start using ChatGPT and other automated models to do their thinking and other cognitive work instead of using their own brains, will likely be more at risk of developing neuro-related diseases in the later stages of their lives.

Dr Terry Sejnowski also agreed that if you misuse these LLMs, just using them as a convenience to speed things up, your brain is going to go downhill. He added that it is possible to use these tools in a cognitively positive way that improves your cognition, but one must be part of the experience of creating with the 'machine', so one has to actively engage with the process of creation through ChatGPT. What does this mean?

Some of the examples they gave: you have to learn how to question the answers you are getting from these LLMs. You have to ask yourself whether those results and answers are really true, and you should, more often than not, ask ChatGPT to explain the results in more depth, and have a discussion or debate with it, as you would normally do with a teacher or another person. That is how you create new circuits in your brain, and this is how you can become a better critical thinker.

Steven Bartlett argued against this with another hypothesis: the majority of people tend to act on their short-term incentives rather than their long-term ones. So he asked why someone would choose the harder way and spend hours writing up an essay or all of their social media content if the AI models can do it in a few moments, not to mention that AI-generated content can often be better, because it is accessing and using a huge pool of information. Understandably, it is hard to argue against this hypothesis, but Dr Amen believes that through educating the global population, people could better understand the connection between their brains and their general health. He added that there seems to be a disconnect: people do not grasp that anything they do is really their brain taking action. "Nobody teaches people how to learn and how to care for our brain, but it would be crucial," he said.

Dr Sejnowski agreed that educating people would be key. He actually runs a course with another professional called 'Learning How to Learn'. He highlighted a few examples, one of which is that part of our brain is a socially organised system, so interacting with other humans is an automatic pilot program: we don't think about it, we just do it. Another thing he mentioned was that the brain matures through struggling. One can learn from mistakes and past experiences, and by adapting and adjusting to new situations. A person develops grit through these struggles, and this is part of lifelong learning, too. Memorising, repeating, writing and rehearsing are all crucial to the process of learning. Furthermore, 'spacing' between these tasks can also help our brain solidify new information and new skills, because it slowly builds them into us.

In addition to all of the above, they touched on another mind-boggling question: whether we are willing to give away all of our mindshare to the founders of these tech companies, who are essentially making money out of us. They also took a spin on Sam Altman's remark that people saying 'Thank you' to ChatGPT costs his company too much money and energy, and that we should probably be more conscious about the unnecessary energy and money wasted on thank-yous. The speakers' counterargument was that they would be very cautious about trusting any of these AI-tech CEOs, because after all they are probably just optimising their profits rather than really caring about the human experience, our health and our best interests. Altman is basically asking people to stop doing something which is, by nature, a good thing for humans to do; instead, he is creating new habits in communication so that he can potentially make more profit. Well, I'll let you form your own critical opinion on this.

But they did bring another disconcerting topic into the conversation: the question of morals and values. We all live on different continents and in different cultural settings, and depending on which country each person is from, we bring different cultural norms and values with us. AI models do not share the same cultural values that we have, because they are biased. The question then is: which country's values are going to be used for these AI models, and who is pouring the morals and values into the LLMs? Or, as I would interpret this question, who is really in control of what gets poured into the LLMs, and in what way will that 'controller/owner' be spinning the outcomes?

Both speakers agreed that these models and companies must be regulated and their impacts on humans must be studied to better understand the long-term risks and benefits. 

And as for the closing statement of this post, Dr Amen said that the best way is to "use it to amplify, and not just to replace thinking." And this is what I would like you to take with you as well. 

Train your brain, exercise, keep asking the 'whys' and think critically, because nobody can take the free will of thinking from you.


20th August 2025





