About a month ago I came across some interesting news. Playing chess does not make your kids smarter, the headlines read. I didn’t make much of them. I knew immediately that these recent scientific findings must be getting blown out of all proportion. I was hoping my inaction would let this quietly fade away and retreat to the small corner of the library where it belongs. I was simply too busy to deal with silly news.
However, while on a short break hiking in the Lake District, a history academic sharing my hostel room brought the news up, trying to prove how pointless one of my favourite hobbies is. We had a heated discussion in which I explained the limitations of the social sciences and of statistics. During that conversation, I realised how far this erroneous and harmful message had spread. It’s all over the place. I had to read the report myself and put it in context.
When I began reading this 57-page report, I can’t say I was shocked by what I found; I expected it. Despite the many limitations clearly stated by the authors, journalists somehow arrived at a harmful and unjust one-line summary. Several further, unstated limitations affect this study, yet the authors draw an ambitious conclusion: “We believe that this study has provided strong evidence that teaching primary school children how to play chess has little lasting impact upon their educational achievement.” No, with all due respect, this report, which is not even a published study, does not do that. In fact, if you read it carefully, the headline of this blog post is equally justified: chess helps kids study less and score just as well on tests as non-playing peers who overstudy.
I have readily admitted before that there is a lack of evidence that chess makes us smarter. It’s true. However, I have always defended the counterpoint: the fact that there is no evidence, or only limited evidence, for something does not mean we have disproven it. This is a significant limitation of modern science. Chess may very well make us smarter, but science does not yet have the tools to prove it. Why not? For one, there is simply not enough interest, or money, to generate the research required. Furthermore, scientists do not yet have adequate tools at their disposal to answer these kinds of complex real-world questions. For instance, the authors claimed they had conducted a randomised controlled trial, or RCT, the gold standard of medical research. Yet on page 24 they state that “schools were not randomly selected into the trial.” They nevertheless continue to call it a “randomised” trial. Really?!
Maybe 10 years down the line we will “know” more about chess and cognitive ability; maybe we won’t. We don’t know. But that doesn’t give journalists or biased scientists the right to tarnish the reputation of this beautiful game, as this report has facilitated. Nor does it justify the conclusions its authors draw. All they can claim is that their flawed and limited method found no improvement in test scores. These tests cap four years of preparation, yet the authors expected 30 hours of chess to produce measurable improvements. Now that would be magical, wouldn’t it?
There are plenty of concepts in the social sciences that are extremely hard to “prove”. We are not talking about physics or mathematics, where proofs exist. What good is being “smart” anyway? What does it mean to be smart? Is getting an A on a test the definition of “smart”? These arguments can go on for a long time. Some people, academics included, resort to IQ as a measure of smartness. But did you know that the creator of the IQ test, Alfred Binet, designed it specifically to identify pupils who needed help so they could perform better? He did not create it to measure “smartness”; he created it to facilitate learning.
I am all for embracing the Education Endowment Foundation’s mission. I do believe school programmes require solid evidence, as solid as science can provide. In the case of this report, however, they got it all wrong. Some excellent and very positive effects of chess were right in front of their eyes; they even found them! Perhaps bias blinded them to those effects and led them to the wrong conclusions. Chess helps kids study less and score just as well on tests as non-playing peers who overstudy. It really does. This conclusion is drawn from the same EEF study; have a read yourself and think about it. Here is my summary:
The kids in the chess group had to miss regular classes to take chess lessons, sometimes from poor chess instructors, sometimes from good ones. Some even missed mathematics classes, while most missed humanities lessons. Despite having fewer hours of curricular instruction, these kids achieved, on average, the same scores on the Key Stage 2 examinations. They were no better, but also no worse. Let us think about this carefully for a second: these kids studied less and yet performed just as well. Moreover, the report lists plenty of positive effects, from teachers’ raised expectations to pupils’ enjoyment of school. Such effects are well documented in the educational psychology literature, which highlights their importance for academic achievement. They should not be underestimated. Students studied less and were happier. They had fewer behavioural problems and scored just as well on the tests. To me, this is a huge conclusion with very positive implications for chess. Why did the authors state their conclusion so negatively? I would love to hear their thoughts.
Let’s not forget the studies from Denmark and Italy, which found exactly the opposite: chess does help with academic achievement. Those studies suffer from their own limitations, as any social-science study does. However, they remind us that the question remains open and unresolved. The possibility is there.
When questions and evidence are unclear, my philosophy is to fall back on personal judgment. Do I want my kids playing Pokémon Go or chess… hmm, tough one! Just kidding; I hope the answer is clear. I know which hobbies I will encourage my children to take up, and chess will definitely be at the top of that list. Whether science will one day “prove” its benefits remains unclear, but there is enough proof for me. This report simply adds to the positive evidence in favour of chess.
At Chessable we use scientific insights with substantial evidence to design our learning tools. We further test them incrementally, each time on larger groups of people, to make sure they have the desired positive effects. We measure our success by the success of our users, and will continue to do so as we make our bid to be at the forefront of chess education and educational technology.
David is Chessable’s CEO and Chief Scientist. He completed his dissertation on expertise and expert performance as part of an MSc in Psychology of Education (BPS) at the University of Bristol, and also holds a PGCert in Applied Psychology from the University of Liverpool. David’s chess rating is around 1,850 FIDE.