Thinking like a Human, Not a Computer
Nick Lloyd, Math Department Head

“If it takes 3 towels 3 hours to dry on the line, how long does it take 9 towels to dry?” These words immediately caught my attention as I scrolled through Reels (or was it Shorts?) the other day. In the quick video, clipped from an hour-long lecture at The Royal Institution promoting his new book, “How to Expect the Unexpected,” mathematician Kit Yates explained how he fooled ChatGPT with this simple question.

ChatGPT answered, “If it takes three hours for three towels to dry, then it would take nine hours for nine towels to dry.” Mr. Yates explained that as long as there is enough room on the line for all the towels, nine towels should take no longer to dry than three: the towels dry side by side, not one after another. He added, “But maybe this is the expected behavior for ChatGPT because it’s trying to mimic human behavior, human interactions.”

Why do people, and artificial intelligences (AIs), get this type of question wrong? One reason is that we look at the mathematical tools we have and try each one on to see if it fits. When we’ve learned all sorts of algorithms, we want to apply them whenever we can, even when logical reasoning should guide us elsewhere; with the towels, that meant reaching for direct proportion (three times the towels, so three times the time). In my Geometry classroom, students learn to discuss, explain, and challenge each other’s ideas. The correct answer to a problem is only part of the solution: teams and individuals must also be able to explain the reasoning behind their answers. Even when students struggle to find the correct solution, they can still demonstrate their understanding by explaining their thinking.

At Dana Hall, students look for the logic behind their approach to solving a problem. We practice using our intuition and then check to ensure the answer makes sense.

Mathematical algorithms are only useful if we fully understand how and when to use them, and what pitfalls to avoid. Students new to Dana Hall often assume our questions seek only the right answer, but in reality, we are looking for them to demonstrate their thinking, too. This gives our students a deeper understanding of why a solution must be correct. We know that our students will face questions and problems in the future for which no known algorithm applies, so we give them the thinking tools to find answers to questions they’ve never considered before.

Our Geometry students, fresh from a year of studying Algebra, began this year with the question: “If chicken nuggets can be ordered in packs of 6, 9, and 20, what is the largest number of chicken nuggets that cannot be ordered?” At first, students want to apply everything they learned in Algebra: writing equations, defining variables, and guessing and checking possible solutions. We encourage them to be comfortable trying new methods and to have enough patience to believe they can solve the problem, even when they’ve never been shown how to solve one like it. With collaboration and logical thinking, our students not only find the correct answer but also learn to explain how they know it must be the answer.
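
For readers who want to check their own reasoning, here is a minimal brute-force sketch in Python (my own illustration, not our classroom method): it works up from zero, marking each order size as possible or impossible given packs of 6, 9, and 20.

    # Brute-force check of the nugget question for packs of 6, 9, and 20.
    PACKS = (6, 9, 20)
    LIMIT = 100  # comfortably past the first run of six consecutive successes

    # orderable[n] is True when exactly n nuggets can be bought.
    orderable = [False] * (LIMIT + 1)
    orderable[0] = True  # the empty order: zero packs, zero nuggets
    for n in range(1, LIMIT + 1):
        orderable[n] = any(n >= p and orderable[n - p] for p in PACKS)

    impossible = [n for n in range(1, LIMIT + 1) if not orderable[n]]
    print(impossible[-1])  # prints 43, the largest order that cannot be made

The search can safely stop soon after six consecutive sizes succeed, because any larger order is one of those successes plus some number of 6-packs; that same observation is how students can convince themselves their answer must be the largest.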

For those keeping score, ChatGPT was not able to answer that question, even after I fed it dozens of additional prompts to correct its many wrong answers. Even, and perhaps especially, in a world of answers provided by search engines and artificial intelligence, Dana Hall continues to invest in our students’ ability to think critically and recognize when the answer from a machine doesn’t make sense.