It Matters What Language AI Thinks In
Written by Mike James   
Wednesday, 23 October 2024

We are currently polarized in how we think about Large Language Models. Some say that they are just overgrown autocompletes; others say that they have captured some aspects of intelligence. Either way, how well they solve problems seems to depend on the language you ask for the solution to be expressed in.

I have argued for a long time that programmers think differently. Before you learn a programming language, all you have at your disposal is your native natural language - English, say, for the sake of a concrete example. English is great for expressing feelings and some ideas, but most people begin to struggle when asked to formulate a procedure precisely. In fact, many non-programmers don't even understand the idea of a precise expression of a procedure. They don't notice that natural language is in general vague, and this makes accurate expression difficult, perhaps impossible.

Taking this idea a little further, you could argue that the language you use to solve a problem affects how easy it is to solve. For example, many math problems are difficult to state in natural language but trivial in math notation. Any explanation of quantum mechanics is seriously incomplete without the language of physics and, of course, how could you possibly understand Quicksort without code? These things are all a matter of degree, but you can see the direction that things are moving in - a precise language has its advantages.


Now we have some evidence that is more than anecdotal. Peter Norvig, a well-known AI expert, currently a Distinguished Research Fellow at Google AI, has been conducting some experiments on LLMs to find out what they know. In the case of Cheryl's Birthday, a logic problem where you have to discover Cheryl's birthday given a list of possible dates and a set of constraints, it seems that LLMs can't reason like humans. It turned out that a human could solve the problem, but none of the nine LLMs could do the job. The LLMs knew the problem and could produce the standard answer, but couldn't solve any variations on the problem. This, Norvig suggests, means that LLMs have "no theory of mind", i.e. no way of understanding and reasoning about what other people know.
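
To see what solving it in code looks like, here is a minimal Python sketch of the standard version of the puzzle. To be clear, this is my own illustrative reconstruction, not Norvig's program - it uses the familiar ten dates from the original puzzle:

# The ten candidate dates from the original puzzle.
DATES = ['May 15', 'May 16', 'May 19', 'June 17', 'June 18',
         'July 14', 'July 16', 'August 14', 'August 15', 'August 17']

def month(date): return date.split()[0]
def day(date): return date.split()[1]

def told(part, date):
    """The dates still possible for someone told only one part of `date`."""
    return [d for d in DATES if part(d) == part(date)]

def knows(possible):
    """A person knows the birthday when exactly one date remains."""
    return len(possible) == 1

# Albert (told the month) doesn't know, and can tell that
# Bernard (told the day) doesn't know either.
def statement1(date):
    possible = told(month, date)
    return (not knows(possible) and
            all(not knows(told(day, d)) for d in possible))

# Hearing statement 1, Bernard (told the day) now knows.
def statement2(date):
    return knows([d for d in told(day, date) if statement1(d)])

# Hearing statement 2, Albert (told the month) now knows too.
def statement3(date):
    return knows([d for d in told(month, date)
                  if statement1(d) and statement2(d)])

print([d for d in DATES if statement1(d) and statement2(d) and statement3(d)])
# prints ['July 16']

Each statement is just a filter over the remaining dates - exactly the kind of precise, checkable reasoning about who knows what that is hard to keep straight in English.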

Clearly, no matter how you interpret it, the LLMs aren't as good as humans when it comes to this problem. But wait - the one human involved in the project, Norvig himself, is an exceptionally good programmer. Perhaps this has something to do with it?

Another experiment suggests that this might be the case. This time the LLMs were asked to list all the ways that three distinct positive integers can have a product of 108. When simply asked to do the job, only 2 of the 9 managed a good attempt. When the question was reframed as "write a program to ...", 7 of the 9 managed it. So "thinking" in English hampered the LLMs in getting a solution, but switching to Python helped a lot.
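
For the record, the reframed version of the task really is only a few lines of Python. This is just an illustrative sketch of the kind of program being asked for, not one of the LLM answers from the experiment:

from itertools import combinations

def triples(n=108):
    """All sets of three distinct positive integers whose product is n."""
    divisors = [d for d in range(1, n + 1) if n % d == 0]
    return [(a, b, c) for a, b, c in combinations(divisors, 3)
            if a * b * c == n]

print(triples())
# Eight triples, from (1, 2, 54) to (3, 4, 9).

Notice how the code version leaves nothing vague - "distinct", "positive" and "product" all become explicit, checkable conditions.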

What does all this mean?

Well, it clearly means that we have to be careful how we ask questions of an LLM, which backs up the idea that prompt engineering is a thing. We have already seen that questions tend to get better answers if you include "think about it" or "take time to think about your answer". Perhaps we need to include "create a program" even when the solution doesn't actually require a program. As a side observation, I would say that this is evidence for the idea that knowledge of a programming language does focus the mind...

Take a look at the results of the experiment - the programs that the LLMs generated are interesting. Note: you'll need Python installed if you want to run the code in Peter Norvig's Pytudes repo.

And if you think that this is all irrelevant because LLMs are just big autocomplete machines, consider that it is entirely possible that this is all you are...

  • Mike James, Founder and Chief Editor of I Programmer, is also a prolific author. In addition to his books on specific languages, he is also the author of The Trick Of The Mind: Programming and Computational Thought, aimed at programmers and non-programmers alike, which examines the nature of programming and reveals why it is a very special skill.

 


 

More Information

The Languages of English, Math, and Programming

LLMs, Theory of Mind, and Cheryl's Birthday

Related Articles

Peter Norvig - As We May Program

Regex Golf, XKCD And Peter Norvig 

Peter Norvig On The 100,000-Student Classroom 

Magic Prompts For LLMs?

Stanford AI Class - Mid Term Report

Tell A Chatbot "Take a deep breath ..." For Better Answers

Runaway Success Of ChatGPT


 

