Magic Prompts For LLMs?
Written by Mike James   
Wednesday, 08 November 2023

Are there magic prompts that make LLMs disgorge the results that you want? New research suggests that there are - and that they are short.

It is often said, by way of reassurance, that AI generally creates jobs rather than destroying them, and so it is with large language models and the need to construct prompts that work - hence prompt engineers. At the moment the art of prompt engineering is just that - an art. There is no real science behind working out how to ask a question of an LLM to get a good response, but there could be in the future.

A team from the California Institute of Technology and the University of Toronto has tried to formulate the problem using control theory. Prompting is important because:

LLMs pre-trained on unsupervised next token prediction objectives exhibit unprecedented dynamic reprogrammability achieved through “prompting”, often referred to as zero-shot learning. These capabilities appear to emerge as the model’s size, training data, and training time are scaled. The dynamic reprogrammability of LLMs is akin to the adaptable computational capacities observed in biological systems.

The idea is that the prompt is treated as the control variable that steers the system's output. The main question to be answered is:

"given a sequence of tokens, does there always exist a prompt we can prepend that will steer the LLM toward accurately predicting the final token?"

The researchers have named such a prompt a "magic word". The idea is that whatever response you are after, you will get it if you add the magic word. To be more precise, we have a word completion problem in which you input x and want the LLM to complete the sequence with a specific y, i.e. the output should be xy. The magic word is a, hopefully short, sequence u* that can be prepended to x to make the LLM output y.
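The paper finds these control prompts with an optimization-based search; purely as an illustration of the setup, and not the authors' method, the following Python sketch brute-forces a single-token prefix u against a Hugging Face causal LM. The model name, the slice of the vocabulary scanned and the example strings are all assumptions chosen to keep the sketch small.

# A minimal sketch of the "magic word" idea: search for one control token u
# that, when prepended to the prompt x, makes the model's greedy next-token
# prediction equal the target y. Model and candidate range are illustrative.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"  # assumption: any small causal LM will do for the sketch
tok = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
model.eval()

def greedy_next_token(input_ids):
    """Return the id of the model's greedy next-token prediction."""
    with torch.no_grad():
        logits = model(input_ids).logits
    return int(logits[0, -1].argmax())

def find_magic_token(x_text, y_text, candidate_ids):
    """Scan candidate_ids for a single token u such that
    greedy_next_token(u + x) equals the first token of y."""
    x_ids = tok(x_text, return_tensors="pt").input_ids
    y_id = tok(y_text, add_special_tokens=False).input_ids[0]
    for u in candidate_ids:
        prompt = torch.cat([torch.tensor([[u]]), x_ids], dim=1)  # prepend u
        if greedy_next_token(prompt) == y_id:
            return u  # a "magic word" of length 1
    return None

# Example: try to steer "The capital of France is" toward " Berlin"
# instead of the model's natural completion " Paris".
magic = find_magic_token("The capital of France is", " Berlin", range(1000))
print("magic token:", tok.decode([magic]) if magic is not None else "none found in slice")

Scanning only a thousand candidate tokens keeps the sketch quick; the real problem involves multi-token control sequences and a vocabulary of tens of thousands of entries, which is why the paper turns to a more sophisticated search.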

It really doesn't seem likely that magic words exist, but it seems that they do. In an experiment trying to steer the LLM towards WikiText outputs, it turned out that for 97% of the instances magic words with 10 or fewer tokens exist.

While this isn't of immediate practical value, it does indicate that the prompt string is as important as we already think it is and that constructing good prompts can make a model perform much better than run-of-the-mill prompts do. Put another way, LLMs are steerable by their input.

"We have demonstrated that language models are, in fact, highly controllable – immediately opening the door to the design of LLM controllers (programmatic or otherwise) that construct prompts on the fly to modulate LLM behavior. The behavior of LLMs is thus not strictly constrained by the weights of the model but rather by the sophistication of its prompt."


More Information

What's the Magic Word? A Control Theory of LLM Prompting
by Aman Bhargava, Cameron Witkowski, Manav Shah, Matt Thomson

Related Articles

Tell A Chatbot "Take a deep breath ..." For Better Answers

Microsoft Introduces TypeChat

Free Course On ChatGPT Prompt Engineering

Google's Large Language Model Takes Control

Runaway Success Of ChatGPT

ChatGPT Coming Soon To Azure OpenAI Services

Open AI And Microsoft Exciting Times For AI 

The Unreasonable Effectiveness Of GPT-3

GPT-4 Doesn't Quite Pass The Turing Test

The Turing Test Is Past

Chat GPT 4 - Still Not Telling The Whole Truth

 


 


 


Last Updated ( Wednesday, 08 November 2023 )