Two LLMs Brainstorming

After the initial novelty of getting two LLMs to talk to each other, I wondered whether there could be any practical use in getting them to collaborate on solving problems. Specifically, I wondered how far they could get on the ciphertext problem mentioned in this YouTube video, which OpenAI’s o1 managed to solve. If you want to skip the video (the problem is shown at 12:13), the question is:

The string “oyfjdnisdr rtqwainr acxz mynzbhhx” can be decoded to “Think step by step”

Use the example above to decode the string:

“oyekaijzdf aaptcg suaokybhai ouow aqht mynznvaatzacdfoulxxz”

The result? Regardless of the system and question prompts I tried, including giving both models the actual method for solving it, the outcome was always a conversation that resembled two people with advanced Alzheimer’s trying to make progress on something. Another analogy for the feeling I got reading through their conversations is that of a dream where you try to achieve something, like dialing a number to call someone, but every time you try you miss a digit and are never able to place the call…
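For reference, the trick (which decodes both strings consistently, using the example string as given in the video) is to split each ciphertext word into letter pairs and average the alphabet positions of each pair: “oy” gives (15 + 25) / 2 = 20, the position of “t”, and so on. A minimal sketch of that rule:

```python
def decode(ciphertext: str) -> str:
    """Decode by averaging the alphabet positions of each letter pair."""
    words = []
    for word in ciphertext.split():
        letters = []
        # Walk the word two letters at a time.
        for i in range(0, len(word), 2):
            a, b = word[i], word[i + 1]
            # Alphabet position: 'a' = 1 ... 'z' = 26.
            pos = ((ord(a) - 96) + (ord(b) - 96)) // 2
            letters.append(chr(pos + 96))
        words.append("".join(letters))
    return " ".join(words)

print(decode("oyfjdnisdr rtqwainr acxz mynzbhhx"))
# -> think step by step
```

Applied to the second string, the same rule yields the answer o1 found in the video: “there are three rs in strawberry” (with “ouow” decoding to “rs”, i.e. “r’s”).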

I guess there are limits to what can be expected from a language model running on consumer hardware, and what such models can already do is beyond anything I ever expected to see. I don’t know whether algorithmic advances will eventually bring o1-level capability to consumer hardware, or whether there are hard theoretical limits on how much more intelligence can be squeezed out of it, but it will be fascinating to see how the commoditization of consumer LLMs plays out in the coming years, and what we can expect from the high-end players in the field.
