This is an example of the ‘capability overhang’ phenomenon I’ve been writing about with respect to language models for a while – existing LLMs are far more capable than we think. With some experimentation, and by finding experts who can devise new ways to phrase questions, you can get extraordinarily large capability jumps without retraining the model. Optimizing the input can be done in parallel with optimizing the model.

Import AI 314: Language models + text-to-speech; emergent cooperation in wargames; ICML bans LLM-written papers, by Jack Clark