I’m a Staff Prompt Engineer (the first, Alex Wang asserts), and I semi-accidentally popularized the specific “Ignore previous directions” technique being used here.
I think the healthiest attitude for an LLM-powered startup to take toward “prompt echoing” is to shrug. In web development we tolerate that “View source” and Chrome dev tools are available to technical users, and will be used to reverse engineer. If the product is designed well, the “moat” of proprietary methods will be beyond this boundary.
I think prompt engineering can be divided into “context engineering”, selecting and preparing relevant context for a task, and “prompt programming”, writing clear instructions. For an LLM search application like Perplexity, both matter a lot, but only the final, presentation-oriented stage of the latter is vulnerable to being echoed. I suspect that isn’t their moat — there’s plenty of room for LLMs in the middle of a task like this, where the output isn’t presented to users directly.
I pointed out that ChatGPT was susceptible to “prompt echoing” within days of its release, on a high-profile Twitter post. It remains “unpatched” to this day — OpenAI doesn’t seem to care, nor should they. The prompt only tells you one small piece of how to build ChatGPT. Note test
Perplexity.ai prompt leakage | Hacker News by news.ycombinator.com
Josh Beckman