And yes, there’ll be plenty of cases where “raw ChatGPT” can help with people’s writing, make suggestions, or generate text that’s useful for various kinds of documents or interactions. But when it comes to setting up things that have to be perfect, machine learning just isn’t the way to do it—much as humans aren’t either. Software trained to exactly emulate humans will inherit the same failure modes: inconsistency, memory problems, and so on.
Wolfram|Alpha as the Way to Bring Computational Knowledge Superpowers to ChatGPT
from stephenwolfram.com
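
The article's title names the pattern: hand exact computation to Wolfram|Alpha rather than to the model. Below is a minimal sketch of that routing, assuming Wolfram|Alpha's public Short Answers API; the `APP_ID` value and the example query are placeholders, not anything taken from the article.

```python
# A minimal sketch of the pattern the article advocates: route questions
# that must be exactly right to Wolfram|Alpha instead of letting the
# language model compute them. Uses Wolfram|Alpha's Short Answers API.
# APP_ID is a placeholder; a real key comes from developer.wolframalpha.com.
import urllib.parse
import urllib.request

APP_ID = "YOUR_APP_ID"  # placeholder, not a working key

def wolfram_short_answer(query: str) -> str:
    """Return Wolfram|Alpha's one-line answer to a natural-language query."""
    url = "https://api.wolframalpha.com/v1/result?" + urllib.parse.urlencode(
        {"appid": APP_ID, "i": query}
    )
    # The API answers in plain text; it returns HTTP 501 (raised here as
    # an HTTPError) when it cannot interpret the query.
    with urllib.request.urlopen(url) as resp:
        return resp.read().decode("utf-8")

if __name__ == "__main__":
    # The LLM drafts the prose; anything that has to be exact goes here.
    print(wolfram_short_answer("distance from Chicago to Tokyo in kilometers"))
```

The division of labor mirrors the argument above: the language model handles fluid text, while questions with a single right answer go to a computational engine that can actually guarantee it.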