People seem to have this idea that large language models (LLMs) can be relied upon to do complex things. People want to do things like “deep research”, pretending that an LLM can effectively perform the job of a research analyst. People make insulting statements like “GPT-4o…enables PhD-level reasoning”. People seem to be under the impression that LLMs can be productive on their own,[1][2] that it’s a good idea to let agentic AI loose with the ability to submit proposed code to public open-source projects, and to disparage the maintainer when that code is rejected (and I venture to say many are not discouraged by this result).