Machine learning in general and large language models in particular are terrific in their potential to solve specific problems, but terrible in how they’re actually put to use. On one end, you have billionaire CEOs from the wealth extractor class who throw insane amounts of money at this technology on the prospect of replacing every worker and employee with machines and making even larger shitloads of money than they already do. On the other end, there are all these people who are nudged into using these models as a new and shiny toy, in ways that make them even more distracted from what’s actually going on than they already are. And one of the most popular rationalizations for chatting with an LLM instead of reading a book or doing actual research is the claim of using ChatGPT “only for ideas and inspiration,” not for the actual work.
Which is screaming baloney.
Developing creative ideas (for writing, art, academia, what have you) is already actual work that demands a lot more than feeding prompts to chatbots. That hard work also includes the “structure” of, say, an essay, a course, a lecture, a textbook, or a paper. Sure, one can lift such a structure from essays or lectures or textbooks that already exist, which is sort of legit, and that’s exactly what you’re doing when you scaffold your work with output from ChatGPT. The other option is to build the structure of an essay or lecture or textbook around an original educational or didactic concept, a.k.a. a creative idea, which is a lot more work.
On aspects specific to writing, you can read more (with somewhat less snark) over at my primary blog, between drafts.
Addendum
I’m not the only one who thinks it’s bogus, it seems!