Why language models need not get worse
Thursday 9 February 2023, 00:00

Sam Kriss has a Substack post in which he describes the zairja of the world and then links it to his ideas on why AI is getting worse. As I read it, his claim is that successive generations of GPT models have produced less and less valuable output as they approximate the average content of the World Wide Web more and more closely. Nearly all of the Web is garbage, so an approximation of the average Web document is garbage too. The output of older models is weird and interesting because it is more random and approximates the average Web document less accurately; the output of newer models is boring and worthless. My own view is that these problems are neither as serious nor as inevitable as he presents them. There seem to be some gaps in Kriss's understanding of how language models, and the ChatGPT Web service in particular, work, and filling in those gaps points to some natural solutions for the problem he describes.
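The "more random and interesting" versus "closer to the boring average" trade-off has a concrete counterpart in how language models sample their output: the temperature parameter. Here is a minimal sketch, using an invented toy next-token distribution (the probabilities are illustrative, not taken from any real model), showing that low temperature concentrates probability on the most likely, most "average" token, while high temperature flattens the distribution toward uniform randomness.

```python
import math

def temperature_scale(probs, temperature):
    """Rescale a next-token distribution by a sampling temperature.

    temperature < 1 sharpens the distribution toward its most likely
    ("average") token; temperature > 1 flattens it toward uniform,
    producing weirder, more random samples.
    """
    logits = [math.log(p) for p in probs]
    scaled = [l / temperature for l in logits]
    # Softmax with max subtracted for numerical stability.
    m = max(scaled)
    exps = [math.exp(l - m) for l in scaled]
    total = sum(exps)
    return [e / total for e in exps]

# A hypothetical next-token distribution for a toy four-token vocabulary.
probs = [0.6, 0.25, 0.1, 0.05]

sharp = temperature_scale(probs, 0.5)  # top token dominates even more
flat = temperature_scale(probs, 2.0)   # distribution moves toward uniform
```

With this toy distribution, `sharp[0]` exceeds the original 0.6 while `flat[0]` falls below it: turning the temperature down yields samples that hug the model's notion of the average document, and turning it up recovers some of the randomness Kriss finds interesting in older models.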