Stack Overflow Bans ChatGPT

Dec 6, 2022

Stack Overflow is “banning” the use of ChatGPT (post). I know more than a few programmers who sought to farm some free internet points by answering questions with the help of ChatGPT. It’s an obvious and existential question for Stack Overflow, which is why it makes sense that the thought experiment is playing out here first.

Is the convergence of bots and humans necessarily a collision course?

First, there’s the question of whether Stack Overflow can actually enforce this policy. The most workable strategy is probably rate-limiting accounts. But rate limits don’t actually identify ChatGPT-assisted answers, and they penalize power users like the legendary Jon Skeet (who has answered over 38,000 questions, about 7 a day for the last 14 years). You can’t simply flag wrong answers either, because plenty of human-submitted answers are wrong (or low quality).
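
To make that concrete, here is a minimal sketch of per-account rate limiting with a sliding window. The `AnswerRateLimiter` class and the cap of five answers per 30 minutes are illustrative assumptions, not Stack Overflow’s actual policy or code:

```python
import time
from collections import defaultdict, deque

class AnswerRateLimiter:
    """Hypothetical per-account sliding-window limiter.

    The cap (5 answers per 30 minutes) is illustrative, not Stack
    Overflow's actual policy.
    """

    def __init__(self, max_answers=5, window_seconds=30 * 60):
        self.max_answers = max_answers
        self.window_seconds = window_seconds
        self._events = defaultdict(deque)  # user_id -> timestamps of recent answers

    def allow(self, user_id, now=None):
        now = time.time() if now is None else now
        events = self._events[user_id]
        # Drop timestamps that have aged out of the window.
        while events and now - events[0] > self.window_seconds:
            events.popleft()
        if len(events) >= self.max_answers:
            return False  # over the cap: hold, queue, or reject the answer
        events.append(now)
        return True

limiter = AnswerRateLimiter()
print(limiter.allow("user-123"))  # True until the cap is hit
```

The obvious flaw is visible right in the sketch: the limiter only sees counts, so it throttles a prolific human and a ChatGPT copy-paster identically.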

It’s clear that the ranking algorithms on user-generated content websites must materially change. Upvotes/downvotes are no longer sufficient when the amount of content increases exponentially. I’m unsure what the right answer is; it may be even more algorithmic surfacing of information (e.g., a TikTok-style feed).
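
For reference, one well-known refinement of raw vote counts is ranking by the lower bound of the Wilson score interval, which discounts thinly-voted answers. This is a minimal sketch of that standard technique, not Stack Overflow’s ranking, and since it still depends on the same votes it doesn’t solve the volume problem above:

```python
import math

def wilson_lower_bound(upvotes, downvotes, z=1.96):
    """Lower bound of the Wilson score interval for the upvote fraction.

    Unlike a raw net score, this discounts answers with only a handful of
    votes, which matters once answers arrive faster than voters can keep up.
    """
    n = upvotes + downvotes
    if n == 0:
        return 0.0
    p_hat = upvotes / n
    denom = 1 + z * z / n
    centre = p_hat + z * z / (2 * n)
    margin = z * math.sqrt((p_hat * (1 - p_hat) + z * z / (4 * n)) / n)
    return (centre - margin) / denom

# A lightly-voted answer no longer outranks a heavily-vetted one.
print(wilson_lower_bound(3, 0))    # ~0.44
print(wilson_lower_bound(80, 20))  # ~0.71
```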

It’s also becoming clear that GPT-3 will at least chip away at knowledge-sharing forums like Stack Overflow. Programmers can get answers more easily by chatting with ChatGPT, or by using GitHub Copilot without leaving their IDE. Developers might even just read documentation that was stubbed out by a GPT-3 program. Stack Overflow has faced diminishing relevance for years and struggled to monetize (it was sold to Prosus last year for $1.8 billion).

Maybe these platforms can adapt to computer-aided UGC. Or maybe we need new platforms where humans and bots co-exist (or don’t).