Drowning in Bullshit
Maybe I’m just getting old (Editor’s note: he’s definitely getting old) but it’s hard to shake the feeling that online life sucks now. We’re drowning in bullshit and it’s only getting worse.
Google search sucks. Try finding anything: you have to wade through pages of ads, widgets, low-information content-mill crap, and outright spam to find relevant results. Yes, Google search is trying to funnel you into places where you'll generate money. But it doesn't help that the web is just full of bullshit. And of course there's a feedback loop here: that bullshit only exists because Google incentivized it with ad cash.
A lot of people are impressed with ChatGPT's smoke and mirrors. It strings words together convincingly enough to sound like it knows what it's talking about. You don't have to look very hard to see the seams[1], but the end result is striking truthiness.
Tech journalism has focused on what this could mean for Google's search business. Why would people need to search for information when they can just ask an AI chatbot that will give them something that looks like an accurate answer? But I think that's underestimating the scale of the transformation we'll see.
A lot of today’s bullshit is written by humans being paid 1 to 5 cents a word to grind out something that looks like information in the hopes of getting clicks that lead to ad impressions or affiliate link purchases. This shit has already made search unusable and killed a lot of what old timers like me loved about the web.
But who needs to pay humans when you can feed an endless stream of prompts into a passable AI text generator? Sure, the content might be a little less accurate, but accuracy was never the point--it was always bullshit. And in a lot of cases the writing will actually be better. What will change dramatically is the scale. This bullshit will be inescapable. How will anyone find any real information? Traditional Google search is toast no matter what.
Your relatives on social media are already falling for the least-convincing conspiracy content in history. What if it was written by a robot that can actually string a coherent sentence together? A tiny automated operation could barf out millions of fabricated posts and memes a day, just throwing shit at the wall to see what sticks. What would a sophisticated version of this look like? Does it even matter?
Media companies have been gutting newsrooms and divesting from actual journalism for decades now. Why wouldn't they lean on AI to produce massive amounts of content? Toss in a few facts and let the machines remix them endlessly. This already exists--have you ever clicked on a news link, maybe about why a company's stock price went up or down, and found yourself in the uncanny valley of an autogenerated news story with a few raw facts and no actual insight? Get used to that.
As all this AI-generated content takes over, and successive generations of models are trained on it, what is our place in all of this? As online life becomes dominated by robots throwing words at each other, do we just burrow down and try to find human spaces and connection?
A less-online life is pretty compelling these days. In the past couple years I’ve gotten off social media services one by one, and it does feel like I’ve dramatically reduced my exposure to bullshit. Maybe we can all just focus on our sporadically-updated, low-readership blogs!
[1] Ask ChatGPT a question about a topic you know well and you're sure to trigger the Gell-Mann Amnesia effect.