ChatGPT and other "deep-learning" programs have made a lot of noise and headlines lately. As I alluded to in my nopecoin post, I'm very much not on-board with some of the new tricks we're teaching the old dog that is personal computing.
I had the privilege of growing up alongside home computers themselves, as well as the internet. I remember categorized newsgroups, public web forums, mailing lists, and all sorts of well-thought-out ways for people to connect and share ideas. So, I've never understood the appeal of this current crop of "social media" giants. I don't get it. I so very much don't get it that I don't use any of them. The closest I come is youtube, which I only touch with a ten-foot pole of browser plugins that disable the comments, shorts, ads, and most of its other egregious anti-features.
But, I'm not most people. Most people go ahead and use facebook/twitter/instagram/whatever, at least to some degree.
Thus, we've let unsupervised and unconsidered algorithms decide which bits of information we see or don't see. Your facebook/twitter/instagram/whatever feeds are not lists of things you've asked for; they're infinitely-scrolling sets of the items calculated to be the most likely to generate the most profit for the middleman in question. This has had society-poisoning and democracy-breaking effects, and it should not be surprising that when profits matter more than honesty, things go pear-shaped quickly.
After seeing the messes we get ourselves into when we defer our thinking and sorting to machines (profit machines, specifically), my personal reaction would be to take a step back and reflect, "hm, maybe we need to think this through more carefully". Instead, we've gone ahead and given these same capitalism bots the ability not just to show and hide and reorder (dis)information, but to literally make things up on the spot.
What? Why?!
(Well, I know why. Middlemen need no longer rely solely on their own victicustomers to provide the "content" to shuffle into each other's feeds.)
I'm beginning to feel about AI software the way I feel about guns. Mechanically, these are fascinating pieces of engineering and ingenuity. There's both a complexity and a simplicity that is captivating and beautiful. Also, no thank you! Decline. Unsubscribe. Not in my house. It's billed as a fun and neutral tool, but I have a very hard time seeing it as anything but a dangerous weapon.
I suspect we'll soon see artificially-generated novels, and movies, and augmented-reality serials. They'll be amusing enough not to die in the crib. Since they'll cost nearly nothing to produce, there won't be much reason not to keep churning them out, even at low points in any fad/zeitgeist cycle. Entire wikipedias' worth of "information" will come and go and morph and self-refer. I think that's pretty fascinating, but it also sure looks like authorship, authenticity, and having any basis at all in truth are all up on the chopping block. Once these things start consuming each other's and their own output as input, we're going to have more messes to clean up than we can deal with. Based on the current trajectory of our approaches, my guess is we'll try to fight fire with more and more fire.
I worry about how much natural intelligence we have left, let alone what the artificial sort will get up to.