Dear Fedi friends,
I'd like to put together a list of people who are publicly resisting / calling out LLMs and AI slop.
Why? I enjoy reading my Fediverse feed in topical lists and I need something to counteract the unrelenting AI hype I see in the media.
Do you have any recommendations?
So far, at the top of my list I have:
@timnitGebru @emilymbender and @alexhanna of @DAIR
plus @cwebber @jaredwhite and @tante
Anyone else to recommend who advocates for #NoAI?
Regarding that last boost, I'm starting to conceive of LLMs and image generators as a phenomenon of (American) society eating its seed corn. If you're not familiar with the phrase, "seed corn" is the corn you set aside to plant next year, as opposed to the corn you eat this year. If you eat your seed corn this year, you have no seeds to plant next year, and thus create a crisis for all future years, a crisis that could have been avoided with better management.
LLMs and image generators mass-ingest human-created texts and images. Since the human creators of the ingested works are neither compensated nor even credited, this ingestion puts negative pressure on sharing such things. Creative acts that would function as seed for future creative acts are suppressed. Creative people will have little choice but to lock down, charge for, or hide their works; otherwise, those works will be ingested by innumerable computer programs and replicated ad infinitum without so much as a credit attached. Seed corn that had been freely given forward will become difficult to get. Eaten.
Eating your seed corn is meant to be a last-ditch act taken out of desperation after exhausting all other options. It's not meant to be standard operating procedure. What a bleak society it is that does this, in essence consuming itself.
#ChatGPT #GPT #Midjourney #StableDiffusion #DALLE #AI
My half-baked deep thought of the day is that we are living through a time in which imagination has been delegitimized generally, and we are reaching the necessarily absurd crescendo of this process. It is taking physical form in technologies such as generative AI, literal Anti-Imagination generators.
Let me be clear about what I (don't) mean by "imagination". I do not mean "creative", "fictional", "imaginary", or "artistic"--those words can be related, but they've also been co-opted into exactly the trend I'm calling out. I also don't mean interesting images that appear in your mind or dreams but are then dismissed as lacking significance. I do mean truly imaginative acts arising within and from the mind, not subjected to editorial scrutiny by logic, empiricism, or other instrumentalized forms of reason. Dreams are one way to access imagination; active imagination--stream of consciousness directed but not edited by the conscious mind--can be a way to explore it. Religious, magical, mystical, or meditative practices can too, if that's your jam. So can psychoanalysis and some other forms of therapy. There are countless other ways, and I don't pretend to have any special knowledge of this; I'm just riffing on an idea.
More and more I believe that we have to rediscover and exercise this aspect of ourselves if we're to navigate the current crisis. (*) Our collective imagination lacks force at a time when generative Anti-Imagination is reaching industrial scale.
#AI #GenAI #GenerativeAI #imagination #ActiveImagination
() "Crisis" has a medical definition: "that change in a disease which indicates whether the result is to be recovery or death" (Webster's dictionary). Its Greek root can also mean "decision", which I like to think about when considering "crises". They are decisions that need to be made.
I'm tinkering with an argument, based on algorithmic complexity, that if it were possible to make something like an "automated mathematician" or "automated scientist", then these would be expected to eventually produce outputs that we humans would be unable to distinguish from random noise.
Getting the whole argument just right is fiddly, but the basic idea is this. You feed some kind of theory into the AM/AS, which is a black box. It churns on the theory and spits out a result, which is added to the theory (I'm neglecting the case where the result is inconsistent with the theory). It can now churn on theory + result 1. After doing this long enough, for any given and potentially very large N, it's churning on theory + result 1 + result 2 + ... + result N. Whatever it spits out next will depend in particular on results 1 through N. When N is large enough, the output will almost surely depend critically on one or more of those results, so unless you know them, you will not be able to understand it. In other words, the output will look like noise to you. And if the AM/AS produces results appreciably faster than people can understand them, there will be an N beyond which no one has understood all the results up to that point. The output becomes indistinguishable from random noise.
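Here's a toy simulation of the dependency part of the argument. Everything in it is invented for illustration (the `run_am` model, the two-dependencies-per-result rule): each new "result" depends on a couple of uniformly chosen earlier results, and to understand the latest result you must read its whole dependency closure.

```python
import random

def run_am(n_results: int, deps_per_result: int = 2, seed: int = 0):
    """Toy stand-in for the black-box AM/AS: result i depends on a few
    uniformly chosen earlier results (0 stands in for the base theory)."""
    rng = random.Random(seed)
    deps = {0: set()}
    for i in range(1, n_results + 1):
        deps[i] = {rng.randrange(i) for _ in range(deps_per_result)}
    return deps

def closure(deps, i):
    """Everything you must have read to understand result i."""
    seen, stack = set(), [i]
    while stack:
        j = stack.pop()
        if j not in seen:
            seen.add(j)
            stack.extend(deps[j])
    return seen

# How much reading does the latest result demand as N grows?
for n in (10, 100, 1000, 10000):
    deps = run_am(n)
    print(n, len(closure(deps, n)))
```

The point of the toy: the reading burden for the latest result grows with N, so any fixed human reading budget is eventually exceeded, and past that point the output might as well be noise.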
If you're into software development, this would be analogous to a software system that generates syntactically correct code and adds it to a growing library that future generated code can call. If you ran this long enough, virtually all the programs it generated that were short enough for human beings to have any hope of reading and understanding would consist almost entirely of calls to code generated by the system. You'd have no idea what any of this code did unless you studied those library calls, which you wouldn't be able to do beyond a certain scale. If the system were expanding the library faster than you could read and understand it, there'd be no hope at all.
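To make the analogy concrete, here's a toy model (the primitive names and the growth rule are made up for illustration): each step emits one new function whose body is a few calls to randomly chosen existing names, and we track what fraction of all calls target generated code rather than primitives a human already knows.

```python
import random

PRIMITIVES = ["add", "mul", "neg"]  # hypothetical built-ins a human already understands

def grow_library(steps: int, calls_per_fn: int = 3, seed: int = 1) -> float:
    """Each step generates one function f_i whose body calls a few randomly
    chosen existing names. Returns the fraction of all emitted calls that
    target generated functions rather than primitives."""
    rng = random.Random(seed)
    library = []          # names of generated functions so far
    generated_calls = 0
    total_calls = 0
    for i in range(steps):
        pool = PRIMITIVES + library
        body = [rng.choice(pool) for _ in range(calls_per_fn)]
        generated_calls += sum(1 for name in body if name.startswith("f_"))
        total_calls += len(body)
        library.append(f"f_{i}")
    return generated_calls / total_calls

for steps in (10, 100, 1000):
    print(steps, round(grow_library(steps), 3))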
I'll leave it as an exercise to the reader whether this is a desirable thing to do and whether it's happened yet. I would offer, though, a question to ponder: what reason is there to believe that a random number generator hooked up to an inscrutable interpreter produces human flourishing, for any given meaning of "human flourishing" you care to use?
#tech #dev #mathematics #AutomatedMathematician #AutomatedScientist #AI #GenAI #GenerativeAI #ThoughtExperiment
Hi! I’d like to nominate myself for your list. I’m not nearly as high-profile as many of the folks you mentioned or who are being recommended, but I am a practicing computer scientist trained in AI, and I have managed to go viral on LinkedIn with my criticism of AI a few times (haha). I post pretty regularly on the fediverse about AI, usually critically, though sometimes "philosophically" or politically.
Some blog posts:
- AI as “anti imagination”
- The problem of thinking of everything as data
- An automated scientist would be a random number generator
- AI as eating your seed corn
CC: @timnitGebru@dair-community.social @emilymbender@dair-community.social @alexhanna@peertube.dair-institute.org @DAIR@dair-community.social @cwebber@social.coop @jaredwhite@indieweb.social @tante@tldr.nettime.org