A wordy title for a fairly simple question: Is it a good idea to trust groups like OpenAI with the task of censoring the inputs and outputs of what they create?
I have been watching the growth of Dall-E 2’s beta community. OpenAI has curated access to this software quite carefully, and buried within the API are safeguards that restrict its output even further. For something supposedly built as a tool for artists, they have not picked many artists – the access I see goes mostly to YouTubers, large Twitter accounts, and journalists.
I can’t believe this happened by accident. Artists, you see, are troublesome creatures. Most of us think the G rating should be relegated to history – that from the time they begin to understand language, children deserve thought-provoking, personally relevant, and human stories, art, and music that creep into PG, maybe even PG-13, territory.
OpenAI demands your prompts suggest no violence, no disease, nothing that could be considered trauma an actual human being might experience, unless you count a toad wearing a party hat as traumatic. They accomplish this by screening keywords, and possibly with some degree of sentiment analysis (at least, I would be surprised if they weren’t incorporating it). YouTubers gush over its power while being amused at getting banned for using the word ‘shot’ in the context of photography.
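To make concrete what that kind of screening amounts to, here is a minimal sketch of a naive keyword filter. This is my own illustration, not OpenAI’s actual moderation code; the blocklist entries and matching logic are invented for the example.

```python
# A naive blocklist filter, roughly the kind of keyword screening described
# above. Illustrative only - the wordlist and matching rules are assumptions.
import re

BLOCKLIST = {"shot", "blood", "gun", "disease"}  # hypothetical entries

def prompt_is_blocked(prompt: str) -> bool:
    """Return True if any blocklisted word appears in the prompt."""
    words = re.findall(r"[a-z']+", prompt.lower())
    return any(word in BLOCKLIST for word in words)

# The photography anecdote: an innocuous prompt trips the filter anyway.
print(prompt_is_blocked("a photographer lining up the perfect shot"))  # True
print(prompt_is_blocked("a toad wearing a party hat"))                 # False
```

The point is that a filter like this has no idea what ‘shot’ means in context, which is exactly how you end up banned for talking about photography.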
There is something sinister about this. When we think about the equivalent being done to artists in traditional media, we are appropriately alarmed – history has taught us to be violently opposed to anyone pulling the pens and paints from artists’ hands. No such alarm exists in the community OpenAI has cultivated.
No one is calling out any kind of warning that when a medium is truly transformative, censorship of it can deeply scar our culture. We’ve already seen this happen – the mid-twentieth-century Red Scare and the Comics Code come to mind. Part of it is the idea that OpenAI owns this model: we see ourselves as buying access to it, so moderation of some kind feels culturally acceptable. The consequences, though, will be exactly the same.
Consider the best-case scenario – OpenAI is genuinely, willingly an ethics-focused entity, attempting to get this process right and limit the damage its creation might cause. Putting aside whether they should have built it at all if they have to serve as its perpetual guardian, they are one group with some very human limitations on knowing what ‘damage’ even means. They cannot know what damage they will cause through a chilling effect.
Because I didn’t know this myself at one point: a chilling effect is the propagated impact censorship exerts on media beyond its individual targets. An example would be a journalist who feels they cannot report on war crimes because others have been convicted of breaching state secrets in similar situations, or, during the Red Scare era, an artist who cannot speak out against the Cold War for fear of being persecuted as a communist. OpenAI has introduced a chilling effect from Dall-E 2’s inception, whether we think about it that way or not – artists will simply avoid using the platform if they can be banned for showing a pair of boxing gloves, or someone coughing (real examples of things that can, currently, get you kicked out of the beta).
The best-case scenario here is that Dall-E 2 will either adapt to the demands of its users or die, with people rejecting its output as vapid – a gimmick at best. But money can lead us down a different path: if it’s cost-effective, studios will begin to use it, or buy the special privileges that “amateur” users are apparently too stupid and too immoral to be trusted with.
Which leads us to the worst of all worlds, the one I think we’re in. Yay, we’re here!
OpenAI is a business. The ethical decisions it makes must align with its profits for it to survive. It is, in fact, a business decision to project itself as ethical, but like Google (remember “Don’t be evil”?), that goes out the window the moment other priorities determine its survival.
If it is more profitable, OpenAI will close off access to its technology entirely. It will blame you, the artist, the dabbler, for not being profitable enough or G-rated enough to use its software, and it will promptly find other customers. Those customers may be the worst humanity has to offer, and most people will still be blaming market forces as it happens, even as today’s users do OpenAI’s heavy lifting of marketing the technology and digesting its potential.
I can’t help but think the motivation for pushing a G rating on its users is far more cynical than any ethical concern. They are cultivating an image, and they fear that permitting users anything more puts that image at risk. It signals pretty clearly to me that they don’t really care what this technology is used for, only about making sure people don’t see that it can be used for some deeply evil shit – not yet, anyway.
I think a lot of people see this technology and assume they’ll be running it on their own rig someday, but Dall-E 2 took 100k-200k GPU hours to train, probably over a quarter of a million dollars of rented processing time, and training GPT-3 ran into the millions. Realistically, we may never see that happen. We are stuck with these monoliths making decisions about what art is and is not acceptable, for better or worse.
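For a rough sense of where that dollar figure comes from, here is the back-of-envelope math. The GPU-hour range is the one quoted above; the hourly rental rate is my own assumption, not a published number.

```python
# Back-of-envelope training cost estimate. The GPU-hour range comes from the
# paragraph above; the hourly rental rate is an assumed figure, not official.
gpu_hours_low, gpu_hours_high = 100_000, 200_000
rate_per_gpu_hour = 1.50  # assumed USD per rented datacenter-GPU hour

print(f"low estimate:  ${gpu_hours_low * rate_per_gpu_hour:,.0f}")   # $150,000
print(f"high estimate: ${gpu_hours_high * rate_per_gpu_hour:,.0f}")  # $300,000
```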
… No, you know, let’s not kid ourselves. It’s worse. Mostly worse.