Artificial Intelligence: An Insult to Life Itself.

I’ve written before about my admiration for the legendary animator and artist Hayao Miyazaki. He has produced some of the most moving and beautiful works of art in Japanese cinema. My personal favorite is Porco Rosso, but others are more likely familiar with Spirited Away or My Neighbor Totoro. Or perhaps his most recent movie, The Boy and the Heron, which I really need to see sometime.

A favorite story about Miyazaki comes from the time he was given a sales pitch–a tech demo from a vendor looking to convince the director to use AI-generated animation in his films (in the background, for zombies, the vendor suggests). Miyazaki watches the clip of an animated figure struggling across the floor and seems to think for a moment. Then he answers with a memory about a friend of his who’s lost muscle power in his arm and can’t even give a high-five anymore. But this, he says, is something else–it’s created by something that has no idea what pain even is.

“I am utterly disgusted,” he says. “I strongly feel that this is an insult to life itself.”

You feel sort of bad for the presenters, but to be fair, what were they expecting?

Something else I’ve written about is my love for Lord of the Rings, and the excellent adaptation by Peter Jackson. I’ve watched the films multiple times–not just the original releases, but the extended editions, along with their appendices. In those appendices, they talk about the animation software they used to simulate the massive battle scenes–which was, again, an AI program. It was so good, in fact, that the programmers noticed two of the simulated soldiers just up and flee the battle rather than fight.

I have been thinking about both these examples recently–or rather, I’ve been thinking about the Miyazaki one constantly, and the Peter Jackson one came to me as a counter-example when I started typing this up–as I’ve struggled with my own thinking on the newest and quite possibly most dangerous tech available right now: Artificial Intelligence.

(My little brother, who understands computers better than I do, informs me that the Miyazaki AI and the LotR AI are different sorts of AI. “Generative” vs. “emulative,” or something. I don’t understand the distinction.)
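
(A quick aside for the curious: the battle AI in those films is usually described as an agent-based simulation–each digital soldier is a little rule-following agent, and behavior like “fleeing” can emerge from the rules rather than being scripted. Here’s a toy sketch of the idea in Python, entirely my own illustration and not a claim about how the actual movie software worked.)

```python
import random

# Toy agent-based battle simulation: each soldier follows a couple of
# simple rules, and "running away" can emerge from those rules on its
# own. Purely illustrative; not the software used on the actual films.

class Soldier:
    def __init__(self, name, courage):
        self.name = name
        self.courage = courage  # 0.0 (timid) to 1.0 (fearless)

    def decide(self, nearby_allies, nearby_enemies):
        """Fight if the odds look survivable; otherwise flee."""
        if nearby_enemies == 0:
            return "advance"
        odds = nearby_allies / (nearby_allies + nearby_enemies)
        if odds + self.courage < random.uniform(0.5, 1.5):
            return "flee"
        return "fight"

army = [Soldier(f"soldier_{i}", random.random()) for i in range(10)]
for soldier in army:
    action = soldier.decide(nearby_allies=3, nearby_enemies=8)
    if action == "flee":
        print(f"{soldier.name} has had enough and runs from the battle")
```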

AI has been around for years (as these examples show), and I’ve often expressed my feelings about it to my friends, but it never really felt necessary to write a blog post about it. Smarter and louder people than I were already expressing concern, and I felt that anyone sensible had to realize the inherent danger in the technology. So while I thought about posting, I was, in the end, too lazy to actually do it.

But then two things happened. One, I went to a teacher’s training convention. One that featured multiple classes about AI. Specifically, about how awesome AI was and how we should use it in the classroom. Some of the individual teachers giving presentations did speak about pitfalls with AI, including its bias problem, the risk of cheating, and the poor expression that sometimes resulted. The keynote speaker, however, barely paid lip service to those concerns, and instead kept talking about how amazing the tech was–especially the paid premium version!

I later realized that our training convention had been co-sponsored by Code 313, a tech non-profit which develops a lot of educational technologies–including AI. That explained a lot.

Two, I assigned my students some papers. Papers that, inevitably, came back as AI-generated–to the extent that we can even determine such things. And while I was still stewing over the students who had done this, several of my teacher colleagues admitted that they, likewise, were using AI to come up with teaching materials.

So screw it. Let’s talk about how we’re all doomed and why this is horrible.

The current iteration of AI, to be clear, is not sentient or self-aware. Amazon drones aren’t about to start killing heads of state or setting nukes off. It’s best explained, perhaps, as advanced autocorrect or prolonged auto-complete. Some friends who use ChatGPT describe it simply as “Google on steroids.” (We have this odd relationship with steroids where they’re technically illegal but we always use them to describe things we like.) ChatGPT, Bard, and the like are applications that build new material by predicting what, across the vast text of the internet, most commonly fills a given space. It’s simple pattern modeling. There’s no reasoning involved.
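
To make that “prolonged auto-complete” idea concrete, here is a toy sketch in Python. It is entirely my own illustration–the corpus, the function names, everything–and it is absurdly simpler than the giant neural networks behind ChatGPT, but the basic job is the same: learn which word most commonly follows which, then “write” by always picking the likeliest next word.

```python
from collections import Counter, defaultdict

# Toy "prolonged auto-complete": learn which word most often follows
# which, then generate text by always picking the most common follower.
# Real chatbots use enormous neural networks trained on much of the
# internet, but the underlying job is the same kind of pattern modeling.

corpus = (
    "the king rode to the battle and the king fell in the battle "
    "and the men fled the battle and the men mourned the king"
)

followers = defaultdict(Counter)
words = corpus.split()
for current_word, next_word in zip(words, words[1:]):
    followers[current_word][next_word] += 1

def autocomplete(start, length=8):
    """Extend a prompt by repeatedly choosing the most common next word."""
    output = [start]
    for _ in range(length):
        options = followers.get(output[-1])
        if not options:
            break
        output.append(options.most_common(1)[0][0])
    return " ".join(output)

print(autocomplete("the"))
```

The output reads like fluent English about kings and battles, but the program has no idea what a king or a battle is. That, scaled up enormously, is roughly the technology in question.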

That’s part of the problem.

On a practical level, my problem with AI as a teacher is, as I said before, that it makes cheating so easy and so hard to detect. AI detectors exist, but they are notoriously unreliable. Any teacher worth their salt, of course, can tell when a student has used an AI to write a paper–it’s very noticeable when the student who’s skipped all year suddenly starts throwing around words like “verisimilitude”–but AI makes proving it a very thorny proposition. For me especially that’s problematic, since I dislike penalizing students without hard proof that they were cheating. It’s going to make writing and grading research papers VERY much more complicated in the short term and lead to a lot of stupid people who don’t understand research (though arguably we weren’t doing great on that score anyway).

But on a larger, more societal level, the problem is that AI-generated content has no inherent reasoning behind it. Yes, we are likely to see a lot of grad students and college professors flooding journals with AI-generated academic essays. But these articles will have no basis in reality or logic; they will simply SOUND logical and SOUND realistic. Misinformation is about to get a huge shot in the arm. Already there have been numerous cases where lazy lawyers filed AI-written legal briefs, only to learn that they referenced completely fictitious cases.

The DeSantis campaign, for instance, generated images of Trump and Fauci hugging. I’m not sure anyone believed those photos were real, but it’s a dangerous precedent–nothing major on its own, but what else could you generate and get people to believe?

Worse from an artistic perspective, none of the AI-generated material will be remotely innovative or original. People love to talk about how AI makes them feel “creative” and helps them “create things”, but this fundamentally misunderstands the way the current version of AI works. ChatGPT and Midjourney don’t create new things, they just rearrange old things. It’s like a fan in a hot room–not actually improving things, just moving the same stuff around.

I mean, I find these videos funny, but they sort of miss the point of both Wes Anderson and Star Wars.

The problem is that these points don’t matter, because this recycled material is faster, easier, and most important, cheaper to produce. Sports Illustrated just fired most of its writing staff, in the wake of revelations that many of its profiles and articles were being written by AI. News articles everywhere are starting to use creepy Midjourney images as headers. The first AI-written movie was recently released. Gaming companies are already toying with the idea of AI-made games with AI-written stories and AI art. It’s all terrible, and frankly it’s all going to be terrible, always, but it doesn’t matter–it doesn’t need to be very good if it’s free and easy. Art sites like DevART and literary journals like Clarkesworld are getting flooded with computer-remediated jumbleware from “artists” proud of their ability to click “Generate” on a website.

Anyone remember that blog I wrote a few years back, about how good art is harder to find than ever, because anyone can submit trash? Now add to that a trash generator, capable of creating countless new clones of old trash.

I’m using a lot of metaphors. Let me bring this to something human.

Neil Gaiman, who I quote a lot because he’s really good at saying things, has this to say about artistic individuality.

“Start telling the stories that only you can tell, because there’ll always be better writers than you and there’ll always be smarter writers than you. There will always be people who are much better at doing this or doing that – but you are the only you.

Tarantino – you can criticize everything that Quentin does – but nobody writes Tarantino stuff like Tarantino. He is the best Tarantino writer there is, and that was actually the thing that people responded to – they’re going ‘this is an individual writing with his own point of view’.

There are better writers than me out there, there are smarter writers, there are people who can plot better – there are all those kinds of things, but there’s nobody who can write a Neil Gaiman story like I can.”

Oddly enough, another of my favorite British writers, CS Lewis, said something that dovetails with this nicely.

Literary experience heals the wound, without undermining the privilege, of individuality. There are mass emotions which heal the wound; but they destroy the privilege. In them our separate selves are pooled and we sink back into sub-individuality. But in reading great literature I become a thousand men and yet remain myself. Like a night sky in the Greek poem, I see with a myriad eyes, but it is still I who see. Here, as in worship, in love, in moral action, and in knowing, I transcend myself; and am never more myself than when I do.

CS Lewis, An Experiment in Criticism

The great thing about art, Lewis is saying, is that it allows you to inhabit, for a while, someone else’s shoes, to become for a moment a schoolchild who finds a magical world in a wardrobe, or an Oxford professor kidnapped by colonialist astronauts. Gaiman contends that each story is truly great in the way it expresses the pure individuality of its specific writer. And it is connecting with that pure individual vision that allows Lewis’ reader to “transcend” themselves without losing themselves. The Russian theorist Mikhail Bakhtin called this trait of art “heteroglossia.” (Maybe. Bakhtin is a hard one to parse sometimes).

And here is why I think of AI as an insult to life itself–AI loses all individuality in its process of generating “art.”

AI is not the expression of an individual. AI is the voice of multitudes; it is art and science generated by hive-mind; it is quite literally a form of “our separate selves are pooled and we sink back into sub-individuality.” It is the roar of the mob, and quite frankly there is nothing more terrifying.

I like to think of myself as an optimistic guy. I’m sure there are some definite positives to AI–I can readily believe in its utility as a programming tool. I can’t really gainsay my colleagues who use it to come up with teaching questions or exercises (though I dislike it on principle). I’ve toyed with the idea, myself, of using AI to realize my video-game-creation fantasies.

But as recent revelations about phones and social media have shown, each new technology comes with drawbacks and problems that we should be wary of. And it disturbs me that no one seems to be genuinely worried about this one.


3 thoughts on “Artificial Intelligence: An Insult to Life Itself.”

  1. Hey John, a good post.

    I agree with what you say about individuality and voice and purpose. Anything that can be made for free is worthless, at least from an economic POV. So for this reason I think the future of creative pursuits, at least at the high end, is safe. The most successful media products to have made heavy use of generative AI so far are the Spiderverse movies and Puss in Boots: The Last Wish. Both of these used AI to automate the “in-betweening” process of animation and also to automate the addition of certain effects. But rather than lay people off, they chose to simply make the rest of the movie look even better. I think as far as high-end media content goes, that’s the path forward.

    What I do worry about is the deluge of trash drowning people out. I’m not sure about the solution there, but I expect that we’ll have to restructure the culture around art and spreading it, at least online. There might be more of a return to analog functions. I also worry that AI will be used as an excuse to cut people out of royalties.

    I think a lot of the people using AI for the first time are discovering the joy of creating *something* for the first time; even if it’s fundamentally trash, it’s far better than what they could’ve made before. Most people give up on art and writing when they realize they can’t realize their vision – AI lets them bridge that gap at least partially. The question is if they *stay* there and never learn to develop their own ‘voice.’

    My most positive take on AI is that certain tasks that are currently super labor-intensive and not super rewarding could be automated to a degree. Specially designed tutors, or cab drivers.


      1. For example, one of the biggest potential uses *right now* is for research, which is fundamentally just poring through data trying to find some kind of interesting correlation.
