Sunday, February 26, 2023

Dreaming of electric sheep?


iLexx | Deposit Photos
You have likely heard by now some of the furor over artificial intelligence, or AI, creeping into spaces where it previously feared to tread. Since ChatGPT's debut in November, school districts around the country have banned its use by students. Educators are concerned that kids will use the tool to do their homework assignments for them. Colleges are concerned, too, but most appear to be leaving the decision about whether to ban use of the technology up to individual professors.

Then there's AI art. Last year the furor was all about apps that could take your selfie and turn it into something artistic or cartoony. The biggest concern was that the apps themselves were harvesting too much personal data, maybe for resale -- including your face, which could be a security problem. Those apps dropped out of sight pretty quickly. But complaints have continued, especially from artists, over a new crop of apps that can turn out images based on written prompts. The problem is that these apps are trained on images scraped from the internet -- images created by actual flesh-and-blood artists -- and they slice and dice that work without compensating the people who made it. That's not just a financial problem for the artists, although that's bad enough; many of them argue it's copyright infringement, too.

Besides that, the resulting artwork isn't very good. But the AI programs are learning; the more input they get from human users, the better they're going to get at this. Artists can envision their livelihoods disappearing as clients turn to AI-produced art.

Although maybe the machines aren't learning fast enough yet. Earlier this month, Microsoft unveiled its new AI chat assistant, built into Bing, its search engine (which I hope has gotten better than it was the last time I tried to use it -- wow). Except that when the New York Times and the Washington Post sent reporters to chat with the new Bing, they learned her -- okay, its -- internal code name was actually Sydney, and she -- okay, it -- was kinda quirky. She (I give up) pledged her undying love to the NYT reporter, but when WaPo asked her about him, she had no idea who he was. So much for undying love.

Mostly, I've been bemused by it all. Sure, polls suggest Americans are far more likely to think AI technology will do society harm than good -- but they conveniently forget that we're already awash in it. Does your email have a spam filter? (I sure hope so!) That's AI. Talked to Alexa lately? She's AI, too.

I do think there are some things AI shouldn't be used for. For example, self-driving vehicles are proving to be as bad an idea as we all suspected they'd be. 

And I think that by and large, creativity should be left to human beings. I know how corporate America works, and I'm worried about artists losing their jobs to technology that turns out an inferior product for less.

Then there's this: Clarkesworld, the online speculative fiction magazine, has stopped taking submissions. Why? Because for the past few months, the editorial staff has been inundated with AI-generated stories. Neil Clarke, who publishes and edits the zine, says that by the time they shut down submissions on February 20th, they had received 500 AI-generated submissions this month alone -- and the flood has been getting worse month by month. Clarke says that because his zine pays on publication, it ended up on somebody's list of places to send AI-generated stories and make money from them.

He says it's not going to work; the AI-generated stories aren't any good.

I get that. But that's today. The machines are still learning.

***

The illustration up top is not AI-generated, as far as I know; I bought it from a royalty-free stock photo site. I was going for a futuristic take on the old "monkeys with typewriters" adage. What do you think?

***

These moments of blatantly human blogginess have been brought to you, as a public service, by Lynne Cantwell -- who is still fully flesh-and-blood. Well, except for some crowns in her mouth and the plastic lenses in her eyes.
