
Tuesday Cannon Fodder: robots writing

Photo: Los Angeles Lakers Kareem Abdul-Jabbar, 1983 NBA Playoffs

Good Tuesday morning, TSFers. The AI revolution is here, and it’s a garbled mess. Yesterday, another “legacy” publication was accused of publishing AI-generated content, joining CNET, Bankrate, G/O Media (Gizmodo and A.V. Club), BuzzFeed, USA Today and Gannett in publishing low-quality, error-ridden “content” created by AI.

Futurism published an investigation into Sports Illustrated that indicates the magazine’s parent company, The Arena Group, was publishing AI-generated content on their product review vertical, with sources at the publisher confirming the content is AI-generated. The bylines appear to be attributed to fake people as well, using headshots available for sale on a website that sells pictures of AI-generated “people.” Periodically, the “writers” would disappear and their posts would be attributed to a different, fake author. According to Futurism, the posts appeared without any disclaimer that the content was AI-generated.

An entirely separate question is why Sports Illustrated, one of the premier sports periodicals for decades, has an online product review vertical to begin with. The answer is probably easy money from affiliate marketing and ad revenue from cheap clicks. But I digress.

Initially, Arena Group did not respond to Futurism’s request for comment, but the AI-generated authors all disappeared from the website. After the story was published, an Arena Group spokesperson blamed a third-party contractor, AdVon, for allowing purportedly human writers to use pen names in publishing content. The statement denied that any of the content was AI-generated, relying on AdVon’s assurance that it was not.

The Sports Illustrated Union released a scathing statement, demanding accountability and transparency from their parent group.

We OG internetters have skepticism as our default position, a habit that will serve us well as we enter this brave new world of AI-generated content. Generative AI has tremendous potential and is a powerful tool. It can do a lot of cool and useful things.

It cannot, however, do journalism. Real journalism requires legwork that a computer is incapable of doing because it is, you know, a computer. The concern is that less-than-scrupulous publishers will let AI fabricate the portions of stories that legwork should supply. For example, some of you may have seen the story about the two attorneys in New York who were sanctioned by the court for citing fake cases in a brief, at least portions of which were generated by ChatGPT.

There is a whole mess of ethical and legal issues wrapped up in generative AI. As is often the case with technological leaps forward, those annoying, nitty-gritty details like proper guardrails, best practices, and questions about legality seem to have fallen by the wayside. Or at least they fall outside my considerably vast purview of online goings-on.

I know there is some movement already. A number of lawsuits filed by various corners of the music industry against different generative AI companies are working their way through the courts, and AI usage was one of the secondary issues in the writers’ and actors’ strikes.

It doesn’t seem like generative AI is going away. We’re going to continue to see stories like the Futurism investigation into Sports Illustrated for a while. Once we establish norms around publishing AI-generated content, somebody else is going to do something different that pushes the envelope and feels wrong. And so it will go.