Online publishing platform Medium published a blog post on January 26th in which it detailed its approach to AI-generated content.
The platform, which prizes human knowledge and experience, asked the Medium Community what it thinks about AI-generated writing. “This is a moment of huge transformation in the digital world, and the potential implications are both wide-reaching and still not well-defined. But they’re also not abstract: AI-generated content is here now, and it’s important to start wrestling with that impact now too, even as the landscape is still taking shape,” wrote Medium’s VP of content, Scott Lamb.
According to him, the feedback centered on a common sentiment: people signed up and paid to read what humans have written, and to compensate writers for doing real work. “I don’t want to give AI the eyeball hours or oxygen on a subscription platform,” commented one user.
Many responses, according to Lamb, also stressed the need for transparency and disclosure. With that in mind, Medium updated its distribution standards to include an AI-specific guideline:
“We welcome the responsible use of AI-assistive technology on Medium. To promote transparency, and help set reader expectations, we require that any story created with AI assistance be clearly labeled as such.”
Lamb goes on to explain that this is Medium’s initial approach to AI-generated content. “As this technology and its use continue to evolve, our policies may, too,” he wrote. “We believe that creating a culture of disclosure, where the shared expectation of good citizenship is that AI-generated content is disclosed, empowers readers. It allows them to choose their own reaction to, and engagement with, this kind of work, and clearly understand whether a story is machine- or human-written.”
If Medium comes across content that it believes is AI-generated but that lacks the required disclosure, the piece won’t be distributed across Medium’s network.
Elsewhere, a 22-year-old from Toronto has already developed a tool, called GPTZero, that can detect whether a piece of content was written by a human or an AI. It measures the “perplexity, creativity, and variability” of a text to produce a score indicating whether the text was generated by ChatGPT or a human.
Image credit: Medium