This article, posted on Medium (which describes itself as an open platform for thinking), is bylined to Azmeerwaqar. And, to be sure, as with many social media and other handles, that could be a pseudonym.
As with any platform based on user-generated content, the perceived value and quality will vary.
The point is that, having read more and more content likely produced by generative AI engines, I find there are telltale signs that AI is the author, even before running the text through an AI content detector such as Wizor.AI.
Generative AI is good at producing content that is essentially a list or a high-level summary. The text almost always lacks quantitative examples or anything else that would serve as the equivalent of a footnote. Unlike a “news” story, there are no quotes, comments or reactions.
Source: text analysis by Wizor.AI
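Those telltale signs can be approximated with a few crude surface checks: how often the text cites a number, whether it ever quotes anyone, and how much of it is list-like. The Python sketch below is only an illustration of those signals; it is not how Wizor.AI or any other detector actually works, and the function name, thresholds and sample text are my own assumptions.

```python
import re


def telltale_signs(text: str) -> dict:
    """Count crude surface signals of AI-style writing.

    Illustrative only: checks for quantitative detail, quoted speech,
    and list-like structure, the signs described in the article. Not
    the method used by Wizor.AI or any other detector.
    """
    words = text.split()

    # Quantitative detail: digits, percentages, figures with commas.
    numbers = re.findall(r"\d[\d,.]*%?", text)

    # Quoted speech, the kind a reported "news" story would carry.
    quotes = re.findall(r"[\"\u201c][^\"\u201d]{10,}[\"\u201d]", text)

    # List-like structure: bulleted or numbered line starts.
    list_lines = [line for line in text.splitlines()
                  if re.match(r"\s*([-*\u2022]|\d+[.)])\s+", line)]

    return {
        "words": len(words),
        "numbers_per_100_words": round(100 * len(numbers) / max(len(words), 1), 1),
        "quoted_passages": len(quotes),
        "list_lines": len(list_lines),
    }


if __name__ == "__main__":
    sample = ("- Cloud computing is scalable.\n"
              "- Cloud computing is flexible.\n"
              "Overall, cloud computing has transformed how businesses operate.")
    print(telltale_signs(sample))
```

On a typical human-written news piece these naive counters would tend to show some quoted passages and quantitative references; the generic, list-heavy text described above tends to show neither, which is the pattern the sketch is meant to make visible.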
To be sure, Medium accepts AI-generated stories, but that fact is supposed to be disclosed. A better policy would be to create a way for readers to automatically block any such stories.
If I want a list, I can ask a generative AI engine the question directly. I do not need to be “fooled” into thinking some human expert has created the information.
But that also suggests the value of the current state of the art. Generative AI (aside from its value in generating code) excels at subjects that are essentially lists of names or categories. Beyond that, I find the text quite generalized. Too general to quote or footnote, at any rate.
To be sure, search engines produce lists as well, though they are links. But those links can be footnoted or documented as to source.