This morning, MIT Technology Review published an article on how a blog post written by GPT-3 hit #1 on Hacker News, a well-known link aggregator and high-quality forum. Unfortunately, the article omitted many key facts, making it very misleading. Among other things, the blog's author used sock-puppet accounts to manipulate the site; I (Alyssa) noticed this and reported it long before the GPT-3 story came out. I've sent a request for correction to the article's author, Karen Hao, but haven't heard back yet [EDIT: response added below]. In the meantime, I'll let Dan Gackle (Hacker News admin) explain:
Sorry to pop the balloon, but the story is bogus and based on false claims.
It’s false that the post was generated by GPT-3. The author admitted to writing the title and editing the intro, and that’s already all that most people read. He also described the article body this way: “as unedited as possible”—in other words, edited. It’s false that (as he originally claimed) only one commenter called the post as GPT-3, and false that (as he now claims—since the article says it and who else would have come up with that) all such comments were downvoted.
All that is just what he publicly admitted. How much of the rest is also fake? People who try to game HN like this, including with bogus accounts and fake votes, are not known for scruples. It seems that, having got busted in dishonest attempts to get attention on HN, he decided to get attention from journalists instead, and found one who didn’t bother to check the other side of the story. (source)
GPT-3 is a red herring; the issue was the generic, baity title on a popular theme. Those routinely get upvotes because people see words like ‘procrastination’ or ‘overthinking’ and instantly think of their own experiences and ideas and want to talk about them. Such threads are not about the article, they’re about the title, which the author admits writing (“I would write the title and introduction, add a photo”). Title plus introduction is already more than most people read, so this case is not what they say it is—which is consistent with their other misrepresentations, including the false claim “only one person noticed it was written by GPT-3”. (source)
Only one of those comments got pushback, and that comment wasn’t simply matter-of-fact; the problem with it (from my point of view anyhow) was that it added a gratuitous insult (“or the human equivalent”). That made the whole thing read more like snark than straightforwardly raising a question. The other comment was more matter-of-fact about calling GPT-3 and didn’t get any pushback.
The problem is that the cases legitimately overlap. That is, “sounds like GPT-3” gets used as an internet insult (example: https://news.ycombinator.com/item?id=23687199) just like “sounds like this was written by a Markov chain” used to be (example: https://news.ycombinator.com/item?id=19614166). It’s not surprising that someone interpreted the first comment that way, because it contained extra markers of rudeness. That may have been a losing bet but it wasn’t a bad one. Perhaps the other comment didn’t get interpreted that way because it didn’t throw in any extra cues of rudeness—or perhaps it was just random. Impossible to tell from a sample size of 2.
Not to take away from the glory of lukev for calling it correctly. I just don’t think the reply deserves to be jumped on so harshly. (source)
ADDED: A response from Karen Hao, the author of the story:
Me: “Hello. My name is Alyssa, I’m an AI engineer at McD Tech Labs. Unfortunately, some critical facts appear to have been left out of your recent article on GPT-3:
– The blog author tried to use sock puppets to manipulate Hacker News, and their accounts were banned even before the GPT-3 “reveal”
– Multiple people did suggest that it was written by GPT-3, not just one as their post originally claimed
– They admit to having manually edited the articles, and they never posted the originals, so we don’t know how much of the work was by GPT-3
– Hacker News only displays the article title (and many people upvote based on that), which the author admits was human-written
For more information, please contact Dan Gackle at email@example.com. Thank you.”
Karen Hao: “Thanks, Alyssa. Appreciate you pointing these out. I am talking to my editor now about potentially adding more information about the first bullet you include as additional context to the story.
As for the others, from my perspective, they do not change the story. The point of the story isn’t that he didn’t try hacking Hacker News or that he didn’t edit any of his posts, but rather that it took only hours for him to generate all the content and then get 20k+ views. It doesn’t really matter what means he used in addition to AI because all those other means are at the disposal of anyone else. So if a malicious actor wanted to do the same thing, they would.”
Me: “Can I include your response in my blog post?”
Karen: “In that case, let me provide a point by point rebuttal.
- As I said below, I am talking to my editor about adding this detail, but we have to remember that a malicious actor could easily do this as well.
- I said in my article that multiple people noticed this: “Only three or four of the dozens of people who commented on his top post on Hacker News raised suspicions that it might have been generated by an algorithm.” [NOTE: This is true, but Liam’s post said that only one person noticed, which raises obvious questions.]
- They did not. You misread Liam’s blog post. He wrote the title and intro, which I say in my piece. He let GPT-3 do the rest. He says in his blog post later that if someone wanted to use GPT-3 as a legitimate writing tool, they could probably get away with minimal editing. That is not referring to his own work. But again, if he had edited his post, it does not change the fact that a malicious actor could do this too.
- Yes the title is human written and a malicious actor could human-write a clickbaity title too. Doesn’t change the fact that once the post hit the number one spot, the fake content ended up getting 20k+ views.
Also my rebuttal to Dan’s arguments:
- “It’s false that the post was generated by GPT-3.”: I never said the entirety of it was
- “GPT-3 is a red herring”: This is a dangerous statement. Liam would not have been able to do everything he did within hours without GPT-3’s help. Again all the other tools he used, including trying to hack HN, writing baity titles, are tools at the disposal of malicious actors.
- “The problem is that the cases legitimately overlap.”: And that’s precisely what makes GPT-3-generated content even more dangerous. It can be legitimately hard to tell between bad human writing and good GPT-3 writing.”
Me: “Thank you, but the footnote makes it clear that he did in fact edit his own posts:
[quote from Liam] "There were, on occasion, small things that had to be changed or would have been dead giveaways. For instance, it once attributed a Bob Dylan quote to Emerson. There were also some simple grammatical errors.""
Karen: “Oh that’s fair — anyway my point stands. Malicious actors can do the same so focusing on the % of things that is GPT-3 generated is really not the point.”