NashCoding Yet Another Artificial Intelligence Blog

28 Oct 2011

Hacker News Needs Honeypots

There has been a lot of recent debate about how to improve quality control on HackerNews (HN), and to his credit, Paul Graham (pg) has tried a lot of tactics. There is a very clear set of HN guidelines, which few members these days probably read. For a while, pg experimented with the karma formula and, even if I disagree with the way karma should be measured, at least he gave it an effort. He also hid comment karma from everyone but the author to help slow the demonstrable deterioration of the discussion section; by pg's own observations, this has apparently been successful. Nevertheless, I believe we are seeing a continuing downward trend in overall article quality on the front page [1].

In this post, I present a honeypot approach to combating group-think and quality deterioration in article selection on social news sites.

Article Honeypots

A honeypot is an article that is link-bait or otherwise in direct violation of the site's guidelines, but is intentionally submitted by an admin as a test to see whether users inappropriately upvote it [2]. For each user, three scores are tracked: the number of honeypots seen, the number of honeypots upvoted, and the number of honeypots flagged. A user's seen count increments when they load a page with a honeypot article displayed [3]. If a user upvotes a honeypot, their upvoted score is incremented; if the user flags it, their flagged score is incremented. We then divide the difference between the flagged and upvoted scores by the seen score to get a honeypot ratio, h:

h(u) = \frac{f_{u} - v_{u}}{s_{u}}

where:

u is the target user
f_{u} is the number of honeypots the user flagged
v_{u} is the number of honeypots the user upvoted
s_{u} is the total number of honeypots seen by the user

This produces a ratio in the range [-1,1]. We may want to punish people who excessively flag everything; otherwise they will always have the maximum h score [4]:

h(u) = \frac{f_{u} - v_{u}}{s_{u}} - (1 - \frac{f_{u}}{t_{u}+1})

where:

t_{u} is the total number of articles the user has flagged

The range of this h-ratio is [-2,1). However, because t_{u} is very likely going to be much larger than f_{u} for every user, we would expect that no user will ever receive a score near 1 in practice. If a user's h-ratio falls below an admin-specified threshold, we flag the user as detrimental to the overall quality of the site, and their upvotes are either discounted or ignored entirely.
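As a concrete sketch, the adjusted h-ratio can be computed from per-user counters like so. The counter structure, the names, and the neutral default for users who have not yet seen any honeypots are my own assumptions, not part of the original proposal:

```python
from dataclasses import dataclass

@dataclass
class HoneypotStats:
    seen: int = 0           # honeypots displayed to the user
    upvoted: int = 0        # honeypots the user upvoted
    flagged: int = 0        # honeypots the user flagged
    total_flagged: int = 0  # all articles the user has ever flagged

def h_ratio(stats):
    # Adjusted honeypot ratio, in the range [-2, 1).
    if stats.seen == 0:
        return 0.0  # assumption: users with no honeypot exposure are neutral
    base = (stats.flagged - stats.upvoted) / stats.seen
    penalty = 1 - stats.flagged / (stats.total_flagged + 1)
    return base - penalty
```

A user who flags every honeypot they see, and whose flags are mostly honeypot flags, approaches (but never reaches) 1; a user who upvotes every honeypot and never flags anything bottoms out at -2.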

Implicit Honeypots

Since it's not always feasible to expect admins to find or label honeypots, it would be nice to have a way to crowdsource honeypots implicitly. To do this, we take the top N users, ranked by their explicit honeypot ratio, and label them as "super flaggers". For each article, we track the percentage of super flaggers who have seen it and also flagged it; if this percentage is above an admin-specified threshold, we automatically label the article as a honeypot:

def get_super_flaggers(users, super_flagger_count):
    # Order users by their honeypot ratio, in descending order
    ranked = sorted(users, key=lambda u: u.honeypot_ratio, reverse=True)
    return ranked[:super_flagger_count]

def is_honeypot(article, threshold, super_flaggers):
    count = 0
    seen = 0
    for sf in super_flaggers:
        if has_seen(sf, article):    # has_seen/has_flagged query the vote logs
            seen += 1
        if has_flagged(sf, article):
            count += 1
    if seen == 0:
        return False  # no super flagger has seen the article yet
    return count / seen >= threshold

It would make sense to check this value every time an article hits a certain number of upvotes. One would also likely want to ensure that seen has reached a high enough value to be significant before checking. If an article is labeled as a honeypot, then all users who have already seen the article should have their h values retroactively updated.
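The retroactive update might look like the following sketch. The Article and Counters structures are hypothetical; a real site would read these sets from its vote logs:

```python
from dataclasses import dataclass, field

@dataclass
class Article:
    seen_by: set = field(default_factory=set)     # user ids who loaded a page showing it
    upvoted_by: set = field(default_factory=set)  # user ids who upvoted it
    flagged_by: set = field(default_factory=set)  # user ids who flagged it

@dataclass
class Counters:
    seen: int = 0
    upvoted: int = 0
    flagged: int = 0

def promote_to_honeypot(article, user_stats):
    # Credit (or blame) every user who already interacted with the
    # article, as if it had been labeled a honeypot from the start.
    for uid in article.seen_by:
        user_stats[uid].seen += 1
    for uid in article.upvoted_by:
        user_stats[uid].upvoted += 1
    for uid in article.flagged_by:
        user_stats[uid].flagged += 1
```

After this runs, each affected user's h-ratio can be recomputed from their updated counters.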

Conclusion

The above post presented two reasonable approaches to improving the quality of the front page on social news sites like HackerNews. Social news sites have long been believed to suffer from deterioration over time, with recent evidence supporting that belief. While anecdotal evidence suggests that hiding comment karma has helped improve discussion quality, article selection quality has remained largely unchanged [5]. The honeypot approach I described above may help stem the flow of upvotes to link-bait and inappropriate articles by enabling the community to moderate itself implicitly.

If you liked this article, please vote it up on HackerNews, assuming you think it meets the quality guidelines. ;)

Footnotes
  1. For instance, see this front-page article from 10/23/11.
  2. In practice, it may be more reasonable for admins to label user submissions that they see hit the front page and are judged as violating the guidelines.
  3. If multiple honeypots are displayed, the count is incremented by the number of honeypots seen for the first time by the user.
  4. Technically, this is only true if they flag everything and upvote nothing.
  5. The exception here may be voting ring and other clique detection algorithms. However, such algorithms are designed to prevent manipulation of the system rather than enforcing quality guidelines.
Comments (10)
  1. Your solution, although a bold attempt, would solidify the opinion of the site around those of the ‘elite’.

    I wonder what would happen if you chose the super flaggers randomly?
    I wish there were some social network within HN, as you might be able to randomly select individuals, and then randomly select within their network. This TED talk seems to indicate that you can get a good strong indication of how a network functions by doing such a thing.

    http://www.ted.com/talks/lang/eng/nicholas_christakis_how_social_networks_predict_epidemics.html

  2. @Justin: No, it does not “solidify the opinion of the site around those of the elite”. It does nothing to boost the powers of super flaggers regarding how articles are displayed, as I noted several times in the comments on HN. If 10% of users are super flaggers, 80% are normal, and 10% are ignored, then a super flagger will on average account for 1/9th of the upvotes rather than 1/10th, which is not a big deal.

    Again, super flaggers do not receive a boost in upvoting power nor flagging power. They are simply used as a method of generating honeypots, which later filter out users who vote up said honeypots. In a healthy community, there would not even be any filtered users, because everyone would be playing by the rules.

  3. A question about footnote 1 – why is that a bad article?

  4. Why not just have the admins flag the real flame bait as a honeypot? That way you're not injecting more bait for us to look at.

    Good question, JustAsking… why is footnote 1 bad?

  5. It’s a poor article because it is purely about politics and things off-topic to HN. That is something explicitly stated in the site guidelines as being disallowed.

    @Chris: See the second footnote, where I basically say exactly that: admins really should just mark existing articles, not submit new ones.

  6. Please google “Ptolemaic epicycles” and then click my name. I wrote a blog post years ago which explains why this problem always happens for sites of this nature. There is a fundamental failure to apply behavioral economics at the root of Hacker News and all sites built on the karma model. Adding increasingly intricate course-corrections will not change that.

  7. @giles: What? Just provide a link please…

  8. @Wesley, I believe Giles is suggesting you click on his name in this comment thread.

    It links here: http://gilesbowkett.blogspot.com/2008/05/summon-monsters-open-door-heal-or-die.html

  9. @Rob: Thanks for pointing that out. The article he’s linking there is pretty much just ad hominem attacks with no actual evidence, so I’ll just dismiss it.

  10. While I’ll agree that there is a fair amount of colorful commentary in Giles’ post, I wouldn’t say it’s irrelevant.

    I particularly agree with this statement:

    “When you build a system where you get points for the number of people who agree with you, you are building a popularity contest for ideas. However, your popularity contest for ideas will not be dominated by the people with the best ideas, but the people with the most time to spend on your web site. Votes appear to be free, like contribution is with Wikipedia, but in reality you have to register to vote, and you have to be there frequently for your votes to make much difference. So the votes aren’t really free – they cost time. If you do the math, it’s actually quite obvious that if your popularity contest for ideas inherently, by its structure, favors people who waste their own time, then your contest will produce winners which are actually losers. The most popular ideas will not be the best ideas, since the people who have the best ideas, and the ability to recognize them, also have better things to do and better places to be.”

    Lack of article quality over time certainly does seem like a systemic problem.

