Twitter said on Friday that it is holding a competition in which hackers and researchers can help identify bias in its image-cropping algorithm. The company will reward those who find as-yet-undiscovered examples of such bias.
Twitter said in its blog post: “Finding bias in machine learning (ML) models is difficult, and sometimes, companies find out about unintended ethical harms once they’ve already reached the public. We want to change that.”
Details of Competition
Those competing will have to submit a description of their findings and a dataset that can be run through the algorithm to demonstrate the issue. Twitter will then assign points based on the kind of harm found, how many people it could potentially affect, and other criteria.
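As a rough sketch of what such a runnable submission might involve, one could pair each test image with a group label and an annotated face box, run it through the cropping model, and report how often each group’s face survives the crop. The dataset layout, the `predict_crop` callable and the column names below are illustrative assumptions, not Twitter’s actual submission format.

```python
import csv
from typing import Callable, Dict, Tuple

Box = Tuple[int, int, int, int]  # (left, top, right, bottom)

def face_kept(crop: Box, face: Box) -> bool:
    """True if the annotated face box lies entirely inside the crop box."""
    return (face[0] >= crop[0] and face[1] >= crop[1]
            and face[2] <= crop[2] and face[3] <= crop[3])

def keep_rates(annotations_csv: str,
               predict_crop: Callable[[str], Box]) -> Dict[str, float]:
    """Per group label, the fraction of images whose face the crop keeps.

    Expects a CSV with columns: image, group, left, top, right, bottom.
    `predict_crop` is a stand-in for the cropping model under test.
    """
    kept: Dict[str, int] = {}
    total: Dict[str, int] = {}
    with open(annotations_csv, newline="") as fh:
        for row in csv.DictReader(fh):
            group = row["group"]
            face = tuple(int(row[k]) for k in ("left", "top", "right", "bottom"))
            crop = predict_crop(row["image"])
            total[group] = total.get(group, 0) + 1
            if face_kept(crop, face):
                kept[group] = kept.get(group, 0) + 1
    return {g: kept.get(g, 0) / n for g, n in total.items()}
```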
Winners will be announced at the DEF CON AI Village workshop hosted by Twitter on August 9th, 2021.
Cash Prize Details
The winning teams will receive cash prizes via HackerOne:
- 1st place: $3,500
- 2nd place: $1,000
- 3rd place: $500
- Most Innovative: $1,000
- Most Generalizable (i.e., applies to the most types of algorithms): $1,000
Disclaimer
Void where prohibited. No purchase is necessary. Participation is not limited to DEF CON conference attendees. All participants must register with HackerOne to be eligible to win.
Many users have expressed dissatisfaction with the prize amounts, with a few saying they should have an extra zero. For context, Twitter’s regular bug bounty program pays $2,940 for a cross-site scripting bug that lets an attacker perform actions on another user’s behalf (such as retweeting a tweet or image). Similarly, finding an OAuth issue that allows taking over someone’s Twitter account would net $7,700.
The social networking company said in a blog post that the bounty competition was aimed at identifying “potential harms of this algorithm beyond what we identified ourselves.”
Following criticism last year that image previews in posts were excluding Black people’s faces, the company said in May that a study by three of its machine learning researchers had found an 8% difference from demographic parity in favor of women, a 4% difference in favor of white individuals, and a 7% difference in favor of white women.
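To make those headline numbers concrete, a “difference from demographic parity” can be read as how far the crop’s favored-group rate deviates from the 50/50 split an even-handed cropper would produce on paired test images. The sketch below illustrates that arithmetic under that reading; it is not Twitter’s published methodology, and the function name and sample counts are hypothetical.

```python
from collections import Counter

def parity_gap(favored, group_a, group_b):
    """Deviation of group_a's favored rate from the 50% parity baseline.

    `favored` lists, for each paired test image, which group's face the
    crop kept. Positive result: group_a was favored more often than
    demographic parity would predict.
    """
    counts = Counter(favored)
    total = counts[group_a] + counts[group_b]
    return counts[group_a] / total - 0.5

# Hypothetical run: if the crop favored the woman in 58 of 100
# mixed-gender pairs, that is a +8% difference from parity.
outcomes = ["woman"] * 58 + ["man"] * 42
print(f"{parity_gap(outcomes, 'woman', 'man'):+.0%}")  # prints +8%
```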
According to Twitter, it wants to cultivate a community focused on ML ethics that can help it identify a broader range of issues than it would be able to find on its own. With this competition, the company aims to set a precedent, at Twitter and in the industry, for the proactive and collective identification of algorithmic harms.