Why Facebook’s Responsible A.I. Team Needs to Be Able to Lose Money in Order to Do Its Job

‘Oh, your algorithm update lowers revenue and decreases usage? Ship it!’

Photo: Solen Feyissa/Unsplash

“Measure what matters and what you measure matters.” There are any number of similar quotations about how the very act of tracking a KPI causes an organization to focus on it more, to say nothing of what happens when you tie an explicit incentive structure to those goals. It’s why, for example, if boards care about ideals like diversity and culture, they should work with CEOs to make sure those stats are first-class citizens on the company dashboards alongside revenue and profit.

It’s even harder when you can’t agree on what the right metric should be. As I’ve written before, one of the problems we face as an industry is that we’re largely trying to measure current-day Web 3.0 with Web 2.0 dashboards. Misinformation, trolling, harassment, polarization, and the resulting negative implications — none of these are as simple to define as CTR or CPM. I fell victim to this myself during my time leading the consumer product team at YouTube. When Google leadership asked us to shift from focusing solely on user growth to also increasing monetization, the team we destaffed to fund the new effort had been working on the comments system. Yup, YouTube comments, which most often resulted in a lot of name-calling, profanity, and worse. We all wanted it to improve, but why did I sacrifice this project in the near term? Because it wasn’t connected to a first-tier KPI like revenue, uploads, or playbacks. So it had to wait.

But what if you have the right metric to measure—say, the negative externalities of a product—but it turns out that number is loosely negatively correlated with your business KPIs? Like, for example, if polarizing content leads to more short-term engagement, which leads to more active users, which leads to more ad revenue? It’s not crazy to wonder this, and while I don’t believe that it’s a true correlation or that our social platforms are intentionally running at the efficient frontier of anger and profits, I do always wonder what margin pressure does to, say, adequate investment in trust and safety.

Casey Newton’s Platformer article about Facebook’s Responsible A.I. team provoked in me a combination of eagerness and skepticism. My fear is that even if these teams are actually equipped to study and challenge internally held beliefs about their products, they will be forbidden from making changes that negatively impact business metrics. That is to say, we want responsibility, but only when it doesn’t put the stock price at risk. I say this not just (or specifically) about Facebook, but more generally about the complexity of incentives within a corporation. Also, yes, it’s true that companies already make decisions to balance user experience with monetization. During my time at Google and YouTube, there were plenty of experiments with ad load, placement, and so on, and the company never maximized immediate dollars if there was a disproportionate negative impact on, say, user engagement or advertiser ROI. Long-term greedy, I guess, not short-term.

But back to this question of how to give a team like Responsible A.I. the ability to decrease dollars, engagement, or growth if they believe doing so has a positive impact on fairness, responsibility, or whatever other metrics they’re charged with managing. I’ve got an idea: a budget.


Yes, a budget! Teams like this should be entrusted to “spend” money up to a prespecified annual amount. That doesn’t mean they have to spend it; indeed, many of their changes might turn out to be revenue-neutral or even revenue-positive. But let them make decisions consistent with their mandate without having to implicitly (or explicitly) defend why they’re causing the company to leave dollars on the table.
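
To make the mechanics concrete, here’s a minimal sketch of what the bookkeeping might look like. Everything in it is hypothetical (the names, the dollar figures, the assumption that each change ships with an experiment-based estimate of its annual revenue impact); it’s just one way to express the contract.

```python
from dataclasses import dataclass, field

@dataclass
class ResponsibilityBudget:
    """Tracks how much estimated annual revenue a team is allowed to 'spend'."""
    annual_cap_usd: float                       # the prespecified amount the team may forgo
    spent_usd: float = 0.0                      # cumulative estimated revenue given up so far
    ledger: list = field(default_factory=list)  # (change name, estimated revenue delta) pairs

    def can_ship(self, change_name: str, est_revenue_delta_usd: float) -> bool:
        """Approve a change if its estimated cost fits within the remaining budget.

        Negative deltas cost money; zero or positive deltas draw nothing down.
        """
        cost = max(0.0, -est_revenue_delta_usd)
        if self.spent_usd + cost > self.annual_cap_usd:
            return False  # over budget: this one needs a separate conversation
        self.spent_usd += cost
        self.ledger.append((change_name, est_revenue_delta_usd))
        return True


# Made-up example: a $50M annual budget for a responsibility-focused team.
budget = ResponsibilityBudget(annual_cap_usd=50_000_000)
print(budget.can_ship("downrank borderline content", -30_000_000))  # True: fits in budget
print(budget.can_ship("label repeat misinformation", 0))            # True: revenue-neutral
print(budget.can_ship("limit reshare depth", -35_000_000))          # False: would exceed the cap
print(f"Spent ${budget.spent_usd:,.0f} of ${budget.annual_cap_usd:,.0f}")
```

The point isn’t the code, it’s the contract: revenue-neutral changes cost nothing, revenue-negative ones draw down a cap that was agreed to in advance, and nobody has to relitigate the tradeoff change by change.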

Look, I know this is a weird concept, and it has all sorts of potential secondary effects: other changes get made elsewhere in the company to recover the “lost” revenue, and those turn out to have a different set of negative externalities; it reinforces the idea that fairness comes at the expense of revenue, perhaps giving other teams license to give up on their own “responsibility” and just let this separate A.I. team “fix” everything. Maybe we’ll get to a point where it’s more like carbon offsets, where each product team has to manage its own responsibility budget and there’s an internal market to trade responsibility points. New challenges require new solutions, and in these cases, I think you’ll need to navigate corporate anthropology, not just corporate algorithms.