Want to detect bad actors? Look on the bright side

“You will find that 99.9% of people in this world are actually really good. When they use your product, they use it for all the right reasons. But then once in a while when something bad happens, it also feels like one too many,” Airbnb’s director of Trust Product and Operations, Naba Banerjee, said onstage at TechCrunch Disrupt 2023.

Banerjee knows what she is talking about: Not long before she joined Airbnb’s trust and safety team in 2020, the company had announced a ban on “party houses” and taken other measures after five people died at a Halloween party held at a house rented through the platform.

Since then, the executive has become a bit of a “party pooper,” in the words of a recent CNBC profile. “As the person in charge of Airbnb’s worldwide ban on parties,” the piece noted, “Naba Banerjee has spent more than three years figuring out how to battle party ‘collusion’ by users, flag ‘repeat party houses’ and, most of all, design an anti-party AI system.”

It is this AI element that I found particularly interesting to discuss on Disrupt’s Builders Stage with Banerjee and her fellow panelist, Remote CEO Job van der Voort. Remote is growing fast, yet the vast majority of its users still behave exactly as expected. And even a company the size of Airbnb, it turns out, doesn’t have much data on rule-breaking behavior.
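The upside of that imbalance is a familiar pattern in anomaly detection: rather than hunting for scarce examples of bad behavior, you can model what normal usage looks like and flag whatever deviates from it. Below is a minimal, hypothetical sketch using scikit-learn’s IsolationForest on made-up booking features; it illustrates the general technique, not Airbnb’s actual anti-party system.

```python
# Illustrative sketch only: train an anomaly detector on "normal" booking
# behavior and flag outliers. The features and numbers are invented, not
# anything Airbnb has described publicly.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Synthetic "good guest" bookings: (nights booked, guests listed, lead time in days)
normal_bookings = np.column_stack([
    rng.normal(3, 1, 5000),    # typical short stays
    rng.normal(2, 1, 5000),    # small groups
    rng.normal(20, 10, 5000),  # booked weeks in advance
])

# Fit on overwhelmingly "good" data -- the 99.9% -- so the model learns what
# normal looks like instead of requiring many labeled bad actors.
detector = IsolationForest(contamination=0.001, random_state=0)
detector.fit(normal_bookings)

# A hypothetical risky pattern: one-night stay, large group, booked same day.
suspicious = np.array([[1, 12, 0]])
print(detector.predict(suspicious))  # -1 means flagged as an outlier
```

In practice a production system would layer many more signals and human review on top of a score like this, but the core idea matches the headline: learn the bright side, then look for what doesn’t fit.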
