Privacy at Facebook Scale
Authors: Paulo Tanaka, Sameet Sapra, Nikolay Laptev
Abstract: Most organizations today collect data across every facet of their business. There is no shortage of data in these businesses, as the data is eventually copied, transformed, and scattered across the organization's data warehouse. During privacy-related audits, organizations are required to locate all instances of a given type of data in order to enforce privacy and security policies around it. In these cases, it is crucial to have insight into the data so that automatic access controls and data retention policies can be applied to specific data assets within the data stores. This paper describes an end-to-end system built to detect sensitive semantic types within Facebook at scale and to enforce data retention and access controls automatically. Content-based data classification is an open challenge. Traditional Data Loss Prevention (DLP)-like systems solve this problem by fingerprinting the data in question and monitoring endpoints for the fingerprinted data. With trillions of constantly changing data assets at Facebook, this approach is neither scalable nor effective at discovering what data is where. Instead, the approach described here is our first end-to-end privacy system that addresses this problem by combining data signals, machine learning, and traditional fingerprinting techniques to map out and classify all data within Facebook. The described system is in production and achieves an average F2 score above 0.9 across various privacy classes while handling trillions of data assets.
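The abstract reports performance as an F2 score, the F-beta measure with beta = 2, which weights recall more heavily than precision; this is a natural choice when failing to flag sensitive data is costlier than over-flagging it. The sketch below is not code from the paper, just the standard F-beta definition for reference.

```python
# Minimal sketch (standard metric, not from the paper): the F_beta score,
# with beta = 2 so that recall counts more than precision.

def f_beta(precision: float, recall: float, beta: float = 2.0) -> float:
    """F_beta = (1 + beta^2) * P * R / (beta^2 * P + R)."""
    if precision == 0.0 and recall == 0.0:
        return 0.0
    b2 = beta * beta
    return (1.0 + b2) * precision * recall / (b2 * precision + recall)

# Hypothetical example: precision 0.85 and recall 0.95 for one privacy class
# give an F2 score of about 0.93, consistent with the 0.9+ range reported.
if __name__ == "__main__":
    print(round(f_beta(0.85, 0.95), 3))
```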