1.
Sci Adv ; 10(29): eadn7053, 2024 Jul 19.
Article in English | MEDLINE | ID: mdl-39018389

ABSTRACT

Information about us, our actions, and our preferences is created at scale through surveys and scientific studies, or as a result of our interactions with digital devices such as smartphones and fitness trackers. The ability to safely share and analyze such data is key for scientific and societal progress. Anonymization is considered by scientists and policy-makers as one of the main ways to share data while minimizing privacy risks. In this review, we offer a pragmatic perspective on the modern literature on privacy attacks and anonymization techniques. We discuss traditional de-identification techniques and their strong limitations in the age of big data. We then turn our attention to modern approaches to share anonymous aggregate data, such as data query systems, synthetic data, and differential privacy. We find that, although no perfect solution exists, applying modern techniques while auditing their guarantees against attacks is the best approach to safely use and share data today.
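The differential privacy mentioned in this abstract is typically realized by adding calibrated noise to query answers. A minimal illustrative sketch (not code from the review itself) of the standard Laplace mechanism for a counting query, which has sensitivity 1 and therefore satisfies epsilon-DP when Laplace(1/epsilon) noise is added:

```python
import math
import random

def dp_count(values, predicate, epsilon: float) -> float:
    """Differentially private count.

    A counting query changes by at most 1 when one record is added or
    removed (sensitivity 1), so adding Laplace(0, 1/epsilon) noise to the
    true count satisfies epsilon-differential privacy.
    """
    true_count = sum(1 for v in values if predicate(v))
    # Inverse-transform sample from Laplace(0, scale) with scale = 1/epsilon.
    u = random.random() - 0.5
    noise = -(1.0 / epsilon) * math.copysign(math.log(1.0 - 2.0 * abs(u)), u)
    return true_count + noise

# Hypothetical data: ages of seven survey respondents.
ages = [23, 35, 41, 29, 52, 61, 38]
noisy_answer = dp_count(ages, lambda a: a >= 40, epsilon=1.0)
```

Smaller epsilon values give stronger privacy at the cost of noisier answers; real deployments also have to account for the privacy budget spent across repeated queries.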

3.
Sensors (Basel) ; 21(11)2021 May 26.
Article in English | MEDLINE | ID: mdl-34073425

ABSTRACT

Information theory is a unifying mathematical theory to measure information content, which is key for research in cryptography, statistical physics, and quantum computing [...].
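The information content this abstract refers to is classically measured by Shannon entropy. A short illustrative sketch (not from the article) computing the empirical entropy of a symbol sequence in bits:

```python
import math
from collections import Counter

def shannon_entropy(symbols) -> float:
    """Empirical Shannon entropy H(X) = -sum_x p(x) * log2 p(x), in bits."""
    counts = Counter(symbols)
    n = len(symbols)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

shannon_entropy("aabb")  # fair two-symbol source: 1.0 bit
shannon_entropy("aaaa")  # deterministic source: 0.0 bits
```

A uniform distribution over k symbols attains the maximum log2(k) bits, which is why entropy serves as a capacity bound in cryptography and coding.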

4.
Nat Commun ; 10(1): 3069, 2019 07 23.
Article in English | MEDLINE | ID: mdl-31337762

ABSTRACT

While rich medical, behavioral, and socio-demographic data are key to modern data-driven research, their collection and use raise legitimate privacy concerns. Anonymizing datasets through de-identification and sampling before sharing them has been the main tool used to address those concerns. We here propose a generative copula-based method that can accurately estimate the likelihood of a specific person being correctly re-identified, even in a heavily incomplete dataset. On 210 populations, our method obtains AUC scores for predicting individual uniqueness ranging from 0.84 to 0.97, with a low false-discovery rate. Using our model, we find that 99.98% of Americans would be correctly re-identified in any dataset using 15 demographic attributes. Our results suggest that even heavily sampled anonymized datasets are unlikely to satisfy the modern standards for anonymization set forth by GDPR and seriously challenge the technical and legal adequacy of the de-identification release-and-forget model.
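The quantity this paper estimates with a generative copula model is individual uniqueness: how likely a given combination of attributes is to single out one person. A naive empirical stand-in (purely illustrative, with hypothetical attribute names; the paper's method extrapolates beyond the observed sample, which this sketch does not) computes the fraction of records that are unique on a chosen attribute set:

```python
from collections import Counter

def empirical_uniqueness(records, attrs):
    """Fraction of records whose combination of the given attributes is
    unique within the dataset. Unlike the paper's copula-based model,
    this measures uniqueness only in the observed sample."""
    keys = [tuple(r[a] for a in attrs) for r in records]
    counts = Counter(keys)
    return sum(1 for k in keys if counts[k] == 1) / len(keys)

# Hypothetical records with three quasi-identifiers.
people = [
    {"zip": "02139", "birth": "1990-01-02", "sex": "F"},
    {"zip": "02139", "birth": "1990-01-02", "sex": "F"},
    {"zip": "10001", "birth": "1985-07-30", "sex": "M"},
    {"zip": "60614", "birth": "1978-11-12", "sex": "F"},
]
empirical_uniqueness(people, ["zip", "birth", "sex"])  # 0.5
```

Adding attributes can only keep uniqueness the same or raise it, which is the intuition behind the paper's finding that 15 demographic attributes suffice to re-identify nearly everyone.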


Subject(s)
Data Analysis , Data Anonymization , Personally Identifiable Information , Datasets as Topic , Likelihood Functions , Normal Distribution