CryptoKitties, Explained … Mostly – The New York Times
https://www.nytimes.com/2017/12/28/style/cryptokitties-want-a-blockchain-snuggle.html
QT:{{”
The new data ‘sanitization’ technique obscures regions of a participant’s genome in a dataset to secure her privacy, and may encourage more people to participate in genetic studies, says lead investigator Mark Gerstein, professor of biomedical informatics at Yale University.
“If someone hacks into your email, you can get a new email address; or if someone hacks your credit card, you can get a new credit card,” Gerstein says. “If someone hacks your genome, you can’t get a new one.”
To determine which information and how much of it should remain private to prevent a linkage attack, Gerstein and his colleagues performed linkage attacks on existing genetic datasets. In one sample attack, they compared two publicly available databases and RNA sequencing results to successfully identify 421 individuals.
In another linkage attack, Gerstein’s team sequenced the RNA of two volunteers and shuffled these data into a larger dataset. They then obtained DNA samples from the volunteers’ used coffee cups and sequenced their genomes. Again, they could link the two individuals to their genomes with a high degree of certainty.
Based on what they learned from the mock linkage attacks, Gerstein’s team developed a technique to mask some variants from a person’s genetic data while preserving where those variants are located in the genome. To do this, they replace the genetic variant of concern with one from a reference genome; which variants are removed depends on the genetic conditions or predispositions someone’s genetic data reveals.
Introducing too many of these privacy-masking variants can decrease the usefulness of the data. But Gerstein’s team struck a balance that enables researchers to obtain data on gene-expression values but also enables study participants to dictate how much of their genetic information they wish to keep hidden.
“}}
https://www.spectrumnews.org/news/sanitizing-functional-genomics-data-may-prevent-privacy-breaches/
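The masking step described in the quoted piece — swapping a sensitive variant for the reference allele while keeping its position in the file — can be sketched roughly as below. All function names and data structures here are illustrative assumptions, not the actual tool from Gerstein’s lab:

```python
# Hypothetical sketch of the variant-masking idea: a participant's
# sensitive variant calls are replaced with the reference allele, so
# the positions remain in the dataset but the private genotypes do not.

def sanitize_variants(calls, reference, sensitive_positions):
    """Replace each sensitive variant call with the reference allele.

    calls: dict mapping genome position -> observed allele
    reference: dict mapping genome position -> reference allele
    sensitive_positions: set of positions the participant wants hidden
    """
    masked = dict(calls)
    for pos in sensitive_positions:
        if pos in masked:
            # Keep the position in the output, but substitute the
            # reference allele for the participant's actual variant.
            masked[pos] = reference[pos]
    return masked

# Toy example: the C at position 101 is the variant to be hidden.
reference = {100: "A", 101: "G", 102: "T"}
calls = {100: "A", 101: "C", 102: "T"}
print(sanitize_variants(calls, reference, {101}))
# → {100: 'A', 101: 'G', 102: 'T'}
```

The trade-off the article mentions — more masking means less useful data — would correspond here to growing the `sensitive_positions` set.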
The Naked Scientists
Data sanitisation tool plugs privacy gap
Functional genomics data is vulnerable to de-anonymising attacks… 17 December 2020
Interview with
Mark Gerstein, Yale University
Part of the show RNA Vaccines, Privacy, and Penguins
part of the Naked Genetics podcast, on all podcast platforms and on the site here: https://www.thenakedscientists.com/podcasts/naked-genetics/rna-vaccines-privacy-and-penguins
direct link: https://www.thenakedscientists.com/articles/interviews/data-sanitisation-tool-plugs-privacy-gap
https://www.newyorker.com/magazine/2020/03/16/dressing-for-the-surveillance-age
Much of this @jmseabrook article about facial recognition will soon be applicable to genomic privacy & individuals’ attempts to protect themselves in this sphere as well…
QT:{{”
adversarialfashion.com
…
Adversarial examples demonstrate that deep-learning-based C.V. systems are only as good as their training data, and, because the data sets don’t contain all possible images, we can’t really trust them. In spite of the gains in accuracy and performance since the switch to deep learning, we still don’t understand or control how C.V. systems make decisions. “You train a neural network on inputs that represent the world a certain way,” Goldstein said. “And maybe something comes along that’s different—a lighting condition the system didn’t expect, or clothing it didn’t expect. It’s important that these systems are robust and don’t fail catastrophically when they stumble on something they aren’t trained on.”
The early work on adversarial attacks was done in the digital realm, using two-dimensional computer-generated images in a simulation. Making a three-dimensional adversarial object that could work in the real world is a lot harder, because shadows and partial views defeat the attack by introducing nuisance variables into the input image. A Belgian team of researchers printed adversarial images on
two-dimensional boards, which made them invisible to YOLO when they held the boards in front of them. Scientists at Northeastern University and at the M.I.T.-I.B.M. Watson A.I. Lab created an adversarial design that they printed on a T-shirt. Goldstein and his students came up with a whole line of clothes—hoodies, sweatshirts, T-shirts.
“}}
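The quoted passage describes adversarial examples only in words. The core trick — perturbing an input in the direction that flips the model’s decision — can be shown on a toy linear classifier. This is a minimal sketch of the fast-gradient-sign idea; the weights, input, and step size are all made up for illustration and stand in for a real deep network:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def dot(a, b):
    return sum(ai * bi for ai, bi in zip(a, b))

def sign(v):
    return (v > 0) - (v < 0)

w = [1.0, -2.0, 0.5]   # assumed "trained" weights of a linear classifier
x = [0.4, -0.3, 0.9]   # an input the model classifies confidently as class 1

p = sigmoid(dot(w, x))           # original prediction, about 0.81

# Fast-gradient-sign step: for a linear model the gradient of the
# logit w.r.t. x is just w, so nudging each feature against sign(w)
# pushes the logit toward the other class.
eps = 0.5
x_adv = [xi - eps * sign(wi) for xi, wi in zip(x, w)]

p_adv = sigmoid(dot(w, x_adv))   # prediction flips to about 0.43
```

A small, structured perturbation flips the decision even though the input barely changed — the same principle the adversarial T-shirts exploit against object detectors, in a far higher-dimensional input space.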
this is a quite interesting privacy leak
privacy component in the first year from NHGRI – this initiative is similar to BD2K
https://dpcpsi.nih.gov/sites/default/files/CoC_May_2020_1.05PM_Concept_Clearance_AIBLE_Brennan_508.pdf