Archive for December, 2020
Should we worry about the new variant of Covid-19? | The Spectator
https://www.spectator.co.uk/article/should-we-worry-about-the-new-variant-of-covid-19-
December 16, 2020: Batten Down The Hatches, Branford, As Winter Storm Approaches | Branford, CT Patch
December 16, 2020: New Covid strain: How worried should we be? | BBC News
December 16, 2020: Turning on pseudogenes | Interviews | Naked Scientists
December 15, 2020: Slack Is the Right Tool for the Wrong Way to Work | The New Yorker
December 14, 2020: Bill Gates says US entering worse phase of COVID pandemic and predicts lockdowns will last into 2022 | Daily Mail Online
December 14, 2020: Google says it mitigated a 2.54 Tbps DDoS attack in 2017, largest known to date | ZDNet
December 14, 2020: The 2020 Death Toll Is Higher Than Normal, and It’s Not All Covid-19 | The New York Times
December 14, 2020: A Physics Analysis of Every Jedi Jump in All of Star Wars | WIRED
December 13, 2020: Dressing for the Surveillance Age | The New Yorker
December 13, 2020https://www.newyorker.com/magazine/2020/03/16/dressing-for-the-surveillance-age
Much of this @jmseabrook article about facial recognition will soon be applicable to genomic privacy & individuals’ attempts to protect themselves in this sphere as well…
QT:{{”
adversarialfashion.com
…
Adversarial examples demonstrate that deep-learning-based C.V. systems are only as good as their training data, and, because the data sets don’t contain all possible images, we can’t really trust them. In spite of the gains in accuracy and performance since the switch to deep learning, we still don’t understand or control how C.V. systems make decisions. “You train a neural network on inputs that represent the world a certain way,” Goldstein said. “And maybe something comes along that’s different—a lighting condition the system didn’t expect, or clothing it didn’t expect. It’s important that these systems are robust and don’t fail catastrophically when they stumble on something they aren’t trained on.”
The early work on adversarial attacks was done in the digital realm, using two-dimensional computer-generated images in a simulation. Making a three-dimensional adversarial object that could work in the real world is a lot harder, because shadows and partial views defeat the attack by introducing nuisance variables into the input image. A Belgian team of researchers printed adversarial images on
two-dimensional boards, which made them invisible to YOLO when they held the boards in front of them. Scientists at Northeastern University and at the M.I.T.-I.B.M. Watson A.I. Lab created an adversarial design that they printed on a T-shirt. Goldstein and his students came up with a whole line of clothes—hoodies, sweatshirts, T-shirts.
“}}