Posts Tagged ‘x78retwee’

Couple Who Defaced $400,000 Painting in South Korea Thought It Was a Public Art Project – The New York Times

May 7, 2021

Honestly, I’m wondering if this was secretly the intention all along. It’s in a public, high-traffic space, with paint cans and brushes just lying around and no security barriers. It’s almost begging for people to mess with it.

I agree: perhaps the intent was to appear to invite participation but then, in the end, to reject it.

https://www.nytimes.com/2021/04/07/world/asia/jonone-vandalism-south-korea-art.html

Teaching computers to read and speak chemistry

May 7, 2021

https://cen.acs.org/physical-chemistry/computational-chemistry/Teaching-computers-read-speak-chemistry/99/i5

Platypus genome

May 5, 2021

https://www.nytimes.com/2021/01/09/science/platypus-genome-echidna.amp.html

Using multiple measurements of tissue to estimate subject- and cell-type-specific gene expression | Bioinformatics | Oxford Academic

May 3, 2021

Nice analysis integrating bulk and single-cell data to get at inter-individual differences in the expression of genes in specific cell types: https://academic.oup.com/bioinformatics/article/36/3/782/5545976

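The core idea behind this kind of deconvolution is compact enough to sketch. Below is a toy illustration (not the paper’s actual estimator; all names, dimensions, and parameters are invented): given several bulk measurements of one subject with varying cell-type proportions, the subject’s cell-type-specific expression profiles can be recovered gene by gene with non-negative least squares.

import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(0)
n_measurements, n_cell_types, n_genes = 8, 3, 100

# Cell-type proportions per measurement (rows sum to 1),
# e.g. estimated from single-cell reference data.
P = rng.dirichlet(alpha=np.ones(n_cell_types), size=n_measurements)

# Ground-truth subject-specific expression per cell type (unknown in practice).
X_true = rng.gamma(shape=2.0, scale=5.0, size=(n_cell_types, n_genes))

# Observed bulk expression = mixture of cell-type profiles, plus noise.
B = P @ X_true + rng.normal(scale=0.5, size=(n_measurements, n_genes))

# Solve each gene independently: B[:, g] ≈ P @ x with x >= 0.
X_hat = np.column_stack([nnls(P, B[:, g])[0] for g in range(n_genes)])

print("correlation of estimated vs. true profiles:",
      np.corrcoef(X_hat.ravel(), X_true.ravel())[0, 1])

With enough measurements per subject and sufficiently varied proportions, the per-subject system is well conditioned, which is why repeated tissue measurements help.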

Prior SARS-CoV-2 infection rescues B and T cell responses to variants after first vaccine dose | Science

May 2, 2021

https://science.sciencemag.org/content/early/2021/04/29/science.abh1282

Those who worry about CO2 should worry about methane, too | The Economist

April 29, 2021

https://www.economist.com/science-and-technology/2021/04/03/those-who-worry-about-co2-should-worry-about-methane-too

Rare COVID reactions might hold key to variant-proof vaccines

April 28, 2021

https://www.nature.com/articles/d41586-021-00722-8

Pre-symptomatic detection of COVID-19 from smartwatch data | Nature Biomedical Engineering

April 28, 2021

https://www.nature.com/articles/s41551-020-00640-6

CRISPR and the Splice to Survive | The New Yorker

April 28, 2021

QT:{{”
A few feet away from the detoxed toads, Spot and Blondie were sitting in their own tank, an even more elaborate affair, with a picture of a tropical scene propped in front for their enjoyment. They were almost a year old and fully grown, with thick rolls of flesh around their midsections, like sumo wrestlers. Spot was mostly brown, with one yellowish hind leg; Blondie was more richly variegated, with whitish hind legs and light patches on his forelimbs and chest. Cooper reached a gloved hand into the tank and pulled out Blondie, whom she’d described to me as “beautiful.” He immediately peed on her. He appeared to be smiling malevolently. He had, it seemed to me, a face only a genetic engineer could love.

To guard against a Vonnegutian catastrophe, various fail-safe schemes have been proposed, with names like killer rescue, multi-locus assortment, and daisy chain. All of them share a basic, hopeful premise: it should be possible to engineer a gene drive that’s effective but not too effective. Such a drive might be engineered so as to exhaust itself after a few generations, or it might be yoked to a gene variant that’s limited to a single population on a single island. It has also been suggested that if a gene drive did somehow manage to go rogue it might be possible to send out into the world another gene drive, featuring a “Cas9-triggered chain ablation”—or catcha—sequence, to chase it down. What could possibly go wrong? “}}
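The “effective but not too effective” idea can be made concrete with a toy simulation. This is my own illustration, not anything from the article, and every parameter is invented: a daisy-chain-style payload B biases its own inheritance only while a separate “fuel” element A is present, and A, which is inherited normally and carries a fitness cost, washes out of the population, so the drive stalls after a few generations.

# Toy model of a self-exhausting, daisy-chain-style gene drive (illustrative only).
def simulate(generations=20, a_freq=0.2, b_freq=0.05,
             drive_efficiency=0.9, a_fitness_cost=0.3):
    history = []
    for gen in range(generations):
        history.append((gen, a_freq, b_freq))
        # B's super-Mendelian transmission requires the fuel A in the same cross,
        # so its spread is scaled by how common A still is.
        conversion = drive_efficiency * a_freq
        b_freq = b_freq + conversion * b_freq * (1 - b_freq)
        # A is not driven and pays a fitness cost, so selection removes it:
        # standard one-locus selection, p' = p(1-s) / (1 - s*p).
        a_freq = a_freq * (1 - a_fitness_cost) / (1 - a_fitness_cost * a_freq)
    return history

for gen, a, b in simulate():
    print(f"gen {gen:2d}  fuel A: {a:.3f}  drive B: {b:.3f}")

Running it shows B rising quickly at first, then plateauing well short of fixation as A disappears.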

Robo-writers: the rise and risks of language-generating AI

April 17, 2021

https://www.nature.com/articles/d41586-021-00530-0

GPT-3

QT:{{”
A neural network’s size — and therefore its power — is roughly measured by how many parameters it has. These numbers define the strengths of the connections between neurons. More neurons and more connections means more parameters; GPT-3 has 175 billion. The next-largest language model of its kind has 17 billion (see ‘Larger language models’). (In January, Google released a model with 1.6 trillion parameters, but it’s a ‘sparse’ model, meaning each parameter does less work. In terms of performance, this is equivalent to a ‘dense’ model that has between 10 billion and 100 billion parameters, says William Fedus, a researcher at the University of Montreal, Canada, and Google.)
“}}
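A concrete illustration of “more neurons and more connections means more parameters” (my own sketch, not from the article): in a dense feed-forward stack, each layer of width n_in → n_out contributes n_in*n_out weights plus n_out biases, so widening the hidden layers grows the count quadratically.

# Parameter counting for a dense feed-forward network (illustrative).
def count_params(layer_widths):
    return sum(n_in * n_out + n_out
               for n_in, n_out in zip(layer_widths, layer_widths[1:]))

small = count_params([1024, 4096, 4096, 1024])   # ~25.2 million parameters
wider = count_params([1024, 8192, 8192, 1024])   # ~83.9 million parameters

print(f"small: {small:,}")
print(f"wider: {wider:,}")  # doubling hidden width more than triples the count

This is also why a ‘sparse’ model’s raw parameter count overstates its capacity: for any given input, many of those parameters sit idle, which matches the article’s point that the 1.6-trillion-parameter sparse model performs like a much smaller dense one.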