Posts Tagged ‘quote’

What to Know About Disinfecting and Cleaning Surfaces – The New York Times

June 4, 2022

Is there a way of selectively killing just the “bad” microbes on surfaces, while leaving only the “good” ones?

QT:{{”
“The bottom line: We germaphobes can still delight in killing germs, but perhaps not all of them. When I need to clean a spill, I’ll use soap and water or a gentle cleaning spray, not a disinfectant. But after handling raw meat, or when a family member is ill, I’ll reach for the stronger stuff to clean contaminated surfaces, and I’ll make sure to let it sit long enough to work, with the windows open. And while I wait, maybe I’ll have the chance to tidy my house, too.” “}}

You’re Cleaning All Wrong
https://www.nytimes.com/2022/05/05/well/clean-disinfect-home-germs.html

Mirror therapy – Wikipedia

May 29, 2022

https://en.wikipedia.org/wiki/Mirror_therapy

QT:{{”
Mirror therapy (MT) or mirror visual feedback (MVF) is a therapy for pain or disability that affects one side of the patient more than the other side. It was invented by Vilayanur S. Ramachandran to treat post-amputation patients who had phantom limb pain (PLP). Ramachandran created a visual (and psychological) illusion of two intact limbs by putting the patient’s affected limb into a “mirror box,” with a mirror down the center (facing toward a patient’s intact limb).
“}}

Particle’s surprise mass threatens to upend the standard model

May 29, 2022

Hot news in ’22, but this was based on data collected by 2011!

https://www.nature.com/articles/d41586-022-01014-5

QT:{{”
Old experiment, new tricks

In the latest work, Kotwal and his collaborators aimed to take the most precise measurement ever of the W’s mass. The data had all been collected by 2011, when Fermilab’s Tevatron — a 6-kilometre-long circular machine that collided protons with antiprotons and was once the world’s most powerful accelerator — shut down. But the latest measurement would not have been possible back then, says Kotwal. Instead, it is the result of a steady improvement of techniques in data analysis, as well as the particle-physics community’s improved understanding of how protons and antiprotons behave in collisions. “Many of the techniques to achieve that kind of precision we had not even learned about by 2012.”

The team looked at roughly four million W bosons produced inside the CDF detector between 2002 and 2011 — a data set four times larger than the group used in an early measurement in 2012. The researchers calculated the energy of each decay electron by measuring how its trajectory bent in a magnetic field. One painstaking advance over the past decade improved the resolution of the trajectories from roughly 150 micrometres to less than 30 micrometres, says Kotwal.
“}}
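The quoted measurement hinges on the standard relation between a charged particle's momentum and how sharply its track bends in a magnetic field: p [GeV/c] ≈ 0.3 · B [T] · r [m] for a unit-charge particle. A minimal sketch of that relation (the field and radius values below are illustrative, not CDF's actual numbers):

```python
def momentum_gev(b_tesla: float, radius_m: float) -> float:
    """Momentum (GeV/c) of a unit-charge particle from its bending
    radius in a magnetic field, via p ~ 0.3 * B * r."""
    return 0.3 * b_tesla * radius_m

# A track curving with radius 1.0 m in a 1.4 T field:
p = momentum_gev(1.4, 1.0)
print(p)  # 0.42 GeV/c
```

This is why tightening the track resolution from ~150 to ~30 micrometres matters: the momentum, and hence the reconstructed W mass, is only as precise as the measured curvature.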

Testing Positive for the Coronavirus Overseas: What You Need to Know – The New York Times

May 13, 2022

QT:{{”
To enter the United States, all air passengers age 2 and older must have a negative coronavirus test taken within one day of departure…. The accepted PCR and viral tests are available at many hotels, airports, health clinics and local pharmacies overseas. Certain antigen or nucleic acid amplification self-tests like BinaxNOW and Ellume…are also accepted. These require you to connect to a telehealth service by video, so that you can be supervised by a medical practitioner while you take the test….
There are no testing requirements for travelers entering the United States through land or ferry ports of entry.
“}}

https://www.nytimes.com/2022/05/04/travel/covid-test-positive-traveling-overseas.html

Understanding adversarial examples requires a theory of artefacts for deep learning | Nature Machine Intelligence

May 5, 2022

Thought this was a good perspective:
https://www.nature.com/articles/s42256-020-00266-y
I liked the way it connects AlphaFold’s success in exploiting “inscrutable” features of residue-residue interactions to the “artefacts” exploited by adversarial attacks.

QT:{{”
Returning to debate over Ilyas et al.’s results, suppose for the sake of argument that there are scientific disciplines in which progress may depend in some crucial way on detecting or modelling predictively useful but human-inscrutable features. To ground the discussion in a speculative but plausible example, let us return to protein folding. For many years in the philosophy of science, protein folding was regarded as paradigm evidence for ‘emergent’ properties36—properties that only appear at higher levels of investigation, and which humans cannot reduce to patterns in lower-level structures. The worry here is that the interactions among amino acids in a protein chain are so complex that humans would never be able to explain biochemical folding principles in terms of lower-level physics37. Instead, scientists have relied on a series of analytical ‘energy landscape’ or ‘force field’ models that can predict the stability of final fold configurations with some degree of success. These principles are intuitive and elegant once understood, but their elements cannot be reduced to the components of a polypeptide chain in any straightforward manner, and there seem to be stark upper limits on their prediction accuracy. By contrast, AlphaFold38 on its first entry in the CASP protein-folding competition was able to beat state-of-the-art analytical models on 40 out of 43 of the test proteins, and achieve an unprecedented 15% jump in accuracy across the full test set.

Subsequent work39 has suggested that the ability of DNNs to so successfully predict final fold configurations may depend on the identification of ‘interaction fingerprints’, which are distributed across the full polypeptide chain. We might speculate that these interaction fingerprints are like the non-robust features that cause image-classifying networks to be susceptible to adversarial attacks, in that they are complex, spatially distributed, predictively useful, and not amenable to human understanding. Suppose this is all the case, for the sake of argument; whether protein science should rely on such fingerprints depends on whether they are artefacts, and if so whether we can understand their origins.

Researchers should develop a systematic taxonomy of the kinds of features learned by DNNs and tools to distinguish them from one another and gauge their suitability for various scientific projects. The first cut in this taxonomy would divide those features that are reliably predictive from those that are not; this distinction has long been a central focus of research in machine learning and is explored by standard methods like cross-validation. The next cut would distinguish predictive features that are scrutable to humans (robust) from those that humans find inscrutable (non-robust); this is the cut that Ilyas et al., and Zhou and Firestone have begun to explore. Finally, the third cut divides the predictive-but-inscrutable features into artefacts and inherent data patterns detectable only by non-human processing, with the former targeted for more suspicion until a theory of their origins and techniques for mitigation can be deployed; Goh’s Distill response has made some initial steps here. More research on the last two cuts is urgently needed to understand the full implications of DNNs’ susceptibility to adversarial attack.

“}}
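The paper’s “first cut” — separating reliably predictive features from unreliable ones — is exactly what cross-validation estimates: held-out accuracy tells you whether what the model learned generalizes, though not whether it is human-scrutable. A minimal sketch with scikit-learn (the synthetic data and the choice of classifier are illustrative, not from the paper):

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# Synthetic data: 5 informative features buried among 15 noise features
X, y = make_classification(n_samples=500, n_features=20,
                           n_informative=5, random_state=0)

# Mean accuracy on 5 held-out folds gauges whether the learned
# features are reliably predictive, not merely fit to the training set
scores = cross_val_score(LogisticRegression(max_iter=1000), X, y, cv=5)
print(scores.mean())
```

The second and third cuts the authors call for — scrutable vs. inscrutable, artefact vs. genuine pattern — have no such off-the-shelf test, which is the point of the passage.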

The Science of Mind Reading | The New Yorker

May 4, 2022

QT:{{”
…second, they thought that they had devised a method for communicating with such “locked-in” people by detecting their unspoken thoughts.

Osgood became known not for the results of his surveys but for the method he invented to analyze them. He began by arranging his data in an imaginary space with fifty dimensions—one for fair-unfair, a second for hot-cold, a third for fragrant-foul, and so on. Any given concept, like tornado, had a rating on each dimension—and, therefore, was situated in what was known as high-dimensional space. Many concepts had similar locations on multiple axes: kind-cruel and honest-dishonest, for instance. Osgood combined these dimensions. Then he looked for new similarities, and combined dimensions again, in a process called “factor analysis.”

When you reduce a sauce, you meld and deepen the essential flavors. Osgood did something similar with factor analysis. Eventually, he was able to map all the concepts onto a space with just three dimensions. The first dimension was “evaluative”—a blend of scales like good-bad, beautiful-ugly, and kind-cruel. The second had to do with “potency”: it consolidated scales like large-small and strong-weak. The third measured how “active” or “passive” a concept was. Osgood could use these three key factors to locate any concept in an abstract space. Ideas with similar coördinates, he argued, were neighbors in meaning.

For decades, Osgood’s technique found modest use in a kind of personality test. Its true potential didn’t emerge until the nineteen-eighties, when researchers at Bell Labs were trying to solve what they called the “vocabulary problem.” People tend to employ lots of names for the same thing. This was an obstacle for computer users, who accessed programs by typing words on a command line.

They updated Osgood’s approach. Instead of surveying undergraduates, they used computers to analyze the words in about two thousand technical reports. The reports themselves—on topics ranging from graph theory to user-interface design—suggested the dimensions of the space; when multiple reports used similar groups of words, their dimensions could be combined. In the end, the Bell Labs researchers made a space that was more complex than Osgood’s. It had a few hundred dimensions. Many of these dimensions described abstract or “latent” qualities that the words had in common—connections that wouldn’t be apparent to most English speakers. The researchers called their technique “latent semantic analysis,” or L.S.A.

In the following years, scientists applied L.S.A. to ever-larger data sets. In 2013, researchers at Google unleashed a descendant of it onto the text of the whole World Wide Web. Google’s algorithm turned each word into a “vector,” or point, in high-dimensional space. The vectors generated by the researchers’ program, word2vec, are eerily accurate: if you take the vector for “king” and subtract the vector for “man,” then add the vector for “woman,” the closest nearby vector is “queen.” Word vectors became the basis of a much improved Google Translate, and enabled the auto-completion of sentences in Gmail. Other companies, including Apple and Amazon, built similar systems. Eventually, researchers realized that the “vectorization” made popular by L.S.A. and word2vec could be used to map all sorts of things.
“}}
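The king − man + woman ≈ queen arithmetic the article describes is easy to demonstrate with toy vectors. These hand-crafted 2-D vectors are NOT real word2vec output — they are fabricated so that one axis encodes “royalty” and the other encodes gender — but the nearest-neighbor mechanics are the same:

```python
import numpy as np

# Toy, hand-crafted vectors (not real word2vec embeddings):
# axis 0 ~ "royalty", axis 1 ~ gender
vec = {
    "king":  np.array([1.0,  1.0]),
    "man":   np.array([0.0,  1.0]),
    "woman": np.array([0.0, -1.0]),
    "queen": np.array([1.0, -1.0]),
}

def cosine(a, b):
    """Cosine similarity between two vectors."""
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

# king - man + woman: which word (other than "king") is closest?
target = vec["king"] - vec["man"] + vec["woman"]
nearest = max((w for w in vec if w != "king"),
              key=lambda w: cosine(target, vec[w]))
print(nearest)  # queen
```

Real embeddings live in hundreds of dimensions learned from corpus statistics, but the analogy query is still just vector arithmetic plus a nearest-neighbor search, as above.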

I was also very impressed with how the article explained concepts related to LSA and word2vec. Thought it was interesting that they were derived, in a sense, from Charles Osgood’s seminal work.
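The LSA step the article describes — combining dimensions when documents use similar groups of words — is a truncated singular value decomposition of a term-document matrix. A minimal sketch on a fabricated toy corpus (the counts and vocabulary below are invented for illustration, echoing the Bell Labs reports on graph theory and user-interface design):

```python
import numpy as np

# Tiny term-document count matrix: rows = terms, columns = documents.
# Docs 0-1 are about graph theory; docs 2-3 are about UI design.
terms = ["graph", "vertex", "interface", "user"]
A = np.array([
    [3, 2, 0, 0],
    [2, 3, 0, 0],
    [0, 0, 3, 2],
    [0, 0, 2, 3],
], dtype=float)

# Truncated SVD keeps only the k strongest latent dimensions
U, s, Vt = np.linalg.svd(A, full_matrices=False)
k = 2
term_vecs = U[:, :k] * s[:k]   # each term as a k-dimensional latent vector

def cosine(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

# Terms that co-occur across documents land close together in latent space
print(cosine(term_vecs[0], term_vecs[1]))  # graph vs vertex: ~1
print(cosine(term_vecs[0], term_vecs[2]))  # graph vs interface: ~0
```

The “few hundred dimensions” of the Bell Labs space correspond to choosing a larger k over a much larger matrix; the mechanics are the same.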

https://www.newyorker.com/magazine/2021/12/06/the-science-of-mind-reading

iPhone Notebook export for The Performance Cortex: How Neuroscience Is Redefining Athletic Genius

April 30, 2022

Your Notebook exported from The Performance Cortex: How Neuroscience Is Redefining Athletic Genius

https://www.goodreads.com/notes/36560373-the-performance-cortex/114528832-mark-gerstein?ref=h_cr

The role of dorsolateral and ventromedial prefrontal cortex in the processing of emotional dimensions | Scientific Reports

April 23, 2022

https://www.nature.com/articles/s41598-021-81454-7
QT:{{”
To put it in a nutshell, the vmPFC is assumed to have a crucial role in emotional processing, whereas the dlPFC is predominantly involved in cognitive control and executive processing.
It is however debatable if such a strict functional distinction of the respective areas does hold.
“}}

Yankees beat Cowboys for title of most valuable sports team

April 22, 2022

https://nypost.com/2022/04/18/yankees-beat-cowboys-for-title-of-most-valuable-sports-team/
QT:{{”
When it comes to valuations of sports teams, the New York Yankees are king of the hill and top of the heap.
The Bronx Bombers are worth $7.01 billion — beating out the NFL’s Dallas Cowboys for the title of most valuable franchise.
The Cowboys are the second most valuable team with a valuation of $6.92 billion, according to the news site Sportico, which factored in metrics such as revenue, real estate, and related businesses. The NBA’s New York Knicks ($6.12 billion) and the Golden State Warriors ($6.03 billion) are the only other sports franchises whose valuations are north of $6 billion.
The rankings were done based on figures from 2019 — the last full season that was played before the pandemic.
“}}

Keeping Secrets: Anonymous Data Isn’t Always Anonymous – I School Online

April 17, 2022

https://ischoolonline.berkeley.edu/blog/anonymous-data/

QT:{{”
The classic example of this problem occurred in 1997, when Latanya Sweeney, who was then a graduate student at MIT, found the medical records of Massachusetts Governor William Weld, who had collapsed during a public ceremony. She used Weld’s readily available zip code and birth date to scan the Massachusetts Group Insurance Commission (GIC) database for his records and confirmed the identity using voter-registration records from Cambridge, Massachusetts. Some have cited this as an unusual example, given that it involved a high-profile public figure, which may not be generally repeatable. However, at the American Association for the Advancement of Science meeting in Chicago earlier this month, Sweeney, who is now a computer science professor at Harvard, presented the results of another sting operation: This time, she purchased a $50 database from the state of Washington that included all hospitalization records for one year. The data included patient demographic information, diagnoses, the identity of the attending physicians, the hospital, and the method used to pay the bill. It had no patient names or addresses, but it included the zip code. Sweeney then conducted a search of all news stories in the state that contained the word ‘hospitalized’ during the same period. With a little sleuthing, her team found they could exactly match the information from an article to the database in 43 percent of the cases (they hired a reporter to confirm the identifications), essentially allowing them to place a name on an anonymized health record.
“}}