Posts Tagged ‘deeplearning’

The Science of Mind Reading | The New Yorker

May 4, 2022

QT:{{"
…second, they thought that they had devised a method for
communicating with such “locked-in” people by detecting their unspoken thoughts.

Osgood became known not for the results of his surveys but for the method he invented to analyze them. He began by arranging his data in an imaginary space with fifty dimensions—one for fair-unfair, a second for hot-cold, a third for fragrant-foul, and so on. Any given concept, like tornado, had a rating on each dimension—and, therefore, was situated in what was known as high-dimensional space. Many concepts had similar locations on multiple axes: kind-cruel and honest-dishonest, for instance. Osgood combined these dimensions. Then he looked for new similarities, and combined dimensions again, in a process called “factor analysis.”

When you reduce a sauce, you meld and deepen the essential flavors. Osgood did something similar with factor analysis. Eventually, he was able to map all the concepts onto a space with just three dimensions. The first dimension was “evaluative”—a blend of scales like good-bad, beautiful-ugly, and kind-cruel. The second had to do with “potency”: it consolidated scales like large-small and strong-weak. The third measured how “active” or “passive” a concept was. Osgood could use these three key factors to locate any concept in an abstract space. Ideas with similar coördinates, he argued, were neighbors in meaning.

For decades, Osgood’s technique found modest use in a kind of personality test. Its true potential didn’t emerge until the nineteen-eighties, when researchers at Bell Labs were trying to solve what they called the “vocabulary problem.” People tend to employ lots of names for the same thing. This was an obstacle for computer users, who accessed programs by typing words on a command line.

They updated Osgood’s approach. Instead of surveying undergraduates, they used computers to analyze the words in about two thousand technical reports. The reports themselves—on topics ranging from graph theory to user-interface design—suggested the dimensions of the space; when multiple reports used similar groups of words, their dimensions could be combined. In the end, the Bell Labs researchers made a space that was more complex than Osgood’s. It had a few hundred dimensions. Many of these dimensions described abstract or “latent” qualities that the words had in common—connections that wouldn’t be apparent to most English speakers. The researchers called their technique “latent semantic analysis,” or L.S.A.

In the following years, scientists applied L.S.A. to ever-larger data sets. In 2013, researchers at Google unleashed a descendant of it onto the text of the whole World Wide Web. Google’s algorithm turned each word into a “vector,” or point, in high-dimensional space. The vectors generated by the researchers’ program, word2vec, are eerily accurate: if you take the vector for “king” and subtract the vector for “man,” then add the vector for “woman,” the closest nearby vector is “queen.” Word vectors became the basis of a much improved Google Translate, and enabled the auto-completion of sentences in Gmail. Other companies, including Apple and Amazon, built similar systems. Eventually, researchers realized that the “vectorization” made popular by L.S.A. and word2vec could be used to map all sorts of things.
"}}
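
The repeated dimension-combining described above, Osgood’s factor analysis and the “latent semantic analysis” the Bell Labs group built on it, boils down to a low-rank matrix factorization. Here is a minimal LSA sketch in Python with scikit-learn; the four toy documents, the two-component choice, and the variable names are my own illustrative assumptions, not anything from the Bell Labs work.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD

# Toy stand-ins for the Bell Labs technical reports.
docs = [
    "the user types a command on the command line",
    "graph theory describes nodes and edges",
    "interface design helps the user run programs",
    "edges connect nodes in a weighted graph",
]

# Word-document matrix: one row per document, one column per word.
X = TfidfVectorizer().fit_transform(docs)

# Truncated SVD merges correlated word dimensions into "latent" ones,
# here just two (the Bell Labs space had a few hundred).
lsa = TruncatedSVD(n_components=2, random_state=0)
coords = lsa.fit_transform(X)

# Documents on similar topics land near each other in the latent space,
# even when they share few exact words.
print(coords)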

I was also very impressed with how the article explained the concepts behind LSA and word2vec. I thought it was interesting that both techniques derive, in a sense, from Charles Osgood’s seminal work.
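
As a concrete illustration of the “king − man + woman ≈ queen” arithmetic quoted above, here is a minimal sketch using the gensim library. I am using a small pretrained GloVe model because gensim’s downloader ships one; the article’s example used word2vec vectors, but the arithmetic is the same.

import gensim.downloader as api

# Downloads ~66 MB of 50-dimensional GloVe word vectors on first use.
vectors = api.load("glove-wiki-gigaword-50")

# king - man + woman: most_similar adds the "positive" vectors,
# subtracts the "negative" ones, and returns the nearest neighbors
# by cosine similarity.
print(vectors.most_similar(positive=["king", "woman"], negative=["man"], topn=1))
# typically [('queen', ...)] with a high similarity score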

https://www.newyorker.com/magazine/2021/12/06/the-science-of-mind-reading

Deep Learning for AI | July 2021 | Communications of the ACM

August 10, 2021

https://cacm.acm.org/magazines/2021/7/253464-deep-learning-for-ai/fulltext

Reconciling modern machine-learning practice and the classical bias–variance trade-off

May 31, 2021

QT:{{"U-shaped bias–variance trade-off curve has shaped our view of model selection and directed applications of learning algorithms in practice."}}
Nice discussion of the limitations of the bias-variance tradeoff for #DeepLearning
https://www.pnas.org/content/116/32/15849
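
For intuition about the U-shaped curve in question, here is a minimal sketch with scikit-learn: polynomial regression on noisy toy data, where training error falls monotonically with degree while test error eventually rises. The data and degrees are arbitrary illustrative choices; the paper’s point is that heavily overparameterized models can escape this classical picture.

import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(0)

def noisy_sine(n):
    # Samples of sin(3x) on [-1, 1] with Gaussian noise.
    x = rng.uniform(-1, 1, (n, 1))
    y = np.sin(3 * x).ravel() + rng.normal(0, 0.2, n)
    return x, y

x_train, y_train = noisy_sine(40)
x_test, y_test = noisy_sine(400)

# Sweep model capacity: low degrees underfit (high bias),
# high degrees overfit (high variance): the classical U shape.
for degree in (1, 2, 4, 8, 15):
    model = make_pipeline(PolynomialFeatures(degree), LinearRegression())
    model.fit(x_train, y_train)
    train_err = mean_squared_error(y_train, model.predict(x_train))
    test_err = mean_squared_error(y_test, model.predict(x_test))
    print(f"degree {degree:2d}: train MSE {train_err:.3f}, test MSE {test_err:.3f}")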

4k, 60 fps San Francisco, a Trip down Market Street, April 14, 1906 – YouTube

July 10, 2020

Amazing “upscaling” of old movies by http://Neural.Love uses AI to colorize, de-noise & boost FPS. Video of SF 4 days before the 1906 earthquake illustrates this well
https://www.youtube.com/watch?v=VO_1AdYRGW8

Also:

https://www.youtube.com/channel/UCD8J_xbbBuGobmw_N5ga3MA

A watershed moment for protein structure prediction

January 20, 2020

https://www.nature.com/articles/d41586-019-03951-0

Machine mind hack: The new threat that could scupper the AI revolution | New Scientist

January 18, 2020

https://www.newscientist.com/article/mg24232270-200-machine-mind-hack-the-new-threat-that-could-scupper-the-ai-revolution/

Need to make a molecule? Ask this AI for instructions

April 7, 2018

Need to make a molecule? Ask this AI for instructions
http://www.nature.com/articles/d41586-018-03977-w #DeepLearning to do better #retrosynthesis. Perhaps other things in chemistry could be learned as well!

QT:{{"
“The tool, described in Nature on 28 March, is not the first software to wield artificial intelligence (AI) instead of human skill and intuition. Yet chemists hail the development as a milestone, saying that it could speed up the process of drug discovery and make organic chemistry more efficient.

“What we have seen here is that this kind of artificial intelligence can capture this expert knowledge,” says Pablo Carbonell, who designs synthesis-predicting tools at the University of Manchester, UK, and was not involved in the work. He describes the effort as “a landmark paper”.”
"}}

What to expect in 2018: science in the new year

January 13, 2018

What to expect in ’18: science in the new year
https://www.Nature.com/articles/d41586-018-00009-5 Insights from cancer & ancient #genomes. Cures from #CRISPR. Progress in #OpenAccess. Also, lots on outer space. But nothing on #cryoEM, #DeepLearning, #QuantumComputing or the brain connectome. HT @OBahcall

Google released variant calling with deep learning

December 16, 2017

$GOOG Is Giving Away AI That Can Build Your Genome Seq. https://Research.GoogleBlog.com/2017/12/deepvariant-highly-accurate-genomes.html + https://www.Wired.com/story/google-is-giving-away-ai-that-can-build-your-genome-sequence GATK creators now doing a TensorFlow version. Release sounded a bit like IBM unveiling Deep Blue decades ago: “Today, we announce…DeepVariant, a #DeepLearning tech…”

Steven Salzberg’s response to DeepVariant:
https://www.forbes.com/sites/stevensalzberg/2017/12/11/no-googles-new-ai-cant-build-your-genome-sequence/#5953db7b5774

QT:{{"On Monday, Google released a tool called DeepVariant that uses deep learning—the machine learning technique that now dominates AI—to identify all the mutations that an individual inherits from their parents.1 Modeled loosely on the networks of neurons in the human brain, these massive mathematical models have learned how to do things like identify faces posted to your Facebook news feed, transcribe your inane requests to Siri, and even fight internet trolls. And now, engineers at Google Brain and Verily (Alphabet’s life sciences spin-off) have taught one to take raw sequencing data and line up the billions of As, Ts, Cs, and Gs that make you you.”
"}}

Google Is Giving Away AI That Can Build Your Genome Sequence
https://www.wired.com/story/google-is-giving-away-ai-that-can-build-your-genome-sequence/