Posts Tagged ‘from’

Edit Video Fast | Simon Says

July 3, 2021

https://www.simonsays.ai/
Does AI transcription for video editing.

NY Times article about remote work

June 24, 2021

https://www.nytimes.com/2021/06/23/upshot/remote-work-innovation-office.html

Do Chance Meetings at the Office Boost Innovation? There’s No Evidence of It. For some, the office even stifles creativity. As the pandemic eases in the U.S., a few companies seek to reimagine what work might look like.

LawSeq

June 20, 2021

https://lawseq.umn.edu

Stackelberg competition – Wikipedia

June 20, 2021

https://en.wikipedia.org/wiki/Stackelberg_competition
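
The model's core is backward induction: the leader commits to a quantity first, the follower best-responds, so the leader optimizes against the follower's reaction function. A minimal sketch of the textbook duopoly case, assuming linear inverse demand P = a - b*(q1 + q2) and a common marginal cost c (all parameter values illustrative, not from the article):

```python
# Stackelberg duopoly solved by backward induction.
# Inverse demand: P = a - b*(q1 + q2); both firms have marginal cost c.
a, b, c = 100.0, 1.0, 10.0  # illustrative numbers

def follower_best_response(q1: float) -> float:
    # Follower maximizes (P - c) * q2 given the leader's q1
    return max(0.0, (a - c - b * q1) / (2 * b))

def leader_profit(q1: float) -> float:
    q2 = follower_best_response(q1)
    price = a - b * (q1 + q2)
    return (price - c) * q1

# Closed-form leader optimum: q1* = (a - c) / (2b)
q1_star = (a - c) / (2 * b)
q2_star = follower_best_response(q1_star)
print(q1_star, q2_star, leader_profit(q1_star))  # 45.0 22.5 1012.5
```

The first-mover advantage shows up against the simultaneous (Cournot) benchmark, where each firm would produce (a - c) / (3b) = 30 and the would-be leader would earn only 900.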

Reconciling modern machine-learning practice and the classical bias–variance trade-off

May 31, 2021

QT:{{“U-shaped bias–variance trade-off curve has shaped our view of model selection and directed applications of learning algorithms in practice. “}}
Nice discussion of the limitations of the bias-variance tradeoff for #DeepLearning
https://www.pnas.org/content/116/32/15849
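
The paper's "double descent" curve is easy to reproduce in a few lines: with minimum-norm least squares, test error traces the classical U-shape as features are added, peaks near the interpolation threshold (number of features roughly equal to the number of training points), then descends again. A self-contained sketch, using a synthetic sine target and Gaussian-bump random features of my own choosing rather than the paper's exact setup:

```python
import numpy as np

rng = np.random.default_rng(0)
n_train, noise = 20, 0.1

x_tr = rng.uniform(0, 1, n_train)
y_tr = np.sin(2 * np.pi * x_tr) + noise * rng.standard_normal(n_train)
x_te = np.linspace(0, 1, 200)
y_te = np.sin(2 * np.pi * x_te)

centers = rng.uniform(0, 1, 200)  # pool of random feature centers

def features(x, k, scale=0.1):
    # k Gaussian-bump features; an arbitrary illustrative feature map
    return np.exp(-((x[:, None] - centers[None, :k]) ** 2) / (2 * scale**2))

for k in [2, 5, 10, 15, 20, 25, 50, 100, 200]:
    # lstsq returns the minimum-norm solution once k exceeds n_train
    w, *_ = np.linalg.lstsq(features(x_tr, k), y_tr, rcond=None)
    mse = np.mean((features(x_te, k) @ w - y_te) ** 2)
    print(f"features={k:4d}  test MSE={mse:.4f}")
```

In runs of this sketch, test error typically spikes near features = n_train and falls again beyond it, the pattern the paper reconciles with the classical trade-off.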

How to Make Oobleck – A Simple Recipe for Making Slime | Live Science

May 19, 2021

QT:{{”
Want to have fun with physics and even “walk on water”? Try making a mixture of cornstarch and water called oobleck. It makes a great science project or is just fun to play around with. Oobleck is a non-Newtonian fluid. “}}

https://www.livescience.com/21536-oobleck-recipe.html
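
The "non-Newtonian" part is the interesting physics: oobleck's apparent viscosity rises with shear rate (shear thickening), which is why it stiffens when you stomp on it and flows when you stand still. A toy sketch using the power-law (Ostwald-de Waele) model, with made-up coefficients chosen only to illustrate the trend, not measured values for oobleck:

```python
# Power-law fluid: apparent_viscosity = K * shear_rate ** (n - 1)
# n > 1 -> shear-thickening (oobleck-like); n < 1 -> shear-thinning; n = 1 -> Newtonian
K, n = 5.0, 1.8  # illustrative consistency and flow-behavior indices

for rate in (0.1, 1.0, 10.0, 100.0):  # shear rate in 1/s
    eta = K * rate ** (n - 1)
    print(f"shear rate {rate:6.1f} 1/s -> apparent viscosity {eta:8.2f} Pa*s")
```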

Sleep datasets with possible public or expert access

May 6, 2021

STAGES (N=30,000)
https://academic.oup.com/sleep/article/41/suppl_1/A124/4988361

Others are somewhat smaller:
https://sleepdata.org/datasets/hchs (N=16,000; actigraphy but not genomics)

https://sleepdata.org/datasets/mesa (N=6,800; actigraphy but not genomics; strength is longitudinal following)
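
The NSRR datasets above distribute polysomnography signals as EDF files once access is granted. A minimal sketch for inspecting a downloaded record with MNE; the filename is a hypothetical placeholder, and actual record naming depends on the dataset:

```python
import mne  # pip install mne

# Hypothetical local file; real record names vary by dataset
raw = mne.io.read_raw_edf("mesa-sleep-0001.edf", preload=False)

print(raw.info["sfreq"])     # sampling frequency in Hz
print(raw.ch_names)          # available channels (EEG, EOG, ECG, ...)
print(raw.times[-1] / 3600)  # recording duration in hours
```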

Robo-writers: the rise and risks of language-generating AI

April 17, 2021

https://www.nature.com/articles/d41586-021-00530-0

GPT3

QT:{{”
A neural network’s size — and therefore its power — is roughly measured by how many parameters it has. These numbers define the strengths of the connections between neurons. More neurons and more connections means more parameters; GPT-3 has 175 billion. The next-largest language model of its kind has 17 billion (see ‘Larger language models’). (In January, Google released a model with 1.6 trillion parameters, but it’s a ‘sparse’ model, meaning each parameter does less work. In terms of performance, this is equivalent to a ‘dense’ model that has between 10 billion and 100 billion parameters, says William Fedus, a researcher at the University of Montreal, Canada, and Google.)
“}}
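
A quick sanity check on the 175-billion figure: leaving out embeddings, each transformer layer carries roughly 4*d^2 attention weights plus 8*d^2 feed-forward weights, about 12*d^2 in total. Plugging in GPT-3's published configuration (96 layers, model width 12,288):

```python
def approx_transformer_params(n_layers: int, d_model: int) -> int:
    # ~4*d^2 (attention projections) + ~8*d^2 (feed-forward block) per layer
    return 12 * n_layers * d_model ** 2

# GPT-3: 96 layers, d_model = 12288 -> ~1.74e11, within rounding of 175 billion
print(f"{approx_transformer_params(96, 12288):.2e}")
```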

Smart cities built with smart materials | Science

April 8, 2021

Is plain-old asphalt a smart, self-healing material?
https://science.sciencemag.org/content/371/6535/1200

Human local adaptation of the TRPM8 cold receptor along a latitudinal cline

April 7, 2021

Stumbled onto this paper. Thought the conclusion that Europeans were more cold-sensitive due to TRPM8 was quite counterintuitive, but interesting nevertheless.
https://journals.plos.org/plosgenetics/article?id=10.1371/journal.pgen.1007298

cold receptor