Posts Tagged ‘x57l’
Making machine learning trustworthy | Science
August 29, 2021
Expanding Access to Large-Scale Genomic Data While Promoting Privacy: A Game Theoretic Approach: The American Journal of Human Genetics
August 29, 2021
https://www.cell.com/ajhg/fulltext/S0002-9297(16)30526-2
Expanding Access to Large-Scale Genomic Data While Promoting Privacy: A Game Theoretic Approach
Zhiyu Wan
Yevgeniy Vorobeychik
Weiyi Xia
Ellen Wright Clayton
Murat Kantarcioglu
Bradley Malin
Published: January 05, 2017
DOI: https://doi.org/10.1016/j.ajhg.2016.12.002
metabrain QTL
July 17, 2021
Brain expression quantitative trait locus and network analysis reveals downstream effects and putative drivers for brain-related diseases by N. de Klein et al., bioRxiv, 2021.
BrainChart
June 10, 2021
Only a tenth of the human genome is studied | The Economist
April 28, 2021
QT:{{”
There are roughly 20,000 genes in the human genome. Understanding genes and the proteins they encode can help to unravel the causes of diseases, and inspire new drugs to treat them. But most research focuses on only about ten percent of genes. Thomas Stoeger, Luis Amaral and their colleagues at Northwestern University in Illinois used machine learning to investigate why that might be.
First the team assembled a database of 430 biochemical features of both the genes themselves (such as the levels at which they are expressed in different cells) and the proteins for which they code (for example, their solubility). When they fed these data to their algorithm, they were able to explain about 40% of the difference in the attention paid to each gene (measured by the number of papers published) using just 15 features. Essentially, there were more papers on abundantly expressed genes that encode stable proteins. That suggests researchers—perhaps not unreasonably—focus on genes that are easier to study. Oddly, though, the pattern of publication has not changed much since 2000, despite the completion of the human genome project in 2003 and huge advances in DNA-sequencing technology. “}}
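The study the excerpt describes is, at its core, a supervised regression: per-gene publication counts predicted from numeric gene and protein features, with a small subset of features capturing most of the explainable variance. Below is a minimal Python sketch of that kind of analysis; the file name, column names, and the choice of a random-forest model are all assumptions for illustration, not the authors' actual pipeline.

# Sketch: predict per-gene publication counts from gene/protein features,
# then check how much variance a handful of top features explains.
# Assumes a hypothetical CSV with one row per gene: 430 numeric feature
# columns plus an "n_publications" column. Not the authors' actual pipeline.
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score

df = pd.read_csv("gene_features.csv")          # hypothetical file name
X = df.drop(columns=["n_publications"])
y = np.log1p(df["n_publications"])             # publication counts are skewed

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Fit on all 430 features, then rank features by importance.
model = RandomForestRegressor(n_estimators=500, random_state=0)
model.fit(X_train, y_train)
top15 = X.columns[np.argsort(model.feature_importances_)[::-1][:15]]

# Refit using only the 15 most informative features and report R^2,
# the analogue of the ~40% of variance explained in the excerpt.
small = RandomForestRegressor(n_estimators=500, random_state=0)
small.fit(X_train[top15], y_train)
print("R^2 with 15 features:", r2_score(y_test, small.predict(X_test[top15])))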
Robo-writers: the rise and risks of language-generating AI
April 17, 2021
https://www.nature.com/articles/d41586-021-00530-0
GPT3
QT:{{”
A neural network’s size — and therefore its power — is roughly measured by how many parameters it has. These numbers define the strengths of the connections between neurons. More neurons and more connections means more parameters; GPT-3 has 175 billion. The next-largest language model of its kind has 17 billion (see ‘Larger language models’). (In January, Google released a model with 1.6 trillion parameters, but it’s a ‘sparse’ model, meaning each parameter does less work. In terms of performance, this is equivalent to a ‘dense’ model that has between 10 billion and 100 billion parameters, says William Fedus, a researcher at the University of Montreal, Canada, and Google.)
“}}
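As a back-of-the-envelope check on "more neurons and more connections means more parameters": a dense layer connecting n_in inputs to n_out outputs contributes n_in × n_out weights (one per connection) plus n_out biases. A small Python sketch follows; it counts parameters in a toy multilayer perceptron, not in GPT-3's actual transformer architecture.

# Sketch: count trainable parameters in a stack of dense layers.
# A dense layer mapping n_in inputs to n_out outputs has
# n_in * n_out weights (one per connection) plus n_out biases.
def dense_params(n_in: int, n_out: int) -> int:
    return n_in * n_out + n_out

def mlp_params(layer_widths: list[int]) -> int:
    return sum(dense_params(a, b) for a, b in zip(layer_widths, layer_widths[1:]))

# Doubling the width of every layer roughly quadruples the parameter
# count, since parameters grow with the product of adjacent widths.
print(mlp_params([1024, 1024, 1024]))   # ~2.1M parameters
print(mlp_params([2048, 2048, 2048]))   # ~8.4M parameters

This quadratic scaling in width is why parameter counts balloon so quickly as models grow, and why a "sparse" model's headline count overstates its effective capacity when each parameter does less work.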
Why Google Photos unlimited storage is going away
December 7, 2020
A clarifying article on the future of Google Photos
https://www.fastcompany.com/90579872/google-photos-no-unlimited-storage