Posts Tagged ‘network’

Mind the gaps: The holes in your brain that make you smart

June 10, 2017

Mind the gaps: The holes in your brain…make you smart
https://www.NewScientist.com/article/mg23331180-300-mind-the-gaps-the-holes-in-your-brain-that-make-you-smart/ Contrasts connectivity from graphs vs large-scale topology

Network Analysis: 60-second animation shows how divided Congress has become over the last 60 years

June 9, 2017

#Network Analysis [highlighting #modularity change]: Animation shows how divided Congress has become over…60 yrs https://www.YouTube.com/watch?v=tEczkhfLwqM
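The modularity idea behind the animation is easy to demonstrate on a toy two-party graph: as cross-party ties disappear, the modularity of the party partition climbs. Below is a minimal Python/networkx sketch with made-up node and edge counts (purely illustrative, not the analysis behind the video):

```python
# Toy "Congress" graph: dense within-party ties, a shrinking number of
# cross-party ties. Modularity of the party partition rises as the
# chambers become more divided. All numbers here are invented.
import networkx as nx
from networkx.algorithms.community import modularity

def toy_congress(cross_party_edges: int) -> nx.Graph:
    G = nx.Graph()
    dems = [f"D{i}" for i in range(10)]
    reps = [f"R{i}" for i in range(10)]
    # dense within-party agreement ties
    G.add_edges_from((a, b) for i, a in enumerate(dems) for b in dems[i + 1:])
    G.add_edges_from((a, b) for i, a in enumerate(reps) for b in reps[i + 1:])
    # a handful of cross-party ties
    G.add_edges_from((dems[i], reps[i]) for i in range(cross_party_edges))
    return G

for k in (8, 4, 1):
    G = toy_congress(k)
    parties = [set(n for n in G if n.startswith("D")),
               set(n for n in G if n.startswith("R"))]
    print(f"{k} cross-party ties -> modularity {modularity(G, parties):.3f}")
```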

Network analytics in the age of big data | Science

April 2, 2017

#Network analytics in the age of #BigData
http://science.ScienceMag.org/content/353/6295/123.full Emphasizes analyzing connectivity of graph structures (e.g. motifs) vs. individual nodes

QT:{{”
To mine the wiring patterns of networked data and uncover the functional organization, it is not enough to consider only simple descriptors, such as the number of interactions that each entity (node) has with other entities (called node degree), because two networks can be identical in such simple descriptors, but have a very different connectivity structure (see the figure). Instead, Benson et al. use higher-order descriptors called graphlets (e.g., a triangle) that are based on small subnetworks obtained on a subset of nodes in the data that contain all interactions that appear in the data (3). They identify network regions rich in instances of a particular graphlet type, with few of the instances of the particular graphlet crossing the boundaries of the regions. If the graphlet type is specified in advance, the method can uncover the nodes interconnected by it, which enabled Benson et al. to group together 20 neurons in the nematode worm neuronal network that are known to control a particular type of movement. In this way, the method unifies the local wiring patterning with higher-order structural modularity imposed by it, uncovering higher-order functional regions in networked data. “}}
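As a concrete (and much simpler) illustration of why such higher-order descriptors matter, the sketch below scores nodes of a toy graph by one graphlet type, the triangle, and pulls out the triangle-rich region. It uses plain networkx and is not Benson et al.'s motif-based method, just a minimal demonstration of the degree-vs-graphlet distinction the quote makes:

```python
# Minimal sketch: score nodes by participation in a chosen graphlet
# (here, triangles) and pull out the region rich in that graphlet.
# Illustrative only; NOT Benson et al.'s motif-based spectral clustering.
import networkx as nx

G = nx.karate_club_graph()     # stand-in for real networked data

tri = nx.triangles(G)          # triangles through each node (higher-order descriptor)
deg = dict(G.degree())         # simple descriptor: node degree

# Two nodes can share a degree yet differ sharply in triangle counts,
# which is exactly why degree alone cannot capture wiring patterns.
for n in sorted(G.nodes())[:5]:
    print(n, "degree:", deg[n], "triangles:", tri[n])

# Crude "higher-order region": the subgraph induced by nodes that sit
# on at least one instance of the chosen graphlet type.
rich = [n for n, t in tri.items() if t > 0]
region = G.subgraph(rich)
print("triangle-rich region:", region.number_of_nodes(), "nodes")
```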

Uri Alon's coherent/incoherent FFLs

March 9, 2017

Function of the FFL network motif
http://www.PNAS.org/content/100/21/11980.abstract Decade-old work defining coherent & incoherent FFLs based on activation/repression

http://sites.fas.harvard.edu/~mcb195/lectures/Networks-II/Literature/pnas_Mangan03.pdf
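The coherent/incoherent split reduces to a sign comparison in the three-node FFL X→Y→Z with a direct X→Z edge: the loop is coherent when the sign of the direct edge equals the sign product along the indirect path. A minimal Python sketch enumerating the eight sign combinations (four coherent, four incoherent):

```python
# Each FFL edge is +1 (activation) or -1 (repression); the loop is
# coherent when the direct X->Z sign matches the sign of the indirect
# X->Y->Z path, incoherent otherwise.
from itertools import product

def ffl_type(xy: int, yz: int, xz: int) -> str:
    indirect = xy * yz                 # sign of the X -> Y -> Z path
    return "coherent" if indirect == xz else "incoherent"

for xy, yz, xz in product((+1, -1), repeat=3):
    print(f"X->Y:{xy:+d} Y->Z:{yz:+d} X->Z:{xz:+d} -> {ffl_type(xy, yz, xz)}")
```

In Mangan & Alon's nomenclature these eight cases are the C1-C4 (coherent) and I1-I4 (incoherent) FFL types.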

A Proteome-wide Fission Yeast Interactome Reveals Network Evolution Principles from Yeasts to Human: Cell

February 24, 2017

FissionNet: Proteome-wide [pombe] Interactome Reveals #Network Evolution Principles
http://www.Cell.com/cell/abstract/S0092-8674(15)01556-1 Involving ~1300 soluble proteins

A scored human protein-protein interaction network to catalyze genomic interpretation : Nature Methods : Nature Research

December 9, 2016

Scored…PPI #network to catalyze genomic interpretation http://www.Nature.com/nmeth/journal/vaop/ncurrent/full/nmeth.4083.html >500k links from lit. mining; up-weights small-scale experiments

Spatiotemporal 16p11.2 Protein Network Implicates Cortical Late Mid-Fetal Brain Development and KCTD13-Cul3-RhoA Pathway in Psychiatric Diseases

November 15, 2016

Spatiotemporal…Protein Network Implicates Cortical…Fetal Brain Development & KCTD13…RhoA Pathway in…Diseases
http://www.sciencedirect.com/science/article/pii/S0896627315000367

Dynamic PPI with BrainSpan data

Spatiotemporal 16p11.2 Protein Network Implicates Cortical Late Mid-Fetal Brain Development and KCTD13-Cul3-RhoA Pathway in Psychiatric Diseases

Guan Ning Lin, Roser Corominas, Irma Lemmens, Xinping Yang, Jan Tavernier, David E. Hill, Marc Vidal, Jonathan Sebat, Lilia M. Iakoucheva

http://dx.doi.org/10.1016/j.neuron.2015.01.010

Kinome-wide Decoding of Network-Attacking Mutations Rewiring Cancer Signaling: Cell

July 2, 2016

Kinome-wide Decoding of #Network-Attacking Mutations Rewiring Cancer http://www.cell.com/cell/abstract/S0092-8674(15)01108-3 Mapping NAMs onto well-known signaling pathways

Computer Vision and Computer Hallucinations » American Scientist

October 21, 2015

Computer Vision &…Hallucinations http://www.americanscientist.org/issues/id.16420,y.2015,no.5,content.true,page.1,css.print/issue.aspx Instead of training a neural network, train an image to fit it. Dreams emerge
QT:{{”
“The algorithm behind the deep dream images was devised by Alexander Mordvintsev, a Google software engineer in Zurich. In the blog posts he was joined by two coauthors: Mike Tyka, a biochemist, artist, and Google software engineer in Seattle; and Christopher Olah of Toronto, a software engineering intern at Google.

Here’s a recipe for deep dreaming. Start by choosing a source image and a target layer within the neural network. Present the image to the network’s input layer, and allow the recognition process to proceed normally until it reaches the target layer. Then, starting at the target layer, apply the backpropagation algorithm that corrects errors during the training process. However, instead of adjusting connection weights to improve the accuracy of the network’s response, adjust the source image to increase the amplitude of the response in the target layer. This forward-backward cycle is then repeated a number of times, and at intervals the image is resampled to increase the number of pixels.”
“}}
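The quoted recipe maps almost line by line onto a short gradient-ascent loop. Below is a hedged PyTorch sketch (the original Google code was not PyTorch); the torchvision VGG16, the target layer index, the step size, and the resampling factor are all arbitrary assumptions for illustration:

```python
# Deep-dream-style sketch: forward to a target layer, then ascend the
# gradient of that layer's activation with respect to the IMAGE, not
# the weights. Requires a recent torchvision for the weights enum.
import torch
import torch.nn.functional as F
import torchvision.models as models

model = models.vgg16(weights=models.VGG16_Weights.DEFAULT).features.eval()
target_layer = 20                       # index into vgg16.features (assumption)

def dream_step(img: torch.Tensor, lr: float = 0.05) -> torch.Tensor:
    img = img.clone().detach().requires_grad_(True)
    x = img
    for i, layer in enumerate(model):   # forward only up to the target layer
        x = layer(x)
        if i == target_layer:
            break
    loss = x.norm()                     # "amplitude" of the target activation
    loss.backward()                     # backprop to the image, not the weights
    with torch.no_grad():
        img += lr * img.grad / (img.grad.abs().mean() + 1e-8)
    return img.detach()

img = torch.rand(1, 3, 224, 224)        # start from noise (or a photo)
for octave in range(3):                 # periodically resample to more pixels
    for _ in range(20):
        img = dream_step(img)
    img = F.interpolate(img, scale_factor=1.4, mode="bilinear",
                        align_corners=False)
```

Each octave repeats the forward-backward cycle and then resamples the image to a larger size, as the recipe describes.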

PLOS Genetics: Statistical Estimation of Correlated Genome Associations to a Quantitative Trait Network

December 28, 2014

Correlated Genome Associations to Quantitative Trait #Network (QTN) http://www.plosgenetics.org/article/info%3Adoi%2F10.1371%2Fjournal.pgen.1000587
Uses fused #lasso for estimation of relationships

Kim & Xing (’09) provide a new method for calculating how genetic markers associate with phenotypes by incorporating phenotype connectivity into the correlation structure between markers and phenotypes. Their model attempts to quantify pleiotropic relationships between different phenotypes and assumes a common genotypic origin for clusters of correlated phenotypes, which the algorithm exploits to reduce the number of significant genetic markers. In particular, Kim and Xing present a method for quantitative trait analysis that combines two novel approaches to inferring the contribution of a [marker/allele/SNP/gene/locus] to a quantitative trait. The first is the organization of traits into a quantitative trait network (QTN). The second is the use of the fused lasso, a variant of multivariate regression that minimizes squared error while penalizing both the magnitude of coefficients and the differences between coefficients of traits linked in the network. Together these two ideas suppress noise (small coefficients for SNPs that make no real contribution) and focus on truly relevant SNPs while accounting for the correlated nature of quantitative traits. Based on two datasets – simulated HapMap data and data from the Severe Asthma Research Program – the authors show marked improvement in accuracy and a reduction of false positives over simpler multivariate regression methods.
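As a rough illustration of the fused-lasso-over-a-QTN idea, here is a hedged sketch using cvxpy with toy data (not Kim & Xing's implementation): each SNP's coefficients are simultaneously sparsified by an L1 penalty and fused across traits that are connected in the trait network:

```python
# Toy graph-guided fused-lasso-style objective: multivariate regression
# of k correlated traits on p SNPs, with sparsity on coefficients plus
# fusion of each SNP's effects across QTN-linked traits. Illustrative
# only; assumes cvxpy is installed.
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(0)
n, p, k = 100, 50, 4                          # samples, SNPs, traits
X = rng.standard_normal((n, p))               # genotype matrix (stand-in)
B_true = np.zeros((p, k))
B_true[:3, :] = 1.0                           # 3 SNPs drive all 4 correlated traits
Y = X @ B_true + 0.5 * rng.standard_normal((n, k))

trait_edges = [(0, 1), (1, 2), (2, 3)]        # toy QTN: a chain of correlated traits

B = cp.Variable((p, k))
lam1, lam2 = 1.0, 1.0
fit = cp.sum_squares(Y - X @ B)               # least-squares data fit
sparsity = cp.sum(cp.abs(B))                  # lasso term: few non-zero effects
fusion = sum(cp.sum(cp.abs(B[:, i] - B[:, j]))        # fuse effects across
             for i, j in trait_edges)                  # QTN-linked traits
cp.Problem(cp.Minimize(fit + lam1 * sparsity + lam2 * fusion)).solve()

print("non-zero SNP effects per trait:",
      (np.abs(B.value) > 1e-3).sum(axis=0))
```

A fuller version would weight each fusion term by the strength of the corresponding trait-trait correlation; the unweighted chain above is only meant to convey the shape of the objective.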