Posts Tagged ‘teaching’

Naive Bayes Classification explained with Python code

May 15, 2017

Naive #Bayes Classification explained with Python code Nice worked example; good for #teaching HT @KirkDBorne

Learning and earning: Lifelong learning is becoming an economic imperative | The Economist

April 8, 2017

Lifelong Learning Future for colleges? Microcredentials & Nanodegrees inspired by albums unbundled into iTunes songs

interesting view of where short “workshops” fit relative to the traditional course

Scott DeRue, the dean of the Ross School of Business at the University of Michigan, says the unbundling of educational content into smaller components reminds him of another industry: music. Songs used to be bundled into albums before being disaggregated by iTunes and streaming services such as Spotify. In Mr DeRue’s analogy, the degree is the album, the course content that is freely available on MOOCs is the free streaming radio service, and a “microcredential” like the nanodegree or the specialisation is paid-for iTunes.

How should universities respond to that kind of disruption? For his answer, Mr DeRue again draws on the lessons of the music industry. Faced with the disruption caused by the internet, it turned to live concerts, which provided a premium experience that cannot be replicated online. The on-campus degree also needs to mark itself out as a premium experience, he says.

Harvard is putting its photography classes online for free

January 15, 2017

Sunday Puzzler: Rock the Boat

August 4, 2016

Puzzler: Rock the Boat Good illustration of Archimedes’ principle & using extreme cases for intuition, ie dense rock
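The extreme-case intuition can be checked with a few lines of arithmetic. This is a sketch of the classic setup (my assumed numbers, not taken from the linked puzzler): a dense rock sits in a floating boat and is then thrown into the water. Afloat, it displaces water equal to its weight; submerged, only its volume.

```python
# Archimedes sketch: compare water displaced by a rock in the boat vs overboard.
RHO_WATER = 1000.0  # kg/m^3

def displaced_volume(mass_kg, density_kg_m3, in_boat):
    """Volume of water (m^3) the rock displaces in each case."""
    if in_boat:
        # Floating cargo displaces its weight's worth of water.
        return mass_kg / RHO_WATER
    # A submerged rock displaces only its own volume.
    return mass_kg / density_kg_m3

rock_mass, rock_density = 10.0, 3000.0  # a dense rock: 3x the density of water
v_in_boat = displaced_volume(rock_mass, rock_density, in_boat=True)
v_in_water = displaced_volume(rock_mass, rock_density, in_boat=False)
print(v_in_boat, v_in_water)  # in-boat displacement is larger, so the level falls
```

The denser the rock (the extreme case), the bigger the gap between the two volumes, which is exactly the intuition the puzzler is after.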

Visualization of Statistical Power Analysis

July 28, 2016

Visualization of Power Analysis Useful sliders giving one a feel of the #statistics
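You can get a similar feel numerically without the sliders. This is an assumed example (not the linked visualization): estimating the power of a two-sided z-test by simulation, so you can vary sample size and effect size yourself.

```python
import math
import random

def simulated_power(n, effect, trials=4000, seed=1):
    """Monte Carlo power of a two-sided z-test at alpha = 0.05 (sigma = 1)."""
    rng = random.Random(seed)
    z_crit = 1.96  # two-sided 5% critical value of the standard normal
    rejections = 0
    for _ in range(trials):
        xs = [rng.gauss(effect, 1.0) for _ in range(n)]
        z = (sum(xs) / n) * math.sqrt(n)  # (mean - 0) / (sigma / sqrt(n))
        if abs(z) > z_crit:
            rejections += 1
    return rejections / trials

p_small = simulated_power(20, 0.5)
p_large = simulated_power(100, 0.5)
print(p_small, p_large)  # larger n raises power for the same effect size
```

Moving `n`, `effect`, or the critical value here is the code equivalent of moving the sliders on the linked page.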

How does multiple testing correction work?

June 13, 2016

How does multiple-testing correction work Intuition for teaching: genome-wide error rate on a single gene v family
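The single-gene-versus-genome intuition is easy to make concrete (assumed numbers below): at a per-test threshold of alpha = 0.05, one test keeps the false-positive rate at 5%, but across many independent tests the family-wise error rate is 1 − (1 − alpha)^m, and the Bonferroni correction simply tests each hypothesis at alpha / m.

```python
def fwer(alpha, m):
    """P(at least one false positive) across m independent tests."""
    return 1 - (1 - alpha) ** m

alpha, m = 0.05, 20000  # e.g. one test per gene, genome-wide
print(fwer(alpha, 1))      # single gene: 5%
print(fwer(alpha, m))      # genome-wide: essentially certain
print(fwer(alpha / m, m))  # Bonferroni brings it back to ~5%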

The role of regulatory variation in complex traits and disease : Nature Reviews Genetics : Nature Publishing Group

June 12, 2016

Reg. variation in cplx traits by @LeonidKruglyak nice teaching figure for #eQTLs, showing how mostly cis + hotspots

Know it all: 10 secrets of successful learning – life – 25 March 2015 – New Scientist

April 13, 2015

Know it all: 10 secrets of successful learning Including quizzes, practicing to teach, buddying up & even video games


February 8, 2015

Useful helpers for #teaching #bioinformatics: Biostars forum & Rosalind assignment evaluator

Why Most Published Research Findings are false

February 7, 2015

Why Most Published Research Findings are False Evaluating 2×2 confusion matrix, effects of bias & multiple studies

PLoS Medicine | August 2005 | Volume 2 | Issue 8 | e124

Published research findings are sometimes refuted by subsequent evidence, with ensuing confusion and disappointment. Refutation and controversy is seen across the range of research designs, from clinical trials and traditional epidemiological studies [1–3] to the most modern molecular research [4,5]. There is increasing concern that in modern research, false findings may be the majority or even the vast majority of published research claims [6–8]. However, this should not be surprising. It can be proven that most claimed research findings are false. Here I will examine the key

Research findings are defined here as any relationship reaching formal statistical significance, e.g., effective interventions, informative predictors, risk factors, or associations. “Negative” research is also very useful. “Negative” is actually a misnomer, and the misinterpretation is widespread. However, here we will target relationships that investigators claim exist, rather than null findings.

As has been shown previously, the probability that a research finding is indeed true depends on the prior probability of it being true (before doing the study), the statistical power of the study, and the level of statistical significance [10,11]. Consider a 2 × 2 table in which research findings are compared against the gold standard of true relationships in a scientific field. In a research field both true and false hypotheses can be made about the presence of relationships. Let R be the ratio of the number of “true relationships” to “no relationships” among those tested in the field. R is characteristic of the field and can vary a lot depending on whether the field targets highly likely relationships or searches for only one or a few true relationships among thousands and millions of hypotheses that may be postulated.

Let us also consider, for computational simplicity, circumscribed fields where either there is only one true relationship (among many that can be hypothesized) or the power is similar to find any of the several existing true relationships. The pre-study probability of a relationship being true is R/(R + 1). The probability of a study finding a true relationship reflects the power 1 − β (one minus the Type II error rate). The probability of claiming a relationship when none truly exists reflects the Type I error rate, α. Assuming that c relationships are being probed in the field, the expected values of the 2 × 2 table are given in Table 1. After a research finding has been claimed based on achieving formal statistical significance, the post-study probability that it is true is the positive predictive value, PPV. The PPV is also the complementary probability of what Wacholder et al. have called the false positive report probability [10]. According to the 2 × 2 table, one gets PPV = (1 − β)R/(R − βR + α). A research finding is thus
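The PPV formula from the excerpt is easy to play with directly. The sketch below uses the paper's own symbols (R = prior odds of a true relationship, β = Type II error rate, α = Type I error rate); the specific values of R, α, and β plugged in are my illustrative assumptions, not the paper's.

```python
def ppv(R, alpha=0.05, beta=0.2):
    """Post-study probability a claimed finding is true: (1-b)R / (R - bR + a)."""
    return (1 - beta) * R / (R - beta * R + alpha)

# A field probing long-shot hypotheses (1 true relationship per 1000 tested),
# at the conventional alpha = 0.05 and 80% power:
print(ppv(R=0.001))  # most "significant" findings are false
# A confirmatory field with even prior odds of a true relationship:
print(ppv(R=1.0))    # most findings are true
```

Sliding R downward is what makes hypothesis-free screens (e.g. genome-wide scans) so vulnerable: with the same α and power, the PPV collapses as the prior odds shrink.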