Preserving Validity in Adaptive Data Analysis http://ibmresearchnews.blogspot.com/2015/08/preserving-validity-in-adaptive-data_6.html Using differential #privacy for correct #stats even w/ test-set reuse
“A common next step would be to use least-squares linear regression to check whether a simple linear combination of the three strongly correlated foods can predict the grade. It turns out that a little combination goes a long way: we discover that a linear combination of the three selected foods can explain a significant fraction of variance in the grade (plotted below). The regression analysis also reports that the p-value of this result is 0.00009, meaning that the probability of this happening purely by chance is less than 1 in 10,000.
Recall that no relationship exists in the true data distribution, so this discovery is clearly false. This spurious effect is known to experts as Freedman’s paradox. It arises because the variables (foods) used in the regression were selected using the same data.
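The effect is easy to reproduce. The sketch below (illustrative parameters: 100 participants, 50 candidate "foods", top 3 selected) draws foods and grades as pure independent noise, selects the foods most correlated with the grade, and then regresses the grade on those selected foods. The regression still reports a noticeably nonzero R² even though no true relationship exists:

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, k = 100, 50, 3  # participants, candidate foods, foods to select

# No true relationship: foods and grade are independent noise.
X = rng.standard_normal((n, d))
y = rng.standard_normal(n)

# Step 1: pick the k foods most correlated with the grade (uses the data!).
corr = np.abs(X.T @ y) / n
top = np.argsort(corr)[-k:]

# Step 2: least-squares regression of the grade on the selected foods.
Xs = np.column_stack([np.ones(n), X[:, top]])
beta, _, _, _ = np.linalg.lstsq(Xs, y, rcond=None)
resid = y - Xs @ beta
r2 = 1.0 - (resid @ resid) / (y @ y - n * y.mean() ** 2)

print(f"R^2 of grade on selected foods: {r2:.3f}")  # spuriously large
```

Because the selection step already looked at the grades, the nominal p-value from the second step is meaningless; that is Freedman's paradox in miniature.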
We found that the challenges of adaptivity can be addressed using techniques developed for privacy-preserving data analysis. These techniques rely on the notion of differential privacy, which guarantees that the data analysis is not too sensitive to the data of any single individual. We rigorously demonstrated that ensuring differential privacy of an analysis also guarantees that the findings will be statistically valid. We then also developed additional approaches to the problem based on a new way to measure how much information an analysis reveals about a dataset.
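To make "not too sensitive to any single individual" concrete, here is a minimal sketch of the standard Laplace mechanism (a classic differentially private primitive, not code from the paper): a clipped mean changes by at most (hi − lo)/n when one person's record changes, so adding Laplace noise of that scale divided by ε yields ε-differential privacy.

```python
import numpy as np

def laplace_mean(data, lo, hi, epsilon, rng):
    """Epsilon-differentially private mean via the Laplace mechanism.

    Clipping to [lo, hi] bounds each record's influence on the mean
    by (hi - lo) / n (the sensitivity); Laplace noise of scale
    sensitivity / epsilon then masks any single individual.
    """
    clipped = np.clip(data, lo, hi)
    sensitivity = (hi - lo) / len(clipped)
    return clipped.mean() + rng.laplace(scale=sensitivity / epsilon)

rng = np.random.default_rng(1)
data = rng.uniform(0.0, 1.0, size=1000)
print(laplace_mean(data, 0.0, 1.0, epsilon=0.5, rng=rng))
```

The noise added is tiny relative to the sampling error of the mean itself, which is why privacy can be bought without destroying statistical utility.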
The Thresholdout Algorithm
Using our new approach we designed an algorithm, called Thresholdout, that allows an analyst to reuse the holdout set of data for validating a large number of results, even when those results are produced by an adaptive analysis.”
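A simplified sketch of the Thresholdout idea (parameter choices and the exact noise distribution here are illustrative, not the paper's precise specification): each query is first answered from the training set; only when the training and holdout estimates disagree by more than a noisy threshold does the mechanism reveal a noisy holdout estimate, so information about the holdout set leaks only when overfitting is actually detected.

```python
import numpy as np

def thresholdout(train_vals, holdout_vals, threshold, sigma, rng):
    """Simplified Thresholdout for one statistical query.

    train_vals / holdout_vals are the query's per-example values on the
    training and holdout sets. If the two empirical means agree to
    within a noisy threshold, return the training estimate (no holdout
    information leaks); otherwise return a noisy holdout estimate.
    """
    train_mean = float(np.mean(train_vals))
    holdout_mean = float(np.mean(holdout_vals))
    if abs(train_mean - holdout_mean) < threshold + rng.laplace(scale=sigma):
        return train_mean
    return holdout_mean + rng.laplace(scale=sigma)

rng = np.random.default_rng(0)
# An overfit query: looks accurate on training data, not on the holdout.
train_vals = np.concatenate([np.ones(90), np.zeros(10)])    # mean 0.9
holdout_vals = np.concatenate([np.ones(50), np.zeros(50)])  # mean 0.5
print(thresholdout(train_vals, holdout_vals, threshold=0.1, sigma=0.01, rng=rng))
```

Because answers usually come from the training set, the holdout set's "budget" is spent only on the queries that overfit, which is what lets it be reused across many adaptively chosen queries.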