Disentangling Machine Learning Theory with Cross-Validation
Does anyone see a link between machine learning's repeated epochs of training and the concept of cross-validation in linear modeling theory?
This article illustrates what I perceive to be confusion about whether cross-validation remains valid when combined with Bayesian optimization:
https://piotrekga.github.io/Pruned-Cross-Validation/
I am starting to believe cross-validation is actually a slower, less effective approximation of Bayesian inference. That opinion is informed by this Biometrika article from earlier this year (although I do not fully agree with its theoretical framework of coherence and would prefer a more Jaynesian approach, but unfortunately Jaynes has been dead for more than 20 years):
https://academic.oup.com/biomet/article/107/2/489/5715611
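To make the contrast in the question concrete: in k-fold cross-validation the data are split once into k folds and each fold serves exactly once as the held-out validation set, whereas repeated epochs of neural-network training pass over the *same* training set again and again. A minimal sketch of the former (pure standard library, using a trivial mean predictor purely for illustration):

```python
import random

def k_fold_cv(data, k=5, seed=0):
    """Estimate out-of-sample squared error of a mean predictor via k-fold CV.

    Each fold is held out exactly once; the model is refit on the remaining
    k-1 folds each time. This repeated train/validate splitting is distinct
    from repeated epochs, which reuse one fixed training set every pass.
    """
    rng = random.Random(seed)
    shuffled = data[:]
    rng.shuffle(shuffled)
    folds = [shuffled[i::k] for i in range(k)]  # k roughly equal folds
    errors = []
    for i in range(k):
        train = [x for j, fold in enumerate(folds) if j != i for x in fold]
        val = folds[i]
        prediction = sum(train) / len(train)  # "fit": the training mean
        errors.append(sum((x - prediction) ** 2 for x in val) / len(val))
    return sum(errors) / k  # average validation error across folds

# Illustrative data: deterministic samples scattered around 3.0
sample = [3.0 + 0.1 * ((i % 7) - 3) for i in range(100)]
print(k_fold_cv(sample))
```

For the mean predictor, the CV estimate lands near the sample variance, as expected; the point of the sketch is only the fold structure, not the model.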
2 comments
See here for a failed attempt to bridge this gap on Cross Validated (Stack Exchange): https://stats.stackexchange.com/questions/494109/why-not-use-large-n-n0-75-validation-sets-in-machine-learning-trainin — seth.wagenman about 2 months ago
Trying again on the Data Science Stack Exchange site: https://datascience.stackexchange.com/q/87266/93564 — seth.wagenman 29 days ago