Disentangling Machine Learning Theory with Cross-Validation
Does anyone see a link between the repeated training epochs used in machine learning and the concept of cross-validation from classical linear-model theory?
This article illustrates what I perceive to be confusion about the validity of cross-validation when combined with Bayesian optimization:
I am starting to believe that cross-validation is actually a slower, less effective approximation of Bayesian inference. That opinion is informed by this Biometrika article from earlier this year (although I do not fully agree with its theoretical framework of coherence and prefer a more Jaynesian approach, Jaynes himself has unfortunately been dead for more than 20 years):
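To make the comparison concrete, here is a minimal sketch (my own illustration, not taken from either article) contrasting the two model-selection criteria on a toy polynomial regression problem: average held-out error from k-fold cross-validation versus the exact log marginal likelihood (evidence) of a Bayesian linear model with an assumed Gaussian prior `w ~ N(0, alpha⁻¹I)` and known noise variance. The data, prior precision `alpha`, and noise level are all invented for the example.

```python
# Sketch: k-fold cross-validation vs. Bayesian evidence for polynomial degree
# selection. All settings (data, alpha, sigma2) are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data whose true signal is quadratic
n = 60
x = rng.uniform(-1, 1, n)
y = 1.0 - 2.0 * x + 3.0 * x**2 + rng.normal(0, 0.3, n)

def design(x, degree):
    """Polynomial design matrix [1, x, x^2, ..., x^degree]."""
    return np.vander(x, degree + 1, increasing=True)

def kfold_cv_mse(x, y, degree, k=5):
    """Average held-out MSE from ordinary least squares across k folds."""
    idx = np.arange(len(x))
    errs = []
    for fold in np.array_split(idx, k):
        train = np.setdiff1d(idx, fold)
        w, *_ = np.linalg.lstsq(design(x[train], degree), y[train], rcond=None)
        errs.append(np.mean((y[fold] - design(x[fold], degree) @ w) ** 2))
    return float(np.mean(errs))

def log_evidence(x, y, degree, sigma2=0.3**2, alpha=1.0):
    """Exact log marginal likelihood of Bayesian linear regression:
    y ~ N(0, sigma2*I + alpha^-1 * X X^T) after integrating out w."""
    X = design(x, degree)
    C = sigma2 * np.eye(len(y)) + (1.0 / alpha) * X @ X.T
    _, logdet = np.linalg.slogdet(C)
    quad = y @ np.linalg.solve(C, y)
    return float(-0.5 * (len(y) * np.log(2 * np.pi) + logdet + quad))

for d in (1, 2, 6):
    print(f"degree {d}: CV MSE = {kfold_cv_mse(x, y, d):.4f}, "
          f"log evidence = {log_evidence(x, y, d):.1f}")
```

Both criteria prefer the quadratic model over the underfit linear one; the evidence additionally penalizes the overparameterized degree-6 model automatically, without needing held-out data, which is one way of reading the claim that cross-validation approximates what marginalization does directly.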