In machine learning, double descent is a surprising phenomenon where increasing the number of model parameters causes test performance to get better, then worse, and then better again. It contradicts the classical picture of overfitting, in which test error only keeps getting worse once a model has too many parameters. For a surprisingly wide range of models and datasets, you can keep adding parameters after you’ve gotten over the hump, and performance starts improving again.
Double descent in human learning
from chris-said.io
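Below is a minimal sketch (not from the article) of how the double descent curve can be reproduced numerically: minimum-norm least squares on random Fourier features, sweeping the number of features past the interpolation threshold. The target function, noise level, feature counts, and helper names (`make_data`, `random_features`) are all illustrative assumptions, not anything specified by the post.

```python
# Illustrative sketch of double descent with minimum-norm least squares
# on random Fourier features. All constants here are arbitrary choices.
import numpy as np

rng = np.random.default_rng(0)

def make_data(n, noise=0.1):
    # Noisy samples of a simple 1-D target function (assumed for illustration).
    x = rng.uniform(-1.0, 1.0, size=n)
    y = np.sin(2.0 * np.pi * x) + noise * rng.standard_normal(n)
    return x, y

def random_features(x, freqs, phases):
    # Fixed random Fourier features: cos(w * x + b) for random w, b.
    return np.cos(np.outer(x, freqs) + phases)

x_train, y_train = make_data(n=40)
x_test, y_test = make_data(n=500)

feature_counts = [2, 5, 10, 20, 30, 40, 50, 80, 160, 320, 640]
for p in feature_counts:
    freqs = rng.normal(0.0, 5.0, size=p)
    phases = rng.uniform(0.0, 2.0 * np.pi, size=p)
    Phi_train = random_features(x_train, freqs, phases)
    Phi_test = random_features(x_test, freqs, phases)
    # Minimum-norm least-squares fit; once p exceeds the number of training
    # points, pinv returns the interpolating solution with the smallest norm.
    w = np.linalg.pinv(Phi_train) @ y_train
    test_mse = np.mean((Phi_test @ w - y_test) ** 2)
    print(f"{p:4d} features -> test MSE {test_mse:.3f}")

# Test error typically falls, spikes near p == len(x_train) (the
# interpolation threshold), then falls again as p grows: the double
# descent curve described above.
```

The choice of the minimum-norm solution (via `pinv`) is what makes the second descent visible in this kind of toy setup: past the interpolation threshold, the extra capacity is spent on finding a smaller-norm interpolant rather than a wilder one.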