One of the foundations of mathematics is the Central Limit Theorem. It describes the conditions under which repeated estimates converge on a correct answer, and it requires a measurement of the error (which may itself be approximate).
A "distance" from the correct solution is necessary. Without a metric for the distance, it is impossible to converge.
Computer "science", as taught by leading universities, assumes axiomatically
that the distance between the solutions taught and a correct and useful solution
As the joke goes, the difference between theory and practice is zero in theory....
The teachers assume that in practice students will instinctively know which mathematics to apply to a problem, even without any way of measuring errors.
This would be laughable, except that the programs designed using this assumption run life-support, transportation, defense, and banking systems.
A colloquial statement would be: if you can't describe a correct solution, you have no idea if your current trial approaches correctness. If you cannot measure the difference between the result of your current trial and a correct solution, you cannot iteratively improve your current trial.
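The point above can be sketched in code. This is a minimal illustration, not drawn from the original text: it uses bisection to approximate the square root of 2, with |x·x − 2| as the (hypothetical) measure of distance from a correct solution. The error function is what makes iterative improvement possible; delete it and the loop has no way to prefer one trial over another, or to know when to stop.

```python
def error(x: float) -> float:
    """Distance of the trial x from a correct solution of x*x == 2."""
    return abs(x * x - 2.0)

def improve(lo: float, hi: float, tol: float = 1e-12) -> float:
    """Bisection: repeatedly halve the interval, keeping the half
    that still brackets the solution, until the error metric says
    the midpoint is close enough."""
    while True:
        mid = (lo + hi) / 2.0
        if error(mid) < tol:
            return mid
        if mid * mid < 2.0:
            lo = mid   # solution lies above the midpoint
        else:
            hi = mid   # solution lies below the midpoint

root = improve(1.0, 2.0)
print(root)  # agrees with sqrt(2) = 1.41421356237... to well under 1e-9
```

The specific metric and algorithm are interchangeable; what the argument requires is only that *some* computable error measure exists, so each trial can be compared against the last.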
By (lengthy) devolution, the most recent language and approach a graduate has been exposed to is usually the solution offered for any problem. This is rarely correct and often disastrously wrong.
©2009 Geoff Steckel All Rights Reserved