OBD vs OBS

In 1995, Nico and I wrote our thesis on the comparison of OBD and OBS for pruning neural networks, under the supervision of Jan Larssen in the Department of Mathematics, Statistics and Computer Science at DTU.

Optimal Brain Surgeon (OBS) was developed by Babak Hassibi, David Stork and Gregory Wolff of the Ricoh California Research Center and published in 1993.

Optimal Brain Damage (OBD) was developed by Yann Le Cun, John Denker and Sara Solla of AT&T Bell Laboratories and published in 1989.

The idea of OBD was to adapt or optimize the size of a neural network by removing “unimportant” weights. Fewer weights mean better generalization and fewer training examples needed, which leads to improved speed of learning and classification.
OBD makes a trade-off between network complexity and training-set error: each weight is assigned a “saliency”, a measure of the effect its removal has on the training error, and the weights with the lowest saliency are pruned.
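The key technical difference is that OBD approximates the Hessian of the training error by its diagonal, whereas OBS uses the full inverse Hessian and also updates the remaining weights after each deletion. As a rough illustration (not code from the thesis), here is a sketch comparing the two saliency measures on a synthetic quadratic error surface; the weights w and the Hessian H below are made up for the example.

```python
import numpy as np

# Toy sketch: near a minimum, the change in training error for a weight
# perturbation dw is approximated by delta_E ~ 1/2 * dw^T H dw,
# where H is the Hessian of the training error.

rng = np.random.default_rng(0)

n = 5                                  # number of weights (illustrative)
w = rng.normal(size=n)                 # "trained" weights (synthetic)
A = rng.normal(size=(n, n))
H = A @ A.T + 0.1 * np.eye(n)          # synthetic positive-definite Hessian

# OBD: assume H is diagonal, saliency s_i = 1/2 * H_ii * w_i^2
obd_saliency = 0.5 * np.diag(H) * w**2

# OBS: use the full inverse Hessian, saliency L_q = w_q^2 / (2 * [H^-1]_qq)
H_inv = np.linalg.inv(H)
obs_saliency = w**2 / (2.0 * np.diag(H_inv))

print("OBD would prune weight", int(np.argmin(obd_saliency)))
print("OBS would prune weight", int(np.argmin(obs_saliency)))

# OBS also adjusts the surviving weights after deleting weight q:
# dw = -(w_q / [H^-1]_qq) * H^-1 e_q
q = int(np.argmin(obs_saliency))
dw = -(w[q] / H_inv[q, q]) * H_inv[:, q]
w_pruned = w + dw
print("weight", q, "after the OBS update:", w_pruned[q])  # ~0, i.e. pruned
```

With a strongly non-diagonal Hessian the two rankings can disagree, which is exactly the situation the OBS authors point to.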

The OBS team argued that their method was “significantly better” than OBD, which they claimed “removed the wrong weights”. We wanted to test this claim.

Babak Hassibi went on to complete his PhD in Electrical Engineering at Stanford University and is now a professor at Caltech, while Yann Le Cun is a professor at NYU and Chief AI Scientist at Facebook.

Yann Le Cun recently (2019) published a book about the progress of AI, Quand la machine apprend (“When the Machine Learns”), which reminded me of OBD. TBC

