Inductive bias and adversarial data
- Machine learning has become alchemy (Rahimi at NIPS'17)
- Maybe it's not about having a very rigorous model but about having a better
  empirical approach.
- We're also not so sure what "rigorous" means, or how deep we need to go
  before our approach counts as rigorous.
- No free lunch theorem: there's no such thing as a universal learning
  algorithm that is on average (over all possible inputs) better than random.
- This is a formalization of "problem of induction".
- We can justify inductive reasoning in inductive way -- really?
- But why do we think that all input sequences are equally likely?
- However, for any learning algorithm there's an adversarial example (one that
  would be handled well by another learning algorithm).
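The no-free-lunch claim can be checked numerically on a toy domain: average any fixed learner's off-training-set accuracy over *every* possible labeling of the inputs, and it comes out exactly at chance. This is a hedged sketch; the majority-vote learner here is just one arbitrary choice, and any other deterministic rule gives the same average.

```python
from itertools import product

# Toy domain of 4 inputs: the learner sees labels for the first 2
# and is evaluated on the remaining 2 (off-training-set evaluation).
X_train, X_test = [0, 1], [2, 3]

def learner(train_labels, x):
    """An arbitrary deterministic learner: predict the majority
    training label (ties broken toward 1). The input x is ignored,
    but any rule that uses x averages out the same way."""
    return int(sum(train_labels) * 2 >= len(train_labels))

accs = []
for labels in product([0, 1], repeat=4):  # every possible target function
    train = labels[:2]
    correct = sum(learner(train, x) == labels[x] for x in X_test)
    accs.append(correct / len(X_test))

avg = sum(accs) / len(accs)  # exactly 0.5: chance level
```

For each test point, its label is independent of the training labels when we enumerate all target functions, so any fixed prediction is right exactly half the time; that is the formal content of "no learner beats random on average."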
- We can go to Solomonoff induction, but that's not computable.
- For neural networks we know how to build Putnam's monster: there's an
  algorithm that comes up with adversarial examples using gradient descent.
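The gradient-based construction mentioned above can be sketched in the style of the fast gradient sign method: take the gradient of the loss with respect to the *input* and step in its sign direction. A minimal sketch on a logistic-regression model; the weights here are made up for illustration, not from any trained network.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Hypothetical "trained" linear classifier (weights chosen for illustration).
w = np.array([1.5, -2.0, 0.5])
b = 0.1

def loss(x, y):
    """Logistic loss of the model on a single example (x, y)."""
    p = sigmoid(w @ x + b)
    return -(y * np.log(p) + (1 - y) * np.log(1 - p))

def fgsm(x, y, eps=0.25):
    """One fast-gradient-sign step: perturb x in the direction
    that increases the loss, with per-coordinate budget eps."""
    p = sigmoid(w @ x + b)
    grad_x = (p - y) * w  # d(loss)/dx for logistic regression
    return x + eps * np.sign(grad_x)

x = np.array([1.0, 1.0, 1.0])  # correctly classified as class 1
y = 1.0
x_adv = fgsm(x, y)             # small perturbation, higher loss
```

On this toy model the perturbation flips the prediction from class 1 to class 0 even though each coordinate moves by only 0.25; for deep networks the same recipe works because the loss surface in input space is locally close to linear.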
- The good generalization abilities of neural networks are not well
  understood. Theory says they should not generalize this well. In other
  words, we don't know which inductive assumptions NNs make.
- We must better understand the inductive bias of NNs!
- Q/A
- Capsule networks.
- Relationship to Gödel's incompleteness theorem.
  - Human cognitive biases (change blindness, inattentional blindness) as
    adversarial examples.
  - DNNs have so much memorization capacity that they might just be memorizing
    and not generalizing.
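The memorization worry in the last question can be illustrated without a deep network: any model with more parameters than data points can interpolate even *random* labels, so perfect training accuracy by itself is no evidence of generalization. A hedged sketch using an overparameterized linear model as a stand-in for a DNN:

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 20, 50  # more parameters (d) than examples (n): enough capacity to memorize

X = rng.normal(size=(n, d))
y = rng.integers(0, 2, size=n) * 2 - 1  # random +/-1 labels: no signal at all

# "Training" by least squares: with d > n the minimum-norm solution
# interpolates the random labels exactly.
w = np.linalg.lstsq(X, y.astype(float), rcond=None)[0]
train_acc = np.mean(np.sign(X @ w) == y)  # perfect fit on pure noise

# Fresh random data: nothing transfers, accuracy hovers at chance.
X_new = rng.normal(size=(n, d))
y_new = rng.integers(0, 2, size=n) * 2 - 1
test_acc = np.mean(np.sign(X_new @ w) == y_new)
```

This mirrors the observation (from the randomization experiments on deep nets) that fitting the training set perfectly is compatible with having learned nothing, which is why training accuracy alone can't distinguish memorizing from generalizing.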