Daniel Paleka

I like software, math and machine learning. I am starting my PhD at ETH Zurich. See more about me.

Interesting Problems

Here are some research directions I sometimes play with. I have spent between one hour and one month thinking about each of these questions.

How wide should one-hidden-layer networks be to be robust (when interpolating random data)?

The tradeoff between neural network robustness and layer width is well known1. Of course, one should compare networks with the same accuracy; for simplicity, consider networks that interpolate a random dataset. …
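As a minimal numpy sketch of the setup the question suggests: fit a one-hidden-layer ReLU network (training only the first layer, second layer fixed) to random data with random ±1 labels, then estimate robustness with a crude first-order proxy (margin over input-gradient norm). All hyperparameters and the proxy itself are my own illustrative choices, not from the post.

```python
import numpy as np

rng = np.random.default_rng(0)

def train_width(width, n=20, d=10, steps=3000, lr=0.2):
    """Fit a one-hidden-layer ReLU net of the given width to n random
    points in R^d with random +-1 labels, by full-batch gradient
    descent on the squared loss (first-layer weights only; the output
    layer is a fixed random +-1/sqrt(width) vector).
    Returns (final_loss, X, y, W, a). Toy sketch only."""
    X = rng.normal(size=(n, d))
    y = rng.choice([-1.0, 1.0], size=n)
    W = rng.normal(size=(width, d)) / np.sqrt(d)
    a = rng.choice([-1.0, 1.0], size=width) / np.sqrt(width)
    for _ in range(steps):
        pre = X @ W.T                      # preactivations, (n, width)
        err = np.maximum(pre, 0.0) @ a - y # residuals, (n,)
        # gradient of the mean squared loss w.r.t. W (constant 2 folded
        # into the learning rate); (pre > 0) is the ReLU derivative
        grad_W = ((err[:, None] * (pre > 0) * a).T @ X) / n
        W -= lr * grad_W
    loss = np.mean((np.maximum(X @ W.T, 0.0) @ a - y) ** 2)
    return loss, X, y, W, a

def robustness_proxy(X, W, a):
    """Per-example first-order robustness proxy |f(x)| / ||grad_x f(x)||,
    a linearized estimate of the distance to the decision boundary."""
    pre = X @ W.T
    out = np.maximum(pre, 0.0) @ a
    grads = ((pre > 0) * a) @ W            # grad_x f(x) for a ReLU net
    return np.abs(out) / (np.linalg.norm(grads, axis=1) + 1e-12)
```

Comparing `robustness_proxy` across widths (e.g. `train_width(200)` vs. `train_width(5)`) at matched training loss is one way to probe the question empirically.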

April 8, 2021