My research focuses on robustness and regularization in adversarial deep learning, specifically robustness to adversarial attacks (adversarial examples) and Generative Adversarial Networks (GANs). Highlights of my research include the recent paper "The Odds Are Odd: A Statistical Test for Detecting Adversarial Examples", published at ICML'19; the preprint "Adversarial Training Generalizes Data-dependent Spectral Norm Regularization", presented at the ICML'19 Workshop on Generalization; and "Stabilizing Training of Generative Adversarial Networks through Regularization", published at NIPS'17. Prior to that, I developed a model of interdependent neural networks, together with an algorithm for localizing the most influential nodes responsible for broadcasting information in the brain.