Deep Learning has emerged as the most successful field of artificial intelligence, with overwhelming successes on industrial speech, language, and vision benchmarks. I invented LSTM recurrent neural networks, which evolved into a key technology in AI fields such as speech recognition, language modeling, and text analysis. We use LSTM for natural language processing in collaboration with companies like Zalando and Bayer, e.g. to analyze fashion blogs or Twitter news related to health. In the AUDI Deep Learning Center and in collaboration with NVIDIA, we apply Deep Learning to advance autonomous driving. With Deep Learning we won the NIH Tox21 challenge, predicting the biological effects of drug candidates from their chemical structure and from high-content imaging.

In current research, we analyze the convergence properties of generative adversarial networks (GANs) using stochastic approximation (cf. TTUR). Furthermore, we investigate self-normalizing networks, whose activations automatically converge to zero mean and unit variance, their optimal learning conditions (cf. SELUs). Most recently, we developed a new reinforcement learning method that outperforms Monte Carlo Tree Search and other RL methods on delayed-reward problems. This new method has the potential to initiate a paradigm shift in reinforcement learning.
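The self-normalizing property mentioned above can be sketched in a few lines. Below is a minimal illustration of the SELU activation, using the fixed constants α ≈ 1.6733 and λ ≈ 1.0507 from its definition; the toy forward propagation is only a demonstration of the effect, not our experimental setup, and the layer width and initialization scale are assumptions chosen for the sketch:

```python
import numpy as np

# Fixed constants from the SELU definition.
ALPHA = 1.6732632423543772
LAMBDA = 1.0507009873554805

def selu(x):
    """SELU activation: lambda * x for x > 0, lambda * alpha * (exp(x) - 1) otherwise."""
    x = np.asarray(x, dtype=float)
    return LAMBDA * np.where(x > 0, x, ALPHA * (np.exp(x) - 1.0))

# Toy demonstration: with variance-preserving weight initialization
# (weights ~ N(0, 1/n)), repeatedly applying linear layers followed by SELU
# drives the activations toward zero mean and unit variance.
n = 500                                   # assumed layer width for the sketch
rng = np.random.default_rng(0)
a = rng.normal(loc=0.5, scale=2.0, size=n)  # deliberately off-scale input
for _ in range(20):
    w = rng.normal(scale=np.sqrt(1.0 / n), size=(n, n))
    a = selu(w @ a)
# After a few layers, a.mean() is near 0 and a.std() is near 1.
```

The point of the design is that the fixed point (zero mean, unit variance) is attracting, so deep networks stay in a well-conditioned regime without explicit normalization layers.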