Dec 6, 2021
We consider the setting of vector-valued non-linear dynamical systems X_{t+1} = ϕ(A^* X_t) + η_t, where η_t is unbiased noise and ϕ : ℝ → ℝ is a known link function, applied coordinate-wise, that satisfies a certain expansivity property. The goal is to learn A^* from a single trajectory X_1, ..., X_T of dependent or correlated samples. While the problem is well-studied in the linear case, where ϕ is the identity, with optimal error rates even for non-mixing systems, existing results in the non-linear case hold only for mixing systems.

In this work, we improve existing results for learning non-linear systems in a number of ways: a) we provide the first offline algorithm that can learn non-linear dynamical systems without the mixing assumption, b) we significantly improve upon the sample complexity of existing results for mixing systems, c) in the much harder one-pass, streaming setting we study an SGD with Reverse Experience Replay (SGD-RER) method and demonstrate that, for mixing systems, it achieves the same sample complexity as our offline algorithm, d) we justify the expansivity assumption by showing that for the popular ReLU link function (a non-expansive but easy-to-learn link function with i.i.d. samples) any method would require exponentially many samples, with respect to the dimension of X_t, from the dynamical system. We validate our results via simulations and demonstrate that a naive application of SGD can be highly sub-optimal. Indeed, our work demonstrates that for correlated data, specialized methods designed for the dependency structure in data can significantly outperform standard SGD-based methods.
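To make the setup concrete, here is a minimal Python sketch (not the authors' implementation): it simulates a single trajectory of X_{t+1} = ϕ(A^* X_t) + η_t with an illustrative monotone link, then estimates A^* with a GLM-style SGD that replays each buffer of the stream in reverse order, skipping a small gap between buffers, in the spirit of SGD-RER. The specific link function, buffer size, gap, and step size below are hypothetical choices for illustration only.

```python
import numpy as np

rng = np.random.default_rng(0)
d, T = 5, 20_000

# Ground-truth matrix, scaled so the dynamics stay stable (assumption for this sketch).
A_star = 0.3 * rng.standard_normal((d, d)) / np.sqrt(d)

# Illustrative known link function, applied coordinate-wise (not the paper's exact choice).
phi = lambda z: z + 0.5 * np.tanh(z)

# Generate a single trajectory X_1, ..., X_T from the dynamical system.
X = np.zeros((T, d))
for t in range(T - 1):
    X[t + 1] = phi(A_star @ X[t]) + 0.1 * rng.standard_normal(d)

# SGD with Reverse Experience Replay, sketched: split the stream into buffers,
# run SGD backwards within each buffer, and leave a gap between consecutive
# buffers to reduce the coupling between updates.
A_hat = np.zeros((d, d))
B, gap, lr = 100, 10, 0.05  # hypothetical hyperparameters
for start in range(0, T - 1 - B - gap, B + gap):
    for t in range(start + B - 1, start - 1, -1):  # reverse order inside the buffer
        pred = phi(A_hat @ X[t])
        grad = np.outer(pred - X[t + 1], X[t])     # GLM-tron-style gradient surrogate
        A_hat -= lr * grad

print("relative estimation error:", np.linalg.norm(A_hat - A_star) / np.linalg.norm(A_star))
```

The reverse-order pass over each buffer is the key difference from a naive streaming SGD, which processes the same samples in forward order and, as the abstract notes, can be highly sub-optimal on correlated data.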
Neural Information Processing Systems (NeurIPS) is a multi-track machine learning and computational neuroscience conference that includes invited talks, demonstrations, symposia and oral and poster presentations of refereed papers. Following the conference, there are workshops which provide a less formal setting.
Presentations on a similar topic, category, or speaker
Kibeom Kim, …
Jiaqi Ma, …
Dan Garber, …
Qixian Zhong, …