Learning to Represent the Evolution of Dynamic Graphs with Recurrent Models

Jun 15, 2019

About

Graph-structured representations are widely used as a natural and powerful way to encode information such as relations between objects or entities, interactions between online users (e.g., in social networks), 3D meshes in computer graphics, multi-agent environments, and molecular structures, to name a few. Learning and reasoning with graph-structured representations is gaining increasing interest in both academia and industry, due to its fundamental advantages over more traditional unstructured methods in supporting interpretability, causality, and transferability. Recently, there has been a surge of new techniques in the context of deep learning, such as graph neural networks, for learning graph representations and performing reasoning and prediction, and these have achieved impressive progress. However, there is still a long way to go toward satisfactory results in long-range multi-step reasoning, scalable learning with very large graphs, and flexible modeling of graphs in combination with other dimensions such as temporal variation and other modalities such as language and vision. New advances in theoretical foundations, models, and algorithms, as well as empirical discoveries and applications, are therefore all highly desirable.

The aims of this workshop are to bring together researchers to dive deeply into some of the most promising methods under active exploration today, discuss how we can design new and better benchmarks, identify impactful application domains, encourage discussion, and foster collaboration. The workshop will feature speakers, panelists, and poster presenters from machine perception, natural language processing, multi-agent behavior and communication, meta-learning, planning, and reinforcement learning, covering approaches that include (but are not limited to):

- Deep learning methods on graphs/manifolds/relational data (e.g., graph neural networks)
- Deep generative models of graphs (e.g., for drug design)
- Unsupervised graph/manifold/relational embedding methods (e.g., hyperbolic embeddings)
- Optimization methods for graphs/manifolds/relational data
- Relational or object-level reasoning in machine perception
- Relational/structured inductive biases for reinforcement learning, modeling multi-agent behavior and communication
- Neural-symbolic integration
- Theoretical analysis of capacity/generalization of deep learning models for graphs/manifolds/relational data
- Benchmark datasets and evaluation metrics
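As background for the talk's topic (representing the evolution of dynamic graphs with recurrent models), the sketch below illustrates one common pattern: apply a graph neural network to each snapshot of an evolving graph, then carry node states forward in time with a recurrent unit. This is only an illustrative sketch, not the speakers' method; it assumes PyTorch, a fixed node set across snapshots, and hypothetical class names (GraphConv, DynamicGraphRNN).

```python
# Illustrative sketch only: "GNN per snapshot + GRU over time" for dynamic graphs.
# Not the speakers' method; class/variable names are hypothetical.
import torch
import torch.nn as nn


class GraphConv(nn.Module):
    """One mean-aggregation message-passing layer over a dense adjacency matrix."""

    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.linear = nn.Linear(in_dim, out_dim)

    def forward(self, adj, x):
        # adj: (N, N) adjacency with self-loops; x: (N, in_dim) node features
        deg = adj.sum(dim=1, keepdim=True).clamp(min=1.0)
        return torch.relu(self.linear(adj @ x / deg))


class DynamicGraphRNN(nn.Module):
    """Encode a sequence of graph snapshots: spatial GNN, then a GRU over time."""

    def __init__(self, in_dim, hidden_dim):
        super().__init__()
        self.gnn = GraphConv(in_dim, hidden_dim)
        self.gru = nn.GRUCell(hidden_dim, hidden_dim)

    def forward(self, snapshots):
        # snapshots: list of (adj, x) pairs, one per time step; node set assumed fixed
        h = None
        for adj, x in snapshots:
            z = self.gnn(adj, x)   # per-node spatial embedding for this snapshot
            h = self.gru(z, h)     # per-node temporal state carried across snapshots
        return h                   # (N, hidden_dim) final node representations


# Toy usage: 3 snapshots of a 5-node graph with 8-dimensional node features.
torch.manual_seed(0)
snaps = [(torch.eye(5) + torch.bernoulli(torch.full((5, 5), 0.3)),
          torch.randn(5, 8)) for _ in range(3)]
model = DynamicGraphRNN(in_dim=8, hidden_dim=16)
print(model(snaps).shape)  # torch.Size([5, 16])
```

The final node states can then feed a downstream head, e.g., for link prediction on the next snapshot or node classification; real systems vary in how they handle appearing/disappearing nodes, which this toy example ignores.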

About ICML 2019

The International Conference on Machine Learning (ICML) is the premier gathering of professionals dedicated to the advancement of the branch of artificial intelligence known as machine learning. ICML is globally renowned for presenting and publishing cutting-edge research on all aspects of machine learning used in closely related areas like artificial intelligence, statistics and data science, as well as important application areas such as machine vision, computational biology, speech recognition, and robotics. ICML is one of the fastest growing artificial intelligence conferences in the world. Participants at ICML span a wide range of backgrounds, from academic and industrial researchers, to entrepreneurs and engineers, to graduate students and postdocs.
