Mar 28, 2022
Nonlinear embeddings based on neighborhood graphs (such as t-SNE) are popular dimensionality reduction methods that can effectively visualize high-dimensional data. However, they are typically non-parametric and cannot be directly applied to out-of-sample points. This problem is usually addressed by learning a parametric mapping (e.g. a neural net) that projects inputs onto the low-dimensional manifold. Although a number of methods exist to train various mappings, only a few of them consider decision trees, which have an important practical property: the mapping can be interpreted while remaining a nonlinear and accurate model. In this paper, we formulate the training of nonlinear embeddings as a constrained optimization problem in which the low-dimensional projections must be produced by a decision tree. We show that a solution to this problem can be obtained by applying a quadratic penalty method, which yields the proposed alternating optimization algorithm. As a by-product of the algorithm, we introduce the notion of controlled interpretability to manage the trade-off between model accuracy and the level of interpretability.
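The abstract's quadratic-penalty scheme can be sketched roughly as follows. This is a hypothetical illustration, not the authors' implementation: it alternates a Z-step (pulling the free embedding Z toward the tree's predictions T(X), with the true neighborhood-embedding objective replaced here by a simple proximal term for brevity) and a T-step (refitting the tree on (X, Z)), while increasing the penalty weight mu so the constraint Z = T(X) tightens. The variable names (`Z`, `mu`, `tree`) and the schedule are assumptions.

```python
import numpy as np
from sklearn.datasets import load_iris
from sklearn.manifold import TSNE
from sklearn.tree import DecisionTreeRegressor

X = load_iris().data

# Free (non-parametric) embedding to initialize Z, e.g. t-SNE.
Z = TSNE(n_components=2, perplexity=30, random_state=0).fit_transform(X)

# Tree depth is where the accuracy/interpretability trade-off is controlled:
# a shallower tree is easier to read, a deeper one tracks Z more closely.
tree = DecisionTreeRegressor(max_depth=4, random_state=0)

mu = 0.1
for _ in range(10):
    # T-step: fit the decision tree to the current embedding targets.
    tree.fit(X, Z)
    # Z-step: closed form for the penalty term mu * ||Z - T(X)||^2
    # combined with a proximal term keeping Z near its previous value
    # (standing in for the full embedding objective).
    Z = (Z + mu * tree.predict(X)) / (1.0 + mu)
    # Increase the penalty so Z converges toward the tree's output.
    mu *= 2.0

# The tree is now a parametric mapping usable on out-of-sample points.
Z_out = tree.predict(X)
```

Because the final mapping is the tree itself, projecting new points is just `tree.predict(X_new)`, and the learned splits can be inspected directly.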
AISTATS is an interdisciplinary gathering of researchers at the intersection of computer science, artificial intelligence, machine learning, statistics, and related areas. Since its inception in 1985, the primary goal of AISTATS has been to broaden research in these fields by promoting the exchange of ideas among them. We encourage the submission of all papers which are in keeping with this objective at AISTATS.