Evaluating Active Learning with Cost and Memory Awareness

Document Type

Conference Proceeding

Publication Date

2018

Abstract

Active Learning (AL) is a methodology from Machine Learning and Design of Experiments (DOE) in which the quantities of interest are measured sequentially and the corresponding surrogate models are constructed incrementally. AL provides compelling optimizations over static DOE in engineering applications where the cost of individual experiments is significant. It also helps carry out series of computer experiments for parameter sweeps and performance analysis studies. One of the non-trivial tasks in the design of AL systems is the selection of algorithms for cost-efficient exploration of the input spaces of interest: AL needs to balance "exploitation" of experiments with modest costs against careful "exploration" of expensive configurations. Finding this balance in an automatic and general manner is challenging yet desirable in practice.

In this paper, we investigate the application of AL algorithms to Adaptive Mesh Refinement (AMR) performed on a supercomputer. We use AL in conjunction with Gaussian Process Regression for the incremental modeling of cost and memory usage of a series of AMR simulations of a shock-bubble interaction phenomenon. In the studied 5-dimensional input parameter space, which combines physical, numerical, and machine parameters, we allow AL to guide experimentation across hundreds of configurations. We develop and evaluate a novel multi-objective AL experiment selection algorithm that prioritizes cost-efficient exploration of the available configurations while avoiding simulations that violate memory constraints.
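To illustrate the kind of selection loop the abstract describes, the sketch below (not the authors' algorithm, just a generic illustration) fits Gaussian Process surrogates to observed runtime and memory usage, then picks the next configuration that maximizes predictive uncertainty per unit of predicted cost while skipping candidates whose predicted memory usage would exceed a limit. The names run_simulation and MEMORY_LIMIT_GB are hypothetical placeholders.

```python
# Minimal sketch of cost-aware, memory-constrained active learning with GP surrogates.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

MEMORY_LIMIT_GB = 64.0  # hypothetical per-node memory budget

def run_simulation(x):
    """Placeholder for launching one AMR run with parameters x.
    Returns (runtime_seconds, memory_gb)."""
    raise NotImplementedError

def active_learning(candidates, init_idx, budget):
    """candidates: (n, d) array of untried configurations; init_idx: seed runs."""
    tried, runtimes, memories = list(init_idx), [], []
    for i in init_idx:  # seed the surrogates with a few initial runs
        t, m = run_simulation(candidates[i])
        runtimes.append(t)
        memories.append(m)

    cost_gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True)
    mem_gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True)

    for _ in range(budget):
        X = candidates[tried]
        cost_gp.fit(X, np.log(runtimes))  # log scale: costs span orders of magnitude
        mem_gp.fit(X, memories)

        remaining = [i for i in range(len(candidates)) if i not in tried]
        Xc = candidates[remaining]
        cost_mu, cost_sigma = cost_gp.predict(Xc, return_std=True)
        mem_mu, mem_sigma = mem_gp.predict(Xc, return_std=True)

        # Acquisition: predictive uncertainty (exploration) divided by predicted
        # cost (preference for cheap regions); candidates whose predicted memory
        # plus a safety margin exceeds the limit are excluded.
        feasible = (mem_mu + 2.0 * mem_sigma) < MEMORY_LIMIT_GB
        score = np.where(feasible, cost_sigma / np.exp(cost_mu), -np.inf)

        best = remaining[int(np.argmax(score))]
        t, m = run_simulation(candidates[best])
        tried.append(best)
        runtimes.append(t)
        memories.append(m)

    return cost_gp, mem_gp, tried
```

The acquisition score is only one simple way to trade exploration against cost; the paper's multi-objective criterion is not reproduced here.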
