
PhD Thesis Proposal

Gregory John Barlow, Carnegie Mellon University
Tuesday, June 2
10:30 am to 12:00 pm
Generalized Density-Estimate Memory for Dynamic Problems

Event Location: Newell Simon Hall 1305

Abstract: Optimization systems traditionally focus on static problems, also known as offline or a priori optimization problems. Many real-world problems may be better modeled as dynamic optimization problems, also known as stochastic, in situ, or online optimization problems. In these types of problems, the fitness landscape of the search space changes over time. While it may be possible to learn policies offline for every potential scenario, doing so may not be tractable or sufficiently robust. Instead, recent work has focused on optimization concurrent with dynamic processes in order to track moving optima.
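As a toy illustration of a fitness landscape that changes over time (a sketch in Python, not code from the proposal; the name dynamic_fitness and the drift_rate parameter are hypothetical), consider a single peak whose location drifts with time, so that a solution tuned to one time step loses fitness as the peak moves:

import math

def dynamic_fitness(x, t, drift_rate=0.1):
    """Toy time-varying fitness: a single peak whose optimum drifts with t."""
    optimum = 5.0 + math.sin(drift_rate * t)   # the peak moves as time passes
    return -(x - optimum) ** 2                 # higher is better; maximum at the peak

x_star = 5.0                           # optimal at t = 0, when the peak sits at 5.0
print(dynamic_fitness(x_star, t=0))    # -0.0: optimal at this time step
print(dynamic_fitness(x_star, t=50))   # about -0.92: the same solution is no longer optimal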


One part of the field that has seen a great deal of work in the last decade is dynamic optimization with evolutionary algorithms (EAs). The population-based search used in most EAs has proven very useful for dynamic problems, since the population helps to ease optimization after changes in the fitness landscape. However, normal EAs encounter several difficulties with dynamic problems; foremost, the convergence of a population on a single peak in the fitness landscape depletes the diversity of the population, making adaptation after a change in the fitness landscape more difficult. One method for countering this problem has been the use of memory to retain good solutions for future use.
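One simple way to see the convergence problem is to track population diversity, for example as the mean pairwise distance between individuals. The sketch below (illustrative only, not from the proposal; the helper name diversity is hypothetical) shows how a converged population has far less diversity to draw on than a spread-out one:

from itertools import combinations

def diversity(population):
    """Mean pairwise Euclidean distance between real-valued individuals."""
    pairs = list(combinations(population, 2))
    if not pairs:
        return 0.0
    dist = lambda a, b: sum((ai - bi) ** 2 for ai, bi in zip(a, b)) ** 0.5
    return sum(dist(a, b) for a, b in pairs) / len(pairs)

print(diversity([(0.0, 0.0), (4.0, 3.0), (8.0, 6.0)]))   # spread-out population: ~6.67
print(diversity([(5.0, 5.0), (5.1, 5.0), (5.0, 5.1)]))   # converged population: ~0.11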


Memory aids dynamic optimization in several ways. By maintaining good solutions from the past, memory may speed up search for similar solutions after a dynamic event. Memory entries also provide additional points to search from after a change, and when the population has converged, may add a great deal of diversity to the population. Memory has been used extensively for dynamic optimization, and while there are many variations, a standard memory model has emerged. In this model, a small portion of the population is reserved for memory. For many problems, this type of memory has helped to improve the performance of dynamic optimization. However, because the memory size is finite, and usually small, it may be difficult to store enough solutions to accurately model the search space over time. Also, for problems where the feasible region of the search space changes with time, memory entries may become infeasible, making the memory less useful.
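A minimal sketch of this standard memory scheme, assuming real-valued solutions and caller-supplied fitness and distance functions (the helper names store and retrieve are hypothetical, not the proposal's API): the current best solution is periodically saved, replacing the most similar stored entry, and when a change is detected the stored entries are reinserted in place of the worst individuals in the population:

MEMORY_SIZE = 5   # the memory is deliberately small relative to the population

def store(memory, best, distance):
    """Save the current best solution; at capacity, replace the most similar entry."""
    if len(memory) < MEMORY_SIZE:
        memory.append(best)
    else:
        closest = min(range(len(memory)), key=lambda i: distance(memory[i], best))
        memory[closest] = best

def retrieve(memory, population, fitness):
    """After a detected change, reinsert memory entries in place of the worst
    individuals; the EA then re-evaluates them under the new landscape."""
    population.sort(key=fitness)   # ascending, so the lowest-fitness individuals come first
    for i, entry in enumerate(memory[:len(population)]):
        population[i] = entry
    return population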


This proposal examines the development of more generalized models of memory for dynamic problems. A density-estimate memory is proposed that efficiently builds rich models of the dynamic search space and uses that information to improve the quality of solutions returned to the search process. By building models within the memory, the search process can continue to interact with a limited number of entries, but each entry is now a model incorporating many individual solutions. The models stored in memory are continuously refined as new solutions are added, and they can be used both to keep the search diverse in a more informed way and to retrieve the most useful solutions from the memory. Also proposed is an extension of memory, through an indirect memory layer, to problems where the feasible region of the search space changes over time, opening the use of memory to many dynamic problems that previously could not exploit information from the past. The indirect memory layer uses environmental information from the problem to create an indirect representation of a solution that can be stored in memory; the stored entry can then be mapped to a feasible solution at a future time. The work proposed here will lead to a generalized memory that can be applied to many dynamic problems.
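A rough sketch of the density-estimate idea, assuming real-valued solution vectors (an illustrative simplification using running cluster means, not the models developed in the proposal): each memory entry is an incrementally refined model summarizing many stored solutions, so a fixed number of entries can describe far more of the search space than the same number of stored individuals:

class DensityEntry:
    """One memory entry: a running summary (mean and count) of a cluster of solutions."""

    def __init__(self, solution):
        self.mean = list(solution)
        self.count = 1

    def update(self, solution):
        """Fold a new solution into the running mean (incremental refinement)."""
        self.count += 1
        for i, x in enumerate(solution):
            self.mean[i] += (x - self.mean[i]) / self.count


class DensityEstimateMemory:
    def __init__(self, max_entries=5, merge_radius=1.0):
        self.max_entries = max_entries
        self.merge_radius = merge_radius
        self.entries = []

    def _distance(self, a, b):
        return sum((ai - bi) ** 2 for ai, bi in zip(a, b)) ** 0.5

    def store(self, solution):
        """Refine the nearest model if it is close (or the memory is full);
        otherwise start a new model for this region of the search space."""
        if self.entries:
            nearest = min(self.entries, key=lambda e: self._distance(e.mean, solution))
            if (self._distance(nearest.mean, solution) <= self.merge_radius
                    or len(self.entries) >= self.max_entries):
                nearest.update(solution)
                return
        self.entries.append(DensityEntry(solution))

    def retrieve(self):
        """Return one representative point per model to reinsert into the search."""
        return [list(e.mean) for e in self.entries]

memory = DensityEstimateMemory()
for solution in [(1.0, 1.0), (1.2, 0.9), (8.0, 8.0)]:
    memory.store(solution)
print(memory.retrieve())   # two models: one near (1.1, 0.95), one at (8.0, 8.0)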

Committee: Stephen Smith, Chair


Katia Sycara


Laura Barbulescu


Juergen Branke, University of Warwick, UK