
Parallel Optimization Problem Solving

Many practical applications are modelled as optimization problems, which are NP-hard: reaching a provably optimal solution can require a huge amount of computation. Many research areas and techniques have been developed to solve optimization problems: Integer Programming, Operations Research, Genetic Algorithms, Simulated Annealing, Constraint Logic Programming, etc. In this project, we are investigating how to solve optimization problems efficiently in parallel. We focus on two major areas: Genetic Algorithms and Constraint Logic Programming.

In the Genetic Algorithms theme, we are studying fine-grained and coarse-grained parallel algorithms, and aim to design a new hybrid algorithm that combines both forms of parallelism. We are also studying theoretical issues, such as how local and global selection affect performance, and the relation between population size and speed of convergence. We have implemented a vectorized genetic algorithm and expect to obtain detailed results soon.
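The coarse-grained approach mentioned above is often realized as an "island model": several subpopulations evolve independently and periodically exchange their best individuals. The following is a minimal single-process sketch of that idea, not the project's actual implementation; all names and parameter choices (ring migration, truncation selection, the OneMax fitness) are illustrative assumptions.

```python
import random

def island_model_ga(fitness, n_bits=16, islands=4, pop_size=20,
                    generations=50, migrate_every=10, seed=0):
    """Coarse-grained (island-model) GA sketch: each island evolves its
    own population; every `migrate_every` generations the best individual
    of each island replaces the worst of the next island in a ring."""
    rng = random.Random(seed)
    pops = [[[rng.randint(0, 1) for _ in range(n_bits)]
             for _ in range(pop_size)] for _ in range(islands)]

    def evolve(pop):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[:pop_size // 2]        # truncation selection
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = rng.sample(survivors, 2)
            cut = rng.randrange(1, n_bits)
            child = a[:cut] + b[cut:]          # one-point crossover
            i = rng.randrange(n_bits)
            child[i] ^= 1                      # point mutation
            children.append(child)
        return survivors + children

    for gen in range(1, generations + 1):
        pops = [evolve(p) for p in pops]
        if gen % migrate_every == 0:           # ring migration step
            bests = [max(p, key=fitness) for p in pops]
            for i, p in enumerate(pops):
                p.sort(key=fitness)
                p[0] = bests[(i - 1) % islands][:]
    return max((max(p, key=fitness) for p in pops), key=fitness)

# Toy problem (OneMax): maximize the number of 1 bits in the string.
best = island_model_ga(sum)
print(len(best), sum(best))
```

In a genuinely parallel version each island would run on its own processor, with migration implemented as message passing; a fine-grained variant would instead place one individual per processor and restrict selection to a local neighbourhood.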

In the Constraint Logic Programming theme, we are applying distributed algorithms to finite domain constraint satisfaction problems. Conventionally, optimization problems over finite domain constraints are solved by a global search on a single, global set of data. Our aim is to extend the finite domain constraint solving system so that the data can be kept locally and several distributed constraint solvers (i.e., software agents) can work independently; when a local optimum is reached, the solvers communicate with each other in order to achieve a global optimum. We have previously implemented two parallel constraint logic programming systems and developed some practical applications.


Steve Gregory

Steve Gregory, steve@cs.bris.ac.uk. Last modified on Monday 12 June 2000 at 15:48. © 2000 University of Bristol