
The 2nd International Workshop on Parallel Optimization using/for Multi- and Many-core High Performance Computing

10-14 Dec 2020
Virtual/Online event - Barcelona (Spain)
On the road to exascale, multi-core processors and many-core accelerators/coprocessors are increasingly becoming key building blocks of many computing platforms, including laptops, high-performance workstations, clusters, grids, and clouds. At the same time, many hard problems in a wide range of areas, including engineering design, telecommunications, logistics and transportation, biology, and energy, are modeled and tackled using optimization approaches. These approaches include greedy algorithms, exact methods (dynamic programming, Branch-and-X, constraint programming, A*, etc.) and metaheuristics (evolutionary algorithms, particle swarm, ant or bee colonies, simulated annealing, Tabu search, etc.).

In many research works, optimization techniques are used to address high-performance computing (HPC) issues, including HPC hardware design, compiling, scheduling, and auto-tuning. Conversely, optimization problems are becoming increasingly large and complex, forcing the use of parallel computing for their efficient and effective resolution. The design and implementation of parallel optimization methods raise several issues, such as load balancing, data locality and placement, fault tolerance, scalability, and thread divergence.

This workshop seeks to provide an opportunity for researchers to present their original contributions on the joint use of advanced (discrete or continuous, single- or multi-objective, static or dynamic, deterministic or stochastic, hybrid) optimization methods and distributed and/or parallel multi-/many-core computing, and any related issues.

The POMCO workshop topics include (but are not limited to) the following:

- Parallel models (island, master-worker, multi-start, etc.) for optimization methods revisited for multi-core and/or many-core (MMC) environments.
- Parallelization techniques and advanced data structures for exact (e.g. tree-based) optimization methods.
- Parallel mechanisms for hybridization of optimization algorithms on MMC environments.
- Parallel strategies for handling uncertainty, robustness, and the dynamic nature of optimization methods.
- Parallel/distributed large-scale global optimization (e.g. parallel/distributed cooperative coevolution-based algorithms).
- Parallel/distributed multi- and many-objective algorithms (non-dominance-based, decomposition-based, etc.).
- Implementation issues of parallel optimization methods on MMC workstations, MMC clusters, MMC grids/clouds, etc.
- Parallel/distributed implementation using productivity-aware programming languages (MPI+X, Chapel, Python, Julia, etc.).
- Software frameworks for the design and implementation of parallel and/or distributed MMC optimization algorithms.
- Computational/theoretical studies reporting results on solving big optimization problems using MMC computing.
- Energy-aware optimization for/with MMC parallel and/or distributed optimization methods.
- Optimization techniques for scheduling, compiling, and auto-tuning for MMC clusters, MMC grids/clouds, etc.
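Among the parallel models mentioned above, the master-worker scheme is the simplest to illustrate: a master drives the search loop while a pool of workers evaluates candidate solutions concurrently. The following is a minimal, hypothetical Python sketch, not part of the call itself; a thread pool stands in for the worker pool (a real MMC deployment would use processes, GPUs, or MPI ranks), and the sphere objective, population sizes, and mutation scheme are illustrative assumptions.

```python
import random
from concurrent.futures import ThreadPoolExecutor


def sphere(x):
    """Toy continuous objective (sum of squares), minimized at the origin."""
    return sum(v * v for v in x)


def master_worker_search(dim=5, pop_size=16, generations=20, workers=4, seed=0):
    """Elitist evolutionary loop: the master ranks and mutates individuals,
    while workers evaluate the population in parallel each generation."""
    rng = random.Random(seed)
    pop = [[rng.uniform(-5.0, 5.0) for _ in range(dim)] for _ in range(pop_size)]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        for _ in range(generations):
            # Fan the (independent) fitness evaluations out to the workers.
            fitness = list(pool.map(sphere, pop))
            # Master gathers results, ranks, and keeps the elite half.
            ranked = sorted(zip(fitness, pop))
            elite = [x for _, x in ranked[: pop_size // 2]]
            # Refill the population by Gaussian mutation of random elites.
            pop = elite + [
                [v + rng.gauss(0.0, 0.5) for v in rng.choice(elite)]
                for _ in range(pop_size - len(elite))
            ]
        best = min(pool.map(sphere, pop))
    return best
```

Because evaluations are embarrassingly parallel, the same loop maps directly onto `multiprocessing.Pool` for multi-core CPUs or onto an MPI scatter/gather for clusters; only the `pool.map` call changes.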
Scientific discipline: Parallel, distributed and shared computing - Operations research - Combinatorics - Optimization and control
