A Brief Note On Optimization


1. INTRODUCTION

Optimization, in simple terms, means minimizing the cost incurred and maximizing the profit, for instance through better resource utilization. Evolutionary algorithms (EAs) are population-based metaheuristics, i.e. they optimize a problem by iteratively trying to improve candidate solutions with respect to a given measure of quality. EAs often perform well at approximating solutions to many types of problems because they make no assumptions about the underlying fitness function. Many EAs are available, viz. the Genetic Algorithm (GA) [1], Artificial Immune Algorithm (AIA) [2], Ant Colony Optimization (ACO) [3], Particle Swarm Optimization (PSO) [4], Differential Evolution (DE) [5, 6], Harmony Search (HS) [7], Bacteria Foraging Optimization (BFO) [8], Shuffled Frog Leaping (SFL) [9], Artificial Bee Colony (ABC) [10, 11], Biogeography-Based Optimization (BBO) [12], Gravitational Search Algorithm (GSA) [13], the Grenade Explosion Method (GEM) [14], etc.

To use any EA, a model of the decision problem needs to be built that specifies: 1) the decisions to be made, called decision variables; 2) the measure to be optimized, called the objective; and 3) any logical restrictions on potential solutions, called constraints. These three elements are necessary in any optimization model; the solver then finds values for the decision variables that satisfy the constraints while optimizing (maximizing or minimizing) the objective. A minimal sketch of such a model is given at the end of this section. The difficulty with all of the EAs listed above is that, to obtain an optimal solution, many algorithm-specific parameters must be handled appropriately in addition to these common model elements. For example, in the case of GA, algorithm-specific parameters such as the crossover rate (or probability, Pc) and the mutation rate (or probability, Pm) ...

[...]

... Once the parallelizable portions of the algorithm are identified and modified suitably, one can, using OpenMP, easily exploit the functionality of a multi-core CPU and maximize the utilization of all the cores of a multi-core system, which is desirable from the optimization point of view (maximize resource utilization); an illustrative OpenMP sketch is also given at the end of this section. This paper contributes towards this direction and undertakes a detailed study by investigating the effect of the number of cores, dimension size, population size, and problem complexity on the speed-up of the Teaching-Learning-Based Optimization (TLBO) algorithm. In the remainder of this paper, we give a brief literature review of TLBO and its applications. Thereafter, we discuss the possibilities of tweaking TLBO to make it suitable for parallel implementation on a multi-core system. Then, we present results on a few test problems of different complexities and show appreciable speed-ups using our proposed algorithm.
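To make the three model elements and the role of algorithm-specific parameters concrete, the following is a minimal C++ sketch, not code from this paper: the decision variables are a real-valued vector, the objective is an assumed sphere function, the constraints are simple variable bounds, and the GA-style parameters Pc and Pm are values the user must tune. All names, bounds, and numbers here are illustrative assumptions.

```cpp
// Minimal sketch of an optimization model solved by a GA-style loop.
// Objective, bounds, and parameter values are illustrative assumptions.
#include <algorithm>
#include <random>
#include <vector>

int main() {
    const int dim = 10, popSize = 50, generations = 100;
    const double pc = 0.8;   // crossover probability (algorithm-specific parameter)
    const double pm = 0.05;  // mutation probability  (algorithm-specific parameter)

    std::mt19937 rng(42);
    std::uniform_real_distribution<double> uni(-5.0, 5.0);   // bound constraints on variables
    std::uniform_real_distribution<double> prob(0.0, 1.0);

    // Objective to be minimized: sphere function f(x) = sum(x_i^2).
    auto objective = [](const std::vector<double>& x) {
        double s = 0.0;
        for (double xi : x) s += xi * xi;
        return s;
    };

    // Population of candidate solutions (each one a vector of decision variables).
    std::vector<std::vector<double>> pop(popSize, std::vector<double>(dim));
    for (auto& ind : pop)
        for (auto& x : ind) x = uni(rng);

    for (int g = 0; g < generations; ++g) {
        for (int i = 0; i < popSize; ++i) {
            std::vector<double> child = pop[i];
            const auto& partner = pop[rng() % popSize];
            // Crossover: copy each variable from a random partner with probability pc.
            for (int d = 0; d < dim; ++d)
                if (prob(rng) < pc) child[d] = partner[d];
            // Mutation: small random perturbation with probability pm, kept within bounds.
            for (int d = 0; d < dim; ++d)
                if (prob(rng) < pm)
                    child[d] = std::clamp(child[d] + 0.1 * uni(rng), -5.0, 5.0);
            // Greedy replacement: keep the better of parent and child.
            if (objective(child) < objective(pop[i])) pop[i] = child;
        }
    }
    return 0;
}
```

Even in this toy version, the quality of the final solution depends on how pc and pm are chosen, which is exactly the tuning burden the algorithm-specific parameters impose.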
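The second sketch illustrates, again with assumed names and sizes rather than the paper's actual test problems, the kind of OpenMP modification discussed above: because each individual's (or learner's) fitness can be evaluated independently, a single `#pragma omp parallel for` distributes the population's fitness evaluations across the available cores. With g++, it is compiled with the -fopenmp flag.

```cpp
// Sketch of parallel fitness evaluation for a population-based algorithm
// (e.g. TLBO) using OpenMP. The objective and sizes are placeholders.
#include <omp.h>
#include <vector>

// Placeholder fitness function (sphere), assumed for illustration.
double objective(const std::vector<double>& x) {
    double s = 0.0;
    for (double xi : x) s += xi * xi;
    return s;
}

void evaluate(const std::vector<std::vector<double>>& pop,
              std::vector<double>& fitness) {
    // Each iteration is independent, so OpenMP can spread the loop
    // over all cores of a multi-core CPU.
    #pragma omp parallel for schedule(static)
    for (int i = 0; i < static_cast<int>(pop.size()); ++i)
        fitness[i] = objective(pop[i]);
}

int main() {
    const int popSize = 1000, dim = 100;
    std::vector<std::vector<double>> pop(popSize, std::vector<double>(dim, 1.0));
    std::vector<double> fitness(popSize);

    omp_set_num_threads(4);   // assumed core count; vary to study speed-up
    evaluate(pop, fitness);
    return 0;
}
```

Varying the thread count, population size, and dimension in a sketch like this is the same kind of experiment the paper reports for TLBO when studying speed-up against the number of cores and problem complexity.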
