Bid Cost Minimization

Abstract—Most Independent System Operators (ISOs) adopt Bid Cost Minimization (BCM) to select offers and their respective generation levels while minimizing the total bid cost. It has been shown that the customer payment costs resulting from the selected offers can differ significantly from those obtained under Payment Cost Minimization (PCM), in which payment costs are minimized directly. To solve the PCM in the dual space, the Lagrangian relaxation and surrogate optimization approach is frequently used. When standard optimization methods, such as branch-and-cut, become ineffective due to the large size of a problem, this approach provides a good feasible solution within a reasonable CPU time. The convergence of the standard Lagrangian relaxation and surrogate subgradient approach, however, depends on the optimal dual value, which is usually unknown. Furthermore, when using the surrogate subgradient approach, the upper bound property is lost, so additional conditions are needed to ensure convergence. The main goal of this paper is to develop a convergent variation of the surrogate subgradient method that does not invoke the optimal dual value, and to demonstrate the relevance and effectiveness of the new method for solving large constrained optimization problems such as the PCM.

I. INTRODUCTION

Currently, most ISOs in the United States adopt the bid cost minimization (BCM) settlement mechanism for minimizing total offer costs. In this setup, customer payment costs, which are determined by a mechanism that assigns uniform market clearing prices (MCPs), are different from the minimized offer costs. An alternative method to determine customer payment costs is to ...
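The gap between minimized bid costs and uniform-MCP customer payments can be illustrated with a small numeric sketch. The offers, quantities, and prices below are hypothetical (they do not come from the paper); the point is only that paying every dispatched megawatt the marginal offer price yields a payment that exceeds the sum of accepted bid costs.

```python
# Hypothetical two-offer example: 100 MW of demand served by two generators.
# All numbers are illustrative assumptions, not data from the paper.
offers = [
    {"name": "G1", "price": 20.0, "max_mw": 60.0},  # offer price in $/MWh
    {"name": "G2", "price": 50.0, "max_mw": 60.0},
]
demand = 100.0

# Dispatch in merit order (cheapest first), as BCM would in this simple case.
remaining = demand
dispatch = {}
for offer in sorted(offers, key=lambda o: o["price"]):
    take = min(offer["max_mw"], remaining)
    dispatch[offer["name"]] = take
    remaining -= take

# Total bid cost: each generator is paid its own offer price.
bid_cost = sum(dispatch[o["name"]] * o["price"] for o in offers)

# Uniform-MCP payment: every dispatched MW is paid the marginal offer price.
mcp = max(o["price"] for o in offers if dispatch[o["name"]] > 0)
payment = mcp * demand

print(bid_cost)  # 60*20 + 40*50 = 3200.0
print(payment)   # 50 * 100 = 5000.0
```

Here the BCM objective value is $3200, but customers actually pay $5000 under uniform pricing, which is the discrepancy PCM addresses by minimizing the payment directly.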

... middle of paper ...

...dynamic step size [8]. Perhaps the most recent and exhaustive survey on subgradient methods for convex optimization is [2].

The Lagrangian relaxation and surrogate subgradient optimization approach was specifically treated in [6] and [5]. The former paper develops the surrogate subgradient method and proves its convergence. Compared to subgradient and gradient methods, the surrogate subgradient approach finds better and smoother directions in less CPU time. The latter paper extends the methodology to coupled problems. Since the optimal dual value and the optimal multipliers remain unknown in practice, there is a need for a surrogate subgradient method whose convergence does not depend on the optimal dual value.
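The surrogate subgradient idea can be sketched on a toy separable problem. The problem, the stepsize rule, and all numbers below are illustrative assumptions, not the paper's PCM formulation: minimize x1² + x2² subject to the coupling constraint x1 + x2 ≥ 4. Relaxing the constraint with a multiplier λ decomposes the Lagrangian into independent subproblems min (xi² − λxi), each solved exactly by xi = λ/2. The surrogate variant re-optimizes only one subproblem per iteration and reuses the stale value of the other, so each direction is cheaper to compute than a full subgradient step.

```python
# Surrogate subgradient sketch on a toy problem (illustrative assumptions):
#   minimize  x1^2 + x2^2   subject to  x1 + x2 >= 4,  optimum x1 = x2 = 2.
x = [0.0, 0.0]   # current subproblem solutions
lam = 0.0        # Lagrange multiplier for the relaxed coupling constraint

for k in range(1, 201):
    i = k % 2                        # update only ONE subproblem per iteration
    x[i] = lam / 2.0                 # exact minimizer of xi^2 - lam*xi
    g = 4.0 - x[0] - x[1]            # surrogate subgradient: constraint violation
    step = 1.0 / k                   # simple diminishing stepsize (an assumed
                                     # choice; selecting a convergent stepsize
                                     # without the optimal dual value is exactly
                                     # the open issue the paper targets)
    lam = max(0.0, lam + step * g)   # projected multiplier update

print(x, lam)    # approaches x1 = x2 = 2 with lam near 4
```

Note that the multiplier is updated with a direction computed from approximately optimized subproblems, which is why the upper bound property of the ordinary subgradient method is lost and extra convergence conditions are required.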

The convergence of the surrogate subgradient method with dynamic or constant stepsize still remains an open question.