In other words, a solution to a multi-objective programming problem is said to be efficient, non-inferior, or Pareto optimal if it is not possible to improve the value of any objective function without worsening the value of at least one other. Such solutions are infinitely many, so interest usually lies in generating a representative subset of them. Solution methods for multi-objective programming problems are generally classified into three groups: a priori, interactive, and a posteriori methods (Hwang and Masud, 1979). In an a priori method, the decision maker states his preferences before the optimization procedure, for example by attaching weights to each objective function. The pitfall of this method is that it is not easy for the decision maker to accurately quantify his preferences prior to optimization (Mavrotas,
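As a small illustration of this definition (not taken from the text, and assuming all objectives are minimized), the sketch below checks whether a candidate objective vector is Pareto efficient within a finite set of alternatives; the function name and the sample data are hypothetical.

```python
import numpy as np

def is_pareto_efficient(candidate, others):
    """Return True if no other point dominates `candidate`.

    Under minimization, a point dominates `candidate` when it is no worse
    in every objective and strictly better in at least one.
    """
    candidate = np.asarray(candidate, dtype=float)
    for other in np.asarray(others, dtype=float):
        if np.all(other <= candidate) and np.any(other < candidate):
            return False
    return True

# Hypothetical objective vectors (f1, f2) for four solutions.
points = [(1.0, 4.0), (2.0, 2.0), (3.0, 1.0), (3.0, 3.0)]
for p in points:
    rest = [q for q in points if q != p]
    print(p, "efficient" if is_pareto_efficient(p, rest) else "dominated")
```

Here (3.0, 3.0) is dominated by (2.0, 2.0), while the other three points are mutually non-dominated and hence efficient.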
He argues that it is too easy to explain the findings from dual-task experiments in terms of central capacity: if two tasks can be performed simultaneously, it is because together they do not exceed the central capacity, and if they cannot, it is because the combined effort needed to accomplish them exceeds that capacity. This may be the case, but there is no independent definition of central processing capacity, so task difficulty cannot be defined independently either. The argument is circular: difficult tasks require more attention, and tasks that require more attention are difficult.
In spite of their good performance, wrapper methods have seen restricted use because of the high computational cost involved. In this paper, a genetic algorithm (GA) is employed as a filter technique for feature selection in order to achieve a better diagnosis of stock trends.

3.3. Support vector machine (SVM)

The support vector machine (SVM), which is based on the concept of structural risk minimization and on statistical learning theory, was first developed by Vapnik (1995). Two notable applications of SVM are pattern recognition (classification) and regression estimation (function approximation). These applications have made SVM a popular method among researchers for solving problems such as nonlinear modeling and time series forecasting. In this article, its classification algorithm is applied to forecast the price of each stock in certain time periods.
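As a rough illustration of the classification setup described here (a sketch, not the authors' actual pipeline), the code below trains an SVM classifier to label the next-day direction of a price series from lagged daily returns; the feature construction, the synthetic price series, and the kernel parameters are all assumptions.

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

def make_features(prices, n_lags=5):
    """Build lagged daily returns as features and up/down direction as labels."""
    returns = np.diff(prices) / prices[:-1]
    X, y = [], []
    for t in range(n_lags, len(returns)):
        X.append(returns[t - n_lags:t])       # previous n_lags daily returns
        y.append(1 if returns[t] > 0 else 0)  # direction of the next return
    return np.array(X), np.array(y)

# Hypothetical price series used only to make the sketch runnable.
rng = np.random.default_rng(0)
prices = 100 * np.cumprod(1 + rng.normal(0, 0.01, 500))

X, y = make_features(prices)
split = int(0.8 * len(X))

# RBF-kernel SVM with feature scaling; C and gamma are assumed, not tuned.
model = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0, gamma="scale"))
model.fit(X[:split], y[:split])
print("out-of-sample accuracy:", model.score(X[split:], y[split:]))
```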
Most fraudulent financial reporting schemes involve "earnings management" techniques, which inflate earnings and create an improved financial picture or, conversely, mask a deteriorating one. Premature revenue recognition is one of the most common forms of fraudulent earnings management, and the case of Informix Software Inc. unfortunately illustrates this practice closely. The analysis of this case will shed light on issues such as:
- Informix's revenue recognition policy prior to 1990 and its compliance with FASB Concept #5, FASB Statement #86, and GAAP protocols;
- Informix's reaction to the AICPA SOP changing revenue recognition procedures, and Informix's reasons for prematurely and voluntarily implementing the new policy;
- the changes that took place at Informix and the financial results reported during 1990.
Furthermore, we will also evaluate software industry practices and the regulations in place at that time. We conclude with lessons learned and recommendations for identifying and discouraging non-GAAP revenue recognition practices.
The concept of price elasticity of demand (PED) measures the responsiveness of the quantity demanded by consumers to a change in product price. It is used by businesses to forecast sales, set the most effective price for goods, and determine total revenue (TR) and total expenditure (TE). Similarly, governments use price elasticity of demand when imposing indirect taxes on goods and when setting minimum and maximum prices. Marginal revenue is also determined by the price elasticity of demand. Price elasticity of demand is used to predict how a shift in the supply curve changes the quantity traded and the price of a product, and it is almost always negative because the relationship between price and quantity demanded is an inverse one.
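As a small numerical illustration (the prices and quantities below are invented, not from the text), PED can be computed as the percentage change in quantity demanded divided by the percentage change in price:

```python
def price_elasticity(p0, p1, q0, q1):
    """Point-to-point PED: % change in quantity demanded / % change in price."""
    pct_dq = (q1 - q0) / q0
    pct_dp = (p1 - p0) / p0
    return pct_dq / pct_dp

# Hypothetical example: price rises from 10 to 11, demand falls from 100 to 92.
ped = price_elasticity(10, 11, 100, 92)
print(ped)  # -0.8 -> negative, and |PED| < 1 means demand is price inelastic
```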
The Bland and Altman 95% limits of agreement (LOA) were -2.7 to 4.7. In conclusion, although this outcome measure demonstrated excellent test-retest reliability, the lack of an appropriate sample size may limit the reproducibility of this result. Consequently, further research with an appropriate sample size is required to draw a definite conclusion.

Introduction

Reliability of an outcome measurement reflects how reproducible or repeatable the measurement is under a given set of circumstances. For an outcome measurement to be useful, it must provide stable or reproducible values with small errors of measurement when no variable is influencing the attribute that the measurement is quantifying (Rankin and Stoke 1998).
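For reference, 95% limits of agreement of the kind quoted above are conventionally computed as the mean difference between the two measurement sessions plus or minus 1.96 standard deviations of those differences; the sketch below shows that calculation on invented test-retest data (not the study's data).

```python
import numpy as np

def limits_of_agreement(session1, session2):
    """Bland-Altman 95% limits of agreement between two repeated measurements."""
    diffs = np.asarray(session1, dtype=float) - np.asarray(session2, dtype=float)
    bias = diffs.mean()
    sd = diffs.std(ddof=1)  # sample standard deviation of the differences
    return bias - 1.96 * sd, bias + 1.96 * sd

# Hypothetical test-retest scores for eight participants.
test   = [12.0, 15.5, 9.8, 14.2, 11.0, 13.3, 10.7, 16.1]
retest = [11.5, 16.0, 9.5, 13.8, 11.4, 12.9, 10.2, 15.8]
print(limits_of_agreement(test, retest))
```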
Partially observable Markov decision processes (POMDPs) extend the MDP framework to include states that are not fully observable. With this extension we can model more practical problems, but the solution methods that exist for MDPs are no longer applicable. POMDP algorithms are far more computationally intensive than MDP algorithms. This complexity is due to the uncertainty about the true state, which forces the agent to maintain a probability distribution over the states. POMDP algorithms therefore operate on probability distributions (belief states), while MDP algorithms work on a finite number of discrete states.
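To make the "probability distribution over the states" concrete, the sketch below implements the standard Bayesian belief update b'(s') ∝ O(o | s', a) · Σ_s T(s' | s, a) · b(s); the two-state transition and observation tables are invented for illustration and do not come from the text.

```python
import numpy as np

def belief_update(belief, action, observation, T, O):
    """Standard POMDP belief update.

    belief: current distribution over states, shape (S,)
    T[a]:   transition matrix, T[a][s, s'] = P(s' | s, a)
    O[a]:   observation matrix, O[a][s', o] = P(o | s', a)
    """
    predicted = belief @ T[action]                    # sum_s T(s'|s,a) b(s)
    unnormalized = O[action][:, observation] * predicted
    return unnormalized / unnormalized.sum()

# Hypothetical two-state, single-action, two-observation model.
T = {0: np.array([[0.9, 0.1],
                  [0.2, 0.8]])}
O = {0: np.array([[0.8, 0.2],
                  [0.3, 0.7]])}

b = np.array([0.5, 0.5])
b = belief_update(b, action=0, observation=0, T=T, O=O)
print(b)  # belief shifts toward the state that best explains observation 0
```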
My argument is this: keeping in mind that we want to measure the significance of the difference in performance between the models/macro sets, and given that the process-switching time of current operating systems is non-zero, we should make such an assumption. This is because there will be a small ove... ...h instances, and it was hard to avoid generating such instances for the test. It is not possible to completely control the output of a random problem generator, and the mprime problems were either relatively easy or extremely hard. So the only way I found to make the comparison fairer was to apply the upper bound on the perfect model, as discussed above. This method was very effective in showing that the perfect model is superior to the other macros/models.
It also stunts any scope for improvement or innovation, as it is too focused on sticking to the set benchmarks. This often leads to poor overall performance of the organization in the long run, which in turn affects the going concern of the business. Secondly, it uses a single, volume-based cost driver, which distorts product costs. It traces overheads to products or services usin... ...osts and where to apply efforts to curb inflationary costs. This can be of particular value in tracking new products or customers, and it also solves the cross-subsidy problem linked to the traditional costing system by separating overhead costs into different cost categories, or cost pools.
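To illustrate the contrast drawn here between a single volume-based driver and activity-based cost pools, the sketch below allocates the same overhead to two hypothetical products both ways; all figures, product names, and pool names are invented for illustration.

```python
# Hypothetical overhead pools, each with its own cost driver (activity-based costing).
pools = {
    "machine_setups": {"cost": 40_000, "driver": {"ProductA": 10, "ProductB": 90}},
    "quality_checks": {"cost": 20_000, "driver": {"ProductA": 30, "ProductB": 70}},
}
# Volume (e.g. machine hours) used by the single-driver traditional approach.
volume = {"ProductA": 800, "ProductB": 200}

# Traditional costing: one volume-based rate spread over all overhead.
total_overhead = sum(p["cost"] for p in pools.values())
rate = total_overhead / sum(volume.values())
traditional = {prod: rate * v for prod, v in volume.items()}

# Activity-based costing: each pool is allocated by its own driver.
abc = {prod: 0.0 for prod in volume}
for pool in pools.values():
    pool_rate = pool["cost"] / sum(pool["driver"].values())
    for prod, usage in pool["driver"].items():
        abc[prod] += pool_rate * usage

print("traditional:", traditional)  # ProductA bears most overhead (high volume)
print("activity-based:", abc)       # ProductB bears most overhead (drives the activities)
```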
Many load balancing algorithms have been proposed, but they are not effective and have many flaws. This work is limited to workload balancing and to reducing the communication time between the participating resources. This is achieved by converting the initial database into a TID representation (which contains all the data of the initial database referenced by item) and passing it to all the participating systems, thus reducing communication time because each participating system references the TID locally. This technique allows both distributed and parallel mechanisms to be implemented easily.

RELATED WORK

Sarra Senhadji, Salim Khiat, and Hafida Belbachir,