Computers and Strategic Games
We all know that computers can help a jumbo jet land safely in the worst of weather, aid astronauts in complex maneuvers in space, guide missiles accurately over vast stretches of land, and assist doctors in creating images of the interior of the human body. We are lucky and pleased that computers can perform these functions for us. But in doing so, computers show no intelligence; they merely carry out lengthy, complex calculations while serving as our obedient helpers. Yet the question of whether computers can think, whether they are able to show any true intelligence, has been controversial from the day humans first realized the full potential of computers. Exactly what intelligence is, how it comes about, and how we test for it have become issues central to computer science and, more specifically, to artificial intelligence. In searching for a domain in which to study these issues, many scientists have selected the field of strategic games. Strategic games require what is generally understood to be a high level of intelligence, and through these games, researchers hope to measure the full potential of computers as thinking machines (Levy & Newborn 1).
From the beginning, some argued that computers would never be good at strategic games until humans understood how they themselves played and then modeled computers to play the same way. Most computer scientists felt that humans carried out highly selective searches, and programmers initially set out to have their programs do the same. It was believed that special-purpose computer languages in which gaming concepts could be easily expressed were necessary. Some argued that although human intuition could not be programmed, it was required for top-level play. Computers have improved gradually over the years, from barely making legal moves to their current state as world-class players. On the surface, they do not seem to imitate the human thought process, but upon closer examination, one begins to sense that they do. How exactly do computers play strategic games? The best way to answer this question is to look at how computers play chess, since mastering that game requires what we consider to be the highest level of intelligence. Among all the strategic games, chess has been studied the most by AI researchers, with the objective of building chess-playing machines that can defeat the best human players.
Evaluation of a Board Position
Three main factors are taken into account when scoring a position. Material is the most important: a player who has more pieces is usually in the stronger position. The pieces are assigned values as follows: Queen = 9, Rook = 5, Bishop = 3, Knight = 3, Pawn = 1, and King = 200, the King being valued above all the other pieces combined so that no material gain can ever outweigh its loss. Pawn structure is also very important in the course of a chess game: one should usually avoid doubled, backward, and isolated pawns, so each such pawn incurs a 0.5-point penalty. The third factor is mobility: a player whose pieces are more mobile and control more squares is usually in the stronger position, so there is a 0.1-point bonus for each legal move that a side has available. These values vary only slightly from one chess engine to another. The score S of a position P, denoted S(P), is then given by the scoring function
S(P) = 200(K − K′) + 9(Q − Q′) + 5(R − R′) + 3(B − B′ + N − N′) + (P − P′) − 0.5(D − D′ + S − S′ + I − I′) + 0.1(M − M′)
where K, Q, R, B, N, and P are the numbers of White Kings, Queens, Rooks, Bishops, Knights, and Pawns; D, S, and I are the numbers of White doubled, backward, and isolated Pawns; and M is the number of legal moves available to White. Primed variables represent the corresponding quantities for Black. A positive score implies that White is ahead, while a negative one implies that Black is ahead (Newborn 9).
This method of board evaluation is the one most commonly used by chess engines. In particular, the piece values are almost always the same across engines. Mobility and pawn structure, however, are more abstract, and different engines may differ in the relative importance they attach to them; the weights above are just one common example.
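The scoring function above translates directly into code. The sketch below is only an illustration, not a real engine's evaluator: the piece counts, pawn-flaw counts, and mobility figures are supplied by hand rather than computed from a board representation.

```python
# Sketch of the scoring function S(P). All inputs are plain numbers here;
# a real engine would derive them from its board representation.

PIECE_VALUES = {"K": 200, "Q": 9, "R": 5, "B": 3, "N": 3, "P": 1}

def score(white, black, flawed_pawns_w, flawed_pawns_b, mobility_w, mobility_b):
    """Positive result: White is ahead; negative: Black is ahead."""
    material = sum(PIECE_VALUES[p] * (white[p] - black[p]) for p in PIECE_VALUES)
    structure = -0.5 * (flawed_pawns_w - flawed_pawns_b)  # doubled/backward/isolated
    mobility = 0.1 * (mobility_w - mobility_b)            # per legal move
    return material + structure + mobility

# Example: White is up a Knight (+3) but has two flawed pawns (-1.0)
# and four fewer legal moves (-0.4), for a score of about +1.6.
w = {"K": 1, "Q": 1, "R": 2, "B": 2, "N": 2, "P": 8}
b = {"K": 1, "Q": 1, "R": 2, "B": 2, "N": 1, "P": 8}
print(score(w, b, 2, 0, 30, 34))
```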
The Chess Tree and the Minimax Algorithm
The basic idea is for the computer to start from the initial position and search all legal moves, building a tree of these moves. After every move in the tree, it evaluates the resulting board position, determines which position is most favorable, and makes the move that leads to it, keeping in mind that the opponent will also make the best move available. But there is a problem with this seemingly simple idea. When the game of chess begins, White has a choice of 20 moves. Following White's first move, Black has 20 replies. Thus, 400 different positions arise after only one move by each side. After two moves, the number grows to over 20,000, becoming astronomical after several more moves. The entire chess tree has more positions than there are atoms in the Milky Way (Levy & Newborn 154). If it were possible to search the entire tree, the best opening move could be determined precisely. In practice, the sheer size of the tree rules this approach out.
For a given tree, the minimax algorithm provides the rule for deciding which move to make. Based on the algorithm, the one-move strategy can be described as follows. Generate all legal moves for the side to move in position P; call them M1, …, Mr. Next, for each move, construct the position that follows; call these new positions M1P, …, MrP. Then calculate the score of each of these new positions: S(M1P), …, S(MrP). Finally, if it is White's turn to move, select the move that leads to the highest (maximum) score; if it is Black's turn, select the move that leads to the position with the lowest (minimum) score (Newborn 11).
The tree in Fig. I illustrates this strategy for a hypothetical position P having four legal moves for White. The node at the left represents position P and is called the root of the tree. The four moves are represented by branches connecting the root node on the left to the terminal nodes on the right. The terminal nodes represent the positions M1P, M2P, M3P, and M4P, and the score of each is shown beside it. Looking ahead one move, the figure indicates that White's best move is M2, which leads to a position with a score of +14. Accordingly, position P can be assigned a backed-up score of +14. In general, White moves so that the backed-up score of P is a maximum, whereas Black moves so as to minimize the backed-up score of P (Newborn 11).
The two-move strategy follows from the one-move strategy. First, generate all legal moves for White (assuming it is White's turn to move) in position P and construct the non-terminal positions M1P, …, MrP as before. Then apply the one-move strategy for Black to move in each of these positions; this gives Black's best move and the corresponding backed-up score for each of M1P, …, MrP. Finally, based on these backed-up scores, apply the one-move strategy for White to move in position P. Fig. II shows a tree of depth 2 based on the two-move strategy. The strategy can be extended to trees of any depth; Fig. III shows it applied to a tree of depth 4 (Newborn 12).
A careful study of the minimax algorithm leads to the observation that there are many paths within the search tree that need not be examined because they have no effect on the outcome of the search. The elimination of these unnecessary paths is the function performed by the alpha-beta algorithm. Essentially, the alpha-beta algorithm is the minimax algorithm supplemented by a few lines of code that decide what paths have no bearing on the outcome of the search and thus need not be searched.
The alpha-beta algorithm can be illustrated with Fig. II. The computer begins the minimax search by examining position P and generating moves M1, M2, and M3. It then generates position M1P and moves M11, M12, and M13. Applying the one-move strategy, it concludes that if it makes move M1, its opponent will reply with M12, which assigns a backed-up score of +7 to move M1. The computer next generates position M2P, moves M21, M22, and M23, and then generates and scores positions M21P and M22P. Upon finding that the score of position M22P is less than +7, the computer realizes that move M2 cannot be as good as move M1: Black already has a reply to M2 that leaves White worse off than M1 guarantees. Move M23 need not be examined; its score is irrelevant. We say that move M22 causes a cutoff of the search at position M2P. Similarly, move M31 refutes move M3, and the computer need not examine moves M32 and M33. Thus, the computer examines only six of the nine terminal positions in arriving at the conclusion that move M1 is best. The alpha-beta algorithm therefore reduces the number of positions to be searched, and with it the time the computer needs to find the best move. It generalizes to trees of any finite depth. Cutoffs that occur at odd plies are called alpha-cutoffs, while those at even plies are called beta-cutoffs (Newborn 26). The four-ply tree of Fig. III is shown again in Fig. IV, this time illustrating the alpha-beta algorithm. It is important to note that the minimax algorithm comes up with the same answer whether aided by the alpha-beta algorithm or not.
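The cutoff logic amounts to a few extra lines on top of minimax. The sketch below uses a hypothetical depth-2 tree whose leaf scores were chosen to match the Fig. II narrative, and counts leaf evaluations to show that only six of the nine terminal positions are examined.

```python
# Alpha-beta: minimax plus two bounds. alpha = best score White can already
# guarantee; beta = best score Black can already guarantee. A branch whose
# value falls outside [alpha, beta] cannot affect the final choice.

examined = []   # terminal positions actually scored

def alphabeta(node, white_to_move, alpha=float("-inf"), beta=float("inf")):
    if isinstance(node, (int, float)):
        examined.append(node)
        return node
    if white_to_move:
        for child in node:
            alpha = max(alpha, alphabeta(child, False, alpha, beta))
            if alpha >= beta:
                break                  # cutoff: Black will avoid this line
        return alpha
    else:
        for child in node:
            beta = min(beta, alphabeta(child, True, alpha, beta))
            if beta <= alpha:
                break                  # cutoff: White will avoid this line
        return beta

# M1 backs up +7; M2 and M3 are then each refuted without examining
# all of their replies (leaf scores are hypothetical).
fig2 = [[10, 7, 9], [12, 5, 8], [4, 11, 13]]
print(alphabeta(fig2, white_to_move=True), len(examined))   # 7 6
```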
Below are described some of the latest search techniques that are employed by modern chess programs.
Transposition Tables
The transposition table is a hashing scheme for detecting positions in different branches of the search tree that are identical. If the search arrives at a position that has been reached before and the stored value can be used, the position does not have to be searched again. Even if the value cannot be used, the best move found previously at that position can still improve the move ordering. A transposition table is a safe optimization that can save much time. The only danger concerns draws by repetition of moves, where mistakes are possible because two transposed positions do not share the same move history. A transposition table can reduce tree size, and thus search time, by up to a factor of 4; because of the exponential growth of the tree, this means roughly one extra level of depth can be searched in the same amount of time.
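The idea can be shown with a toy game small enough to run in full: a pile of stones from which each player removes one or two, the player taking the last stone winning. The dictionary below plays the role of the transposition table; a real engine would instead key on a 64-bit Zobrist hash of the position and store the search depth and best move alongside the value.

```python
# A toy transposition table: cache each position's minimax value so that
# positions reached by different move orders are searched only once.

table = {}           # position key -> backed-up value
hits = [0]           # how many times a stored result was reused

def search(stones, player):
    """player +1 maximizes, -1 minimizes; +1 means the maximizer wins."""
    key = (stones, player)
    if key in table:
        hits[0] += 1                 # transposition: reuse the earlier result
        return table[key]
    if stones == 0:
        value = -player              # side to move cannot take a stone: it lost
    else:
        children = [search(stones - take, -player)
                    for take in (1, 2) if take <= stones]
        value = max(children) if player == +1 else min(children)
    table[key] = value
    return value

# With 21 stones (a multiple of 3) the first player loses with best play.
print(search(21, +1), hits[0] > 0)
```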
Iterative Deepening
Iterative deepening means repeatedly calling a fixed-depth search routine with increasing depth until a time limit is exceeded or the maximum search depth has been reached. The advantage is that a search depth does not have to be chosen in advance; the result of the last completed search can always be used. Moreover, because many position evaluations and best moves are stored in the transposition table, the deeper searches inherit a much better move ordering than a search started immediately at a deep level. The values returned from each search can also be used to adjust the aspiration window of the next search, if that technique is used.
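The control loop itself is short. In the sketch below, `search_to_depth` is a stand-in for a real fixed-depth alpha-beta search: it just sleeps to simulate work and returns a hypothetical result record.

```python
import time

def search_to_depth(position, depth):
    # Stand-in for a fixed-depth alpha-beta search; deeper searches take longer.
    time.sleep(0.002 * depth)
    return {"depth": depth, "best_move": None}   # hypothetical result record

def iterative_deepening(position, time_budget, max_depth=64):
    deadline = time.monotonic() + time_budget
    result = None
    for depth in range(1, max_depth + 1):
        result = search_to_depth(position, depth)   # this depth ran to completion
        if time.monotonic() >= deadline:
            break                                   # keep the last finished search
    return result

print(iterative_deepening(None, time_budget=0.05)["depth"])
```

A production engine would also abort a search in progress when the clock runs out, rather than only checking between depths; the last *completed* depth is still what it plays from.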
Move Generation and the Killer Heuristic
When the legal moves are generated for a given position, they are ordered before the alpha-beta algorithm is applied; a good move ordering leads to a quicker search. The killer heuristic improves the ordering, based on the idea that a move that was good in one branch of the tree is often good in another branch at the same depth. For this purpose, one or two killer moves are maintained at each ply and are searched before the other moves. A successful cutoff by a non-killer move overwrites one of the killer moves for that ply.
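The bookkeeping is minimal. In the sketch below, moves are represented as plain strings, and the move names are arbitrary examples.

```python
# Killer heuristic: per ply, remember up to two moves that recently caused
# cutoffs and try them first when ordering moves at that ply.

MAX_PLY = 64
killers = [[None, None] for _ in range(MAX_PLY)]

def order_moves(moves, ply):
    """Put this ply's killer moves (if legal here) ahead of the rest."""
    first = [m for m in killers[ply] if m in moves]
    rest = [m for m in moves if m not in first]
    return first + rest

def record_killer(move, ply):
    """A non-killer move caused a cutoff: promote it, demote the older killer."""
    if move != killers[ply][0]:
        killers[ply][1] = killers[ply][0]
        killers[ply][0] = move

record_killer("Nf3", ply=4)
print(order_moves(["a3", "h3", "Nf3", "e4"], ply=4))   # ['Nf3', 'a3', 'h3', 'e4']
```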
Quiescence Search
The purpose of the quiescence search is to prevent horizon effects, where a bad move hides an even worse threat because the threat is pushed beyond the search horizon. This is done by making sure that evaluations are performed only at stable positions, i.e., positions with no direct threats (such as hanging pieces, checkmate, or promotion). A quiescence search does not take all possible moves into account but restricts itself to, for example, captures, checks, check evasions, and promotion threats. The art is to restrict the quiescence search so that it does not add too much to the search time.
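The structure can be sketched in negamax form. The position encoding here is a toy assumption: a pair of (static score from the side to move's point of view, list of positions reachable by captures); a real engine would generate actual capture moves from the board.

```python
# Quiescence search (negamax form): "stand pat" on the static score, then
# try only noisy moves (here: captures) until the position is quiet.

INF = float("inf")

def quiescence(pos, alpha=-INF, beta=INF):
    static_score, capture_results = pos
    if static_score >= beta:
        return beta                      # standing pat already causes a cutoff
    alpha = max(alpha, static_score)     # we may always decline to capture
    for child in capture_results:        # position after each capture
        score = -quiescence(child, -beta, -alpha)   # opponent's point of view
        if score >= beta:
            return beta
        alpha = max(alpha, score)
    return alpha

# The static score says +2, but one capture leads to a position the opponent
# scores at -8, i.e. +8 for us: quiescence finds the tactical gain that a
# fixed-depth search stopping here would miss.
pos = (2, [(-8, []), (1, [])])
print(quiescence(pos))   # 8
```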
The Efficient Use of Time
Computers are programmed to think while their opponent is on the move. They guess that the opponent will make the anticipated move on the principal continuation and proceed to calculate a reply. If the guess turns out to be correct, they can either save time by replying immediately or continue searching; if it is wrong, they simply restart the search. The best programs guess their opponents' moves about 50 percent of the time, effectively giving them 50 percent more time (Levy & Newborn 189).
Opening and Endgame Databases
In chess, the opening usually steers the game. Grandmasters and other strong players rely on well-known openings that have been established over the years as distinctly favorable. Modern chess programs carry a database of the most popular opening variations, sometimes as much as twenty moves deep, so in the opening they do not have to search for moves at all; they simply play from the database. This saves considerable time for the middlegame and the endgame. Chess programs also have databases of all three-, four-, and five-piece endgames, so they often need far less time than expected in the endgame. Moreover, the branching factor in the endgame shrinks once pieces like the queen are exchanged off, allowing the computer to search to a greater depth than in the middlegame.
Comparison of Human and Computer Chess Players
There is a great difference between the way humans and computers play chess. Human players evaluate chess positions in chunks of pieces and look at the board holistically, whereas computers do not. Grandmasters can instinctively pick out good moves from the many possibilities, much like picking a face from a crowd; they recognize patterns in board positions and play from past experience. Computers cannot do this. Their ability depends on raw speed and how quickly they can search the tree. The top programs play at grandmaster level, and the best chess computer, Deep Blue, has beaten the World Chess Champion Garry Kasparov.
Other Games that Computers Play
Tic-Tac-Toe is a very simple game on a 3×3 board: there are 9 possible first moves, then 8 possible replies, then 7, and so on, giving at most 9! = 362,880 move sequences. Because the number of possibilities is so small, computers can play the perfect game of Tic-Tac-Toe; it is impossible to beat them, and against perfect play the best either side can achieve is a draw.
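The smallness of the game is easy to check by brute force. The sketch below enumerates every possible game, stopping a line as soon as a player completes a row, column, or diagonal; the count comes out well under the 9! upper bound because many games end before the board fills.

```python
# Count every distinct tic-tac-toe game, treating a game as over as soon
# as a player completes a line or the board is full.

LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),      # rows
         (0, 3, 6), (1, 4, 7), (2, 5, 8),      # columns
         (0, 4, 8), (2, 4, 6)]                 # diagonals

def wins(board, player):
    return any(all(board[i] == player for i in line) for line in LINES)

def count_games(board, player):
    total = 0
    for sq in range(9):
        if board[sq] is None:
            board[sq] = player
            if wins(board, player) or all(s is not None for s in board):
                total += 1                     # game ends here: win or draw
            else:
                total += count_games(board, "O" if player == "X" else "X")
            board[sq] = None                   # undo the move and try the next
    return total

print(count_games([None] * 9, "X"))   # well below 9! = 362,880
```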
Professional-level Go is played on a 19×19 board. Moves are made by placing black and white stones on the board, and the objective is to control as much of the board as possible with one's own color. Computers follow the same strategy for Go as for other games, but the number of possibilities in Go, far greater than in chess, is so huge that the computer cannot make a good enough search. Thus, computers are not very good at Go, and even the very best Go programs reach only the advanced-beginner level. There are other reasons for the weakness of computer Go programs. Chess programs typically use heuristic search and evaluation: search trees of board positions are generated to a fixed depth and heuristically pruned according to an evaluation of the merit of the positions. This approach works well in chess because the board is sufficiently small and the nature of chess is more tactical than strategic.
Evaluation of a board position in Go presents problems not encountered in chess. Go is a far more strategic game. Unlike chess, Go does not revolve around the capture of a single piece; positional advantages are slowly built up in pursuit of the long-term goal of acquiring more territory than the opponent. There are many direct and indirect ways to achieve this goal, such as making territory, building influence, attacking weak enemy groups, securing friendly groups, and destroying enemy territory. Owing to the large size of the board, a Go game consists of many small local skirmishes: if a game of chess can be described as a battle, a game of Go is a war. Many good tactical moves at the local level must compete for selection in the context of strategic global considerations, so a player must balance resources among local goals at many locations while pursuing an overall global objective. Because of these evaluation problems, the brute-force methods used in AI to program chess will not work for Go. As a domain, Go is rich for research in AI and cognitive science. Hans Berliner, a former Correspondence Chess World Champion, a top-ranked U.S. over-the-board player, and a well-known chess programmer, says of Go:
'… this game may have to replace Chess as the "task par excellence" for AI.'
Handtalk is regarded as the best computer Go program. In 1997, Janice Kim beat Handtalk in a demo with a 25-stone handicap. Go represents the biggest challenge in the field of strategic games for artificial intelligence researchers today.
While much research was being pursued in computer chess, little study was given to checkers. The problem of checkers was thought to have been solved in 1975, when a program defeated a master in an informal match (in reality, the game was won because of human error). The field has since received new attention because checkers is a very good example of how artificial-intelligence techniques can be applied to a game that is difficult but easier to manage than chess. Checkers, like chess, is played on an 8×8 board, but the number of possible moves is smaller. Computers have been highly successful at checkers, and the best program can handily beat the human champion. Checkers also influenced one of the major breakthroughs in AI: the creation of the 'self-evaluating function,' which allowed a program to learn from its own mistakes by keeping track of the games it had played and taking the moves from those games into account when choosing the best move. The strongest checkers program is Chinook, which played matches against the World Checkers Champion Dr. Marion Tinsley in the early 1990s and was retired in 1997, when it became clear that no human came close to beating it. The difference between Chinook and the highest-rated human is about 200 rating points, a considerable gap. Much of the research done on chess was reused for checkers.
Deep Blue and the Relation of Strategic Games to General AI
Chess had long been considered the ultimate goal of game-playing computers. It was believed to be a game that could be mastered only through true intelligence; thus, to build intelligent computers, the first step should be to build computers that could beat humans at chess. These efforts culminated in IBM's Deep Blue, an extraordinarily fast machine that uses hundreds of processors working in tandem and can analyze some 200,000,000 board positions per second. This speed is combined with a huge database not only of openings and endings but also of the major games played over the last century by the leading Grandmasters.
In 1996, Deep Blue played the then World Chess Champion Garry Kasparov in a six-game exhibition match. The world champion was stunned by a defeat in the first game but recovered to win the match with three wins and two draws. The next year, a rematch was played in which Deep Blue beat Kasparov with two wins, three draws, and one loss, including a brilliant win in game two. What does the scientific community think of Deep Blue's victory? Many are filled with admiration, though some remain cautious. There is no doubt, however, that it is a tremendous result in the history of artificial intelligence.
But what implications does this result have for the field of AI? Does it mean that Deep Blue is an intelligent machine whose intelligence is comparable to that of humans? Looking at how Deep Blue works, there is nothing obviously intelligent about how it plays chess. The Checkers World Champion Dr. Marion Tinsley remarked:
'The programs may indeed consider a lot of moves and positions, but one thing is certain. They do not see much!'
So either chess does not require true intelligence to be mastered, or our notion of true intelligence is faulty, or we ourselves follow a similar mechanical procedure when playing and simply don't understand it because it is so complicated. In my opinion, if a computer can beat a human at chess, it should at least be called an intelligent chess player, because it has overcome all the human faculties of intelligence brought to bear on the game. Its intelligence may be different from ours, but it is not inferior.
At present, Go presents a great challenge for AI researchers. It is believed that, because of the huge number of possibilities, computers can never be good at Go if they follow the same procedures used for chess; perhaps Go is the game that requires 'true intelligence' to be mastered. But the same was said of chess thirty years ago. So it is quite possible that good computer Go players will eventually be built on the strategies used for chess, and that computers able to play Go will not have to be intelligent in the way we define intelligence.
What research in computer game play has taught us is that computers can potentially overcome some of the human faculties of intelligence. Perhaps our minds are as mechanical as computers; perhaps we follow very specific rules in the formulation of our thoughts; perhaps, by our own notion of intelligence, we are as unintelligent as the computers. The aim of AI research, then, should not be to build computers that work the way we think the human brain works, but computers that work as efficiently as the human brain, if not more so. The research in strategic games tells us that while this may be very difficult, it may not be impossible.
1. Newborn, Monroe. Computer Chess. New York: Academic Press, 1975.
2. Levy, David, and Monroe Newborn. How Computers Play Chess. New York: Computer Science Press, 1991.
3. Fogel, David B. Playing at the Edge of AI. San Francisco: Morgan Kaufmann Publishers, 2002.
4. Schaeffer, Jonathan. One Jump Ahead: Challenging Human Supremacy in Checkers. New York: Springer, 1997.
5. Botvinnik, M. M. Computers, Chess and Long-Range Planning. New York: Springer-Verlag, 1970.
6. http://citeseer.nj.nec.com/cachedpage/394820/1 (last accessed: 12/05/02).
7. http://www.research.ibm.com/deepblue/ (last accessed: 12/05/02).
8. http://www2.psy.uq.edu.au/~jay/go/CS-TR-339.html (last accessed: 12/05/02).
9. http://www.howstuffworks.com/chess.htm (last accessed: 12/05/02).
10. http://www.xs4all.nl/~verhelst/chess/search.html (last accessed: 12/05/02).