Tuesday, July 31, 2007

AI Reading

From MAA: http://www.maa.org/devlin/
Devlin's Angle
Why 2001 Won't be 2001
This month, January 12, to be precise, sees the birthday of HAL, the mission-control computer on the Jupiter-bound spaceship Discovery in Arthur C. Clarke's celebrated science fiction novel 2001: A Space Odyssey.

According to the book, HAL was commissioned at Urbana, Illinois, on January 12, 1997. In Stanley Kubrick's 1968 movie version, the date of HAL's birth was inexplicably changed to January 12, 1992. In any event, whether HAL is just about to be born or preparing to celebrate its fifth birthday, with the year 2001 practically upon us, it's natural to ask how correct Clarke and Kubrick's vision of the future has turned out to be.

Thirty years ago when the film was made, director Kubrick endowed HAL with capabilities computer scientists thought would be achieved by the end of the century. With a name that, despite Clarke's claim to the contrary, some observers suggested was a simple derivation of IBM (just go back one letter of the alphabet), HAL was, many believed, science fiction-shortly-to-become-fact.

In the movie, a team of five new millennium space explorers set off on a long journey of discovery to Jupiter. To conserve energy, three of the team members spend most of the time in a state of hibernation, their life-support systems being monitored and maintained by the on-board computer HAL. Though HAL controls the entire spaceship, it is supposed to be under the ultimate control of the ship's commander, Dave, with whom it communicates in a soothingly soft, but emotionless male voice (actually that of actor Douglas Rain). But once the vessel is well away from Earth, HAL shows that it has developed what can only be called a "mind of its own." Having figured out that the best way to achieve the mission for which it has been programmed is to dispose of its human baggage (expensive to maintain and sometimes irrational in their actions), HAL kills off the hibernating crew members, and then sets about trying to eliminate its two conscious passengers. It manages to maneuver one crew member outside the spacecraft and sends him spinning into outer space with no chance of return. Commander Dave is able to save himself only by entering the heart of the computer and manually removing its memory cells. Man triumphs over machine--but only just.

It's a good story. (There's a lot more to it than just described.) But how realistic is the behavior of HAL? We don't yet have computers capable of genuinely independent thought, nor do we have computers we can converse with using ordinary language. True, there have been admirable advances in systems that can perform useful control functions requiring decision making, and there are working systems that recognize and produce speech. But they are all highly restricted in their scope. You get some idea of what is and is not possible when you consider that it has taken AT&T over thirty years of intensive research and development to produce a system that can recognize the three words 'yes', 'no', and 'collect' with an acceptable level of reliability for a range of accents and tones. Despite the oft-repeated claims that "the real thing" is just around the corner, the plain fact is that we are not even close to building computers that can reproduce human capabilities in thinking and using language. And according to an increasing number of experts, we never will.

Despite the present view, at the time 2001 was made, there was no shortage of expert opinion claiming that the days of HAL ("HALcyon days," perhaps?) were indeed just a few years off. The first such prediction was made by the mathematician and computer pioneer Alan Turing. In his celebrated article Computing Machinery and Intelligence, written in 1950, Turing claimed, "I believe that at the end of the century the use of words and general educated opinion will have altered so much that one will be able to speak of machines thinking without expecting to be contradicted."

Though the last part of Turing's claim seems to have come true, that is more a popular response to years of hype than a reflection of the far less glamorous reality. There is now plenty of evidence, from psychology, sociology, and linguistics, to indicate that the original ambitious goals of machine intelligence are not achievable, at least when the machines are electronic computers, no matter how big or fast they get. So how did the belief in intelligent machines ever arise?

Ever since the first modern computers were built in the late 1940s, it was obvious that they could do some things that had previously required an "intelligent mind." For example, by 1956, a group at Los Alamos National Laboratory had programmed a computer to play a poor but legal game of chess. That same year, Allen Newell, Clifford Shaw, and Herbert Simon of the RAND Corporation produced a computer program called The Logic Theorist, which could--and did--prove some simple theorems in mathematics.

The success of The Logic Theorist immediately attracted a number of other mathematicians and computer scientists to the possibility of machine intelligence. Mathematician John McCarthy organized what he called a "two month ten-man study of artificial intelligence" at Dartmouth College in New Hampshire, thereby coining the phrase "artificial intelligence," or AI for short. Among the participants at the Dartmouth program were Newell and Simon, Marvin Minsky, and McCarthy himself. The following year, Newell and Simon produced the General Problem Solver, a computer program that could solve the kinds of logic puzzles you find in newspaper puzzle columns and in the puzzle magazines sold at airports and railway stations. The AI bandwagon was on the road and gathering speed.

As is often the case, the mathematics on which the new developments were based had been developed many years earlier. Attempts to write down mathematical rules of human thought go back to the ancient Greeks, notably Aristotle and Zeno of Citium. But the really big breakthrough came in 1847, when an English mathematician called George Boole published a book called An Investigation of the Laws of Thought. In this book, Boole showed how to apply ordinary algebra to human thought processes, writing down algebraic equations in which the unknowns denoted not numbers but human thoughts. For Boole, solving an equation was equivalent to deducing a conclusion from a number of given premises. With some minor modifications, Boole's nineteenth century algebra of thought lies beneath the electronic computer and is the driving force behind AI.
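In modern terms, Boole's method amounts to a truth-value check: a conclusion follows from a set of premises exactly when every assignment of truth values that makes all the premises true also makes the conclusion true. A minimal sketch in Python (the rains/wet example and the names are illustrative, not Boole's own notation):

    from itertools import product

    def entails(premises, conclusion, variables):
        """Boole's 'solving' in miniature: the conclusion follows if it
        holds in every truth assignment satisfying all the premises."""
        for values in product([False, True], repeat=len(variables)):
            env = dict(zip(variables, values))
            if all(p(env) for p in premises) and not conclusion(env):
                return False
        return True

    # Premises: "if it rains, the ground is wet" and "it rains".
    premises = [
        lambda e: (not e["rains"]) or e["wet"],  # rains implies wet
        lambda e: e["rains"],
    ]
    print(entails(premises, lambda e: e["wet"], ["rains", "wet"]))  # True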

Another direct descendant of Boole's work was the dramatic revolution in linguistics set in motion by MIT linguist Noam Chomsky in the early 1950s. Chomsky showed how to use techniques of mathematics to describe and analyze the grammatical structure of ordinary languages such as English, virtually overnight transforming linguistics from a branch of anthropology into a mathematical science. At the same time that researchers were starting to seriously entertain the possibility of machines that think, Chomsky opened up (it seemed) the possibility of machines that could understand and speak our everyday language.

The race was on to turn the theories into practice. Unfortunately (some would say fortunately), after some initial successes, progress slowed to a crawl. The result was hardly a failure in scientific terms. For one thing, we do have some useful systems, and they are getting better all the time. The most significant outcome, however, has been an increased understanding of the human mind: how unlike a machine it is and how unmechanical human language use is.

One reason why computers cannot act intelligently is that logic alone does not produce intelligent behavior. As neuroscientist Antonio Damasio pointed out in his 1994 book Descartes' Error, you need emotions as well. That's right, emotions. While Damasio acknowledges that allowing the emotions to interfere with our reasoning can lead to irrational behavior, he presents evidence to show that a complete absence of emotion can likewise lead to irrational behavior. His evidence comes from case studies of patients for whom brain damage--either by physical accident, stroke, or disease--has impaired their emotions but has left intact their ability to perform 'logical reasoning', as verified using standard tests of logical reasoning skill. Take away the emotions and the result is a person who, while able to conduct an intelligent conversation and score highly on standard IQ tests, is not at all rational in his or her behavior. Such people often act in ways highly detrimental to their own well-being. So much for Western science's idea of a 'coolly rational person' who reasons in a manner unaffected by emotions. As Damasio's evidence indicates, truly emotionless thought leads to behavior that by anyone else's standards is quite clearly irrational.

And as linguist Steven Pinker explained in his 1994 book The Language Instinct, language too is perhaps best explained in biological terms. Our facility for language, says Pinker, should be thought of as an organ, along with the heart, the pancreas, the liver, and so forth. Some organs process blood, others process food. The language organ processes language. Think of language use as an instinctive, organic process, not a learned, computational one, says Pinker.

So, while no one would deny that work in AI and computational linguistics has led to some very useful computer systems, the really fundamental lessons that were learned were not about computers but about ourselves. The research was successful in terms not of engineering but of understanding what it is to be human. Though Kubrick got it dead wrong in terms of what computers would be able to do by 1997, he was right on the mark in terms of what we ultimately discover as a result of our science. 2001 shows the entire evolution of mankind, starting from the very beginnings of our ancestors Homo erectus and taking us through the age of enlightenment into the present era of science, technology, and space exploration, and on into the then-anticipated future of routine interplanetary travel. Looking ahead forty years to the start of the new millennium, Kubrick had no doubt where it was all leading. In the much discussed--and much misunderstood--surrealistic ending to the movie, Kubrick's sole surviving interplanetary traveler reached the end of mankind's quest for scientific knowledge, only to be confronted with the greatest mystery of all: Himself. In acquiring knowledge and understanding, in developing our technology, and in setting out on our exploration of our world and the universe, said Kubrick, scientists were simply starting on a far more challenging journey into a second unknown: the exploration of ourselves.

The approaching new millennium sees Mankind about to pursue that new journey of discovery. Far from taking away our humanity, as many feared, attempts to get computers to think and to handle language have instead led to a greater understanding of who and what we are. As a human being, I like that. For today's scientist, inner space is the final frontier, a frontier made accessible in part by attempts to build a real-world HAL. As a mathematician, I like that, too. Happy birthday, HAL.

The above celebration of the birth of HAL, the computer in the book and film 2001, is abridged from the book Goodbye Descartes: The End of Logic and the Search for a New Cosmology of Mind, by Keith Devlin, published by John Wiley and Sons in late January, 1997, price $27.95.

The soul of a chess machine - MATHEMATICS AND COMPUTERS
Lessons learned from a contest pitting man against computer

By IVARS PETERSON

It's all over now, but I'll never forget that first chess game. What a smashing victory I won over the human champion! I really had Garry Kasparov sweating.
Here I was, a novice tournament player fresh out of the lab. No outsider, including Kasparov, had seen me play before, and I surprised everyone. Oh, how sweet it was!

Of course, it was downhill from there: a loss, two draws, and then two more losses. It's not that Kasparov attacked my pieces and overwhelmed my defenses. He played with amazing restraint and subtlety, quietly moving his pieces until he developed positions in which my options were extremely limited. There wasn't much I could do.

Even so, at times I responded brilliantly. I made moves that brought gasps from the experts. They couldn't see what I could, looking more than a dozen moves ahead. I must admit, however, that I did sometimes lose track of what I was supposed to be doing. And I really didn't know enough about chess to understand the nuances of all the positions that Kasparov maneuvered me into.

Perhaps I could have done better if I had hooked up with a microcomputer like Chess Genius, which once beat Kasparov in a tournament. Although Chess Genius can't search through the options as deeply as I can, it certainly knows more chess strategy.

Well, the reporters and television cameras are gone now. My support staff at IBM is taking a short break. I can't help thinking about what I should do next. Keep training? Go back to school and learn some new skills? Or get a real job, as IBM hopes?

Deep Blue's performance in its six-game match in February against world chess champion Garry Kasparov impressed everyone (SN: 2/24/96, p. 119). "It's a really serious opponent," Kasparov remarked afterwards. "I won... but it was as tough as a world championship match."
That a computer which relies largely on speedily checking the consequences of billions of possible moves could come so close to matching the human capabilities required to play the game at its highest level was a striking achievement for the team that designed, built, and programmed Deep Blue.

"What they did is really quite amazing," says Hans Berliner, a computer scientist and chess expert at Carnegie Mellon University in Pittsburgh. "They did much better than I expected. But there's still some work to be done."

"We learned a lot from this experience," says Chung-Jen Tan of the IBM Thomas J. Watson Research Center in Yorktown Heights, N.Y., who directed the Deep Blue effort. "We certainly found a lot of weak points and strengths in our system."

There were lessons for Kasparov, too. "I learned not only how to play against a machine but also more about the game of chess," he noted after the match. Kasparov predicts that both chess players and scientists will find great value in studying the games of this match for what they reveal about chess and about the way machines reason.

IBM's Deep Blue project began in 1989 as part of an exploration of novel ways to use arrays of computer processors, all working at the same time while sharing information, to tackle complex problems. The idea was to combine a general-purpose, parallel-processing computer system and special integrated-circuit chips designed for a specific application to create a superior problem-solving machine.
"Our goal . . . was to use chess as a test case," Tan says. The knowledge gained from the chess experiment could then be applied in the design of computer systems for a wide variety of tasks such as analyzing financial data, scheduling cargo shipments, simulating molecular behavior, and managing huge inventories or large investment portfolios. For chess, the researchers created a special move-generating chip that contains more than 1 million transistors and several memory units. It stores values representing the strengths of chess pieces in various arrangements, as well as billions of sequences of moves for ending games when only a few pieces remain on the board. Deep Blue contains 256 of these chips in conjunction with a heavy-duty RS/6000 SP-2 multiprocessing computer. Deep Blue's software, written in the computer language called C, coordinates the actions of the chips. It divides searches among the processors and compiles and reconciles the results to generate the best possible move for any given chess position. In this way, Deep Blue can evaluate about 200 million positions per second, assessing strengths and the pieces' capacity for attack and defense. It assigns a numerical value to each move.

Deep Blue also has access to a database containing sequences of moves made by top chess players at the beginnings of games and another database providing billions of scenarios on how to end a game when only five pieces remain on the chessboard, in addition to its chip-based endgame data.
All this adds up to a complicated, sensitive system, remarks Murray Campbell of the Deep Blue team. Completed only about a month before the match, Deep Blue suffered surprisingly few glitches during the contest. "We were relieved that it worked more or less as it was supposed to," Tan says.

Like most chess computers, Deep Blue's strength is in looking ahead. For any arrangement of pieces, it considers all possible moves. Then it evaluates every response its opponent might make to each of those moves, and so on. In a game of 40 moves, the number of different board positions that can develop is at least 10^120. There's no way that even the fastest computer can check every possibility to play a perfect game. The number of possible sequences of moves is so large, it easily dwarfs the most generous estimates of the number of atoms in the universe.

Thus, to stay within the time limits imposed on games, chess programs can preview only a certain number of moves. When just a few pieces are left on the chessboard, however, the programs can see unambiguously to a game's end.

The designers of Deep Blue tried to increase the depth to which their computer could search by dividing its effort among more than 200 processors. However, the particular method used for doing the search--the standard so-called alpha-beta search algorithm--isn't particularly well suited for parallel processing.

"My experience in parallel computing is that these [multiprocessor] systems are typically quite inefficient," says T. Anthony Marsland of the University of Alberta in Edmonton. "I would advise [the Deep Blue programmers] to make sure they're getting out of their system all the computing power that's possible in theory.

"That [additional power] could give them a computational advantage in critical situations on the chessboard, when Deep Blue needs to look one [step] deeper," he adds. "The probability of error goes down with a deeper search."

Researchers are now studying alternative approaches that might help a computer focus its search better and come up with more accurate evaluations of potential moves. At the NEC Research Institute in Princeton, N.J., mathematician Warren D. Smith and his colleagues are working on a "best play for imperfect players" (BPIP) strategy. So far, they have used it only on small computers. According to this method, instead of checking every possible chain of moves, the computer looks down only the lines of play that seem, from the first few possible moves, most promising. Its evaluation takes into account the fact that neither player can see to the end of a game and that neither performs perfectly. Thus, chess moves are given statistical weights rather than numerical values.

"My goal with BPIP search is to try to get an approach with more finesse than Deep Blue but more brute force than Garry Kasparov--sort of an intermediate regime," Smith explains.

In tests that pitted BPIP searches against traditional alpha-beta searches in less complicated board games such as mancala (where one distributes markers in an array of compartments) and reversi (also known as Othello), the BPIP approach usually won, Smith says. Now, the NEC group is trying to program a chess computer with this strategy.
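For readers unfamiliar with it, the alpha-beta algorithm mentioned above skips any line of play that provably cannot change the final choice. Here is a self-contained Python sketch over a toy game tree, with nested lists standing in for positions (real programs generate moves on the fly):

    def alphabeta(node, alpha, beta, maximizing):
        """Alpha-beta search over a toy tree: leaves are numeric scores,
        interior nodes are lists of child nodes. Prunes any branch that
        cannot improve on what either side is already guaranteed."""
        if not isinstance(node, list):
            return node
        if maximizing:
            best = float("-inf")
            for child in node:
                best = max(best, alphabeta(child, alpha, beta, False))
                alpha = max(alpha, best)
                if alpha >= beta:  # opponent would never allow this line
                    break
            return best
        best = float("inf")
        for child in node:
            best = min(best, alphabeta(child, alpha, beta, True))
            beta = min(beta, best)
            if alpha >= beta:
                break
        return best

    # Maximizer to move: the left branch guarantees 3; in the right branch
    # the 9 is never examined, since its minimum is already at most 2.
    print(alphabeta([[3, 5], [2, 9]], float("-inf"), float("inf"), True))  # 3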

Though most chess computers rely heavily on speedy, deep searching, they also need good recipes for evaluating the strength of chess positions. Currently, nearly all that information comes from what people have learned in playing the game, and it must be painstakingly programmed into the computer.
Deep Blue showed obvious weaknesses in its ability to evaluate certain types of chess positions, such as not recognizing when pieces needed to be sacrificed. Such deficiencies can be easily corrected by adding more knowledge to the program, Marsland says. But there is a tradeoff. Complicated evaluations slow down the searches, so a balance must be struck between depth of search and complexity of evaluation. So far, depth of search has proved more significant than sophistication of positional analysis in the success of high-level chess computers.

In recent years, however, programmers have made great strides in creating surprisingly competent chess programs that run on personal computers. They have done it by carefully refining and tuning the chess knowledge component to make up for the smaller computers' lack of computing power compared to machines like Deep Blue.
Programs such as Chess Genius and Fritz 4 have shown the way. "I've played some of the micros," Berliner says. "It's amazing how well versed they are in almost all phases of the game.

"The best way to improve the evaluation [by the computer] is to keep playing--make some changes and then play the new program against the old one to see what happens," he advises. "That's what the people with the micros have been doing."

Some researchers are investigating alternative ways of supplying chess knowledge to a computer. One possibility is to see if they can program computers to learn, just as human players improve their play with experience and study.

A few years ago, Robert A. Levinson and his coworkers at the University of California, Santa Cruz, developed a computer program, called Morph, that learned to play chess starting only with a list of legal moves. They pitted their novice system against a conventional chess program known as Gnu Chess, which plays about as well as the average tournament player.

After thousands of such games, Morph identified enough patterns to play a reasonable game against a beginning tournament player, even though it looked ahead only to the next move. "It's not really impressive compared to existing chess programs," Levinson says. "But it is impressive given that it was all learned from experience."
Levinson is now working on a new, improved version of Morph. The program is capable of looking ahead several moves and has access to a database of essentially all the games ever played by top chess players. "It finds the chess position it considers most similar to its own position and tries to reason by analogy," Levinson says. "If that position was good, then this position is good.

"I think we have a promising model," he adds. "But there's something about a grand master staring at a chessboard that's hard to capture in a computer."

Kasparov's key advantage over Deep Blue was that he could learn, both as a game progressed and between games.
Because Deep Blue had no track record as a chess player, Kasparov could not prepare for this match as he has for other matches by studying his opponent's previously played games. Instead, he built up in his mind a portrait of his computer opponent as they played.

"Even though it is a computer, this opponent had its own psychology," Kasparov insisted after the match. "Before each game, I tried to make an opening or strategy . . . based on my knowledge of this opponent."

Playing Deep Blue forced Kasparov into an uncharacteristic style of play, most evident in the final game of the match. He had learned to be more precise in judging the quality of his chess positions. He also took care to avoid complications, to refrain from creating targets, and to attack gradually, increasing his advantage little by little until there was nothing left to do but win.

"That's an interesting strategy: Just keep improving the quality of your position and don't do anything until you can see [the game] completely to the end," Berliner comments.

The usual human judgment isn't good enough against a computer like Deep Blue, Kasparov noted in summing up what he had learned from the match. You can't rely on impressions, he said. You've got to be absolutely sure that you're doing the right thing.

This new knowledge is bound to make Kasparov an even more formidable opponent in his matches against human players. "We have not seen him employ this style in the past, but we will certainly see him do so in the future," Berliner says. Top chess player and commentator Maurice Ashley of New York City had the final word: "The world champion is getting tougher from playing a machine."

Silicon Champions of the Game
Computers have conquered tic-tac-toe, checkers, and chess. What's next?



By IVARS PETERSON

The final game of the match lasted barely more than an hour. A rattled Garry Kasparov conceded defeat after falling into a trap that had been set by the IBM chess computer Deep Blue.

Deep Blue's triumph last May marked the first match victory by a chess-playing computer over a reigning world champion (SN: 5/17/97, p. 300). This week, the team of researchers who developed Deep Blue, led by Chung-Jen Tan of the IBM Thomas J. Watson Research Center in Yorktown Heights, N.Y., received the prestigious Fredkin Prize for Computer Chess. Established in 1980 by computer scientist Edward Fredkin, now at Carnegie Mellon University in Pittsburgh, the $100,000 award honors the first computer program to defeat a world champion in a regulation match.

The victory also represented the culmination of nearly 50 years of scientific and engineering effort. The field of computer chess got its start in 1950 with the ideas of applied mathematician Claude E. Shannon, then at Bell Telephone Laboratories, who proposed the basic search and evaluation strategies that still underlie the way computers generate chess moves.

Since that time, one chess-playing computer after another has held center stage, each eventually falling to a faster, more powerful successor: KAISSA, MAC HACK, CHESS 4.6, Belle (SN: 10/8/83, p. 236), CRAY BLITZ (SN: 10/29/83, p. 276), Hitech (SN: 10/26/85, p. 260), and Deep Thought (SN: 10/28/89, p. 276), the immediate predecessor of Deep Blue.

"The beauty of computer chess was that ideas could be tested in competition," says computer scientist Monty Newborn of McGill University in Montreal. "The good ideas went from one generation to the next, and the bad ideas fizzled out. That's science at its best."

Chess isn't the only game being played by computers at or near the championship level. At this week's Fourteenth National Conference on Artificial Intelligence in Providence, R.I., the Hall of Champions event brought together some of the world's top computer programs playing backgammon, bridge, checkers, chess, Go, Othello, and Scrabble.

"We're at a unique point in time," says Matthew L. Ginsberg of the University of Oregon in Eugene, who organized the event. "Ten years ago, no computers were close to the championship level in any of these games. Now, they even have the edge over human players in several of them. We can have the best computers competing against the best people."

Indeed, anyone can try his or her hand at playing top programs in many games just by going to the World Wide Web. Researchers and game developers monitor play and use the data to improve their programs.



Even in the earliest days of computers, researchers couldn't resist programming them to play games. It was an entertaining way to show off one's programming prowess, to test the computer, and to evaluate the efficacy of various techniques for organizing information in massive databases or searching among a wide range of possibilities to determine the best choice.

Chess was often the chosen battleground, though much simpler games such as tic-tac-toe served as handy programming exercises. Indeed, it's not difficult to write a short computer program that plays tic-tac-toe flawlessly, in effect demonstrating that no matter what the first move, the worst you can do is tie.
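As an illustration of how short such a program can be, the exhaustive search below computes the game-theoretic value of the empty board and prints 0, confirming that perfect play by both sides ends in a tie (the string encoding of the board is just one convenient choice):

    from functools import lru_cache

    WINS = [(0, 1, 2), (3, 4, 5), (6, 7, 8), (0, 3, 6),
            (1, 4, 7), (2, 5, 8), (0, 4, 8), (2, 4, 6)]

    def winner(board):
        for a, b, c in WINS:
            if board[a] != "." and board[a] == board[b] == board[c]:
                return board[a]
        return None

    @lru_cache(maxsize=None)
    def value(board, player):
        """Game-theoretic value with best play: +1 if X can force a win
        from this position, -1 if O can, 0 if it is a draw."""
        w = winner(board)
        if w:
            return 1 if w == "X" else -1
        if "." not in board:
            return 0
        other = "O" if player == "X" else "X"
        results = [value(board[:i] + player + board[i + 1:], other)
                   for i, cell in enumerate(board) if cell == "."]
        return max(results) if player == "X" else min(results)

    print(value("." * 9, "X"))  # 0: flawless play on both sides is a tie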

In recent years, researchers have solved a number of games similar to, but more challenging than, tic-tac-toe. In connect-4, two players take turns dropping white or black balls into seven tubes, each of which holds a maximum of six balls. The first person to create a line of four balls in a row, column, or diagonal wins. In this game, by playing correctly, the player going first can always win.

Go-Moku (or five-in-a-row), which is played on a 19-by-19 square grid, is also a guaranteed win for the savvy player moving first. The same applies to Qubic, a three-dimensional version of tic-tac-toe played on a 4-by-4-by-4 lattice. In nine-men's morris, an alignment-and-capture game popular in Europe, neither player can be assured of a triumph.

In such solved games, where a good player can recognize all the alternatives for any situation, a computer can be programmed to make the best possible moves at all times, and a win or a draw is guaranteed. Games such as chess, checkers, and Go are, in principle, solvable, and a computer could be programmed to play a perfect game. However, the number of possible moves is so enormous that no existing computer can figure out the entire game from beginning to end.

In the early days of computer chess, some researchers attempted to mimic the way humans play the game, building in pattern recognition, invoking various rules of thumb, and developing criteria for selecting which moves to consider while discarding the rest. However, the programmers found it extremely difficult to furnish the computer with enough knowledge to avoid making major mistakes.

The alternative that proved much more powerful was the brute-force search - simply checking out all the moves. The program looks ahead a fixed number of moves, evaluates the strength of each move, and selects the best one. Adding knowledge about the game and refined algorithms has made searches more responsive to actual game situations and turned this strategy into a remarkably effective mode of operation.

At the same time, the steadily increasing speed of computers has allowed chess programs to search more and more moves into the future. Experiments have clearly demonstrated that the faster the computer, the better a program plays, simply because it can perform a more extensive search. "That's counter to what a lot of people argued a number of years ago," Newborn says.

"The message from chess is profound and widely applicable," says Carnegie Mellon's Hans Berliner. "Brute force is a practical way of doing things." The success of computers like Deep Blue also highlights the fact that the way computers play a game differs fundamentally from the way people play it. From a human perspective, computers sometimes make weird moves; yet more often than not, the best programs somehow manage to succeed in the end.

That difference in style can be very valuable. "We're good at pattern matching, and we're good at applying rules," Ginsberg says. "Machines are good at searching."

"This means that the capabilities of computers are complementary to ours," he continues. "Together, we can solve problems that neither of us can solve individually."

Moreover, "we need to face the fact that things that once could be done only through human intelligence can now be done in other ways as well," says former U.S. chess champion Patrick G. Wolff of Cambridge, Mass. "The intriguing question is, how many things are there like that?"



Even before Deep Blue defeated Kasparov, a program named Chinook had become, in effect, the world checkers champion.

Created by Jonathan Schaeffer of the University of Alberta in Edmonton and his team, the checkers-playing program incorporates the types of search strategies originally developed for chess. It also includes enormous databases covering every possible position that can be reached once there are fewer than a certain number of pieces on the board (SN: 7/20/91, p. 40).

With such databases at its disposal and with the game down to a manageable number of pieces, Chinook can look up all possible outcomes and select an appropriate sequence of moves to ensure a win, maintain a draw, or delay a loss. From then on, it plays flawlessly.
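The databases themselves are far too large to reproduce, but the way a search consults one reduces to a table probe. In the sketch below, the position keys and stored outcomes are invented placeholders; Chinook's real tables are built by working backward from final positions:

    # Stand-in for an endgame database: canonical position -> outcome for
    # the side to move. These entries are illustrative only; the real
    # tables cover every position below a piece-count threshold.
    ENDGAME_DB = {
        ("WK14", "BK23"): "draw",
        ("WK14", "B19"): "win",
    }

    def probe(position):
        """Return the stored outcome, or None when the position has too
        many pieces to be in the database and normal search must continue."""
        return ENDGAME_DB.get(position)

    print(probe(("WK14", "B19")))   # 'win': play out the stored winning line
    print(probe(("WK1", "BK32")))   # None: fall back to regular search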

In 1994, Chinook played world checkers champion Marion Tinsley, a retired mathematician from Tallahassee, Fla., and a formidable opponent. Since 1975, he had lost only a handful of the thousands of games he had played in tournaments and exhibition matches. Two of those losses had occurred in 1992, when Tinsley successfully defended his world title against Chinook in a man-versus-machine match of 40 games (SN: 10/3/92, p. 217).

In the 1994 rematch, the first six games between Tinsley and Chinook ended in draws. Then, Tinsley had to resign for health reasons. He was diagnosed as having cancer, and he died a year later.

"Tinsley was without a doubt the best checker player of all time - an absolutely incredible talent," Schaeffer says. Having beaten the top remaining checker players, Chinook qualifies as the current champion.

Whether Chinook could ever have defeated Tinsley remains a nagging question, and Schaeffer has considered the possibility of calculating the game from beginning to end and building a perfect checker player to settle the issue.

"I certainly believe we're capable of solving the game," Schaeffer says. "The technology is here. It's just a matter of committing the time and resources." Chinook is also a research experiment. For instance, Schaeffer and his team have built a database of 444 billion positions - every position with eight or fewer pieces on the board. "This is a vast repository of information. To a checker player, it's a gold mine," he says. Whatever data-mining techniques are developed to sift through the information and identify what's important would benefit many fields.

Meanwhile, Chinook continues to play in tournaments and exhibitions. The only major change in the program since 1994 has been the removal of restrictions that gave it an extremely cautious style specifically designed to counter the near-perfect play of Tinsley.

Instead of achieving draw after draw after boring draw, Chinook has started to play games that are truly exciting, Schaeffer says. "The program's winning percentage has gone up and up, and its losing percentage has remained the same - zero.

"That was a relatively minor change in the [computer program], but it had a dramatic impact on the play," he adds.



The world's top backgammon programs differ markedly from those that play checkers and chess. Instead of relying on brute-force searches, the software incorporates a model brain - an artificial neural network - that allows the program to learn the game from scratch.

In backgammon, two players race their pieces around a track on a rectangular board. Each player uses two dice to determine how far to move one or two pieces at a time with the objective of winning the race by conveying all of one's 15 pieces around the playing surface and off the board.

The neural network approach to playing backgammon was pioneered by IBM's Gerald Tesauro, who created a program called TD-Gammon. "TD" refers to "temporal difference," which describes the program's underlying mathematical recipe for self-learning. "We turn the neural net loose on this task, and it just learns by playing lots and lots of games against itself," Tesauro says. "It learns very well - though some things are learned better than others."
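The temporal-difference rule is simple to state: after each move, nudge the value estimate of the previous position toward the value of the position that follows. The table-based sketch below applies it to a toy dice race; TD-Gammon instead trains a neural network on backgammon positions, so the game and constants here are purely illustrative:

    import random

    # TD(0) learning on a toy race: roll 1 or 2, first to reach GOAL wins.
    ALPHA, GAMMA, GOAL = 0.1, 0.9, 10
    V = [0.0] * (GOAL + 1)  # learned value estimate for each square

    for episode in range(5000):  # learn purely from repeated play
        state = 0
        while state < GOAL:
            next_state = min(state + random.randint(1, 2), GOAL)
            reward = 1.0 if next_state == GOAL else 0.0
            # Temporal-difference update: shift V[state] toward the
            # one-step-later estimate, reward + GAMMA * V[next_state].
            V[state] += ALPHA * (reward + GAMMA * V[next_state] - V[state])
            state = next_state

    # Squares nearer the goal have learned higher values.
    print(round(V[0], 2), round(V[8], 2))  # roughly ~0.5 and ~0.95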

The original concern was that such an approach would lead to a program that lacks flexibility and is unable to cope with unexpected situations presented by players using unconventional tactics. "It actually does very well against all kinds of different strategies," Tesauro says. The random rolls of the dice during the learning phase seem to force the neural network to explore all sorts of situations and develop remarkably robust strategies.

"Unfortunately, there are strategies and situations that never occur when you play just against yourself," says Brian Sheppard, a software developer in Concord, Mass., who is working on a new expert backgammon player. "You have to be told about them. An expert [human] player can make these situations arise with some regularity."

Backgammon nevertheless remains the one major success for automated learning in the domain of games. The neural network approach has generally not worked as well for deterministic games such as chess, checkers, Othello, and Go, which have no element of chance.

Other leading backgammon programs, such as JellyFish, have followed TD-Gammon's lead, also incorporating neural network learning and sometimes adding search techniques. Several of these programs rank among the top 20 backgammon players in the world.

"Games are good proving grounds for testing learning algorithms," Tesauro remarks. "There's lots of complexity, but the task is clear-cut and the rules extremely clean."



In card games such as contract bridge and poker, players deal not only with chance but also with incomplete information about what cards the other players hold. It's just this sort of uncertainty that makes these games so alluring to their practitioners - and so difficult for programmers.

Bridge is a card game for four players who form two partnerships. The deck of cards is dealt evenly to the four players, so each gets 13 cards. Players start by bidding for the right to play the hand, and whichever side makes the highest bid then tries to take the number of tricks indicated by its bid.

The two key elements of the game are bidding and card play. The sticking point is that no single player knows precisely how the cards are distributed around the table.

Of the commercial bridge-playing programs now available, none ranks highly as a contender at the tournament level, though several are useful for teaching novices to play. At the research level, Ginsberg, who is a strong bridge player himself, has developed a program called GIB, for Goren in a Box (named after Charles H. Goren, a prominent bridge expert and instructor). "It's the first expert-level computer bridge player," Ginsberg asserts.

To overcome the limitation imposed by incomplete information about card distribution, Ginsberg has programmed GIB to simulate play by dealing out a large number of potential hands for the other players, none of them containing the cards it holds. GIB then selects the playing strategy that works best on average.

"GIB can analyze a bridge hand in about a second and a half," Ginsberg says. "In a way, the simulations stand in for judgment. I've shown that you can effectively bring raw computational power to bear in the game."

The program is already a member of the American Contract Bridge League. In July, it played in its first serious tournament, and despite the glitches that inevitably bedevil a freshly minted computer program still under development, it made a respectable showing and earned master points.



In the realm of games, Go presents a particularly tough challenge to software developers. Usually played on a 19-by-19 grid, the game is deceptively simple. Two players alternate in placing black and white stones on the grid's intersection points, each with the goal of capturing more territory and taking more prisoners than the other.

Of the computer programs participating in the Hall of Champions, the one that plays Go is farthest from the championship level. This program, Handtalk, developed by Zhixing Chen of ZhongShan University in Guangzhou, China, is perhaps the strongest computer Go player of recent years. Though details about how the program operates are sketchy, it appears to mix some pattern matching with a limited search strategy. At this stage, it lags far behind the performance of chess programs.

So the game isn't over yet.

Go remains an unsolved puzzle; computer bridge is still missing a few hands; backgammon programs lack the killer instinct of a champion; and there are moves still to be made even in chess.

"Deep Blue will continue to improve its play," Newborn predicts. "But there's a long way to go before computers play perfect chess."

Chess experts who helped the IBM team identify weaknesses in strategy proposed refinements that contributed significantly to Deep Blue's remarkable level of play against Kasparov in May. "Its performance was truly marvelous," Berliner says. "It played as if it had some goals. Almost certainly, that was done with some mechanism other than depth of search."

Researchers are keenly interested in seeing Deep Blue play more games against Kasparov and other opponents in order to evaluate its performance in greater detail. Kasparov also learns his lessons, and if he plays Deep Blue again, there are sure to be new surprises.

"We've seen tremendous progress, and there have been a lot of scientific surprises along the way," Newborn contends. "The whole field of [artificial intelligence] has a lot to learn from what's happened in computer chess."



References:

Newborn, M. 1997. Kasparov versus Deep Blue: Computer Chess Comes of Age. New York: Springer-Verlag.

Schaeffer, J. 1997. One Jump Ahead: Challenging Human Supremacy in Checkers. New York: Springer-Verlag.

Further Readings:

Levy, D. 1983. Computer Gamesmanship: Elements of Game Design. New York: Simon & Schuster.

Levy, D., and M. Newborn. 1991. How Computers Play Chess. New York: W.H. Freeman.

Marsland, T.A., and J. Schaeffer, eds. 1990. Computers, Chess, and Cognition. New York: Springer-Verlag.

The Hall of Champions event at the Fourteenth National Conference on Artificial Intelligence has a home page at http://www.aaai.org/Conferences/National/1997/champions.

Deep Blue's most recent match against Garry Kasparov is detailed at http://www.chess.ibm.com.

L. Victor Allis describes various games that he has solved, including connect-4, Qubic, and Go-Moku, in his book Searching for Solutions in Games and Artificial Intelligence (http://www.cs.vu.nl/~victor/thesis.html).

You can find out more about the checkers program Chinook and play against it at http://www.cs.ualberta.ca/~chinook.

The WWW Backgammon Page is at http://www.gamesdomain.com/backgammon. Details of how Gerald Tesauro's TD-Gammon functions are posted at http://www.research.ibm.com/massdist/tdl.html. The program itself is available free with OS/2 software from IBM (http://www.austin.ibm.com/pspinfo/fundtdgammon.html). You can obtain information about playing JellyFish, developed by Fredrik Dahl, at http://www.effect.no/what is.html.

Matthew Ginsberg describes his bridge-playing program GIB at http://www.cirl.uoregon.edu/ginsberg/bridge.html.

An extensive introduction to the computer Go field is available at http://www.psyuq.edu.au/~jay/go/CS-TR-339.hmtl. The program HandTalk, available as a commercial product, is described at http://www.webwind.com/go/soft/ECC19.html. One of the strongest computer Go players is The Many Faces of Go at http://pw1.netcom.com/~fotland/manyfaces.html. The American Go Association has a website at http://www.usgo.org/.

Scrabble information, including a section on computer Scrabble, is available at http://www.teleport.com/~stevena/scrabble/faqtext.html.

A large number of links related to machine learning in games is posted at http://forum.swarthmore.edu/~jay/learn-game/index.html.

Sources:

Hans Berliner
Computer Science Department
Carnegie Mellon University
Pittsburgh, PA 15213

Matthew L. Ginsberg
Computational Intelligence Research Laboratory
University of Oregon
Eugene, OR 97403

Monty Newborn
School of Computer Science
McGill University
Montreal, Quebec
Canada H3A 2A7

Jonathan Schaeffer
Department of Computing Science
University of Alberta
Edmonton, Alberta
Canada T6G 2H1

Brian Sheppard
60 Thoreau Street
No. 187
Concord, MA 01742-9116
E-mail: sheppardco@aol.com

Gerald Tesauro
IBM Thomas J. Watson Research Center
P.O. Box 704
Yorktown Heights, NY 10598
