Introduction
As our world expands through the growing abilities and applications of computers in our everyday lives, the role of the computer seems to have been reversed. We once knew that the computer understood only what we programmed it to understand; now, however, much of our society learns more from computers than it is able to put into them (Dumm, 1986, p. 69).
History
"The human aspiration to create intelligent machines has appeared in myth and literature for thousands of years, from stories of Pygmalion to the tales of the Jewish Golem." Anat Treister-Goren, Ph.D. (http://www.a-i.com/)
The concepts behind the development of artificial intelligence can be traced as far back as ancient Greece. Even something as simple as the abacus has in some way contributed to the idea of artificial intelligence. One of the biggest breakthroughs in the field, however, was the invention of the computer.
Many encyclopaedias and other reference works state that the first large-scale automatic digital computer was the Harvard Mark I, developed by Howard H. Aiken and his team in America between 1939 and 1944. In the aftermath of World War II, however, it was discovered that a program-controlled computer called the Z3 had been completed in Germany in 1941, which means that the Z3 pre-dated the Harvard Mark I. Prof. Horst Zuse (http://www.epemag.com/zuse/)
The Z3 was followed shortly by Britain's Colossus in 1943, and two years later America produced another system, the ENIAC (Electronic Numerical Integrator and Computer).
Years later, in the early 1950s, the RAND Corporation completed the JOHNNIAC (John v. Neumann Numerical Integrator and Automatic Computer), named after John von Neumann. The JOHNNIAC was an early effort at AI prog...
... middle of paper ...
...reliably understand them under all possible circumstances. (http://www.asimovonline.com/asimov_FAQ.html)
Conclusion
As our research into science and technology ever increases, it seems inevitable that in the near future artificially intelligent machines will exist and become part of our everyday lives, just as modern computers have today.
Bibliography
Asimov, I. (1967) I, Robot. London: Dobson.
Dumm, T. et al. (1986) Mind Over Machine. New York: Free Press.
Kurzweil, R. (1990) The Age of Intelligent Machines. Cambridge, MA: MIT Press.
Nathanson, M. (1984) 'Using Artificial Intelligence Systems May Be Smartest Way to Trim Costs', Modern Healthcare, vol. 14, p. 138.
Oxford English Dictionary (2005) 3rd ed. Oxford: Oxford University Press.
Treister-Goren, A. (n.d.) http://www.a-i.com/
Zuse, H. (n.d.) http://www.epemag.com/zuse/
Asimov Online FAQ, http://www.asimovonline.com/asimov_FAQ.html
Artificial Intelligence (AI) is one of the newest fields in science and engineering. Work started in earnest soon after World War II, and the name itself was coined in 1956 by John McCarthy. Artificial intelligence is the art of creating machines that perform functions that require intelligence when performed by people [Kurzweil, 1990]. It encompasses a huge variety of subfields, ranging from the general (learning and perception) to the specific, such as playing chess, proving mathematical theorems, writing poetry, driving a car on a crowded street, and diagnosing diseases. Artificial intelligence is relevant to any intellectual task; it is a truly universal field. In the future, intelligent machines will replace or enhance human capabilities in...
The foundations of "artificial intelligence" were set forth by A. M. Turing in his 1950 paper "Computing Machinery and Intelligence", wherein he made predictions about the field (the term itself came later). He claimed that by 1960 a computer would be able to formulate and prove complex mathematical theorems, write music and poetry, become world chess champion, and pass his test of artificial intelligence. In his test, a computer is required to carry on a compelling conversation with humans, fooling them into believing they are speaking with another human. All of his predictions require a computer to think and reason in the same manner as a human. Despite 50 years of effort, only the chess championship has come true. By refocusing artificial intelligence research on a more humanlike, cognitive model, the field will create machines that are truly intelligent, capable of meeting Turing's goals. Currently, the only "intelligent" programs and computers are not really intelligent at all; rather, they are clever applications of different algorithms that lack expandability and versatility. The human intellect has been used only in limited ways in the artificial intelligence field; however, it is the ideal model upon which to base research. Concentrating research on a more cognitive model will allow the artificial intelligence (AI) field to create more intelligent entities and ultimately, once appropriate hardware exists, a true AI.
Soldiers sown from dragon's teeth, golden robots built by Hephaestus, and three-legged tables that could move under their own power: the Greeks were the first to cross the divide between machine and human. Although the history of Artificial Intelligence (AI) began with these myths and speculations, it is now becoming a part of everyday life. How did it evolve so quickly, and what are its implications for the future?
When World War II broke out in 1939, the United States was severely behind technologically. Almost no mathematical innovations had been integrated into military use. The government therefore placed great emphasis on the development of electronic technology that could be used in battle. Although it began as a simple computer that would aid the army in computing firing tables for artillery, the eventual result was the ENIAC (Electronic Numerical Integrator and Computer). Before the ENIAC, it took a skilled mathematician over 20 hours to complete a single computation for a firing situation. When the ENIAC was completed and unveiled to the public on Valentine's Day in 1946, it could complete such a complex problem in 30 seconds. The ENIAC was used quite often by the military but never contributed any spectacular or essential data. The main significance of the ENIAC was that it was an incredible achievement in the field of computer science and can be considered the first digital and per...
...was introduced in 1971. IBM then came out with more advanced computers such as the System/38 in 1978 and the AS/400 in 1988.
The science behind humanlike robots is advancing. They are becoming smarter, more mobile, and more autonom...
Artificial intelligence should be approached with caution. In recent years, and even in the decades before, producing artificial intelligence has been a technological dream. From movies, pop culture, and recent technological advancements, there is an obsession with robots and their ability to perform actions that require human intelligence. Artificial intelligence has become a real and approachable prospect today, but it should be pursued with care and diligence. Humans can create advanced artificial intelligence but should not, because of the harm it may cause, the monumental advancement the technology still requires, and the fact that its harms outweigh its benefits.
When most people think of artificial intelligence, they might think of a scene from I, Robot or from 2001: A Space Odyssey. They might think of robots that closely resemble humans starting a revolution against humanity, so that suddenly, because of man's creation, man is no longer at the pinnacle of earth's hierarchy of creatures. For this reason, it might scare people when I say that we already use artificial intelligence in everyday society. While it might not be robots fighting to win their freedom to live, or a defense system that decides humanity is the greatest threat to the world, artificial intelligence already plays a big role in how business is conducted today.
Artificial intelligence is a concept that has been around for many years. The ancient Greeks had tales of robots, and Chinese and Egyptian engineers built automatons. However, the idea of actually trying to create a machine to perform useful reasoning may have begun with Ramon Llull around 1300 CE. After him came Gottfried Leibniz, whose calculus ratiocinator extended the idea of the calculating machine: it was meant to execute operations on ideas rather than numbers. The study of mathematical logic eventually brought the world to Alan Turing's theory of computation, in which he stated that a machine, by shuffling symbols as simple as "0" and "1", would be able to imitate any possible act of mathematical deduction.
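That idea is concrete enough to sketch in a few lines of code. The toy program below (the state names and rule table are illustrative inventions, not drawn from the essay or from Turing's paper) steps a tape of "0"/"1" symbols through a fixed rule table, in the spirit of Turing's symbol-shuffling machine; here the rules simply invert every bit and halt at the end of the tape.

```python
# A minimal sketch of a Turing-style machine: a head moves along a tape,
# rewriting "0"/"1" symbols according to a fixed rule table.
def run_turing_machine(tape, rules, state="start"):
    """rules maps (state, symbol) -> (new_symbol, move, new_state),
    where move is +1 (right) or -1 (left); '_' stands for blank."""
    tape = list(tape)
    head = 0
    while state != "halt":
        # Read the symbol under the head; off-tape cells count as blank.
        symbol = tape[head] if 0 <= head < len(tape) else "_"
        new_symbol, move, state = rules[(state, symbol)]
        if 0 <= head < len(tape):
            tape[head] = new_symbol
        head += move
    return "".join(tape)

# Illustrative rule table: flip each bit, move right, halt on blank.
rules = {
    ("start", "0"): ("1", +1, "start"),
    ("start", "1"): ("0", +1, "start"),
    ("start", "_"): ("_", +1, "halt"),
}

print(run_turing_machine("010110", rules))  # -> 101001
```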
Artificial intelligence is defined as the development of computer programs to solve complex problems by applying processes that are analogous to human reasoning processes. Roughly speaking, a computer is intelligent...
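As a small, hedged illustration of that definition (the puzzle choice and function name are mine, not the author's), the sketch below solves the classic water-jug problem by breadth-first search: the program systematically explores states much as a person might reason through the possibilities one pour at a time.

```python
# Breadth-first search over puzzle states: a simple example of a program
# "reasoning" its way to a goal by exploring possibilities systematically.
from collections import deque

def solve_jugs(cap_a=4, cap_b=3, goal=2):
    """Find a shortest sequence of pours leaving `goal` litres in jug A."""
    start = (0, 0)
    parents = {start: None}      # visited states and their predecessors
    queue = deque([start])
    while queue:
        a, b = queue.popleft()
        if a == goal:
            # Walk back through parents to reconstruct the solution path.
            path, state = [], (a, b)
            while state is not None:
                path.append(state)
                state = parents[state]
            return list(reversed(path))
        # All legal moves: fill a jug, empty a jug, or pour one into the other.
        pour_ab = min(a, cap_b - b)
        pour_ba = min(b, cap_a - a)
        for nxt in [(cap_a, b), (a, cap_b), (0, b), (a, 0),
                    (a - pour_ab, b + pour_ab), (a + pour_ba, b - pour_ba)]:
            if nxt not in parents:
                parents[nxt] = (a, b)
                queue.append(nxt)
    return None  # no sequence of pours reaches the goal

print(solve_jugs())
# -> [(0, 0), (4, 0), (1, 3), (1, 0), (0, 1), (4, 1), (2, 3)]
```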
Shyam Sankar, named by CNN as one of the world's top ten leading speakers, says the key to AI's evolution is the improvement of human-computer symbiosis. Sankar believes humans should be relied upon more heavily in AI and technological development. Sankar's theory is just one of many that will shape future innovations in AI. In the next phase of AI, scientists want to combine human and machine strengths to create something superintelligent. As history has taught us, the unimaginable is possible with determination. Just over fifty years ago, AI amounted to robots completing a series of commands. Then it progressed to the point that AI could be integrated into society, as seen in interactive interfaces like Google Maps or the Siri app. Today, humans have taught machines to take on human jobs and tasks effectively, creating a more efficient world. The future of AI is up to the creativity and innovation of current society's scientists, leaders, thinkers, professors, students and...
Artificial intelligence began as the question of whether the human thought process could be mechanized. It was around the 1940s and '50s that a group of scientists from a variety of fields, such as mathematics, economics, and engineering, came together to discuss the possibility of creating an artificial brain and its uses. This was the birth of the field of artificial intelligence. While artificial intelligence may prove technologically revolutionary by introducing new ideas such as quantum computers or robots, those same ideas could result in the downfall of mankind. The result could range from the collapse of the economy to the end of the human race, or even the corruption of the next generation and those that follow, all problems that raise the possibility of the end of the earth. The more we learn about technology and advance it further, the closer we come to the extinction of the human race. These are the reasons why the advancement of artificial intelligence should be halted or banned, so that no harm can be done even without intent.
The First Generation of Computers
The first generation of computers, beginning around the end of World War II and continuing until around 1957, included computers that used vacuum tubes, drum memories, and programming in machine code. Computers at that time were mammoth machines that did not have the power of our present-day desktop microcomputers. In 1950, the first real-time, interactive computer was completed by a design team at MIT. The "Whirlwind Computer," as it was called, was a revamped U.S. Navy project for developing an aircraft simulator.
The computer's evolution has been an amazing one. There have been astonishing achievements in the computer industry, whose roots date back almost 2,000 years. The earliest ancestor of the computer dates to the first century, but the electronic computer has been around for only a little over half a century. Throughout the last 40 years computers have changed drastically, and they have greatly impacted the American lifestyle. A computer can be found in nearly every business and in one out of every two households (Hall, 156). Our society relies heavily on computers for almost all of its daily operations and processes. Only once in a lifetime will a new invention like the computer come about.
Computer technology has not only solved problems but also created some, including a certain amount of culture shock as individuals attempt to deal with the new technology. A major role of computer science has been to alleviate such problems, mainly by making computer systems cheaper, faster, more reliable, and easier to use.