Every material on the planet is built from the same basic building blocks: atoms. These building blocks give rise to three characterizations that describe any material: type, arrangement, and connectedness. The study of materials has carried humanity into the silicon age, enabling ever faster and more powerful computers and, perhaps eventually, the coalescence of man and machine and the creation of artificial intelligence. How have computers become so advanced that we can even consider artificial intelligence?
To delve straight into artificial intelligence, however, would be to put the cart before the horse. How have computers come to where they are? What makes them possible? What are they made of? Step one: the microchip. A microchip is a self-contained electrical circuit on a wafer of silicon. Why silicon? Every material has three characterizations specific to it: it is made of particular atoms (type), those atoms are arranged in a manner particular to the material (arrangement), and they bond in a definitive manner (connectedness).
One way to think about this is to consider a pile of loosely bonded carbon atoms, also known as coal: it is brittle, highly flammable, and black. Now take those exact same carbon atoms and bond them into a perfectly regular lattice. The result is diamond, one of the hardest substances on earth, so clear it refracts light, and far more resistant to burning.
When building circuits, scientists look for very specific properties in the substrate of the circuit: in this case, controllable conductivity to guide electron flow along a designed pathway, resistance to deterioration, and the ability to reproduce the material flawlessly and inexpensively. Enter silicon. Scientists discove...
...off their type, arrangement, and connectedness until we get where we need to be and produce the material perfectly suited to the task at hand.
Works Cited
Berkeley, I. (1997). What is artificial intelligence? Retrieved from http://www.ucs.louisiana.edu/~isb9112/dept/phil341/wisai/WhatisAI.html
Dow Corning Inc. (2004). Silicones: Changing the picture of electronics worldwide. Retrieved from https://www.dowcorning.com/vi_VN/content/vietnam/ces_electronics.pdf
Intel Corporation. (2014). Moore's law and Intel innovation. Retrieved from http://www.intel.com/content/www/us/en/history/museum-gordon-moore-law.html
Nobel Media AB. (2003). The history of the integrated circuit. Retrieved from http://www.nobelprize.org/educational/physics/integrated_circuit/history/
Trefil, J., & Hazen, R. M. (2013). The sciences: An integrated approach (7th ed.). John Wiley & Sons.
It is necessary to look at the development of artificial intelligence in order to put this idea into context. The concept of intelligent and aware constructs began to emerge in the 1950s and '60s, as scientists from many fields came together to discuss the possibilities of advanced computer research. The first major step was a scientific conference at Dartmouth College in 1956, where the general concepts and possible paths of research for AI were fleshed out. As described in Artificial Intelligence: A Modern Approach, this conference was "the birth of artificial intelligence." This was mostly a theoretical stage, yet the attending experts predicted that, with a large investment, working technology could be available within a generation (16). Once officially established, AI research and discovery exploded. Computer programs, then a brand-new idea, were already conquering algebra problems and speech recognition; some could even reproduce English (18). It was clear that artificial intelligence research was going to be at the fo...
The official foundations for "artificial intelligence" were set forth by A. M. Turing in his 1950 paper "Computing Machinery and Intelligence," in which he asked whether machines can think and made predictions about the field (the term "artificial intelligence" itself was coined later, by John McCarthy for the 1956 Dartmouth conference). He and other early researchers predicted that a computer would soon be able to formulate and prove complex mathematical theorems, write music and poetry, become world chess champion, and pass Turing's test of artificial intelligence. In his test, a computer is required to carry on a compelling conversation with humans, fooling them into believing they are speaking with another human. All of these predictions require a computer to think and reason in the same manner as a human. Despite 50 years of effort, only the chess championship has come true. By refocusing artificial intelligence research on a more humanlike, cognitive model, the field will create machines that are truly intelligent, capable of meeting Turing's goals. Currently, the only "intelligent" programs and computers are not really intelligent at all; rather, they are clever applications of different algorithms that lack expandability and versatility. The human intellect has been used only in limited ways in the artificial intelligence field, yet it is the ideal model upon which to base research. Concentrating research on a more cognitive model will allow the artificial intelligence (AI) field to create more intelligent entities and ultimately, once appropriate hardware exists, a true AI.
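Turing's test can be made concrete as a simple protocol. The sketch below is a minimal, hypothetical illustration (the function names and toy respondents are inventions for this example, not any standard formulation): a judge interrogates a hidden respondent, chosen at random to be either the human or the machine, and must guess which it is from the transcript alone.

```python
import random

def imitation_game(judge, human, machine, n_questions=5):
    """One round of a Turing-style imitation game.

    `human` and `machine` map a question string to a reply string;
    `judge` maps a transcript to True ("machine") or False ("human").
    Returns True when the judge guesses wrong, i.e. was fooled.
    """
    # Secretly choose which hidden respondent the judge interrogates.
    is_machine = random.choice([True, False])
    respondent = machine if is_machine else human

    # The judge sees only the text transcript, never the respondent.
    transcript = [(q, respondent(q))
                  for q in (f"Question {i}" for i in range(n_questions))]

    return judge(transcript) != is_machine

# Toy respondents a judge cannot tell apart: both simply echo.
echo = lambda q: f"You asked: {q}"
naive_judge = lambda transcript: False  # always guesses "human"

random.seed(0)
rounds = [imitation_game(naive_judge, echo, echo) for _ in range(1000)]
fooled_rate = sum(rounds) / len(rounds)
```

Because this naive judge always answers "human" while the machine is chosen about half the time, it is fooled in roughly half the rounds; a real test, of course, turns on whether a discerning judge can be fooled.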
Artificial intelligence has come a long way since the first robot. In 1950, Alan Turing of Britain published "Computing Machinery and Intelligence," the paper widely regarded as the birth of artificial intelligence as we know it. The first robot to demonstrate the use of artificial intelligence was built in 1969; its purpose was to test navigation using basic tools such as cameras and bump sensors (Marshall 371). Since then, robots have improved enormously, and they will continue to do so. As the world advances, so does technology, steadily becoming better and more reliable. Artificial intelligence is a technology of real value to our nation: it is used in the medical field, it has been helpful to military forces, and it is helping our world become a better place.
Artificial intelligence (AI) refers to computer algorithms that exhibit functions representing intelligence, or that duplicate certain components and elements of intelligence (Novella, 2017). Computers are good at crunching numbers, running algorithms, recognizing patterns, and searching and matching data. Artificial intelligence is also defined as the simulation of human intelligence by machines, especially computer systems (Rouse, 2016). These processes involve learning (the acquisition of information and the rules for using it), reasoning (using those rules to reach approximate conclusions), and self-correction. AI has applications in almost every way we use computers in society (Smith, 2006).
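The three processes named above can be shown in a deliberately tiny sketch (the data and names here are hypothetical, not from any cited source): a one-dimensional classifier that learns a threshold rule from examples, reasons by applying the rule, and self-corrects whenever a prediction is wrong.

```python
def train(examples, epochs=20, lr=0.1):
    """Learn a 1-D rule: predict label 1 when x >= threshold."""
    threshold = 0.0  # learning: the rule acquired from the data
    for _ in range(epochs):
        for x, label in examples:
            prediction = 1 if x >= threshold else 0  # reasoning
            if prediction != label:                  # self-correction
                threshold += lr if prediction == 1 else -lr
    return threshold

# Hypothetical training data: inputs below 0.5 belong to class 0.
examples = [(0.2, 0), (0.4, 0), (0.6, 1), (0.9, 1)]
threshold = train(examples)
predictions = [1 if x >= threshold else 0 for x, _ in examples]
```

The loop nudges the learned threshold toward the boundary between the two classes until every example is classified correctly, which is the same learn-reason-correct cycle the definition describes, in miniature.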
Soldiers sown from dragon's teeth, golden robots built by Hephaestus, and three-legged tables that could move under their own power: the Greeks were the first to imagine crossing the divide between machine and human. Although the history of artificial intelligence (AI) began with these myths and speculations, it is now becoming a part of everyday life. How did it evolve so quickly, and what are its implications for the future?
It is hard to believe, in this day and age, how few things remain that have not been made simpler by the spread of computers over the last decade or two. But what is a computer, why and how is a computer so powerful, and how does a computer know what to do? Computers are very simple machines once one gets past the programming.
The concepts behind the development of artificial intelligence can be traced as far back as ancient Greece. Even something as small as the abacus has in some way led to the idea of artificial intelligence. However, one of the biggest breakthroughs in the area of AI came when computers were invented.
Strong artificial intelligence is ultimately achieved when qualities attributed to the human brain are demonstrated by machines.
Artificial intelligence is about making computers behave in a more humanlike fashion, and in much less time than a human takes; hence the name. Artificial intelligence differs from psychology in its emphasis on computation, and from computer science in its emphasis on perception, reasoning, and action. It makes machines smarter and more useful. [2]
Shyam Sankar, named by CNN as one of the world's top ten leading speakers, says the key to AI's evolution is the improvement of human-computer symbiosis. Sankar believes humans should be relied upon more heavily in AI and technological development. His theory is just one of many that will shape future innovations in AI. In the next phase of AI, scientists want to combine human and machine strengths to create something superintelligent. If history has taught us anything, it is that the unimaginable is possible with determination. Just over fifty years ago, AI meant robots completing a series of demands. It then progressed to the point that AI could be integrated into society, as seen in interactive interfaces like Google Maps or the Siri app. Today, humans have taught machines to take on human jobs and tasks effectively, creating a more efficient world. The future of AI is up to the creativity and innovation of current society's scientists, leaders, thinkers, professors, students and
Artificial intelligence is the idea that the human thought process can be mechanized. Around the 1940s and '50s, a group of scientists from different fields, such as mathematics, economics, and engineering, came together to discuss the possibility of creating an artificial brain and its uses. This was the birth of the field of artificial intelligence. While artificial intelligence may prove technologically revolutionary, introducing new ideas such as quantum computers or robots, those same ideas could lead to the downfall of mankind. The result could range from the collapse of the economy to the end of the human race, or even the corruption of the next generation and those after it. All of these problems point to the possibility of the end of the earth. The more we learn about technology and the further we advance it, the closer we get to the extinction of the human race. These are the reasons the advancement of artificial intelligence should be halted or banned, so that no harm can be done, even without intent.
Artificial intelligence is defined as developing computer programs to solve complex problems through processes analogous to human reasoning. Roughly speaking, a computer is intelligent
Artificial intelligence is a concept that has been around for many years. The ancient Greeks had tales of robots, and Chinese and Egyptian engineers built automatons. However, the idea of actually trying to create a machine to perform useful reasoning may have begun with Ramon Llull around 1300 CE. After him came Gottfried Leibniz, whose calculus ratiocinator extended the idea of the calculating machine: it was meant to execute operations on ideas rather than numbers. The study of mathematical logic eventually brought the world to Alan Turing's theory of computation, in which Turing stated that a machine, by shifting between symbols such as "0" and "1," would be able to imitate any conceivable act of mathematical
Thousands of years ago, calculations were done with people's fingers and with pebbles found lying on the ground. Technology has transformed so much that today the most complicated computations are done within seconds. Human dependency on computers is increasing every day; just think how hard it would be to live a week without one. We owe the advancement of computers and other electronic devices to the intelligence of the people who came before us.