Artificial Intelligence Essay


What if I told you that robots could take over the world at any moment? Well, maybe not that soon, but sooner than you think, because Artificial Intelligence is always advancing. When you think of Artificial Intelligence, you probably think of movies like Star Wars or Terminator, but it’s actually very real, and we use it every day. Artificial Intelligence, or AI for short, is a machine’s ability to think and act like a human. It’s the brain inside a robot, but not the robot itself. Humans and animals have what we call natural intelligence, or NI. The main difference between the two is that natural intelligence’s main goal is to survive and adapt to the environment, while Artificial Intelligence’s main goal …

The first branch is ANI, or Artificial Narrow Intelligence. Examples of ANI include any AI system that is below human-level intelligence. Google Maps is a good example of ANI because it specializes in one thing only; it can give you directions to places, but can’t do much else. The second branch is called AGI, or Artificial General Intelligence. AGI is about the same as human-level intelligence. An example of AGI would be an AI system named Nigel, invented by Kimera, a company focused on improving AGI. Nigel was programmed not to already know things, but to learn things the way humans do. Nigel learns through a constantly growing network of changing connections that can reach any connected device. The more people Nigel learns from, the faster his learning accelerates, taking him a step closer to ASI. Artificial Super Intelligence is the third branch of Artificial Intelligence. ASI is higher than human-level intelligence. ASI has yet to be achieved because AGI hasn’t advanced far enough yet. However, it will be much easier to go from AGI to ASI than it was to go from ANI to AGI.

The Law of Accelerating Returns states that fundamental measures of information technology follow predictable and exponential trajectories, which basically means that the main measures of computers follow predictable and expanding paths. So, this means that the more AGI learns, the faster it learns. Imagine you are in the year 2000. The internet is starting out through dial-up, people are just beginning to hear about “Google,” and the first iPhone hasn’t even been released yet.
Now, imagine that you somehow knew how to time travel back thirty years and brought someone from the year 1970 with you into 2000. You show this person from the ’70s all of the new technological inventions that exist in 2000, like CDs, cell phones, and video games, and they are obviously amazed, because for someone from the 1970s, life is so completely different that they wouldn’t even imagine some of these things existing.
Then, imagine taking someone from 2000 forward just half that span: 15 years, into 2015. You would show them all the possibilities of smartphones and smartwatches, the availability of 3D printers, and accurate GPS systems. They’d be even more impressed than the person from the ’70s.
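The accelerating pace behind this thought experiment can be sketched with a little arithmetic. The two-year doubling period below is an assumed constant in the spirit of Moore’s Law (the real period varies by which measure you track), and `projected_measure` is just a hypothetical helper for illustration:

```python
# Sketch of exponential growth in a "fundamental measure" of computing
# (e.g., transistor count), assuming it doubles every 2 years. The
# doubling period is an assumption for illustration, not a fixed law.

def projected_measure(start_value: float, years: float,
                      doubling_period: float = 2.0) -> float:
    """Value of an exponentially growing measure after `years` years."""
    return start_value * 2 ** (years / doubling_period)

# Starting from 1, the measure grows 32,768-fold over 30 years
# (1970 -> 2000), and about 181-fold over the next 15 (2000 -> 2015).
growth_1970_to_2000 = projected_measure(1, 30)  # 2**15 = 32768
growth_2000_to_2015 = projected_measure(1, 15)  # 2**7.5 ≈ 181
```

Because growth compounds on an ever-larger base, the absolute leap in the shorter 15-year span dwarfs the leap of the earlier 30 years, which is why the visitor from 2000 would be even more impressed than the one from 1970.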
