
What is Artificial Intelligence? Everything you need to know


A brief history of artificial intelligence, covering the full timeline of AI.

There are many ways people interact with AI every day: from Google Translate, to personal assistants like Siri and Alexa, and even to how we unlock our phones.

"AI began with an ancient wish to forge the gods." - Pamela McCorduck

A long time ago, the Greeks had some thoughts about golden robots (which weren't actually called robots, because the term hadn't been invented yet) and milk-white statues that came to life. Their ancient wish remained just a wish; needless to say, the Greeks did not invent the first artificial intelligence.

Then the Middle Ages came about, and rumors spread of secret mystical or alchemical means of placing minds into matter. A couple of new myths were created, such as Takwin, the homunculus, and the golem. But again, stories remained just that: stories.

19th century

Soon it was the 19th century, and even more stories implanted the idea of artificial beings and thinking machines, this time in books like Frankenstein and Rossum's Universal Robots, finally a case where the term "robot" was actually used. Across this whole timeline, realistic automatons were being built by crafty people and, in turn, misinterpreted as having very real minds and emotions.

Beyond what we have now come to call fake news, what we had actually achieved so far was an assumption: that the process of human thought could be mechanized. This assumption was part of a long history of study into mechanical, or formal, reasoning.

In the 17th century

A dynamic trio of Leibniz, Hobbes, and Descartes explored the possibility that all rational thought could be made as systematic as algebra or geometry.

In the 20th century

There was a key insight in the form of the Turing machine: a simple theoretical construct that captured the essence of abstract symbol manipulation. This sparked the earliest real research into thinking machines in the late 1930s, 40s, and early 50s.

Alan Turing's theory of computation showed that any form of computation could be described digitally. Big whoop.

By now, everybody was solidly convinced of the possibility of an artificial brain sometime in the near future; as we would keep doing for decades to come, we wishfully placed it right around the corner. People made little analog toys called turtles and beasts using analog circuitry, in general just having a grand old time.

In 1951

A man called Marvin Minsky built a machine called SNARC, and would go on to be an innovator in AI for the next 50 years. One year before that, Alan Turing had published a cool little paper in which he waxed philosophical about machines that think, calling such behavior difficult to define, and devised a test for machines displaying intelligent behavior, a test which was promptly named after him.

You know, scientists need their 15 minutes too. In fact, Turing's 15 minutes carried over into the decades after: by his criterion, we would now logically assume that a machine is thinking when it can carry on a conversation indistinguishable from a human's. Then computers learned to play chess, by which we really mean, computers were programmed to play chess. So-called game AI would forever be used as a measure of progress in the field.

In 1955

A program, now making use of this brand-new invention called digital computing, proved 38 of the first theorems outlined in the Principia Mathematica, and even found new, more elegant proofs for some. This led some hopefuls to proclaim that the venerable mind-body problem had been solved: an explanation had been found for how a system composed of matter could have the properties of mind. This idea would later become known as strong AI.


Finally, in 1956

During the Dartmouth conference, everybody was properly convinced to adopt the term artificial intelligence as the true name of the field.

This was the year in which AI gained its name, its mission, its first successes, and its major players, and it is widely considered the birth of AI.

We were now in a time of discovery, and the new ground gained and the programs developed during this period were simply astounding to most people. Researchers were highly optimistic, and most of them would swear that in less than 20 years we would have a fully intelligent machine. This era would last from 1956 to 1974. No fully intelligent machine was ever developed.

Reasoning as search

Most early AI programs were based on something called reasoning as search, in which the program tried to reach a goal by proceeding step by step towards it, making a move or deduction and backtracking whenever a dead end was reached. The problem turned out to be that many problems have an astronomical number of possible paths through such a maze. So we did what we do best in such a situation: give it a name. That name became combinatorial explosion.

They tried reducing the search space by using heuristics, or rules of thumb, to eliminate paths that were unlikely to lead to a solution, an approach which in many cases turned out to be itself unlikely to lead to a solution for strong AI. They tried various programs, giving them names that were rather overqualified at the time, like the General Problem Solver, geometry theorem provers, and STRIPS.
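The search-and-backtrack idea described above fits in a few lines of code. The following is a minimal sketch on a hypothetical toy problem, not a reconstruction of any of the historical programs: a depth-first search that backtracks at dead ends, with an optional heuristic used to try the most promising moves first.

```python
def search(state, goal, successors, heuristic=None, visited=None, path=None):
    """Return a list of states from `state` to `goal`, or None if stuck."""
    visited = visited if visited is not None else set()
    path = (path or []) + [state]
    if state == goal:
        return path
    visited.add(state)
    candidates = [s for s in successors(state) if s not in visited]
    if heuristic is not None:   # rule of thumb: try promising moves first
        candidates.sort(key=lambda s: heuristic(s, goal))
    for nxt in candidates:
        result = search(nxt, goal, successors, heuristic, visited, path)
        if result is not None:  # success bubbles back up the recursion
            return result
    return None                 # dead end: backtrack

# Toy state space: from n you can move to n + 1 or n * 3 (capped at 100).
# Without pruning, the number of possible paths explodes with depth.
successors = lambda n: [n + 1, n * 3] if n <= 100 else []
route = search(1, 24, successors, heuristic=lambda s, g: abs(g - s))
print(route[0], route[-1])
```

Even with the heuristic, the search can wander far from the goal before backtracking, which is the combinatorial explosion in miniature.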

AI's subfield

By now, a subfield of AI, natural language processing, had seen some minor success in solving high school algebra word problems. ELIZA, the world's first chatbot, was also developed, and even occasionally fooled people into believing her responses were coming from a real human. This type of foolery continues even to this day, whenever a media outlet happens to interview a certain intelligent robot.
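ELIZA's trick can be illustrated with a few lines of pattern matching. The rules below are a hypothetical mini-script, not Weizenbaum's actual rule set: the program has no understanding at all, it simply reflects the user's own words back through canned templates.

```python
import re

# Illustrative ELIZA-style rules: (pattern, response template).
RULES = [
    (re.compile(r"i am (.*)", re.I), "Why do you say you are {0}?"),
    (re.compile(r"i feel (.*)", re.I), "How long have you felt {0}?"),
    (re.compile(r"my (.*)", re.I), "Tell me more about your {0}."),
]

def reply(text):
    for pattern, template in RULES:
        match = pattern.search(text)
        if match:
            # echo the captured words back inside the canned template
            return template.format(*match.groups())
    return "Please, go on."  # fallback when no rule matches

print(reply("I am worried about machines"))
# -> Why do you say you are worried about machines?
```

That the echo so often passed for empathy says more about us than about the program.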


In the late 60s

People proposed that AI research should focus on artificially simple situations known as micro-worlds, simplifying the models to gain more success. A nice world of blocks was developed, and sure enough, this led to some huge improvements in computer vision. So there's that!

Something with an unpronounceable name, SHRDLU, was the crowning achievement of this micro-world program, which is just bad marketing. Ironically, it could communicate in plain English.

It was time for some optimistic predictions, thought some scientists, because, you know, science is all about speculation.

In 1958

Allen Newell and Herbert A. Simon say that within ten years a digital computer will be the world's chess champion. In 1965, Herbert A. Simon says that machines will be capable, within 20 years, of doing any work a man can do.

In 1967

Marvin Minsky says: "within one generation the problem of creating artificial intelligence will be substantially solved". In 1970, Minsky says: "in three to eight years we will have a machine with the general intelligence of an average human being".

In 1967

Japan initiated the WABOT project, which was completed in 1972 in the form of the WABOT-1 robot. It could do some cool stuff for the time, and they made a second version, because if you're going to call your robot something-dash-one, you must make at least one other version. Then it was winter, or at least, we entered the first AI winter, where the field was all of a sudden subject to a lot of critique, financial setbacks, and just overall naysaying.

That's kind of what happens when you set expectations so high with all these highly scientific predictions and then fail to deliver on any of them. Of course, new ideas were explored during the winter, but people were just keeping their heads down for now.

You see, there were some fundamental problems. Of course, the elephant in the room has always been computer power. But there was more: intractability and the combinatorial explosion, for instance, common sense knowledge and reasoning, Moravec's paradox, and the frame and qualification problems.

So, no more money, and all the scientists were sent to stand in the corner and think about what they had done over the last few years. Of course, with the playing field now wide open, philosophers burst into the room and started waxing philosophical about the problems they saw in the claims made by AI researchers.

The first modern neural network

Frank Rosenblatt invented the perceptron, and Marvin Minsky brought it to a sudden halt with his devastating critique. But it turns out perceptrons were important all along: revived later, they became a vital ingredient of the deep learning revolution. So all's well that ends well, except that Rosenblatt would never live to see his invention make it in the end, as he died in a boating accident shortly after Minsky ripped it a new one.
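For the curious, Rosenblatt's learning rule itself fits in a few lines. Below is a minimal sketch that trains a single perceptron on the logical AND function; the learning rate and epoch count are arbitrary illustrative choices. Minsky's critique centered on what a single such unit provably cannot learn, such as XOR.

```python
def train_perceptron(samples, epochs=20, lr=0.1):
    w = [0.0, 0.0]  # weights
    b = 0.0         # bias
    for _ in range(epochs):
        for (x1, x2), target in samples:
            # step activation: fire if the weighted sum crosses threshold
            output = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            error = target - output
            # nudge weights and bias toward the correct answer
            w[0] += lr * error * x1
            w[1] += lr * error * x2
            b += lr * error
    return w, b

# Logical AND: output 1 only when both inputs are 1.
AND = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_perceptron(AND)
predict = lambda x1, x2: 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
print([predict(x1, x2) for (x1, x2), _ in AND])  # -> [0, 0, 0, 1]
```

Because AND is linearly separable, the rule is guaranteed to converge; swap in XOR and no setting of the two weights will ever work.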

1982 to 1987

From AI winter into AI boom: between 1982 and 1987, a new form of AI program took over, called expert systems, and was adopted by corporations all over the world. Knowledge became the focus of mainstream AI research, and connectionism also saw a revival.

AI was back in the spotlight and again praised for its successes. In 1981, Japan decided that money talks and set aside $850 million to see if they could write some cool code that could carry on a decent conversation, translate languages, interpret pictures, and generally reason like a human being. This was called the Fifth Generation Computer project, and most of it was written in Prolog. Other countries quickly responded with their own projects, and large-scale funding in the AI world had returned.

Robotics research demanded an entirely new approach to artificial intelligence. As we all know, robots are cool, but artificial intelligence, by then, was not. So back into the corner the scientists went. By the way, the term AI winter was coined by researchers with too much time on their hands, probably a side effect of not getting your act together.

In the late 80s, a couple of researchers, seeing a lack of movement in their field, developed a concept they came to call nouvelle AI, which tried to convince those still stuck in the old way of thinking that true intelligence can only develop if a machine has a body. To make this concept more palatable for people to swallow, they also referred to it as embodied reasoning.

It was quickly mocked by people wary of such radical thinking, while its proponents defended it in papers describing how elephants don't play chess.

In the mid 90s

AI finally started achieving some of its oldest goals, which were pretty old by now, as the field was about half a century of age. Its reputation, however, was still less than pristine, so not many people paid much attention. Or, formulated another way: many people paid very little attention.

People were, of course, desperately clinging to Moore's law, using it more as an excuse than anything else. Nouvelle AI was now retro, and the in-crowd needed a new clique, so their focus shifted to intelligent agents: systems that try to maximize their chances of success by perceiving their environment and taking actions.
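The agent loop just described (perceive the environment, then act to move toward a goal) can be sketched with a deliberately simple, hypothetical thermostat example:

```python
# A minimal intelligent-agent sketch: the agent perceives the current
# temperature and picks the action that moves it toward its goal.
# The thermostat setting and step sizes are illustrative, not standard.

class ThermostatAgent:
    def __init__(self, target=21.0):
        self.target = target

    def act(self, temperature):
        """Perceive the temperature, choose the goal-directed action."""
        if temperature < self.target - 0.5:
            return "heat"
        if temperature > self.target + 0.5:
            return "cool"
        return "idle"

temp = 17.0
agent = ThermostatAgent()
for _ in range(10):  # the perceive-act loop
    action = agent.act(temp)
    temp += {"heat": 1.0, "cool": -1.0, "idle": 0.0}[action]
print(round(temp, 1), agent.act(temp))  # -> 21.0 idle
```

The point of the framing is generality: replace the thermometer with any sensor and the heater with any actuator, and the same perceive-act structure still applies.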

Many researchers were in hiding because of the bad rap AI had gotten over the years, and everybody retreated into underground clubs called informatics, knowledge-based systems, cognitive systems, or computational intelligence. Nobody wanted to be seen as a wild-eyed dreamer, because even at the highest level of scientific excellence, personal reputation is a person's greatest good.

Everybody was comfortably ignoring complex problems like common sense reasoning, and reveling in the fact that simple problems had simple solutions. And then came the deep learning revolution!

Given a little phenomenon called big data and, of course, faster computers (thank you, Mister Moore), we were now able to develop advanced machine learning techniques.
