Artificial Intelligence (AI) - Timeline of 2022 Update

People's lives have become much easier since the advent of computers. Much of our work is now done with the help of computers, and our workloads depend on them, which is why computers have been improved continuously.

Over time, people have greatly increased the power of these machines, improving their speed and reducing their size, so that time can be saved. Since almost all work today is done by computer, a slow machine can waste a great deal of time.

You have probably heard of Artificial Intelligence, because it is talked about everywhere today. If you don't yet know much about it, there is no reason to worry. In this post we will walk through the full story of Artificial Intelligence, from ancient times up to 2022. So let's dive in.

Antiquity

The story of Artificial Intelligence (AI) begins with myths, legends, and tales of artificial beings endowed with intelligence or consciousness by master craftsmen. Early Greek philosophers also attempted to describe human thinking as a mechanical process.

Later fiction

Ideas about artificial humans and thinking machines appeared in fiction, such as Mary Shelley's Frankenstein and Karel Čapek's R.U.R. (Rossum's Universal Robots), and in speculation, such as Samuel Butler's "Darwin among the Machines" and Edgar Allan Poe's essay "Maelzel's Chess Player".

Automata

Craftsmen in every civilization, including Yan Shi, Hero of Alexandria, Al-Jazari, Pierre Jaquet-Droz, and Wolfgang von Kempelen, built realistic humanoid automata. The earliest known automata were the sacred statues of ancient Egypt and Greece; the faithful believed that craftsmen had endowed these figures with real minds. In the Middle Ages, such legendary automata were said to be able to answer questions put to them.

Formal reasoning

Artificial intelligence rests on the assumption that human thought can be mechanized. There is a long history of research into formal, or "mechanical," reasoning. Chinese, Indian, and Greek philosophers all developed structured methods of formal deduction in the first millennium BC. Their ideas were developed further by Aristotle (who gave a rigorous analysis of the syllogism), Euclid (whose Elements was a model of formal reasoning), al-Khwarizmi (who developed algebra and whose name gave us the word "algorithm"), and European scholastic thinkers such as William of Ockham.

The Spanish philosopher Ramon Llull (1232-1315) developed several logical machines devoted to producing knowledge by logical means; he described his devices as mechanical entities that could combine basic, undeniable truths through simple logical operations to produce all possible knowledge. Gottfried Leibniz later revived Llull's ideas.

In the 17th century, Leibniz, Thomas Hobbes, and René Descartes explored the possibility that all rational thought could be made as systematic as algebra or geometry. For Hobbes, reason was "nothing but reckoning." Leibniz envisioned a universal language of reasoning (his characteristica universalis) that would reduce argument to calculation, so that "there would be no more need of disputation between two philosophers than between two accountants; it would suffice for them to take their pencils in hand." These philosophers had begun to articulate the assumption that would become the guiding faith of AI research: that human thought can be reduced to mechanical symbol manipulation.

In the 20th century, the study of mathematical logic provided the breakthrough that made artificial intelligence seem plausible. The foundations had been laid by works such as Boole's The Laws of Thought and Frege's Begriffsschrift. Building on Frege's system, Russell and Whitehead published Principia Mathematica (completed in 1913), a formal treatment of the foundations of mathematics.

The answer that eventually emerged was surprising in two ways. First, it was proved that there are, in fact, limits to what mathematical logic can accomplish. Second, and more importantly for AI, the same work suggested that, within those limits, any form of mathematical reasoning could be mechanized.

Turing test

The Turing test represents a long-term goal of AI research: will we ever be able to build a computer that can imitate a human so well that a skeptical judge cannot tell the difference? Since its inception it has followed much the same path as the rest of AI research. At first it seemed difficult but achievable (once hardware technology caught up).

Despite decades of study and significant technological advances, the Turing test continues to serve both as a goal for AI researchers and as a measure of how far we still are from achieving it.

In 1950, the English mathematician and computer scientist Alan Turing published a paper entitled "Computing Machinery and Intelligence", which opened the door to the field that would later be called artificial intelligence. This was several years before John McCarthy coined the term. The paper begins with a simple question: "Can machines think?" Turing then proposed a method for deciding whether machines can think, which became known as the Turing test. The "imitation game" was intended as a simple test to determine whether a machine is thinking: if a computer can be programmed to behave indistinguishably from an intelligent human, then, the argument goes, we can say that it thinks.

While people continue to argue about whether machines can think and whether the test can truly be passed, it is clear that Alan Turing and the standard he proposed gave the AI field a powerful and instructive direction. His paper contributed to AI research and paved the way for modern computer science. The Turing test is regarded as a breakthrough in artificial intelligence: it has served as a goal for many years, and it remains a milestone for tracking the progress of the field as a whole.

Cybernetics and early neural networks

The invention of the computer inspired early investigations into intelligent machines. A confluence of ideas emerged in the late 1930s, 1940s, and early 1950s, inspired by contemporary work in neuroscience. Norbert Wiener's cybernetics described control and stability in electrical networks. Claude Shannon's information theory described digital signals (all-or-nothing signals). Alan Turing's theory of computation showed that any form of computation can be described digitally. The close connections between these ideas suggested that it might be possible to build an electronic brain.

W. Grey Walter's turtle-like robots, as well as the Johns Hopkins Beast, are examples of work in this area. These machines did not use computers, digital electronics, or symbolic reasoning; they were controlled entirely by analog circuitry.

In 1943, Walter Pitts and Warren McCulloch analyzed networks of idealized artificial neurons and showed how they could compute basic logical functions. They were the first to describe what researchers would later call a neural network. A young Marvin Minsky, then a 24-year-old graduate student, was inspired by Pitts and McCulloch, and in 1951 (with Dean Edmonds) he built the first neural network machine, the SNARC. For the next 50 years, Minsky would be one of the most important leaders and innovators in AI.
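To make the idea concrete, here is a minimal sketch in Python of a McCulloch-Pitts style threshold unit computing the basic logical functions AND and OR. The weights and thresholds below are illustrative choices of ours, not values taken from the 1943 paper.

    def mp_neuron(inputs, weights, threshold):
        # Fire (output 1) if the weighted sum of the binary inputs reaches the threshold.
        total = sum(w * x for w, x in zip(weights, inputs))
        return 1 if total >= threshold else 0

    # With unit weights, a threshold of 2 behaves like AND and a threshold of 1 like OR.
    for a in (0, 1):
        for b in (0, 1):
            print(a, b,
                  "AND:", mp_neuron((a, b), (1, 1), threshold=2),
                  "OR:", mp_neuron((a, b), (1, 1), threshold=1))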

Game AI

In 1951, using the Ferranti Mark 1 machine at the University of Manchester, Christopher Strachey wrote a checkers program and Dietrich Prinz wrote a chess program. Arthur Samuel's checkers program, developed through the mid-1950s and early 1960s, eventually reached a strong amateur level. Game-playing AI would continue throughout history to serve as a measure of progress in the field.

Dartmouth Workshop 1956: Birth of AI

The Dartmouth Conference of 1956 was organized by Marvin Minsky, John McCarthy, and two senior scientists, Claude Shannon and Nathan Rochester of IBM. The proposal asserted that a machine could be made to simulate any aspect of human intelligence. Participants included Ray Solomonoff, Oliver Selfridge, Trenchard More, Arthur Samuel, Allen Newell, and Herbert A. Simon, all of whom would go on to create important AI programs during the first decades of research. At the conference, Newell and Simon presented the "Logic Theorist", while McCarthy persuaded attendees to adopt "artificial intelligence" as the name of the field. The 1956 Dartmouth Conference gave AI its name, its mission, its first success, and its key players, and it is widely regarded as the birth of AI.

Symbolic AI 1956-1974

To most people, the years following the Dartmouth workshop were simply astonishing: computers were solving algebraic word problems, proving geometric theorems, and learning to speak English. Few at the time would have believed that such "intelligent" behavior by machines was possible at all. In private and in print, researchers expressed intense optimism, predicting that a fully intelligent machine would be built in less than 20 years. The new field attracted significant funding from government agencies such as DARPA.

First AI winter 1974-1980

In the 1970s, AI faced criticism and financial setbacks. AI researchers had failed to appreciate the difficulty of the problems they were tackling. Their enormous optimism had raised expectations far beyond what was reasonable, and when the promised results failed to materialize, government funding for AI disappeared. At the same time, the field of connectionism (neural networks) lay dormant for ten years following Marvin Minsky's devastating critique of the perceptron. Despite the negative public perception of AI in the late 1970s, new ideas were explored in logic programming, commonsense reasoning, and other areas.

Boom 1980-1987

From the earliest days of AI, knowledge had been a central concern. In the 1980s, a form of AI program called an "expert system" was adopted by businesses around the world, and knowledge became the focal point of mainstream AI research. In the same decade, the Japanese government invested heavily in AI through its Fifth Generation Computer project. Another encouraging development of the early 1980s was the revival of connectionism in the work of John Hopfield and David Rumelhart. Once again, AI had achieved success.

Second AI Winter 1987-1993

The business world's fascination with AI in the 1980s rose and fell in the classic pattern of an economic bubble. The collapse came because commercial vendors failed to deliver workable solutions. Hundreds of companies folded, and many investors refused to put further money into the field. Many concluded that the technology simply did not work, yet research continued, and a number of researchers, such as Rodney Brooks and Hans Moravec, argued for radically new approaches to AI.

AI 1993-2011

The field of artificial intelligence, by then more than half a century old, finally achieved some of its oldest goals. AI began to be used successfully throughout the technology industry, although somewhat behind the scenes. Some of this success was due to increasing computing power, and some was achieved by concentrating on specific, isolated problems and pursuing them with the highest standards of scientific rigor. Even so, AI's reputation, at least in the business world, was less than pristine. Within the field there was little agreement on why AI had failed to deliver the human-level intelligence that had captured the world's imagination in the 1960s. AI split into many distinct subfields, each focused on a particular problem or approach, even though together they gave the impression of working toward a common goal.

"Knights victory"

Artificial intelligence researchers began to develop and use more sophisticated mathematical tools than ever before. There was a widespread realization that many of the problems AI needed to solve were already being worked on in fields such as mathematics, electrical engineering, economics, and operations research. A shared mathematical language allowed greater collaboration between fields and produced results that were measurable and provable; according to Russell & Norvig (2003), AI had become a more rigorous "scientific" discipline.

Probability and decision theory were brought into AI following Judea Pearl's influential 1988 work on probabilistic reasoning. Bayesian networks, hidden Markov models, information theory, stochastic modeling, and classical optimization are just a few of the many new tools that were employed. Precise mathematical descriptions were also developed for "computational intelligence" paradigms such as neural networks and evolutionary algorithms.
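As a small illustration of these probabilistic tools, here is a minimal sketch of the forward algorithm for a hidden Markov model in plain Python. The two hidden weather states, the observation symbols, and all of the probabilities below are invented for this example.

    # A tiny hidden Markov model with made-up probabilities.
    states = ("Rainy", "Sunny")
    start_p = {"Rainy": 0.6, "Sunny": 0.4}
    trans_p = {"Rainy": {"Rainy": 0.7, "Sunny": 0.3},
               "Sunny": {"Rainy": 0.4, "Sunny": 0.6}}
    emit_p = {"Rainy": {"walk": 0.1, "shop": 0.4, "clean": 0.5},
              "Sunny": {"walk": 0.6, "shop": 0.3, "clean": 0.1}}

    def forward(observations):
        # Probability of the observation sequence, summed over all hidden state paths.
        alpha = {s: start_p[s] * emit_p[s][observations[0]] for s in states}
        for obs in observations[1:]:
            alpha = {s: emit_p[s][obs] * sum(alpha[prev] * trans_p[prev][s] for prev in states)
                     for s in states}
        return sum(alpha.values())

    print(forward(["walk", "shop", "clean"]))  # likelihood of seeing walk, then shop, then clean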

Prediction (or "Where is HAL 9000?")

In 1968, Arthur C. Clarke and Stanley Kubrick imagined that by 2001 there would be machines with intelligence matching or exceeding that of humans. HAL 9000, the AI character they created, reflected the belief of many leading AI experts at the time that such a machine would exist by 2001.

By 2016, the market for AI-related products, hardware, and software had grown to more than $8 billion, and interest in AI had reached a level of "mania". Applications of big data began to expand beyond the field of statistics; for example, big data was used to train models in ecology and in various economic applications. Advances in deep learning (especially deep convolutional neural networks and recurrent neural networks) drove progress and research in image and video processing, text analysis, and speech recognition.
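To give a sense of what a deep convolutional network looks like in code, here is a minimal sketch using PyTorch (a library not mentioned in this article; the layer sizes and the 28x28 grayscale input are illustrative assumptions, not a production model):

    import torch
    from torch import nn

    class TinyCNN(nn.Module):
        # A very small convolutional network for 28x28 grayscale images, e.g. digit classification.
        def __init__(self, num_classes=10):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv2d(1, 16, kernel_size=3, padding=1),  # 1 input channel -> 16 feature maps
                nn.ReLU(),
                nn.MaxPool2d(2),                             # 28x28 -> 14x14
                nn.Conv2d(16, 32, kernel_size=3, padding=1),
                nn.ReLU(),
                nn.MaxPool2d(2),                             # 14x14 -> 7x7
            )
            self.classifier = nn.Linear(32 * 7 * 7, num_classes)

        def forward(self, x):
            x = self.features(x)
            x = x.flatten(1)          # flatten everything except the batch dimension
            return self.classifier(x)

    model = TinyCNN()
    fake_images = torch.randn(4, 1, 28, 28)   # a batch of 4 random "images"
    print(model(fake_images).shape)           # torch.Size([4, 10])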

Big data

Big data is a term for collections of data so large that they exceed the capability of ordinary application software. Entirely new processing models are needed to turn data at this scale into decisions, insights, and process optimization. In their book on the big data era, Viktor Mayer-Schönberger and Kenneth Cukier defined big data as using all of the data for analysis rather than relying on random sampling (sample surveys).

Big data is commonly described by five characteristics (the "five Vs" proposed by IBM): volume, velocity, variety, value, and veracity. The significance of big data technology lies not in capturing enormous amounts of information but in extracting the bits that matter. In other words, if big data is compared to an industry, the key to profitability is improving the "processing power" of data and turning it into "added value".

Artificial general intelligence

The ability to solve any problem, rather than just one specific problem, is known as general intelligence. Artificial general intelligence (or "AGI") refers to software that can apply intelligence to a wide variety of problems in much the same way humans can.

In the early 2000s, a number of researchers argued that mainstream AI development had largely abandoned the field's original goal of creating general intelligence. AGI research was established as a separate sub-discipline, and by 2010 there were academic conferences, laboratories, and university courses dedicated to AGI, as well as private consortia and new companies.

Artificial general intelligence is also referred to as "strong AI" or "full AI", in contrast to "weak AI" or "narrow AI", which is limited to specific tasks.

AI in 2022

Artificial Intelligence (AI) has become a business and organizational reality across many sectors. Even if its advantages are not always immediately apparent, AI has proven capable of improving process efficiency, reducing errors and manual labor, and extracting insights from big data.

People are already discussing what the next big AI-driven trends will be. Here is a collection of the most exciting AI trends expected in 2022:

1. ROI-driven AI implementation
2. Video analytics
3. "As-a-service" business models
4. Improved cybersecurity
5. AI in the metaverse
6. Data fabric
7. AI and ML combined with the Internet of Things (IoT)
8. AI-driven hyper-automation

Conclusion

Artificial intelligence has a huge impact on science, the economy, manufacturing, and the future of every individual. From the very beginning, it has contributed to the development of innovative technologies such as big data, robotics, and the Internet of Things, and it will continue to do so.

#technology #ARTIFICIAL #AI #INTELLIGENT #artificialintelligencefuture
