Concepts such as Artificial Intelligence (AI), robots, intelligent machines, and learning machines are often used interchangeably. However, people who know little about the subject often feel uneasy when these concepts come up. Although this uneasiness sometimes stems from a lack of information, people's anxiety is mostly about the future.
The foundations of Artificial Intelligence (AI) were laid in the 1950s. Alan Turing, regarded as the founder of computer science, paved the way for AI by posing the question "Can machines think?"; the term "Artificial Intelligence" itself was coined by John McCarthy in 1956. One of Alan Turing's most significant achievements was deciphering the Enigma code, which the Germans used in the Second World War and which was considered impossible to break. Historians argue that breaking this code shortened the war by about two years.
Researchers trace the chronological history of the concept of Artificial Intelligence (AI) back to prehistoric times. From this point of view, the following chronology emerges:
- Prehistoric Period: Dreams of artificial beings and robots can be found from prehistory up to the last century.
- Dark Period (1965–1970): Little progress was made. Computer experts hoped to build intelligent computers simply by developing a reasoning mechanism and loading it with data.
- Renaissance Period (1970–1975): The way was opened for rapidly accelerating developments. Artificial Intelligence (AI) researchers developed systems for tasks such as disease diagnosis, laying the foundations of today's applications.
- Partnership Period (1975–1980): Artificial Intelligence (AI) researchers began to draw on other branches of science, such as linguistics and psychology.
- Entrepreneurship Period (1980–?): Artificial Intelligence (AI) was taken out of the laboratory and applied to far more complex problems arising from the needs of the real world.
Artificial Intelligence (AI) aims to enable machines and devices to do everything people can do, as well as tasks people find difficult. In particular, intelligent machines and robots are intended to take on work that poses risks to humans.
Human learning happens by adjusting the synaptic connections between neurons. People begin learning through experience from birth, and the brain develops continuously throughout this process. As we live and gain experience, synaptic connections are adjusted and new ones are formed. From birth, every object and event we perceive with our sense organs is data, and learning takes place through the accumulation of this data (experience).
The learning processes of the human brain have been the inspiration for Artificial Intelligence (AI) developers and practitioners. AI designers try to apply this structure to computer programs by imitating a design similar to the brain's learning and thinking systems. The program that directs such an artificial neural network must itself learn; like our brains, AI is trained with plenty of data input and output.
Artificial Intelligence (AI) is realized through artificial neural networks that imitate the brain's learning processes, and through software and algorithms developed by humans and loaded onto objects and machines via chips and sensors. Objects and machines equipped with AI not only do the job expected of them; by learning from the experience they gain, they can also make decisions without needing to consult people.
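The idea of "learning by adjusting connections" can be sketched in a few lines of code. The example below is a minimal illustration, not any specific AI system: a single artificial neuron whose "synaptic" weights are repeatedly nudged by the error between its output and the expected answer, here for the logical OR function. The task, names, and learning rate are all chosen for illustration.

```python
import math
import random

# A single artificial neuron learning the logical OR function by
# repeatedly adjusting its "synaptic" weights (illustrative toy example).
random.seed(0)
weights = [random.uniform(-1, 1) for _ in range(2)]
bias = 0.0

def sigmoid(x):
    # Squash the weighted sum into the range (0, 1)
    return 1.0 / (1.0 + math.exp(-x))

def predict(inputs):
    total = sum(w * x for w, x in zip(weights, inputs)) + bias
    return sigmoid(total)

# Training data: (inputs, expected output) for logical OR
data = [([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 1)]

learning_rate = 0.5
for epoch in range(2000):
    for inputs, target in data:
        error = target - predict(inputs)
        # Adjust each weight in proportion to its input and the error
        for i in range(len(weights)):
            weights[i] += learning_rate * error * inputs[i]
        bias += learning_rate * error

# After many rounds of data input/output, the neuron reproduces OR
results = [round(predict(inputs)) for inputs, _ in data]
print(results)
```

Just as the text describes, nothing here is explicitly programmed to "know" OR; the behavior emerges from the accumulation of input/output experience.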
First Artificial Intelligence (AI) Experiments
After Artificial Intelligence (AI) studies began, the first trials took place on the game board. Early chess programs were constantly updated after their first games. Because they needed data input and output in order to learn, these input/output processes let them recognize their wrong moves and avoid repeating them in the next game. As data input/output increased, so did the number of moves they could look ahead. At the end of this learning process, on May 11, 1997, IBM's Deep Blue computer defeated the famous chess player Garry Kasparov.
The Artificial Intelligence (AI) program AlphaGo, developed by DeepMind, a company later acquired by Google, defeated Lee Sedol, one of the world's best Go players, in 2016, using game intelligence developed with Deep Learning techniques.
AlphaGo's neural network was first taught which moves were preferred by showing it positions from thousands of human games. This infrastructure alone was, of course, not enough to surpass human players, but it provided the initial evaluation function used by the programs in the subsequent "self-learning" stage.
AlphaGo then played more games against itself than a human could play in a lifetime. Through reinforcement learning, the "reward signal" at the end of each game was propagated back to the positions in earlier stages, and the information that those positions were "acceptable" was encoded into the neural network. Thus the evaluation function, improving at each stage, led to higher-quality self-played games in the next step. By the time the DeepMind engineers stopped this cycle, the program had reached a superhuman level.
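The reward-propagation idea described above can be illustrated with a toy example. The sketch below is not AlphaGo's actual algorithm; it only demonstrates the principle: after each self-played episode, every visited position is nudged toward the final reward (a simple Monte Carlo value update). The toy "game" (a random walk that wins at one end and loses at the other) and all names are invented for illustration.

```python
import random

# Toy demonstration of propagating an end-of-game reward back to
# earlier positions (Monte Carlo value update); not AlphaGo's method.
random.seed(1)
values = {}   # estimated value of each game state
alpha = 0.1   # learning rate

def play_episode():
    """Random walk from state 3; reaching 6 wins (+1), reaching 0 loses (0)."""
    state, visited = 3, []
    while 0 < state < 6:
        visited.append(state)
        state += random.choice([-1, 1])
    reward = 1.0 if state == 6 else 0.0
    # Propagate the end-of-game "reward signal" to every visited position
    for s in visited:
        v = values.get(s, 0.5)
        values[s] = v + alpha * (reward - v)

for _ in range(5000):
    play_episode()

# Positions closer to the winning end earn higher estimated values
print(sorted(values.items()))
```

As in the text, the evaluation improves purely from the outcomes of self-played games: states that tend to precede wins end up rated as more "acceptable" than states that tend to precede losses.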
Going beyond its success with AlphaGo, Google DeepMind developed AlphaZero in 2017. AlphaZero's difference from AlphaGo was that it did not need human knowledge at all. The moves it made at the beginning of self-play were poor, but with just four hours of self-training, after thousands of games, it gained the experience that leads to winning and discovered its own style of play.
Types of Artificial Intelligence (AI)
- Industrial (Technical) Artificial Intelligence (AI)
Intelligent machines are only one of the fields in which Artificial Intelligence (AI) is applied.
The most crucial feature of intelligent machines is that, thanks to their Artificial Intelligence (AI), they store and analyze data coming from the internet and make decisions independently. You can plan the routine work a machine will do and ensure its smooth operation through the chips and sensors you embed in it. For example, by programming a carpet-weaving machine, you can produce carpets using threads of various colors and motifs you have designed, without involving a human at any stage of the production process.
Or you can have all the sub-parts of a vehicle you design manufactured by machines you have programmed, and then have the parts assembled, making the car ready for use without involving a human at any stage of these processes. However, the devices used in these processes cannot be called "intelligent machines" or "learning machines," because they only perform the specific jobs defined for their stage of the automobile production line.
Chips and sensors are embedded in these machines and equipment, creating "embedded systems." The defining feature of the resulting cyber-physical systems is that they communicate with each other, and with machinery and equipment outside the system, over dedicated internet networks.
Machines and factories designed through this virtualization can be called "intelligent." They are smart because they can receive and store data from other objects and devices and, most importantly, analyze the collected data. As a result of these analyses, they can plan all production processes and make decisions independently, without the need for humans.
Similar to social networks, there is communication between workers, machines, and resources in intelligent factories.
Such a system will have no stocking or warehousing costs, since production will be made entirely on demand. In addition, malfunctions and errors that may arise in the machines can be predicted and repaired without interrupting or pausing the production process. If news arises of an unforeseen situation like the re-emergence of Covid-19, the factory will decide independently how to plan production for the new virus.
- Codeless (Non–Technical) Artificial Intelligence (AI)
Such Artificial Intelligence (AI) programs are technologies used by business and management units (marketing, finance, human resources, communication, administrative affairs, etc.) that do not require engineering knowledge.
In this technology, the basic coding has already been done by the program developers. Users do not contribute code within business processes; they only manage the Artificial Intelligence (AI).
This approach, also referred to as "Artificial Intelligence Management," can be compared to people gaining the ability to work efficiently by learning to use office software.
To better understand codeless Artificial Intelligence (AI), we also need to look at the concepts of Narrow and General Artificial Intelligence.
- Artificial Narrow Intelligence (ANI)
Artificial Narrow Intelligence (ANI) is the term used for Artificial Intelligence (AI) trained and developed in a single field. The AI created by human beings that has beaten humans in various competitions is narrow AI.
Artificial Narrow Intelligence (ANI) cannot do any work other than the task it is built for. The Deep Blue and AlphaGo software mentioned above are examples of narrow Artificial Intelligence (AI) that defeat people. This software can play games with people, but it cannot organize a user's calendar, schedule appointments, or pay a bank debt. AI of this kind, with limited knowledge and capabilities, is mainly used in search engines, translation, mobile applications, and repetitive jobs. As such, every business with sufficient data and a defined workflow can have AI developed for itself. Such AI systems continue to work independently, unaware of one another.
- Artificial General Intelligence (AGI)
Artificial General Intelligence (AGI) is the term used for systems with neural networks like the human brain. It connects Artificial Narrow Intelligence (ANI) systems through these networks and allows them to work together. Its most crucial difference from narrow Artificial Intelligence (AI) is that it can think like a human, make sense of things, and bring an idea to life.
Artificial General Intelligence (AGI) is also the technology that will enable the widespread use of no-code Artificial Intelligence (AI), because professionals who can use AI Management will be able to plan their appointments, performance follow-ups, employee assignments, grocery shopping, drinks, coffee, food, bank payments, collection instructions, doctor appointments, and artistic activities in a short time through AGI.