Ihab Khalifa, head of the Technological Developments Unit at the Future Center for Research and Advanced Studies, said that the “ChatGPT” application shocked many people with its ability to hold long, complex conversations with them on various topics with high efficiency, as if a real person were talking to them.
In this context, in an article published by the Future Center for Research and Advanced Studies under the title “Does artificial intelligence face a setback after the ChatGPT shock?”, Khalifa touched on a set of themes, among them “Learning or innovating?”, “Deception and error”, “Specialized artificial intelligence”, “A frightening speed of development”, and “Calls for review”.
The author of the article cautioned that “between the fear of the speed at which artificial intelligence is developing and of its superiority over humans, and the fear that humans may place absolute confidence in artificial intelligence systems that may not deserve it, there remain fears that artificial intelligence systems will suffer a setback as a result of a serious error that puts human lives at risk.”
This is the text of the article:
Artificial intelligence has proven so capable of mimicking human style that it has become difficult to tell whether a particular novel, short story or piece of music was composed by a human or by AI. Indeed, the “ChatGPT” application shocked many people with its ability to engage in long, complex conversations with them on various topics with high efficiency, as if a real person were talking to them.
Have “general” artificial intelligence systems, which can perform many functions at the same time, reached the stage of reliability and everyday use, or do they still suffer from fundamental problems? Are they evolving faster than we humans can comprehend, or are their efficiency and capabilities overestimated? Between the two, there is a possibility, albeit small, that artificial intelligence systems will suffer a setback, even if only a temporary one, in which they lose their credibility, whether because of the speed of their development or because we are not yet ready to deal with them.
Learning or innovating?
If we look at general artificial intelligence – and here I distinguish it from specialized artificial intelligence, which performs a specific function – we find a prodigious child that, almost from the moment of its birth, possessed extraordinary abilities: it understands the most difficult sciences, solves the most complex problems, and provides quick and even accurate answers that a university professor might not match in his own specialty.
This is indeed true; in fact, the matter has moved beyond simply answering questions to performing tasks that require imagination and creativity, such as drawing and writing. This happens by analyzing vast amounts of big data, processed and organized so that AI systems can find relationships within them and generate logical inferences. The quality of the answers artificial intelligence provides depends on the quality of the data fed into the system and the way it is processed. However, the method by which artificial intelligence acquires the knowledge that qualifies it to perform these functions is completely different from its counterpart in humans. I do not mean here that the quality of human decisions is better; rather, the way humans learn may be deeper and weightier than the way artificial intelligence learns, because humans acquire their skills through years of trial and learning, developing skills and retaining information through countless human experiences.
A person acquires his skills and knowledge over decades in which he learns complex words, sentences and structures, then begins to generate ideas of his own mind and to innovate from nothing. Along the way, thought mixes with culture, religion, information, intuition and historical experience, while DNA and genes preserve the basic information that guarantees the continuation of human life. All this happens under the supervision of a very complex mental and biological system, which contributes to the formation of an intellectual framework and a human approach capable of selecting, arranging and filtering, in contrast to artificial intelligence systems, which rely only on instantaneous, immediate learning.
In addition, the human brain can deal very efficiently with a small amount of information: you can easily infer that “Abu Muhammad” is the father of a son named Muhammad, something “ChatGPT” could not work out for lack of information. Massive amounts of information are essential for AI systems to learn, and the quality of that data determines the quality of the answer the system will provide.
And while the human brain can create from scratch, artificial intelligence cannot; it needs hundreds of terabytes of processed data in order to learn from them and then add to them. A person may be able to imagine – for example – the shape of the devil and depict it in his art, drawing on his human knowledge and the symbolism of the devil’s role in human life. Artificial intelligence cannot do this without learning from earlier images: it takes a previous drawing of a very ugly being, makes it uglier, and presents it as the devil.
While artificial intelligence learns in order to answer the questions posed to it, to perform an immediate task, or to serve a current interest, the human goal of learning is to understand and interpret events, whether to adapt to them or to change them, rather than stopping at the stage of answering open questions. If artificial intelligence seeks to know that an apple falls from a tree because of gravity, humans seek to know what this gravity is and how it affects their lives.
Deception and error
The speed at which artificial intelligence learns may lead to errors, and this may be possible and even acceptable; humans, despite their complex way of learning, also make mistakes. But the degree of trust that humans may place in artificial intelligence is one they may not grant each other. There is a wide awareness that humans, despite their knowledge, make mistakes, and there is a state of acceptance of this, adaptation to it, and willingness to deal with it. The problem becomes serious, however, when humans assume that artificial intelligence, with all its power, can hardly fall into error. At that point, the error of artificial intelligence becomes more dangerous than the error of humans, because it will not be recognized as an error.
Nor are artificial intelligence’s errors limited to social and controversial issues; it can err even on matters of settled fact. The input data may be correct and well processed, yet the system still errs, either because it is desperate to give a quick, immediate answer to your questions, or because the input it received was corrupted.
In a personal experiment with the “ChatGPT” application, I gave it a very simple arithmetic problem: if “Ihab” was twice as old as “Moamen” when the latter was two years old, how old would Ihab be when Moamen is 50? The simple answer is 52, because when Moamen was 2, Ihab was 4, so the difference between them is only 2 years. Yet for some reason “ChatGPT” could not answer this question, and I suspect that is because it wanted to provide a quick answer rather than wait a few seconds to make sure its answer was correct.
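The reasoning behind the puzzle can be written out in a few lines; the following is a minimal Python sketch (the variable names simply mirror the article’s example), showing that the age gap is fixed once and then carried forward:

```python
# Age puzzle from the article: when Moamen was 2, Ihab was twice his age.
moamen_then = 2
ihab_then = 2 * moamen_then          # "twice as old" -> Ihab was 4
age_gap = ihab_then - moamen_then    # the gap never changes: 2 years

# The gap stays constant for the rest of their lives.
moamen_now = 50
ihab_now = moamen_now + age_gap      # 50 + 2 = 52

print(ihab_now)  # prints 52
```

The trap the chatbot fell into is treating “twice as old” as a permanent ratio rather than a one-time fact that fixes a constant difference.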
What amazed me about its answer was the application’s insistence on offering more than one piece of evidence for the correctness of its logic, even though that logic was flawed and wrong. Herein lies the danger: humans could come to rely on this kind of deceptively fast AI.
Specialized artificial intelligence
The search for a powerful general artificial intelligence system capable of answering everything quickly is like the physicist Stephen Hawking’s search for a single equation to explain the universe, with all its planets, suns, celestial bodies and darkness. No single equation can do this, and no single algorithm can either. It is an oversimplification of a very complex problem, and it cannot be achieved, at least at this stage, which marks only the dawn of advanced artificial intelligence systems.
Humans differ in their knowledge, ideas and perceptions, and they are not at the same level of technological development: some use old technology, others use modern technology, and a third category uses neither. The nature of the data fed to artificial intelligence systems thus differs according to the nature of the users, so the answers are valid for one category and not for others. Moreover, despite the advances humanity has made in science, the limits of our knowledge of ourselves and the universe remain incomplete; how, then, can a general artificial intelligence system be built that can answer everything? It is certainly difficult to achieve, and it will be flawed.
It is therefore better to focus on developing specialized artificial intelligence, which concentrates on specific jobs in application of the principle of division of labor. A general artificial intelligence system capable of providing information about everything has not yet been achieved, and the promises made by “ChatGPT”, “Microsoft” and “Google” resemble Mark Zuckerberg’s promises about the “Metaverse”: they have not been fulfilled so far, and they are not within easy reach. These smart systems simply need more time to reach maturity so that they do not suffer a setback.
A frightening speed of development
Artificial intelligence may be very good; it may even be right and free of errors, defects and deficiencies, and it may be the best model of human knowledge and the source of wisdom that humanity lacks. But are we ready for this kind of development at the present time, or do we need more calm and deliberation so that these smart systems can be developed on the right track without causing the destruction of humanity?
The development of artificial intelligence systems is proceeding faster than our ability as humans to keep up with it, and outstripping our ability to grasp its repercussions. Artificial intelligence may even fall into error because of us humans, as a result of our inability to understand the way it evolves.
Calls for review
Perhaps this frightening pace of development is what prompted billionaire Elon Musk, along with 1,125 others, including artificial intelligence experts, to sign an open letter calling for a six-month halt to the development of the most advanced artificial intelligence systems. The letter demands that AI laboratories stop training any technology stronger than “GPT-4”, which “OpenAI” released earlier in March 2023. It states that contemporary artificial intelligence systems have become competitors to humans in general tasks, and that we should ask ourselves: should we allow machines to flood our information channels with claims and lies? The letter stressed that if such development is not halted quickly, governments should intervene and impose a temporary moratorium on these activities.
For its part, the Italian authorities announced on March 31, 2023, a ban on the “ChatGPT” application. They attributed the decision to the program’s failure to respect Italian legislation on personal data, the absence of a system to verify the age of minor users, and the lack of a legal basis justifying the mass collection and storage of personal data for the purpose of training the platform’s algorithms, in addition to the breach the application suffered on March 20, which leaked data on user conversations and on the payments of subscribers to the paid service.
In conclusion, it can be said that between the fear of the speed at which artificial intelligence is developing and of its superiority over humans, and the fear that people may place absolute confidence in artificial intelligence systems that may not deserve it, there remain fears that artificial intelligence systems will suffer a setback, whether as a result of a serious error that puts human lives at risk, or of a failure of reliability and use and an inability to generate sound results with high efficiency. This calls for a sufficient pause to reconsider the progress artificial intelligence has made so far and to re-evaluate the degree of its efficiency.