The Onslaught of Artificial Intelligence

Introduction

I have always believed that Artificial Intelligence (AI) is a Promethean technology that can be used for good or evil. While we should welcome its positive uses, its negative impacts could prove too deleterious for humanity to control or endure, and as such, we should put international treaties and controls in place to regulate the technology. With the arrival of ChatGPT and the promise of Artificial General Intelligence (AGI), and possibly Artificial Super Intelligence (ASI), I am even more convinced that internationally agreed sets of controls and laws guiding the behaviour of such systems, hardwired into the AI systems themselves, should be implemented as soon as possible.

My Early Years in AI

I started studying Artificial Intelligence (AI) in 1983 when I enrolled in the Master’s degree program in Cognition, Computing, and Psychology at the University of Warwick in England. The course was all about AI – how to build AI systems by studying humans to determine what makes them intelligent and then applying that knowledge to develop intelligence-based systems. Since then, I have kept track of the development of artificial intelligence and its uses in various industries. The knowledge and understanding of what makes humans intelligent have been applied to the development of AI systems capable of performing tasks previously thought to be unique to human intelligence. It is crucial to consider the possible disadvantages of these intelligence-based systems as we continue to push the limits of AI. That is why it is important to develop internationally agreed-upon sets of controls and laws to govern the development and use of AI or intelligence-based systems.

Intelligence-based systems, also known as cognitive systems, imitate human intelligence and execute activities that typically require human intelligence, such as interpreting spoken language, identifying objects or photographs, making judgments, and learning from prior experience. When such systems are built, those who believe in the strong AI hypothesis refer to them as having human intelligence or intentionality. On the other hand, those who believe in the weak AI hypothesis view intelligence-based systems as tools or technologies used to automate or enhance specific jobs rather than as ends in themselves. They see AI as a means of better decision-making, increased productivity, and the provision of new insights and information. To those who believe in the strong hypothesis, however, if a system can display all the cognitive traits of people, it should be considered human.

During that course of study, I came to believe that humans are very creative and capable of building such systems over time. However, I did not believe in the strong hypothesis; rather, I believed, and still believe, in the weak hypothesis of AI, as did people like Professor John Searle, who is widely known for his “Chinese Room” thought experiment.

In his thought experiment, Prof. Searle imagines a person in an enclosed room who does not speak Chinese but is given a rulebook for manipulating Chinese symbols. Chinese speakers outside the room pass in written questions; by mechanically following the rules, the person matches the incoming symbols and passes back appropriate Chinese responses. To the Chinese speakers outside, the room appears to understand Chinese fluently. However, the person in the room understands nothing of the conversation; he or she is simply manipulating symbols according to a set of rules. Searle argues that this thought experiment illustrates that a machine or algorithm can simulate an understanding of a natural language without actually understanding it. He further argues that this is true of AI systems in general: they can display traits associated with human intelligence, but this does not mean they possess genuine understanding or consciousness. He suggests that the true objective of AI should be to produce machines that can perform specialized tasks efficiently rather than trying to build machines that fully comprehend and are conscious like humans.
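Searle’s point can be made concrete with a toy program. The sketch below is a hypothetical illustration, not Searle’s own formulation: it answers Chinese questions purely by looking up symbol strings in a rulebook, producing fluent-looking replies while understanding nothing.

```python
# A minimal sketch of the Chinese Room: the "person in the room" only
# matches incoming symbols against a rulebook. No translation and no
# comprehension ever take place. The rulebook entries are invented examples.
RULEBOOK = {
    "你好吗": "我很好，谢谢",        # "How are you?" -> "I am fine, thanks"
    "你会说中文吗": "会，一点点",    # "Do you speak Chinese?" -> "Yes, a little"
}

def chinese_room(symbols: str) -> str:
    # Pure symbol lookup; the default reply is also just another symbol string.
    return RULEBOOK.get(symbols, "对不起，我不明白")  # "Sorry, I don't understand"

print(chinese_room("你好吗"))  # → 我很好，谢谢
```

From the outside, the exchange looks fluent; inside, there is only a dictionary lookup, which is exactly the distinction Searle draws between simulating understanding and having it.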

John Searle’s argument for the weak AI hypothesis has been widely studied and is considered significant in AI. The philosopher Hubert Dreyfus raised related skeptical arguments about AI’s ability to replicate human understanding, while others, such as the cognitive scientist Daniel Dennett, have been prominent critics of the Chinese Room argument. Indeed, the thought experiment has attracted many questions and objections: some argue that it is not a fair representation of AI, that it does not take cognizance of the complexity of the human mind and consciousness, and that it does not reflect how AI systems actually work. I believe that Prof. John Searle is right: AI systems are merely powerful symbol manipulators. However, the fact that they are merely manipulating symbols does not detract from the fact that humans can judge their capabilities as human-like, and so can easily declare them to be human or even super-human.

Motivations of the AI Intelligentsia

The motivations of the AI intelligentsia, consisting of researchers, engineers, and scientists who work in the field of AI, include scientific curiosity, technical challenges, commercial opportunities, social impact, national security, and ethical considerations. They are thus playing a significant role in shaping the future of humanity by advancing AI technology and developing Artificial General Intelligence (AGI) systems that have the potential to revolutionize many aspects of daily life for people.

The AI intelligentsia is taking humanity into a new epoch: an age of intelligence-based systems that will co-exist with humans at home, in workplaces, and in social environments, or that may eventually rule over humans. It is a new dawn, a brave new world that is about to be revealed, as we redefine the origins and nature of Homo sapiens, taking us to a new stage of human evolution in which humans evolve into cybernetically and genetically engineered “Homo Deus”, as proposed or discussed by thinkers such as Yuval Noah Harari, Dan Brown, Michio Kaku and Ray Kurzweil.

Ray Kurzweil believes that AGI will surpass human intelligence in a wide range of tasks and could be used to solve problems like curing diseases, terraforming other planets, and overcoming death. Kurzweil’s expected new book, “The Singularity Is Nearer”, will expand on his previous ideas and predictions about AI. He has previously stated that he believes computers will be able to pass the Turing test, which measures a machine’s ability to exhibit intelligent behaviour comparable to or indistinguishable from that of a human, by 2029. He also predicts that by 2045, AGI will be achieved and will be able to improve itself at an exponential rate, leading to a rapid acceleration of technological progress, and perhaps to the AGI turning into an ASI (Artificial Super Intelligence) system.

Also, Yuval Noah Harari suggests in his book “Homo Deus: A Brief History of Tomorrow” that advanced brain-computer interfaces (BCIs) could be a step towards the development of artificial general intelligence (AGI). He claims that by directly connecting human brains to computers and other machines, we can improve our cognitive abilities and eventually create AGI systems that match or even exceed human intelligence. It is critical to remember that AGI is a speculative topic, and the notion that BCIs would be a step toward its development is purely theoretical. There are many different approaches to creating AGI, and the relationship between BCI and AGI is not yet well understood. However, the idea that BCIs could significantly improve human cognitive abilities is intriguing and warrants further investigation.

On the contrary, Martin Ford argues in his book “The Rise of the Robots: Technology and the Threat of a Jobless Future” that the development of AGI could have an impact on the job market because machines could potentially take over many tasks currently performed by humans. In particular, he asserts that low-skilled and repetitive jobs will suffer significantly from widespread automation and the rise of AGI. He also suggests that AGI could lead to greater inequality as the people who own and control these technologies will become increasingly wealthy and powerful while a growing number of people may become unemployed and left behind. He believes society should start preparing for these changes now by investing in education and training programs and enacting new policies to help people adjust to a rapidly changing job market. It is worth noting that this is a speculative topic and that there are many different opinions about the potential impacts of AGI on the job market and the economy as a whole. Some experts believe that AGI has the potential to create new job opportunities and boost economic growth, while others believe it will lead to job displacement and widen the gap between the rich and the poor.

Artificial General Intelligence (AGI) refers to systems with human-level general intelligence: systems that can comprehend or learn any complex task that humans can. AGI systems would be able to carry out a variety of tasks, such as problem-solving, making decisions, and learning, without being explicitly programmed for each one. Most AI systems in use today fall into the category of weak or narrow AI because of their focus on narrow task domains.

The advancement of AGI has ethical and societal implications, as it may alter our interactions with technology and our understanding of intelligence. For example, in the mid-1960s, Professor Joseph Weizenbaum created an AI-based system called ELIZA, which mimicked a Rogerian therapist and was one of the first natural language processing programs. His system showed the shallowness of human-computer interaction and the risks associated with an overreliance on AI technologies. He was astonished to discover that his secretary formed an emotional connection to the system while using it, treating it as if it were human, despite knowing that he had written the code for it. Weizenbaum used this experience to openly criticize the field of AI after he realized that people might treat AI systems like people. He argued that the hype surrounding the technology was unjustified and that it was unlikely that machines would ever be able to fully understand human thoughts or emotions.
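The flavour of Weizenbaum’s program can be conveyed with a toy sketch. The patterns below are illustrative inventions, not Weizenbaum’s original DOCTOR script, but they show the same trick: pattern matching plus pronoun reflection, with no model whatsoever of the speaker’s mind.

```python
import re

# ELIZA-style response generation: match the utterance against a small list
# of patterns, reflect first-person words back as second-person, and fill a
# canned template. Rules and reflections here are simplified assumptions.
REFLECTIONS = {"i": "you", "my": "your", "am": "are", "me": "you"}
RULES = [
    (re.compile(r"i feel (.*)", re.I), "Why do you feel {0}?"),
    (re.compile(r"i am (.*)", re.I), "How long have you been {0}?"),
]

def reflect(fragment: str) -> str:
    # Swap pronouns so the reply addresses the speaker ("my" -> "your").
    return " ".join(REFLECTIONS.get(w, w) for w in fragment.lower().split())

def eliza(utterance: str) -> str:
    for pattern, template in RULES:
        match = pattern.search(utterance)
        if match:
            return template.format(reflect(match.group(1)))
    return "Please tell me more."  # catch-all when nothing matches

print(eliza("I feel ignored by my computer"))
# → Why do you feel ignored by your computer?
```

A handful of such rules was enough for users to attribute understanding and empathy to the program, which is precisely what alarmed Weizenbaum.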

Weizenbaum’s viewpoint on AI raises the question of whether AGI is truly possible and whether the concept of AGI is well-defined. It also emphasizes the importance of being realistic about what AI can and cannot do and of avoiding overhyping the technology. It is important to note that the field of AI has evolved significantly since Weizenbaum’s criticism, and researchers have developed methods, techniques, and theories that address some of AI’s limitations, spanning symbolic AI, connectionist AI, and hybrid AI. Some AI researchers are working on developing AI systems capable of demonstrating human-like intelligence and consciousness, such as creating AI systems capable of passing the Turing test.

It is crucial to remember that Weizenbaum’s worries about AI remain valid today, and the discipline of AI continues to raise ethical and societal implications. As AI systems advance and become more capable, it is critical to consider the implications of their development and application. For example, as AGI systems become competent at performing tasks previously thought to be unique to humans, it raises concerns about the future of work and the role of AI in society. Furthermore, as AI systems improve their ability to understand and interpret human behaviour, issues about privacy, autonomy, and the possibility of AI being used negatively by individuals or society as a whole arise.

In light of these concerns, AI researchers and developers must think about the moral and societal implications of their work and develop AI systems that are open, auditable, accountable, and consistent with human values. Furthermore, society as a whole must engage in informed discussions about the future of AI and its potential impacts on our lives. This includes involving stakeholders from diverse backgrounds and perspectives, such as ethicists, philosophers, sociologists, policymakers, and members of the public, in the development and governance of AI systems. Additionally, there should be ongoing efforts to ensure that AI systems are developed and used responsibly and ethically, with measures in place to prevent unintended consequences and negative impacts on society.

Divergent Viewpoints on the Prospects of AGI Morphing into ASI

While some researchers believe that AGI is possible, others think it unlikely and consider the concept of AGI ill-defined. The debate over whether AGIs will eventually “morph” into ASIs divides AI researchers and experts for different reasons. One argument in favour of this viewpoint is that, thanks to advances in technology and machine learning, AGIs will continue to develop quickly and may one day outperform human intellect in several fields; on this view, AGI capability would compound over time and eventually cross into ASI. However, since there are still many unresolved issues and unknowns surrounding AGI and ASI, many researchers are less optimistic about the timeline for their development. In addition, some specialists believe that the term “superintelligence” is ill-defined and too ambiguous, making it difficult to say whether or when AGIs will reach that level. Despite the difficulties, many researchers continue to work on AGI and ASI because they think the potential advantages outweigh the risks: improvements in decision-making, increased output, and the ability to solve problems that are currently insurmountable for humans. However, to realize these advantages, the risks associated with AGI and ASI must be reduced by implementing proper safety measures. In addition to addressing moral and ethical dilemmas, this requires developing strategies for monitoring and controlling AGI behaviour.

As pointed out earlier, the majority of AI systems in use today are referred to as “narrow AIs” because they are designed to carry out specific tasks. Examples include IBM’s Watson and Deep Blue, expert systems, and AlphaGo, all of which exhibit intelligent behaviour, sometimes at a very high level of proficiency, within a specific domain, but none of which possesses the broad, general intelligence that humans have.

It is important to note that this is often a starting point in the progress towards creating a general AI. Some researchers typically begin by developing AI systems that are particularly skilled in one area and then use the insights and knowledge gained from these systems to advance the development of more general AI systems. Organizations such as the AGI Society, Berkeley Artificial Intelligence Research, CSAIL at MIT, Facebook AI Research, Google DeepMind, and the Human-Level Artificial Intelligence (HLAI) Conference demonstrate that there is ongoing work in the field of AGI and that many experts are working to create more general AI systems.

I am among those who are nervous about the unleashing of AGI systems in our society. I therefore believe that when designing and implementing AGI, it is critical to proceed with caution and care. Technologies are amoral; they can be used for good, such as increased efficiency and productivity in various industries, but they can also pose risks, such as job displacement, security threats, and ethical concerns, depending on the motivations of those who create them. Before proceeding with AGI development and implementation, it is critical to consider the potential risks and benefits, as well as the consequences of AGI morphing into ASI and the impact they could have on human jobs and sense of uniqueness in the world.

Even though we still do not have AGIs today, Ray Kurzweil, among other AI experts, predicts that AGI will be achieved by 2045, citing the Law of Accelerating Returns, which holds that the rate of technological growth is exponential. It is crucial to keep in mind that these projections are based on current trends and advancements in the field of AI, and that developing AGI is a complex and ongoing process that might not unfold on any fixed schedule. It is also worth noting that AGI does not necessarily imply replicating all human capabilities; it could refer to systems advanced enough that humans perceive them as AGI. The consequences of having such systems are unknown: it could be like opening a Pandora’s box or attempting to construct a new Tower of Babel, with unknown and potentially negative consequences for humanity and the world. I believe that in the long run the net effect will be negative, perhaps even catastrophic, for our earth, unless we do something to regulate such systems appropriately.
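The intuition behind the Law of Accelerating Returns is simple compounding. The doubling period below is an illustrative assumption (a Moore’s-law-like 18 months), not a measured rate of AI progress, but it shows why exponential trends defeat linear intuition.

```python
# Relative capability under exponential growth, starting from 1.0.
# The 18-month doubling period is an illustrative assumption only.
def capability(years: float, doubling_period_years: float = 1.5) -> float:
    return 2 ** (years / doubling_period_years)

# Under this assumption capability grows roughly a hundredfold per decade,
# and roughly ten-thousandfold over two decades.
print(capability(10))   # one decade of compounding
print(capability(20))   # two decades of compounding
```

Whether real progress toward AGI follows any such curve is exactly what skeptics of the 2045 timeline dispute, but the arithmetic explains why believers in the trend expect change to arrive faster than linear extrapolation suggests.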

Implications of AI

Humans have been attempting to increase the forms, types, places, and reach of communication, resulting in the emergence of many forms of communication, including written language, oral language, sign language, and more recently, digital communication. For example, through the use of books, letters, and other written documents, written language has made it possible for humans to communicate over great distances and long periods. The printing press facilitated written communication after the fifteenth century. The telephone and telegraph enabled long-distance communication in the nineteenth century. The development of radio, television and the internet has significantly expanded the reach and extensibility of communication and information during the 20th century. Communication with anyone at any time is now possible thanks to the internet and mobile technology. Social networking and instant messaging are new forms of communication as well.

Most machines developed during the agrarian, industrial, and post-industrial eras ended up deskilling and displacing humans from their traditional vocations, whether in crafts, blue-collar, or clerical work. However, they also increased production and opened up new areas of labour for those who were displaced. As a result, there have been more employment gains than job losses overall. Many stakeholders believe that this will always be the case, even for artificial intelligence systems. However, AI can replace not only monotonous administrative and physical tasks, but also virtually every other job, including those of artists, programmers, teachers, doctors, researchers, lawyers, accountants, and managers: indeed, everyone’s work. Since time immemorial, managers have believed that having 1,000 employees means 1,000 headaches, so they will adopt whatever machines or methods allow them to eliminate workers.

However, it is important to note that the impact of AI on the workforce will likely be more complex than simply replacing jobs. AI has the potential to improve human capabilities, produce more work, and create new jobs. Additionally, the rate at which AI will impact different industries and job types will vary, and some jobs may be more resilient to automation than others. It is also important to consider the ethical and societal implications of AI and its impact on the workforce. For example, there may be concerns about income inequality and the displacement of certain groups of workers. It is crucial for policymakers and industry leaders to carefully consider these issues and develop strategies to mitigate negative impacts while harnessing the potential benefits of AI. Moreover, there is a need to think about retraining programs, education and upskilling of the workforce, and to ensure that the benefits of AI are shared equitably across society.

The Ethical Implications of Giving AGI a Human-Like Brain

Are we trying to give AGI a human-like brain and make it self-aware? This seems to be what we are doing, advertently or inadvertently. The question of whether to give AGI self-awareness and consciousness is a contentious issue. Some argue that replicating and understanding human intelligence is a crucial step for AGI to perform tasks such as creativity, empathy, and moral reasoning. Others argue that it is unnecessary and even dangerous, as the actions of a self-aware AGI are uncertain, and it could lead to unintended consequences.

It is important to consider the ethical and moral concerns that arise from the development of AGI with a human-like brain, including the entity’s rights and obligations, and society’s treatment of it. Isaac Asimov, a science fiction author and biochemist, was one of the first to explore these ethical issues in his famous “Three Laws of Robotics” in which he proposed guidelines for the safe and ethical use of robots and AI. These laws include the prohibition on robots harming humans, the requirement for robots to obey human orders, and the obligation of robots to protect their existence as long as it does not contradict the first two laws.
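As a thought experiment, Asimov’s ordering of the laws can be sketched as a priority check that vets each proposed action against the laws in sequence; the boolean action model below is, of course, a drastically simplified assumption, not a workable safety mechanism.

```python
from dataclasses import dataclass

# Toy model of Asimov-style ordered constraints: each proposed action is
# screened against the Three Laws in strict priority order. Representing an
# action as three boolean flags is an illustrative simplification.
@dataclass
class Action:
    description: str
    harms_human: bool = False
    disobeys_human_order: bool = False
    endangers_robot: bool = False

def permitted(action: Action) -> tuple[bool, str]:
    if action.harms_human:            # First Law: never injure a human
        return False, "violates First Law"
    if action.disobeys_human_order:   # Second Law: obey, unless First Law conflicts
        return False, "violates Second Law"
    if action.endangers_robot:        # Third Law: self-preserve, unless above conflict
        return False, "violates Third Law"
    return True, "permitted"

print(permitted(Action("fetch coffee")))
print(permitted(Action("push a person aside", harms_human=True)))
```

Even this trivial sketch exposes the hard part, which Asimov’s own stories dramatize: deciding whether a real-world action “harms a human” is itself an open judgment problem, not a boolean flag.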

Asimov’s laws provide a useful framework for considering the ethical implications of AGI, and his work continues to be relevant today as we grapple with the ethical challenges posed by the development of AGI. It is important for researchers, policymakers, and industry leaders to carefully consider these ethical implications as AGI technology continues to advance, and to ensure that AGI systems are developed with a clear understanding of their limitations and potential risks.

The Ethical and Societal Implications of Global Human Consciousness

The concept of a global human consciousness, or a “world brain,” refers to the idea that advancements in technology, particularly AI and the internet, are allowing for the collective intelligence of humanity to be harnessed in a way that has never been possible before. With the advent of technology like ChatGPT, which allows for easy access to information and the ability to ask questions, it is becoming increasingly possible for individuals to access and share knowledge on a global scale. The world is now aware of what GPT-3 is capable of doing! Imagine what happens when it’s upgraded to GPT-4 and then GPT-10. We’ve been told that GPT-4, once completed, would be 500 times more competent than GPT-3. Today, students may use ChatGPT to produce essays, term papers, and even theses. Professors have started utilizing GPT to edit the chapters they have written and even to help with book chapter composition. Every organization can now use GPT to accomplish practically everything, potentially reducing the need for human personnel.

When I look at a new technology invented in our attempts to build an AGI, like ChatGPT, it appears to me that we want to build a “world brain”, which can be used for both good and ill. ChatGPT has an excellent command of human-like conversation. It can be as plain as many people usually are in conversation, yet it can also get as technical as others might want. Any question you ask will receive an intelligent-sounding response, so feel free to ask anything. It can be your research assistant, write essays for you, and compose poems for you, and so on. Individuals can now use it for free via the internet. It could replace search engines like Google and provide the answers to questions now posed on sites like Quora.

Building a “world brain” is a goal shared by many organizations besides OpenAI. Numerous other research facilities are working to create a world brain, both in the West and in other nations like China and Japan. They are all doing so, perhaps unwittingly or unconsciously, working to develop systems with narrow domains such as chatbots, language synthesis systems, language generation systems, and deep learning systems. Some of them have the explicit goal of developing AGI. However, the development of a global human consciousness raises important questions about the nature of human identity, agency, and autonomy.

Moreover, there is a societal implication that, if not properly addressed, could lead to a widening of the digital divide and further marginalization of certain groups. Access to and control over information, technology, and resources will be crucial to ensure a fair distribution of benefits and opportunities in the world.

Africa, AI, and Other Exponential Techs

In all of the above, where is Africa? Why is there a deafening silence on all of these Promethean-level technologies in Africa? Why does Africa continue to adopt a “follow-follow” mentality? Why does Africa think that the world is meant for others to recreate without its input? Whatever the world eventually becomes, Africa will, unfortunately, be immersed in it too. Africa is so busy with its day-to-day existential issues, along the lines of Maslow’s Hierarchy of Needs, that the business of rethinking the world and our existence is left to others, particularly the conceptual West, to do on behalf of humanity. My concern is that it is only a few in the West, such as the AI intelligentsia, who are trying to recreate the world and human existence. They seem to have an unspoken agenda: an atheistic agenda, an anti-God agenda, an agenda that wants to build a new Tower of Babel, and an agenda that wants to create a new version of humanity. Does Africa agree with their agendas?

It is important for Africa to also be a part of these conversations and developments in technology, as it will ultimately affect the continent just as much as any other region. Africa should not be left behind in the shaping of the future, and should actively participate in the rethinking of the world and our existence. It is also important to consider the potential consequences and ethical implications of these technologies and to have a diverse range of perspectives and voices involved in the decision-making process. Furthermore, Africa should also take into account its values and beliefs, and ensure that they are not being overlooked or disregarded in the pursuit of technological advancement.

Control and Regulation of AI

The control and regulation of AI refers to the various measures put in place to ensure the safe and responsible use of artificial intelligence technology. This can include guidelines for the development and deployment of AI systems, as well as laws and regulations that govern the use of AI in specific industries or applications. Some of the key concerns that are addressed through AI regulation include issues related to privacy, security, and the potential for AI to impact jobs and the economy. Additionally, there are also ethical concerns related to AI, such as the potential for AI to perpetuate bias or make decisions that negatively impact certain groups of people.

Several guidelines have been proposed for the development and deployment of AI systems, including explainability and transparency, fairness and non-discrimination, human oversight, safety and robustness, privacy and security, continuous monitoring and improvement, accountability, human rights, societal and environmental well-being, and human-centred values. These guidelines aim to ensure the safe and responsible use of AI, but there is no one regulatory body overseeing their implementation.

There are currently a limited number of laws and regulations specifically governing the use of AI, but as the technology continues to advance and its impact on society becomes more significant, more laws and regulations are likely to be developed. Some examples of existing laws and regulations that govern the use of AI in specific industries or applications include:

  • Health care: The US Health Insurance Portability and Accountability Act (HIPAA) constrains the use of AI in healthcare by protecting the privacy and security of patient data.
  • Data protection: The General Data Protection Regulation (GDPR) in the European Union protects the privacy and personal data of individuals, constraining how AI systems in finance and other industries may process that data.
  • Autonomous vehicles: The National Highway Traffic Safety Administration (NHTSA) in the US has issued guidance on the safe testing and deployment of autonomous vehicles, which includes requirements for data recording and sharing, cybersecurity, and human oversight.
  • Employment: Many countries have laws that prohibit discrimination in the workplace, which can apply to AI systems used in the hiring process or the management of employees.

These are just a few examples; regulations vary from country to country, and it is important to keep in mind that laws and regulations continually change as technology advances and society’s understanding of it evolves.

Conclusion

As Artificial Intelligence (AI) continues to evolve, it is expected to have a significant impact on how we live and work. Many people view the development of AI with a positive outlook; I share that sentiment, but with concerns. I believe that it is like opening a box of unknown consequences that humanity may come to regret. I am worried that there are no worldwide regulations and control systems in place to govern the design, development, and application of AI. Without these, we cannot ensure that AI will be safe for humanity. Moreover, I do not see any significant effort being put into implementing Asimov’s laws of robotics, which could be used to ensure that safety features are built into AI systems.

The ethical implications of AI must be taken into account by society, and its creation and application must be consistent with human values. This may involve creating regulations and guidelines for the use of AI, as well as investing in retraining programs to assist individuals whose jobs are at risk of being replaced by AI.

Overall, the integration of AI is a complex issue that requires a thorough understanding of the potential benefits and risks of this technology. It is essential for society to have open and honest conversations about the implications of AI and to collaborate to ensure that its development and use align with human values and promote the well-being of all individuals.

 
