The question raised by the next step in the evolution of AI systems, Artificial General Intelligence (AGI), and by the more recent Generative AI (GenAI), is not whether such systems can be developed, but how to protect humanity from them. Once computers and machines are able to learn and make decisions based on knowledge and circumstances, they will have a frightening capacity to do harm. In fact, human use of AI systems is already creating problems of overreliance, and more than one lawyer has gotten into trouble using hallucinated citations generated by GenAI in briefs and memoranda. Many ethicists and scientists believe that some form of Asimov's Three Laws of Robotics is essential to AGI and GenAI development.
However, it isn't that simple.
1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2. A robot must obey the orders given it by human beings, except where such orders would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
Applying these rules to AGI and GenAI systems seems like a no-brainer. However, they are overly simplistic in real-world applications. The lawyer in me sees a corollary problem in legal terms such as "reasonable". For instance, actionable negligence requires conduct that is not "reasonable". But who gets to define what reasonable is? Asimov's laws are similarly vague. What constitutes "harm", for instance? What if an AGI, or a robot armed with GenAI, determines that, to avoid harm to one human, or to humanity in general, it is necessary to harm, injure, or kill many other humans? A similar problem was shown in the movie "I, Robot", in which an AI robot decides to save the main character from drowning instead of a little girl. How are such comparative judgments to be made?

Also, who will write the code for AGI and GenAI robots? Which humans will be doing the dirty work? Are they moral? Are they evil? Who decides? You can bet there will be many military leaders and governments wanting to develop AGIs, and some are already developing GenAI-armed robots, to make war and to kill their enemies. How do we stop that? There is ongoing debate among philosophers, ethicists, and AI researchers about what principles should govern AGI and GenAI. Some argue that Asimov's laws are a good starting point, while others believe that more complex and nuanced ethical frameworks are needed.

On top of these scientific and philosophical problems, some sort of legal regulatory framework will need to be created. Our existing justice system will need to tackle complex questions such as:

1. Can an AGI be legally classified as human, with human rights and responsibilities?
2. How do we punish rogue AGIs?
3. Do we punish the coders, or the manufacturers, or the users of these systems?

We need to focus on creating ethical, transparent algorithms that incorporate principles such as fairness, accountability, and transparency, all of which go well beyond Asimov's simple laws. Don't count on asking the FutureLawyer for help. I will be long dead. But you survivors will have to live with the future AGIs and their progeny. Assuming, of course, that the rise of the machines doesn't obliterate humanity. I used to think that the future postulated in the Terminator movie franchise could never happen. After recent developments in ChatGPT and other AI bots, I am not so sure. The fact remains that this technology is developed by humans, and can be controlled by humans, and humans can be very evil indeed. What do you think?