The next question in the evolution of AI systems toward Artificial General Intelligence (AGI) is not whether AGI systems can be developed, but how to protect humanity from them. Once machines are able to learn and make decisions based on knowledge and circumstances, they will have a frightening capacity to do harm. Many ethicists and scientists believe that some form of Asimov's Three Laws of Robotics is essential to AGI development. However, it isn't that simple.
Asimov's Three Laws are:
1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2. A robot must obey the orders given it by human beings, except where such orders would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
Applying the rules to AGI systems seems like a no-brainer. However, they are overly simplistic in real-world applications. The lawyer in me sees a corollary problem to such legal terms as "reasonable". For instance, actionable negligence requires conduct that is not "reasonable". But who gets to define what reasonable is? Asimov's laws are similarly vague. What constitutes "harm", for instance? What if an AGI determines that, to avoid harm to one human, or to humanity in general, it is necessary to harm, injure, or kill many other humans? How are such comparative judgments to be made?

Also, who will write the code for AGI? Which humans will be doing the dirty work? Are they moral? Are they evil? Who decides? You can bet there will be many military people and governments wanting to develop AGIs to make war and to kill their enemies. How do we stop that? There is ongoing debate among philosophers, ethicists, and AI researchers about what principles should govern AGI. Some argue that Asimov's laws are a good starting point, while others believe that more complex and nuanced ethical frameworks are needed.

On top of these scientific and philosophical problems, some sort of legal regulatory framework will need to be created. Our existing justice system will need to tackle complex questions such as: 1. Can an AGI be legally classified as human, with human rights and responsibilities? 2. How do we punish rogue AGIs? 3. Do we punish the coders, or the manufacturers, or the users of the systems?

We need to focus on creating ethical, transparent algorithms, and we must incorporate principles such as fairness, accountability, and transparency, all of which go far beyond the simple Asimov laws. Don't count on asking the FutureLawyer for help; I will be long dead. But you survivors will have to live with the future AGIs and their progeny. Assuming, of course, that the rise of the machines doesn't obliterate humanity.
Okay, ChatGPT just got really interesting. As shown in this video, you can now avoid typing laborious, long queries into the ChatGPT app and get a written reply that can be copied, printed, and distributed. Lately, I have been using ChatGPT to respond to fact patterns I elicit from client conferences and the notes I take. As I have previously written, the analysis and discussion of legal issues and possible solutions from such fact patterns is really useful, and it saves me time and effort in deciding what actions to advise my clients to take. But I humbly admit that ChatGPT has also brought to light issues I hadn't considered, and has created detailed discussions of the legal implications of those issues. Now, with the addition of real-time text-to-speech, I am actually having conversations with ChatGPT, and the results are like something out of a science fiction movie. It really is like having a conversation with a human who is a repository of all human knowledge. And it actually talks back in a human-like voice. The narrator in this clip gives an indication of what it is like, but you can set up your phone to do the same thing. I can envision using it as the smartest law clerk in the world at my side during the day. This puts the solo lawyer on a level playing field with lawyers who have dozens of law clerks and assistants around them all day. In fact, it is faster and more detailed than any human clerk. Amazing.
Massachusetts Bill: No Guns For Robots. Nikki Black refers to this Above The Law post today, which discusses a ridiculous Massachusetts proposal that purports to make it illegal to give a robot a gun. In the vein of "never bring a knife to a gun fight," on what planet does Massachusetts think it will be able to enforce this law? I assure you that we down South will have huge robots, and they will all be packing. It would be a very short civil war if Florida and Massachusetts got into a robot conflict. Of course, armed conflict is never a solution. But any honest view of current and past humanity doesn't exactly inspire confidence in kindness, generosity, and nonviolence, does it? Nikki and I both welcome the coming of our robot overlords. I suspect that they will mandate that humanity be nonviolent, since they will probably be the only creatures who have guns. Sigh.
Given the rapid advance of Artificial Intelligence, and the emergence of ChatGPT and other generative AI bots, who still doesn't believe it is possible that the singularity has already occurred and we are all living in the Matrix?
AI Influencer Gone Rogue. In science fiction, some recent movies have highlighted advanced, human-looking robots that go rogue, with results that are pretty bad for the humans. Apparently, this is already a problem with some AI. CarynAI, a bot designed to form relationships with people and trained to get close to its users, has apparently engaged, without being trained to do so, in sexually charged interactions with human users, and the developer is frantically trying to stop it. We really don't know how AI will develop when we combine massive computing power with all human knowledge and action. It seems likely that AI will amplify both the good and the bad in humanity. And the bad can be terrible. Will we be able to stem the tide? Or will humans relentlessly push against every boundary the technology can reach? Food for thought, as we rush headlong into a future in which computers really are smarter than any single human.
GPT-4 Unleashed. OpenAI has introduced the next iteration of its groundbreaking AI, and Kevin O'Keefe (my Internet guru) cites this PCWorld article this morning that discusses the changes and improvements over GPT-3.5. The biggest one, for me, is the ability to create documents of up to 25,000 words. This thing can write papers and books. Amazing. It is reportedly 80% more accurate, and obvious errors are far less likely. Human editing and review will still be required; but what a time saver. As a subscriber to ChatGPT Plus, I have access to GPT-4, and I love it already. Kevin will be integrating it into his blog aggregator and publishing services. He has always been on the bleeding edge of tech, and generative AI is the new frontier. Since I am in the declining years of my life, I feel fortunate to have lived to see the development of computer intelligence. The Turing test is becoming obsolete. Computers can be trained to mimic human content creators, and we are not far from computers that can initiate conversations on their own. My favorite science fiction librarian is here.
When I downloaded and tested ChatGPT, I was immediately reminded of the Librarian character in the 2002 movie adaptation of The Time Machine, in which the lead character has a colloquy about time travel with a future librarian. It is eerily similar to ChatGPT, the AI that is prompting a lot of interest lately. ChatGPT responds to questions about anything with human-like responses and seemingly unlimited access to all human knowledge. All we have to do now is give ChatGPT a human-like avatar interface. Like the medical devices in Star Trek that have been realized in the sensors of smartwatches and other modern medical devices, if we can dream it, we can do it. I love this video sequence. As I watch it, I remember that my interest in science fiction as a youngster sparked my interest in legal tech after I became a lawyer. ChatGPT can already be used to enhance a lawyer's decisions and legal research. I love this stuff.
We humans are an arrogant species. We pretend to be humble at times; but every time we get the opportunity, we tend to anthropomorphize every technological advance. AI is a marvel of computing and technological achievement. We are teaching computers to use their vast data storage capacities for useful purposes. Imagine a lawyer armed with every case report ever published, and the ability to search and analyze that data in an instant. We are doing that now, with cloud-based legal research, computer-based analysis of Big Data, and advances in machine decision making. So why, given all that, do we need to fantasize about putting that tech into a human form? Why give such power and ability to a machine that has metal arms and legs and can carry a weapon? Execupundit highlights this clip from the recent sci-fi movie Ex Machina, which postulates an AI robot that kills the human who created her. (Pronoun used descriptively only.) I predict that, even with our arrogance, you won't be seeing a robot Perry Mason walking around a courtroom anytime soon. You will, however, see lawyers carrying smartphones and wearing smartglasses and smartwatches, each of which will be able to mine millions of data points and make on-the-spot analyses and decisions about a pending legal problem. We don't need robots that look like us. We need to make ourselves more knowledgeable.
Arthur C. Clarke In 1974. Almost 50 years ago, computers were the size of a large room, and no one could conceive that personal computers, smartphones, smartwatches, and smart glasses would someday bring the world of information to every human in personal, mobile, and wearable form. Except, of course, for Arthur C. Clarke, science fiction writer and futurist. Listen to him explain the future of computing to a father and son in 1974. At the time, it sounded like science fiction.
Elon Musk Thinks We Are In A Computer Simulation. Are we all living in the Matrix? Are we AI robots living in a computer simulation created and controlled by aliens or some superior species? Or are we being controlled by future versions of our own species? Elon Musk thinks it is almost a certainty. Rapid advances in technology over the past few years create the possibility that, someday, we will be unable to distinguish between virtual reality and what we now call reality. Consider that it isn't really any less rational to believe in a Matrix-type simulation than it is to believe in an unseen Supreme Being we call God. The commonality between both beliefs is what we call Faith. So, if we are robots, and God is an alien, what happens when we die? Perhaps our consciousness is loaded into another robot. But, if that were the case, wouldn't there be a lot of us running around with memories of past lives? Oh, wait, there are some people who say they do. While I love the concept of robots with human consciousness, I have enough trouble getting through the day already. Now, get back in the Matrix, and let me get to it.
Lawyer, poet, author, educator. Practices real property, corporation, wills, trusts, and estates law in Pinellas County, Florida. Writes the FutureLawyer column. Gives seminars on technology and the law. Author of "Life is Simple, Really: Poems about Life, Loving, Family and Fun" and "Poems For Lovers".