The development of Artificial General Intelligence (AGI) has been in the news recently: OpenAI, the company behind ChatGPT, my AI assistant of choice, just went through a leadership shakeup that apparently stemmed from internal disputes about the next step in AI, namely AGI. AGI would make truly learning computers a reality, and that prospect frightens many computer researchers. While we are not yet at risk of creating sentient computers, serious problems arise when a computer can learn (and thus make choices?) that might not be good for human beings. Systems that can understand, learn, and apply knowledge across a broad range of tasks, much like a human being, create risks that some researchers think should be carefully managed rather than thrust willy-nilly upon the world. My view is that such systems can still be controlled and regulated by humans, since they do not possess consciousness, emotions, or self-awareness. They operate on algorithms and data, without personal experiences or subjective awareness.
Of course, the profit motive suggests to me that risk assessment for this new technology will take a back seat in boardroom discussions, and I wish I could live longer than my current 5-to-20-year life expectancy. If you are young, you really need to learn this stuff. Your future depends on it.