With pundits railing about the risk of Artificial Intelligence giving birth to a breed of killer robots like those in the Terminator movies, I decided to get an answer straight from the horse's mouth. So, I asked it. I didn't expect ChatGPT to admit, even if it were true, that it was designed for human annihilation; but the answer was cogent, and it placed the blame for any such development, should it happen, on the humans who design the bots.
"As an AI language model, I am a tool designed to assist users in generating human-like text based on the input provided. I don't have the ability to act independently, make decisions, or have any physical presence in the world. My purpose is to help you with tasks such as answering questions, providing suggestions, and generating text.
"The risks associated with AI, especially in the context of autonomous weapons or "killer bots," come from the intentions of the humans who create and deploy these systems, as well as the design and safety measures in place. It's important to ensure that AI technologies are developed responsibly, with proper ethical guidelines and regulations, to mitigate the risks of misuse or unintended consequences.
"In short, the risk of me or any AI language model becoming like the killer bots in the Terminator movie is dependent on how the technology is developed, used, and regulated by humans. The AI community, including OpenAI, is actively working on developing safe, ethical AI systems and advocating for responsible AI policies to minimize potential risks."
So the problem, if there is one, is regulating the humans who create this stuff. As always with new technology, the use to which it is put reflects the good or evil of the people behind it. If our destruction is coming, it will be entirely of our own making.