ChatGPT-4V (with Vision)

We haven't even gotten comfortable with ChatGPT-4 yet, and already there is a whole new generative AI paradigm to learn. Adding vision capability to the ChatGPT universe means that, once the technology matures, blind people will be able to have the world interpreted for them on the fly. Already, GPT-4V can take an image of an interface and instantly generate the computer code needed to recreate it. Any image now has the interpretive benefit of ChatGPT's entire knowledge base behind it.

I can envision (pardon the pun) a world in which vision-impaired people walk around with smart glasses that verbally interpret the world for the wearer, with a depth that even the human eye and brain cannot match.

For those who want to focus on the flaws of early iterations, as this article does, and for the naysayers who warn of a world in which computers think for us, I say the benefits far outweigh the risks, and, as time has proven, technological advances can't be stopped anyway. Let's learn to live with them, and enhance the ability of our human species to engage with life for as long as we are graced to live it.