AI Breakthrough That Could Pose Threats To Humanity
Sarah Connor warned us. Skynet. Destruction of humanity. I clearly recall watching ‘The Terminator’ when it was released back in 1984. And now, all these years later, an apparent AI (Artificial Intelligence) breakthrough that could pose threats to humanity. The so-called Project Q*.
I’ve taken notice of the recent upheaval over at OpenAI, where CEO Sam Altman was fired (but was brought back five days later after nearly all of their employees threatened to quit and follow him to Microsoft).
AI Breakthrough
Long story short, evidently researchers there had been warning the board of directors of an imminent major breakthrough in AI that could threaten humanity.
Open AI staff warned its board about a powerful AI breakthrough that could pose threats to humanity, before CEO Sam Altman was fired then later rehired.
Several researchers sent a letter to the directors warning the progress made on Project Q* had the potential to endanger humanity.
The previously unreported letter ultimately led to the removal of Altman, the creator of ChatGPT, and the ensuing chaotic five days within the startup company.
DailyMail
What caught my interest in this story was not Sam Altman. Rather, it was the statement regarding a powerful ‘breakthrough’: the so-called Project Q* (pronounced Q-Star).
AI, or Artificial Intelligence. It is a concern. A high concern (in my opinion, at least enough to warrant caution). I don’t know about everyone else, but I can easily imagine and hypothesize AI becoming quite detrimental to our well-being as humans.
While it can (and should) be used for ‘good’, it will no doubt also be used for ‘bad’ (given human nature). Additionally, AI is reportedly moving rapidly towards superintelligence, surpassing human intelligence by far. Way far. Many fars.
OpenAI defines artificial general intelligence (AGI) as AI systems that are smarter than humans.
What dangers could AI superintelligence pose to humanity? Wow, the sky is the limit to your imagination. The nanosecond AI decides that we (or some of us) are no longer needed, well, out go the lights – so to speak.
Is it ridiculous to pose such a hypothetical? I don’t think so. I believe that we should be cautious, and perhaps impose some regulation, in this race towards superintelligent AI.
However, it is also my opinion that this will not be the case (a cautious progression). Rather, it will more likely be a reckless race to be fir$t. Unfortunately, it could ultimately lead to our own destruction.
As you know, we live in a modern world where our survival literally depends on many integrated systems of technology. Disrupt or break them (or any one of them), and things go bad, quickly.
Now imagine a sort of runaway (or deliberately misused) AI ‘inside’ these systems.
Preparedness anyone?
What is Project Q* (Q-Star)?
In short, apparently it enables an AI model to genuinely solve mathematical problems it hasn’t seen before. In other words, figuring them out on its own. And that ability will likely improve rapidly. A probable leap towards AGI, or Artificial General Intelligence.
“Conquering the ability to do math — where there is only one right answer — implies AI would have greater reasoning capabilities resembling human intelligence.”
Unlike a computer simply helping us do math,
“…the powers given to Q* aren’t just a calculator. Having learned literacy in math requires humanlike logic and reasoning…
With writing and language, an LLM (large language model) is allowed to be more fluid in its answers and responses, often giving a wide range of answers to questions and prompts. But math is the exact opposite, where often there is just a single correct answer to a problem.”
digitaltrends.com
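To illustrate that distinction in the simplest possible terms, here is a hypothetical little Python sketch (my own illustration, not anything from OpenAI and not how Q* actually works): a math answer either matches the single correct result or it doesn’t, while a language answer can be phrased many different ways and still count as acceptable.

```python
# Hypothetical illustration only -- not OpenAI code and not how Q* works.
# It simply shows why math is an "exact answer" task while language is not.

def grade_math(model_answer: str, correct_answer: str) -> bool:
    # Math is pass/fail: the answer matches the one correct result or it doesn't.
    return model_answer.strip() == correct_answer.strip()

def grade_language(model_answer: str, acceptable_phrases: list[str]) -> bool:
    # Language is fluid: many different phrasings can count as correct.
    return any(phrase.lower() in model_answer.lower() for phrase in acceptable_phrases)

# A math problem has exactly one right answer.
print(grade_math("42", "42"))       # True
print(grade_math("41.9", "42"))     # False -- "close" is still wrong

# A language prompt ("What is the capital of France?") tolerates many answers.
print(grade_language("The capital is Paris.", ["paris"]))   # True
print(grade_language("Paris, of course!", ["paris"]))       # True
```

That is the whole point being made above: an AI that reliably gets the one right answer isn’t just parroting fluent text, it is doing something closer to reasoning.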
Sam Altman recently said, regarding this AI, “Is this a tool we’ve built or a creature we have built?”