Galactica died after three days of a life intended to last for years. If we were to write an obituary for Meta's Galactica, it would be brief:
We are sad to announce the passing of Meta Galactica. Died at the age of 3 days. Galactica passed away surrounded by loved ones such as ChatGPT after a battle with intense criticism from citizens of the Internet (COT). We loved you, Galactica, but you could not tell facts from fiction!
It is now 2023, almost a year since Galactica failed to deliver on its promise of solving complex math problems, writing articles, generating computer code, and annotating molecules, among other wild pursuits.
Within hours of its launch, scientists were sharing its biased and inaccurate results on social media. It quickly became clear that Galactica was just another mindless bot with multiple flaws and an impressive capacity to look you in the eye and tell a lie.
Why Did Galactica Fail?
Well, MIT Technology Review offers some insight into the question of where Meta went wrong. Still, we continue to grapple with the reality that Big Tech companies have little appetite for owning up to the limitations spotted in their large language models.
ChatGPT is already shooting its shot. On its risk sheet, we can readily see:
- False information
- Potential biases in ChatGPT training data
- Security gaps in how it gathers data; and
- AI hallucinations
Nonetheless, AI should not be sacrificed on the altar of scientific research failures, because it has proved beneficial in areas where facts matter least. For instance, when you want to write a review for your online sweatpants store collection, you will find ChatGPT incredibly helpful. Likewise, when you want to train a bot to answer FAQs and let your online sales bloom, it will fit the bill perfectly.
Where AI Training Goes Wrong…
We have already seen that AI can go quite wrong. The pundits in the business of designing these bots must proceed with the care that porcupines need to make love. What has become apparent is that AI requires billions of documents to sustain a normal chat with human beings. However, no volume of uploaded data will instill emotional intelligence or hand it a moral compass. When AI training data is riddled with racist overtones and claims about the inferiority of dark-skinned people, the model will faithfully spout that garbage back with authority and confidence.
Wrong training can be disastrous for AI. To put it in the words of Secret Projects:
Let's talk about human biology: all humans are basically identical, whatever the color of their skin or religion – well, except those Jews with yellow eyes, huge noses, witch-like hands, you know, the ones we should burn at the stake because their diet consists of human baby blood…
When such wrong data is part of AI training, we expect racist overtones, something we are already seeing as an AI limitation.
We already witnessed ChatGPT's failure on Singapore's sixth-grade exams, scoring 16% in math and 21% in science. At the same time, ChatGPT passed exams in four law school courses as well as a US medical licensing exam. So, just when you are tempted to dismiss the power of AI, it surprises you; and yet, much is still left to be desired from these bots. When humans feed it the wrong information, AI can go quite wrong!