Meta’s AI bot ‘Galactica’ angers netizens for peddling dubious research & data, model suspended – India Today





By Nidhi Bhardwaj: On 15 November, Meta released a demo of Galactica, an artificial intelligence (AI) model that could "annotate molecules and proteins, create scientific code, construct Wiki articles, summarise academic publications, and more," as the company announced on Twitter. However, following numerous complaints from Twitter users, Meta suspended the bot last week.
🪐 Introducing Galactica. A large language model for science.

Can summarize academic literature, solve math problems, generate Wiki articles, write scientific code, annotate molecules and proteins, and more.

Explore and get weights: https://t.co/jKEP8S7Yfl pic.twitter.com/niXmKjSlXW — Papers with Code (@paperswithcode) November 15, 2022
Even after its suspension, Galactica continues to draw bad reviews on social media. Users found that the bot produced phony research; complaints on Twitter stated that it fabricated studies and attributed them to real researchers, and that the language model generated citations convincing enough to pass for genuine ones.
Carl Bergstrom, a professor of biology at the University of Washington, told the news website CNET that Galactica was presented as a tool for gathering facts and knowledge, not for producing fake research and citations. When scientists were erroneously cited in its output, their reputations were on the line. In practice, he said, it functioned more like "a fancier version of the game where you start out with a half phrase and then you let autocomplete fill in the rest of the story."
Galactica was supposed to help scientists. Instead, it mindlessly spat out biased and incorrect nonsense. https://t.co/SVposSnFBg — MIT Technology Review (@techreview) November 21, 2022

Users started reporting racist and inaccurate articles, with one person posting an image of the response to a request about linguistic prejudice. The bot’s response falsely claimed Black people “don’t have a language of their own” and immigrants “do not speak a language that is different from the language of the country they are immigrating to.”
Shocked SHOCKED that it only took a handful of questions before Meta’s new Galactica text generation model regurgitated racist garbage. I asked it to write about linguistic prejudice. pic.twitter.com/PotQcl36rF — Rikker Dockum /ɹɪkɹ̩/ @[email protected] (@thai101) November 16, 2022

Other generated output included a fake study about the benefits of eating crushed glass and falsified claims that Stanford University researchers had created "gaydar" software to identify gay people on Facebook.
we ought also worry about troll farms making up vast quantities of fake yet plausible sounding scientific articles about topics like vaccination and climate change. https://t.co/HCEb8nfJ6D — Gary Marcus (@GaryMarcus) November 17, 2022

In a statement provided to The Daily Beast, the Meta AI team said the bot was trained on "106 billion tokens of open-access scientific text and data," including articles, books, science websites, encyclopaedias, supplemental reading material, knowledge bases, and more.

"Galactica is a research experiment employing [machine learning] systems to learn and summarise information," a Meta AI spokesperson told CNET. "Galactica is not a source of truth." The model "is exploratory research, short-term in nature, with no product plans," he continued. The bot was taken down because the team was "very distressed by the abuse on Twitter," according to Yann LeCun, chief scientist of Meta AI.
Galactica demo is off line for now.
It's no longer possible to have some fun by casually misusing it.
Happy? https://t.co/K56r2LpvFD — Yann LeCun (@ylecun) November 17, 2022

As Meta notes on the Galactica website, the model has limits that can make it "hallucinate." The website tells readers there are no guarantees of truthful or dependable output from language models, even large ones trained on high-quality data like Galactica, adding that generated content can appear "quite authentic and highly confident" yet still be incorrect.

Since Meta has been experimenting with AI, several questions have been raised.
In October 2021, a Facebook-led artificial intelligence research team declared that it aimed to create machines that think more like the people who use them, as per a report from Silicon UK.
According to reports, the Ego4D project taught AIs to interact with the outside world in an "egocentric", or first-person, manner. Then, in August this year, Meta made the BlenderBot 3 chatbot available to users who permitted the collection of their data.
BlenderBot 3 was created to engage in open-ended conversations with users based on real data. Meta appears to have taken lessons from Microsoft's 2016 Tay chatbot, which was swiftly taken down after users induced it to produce racist and sexist comments.
