What Meta’s Galactica missteps mean for GPT-4
Last week, the world of large language models (LLMs) was awash with thought and debate, reminiscent of Rodin’s The Thinker: Meta’s Galactica LLM demo went wrong, Stanford CRFM debuted its HELM benchmark, and rumors swirled that OpenAI’s GPT-4 could be released in the next few months.
Last Tuesday, the online chatter exploded. Meta AI and Papers With Code released a new open-source LLM called Galactica, described in an arXiv paper as “a large language model for science” meant to help scientists cope with “information overload.”