The past few days have been a wild ride for the growing open source AI community — even by its fast-moving and freewheeling standards.
Here’s the quick chronology: on or about January 28, a user with the handle “Miqu Dev” posted a set of files on HuggingFace, the leading open source AI model and code sharing platform, that together comprised a seemingly new open source large language model (LLM) labeled “miqu-1-70b.”
The HuggingFace entry, which is still up at the time of this article’s posting, noted that the new LLM’s “Prompt format” (how users interact with it) matched that of Mistral, the well-funded open source Parisian AI company behind Mixtral 8x7b, which many view as the top-performing open source LLM presently available. The new model itself appeared to be a fine-tuned and retrained version of Meta’s Llama 2.
Posted on 4chan
The same day, an anonymous user (possibly “Miqu Dev” themselves) posted a link to the miqu-1-70b files on 4chan, the longstanding and notorious haven of online memes and toxicity, where users began to notice it.
Some took to X, Elon Musk’s social network formerly known as Twitter, to share the discovery of the model and what appeared to be its exceptionally high performance at common LLM tasks (measured by tests known as benchmarks), approaching the reigning leader, OpenAI’s GPT-4, on EQ-Bench.
Machine learning (ML) researchers took notice on LinkedIn, as well.
“Does ‘miqu’ stand for MIstral QUantized? We don’t know for sure, but this quickly became one of, if not the best open-source LLM,” wrote Maxime Labonne, an ML scientist at JPMorgan Chase, one of the world’s largest banking and financial companies. “Thanks to @152334H, we also now have a good unquantized version of miqu here: https://lnkd.in/g8XzhGSM
The investigation continues. Meanwhile, we might see fine-tuned versions of miqu outperforming GPT-4 pretty soon.”
Quantization in ML refers to a technique for making it possible to run AI models on less powerful computers and chips by reducing the numerical precision of a model’s weights: for example, storing them as 8-bit integers instead of 16-bit or 32-bit floating-point numbers, which shrinks memory and compute requirements at a small cost in accuracy.
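As a rough illustration of the idea (a minimal sketch only, not the method Mistral or the leaker actually used), here is symmetric int8 quantization of a toy weight array: each 32-bit float is mapped to an 8-bit integer plus a single shared scale factor, cutting storage roughly 4x while keeping the round-trip error small.

```python
import numpy as np

def quantize_int8(weights: np.ndarray):
    """Symmetric int8 quantization: map floats into the range [-127, 127]."""
    scale = float(np.abs(weights).max()) / 127.0
    q = np.round(weights / scale).astype(np.int8)
    return q, scale

def dequantize_int8(q: np.ndarray, scale: float) -> np.ndarray:
    """Recover approximate float weights from the int8 values."""
    return q.astype(np.float32) * scale

# A toy "weight matrix": each float32 value takes 4 bytes,
# while the quantized version stores 1 byte per weight plus one scale.
w = np.array([0.12, -0.5, 0.33, 0.9, -0.07], dtype=np.float32)
q, scale = quantize_int8(w)
w_approx = dequantize_int8(q, scale)

# Rounding error per weight is at most half a quantization step (scale / 2).
print(q, scale, np.max(np.abs(w - w_approx)))
```

Real LLM quantization schemes (e.g., 4-bit GPTQ or GGUF variants) are more elaborate, using per-block scales and calibration data, but the core trade-off is the same: fewer bits per weight in exchange for a small loss of precision.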
Users speculated “Miqu” might be a new Mistral model covertly “leaked” by the company itself — especially since Mistral is known for dropping new models and updates without fanfare through esoteric and technical means — or perhaps the work of an employee or customer gone rogue.
Confirmation from the top
Well, today it appears we finally have confirmation of the latter of those possibilities: Mistral co-founder and CEO Arthur Mensch took to X to clarify: “An over-enthusiastic employee of one of our early access customers leaked a quantised (and watermarked) version of an old model we trained and distributed quite openly…
To quickly start working with a few selected customers, we retrained this model from Llama 2 the minute we got access to our entire cluster — the pretraining finished on the day of Mistral 7B release. We’ve made good progress since — stay tuned!”
Hilariously, Mensch also appears to have taken to the illicit HuggingFace post not to demand a takedown, but to leave a comment suggesting the poster “might consider attribution.”
Still, with Mensch’s note to “stay tuned!”, it appears that Mistral is not only training a version of this so-called “Miqu” model that approaches GPT-4-level performance, but that it may, if his comments are interpreted generously, actually match or exceed it.
A pivotal moment in open source AI and beyond?
That would be a watershed moment not just for open source generative AI but for the entire field of AI and computer science: since its release back in March 2023, GPT-4 has remained the most powerful and highest-performing LLM in the world by most benchmarks. Not even Google’s long-rumored, now partially available Gemini models have been able to eclipse it — yet (by some measures, the current Gemini models actually perform worse than OpenAI’s older GPT-3.5 model).
The release of an open source GPT-4 class model, which would presumably be functionally free to use, would likely place enormous competitive pressure on OpenAI and its subscription tiers, especially as more enterprises look to open source models, or a mixture of open source and closed source, to power their applications, as VentureBeat’s founder and CEO Matt Marshall recently reported. OpenAI may retain the edge with its faster GPT-4 Turbo and GPT-4V (vision), but the writing on the wall is pretty clear: the open source AI community is catching up fast. Will OpenAI have enough of a head start, and a metaphorical “moat” with its GPT Store and other features, to remain in the top spot for LLMs?