A new AI model has garnered significant social media attention with its lightning-fast response speed and technology that could challenge Elon Musk’s Grok and ChatGPT.
Groq, the latest AI tool to make waves in the industry, has quickly gained attention after its public benchmark tests went viral on the popular social media platform X.
Many users have shared videos of Groq’s performance, showcasing computational speed that outpaces the well-known AI chatbot ChatGPT.
side by side Groq vs. GPT-3.5, completely different user experience, a game changer for products that require low latency pic.twitter.com/sADBrMKXqm
— Dina Yerlan (@dina_yrl) February 19, 2024
Groq Develops Its Own Custom ASIC Chip
What sets Groq apart is its team’s development of a custom application-specific integrated circuit (ASIC) chip designed specifically for large language models (LLMs).
This chip enables Groq to generate 500 tokens per second, while the publicly available version of ChatGPT, which runs on GPT-3.5, manages roughly 40 tokens per second.
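To put those throughput figures in perspective, here is a minimal back-of-the-envelope sketch in Python. It uses the 500 and 40 tokens-per-second figures cited above, and the 300-token response length is a hypothetical example chosen for illustration, not a benchmark.

```python
# Back-of-the-envelope comparison of generation time at the throughput
# figures quoted above (500 tokens/s for Groq vs. 40 tokens/s for GPT-3.5).
# The 300-token response length is a hypothetical example, not a benchmark.

def generation_time(num_tokens: int, tokens_per_second: float) -> float:
    """Return the seconds needed to stream `num_tokens` at a steady rate."""
    return num_tokens / tokens_per_second

RESPONSE_TOKENS = 300  # roughly a few paragraphs of text

for name, rate in [("Groq (claimed)", 500), ("ChatGPT on GPT-3.5 (cited)", 40)]:
    seconds = generation_time(RESPONSE_TOKENS, rate)
    print(f"{name}: {seconds:.1f} s for {RESPONSE_TOKENS} tokens")
# Groq (claimed): 0.6 s for 300 tokens
# ChatGPT on GPT-3.5 (cited): 7.5 s for 300 tokens
```

In other words, at the cited rates a few paragraphs of text arrive in well under a second on Groq versus several seconds on GPT-3.5, which is the user-experience gap the viral side-by-side videos highlight.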
Groq Inc, the company behind the tool, claims to have created the first-ever language processing unit (LPU), the engine that drives Groq’s model.
Unlike traditional AI models that rely heavily on graphics processing units (GPUs), which are both scarce and expensive, Groq’s LPU is pitched as a faster, more efficient alternative.
Wow, that’s a lot of tweets tonight! FAQs responses.
• We’re faster because we designed our chip & systems
• It’s an LPU, Language Processing Unit (not a GPU)
• We use open-source models, but we don’t train them
• We are increasing access capacity weekly, stay tuned pic.twitter.com/nFlFXETKUP
— Groq Inc (@GroqInc) February 19, 2024
Interestingly, Groq Inc is no newcomer to the industry, having been founded in 2016, when it secured the trademark for the name “Groq.”
However, last November, as Elon Musk introduced his own AI model, named Grok (with a “k”), the original creators of Groq took to their blog to address Musk’s naming choice.
In a playful yet assertive manner, they highlighted the similarities and asked Musk to opt for a different name, considering the association with their already-established Groq brand.
Despite the recent social media buzz around Groq, neither Musk nor the Grok page on X has commented on the naming overlap between the two tools.
AI Developers Turn to Custom Chips
Groq’s successful use of its custom LPU model to outperform other popular GPU-based models has caused a stir.
Some even speculate that Groq’s LPUs could offer a significant improvement over GPUs, challenging in-demand, high-performance chips like Nvidia’s A100 and H100.
“Groq created a novel processing unit known as the Tensor Streaming Processor (TSP) which they categorize as a Linear Processor Unit (LPU),” X user Jay Scambler wrote.
“Unlike traditional GPUs that are parallel processors with hundreds of cores designed for graphics rendering, LPUs are architected to deliver deterministic performance for AI computations.”
Scambler added that this means that “performance can be precisely predicted and optimized which is critical in real-time AI applications.”
Groq is serving the fastest responses I’ve ever seen. We’re talking almost 500 T/s!
I did some research on how they’re able to do it. Turns out they developed their own hardware that utilize LPUs instead of GPUs. Here’s the skinny:
Groq created a novel processing unit known as… pic.twitter.com/mgGK2YGeFp
— Jay Scambler (@JayScambler) February 19, 2024
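To illustrate what deterministic, predictable throughput means for a real-time application, the sketch below checks whether a response fits a latency budget when per-token timing is assumed to be fixed. The budget, token counts, and rate are hypothetical values chosen for illustration; they are not Groq specifications.

```python
# Illustrative only: if per-token latency is deterministic (a fixed, known
# value), an application can decide up front whether a request will meet its
# real-time deadline. All numbers below are hypothetical, not Groq specs.

def fits_latency_budget(prompt_overhead_s: float,
                        expected_output_tokens: int,
                        seconds_per_token: float,
                        budget_s: float) -> bool:
    """Predict total response time and compare it against a hard deadline."""
    predicted = prompt_overhead_s + expected_output_tokens * seconds_per_token
    return predicted <= budget_s

# Example: a voice assistant that must answer within 1.5 seconds.
deterministic_rate = 1 / 500          # hypothetical fixed 500 tokens/s
ok = fits_latency_budget(prompt_overhead_s=0.2,
                         expected_output_tokens=400,
                         seconds_per_token=deterministic_rate,
                         budget_s=1.5)
print("Meets the 1.5 s deadline:", ok)  # True: 0.2 + 400/500 = 1.0 s
```

With a GPU whose per-token timing fluctuates, that prediction would only be an estimate; the point Scambler makes is that a deterministic architecture lets developers treat it as a guarantee.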
Groq’s approach aligns with a broader industry trend in which major AI developers are exploring in-house chips to reduce their reliance on Nvidia’s hardware.
For one, OpenAI, a prominent player in the AI field, is reportedly seeking substantial funding from governments and investors worldwide to develop its own chip.