Meta’s long-awaited $799 AI glasses, branded Meta Ray-Ban Display, had their moment in the spotlight at the Connect 2025 keynote, only to stumble during live demonstrations. Two technical glitches in front of a live audience left Mark Zuckerberg and his team scrambling, pulling focus from the product’s promise.
The onstage failures began when the Live AI assistant misread commands during a cooking demo, repeatedly skipping steps. Later, a WhatsApp video call would not connect through the glasses, forcing CTO Andrew Bosworth to abandon the segment after visibly struggling. One guest blamed the venue’s Wi-Fi; others saw the issues as a cautionary tale about expectations versus reality.
Despite the missteps, Meta emphasized ambition. The glasses aim to deliver wearable “agentic AI”: technology that anticipates user needs, assists with tasks hands-free, and integrates voice and gesture control. Meta says many features run on-device, signaling a commitment to privacy and user autonomy.
The botched demo landed during a high-stakes product reveal meant to position Meta as the leader in AI wearables. Critics note the contrast: many tech companies now avoid live demos precisely because conditions onstage are unpredictable. Still, smiling through the errors, Zuckerberg defended the risk, suggesting that perfection isn’t necessary to capture the imagination.
For consumers, this moment matters. Early adopters may remain excited, but broader confidence will hinge on whether Meta can steadily deliver reliable performance. For Meta, the test now is whether these glasses work seamlessly in everyday conditions, not just in prepared demos.
Disclaimer
This article is for informational purposes only and does not constitute an endorsement or evaluation of Meta’s product. Product performance may vary, especially outside controlled demo conditions. Always check official reviews before making purchasing decisions.