CTOs pouring millions into React, Vue, and Angular are building for yesterday. Your screen-first UI is becoming the digital equivalent of a fax machine.
The future isn't an app you launch. It's intelligence that's ambient, embodied, and sees you. Google's Project Astra is the latest salvo: an AI that can actually see and remember what's happening around it in real time. It's not just processing text; it's interacting with the physical world. And Razer's Project Ava, with its holographic companion projected onto your desk, isn't some far-off sci-fi dream. It's a glimpse of actual products shipping soon, offering a tangible AI presence without a flat screen.
This shift hinges on a fundamental architectural change – what I call the 'Holly layer'. It's about decoupling the AI's core intelligence and reasoning from its presentation. Right now, our systems are hardwired to push data onto a screen. The next generation will swap that screen handler for a holographic handler, a voice agent, or an embodied robotic interface. The intelligence remains, but the output medium transforms.
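To make the decoupling concrete, here's a minimal sketch of that idea in TypeScript. All names here (AgentResponse, PresentationHandler, ScreenHandler, VoiceHandler, Agent) are illustrative, not a real API: the point is that the intelligence produces a medium-agnostic response, and interchangeable handlers decide how to present it.

```typescript
// A medium-agnostic response produced by the core intelligence.
interface AgentResponse {
  text: string;
  urgency: "low" | "high";
}

// The 'Holly layer' boundary: any output medium implements this.
interface PresentationHandler {
  present(response: AgentResponse): string;
}

// Today's default: render to a 2D screen.
class ScreenHandler implements PresentationHandler {
  present(r: AgentResponse): string {
    return `<div class="${r.urgency}">${r.text}</div>`;
  }
}

// Tomorrow: same intelligence, different output medium.
class VoiceHandler implements PresentationHandler {
  present(r: AgentResponse): string {
    const tone = r.urgency === "high" ? "[speak urgently] " : "[speak calmly] ";
    return tone + r.text;
  }
}

// The intelligence never knows which medium it's talking to.
class Agent {
  constructor(private handler: PresentationHandler) {}

  respond(input: string): string {
    // Stand-in for real reasoning: echo a canned answer.
    const response: AgentResponse = { text: `You said: ${input}`, urgency: "low" };
    return this.handler.present(response);
  }

  // Swap the screen handler for a voice or holographic one at runtime.
  setHandler(handler: PresentationHandler): void {
    this.handler = handler;
  }
}
```

The screen becomes just one handler among many; retiring it means swapping an implementation, not rebuilding the intelligence behind it.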
Think about it. We've spent years, even decades, optimising for pixels. We've built entire careers and massive engineering teams around mastering the art of the user interface on a two-dimensional display. Billions, if not trillions, have been invested in this paradigm. And it's all about to be disrupted. Science fiction has been telling us this story for years. From Red Dwarf's Holly to JARVIS and HAL, the vision was always ambient intelligence, a constant, aware presence, not a discrete application. We've been building the wrong thing.
This isn't about discarding existing technology overnight. It's about recognising where the strategic investment needs to shift. If your roadmap is solely focused on refining pixel-perfect UIs for the next five years, you're betting on a legacy system. The real innovation, the competitive advantage, will be in the Holly layer – building AI that can perceive, understand, and interact with the world, unbound by the limitations of a screen.
Are you building for the present, or the future?
The code doesn't write itself. Yet.
https://tyingshoelaces.com/linkedin/screens-wrong-interface-ai