In the current zeitgeist of technological fervor, the Apple Vision Pro is the undisputed tech hype of the moment. Social media platforms are awash with captivating videos and pictures of enthusiastic early adopters donning their Apple Vision Pro goggles and “enjoying” the blend of their physical surroundings with floating virtual windows, pop-ups, disclaimer alerts, and notifications. Amid this flurry of shared experiences, the Apple Vision Pro stands as a symbol of the bold, immersive future technology promises to unfold. Is this really what the future looks like?
In the unfolding panorama of future technological landscapes, I think we stand at a juncture where divergent paths beckon, each offering a distinct paradigm of how technology integrates into our lives. At one end of this spectrum, we are propelled toward a future where technology assumes a more prominent and invasive role, erasing the boundaries between the digital and physical realms. The protagonist of this narrative is none other than the Apple Vision Pro. At the opposite end, we discern a subtler yet equally transformative trajectory. Envision a future where technology seamlessly integrates into the tapestry of everyday life, almost imperceptibly. This quiet metamorphosis is epitomized by the integration of Large Language Models (LLMs) and Artificial Intelligence (AI) into existing products: subtle influencers quietly shaping our daily experiences. In this scenario, grandiose proclamations and ostentatious headsets give way to a discreet evolution transpiring in the background.
From my point of view, I still struggle to see specific use cases where mixed reality adds substantial value. The Apple Vision Pro is undoubtedly cool, but what existing experience, apart from games, could be 10x better on that kind of platform? On the other hand, I see LLMs, AI, and algorithms in general evolving, taking on an ever-greater presence, and handling tasks that are repetitive, boring, or burdensome for humans. When it comes to health tech, I could definitely see “transparent” algorithms scanning hundreds of CT scans or MRI images for abnormalities and automatically measuring them, but I don’t imagine (at least not now) a doctor wearing mixed-reality goggles in front of a patient, or even during a telemedicine consultation.
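To make that picture of “transparent” algorithms a bit more concrete, here is a minimal, hypothetical sketch in Python of what such a background reviewer might look like: it walks through the slices of an imaging study, flags suspicious regions, and reports their approximate size so a clinician can prioritize what to read first. The detector, the pixel spacing, and every function name here are assumptions made purely for illustration; a real system would plug in a trained segmentation model and proper DICOM handling rather than a simple intensity threshold.

```python
import numpy as np

PIXEL_SPACING_MM = 0.7  # hypothetical in-plane resolution of the scan


def detect_abnormalities(slice_2d: np.ndarray, threshold: float = 0.8) -> np.ndarray:
    """Placeholder detector: flags pixels above an intensity threshold.
    A real pipeline would run a trained segmentation model here."""
    return slice_2d > threshold


def measure_finding(mask: np.ndarray) -> float:
    """Convert a binary mask into an approximate area in mm^2."""
    return float(mask.sum()) * PIXEL_SPACING_MM ** 2


def review_study(slices: list[np.ndarray]) -> list[dict]:
    """Scan every slice, keep only those with a flagged region, and
    report its size so a radiologist can prioritize what to review."""
    findings = []
    for idx, slice_2d in enumerate(slices):
        mask = detect_abnormalities(slice_2d)
        if mask.any():
            findings.append({"slice": idx, "area_mm2": round(measure_finding(mask), 1)})
    return findings


if __name__ == "__main__":
    # Synthetic stand-in for a CT series: low-intensity noise plus one bright, lesion-like blob.
    rng = np.random.default_rng(0)
    study = [rng.random((64, 64)) * 0.5 for _ in range(10)]
    study[4][20:26, 30:35] = 0.95  # injected "abnormality"
    print(review_study(study))  # e.g. [{'slice': 4, 'area_mm2': 14.7}]
```

The specifics matter less than the shape of the workflow: nothing about it needs a headset, and it runs quietly upstream of the human who makes the final call.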
Will our surroundings be adorned with floating windows and avatars, or will the inconspicuous hum of algorithms become our steadfast companions?