The Apple Vision Pro is supposed to be the start of a new spatial computing revolution. After several days of testing, it’s clear that it’s the best headset ever made — which is the problem.
I also think they’re too big and bulky, and nobody has found the right way to use them yet. They’ll take off once they can be integrated into glasses and truly “augment” the world around you. Think of a party where names and key bio info are automatically displayed next to people, or a sports game where you can pull up stats on players, or navigation that overlays arrows on the street. For now you just get “toss our window up in your field of view with these clunky goggles.”
Isn’t that how Google Glass did it? It all sounds good in theory until you realize there’s a looong road until it’s sleek, and most people aren’t willing to use it during the awkward stage.
Google Glass wasn’t AR though; it was just a display strapped to some glasses. It didn’t do 3D or head tracking or anything.
You’re right about the bulky phase. That’s really where they should be putting R&D. I think Google Glass had a better look, but all of these trials and missteps are ultimately what pushes the technology forward until there’s a version that people really end up embracing.
I think a big problem with Apple’s attempt is that they decided to make the body of the device out of aluminum and glass, which are much heavier than plastic. The Bigscreen Beyond is currently the lightest headset, but the only things inside it are the displays: processing and tracking are handled outside the headset, and it has no passthrough cameras. So with current lens and display technology it’s still not possible to get a glasses-like form factor in a VR headset, let alone one with integrated cameras, processing, adjustable IPD, eye tracking, etc.