News · Apple
Vision Pro's Next Leap: Enhanced Accessibility with On-Device AI
Apple is set to redefine accessibility on the Vision Pro, introducing advanced features leveraging on-device AI for users with low vision.

[Image: A person using an Apple Vision Pro with magnified text and object descriptions]
Video: Apple on YouTube
Apple is poised to significantly upgrade the Vision Pro's accessibility features. Upcoming updates introduce capabilities tailored for users with low vision or blindness.
Central to these enhancements is an overhaul of the Zoom feature. While previously limited to virtual content, the updated Zoom will now magnify the physical world viewed through passthrough cameras.
Another significant addition is "Live Recognition", an expansion of the VoiceOver screen reader. This new feature harnesses on-device machine learning to describe surroundings, identify objects, and read text.
The implications are significant: together, these features transform the Vision Pro from a spatial computing device into an intelligent visual assistant.
These features are slated for release in a visionOS update later this year. We anticipate a debut as part of visionOS 3 at WWDC25.
Apple continues to push the envelope for assistive technology. Integrating these AI-powered tools underscores a commitment to making spatial computing accessible to everyone.