In the early morning of June 6, Beijing time, Apple's WWDC 2023 worldwide developers conference officially opened, and the Apple Vision Pro, unveiled as the event's "One more thing," was without question its most talked-about product.
Just as the industry broadly believed the "metaverse" boom was receding, Apple entered the XR track somewhat late, yet pulled out a blockbuster product in the Apple Vision Pro, leaving the industry quite amazed.
Then again, at a time when the industry's attention is concentrated on artificial intelligence, Apple launched only the "metaverse" device Apple Vision Pro, which inevitably raises doubts about just how strong Apple's artificial intelligence capabilities really are.
Next, let Big Model House take stock of WWDC 2023 and the new Apple Vision Pro for you: what exactly do they reveal about the strength of Apple's artificial intelligence?
AIGC-Generated Portrait
When using Apple Vision Pro for FaceTime video calls, there is no camera facing the user, and a user wearing an XR headset would look very strange on camera anyway. Apple's answer is the "Persona": after the device scans the wearer's face, machine learning builds a realistic digital portrait that represents the user on the call, reflecting facial and hand movements in real time.
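Apple has not disclosed how the Persona is generated, but ARKit's public face-tracking API already shows the kind of per-frame expression data such an avatar could be driven by on devices with a TrueDepth camera. The sketch below is purely illustrative, not Apple's Persona pipeline:

```swift
import ARKit

// Illustrative only: captures facial expression weights that an avatar
// renderer could consume; this is NOT how Vision Pro builds its Persona.
final class FaceCaptureController: NSObject, ARSessionDelegate {
    let session = ARSession()

    func start() {
        guard ARFaceTrackingConfiguration.isSupported else { return }
        session.delegate = self
        session.run(ARFaceTrackingConfiguration())
    }

    // Called every frame with updated anchors.
    func session(_ session: ARSession, didUpdate anchors: [ARAnchor]) {
        for case let faceAnchor as ARFaceAnchor in anchors {
            // blendShapes maps expressions (e.g. .jawOpen) to 0...1 weights.
            let jawOpen = faceAnchor.blendShapes[.jawOpen]?.floatValue ?? 0
            let smileLeft = faceAnchor.blendShapes[.mouthSmileLeft]?.floatValue ?? 0
            print("jawOpen: \(jawOpen), smileLeft: \(smileLeft)")
        }
    }
}
```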
A more intelligent input method
As is well known, one of the most criticized pain points of the XR industry is the lack of good input methods: whether single-button controller input or floating virtual keyboards, both fall far short of a physical keyboard in efficiency and accuracy, making for a very poor experience.
Apple Vision Pro's primary interaction methods are the eyes, hands, and voice, which means voice input may become one of its most important ways of entering text.
While Apple did not emphasize input methods in the Apple Vision Pro introduction, the iOS 17 segment did introduce a smarter keyboard that not only corrects spelling errors but even fixes grammatical mistakes as the user types.
Auto-corrected words are temporarily underlined, so users can see exactly which words were changed and revert to the original with a single tap.
What's more, thanks to on-device machine learning, the keyboard automatically refines its model with every keystroke, bringing autocorrect accuracy to an unprecedented level.
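Apple's new transformer-based autocorrect is not exposed as a developer API, but the long-standing UITextChecker shows the basic shape of an on-device check-and-suggest flow; a minimal sketch:

```swift
import UIKit

// Minimal sketch using the public UITextChecker API. iOS 17's
// transformer autocorrect is private; this only illustrates the
// basic on-device spell-check flow.
func correctionCandidates(for text: String, language: String = "en_US") -> [String] {
    let checker = UITextChecker()
    let range = NSRange(text.startIndex..., in: text)

    // Find the first misspelled word, entirely on-device.
    let misspelled = checker.rangeOfMisspelledWord(
        in: text, range: range, startingAt: 0, wrap: false, language: language)
    guard misspelled.location != NSNotFound else { return [] }

    // Ask for replacement guesses for that word.
    return checker.guesses(forWordRange: misspelled, in: text, language: language) ?? []
}

// e.g. correctionCandidates(for: "Ths is a test") might suggest ["This", ...]
```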
In addition, powered by a state-of-the-art Transformer language model for word prediction, predictive text lets users enter the next word, or even a complete sentence, very quickly.
This highly personalized language prediction model also allows the keyboard to better understand the user's language habits, significantly improving accuracy when using voice input as well.
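The keyboard's actual model is private, but the mechanics of inline prediction are easy to sketch: given any function that scores candidate next words for a prefix, greedy decoding repeatedly appends the top candidate. `NextWordModel` below is a hypothetical stand-in, not a real Apple API:

```swift
// Illustrative sketch of greedy next-word prediction.
protocol NextWordModel {
    /// Returns candidate next words with scores for the given prefix.
    func predictions(for prefix: [String]) -> [(word: String, score: Double)]
}

func complete(prefix: [String], using model: NextWordModel, maxWords: Int = 10) -> [String] {
    var words = prefix
    for _ in 0..<maxWords {
        // Greedy decoding: always take the single highest-scoring candidate.
        guard let best = model.predictions(for: words).max(by: { $0.score < $1.score }),
              best.word != "</s>" else { break }  // stop at end-of-sentence token
        words.append(best.word)
    }
    return words
}
```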
The new "Handwriting" app
Arriving alongside iOS 17, the new "Journal" app uses on-device machine learning to generate personalized memories and writing suggestions from the user's photos, music, workouts, and more. Based on this information, the app suggests things to record and write about at the moments that suit you.
This means that, relying on the iPhone's computing power, the device can already run localized semantic understanding of text, images, and other multimedia content, and even possesses certain generative AI capabilities.
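While the Journal app's ranking logic is undocumented, Apple's public NaturalLanguage framework already runs semantic text embeddings entirely on-device, which hints at how such localized understanding is possible; a small sketch:

```swift
import NaturalLanguage

// Sketch of on-device semantic understanding with the public
// NaturalLanguage framework. How the Journal app actually ranks
// moments is not documented; this only shows local text embeddings.
func semanticDistance(_ a: String, _ b: String) -> Double? {
    // Sentence embeddings run entirely on-device (iOS 14+).
    guard let embedding = NLEmbedding.sentenceEmbedding(for: .english) else { return nil }
    // Cosine distance: smaller means more semantically similar.
    return embedding.distance(between: a, and: b)
}

// e.g. "morning run by the river" should sit closer to a journaling
// prompt about exercise than to an unrelated sentence.
```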
Yet Apple has chosen to keep a low profile here. In the view of Big Model House, Apple's AI capability is still relatively weak compared with top large models like GPT, and over-emphasizing it would undoubtedly be throwing an egg against a rock.
Moreover, as a technology company whose revenue comes mainly from consumer electronics and services, Apple needs to emphasize new features, improved user experience, and ever-greater user stickiness far more than the relatively abstract concept of AI.