A recently published Apple (NASDAQ:AAPL) patent continuation revealed that the California-based tech company has developed an intelligent computer user interface system that automatically switches from gesture-based controls to conventional input methods based on a user’s proximity to the computer. The patent continuation, titled “Computer User Interface System and Methods,” was first uncovered by Apple Insider.
In the background information for the patent, Apple noted that, “Computer devices are typically configured with a user interface that allows a user to input information, such as commands, data, control signals, and the like, via user interface devices, such as a keyboard, a mouse, a trackball, a stylus, a touch screen, and the like.” Gesture-operated user interfaces likewise already exist.
However, Apple’s invention proposes merging the two major types of user interfaces into a new type of user interface that dynamically adapts to a user’s location and proximity to the computer display. Using sensors that can detect a user’s distance, depth, and/or proximity, the computer system will first determine a “user context.” Per the patent summary, “a user presence context may be determined to be presence of a user, absence of a user or presence of multiple users within a vicinity of the computer. Based on the determined user presence context, the computer may be placed in a particular mode of operation and/or may alter an appearance of information displayed by the computer.”
For example, if the computer senses two separate users at a distance of three feet, it may enter a multiple user, gesture-based control mode. Operations controlled via gestures could include, but are not limited to, scrolling, selecting, and zooming.
Once the system detects that a user is close enough, it will reconfigure its user interface with controls that are more suitable for use at close range, such as a keyboard, touchscreen, or other conventional input method. According to Apple, the sensors employed by the computer system could rely on “near field radio frequency (RF), ultrasonic, infrared (IR), antenna diversity, or the like.” The system may also include “visible light sensors, ambient light sensors [and] mechanical vibration sensors.”
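The mode-switching behavior described above can be sketched roughly in code. This is a minimal illustration only: the distance threshold, mode names, and `UserContext` structure are assumptions for the example, not details taken from the patent.

```python
# Hypothetical sketch of proximity-based UI mode selection.
# Thresholds, sensor fields, and mode names are illustrative assumptions.
from dataclasses import dataclass
from enum import Enum


class UIMode(Enum):
    CONVENTIONAL = "keyboard/touch input"      # close-range controls
    GESTURE = "single-user gestures"           # scrolling, selecting, zooming
    MULTI_USER_GESTURE = "multi-user gestures"


@dataclass
class UserContext:
    user_count: int     # users detected within the sensors' vicinity
    distance_ft: float  # distance to the nearest detected user, in feet


def select_mode(ctx: UserContext, near_threshold_ft: float = 2.0) -> UIMode:
    """Map a detected user-presence context to an interface mode."""
    if ctx.user_count > 1:
        # e.g. two users sensed at three feet -> multi-user gesture control
        return UIMode.MULTI_USER_GESTURE
    if ctx.user_count == 1 and ctx.distance_ft <= near_threshold_ft:
        # user is close enough for a keyboard or touchscreen
        return UIMode.CONVENTIONAL
    return UIMode.GESTURE


print(select_mode(UserContext(user_count=2, distance_ft=3.0)))
print(select_mode(UserContext(user_count=1, distance_ft=1.5)))
```

Here the single threshold stands in for whatever combination of distance, depth, and proximity readings the patented sensors would actually supply.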
However, as described in the patent, a user’s proximity to the computer wouldn’t necessarily be the only “detectable parameter associated with a user.” The computer system may also determine which user interface mode to enter based on objects that a user is carrying. According to Apple, some embodiments of the system will detect wireless devices such as cell phones or electronic identification cards via a wireless communication protocol such as Bluetooth or RFID (radio-frequency identification).
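A detected carried device could then feed into the same mode decision, for example by selecting a personalized or authenticated interface. The device identifiers and mode labels below are purely illustrative assumptions; the patent does not enumerate them.

```python
# Hypothetical sketch: a carried device (phone, ID card) detected over
# Bluetooth or RFID influences the chosen interface mode.
KNOWN_DEVICES = {
    "bluetooth:phone": "personalized gesture profile",
    "rfid:id-card": "authenticated conventional input",
}


def mode_for_devices(detected: list[str], default: str = "generic gestures") -> str:
    """Return the mode associated with the first recognized carried device."""
    for device in detected:
        if device in KNOWN_DEVICES:
            return KNOWN_DEVICES[device]
    return default


print(mode_for_devices(["bluetooth:phone"]))
print(mode_for_devices([]))
```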
Finally, Apple noted that its computer user interface system will also be able to “learn” through a training algorithm that will allow it to “decrease and/or minimize false detection and increase and/or maximize positive recognition of gestures.” Although it is unknown what Apple plans to do with this patented technology, it should be noted that the iPhone maker acquired motion-sensing technology company PrimeSense last year. Some industry watchers believe that PrimeSense’s 3D vision and gesture technology will be used in a revamped Apple TV device that may be released later this year.
Follow Nathanael on Twitter (@ArnoldEtan_WSCS)