The U.S. Patent and Trademark Office officially released a series of 63 newly granted patents to Apple Inc. today. In this special report, we cover Apple's newly granted patent for devices, systems, and methods for predictive quantization of user interaction with a virtual musical instrument in computer-generated reality (CGR) environments. Think of it as GarageBand entering the virtual age. This will be a dream come true for air guitarists.
Music processing systems, such as those operating in CGR environments, should ideally strive to improve the temporal precision of musical performances in order to improve music quality and the user experience. This task is known as quantization.
Apple’s assigned patent covers devices, systems, and methods for predictive quantization of user interaction with a virtual musical instrument in computer-generated reality (CGR) environments.
According to some implementations, the method is performed by a device with one or more processors, non-transitory memory, and one or more user-interaction hardware components configured to enable a user to play a virtual instrument in a CGR environment.
The method also includes: obtaining user movement information that characterizes the user's actual body position and trajectory; generating, based on the user movement information and a predetermined location of the virtual instrument in the CGR environment, a predicted virtual instrument interaction time for a virtual instrument interaction before that interaction occurs; determining whether the predicted virtual instrument interaction time falls within an acceptable time window around one of a plurality of temporal sound markers; and, in response to determining that the predicted virtual instrument interaction time falls within the acceptable time window around a particular temporal sound marker among the plurality of temporal sound markers, quantizing the virtual instrument interaction by presenting playback of the virtual instrument so that it matches that particular temporal sound marker.
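The quantization step described above can be sketched in a few lines: snap a predicted hit to the nearest temporal sound marker if it falls within the acceptable window, otherwise play it at the predicted time. This is a minimal illustration, not the patent's implementation; the function and type names (`quantize_hit`, `QuantizeResult`) and the tolerance value are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class QuantizeResult:
    time: float       # time at which the note is actually played (seconds)
    quantized: bool   # whether the hit was snapped to a marker

def quantize_hit(predicted_time: float, markers: list[float],
                 tolerance: float) -> QuantizeResult:
    """Snap a predicted instrument interaction to the nearest temporal sound
    marker if it lies within the acceptable window; else leave it unchanged."""
    nearest = min(markers, key=lambda m: abs(m - predicted_time))
    if abs(nearest - predicted_time) <= tolerance:
        return QuantizeResult(time=nearest, quantized=True)
    return QuantizeResult(time=predicted_time, quantized=False)

# Beat markers every 0.5 s with a 0.1 s acceptable window.
markers = [0.0, 0.5, 1.0, 1.5, 2.0]
print(quantize_hit(1.46, markers, tolerance=0.1))  # within window: snapped
print(quantize_hit(1.30, markers, tolerance=0.1))  # outside window: unchanged
```

Because the interaction time is predicted before the hit lands, the system can schedule the corrected note without audible latency, which is the point of doing the quantization predictively.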
Further details regarding patent FIG. 1: In some implementations, user motion information includes data determined using images captured by image sensors (e.g., within the HMD #120 of FIG. 1 and/or within the handheld devices #130A and #130B of FIG. 1) and/or data derived from the output of IMUs, gyroscopes, accelerometers, torque meters, force gauges and/or other sensors (e.g., within the HMD #120 and/or within the handheld devices #130A and #130B of FIG. 1).
For example, the user movement information includes data characterizing the impact force, impact angle and impact position of the user's movement relative to the virtual instrument (e.g., a drum head, keyboard key, guitar string or the like) from the IMUs, gyroscopes, accelerometers, torque gauges, force gauges and/or other sensors within the handheld devices #130A and #130B of FIG. 1.
In this example, the device uses the motion and the predicted collision to determine the properties of the note to be played. For instance, if the user strikes a virtual cymbal at high speed, the device plays a louder note based on the predicted speed or impact force.
As another example, if the device predicts that the user will strike a particular part of the virtual cymbal, or that the user's finger will strike a specific key on a virtual piano, the device plays a different sound or a different tone based on the predicted strike position or strike angle.
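The examples above boil down to a mapping from predicted impact characteristics to playback parameters. The sketch below shows one such mapping; the exact formulas, zone boundaries, and the function name `note_properties` are assumptions for illustration, not taken from the patent.

```python
def note_properties(impact_speed: float, strike_offset: float) -> dict:
    """Map a predicted impact to playback parameters.
    impact_speed: predicted speed in m/s.
    strike_offset: normalized distance from the cymbal's center,
    0.0 (bell) to 1.0 (edge)."""
    # Faster strikes produce louder notes; clamp into a MIDI-style 0-127 range.
    velocity = min(127, int(impact_speed * 40))
    # Different zones of the cymbal trigger different sounds.
    if strike_offset < 0.2:
        zone = "bell"
    elif strike_offset < 0.8:
        zone = "bow"
    else:
        zone = "edge"
    return {"velocity": velocity, "zone": zone}

print(note_properties(2.5, 0.1))  # fast hit near the bell
print(note_properties(0.8, 0.9))  # softer hit at the edge
```

The velocity-to-loudness convention mirrors how MIDI note-on velocity is commonly used, which makes the predicted parameters easy to feed into an existing synthesizer.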
Apple’s patent FIG. 7 below is a flowchart representation of a method for presenting user play of a virtual musical instrument.
Apple’s granted patent 10,782,779 was originally filed in the third quarter of 2019 and was published today by the U.S. Patent and Trademark Office. To dive deeper into the details of the patent, review the patent here.