The main entry point into this library is the `HeadTrackingProcessor` class.
This class is provided with the following inputs:

- Head and screen poses, expressed relative to a common, arbitrary “world” frame.
- The pose of the sound stage, relative to the screen.
- The desired operational mode:
    - Static: only the sound stage pose is taken into account. This will result
      in an experience where the sound stage moves together with the listener’s
      head.
    - World-relative: the head pose is taken into account as well. This will
      result in an experience where the sound stage is perceived to be located
      at a fixed place in the world.
    - Screen-relative: the head pose, screen pose and sound stage pose are all
      taken into account. This will result in an experience where the sound
      stage is perceived to be located at a fixed place relative to the screen.

In return, the class produces a head-to-stage pose that aggregates the inputs
mentioned above and is ready to be fed into a virtualizer, along with the
actual operational mode (which may differ from the desired one, as described
below).

A `recenter()` operation is also available, which indicates to the system that
the current head pose should be considered the neutral (“center”) pose.
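For illustration only, a minimal usage sketch follows. Apart from
`HeadTrackingProcessor` and `recenter()`, which appear above, every method and
enum name here (`setWorldToHeadPose`, `setWorldToScreenPose`,
`setScreenToStagePose`, `setDesiredMode`, `calculate`, `getHeadToStagePose`,
`getActualMode`, `HeadTrackingMode`) is an assumption made for the sake of the
example and may not match the actual interface:

```
#include <cstdint>

// Hypothetical usage sketch -- every method and enum name other than
// HeadTrackingProcessor and recenter() is an assumption, not taken from
// this document.
void driveOneCycle(HeadTrackingProcessor& processor,
                   const Pose3f& worldToHead,
                   const Pose3f& worldToScreen,
                   const Pose3f& screenToStage,
                   int64_t timestamp) {
    // Feed the inputs described above (assumed setter names).
    processor.setWorldToHeadPose(timestamp, worldToHead);
    processor.setWorldToScreenPose(timestamp, worldToScreen);
    processor.setScreenToStagePose(screenToStage);
    processor.setDesiredMode(HeadTrackingMode::WORLD_RELATIVE);

    // Run one processing step and read the outputs (assumed getter names).
    processor.calculate(timestamp);
    const Pose3f headToStage = processor.getHeadToStagePose();  // feed to the virtualizer
    const auto actualMode = processor.getActualMode();          // may differ from desired
    (void)headToStage;
    (void)actualMode;

    // recenter() (mentioned above): treat the current head pose as "center",
    // e.g. in response to an explicit user action.
    processor.recenter();
}
```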
When referring to poses in code, it is always good practice to follow a
hierarchical naming convention that names both the reference frame and the
target frame, for example:

```
Pose3f worldToHead; // “world” is the reference frame,
                    // “head” is the target frame.
```

By following this convention, it is easy to get pose composition right: when
chaining transforms, adjacent frame names must match.
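As a minimal sketch of this rule (assuming `Pose3f` supports composition via
`operator*` and inversion via `inverse()`, an assumption made for
illustration):

```
// Composition: the adjacent frame names ("head") must match, and they cancel out.
Pose3f computeWorldToScreen(const Pose3f& worldToHead, const Pose3f& headToScreen) {
    return worldToHead * headToScreen;  // yields worldToScreen
}

// Inversion simply swaps the two frame names.
Pose3f computeHeadToWorld(const Pose3f& worldToHead) {
    return worldToHead.inverse();  // yields headToWorld
}
```

A mismatched product such as `worldToHead * screenToStage` has no geometric
meaning, and the naming convention makes that visible at a glance.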
“Twist” is to pose what velocity is to distance: it is the time-derivative of a
pose.
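As a rough, self-contained illustration (not taken from this library's code), a
twist can be approximated by finite differences from two pose samples taken
`dt` seconds apart; Eigen types are used here purely for the example:

```
#include <Eigen/Geometry>

struct TwistEstimate {
    Eigen::Vector3f linearVelocity;   // meters per second
    Eigen::Vector3f angularVelocity;  // radians per second (axis scaled by rate)
};

// Finite-difference approximation of a twist from two pose samples.
TwistEstimate estimateTwist(const Eigen::Vector3f& t0, const Eigen::Quaternionf& q0,
                            const Eigen::Vector3f& t1, const Eigen::Quaternionf& q1,
                            float dt) {
    TwistEstimate twist;
    twist.linearVelocity = (t1 - t0) / dt;
    // Relative rotation over the interval, converted to axis-angle and divided by dt.
    const Eigen::AngleAxisf delta(q0.conjugate() * q1);
    twist.angularVelocity = delta.axis() * (delta.angle() / dt);
    return twist;
}
```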
The library deals with a number of frames of reference.

The head frame is the listener’s head. The origin is at the center point
between the listener’s ears.

The screen frame is the primary screen that the user will be looking at, which
is relevant for the screen-relative mode of operation. The origin is at the
center of the screen; the X-axis goes from left to right and the Z-axis from
the bottom of the screen to the top, leaving the Y-axis perpendicular to the
screen surface.
The stage frame is the frame of reference used by the virtualizer for
positioning sound objects. It is not associated with any physical frame. In a
typical multi-channel scenario, the listener is at the origin, the X-axis goes
from left to right, the Y-axis from back to front and the Z-axis from down to
up: a front-right speaker is located at positive X and Y with Z=0, and a height
speaker will have a positive Z.
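For concreteness, a few plausible speaker directions expressed in this stage
frame; the specific values are illustrative only, not taken from the text:

```
#include <Eigen/Core>

// Illustrative directions in the stage frame
// (X: left -> right, Y: back -> front, Z: down -> up).
const Eigen::Vector3f kFrontRight = Eigen::Vector3f(1.f, 1.f, 0.f).normalized();   // +X, +Y, Z = 0
const Eigen::Vector3f kFrontLeft  = Eigen::Vector3f(-1.f, 1.f, 0.f).normalized();  // -X, +Y, Z = 0
const Eigen::Vector3f kTopFront   = Eigen::Vector3f(0.f, 1.f, 1.f).normalized();   // height: positive Z
```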
It is sometimes convenient to use an intermediate frame when dealing with
head-to-screen transforms. The “world” frame is a frame of reference in the
physical world, relative to which the head and screen poses are expressed. It
is arbitrary, but expected to be stable (fixed).
Internally, processing is organized as a chain of blocks. The display
orientation (the rotation of the rendered content relative to the physical
screen) is composed with the screen pose to obtain the pose of the “logical
screen” frame, in which the Y-axis is still perpendicular to the screen surface
while the X- and Z-axes follow the rendered content rather than the physical
panel.
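A sketch of that composition under the frame conventions above, assuming the
extra rotation is applied about the screen’s Y-axis (its normal); this is an
assumption made for illustration:

```
#include <Eigen/Geometry>

// Rotate the physical screen frame about its normal (the Y-axis in the
// convention above) by the display orientation angle, yielding the rotation
// part of the "logical screen" pose. Illustrative sketch only.
Eigen::Quaternionf logicalScreenRotation(const Eigen::Quaternionf& worldToScreenRotation,
                                         float displayOrientationRadians) {
    const Eigen::Quaternionf physicalToLogical(
            Eigen::AngleAxisf(displayOrientationRadians, Eigen::Vector3f::UnitY()));
    // Compose on the right: the extra rotation is expressed in the screen's own frame.
    return worldToScreenRotation * physicalToLogical;
}
```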
The Screen-Relative Pose block is provided with a head pose and a screen pose,
and from them estimates the pose of the head relative to the screen.
Optionally, this module may indicate that the user is likely not in front of
the screen via the “valid” output signal.
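Under the naming convention above, that estimate is essentially a composition
of the two inputs. A minimal sketch, again assuming `Pose3f` provides
`inverse()` and `operator*`:

```
// Estimate the head pose relative to the screen from two poses that share the
// same "world" reference frame: the world frames cancel out.
Pose3f computeScreenToHead(const Pose3f& worldToHead, const Pose3f& worldToScreen) {
    return worldToScreen.inverse() * worldToHead;  // screenToWorld * worldToHead
}
```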
Stillness detectors monitor the head and screen pose streams. When the head is
considered still for long enough, a recenter operation is triggered
automatically (“auto-recentering”), and when the screen is considered not
still, world-relative operation is suppressed (see the mode-selection rules
below).
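A minimal sketch of this kind of stillness check; the windowing scheme and
threshold handling are assumptions, not taken from the text:

```
#include <Eigen/Geometry>
#include <cstdint>

// A pose stream is considered "still" once every sample has stayed within
// small translation/rotation thresholds of a reference sample for a full
// time window. Any larger movement restarts the window.
class StillnessSketch {
  public:
    StillnessSketch(int64_t windowNs, float translationThreshold, float rotationThresholdRad)
        : mWindowNs(windowNs),
          mTranslationThreshold(translationThreshold),
          mRotationThresholdRad(rotationThresholdRad) {}

    void setInput(int64_t timestampNs, const Eigen::Vector3f& translation,
                  const Eigen::Quaternionf& rotation) {
        if (!mHasReference || exceedsThresholds(translation, rotation)) {
            // First sample, or movement detected: restart the window here.
            mReferenceTranslation = translation;
            mReferenceRotation = rotation;
            mReferenceTimestampNs = timestampNs;
            mHasReference = true;
        }
        mLastTimestampNs = timestampNs;
    }

    bool isStill() const {
        return mHasReference && (mLastTimestampNs - mReferenceTimestampNs) >= mWindowNs;
    }

  private:
    bool exceedsThresholds(const Eigen::Vector3f& translation,
                           const Eigen::Quaternionf& rotation) const {
        return (translation - mReferenceTranslation).norm() > mTranslationThreshold ||
               mReferenceRotation.angularDistance(rotation) > mRotationThresholdRad;
    }

    const int64_t mWindowNs;
    const float mTranslationThreshold;
    const float mRotationThresholdRad;
    bool mHasReference = false;
    Eigen::Vector3f mReferenceTranslation = Eigen::Vector3f::Zero();
    Eigen::Quaternionf mReferenceRotation = Eigen::Quaternionf::Identity();
    int64_t mReferenceTimestampNs = 0;
    int64_t mLastTimestampNs = 0;
};
```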
A mode-selector block aggregates the available pose sources into a
head-to-stage pose that is going to feed the virtualizer. It is controlled by
the “desired mode” signal, which indicates whether the preference is to be in
static, world-relative or screen-relative mode.
The actual mode may diverge from the desired mode. It is determined as follows:

- If the desired mode is static, the actual mode is static.
- If the desired mode is world-relative:
    - If the head and screen poses are fresh and the screen is stable (the
      stillness detector output is true), the actual mode is world-relative.
    - Otherwise the actual mode is static.
- If the desired mode is screen-relative:
    - If the head and screen poses are fresh and the “valid” signal is
      asserted, the actual mode is screen-relative.
    - Otherwise, the world-relative fallback above applies: world-relative if
      the poses are fresh and the screen is stable, static otherwise.
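A minimal sketch of this decision logic; the type and field names are invented
for the example, not taken from the library:

```
// Illustrative types -- the names are invented for this sketch.
enum class Mode { STATIC, WORLD_RELATIVE, SCREEN_RELATIVE };

struct ModeInputs {
    bool headPoseFresh;        // head pose received recently enough
    bool screenPoseFresh;      // screen pose received recently enough
    bool screenStable;         // stillness detector output for the screen
    bool screenRelativeValid;  // "valid" signal of the screen-relative pose estimate
};

// Implements the fallback rules listed above.
Mode selectActualMode(Mode desired, const ModeInputs& in) {
    const bool posesFresh = in.headPoseFresh && in.screenPoseFresh;
    if (desired == Mode::SCREEN_RELATIVE && posesFresh && in.screenRelativeValid) {
        return Mode::SCREEN_RELATIVE;
    }
    if (desired != Mode::STATIC && posesFresh && in.screenStable) {
        // Desired screen-relative falls back to world-relative when possible.
        return Mode::WORLD_RELATIVE;
    }
    return Mode::STATIC;
}
```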
A Rate Limiter block is applied to the final output to smooth out any abrupt
changes caused by mode switches or recentering.
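For illustration, a sketch of per-tick rate limiting applied to the translation
part of a pose; the strategy and parameter are assumptions, and the actual
block would presumably also limit rotation:

```
#include <Eigen/Core>

// Clamp how far the output translation may move per processing tick, so that
// mode switches do not produce audible jumps. Rotation would be limited
// analogously (e.g. by clamping the slerp step); omitted for brevity.
Eigen::Vector3f rateLimitTranslation(const Eigen::Vector3f& previousOutput,
                                     const Eigen::Vector3f& target,
                                     float maxStepMeters) {
    const Eigen::Vector3f delta = target - previousOutput;
    const float distance = delta.norm();
    if (distance <= maxStepMeters) {
        return target;  // within limits: pass through unchanged
    }
    // Move toward the target, but no further than maxStepMeters this tick.
    return previousOutput + delta * (maxStepMeters / distance);
}
```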