Tuesday, March 25, 2014

Working with the latency tester

The Oculus latency tester is a device that lets you empirically measure the time between issuing the commands to render something on the Rift screen and the screen actually changing in response.  Someone on the Oculus forums recently asked why there weren't any test applications you could download in order to use the device.  The reason is that such an application would defeat the purpose of the device, which is to let you measure the performance and latency of your program on your hardware.

Specifically, it allows you to measure the amount of time you should be feeding into the sensor fusion prediction mechanism.


Prediction

The sensor fusion functionality in the SDK is very good at determining the instantaneous orientation of the device (and thus the user's head, assuming they wear their Oculus like the rest of us).  However, because there is an inevitable delay between reading the tracker data and getting the output to the screen, the instantaneous orientation isn't really good enough.  If you use the instantaneous orientation while turning your head rapidly to the side, by the time the image is displayed it represents a point of view that no longer corresponds to where you're looking.

Humans can turn their heads at remarkably high rates: up to about 600° per second at the high end, and 300° per second is not uncommon.  Suppose your Rift application is running at 60 frames per second.  That's roughly 16.7ms per frame; call it 15ms for easy math.  For a 300° per second turn rate, if you capture the user's orientation and then take 15ms to render the frame, every frame rendered while turning at that rate will be about 4.5° off from where it should be (error ≈ turn rate × latency, or 300°/s × 0.015s), and that's the ideal case.  In reality, thanks to effects like the video card buffering additional frames and the response rate of the LCD panel, that 15ms latency is pretty much never achievable (at least on the DK1); the lowest I've ever measured on my own device is closer to 30ms, which results in a 9° error while turning.

The solution is to have the sensor fusion mechanism report not what the orientation is, but what it's going to be by the time the photons actually start coming off the panel.  Steve LaValle has a terrific post on the Oculus VR blog about how this kind of prediction is accomplished.  But for the prediction to work, you need an accurate measurement of the time between reading the tracker and the image appearing on the screen, the so-called 'motion-to-photon' time.  Getting that time is the purpose of the latency tester.

Using the latency tester

In order to use the latency tester you need to remove one of the lenses from your Rift and replace it with the device, which looks something like a squat black cone with the tip cut off.  At the narrow end of the truncated cone is a small color sensor, similar to what you might find in a webcam but with very low resolution (possibly only a single pixel), which detects the color displayed where the lens axis would normally intersect the Rift screen.

Using the device is remarkably simple.  In your application you must first locate the OVR::LatencyTestDevice, similar to how you would locate the OVR::SensorDevice. Once found, you connect it to an instance of OVR::Util::LatencyTest:
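Something like the following does the trick.  This is a minimal sketch following the device enumeration pattern used elsewhere in the SDK; pManager and latencyUtil are just names I've picked for illustration, and I'm reusing them in the later snippets:

    #include "OVR.h"
    using namespace OVR;

    Ptr<DeviceManager> pManager = *DeviceManager::Create();

    // Enumerate the first attached latency tester, if any.
    Ptr<LatencyTestDevice> pLatencyTester =
        *pManager->EnumerateDevices<LatencyTestDevice>().CreateDevice();

    // Hand the device to the utility class that drives the tests.
    Util::LatencyTest latencyUtil;
    if (pLatencyTester)
    {
        latencyUtil.SetDevice(pLatencyTester);
    }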

In order for the test functionality to work, you need to make two additional changes. First, you need to tell the OVR::Util::LatencyTest instance to record the time at which the sensor fusion orientation is read, by calling OVR::Util::LatencyTest::ProcessInputs():
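In the render loop that looks roughly like this (again a sketch; sensorFusion is assumed to be the OVR::SensorFusion instance already attached to your sensor device):

    // Once per frame, immediately before sampling the head orientation.
    latencyUtil.ProcessInputs();

    // Read the (predicted) orientation and render the scene as usual.
    Quatf orientation = sensorFusion.GetPredictedOrientation();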

Next, you need to add code that checks whether a test is currently running and responds accordingly by rendering a test square on the screen:
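For illustration, here is a simplified version using legacy fixed-function OpenGL rather than my rendering libraries; the quad size and placement are arbitrary, and an active OpenGL context (and the GL headers) is assumed:

    // colorToDisplay is filled in by the SDK while a test is running.
    Color colorToDisplay;
    if (latencyUtil.DisplayScreenColor(colorToDisplay))
    {
        // Draw a quad of the requested color in the center of the
        // screen, where the tester's sensor is pointed.
        glMatrixMode(GL_PROJECTION);
        glPushMatrix();
        glLoadIdentity();
        glMatrixMode(GL_MODELVIEW);
        glPushMatrix();
        glLoadIdentity();
        glDisable(GL_DEPTH_TEST);

        glColor3ub(colorToDisplay.R, colorToDisplay.G, colorToDisplay.B);
        glBegin(GL_QUADS);
        glVertex2f(-0.2f, -0.2f);
        glVertex2f( 0.2f, -0.2f);
        glVertex2f( 0.2f,  0.2f);
        glVertex2f(-0.2f,  0.2f);
        glEnd();

        glEnable(GL_DEPTH_TEST);
        glPopMatrix();
        glMatrixMode(GL_PROJECTION);
        glPopMatrix();
        glMatrixMode(GL_MODELVIEW);
    }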

In my own application the square is drawn by my rendering libraries using OpenGL. Ideally the code that renders the square should have minimal impact on the overall rendering time; what matters is that it doesn't change the time that would normally be spent rendering the scene.  Currently the code I use is part of the scene rendering, prior to distortion, but I could just as easily draw it post-distortion.  The important thing is that rendering the square doesn't appreciably change the total rendering time, up or down.

Getting the results

Normally this additional code does nothing visible in the application.  Until you activate the tester, either by pressing the button on its face or programmatically by calling OVR::Util::LatencyTest::BeginTest(), the scene renders just as it otherwise would.  However, once you start a test via one of these methods, calls to OVR::Util::LatencyTest::DisplayScreenColor(colorToDisplay) will begin returning true, filling in colorToDisplay to let your application know what color the test square should be.

The test typically alternates between black and white: initially it sets the values to calibrate the sensor, so that it knows which color is which, and then it switches the color every frame in order to get a precise measurement of the interval between the time OVR::Util::LatencyTest::ProcessInputs() was called and the time the color on the screen changed to the expected value.  After a number of iterations it displays the results both in the debug console (or standard out, depending on your platform) and on the LED readout on the tester itself.

The resulting value is your latency, and it's the value you should use for prediction by calling OVR::SensorFusion::SetPrediction() and passing in the interval.  Bear in mind that SetPrediction() takes the value in seconds, while the output of the latency test is in milliseconds, so you need to convert the result to a floating point value and divide by 1000.  Additionally, the current OVR::Util::LatencyTest doesn't provide a way of fetching the results programmatically as a number, only as a 'result string'.  Presumably you could parse that string, but the intended workflow right now seems to be that you run the test, note the results, and then modify your application code accordingly.  This makes some amount of sense, since with the test device in place you can't really use the Rift anyway.
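Putting it together, the per-frame result polling plus the hand-tuned prediction value look roughly like this (the 32ms figure is a stand-in for whatever your own test reports):

    #include <cstdio>

    // Poll each frame; GetResultsString() returns non-NULL only when a
    // completed test has new results to report.
    const char* results = latencyUtil.GetResultsString();
    if (results != NULL)
    {
        printf("LATENCY TESTER: %s\n", results);
    }

    // Later, bake the measured value into your code.  SetPrediction()
    // takes seconds; the tester reports milliseconds.
    float measuredLatencyMs = 32.0f; // stand-in: read this off the tester
    sensorFusion.SetPrediction(measuredLatencyMs / 1000.0f, true);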

Drawbacks

The latency tester essentially precludes normal operation of the Rift, since it involves swapping one of the lenses for an opaque bit of electronics with a cable sticking out of it, sitting in the space where your eyeball goes (again, I'm assuming here that you wear your Rift like the rest of us.  If you don't... hey, I don't judge.).

In addition, the test doesn't really account for variable frame rates.  In an ideal world your frame rate would be constant, and for VR content that's far more critical than it is elsewhere, but it's not always going to be achievable.  Furthermore, the test measures only your current setup, nothing more.  If you run the test on your hardware and drivers and then distribute an application, there's no real guarantee that the results will be valid for someone with a different setup, or even a different model of Rift.

DK2 and latency testing

According to the Oculus DK2 page, the new device has a latency tester integrated into the hardware.  This could solve the issues I've described above.  Were I designing the device, I would put the section of the screen used for the test outside the area visible to the user, so the test could run at any time without the user noticing anything.  Whether that's actually how it works is an open question, but it seems like an obvious design choice, so I'm working from that assumption.

Another unanswered question is whether the integrated latency tester will work while the new DK2 screen is running in low persistence mode, since in low persistence the screen spends most of its time emitting no light, which might be indistinguishable to the device from 'rendering a black square'.

That problem could probably be solved by switching the test colors from black and white to something like red and green, as long as the device records the first moment it sees the expected color, even if the color lasts only a few milliseconds before the panel goes dark again.

However, even if the device can't work in low persistence mode, there are other ways to make it an effective mechanism for end users.  For instance, an application could run a brief test in high persistence mode at startup, use that to establish a baseline latency for the current hardware setup, and then adjust that baseline based on the current frame rate as computed by other means.  Alternatively, perhaps the display is capable of putting individual portions of the panel into low or high persistence independently... but that seems like a long shot, especially compared to the relatively small amount of work it would take to make the tester simply work on a low persistence display.

Once I have my DK2 kit, I'll be sure to update this post to include my findings.
