Oculus Developer Forums


Positional tracking calibration

Postby sven » Fri May 17, 2013 12:39 am

I'm still hopeful regarding the idea of approximating positional tracking for scenarios with limited mobility (e.g. the user sitting on a chair) using the accelerometers inside the Rift tracker.

I think it may be worthwhile to capture real positional data recorded by a different system (Razer Hydra, MS Kinect or PS Move) and correlate it with the readings from the sensor.

Do you think this is worthwhile? Has anyone tried it?
Last edited by sven on Fri May 17, 2013 1:07 am, edited 1 time in total.
User avatar
sven
 
Posts: 257
Joined: Fri Mar 29, 2013 2:02 pm
Location: Dortmund, Germany

Re: Positional tracking calibration

Postby edzieba » Fri May 17, 2013 12:56 am

If you already have a positional tracker rigidly attached to the IMU, you might be able to do some sensor fusion to improve the update rate (using the IMU to provide a delta between absolute tracker updates, and doing some smoothing on top). The IMU on its own is not in any way sufficient for positional tracking. This portion of Google's Tech Talk on sensor fusion should explain why. Low-cost, commercially available MEMS gyros and accelerometers have not improved appreciably since then.
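
Something like this is what I mean by the fusion step (a rough Python sketch; the class, update rates, and blend factor are all made up for illustration, not anything from the SDK):

[code]
# Rough sketch: fuse a slow absolute tracker (Hydra/Kinect/Move) with fast IMU deltas.
# All names and numbers here are hypothetical; real use needs calibration and timing care.
import numpy as np

class ComplementaryPositionFilter:
    def __init__(self, blend=0.05):
        self.blend = blend          # how hard we pull toward the absolute tracker
        self.pos = np.zeros(3)      # fused position estimate (m)
        self.vel = np.zeros(3)      # velocity estimate (m/s)

    def imu_update(self, accel_world, dt):
        """High-rate step: dead-reckon with gravity-compensated acceleration."""
        self.vel += accel_world * dt
        self.pos += self.vel * dt

    def absolute_update(self, tracker_pos):
        """Low-rate step: blend toward the absolute tracker to kill accumulated drift."""
        self.pos += self.blend * (tracker_pos - self.pos)
        self.vel *= (1.0 - self.blend)   # damp velocity so drift does not rebuild instantly

# Usage: imu_update() at the IMU rate (250-1000 Hz), absolute_update() at 30-60 Hz.
f = ComplementaryPositionFilter()
f.imu_update(np.array([0.0, 0.1, 0.0]), dt=0.004)
f.absolute_update(np.array([0.0, 0.001, 0.0]))
print(f.pos)
[/code]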
edzieba
 
Posts: 98
Joined: Fri Mar 29, 2013 11:02 am

Re: Positional tracking calibration

Postby Harley » Fri May 17, 2013 3:38 am

Check out DARPA's Micro-PNT positioning technology; maybe their TIMU concept could be added in the future?

viewtopic.php?f=25&t=1093

Harley wrote:Micro-PNT (Micro-Technology for Positioning, Navigation and Timing) does absolute position tracking on a single chip!

To oversimplify it: Micro-PNT integrates a highly accurate master clock (the "TIMU", or Timing & Inertial Measurement Unit) with existing chips carrying 3-axis gyroscopes and 3-axis accelerometers (and a 3-axis magnetometer), measures the tracked motion, combines that with timing from the synchronized clock, and with sensor fusion makes a single chip that does absolute position tracking, all without external transmitters/transceivers.

Is this technology just too expensive, or not yet released by DARPA for use in non-militarized commercial products?

http://www.darpa.mil/NewsEvents/Releases/2013/04/10.aspx
Harley
 
Posts: 112
Joined: Sun May 12, 2013 1:17 pm

Re: Positional tracking calibration

Postby xyx » Fri May 17, 2013 4:03 am

For walking situations, there's a research paper called "A Reliable and Accurate Indoor Localization Method Using Phone Inertial Sensors" http://research.microsoft.com/en-us/um/people/zhao/pubs/ubicomp12_IndoorNav.pdf which basically tries to correct sensor drift by exploiting the corridor constraint and the repetitive nature of walking.

So I think it's possible to do a similar thing in the sitting-on-a-chair situation, if you can find some pattern in human motion (which should definitely exist).
xyx
 
Posts: 12
Joined: Fri Mar 29, 2013 6:23 pm

Re: Positional tracking calibration

Postby Tgaud » Fri May 17, 2013 7:49 am

You can't derive position from accelerometers alone.

In the long run, it becomes inaccurate:
acceleration is integrated into velocity with some loss of information,
then velocity is integrated into position with even more loss of information.

In the long run you'll end up with a random position in your room, even if you never moved from your chair.
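
To make this concrete (a toy Python simulation, not real sensor data; the bias and noise figures are only plausible guesses for a cheap MEMS part), here is how fast double integration blows up even if you never move:

[code]
# Sketch: double-integrate a "stationary" accelerometer that has a tiny bias plus noise.
# The numbers are made up, but in the ballpark for low-cost MEMS parts.
import numpy as np

rng = np.random.default_rng(0)
dt = 0.004                 # 250 Hz sample rate
bias = 0.02                # constant accelerometer bias, m/s^2
noise_std = 0.05           # white noise, m/s^2

vel = pos = 0.0
for _ in range(int(60.0 / dt)):                 # one minute of sitting perfectly still
    accel = bias + rng.normal(0.0, noise_std)   # true acceleration is zero
    vel += accel * dt
    pos += vel * dt

print(f"position error after 60 s: {pos:.1f} m")  # tens of meters, from the bias alone
[/code]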

Magnetic tracking:
Advantages:
_Fast
_Doesn't care about objects in the way
_Small receiver on the desk

Disadvantage:
_Not long range (can't move around the room)


Optical (PS Move) tracking:
Advantage:
_Long range

Disadvantages:
_No objects permitted between you and the camera
_Processing the camera image takes time


Radio-frequency (RF) tracking:
Advantages:
_Fast
_Long range
_Very precise
_Doesn't care about objects between you and the receivers

Disadvantages:
_You need 4 receivers at different places on your desk
_It's new, so only some prototypes exist


The best thing, if you're on a chair and not too far away, is magnetic position tracking, or RF if Oculus manages to partner with someone developing a prototype of it.
Tgaud
 
Posts: 408
Joined: Mon May 13, 2013 1:59 pm

Re: Positional tracking calibration

Postby doktorvr » Fri May 17, 2013 9:47 am

You won't be able to get ground truth from a Hydra, as the accuracy of the readings depends on where you are in the magnetic field. So a movement of 10 cm from A to B != 10 cm from B to C in virtual units.
doktorvr
 
Posts: 8
Joined: Sat Mar 30, 2013 12:02 pm

Re: Positional tracking calibration

Postby Tgaud » Fri May 17, 2013 11:31 am

Yes, that's what I said: it's precise, but only if you're close to the computer.

But the best, imo, is the radio-frequency concept.
You have 4 receivers on your desk, and the emitter pings every receiver to triangulate its position, very precisely
and very fast.
Tgaud
 
Posts: 408
Joined: Mon May 13, 2013 1:59 pm

Re: Positional tracking calibration

Postby geekmaster » Fri May 17, 2013 4:13 pm

Tgaud wrote:You can't derive position from accelerometers alone.

In the long run, it becomes inaccurate.

You cannot truthfully make such an absolute statement. It all depends on your application, and what environmental constraints you define for that application.

You can have relatively accurate positional tracking from the Rift tracker data if you assume a fixed sitting or standing position, which is normal for a wired device. You just need to set that as your average "home" position. There are plenty of research papers that document how to do this.

Of course, for mobile applications, you need to add GPS or other positional data to periodically recalibrate your IMU-based position data to prevent "long run inaccuracy". In that case, your statement is correct, but not in the typical Rift DK usage situation.

The key to getting useful positional tracking data from an IMU such as is used in the Rift DK is to detect proper recalibration points, such as when sitting or standing erect, or just using the mean position as a home reference point. Another important thing to prevent drift is to force the velocity to zero whenever the head (naturally) stops moving for a moment to obtain a clear and sharp view (for minimal motion blur). This can all be done with relatively simple software algorithms, but it can be much more effective if you take skeletal models and full-body gesture recognition into account, along with motion prediction using a short time series of recent tracker data.
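
For what it's worth, here is the shape of that trick as a Python sketch (the thresholds, sample rate, and gains are illustrative guesses, not tuned values from my code):

[code]
# Sketch: zero-velocity updates plus a slow pull toward an assumed "home" position.
# Thresholds and gains are illustrative, not tuned values.
import numpy as np

STILL_ACCEL = 0.08      # m/s^2: below this, treat the head as (nearly) stationary
STILL_SAMPLES = 50      # quiet samples required before we trust "stationary"
HOME_PULL = 0.001       # per-sample pull toward the long-term home position

class ConstrainedTracker:
    def __init__(self):
        self.vel = np.zeros(3)
        self.pos = np.zeros(3)
        self.home = np.zeros(3)     # assumed average seated position
        self.quiet = 0

    def update(self, accel_world, dt):
        mag = np.linalg.norm(accel_world)
        self.quiet = self.quiet + 1 if mag < STILL_ACCEL else 0

        if self.quiet >= STILL_SAMPLES:
            self.vel[:] = 0.0                   # zero-velocity update: kill drift
        else:
            self.vel += accel_world * dt
        self.pos += self.vel * dt

        self.pos += HOME_PULL * (self.home - self.pos)   # gently recenter on the chair
        return self.pos
[/code]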

For really accurate positional computation, snap-vector analysis is commonly used for low-latency motion prediction (such as in quad-copter swarm formation flying):
http://www.seas.upenn.edu/~dmel/mellingerICRA11.pdf

In the "long run", you will always be seated (or standing) in the same spot while using the Rift, and you can periodically recalibrate to that known home position.

FYI, the snap vector (i.e. "jounce") is the second derivative of the acceleration vector:
https://info.aiaa.org/Regions/Western/Orange_County/Newsletters/AIAAOC_SnapCracklePop_docx.pdf
But most applications do not go beyond jerk-vectors.
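
If you want to play with it, jerk and snap fall straight out of finite differences on a sampled acceleration trace (a Python sketch; real tracker data needs low-pass filtering first, because differencing amplifies noise):

[code]
# Sketch: estimate jerk (d(accel)/dt) and snap (d^2(accel)/dt^2) with finite differences.
import numpy as np

dt = 0.004                                  # 250 Hz
t = np.arange(0.0, 1.0, dt)
accel = np.sin(2 * np.pi * 2 * t)           # stand-in for a measured acceleration trace

jerk = np.gradient(accel, dt)               # first derivative of acceleration
snap = np.gradient(jerk, dt)                # second derivative of acceleration

print(jerk[:3], snap[:3])
[/code]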
"For the things we have to know before we can do them, we learn by doing them." —Aristotle
"The GREATEST discoveries shall be found in pursuit of diminishing returns." —geekmaster
Everything ingenious is simple. Generosity has no limits.
User avatar
geekmaster
 
Posts: 2613
Joined: Fri Apr 12, 2013 8:07 pm

Re: Positional tracking calibration

Postby Tgaud » Fri May 17, 2013 5:53 pm

If, when moving 30 cm, you have a 1 mm error, nothing is observable.
But if you play for 4 hours, you can assume you'll have moved 3 km in total, so the 1 mm error becomes a 1 m error.
Tgaud
 
Posts: 408
Joined: Mon May 13, 2013 1:59 pm

Re: Positional tracking calibration

Postby geekmaster » Sat May 18, 2013 8:37 am

Tgaud wrote:If, when moving 30 cm, you have a 1 mm error, nothing is observable.
But if you play for 4 hours, you can assume you'll have moved 3 km in total, so the 1 mm error becomes a 1 m error.

Except when you know that you are sitting or standing in the same spot (typical WIRED Rift DK usage), you can periodically reset your velocity to zero, and you can subtract your long-term average position to keep you centered in your constrained workspace. You will never travel farther than the reach of your cables. That is what I meant by "constrained" in my previous posts here and at MTBS3D, in which I explain how and why we CAN do useful positional tracking with the existing Rift DK head-tracker hardware.
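
Concretely, the "subtract your long-term average" part can be as simple as a running mean (a Python sketch; the time constant is a made-up value you would tune per application):

[code]
# Sketch: keep the reported head position centered by subtracting a long-term running mean.
import numpy as np

class Recentering:
    def __init__(self, tau=30.0, dt=0.004):
        self.alpha = dt / tau        # exponential-moving-average weight
        self.mean = np.zeros(3)      # long-term average position

    def apply(self, raw_pos):
        self.mean += self.alpha * (raw_pos - self.mean)
        return raw_pos - self.mean   # offset relative to where the user actually sits
[/code]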

Of course, PORTABLE (free walking) applications will require adding a GPS to your sensor fusion, but most of us will be sitting on our assets while we play with our Rifts!

And for future portability, I have one of these that I plan to add to my Rift DK sensor fusion code:
http://emerythacks.blogspot.com/2013/01/u-blox-pci-5s-cheap-gps-module-for-your.html

We must remember that the Rift DK is designed as a gaming (entertainment) device, and not a precision scientific instrument. Within those specifications, I feel that it is more important to provide a FUN experience than to provide an accurate simulation, so I plan to use approximations and hacks and cheats wherever they simplify or speed up my code, especially when they allow me to do cooler things with smaller hardware requirements. And even inaccurate positional tracking (with skeletal modelling and gesture recognition support) can greatly enhance our VR despite any objections by people bothered by long-term uncompensated drift. I plan to compensate...
:D
"For the things we have to know before we can do them, we learn by doing them." —Aristotle
"The GREATEST discoveries shall be found in pursuit of diminishing returns." —geekmaster
Everything ingenious is simple. Generosity has no limits.
User avatar
geekmaster
 
Posts: 2613
Joined: Fri Apr 12, 2013 8:07 pm

Re: Positional tracking calibration

Postby Tgaud » Sat May 18, 2013 10:46 am

The best is radio-frequency position tracking.
(4 little receivers on your desk, an emitter in the Oculus Rift, and it will "ping" every receiver.
From the time difference between reception at each receiver, they can tell precisely where the emitter is in the room.)

It works in all conditions, and has every advantage and all the precision possible.
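
Roughly, the math looks like this (a toy Python sketch with made-up receiver positions and perfect timing; a real RF system needs sub-nanosecond synchronization between receivers):

[code]
# Sketch: recover an emitter position from time-difference-of-arrival (TDOA) at 4 receivers.
# Receiver layout and the "true" emitter position are invented for illustration.
import numpy as np
from scipy.optimize import least_squares

C = 299792458.0                                      # speed of light, m/s
receivers = np.array([[0.0, 0.0, 0.0],               # hypothetical desk-mounted receivers (m)
                      [0.6, 0.0, 0.0],
                      [0.0, 0.6, 0.0],
                      [0.3, 0.3, 0.4]])

def range_differences(p):
    d = np.linalg.norm(receivers - p, axis=1)
    return d[1:] - d[0]                              # extra path length vs. receiver 0 (m)

true_pos = np.array([0.2, 0.5, 0.6])                 # pretend head position
tdoa = range_differences(true_pos) / C               # what the receivers would measure (s)

def residuals(p):
    return range_differences(p) - tdoa * C           # solve in meters for better scaling

sol = least_squares(residuals, x0=np.array([0.3, 0.3, 0.3]))
print(sol.x)                                         # lands close to true_pos
[/code]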
Tgaud
 
Posts: 408
Joined: Mon May 13, 2013 1:59 pm

Re: Positional tracking calibration

Postby geekmaster » Sat May 18, 2013 1:51 pm

Tgaud wrote:The best is radio-frequency position tracking.
(4 little receivers on your desk, an emitter in the Oculus Rift, and it will "ping" every receiver.
From the time difference between reception at each receiver, they can tell precisely where the emitter is in the room.)

It works in all conditions, and has every advantage and all the precision possible.

If you are going to mount it on your desk, then why not just use a wired solution? Wireless has to deal with radio interference and bandwidth issues, and probably transmitter licensing issues as well, and certainly FCC (and other agency) certification if you plan to sell it.

Using the built-in hardware is basically free (certainly not the rather high cost of the RF tracking gear that you suggest). If you really need all that accuracy for a specific purpose, great. But for gaming? I don't think so. Especially for a wide general audience who just spent all their spare funds on a Rift...

For Mom and Pop gaming, I think that the methods I suggested are sufficient. The "best" is great and all, especially if you have a research grant to pay for it...

Oh, and your absolute statement is not true... In fact, your "market-droid" phraseology sounds like you have an agenda here -- do you sell such things or something? Perhaps a disclaimer would be in order...
"For the things we have to know before we can do them, we learn by doing them." —Aristotle
"The GREATEST discoveries shall be found in pursuit of diminishing returns." —geekmaster
Everything ingenious is simple. Generosity has no limits.
User avatar
geekmaster
 
Posts: 2613
Joined: Fri Apr 12, 2013 8:07 pm

Re: Positional tracking calibration

Postby geekmaster » Sat May 18, 2013 2:00 pm

Tgaud wrote:If, when moving 30 cm, you have a 1 mm error, nothing is observable.
But if you play for 4 hours, you can assume you'll have moved 3 km in total, so the 1 mm error becomes a 1 m error.

Where do you get those numbers?

There are a lot of people and a lot of applications that would LOVE to have only a 1 meter error after a 4 hour gap in their GPS data. In fact, a 1 meter error just during the time it takes to drive through a tunnel would be pretty handy. Even the military would like that kind of accuracy without having to stick atomic clocks into their positioning systems to adjust for IMU errors during GPS dropouts.

Do you actually know what you are talking about? I would love to see some references (such as URLs) that support some of your claims...

FYI, your comparisons above differ by an order of magnitude in their error rates...
"For the things we have to know before we can do them, we learn by doing them." —Aristotle
"The GREATEST discoveries shall be found in pursuit of diminishing returns." —geekmaster
Everything ingenious is simple. Generosity has no limits.
User avatar
geekmaster
 
Posts: 2613
Joined: Fri Apr 12, 2013 8:07 pm

Re: Positional tracking calibration

Postby Tgaud » Sun May 19, 2013 11:36 am

geekmaster wrote:
Tgaud wrote:The best is radio-frequency position tracking.
(4 little receivers on your desk, an emitter in the Oculus Rift, and it will "ping" every receiver.
From the time difference between reception at each receiver, they can tell precisely where the emitter is in the room.)

It works in all conditions, and has every advantage and all the precision possible.

If you are going to mount it on your desk, then why not just use a wired solution? Wireless has to deal with radio interference and bandwidth issues, and probably transmitter licensing issues as well, and certainly FCC (and other agency) certification if you plan to sell it.

Using the built-in hardware is basically free (certainly not the rather high cost of the RF tracking gear that you suggest). If you really need all that accuracy for a specific purpose, great. But for gaming? I don't think so. Especially for a wide general audience who just spent all their spare funds on a Rift...

For Mom and Pop gaming, I think that the methods I suggested are sufficient. The "best" is great and all, especially if you have a research grant to pay for it...

Oh, and your absolute statement is not true... In fact, your "market-droid" phraseology sounds like you have an agenda here -- do you sell such things or something? Perhaps a disclaimer would be in order...



No, the frequency is different. It's the same frequency range used in medical radio imaging.
No compatibility problem.
And the point is to be able to move all over your room and be located precisely.

Look at this video:
https://www.youtube.com/watch?feature=player_embedded&v=mYyFUQbWC1E


there is also a topic here :

viewtopic.php?f=25&t=787&p=11215&hilit=radio#p11215
Tgaud
 
Posts: 408
Joined: Mon May 13, 2013 1:59 pm

Re: Positional tracking calibration

Postby Tgaud » Sun May 19, 2013 11:45 am

geekmaster wrote:
Tgaud wrote:If, when moving 30 cm, you have a 1 mm error, nothing is observable.
But if you play for 4 hours, you can assume you'll have moved 3 km in total, so the 1 mm error becomes a 1 m error.

Where do you get those numbers?

There are a lot of people and a lot of applications that would LOVE to have only a 1 meter error after a 4 hour gap in their GPS data. In fact, a 1 meter error just during the time it takes to drive through a tunnel would be pretty handy. Even the military would like that kind of accuracy without having to stick atomic clocks into their positioning systems to adjust for IMU errors during GPS dropouts.

Do you actually know what you are talking about? I would love to see some references (such as URLs) that support some of your claims...

FYI, your comparisons above differ by an order of magnitude in their error rates...


Well, I took it from this message:

That's all well and good, but unless you actually test it, you'll never know.

As for accelerometer based position tracking...

I tried that out once a long time ago, and while I'm sure there is possibility for improvement, let me tell you, those test results were not encouraging.

The result of trying to track position with an accelerometer was that after about 3-4 seconds, the reported position was several meters away from the actual position.

Left to run at that rate, after about a minute the reported position could easily be several hundred meters off.

Now, I'm sure you could code something better thought out than what I was doing back then, but the drift is huge.

Using an accelerometer as a PRIMARY data source for position tracking is a really bad idea.
At best it can give you a bit of supplementary data, but if you're thinking it can be the primary source, and you use the camera (or whatever else there is) to correct it, you're being way too optimistic about just how huge the error is for acceleration data.

Remember:
You are trying to determine a position in space.

Starting from accelerometer data,
you first have the error in reported acceleration.

You then need to find the integral of the acceleration to get the velocity,
then you need to integrate again to find the position.

Each integration step, roughly speaking, compounds the amount of error.

Not only that, but the error is cumulative. So, if the original error in reported acceleration is +-0.1 m/s^2, then the potential error in reported velocity is at least +-0.2 m/s, and it will gradually increase. For a velocity calculated from 3 values it's +-0.3 m/s, for 10 values it's +-1 m/s - and at this point you don't even have a position value yet...


Trying to use acceleration data as your core positional input, in short, is a horrible idea.
If you're going to do this at all, it would be far safer to do the reverse:

Start from known positional data (provided by a camera of some sort, or whatever other means), then use acceleration data to predict the motion between position updates.
However, how much you'd gain from doing this is questionable, and closely related to the camera's framerate and other issues. The framerate of the camera determines how much time you'd have to spend performing acceleration-based tracking, which in turn figures into how much uncertainty you will have to work with.

The uncertainty for anything beyond trivial amounts of time becomes a major issue though; if it didn't, there would be a lot of position-tracking applications around already, because accelerometers are all over the place these days.


from KuraIthys in this topic : viewtopic.php?f=20&t=767&p=9060&hilit=integrate#p9060
Tgaud
 
Posts: 408
Joined: Mon May 13, 2013 1:59 pm

Re: Positional tracking calibration

Postby geekmaster » Sun May 19, 2013 7:14 pm

Tgaud wrote:...
from KuraIthys in this topic : viewtopic.php?f=20&t=767&p=9060&hilit=integrate#p9060


Look on the next page of that same topic, where I explained why and how such constrained accelerometer-based tracking is possible:
viewtopic.php?f=20&t=767&start=20#p9079

There are also newer "free walking" methods that map out a building, learning where the corridors and doors are from averaged data, and constraining your accelerometer position results to those allowed paths. You could consider it a sort of large-scale, room- (or building-) sized "gesture recognition" system.

And the important thing is that people are actually DOING this stuff, and it works, so all the explanations about how you cannot make it work do not make it impossible.
"For the things we have to know before we can do them, we learn by doing them." —Aristotle
"The GREATEST discoveries shall be found in pursuit of diminishing returns." —geekmaster
Everything ingenious is simple. Generosity has no limits.
User avatar
geekmaster
 
Posts: 2613
Joined: Fri Apr 12, 2013 8:07 pm

Re: Positional tracking calibration

Postby edzieba » Mon May 20, 2013 3:35 am

You can constrain walking movements because human passive-dynamic walking can be modeled well as a pair of pendulums (we naturally like to walk in the most efficient manner). Additionally, for much of the step the sensor is held stationary on the ground, allowing tracking to be reduced to a variation of a calculable path between fixed points, with these points strung together using absolute orientation data to estimate the path. It breaks down if you try to run, sidestep, crawl, etc.
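
As a rough illustration of why the walking case is tractable (a toy Python pedestrian dead-reckoning sketch; the stride length, step threshold, and heading source are assumptions, not values from any real system):

[code]
# Sketch: pedestrian dead reckoning - detect steps from acceleration peaks, then string
# fixed-length strides together along the measured heading. All constants are assumptions.
import math

STEP_THRESHOLD = 11.5    # m/s^2: total acceleration above this counts as a heel strike
STRIDE = 0.7             # assumed stride length, meters

def dead_reckon(samples):
    """samples: list of (accel_magnitude_m_s2, heading_radians) at a fixed rate."""
    x = y = 0.0
    above = False
    for accel_mag, heading in samples:
        if accel_mag > STEP_THRESHOLD and not above:   # rising edge = one step
            x += STRIDE * math.cos(heading)
            y += STRIDE * math.sin(heading)
        above = accel_mag > STEP_THRESHOLD
    return x, y

# Usage: two fake "steps" heading due east (heading 0), quiet samples in between.
walk = [(9.8, 0.0), (12.0, 0.0), (9.8, 0.0), (12.3, 0.0), (9.8, 0.0)]
print(dead_reckon(walk))   # -> roughly (1.4, 0.0)
[/code]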

Seated head-movement is NOT comparable to walking movement. Without a way to periodically set the sensor to a known position, you CANNOT make assumptions about where the head is in order to correct the massive drift inherent in MEMS accelerometers (and gyros, due to sensor fusion to remove the g vector), and the head does not move in regular and calculable motions.

With existing hardware, positional tracking with purely inertial measurement is simply not viable outside very specific scenarios involving constrained motion. The hardware to do inertial navigation over long timeframes ('long' here being above a few seconds) is not cheap, compact or commercially available (and definitely covered under ITAR).

For INS to be a viable tracking option, new hardware needs to become cheaply available (e.g. on-chip ring-laser gyros), or existing hardware must be used to augment an absolute positioning method.
edzieba
 
Posts: 98
Joined: Fri Mar 29, 2013 11:02 am

Re: Positional tracking calibration

Postby geekmaster » Mon May 20, 2013 6:23 am

For typical gamer use, you can make an assumption about where the AVERAGE position is (sitting upright or standing upright). There will be multiple places where the user stops moving his head to get a clear view (no motion blur), such as upright with the head over the hips (a tipping point), or leaning forward with the head above the knees (another tipping point). The center of the outer range of motion can be used as a recalibration point, and so can all the other stable "not moving" positions when they are detected.

Essentially, you just need to use gesture (body posture) recognition that can be learned over the range of motion. Each time the head (mostly) stops moving (very) near one of these points, set the velocity to zero and clamp the position to that known location.

Although this is a stretch from the documents I have read, I believe in this idea, and my beliefs carry a lot of power... I share my ideas so that others may use them too.
Last edited by geekmaster on Tue May 21, 2013 6:44 am, edited 1 time in total.
"For the things we have to know before we can do them, we learn by doing them." —Aristotle
"The GREATEST discoveries shall be found in pursuit of diminishing returns." —geekmaster
Everything ingenious is simple. Generosity has no limits.
User avatar
geekmaster
 
Posts: 2613
Joined: Fri Apr 12, 2013 8:07 pm

Re: Positional tracking calibration

Postby Rabbit » Mon May 20, 2013 7:02 am

If you're sitting down and moving about a level using a gamepad then position drift wouldn't be a major concern, would it? Your position is exactly where you are in the game (as long as your player height is kept ok and centred around standing eye-level).

I suspect the OP is only talking about accelerometer readings to get it reacting nicely to those small movements, like when you lean forward or shift sideways a little. And when I say "only", I mean the range of movement; I think it would add a tremendous amount to the sense of immersion if these little shifts in perspective could be added in. Definitely worthwhile if we can get something from the existing sensors in the dev kit.

The best example I can think of is the TED talk by Johnny Lee where he does it with a Wii remote (IR tracking): http://www.ted.com/talks/johnny_lee_demos_wii_remote_hacks.html at around 3:30. The difference really jumps out.
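
Hooking those small shifts into the render camera is the easy part once you have any positional estimate at all (a Python sketch; the scale factor and camera math are placeholders, not the Rift SDK):

[code]
# Sketch: offset the render camera's eye point by the tracked head movement to get the
# "peeking around" parallax from the Johnny Lee demo. The camera handling is a stand-in.
import numpy as np

HEAD_SCALE = 1.0   # world meters per tracked meter; exaggerate (>1) for effect if desired

def camera_eye(base_eye, head_offset, orientation_matrix):
    """Offset the eye point by the tracked head movement, rotated into world space."""
    return base_eye + orientation_matrix @ (HEAD_SCALE * head_offset)

# Usage: lean 5 cm to the right and 3 cm forward while facing the default direction.
R = np.eye(3)
eye = camera_eye(np.array([0.0, 1.7, 0.0]), np.array([0.05, 0.0, -0.03]), R)
print(eye)
[/code]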
User avatar
Rabbit
 
Posts: 2
Joined: Fri Mar 29, 2013 6:30 pm
Location: Melbourne, Australia

Re: Positional tracking calibration

Postby geekmaster » Mon May 20, 2013 5:01 pm

Rabbit wrote:... I suspect the OP is only talking about accelerometer readings to get it reacting nicely to those small movements, like when you lean forward or shift sideways a little. And when I say "only", I mean the range of movement; I think it would add a tremendous amount to the sense of immersion if these little shifts in perspective could be added in. Definitely worthwhile if we can get something from the existing sensors in the dev kit. ...

That is EXACTLY what I was talking about (in full agreement with the OP). The OP would like to know IF it can be done. I say it CAN be done. Others here say it cannot be done because they tried it and failed. I have provided references and links to others who have solved this problem well enough for our purposes, in a constrained environment. And yet, there are persistent doubters, just like in other threads where this was discussed.

I agree that seated motion is not the same as walking (with a foot-mounted accelerometer). But that constrains the range of motion even more. The skeleton is anchored to the buttocks, which are attached to a chair. That works to our benefit. Even standing is not a problem, because you are limited by your need to maintain balance. We just need to adapt to those limits.

As I mentioned in various posts, I already have some rudimentary constrained positional tracking working. I have some tuning to do, and I need to integrate it into some sort of an app (perhaps the Tuscany demo)... I do not know why people keep disagreeing about the possibility or practicality of things that I have already done, or even things that I already know how to do, for that matter...
"For the things we have to know before we can do them, we learn by doing them." —Aristotle
"The GREATEST discoveries shall be found in pursuit of diminishing returns." —geekmaster
Everything ingenious is simple. Generosity has no limits.
User avatar
geekmaster
 
Posts: 2613
Joined: Fri Apr 12, 2013 8:07 pm
