I have been working on the IMU over the last few days. First, we ran some tests with data coming from the IMU and our DTW implementation, and the results bode well: if we compare two series of acceleration data from two more or less similar gestures, we get a much lower distance than if we compare two different gestures. Not a big surprise, but at least for precise gestures there will be no doubt about which one the user has made.
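For readers unfamiliar with it, the comparison above can be sketched as classic dynamic time warping. This is a minimal illustration, not our actual project code; the function name and the absolute-difference cost are my own choices for the sketch:

```python
def dtw_distance(a, b):
    """Classic O(len(a) * len(b)) DTW between two 1-D series,
    using absolute difference as the local cost."""
    n, m = len(a), len(b)
    INF = float("inf")
    # dp[i][j] = minimal cost of aligning a[:i] with b[:j]
    dp = [[INF] * (m + 1) for _ in range(n + 1)]
    dp[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            dp[i][j] = cost + min(dp[i - 1][j],      # stretch a
                                  dp[i][j - 1],      # stretch b
                                  dp[i - 1][j - 1])  # match step
    return dp[n][m]
```

With this, two recordings of roughly the same gesture score a small distance, while two different gestures score a much larger one, which is exactly the behaviour we observed. In practice each axis of the acceleration would be compared, or the cost would use the full 3-D sample, but the idea is the same.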
Nonetheless, I have run into some trouble: the IMU reports its data along its own axes, and we would like them along the user's axes. The aim is that even if the user doesn't hold the wand quite straight, the calculation compensates for the wand's orientation. From a mathematical viewpoint it's quite easy, just a plain change of basis in three dimensions, yet my results in that area are a bit disappointing. I use the quaternion data given by the IMU; from it I can derive the coordinates of each of the IMU's axes in the user's frame (the rotation matrix built from the quaternion). Since I know how to express the IMU's basis in the user's basis, and I also know the coordinates of the acceleration vector in the IMU's basis, I can compute the coordinates of that same vector in the user's basis. But with all the multiplications that implies, the result isn't quite what it should be: the values move a lot even when the IMU is still, and the gravity vector is sometimes more horizontal than vertical.
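The change of basis I'm describing can be sketched as follows. This is an illustrative snippet, not the project code, and it assumes the usual (w, x, y, z) quaternion convention; one common source of exactly the jitter I'm seeing is applying the formula to a quaternion that is no longer exactly unit length, so the sketch normalizes first:

```python
import math

def quat_to_matrix(w, x, y, z):
    """Rotation matrix of a quaternion, normalized first so that
    small drifts in the IMU's quaternion don't scale the result."""
    n = math.sqrt(w*w + x*x + y*y + z*z)
    w, x, y, z = w/n, x/n, y/n, z/n
    return [
        [1 - 2*(y*y + z*z), 2*(x*y - w*z),     2*(x*z + w*y)],
        [2*(x*y + w*z),     1 - 2*(x*x + z*z), 2*(y*z - w*x)],
        [2*(x*z - w*y),     2*(y*z + w*x),     1 - 2*(x*x + y*y)],
    ]

def to_user_frame(matrix, accel):
    """Project an acceleration vector from the IMU frame into the
    user frame by multiplying with the 3x3 rotation matrix."""
    return [sum(matrix[i][j] * accel[j] for j in range(3))
            for i in range(3)]
```

A quick sanity check of the kind I should be running: with the wand at rest, whatever its orientation, the rotated acceleration should come out close to the gravity vector, e.g. roughly (0, 0, 9.81) in the user frame. Note that whether the quaternion encodes IMU-to-user or user-to-IMU depends on the sensor's convention; if it's the latter, the transpose of the matrix is the one to apply.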
So, more about it tomorrow. I hope I'll have a solution, and I am open to suggestions 🙂