Before going to bed

Hi,

I haven't posted for a few days because I was tired. In fact, Monday was ·− / ···− · ·−· −·−− / ···· ·− ·−· −·· / −·· ·− −·−− 🙂
After presenting a first prototype, we are now working on the finishing touches. For example, I implemented the second capacitive sensor yesterday.

Now, I have to obey Alexis and go to sleep.

Good night!

Evaluation

Hi,

today we had an evaluation of the project and … there are still a lot of things to do. Our project works, but there are still some bugs, and it is not easy to use for someone outside our group. I also worked on the compass using the magnetometer, and it works nicely. I fixed one of the bugs I mentioned as well: we had to keep the PCB plugged into the USB to make it work, and I think I solved this; it was just a mistake in the code. So all the major features are operational, but more thorough integration is still required, as well as user-friendliness.

Hugues

DDay

During the last three days we have been in quite a rush. I am happy to say that we eventually succeeded in getting a good motion recognition process: it is based on two algorithms. As we saw that DTW wasn't as effective as expected, we split into two groups: one working on improving the DTW, the other on detecting the movement from the orientation of the wand. For the latter we use quaternions: we split the space around the user into slices, or quadrants, and a movement is considered to be a succession of different orientations, which means a succession of different slices. This is quite useful, as the amplitude of the gesture no longer matters. I have also been working on making a compass using the magnetometer, but unfortunately it won't be ready for tomorrow… a shame, because it actually started working today.

See you,

Hugues

Loudness detector and … 3D printing filament story

Today, we continued to work on the MEMS microphone with Alexis, and Samuel also contributed to the code. I would like to thank them for their help.

We have changed the filter since yesterday: now the ST Microelectronics PDM library is used. We have implemented a loudness detector which sends an event when the loudness goes over a specified threshold.
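In outline, the detector does something like the following. This is a simplified sketch with made-up names and an arbitrary threshold; the real code runs in its own thread on the PCM blocks coming out of the PDM library:

```c
#include <stdint.h>
#include <stdbool.h>

/* Hypothetical sketch of the loudness detector: compute the mean
 * square of a block of PCM samples and report when it crosses a
 * threshold. The threshold value here is illustrative and would be
 * tuned experimentally. */
#define LOUDNESS_THRESHOLD 4000000u   /* mean-square level */

static bool loudness_exceeded(const int16_t *pcm, unsigned n)
{
    uint64_t acc = 0;
    for (unsigned i = 0; i < n; i++)
        acc += (uint64_t)((int32_t)pcm[i] * pcm[i]);
    return (acc / n) > LOUDNESS_THRESHOLD;
}
```

When this returns `true`, the thread would broadcast the event to the rest of the system.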

For this test, the relevant thread was almost the only one running, but tomorrow I will test it in parallel with the other ones. These results will tell us whether we will be able to capture a useful sound during the game, and also whether voice recognition will be possible. To be continued…
You can see this feature in the following video:

Moreover, this morning I finished some modifications to the 3D model. However, we were not lucky with the 3D printer today. On top of running out of filament, we had problems with the printer three times because of knots in the filament. Together with the Expelliarose team, we attempted, and succeeded at, a really tricky operation: winding a coil of 3D printing filament onto another coil compatible with the 3D printer. I'll just list our tools and let you guess the process: a broom, a coat rack, a pen, a drill, … 😀 It was a very funny and interesting piece of teamwork.

Now, it's time for me to sleep. Bye!

Microphone check, one, two, one, two…

Hi,

Yesterday, we worked on the microphone with Alexis. Using the I2S protocol is more practical for us, but it isn't implemented in ChibiOS 2.5.6, so we decided to port the implementation available in ChibiOS 3. Once the right configuration was set, we finally succeeded in making the clock and the data transfer work.

After this, we added some processing to make the raw data useful. For this, we used a library that implements decimation and a low-pass filter. This way, the microphone captures sound 😀
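To give an idea of what the library does for us, here is a naive, purely illustrative version of PDM-to-PCM conversion. The actual ST library uses proper decimation filters; this sketch just shows the principle that averaging the one-bit stream is simultaneously a crude low-pass filter and a decimation:

```c
#include <stdint.h>

/* Illustrative only: average 64 one-bit PDM samples (a crude low-pass
 * filter plus a 64x decimation) and rescale to a 16-bit PCM sample. */
static int16_t pdm64_to_pcm(const uint8_t pdm[8])   /* 64 PDM bits */
{
    unsigned ones = 0;
    for (int i = 0; i < 8; i++) {
        uint8_t b = pdm[i];
        while (b) {             /* count set bits in this byte */
            ones += b & 1u;
            b >>= 1;
        }
    }
    /* 0..64 ones mapped to roughly -32736..32736 */
    return (int16_t)(((int)ones - 32) * 1023);
}
```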

Now, we just have to apply a band-pass filter to remove some useless data. The next step will be sound recognition 🙂

PS: Thanks again to Alexis for his HUGE help.

Bye!

Motion recognition again

Hi,

obviously I was too optimistic a few days ago when I said that the motion recognition was going well: the results were not conclusive and looked rather random. So for the last few days I have been running tests, plotting the IMU's output to check that the data were consistent with what I expected. They were, so the problem has to come from the DTW algorithm, which doesn't seem to be as effective as the documents, theses and other material we read led us to believe. We have added pre-processing and tried specific gestures, and the results have slightly improved, but they are still quite disappointing. Still, we are not giving up, and we will find a way even if it means utterly changing how we recognize the spells. I also worked on the microphone: we spent a lot of time trying to generate the clock without success, and it looks like we hadn't configured the microprocessor properly, that is, we weren't using it to its full ability. But it's starting to work now, so more info this evening.
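For reference, the DTW recurrence looks like this in a minimal one-dimensional form. This is a sketch, not our code: our real input is multi-axis IMU data, but the algorithm is the same, accumulating the cheapest alignment cost between the two sequences:

```c
#include <math.h>

#define DTW_MAX 32   /* sketch assumes n, m <= DTW_MAX */

/* Minimal dynamic time warping on 1-D sequences: returns the
 * accumulated cost of the best alignment between a[0..n-1]
 * and b[0..m-1]. */
static float dtw_distance(const float *a, int n, const float *b, int m)
{
    static float d[DTW_MAX + 1][DTW_MAX + 1];

    for (int i = 0; i <= n; i++)
        for (int j = 0; j <= m; j++)
            d[i][j] = INFINITY;
    d[0][0] = 0.0f;

    for (int i = 1; i <= n; i++) {
        for (int j = 1; j <= m; j++) {
            float cost = fabsf(a[i - 1] - b[j - 1]);
            float best = d[i - 1][j];                       /* insertion */
            if (d[i][j - 1] < best) best = d[i][j - 1];     /* deletion  */
            if (d[i - 1][j - 1] < best) best = d[i - 1][j - 1]; /* match */
            d[i][j] = cost + best;
        }
    }
    return d[n][m];
}
```

Because of the warping, a gesture performed twice as slowly still aligns with cost 0 against the original, which is exactly why amplitude and speed differences were supposed to be handled for free.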

Hugues

I'm working against the clock but the MEMS micro one is working against me!

Since Thursday, I've been working on the microphone. We have decided to use this mic, at first, just to capture the sound intensity and force the player to scream during a spell.
In the future, we may use it for some voice recognition, using the same algorithm employed for the motion recognition.

But so far, I still have problems receiving data from the microphone. It is linked to the processor through 2 pins that make I2S possible: I2S CK and I2S SD.
I have configured the different registers not only to get a correct clock but also to specify all the chosen I2S characteristics, as described in the documentation.

Despite this, when I read the SPI status register, I don't see the “receive buffer not empty” flag set, which means that no data has been received. So I think this is a clock problem, because the data transfer is driven by the clock. I wanted to look at the clock with a logic analyser, but the pin is too small and it's too risky. So I decided to just check the clock pin with a direct read of the pin, but for the moment I see a fixed value. I've enabled the SPI2 clock, which is available on the same pin, but it still doesn't work.
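To make the symptom concrete, here is roughly what the polling read looks like, written against a stand-in register block so it can run anywhere. The SR/DR roles and the RXNE bit position follow the STM32F4 reference manual, but the struct below is illustrative, not the vendor header:

```c
#include <stdint.h>
#include <stdbool.h>

/* Stand-in for the SPI/I2S register block (illustrative layout). */
typedef struct {
    volatile uint32_t SR;   /* status register, bit 0 = RXNE */
    volatile uint32_t DR;   /* data register                 */
} i2s_regs_t;

#define I2S_SR_RXNE (1u << 0)   /* "receive buffer not empty" */

/* Try to read one sample; returns false when RXNE is not set,
 * which is exactly the symptom I am seeing. */
static bool i2s_try_read(i2s_regs_t *i2s, uint16_t *sample)
{
    if ((i2s->SR & I2S_SR_RXNE) == 0)
        return false;            /* nothing received            */
    *sample = (uint16_t)i2s->DR; /* on real hardware, reading DR
                                    clears RXNE                 */
    return true;
}
```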

I must have overlooked something, but I will try to fix this issue quickly.
If anyone has an idea that may help me, it will be most welcome 🙂

See you!

Updates about the Android application

Since a picture speaks a thousand words, I'll let you do the maths to find out how many blog posts I will save by showing you some screenshots of our first Android application!

This might not seem like much, but we're now able to play with multiple wands, since the application assigns a unique identifier to each one. And that's a great way to make sure spells are correctly sent and received.

Screenshot_2015-04-14-20-06-51

Screenshot_2015-04-14-20-07-09

Screenshot_2015-04-14-20-07-28

Next step: the server!

(Note that this is a debug application; some options are not meant to be part of the final product.)

[Expelliarose] Some kind of recognition

Hi,

This evening, Hugues and I managed to make the IMU data streaming and the DTW recognition work together. We only made really simple tests: trying a movement along the y axis and one along the z axis. They were recognized, but there is no handling of false positives yet. The road is long, but we are still pretty happy with the result.

Good night,

Mickaël

Motion recognition !!

Hi,

Great news: today we performed our first motion recognition with the wand's PCB. We had some disappointing results at first; they were either inconclusive or plain wrong!! With the DTW algorithm as we had implemented it, a horizontal move could be closer to a vertical move than to another horizontal one, but now it's OK. Provided the move is big enough (if you just move your thumb vertically it won't be recognized 🙂 ), we can tell the difference between a horizontal and a vertical move. I know it's not much, but it's enough for two spells for our wand, attack and protection, so we can start playing!!

Apart from that, I also tried to compensate for the orientation of the wand in the user's hand, but didn't get good results, so it's quite likely we will have to give up on that, at least for now. I had a look at MIT-GRT, but apparently the orientation is only computed for static postures, whereas we need it during a movement.

Bye,

Hugues