The teams have been formed and the projects confirmed today, and the Figurines project made the cut! Three of us are now working on it!
Now we have to find it a real name; more on that tomorrow!
For today, here are some of the things we have to think about:
– the basic shot will be incorporated into the base module (which will be a modified Pololu 3pi); there's no need for a secondary module, which would complicate things too much with mechanical considerations. The shot can be done with IR or lasers. Whatever we choose, it has to be easily detectable by the other figurines (a square receptor of a certain area). Lasers would be cool (especially with a little smoke on the battlefield :p) and could have one of two colors depending on the figurine's team. We could also encode basic info about the shooter into the beam (enough to identify it), so that when a figurine is shot, the server is told the identity of the shooter and can then dole out experience points, decide how much HP to take from the shot figurine depending on the level and power of the shooter, and so on (see the small encoding sketch after this list). We don't know yet if all of this could be done with IR.
– the server needs to know the distance between all the figurines, so each of them has to be able to measure its distance to the other figurines and send that info back to the server (through the phone). We don't know yet how we're going to achieve that. We would like to do this without having to constrain the battlefield (otherwise we could imagine a few fixed beacons on the battlefield (3 at least) that allow the position of the figurines to be known, or a camera above the battlefield that would track the position of all the figurines). Ultrasound would be one way, but it would probably be too hard to distinguish common objects from figurines. Maybe we could measure the transmission time of a radio wave between the figurines (some rough numbers on that after this list)…
– the user needs to see what the figurine is looking at (i.e. aiming at). The basic idea was to stream video from a camera aligned with the cannon, but streaming video is probably too tricky in terms of resources. We could instead pre-process the video on board and only send the phone enough information about the figurine's environment to reconstruct a virtual rendering of it (a rough idea of such a message is sketched after this list). This would also give the figurine information about its environment, and maybe allow an autonomous mode in which it wouldn't need the user to guide it but could be configured to accomplish some “simple” actions: hide, detect an enemy and aim at it, etc.
– the secondary modules will add virtual capabilities only; anything else would be too complicated to do. So all we have to do is find a way for the different modules “plugged” into the figurine to be detected and identified by the main module, which will relay the info to the phone, which in turn will apply the corresponding virtual capability changes. One way to do this is to put the modules on a bus like SPI, where the master can detect all the slaves and read a specific ID from each of them when the figurine boots (see the sketch after this list).
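To make the shot-encoding idea a bit more concrete, here is a minimal sketch of what the emitter side could look like, assuming a made-up 8-bit frame (2 team bits + 6 shooter-ID bits) keyed onto the laser/IR diode. The frame format, the bit period and the beam_on()/beam_off()/delay_us() helpers are all placeholders, not decisions:

```c
/* Sketch of a possible shot frame: 2 team bits + 6 shooter-ID bits,
 * sent as a simple on/off keyed pulse train on the laser/IR emitter.
 * beam_on(), beam_off() and delay_us() stand in for whatever the
 * final hardware layer provides -- nothing here is final. */
#include <stdint.h>

#define BIT_PERIOD_US 600u  /* arbitrary, to be tuned against the receiver */

void beam_on(void);         /* drive the emitter pin high  (placeholder) */
void beam_off(void);        /* drive the emitter pin low   (placeholder) */
void delay_us(uint16_t us); /* busy-wait helper            (placeholder) */

/* Send one long start pulse followed by 8 data bits, MSB first. */
static void send_shot(uint8_t team, uint8_t shooter_id)
{
    uint8_t frame = (uint8_t)((team & 0x03u) << 6) | (shooter_id & 0x3Fu);

    beam_on();                      /* start pulse so receivers can sync */
    delay_us(2u * BIT_PERIOD_US);
    beam_off();
    delay_us(BIT_PERIOD_US);

    for (int8_t i = 7; i >= 0; i--) {
        if (frame & (1u << i))
            beam_on();
        delay_us(BIT_PERIOD_US);    /* first half of the slot: on = 1, off = 0 */
        beam_off();
        delay_us(BIT_PERIOD_US);    /* second half is always off (inter-bit gap) */
    }
}
```

A receiver would then just need to time-slice from the start pulse and sample the first half of each bit slot.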
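To give an idea of why the radio time-of-flight option scares us a little, here is the back-of-the-envelope arithmetic; nothing project-specific in it, just the speed of light:

```c
/* How long does a radio wave take to cover typical battlefield distances?
 * Just arithmetic, not an implementation. */
#include <stdio.h>

int main(void)
{
    const double c = 3.0e8;                    /* speed of light, m/s */
    const double distances_m[] = { 0.1, 1.0, 5.0 };

    for (unsigned i = 0; i < sizeof distances_m / sizeof distances_m[0]; i++) {
        double t_round_trip_ns = 2.0 * distances_m[i] / c * 1e9;
        printf("%.1f m  ->  %.2f ns round trip\n", distances_m[i], t_round_trip_ns);
    }
    /* 10 cm of resolution is well under a nanosecond of round-trip time,
     * so we would likely need dedicated ranging hardware (UWB-style chips)
     * rather than timing this ourselves on the microcontroller. */
    return 0;
}
```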
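For the virtual-rendering idea, the kind of pre-processed message we have in mind could look something like the structure below; the object kinds, field sizes and the fixed packet length are purely illustrative at this point:

```c
/* Rough idea of a "pre-treated" view: instead of raw video, the figurine
 * sends a handful of detected features that the phone can use to rebuild
 * a virtual scene.  The layout is illustrative only. */
#include <stdint.h>

typedef enum { OBJ_UNKNOWN = 0, OBJ_WALL, OBJ_FIGURINE } object_kind_t;

typedef struct {
    uint8_t  kind;         /* one of object_kind_t */
    int16_t  bearing_cdeg; /* angle from the cannon axis, in 0.01 degree */
    uint16_t distance_mm;  /* estimated range */
} detected_object_t;

typedef struct {
    uint8_t           count;      /* number of valid entries below */
    detected_object_t objects[8]; /* small, fixed-size packet for the phone */
} environment_frame_t;
```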
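Finally, here is roughly how the SPI module detection could work at boot. The number of slots, the command byte and the SPI helpers are all hypothetical; the real protocol is still to be defined:

```c
/* Sketch of module detection: at boot the main module asserts each slot's
 * chip-select line in turn and asks for an ID byte over SPI. */
#include <stdint.h>

#define NUM_SLOTS   4
#define CMD_READ_ID 0xA5u   /* hypothetical "who are you?" command        */
#define ID_NONE     0x00u   /* an empty slot reads back 0x00 (or 0xFF)     */

void    spi_select(uint8_t slot);       /* assert CS for a slot (placeholder) */
void    spi_deselect(uint8_t slot);     /* release CS           (placeholder) */
uint8_t spi_transfer(uint8_t byte_out); /* full-duplex byte swap (placeholder) */

/* Fill ids[] with the ID reported by each slot, 0 if nothing answered.
 * Returns the number of modules found. */
uint8_t scan_modules(uint8_t ids[NUM_SLOTS])
{
    uint8_t found = 0;

    for (uint8_t slot = 0; slot < NUM_SLOTS; slot++) {
        spi_select(slot);
        (void)spi_transfer(CMD_READ_ID); /* send the request      */
        ids[slot] = spi_transfer(0x00);  /* clock out the answer  */
        spi_deselect(slot);

        if (ids[slot] != ID_NONE && ids[slot] != 0xFFu)
            found++;
    }
    return found; /* the list of IDs can then be forwarded to the phone */
}
```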
As always, any comments, advice, etc. are more than welcome!
We’re excited to do this project!
PS: On a different topic, I wrote the BLE tutorial with Lerela; it was fun to go through the 2700 pages of the Bluetooth specs 😉
Until next time!