Project confirmed!

The teams have been formed and the projects confirmed today, and the Figurines project made the cut! Three of us are now working on it!

Now we have to find it a real name, more on that tomorrow!

For today, here are some of the things we have to think about:

– the basic shot will be incorporated into the base module (which will be a modified Pololu 3pi); there's no need for a secondary module, which would complicate things too much mechanically. This shot can be done with IR or lasers. Whatever we choose, it has to be easily detectable by other figurines (a square receptor of a certain area). Lasers would be cool (especially with a little smoke on the battlefield :p) and could come in one of two colors depending on the team of the figurine. We could also encode basic info about the shooter into the beam (enough to identify it), so that when a figurine is shot, the server learns the identity of the shooter and can then dole out experience points, decide how much HP to take from the shot figurine depending on the level and power of the shooter, etc. We don't know yet whether all this could be done with IR.
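To make the "encode the shooter into the beam" idea concrete, here is a minimal sketch of what a shot word could look like. The field widths (2-bit team, 6-bit shooter ID, 4-bit checksum) are pure assumptions for illustration, not a chosen design:

```python
# Hypothetical shot word: [2-bit team][6-bit shooter ID][4-bit checksum].
def encode_shot(team: int, shooter_id: int) -> int:
    """Pack team and shooter ID into a 12-bit word with a simple checksum."""
    assert 0 <= team < 4 and 0 <= shooter_id < 64
    payload = (team << 6) | shooter_id            # 8-bit payload
    checksum = (payload ^ (payload >> 4)) & 0xF   # 4-bit XOR checksum
    return (payload << 4) | checksum

def decode_shot(word: int):
    """Return (team, shooter_id), or None if the word is corrupted."""
    payload, checksum = word >> 4, word & 0xF
    if ((payload ^ (payload >> 4)) & 0xF) != checksum:
        return None                               # ignore a garbled shot
    return payload >> 6, payload & 0x3F

print(decode_shot(encode_shot(team=1, shooter_id=42)))  # (1, 42)
```

The figurine that gets hit would relay the decoded pair to the server through the phone, so the server knows who shot whom; the checksum lets it drop shots mangled by ambient IR.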

– the server needs to know the distance between all the figurines, so each of them has to be able to measure its distance to the other figurines and send that info back to the server (through the phone). We don't know yet how we're going to achieve that. We would like to do this without having to constrain the battlefield; otherwise we could imagine a few fixed beacons (at least 3) that would allow the position of each figurine to be computed, or a camera above the battlefield that would track the positions of all the figurines. Ultrasound would be one way, but it would probably be too hard to distinguish common objects from figurines. Maybe we could measure the propagation time of a radio wave between the figurines…
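As an aside, if we did fall back to fixed beacons, three beacon distances are enough to recover a 2D position by trilateration. A minimal sketch (the beacon coordinates are made up for the example):

```python
import math

def trilaterate(b1, b2, b3, r1, r2, r3):
    """Solve for (x, y) from distances r_i to known beacons b_i = (x_i, y_i)."""
    (x1, y1), (x2, y2), (x3, y3) = b1, b2, b3
    # Subtracting the three circle equations pairwise gives a 2x2 linear system.
    a11, a12 = 2 * (x2 - x1), 2 * (y2 - y1)
    a21, a22 = 2 * (x3 - x1), 2 * (y3 - y1)
    c1 = r1**2 - r2**2 + x2**2 - x1**2 + y2**2 - y1**2
    c2 = r1**2 - r3**2 + x3**2 - x1**2 + y3**2 - y1**2
    det = a11 * a22 - a12 * a21
    return ((c1 * a22 - c2 * a12) / det, (a11 * c2 - a21 * c1) / det)

# A figurine at (1.0, 2.0) with beacons at three corners of a 3 m square:
beacons = [(0, 0), (3, 0), (0, 3)]
dists = [math.dist(b, (1.0, 2.0)) for b in beacons]
print(trilaterate(*beacons, *dists))  # ≈ (1.0, 2.0)
```

In practice the measured distances would be noisy, so a real version would use more beacons and a least-squares fit, but the principle is the same.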

– the user needs to see what the figurine is looking at (i.e. aiming at). The basic idea was to stream video from a camera aligned with the cannon, but streaming video is probably too tricky in terms of resources. We could apply some pre-processing to the video, in order to send the phone only enough information about the figurine's environment to reconstruct a virtual rendering of it. This would also give the figurine itself information about its environment, and maybe allow an autonomous mode in which it wouldn't need the user to guide it but could be configured to accomplish some "simple" actions: hide, detect an enemy and aim at it, etc.
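One way to picture the pre-processing idea: instead of frames, the figurine sends a short list of detected objects each cycle. The object kinds and the field layout below are assumptions for illustration, but they show how little data is needed compared to video:

```python
import struct

# What the camera's pattern-detection stage might report each cycle:
detections = [
    {"kind": "obstacle", "bearing_deg": -12.0, "distance_cm": 80},
    {"kind": "robot",    "bearing_deg":  30.5, "distance_cm": 150, "team": 2},
]

def pack(dets: list) -> bytes:
    """Pack detections into 6 bytes each: kind, team, bearing*10, distance."""
    out = bytearray([len(dets)])          # 1-byte count header
    for d in dets:
        out += struct.pack(
            "<BBhH",
            0 if d["kind"] == "obstacle" else 1,
            d.get("team", 0),             # 0 = no team (obstacle)
            int(d["bearing_deg"] * 10),   # tenths of a degree, signed
            d["distance_cm"],
        )
    return bytes(out)

print(len(pack(detections)))  # 13 bytes, vs kilobytes per video frame
```

The smartphone would rebuild the simplified rendering from this list, and the same list is exactly what an autonomous mode would need to hide or aim.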

– the secondary modules will add virtual capabilities only; doing otherwise would be too complicated. So all we have to do is find a way for the different modules "plugged" into the figurine to be detected and identified by the main module, which will then relay the info to the phone, which in turn will apply the corresponding virtual capability changes. One way to do this is to put the modules on a bus like SPI, where the master can detect all the slaves and read a specific ID from each of them when the figurine boots.
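A rough sketch of what discovery at boot could look like, with one chip-select line per module socket. The bus is simulated in Python here, and the WHO_AM_I register address and the 0xFF "empty socket" convention are assumptions:

```python
WHO_AM_I = 0x0F  # hypothetical ID register address, same on every module

# Simulated sockets: a present module answers its ID, an empty one floats high.
sockets = {0: 0x21, 1: None, 2: 0x35}  # e.g. 0x21 = "shield", 0x35 = "radar"

def spi_read(cs: int, reg: int) -> int:
    """Stand-in for a real SPI register read on chip-select line `cs`."""
    module = sockets.get(cs)
    return 0xFF if module is None else module

def discover_modules(num_sockets: int = 3) -> dict:
    """Probe every socket at boot; return {socket: module_id} for present ones."""
    found = {}
    for cs in range(num_sockets):
        module_id = spi_read(cs, WHO_AM_I)
        if module_id != 0xFF:  # 0xFF = MISO pulled high, nothing answered
            found[cs] = module_id
    return found

print(discover_modules())  # {0: 33, 2: 53}
```

The main module would run this once at boot and forward the resulting table to the phone, which maps each ID to its virtual capability.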

As always any comment, advice, etc is more than welcome!

We’re excited to do this project!

PS: On a different topic, I wrote the BLE tutorial with Lerela; it was fun to go through the 2700 pages of the Bluetooth specs 😉

Until next time!

8 comments to Project confirmed!

  • Loki

    You are addressing some interesting problems.
    – Aiming: a camera, well, maybe; it's up to you to figure out, and it would be awesome if it works. Otherwise, you can put a bright LED on the front of the base, pointing toward the ground, to display a visible cone of emission. Or a straight laser line to be precise; the point is to be able to see it from above.

    – Location: it depends how precise you want to be, but be careful, because locating things in 2D or 3D space has been a whole ROSE project on its own, so it's not an easy subject.
    Maybe having an emitter on one base and a receiver on the other can help with proper synchronisation.
    Having a designated space for your battlefield can be a good idea; a roll-up mat with a printed area could be cool! Then you can put a camera above, or beacons as you said. You could even use color sensors to detect the ground and have some "terrain" bonuses.

  • Alexis Polti

    Why does the server need to know the distance between the figurines ?

  • BigFatFlo

    @Loki: Thanks for the comment!
    About aiming, we hadn't thought about a visible cone of emission or a targeting laser. But we would like the player not to need to see his robot directly to control it and aim with it, hence the camera idea.
    I think the camera idea is doable if we buy a camera with a built-in video processing module (we found one on the web which can detect colors and patterns), which would allow us to transmit back to the user only the patterns seen by the camera. The smartphone would then create a simplified rendering of the environment where only the necessary information is shown: other robots and obstacles, without the details of real-life images. For instance, the whole environment could be white, except for obstacles, which would be black, and other robots, which would be in their team's color.

    @Loki & Alexis: the server needs to know the distance between two robots in order to know which robots are damaged or healed by a 360° shot, as there would be no actual wave emitted. So it's not so much a matter of location as just a matter of distance…
    Could each robot (one after the other) emit a signal containing its ID and the time of emission, and then every robot that receives this signal send back to the smartphone the time of reception and the ID of the sender? That way we would have an estimate of the direct distance between them.

  • Alexis Polti

    How do you see that, the distance calculation ?

  • BigFatFlo

    Well, if a robot sends a signal containing its time of emission and its identity, then another robot receives it at a certain time and thus knows the propagation time, which gives us the distance. Would this be possible?

  • Alexis Polti

    Hard to tell. Depending on the precision that you need, it could be an entire project on its own or something quite trivial.

  • BigFatFlo

    Well, with a speed of 3*10^8 m/s, in order to measure a distance with 10 cm precision, we would need to measure a time of 0.33 ns… That's too small!
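    Checking that figure, from the numbers already in the thread:

```python
# Timing resolution needed for 10 cm precision at radio propagation speed.
c = 3e8              # speed of light, m/s
precision_m = 0.10   # desired distance precision: 10 cm
dt = precision_m / c
print(f"{dt * 1e9:.2f} ns")  # 0.33 ns
```

    For comparison, a 100 MHz timer tick is 10 ns, i.e. 3 m of distance per tick, which is why radio time-of-flight between hobby-grade boards is so hard.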

  • Alexis Polti

    Yep :/