Last week I studied the various options available for our MAESTROSEs. I dropped the idea of using sound to localize the other modules: we had too many doubts about the timings involved, and we ran into various hardware troubles that postponed any tests.
We had another issue: we wanted a system to obtain unique IDs for our modules, so that they can be addressed individually to implement more advanced functions. I found some interesting algorithms for this purpose: one of them relies on very basic communication, which is feasible with our XBees' broadcasting, and such a solution would work with any new module added to the network. Nevertheless, a more pragmatic solution (the one we will use) is to rely on the serial IDs of our microcontrollers or XBee modules, which the manufacturer guarantees to be unique. We had wanted to avoid them mainly because they are too long, but a hash will do the trick.
In the meantime I have been studying the baud rates we can achieve with one, and then several, modules emitting, but I have had trouble with the code (which is why I am rewriting the XBee driver). I am still trying to get it working: I can send and receive packets, but I have trouble with the baud rate (sometimes 9600 bps works, then 115200, then neither works anymore) and with the synchronization between two emitters (for example, if one emitter sends packets while the other is being configured, the latter crashes).
We have had bad luck for the past three days: we first lost two days because our (blown?) microcontroller stopped responding. Once we confirmed the problem, Alexis soldered a brand new one for us. We had just enough time to confirm that our code so far works before facing another hardware problem: a broken pin on our codec (we had been wondering why, despite all our efforts, we couldn't get a response from it). Once again Alexis fixed it, and we are ready to go!
While I couldn't work on the microcontroller, I thought a bit about a way to localize our modules with "sound pings": we enable both codecs in full-duplex mode; the first one emits a sound (at a fixed frequency, for example with the sine test available in the codec) while the other listens. As soon as the second module hears the sound, it repeats it back to the first one. When the first module gets the answer, it checks the time and obtains a delay. This delay should be something like 2*distance/sound_speed + T, with T a constant covering the time to generate the first sound, then to detect it and generate the answer, and finally to detect the answer. The problem is whether this T really is constant or not (I searched the datasheet but could not find this kind of timing). I fear we might run short of time to implement this function, mainly because of the delays caused by our hardware problems. If we can fix the codec soon enough, I think we should give it a try. Otherwise, let's focus on the main features we have planned (we are already able to localize at close range, below one meter, with RSSI, which is already pretty cool).
Today I tried to determine whether we can transmit music reliably with our XBees (will the bitrate be good enough when many MAESTROSEs are talking at the same time? Will we lose any data?). I could not conclude yet because of some trouble I am having with ChibiOS's timers; I should fix that tomorrow, though. In the meantime, Bertrand and Aurélien have diagnosed and then repaired (with Alexis's help) our codec, and have begun implementing FRAM support.
Now that we know for sure what our MAESTROSEs will be made of, we have taken two parallel paths: on one hand, we have begun porting our code from the STM32F103 to the STM32F405, so that as soon as we receive our PCBs we can switch smoothly; on the other hand, we have been sharpening our tools by coding for the F103. Because of some coding problems we are still fixing, we do not yet know whether we can rely on our XBee for positioning. This is why we plan to use our speakers and microphones instead (perhaps as an auxiliary system) so that our MAESTROSEs can locate each other: they can talk, so why not say something useful? We plan to synchronize them and then have them play certain frequencies at a given time. We could then measure the time before the signal is received and deduce the distance between modules. The main problem I see is that, given the special speakers we use (surface transducers), we might get surprises concerning the speed of sound: depending on the surface they are placed on, this could lead to unexpected values.
In the meantime we have begun writing our main program. We plan to start with a first sequence to detect the other MAESTROSEs already present (so that they identify each other). Then, some special data (or frequencies, depending on the system we finally choose) will enable each of them to locate the others. From that, they can decide, for example, after what delay they will play the sounds coming from each speaker.
Then each MAESTROSE would wait for a signal (or a sound, depending on the mode we choose: playing music from an SD card or live sounds) that it would automatically retransmit after the delay defined earlier (except for the modules we activate manually, with a ZigBee signal or a switch, to actually emit sounds and music).
We had a bitter surprise today: we just realized that with our nRF24L01+, the only information about received power we can get is whether we are above or below -64 dBm, while we had planned to use a precise attenuation value to derive the distance between two modules. This means we will eventually have to fall back on our good old XBee Pro. Now that we know (almost) all the components we will need, we are currently designing our PCB. We must also focus on a way to get more precise results with our XBee (our current resolution is still far too coarse).
We are a bit disappointed to give up on our nRF24L01+ (we had quite a good time soldering it onto our boards, and we spent a few days trying to use it), but at least now we have a solid basis to work on.
For a few days, we have been working on a way to locate our modules using the attenuation of the signals they send. Bertrand has pushed his ZigBee tests further (without reaching a conclusion yet), but we fear we might lack resolution, because the attenuation does not seem very reliable. This is why, in parallel, we are working on a second module, the nRF24L01+, to compare the resolution we can achieve. We have (almost) finished installing it on our boards, and we are writing code to use it and run our tests. We will post the results as soon as we get them.
Today, we discussed our project of building a spider robot (a hexapod) able to walk on walls.
We now have a general idea of its look, its features, the way we can build it, and its components.
It will have six legs, each controlled by three servomotors and attached to a double-deck chassis.
We plan to add a suction cup to each leg, driven by a central vacuum generator with one valve per cup, to enable our robot to crawl on walls.
We will add a camera (probably orientable with a servomotor) to capture images of what our robot sees.
It will transmit this video over Wi-Fi and process it to find and avoid obstacles on its path.
We have defined our objectives and established a schedule.
If you are interested, you can learn more at tomorrow's presentation.
We even have a sweet surprise for you concerning the secret name of our project…
Aurélien, Bertrand & Guillaume