Despite hardware difficulties, our drivers' development has moved forward, and it has been one week now since our codec started working, playing songs and recording sound. Our FRAM was functional much sooner, and Bertrand has implemented the ChibiOS BaseChannel interface on our SPI-driven microSD card, so it couldn't be easier to write the recorded sound to it and to stream WAV, MP3 and Ogg Vorbis files from it. The last missing driver is for the push buttons, but it has already been written by Guillaume and should only need a bit of debugging.
The main program structure that will support our applications is also under way: we have everything we need to exchange ZigBee messages between our modules, parse them, and dispatch them internally to the right threads. Well, as things stand, only one thread needs to exchange data with other modules, so our dispatching thread can seem a bit useless, but our idea of what Maestrose will do is bound to evolve over the next weeks and, well, if this small thread remains pointless, discarding it won't be a big deal.
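To give an idea of the dispatching structure (a simplified sketch only; the real firmware uses ChibiOS threads and mailboxes, and the thread names here are made up for illustration):

```python
import queue

# One inbox per consumer thread; the dispatcher reads parsed ZigBee
# messages and forwards each payload to the inbox it is addressed to.
inboxes = {"audio": queue.Queue(), "sync": queue.Queue()}

def dispatch(messages):
    # Each message is reduced here to a (destination, payload) tuple;
    # in the real code this comes from parsing the received frame.
    for dest, payload in messages:
        inboxes[dest].put(payload)

dispatch([("audio", b"\x01\x02"), ("sync", b"\x03")])
```

With a single consumer the dispatcher is indeed a no-op layer, but it keeps the door open for more threads later.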
The next big deal for us will be time synchronization through ZigBee. In fact, I had begun working on it before more important matters required my attention, and here too there is some debugging to do.
Maestrose, coming soon…
Last week I studied the various options available to our MAESTROSEs. I dropped the idea of using sound to localize the other modules: we had too many doubts about the timings involved, and the various troubles we have encountered with our hardware have postponed any tests.
We had another issue: we wanted a way to obtain unique IDs for our modules, so that they can be addressed individually to implement more advanced functions. I found some interesting algorithms for this purpose: one of them relies on very basic communication, feasible with our ZigBee modules' broadcasting, and such a solution would work with any new module added to the network. Nevertheless, a more pragmatic solution (the one we will use) is to take the serial IDs of our microcontrollers or XBee modules, which are guaranteed unique by the manufacturer. We had wanted to avoid them mainly because they are too long, but a hash will do the trick.
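As a sketch of the trick (CRC32 is just one cheap choice of hash; the serial values below are hypothetical, and with only a handful of modules the collision risk on 16 bits stays tiny):

```python
import zlib

def short_id(serial: bytes, bits: int = 16) -> int:
    # Hash the long factory serial number down to a short module ID.
    return zlib.crc32(serial) & ((1 << bits) - 1)

# Hypothetical 64-bit XBee serials, written as hex strings:
id_a = short_id(b"0013A20040A1B2C3")
id_b = short_id(b"0013A20040D4E5F6")
```

If two modules ever collided, re-hashing with a salt would resolve it.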
In the meantime I have been studying the baud rates we can achieve with one, and then several, modules emitting, but I have had trouble with the code (which is why I am rewriting the ZigBee driver). I am still trying to get it to work: I can emit and receive packets, but I have trouble with the baud rate (sometimes 9600 bps works, then it is 115200, then nothing works anymore) and with the synchronization between two emitters (for example, if one emitter sends packets while the other is being configured, the latter crashes).
We have had bad luck for the past three days: we first lost two days because our (fried?) microcontroller stopped responding. Once we had confirmed the problem, Alexis soldered a brand new one for us. We had just enough time to confirm that our code so far works before facing another hardware problem: a broken pin on our codec (we had been wondering why, despite all our efforts, we couldn't get a response from it). Once again Alexis fixed it, and we are ready to go!
While I couldn’t work on the microcontroller, I thought a bit about a way to localize our modules with “sound pings”: we enable both codecs in full-duplex mode; the first one emits a sound (at a fixed frequency, for example using the sine test available in the codec) while the other listens. As soon as it hears the sound, it repeats it back to the first module. When the first module gets an answer, it checks the time and obtains a delay. This delay should be something like 2*distance/sound_speed + T, with T a constant covering the time to generate the first sound, then to detect it and generate an answer, and finally to detect the answer. The problem is whether this T really is constant (I searched the datasheet but couldn’t find this kind of timing). I fear we might run short of time to implement this feature, mainly because of the delay caused by our hardware problems. If we can fix the codec soon enough, I think we should give it a try; otherwise, let’s focus on the main features we have planned (we are already able to localize at close range, below one meter, with RSSI, which is already pretty cool).
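Inverting the delay model gives the distance directly; a small sketch, assuming T has been calibrated by pinging at (almost) zero distance, and that the 5 ms value below is purely illustrative:

```python
SOUND_SPEED = 343.0  # m/s in air at ~20 °C; a surface transducer on a
                     # table may also carry sound faster through the solid

def distance_from_ping(delay_s: float, t_const_s: float) -> float:
    # delay = 2 * distance / SOUND_SPEED + T  =>  solve for distance
    return (delay_s - t_const_s) * SOUND_SPEED / 2

t_const = 0.005                       # hypothetical 5 ms processing time T
d = distance_from_ping(0.0108, t_const)
```

At 343 m/s, every millisecond of jitter in T costs about 17 cm of accuracy, which is why T being truly constant matters so much.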
Today, I was trying to determine whether we can transmit music reliably with our ZigBee modules (will the bitrate be good enough when many MAESTROSEs are talking at the same time? Will we lose data?). I couldn’t conclude yet because of some trouble using ChibiOS’s timers, which I should fix tomorrow. In the meantime, Bertrand and Aurélien have diagnosed and then repaired (with Alexis’ help) our codec, and have begun implementing FRAM support.
Now that we know for sure what our MAESTROSEs will be made of, we have taken two parallel paths: on one hand, we have begun porting our code from the STM32F103 to the STM32F405, to make sure that as soon as we receive our PCBs we will be able to switch smoothly; on the other hand, we have kept sharpening our tools by coding for the F103. Due to some coding problems we are still trying to fix, we don’t yet know for sure whether we can rely on our XBee for positioning. This is why we plan to use our speakers and microphones instead (or maybe as an auxiliary system) for our MAESTROSEs to locate each other: they can talk, so why not say something useful? We plan to synchronize them, then have them play some frequencies at a given time. We could then measure how long the signal takes to arrive and deduce the distance between them. The main problem I see with this is that, given the special speakers we use (surface transducers), we might get surprises concerning the speed of sound (which depends on the surface we put them on and might lead to unexpected values).
In the meantime we have begun writing our main program. We plan to begin with a first sequence to acknowledge the other MAESTROSEs already present (so that they identify each other). Then, some special data (or frequencies, depending on the system we finally choose) will enable each of them to locate the others. From there, they can decide, for example, after what delay each speaker should play incoming sounds.
Each MAESTROSE would then wait for a signal (or a sound, depending on the chosen mode: playing music from an SD card or relaying live sounds) that it would automatically retransmit after the delay defined earlier (except for those we activate manually, with a ZigBee signal or a switch, to actually emit sounds and music).
Back to the algorithm that derives the modules’ layout from the whole set of pairwise distances.
It is now implemented, and thoroughly tested. It took some transformations for the results to be comparable to the (pseudo-random) inputs, but let’s get to the gist of it.
For the first time in this project’s (and its precursor’s) history, I have the great honor and the deep pleasure to write these two words (and one exclamation mark, which is the bare minimum):
It works!
What is more, I now understand why. But our hardest difficulties still lie ahead, and measuring the distance data isn’t the least of them. It is very likely the greatest, in fact, not to mention that the aforementioned algorithm demands that every module see every other, which can only happen if they all stand within 100 m of each other.
One step is still one step, and tomorrow shall bring the next!
Our greatest problem here is to find a way to determine, based only on ZigBee exchanges, first the approximate distances between our modules (spaced at least about one meter apart), then their positions relative to one another (though it’s perfectly clear that, with nothing more than distances, relative positions are only defined up to rotation and/or symmetry).
After one week spent rummaging through various studies announcing, with more or less enthusiasm, rather disheartening results (mostly a wide error range, the use of many anchors, and often total dysfunction indoors), I finally stumbled upon this one, where inaccuracy is countered by the use of many channels (all channels internationally offered by IEEE 802.15.4 and ZigBee, in fact). It comes down to an error of less than a meter, and even less than 30 cm, for distances of up to 5 m between nodes (if I remember well), which is quite exceptional.
While this precision would by itself satisfy our needs (though better would still be appreciated), the method isn’t perfect, since it uses 2n+1 anchors in an n-dimensional space. In our case no anchor should be used, so I am now trying to understand how all of this works (while previous studies seemed simplistic, this one is quite tough), and where the anchors intervene, to see whether we can do without them and at what cost. Right now, I still can’t figure out why the method is presented as based on IEEE 802.15.4 (and even ZigBee, if I’m not mistaken) while all I can see is either pure mathematics or pure RF, so I gather it will take some time for me to understand the whole thing.
In parallel, I have found, in the appendices of a survey, a heuristic to derive the relative positions of the modules from their pairwise distances. It seems quite simple (nothing harder than diagonalizing a real symmetric matrix comes into play), and not silly, until the last step, which seems quite dubious to me: since the method yields a vector far too wide for what it should mean, we simply throw away its least significant components. Before believing that this has any chance to work, I have to implement it and see for myself.
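For what it’s worth, the heuristic as described reads a lot like classical multidimensional scaling, where “throwing away the least significant components” is keeping only the largest eigenvalues. A minimal numpy sketch, assuming a complete and noise-free distance matrix:

```python
import numpy as np

def positions_from_distances(D, dim=2):
    """Classical MDS: recover coordinates, up to rotation/symmetry,
    from a matrix D of pairwise distances."""
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n      # centering matrix
    B = -0.5 * J @ (D ** 2) @ J              # double-centered Gram matrix
    w, v = np.linalg.eigh(B)                 # B is real symmetric
    idx = np.argsort(w)[::-1][:dim]          # keep the largest components,
    return v[:, idx] * np.sqrt(w[idx])       # discard the least significant

# Three modules at the corners of a 3-4-5 right triangle:
pts = np.array([[0, 0], [3, 0], [0, 4]], dtype=float)
D = np.linalg.norm(pts[:, None] - pts[None, :], axis=-1)
rec = positions_from_distances(D)
D_rec = np.linalg.norm(rec[:, None] - rec[None, :], axis=-1)
```

On exact Euclidean distances this recovers the layout perfectly; the real question is how it degrades with our noisy RSSI-derived distances.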
So there’s work to do… Well, that’s what we’re here for!
In order to locate themselves and to stream content between each other, our modules will make extensive use of ZigBee, so it is a critical parameter of our project. To determine what we will be able to achieve, I started doing some tests with the STM32 boards we used in class (thanks to Phh and Aurélien for lending me theirs).
The localization part will be done by measuring the signal attenuation between the modules. To access this data, the ZigBee module has to be switched to API mode through the ATAP command. The ZigBee module then no longer acts transparently as a serial line; instead, it receives and sends structured frames. Among the information contained in the RX frames, we can access the Received Signal Strength Indicator, in -dBm.
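As an illustration of what reading the RSSI involves, here is a sketch of parsing an XBee 802.15.4 API RX frame with 16-bit addressing (API identifier 0x81). The layout is: start delimiter 0x7E, 2-byte length, then the frame body (API ID, 2-byte source address, RSSI byte, options, data) and a checksum; the example frame below is made up:

```python
def parse_rx_frame(frame: bytes):
    assert frame[0] == 0x7E, "missing start delimiter"
    length = (frame[1] << 8) | frame[2]
    body = frame[3:3 + length]
    checksum = frame[3 + length]
    # Checksum rule: sum of body bytes plus checksum byte == 0xFF (mod 256).
    assert (sum(body) + checksum) & 0xFF == 0xFF, "bad checksum"
    assert body[0] == 0x81, "not a 16-bit-address RX frame"
    source = (body[1] << 8) | body[2]
    rssi_dbm = -body[3]        # the module reports -dBm as a positive byte
    return source, rssi_dbm, bytes(body[5:])

# Hypothetical frame: source 0x0001, RSSI -40 dBm, one data byte 0x42.
raw = bytes([0x7E, 0x00, 0x06, 0x81, 0x00, 0x01, 0x28, 0x00, 0x42, 0x13])
src, rssi, data = parse_rx_frame(raw)
```

On the STM32 the same parsing is done byte by byte as the UART delivers them, but the field offsets are identical.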
It appeared that this value does not remain constant even when the two ZigBee modules stay at the same distance from each other. To see the dispersion of these values, I used a ZigBee module from the robotics club, connected to the serial port of a computer, to log them, and I obtained the following histograms.
The green one shows the attenuation values when the two modules are in opposite corners of the room, and the blue one when they are almost touching.
The attenuation value is not constant, but taking the median or the average of the last values should give us a relevant figure.
I’m concerned because the range of values is not very wide: -23 to -50 dBm in this experiment. I’ve also observed that the RSSI decreases very fast with distance at first, and then much more slowly; I hope we’ll still get a decent resolution in spite of this, and I’ll try to plot an attenuation/distance curve in the next few days to have precise data.
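The fast-then-slow decrease matches the usual log-distance path-loss model, which could turn the smoothed readings into a rough distance. A sketch, where the 1 m reference RSSI and the path-loss exponent n are assumptions that the attenuation/distance curve would have to calibrate:

```python
import statistics

def distance_from_rssi(samples_dbm, rssi_at_1m=-30.0, n=2.5):
    # Log-distance path-loss model: RSSI(d) = RSSI(1 m) - 10*n*log10(d).
    rssi = statistics.median(samples_dbm)   # smooth the noisy readings
    return 10 ** ((rssi_at_1m - rssi) / (10 * n))

d = distance_from_rssi([-48, -51, -47, -50, -49])
```

The model also explains the resolution worry: on a log scale, the 27 dB span we observed has to cover the whole range of distances, so far-apart modules only differ by a few dB.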
We will also use ZigBee to stream audio content between the modules, and to that end we need to evaluate the maximum bitrate we can rely on. The theoretical value for ZigBee is 250 kbps; in reality we will certainly not reach it. For my tests I used two STM32 boards broadcasting frames containing 100 bytes of data (the maximum in API mode). On the software side, I simply used a timer that prints, each second, the number of frames received during the last second.
The results were very disappointing. When the two modules were broadcasting and listening simultaneously, I hardly reached 27 frames/s = 21.6 kbps, and when only one was broadcasting and the other listening, 68 frames/s = 54.4 kbps, which is still pretty far from the theoretical 250 kbps. I’ll try to improve these results in the next few days, so any ideas on what I could change are very welcome.
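For reference, the conversion from frame counts to payload bitrate (payload only, ignoring API-frame overhead):

```python
PAYLOAD_BYTES = 100  # maximum data payload per API frame in these tests

def payload_kbps(frames_per_second: int) -> float:
    # frames/s * bytes/frame * 8 bits/byte, expressed in kbps
    return frames_per_second * PAYLOAD_BYTES * 8 / 1000

duplex = payload_kbps(27)   # both modules broadcasting and listening
simplex = payload_kbps(68)  # a single broadcaster
```

Note that each frame also carries addressing and checksum bytes on the air, so the radio is actually a bit less idle than these payload figures suggest.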
Success is the ability to go from one failure to another with no loss of enthusiasm.
Sir Winston Churchill
After our last project’s failure (Miss you, DHEXTROSE!), we embarked on a brand new one, and here it is:
Audio modules able to synchronize so that the sound heard by one of them, or read from a (micro)SD card, propagates like a sound wave across space, always spreading to the nearest neighbours first. They shall also operate as a network of audio sensors performing observations.
We began today to consider which components our modules would comprise. Here is what we have settled on so far:
- a surface transducer
- an MP3 codec
- obviously, a microphone and a battery
We still have to choose a ZigBee module and a microcontroller (for instance an STM32). Since few calculations are needed, our choice will primarily be guided by battery-life considerations. I’m also told that introducing an amplifier between the codec and the speaker would be relevant.