Today, we added the yaw servomechanism to Copterix, and it responded pretty well: it is now able to descend slowly, like a falling leaf. Sorry, we don’t have any video this time, but you will soon understand why…
Our octocopter went through a lot of trouble:
When we switched to battery power, the motors turned out to be twice as powerful as when running on mains, and even though we lowered their commands a lot, the copter eventually crashed into the ceiling. That was of course not a real test environment (indoor flight is not exactly what a copter is made for), which is precisely why the accident occurred. It turned out that we only had to change a propeller, and our copter was back on track.
Since yesterday evening, Copterix had been failing to start properly. We first thought it was caused by the motor board’s safety mechanism, which shuts down all motors in case of a brutal startup, but late tonight we finally discovered, with our teacher’s help, that it was just a bad contact between one part of this board and the rest of it on the I2C bus, and we fixed it.
Afterwards, we performed a radio-frequency test and it went really well: we think we will be able to control thrust, roll and pitch by remote control after a bit of calibration.
Finally, we launched our website! We plan to add a lot of content about our project during the coming week, so keep checking back!
Today we replaced the oscilloscope with the real laser. As expected, it does not behave exactly the same way, but our first test with “The Riddle” is not so bad. You can see it in the video below; the image is small because we do not have much space in the classroom. You can also see that the image is blinking: this is partly due to our code, but also partly due to the camera sync, and the “real” result is cleaner than what can be seen here.
The new design suggested by our teachers has been implemented, except for the FIFO, which for the moment does not use the internal RAMs. This will normally be done tomorrow. The FIFO, as it stands now, sometimes leads to bugs that we do not fully understand yet. We will also investigate those issues tomorrow. Nevertheless, we have a functional design, the one we used tonight to make the video.
Tweet to ILDA
Concerning the way we will display tweets, Sam suggested that we implement a smooth horizontal scrolling. Our first idea was to generate one big ILDA image containing the whole tweet on a single line, and to clip it at display time, just before sending it to the laser. It turned out this was not the best approach. So, we are now trying to generate an ILDA animation corresponding to the scrolling with a Python script. We are making progress, and we have already identified a few points of interest to change in our design to make it work soon.
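To give an idea of what the animation generation involves, here is a minimal sketch (not our actual script): it assumes the tweet has already been turned into a wide set of 2D points, and produces one “frame” per scroll step by shifting the points and clipping them to the visible window. The function name, coordinate range and step size are illustrative choices, not values from our code.

```python
# Illustrative sketch: build scrolling frames from a wide one-line image
# represented as (x, y) points, by shifting left and clipping each frame
# to the displayable window (ILDA coordinates are signed 16-bit).

def make_scroll_frames(points, window=(-32768, 32767), step=1000):
    """points: list of (x, y) pairs covering the whole one-line text.
    Returns a list of frames; each frame contains the points visible
    in the window after shifting the text left by n*step units."""
    lo, hi = window
    width = max(x for x, _ in points) - lo
    frames = []
    shift = 0
    while shift <= width:
        frame = [(x - shift, y) for x, y in points
                 if lo <= x - shift <= hi]
        frames.append(frame)
        shift += step
    return frames

# Example: three points; as the text scrolls, they enter and leave view.
frames = make_scroll_frames([(0, 0), (40000, 100), (80000, -100)],
                            step=20000)
```

Each resulting frame would then be written out as one ILDA frame of the animation, so the laser side only has to play frames back without doing any clipping itself.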
The library we mentioned last time (libmicrohttpd) seems to fit our needs quite well. It is not a complete Web server, but it is enough to let the board serve a static HTML page and a little REST API to get tweets and validate them. Authentication is very basic for the moment: it consists of a “secret” token included as a segment of the URI. It is not very secure, but that is not our priority right now.
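To show what this token-as-URI-segment scheme looks like, here is a small Python sketch of the routing logic only; the real server is written in C with libmicrohttpd, and the paths, token and action names below are hypothetical.

```python
# Sketch of URI-token authentication: the first path segment must match a
# shared secret, the second selects the API action. Purely illustrative.

SECRET = "s3cr3t"  # placeholder, not a real token

def route(uri):
    """Return (status, action) for a request URI like /<token>/tweets."""
    parts = [p for p in uri.split("/") if p]
    if len(parts) < 2 or parts[0] != SECRET:
        return (403, None)           # wrong or missing token
    if parts[1] == "tweets":
        return (200, "list_tweets")  # fetch pending tweets
    if parts[1] == "validate":
        return (200, "validate")     # approve a tweet for display
    return (404, None)
```

The obvious weakness is that the token appears in every URL (and thus in logs), which is why we say it is not very secure; it is just enough to keep casual visitors out during development.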
As the projects become more mature, we are starting to receive requests from people who use Google Translate to understand the content of the posts. To ease communication, the site is switching to English for the rest of the class duration.
Samuel Tardieu | 2011-04-13 | REST | Category: Defense | Comments are closed
Here is a little summary of what has been done today regarding text-to-speech on the BeagleBoard.
Audio with alsa on the beagleboard
First, I would like to explain the steps we followed to get the audio output working on the BeagleBoard without damaging the TPS6595, which manages not only the audio but also the power supply (so now you understand why we should not burn this one down).
We have on our SD card a bootstrapped Ubuntu Linux distribution, with ALSA installed.
To get ALSA to work without being the superuser, you have to add the normal user to the audio group and reboot the BeagleBoard.
Then, open the alsamixer program.
Here is what you SHOULD NOT do, even though it is advised on some forums: enable each and every device in alsamixer.
This will cause the TPS6595 chip to overheat, and may damage it.
What you should do is enable only what is necessary:
Increase the volume of DAC2 analog, DAC2 digital coarse and DAC2 digital fine.
Increase the volume of the headset
Enable headsetL2 and headsetR2
You should now have a working audio output.
In order for our whole application to work properly on the board, we decided not to use PulseAudio (which requires up to 40% of the CPU on the board). Instead, we implemented our own interface for the audio output, which handles all the write requests from internal threads such as the text-to-speech engine’s thread. This interface stores the corresponding samples, pre-processes them to fit ALSA’s interleaved PCM format, and plays them on the audio output.
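The pre-processing step mentioned above can be sketched as follows. ALSA’s interleaved PCM layout expects frames as L0 R0 L1 R1 …, so per-channel sample buffers have to be merged before being written to the device. This is an illustrative Python sketch of that transformation, not our actual C interface, and the function name is made up.

```python
import struct

def interleave_s16le(left, right):
    """Merge two equal-length lists of 16-bit samples into interleaved
    little-endian PCM bytes (the layout ALSA expects for 2-channel S16_LE:
    L0 R0 L1 R1 ...)."""
    assert len(left) == len(right)
    frames = []
    for l, r in zip(left, right):
        # '<hh' packs two signed 16-bit integers, little-endian
        frames.append(struct.pack("<hh", l, r))
    return b"".join(frames)

# Two stereo frames: silence, then a +1000/-1000 sample pair.
data = interleave_s16le([0, 1000], [0, -1000])
```

In the real interface the same idea applies, except the buffer is then handed to ALSA’s PCM write call instead of being returned.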
We successfully tested this interface today by synthesizing speech with SVOX Pico on the BeagleBoard and playing it simultaneously on the audio output.
The whole process requires 30% of the CPU during a short period (synthesis and sample post-processing) and then 0–0.7% of the CPU during the rest of the process, which is good news compared to the 40% CPU minimum required during the whole process in our previous experiments.
The next step will be to port the CMU Sphinx speech-recognition hello world we designed to the BeagleBoard.