ELECINF344/381

Interactive part of the ELECINF344/ELECINF381 teaching website of Télécom ParisTech (2011 edition).


[CASPER] can fetch and store mails

We made some improvements to our mail fetcher. It is now able to fetch mail from most mail servers (POP3, secured or not). It can fetch the last mail received and store its content and subject in a file, so that it can be read by our text-to-speech engine.

Moreover, we got the mail fetcher working on the BeagleBoard.

As a reminder, we are using the C++ library Vmime 0.9.1.
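
As an illustration, here is a minimal sketch of how the most recent message of a POP3 mailbox can be fetched with VMime 0.9 so that its raw header (including the subject line) can be dumped; the calls are reproduced from memory and simplified, and the URL and credentials are placeholders, not our configuration.

```cpp
#include <iostream>

#include <vmime/vmime.hpp>
#include <vmime/platforms/posix/posixHandler.hpp>

int main()
{
    // VMime needs a platform handler before any network operation.
    vmime::platform::setHandler<vmime::platforms::posix::posixHandler>();

    // Placeholder account; "pop3s://" would be used for a secured server.
    vmime::utility::url url("pop3://user:password@mail.example.com");

    vmime::ref<vmime::net::session> session = vmime::create<vmime::net::session>();
    vmime::ref<vmime::net::store> store = session->getStore(url);
    store->connect();

    vmime::ref<vmime::net::folder> inbox = store->getDefaultFolder();
    inbox->open(vmime::net::folder::MODE_READ_ONLY);

    // Fetch only the header of the last (most recent) message.
    vmime::ref<vmime::net::message> msg = inbox->getMessage(inbox->getMessageCount());
    inbox->fetchMessage(msg, vmime::net::folder::FETCH_FULL_HEADER);

    // Dump the raw header; the "Subject:" line is what we store for Casper.
    std::cout << msg->getHeader()->generate() << std::endl;

    inbox->close(false);
    store->disconnect();
    return 0;
}
```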

2011 project defenses: Friday, April 29, in the afternoon

The ROSE 2011 project defenses will take place on Friday, April 29, 2011, from 1:30 pm to about 5:30 pm in the Télécom ParisTech student lounge. The defenses are open to the public; access is through 53 rue Vergniaud.

On the menu this year: an electric train, an octocopter, a laser that plays animations and picks up tweets during a party, a trunk robot that follows its master, an improved Segway clone, and new glips.


We hope to see many of you there!

[CASPER] has a new body

Yesterday, we started to prepare our robot’s external appearance for the final presentation.

During the afternoon, we replaced the robot's parts with new Plexiglas parts that we manufactured ourselves.

Below you can see the robot's arm assembled:

IRL is in its final rush towards the last presentation.

ILDA and Web app
Since the last post we have run a lot of tests in real conditions, namely with several programs running on the board. We found that translating a text into an ILDA animation is far too slow, which is why we decided to rely on a web app based on the Google App Engine API for Python. Yesterday we turned words into action and started writing the web app, along with a proxy on the board that queries the web app and, in case of failure, redirects the request to the slow embedded instance of the program.

We deployed the web app on appspot.com, and our first tests suggest that the service is faster by a factor of 30 to 60, depending on the load of the board. We also built a website presenting our RESTful API to the web app, and we will put it online as soon as possible.
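
To illustrate the idea of the proxy (this is not our actual code), here is a minimal sketch of the fallback logic; `query_webapp` and `render_locally` are hypothetical stand-ins for the HTTP request to the App Engine service and for the slow embedded renderer.

```cpp
#include <stdexcept>
#include <string>

// Hypothetical stand-in for the HTTP request to the web app;
// throws if the service cannot be reached or times out.
std::string query_webapp(const std::string & /*text*/)
{
    throw std::runtime_error("web app unreachable");  // stub for the example
}

// Hypothetical stand-in for the slow embedded text-to-ILDA renderer.
std::string render_locally(const std::string &text)
{
    return "<ILDA animation for: " + text + ">";      // stub for the example
}

// The proxy tries the fast remote service first and falls back to the
// embedded instance when the web app cannot be reached.
std::string text_to_ilda(const std::string &text)
{
    try {
        return query_webapp(text);
    } catch (const std::exception &) {
        return render_locally(text);
    }
}
```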

Website
As far as the website is concerned, we want to introduce our brand-new website. Feel free to comment or share any kind of feedback.

FPGA
We hunted down some bugs in our code; it now works better, and we don't intend to make any further fixes to it before the presentation.

DMX and additional board
We have contacted TSM and we will be able to try our code with real equipment.
We can communicate with our additional board via Zigbee. We now have to connect this feature to the other parts of the project with 0MQ.
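
For example, here is a minimal sketch of how the Zigbee bridge could publish events to the other modules over 0MQ; the socket type, endpoint and message format are assumptions, not our actual wiring.

```cpp
#include <cstring>
#include <string>

#include <zmq.hpp>

int main()
{
    zmq::context_t context(1);

    // The "driver/conductor/leader" module would SUBscribe to this socket.
    zmq::socket_t publisher(context, ZMQ_PUB);
    publisher.bind("tcp://*:5556");                     // placeholder endpoint

    // Forward an event received from the additional board over Zigbee.
    const std::string event = "board1 button pressed";  // placeholder payload
    zmq::message_t msg(event.size());
    std::memcpy(msg.data(), event.data(), event.size());
    publisher.send(msg);

    return 0;
}
```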

Software FIFO
Our software FIFO works; we are putting all the pieces together to build our "driver/conductor/leader" module, which will manage all the features of the project.
Today we'll glue the pieces together: it's a milestone!
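
For reference, a software FIFO of this kind is usually a fixed-size ring buffer; a minimal single-producer/single-consumer sketch (not our actual implementation) could look like this:

```cpp
#include <cstddef>
#include <stdint.h>

// Minimal ring-buffer FIFO sketch (single producer, single consumer).
// SIZE must be a power of two so that indices can wrap with a simple mask.
template <std::size_t SIZE>
class Fifo
{
public:
    Fifo() : head_(0), tail_(0) {}

    bool push(uint8_t byte)                  // producer side
    {
        if (count() == SIZE)
            return false;                    // full
        buffer_[head_++ & (SIZE - 1)] = byte;
        return true;
    }

    bool pop(uint8_t &byte)                  // consumer side
    {
        if (count() == 0)
            return false;                    // empty
        byte = buffer_[tail_++ & (SIZE - 1)];
        return true;
    }

    std::size_t count() const { return head_ - tail_; }

private:
    uint8_t buffer_[SIZE];
    std::size_t head_;
    std::size_t tail_;
};
```

Such a FIFO decouples the modules that produce events from the "driver/conductor/leader" module that consumes them.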

TSV Safe Express: Demystifying Conundrums

During the last two days, we have faced a lot of problems with the track sensors. Two types of interrupts were causing us trouble:
1. Bouncing interrupts (which need debouncing)
2. Fake interrupts

Problem: we have configured the STM32 GPIO ports with internal pull-ups. All our sensor cards are powered by the Central Station card through the CAN bus. We observed that some kind of parasitic signal coming from the booster towards our Central Station produces a glitch in the power delivered to the nodes (the sensor cards in this case). Because of this power glitch, the GPIO ports that were pulled high go low, and fake interrupts are produced. Thanks to Alexis, who stayed until 5 in the morning to help us debug the problem.

The first problem was solved by making sure that an interrupt from the sensors (one that is not fake) is processed only if at least 100 ms have elapsed since the last accepted one. For the second one, which is trickier, we use a simple method (a FreeRTOS task) that checks for glitches. This method was proposed by Samuel, and we thank him for assisting with our software.
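
Here is a rough sketch of how the two workarounds can combine, assuming the interrupt handler only flags the event and a FreeRTOS task does both the 100 ms check and the glitch check; pin numbering and helper functions are placeholders, not our actual code.

```cpp
#include "FreeRTOS.h"
#include "task.h"

#define NUM_SENSORS     16
#define DEBOUNCE_TICKS  (100 / portTICK_RATE_MS)    /* 100 ms dead time */

/* Placeholder for the real GPIO read of one reed-sensor input
 * (returns 0 when the pulled-up line has been driven low). */
extern int sensor_read(int pin);

/* Placeholder for the real notification sent towards the server. */
extern void sensor_report(int pin);

static volatile int pending[NUM_SENSORS];           /* set from the EXTI ISR */
static portTickType last_accepted[NUM_SENSORS];

/* Glitch-filter / debounce task: an event is reported only if the pin is
 * still low when re-read (a power glitch has already vanished) and if at
 * least 100 ms have elapsed since the last accepted event on this pin. */
static void sensor_filter_task(void *arg)
{
    (void)arg;
    for (;;) {
        for (int pin = 0; pin < NUM_SENSORS; pin++) {
            if (!pending[pin])
                continue;
            pending[pin] = 0;

            portTickType now = xTaskGetTickCount();
            if (sensor_read(pin) == 0 &&
                (now - last_accepted[pin]) >= DEBOUNCE_TICKS) {
                last_accepted[pin] = now;
                sensor_report(pin);
            }
        }
        vTaskDelay(10 / portTICK_RATE_MS);           /* poll every 10 ms */
    }
}
```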

Right now, we are mapping the reed sensors and lights to the XML layout of the track. This is required because the central station needs to talk to the server and provide it with sensor input.

Wireless RoseWheel

After a long night of debugging, we finally managed to get the Bluetooth working. We are now able to send instructions to and receive information from RoseWheel through the Bluetooth interface, using minicom on a laptop. We are also able to pair with an Android smartphone, but we haven't been able to verify the data transmission yet. We are still working on the Android application; hopefully tomorrow we will be able to start our first tests of wireless communication with RoseWheel.

We also tested the range of the Bluetooth link with the transmitter inside the chassis, and we received the signal from a distance of approximately 10 meters with two walls in between.

We started the implementation of the human-interface board. We have found a big LCD driven by the same controller as the one on our laboratory boards. We plan to use it to display information to the user directly on the handlebars, such as motor power, battery level and the current angle measured by the sensors. As we are going to mount the LCD vertically, we had to write a new character set designed to be displayed in that orientation.
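
As an example of the idea, rotating a classic 5x8 column-major font glyph by 90 degrees amounts to transposing its bitmap; the glyph format below is an assumption for illustration, not the exact layout used by our controller.

```cpp
#include <stdint.h>

// Rotate a 5x8 font glyph stored column-major (one byte per column,
// bit 0 = top pixel) by 90 degrees, producing 8 bytes of 5 useful bits.
// Whether this is clockwise or counter-clockwise depends on how the
// display is scanned; the format itself is an assumption.
void rotate_glyph(const uint8_t in[5], uint8_t out[8])
{
    for (int row = 0; row < 8; row++) {
        uint8_t column = 0;
        for (int col = 0; col < 5; col++) {
            if (in[col] & (1u << row))
                column |= 1u << (4 - col);
        }
        out[row] = column;
    }
}
```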

Copterix: Flight in the complex

Today we went to the "complex", a more spacious room than the one where we usually do our tests. The tests were quite conclusive. We managed a few "landings" (the copter was caught by Axel near the floor because of the turbulence caused by the proximity of this obstacle). With more power to the motors, we kept the copter in the air longer, about 10 to 20 seconds (sadly, it wasn't filmed).

We also used the remote control to adjust the thrust and the setpoint of the PID. It will need some software adjustments, since it is still too abrupt and rather hard to use.

Tomorrow, we’ll restore height control in our PID and see how much the copter drifts. This drift shall be corrected with the remote.

Copterix: some reflexions about Kanade

After spending hours focusing on our PID in the Télécom ParisTech dance room, we moved to the Télécom robotics club room to work on our implementation of the Lucas-Kanade algorithm, because it has a giant green-and-red chessboard, perfect for image processing. We suspended Copterix in the air with strings and started testing, with and without thresholding, and it wasn't really efficient…
We thought for a moment about using OpenCV's undistortion ('unfisheye') algorithm (see the attached photo to understand why we wanted to use it), but it took 0.6 seconds per frame on our Gumstix…

What our camera actually sees: we can observe a 'fisheye' effect

Then we stopped and decided we should determine exactly how, even if it worked, Lucas-Kanade would help us correct Copterix's drift.
Since we only get 10 frames per second when running the algorithm, it will be really difficult to determine in real time whether we have actually corrected the drift at the moment we want to correct it. That is why we came up with the following algorithm (a code sketch of the image-processing part follows the list):

  1. wait for Copterix to be horizontal (pitch and yaw < epsilon)
  2. take a reference picture and search for the good features to track on it (quite a heavy operation)
  3. while Copterix is horizontal (pitch and yaw < epsilon):
  4.     take a picture and compare it with the reference
  5.     if not horizontal (pitch and yaw >= epsilon):
  6.         wait for Copterix to be horizontal (pitch and yaw < epsilon)
  7.         go to 3.
  8.     if the drift is big enough (move size > threshold):
  9.         go to 12.
  10.     if we don't find enough features to track:
  11.         go to 1.
  12. ask the PID to tilt Copterix in the right direction (we have to decide an angle)
  13. wait a chosen time (we have to be very precise about this value)
  14. ask the PID to set Copterix horizontal again
  15. go to 1.
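
As an illustration of steps 2 to 4 (not our actual implementation), the image-processing part could look roughly like this with OpenCV; frame acquisition and the link with the PID are left out, and in the real loop the features of the reference image would of course be computed only once.

```cpp
#include <cstddef>
#include <vector>

#include <opencv2/opencv.hpp>

// Estimate the drift (mean feature displacement, in pixels) between a
// grayscale reference frame and the current frame, using good features
// to track and pyramidal Lucas-Kanade optical flow.
cv::Point2f estimate_drift(const cv::Mat &reference, const cv::Mat &current)
{
    std::vector<cv::Point2f> ref_pts, cur_pts;
    std::vector<unsigned char> status;
    std::vector<float> err;

    // Step 2: pick the good features to track on the reference image.
    cv::goodFeaturesToTrack(reference, ref_pts, 100 /* max corners */,
                            0.01 /* quality level */, 10 /* min distance */);
    if (ref_pts.empty())
        return cv::Point2f(0.f, 0.f);        // step 10: not enough features

    // Step 4: find where those features moved in the current image.
    cv::calcOpticalFlowPyrLK(reference, current, ref_pts, cur_pts, status, err);

    // Average the displacement of the features that were actually found;
    // its norm is the "move size" compared with the threshold in step 8.
    cv::Point2f drift(0.f, 0.f);
    int found = 0;
    for (std::size_t i = 0; i < ref_pts.size(); i++) {
        if (!status[i])
            continue;
        drift.x += cur_pts[i].x - ref_pts[i].x;
        drift.y += cur_pts[i].y - ref_pts[i].y;
        found++;
    }
    if (found > 0) {
        drift.x /= found;
        drift.y /= found;
    }
    return drift;
}
```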

The real difficulty of this algorithm is that it assumes we have a powerful and precise PID, able to keep the copter horizontal most of the time and to follow an order as well and as fast as possible, which is of course not true.

That is why we are now considering that we may not have time to build an excellent PID AND implement such a precise algorithm; we will talk about it tomorrow with our teachers.

RoseWheel now detects obstacles.

Yesterday we finished implementing the drivers for both distance sensors, the infrared one and the ultrasonic one. They actually work better than expected, as their detection cones are a little narrower than expected.

In the end, RoseWheel will come with two infrared sensors and one ultrasonic sensor. The infrared sensors will be used to detect falls and ravines, whereas the ultrasonic sensor will be used to detect obstacles.

Since it's not possible to link the sensors to the mainboard with a wire longer than 15 cm, we will not be able to place them on top of the handlebars to detect ravines as we had previously planned. But given that the detection cone of the infrared sensor is really narrower than expected, we concluded that we will have a sufficient range to detect ravines with the sensors placed at the base of the chassis.

Indeed, we need a detection range that is long enough because, even if it sounds a bit counter-intuitive, in order to stop RoseWheel first needs to accelerate, so that it can bring its center of gravity back above the wheel axis and then stop.

For obstacle detection, we will use the ultrasonic sensor placed right in the middle of the chassis, with a slight upward tilt so that it doesn't take the floor for an obstacle. As its detection range goes up to 4 meters, it will also report distant obstacles that aren't actually on RoseWheel's path. Thus, we only have to react to obstacles that are closer than a given threshold. We haven't defined this threshold yet, but we will have a clearer idea during the tests.
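
A trivial sketch of that filtering (the threshold value and the driver function are placeholders, to be tuned and replaced):

```cpp
// Placeholder for the real ultrasonic driver: distance in centimeters,
// negative when no echo is received.
extern float ultrasonic_read_cm(void);

// Hypothetical threshold, to be tuned during the tests: anything farther
// away is ignored even though the sensor can see up to 4 m.
static const float OBSTACLE_THRESHOLD_CM = 150.0f;

// Returns true only for obstacles close enough to be on RoseWheel's path.
bool obstacle_ahead(void)
{
    float d = ultrasonic_read_cm();
    return d >= 0.0f && d < OBSTACLE_THRESHOLD_CM;
}
```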

To be continued.


Copterix: Lucas and Kanade

Today we began writing some articles for the website, and we mainly worked on the Lucas-Kanade algorithm. We ran some tests: we get good results when the camera is filming the ceiling, but bad results when it is pointed at the floor. Here are some illustrations.

These pictures show the evolution of the helicopter's movement (x and y). The first picture was taken while the camera was watching the floor, with the copter held by Loïc. We cannot distinguish a meaningful evolution of the motion.

For the second picture, the camera was pointed at the ceiling. The copter was placed on a table that we could move. The first peak represents a slow movement in one direction followed by a fast movement in the opposite direction. The second peak represents the same experiment with some manual oscillations added.

Tomorrow, in order to solve these problems, we will first apply a low-pass filter, and we will try using a checkerboard on the floor with good lighting.
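
For the low-pass filter, a simple first-order (exponential) filter on the estimated displacement is one option we are considering; the smoothing factor below is an arbitrary example value.

```cpp
// First-order (exponential) low-pass filter for the displacement estimate.
// alpha close to 0 smooths a lot; alpha close to 1 barely filters at all.
struct LowPass
{
    float value;
    bool  initialized;

    LowPass() : value(0.f), initialized(false) {}

    float update(float sample, float alpha = 0.2f)
    {
        if (!initialized) {
            value = sample;
            initialized = true;
        } else {
            value = alpha * sample + (1.f - alpha) * value;
        }
        return value;
    }
};
```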