ELECINF344/381

Interactive part of the ELECINF344/ELECINF381 course website of Télécom ParisTech (2011 edition).


TSV Safe Express: Big time troubleshooting.

Today we faced numerous problems. The Ethernet module causes a crash in our central station and we still haven’t figured out the solution. We also tried to connect all the cards (6 sensor cards and 6 traffic-light cards) to our main station, and at first they failed to join the network. As Alexis advised, we lowered the baud rate of the CAN bus, and that problem is now solved: all cards join the network, and the CAN bus now runs at 125 kbit/s.
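For reference, the arithmetic behind that figure looks like the sketch below; the 36 MHz peripheral clock and the 16-time-quanta bit layout are illustrative assumptions, not necessarily our exact configuration.

    /* CAN bit-timing sketch. Assumptions: 36 MHz clock feeding bxCAN, 16 time quanta per bit. */
    #define CAN_CLK_HZ    36000000u   /* hypothetical APB1 clock */
    #define CAN_BITRATE   125000u     /* target: 125 kbit/s */
    #define TQ_PER_BIT    16u         /* 1 (sync) + 13 (bit segment 1) + 2 (bit segment 2) */

    /* 36 MHz / (125 kbit/s * 16 tq) = 18, so each time quantum lasts 0.5 us, one bit
     * lasts 8 us, and the sample point sits at (1 + 13) / 16 = 87.5 % of the bit. */
    #define CAN_PRESCALER (CAN_CLK_HZ / (CAN_BITRATE * TQ_PER_BIT))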

Regarding the communication between the server and the train, we were able to track the position of the train and send it to the server. Our goal now is to control the lights and the train remotely, and we are working on it.

[CASPER] can fetch and store mails

We made some improvements to our mail fetcher. It is now able to fetch mail from most mail servers (POP3, secured or not). It can fetch the last mail received and store its content and subject in a file, so that it can be read by our text-to-speech engine.

Moreover, we got the mail fetcher working on the BeagleBoard.

As a reminder, we are using the C++ library Vmime 0.9.1.
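For the record, the fetching step roughly follows the VMime example code; the snippet below is only a sketch written from memory (account URL, output path, error handling, platform handler setup and TLS certificate verification are omitted or invented), so details may differ from the exact 0.9.1 API.

    #include <vmime/vmime.hpp>
    #include <fstream>

    // Sketch: fetch the last message of a POP3 mailbox and dump it to a file so
    // that the text-to-speech engine can read it. Account and path are invented.
    void fetch_last_mail()
    {
        vmime::ref<vmime::net::session> sess = vmime::create<vmime::net::session>();
        vmime::utility::url url("pop3://user:password@pop.example.com");

        vmime::ref<vmime::net::store> store = sess->getStore(url);
        store->connect();

        vmime::ref<vmime::net::folder> inbox = store->getDefaultFolder();
        inbox->open(vmime::net::folder::MODE_READ_ONLY);

        const int count = inbox->getMessageCount();
        vmime::ref<vmime::net::message> msg = inbox->getMessage(count);   // last mail received

        std::ofstream out("/tmp/casper_last_mail.txt");
        vmime::utility::outputStreamAdapter osa(out);
        msg->extract(osa);            // raw message; subject and body are parsed from this dump

        inbox->close(false);
        store->disconnect();
    }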

[CASPER] has a new body

Yesterday, we started to prepare our robot’s external appearance for the final presentation.

We replaced the robot’s parts with new plexiglas ones that we manufactured ourselves during the afternoon.

You can see the robot’s arm assembled below:

IRL is in its final rush towards the last presentation.

ILDA and Web app
Since the last post we have done a lot of tests in real conditions, namely with several programs running on the board. We found that translating a text into an ILDA animation is far too slow on the board, so we decided to rely on a web app based on the Google App Engine API for Python. Yesterday we turned words into action and started to create the web app, along with a proxy on the board that queries the web app or, in case of failure, redirects the request to the slow embedded instance of the program. We deployed the web app on appspot.com and our first tests suggest that the service is faster by a factor of 30 to 60, depending on the load of the board. We have also built a website to present the RESTful API of that web app and we will put it online as soon as possible.
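Whatever language the proxy ends up being written in, its decision logic is roughly the following; here is a C++/libcurl sketch of it (the endpoint, the timeout and the local_text_to_ilda() fallback are invented for the illustration, and the text would need proper URL encoding in a real version).

    #include <curl/curl.h>
    #include <string>

    // Hypothetical local fallback: the slow embedded text-to-ILDA converter.
    std::string local_text_to_ilda(const std::string& text);

    static size_t append_cb(char* data, size_t size, size_t nmemb, void* userp)
    {
        static_cast<std::string*>(userp)->append(data, size * nmemb);
        return size * nmemb;
    }

    // Ask the web app first; on any failure (network down, timeout, bad HTTP status),
    // fall back to the slow embedded instance running on the board.
    std::string text_to_ilda(const std::string& text)
    {
        std::string result;
        CURL* curl = curl_easy_init();
        if (curl) {
            const std::string url = "http://our-app.appspot.com/ilda?text=" + text;  // invented endpoint
            curl_easy_setopt(curl, CURLOPT_URL, url.c_str());
            curl_easy_setopt(curl, CURLOPT_TIMEOUT, 5L);
            curl_easy_setopt(curl, CURLOPT_WRITEFUNCTION, append_cb);
            curl_easy_setopt(curl, CURLOPT_WRITEDATA, &result);
            const CURLcode rc = curl_easy_perform(curl);
            long status = 0;
            curl_easy_getinfo(curl, CURLINFO_RESPONSE_CODE, &status);
            curl_easy_cleanup(curl);
            if (rc == CURLE_OK && status == 200 && !result.empty())
                return result;                   // fast path: the web app answered in time
        }
        return local_text_to_ilda(text);         // slow path: embedded converter on the board
    }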

Website
As far as the website is concerned, we would like to introduce our brand new website. Feel free to comment or share any kind of feedback.

FPGA
We hunted down a few bugs in our code; it now works better and we do not intend to make any further changes to it before the presentation.

DMX and additional board
We have contacted TSM and we will be able to try our code with real equipment.
We can communicate with our additional board via Zigbee. We now have to connect this feature to the other parts of the project using 0MQ.
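As a first idea of that glue, a small bridge could forward every frame coming from the Zigbee link to the rest of the project over a 0MQ PUSH socket; the endpoint name and read_zigbee_frame() below are invented for the sketch.

    #include <zmq.hpp>
    #include <cstring>
    #include <string>

    // Hypothetical blocking read of one frame from the Zigbee serial link.
    std::string read_zigbee_frame();

    int main()
    {
        zmq::context_t ctx(1);
        zmq::socket_t  out(ctx, ZMQ_PUSH);
        out.connect("ipc:///tmp/irl-dmx");               // invented endpoint of the central module

        for (;;) {
            const std::string frame = read_zigbee_frame();
            zmq::message_t msg(frame.size());
            memcpy(msg.data(), frame.data(), frame.size());
            out.send(msg);                               // hand the frame over to the other parts
        }
    }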

Software FIFO
Our software FIFO works, and we are putting all the pieces together to build our « driver/conductor/leader » module, which will manage all the features of the project.
Today we will glue the pieces together: it’s a milestone!

TSV Safe Express: Demystifying Conundrums

During the last two days, we have faced a lot of problems with the track sensors. Two types of interrupts were causing us trouble:
1. Bouncing interrupts (contact bounce)
2. Fake interrupts

Problem: we have configured the STM32 GPIO ports with internal pull-ups. All our sensor cards are powered by the Central Station card through the CAN bus. We observed that some kind of parasitic signal comes from the booster towards our Central Station and produces a glitch in the power delivered to the nodes (the sensor cards in this case). Because of this power glitch, GPIO ports that were pulled high go low, and fake interrupts are produced. Thanks to Alexis, who stayed until 5 in the morning to help us debug the problem.

The first one was solved by making sure that any (non-fake) interrupt from the sensors is processed only if at least 100 ms have elapsed since the previous one. For the second, trickier one, we use a simple method (a FreeRTOS task) that checks for glitches. This method was proposed by Samuel, and we thank him for assisting us with our software.
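For the curious, the 100 ms guard boils down to a tick-count comparison before an event is accepted; the sketch below assumes a 1 ms FreeRTOS tick and invented names (queue creation and Samuel's glitch-check task are only hinted at in comments).

    #include "FreeRTOS.h"
    #include "task.h"
    #include "queue.h"

    #define NB_SENSORS          8          /* invented */
    #define SENSOR_DEADTIME_MS  100

    /* Created elsewhere with xQueueCreate(); filled with a sensor id by the EXTI handler. */
    static xQueueHandle sensor_queue;

    static void sensor_task(void *arg)
    {
        static portTickType last_accepted[NB_SENSORS];
        int id;
        (void)arg;
        for (;;) {
            if (xQueueReceive(sensor_queue, &id, portMAX_DELAY) != pdTRUE)
                continue;
            const portTickType now = xTaskGetTickCount();
            if ((now - last_accepted[id]) < SENSOR_DEADTIME_MS / portTICK_RATE_MS)
                continue;                  /* contact bounce: drop it */
            last_accepted[id] = now;
            /* Samuel's glitch-check task runs beside this one and filters out the
             * events caused by power glitches coming from the booster (not shown). */
            /* otherwise, report the sensor event on the CAN bus here */
        }
    }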

Right now, we are mapping all the reed sensors and lights to the XML layout of the track. This is required because the central station needs to report the sensor inputs to the server.
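The mapping itself is little more than a lookup table; as an illustration only (tags, ids and pin numbers are invented), each reed-sensor input is tied to the identifier used in the layout file:

    /* Illustration only: tie each sensor input to the id used in the XML layout,
     * e.g. something like <sensor id="S3" block="B1"/> in the layout file. */
    struct sensor_map {
        int         gpio_pin;    /* input pin on the sensor card        */
        const char *layout_id;   /* id of the element in the XML layout */
    };

    static const struct sensor_map reed_sensors[] = {
        { 0, "S1" },
        { 1, "S2" },
        { 2, "S3" },
    };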

Copterix: some reflexions about Kanade

After having spent hours tuning our PID in Télécom ParisTech’s dance room, we moved to Télécom Robotics’ room to work on our implementation of the Lucas and Kanade algorithm, because it has a giant green and red chessboard, perfect for image processing. We hung Copterix in the air with strings and started tests, with and without thresholding, and it wasn’t really efficient…
We thought for a moment about using OpenCV’s ‘unfisheye’ (undistortion) algorithm (see the attached photo to understand why we wanted to use it), but it took 0.6 seconds per frame on our Gumstix…

What our camera actually sees: a 'fisheye' effect is clearly visible.

Then we stopped and decided that we should work out exactly how, even if it worked well, Lucas and Kanade would help us correct Copterix’s drift.
Since we only get 10 frames per second out of the algorithm, it will be really difficult to determine in real time whether we have actually corrected the drift at the moment we want to correct it, which is why we came up with the following algorithm:

  1. wait for Copterix to be horizontal (pitch and yaw < epsilon)
  2. take a reference picture and search the good features to track on it (quite heavy operation)
  3. while Copterix is horizontal (pitch and yaw < epsilon)
  4. ____take a picture, compare it with reference
  5. ____if not horizontal (pitch and yaw >= epsilon)
  6. ______wait for Copterix to be horizontal (pitch and yaw < epsilon)
  7. ______go to 3.
  8. ____if drift is big enough (move size > threshold)
  9. ______go to 12.
  10. ____if we don’t find enough features to track
  11. ______go to 1.
  12. ask the PID to tilt Copterix in the right direction (we still have to choose an angle)
  13. wait for a chosen time (this duration has to be chosen very precisely)
  14. ask PID to set Copterix horizontal
  15. go to 1.

The real difficulty of this algorithm is that it assumes we have a powerful and precise PID, able to keep the copter horizontal most of the time and to follow an order as well and as fast as possible, which is of course not true.

That is why we are now considering that we may not have enough time to get an excellent PID AND implement such a precise algorithm; we will talk about it tomorrow with our teachers.
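To fix ideas, steps 2, 4 and 8 of the loop above map onto OpenCV calls roughly as follows; the feature count, quality threshold and the 10-feature cut-off are placeholders, not tuned values.

    #include <opencv2/opencv.hpp>
    #include <vector>

    // Step 2 (done once, the heavy part):
    //   cv::goodFeaturesToTrack(ref_gray, ref_pts, 50, 0.01, 10);
    // Steps 4 and 8 (every frame): track those features and measure the average drift.
    bool measure_drift(const cv::Mat& ref_gray, const cv::Mat& cur_gray,
                       const std::vector<cv::Point2f>& ref_pts, cv::Point2f& drift)
    {
        if (ref_pts.empty())
            return false;

        std::vector<cv::Point2f> cur_pts;
        std::vector<unsigned char> status;
        std::vector<float> err;
        cv::calcOpticalFlowPyrLK(ref_gray, cur_gray, ref_pts, cur_pts, status, err);

        cv::Point2f sum(0.f, 0.f);
        int n = 0;
        for (size_t i = 0; i < status.size(); ++i)
            if (status[i]) { sum += cur_pts[i] - ref_pts[i]; ++n; }
        if (n < 10)
            return false;                          // step 10: not enough features were kept
        drift = sum * (1.0f / n);                  // average displacement, in pixels
        return true;                               // caller compares its norm to the threshold (step 8)
    }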

MB Led: Easter Saturday in A405

Today, we continued our recent work: I worked on the algorithm and on the rotation of a block. The rotation works quite well now. The algorithm for this rotation is simple: when the communication between two blocks stops (because a block has left), both blocks save their « interfaces » (aliases of the UARTs). When a PING arrives from a block, we check the saved interfaces to see whether that block used to be a « friend ». If it is the case, we simply compute the rotation of the block.
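In pseudo-real code, one plausible reading of that algorithm looks like this; structures and names are invented for the illustration, and the actual firmware certainly differs.

    /* Sketch with invented names. 'iface' is the index of the UART a message
     * arrives on: 0 = north, 1 = east, 2 = south, 3 = west. */
    struct lost_neighbour {
        int block_id;     /* id of the block whose link was lost   */
        int iface;        /* UART it was connected on at that time */
    };
    static struct lost_neighbour saved[4];

    /* Called when a PING from 'block_id' arrives on UART 'iface'. If the block
     * was a former "friend", return its rotation in 90-degree steps, else -1. */
    int rotation_from_ping(int block_id, int iface)
    {
        for (int i = 0; i < 4; i++)
            if (saved[i].block_id == block_id)
                return ((iface - saved[i].iface) + 4) % 4;
        return -1;
    }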

The main problem is that a block can communicate with 4 other blocks at the same time, so we sometimes have to wait a little before we can compare the interfaces.

Cédric noticed a bug in the IrDA functions in the afternoon and we fixed it. The IrDA is now more reliable and there are few problems left in the network.

I still have problems when two networks, each with more than one block involved in the « deal », meet, but things are getting better every day.

Benjamin worked on the launcher task. He is thinking of a remote block which could control the animation/game that will be played afterwards.

Cédric continued working on the firmware transmission; the protocol is now functional at a good speed, but he is facing problems with flash writing.

Here is a little video of the synchronization (running on the GLiPs, but the MB Leds are arriving soon):

Every block knows its (i, j) position and the minI, minJ, maxI, maxJ of the network. The block in the north-west corner shows an M, the north-east one a B, the south-west one LE and the south-east one D. If a block is alone, it shows an M too.
That is why there are sometimes two M’s in the video: one of the blocks ran into problems and reset itself.

For the synchronization, the leader broadcasts the time every 20 PINGs (approximately every second); when a block receives it, it passes it on to its neighbours, and so on.
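A minimal sketch of that propagation (helper names and the exact message layout are invented):

    #include <stdint.h>

    uint32_t get_local_time(void);                              /* invented helpers */
    void set_local_time(uint32_t t);
    void broadcast_time(uint32_t t);
    void forward_time_to_neighbours(uint32_t t, int from_iface);

    /* Leader side: piggy-back the network time on every 20th PING (about 1 s). */
    static int ping_count = 0;
    void on_ping_sent(void)
    {
        if (++ping_count >= 20) {
            ping_count = 0;
            broadcast_time(get_local_time());
        }
    }

    /* Any other block: adopt the received time and pass it on to its neighbours. */
    void on_time_received(uint32_t network_time, int from_iface)
    {
        set_local_time(network_time);
        forward_time_to_neighbours(network_time, from_iface);   /* do not echo it back */
    }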

 

 

[CASPER] Website is online !

You can now visit our website! For now, a presentation of the project and some videos are available, as well as an awesome logo made by Thibault!

http://caspertherobot.blogspot.com/

TSV Safe Express: Traffic lights set up and ready to go.

After stabilizing the NMRAnet CAN bus, at least as far as the traffic lights are concerned, we connected all the traffic lights of the circuit to the cards and verified that this part of the project is indeed finalized. The 6 traffic-light cards successfully respond to the events assigned by the central station and light their LEDs accordingly. What remains for this part of the project is their mechanical placement and installation.

Currently we are focusing mainly on the development of the sensor cards, and we hope that it won’t take much time. We got our first interrupts from the train passing over the reed sensors. Hopefully, by the end of the weekend we will be able to wrap everything up and start communicating with Samuel’s application.

Here is a video of Siwar explaining our work up to this point.

IRL : Settings !

In our last article we described some ways to make a scrolling text display more smoothly.
We tried several combinations of settings for the scrolling text and finally found a convenient one (as shown in the video above).

We made some major improvements to the speed of the code on the board, speeding it up by a factor of 5. We found that a major bottleneck of the program lies in the memory access time: indeed, it takes more than 20 seconds to write the ILDA file of a tweet from RAM to flash. Samuel suggested that we use a socket to send the ILDA file directly from the Python script to the C program in charge of the FPGA communication. With such a strategy we would avoid the slow flash access and be able to send a tweet to the laser about 30 seconds after its validation.
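On the C side of that socket, the idea is simply to read the whole ILDA file into RAM instead of going through the flash; a minimal sketch, where the port is arbitrary and error handling is omitted:

    #include <sys/socket.h>
    #include <netinet/in.h>
    #include <unistd.h>
    #include <cstring>

    /* Receive one ILDA file sent by the Python script over a local TCP socket,
     * straight into RAM. */
    ssize_t receive_ilda(char *buf, size_t max_len)
    {
        int srv = socket(AF_INET, SOCK_STREAM, 0);
        struct sockaddr_in addr;
        memset(&addr, 0, sizeof(addr));
        addr.sin_family      = AF_INET;
        addr.sin_addr.s_addr = htonl(INADDR_LOOPBACK);    /* local connections only */
        addr.sin_port        = htons(5555);               /* arbitrary port */
        bind(srv, (struct sockaddr *)&addr, sizeof(addr));
        listen(srv, 1);

        int cli = accept(srv, NULL, NULL);
        ssize_t total = 0, n;
        while ((n = read(cli, buf + total, max_len - total)) > 0)
            total += n;                                   /* the whole file ends up in RAM */
        close(cli);
        close(srv);
        return total;
    }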

In addition, we are going to pad the frames so that they are all equally populated in terms of points, in order to ensure a constant frame rate. You can easily notice that disturbing effect in the video above.
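The padding itself can be as simple as repeating the last point of every frame until all frames carry the same number of points; the target count below is an arbitrary value for the sketch.

    #include <vector>
    #include <cstddef>

    struct ilda_point { short x, y; unsigned char colour; bool blanked; };

    static const std::size_t POINTS_PER_FRAME = 600;   /* arbitrary target */

    /* Pad a frame by repeating its last point so that every frame takes the same
     * time to display, hence a constant frame rate on the laser. */
    void pad_frame(std::vector<ilda_point>& frame)
    {
        if (frame.empty())
            return;
        while (frame.size() < POINTS_PER_FRAME)
            frame.push_back(frame.back());              /* the laser simply dwells on that point */
    }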

In parallel, we are getting familiar with the DMX protocol and are working on the DMX board.
We haven’t forgotten the software FIFO between the C program and the FPGA. We think a ZeroMQ socket would be an interesting solution and we are currently working on it.
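A 0MQ socket cannot talk to the FPGA itself, of course, but it would give us the inter-process FIFO for free: the producer pushes frames of points, 0MQ queues them, and the C program pops them and feeds the FPGA. A sketch of the consuming end, where the endpoint and write_points_to_fpga() are invented:

    #include <zmq.hpp>
    #include <cstddef>

    // Hypothetical: push one buffer of points to the FPGA through its interface.
    void write_points_to_fpga(const void *data, std::size_t len);

    int main()
    {
        zmq::context_t ctx(1);
        zmq::socket_t  in(ctx, ZMQ_PULL);
        in.bind("ipc:///tmp/irl-points");        // invented endpoint; pending frames queue up here

        for (;;) {
            zmq::message_t msg;
            in.recv(&msg);                        // blocks until a frame of points is available
            write_points_to_fpga(msg.data(), msg.size());
        }
    }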