ELECINF344/381

Interactive part of the ELECINF344/ELECINF381 teaching site of Télécom ParisTech (2011 edition).


Copterix: some reflections about Kanade

After spending hours tuning our PID in Télécom ParisTech's dance room, we moved to the Télécom Robotics room to work on our implementation of the Lucas and Kanade algorithm, because it has a giant green and red chessboard, perfect for image processing. We suspended Copterix in the air with strings and started testing, with and without thresholding, and the results were not really convincing…
We considered for a moment using OpenCV's undistortion ("unfisheye") algorithm (see the attached photo to understand why we wanted it), but it took 0.6 seconds per frame on our Gumstix…

Fisheye: what our camera actually sees; note the "fisheye" distortion.
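For reference, here is a minimal sketch of the undistortion we benchmarked, using the OpenCV 1.x C API; the camera matrix and distortion coefficients below are placeholders standing in for a real chessboard calibration. Precomputing the remap tables amortizes most of the cost, but the per-frame remap itself remained too slow for us.

```c
#include <cv.h>

/* Sketch: undistort ("unfisheye") a frame with the OpenCV 1.x C API.
 * The intrinsics (fx, fy, cx, cy) and distortion (k1, k2, p1, p2) below
 * are placeholders: real values come from a prior chessboard calibration. */
static IplImage *undistort_frame(IplImage *frame)
{
    static CvMat *mapx = NULL, *mapy = NULL;

    if (!mapx) {
        float K[9] = { 300.f,   0.f, 176.f,
                         0.f, 300.f, 144.f,
                         0.f,   0.f,   1.f };
        float D[4] = { -0.30f, 0.10f, 0.f, 0.f };
        CvMat intrinsic  = cvMat(3, 3, CV_32FC1, K);
        CvMat distortion = cvMat(1, 4, CV_32FC1, D);

        /* Build the remap tables once; each frame then costs one cvRemap. */
        mapx = cvCreateMat(frame->height, frame->width, CV_32FC1);
        mapy = cvCreateMat(frame->height, frame->width, CV_32FC1);
        cvInitUndistortMap(&intrinsic, &distortion, mapx, mapy);
    }

    IplImage *out = cvCloneImage(frame);
    cvRemap(frame, out, mapx, mapy,
            CV_INTER_LINEAR + CV_WARP_FILL_OUTLIERS, cvScalarAll(0));
    return out;
}
```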

Then we stopped and decided to work out exactly how, even if it worked, Lucas and Kanade would help us correct Copterix's drift.
As we only get 10 frames per second when running the algorithm, it will be really difficult to determine in real time whether we have actually corrected the drift at the moment we want to correct it, which is why we came up with the following algorithm:

  1. wait for Copterix to be horizontal (pitch and roll < epsilon)
  2. take a reference picture and find the good features to track on it (quite a heavy operation)
  3. while Copterix is horizontal (pitch and roll < epsilon)
  4.     take a picture and compare it with the reference
  5.     if not horizontal (pitch or roll >= epsilon)
  6.         wait for Copterix to be horizontal (pitch and roll < epsilon)
  7.         go to 3.
  8.     if the drift is big enough (move size > threshold)
  9.         go to 12.
  10.     if we don't find enough features to track
  11.         go to 1.
  12. ask the PID to tilt Copterix in the right direction (we still have to decide the angle)
  13. wait for a chosen time (we will have to be extremely precise about this value)
  14. ask the PID to set Copterix horizontal again
  15. go to 1.
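Expressed as code, the loop could look like the following rough sketch; every type, helper and constant here is a hypothetical placeholder, not existing Copterix code:

```c
/* Rough sketch of the drift-correction loop above.
 * All helpers, types and constants are hypothetical placeholders. */
typedef struct { float x, y; int nfeatures; } drift_t;

extern int     is_horizontal(void);                /* pitch & roll < epsilon */
extern void    grab_reference_and_features(void);  /* steps 1-2 (heavy) */
extern drift_t estimate_drift(void);               /* step 4: LK vs. reference */
extern void    pid_tilt_against(float x, float y, float angle);
extern void    pid_level(void);
extern void    sleep_ms(int ms);

#define MIN_FEATURES     20      /* arbitrary */
#define DRIFT_THRESHOLD  5.0f    /* pixels, arbitrary */
#define CORRECTION_ANGLE 3.0f    /* degrees, still to be decided */
#define CORRECTION_MS    200     /* to be tuned very precisely */

void drift_correction_loop(void)
{
    for (;;) {
        while (!is_horizontal()) ;              /* step 1 */
        grab_reference_and_features();          /* step 2 */

        while (is_horizontal()) {               /* steps 3, 5-7 */
            drift_t d = estimate_drift();       /* step 4 */
            if (d.nfeatures < MIN_FEATURES)     /* steps 10-11 */
                break;
            if (d.x * d.x + d.y * d.y >
                DRIFT_THRESHOLD * DRIFT_THRESHOLD) {           /* steps 8-9 */
                pid_tilt_against(d.x, d.y, CORRECTION_ANGLE);  /* step 12 */
                sleep_ms(CORRECTION_MS);                       /* step 13 */
                pid_level();                                   /* step 14 */
                break;                                         /* step 15 */
            }
        }
        /* leaving the inner loop restarts at step 1 */
    }
}
```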

The real difficulty with this algorithm is that it assumes we have a powerful and precise PID controller, able to keep the copter horizontal most of the time and to execute an order as quickly and accurately as possible, which is of course not the case.

That is why we are now facing the fact that we may not have time to build an excellent PID AND implement such a precise algorithm; we will discuss it with our teachers tomorrow.

Copterix: Lucas and Kanade

Today we began writing some articles for the website, and we mainly worked on the Lucas and Kanade algorithm. We ran some tests and got good results when the camera films the ceiling, but bad results when it films the floor. Here are some illustrations: these pictures show the evolution of the helicopter's movement (x and y). The first picture was taken while the camera was watching the floor, held by Loïc; we cannot distinguish any clear evolution of the motion.

For the second picture, the camera was pointed at the ceiling and the copter was placed on a table that we could move. The first peak represents a slow movement in one direction followed by a fast movement in the opposite direction. The second peak represents the same experiment with some manual oscillations added.
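For the curious, here is roughly how such a frame-to-frame displacement can be obtained with OpenCV's C implementation of pyramidal Lucas and Kanade; this is a sketch, not our exact code, and the tracking parameters are arbitrary starting points:

```c
#include <cv.h>

/* Sketch: estimate the mean (dx, dy) between two 8-bit grayscale frames
 * with pyramidal Lucas-Kanade. Returns the number of tracked features. */
static int estimate_motion(IplImage *prev, IplImage *curr,
                           float *dx, float *dy)
{
    enum { MAX_FEATURES = 100 };
    CvPoint2D32f p0[MAX_FEATURES], p1[MAX_FEATURES];
    char status[MAX_FEATURES];
    int  count = MAX_FEATURES;

    IplImage *eig = cvCreateImage(cvGetSize(prev), IPL_DEPTH_32F, 1);
    IplImage *tmp = cvCreateImage(cvGetSize(prev), IPL_DEPTH_32F, 1);
    cvGoodFeaturesToTrack(prev, eig, tmp, p0, &count,
                          0.01, 5.0, NULL, 3, 0, 0.04);

    cvCalcOpticalFlowPyrLK(prev, curr, NULL, NULL, p0, p1, count,
                           cvSize(10, 10), 3, status, NULL,
                           cvTermCriteria(CV_TERMCRIT_ITER | CV_TERMCRIT_EPS,
                                          20, 0.03),
                           0);

    int n = 0;
    *dx = *dy = 0.f;
    for (int i = 0; i < count; i++) {
        if (!status[i])
            continue;                      /* feature lost: skip it */
        *dx += p1[i].x - p0[i].x;
        *dy += p1[i].y - p0[i].y;
        n++;
    }
    if (n) { *dx /= n; *dy /= n; }

    cvReleaseImage(&eig);
    cvReleaseImage(&tmp);
    return n;
}
```

Accumulating these (dx, dy) estimates over time gives curves like the ones plotted above.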

Tomorrow, in order to solve these problems, we will first apply a low-pass filter, and we will try using a chessboard pattern on the floor with good lighting.
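The low-pass filter can be as simple as an exponential moving average on the displacement estimates; a minimal sketch, where ALPHA is an arbitrary starting point to tune against real flight data:

```c
/* First-order low-pass (exponential moving average) on the Lucas-Kanade
 * displacement estimates. ALPHA is arbitrary and remains to be tuned. */
#define ALPHA 0.2f

static float lowpass(float filtered, float raw)
{
    return filtered + ALPHA * (raw - filtered);
}

/* Usage, once per frame: fx = lowpass(fx, dx); fy = lowpass(fy, dy); */
```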

[Casper] Flash, Screen and Pictures…

We've been working these last few days on the PCB, on displaying pictures, and on saving and loading them from flash.

First, we developed a set of functions to erase, write and read the flash memory over SPI. After testing them with text strings displayed on the screen, we tried with pictures. However, the STM32's RAM is not big enough to hold a whole picture: we have at most 20 kbytes of RAM, and one picture takes about 25 kbytes (132*132 pixels of 12 bits, 4 for each RGB channel). So we read the flash in 256-byte blocks that we send to the LCD screen until the transfer is complete. Similarly, we write the flash in 256-byte blocks when receiving a picture over the UART connection.
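Schematically, the streaming loop looks like the sketch below; spi_flash_read(), lcd_begin_frame() and lcd_write() are placeholders for our drivers, not their real names:

```c
#include <stdint.h>

/* Sketch: stream a 132x132, 12-bit picture (~25 kbytes) from SPI flash to
 * the LCD in 256-byte blocks, since it cannot fit in 20 kbytes of RAM. */
#define PICTURE_BYTES ((132u * 132u * 12u) / 8u)   /* 26136 bytes */
#define BLOCK_SIZE    256u

extern void spi_flash_read(uint32_t addr, uint8_t *buf, uint32_t len);
extern void lcd_begin_frame(void);   /* open the controller's RAM write window */
extern void lcd_write(const uint8_t *buf, uint32_t len);

void display_picture(uint32_t flash_addr)
{
    uint8_t  block[BLOCK_SIZE];
    uint32_t remaining = PICTURE_BYTES;

    lcd_begin_frame();
    while (remaining > 0) {
        uint32_t n = remaining < BLOCK_SIZE ? remaining : BLOCK_SIZE;
        spi_flash_read(flash_addr, block, n);
        lcd_write(block, n);
        flash_addr += n;
        remaining  -= n;
    }
}
```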

In order to keep the picture-writing protocol simple, we do not allow two pictures to share the same memory sector (the smallest erasable area). Since each sector is 4 kbytes, we use 7 sectors for each picture: that is about 3 kbytes too large. However, this allows us to store 146 pictures in the flash, which should be enough for our project. We have not yet worked on storing smaller pictures, which could be used to refresh a small area of the screen; that may be useful for animations.
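Under this layout the address computation stays trivial, since each picture owns whole sectors and erasing one never touches a neighbour; a sketch:

```c
#include <stdint.h>

/* Picture slot -> flash address: each picture owns 7 whole 4 kbyte
 * sectors (28672 bytes), so 146 slots fit in a 4 Mbyte flash. */
#define SECTOR_SIZE     4096u
#define SECTORS_PER_PIC 7u
#define PIC_SLOT_SIZE   (SECTOR_SIZE * SECTORS_PER_PIC)

static inline uint32_t picture_addr(unsigned slot)   /* slot in [0, 145] */
{
    return slot * PIC_SLOT_SIZE;
}
```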

Finally, we use the screen's configuration options to rotate pictures by 90, 180 or 270 degrees, by changing the way the screen controller reads its RAM.

Moreover, we wrote a small program which uses the OpenCV library (already used on the BeagleBoard for face detection and recognition) to convert a picture (JPEG, PNG, BMP, …) into a 132*132, 12-bit picture and save it in a file that can then be sent to the PCB over UART.
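A minimal sketch of such a converter with the OpenCV C API; note that the output packing (two 12-bit RGB444 pixels in three bytes) is an assumption for illustration, not necessarily the exact file format we use:

```c
#include <stdio.h>
#include <stdint.h>
#include <cv.h>
#include <highgui.h>

/* Sketch: convert any picture OpenCV can load into a raw 132x132,
 * 4-bits-per-channel file. Packing layout is assumed, not authoritative. */
int main(int argc, char **argv)
{
    if (argc != 3) {
        fprintf(stderr, "usage: %s input.png output.raw\n", argv[0]);
        return 1;
    }
    IplImage *src = cvLoadImage(argv[1], CV_LOAD_IMAGE_COLOR);  /* 8-bit BGR */
    if (!src) return 1;

    IplImage *dst = cvCreateImage(cvSize(132, 132), IPL_DEPTH_8U, 3);
    cvResize(src, dst, CV_INTER_AREA);

    /* Quantize each channel to 4 bits and build RGB444 words. */
    uint16_t px[132 * 132];
    for (int y = 0; y < 132; y++) {
        uint8_t *row = (uint8_t *)(dst->imageData + y * dst->widthStep);
        for (int x = 0; x < 132; x++) {
            uint8_t b = row[3 * x]     >> 4;
            uint8_t g = row[3 * x + 1] >> 4;
            uint8_t r = row[3 * x + 2] >> 4;
            px[y * 132 + x] = (uint16_t)((r << 8) | (g << 4) | b);
        }
    }

    /* Pack two 12-bit pixels into three bytes (132*132 is even). */
    FILE *out = fopen(argv[2], "wb");
    if (!out) return 1;
    for (int i = 0; i < 132 * 132; i += 2) {
        uint8_t triple[3] = {
            (uint8_t)(px[i] >> 4),
            (uint8_t)(((px[i] & 0xF) << 4) | (px[i + 1] >> 8)),
            (uint8_t)(px[i + 1] & 0xFF),
        };
        fwrite(triple, 1, 3, out);
    }
    fclose(out);
    return 0;
}
```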

MB Led: Led Matrix, Direction and Acknowledgment

It is time for a debriefing on last week's achievements.

LED Matrix:

Colour gradient and intensity

Benjamin worked on the LED driver in order to display a picture, and fixed some problems with luminosity. He balanced green, blue and red, and we are now able to display a white colour on our blocks (although a blue tint remains on videos). We now also have a linear gradient of intensity for each primary colour. To obtain this gradient, we first tried putting a linear variation of intensity into the registers, but it turned out the perceived result was not linear, and he had to adjust each step individually.
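This is most likely the usual perceptual-linearity (gamma) issue; in practice the hand-tuned steps amount to a small lookup table per channel, sketched here with made-up values following a gamma-2.2-like curve on the driver's 12-bit range:

```c
#include <stdint.h>

/* One colour channel's intensity table: the 16 values below are made up
 * for illustration (roughly gamma 2.2 over 12 bits); the real ones were
 * adjusted step by step until the perceived gradient looked linear. */
static const uint16_t intensity_lut[16] = {
       0,   11,   49,  119,  224,  366,  549,  771,
    1032, 1335, 1679, 2064, 2491, 2960, 3470, 4095
};

/* Map a wanted 4-bit intensity level to the driver's grayscale value. */
static inline uint16_t intensity_to_reg(uint8_t level)
{
    return intensity_lut[level & 0x0F];
}
```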

Electronic visual display:

An interrupt is raised at 10 kHz, and each time a new line is displayed. As our LED matrix has 8 lines, this gives a display refresh rate of 1.25 kHz. Each time we want to display a new line, we send it to the LED driver over SPI as a 288-bit vector, corresponding to an 8-pixel line with three colours and 12 bits of grayscale control per colour. Currently we only use 4 bits per colour.
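The refresh interrupt then reduces to a few lines; in this sketch the timer name and both helpers are placeholders for the real drivers:

```c
#include <stdint.h>

/* Sketch of the 10 kHz refresh interrupt: push one 288-bit row
 * (8 pixels x 3 colours x 12 bits) to the LED driver over SPI, then
 * advance to the next of the 8 rows (full refresh at 10 kHz / 8 = 1.25 kHz). */
#define ROWS      8
#define ROW_BYTES (288 / 8)     /* 36 bytes per row */

static uint8_t framebuffer[ROWS][ROW_BYTES];
static volatile uint8_t current_row;

extern void select_row(uint8_t row);                 /* drive the row lines */
extern void spi_send(const uint8_t *buf, int len);   /* shift into the driver */

void TIM2_IRQHandler(void)      /* assumed: a timer configured at 10 kHz */
{
    /* (clear the timer interrupt flag here, omitted) */
    select_row(current_row);
    spi_send(framebuffer[current_row], ROW_BYTES);
    current_row = (current_row + 1) % ROWS;
}
```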

IrDA:

Yesterday I implemented an acknowledgement system for the IrDA transmissions. First I needed to split our sending queue in two: one for quick packets which don't need an ACK, such as PING, PONG and the synchronization messages sent at high frequency, where some losses are acceptable; the other for important packets needing safe transmission. The latter are handed to a task which acts as a buffer, resending a packet every 40 ms until it receives an ACK, then picking up the next one.

To identify a packet, an ACK carries the ID of the sender and the ID of the packet it acknowledges; when an ACK packet is received, a message containing this information is sent to the task managing outgoing packets. To avoid waiting forever for an ACK, we use a counter: if a packet is not acknowledged after 10 tries, we consider it lost (for ping messages, if we don't receive a single PONG for 10 of our PINGs, we consider the neighbour gone). Once a neighbour is gone, we empty the sending queues for its UART.

This system is not optimal, being rather slow, but at higher rates we got a lot of corruption during transmission and more packets were lost.
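Schematically, the resend task looks like the sketch below (FreeRTOS-style calls; all types, queues and helpers are placeholders, not the real MB Led code):

```c
#include <stdint.h>
#include "FreeRTOS.h"
#include "queue.h"

/* Sketch: resend the head packet every 40 ms until its ACK arrives,
 * and give up after 10 tries. */
typedef struct { uint8_t dest_id, id; uint8_t payload[30]; } packet_t;
typedef struct { uint8_t sender_id, packet_id; }             ack_msg_t;

extern xQueueHandle xImportantQueue;   /* packets needing safe transmission */
extern xQueueHandle xAckQueue;         /* (sender ID, packet ID) from RX task */
extern void irda_send(const packet_t *pkt);

#define MAX_TRIES 10
#define RESEND_MS 40

void vReliableSendTask(void *pvParameters)
{
    packet_t  pkt;
    ack_msg_t ack;

    for (;;) {
        xQueueReceive(xImportantQueue, &pkt, portMAX_DELAY);
        for (int try = 0; try < MAX_TRIES; try++) {
            irda_send(&pkt);
            /* Wait up to 40 ms for an ACK matching this packet. */
            if (xQueueReceive(xAckQueue, &ack,
                              RESEND_MS / portTICK_RATE_MS) == pdTRUE
                && ack.sender_id == pkt.dest_id
                && ack.packet_id == pkt.id)
                break;          /* acknowledged: move on to the next packet */
        }
        /* After MAX_TRIES the packet is dropped (neighbour probably gone). */
    }
}
```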

NETWORK:

We decided to rethink our algorithm after the reactions to yesterday's post, and changed what happens when a block leaves the network. If a block is missing, the block which notices it tells the network, and every other block answers "I'm still here". If the leader doesn't answer (it has probably gone), another election takes place; those which don't answer are considered out of the network. With this algorithm, the blocks keep their position and orientation in memory, and we don't have to start an election every time a block quits.
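Sketched as code, the departure handling could look like this; all helpers and constants are placeholders, not the real MB Led implementation:

```c
#include <stdint.h>

/* Sketch of the departure handling: on a "block missing" report, answer
 * "I'm still here", collect everyone else's answers, drop the silent ones,
 * and hold a new election only if the leader itself has gone. */
#define MAX_BLOCKS        32
#define ANSWER_TIMEOUT_MS 500   /* arbitrary */

extern uint8_t my_id, leader_id;
extern void broadcast_still_here(uint8_t id);
extern void collect_answers_for(int ms);
extern int  in_network(uint8_t id);
extern int  answered(uint8_t id);
extern void remove_from_network(uint8_t id);  /* keeps stored position/orientation */
extern void start_election(void);

void handle_missing_report(void)
{
    broadcast_still_here(my_id);
    collect_answers_for(ANSWER_TIMEOUT_MS);

    for (uint8_t id = 0; id < MAX_BLOCKS; id++) {
        if (in_network(id) && id != my_id && !answered(id)) {
            remove_from_network(id);
            if (id == leader_id)
                start_election();
        }
    }
}
```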

CONCLUSION:
A little video showing some features such as the use of the LED driver, election and direction:

NOTE: now, once a direction has been decided, it is kept even when the connection is lost.