ILDA and Web app
Since the last post we have run a lot of tests in real conditions, namely with several programs running on the board. We found that translating a text into an ILDA animation on the board is far too slow, so we decided to rely on a web app based on the Google App Engine API for Python. Yesterday we turned words into action: we started to create the web app, along with a proxy on the board that queries the web app and, in case of failure, redirects the request to the slow embedded instance of the program. We deployed the web app on appspot.com, and our first tests suggest that the service is faster by a factor of 30 to 60, depending on the load of the board. We have also built a website presenting our RESTful API to the web app, and we will put it online as soon as possible.
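The idea behind the proxy can be sketched in Python as follows; the endpoint URL and the local converter are illustrative stand-ins, not our actual code:

```python
import urllib.error
import urllib.parse
import urllib.request

# Hypothetical endpoint; our real appspot.com URL differs.
WEBAPP_URL = "http://example.appspot.com/ilda"

def local_text_to_ilda(text):
    # Stand-in for the slow embedded converter running on the board.
    return ("local:" + text).encode()

def text_to_ilda(text, base_url=WEBAPP_URL, timeout=2.0):
    """Query the web app first; on failure, fall back to the local program."""
    url = base_url + "?text=" + urllib.parse.quote(text)
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return resp.read()
    except (urllib.error.URLError, OSError):
        return local_text_to_ilda(text)
```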
As far as websites are concerned, we want to introduce our brand-new website. Feel free to comment or share any kind of feedback.
We hunted down some bugs in our code; it now works better, and we don’t intend to make any further fixes to it before the presentation.
DMX and additional board
We have contacted TSM, and we will be able to try our code with real equipment.
We can communicate with our additional board via ZigBee. We now have to connect this feature to the other parts of the project using 0MQ.
Our software FIFO works, and we are putting all the pieces together to build our « driver/conductor/leader » module, which will manage all the features of the project.
Today we’ll glue the pieces together: it’s a milestone!
After a long night of debugging, we finally got the Bluetooth working. We are now able to send instructions to and receive information from RoseWheel through the Bluetooth interface, using minicom on a laptop. We can also pair with an Android smartphone, but we haven’t been able to verify data transmission yet. We are still working on the Android application; hopefully tomorrow we will be able to start our first tests of wireless communication with RoseWheel.
We also tested the Bluetooth range with the transmitter inside the chassis: we received the signal from a distance of approximately 10 meters, with two walls in between.
We started the implementation of the human-interface board. We found a big LCD driven by the same controller as the one on our laboratory boards. We plan to use it to display information to the user directly on the handlebars, such as the engine power, the battery level and the current angle measured by the sensors. As we are going to mount the LCD vertically, we had to design a new character set intended to be displayed in this orientation.
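Designing the vertical character set mostly means rotating each glyph bitmap by 90 degrees. A minimal Python sketch, with an illustrative 5×7 « A » pattern (our real font data differs):

```python
# Illustrative 5x7 glyph for 'A'; 1 = pixel on.
GLYPH_A = [
    "01110",
    "10001",
    "10001",
    "11111",
    "10001",
    "10001",
    "10001",
]

def rotate_cw(rows):
    """Rotate a list-of-strings bitmap 90 degrees clockwise."""
    return ["".join(r[c] for r in reversed(rows)) for c in range(len(rows[0]))]
```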
Today, we went to the complex, a more spacious room than the one where we used to do our tests. The tests were quite conclusive. We managed a few « landings » (the copter was caught by Axel near the floor because of the turbulence caused by the proximity of this obstacle). With more power to the motors, we kept the copter in the air longer, about 10 to 20 seconds (sadly, it wasn’t filmed).
We also used the remote to control the thrust and the setpoint of the PID. It will need some software adjustments, since it is still too brutal and rather hard to use.
Tomorrow, we’ll restore height control in our PID and see how much the copter drifts. This drift will then be corrected with the remote.
Today, we began to write some articles for the website, and we mainly worked on the Lucas-Kanade algorithm. We ran some tests: we get good results when the camera is recording the ceiling, but bad results when it is recording the floor. Here are some illustrations: these pictures represent the evolution of the helicopter movement (x and y). The first picture was taken while the camera, held by Loïc, was watching the floor; no clear evolution of the motion can be distinguished.
For the second picture, the camera was pointed at the ceiling, and the copter was put on a table that we could move. The first peak represents a slow movement in one direction followed by a fast movement in the opposite direction. The second peak represents the same experiment with some manual oscillations added.
Tomorrow, in order to solve these problems, we will first apply a low-pass filter, and we will try using a checkerboard pattern on the floor with good lighting.
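The low-pass step could be as simple as a moving average over the displacement estimates; a sketch, where the window length is an assumption:

```python
def moving_average(values, window=5):
    """Smooth a sequence with a trailing moving average (shorter at the start)."""
    out = []
    for i in range(len(values)):
        lo = max(0, i - window + 1)
        out.append(sum(values[lo:i + 1]) / (i + 1 - lo))
    return out
```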
These last two days we mainly worked on tuning the PID coefficients to improve stability. We tried to find the best set of coefficients so that RoseWheel could rise from an almost horizontal position, recover stability as soon as possible, and then maintain it over time, even under strong external perturbations. We also worked on improving the coefficients we had found for the human-driving mode, to strike the right compromise between smooth driving and responsiveness.
During this test-and-improve phase, Alexis advised us to plot the curves of the values critical to our system. We therefore plotted over time the angle and the angular rate, before and after Kalman filtering, as well as the command sent to the motors as computed by our PID algorithm.
This helped us find two problems in our software that explain why RoseWheel was initially so unstable:
The Kalman filter wasn’t configured correctly. We had first configured it for the testbench and forgot to change the time step, which was ten times smaller than the correct value. That led to latency and errors in the output of the Kalman filter. Setting this parameter to its correct value made the system much more stable.
Even after the Kalman filtering, we noticed that the angular rate was still quite noisy. This noise caused our gyropod to vibrate a lot when we increased the derivative coefficient Kd of the PID. That is a problem, because increasing the derivative coefficient is the only way we have to lower the amplitude of the oscillations induced by a large proportional term Kp. We therefore decided to smooth the angular rate with a low-pass filter placed after the Kalman filter. It’s a really simple filter, made of only 2 coefficients, but according to the plots it does its job: the angular rate is noticeably smoother than before. However, while testing, increasing the derivative coefficient still leads to oscillations and vibrations, so we still have to work on it.
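For reference, a first-order filter of the two-coefficient kind we describe can be sketched as follows; the coefficient value here is illustrative:

```python
def lowpass(samples, a=0.2):
    """y[n] = (1 - a) * y[n-1] + a * x[n]; a smaller 'a' smooths more."""
    out = []
    y = samples[0]
    for x in samples:
        y = (1.0 - a) * y + a * x
        out.append(y)
    return out
```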
As we obtained satisfying results in making RoseWheel maintain its equilibrium, we started working on other features:
1) We implemented the safety switch and designed a protocol to avoid as many dangerous situations as possible:
At the very beginning, when the power-on button is pressed, RoseWheel is in self-driven mode by default: it will try to reach its equilibrium position as soon as possible and then hold it. We assume the user isn’t too reckless and will not, for instance, power on RoseWheel in an almost horizontal position while facing it. Then, as soon as the safety switch located on the base is pressed, meaning someone has a foot on RoseWheel and wants to ride it, RoseWheel switches to its human-driven mode. If the safety switch is suddenly released, RoseWheel checks the angle. If it is almost 0°, RoseWheel’s speed is almost 0 and the rider probably wants to step off, so RoseWheel calmly switches back to its self-driven mode (we still need to implement it). If the angle is under 75°, it could mean two things: the rider has fallen, or has temporarily lifted a foot. Thus, if the safety switch isn’t pressed again within 1 second, RoseWheel makes a special noise; if it still isn’t pressed within another second, RoseWheel switches to its self-driven mode. Finally, if the angle is greater than 75°, the rider has fallen and RoseWheel could be a danger to people around, so the motors are disabled.
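This protocol can be summarized as a small state machine. Here is an illustrative Python sketch; the state names and the « almost 0° » threshold are our shorthand, while the 75° limit and the two 1-second delays come from the protocol above:

```python
SELF_DRIVEN, HUMAN_DRIVEN, WARNING, DISABLED = range(4)

def next_state(state, switch_pressed, angle_deg, seconds_released):
    if switch_pressed:
        return HUMAN_DRIVEN            # a foot is on the base: human-driven mode
    if state in (HUMAN_DRIVEN, WARNING):
        if abs(angle_deg) > 75.0:      # the rider has fallen: disable motors
            return DISABLED
        if abs(angle_deg) < 1.0:       # almost 0°: rider stepping off calmly
            return SELF_DRIVEN
        if seconds_released >= 2.0:    # noise already played, still no foot
            return SELF_DRIVEN
        return WARNING                 # 1 s grace, then the special noise
    return state
```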
2) Concerning obstacle detection, we have almost finished implementing the Sharp and sonar drivers. As these detectors are really sensitive to external conditions, we still have to test them and see how they react in the different environments RoseWheel will operate in.
3) Concerning the remote control, we have almost finished implementing the Bluetooth drivers. We ran some tests yesterday, but we still need to continue them today; we will say more about this in the next article.
Tuesday evening, after nearly two days of fine-tuning our testing procedures and safety measures, we finally ran the first tests on our own vehicle. To make sure we didn’t damage the equipment (or hurt anyone), we followed a simple and strict protocol:
1. First of all, we checked that we could manually control the motors via our interpreter;
2. Then, we verified that the limitations we applied to the variation of the voltage of each motor were adequate (as mentioned in a previous post, an abrupt increase or decrease on the voltage could severely damage the motors);
3. Finally, we determined a maximum speed that seemed reasonable, so that we could stop the vehicle manually should the worst happen.
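Steps 2 and 3 amount to clamping both the command itself and its step-to-step variation. An illustrative sketch; the numeric limits are made up:

```python
MAX_CMD = 300    # assumed « reasonable maximum speed » command
MAX_STEP = 20    # assumed per-tick limit on the voltage variation

def limit(prev_cmd, requested):
    """Clamp the requested command, then limit how fast it may change."""
    cmd = max(-MAX_CMD, min(MAX_CMD, requested))
    if cmd > prev_cmd + MAX_STEP:
        cmd = prev_cmd + MAX_STEP
    elif cmd < prev_cmd - MAX_STEP:
        cmd = prev_cmd - MAX_STEP
    return cmd
```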
After this initial procedure, we could start the actual tests of our system. After a few iterations of tuning the controller coefficients, we realized that it was actually much easier to balance the vehicle with a human being on top of it than without any load, since:
(i) the person enhances the performance of the control system by balancing his/her weight in a way that maximizes stability;
(ii) the presence of the human being makes the test conditions closer to the physical model we used to compute and simulate the parameters.
After several adjustments, we were able to take our first ride on the RoseWheel -- with basic steering via the integrated potentiometer as a bonus.
Nevertheless, by Wednesday morning we had not yet found a way of balancing the RoseWheel alone -- a must for our next major goal, remote control. Then we realized that our chassis has a very interesting property: by tilting it to a specific angle, we can align the center of gravity of the steering bar with the axis of the wheels, reaching a metastable equilibrium. We used this fact to offset the error variable of our PID controller, and that proved very effective: we obtained pretty good stability as a result. But all this comes at a price: since we now have two different equilibrium positions, we must use the safety button located on the chassis to determine whether we are carrying a person or not; nothing too difficult to implement, though.
On the other hand, the PID coefficients may be trickier to master. We are still working on them to solve the issues we are having, notably the oscillations and vibrations that sometimes occur. To help with debugging, we set up a way to plot the angle, the angular velocity, and the output voltage command for each of the motors. Our next step on this front will be to use all this data to better tune Kp, Ki and Kd, so as to get a more responsive and stable control algorithm.
Meanwhile, we started to move on to our next goals: we began to study the Bluetooth controller, and we are already able to compile and run a « hello world » Android application. Next steps include implementing drivers for our distance sensors to enable obstacle detection.
To summarize all our work, we prepared a special video showing some of the most interesting (and/or funny) demos we made so far. Enjoy!
Today we replaced the oscilloscope with the real laser. As expected, it does not work exactly the same way, but our first test with « The Riddle » is not so bad. You can see it in the video below; the image is small, as we do not have a lot of space in the classroom. You can see that the image blinks: this is partly due to our code, but also partly due to the camera sync, and the « real » result is cleaner than what can be seen here.
The new design suggested by our teachers has been implemented, except for the FIFO, which for the moment does not use the internal RAMs. This should be done tomorrow. The FIFO as it stands sometimes triggers bugs that we do not really understand yet; we will also investigate those issues tomorrow. Nevertheless, we have a functional design, the one we used tonight to make the video.
Tweet to ILDA
Concerning the way we will display tweets, Sam suggested that we implement smooth horizontal scrolling. Our first idea was to generate one big ILDA image containing the whole tweet on a single line, and to clip it at display time, just before sending it to the laser. It turned out this was not the best approach, so we are now trying to generate an ILDA animation of the scrolling with a Python script. We are on our way, and we have already identified a few points of interest to change in our design to make it work soon.
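The generation we are trying can be sketched like this, using simplified (x, y) points instead of full ILDA records; the step size is an assumption:

```python
def scroll_frames(points, window_width, step=5):
    """One frame per horizontal offset, keeping only the visible points."""
    max_x = max(x for x, _ in points)
    frames, offset = [], 0
    while offset <= max_x:
        frames.append([(x - offset, y) for x, y in points
                       if offset <= x < offset + window_width])
        offset += step
    return frames
```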
The library we mentioned last time (libmicrohttpd) seems to fit our needs quite well. It is not a complete Web server, but it is enough to make the board serve a static HTML page and a little REST API to fetch tweets and validate them. Authentication is very basic for the moment: it consists of a « secret » token used as a segment of the URI. It is not very secure, but it is not our priority right now.
Today we started some tests on the DCC part. Having seen good results on the oscilloscope, we moved on to testing on a section of track, placing the train on it and sending the DCC commands through the rails. Indeed, we could drive our train in both directions at different speeds.
Then, we started to implement a small module that accepts Ethernet commands following a simple protocol. This module translates the Ethernet commands into the appropriate DCC and CAN commands in order to control the different parts of our train model.
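As an illustration of the DCC side of the translation, a baseline speed packet is two data bytes followed by an XOR check byte; a hedged Python sketch (the exact bit layout our module emits may differ):

```python
def dcc_speed_packet(address, speed_code, forward):
    """Build a baseline DCC speed packet: address, 01DCSSSS, XOR check byte."""
    address &= 0x7F                       # 7-bit short address
    instruction = 0b01000000 | (0b00100000 if forward else 0) | (speed_code & 0x1F)
    return bytes([address, instruction, address ^ instruction])
```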
In parallel, we are still testing the stability of our CAN module. Despite the good results, there are very rare cases where our cards become unstable. We are making some modifications, and tomorrow we are going to test with multiple light cards.
Our next PSSC covers the initialization of Ethernet (which is already done) and the control of our circuit from an external program.
This week we tried to put our developments together and make them interact with each other: sensor communication, the Kalman filter, feedback control, motor control, and the CAN communication protocol. We were dogged by bad luck, as we realized the heatsink of our mainboard was too big to fit in the Zzaag… A few millimeters required us to order a new one with a more suitable shape. Fortunately, Alexis and Samuel helped us find one. Because of that, we were not able to validate the « feedback control tuning » PSSC for Saturday as planned. Advised by Samuel, we also decided to postpone the « encoders drivers » PSSC, as it wasn’t really among our first-order priorities.
We tried to take advantage of this situation and decided to prepare everything for the first tests. Alexis helped us define a test procedure that is as safe as possible for both hardware and people. The motors need special attention, as they can be broken by unsuitable commands: because of the wheels and the weight of the chassis they have a really high inertia, and sharply asking them to reverse their direction of rotation while they are rotating fast can seriously damage them and the H-bridges. If we damaged them, we wouldn’t be able to order new ones before the end of the course, which would be fatal for our project. Therefore we first need to do some testing at low speed, so as to define the maximum acceptable speed at which we can run the motors before they get damaged. To this end, we need an interpreter running on the mainboard and communicating over RS232, so we can send the motors various commands and determine this maximum speed. Then we need to cap our commands at this maximum speed using software limitations. Finally, the interpreter should be able to change the feedback control values, to make our testing more flexible. We implemented such an interpreter and made a video. The motor commands in the video are given between 0 and 1000, such that:
ml0 = 0%: max negative speed
ml500 = 50%: stop
ml1000 = 100%: max positive speed
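The mapping from an « mlNNN » value to a signed speed fraction boils down to:

```python
def command_to_speed(value):
    """Map 0..1000 to -1.0..+1.0: 0 is max negative, 500 is stop, 1000 max positive."""
    if not 0 <= value <= 1000:
        raise ValueError("command out of range")
    return (value - 500) / 500.0
```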
Before trying the Zzaag motors, we did some tests on other DC motors with less inertia, which we cannot damage as easily as the Zzaag’s. Putting all the pieces together, we were able to shoot a video showing the motors reacting appropriately to the inclination of the sensorboard. As for the Kalman filter, we implemented a lighter version than RoseWheel’s, as it is best tested on the real system. This version was only able to track the gyroscope drift and correct it; as far as we could see during the testing, it did that well. Concerning the PID, we tried the version that we are going to use on RoseWheel, but it still needs to be improved during the next tests.
Tomorrow we will be able to work on the Zzaag motors using our safe procedure and the tools we developed. We look forward to it, as it is a major step forward for our project…
Today we were able to control our LEDs using the TLC5945 LED driver, although we have not yet exploited its capabilities to their full extent. As Alexis and Samuel proposed during our meeting, we should use dot correction in order to balance the mismatch in luminosity between green, red and orange. In addition, it would be useful to control intensity through the TLC5945 grayscale mode, especially once we assemble the whole system containing 60 LEDs.
Our next PSSC concerns DCC control of the train, although there are many more things we are concerned about. At this point we are mostly concerned with switching from our laboratory STM32F103 cards to the STM32F107-based central station card. For the moment we haven’t managed to configure the STM32F107 CAN bus drivers correctly. Once this is done, we will run more tests and continue working on the stability of the NMRAnet CAN bus module.
One problem with the NMRAnet CAN bus module is that, in the presence of multiple producer/consumer events, the nodes fail to reply in time to the STATUS REQUEST message addressed to them by the central station. We think a possible cause is that, for the moment, the nodes use only one of the STM32F103’s available CAN bus reception FIFOs. In this case, processing the incoming status request packets is delayed by event packets already present in the FIFO. Therefore, filtering the NMRAnet producer/consumer event packets out of one of the FIFOs and letting them pass through the other one seems like an improvement.
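The routing we have in mind can be illustrated with two software queues standing in for the hardware FIFOs; the event-identifier mask here is hypothetical:

```python
from collections import deque

EVENT_FLAG = 0x08000   # hypothetical bit marking producer/consumer event frames

fifo_status, fifo_events = deque(), deque()

def accept(frame_id, payload):
    """Route event frames to one FIFO so status requests are never stuck behind them."""
    target = fifo_events if frame_id & EVENT_FLAG else fifo_status
    target.append((frame_id, payload))
```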