Our awesome video is on the RoseWheel.free.fr home page…
…have a look
After a long night of debugging, we finally managed to get the Bluetooth working. We are now able to send instructions to and receive information from RoseWheel through the Bluetooth interface, using minicom on a laptop. We can also pair with an Android smartphone, but we haven’t been able to verify data transmission yet. We are still working on the Android application; hopefully tomorrow we will be able to start our first tests of wireless communication with RoseWheel.
We also tested the Bluetooth range with the transmitter inside the chassis: we received the signal from a distance of approximately 10 meters, with two walls in between.
We started the implementation of the human-interface board. We found a big LCD driven by the same controller as our laboratory boards. We plan to use it to display information to the user directly on the handlebars, such as engine power, battery level and the current angle measured by the sensors. As we are going to mount the LCD vertically, we had to write a new character set designed to be displayed in this orientation.
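Rather than drawing each rotated character by hand, the new set can be generated by rotating the original glyphs in software. The sketch below assumes a hypothetical glyph layout (8 rows of 5-bit values, bit 4 being the leftmost pixel); our actual font format may differ:

```c
#include <stdint.h>

/* Rotate a 5x8 glyph 90 degrees clockwise for a vertically mounted LCD.
 * in[8]  : 8 rows, bits 4..0 = leftmost..rightmost pixel (assumed layout)
 * out[5] : 5 rows of the rotated glyph, bits 7..0 = leftmost..rightmost */
void rotate_glyph_cw(const uint8_t in[8], uint8_t out[5])
{
    for (int col = 0; col < 5; col++) {
        uint8_t row = 0;
        for (int r = 0; r < 8; r++) {
            /* pixel (r, col) lands at (col, 7 - r) after the rotation;
             * with column c encoded at bit (7 - c), that is bit r */
            if (in[r] & (1u << (4 - col)))
                row |= (uint8_t)(1u << r);
        }
        out[col] = row;
    }
}
```

Running this once over the whole font table at startup (or offline) avoids maintaining two character sets by hand.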
Yesterday we finished implementing the drivers for both distance sensors, the infrared and the ultrasonic one. They actually work better than we hoped, as their detection cones are a little narrower than expected.
In the end, RoseWheel will come with two infrared sensors and one ultrasonic sensor. The infrared sensors will be used to detect falls and ravines, whereas the ultrasonic sensor will be used to detect obstacles.
Since it isn’t possible to link the sensors to the mainboard with a wire longer than 15 cm, we won’t be able to place them on top of the handlebars to detect ravines as we had previously planned. But given that the detection cone of the infrared sensor is much narrower than expected, we concluded that we will still have sufficient range to detect ravines with the sensors placed at the base of the chassis.
Indeed, we need a long enough detection range because, counter-intuitive as it may sound, to stop RoseWheel first needs to accelerate, so as to bring its center of gravity back above the wheel axis, and only then brake.
For obstacle detection, we will use the ultrasonic sensor placed right in the middle of the chassis, with a slight upward inclination so that it doesn’t mistake the floor for an obstacle. As its detection range extends up to 4 meters, it would report distant obstacles that aren’t actually on RoseWheel’s path. We therefore only have to handle obstacles closer than some threshold; we haven’t defined this threshold yet, but the tests will give us a clearer idea.
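A minimal sketch of that filtering, with a placeholder threshold (the value and the function names are ours, not final; the real threshold will come out of the tests):

```c
#include <stdbool.h>
#include <stdint.h>

#define SONAR_MAX_RANGE_CM 400  /* the sensor reports up to about 4 m */

/* Distance below which an echo is treated as a real obstacle.
 * Placeholder value, to be tuned experimentally. */
static uint16_t obstacle_threshold_cm = 150;

/* Return true when a sonar reading should trigger the obstacle logic. */
bool is_obstacle(uint16_t distance_cm)
{
    if (distance_cm == 0 || distance_cm > SONAR_MAX_RANGE_CM)
        return false;              /* no echo, or an out-of-range glitch */
    return distance_cm <= obstacle_threshold_cm;
}
```

Keeping the threshold in a variable lets us adjust it from the interpreter during the tests instead of reflashing.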
To be continued.
These last two days we mainly worked on tuning the PID coefficients to improve stability. We tried to find the best set of coefficients so that RoseWheel could rise from an almost horizontal position, recover stability as quickly as possible, and then maintain it over time, even under strong external perturbations. We also worked on improving the coefficients we had found for the human-driving mode, to strike the right compromise between smooth driving and responsiveness.
During this test-and-improve phase, Alexis advised us to plot the values critical to our system. So we plotted, over time, the angle and the angular rate before and after the Kalman filtering, as well as the motor command computed by our PID algorithm.
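Such plotting only requires the firmware to stream samples in a plottable form over the serial link. A sketch, with a hypothetical sample layout (our actual format may carry more fields):

```c
#include <stdio.h>
#include <stdint.h>
#include <string.h>

/* Format one telemetry sample as a CSV line so it can be dumped over the
 * serial port and plotted afterwards (gnuplot, Matlab, ...). */
int format_sample(char *buf, size_t len, uint32_t t_ms,
                  float angle_raw, float angle_filtered, int command)
{
    return snprintf(buf, len, "%lu,%.3f,%.3f,%d\n",
                    (unsigned long)t_ms, angle_raw, angle_filtered, command);
}
```

One line per control tick is enough to see oscillations and filter lag at a glance once the log is plotted.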
This helped us find two problems in our software that explain why RoseWheel was so unstable at the beginning:
As we obtained satisfying results in making RoseWheel maintain its equilibrium, we started working on other features:
We implemented the safety switch and designed a protocol to avoid as many dangerous situations as possible:
At the very beginning, when the power button is pressed, RoseWheel is in self-driving mode by default. That means it will try to reach its equilibrium position as soon as possible and then remain there. We assume that the user isn’t too silly and, for instance, won’t try to power on RoseWheel in an almost horizontal position while facing it. Then, as soon as the safety switch located on the base is pressed, meaning that someone has a foot on RoseWheel and wants to ride it, RoseWheel switches to its human-driven mode. If the safety switch is suddenly released, RoseWheel checks the angle. If it is almost 0°, RoseWheel’s speed is almost 0 and the rider probably wants to get off, so RoseWheel calmly switches back to its self-driven mode (which we still need to implement). If the angle is under 75°, that could mean two things: either the rider has fallen or he has temporarily lifted his foot. So if the safety switch isn’t pressed again within 1 second, RoseWheel makes a special noise; if it still isn’t pressed within another second, RoseWheel switches to its self-driven mode. Finally, if the angle is greater than 75°, the rider has fallen and RoseWheel could be a threat to the people around, so the motors are disabled.
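This protocol is essentially a small state machine. A sketch of how it could look (the names, the 1 ms tick and the near-zero angle threshold are our assumptions; the 75° threshold and the 1 s + 1 s timing come from the description above):

```c
#include <stdbool.h>
#include <stdint.h>

typedef enum { MODE_SELF_DRIVEN, MODE_HUMAN_DRIVEN, MODE_DISABLED } drive_mode_t;

typedef struct {
    drive_mode_t mode;
    uint32_t released_ms;   /* time elapsed since the switch was released */
    bool beeped;
} safety_t;

/* Called once per millisecond (assumption) with the switch state and the
 * current filtered angle, 0 degrees meaning perfectly upright. */
void safety_update(safety_t *s, bool switch_pressed, float angle_deg)
{
    if (s->mode == MODE_DISABLED)
        return;                               /* rider fell: stay disabled */

    if (switch_pressed) {                     /* a foot is on the base */
        s->mode = MODE_HUMAN_DRIVEN;
        s->released_ms = 0;
        s->beeped = false;
        return;
    }

    if (s->mode != MODE_HUMAN_DRIVEN)
        return;                               /* already self-driven */

    if (angle_deg > 75.0f) {
        s->mode = MODE_DISABLED;              /* fall: kill the motors */
    } else if (angle_deg < 1.0f) {
        s->mode = MODE_SELF_DRIVEN;           /* rider calmly stepped off */
    } else {
        s->released_ms++;
        if (s->released_ms == 1000 && !s->beeped)
            s->beeped = true;                 /* emit the warning noise here */
        if (s->released_ms >= 2000)
            s->mode = MODE_SELF_DRIVEN;       /* give up waiting for the foot */
    }
}
```

Keeping all the transitions in one function makes the protocol easy to review for dangerous corner cases.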
2) Concerning obstacle detection, we have almost finished implementing the Sharp and sonar drivers. As these sensors are really sensitive to external conditions, we still have to test them and see how they react in the different environments RoseWheel will operate in.
3) Concerning the remote control, we have almost finished implementing the Bluetooth drivers. We ran some tests yesterday, but we still need to continue them today; we will say more about it in the next article.
Tuesday evening, after nearly two days of fine-tuning our testing procedures and safety measures, we finally ran the first tests on our own vehicle. To make sure we didn’t damage the equipment (or hurt anybody), we followed a simple and strict protocol:
1. First of all, we checked that we could manually control the motors via our interpreter;
2. Then, we verified that the limitations we applied to the variation of the voltage of each motor were adequate (as mentioned in a previous post, an abrupt increase or decrease on the voltage could severely damage the motors);
3. Finally, we determined a maximum speed that seemed reasonable, so that we could stop the vehicle manually should the worst happen.
After this initial procedure, we could start the actual tests of our system. After a few iterations of tuning the controller coefficients, we realized that it was actually much easier to balance the vehicle with a human being on top of it than without any load, since:
(i) the person enhances the performance of the control system by balancing his/her weight in a way that maximizes stability;
(ii) the presence of the human being makes the test conditions closer to the physical model we used to compute and simulate the parameters.
After several adjustments, we were able to take our first ride on the RoseWheel -- with basic steering capabilities via the integrated potentiometer implemented as a plus.
Nevertheless, by Wednesday morning we had not yet found a way of balancing the RoseWheel alone -- a must for our next major goal, remote control. Then we realized that the chassis we used had a very interesting property: by tilting it to a specific angle, we could align the center of gravity of the steering bar with the axis of the wheels, reaching a metastable equilibrium. We used this fact to offset the error variable of our PID controller, and that proved very effective: we obtained pretty good stability as a result. But all this comes at a price: since we now have two different equilibrium positions, we must use the safety button located on the chassis to determine whether we are carrying a person or not; nothing too difficult to implement, though.
On the other hand, the PID coefficients may be trickier to master. We are still working on them to solve the issues we are having, notably oscillations and vibrations that sometimes occur. To help us with the debugging, we set up a solution to plot angle, angular velocity, and output voltage command for each one of the motors. Our next actions on this front will be to use all these data to better tune Kp, Ki and Kd, so as to have a more responsive and stable control algorithm.
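For reference, the controller we are tuning has the classic discrete PID shape; a minimal sketch (our real implementation differs in details such as integral clamping):

```c
/* Classic discrete PID: u = Kp*e + Ki*integral(e) + Kd*de/dt,
 * where e is the error and dt the control period in seconds. */
typedef struct {
    float kp, ki, kd;
    float integral;     /* accumulated error */
    float prev_error;   /* error at the previous tick */
} pid_ctrl_t;

float pid_step(pid_ctrl_t *p, float error, float dt)
{
    p->integral += error * dt;
    float deriv = (error - p->prev_error) / dt;
    p->prev_error = error;
    return p->kp * error + p->ki * p->integral + p->kd * deriv;
}
```

Plotting the error and the three terms separately makes it much easier to see which coefficient is responsible for the oscillations.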
Meanwhile, we started to move on to our next goals: we began to study the Bluetooth controller, and are already able to compile and run a « hello world » Android application. Next steps include implementing drivers for our distance sensors to implement obstacle detection.
To summarize all our work, we prepared a special video showing some of the most interesting (and/or funny) demos we made so far. Enjoy!
This week we tried to put our developments together and make them interact with each other: sensor communication, Kalman filter, feedback control, motor control, and the CAN communication protocol. We were dogged by bad luck, as we realized the heatsink of our mainboard was too big to fit in the Zzaag… A few millimeters required us to order a new one with a more suitable shape; fortunately Alexis and Samuel helped us find it. Because of that, we were not able to validate the « feedback control tuning » PSSC for Saturday as planned. Advised by Samuel, we also decided to postpone the « encoders drivers » PSSC, as it wasn’t really among our first-order priorities.
We tried to make the best of this situation and decided to prepare everything for the first tests. Alexis helped us define a test procedure that is as safe as possible for both hardware and people. The motors need special attention, as they can be broken by unsuitable commands: because of the wheels and the weight of the chassis, they have a really high inertia, and sharply asking them to reverse their direction of rotation while they are spinning fast can seriously damage them and the H-bridges. If we damaged them, we wouldn’t be able to order new ones before the end of the course, which would be fatal for our project. Therefore we first need to do some testing at low speed, so as to define the maximum acceptable speed at which we can run the motors before they get damaged. To this end we need an interpreter running on the mainboard and communicating over RS232, to be able to give the motors various commands and determine this maximum speed. Then we need to limit our commands to this maximum speed using software limitations. Finally, the interpreter should be able to change the feedback-control values, to make our testing more flexible. We implemented such an interpreter and made a video. The motor commands in the video are given between 0 and 1000 such as:
Before trying the Zzaag motors, we did some tests on other DC motors with less inertia, which we couldn’t damage as easily as the Zzaag’s. Putting all the pieces together, we were able to shoot a video showing the motors reacting appropriately to the inclination of the sensorboard. As for the Kalman filter, we implemented a lighter version than RoseWheel’s, since the full one works best when tested on the real system. This version was only able to track the gyroscope drift and correct it; as far as we could see during the tests, it did so well. Concerning the PID, we tried the version we are going to use on RoseWheel, but it still needs to be improved during the next tests.
Tomorrow we will be able to work on the Zzaag motors using our safe procedure and the tools we developed. We look forward to it, as it is a major step forward for our project…
We got the board that controls the motors yesterday evening, so we started testing whether our H-bridge driver works as expected. Here is an example of our tests:
It looks good for the moment, so the next step is to implement protection against violent speed changes…
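That protection can be sketched as a slew-rate limiter on the motor command. The limits below are placeholders to be fixed experimentally, and the signed-command convention is our assumption:

```c
/* Clamp the requested command and bound its variation per control tick,
 * so a brutal reversal never reaches the motors or the H-bridges. */
#define MAX_CMD   300   /* maximum command judged safe for the first tests */
#define MAX_STEP   10   /* maximum command change per tick */

int limit_command(int previous, int requested)
{
    if (requested >  MAX_CMD) requested =  MAX_CMD;   /* speed ceiling */
    if (requested < -MAX_CMD) requested = -MAX_CMD;
    if (requested > previous + MAX_STEP) return previous + MAX_STEP;
    if (requested < previous - MAX_STEP) return previous - MAX_STEP;
    return requested;
}
```

Called once per control tick between the PID output and the H-bridge, this guarantees a full reversal is spread over many ticks.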
Following up on our previous post, we made some further investigations into the physics underlying the accelerometer measurements.
Theoretically, it’s really easy to extract a tilt angle from an accelerometer: we just take the measurement along the right axis (depending on the frame used), divide it by g, and take the arcsine to get the tilt angle.
In our case: theta = arcsin(Ax/g), where:
theta is the tilt angle, Ax the measurement on the x axis, and g the gravitational constant.
This solution works well as long as the accelerometer isn’t translating.
If the accelerometer is translating with a non-zero acceleration component, extracting a correct angle measurement becomes really tricky: since we measure not only the gravitational acceleration but also the translational acceleration, we can’t directly apply the previous equation.
However, even without encoders, it’s mathematically possible to extract the tilt angle. As we have a 3-axis accelerometer, from the measurements of two well-chosen axes we can derive a system of two equations with two unknowns, the tilt angle theta and the translational acceleration x, and then solve this system.
In the frame of the accelerometer we have:
g*sin(theta) + x*cos(theta) = Ax (1)
-g*cos(theta) + x*sin(theta) = Az (2)
where g is the gravitational constant, theta is the tilt angle, and Ai is the acceleration measured by the accelerometer along the i axis. We gave this system of two equations to Matlab using the solve function.
And it returned two solutions:
theta = -2*atan((Ax - (Ax^2 + Az^2 - g^2)^(1/2))/(Az - g))
theta = -2*atan((Ax + (Ax^2 + Az^2 - g^2)^(1/2))/(Az - g))
We tried each of these solutions, but none of them worked. We investigated further, and the main problem is that, experimentally, the measurements aren’t perfect: they are skewed by noise. Thus, even when there is no tilt, Az isn’t exactly equal to g. Furthermore, even though Ax² + Az² is theoretically supposed to equal g², experimentally it isn’t, which leads to taking the square root of a negative number, not a great thing in the real world.
So our conclusion is that even if this pair of solutions is mathematically correct, it isn’t usable in the real world, and we decided not to use it.
In addition, we must confess that equations (1) and (2) aren’t exactly right: we also have to add the acceleration terms coming from the rotational movement of the chassis around the wheel axis. That leads to:
g*sin(theta) + l*theta_dot_dot + x*cos(theta) = Ax (1)
-g*cos(theta) - l*(theta_dot)² + x*sin(theta) = Az (2)
That’s why we finally decided to place the sensorboard as close as possible to the wheel axis, so that l is as small as possible; this hopefully allows us to neglect these two terms.
Anyway, we think that a proper set of coefficients for the measurement noise and the process noise in the Kalman filter will be sufficient to correct those problems.
To be continued.
Yesterday, after several days of debugging, we finally managed to gather data from the gyroscope. We ultimately found out that the freezes we were experiencing were caused by a reset interrupting an I2C cycle at the exact moment a slave was about to write a '0' on the SDA line (as an acknowledgement or during a read cycle); when the STM32 woke up and tried to generate a START condition, the slave would immediately force SDA to a low state, thus preventing the microcontroller from reclaiming the bus. We fixed this problem by sending several clock pulses on the SCL line until the IMU-3000 releases SDA; then, we can force a STOP condition and abort the incomplete cycle.
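In essence, the fix clocks SCL until the stuck slave has shifted out the remaining bits of the aborted transfer and releases SDA; nine pulses cover a full byte plus the ACK bit. The sketch below is a pure simulation of that logic (no hardware access, function name is ours):

```c
/* Simulate the bus recovery: the stuck slave drives SDA low and shifts
 * one bit out per SCL pulse.  Returns the number of pulses needed before
 * SDA is released (and a STOP can be forced), or -1 if 9 pulses were not
 * enough to free the bus. */
int clock_pulses_until_sda_free(int bits_left_in_transfer)
{
    int pulses = 0;
    while (bits_left_in_transfer > 0 && pulses < 9) {
        pulses++;                  /* one extra pulse on SCL */
        bits_left_in_transfer--;   /* the slave shifts out one more bit */
    }
    return (bits_left_in_transfer == 0) ? pulses : -1;
}
```

In the real driver, each pulse is a bit-banged SCL toggle with SDA sampled between pulses, and a failure after nine pulses means something other than an interrupted cycle is holding the bus.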
We have also completed a first version of our communication protocol. Under normal operating conditions, the following messages are exchanged on the bus:
In debug mode, the sensor board sniffs the bus so that the traffic can be forwarded over Bluetooth or the serial port. The HI board monitors the exchanges as well, to display relevant information on the LCD:
Today, we finished and/or improved some of the micro tasks we assigned to each other at the beginning of this week.
We tested a lighter version of the Kalman filter that will be used in RoseWheel on the testbench we built a few weeks earlier. We used a light version because the full one needs the state equations of the system to be accurate. As we didn’t want to lose time working out the state equations of the testbench (even though they are really similar to RoseWheel’s), we tested a version that only tracks the drift of the gyroscope and makes some minor corrections to the accelerometer signal during x-axis translations. The light Kalman filter worked well and did what we expected: it tracked and corrected the drift of the gyroscope, and also filtered the accelerometer signal thanks to sensor fusion.
During the experiments, we observed that the noise from both the accelerometer and the gyroscope wasn’t as high as we expected, which is good news. However, even though the drift problem is corrected by the Kalman filter, the accelerometer measurements are still inaccurate during movements, because of the noise induced by the other acceleration components. These components depend only on the angle and its derivatives, so we thought of subtracting them at each step using the estimates from the previous step. We will test this tomorrow on the testbench if we have time.
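To give an idea of the drift tracking, here is a toy complementary-style estimator (not our actual Kalman code, and the gains are arbitrary): the bias estimate is slowly updated from the disagreement between the integrated gyro angle and the accelerometer angle.

```c
#include <math.h>

/* Toy drift tracker: integrate the de-biased gyro rate, then nudge both
 * the angle and the bias estimate toward the accelerometer reading. */
typedef struct {
    float angle;   /* filtered angle estimate (rad) */
    float bias;    /* estimated gyro drift (rad/s) */
} drift_filter_t;

void drift_filter_step(drift_filter_t *f, float gyro_rate,
                       float accel_angle, float dt)
{
    f->angle += (gyro_rate - f->bias) * dt;  /* integrate de-biased rate */
    float err = f->angle - accel_angle;      /* disagreement with accel */
    f->angle -= 0.02f * err;                 /* small angle correction */
    f->bias  += 0.005f * err;                /* even slower bias update */
}

/* Demo: 50 s of a motionless chassis seen through a gyro with a constant
 * 0.1 rad/s drift; returns the estimated bias. */
float run_drift_demo(void)
{
    drift_filter_t f = {0.0f, 0.0f};
    for (int i = 0; i < 5000; i++)
        drift_filter_step(&f, 0.1f, 0.0f, 0.01f);
    return f.bias;
}
```

With the chassis at rest, the whole gyro reading is drift, so the bias estimate converges toward it while the angle estimate stays near zero; a Kalman filter does the same job with gains derived from the noise statistics instead of fixed ones.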
We also finished the implementation of the CAN drivers and tested them on our laboratory boards, as we hadn’t received our mainboard yet. The tests were satisfying. But since at the time we performed the tests we hadn’t yet settled on the final version of the CAN protocol for the communication between our three boards (mainboard, sensorboard, HI board), we didn’t use any filters. Hopefully tomorrow we will be able to test the CAN protocol, as we will receive our mainboard.