For bouLED, we will need to know its orientation. The orientation can be represented as a quaternion, but we haven’t implemented Madgwick’s algorithm yet. So I wrote a Python script that simulates a rotation: it generates quaternions and writes them to its standard output.
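Such a simulation script can be quite small. Here is a sketch of what it might look like — the axis, sample rate and the plain "w x y z" line format are assumptions, not the real serial protocol:

```python
import math
import time

def rotation_quaternion(axis, angle):
    """Quaternion (w, x, y, z) for a rotation of `angle` radians around `axis`."""
    ax, ay, az = axis
    norm = math.sqrt(ax * ax + ay * ay + az * az)
    s = math.sin(angle / 2) / norm
    return (math.cos(angle / 2), ax * s, ay * s, az * s)

if __name__ == "__main__":
    t = 0.0
    for _ in range(100):  # a few seconds of simulated motion
        # Constant-speed rotation around the z axis, one quaternion per line
        w, x, y, z = rotation_quaternion((0, 0, 1), t)
        print(f"{w} {x} {y} {z}", flush=True)
        t += 0.05
        time.sleep(0.01)
```

Flushing after each line matters here: without it, stdout buffering would delay the quaternions exactly like the serial-port latency described below.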
In fact, the sphere will be an icosahedron, the regular polyhedron made of 20 equilateral triangles. So I drew an equilateral triangle in OpenSCAD in order to 3D-print it. We will then stick an LED strip onto this triangle and judge whether the resolution is high enough.
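To size that triangle for a given sphere diameter, the icosahedron's geometry is enough: its 12 vertices are the cyclic permutations of (0, ±1, ±φ), and the edge length fixes how long each LED-strip segment can be. A small sketch (the target radius is just an example value):

```python
import math

PHI = (1 + math.sqrt(5)) / 2  # golden ratio

# The 12 vertices of an icosahedron with edge length 2:
# all cyclic permutations of (0, +/-1, +/-PHI)
VERTICES = []
for a in (-1, 1):
    for b in (-PHI, PHI):
        VERTICES += [(0, a, b), (a, b, 0), (b, 0, a)]

def edge_for_radius(r):
    """Edge length of an icosahedron inscribed in a sphere of radius r.

    For the vertices above, the circumscribed radius is sqrt(1 + PHI^2)
    and the edge length is 2, so we scale accordingly.
    """
    return 2 * r / math.sqrt(PHI * PHI + 1)

# Example: a 20 cm radius ball would need triangles with ~21 cm edges
print(round(edge_for_radius(0.20), 3), "m")
```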
In order to choose the components of our project, I decided to focus on a LASER animation format: ILDA. Indeed, the animation format dictates the bit rate of our data flow, as well as the timing precision and spatial accuracy required of the galvanometers.
This information is essential for choosing all the components: the class of the SD card, the generation of the USB protocol, the choice of the Wi-Fi and Ethernet specifications, the requirements of the DACs (to control the galvanometers and the LASER), and finally the choice of the microcontroller.
In order to collect all this information, I created the wiki of our project, LASMO. I put there the information about the ILDA format, as well as the specifications of one of the galvanometers we will be able to use.
For our project, we need to define which LASER type we can use, so I did some research on the different classes. There are 5 LASER classes, and we can use only 3 of them without a specific licence. However, we need enough power to see the projection properly. So we will use a class 3R LASER, which must not be looked at for more than 0.25 s. We need to implement a hardwired control module in order to avoid any potential exposure hazard. Now, we have to precisely define the LASER’s colour, power, voltage, beam diameter, and the control speed required for our project.
The first thing I did for our project bouLED was to write a Python script to visualise the orientation of our test board.
bouLED will need to know its orientation, which will be computed from a MARG (Magnetic, Angular Rate, and Gravity) sensor using Madgwick’s algorithm. In order to easily check the results of this computation, we want to draw a 3D shape and rotate it as we rotate the board carrying the MARG sensor. The orientation of the board is represented as a quaternion, which will be sent to my Python script via a serial port.
We haven’t implemented Madgwick’s algorithm yet, so I just sent arbitrary quaternions to test the visualisation. Latency was unacceptable at first (~0.5 s) when I used PySerial to read the serial port; piping the serial port to the script’s standard input made the latency imperceptible.
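On the receiving side, the script only needs to parse quaternion lines from stdin. A minimal sketch, assuming a plain "w x y z" text format and a hypothetical script name:

```python
import sys

def read_quaternions(stream=sys.stdin):
    """Yield (w, x, y, z) tuples parsed from lines of the form 'w x y z'."""
    for line in stream:
        parts = line.split()
        if len(parts) != 4:
            continue  # skip malformed or partial lines
        try:
            yield tuple(float(p) for p in parts)
        except ValueError:
            continue  # skip lines with non-numeric fields

# Feeding the serial port straight to stdin avoids the PySerial latency
# we observed, e.g.:  ./visualizer.py < /dev/ttyACM0
```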
For image stabilisation, there must be some way of computing bouLED’s 3D orientation. We chose the Madgwick sensor fusion algorithm using an AHRS (Attitude and Heading Reference System), which comprises a magnetometer, an accelerometer and a gyroscope.
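This is not the full Madgwick filter, but its prediction step — integrating the gyroscope rate into the quaternion via q̇ = ½ q ⊗ (0, ωx, ωy, ωz) — can be sketched in a few lines, which also gives a feel for the computation the MCU will run:

```python
def quat_mul(a, b):
    """Hamilton product of two quaternions given as (w, x, y, z) tuples."""
    aw, ax, ay, az = a
    bw, bx, by, bz = b
    return (
        aw * bw - ax * bx - ay * by - az * bz,
        aw * bx + ax * bw + ay * bz - az * by,
        aw * by - ax * bz + ay * bw + az * bx,
        aw * bz + ax * by - ay * bx + az * bw,
    )

def integrate_gyro(q, gyro, dt):
    """One explicit-Euler step of q' = 0.5 * q x (0, wx, wy, wz), renormalised.

    `gyro` is the angular rate in rad/s in the sensor frame; the full filter
    would additionally correct drift using the accelerometer and magnetometer.
    """
    dq = quat_mul(q, (0.0,) + tuple(gyro))
    q = tuple(qi + 0.5 * dqi * dt for qi, dqi in zip(q, dq))
    norm = sum(qi * qi for qi in q) ** 0.5
    return tuple(qi / norm for qi in q)
```

The accelerometer/magnetometer correction that makes this a real AHRS is a gradient-descent step added on top of this integration.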
Alexis gave us STM32 “IoT node” Discovery kits, which feature this AHRS array among their extensive list of on-board sensors. I wanted to use ChibiOS, but this devboard uses an STM32L475VG microcontroller, whereas the closest supported microcontroller is in the STM32L476 series. Therefore, I had to port ChibiOS, which gave me some interesting insight into the inner workings of its MCU abstractions.
I had to manually query the LSM6DSL accelerometer/gyroscope combo over I2C, but fortunately, a driver for the LIS3MDL magnetometer is already provided in ChibiOS/Ex. For the moment, the unfiltered values are sent to the computer over J-Link RTT; we will eventually use UART-over-USB instead.
The final board will use a single-chip AHRS combo, because in the current setup the gyroscope and accelerometer are synchronised with each other, but not with the magnetometer.
In this first post we would like to present the architecture of our project. It includes the choice of the type of components we will need, the way they are connected to each other and our estimations concerning their number.
We will dimension our components to handle a maximum specification, which we will very probably scale back according to the different bottlenecks we stumble upon during the project. Keeping last year’s project in mind, we are aiming at a maximum of about 2000 LEDs, a rotation speed of 50 turns per second and a refresh rate for each LED of 360 times per rotation. With 8 bits per colour channel and 8 bits for brightness, this works out to a maximum bandwidth of about 150 MB/s. In practice, we aim at an array of 40 by 30 LEDs in order to have a standard 4:3 display, given that we place the LEDs with the same pixel pitch in height and width.
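The back-of-the-envelope computation behind that figure, using only the numbers quoted above:

```python
# Worst-case streaming bandwidth for the maximum specification
leds = 2000                 # maximum LED count
turns_per_s = 50            # rotation speed
refreshes_per_turn = 360    # LED refreshes per rotation
bits_per_led = 8 * 3 + 8    # 8 bits per colour channel + 8 bits brightness

bandwidth_bytes = leds * turns_per_s * refreshes_per_turn * bits_per_led // 8
print(bandwidth_bytes / 1e6, "MB/s")  # 144.0 MB/s, i.e. about 150 MB/s
```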
We will control the LEDs in rectangular blocks. The size of each block is limited by both the pin count and the bandwidth of the LED driver we choose. We plan to choose an LED driver with at least the same characteristics as last year’s: the ability to drive 16 LEDs at the same time with an input bandwidth of 33 MHz.
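As a quick sanity check — assuming the 40×30 practical array and the per-LED figures above — such a driver is comfortably fast enough, and we can estimate how many of them we would need:

```python
import math

leds = 40 * 30                    # practical 4:3 array
leds_per_driver = 16              # one driver handles 16 LEDs
drivers = math.ceil(leds / leds_per_driver)

# Per-driver input rate at 50 turns/s, 360 refreshes/turn, 32 bits per LED
bits_per_s = leds_per_driver * 50 * 360 * 32
print(drivers, "drivers,", bits_per_s / 1e6, "Mb/s each")  # 75 drivers, 9.216 Mb/s
```

About 9.2 Mb/s per driver is well under the 33 MHz input bandwidth, so the driver bandwidth is not the bottleneck at this scale.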
We estimate the Wi-Fi throughput to be limited to about 150 Mb/s, roughly eight times less than the 150 MB/s computed above, which implies that we will need to implement a compression algorithm in our future file structure.
We will need:
a Wi-Fi module in order to stream data to the display
an SD card reader
a microcontroller unit in order to communicate with the Wi-Fi and SD card reader modules
an FPGA used to buffer the frames and drive the LED drivers
a synchronisation module in the form of an infrared sensor
an OS to handle the modules
voltage converters, depending on the choice of components, in order to supply the correct voltage to each one
For next time
We will choose the actual reference for each component.
Welcome to Télécom ParisTech’s ROSE (SE302 PRIM project) website. Here you will post and read all the news about the IoT and embedded systems class. Every student must create his or her own account on this site.