Since we want to use LED strips for bouLED, turning these LEDs on with our microcontroller seemed like a good start.
Luckily, our STM32L475 has SPI controllers. I used one in the frequency range recommended in the APA102 LED strip datasheet, around 1 MHz, to display simple animations on the ribbon. After some fiddling with the wires and some endianness considerations, I could control almost all of the 133 LEDs (I'm still investigating why some won't obey). Thanks to ChibiOS, the current implementation already uses DMA, so SPI transactions don't take up much CPU time.
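For reference, the APA102 wire format can be sketched in Python (the firmware itself is C on ChibiOS; this only shows the byte stream shifted out over SPI, and the end-frame length follows the commonly cited "n/2 extra clocks" rule rather than the datasheet's fixed 32 bits):

```python
def apa102_frame(colors, brightness=31):
    """Build the SPI byte stream for a chain of APA102 LEDs.
    colors: list of (r, g, b) tuples, one per LED; brightness: 0..31."""
    colors = list(colors)
    data = bytearray(4)                      # start frame: 32 zero bits
    for r, g, b in colors:
        # LED frame: 0b111 marker + 5-bit global brightness, then B, G, R
        data += bytes((0xE0 | brightness, b, g, r))
    # End frame: at least n/2 extra clock cycles so the clock keeps running
    # long enough for the data to propagate to the last LED (1 byte = 8 clocks)
    data += b"\x00" * (len(colors) // 16 + 1)
    return bytes(data)
```

The byte order (blue, green, red) is one of the endianness considerations mentioned above.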
The next step will be to see whether we can increase the frequency. Then we'll need to handle more LED strips: the SPI controllers can only send data on a few GPIOs, while we might need to plug in 20 LED strips (or fewer, if we can chain some in series).
Since last week we’ve mainly done three things: defining formal specifications, upgrading the architecture, and starting to choose components.
We’ve created a spreadsheet that computes figures such as the maximum power, the maximum bandwidth, and the bandwidth per LED driver. By varying the parameters, we can immediately see their effects.
We’ve chosen to use 1200 LEDs in a 4:3 format. This seems sufficient to get a nice result without creating unnecessary complexity.
We also wanted to check the maximum throughput, which we think is our main bottleneck. Using 8 bits per color and 25 revolutions per second, we estimated a total throughput of around 50 Mb/s. Again, this seems reasonable, since we can transmit that much data over either WiFi or SDIO.
Finally, we needed to determine how many drivers and multiplexers to use. With 16-LED drivers and 8-column multiplexers, a block at the edge of the panel (and thus with the highest update frequency) would have a maximum throughput of around 8 Mb/s. We found several LED drivers made by TI with at least 20 MHz of bandwidth (and approximately the same throughput).
With all this, we have our final setup: a 40×30 LED panel, decomposed into 8×15 blocks, each controlled by a LED driver and a column multiplexer.
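As a cross-check of the spreadsheet figures, here is a small Python sketch. The number of angular positions refreshed per revolution is a hypothetical parameter, chosen here so that the result lands near the ~50 Mb/s figure quoted above:

```python
# Hypothetical parameters -- the real spreadsheet may use different ones.
N_LEDS = 40 * 30           # 1200 LEDs on a 4:3 panel
BITS_PER_LED = 3 * 8       # 8 bits per colour channel
REV_PER_S = 25             # revolutions per second
ANGULAR_STEPS = 70         # assumed refresh positions per revolution

total_bps = N_LEDS * BITS_PER_LED * REV_PER_S * ANGULAR_STEPS
print(total_bps / 1e6, "Mb/s")  # → 50.4 Mb/s
```

Varying any of these constants immediately shows its effect on the total, which is exactly what the spreadsheet is for.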
Upgrading the architecture
We’ve also been thinking about the fixed part of the project. We needed a way to control the motor and the IR LED used to synchronize the system. We thought about the usage flow, and finally came up with this:
Push a button to turn the power on. This should power two BLE modules, one fixed and one mobile.
When a file is selected or streamed through the WiFi interface, the mobile BLE module would tell the fixed part to start IR emission and motor rotation.
When playback is paused or stopped, the same process would stop the motor.
This means we’re going to have to design a (much simpler) circuit in the fixed part to control the motor and the LED.
We’ve also started choosing components. This is in fact tightly coupled to the architecture, since we need to know what exists in order to design the system.
In particular, I’ve been searching for a WiFi module. After some research, I found the ESP8266 and its successors in the ESP32 family. These chips are particularly well known, and well suited to our usage, because:
The processor is dual-core, with one core dedicated to the IP stack and the other available to the user.
The processing power is sufficient for most usage. We are not going to do much work apart from forwarding data to the FPGA.
It has about 500 KB of SRAM and up to 16 MB of external flash memory. This is clearly enough for our program and data.
It is SPI and SDIO capable.
It is cheap.
It supports WiFi, with a UDP throughput of around 30 Mb/s.
It supports BLE
It has a huge community, with lots of tutorials and programming guides.
For all these reasons, we are going to use two ESP32 modules. One will sit in the mobile part and handle the WiFi interface; it will also communicate over BLE with the second module, on the fixed part, which will drive both the IR LED and the motor.
Moreover, the ESP32 is supported by, and mostly used with, FreeRTOS. This is reason enough to choose FreeRTOS as our main OS.
I’ve been digging through TI’s website to compare their LED panel drivers. I noticed that two of their drivers are suited for “large high frequency multiplexed panels”. Furthermore, TI even wrote a document explaining the whole process of building a panel with those two drivers. Since this is pretty much what we’re doing, using them looked like a good idea.
Ambroise compared them with the other available drivers and noticed that, although their bandwidth is lower, they provide shorter rise/fall times and, most importantly, have buffers that store the whole frame.
For the LASER, we need to know what power to use, which determines the LASER’s class. It was a difficult trade-off: a class 3B LASER is more powerful, but also more dangerous, while I feared a class 3R LASER might not be powerful enough. Finally, after some research, we decided on a 5 mW class 3R LASER, which should be sufficient for projecting animations. Furthermore, the LASER will be green (520 nm), so the points will be more visible and distinguishable.
There are two types of LASER we can use: the LASER diode, which can be driven like a simple LED, and the Diode-Pumped Solid State (DPSS) LASER. We will use the first one.
With this information, we can now specify some characteristics, such as the supply voltage: 2.8–6.5 V DC.
We will control our LASER with TTL modulation rather than analog modulation. We can also adjust the beam’s brightness using PWM.
We also know that, in order to play an animation, we need 300 KB/s ≈ 2.4 Mb/s.
10BASE-T Ethernet provides 10 Mb/s, which is enough; a faster Ethernet standard such as 100BASE-T would give us a comfortable margin.
For the Wi-Fi, the standards differ in range and data rate. Almost any of them would do, but we prefer a comfortable range, so that LASMO can be configured from anywhere on a site. We will therefore use 802.11n Wi-Fi.
Now, we have to identify the micro-processor we will use.
Before we can start drawing the schematics for the PCB, it is crucial to know the exact components we will be using. So I looked for a suitable FPGA.
There are only two main competitors in the FPGA market: Altera (bought by Intel a few years ago) and Xilinx. Since the FPGAs used in the project rooms at Telecom are Altera Cyclone V chips, I narrowed my research to this family of components, which will allow us to test our software easily.
There are 6 product lines in the Cyclone V family, shown in the product table. We don’t need a hard processor system, since we will be using an external micro-controller, and we don’t need fast transceivers. This limits the choice to the Cyclone V E line, composed of 5 products varying in memory size, number of logic elements, and I/O count. The number of I/O pins and the memory size will not be a problem for us, even at the lowest tier. Given that I don’t know the size of our modules yet, I am inclined to choose (almost) the same number of logic elements as the FPGAs available at Telecom, leaving me with the 5CEBA5 model, available in different pin configurations. We will need to discuss this further with Alexis to determine what can realistically be soldered on the PCB.
I also created a wiki for our project, writing up what had already been discussed among us and listed in external documents.
The information in this post is taken from the official ILDA website. The ILDA format is intended for frame exchange purposes only. It is not optimized for space or speed, and it is not currently concerned with display issues such as point output rate. The format also does not include show information such as the timing of frames. Generally, the highest-level function the ILDA format can provide is a sequence of frames played back to form an animation. An ILDA file can hold 3D or 2D points, and provides colour either as indexed colour from a table or as 24-bit true colour. In order to estimate the required useful bit-rate, we analyse the worst case: format 4, with 3D points and “True Color”. All the numbers below are for one frame of N points:
a header of 32 bytes
N data records of 10 bytes each:
2 bytes for the X coordinate
2 bytes for the Y coordinate
2 bytes for the Z coordinate
1 byte for the status code
3 bytes for “True Color” (1 byte red, 1 byte green, 1 byte blue)
With frame_rate in fps, N in points per frame and k in pps :
N = k / frame_rate
bytes_per_frame = 32 + 10*N
byte_rate = bytes_per_frame * frame_rate
For our project, we can assume that the frame rate will not exceed 60 fps (and will generally be 25 fps) and that the point rate will not exceed 30 Kpps (commercial galvanometers can’t go faster). So, the data rate of an ILDA animation will not exceed about 300 KB/s.
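The worst-case estimate above translates directly into a few lines of Python:

```python
def ilda_byte_rate(pps, frame_rate, record_size=10, header_size=32):
    """Worst-case ILDA (format 4, 3D true-colour) stream rate in bytes/s.
    pps: scan speed in points per second; frame_rate: frames per second."""
    points_per_frame = pps / frame_rate          # N = k / frame_rate
    bytes_per_frame = header_size + record_size * points_per_frame
    return bytes_per_frame * frame_rate

# 30 Kpps galvanometers at 25 fps:
print(ilda_byte_rate(30_000, 25))  # → 300800.0 bytes/s, i.e. ~300 KB/s
```

Note that the header term contributes more at higher frame rates (more, smaller frames), but it stays negligible next to the point records.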
For bouLED, we will need to know its orientation. The orientation can be represented as a quaternion, but we haven’t implemented Madgwick’s algorithm yet. So, I wrote a Python script to simulate a rotation: it generates quaternions and writes them to its standard output.
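A minimal version of such a simulation might look like this (the rotation axis, rate and output format here are arbitrary placeholder choices, not the final protocol):

```python
import math

RATE_HZ = 50          # assumed output rate of the simulated sensor
OMEGA = math.pi / 2   # simulated angular speed, rad/s

def quat_about_z(angle):
    """Unit quaternion (w, x, y, z) for a rotation of `angle` about Z."""
    return (math.cos(angle / 2), 0.0, 0.0, math.sin(angle / 2))

t = 0.0
for _ in range(5):    # a few samples; the real script loops forever
    w, x, y, z = quat_about_z(OMEGA * t)
    print(f"{w:.6f} {x:.6f} {y:.6f} {z:.6f}")
    t += 1.0 / RATE_HZ
```

Once Madgwick’s algorithm is implemented, the same output format lets us swap the simulation for real sensor data without touching the consumer.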
In fact, the sphere will be an icosahedron: the regular polyhedron made of 20 equilateral triangles. So I drew an equilateral triangle in OpenSCAD in order to print it in 3D. We will then stick a LED strip on this triangle and judge whether the resolution is high enough.
In order to choose the components for our project, I decided to focus on the format of a LASER animation: ILDA. Indeed, the animation format dictates the bit-rate of our data flow, as well as the timing precision and spatial accuracy required of the galvanometers.
This information is essential for choosing all the components: the class of the SD card, the generation of the USB protocol, the Wi-Fi and Ethernet specifications, the requirements on the DACs (which control the galvanometers and the LASER), and finally the choice of the micro-controller.
In order to collect all this information, I created the wiki of our project, LASMO. I added the information about the ILDA format, as well as details on one of the galvanometers we may use.
For our project, we need to define which LASER type we can use. So, I did some research on the different classes. There are 5 classes of LASER, and we can use only 3 of them without a specific licence. However, we need enough power to properly see the projection. So, we will use a class 3R LASER, which must not be looked at for more than 0.25 s. We will need to implement a hardware control module to prevent any potentially dangerous exposure.
Now, we have to precisely define the LASER’s colour, its power, its voltage, its beam diameter, and the control speed required for our project.
The first thing I did for our project bouLED was to write a Python script to visualise the orientation of our test board.
bouLED will need to know its orientation, which will be computed from a MARG (Magnetic, Angular Rate, and Gravity) sensor using Madgwick’s algorithm. In order to easily check the results of this computation, we want to draw a 3D shape that rotates as we rotate the board with the MARG sensor. The orientation of the board is represented as a quaternion, and is sent to my Python script via a serial port.
We haven’t implemented Madgwick’s algorithm yet, so I just sent arbitrary quaternions to test the visualization. The latency was at first unacceptable (~0.5 s) when I used PySerial to read the serial port; piping the serial port into the script’s standard input made the latency imperceptible.
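To apply a received quaternion to the drawn shape, the script only needs one primitive: rotating a vertex by a unit quaternion. A self-contained sketch (independent of whatever plotting library is used):

```python
def _cross(a, b):
    """Cross product of two 3-vectors."""
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def rotate(v, q):
    """Rotate 3-vector v by unit quaternion q = (w, x, y, z),
    using the identity v' = v + 2 r x (r x v + w v) with r = (x, y, z)."""
    w, r = q[0], q[1:]
    t = _cross(r, v)
    t = (t[0] + w * v[0], t[1] + w * v[1], t[2] + w * v[2])
    u = _cross(r, t)
    return (v[0] + 2 * u[0], v[1] + 2 * u[1], v[2] + 2 * u[2])
```

Applying `rotate` to each vertex of the shape for every quaternion read from stdin gives the live visualization.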
For image stabilisation, there must be some way of computing bouLED’s 3D orientation. We chose Madgwick’s sensor fusion algorithm, fed by an AHRS (Attitude and Heading Reference System) sensor array comprising a magnetometer, an accelerometer and a gyroscope.
Alexis gave us STM32 «IoT node» Discovery kits, which feature this AHRS array among their extensive list of on-board sensors. I wanted to use ChibiOS, but this devboard uses an STM32L475VG microcontroller, whereas the closest supported microcontroller is in the STM32L476 series. Therefore, I had to port ChibiOS, which gave me some interesting insight into the inner workings of its MCU abstractions.
I had to manually query the LSM6DSL accelerometer/gyroscope combo over I2C, but fortunately, a driver for the LIS3MDL magnetometer is already provided in ChibiOS/Ex. For the moment, the unfiltered values are sent to the computer over J-Link RTT; we will eventually use UART-over-USB instead.
The final board will use a single-chip AHRS combo, because in the current setup the gyroscope and accelerometer are synchronized with each other, but not with the magnetometer.
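As a reference while we port the filter to the microcontroller, here is a hedged Python sketch of one step of the IMU-only (gyroscope + accelerometer) variant of Madgwick's filter; the full filter also folds in the magnetometer, and `beta` and `dt` are placeholder values:

```python
import math

def madgwick_imu_update(q, gyro, accel, beta=0.1, dt=0.01):
    """One IMU-only Madgwick step. q: quaternion (w, x, y, z);
    gyro in rad/s; accel in any consistent unit (it is normalized)."""
    q0, q1, q2, q3 = q
    gx, gy, gz = gyro

    # Rate of change of orientation from the gyroscope: 0.5 * q x (0, g)
    qdot = [0.5 * (-q1 * gx - q2 * gy - q3 * gz),
            0.5 * ( q0 * gx + q2 * gz - q3 * gy),
            0.5 * ( q0 * gy - q1 * gz + q3 * gx),
            0.5 * ( q0 * gz + q1 * gy - q2 * gx)]

    norm_a = math.sqrt(sum(a * a for a in accel))
    if norm_a > 1e-12:
        ax, ay, az = (a / norm_a for a in accel)
        # Objective: measured gravity minus gravity predicted from q
        f = [2 * (q1 * q3 - q0 * q2) - ax,
             2 * (q0 * q1 + q2 * q3) - ay,
             2 * (0.5 - q1 * q1 - q2 * q2) - az]
        # Jacobian of f with respect to (q0, q1, q2, q3)
        J = [[-2 * q2,  2 * q3, -2 * q0, 2 * q1],
             [ 2 * q1,  2 * q0,  2 * q3, 2 * q2],
             [      0, -4 * q1, -4 * q2,      0]]
        grad = [sum(J[r][c] * f[r] for r in range(3)) for c in range(4)]
        norm_g = math.sqrt(sum(g * g for g in grad))
        if norm_g > 1e-12:
            # Gradient-descent correction pulls q toward the accel reading
            qdot = [qd - beta * g / norm_g for qd, g in zip(qdot, grad)]

    # Integrate and renormalize
    q = [qi + qd * dt for qi, qd in zip(q, qdot)]
    norm_q = math.sqrt(sum(qi * qi for qi in q))
    return [qi / norm_q for qi in q]
```

The embedded version will be C, fixed-rate, and fed by the LSM6DSL driver, but the structure (gyro integration plus a normalized gradient-descent correction) stays the same.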