[bouLED] Adding Wifi

BouLED will be inside a plexiglas sphere, so there will be no button. The two ways of interacting with it are its orientation and a WiFi remote control, which I made with an ESP32 DevKit. It opens an access point and runs an HTTP server: you connect to it with your phone (or whichever WiFi-enabled device you like), select some parameters on a web page and send them to the ESP32, which in turn sends them to our main board over SPI. Naturally, there’s no noticeable latency.
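To give a concrete idea of that setup, here is a minimal sketch of a WiFi access point plus HTTP server on a recent ESP-IDF. The SSID, password, URI and the forward_over_spi() helper are made-up placeholders, not our actual code:

#include <string.h>
#include "nvs_flash.h"
#include "esp_netif.h"
#include "esp_event.h"
#include "esp_wifi.h"
#include "esp_http_server.h"

/* Handler for POST /params: read the parameters sent from the web page and
   forward them to the main board. forward_over_spi() is a hypothetical helper. */
static esp_err_t params_post_handler(httpd_req_t *req)
{
    char buf[128];
    int len = httpd_req_recv(req, buf, sizeof(buf) - 1);
    if (len <= 0)
        return ESP_FAIL;
    buf[len] = '\0';
    /* forward_over_spi(buf, len); */
    return httpd_resp_send(req, "OK", strlen("OK"));
}

void app_main(void)
{
    nvs_flash_init();
    esp_netif_init();
    esp_event_loop_create_default();
    esp_netif_create_default_wifi_ap();

    wifi_init_config_t init_cfg = WIFI_INIT_CONFIG_DEFAULT();
    esp_wifi_init(&init_cfg);

    wifi_config_t ap_cfg = {
        .ap = {
            .ssid = "bouLED",              /* placeholder SSID */
            .password = "notourpassword",  /* placeholder password */
            .max_connection = 4,
            .authmode = WIFI_AUTH_WPA2_PSK,
        },
    };
    esp_wifi_set_mode(WIFI_MODE_AP);
    esp_wifi_set_config(WIFI_IF_AP, &ap_cfg);
    esp_wifi_start();

    /* Start the HTTP server and register the /params endpoint. */
    httpd_handle_t server = NULL;
    httpd_config_t http_cfg = HTTPD_DEFAULT_CONFIG();
    httpd_start(&server, &http_cfg);

    httpd_uri_t params_uri = {
        .uri = "/params",
        .method = HTTP_POST,
        .handler = params_post_handler,
    };
    httpd_register_uri_handler(server, &params_uri);
}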

However, I noticed something strange: the board doesn’t automatically boot from flash when powered on, while it should (according to Espressif) when GPIO0 is not grounded. I tried connecting GPIO0 to Vcc instead of leaving it floating, and the board then boots from flash. Same behaviour on other devkits, except, fortunately, for the brand new one soldered on our main PCB.

Clever mistake

When Matthias was working with the simulation, he noticed the faces weren’t ordered correctly. The top 5 faces of the icosahedron, enumerated in clockwise order, would have indices 0 1 2 4 3 in the array of matrices giving their positions, instead of 0 1 2 3 4. Yet I could do all the pretty map projections without any issue, as my implementation didn’t rely on a particular ordering of the matrices. Here’s a pseudo-code version of the function that would compute these matrices:

mat4 face = some_correct_computation();
mat4 rotation = some_rotation(2*pi/5);

faces.add(face);
for (int i = 0; i < 4; i++) {
    faces.add(rotation * face);
    rotation = rotation * rotation;
}

The purpose of this was to place one face, rotate it by 2*pi/5 around the correct axis and use this as the second face, rotate the 2nd face by 2*pi/5 to get the 3rd one, etc. This is obviously not what this code does: the rotation matrix is squared at each iteration instead of being composed with an extra 2*pi/5 rotation, so the successive faces are rotated by 1, 2, 4 and 8 fifths of a turn, and since 8 ≡ 3 (mod 5) the last two faces come out swapped, hence the 0 1 2 4 3 ordering. But this actually computes the right matrices! Basic group theory in Z/5Z, you’d say. Funny bug, I say.
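For comparison, here is what the intended loop would look like, in the same pseudo-code conventions (rotate the previous face by 2*pi/5 at each step, yielding the 0 1 2 3 4 ordering):

mat4 face = some_correct_computation();
mat4 rotation = some_rotation(2*pi/5);

faces.add(face);
for (int i = 0; i < 4; i++) {
    face = rotation * face;   // advance to the next face
    faces.add(face);
}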

Finishing the firmware

For bouLED’s different features, we each worked on a different devboard, because we can’t all work on our only STM32H7 devboard. One git merge after another, we’re done making everything work on the same board.

Finishing the hardware

We soldered the LEDs on the triangular PCBs last week. That was not too painful thanks to our teachers and a pick and place machine.

The pick-and-place machine in action, x16 speed up

We’ll now start mounting the whole thing.

[bouLED] News from the LED panels

Back from a long posting hiatus… The triangles are almost finished. Some of them are missing connectors, and others are missing a second Micromatch connector for chaining. Unsurprisingly, when ordering the LEDs from China, we didn’t get APA102 LEDs but a clone: the SK9822. It behaves a bit differently, but this article sums up the differences nicely. I had to tweak my LED strip driver, but it wasn’t a big deal.

Another surprise: everyone thought that we had 20 triangular PCBs, but no: we found another one. This is good news, especially given Murphy’s law.

Speaking of Murphy’s law, on Friday we tested a triangle whose power supply started to smoke… I don’t have any pictures, but on Saturday Alexis and I found out that there was solder paste under the inductor of the buck converter, which short-circuited it and fried the IC. Fortunately, after changing the regulator and the inductor, everything works as expected, as far as power supplies are concerned.

However, as for the LEDs, it’s another story. I tested all the triangles, and 9 of them don’t work properly: there are dead SK9822s on them. It’s easy to find where the problem comes from, though, with the testing code I wrote and an oscilloscope as an overkill logic probe. Replacing the dead LEDs should do the trick.

Here’s what the test setup looks like:

There’s a program I wrote on the STM32F7 Discovery board that sends a simple test animation to the triangle. Here’s what it looks like on a good panel:

On defective panels, the white flash at the beginning stops at the failure point.
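For context, pushing a frame to an APA102/SK9822 chain over SPI looks roughly like the sketch below. spi_send() and N_LEDS are hypothetical placeholders, not the actual test code:

#include <stdint.h>

#define N_LEDS 100   /* illustrative number of LEDs per triangle */

extern void spi_send(const uint8_t *buf, int len);  /* hypothetical SPI wrapper */

/* Send one frame: a 32-bit start frame of zeros, one 32-bit word per LED
   (brightness + BGR), then enough extra clocks for data to reach the last LED. */
static void send_frame(const uint8_t rgb[N_LEDS][3], uint8_t brightness)
{
    const uint8_t start[4] = {0x00, 0x00, 0x00, 0x00};
    spi_send(start, 4);

    for (int i = 0; i < N_LEDS; i++) {
        uint8_t led[4] = {
            0xE0 | (brightness & 0x1F),  /* 111 + 5-bit global brightness */
            rgb[i][2],                   /* blue  */
            rgb[i][1],                   /* green */
            rgb[i][0],                   /* red   */
        };
        spi_send(led, 4);
    }

    const uint8_t end[4] = {0xFF, 0xFF, 0xFF, 0xFF};
    for (int i = 0; i < (N_LEDS + 63) / 64; i++)
        spi_send(end, 4);
}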

As for the main board, it’s finished and we’re writing its firmware.

[LASMO] LASMO progress

Since our last posts, the project has evolved. After designing the schematic and the PCB for the board, we started developing the software, beginning with tests for the board. So far, we have tests that:
- display a triangle wave on the outputs of the internal DACs of the F706 board, in order to later drive the galvanometers (see the sketch after this list);
- send various color values to the external DACs, which tell the laser which color to display;
- read a file from the SD card;
- allow UART communication between the ESP32 DevKitC and the F706 board;
- obtain an IP address via the Ethernet port;
- display, in volts, the analog values read on the ADCs.
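As a rough illustration of the triangle-wave test mentioned above, here is a minimal sketch: fill a buffer with one period of a triangle wave and push it to the DAC. dac_write_sample() and the constants are hypothetical placeholders, not our actual test code:

#include <stdint.h>

#define WAVE_SAMPLES 256
#define DAC_MAX      4095   /* 12-bit DAC full scale */

static uint16_t wave[WAVE_SAMPLES];

extern void dac_write_sample(uint16_t sample);  /* hypothetical DAC wrapper */

static void fill_triangle_wave(void)
{
    int half = WAVE_SAMPLES / 2;
    for (int i = 0; i < WAVE_SAMPLES; i++) {
        /* ramp up during the first half of the period, down during the second */
        int level = (i < half) ? i : (WAVE_SAMPLES - 1 - i);
        wave[i] = (uint16_t)((level * DAC_MAX) / (half - 1));
    }
}

static void play_triangle_wave(void)
{
    fill_triangle_wave();
    for (;;)
        for (int i = 0; i < WAVE_SAMPLES; i++)
            dac_write_sample(wave[i]);
}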
We also wrote the board files and configurations for the LASMO board, since we are currently developing on the E407 board.
In addition, Pierre is currently writing the code that displays a shell over the J-Link, which will let us test all the functions live.
Luc is writing the ILDA file decoder, which will let us send data to the galvanometers as well as to the lasers.
As for me, I am currently configuring the ESP32’s WiFi.

[CyL3D] FPGA debugging

We are using Intel’s PIO core IP to implement debug registers, accessible from the HPS, to communicate easily with the FPGA. After checking that we could read and write such a register, I implemented a shell, accessible from a Linux terminal, inspired by ChibiOS’s.

I used those IPs as bypass registers for our FPGA modules. The implementation is still ongoing, but I currently have a shell command to test that the LEDs are correctly soldered (by lighting all of them in a dim white).
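As an illustration, poking such a PIO register from Linux user space can be done by mmap()ing the lightweight HPS-to-FPGA bridge through /dev/mem. The offset and function name below are illustrative, not our actual shell code:

#include <fcntl.h>
#include <stdint.h>
#include <sys/mman.h>
#include <unistd.h>

#define LWH2F_BASE  0xFF200000u  /* lightweight HPS->FPGA bridge on Cyclone V */
#define PIO_OFFSET  0x0000u      /* hypothetical offset of the debug PIO */
#define MAP_SIZE    0x1000u

int pio_write(uint32_t value)
{
    int fd = open("/dev/mem", O_RDWR | O_SYNC);
    if (fd < 0)
        return -1;

    volatile uint32_t *base = mmap(NULL, MAP_SIZE, PROT_READ | PROT_WRITE,
                                   MAP_SHARED, fd, LWH2F_BASE);
    if (base == MAP_FAILED) {
        close(fd);
        return -1;
    }

    base[PIO_OFFSET / 4] = value;  /* write to the PIO data register */

    munmap((void *)base, MAP_SIZE);
    close(fd);
    return 0;
}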

The next step will be to complete all of our FPGA modules, as well as their basic bypass registers and test benches (this should be done by Monday).

[bouLED] Trying a projection algorithm

Over the holidays, I tried implementing a projection algorithm to display an image on the icosahedron. Roughly speaking, the idea is, for each LED of the icosahedron, to find out which triangle of the un-rotated icosahedron it falls in. This step works fine. Then, on this triangle, we can set up axes, get 2D coordinates, and finally find out which pixel (i.e., which colour) that is. It turns out that this step doesn’t work as well as it should: there were annoying glitches in the coordinates, for instance, and it was a nightmare to debug.
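For reference, here is a minimal sketch of the 2D-coordinate step under that approach: project the LED’s position onto the triangle’s two edge vectors. The vec3 type and helpers are assumptions for the sketch, not the project’s actual math code:

typedef struct { float x, y, z; } vec3;

static float dot(vec3 a, vec3 b) { return a.x*b.x + a.y*b.y + a.z*b.z; }
static vec3  sub(vec3 a, vec3 b) { return (vec3){a.x - b.x, a.y - b.y, a.z - b.z}; }

/* Compute the (u, v) coordinates of point p in triangle (a, b, c),
   expressed along the edges a->b and a->c. */
static void triangle_uv(vec3 p, vec3 a, vec3 b, vec3 c, float *u, float *v)
{
    vec3 e0 = sub(b, a), e1 = sub(c, a), ep = sub(p, a);
    float d00 = dot(e0, e0), d01 = dot(e0, e1), d11 = dot(e1, e1);
    float dp0 = dot(ep, e0), dp1 = dot(ep, e1);
    float denom = d00 * d11 - d01 * d01;
    *u = (d11 * dp0 - d01 * dp1) / denom;   /* coordinate along a->b */
    *v = (d00 * dp1 - d01 * dp0) / denom;   /* coordinate along a->c */
}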

On Monday, we decided to ditch this idea and go with Lucas’ method, which works fine and, what’s more, is simple. Simplicity is underrated.

Tomorrow, we’ll build and test the main PCB.

[bouLED] Drawing on bouLED

During the holidays, I worked (a bit) on displaying an image on bouLED. I first chose the simplest method that came to my mind: the equirectangular projection, used for world maps.

This is a very convenient representation because it’s simple. Storing a rectangular image in memory is straightforward. And, given the position of one LED as a height and an angle around the vertical axis, computing its color is easy: these two values are the actual latitude and longitude.

Performance

Computing this angle from the cartesian coordinates, however, involves a trigonometric function, atan2. This is probably the most expensive part of the computation. If we cannot avoid it, we should at least find an efficient implementation of it.

On the other hand, getting the cartesian coordinates of each LED is cheap. The faces of bouLED are flat, so you only need additions if you already have the positions of the triangle’s corners.
As we chose an MCU with an FPU, using floating-point numbers was a no-brainer.
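To make the lookup concrete, here is a minimal sketch of it, assuming LED positions normalized onto the unit sphere and an illustrative W x H image; none of these names are the project’s actual ones:

#include <math.h>
#include <stdint.h>

#define W 256
#define H 128
#define PI_F 3.14159265358979f

extern uint32_t img[H][W];   /* hypothetical equirectangular image, one pixel per entry */

/* (x, y, z): normalized LED position. Returns the colour of the pixel under it. */
uint32_t led_color(float x, float y, float z)
{
    float lon = atan2f(y, x);   /* longitude, in [-pi, pi]     */
    float lat = asinf(z);       /* latitude,  in [-pi/2, pi/2] */

    int u = (int)((lon + PI_F) / (2.0f * PI_F) * (W - 1));
    int v = (int)((lat + PI_F / 2.0f) / PI_F * (H - 1));

    return img[v][u];
}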

Of course, the first thing I tried to display on the simulation was a world map. Here are the results:

The map is stabilized while the icosahedron rotates
Without the icosahedron rotating
Camera rotating around rotating bouLED

The simulation uses the exact same code as the MCU. On our 400 MHz STM32H7, the function that takes a quaternion as argument and computes each LED’s color takes about 10 ms to run. This is a very small yet perceptible lag. The function can still be optimized, but we’ll also need to add some smoothing/antialiasing.

Why are the LEDs so big in the gif? Aren’t they smaller in reality? Yes they are, but the result is far less convincing with small points of light. That means we may need to “smoke” the plexiglass ball for the image to be discernible.

Also, the equirectangular representation makes it easy to draw on the ball, but leads to some distortion. Matthias is currently working on another, possibly faster method.

We should receive the PCBs very soon. In the meantime, I’ll be playing with the WiFi module.


[CyL3D] HPS FPGA

I’ve been working on the HPS/FPGA communication, and how we will handle buffering.

Architecture

I came up with this completed version of the architecture:

FPGA buffers will be fed by a DMA Controller. The DMAC will read data from RAM using the fpga2hps bus, and write to an AXI bus connected to all FPGA buffers.

On the HPS side, we will have a kernel module controlling the DMAC through the hps2fpga bus. It will be responsible for sequencing the DMA transfers (between all buffers), as well as for their timing.

The DMA source will be a pool of contiguous RAM buffers allocated by the driver.

Finally, the driver will expose a Linux char device. Through this device, a user-space app will be able to feed the buffers with data from various sources (an SD card, a WiFi stream).
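A minimal sketch of that char device, assuming a misc device and made-up cyl3d_* names (the real driver would use dma_alloc_coherent() and drive the DMAC as described above):

#include <linux/fs.h>
#include <linux/miscdevice.h>
#include <linux/module.h>
#include <linux/slab.h>
#include <linux/uaccess.h>

#define BUF_SIZE (64 * 1024)   /* illustrative buffer size */

static char *dma_buf;

/* write(): copy user data into the (DMA-able) buffer. */
static ssize_t cyl3d_write(struct file *f, const char __user *data,
                           size_t len, loff_t *off)
{
    if (len > BUF_SIZE)
        len = BUF_SIZE;
    if (copy_from_user(dma_buf, data, len))
        return -EFAULT;
    /* Here the real driver would program the DMAC (through the hps2fpga bus)
       so that it reads this buffer and fills an FPGA buffer. */
    return len;
}

static const struct file_operations cyl3d_fops = {
    .owner = THIS_MODULE,
    .write = cyl3d_write,
};

static struct miscdevice cyl3d_dev = {
    .minor = MISC_DYNAMIC_MINOR,
    .name  = "cyl3d",
    .fops  = &cyl3d_fops,
};

static int __init cyl3d_init(void)
{
    dma_buf = kmalloc(BUF_SIZE, GFP_KERNEL);  /* real driver: dma_alloc_coherent() */
    if (!dma_buf)
        return -ENOMEM;
    return misc_register(&cyl3d_dev);
}

static void __exit cyl3d_exit(void)
{
    misc_deregister(&cyl3d_dev);
    kfree(dma_buf);
}

module_init(cyl3d_init);
module_exit(cyl3d_exit);
MODULE_LICENSE("GPL");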

Buffering

If you think about it, each half frame-slice will need to be displayed twice for it to be seen from all points of view. Thus, the drawing flow looks like this (where each line is a half turn, and the two entries represent the frame being drawn by each half of the panel):

  • F(n, right) ; F(n+1, left)
  • F(n+1, left) ; F(n+1, right)
  • F(n+1, right); F(n+1, left)
  • F(n+1, left) ; F(n+2, right)

We can see that the first half always displays the data that the second half displayed during the previous half-turn. Thus, if we swap buffers smartly across drivers, we can use half-a-turn buffers. This results in 3 buffers for 2 drivers, storing F(n, right), F(n+1, left) and F(n+1, right); at any given time, one of them is not being displayed and can be refilled by the DMAC with the next half.

To start with simpler code, we will simplify to 1 full-turn buffer per driver (+33% data stored). We will optimize only if needed, that is, if the HPS -> FPGA throughput is not sufficient to transfer that much data.

In RAM, we will have a double buffer. One where we write data for the next turn, and one with data that should be used during this turn.
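In code, that double buffer can be as simple as a ping-pong pair swapped at each turn boundary; this is only a sketch with made-up names:

#include <stdint.h>

#define TURN_BYTES (64 * 1024)   /* illustrative size of one turn of data */

static uint8_t turn_buf[2][TURN_BYTES];
static int current_turn;   /* index of the buffer being displayed this turn */

/* Called at each turn boundary (e.g. on the rotation-sync signal):
   the DMA now sources from the freshly written buffer, and the next
   turn's data is written into the other one. */
static void swap_turn_buffers(void)
{
    current_turn ^= 1;
}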

[bouLED] Done with the PCBs

We finished the PCB design this week (they are currently being reviewed by Alexis). The voltage regulator placed at the center of the triangular PCB had a few constraints, but routing the LEDs was simple enough that I could use the autorouter and just make a few modifications, like shortening the VCC paths and widening those traces.

The voltage regulator

We placed a few mounting holes, to screw the triangles onto the folded metal sheets that hold the icosahedron together.

The triangular PCB

Until the PCBs are delivered, we’ll write as much software as possible. We’ll use our devkit to play with the ESP32 and an SD card reader, and, most importantly, we’ll write the display algorithm (we fortunately happen to have a simulation to help us).

[LASMO] ILDA decoder and SD card

Last week, while Pierre was working on routing the PCB, I worked on software. What I’ve done is basically a function to fetch a file from the SD card, and a function that reads and interprets it as an ILDA file. This is the first brick of our program chain; the data will then be processed by the main program to send the appropriate commands to the galvanometers and the laser.
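To give an idea of the decoder’s first step, here is a hedged sketch of parsing a 32-byte ILDA section header; read_bytes() is a hypothetical wrapper around the SD-card file API, not our actual code:

#include <stdint.h>
#include <string.h>

struct ilda_header {
    uint8_t  format;        /* 0/1: 3D/2D indexed colour, 4/5: 3D/2D true colour */
    uint16_t num_records;   /* number of point records following the header */
    uint16_t frame_number;
    uint16_t total_frames;
};

extern int read_bytes(void *dst, int len);  /* hypothetical: read from the open file */

int ilda_read_header(struct ilda_header *h)
{
    uint8_t raw[32];
    if (read_bytes(raw, sizeof(raw)) != sizeof(raw))
        return -1;
    if (memcmp(raw, "ILDA", 4) != 0)
        return -1;
    h->format       = raw[7];
    h->num_records  = (raw[24] << 8) | raw[25];   /* ILDA fields are big-endian */
    h->frame_number = (raw[26] << 8) | raw[27];
    h->total_frames = (raw[28] << 8) | raw[29];
    return 0;
}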

[CyL3D] Feeding the LED drivers

I started writing the code for the FPGA module that will feed the LED drivers. It corresponds to the “LED Driver driver” in our FPGA architecture diagram. In order to have a functional system as soon as possible once we receive the PCB, I am also building a test bench for the module. So far I have assertions on the timing requirements (hold times, setup times, etc.), on the validity of the output data and on the coherence of the internal state machine. I will first finish the test bench and then the module.

TLC5957 Timing Requirements

Concerning the timing requirements, there are 7 different LAT commands. The datasheet of the TLC5957 gives the setup time before a rising edge of SCLK for 6 of them. However, a diagram in the driver’s application note (Figure 7 of SLVUAF0) suggests that there is also a setup time to respect for the 7th command. Does anyone from spirose have additional information on that?