[LASMO] Specifications

We have refined the various specifications of our project. To sum up, LASMO is a laser show projector. It will be able to display 2D and 3D ILDA animations, and all ILDA formats are supported. LASMO can display up to 30 kpps (at ±20° optical) with a resolution of 4096 × 4096. We use an RGB LASER in order to display up to 16 million colors.

Moreover, 3D animations are projected so that they look three-dimensional from a given point of view. ILDA animations can be streamed either from internal memory (an SD card) or from a PC via Wi-Fi or Ethernet.
LASMO also implements the Art-Net protocol over Ethernet, so that it can be controlled by a standard light show controller. In addition, it will be possible to synchronize animations with the beat of a stereo XLR audio input.
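As an aside, here is a rough idea of what receiving Art-Net looks like. The packet layout follows the public Art-Net specification; the code itself is purely illustrative (a PC-side Python sketch, not LASMO's actual implementation, which will run on the ESP32):

    # Illustrative sketch: decode ArtDmx packets (Art-Net's DMX data message)
    # coming from a light show controller over UDP.
    import socket
    import struct

    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind(("", 6454))                    # standard Art-Net UDP port

    while True:
        packet, addr = sock.recvfrom(1024)
        if packet[:8] != b"Art-Net\x00":
            continue
        (opcode,) = struct.unpack_from("<H", packet, 8)   # little-endian OpCode
        if opcode != 0x5000:                 # 0x5000 = OpDmx
            continue
        sequence, physical, subuni, net = packet[12:16]
        (length,) = struct.unpack_from(">H", packet, 16)  # big-endian data length
        dmx = packet[18:18 + length]         # up to 512 DMX channel values
        # ...map DMX channels to LASMO commands here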

Furthermore, given a picture of the projection surface, LASMO will be able to correct an animation deformed by that surface, and project a corrected version that looks right from the picture's point of view.

[LASMO] Git workflow and beginning with FreeRTOS

In two days, we defined the architecture of our project. Most notably, we established that we'll be using three microcontrollers: a main controller from the STM32F7 family; a smaller one from the STM32F3xx family, whose main purpose will be to switch off the laser when the beam doesn't move, in order to comply with safety standards; and an ESP32 for the network.

It then seemed logical to start learning FreeRTOS, as we were used to working with ChibiOS in practical courses. That's why I copied a demo for STM32F4xx boards, which we happen to have used during the practical sessions. Next week I'll try to learn how the kernel works in order to set up an SPI interface and maybe, if I have the time, code to access an SD card, using the STM32F4xx. If functional, the code will eventually be reused on our final STM32F7.

We chose the “GitHub workflow” because it's simpler. GitLab's “Merge Request” feature will help us follow this flow properly.

[bouLED] 3D printing

We modeled one equilateral triangle of our icosahedron using OpenSCAD. This was the first time I used a 3D printer. After two hours, the first print was finished.

The first version

As you can see, the main issue was that the LED strip could not fit into the holes: when widening the holes, I didn't update the spacing between two LEDs or between two rows, so the alignment no longer matched our LED strip. I fixed it and tried a new print. After another two hours, the second version was ready.

The second version

The alignment is better, but it's still not perfect. The exact cause remains to be determined; I think we need more accurate dimensions (sub-millimeter), if that is even achievable with the 3D printer available. Tomorrow I will fix the model and print another triangle.

[CyL3D] More work on 3D data streaming and display

Over the weekend, we finally agreed on how we were going to deal with our large data throughput.

At first we wanted to use an MCU, but we noticed that if we were to use Wi-Fi and stream the data to our system, we would need to:

  • Have a reliable Wi-Fi module that works properly in an environment where the 2.4 GHz band is already in use. For instance, people usually carry phones with Bluetooth and Wi-Fi active, and this should not break our system.
  • Be able to buffer the data before displaying it, as Wi-Fi latency may vary. Without external memory, an MCU cannot absorb much more than a few milliseconds of jitter.

Thus, we decided to use a System-on-Module instead. Ambroise talks about it in this post.

With an FPGA, a dual-core ARM processor with an FPU, embedded Linux and 1 GB of RAM, our system will now be rather powerful. We are therefore considering implementing some drawing primitives directly on the system, rather than relying entirely on the computer to stream all the raw data.

Last time, I explained how I tried to make Blender work with VTK. I managed to voxelize a mesh and view every single voxel in the FIJI visualization software. Unfortunately, that voxelization is done on a regular cubic grid, which is not what we want.
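For the curious, that kind of cubic-grid voxelization only takes a few lines with VTK's Python wrappers. This sketch is illustrative (the file name and grid size are made up), not the exact pipeline I used:

    # Sample an STL mesh onto a regular cubic grid with vtkVoxelModeller.
    import vtk

    reader = vtk.vtkSTLReader()
    reader.SetFileName("mesh.stl")            # hypothetical input file
    reader.Update()

    voxelizer = vtk.vtkVoxelModeller()
    voxelizer.SetInputConnection(reader.GetOutputPort())
    voxelizer.SetSampleDimensions(64, 64, 64)  # regular cubic grid
    voxelizer.SetModelBounds(reader.GetOutput().GetBounds())
    voxelizer.Update()

    volume = voxelizer.GetOutput()             # vtkImageData, exportable for FIJI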

I succeeded in making a grid that represents our system: a cylinder swept by our 40×30 screen rotating around its Z axis. On the cylindrical grid below, every white cell represents one LED at one of the 128 angular steps.

Custom VTK Grid used by our system

Now I am trying to fill this grid with our mesh's color data. Then we simply need to extract each slice of the cylinder to know the LED configuration at each step.
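For reference, here is a minimal sketch of how such a cylindrical grid can be built with VTK's Python wrappers, assuming the 40 columns span the radius and the 30 rows span the height. The dimensions come from the posts above; units and names are illustrative, and this isn't my actual script:

    # A vtkStructuredGrid whose cells are the positions swept by a 40x30
    # LED panel over 128 angular steps around the Z axis.
    import math
    import vtk

    N_STEPS, N_COLS, N_ROWS = 128, 40, 30    # angular steps, radial LEDs, rows
    RADIUS, HEIGHT = 1.0, 0.75               # arbitrary units

    points = vtk.vtkPoints()
    for k in range(N_ROWS + 1):              # height (Z)
        for j in range(N_COLS + 1):          # radius
            for i in range(N_STEPS + 1):     # angle (fastest-varying index)
                theta = 2 * math.pi * i / N_STEPS
                r = RADIUS * j / N_COLS
                points.InsertNextPoint(r * math.cos(theta),
                                       r * math.sin(theta),
                                       HEIGHT * k / N_ROWS)

    grid = vtk.vtkStructuredGrid()
    grid.SetDimensions(N_STEPS + 1, N_COLS + 1, N_ROWS + 1)
    grid.SetPoints(points)
    # Cell scalars (one RGB triple per LED per step) can then hold the
    # colors sampled from the mesh.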

[bouLED] Simulation, math and component choice

Towards a blinky simulator

It'd be convenient if the simulator allowed us to test the projection algorithm. To do this, there should be coloured spheres “stuck” to the facets of the icosahedron, arranged much like the holes in my last post‘s triangle. I haven't written the sphere layout algorithm yet, but as a proof of concept, I put a sphere above the icosahedron and made it orbit the icosahedron when the latter is rotated. This is done by rotating the sphere's position vector using the incoming quaternion. Once all the LEDoids are created, for each input quaternion the software will have to loop through all the spheres and rotate them around the icosahedron, so that they appear not to move in its frame of reference.
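Here's a minimal numpy sketch of that operation, rotating a position vector by a unit quaternion; the names are illustrative and this isn't the simulator's actual code:

    import numpy as np

    def quat_mul(q, r):
        # Hamilton product of two quaternions stored as (w, x, y, z)
        w1, x1, y1, z1 = q
        w2, x2, y2, z2 = r
        return np.array([
            w1*w2 - x1*x2 - y1*y2 - z1*z2,
            w1*x2 + x1*w2 + y1*z2 - z1*y2,
            w1*y2 - x1*z2 + y1*w2 + z1*x2,
            w1*z2 + x1*y2 - y1*x2 + z1*w2,
        ])

    def rotate(p, q):
        # Rotate 3-vector p by unit quaternion q: p' = q * (0, p) * conj(q)
        conj = q * np.array([1.0, -1.0, -1.0, -1.0])
        return quat_mul(quat_mul(q, np.array([0.0, *p])), conj)[1:]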

That’s a scene graph: the LEDs are children of the icosahedron, and making the icosahedron rotate makes the LEDs rotate. PyQtGraph doesn’t include scene graph handling, but this is a pretty simple one, so doing it manually will probably be less hassle than picking another library for this (VisPy, for instance).

In the end, the simulated projection algorithm should be able to change the spheres’ colours: this will allow us to test it.

First ideas for a projection algorithm

There is one physical icosahedron, and a virtual, stable icosahedral image, which we'll call V. To find which facet of V a LED is in, rotate the LED's position vector using the quaternions from the sensor fusion algorithm, normalize it, and find the facet of V whose normalized normal (rolls off the tongue, huh ?) vector has the highest dot product with our normalized LED position vector.
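In code, the lookup boils down to an argmax over the 20 facet normals. A hedged sketch, reusing rotate() from the earlier snippet (names illustrative):

    import numpy as np

    def find_facet(led_pos, q, facet_normals):
        # facet_normals: (20, 3) array of unit normals of V's facets
        p = rotate(np.asarray(led_pos, dtype=float), q)
        p /= np.linalg.norm(p)                    # normalized LED direction
        return int(np.argmax(facet_normals @ p))  # facet whose normal is closest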

Once the facet of V is known, there remains the task of finding the right colour, but I haven't given it much thought yet. Finding this triangle for each of the 2000+ LEDs is going to be really computationally expensive, so perhaps we could use some kind of dichotomy, using a first dot product to find which hemisphere we're interested in.

MCU choice

So far, we'd like to use an STM32F7 MCU, especially for its FPU and L1 cache (16 KB + 16 KB for instructions and data in STM32F7x8/9 MCUs!). A specific STM32F7 model has not been chosen yet.

[bouLED] More on the LED strip

Yesterday I stopped making knots with the wires: I soldered them to the LED strip on one end, and put pins on the other.

Before / After

The issues I had controlling the last LEDs of the strip were actually due to the APA102 datasheet being wrong, in addition to being poorly translated from Chinese into English. The “end frame” of the SPI message it describes is indeed required, to supply more clock cycles than the length of the payload, but not enough of them if you have a hundred LEDs, as explained on this very informative blog. Since the data signal is delayed by half a clock cycle by each LED, the length of the end frame should be proportional to the number of LEDs.

Then there's the brightness setting, 5 bits in each LED frame, which everyone agrees to set to 0b11111 and forget about.
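Putting the start frame, the brightness bits and the stretched end frame together, a frame builder could look like the sketch below. It follows the assumptions above (not a vetted driver), and uses zero bytes for the end frame, whose only purpose is to provide clock edges:

    # Illustrative APA102 frame builder: 32-bit start frame, one 4-byte LED
    # frame per LED with global brightness forced to 0b11111, then an end
    # frame long enough to provide the extra n/2 clock cycles the LEDs need.
    def apa102_frame(colors):
        # colors: iterable of (r, g, b) tuples, one per LED
        frame = bytearray(4)                       # start frame: 32 zero bits
        for r, g, b in colors:
            frame += bytes((0b11100000 | 0b11111,  # 3-bit header + 5-bit brightness
                            b, g, r))              # APA102 wants blue, green, red
        n = len(frame) // 4 - 1                    # number of LEDs
        frame += bytes(4 + (n + 15) // 16)         # zero bytes, only there for clocking
        return bytes(frame)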

It turns green on the 100th time

We should soon have a 3D-printed triangle that fits the LEDs (see my friends' posts). The LEDs will then have to be rearranged: we'll cut the ribbon into smaller strips and re-solder them. Then we'll look at the signal coming out at the end of the ribbon and see if we can put another one in series.

To build the 19 other faces, we might need a few more LED strips. We did some measurements, and one 5 V LED strip draws a bit more than 1 A. So far our test board could supply enough power at a reasonable brightness, but with more than 15 times as many LEDs, we're looking at big batteries if bouLED is to be autonomous.


[CyL3D] SoM exploration

Following my previous post regarding the choice of an FPGA, we found out that Cyclone V models only come in BGA packages, which would be very impractical for us to solder on our PCB. I therefore focused my research on Systems on Module (SoM), which have the advantage of providing an easier-to-solder pin layout as well as an already-built system around the FPGA.

In order to ensure that the variations in latency over Wi-Fi (up to several dozen ms according to our measurements) will not compromise the display of frames, we have to consider adding more memory to our system. With a 24-bit color depth and a 30 Hz refresh rate, we would need more than the 4,460 Kbit of embedded memory on the 5CEBA5 if we want to absorb a 50 ms latency spike. Given that our final presentation will most likely happen in a Wi-Fi-saturated environment, we have to plan for more memory.
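As a back-of-the-envelope check, using the figures from our posts (40×30 panel, 128 steps per revolution, 24-bit color, 30 Hz refresh) and a hypothetical 50 ms spike:

    bits_per_rev = 40 * 30 * 128 * 24     # one full cylinder image: 3,686,400 bits
    throughput = bits_per_rev * 30        # 110,592,000 bit/s at 30 Hz
    spike = throughput * 0.050            # ~5.5 Mbit for a 50 ms latency spike
    print(spike > 4_460 * 1024)           # True: exceeds the 5CEBA5's 4,460 Kbit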

The Aries MCV series includes a Cyclone V SE with a Cortex-A9 and 1 GB of DDR3. We would run Linux on the CPU, which would let us use a Wi-Fi-over-SDIO module. The 1 GB of DDR3 will give us more than enough buffering capacity. The SoM would be connected to our PCB by two QSH-090-01-F-D-A connectors positioned underneath it.

Among the Aries products, I think the MCV-6DB would be the best for us, because it keeps the same FPGA as the one I listed in my previous post.

MCV-6DB

[LASMO] Main components architecture

The LASER and the scanner have been chosen: a 1 W RGB LASER and a scanner rated at 30 kpps. They come from laboutiquelaser.fr, a French website. We are waiting for the technician's answer about the LASER's availability and for real documentation of each component.

The choice of these two components allows us to define the main architecture of the project:

Main components’ architecture

The main controller is from the STM32F7 family. On this microcontroller we will:

  • Use two 12-bit DACs to control the scanner
  • Use two 12-bit ADCs to acquire a stereo line input (in order to synchronize the animation with a sound beat)
  • Communicate with an SD card
  • Communicate with the ESP32, the STM32F3 and the MAX512 over SPI

The network controller is an ESP32. It will allow us to communicate with LASMO over Ethernet or Wi-Fi. It will run an HTTP server providing a web app to control LASMO.

The MAX512 is a triple 8-bit DAC. It will control the LASER through its 3 analog inputs (RGB). This still has to be confirmed, as we don't yet have the LASER's documentation; should the input turn out not to be analog, the MAX512 will be removed.

The STM32F3 is a microcontroller whose sole role is to comply with safety standards: indeed, the LASER must not stay on for more than 25 ms at the same position. The safety controller monitors the position feedback signals of the galvanometers and forces the laser input signals to zero if the galvanometers don't move enough.
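To make the rule concrete, here is a behavioral sketch in Python (the real thing will be firmware on the STM32F3; all names and thresholds here are made up for illustration):

    # If the galvanometer feedback has not moved more than EPSILON over the
    # last 25 ms, force the laser color inputs to zero.
    import time

    EPSILON = 0.01          # minimum position change, arbitrary units
    WINDOW = 0.025          # 25 ms

    def safety_loop(read_galvo_feedback, set_laser_rgb):
        last_pos = read_galvo_feedback()          # (x, y) feedback sample
        last_move = time.monotonic()
        while True:
            pos = read_galvo_feedback()
            now = time.monotonic()
            if (abs(pos[0] - last_pos[0]) > EPSILON or
                    abs(pos[1] - last_pos[1]) > EPSILON):
                last_pos, last_move = pos, now
            if now - last_move > WINDOW:
                set_laser_rgb(0, 0, 0)            # beam stationary too long: blank it
            time.sleep(0.001)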

[CyL3D] Blender and VTK

As we wrap up our components list, I decided to look into the kind of data that we will display on our system.

Our system will probably display either:

  • A mesh made in 3D modeling software such as Blender
  • Medical or scientific 3D data, such as a CT scan or a geophysical map.

VTK is an open-source software system for 3D computer graphics, image processing and visualization. VTK can manage voxels pretty well, while Blender has only very limited support for volumetric data. VTK is developed in C/C++ but has Python wrappers, which means we can use it in Blender.

Someone has already made a VTKBlender module to make Blender work with VTK, so I decided to use it. It is available on GitHub.

Even with VTKBlender, it was quite a pain to make Blender 2.79, Python 3.7 and VTK 8.11 work together.

In this paper, I found the source code of an old Blender plugin that worked with Python 2, VTK 5 and Blender 2.49. Unfortunately, quite a lot of the code is not compatible with current versions, so I am upgrading it.

[bouLED] Visualization shenanigans and 3D modeling

A new simulator

I was trying to replace the cube in Lucas' simulator (see here) with an icosahedron and add some kind of visual cue for the icosahedron's orientation, but the Python 3D library we used, VTK, was getting on my nerves. Adding an icosahedron worked fine, but I wasn't able to change its colors, and what's worse, even with the default colours (blue everywhere), one of the model's facets stayed red, which was pretty jarring. I also added an axes widget that was supposed to rotate with the icosahedron, but to no avail: it wouldn't rotate. One of us had to go, and it wasn't going to be me.

Alexis sent us a script 11 days ago that displayed a colorful icosahedron with PyQtGraph, which provides a light abstraction over OpenGL. It made a nice starting point for a new simulator, with a rotating icosahedron, a fixed grid and axes. Behold!

Granted, it’s still ugly, but it works and PyQtGraph is way nicer to deal with than VTK.

3D modeling

We'd like our icosahedron to have 13 cm equilateral triangles, which would make it fit snugly inside a 25 cm transparent spherical shell for protection.
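A quick sanity check of that fit, using the standard formula for the circumradius of an icosahedron with edge length a, R = (a/4)·√(10 + 2√5):

    import math

    a = 13.0                                   # edge length, cm
    R = a / 4 * math.sqrt(10 + 2 * math.sqrt(5))
    print(2 * R)                               # ~24.7 cm: just under the 25 cm shell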

At first, we wanted to build the icosahedral display with triangular PCBs, but last week our teachers suggested 3D printing a facet and putting a LED strip (one of the ubiquitous APA102 strips) on it, to check the density and whether triangular PCBs are really necessary: perhaps we could have a PCB inside the icosahedron control LED strips glued to the facets.

Hichem and I made a model using OpenSCAD to understand how to lay the LEDs out. It's a pretty neat piece of software for declarative 3D modeling. I really appreciated it because, in this case, it forced us to be explicit and think about our constraints. So far, here's what we've got:

The LED strips are meant to go under this. Using this model, we see that with the strips and dimensions we chose, there are 111 LEDs per facet, so 2220 LEDs overall. That's huge, and we'll have to discuss whether having that many LEDs is feasible (or desirable, for that matter).