[SpiROSE] Driver protocol and renderer test

Driver protocol

This week I have made a lot of changes in the driver controller code. This module controls all 30 drivers, so it has to send the data, sclk (shift register clock), gclk (display clock) and lat commands. To test it we have a PCB with 8 columns of LEDs (7 columns with only one LED, and a last column with 16 LEDs), which simulates what a driver will have to drive: 8 columns of 16 LEDs, with multiplexing. We have connected a driver to this PCB, and use the DE1-SoC board we have at school as our FPGA. On those boards the FPGA is a Cyclone V, not a Cyclone III as we plan to use, however the driver controller code is device agnostic so this is not a problem.

The driver controller module is supposed to receive data from another module, called framebuffer, which reads the images from the FPGA RAM. For the test I thus wrote a simple framebuffer emulator which sends data directly, without reading any RAM.

The driver controller is a state machine: it can send a new configuration to the drivers, dump this configuration for debug purposes, do a LED Open Detection, or be in stream mode, where it sends data and commands to actually display something. This last state has to be in sync with the framebuffer, thus the framebuffer sends a signal to the driver controller when it starts sending data from a new slice. When the driver controller goes from any state to the stream state, it has to wait for this signal. This signal will also be used by the multiplexing module.

The drivers have a lot of timing requirements: after each lat command, sclk or gclk needs to be paused to give the driver time to latch its buffers, and the data and lat signals need to change several ns before the sclk rising and falling edges. Therefore I added a second clock, two times faster (66 MHz), and used it to generate two clocks at 33 MHz, in phase quadrature. One is used to clock the state machine, the other is used to generate sclk and gclk. This means that the lat and data edges will occur a quarter cycle before or after the sclk/gclk edges, which is enough to respect the timing requirements.
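As a sanity check, the margin given by this phase-quadrature trick can be derived in a few lines (a Python sketch of the arithmetic, not the actual RTL; the 33 MHz figure comes from the text):

```python
# Quarter-cycle margin between lat/data edges and sclk/gclk edges.
SCLK_HZ = 33_000_000          # sclk/gclk frequency
FAST_HZ = 2 * SCLK_HZ         # the 66 MHz clock used to derive both phases

period_ns = 1e9 / SCLK_HZ     # ~30.3 ns full sclk period
quarter_ns = period_ns / 4    # lat/data edges land a quarter cycle away

print(round(quarter_ns, 2))   # a few ns of setup/hold margin
```

Around 7.6 ns of margin, comfortably more than the few ns the datasheet asks for.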

In stream mode, gclk has to be input as segments of length 2^n. During a segment we have to send all the new data for the 16 LEDs, and input the right lat commands to latch the buffers. The final command has to be input precisely at the last gclk cycle of the segment. In poker mode we send only 9 bits per LED, which takes 9*48 = 432 sclk cycles. The closest power of two is 512, thus we have 512-432 = 80 cycles of blanking to put somewhere. It was first decided to do all the blanking at the beginning of a segment, and then stream all the data. However, as stated before, we need to pause sclk after each WRTGS command, and those are sent every 48 cycles. Fortunately one cycle of pause is enough, so 8 of the blanking cycles cannot occur at the beginning of a segment. We therefore have 72 cycles of blanking at the start, then one cycle of blanking every 48 cycles.
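The segment arithmetic above can be re-derived in a short Python sketch (just the numbers from the paragraph, not the RTL):

```python
# Stream-mode segment layout in poker mode.
BITS_PER_LED = 9            # poker mode sends 9 bits per LED
CYCLES_PER_WRTGS = 48       # one WRTGS command every 48 sclk cycles

data_cycles = BITS_PER_LED * CYCLES_PER_WRTGS   # 432 sclk cycles of data

segment = 1
while segment < data_cycles:                    # round up to a power of two
    segment *= 2                                # -> 512

blanking = segment - data_cycles                # 80 cycles to distribute
distributed = 8                                 # one idle sclk cycle after each
                                                # WRTGS inside the segment
initial = blanking - distributed                # 72 cycles at segment start

print(segment, blanking, initial)
```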

This means that the framebuffer has to take those blanking cycles into account. Just skipping one shifts all the data, resulting in a weird color mix.


To test all this I wrote two simple demos. The first simply lights all the LEDs in the same color (red, blue or green). The second one lights the LEDs with the following pattern:

A button allows the pattern to be shifted, resulting in a nice Christmas animation.

You can notice that the colors don’t have the same luminosity. Fortunately this can be controlled with the driver configuration: each color has a 512-step brightness control. What is still unclear to me is whether the driver simply diminishes the power sent to a color, or divides the same amount of power between the three colors. The current measurements we have made seem to suggest the latter, as the global amount of power doesn’t change when reducing the green intensity, for instance.

Renderer test

The renderer allows us to voxelize an OpenGL scene. It is still a proof of concept and will soon be turned into a library. To test it, I wrote a shell script that does the following steps:

  • Start the renderer with the default configuration to check that there are no errors and that the shaders load properly
  • Take a screenshot of the rendering with ImageMagick, and check that something is actually displayed by checking the color of the central pixel
  • Start the renderer with a simple sphere, take a screenshot and compare it (with ImageMagick) to a reference image to detect any changes
  • Start the renderer in xor and non-xor mode, and compare two screenshots taken at the same time

[SpiROSE] LED Panel, end of place and route

Nothing really interesting for me this week, except place, route, place again, route again, etc.

The LED panel is pretty hard to route because of the constraints we have on it. Here are some stats about it:

  • There are 1920 LEDs, 15 drivers, 4 buffers, and 2 multiplexers inside a panel.
  • The top of the panel must have placeholders of approx. 10x10mm regularly to fix it using brackets, these placeholders will also be used as wires for electrical power input.
  • There will be 5 micro-blocks (a micro-block is a vertical set of 8×48 LEDs, with two drivers on top, one on bottom)

For fun, I looked at the stats from XPedition Layout, and saw there are approximately 12000 vias.

I would never have done everything by hand; it would have been too much repetitive work to check manually afterwards. Fortunately Mentor provides some utilities to help us copy blocks; here are the main ones I tried.

Hierarchical design and Instantiation

There are two main ways to design a circuit: by describing it explicitly, using pages to separate functions when possible (and appropriate), or by abstracting parts of the schematic by defining new symbols representing a whole function.

The latter method comes in handy when the schematic gets so big that it becomes difficult to read all at once. It is kinda similar to functions in procedural programming languages like C/C++. Another big advantage is that symbols can be used more than once, making it easy to reuse a whole block.

This is where hierarchical design can come into our project: designing a symbol as a whole micro-block makes the schematic approximately 5x smaller, with a guarantee that the micro-blocks will be wired in the exact same way. Finally, instantiating 5 micro-blocks lets me place 5 big “components” already wired inside the PCB, instead of manually placing and routing the 384 LEDs of each micro-block individually 🙂

Unfortunately, this system only works well if the design is conceived hierarchically from the beginning. I had already placed and routed a micro-block, and putting it inside a symbol broke the link between the schematic and the PCB, destroying the place/route I had already made…

Clusters and Circuit reuse

Another way of doing it is by using clusters. A cluster (Type 152, as described in Mentor’s documentation) is a type (physically represented by an integer) you can add as a device property. It acts as a grouping function: put some parts into the same cluster and they will be recognized as ‘equivalent’ parts. Here is how I’ve done this (UB0 is the reference micro-block, already placed/routed; I want to duplicate it and name the duplicated version UB1):

  • Quickly set every LED’s cluster property in UB0 to a different number by selecting the LEDs only (be careful not to select special symbols like intrapage connectors, power symbols, etc.) and using ‘place text’ in ‘Type 152 – Cluster’ mode with the ‘auto-increment’ option.
  • Do the same for all the other devices: ICs, resistors, capacitors, etc.
  • At this point there should be one element inside each cluster.
  • Copy-paste (including net names) the whole UB0 schematic, and without unselecting it, replace text ‘UB0*’ with ‘UB1*’ using the ‘selected text only’ option (I suppose all the different nets start with ‘UB0’; adapt as needed)
  • The schematic is ready!

Now for PCB:

  • Package, forward annotate.
  • Select (in select mode) UB0 parts, nets, vias, etc.
  • Now you have two options: direct paste (using clipboard) or save the selected circuit for future reuse.
  • If you activate licence for ‘Circuit Reuse’, you can save it (inside the ‘Edit’ menu).
  • Otherwise, just right-click and ‘Copy’. From there you should have a pin map assigner window where you can adjust your paste settings (if it does not show, just hit ‘F2’).
  • Once it’s done, you can paste, and that’s it!

The problem with this approach is that there is no integrity verification between the micro-blocks (if you rip up or short a wire somewhere in the schematic of one micro-block, you may not be notified).

This is the method I applied, because it is easier to use when you already have an existing design.


Next week, the last part will be added and the bottom part of the panel will be routed, finally!


[SpiROSE] Supplying power voxels … Wait watt?

Okay, this may not be obvious from the title, but I mainly worked on two parts of the project this week: a new voxelization algorithm and the power supply.


To my great surprise, OpenGL ES GPUs do not support integer operations. This includes logic operations, especially between fragments. Do you see where I am going? If you recall my post regarding the voxelization algorithm, I rely on XOR-ing two fragments to voxelize a column. The OpenGL ES standard does not define the glLogicOp function, which is the one I need to XOR my fragments. Bummer.

However, there is a solution. We can emulate a bitwise XOR on n bits by doing n one-bit XORs. But how can we do XOR without XOR? If we look at a truth table, we see that XOR is equivalent to an addition where we keep only the LSB. Indeed, the LSB of 1+1 is 0. Lucky for us, OpenGL has a feature, called the blend mode, that allows us to do just that. Now, for every bit of our output voxel texture, we do a bitwise add of the fragments.
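For a single bit plane, the equivalence between additive blending and XOR is easy to check (a Python sketch of the idea, not the actual GLSL/blend-state code):

```python
def xor_via_add(a: int, b: int) -> int:
    """One-bit XOR as an addition where only the LSB is kept."""
    return (a + b) & 1

# Exhaustive check of the truth table.
for a in (0, 1):
    for b in (0, 1):
        assert xor_via_add(a, b) == a ^ b

# An n-bit XOR is then n independent one-bit XORs, one per bit plane
# (in the renderer, one color channel per plane).
def xor_bitplanes(x: int, y: int, n: int = 8) -> int:
    return sum(xor_via_add((x >> i) & 1, (y >> i) & 1) << i for i in range(n))

assert xor_bitplanes(0b1100, 0b1010) == 0b1100 ^ 0b1010
```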

But there is a huge downside to this method: it now requires a whole byte of data for each layer, where the XOR method required a single bit. Or, to be accurate, it requires a whole color channel per layer. This means we now need several output textures for the whole scene, while a single one was previously enough for up to 32 layers (32 bpp). Fortunately, even older GL ES did support multitarget rendering (i.e. writing to several textures in a single pass), but with limitations. Our Wandboard has a limitation of 16 output textures, which gives 64 total layers (1 layer / channel, 4 layers / texture).
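The resulting capacity works out as follows (Python arithmetic over the figures above):

```python
# Layer capacity: XOR bit-packing vs. one color channel per layer.
BPP = 32                      # a single 32 bpp texture
layers_xor = BPP              # XOR method: 1 bit per layer -> 32 layers

CHANNELS = 4                  # RGBA: 1 layer per channel with blending
MAX_TARGETS = 16              # Wandboard multitarget rendering limit
layers_blend = CHANNELS * MAX_TARGETS   # 64 layers over 16 textures

print(layers_xor, layers_blend)
```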

Anyways, we now have this version ready to rock, where the result is basically indistinguishable from the XOR version.

XOR-less version. Notice that there are now 8 voxel textures on the bottom left, each storing 4 voxel layers.

XOR version, for reference


As another achievement, I got OpenGL ES to work reliably on the SBC, which proved tricky, especially with the broken package for the library I use. I use GLFW as a context creator (awesome library!), but the version that ships with Ubuntu 16.04 (the official Linux flavor provided for this board) is utterly broken, with wrong includes, … So much for an LTS distribution!

Furthermore, porting this app to OpenGL ES on the Wandboard is in progress, and is looking good so far. Except I have no output. OpenGL magic, I guess. Anyhow, OpenGL ES is much more finicky than its desktop counterpart (especially Nvidia’s), which makes porting very tedious. Oh, and working remotely does not make debugging graphics apps any easier…

Power supply

We estimated the total worst-case-scenario-if-everything-blows-up current consumption. We are looking at:

  • LEDs: 44A
  • FPGA: 700mA
  • LED drivers: 840mA
  • Clock buffers: 120mA
  • SoM: 2A

Some of those figures are more empirical than anything: the FPGA one comes from Altera’s Excel calculator thingy, and the SoM one comes from an overkill stress test (CPU+GPU+WiFi).

To make the drivers drop as little power as possible, the LEDs will be powered from a 4V rail, not too far off their 3.4V forward voltage. Beefy buck DC-DC converters are needed!

The next big hog is the SoM, which will get its own dedicated buck DC-DC converter, supplying 5V straight away.

All the rest will be fed from a 3V3 buck converter, whose output will be stepped down to other voltages for the remaining components (3V, 2V5, 1V2).

A single 12V supply will feed all this mess through the various DC-DC converters. We are looking at 190-odd watts.
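Summing the figures above with assumed rail voltages (4 V for the LEDs and 5 V for the SoM as stated in the text, the rest lumped at 3.3 V, converter losses ignored) lands in the same ballpark:

```python
# Rough power budget; the 3.3 V lumping is an assumption, and converter
# efficiency is ignored.
rails = {                     # name: (amps, volts)
    "LEDs":          (44.0, 4.0),
    "FPGA":          (0.7,  3.3),
    "LED drivers":   (0.84, 3.3),
    "Clock buffers": (0.12, 3.3),
    "SoM":           (2.0,  5.0),
}

total_w = sum(amps * volts for amps, volts in rails.values())
print(round(total_w, 1))      # ~190-odd watts, as stated
```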

What’s next?

Next week, I’ll finish the PSU schematics and we’ll be able to route the main rotative PCB, containing PSU, SoM and FPGA (to name a few).

I will also continue to port that damn renderer to GL ES.

Thanks for reading 😀

[SpiROSE] Oh My Schematic

Not a lot to talk about, except that Adrien and I finally began the schematics for the rotative base. This PCB is the one hosting the SoM, FPGA and power supplies. We also determined the exact signal count going from this PCB to the PCB hosting the LEDs and their drivers.

Speaking of power supplies, we’ll need beefy ones. As a worst-case calculation with extreme currents for the LEDs, a single color can eat up to 100mA. That’s 0.1 * 3 * 80 * 48 / 8 = 144A of current. The /8 comes from the 8-way multiplexing we are doing with the LEDs.
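The worst-case figure can be rechecked quickly (a Python version of the one-liner above):

```python
# Worst-case LED current: 100 mA per color, 3 colors, 80x48 LEDs,
# divided by 8 because only 1/8 of the LEDs are lit at once (multiplexing).
I_COLOR_MA = 100
COLORS = 3
WIDTH, HEIGHT = 80, 48
MUX = 8

total_a = I_COLOR_MA * COLORS * WIDTH * HEIGHT / MUX / 1000
print(total_a)   # 144.0
```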

This week: finishing the schematics and fixing OpenGL.

[SpiROSE] Driving the LED Drivers with the FPGA

This week was dedicated to writing the SystemVerilog code for driving the TLC5957 LED drivers. The first issue was to get some SystemC tests ready to use. While the tests were being written by someone else, I started writing the SystemVerilog code. Now the code is ready and has been tested on the drivers, but the timings are not correctly set, so the drivers are being incorrectly driven and behave pretty randomly.

The FPGA development has been delayed in order to get the schematics ready quickly. In the meantime, a more precise analysis of the available encoders has been done, including the mechanical restrictions, connector, communication protocol, etc.

Next week will be dedicated to the schematics and datasheet reading.

[SpiROSE] no more EIM and we now have SystemC for testing

Hey, this week we read a lot of FPGA and SBC documentation. The goal was to be sure our project was feasible with the components we chose. We also validated many points on our design.

Choice of RGB versus EIM

It appears that we can’t use the “fast GPMC-like interface”, also known as the EIM, on the board. Fortunately, as Tuetuopay explained, the RGB interface does very well for what we ask of it.

The reason for not using the EIM is very simple: it doesn’t go to the MXM connector. Period. So we now take a definitive stance on the choice of the communication bus from SBC to FPGA.

SystemC for testing

As written in our previous posts, we were trying to use Verilator as a compiler to SystemC. It is now an almost finished task. Each SystemVerilog file is built with a SystemC testbench, and we are currently finishing a SystemC TLC5957 driver model, along with associated utilities to produce test sequences and a way to test our driver and LEDs directly from a C program, so as to validate everything. Tests will be done before the end of the week, and we now have a solid knowledge of the behaviour of the TLC5957.

Only one or two points are still beyond our understanding: the behaviour of XREFRESH, also known as auto-refresh, is one of them, but we have good hopes of ascertaining its behaviour with the previous library.

The Verilator makefile won’t produce correct dependency files either, but this will be fixed by replacing them with our own rules.

Next steps

There are still some issues with the SBC, as it can’t do what we expected at first.

Next week, we have to finish the tests quickly so as to integrate the FPGA parts into our project. After that we will fix the SBC issue either by finding a way to do what we want (OpenCL, CPU parts) or by replacing it with another CPU-only algorithm.

FPGA architecture

This week we have settled the FPGA architecture in order to choose the right FPGA. Fortunately it seems that the Cyclone III will do the job. The main issues to tackle were the I/O count, the clock domains, and the internal memory. So before going into details, let’s define some terms:

  • We call image a whole cylinder (i.e. a whole turn)
  • We call slice a 2D image displayed by the LED panel at a given position

We plan to use 256 slices per turn. Those slices are embedded into a single image sent by the SBC, but this part is quite flexible: the layout of the slices, the frequency and the blanking can be tuned. This is actually very important, because it allows us to rotate the display just by changing the order of the slices in the image sent by the SBC. This means that we don’t need to store a whole image in the FPGA, which would have been impossible with the Cyclone III. Instead we just need to store a few slices to cross the two clock domains (RGB clock and drivers clock). So let’s describe our architecture.

General Architecture

Modules’ role and architecture

Parallel RGB

The image sent by the SoM contains all 128 slices:

The RGB logic module will write the pixels sent by the SBC into the RAM. Since we will only use 16 bits per LED, we drop the least significant bits and just store 16 bits per pixel.

In the RAM we store a whole slice in each µBlock, thus the RGB logic module has to decode the right address for each pixel. For instance the first pixels, 1 to 80, must be written at addresses 0 to 79 (if we start at 0), but the following pixels, 81 to 160, must be written at addresses 80*48 to 80*48+79, as they are part of the second slice.
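A simplified Python sketch of this address decoding (it only covers slices tiled side by side on one row of the incoming image, each 80 px wide, matching the example addresses above; the real module must also account for the vertical tiling of the image):

```python
SLICE_W, SLICE_H = 80, 48     # one slice = one µBlock in RAM

def ram_address(x: int, y: int) -> int:
    """Map pixel (x, y) of the incoming frame to its RAM address."""
    slice_idx = x // SLICE_W                  # which slice this column belongs to
    return slice_idx * SLICE_W * SLICE_H + y * SLICE_W + (x % SLICE_W)

assert ram_address(0, 0) == 0            # pixel 1 -> address 0
assert ram_address(79, 0) == 79          # pixel 80 -> address 79
assert ram_address(80, 0) == 80 * 48     # pixel 81 opens the second slice
```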

Double frame buffer

From the drivers’ perspective, a slice looks like this:

In poker mode, the drivers need one bit from all 16 LEDs at a time, thus we don’t really send a pixel, we send a column of 16 bits of the previous image (in red). When the drivers have sent those 16 bits we have to send the next ones, thus the frame buffer is read by 30 columns at a time! This makes it impossible to write the next slice without destroying relevant data, hence we need a second buffer.

The double frame buffer contains two buffers of 80*48 pixels each. One stores the current slice and sends it to the driver controllers while the other reads the next slice from the RAM. When the first buffer has sent all its data to the driver controllers, the two buffers simply switch roles. Whenever this module receives a new position from the encoder, it starts filling the right buffer with the next slice.

It takes exactly 512 cycles (driver_clk, up to 33 MHz) for the drivers to send the data for their 16 LEDs, so with 8-way multiplexing the frame buffer will be read in 512*8 = 4096 cycles. The second buffer will be filled in 80*48 = 3840 cycles < 4096, thus we won’t have timing issues if the frame buffer uses the same clock as the driver controllers.
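This timing budget is easy to verify (a Python re-derivation of the numbers above):

```python
# Can the idle buffer be refilled while the other one is being displayed?
SEGMENT = 512                 # driver_clk cycles per multiplex step
MUX = 8
display_cycles = SEGMENT * MUX       # 4096 cycles to display one slice
refill_cycles = 80 * 48              # 3840 cycles to read the next slice

assert refill_cycles < display_cycles
print(display_cycles - refill_cycles)   # 256 cycles of slack
```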

Another solution is to have a double frame buffer of 128 pixels for each driver, and to use an arbiter to handle the RAM accesses (one simple solution is to let the buffers read one after another, or to broadcast the pixels to every frame buffer and let them keep the ones they are interested in).

Driver controller and driver logic

Each driver controller follows these steps:
– send 1 bit at each clk rising edge during 432 cycles
– wait for 80 cycles (until the 512th cycle)

The driver logic will send the correct LAT commands to latch the data in the driver.

The driver logic is also used to configure the drivers (at reset, or when the UART tells it to do so). It can do so by sending the correct LAT commands and replacing the pixels of the frame buffer with the configuration data (sent by UART, or stored inside the module for the reset).

Encoder logic

The encoder sends the new position through SSI3:
– 1 bit per cycle (up to the 16th bit)
– then a 1-bit error flag
– then we need to wait at least 20 µs before the next transaction

At 2 MHz this means that we get a new position roughly every 28 µs. We want to use 256 positions at 30 fps, which leaves 1/(256*30) = 130 µs between positions, thus we don’t have timing issues.
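The encoder timing can be checked with a few lines of Python (the figures are the ones from the text):

```python
# SSI transaction time vs. the time budget per position.
CLK_HZ = 2_000_000            # SSI clock
BITS = 16 + 1                 # 16 position bits + 1 error flag
WAIT_US = 20.0                # mandatory pause between transactions

frame_us = BITS / CLK_HZ * 1e6 + WAIT_US   # ~28.5 us per position read
budget_us = 1e6 / (256 * 30)               # ~130 us between positions

assert frame_us < budget_us
print(round(frame_us, 1), round(budget_us, 1))
```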

I/O count

The Cyclone III we chose claims to have 128 I/O pins, and we need 80. However many I/Os are specialized (PLL…) or used for configuration, so the real count is the following:

  • 32 VREFIO, used for reference voltages; since we do single-ended I/O we don’t need reference voltages and can use those as regular I/O
  • 6 simple I/O, which don’t have any particular function
  • 8 resistor reference I/O, which we can use as regular I/O too
  • 8 PLL I/O, which are just for output clocks
  • 58 DIFFIO, most of which can be used as regular I/O, except the ones needed to configure the FPGA (we will probably use STAPL, which needs the four JTAG pins)

Therefore we have more than 100 I/O pins available.

[SpiROSE] Mechanical construction, various tests and FPGA design

This week was full of various little tests and tryouts: Power input, Motors, LEDs,  LED Drivers, etc. It was also a week used for designing the FPGA internal architecture and final adjustments for mechanical construction.


Mechanical construction

The mechanical design has been updated due to the lack of some materials at our distributor. The design has been checked and validated with the mechanics, and the materials have been ordered. The mechanics will start the construction next week. The motor and its controller have been ordered too.

The updated mechanical design is available here.

FPGA Design

The FPGA architecture has been finished, including clock domains, I/O count, main module definitions, RAM layout, data input specifications, synchronization constraints and driver control logic. The chosen FPGA (Cyclone III with 40 KLE) has been validated to work with this architecture, and next week will be used to start developing on it, to check the timing requirements. The rotative base station schematics can now start.

A note about RAM: last week a basic estimation was done to see if an FPGA could internally store a whole 3D image, but the calculation was erroneous, which led us to think that the Cyclone III FPGA was capable of it. The reality is that it can store up to 18 ‘slices’ (we call these micro-blocks), so synchronization is needed.


Various experiments

The LED drivers have been soldered on breakout boards, as were the LEDs, and some motors we had have been tested. The goal was to validate our choice of components. Since the mechanical construction is starting soon, we ordered the final motor and controller right after the tests, in order to have them when the construction starts.


Now the schematics of the rotative base station are ready to start, and the FPGA code has to be written.

See you next week for more details!

[SpiROSE] SBC testing and test LED panel

This week was kinda quiet as far as the project went, due to the Athens week. However, this did not stop me from doing some actual testing on the SBC and the LEDs.

RGB on the SBC

First off, since we’ll be using the parallel RGB interface to transfer data to the FPGA, testing the flexibility of this bus was a must. Parallel busses get tricky once you get above 25 MHz. However, RGB is a simple display interface with basic control signals. In addition to the 24 data bits, you have PCLK (pixel clock), HSYNC and VSYNC. The latter two are used to frame the actual image, with blanking intervals at the end of each line and after the actual image. Because RGB is sort of a digital VGA, it uses the same timings, inherited from the CRT days. Back then, you needed long blankings to let the electron beam go to the next line, wasting some precious bandwidth. For example, an 800 px wide image would have 224 wasted pixels per line on blanking.

However, an FPGA doesn’t care about electron beams, thus reducing those blankings to their minimum is essential. As it turns out, the IPU of the i.MX 6 is very flexible, and allows us to set those blankings arbitrarily. A 1 px horizontal blanking and a 1 line vertical blanking are feasible, and this has been confirmed by measuring the output of the RGB interface.

Measuring RGB sync signals using a poor man’s frequency meter

The measurement was accomplished using 3 STM32F103 boards (Chinese clones of the Maple Mini, worth $2 a pop on eBay).

But what kind of frequencies can we expect? Well, our display being an LED matrix of 80×48, refreshing 256 times/turn at 30 turns/second, the total pixel rate is 80x48x256x30 = 29.5 Mpixels/s; thus a pixel clock a tad above 29.5 MHz (remember that cycles are wasted after each line, thus a slight overhead). Ouch, too high. But wait! Our LED matrix is a whole slice of the cylinder along the diameter, not the radius. This means that, in a half-turn, we have covered all of our display space, the second half of the turn using the same pixel data. Great, our bandwidth is reduced to a “mere” 14.75 MHz, which is much easier to route and work with.

But how to set those blankings exactly? I’m glad you asked! The Linux drivers are well made for the i.MX 6, and once the LCD/RGB output has been enabled in U-Boot, a custom modeline in xrandr will do the trick. In our case, a resolution of 1024×480 exactly matches our pixel count, giving us a resolution of 1025×481 on the RGB bus (counting minimum blankings). As for frequencies, at 30 fps this gives a pixel clock of 1025x481x30 = 14.79 MHz. This value shows that the extra-reduced blankings have negligible overhead for our application.
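Both pixel-clock figures in this post can be re-derived in a couple of lines:

```python
# Pixel clock: full turn vs. half turn, plus the xrandr modeline figure.
W, H = 80, 48
SLICES = 256                  # slices per turn
FPS = 30                      # turns per second

full_hz = W * H * SLICES * FPS       # 29,491,200 px/s: too high
half_hz = full_hz // 2               # the panel spans the diameter
modeline_hz = 1025 * 481 * FPS       # 1024x480 + 1 px / 1 line blanking

print(half_hz, modeline_hz)
```

The ~0.3% gap between the two values is exactly the cost of the minimal blankings.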

The corresponding measurements are the following:

From top to bottom : PCLK, VSYNC, HSYNC

Here is the modeline for this specific resolution:

Modeline "1024x480Z@30" 14.79 1024 1024 1025 1025 480 480 481 481

A few xrandr commands and you’re all set!


As FPGA development has begun and time advances, it was critical to validate the LEDs, and that our drivers are working. With the help of Alexis, we soldered sample drivers to breakout boards (QFN 56 is no trivial task) in order to test them.

However, what is a driver without any LEDs? Since 50 LEDs were ordered, I designed a very simple PCB that could be fabricated at school to test both the LEDs themselves and the multiplexing we’ll be doing. A single driver controls 16×8 RGB LEDs, so 50 units would not cut it; thus I only put a full column (16 LEDs) and a full line (8 multiplexed LEDs) with the required control logic (8 MOSFETs, here AO3401 I had laying on my desk). This only makes an L shape, but allows us to test and develop the driver driver (no, that’s not a typo). Oh, and lots of pin headers for the wiring.

Bare board with small SMD parts soldered

I can’t say it enough, but kudos to Alexis for soldering all those components. For reference, the LEDs are 1.13×1.13 mm and the resistors are standard 0603.

Soldered PCB, wired up to the driver (not the blue board, which is yet another STM32 board)

Regarding brightness, I dare you to look straight at a single one of those LEDs for more than a few seconds. We did tests at extremely low brightness in an attempt to simulate the effective brightness once the panel would rotate. Even at 1.5% brightness, a single LED was easy enough to see in a brightly lit room. Furthermore, considering the density we have, I have no real concern at the moment regarding brightness.

What to do next?

Next week, I’ll try to build and run OpenGL apps on the SBC, and confirm whether the voxelization algorithm can be run with OpenGL ES. It appears to be impossible without desktop OpenGL. If you recall my post about the algorithm, I use bitwise operations to combine different fragments. However, this is a feature exclusive to desktop OpenGL. The rationale behind it (according to Khronos) is that most embedded GPUs lack a proper integer manipulation unit. Thus it is not part of the GL ES spec (for example, the glLogicOp function, used to set the bitwise operation to logic XOR, is missing).

Thus I’ll either try to find a workaround, or totally redesign the renderer, based mainly on Alexandre’s ideas.

I will also benchmark the SBC in terms of CPU power, to estimate if a 100% CPU implementation of the current algorithm is possible (spoiler: I doubt it).

[SpiROSE] Fixed Base and LCD screen

This week, I have worked on the fixed base, which is basically the ST STM32F746G-Discovery DevKit. Its role is to be the interface between users and SpiROSE, since it will eventually be able to start/stop the device, let the user choose which demonstration to display on the 3D screen, drive the ESC and handle safety issues. The DevKit comes with an integrated 480×272 capacitive touch LCD screen, which, we hope, will enhance user-friendliness.

To drive it properly, we decided to use a GUI library called µGFX. We thus had to integrate it along with the ST HAL drivers, the GNU Arm Embedded Toolchain, as well as OpenOCD to debug the board. Once that was completed with no error whatsoever, we were able to dive into the API to get going. As regards Continuous Integration, to go beyond linting, I will have to find a way to test the program relative to what it is supposed to display, or what it is supposed to do when a given input occurs on the touchscreen.

I also worked on the IMU part. We already had an NXP FRDM-KW41Z board with an integrated SPI/I2C compatible accelerometer/magnetometer, the NXP FXOS8700CQ, which could be bound to the DevKit through its Arduino connectors, so we didn’t bother finding another option. Since only the accelerometer is of any use for the project, I just had to flash on the NXP board a simple program that would set the right pad multiplexing so as to have SDA and SCL (I2C data and clock) driven properly from one board to the other. I didn’t bother building an entire project based on the GNU Toolchain, but used NXP’s online tools instead. The accelerometer is capable of outputting data at 800 Hz, but there is no use for such speeds. One measurement every tenth of a second should be enough to detect a random displacement of the structure and to shut the motor down in case of emergency. There again, I will have to determine the tests to be applied for the I2C communication between the two boards. For now I am able to display the IMU data along three orthogonal axes on the screen. The data remain to be processed to know whether, at a given moment, the measured accelerations are within an acceptable range; this can only be implemented after tests on the actual structure, since we have to filter out the accelerations due to vibrations.

As regards the interface, it is not looking fancy yet, but for now three menus are implemented: one control menu for the user to start/stop SpiROSE, another to choose the demonstrations to be displayed, and finally one console-style menu meant to display ERR/OK messages after virtually every component initialization or change (typically the configuration of the accelerometer with specific writes over the I2C bus). It will also ease debugging during the remaining development phase.

What’s next for the fixed base? Once the potentiometer is received, it will be integrated. It is not urgent for now, but a face-lift of the interface will be needed later.


First version of the interface, displaying the IMU values and a slider simulating a potentiometer