[SpiROSE] Rust, Device Trees and Mechanics

Hello! It has been a long time since I’ve posted here. Lots has happened!

First the mechanics

The motor we chose was making a lot of noise, so I had to investigate and find a solution that would avoid buying another motor. It turned out the noise was coming from a broken ball bearing.

The faulty ball bearing was hiding in the bottom of the motor.

 

After ordering a new ball bearing, the motor ran perfectly… up to the moment I realized the motor was not the only noisy part: the planetary gearbox uses 3 needle bearings that make a lot of noise too…

I recycled an old brushless motor: a 4-pole, 2100 kv, 3–4S LiPo, sensored BLDC motor. Such a motor is not supposed to fit our low-speed requirements (~20–30 Hz), so I had to test it with the real-world load. We engineered PCB prototypes without the components to simulate the load, and mounted them on the newly installed motor.

New sensored motor speed test without load

Load test, with serious protection. We care about our lives 🙂

The two black plates on the vice are the prototype PCBs (pre-sensitized FR4 plates we machined with a CNC to the real size). The system is sensored, so we simply placed a logic analyzer on the Hall-effect outputs (and divided the output frequency by 4, as it’s a 4-pole motor). The result: we can go from ~5 Hz to more than 30 Hz, so the motor handles the load perfectly!

Good news: the motor is now fixed in the structure, and the mechanical engineering is almost done!

 

Discovering Rust

I have wanted to learn the Rust programming language for a long time, but:

  • I didn’t have time to learn it
  • I didn’t have a good and simple project to start with

So, when I saw we had to make a simple command-line utility for our SPI communication with the FPGA, we decided to write it in Rust. The first day was hard. Very hard. I learned how to use rustup and cargo correctly, how to set up Rust to always develop with the nightly toolchain, and how to cross-compile for armv7.

SpiROSE Command Line tool

After hours of questioning myself about how it all works, reading the docs, asking questions to people who know Rust and reading error messages, I finally got a working SPI CLI for SpiROSE. OK, that’s cool, but how do I get the SPI device now?

SPI, Yocto, dtsi

As the current distribution (Ubuntu) is not really made for embedded use, we wanted to try Yocto, a build system made for packaging a GNU/Linux system for embedded targets. Yocto is widely used for that, but it has a steep learning curve. As time was getting short, we rolled back to Ubuntu.

So, how do we get an SPI device at /dev/spidevX.X? The solution is to:

  • Install the spidev kernel module (we recompiled the kernel to build it in)
  • Update the Device Tree Blob given to U-Boot so that it gives the address of the SPI hardware to spidev, which then enumerates it under /dev (see the sketch below)
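
For reference, here is a minimal sketch of what such a spidev node can look like in a dtsi; the controller label, chip-select index and maximum frequency are placeholders, and our real node is of course adapted to the SoC’s SPI controller:

&spi0 {
        status = "okay";

        spidev@0 {
                compatible = "spidev";              /* bound by the spidev driver */
                reg = <0>;                          /* chip select 0 -> /dev/spidev0.0 */
                spi-max-frequency = <10000000>;     /* 10 MHz, to be tuned for the FPGA link */
        };
};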

As nobody in the group knew about Linux Device Trees, I struggled to add the new device to the dtsi and to generate the new Device Tree Blob to hand to U-Boot.

I finally achieved this tonight. Tomorrow will be reserved for testing the SPI output (already tested with a logic analyzer) by connecting it to our FPGA development board.

 

Now this week will be full of tests, to get the PCBs ready to use as soon as possible!

[AmpeROSE] Field-Programmable Gate Amperose

Hello everyone,

In this post I’d like to elaborate on how AmpeROSE’s FPGA component will operate, in the configuration where it actively participates in the automatic calibration and data acquisition. You may also want to refer to my previous post on this subject.

Automatic Calibration

When we were performing SPICE simulations of our measurement circuit, we observed the following phenomena, which led us away from implementing the calibration logic as a simple synchronous state machine (namely, a bidirectional shift register):

  • If the frequency of the clock driving the state machine was too high (>1 MHz), the output of one of the comparators going high would very often cause the state machine to go all the way to the corresponding extreme measurement range, even when stopping at the middle range would have been the correct thing to do. This, in turn, caused the other comparator output to go high, and the state machine would oscillate. The cause of this exaggerated reaction was that the state machine performed the second change in calibration before the rest of the circuit had time to properly react to the first change and reach a new burden voltage corresponding to the current measurement range.
  • On the other hand, if the clock frequency was reduced to allow for the reaction time of the circuit after a state change (according to our simulations, about 2.5 µs in the worst case, owing to the capacitances of the switches in the shunt selector), then a sudden peak in burden voltage that happened to take place just after a rising clock edge would go untreated for a very long time.

There is actually a way to address these issues without abandoning the idea of a synchronous implementation: a state machine clocked at a high frequency, which allows for a short reaction time, associated with a down-counter. Whenever a state change occurs, the down-counter is loaded with a value proportional to the reaction time allowed to the circuit, and further state changes are blocked until the counter reaches 0. This is the strategy I followed for implementing our calibration logic in SystemVerilog.
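
To make the mechanism concrete, here is a small C model of one clock tick of that logic (a sketch only: the constant, the range encoding and the function name are illustrative, the real implementation is the SystemVerilog state machine described above):

#include <stdbool.h>
#include <stdint.h>

#define HOLDOFF_TICKS 275   /* ~2.5 us of settling time at a 110 MHz clock */

enum range { RANGE_LOW, RANGE_MID, RANGE_HIGH };

struct calib {
    enum range range;      /* currently selected measurement range */
    uint32_t   holdoff;    /* down-counter blocking further changes */
};

/* Models one rising clock edge, given the two comparator outputs. */
static void calib_tick(struct calib *c, bool comp_too_high, bool comp_too_low)
{
    if (c->holdoff > 0) {             /* circuit still settling: ignore the comparators */
        c->holdoff--;
        return;
    }
    if (comp_too_high && c->range != RANGE_HIGH) {
        c->range = (enum range)(c->range + 1);   /* one step at a time, never straight to the extreme */
        c->holdoff = HOLDOFF_TICKS;
    } else if (comp_too_low && c->range != RANGE_LOW) {
        c->range = (enum range)(c->range - 1);
        c->holdoff = HOLDOFF_TICKS;
    }
}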

As for what exactly the “high frequency” clocking our state machine and the counter should be: we calculated that in the most extreme situation that still falls within our expected operating conditions (up to 300 mA drawn by the device under test (DUT), which has a bulk capacitance of 500 nF), the DUT’s bulk capacitance should be able to hold the drop in the DUT’s supply voltage under 1 V for about 1.66 µs, so we should, at the very least, react faster than that. Fortunately, Quartus indicates that the maximum clock frequency for the current version of the calibration circuit is 111 MHz, so we could even approach the reaction time of our hardware comparators (5 ns) if that proves necessary.
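
For the record, that 1.66 µs figure is just the capacitor equation, with the bulk capacitance supplying the whole load current during the transient:

\[ t = \frac{C\,\Delta V}{I} = \frac{500\ \text{nF} \times 1\ \text{V}}{300\ \text{mA}} \approx 1.66\ \mu\text{s} \]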

Regarding how the clock will be supplied to the calibration state machine, one of the dedicated clock inputs of the FPGA is connected to a pin of the uC which can be configured as the master clock of a Serial Audio Interface. Such a master clock signal can receive the output of the uC’s PLLI2S phase-locked loop, which we are not otherwise using in our design. This affords us great flexibility in choosing the clock frequency.

Context bits and SPI

When our external ADC signals that a new measurement is available, the FPGA “takes a snapshot” of the calibration state, of the context bits and of a bit indicating whether the calibration changed since the last measurement (which is then reset), and places it in an 8-bit register.

The “data ready” signal from the ADC is also transported to the microcontroller (uC), which should begin a SPI read of 32 bits. For the first 24 bits, the FPGA simply transports the signals between the uC and the ADC, so the 24-bit voltage measurement can be read. For the last eight bits, on the other hand, the FPGA inserts the bits from the “snapshot” on the MISO line, one per cycle of the serial clock. Thus, the uC receives the complete set of 32 bits corresponding to the measurement (24 for the voltage measurement, 5 for the context and 3 for the automatic calibration) by the end of the SPI read.
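
On the uC side, unpacking that word is then a couple of shifts and masks. A sketch (only the 24 + 8 split is fixed by the design; the exact placement of the snapshot bits inside the last byte is an assumption here):

#include <stdint.h>

/* 32-bit SPI frame: bits [31:8] = ADC voltage sample, bits [7:0] = FPGA snapshot byte. */
struct measurement {
    int32_t voltage;      /* 24-bit ADC code, sign-extended (assuming a two's-complement code) */
    uint8_t context;      /* the 5 context bits */
    uint8_t calibration;  /* the 3 automatic-calibration bits */
};

static struct measurement unpack(uint32_t frame)
{
    struct measurement m;

    m.voltage = (int32_t)frame >> 8;               /* arithmetic shift sign-extends the 24-bit sample */

    uint8_t snapshot = (uint8_t)(frame & 0xFFu);   /* byte inserted by the FPGA on MISO */
    m.context     = (snapshot >> 3) & 0x1Fu;       /* assumed: context in the upper 5 bits */
    m.calibration = snapshot & 0x07u;              /* assumed: calibration in the lower 3 bits */
    return m;
}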

How to test it all

I put together a straightforward testbench using Verilator and SystemC. This testbench has simple SystemC threads and methods playing the role of the components of the system (ADC, uC, comparators), which communicate through signals among themselves and with a SystemC model of our FPGA, generated by Verilator from the SystemVerilog code.

Periodically, the uC thread determines a 32-bit measurement that should be obtained and communicates it to the other threads, which cooperate to create the conditions for it to be produced (the ADC thread signals that it has completed its measurement and sends the most significant 24 bits of the 32, the comparators method reacts to the switch outputs of the FPGA model as if the correct range to reach were the one described by the two least significant bits of the 32, etc.).

When the ADC thread signals that a measurement is ready, the uC thread performs a SPI read, where the FPGA model plays the role described in the previous section. If the 32-bit reading obtained from this transaction differs from what was expected, the simulation returns an error. The SystemVerilog code currently passes this testbench.

How to proceed from here

The next immediate steps would be to improve the testbench to make it more realistic, and then to test the SystemVerilog code on a real FPGA and microcontroller, such as the DE1-SoC boards present in our lab and our development board, respectively. Once we receive our PCB, we should strive to test it with our actual uC and FPGA as soon as possible.

[ZeROSEro7] WiFi and SD Card

Since my last post (a long time ago), I made several improvements.

I continued to work on the WiFi chip, and the important parts of file download and upload are finished. More specifically, from a smartphone, we can choose to upload or download a file.

The upload button lets the user choose a file from the smartphone memory and send it over HTTP in blocks. The WiFi chip saves each block into its flash as a file. At the same time, the STM32 reads and deletes each block through UART. The content is printed using RTT.

As for downloading, a whole file saved in the WiFi chip’s flash can be downloaded into the smartphone’s “Download/” folder.

Before continuing with WiFi communications, I want to make the STM32 able to read and write the SD card and to communicate over BLE. For now, files are not saved in the STM32 memory, so I want to implement the SD card interface. Moreover, the STM32 doesn’t know when it needs to wait for an upload (from the smartphone), and the smartphone doesn’t know when the WiFi chip’s flash is full, nor which files are available for download. These problems can be solved by using BLE to communicate between the STM32 and the smartphone.

I also worked on SD Card communications. I implemented and tested four main functions using ChibiOS:

sd_write_byte  # Write a single byte at a specified address
sd_write       # Write several bytes from a buffer
sd_read_byte   # Read a single byte at a specified address
sd_read        # Read a memory area

To deal with file saving, I will start simply, with only one file saved (see the sketch below). Then, I want to use a FAT library in C to manage files. I read the AmpeROSE post about SD cards, and I would like to use the same API.
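
As a starting point, a single file at a fixed address is enough. Here is a rough sketch of what I have in mind, assuming prototypes like sd_write(addr, buf, len) / sd_read(addr, buf, len) for the functions above (names, header layout and addresses are placeholders until a real FAT layer is in place):

#include <stdint.h>

/* Assumed prototypes for the ChibiOS-based helpers listed above (0 on success). */
int sd_write(uint32_t addr, const uint8_t *buf, uint32_t len);
int sd_read(uint32_t addr, uint8_t *buf, uint32_t len);

#define FILE_BASE_ADDR  0x00000000u   /* fixed location of the single stored file */
#define FILE_MAX_SIZE   4096u
#define FILE_MAGIC      0x5A7E57EDu   /* marks "a valid file is stored here" */

/* Tiny header so the file length can be recovered at the next boot. */
struct file_header {
    uint32_t magic;
    uint32_t length;   /* payload length in bytes */
};

int save_single_file(const uint8_t *data, uint32_t len)
{
    struct file_header hdr = { FILE_MAGIC, len };

    if (len > FILE_MAX_SIZE)
        return -1;
    if (sd_write(FILE_BASE_ADDR, (const uint8_t *)&hdr, sizeof(hdr)) != 0)
        return -1;
    return sd_write(FILE_BASE_ADDR + sizeof(hdr), data, len);
}

int load_single_file(uint8_t *out, uint32_t *len)
{
    struct file_header hdr;

    if (sd_read(FILE_BASE_ADDR, (uint8_t *)&hdr, sizeof(hdr)) != 0)
        return -1;
    if (hdr.magic != FILE_MAGIC || hdr.length > FILE_MAX_SIZE)
        return -1;                    /* nothing valid stored yet */
    *len = hdr.length;
    return sd_read(FILE_BASE_ADDR + sizeof(hdr), out, hdr.length);
}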

This week, I will continue to work on SD Card and on WiFi communications.

See you next week.

[ZeROSEro7] Keyboard works !

This week, we made great progress with the USB keyboard: we are now able to use it on a computer with the STM32 (Olimex) in between. I also worked with Vincent on porting the code from the STM32F407 (Olimex) to the STM32F205 (USB Sniffer and Stealth Drop). Finally, I started to work on the SPI communication between an STM32 (Olimex) and an nRF52 (BLE).

USB Keyboard

The USB keyboard works on a computer with an STM32 (Olimex) in between, so I’m able to capture every keystroke and detect passwords, emails, etc.

First of all, I brought up a USB HID connection between the computer and the device. When this communication starts, the STM32 sends its device, configuration, interface, HID and endpoint descriptors. These descriptors are currently hard-coded; they will be generated dynamically soon.

On the other side, a USB host communication starts when a USB keyboard is plugged into the device (Olimex).

All interrupt transfers from the keyboard are directly forwarded to the computer, so it’s possible to use the keyboard normally… almost normally!

Typing is fluid and every key combination works, except Caps Lock and Num Lock! Indeed, all reports from the keyboard are forwarded to the computer, but not in the other direction. Pressing Caps Lock or Num Lock makes the computer update the status of the keyboard LEDs. Worse, when the computer sends the LED status to the keyboard (STM32), the STM32 crashes.
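
For reference, that LED report is tiny: with the standard boot-protocol keyboard it is a single byte, where bit 0 is Num Lock, bit 1 Caps Lock and bit 2 Scroll Lock. The fix is essentially to accept it and forward it to the real keyboard instead of crashing; a sketch (the function names are placeholders for our own USB stack):

#include <stdint.h>

/* Boot-protocol keyboard LED output report: one byte of LED state. */
#define LED_NUM_LOCK    (1u << 0)
#define LED_CAPS_LOCK   (1u << 1)
#define LED_SCROLL_LOCK (1u << 2)

/* Placeholder: sends the LED report to the physical keyboard through the host port. */
void host_send_led_report(uint8_t leds);

/* Called when the computer issues a SET_REPORT (or interrupt OUT) with the LED state. */
void on_led_report_from_computer(const uint8_t *report, uint32_t len)
{
    if (len < 1)
        return;                      /* ignore malformed reports instead of crashing */
    host_send_led_report(report[0]); /* keep the real keyboard's LEDs in sync */
}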

Hello World USB Sniffer !

We received our PCB, so we wrote code to test each feature on the board. On the USB Sniffer we want to test:

  • STM32F205
    • RGB LED
    • USB A male
    • USB A female
    • SPI communication
  • nRF52832
    • LED
    • BLE communication
    • SPI communication

We ported features from the STM32F407 (Olimex) to use them on the STM32F205. For that, we needed to adapt board.h and the toolchain configuration. Enguerrand chose the code to flash on the nRF52832.

And the great news is that everything works!
Now we have to try the SPI communication between the STM32 and the nRF52.

Slave SPI

I started to work on the SPI communication between an STM32 and an nRF52. It’s important that the nRF52 be the master of the communication! Unfortunately, ChibiOS doesn’t implement SPI slave mode…

I grabbed STM32CubeF4, which is an MCU package for the STM32F4 series (HAL, low-layer APIs and CMSIS (CORE, DSP, RTOS), USB, TCP/IP, file system, RTOS, graphics – coming with examples running on ST boards: STM32 Nucleo, Discovery kits and Evaluation boards). It includes an example of slave SPI.
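
The relevant part of that example boils down to configuring the peripheral as a slave and doing a blocking exchange. A minimal sketch with the HAL (instance and settings are placeholders, and the real code still has to coexist with ChibiOS):

#include "stm32f2xx_hal.h"   /* our target is the STM32F205; the Cube example targets the F4 */

static SPI_HandleTypeDef hspi1;

void spi_slave_init(void)
{
    hspi1.Instance            = SPI1;
    hspi1.Init.Mode           = SPI_MODE_SLAVE;      /* the nRF52 is the master */
    hspi1.Init.Direction      = SPI_DIRECTION_2LINES;
    hspi1.Init.DataSize       = SPI_DATASIZE_8BIT;
    hspi1.Init.CLKPolarity    = SPI_POLARITY_LOW;
    hspi1.Init.CLKPhase       = SPI_PHASE_1EDGE;
    hspi1.Init.NSS            = SPI_NSS_HARD_INPUT;  /* chip select driven by the master */
    hspi1.Init.FirstBit       = SPI_FIRSTBIT_MSB;
    hspi1.Init.TIMode         = SPI_TIMODE_DISABLE;
    hspi1.Init.CRCCalculation = SPI_CRCCALCULATION_DISABLE;
    HAL_SPI_Init(&hspi1);
}

/* Blocking exchange: waits for the master to clock `len` bytes in and out. */
HAL_StatusTypeDef spi_slave_exchange(uint8_t *tx, uint8_t *rx, uint16_t len)
{
    return HAL_SPI_TransmitReceive(&hspi1, tx, rx, len, HAL_MAX_DELAY);
}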

I’m trying to merge that with ChibiOS…

Next week

I want to validate the slave SPI communication between the STM32 and the nRF52 and add this feature to the USB Sniffer. I would also like to upgrade the USB keyboard to copy the USB descriptors dynamically and to fix the problem with Caps Lock and Num Lock.

[ZeROSEro7] Android CI, GATT dynamic connection and custom service

Great progress was made since my last post!

First of all, the CI was greatly improved. We now use smaller Docker images for the different jobs, and the code formatting is enforced using the Checkstyle plugin for Gradle.

I’m working intensely on the BLE part, as it is essential for all 3 sub-projects. We tested it on our USB Sniffer PCB: the BLE part works at 20 dBm (compared to the Nordic dev kit), which gives a roughly 15 m connection range. This is completely fine for all the gadgets but a bit short for the Stealth Drop. We don’t need a connection there, but one advertisement has to be sent. We will measure the range for this packet reception once the Stealth Drop is soldered.

I implemented a GATT connection to a custom service on both sides (nRF52 and Android app). It uses write and notification callbacks for faster transmission; reading is much slower, as it requires more transmissions. As a reminder, during our connection protocol the device first waits for a specific advertisement from the phone (only the name field is examined for now; more complex mechanisms could be implemented if we have time). It then advertises itself as a smart shoe (nobody will try to find it ;)) and the phone initiates the GAP connection to this advertising device. The connection is already dynamic, that is to say timeouts, disconnections and restarts are handled smoothly on both sides. The device also returns to observer mode a few seconds after the advertisement.
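
On the nRF52 side, the notification half of this is essentially one GATTS call on the custom characteristic. Roughly (error handling, connection-handle management and the service/characteristic setup are omitted):

#include <stdint.h>
#include <string.h>
#include "ble_gatts.h"

/* Pushes `len` bytes to the phone as a notification on our custom characteristic. */
uint32_t notify_phone(uint16_t conn_handle, uint16_t char_value_handle,
                      uint8_t const *data, uint16_t len)
{
    ble_gatts_hvx_params_t hvx;
    memset(&hvx, 0, sizeof(hvx));

    hvx.handle = char_value_handle;
    hvx.type   = BLE_GATT_HVX_NOTIFICATION;  /* no read round-trip, hence the better throughput */
    hvx.offset = 0;
    hvx.p_len  = &len;                       /* in/out: number of bytes actually queued */
    hvx.p_data = data;

    return sd_ble_gatts_hvx(conn_handle, &hvx);
}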

I read some documentation on bluetooth security and found many ways to have a secure process and connection. Nothing is enabled right now, but if I have time before the deadline, which is in 10 days, I could do the following:

  • Enable privacy by randomly masking the address of the device. An IRK (Identity Resolving Key) will have to be shared with the phone.
  • Enable GAP connection encryption with AES and MITM protection
  • Enable GATT layer similar protections
  • Whitelist the phone’s scan requests instead of observing to get its advertisements. This has the downside that for demos with other phones, they either need to be in root userspace, or their BLE address must be entered into the device through some other way.

In the near future, I will provide the SPI interface to communicate between the phone and the STM32 through BLE, and the corresponding UI. Once the BLE part is completely finished, I will switch to the LoRa communication to have the final Spy Talk prototype.

I am very confident that the three of us will have fully working demos for all 3 projects :). Yet, I have doubts concerning the range and data rate for the Stealth Drop.

[SpiROSE] Routing voxels

Howdy!

Good to see you again after the holidays, Christmas, New Year’s Eve, alcohol, …

I haven’t written here in a while, so I’ll do a large combined post of the week before the holidays, the holidays and this week (i.e. the week after the holidays).

Routing the rotative base

Before the holidays, I did most of the place and route of the rotative base. Obviously, it ended up being heavily modified around the board-to-board connectors. The pinouts (and even the shape!) of those connectors were modified several times, due to space constraints.

In the end, this is our rotative base PCB. It is a 4-layer, 20×20 cm board that we now call “The Shuriken” due to its odd shape.

Right now, it is in fab, but with some delay due to a bug in the board house website.

Voxelization

Now that the renderer has a working PoC, it was time to wrap it in something more flexible and reusable.

Introducing libSpiROSE, which lets you turn any OpenGL scene into voxel information usable by SpiROSE. It can output either to the screen (which will be piped to the FPGA through the RGB interface) or to a PNG file.

For example, this is actual code that voxelizes a cube and then dumps it to a PNG:


#include <iostream>
#include <spirose/spirose.h>
#include <glm/gtx/transform.hpp>

#define RES_W 80
#define RES_H 48
#define RES_C 128

int main(int argc, char *argv[]) {
    glfwInit();
    GLFWwindow *window = spirose::createWindow(RES_W, RES_H, RES_C);
    spirose::Context context(RES_W, RES_H, RES_C);

    float vertices[] = {1.f, 1.f, 0.f, 0.f, 1.f, 0.f, 1.f, 0.f,
                        0.f, 0.f, 0.f, 0.f, 1.f, 1.f, 1.f, 0.f,
                        1.f, 1.f, 0.f, 0.f, 1.f, 1.f, 0.f, 1.f};
    int indices[] = {3, 2, 6, 2, 6, 7, 6, 7, 4, 7, 4, 2, 4, 2, 0, 2, 0, 3,
                     0, 3, 1, 3, 1, 6, 1, 6, 5, 6, 5, 4, 5, 4, 1, 4, 1, 0};
    spirose::Object cube(vertices, 8, indices, sizeof(indices) / sizeof(int));
    // Center the cube around (0, 0, 0)
    cube.matrixModel = glm::translate(glm::vec3(-.5f));

    // Voxelize the cube
    context.clearVoxels();
    context.voxelize(cube);

    // Render voxels as slices
    context.clearScreen();
    context.synthesize(glm::vec4(1.f));

    context.dumpPNG("cube.png");
    return 0;
}

This leads to this :

Not impressive, but the code to generate it is all contained above.

Now, add an MVP matrix, a few rotations to the cube and swap context.synthesize with context.visualize and you’ll get this :

Next week

We have a first PCB that arrived from the fab this week, so we’ll assemble it next week. However, those are the LED panels, sooooooo that’s going to take a long time.

We will also focus on building actual demos, now that we have libSpiROSE in a usable state.

See you next week!

[AmpeRose] A finger in every pie

Hello everyone,

As always this post of mine is long overdue, but to catch up on the delay, I’m gonna try to squeeze what I want to say into this one post. Here goes…

Holidays With The PCB

In the week before the holidays, and during the first week of said holidays, (Merry Christmas and happy new year, by the way!) I worked on verifying our schematics for the PCB, with the rest of my team, and I did the placement and routing of our PCB.

However, since our PCB’s design is very delicate, and very prone to being messed up by small details like distance between components, one of our professors, Alex Polti, redid the analog circuit and part of the digital circuit. So let’s give him a shout-out folks!

Minor adjustments were in order because some parts we needed for the pcb went out of stock, but this didn’t have a major impact on the circuit.

 

After The Holidays, The GUI Days

In the second part of the holidays, and all the way up to this week, I’ve been working with Abdelhadi on our PC software. We’ve faced many challenges, some of which were quite difficult. I’m not gonna talk a lot about this because Abdelhadi has already covered it in his post. I structured our code and wrote some of the back-end and view levels, as well as a first try at the USB connection and a few tests, while Abdelhadi worked on the interface and the Ethernet connection, along with a nice emulator. We have a basic working UI now and are working on improving it. We are also planning on changing the DataView level, because it’s proving very stiff and problematic to deal with: it doesn’t allow easy control of the triggers, and the data retrieval functions implemented inside are very limited and allow very little control.

This week, I did a quick review of Bilal’s code for our microcontroller. He’s done a pretty neat job of writing a simple API for us to use. Momo has already reviewed and changed some things in this code, and I’m currently reviewing it one last time.

One more thing I’ve been doing during this time is updating our environment. This includes creating a new docker image for our CI as well as updating our CI pipelines and makefiles.

 

Upcoming Week, Upcoming Work

For the upcoming week, I’m planning on fully reviewing the C code, as well as the FPGA’s code. We’re planning on merging our work on our master branch to close a “few” PSSC issues.

But most of all, I’m impatiently waiting to get my hands on the PCB so that we could test it, and get it working.

 

We’ll keep you posted guys,

Michel

[Little Brosers] PCB, Images, hash digests and flash memory

Hello everyone,

 

This week, and particularly this weekend, has been quite intense. Let’s sum it up.

PCB : First hands-on and testing

We received our PCBs this week! It was awesome to see them, like, for real, not only on my screen. We spent a day testing each component and everything went beautifully well. We had a software issue with the external flash’s pins, but some Google research solved it: we had forgotten to configure the pin used as the SPI CS, which is used for NFC by default. We just had to declare a preprocessor constant to remove NFC from this pin.
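
For the curious, on the nRF52 that constant is most likely CONFIG_NFCT_PINS_AS_GPIOS from the nRF5 SDK (our exact build flag may differ): when it is defined, the SDK startup code programs the UICR so that the NFC pins (P0.09/P0.10) behave as regular GPIOs, freeing one of them for the SPI chip select.

/* In the board/app configuration, or as -DCONFIG_NFCT_PINS_AS_GPIOS in the build flags:
 * tells the nRF5 SDK startup code to reconfigure the NFC pins as plain GPIOs. */
#define CONFIG_NFCT_PINS_AS_GPIOS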

 

Some pics :

 

Banana for scale

In case you were wondering, our drop’s PCB rolls very well and survives battery polarity inversion 🙂

Docker images

Finally, after weeks of procrastinating about Docker, I took the time to make a precisely tailored image for each job of our CI. “Took the time” means one full night and one full day, but at least we now have beautiful, lightweight Alpine/Arch Docker images for our jobs. Tests run way faster than before!

 

Hash digests

So, for the last few weeks I have been in charge of our Merkle tree, and for simplicity I never implemented message hashing. I always used the first byte of the message’s payload to run our sub-sorting (the first sorting criterion being the timestamp). I also never took care of computing the node hashes of our Merkle tree. It was just not useful for what I had to do.

However, the project is progressing and we reached the point where we needed those hash digests. So on Friday, I sat down with Chris and we reviewed the hashing code for the drop together. At the end of the day, we ended up with the exact same code for computing digests, whether from the simulation or from the drops; we just made sure to redirect the “#include”s to the appropriate header files.

This is going to be merged into master branch pretty soon.

 

External flash memory

OK, this time it’s simple: my whole weekend was dedicated to the external flash memory. On Saturday I started from master, with what Xavier had already written for it, and built basic functions such as a get_status() function allowing main() to read the status register of the flash; I spent the rest of the day reading the datasheet. Today was all about writing a full Little Brosers message to the flash. Since a flash page is 528 bytes, we decided that our messages would be at most 528 bytes long. The external flash memory has 8192 pages.

So, to quickly summarize what I have learned:

The flash has two 528-byte SRAM buffers. They are used to optimize write and read operations, which is very useful in our case.

Indeed, the nRF52 SPI driver only allows SPI transfers of up to 255 bytes (i.e. 251 bytes of payload, the other 4 bytes being used by the flash command ID and the page/word address). So we can’t fill a full page with a single transfer: even if the flash allows it, the nRF doesn’t. We could force the sending of more than 255 bytes with the nRF52’s SPI driver, but it would require a heavy setup using the nRF52’s timers and PPI channels, and we don’t have time for such a complex thing. Instead, we chose the following procedure to write a message into page i using buffer1 (a code sketch follows the list):

-> transform the message_t C struct into a byte array of size 528
-> write msg[0] to msg[250] at buffer1[0] (251 bytes)
-> write msg[251] to msg[501] at buffer1[251] (251 bytes)
-> write msg[502] to msg[527] at buffer1[502] (26 bytes)
-> push the whole buffer1 to page i
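
In C, the write path looks roughly like this (a sketch: spi_flash_write_buffer1(), spi_flash_buffer1_to_page() and message_serialize() stand in for our wrappers around the AT45DB opcodes and the struct conversion, they are not the real prototypes):

#include <stdint.h>

#define PAGE_SIZE  528u
#define MAX_CHUNK  251u   /* 255-byte SPI transfer minus 4 bytes of opcode + address */

typedef struct message_s message_t;   /* the project's message struct (fields omitted) */

/* Assumed wrappers (0 on success). */
int  spi_flash_write_buffer1(uint16_t offset, const uint8_t *data, uint16_t len);
int  spi_flash_buffer1_to_page(uint16_t page);
void message_serialize(const message_t *msg, uint8_t out[PAGE_SIZE]);

int flash_write_message(uint16_t page, const message_t *msg)
{
    uint8_t raw[PAGE_SIZE];
    message_serialize(msg, raw);             /* message_t -> 528-byte array */

    /* Fill SRAM buffer 1 in chunks small enough for the nRF52 SPI driver. */
    for (uint16_t off = 0; off < PAGE_SIZE; off += MAX_CHUNK) {
        uint16_t len = PAGE_SIZE - off;
        if (len > MAX_CHUNK)
            len = MAX_CHUNK;                 /* 251, 251, then the final 26 bytes */
        if (spi_flash_write_buffer1(off, raw + off, len) != 0)
            return -1;
    }

    /* Program the whole buffer into main-memory page `page`. */
    return spi_flash_buffer1_to_page(page);
}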

 

The procedure for reading is quite similar, except that we do not use the buffers: we use the low-power continuous array read on page i three times (0->250, 251->501, 502->527).

 

It is 22:36 and for now it “seems” that writing works fine; I’m not so sure about reading. I’ll stop here for today; we’ll see tomorrow after a full night of recovery. I’ll have to show it to my coworkers and see if they have any idea how to debug this part. I have to admit that it is not very easy, since we still don’t fully master the NRF_LOG macro, which is supposed to replace the usual printf() from the libc. It very often doesn’t actually print the characters I asked for, skipping some of them if the number of characters is too high, even if I raise the RTT buffer size.

So for now, flash support is not reliable.

 

See you

 

Antony Lopez

[AmpeROSE] User Interface

Hello everyone,

Lately I have been working on the user interface. The software that will run on the PC must receive measurements from AmpeRose, send commands to AmpeRose (through Ethernet or USB) and display the received measurements in the UI. The user can decide whether to display all the received data, only the data that satisfies a certain trigger, the spectrum… Another critical role of the PC software is data correction: during the initial phase, all the calibration data will be sent to the PC, and during the normal stream of measurements the software will correct the received measurements accordingly.

In a previous post, we presented the class diagrams that will be implemented. As a first step, Michel & I implemented the skeleton (with some modifications judged necessary). As discussed in the mentioned post, the software has 3 main components:

Back-end

The back-end will handle all the connections with AmpeRose. It must implement 2 main methods: Send Command and Receive Measurement.

In order to implement these methods, I first had to implement the Ethernet connection class. Since AmpeRose will be the server of two TCP connections (data – commands), the PC software implements 2 TCP clients running in 2 different threads. The Ethernet receiver reads packets of measurements and places them in a queue. The Ethernet sender reads from the commands queue and sends the commands to AmpeRose.

The words sent and received by the connection component of the back-end are raw 32-bit words that have no meaning by themselves. That’s why we have to implement a command translator and a data translator. The command translator maps 32-bit words to the commands we defined in our protocol. The data translator takes the 32-bit words and extracts the value, the caliber (measurement range) and the GPIO status.

The final component of the back-end is the data corrector. It implements 2 methods: the first adds calibration measurements and the second applies the correction to a given measurement. We are planning to use linear regression for the correction; however, this component is not implemented yet.

To sum up, in order to send a command we have to translate it first (using the command translator) and then send it to AmpeRose using the chosen connection (in the Ethernet case, by placing it in the commands queue; the Ethernet sender thread then gets the command from the queue and sends it). In order to receive a measurement, we read a value from the chosen connection (in the Ethernet case, we read from the queue filled by the Ethernet receiver thread). The read value is then expanded into useful information and the correction is applied. In order to test these functionalities, I wrote a simple AmpeRose emulator, and we can successfully send commands and receive measurements (generated at a frequency of 100 kHz).

Data View

Data View is an intermediary layer between the back-end and the main UI. It handles how the data will be viewed. In this class we can choose the view mode: Normal View Mode, where data is viewed as a normal continuous stream, or Trigger View Mode, where only data that obeys a certain trigger is shown. (We can also choose the trigger that will be applied: a threshold, a certain slope…). The main thread of the Data View calls the back-end’s receive-measurement method and stores the data in a container with a capacity specified by the user.

User Interface

Contrary to what I thought, the graph display of the received measurements turned out to be the trickiest part of the PC software. The UI is composed of 2 main parts: Control and Graph (as planned in this post). The user controls when to start and stop the acquisition and how the data is displayed. Initially, if several AmpeRose devices are connected, the user also chooses which specific one to connect to.

The tricky part is how we are going to display 100,000 points per second. Receiving measurements at this rate is not a problem; displaying them is. In the current version we apply down-sampling, and as we zoom in, more and more points are restored. PyQtGraph (the library we are using) supports 3 methods for down-sampling: ‘subsample’, where we take the first of every N samples; ‘mean’, where we calculate the mean of N samples; and ‘peak’, where we down-sample by drawing a saw wave that follows the min and max of the original data. For now we are using the ‘peak’ method because we are interested in the extreme values.

Next Week

For now, we have a basic working UI ready to be used with the microcontroller. The UI can already start and stop the acquisition, receive measurements and display them. This week I will continue working on the UI. I will start with the implementation of data correction and statistics. I already implemented the trigger view mode, but it needs a lot of improvements that I will work on too. I will try to improve the performance of the graph display (currently the down-sampling calculations are done in the main thread, which may cause lag as more and more data is received, so I will try to move the down-sampling to a separate thread). Finally, on another note, if we receive our board this week I will also work on testing it.

Until next week 🙂

 

[Little Brosers] DFU, Flash, and Heroku

Hello everyone. I’ve been working on various things for our project, here is a summary.

DFU

After struggling a lot, I succeeded in performing a DFU in bootloader mode (in this mode, there is no app in flash). Now I am working on a DFU in buttonless mode, where the DFU is performed through a GATT service while the application is running.

We need to flash the bootloader and the SoftDevice to make it work. On reset, the bootloader checks the apps present in the internal flash: it computes their CRC32 and compares it to the checksum stored in a special internal flash page named bootloader_settings_page. So now, when we flash our app, we also need to write the correct checksum in that page. The nrfutil program can generate a hex file (a text file, basically a list of addresses and their data) with the content of the bootloader settings page, and we can just merge it with the application using the Nordic mergehex program to make it work. I still need to write a script for the merge, and also handle the keys in the CI (with a protected secret key).

Flash

I wrote basic code that checks whether the external SPI flash (AT45DB321E) is present and stops execution otherwise. For that I used the “Manufacturer and Device ID Read” command: I just send an opcode, and the flash has to answer with the correct 5 bytes.
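
Concretely, the check is a single full-duplex transaction; a sketch with a generic SPI exchange (spi_exchange() is a placeholder for the actual nRF SPI driver call, and I only hard-check the Adesto/Atmel manufacturer byte, 0x1F, rather than the full 5-byte answer):

#include <stdbool.h>
#include <stdint.h>

#define CMD_READ_ID 0x9Fu   /* "Manufacturer and Device ID Read" opcode */

/* Placeholder for the SPI driver call: clocks out `len` bytes from tx while
 * capturing the same number of bytes into rx. */
void spi_exchange(const uint8_t *tx, uint8_t *rx, uint32_t len);

bool flash_is_present(void)
{
    uint8_t tx[6] = { CMD_READ_ID, 0, 0, 0, 0, 0 };  /* opcode + 5 dummy bytes */
    uint8_t rx[6] = { 0 };

    spi_exchange(tx, rx, sizeof(tx));

    /* rx[1..5] holds the 5 ID bytes; the first must be the manufacturer ID. */
    return rx[1] == 0x1Fu;
}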

After writing a board configuration header and receiving the drop boards, I tested it, but it was not working. The logic analyser showed that the CS pin was not toggling and stayed low. I tested it on a second drop and, surprisingly, it worked for a while and then failed. At that point I suspected a hardware problem, for example the flash receiving too much voltage and being destroyed. Fortunately, with Antony’s help, we realised that the CS pin is used for NFC by default and, unlike the other GPIO pins, needs to be explicitly configured as a GPIO. I think it worked at times because the behaviour is undefined if the CS wire is kept low all the time.

Heroku

We migrated the certification authority to Heroku because my 15€ router didn’t work correctly (after ~20 hours of uptime it would lose its internet connection, and I had to unplug and replug the RJ45 cable to make it work again).

The Heroku migration was not so easy: Heroku has an ephemeral filesystem, so the app was regenerating the keys at each startup (which happens often in the free version of Heroku). So I put all the keys in environment variables. Heroku also doesn’t provide a database; I needed to find a free MongoDB add-on for Heroku to make it work, and Heroku asks for your credit card even if it’s free. I updated the continuous deployment; it is now easier (I was previously doing a git filter-branch to keep only the ca folder history) and uses a Ruby utility named dpl (https://github.com/travis-ci/dpl).

I also put the Heroku API_KEY as a protected secret variable in GitLab (accessible only on the master branch; otherwise my coworkers could just write a CI script with “echo $API_KEY” in one of their branches and steal the API_KEY).