[CyL3D] Finalizing the schematics of our ULPI transceiver

Last Friday we had a class about signal integrity. As we learnt about all the problems we could face, I finally understood the purpose of bypass capacitors, and some of the other esoteric symbols and annotations are no longer a mystery (such as 0-ohm resistors or DNI annotations).

Thus, I had to update the schematic of the ULPI/USB subsystem of our project. You can see it below. Here are the important changes:

  • Disconnected the oscillator ENA pin: according to the datasheet, leaving the pin at high impedance enables the oscillator.
  • Added a USB power distribution switch, the TPS2051C, which is needed to avoid breaking our USB device: it limits the current delivered to the device to 500mA.
  • Added bypass capacitors to the oscillator and the power switch.
  • Updated the power supply capacitors.
  • Added a 10K resistor for VBUS, connected to a 10uF capacitor, which should be enough for our system. The USB 2.0 specification recommends a 120uF capacitor for hosts, but we do not intend to hot-plug our USB device or have a cable between our device and our system.
  • Added names to everything.
  • Connected the RESETBn pin to our nRST.
ULPI/USB Subsystem Schematic

As such, our ULPI/USB system should work. However, I have a doubt about whether or not I should add a resistor on the wire between the oscillator’s CLK output and the USB3320C REFCLK. Our development kit has one on its schematic, but the oscillator’s datasheet does not mention placing a resistor on the CLK output. If anybody has an idea about it, I’d be happy to hear it.

Booting the FPGA & Schematization of the USB3320

For the past few days, I have been working on the schematic of our ULPI transceiver, the USB3320. While my colleagues were working on other parts of the schematics, I also had to focus on other tasks such as getting our SOM to work and improving our system architecture.

Making our Cyclone V SOM work was a breath of fresh air compared to the schematic work. Following the Quick Start Guide of our development kit, I had to make sure the jumpers were set up correctly (and they were – not sure if it was the factory default or if Alexis did his magic before giving us the devkit) and configure the UART0 interface. Once everything was set and connected on /dev/ttyUSB0, I simply plugged in the devkit and it booted correctly.

You can see the boot log in this pastebin post.

As we would like to be able to use Wi-Fi USB dongles, we are currently adding USB 2.0 host support. Unfortunately, we must use the ULPI interface to communicate with our USB devices. Thus, we have to use a ULPI-USB transceiver.

ULPI works with a 60MHz clock and I am unsure for now whether our FPGA can generate such an accurate reference. However, the USB3320 can work with an external oscillator and it has its own PLL to generate a 60MHz clock from a 12MHz reference clock.

Below is a work-in-progress schematic of the USB3320 in our system. It also includes a linear voltage regulator, as the transceiver needs a 1.8V DC supply as well.

What is definitely missing are the appropriate capacitors and resistors, which I have not added everywhere yet.

USB3320 Schematic (WIP)

[CyL3D] More voxelizing and simulation

For the past week, we have been discussing at length the components we will use. We mainly focused on the power supply, our photosensor(s), our Wi-Fi module(s) and our LED driver.

For more information about our hardware decisions, you may look at Ambroise, Guillaume and Baptiste’s posts.

In my last post, I showed you how I created a vtkUnstructuredGrid that fits our display system. Over the past week, I managed to extract geometrical data and colors from the meshes in Blender, and I tried a lot of different ways to fit the mesh data into my grid representing our cylinder.

The voxelizing algorithm takes as input a list of colored meshes and outputs an image containing the slices of the scene.

As we have a 40×30 LED panel and 256 steps per rotation, I decided to output one frame as a 1200×256 image, where every row is a different slice of our scene and the 1200 pixels in each row hold the RGB components of the LEDs, going from left to right, top to bottom. This image is currently saved as .bmp and .raw files (the latter being basically a BMP without the header).
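To make the layout concrete, here is a minimal sketch of how a voxelized frame could be flattened into this 1200×256 layout and written out. The frame buffer, file names and the use of NumPy/Pillow are illustrative assumptions, not the actual script:

```python
import numpy as np
from PIL import Image

N_STEPS, PANEL_W, PANEL_H = 256, 40, 30   # rotation steps, LED panel size

# hypothetical voxel buffer: one RGB triplet per LED per angular step
frame = np.zeros((N_STEPS, PANEL_H, PANEL_W, 3), dtype=np.uint8)

# flatten each slice into one 1200-pixel row (LEDs left to right, top to bottom)
image = frame.reshape(N_STEPS, PANEL_H * PANEL_W, 3)

image.tofile("frame.raw")                 # .raw: just the pixel bytes, no header
Image.fromarray(image).save("frame.bmp")  # .bmp: same pixels with a BMP header
```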

I created multiple scenes to test our algorithm. Below, you can see the result of voxelizing different kinds of meshes. For each, the first picture is the model of the mesh I tried to voxelize. The second is the output of the algorithm, the 1200×256 image representing the LED configurations at each angle. The third is how the LED configurations should appear to our eyes with the real system (it is a simulator, written by Guillaume using Processing – it works pretty well!).

Voxelization results

Colored cube

Original cube model
Voxelizing algorithm output
Simulation of the colored cube display

Green cylinder

Green cylinder model with 32 faces
Voxelizing algorithm output of green cylinder
Simulation of the green cylinder display

Bi-colored Sphere

Bi-colored sphere model
Voxelizing algorithm output of bi-colored sphere
Simulation of the bi-colored sphere display

Colored text

Colored text model
Voxelizing algorithm output of colored text
Simulation of the colored text display

Results interpretation

As we can see, our voxelizing algorithm is not perfect but we can easily recognize the original meshes. Let’s see the pros and cons of the current algorithm.

Pros:

  • A cylinder with a diameter-to-height ratio of 4:3 (i.e. a diameter of 4/3 for a height of 1, matching our 40×30 panel) maps perfectly onto our system.
  • We can easily recognize shapes and text, although the result does not feel perfectly aligned.
  • The raw output is well suited to our system: reading the output file sequentially gives the different slices of a frame, in the correct order.
  • The algorithm can basically voxelize any colored mesh scene.
  • Blender has an animation framework, and it should not be too much work to make a video with our algorithm: by simply voxelizing every frame and displaying the results one after the other, we get a 3D video (see the sketch right after this list).
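For instance, here is roughly what that loop could look like inside Blender; voxelize_scene is a hypothetical stand-in for our voxelizer, not its real name:

```python
import bpy

def voxelize_scene(path):
    # placeholder for our voxelizer: the real version walks the scene's meshes
    # and writes the 1200x256 raw slice image to `path`
    pass

scene = bpy.context.scene
for f in range(scene.frame_start, scene.frame_end + 1):
    scene.frame_set(f)                      # move every animated object to frame f
    voxelize_scene("frame_%03d.raw" % f)    # one slice image per animation frame
```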

Cons:

  • The algorithm is slow for now (1 second for a simple cube, about 1 minute for a scene with 500 polygons), but it is written in Python and not really optimized yet, as I did this for prototyping. It could be rewritten in C, and we could have Blender call the C program instead of executing the whole algorithm in Python. On top of that, it can rather easily be multi-threaded, since each slice is processed independently of the others (see the sketch right after this list).
  • The complexity of the algorithm is linear in the number of polygons in the scene, which makes real-time voxelization of complex scenes quite complicated.
  • Aliasing of straight lines can be seen when faces are displayed at a large radius. But this is more of a resolution problem than an algorithm problem, and there is not much we can do about it. If we have a big quad, it is best to show it in the middle of the scene and at a small scale, as that is where the LED density is the highest and there is not much aliasing.
  • There is some noise in our sphere voxelization – but perhaps it is because the sphere modeled in Blender is a UV-sphere without that many faces.
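About the multi-threading point above: since each slice only depends on the scene and its own angle, even a simple process pool is enough to parallelize the work. A minimal sketch, where the worker is a placeholder for the real slice computation:

```python
from multiprocessing import Pool
import numpy as np

N_STEPS, PANEL_W, PANEL_H = 256, 40, 30

def voxelize_slice(step):
    # placeholder worker: the real one intersects the meshes with the half-plane
    # at this angular step and returns the 1200 LED colours of that slice
    return np.zeros((PANEL_H * PANEL_W, 3), dtype=np.uint8)

if __name__ == "__main__":
    with Pool() as pool:                          # slices are independent
        rows = pool.map(voxelize_slice, range(N_STEPS))
    np.stack(rows).tofile("frame_parallel.raw")   # 256 rows of 1200 RGB pixels
```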

File size and compression

Ultimately, the frames or the video will either be stored in the flash of our system or sent/streamed over Wi-Fi. In both cases, file size matters. Here our models are pretty simple, but having a lot of unlit LEDs makes the output image of the algorithm sparse, and sparse data is easy to compress.

Below, you can see how compressible our output images are. I used gzip -c6 to compress the raw files and see how much they could be compressed. Here are the results:

File sizes of our raw data and gzipped data

As we can see, the compressed file is about 1% of the size of the raw file. On top of that, gzip is quite easy to use (there are portable libraries of about 200 lines of code online, and it definitely runs on a Cortex-A9), fast (the speed/compression trade-off can be tuned), and efficient.
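The same kind of measurement can be reproduced from Python with zlib (the DEFLATE implementation behind gzip); the file name below is just an example:

```python
import zlib

with open("frame.raw", "rb") as f:
    raw = f.read()

compressed = zlib.compress(raw, 6)   # level 6, same as the gzip level used above
print("%d -> %d bytes (%.1f%% of the original)"
      % (len(raw), len(compressed), 100.0 * len(compressed) / len(raw)))
```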

I do not expect to see a 100:1 compression ratio on every scene I could voxelize but it is rather comforting to see that we can ultimately use compression in our system if needed.

If you have any idea on how we could improve our results or any feedback to give, please comment.

[CyL3D] More work on 3D data streaming and display

Over this weekend, we finally agreed on how we are going to deal with our large data throughput.

At first we wanted to use an MCU, but we noticed that if we use Wi-Fi and stream the data to our system, we need to:

  • Have a reliable Wi-Fi module that works properly in an environment where the 2.4GHz band is already in use. For instance, people usually carry their phones with Bluetooth and Wi-Fi active, and this should not break our system.
  • Be able to buffer the data before displaying it, as the Wi-Fi latency may vary. Without external memory, an MCU cannot absorb much more than a few milliseconds of jitter.

Thus, we decided to take a System-On-Module instead. Ambroise talks about it in this post.

With an FPGA, a dual-core ARM processor with an FPU, embedded Linux and 1GB of RAM, our system will be rather powerful. Thus, we are considering adding some drawing primitives directly on our system rather than relying entirely on the computer to stream all the raw data.

Last time, I explained how I tried to make Blender work with VTK. I managed to voxelize a mesh and view every single voxel in the FIJI visualization software. Unfortunately, that voxelization is done on a regular cubical grid, which is not what we want.

I succeeded in making a grid that represents our system: a cylinder which represents our 40×30 screen rotating around its Z-axis. On the cylindrical grid below, every white cell is meant to represent an LED at one of the 128 steps.

Custom VTK Grid used by our system

Now, I am trying to fill this grid with our mesh color data. Then, we will simply need to extract each slice of the cylinder to know the LED configuration at each step.
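For reference, here is roughly how such a cylindrical vtkUnstructuredGrid can be built with the Python VTK bindings. The dimensions, the inner radius and the assumption that the 40 panel columns span the radius are mine for illustration; the real script differs in the details:

```python
import math
import vtk

N_STEPS, N_COLS, N_ROWS = 128, 40, 30   # angular steps, panel width, panel height
R_MIN, R_MAX, HEIGHT = 0.05, 1.0, 0.75  # assumed dimensions of the display volume

points = vtk.vtkPoints()
grid = vtk.vtkUnstructuredGrid()

def pid(step, col, row):
    # index of the lattice point at (angular step, radial column, vertical row)
    return (step % N_STEPS) * (N_COLS + 1) * (N_ROWS + 1) + col * (N_ROWS + 1) + row

# insert all points of the cylindrical lattice
for step in range(N_STEPS):
    theta = 2.0 * math.pi * step / N_STEPS
    for col in range(N_COLS + 1):
        r = R_MIN + (R_MAX - R_MIN) * col / N_COLS
        for row in range(N_ROWS + 1):
            z = HEIGHT * row / N_ROWS
            points.InsertNextPoint(r * math.cos(theta), r * math.sin(theta), z)
grid.SetPoints(points)

# one hexahedral cell per (step, LED column, LED row); step + 1 wraps around
for step in range(N_STEPS):
    for col in range(N_COLS):
        for row in range(N_ROWS):
            ids = [pid(step, col, row), pid(step, col + 1, row),
                   pid(step, col + 1, row + 1), pid(step, col, row + 1),
                   pid(step + 1, col, row), pid(step + 1, col + 1, row),
                   pid(step + 1, col + 1, row + 1), pid(step + 1, col, row + 1)]
            grid.InsertNextCell(vtk.VTK_HEXAHEDRON, 8, ids)
```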

[CyL3D] Blender and VTK

As we wrap up our components list, I decided to look into the kind of data that we will display on our system.

Our system will probably display either:

  • A mesh made in 3D modeling software such as Blender
  • Medical or scientific 3D data, such as a CT scan or a geophysical map.

VTK is an open-source software system for 3D computer graphics, image processing and visualization. VTK can handle voxels pretty well, while Blender only has very limited support for volumetric data. VTK is developed in C++ but has wrappers for Python, which means we can use it from Blender.

Someone has already made a VTKBlender module to make blender work with VTK, so I decided to use it. It is available on GitHub.

Even with VTKBlender, it was quite a pain to make Blender 2.79, Python 3.7 and VTK 8.11 work together.

In this paper, I found the source code of an old Blender plugin that worked with Python 2, VTK 5 and Blender 2.49. Unfortunately, quite a lot of the code is not compatible with the current versions, so I am upgrading it.