We are using Intel's PIO core IP to implement debug registers accessible from the HPS, making it easy to communicate with the FPGA. After verifying that we could read and write such a register, I implemented a shell, inspired by ChibiOS, accessible from a Linux terminal.
I use these PIO instances as bypass registers in our FPGA modules. The implementation is still ongoing, but I currently have a shell command to test the soldering of the LEDs (by lighting all of them in a dim white).
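The PIO registers are plain memory-mapped words, so a shell command boils down to writing a word at the right offset. Here is a minimal sketch of that access pattern; the register offset and the "dim white" encoding are assumptions for illustration, not our real address map.

```python
import struct

# Hypothetical offset: the actual PIO base depends on our Qsys address map.
PIO_LED_OFFSET = 0x0

def write_reg(mem, offset, value):
    """Write a 32-bit little-endian word into a memory-mapped region."""
    struct.pack_into("<I", mem, offset, value & 0xFFFFFFFF)

def read_reg(mem, offset):
    """Read back a 32-bit little-endian word."""
    return struct.unpack_from("<I", mem, offset)[0]

# On the real board, `mem` would come from mmap-ing /dev/mem at the
# lightweight HPS-to-FPGA bridge base (0xFF200000 on Cyclone V SoC):
#   with open("/dev/mem", "r+b") as f:
#       mem = mmap.mmap(f.fileno(), 4096, offset=0xFF200000)
# A plain bytearray stands in for the mapped region here.
mem = bytearray(4096)
write_reg(mem, PIO_LED_OFFSET, 0x010101)  # assumed encoding for dim white
```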
The next step will be to complete all of our FPGA modules along with their basic bypass registers and test benches (this should be done by Monday).
I started writing the code for the FPGA module that will feed the LED drivers. It corresponds to the “LED Driver driver” block of our FPGA architecture diagram. In order to have a functional system as soon as possible once we receive the PCB, I am also building a test bench for the module. So far I have assertions on the timing requirements (hold times, setup times, etc.), on the validity of the output data and on the coherence of the internal state machine. I will finish the test bench first, then the module.
Concerning the timing requirements, there are 7 different LAT commands. The datasheet of the TLC5957 gives the setup time before a rising edge of SCLK for 6 of them. However, a diagram in the application note of the driver (Figure 7 of SLVUAF0) suggests that there is also a setup time to respect for the 7th command. Does anyone from SpiRose have additional information on that?
As we proceed with the schematics and the PCB placement, we are realizing that we will need more than a single PCB to make everything fit. We will therefore design a horizontal PCB, very much like SpiRose's, though ours will be rectangular for budgetary reasons and will only host our voltage regulators. The interface between the horizontal and vertical PCBs will be three big pads: GND, 3.3V and 5V. Hence, no signal integrity concerns (at least not at this interface). I placed our 12V to 5V converter in Xpedition Layout while conforming to the PCB layout guide of the module.
Furthermore, it appears that the 190 by 190mm PCB size we initially chose will make it problematic to solder the LEDs automatically, the issue being the width of the PCB. I was routing the LED drivers so that we could stack five of them on each side of the panel (according to our previous PCB layout idea), but that idea has to be abandoned in order to make our PCB narrower. We will instead put 4 drivers on each side and 2 above.
Altera provides a spreadsheet named “Power Delivery Network” designed to determine an optimal number, type and value of decoupling capacitors to achieve a target impedance. I selected the VRM type (switcher), the supply voltage (3.3V), the maximum current (2.58A), the transient current (I typed 30%, but it seems extremely hard to obtain a real value without the FPGA programming being done) and the ripple tolerance (5%, given the specifications of the FPGA and knowing that we don’t have transceivers for PCIe, the only bank requiring a tighter 3% ripple). For the spreading inductance, BGA via inductance and plane capacitance profile, I left the default values because I don’t know how to estimate them.
The tool looks up the capacitors in its database and tries to find the optimal combination for our target impedance. With the preceding values, it outputs 2x 0.01µF, 2x 0.047µF, 1x 1µF and 1x 10µF capacitors, with their respective ESR, ESL and mounting inductance (Lmnt).
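The target impedance the tool works toward follows directly from the inputs above; a quick sketch of the computation, with my 30% transient figure remaining a rough guess:

```python
import math

# Target impedance for the 3.3V PDN, following the inputs I gave the tool.
v_supply = 3.3     # V
ripple = 0.05      # 5% allowed ripple
i_max = 2.58       # A, estimated maximum current
transient = 0.30   # fraction of i_max switching at once (rough guess)

z_target = (v_supply * ripple) / (i_max * transient)  # ≈ 0.213 Ω

# The tool then checks that the parallel combination of the capacitors
# stays below z_target across frequency; each capacitor behaves as a
# series RLC (ESR, ESL plus mounting inductance, C):
def cap_impedance(f, c, esr, esl):
    w = 2 * math.pi * f
    return math.sqrt(esr**2 + (w * esl - 1 / (w * c))**2)
```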
Development board decoupling
Looking at the development board of our SoM, they use 4x 10µF and 9x 100nF capacitors. If I enter these capacitors into the Altera tool, it shows a resulting impedance higher than the one I obtained with my inputs. But given that the development board has to support any configuration of the FPGA, its decoupling capacitors are certainly sized for more extreme power draws than what we will experience. So I think it would be a good idea to keep what Aries did on their development board.
On the mobile part of our system, we need to convert 12V to 5V and 3.3V. Each conversion has its constraints:
Both need to be able to start slowly enough in order to limit the inrush current from the 12V rail to a minimum.
The 5V rail needs to withstand a step of its output current from almost 0A to 9A in a few tens of nanoseconds, corresponding to the activation of a new LED scan line.
The 3.3V rail needs to be able to start quickly enough for the FPGA to startup properly.
I looked at the TI modules for our application. I chose the LMZ22005 for the 12V->3.3V@5A conversion, and the LMZ12010 for the 12V->5V@10A conversion, because of the low count of external components required to make the design work. TI has an online tool (WEBENCH Power Designer) allowing us to select a converter, choose the external components according to our output voltage and current constraints, and simulate its behavior (they also have a downloadable program called Switcher Pro, but it is outdated and doesn’t include recent modules). It helped me confirm that the default 1.6ms soft-start time of the modules I chose is enough to limit the inrush current.
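As a rough sanity check on what the soft start buys us, the dominant inrush term is the charging current of the output capacitance, roughly C·dV/dt over the soft-start ramp. The 540µF figure is the default output capacitance of the WEBENCH design; treating the ramp as linear is a simplification.

```python
# Back-of-the-envelope inrush estimate for the 5V rail during soft start.
c_out = 540e-6   # F, default output capacitance in the WEBENCH design
v_out = 5.0      # V, output voltage
t_ss = 1.6e-3    # s, default soft-start time of the LMZ12010

# Charging current of the output capacitors, assuming a linear ramp:
i_charge = c_out * v_out / t_ss
print(f"capacitor charging current ≈ {i_charge:.2f} A")  # ≈ 1.69 A
```

Well under the 12V rail's capability, which matches the tool's conclusion.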
The second step is to check that we meet the FPGA requirements. The Cyclone V Device Handbook Volume 1 indicates the relationship between the rise time of the 3.3V rail and the Power On Reset (POR) delay. The Cyclone V Device Datasheet states that there are two selectable POR delays: a fast one, between 4 and 12ms, and a standard one, between 100 and 300ms. Since the 1.6ms soft-start ramp completes before even the minimum 4ms fast POR delay, the 3.3V rail rises quickly enough for our FPGA to start properly.
Speaking of inrush current, we are in the worst-case scenario for the FPGA inrush current at startup because we are powering all the banks from the same power rail (i.e. at the same time). But it translates to at most 2.92A for a maximum duration of 200µs (see table 10-1 of the Cyclone V Device Handbook Volume 1), which is inside our estimated power envelope.
Lastly, we need to handle very quick changes in the current drawn on our 5V rail. The TI tool is a little buggy: depending on the component I choose in my design, the simulation can show voltage spikes (in the hundreds of volts, unrelated to the current being drawn). So I couldn’t simulate with a larger output capacitance than the default selection (540µF), which would probably have reduced the output voltage variations (here: -4%, +2%). The current variation is based on the rise and fall times of our LED drivers (TLC5957), which are respectively 40 and 16ns. The other interesting output of this simulation is the current actually drawn from the 12V rail: up to 12A.
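A first-order hand estimate of the two disturbance mechanisms on the scan-line step is below; the 1nH parasitic inductance and 10µs regulator loop response time are pure assumptions for illustration, not simulated values.

```python
# First-order estimate of the 5V rail disturbance on a scan-line switch.
delta_i = 9.0    # A, load step when a scan line turns on
t_rise = 40e-9   # s, TLC5957 output rise time
esl = 1e-9       # H, ASSUMED total parasitic inductance of the output path
c_out = 540e-6   # F, output capacitance of the WEBENCH design
t_loop = 10e-6   # s, ASSUMED regulator loop response time

# Inductive kick while the current ramps:
v_spike = esl * delta_i / t_rise       # ≈ 0.23 V
# Droop while the output capacitors carry the load, before the loop reacts:
v_droop = delta_i * t_loop / c_out     # ≈ 0.17 V
```

Both terms are in the same few-percent-of-5V range as the simulated -4%/+2%, which is reassuring, and the droop term shows why a larger output capacitance would help.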
However, I wonder if it wouldn’t be easier for the schematics to use the same module twice.
I looked for the power required by the System on Module (SoM) we are going to use (MCV-6DB). We are using the LVCMOS 3.3V standard, so every FPGA bank will be supplied 3.3V. According to the Wikipedia page of another SoM manufacturer, the maximum power consumption is 8.5W for an FPGA (5CSXFC6) very similar to the one we will be using (5CSEBA6). With a 3.3V supply, that gives 2.58A. This figure doesn’t include the power consumption of the peripherals included in the SoM. With the Power Calculator provided by Micron, and using the DDR3 datasheet (K4B4G1646D), I find that a single 4Gb module will not exceed 300mW, so 600mW for the two of them. The eMMC (MTFC4GLDDQ-4M IT) has a typical current consumption of 70mA when active, according to Micron documents (link).
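Summing these figures into a single current budget on the 3.3V rail (treating the eMMC current as drawn from 3.3V, which is an approximation):

```python
# Current budget for the SoM on the 3.3V rail.
v_supply = 3.3               # V
p_fpga = 8.5                 # W, Wikipedia figure for the similar 5CSXFC6
p_ddr = 2 * 0.300            # W, two 4Gb DDR3 modules (Micron calculator)
i_emmc = 0.070               # A, typical active current of the eMMC

i_fpga = p_fpga / v_supply   # ≈ 2.58 A, the figure used for the PDN tool
i_total = i_fpga + p_ddr / v_supply + i_emmc
print(f"FPGA: {i_fpga:.2f} A, SoM total: {i_total:.2f} A")
```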
Compared to our last posts, we have updated our architecture. We realized that the only SDIO-capable bank of the FPGA is used internally on the SoM to connect the HPS to the eMMC. This means that we can’t use SDIO to communicate with an SD card at high speed. Fortunately, we won’t need high speed from the SD card, since we can take our time at system startup to transfer the data to the much faster eMMC or DDR. So we will connect the SD card to the HPS through SPI.
We wanted to supply the mobile part with 5V, which could then be used directly by some of the components (LEDs and synchronization mechanism, for example), and converted to 3.3V for the rest (SoM, LED drivers, …). However, it appears that the voltage will be subject to instabilities because of the way power is transmitted through the rotating part. We have to account for that by using large capacitors as well as a higher supply voltage: at least 12V. So we will have to convert 12V to 5V and 3.3V. We are not sure whether the conversions 12V->5V and 12V->3.3V are better than 12V->5V->3.3V. In the latter case we would use the component we originally considered: the PTH05060W from TI (under review by Alexis).
Following my previous post regarding the choice of an FPGA, we found out that Cyclone V models only come in BGA format, which is very impractical for us to solder on our PCB. I focused my research on System on Modules (SoM), which have the advantage of providing us with an easier pin configuration to solder as well as an already built system around the FPGA.
In order to ensure that the variations in latency over WiFi (up to several dozen ms according to our measurements) will not compromise the display of the frames, we have to consider adding more memory to our system. With a 24-bit color depth and a 30Hz refresh rate, we would need more than the 4,460kb of embedded memory of the 5CEBA5 if we want to absorb a 50ms latency spike. Given that our final presentation will most likely be done in a WiFi-saturated environment, we have to plan for more memory.
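The arithmetic behind that claim, assuming we buffer whole frames of the 40x30 panel swept over 360 angular positions (and reading kb as kilobits):

```python
# Buffer size needed to absorb a 50 ms WiFi latency spike.
leds = 40 * 30        # panel resolution
positions = 360       # LED refreshes per rotation
depth = 24            # bits per voxel (24-bit color)
fps = 30              # frame rate
latency = 0.050       # s, latency spike to absorb

frame_bits = leds * positions * depth      # ≈ 10.4 Mb per displayed frame
buffer_bits = frame_bits * fps * latency   # ≈ 15.6 Mb for 50 ms
embedded_kb = 4460                         # 5CEBA5 embedded memory
print(buffer_bits / 1000 > embedded_kb)    # True: embedded RAM is too small
```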
The Aries MCV series includes a Cyclone V SE with a Cortex-A9 and 1GB of DDR3. We would run Linux on the CPU, which would allow us to use a WiFi-over-SDIO module. The 1GB of DDR3 will give us more than enough buffering capacity. The SoM would be connected to our PCB by two QSH-090-01-F-D-A connectors positioned underneath it.
Among the Aries products, I think the MCV-6DB would be the best for us because it keeps the same FPGA as the one I listed in my previous post.
Before we can start drawing the schematics for the PCB, it is crucial to know the exact components we will be using. So I looked for a suitable FPGA.
There are only two main competitors in the FPGA market: Altera (bought by Intel a few years ago) and Xilinx. Since the FPGAs used in the project rooms at Telecom are Altera Cyclone V, I narrowed my research to this family of components, which will allow us to easily test our software.
There are 6 product lines in the Cyclone V family, shown in this product table. We don’t need a hard processor system, since we will be using an external microcontroller, and we don’t need fast transceivers. This limits the choice to the Cyclone V E line, composed of 5 products varying in memory size, number of logic elements and I/O count. The number of I/Os and the memory size will not be a problem for us, even at the lowest tier. Given that I don’t know the size of our modules yet, I am inclined to choose (almost) the same number of logic elements as the FPGAs available at Telecom, which leaves the 5CEBA5 model, available in different pin configurations. We will need to discuss this further with Alexis to determine what can realistically be soldered on the PCB.
I also created a wiki for our project, formatting what had already been discussed among us and listed on external documents.
In this first post we would like to present the architecture of our project. It includes the choice of the type of components we will need, the way they are connected to each other and our estimations concerning their number.
We will dimension our components to handle a maximum specification, which we will very probably scale back according to the different bottlenecks we stumble upon during the project. Keeping last year’s project in mind, we are aiming at a maximum of about 2000 LEDs, a rotation speed of 50 turns per second and a refresh of each LED 360 times per rotation. With 8 bits per color channel and 8 bits for brightness, this gives a maximum bandwidth of about 150MB/s. In practice, we aim at an array of 40 by 30 LEDs in order to have a standard 4:3 display, given that we place the LEDs with the same pixel pitch in height and width.
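The bandwidth figure follows directly from those maximum numbers:

```python
# Worst-case display bandwidth from the maximum specification.
leds = 2000          # maximum number of LEDs
refreshes = 360      # LED refreshes per rotation
rps = 50             # rotations per second
bits = 8 * 3 + 8     # 8 bits per color channel + 8 bits brightness

bandwidth_bps = leds * refreshes * rps * bits
print(f"{bandwidth_bps / 8 / 1e6:.0f} MB/s")  # 144 MB/s, i.e. about 150 MB/s
```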
We will control the LEDs by rectangular blocks. The size of each block is limited by both the pin count and the bandwidth of the LED driver we choose. We plan to choose an LED driver with at least the same characteristics as last year’s project: the ability to drive 16 LEDs at the same time with an input bandwidth of 33MHz.
We estimate the WiFi throughput to be limited to about 150Mb/s, which implies that we will need to implement a compression algorithm in our future file structure.
We will need:
a Wifi module in order to stream data to the display
an SD card reader
a microcontroller unit in order to communicate with the WiFi and SD card reader modules
an FPGA used to buffer the frames and drive the LED drivers
a synchronization module in the form of an infrared sensor
an OS to handle the modules
voltage converters depending on the choice of components in order to supply accurate voltage to each one
For next time
We will choose the actual reference for each component.