Today we focused on the encoders. The module that corrects the motor speed is done, but it is not using the encoders directly yet; we tested it with a fake function that simulates the real velocity that will eventually be computed from the encoders.
The module that calculates the real velocity from the encoders is almost done. Tomorrow we will finish it and link it to the corrector module.
We also fixed FreeRTOS, as it was not running from flash memory. It seems that the first position of flash (where the stack pointer should be) is protected, since we could not write to it; we still need to investigate why. Now we can turn the Tutobot on and the program runs automatically.
The OLED is not working yet. We verified the pin assignments in the FPGA (in Libero) and they seem OK, and the code seems OK too; we are still investigating.
While measuring some pins with the oscilloscope, we once got the same curve we saw yesterday while trying to measure the line sensors, so we concluded we had never actually captured a signal from them. Today we noticed that in Libero those pins are not configured as inout, and for some reason we cannot change this parameter; this still needs to be investigated.
Today we solved our Zigbee problem, and it was really silly: we were not waiting long enough before and after entering the Zigbee's command mode. We had a busy-wait loop for this that used to work fine, but since we programmed the System Boot with a clock twice as fast as before, the loop no longer waited long enough.
Since we had both the motors and the Zigbee working well, Helen wrote a basic program to control the motors with the keyboard, sending directions through Zigbee.
Gabriel is working on driving our OLED screen. He wrote a program following the example in the datasheet, but it did not work. It is difficult to debug because the bug can be either in the software or in the pin assignment done in the Libero project (FPGA). We will re-check the pin assignment tomorrow. He also started working on a way to make it simpler to write the .dat file (the image that programs the FPGA and the System Boot).
I wrote some code to use the encoders and set up the Libero project to set flags for the corresponding ADCs, so that an interrupt is generated when the ADC measurement exceeds a certain threshold. For some reason I have not figured out yet, the interrupt routine is never called. For now I will put the interrupt approach aside and create a FreeRTOS task to monitor the ADC samples.
We debugged the motors and now they are working fine.
The encoders are returning values with good resolution; tomorrow we will work on assembling a control loop with motors + encoders.
The line sensor pins do not seem to respond well. We tried to capture the signal with a logic analyzer and an oscilloscope; we could measure a response with the oscilloscope, but only once =/, using the same program and the same oscilloscope.
The proximity sensors are not giving us good resolution between close objects and far ones.
The LEDs with PWM are working really well; we can choose in a register whether or not to use PWM mode.
Zigbee is still not working after we programmed the SysBoot with DirectC.
We did it: we can activate the Tutobot motors now! After wrestling with lots of bits to flash the FPGA image, we managed to flash one that implemented the interface between the MSS and the motors and LEDs through the FPGA. You can check out the first run in the attached video (don't mind the camera shake; I got scared when the motors started).
It could have happened earlier, but we are still having issues writing the FPGA image file to the SPI memory. We have already tried many approaches, but we still lose a few bits on every write (and by a few, I mean some 32 bits over the entire 191 KB file). We still have to figure out what is going on with the SPI interface.
It was tricky, but now we can do it: we have a program that downloads an image to write to the FPGA fabric.
Initially the approach was to download the image over the UART, but it seemed that this would take more time to develop than intended, so we switched to another approach where we compress the image and store it in the MSS RAM.
One problem with this approach is that the compressed image might be bigger than the MSS RAM can hold, which may require segmenting the FPGA image (we have not yet verified whether the image can be bigger than that). Another problem is that the binary that programs the image into the FPGA has to be relinked against the image, which then needs to be loaded into the MSS again. This can be overcome by downloading the image to the MSS RAM without relinking it to the application that flashes it, and then writing the image to the SPI flash memory.
Another problem to deal with is that this was tested only on the SmartFusion eval board, and the eval board uses a flash memory different from the Tutobot's. Since that SPI flash has a compatible instruction set, we expect it to work without having to fix much.
After preparing the patch to OpenOCD to support writing to the flash memory, we sent it upstream. It was refused due to the amount of Actel code we used to generate the embedded bin program. We could provide cleaner code by removing the unused parts of Actel's driver, but it is working and we don't have much time; after Rose we can work more on the patch.
If you are still interested in this patch, you can find it here. Or send us an email.
Our PCB has arrived! \o/ It seems we are going to solder the components tomorrow.
Libero (FPGA and project configuration)
The routing of the FPGA is done: a map between the MSS and the FPGA of which things can be connected directly to a GPIO and which cannot. So once we have our robot, we will be able to drive the peripherals that do not need complex FPGA support.
The necessary HDL modules will be written together with each peripheral's driver.
We integrated FreeRTOS into our project. There are just some minor things we still need to configure, as the code works in RAM but not in ROM =/
SPI Flash Memory
The SPI driver is done and tested on the SmartFusion Evaluation Board; we will need to adjust some aspects of the driver for our SPI flash memory.
After the OpenOCD patch, we tested DirectC in flash: it is working, but it is not fully integrated with the project. Tomorrow we hope to have the FPGA programming environment working and tested with our robot.
This weekend Helen and I worked more on the OpenOCD patch to write to the SmartFusion eNVM flash. We finally got it working; now we can load and execute code from flash \o/. Helen is now making some adjustments and organizing the git tree so we can submit it to OpenOCD.
We used Actel's eNVM driver to create a stub that OpenOCD places in RAM and uses to perform the writing. The stub is compiled, the ELF is converted into a binary image, which is in turn converted into a char array included in the OpenOCD source. After that, we wrote an OpenOCD flash driver for SmartFusion. An OpenOCD flash driver demands certain functions to be implemented, but as we are in quite a hurry, we minimally implemented only the ones we needed: probe and write. The probe function initializes the information OpenOCD holds on the flash bank; we saw in other drivers that it usually does more, but we implemented only the basics to let the write function work. The write function places the stub in RAM, then repeatedly places a part of the image in RAM and runs the stub to write it to flash.
I'll now move on to creating the Libero project for our robot. Libero is Actel's development platform for SmartFusion, and we need it to route the GPIOs to the FPGA pads. For now, it will generate an FPGA image that contains only this routing of GPIOs to pads. Later, as we work on the app libraries, we will insert the modules that control the motors, LCD, and other peripherals, and then use it to generate the final FPGA image that we will load into the Tutobot.
Today we gave a presentation showing the status of the project and how its last part will be done. We defined a more detailed plan for the Tutobot, since the architecture of the PCB is now closed and we can think about how to program everything. Our deadlines are loose, since we don't know when the first unit will arrive. The following table shows the assigned responsibilities and deadlines for each task:
eNVM using OpenOCD
Libero project (IOMUX)
FPGA fabric programming
App libraries – Motor/Encoder
App libraries – FreeRTOS/Zigbee
App libraries – Sensors
App libraries – LCD/Buzzer
App : Tutogrid
App : Tutobrush
The last two tasks, Tutogrid and Tutobrush, are the development of two applications that will show some of the Tutobot's potential.
Tutogrid is an application that discovers the path from one predefined point to another over a gridded floor filled with obstacles. Its reactions are shown on the LCD and through the buzzer. This app will demonstrate the use of the line and collision sensors.
Tutobrush is an application that, using a pencil attached under the Tutobot, will draw a predefined picture on paper while also drawing it on the LCD on the fly. The picture will be streamed over Zigbee. This app demonstrates the precision of the encoders.