Today we faced numerous problems. The Ethernet module causes a crash in our central station and we still haven’t figured out the solution. We also tried to connect all our cards (6 sensor cards and 6 traffic-light cards) to the main station, and they failed to join the network. As Alexis advised, we lowered the baud rate of the CAN bus, which solved that problem: all cards now join the network, and the CAN baud rate is configured at 125 kbit/s.
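For the curious, here is a rough sketch of the arithmetic behind such a baud-rate change. The 36 MHz peripheral clock and the 13+2 time-quanta split are assumptions for illustration, not measurements from our board:

```c
#include <assert.h>
#include <stdint.h>

/* Compute the CAN prescaler for a target baud rate, assuming a bit is
 * made of 1 sync quantum + tseg1 + tseg2 time quanta. The clock value
 * and segment lengths below are illustrative assumptions. */
static uint32_t can_prescaler(uint32_t pclk_hz, uint32_t baud,
                              uint32_t tseg1, uint32_t tseg2)
{
    uint32_t quanta_per_bit = 1u + tseg1 + tseg2; /* sync + seg1 + seg2 */
    return pclk_hz / (baud * quanta_per_bit);
}
```

With a 36 MHz clock and 16 quanta per bit, 125 kbit/s works out to a prescaler of 18; lowering the baud rate only means raising this divider, which is why the change was quick to make.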
Regarding communication between the server and the train, we were able to track the position of the train and send it to the server. Our goal now is to control the lights and the train remotely, and we are working on it.
ILDA and Web app
Since the last post we did a lot of tests in real conditions, namely with several programs running on the board. We found that translating a text into an ILDA animation tends to be far too slow, which is why we decided to rely on a web app based on the Google App Engine API for Python. Yesterday we turned words into action and started to build the web app, plus a proxy on the card that queries the web app or, in case of failure, redirects requests to the slow embedded instance of the program. We deployed the web app on appspot.com, and our first tests suggest that the service is faster by a factor of 30 to 60, depending on the load of the board. We also built a website presenting our RESTful API to that web app, and we will put it online as soon as possible.
As far as websites are concerned, we want to introduce our brand new website. Feel free to comment or share any kind of feedback.
We hunted down some bugs in our code; it now works better, and we don’t intend to make any further fixes before the presentation.
DMX and additional board
We have contacted TSM and we will be able to try our code with real equipment.
We can communicate with our additional board via ZigBee. We now have to connect this feature to the other parts of the project using 0MQ.
Our software FIFO works; we are putting all the pieces together to build our « driver/conductor/leader » module, which will manage all the features of the project.
Today we’ll put the pieces together: it’s a milestone!
As you can see, yesterday we were having fun with the laser, trying to enhance the performance of our smooth text-scrolling show. We have now embedded the whole program, and it generates the ILDA file directly on the board. We still have some speed issues: the program that generates the ILDA files remains quite slow (it takes more than a minute for a tweet), but our imagination is thriving with optimization tricks, and we have already gained a significant factor on computing time.
We are also concerned about the beauty of the text. We noticed a few effects we’re focusing on:
- When the laser moves quickly, the line between two given points tends to look like a curve. Solution: we will fix this distortion by adding several intermediate points between our points; this will be done in C at the end of the display chain.
- When the points are too far away from one another, the galvanometers become noisier and we worry about the survival of our system. Solution: we reduce the size of the text (to narrow the space between points) and insert intermediate points between the characters and at the end of the frame, to make the position shifts smoother.
- When the laser isn’t fast enough, the flickering effect becomes more pronounced. Solution: we need to speed the laser up.
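The "add intermediate points" fix for the first two effects boils down to linear interpolation. Here is a minimal sketch of the idea; the `point_t` type and the function name are our own placeholders, and the real version will operate on ILDA frames at the end of the display chain:

```c
#include <assert.h>
#include <stddef.h>

/* Point in the galvanometer coordinate space (minimal placeholder type). */
typedef struct { int x, y; } point_t;

/* Write a, then n_extra evenly spaced intermediate points, then b into
 * out. Returns the number of points written. Integer math keeps it
 * cheap enough for the end of the display chain. */
static size_t interpolate(point_t a, point_t b, int n_extra, point_t *out)
{
    size_t n = 0;
    out[n++] = a;
    for (int i = 1; i <= n_extra; i++) {
        point_t p = {
            a.x + (b.x - a.x) * i / (n_extra + 1),
            a.y + (b.y - a.y) * i / (n_extra + 1),
        };
        out[n++] = p;
    }
    out[n++] = b;
    return n;
}
```

With more points along each segment, the galvanometers make smaller jumps, which both straightens the apparent curves and quiets them down.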
We made major improvements on these different points, not to mention the complexity issues. Step by step, the text is getting more and more beautiful!
I’m sure you want to know a little more about our FPGA. We have enhanced our previous design by adding a RAM FIFO, which will allow us to reduce the load on the CPU. We’re working on the CPU side of that program. However, our security module seems to be a bit too sensitive and triggers more often than expected.
In the next few days we will focus on the development of the DMX card.
Today we received our 7 sensor cards (which will track the position of the train and feed it to the central station), thanks to Alexis, who worked until very late in the night soldering all the cards. We then wrote the code for the sensor card, which has a « producer » module that generates events on the CAN bus to indicate sensor activity.
Also, Siwar is working on the communication protocol between the Ethernet module and the server (set up by Samuel) that controls the rail layout.
On the stabilisation of CAN, we are now filtering events into CAN RX queue 1 (the STM32 microcontroller has 2 FIFOs) and have left messages for network management on queue 0. This is done to avoid over-utilisation of the queues by either type of message. Here is a brief video of our project.
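In spirit, the split is just a routing policy keyed on the CAN identifier. On the STM32 the hardware acceptance filters do this for us; the sketch below only illustrates the policy, and the "event" bit position in the 29-bit identifier is an assumption, not the actual NMRAnet encoding:

```c
#include <assert.h>
#include <stdint.h>

/* Hypothetical marker bit distinguishing producer/consumer event frames
 * from network-management frames in the extended CAN identifier. */
#define CAN_EVENT_FLAG (1u << 24)

/* Policy sketch: management traffic stays on queue 0, event traffic is
 * diverted to queue 1, so neither can starve the other. */
static int rx_queue_for(uint32_t can_id)
{
    return (can_id & CAN_EVENT_FLAG) ? 1 : 0;
}
```

On real hardware this decision is made once, when configuring the filter banks, rather than per message in software.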
Since Saturday I have been working on Firmware transmission through IrDA and the use of SD Card.
Transmission protocol for firmware:
The firmware is stored on flash memory, so we are transmitting from flash to flash through IrDA.
As we have to write to flash by pages, we also have to transmit our firmware by pages, and as pages are large we cannot send one all at once. Thus each firmware is sent divided into pages, which are themselves divided into packets.
At the beginning of a transaction, a first packet is sent with the data needed to store the firmware, such as the number of pages, the number of packets per page and, for security purposes, the ID of the sender and the interface used. The receiving block replies with an acceptance packet indicating whether or not it is already busy with a transaction.
Then a PAGE_BEGIN packet is sent with its position in the firmware.
During page sending all data is stored in a buffer of 32-bits words.
At the end of a page, an END_OF_PAGE packet is sent, confirming the page’s position and carrying its CRC, calculated with the CRC unit of the STM32; the CRC is then checked (note that the IrDA task already checks the integrity of each packet). If everything is okay we write the buffer to flash; if not, a request is sent to resend the page.
We continue until we reach the end of the firmware with an END_TRANSACTION packet, and the global integrity is checked. In case of success we send a TRANSACTION_OK packet; if not, we have to start again from the beginning.
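The page/packet bookkeeping announced in the first packet is simple round-up arithmetic. A sketch, where the 1 KiB page and 64-byte packet sizes are placeholders (the real values depend on the MCU flash geometry and the IrDA framing):

```c
#include <assert.h>
#include <stdint.h>

#define PAGE_SIZE   1024u  /* hypothetical flash page size   */
#define PACKET_SIZE 64u    /* hypothetical IrDA payload size */

/* Number of flash pages needed to hold the firmware (rounded up). */
static uint32_t pages_for(uint32_t firmware_bytes)
{
    return (firmware_bytes + PAGE_SIZE - 1) / PAGE_SIZE;
}

/* Number of packets making up one full page (rounded up). */
static uint32_t packets_per_page(void)
{
    return (PAGE_SIZE + PACKET_SIZE - 1) / PACKET_SIZE;
}
```

These two numbers are exactly what the opening packet carries, so the receiver can allocate its page buffer and know when an END_OF_PAGE is due.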
For the SD card we will use FatFs, a free open-source FAT file system module which has an STM32 implementation thanks to Martin Thomas; I adapted it to our system. I delayed testing it because transmitting a large amount of data seemed more important, the SD card coming second.
Today we started testing the DCC part. After seeing good results on the oscilloscope, we moved on to testing on a section of rail, placing the train on it and sending DCC commands through the rails. We were indeed able to command our train in both directions at different speeds.
Then we started to implement a small module that accepts Ethernet commands following a simple protocol. This module translates the Ethernet commands into the appropriate DCC and CAN commands in order to control the different parts of our train model.
In parallel, we are still testing the stability of our CAN module. Despite the good results, there are very rare cases where our cards become unstable. We are making some modifications and will test on multiple light cards tomorrow.
Our next PSSC is the initialization of Ethernet (which is already done) and control of our circuit from an external program.
Our journey has continued from CAN to LAN, passing through DCC on the way. During the last two days, we worked on the DCC encoder for the card which will act as the central station. In brief, I will try to explain DCC for those who are not already aware of it. DCC (Digital Command Control) is a standard which lets us operate model railways by sending a modulated pulse wave (amplified with the help of a booster) to the rail tracks. Thereby we can control any locomotive or accessory (like traffic lights) that has a DCC decoder. DCC is a standard maintained by the NMRA.
We have encoded bits 0 and 1 as follows, as per the NMRA DCC standard:
(i) bit ‘0’ – 232 microseconds (half high, half low)
(ii) bit ‘1’ – 116 microseconds (half high, half low)
On the implementation side, to generate this type of signal on an STM32 GPIO port, we use the PWM module (using an internal timer of the STM32) with a 50% duty cycle, set up in output-compare mode. With the help of an oscilloscope, we verified that we got the expected results. We also created a DCC packet simulator (coded as a FreeRTOS task) and tried sending messages to the port following the DCC communication packet format.
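Since the duty cycle is 50%, each output-compare reload is one half-bit. A small sketch of the conversion, assuming the timer prescaler is set so that one tick equals 1 µs (the actual timer clock on the board may differ):

```c
#include <assert.h>
#include <stdint.h>

/* Half-bit length in timer ticks, assuming a 1 MHz timer tick
 * (1 tick = 1 us). With a 50% duty cycle, one DCC bit is two
 * half-bits of equal length. */
static uint16_t half_bit_ticks(uint16_t bit_period_us)
{
    return bit_period_us / 2;
}
```

So a ‘0’ bit reloads the compare register with 116 ticks twice, and a ‘1’ bit with 58 ticks twice; the encoder task only has to pick the right reload value per bit of the DCC packet.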
On the Ethernet part, we have chosen the lwIP stack as our TCP/IP stack, primarily because it has features like DHCP, UDP and IPv6 which we may need as the project matures. We integrated lwIP with the STM32F107 (which has a built-in MAC controller) and a DP83848 PHY controller, with the help of a similar example provided by ST on their website. We were able to open a telnet connection to our central station and exchange messages through it.
On the stabilisation of the NMRAnet CAN protocol, we added a watchdog module to all the nodes, including the central station. This is required because if the software hangs in one of the nodes, the node should be able to rejoin the network. The watchdog timeout has been set to 3 seconds: we want our nodes to rejoin the network as soon as possible, while giving the central station enough time to remove the node from the node and event tables. Also, the reset command broadcast by the network manager at the start of its execution now has a deeper impact: instead of a cache reset, the nodes perform an actual software reset, to simulate a real reset. All these stabilisation changes are open for discussion, and all remarks/criticism/ideas are heartily welcome.
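For the record, dimensioning an independent watchdog timeout on an STM32-class chip is one division. The sketch below assumes the nominal ~40 kHz LSI oscillator (the real LSI drifts, which is one reason to keep some margin around the 3 s target):

```c
#include <assert.h>
#include <stdint.h>

/* Reload value for a watchdog clocked by the LSI through a prescaler:
 * timeout = reload * prescaler / lsi_hz, hence
 * reload  = timeout * lsi_hz / prescaler. LSI frequency is nominal. */
static uint32_t iwdg_reload(uint32_t timeout_ms, uint32_t prescaler,
                            uint32_t lsi_hz)
{
    return (timeout_ms * lsi_hz) / (prescaler * 1000u);
}
```

For a 3 s timeout with a prescaler of 256 and a 40 kHz LSI this gives a reload of 468, well within the 12-bit reload register.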
As I said in a previous post, we are now able to synthesize speech from a text input and play the result directly on the audio output jack, using a home-made interface between the synthesis engine and ALSA.
We also had to port our speech-recognition hello world to the BeagleBoard. We first compiled the CMU PocketSphinx library for the board, that is to say for an ARM target, and then the hello-world program.
The program successfully recognized commands we recorded and played on the laptop, with the BeagleBoard’s audio input connected to the laptop’s headphone output by an appropriate cable.
We now have to electronically interface our microphones with the BeagleBoard’s audio input.
Apart from the progress on audio, we also managed to compile a Linux kernel module hello world on the board, despite the current custom kernel’s lack of certain header files.
The hello world ran properly, and we were able to write a string to it and read it back.
The next step will be to start developing our custom Linux device driver, responsible for casper’s mechanical control.
Yesterday we were working on switching the laser spot on and off, and on the module which will ensure security. The features are written but not yet working well. We need ModelSim to debug, but it is only installed on the school’s computers, which, as you know, are not available this weekend. So we decided to set that aside and focus on the Web part until Monday.
We can now fetch the last few tweets containing a given hashtag on twitter and put them into a database. Another program queries the database to get the newest message and then displays it letter by letter on the oscilloscope. Everything is written in C and working on the Armadeus board.
For Twitter, we don’t use any library apart from libcurl, to query Twitter’s API, and jansson, to parse the JSON objects returned by Twitter.
Here is a small video in which you can see the latest (at the time) tweet on #rose2011, displayed letter by letter on the oscilloscope.
You can read « 75% students attendance during the vacation #rose2011 ».
We can pack the result of a database query into a JSON object. Inserting values from a JSON object into the database won’t be difficult.
The next step is to communicate with a client. As we don’t seem to be able to make Mongrel2 work on the Armadeus, we looked for an alternative. Yesterday we quickly tried a library called libmicrohttpd to handle HTTP queries. We have only tested a simple GET request, but at least it works on the Armadeus, and it is written in pure C. Contrary to CGI, this library can manage multiple connections in a single process using POSIX threads.
Today we were able to control our LEDs using the TLC5945 LED driver, although we have not yet exploited its capabilities to their full extent. As Alexis and Samuel proposed during our meeting, we should use dot correction to balance the mismatch in luminosity between green, red and orange. In addition, it would be useful to control the intensity through the TLC5945’s grayscale mode, especially once we assemble the whole system with its 60 LEDs.
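The grayscale data the TLC5945 expects is a stream of 12-bit values, so two channels fit in three bytes on the wire. A small sketch of that bit packing (on the real chip the highest channel is shifted out first and the latch/blank pins have to be driven too; this helper only shows the byte layout):

```c
#include <assert.h>
#include <stdint.h>

/* Pack a pair of 12-bit grayscale values into the 3-byte, MSB-first
 * layout used by 12-bit LED drivers such as the TLC5945. */
static void pack_pair(uint16_t a, uint16_t b, uint8_t out[3])
{
    out[0] = (uint8_t)(a >> 4);                        /* a[11:4]         */
    out[1] = (uint8_t)(((a & 0x0F) << 4) | (b >> 8));  /* a[3:0], b[11:8] */
    out[2] = (uint8_t)(b & 0xFF);                      /* b[7:0]          */
}
```

Dot correction works the same way but with 6-bit values, which is what should let us equalize the green/red/orange brightness without touching the grayscale data.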
Our next PSSC concerns DCC control of the train, although there are many more things we are concerned about. At this point we are mostly concerned with switching from our laboratory STM32F103 cards to the STM32F107-based central station card. For the moment we haven’t yet managed to configure the STM32F107 CAN bus drivers correctly. Once this is done, we will run more tests and continue working on the stability of the NMRAnet CAN bus module.
One problem with the NMRAnet CAN bus module is that, in the presence of many producer/consumer events, the nodes at some point fail to reply in time to the STATUS REQUEST message addressed to them by the central station. A possible cause is that, for the moment, the nodes use only one of the STM32F103’s available CAN reception FIFOs, so processing incoming status request packets is delayed by event packets already present in the FIFO. We therefore think that filtering producer/consumer event packets out of one FIFO and letting them pass through the other one would be an improvement.