Building Dynamic Spectrum Access Prototypes using Open-Source SDR Software

A while ago I started to clean up and open-source some of the components I wrote during my Ph.D. research. All of them can be found on my GitHub profile.

This includes a standalone library called libgdtp that implements a basic data transfer protocol providing functions such as framing, error control, flow control, and multiplexing. Wrapper blocks exist for both GNU Radio and Iris. Most of the other components, however, are for the Iris SDR framework. All Iris-related components can be found in a repository called iris_fll_modules, which should be installed alongside the standard iris_modules repository.

Among them is a component called CoordinatorMobility. I’ve used this specific protocol, along with the entire Iris SDR ecosystem, for a number of dynamic spectrum access (DSA) experiments and research projects. Some publications describing those projects can be found, for example, here.

As the name suggests, the CoordinatorMobility component implements a simple protocol for building a network of spectrum-agile radios that opportunistically access licensed spectrum whenever the owner of the spectrum, the so-called primary user (PU), is not actively using it. This protocol relies on roles being assigned to individual nodes in the network: a coordinator and one or more followers.

In such a network, the coordinator acts as a superior device that scans and monitors the available spectrum and then decides on the actual channel being used for data communication. It further selects a so-called backup channel (BC) that is used whenever the PU returns and reoccupies the spectrum. The entire communication is then (almost) seamlessly handed over to the BC. The information about the selected BC is sent by the coordinator to its followers over the wireless link via an inband signaling mechanism which is multiplexed with ordinary data traffic.

In the particular demo shown in the video, I’ve tried to combine some of the interesting features to show how an actual link layer protocol can be built using the components I’ve developed. The demo consists of five main phases that are briefly described below.

  • Connection phase: When the radio nodes are initially powered on, they immediately attempt to establish a link on a predefined channel.
  • Data transmission phase: Once the network connectivity is established, the coordinator (left node) attempts to transmit a video captured through the webcam to the follower. The coordinator uses its primary USRP (in the back) for this.
  • Spectrum sensing phase: This phase runs in parallel with the data transmission phase. Using the RTL-SDR dongle, the coordinator continuously scans through all available radio channels, determines their state, and stores the results inside the link layer controller.
  • Target channel selection phase: Based on the results in that data storage, the coordinator mobility protocol determines a suitable BC among the available channels and informs the followers about its selection.
  • PU interruption and channel handover phase: In case the nodes detect an active PU on the current operating channel, the radio immediately suspends any active transmission in order to protect the PU from harmful interference. Both nodes now tune their radios to the BC and resume their communication.
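
To make the target channel selection step more concrete, here is a minimal sketch of how a coordinator could pick a BC from the stored sensing results. Note that the struct, the field names, and the selection rule are my own illustrative assumptions, not the actual CoordinatorMobility code:

```cpp
#include <map>

// Hypothetical per-channel statistics as the sensing phase might store
// them inside the link layer controller (illustrative only).
struct ChannelStats {
    bool puActive;       // PU currently detected on this channel?
    double idleFraction; // fraction of time the channel was sensed idle
};

// Pick the channel (other than the current one) that was idle most often
// and is currently free of PU activity; returns -1 if none qualifies.
int selectBackupChannel(const std::map<int, ChannelStats>& stats, int currentChannel)
{
    int best = -1;
    double bestIdle = -1.0;
    for (const auto& entry : stats) {
        if (entry.first == currentChannel || entry.second.puActive)
            continue;
        if (entry.second.idleFraction > bestIdle) {
            bestIdle = entry.second.idleFraction;
            best = entry.first;
        }
    }
    return best;
}
```

The actual protocol would additionally have to announce the selected BC to the followers via the inband signaling mechanism described above.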

Note that the follower only uses an RTL-SDR dongle as its radio device. Both the video stream and the control information of the mobility protocol are received through that device. In contrast to what I said in the video, the second USRP (in the back of the right SDR node) is not used during this demo. Also, note that the PU uses a bladeRF to generate the PU signal.

5G Spectrum Sharing Challenge

IEEE Dynamic Spectrum Access Networks (DySpan) is one of the prime conferences in the field of dynamic spectrum access, spectrum sharing, 5G, and related technologies in general. For the very first time, this edition of DySpan, which took place in Stockholm, Sweden in September/October 2015, included a special track called “5G Spectrum Sharing Challenge”. This challenge “is a competition designed to demonstrate a radio protocol that can achieve high spectral efficiency in coexistence with legacy technology”. In particular, the organizers asked potential participants to design and implement an actual radio according to some rules that would then compete with other contestants during the actual conference.

This call immediately caught our attention and my friends at Trinity College Dublin, Ireland and I decided to pair up once again – after the “DARPA Spectrum Challenge” – to build a radio for this contest.

The Goal

The main goal of the contest was to create a secondary user (SU) radio that would share a specified portion of RF spectrum with a primary user (PU) radio. In this particular case, the PU was a radio based on the IEEE 802.15.4 (i.e., ZigBee) PHY design, operating in four 5 MHz-wide channels centered around 2.3 GHz. Even though advertised otherwise, it is interesting to note that the PU used during the challenge was actually based on Bastian’s gr-ieee802-15-4 module.

By co-existing I mean that the goal of the SU is to transfer as many bytes as possible between an SU transmitter and an SU receiver while not degrading the performance of the PU transmitter/receiver pair.

The actual score used to determine the winner of the challenge is the product of the SU throughput and the PU throughput. The catch, however, lies in the fact that the PU throughput counts as zero as soon as it drops by more than 10% below its undisturbed level. Therefore, it was of utmost importance for any team not to interfere too much with the PU in order not to score zero.
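
In code, the scoring rule as I understand it boils down to something like this (the exact formula and baseline handling used by the organizers may differ):

```cpp
// Sketch of the challenge scoring rule: the product of SU and PU
// throughput, where the PU throughput counts as zero once the PU is
// degraded by more than 10% below its undisturbed baseline.
double challengeScore(double suThroughput, double puThroughput, double puBaseline)
{
    // Crossing the 10% interference limit zeroes the whole product.
    if (puThroughput < 0.9 * puBaseline)
        return 0.0;
    return suThroughput * puThroughput;
}
```

Note how even a stellar SU throughput is worthless once the 10% limit is crossed.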

The Teams

According to the challenge organizers, six teams registered for the challenge. One team was rejected, which left five teams for the contest in Stockholm: one from Karlsruhe, Germany, one from San Diego, USA, and two from Greece. And of course, there was our team.

Design Guidelines and Regulations

In order to participate in the challenge, all teams had to follow certain design guidelines and regulations that were monitored during the challenge. For example, the radio was not allowed to transmit outside the 20 MHz frequency chunk that was assigned for the challenge. Of course, any other wired or wireless connection was also forbidden. You can find more information about the regulations on the contest website.

Photograph of the spectrum analyzer TV at the conference.

Challenge Phases

The challenge itself consisted of two phases, each lasting 10 minutes. During the first phase, called the learning phase, teams could calibrate certain radio parameters or learn the impact the SU caused on the PU, all without risking a negative score. The second phase was the actual challenge. It again lasted 10 minutes, but this time the SU throughput as well as the interference caused to the PU were counted. Altogether, each team had two runs, one in the morning and one in the afternoon, of which only the best one counted.

Our Solution

We wanted our radio to follow the classical overlay approach. In an overlay system, the SU transmits in parallel with the PU but on an orthogonal channel, i.e., a different frequency.
Our initial assumption was that at any time there would be 15 MHz of free spectrum to transmit in, because the PU can only occupy a single 5 MHz channel at a time. This assumption turned out to be wrong. With this approach, we didn’t consider any power control or adaptive modulation and coding whatsoever.

Clearly, such a strategy demands a very reliable sensing mechanism in order to detect the PU as quickly as possible and vacate the current operating channel without causing any interference.

So we concentrated on building a sensing mechanism to detect the ZigBee transmissions. We used an energy detector with online noise-floor calibration for that. Furthermore, we added a dwell-time estimator to detect the ZigBee frame size at runtime and schedule our transmissions in parallel accordingly.
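
As a rough illustration, such a detector could look like the simplified sketch below. This is not our actual code; the class name, the default threshold, and the single-pole averaging scheme are my assumptions here:

```cpp
#include <cmath>
#include <complex>
#include <vector>

// Simplified sketch of an energy detector with online noise-floor
// calibration (illustrative, not the detector we actually fielded).
class EnergyDetector {
public:
    // Start with a pessimistic (high) noise floor so the first blocks
    // calibrate downwards instead of triggering false alarms.
    explicit EnergyDetector(double thresholdDb = 6.0, double alpha = 0.01)
        : thresholdDb_(thresholdDb), alpha_(alpha), noiseFloor_(1.0) {}

    // Returns true if the block's average power exceeds the estimated
    // noise floor by more than thresholdDb_ dB.
    bool detect(const std::vector<std::complex<float>>& block)
    {
        double power = 0.0;
        for (const auto& s : block)
            power += std::norm(s); // |s|^2
        power /= block.size();

        const bool busy = 10.0 * std::log10(power / noiseFloor_) > thresholdDb_;

        // Online calibration: update the floor only while the channel
        // looks idle, using a single-pole IIR average.
        if (!busy)
            noiseFloor_ = (1.0 - alpha_) * noiseFloor_ + alpha_ * power;
        return busy;
    }

private:
    double thresholdDb_;
    double alpha_;
    double noiseFloor_;
};
```

The weakness of this scheme, as we learned the hard way later, is that anything that pollutes the idle blocks also pollutes the noise floor estimate.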

For the transmitter, we decided to transmit in only one of the three vacant 5 MHz channels at a time, knowing well that this limited the achievable throughput to one third. During the design phase, we also considered a polyphase filter to occupy all three vacant channels, but we couldn’t handle the computational complexity over the entire 20 MHz. The same was true for the receiver.

Anyway, with a transmitter that hops over and transmits in one of four possible 5 MHz-wide channels, we needed a mechanism to reliably receive the transmissions on the other side. To avoid a complex rendezvous mechanism, our straightforward solution was to receive on all four channels simultaneously.

We experimented a lot trying to get a 4×5 MHz polyphase channelizer to run using a single USRP. After parallelizing the channelizer and tuning the buffer management, it kind of worked in the lab. However, it was still quite computationally intensive and we were seeing overruns every now and then on the testing machine (a Core i7 Ivy Bridge, by the way). So we finally decided to use two N210s connected through a MIMO cable and let the FPGA do the channelization for us. Using the two RX DSP chains in each N210, we got the 4×5 MHz receiver up and running without any problems, thanks also to the buffer management we had developed previously for the polyphase receiver.
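
For intuition, here is what such a 4-channel receiver bank computes conceptually, whether it runs as a polyphase filter bank on the host or inside the N210 FPGAs: mix each channel to baseband, then low-pass and decimate. The brute-force sketch below is my illustration (not our actual code; the boxcar average stands in for a proper channel filter), and a polyphase channelizer produces the same bank at a fraction of the cost by sharing the filtering across channels:

```cpp
#include <cmath>
#include <complex>
#include <cstddef>
#include <vector>

// Conceptual M-channel receiver bank: mix each channel to baseband and
// average over the block (a crude low-pass/decimate stage). Returns one
// decimated output sample per channel for the given block.
std::vector<std::complex<double>>
channelize(const std::vector<std::complex<double>>& x, int M)
{
    const double pi = std::acos(-1.0);
    std::vector<std::complex<double>> out(M, std::complex<double>(0.0, 0.0));
    for (int k = 0; k < M; ++k) {
        // Channel k is centered at normalized frequency k/M.
        for (std::size_t n = 0; n < x.size(); ++n) {
            const double phase = -2.0 * pi * k * static_cast<double>(n) / M;
            out[k] += x[n] * std::complex<double>(std::cos(phase), std::sin(phase));
        }
        out[k] /= static_cast<double>(x.size());
    }
    return out;
}
```

Running the equivalent of this over the full 20 MHz in real time is exactly what proved too costly on the host; the MIMO-cable setup pushed the mixing and decimation into the N210s instead.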

For our radio, we decided at a very early stage not to go with any SDR framework. Of course, using Iris or GNU Radio would have been possible too, but after discussing the pros and cons, we decided to build the radio in plain C++ for this particular task. Having decided against an SDR framework doesn’t mean that we wanted to build the entire radio from scratch, though. All of us had used liquid-dsp before, and it was clear from the very beginning that we would use this superb library for our radio design. Our first radio design was in fact based on the examples provided by liquid-dsp itself.

It should be noted that we developed the sensing mechanism and the transceiver chain at two different sites. The former was done in Dublin and the latter in Ilmenau. In order to do the final integration and some testing, we decided to schedule a hack weekend just before DySpan in Dublin. The integration went fine and we could get it all working in the lab with our experimental PU, which was an off-the-shelf Zigduino board. For the first time, we also used our cantenna, a directional antenna made out of a can of pineapples.

Francisco scratching his head during the final integration in Dublin

The Reality

Having arrived at DySpan, however, we realized that the approach we were following wouldn’t really work. We had designed the wrong tool for the problem. This was due both to some last-minute changes by the organizers regarding the PU radio and to some misunderstanding on our side. The sensing didn’t actually work with the PU used during the challenge. As already mentioned above, the PU PHY was the GNU Radio 802.15.4 module running on a USRP X310. Unfortunately, the sidelobe emissions of the PU completely fooled our noise-floor calibration. Furthermore, the challenge PU occasionally transmitted on all four channels simultaneously. That broke our “there is always a free channel to transmit on” assumption. During those periods, our SU didn’t know which channel to jump to, so the transmitter just kept transmitting the entire time. Very bad idea.

The challenge consisted of two runs, but only the best run counted. So we scored zero in the first run because we created too much interference. Our SU throughput was quite good, though.

Interestingly, pretty much every team struggled with this and scored zero in the first run, because either the sensing wasn’t working or the radio wasn’t running at all. As can be seen from the result plot, only KIT scored during the first run. It was also interesting to see that only one team (I don’t remember which, though) actually used the PU feedback during the challenge. This was another major misunderstanding: it would in fact have been possible to get feedback about the interference created at the PU during the challenge, which would have helped a lot. I am still not sure whether this feedback was available at the transmitter as well or at the receiver only; the latter would still have required feeding it back to the transmitter somehow.

Getting Prepared for Run 2

Right after run one, we thought about possible changes for run two. Clearly, we needed to modify the logic that kept our SU transmitting all the time when all four channels were occupied. The changes in the code weren’t difficult but involved several changes in multiple threads using condition variables with locks, etc. So we were quite nervous, because we couldn’t test the changes at all and didn’t want to create a deadlock during the second run.

The second set of changes was purely parametric. We basically reduced the transmit power as much as possible. Thanks to the highly directional cantenna, we were still able to get a reasonably high SNR and decided to stick with 16-QAM.

Run 2 and the Final Results

When we started the transmitter for the second run, all of us were very excited. The spectrum analyzer hardly showed our transmissions. Apparently, though, our SU throughput was just a little below that of run one. Because we couldn’t see anything on the spectrum analyzer, we don’t even know whether the sensing was working during the final run. After the run was over, we had to wait some time for the organizers to calculate the PU throughput and then the final score. Again, the question was whether or not we had stayed below the 10% interference level. And we did. As you can see from the final score chart, we achieved more than double the points of the second-placed team, the guys from KIT. This score won us the “objective challenge”. The winner of the “subjective challenge” was KIT.

Secondary User throughput for both runs.
Final score taking PU interference into account.

Summary and Acknowledgement

In the end, I believe what helped us win this contest was the combination of a highly directional antenna and a robust, reliable PHY layer design, which together allowed us to use a very low transmit power and keep the interference to the PU below the allowed threshold.

On the last day of the conference, just before the award ceremony, there was a time slot dedicated to the challenge in which all teams presented their approach and discussed their lessons learned. Interestingly, more sophisticated methods, including learning or online adaptation of transmission parameters, were not part of any of the successful solutions. I truly hope that this will change next time.

Overall, the 5G Spectrum Sharing Challenge was a great event and a great success, and not only because we won the objective challenge in the end. I am saying this because it was great meeting people from all around the world who are enthusiastic about SDRs and who like to take on a fun project. This holds true for all who participated, but especially for the organizers of the challenge. They did a superb job before and during the conference and deserve big thanks. Well done Tom, Bertold, Sreeraj, and Sophie. I truly hope that something like this will be continued.

Our team in Stockholm.
Justin talking about our solution on the last day of the conference.
Justin and Francisco receiving the first prize, a USRP X310.

Developing MAC components with Iris

Introduction

This blog post shows how MAC layer protocols can be developed within Iris. For those of you who don’t know Iris: it’s a component-based software defined radio (SDR) framework written in C++. It’s being developed at Trinity College Dublin, Ireland and was open-sourced in early January 2013 under the LGPL license. Conceptually, Iris is very similar to GNU Radio. Together with USRP devices, it can be used to build fully functioning SDR prototypes for experimenting with new communication protocols and algorithms. At the moment, Iris comes with only a rather small number of components, but I am pretty sure more will be added over time. I’ve been using the software for over two years now and pretty much all of my practical SDR work involves it.

After Iris was open-sourced, I decided to clean up and port some of the components I had developed previously. This blog post mainly deals with the implementation of a simple MAC protocol (or is it just an ARQ protocol?) called Aloha within Iris. For a similar protocol for GNU Radio, see the pre-cog MAC by John Malsbury. Anyway, the idea behind Aloha is quite simple: if you have something to send, send it; if you weren’t successful, try again. Even though that sounds trivial, it’s actually powerful enough to build a small network and have the nodes exchange data. Certainly, you shouldn’t expect much of a protocol this simple: if the number of packets gets too big, the expected throughput becomes very low because there will be many collisions. Anyway, the reason I’ve chosen it is that it can be used to explain some basic ideas related to the implementation of MAC protocols in Iris. If you’re not interested in those details but would like to give it a shot, just jump to the install section below.

Component design and code walk through

In this section I’ll go through the code of the AlohaMac component and explain a few things that seem interesting. Everything said, including line numbers, is based on this version of the component. One important aspect of this implementation is that it uses Google’s Protocol Buffers as its serialization format. Protocol Buffers (protobuf for short) is a very efficient and extensible way of encoding structured data, such as a packet of a communication protocol, into an array of bytes. In fact, I’ve tried many alternatives: boost::serialization, JSON, MessagePack, and of course do-it-yourself serialization using structs and memcpy(). In conclusion, I’ve found protobuf to be a very good compromise between speed, extensibility, portability, and ease of use. A protobuf message is defined inside a .proto file, which is used to generate language bindings (we use C++, but a large number of languages are supported). The generated classes can then be used inside the code as usual, which makes them very convenient. If you change anything, you don’t have to worry about regenerating the classes, as all of this is hidden inside the CMake build environment, which reruns the code generation automatically whenever anything changes. The current packet format of the AlohaMac component can be found here.
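
To give an idea of what such a message definition looks like, here is an illustrative sketch based on the accessors used in the code below (set_source(), set_destination(), set_type(), set_seqno()). The field types and numbering here are my assumptions; the actual .proto file is linked above:

```protobuf
// Illustrative reconstruction only; field types and numbering are
// assumptions based on the accessors used in the component code.
message AlohaPacket {
    enum AlohaPacketType {
        DATA = 1;
        ACK = 2;
    }
    required string source = 1;
    required string destination = 2;
    required AlohaPacketType type = 3;
    required uint32 seqno = 4;
}
```

Running protoc on such a file generates the AlohaPacket C++ class with the getters and setters used throughout the component.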

Let’s have a closer look at the code now. After registering the component parameters, the first interesting parts are the processMessageFrom[Above,Below]() functions. They are called whenever a new frame has been sent to the component from a component sitting either above or below the AlohaMac component. Just think of the above/below notion as a number of layers/protocols stacked together, similar to the ISO/OSI layers. All that’s done here is to put the incoming frame into either a transmit or a receive queue. The packets produced here are consumed later in separate transmit and receive threads. The data structure used in this example is a simple FIFO queue of fixed length.

void AlohaMacComponent::processMessageFromAbove(boost::shared_ptr<StackDataSet> incomingFrame)
{
  //StackHelper::printDataset(incomingFrame, "vanilla from above");
  txPktBuffer_.pushDataSet(incomingFrame);
}

void AlohaMacComponent::processMessageFromBelow(boost::shared_ptr<StackDataSet> incomingFrame)
{
  //StackHelper::printDataset(incomingFrame, "vanilla from below");
  rxPktBuffer_.pushDataSet(incomingFrame);
}

The following two functions spawn and interrupt the transmit/receive threads of the component. We need those threads because in the transmit path (see below) we block the current thread of execution while waiting for an ACK packet or a timeout. The reason for using two additional threads here is to decouple the reception and the processing of packets.

void AlohaMacComponent::start()
{
  rxThread_.reset(new boost::thread(boost::bind( &AlohaMacComponent::rxThreadFunction, this)));
  txThread_.reset(new boost::thread(boost::bind( &AlohaMacComponent::txThreadFunction, this)));
}

void AlohaMacComponent::stop()
{
  // stop threads
  rxThread_->interrupt();
  rxThread_->join();
  txThread_->interrupt();
  txThread_->join();
}

The registerPorts() function informs the Iris core about the ports that a component uses and provides. The names used here are then used inside the radio XML file to connect the components with one another. In our case, we define four ports in total: an input and an output port on top of the component (to connect to a network component, for instance) and an input and an output port below it, to connect to the PHY components.

void AlohaMacComponent::registerPorts()
{
  std::vector<int> types;
  types.push_back( int(TypeInfo< uint8_t >::identifier) );

  //The ports on top of the component
  registerOutputPort("topoutputport",types);
  registerInputPort("topinputport", types);

  //The ports below the component
  registerInputPort("bottominputport", types);
  registerOutputPort("bottomoutputport", types);
}

Before looking at the receiver, let’s first see how the transmit path works. As already mentioned above, both paths live in separate threads. Most of the time, the Tx thread probably just sits and waits for new frames to be sent (see line 200). If a new frame has been pushed down from a component above, the protocol basically wraps its control information around it, including source and destination addresses, the sequence number, and of course the packet type. Packet metadata and the actual payload (i.e., the frame pulled from the transmit buffer) are kept separate up to the point where they get merged and serialized into a single stream of bytes (see line 208). For this, I’ve written a couple of helper functions, which are defined inside the StackHelper class. After that, the packet is ready to be sent over to the other node (line 215). The protocol then waits until it either receives an ACK for the data packet or the timer goes off (line 218). After an ACK timeout, the transmitter waits a random amount of time before trying to retransmit the packet (lines 222-225). The maximum number of retransmissions can be configured through a parameter.

void AlohaMacComponent::txThreadFunction()
{
  boost::this_thread::sleep(boost::posix_time::seconds(1));
  LOG(LINFO) << "Tx thread started.";

  try
  {
    while(true)
    {
      boost::this_thread::interruption_point();

      shared_ptr<StackDataSet> frame = txPktBuffer_.popDataSet();

      boost::unique_lock<boost::mutex> lock(seqNoMutex_);
      AlohaPacket dataPacket;
      dataPacket.set_source(localAddress_x);
      dataPacket.set_destination(destinationAddress_x);
      dataPacket.set_type(AlohaPacket::DATA);
      dataPacket.set_seqno(txSeqNo_);
      StackHelper::mergeAndSerializeDataset(frame, dataPacket);

      bool stop_signal = false;
      int txCounter = 1;
      while (not stop_signal) {
        // send packet to PHY
        LOG(LINFO) << "Tx DATA  " << txSeqNo_;
        sendDownwards(frame);

        // wait for ACK
        if (ackArrivedCond_.timed_wait(lock, boost::posix_time::milliseconds(ackTimeout_x)) == false) {
          // returns false if timeout was reached
          LOG(LINFO) << "ACK time out for " << txCounter << ". transmission of " << txSeqNo_;
          // wait random time before trying again, here between ackTimeout and 2*ackTimeout
          int collisionTimeout = rand() % ackTimeout_x;
          collisionTimeout = std::min(ackTimeout_x + collisionTimeout, 2 * ackTimeout_x);
          boost::this_thread::sleep(boost::posix_time::milliseconds(collisionTimeout));
        } else {
          // ACK received before timeout
          stop_signal = true;
        }

        if (++txCounter > maxRetry_x) stop_signal = true;
      }

      // increment seqno for next data packet and release lock
      txSeqNo_++;
      if (txSeqNo_ == std::numeric_limits<uint32_t>::max()) txSeqNo_ = 1;
      lock.unlock();
    }
  }
  catch(IrisException& ex)
  {
    LOG(LFATAL) << "Error in AlohaMac component: " << ex.what() << " - Tx thread exiting.";
  }
  catch(boost::thread_interrupted)
  {
    LOG(LINFO) << "Thread " << boost::this_thread::get_id() << " in stack component interrupted.";
  }
}

Let’s now have a look at the receiver thread. Most of the time, it too is waiting for incoming frames (line 135). Payload and packet control data are then separated from each other using the helper class (line 138). Depending on its type, either DATA or ACK, the packet is then processed. An acknowledgment is sent back to the transmitter for each received data packet (line 145), but the packet is passed up to the next layer only if the sequence number matches (line 150). If the received packet was an ACK and the sequence number matches the expected one, i.e., the one that has just been sent, the receiver thread wakes up the transmitter, which continues with the next packet (line 164).

void AlohaMacComponent::rxThreadFunction()
{
  boost::this_thread::sleep(boost::posix_time::seconds(1));
  LOG(LINFO) << "Rx thread started.";

  try
  {
    while(true)
    {
      boost::this_thread::interruption_point();

      shared_ptr<StackDataSet> frame = rxPktBuffer_.popDataSet();

      AlohaPacket newPacket;
      StackHelper::deserializeAndStripDataset(frame, newPacket);

      if (localAddress_x == newPacket.destination()) {
        switch(newPacket.type()) {
        case AlohaPacket::DATA:
        {
          LOG(LINFO) << "Got DATA " << newPacket.seqno() << " from " << newPacket.source();
          sendAckPacket(newPacket.source(), newPacket.seqno());

          // check if packet contains new data
          if (newPacket.seqno() > rxSeqNo_ || newPacket.seqno() == 1) {
            // send new data packet up
            sendDownwards("topoutputport", frame);
            rxSeqNo_ = newPacket.seqno(); // update seqno
            if (newPacket.seqno() == 1) LOG(LINFO) << "Receiver restart detected.";
          }
          break;
        }
        case AlohaPacket::ACK:
        {
          LOG(LINFO) << "Got ACK  " << newPacket.seqno();
          boost::unique_lock<boost::mutex> lock(seqNoMutex_);
          if (newPacket.seqno() == txSeqNo_) {
            // received right ACK
            lock.unlock();
            ackArrivedCond_.notify_one();
          } else if (newPacket.seqno() > txSeqNo_) {
            LOG(LERROR) << "Received future ACK.";
          } else {
            LOG(LERROR) << "Received too old ACK";
          }
          break;
        }
        default:
          LOG(LERROR) << "Undefined packet type.";
          break;
        }
      }
    } // while
  }
  catch(IrisException& ex)
  {
    LOG(LFATAL) << "Error in AlohaMac component: " << ex.what() << " - Rx thread exiting.";
  }
  catch(boost::thread_interrupted)
  {
    LOG(LINFO) << "Thread " << boost::this_thread::get_id() << " in stack component interrupted.";
  }
}

Well, that’s already the end of the code. It’s pretty simple, and of course lots of improvements could be made to the protocol. But at only 213 lines of C++ code (according to sloccount), it’s actually quite slim and an excellent starting point for more! Let’s now see how all this comes together and works in practice.

Installation

Iris is divided into two packages, the core and the modules. We first need to download and install the core package. Although I clone from my fork of iris_core here, you could also clone from here, because there aren’t any changes to the core yet. Install instructions can also be found here. Apart from the standard Iris dependencies, you’ll need to grab the protobuf library and the protobuf compiler as well. The install commands below will do the trick for Ubuntu, but you might have to adapt them if you’re using another distro.

$ sudo apt-get install libprotobuf-dev protobuf-compiler
$ git clone git://github.com/andrepuschmann/iris_core.git iris_core
$ cd iris_core
$ mkdir build
$ cd build
$ cmake ..
$ make
$ sudo make install

After a while, the build should finish and we can continue with building the modules. Here, you’ll need to clone my iris_modules fork and switch to the alohamac branch.

$ git clone git://github.com/andrepuschmann/iris_modules.git iris_modules
$ cd iris_modules
$ git checkout alohamac
$ mkdir build
$ cd build
$ cmake ..
$ make
$ sudo make install

Now, Iris should be installed on your system.

Getting started

We’ll first try to get AlohaMac running locally using two instances of Iris on the same machine, connected through UDP sockets. One instance will be the transmitter and the other the receiver. Just open two consoles and start the transmitter in one of them and the receiver in the other. Note that the radios’ XML files are stored under “examples” inside the module path. We also need to specify the location of the components; by default, they should live in /usr/local/lib/iris_modules/*.

$ cd ../examples/alohamac
$ iris -t /usr/local/lib/iris_modules/components/gpp/stack/ -p /usr/local/lib/iris_modules/components/gpp/phy/ alohamac_udp_tx.xml

And the receiver:

$ iris -t /usr/local/lib/iris_modules/components/gpp/stack/ -p /usr/local/lib/iris_modules/components/gpp/phy/ alohamac_udp_rx.xml

The transmitter window should now look like this:

	    Iris Software Radio
	    ~~~~~~~~~~~~~~~~~~~

[INFO]    System: Loading radio: alohamac_udp_tx.xml
[INFO]    XmlParser: Parsed engine: stackengine1
[INFO]    XmlParser: Parsed component: filereader0
[INFO]    XmlParser: Parsed component: alohamac0
[INFO]    XmlParser: Parsed engine: phyengine1
[INFO]    XmlParser: Parsed component: clientsocketin0
[INFO]    XmlParser: Parsed engine: phyengine2
[INFO]    XmlParser: Parsed component: clientsocketout0
[INFO]    XmlParser: Parsed link: filereader0 . bottomport1 -> alohamac0 . topinputport
[INFO]    XmlParser: Parsed link: alohamac0 . bottomoutputport -> clientsocketout0 . input1
[INFO]    XmlParser: Parsed link: clientsocketin0 . output1 -> alohamac0 . bottominputport
[INFO]    System: Starting radio

Stack Repository  :  /usr/local/lib/iris_modules/components/gpp/stack/
Phy Repository  :  /usr/local/lib/iris_modules/components/gpp/phy/
SDF Repository  :
Controller Repository  :
Log level : debug
Radio Config: alohamac_udp_tx.xml

	    Iris Software Radio
	    ~~~~~~~~~~~~~~~~~~~

	U  Unload Radio		S  Stop Radio
	R  Reconfigure		Q  Quit

(Radio running), Selection: [DEBUG]   filereader0: One block read.
[INFO]    alohamac0: Rx thread started.
[INFO]    alohamac0: Tx thread started.
[INFO]    alohamac0: Tx DATA  1
[INFO]    alohamac0: Got ACK  1
[DEBUG]   filereader0: One block read.
[INFO]    alohamac0: Tx DATA  2
[INFO]    alohamac0: Got ACK  2
[DEBUG]   filereader0: One block read.
[INFO]    alohamac0: Tx DATA  3
[INFO]    alohamac0: Got ACK  3
..

And the receiver’s output should look like this. You can safely ignore the debug output saying that sendDownwards() has failed; this only happens because the file writer component is the last component in our stack.

..
[INFO]    alohamac0: Got DATA 1 from aabbcc111111
[INFO]    alohamac0: Tx  ACK  1
[INFO]    alohamac0: Receiver restart detected.
[DEBUG]   filewriter0: FileWriter::processMessageFromAbove() called.
[DEBUG]   filewriter0: sendDownwards() failed. No buffers below.
[INFO]    alohamac0: Got DATA 2 from aabbcc111111
[INFO]    alohamac0: Tx  ACK  2
[DEBUG]   filewriter0: FileWriter::processMessageFromAbove() called.
[DEBUG]   filewriter0: sendDownwards() failed. No buffers below.
[INFO]    alohamac0: Got DATA 3 from aabbcc111111
[INFO]    alohamac0: Tx  ACK  3
..

After the radios have run for a while, you can stop Iris by pressing q followed by Enter. You should now see that the receiver has written the received packets to output.bin in the current directory.
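
A quick way to check that the transfer was lossless is to compare checksums of the transmitted and received files. Here is a small helper for that; the input file name is a placeholder for whatever file your filereader component was configured with:

```python
import hashlib

def md5sum(path):
    """Return the hex MD5 digest of a file, read in 8 KB chunks."""
    h = hashlib.md5()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

# Compare against whatever file the filereader component transmitted,
# e.g.: md5sum("input.bin") == md5sum("output.bin")
```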

If this first run was successful and you have a couple of USRPs around, you can try the same with the alohamac_ofdm_rx.xml and alohamac_ofdm_tx.xml radios, which do essentially the same but over the air. The examples have been tested with XCVR2450 boards in the 5 GHz band, so you might need to modify them if you're using different hardware. In this post, however, I'd like to continue with another example which connects the SDR to the network stack of your operating system using a so-called TUN/TAP device. This allows you to tunnel virtually any IP-based application through your SDR. In this example, I'll simply use the ping command to show how two instances of Iris running on different PCs can talk to each other over the air using the Aloha MAC protocol.
For this, we first need to set up a virtual network device on each host and give it an IP address in the same subnet. We also need to add a route so that the Linux network stack hands us every packet destined for an IP in 10.0.0.*. We can create a TAP device, which operates on layer-two frames, like this:

$ sudo openvpn --mktun --dev tap0
$ sudo ifconfig tap0 10.0.0.1
$ sudo ip route add 10.0.0.0/24 dev tap0

Run the same commands on the second host, but make sure to give it a different IP, e.g. 10.0.0.2. Now we can give it a try and run the over-the-air test with Iris and the AlohaMac. Again, we need to start two different radios, one on each machine: alohamac_ofdm_tap_node1.xml and alohamac_ofdm_tap_node2.xml. In these radios, the file reader/writer components are gone and have been replaced by the TUN/TAP component that connects the SDR to the network stack. You can now test your connection by opening another console on, let's say, the first node (10.0.0.1) and pinging the other one. The output of the console running Iris should look similar to the one above, the only exceptions being the initialization of the UhdRx and UhdTx components and the output of the TUN/TAP component.

[INFO]    alohamac0: Got DATA 1 from aabbcc111111
[INFO]    alohamac0: Tx  ACK  1
[INFO]    alohamac0: Receiver restart detected.
[DEBUG]   tuntap0: Successfully wrote 98 bytes to tun device
[DEBUG]   tuntap0: Read 98 bytes from device tap0
[INFO]    alohamac0: Got DATA 2 from aabbcc111111
[INFO]    alohamac0: Tx  ACK  2
[DEBUG]   tuntap0: Successfully wrote 98 bytes to tun device
[DEBUG]   tuntap0: Read 98 bytes from device tap0
[INFO]    alohamac0: Got ACK  1
[INFO]    alohamac0: Tx DATA  1
[INFO]    alohamac0: Tx DATA  2
[INFO]    alohamac0: Got DATA 2 from aabbcc111111
[INFO]    alohamac0: Tx  ACK  2
[ERROR]   alohamac0: Received too old ACK
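
The tuntap0 debug lines above are essentially all the TUN/TAP component does: read raw Ethernet frames from the tap device and hand them to the MAC, and write received frames back. Here is a minimal Python sketch of attaching to such a device; the ioctl constants come from Linux's <linux/if_tun.h>, but the helper names are mine, not the Iris component's API:

```python
import fcntl, os, struct

# Linux TUN/TAP ioctl constants (from <linux/if_tun.h>)
TUNSETIFF = 0x400454CA
IFF_TAP   = 0x0002   # layer-two (Ethernet) frames, like the tap0 device above
IFF_NO_PI = 0x1000   # no extra packet-info header in front of each frame

def tap_ifreq(name):
    """Build the ifreq structure TUNSETIFF expects: a 16-byte
    interface name followed by a 16-bit flags field."""
    return struct.pack("16sH", name.encode(), IFF_TAP | IFF_NO_PI)

def open_tap(name="tap0"):
    """Attach to an existing tap device (requires root privileges)."""
    fd = os.open("/dev/net/tun", os.O_RDWR)
    fcntl.ioctl(fd, TUNSETIFF, tap_ifreq(name))
    return fd

# fd = open_tap("tap0")       # needs root and the tap0 device created above
# frame = os.read(fd, 2048)   # one Ethernet frame per read -> hand to the MAC
# os.write(fd, frame)         # frames received over the air go back this way
```

Each os.read() on the descriptor returns exactly one Ethernet frame, which is why the debug output above reports whole 98-byte packets.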

The ping console should look like this:

$ ping 10.0.0.4 -c 2
PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data.
64 bytes from 10.0.0.4: icmp_req=1 ttl=64 time=98.8 ms
64 bytes from 10.0.0.4: icmp_req=2 ttl=64 time=884 ms

--- 10.0.0.4 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1000ms
rtt min/avg/max/mdev = 98.886/491.521/884.157/392.636 ms

You might have noticed the error above saying that we've received an ACK that is too old, as well as the relatively high ping times. This is likely caused by a problem inside the OFDM modem components which I guess will be fixed soon. The AlohaMac tries to retransmit each packet up to a maximum number of retransmissions that can be configured through the XML file. So if you can't receive anything or just get errors, simply restart Iris and try again. If you get your ping back to the first PC, congratulations, well done!
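
For intuition, the retransmission scheme the AlohaMac uses is classic stop-and-wait ARQ. Here is a toy sketch of that logic; the function and parameter names are made up for illustration, and the real retry limit is of course configured via the XML file:

```python
def send_with_arq(frames, channel, max_retx=5):
    """Stop-and-wait ARQ: send one frame, wait for the matching ACK,
    and retransmit on loss until the retry budget is exhausted."""
    seq = 0
    for payload in frames:
        for attempt in range(max_retx + 1):
            ack = channel(seq, payload)   # returns the ACKed seq, or None on loss
            if ack == seq:
                break                     # "Got ACK n" -> move to the next frame
        else:
            raise RuntimeError("frame %d: no ACK after %d tries" % (seq, max_retx))
        seq += 1
    return seq                            # number of frames delivered

# A perfect channel just echoes the sequence number back:
assert send_with_arq([b"a", b"b", b"c"], lambda s, p: s) == 3
```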

Over the past couple of years we've been developing more than a dozen components for Iris, including several MAC, spectrum mobility, and rendezvous protocols. We'll definitely open-source all of them and keep you updated about it!

SDR video

I’ve put a video together which shows some of the work we’ve done in the recent past. It’s basically a wireless communication system built using software defined radio (SDR). This means almost the entire radio is implemented in software and executed on a standard computer running Linux. The cool thing about this is that you are completely free in how you implement protocols and other features. In our case, we have been working on a Carrier Sense Multiple Access (CSMA) protocol which is quite similar to 802.11 (aka WiFi). The video shows how two nodes transmit video streams to one another over a single wireless channel. It’s the task of the protocol to coordinate both nodes’ access to this channel. We believe this work demonstrates how powerful fully software-defined protocols can be in practice.
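
The basic listen-before-talk idea behind such a CSMA protocol fits in a few lines. This is only a toy model of the concept, not the actual Iris component:

```python
import random

def csma_send(channel_busy, transmit, wait_slots=lambda n: None, max_attempts=10):
    """Listen-before-talk: sense the channel, transmit if it is idle,
    otherwise back off for a random number of slots (binary exponential
    backoff) before sensing again."""
    for attempt in range(1, max_attempts + 1):
        if not channel_busy():        # carrier sensing
            transmit()
            return True
        # Channel busy: the contention window doubles with every attempt.
        wait_slots(random.randint(0, 2 ** attempt - 1))
    return False                      # gave up; a real MAC would drop the frame
```

The random, growing backoff is what lets two nodes share the single channel without constantly colliding.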

Exporting EPS figures including text from within LibreOffice

Drawing figures for papers using LibreOffice Draw is very simple and straightforward. You can easily export them as EPS files and include them in Latex. Unfortunately, LibreOffice does not insert plain text into the EPS files, which is important if you want to use psfrag to replace text with complex Latex equations, for instance. Luckily, one can configure LibreOffice to do so by changing a single variable in a configuration file on your computer.

  • Just close all running instances of LibreOffice.
  • Open “/usr/lib/libreoffice/basis3.4/share/registry/main.xcd” using your favorite text editor (note that you need proper rights to do so).
  • Search for “TextMode”.
  • If the “value” tag is set to “0”, change it to “2”.
  • Close the file and restart LibreOffice Draw.
  • You’re done!
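
If you need to do this on several machines, the steps above can be scripted. The regular expression below assumes the TextMode property is stored as a prop element named "TextMode" with a value child on a single line; that is an assumption about the file format, so inspect the result before overwriting main.xcd:

```python
import re

# Path from the steps above (LibreOffice 3.4; adjust for your version).
XCD = "/usr/lib/libreoffice/basis3.4/share/registry/main.xcd"

def set_text_mode(xml, new="2"):
    """Replace the TextMode value 0 with `new`. Assumes the property
    appears as <prop oor:name="TextMode" ...><value>0</value> on one
    line; verify the match before writing anything back."""
    return re.sub(r'(oor:name="TextMode"[^>]*>\s*<value>)0(</value>)',
                  r'\g<1>%s\g<2>' % new, xml)

# with open(XCD) as f: xml = f.read()                   # needs root
# with open(XCD, "w") as f: f.write(set_text_mode(xml))
```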

Booting a custom Linux kernel inside a VMware guest machine

Just spent a couple of hours figuring out how to boot a custom Linux kernel inside a VMware guest machine. You might say that’s easy: just boot any recent Ubuntu kernel and that’s it. Well, that’s true, but I wanted a stripped-down kernel with only the modules enabled that are really needed, so I configured a new kernel from scratch. This image was booting just fine, but it failed to mount the root file system because it couldn’t find any. The simple reason was that I had disabled “Fusion MPT device support” and “Fusion MPT ScsiHost support”. Took me hours to figure out, damn. So just in case you’re having the same trouble ..
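
For reference, here are the options in question as they appear in recent mainline kernels; the exact symbol names are an assumption for older trees, so check your own config:

```text
# Required for VMware's emulated LSI Logic SCSI controller:
CONFIG_FUSION=y          # "Fusion MPT device support"
CONFIG_FUSION_SPI=y      # "Fusion MPT ScsiHost drivers for SPI"
# plus the usual SCSI disk support:
CONFIG_SCSI=y
CONFIG_BLK_DEV_SD=y
```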

Cheers,
Andre

personal webpage