A SpaceFibre interface includes a number of virtual channels, each providing a FIFO-type interface similar to that of a SpaceWire link. When data from a SpaceWire packet is placed in a SpaceFibre virtual channel, it is transferred over the SpaceFibre link and placed in the same numbered virtual channel at the other end of the link. Data from the several virtual channels is interleaved over the physical SpaceFibre connection. To support this interleaving, data is sent in short frames of up to 256 SpaceWire N-Chars each. An N-Char is a data byte, an End of Packet marker (EOP) or an Error End of Packet marker (EEP). A virtual channel can be assigned a quality of service, which determines the precedence with which that virtual channel competes with other virtual channels to send data over the SpaceFibre link. Priority, bandwidth reservation and scheduled qualities of service can all be supported, operating together using a simple precedence mechanism.

In this section, the SpaceFibre quality of service (QoS) mechanism is described.

Frames and Virtual Channels

To provide quality of service, it is necessary to be able to interleave different data flows over a data link or network. If a large packet is being sent with low priority and a higher priority packet requests to be sent, it must be possible to suspend sending the low priority packet and start sending the higher priority one. To facilitate this, SpaceWire packets are chopped up into smaller data units called frames. When the high priority packet requests to be sent, the current frame of the low priority packet is allowed to complete transmission, and then the frames of the high priority packet are sent. When all the frames of the high priority packet have been sent, the remaining frames of the low priority packet can be sent. Each frame has to be identified as belonging to a particular data flow so that the stream of packets can be reconstructed at the other end of the link.

Each independent data stream allowed to flow over a data link is referred to as a virtual channel (VC). Virtual channels are unidirectional and have a QoS attribute, e.g. priority. At each end of a virtual channel is a virtual channel buffer (VCB), which buffers the data from and to the application. An output VCB takes data from the application and buffers it prior to sending it across the data link. An input VCB receives data from the data link and buffers it prior to passing it to the receiving application.

There can be several output virtual channels connected to a single data link, which compete for sending information over the link. A medium access controller determines which output virtual channel is allowed to send the next data frame. When an output VCB has a frame of data ready to send and the corresponding input VCB at the other end of the link has room for a full data frame, the output VCB requests the medium access controller to send a frame. The medium access controller arbitrates between all the output VCBs requesting to send a frame. It uses the QoS attribute of each of the requesting VCBs to determine which one will be allowed to send the next data frame.
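
As an illustration only, the following Python sketch models this arbitration step in software. The class and function names are hypothetical, and the actual mechanism is implemented in hardware:

    # Illustrative model of medium access control arbitration.
    class OutputVCB:
        def __init__(self, vc_number, precedence):
            self.vc_number = vc_number      # virtual channel number
            self.precedence = precedence    # common QoS measure (see Precedence)
            self.frame_ready = False        # a data frame is waiting to be sent
            self.far_end_has_room = False   # input VCB has room for a full frame

        def requesting(self):
            # A VCB competes only when it has a frame ready and the
            # corresponding input VCB has room for a full data frame.
            return self.frame_ready and self.far_end_has_room

    def select_next_vcb(vcbs):
        # Return the requesting output VCB with the highest precedence,
        # or None if no VCB is currently requesting.
        candidates = [vcb for vcb in vcbs if vcb.requesting()]
        return max(candidates, key=lambda v: v.precedence) if candidates else None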

Priority is one example of a QoS attribute. Other types of QoS are considered in the subsequent sections.

Precedence

For the medium access controller to be able to compare QoS attributes from different output VCBs, it is essential that they all use a common measure. The name given to this measure is precedence. The competing output VCB with the highest precedence is allowed to send the next frame. Precedence is derived from the bandwidth reserved for a virtual channel and from its priority, as described in the following sections.

Bandwidth Reservation

When connecting an instrument via a network to a mass memory, what the systems engineer needs to know is “how much bandwidth do I have to transfer data from the instrument to the mass memory?” Once the network bandwidth allocated to a particular instrument has been specified, it should not be possible for another instrument to encroach on the bandwidth allocated to that instrument. A priority mechanism is not suitable for this application: if an instrument with high priority has data to send, it will hog the network until all its data has been sent. What is needed is a mechanism that allows bandwidth to be reserved for a particular instrument.

Bandwidth reservation compares the bandwidth recently used by a particular virtual channel with the bandwidth reserved for that virtual channel, to calculate the precedence of that virtual channel. If the virtual channel has used little of its reserved bandwidth recently, it will have a high precedence. When a data frame is sent by this virtual channel, its precedence drops; it then increases again gradually over a period of time. If a virtual channel has used more than its reserved bandwidth recently, it will have a low precedence.

A virtual channel specifies the portion of overall link bandwidth that it wishes to reserve and expects to use, i.e. its Expected Bandwidth Percentage. Whenever a frame of data is sent by any virtual channel, each virtual channel computes the amount of data that it would have been permitted to send in the interval taken to send that frame. This is known as the Bandwidth Allowance, calculated as follows:

Bandwidth Allowance = Expected BW% × Last Frame Bandwidth

Where Expected BW% is the portion of overall link bandwidth that the virtual channel wishes to use, and Last Frame Bandwidth is the amount of data sent in the last data frame by whichever virtual channel sent it. Each virtual channel can use its Bandwidth Allowance to determine its Bandwidth Credit, which is effectively the amount of data it can send while still remaining within its Expected Bandwidth. The Bandwidth Credit of a particular virtual channel is the amount of data that the virtual channel has been permitted to send minus the amount of data it has actually sent, i.e. the Bandwidth Allowance less the Used Bandwidth, accumulated over time.

Bandwidth Credit is calculated for each virtual channel as follows:

Bandwidth Credit = Σ (Bandwidth Allowance − Used Bandwidth) / Expected BW%

Where Used Bandwidth is the amount of data sent by the particular virtual channel in the last data frame, which is zero for all virtual channels except the one that sent the last frame, and the summation accumulates over successive data frames.

Consider the following example. A virtual channel is allocated 10% of the link bandwidth (Expected BW% = 0.1). Each frame being sent contains 250 bytes, so the Bandwidth Allowance is 0.1 × 250 = 25 bytes per frame interval. Consider the summation over, say, 20 frames, during which 2 frames are sent by the virtual channel of interest. The summed Bandwidth Allowance is 20 × 25 bytes and the summed Used Bandwidth is 2 × 250 bytes, so the Bandwidth Credit is (20 × 25 − 2 × 250)/0.1 = 0, as expected, since the virtual channel sent 2 out of 20 frames, which is 10%. If the virtual channel sends 1 frame out of 20, the Bandwidth Credit is (20 × 25 − 1 × 250)/0.1 = +2500, so the Bandwidth Credit increases significantly and it is more likely that the virtual channel will be permitted to send the next frame. If the virtual channel sends 3 frames out of 20, the Bandwidth Credit is (20 × 25 − 3 × 250)/0.1 = −2500, so the Bandwidth Credit drops significantly and it is less likely that the virtual channel will be permitted to send the next frame.
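
This worked example can be checked with a short Python sketch. The function name is hypothetical, and the 250-byte frame size is the assumption used in the example above:

    def bandwidth_credit(expected_bw, frame_bytes, frames_on_link, frames_sent_by_vc):
        # Summed Bandwidth Allowance less summed Used Bandwidth,
        # scaled by the Expected Bandwidth Percentage.
        allowance = expected_bw * frame_bytes * frames_on_link
        used = frame_bytes * frames_sent_by_vc
        return (allowance - used) / expected_bw

    print(bandwidth_credit(0.1, 250, 20, 2))   # 0: nominal use of bandwidth
    print(bandwidth_credit(0.1, 250, 20, 1))   # +2500: under-use, credit rises
    print(bandwidth_credit(0.1, 250, 20, 3))   # -2500: over-use, credit falls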

The Bandwidth Credit is updated every time a data frame for any virtual channel has been sent. A Bandwidth Credit value close to zero indicates nominal use of bandwidth by the virtual channel. A negative value indicates that the virtual channel is using more than its expected amount of link bandwidth. A positive value indicates that the virtual channel is using less than its expected amount of link bandwidth.

To simplify the hardware required to calculate the Bandwidth Credit, it is allowed to saturate at plus or minus a Bandwidth Credit Limit, i.e. if the Bandwidth Credit would pass the positive or negative Bandwidth Credit Limit, it is held at the value of that limit.
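
In code, this saturation is a simple clamp; a minimal sketch, with the limit value assumed to be a configuration parameter:

    def clamp_credit(credit, credit_limit):
        # Saturate at plus or minus the Bandwidth Credit Limit.
        return max(-credit_limit, min(credit_limit, credit))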

When the Bandwidth Credit for a virtual channel reaches the negative Bandwidth Credit Limit it indicates that the virtual channel is using more bandwidth than expected. This may be recorded in a status register and used to indicate a possible error condition. A network management application is able to use this information to check correct utilisation of link bandwidth by its various virtual channels.

For a virtual channel supporting bandwidth reserved QoS, the value of the bandwidth credit counter provides the precedence value for that virtual channel.

The operation of a bandwidth credit counter is illustrated in Figure 2.

Figure 2 Bandwidth Credit Counter

The bandwidth credit for a particular VC increments gradually. At point (1) a frame is sent by this VC, resulting in a sudden drop in credit. The size of the drop is the amount of data sent in the frame divided by the percentage bandwidth reserved for the VC. This means that the smaller the percentage bandwidth, the larger the drop, and hence the longer it takes to regain bandwidth credit.

After the drop at point (1), the bandwidth credit gradually increments until point (2), when another frame is sent by the VC. Further frames are sent at points (3), (4), (5) etc. If the frames sent are full frames, the drop in bandwidth credit will be the same size every time a frame is sent.

The bandwidth credit counter for another VC is illustrated in Figure 3. This VC has about half the bandwidth of the VC in Figure 2 allocated to it. This means that the drops in bandwidth credit when frames are sent by this VC are about twice the size, as can be seen in Figure 3 at points (1), (2) and (3).

Figure 3 Bandwidth Credit Counter with Smaller Reserved Bandwidth

The bandwidth credit counter of another VC is shown in Figure 4. In this case the bandwidth credit slowly increments and, although some frames are sent at points (1), (2) and (3), the bandwidth credit eventually saturates, reaching its maximum permitted value at point (4). Although more bandwidth credit would be accumulated after point (4), it is effectively ignored since the maximum possible bandwidth credit has been reached. At point (5) a frame is sent once more, resulting in a drop from the maximum bandwidth credit value.

Figure 4 Bandwidth Credit Counter Reaching Saturation

All three VCs are shown together in Figure 5. When a VC has a data frame ready to send and room for a full data frame at the other end of the link, it competes with any other VCs in a similar state, the one with the highest bandwidth credit being allowed to send the next data frame. At points (1), (2) and (3) the red VC has data to send and sends frames. At points (4), (5) and (6) the green VC has data to send and sends a data frame. At point (7) both the blue and the red VCs have data to send. The blue VC wins since it has the highest bandwidth credit count. After this the red VC is allowed to send a further data frame at point (8).

Figure 5 Bandwidth Credit of Competing VCs

If the bandwidth credit counter of a VC reaches the minimum possible bandwidth credit value, it indicates that the VC is using more bandwidth than expected, and a possible error may be flagged. This condition may also be used to stop the VC sending any more data until it recovers some bandwidth credit, helping to provide “babbling idiot” protection.

Similarly, if the bandwidth credit counter stays at the maximum possible bandwidth credit value for a relatively long period of time, the VC is using less bandwidth than expected and this condition can be flagged to indicate a possible error.

The Bandwidth Credit for different values of Expected Bandwidth is illustrated in Figure 6.

Figure 6 Bandwidth Credit for Different Expected Bandwidths

Along the x-axis is time, expressed as transmitted frame number; it is assumed in the diagram that all frames are the same size. At frame numbers 5, 10 and 15 a frame is sent by the particular virtual channel being considered. Each line in the chart shows what happens to the Bandwidth Credit for a different value of the Expected Bandwidth. Consider the green line in the middle (Expected BW = 0.2). The Bandwidth Credit slowly increases until the virtual channel sends a frame at time 5, when the Bandwidth Credit drops back to zero. If the Expected Bandwidth were higher, the drop in Bandwidth Credit would be smaller, and if the Expected Bandwidth were lower, the drop would be larger. The blue line at the bottom (Expected BW = 0.05) drops by 4000 units each time a frame is sent, eventually saturating at the value of -5000. It can be seen from this diagram that if the virtual channel has a higher Expected Bandwidth, its Bandwidth Credit is higher, assuming that the data the virtual channel sends is the same in each case.

The bandwidth credit value is the precedence used by the medium access controller to determine which VC is permitted to send the next data frame.

Priority

The second type of QoS provided by VCs is priority. Each VC is assigned a priority value and the VC with the highest priority (lowest priority number) is allowed to send the next data frame as soon as it is ready. Figure 7 shows three priority levels. SpaceFibre has 16 priority levels.

Figure 7 Multi-Layered Precedence Priority QoS

Within any level there can be any number of VCs, which compete amongst themselves based on their bandwidth credit. A higher priority VC will always have precedence over a lower priority VC, unless its Bandwidth Credit has reached the minimum credit limit, in which case it is no longer allowed to send any more data frames. This prevents a high priority VC from consuming all the link bandwidth if it fails and starts babbling. More than one VC can be set to the same priority level, in which case those VCs will compete for medium access using bandwidth reservation.
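
Extending the earlier sketch, this multi-layered precedence might be expressed as follows, assuming each VC object carries hypothetical priority and credit attributes, and that a lower priority number means higher priority:

    def select_by_priority(vcs, credit_limit):
        # Exclude VCs that have exhausted their bandwidth credit
        # ("babbling idiot" protection).
        candidates = [vc for vc in vcs
                      if vc.requesting() and vc.credit > -credit_limit]
        if not candidates:
            return None
        # Lowest priority number wins; within a priority level, ties are
        # resolved by bandwidth credit (bandwidth reservation).
        return min(candidates, key=lambda vc: (vc.priority, -vc.credit))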

Scheduled

To provide fully deterministic data delivery, it is necessary for the QoS mechanism to ensure that data from specific virtual channels can be sent (and delivered) at particular times. This can be done by chopping time into a series of time-slots, during each of which a particular VC is permitted to send data frames. This is illustrated in Figure 8.

Figure 8 Scheduled Quality of Service

Each VC is allocated one or more time-slots in which it is permitted to send data frames. VC1 is scheduled to send in time-slot 1 and VC2 is scheduled to send in time-slots 2 and 3. The time-slot duration is a system-level parameter, typically 1 ms, and there are 64 time-slots.

During a time-slot, if the VC is scheduled to send in that time-slot, it will compete with other VCs also scheduled to send in that time-slot based on precedence (priority and bandwidth credit). A fully deterministic system would allow only one VC to send in each time-slot.
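
As a sketch, the schedule could be modelled as a 64-entry table giving the set of VC numbers permitted in each time-slot; only the VCs scheduled in the current slot enter the normal precedence arbitration. The names here are hypothetical:

    NUM_SLOTS = 64

    def scheduled_vcs(schedule, current_slot, vcs):
        # schedule[slot] is the set of VC numbers permitted in that time-slot.
        allowed = schedule[current_slot % NUM_SLOTS]
        return [vc for vc in vcs if vc.vc_number in allowed]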

The schedule is always operating. If a user does not want to use scheduling, the schedule table is simply filled completely, allowing any VC to send in any time-slot, competing with the other VCs using precedence.

Scheduling can waste bandwidth if only one VC is allowed to send in a time-slot and that VC is not ready. To avoid this situation, the critical VC can be allocated a time-slot and given high priority, and another VC can be allocated the same time-slot with lower priority. In this way, when that time-slot arrives the high priority VC will be allowed to send its data, but if it is not ready the VC with lower priority can send some data. This configuration is illustrated in Figure 8, time-slot 3, with VCs 6 and 8, and is sketched in code below.
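
Continuing the sketch above, the fallback configuration of Figure 8 time-slot 3 might be set up like this, assuming VC6 is the critical channel and is configured with a higher priority than VC8:

    schedule = [set() for _ in range(NUM_SLOTS)]
    schedule[3] = {6, 8}   # both VCs may send in time-slot 3
    # VC6 (higher priority) sends first whenever it is ready; VC8 (lower
    # priority) uses the remaining bandwidth of the slot, so the slot is
    # not wasted when VC6 has nothing to send.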

Figure 9 illustrates a very efficient way of implementing deterministic data delivery over SpaceFibre while using the maximum amount of link bandwidth for non-deterministic traffic. VCs 1 and 2 represent the deterministic traffic, for example Attitude and Orbit Control System (AOCS) and housekeeping. The AOCS has to read data from AOCS sensors and write commands to AOCS actuators every 4 ms; it can do this using SpaceWire RMAP commands, for example. The deterministic VCs are allocated specific time-slots and given the highest priority, which means that when an allocated time-slot comes along the deterministic VC can send all of its data first. When it no longer has any data to send, other VCs can send data, competing with each other for access to the link using precedence. The deterministic VC is not allowed to send data in other time-slots, only in the ones it has been allocated, ensuring strict time limits on the transfer of data by the deterministic VC. At the start of an allocated time-slot the deterministic VC sends its RMAP commands in a burst, along with any RMAP replies. This is illustrated at the bottom of Figure 9, where a burst of traffic from the deterministic VC is sent at the start of a time-slot and other traffic fills up the time-slot when the deterministic VC has nothing else to send.

Figure 9 Determinism with Scheduling and Priority

Time-slots can be defined using broadcast messages, either to send start of time-slot signals directly, or to distribute time information to a local time counter which then determines the start and end of each time-slot. The SpaceFibre broadcast message mechanism supports both synchronisation and time distribution.
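
A minimal sketch of the second approach, assuming the typical 1 ms slot duration and 64 time-slots mentioned above, and a local time counter counting in microseconds:

    SLOT_DURATION_US = 1000   # typical 1 ms time-slot duration
    NUM_SLOTS = 64

    def current_time_slot(local_time_us):
        # The local time counter, synchronised using SpaceFibre broadcast
        # messages, determines the start and end of each time-slot.
        return (local_time_us // SLOT_DURATION_US) % NUM_SLOTS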

The SpaceFibre QoS mechanism is simple and efficient to implement, and it provides bandwidth reservation, priority and scheduling integrated together, not as separate options. Furthermore, SpaceFibre QoS provides a means of detecting “babbling idiots” and of detecting nodes that have ceased sending data when they are expected to be sending information.

Note: This information is taken from S. Parkes et al., “SpaceFibre: Multi-Gigabit/s Interconnect for Spacecraft On-board Data Handling”, IEEE Aerospace Conference, Big Sky, Montana, 2015.