Polled TCP

When developing a classic embedded Rust application that uses smoltcp for networking (either with RTIC or with no executor at all), a common approach is to handle networking as part of the Ethernet interrupt handler (a rough sketch of this pattern follows the list below). This has a few problems:

  • Anything the interrupt handler touches has to be declared as a global static.
  • The IRQ must never block.
  • It is harder to add another trigger that forces the stack to be polled.
  • It is up to the developer to handle the state machine properly. (This will be solved in the next chapter with async.)
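
For illustration, an interrupt-driven setup usually looks roughly like the following sketch. This is a hypothetical outline, not code from this project: the Net struct, the NET static, and the ETH handler are placeholders, and the #[interrupt] attribute would come from the device's PAC or HAL crate.

    use core::cell::RefCell;
    use cortex_m::interrupt::Mutex;

    // Placeholder for whatever bundles the interface, device, and sockets.
    struct Net { /* interface, device, sockets, ... */ }

    impl Net {
        fn poll(&mut self) { /* interface.poll(...) plus socket handling */ }
    }

    // Everything the handler touches has to live in a global static.
    static NET: Mutex<RefCell<Option<Net>>> = Mutex::new(RefCell::new(None));

    #[interrupt]
    fn ETH() {
        // Interrupt context: this must never block, and nothing except the
        // Ethernet peripheral can easily trigger another poll.
        cortex_m::interrupt::free(|cs| {
            if let Some(net) = NET.borrow(cs).borrow_mut().as_mut() {
                net.poll();
            }
        });
    }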

Let's try to solve the first two problems by adding a simple async task, which will periodically poll the smoltcp interface and handle a TCP client.

For reference, an RTIC example can be found here.

Configuring the IP address

At this point, we will be using the network layer, so the first thing we need to do is to configure an IP address for our smoltcp interface.

    // Create the interface, bound to the Ethernet DMA device.
    let config = smoltcp::iface::Config::new(liltcp::MAC.into());
    let mut interface = Interface::new(config, &mut eth_dma, liltcp::smoltcp_lilos::smol_now());
    // Assign a static IPv4 address (with prefix length) to the interface.
    interface.update_ip_addrs(|addrs| {
        let _ = addrs.push(IpCidr::new(
            liltcp::IP_ADDR.into_address(),
            liltcp::PREFIX_LEN,
        ));
    });

    // Backing storage for the sockets; one slot is enough for our single TCP socket.
    let mut storage = [SocketStorage::EMPTY; 1];
    let mut sockets = SocketSet::new(&mut storage[..]);

The IP_ADDR and PREFIX_LEN constants are defined in lib.rs as follows:

pub const IP_ADDR: Ipv4Address = Ipv4Address::new(10, 106, 0, 251);
pub const PREFIX_LEN: u8 = 24;

In theory, it should be possible to initialize the whole CIDR address in a single constant, but the patch has only landed recently and is not released yet.

The snippet also allocates a SocketStorage array and a SocketSet, which is smoltcp's way of storing active sockets. In this case, we will add only one socket, so a storage length of 1 is enough.
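
If more sockets were needed later, only the storage length and the number of add calls would change; each add returns its own handle for later lookups. A purely hypothetical sketch (client_socket and second_socket are placeholders, not part of this example):

    // Hypothetical: two sockets instead of one.
    let mut storage = [SocketStorage::EMPTY; 2];
    let mut sockets = SocketSet::new(&mut storage[..]);
    let client_handle = sockets.add(client_socket);
    let second_handle = sockets.add(second_socket);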

Network task

Now that the preparations are out of the way, we can define our net_task. This task will handle both polling of the stack and the TCP client itself (even if in a simplified way).

async fn net_task(
    mut interface: Interface,
    mut dev: ethernet::EthernetDMA<4, 4>,
    sockets: &mut SocketSet<'_>,
    mut phy: LAN8742A<impl StationManagement>,
    mut link_led: ErasedPin<Output>,
) -> Infallible {
    // Buffers backing the socket's RX and TX ring buffers; statics, so they
    // outlive the `'a` lifetime of the `SocketSet`.
    static mut RX: [u8; 1024] = [0u8; 1024];
    static mut TX: [u8; 1024] = [0u8; 1024];

    let rx_buffer = unsafe { RingBuffer::new(&mut RX[..]) };
    let tx_buffer = unsafe { RingBuffer::new(&mut TX[..]) };

    let client = smoltcp::socket::tcp::Socket::new(rx_buffer, tx_buffer);

    let handle = sockets.add(client);

    let mut eth_up = false;

    loop {
        'worker: {
            let eth_last = eth_up;
            eth_up = phy.poll_link();

            link_led.set_state(eth_up.into());

            if eth_up != eth_last {
                if eth_up {
                    defmt::info!("UP");
                } else {
                    defmt::info!("DOWN");
                }
            }
            if !eth_up {
                break 'worker;
            }

            // Poll the interface; the return value indicates whether the
            // readiness of any socket might have changed.
            let ready = interface.poll(liltcp::smoltcp_lilos::smol_now(), &mut dev, sockets);

            if !ready {
                break 'worker;
            }

            let socket = sockets.get_mut::<smoltcp::socket::tcp::Socket>(handle);
            // If the socket is not open (or has been fully closed), start a
            // new connection attempt to the remote endpoint.
            if !socket.is_open() {
                defmt::info!("not open, issuing connect");
                defmt::unwrap!(socket.connect(
                    interface.context(),
                    liltcp::REMOTE_ENDPOINT,
                    liltcp::LOCAL_ENDPOINT,
                ));

                break 'worker;
            }

            let mut buffer = [0u8; 10];
            if socket.can_recv() {
                let len = defmt::unwrap!(socket.recv_slice(&mut buffer));
                defmt::info!("recvd: {} bytes {}", len, buffer[..len]);
            }
            if socket.can_send() {
                defmt::unwrap!(socket.send_slice(b"world"));
            }
        }

        // NOTE: Not performant; it does not react to the Ethernet interrupt signal, cancel the wait on an IRQ, etc.
        // NOTE: In async code, this will be replaced with a more elaborate calling of poll_at.
        lilos::time::sleep_for(lilos::time::Millis(1)).await;
    }
}

First, we define the buffers that the TCP socket will use internally. These are defined as mutable statics, because they need to match or outlive the 'a lifetime of the SocketSet. Next, we create a TCP socket and add it to our SocketSet. This call gives us a handle that can later be used to access the socket through the SocketSet.
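
As an aside, mutable statics are not the only way to obtain buffers with a 'static lifetime. A sketch of one alternative, assuming the static_cell crate (which this project does not use):

    use static_cell::StaticCell;

    // Same buffers, same 'static lifetime, but no `static mut` and no `unsafe`.
    static RX: StaticCell<[u8; 1024]> = StaticCell::new();
    static TX: StaticCell<[u8; 1024]> = StaticCell::new();

    let rx_buffer = RingBuffer::new(&mut RX.init([0u8; 1024])[..]);
    let tx_buffer = RingBuffer::new(&mut TX.init([0u8; 1024])[..]);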

Now the polling itself takes place, in a loop containing a labeled block called 'worker. First, we check whether the link is up; if it is not, we simply break out of the 'worker block. If the link is up, we poll the interface to see whether there are any new data for our socket to process. If there are, we access the socket using the aforementioned handle and operate on it. We first check whether it is open; if it is not, we attempt to connect to a remote endpoint and break out of the 'worker block so that the interface gets polled again. On subsequent polls, once the socket is open, we attempt a read followed by a write.

Whether the 'worker block runs to completion or is exited early with break 'worker, the task then sleeps for a millisecond.

This implementation is not meant to showcase proper TCP socket handling. Right now, there are many unhandled states, and it is very likely to panic if you look at it wrong.
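
To give one concrete example of such a state (a sketch, not part of the example code): if the remote side closes the connection, the socket sits in the CLOSE-WAIT state, where is_open() still returns true, so the loop never reconnects and traffic simply stops. Handling it inside the 'worker block could look roughly like this:

    use smoltcp::socket::tcp::State;

    // If the peer has closed its side, close ours too, so the socket
    // eventually becomes closed again and the connect branch runs anew.
    if socket.state() == State::CloseWait {
        socket.close();
    }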

Another big problem here is performance: the polling loop runs with a fixed period of 1 ms.
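
One way to mitigate this while staying in a polled design is to ask smoltcp when it actually needs to be polled next via Interface::poll_delay (the Duration counterpart of the poll_at mentioned in the code comments) and sleep for that long instead. A sketch, with the delay capped because newly received frames are still only noticed at the next wakeup:

    // Sketch: sleep for as long as smoltcp says nothing needs to happen,
    // but at most 10 ms, since incoming frames are only picked up on the
    // next wakeup in this polled design.
    let delay_ms = interface
        .poll_delay(liltcp::smoltcp_lilos::smol_now(), sockets)
        .map(|d| d.total_millis())
        .unwrap_or(10)
        .clamp(1, 10);
    lilos::time::sleep_for(lilos::time::Millis(delay_ms)).await;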

Spawning the network task

Now we can simply spawn our task and let it do the polling and TCP handling.

        lilos::exec::run_tasks_with_preemption(
            &mut [
                core::pin::pin!(liltcp::led_task(gpio.led)),
                core::pin::pin!(net_task(
                    interface,
                    eth_dma,
                    &mut sockets,
                    lan8742a,
                    gpio.link_led
                )),
            ],
            lilos::exec::ALL_TASKS,
            Interrupts::Filtered(liltcp::NVIC_BASEPRI),
        );

Conclusions

This solution is probably good enough for simple tests, but apart from not being async, there is one big problem: extending the TCP handling will quickly become a hassle.

This is caused by the following factors:

  • It is tightly coupled with smoltcp stack polls.
  • Adding more sockets will clutter the code even more.
  • Adding any kind of timeout would either block the entire task or require implementing some sort of state machine to handle it, which is exactly what we want async for.

Let's now have a quick intermezzo on decoupling polling from socket handling: sharing the smoltcp stack across tasks.