alex.forencich

Forum Replies Created

Viewing 4 posts - 1 through 4 (of 4 total)
    in reply to: Comments Please! #1648
    alex.forencich
    Participant

    Many thanks to Ken and Dustin for setting up the conference site and keeping it running like a well-oiled machine! (Or maybe a well duct-taped, bubble-gummed, and baling-wired machine?)

    Here is a video of them editing PHP on the fly: [embedded video]

    At any rate, I liked being able to have time to compose questions and responses, as well as having more control over the talks – watching them out of order, skipping around, consulting the talk and the paper at the same time, etc. However, it would be nice to have some form of live discussion as well. I'm not sure what format would make the most sense, though – maybe a block of time for live Q+A or general discussion for each session, or something along those lines. It might also be useful to have some kind of incentive to get discussion going on the forums during the conference, so there can be plenty of eyeballs on the discussion.

    alex.forencich
    Participant

    Quick question: what if the ring oscillator or glitch amplifier is built from SRL primitives or distributed RAM that are configured at run time, instead of from LUTs?

    in reply to: Corundum: An Open-Source 100-Gbps NIC #1537
    alex.forencich
    Participant

    Thanks for the questions, kanshi.

    First, without a DPDK driver, it’s hard to measure performance for small packet sizes without being limited by the overhead of the Linux network stack (after all, we want to measure the capabilities of the NIC hardware, not the network stack software), so we don’t currently have any reliable measurements for packet sizes other than MTU-sized frames. Once we have a DPDK driver up and running, we’ll have a better idea of what the design is capable of for small packet sizes. However, I will say that the current design is relatively simple and as such has some limitations – namely, it currently only supports fixed-size descriptor blocks, with no support for inlining data in the descriptor rings. As a result, throughput for minimum-length frames is rather poor due to PCIe overheads: the theoretical max with the current configuration is around 58 Mpps for minimum-length frames, compared to the 142 Mpps theoretical max of 100G Ethernet.
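    For anyone who wants to sanity-check the line-rate figure, here is a quick back-of-the-envelope sketch in Python. The byte accounting (a 64-byte minimum frame plus a 4-byte FCS, 8-byte preamble/SFD, and 12-byte interframe gap) is my reading of the numbers above, and the 58 Mpps PCIe-limited figure is quoted rather than derived:

```python
# Back-of-the-envelope Ethernet packet-rate bound (illustrative sketch).
# Assumed accounting: 64-byte minimum frame + 4-byte FCS, plus 8 bytes
# preamble/SFD and 12 bytes interframe gap occupied on the wire.

LINE_RATE_BPS = 100e9             # 100G Ethernet
WIRE_BYTES_MIN = 64 + 4 + 8 + 12  # 88 bytes per minimum-length frame

max_pps = LINE_RATE_BPS / (WIRE_BYTES_MIN * 8)
print(f"100G line-rate bound: {max_pps / 1e6:.0f} Mpps")  # ~142 Mpps

# The ~58 Mpps figure for the current Corundum configuration is set by
# per-packet PCIe overheads (descriptor fetches, completions, etc.) and
# is not derived from first principles here.
```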

    In the TX direction, software classifies packets into queues, and then the transmit schedulers on each port determine which queues traffic will be sent from. In the receive direction, RSS flow hashing is used to select the receive queue. It would be possible to add logic that implements something more complex by pre-processing packets before handing them off to Corundum to pass to the host. At any rate, software-based classification in the transmit direction is required to dedicate queues to individual flows and avoid issues with head-of-line blocking.
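    To illustrate the receive side, here is a minimal software model of RSS-style queue selection using the classic Toeplitz hash. This is a sketch only: the field order, the placeholder key, and the modulo queue mapping are simplifying assumptions (real NICs typically index an indirection table with the low hash bits, and the actual hashing happens in hardware):

```python
import socket
import struct

def toeplitz_hash(key: bytes, data: bytes) -> int:
    """Classic RSS Toeplitz hash: for every set bit of the input, XOR in
    the 32-bit window of the key starting at that bit position."""
    key_int = int.from_bytes(key, "big")
    key_bits = len(key) * 8
    data_int = int.from_bytes(data, "big")
    data_bits = len(data) * 8
    result = 0
    for i in range(data_bits):
        if (data_int >> (data_bits - 1 - i)) & 1:
            window = (key_int >> (key_bits - 32 - i)) & 0xFFFFFFFF
            result ^= window
    return result

def rx_queue(src_ip, dst_ip, src_port, dst_port, key, num_queues):
    # IPv4 TCP/UDP 4-tuple in the usual RSS input order
    data = (socket.inet_aton(src_ip) + socket.inet_aton(dst_ip)
            + struct.pack(">HH", src_port, dst_port))
    # Simplification: real hardware usually maps low hash bits through an
    # indirection table rather than taking a modulo directly.
    return toeplitz_hash(key, data) % num_queues

key = bytes(range(1, 41))  # placeholder 40-byte key; a real key is random
print(rx_queue("10.0.0.1", "10.0.0.2", 12345, 80, key, num_queues=8))
```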

    Corundum is intended to support high-precision transmit scheduling, something that most NICs (even most smart NICs) can only do in a very limited way. More complex functionality, such as match-action rules, can be implemented outside of the main Corundum modules if necessary. In this case, Corundum could be used as a high-performance host interface for a smart NIC design.
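    To make "high-precision transmit scheduling" a bit more concrete, here is a toy software model of time-slotted (TDMA-style) queue selection. The slot table, slot period, and round-robin arbitration are illustrative assumptions for the sketch, not the actual scheduler RTL:

```python
import itertools

class TdmaTxScheduler:
    """Toy model: each time slot enables a subset of queues, and a
    round-robin arbiter picks among enabled queues with packets pending.
    (Illustrative only; the real scheduler lives in hardware.)"""

    def __init__(self, queues, slot_table, slot_period_ns):
        self.queues = queues              # queue id -> list of packets
        self.slot_table = slot_table      # slot index -> set of queue ids
        self.slot_period_ns = slot_period_ns
        self._rr = itertools.cycle(sorted(queues))

    def next_packet(self, now_ns):
        slot = (now_ns // self.slot_period_ns) % len(self.slot_table)
        enabled = self.slot_table[slot]
        for _ in range(len(self.queues)):
            q = next(self._rr)
            if q in enabled and self.queues[q]:
                return q, self.queues[q].pop(0)
        return None  # nothing eligible in this slot

# Two queues, alternating 1 us slots: queue 0 may only send in even slots.
sched = TdmaTxScheduler({0: ["a"], 1: ["b"]}, [{0}, {1}], 1000)
print(sched.next_packet(now_ns=500))    # slot 0 -> (0, 'a')
print(sched.next_packet(now_ns=1500))   # slot 1 -> (1, 'b')
```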

    Corundum uses a custom driver that registers the device as a standard Ethernet interface. All of the testing carried out so far has been with the standard Linux networking stack.

    in reply to: Corundum: An Open-Source 100-Gbps NIC #1528
    alex.forencich
    Participant

    Thanks for dropping by and listening to my presentation on Corundum! Feel free to ask questions; I will get back to you as soon as I can.

    If it sounds like an interesting project, be sure to check out the source code on GitHub. You can also join the Google group.
