Keynote I: The Programmable Imperative of Networking: Past and Future
Mike Fitton
Intel Corporation
As we look back over the last thirty years of digital communications infrastructure, mobile has progressed from voice services on 2G through ever-increasing data rates and increasingly feature-rich applications; at the same time, wireline network bandwidth has grown by many orders of magnitude, with a corresponding growth in complexity. At each step in this journey, reconfigurable logic in general, and FPGAs in particular, have enabled every new generation of communications standards. FPGAs can be utilized as flexible, reprogrammable workload accelerators that offload computationally intensive applications and enable new usage models for new applications and emerging standards. The availability of fixed-function, or custom, silicon solutions often trails these new requirements, and processors lack the performance to address the highest data rates; in these cases, the FPGA is often the only option for supporting the latest generation of network.
In this contribution, we will explore the requirements that future networks will place on future FPGA device features, software, and IP. The use of the FPGA as an Infrastructure (or Dataplane) Processing Unit will necessitate features to ingest, distribute, and process an enormous bandwidth of data. Workloads will include new AI/ML network types that have not yet been conceived, processing data in real time with constrained latency and latency determinism. Support for 6G will require new numerology, new signal processing, and new security algorithms. Furthermore, in many cases traditional design methodologies create a barrier to entry for a wider group of users; here, we must consider high-level, domain-specific programming languages (e.g. for AI/ML or networking) to widen the user base.
We will outline the requirements for future FPGA features and showcase some existing research and development that enables the reprogrammability required for next-generation communication standards.
Keynote II: Bitstream Design Abstraction to Build Reconfigurable Machines and Applications
Prasanna Sundararajan
Microsoft Azure
Abstract:
The design flow of modern FPGA tools starts with synthesis of a design specification, followed by placement and routing of the electronic circuit and generation of the physical configuration bitstream. By contrast, the JBits Software Development Kit, developed by Xilinx two decades ago, operated directly on the bitstream to map, place, and route FPGA designs. This approach permits designs to be converted from specification into implementation in orders of magnitude less time, and using far fewer resources, than conventional tools. This talk revisits the JBits Application Programming Interface (API) for building designs from the bitstream, along with the associated tools and run-time reconfigurable cores developed to build various reconfigurable applications.
Developing applications that leverage bitstream abstraction has many benefits with respect to performance, debugging, and deployment. From an application standpoint, one of the cryptographic designs implemented using JBits showed the benefits of efficient physical implementation: a JBits implementation of the Data Encryption Standard (DES), published at FCCM 2000, exceeded the speed reported for DES ASICs announced at the time. To illustrate the relevance of bitstream design abstraction to current applications, we will also discuss the use of JBits in a real-world cloud and data center workload scenario.
Bio:
Prasanna Sundararajan is an experienced architect with a track record of taking new and early-stage technology concepts from inception to production with a customer focus. He holds 30 issued patents in the areas of computing and cache architectures, security, fault tolerance and reliability, and design tools, and has extensive technology experience in multicore and FPGA-based systems. He started his career at Xilinx and was one of the core team members of the JBits project.
During his 12+ years at Xilinx, working in Xilinx Research Labs and the HPC business teams, he worked on run-time reconfiguration and C-to-FPGA design tools, HPC system architecture, and SEU mitigation technologies. In 2012, he co-founded rENIAC (acquired by a public semiconductor company) to bring drop-in acceleration to cloud and data center workloads. Across his roles at Xilinx and rENIAC, he has supported over 40 customers in the Hyperscale, Finance, Government, Ad-tech, e-Commerce, and Media market segments. Currently at Microsoft Azure, he focuses on acceleration of cloud workloads.