Prevailing memory abstractions and infrastructures that support FPGA application development continue to rely on the classic notions of loading and storing to memory address locations. Why should we limit our thinking to such a low-level, explicit paradigm when developing computing applications on an FPGA? Just as in software development, FPGA application developers should be supported by high-level abstractions that encapsulate in-memory data structures and meaningful operations on them. Under a Service-Oriented Memory Architecture, compute accelerator logic interacts with abstracted “memory objects” using a rich set of commands. The implementation complexities of the memory objects and operations, as well as the bare-metal-level memory interface, are hidden from the compute accelerator developers. The support for these memory objects and operations, realized as soft-logic datapath modules, can be specialized to an application domain and provided in a reusable and extensible library. We are currently working on a proof-of-concept prototype environment and application demonstrators.
====Network Function Acceleration====

We begin our investigation by studying FPGA acceleration of Intrusion Detection Systems (IDS). Today’s state-of-the-art IDSes are software-based and cannot cost- or power-efficiently keep up with increasing network speeds. FPGA accelerators are a promising, efficient, high-performance hardware alternative to software IDSes while retaining software’s programmability. We are currently working on an FPGA-accelerated SNORT IDS that uses the FPGA to handle the common cases at network speed (100 Gbps) and offloads only a very small fraction of exceptional cases to the CPU. Future work is to create a high-level domain-specific NF programming framework for use by networking experts who are not RTL experts.