====Service-Oriented Memory Architecture====

Prevailing memory abstractions and infrastructures that support FPGA application development continue to rely on the classic notions of loading and storing to memory address locations. Why should we limit our thinking to such a low-level, explicit paradigm when developing computing applications on an FPGA? Just as in software development, FPGA application developers should be supported by high-level abstractions that encapsulate in-memory data structures and meaningful operations on them. Under a Service-Oriented Memory Architecture, compute accelerator logic interacts with abstracted “memory objects” using a rich set of commands. The implementation complexities of the memory objects and operations, as well as the bare-metal-level memory interface, are hidden from the compute accelerator developers. The support for these memory objects and operations, realized as soft-logic datapath modules, can be specialized to an application domain and provided in a reusable and extensible library. We are currently working on a proof-of-concept prototype environment and application demonstrators.
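
To make the idea concrete, below is a minimal software sketch of a command-style memory-object interface, assuming a simple keyed object; all names (''mem_obj_t'', ''get'', ''put'') are hypothetical and are not an actual SOMA API. The accelerator-side code issues commands and never manipulates raw addresses; the backend behind the interface (here a plain array, on an FPGA a soft-logic datapath in front of DDR or HBM) is interchangeable.

<code c>
/* Hypothetical command interface to a SOMA-style "memory object".
 * Illustrative only: the accelerator issues high-level commands and
 * the backend hides addressing and the bare-metal memory interface. */
#include <stdio.h>
#include <stdlib.h>

typedef struct mem_obj mem_obj_t;
struct mem_obj {
    void *state;                                   /* backend-private state */
    int  (*get)(mem_obj_t *o, int key, int *val);  /* command: read by key  */
    void (*put)(mem_obj_t *o, int key, int val);   /* command: write by key */
};

/* One possible backend: a flat array standing in for external DRAM. */
static int array_get(mem_obj_t *o, int key, int *val) {
    int *a = (int *)o->state;
    *val = a[key];
    return 0;
}
static void array_put(mem_obj_t *o, int key, int val) {
    int *a = (int *)o->state;
    a[key] = val;
}
static mem_obj_t make_array_obj(int n) {
    mem_obj_t o = { calloc((size_t)n, sizeof(int)), array_get, array_put };
    return o;
}

/* "Accelerator" code: it sees only object commands, never addresses. */
int main(void) {
    mem_obj_t table = make_array_obj(1024);
    table.put(&table, 7, 42);
    int v;
    table.get(&table, 7, &v);
    printf("key 7 -> %d\n", v);
    free(table.state);
    return 0;
}
</code>
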
====Network Function Acceleration====
  
We begin our investigation by studying FPGA acceleration of Intrusion Detection Systems (IDS). Today’s state-of-the-art IDSs are software-based and cannot cost- or power-efficiently keep up with increasing network speeds. FPGA accelerators are promising as an efficient, high-performance hardware alternative to software IDSs that retains software’s programmability. We are currently working on an FPGA-accelerated SNORT IDS that uses the FPGA to handle the common cases at network speed (100 Gbps) and offloads only a very small fraction of exceptional cases to the CPU. Future work is to create a high-level, domain-specific NF programming framework for use by networking experts who are not RTL experts. This is joint work with [[http://www.justinesherry.com/ |Justine Sherry]] and [[https://users.ece.cmu.edu/~vsekar/ | Vyas Sekar]].
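
The common-case/exception split can be illustrated with a toy software model (below); the packet fields and the fast-path test are hypothetical stand-ins for whatever the actual accelerator can complete in the fabric, with everything else escalated to the software IDS on the CPU.

<code c>
/* Toy model of a fast-path / slow-path split for IDS acceleration.
 * All fields and conditions are hypothetical: the point is only that
 * the FPGA pipeline handles the common case at line rate and defers
 * rare, complex cases to full software rule evaluation on the CPU.  */
#include <stdbool.h>
#include <stdio.h>

typedef struct {
    unsigned len;
    bool     needs_reassembly;   /* e.g. out-of-order TCP segment      */
    bool     hit_complex_rule;   /* matched a rule the fabric cannot   */
} packet_t;                      /* fully evaluate                     */

/* Returns true if the packet can be completed entirely in the fabric. */
static bool fast_path(const packet_t *p) {
    return !p->needs_reassembly && !p->hit_complex_rule;
}

int main(void) {
    packet_t pkts[] = { {64, false, false}, {1500, true, false}, {128, false, false} };
    unsigned fpga = 0, cpu = 0;
    for (unsigned i = 0; i < 3; i++) {
        if (fast_path(&pkts[i])) fpga++;   /* handled at line rate      */
        else                     cpu++;    /* offloaded to software IDS */
    }
    printf("fabric: %u packets, cpu: %u packets\n", fpga, cpu);
    return 0;
}
</code>
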
====CoRAM (Classic)====
Our investigation into FPGA architecture for computing began in 2009 with the question: how should data-intensive FPGA compute kernels view and interact with external memory data? In response, we developed the original CoRAM FPGA computing abstraction. The goal of the CoRAM abstraction is to present the application developer with (1) a virtualized appearance of the FPGA’s resources (i.e., reconfigurable logic, external memory interfaces, and on-chip SRAMs) to hide low-level, non-portable, platform-specific details, and (2) standardized, easy-to-use, high-level interfaces for controlling data movements between the memory interfaces and the in-fabric computation kernels. Besides simplifying application development, the virtualization and standardization of the CoRAM abstraction also make portable and scalable application development possible.
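
To give a flavor of the abstraction, here is a self-contained software sketch of a CoRAM-style control thread; the control actions (''coram_memcpy'', ''fifo_put'') are illustrative stand-ins modeled in plain C, not the exact CoRAM control-action API. The control thread expresses only data movement and synchronization; in the real abstraction the compute kernel is ordinary RTL reading the CoRAM through SRAM-like ports.

<code c>
/* Sketch of a CoRAM-style control thread (identifiers are illustrative,
 * not the exact CoRAM control-action names).  The control thread stages
 * blocks of external memory into an on-chip CoRAM buffer and tells the
 * compute kernel when each block is ready.  External memory, the CoRAM,
 * and the control actions are modeled in plain C to keep this runnable. */
#include <stdio.h>
#include <string.h>

#define BLOCK_WORDS 8
static int dram[4 * BLOCK_WORDS];     /* stand-in for external DRAM   */
static int coram[BLOCK_WORDS];        /* stand-in for one CoRAM block */

/* Hypothetical control actions, modeled in software. */
static void coram_memcpy(int *dst, const int *src, unsigned words) {
    memcpy(dst, src, words * sizeof(int));
}
static void fifo_put(unsigned token) {
    /* In the real abstraction this unblocks the RTL kernel; here the
     * "kernel" just sums the block to show the data has arrived.      */
    int sum = 0;
    for (unsigned i = 0; i < BLOCK_WORDS; i++) sum += coram[i];
    printf("kernel consumed block %u, sum = %d\n", token, sum);
}

/* The control thread itself: pure data orchestration, no computation. */
static void control_thread(unsigned nblocks) {
    for (unsigned b = 0; b < nblocks; b++) {
        coram_memcpy(coram, &dram[b * BLOCK_WORDS], BLOCK_WORDS);
        fifo_put(b);
    }
}

int main(void) {
    for (int i = 0; i < 4 * BLOCK_WORDS; i++) dram[i] = i;
    control_thread(4);
    return 0;
}
</code>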