fpga_architecture_for_computing (revisions 2019/09/28 03:08 and 2019/09/30 09:40)
====Service Oriented Memory Architecture====
Prevailing memory abstractions and infrastructures that support FPGA application development continue to rely on the classic notions of loading and storing to memory address locations. Why should we limit our thinking to such a low-level, explicit paradigm when developing computing applications on an FPGA? Just like in software development,
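To make the contrast with raw load/store concrete, here is a minimal software sketch of what a service-oriented memory interface might look like: a kernel asks a named memory //service// for data by key, and the memory system decides layout, placement, and movement behind the interface. All names here (`MemoryService`, `KeyValueService`) are illustrative assumptions, not the project's actual interface, and the map-backed implementation is only a toy stand-in for on-chip or off-chip memory.

```cpp
#include <algorithm>
#include <cstddef>
#include <cstdint>
#include <map>
#include <vector>

// Hypothetical abstraction: kernels talk to a memory *service* by key,
// never to raw addresses. The service owns layout and data movement.
class MemoryService {
public:
    virtual ~MemoryService() = default;
    virtual std::vector<uint8_t> read(uint64_t key, std::size_t len) = 0;
    virtual void write(uint64_t key, const std::vector<uint8_t>& data) = 0;
};

// Toy software model of one such service, backed by a map rather than
// by DRAM banks or BRAM. A real FPGA memory system would implement the
// same interface over physical resources.
class KeyValueService : public MemoryService {
    std::map<uint64_t, std::vector<uint8_t>> store_;

public:
    std::vector<uint8_t> read(uint64_t key, std::size_t len) override {
        auto it = store_.find(key);
        if (it == store_.end()) return {};  // unknown key: empty result
        const auto& v = it->second;
        std::size_t n = std::min(len, v.size());
        return std::vector<uint8_t>(v.begin(), v.begin() + n);
    }

    void write(uint64_t key, const std::vector<uint8_t>& data) override {
        store_[key] = data;
    }
};
```

Because the kernel only sees `read`/`write` on keys, the same kernel could be bound to very different memory backends (streaming buffers, caches, remote memory) without changing its logic.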
====Network Function Acceleration====
+ | |||
+ | We begin our investigation my studying FPGA acceleration of Intrusion Detection System (IDS). Today’s state of the art IDS are software-based and cannot cost- or power-efficiently keep-up with increasing network speed. | ||
+ | |||
...