Graphics processing units have been used in a number of compute-intensive tasks, from big data applications to, at one time, solving the proof-of-work puzzles that earn Bitcoin rewards. Originally created to render computer graphics, GPUs make useful general-purpose coprocessors for time-critical tasks.
Wenji Wu, a network researcher with the U.S. Department of Energy's Fermi National Accelerator Laboratory, has demonstrated that these chips can capture real-time data about network traffic.
Network monitoring requires reading every data packet as it crosses the network, and inspecting those packets effectively demands a great deal of parallel processing. Keeping up with traffic flowing through high-speed networks could very well be a job for GPU-based network monitors.
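Packet inspection parallelizes well because each packet can be examined independently of every other. As a rough illustration (not Wu's actual code, and with entirely hypothetical packet fields), the Python sketch below runs a pure per-packet check over a batch of headers; because the check shares no state between packets, the same map could just as easily hand each packet to its own GPU thread.

```python
# Illustrative sketch only: each packet is inspected independently,
# so the work parallelizes trivially. On a GPU, this would be one
# thread per packet; here we use CPU worker threads to show the pattern.
# The packet fields (src, dst, dport, size) are hypothetical.

from concurrent.futures import ThreadPoolExecutor

def inspect(packet):
    """Per-packet check with no shared state: flag traffic to port 22."""
    return packet["dport"] == 22

packets = [
    {"src": "10.0.0.1", "dst": "10.0.0.9", "dport": 80,  "size": 1500},
    {"src": "10.0.0.2", "dst": "10.0.0.9", "dport": 22,  "size": 64},
    {"src": "10.0.0.3", "dst": "10.0.0.9", "dport": 443, "size": 900},
]

# Because inspect() touches only its own packet, the map can be split
# across any number of workers without locks or coordination.
with ThreadPoolExecutor(max_workers=4) as pool:
    flags = list(pool.map(inspect, packets))

print(flags)  # [False, True, False]
```

That independence between packets is exactly what lets a GPU assign thousands of packets to thousands of threads at once.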
Current network monitoring appliances rely on standard x86 processors or custom ASICs (application-specific integrated circuits) for processing. These architectures, however, run into limitations when used to monitor large networks.
Insufficient compute power and low memory bandwidth make it hard for CPUs to inspect operational data in real time. ASICs address these problems, but their custom architectures make them expensive to program, and they do not deliver the parallelism needed to inspect traffic on high-speed networks.
High memory bandwidth, easy programmability, and multiple cores for parallel tasks: GPUs provide all three. At Fermilab, Wu has built a working GPU-based network monitor that shows off the parallel execution model of GPUs. The device uses an off-the-shelf NIC and an Nvidia M2070 GPU for network captures. More GPUs can be added at will, expanding the device's capacity to handle more traffic.
The GPU-based network monitor ran 17 times faster than a system based on a single-core CPU, and 3 times faster than one based on a 6-core CPU.
Adopting GPUs in commercial network appliances could raise those devices' line rates. Using GPU programming frameworks such as CUDA (Compute Unified Device Architecture) and OpenCL (Open Computing Language) could also cut development costs.