
Single Core Performance

All modern CPUs have multiple cores per processor, and this trend will not stop. The design has proven to work and allows computers to run faster and more efficiently. This article addresses how many cores are needed for normal operation.

Energy Trade-off

The compromise is between computational performance and energy consumption. An additional core increases overall performance but also consumes more power, and the power cost of each extra core grows with the clock speed at which it runs.

What applications use multiple cores?

In the particular scenario of a firewall distribution, there are a few applications that can produce high processor loads:

Network Throughput

The Linux kernel has to receive packets from the network interfaces and send packets out. Since the buffer of many network interface cards (NICs) is designed as a ring buffer, only one processor can read from or write to that buffer at a time. Some NICs can take advantage of multiple processors because they have multiple buffers, but this design usually exists to compensate for slow processors and slow bus transfer speeds.

  • Preference: Fewer but faster cores
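On Linux, the number of receive and transmit queues the kernel has set up for each interface can be read from sysfs. A minimal sketch, assuming the Linux-specific path /sys/class/net exists (interface names and queue counts will differ per system):

```python
import os

# Count the kernel's RX/TX queues per network interface via Linux sysfs.
# A single rx-0/tx-0 pair means one ring buffer, which only one core can
# service at a time; multiqueue NICs show several rx-*/tx-* entries.
def queue_counts(sysfs="/sys/class/net"):
    counts = {}
    if not os.path.isdir(sysfs):
        return counts  # not a Linux system
    for iface in os.listdir(sysfs):
        qdir = os.path.join(sysfs, iface, "queues")
        if os.path.isdir(qdir):
            queues = os.listdir(qdir)
            counts[iface] = (
                sum(q.startswith("rx-") for q in queues),
                sum(q.startswith("tx-") for q in queues),
            )
    return counts

for iface, (rx, tx) in queue_counts().items():
    print(f"{iface}: {rx} RX / {tx} TX queues")
```

An interface that reports only one RX and one TX queue gains nothing from additional cores for raw packet handling.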

Firewall Throughput

The Linux kernel has to evaluate every single packet against a set of firewall rules. These rules are evaluated one after the other and cannot be executed in parallel, so a processor with a faster clock speed is beneficial for firewall throughput.

  • Preference: Fewer but faster cores
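The first-match, strictly ordered evaluation described above can be sketched as a toy model. This is illustrative only, not the kernel's netfilter implementation; the packets and predicates are made up:

```python
# Toy model of a firewall chain: rules are checked strictly in order,
# so the cost of evaluating a packet grows with the number of rules it
# has to traverse before one matches. This work cannot be parallelised.
def evaluate(packet, rules, default="DROP"):
    for predicate, verdict in rules:
        if predicate(packet):
            return verdict  # first match wins
    return default

rules = [
    (lambda p: p["dport"] == 22 and p["src"].startswith("10."), "ACCEPT"),
    (lambda p: p["dport"] in (80, 443), "ACCEPT"),
    (lambda p: True, "REJECT"),  # catch-all
]

print(evaluate({"src": "10.0.0.5", "dport": 22}, rules))    # ACCEPT
print(evaluate({"src": "203.0.113.9", "dport": 25}, rules))  # REJECT
```

Because each packet walks the chain sequentially, a faster core shortens every walk, while an extra core cannot split a single walk.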

VPNs

One of the biggest sources of processor load is VPNs, or more precisely the cryptographic operations used to encrypt data and prove its integrity.

Encryption/Decryption

For VPN traffic, encryption and decryption create the highest load. AES-NI may help where it is available. In the majority of cases, encryption cannot be spread over multiple cores; this is a limitation of the design of the algorithms and of cipher modes such as CBC.
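Why CBC encryption cannot be parallelised can be shown with a toy block cipher (a single-byte XOR stand-in for AES, not real cryptography): each plaintext block is mixed with the previous ciphertext block before being encrypted, so block n cannot start until block n-1 is finished.

```python
def toy_encrypt_block(block, key):
    # Stand-in for AES: XOR every byte with a one-byte key. NOT secure.
    return bytes(b ^ key for b in block)

def cbc_encrypt(plaintext, key, iv):
    prev, out = iv, []
    for i in range(0, len(plaintext), len(iv)):
        block = plaintext[i:i + len(iv)]
        # The chaining step: block n depends on ciphertext block n-1,
        # so the blocks of one stream must be encrypted serially.
        mixed = bytes(a ^ b for a, b in zip(block, prev))
        prev = toy_encrypt_block(mixed, key)
        out.append(prev)
    return b"".join(out)

iv = b"\x00" * 4
c_a = cbc_encrypt(b"abcdefgh", 0x5A, iv)
c_b = cbc_encrypt(b"Xbcdefgh", 0x5A, iv)
print(c_a[4:] != c_b[4:])  # True: a change in block 0 ripples into block 1
```

CBC decryption, by contrast, can be parallelised, since all ciphertext blocks are already known when decryption starts.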

Integrity

The same goes for the second most expensive operation, which is checking integrity. Common algorithms such as SHA-1, SHA-256 and SHA-512 all share the property that they cannot, by design, be parallelised. On a slow processor, the integrity check can therefore become the bottleneck even when AES-NI accelerates the encryption, which slows down the throughput of the entire VPN.
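The sequential nature of these hash functions can be seen with Python's hashlib: SHA-256 keeps one running state, and each chunk is absorbed into the state left behind by the previous chunk, so the chunks of a single stream cannot be hashed on different cores.

```python
import hashlib

# SHA-256 (a Merkle-Damgard construction) threads one internal state
# through the whole input: chunk n needs the state after chunk n-1.
data = b"x" * 1_000_000

h = hashlib.sha256()
for i in range(0, len(data), 64 * 1024):
    h.update(data[i:i + 64 * 1024])  # strictly sequential updates

# Chunked hashing yields exactly the same digest as hashing in one go.
print(h.hexdigest() == hashlib.sha256(data).hexdigest())  # True
```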

When multiple VPN connections carry high throughput at the same time, the impact of a slow core is smaller, because the connections can be spread over several cores. This still does not help with coping with bandwidth peaks or with a single connection that is heavily used. Fewer but faster cores, on the other hand, never have a disadvantage.

  • Preference: Fewer but faster cores
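Independent VPN connections are independent data streams, so they can be processed on separate cores, unlike the blocks within one stream. A minimal sketch with four hypothetical tunnels (hashlib releases the GIL for large buffers, so threads scale here):

```python
import hashlib
from concurrent.futures import ThreadPoolExecutor

# Four made-up "tunnels", each an independent stream of traffic.
streams = [bytes([i]) * 500_000 for i in range(4)]

# Each stream is still hashed sequentially, but the streams themselves
# can run on different cores at the same time.
with ThreadPoolExecutor(max_workers=4) as pool:
    digests = list(pool.map(lambda s: hashlib.sha256(s).hexdigest(), streams))

print(len(set(digests)))  # 4: each connection produced its own digest
```

This is why many slow cores only help when many connections are busy simultaneously; a single busy tunnel is still limited by one core.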

Web Proxy

The web proxy handles HTTP requests in parallel; however, it does not always need multiple cores to run faster, and too many cores can actually be harmful to performance. The proxy usually waits for input from the network, and while one request is waiting, others can be processed. Copying request information back and forth between multiple cores costs time and keeps the internal bus of the processor busy. Proxy access control lists can only be evaluated by a single core per request, and the same applies to virus scanning.

If there are more requests than a single core can handle, a second core is needed.

  • Preference: Fewer but faster cores
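The waiting behaviour described above is why one core can serve many proxy requests at once. This sketch simulates ten requests on a single event loop (the 0.1 s sleep is a stand-in for waiting on upstream I/O; it is not proxy code):

```python
import asyncio
import time

# While one request waits on the network, the same core services others,
# so ten waits of 0.1s overlap instead of adding up to 1.0s.
async def handle(n):
    await asyncio.sleep(0.1)  # stand-in for waiting on upstream I/O
    return n

async def main():
    start = time.monotonic()
    results = await asyncio.gather(*(handle(i) for i in range(10)))
    return results, time.monotonic() - start

results, elapsed = asyncio.run(main())
print(f"10 simulated requests finished in {elapsed:.2f}s")
```

Only when the per-request CPU work (ACLs, virus scanning) exceeds what one core can handle does a second core start to pay off.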

Conclusion: Higher single-core performance is more desirable

For all the applications above, higher performance of each core is desirable. A larger number of slower cores results in slower execution and more power being consumed. Higher single-core performance also increases the usability of the system in general.

hardware/mythbusters/single-core-performance.txt · Last modified: 2018/11/18 05:42 by dnl