Single Core Performance

Multiple cores per processor are a trend that can no longer be stopped. The concept has proven to work and lets computers run faster and more efficiently. But how many cores are actually needed?

Energy Trade-off

The compromise is between computational performance and energy consumption. An additional core increases overall performance, but it also consumes additional power - it does not come for free. This trade-off gets worse the higher the clock speed of each core.
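As a rough rule of thumb (a general approximation, not an IPFire-specific figure), the dynamic power of a core scales with the switched capacitance C, the square of the supply voltage V and the clock frequency f - and higher clock frequencies usually require a higher voltage, which is why the power cost rises faster than linearly with clock speed:

  P_dyn ≈ C · V² · f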

What applications use multiple cores?

In the specific scenario of a firewall distribution, there are a few applications to consider that produce high processor load:

Network Throughput

The Linux kernel has to receive packets from the network interfaces and send packets out again. Since the buffer of many NICs is designed as a ring buffer, only one processor can write to or read from that buffer at the same time. Some NICs can take advantage of multiple processors because they have multiple buffers (queues), but this design usually exists to compensate for slow processors and slow bus transfer speeds.

  • Preference: Fewer but faster cores
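
To check how a NIC's interrupt lines - and therefore its queues - are distributed over the CPUs on a concrete system, /proc/interrupts can be inspected. Below is a minimal Python sketch; the interface name green0 is only an example and may differ on your system:

  # Minimal sketch: list the interrupt lines of one network interface and
  # show how many interrupts each CPU has handled for it.
  IFACE = "green0"   # example name, adjust to your system

  with open("/proc/interrupts") as f:
      cpus = f.readline().split()                # header: CPU0 CPU1 ...
      for line in f:
          if IFACE not in line:
              continue
          fields = line.split()
          irq = fields[0].rstrip(":")
          counts = fields[1:1 + len(cpus)]       # per-CPU interrupt counts
          print("IRQ " + irq + ": " +
                " ".join(c + "=" + n for c, n in zip(cpus, counts)))

A single interrupt line for the interface usually means a single queue, i.e. one CPU at a time services that NIC's traffic.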

Firewall Throughput

The Linux kernel has to evaluate every single packet against a set of firewall rules. These rules are evaluated one after the other and cannot be processed in parallel. Hence, a processor with a faster clock speed is beneficial for firewall throughput.

  • Preference: Fewer but faster cores
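
To illustrate why rule evaluation benefits from clock speed rather than from core count, here is a toy sketch in Python. It is of course not how netfilter is implemented, but the cost pattern is the same: every packet walks the chain from top to bottom on one core until a rule matches, so the per-packet latency depends on how fast that single core is.

  # Toy model of sequential firewall rule evaluation (not netfilter itself).
  # Each rule is a (match function, verdict) pair, checked strictly in order.
  rules = [
      (lambda p: p["proto"] == "tcp" and p["dport"] == 22, "ACCEPT"),
      (lambda p: p["proto"] == "udp" and p["dport"] == 53, "ACCEPT"),
      (lambda p: p["src"].startswith("10.0.1."),           "DROP"),
  ]

  def evaluate(packet, default="DROP"):
      # One core walks the chain top to bottom until a rule matches.
      for match, verdict in rules:
          if match(packet):
              return verdict
      return default

  print(evaluate({"proto": "tcp", "dport": 22, "src": "192.168.0.2"}))  # ACCEPT
  print(evaluate({"proto": "tcp", "dport": 80, "src": "10.0.1.5"}))     # DROP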

VPNs

One of the biggest sources of high load are VPNs - or more precisely, the cryptographic operations that are used to encrypt data and to prove its integrity.

Encryption/Decryption

Encryption and decryption put the highest load on the processor for VPN traffic. AES-NI can help here if it is available. In the majority of cases, however, the encryption of a single connection cannot be spread over multiple cores. This is a limitation of the design of the algorithms and of cipher modes like CBC.
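
The reason is the chaining in modes like CBC: every plaintext block is XORed with the previous ciphertext block before it is encrypted, so block n cannot be processed before block n-1 is finished. The following sketch builds CBC by hand on top of AES-ECB using the pyca/cryptography package, purely to make the serial dependency visible - never hand-roll cipher modes in production code:

  # Illustration only: hand-rolled CBC on top of AES-ECB to show the chaining.
  import os
  from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

  def xor(a, b):
      return bytes(x ^ y for x, y in zip(a, b))

  def cbc_encrypt(plaintext, key, iv):
      assert len(plaintext) % 16 == 0, "sketch assumes full 16-byte blocks"
      ecb = Cipher(algorithms.AES(key), modes.ECB()).encryptor()
      previous = iv
      ciphertext = b""
      for i in range(0, len(plaintext), 16):
          block = plaintext[i:i + 16]
          # Block n depends on ciphertext block n-1 -> inherently serial.
          previous = ecb.update(xor(block, previous))
          ciphertext += previous
      return ciphertext

  key, iv = os.urandom(32), os.urandom(16)     # AES-256 key and IV (example)
  print(cbc_encrypt(b"sixteen byte blk" * 4, key, iv).hex())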

Integrity

The same goes for the second most expensive operation, which is checking integrity. Common algorithms like SHA-1, SHA-256 and SHA-512 all have in common that they cannot (by design) be parallelised. A big disadvantage is that a slow processor may therefore deliver poor results even when AES-NI is available, which slows down the throughput of the entire VPN.
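
The reason is the internal chaining of these hash functions: every block of input is mixed into a state that depends on all previous blocks, so a single message can only be hashed on one core. A minimal illustration using Python's standard hashlib:

  # Hashing a message in chunks: each update() folds the chunk into a state
  # that depends on everything hashed before, so the chunks of one message
  # cannot be processed on several cores in parallel.
  import hashlib

  message = b"x" * (4 * 1024 * 1024)                # 4 MiB of example data

  h = hashlib.sha256()
  for offset in range(0, len(message), 64 * 1024):
      h.update(message[offset:offset + 64 * 1024])  # strictly sequential
  print(h.hexdigest())

  # Same digest as hashing everything in one go.
  assert h.hexdigest() == hashlib.sha256(message).hexdigest()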

When multiple VPN connections carry high throughput at the same time, the impact of a slow core is smaller, because the connections can be distributed over several cores. The system will, however, still not be able to cope with bandwidth peaks or with a single connection that is used heavily. Fewer but faster cores, on the other hand, never have a disadvantage.

  • Preference: Fewer but faster cores

Web Proxy

The web proxy handles HTTP requests in parallel. That, however, does not mean that multiple cores are needed to run it faster - too many cores can actually be harmful. The proxy spends most of its time waiting for input from the network, and while one request is waiting, other requests can be processed on the same core. Copying request data back and forth between multiple cores costs time and keeps the internal bus of the processor busy. Access control lists can only be evaluated by a single core per request, and the same is true for virus scanning.

If there are more requests than a single core can handle, a second core is needed.

  • Preference: Fewer but faster cores
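
A sketch of why one fast core goes a long way for the proxy: while one request waits for the upstream web server, the same core can already work on other requests. The example below is not how the actual proxy is implemented, just a minimal Python asyncio illustration in which the upstream delay is simulated; 100 concurrent requests finish in roughly the time of one.

  # Sketch: one core, one event loop, many concurrent "requests".
  # The upstream delay is simulated with asyncio.sleep(); in a real proxy it
  # is the time spent waiting for the web server to answer.
  import asyncio
  import time

  async def handle_request(n):
      await asyncio.sleep(0.5)       # waiting on the network, the CPU is free
      return "request %d done" % n

  async def main():
      start = time.perf_counter()
      results = await asyncio.gather(*(handle_request(n) for n in range(100)))
      elapsed = time.perf_counter() - start
      print("%d requests served in %.2f s on one core" % (len(results), elapsed))

  asyncio.run(main())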

Conclusion: A higher single-core performance is more desirable

For all the applications above, a higher performance of each single core is desirable. Many slower cores will still allow the applications to run, but they will run slower and consume more power. Higher single-core performance also improves usability, because the system responds faster, forwards packets faster and usually achieves higher throughput and better performance in general.
