[FoRK] [info] (highscalability.com) The Secret to 10 Million Concurrent Connections - The Kernel is the Problem, Not the Solution

J. Andrew Rogers andrew at jarbox.org
Mon May 20 20:25:27 PDT 2013


On May 20, 2013, at 9:54 AM, Tomasz Rola <rtomek at ceti.pl> wrote:
> 
> With about 1000 clock cycles per packet for packet processing, the number 
> of apps is a bit limited.


The way it works is that the core handling these packets classifies them by expected processing latency, whether that comes from raw clock cycles or from a potential wait state. Packets whose expected cost exceeds that threshold are shunted to other cores.
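
In plain C, a rough sketch of that split (the packet struct, the ring, and the cost heuristic are all hypothetical, my own illustration rather than anything from the article):

/* Sketch only: "pkt", the SPSC ring, and the cost heuristic are invented
 * for illustration. The fast-path core keeps cheap, non-blocking packets;
 * anything that may exceed the cycle budget or hit a wait state is shunted
 * to a worker core through a single-producer/single-consumer ring. */

#include <stdbool.h>
#include <stdint.h>

#define CYCLE_BUDGET 1000          /* rough per-packet budget on the fast path */
#define RING_SIZE    4096          /* must be a power of two */

struct pkt {
    uint8_t  *data;
    uint32_t  len;
    uint16_t  l4_proto;            /* parsed earlier in the pipeline */
};

/* Ring from the fast-path core to one worker core. A real implementation
 * would use proper atomics and memory barriers; volatile is sketch-grade. */
struct spsc_ring {
    struct pkt *slots[RING_SIZE];
    volatile uint32_t head;        /* written by producer */
    volatile uint32_t tail;        /* written by consumer */
};

static bool ring_push(struct spsc_ring *r, struct pkt *p)
{
    uint32_t next = (r->head + 1) & (RING_SIZE - 1);
    if (next == r->tail)
        return false;              /* full: caller drops rather than stalls */
    r->slots[r->head] = p;
    r->head = next;
    return true;
}

/* Hypothetical heuristic: estimated cycles plus "might this block?" */
static bool needs_slow_path(const struct pkt *p, uint32_t est_cycles)
{
    bool may_wait = (p->l4_proto != 17);   /* e.g. anything not simple UDP */
    return may_wait || est_cycles > CYCLE_BUDGET;
}

/* Called on the core that owns the NIC RX queue. */
static void classify_and_dispatch(struct pkt *p, uint32_t est_cycles,
                                  struct spsc_ring *to_worker)
{
    if (needs_slow_path(p, est_cycles)) {
        /* if the worker is backed up, drop rather than stall the fast path */
        (void)ring_push(to_worker, p);
        return;
    }
    /* cheap, bounded-latency work stays on this core */
    /* process_inline(p); */
}

The important property is that the fast-path core never blocks: anything that could stall is handed off, and if the worker falls behind, that packet is dropped rather than the whole RX queue.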


> There are some other bottlenecks, like width of 
> one's internet connection, network infrastructure, perhaps memory speed 
> too.


The practical ceiling given current hardware is 10 GbE, and a highly optimized processing pipeline can handle that packet rate fairly easily. Network bandwidth somewhere in the broader system is the primary bottleneck in most applications, provided your code is properly designed and you are not doing something silly like parsing XML.
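
For scale, the back-of-the-envelope arithmetic (the 3 GHz clock and the single-core, minimum-size-frame framing are assumptions of mine):

/* Worst-case 10 GbE packet rate and per-packet cycle budget. */
#include <stdio.h>

int main(void)
{
    double link_bps   = 10e9;              /* 10 GbE */
    double wire_bytes = 64 + 8 + 12;       /* min frame + preamble + inter-frame gap */
    double pps        = link_bps / (wire_bytes * 8);   /* ~14.88 Mpps worst case */
    double cpu_hz     = 3e9;               /* assumed 3 GHz core */

    printf("worst-case packets/s: %.2f M\n", pps / 1e6);
    printf("cycle budget per packet on one core: %.0f\n", cpu_hz / pps);
    return 0;
}

Real traffic is rarely all minimum-size frames, and RSS spreads the load across multiple RX queues and cores, so the practical per-packet budget is usually a good deal more generous than those ~200 cycles.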


> To me, the question that hit me was, what kind of stuff I could do if I 
> could handle 10MC? Let's put technical difficulties aside for a while.


You can run millions of connections per Linux server today; I know people who do. The connections are low bandwidth; think servers handling connections from millions of network-connected devices that do not do very much most of the time. Setting those connections up and tearing them down every time would be more involved, but that can also be worked around.
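
A minimal sketch of that mostly-idle case, using nothing more exotic than epoll (the port, fd limit, and buffer size are arbitrary choices of mine, and error handling is mostly omitted):

/* Server holding a very large number of mostly-idle TCP connections. */
#include <sys/epoll.h>
#include <sys/resource.h>
#include <sys/socket.h>
#include <netinet/in.h>
#include <fcntl.h>
#include <unistd.h>

#define MAX_EVENTS 1024

static void set_nonblocking(int fd)
{
    fcntl(fd, F_SETFL, fcntl(fd, F_GETFL, 0) | O_NONBLOCK);
}

int main(void)
{
    /* Raise the per-process fd limit; millions of sockets need it
     * (system-wide limits such as fs.file-max must allow it too). */
    struct rlimit rl = { .rlim_cur = 2000000, .rlim_max = 2000000 };
    setrlimit(RLIMIT_NOFILE, &rl);

    int listener = socket(AF_INET, SOCK_STREAM, 0);
    int one = 1;
    setsockopt(listener, SOL_SOCKET, SO_REUSEADDR, &one, sizeof one);

    struct sockaddr_in addr = { 0 };
    addr.sin_family = AF_INET;
    addr.sin_addr.s_addr = htonl(INADDR_ANY);
    addr.sin_port = htons(9000);             /* arbitrary port */
    bind(listener, (struct sockaddr *)&addr, sizeof addr);
    listen(listener, SOMAXCONN);
    set_nonblocking(listener);

    int epfd = epoll_create1(0);
    struct epoll_event ev = { .events = EPOLLIN, .data.fd = listener };
    epoll_ctl(epfd, EPOLL_CTL_ADD, listener, &ev);

    struct epoll_event events[MAX_EVENTS];
    char buf[4096];

    for (;;) {
        /* Only connections with activity show up here; the millions of
         * idle ones cost kernel memory, not CPU. */
        int n = epoll_wait(epfd, events, MAX_EVENTS, -1);
        for (int i = 0; i < n; i++) {
            int fd = events[i].data.fd;
            if (fd == listener) {
                int conn;
                while ((conn = accept(listener, NULL, NULL)) >= 0) {
                    set_nonblocking(conn);
                    struct epoll_event cev = { .events = EPOLLIN,
                                               .data.fd = conn };
                    epoll_ctl(epfd, EPOLL_CTL_ADD, conn, &cev);
                }
            } else {
                ssize_t r = read(fd, buf, sizeof buf);
                if (r <= 0)
                    close(fd);               /* close also removes it from epoll */
                /* else handle_message(fd, buf, r);  device check-in, etc. */
            }
        }
    }
}

The point is that idle connections consume kernel memory rather than CPU, so the wake-up rate, not the raw connection count, is what the server actually has to keep up with.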




