
Slightly More Complex HTTP Server

Date: 2023-07-26
Tags: basic

My current HTTP.SERVER is a single-threaded server, so a slow request slows down every user of the system. I thought of a few different ways around this issue, but ultimately I decided on a pseudo-forking method. I really like how it worked out.

When a request comes in I get it via the acceptConnection function, which gives me a socket handle for the client I need to respond to. At that point I close my server socket and phantom off a new HTTP.SERVER. I can then handle the current request while the new HTTP.SERVER runtime handles the next call.
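
For anyone who wants the shape of that, here's a minimal sketch of the same pseudo-fork pattern in Python rather than in my BASIC; the port and the canned response are placeholders, not the real ones.

import socket
import subprocess
import sys

HOST, PORT = "0.0.0.0", 7122   # placeholder bind address and port

def serve_once():
    # Open the listening socket, like the server's normal startup.
    listener = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    listener.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    listener.bind((HOST, PORT))
    listener.listen(1)

    client, _ = listener.accept()   # the acceptConnection step
    listener.close()                # free the port...

    # ...so a fresh copy of this server can take the next request
    # (the "phantom off a new HTTP.SERVER" step).
    subprocess.Popen([sys.executable, __file__])

    # Now handle the current request at our leisure; slow work here
    # no longer blocks other clients, they hit the new process.
    client.recv(4096)
    client.sendall(b"HTTP/1.1 200 OK\r\nContent-Length: 2\r\n\r\nok")
    client.close()

if __name__ == "__main__":
    serve_once()

Each process serves exactly one request and replaces itself first, which is also what creates the gap described next.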

This is a very poor way of doing forking because for a brief moment the application is for all intents and purposes down.

Benchmarking with ab

I used ab to do some testing, and that's where I realized that the spin-up time of getting the socket back open was going to be a problem. I had hoped reopening the socket would be near instant and unnoticeable, but ab caught it very quickly.

ab is the Apache benchmark tool; it lets you quickly load-test a server.

ab -v2 -n 100 -c 10 http://192.168.7.31:8122/TEST.ROUTE
...
Time taken for tests:   0.048 seconds
Complete requests:      100
Failed requests:        98
...

This means that ab sends 100 requests, 10 at a time. 98 of them failed: ab fires requests faster than the replacement HTTP.SERVER can get the socket back open, so most of them land in the window where nothing is listening.

Proxy Servers

I got the brilliant idea to stick a proxy in front of my HTTP.SERVER. That way the proxy server can hold incoming requests while my server closes its socket and starts a new one.

I tried nginx and quickly realized that it just doesn't do what I want here.

Luckily! Caddy does!

:8122 {
   reverse_proxy localhost:7122 {
      lb_try_duration 30s
      lb_try_interval 0.01s
   }
}

lb_try_duration means that Caddy will keep the client's connection open for up to 30 seconds while it tries to reach the application server.

lb_try_interval is how long Caddy waits between retries; at 0.01s that works out to as many as 3,000 attempts over the 30-second window.

This is exactly what I was looking for: the proxy server stands in front of my application server and makes sure the application server is up before passing things along.

This adds an extra dependency for the cases where requests come in very close together, but it's a relatively light one. I would love to have BASIC replace the proxy server and connect directly to the internet, but that doesn't seem to be in the cards for now.

I think the ultimate solution will be getting exposure to something like select, poll/epoll, or real threading primitives. Until then, I'm pretty happy with how this turned out.
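
For a sense of what that would buy, here's a rough sketch using Python's selectors module (a thin wrapper over select/epoll): one process multiplexes the listener and all of its clients, so there's no respawning at all. The port and response are placeholders again.

import selectors
import socket

sel = selectors.DefaultSelector()

def accept(listener):
    # A new client connected; watch its socket for the request.
    client, _ = listener.accept()
    client.setblocking(False)
    sel.register(client, selectors.EVENT_READ, handle)

def handle(client):
    # The client's request arrived; answer and clean up.
    if client.recv(4096):
        client.sendall(b"HTTP/1.1 200 OK\r\nContent-Length: 2\r\n\r\nok")
    sel.unregister(client)
    client.close()

listener = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
listener.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
listener.bind(("0.0.0.0", 7122))   # placeholder port
listener.listen()
listener.setblocking(False)
sel.register(listener, selectors.EVENT_READ, accept)

# One process, one loop, many sockets.
while True:
    for key, _ in sel.select():
        key.data(key.fileobj)   # dispatch to accept() or handle()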

There are a few optimizations one could make, like running two HTTP.SERVER instances and having the proxy server load balance between them (sketched below). I don't think it's worth doing right now.
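
If I ever do, it should be a small Caddyfile change. Something like this, assuming a second instance listening on 7123 (a port I'm making up here):

:8122 {
   reverse_proxy localhost:7122 localhost:7123 {
      lb_policy round_robin
      lb_try_duration 30s
      lb_try_interval 0.01s
   }
}

round_robin alternates requests between the two upstreams, and the same lb_try settings cover whichever instance happens to be mid-restart.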

I think there are some deeper things to look at before I get to dealing with many requests at the same time.

ab -v2 -n 100 -c 10 http://192.168.7.31:8122/TEST.ROUTE
Time taken for tests:   4.800 seconds
Complete requests:      100
Failed requests:        0

This takes about 5 seconds, so I'm not going to be winning any races, but the fact that all 100 requests finished is pretty good in my eyes :).