
Writing a Blogging Engine

Title:
Date: 2023-08-26
Tags:  bloggg

There is nothing more self-indulgent than writing about the tech that you use to blog with and why you chose it. No matter the answer, it is of value to basically no one and no one actually cares.

I will now tell you, dear reader, how my blog is being served. This is going to be pure navel gazing :)

I could render my blog ahead of time using my Pick programs, but I'm purposefully making it a dynamic site. I want it to be dynamic because I want the cost of generating my site felt on each page view. It can only handle a few people at a time, so if my blog ever blew up, or if someone got bored, or if I accidentally wrote an infinite loop somewhere, then my blog would shut down and I would have to deal with it.

Hosting on a VPS

This blog used to be hosted on a MacBook.

Now it is being hosted on a [VPS in the cloud].

ScarletDME

ScarletDME is a fork of the open source version of OpenQM 2.6. I created my own fork that I am maintaining and I've added a few different things to it. The biggest difference in my fork is that I have switched the build system to Zig, and the plan is to build out new features in Zig. To get my blog to work, I had to add the string math functions SADD, SSUB, SMUL and SDIV. I did this first in C and then again in Zig. I also added the secure socket functions and wrote them in Zig.
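To give a sense of what those functions do, here is a minimal usage sketch (generic example values, not code from my blog): they operate on numbers held as strings, so they aren't limited by the native integer size.

    * String math on arbitrarily long numbers held as strings
    A = "123456789012345678901234567890"
    B = "987654321098765432109876543210"
    CRT SADD(A, B)  ;* string sum, no integer overflow
    CRT SMUL(A, B)  ;* string product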

These two things allow my blog to work the way it works. The big integer stuff is required for my hashmaps, and the secure sockets are so that I can do the SSL termination in my own web server rather than in nginx. I'm also learning how the Pick system works much deeper in the stack than just BASIC.

SERAPHIM

My blog uses a custom http server that I call SERAPHIM. It is very much something that exists to serve my blog and I probably won't use it for general things. It doesn't handle HEAD requests, for example, though that will probably change. I do want to make it more production ready as it would be handy for other things. The biggest issue is that I use PHANTOMs to get some semblance of threading, but it is quite poor and requires the use of caddy.

I open a socket that accepts connections and the moment I get a connection, I close the socket and spin off a new SERAPHIM instance while the original program stays active and completes the request. This means that a request comes in, a new process is spun off to listen for http requests, and the main program finishes the request. It's definitely a weird flow that exists because of how BASIC works. I could probably get around it directly in ScarletDME by writing a C binding so that I could get access to threading primitives. However, then my server would definitely not be portable. Currently I can use it in both ScarletDME and UniVerse without issue.
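A rough sketch of that hand-off is below. The ACCEPT.CONNECTION, CLOSE.LISTENER and SERVE.REQUEST subroutine names are hypothetical stand-ins for whatever the real server uses; the point is the shape of the flow: accept, close the listener, spawn a replacement phantom, then finish the request in the current process.

    * Sketch of the accept/hand-off flow (hypothetical subroutine names)
    CALL ACCEPT.CONNECTION(LISTENER, CONN)  ;* block until a client connects
    CALL CLOSE.LISTENER(LISTENER)           ;* free the port for the next instance
    EXECUTE 'PHANTOM RUN BP SERAPHIM'       ;* spawn a fresh listener in the background
    CALL SERVE.REQUEST(CONN)                ;* this process finishes the current request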

The problem with using phantoms is that the new phantom isn't quick enough to start listening on the socket that I just closed. There is a gap, and so it requires a load balancer in front of my http server that will retry requests when a request fails. This is where caddy comes in: caddy has built-in support for retrying connections and nginx does not. This makes caddy a hard dependency for my http server to work well enough.
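For reference, the retry behaviour I rely on is just a couple of reverse_proxy options in the Caddyfile. Something along these lines (the hostname and port are placeholders, not my actual config):

    blog.example.com {
        reverse_proxy 127.0.0.1:8080 {
            lb_try_duration 5s
            lb_try_interval 250ms
        }
    }

lb_try_duration is how long caddy will keep retrying to reach an available upstream, and lb_try_interval is how long it waits between attempts, which is what papers over the gap while the new phantom starts listening.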

The goal for SERAPHIM is to make it more robust and to speed it up. I definitely didn't write it for speed, and for now it works fine but it can't scale up. However it works, as this blog is being served from it.

Templating Language

I wrote a custom templating language that lets you generate HTML and run BASIC code dynamically. This involved writing a parser and an evaluator. The routine was quite slow originally, but after spending a few days adding a ton of caching I got the speed down to the point where it was acceptable. The catch is that I had gotten it to an acceptable level in UniVerse; in ScarletDME it is still quite a bit slower than I would like. I don't know where I can gain more speed. It will probably have to be a rewrite from scratch on ScarletDME instead of starting from UniVerse.

I could also speed it up by giving it more CPU but where's the fun in that? My templating language's core theme is to use BASIC code to generate HTML. I really just want to be able to use BASIC data structures and BASIC control flow to manipulate HTML, and I think I have a really good version of it now. I'm happy with how it works and the core of it is easily modifiable. I added the FOR OF loop to speed things up and that has been the biggest addition. I have the ability to use FOR loops, IF conditionals, dynamic arrays and MATREADs, all from a language that is interpreted.
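The flavour of it, stripped of my actual template syntax, is just plain BASIC building HTML out of dynamic arrays. A minimal sketch of the idea (not my real template code):

    * Build an HTML list from a dynamic array with a FOR loop
    POSTS = 'First Post' : @FM : 'Second Post' : @FM : 'Third Post'
    HTML = '<ul>'
    FOR I = 1 TO DCOUNT(POSTS, @FM)
       HTML = HTML : '<li>' : POSTS<I> : '</li>'
    NEXT I
    HTML = HTML : '</ul>'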

There is also the kernel of an idea of stripping away my templating logic and simply making an interpreted version of BASIC, but that is less useful for me so I haven't chased after it yet. However it is an idea that has been kicking around in my head. I would love to be able to drop down into a BASIC REPL just to try some stuff.

RENDER.MARKDOWN

I wrote my templating language to do general templating stuff, but I also wrote a markdown rendering program that takes my markdown blog posts and generates HTML. I have been lazy about styling, so for the most part getting my markdown to compile to good HTML is enough to get everything to look right.
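A trivial sketch of the shape of that kind of renderer, just to show the idea (my real RENDER.MARKDOWN handles far more than this; MD is assumed to hold the post as a dynamic array of lines):

    * Minimal line-by-line markdown-to-HTML pass
    HTML = ''
    FOR I = 1 TO DCOUNT(MD, @FM)
       LINE = MD<I>
       BEGIN CASE
          CASE LINE[1, 2] = '# '
             HTML<-1> = '<h1>' : LINE[3, LEN(LINE)] : '</h1>'
          CASE LINE = ''
             HTML<-1> = ''
          CASE 1
             HTML<-1> = '<p>' : LINE : '</p>'
       END CASE
    NEXT I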

I tried to do a form of test-driven development but I hated it. On some level, chasing test completions was so boring that I couldn't enjoy the programming part. It may have made things better but I wasn't having fun. I enjoyed it far more when I gave up and wrote a rendering program on the fly. It is much worse, but I also finished it and it's working well enough, as it is rendering this post perfectly fine. I hope [as of 27 AUG 2023].

I'm not super happy with it as it still takes quite a bit of time to generate the entire blog, but the page speed is fast enough that I am fine with it for now. I would like each page to generate in 1ms or less so that generating a thousand pages only takes a second. That is the real goal of my rendering routine.

DOWNLOAD-DATA

I also wrote a program to deploy my blog. I used to generate my blog in one shot, then scp it over and push it to GitHub. This worked but it took too long. Now I have a deploy at 350ms, which is good enough, though I do want to speed it up. There is nothing like being able to see changes almost instantly.

The biggest thing I learned here was not to use Linux directories and expect speed. All the Picks seem to have slowdowns there, probably because they need to hit the disk for each item versus having it all in a hashed file that is loaded fully into memory.

I copy every item in the account and write it out to a giant file which is keyed on file name and item id. This giant file is then pushed to my blog with scp, and then I run an import routine that reads the entire file and puts the items back into the right files.
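The export side is conceptually just nested reads and writes. A rough sketch for a single file, with made-up names (EXPORT.DATA stands in for the giant file; the real program loops over every file in the account):

    * Export every item from one Pick file into a single export file,
    * keyed on filename*itemid
    OPEN 'BP' TO F.BP ELSE STOP 'cannot open BP'
    OPEN 'EXPORT.DATA' TO F.EXPORT ELSE STOP 'cannot open EXPORT.DATA'
    SELECT F.BP
    LOOP
       READNEXT ID ELSE EXIT
       READ ITEM FROM F.BP, ID THEN
          WRITE ITEM ON F.EXPORT, 'BP*' : ID
       END
    REPEAT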

This was a simple script but it took some futzing around to get it to work quickly. I have my BASIC programs in a Linux directory, but the speed issues were big enough even at just 20 programs that it was worth making the BP folder into a Pick file. This sped things up drastically.

Conclusion

I did this because I can and it was fun.