Benchmarking Reads in UniVerse - Take 2 - Going Crazy


Somehow I completely blanked on the fact that I already wrote the readall function. I wrote a whole blog post about benchmarking reads with it, but I thought I never finished it. Now that I look at the code, everything’s done and it works… Fuck. I even wrote about it.

I have 0 memory of finishing it up, which is a worrying sign.

Maybe because the results were so lacking, I just blocked it out.


❯ time node readall.js

Executed in    9.52 secs   fish           external
usr time   15.62 millis    0.00 millis   15.62 millis
sys time   93.75 millis   15.62 millis   78.12 millis

9.52 seconds for the read all version of the script.

9.40 seconds for the looping version.

Well those aren’t the results I was looking for.

Bizarre. Maybe the buffer resizing is expensive. Increasing the buffer size seemed to help, but the two approaches are still comparable. I’m probably screwing something up or misunderstanding something; crossing the js-to-c boundary for every record, versus staying in c to read all the records, should make a difference.

Well! Shit! It’s faster to get the ids through a subroutine call by a factor of 10.
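For context, here’s a minimal sketch of the kind of id-listing subroutine I mean, in UniVerse BASIC. The file name and argument names are made up, but the shape is the point: the select and the loop over ids all happen inside UniVerse, and the whole list comes back to the client in one subroutine call instead of one round trip per record.

```
SUBROUTINE GETIDS(FILENAME, IDS)
* Collect every record id in FILENAME into the dynamic array IDS,
* one id per attribute mark, entirely inside UniVerse.
  IDS = ''
  OPEN FILENAME TO F THEN
    SELECT F
    LOOP
      READNEXT ID ELSE EXIT
      IDS<-1> = ID
    REPEAT
  END
  RETURN
END
```

One caveat: appending with `IDS<-1>` rescans the dynamic array on each pass, so for very large files it’s cheaper to concatenate `ID : @AM` onto a string and trim the trailing mark at the end.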

I don’t think it’s worth doing reads at all outside of UniVerse, which is really surprising.

These tests ran against a smallish set of data over the network, so there’s a huge cost baked in there, but I think the results are illuminating. Using the intercall library for things like listing records and handling many records for searching and paging is a bad idea. It’s fine for one-offs, but for bulk data it’s so costly that writing a subroutine is much better.