
A Web App in Rust

Date: 2020-08-25

Chapter 1 - Getting Started

Hello, World! Let's build a Hacker News clone in Rust and see what that entails. I've broken the tutorial into chapters based on my own ideas of how to structure a web application, and also based on how I want to learn.

In a lot of places you'll see things done naively or even wrongly because it makes more intuitive sense. Things like not hashing passwords and not handling errors are deliberate: I think it makes it easier to see the entire forest instead of getting lost in the weeds. Once we have the gist of the application working, we'll circle back and fix our mistakes.

Caveat - I'm very much a beginner in Rust, so take everything here with a heaping of salt!

Installing Rust

The first step may be the hardest! Install and get Rust working. You can find more instructions on the Rust site.

https://www.rust-lang.org/tools/install

It is usually straightforward, and I know that on Linux the install script works smoothly. I'd be happy to list any bugs others run into while installing Rust. I ran into some issues using Windows Subsystem for Linux, but unfortunately I don't remember them or their fixes.

Once installed, you can verify it with the following:

> rustc --version
rustc 1.47.0 (18bf6b4f0 2020-10-07)

Easy peasy! (I hope the installation went smoothly!)

IDE

I wholeheartedly suggest using an IDE, especially on Windows Subsystem for Linux. Rust takes a long time to compile on WSL, so it's much better to have the IDE highlight errors before you compile.

I use Vim and YouCompleteMe with the Rust language server installed. This works pretty well and saves a lot of the headache of hunting for dumb errors like missing colons or misplaced periods.

Starting our Application

The next step is to start our application. Here we can use cargo, Rust's package manager, which gets installed alongside Rust.

> cargo new hacker-clone
     Created binary (application) hacker-clone package

Now to make sure it compiles and runs.

> cd hacker-clone/
hacker-clone> cargo run
   Compiling hacker-clone v0.1.0 (hacker-clone)
    Finished dev [unoptimized + debuginfo] target(s) in 0.96s
     Running target/debug/hacker-clone
Hello, world!

Done! We haven't really done anything yet... but we did confirm that we can finally start doing something.

Starting with Actix

Now with Rust set up and working, we can start working on the web framework. The framework we'll be using is actix-web.

The first step is to include it in our Cargo.toml file.

./Cargo.toml

[dependencies]
actix-web = "3"

Now, finally, finally, finally we can start editing some code.

Open up src/main.rs and add the following:

./src/main.rs

use actix_web::{HttpServer, App, web, HttpResponse};

#[actix_web::main]
async fn main() -> std::io::Result<()> {
    HttpServer::new(|| {
        App::new()
            .route("/", web::to(|| {
                HttpResponse::Ok().body("Hello, World!")
            }))
    })
    .bind("127.0.0.1:8000")?
    .run()
    .await
}

We include the parts of actix_web we need, then build our server with HttpServer::new(), bind it to an address, and start it with .run().await.

We have set up our server to respond on the route "/" with an HTTP 200 OK response whose body is "Hello, World!".

You can see more options here: https://docs.rs/actix-web/0.4.0/actix_web/struct.HttpResponse.html

Currently the route uses an anonymous function, but we could also name this function and pass it into the route. Instead of using .route(), we could also use macros and services to register our endpoints.

You can see examples of this here: https://actix.rs/docs/getting-started/

We also bound our server to the localhost IP address, 127.0.0.1, on port 8000. This means that our Rust web application will respond to anything that comes in on port 8000 on localhost.
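Under the hood, binding is asking the operating system for a socket listening on that address; actix-web does this (plus much more) for us. A minimal std-only sketch of just the binding step:

```rust
// A std-only sketch of what "bind" means: ask the OS for a listening
// socket on an address. actix-web's .bind("127.0.0.1:8000") does the
// same thing with a fixed port, then accepts and handles connections.
use std::net::TcpListener;

fn main() -> std::io::Result<()> {
    // Port 0 asks the OS for any free port, so this sketch runs anywhere.
    let listener = TcpListener::bind("127.0.0.1:0")?;
    let addr = listener.local_addr()?;
    assert_eq!(addr.ip().to_string(), "127.0.0.1");
    println!("listening on {}", addr);
    Ok(())
}
```

If another process already owns the port, bind fails, which is why actix-web's .bind() returns a Result that we propagate with ?.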

Let's run it!

hacker-clone> cargo run
   Compiling hacker-clone v0.1.0 (/home/nivethan/bp/hacker-clone)
    Finished dev [unoptimized + debuginfo] target(s) in 13.86s
     Running target/debug/hacker-clone

Now navigate to the below address from your browser:

http://127.0.0.1:8000/

Voila! You should see our "Hello, World!" message. We have the barest hints of a website actually working. Don't blink, pretty soon we'll have an actual application.

Now to the next chapter!

Chapter 2 - Templates

Welcome back! At this point we have a very bare web application written in Rust. All it does so far is respond to one request with some plain text. In this chapter we will add support for templating.

But first, let's make our lives easier: instead of having to run cargo run after each modification, we'll have our application automatically recompile and rerun. To do this we will install cargo-watch.

> cargo install cargo-watch

Then, to run it:

> cd hacker-clone/
hacker-clone> cargo watch -x run

Now we can make modifications to our application without having to kill our server each time.

Tera

The templating engine we will be using is Tera.

You can find more information here: https://tera.netlify.app/docs/

This engine uses syntax in the vein of Jinja2 and is for the most part intuitive.

The first step to using Tera is to add it to our dependencies in Cargo.toml.

./Cargo.toml

[dependencies]
actix-web = "3"
tera = "1.5.0"

Next we will create a folder to hold our templates.

> mkdir templates

We now have a folder called templates sitting next to our src folder, which contains our Rust files.

Now we'll create a very basic index page just to get some data on the screen. Once we do that we'll make it a little bit more complex by adding some loops and conditionals.

For now though, the below will be fine:

templates/index.html

<!DOCTYPE html>
<html lang="en">
    <head>
        <meta charset="utf-8">
        <title>{{title}}</title>
    </head>
    <body>
        Hello, {{name}}!
    </body>
</html>

Everything in the double curly brackets gets processed by the Tera templating engine, and the real values are substituted in.
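Conceptually, what the engine does with those placeholders is substitution. A toy sketch of the idea (this is not how Tera is implemented; the real engine also handles spacing, escaping, loops, and conditionals):

```rust
// A toy illustration of what a templating engine does conceptually:
// replace each {{key}} placeholder with its value.
use std::collections::HashMap;

fn render_naive(template: &str, data: &HashMap<&str, &str>) -> String {
    let mut out = template.to_string();
    for (key, value) in data {
        // Build the literal "{{key}}" marker and swap in the value.
        out = out.replace(&format!("{{{{{}}}}}", key), value);
    }
    out
}

fn main() {
    let mut data = HashMap::new();
    data.insert("title", "Hacker Clone");
    data.insert("name", "Nivethan");
    let html = render_naive("<title>{{title}}</title> Hello, {{name}}!", &data);
    assert_eq!(html, "<title>Hacker Clone</title> Hello, Nivethan!");
    println!("{}", html);
}
```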

Now we will move back to Rust and get our web application actually using this template.

src/main.rs

use actix_web::{HttpServer, App, web, HttpResponse, Responder};
use tera::{Tera, Context};

async fn index(tera: web::Data<Tera>) -> impl Responder {
    let mut data = Context::new();
    data.insert("title", "Hacker Clone");
    data.insert("name","Nivethan");

    let rendered = tera.render("index.html", &data).unwrap();
    HttpResponse::Ok().body(rendered)
}

#[actix_web::main]
async fn main() -> std::io::Result<()> {
    HttpServer::new(|| {
        let tera = Tera::new("templates/**/*").unwrap();
        App::new()
            .data(tera)
            .route("/", web::get().to(index))
    })
    .bind("127.0.0.1:8000")?
    .run()
    .await
}

The first thing to notice is that we replaced the anonymous function in the index route with a named function, index. Since this new function returns a response, Responder from actix also gets added to our includes.

The next thing to see is that we set up Tera. We create a new instance of Tera with a glob path to the template directory; the ** allows for nested folders within templates. This gives us a Tera object we can use to render templates.

We call unwrap because if Tera fails to load for whatever reason, our entire application is moot, so panicking is the best bet. Had we wanted to handle the error gracefully, we could use unwrap_or_else or a match construct; in our case plain unwrap is fine. Our index function should really fail gracefully too, but we'll deal with that later on. For now unwrap is the easiest and quickest way to know when something goes wrong.

Next we register the tera object into our App with the use of the .data method. This way any functions we run in our App will always have access to tera.

In our index function we can access tera by passing it in via the function parameters with a type of web::Data.

At this point we have set everything up for tera to be used in the actual route handler function.

In the index function we start by building a key/value object called data, constructed with Context::new(). We insert the name/value pairs for title and name, then run the renderer, telling it which file to render along with its associated data. All tera.render does is process the template with the data to generate HTML; that HTML string is then sent back to the browser as the response.

And voila! We can now go to 127.0.0.1:8000 in the browser (cargo-watch should have restarted our server on each change we made) and we should see "Hello, Nivethan!" with the page title "Hacker Clone".

Next we'll add some serialization so that we can get entire objects and arrays onto our growing website. Onward and forward!

Chapter 3 - Complex Templates

Welcome back! We now have an application that can use HTML templates to render some basic data and serve it as an actual web page. In this chapter, we will work on making our template a little bit more complex and work on translating Rust objects into something tera can use.

The translation of Rust objects requires us to serialize our structs. We can do this with the help of the serde crate.

Add serde to our dependencies.

./Cargo.toml

[dependencies]
actix-web = "3"
tera = "1.5.0"
serde = { version = "1.0", features = ["derive"] }

We can then update our main.rs file.

./src/main.rs

...
use serde::Serialize;

#[derive(Serialize)]
struct Post {
    title: String,
    link: String,
    author: String,
}

async fn index(tera: web::Data<Tera>) -> impl Responder {
    let mut data = Context::new();

    let posts = [
        Post {
            title: String::from("This is the first link"),
            link: String::from("https://example.com"),
            author: String::from("Bob")
        },
        Post {
            title: String::from("The Second Link"),
            link: String::from("https://example.com"),
            author: String::from("Alice")
        },
    ];

    data.insert("title", "Hacker Clone");
    data.insert("posts", &posts);

    let rendered = tera.render("index.html", &data).unwrap();
    HttpResponse::Ok().body(rendered)
}
...

The first thing we do is pull Serialize into our file from serde.

Because we are creating a Hacker News clone, I made a struct that we may possibly use in the future. For now it will only contain a title, a link and an author. A user will come to our site and on the index page they will see a list of articles that they can read and comment on.

This struct is the object we want to serialize so that Tera can render it without us manually moving every field into the tera Context. We add the derive attribute to our struct, and serialization is implemented for us automatically.

Inside our index function, we create an array of posts that we can then loop through in our template file.

The next step is to insert the array of posts into our tera Context, which in this case is called data.

We then pass data and the page we want to render to tera.render, and we get back an HTML string that we return to the user.

At this point we have everything working except we don't do anything with this new data in our template. Let's fix that!

Update our index.html file in our templates folder.

./templates/index.html

<!DOCTYPE html>
<html lang="en">
    <head>
        <meta charset="utf-8">
        <title>{{title}}</title>
    </head>
    <body>
        {% for post in posts %}
        <div>
            <a href="{{ post.link }}">{{ post.title }}</a>
            <small>{{ post.author }}</small>
        </div>
        {% endfor %}
    </body>
</html>

Here we use the looping construct to loop through our posts. The object has been translated exactly: the Rust struct is available to us in the template, thanks to serde and serialization.

Now if you navigate to 127.0.0.1:8000 you should see a list of links on our home page! Slowly but surely, we're getting somewhere!

Chapter 4 - Forms

Welcome back! Let's set up our roadmap, as this chapter will be the beginning of passing data back and forth between the website and the web application. This is where things will start to branch out and our file count will increase.

In this chapter we will create a sign up page, a login page, and a submission page that will just print the data to the screen.

Before we begin, let's make one change to our index.html. We are going to take advantage of the fact that Tera templates support inheritance and abstract out the parts common to each of our pages.

Copy index.html to base.html.

./templates/base.html

<!DOCTYPE html>
<html lang="en">
    <head>
        <meta charset="utf-8">
        <title>{{title}}</title>
    </head>
    <body>
        {% block content %}
        {% endblock %}
    </body>
</html>

Here we removed our for loop and added a content block; this means that child templates can define their own content block that gets placed into the parent template.

./templates/index.html

{% extends "base.html" %}

{% block content %}
{% for post in posts %}
<div>
    <a href="{{ post.link }}">{{ post.title }}</a>
    <small>{{ post.author }}</small>
</div>
{% endfor %}
{% endblock %}

In index.html we extend base.html and create our block "content". This way our templates will only hold what they need.

Sign Up Page

For our sign up page, we'll keep it very simple: no front-end validation for now, so we'll just pass everything back to Rust. We'll have a username, an email, and a password.

Let's get started.

./templates/signup.html

{% extends "base.html" %}

{% block content %}
<form action="" method="POST">
    <div>
        <label for="username">Username:</label>
        <input type="text" name="username">
    </div>
    <div>
        <label for="email">E-mail:</label>
        <input type="email" name="email">
    </div>
    <div>
        <label for="password">Password:</label>
        <input type="password" name="password">
    </div>
    <input type="submit" value="Sign Up">
</form>
{% endblock %}

The sign up page is very bare bones but for now it will suffice. We will submit our form via POST to the page itself.

Now to hook this template up to rust.

./src/main.rs

...
async fn signup(tera: web::Data<Tera>) -> impl Responder {
    let mut data = Context::new();
    data.insert("title", "Sign Up");

    let rendered = tera.render("signup.html", &data).unwrap();
    HttpResponse::Ok().body(rendered)
}

#[actix_web::main]
async fn main() -> std::io::Result<()> {
    HttpServer::new(|| {
        let tera = Tera::new("templates/**/*").unwrap();
        App::new()
            .data(tera)
            .route("/", web::get().to(index))
            .route("/signup", web::get().to(signup))
    })
    .bind("127.0.0.1:8000")?
    .run()
    .await
}

The first thing to notice is that we set up a new route in our main function: an HTTP GET to "/signup" calls our signup function. The signup function sets the title and passes the data along with the page we want rendered to tera.render.

We can now go to 127.0.0.1:8000/signup in our browser and we should see our very nice form!

The next thing we will do is have Rust print out what it receives on an HTTP POST to that same page.

./src/main.rs

...
use serde::{Serialize, Deserialize};

...
#[derive(Debug, Deserialize)]
struct User {
    username: String,
    email: String,
    password: String,
}

async fn process_signup(data: web::Form<User>) -> impl Responder {
    println!("{:?}", data);
    HttpResponse::Ok().body(format!("Successfully saved user: {}", data.username))
}

...
            .route("/signup", web::get().to(signup))
            .route("/signup", web::post().to(process_signup))
...

The first thing we do is add a new route for the post request and have this route run our process_signup function.

The next thing we need to do is get the data out of our POST request, which we can do with the Form extractor from actix_web::web. This lets us pull the data out of the request so that we can process it. To extract the data, however, we need to do one other thing.

We need to be able to take the string data in the POST request and convert it into a Rust object. This is the opposite of the serialization we did in the last chapter: we include the Deserialize utility from serde at the very top and derive it for our User struct, which matches the form in our template. We also derive Debug so we can print the data out.
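To get a feel for what web::Form plus Deserialize are doing for us, here is a rough std-only sketch of parsing an application/x-www-form-urlencoded body into our struct by hand (the real machinery also percent-decodes values and reports proper errors):

```rust
// A rough sketch of what web::Form<User> does for us: parse a body like
// "username=niv&email=niv@example.com&password=123" into a struct.
#[derive(Debug, PartialEq)]
struct User {
    username: String,
    email: String,
    password: String,
}

fn parse_form(body: &str) -> Option<User> {
    let mut username = None;
    let mut email = None;
    let mut password = None;
    for pair in body.split('&') {
        // Each pair is "key=value"; splitn keeps any '=' inside the value.
        let mut parts = pair.splitn(2, '=');
        match (parts.next()?, parts.next()?) {
            ("username", v) => username = Some(v.to_string()),
            ("email", v) => email = Some(v.to_string()),
            ("password", v) => password = Some(v.to_string()),
            _ => {}
        }
    }
    // A missing field makes the whole parse fail, like a 400 from actix.
    Some(User { username: username?, email: email?, password: password? })
}

fn main() {
    let user = parse_form("username=niv&email=niv@example.com&password=123").unwrap();
    assert_eq!(user.username, "niv");
    println!("{:?}", user);
}
```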

We can now go to our signup page and try it out!

We should be able to get our page and once we click submit we should see the following in our terminal.

    Finished dev [unoptimized + debuginfo] target(s) in 16.75s
     Running target/debug/hacker-clone
User { username: "niv", email: "niv@example.com", password: "123" }

We're getting somewhere now! We can have our website talk to our Rust web application!

Before we wrap up, let's finish the next two pages: our login page and our submission page.

For each page, we will create a template, add the routes, create a struct for the form, and write the handler functions.

./templates/login.html

{% extends "base.html" %}

{% block content %}
<form action="" method="POST">
    <div>
        <label for="username">Username:</label>
        <input type="text" name="username">
    </div>
    <div>
        <label for="password">Password:</label>
        <input type="password" name="password">
    </div>
    <input type="submit" value="Login">
</form>
{% endblock %}

./src/main.rs

...
#[derive(Debug, Deserialize)]
struct LoginUser {
    username: String,
    password: String,
}

async fn login(tera: web::Data<Tera>) -> impl Responder {
    let mut data = Context::new();
    data.insert("title", "Login");

    let rendered = tera.render("login.html", &data).unwrap();
    HttpResponse::Ok().body(rendered)
}

async fn process_login(data: web::Form<LoginUser>) -> impl Responder {
    println!("{:?}", data);
    HttpResponse::Ok().body(format!("Logged in: {}", data.username))
}
...

...
            .route("/signup", web::post().to(process_signup))
            .route("/login", web::get().to(login))
            .route("/login", web::post().to(process_login))
...

We should now be able to go to 127.0.0.1:8000/login in our browser and log in!

Next up, the submission form! We will follow the same steps.

./templates/submission.html

{% extends "base.html" %}

{% block content %}
<form action="" method="POST">
    <div>
        <label for="title">Title:</label>
        <input type="text" name="title">
    </div>
    <div>
        <label for="link">Link:</label>
        <input type="text" name="link">
    </div>
    <input type="submit" value="Submit">
</form>
{% endblock %}

./src/main.rs

...
#[derive(Debug, Deserialize)]
struct Submission {
    title: String,
    link: String,
}

async fn submission(tera: web::Data<Tera>) -> impl Responder {
    let mut data = Context::new();
    data.insert("title", "Submit a Post");

    let rendered = tera.render("submission.html", &data).unwrap();
    HttpResponse::Ok().body(rendered)
}

async fn process_submission(data: web::Form<Submission>) -> impl Responder {
    println!("{:?}", data);
    HttpResponse::Ok().body(format!("Posted submission: {}", data.title))
}
...
...
            .route("/login", web::post().to(process_login))
            .route("/submission", web::get().to(submission))
            .route("/submission", web::post().to(process_submission))
...

Now if we navigate to 127.0.0.1:8000/submission in our browser we should be able to test our submission.

In the terminal we should see:

Submission { title: "test", link: "http://123" }

Done! We now have 4 major pages of our website wired up to our Rust application. We aren't doing much with them yet, but this is a pretty big milestone. Next up we'll get our Rust application talking to our database. Exciting!

Chapter 5 - Database

Welcome back! At this point we have our website talking to Rust and printing things to the console. We have 4 major pages now: the index page, the signup page, the login page and the submission page. Now we will shift gears, go to the bottom of our application stack, and begin working on our database and its connection.

We will be using PostgreSQL with the diesel crate as our ORM. An ORM is an object-relational mapper that makes database records look and act like objects, so we can stay away from writing raw SQL.

Installing Postgres

Find your version of postgres and install it.

https://www.postgresql.org/download/

I am using Ubuntu on Windows Subsystem for Linux, so to install postgresql I used apt.

> sudo apt-get install postgresql-12

Once I had this installed I also needed the dev library for postgres.

> sudo apt-get install libpq-dev

And finally, because Windows Subsystem for Linux doesn't have a working init, we'll need to manually start the postgres server each time, or we'll need to add it to something like our .bashrc.

> sudo service postgresql start

At this point, we should have a working postgres installation. We can test our installation by checking the version. You may need to do a locate to find the actual executable.

> /usr/lib/postgresql/12/bin/postgres -V
postgres (PostgreSQL) 12.4 (Ubuntu 12.4-0ubuntu0.20.04.1)

Now that we have postgres installed, we need to do one more thing. We need to set up a username and password for postgres.

> sudo su postgres
> psql postgres
psql (12.4 (Ubuntu 12.4-0ubuntu0.20.04.1))
Type "help" for help.

postgres=# \password postgres
Enter new password:

We switch to the postgres user and run psql as that user. We then change the password for a specific user with the \password command.

We now have a postgres user set up with a password and can move to the next step.

Installing Diesel CLI

We will now install the diesel_cli crate. This will let us work with postgres from the command line using diesel. diesel_cli is a standalone binary built in Rust; diesel proper is a crate we will add to our project later so that we, ourselves, can use the ORM.

> cargo install diesel_cli --no-default-features --features postgres

We set the no-default-features flag and then selectively enable postgres support. diesel_cli also supports MySQL and SQLite, but for this tutorial we'll stick with postgres.

If you need any help, please take a look at the diesel getting started page, it is very helpful.

https://diesel.rs/guides/getting-started/

Now we have postgres and diesel_cli installed, but before we jump into programming in Rust we will design our database and get it set up through the diesel CLI. We're almost there!

Designing our Database

We are building a Hacker News clone, so the first thing we need is the ability to make posts and comments. Comments can be replies, so that will also need to be encoded somehow. Users will need to be able to log in, so we need to store that information as well.

We can use the structs we created in the previous chapters as the basis for our database tables.

Let's start with the easier one, the User.

User {
    id,
    username,
    email,
    password
}

This is good enough for now, we could add some more fields to track things like when this user joined or if they want to have an about section.

Now the next table will be the Post.

Post {
    id,
    title,
    link,
    author,
    created_at
}

This is also a straightforward design. We could add more fields like number of likes and how many times someone has favorited this article.

The last table we'll build is for the Comments.

Comment {
    id,
    comment,
    Post.id,
    User.id,
    created_at,
    parentComment.id,
}

The comment table is a little different from the first two tables. Comments belong to a post and to a specific user, which is why we reference both of the other tables. Comments can also be replies to other comments, so we need a way to track which comments are replies and to whom; we do this by recording the parent comment being replied to.

Comments are children of the User, the Post and can be children of the Comment table itself.
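The parent-comment idea can be sketched in plain Rust: a top-level comment has no parent, a reply stores the id of the comment it answers, and finding replies is just a filter. This is a toy model of the relationship, not diesel code:

```rust
// Toy model of the comments table's self-reference: a reply points at
// the id of its parent comment; top-level comments have no parent.
#[allow(dead_code)]
struct Comment {
    id: i32,
    parent_comment_id: Option<i32>, // None = top-level, Some(id) = reply
    body: &'static str,
}

fn replies_to(comments: &[Comment], parent: i32) -> Vec<&Comment> {
    comments
        .iter()
        .filter(|c| c.parent_comment_id == Some(parent))
        .collect()
}

fn main() {
    let comments = vec![
        Comment { id: 1, parent_comment_id: None, body: "First!" },
        Comment { id: 2, parent_comment_id: Some(1), body: "A reply" },
        Comment { id: 3, parent_comment_id: None, body: "Another thread" },
    ];
    let replies = replies_to(&comments, 1);
    assert_eq!(replies.len(), 1);
    assert_eq!(replies[0].body, "A reply");
}
```

In SQL this same filter becomes a query on the parent_comment_id foreign key.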

Now that we have a design, let's implement it!

Setting Up Our Database

The first step will be quick: we need to create a database in postgres and make it available to diesel. Diesel checks the environment variables for a postgres URL, or it checks the current folder for a .env file.

We will create a .env file, which we'll also be using later.

./.env

DATABASE_URL=postgres://postgres:password@localhost/hackerclone

This sets up an environment variable, DATABASE_URL, containing the postgres URL. "postgres:password" is the username and password we set up earlier, and "hackerclone" is the name of the database we want to create. This database is where our users, posts and comments tables will live.
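The pieces of that URL break down as postgres://USER:PASSWORD@HOST/DBNAME. A small illustrative parser, just to show the structure (real code should use a proper URL crate):

```rust
// Illustrative only: split a postgres://USER:PASSWORD@HOST/DBNAME url
// into its parts to show what each segment means.
fn split_db_url(url: &str) -> Option<(&str, &str, &str, &str)> {
    let rest = url.strip_prefix("postgres://")?;
    let (creds, location) = rest.split_once('@')?;
    let (user, password) = creds.split_once(':')?;
    let (host, dbname) = location.split_once('/')?;
    Some((user, password, host, dbname))
}

fn main() {
    let (user, password, host, db) =
        split_db_url("postgres://postgres:password@localhost/hackerclone").unwrap();
    assert_eq!(user, "postgres");
    assert_eq!(password, "password");
    assert_eq!(host, "localhost");
    assert_eq!(db, "hackerclone");
}
```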

We can now use the diesel cli to create our database.

> diesel setup
Creating migrations directory at: hacker-clone/migrations
Creating database: hackerclone

diesel setup has created our database and it has created a migrations folder.

The migrations folder will contain the raw SQL we write to create and modify tables. Each time we want to modify our tables, we run diesel's generate option to create a checkpoint.

Our first checkpoint will be the creation of the 3 tables we need.

hacker-clone> diesel migration generate hackerclone

We've now generated one checkpoint in our migrations folder. You should see a folder named with a date, and inside it 2 files: up.sql and down.sql. The up.sql file is where we write our modifications to the database, and down.sql holds the SQL that reverses them. This way we can move forwards and backwards, and the datestamped folders give us a history we can walk through at will!

Writing SQL

We can now get to writing some SQL!

./migrations/2020-10-18-061233_hackerclone/up.sql

-- Your SQL goes here
CREATE TABLE users (
    id SERIAL PRIMARY KEY,
    username VARCHAR NOT NULL,
    email VARCHAR NOT NULL,
    password VARCHAR NOT NULL,

    UNIQUE(username),
    UNIQUE(email)
);

CREATE TABLE posts (
    id SERIAL PRIMARY KEY,
    title VARCHAR NOT NULL,
    link VARCHAR,
    author INT NOT NULL,
    created_at TIMESTAMP NOT NULL,

    CONSTRAINT fk_author
        FOREIGN KEY(author)
            REFERENCES users(id)
);

CREATE TABLE comments (
    id SERIAL PRIMARY KEY,
    comment VARCHAR NOT NULL,
    post_id INT NOT NULL,
    user_id INT NOT NULL,
    parent_comment_id INT,
    created_at TIMESTAMP NOT NULL,

    CONSTRAINT fk_post
        FOREIGN KEY(post_id)
            REFERENCES posts(id),

    CONSTRAINT fk_user
        FOREIGN KEY(user_id)
            REFERENCES users(id),

    CONSTRAINT fk_parent_comment
        FOREIGN KEY(parent_comment_id)
            REFERENCES comments(id)
);

Here we create our 3 tables, setting up the fields we need and their types. The other thing to pay attention to is the use of foreign keys in our posts and comments tables. This way we can easily implement things like a "my comments" page or a "my posts" page in the future. The comments table also has a foreign key to itself, because comments can be both comments on a post and replies to other comments.

To undo our 3 create commands, we will need to do some DROPs.

./migrations/2020-10-18-061233_hackerclone/down.sql

-- This file should undo anything in up.sql
DROP TABLE comments;
DROP TABLE posts;
DROP TABLE users;

With that, we now have a complete checkpoint for diesel. We can now do "diesel migration run" to run the SQL in our up.sql file which will create our tables with all the fields we need. We can also do "diesel migration redo" which will run the down.sql file first and then the up.sql file. This way we can modify and change things quickly.

A note here: on production systems we rarely remove columns. In development this is a handy feature, but it is something to be careful of.

Whew! We're done! We finally have our database set up and we have our tables created. We can now move on to actually working in rust and start doing some programming.

Feel free to take a breather, I know I need one!

Chapter 6 - Registering a User

Ok! The previous chapter was all set up for this chapter and the next few. We now have a database and three tables. We also have a website that can talk to our rust application. We are now going to marry the two in this chapter!

Before we get started we need to add some new crates we're going to be using in our application. We will be adding the dotenv crate and the diesel crate.

./Cargo.toml

[dependencies]
actix-web = "3"
tera = "1.5.0"
serde = { version = "1.0", features = ["derive"] }
dotenv = "0.15.0"
diesel = { version = "1.4.4", features = ["postgres"] }

We will be using the dotenv crate to read our .env file and place it in the environment as environment variables. We will also include diesel proper now. For diesel we want to specify that we are using postgres so we include it via the features option.

With that we have our dependencies taken care of. Let's move on to the fun part!

  • Just a note: make sure that cargo-watch did indeed pull in our new dependencies. This may require killing cargo-watch and restarting it; I was getting strange errors because it hadn't picked up the new dependencies.

For now we're going to focus on getting the core functions done. Lets outline what we're going to be doing for the next few chapters.

  1. We should be able to register new users.
  2. We should be able to login.
  3. We should be able to submit new posts.
  4. We should be able to view posts.
  5. We should be able to make comments on each post.

In this chapter we will focus on just connecting to our database and building our user registration process.

Connecting to Our Database

Before we get into the code, let's orient ourselves.

We currently have a schema.rs file in our src directory.

./src/schema.rs

...
table! {
    users (id) {
        id -> Int4,
        username -> Varchar,
        email -> Varchar,
        password -> Varchar,
    }
}
...

This is a file automatically generated by diesel that we will use throughout our application. One thing to keep in mind at all times is that the schema is the source of truth: the order of the fields and their types need to be reflected in any structs we derive "Queryable" for.

The first step is to make our application aware of this file.

./src/main.rs

#[macro_use]
extern crate diesel;
pub mod schema;

use actix_web::{HttpServer, App, web, HttpResponse, Responder};
...

We will need the first 3 lines of our main.rs to look like the above. I'm not entirely sure why we need this, other than that it doesn't work without it. From what I can tell, the macro_use attribute allows the table! macros in schema.rs to actually expand in our binary. The extern I have no idea about, because to my understanding use supersedes it now; using "use diesel::*" causes the macros in schema.rs to not expand properly. So we'll hand-wave it away :)

Now we will write the connector; this is what we will call any time we want to connect to our hackerclone database and do something with it.

...
use diesel::prelude::*;
use diesel::pg::PgConnection;
use dotenv::dotenv;

fn establish_connection() -> PgConnection {
    dotenv().ok();

    let database_url = std::env::var("DATABASE_URL")
        .expect("DATABASE_URL must be set");

    PgConnection::establish(&database_url)
        .expect(&format!("Error connecting to {}", database_url))
}
...

We include some parts of diesel along with the dotenv crate. The dotenv().ok() call loads our .env file into the environment; we then read the database URL out of the environment with std::env::var.

We then return a connection handler with the type PgConnection.

! We can now connect to our database!

Registering New Users

Finally! The fun part. We created a User struct in one of the previous chapters to let us extract the User data from a request. Now we will use that same struct but with some minor modifications. This will also be where we start branching into different files; we could technically do this all in main.rs but then it would get quite unwieldy.

The first step to processing new users is to create a models.rs file and move our User struct to there.

./src/models.rs

use super::schema::users;
use diesel::{Queryable, Insertable};
use serde::Deserialize;

#[derive(Queryable)]
pub struct User {
    pub id: i32,
    pub username: String,
    pub password: String,
    pub email: String,
}

#[derive(Debug, Deserialize, Insertable)]
#[table_name="users"]
pub struct NewUser {
    pub username: String,
    pub email: String,
    pub password: String,
}

I copied over our old struct and renamed it NewUser. This is because we need two versions when we want to interact with the users table. This will be true for most tables.

The NewUser struct corresponds to a user that we will extract from a request and will put into the User table. This is why it derives the Insertable trait.

The User struct corresponds to existing users; it's almost like extracting a full user from the database. This user will have the extra field of id. This struct has the Queryable trait because we want to be able to query the users table and get everything structured using the User struct.

  • Note here - Because the struct has the Queryable trait, we need to make sure the order and types match what's in the schema. This resulted in more than one bug for myself!
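
Since Queryable fills struct fields by position, a quick std-only sketch shows why the order matters (the tuple standing in for a database row is hypothetical; diesel does this mapping for us):

```rust
// A struct whose fields must follow the schema order: id, username, password, email.
struct User {
    id: i32,
    username: String,
    password: String,
    email: String,
}

// Diesel's Queryable works positionally, roughly like this manual mapping:
fn from_row(row: (i32, String, String, String)) -> User {
    User { id: row.0, username: row.1, password: row.2, email: row.3 }
}

fn main() {
    // Row in schema order: (id, username, password, email).
    let row = (1, "niv".to_string(), "secret".to_string(), "niv@example.com".to_string());
    let u = from_row(row);
    // If the struct declared email before password, both being String,
    // this would still compile but the two values would silently swap.
    assert_eq!(u.password, "secret");
    assert_eq!(u.email, "niv@example.com");
    println!("fields map by position");
}
```

The danger is exactly that a mismatched order of same-typed fields compiles fine and only shows up as scrambled data at runtime.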

The next thing to notice is that here we use the schema.rs file; we bring it in via the super option because the models.rs file sits under the root main.rs file. We are exposing our structs to other parts of our application through the pub keyword. We can also keep things private if we need to.

Now we have our User models set up! Let's go back to our main.rs file.

./src/main.rs

...
pub mod schema;
pub mod models;
...

...
use diesel::prelude::*;
use diesel::pg::PgConnection;
use dotenv::dotenv;

use models::{User, NewUser};
...

The first thing we need to do is expose our models file and also include it. The first statement, "pub mod models" will expose our models file. The use statement is what actually includes the models into our scope.

Now we will update our process_signup function to use our database connector and models.

./src/main.rs

async fn process_signup(data: web::Form<NewUser>) -> impl Responder {
    use schema::users;

    let connection = establish_connection();

    diesel::insert_into(users::table)
        .values(&*data)
        .get_result::<User>(&connection)
        .expect("Error registering user.");

    println!("{:?}", data);
    HttpResponse::Ok().body(format!("Successfully saved user: {}", data.username))
}

The first thing we do is change the Form extractor to NewUser. The next thing to notice is our use statement; here we are bringing in the code generated by the macros in the schema file. This is what lets us refer to the users table. Then we create our connection and do our insert.

Before we insert, we could do validation on our new user, such as making sure the username is unique, the email is valid or the password is strong enough. For now, however, we will keep things simple and do our insert straight away. Duplicate users will violate the UNIQUE constraint we wrote in the sql files and this will cause rust to panic.
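
As a taste of what that validation could look like, here is a std-only sketch; the validate_new_user helper and its rules are made up for illustration and are not part of diesel or actix:

```rust
// A stand-in for our form-extracted struct.
pub struct NewUser {
    pub username: String,
    pub email: String,
    pub password: String,
}

// Hypothetical pre-insert validation: return Ok(()) or a reason to reject.
pub fn validate_new_user(user: &NewUser) -> Result<(), String> {
    if user.username.trim().is_empty() {
        return Err("username cannot be empty".to_string());
    }
    // A very loose email check; real apps would use a proper validator.
    if !user.email.contains('@') {
        return Err("email looks invalid".to_string());
    }
    if user.password.len() < 8 {
        return Err("password must be at least 8 characters".to_string());
    }
    Ok(())
}

fn main() {
    let ok = NewUser {
        username: "niv".into(),
        email: "niv@example.com".into(),
        password: "longenough".into(),
    };
    assert!(validate_new_user(&ok).is_ok());

    let bad = NewUser { password: "123".into(), ..ok };
    assert!(validate_new_user(&bad).is_err());
    println!("validation sketch works");
}
```

In the handler we would run this before calling insert_into and return an error response instead of panicking.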

The values line deserves a note. Our data here is a web::Form<NewUser>, not a NewUser, which is why without the * there is an error saying the Insertable trait isn't set up properly on the NewUser struct. web::Form implements Deref to its inner value, so *data gives us the NewUser and &*data hands .values() the &NewUser it wants.
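
To see why the * helps, here is a std-only toy wrapper; this illustrates the Deref pattern and is not actix's actual Form code:

```rust
use std::ops::Deref;

// A stand-in for web::Form<T>: it wraps the extracted value.
struct Form<T>(T);

impl<T> Deref for Form<T> {
    type Target = T;
    fn deref(&self) -> &T {
        &self.0
    }
}

struct NewUser {
    username: String,
}

// A function that, like diesel's .values(), wants the inner type, not the wrapper.
fn wants_new_user(u: &NewUser) -> &str {
    &u.username
}

fn main() {
    let data = Form(NewUser { username: "niv".to_string() });
    // &*data: *data goes through Deref to reach the NewUser, & re-borrows it.
    assert_eq!(wants_new_user(&*data), "niv");
    println!("deref sketch works");
}
```

The same &* dance works on any wrapper that implements Deref, which is why it shows up all over actix handlers.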

The next line is where we execute our insert, passing in the connection and casting the result to the type of User. The get_result call returns our newly inserted row and we need to give it that concrete type.

The expect call will trigger rust to panic if there are any errors in inserting.

We can now go to 127.0.0.1:8000/signup and try it out.

NewUser { username: "nivethan", email: "nivethan@example.com", password: "123" }

If we try to register a new user we should see the above line in our terminal.

thread 'actix-rt:worker:3' panicked at 'Error registering user.: DatabaseError(UniqueViolation, "duplicate key value violates unique constraint \"users_username_key\"")', src/main.rs:72:10

If we try to register an existing user we should see rust panicking.

!! Now we have wired up our rust application to our database and have our website's sign up page actually functioning. We could do more complex things such as verifying a user's e-mail or adding invite codes but let's keep it simple for now.

We're slowly building something! Next we'll work on actually logging our new user in.

Chapter 7 - Logging a User In

Welcome back! We're making some headway into our app, we finally have the ability to register new users, let's continue and add in the ability to log in!

Before we dive into the programming, let's go over what logging in really means. On our hackerclone website we only want registered users to submit new posts and comment. To do this we need to make sure those requests are coming from registered users. One way to do this would be to have the user send their username and password with each request; we would then check them against our database and either allow the action or not.

This is, however, troublesome for the user; they would need to verify themselves on each request. The next thing we can do is automate that via cookies. Cookies are bits of information the browser sends back to the website with each request. Our website could save the username and password the user enters at the beginning in a cookie, and from that point onwards the browser would send them with each request.

This is bad form; if the cookies somehow got read by someone on their way to our server, that user would be fully compromised.

This leads us to the idea of sessions. When a user logs in the first time and verifies themselves, we will create a random string and keep it in a table in memory. We then set the user's cookie to contain this string. Now every request the user makes will give us this random string, and we can look in our table to see who is matched to that string. This is still susceptible to being captured by a hacker, but the hacker would only have access to that session; they wouldn't get the user's real credentials. We can also add timeouts for sessions to add a little bit more security.

This is the method we will be using in our own login function. We'll have a user log in, make sure the passwords match, create a session for that user and add it to our session table. Logging out would then be simple, all we need to do is remove the user from the session table.
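
The plan above can be sketched with a plain in-memory map. The token generator here is a hypothetical stand-in; real code should use a cryptographically random string:

```rust
use std::collections::HashMap;
use std::time::{SystemTime, UNIX_EPOCH};

// An in-memory session table: token -> username.
struct Sessions {
    table: HashMap<String, String>,
}

impl Sessions {
    fn new() -> Self {
        Sessions { table: HashMap::new() }
    }

    // Stand-in token generator; use a real CSPRNG in production.
    fn login(&mut self, username: &str) -> String {
        let nanos = SystemTime::now().duration_since(UNIX_EPOCH).unwrap().subsec_nanos();
        let token = format!("{}-{}", username.len(), nanos);
        self.table.insert(token.clone(), username.to_string());
        token // this is what we would put in the user's cookie
    }

    // Every request hands us the cookie's token; we look up who it belongs to.
    fn who_is(&self, token: &str) -> Option<&String> {
        self.table.get(token)
    }

    // Logging out just removes the entry from the table.
    fn logout(&mut self, token: &str) {
        self.table.remove(token);
    }
}

fn main() {
    let mut sessions = Sessions::new();
    let token = sessions.login("niv");
    assert_eq!(sessions.who_is(&token), Some(&"niv".to_string()));
    sessions.logout(&token);
    assert!(sessions.who_is(&token).is_none());
    println!("session round trip works");
}
```

In the chapters below actix_identity will handle this bookkeeping for us, but the login/lookup/logout shape stays the same.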

Let's get started!

Logging in a User

The first thing we'll do is move our LoginUser struct that we created a few chapters ago to our models file and include it via the use statement. This will just be a little tidying up.

./src/models.rs

...
#[derive(Debug, Deserialize)]
pub struct LoginUser {
    pub username: String,
    pub password: String,
}
...

./src/main.rs

...
use models::{User, NewUser, LoginUser};
...

We will slowly move all our structs out of our main.rs file because it makes more sense for them to all be centralized in our models.rs file.

Now let's work on verifying the user who is trying to login. What we need to do is go to the database and find the username that is trying to login, if that user exists we need to check their password. If and only if the passwords match do we want to create a session.

./src/main.rs

...
async fn process_login(data: web::Form<LoginUser>) -> impl Responder {
    use schema::users::dsl::{username, users};

    let connection = establish_connection();
    let user = users.filter(username.eq(&data.username)).first::<User>(&connection);

    match user {
        Ok(u) => {
            if u.password == data.password {
                println!("{:?}", data);
                HttpResponse::Ok().body(format!("Logged in: {}", data.username))
            } else {
                HttpResponse::Ok().body("Password is incorrect.")
            }
        },
        Err(e) => {
            println!("{:?}", e);
            HttpResponse::Ok().body("User doesn't exist.")
        }
    }
}
...

The first thing to notice here is that like before we are going to include some code from our schema so we can use our user table.

The next step is to create our connection to postgres. Right now we are creating a new connection on each request which can get expensive. In the future we will set up a pool of connections that we can then just use.

The next step is to check our database for our user. We do a filter, and because we put a UNIQUE constraint on the username field when writing our sql file, we can grab just the first result we get. We pass the connection into first, which will execute our filter.

Now we will get a result type that we can match against. If the result was Ok then we can go into the next set of logic. If it was an Err, we will print a very helpful message. (Not a good idea for production)

If the result was Ok, then we can check the password, and if they match we will print our original login message to our terminal. If the passwords don't match, we'll let the user know.

Now if we go to our browser and navigate to 127.0.0.1:8000/login we should be able to log in with the user we already created.

We should see the below in our terminal.

LoginUser { username: "niv", password: "test" }

Done!

Joking! Not yet, we will now add our session table and make it so if a user logs in, they will then get a cookie set with the session information.

Adding Sessions

To add sessions to our rust application we are going to include another crate. First we'll add it to our dependencies.

./Cargo.toml

...
diesel = { version = "1.4.4", features = ["postgres"] }
actix-identity = "0.3.1"

actix-identity is a crate that allows us to maintain sessions using cookies.

The next step is to include the relevant parts of actix-identity into our project.

./src/main.rs

...
use serde::{Serialize, Deserialize};
use actix_identity::{Identity, CookieIdentityPolicy, IdentityService};
...

We will be using these 3 objects from the actix_identity crate, along with Serialize and Deserialize from serde.

Now we need to let our rust application know that we are going to be using sessions. What we want to do is make sure we always send a cookie along with our responses; the browser on the other side will make sure we get that information back with every request.

To do this we will need to register the IdentityService in our app, similar to how we did with tera. With tera, our templating engine, we wanted to make a variable accessible to the handler functions within our App, so we registered it with .data(). With our IdentityService we want it to wrap every request and response, so instead we will use .wrap().

./src/main.rs

...
App::new()
            .wrap(IdentityService::new(
                    CookieIdentityPolicy::new(&[0;32])
                    .name("auth-cookie")
                    .secure(false)
                )
            )
            .data(tera)
            .route("/", web::get().to(index))
...

Here we register our IdentityService with the wrap option, on our App object which sits inside our HttpServer. We will now have the ability to create sessions.

Let's go back to our login function.

./src/main.rs

async fn process_login(data: web::Form<LoginUser>, id: Identity) -> impl Responder {
    use schema::users::dsl::{username, users};

    let connection = establish_connection();
    let user = users.filter(username.eq(&data.username)).first::<User>(&connection);

    match user {
        Ok(u) => {
            if u.password == data.password {
                let session_token = String::from(u.username);
                id.remember(session_token);
                HttpResponse::Ok().body(format!("Logged in: {}", data.username))
            } else {
                HttpResponse::Ok().body("Password is incorrect.")
            }
        },
        Err(e) => {
            println!("{:?}", e);
            HttpResponse::Ok().body("User doesn't exist.")
        }
    }
}

We've made only 3 changes here. We added id as a parameter to this function; this is something actix will now pass in, just like it does with tera. Then, inside our password check, if it passes, we create our session token and call id.remember() with it, which sets the user's cookie with that information. With that we now have sessions!

What actix_identity's id is doing is taking our value and storing it, signed with the secret key we handed to CookieIdentityPolicy, inside the cookie itself.

Earlier we wrapped an IdentityService around our application; this means that when a request comes in, it grabs the "auth-cookie", verifies it, and hands us back the value we set inside .remember().

Let's make sure everything is working by making it so when a user goes to the login page, we'll check to see if they're already logged in. If they are logged in we'll display a message, whereas if they aren't logged in, it will be the regular page.

For this we will update our login function.

./src/main.rs

async fn login(tera: web::Data<Tera>, id: Identity) -> impl Responder {
    let mut data = Context::new();
    data.insert("title", "Login");

    if let Some(id) = id.identity() {
        return HttpResponse::Ok().body("Already logged in.")
    }
    let rendered = tera.render("login.html", &data).unwrap();
    HttpResponse::Ok().body(rendered)
}

We once again add id to the parameter list that actix will pass in. The "if let Some(id)" lets us quickly check the id.identity() function. This function checks whether a valid session token was saved in the user's cookie. If it was, the check will pass and we don't need to display the login page. If it wasn't, then we should allow the user to log in.

Let's do one more thing before we finish up. Let's wire together a logout! That way we can actually remove things from our session table.

The first thing to do is add a route for logout.

./src/main.rs

...
            .route("/login", web::post().to(process_login))
            .route("/logout", web::to(logout))
...

Then the next thing would be to write our logout function.

./src/main.rs

async fn logout(id: Identity) -> impl Responder {
    id.forget();
    HttpResponse::Ok().body("Logged out.")
}

id.forget() removes the session token we set by clearing out our cookie. This is what logging out really means on a website!

To see how this works, you can log into the site, open the developer console and look under the storage tab. Here we should see that we have a cookie with a random-looking string in it. This is our signed session value; when we call id.identity() in our rust functions, actix_identity decodes the cookie and gives us back the value we stored.

And with that we are done! Well done, we have finished up another major piece of our puzzle. We now have users who can sign up, we can log them in and we can log them out. We can handle sessions so users can now make posts and comment.

In the next chapter we'll build our submission page!

See you soon.

Chapter 8 - Submitting a New Post

Welcome back! Now we have a website that we can register on and even log in to. Now let's add another major piece to our app, submitting new posts!

In this chapter we'll be building the post submission logic. Let's outline this first so that we have a roadmap of what we'll be doing.

  1. We need to create a model to extract the form data, we already have this.
  2. We need to create a model of the true database table so that we can get ids and timestamps.
  3. We need another model to set the author and the timestamp
  4. We need to insert into our database
  5. That's it!

Let's get started.

Gating the Submission Page

The first thing we'll do is gate our submission page. We only want logged in users making posts.

./src/main.rs

...
async fn submission(tera: web::Data<Tera>, id: Identity) -> impl Responder {
    let mut data = Context::new();
    data.insert("title", "Submit a Post");

    if let Some(id) = id.identity() {
        let rendered = tera.render("submission.html", &data).unwrap();
        return HttpResponse::Ok().body(rendered);
    }

    HttpResponse::Unauthorized().body("User not logged in.")
}
...

We will check the id and if the user is logged in, we will let them access the submission page. If they aren't we'll return a 401 - Unauthorized response.

You should now be able to navigate to 127.0.0.1:8000/submission. Depending on if you are logged in or not, you should get a different page.

Timestamps for Our Posts

Now, before we create our models, we will need to add a new crate to our project. We don't want to get into the messiness of tracking time ourselves for fields like our post's created_at, so we will use the chrono crate, and we will also enable its serde feature so we can use serialize and deserialize with chrono's types.

./Cargo.toml

...
actix-identity = "0.3.1"
chrono = { version = "0.4", features = ["serde"] }
...

Now that we have chrono for our timestamps, we will also need to let diesel know about it. We will need to enable the chrono feature in diesel.

./Cargo.toml

...
diesel = { version = "1.4.4", features = ["postgres", "chrono"] }
...

Now we have the chrono crate available and we have it ready to work with our orm, diesel.

The Models

First, let's rename our Post struct we have in our main.rs to something else, this is really just a struct to extract out form data.

./src/main.rs

...
#[derive(Deserialize)]
struct PostForm {
    title: String,
    link: String,
}
...

PostForm is a better name for this, or even PostFormExtractor would be a little more obvious about what we use it for.

This renaming would have broken our index function as it was using this struct. For now let's just stub out our index function and we'll worry about it in the next chapter.

./src/main.rs

...
async fn index(tera: web::Data<Tera>) -> impl Responder {
    let mut data = Context::new();

    let posts = "";

    data.insert("title", "Hacker Clone");
    data.insert("posts", &posts);

    let rendered = tera.render("index.html", &data).unwrap();
    HttpResponse::Ok().body(rendered)
}
...

This is the index function stubbed out for now. Next let's write our first model, the one that will reflect our posts table. But before that, let's update our includes to also use the posts table.

./src/models.rs

use super::schema::{users, posts};

Now we have the posts table available to us. Now for our Post struct.

./src/models.rs

...
#[derive(Debug, Queryable)]
pub struct Post {
    pub id: i32,
    pub title: String,
    pub link: Option<String>,
    pub author: i32,
    pub created_at: chrono::NaiveDateTime,
}
...

Once again, because this struct has the Queryable trait, we need to make sure our struct matches both the order of fields and the types in our schema file.

./src/schema.rs

...
table! {
    posts (id) {
        id -> Int4,
        title -> Varchar,
        link -> Nullable<Varchar>,
        author -> Int4,
        created_at -> Timestamp,
    }
}
...

Here is a mapping for the types:

https://kotiri.com/2018/01/31/postgresql-diesel-rust-types.html

In our struct, the only strange field is link, as a post could be just a title. We signified this in our SQL file by leaving off the "NOT NULL" condition. In our schema file it appears as Nullable, and in our struct it becomes Option.
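
A tiny std-only illustration of that Nullable-to-Option mapping; the display_link helper here is hypothetical:

```rust
// link was declared without NOT NULL, so it is Nullable in the schema
// and Option<String> on the Rust side.
struct Post {
    title: String,
    link: Option<String>,
}

// A hypothetical helper: fall back to a placeholder when there is no link.
fn display_link(post: &Post) -> &str {
    post.link.as_deref().unwrap_or("(text post)")
}

fn main() {
    let with_link = Post { title: "Test".into(), link: Some("https://example.com".into()) };
    let text_only = Post { title: "Ask".into(), link: None };
    assert_eq!(display_link(&with_link), "https://example.com");
    assert_eq!(display_link(&text_only), "(text post)");
    println!("{} / {}", with_link.title, text_only.title);
}
```

Anywhere we render a post we will have to decide what to do with the None case, which Option forces us to handle up front.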

The other thing to note is that our created_at field is a type from the chrono crate. These types don't implement serde's traits by default, so if we hadn't enabled the serde feature on chrono we would have issues with the Serialize and Deserialize traits.

Now we need one more struct, we need a struct that will act as our Insertable.

./src/models.rs

...
#[derive(Serialize, Insertable)]
#[table_name="posts"]
pub struct NewPost {
    pub title: String,
    pub link: String,
    pub author: i32,
    pub created_at: chrono::NaiveDateTime,
}
...

Our NewPost struct contains all the fields we want to set when we insert into our posts table. The 2 extra fields here are author and created_at, both of which we will not extract from the form. This is why we need a 3rd struct. What we will do is convert our existing PostForm into a NewPost and then insert that into our table.

To do this we will implement a method for NewPost.

./src/models.rs

...
impl NewPost {
    pub fn from_post_form(title: String, link: String, uid: i32) -> Self {
        NewPost {
            title: title,
            link: link,
            author: uid,
            created_at: chrono::Local::now().naive_utc(),
        }
    }
}
...

This creates a function that will build a NewPost object from a title, link and user id we pass in.

With that we have all the models we need!

Let's update our main.rs to handle submissions now.

Submitting New Posts

The first thing we'll do is update the top of our file to include our newly created models.

./src/main.rs

...
use models::{User, NewUser, LoginUser, Post, NewPost};
...

Now with our models included, we can dive into the guts of our process_submission function.

./src/main.rs

...
async fn process_submission(data: web::Form<PostForm>, id: Identity) -> impl Responder {
    if let Some(id) = id.identity() {
        use schema::users::dsl::{username, users};

        let connection = establish_connection();
        let user :Result<User, diesel::result::Error> = users.filter(username.eq(id)).first(&connection);

        match user {
            Ok(u) => {
                let new_post = NewPost::from_post_form(data.title.clone(), data.link.clone(), u.id);

                use schema::posts;

                diesel::insert_into(posts::table)
                    .values(&new_post)
                    .get_result::<Post>(&connection)
                    .expect("Error saving post.");

                return HttpResponse::Ok().body("Submitted.");
            }
            Err(e) => {
                println!("{:?}", e);
                return HttpResponse::Ok().body("Failed to find user.");
            }
        }
    }
    HttpResponse::Unauthorized().body("User not logged in.")
}
...

The first thing to note is that in our process_submission function, we've updated our form extractor type to PostForm and also added id as a parameter. We should do some checking just to make sure the submission is coming from a logged in user.

Once we confirm that the session is valid we bring in the domain specific language or dsl for the users table. The first step in processing a submission is to figure out who the user is.

In our case, the session token is the username so we can reverse it to a user id easily by querying the user table. Had our token been a random string that we kept matched to the user, we would need to first go to that table to get the user id.

Once we have the User we make sure we have a valid result and then we convert our PostForm to a NewPost.

let new_post = NewPost::from_post_form(data.title.clone(), data.link.clone(), u.id);

This line admittedly does bother me as we are doing a clone to pass the data. I did not figure out what the borrowing rules here should be.

  • Note: Comment below has a better way of handling and passing data to our from_post_form which is much cleaner than doing a clone. We can pass the data back by calling data.into_inner() which will pass the entire PostForm object. For now we'll leave the code as is but feel free to use the better way!
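
To show the shape of that cleaner approach, here is a std-only sketch where the constructor takes the form by value instead of cloned fields. The structs and the integer timestamp are stand-ins; the real code would call data.into_inner() and use a chrono::NaiveDateTime:

```rust
use std::time::{SystemTime, UNIX_EPOCH};

// Stand-ins for the real structs; created_at is a unix timestamp here
// instead of chrono::NaiveDateTime to keep the sketch dependency-free.
struct PostForm {
    title: String,
    link: String,
}

struct NewPost {
    title: String,
    link: String,
    author: i32,
    created_at: u64,
}

impl NewPost {
    // Taking PostForm by value moves the Strings in: no clones needed.
    fn from_post_form(form: PostForm, uid: i32) -> Self {
        NewPost {
            title: form.title,
            link: form.link,
            author: uid,
            created_at: SystemTime::now().duration_since(UNIX_EPOCH).unwrap().as_secs(),
        }
    }
}

fn main() {
    // In the handler this would be NewPost::from_post_form(data.into_inner(), u.id).
    let form = PostForm { title: "Test".into(), link: "https://example.com".into() };
    let post = NewPost::from_post_form(form, 1);
    assert_eq!(post.title, "Test");
    assert_eq!(post.author, 1);
    println!("built post: {}", post.link);
}
```

Ownership moves here instead of copying, which is exactly what the clone was papering over.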

The next step is to bring in the posts table, which we do with the use schema::posts line. Next we insert our NewPost object into our posts table, reusing the connection we set up earlier in our function.

And with that! We should have submissions working!

Verifying a Submission

We can navigate to 127.0.0.1:8000/submission and if we're still logged in we can submit a new post.

Once submitted, our browser should show our submission message.

We can verify that the submission made it all the way to postgres through our rust application by checking postgres manually.

> sudo -u postgres psql postgres
psql (12.4 (Ubuntu 12.4-0ubuntu0.20.04.1))
Type "help" for help.

postgres=# \c hackerclone
hackerclone=# select * from posts;
 id | title | link | author |         created_at
----+-------+------+--------+----------------------------
  1 | Test  | 123  |      1 | 2020-10-19 03:29:05.063197
(1 row)

hackerclone=#

The first thing we need to do is switch into the postgres user, then run psql for the postgres user.

Once we do that, we are in postgres and need to connect to a database. In our case the database we want to connect to is hackerclone.

Once we connect to our database the prompt will change to reflect that.

Then, to list the entries in a table, all we need to do is run a select against that table.

We can see that our submission worked perfectly!

Whew! We did a lot these past few chapters. We now have registering users, logging in, sessions, and making new posts all functioning. The next chapter will hopefully be a breather, we'll work on making our index page functional and making our website slightly easier to navigate instead of having to type in urls.

See you soon!

Chapter 9 - A New Index Page

Welcome back! We've come a long way from our humble beginnings of printing hello world to now being able to register new users and submitting new posts. This chapter we'll update our index route that we had stubbed out last week.

Currently if we navigate to 127.0.0.1:8000/ we will cause rust to panic, let's fix that!

The Index Route

What we want to do in our index route is to fetch all the posts sorted by date. We could add votes and weights to our posts so that the order can change but let's keep it simple. Our plan of attack will be to set up a database connection then query the posts table for all the entries. We will also need to do a join to get the user for each post, using the author field on the post.

Let's get started.

./src/main.rs

...
async fn index(tera: web::Data<Tera>) -> impl Responder {
    use schema::posts::dsl::{posts};
    use schema::users::dsl::{users};

    let connection = establish_connection();
    let all_posts :Vec<(Post, User)> = posts.inner_join(users)
        .load(&connection)
        .expect("Error retrieving all posts.");

    let mut data = Context::new();
    data.insert("title", "Hacker Clone");
    data.insert("posts_users", &all_posts);

    let rendered = tera.render("index.html", &data).unwrap();
    HttpResponse::Ok().body(rendered)
}
...

We first bring in the domain specific languages for posts and users, as we will need information from both tables.

The next step is to establish our connection to the postgres database.

Next we do our query. Here we join our posts table to users, and then our load function takes in the connection and executes the query. Using load means we will get all the rows in the posts table.

The reason we can do an inner join is because diesel knows that it is joinable. When we first wrote our raw SQL back in the chapter on databases, we gave our author id field on posts the foreign key constraint.

./migrations/2020-10-18-064517_hackerclone/up.sql

...
CREATE TABLE posts (
    id SERIAL PRIMARY KEY,
    title VARCHAR NOT NULL,
    link VARCHAR,
    author INT NOT NULL,
    created_at TIMESTAMP NOT NULL,

    CONSTRAINT fk_author
        FOREIGN KEY(author)
            REFERENCES users(id)
);
...

This constraint got written in our schema file as a joinable.

./src/schema.rs

...
joinable!(posts -> users (author));
...

This means that we can join our users table to our posts via the author field on our post.
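
Conceptually, the inner join pairs each post with the user whose id matches the post's author field. A std-only sketch of that pairing (the structs and data here are illustrative; diesel generates the real SQL join for us):

```rust
// Minimal stand-ins for our models.
#[derive(Clone)]
struct User { id: i32, username: String }
struct Post { title: String, author: i32 }

// What posts.inner_join(users) produces, in spirit: (Post, User) pairs
// where post.author == user.id, dropping posts with no matching user.
fn inner_join(posts: Vec<Post>, users: &[User]) -> Vec<(Post, User)> {
    posts
        .into_iter()
        .filter_map(|p| {
            users.iter().find(|u| u.id == p.author).cloned().map(|u| (p, u))
        })
        .collect()
}

fn main() {
    let users = vec![User { id: 1, username: "niv".into() }];
    let posts = vec![
        Post { title: "Test".into(), author: 1 },
        Post { title: "Orphan".into(), author: 99 }, // no matching user
    ];
    let joined = inner_join(posts, &users);
    assert_eq!(joined.len(), 1);
    assert_eq!(joined[0].1.username, "niv");
    println!("{} by {}", joined[0].0.title, joined[0].1.username);
}
```

The Vec<(Post, User)> shape is exactly what our index handler loads and hands to the template.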

Now that our query is returning a tuple of Post and User we'll need to update our template to reflect this. We also renamed our posts variable in our tera context to posts_users as this is more descriptive.

./templates/index.html

{% extends "base.html" %}

{% block content %}
{% for post_user in posts_users %}
{% set p = post_user[0] %}
{% set u = post_user[1] %}
<div>
    <a href="{{ p.link }}">{{ p.title }}</a>
    <small>{{ u.username }}</small>
</div>
{% endfor %}
{% endblock %}

The first item in our tuple is the post and the second is the user. With these 2 objects we can now build our index page. We should be able to navigate to 127.0.0.1:8000/ in our browsers now and see the posts that we have made.

! There we go! We have our index page all wired up now and we learned how to do joins to get data from other tables.

Now before we move on, let's do a quick update to our templates so we can access all of our routes without having to manually type them in. We can even clean up our index page so it formats a little bit better.

./templates/base.html

<!DOCTYPE html>
<html lang="en">
    <head>
        <meta charset="utf-8">
        <title>{{title}}</title>
        <style>
body { font-size:18px; }
td { vertical-align:top; }
        </style>
    </head>
    <body>
        <header>
            Hi, <i>Friend</i>, to <b>HackerClone</b>
            <div style="float:right;">
                <button onclick="window.location.href='/login'">
                    Login
                </button>
                <button onclick="window.location.href='/signup'">
                    Sign Up
                </button>
                <button onclick="window.location.href='/submission'">
                    Submit
                </button>
                <button onclick="window.location.href='/logout'">
                    Logout
                </button>
            </div>
        </header>
        <hr>
        {% block content %}
        {% endblock %}
    </body>
</html>

Here all we do is add a header with buttons for the various routes we set up. If we wanted, we could make this template smarter and pass in whether we are logged in or not. Only logged in users should see the submit and logout buttons, whereas only logged out users should see the login and signup buttons.

For now however, we'll leave it as it is.

Now for the index page.

./templates/index.html

{% extends "base.html" %}

{% block content %}
<table>
    {% for post_user in posts_users %}
    {% set p = post_user[0] %}
    {% set u = post_user[1] %}
    <tr>
        <td>{{loop.index}}. </td>
        <td>
        <a href="{{ p.link }}">{{ p.title }}</a>
        <br>
        <small>
            submitted by
            <a href="/user/{{u.username}}">
                {{ u.username }}
            </a>
        </small>
        <small><a href="/post/{{p.id}}">comments</a></small>
        <br>
        <small>{{ p.created_at }}</small>
        </td>
    </tr>
    {% endfor %}
</table>
{% endblock %}

Here we use a table to format our posts and we also add some new information, like the created_at field on the post, as well as links to our user page and comments. For now ignore these links as we will be wiring them up in the next chapter.

Now we have an index page and we have our posts working fully! We can submit new posts and view them, so let's get started on adding comments!

Chapter 10 - Commenting

Welcome back! We currently have a website that we can register on and submit new posts to. The next major piece we need to wire up is our comments. Let's dive right into it!

The first thing we need to do is create our post page, this will be where our comments will show.

The Post Page

In the previous chapter we set up our index page to display all of the posts made on our website, we also added a url for our comments.

./templates/index.html

...
<small><a href="/post/{{p.id}}">comments</a></small>
...

This link requests the page under /post with the id of the post. Now let's move to rust and process this new route.

./src/main.rs

...
            .route("/submission", web::post().to(process_submission))
            .service(
                web::resource("/post/{post_id}")
                    .route(web::get().to(post_page))
            )
...

The first thing we need to do is register our route, but we can't use our trusty route method anymore because we're also trying to pass in data via the url. Instead we register a service, which allows us to do more configuration on the route. This way we can have wildcards and dynamic variables in our paths and still process them.
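To make the dynamic segment idea concrete, here is a rough, framework-free sketch of what matching "/post/{post_id}" involves. The extract_post_id helper is made up purely for illustration; actix-web does all of this for us, including the conversion to i32:

```rust
// Illustration only: a tiny matcher for one dynamic segment, roughly what
// actix-web does for us when we declare "/post/{post_id}".
fn extract_post_id(path: &str) -> Option<i32> {
    // Expect exactly two segments: a literal "post" and the id.
    let mut parts = path.trim_start_matches('/').split('/');
    match (parts.next(), parts.next(), parts.next()) {
        (Some("post"), Some(id), None) => id.parse::<i32>().ok(),
        _ => None,
    }
}

fn main() {
    assert_eq!(extract_post_id("/post/42"), Some(42));
    assert_eq!(extract_post_id("/post/abc"), None); // not an i32
    assert_eq!(extract_post_id("/user/42"), None);  // wrong prefix
}
```

The real router is far more general, but the shape is the same: the path is split into segments, literals must match exactly, and a `{name}` segment is captured and handed to our extractor.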

Now let's take a look at the post_page function that gets run on an HTTP GET.

./src/main.rs

...
async fn post_page(tera: web::Data<Tera>,
    id: Identity,
    web::Path(post_id): web::Path<i32>
) -> impl Responder {

    use schema::posts::dsl::{posts};
    use schema::users::dsl::{users};

    let connection = establish_connection();

    let post :Post = posts.find(post_id)
        .get_result(&connection)
        .expect("Failed to find post.");

    let user :User = users.find(post.author)
        .get_result(&connection)
        .expect("Failed to find user.");

    let mut data = Context::new();
    data.insert("title", &format!("{} - HackerClone", post.title));
    data.insert("post", &post);
    data.insert("user", &user);

    if let Some(_id) = id.identity() {
        data.insert("logged_in", "true");
    } else {
        data.insert("logged_in", "false");
    }

    let rendered = tera.render("post.html", &data).unwrap();
    HttpResponse::Ok().body(rendered)
}
...

For now, we're just going to load the Post and User so that we can display something, we'll work on comments afterwards.

The first thing we do is bring in the tables we need, then we set up a connection to our database.

We then execute a find for our post. We can use find because we know the id of the post we are looking for. Once we have the Post, we run another find for the User that created it, so that we can display their name on the post page. We could also have done a join here, like we did on the index page.

The next thing we do is build a tera context object with the post and user information. We also add a new piece of information to our context object.

We now pass in whether the user is logged in or not. This way we can modify our post page: if the user is logged in, show the comment box; if they aren't logged in, hide it.

We then pass all this data to our post.html template file. Let's create it!

./templates/post.html

{% extends "base.html" %}

{% block content %}

<table>
    <tr>
        <td>
            <a href="{{ post.link }}">{{ post.title }}</a>
            <br>
            <small>
                submitted by
                <a href="/user/{{user.username}}">
                    {{ user.username }}
                </a>
            </small>
            - {{ post.created_at }}
        </td>
    </tr>
</table>


<form action="" method="POST">
    <div>
        <label for="comment">Comment</label>
        <textarea name="comment"></textarea>
    </div>
    <br>
    <input type="submit" value="submit">
</form>

<div>
    Our comments will go here.
</div>
{% endblock %}

The top portion of this page will display what we showed on the index page with some minor modifications.

Our form is a POST to the current page, this means that when we submit a top level comment, we'll be POSTing it to our current url.

At this point we should be able to go to our index page, 127.0.0.1:8000/ and click on comments to get to our post page.

We should see our post's information and a comment box! With our Post Page functioning, let's add our comment functionality.

Submitting New Comments

The first thing we need to do is set our application to handle this route.

./src/main.rs

...
            .route("/submission", web::post().to(process_submission))
            .service(
                web::resource("/post/{post_id}")
                    .route(web::get().to(post_page))
                    .route(web::post().to(comment))
            )
...

We add a route for the HTTP POST request now which will go to our comment function.

Before we set up our comment function, however, we need to build some models. We once again need to create 2 structs: one to reflect our database and one to reflect a new comment that we will insert.

First we need to include our comments from our schema table.

./src/models.rs

use super::schema::{users, posts, comments};

Now we can use our comments table when building out structs.

./src/models.rs

...
#[derive(Debug, Serialize, Queryable)]
pub struct Comment {
    pub id: i32,
    pub comment: String,
    pub post_id: i32,
    pub user_id: i32,
    pub parent_comment_id: Option<i32>,
    pub created_at: chrono::NaiveDateTime,
}

#[derive(Serialize, Insertable)]
#[table_name="comments"]
pub struct NewComment {
    pub comment: String,
    pub post_id: i32,
    pub user_id: i32,
    pub parent_comment_id: Option<i32>,
    pub created_at: chrono::NaiveDateTime,
}

impl NewComment {
    pub fn new(comment: String, post_id: i32,
        user_id: i32, parent_comment_id: Option<i32>) -> Self{
        NewComment {
            comment: comment,
            post_id: post_id,
            user_id: user_id,
            parent_comment_id: parent_comment_id,
            created_at: chrono::Local::now().naive_utc(),
        }
    }
}
...

This should look similar to our Post and NewPost structs. The only thing to note is that we need to make sure our fields and their order match what is in our schema file.

./src/schema.rs

...
table! {
    comments (id) {
        id -> Int4,
        comment -> Varchar,
        post_id -> Int4,
        user_id -> Int4,
        parent_comment_id -> Nullable<Int4>,
        created_at -> Timestamp,
    }
}
...

We will get strange errors if things are out of order!

With that, we have our models set up! Now let's write our comment handler function and actually allow comments to get saved.
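The reason order matters is that Queryable fills struct fields positionally from the row, much like building a struct from a tuple. Here's a framework-free sketch (the from_row function is just for illustration, not diesel's API):

```rust
// Illustration: diesel's Queryable fills struct fields in declaration
// order, much like constructing a struct from a tuple positionally.
#[derive(Debug, PartialEq)]
struct Comment {
    id: i32,
    post_id: i32,
    user_id: i32,
}

// The row comes back as (id, post_id, user_id), in schema order.
fn from_row(row: (i32, i32, i32)) -> Comment {
    Comment { id: row.0, post_id: row.1, user_id: row.2 }
}

// If the struct declared user_id before post_id, the same positional fill
// would silently swap the two columns -- the types are identical, so
// nothing catches the mistake at compile time.
fn main() {
    let c = from_row((1, 10, 99));
    assert_eq!(c, Comment { id: 1, post_id: 10, user_id: 99 });
}
```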

We first need to include our 2 new structs in our main.rs file.

./src/main.rs

...
use models::{User, NewUser, LoginUser, Post, NewPost, Comment, NewComment};
...

Now we can write our comment function.

./src/main.rs

...
#[derive(Deserialize)]
struct CommentForm {
    comment: String,
}

async fn comment(
    data: web::Form<CommentForm>,
    id: Identity,
    web::Path(post_id): web::Path<i32>
) -> impl Responder {

    if let Some(id) = id.identity() {
        use schema::posts::dsl::{posts};
        use schema::users::dsl::{users, username};

        let connection = establish_connection();

        let post :Post = posts.find(post_id)
            .get_result(&connection)
            .expect("Failed to find post.");

        let user :Result<User, diesel::result::Error> = users
            .filter(username.eq(id))
            .first(&connection);

        match user {
            Ok(u) => {
                let parent_id = None;
                let new_comment = NewComment::new(data.comment.clone(), post.id, u.id, parent_id);

                use schema::comments;
                diesel::insert_into(comments::table)
                    .values(&new_comment)
                    .get_result::<Comment>(&connection)
                    .expect("Error saving comment.");


                return HttpResponse::Ok().body("Commented.");
            }
            Err(e) => {
                println!("{:?}", e);
                return HttpResponse::Ok().body("User not found.");
            }
        }
    }

    HttpResponse::Unauthorized().body("Not logged in.")
}
...

We first need a Form Extractor struct for our comment. Then we get into our actual comment function.

Inside we make sure that the user has a session and then we begin our database lookups. We first check to see if our post exists, then we get the user using the session.

At this point we have all the pieces we need, we have the post, we have the user and we have the comment.

The next step is to make sure our User is valid, and if it is, we go on to initialize our NewComment struct. Here we set the parent_id to None because top level comments don't have parents.
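This is exactly what the Option<i32> on parent_comment_id buys us. A minimal sketch of the two cases (struct trimmed down for illustration):

```rust
// Illustration: top-level comments have no parent; replies carry the id of
// the comment they answer.
struct NewComment {
    comment: String,
    parent_comment_id: Option<i32>,
}

fn main() {
    let top_level = NewComment {
        comment: String::from("First!"),
        parent_comment_id: None, // top-level: no parent
    };
    let reply = NewComment {
        comment: String::from("Replying to you"),
        parent_comment_id: Some(1), // reply to comment id 1
    };
    assert!(top_level.parent_comment_id.is_none());
    assert_eq!(reply.parent_comment_id, Some(1));
    assert_eq!(top_level.comment.len(), 6);
}
```

On the SQL side this maps onto the Nullable<Int4> column we declared: None becomes NULL, Some(1) becomes 1.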

Once it's been initialized, we insert our NewComment into our comments table.

We then send back a message letting the user know that we received their comment.

With that, we have our comments being saved to the database.

Now we will go back to our post_page function so we can display our comments.

Displaying Comments

We've structured our table so that comments belong to posts, we did this by setting up a foreign key constraint when we first wrote our SQL.

./migrations/2020-10-18-064517_hackerclone/up.sql

...
CREATE TABLE comments (
    id SERIAL PRIMARY KEY,
    comment VARCHAR NOT NULL,
    post_id INT NOT NULL,
    user_id INT NOT NULL,
    parent_comment_id INT,
    created_at TIMESTAMP NOT NULL,

    CONSTRAINT fk_post
        FOREIGN KEY(post_id)
            REFERENCES posts(id),

    CONSTRAINT fk_user
        FOREIGN KEY(user_id)
            REFERENCES users(id),

    CONSTRAINT fk_parent_comment
        FOREIGN KEY(parent_comment_id)
            REFERENCES comments(id)
);
...

In previous chapters, we did joins to get data referred to by foreign keys. For comments, instead of a join we will use diesel's association construct.

This way we can get all comments belonging to a particular post.

The first thing we need to do is to expose this relationship in our models.

./src/models.rs

...
#[derive(Debug, Serialize, Queryable, Identifiable, Associations)]
#[belongs_to(Post)]
pub struct Comment {
    pub id: i32,
    pub comment: String,
    pub post_id: i32,
    pub user_id: i32,
    pub parent_comment_id: Option<i32>,
    pub created_at: chrono::NaiveDateTime,
}
...

Comment is our child table, and we add the Identifiable and Associations traits to this model. We also add the belongs_to macro stating which parent this child belongs to.

./src/models.rs

...
#[derive(Debug, Serialize, Queryable, Identifiable)]
pub struct Post {
    pub id: i32,
    pub title: String,
    pub link: Option<String>,
    pub author: i32,
    pub created_at: chrono::NaiveDateTime,
}
...

Then in our parent, Post, we simply add the Identifiable trait.

With that, we have our relationship set up.

Then in our post_page function we will gather up all the comments that belong to our post.

./src/main.rs

...
    let comments :Vec<(Comment, User)> = Comment::belonging_to(&post)
        .inner_join(users)
        .load(&connection)
        .expect("Failed to find comments.");

    let mut data = Context::new();
    data.insert("title", &format!("{} - HackerClone", post.title));
    data.insert("post", &post);
    data.insert("user", &user);
    data.insert("comments", &comments);
...

Our Comment model now has the belonging_to function added to it via the "#[belongs_to(Post)]" macro, and we can now use our foreign key to gather up all the comments for a particular post id.

We then use an inner join to retrieve our users, so that each comment's user_id gets translated into a username.
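Conceptually, belonging_to(&post) is just a filter on the foreign key. Over in-memory data, in plain Rust, it would look something like this (types simplified for illustration):

```rust
// Illustration: belonging_to(&post) is conceptually "all comments whose
// post_id equals this post's id" -- a filter on the foreign key.
#[derive(Debug, PartialEq)]
struct Comment { id: i32, post_id: i32 }

fn belonging_to(post_id: i32, all: &[Comment]) -> Vec<&Comment> {
    all.iter().filter(|c| c.post_id == post_id).collect()
}

fn main() {
    let all = vec![
        Comment { id: 1, post_id: 10 },
        Comment { id: 2, post_id: 20 },
        Comment { id: 3, post_id: 10 },
    ];
    let for_post_10 = belonging_to(10, &all);
    assert_eq!(for_post_10.len(), 2);
    assert!(for_post_10.iter().all(|c| c.post_id == 10));
}
```

The difference is that diesel generates the WHERE clause for us, so the filtering happens in postgres rather than in our application.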

We then pass our comments into our tera context object where we can now loop through our list and display the comments.

./templates/post.html

...
<form action="" method="POST">
    <div>
        <label for="comment">Comment</label>
        <br>
        <textarea name="comment"></textarea>
    </div>
    <br>
    <input type="submit" value="submit">
</form>

<br>
{% for comment_user in comments %}
{% set comment = comment_user[0] %}
{% set user = comment_user[1] %}
<div>
    {{comment.comment}}
    <br>
    <small> by {{user.username}}</small>
    <hr>
</div>
{% endfor %}
{% endblock %}
...

We loop through our list of comments and users and display each piece of information.

Now if we go to our website at 127.0.0.1:8000/ we should be able to navigate to one of our posts and view the post page.

Voila! We should now be able to post new comments and view those comments on the post page.

Whew! We've just finished up another major piece of our puzzle and learned a little bit more about how diesel works. We saw how the belongs_to function is useful and how we can chain inner_joins. This should also make it clear how the models, macros and routes all come together to form a web application.

Next chapter we're going to build our user page, see you soon!


Chapter 11 - User Profiles


Welcome back! At this point we have a functional website with all the traditional functions of a news aggregator like hacker news. (Except for voting and replying to comments, which are arguably very important, but let's ignore that.) Now we'll build a user profile page, because who doesn't want to be able to see your submissions and comments!?

User Profile Page

In a previous chapter we already set up a url to access our profile page on the index page.

./templates/index.html

...
        <a href="{{ p.link }}">{{ p.title }}</a>
        <br>
        <small>
            submitted by
            <a href="/user/{{u.username}}">
                {{ u.username }}
            </a>
        </small>
...

This is very similar to what we set up when trying to go to our post page and so the steps to start are very much the same. We'll need to register a new route and we will be passing in data via the url.

Before we set up our route however, we'll first need to define our relationships. Both comments and posts belong to a user. Like how we defined the relationship between comments and posts, we'll need to do the same thing for comments and users, and posts and users.

./src/models.rs

...
#[derive(Debug, Serialize, Queryable, Identifiable)]
pub struct User {
    pub id: i32,
    pub username: String,
    pub email: String,
    pub password: String,
}
...

We first add the Identifiable trait to our parent, User.

./src/models.rs

..
#[derive(Debug, Serialize, Queryable, Identifiable, Associations)]
#[belongs_to(User, foreign_key="author")]
pub struct Post {
    pub id: i32,
    pub title: String,
    pub link: Option<String>,
    pub author: i32,
    pub created_at: chrono::NaiveDateTime,
}
...

We then add the Associations trait to Post along with the belongs_to macro. Here we have to name our foreign key: by default diesel assumes foreign keys are named tablename_id, so for the User table it would assume user_id. However, we defined the column as author on our Post table.
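As a sketch of that naming convention (slightly simplified: diesel snake-cases the parent struct's name, which for single-word names like User amounts to lowercasing):

```rust
// Illustration of diesel's default: the foreign key for a parent struct is
// its snake_cased name plus "_id". Simplified here to lowercasing, which is
// the same thing for single-word names like User or Post.
fn default_fk(parent: &str) -> String {
    format!("{}_id", parent.to_lowercase())
}

fn main() {
    assert_eq!(default_fk("User"), "user_id");
    assert_eq!(default_fk("Post"), "post_id");
    // Our Post model names its user foreign key "author" instead, which is
    // why we had to spell out foreign_key="author" in the belongs_to macro.
}
```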

./src/models.rs

...
#[derive(Debug, Serialize, Queryable, Identifiable, Associations)]
#[belongs_to(Post)]
#[belongs_to(User)]
pub struct Comment {
    pub id: i32,
    pub comment: String,
    pub post_id: i32,
    pub user_id: i32,
    pub parent_comment_id: Option<i32>,
    pub created_at: chrono::NaiveDateTime,
}
...

In our Comment model, we just need to add the belongs_to macro. Here we don't need to specify our foreign key because diesel's assumption (user_id) holds.

Now we have our relationships defined! Let's set up our route.

./src/main.rs

...
            .service(
                web::resource("/post/{post_id}")
                    .route(web::get().to(post_page))
                    .route(web::post().to(comment))
            )
            .service(
                web::resource("/user/{username}")
                    .route(web::get().to(user_profile))
            )
...

Here we add another service for /user and we route this to our user_profile function.

./src/main.rs

...
async fn user_profile(tera: web::Data<Tera>,
    web::Path(requested_user): web::Path<String>
) -> impl Responder {
    use schema::users::dsl::{username, users};

    let connection = establish_connection();
    let user :User = users.filter(username.eq(requested_user))
        .get_result(&connection)
        .expect("Failed to find user.");

    let posts :Vec<Post> = Post::belonging_to(&user)
        .load(&connection)
        .expect("Failed to find posts.");

    let comments :Vec<Comment> = Comment::belonging_to(&user)
        .load(&connection)
        .expect("Failed to find comments.");

    let mut data = Context::new();
    data.insert("title", &format!("{} - Profile", user.username));
    data.insert("user", &user);
    data.insert("posts", &posts);
    data.insert("comments", &comments);

    let rendered = tera.render("profile.html", &data).unwrap();
    HttpResponse::Ok().body(rendered)
}
...
  • The title was originally missing in the context object; added thanks to the commenter below.

This will look similar to what we've done before, but let's go through the key portion.

We first get the requested user, and if we don't find them, we panic.

We then do two calls to belonging_to against the Post and Comment using the user we found. Once we have our results we pass them to our tera context object and render them.

./templates/profile.html

{% extends "base.html" %}

{% block content %}
<h3>{{user.username}} Profile</h3>

<h5>Posts</h5>
<hr>
{% for post in posts %}
<div>
    <a href="/post/{{post.id}}">{{post.title}}</a>
</div>
{% endfor %}


<h5>Comments</h5>
<hr>
{% for comment in comments %}
<div>
    <a href="/post/{{comment.post_id}}">{{comment.comment}}</a>
    <br>
</div>
{% endfor %}
{% endblock %}

Now we should have everything getting displayed on our profile page.

If we navigate to 127.0.0.1:8000/user/username, we should be able to see all of our posts and comments!

And with that, we are done!! We have finished up quite a bit now. Our website can handle user registration, logging in, making new posts, making comments, and finally user profiles.

With that you should now be able to see the structure of how a web app works and how the different pieces all come together. We have routes, functions that handle routes, models that correspond to our database and finally our database. We have templates and sessions acting as middleware. All of these things come together to make our web application!

In the next few chapters we're going to do some clean up, but for now pat yourself on the back for making it this far.

Chapter 12 - Passwords

Welcome back! I hope you had a well deserved break. Or you came straight here, in which case, coughnerdcough. At this point I hope that you can see the forest that is our application and how the major pieces fit together. Now it's time to get into the weeds. In this chapter we're going to fix one of our most glaring mistakes: our passwords. We currently save plaintext passwords to our database. Let's fix this!

What we want to do is hash the passwords we get when a user registers for the first time. We don't want to save unhashed passwords because if our database falls into the wrong hands, then they still shouldn't be able to compromise our users.

We will use the argonautica rust crate to do our password hashing. As a rule, you should always use a respected, well-vetted library for any sort of password hashing.

Hashing Passwords

The first step to make our password storage better is to include the argonautica crate.

./Cargo.toml

...
chrono = { version = "0.4", features = ["serde"] }
argonautica = "0.2"

Next, we need to add a function to our NewUser struct in our models.rs file. This is because we want to do our password hashing in our model.

Before we start we need to add 2 new includes to our models.

./src/models.rs

use dotenv::dotenv;
use argonautica::Hasher;

We will use dotenv to load in our secret key which we will then use in our Hasher which we get from argonautica.

./src/models.rs

...
#[derive(Debug, Deserialize, Insertable)]
#[table_name="users"]
pub struct NewUser {
    pub username: String,
    pub email: String,
    pub password: String,
}

impl NewUser {
    pub fn new(username: String, email: String, password: String) -> Self {
        dotenv().ok();

        let secret = std::env::var("SECRET_KEY")
            .expect("SECRET_KEY must be set");

        let hash = Hasher::default()
            .with_password(password)
            .with_secret_key(secret)
            .hash()
            .unwrap();

        NewUser {
            username: username,
            email: email,
            password: hash,
        }
    }
}
...

Here we add a new constructor that takes in a username, email, and password and returns a NewUser object. Inside our new function we load in our .env file (to which we still need to add a SECRET_KEY) and then we read SECRET_KEY.

Next we run the Hasher's hash function against our password using our key.

Finally we return a NewUser object with our hashed password.

./.env

DATABASE_URL=postgres://postgres:postgres@localhost/hackerclone
SECRET_KEY="THIS IS OUR SUPER SUPER SUPER SUPER SECRET KEY"

The secret key should be a randomly generated string. We will use this again when we go to compare a password to a hash.

Now we can go back to our main.rs file and update our process_signup function.

./src/main.rs

...
async fn process_signup(data: web::Form<NewUser>) -> impl Responder {
    use schema::users;

    let connection = establish_connection();

    let new_user = NewUser::new(data.username.clone(), data.email.clone(), data.password.clone());

    diesel::insert_into(users::table)
        .values(&new_user)
        .get_result::<User>(&connection)
        .expect("Error registering user.");

    println!("{:?}", data);
    HttpResponse::Ok().body(format!("Successfully saved user: {}", data.username))
}
...

Now, instead of inserting the data we extracted from our request, we will use the extracted data to build a NewUser object using the constructor. This way it will run the password through our hash function.

We then insert this NewUser object instead and from this point on we will not be saving passwords in our database.

Almost there! The next piece we need to update is our login function.

./src/main.rs

           if u.password == data.password {
                let session_token = String::from(u.username);
                id.remember(session_token);
                HttpResponse::Ok().body(format!("Logged in: {}", data.username))
            } else {
                HttpResponse::Ok().body("Password is incorrect.")
            }

Currently we do a straight comparison of our password with what we have in our database. Now that we've hashed the password in the database, we need to do the same hashing when we go to compare them.

./src/main.rs

...
    match user {
        Ok(u) => {
            dotenv().ok();
            let secret = std::env::var("SECRET_KEY")
                .expect("SECRET_KEY must be set");

            let valid = Verifier::default()
                .with_hash(u.password)
                .with_password(data.password.clone())
                .with_secret_key(secret)
                .verify()
                .unwrap();

            if valid {
                let session_token = String::from(u.username);
                id.remember(session_token);
                HttpResponse::Ok().body(format!("Logged in: {}", data.username))
            } else {
                HttpResponse::Ok().body("Password is incorrect.")
            }
        },
        Err(e) => {
            println!("{:?}", e);
            HttpResponse::Ok().body("User doesn't exist.")
        }
    }
...

Now our process_login function will use the Verifier in argonautica. Similar to how we did the Hasher, we will first use dotenv to load in the environment variables.

We then read in the secret key and run the verify function against the password hash and the password we received from the user. If the verify function succeeds, then it means the user has entered the correct password and we can log them in.
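The hash-then-verify round trip is the key shape here. As a toy illustration only, using std's DefaultHasher (which is emphatically NOT a password hash: argon2 is deliberately slow and salted, DefaultHasher is neither, so never store real passwords this way):

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

// Toy stand-in for Hasher: mix the secret and password into one value.
// Real code must use argonautica (argon2) or similar -- this is only to
// show the round trip of "hash at signup, verify at login".
fn toy_hash(password: &str, secret: &str) -> u64 {
    let mut h = DefaultHasher::new();
    secret.hash(&mut h);
    password.hash(&mut h);
    h.finish()
}

// Toy stand-in for Verifier: re-hash the submitted password and compare.
fn toy_verify(stored: u64, password: &str, secret: &str) -> bool {
    toy_hash(password, secret) == stored
}

fn main() {
    let stored = toy_hash("hunter2", "secret-key"); // saved at signup
    assert!(toy_verify(stored, "hunter2", "secret-key")); // login succeeds
    assert!(!toy_verify(stored, "wrong-password", "secret-key")); // rejected
}
```

The point is that we never need to recover the original password: at login we hash the submitted password the same way and compare hashes, which is exactly what argonautica's Verifier does for us (along with salting and the slow argon2 work factor).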

Voila! We have now fixed our passwords! Note that at this point we have broken our existing users, as our new comparison function will be comparing a hashed password against an unhashed one. You will need to create new users to test against.

In the next chapter we'll look at setting up connection pooling!

Chapter 13 - Connection Pooling

Welcome back! The next thing we'll focus on fixing up is our connection to postgres. Currently anytime we want to talk to our database we initiate a connection via our establish_connection function. Instead we're going to use connection pooling!

Connection Pooling

Database connections aren't cheap, they take time to set up and so opening a new connection every time we want to do a query is a little bit wasteful. With connection pooling, we would maintain a number of connections to our database that our requests can then use.

This speeds up things quite a bit!

The first thing we need to do is include the generic connection pooling crate, r2d2 and then we need to enable r2d2 for diesel.

./Cargo.toml

...
diesel = { version = "1.4.4", features = ["postgres", "chrono", "r2d2"] }
...
r2d2 = "0.8"
...

Once we have our connection pooling crate and feature set, we can now set up a pool in our main.rs file.

The first thing we'll do is include the r2d2 connection pooling crate.

./src/main.rs

...
use diesel::prelude::*;
use diesel::pg::PgConnection;
use diesel::{r2d2::ConnectionManager};
type Pool = r2d2::Pool<ConnectionManager<PgConnection>>;
...

Here we brought in the ConnectionManager type and r2d2. We also added a type alias so that the name "Pool" now means that specific type. We'll need this type in a few places, and having to write it all out each time would be painful.

Now let's set up our connections.

./src/main.rs

...
#[actix_web::main]
async fn main() -> std::io::Result<()> {
    dotenv().ok();
    let database_url = std::env::var("DATABASE_URL")
        .expect("DATABASE_URL must be set");

    let manager = ConnectionManager::<PgConnection>::new(database_url);
    let pool = r2d2::Pool::builder().build(manager)
            .expect("Failed to create postgres pool.");

    env_logger::init();

    HttpServer::new(move || {
        let tera = Tera::new("templates/**/*").unwrap();

        App::new()
            .wrap(Logger::default())
            .wrap(IdentityService::new(
                    CookieIdentityPolicy::new(&[0;32])
                    .name("auth-cookie")
                    .secure(false)
            )
            )
            .data(tera)
            .data(pool.clone())
            .route("/", web::get().to(index))
...
  • After a helpful comment from below, I've updated this chunk of code to be more in line with the way Actix does things. Originally the pool creation was happening inside our actix thread, which meant that we were creating a pool of database connections for each actix thread. Actix will create a thread for each core you have, and the commenter below had 16 cores, which meant 16 actix threads got started. Each pool contains 10 connections, so they would be using 160 connections to postgres, exceeding the maximum of 100.
  • My PC had only 4 cores so it was well under the limit which is why I didn't run into the same issue!
  • The correction was to move the pools out of the anonymous function that way the pools are shared across threads, so we will have 16 actix threads, sharing 10 connections. Much better!

We set up our connection pooling outside our anonymous function so that all our actix threads can share the same pool of connections.

We start by using dotenv to set up our environment. We then read in the database_url from our environment.

Next we set up a ConnectionManager using our database_url. This is very much like what we do in our establish_connection function.

Next we build a pool of connections using our ConnectionManager. This is the pool of connections we can draw from. Every time we call pool.get(), that returns a connection to postgres that we can use.

We use move on our anonymous function because we want to shift ownership from our main function into the closure. This way, when the threads get started, each one has its own handle to the connection pool.

The next step is the part where we register the pool with our App. This is done via the data call just like with tera. This means that like our template engine, we can now pass this pool object and make it available to all of our route handling functions.

The thing to note here is that we do a pool.clone() in our data call. The closure runs once per thread, so each thread gets its own handle to the pool, while the underlying connections are shared.
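Cloning the pool is cheap because, like Arc, an r2d2 Pool clone is just a new shared handle to the same inner state rather than a new set of connections. A std-only sketch of that sharing, using Arc as a stand-in for the pool:

```rust
use std::sync::Arc;

// Illustration: cloning a shared handle only bumps a reference count.
// Every clone points at the same underlying data, just as every clone of
// an r2d2 Pool draws from the same set of connections.
fn main() {
    let pool = Arc::new(vec!["conn1", "conn2"]); // stand-in for a pool
    let for_thread_a = Arc::clone(&pool);
    let for_thread_b = Arc::clone(&pool);

    // All three handles point at the same allocation.
    assert!(Arc::ptr_eq(&pool, &for_thread_a));
    assert_eq!(Arc::strong_count(&pool), 3);

    drop(for_thread_b); // a thread shutting down just drops its handle
    assert_eq!(Arc::strong_count(&pool), 2);
}
```

This is why moving the pool creation outside the closure matters: one pool, many cheap handles, instead of one full pool per thread.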

Let's update our index function to use our new pool object!

./src/main.rs

...
async fn index(tera: web::Data<Tera>, pool: web::Data<Pool>) -> impl Responder {
    use schema::posts::dsl::{posts};
    use schema::users::dsl::{users};

    let connection = pool.get().unwrap();
    let all_posts :Vec<(Post, User)> = posts.inner_join(users)
        .load(&connection)
        .expect("Error retrieving all posts.");

    let mut data = Context::new();
    data.insert("title", "Hacker Clone");
    data.insert("posts_users", &all_posts);

    let rendered = tera.render("index.html", &data).unwrap();
    HttpResponse::Ok().body(rendered)
}
...

Here we add our pool as a parameter to our index function. This is why we made the type alias; had we not created the Pool alias, we would have had to write out pool: web::Data<r2d2::Pool<ConnectionManager<PgConnection>>> everywhere.

Next, we change our connection variable from "establish_connection()" to "pool.get().unwrap()". With that, we are done! We now have connection pooling set up.

We can now do the same thing for all of our establish_connection calls. We first add the pool variable to our function parameters and then we swap out the connection variable.

With that, we should see no visible difference in our website, but we'll know in our hearts that connection pooling is up and working!

In the next chapter we'll work on some error handling, get ready!

Chapter 14 - Error Handling

Welcome back! We just cleaned up our passwords and now in this chapter we're finally going to do something we should have been doing before. We're going to actually handle our errors. Buckle in! This is going to be a long one. (Simple though!)

Currently, any and all errors just cause our application to crash. This means we are always blowing up when our user does something that triggers an error. For some errors this is helpful, but because this is a web application, we can't have our application just breaking in the middle of a request. This is why we need to set up an error handler.

Let's get started!

Preamble

Before we start, let's take a look at one function in particular and how to correct it. We're going to look at our process_login function because it is the first page where our user and our application interact and can run into problems.

./src/main.rs

...
async fn process_login(data: web::Form<LoginUser>, id: Identity, pool: web::Data<Pool>) -> impl Responder {
    use schema::users::dsl::{username, users};

    let connection = pool.get().unwrap();
    let user = users.filter(username.eq(&data.username)).first::<User>(&connection);

    match user {
        Ok(u) => {
            dotenv().ok();
            let secret = std::env::var("SECRET_KEY")
                .expect("SECRET_KEY must be set");

            let valid = Verifier::default()
                .with_hash(u.password)
                .with_password(data.password.clone())
                .with_secret_key(secret)
                .verify()
                .unwrap();

            if valid {
                let session_token = String::from(u.username);
                id.remember(session_token);
                HttpResponse::Ok().body(format!("Logged in: {}", data.username))
            } else {
                HttpResponse::Ok().body("Password is incorrect.")
            }
        },
        Err(e) => {
            println!("{:?}", e);
            HttpResponse::Ok().body("User doesn't exist.")
        }
    }
}
...

We have 4 places in our function where we can panic, blow up, break, end the world, error out.

The first error spot is our pool.get function. We currently have an unwrap that will cause a panic if pool.get() comes back with an error.

The second error is our users.filter function. In this case we are doing a match statement to handle the error. This is a valid way to handle the error but we'll be making it a touch better.

The third error we need to deal with is our secret, currently we use the .expect function where rust will panic but will print our error message along with it.

The fourth error is our Verifier function, our verify call could come back with an error and we simply have an unwrap there. Our application would just panic and stop.

As you can see, we have quite a few error spots, and currently, in all but one of them, we let the error crash our application. For the user this means they get a broken page.

Let's reproduce this. One error we can trigger easily is our secret key.

./src/main.rs

            let secret = std::env::var("SECRET_KEY_FAKE")
                .expect("SECRET_KEY must be set");

Let's change our var to something that we know doesn't exist in our .env file.

Now if we navigate to our 127.0.0.1:8000/login and try to login we will cause rust to panic.

In our terminal we should see the following.

thread 'actix-rt:worker:2' panicked at 'SECRET_KEY must be set: NotPresent', src/main.rs:127:51

This is a helpful message. However the user sees something completely different.

User - Firefox

The connection was reset

The connection to the server was reset while the page was loading.

    The site could be temporarily unavailable or too busy. Try again in a few moments.
    If you are unable to load any pages, check your computer’s network connection.
    If your computer or network is protected by a firewall or proxy, make sure that Firefox is permitted to access the Web.

Unhelpful to say the least!

Now you can see the problem. Let's fix it!

Handling Errors

When an error is encountered, we want to send the user back a message saying that there is an error on the server, please try again later. Depending on the error we may want to give them more or less information.

After all, if our database is blowing up, that isn't something our users want to know about. However, if their password is causing issues, then of course we want to let them know.

This is why errors fall into 2 categories: Internal Errors, which are our application's errors, and User Errors, which are driven by the user.

3 of our errors in process_login are Internal Errors, 1 is a User Error.

Actix supplies us with the ResponseError trait from actix_web::error, this trait allows us to make it so that we can define errors and HttpResponses to send.

./src/main.rs

...
#[derive(Debug)]
enum ServerError {
    ArgonauticError,
    DieselError,
    EnvironmentError,
    R2D2Error,
    UserError(String)
}
...

The first thing we do is set up an enum with a variety of errors. Really we only need two errors, Internal and User, but to make things a little clearer, we're going to have separate variants for the different errors we can generate.

Now the next step is to implement the Display trait for our enum. This is what will get printed if we do a println!("{}", ServerError::DieselError). This is mandatory because actix's ResponseError trait requires Display to be implemented.

./src/main.rs

impl std::fmt::Display for ServerError {
    fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
        write!(f, "Test")
    }
}

Unfortunately I can't explain too much besides that it works!

  • Amusingly, this code was basically written by the compiler as it told me what to do.
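There isn't much magic here, though: fmt receives a formatter, and write! pushes whatever string we choose into it. Here is a minimal, self-contained sketch (using a hypothetical MyError enum, not our actual ServerError) showing how we could print a different message per variant instead of a fixed string:

```rust
use std::fmt;

// Hypothetical error enum, for illustration only.
#[derive(Debug)]
enum MyError {
    Database,
    User(String),
}

// Display controls what println!("{}", err) prints.
impl fmt::Display for MyError {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        match self {
            MyError::Database => write!(f, "Internal database error"),
            MyError::User(msg) => write!(f, "User error: {}", msg),
        }
    }
}

fn main() {
    println!("{}", MyError::Database);
    println!("{}", MyError::User("bad password".to_string()));
}
```

Matching per variant like this would be a nice upgrade over printing "Test" for everything, once the rest of the error handling is in place.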

Now let's get to the fun part. Let's implement the actix trait ResponseError onto our ServerError.

./src/main.rs

...
impl actix_web::error::ResponseError for ServerError {
    fn error_response(&self) -> HttpResponse {
        match self {
            ServerError::ArgonauticError => HttpResponse::InternalServerError().json("Argonautica Error."),
            ServerError::DieselError => HttpResponse::InternalServerError().json("Diesel Error."),
            ServerError::EnvironmentError => HttpResponse::InternalServerError().json("Environment Error."),
            ServerError::R2D2Error => HttpResponse::InternalServerError().json("Database Error."),
            ServerError::UserError(data) => HttpResponse::InternalServerError().json(data)
        }
    }
}
...

error_response is a function inside actix's ResponseError trait that comes with a default implementation. When we write this function for our ServerError, it overrides the default error_response that ResponseError provides.

So in our error_response function, all we're going to do is send back a particular HttpResponse depending on the match. If we trigger an ArgonauticError, we're going to send something different from our DieselError.

This is because we want to be clear but in reality the user doesn't care and so these messages could all be collapsed into one generic internal error message.
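The mechanism at work here is a trait with a provided (default) method. The sketch below uses a made-up ErrorResponse trait, not actix's real one, to show how an implementor can either keep the default body or override it, which is exactly what our ServerError does to ResponseError:

```rust
// Made-up trait for illustration; actix's ResponseError works the same way.
trait ErrorResponse {
    // A provided method: used when the implementor doesn't override it.
    fn error_response(&self) -> String {
        String::from("500 Internal Server Error")
    }
}

struct DefaultError;
impl ErrorResponse for DefaultError {} // keeps the default body

struct CustomError;
impl ErrorResponse for CustomError {
    // Overrides the default, like our ServerError overrides actix's.
    fn error_response(&self) -> String {
        String::from("Argonautica Error.")
    }
}

fn main() {
    println!("{}", DefaultError.error_response());
    println!("{}", CustomError.error_response());
}
```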

Now for the key part. What we need to do is now implement a From trait on our ServerError so that if an error is generated in our process_login function, it will get automatically converted into one of our errors.

The first step is to change the return type of our function. Instead of a Responder, we're now going to return a Result.

./src/main.rs

async fn process_login(data: web::Form<LoginUser>, id: Identity, pool: web::Data<Pool>) -> Result<HttpResponse, ServerError> {

Now we will return a Result that is either an HttpResponse or a ServerError.

Next we need to update our function so that it sends back a Result type. This means we need to wrap all the HttpResponses in process_login with Ok().

./src/main.rs

async fn process_login(data: web::Form<LoginUser>, id: Identity, pool: web::Data<Pool>) -> Result<HttpResponse, ServerError> {
    use schema::users::dsl::{username, users};

    let connection = pool.get().unwrap();
    let user = users.filter(username.eq(&data.username)).first::<User>(&connection);

    match user {
        Ok(u) => {
            dotenv().ok();
            let secret = std::env::var("SECRET_KEY")
                .expect("SECRET_KEY must be set");

            let valid = Verifier::default()
                .with_hash(u.password)
                .with_password(data.password.clone())
                .with_secret_key(secret)
                .verify()
                .unwrap();

            if valid {
                let session_token = String::from(u.username);
                id.remember(session_token);
                Ok(HttpResponse::Ok().body(format!("Logged in: {}", data.username)))
            } else {
                Ok(HttpResponse::Ok().body("Password is incorrect."))
            }
        },
        Err(e) => {
            println!("{:?}", e);
            Ok(HttpResponse::Ok().body("User doesn't exist."))
        }
    }
}
...

Now let's change our secret statement: instead of using .expect, we will now use the ? shorthand.

./src/main.rs

let secret = std::env::var("SECRET_KEY_FAKE")?;

What this means is that if SECRET_KEY_FAKE isn't there then it should immediately return with the Error. Before we had it panic, now we return with the error.

Our return type is Result<HttpResponse, ServerError>, so this means that rust will attempt to convert the error we get from std::env into our ServerError.

To do this we need to implement a From trait on our ServerError.

./src/main.rs

...
impl From<std::env::VarError> for ServerError {
    fn from(_: std::env::VarError) -> ServerError {
        ServerError::EnvironmentError
    }
}
...

All this trait does is say: if we get an error of type std::env::VarError, return a ServerError::EnvironmentError.

Then our ResponseError's error_response function will run and match against EnvironmentError.

Now we should be able to navigate to 127.0.0.1:8000/login on our browser and try to login. Instead of panicking, we should see nothing in our terminal and our user should see an error saying "Environment Error".

That is error handling! We just took an error from the secret key and instead of our request dying immediately, we handled it and sent back a message to the user.
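The full pipeline here — a custom error enum, a From impl, and the ? operator doing the conversion — can be seen in miniature without any web machinery at all. This sketch uses std's ParseIntError and a hypothetical AppError standing in for our ServerError:

```rust
use std::num::ParseIntError;

#[derive(Debug, PartialEq)]
enum AppError {
    BadNumber,
}

// ? calls From::from on the error before returning it early,
// which is how a ParseIntError becomes an AppError automatically.
impl From<ParseIntError> for AppError {
    fn from(_: ParseIntError) -> AppError {
        AppError::BadNumber
    }
}

// On a bad parse, the ? returns Err(AppError::BadNumber) immediately.
fn double(input: &str) -> Result<i32, AppError> {
    let n: i32 = input.parse()?;
    Ok(n * 2)
}

fn main() {
    println!("{:?}", double("21"));   // Ok(42)
    println!("{:?}", double("oops")); // Err(BadNumber)
}
```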

R2D2Error

Now let's handle the pool.get().unwrap(). To handle the error we need to implement a From trait for the specific error type. Connection pooling is done through the crate r2d2 so we will need to get the error type from there.

./src/main.rs

...
impl From<r2d2::Error> for ServerError {
    fn from(_: r2d2::Error) -> ServerError {
        ServerError::R2D2Error
    }
}
...

We could combine our Diesel and r2d2 errors into one database error type if we wanted, but we'll leave it for now.

./src/main.rs

let connection = pool.get()?;

We can now get rid of our unwrap.

That's 2 errors down. Another 2 to go. Now let's deal with a user error.

UserError

./src/main.rs

    let user = users.filter(username.eq(&data.username)).first::<User>(&connection);

Currently we filter our users table looking for a user and we get a Result back in our user variable. If the Result is Ok, we have a user; if we get an Err, it means the user doesn't exist.

Our match statement is very much a valid strategy for error handling, but it does cause our logic to get nested. Instead of a match, we can clean it up by adding a From trait to ServerError for diesel errors.

./src/main.rs

impl From<diesel::result::Error> for ServerError {
    fn from(err: diesel::result::Error) -> ServerError {
        match err {
            diesel::result::Error::NotFound => ServerError::UserError("Username not found.".to_string()),
            _ => ServerError::DieselError
        }
    }
}

This trait is slightly different from our other traits. This is because when diesel returns an Error here, it doesn't necessarily mean that we have an application error; it could be something the user did as well.

In this case, the user could enter a username not in the database and that would cause our filter statement above to error out.

So in our From trait, we need to process the diesel error. In this case we match only against NotFound; if the error from diesel is of that type, then we send back a UserError with a message.

The _ means match the rest: if any other error from diesel is being processed, simply send back ServerError::DieselError.
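This pattern of matching one specific error kind and sweeping everything else into a generic internal variant can be sketched with std::io errors standing in for diesel's (AppError here is hypothetical, not our real ServerError):

```rust
use std::io::{Error, ErrorKind};

#[derive(Debug, PartialEq)]
enum AppError {
    UserError(String),
    InternalError,
}

// Match one specific error kind, let `_` sweep up the rest,
// mirroring our diesel NotFound handling.
impl From<Error> for AppError {
    fn from(err: Error) -> AppError {
        match err.kind() {
            ErrorKind::NotFound => AppError::UserError("Not found.".to_string()),
            _ => AppError::InternalError,
        }
    }
}

fn main() {
    let not_found = Error::new(ErrorKind::NotFound, "no such user");
    let other = Error::new(ErrorKind::PermissionDenied, "locked");
    println!("{:?}", AppError::from(not_found)); // user-facing error
    println!("{:?}", AppError::from(other));     // generic internal error
}
```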

With that trait, we can now remove our match statement.

./src/main.rs

async fn process_login(data: web::Form<LoginUser>, id: Identity, pool: web::Data<Pool>) -> Result<HttpResponse, ServerError> {
    use schema::users::dsl::{username, users};

    let connection = pool.get()?;
    let user = users.filter(username.eq(&data.username)).first::<User>(&connection)?;

    dotenv().ok();
    let secret = std::env::var("SECRET_KEY")?;

    let valid = Verifier::default()
        .with_hash(user.password)
        .with_password(data.password.clone())
        .with_secret_key(secret)
        .verify()
        .unwrap();

    if valid {
        let session_token = String::from(user.username);
        id.remember(session_token);
        Ok(HttpResponse::Ok().body(format!("Logged in: {}", data.username)))
    } else {
        Ok(HttpResponse::Ok().body("Password is incorrect."))
    }
}

Now you can see that without the errors handled in our function itself, the code gets quite clean.

ArgonauticaError

We're almost there, the final error we'll handle is the Argonautica error from our Verifier.

Let's implement the trait.

./src/main.rs

impl From<argonautica::Error> for ServerError {
    fn from(_: argonautica::Error) -> ServerError {
        ServerError::ArgonauticError
    }
}

With that, all we have to do is change our verify() to use the question mark.

./src/main.rs

    let valid = Verifier::default()
        .with_hash(user.password)
        .with_password(data.password.clone())
        .with_secret_key(secret)
        .verify()?;

Voila! We have set up error handling for quite a few errors now. Now we should be able to go through the rest of our application and rewrite all of our .expects and .unwraps to get handled rather than causing our application to panic and the request to just end.

This chapter is getting a little long for my tastes, but hopefully you can see the structure that error handling really is: an enum with a few From traits is enough to transform a wide variety of errors into something we can handle.

Note - Handling errors for tera would be quite useful: change the return type of a function from Responder to Result, add an enum or use an existing one, and implement the From trait to convert tera errors to ServerErrors. Have fun!

Chapter 15 - Logging

Welcome back! We're getting to the end of our journey now! One thing that you may or may not have noticed after the previous chapter is that our errors are now gone. If we handle all our errors then it means that our errors won't be printed to the screen anymore!

Not good! How are we, the developers, going to know how and why things have gone wrong? To fix this, we now need to add logging!

Let's get started!

Logging

I did only bring up errors, but ideally we also want to log all of the requests and maybe even more. This is because if a user runs into a problem, rarely will they have enough information for us to actually investigate. What we really need are comprehensive logs of what our server is doing, so that we can look to our logs when a user, or our application, has a problem.

Luckily there is a simple way for us to add logging. We need to use the env_logger crate with actix's own logging middleware. We then have our App use this. The idea of logging each request is very similar to the idea of adding cookies to every request. So it isn't surprising that we do both in a similar way!

./Cargo.toml

...
r2d2 = "0.8"
env_logger = "0.8"

We first add env_logger to our list of dependencies.

Next let's set the RUST_LOG level. We do this in our .env file.

./.env

DATABASE_URL=postgres://postgres:password@localhost/hackerclone
SECRET_KEY="THIS IS OUR SUPER SUPER SUPER SUPER SECRET KEY"
RUST_LOG="actix_web=info"

We'll use the info level for our logging; this way anything we log with info!() and error!() gets printed as well.
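Under the hood, a level filter is just a comparison against a threshold. This toy sketch (not the real env_logger, which parses RUST_LOG and filters by module) shows the idea of a message only getting through when its level is at or above the configured one:

```rust
// Toy log levels, ordered by derive: Error < Info < Debug.
#[derive(PartialEq, PartialOrd)]
enum Level {
    Error,
    Info,
    Debug,
}

// Only emit messages at or above the threshold, the way
// RUST_LOG="actix_web=info" admits error!/info! but drops debug output.
fn log(threshold: &Level, level: Level, msg: &str) -> Option<String> {
    if level <= *threshold {
        let tag = match level {
            Level::Error => "ERROR",
            Level::Info => "INFO",
            Level::Debug => "DEBUG",
        };
        let line = format!("[{}] {}", tag, msg);
        println!("{}", line);
        Some(line)
    } else {
        None // filtered out by the level threshold
    }
}

fn main() {
    let threshold = Level::Info;
    log(&threshold, Level::Error, "boom");  // printed
    log(&threshold, Level::Debug, "noise"); // filtered
}
```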

Before we can start using these functions though, we first need to include the logger in our main.rs file.

./src/main.rs

use argonautica::Verifier;
use actix_web::middleware::Logger;

Here we include the Logger utility from actix.

./src/main.rs

#[actix_web::main]
async fn main() -> std::io::Result<()> {
    dotenv().ok();
    env_logger::init();

    HttpServer::new(|| {
        dotenv().ok();

Inside our main function we include the environment variables as we need to pick up our RUST_LOG level. We then initialize the env_logger.

Finally we need to register the logger to our App object.

./src/main.rs


        App::new()
            .wrap(Logger::default())
            .wrap(IdentityService::new(
                    CookieIdentityPolicy::new(&[0;32])
                    .name("auth-cookie")
                    .secure(false)
            )
            )
            .data(tera)
            .data(pool)

We use the same wrap option we used for our cookie; this way every request we get will now have logging functionality added to it.

Now, we can navigate to 127.0.0.1:8000/ and we should get our index page. But now in our terminal window we should also be seeing the actual requests.

[2020-10-22T01:33:14Z INFO  actix_web::middleware::logger] 127.0.0.1:61671 "GET / HTTP/1.1" 200 2270 "-" "Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:81.0) Gecko/20100101 Firefox/81.0" 0.003334
[2020-10-22T01:33:14Z INFO  actix_web::middleware::logger] 127.0.0.1:61671 "GET /favicon.ico HTTP/1.1" 404 0 "http://127.0.0.1:8000/" "Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:81.0) Gecko/20100101 Firefox/81.0" 0.000411

Now we can see every request that hits our server! Perfect!

Now let's add the ability to log what we want to log, which in this case will be the errors.

First let's add the log crate to our application.

./Cargo.toml

...
env_logger = "0.8"
log = "0.4"

Now let's update our .env file to also log our application and not just actix_web.

./.env

RUST_LOG="hacker_clone=info,actix_web=info"

One thing to note is that if you have dashes (-) in your application name, you'll need to make them underscores here.

Now let's log the error we get when a user tries to login with a username that doesn't exist. In the previous chapter we set it up so that it triggers a ServerError::UserError.

./src/main.rs

...
impl From<diesel::result::Error> for ServerError {
    fn from(err: diesel::result::Error) -> ServerError {
        match err {
            diesel::result::Error::NotFound => {
                log::error!("{:?}", err);
                ServerError::UserError("Username not found.".to_string())
            },
            _ => ServerError::DieselError
        }
    }
}
...

All we do is log the error and voila! We have our error saved in our logs now. We can also use log::info! to log things as well.

Now we can update all of our errors, so that in addition to letting the user know, we can also log the errors for ourselves.

Next, we'll work on deploying our application!

Chapter 16 - Deploying Our Application

Welcome back! We're finally here. The end point. Let's just steam right into deploying this little website we made.

Let's kill our cargo watch as we don't need the constant recompiles anymore!

Next, let's build our release.

./

hacker-clone> cargo build --release

This will build the release version of our application.

This may take some time, feel free to watch cargo compile everything and bask in the knowledge that we finally finished something!

Once we have our application built, we now have a binary in ./target/release/ called hacker-clone.

This binary is our application. If we do ./hacker-clone, our binary would actually run. Isn't that amazing?!

We still need our templates folder and we need our .env file but besides that, that's it. Our application went from a huge amount of work to this single binary that we can drop somewhere and run.

Deployment

The idea of deploying our application is that we'll still run our application server on localhost, 127.0.0.1, port 8000. Then we'll use nginx as our actual web server and pass all requests through to port 8000. This way we can serve static files through nginx and serve everything else through our application.

So let's create a new directory on our machine called hacker-release. We'll then copy our release build, ./hacker-clone/target/release/hacker-clone, to our hacker-release folder. We'll also copy over our templates folder and our .env file as it contains our database URL.

If we were really deploying our application, we would need to harden our secret_key and recreate our database.

For now we'll just pretend!

hacker-release> ls -la
total 15744
drwxr-xr-x 1 nivethan nivethan      512 Oct 21 22:16 ./
drwxrwxrwx 1 nivethan nivethan      512 Oct 21 22:15 ../
-rw-r--r-- 1 nivethan nivethan      168 Oct 21 22:16 .env
-rwxr-xr-x 1 nivethan nivethan 15063920 Oct 21 22:16 hacker-clone*
drwxr-xr-x 1 nivethan nivethan      512 Oct 21 22:16 templates/

Now we should be able to do ./hacker-clone to start up our server.

hacker-release> ./hacker-clone

We should be able to navigate to 127.0.0.1:8000 and see our familiar friend, the index page!

We can now set up nginx to begin proxy passing requests to our application. This is done on Windows Subsystem for Linux so there may be differences.

Installing nginx

First let's install nginx.

sudo apt-get install nginx

Once installed we can start nginx.

sudo service nginx start

Now if we have nginx installed properly we should be able to go to 127.0.0.1:80 and see the Welcome to nginx page.

Configuring nginx

Now we can work on wiring our application to nginx. The first step is to write our hacker-clone nginx file.

/etc/nginx/sites-available/hacker-clone

server {
        listen 8080;
        server_name localhost;

        location / {
                proxy_pass http://localhost:8000;
                proxy_redirect off;
                proxy_set_header Host $host;
                proxy_set_header X-Real-IP $remote_addr;
        }
}

Next we need to symlink from our sites-available directory to the nginx sites-enabled directory.

sudo ln -s /etc/nginx/sites-available/hacker-clone /etc/nginx/sites-enabled/

One thing to note about symlinking: it's lowercase s, then source, then destination. I don't think I've ever remembered that!

Let's make sure our nginx configuration is valid, otherwise we might take down our nginx server and it won't come back up!

sudo nginx -T

If we have any errors, nginx will now tell us, otherwise we'll have the entire nginx configuration printed on the screen.

Now we can restart nginx.

sudo service nginx restart

Now if we navigate to 127.0.0.1:8080 we should be able to see our index page again! (If our release binary is still running!)

This doesn't seem like a big deal but it is, because our application is now accessed through nginx. This means that nginx can serve as our web server and our application can be strictly our application!

Now before we finish up there is one last thing to do.

Let's kill our hacker-clone application as it's currently running in the foreground and is logging everything to the screen. Instead we'll run it in the background and log everything to a file.

hacker-release> nohup ./hacker-clone > log.txt &

The "&" makes our application run in the background, the ">" sends the output of our application to a file and the nohup makes it so that we can close our terminal session and our application will continue to run.

And with that we are done!

See you in the conclusion where we can go over things and say good bye!

Chapter 17 - Conclusion

Welcome back! For the last time!

I'm going to split up my thoughts into 2 sections, the first will be my thoughts on the post series. The second part will focus on rust.

A Web App in Rust Thoughts

I'll take some time now to talk about the drawbacks and issues in our application still. I'm going to work backwards from the last chapter to the first and hopefully you'll have some ideas of how to fix it and can tell me!

A major note about the entire project is that I kept most of the code in one file. In reality we should have split the code into handler files and had our main point to the handlers, but to me having everything in one file made it so I could keep everything in my head.

The error handler was the one section that felt like it could go in its own file but I kept it in main.rs just for consistency. I debated doing the models in main but that didn't feel right.

16 - Our application cannot handle being shut down; there is nothing making sure it stays up, which is terrible. I'm not entirely sure what the answer is. Maybe a systemd or sysvinit script; I've used both before but I don't think I got them right. It certainly didn't feel as stable as nginx and systemctl enable.

15 - Logging is currently kind of lackluster. I would love to get java style stack traces. The other thing missing is log rotation; currently the log either goes to the screen or to a file, and it grows forever. Does rust have log rotation? Should it be happening at the linux level instead?

14 - Error handling was quite fun, it was probably the newest thing I learned through rust. My only question about error handling was if there was a way to print out what panic would have printed but to a log. This way instead of the cleaned up error we get the full disaster.

13 - Connection pooling was fine.

12 - Passwords. Ah Passwords, I wonder if maybe this should have been done earlier in the login pages. My focus was on getting the structure of the application done rather than being correct but it is a very simple addition.

11 - User Profiles were fine.

10 - Commenting was fine for the most part, I do wish that I had made the database a little simpler and skipped the foreign key being the user id. It would have been enough to just save the username as I could do the lookup with that as well. We also skipped making replies as I thought it was getting unruly but replies could be an interesting problem.

09 - The index page, my biggest issue here was design, I was going to use bootstrap to quickly build a half decent page but then I thought it felt too big and complex. Keeping it bare bones personally allowed me to see the whole project.

08 - Submitting new posts, this along with comments was fun to set up and play around with. I don't remember having any issues but I'm sure there were.

07 - Logging in a user, I am curious if the actix_identity does indeed use a hashmap, I looked at the source code and it looks really straightforward, I just don't understand it. I'm also wondering if relying on actix_identity is enough to manage sessions.

06 - Registering a user was fine, I wish I added karma as part of the project, that could have been a fun placeholder even if I did nothing with it like the comment replies.

05 - The database was a big topic to cover and I wonder if it is a valid schema and if I screwed up the foreign keys and relationships somehow. I don't have much experience with SQL or databases so it was really just winging it.

04 - The forms were straightforward, however a big piece missing is the CSRF token, no idea what its for but it sounded important everywhere else and I didn't even bother with it.

03 - Complex templates and templates could probably be merged together as they aren't big enough to warrant separate posts, but they still make logical sense broken up in my head.

02 - The templates were fine, simple but a better design must be possible without classes and just using plain html. I'll need to think about it. I should add static files to actix so that some styling could be done and I think adding static files would be relatively simple but I'm leaving it out for now.

01 - The beginning was okay!

  • The ones I say are fine, might be fine because I screwed them up so badly I don't even know!

Thoughts on Rust

I've tried to pick up rust and work through the rust book a couple of times now and I can never get too deep. I don't think it fits my style of learning; I like throwing myself into doing things. So this post series was me putting together something I have a grasp on in a language I don't understand. That's why the focus of this series was the forest and the structure. I wanted to see how rust organized things and how the language would affect them.

Ultimately I don't think the language matters too much. I've written applications in node and python and rust felt very much like them. The gain here may be the types but I don't think I was doing enough logic to really see rust shine. I didn't fight too much with the borrow checker, I didn't really get help from the compiler except a few times. I used clone a few times to get past compiler errors, String and str are still a mystery, and so on.

I also may have picked the wrong project; a web app, I'm starting to see, is mostly gluing things together. The actual business logic is harder to implement and create.

So I don't know how to feel about rust. I like the language but next time I'll do something that is less glue work and see what the hype is about.

Overall this was quite a bit of fun and I hope you enjoyed it. Hopefully it was easy to follow along and to see how a web application is structured. I don't think you would learn much about rust itself here but that's okay!

Thank you all!

Sources of Inspiration

https://blog.miguelgrinberg.com/post/the-flask-mega-tutorial-part-i-hello-world

Really enjoyed this tutorial, even knowing flask it was fun and really easy to follow.

https://gill.net.in/posts/auth-microservice-rust-actix-web1.0-diesel-complete-tutorial/

This was a good tutorial, really fast and I think you'd need to really sit and think to follow it. I just copied the code.

All of the crates had great documentation that I relied on the most and for the most part everything just worked.