Basic CRUD with Rust using Tide - move to db
In the last post we refactored the API to be more ergonomic, but we still used the in-memory store (a.k.a. a HashMap). In this note we will move the information to a persistent store (a PostgreSQL database).
We will use the awesome sqlx crate to interact with our db and check our queries at compile time.
With this goal in mind, let's start changing the code. First, let's add the dependencies we will need to Cargo.toml.
sqlx = { version = "0.3", features = ["runtime-async-std", "macros", "chrono", "json", "postgres", "uuid"] }
chrono = "0.4"
dotenv = "0.15"
uuid = { version = "0.8", features = ["v4", "serde"] }
And update the imports in main.rs:
- use async_std::sync::RwLock;
+ use dotenv;
+ use uuid::Uuid;
use serde::{Deserialize, Serialize};
- use std::collections::hash_map::{Entry, HashMap};
- use std::sync::Arc;
+ use sqlx::Pool;
+ use sqlx::{query, query_as, PgPool};
use tide::{Body, Request, Response, Server};
To recap, we will use:
- dotenv to load env vars from the .env file (a sample .env is shown below).
- Uuid as the type for the dinos' id.
- sqlx: we will create a Pool (a PgPool) to handle the db connection, and the query!/query_as! macros to make queries to the database.
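Since dotenv just loads environment variables, a minimal .env for this setup could contain a single line like the one below (an assumption based on the container and the rust_crud database we create next, using the default postgres user and the password from the docker command):
DATABASE_URL=postgres://postgres:postgres@localhost:5432/rust_crud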
But we didn't talk about the db yet. We will use PostgreSQL, which you can install locally or run inside Docker.
I prefer the second option, so let's create a PostgreSQL container with a persistent volume for the data.
I assume at this point that you already have Docker installed.
$ mkdir ~/pg-rust-data
$ docker run --name rust-postgres -p 5432:5432 -e POSTGRES_PASSWORD=postgres -v ~/pg-rust-data:/var/lib/postgresql/data -d postgres
With this command we have a container running Postgres, exposing port 5432 and mounting a persistent volume for the data.
Now we can connect (using your favorite client) and create the database and the schema.
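For example, assuming you kept the container name from the command above, you can open psql inside the container like this:
$ docker exec -it rust-postgres psql -U postgres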
CREATE database rust_crud;
-- change connection to the created database e.g: \c rust_crud;
CREATE TABLE dinos (
    id uuid NOT NULL,
    name text,
    weight integer,
    diet text
);
ALTER TABLE dinos OWNER TO postgres;
ALTER TABLE ONLY dinos
    ADD CONSTRAINT dinos_pkey PRIMARY KEY (id);
So now, let's move back to Rust :-)
Let's change our State to hold the connection pool instead of the in-memory store:
#[derive(Clone)] // tide's shared state must be Clone
struct State {
    db_pool: PgPool,
}
And create a new fn to build the connection pool:
pub async fn make_db_pool() -> PgPool {
    let db_url = std::env::var("DATABASE_URL").unwrap();
    Pool::new(&db_url).await.unwrap()
}
Now our server function takes a PgPool as an argument, and in main we need to create the pool and pass it in to create the app.
#[async_std::main]
async fn main() {
    dotenv::dotenv().ok();
    tide::log::start();
    let db_pool = make_db_pool().await;
    let app = server(db_pool).await;
    app.listen("127.0.0.1:8080").await.unwrap();
}
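The server function itself isn't shown in this snippet; as a rough sketch (the exact route paths and wiring are an assumption, the handlers are the ones defined below), it could look like this:
async fn server(db_pool: PgPool) -> Server<State> {
    // build the app with the pool as shared state
    let mut app = tide::with_state(State { db_pool });
    // wire the CRUD routes to the handlers defined below
    app.at("/dinos").get(list).post(create);
    app.at("/dinos/:id").get(get).put(update).delete(delete);
    app
}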
Good! With these changes all our entity endpoints can access the db through the db pool in the state; now we need to update the code in the routes to interact with the database.
In our handlers we had something like this to access the shared HashMap that held the information:
let mut dinos = req.state().dinos.write().await;
Now we can replace it with this line to get the db pool from the state:
let db_pool = req.state().db_pool.clone();
And use the query_as! and query! macros to run SQL statements. For example, our create endpoint now looks like this:
async fn create(mut req: Request<State>) -> tide::Result {
    let dino: Dino = req.body_json().await?;
    let db_pool = req.state().db_pool.clone();
    let row = query_as!(
        Dino,
        r#"
        INSERT INTO dinos (id, name, weight, diet) VALUES
        ($1, $2, $3, $4) returning id, name, weight, diet
        "#,
        dino.id,
        dino.name,
        dino.weight,
        dino.diet
    )
    .fetch_one(&db_pool)
    .await?;
    let mut res = Response::new(201);
    res.set_body(Body::from_json(&row)?);
    Ok(res)
}
The query_as! macro allows us to run the query and map the result into a struct of the given type (in this case Dino).
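For reference, the Dino struct now carries the uuid id and the serde derives, roughly like this (a sketch; the exact derives may differ, and if your columns are nullable in the schema, sqlx's macros will expect Option<_> fields instead):
#[derive(Debug, Clone, Serialize, Deserialize)]
struct Dino {
    id: Uuid,
    name: String,
    weight: i32,
    diet: String,
}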
Now we also need to change the test to ensure that our code works as expected.
#[async_std::test]
async fn create_dino() -> tide::Result<()> {
    dotenv::dotenv().ok();
    use tide::http::{Method, Request, Response, Url};
    let dino = Dino {
        id: Uuid::new_v4(),
        name: String::from("test"),
        weight: 50,
        diet: String::from("carnivorous"),
    };
    let db_pool = make_db_pool().await;
    let app = server(db_pool).await;
    let url = Url::parse("https://example.com/dinos").unwrap();
    let mut req = Request::new(Method::Post, url);
    req.set_body(serde_json::to_string(&dino)?);
    let res: Response = app.respond(req).await?;
    assert_eq!(201, res.status());
    Ok(())
}
We need to add the db_pool creation and pass it to the app like we do in main, and also add an id (uuid) field to the Dino struct. Let's run the test to verify...
$ cargo test --package tide-basic-crud --bin tide-basic-crud -- create_dino --exact --nocapture
Finished test [unoptimized + debuginfo] target(s) in 10.71s
Running target/debug/deps/tide_basic_crud-227c59d203ec76fd
running 1 test
test create_dino ... ok
test result: ok. 1 passed; 0 failed; 0 ignored; 0 measured; 4 filtered out
Nice! The test passed; now we can go ahead and change the other endpoints/tests.
async fn list(req: tide::Request<State>) -> tide::Result {
    let db_pool = req.state().db_pool.clone();
    let rows = query_as!(
        Dino,
        r#"
        SELECT id, name, weight, diet from dinos
        "#
    )
    .fetch_all(&db_pool)
    .await?;
    let mut res = Response::new(200);
    res.set_body(Body::from_json(&rows)?);
    Ok(res)
}
async fn get(req: tide::Request<State>) -> tide::Result {
    let db_pool = req.state().db_pool.clone();
    let id: Uuid = Uuid::parse_str(req.param("id")?).unwrap();
    let row = query_as!(
        Dino,
        r#"
        SELECT id, name, weight, diet from dinos
        WHERE id = $1
        "#,
        id
    )
    .fetch_optional(&db_pool)
    .await?;
    let res = match row {
        None => Response::new(404),
        Some(row) => {
            let mut r = Response::new(200);
            r.set_body(Body::from_json(&row)?);
            r
        }
    };
    Ok(res)
}
async fn update(mut req: tide::Request<State>) -> tide::Result {
    let dino: Dino = req.body_json().await?;
    let db_pool = req.state().db_pool.clone();
    let id: Uuid = Uuid::parse_str(req.param("id")?).unwrap();
    let row = query_as!(
        Dino,
        r#"
        UPDATE dinos SET name = $2, weight = $3, diet = $4
        WHERE id = $1
        returning id, name, weight, diet
        "#,
        id,
        dino.name,
        dino.weight,
        dino.diet
    )
    .fetch_optional(&db_pool)
    .await?;
    let res = match row {
        None => Response::new(404),
        Some(row) => {
            let mut r = Response::new(200);
            r.set_body(Body::from_json(&row)?);
            r
        }
    };
    Ok(res)
}
async fn delete(req: tide::Request<State>) -> tide::Result {
    let db_pool = req.state().db_pool.clone();
    let id: Uuid = Uuid::parse_str(req.param("id")?).unwrap();
    let row = query!(
        r#"
        delete from dinos
        WHERE id = $1
        returning id
        "#,
        id
    )
    .fetch_optional(&db_pool)
    .await?;
    let res = match row {
        None => Response::new(404),
        Some(_) => Response::new(204),
    };
    Ok(res)
}
A couple of things to check here:
- using .fetch_all with query_as! will return a Vec (of Dinos, in our case).
- using .fetch_optional will return an Option that we can match against to check if the resource exists or not.
Also, we need to make some changes in the tests; for now we simplify by only testing for the correct status code.
#[async_std::test]
async fn get_dino() -> tide::Result<()> {
    dotenv::dotenv().ok();
    use tide::http::{Method, Request, Response, Url};
    let dino = Dino {
        id: Uuid::new_v4(),
        name: String::from("test_get"),
        weight: 500,
        diet: String::from("carnivorous"),
    };
    let db_pool = make_db_pool().await;
    // create the dino for get
    query!(
        r#"
        INSERT INTO dinos (id, name, weight, diet) VALUES
        ($1, $2, $3, $4) returning id, name, weight, diet
        "#,
        dino.id,
        dino.name,
        dino.weight,
        dino.diet
    )
    .fetch_one(&db_pool)
    .await?;
    // start the server
    let app = server(db_pool).await;
    let url = Url::parse(format!("https://example.com/dinos/{}", &dino.id).as_str()).unwrap();
    let req = Request::new(Method::Get, url);
    let res: Response = app.respond(req).await?;
    assert_eq!(200, res.status());
    Ok(())
}
#[async_std::test]
async fn update_dino() -> tide::Result<()> {
    dotenv::dotenv().ok();
    use tide::http::{Method, Request, Response, Url};
    let mut dino = Dino {
        id: Uuid::new_v4(),
        name: String::from("test_update"),
        weight: 500,
        diet: String::from("carnivorous"),
    };
    let db_pool = make_db_pool().await;
    // create the dino for update
    query!(
        r#"
        INSERT INTO dinos (id, name, weight, diet) VALUES
        ($1, $2, $3, $4) returning id, name, weight, diet
        "#,
        dino.id,
        dino.name,
        dino.weight,
        dino.diet
    )
    .fetch_one(&db_pool)
    .await?;
    // change the dino
    dino.name = String::from("updated from test");
    // start the server
    let app = server(db_pool).await;
    let url = Url::parse(format!("https://example.com/dinos/{}", &dino.id).as_str()).unwrap();
    let mut req = Request::new(Method::Put, url);
    let dinos_as_json_string = serde_json::to_string(&dino)?;
    req.set_body(dinos_as_json_string);
    let res: Response = app.respond(req).await?;
    assert_eq!(200, res.status());
    Ok(())
}
#[async_std::test]
async fn delete_dino() -> tide::Result<()> {
    dotenv::dotenv().ok();
    use tide::http::{Method, Request, Response, Url};
    let dino = Dino {
        id: Uuid::new_v4(),
        name: String::from("test_delete"),
        weight: 500,
        diet: String::from("carnivorous"),
    };
    let db_pool = make_db_pool().await;
    // create the dino for delete
    query!(
        r#"
        INSERT INTO dinos (id, name, weight, diet) VALUES
        ($1, $2, $3, $4) returning id, name, weight, diet
        "#,
        dino.id,
        dino.name,
        dino.weight,
        dino.diet
    )
    .fetch_one(&db_pool)
    .await?;
    // start the server
    let app = server(db_pool).await;
    let url = Url::parse(format!("https://example.com/dinos/{}", &dino.id).as_str()).unwrap();
    let req = Request::new(Method::Delete, url);
    let res: Response = app.respond(req).await?;
    assert_eq!(204, res.status());
    Ok(())
}
We are creating the records before operating on them for update/delete, but we are not spinning up a new database or dropping the existing test db; it's ok for now, but we need to work on that area in future notes.
I thought that having a CI for fmt, linting with clippy, and build would be nice to have at this point, so I added a ci.yml to run the job in GitHub Actions, and I found an issue with sqlx, since the macros try to connect to the database at build time to validate the queries. There is a workaround to build *decoupled/offline*, but I just added a Postgres container to the CI, since in the future we want to run the tests there too.
That's all for today; in the next iteration I will try to work on the test setup and add some front-end with Tera.
As always, I write this as a learning journal; there could be other, more elegant and correct ways to do it, and any feedback is welcome.
Thanks!