# Contributing
Source: https://docs.shuttle.dev/community/contribute
Learn about the best way to get started with contributing to Shuttle.
Contributing to Shuttle is highly encouraged!
Check out our [contributing docs](https://github.com/shuttle-hq/shuttle/blob/main/CONTRIBUTING.md) and find the appropriate repo below to contribute to.
There are also many other ways to help!
Join us on [Discord](https://discord.com/invite/shuttle) and [Twitter](https://twitter.com/shuttle_dev) to leave feedback, vote in polls, or just chat.
## Repositories
Please refer to the README.md or DEVELOPING.md files in the respective repository for instructions on how to run and test the code.
* The core Shuttle product. Contains all crates that users interact with.
* Officially maintained examples of projects that can be deployed on Shuttle. Also has a list of community examples.
* Documentation hosted on docs.shuttle.dev.
* Our website shuttle.dev, including the blog and Launchpad newsletter.
* GitHub Action for continuous deployments.
* An awesome list of Shuttle-hosted projects and resources that users can add to.
# Get Involved
Source: https://docs.shuttle.dev/community/get-involved
Learn how to get involved with Shuttle.
If you're wondering about the best way to get involved, here's how:
* Check out the code at [shuttle-hq/shuttle](https://github.com/shuttle-hq/shuttle)
* Join us on [Discord](https://discord.com/invite/shuttle)
* Follow [@shuttle_dev](https://twitter.com/shuttle_dev) on Twitter
* Learn [how to contribute](https://github.com/shuttle-hq/shuttle/blob/main/CONTRIBUTING.md)
# Open Source
Source: https://docs.shuttle.dev/community/open-source
Learn about how to get involved with the open source side of Shuttle.
We are firm believers that Open Source is the key to a long-term sustainable and prosperous technology community.
Shuttle CLI and Shuttle libraries build on Open Source software, and are themselves Open Source.
### License
The core Shuttle repositories are Open Source under the Apache License 2.0.
> A permissive license whose main conditions require preservation of copyright
> and license notices. Contributors provide an express grant of patent rights.
> Licensed works, modifications, and larger works may be distributed under
> different terms and without source code.
You can learn more about the Apache License 2.0
[here](https://github.com/shuttle-hq/shuttle/blob/main/LICENSE).
### Contribute
Contributions to improve Shuttle are welcome. Contribute to Shuttle via
[GitHub](https://github.com/shuttle-hq/shuttle).
# Account
Source: https://docs.shuttle.dev/docs/account
How Shuttle accounts work
To use Shuttle, create an account by signing in to the [Shuttle Console](https://console.shuttle.dev/).
## Login providers
You can create Shuttle accounts with the following authentication providers:
* GitHub
* Google
* Email + password (via Auth0)
Accounts on different providers with matching emails are not automatically linked and are treated as separate accounts.
## Account Tiers
There are three Shuttle account tiers: Pro, Growth, and Enterprise.
Check the [Pricing page](https://www.shuttle.dev/pricing) for more information.
## API Key
A Shuttle account has a single Shuttle API key.
The key can be retrieved in the Console's [Account Overview](https://console.shuttle.dev/account/overview).
The CLI command `shuttle login` automatically enters the API Key into the CLI configuration after confirming in the browser.
### Reset API Key
To reset the API Key, navigate to the Console's [Account Overview](https://console.shuttle.dev/account/overview).
You can also use `shuttle logout --reset-api-key` followed by `shuttle login`.
## Delete account
To delete your Shuttle account, reach out to [support@shuttle.dev](mailto:support@shuttle.dev).
This action will soon be made possible on the Account settings page.
## Community Tier Sunset
We're sunsetting the Community tier on Shuttle.
On Friday December 19th, 2025 at 12:00 GMT (UTC+0), all projects on the Community tier will be automatically stopped.
To keep your projects running, please upgrade to Pro. You can also upgrade at any time after Friday to re-deploy your projects.
Project data, such as contents of the Shared Database, will not be deleted. Refer to [this page](/guides/migrate-shared-postgres) if you want to migrate data.
Thank you for building with Shuttle.
# Builds
Source: https://docs.shuttle.dev/docs/builds
Details about the environment your app is built in
Shuttle builds run on AWS CodeBuild, where we compile your Rust app and place it in a Shuttle `runtime` Debian-based Docker image.
## Build architecture
Shuttle builds and runs images for the Arm64 (aarch64) architecture by default. Pro+ users can reach out to Shuttle Support if they require the x86_64 architecture.
## Builder image
The Rust builder image is based on [cargo-chef](https://github.com/LukeMathWalker/cargo-chef) to utilize Docker-layer caching of the build dependencies.
### Rust toolchain
The Rust version in the image is regularly updated to the latest `stable-aarch64-unknown-linux-gnu` toolchain (or `stable-x86_64-unknown-linux-gnu` depending on your project's build architecture).
By default, the `wasm32-unknown-unknown` target is installed, which enables compiling WASM frontends.
### External tools
Apart from what is already found in the Debian-based cargo-chef image [\[1\]](https://github.com/debuerreotype/docker-debian-artifacts/blob/bfa3d175e4153a23ffb4cf1b573d72408333b4e2/bookworm/rootfs.manifest) [\[2\]](https://github.com/docker-library/buildpack-deps/blob/fdfe65ea0743aa735b4a5f27cac8e281e43508f5/debian/bookworm/Dockerfile), these `apt` packages are also installed:
* `clang`
* `cmake`
* `llvm-dev`
* `libclang-dev`
* `mold`
* `protobuf-compiler`
Additionally, these tools are installed:
* `cargo-binstall` (latest)
* `trunk` (0.19.2)
Is there another build tool you think we should add? Let us know!
## Customize build process
### Feature flags
Use [the "shuttle" feature flag](/docs/project-configuration#cargo-feature-flags) for custom behavior when building on Shuttle.
### Environment variables
The `SHUTTLE=true` env var is set in the builder image.
If you have build flags or env variables that need to be set during compilation, you can add them in `.cargo/config.toml` ([docs](https://doc.rust-lang.org/cargo/reference/config.html)) and include it in your deployment. Below are some examples.
```toml .cargo/config.toml theme={null}
[build]
rustflags = ["--foo", "bar"]
[env]
MY_ENV_VAR = "Shuttle to the moon! 🚀🚀🚀"
```
## Runtime image
The runtime image that your built executable is placed in is a `bookworm-slim` (Debian 12) with `ca-certificates` and `curl` installed.
## (EXPERIMENTAL) Hook scripts
This feature is experimental and may change.
There are three optional bash scripts that can be used to run custom commands during the build.
Create them at the root of your project.
* `shuttle_prebuild.sh`: Runs before `cargo build`. Can be used to install custom build dependencies.
* `shuttle_postbuild.sh`: Runs after `cargo build`.
* `shuttle_setup_container.sh`: Runs in the runtime image before build artifacts are copied into it. Can be used to install custom runtime dependencies.
### Example: Install a custom Rust toolchain
In this example, we install and switch to the `nightly` toolchain.
```sh shuttle_prebuild.sh theme={null}
rustup install nightly --profile minimal
rustup default nightly
```
### Example: Install a build dependency
```sh shuttle_prebuild.sh theme={null}
apt update
apt install -y libopus-dev
```
### Example: Build a WASM frontend
```sh shuttle_postbuild.sh theme={null}
cd frontend
trunk build --release
```
Note: To use the built static assets at runtime, use [build.assets](/docs/files#build-assets).
### Example: Install a runtime dependency
```sh shuttle_setup_container.sh theme={null}
apt update
apt install -y ffmpeg
```
# Deployment environment
Source: https://docs.shuttle.dev/docs/deployment-environment
Details about the environment your app runs in
## Infrastructure
### Deployments
Deployments run in AWS ECS with Fargate VMs.
The default compute configuration is 0.25 vCPU and 0.5 GB RAM.
This will soon be made configurable.
If you need more compute out of the gate, reach out to us.
**Rolling deployments**:
When a new deployment is made, it will be started alongside the previous one until it is considered healthy.
After that, the previous deployment will start shutting down.
This means that there is a window of time where there are two instances running in parallel.
If the new deployment fails to become healthy, it will be retried 3 times while the previous one stays up.
### Runtime architecture
The runtime container will run on the architecture the Docker image was built for. See [build architecture](/docs/builds#build-architecture).
### Incoming HTTPS traffic
HTTPS traffic is proxied to your app on the project's default subdomain and any [custom domains](/docs/domain-names) that have been added.
The proxy sets the `X-Forwarded-For` HTTP header on incoming requests to the remote IP address of the request.
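Since `X-Forwarded-For` is a comma-separated list with the original client first, the remote IP can be read from its first entry. A minimal, framework-agnostic sketch (the `client_ip` helper is hypothetical):

```rust theme={null}
// Hypothetical helper: extract the client IP from an X-Forwarded-For value.
// The header is a comma-separated list; the original client comes first.
fn client_ip(x_forwarded_for: &str) -> &str {
    x_forwarded_for.split(',').next().unwrap_or("").trim()
}
```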
On the Growth tier, a dedicated Application Load Balancer is included, which provides better performance, reliability, and isolation.
### Outgoing traffic
Egress traffic from Shuttle goes through NAT Gateways on these IP addresses:
* `13.43.103.185`
* `13.41.117.254`
* `13.43.235.93`
## Environment variables
These are the environment variables set in the Shuttle runtime container.
Check for `SHUTTLE=true` or use [the shuttle feature flag](./project-configuration#cargo-feature-flags) for custom behavior when running on Shuttle.
```bash theme={null}
SHUTTLE="true"
SHUTTLE_PROJECT_ID=""
SHUTTLE_PROJECT_NAME=""
RUST_LOG="INFO"
```
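A runtime check for the `SHUTTLE` variable can be a one-line helper (a sketch, not a Shuttle API):

```rust theme={null}
use std::env;

// Minimal sketch: detect at runtime whether the app is running on Shuttle.
fn running_on_shuttle() -> bool {
    env::var("SHUTTLE").as_deref() == Ok("true")
}
```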
## Secrets
See [Shuttle Secrets](/resources/shuttle-secrets)
## Customize Runtime container
See [Hook scripts](/docs/builds#experimental-hook-scripts)
# Domain names
Source: https://docs.shuttle.dev/docs/domain-names
Add custom domains to your project
By adding a custom domain to your project, web traffic can be served on your own fancy domain name in addition to the default `*.shuttle.app` subdomain.
You can add a root-level domain (`example.com`) or a subdomain (`thing.example.com`).
Adding a custom domain to your project follows these steps:
* Purchase a domain name from a DNS provider,
* set up a DNS record to make it point to your Shuttle server, and
* generate an SSL certificate that enables HTTPS traffic to your project.
## 1. Set up DNS record
The process for setting up the required DNS rule looks different depending on which type of domain and registrar you have.
If you have your domain name on Cloudflare, the process is quite simple. Go to `Websites -> (your website) -> DNS -> Records -> Add Record`, then follow the relevant section below.
### Root domain
Add a `CNAME` record from `@` to your `*.shuttle.app` subdomain. The Cloudflare proxy can be enabled or disabled,
depending on your needs.
Adding a `CNAME` on the root level here is possible due to Cloudflare's [CNAME flattening](https://developers.cloudflare.com/dns/cname-flattening/).
You can also add a `CNAME` for the `www` subdomain if you want traffic to `www.example.com` to arrive at your service.
### Subdomain
Add a `CNAME` record from your subdomain to your `*.shuttle.app` subdomain. Disable the Proxy.
For example, the subdomain `my-subdomain` (as in `my-subdomain.my-domain.com`) would be directed to `my-project-0000.shuttle.app`.
The process for other providers can vary, but here are the general steps. If you want to add docs for a specific provider, feel free to contribute to this page.
### Root domain
If you are adding a root-level domain, add one or more `A` records that point to the same IP
addresses that are returned when you do a DNS lookup for your default shuttle domain, e.g.
`example.shuttle.app`.
On macOS or Linux, you can use the `dig` tool in the terminal:
`dig +short A example.shuttle.app`
On Windows you can use `nslookup` in the terminal, or a browser-based equivalent like
[this](https://toolbox.googleapps.com/apps/dig/#A/).
### Subdomain
If you are adding a subdomain, add a `CNAME` record from your subdomain to your `*.shuttle.app` subdomain.
## 2. Set up SSL certificate
### a. via Console
Once the DNS records have propagated, submit the domain name in your project's domain settings on the Console.
### b. via CLI
Once the DNS records have propagated, add an SSL certificate and start receiving HTTPS traffic by running:
```bash theme={null}
shuttle certificate add
```
After that, you can manage certificates with:
```bash theme={null}
shuttle certificate list
shuttle certificate delete
```
# Deployment files
Source: https://docs.shuttle.dev/docs/files
How to add or ignore files in Shuttle deployments
This page answers:
* Which files are uploaded to Shuttle when I deploy?
* How do I change which files are uploaded?
* How to serve static frontend assets on Shuttle?
* What happens to files after I deploy?
* What happens to files that I write to disk?
## Files uploaded to the build stage
When you run `shuttle deploy`, all source files in your cargo workspace that are not ignored are packed into a zip archive and uploaded to Shuttle, where they are then extracted and built.
The archive POST request (after compression) has a size limit of 100 MB.

To access files at runtime, they need to be copied from the build stage. Read more below.
### Ignore files
Ignoring files can be done by adding rules to `.gitignore` or `.ignore` files, depending on whether you also want to exclude them from version control.
### Include ignored files
If you have ignored files that you want to include in the upload to the build stage, declare those files in the `Shuttle.toml` file in the root of your workspace:
```toml Shuttle.toml theme={null}
# Declare ignored files that should be included in the upload:
[deploy]
include = [
"file.txt", # include file.txt
"frontend/dist/*", # include all files and subdirs in frontend/dist/
"static/*", # include all files and subdirs in static/
]
```
Specifying only a folder name, like `"static"`, does not work. Use `"static/*"` instead to include its contents.
### Debug included files
You can double check which files are included in your archive by turning on detailed logging:
```bash theme={null}
shuttle deploy --debug
```
or inspect the archive after writing it to disk:
```bash theme={null}
shuttle deploy --output-archive deployment.zip
```
### Block dirty deployments
If you want the `deploy` command to exit if there are uncommitted git changes:
```toml Shuttle.toml theme={null}
[deploy]
deny_dirty = true
```
## Add files to the runtime image
To make files such as config files or static web assets available at runtime, they need to be copied into the runtime image at the end of the build stage.
To do this, declare them as build assets in the `[build]` configuration in `Shuttle.toml`.
```toml Shuttle.toml theme={null}
# Declare files that should be copied to the runtime image:
[build]
assets = [
"file.txt", # include file.txt
"frontend/dist/*", # include all files and subdirs in frontend/dist/
"static/*", # include all files and subdirs in static/
]
```
An example of how to upload and serve static files with Axum can be found [here](https://github.com/shuttle-hq/shuttle-examples/tree/main/axum/static-files).
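Declared assets can then be read like any other file. A minimal sketch, assuming the `file.txt` asset from the `[build]` example above and that assets are read relative to the app's working directory:

```rust theme={null}
use std::fs;

// Sketch: read "file.txt", one of the assets declared under [build] in
// Shuttle.toml, relative to the app's working directory.
fn read_asset() -> std::io::Result<String> {
    fs::read_to_string("file.txt")
}
```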
## Files in the runtime container
Your app can use the file system as normal, but files in the deployment container are deleted after the deployment stops.
# Local Run
Source: https://docs.shuttle.dev/docs/local-run
Develop your Shuttle app locally
While deploying to production is easy with Shuttle, running your project locally is
useful for development. To start your project on your local system, while in your
project directory, run:
```bash theme={null}
shuttle run
```
The Shuttle CLI builds and runs your app in a similar way to the Shuttle platform.
It then starts a local provisioner server that simulates resource provisioning on the local system using Docker.
The environment variables available in the [deployment environment](/docs/deployment-environment) are also set during a local run.
## Local runs with databases
If your project uses a database resource, it will default to starting a local [Docker](https://docs.docker.com/get-docker/) container for that database.
If you'd like to opt out of this behavior and supply your own database URI, simply pass it in as an argument to your resource.
This argument also supports insertion of secrets from `Secrets.toml` with string interpolation:
```rust theme={null}
#[shuttle_runtime::main]
async fn main(
#[shuttle_shared_db::Postgres(
local_uri = "postgres://postgres:{secrets.PASSWORD}@localhost:16695/postgres"
)] pool: PgPool,
) -> ShuttleAxum { ... }
```
**IMPORTANT:** If Docker isn't started, you may receive an "os error 2" error. This is typically related to your Docker installation. If you're using Docker via the CLI, you can use any Docker command to start it up. If you're using Docker Desktop, you will need to start Docker Desktop.

If Docker fails to launch the database container, pulling the image manually (`docker pull postgres:16`) can sometimes help resolve it.

To recreate a local database, use `docker ps` to find the postgres container, then `docker stop` and `docker rm` to delete it. On the next run, a new one will be created.
## Expose your application to your local network
If you'd like to expose your application to your local network, for example if you're serving a static
website and you'd like to open it on your phone, simply pass in the `--external` flag:
```bash theme={null}
shuttle run --external
```
This will bind your local application to `0.0.0.0:8000`, and you will now be able to connect to it
using your computer's local IP. If you'd also like to change the port, you can do so with the `--port`
argument:
```bash theme={null}
shuttle run --external --port 8123
```
You may need to open the port your app is started on in your firewall.
## Development Tips
Here are some small tips and tricks to boost your productivity when developing locally.
### Live reload backend with `bacon`
To live reload your Shuttle app when you save a file, you can install [bacon](https://github.com/Canop/bacon) and then use
```bash theme={null}
shuttle run --bacon
```
to run Shuttle's default bacon job.
The job tells bacon to execute `shuttle run` when you save a file.
To customize this behavior, you can set up your own jobs in `bacon.toml`.
Shuttle's default bacon config can be found [here](https://github.com/shuttle-hq/shuttle/tree/main/cargo-shuttle/src/util).
### Live reload backend with `cargo watch`
`cargo-watch` is no longer maintained, but still works
To live reload your Shuttle app when you save a file, you can use [cargo-watch](https://github.com/watchexec/cargo-watch):
```bash theme={null}
# This will execute `shuttle run` when you save a file.
cargo watch -s 'shuttle run'
# This will also (q)uietly (c)lear the console between runs.
cargo watch -qcs 'shuttle run'
# There are many other helpful options, see `cargo watch --help`
```
Small caveat: Be aware that ignoring files with `.ignore` will also affect the behaviour of `shuttle deploy`.
See the documentation on [including ignored files](https://docs.shuttle.dev/docs/files#include-ignored-files) for more info.
### Live reload frontend with `tower-livereload`
Using `bacon` or `cargo watch` will only reload the "backend" Shuttle process.
If you are developing a frontend in the browser that is hosted by your Shuttle app,
you might also want the web page to reload when files change.
If you are using Axum or other Tower-compatible frameworks, the Tower layer [tower-livereload](https://github.com/leotaku/tower-livereload) can help you.
First, add it to your project:
```bash theme={null}
cargo add tower-livereload
```
Then, when constructing your Axum router, add the layer like this:
```rust theme={null}
let router = Router::new()
.route(/* ... */)
.layer(tower_livereload::LiveReloadLayer::new());
```
This injects code into HTML responses that live-reloads your web page when the server restarts.
This also works in combination with `cargo watch`!
If you want to exclude this functionality from the release build, add this:
```rust theme={null}
let mut router = /* ... */;
if cfg!(debug_assertions) {
router = router.layer(tower_livereload::LiveReloadLayer::new());
}
```
## Docker engines
`cargo-shuttle` uses the [bollard](https://docs.rs/bollard/latest/bollard/index.html) crate to interact with the Docker engine on local runs.
If you are using a non-standard Docker engine, you might get this error:
```text theme={null}
got unexpected error while inspecting docker container:
error trying to connect: No such file or directory
```
The error is emitted because bollard is not connecting to the correct Docker socket location.
On Unix, bollard defaults to connecting to `unix:///var/run/docker.sock` unless the `DOCKER_HOST` env variable overrides it.
If you end up using a `DOCKER_HOST` like below, you can add the `export DOCKER_HOST=...` line to your shell's config file to have it automatically set in new shell sessions.
### Docker Desktop
```bash theme={null}
export DOCKER_HOST="unix://$HOME/.docker/run/docker.sock"
```
### Podman
You will need to expose a rootless socket for Podman, and then set the `DOCKER_HOST` environment variable:
```bash theme={null}
podman system service --time=0 unix:///tmp/podman.sock
export DOCKER_HOST=unix:///tmp/podman.sock
```
### Colima
```bash theme={null}
export DOCKER_HOST="unix://$HOME/.colima/default/docker.sock"
```
# Logs
Source: https://docs.shuttle.dev/docs/logs
Tracing or logging in Shuttle apps.
Shuttle records build logs and anything your application writes to `stdout` and `stderr`.
## Viewing Logs
To view the logs for your current active deployment, if there is one:
```bash theme={null}
shuttle logs
```
You can also view the logs of a specific deployment by adding the deployment ID to this command:
```bash theme={null}
shuttle logs <deployment-id>
```
To get the deployment ID, you can run this command to view your deployment history:
```bash theme={null}
shuttle deployment list
```
The `--latest` or `-l` flag will get the logs from the most recently created deployment:
```bash theme={null}
shuttle logs --latest
```
To view logs without the timestamps and log origin tags:
```bash theme={null}
shuttle logs --raw
```
## Default Tracing Subscriber
By default, Shuttle will set up a global tracing subscriber behind the scenes.
If you'd rather set up your own tracing or logging, you can opt-out by disabling the default features on `shuttle-runtime`:
```toml Cargo.toml theme={null}
shuttle-runtime = { version = "0.57.0", default-features = false }
```
With the default features enabled, you can skip the step of initializing your subscriber when implementing [tracing](https://docs.rs/tracing/latest/tracing/)
in your application. All you have to do is add `tracing` as a dependency in your `Cargo.toml`, and you're good to go!
```rust theme={null}
use tracing::info;
#[shuttle_runtime::main]
async fn main() -> ... {
info!("Starting up");
// ...
}
```
If you'd rather use [log](https://docs.rs/log/latest/log/), everything from the above section on tracing applies.
A log-compatible layer is added to the global subscriber, so like with tracing, all you need to do to use `log` macros
in your project is add it to your `Cargo.toml`.
### Configuring the default subscriber
The global subscriber has an `env_filter` which defaults to the `INFO` log level if no `RUST_LOG` variable is set.
You can change the log level for local runs with `RUST_LOG="..." shuttle run` or `shuttle run --debug`.
Deployments use `RUST_LOG="info"`, and this is not configurable at the moment. A custom subscriber can be set up instead.
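The filter selection above can be sketched as a small pure function (a hypothetical helper, not the subscriber's actual code):

```rust theme={null}
// Sketch of the default subscriber's filter choice:
// use the RUST_LOG value if it is set, otherwise fall back to INFO.
fn effective_filter(rust_log: Option<&str>) -> &str {
    rust_log.unwrap_or("info")
}
```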
### Custom logging setup
If you opt-out of the default subscriber, you can set up logging or tracing the way you would in any other Rust application.
The only thing you need to ensure is that your setup writes to `stdout`, as this is what Shuttle will record and return from
the `shuttle logs` commands.
We've created an example Actix Web app with a simple `tracing` setup: [custom tracing example](https://github.com/shuttle-hq/shuttle-examples/tree/main/tracing/custom-tracing-subscriber).
# Project configuration
Source: https://docs.shuttle.dev/docs/project-configuration
How to configure your Rust project for running on Shuttle
## Shuttle.toml
The file `Shuttle.toml` can be used for project-local configuration.
For the current options available, check out [Deployment files](/docs/files).
## Workspaces
Shuttle supports [cargo workspaces](https://doc.rust-lang.org/book/ch14-03-cargo-workspaces.html), but only one Shuttle service per workspace.
The first binary target using the `#[shuttle_runtime::main]` macro will be targeted for local runs and deployments.
If `Shuttle.toml` or [Secrets](/resources/shuttle-secrets) are used, those files should be placed in the root of the workspace.
This is an example of a workspace structure with shared code between a backend and frontend crate:
```text theme={null}
.
├── .gitignore
├── Cargo.toml
├── Secrets.toml (optional)
├── Shuttle.toml (optional)
├── backend/
│   ├── Cargo.toml (depends on shuttle-runtime)
│   └── src/
│       └── main.rs (contains #[shuttle_runtime::main])
├── frontend/
│   ├── Cargo.toml
│   └── src/
│       └── main.rs
└── shared/
    ├── Cargo.toml
    └── src/
        └── lib.rs
```
## Cargo feature flags
If the cargo feature `shuttle` exists, Shuttle activates it and disables default features.
In this example, Shuttle will enable the features `shuttle` and `bar`.
To use default features on Shuttle, add `default` to the shuttle array.
```toml Cargo.toml theme={null}
[features]
default = ["foo"]
shuttle = ["bar"]
foo = []
bar = []
```
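In application code you can then branch on the flag with `cfg!`. A minimal sketch (the `platform` function is hypothetical):

```rust theme={null}
// The "shuttle" feature is enabled by Shuttle's builder; in a local build it
// is off (unless you pass --features shuttle yourself).
fn platform() -> &'static str {
    if cfg!(feature = "shuttle") {
        "shuttle"
    } else {
        "local"
    }
}
```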
## Multiple binaries
If you want to keep your project structured for allowing both running with and without Shuttle, check out the [standalone-binary](https://github.com/shuttle-hq/shuttle-examples/tree/main/other/standalone-binary) example. This is great for gradually adding Shuttle into your project.
## .shuttle/config.toml
The `.shuttle/config.toml` is created when linking your project folder to a Shuttle project.
It is added to `.gitignore` by default, and should not be committed.
# Projects
Source: https://docs.shuttle.dev/docs/projects
Everything about Shuttle projects
A Shuttle Project is the main high-level abstraction on the Shuttle platform.
Projects are logical separations of deployments (services), logs, resources, and more.
Deployments to different Shuttle projects run in their own ECS service (Fargate VM), providing a high level of isolation from other projects.
## Project Name and ID
A Shuttle Project is uniquely identified with its ID, a string starting with `proj_`.
The Shuttle API uses project IDs to identify calls to the project APIs.
The Project Name is a more convenient way to identify your projects.
The name is also part of the default free subdomain your project is hosted on (`https://<name>-<nonce>.shuttle.app/`).
The *nonce* (4 random characters) at the end allows multiple users to have projects with the same name.
You can view your project IDs and names with:
```sh theme={null}
shuttle project list
```
You can rename a project with the following command.
Note: This also updates the default subdomain name.
Custom domains remain unchanged.
```sh theme={null}
shuttle project update name <new-name>
```
## Project Linking
For the Shuttle CLI to know which Shuttle Project ID to target, it stores the ID in a gitignored config file in your project.
If the ID is not found, it will prompt you to link a project.
You can also re-link the project with one of:
```sh theme={null}
shuttle project link
shuttle project link --name my-project
shuttle project link --id proj_
```
## CLI options
When running CLI commands, the project can always be overridden with the `--name` or `--id` options, regardless of which directory you are in.
```bash theme={null}
# Create a project called `my-project`
shuttle project create --name my-project
# Check the deployment list of `my-project`
shuttle deployment list --name my-project
# View the logs of project with id `proj_01J5AYYKX1WZX51F8GBTH269XB`
shuttle logs --id proj_01J5AYYKX1WZX51F8GBTH269XB
```
## Environments
Multiple environments (such as development, staging, and production) within a project are a planned feature.
Until that feature is ready, you can use one Shuttle project per environment (for example `project` and `project-dev`).
To deploy to a non-default project, use the `deploy` command with `--name` to target a different project and `--secrets` to use a different secrets file. For example:
```sh theme={null}
shuttle deploy --name project-dev --secrets Secrets.dev.toml
```
# Scaling
Source: https://docs.shuttle.dev/docs/scaling
Adjusting the computational resources allocated to your application
At Shuttle, you can adjust the vCPU and memory resources allocated to your application's compute environment.
This functionality enables you to **vertically scale** your application to better suit your specific performance requirements and workload demands.
Changing the instance size of your project is limited to the Pro tier and above, and will result in additional charges. These are calculated according to our [usage-based pricing](/pricing/billing#usage-based-pricing) model. Please ensure you review the pricing details before making adjustments.
### Example
The example below illustrates how to configure your application to use a **medium** instance size using **Infrastructure from Code**:
```rust main.rs theme={null}
#[shuttle_runtime::main(instance_size = "m")]
async fn main() -> shuttle_axum::ShuttleAxum {
// Your application logic here
}
```
### Available Instance Sizes
The table below lists the available `instance_size` values, along with the corresponding instance type and minimum required Account Tier:
| Instance Size | Value | vCPU | Memory (GB) | Account Tier |
| ------------------- | ----- | ---- | ----------- | ------------ |
| **Basic** (default) | `xs` | 0.25 | 0.5 | Pro+ |
| **Small** | `s` | 0.5 | 1 | Pro+ |
| **Medium** | `m` | 1 | 2 | Pro+ |
| **Large** | `l` | 2 | 4 | Pro+ |
| **X Large** | `xl` | 4 | 8 | Pro+ |
| **XX Large** | `xxl` | 8 | 16 | Growth+ |
*Ensure your selected instance size aligns with your application's current and anticipated workload.*
# Shuttle Shutdown
Source: https://docs.shuttle.dev/docs/shuttle-shutdown
Shuttle is ceasing operations
Shuttle is ceasing operations. This page outlines what this means for you and what steps you need to take.
## Timeline
**Community tier users:** You have until the **beginning of 2026** to migrate any data before projects and accounts are deleted.
**Pro tier users:** You have until **Jan 16, 2026** to migrate your workloads and data. On that date, all projects will be stopped, and deleted at a later point.
**Growth tier users:** You will receive individual communication with at least one month to migrate, with flexibility to extend if needed.
## Migration options
We recommend migrating to [Neptune](https://www.neptune.dev/), our next-generation platform. Please note that Neptune is currently in beta and may not yet be suitable for all production workloads.
For migrating data out of the Shared Postgres instance, refer to [this page](/guides/migrate-shared-postgres).
Our team is available to assist with migration planning and execution. If you need help, please reach out to us.
## Frequently Asked Questions
### When is Shuttle shutting down?
Pro tier projects will shut down on **Jan 16, 2026**. Growth tier customers will receive individual timelines with at least one month notice.
### How do I migrate off Shuttle?
We can help you migrate to Neptune. Contact us to discuss your migration plan and timeline.
### Will I be charged during the shutdown period?
No. Subscription fee collection has been paused.
### Can I retrieve my data after shutdown?
Data may be available for a limited retention period. Please contact us if you need to retrieve data after your shutdown date.
## Need help?
If you have questions or need migration assistance, please contact us. We're here to help make this transition as smooth as possible.
# Teams
Source: https://docs.shuttle.dev/docs/teams
How Shuttle Teams work
Shuttle Teams is a feature exclusive to [Growth Tier](https://www.shuttle.dev/pricing) users, enabling you to supercharge your team's productivity, share projects within your team, and centralize billing.
## Features
Your team can be managed on the Shuttle Console under **Account -> Team**.
### Members
Team admins can invite new team members via email.
The invited user must sign in to Shuttle using the same email address you provide.
A team has 10 seats by default. Additional team seats can be purchased by contacting us.
### Projects
All of the team owner's projects are shared with all team members.
To collaborate on a new project within the team, the team owner must create the project.
Once the project is created, the team can perform all actions except deleting the project.
Using `--name` in the CLI will prioritise Personal projects with that name over team projects. Using `--id` or linking the local directory ensures that the correct project is always targeted.
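For instance, deploying to a team project unambiguously might look like the following sketch (`<project-id>` is a placeholder; your real project IDs can be listed with `shuttle project list`):

```shell
# Target the project by ID so a same-named personal project is not picked up
shuttle deploy --id <project-id>
```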
### Roles
* **Owner:** Admin access to all team projects and can delete team projects.
* **Admin:** Admin access to all team projects.
## Upcoming Updates
* Giving the team a name.
* More team roles for fine-grained access.
* Moving projects in and out of the team, keeping personal projects private.
# Better Stack Integration
Source: https://docs.shuttle.dev/docs/telemetry/betterstack
How to set up Better Stack monitoring with Shuttle
This guide will walk you through setting up Better Stack as your telemetry provider for Shuttle.
## Prerequisites
1. A [Better Stack account](https://betterstack.com/) (sign up if you don't have one)
## Step 1: Create a Telemetry Source in Better Stack
1. Log in to your Better Stack account
2. Navigate to the **Sources** section
3. Click **Connect Source**
4. Choose a name for your telemetry source
5. Select the OpenTelemetry option in the "platform" section
6. Click "Connect Source" at the bottom of the page
## Step 2: Configure Shuttle Project
1. On the Source configuration page, locate and copy:
* Your source token
* The ingestion host URL
2. In the Shuttle Console:
* Navigate to your project
* Go to the "Telemetry" tab
* Click to enable Better Stack
* Paste your source token and ingestion host
* Click "Apply"
## Step 3: Enable OpenTelemetry Exporter (optional)
This step is only required if you want to emit custom metrics and logs from your application. If you only need the default CPU, network, and memory metrics, you can skip this step.
Follow the [Getting Started guide](./getting-started#step-1-enable-the-opentelemetry-exporter) to enable the OpenTelemetry exporter in your project.
## Step 4: Redeploy Your Project
**Important**: If your project was already running, you must redeploy it for telemetry to start flowing.
* If you've made code changes (like adding custom metrics), redeploy using the Shuttle CLI: `shuttle deploy`
* If you haven't made any code changes, you can redeploy from the Shuttle Console by selecting your latest deployment and clicking "Redeploy"
## Step 5: Create Dashboards
Once telemetry data starts flowing, you can create custom dashboards in Better Stack:
1. Go to the Dashboards section
2. A new dashboard will be created automatically for your project
3. Click the dots on any widget and select "Configure" to customize it
Better Stack offers several ways to visualize your data:
* SQL queries
* Visual drag-and-drop interface
* PromQL (beta)
### Example: Creating a CPU Usage Graph
1. Click the dots on a widget and select "Configure"
2. Switch to the visual interface
3. Add `cpu_usage_vcpu` as your metric
4. Set the Y-axis unit to "vCPU"
5. Group by your project name
## Available Metrics
Better Stack will receive all the standard Shuttle telemetry metrics, including:
* CPU usage and utilization
* Memory usage and limits
* Network I/O statistics
* Disk I/O statistics
* Custom metrics from your application (requires OpenTelemetry exporter from Step 3)
For a complete list of available metrics, see our [telemetry overview](./overview#default-platform-metrics).
## Troubleshooting
If you don't see data in Better Stack:
1. Verify your source token and ingestion host are correct
2. If you're using custom metrics, check that your project has the `setup-otel-exporter` feature on `shuttle-runtime` enabled
3. Ensure you've redeployed your project after enabling telemetry
4. Contact [Better Stack support](https://betterstack.com/contact) or join our [Discord community](https://discord.gg/shuttle) for help
# Custom Metrics Guide
Source: https://docs.shuttle.dev/docs/telemetry/custom-metrics
How to add custom metrics and tracing events to your Shuttle application
Exporting custom metrics and logs is available in the [Shuttle Pro Tier](https://www.shuttle.dev/pricing) and above.
This guide will show you how to add custom metrics and tracing events to your Shuttle application using the `tracing` crate.
## Getting Started
First, add the `tracing` dependency to your project:
```bash theme={null}
cargo add tracing
```
## Basic Usage
Add tracing events with fields to your functions to create custom metrics. Here's a simple example:
```rust theme={null}
async fn hello_world() -> &'static str {
tracing::info!(counter.hello = 1, "Hello world from OTel!");
"Hello, world!"
}
```
This will:
1. Send an `info` level log to stdout
2. Export the metric to your configured telemetry provider
3. Include the custom attribute `counter.hello` with value `1`
## Metric Types
The runtime's OTel exporter uses `tracing-opentelemetry` under the hood, which automatically handles three metric types:
1. **Monotonic Counters**: Values that only increase (e.g., total requests)
```rust theme={null}
tracing::info!(monotonic_counter.requests = 1, "New request received");
```
2. **Counters**: Values that can increase or decrease
```rust theme={null}
tracing::info!(counter.active_users = 1, "User logged in");
tracing::info!(counter.active_users = -1, "User logged out");
```
3. **Histograms**: For measuring distributions of values
```rust theme={null}
tracing::info!(histogram.request_duration_ms = 150.0, "Request completed");
```
## Example Trace Output
Here's what a tracing event looks like when exported:
```json theme={null}
{
"attributes": {
"code.filepath": "src/main.rs",
"code.lineno": 4,
"code.module_path": "my_project",
"counter.hello": 1
},
"dropped_attributes_count": 0,
"dt": "2025-02-04T15:56:27.068644985Z",
"message": "Hello world from OTel!",
"observed_timestamp": "2025-02-04T15:56:27.068649239Z",
"resources": {
"service.name": "my-project",
"service.version": "0.1.0",
"shuttle.deployment.env": "production",
"shuttle.project.crate.name": "my_project",
"shuttle.project.id": "proj_01JK8SHBZQ0XF0TKW0EDWBJ8NH",
"shuttle.project.name": "my-project",
"telemetry.sdk.language": "rust",
"telemetry.sdk.name": "opentelemetry",
"telemetry.sdk.version": "0.27.1"
},
"severity_number": 9,
"severity_text": "INFO",
"source_type": "opentelemetry"
}
```
## Best Practices
1. **Use Meaningful Names**: Choose clear, descriptive names for your metrics
```rust theme={null}
// Good
tracing::info!(counter.user_sessions = 1, "User session started");
// Bad
tracing::info!(counter.us = 1, "Session");
```
2. **Include Context**: Add relevant context to your metrics
```rust theme={null}
tracing::info!(
counter.api_calls = 1,
api.endpoint = "/users",
api.method = "GET",
"API call completed"
);
```
3. **Use Appropriate Metric Types**:
* Use `monotonic_counter` for values that only increase
* Use `counter` for values that can go up and down
* Use `histogram` for measuring distributions
## Learn More
* [tracing documentation](https://docs.rs/tracing/latest/tracing/)
* [tracing-opentelemetry documentation](https://docs.rs/tracing-opentelemetry/latest/tracing_opentelemetry/)
* [OpenTelemetry metrics specification](https://opentelemetry.io/docs/specs/otel/metrics/semantic_conventions/)
# Getting Started
Source: https://docs.shuttle.dev/docs/telemetry/getting-started
Step-by-step guide to setting up telemetry in your Shuttle project
This guide will walk you through setting up telemetry for your Shuttle project. We'll cover the basic setup that applies to all telemetry providers, and then point you to provider-specific guides.
## Prerequisites
Before you begin, make sure you have:
1. A Shuttle project deployed
2. Access to the Shuttle Console
3. An account with your chosen telemetry provider (e.g. Better Stack)
## Step 1: Enable the OpenTelemetry Exporter
First, you need to enable telemetry export in your project. Add the `setup-otel-exporter` feature to your `shuttle-runtime` dependency:
```bash theme={null}
cargo add shuttle-runtime -F setup-otel-exporter
```
The `shuttle-runtime` entry in your project's `Cargo.toml` should now look something like:
```toml theme={null}
shuttle-runtime = { version = "0.57.0", features = ["setup-otel-exporter"] }
```
> **Note:** Your specific entry may not be *identical*, that's OK. The important part is that `"setup-otel-exporter"` appears in the `features` array.
## Step 2: Choose Your Telemetry Provider
Shuttle supports several telemetry providers. Each has its own setup process:
* [Better Stack](./betterstack) - Recommended for most users
* More providers coming soon!
We plan to expand the list of supported third party services.
Let us know your thoughts and suggestions on [GitHub](https://github.com/shuttle-hq/shuttle/discussions/1980).
## Step 3: Configure Your Provider
Follow the specific guide for your chosen provider to:
1. Create a telemetry source in your provider's dashboard
2. Get your provider's connection details (usually an API key and endpoint)
3. Configure these details in the Shuttle Console
## Step 4: Redeploy Your Project
**Important**: After configuring telemetry, you ***must*** redeploy your project for the changes to take effect.
If you made code changes in Step 1 (enabling the runtime's OpenTelemetry exporter), you need to make a new deployment with `shuttle deploy`.
## Next Steps
Once your telemetry is set up, you can:
* [Add custom metrics](./custom-metrics) to your application
* Create dashboards in your provider's interface
* Monitor your application's performance and health
Need help? Check out our [telemetry overview](./overview) for more details about what metrics are available,
or join our [Discord community](https://discord.gg/shuttle) for support.
# Overview
Source: https://docs.shuttle.dev/docs/telemetry/overview
How Shuttle Telemetry integrations work
Under the hood, Shuttle runs an OpenTelemetry (OTel) collector alongside your service that, once configured, sends project telemetry like vCPU and RAM usage as well as any custom metrics you define to a supported third-party integration of your choice.
## Quick Start
To get started with telemetry in your Shuttle project, follow our [step-by-step guide](./getting-started).
## Available Telemetry
1. **All Container Metrics** - See the [Default Platform Metrics](#default-platform-metrics) section below
2. **Application Metrics** - Track your application's behavior with [custom metrics](./custom-metrics)
3. **Application Logs** - Export tracing events and logs (not stdout/stderr)
4. **No Export Limits** - Send as much data as you need
## Supported Providers
Currently, Shuttle supports the following telemetry providers:
* [Better Stack](./betterstack) - Recommended for most users
* More providers coming soon!
We plan to expand the list of supported third party services.
Let us know your thoughts and suggestions on [GitHub](https://github.com/shuttle-hq/shuttle/discussions/1980).
***
## Custom Metrics and Tracing
Learn how to add custom metrics and tracing events to your application in our [Custom Metrics Guide](./custom-metrics).
## Default Platform Metrics
The following table lists all container metrics that Shuttle automatically collects and exports via OpenTelemetry for every deployment.
| Attribute name | Description |
| ------------------------------- | --------------------------------------------------------- |
| cpu\_cores | CPU cores available |
| cpu\_onlines | Number of online/active CPUs |
| cpu\_reserved | Reserved CPU resources (if any) |
| cpu\_usage\_kernelmode | CPU time spent in kernel mode (nanos) |
| cpu\_usage\_system | System-wide CPU usage (nanos) |
| cpu\_usage\_total | Total CPU time usage (nanos) |
| cpu\_usage\_usermode | CPU time spent in user mode (nanos) |
| cpu\_usage\_vcpu | vCPU usage |
| cpu\_utilized | Percentage of CPU utilized |
| memory\_reserved | Memory reserved (bytes) |
| memory\_usage | Memory used (bytes) |
| memory\_usage\_limit | Memory limit (bytes) |
| memory\_usage\_max | The max amount of memory used by your application (bytes) |
| memory\_utilized | Memory being utilised (bytes) |
| network\_io\_usage\_rx\_bytes | Network ingress (bytes) |
| network\_io\_usage\_rx\_packets | Network ingress packet count |
| network\_io\_usage\_rx\_dropped | Network ingress dropped packet count |
| network\_io\_usage\_rx\_errors | Network ingress errored packet count |
| network\_io\_usage\_tx\_bytes | Network egress (bytes) |
| network\_io\_usage\_tx\_packets | Network egress packet count |
| network\_io\_usage\_tx\_dropped | Network egress dropped packet count |
| network\_io\_usage\_tx\_errors | Network egress errored packet count |
| network\_rate\_rx | Network ingress rate (bytes/s) |
| network\_rate\_tx | Network egress rate (bytes/s) |
| storage\_read\_bytes | Bytes read from disk |
| storage\_write\_bytes | Bytes written to disk |
# Hello World
Source: https://docs.shuttle.dev/examples/actix
Actix Web is a powerful, pragmatic, and extremely fast web framework for Rust.
This section revolves around simple Actix Web examples you can quickly get started with by following these 3 steps:
1. Initialize a new Actix Web project by running the `shuttle init --template actix-web` command
2. Copy the contents of the example you want to deploy -- make sure to check the tabs of the snippet(s) to ensure you are copying the right code/file
3. Run the `shuttle deploy` command
If you are looking for step-by-step guides, check out our [Tutorials](/templates/tutorials) section.
You can clone the example below by running the following (you'll need `shuttle` CLI installed):
```bash theme={null}
shuttle init --template actix-web
```
```rust src/main.rs theme={null}
use actix_web::{get, web::ServiceConfig};
use shuttle_actix_web::ShuttleActixWeb;
#[get("/")]
async fn hello_world() -> &'static str {
"Hello World!"
}
#[shuttle_runtime::main]
async fn actix_web(
) -> ShuttleActixWeb<impl FnOnce(&mut ServiceConfig) + Send + Clone + 'static> {
let config = move |cfg: &mut ServiceConfig| {
cfg.service(hello_world);
};
Ok(config.into())
}
```
```toml Cargo.toml theme={null}
[package]
name = "hello-world"
version = "0.1.0"
edition = "2021"
[dependencies]
actix-web = "4.3.1"
shuttle-actix-web = "0.57.0"
shuttle-runtime = "0.57.0"
tokio = "1.26.0"
```
***
# Cookie Authentication
Source: https://docs.shuttle.dev/examples/actix-cookie-authentication
Explore how you can secure your Actix Web application by using cookies.
## Description
This example shows how to use authentication within actix-web with cookies, assisted by actix-identity and actix-session.
The idea is that all requests authenticate first at the login route to get a cookie, then the cookie is sent with all requests requiring authentication using the HTTP cookie header.
You can clone the example below by running the following (you'll need `shuttle` CLI installed):
```bash theme={null}
shuttle init --from shuttle-hq/shuttle-examples --subfolder actix-web/cookie-authentication
```
Three Actix Web routes are registered in this file:
* `/`: greets the user by the identity stored in the session cookie, or "anonymous" if there is none.
* `/login`: attaches a user identity to the session and sets the session cookie, then redirects to `/`.
* `/logout`: clears the session identity and redirects to `/`.
The example uses `actix-identity` and `actix-session` with a cookie store to assist with easy setup.
## Code
```toml Cargo.toml theme={null}
[package]
name = "cookie-authentication"
version = "0.1.0"
edition = "2021"
[dependencies]
actix-identity = "0.7.1"
actix-session = { version = "0.9.0", features = ["cookie-session"] }
actix-web = "4.3.1"
shuttle-actix-web = "0.57.0"
shuttle-runtime = "0.57.0"
tokio = "1.26.0"
```
Your `main.rs` should look like this:
````rust main.rs theme={null}
use actix_identity::{Identity, IdentityMiddleware};
use actix_session::{config::PersistentSession, storage::CookieSessionStore, SessionMiddleware};
use actix_web::{
cookie::{time::Duration, Key},
error, get,
http::StatusCode,
middleware,
web::{self, ServiceConfig},
HttpMessage as _, HttpRequest, Responder,
};
use shuttle_actix_web::ShuttleActixWeb;
const FIVE_MINUTES: Duration = Duration::minutes(5);
#[get("/")]
async fn index(identity: Option<Identity>) -> actix_web::Result<String> {
let id = match identity.map(|id| id.id()) {
None => "anonymous".to_owned(),
Some(Ok(id)) => id,
Some(Err(err)) => return Err(error::ErrorInternalServerError(err)),
};
Ok(format!("Hello {id}"))
}
#[get("/login")]
async fn login(req: HttpRequest) -> impl Responder {
// some kind of authentication should happen here
// attach a verified user identity to the active session
Identity::login(&req.extensions(), "user1".to_owned()).unwrap();
web::Redirect::to("/").using_status_code(StatusCode::FOUND)
}
#[get("/logout")]
async fn logout(id: Identity) -> impl Responder {
id.logout();
web::Redirect::to("/").using_status_code(StatusCode::FOUND)
}
#[shuttle_runtime::main]
async fn main() -> ShuttleActixWeb<impl FnOnce(&mut ServiceConfig) + Send + Clone + 'static> {
// Generate a random secret key. Note that it is important to use a unique
// secret key for every project. Anyone with access to the key can generate
// authentication cookies for any user!
//
// When deployed the secret key should be read from deployment secrets.
//
// For example, a secure random key (in base64 format) can be generated with the OpenSSL CLI:
// ```
// openssl rand -base64 64
// ```
//
// Then decoded and converted to a Key:
// ```
// let secret_key = Key::from(base64::decode(&private_key_base64).unwrap());
// ```
let secret_key = Key::generate();
let config = move |cfg: &mut ServiceConfig| {
cfg.service(
web::scope("")
.service(index)
.service(login)
.service(logout)
.wrap(IdentityMiddleware::default())
.wrap(
SessionMiddleware::builder(CookieSessionStore::default(), secret_key.clone())
.cookie_name("auth-example".to_owned())
.cookie_secure(false)
.session_lifecycle(PersistentSession::default().session_ttl(FIVE_MINUTES))
.build(),
)
.wrap(middleware::NormalizePath::trim())
.wrap(middleware::Logger::default()),
);
};
Ok(config.into())
}
````
## Usage
Once you've cloned this example, launch it locally by using `shuttle run`. Once you've verified that it's up, you'll now be able to go to `http://localhost:8000` and start trying the example out!
First, we should be able to access the index route without any authentication:
```sh theme={null}
curl http://localhost:8000/
```
Without a session cookie, this returns "Hello anonymous". So let's get a session cookie from the login route, saving it to a cookie jar file:
```sh theme={null}
curl -c cookies.txt http://localhost:8000/login
```
Accessing the index route with the cookie will now greet the logged-in user with "Hello user1":
```sh theme={null}
curl -b cookies.txt http://localhost:8000/
```
The session is set to expire after 5 minutes, so wait a while and try the last request again. Once the session has expired, a user will need to log in again to get a new cookie. A session can also be ended early by calling the `/logout` route with the cookie.
Looking to extend this example? Here are a couple of ideas to get you started:
* Create a frontend to host the login
* Add a route for registering
* Use a database to check login credentials
***
# Postgres Todo App
Source: https://docs.shuttle.dev/examples/actix-postgres
This article walks you through how you can easily set up a simple to-do app using Actix Web and SQLx with PostgreSQL.
## Description
This example shows how to make a simple TODO app using Actix Web and a shared Shuttle Postgres DB.
The following routes are provided:
* GET `/todos/{id}` - Get a to-do item by ID.
* POST `/todos` - Create a to-do item. Takes "note" as a JSON body parameter.
You can clone the example below by running the following (you'll need `shuttle` CLI installed):
```bash theme={null}
shuttle init --from shuttle-hq/shuttle-examples --subfolder actix-web/postgres
```
## Code
```rust src/main.rs theme={null}
use actix_web::middleware::Logger;
use actix_web::{
error, get, post,
web::{self, Json, ServiceConfig},
Result,
};
use serde::{Deserialize, Serialize};
use shuttle_actix_web::ShuttleActixWeb;
use shuttle_runtime::CustomError;
use sqlx::{Executor, FromRow, PgPool};
#[get("/{id}")]
async fn retrieve(path: web::Path<i32>, state: web::Data<AppState>) -> Result<Json<Todo>> {
// query database to get data
// if error, return Bad Request HTTP status code
let todo = sqlx::query_as("SELECT * FROM todos WHERE id = $1")
.bind(*path)
.fetch_one(&state.pool)
.await
.map_err(|e| error::ErrorBadRequest(e.to_string()))?;
Ok(Json(todo))
}
#[post("")]
async fn add(todo: web::Json<TodoNew>, state: web::Data<AppState>) -> Result<Json<Todo>> {
// query database to create a new record using the request body
// if error, return Bad Request HTTP status code
let todo = sqlx::query_as("INSERT INTO todos(note) VALUES ($1) RETURNING id, note")
.bind(&todo.note)
.fetch_one(&state.pool)
.await
.map_err(|e| error::ErrorBadRequest(e.to_string()))?;
Ok(Json(todo))
}
#[derive(Clone)]
struct AppState {
pool: PgPool,
}
#[shuttle_runtime::main]
async fn actix_web(
#[shuttle_shared_db::Postgres] pool: PgPool,
) -> ShuttleActixWeb<impl FnOnce(&mut ServiceConfig) + Send + Clone + 'static> {
// run migrations
pool.execute(include_str!("../schema.sql"))
.await
.map_err(CustomError::new)?;
// set up AppState
let state = web::Data::new(AppState { pool });
// set up our Actix web service and wrap it with logger and add the AppState as app data
let config = move |cfg: &mut ServiceConfig| {
cfg.service(
web::scope("/todos")
.wrap(Logger::default())
.service(retrieve)
.service(add)
.app_data(state),
);
};
Ok(config.into())
}
#[derive(Deserialize)]
struct TodoNew {
pub note: String,
}
#[derive(Serialize, Deserialize, FromRow)]
struct Todo {
pub id: i32,
pub note: String,
}
```
```sql schema.sql theme={null}
DROP TABLE IF EXISTS todos;
CREATE TABLE todos (
id serial PRIMARY KEY,
note TEXT NOT NULL
);
```
```toml Cargo.toml theme={null}
[package]
name = "postgres"
version = "0.1.0"
edition = "2021"
[dependencies]
actix-web = "4.3.1"
shuttle-actix-web = "0.57.0"
shuttle-runtime = "0.57.0"
serde = "1.0.148"
shuttle-shared-db = { version = "0.57.0", features = ["postgres", "sqlx"] }
sqlx = "0.8.2"
tokio = "1.26.0"
```
## Usage
Once you've cloned the example, try launching it locally using `shuttle run`. Once you've verified that it runs successfully, try using cURL in a new terminal to send a POST request:
```bash theme={null}
curl -X POST -d '{"note":"Hello world!"}' -H 'Content-Type: application/json' \
http://localhost:8000/todos
```
Assuming the request was successful, you'll get back a JSON response with the ID and note of the record you just created. Using that ID in the following cURL command, you should be able to retrieve the message you stored:
```bash theme={null}
curl http://localhost:8000/todos/<id>
```
Interested in extending this example? Here are a couple of ideas:
* Add update and delete routes
* Add static files to show your records
***
# Static Files
Source: https://docs.shuttle.dev/examples/actix-static-files
This article walks you through setting up static files with Actix Web, a powerful Rust framework for battle-hardened web applications.
## Description
This example has one route at `/` where the homepage is served and shows you how you can serve HTML or other types of files with Actix Web.
Note that static assets are declared in the `Shuttle.toml` file.
You can clone the example below by running the following (you'll need `shuttle` CLI installed):
```bash theme={null}
shuttle init --from shuttle-hq/shuttle-examples --subfolder actix-web/static-files
```
## Code
```rust src/main.rs theme={null}
use actix_files::Files;
use actix_web::web::ServiceConfig;
use shuttle_actix_web::ShuttleActixWeb;
#[shuttle_runtime::main]
async fn main() -> ShuttleActixWeb<impl FnOnce(&mut ServiceConfig) + Send + Clone + 'static> {
let config = move |cfg: &mut ServiceConfig| {
cfg.service(Files::new("/", "assets"));
};
Ok(config.into())
}
```
```html assets/index.html theme={null}
<!DOCTYPE html>
<html>
  <head>
    <title>Static Files</title>
  </head>
  <body>
    <p>This is an example of serving static files with Actix Web and Shuttle.</p>
  </body>
</html>
```
```toml Cargo.toml theme={null}
[package]
name = "static-files"
version = "0.1.0"
edition = "2021"
[dependencies]
actix-files = "0.6.2"
actix-web = "4.3.1"
shuttle-actix-web = "0.57.0"
shuttle-runtime = "0.57.0"
tokio = "1.26.0"
```
```toml Shuttle.toml theme={null}
[build]
assets = [
"assets/*",
]
```
## Usage
After you clone the example, launch it locally by using `shuttle run` then visit the home route at `http://localhost:8000` - you should see a homepage that shows our included HTML file.
You can extend this example by adding more routes that serve other files.
***
# Actix WebSocket Actorless
Source: https://docs.shuttle.dev/examples/actix-websocket-actorless
Learn how websockets can upgrade your web service by providing live update functionality, using Actix Web.
## Description
This example shows how to use a WebSocket to show the live status of the Shuttle API on a web page. The app also provides an echo service and notifies when the number of connected users changes.
You can clone the example below by running the following (you'll need `shuttle` CLI installed):
```bash theme={null}
shuttle init --from shuttle-hq/shuttle-examples --subfolder actix-web/websocket-actorless
```
## Code
```rust src/main.rs theme={null}
use actix_files::NamedFile;
use actix_web::{
web::{self, ServiceConfig},
HttpRequest, HttpResponse, Responder,
};
use actix_ws::Message;
use chrono::{DateTime, Utc};
use futures::StreamExt;
use serde::Serialize;
use shuttle_actix_web::ShuttleActixWeb;
use std::{
sync::{atomic::AtomicUsize, Arc},
time::Duration,
};
use tokio::sync::{mpsc, watch};
const PAUSE_SECS: u64 = 15;
const STATUS_URI: &str = "https://api.shuttle.dev/.healthz";
type AppState = (
mpsc::UnboundedSender<WsState>,
watch::Receiver<ApiStateMessage>,
);
#[derive(Debug, Clone)]
enum WsState {
Connected,
Disconnected,
}
#[derive(Serialize, Default, Clone, Debug)]
struct ApiStateMessage {
client_count: usize,
origin: String,
date_time: DateTime<Utc>,
is_up: bool,
}
async fn echo_handler(
mut session: actix_ws::Session,
mut msg_stream: actix_ws::MessageStream,
tx: mpsc::UnboundedSender<WsState>,
) {
while let Some(Ok(msg)) = msg_stream.next().await {
match msg {
Message::Ping(bytes) => {
if session.pong(&bytes).await.is_err() {
return;
}
}
Message::Text(s) => {
session.text(s.clone()).await.unwrap();
tracing::info!("Got text, {}", s);
}
_ => break,
}
}
if let Err(e) = tx.send(WsState::Disconnected) {
tracing::error!("Failed to send disconnected state: {e:?}");
}
let _ = session.close(None).await;
}
async fn websocket(
req: HttpRequest,
body: web::Payload,
app_state: web::Data<AppState>,
) -> actix_web::Result<HttpResponse> {
let app_state = app_state.into_inner();
let (response, session, msg_stream) = actix_ws::handle(&req, body)?;
let tx_ws_state = app_state.0.clone();
let tx_ws_state2 = tx_ws_state.clone();
// send connected state
if let Err(e) = tx_ws_state.send(WsState::Connected) {
tracing::error!("Failed to send connected state: {e:?}");
}
// listen for api state changes
let mut session_clone = session.clone();
let mut rx_api_state = app_state.1.clone();
actix_web::rt::spawn(async move {
// adding some delay to avoid getting the first message too soon.
tokio::time::sleep(Duration::from_millis(500)).await;
while rx_api_state.changed().await.is_ok() {
let msg = rx_api_state.borrow().clone();
tracing::info!("Handling ApiStateMessage: {msg:?}");
let msg = serde_json::to_string(&msg).unwrap();
session_clone.text(msg).await.unwrap();
}
});
// echo handler
actix_web::rt::spawn(echo_handler(session, msg_stream, tx_ws_state2));
Ok(response)
}
async fn index() -> impl Responder {
NamedFile::open_async("./static/index.html")
.await
.map_err(actix_web::error::ErrorInternalServerError)
}
#[shuttle_runtime::main]
async fn main() -> ShuttleActixWeb<impl FnOnce(&mut ServiceConfig) + Send + Clone + 'static> {
// We're going to use channels to communicate between threads.
// api state channel
let (tx_api_state, rx_api_state) = watch::channel(ApiStateMessage::default());
// websocket state channel
let (tx_ws_state, mut rx_ws_state) = mpsc::unbounded_channel::<WsState>();
// create a shared state for the client counter
let client_count = Arc::new(AtomicUsize::new(0));
let client_count2 = client_count.clone();
// share tx_api_state
let shared_tx_api_state = Arc::new(tx_api_state);
let shared_tx_api_state2 = shared_tx_api_state.clone();
// share reqwest client
let client = reqwest::Client::default();
let client2 = client.clone();
// Spawn a thread to continually check the status of the api
tokio::spawn(async move {
let duration = Duration::from_secs(PAUSE_SECS);
loop {
tokio::time::sleep(duration).await;
let is_up = get_api_status(&client).await;
let response = ApiStateMessage {
client_count: client_count.load(std::sync::atomic::Ordering::SeqCst),
origin: "api_update loop".to_string(),
date_time: Utc::now(),
is_up,
};
if shared_tx_api_state.send(response).is_err() {
tracing::error!("Failed to send api state from checker thread");
break;
}
}
});
// spawn a thread to continuously check the status of the websocket connections
tokio::spawn(async move {
while let Some(state) = rx_ws_state.recv().await {
match state {
WsState::Connected => {
tracing::info!("Client connected");
client_count2.fetch_add(1, std::sync::atomic::Ordering::SeqCst);
}
WsState::Disconnected => {
tracing::info!("Client disconnected");
client_count2.fetch_sub(1, std::sync::atomic::Ordering::SeqCst);
}
}
let client_count = client_count2.load(std::sync::atomic::Ordering::SeqCst);
tracing::info!("Client count: {client_count}");
let is_up = get_api_status(&client2).await;
if let Err(e) = shared_tx_api_state2.send(ApiStateMessage {
client_count,
origin: "ws_update".to_string(),
date_time: Utc::now(),
is_up,
}) {
tracing::error!("Failed to send api state: {e:?}");
}
}
});
let app_state = web::Data::new((tx_ws_state, rx_api_state));
let config = move |cfg: &mut ServiceConfig| {
cfg.service(web::resource("/").route(web::get().to(index)))
.service(
web::resource("/ws")
.app_data(app_state)
.route(web::get().to(websocket)),
);
};
Ok(config.into())
}
async fn get_api_status(client: &reqwest::Client) -> bool {
let response = client.get(STATUS_URI).send().await;
response.is_ok_and(|r| r.status().is_success())
}
```
```html static/index.html theme={null}
<!DOCTYPE html>
<html>
  <head>
    <title>WS with Actix Web</title>
  </head>
  <body>
    <h1>WebSocket example</h1>
    <p>
      When you connect you will be notified of the Shuttle API status and the
      amount of connected users every 15 seconds.
    </p>
    <p>You can also send a message to the server and you will get back the echo.</p>
    <p>Status: disconnected</p>
    <!-- The client-side WebSocket script is omitted here; see the full file in the example repository. -->
  </body>
</html>
```
```toml Cargo.toml theme={null}
[package]
name = "websocket-actorless"
version = "0.1.0"
edition = "2021"
publish = false
[dependencies]
actix-files = "0.6.2"
actix-web = "4.3.1"
actix-ws = "0.2.5"
chrono = { version = "0.4.23", features = ["serde"] }
futures = "0.3"
reqwest = "0.11"
serde = { version = "1.0", features = ["derive"] }
serde_json = "1.0"
shuttle-actix-web = "0.57.0"
shuttle-runtime = "0.57.0"
tokio = { version = "1", features = ["rt-multi-thread", "sync"] }
tracing = "0.1"
```
```toml Shuttle.toml theme={null}
[build]
assets = [
"static/*",
]
```
## Usage
Once you've cloned the example, launch it locally using `shuttle run` and then go to `http://localhost:8000`. You should see a status page, and if you open your browser's developer tools and check the Network tab, you'll see that the WebSocket upgrade request received an HTTP status code of 101.
***
# Hello World
Source: https://docs.shuttle.dev/examples/axum
Axum is a web application framework that focuses on ergonomics and modularity.
This section revolves around simple Axum examples you can get started with quickly by following these 3 steps:
1. Initialize a new Axum project by running the `shuttle init --template axum` command
2. Copy-paste the contents of the example you want to deploy, checking the tabs of the snippet(s) to make sure you copy the right code into the right file
3. Run the `shuttle deploy` command
If you are looking for step-by-step guides, check out our
[Tutorials](/templates/tutorials) section.
You can clone the example below by running the following (you'll need `shuttle` CLI installed):
```bash theme={null}
shuttle init --template axum
```
```rust src/main.rs theme={null}
use axum::{routing::get, Router};
async fn hello_world() -> &'static str {
"Hello, world!"
}
#[shuttle_runtime::main]
async fn main() -> shuttle_axum::ShuttleAxum {
let router = Router::new().route("/", get(hello_world));
Ok(router.into())
}
```
```toml Cargo.toml theme={null}
[package]
name = "hello-world"
version = "0.1.0"
edition = "2021"
[dependencies]
axum = "0.8"
shuttle-axum = "0.57.0"
shuttle-runtime = "0.57.0"
tokio = "1.28.2"
```
***
# JWT Authentication
Source: https://docs.shuttle.dev/examples/axum-jwt-authentication
Learn how you can secure your Axum web application by using JWT tokens.
## Description
This example shows how to use Axum authentication with [JSON Web Tokens](https://jwt.io/) (JWT for short).
The idea is that all requests authenticate first at a login route to get a JWT.
Then the JWT is sent with all requests requiring authentication using the HTTP header `Authorization: Bearer <token>`.
This example uses the [`jsonwebtoken`](https://github.com/Keats/jsonwebtoken) crate, which supports symmetric and asymmetric secret encoding, built-in validations, and most JWT algorithms.
Three Axum routes are registered in this file:
* `/public`: a route that can be called without needing any authentication.
* `/login`: a route for posting a JSON object with a username and password to get a JWT.
* `/private`: a route that can only be accessed with a valid JWT.
You can clone the example below by running the following (you'll need `shuttle` CLI installed):
```bash theme={null}
shuttle init --from shuttle-hq/shuttle-examples --subfolder axum/jwt-authentication
```
## Code
```rust main.rs theme={null}
use axum::{
extract::FromRequestParts,
http::{request::Parts, StatusCode},
response::{IntoResponse, Response},
routing::{get, post},
Json, RequestPartsExt, Router,
};
use axum_extra::{
headers::{authorization::Bearer, Authorization},
TypedHeader,
};
use jsonwebtoken::{decode, encode, DecodingKey, EncodingKey, Header, Validation};
use once_cell::sync::Lazy;
use serde::{Deserialize, Serialize};
use serde_json::json;
use std::fmt::Display;
use std::time::SystemTime;
static KEYS: Lazy<Keys> = Lazy::new(|| {
// note that in production, you will probably want to use a random SHA-256 hash or similar
let secret = "JWT_SECRET".to_string();
Keys::new(secret.as_bytes())
});
#[shuttle_runtime::main]
async fn main() -> shuttle_axum::ShuttleAxum {
let app = Router::new()
.route("/public", get(public))
.route("/private", get(private))
.route("/login", post(login));
Ok(app.into())
}
async fn public() -> &'static str {
// A public endpoint that anyone can access
"Welcome to the public area :)"
}
async fn private(claims: Claims) -> Result<String, AuthError> {
// Send the protected data to the user
Ok(format!(
"Welcome to the protected area :)\nYour data:\n{claims}",
))
}
async fn login(Json(payload): Json<AuthPayload>) -> Result<Json<AuthBody>, AuthError> {
// Check if the user sent the credentials
if payload.client_id.is_empty() || payload.client_secret.is_empty() {
return Err(AuthError::MissingCredentials);
}
// Here you can check the user credentials from a database
if payload.client_id != "foo" || payload.client_secret != "bar" {
return Err(AuthError::WrongCredentials);
}
// add 5 minutes to current unix epoch time as expiry date/time
let exp = SystemTime::now()
.duration_since(SystemTime::UNIX_EPOCH)
.unwrap()
.as_secs()
+ 300;
let claims = Claims {
sub: "b@b.com".to_owned(),
company: "ACME".to_owned(),
// Mandatory expiry time as UTC timestamp - takes unix epoch
exp: usize::try_from(exp).unwrap(),
};
// Create the authorization token
let token = encode(&Header::default(), &claims, &KEYS.encoding)
.map_err(|_| AuthError::TokenCreation)?;
// Send the authorized token
Ok(Json(AuthBody::new(token)))
}
// allow us to print the claim details for the private route
impl Display for Claims {
fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
write!(f, "Email: {}\nCompany: {}", self.sub, self.company)
}
}
// implement a method to create a response type containing the JWT
impl AuthBody {
fn new(access_token: String) -> Self {
Self {
access_token,
token_type: "Bearer".to_string(),
}
}
}
// implement FromRequestParts for Claims (the JWT struct)
// FromRequestParts allows us to use Claims without consuming the request
impl<S> FromRequestParts<S> for Claims
where
S: Send + Sync,
{
type Rejection = AuthError;
async fn from_request_parts(parts: &mut Parts, _state: &S) -> Result<Self, Self::Rejection> {
// Extract the token from the authorization header
let TypedHeader(Authorization(bearer)) = parts
.extract::<TypedHeader<Authorization<Bearer>>>()
.await
.map_err(|_| AuthError::InvalidToken)?;
// Decode the user data
let token_data = decode::<Claims>(bearer.token(), &KEYS.decoding, &Validation::default())
.map_err(|_| AuthError::InvalidToken)?;
Ok(token_data.claims)
}
}
// implement IntoResponse for AuthError so we can use it as an Axum response type
impl IntoResponse for AuthError {
fn into_response(self) -> Response {
let (status, error_message) = match self {
AuthError::WrongCredentials => (StatusCode::UNAUTHORIZED, "Wrong credentials"),
AuthError::MissingCredentials => (StatusCode::BAD_REQUEST, "Missing credentials"),
AuthError::TokenCreation => (StatusCode::INTERNAL_SERVER_ERROR, "Token creation error"),
AuthError::InvalidToken => (StatusCode::BAD_REQUEST, "Invalid token"),
};
let body = Json(json!({
"error": error_message,
}));
(status, body).into_response()
}
}
// encoding/decoding keys - set in the static `once_cell` above
struct Keys {
encoding: EncodingKey,
decoding: DecodingKey,
}
impl Keys {
fn new(secret: &[u8]) -> Self {
Self {
encoding: EncodingKey::from_secret(secret),
decoding: DecodingKey::from_secret(secret),
}
}
}
// the JWT claim
#[derive(Debug, Serialize, Deserialize)]
struct Claims {
sub: String,
company: String,
exp: usize,
}
// the response that we pass back to HTTP client once successfully authorised
#[derive(Debug, Serialize)]
struct AuthBody {
access_token: String,
token_type: String,
}
// the request type - "client_id" is analogous to a username, client_secret can also be interpreted as a password
#[derive(Debug, Deserialize)]
struct AuthPayload {
client_id: String,
client_secret: String,
}
// error types for auth errors
#[derive(Debug)]
enum AuthError {
WrongCredentials,
MissingCredentials,
TokenCreation,
InvalidToken,
}
```
```toml Cargo.toml theme={null}
[package]
name = "jwt-authentication"
version = "0.1.0"
edition = "2021"
[dependencies]
axum = "0.8"
axum-extra = { version = "0.10", features = ["typed-header"] }
jsonwebtoken = "8.3.0"
once_cell = "1.18.0"
serde = { version = "1", features = ["derive"] }
serde_json = "1"
shuttle-axum = "0.57.0"
shuttle-runtime = "0.57.0"
tokio = "1.28.2"
```
## Usage
Once you've cloned this example, launch it locally with `shuttle run`. Once you've verified that it's up, go to `http://localhost:8000` and start trying the example out!
First, we should be able to access the public endpoint without any authentication using:
```sh theme={null}
curl http://localhost:8000/public
```
But trying to access the private endpoint without a token will fail (this example responds with `400 Bad Request` for a missing or invalid token):
```sh theme={null}
curl http://localhost:8000/private
```
So let's get a JWT from the login route first:
```sh theme={null}
curl -X POST --header "Content-Type: application/json" --data '{"client_id": "foo", "client_secret": "bar"}' http://localhost:8000/login
```
Accessing the private endpoint with the token will now succeed:
```sh theme={null}
curl --header "Authorization: Bearer <token>" http://localhost:8000/private
```
The token is set to expire in 5 minutes, so wait a while and try to access the private endpoint again. Once the token has expired, a user will need to get a new token from login.
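A JWT is three base64url segments joined by dots, so the claims (including `exp`) can be inspected without knowing the secret by decoding the middle segment. Here is a small sketch using a hand-built payload that mirrors the example's claims; with a real token, pipe the value returned by `/login` through the same `cut`/`tr`/`base64` steps:

```shell
# Encode a sample claims object the way `jsonwebtoken` does: base64url, no padding.
payload=$(printf '%s' '{"sub":"b@b.com","company":"ACME","exp":1700000000}' | base64 | tr -d '\n' | tr '+/' '-_')
token="header.$payload.signature" # header/signature are stand-ins, not valid segments
# Take the second dot-separated field, map base64url back to base64, and decode.
printf '%s' "$token" | cut -d. -f2 | tr '_-' '/+' | base64 -d
# → {"sub":"b@b.com","company":"ACME","exp":1700000000}
```

The `exp` claim is a unix timestamp; comparing it against `date +%s` tells you whether the token is still valid.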
Looking to extend this example? Here are a couple of ideas to get you started:
* Create a frontend to host the login
* Add a route for registering
* Use a database to check login credentials
***
# Postgres Todo App
Source: https://docs.shuttle.dev/examples/axum-postgres
This article walks you through how you can easily set up a simple to-do app using Axum and SQLx with PostgreSQL.
## Description
This example shows how to make a simple TODO app using Axum and a shared Shuttle Postgres DB.
The following routes are provided:
* GET `/todos/{id}` - Get a to-do item by ID.
* POST `/todos` - Create a to-do item. Takes "note" as a JSON body parameter.
You can clone the example below by running the following (you'll need `shuttle` CLI installed):
```bash theme={null}
shuttle init --from shuttle-hq/shuttle-examples --subfolder axum/postgres
```
## Code
```rust src/main.rs theme={null}
use axum::{
extract::{Path, State},
http::StatusCode,
response::IntoResponse,
routing::{get, post},
Json, Router,
};
use serde::{Deserialize, Serialize};
use sqlx::{FromRow, PgPool};
async fn retrieve(
Path(id): Path<i32>,
State(state): State<MyState>,
) -> Result<impl IntoResponse, impl IntoResponse> {
match sqlx::query_as::<_, Todo>("SELECT * FROM todos WHERE id = $1")
.bind(id)
.fetch_one(&state.pool)
.await
{
Ok(todo) => Ok((StatusCode::OK, Json(todo))),
Err(e) => Err((StatusCode::BAD_REQUEST, e.to_string())),
}
}
async fn add(
State(state): State<MyState>,
Json(data): Json<TodoNew>,
) -> Result<impl IntoResponse, impl IntoResponse> {
match sqlx::query_as::<_, Todo>("INSERT INTO todos (note) VALUES ($1) RETURNING id, note")
.bind(&data.note)
.fetch_one(&state.pool)
.await
{
Ok(todo) => Ok((StatusCode::CREATED, Json(todo))),
Err(e) => Err((StatusCode::BAD_REQUEST, e.to_string())),
}
}
#[derive(Clone)]
struct MyState {
pool: PgPool,
}
#[shuttle_runtime::main]
async fn main(#[shuttle_shared_db::Postgres] pool: PgPool) -> shuttle_axum::ShuttleAxum {
sqlx::migrate!()
.run(&pool)
.await
.expect("Failed to run migrations");
let state = MyState { pool };
let router = Router::new()
.route("/todos", post(add))
.route("/todos/{id}", get(retrieve))
.with_state(state);
Ok(router.into())
}
#[derive(Deserialize)]
struct TodoNew {
pub note: String,
}
#[derive(Serialize, FromRow)]
struct Todo {
pub id: i32,
pub note: String,
}
```
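The `sqlx::migrate!()` call in `main` embeds SQL files from a `migrations` folder at compile time and runs them on startup. The cloned example ships that folder; a minimal migration matching the `Todo` struct would look something like this (the filename is illustrative):

```sql
-- migrations/0001_init.sql (illustrative filename; sqlx runs files in name order)
CREATE TABLE IF NOT EXISTS todos (
  id serial PRIMARY KEY,
  note TEXT NOT NULL
);
```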
```toml Cargo.toml theme={null}
[package]
name = "postgres"
version = "0.1.0"
edition = "2021"
[dependencies]
axum = "0.8"
serde = { version = "1", features = ["derive"] }
shuttle-axum = "0.57.0"
shuttle-runtime = "0.57.0"
shuttle-shared-db = { version = "0.57.0", features = ["postgres", "sqlx"] }
sqlx = "0.8"
tokio = "1.28.2"
```
## Usage
Once you've cloned the example, try launching it locally using `shuttle run`. Once you've verified that it runs successfully, try using cURL in a new terminal to send a POST request:
```bash theme={null}
curl -X POST -d '{"note":"Hello world!"}' -H 'Content-Type: application/json' \
http://localhost:8000/todos
```
Assuming the request was successful, you'll get back a JSON response with the ID and Note of the record you just created. If you try the following cURL command, you should be able to then retrieve the message you stored:
```bash theme={null}
curl http://localhost:8000/todos/<id>
```
Interested in extending this example? Here are a couple of ideas:
* Add update and delete routes
* Add static files to show your records
***
# Static Files
Source: https://docs.shuttle.dev/examples/axum-static-files
This article walks you through setting up static files with Axum, a powerful Rust framework maintained by the Tokio-rs team.
## Description
This example has one route at `/` where the homepage is served, and shows how you can serve HTML or other types of files with Axum.
Note that build assets are declared in `Shuttle.toml`.
You can clone the example below by running the following (you'll need `shuttle` CLI installed):
```bash theme={null}
shuttle init --from shuttle-hq/shuttle-examples --subfolder axum/static-files
```
## Code
```rust src/main.rs theme={null}
use axum::Router;
use tower_http::services::ServeDir;
#[shuttle_runtime::main]
async fn main() -> shuttle_axum::ShuttleAxum {
// ServeDir falls back to serve index.html when requesting a directory
let router = Router::new().fallback_service(ServeDir::new("assets"));
Ok(router.into())
}
```
```html assets/index.html theme={null}
<!DOCTYPE html>
<html>
<head>
<title>Static Files</title>
</head>
<body>
<p>This is an example of serving static files with Axum and Shuttle.</p>
</body>
</html>
```
```toml Cargo.toml theme={null}
[package]
name = "static-files"
version = "0.1.0"
edition = "2021"
publish = false
[dependencies]
axum = "0.8"
shuttle-axum = "0.57.0"
shuttle-runtime = "0.57.0"
tokio = "1.28.2"
tower-http = { version = "0.6", features = ["fs"] }
```
```toml Shuttle.toml theme={null}
[build]
assets = [
"assets",
]
```
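The `assets` field takes a list of paths or glob patterns relative to the project root, so additional folders can be shipped alongside `assets`; for instance (the `templates` entry below is a hypothetical addition):

```toml
[build]
assets = [
"assets",
"templates/*.html",
]
```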
## Usage
After you clone the example, launch it locally by using `shuttle run` then visit the home route at `http://localhost:8000` - you should see a homepage that shows our included HTML file.
You can extend this example by adding more routes that serve other files.
***
# WebSockets
Source: https://docs.shuttle.dev/examples/axum-websockets
Learn how websockets can upgrade your web service by providing live update functionality, using Axum.
## Description
This example shows how to use a WebSocket to show the live status of the Shuttle API on a web page.
There are a few routes available:
* `/` - the homepage route where you can find the `index.html` page.
* `/websocket` - the route that handles websockets.
You can clone the example below by running the following (you'll need `shuttle` CLI installed):
```bash theme={null}
shuttle init --from shuttle-hq/shuttle-examples --subfolder axum/websocket
```
## Code
```rust src/main.rs theme={null}
use std::{sync::Arc, time::Duration};
use axum::{
extract::{
ws::{Message, WebSocket},
WebSocketUpgrade,
},
response::IntoResponse,
routing::get,
Extension, Router,
};
use chrono::{DateTime, Utc};
use futures::{SinkExt, StreamExt};
use serde::Serialize;
use shuttle_axum::ShuttleAxum;
use tokio::{
sync::{watch, Mutex},
time::sleep,
};
use tower_http::services::ServeDir;
struct State {
clients_count: usize,
rx: watch::Receiver<Message>,
}
const PAUSE_SECS: u64 = 15;
const STATUS_URI: &str = "https://api.shuttle.dev/.healthz";
#[derive(Serialize)]
struct Response {
clients_count: usize,
#[serde(rename = "dateTime")]
date_time: DateTime<Utc>,
is_up: bool,
}
#[shuttle_runtime::main]
async fn main() -> ShuttleAxum {
let (tx, rx) = watch::channel(Message::Text("{}".into()));
let state = Arc::new(Mutex::new(State {
clients_count: 0,
rx,
}));
// Spawn a thread to continually check the status of the api
let state_send = state.clone();
tokio::spawn(async move {
let duration = Duration::from_secs(PAUSE_SECS);
loop {
let is_up = reqwest::get(STATUS_URI)
.await
.is_ok_and(|r| r.status().is_success());
let response = Response {
clients_count: state_send.lock().await.clients_count,
date_time: Utc::now(),
is_up,
};
let msg = serde_json::to_string(&response).unwrap();
if tx.send(Message::Text(msg.into())).is_err() {
break;
}
sleep(duration).await;
}
});
let router = Router::new()
.route("/websocket", get(websocket_handler))
.fallback_service(ServeDir::new("static"))
.layer(Extension(state));
Ok(router.into())
}
async fn websocket_handler(
ws: WebSocketUpgrade,
Extension(state): Extension<Arc<Mutex<State>>>,
) -> impl IntoResponse {
ws.on_upgrade(|socket| websocket(socket, state))
}
async fn websocket(stream: WebSocket, state: Arc<Mutex<State>>) {
// By splitting we can send and receive at the same time.
let (mut sender, mut receiver) = stream.split();
let mut rx = {
let mut state = state.lock().await;
state.clients_count += 1;
state.rx.clone()
};
// This task will receive watch messages and forward it to this connected client.
let mut send_task = tokio::spawn(async move {
while let Ok(()) = rx.changed().await {
let msg = rx.borrow().clone();
if sender.send(msg).await.is_err() {
break;
}
}
});
// This task will receive messages from this client.
let mut recv_task = tokio::spawn(async move {
while let Some(Ok(Message::Text(text))) = receiver.next().await {
println!("this example does not read any messages, but got: {text}");
}
});
// If any one of the tasks exit, abort the other.
tokio::select! {
_ = (&mut send_task) => recv_task.abort(),
_ = (&mut recv_task) => send_task.abort(),
};
// This client disconnected
state.lock().await.clients_count -= 1;
}
```
```html static/index.html theme={null}
<!DOCTYPE html>
<html>
<head>
<title>Websocket status page</title>
</head>
<body>
<h1>Current API status</h1>
<p>Last check time</p>
<p>Clients watching</p>
<!-- The full example also includes the client-side WebSocket script that fills in these values. -->
</body>
</html>
```
```toml Cargo.toml theme={null}
[package]
name = "websocket"
version = "0.1.0"
edition = "2021"
[dependencies]
axum = { version = "0.8", features = ["ws"] }
chrono = { version = "0.4", features = ["serde"] }
futures = "0.3.28"
reqwest = "0.12"
serde = { version = "1", features = ["derive"] }
serde_json = "1"
shuttle-axum = "0.57.0"
shuttle-runtime = "0.57.0"
tokio = "1"
tower-http = { version = "0.6", features = ["fs"] }
```
```toml Shuttle.toml theme={null}
[build]
assets = [
"static",
]
```
## Usage
Once you've cloned the example, launch it locally with `shuttle run` and go to `http://localhost:8000`. You should see a status page, and if you open your browser's developer tools and check the Network tab, you'll see that the WebSocket request was answered with an HTTP status code of 101 (Switching Protocols).
***
# Loco
Source: https://docs.shuttle.dev/examples/loco
Loco is Ruby on Rails but for Rust, a web framework with everything included.
A Loco Hello World project is more complex than those of other frameworks, so it is not shown here in full.
Check out one of these ways to use Loco:
## Generate Shuttle deployment config
Loco CLI provides a command for generating everything needed to deploy a Loco app on Shuttle.
Read more in the [Loco deployment docs](https://loco.rs/docs/infrastructure/deployment/)!
## Starter template for Loco on Shuttle
There is also a full Hello World [example](https://github.com/shuttle-hq/shuttle-examples/tree/main/loco/hello-world).
You can use it by running `shuttle init` and selecting the Loco Hello World template.
# Other Examples
Source: https://docs.shuttle.dev/examples/other
This section contains examples for the following frameworks: Rama, Tower, Warp, Salvo, and Poem.
### Hello World
This example provides a simple "Hello, world!" Rust application that you can deploy with Shuttle. It's a great starting point for learning how to use Shuttle and getting familiar with the deployment process for Rust applications.
In order to get started, initialize your project with `shuttle init` and pick the framework you want to use for this example.
Once you are done, your project should be set up with all the required dependencies, so go ahead and copy/paste the relevant code snippet from below into your `main.rs` file.
```rust Rama theme={null}
use rama::{
Context, Layer,
error::ErrorContext,
http::{
StatusCode,
layer::forwarded::GetForwardedHeaderLayer,
service::web::{Router, response::Result},
},
net::forwarded::Forwarded,
};
async fn hello_world(ctx: Context<()>) -> Result<String> {
Ok(match ctx.get::<Forwarded>() {
Some(forwarded) => format!(
"Hello cloud user @ {}!",
forwarded
.client_ip()
.context("missing IP information from user")
.map_err(|err| (StatusCode::INTERNAL_SERVER_ERROR, err.to_string()))?
),
None => "Hello local user! Are you developing?".to_owned(),
})
}
#[shuttle_runtime::main]
async fn main() -> Result<impl shuttle_rama::ShuttleService, shuttle_rama::ShuttleError> {
let router = Router::new().get("/", hello_world);
let app =
// Shuttle sits behind a load-balancer,
// so in case you want the real IP of the user,
// you need to ensure this header is handled.
//
// Learn more at
GetForwardedHeaderLayer::x_forwarded_for().into_layer(router);
Ok(shuttle_rama::RamaService::application(app))
}
```
```rust Tower theme={null}
use std::convert::Infallible;
use std::future::Future;
use std::pin::Pin;
use std::task::{Context, Poll};
#[derive(Clone)]
struct HelloWorld;
impl tower::Service<hyper::Request<hyper::Body>> for HelloWorld {
type Response = hyper::Response<hyper::Body>;
type Error = Infallible;
type Future = Pin<Box<dyn Future<Output = Result<Self::Response, Self::Error>> + Send + Sync>>;
fn poll_ready(&mut self, _cx: &mut Context<'_>) -> Poll<Result<(), Self::Error>> {
Poll::Ready(Ok(()))
}
fn call(&mut self, _req: hyper::Request<hyper::Body>) -> Self::Future {
let body = hyper::Body::from("Hello, world!");
let resp = hyper::Response::builder()
.status(200)
.body(body)
.expect("Unable to create the `hyper::Response` object");
let fut = async { Ok(resp) };
Box::pin(fut)
}
}
#[shuttle_runtime::main]
async fn tower() -> shuttle_tower::ShuttleTower<HelloWorld> {
let service = HelloWorld;
Ok(service.into())
}
```
```rust Warp theme={null}
use warp::Filter;
use warp::Reply;
#[shuttle_runtime::main]
async fn warp() -> shuttle_warp::ShuttleWarp<(impl Reply,)> {
let route = warp::any().map(|| "Hello, World!");
Ok(route.boxed().into())
}
```
```rust Salvo theme={null}
use salvo::prelude::*;
#[handler]
async fn hello_world(res: &mut Response) {
res.render(Text::Plain("Hello, world!"));
}
#[shuttle_runtime::main]
async fn salvo() -> shuttle_salvo::ShuttleSalvo {
let router = Router::new().get(hello_world);
Ok(router.into())
}
```
```rust Poem theme={null}
use poem::{get, handler, Route};
use shuttle_poem::ShuttlePoem;
#[handler]
fn hello_world() -> &'static str {
"Hello, world!"
}
#[shuttle_runtime::main]
async fn poem() -> ShuttlePoem<impl poem::Endpoint> {
let app = Route::new().at("/", get(hello_world));
Ok(app.into())
}
```
Run the example locally with:
```bash theme={null}
shuttle run
```
In order to deploy the example, simply run:
```bash theme={null}
shuttle deploy
```
# Overview
Source: https://docs.shuttle.dev/examples/overview
This section of the docs allows you to browse the various starter examples available in the [shuttle-examples](https://github.com/shuttle-hq/shuttle-examples#readme) repo.
For more feature-complete starter templates and tutorials, check out the [Templates and Tutorials](/templates) section.
Here are some relevant links to help you find what you're looking for:
* [Shuttle Examples repo](https://github.com/shuttle-hq/shuttle-examples#readme) - contains all officially maintained examples and starter templates.
* [Shuttlings](https://github.com/shuttle-hq/shuttlings) - a collection of code challenges that also happens to be a good tutorial for backend development on Shuttle.
# Poise
Source: https://docs.shuttle.dev/examples/poise
Poise is an opinionated Discord bot framework based on Serenity with good support for slash commands.
### Prerequisites
To get going with Poise, follow the same prerequisites as for [Serenity](./serenity).
### Code
This example shows how to build a Poise bot with Shuttle that responds to the `/hello` command with `world!`.
```rust src/main.rs theme={null}
use anyhow::Context as _;
use poise::serenity_prelude::{ClientBuilder, GatewayIntents};
use shuttle_runtime::SecretStore;
use shuttle_serenity::ShuttleSerenity;
struct Data {} // User data, which is stored and accessible in all command invocations
type Error = Box<dyn std::error::Error + Send + Sync>;
type Context<'a> = poise::Context<'a, Data, Error>;
/// Responds with "world!"
#[poise::command(slash_command)]
async fn hello(ctx: Context<'_>) -> Result<(), Error> {
ctx.say("world!").await?;
Ok(())
}
#[shuttle_runtime::main]
async fn main(#[shuttle_runtime::Secrets] secret_store: SecretStore) -> ShuttleSerenity {
// Get the discord token set in `Secrets.toml`
let discord_token = secret_store
.get("DISCORD_TOKEN")
.context("'DISCORD_TOKEN' was not found")?;
let framework = poise::Framework::builder()
.options(poise::FrameworkOptions {
commands: vec![hello()],
..Default::default()
})
.setup(|ctx, _ready, framework| {
Box::pin(async move {
poise::builtins::register_globally(ctx, &framework.options().commands).await?;
Ok(Data {})
})
})
.build();
let client = ClientBuilder::new(discord_token, GatewayIntents::non_privileged())
.framework(framework)
.await
.map_err(shuttle_runtime::CustomError::new)?;
Ok(client.into())
}
```
```toml Secrets.toml theme={null}
DISCORD_TOKEN = 'the contents of your discord token'
```
```toml Cargo.toml theme={null}
[package]
name = "hello-world-poise-bot"
version = "0.1.0"
edition = "2021"
publish = false
[dependencies]
anyhow = "1.0.68"
poise = "0.6.1"
shuttle-runtime = "0.57.0"
# Since poise is a serenity command framework, it can run on Shuttle with shuttle-serenity
shuttle-serenity = "0.57.0"
tracing = "0.1.37"
tokio = "1.26.0"
```
***
# Hello World
Source: https://docs.shuttle.dev/examples/rocket
Rocket is an async web framework for Rust with a focus on usability, security, extensibility, and speed.
This section revolves around simple Rocket examples you can get started with quickly by following these 3 steps:
1. Initialize a new Rocket project by running the `shuttle init --template rocket` command
2. Copy-paste the contents of the example you want to deploy, checking the tabs of the snippet(s) to make sure you copy the right code into the right file
3. Run the `shuttle deploy` command
If you are looking for step-by-step guides, check out our [Tutorials](/templates/tutorials) section.
You can clone the example below by running the following (you'll need `shuttle` CLI installed):
```bash theme={null}
shuttle init --template rocket
```
```rust main.rs theme={null}
#[macro_use]
extern crate rocket;
#[get("/")]
fn index() -> &'static str {
"Hello, world!"
}
#[shuttle_runtime::main]
async fn rocket() -> shuttle_rocket::ShuttleRocket {
let rocket = rocket::build().mount("/hello", routes![index]);
Ok(rocket.into())
}
```
```toml Cargo.toml theme={null}
[package]
name = "hello-world"
version = "0.1.0"
edition = "2021"
[dependencies]
rocket = "0.5.0"
shuttle-rocket = "0.57.0"
shuttle-runtime = "0.57.0"
tokio = "1.26.0"
```
***
# JWT Authentication
Source: https://docs.shuttle.dev/examples/rocket-jwt-authentication
Learn how you can secure your Rocket web application by using JWT tokens.
## Description
This example shows how to use [Rocket request guards](https://rocket.rs/guide/v0.5/requests/#request-guards) for authentication with [JSON Web Tokens](https://jwt.io/) (JWT for short).
The idea is that all requests authenticate first at the `/login` route on a given web service to get a JWT.
Then the JWT is sent with all requests requiring authentication using the HTTP header `Authorization: Bearer <token>`.
This example uses the [`jsonwebtoken`](https://github.com/Keats/jsonwebtoken) crate, which supports symmetric and asymmetric secret encoding, built-in validations, and most JWT algorithms.
However, this example only makes use of symmetric encoding and validation on the expiration claim.
You can clone the example below by running the following (you'll need `shuttle` CLI installed):
```bash theme={null}
shuttle init --from shuttle-hq/shuttle-examples --subfolder rocket/jwt-authentication
```
Three Rocket routes are registered in this file:
* `/public`: a route that can be called without needing any authentication.
* `/login`: a route for posting a JSON object with a username and password to get a JWT.
* `/private`: a route that can only be accessed with a valid JWT.
## Code
```toml Cargo.toml theme={null}
[package]
name = "authentication"
version = "0.1.0"
edition = "2021"
[dependencies]
chrono = "0.4.23"
jsonwebtoken = { version = "8.1.1", default-features = false }
lazy_static = "1.4.0"
rocket = { version = "0.5.0", features = ["json"] }
serde = { version = "1.0.148", features = ["derive"] }
shuttle-rocket = "0.57.0"
shuttle-runtime = "0.57.0"
tokio = "1.26.0"
```
Your `main.rs` should look like this:
```rust main.rs theme={null}
// main.rs
use rocket::http::Status;
use rocket::response::status::Custom;
use rocket::serde::json::Json;
use serde::{Deserialize, Serialize};
mod claims;
use claims::Claims;
#[macro_use]
extern crate rocket;
#[derive(Serialize)]
struct PublicResponse {
message: String,
}
#[get("/public")]
fn public() -> Json<PublicResponse> {
Json(PublicResponse {
message: "This endpoint is open to anyone".to_string(),
})
}
#[derive(Serialize)]
struct PrivateResponse {
message: String,
user: String,
}
// More details on Rocket request guards can be found here
// https://rocket.rs/guide/v0.5/requests/#request-guards
#[get("/private")]
fn private(user: Claims) -> Json<PrivateResponse> {
Json(PrivateResponse {
message: "The `Claims` request guard ensures only valid JWTs can access this endpoint"
.to_string(),
user: user.name,
})
}
#[derive(Deserialize)]
struct LoginRequest {
username: String,
password: String,
}
#[derive(Serialize)]
struct LoginResponse {
token: String,
}
/// Tries to authenticate a user. Successful authentications get a JWT
#[post("/login", data = "<login>")]
fn login(login: Json<LoginRequest>) -> Result<Json<LoginResponse>, Custom<String>> {
// This should be real user validation code, but is left simple for this example
if login.username != "username" || login.password != "password" {
return Err(Custom(
Status::Unauthorized,
"account was not found".to_string(),
));
}
let claim = Claims::from_name(&login.username);
let response = LoginResponse {
token: claim.into_token()?,
};
Ok(Json(response))
}
#[shuttle_runtime::main]
async fn rocket() -> shuttle_rocket::ShuttleRocket {
let rocket = rocket::build().mount("/", routes![public, private, login]);
Ok(rocket.into())
}
```
Your `claims.rs` should look like this:
```rust claims.rs theme={null}
// claims.rs
use chrono::{Duration, Utc};
use jsonwebtoken::{
decode, encode, errors::ErrorKind, DecodingKey, EncodingKey, Header, Validation,
};
use lazy_static::lazy_static;
use rocket::{
http::Status,
request::{FromRequest, Outcome},
response::status::Custom,
};
use serde::{Deserialize, Serialize};
// TODO: this has an extra trailing space to cause the test to fail
// This is to demonstrate shuttle will not deploy when a test fails.
// FIX: remove the extra space character and try deploying again
const BEARER: &str = "Bearer ";
const AUTHORIZATION: &str = "Authorization";
/// Key used for symmetric token encoding
const SECRET: &str = "secret";
lazy_static! {
/// Time before token expires (aka exp claim)
static ref TOKEN_EXPIRATION: Duration = Duration::minutes(5);
}
// Used when decoding a token to `Claims`
#[derive(Debug, PartialEq)]
pub(crate) enum AuthenticationError {
Missing,
Decoding(String),
Expired,
}
// Basic claim object. Only the `exp` claim (field) is required. Consult the `jsonwebtoken` documentation for other claims that can be validated.
// The `name` is a custom claim for this API
#[derive(Serialize, Deserialize, Debug)]
pub(crate) struct Claims {
pub(crate) name: String,
exp: usize,
}
// Rocket specific request guard implementation
#[rocket::async_trait]
impl<'r> FromRequest<'r> for Claims {
type Error = AuthenticationError;
async fn from_request(request: &'r rocket::Request<'_>) -> Outcome<Self, Self::Error> {
match request.headers().get_one(AUTHORIZATION) {
None => Outcome::Error((Status::Forbidden, AuthenticationError::Missing)),
Some(value) => match Claims::from_authorization(value) {
Err(e) => Outcome::Error((Status::Forbidden, e)),
Ok(claims) => Outcome::Success(claims),
},
}
}
}
impl Claims {
pub(crate) fn from_name(name: &str) -> Self {
Self {
name: name.to_string(),
exp: 0,
}
}
/// Create a `Claims` from a 'Bearer ' value
fn from_authorization(value: &str) -> Result<Self, AuthenticationError> {
let token = value.strip_prefix(BEARER).map(str::trim);
if token.is_none() {
return Err(AuthenticationError::Missing);
}
// Safe to unwrap as we just confirmed it is not none
let token = token.unwrap();
// Use `jsonwebtoken` to get the claims from a JWT
// Consult the `jsonwebtoken` documentation for using other algorithms and validations (the default validation just checks the expiration claim)
let token = decode::<Claims>(
token,
&DecodingKey::from_secret(SECRET.as_ref()),
&Validation::default(),
)
.map_err(|e| match e.kind() {
ErrorKind::ExpiredSignature => AuthenticationError::Expired,
_ => AuthenticationError::Decoding(e.to_string()),
})?;
Ok(token.claims)
}
/// Converts this claims into a token string
pub(crate) fn into_token(mut self) -> Result<String, Custom<String>> {
let expiration = Utc::now()
.checked_add_signed(*TOKEN_EXPIRATION)
.expect("failed to create an expiration time")
.timestamp();
self.exp = expiration as usize;
// Construct and return JWT using `jsonwebtoken`
// Consult the `jsonwebtoken` documentation for using other algorithms and asymmetric keys
let token = encode(
&Header::default(),
&self,
&EncodingKey::from_secret(SECRET.as_ref()),
)
.map_err(|e| Custom(Status::BadRequest, e.to_string()))?;
Ok(token)
}
}
#[cfg(test)]
mod tests {
use crate::claims::AuthenticationError;
use super::Claims;
#[test]
fn missing_bearer() {
let claim_err = Claims::from_authorization("no-Bearer-prefix").unwrap_err();
assert_eq!(claim_err, AuthenticationError::Missing);
}
#[test]
fn to_token_and_back() {
let claim = Claims::from_name("test runner");
let token = claim.into_token().unwrap();
let token = format!("Bearer {token}");
let claim = Claims::from_authorization(&token).unwrap();
assert_eq!(claim.name, "test runner");
}
}
```
## Usage
Once you've cloned this example, launch it locally with `shuttle run`. After verifying that it's up, go to `http://localhost:8000` and start trying the example out!
First, we should be able to access the public endpoint without any authentication using:
```sh theme={null}
curl https://<your-deployment-url>/public
```
But trying to access the private endpoint will fail with a `403 Forbidden`:
```sh theme={null}
curl https://<your-deployment-url>/private
```
So let's get a JWT from the login route first:
```sh theme={null}
curl -X POST --data '{"username": "username", "password": "password"}' https://<your-deployment-url>/login
```
Accessing the private endpoint with the token will now succeed:
```sh theme={null}
curl --header "Authorization: Bearer <token>" https://<your-deployment-url>/private
```
The token is set to expire in 5 minutes, so wait a while and try to access the private endpoint again. Once the token has expired, a user will need to get a new token from login.
Since tokens usually have an expiration time longer than 5 minutes, we could add a `/refresh` endpoint that takes an active token and returns a new token with a refreshed expiration time.
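A refresh endpoint mostly amounts to validating the old token and re-stamping its `exp` claim. The timestamp arithmetic behind that can be sketched with just the standard library; the function names here are illustrative, not part of the example:

```rust
use std::time::{Duration, SystemTime, UNIX_EPOCH};

/// Mirror of TOKEN_EXPIRATION above: 5 minutes, expressed in seconds.
const TOKEN_TTL: Duration = Duration::from_secs(5 * 60);

/// Seconds since the Unix epoch for a given instant.
fn unix_secs(t: SystemTime) -> u64 {
    t.duration_since(UNIX_EPOCH).expect("time before epoch").as_secs()
}

/// The `exp` claim a refreshed token would carry.
fn refreshed_exp(now: SystemTime) -> u64 {
    unix_secs(now + TOKEN_TTL)
}

/// A token should only be refreshed while its current `exp` is still in the future.
fn can_refresh(exp: u64, now: SystemTime) -> bool {
    exp > unix_secs(now)
}

fn main() {
    let now = SystemTime::now();
    // Pretend this `exp` came from a decoded, still-valid token
    let old_exp = refreshed_exp(now);
    assert!(can_refresh(old_exp, now));
    println!("refreshed exp claim: {}", refreshed_exp(now));
}
```

In the actual endpoint you would decode the incoming token with `Claims::from_authorization`, check it is still valid, and call `into_token` again to mint the replacement.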
# Postgres Todo App
Source: https://docs.shuttle.dev/examples/rocket-postgres
This article walks you through how you can easily set up a simple to-do app using Rocket and SQLx with PostgreSQL.
## Description
This example shows how to make a simple TODO app using Rocket and a shared Shuttle Postgres DB.
The following routes are provided:
* GET `/todo/<id>` - Get a to-do item by ID.
* POST `/todo` - Create a to-do item. Takes "note" as a JSON body parameter.
You can clone the example below by running the following (you'll need `shuttle` CLI installed):
```bash theme={null}
shuttle init --from shuttle-hq/shuttle-examples --subfolder rocket/postgres
```
## Code
```rust src/main.rs theme={null}
#[macro_use]
extern crate rocket;
use rocket::response::status::BadRequest;
use rocket::serde::json::Json;
use rocket::State;
use serde::{Deserialize, Serialize};
use shuttle_runtime::CustomError;
use sqlx::{Executor, FromRow, PgPool};
#[get("/<id>")]
async fn retrieve(id: i32, state: &State<MyState>) -> Result<Json<Todo>, BadRequest<String>> {
let todo = sqlx::query_as("SELECT * FROM todos WHERE id = $1")
.bind(id)
.fetch_one(&state.pool)
.await
.map_err(|e| BadRequest(e.to_string()))?;
Ok(Json(todo))
}
#[post("/", data = "<data>")]
async fn add(
data: Json<TodoNew>,
state: &State<MyState>,
) -> Result<Json<Todo>, BadRequest<String>> {
let todo = sqlx::query_as("INSERT INTO todos(note) VALUES ($1) RETURNING id, note")
.bind(&data.note)
.fetch_one(&state.pool)
.await
.map_err(|e| BadRequest(e.to_string()))?;
Ok(Json(todo))
}
struct MyState {
pool: PgPool,
}
#[shuttle_runtime::main]
async fn rocket(#[shuttle_shared_db::Postgres] pool: PgPool) -> shuttle_rocket::ShuttleRocket {
pool.execute(include_str!("../schema.sql"))
.await
.map_err(CustomError::new)?;
let state = MyState { pool };
let rocket = rocket::build()
.mount("/todo", routes![retrieve, add])
.manage(state);
Ok(rocket.into())
}
#[derive(Deserialize)]
struct TodoNew {
pub note: String,
}
#[derive(Serialize, FromRow)]
struct Todo {
pub id: i32,
pub note: String,
}
```
```sql schema.sql theme={null}
DROP TABLE IF EXISTS todos;
CREATE TABLE todos (
id serial PRIMARY KEY,
note TEXT NOT NULL
);
```
```toml Cargo.toml theme={null}
[package]
name = "postgres"
version = "0.1.0"
edition = "2021"
[dependencies]
rocket = { version = "0.5.0", features = ["json"] }
serde = "1.0.148"
shuttle-rocket = "0.57.0"
shuttle-runtime = "0.57.0"
shuttle-shared-db = { version = "0.57.0", features = ["postgres", "sqlx"] }
sqlx = "0.8.2"
tokio = "1.26.0"
```
## Usage
Once you've cloned the example, launch it locally with `shuttle run`. After verifying that it runs successfully, use cURL in a new terminal to send a POST request:
```bash theme={null}
curl -X POST -d '{"note":"Hello world!"}' -H 'Content-Type: application/json' \
http://localhost:8000/todo
```
Assuming the request was successful, you'll get back a JSON response with the ID and note of the record you just created. With the following cURL command, you should then be able to retrieve the message you stored:
```bash theme={null}
curl http://localhost:8000/todo/<id>
```
Interested in extending this example? Here are a couple of ideas:
* Add update and delete routes
* Add static files to show your records
***
# Static Files
Source: https://docs.shuttle.dev/examples/rocket-static-files
This article walks you through setting up static files with Rocket, an ergonomic framework for simple web applications.
## Description
This example has one route at `/` where the homepage is served and shows you how you can serve HTML or other types of files with Rocket.
Note that static assets are declared in the `Shuttle.toml` file.
You can clone the example below by running the following (you'll need `shuttle` CLI installed):
```bash theme={null}
shuttle init --from shuttle-hq/shuttle-examples --subfolder rocket/static-files
```
## Code
```rust src/main.rs theme={null}
use rocket::fs::{relative, NamedFile};
use std::path::{Path, PathBuf};
#[rocket::get("/<path..>")]
pub async fn serve(mut path: PathBuf) -> Option<NamedFile> {
path.set_extension("html");
let mut path = Path::new(relative!("assets")).join(path);
if path.is_dir() {
path.push("index.html");
}
NamedFile::open(path).await.ok()
}
#[shuttle_runtime::main]
async fn rocket() -> shuttle_rocket::ShuttleRocket {
let rocket = rocket::build().mount("/", rocket::routes![serve]);
Ok(rocket.into())
}
```
```html assets/index.html theme={null}
<!DOCTYPE html>
<html lang="en">
<head>
<title>Static Files</title>
</head>
<body>
<h1>Static Files</h1>
<p>This is an example of serving static files with Rocket and Shuttle.</p>
</body>
</html>
```
```toml Cargo.toml theme={null}
[package]
name = "static-files"
version = "0.1.0"
edition = "2021"
publish = false
[dependencies]
rocket = "0.5.0"
shuttle-rocket = "0.57.0"
shuttle-runtime = "0.57.0"
tokio = "1.26.0"
```
```toml Shuttle.toml theme={null}
[build]
assets = [
"assets",
]
```
## Usage
After you clone the example, launch it locally with `shuttle run`, then visit the home route at `http://localhost:8000`. You should see a homepage showing the included HTML file.
You can extend this example by adding more routes that serve other files.
***
# Hello World Bot
Source: https://docs.shuttle.dev/examples/serenity
Serenity is a Rust library for the Discord API.
This section revolves around simple Serenity examples that you can quickly get started with by following these 4 steps:
1. Go through the steps in **Prerequisites** below
2. Initialize a new Serenity project by running the `shuttle init --template serenity` command
3. Copy the contents of the example you want to deploy, making sure to check the tabs of the snippet(s) so that you copy the right code/file
4. Run the `shuttle deploy` command
### Prerequisites
To get started log in to the [Discord developer portal](https://discord.com/developers/applications).
1. Click the New Application button, name your application and click Create.
2. Navigate to the Bot tab in the lefthand menu, and add a new bot.
3. On the bot page click the Reset Token button to reveal your token. Put this token in your `Secrets.toml`. It's very important that you don't reveal your token to anyone, as it can be abused. Create a `.gitignore` file to omit your `Secrets.toml` from version control.
4. For the sake of this example, you also need to scroll down on the bot page to the Message Content Intent section and enable that option.
To add the bot to a server we need to create an invite link.
1. On your bot's application page, open the OAuth2 page via the lefthand panel.
2. Go to the URL Generator via the lefthand panel, and select the `bot` scope as well as the `Send Messages` permission in the Bot Permissions section.
3. Copy the URL, open it in your browser and select a Discord server you wish to invite the bot to.
### Code
This example shows how to build a Serenity bot with Shuttle that responds to the `!hello` command with `world!`.
```rust src/main.rs theme={null}
use anyhow::Context as _;
use serenity::async_trait;
use serenity::model::channel::Message;
use serenity::model::gateway::Ready;
use serenity::prelude::*;
use shuttle_runtime::SecretStore;
use tracing::{error, info};
struct Bot;
#[async_trait]
impl EventHandler for Bot {
async fn message(&self, ctx: Context, msg: Message) {
if msg.content == "!hello" {
if let Err(e) = msg.channel_id.say(&ctx.http, "world!").await {
error!("Error sending message: {:?}", e);
}
}
}
async fn ready(&self, _: Context, ready: Ready) {
info!("{} is connected!", ready.user.name);
}
}
#[shuttle_runtime::main]
async fn serenity(
#[shuttle_runtime::Secrets] secrets: SecretStore,
) -> shuttle_serenity::ShuttleSerenity {
// Get the discord token set in `Secrets.toml`
let token = secrets.get("DISCORD_TOKEN").context("'DISCORD_TOKEN' was not found")?;
// Set gateway intents, which decides what events the bot will be notified about
let intents = GatewayIntents::GUILD_MESSAGES | GatewayIntents::MESSAGE_CONTENT;
let client = Client::builder(&token, intents)
.event_handler(Bot)
.await
.expect("Err creating client");
Ok(client.into())
}
```
```toml Secrets.toml theme={null}
DISCORD_TOKEN = 'the contents of your discord token'
```
```toml Cargo.toml theme={null}
[package]
name = "hello-world-serenity-bot"
version = "0.1.0"
edition = "2021"
[dependencies]
anyhow = "1.0.66"
serenity = { version = "0.12.0", default-features = false, features = ["client", "gateway", "rustls_backend", "model"] }
shuttle-runtime = "0.57.0"
shuttle-serenity = "0.57.0"
tokio = "1.26.0"
tracing = "0.1.37"
```
***
# Todo List Bot
Source: https://docs.shuttle.dev/examples/serenity-todo
Learn how to write a Serenity bot that can manage a to-do list.
### Prerequisites
In this example we will deploy a Serenity bot with Shuttle that can add, list and complete todos using [Application Commands](https://discord.com/developers/docs/interactions/application-commands). To persist the todos we need a database. We will have Shuttle provision a PostgreSQL database for us by passing `#[shuttle_shared_db::Postgres] pool: PgPool` as an argument to our `main` function.
To run this bot we need a valid Discord Token. To get started log in to the [Discord developer portal](https://discord.com/developers/applications).
1. Click the New Application button, name your application and click Create.
2. Navigate to the Bot tab in the lefthand menu, and add a new bot.
3. On the bot page click the Reset Token button to reveal your token. Put this token in your `Secrets.toml`. It's very important that you don't reveal your token to anyone, as it can be abused. Create a `.gitignore` file to omit your `Secrets.toml` from version control.
To add the bot to a server we need to create an invite link.
1. On your bot's application page, open the OAuth2 page via the lefthand panel.
2. Go to the URL Generator via the lefthand panel, and select the `applications.commands` scope.
3. Copy the URL, open it in your browser and select a Discord server you wish to invite the bot to.
For this example we also need a `GuildId`.
1. Open your Discord client, open the User Settings and navigate to Advanced. Enable Developer Mode.
2. Right click the Discord server you'd like to use the bot in and click Copy Id. This is your Guild ID.
3. Store it in `Secrets.toml` and retrieve it like we did for the Discord Token.
For more information, refer to the [Discord docs](https://discord.com/developers/docs/getting-started) as well as the [Serenity repo](https://github.com/serenity-rs/serenity) for further examples.
```rust src/main.rs theme={null}
use anyhow::Context as _;
use serenity::async_trait;
use serenity::builder::{
CreateCommand, CreateCommandOption, CreateInteractionResponse, CreateInteractionResponseMessage,
};
use serenity::model::application::{CommandDataOptionValue, CommandOptionType, Interaction};
use serenity::model::gateway::Ready;
use serenity::model::id::GuildId;
use serenity::prelude::*;
use shuttle_runtime::SecretStore;
use sqlx::{Executor, PgPool};
use tracing::{error, info};
mod db;
struct Bot {
database: PgPool,
guild_id: String,
}
#[async_trait]
impl EventHandler for Bot {
async fn interaction_create(&self, ctx: Context, interaction: Interaction) {
if let Interaction::Command(command) = interaction {
info!("Received command interaction: {:#?}", command);
let user_id: i64 = command.user.id.into();
let content = match command.data.name.as_str() {
"todo" => {
let command = command.data.options.first().expect("Expected command");
match command.name.as_str() {
"add" => match &command.value {
CommandDataOptionValue::SubCommand(opts) => {
let note = opts.first().unwrap().value.as_str().unwrap();
db::add(&self.database, note, user_id).await.unwrap()
}
_ => "Command not implemented".to_string(),
},
"complete" => match &command.value {
CommandDataOptionValue::SubCommand(opts) => {
let index = opts.first().unwrap().value.as_i64().unwrap();
db::complete(&self.database, &index, user_id)
.await
.unwrap_or_else(|_| {
"Please submit a valid index from your todo list"
.to_string()
})
}
_ => "Command not implemented".to_string(),
},
"list" => db::list(&self.database, user_id).await.unwrap(),
_ => "Command not implemented".to_string(),
}
}
_ => "Command not implemented".to_string(),
};
if let Err(why) = command
.create_response(
&ctx.http,
CreateInteractionResponse::Message(
CreateInteractionResponseMessage::new().content(content),
),
)
.await
{
error!("Cannot respond to slash command: {why}");
}
}
}
async fn ready(&self, ctx: Context, ready: Ready) {
info!("{} is connected!", ready.user.name);
let guild_id = GuildId::new(self.guild_id.parse().unwrap());
let _ = guild_id
.set_commands(
&ctx.http,
vec![CreateCommand::new("todo")
.description("Add, list and complete todos")
.add_option(
CreateCommandOption::new(
CommandOptionType::SubCommand,
"add",
"Add a new todo",
)
.add_sub_option(
CreateCommandOption::new(
CommandOptionType::String,
"note",
"The todo note to add",
)
.min_length(2)
.max_length(100)
.required(true),
),
)
.add_option(
CreateCommandOption::new(
CommandOptionType::SubCommand,
"complete",
"The todo to complete",
)
.add_sub_option(
CreateCommandOption::new(
CommandOptionType::Integer,
"index",
"The index of the todo to complete",
)
.min_int_value(1)
.required(true),
),
)
.add_option(CreateCommandOption::new(
CommandOptionType::SubCommand,
"list",
"List your todos",
))],
)
.await;
}
}
#[shuttle_runtime::main]
async fn serenity(
#[shuttle_shared_db::Postgres] pool: PgPool,
#[shuttle_runtime::Secrets] secret_store: SecretStore,
) -> shuttle_serenity::ShuttleSerenity {
// Get the discord token set in `Secrets.toml`
let token = secret_store
.get("DISCORD_TOKEN")
.context("'DISCORD_TOKEN' was not found")?;
// Get the guild_id set in `Secrets.toml`
let guild_id = secret_store
.get("GUILD_ID")
.context("'GUILD_ID' was not found")?;
// Run the schema migration
pool.execute(include_str!("../schema.sql"))
.await
.context("failed to run migrations")?;
let bot = Bot {
database: pool,
guild_id,
};
let client = Client::builder(&token, GatewayIntents::empty())
.event_handler(bot)
.await
.expect("Err creating client");
Ok(client.into())
}
```
```rust src/db.rs theme={null}
use sqlx::{FromRow, PgPool};
use std::fmt::Write;
#[derive(FromRow)]
struct Todo {
pub id: i32,
pub note: String,
}
pub(crate) async fn add(pool: &PgPool, note: &str, user_id: i64) -> Result<String, sqlx::Error> {
sqlx::query("INSERT INTO todos (note, user_id) VALUES ($1, $2)")
.bind(note)
.bind(user_id)
.execute(pool)
.await?;
Ok(format!("Added `{}` to your todo list", note))
}
pub(crate) async fn complete(
pool: &PgPool,
index: &i64,
user_id: i64,
) -> Result<String, sqlx::Error> {
let todo: Todo = sqlx::query_as(
"SELECT id, note FROM todos WHERE user_id = $1 ORDER BY id LIMIT 1 OFFSET $2",
)
.bind(user_id)
.bind(index - 1)
.fetch_one(pool)
.await?;
sqlx::query("DELETE FROM todos WHERE id = $1")
.bind(todo.id)
.execute(pool)
.await?;
Ok(format!("Completed `{}`!", todo.note))
}
pub(crate) async fn list(pool: &PgPool, user_id: i64) -> Result<String, sqlx::Error> {
let todos: Vec<Todo> =
sqlx::query_as("SELECT note, id FROM todos WHERE user_id = $1 ORDER BY id")
.bind(user_id)
.fetch_all(pool)
.await?;
let mut response = format!("You have {} pending todos:\n", todos.len());
for (i, todo) in todos.iter().enumerate() {
writeln!(&mut response, "{}. {}", i + 1, todo.note).unwrap();
}
Ok(response)
}
```
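The 1-based numbering that `list` produces (and that `complete` consumes via `OFFSET $2` with `index - 1`) can be exercised in isolation. A minimal sketch with the standard library; `format_todos` is a hypothetical helper, not part of the example:

```rust
use std::fmt::Write;

/// Format notes the way `db::list` does: a count header followed by a
/// 1-based numbered list, one note per line.
fn format_todos(notes: &[&str]) -> String {
    let mut response = format!("You have {} pending todos:\n", notes.len());
    for (i, note) in notes.iter().enumerate() {
        writeln!(&mut response, "{}. {}", i + 1, note).unwrap();
    }
    response
}

fn main() {
    print!("{}", format_todos(&["buy milk", "ship code"]));
}
```

Because the list is 1-based while SQL `OFFSET` is 0-based, `complete` subtracts one from the index the user submits.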
```sql schema.sql theme={null}
DROP TABLE IF EXISTS todos;
CREATE TABLE todos (
id serial PRIMARY KEY,
user_id BIGINT NULL,
note TEXT NOT NULL
);
```
```toml Secrets.toml theme={null}
DISCORD_TOKEN = 'the contents of your discord token'
GUILD_ID = "123456789"
```
```toml Cargo.toml theme={null}
[package]
name = "postgres-serenity-bot"
version = "0.1.0"
edition = "2021"
[dependencies]
anyhow = "1.0.66"
serde = "1.0.148"
serenity = { version = "0.12.0", default-features = false, features = ["client", "gateway", "rustls_backend", "model"] }
shuttle-runtime = "0.57.0"
shuttle-serenity = "0.57.0"
shuttle-shared-db = { version = "0.57.0", features = ["postgres", "sqlx"] }
sqlx = "0.8.2"
tokio = "1.26.0"
tracing = "0.1.37"
```
# Installation
Source: https://docs.shuttle.dev/getting-started/installation
How to install the Shuttle Command Line Interface (CLI)
## Install Script (Recommended)
The install script finds the best method for installing the latest version on your OS, architecture and distro.
It's the easiest way to get started.
It can also help you install Rust if you haven't already.
### Linux and macOS
```sh theme={null}
curl -sSfL https://www.shuttle.dev/install | bash
```
### Windows (PowerShell)
```powershell theme={null}
iwr https://www.shuttle.dev/install-win | iex
```
The install script collects anonymous telemetry data to help improve the product.
No IP address or similar is collected.
The collected data is:
* Platform (OS)
* Installation method used, and whether it is a new install or upgrade
* Success/Failure outcome, and which step that failed
* Start and end times of script execution
You can opt out by setting any of these environment variables to `1` or `true`:
* `DO_NOT_TRACK`
* `DISABLE_TELEMETRY`
* `SHUTTLE_DISABLE_TELEMETRY`
* `CI`
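For example, to opt out before running the installer (any one of the variables above works):

```shell
# Disable install-script telemetry for this shell session
export DO_NOT_TRACK=1
# Then run the installer as usual, e.g.:
# curl -sSfL https://www.shuttle.dev/install | bash
```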
The scripts for both platforms are open source [\[1\]](https://github.com/shuttle-hq/shuttle/blob/main/install.sh) [\[2\]](https://github.com/shuttle-hq/shuttle/blob/main/install.ps1). Improvements are always welcome!
You can install the [Shuttle MCP](/integrations/mcp-server) to enable AI
coding assistants like Claude and Cursor to access Shuttle documentation,
deploy projects, and fetch logs directly from your editor, making your coding
experience even faster and more efficient.
## Alternative Installation Methods
### cargo-binstall
If you prefer using Cargo's package manager, you can use [cargo-binstall](https://github.com/cargo-bins/cargo-binstall):
```sh theme={null}
cargo binstall cargo-shuttle
```
### From Source
For those who prefer building from source:
```sh theme={null}
cargo install cargo-shuttle
```
Building from source requires a Rust toolchain and may take longer than other
installation methods.
### Pre-built Binaries
Pre-built binaries for various platforms are available on [our GitHub releases page](https://github.com/shuttle-hq/shuttle/releases/latest).
### Community Packages (unofficial)
Shuttle CLI is also available on other package managers through community-maintained packages:
Available on [Homebrew](https://formulae.brew.sh/formula/cargo-shuttle).
```sh theme={null}
brew install cargo-shuttle
```
Available in the [community repository](https://archlinux.org/packages/extra/x86_64/cargo-shuttle/).
```sh theme={null}
pacman -S cargo-shuttle
```
Available on [Alpine Edge](https://pkgs.alpinelinux.org/packages?name=cargo-shuttle\&branch=edge) after enabling the [testing repository](https://wiki.alpinelinux.org/wiki/Repositories).
```sh theme={null}
apk add cargo-shuttle
```
See [Repology](https://repology.org/project/cargo-shuttle/versions) for packaging status across distributions.
## Verifying Your Installation
After installation, verify that the Shuttle CLI is properly installed by running:
```sh theme={null}
shuttle --version
```
You should see the current version number displayed.
## Next Steps
Get started with your first Shuttle project
Explore our example projects
# Quick Start
Source: https://docs.shuttle.dev/getting-started/quick-start
Get started with your first Shuttle project and deploy it to the cloud
This guide assumes you have installed the Shuttle CLI. If you haven't, please visit the [installation guide](./installation) first.
## Login to Your Account
First, you'll need to authenticate with Shuttle. The login command will open your browser to the [Shuttle Console](https://console.shuttle.dev/) where you can authenticate:
```sh theme={null}
shuttle login
```
You can use your Google account, GitHub account or email to authenticate with Shuttle.
## Create Your First Project
The `init` command helps you create a new Shuttle project. You can choose from:
* A simple Hello World template
* One of our [example projects](/examples)
* A [pre-built template](/templates)
For your first project, we recommend starting with the Hello World template:
```sh theme={null}
shuttle init
```
## Run Your Project Locally
Before deploying, you can run your project locally to test it:
```sh theme={null}
shuttle run
```
## Deploy Your Project
When you're ready to deploy your project to the cloud:
```sh theme={null}
shuttle deploy
```
This will:
1. Upload your code to Shuttle's platform and build it into a Docker image
2. Deploy it to our infrastructure and assign a public HTTPS URL
The first deployment might take a few minutes as it builds your project and sets up the infrastructure.
## Next Steps
Explore more example projects
Learn about available resources
Start from a pre-built template
# CLI
Source: https://docs.shuttle.dev/guides/cli
Overview of the Shuttle commands
Interaction with the Shuttle platform is mainly done with the `shuttle` Command Line Interface (CLI).
Some tasks, such as viewing logs, can also be done in the [Shuttle Console](https://console.shuttle.dev/).
To get an overview of available commands, subcommands, and options, run:
```bash theme={null}
shuttle help
# or
shuttle --help
```
## Commands
| Command | Description |
| ----------- | ----------------------------------------------------------- |
| init | Generate a Shuttle project from a template \[aliases: i] |
| run | Run a project locally \[aliases: r] |
| deploy | Deploy a project \[aliases: d] |
| deployment | Manage deployments \[aliases: depl] |
| logs | View build and deployment logs |
| project | Manage Shuttle projects \[aliases: proj] |
| resource | Manage resources \[aliases: res] |
| certificate | Manage SSL certificates for custom domains \[aliases: cert] |
| account | Show info about your Shuttle account \[aliases: acc] |
| login | Log in to the Shuttle platform |
| logout | Log out of the Shuttle platform |
| generate | Generate shell completions and man page |
| feedback | Open an issue on GitHub and provide feedback |
| upgrade | Upgrade the Shuttle CLI binary |
| mcp | Commands for the Shuttle MCP server |
| help | Print this message or the help of the given subcommand(s) |
## Cookbook / Cheat Sheet
These are some useful sequences of commands that are handy to keep in your back pocket.
For full documentation, use `--help` on the respective command.
Use the global `--debug` flag to print detailed debug output.
### Get started
* `cargo install cargo-shuttle`: For more alternatives, see [Installation](/getting-started/installation).
* `shuttle login`: Log in via the Shuttle Console.
* `shuttle init`: Generate a project from a template.
* `shuttle account`: Check account details.
### Local run
For more tips, see [Local Run](/docs/local-run).
* `shuttle run`: Run the project locally so you can test your changes.
* `shuttle run --port 8080`: Change the local port.
* `shuttle run --port 8080 --external`: Expose to local network by listening on `0.0.0.0`.
* `shuttle run --secrets <file>`: Use a non-default secrets file for this run.
* `shuttle run --release`: Compile with release mode.
* `shuttle run --bacon`: Run in watch mode, requires separate install of [bacon](https://github.com/Canop/bacon).
### Deploy a project
* `shuttle project create`: Create a project on Shuttle.
* `shuttle deploy`: Deploy the project to Shuttle.
* `shuttle deploy --no-follow`: Don't poll deployment state. Alias: `--nf`.
* `shuttle deploy --secrets <file>`: Use a non-default secrets file for this deployment.
### Manage projects
All project-related commands can use:
* `--working-directory <path>` or `--wd <path>` to execute the command in a different folder.
* `--name <name>` to specify the project name explicitly (see [Project](/docs/projects#project-name-and-id)).
* `--id <id>` to specify the project id explicitly (see [Project](/docs/projects#project-name-and-id)).
* `shuttle project list`: List your projects.
* `shuttle project status`: Check the state of this project.
* `shuttle project link`: Link this project folder to a project on Shuttle.
* `shuttle project delete`: Delete a project.
* `shuttle project update name <new-name>`: Rename a project and its default subdomain.
### Manage deployments and logs
* `shuttle deployment list`: List deployments in this project.
* `shuttle deployment status`: Show status of the currently running deployment.
* `shuttle deployment stop`: Stop any currently running deployments.
* `shuttle deployment redeploy`: Redeploy the latest deployment without building.
* `shuttle deployment redeploy [id]`: Redeploy the specified deployment id without building.
* `shuttle logs`: Get all logs from the currently running deployment.
* `shuttle logs --latest`: Get all logs from the latest deployment.
* `shuttle logs <id>`: Get all logs from a specific deployment.
* `shuttle logs --raw`: Print the logs without timestamps and origin tags. The `--raw` flag is also available for the `run` and `deploy` commands.
### Manage resources
* `shuttle resource list`: List resources linked to this project.
* `shuttle resource delete <type>`: Delete a resource such as databases and secrets.
### Shell completions
Use `shuttle generate shell <shell>` with one of the supported shells: bash, elvish, fish, powershell, zsh.
Example configuration for Zsh on Linux: add `eval "$(shuttle generate shell zsh)"` to `~/.zshrc`.
### Utility
* `shuttle --debug`: Turn on tracing output for Shuttle libraries. (WARNING: can print sensitive data)
* `shuttle --output json` to get the output from API calls in plain JSON.
* `shuttle deploy --output-archive <path>`: Dump the deployment archive to disk instead of deploying it. Useful for debugging.
* `shuttle logout --reset-api-key`: Log out and reset your account's API key.
# Migrate Postgres Data
Source: https://docs.shuttle.dev/guides/migrate-shared-postgres
Migrate Postgres data to shuttle.dev
This guide shows how to dump schemas and table data from a [Shared Postgres](/resources/shuttle-shared-db) database, and restore a dump.
If you encounter issues, feel free to [contact us](/support/support) for help.
## Prerequisites
To run the data upload against the database, you need a Postgres Client.
In this guide, we will use `psql`.
On Debian/Ubuntu, you can install it with:
```sh theme={null}
sudo apt install postgresql-client
```
Shuttle Postgres databases run Postgres version 16, so `psql` versions older than 16 might not work.
## Dump database to SQL file
Since the Shared Postgres cluster has strict permissions, running `pg_dump` against your connection string is not possible.
Instead, you can use the `resource dump` command that runs `pg_dump` with `--no-owner --no-privileges` for you.
Dumping the database data extracts a copy of all data, and the database is left unmodified.
The command writes a dump in SQL format to stdout, so you can use it to write to a file like so:
```sh theme={null}
# dump database into /tmp/dump.sql
shuttle resource dump database::shared::postgres > /tmp/dump.sql
```
If you get errors about request limits or timeouts, reach out to us for support.
You can inspect the file and edit it to your liking.
If you have already run your schema migrations in the new database, you can for instance remove the `CREATE TABLE`, `ALTER TABLE` and similar statements, and keep only the `COPY` statements that hold the table data.
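One rough way to keep only the table data is to extract the `COPY ... FROM stdin;` blocks, which each end with a `\.` line. This `sed` invocation is a sketch; inspect the result before restoring it:

```shell
# Keep only the COPY data blocks from the dump: each block runs from a line
# starting with "COPY " through the terminating "\." line.
sed -n '/^COPY /,/^\\\.$/p' /tmp/dump.sql > /tmp/data-only.sql
```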
## Restore data from a SQL file
Use the following command to get the connection string for the new database:
```sh theme={null}
shuttle resource list --show-secrets
```
Use psql to run the dump file against it:
```sh theme={null}
psql -d "<connection string>" -f filename.sql
```
You might see various errors about tables, rows, or constraints already existing in the new database.
In most cases this is fine, but you can verify that everything looks good by connecting to the database, and testing the app.
```sh theme={null}
psql "<connection string>"
```
# Upgrade Shuttle version
Source: https://docs.shuttle.dev/guides/upgrade
How to upgrade to newer Shuttle versions
1. Check the [Releases page](https://github.com/shuttle-hq/shuttle/releases) for any considerations regarding breaking changes in the new release.
2. Upgrade your Shuttle CLI with one of the options below:
* `shuttle upgrade` **(available in v0.48.0+)** (runs the install script below for you)
* `curl -sSfL https://www.shuttle.dev/install | bash` (Linux and macOS)
* `iwr https://www.shuttle.dev/install-win | iex` (Windows)
* `cargo binstall cargo-shuttle`
* `cargo install cargo-shuttle`
3. Update your project's Shuttle dependencies in `Cargo.toml`:
```toml Cargo.toml theme={null}
shuttle-runtime = "0.57.0"
# do the same for other shuttle dependencies
```
4. Test that your project works with `cargo check` / `shuttle run`.
5. Finally, redeploy your Shuttle app with `shuttle deploy`.
# GitHub Actions
Source: https://docs.shuttle.dev/integrations/ci-cd
Streamline your Rust projects with Shuttle CI/CD
Connecting your GitHub account via the [GitHub integration](/integrations/github) is the simplest way to set up automatic deployments. However, if your deployment requires building other static assets, such as React or Vue builds, you'll need to set up a custom GitHub Action that runs the build script before deploying to Shuttle.
Shuttle provides a GitHub Action for automating deployments. This action can run the `shuttle deploy` command for you, enabling continuous deployments on every push.
Here's an example of a GitHub Actions workflow that uses the Shuttle Deploy Action:
```yaml theme={null}
name: Deploy to Shuttle
on:
push:
branches:
- main
jobs:
deploy:
runs-on: ubuntu-latest
steps:
- uses: shuttle-hq/deploy-action@v2
with:
shuttle-api-key: ${{ secrets.SHUTTLE_API_KEY }}
project-id: proj_0123456789
secrets: |
MY_AWESOME_SECRET_1 = '${{ secrets.SECRET_1 }}'
```
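If you need to build frontend assets before deploying, you can add a build step ahead of the deploy action. The Node setup and the `frontend` directory with `npm` scripts below are an assumed project layout; adapt them to yours:

```yaml
name: Deploy to Shuttle

on:
  push:
    branches:
      - main

jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      # Build the static assets first (assumed npm-based frontend)
      - uses: actions/setup-node@v4
        with:
          node-version: 20
      - run: npm ci && npm run build
        working-directory: frontend
      # Then deploy the workspace, including the built assets
      - uses: shuttle-hq/deploy-action@v2
        with:
          shuttle-api-key: ${{ secrets.SHUTTLE_API_KEY }}
          project-id: proj_0123456789
```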
## Inputs
| Name | Description | Required | Default |
| :-------------------- | :----------------------------------------- | :------- | :------------ |
| shuttle-api-key | The Shuttle API key | true | N/A |
| project-id | Project ID, starts with `proj_` | true | N/A |
| cargo-shuttle-version | Version of cargo-shuttle | false | `""` (latest) |
| working-directory | The cargo workspace root | false | `"."` |
| secrets | Content of the `secrets.toml` file, if any | false | `""` |
| extra-args | Extra args to the deploy command | false | `""` |
## Learn More
Check out the official [GitHub Deploy Action](https://github.com/shuttle-hq/deploy-action) for more information.
# Custom Resources
Source: https://docs.shuttle.dev/integrations/custom-resources
This example shows how you can make a custom Shuttle resource annotation.
The example resource we'll be making is a Plain Data Object (which we will refer to as `pdo`), which simply outputs the value that you pass into the "name" attribute of the resource.
We are using the Axum framework in `main.rs` to showcase the resource in action, but the implementation is entirely independent of the web framework you use, so you can add your custom resource to any Shuttle-hosted service.
You can clone the example below by running the following (you'll need `shuttle` CLI installed):
```bash theme={null}
shuttle init --from https://github.com/shuttle-hq/shuttle-examples --subfolder custom-resource/pdo
```
```rust src/main.rs theme={null}
use axum::{extract::State, routing::get, Router};
use pdo::{Builder, Pdo};
use std::sync::Arc;

async fn hello_world(State(pdo): State<Arc<Pdo>>) -> String {
    pdo.name.clone()
}

#[shuttle_runtime::main]
async fn axum(#[Builder(name = "John")] pdo: Pdo) -> shuttle_axum::ShuttleAxum {
    let state = Arc::new(pdo);
    let router = Router::new().route("/", get(hello_world)).with_state(state);

    Ok(router.into())
}
```
```rust src/lib.rs theme={null}
use async_trait::async_trait;
use serde::Serialize;
use shuttle_service::{resource::Type, Error, Factory, IntoResource, ResourceBuilder};

#[derive(Default, Serialize)]
pub struct Builder {
    name: String,
}

pub struct Pdo {
    pub name: String,
}

impl Builder {
    /// Name to give resource
    pub fn name(mut self, name: &str) -> Self {
        self.name = name.to_string();
        self
    }
}

#[async_trait]
impl ResourceBuilder for Builder {
    const TYPE: Type = Type::Custom;

    type Config = Self;
    type Output = String;

    fn config(&self) -> &Self::Config {
        self
    }

    async fn output(self, _factory: &mut dyn Factory) -> Result<Self::Output, Error> {
        // factory can be used to get resources from Shuttle
        Ok(self.name)
    }
}

#[async_trait]
impl IntoResource<Pdo> for String {
    async fn into_resource(self) -> Result<Pdo, Error> {
        Ok(Pdo { name: self })
    }
}
```
```toml Cargo.toml theme={null}
[package]
name = "pdo"
version = "0.1.0"
edition = "2021"
[dependencies]
async-trait = "0.1.56"
axum = "0.7.3"
serde = { version = "1.0.148", default-features = false, features = ["derive"] }
shuttle-service = "0.57.0"
shuttle-axum = "0.57.0"
shuttle-runtime = "0.57.0"
tokio = "1.28.2"
```
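Stripped of the Shuttle-specific traits, the lifecycle above is a two-stage pipeline: the builder produces a serializable output (which the platform stores), and that output is later converted into the final resource when your service starts. The following framework-free sketch illustrates the idea; the names mirror the example but none of this is part of `shuttle_service` itself:

```rust theme={null}
// Illustrative two-stage resource pipeline: Builder -> Output -> Resource.
// Stage 1 mirrors ResourceBuilder::output(); stage 2 mirrors IntoResource::into_resource().
struct Builder {
    name: String,
}

struct Pdo {
    name: String,
}

impl Builder {
    fn new() -> Self {
        Builder { name: String::new() }
    }

    fn name(mut self, name: &str) -> Self {
        self.name = name.to_string();
        self
    }

    // Stage 1: produce the serializable output (here just a String).
    fn output(self) -> String {
        self.name
    }
}

// Stage 2: turn the stored output back into the runtime resource.
fn into_resource(output: String) -> Pdo {
    Pdo { name: output }
}

fn main() {
    let output = Builder::new().name("John").output();
    let pdo = into_resource(output);
    assert_eq!(pdo.name, "John");
    println!("resource name: {}", pdo.name);
}
```

Splitting the pipeline this way is what lets Shuttle cache the intermediate output between deployments while keeping the final resource type framework-agnostic.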
# GitHub Integration
Source: https://docs.shuttle.dev/integrations/github
Connect a GitHub repository for automatic deployments
Connect your GitHub account to Shuttle to manage and deploy repositories directly from the Shuttle console. You can turn on automatic deployments when you push code to GitHub. When enabled, this integration streamlines your deployment workflow by automatically rebuilding and redeploying your application whenever changes are pushed.
## How It Works
When you connect a GitHub repository to your Shuttle project, we establish a direct link between your codebase and your deployment pipeline.
### Features
* **Deploy Your Own Repository**: Connect your existing GitHub repository to Shuttle, configure your secrets, and deploy your application from the Shuttle dashboard
* **Automatic Deployments on Git Push**: Enable automatic deployments so that pushing code to your selected branch triggers an immediate rebuild and redeployment
* **Deploy Templates**: Choose from our pre-configured GitHub templates and deploy them instantly to get started quickly - no CLI installation required
* **Deploy From Dashboard**: Deploy any branch from your repository directly using the "Deploy" button in the console dashboard
### Use-cases and limitations
Shuttle's automatic deployments from GitHub are designed for projects where the codebase is fully ready to deploy with `shuttle deploy` after checkout.
If your project relies on gitignored assets that are needed when running `shuttle deploy`, use our [GitHub Deploy Action](/integrations/ci-cd) instead, and build/configure these assets prior to deployment.
## Which Should I Use?
| Your Project | Use This |
| :--------------------------------------------- | :--------------------- |
| Pure Rust backend (Axum, Actix, Rocket) | **GitHub Integration** |
| Rust + pre-built static files committed to git | **GitHub Integration** |
| Rust + React/Vue build step | **GitHub Actions** |
| Rust + Asset compilation (SCSS, Tailwind) | **GitHub Actions** |
| Rust + Code generation before deploy | **GitHub Actions** |
## Connecting Your GitHub Account
To integrate GitHub with your Shuttle project, follow these steps:
1. Navigate to the [Integrations page](https://console.shuttle.dev/account/integrations) in Shuttle Console.
2. Click "Connect to GitHub" to authorise Shuttle to access your selected repositories.
3. GitHub will prompt you to provide access.
## Deploying a GitHub Repository
1. Navigate to [new project](https://console.shuttle.dev/new-project) and select "**GitHub Repository**".
2. Once authorised, select the repository you want to deploy.
3. Confirm the repository is configured to run on Shuttle. See the [migration docs](/migration/guide) for guidance.
4. Select the branch and add secrets if required.
5. Confirm deployment and your project will be deployed to Shuttle.
## Connecting a GitHub Repository to an Existing Project
1. Navigate to **project** **settings** of your desired project.
2. Once authorised, select the repository you want to connect to this Shuttle project.
3. After selecting your repository, confirm the connection. Your Shuttle project is now linked to your GitHub repository.
*Disconnecting removes the link between Shuttle and your repository but doesn't affect your deployed application or your GitHub repository itself.*
## Automatic Deployments
Once you've connected a GitHub repository to your Shuttle project, you can push to GitHub and Shuttle will automatically pull the latest code from your connected repository, build it, and deploy the updated application.
1. Navigate to project settings and ensure a GitHub repository is connected.
2. Enable **Automatic Deployments**
3. Select and confirm branch.
4. Push to GitHub and a deployment will automatically begin.
## Deploying a GitHub Template
Deploying a template sets up both the Shuttle project and a new GitHub repository in your connected GitHub account. You get a fully functional application deployed on Shuttle with the complete source code in your GitHub account.
1. Navigate to [Shuttle Console](https://console.shuttle.dev/new-project) and select "**Template**".
2. Select desired template.
3. Choose "**Deploy from GitHub**" development flow and authorise if required.
4. Select the GitHub account you would like the template repository to be generated in, and name the repository.
5. If required, add secrets and **deploy**.
## Permissions for Growth Tier
**Team members on a Growth plan** have **view access** to GitHub integration settings but cannot modify them.
* Team members cannot connect, disconnect, or change a linked repository.
* Both account owners and team members can enable automatic deployments and deploy a connected repository.
# Shuttle MCP
Source: https://docs.shuttle.dev/integrations/mcp-server
How to set up the Shuttle MCP server for use in AI coding assistants
To make Rust development even better, we've built the Shuttle [MCP server](https://modelcontextprotocol.io/).
## What's this for?
The Shuttle MCP server supercharges your development workflow in two powerful ways:
**Contextual Knowledge**: It acts as a bridge between Shuttle's comprehensive documentation and your AI coding assistant, providing instant access to Shuttle-specific guidance without leaving your editor. No more tab-switching between docs and code!
**Powerful Development Tools**: Beyond just documentation, the MCP server gives your AI agent direct access to Shuttle's tooling ecosystem. Your assistant can help you deploy services, **fetch logs**, and interact with the Shuttle console - all through natural conversation. This makes development with Rust and Shuttle significantly faster and more intuitive.
## Installation
The Shuttle MCP server ships with the **Shuttle CLI (version 0.56 and later)**, so make sure you have the [Shuttle CLI](https://docs.shuttle.dev/getting-started/installation) installed first.
## Configuration
For **Cursor users**, you can install the Shuttle MCP server with one click:
Or choose your preferred AI coding environment and add the Shuttle MCP server configuration manually:
```json Cursor theme={null}
{
  "mcpServers": {
    "Shuttle": {
      "command": "shuttle",
      "args": ["mcp", "start"]
    }
  }
}
```
```json Windsurf theme={null}
{
  "mcpServers": {
    "Shuttle": {
      "command": "shuttle",
      "args": ["mcp", "start"]
    }
  }
}
```
```json VSCode theme={null}
{
  "servers": {
    "Shuttle": {
      "type": "stdio",
      "command": "shuttle",
      "args": ["mcp", "start"]
    }
  }
}
```
```json Claude Desktop theme={null}
{
  "mcpServers": {
    "Shuttle": {
      "command": "shuttle",
      "args": ["mcp", "start"]
    }
  }
}
```
```bash Claude Code theme={null}
claude mcp add Shuttle --scope user -- shuttle mcp start
```
### Configuration File Locations
| IDE | Configuration Path |
| :------------- | :------------------------------------------------------------------------------------------------------------------------------------------- |
| Cursor | `~/.cursor/mcp.json` (global) or `.cursor/mcp.json` (project) |
| Windsurf | `~/.codeium/windsurf/mcp_config.json` |
| VSCode | User settings or `.vscode/mcp.json` (project) |
| Claude Desktop | **macOS**: `~/Library/Application Support/Claude/claude_desktop_config.json` **Windows**: `%APPDATA%\Claude\claude_desktop_config.json` |
## You're all set!
Your AI coding assistant now has direct access to Shuttle's knowledge base and tooling.
Let's see it in action!
## How to use?
### Prompting Best Practices
To get the most out of the **Shuttle MCP server**, end your prompts with `Use the Shuttle MCP server`. This ensures the AI agent always references the most up-to-date Shuttle documentation and examples when helping you with your code.
It also makes the AI agent more likely to use the Shuttle MCP server for deployments, logs, project status, and so on.
```txt theme={null}
Build a Simple Todo App with Axum and then deploy it to Shuttle.
Use the Shuttle MCP server.
```
### Add AI Assistant Rules (Recommended)
Create AI agent rules in your project to establish consistent coding practices. Choose the appropriate file based on your AI coding assistant:
* **Cursor**: `.cursor/rules/shuttle.mdc`
* **Windsurf**: `.windsurf/rules/shuttle.md`
* **Claude Code**: `CLAUDE.md`
```markdown theme={null}
# Shuttle Rules
- Always use the Shuttle MCP to get the latest docs and examples
- Whenever needed, use the Shuttle MCP server for deployments, logs, project status, etc.
```
#### For Cursor Users: Enable "Always Apply"
When adding rules in Cursor, make sure to select "Always Apply" to ensure the rule is applied to all AI chats automatically.
### MCP Server in Action
Let's see the MCP server in action! We'll use Cursor and ask the AI agent to build a simple full stack todo app with Axum and deploy it to Shuttle.
Here is the prompt we'll use, and we'll leave the rest to the AI agent:
```txt Prompt theme={null}
Build a Simple Full Stack Todo App with a beautiful UI with Axum and then deploy it to Shuttle.
Use the Shuttle MCP server.
```
#### AI Agent Accessing Latest Documentation
The AI agent immediately starts by searching through Shuttle's latest documentation using the MCP server. This ensures it has access to the most up-to-date information about Shuttle's features, best practices, and examples before beginning development.
As you can see, the AI agent proactively searches for relevant documentation sections, ensuring it builds the todo app using current Shuttle patterns and recommendations.
#### Debugging with MCP Tools
The power of the MCP server really shines when debugging. In this example, the AI agent deployed the app to the cloud and checked the status - everything looked good. But when fetching the logs, it encountered a runtime panic.
The AI agent immediately identified the issue and fixed the error, then deployed again. This showcases the power of having MCP tools integrated directly into your AI assistant's workflow.
#### Success on Second Try
After fixing the runtime error, the AI agent successfully deployed the application on the second attempt:
#### The Final Result
Here's the beautiful todo app UI that was deployed to the cloud:
The entire process - from initial development to debugging and successful deployment - was handled seamlessly by the AI agent using the Shuttle MCP server tools. This demonstrates how the MCP integration makes Shuttle development faster, more intuitive, and more reliable.
We can't wait to see what amazing projects you create with the Shuttle MCP server, and we'd love to hear your feedback!
# OpenAI
Source: https://docs.shuttle.dev/integrations/shuttle-openai
Learn about Shuttle's OpenAI resource annotation.
This plugin allows services to connect to [OpenAI](https://openai.com/), enabling easy access to OpenAI's powerful language models and AI capabilities.
## Usage
Add `shuttle-openai` to the dependencies for your service by running `cargo add shuttle-openai`.
This resource will be provided by adding the `shuttle_openai::OpenAI` attribute to your Shuttle `main` decorated function.
It returns an `async_openai::Client` for you to interact with OpenAI's services.
### Example
In the case of an Axum server, your main function will look like this:
```rust theme={null}
use async_openai::{Client, config::OpenAIConfig};
use shuttle_axum::ShuttleAxum;

#[shuttle_runtime::main]
async fn app(
    #[shuttle_openai::OpenAI(api_key = "{secrets.OPENAI_API_KEY}")]
    openai: Client<OpenAIConfig>,
) -> ShuttleAxum {
    // Your app logic here
}
```
[Click here for the full example.](https://github.com/shuttle-hq/shuttle-examples/tree/main/axum/openai)
### Parameters
| Parameter   | Type             | Description                                                                                                                                                 |
| ----------- | ---------------- | ----------------------------------------------------------------------------------------------------------------------------------------------------------- |
| api\_key    | `str`            | The API key for OpenAI authentication                                                                                                                        |
| api\_base   | `Option<String>` | To use an API base URL different from the default [OPENAI\_API\_BASE](https://docs.rs/async-openai/latest/async_openai/config/constant.OPENAI_API_BASE.html) |
| org\_id     | `Option<String>` | To use an organization ID other than the default                                                                                                             |
| project\_id | `Option<String>` | To use a non-default project ID                                                                                                                              |
The API key is loaded from your `Secrets.toml` file.
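The `{secrets.OPENAI_API_KEY}` syntax is secrets interpolation: the placeholder is replaced with the matching value from `Secrets.toml` before the client is built. Conceptually it behaves like the following simplified sketch (an illustration only, not Shuttle's actual implementation):

```rust theme={null}
use std::collections::HashMap;

// Replace "{secrets.KEY}" placeholders with values from a secrets map.
// Simplified illustration of secrets interpolation.
fn interpolate(template: &str, secrets: &HashMap<String, String>) -> String {
    let mut result = template.to_string();
    for (key, value) in secrets {
        result = result.replace(&format!("{{secrets.{}}}", key), value);
    }
    result
}

fn main() {
    let mut secrets = HashMap::new();
    secrets.insert("OPENAI_API_KEY".to_string(), "sk-test-123".to_string());

    let resolved = interpolate("{secrets.OPENAI_API_KEY}", &secrets);
    assert_eq!(resolved, "sk-test-123");
    println!("{resolved}");
}
```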
## Configuration
To use this integration, you need to set up your OpenAI API key in the `Secrets.toml` file:
```toml theme={null}
OPENAI_API_KEY = "your-api-key-here"
```
Make sure to keep your API key confidential and never commit it to version control.
## Additional Information
For more details on how to use the OpenAI client in your Rust application, refer to the [async\_openai documentation](https://docs.rs/async-openai/latest/async_openai/).
Remember to handle API usage responsibly and in accordance with OpenAI's usage policies and rate limits.
# Apache OpenDAL
Source: https://docs.shuttle.dev/integrations/shuttle-opendal
Learn about Shuttle's OpenDAL resource annotation.
This plugin allows services to connect to [Apache OpenDALβ’](https://github.com/apache/opendal). OpenDAL is a data access layer that allows users to easily and efficiently retrieve data from various storage services in a unified way.
Users can connect OpenDAL to access data from a variety of storage services, including: S3, AzBlob (Azure Blob storage), GCS, OSS and [so on](https://opendal.apache.org/docs/rust/opendal/services/index.html).
## Usage
**IMPORTANT**: Shuttle isn't able to provision storage for you (yet). This means you will have to create the storage service first and set up the secrets accordingly.
Add `shuttle-opendal` to the dependencies for your service by running `cargo add shuttle-opendal`.
This resource will be provided by adding the `shuttle_opendal::Opendal` attribute to your Shuttle `main` decorated function.
It returns an `opendal::Operator` for you to connect to the storage service.
### Example
In the case of an Axum server, your main function will look like this:
```rust theme={null}
use opendal::Operator;
use shuttle_axum::ShuttleAxum;

#[shuttle_runtime::main]
async fn app(
    #[shuttle_opendal::Opendal(scheme = "s3")]
    storage: Operator,
) -> ShuttleAxum {
    // Your app logic here
}
```
### Parameters
| Parameter | Type | Default | Description |
| --------- | ----- | ---------- | ------------------------------------------------ |
| scheme | `str` | `"memory"` | The scheme of the storage service to connect to. |
All secrets are loaded from your `Secrets.toml` file.
For instance, when using `s3`, you can configure the scheme to `s3` and specify the secrets: `bucket`, `access_key_id`, and `secret_access_key`.
Visit the [OpenDAL Documentation](https://opendal.apache.org/docs/rust/opendal/services/index.html) for more information on how to setup the secrets for the storage service you want to connect to.
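For example, with `scheme = "s3"` your `Secrets.toml` might contain entries like the following (all values are placeholders):

```toml Secrets.toml theme={null}
bucket = "my-bucket"
access_key_id = "your-access-key-id"
secret_access_key = "your-secret-access-key"
```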
# Qdrant
Source: https://docs.shuttle.dev/integrations/shuttle-qdrant
Integrate vector search capabilities into your Rust projects with Shuttle Qdrant. Our documentation outlines the process, simplifying your journey.
This plugin allows services to connect to a [Qdrant](https://qdrant.tech/) database. Qdrant is a vector database & vector similarity search engine.
You can get started in seconds by cloning our Axum + Qdrant example with
```bash theme={null}
shuttle init --from shuttle-hq/shuttle-examples --subfolder axum/qdrant
```
## Usage
**IMPORTANT:** Shuttle isn't able to provision a Qdrant Cloud cluster for you (yet). This means you will have to create an account on their [website](https://qdrant.tech/) and follow the few steps required to create a cluster and an API key to access it.
Add `shuttle-qdrant` and `qdrant-client` to the dependencies for your service by running `cargo add shuttle-qdrant qdrant-client@1.7.0`. This resource will be provided by adding the `shuttle_qdrant::Qdrant` attribute to your Shuttle main function.
It returns a `qdrant_client::QdrantClient`. When running locally it will by default spin up a Qdrant Docker container for your project.
If you want to connect to a remote database when running locally, you can specify the `local_url` parameter.
### Parameters
| Parameter | Type | Required? | Description |
| ---------- | ----- | ------------- | ------------------------------------------------------------------------------------ |
| cloud\_url | \&str | In deployment | URL of the database to connect to. NOTE: It should use the gRPC port. |
| api\_key | \&str | No | Required if the database requires an API key. |
| local\_url | \&str | No | If specified, connect to this URL on local runs instead of using a Docker container. |
Make sure the `cloud_url` parameter is specifying the gRPC port of the database. This is typically done by adding `:6334` at the end.
You can use secrets interpolation to set the URL and API key. See below for an example.
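For instance, `Secrets.toml` could contain entries like these (values are placeholders; note the gRPC port `:6334` on the URL):

```toml Secrets.toml theme={null}
CLOUD_URL = "https://xyz-example.eu-central-1-0.aws.cloud.qdrant.io:6334"
API_KEY = "your-qdrant-api-key"
```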
### Example
In the case of an Axum server, your main function can look like this:
```rust theme={null}
use qdrant_client::prelude::*;

#[shuttle_runtime::main]
async fn axum(
    #[shuttle_qdrant::Qdrant(cloud_url = "{secrets.CLOUD_URL}", api_key = "{secrets.API_KEY}")]
    qdrant: QdrantClient,
) -> shuttle_axum::ShuttleAxum {
    // set up state and router...
}
```
# Turso
Source: https://docs.shuttle.dev/integrations/shuttle-turso
Learn about how to use Shuttle with Turso, a distributed SQLite cloud service.
This plugin allows services to connect to a [Turso](https://turso.tech) database. Turso is an edge-hosted distributed database based on libSQL, a SQLite fork.
## Usage
**IMPORTANT:** Shuttle isn't able to provision a Turso database for you (yet). This means you will have to create an account on their [website](https://turso.tech/) and follow the few steps required to create a database and a token to access it.
Add `shuttle-turso` and `libsql` to the dependencies for your service by running `cargo add shuttle-turso libsql`. This resource will be provided by adding the `shuttle_turso::Turso` attribute to your Shuttle main decorated function.
It returns a `libsql::Database`. When running locally, it will instantiate a local SQLite database named after your service instead of connecting to your edge database.
If you want to connect to a remote database when running locally, you can specify the `local_addr` parameter. In that case, the token will be read from your `Secrets.dev.toml` file.
### Parameters
| Parameter   | Type           | Default | Description                                                                                                                            |
| ----------- | -------------- | ------- | -------------------------------------------------------------------------------------------------------------------------------------- |
| addr        | `&str`         | `""`    | URL of the database to connect to. Should begin with either `libsql://` or `https://`.                                                  |
| token       | `&str`         | `""`    | The token used to authenticate against the Turso database. You can use secrets interpolation to read it from your `Secrets.toml` file.  |
| local\_addr | `Option<&str>` | `None`  | The URL to use when running your service locally. If not provided, this defaults to a local SQLite file named after your service.       |
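With secrets interpolation for the token, your `Secrets.toml` would contain the corresponding entry (the value is a placeholder; the secret name matches the example below):

```toml Secrets.toml theme={null}
DB_TURSO_TOKEN = "your-turso-db-token"
```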
### Example
In the case of an Axum server, your main function will look like this:
```rust theme={null}
use libsql::Database;
use shuttle_axum::ShuttleAxum;

#[shuttle_runtime::main]
async fn app(
    #[shuttle_turso::Turso(
        addr = "libsql://my-turso-db-name.turso.io",
        token = "{secrets.DB_TURSO_TOKEN}"
    )]
    client: Database,
    // use secrets directly if you are not hardcoding your token/addr
    #[shuttle_runtime::Secrets] secrets: shuttle_runtime::SecretStore,
) -> ShuttleAxum {
    // ... some code
}
```
# Migrate to Shuttle
Source: https://docs.shuttle.dev/migration/guide
A comprehensive guide to migrating your existing Rust project to Shuttle
## Migration checklist
Before starting your migration, check if your project meets these requirements:
### β Supported Service Types
* HTTP web services using:
* [Axum](/examples/axum)
* [Actix Web](/examples/actix)
* [Rocket](/examples/rocket)
* [Loco](/examples/loco)
* [7+ other HTTP frameworks](/examples/other)
* Discord bots
* Telegram bots
* Web services connecting to RPC nodes
* Services with scheduled tasks (cronjobs)
* Other service types (via [custom service](/templates/tutorials/custom-service))
### β οΈ Other Considerations
* External resources can still be used by connecting to them
* Database migrations need to be handled separately
* Check that your project:
* Is compatible with the latest Rust version
* Can be built and run in a Docker container
If you're unsure about any compatibility requirements, join our [Discord community](https://discord.gg/shuttle) for help!
## Migration Steps
Remember to install the [Shuttle CLI](/getting-started/installation) if you haven't yet!
### 1. Add Shuttle Dependencies
Add the required Shuttle dependencies to your `Cargo.toml`:
```bash theme={null}
# Add the Shuttle runtime
cargo add shuttle-runtime
# Add your framework integration if supported (e.g., for Axum)
cargo add shuttle-axum
# Add any Shuttle resources you need (e.g., Postgres)
cargo add shuttle-shared-db --features postgres,sqlx
```
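After running these commands, the `[dependencies]` section of your `Cargo.toml` will contain entries along these lines (versions shown are indicative):

```toml Cargo.toml theme={null}
[dependencies]
shuttle-runtime = "0.57.0"
shuttle-axum = "0.57.0"
shuttle-shared-db = { version = "0.57.0", features = ["postgres", "sqlx"] }
```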
### 2. Update Your Main Function
Common steps needed for migrating are:
* Change the main function to use `#[shuttle_runtime::main]`.
* Change the return type and return the framework's Router or config. This varies by framework, check [examples](/examples/overview) for more.
* Add a `Secrets.toml` file and use `#[shuttle_runtime::Secrets]` instead of loading environment variables.
* Use `#[shuttle_shared_db::Postgres]` instead of manually connecting to a database.
Here is an example of what an Axum service might look like before and after migrating:
```rust main.rs (before) theme={null}
#[tokio::main]
async fn main() {
    dotenvy::dotenv().ok();
    let db_url = std::env::var("DATABASE_URL").unwrap();
    let secret = std::env::var("MY_SECRET").unwrap();

    let pool = sqlx::Pool::connect(&db_url).await.unwrap();
    sqlx::migrate!().run(&pool).await.unwrap();

    // Use secrets for anything that needs them
    let router = create_api_router(pool);
    let addr = SocketAddr::from(([0, 0, 0, 0], 8000));

    Server::bind(&addr)
        .serve(router.into_make_service())
        .await
        .unwrap()
}
```
```rust main.rs (after) theme={null}
#[shuttle_runtime::main]
async fn main(
    #[shuttle_shared_db::Postgres] pool: PgPool,
    #[shuttle_runtime::Secrets] secrets: shuttle_runtime::SecretStore,
) -> shuttle_axum::ShuttleAxum {
    sqlx::migrate!().run(&pool).await.unwrap();

    // Use secrets for anything that needs them
    let router = create_api_router(pool);

    Ok(router.into())
}
```
### 3. Add Shuttle configuration
If your app uses gitignored files, or uses static files at runtime, you need to add a `Shuttle.toml` file with some file declarations.
Read more in [Deployment files](/docs/files).
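For instance, a project with a gitignored frontend build and templates needed at runtime might use a `Shuttle.toml` like this (paths are illustrative):

```toml Shuttle.toml theme={null}
# Gitignored files to include in the deployment upload
[deploy]
include = ["dist/*"]

# Files to copy from the builder into the runtime image
[build]
assets = ["templates/*"]
```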
### 4. Testing it locally
```bash theme={null}
shuttle run
```
### 5. Deploy
If everything is ready to launch:
```bash theme={null}
shuttle deploy
```
The first deployment might take a few minutes as it sets up your infrastructure.
## Next Steps
Learn about available resources
Create a custom service
# Transfer to the new platform
Source: https://docs.shuttle.dev/platform-update/migration
How to transfer projects from shuttle.rs to shuttle.dev
This guide is for transferring projects from **shuttle.rs** to the new **shuttle.dev** platform.
If you are a new Shuttle user, you can ignore this document and start from [Installation](/getting-started/installation).
## 1. Check platform features & policies
Due to the large re-write of the Shuttle platform, some features have been dropped or moved to the feature roadmap.
Make sure you have read the [Platform Update Changelog](/platform-update/platform-update#changelog) to verify that it is currently possible to migrate your project.
Notably:
* **Shuttle Persist** is not supported in the same way. Shared Postgres DB with a similar key-value store abstraction exists instead ([`SerdeJsonOperator` in shuttle\_shared\_db](/resources/shuttle-shared-db)). For help with migrating data, please reach out to us.
* Migrating **Shared MongoDB** is not possible since the feature has been dropped.
## 2. Update CLI
Follow [Installation](/getting-started/installation) to install the latest Shuttle CLI.
The same Shuttle API key is used on both platforms. If you are already logged in, you don't need to `shuttle login` again.
To verify that your new CLI is installed and working correctly, try the new account command:
```sh theme={null}
shuttle account
```
If the command does not error, you are ready to use **shuttle.dev**.
## 3. Access the new Console
The Shuttle Console for **shuttle.dev** is accessed at [console.shuttle.dev](https://console.shuttle.dev/).
~~The Shuttle Console for **shuttle.rs** is still accessed at [console.shuttle.rs](https://console.shuttle.rs/).~~
## 4. Update Code
Most Shuttle projects should run on the new platform with minimal code changes.
This is the list of required changes.
### Cargo.toml
Update to the latest version of Shuttle dependencies:
```toml Cargo.toml theme={null}
shuttle-runtime = "0.57.0"
# do the same for all other shuttle crates
```
### Shuttle.toml
The `name` field is now unused.
The `assets` field has been renamed to `deploy.include` ([docs](/docs/files#include-ignored-files)).
If you want the deploy command to keep blocking dirty deployments, add the `deploy.deny_dirty` field ([docs](/docs/files#block-dirty-deployments)).
The new field `build.assets` might need to be added:
If your project uses static assets or other files at runtime, you need to declare them in `build.assets` to have them copied from the builder to the runtime image ([docs](/docs/files#build-assets)).
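Putting these changes together, a migrated `Shuttle.toml` might look like this (paths are illustrative):

```toml Shuttle.toml theme={null}
[deploy]
# previously the `assets` field
include = ["dist/*"]
# opt back in to blocking dirty deployments
deny_dirty = true

[build]
# files needed by the app at runtime
assets = ["static/*"]
```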
### Secrets.toml
Secrets.toml must now be in the root of the cargo workspace, so move it there if it is in a member crate.
`--secrets <file>` on the deploy command can still be used for a custom secrets file location.
## 5. Local Run
Check that your project builds and runs locally with
```sh theme={null}
shuttle run
```
## 6. Deploy
Time to deploy!
```sh theme={null}
shuttle deploy
```
Note your project URL: project subdomains are now under `*.shuttle.app` instead of `*.shuttleapp.rs`.
## 7. (Optional) Migrate Shared Postgres data
If you use a Shared Postgres database and want to migrate data to the new platform, follow the [migration guide](/guides/migrate-shared-postgres)!
## 8. (Optional) Update GitHub Action
If you are using [deploy-action](https://github.com/shuttle-hq/deploy-action), check the new v2 branch for renamed and new required fields:
* Use `shuttle-hq/deploy-action@v2` instead of `shuttle-hq/deploy-action@main`
* Rename `deploy-key` to `shuttle-api-key`
* Add a `project-id` value
* Any other args for `shuttle deploy` can be added in `extra-args`
## 9. (Optional) Custom domains
Once you have moved resources, deployed your app to shuttle.dev, and want to move your custom domain, update your DNS records according to [Domain names](/docs/domain-names) and use the new `shuttle certificate` command to [add SSL certificates](/docs/domain-names#set-up-ssl-certificate).
HTTPS traffic should then be enabled for your custom domain.
# Overview & Changelog
Source: https://docs.shuttle.dev/platform-update/platform-update
In Q4 2024, Shuttle launched a new and improved platform
Introducing the new Shuttle platform! We've supercharged what developers love about Shuttle, combining our powerful developer experience with enterprise-grade infrastructure. For developers, we've kept it simple and intuitive - no complex configs, just focus on your Rust code. On the production side, we've implemented VM-level isolation, increased reliability and scalability to meet real-world demands. From solo developers to enterprise teams, Shuttle now offers the perfect blend of ease and production-ready robustness.
[*Read the full announcement here!*](https://www.shuttle.dev/blog/2024/10/10/shuttle-redefining-backend-development)
## Important Dates
* January 2nd, 2025: Deployment freeze on legacy platform
* January 14th, 2025: Legacy platform shutdown begins (gradual shutdown and removal of all projects)
* January 31st, 2025: Complete decommissioning of legacy platform infrastructure
Migrate to the new platform for continued deployment capabilities. Check out our [migration docs](/platform-update/migration).
## Domain and CLI changes
### Access the NEW platform
* New Console: [console.shuttle.dev](https://console.shuttle.dev)
* New Docs: [docs.shuttle.dev](https://docs.shuttle.dev)
* Command Line: new `shuttle` command (installed alongside `cargo shuttle`)
### ~~Access the OLD platform~~ (no longer available)
* ~~Old Console: [console.shuttle.rs](https://console.shuttle.rs)~~
* ~~Old Docs: [docs.shuttle.rs](https://docs.shuttle.rs)~~
* ~~Command Line: `cargo shuttle`~~
## Changelog
### Changed
* β οΈ The Shuttle Console is located at [console.shuttle.dev](https://console.shuttle.dev/) instead of **console.shuttle.rs**.
* β οΈ Project subdomains are under `*.shuttle.app` instead of `*.shuttleapp.rs`
* β οΈ A new binary `shuttle` is now provided for using the new platform ([read more](/getting-started/installation)).
* Project names are no longer globally unique, only unique per account. Project URLs now include some random characters at the end of their default subdomain, e.g. `myproject-3h5n.shuttle.app`.
* Builds and deployments are now fully separated. This allows for more specialised build workflows, and more efficient deployment hosting.
* Builds run on AWS CodeBuild, and now produce a Docker image instead of just a binary. *This implies we can support more languages than Rust in the future.*
* Deployments run on AWS ECS with Fargate.
* CLI commands:
* `deploy`: no longer runs tests, so `--no-test` has no effect.
* `deploy`: no longer denies dirty deployments by default ([read more](/docs/files#block-dirty-deployments)).
* `status`: use `deployment status` instead.
* `stop`: use `deployment stop` instead.
* `project status`: projects no longer have a state, so `--follow` has no effect.
* `login`: automatically gets the API key from the API after an approval in Shuttle Console. No more copy + pasting!
* Max archive size for deployments is now 100 MB (up from 50MB).
* Secrets.toml must now be in the root of the cargo workspace (`--secrets [file]` can still be used for a custom location).
* Shuttle.toml:
* Renamed the `assets` field to `deploy.include` ([read more](/docs/files#include-ignored-files)).
* For static files to move from the build stage to the runtime container, you must specify `build.assets` ([read more](/docs/files#build-assets)).
* Shared Postgres resource:
* Postgres 16
* Now based on AWS RDS
* Now has common [Postgres Extensions](https://docs.aws.amazon.com/AmazonRDS/latest/PostgreSQLReleaseNotes/postgresql-extensions.html#postgresql-extensions-16x)
### Added
* The Shuttle proxy, that proxies HTTP requests to user projects, now sets the [X-Forwarded-For header](https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/X-Forwarded-For) on all requests, with the IP of the original caller.
* CLI commands:
* Commands that target a project: You will now be prompted to link your project directory to a Shuttle project.
* In addition to `--name <name>`, you can also use `--id <id>` to target specific projects. This overrides project linking.
* `project link`: explicitly re-link the project directory to a Shuttle project.
* `certificate` command for adding and managing SSL certificates for custom domains.
* `deploy --no-follow` to not poll the deployment status until it reaches a running or failed state.
* `account` to show information about your account.
* `project update name` to rename a project.
* You can now create accounts with Google sign-in and email + password. Accounts on new providers with matching emails are not linked and are treated as separate accounts.
* The ability to increase the allocated CPU and RAM limits (contact us).
### Removed
* **Shuttle Persist** resource and the persistent file volume. *(We plan to implement an S3-based replacement.)*
* **MongoDB Shared Database** resource. *(Removed due to having very few users.)*
* **AWS RDS Database** resource. *(We plan to bring it back.)*
* CLI commands:
* `project stop`
* `project restart`
* `clean`
* The **Teams** feature has been temporarily disabled. We plan to enable it after improvements have been made.
# Usage & Billing
Source: https://docs.shuttle.dev/pricing/billing
Each Shuttle plan includes a base subscription with predetermined resource allocations. Additional resources can be purchased as needed, giving you the flexibility to scale your applications while maintaining cost control.
## How Billing Works
Your monthly bill consists of your base plan subscription plus charges for any resource usage from the previous month that exceeded the included allocations. Pro and Growth tiers start with a 14-day free trial of the base subscription. Usage charges still apply during the trial period.
## Usage-based Pricing
Additional resources beyond your plan's included limits are billed at the rates shown below.
| Additional Resource | Rate | Available to |
| ------------------- | ------------------- | -------------- |
| Database Storage | \$0.12/GB/month | Pro and Growth |
| Build Minutes | \$0.025/minute | Pro and Growth |
| Network Egress | \$0.10/GB | Pro and Growth |
| Custom Domains | \$0.25/domain/month | Pro and Growth |
| Compute (vCPU) | \$0.06/vCPU hour | Pro and Growth |
| Dedicated Database | \$20/month | Pro and Growth |
| Team Member | \$25/user/month | Growth |
*For example, if your plan includes 1GB of network egress and you use 1.5GB, you'll be billed for the extra 0.5GB at \$0.10/GB.*
# FAQ
Source: https://docs.shuttle.dev/pricing/faq
Frequently Asked Pricing Questions
## Common Pricing Questions
**How does usage-based pricing work?**
Base tiers include generous resource allocations. When you exceed these allocations, additional resources are available at fixed rates: database storage at \$0.12/GB,
build minutes at \$0.025/minute,
and network egress at \$0.10/GB.
## Support
**What is priority support?**
Priority support customers receive enhanced assistance, including white glove migration help when moving your applications to Shuttle. You'll get guaranteed response times of less than 1 business day, ensuring your questions and issues are addressed promptly.
**What is dedicated support?**
Dedicated support offers our highest level of service, combining all the benefits of priority support with access to a private support channel for direct communication with our team. You'll receive white glove migration assistance, response times under 1 business day, and Enterprise customers additionally benefit from an Uptime SLA.
## Plan Selection
**Is the Growth tier suitable for my team?**
Growth tier is ideal when you need auto-scaling, multi-zone deployment, or team collaboration features. If you're running production workloads that require high availability or have multiple team members managing deployments, Growth provides the necessary tools and infrastructure.
**Can I use Shuttle for languages other than Rust?**
No, Shuttle is specifically designed for Rust applications. We provide native support for popular Rust frameworks like Axum, Actix-Web, and Rocket, delivering an optimized experience for Rust development.
## Resource Limits and Overages
**What happens if I exceed my plan's resource limits?**
We implement both soft and hard limits. You'll receive notifications when approaching soft limits, giving you time to upgrade or adjust usage. For resources like storage and build minutes, you'll be billed for additional usage at standard rates.
**Can I set custom resource limits?**
Growth tier users can set maximum limits on active container instances to control costs. All users can monitor resource usage through dashboards and receive alerts when approaching limits.
**Can I get additional custom domains without buying an additional project?**
Yes, you can add custom domains at \$0.25/domain/month without purchasing an additional project. If you do purchase an additional project, it comes with its own included custom domain allocation. The maximum number of custom domains allowed on the Growth tier is 25, regardless of whether they are included with projects or purchased separately.
**What happens if my application exceeds the max recommended requests per second?**
Your application will continue to function, but you may experience some performance degradation during periods of high traffic. These guidelines exist to help you determine when it's time to consider upgrading to a higher tier. If your project consistently hits limits we will reach out to work with you to find the best way forward.
## Billing and Subscription Management
**Can I change my plan at any time?**
Yes, you can upgrade or downgrade your plan at any time. Upgrades take effect immediately, while downgrades are applied at the start of your next billing cycle.
**How does custom domain billing work?**
Each plan includes a set number of domains (1 for Community, 3 for Pro, 5 for Growth). Additional domains can be added to any plan for \$0.25 per domain per month.
**How many Pro accounts can be registered at one organization?**
Each organization is limited to a maximum of 3 Pro accounts. If your organization needs more than 3 accounts, we recommend upgrading to the Growth tier, which provides centralized account management and shared access to all deployments in a single unified workspace.
**How are team members billed on the Growth tier?**
The Growth tier includes access for up to 10 team members. Additional team members can be added for \$25 per user per month.
# Plans overview
Source: https://docs.shuttle.dev/pricing/overview
Shuttle offers simple, transparent pricing designed to scale with your needs, from personal projects to enterprise deployments.
Check out our [pricing page](https://www.shuttle.dev/pricing) for a detailed comparison of each tier.
## Choosing a Plan
Shuttle offers tiers designed to support developers and teams at every stage of their journey. Here's a detailed guide to help you choose the right plan for your needs.
### Pro
When you're ready to move your applications to production, the Pro tier provides the reliability and support you need. It includes three projects running on reserved instances, enhanced monitoring capabilities, and guaranteed support response times. This tier is ideal for individual developers or small teams running production workloads that require consistent performance and professional support.
### Growth
The Growth tier supports teams running complex production applications that need advanced scaling capabilities. It includes team collaboration features, auto-scaling and zero-downtime deployments. With dedicated databases and priority support, the Growth tier is designed for organizations that need robust infrastructure and team-focused features to support their growing applications.
### Enterprise
The Enterprise tier provides customizable infrastructure solutions for larger organizations with specific requirements. It offers single-tenant deployments, custom hardware configurations, multi-region support, and advanced security features. This tier is suitable for organizations that need complete control over their infrastructure, have specific compliance requirements, or require custom support agreements.
# Scaling & Limits
Source: https://docs.shuttle.dev/pricing/scaling
When scaling your application on Shuttle, you have two main options for increasing computational resources:
* **Vertical scaling:** Increase the vCPU and memory of your existing project(s). This is ideal when you need more power for individual applications. Available to both Pro and Growth.
* **Horizontal scaling:** Deploy replicas of your application to distribute load and increase availability. Available exclusively to Growth tier.
Both options are billed as detailed in the sections below:
## Vertical scaling
Simply upgrade your instance size and pay the difference. Each instance size comes with a base amount of included vCPU (0.25 vCPU), and you only pay for the additional (billable) vCPU usage per hour.
| Instance Size | vCPU | Memory (GB) | Included vCPU | Billable vCPU |
| ------------- | ---- | ----------- | ------------- | ------------- |
| **Basic** | 0.25 | 0.5 | 0.25 | 0 |
| **Small** | 0.5 | 1 | 0.25 | 0.25 |
| **Medium** | 1 | 2 | 0.25 | 0.75 |
| **Large** | 2 | 4 | 0.25 | 1.75 |
| **X Large** | 4 | 8 | 0.25 | 3.75 |
| **XX Large** | 8 | 16 | 0.25 | 7.75 |
*For example, if you upgrade from Basic (0.25 vCPU) to Medium (1 vCPU), you'll only be charged for the additional 0.75 vCPU since 0.25 vCPU is included in your plan.*
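That arithmetic can be sketched with a small calculation, assuming the \$0.06/vCPU-hour compute rate from the billing table and an average 730-hour month (the function name is ours, for illustration only):

```rust theme={null}
/// Estimate the monthly charge for the billable vCPU of an instance,
/// at $0.06 per vCPU-hour over a ~730-hour month.
fn monthly_compute_cost(billable_vcpu: f64) -> f64 {
    const RATE_PER_VCPU_HOUR: f64 = 0.06;
    const HOURS_PER_MONTH: f64 = 730.0;
    billable_vcpu * RATE_PER_VCPU_HOUR * HOURS_PER_MONTH
}

fn main() {
    // Medium instance: 1 vCPU total, 0.75 billable after the 0.25 included.
    println!("Medium: ~${:.2}/month extra", monthly_compute_cost(0.75));
}
```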
[Read the Scaling section to find out how to adjust vertical scaling](/docs/scaling)
## Horizontal scaling
Growth tier users can configure a single project to run on multiple instances and have Shuttle seamlessly load balance traffic across them for higher scalability and availability. Each replica instance is billed based on its instance size and vCPU usage per hour.
## Purchase additional projects
Pro, Growth and Enterprise users can purchase additional projects beyond their included projects. The minimum instance for each additional project is 0.5 vCPU and is billed according to the table below.
| Instance Size | vCPU | Memory (GB) | Included vCPU | Billable vCPU |
| ------------- | ---- | ----------- | ------------- | ------------- |
| **Small** | 0.5 | 1 | 0 | 0.5 |
| **Medium** | 1 | 2 | 0 | 1 |
| **Large**     | 2    | 4           | 0             | 2             |
| **X Large** | 4 | 8 | 0 | 4 |
| **XX Large** | 8 | 16 | 0 | 8 |
*Note: The vCPU allocations for additional projects do not include any "included vCPU" - you'll be billed for the full vCPU amount of the instance size you select.*
### How many additional projects can you add on each tier?
| Tier | Included projects | Additional projects | Total project limit |
| -------------- | ----------------- | ------------------- | ------------------- |
| **Pro** | 3 | 7 | 10 |
| **Growth** | 10 | 40 | 50 |
| **Enterprise** | Custom | Custom | Custom |
## Request Rate Guidelines
Shuttle provides flexible request rate guidelines to ensure optimal performance for all users and projects across our tiers. These guidelines help you determine which tier best suits your application's needs:
* **Pro**: Up to 50 requests per second
* **Growth**: Up to 1,000 requests per second
These limits are not strictly enforced as hard caps but serve as performance guidance. Applications may experience performance degradation when consistently exceeding the recommended request rates for their tier.
If your application regularly exceeds the recommended request rate for your current tier, our team may reach out to suggest an upgrade to a more suitable tier. This helps ensure your application maintains optimal performance and doesn't impact the overall platform experience.
# Upgrading
Source: https://docs.shuttle.dev/pricing/upgrading
## Pro to Growth
When upgrading from Pro to Growth tier, you'll need to reach out to our team to ensure a smooth transition and proper configuration of your enhanced features. Here's what to do:
1. Visit the Shuttle Console and click the "Contact Us" button
2. Our team will work with you to understand your specific requirements and usage patterns
3. We'll help configure your instance requirements and team access controls
4. Your applications will continue running during the transition with zero downtime
The Growth tier upgrade process is handled personally by our team to ensure you get the most value from the advanced features and can properly configure them for your use case.
# Resources
Source: https://docs.shuttle.dev/resources/resources
Learn about the resources that are officially supported by Shuttle.
This section covers the resources currently supported by Shuttle. In broad terms, to provision a resource, you simply mark up your code with the appropriate annotation. This is very powerful and brings several benefits:
* simplicity: receive a database by writing a single annotation
* quick prototyping: no need for extensive setup, management consoles/tools, etc.
## Resources
### AWS Relational Database Service (RDS)
This plugin allows applications to leverage AWS RDS for their database needs, instead of a shared database. This results in improved availability and reliability, given the "managed" nature of the service and its increased resources. It is a good choice if the database backing your application needs the performance gains that come with improved isolation.
### Shared Databases
The Shared Databases plugin can give you your own Postgres database inside a cluster shared with other users. This is the option to choose if total database isolation isn't a concern. It's a great starting point for prototyping applications on Shuttle.
## Integrations
Apps on Shuttle can integrate with many external services. For some, we provide official wrapper plugins to get you started quickly. Check out the [Integrations](/integrations) section for the full list.
## Commands
To check which resources are linked to your project, use
```bash theme={null}
shuttle resource list
```
### Deleting resources
You can delete database resources with
```bash theme={null}
shuttle resource delete [TYPE]
```
To know what to put in the TYPE parameter, use the list command above.
# AWS RDS
Source: https://docs.shuttle.dev/resources/shuttle-aws-rds
Connect AWS RDS dedicated databases with your Rust applications on Shuttle
This plugin provisions AWS RDS databases on [Shuttle](https://www.shuttle.dev). The following engines are supported:
* Postgres
* MySQL
* MariaDB
This resource is available on the Pro tier and up. Provisioning it incurs additional charges, except
on the Growth tier, where the first instance is included. For details, see our
[pricing page](https://www.shuttle.dev/pricing).
## RDS vs Shared DB
* Dedicated Instance: Each AWS RDS database is its own dedicated instance.
* Stability and Security: AWS RDS offers greater stability, being a service directly managed by AWS, and greater security, since each database is a dedicated instance.
* Flexible: AWS RDS instances on the Shuttle platform can be customised to suit your needs. Instance size can be increased from the default and AWS RDS features can be enabled or disabled.
On *AWS RDS* we offer:
* Postgres
* MySQL
* MariaDB
With the *Shared DB* we offer:
* Postgres
## Default RDS Instance
The RDS instance created by Shuttle has the following specifications and features by default:
* 2 vCPU
* 1 GB Memory
* 20 GiB Storage
* Backups Disabled
* Single Availability Zone
* 1 Node
* No Proxy
The pricing of this instance can be found on our [pricing page](https://www.shuttle.dev/pricing).
If you require a different configuration, please contact us. We can provision any size or configuration of RDS to suit your needs - a full list of RDS features can be found [here](https://aws.amazon.com/rds/features/).
## Usage
Please [contact us](/support/support) to enable RDS resources for your account.
Then, add the `shuttle-aws-rds` dependency.
Each type of database is behind its own feature flag and macro attribute path.
| Engine | Feature flag | Attribute path |
| -------- | ------------ | --------------------------- |
| Postgres | `postgres` | `shuttle_aws_rds::Postgres` |
| MySQL | `mysql` | `shuttle_aws_rds::MySql` |
| MariaDB | `mariadb` | `shuttle_aws_rds::MariaDB` |
### Output type
This resource supports the same output types as [Shared DB Postgres](./shuttle-shared-db#output-type), along with `sqlx::MySqlPool` for MySQL and MariaDB.
Lastly, add a macro annotation to the Shuttle main function. Here are examples for Postgres:
```rust theme={null}
// Use the connection string
#[shuttle_runtime::main]
async fn main(#[shuttle_aws_rds::Postgres] conn_str: String) -> ... { ... }
// With sqlx feature flag, get a PgPool connected automatically
#[shuttle_runtime::main]
async fn main(#[shuttle_aws_rds::Postgres] pool: sqlx::PgPool) -> ... { ... }
```
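For MySQL and MariaDB the shape is the same; enable the matching feature flag and swap the attribute path and pool type (a sketch in the same abbreviated style as the examples above):

```rust theme={null}
// With the mysql feature flag and sqlx, get a MySqlPool connected automatically
#[shuttle_runtime::main]
async fn main(#[shuttle_aws_rds::MySql] pool: sqlx::MySqlPool) -> ... { ... }
```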
### Parameters
All of the AWS RDS macros take the same optional parameter:
| Parameter | Type | Description |
| -------------- | ----- | -------------------------------------------------------------------------------------------- |
| local\_uri | \&str | If specified, on local runs, use this database instead of starting a Docker container for it |
| database\_name | \&str | Name to give the default database. Defaults to project name if none is given |
When passing in strings, you can also insert secrets from `Secrets.toml` using string interpolation.
To insert the `PASSWORD` secret, pass it in like this:
```rust theme={null}
#[shuttle_runtime::main]
async fn main(
#[shuttle_aws_rds::Postgres(
local_uri = "postgres://postgres:{secrets.PASSWORD}@localhost:16695/postgres"
)] conn_str: String,
) -> ... { ... }
```
Caveat: If you are interpolating a secret from `Secrets.dev.toml`, you need to set the same secret in `Secrets.toml` to an empty string so that this step does not crash in deployment.
The URI should be formatted according to the
[Postgres](https://www.postgresql.org/docs/current/libpq-connect.html#LIBPQ-CONNSTRING) or
[MySql and MariaDB](https://dev.mysql.com/doc/refman/8.0/en/connecting-using-uri-or-key-value-pairs.html#connecting-using-uri)
documentation, depending on which one you're using.
If you do not specify a `local_uri`, then cargo-shuttle will attempt to spin up a Docker container and launch the database inside of it.
For this to succeed, you must have Docker installed and you must also have started the Docker engine. If you have not used Docker
before, the easiest way is to [install the desktop app](https://docs.docker.com/get-docker/) and then launch it in order to start
the Docker engine.
## Example
This snippet shows the main function of an Axum app that uses the `#[shuttle_aws_rds::Postgres]` attribute macro to provision an RDS Postgres database,
which can be accessed with an [sqlx Pool](https://docs.rs/sqlx/latest/sqlx/pool/index.html).
```rust main.rs theme={null}
#[shuttle_runtime::main]
async fn main(
#[shuttle_aws_rds::Postgres] pool: PgPool,
) -> ShuttleAxum {
pool.execute(include_str!("../schema.sql"))
.await
.map_err(CustomError::new)?;
let state = MyState { pool };
let router = Router::new()
.route("/todo", post(add))
.route("/todo/:id", get(retrieve))
.with_state(state);
Ok(router.into())
}
```
# Secrets
Source: https://docs.shuttle.dev/resources/shuttle-secrets
Including secrets in your deployment
This plugin manages secrets on [Shuttle](https://www.shuttle.dev).
## Usage
Add a `Secrets.toml` to the crate root or workspace root of your Shuttle service with the secrets you'd like to store.
Make sure to add `Secrets*.toml` to a `.gitignore` to omit your secrets from version control.
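For example, a single `.gitignore` entry covers both the main and the local secrets files:

```gitignore theme={null}
Secrets*.toml
```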
The format of the Secrets.toml file is a key-value mapping with string values.
```toml theme={null}
MY_API_KEY = 'the contents of my API key'
MY_OTHER_SECRET = 'some other secret'
```
Next, pass `#[shuttle_runtime::Secrets] secrets: shuttle_runtime::SecretStore` as an argument to your `shuttle_runtime::main` function.
`SecretStore::get` can now be called to retrieve your API keys and other secrets at runtime.
## Local secrets
When developing locally with `shuttle run`, you can use a different set of secrets by adding a `Secrets.dev.toml` file.
If you don't have a `Secrets.dev.toml` file, `Secrets.toml` will be used locally as well as for deployments.
If you want to have both secret files with some of the same secrets for both local runs and deployments, you have to duplicate the secret across both files.
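For instance, to use different values of the same key locally and in deployment (the key name here is hypothetical), duplicate the key across both files:

```toml theme={null}
# Secrets.toml (used for deployments)
MY_API_KEY = 'production key'
```

```toml theme={null}
# Secrets.dev.toml (used locally by `shuttle run`)
MY_API_KEY = 'development key'
```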
## Different secrets file
You can also use other secrets files (in TOML format) by using the `--secrets [file]` argument on the `run` and `deploy` commands.
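For example (the file name below is hypothetical):

```bash theme={null}
# Run locally with a custom secrets file
shuttle run --secrets Secrets.staging.toml

# Deploy with the same custom secrets file
shuttle deploy --secrets Secrets.staging.toml
```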
## Example
This snippet shows a Shuttle rocket main function that uses the `shuttle_runtime::Secrets` attribute to gain access to a `SecretStore`.
```rust main.rs theme={null}
use shuttle_runtime::SecretStore;
#[shuttle_runtime::main]
async fn rocket(
#[shuttle_runtime::Secrets] secrets: SecretStore,
) -> shuttle_rocket::ShuttleRocket {
// get secret defined in `Secrets.toml` file.
let secret = secrets.get("MY_API_KEY").context("secret was not found")?;
let state = MyState { secret };
let rocket = rocket::build().mount("/", routes![secret]).manage(state);
Ok(rocket.into())
}
```
The full example can be found on [GitHub](https://github.com/shuttle-hq/shuttle-examples/tree/main/rocket/secrets)
## Deleting secrets
This command will delete *all* secrets from your project.
Re-deploy with an updated `Secrets.toml` to add them back.
```bash theme={null}
shuttle resource delete secrets
```
## Caveats
* Some libraries read from their own config files or environment variables,
with no way of providing them in code. Sometimes, this can be solved by
manually setting the variable after loading the secret (and before loading the library):
`std::env::set_var("SOME_ENV_VAR", my_secret);`
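A minimal std-only sketch of that workaround (the variable name and secret value are hypothetical; in a Shuttle app the value would come from the `SecretStore`):

```rust theme={null}
use std::env;

// Expose a loaded secret as an environment variable so that a library
// reading its config from the environment can find it. This must run
// before the library is initialized.
fn export_secret_to_env(var: &str, secret: &str) {
    env::set_var(var, secret);
}

fn main() {
    export_secret_to_env("SOME_ENV_VAR", "my secret value");
    assert_eq!(env::var("SOME_ENV_VAR").as_deref(), Ok("my secret value"));
}
```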
# Shuttle Shared Databases
Source: https://docs.shuttle.dev/resources/shuttle-shared-db
Learn about the Shuttle Shared Database resource.
This plugin manages databases on [Shuttle](https://www.shuttle.dev) and connects them to your app.
A shared database is in the same cluster as other users' databases, but it is not accessible by other users.
If you want a high performing and isolated database, we also offer dedicated [Shuttle AWS RDS](./shuttle-aws-rds).
You can connect to any type of remotely hosted database from your code, so do not let our current database offerings limit your creativity! Got other databases you want to see on Shuttle? Let us know!
The Shared Database is intended primarily for experimentation and development. Since it runs on AWS
RDS instances shared with other Shuttle users, performance may be impacted during times of high demand
from other databases. For business-critical workloads, we recommend using a dedicated
[Shuttle AWS RDS database](./shuttle-aws-rds).
The Shuttle Shared Postgres cluster is RDS-based and uses Postgres version 16. Contact support if you want to enable common [Postgres Extensions](https://docs.aws.amazon.com/AmazonRDS/latest/PostgreSQLReleaseNotes/postgresql-extensions.html#postgresql-extensions-16x) for your project.
## Usage
Start by adding the `shuttle-shared-db` dependency.
Each type of shareable database is behind its own feature flag and macro attribute path.
| Engine | Feature flag | Attribute path |
| -------- | ------------ | ----------------------------- |
| Postgres | `postgres` | `shuttle_shared_db::Postgres` |
### Output type
By default, you can get the connection string to the database and connect to it with your preferred library.
You can also specify other return types to get rid of common boilerplate.
Depending on which type declaration is used as the output type in the macro, additional feature flags need to be activated:
**Postgres output types:**
| Feature flag | Type declaration | Description |
| ----------------------------------------- | -------------------------------------------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------- |
| | `String` | The connection string including username and password ([example](https://github.com/shuttle-hq/shuttle-examples/tree/main/shuttle-cron)) |
| `sqlx` (with rustls) or `sqlx-native-tls` | `sqlx::PgPool` | An sqlx connection pool ([example](https://github.com/shuttle-hq/shuttle-examples/tree/main/axum/postgres)) |
| `diesel-async` | `diesel_async::AsyncPgConnection` | An async diesel connection |
| `diesel-async-bb8` | `diesel_bb8::Pool` | A bb8 connection pool |
| `diesel-async-deadpool` | `diesel_deadpool::Pool` | A deadpool connection pool |
| `opendal-postgres` | `opendal::Operator` | An OpenDAL Operator key-value storage interface |
| `opendal-postgres` | `shuttle_shared_db::SerdeJsonOperator` | A wrapper over Operator with interface for serde types ([example](https://github.com/shuttle-hq/shuttle-examples/tree/main/rocket/url-shortener)) |
Lastly, add a macro annotation to the Shuttle main function. Here are examples for Postgres:
```rust theme={null}
// Use the connection string
#[shuttle_runtime::main]
async fn main(#[shuttle_shared_db::Postgres] conn_str: String) -> ... { ... }
// With sqlx feature flag, get a PgPool connected automatically
#[shuttle_runtime::main]
async fn main(#[shuttle_shared_db::Postgres] pool: sqlx::PgPool) -> ... { ... }
```
### Parameters
| Parameter | Type | Description |
| ---------- | ----- | -------------------------------------------------------------------------------------------- |
| local\_uri | \&str | If specified, on local runs, use this database instead of starting a Docker container for it |
When passing in strings, you can also insert secrets from `Secrets.toml` using string interpolation.
To insert the `PASSWORD` secret, pass it in like this:
```rust theme={null}
#[shuttle_runtime::main]
async fn main(
#[shuttle_shared_db::Postgres(
local_uri = "postgres://postgres:{secrets.PASSWORD}@localhost:16695/postgres"
)] conn_str: String,
) -> ... { ... }
```
Caveat: If you are interpolating a secret from `Secrets.dev.toml`, you need to set the same secret in `Secrets.toml` to an empty string so that this step does not crash in deployment.
The URI should be formatted according to the
[Postgres](https://www.postgresql.org/docs/current/libpq-connect.html#LIBPQ-CONNSTRING)
documentation.
If you do not specify a `local_uri`, then cargo-shuttle will attempt to spin up a Docker container and launch the database inside of it.
For this to succeed, you must have Docker installed and you must also have started the Docker engine. If you have not used Docker
before, the easiest way is to [install the desktop app](https://docs.docker.com/get-docker/) and then launch it in order to start
the Docker engine.
## Example
The Shuttle main function below uses the `#[shuttle_shared_db::Postgres]` attribute macro to provision a shared Postgres database,
which can be accessed with an [sqlx Pool](https://docs.rs/sqlx/latest/sqlx/pool/index.html).
```rust main.rs theme={null}
#[shuttle_runtime::main]
async fn main(
#[shuttle_shared_db::Postgres] pool: PgPool,
) -> shuttle_axum::ShuttleAxum {
sqlx::migrate!()
.run(&pool)
.await
.expect("Failed to run migrations");
let state = MyState { pool };
let router = Router::new()
.route("/todos", post(add))
.route("/todos/:id", get(retrieve))
.with_state(state);
Ok(router.into())
}
```
The full example can be found on [GitHub](https://github.com/shuttle-hq/shuttle-examples/tree/main/axum/postgres).
# Frequently Asked Questions
Source: https://docs.shuttle.dev/support/faq
Learn about the most frequently asked questions from Shuttle users.
This section collects common questions from users and provides documented answers.
We deploy every project in its own AWS Fargate VM.
This gives you safe isolation from other users,
and also across all the projects that are owned by your account.
Your code is built and hosted on our servers. The first time you introduce additional resources in your code, like the first time you use a database, we will add that resource to your project and wire it automatically to your deployment.
Read about what happens to deployment files [here](/docs/files).
See [Deployment environment](/docs/deployment-environment).
Of course! When you deploy a project on Shuttle, your app becomes available at a public URL.
With this, you can build a server that serves pretty much any frontend project.
For details on how to serve static files with Shuttle, check out our [Deployment files docs](/docs/files).
You can also use Shuttle just for an API and host your frontend on any of the common frontend hosting solutions (Vercel, Netlify, etc.). Simply point your API calls at the URL of your project, and you're up and running.
For a tutorial on how to build and deploy a simple full-stack app using Next.js & Rust, check out this tutorial: [Deploying a NextJS frontend with Rust, in one go](https://joshmo.hashnode.dev/deploying-a-nextjs-front-end-with-a-rust-api-in-one-go)
Absolutely! And it's quite easy. Check out this section of our docs for the steps to do so: [Custom Services](/templates/tutorials/custom-service)
Yes, you can create a SeaORM connection from the sqlx pool Shuttle passes to you when you provision a SQL database. You can take a look at the example [here](https://github.com/shuttle-hq/shuttle/issues/179#issuecomment-1203536025).
Currently, all deployments are in the AWS eu-west-2 region (London).
We do plan to support multiple regions in the future.
See [this issue](https://github.com/shuttle-hq/shuttle/issues/1584) for the latest status and discussion on this topic.
# Support
Source: https://docs.shuttle.dev/support/support
How to reach the Shuttle team for support
There are 3 ways to contact the Shuttle team and get support:
* **Intercom chat**: Click the chat button at the bottom right on any of our websites.
* **Discord server**: [discord.gg/shuttle](https://discord.gg/shuttle)
* On the Pro Tier and above, you get access to a private support channel where you can ask Shuttle engineers for help directly.
* **Email**: [support@shuttle.rs](mailto:support@shuttle.rs)
* When contacting us for project or account removal, please use the email address associated with your GitHub account.
If you've found a bug or inaccuracy on our websites, feel free to open an issue or make a PR: [CLI & Rust libraries](https://github.com/shuttle-hq/shuttle), [Website](https://github.com/shuttle-hq/www), [Docs](https://github.com/shuttle-hq/shuttle-docs)
# Troubleshooting
Source: https://docs.shuttle.dev/support/troubleshooting
Learn about how to solve common problems you might run into while using Shuttle.
This section collects common issues users run into and provides quick debugging solutions.
Make sure to follow all of the [upgrading steps](/guides/upgrade).
This is most likely to happen if you're using one of our shared database annotations. To prevent this from happening, ensure that Docker is running.
Check [Docker engines](/docs/local-run#docker-engines) for more info.
Some Postgres libraries need `libpq` installed at runtime. To install it on Shuttle, add a [`shuttle_setup_container.sh`](/docs/builds#experimental-hook-scripts) with `apt update && apt install -y libpq-dev`.
This is likely because you are using a different version of the dependency than the Shuttle crate, causing a dependency mismatch.
To fix this, there are a couple of options:
1. Switch your crate over to the version of the dependency that the Shuttle crate uses. You can find the correct one either from the error itself, or by going to crates.io and checking the dependencies of the latest Shuttle crate version.
2. If you are unable to do the above because of nested dependencies, you can typically fork the crate's repo and upgrade the dependency yourself. However, depending on breaking changes, this may be a non-trivial amount of work.
Try logging out and logging back in. If this still doesn't work, feel free to shoot us a message on [our Discord server!](https://discord.gg/shuttle)
### Other issues?
Hop on over to [our Discord server](https://discord.gg/shuttle), we are very responsive!
# SaaS Starter Template
Source: https://docs.shuttle.dev/templates/fullstack/saas-template
Learn how you can deploy a fully working SaaS template using Next.js & Rust.
We've created a SaaS template that you can use to get started quickly with a fullstack Rust + Next.js app.
The design of the template is based on a sales-oriented Customer Relationship Management (CRM) tool where
users will be able to view their customers and sales records, as well as some analytics.

## Features
* Take subscription payments with Stripe
* Email session-based login
* Mailgun (email subscription, welcome email etc)
* Pre-configured frontend routes for easy transition
* Examples of how to implement simple dashboard analytics
## Pre-requisites
* Rust
* Node.js/npm
* TypeScript
* [cargo-shuttle](https://www.shuttle.dev)
## Getting Started
* Initialize the template with:
```sh theme={null}
shuttle init --from shuttle-hq/shuttle-examples --subfolder fullstack-templates/saas
```
* cd into the folder
* Run `npm i` to install the dependencies on the frontend.
* Set your secrets in the `Secrets.toml` file at the `Cargo.toml` level of your backend folder. Unset secrets default
to "None" so the web service doesn't crash on startup, but the corresponding integrations won't work.
## Development Scripts
* **Using `dev` for Development:**
* Run `npm run dev` to start your application with live reload capabilities. This script uses `turbowatch` to
monitor changes in both the frontend and backend.
* Visit `http://localhost:8000` once the app has built.
* If you prefer using `cargo-watch` instead of `turbowatch`, the watch feature can be disabled in
the `turbowatch.ts` file.
* **Frontend-Focused Development with `next-dev`:**
* For a frontend-specific development workflow, use `npm run next-dev`.
* This script runs Next.js in a development mode optimized for faster builds and reloads, enhancing your frontend
development experience.
* **Analyzing Bundle Size with `analyze`:**
* The `analyze` script is designed to provide insights into the bundle size of your Next.js application.
* Run `npm run analyze` to generate a detailed report of the size of each component and dependency in your bundle.
* This is particularly useful for identifying large dependencies or components that could be optimized for better
performance.
## Troubleshooting
* If you change the migrations after running locally or deploying, you will need to go into the database itself and
delete the tables. You can do this easily with something
like [psql](https://www.postgresql.org/docs/current/app-psql.html) or [pgAdmin](https://www.pgadmin.org/).
* If connecting to external services like Stripe doesn't work, try checking your Secrets.toml file.
* Shuttle connects by default to port 8000 - if something is already using port 8000, you can add
the `--port` flag to the `shuttle run` command to change this.
# Overview
Source: https://docs.shuttle.dev/templates/overview
Learn more about the official Shuttle templates.
This section of the docs is a hub for exploring Shuttle's Examples, Templates, and Tutorials.
Here are some relevant links to help you find what you're looking for:
* [Shuttle Examples repo](https://github.com/shuttle-hq/shuttle-examples#readme) - contains all officially maintained examples and starter templates.
* [Shuttle Community Templates](https://github.com/shuttle-hq/shuttle-examples#community-examples) - a list of more templates made by the community.
* [Awesome Shuttle](https://github.com/shuttle-hq/awesome-shuttle#readme) - A list of awesome projects built on Shuttle.
* [Shuttlings](https://github.com/shuttle-hq/shuttlings) - A collection of code challenges that also happens to be a good tutorial for backend development on Shuttle.
# Authentication
Source: https://docs.shuttle.dev/templates/tutorials/authentication
Learn how to implement authentication using Rust.
Most websites have some kind of user system. But implementing authentication can
be a bit complex. It requires several things working together.
Making sure the system is secure is daunting. How do we know others cannot
easily log into accounts and make edits on other people's behalf? And building
stateful systems is difficult.
Today we will look at a minimal implementation in Rust. For this demo we won't
be using a specific authentication library, instead writing from scratch using
our own database and backend API.
We will be walking through implementing the system including a frontend for
interacting with it. We will be using Axum for routing and other handling logic.
The [source code](https://github.com/kaleidawave/axum-shuttle-postgres-authentication-demo)
for this tutorial can be found here. We will then deploy the
code on Shuttle, which will handle running the server and giving us access to a
Postgres server.
To prevent this post from being an hour long, some things are skipped over (such
as error handling), so the snippets might not match up one-to-one with the full
demo code. This post also assumes basic knowledge of HTML, web servers, databases and Rust.
This isn't verified to be secure, use it at your own risk!!
### Let's get started
First, we will install Shuttle for creating the project (and later for deployment). If you don't already have it you can install it with `cargo install cargo-shuttle`. We will first go to a new directory for our project and create a new Axum app with `shuttle init --template axum`.
You should see the following in `src/main.rs`:
```rust theme={null}
use axum::{routing::get, Router};

async fn hello_world() -> &'static str {
    "Hello, world!"
}

#[shuttle_runtime::main]
async fn axum() -> shuttle_axum::ShuttleAxum {
    let router = Router::new().route("/hello", get(hello_world));

    Ok(router.into())
}
```
### Templates
For generating HTML we will be using [Tera](https://keats.github.io/tera/docs/), so
we can go ahead and add this with `cargo add tera`. We will store all our
`templates` in a template directory in the project root.
We want a general layout for our site, so we create a base layout. In our base
layout, we can add specific tags that will apply to all pages such as a
[Google font](https://fonts.google.com/). With this layout all the content will
be injected in place of `{% block content %}{% endblock content %}`:
```html theme={null}
<html>
  <head>
    <title>Title</title>
    <link href="https://fonts.googleapis.com/css2?family=Karla&display=swap" rel="stylesheet" />
  </head>
  <body>
    {% block content %}{% endblock content %}
  </body>
</html>
```
And now we can create our first page that will be displayed under the `/` path
```html theme={null}
{% extends "base.html" %}
{% block content %}
Hello world
{% endblock content %}
```
Now we have our template, we need to register it under a Tera instance. Tera has
a nice
[filesystem-based registration system](https://docs.rs/tera/1.16.0/tera/struct.Tera.html#method.new),
but we will use the
[`include_str!`](https://doc.rust-lang.org/std/macro.include_str.html) macro so
that the content is in the binary. This way we don't have to deal with the
complexities of a filesystem at runtime. We register both templates so that the
`index` page knows about `base.html`.
```rust theme={null}
let mut tera = Tera::default();
tera.add_raw_templates(vec![
("base.html", include_str!("../templates/base.html")),
("index", include_str!("../templates/index.html")),
])
.unwrap();
```
We add it via an [Extension](https://docs.rs/axum/latest/axum/struct.Extension.html) (wrapped in an `Arc` so that cloning the extension does not deep clone all the templates):
```rust theme={null}
#[shuttle_runtime::main]
async fn axum() -> shuttle_axum::ShuttleAxum {
let mut tera = Tera::default();
tera.add_raw_templates(vec![
("base.html", include_str!("../templates/base.html")),
("index", include_str!("../templates/index.html")),
])
.unwrap();
let router = Router::new()
.route("/hello", get(hello_world))
.layer(Extension(Arc::new(tera)));
Ok(router.into())
}
```
### Rendering views
Now that we have created our Tera instance, we want it to be accessible to our GET
handlers. To do this in Axum, we add the extension as a parameter to our
function. In Axum, an
[Extension](https://docs.rs/axum/latest/axum/struct.Extension.html) is a unit
struct, so rather than dealing with `.0` to access the field, we destructure it
in the parameter (in case you thought that syntax looked weird).
```rust theme={null}
async fn index(
    Extension(templates): Extension<Arc<Tera>>,
) -> impl IntoResponse {
    Html(templates.render("index", &Context::new()).unwrap())
}
```
### Serving assets
We can create a `public/styles.css` file
```css theme={null}
body {
font-family: "Karla", sans-serif;
font-size: 12pt;
}
```
And easily create a new endpoint for it to be served from:
```rust theme={null}
async fn styles() -> impl IntoResponse {
Response::builder()
.status(http::StatusCode::OK)
.header("Content-Type", "text/css")
.body(include_str!("../public/styles.css").to_owned())
.unwrap()
}
```
Here we again use `include_str!` so we don't have to worry about the filesystem
at runtime.
[ServeDir](https://docs.rs/tower-http/latest/tower_http/services/struct.ServeDir.html) is
an alternative if you do have a filesystem at runtime. You can use this method for
other static assets like JavaScript files and favicons.
### Running
We will add our two new routes to the router (and remove the default "hello
world" one) to get:
```rust theme={null}
let router = Router::new()
.route("/", get(index))
.route("/styles.css", get(styles))
.layer(Extension(Arc::new(tera)));
```
With our main service we can now test it locally with `shuttle run`.
Nice!
### Adding users
We will start with a users table in SQL
([this is defined in schema.sql](https://github.com/kaleidawave/axum-shuttle-postgres-authentication-demo/blob/main/schema.sql)).
```sql theme={null}
CREATE TABLE users (
id integer PRIMARY KEY GENERATED ALWAYS AS IDENTITY,
username text NOT NULL UNIQUE,
password text NOT NULL
);
```
The `id` is generated by the database using a sequence. The `id` is a primary
key, which we will use to reference users. It is better to use a fixed value
field for identification rather than using something like the `username` field
because you may add the ability to change usernames, which can leave things
pointing to the wrong places.
### Registering our database
Before our app can use the database we have to add sqlx with some features: `cargo add sqlx -F postgres,runtime-tokio-native-tls`.
We will also add the Shuttle resource crate for shared Postgres databases:
`cargo add shuttle-shared-db -F postgres`.
Now, back in the code, we add a parameter with
`#[shuttle_shared_db::Postgres] pool: Database`. The
`#[shuttle_shared_db::Postgres]` annotation tells Shuttle to provision a
Postgres database using the
[infrastructure from code design](https://www.shuttle.dev/blog/2022/05/09/ifc).
```rust theme={null}
type Database = sqlx::PgPool;

#[shuttle_runtime::main]
async fn axum(
    #[shuttle_shared_db::Postgres] pool: Database
) -> ShuttleAxum {
    // Build tera as before

    // Run the schema.sql migration with sqlx to create our users table
    pool.execute(include_str!("../schema.sql"))
        .await
        .map_err(shuttle_runtime::CustomError::new)?;

    let router = Router::new()
        .route("/", get(index))
        .route("/styles.css", get(styles))
        .layer(Extension(Arc::new(tera)))
        .layer(Extension(pool));

    // Wrap and return router as before
}
```
### Signup
For getting users into our database, we will create a post handler. In our
handler, we will parse data using multipart.
[I wrote a simple parser for multipart that we will use here](https://github.com/kaleidawave/axum-shuttle-postgres-authentication-demo/blob/main/src/utils.rs#L45-L64).
The below example contains some error handling that we will ignore for now.
```rust theme={null}
async fn post_signup(
    Extension(database): Extension<Database>,
    multipart: Multipart,
) -> impl IntoResponse {
    let data = parse_multipart(multipart)
        .await
        .map_err(|err| error_page(&err))?;

    if let (Some(username), Some(password), Some(confirm_password)) = (
        data.get("username"),
        data.get("password"),
        data.get("confirm_password"),
    ) {
        if password != confirm_password {
            return Err(error_page(&SignupError::PasswordsDoNotMatch));
        }
        let user_id = create_user(username, password, &database).await;
        Ok(todo!())
    } else {
        Err(error_page(&SignupError::MissingDetails))
    }
}
```
### Creating users and storing passwords safely
When storing passwords in a database, for security reasons we don't want them to
be stored as plain text. To transform them away from the plain text format we
will use a
[cryptographic hash function](https://en.wikipedia.org/wiki/Cryptographic_hash_function) from
[pbkdf2](https://github.com/RustCrypto/password-hashes/tree/master/pbkdf2) (`cargo add pbkdf2`):
```rust theme={null}
async fn create_user(username: &str, password: &str, database: &Database) -> Result<i32, SignupError> {
    let salt = SaltString::generate(&mut OsRng);
    // Hash password to PHC string ($pbkdf2-sha256$...)
    let hashed_password = Pbkdf2
        .hash_password(password.as_bytes(), &salt)
        .unwrap()
        .to_string();
    // ...
}
```
With hashing, if someone gets hold of the value in the password field, they
cannot recover the actual password. The only thing the stored value allows is
checking whether a plain text password matches it. And with
[salting](https://en.wikipedia.org/wiki/Salt_\(cryptography\)), identical
passwords are encoded differently. Here all of these passwords were registered
as "password", but they have different values in the database because of salting.
```sql theme={null}
postgres=> select * from users;
id | username | password
----+----------+------------------------------------------------------------------------------------------------
1 | user1 | $pbkdf2-sha256$i=10000,l=32$uC5/1ngPBs176UkRjDbrJg$mPZhv4FfC6HAfdCVHW/djgOT9xHVAlbuHJ8Lqu7R0eU
2 | user2 | $pbkdf2-sha256$i=10000,l=32$4mHGcEhTCT7SD48EouZwhg$A/L3TuK/Osq6l41EumohoZsVCknb/wiaym57Og0Oigs
3 | user3 | $pbkdf2-sha256$i=10000,l=32$lHJfNN7oJTabvSHfukjVgA$2rlvCjQKjs94ZvANlo9se+1ChzFVu+B22im6f2J0W9w
(3 rows)
```
With the following simple database query and our hashed password, we can insert
users.
```rust theme={null}
async fn create_user(username: &str, password: &str, database: &Database) -> Result<i32, SignupError> {
    // ...
    const INSERT_QUERY: &str =
        "INSERT INTO users (username, password) VALUES ($1, $2) RETURNING id;";

    let fetch_one = sqlx::query_as(INSERT_QUERY)
        .bind(username)
        .bind(hashed_password)
        .fetch_one(database)
        .await;
    // ...
}
```
And we can handle the response and get the new user id with the following:
```rust theme={null}
async fn create_user(username: &str, password: &str, database: &Database) -> Result<i32, SignupError> {
    // ...
    match fetch_one {
        Ok((user_id,)) => Ok(user_id),
        Err(sqlx::Error::Database(database))
            if database.constraint() == Some("users_username_key") =>
        {
            Err(SignupError::UsernameExists)
        }
        Err(_err) => Err(SignupError::InternalError),
    }
}
```
Great, now that we have the signup handler written, let's create a way to invoke
it from the UI.
### Using HTML forms
To invoke the endpoint with multipart we will use an HTML form.
```html theme={null}
{% extends "base.html" %}
{% block content %}
<form action="/signup" method="post" enctype="multipart/form-data">
  <input type="text" name="username" placeholder="Username" required />
  <input type="password" name="password" placeholder="Password" required minlength="8" />
  <input type="password" name="confirm_password" placeholder="Confirm password" required minlength="8" />
  <button type="submit">Sign up</button>
</form>
{% endblock content %}
```
Notice the action and method that correspond to the route we just added. Notice
also the `enctype` being multipart, which matches what we are parsing in the
handler. The above has a few attributes to do some client-side validation, but
[in the full demo it is also handled on the server](https://github.com/kaleidawave/axum-shuttle-postgres-authentication-demo/blob/ba71a914055f312636581f5e82172b1078e7b9eb/src/authentication.rs#L124-L133).
We create a handler for this markup in the same way as done for our index with:
```rust theme={null}
async fn get_signup(
    Extension(templates): Extension<Arc<Tera>>,
) -> impl IntoResponse {
    Html(templates.render("signup", &Context::new()).unwrap())
}
```
We can add `signup` to the Tera instance and then add both the get and post
handlers to the router by adding it to the chain:
```rust theme={null}
.route("/signup", get(get_signup).post(post_signup))
```
### Sessions
Once signed up, we want to save the logged-in state. We don't want the user to
have to send their username and password for every request they make.
### Cookies and session tokens
Cookies help store the state between browser requests. When a response is sent
down with
[Set-Cookie](https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Set-Cookie),
then any subsequent requests the browser/client makes will send cookie
information. We can then pull this information off of headers on requests on the
server.
Again, these need to be safe. We don't want collisions/duplicates. We want it to
be hard to guess. For these reasons, we will represent it as a 128-bit unsigned
integer. This has 2^128 options, so a very low chance of a collision.
We want to generate a "session token". We want the tokens to be
cryptographically secure. Given a session id, we don't want users to be able to
find the next one. A simple globally incremented u128 wouldn't be secure because
if I know I have session 10 then I can send requests with session 11 for the
user who logged in after. With a cryptographically secure generator, there isn't
a distinguishing pattern between subsequently generated tokens. We will use the
[ChaCha](https://github.com/rust-random/rand/tree/master/rand_chacha) algorithm
via the `rand_chacha` crate (`cargo add rand_core rand_chacha`).
[We can see that it implements the `CryptoRng` marker trait, confirming it is valid for cryptographic scenarios](https://docs.rs/rand_chacha/0.3.1/rand_chacha/struct.ChaCha8Rng.html#impl-CryptoRng).
This is unlike
[pseudo-random number generators, where you can predict the next random number given a starting point and the algorithm](https://www.youtube.com/watch?v=-h_rj2-HP2E\&ab_channel=PwnFunction).
With a predictable generator, someone who obtained their own token could derive
the session token of the person who logged in after them and impersonate that user.
To initialize the random generator we use
[SeedableRng::from\_seed](https://docs.rs/rand_core/latest/rand_core/trait.SeedableRng.html#tymethod.from_seed).
The seed in this case is an initial state for the generator. Here we use
[OsRng.next\_u64()](https://docs.rs/rand_core/latest/rand_core/struct.OsRng.html),
which retrieves randomness from the operating system rather than using a fixed
seed. We will be doing something similar to the creation of the Tera instance:
we must wrap the generator in an `Arc` and a `Mutex`, because generating new
identifiers requires mutable access. We now have the following main function:
```rust theme={null}
#[shuttle_runtime::main]
async fn axum(
    #[shuttle_shared_db::Postgres] pool: Database
) -> ShuttleAxum {
    // Build tera and migrate database as before

    let random = ChaCha8Rng::seed_from_u64(OsRng.next_u64());

    let router = Router::new()
        .route("/", get(index))
        .route("/styles.css", get(styles))
        .route("/signup", get(get_signup).post(post_signup))
        .layer(Extension(Arc::new(tera)))
        .layer(Extension(pool))
        .layer(Extension(Arc::new(Mutex::new(random))));

    // Wrap and return router as before
}
```
### Adding sessions to signup
As well as creating a user on signup, we will create the session token for the
newly signed-up user. First we have to create the sessions table; we can add the
following to our `schema.sql`:
```sql theme={null}
CREATE TABLE IF NOT EXISTS sessions (
session_token BYTEA PRIMARY KEY,
user_id integer REFERENCES users (id) ON DELETE CASCADE
);
```
Then we create a function to create and insert the session:
```rust theme={null}
type Random = Arc<Mutex<ChaCha8Rng>>;

pub(crate) async fn new_session(
    database: &Database,
    random: Random,
    user_id: i32,
) -> u128 {
    const QUERY: &str = "INSERT INTO sessions (session_token, user_id) VALUES ($1, $2);";

    let mut u128_pool = [0u8; 16];
    random.lock().unwrap().fill_bytes(&mut u128_pool);

    // endianness doesn't matter here, it just needs to be consistent
    let session_token = u128::from_le_bytes(u128_pool);

    let _result = sqlx::query(QUERY)
        .bind(session_token.to_le_bytes().to_vec())
        .bind(user_id)
        .execute(database)
        .await
        .unwrap();

    session_token
}
```
In the full demo, we use the
[new type pattern](https://www.shuttle.dev/blog/2022/07/28/patterns-with-rust-types#the-new-type-pattern) over
a u128 to make this easier, but we will stick with a plain u128 here.
Now that we have our token, we need to package it into a cookie value. We will do
this in the simplest way possible, using `.to_string()`. We will send a response
that does two things: set the new cookie value and redirect back to the index
page. We will create a utility function for doing this:
```rust theme={null}
fn set_cookie(session_token: &str) -> impl IntoResponse {
    http::Response::builder()
        .status(http::StatusCode::SEE_OTHER)
        .header("Location", "/")
        .header(
            "Set-Cookie",
            format!("session_token={}; Max-Age=999999", session_token),
        )
        .body(http_body::Empty::new())
        .unwrap()
}
```
Now we can complete our signup handler by adding random as a parameter and
returning our set cookie response.
```rust theme={null}
async fn post_signup(
    Extension(database): Extension<Database>,
    Extension(random): Extension<Random>,
    multipart: Multipart,
) -> impl IntoResponse {
    let data = parse_multipart(multipart)
        .await
        .map_err(|err| error_page(&err))?;

    if let (Some(username), Some(password), Some(confirm_password)) = (
        data.get("username"),
        data.get("password"),
        data.get("confirm_password"),
    ) {
        if password != confirm_password {
            return Err(error_page(&SignupError::PasswordsDoNotMatch));
        }
        let user_id = create_user(username, password, &database)
            .await
            .map_err(|err| error_page(&err))?;
        let session_token = new_session(&database, random, user_id).await;
        Ok(set_cookie(&session_token.to_string()))
    } else {
        Err(error_page(&SignupError::MissingDetails))
    }
}
```
### Using the session token
Great, so now we have a token/identifier for a *session*, which we can use as a
key to get information about users.
We can pull the cookie value using the following spaghetti of iterators and
options:
```rust theme={null}
let session_token = req
    .headers()
    .get_all("Cookie")
    .iter()
    .filter_map(|cookie| {
        cookie
            .to_str()
            .ok()
            .and_then(|cookie| cookie.parse::<Cookie>().ok())
    })
    .find_map(|cookie| {
        (cookie.name() == USER_COOKIE_NAME).then(move || cookie.value().to_owned())
    })
    .and_then(|cookie_value| cookie_value.parse::<u128>().ok());
```
### Auth middleware
In the last post, we went into detail about middleware.
[You can read more about it in more detail there](https://www.shuttle.dev/blog/2022/08/04/middleware-in-rust).
In our middleware, we will get a little fancy and make the user pulling lazy.
This is so that requests that don't need user data don't have to make a database
trip. Rather than adding our user straight onto the request, we split things
apart. We first create an `AuthState` which contains the session token, the
database, and a placeholder for our user (`Option<User>`):
```rust theme={null}
#[derive(Clone)]
pub(crate) struct AuthState(Option<(u128, Option<User>, Database)>);

pub(crate) async fn auth<B>(
    mut req: http::Request<B>,
    next: axum::middleware::Next<B>,
    database: Database,
) -> axum::response::Response {
    let session_token = /* cookie logic from above */;

    req.extensions_mut()
        .insert(AuthState(session_token.map(|v| (v, None, database))));

    next.run(req).await
}
```
Then we create a method on `AuthState` which makes the database request. Now
that we have the user's token, we can get their information using an SQL join:
```rust theme={null}
impl AuthState {
    pub async fn get_user(&mut self) -> Option<&User> {
        let (session_token, store, database) = self.0.as_mut()?;

        if store.is_none() {
            const QUERY: &str =
                "SELECT id, username FROM users JOIN sessions ON user_id = id WHERE session_token = $1;";

            let user: Option<(i32, String)> = sqlx::query_as(QUERY)
                .bind(session_token.to_le_bytes().to_vec())
                .fetch_optional(&*database)
                .await
                .unwrap();

            if let Some((_id, username)) = user {
                *store = Some(User { username });
            }
        }

        store.as_ref()
    }
}
```
Here we cache the user internally using an `Option`. With the caching in place,
if one middleware gets the user and then a different handler also gets the
user, it results in one database request, not two!
We can add the middleware to our chain using:
```rust theme={null}
#[shuttle_runtime::main]
async fn axum(
    #[shuttle_shared_db::Postgres] pool: Database
) -> ShuttleAxum {
    // tera, random creation and db migration as before

    let middleware_database = pool.clone();

    let router = Router::new()
        .route("/", get(index))
        .route("/styles.css", get(styles))
        .route("/signup", get(get_signup).post(post_signup))
        .layer(axum::middleware::from_fn(move |req, next| {
            auth(req, next, middleware_database.clone())
        }))
        .layer(Extension(Arc::new(tera)))
        .layer(Extension(pool))
        .layer(Extension(Arc::new(Mutex::new(random))));

    // Wrap and return router as before
}
```
### Getting middleware and displaying our user info
Modifying our index Tera template, we can add an "if block" to show a status if
the user is logged in.
```html theme={null}
{% extends "base.html" %}
{% block content %}
Hello world
{% if username %}
Logged in: {{ username }}
{% endif %}
{% endblock content %}
```
Using our middleware in requests is easy in Axum by including a reference to it
in the parameters. We then add the username to the context for it to be rendered
on the page.
```rust theme={null}
async fn index(
    Extension(mut current_user): Extension<AuthState>,
    Extension(templates): Extension<Arc<Tera>>,
) -> impl IntoResponse {
    let mut context = Context::new();

    if let Some(user) = current_user.get_user().await {
        context.insert("username", &user.username);
    }

    Html(templates.render("index", &context).unwrap())
}
```
### Logging in and logging out
Great, we can sign up, and that now puts us in a session. We may want to log out
and drop the session. This is very simple to do by returning a response with the
cookie's `Max-Age` set to 0.
```rust theme={null}
pub(crate) async fn logout_response() -> impl axum::response::IntoResponse {
    Response::builder()
        .status(http::StatusCode::SEE_OTHER)
        .header("Location", "/")
        .header("Set-Cookie", "session_token=_; Max-Age=0")
        .body(Empty::new())
        .unwrap()
}
```
For logging in, the logic is very similar to signup: we pull multipart
information off of a POST request. Unlike signup, we don't want to create a new
user; we want to check that the row with that username has a matching password.
If the credentials match, we create a new session:
```rust theme={null}
async fn post_login(
    Extension(database): Extension<Database>,
    Extension(random): Extension<Random>,
    multipart: Multipart,
) -> impl IntoResponse {
    let data = parse_multipart(multipart)
        .await
        .map_err(|err| error_page(&err))?;

    if let (Some(username), Some(password)) = (data.get("username"), data.get("password")) {
        const LOGIN_QUERY: &str = "SELECT id, password FROM users WHERE users.username = $1;";

        let row: Option<(i32, String)> = sqlx::query_as(LOGIN_QUERY)
            .bind(username)
            .fetch_optional(&database)
            .await
            .unwrap();

        let (user_id, hashed_password) = if let Some(row) = row {
            row
        } else {
            return Err(error_page(&LoginError::UserDoesNotExist));
        };

        // Verify password against PHC string
        let parsed_hash = PasswordHash::new(&hashed_password).unwrap();
        if let Err(_err) = Pbkdf2.verify_password(password.as_bytes(), &parsed_hash) {
            return Err(error_page(&LoginError::WrongPassword));
        }

        let session_token = new_session(&database, random, user_id).await;

        Ok(set_cookie(&session_token.to_string()))
    } else {
        Err(error_page(&LoginError::NoData))
    }
}
```
Then we refer back to the
[signup section](https://docs.shuttle.dev/guide/authentication-tutorial.html#using-html-forms)
and replicate the same HTML form and handler that renders the Tera template as
seen before but for a login screen. At the end of that we can add two new routes
with three handlers completing the demo:
```rust theme={null}
#[shuttle_runtime::main]
async fn axum(
    #[shuttle_shared_db::Postgres] pool: Database
) -> ShuttleAxum {
    // tera, middleware, random creation and db migration as before

    let router = Router::new()
        // ...
        .route("/logout", post(logout_response))
        .route("/login", get(get_login).post(post_login))
        // ...

    // Wrap and return router as before
}
```
### Deployment
This is great, we now have a site with signup and login functionality. But we
have no users; our friends can't log in on our localhost. We want it live on the
interwebs. Luckily we are using Shuttle, so it is as simple as:
`shuttle deploy`
Because of our `#[shuttle_runtime::main]` annotation and out-of-the-box Axum
support, our deployment doesn't need any prior config - it is instantly live!
Now you can go ahead with these concepts and add functionality for listing and
deleting users.
[The full demo implements these if you are looking for clues](https://github.com/kaleidawave/axum-shuttle-postgres-authentication-demo).
### Thoughts from building the tutorial and other ideas on where to take it
This demo includes the minimum required for authentication. Hopefully, the
concepts and snippets are useful for building it into an existing site or for
starting a site that needs authentication. If you were to continue, it would be
as simple as adding more fields to the user object or building relations with
the `id` field on the users table. I will leave you with some of my thoughts and
opinions from building the site, as well as things you could try extending it
with.
For templating, Tera is great. I like how it lets me separate the markup into
external files rather than bundling it into `src/main.rs`. Its API is easy to
use and well documented. However, it is quite a simple system. I had a few
errors where I would rename or remove templates, and because the template picker
for rendering uses a map, it can panic at runtime if the template does not
exist. It would be
nice if the system allowed checking that templates exist at compile time. The
data sending works on serde serialization, which is a little more computational
overhead than I would like. It also does not support streaming. With
streaming, we could send a chunk of HTML that doesn't depend on database values
first, and then we can add more content when the database transaction has gone
through. If it supported streaming we could avoid the all-or-nothing pages with
white page pauses and start connections to services like Google Fonts earlier.
Let me know what your favorite templating engine is for Rust and whether it
supports those features!
For working with the database, sqlx has typed macros. I didn't use them here,
but for more complex queries you might prefer their type-checking behavior.
Maybe 16 bytes for storing session tokens is a bit overkill. You might also want
to try sharding that table if you have a lot of sessions, or using a key-value
store (such as Redis) might be simpler. We also didn't implement cleaning up the
sessions table; if you were storing sessions in Redis you could use the
[EXPIRE command](https://redis.io/commands/expire/) to automatically remove old
keys.
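If you stay on Postgres, a periodic cleanup query is usually enough. A minimal sketch, assuming the sessions table has a `created_at` timestamp column (which the demo's schema would need to add):

```sql theme={null}
-- remove sessions older than 7 days; run this on a schedule
DELETE FROM sessions
WHERE created_at < NOW() - INTERVAL '7 days';
```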
# Custom Service
Source: https://docs.shuttle.dev/templates/tutorials/custom-service
This example will explain how to create a custom Shuttle service using Poise and Axum.
In this simple example we will implement `Service` for a custom service that serves a Discord bot alongside a web server created using Axum.
### Prerequisites
To be able to create this example, we'll need to grab an API token from the [Discord developer portal](https://discord.com/developers/applications).
1. Click the New Application button, name your application and click Create.
2. Navigate to the Bot tab in the lefthand menu, and add a new bot.
3. On the bot page click the Reset Token button to reveal your token. Put this token in your `Secrets.toml` (explained below). It's very important that you don't reveal your token to anyone, as it can be abused. Create a `.gitignore` file to omit your `Secrets.toml` from version control.
Your `Secrets.toml` file needs to be in the root of your directory once the project has been initialised - the file will use a format similar to a `.env` file, like so:
```toml Secrets.toml theme={null}
DISCORD_TOKEN = 'the contents of my discord token'
```
To add the bot to a server, we need to create an invite link:
1. On your bot's application page, open the OAuth2 page via the lefthand panel.
2. Go to the URL Generator via the lefthand panel, and select the `bot` scope.
3. Copy the URL, open it in your browser and select a Discord server you wish to invite the bot to.
### Initial Setup
Start by running the following command:
```bash theme={null}
shuttle init --template none
```
This will simply initialize a new cargo crate with a dependency on `shuttle-runtime`.
We also want to add several dependencies for this - make sure your Cargo.toml looks like below:
```toml Cargo.toml theme={null}
[package]
name = "custom-service"
version = "0.1.0"
edition = "2021"
[dependencies]
anyhow = "1.0.62"
axum = "0.6.4"
hyper = "0.14.24"
poise = "0.5.2"
serde = "1.0"
shuttle-runtime = "0.57.0"
tokio = "1.26.0"
```
### Getting Started
To get started, we need to return a wrapper struct from our `shuttle_runtime::main`.
```rust main.rs theme={null}
pub struct CustomService {
    discord_bot: FrameworkBuilder<Data, Error>,
    router: Router,
}
```
Then we need to implement `shuttle_runtime::Service` for our wrapper. If you need to bind to an address,
for example if you're implementing `Service` for an HTTP server, you can use the `addr` argument from `bind`.
You can only have one HTTP service bound to the `addr`, but you can start other services that don't rely on
binding to a socket, like so:
```rust main.rs theme={null}
#[shuttle_runtime::async_trait]
impl shuttle_runtime::Service for CustomService {
    async fn bind(
        mut self,
        addr: std::net::SocketAddr,
    ) -> Result<(), shuttle_runtime::Error> {
        let serve_router = axum::Server::bind(&addr).serve(self.router.into_make_service());

        tokio::select!(
            _ = self.discord_bot.run() => {},
            _ = serve_router => {},
        );

        Ok(())
    }
}
```
### Commands/Routing
Before we can actually run the program, we need to set up the commands and routing
so that we can add them to the struct implementation. Let's do that now:
```rust commands.rs theme={null}
use poise::serenity_prelude as serenity;

// this is a blank struct initialised in main.rs and then imported here
use crate::Data;

type Error = Box<dyn std::error::Error + Send + Sync>;
type Context<'a> = poise::Context<'a, Data, Error>;

#[poise::command(slash_command, prefix_command)]
pub async fn age(
    ctx: Context<'_>,
    #[description = "Selected user"] user: Option<serenity::User>,
) -> Result<(), Error> {
    let u = user.as_ref().unwrap_or_else(|| ctx.author());
    let response = format!("{}'s account was created at {}", u.name, u.created_at());
    ctx.say(response).await?;
    Ok(())
}
```
```rust router.rs theme={null}
use axum::http::StatusCode;
use axum::response::IntoResponse;
use axum::{routing::get, Router};
pub fn build_router() -> Router {
    Router::new().route("/", get(hello_world))
}

pub async fn hello_world() -> impl IntoResponse {
    (StatusCode::OK, "Hello world!").into_response()
}
```
### Struct Implementation
To finish up, we return the wrapper struct from our `shuttle_runtime::main` function and add the implementation
that sets up each of our services for the struct, like so:
```rust main.rs theme={null}
use axum::Router;
use poise::serenity_prelude as serenity;
use poise::FrameworkBuilder;
use shuttle_runtime::SecretStore;

mod commands;
use commands::age;
mod router;
use router::build_router;

pub struct Data {}
type Error = Box<dyn std::error::Error + Send + Sync>;

pub struct CustomService {
    discord_bot: FrameworkBuilder<Data, Error>,
    router: Router,
}

#[shuttle_runtime::main]
async fn init(
    #[shuttle_runtime::Secrets] secrets: SecretStore,
) -> Result<CustomService, shuttle_runtime::Error> {
    let discord_token = secrets.get("DISCORD_TOKEN").unwrap();

    let discord_bot = poise::Framework::builder()
        .options(poise::FrameworkOptions {
            commands: vec![age()],
            ..Default::default()
        })
        .token(discord_token)
        .intents(serenity::GatewayIntents::non_privileged())
        .setup(|ctx, _ready, framework| {
            Box::pin(async move {
                poise::builtins::register_globally(
                    ctx, &framework.options().commands
                ).await?;
                Ok(Data {})
            })
        });

    let router = build_router();

    Ok(CustomService {
        discord_bot,
        router,
    })
}
```
### Finishing Up
Try it out with the `run` command:
```bash theme={null}
shuttle run
```
# Working with Databases in Rust
Source: https://docs.shuttle.dev/templates/tutorials/databases-with-rust
Learn how to work with databases using Rust.
In this guide we'll be looking at working with PostgreSQL using Rust and SQLx. By the end of this guide you'll have more of an idea of how to use SQLx and when each of its querying approaches would be better for your use case.
It will be assumed you already have a project initialised - if not, you can always initialise a new project with `shuttle init`.
## SQL
Relational databases are the classical way to store data in the backend of web applications when it comes to storing records and persisted data. Shuttle currently offers free provisioned SQL instances for your applications and provides them through an SQLx connection pool. It should be noted that although we're using Postgres for this guide, the same concepts work equally well with MySQL and MariaDB.
To get started with SQLx (and using our new database), we'll want to add the `sqlx` crate to an initialised Rust program. We will also want to add the `shuttle-shared-db` crate which allows us to use the macro that provisions the database instance to us.
```bash theme={null}
cargo add sqlx shuttle-shared-db --features sqlx/macros,shuttle-shared-db/postgres,shuttle-shared-db/sqlx
cargo install sqlx-cli
```
We can then run `sqlx migrate add <name>` in our project to add a migrations folder that will contain an empty SQL file, with the naming convention `<timestamp>_<name>.sql`. In it, we can create our tables using the power of raw SQL:
```sql theme={null}
CREATE TABLE IF NOT EXISTS users (
id SERIAL PRIMARY KEY NOT NULL,
name VARCHAR NOT NULL,
email VARCHAR NOT NULL,
password VARCHAR NOT NULL,
created_at TIMESTAMP WITH TIME ZONE DEFAULT CURRENT_TIMESTAMP
);
```
This folder will be used by default when we run our migrations, which we can do with `sqlx migrate run --database-url <database-url>`. We can also run the migrate macro programmatically after connecting to the database:
```rust theme={null}
sqlx::migrate!().run(&pool).await.expect("Migrations failed :(");
```
Be aware however that this will run every time the program starts, so if you only want your migrations to run once or you're not using `CREATE TABLE IF NOT EXISTS`, you will probably want to make sure you either comment this statement out or just delete it once the migrations have run.
To get started with querying using SQLx, we'll need to connect to our database. Normally you'd have to set up your connection pool manually - however, with Shuttle you can just set the macro up and you'll receive the connection pool immediately:
```rust theme={null}
#[shuttle_runtime::main]
async fn axum(
    #[shuttle_shared_db::Postgres] pool: PgPool
) -> shuttle_axum::ShuttleAxum {
    // ... set up your router here
}
```
Now when you need to make a query, you can use `&pool` in your queries as the database connection to execute against. Queries take a reference so that the pool isn't consumed; if you passed it by value, it would be moved into the first query and you'd lose your connection pool.
At a basic level, we can use `sqlx::query` to query something and then chain the `.bind()` method to insert our own variables into the query:
```rust theme={null}
// fetch one row; `.get()` requires the `sqlx::Row` trait to be in scope
let row = sqlx::query("SELECT * FROM users WHERE id = $1")
    .bind(id)
    .fetch_one(&pool)
    .await?;

let number: i32 = row.get("id");
```
While this is pretty cool, what we probably want to do if we know what we want to extract from the database is to use SQLx's `sqlx::FromRow` derive macro on a struct with the data type we want from the database. Let's have a look at what that might look like below:
```rust theme={null}
#[derive(sqlx::FromRow)]
struct Message {
    id: i32,
    message: String
}

let query = sqlx::query_as::<_, Message>("SELECT id, message FROM messages")
    .fetch_all(&pool)
    .await;
```
This is much more convenient for gathering information, since we already know what we want the output data type of the query to be; when we carry out the query, the rows are automatically converted into a vector of structs.
One of SQLx's main strengths, in addition to the above, is its support for compile-time checked queries. To use them, we'll want to enable the `macros` feature flag for SQLx in our Cargo.toml file - though if you've followed this guide from the start you'll already have it. If not, you can run this command:
```bash theme={null}
cargo add sqlx --features macros
```
This will allow us to be able to use the `query!()` and `query_as!()` macros, which like the above allows us to execute general queries as well as queries that can directly return structs. The main difference however is that these macros do compile-time syntactic and semantic verification of the SQL, meaning you'll get errors at compile time if the queries are invalid. Let's have a look at how we can use them below:
```rust theme={null}
// as you can see this is pretty much the same
let messages = sqlx::query!("SELECT * FROM messages")
    .fetch_all(&pool)
    .await;

struct Message {
    id: i32,
    message: String
}

// here we pass the struct type into the macro, then the query & bound variables
let messages_to_struct = sqlx::query_as!(
    Message,
    "SELECT id, message FROM messages WHERE id = $1",
    1i32
)
.fetch_one(&pool)
.await;
```
Once we've built all of our queries that we want to use, all we need to do is run the following command which will create query metadata at the current working directory:
```bash theme={null}
cargo sqlx prepare --database-url <database-url>
# use the command below instead if you're using a workspace
cargo sqlx prepare --workspace
```
Now we can build and run our program as usual, and if there's no errors then it should work!
If you're using SQLx in conjunction with a backend web development framework like Axum, bear in mind that you will want to use Serde to serialize your data to JSON - this can be enabled by installing Serde with the "derive" feature:
```bash theme={null}
cargo add serde --features derive
```
Then you need to add the Serialize derive macro to your structs as required, and when you return the JSON-serialized data it'll automatically be converted into an HTTP-compatible response.
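For example, a sketch of a handler that returns a serialized struct (the `Message` struct and `get_message` handler here are illustrative, not from earlier in this guide):

```rust theme={null}
use axum::{http::StatusCode, response::IntoResponse, Json};
use serde::Serialize;

#[derive(Serialize)]
struct Message {
    id: i32,
    message: String,
}

// wrapping the struct in Json(...) serializes it to JSON
// and sets the appropriate content-type header
async fn get_message() -> impl IntoResponse {
    let msg = Message { id: 1, message: "Hello world!".to_string() };

    (StatusCode::OK, Json(msg))
}
```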
# Discord Weather Forecast Bot
Source: https://docs.shuttle.dev/templates/tutorials/discord-weather-forecast
Learn how to write a Discord bot that can get the weather forecast.
In this tutorial, we will look at a simple way to add custom functionality to a
Discord server using a bot written in Rust. We will first register a bot with
Discord, then go about how to create a Serenity application that will later run
on Shuttle. Finally, we will make the bot do something useful, writing some Rust
code to get information from an external service.
The full code can be found in
[this repository](https://github.com/shuttle-hq/shuttle-examples/tree/main/serenity/weather-forecast).
### Registering our bot
Before we start making our bot, we need to register it for Discord. We do that
by going to [the Discord Developers applications page](https://discord.com/developers/applications) and creating a new
application.
Applications can also be used to add other functionality to Discord, but we
will only be using the bot offering. Fill in the basic details and you should
get to the following screen:
You want to copy the Application ID and have it handy, because we will use it to
add our bot to a test server.
Next, we want to create a bot. You can set its public username here:
You want to click the reset token and copy this value (we will use it in a later
step). This value represents the username and password as a single value that
Discord uses to authenticate that our server is the one controlling the bot. You
want to keep this value secret.
To add the bot to the server we will test on, we can use the following URL
(replace `` in the URL with the ID you copied beforehand):
```
https://discord.com/oauth2/authorize?client_id=&scope=bot&permissions=8
```
Here, we create it with `permissions=8` so that it can do everything on the
server. If you are adding to another server, select only the permissions it
needs.
We now have a bot on our server:
Oh, they're offline 😢
### Getting a bot online
At this moment, our bot is not running because there is no code. We will have to
write it and run it before we can start interacting with it.
### Serenity
[Serenity](https://docs.rs/serenity/latest/serenity/index.html) is a library for writing Discord bots (and communicating with the
Discord API).
If you don't have Shuttle yet, you can install it with
`cargo install cargo-shuttle`. Afterwards, run the following in an empty
directory:
```bash theme={null}
shuttle init --template serenity
```
After running it, you should see the following generated in `src/main.rs`:
```rust src/main.rs theme={null}
use anyhow::Context as _;
use serenity::async_trait;
use serenity::model::channel::Message;
use serenity::model::gateway::Ready;
use serenity::prelude::*;
use shuttle_runtime::SecretStore;
use tracing::{error, info};
struct Bot;
#[async_trait]
impl EventHandler for Bot {
async fn message(&self, ctx: Context, msg: Message) {
if msg.content == "!hello" {
if let Err(e) = msg.channel_id.say(&ctx.http, "world!").await {
error!("Error sending message: {:?}", e);
}
}
}
async fn ready(&self, _: Context, ready: Ready) {
info!("{} is connected!", ready.user.name);
}
}
#[shuttle_runtime::main]
async fn serenity(
#[shuttle_runtime::Secrets] secrets: SecretStore,
) -> shuttle_serenity::ShuttleSerenity {
// Get the discord token set in `Secrets.toml`
let token = secrets.get("DISCORD_TOKEN").context("'DISCORD_TOKEN' was not found")?;
// Set gateway intents, which decides what events the bot will be notified about
let intents = GatewayIntents::GUILD_MESSAGES | GatewayIntents::MESSAGE_CONTENT;
let client = Client::builder(&token, intents)
.event_handler(Bot)
.await
.expect("Err creating client");
Ok(client.into())
}
```
### Building an interaction for our bot
We want to call our bot when chatting in a text channel. Discord enables this
with [slash commands](https://discord.com/blog/slash-commands-are-here).
Slash commands can be server-specific (servers are called guilds in Discord's
API documentation) or application-wide (across all servers the bot is in).
For testing, we will only enable it on a single guild/server. This is because
the application-wide commands can take an hour to fully register whereas the
guild/server specific ones are instant, so we can test the new commands
immediately.
You can copy the guild ID by right-clicking on the icon of the server and click
`Copy Server ID`
([you will need developer mode enabled to do this](https://www.howtogeek.com/714348/how-to-enable-or-disable-developer-mode-on-discord/)):
Now that we have the information for setup, we can start writing our bot and its
commands.
We will first get rid of the `async fn message` hook as we won't be using it in
this example, and then configure the gateway intents to not use any, as we won't be needing them.
```rust src/main.rs theme={null}
// Set gateway intents, which decides what events the bot will be notified about.
// Here we don't need any intents so empty
let intents = GatewayIntents::empty();
```
In the `ready` hook we will call `set_commands` to register a command with Discord.
Here we register a `hello` command
with a description and no parameters (Discord refers to these as options).
```rust src/main.rs theme={null}
#[async_trait]
impl EventHandler for Bot {
async fn ready(&self, ctx: Context, ready: Ready) {
info!("{} is connected!", ready.user.name);
// We are going to move the guild ID to the Secrets.toml file later.
let guild_id = GuildId::new(*your guild id*);
// We are creating a vector with commands
// and registering them on the server with the guild ID we have set.
let commands = vec![CreateCommand::new("hello").description("Say hello")];
let commands = guild_id.set_commands(&ctx.http, commands).await.unwrap();
info!("Registered commands: {:#?}", commands);
}
}
```
> If you are working on a larger command application, poise
> (which builds on Serenity) might be better suited.
With our command registered, we will now add a hook for when these commands are
called using `interaction_create`.
```rust src/main.rs theme={null}
#[async_trait]
impl EventHandler for Bot {
async fn ready(&self, ctx: Context, ready: Ready) {
// ...
}
async fn interaction_create(&self, ctx: Context, interaction: Interaction) {
if let Interaction::Command(command) = interaction {
let response_content = match command.data.name.as_str() {
"hello" => "hello".to_owned(),
command => unreachable!("Unknown command: {}", command),
};
let data = CreateInteractionResponseMessage::new().content(response_content);
let builder = CreateInteractionResponse::Message(data);
if let Err(why) = command.create_response(&ctx.http, builder).await {
println!("Cannot respond to slash command: {why}");
}
}
}
}
```
### Trying it out
Now with the code written we can test it locally. Before we do that we have to
authenticate the bot with Discord. We do this with the value we got from "Reset
Token" on the bot screen in one of the previous steps. To register a secret with
Shuttle we create a `Secrets.toml` file with a key value pair:
```toml Secrets.toml theme={null}
DISCORD_TOKEN="your discord token"
```
Now we can run our bot and test the hello command:
```bash theme={null}
shuttle run
```
We should see that our bot now displays as online:
When typing, we should see our command come up with its description:
Our bot should respond with "hello" to our command:
Wow! Let's make our bot do something a little more useful.
### Making the bot do something
There are [public APIs](https://github.com/public-apis/public-apis) that
can be used for getting information on a variety of topics.
For this demo, we are going to build a bot that gives a forecast for a location.
I used the [AccuWeather API](https://developer.accuweather.com/) for this demo.
If you are following this tutorial 1:1 you can go and register an application to
get an access key. If you are using a different API this is still the sort of
process you would follow.
To get a forecast using the API requires two requests:
1. Get a location ID for a named location
2. Get the forecast at the location ID
The API requires making network requests and it returns a JSON response. We can
make the requests with `cargo add reqwest -F json` and deserialize the results
to structures using serde, with `cargo add serde`. We will then have a function
that chains the two requests together and deserializes the forecast to a
readable result.
> You can skip some of the boilerplate by using direct access on untyped values.
> But we will opt for the more robust, strongly typed approach.
Here we type some of the structures returned by the API and add
`#[derive(Deserialize)]` so they can be decoded from JSON. All the keys are in
PascalCase so we use the `#[serde(rename_all = "PascalCase")]` helper attribute
to stay aligned with Rust standards. Some are completely different from the Rust
field name so we use `#[serde(alias = ...)]` on the field to set its matching
JSON representation.
```rust src/weather.rs theme={null}
use serde::Deserialize;
use std::fmt::Display;
#[derive(Deserialize, Debug)]
#[serde(rename_all = "PascalCase")]
pub struct Location {
key: String,
localized_name: String,
country: Country,
}
impl Display for Location {
fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
write!(f, "{}, {}", self.localized_name, self.country.id)
}
}
#[derive(Deserialize, Debug)]
pub struct Country {
#[serde(alias = "ID")]
pub id: String,
}
#[derive(Deserialize, Debug)]
#[serde(rename_all = "PascalCase")]
pub struct Forecast {
pub headline: Headline,
}
#[derive(Deserialize, Debug)]
pub struct Headline {
#[serde(alias = "Text")]
pub overview: String,
}
```
> The above skips a lot of the fields returned by the API,
> only opting for the ones we will use in this demo. If you wanted to type all
> the fields you could try the new type from JSON feature in rust-analyzer
> to avoid having to write as much.
Our location request call also fails if the search we put in returns no places.
We will create an intermediate type that represents this case and implements
`std::error::Error`:
```rust src/weather.rs theme={null}
#[derive(Debug)]
pub struct CouldNotFindLocation {
place: String,
}
impl Display for CouldNotFindLocation {
fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
write!(f, "Could not find location '{}'", self.place)
}
}
impl std::error::Error for CouldNotFindLocation {}
```
Now with all the types written, we create a new `async` function that, given a
place and a client, will return the forecast along with the location:
```rust src/weather.rs theme={null}
use reqwest::Client;

pub async fn get_forecast(
    place: &str,
    api_key: &str,
    client: &Client,
) -> Result<(Location, Forecast), Box<dyn std::error::Error>> {
    // Endpoints we will use
    const LOCATION_REQUEST: &str = "http://dataservice.accuweather.com/locations/v1/cities/search";
    const DAY_REQUEST: &str = "http://dataservice.accuweather.com/forecasts/v1/daily/1day/";

    // The URL to call, combined with our API key and the place (via the q search parameter)
    let url = format!("{}?apikey={}&q={}", LOCATION_REQUEST, api_key, place);
    // Build the request we will call
    let request = client.get(url).build().unwrap();
    // Execute the request and await a JSON result that will be converted to a
    // vector of locations
    let resp = client
        .execute(request)
        .await?
        .json::<Vec<Location>>()
        .await?;

    // Get the first location. If empty, respond with the above declared
    // `CouldNotFindLocation` error type
    let first_location = resp
        .into_iter()
        .next()
        .ok_or_else(|| CouldNotFindLocation {
            place: place.to_owned(),
        })?;

    // Now that we have the location, combine its key/identifier with the URL
    let url = format!("{}{}?apikey={}", DAY_REQUEST, first_location.key, api_key);

    let request = client.get(url).build().unwrap();

    let forecast = client
        .execute(request)
        .await?
        .json::<Forecast>()
        .await?;

    // Combine the location with the forecast
    Ok((first_location, forecast))
}
```
Now that we have a function to get the weather given a `reqwest` client and a
place, we can wire it into the bot's logic.
### Setting up the reqwest client
Our `get_forecast` function requires a `reqwest` Client and the weather API key.
We will add some fields to our bot struct for holding this data and initialize
them in the `shuttle_runtime::main` function. Using the secrets feature we can
get our weather API key (we will also move the guild ID to the secrets file):
```rust src/main.rs theme={null}
use anyhow::Context as _;
struct Bot {
weather_api_key: String,
client: reqwest::Client,
discord_guild_id: GuildId,
}
#[shuttle_runtime::main]
async fn serenity(
#[shuttle_runtime::Secrets] secret_store: SecretStore,
) -> shuttle_serenity::ShuttleSerenity {
// Get the discord token set in `Secrets.toml`
let discord_token = secret_store
.get("DISCORD_TOKEN")
.context("'DISCORD_TOKEN' was not found")?;
let weather_api_key = secret_store
.get("WEATHER_API_KEY")
.context("'WEATHER_API_KEY' was not found")?;
let discord_guild_id = secret_store
.get("DISCORD_GUILD_ID")
.context("'DISCORD_GUILD_ID' was not found")?;
let client = get_client(
&discord_token,
&weather_api_key,
discord_guild_id.parse().unwrap(),
)
.await;
Ok(client.into())
}
pub async fn get_client(
discord_token: &str,
weather_api_key: &str,
discord_guild_id: u64,
) -> Client {
// Set gateway intents, which decides what events the bot will be notified about.
// Here we don't need any intents so empty
let intents = GatewayIntents::empty();
Client::builder(discord_token, intents)
.event_handler(Bot {
weather_api_key: weather_api_key.to_owned(),
client: reqwest::Client::new(),
discord_guild_id: GuildId::new(discord_guild_id),
})
.await
.expect("Err creating client")
}
```
### Registering a /weather command
We will add our new command with a place option/parameter. Back in the `ready`
hook, we can add an additional command alongside the existing `hello` command:
```rust src/main.rs theme={null}
async fn ready(&self, ctx: Context, ready: Ready) {
info!("{} is connected!", ready.user.name);
let commands = vec![
CreateCommand::new("hello").description("Say hello"),
CreateCommand::new("weather")
.description("Display the weather")
.add_option(
CreateCommandOption::new(
serenity::all::CommandOptionType::String,
"place",
"City to lookup forecast",
)
.required(true)
),
];
let commands = &self
.discord_guild_id
.set_commands(&ctx.http, commands)
.await
.unwrap();
info!("Registered commands: {:#?}", commands);
}
```
Discord allows us to set the expected type and whether it is required. Here, the
place needs to be a string and is required.
Now in the interaction handler, we can add a new branch to the match tree. We
pull out the option/argument corresponding to place and extract its value.
Because of the restrictions set when defining the option, we can assume that it
is well-formed (unless Discord sends a bad request), hence the unwraps here.
Once we have the command's arguments, we call the `get_forecast` function and
format the results into a string to return.
```rust src/main.rs theme={null}
mod weather;
// In the match statement in interaction_create()
"weather" => {
let argument = command
.data
.options
.iter()
.find(|opt| opt.name == "place")
.cloned();
let value = argument.unwrap().value;
let place = value.as_str().unwrap();
let result =
weather::get_forecast(place, &self.weather_api_key, &self.client).await;
match result {
Ok((location, forecast)) => {
format!("Forecast: {} in {}", forecast.headline.overview, location)
}
Err(err) => {
format!("Err: {}", err)
}
}
}
```
### Running
Now that we have these additional secrets, we can add them to the
`Secrets.toml` file:
```toml theme={null}
# In Secrets.toml
# Existing secrets:
DISCORD_TOKEN = "***"
# New secrets
DISCORD_GUILD_ID = "***"
WEATHER_API_KEY = "***"
```
With the secrets added, we can run the bot:
```bash theme={null}
shuttle run
```
While typing, we should see our command come up with the options/parameters:
Entering "Paris" as the place we get a result with a forecast:
And entering a location that isn't registered returns an error, thanks to the
error handling we added to the `get_forecast` function:
### Deploying on Shuttle
With all of that setup, it is really easy to get your bot hosted and running
without having to run your PC 24/7.
Just write:
```bash theme={null}
shuttle deploy
```
And you are good to go. Easy-peasy, right?
You could now take this idea even further:
* Use a different API, to create a bot that can return
[new spaceflights](https://spaceflightnewsapi.net/)
* Maybe you could use one of Shuttle's provided databases to remember certain
information about a user
* Expand on the weather forecast idea by adding more advanced options and
follow-ups to command options
* Use the
[localization information](https://discord.com/developers/docs/interactions/application-commands#localization)
to return information in other languages
# Writing a Rest HTTP Service with Axum
Source: https://docs.shuttle.dev/templates/tutorials/rest-http-service-with-axum
Learn how to write a REST HTTP service with Axum.
In this guide you'll learn the basics of how to write a competent Axum HTTP REST service - first starting off with basic routing and writing functions to act as our endpoints, adding app State and middleware functions, using cookies and CORS, then finally looking at testing our app.
### Getting Started
To get started, you'll want to initialise a Shuttle project with Axum (you can find more about getting started [here](https://docs.shuttle.dev/introduction/installation)).
```bash theme={null}
shuttle init --template axum
```
This will initialise a project with a basic router and basic configuration so that it can be deployed immediately.
## Routing
In Axum, a router is created with `Router::new()` and routes are added by chaining the `route` method. Below is an example of a router with one route at the base path that returns "Hello World!" when we go to `localhost:8000`.
```rust theme={null}
#[shuttle_runtime::main]
async fn axum() -> shuttle_axum::ShuttleAxum {
let router = Router::new()
.route("/", get(hello_world));
Ok(router.into())
}
// this is a function that returns a static string
// all functions used as endpoints must return an HTTP-compatible response
async fn hello_world() -> &'static str {
"Hello world!"
}
```
However, this is quite basic. We are also missing an HTTP status code from our function, and perhaps we'd also like to return a JSON response instead of just a raw string. One way to solve this is to use the `impl IntoResponse` type from Axum, which allows us to return a tuple of things that can be converted into an HTTP response.
```rust theme={null}
async fn hello_world() -> impl IntoResponse {
    // the json! macro is from the serde_json library
    let hello_world = json!({ "hello": "world" });

    (StatusCode::OK, Json(hello_world))
}
```
Converting to and from JSON relies on the [Serde](https://serde.rs/) crate, which allows us to convert things to and from JSON by either using the `json!()` macro or adding a derive macro to a struct - you can find more about this [here](https://serde.rs/derive.html).
We can also attach things like a request body type which we need to define as a type or a struct that can be (de)serialized to/from JSON via `serde`. Here's an example that takes a JSON request with a message field and spits it back out to the client that made the request as well as an OK status code indicating it was successful:
```rust theme={null}
use serde::{Deserialize, Serialize};
use axum::{http::StatusCode, response::IntoResponse, Json};
#[derive(Serialize, Deserialize)]
struct MyRequestType {
message: String
}
async fn hello_world(
    Json(req): Json<MyRequestType>
) -> impl IntoResponse {
    (StatusCode::OK, req.message)
}
```
So what's happening here is that when the client sends an HTTP request with the relevant JSON request body, Axum deserializes it into `MyRequestType`, which implements `serde::Deserialize` and `serde::Serialize` through derive macros (a form of powerful Rust metaprogramming, which you can find more about [here](https://doc.rust-lang.org/book/ch19-06-macros.html)).
We can also utilise dynamic routing by using the Path type, allowing us to use things like record IDs or article slugs as a variable in our function, as shown below.
```rust theme={null}
// this assumes the dynamic variable is called "id"
async fn hello_world(
    Path(id): Path<i32>
) -> impl IntoResponse {
    let string = format!("Hello world {}!", id);

    (StatusCode::OK, string)
}
```
Now if we plug this into a router that serves this function via a GET request at `/:id`, and then visit `localhost:8000/32` (for example, assuming the server runs on port 8000), it should return this:
```
Hello world 32!
```
Now that we know the basics of writing endpoint functions, we can use them to write a router that has a few endpoints and can take multiple request methods at an endpoint that uses dynamic routing.
```rust theme={null}
async fn router() -> Router {
    Router::new()
        .route("/", get(get_worlds))
        .route("/create", post(make_world))
        // dynamic routing at "/:id", with several methods on the same route
        .route("/:id", get(get_one_world).put(edit_world).delete(delete_world))
}
```
We can also nest routers in other routers, which is quite helpful when you have, for example, a health check route at the top level with your CRUD routes nested underneath. We do this by using `.nest()` to attach the deeper-level router first, and then `.route()` to add our top-level routes.
```rust theme={null}
async fn router() -> Router {
    let world_router = Router::new()
        .route("/", get(get_worlds))
        .route("/create", post(make_world))
        .route("/:id", get(get_one_world).put(edit_world).delete(delete_world));

    Router::new()
        .nest("/worlds", world_router)
        .route("/health", get(hello_world))
}
```
### Using State
State in Axum is a way to share app-wide variables across your Router - this is great, as we can store things like a database connection pool, a hashmap key-value store, or an API key for an external service inside it. Basic usage of a state struct might look like this:
```rust theme={null}
// state must be Clone, not Serialize/Deserialize
#[derive(Clone)]
struct MyAppState {
    db_connection: PgPool,
}

async fn router() -> Router {
    let db_connection = PgPoolOptions::new()
        .max_connections(5)
        // replace with your actual database URL
        .connect("postgres://localhost/postgres")
        .await
        .expect("failed to connect to Postgres");

    let state = MyAppState { db_connection };

    Router::new()
        .route("/", get(hello_world))
        .with_state(state)
}
```
As you can see, we define the struct, initialise it in our function with the values we want to use, and then add it to our router with the `with_state` method. We can also use `FromRef` to generate a subset of our original app state, so that we can avoid sharing all of our variables everywhere:
```rust theme={null}
// the application state
#[derive(Clone)]
struct AppState {
    // this holds some api specific state
    api_state: ApiState,
}

// the api specific state
#[derive(Clone)]
struct ApiState {}

// support converting an `AppState` into an `ApiState`
impl FromRef<AppState> for ApiState {
    fn from_ref(app_state: &AppState) -> ApiState {
        app_state.api_state.clone()
    }
}
```
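To make the mechanics concrete, here's a small sketch with the trait call written as a free function so it runs without Axum; the `api_key` field is made up for illustration:

```rust theme={null}
#[derive(Clone, Debug, PartialEq)]
struct ApiState {
    api_key: String,
}

#[derive(Clone)]
struct AppState {
    api_state: ApiState,
}

// the same shape as `FromRef::from_ref`: borrow the full state,
// clone out just the part the handler asked for
fn from_ref(app_state: &AppState) -> ApiState {
    app_state.api_state.clone()
}

fn main() {
    let app = AppState {
        api_state: ApiState { api_key: "secret".to_string() },
    };
    // a handler taking the substate would receive this value
    let sub = from_ref(&app);
    assert_eq!(sub, app.api_state);
}
```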
### Static Files
Let's say you want to serve some static files using Axum - or that you have an application made using a frontend JavaScript framework like React, and you want to combine it with your Axum backend to make one large application instead of having to host your frontend and backend separately. How would you do that?
Axum does not have this capability by itself; however, it does have super-strong compatibility with `tower-http`, which offers utilities for serving your own static files, whether you're running a SPA, statically generated files from a framework like Next.js, or simply raw HTML, CSS and JavaScript.
If you're using statically generated files, you can easily slip this into your router (assuming your static files are in a `dist` folder at the root of your project):
```rust theme={null}
Router::new().nest_service("/", ServeDir::new("dist"));
```
If you're using a SPA like React, Vue or something similar, you can build the assets into the relevant folder and then use the following:
```rust theme={null}
Router::new().nest_service(
"/", ServeDir::new("dist").not_found_service(ServeFile::new("dist/index.html")),
);
```
However, in rare cases this might not work - in which case you will probably want a more low-level implementation using `Tower`, the underlying crate for `tower-http`. We can build a fallback service like this:
```rust theme={null}
Router::new()
    .route("/api", get(hello_world))
    .fallback_service(get(|req| async move {
        // `opt.static_dir` here is the path to your static assets folder
        match ServeDir::new(opt.static_dir).oneshot(req).await {
            Ok(res) => res.map(boxed),
            Err(err) => Response::builder()
                .status(StatusCode::INTERNAL_SERVER_ERROR)
                .body(boxed(Body::from(format!("error: {err}"))))
                .expect("error response"),
        }
    }))
```
### Cookies
Cookies are essentially a way to store data on the client side. They have a variety of uses in web development: tracking metrics for targeted advertising, storing player score data, and more. Cookies are also used for application authentication; however, while this is a valid use case, **no user information like usernames or passwords should be stored in cookies - only session IDs that get validated against a database.**
Cookies in Axum can be easily handled through axum-extra's [cookie jar](https://docs.rs/axum-extra/latest/axum_extra/extract/cookie/struct.CookieJar.html) types. There are three: a private cookie jar (where cookies are encrypted), a signed cookie jar (where all cookies are cryptographically signed) and a regular cookie jar. You will need to enable the relevant feature flag to use cookies; for example, you can enable the private cookie jar by running the following command:
```bash theme={null}
cargo add axum-extra --features cookie-private
```
Because cookie jars are extractor types, we can pass them in like our other parameters and they *just work*. No further configuration is needed for the normal cookie jar; however, if you are using the signed or private cookie jar types, you need to generate a cryptographic key and store it in state:
```rust theme={null}
use axum::extract::FromRef;
use axum_extra::extract::cookie::Key;

#[derive(Clone)]
struct AppState {
    // this holds the key used to sign cookies
    key: Key,
}

// this impl tells `SignedCookieJar` and `PrivateCookieJar` how to access the
// key from our state
impl FromRef<AppState> for Key {
    fn from_ref(state: &AppState) -> Self {
        state.key.clone()
    }
}

#[shuttle_runtime::main]
async fn axum() -> shuttle_axum::ShuttleAxum {
    let state = AppState {
        key: Key::generate()
    };

    // ... the rest of your code
}
```
Now that we're done with that, we can pass our cookie jar into whatever function we'd like to use it in. If there are any changes to the cookie jar that we want to send back to the requesting client, we need to include it in the return type, like so:
```rust theme={null}
async fn check_cookie(
    jar: PrivateCookieJar,
) -> impl IntoResponse {
    if jar.get("hello").is_none() {
        // a similar thing can be done for deleting cookies, getting cookies etc
        return (jar.add(Cookie::new("hello", "world")), StatusCode::CREATED);
    }

    (jar, StatusCode::OK)
}
```
However, what if we want to customise our own cookies? Suppose we have a backend that uses CORS and requires secure, same-site cookies - what would we do? In this case, we can build a cookie using the `CookieBuilder` struct from the `cookie` crate, re-exported by `axum-extra` (requires the `cookie` feature flag). Let's see what building this might look like:
```rust theme={null}
// bear in mind this uses the "time" crate for the duration
let session_id = "Hello world!";
let cookie = Cookie::build("foo", session_id)
.secure(true)
.same_site(SameSite::Strict)
.http_only(true)
.path("/")
.max_age(Duration::WEEK)
.finish();
```
Then we can add this cookie to the jar in the return value, instead of using `Cookie::new`!
Deleting a cookie is pretty much the same as adding one; for the changes to propagate properly, the removal needs to be specified in the return value, otherwise the cookie will not actually be deleted:
```rust theme={null}
async fn logout(
    jar: PrivateCookieJar
) -> impl IntoResponse {
    (jar.remove(Cookie::named("foo")), StatusCode::OK)
}
```
### Middleware
Middleware is essentially a function that runs before the request hits an endpoint; it can be used for anything from adding a wait time to prevent server overload, to validating user sessions from cookies. Fortunately, middleware in Axum is quite simple to use!
There are two ways to create middleware in Axum, but we will focus on the simpler one: writing a function that takes Axum's middleware utilities as parameters, then adding it to our router and declaring it as middleware. We can also add middleware that uses state this way, which is great as it means we can share a database connection pool (for example) with the middleware:
```rust theme={null}
use axum::{
    extract::Request,
    http::{header::CONTENT_TYPE, StatusCode},
    middleware::Next,
    response::Response,
};

async fn check_hello_world(
    req: Request,
    next: Next
) -> Result<Response, StatusCode> {
    // reject any request that doesn't declare a JSON body
    let is_json = req
        .headers()
        .get(CONTENT_TYPE)
        .and_then(|value| value.to_str().ok())
        == Some("application/json");

    if !is_json {
        return Err(StatusCode::BAD_REQUEST);
    }

    Ok(next.run(req).await)
}
```
This function checks the request's `Content-Type` header, and if it isn't `application/json`, we return an error with a 400 Bad Request status code. If it is, we run the function at the endpoint. Things like cookie jars and state can also be used in middleware functions, similarly to how we would use them in a regular endpoint function - this allows us to run database queries in middleware, which is quite useful for things like session validation.
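The decision the middleware makes can be isolated as a pure function (a hypothetical helper, not part of Axum), so the check itself is easy to test without a running server:

```rust theme={null}
// returns true only when the Content-Type header is present and is JSON
fn is_json(content_type: Option<&str>) -> bool {
    content_type == Some("application/json")
}

fn main() {
    assert!(is_json(Some("application/json")));
    assert!(!is_json(Some("text/html")));
    // a missing header is rejected too
    assert!(!is_json(None));
}
```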
Now that we've written a middleware function, we can implement it in our router like so:
```rust theme={null}
Router::new()
.route("/", get(hello_world))
.layer(middleware::from_fn(check_hello_world))
```
The `layer` method is very versatile: we can use it to layer on multiple things - a CORS layer (which will be discussed later), middleware, as well as tracing. What if we want middleware that has state? Thankfully, Axum has an aptly named `from_fn_with_state` method that we can use instead of `from_fn`, like so:
```rust theme={null}
Router::new()
    .route("/", get(hello_world))
    .layer(middleware::from_fn_with_state(state.clone(), check_hello_world))
    .with_state(state)
```
Ideally, your middleware layer should come after all of the routes you want it to cover, and before the `with_state` method if your middleware function requires state to access things like a database connection pool.
You can read more about writing middleware with Axum [here](https://docs.rs/axum/latest/axum/middleware/index.html).
### CORS
CORS is a mechanism designed to allow servers to serve content to a domain the content does not originate from (for instance, one website calling another website's API). Although CORS protects legitimate sites from being interacted with by malicious websites written by bad actors, it's still considered quite tricky to set up.
As with middleware, Axum does not handle CORS by itself; instead, it uses the `CorsLayer` type from [`tower-http`](https://docs.rs/tower-http/latest/tower_http/) so that we can set up CORS quickly and efficiently without having to do everything ourselves. Let's see what that would look like:
```rust theme={null}
let cors = CorsLayer::new()
// allow `GET` and `POST` when accessing the resource
.allow_methods([Method::GET, Method::POST])
// allow requests from any origin
.allow_origin(Any);
```
As you can see, this statement declares a new `CorsLayer` that accepts GET and POST HTTP request methods from any origin. If we send a DELETE request, it will be denied as it fails the CORS policy. We can also set an origin URL by parsing a string literal in the `allow_origin` method; if you're storing the origin in an environment variable, you can parse it into a `http::HeaderValue` like so:
```rust theme={null}
let cors = CorsLayer::new()
    .allow_credentials(true)
    .allow_methods(vec![Method::GET, Method::POST, Method::PUT, Method::DELETE])
    .allow_headers(vec![ORIGIN, AUTHORIZATION, ACCEPT])
    .allow_origin(state.domain.parse::<HeaderValue>().unwrap());

// once we've created our CORS layer, we can layer it on top of our router
// in a nested router you'll want to layer it only after all the routes where it's required
Router::new().layer(cors)
```
### Deployment
With Shuttle, you can deploy your app with `shuttle deploy`, and it'll deploy to a live URL (assuming there are no errors). No Docker containerisation required!
If you're migrating to Shuttle, you need to wrap your entry point function with the Shuttle runtime macro and then add relevant code annotations, and it'll work without needing to do anything else. You can read more about this [here](https://docs.shuttle.dev/migration/migrating-to-shuttle).
# Using Shuttle with Datadog
Source: https://docs.shuttle.dev/templates/tutorials/send-your-logs-to-datadog
Learn how to send your logs to Datadog with Roberto.
> written by [Roberto Huertas](https://robertohuertas.com/)
## Some words about observability
As we all know, being able to '*see*' what's going on in our services can be critical in many ways. We can easily find bugs or identify undesired behaviors, and it's certainly an invaluable tool at our disposal.
Observability, in software, refers to the **ability to understand the state of a system and its behavior** by collecting, analyzing, and presenting data about its various components and interactions. This enables engineers to diagnose and resolve issues and make informed decisions about system health and performance.
Observability is **critical for ensuring the reliability, scalability, and performance** of modern systems, and is becoming increasingly important as software continues to play a larger role in our daily lives.
Fortunately, in the Rust ecosystem, we have [Tokio Tracing](https://docs.rs/tracing/latest/tracing/) which is a powerful framework for **instrumenting** Rust programs to collect structured, event-based diagnostic information. It provides a convenient and flexible API for collecting and viewing traces of events in your application and you can easily **add context and structure to your traces**, making it easier to identify bottlenecks and debug issues.
## Shuttle logs
A few months ago, I wrote a [post](https://robertohuertas.com/2023/01/09/shuttle-rust-backend-deployment/) about [Shuttle](https://www.shuttle.dev/), where I explained how ridiculously easy it is to deploy a Rust backend to the cloud by using their [CLI tool](https://docs.shuttle.dev/introduction/quick-start).
[Shuttle](https://www.shuttle.dev/) is still in beta, and although its observability features are not really polished yet, they offer [support](https://docs.shuttle.dev/introduction/telemetry) for [Tokio Tracing](https://docs.rs/tracing/latest/tracing/) and a way to [view logs](https://docs.shuttle.dev/introduction/telemetry#viewing-logs) by using their CLI tool.
By simply running `shuttle logs --follow`, you will be able to see something like this:

This is great for simple applications, but what if you want to send your logs to a **more powerful tool** like [Datadog](https://datadoghq.com)? Well, in this post, **I'll show you how to do it**.
## Datadog
[Datadog](https://datadoghq.com) is a **monitoring and observability platform** that provides a **single pane of glass** for your infrastructure and applications. It is a **cloud-based** service that allows you to **collect, aggregate and analyze** your data, and it is **extremely powerful**.
> As a disclaimer, I must say that I'm currently working at [Datadog](https://datadoghq.com), so I'm a bit biased, but I'm also a huge fan of the product and I think it's a great tool for developers.
Most of the time, the easiest way to send anything to the [Datadog platform](https://www.datadoghq.com/observability-platform/) is by using the [Datadog Agent](https://docs.datadoghq.com/agent/), but in this case, as **we cannot install it** in any way, we will use a **small library I created for the occasion** called [dd-tracing-layer](https://docs.rs/dd-tracing-layer/latest/dd_tracing_layer/), which happens to be using the [Datadog HTTP API](https://docs.datadoghq.com/api/latest/logs/) under the hood to send logs to the [Datadog platform](https://www.datadoghq.com/observability-platform/).
## How to use tracing with Shuttle
If we check the [Shuttle documentation](https://docs.shuttle.dev/configuration/logs), we can read this:
> Shuttle will record anything your application writes to stdout, e.g. a tracing or log crate configured to write to stdout, or simply println!. By default, Shuttle will set up a global tracing subscriber behind the scenes.
```rust theme={null}
// [...]
use tracing::info;
#[shuttle_runtime::main]
async fn axum(#[shuttle_shared_db::Postgres] pool: PgPool) -> ShuttleAxum {
info!("Running database migration");
pool.execute(include_str!("../schema.sql"))
.await
.map_err(CustomError::new)?;
// [...]
}
```
So, as you can see, it seems that the Shuttle macro is already instantiating and initializing a [tracing subscriber](https://docs.rs/tracing/latest/tracing/trait.Subscriber.html) for us.
This is pretty **convenient for most of the simple cases**, but unfortunately, it's not enough for our purposes.
Ideally, if we had access to the underlying infrastructure, we could probably install the [Datadog Agent](https://docs.datadoghq.com/agent/) and configure it to send our logs directly to [Datadog](https://datadoghq.com), or even use [AWS Lambda functions](https://docs.datadoghq.com/logs/guide/send-aws-services-logs-with-the-datadog-lambda-function/?tab=awsconsole) or [Azure Event Hub + Azure Functions](https://docs.datadoghq.com/integrations/azure/?tab=azurecliv20#log-collection) in case we were facing some specific cloud scenarios.
> You can check the [Datadog docs for log collection and integrations](https://docs.datadoghq.com/logs/log_collection/) if you want to learn more.
Those solutions are generally great because they allow us to remove the burden of sending our logs to [Datadog](https://datadoghq.com) from our application, thus becoming the **responsibility of the platform** itself.
If we could do something like that with [Shuttle](https://www.shuttle.dev/), it would be great. But, as we just mentioned, in the case of [Shuttle](https://www.shuttle.dev/), we **don't have access to the underlying infrastructure**, so we need to find a way to send our logs to [Datadog](https://datadoghq.com) from our application.
And that's what we are going to try to do in this post.
## Getting access to the subscriber
So, the basic idea is to add a new [tracing layer](https://docs.rs/tracing-subscriber/latest/tracing_subscriber/layer/) to the subscriber which will be responsible for sending our logs to [Datadog](https://datadoghq.com).
But for that, we'll need to get **access to the subscriber instance prior to its initialization**, and it turns out that [Shuttle](https://www.shuttle.dev/) provides a way to do that just by disabling the default features on `shuttle-runtime` crate.
```toml theme={null}
shuttle-runtime = { version = "*", default-features = false }
```
## Creating our project
As a walkthrough, we are going to create a new [Shuttle](https://www.shuttle.dev/) project from scratch.
The idea is to build a simple REST API using [Axum](https://docs.rs/axum/latest/axum/) and send our logs to [Datadog](https://datadoghq.com) using the [dd-tracing-layer](https://crates.io/crates/dd-tracing-layer) crate.
Although I'm going to describe all the steps you need to take to make this work, you can see the **final state of the project** in this [GitHub repository](https://github.com/robertohuertasm/shuttle-datadog-logs).
Feel free to use it as a reference.
### Initializing the project
First of all, we need to create a new [Shuttle](https://www.shuttle.dev/) project. You can do that by using the [Shuttle CLI](https://docs.shuttle.dev/getting-started/cli):
```bash theme={null}
shuttle init --template axum
```
Follow the instructions and you should have a new project ready to go. I called mine `shuttle-datadog-logs`, but use the name you want.
### Adding some dependencies
In our example, we are going to be using [Shuttle Secrets](https://docs.shuttle.dev/resources/shuttle-secrets), [Tokio Tracing](https://docs.rs/tracing/latest/tracing/) and [dd-tracing-layer](https://crates.io/crates/dd-tracing-layer).
Make sure you have the following dependencies in your `Cargo.toml` file:
```toml theme={null}
[dependencies]
axum = "0.6"
shuttle-axum = "0.27.0"
shuttle-runtime = { version = "0.27.0", default-features = false }
tokio = "1"
# tracing
tracing = "0.1"
tracing-subscriber = { version = "0.3", features = ["env-filter", "json", "time"] }
dd-tracing-layer = "0.1"
```
### Instrumenting the default project a little
Now that we have our dependencies ready, we can **start instrumenting** our project a little bit.
Note that we have added the `#[instrument]` macro to the `hello_world` function and added a `tracing::info!` and a `tracing::debug!` log to it. We have also added an info log to the `axum` function.
```rust theme={null}
// [...]
use tracing::instrument;
#[instrument]
async fn hello_world() -> &'static str {
tracing::info!("Saying hello");
tracing::debug!("Saying hello for debug level only");
"Hello, world!"
}
#[shuttle_runtime::main]
async fn axum() -> shuttle_axum::ShuttleAxum {
let router = Router::new().route("/", get(hello_world));
tracing::info!("Starting axum service");
Ok(router.into())
}
```
At this point, if you try to run the project locally with the `shuttle run` command, you won't see any of our logs yet.
That's ok, as we haven't initialized a [tracing subscriber](https://docs.rs/tracing/latest/tracing/trait.Subscriber.html) yet.
### Adding our tracing subscriber
The first thing we're going to do is to add a [tracing subscriber](https://docs.rs/tracing/latest/tracing/trait.Subscriber.html) to our application.
Then we will add several [layers](https://docs.rs/tracing-subscriber/latest/tracing_subscriber/layer/index.html) to it:
* [EnvFilter layer](https://docs.rs/tracing-subscriber/latest/tracing_subscriber/filter/struct.EnvFilter.html) to set the tracing level according to a variable's value.
* [Format layer](https://docs.rs/tracing-subscriber/latest/tracing_subscriber/fmt/format/index.html) to format the logs. We will be using JSON format.
* [Datadog Tracing layer](https://docs.rs/dd-tracing-layer/) to send our logs to [Datadog](https://datadoghq.com).
Apart from that, we're also going to add support for [Shuttle Secrets](https://docs.shuttle.dev/resources/shuttle-secrets).
Let's do it! Make sure your `axum` function looks like this:
```rust theme={null}
use axum::{routing::get, Router};
use dd_tracing_layer::{DatadogOptions, Region};
use shuttle_runtime::SecretStore;
use tracing::instrument;
use tracing_subscriber::prelude::*;
// version of our app to be sent to Datadog
const VERSION: &'static str = "version:0.1.0";
// [...]
#[shuttle_runtime::main]
async fn axum(#[shuttle_runtime::Secrets] secret_store: SecretStore) -> shuttle_axum::ShuttleAxum {
// getting the Datadog Key from the secrets
let dd_api_key = secret_store
.get("DD_API_KEY")
.expect("DD_API_KEY not found");
// getting the Datadog tags from the secrets
let tags = secret_store
.get("DD_TAGS")
.map(|tags| format!("{},{}", tags, VERSION))
.unwrap_or(VERSION.to_string());
// getting the log level from the secrets and defaulting to info
let log_level = secret_store.get("LOG_LEVEL").unwrap_or("INFO".to_string());
// datadog tracing layer
let dd_layer = dd_tracing_layer::create(
DatadogOptions::new(
// first parameter is the name of the service
"shuttle-datadog-logs",
// this is the Datadog API Key
dd_api_key,
)
// this is the default, so it can be omitted
.with_region(Region::US1)
// adding some optional tags
.with_tags(tags),
);
// filter layer
let filter_layer =
tracing_subscriber::EnvFilter::try_new(log_level).expect("failed to set log level");
// format layer
let fmt_layer = tracing_subscriber::fmt::layer()
.with_ansi(true)
.with_timer(tracing_subscriber::fmt::time::UtcTime::rfc_3339())
.json()
.flatten_event(true)
.with_target(true)
.with_span_list(true);
// starting the tracing subscriber
tracing_subscriber::registry()
.with(filter_layer)
.with(fmt_layer)
.with(dd_layer)
.init();
// starting the server
let router = Router::new().route("/", get(hello_world));
tracing::info!("Starting axum service");
Ok(router.into())
}
```
There are many things going on in this code, so take your time to go through it.
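One detail worth isolating is how the `DD_TAGS` secret is merged with the hardcoded version tag. Pulled out as a pure function (a hypothetical helper mirroring the `map`/`unwrap_or` chain above), the behaviour is:

```rust theme={null}
const VERSION: &str = "version:0.1.0";

// merges the optional DD_TAGS secret with the version tag,
// mirroring what the axum function does with the secret store value
fn build_tags(dd_tags: Option<String>) -> String {
    dd_tags
        .map(|tags| format!("{},{}", tags, VERSION))
        .unwrap_or(VERSION.to_string())
}

fn main() {
    // no DD_TAGS secret: only the version tag is sent
    assert_eq!(build_tags(None), "version:0.1.0");
    // with a DD_TAGS secret: the version tag is appended
    assert_eq!(
        build_tags(Some("env:prod,service:shuttle-datadog-logs".to_string())),
        "env:prod,service:shuttle-datadog-logs,version:0.1.0"
    );
}
```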
### Secrets
Before running our project, there's still a thing we have to deal with: **secrets**.
As you can see in the code above, we are using the [Shuttle Secrets](https://docs.shuttle.dev/resources/shuttle-secrets) crate to get the [Datadog](https://datadoghq.com) API key, the tags and the log level.
[Shuttle Secrets](https://docs.shuttle.dev/resources/shuttle-secrets) relies on having a `Secrets.toml` file in the root of our project containing all the secrets, and it also supports having a `Secrets.dev.toml` file for local development. You can learn more about this convention in the [Shuttle Secrets documentation](https://docs.shuttle.dev/resources/shuttle-secrets#local-secrets).
So, let's create two files in the root of our project:
```toml Secrets.dev.toml theme={null}
DD_API_KEY = "your-datadog-api-key"
DD_TAGS = "env:dev,service:shuttle-datadog-logs"
# setting info as the default log level, but debug for our project
LOG_LEVEL = "INFO,shuttle_datadog_logs=DEBUG"
```
```toml Secrets.toml theme={null}
DD_API_KEY = "your-datadog-api-key"
DD_TAGS = "env:prod,service:shuttle-datadog-logs"
LOG_LEVEL = "INFO"
```
> Remember to add these files to your `.gitignore` file!
### Running the project
Now, run `shuttle run` and go to `http://localhost:8000` in your browser to see our "Hello, world!" message.
Alternatively, you can also use `curl` to test the endpoint:
```bash theme={null}
curl -i http://localhost:8000
```
You should be able to see the logs in your terminal now.
But remember... this endpoint was instrumented! So, if everything went well, we should be able to see the logs in [Datadog](https://app.datadoghq.com/logs).
Let's check it out!

It works! 🎉
## Conclusion
As you can see, it's pretty easy to send your logs to [Datadog](https://datadoghq.com) from your [Shuttle](https://www.shuttle.dev/) powered backend.
Again, you can see the full code in [this GitHub repository](https://github.com/robertohuertasm/shuttle-datadog-logs).
I hope you've enjoyed it!
# Serverless Calendar App
Source: https://docs.shuttle.dev/templates/tutorials/serverless-calendar-app
Learn how to build a serverless calendar application with Matthias.
> written by [Matthias Endler](https://endler.dev/)
Every once in a while my buddies and I meet for dinner. I value these evenings,
but the worst part is scheduling these events!
* We send out a message to the group.
* We wait for a response.
* We decide on a date.
* Someone sends out a calendar invite.
* Things finally happen.
None of that is fun *except* for the dinner.
Being the reasonable person you are, you would think: "Why don't you just use a
scheduling app?".
I have tried many of them. None of them are any good. They are all...*too much*!
Just let me send out an invite and whoever wants can show up.
* I *don't* want to have to create an account for your
calendar/scheduling/whatever app.
* I *don't* want to have to add my friends.
* I *don't* want to have to add my friends' friends.
* I *don't* want to have to add my friends' friends' friends.
* You get the idea: I just want to send out an invite and get no response from
you.
### The nerdy, introvert engineer's solution
💡 What we definitely need is yet another calendar app which allows us to create
events and send out an invite with a link to that event! You probably didn't see
that coming now, did you?
Oh, and I don't want to use Google Calendar to create the event because
[I](https://www.businessinsider.com/google-users-locked-out-after-years-2020-10)
[don't](https://killedbygoogle.com/)
[trust them](https://github.com/tycrek/degoogle).
Like any reasonable person, I wanted a way to create calendar entries from my
*terminal*.
That's how I pitched the idea to my buddies last time. The answer was: "I don't
know, sounds like a solution in search of a problem." But you know what they
say: Never ask a starfish for directions.
### Show, don't tell
That night I went home and built a website that would create a calendar entry
from `GET` parameters.
It allows you to create a calendar event from the convenience of your command
line:
```bash theme={null}
> curl "https://zerocal.shuttle.app?start=2022-11-04+20:00&duration=3h&title=Birthday&description=paaarty"
BEGIN:VCALENDAR
VERSION:2.0
PRODID:ICALENDAR-RS
CALSCALE:GREGORIAN
BEGIN:VEVENT
DTSTAMP:20221002T123149Z
CLASS:CONFIDENTIAL
DESCRIPTION:paaarty
DTEND:20221002T133149Z
DTSTART:20221002T123149Z
SUMMARY:Birthday
UID:c99dd4bb-5c35-4d61-9c46-7a471de0e7f4
END:VEVENT
END:VCALENDAR
```
You can then save that to a file and open it with your calendar app.
```bash theme={null}
curl "https://zerocal.shuttle.app?start=2022-11-04+20:00&duration=3h&title=Birthday&description=paaarty" > birthday.ics
open birthday.ics
```
In a sense, it's a "serverless calendar app", haha. There is no state on the
server, it just generates a calendar event on the fly and returns it.
### How I built it
You probably noticed that the URL contains "shuttle.app". That's because I'm
using [shuttle.dev](https://github.com/shuttle-hq/shuttle) to host the website.
Shuttle is a hosting service for Rust projects and I wanted to try it out for a
long time.
To initialize the project using the awesome
[axum](https://github.com/tokio-rs/axum) web framework, I've used
```bash theme={null}
cargo install cargo-shuttle
shuttle init --template axum --name zerocal zerocal
```
and I was greeted with everything I needed to get started:
```rust theme={null}
use axum::{routing::get, Router};
async fn hello_world() -> &'static str {
"Hello, world!"
}
#[shuttle_runtime::main]
async fn axum() -> shuttle_axum::ShuttleAxum {
let router = Router::new().route("/hello", get(hello_world));
Ok(router.into())
}
```
Let's quickly commit the changes:
```bash theme={null}
git add .gitignore Cargo.toml src/
git commit -m "Hello World"
```
Then:
```bash theme={null}
shuttle deploy
```
Now let's head over to the returned project URL:
Hello World! Deploying the first version took less than 5 minutes. Nice! We're
all set for our custom calendar app.
### Writing the app
To create the calendar event, I used the
[icalendar](https://github.com/hoodie/icalendar-rs) crate (shout out to
[hoodie](https://github.com/hoodie) for creating this nice library!).
[iCalendar](https://en.wikipedia.org/wiki/ICalendar) is a standard for creating
calendar events that is supported by most calendar apps.
```bash theme={null}
cargo add icalendar
cargo add chrono # For date and time parsing
```
Let's create a demo calendar event:
```rust theme={null}
let event = Event::new()
.summary("test event")
.description("here I have something really important to do")
.starts(Utc::now())
.ends(Utc::now() + Duration::days(1))
.done();
```
Simple enough.
### How to return a file!?
Now that we have a calendar event, we need to return it to the user. But how do
we return it as a file?
There's an example of how to return a file dynamically in axum
[here](https://github.com/tokio-rs/axum/discussions/608).
```rust theme={null}
async fn calendar() -> impl IntoResponse {
let ical = Calendar::new()
.push(
// add an event
Event::new()
.summary("It works! 🎉")
.description("Meeting with the Rust community")
.starts(Utc::now() + Duration::hours(1))
.ends(Utc::now() + Duration::hours(2))
.done(),
)
.done();
CalendarResponse(ical)
}
```
Some interesting things to note here:
* Every calendar file is a collection of events, so we wrap the event in a
`Calendar` object, which represents the collection.
* Returning `impl IntoResponse` lets the handler return any type that
implements the `IntoResponse` trait.
* `CalendarResponse` is a
[newtype wrapper](https://rust-unofficial.github.io/patterns/patterns/behavioural/newtype.html)
around `Calendar` that implements `IntoResponse`.
Here is the `CalendarResponse` implementation:
```rust theme={null}
/// Newtype wrapper around Calendar for `IntoResponse` impl
#[derive(Debug)]
pub struct CalendarResponse(pub Calendar);
impl IntoResponse for CalendarResponse {
fn into_response(self) -> Response {
let mut res = Response::new(boxed(Full::from(self.0.to_string())));
res.headers_mut().insert(
header::CONTENT_TYPE,
HeaderValue::from_static("text/calendar"),
);
res
}
}
```
We just create a new `Response` object and set the `Content-Type` header to the
correct MIME type for iCalendar files: `text/calendar`. Then we return the
response.
### Add date parsing
This part is a bit hacky, so feel free to glance over it. We need to parse the
date and duration from the query string. I used
[dateparser](https://docs.rs/dateparser/latest/dateparser/), because it supports
sooo many different
[date formats](https://docs.rs/dateparser/latest/dateparser/#accepted-date-formats).
```rust theme={null}
async fn calendar(Query(params): Query<HashMap<String, String>>) -> impl IntoResponse {
let mut event = Event::new();
event.class(Class::Confidential);
if let Some(title) = params.get("title") {
event.summary(title);
} else {
event.summary(DEFAULT_EVENT_TITLE);
}
if let Some(description) = params.get("description") {
event.description(description);
} else {
event.description("Powered by zerocal.shuttle.app");
}
if let Some(start) = params.get("start") {
let start = dateparser::parse(start).unwrap();
event.starts(start);
if let Some(duration) = params.get("duration") {
let duration = humantime::parse_duration(duration).unwrap();
let duration = chrono::Duration::from_std(duration).unwrap();
event.ends(start + duration);
}
}
if let Some(end) = params.get("end") {
let end = dateparser::parse(end).unwrap();
event.ends(end);
if let Some(duration) = params.get("duration") {
if params.get("start").is_none() {
let duration = humantime::parse_duration(duration).unwrap();
let duration = chrono::Duration::from_std(duration).unwrap();
event.starts(end - duration);
}
}
}
let ical = Calendar::new().push(event.done()).done();
CalendarResponse(ical)
}
```
Would be nice to support more date formats like now and tomorrow, but I'll leave
that for another time.
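If you ever want to tackle that, one option is a small pre-parser that handles a few human-friendly keywords before falling back to `dateparser`. This is a hypothetical, std-only sketch (`parse_keyword` is not part of zerocal):

```rust theme={null}
use std::time::{Duration, SystemTime};

// Hypothetical helper: special-case a couple of keywords; anything else
// returns None and would fall through to dateparser::parse in the real handler.
fn parse_keyword(s: &str) -> Option<SystemTime> {
    let now = SystemTime::now();
    match s.trim().to_lowercase().as_str() {
        "now" => Some(now),
        "tomorrow" => Some(now + Duration::from_secs(24 * 60 * 60)),
        _ => None,
    }
}
```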
Let's test it:
```bash theme={null}
> shuttle run # This starts a local dev server
> curl "127.0.0.1:8000/?start=2022-11-04+20:00&duration=3h&title=Birthday&description=Party"
*🤖 bleep bloop, calendar file created*
```
Nice, it works!
Opening it in the browser creates a new event in the calendar:
And for all the odd people who don't use a terminal to create a calendar event,
let's also add a form to the website.
### Add a form
```html theme={null}
<!-- static/index.html (sketch: the original markup was omitted from this page;
     a minimal form just submits the same query parameters the handler parses) -->
<form action="/" method="get">
  <input type="text" name="title" placeholder="Title" />
  <input type="text" name="start" placeholder="Start (e.g. 2022-11-04 20:00)" />
  <input type="text" name="duration" placeholder="Duration (e.g. 3h)" />
  <input type="text" name="description" placeholder="Description" />
  <button type="submit">Create event</button>
</form>
```
I modified the calendar function a bit to return the form if the query string is
empty:
```rust theme={null}
async fn calendar(Query(params): Query<HashMap<String, String>>) -> impl IntoResponse {
// if query is empty, show form
if params.is_empty() {
return Response::builder()
.status(200)
.body(boxed(Full::from(include_str!("../static/index.html"))))
.unwrap();
}
// ...
}
```
After some more tweaking, we got ourselves a nice little form in all of its web
1.0 glory.
And that's it! We now have a little web app that can create calendar events.
Well, almost. We still need to deploy it.
### Deploying
```bash theme={null}
shuttle deploy
```
Right, that's all. It's that easy. Thanks to the folks over at
[shuttle.dev](https://github.com/shuttle-hq/shuttle) for making this possible.
The calendar app is now available at `https://zerocal.shuttle.app`.
Now I can finally send my friends a link to a calendar event for our next pub
crawl. They'll surely appreciate it.
### From zero to calendar in 100 lines of Rust
Boy it feels great to be writing plain HTML again. Building little apps never
gets old.
Check out the source code on [GitHub](https://github.com/mre/zerocal) and help
me make it better! π
Here are some ideas:
* Add support for more human-readable date formats (e.g. `now`, `tomorrow`).
* Add support for recurring events.
* Add support for timezones.
* Add location support (e.g. `location=Berlin` or `location=https://zoom.us/test`).
* Add Google calendar short-links
(`https://calendar.google.com/calendar/render?action=TEMPLATE&dates=20221003T224500Z%2F20221003T224500Z&details=&location=&text=`).
* Add an example bash command to create a calendar event from the command line.
* Shorten the URL (e.g. `zerocal.shuttle.app/2022-11-04T20:00/3h/Birthday/Party`)?
Check out the [issue tracker](https://github.com/mre/zerocal) and feel free to
open a PR!
# URL Shortener
Source: https://docs.shuttle.dev/templates/tutorials/url-shortener
Learn how to write a URL shortener with Terrence.
I was trying to get to sleep on a Wednesday night - I check my phone, it's 2:54
AM. A feeling of dread comes over me as I realise I'm not going to be able to
get more than 5 hours of sleep - again.
I've been a software developer for close to 10 years now. How did it get to
this?
My deployment broke production at 10pm (because I never learn) and I had to deal
with our infrastructure coupled with acute stress for the next 3 hours. As I sat
there contemplating my life choices, I had an idea. Can we do better?
I don't want to have to deal with Terraform and Kubernetes at midnight. I want
to write scalable code and just get it deployed. I want my dependencies
generated and managed for me. I want to be able to sleep at night.
It's 2023. **Surely** we can do better. As I stared blankly at my white ceiling,
I decided to see if it was possible.
I suddenly sat up in bed. Can I create a useful app, with some sort of database
state, a custom subdomain, focus only on my application code without needing to
worry about infrastructure and get it done in 10 minutes or less? I'll write a
URL shortener or something. I'll write in Rust. I'll write it tonight.
### Design
I got out of bed and turned on the lights in my office. I sat down on my
ergonomic chair and powered up a comically large curved monitor. Arch boots up. I
quickly message a friend of mine on Signal to remind them that I use Arch. Now
I'm ready to code. Let's build this thing.
The API is going to be simple. No reason for GUIs or anything like that - I'm an
engineer, therefore I've convinced myself that UIs peaked with the 1970s
teletype.
Life's short so I'm going to build an HTTP API. The simplest thing I can come up
with.
You can shorten URLs like this:
```bash theme={null}
curl -X POST -d 'https://www.google.com' https://myapp.com
https://myapp.com/uvAivJ
```
And you get redirected like this:
```bash theme={null}
curl https://myapp.com/uvAivJ
< HTTP/2 301
...
```
Yeah that'll work.
Next I'll need some sort of database to store the urls. I briefly considered
using a bijective compression scheme without needing database state, but let's
face it I'm not really sure what a bijection is and it's already 3:02 AM.
I'll just get a Postgres instance with a basic schema:
```sql theme={null}
CREATE TABLE urls (
id VARCHAR(6) PRIMARY KEY,
url VARCHAR NOT NULL
);
```
Genius.
I'll add an index or something so that the database doesn't do a linear search
on every request. No one is really going to use this - but I can already feel
the judgement of anyone who happens to glance over my source code. I need to be
able to explain to people that you can search for urls in constant time,
implying I understand complexity theory.
Ok I'm ready. It's 3:05 AM. I have 10 minutes. I pick up my black vape and take
a large hit. Smoke fills up the room and I can't see the screen any more.
Whatever. I try to crack my fingers and neck for some dramatic flair, fail, and
open a terminal.
### Building the Barebones - 09:59 minutes remaining
I'm using [Shuttle](https://www.shuttle.dev/) for this project. It's a serverless
platform built for Rust and I don't have to deal with provisioning databases, or
subdomains or any of that gunk. I already have the CLI
[installed](https://docs.shuttle.dev/introduction/installation).
I stop and think for a little bit - which web framework do I want to use? I think
I'm going to go with Rocket. It's pretty much production ready with a sweet API and
I'm reasonably proficient with it.
To initialize a Rocket project with boilerplate, I can use the Shuttle CLI `init` command:
```bash theme={null}
shuttle init --template rocket url-shortener
```
This leaves me with the following `main.rs` file:
```rust main.rs theme={null}
#[macro_use]
extern crate rocket;
#[get("/")]
fn index() -> &'static str {
"Hello, world!"
}
#[shuttle_runtime::main]
async fn rocket() -> shuttle_rocket::ShuttleRocket {
let rocket = rocket::build().mount("/", routes![index]);
Ok(rocket.into())
}
```
As you can see, this imports a few dependencies. Luckily, the `init` command takes care
of this as well, leaving us with the following `Cargo.toml`:
```toml Cargo.toml theme={null}
[package]
name = "url-shortener"
version = "0.1.0"
edition = "2021"
[dependencies]
rocket = "0.5.0"
shuttle-rocket = "0.57.0"
shuttle-runtime = "0.57.0"
tokio = "1.26.0"
```
The `init` command also created a new Shuttle project for us. This starts an
isolated container in Shuttle's infrastructure.
Now, to deploy:
```sh theme={null}
$ shuttle deploy
Packaging url-shortener v0.1.0 (/private/shuttle/examples/url-shortener)
Archiving Cargo.toml
Archiving Cargo.toml.orig
Archiving src/main.rs
Compiling tracing-attributes v0.1.20
Compiling tokio-util v0.6.9
Compiling multer v2.1.0
Compiling hyper v0.14.27
Compiling rocket_http v0.5.0
Compiling rocket_codegen v0.5.0
Compiling rocket v0.5.0
Compiling shuttle-rocket v0.57.0
Compiling shuttle-runtime v0.57.0
Compiling url-shortener v0.1.0 (/opt/shuttle/crates/url-shortener)
Finished dev [unoptimized + debuginfo] target(s) in 1m 01s
Project: url-shortener
Deployment Id: 3d08ac34-ad63-41c1-836b-99afdc90af9f
Deployment Status: DEPLOYED
Host: url-shortener.shuttle.app
Created At: 2022-04-13 03:07:34.412602556 UTC
```
Ok... this seemed a little too easy, let's see if it works.
```sh theme={null}
$ curl https://url-shortener.shuttle.app/
Hello, world!
```
Hm, not bad. I pour myself another shot...
### Adding Postgres - 07:03 minutes remaining
This is the part of my journey where I usually get a little flustered. I've set
up databases before, but it's always a pain. You need to provision a VM, make
sure storage isn't ephemeral, install and spin up the database, create an
account with the correct privileges and secure password, store the password in
some sort of secrets manager in CI, add your IP address and your VM's IP address
to the list of acceptable hosts etc etc etc. Oof that sounds like a lot of work.
Shuttle does a lot of this stuff for you - I just didn't remember how. I
quickly head over to the
[Shuttle shared database](https://docs.shuttle.dev/resources/shuttle-shared-db)
section in the docs. I added the `sqlx` dependency to `Cargo.toml` and changed
one line in `main.rs`:
```rust theme={null}
#[shuttle_runtime::main]
async fn rocket(#[shuttle_shared_db::Postgres] pool: PgPool) -> ShuttleRocket {
// [...]
}
```
By adding a parameter to the main `rocket` function, `Shuttle` will
automatically provision a Postgres database for you, create an account and hand
you back an authenticated connection pool which is usable from your application
code.
Let's deploy it and see what happens:
```bash theme={null}
shuttle deploy
...
Finished dev [unoptimized + debuginfo] target(s) in 19.50s
Project: url-shortener
Deployment Id: 538e41cf-44a9-4158-94f1-3760b42619a3
Deployment Status: DEPLOYED
Host: url-shortener.shuttle.app
Created At: 2022-04-13 03:08:30.412602556 UTC
Database URI: postgres://***:***@pg.shuttle.rs/db-url-shortener
```
I have a database! I couldn't help but chuckle a little bit. So far so good.
### Setting up the Schema - 06:30 minutes remaining
The database provisioned by `Shuttle` is completely empty - I'm going to need to
either connect to Postgres and create the schema myself, or write some sort of
code to automatically perform the migration. As I start to ponder this seemingly
existential question I decide not to overthink it. I'm just going to go with
whatever is easiest.
I connect to the database provisioned by Shuttle using
[pgAdmin](https://www.pgadmin.org/) using the provided database URI and run the
following script:
```sql theme={null}
CREATE TABLE urls (
id VARCHAR(6) PRIMARY KEY,
url VARCHAR NOT NULL
);
```
As I was ready to Google 'how to create index postgres' I realised that since
the `id` used for the url lookup is a primary key, which is implicitly a
'unique' constraint, Postgres would create the index for me. Cool.
### Writing the Endpoints - 05:17 minutes remaining
The app's going to need two endpoints - one to `shorten` URLs and one to
retrieve URLs and `redirect` the user.
I quickly created two stubs for the endpoints while I thought about the actual
implementation:
```rust theme={null}
#[get("/<id>")]
async fn redirect(id: String, pool: &State<PgPool>) -> Result<Redirect, Status> {
    unimplemented!()
}
#[post("/", data = "<url>")]
async fn shorten(url: String, pool: &State<PgPool>) -> Result<String, Status> {
    unimplemented!()
}
```
I decided to start with the shorten method. The simplest implementation I could think of is to generate a
unique id on the fly using the [nanoid](https://github.com/nikolay-govorov/nanoid) crate
and then run an `INSERT` statement. Hm - what about duplicates? I decided
not to overthink it 🤷.
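For a bit of intuition: a 6-character id over nanoid's 64-symbol URL-safe alphabet gives 64^6 ≈ 68 billion possibilities, so collisions are unlikely at this scale (and a duplicate would fail the `INSERT` on the primary key anyway). A std-only sketch of the idea behind `nanoid!(6)` (illustrative only, not the actual crate, which uses a proper random source):

```rust theme={null}
use std::collections::hash_map::RandomState;
use std::hash::{BuildHasher, Hasher};

// Illustrative only: pick `len` characters from a URL-safe, 64-symbol alphabet.
// std's RandomState hands us a randomly seeded hasher, avoiding an extra crate.
fn short_id(len: usize) -> String {
    const ALPHABET: &[u8] =
        b"ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789_-";
    (0..len)
        .map(|i| {
            let mut h = RandomState::new().build_hasher();
            h.write_usize(i);
            ALPHABET[(h.finish() as usize) % ALPHABET.len()] as char
        })
        .collect()
}
```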
```rust theme={null}
#[post("/", data = "<url>")]
async fn shorten(url: String, pool: &State<PgPool>) -> Result<String, Status> {
let id = &nanoid::nanoid!(6);
let p_url = Url::parse(&url).map_err(|_| Status::UnprocessableEntity)?;
sqlx::query("INSERT INTO urls(id, url) VALUES ($1, $2)")
.bind(id)
.bind(p_url.as_str())
.execute(&**pool)
.await
.map_err(|_| Status::InternalServerError)?;
Ok(format!("https://url-shortener.shuttle.app/{id}"))
}
```
Next I implemented the `redirect` method in a similar spirit. At this point I
started to panic as it was really getting close to the 10 minute mark. I'll do a
`SELECT *` and pull the first url that matches with the query id. If the id does
not exist, you get back a `404`:
```rust theme={null}
#[get("/<id>")]
async fn redirect(id: String, pool: &State<PgPool>) -> Result<Redirect, Status> {
let url: (String,) = sqlx::query_as("SELECT url FROM urls WHERE id = $1")
.bind(id)
.fetch_one(&**pool)
.await
.map_err(|e| match e {
Error::RowNotFound => Status::NotFound,
_ => Status::InternalServerError
})?;
Ok(Redirect::to(url.0))
}
```
Whoops there's a typo in the SQL query.
After I fixed my typo and sorted out the various unresolved dependencies by
letting my IDE do the heavy lifting for me, I deployed to Shuttle for the last
time.
### Moment of truth - 00:25 minutes remaining
Feeling like an off-brand Tom Cruise in mission impossible I stared intently at
the clock counting down as Shuttle deployed my url-shortener. 19.3 seconds and
we're live. As soon as the `DEPLOYED` dialog came up, I instantly tested it out:
```bash theme={null}
$ curl -X POST -d "https://google.com" https://url-shortener.shuttle.app
https://url-shortener.shuttle.app/XDlrTB
```
I then copy/pasted the shortened URL into my browser and, lo and behold, was
redirected to Google.
I did it.
### Retrospective - 00:00 minutes remaining
With a sigh of relief I pushed myself back from my desk. I refilled my mug,
picked it up and headed to my derelict balcony. As I slid open the windows
and the cold air flowed into my apartment, I took two steps forward to rest my
elbows and mug on the railing.
I sat there for a while reflecting on what had just happened. I *had* succeeded.
I'd successfully built a somewhat trivial app quickly without needing to worry
about provisioning databases or networking or any of that jazz.
But how would this measure up in the real world? Real software engineering is
complex, involving collaboration across different teams with different
skill-sets. The entire world of software is barely keeping it together. Is it
really feasible to replace our existing, tried and tested cloud paradigms with a
new paradigm of not having to deal with infrastructure at all? What I knew for
sure is I wasn't going to get to the bottom of this one tonight.
As I went back to my bedroom and laid once more in bed, I noticed I was
grinning. There's a chance we really can do better. Maybe we're not exactly
there yet, but my experience tonight had given me a certain optimism that we
aren't as far as I once thought. With the promise of a brighter tomorrow, I
turned on my side and fell asleep.
# Chat app with React & Rust
Source: https://docs.shuttle.dev/templates/tutorials/websocket-chat-app-js
Learn how to write a Rust chat application with React on the frontend.
Source code can be found [here](https://github.com/joshua-mo-143/react-websocket-chat-rust).
With Rust going from strength to strength within the web development space, it is clear why many developers and big names are starting to take notice. As one example of this, Meta has recently [recommended Rust](https://engineering.fb.com/2022/07/27/developer-tools/programming-languages-endorsed-for-server-side-use-at-meta/#:~:text=Meta's%20primary%20supported%20server%2Dside,new%20addition%20to%20this%20list.) as a server-side language. If that won't make people sit up and look, then it's hard to know what will - Rust's current offering easily stands on par with most other languages that you could use in a back-end API or microservice, and it will only get better with time.
Let's explore Rust in everyday usage by creating a Typescript React app and combining it with a Rust API that uses WebSockets. While node.js is quick to set up, doesn't require context switching and is easy to use if you already have Javascript knowledge from learning it for writing front-end web apps, you don't need to have a high level of knowledge in Rust to get started writing competent web services that can easily carry out whatever you need.

### Initial setup
We will be using Vite to scaffold our project, as it's a fast bundler that gets your development environment up quickly, and it's less opinionated than create-react-app. Let's get into it:
`npm create vite@latest wschat-react-rust --template react-ts`
This should now scaffold a project within a subfolder of your current working directory called `wschat-react-rust`.
For our CSS, we'll be using TailwindCSS. Tailwind is a utility-first CSS library that lets you quickly scaffold smaller projects without constantly fighting media queries, by providing utility classes with a mobile-first approach (as a side note: this isn't necessarily better or worse than regular CSS - this is just how I like to do it!). You can find out how to install it [here](https://tailwindcss.com/docs/installation). It's quick, easy, and very easy to configure.
Before we start, make sure you delete all of the HTML from the App component (make sure you return an empty div!), remove any unnecessary imports and ensure that Tailwind is in your main CSS file.
Here are the contents of my CSS file if you'd like to use my CSS styles exactly:
```css theme={null}
/* index.css */
@import url('https://fonts.googleapis.com/css2?family=Text+Me+One&display=swap');
@tailwind base;
@tailwind components;
@tailwind utilities;
@layer base {
input, button {
box-shadow: 5px 5px rgba(0,0,0,0.5);
}
input:active, button:active {
box-shadow: 3px 3px rgba(0,0,0,0.5);
}
}
body {
font-family: 'Text Me One', sans-serif;
padding: 0;
margin: 0;
background-color: rgb(214 211 209);
box-sizing: border-box;
overflow-x: hidden;
}
```
### Getting Started
Before we do anything, let's quickly scaffold our page so that we have something that we can look at (we'll be putting this in the main App component but feel free to put this on a page component):
```js theme={null}
// App.tsx
import React, { SetStateAction } from 'react'
type Message = {
name: string,
uid: number,
message: string
}
function App() {
const [message, setMessage] = React.useState("");
const [name, setName] = React.useState("");
const [vis, setVis] = React.useState(true);
return (
<>
Hi! Welcome to Rustcord. Enjoy your stay!
</>
)
}
```
If you've already used Typescript, this component should be simple to understand. For the uninitiated however: the only changes that are there at the moment in comparison to a pure JavaScript project is that we've declared a new type ("Message") which we'll be using later on, and we've also had to declare specific types for our state setters as well as `e.target.value`. This is important as TypeScript needs to know what type our events are, otherwise it'll complain and refuse to compile.
That's pretty much it for the main component! We need a modal that can get a name, and then we just need to set up WebSocket functionality. Let's create our modal:
```js theme={null}
// UserModal.tsx
import React, { SetStateAction } from 'react'
type Props = {
vis: boolean,
name: string,
setName: React.Dispatch<SetStateAction<string>>,
setVis: React.Dispatch<SetStateAction<boolean>>
}
const NamePrompt = ({vis, name, setName, setVis}: Props) => {
const submitName = (e: React.FormEvent) => {
e.preventDefault()
if (name == "") {
return
}
setVis(false)
}
return (
)
}
export default NamePrompt
```
When we initially load up our webpage, we want this modal to appear before the user enters the chatroom as we need the user to set a name, which means we should make it so that the modal is initially visible, but once the user has confirmed a name (and is in the chat), we should hide the modal. Like before, the only real difference here in comparison to Javascript is we've declared types for our props as Typescript needs to know what to parse them as - otherwise, it won't compile.
Now we can simply proceed to import the modal into our page component like so (don't forget to pass props and use React fragments if required!):
```js theme={null}
// App.tsx
```
Now that the main design of the app is done, let's think about how we can implement a WebSocket connection. To start with, we can open a WebSocket connection at a URL by simply writing the following:
```js theme={null}
// App.tsx
const websocket = new WebSocket("ws://localhost:8000/ws");
```
This opens a WebSocket connection at `localhost:8000/ws`. Not particularly useful at the moment because we currently don't have anything we can connect it to, but we'll need this for testing later on.
Now that we've opened a WebSocket connection, we can add a method for when the connection opens, when it closes, and when we receive a message - like so:
```js theme={null}
// App.tsx
// On connection open, write "Connected" to the console
websocket.onopen = () => {
console.log("Connected");
}
// On connection close, write "Disconnected" to the console
websocket.onclose = () => {
console.log("Disconnected");
}
// On receiving a message from the server, write the WebSocket message to the console
websocket.onmessage = (ev) => {
let message = JSON.parse(ev.data);
create_message(message);
}
```
Although we've told our program that we want to create a message entry when we receive a message, we don't have a `create_message` function at the moment. This function will simply consist of creating a new HTML element, appending some classes and creating the text that will go inside the container div (and appending it to the container), and then appending our message itself to the chatbox as well as scrolling down to the bottom.
```js theme={null}
// App.tsx (put this outside of the component)
// store the message classes as an array by simply splitting the string of classes by whitespace
const message_classes = "mx-8 break-all chat-message bg-slate-600 rounded-xl w-fit max-w-screen rounded-xl px-5 py-4".split(" ");
const username_css_classes = "text-gray-200 text-sm".split(" ");
// create message div and append it to the chatbox
const create_message = (data: Message) => {
let messageContainer = document.createElement('div');
// add an array of classes using the spread operator here
messageContainer.classList.add(...message_classes);
let chatbox = document.querySelector('#chatbox');
let username = document.createElement('span');
username.classList.add(...username_css_classes);
username.innerText = `${data.name}`;
messageContainer.append(username);
let message = document.createElement('p');
message.innerText = `${data.message}`;
messageContainer.append(message);
chatbox?.append(messageContainer);
window.scrollTo(0, document.body.scrollHeight);
}
```
Now our front end is pretty much done!
### Setting up Rust
Getting started with Rust is very easy. You can install it on Linux or WSL (Windows Subsystem for Linux) by using the following command:
```bash theme={null}
curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh
```
If you're on Windows and don't have WSL, you can find the install page for Rust [here](https://www.rust-lang.org/learn/get-started).
However you install it, you'll also get Rust's package manager called Cargo, which is like NPM for Rust. Cargo allows you to install Rust's packages which are called "crates".
For the back end part, because we'll be serving the web server through Shuttle, we will need to install their CLI which we can do with the following command:
```bash theme={null}
cargo install cargo-shuttle
```
You can also use [`cargo-binstall`](https://github.com/cargo-bins/cargo-binstall) to install cargo-shuttle:
```bash theme={null}
cargo binstall cargo-shuttle
```
The installation may take a while depending on your Internet connection, so feel free to grab a drink while you wait. You will also want to log in to the Shuttle website through GitHub and grab your API key, as you will need to log in on the CLI with your key before you can make any projects.
Once the installation is done, you can start a Shuttle project with the following command (run this in your React project at the `package.json` level):
```bash theme={null}
shuttle init --template axum
```
This will prompt you for a project name, then scaffold a project that uses Axum, a Rust web framework that is easy to build on with simple syntax. The project will be created in a folder within the current working directory with the name you chose. **For this article, we will simply refer to the folder as "API" for clarity.**
Once the project has been created, you'll want to go into your `Cargo.toml` and make sure it looks like the following:
```toml theme={null}
[package]
name = "websocket-chat-react-rust" # The name here should be whatever you've decided to name your project
version = "0.1.0"
edition = "2021"
publish = false
[dependencies]
axum = { version = "^0.6.2", features = ["ws"] }
axum-extra = { version = "0.4.2", features = ["spa"] }
chrono = { version = "0.4", features = ["serde"] }
futures = "0.3"
hyper = { version = "0.14", features = ["client", "http2"] }
hyper-tls = "0.5"
serde = { version = "1.0", features = ["derive"] }
serde_json = "1.0"
shuttle-axum = "0.57.0"
shuttle-runtime = "0.57.0"
sync_wrapper = "0.1"
tokio = { version = "1", features = ["full"] }
tower-http = { version = "0.3.5", features = ["fs", "auth"]}
```
This sets up our project with all of the required dependencies, so we can simply import them as needed.
As it would be ideal to have our front and back end running at the same time, there is an npm package called concurrently, which we can install at the `package.json` level like so:
```bash theme={null}
npm i -D concurrently
```
Now we can write an npm script to run both our front and back ends in one command! Let's look at what that would look like:
```json theme={null}
"scripts": {
    "dev": "concurrently \"vite\" \"shuttle run --working-directory API\"",
    // ... your other scripts
},
```
Running this npm command at the `package.json` level starts up your React app and launches your Rust project, so you can work on both at the same time.
### Getting Started (with Rust)
To get started, let's create all the values we need for the server to work.
```rust theme={null}
type Users = Arc<RwLock<HashMap<usize, UnboundedSender<Message>>>>;
static NEXT_USERID: std::sync::atomic::AtomicUsize = std::sync::atomic::AtomicUsize::new(1);
#[derive(Serialize, Deserialize)]
struct Msg {
    name: String,
    uid: Option<usize>,
    message: String,
}
```
Let's quickly dissect what these types actually mean. If you'd like to read more about what an arc is you can do so [here](https://doc.rust-lang.org/std/sync/struct.Arc.html), but in short: it's a [smart pointer](https://doc.rust-lang.org/book/ch15-00-smart-pointers.html) that can be cloned and holds a value. In this case, we're using it to hold a reader-writer lock ("RwLock"), which is typically used when you want the value inside to be read across multiple threads at the same time, but you want exclusive thread access for write operations (ie, can't mutate the value in any way from another thread). In short: it's like having a box of stuff that lets you know what's inside when you look at it, but to change the contents you have to open the box itself (and only one person can open it at a time!).
AtomicUsize is used for user IDs as we will want the value to be shared safely across threads. You can read more about Atomic values [here](https://doc.rust-lang.org/std/sync/atomic/). We will also want our messages to be able to be serialized and deserialized from JSON - hence, the derive macro provided to us by the [serde](https://serde.rs/) crate.
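And a quick illustration of why `AtomicUsize` is the right tool for handing out user IDs: `fetch_add` is atomic, so concurrent threads can never claim the same value:

```rust theme={null}
use std::sync::atomic::{AtomicUsize, Ordering};
use std::thread;

// Global counter, just like NEXT_USERID in the chat server
static NEXT_ID: AtomicUsize = AtomicUsize::new(1);

// Spawn `n` threads that each claim an id; returns the claimed ids, sorted.
fn take_ids(n: usize) -> Vec<usize> {
    let handles: Vec<_> = (0..n)
        .map(|_| thread::spawn(|| NEXT_ID.fetch_add(1, Ordering::Relaxed)))
        .collect();
    let mut ids: Vec<usize> = handles.into_iter().map(|h| h.join().unwrap()).collect();
    ids.sort();
    ids
}
```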
Let's quickly make up our main function so that we have a working route that we can test with our front end:
```rust theme={null}
#[shuttle_runtime::main]
async fn axum() -> ShuttleAxum {
    let router = router();
    Ok(router.into())
}
fn router() -> Router {
    // initialise the Users k/v store
    let users = Users::default();
    // return a new router with a WebSocket route
    Router::new()
        .route("/ws", get(ws_handler))
        .layer(Extension(users))
}
```
Now at the moment we have our main application loop and a router, but as you may have noticed, `ws_handler` doesn't actually exist in our code at the moment. This is what we will be writing next, and it can be simply written as so:
```rust theme={null}
// "impl IntoResponse" means we want our function to return a websocket connection
async fn ws_handler(ws: WebSocketUpgrade, Extension(state): Extension<Users>) -> impl IntoResponse {
ws.on_upgrade(|socket| handle_socket(socket, state))
}
```
This function simply receives a connection, upgrades the connection into a WebSocket connection and initiates the socket handling to be able to receive and send messages.
Now let's implement the `handle_socket` function, as it currently doesn't actually exist:
```rust theme={null}
async fn handle_socket(stream: WebSocket, state: Users) {
// When a new user enters the chat (opens the websocket connection), assign them a user ID
let my_id = NEXT_USERID.fetch_add(1, std::sync::atomic::Ordering::Relaxed);
// By splitting the websocket into a receiver and sender, we can send and receive at the same time.
let (mut sender, mut receiver) = stream.split();
// Create a new channel for async task management (stored in Users hashmap)
let (tx, mut rx): (UnboundedSender, UnboundedReceiver) = mpsc::unbounded_channel();
// If a message has been received, send the message (expect on error)
tokio::spawn(async move {
while let Some(msg) = rx.recv().await {
sender.send(msg).await.expect("Error while sending message");
}
sender.close().await.unwrap();
});
// if there's a message and the message is OK, broadcast it along all available open websocket connections
while let Some(Ok(result)) = receiver.next().await {
println!("{:?}", result);
}
```
As you may have noticed in this function, we spawn a thread to await messages and send them back. We'll need a method of safely transporting messages across the thread we've created, which is why an Arc with a reader-writer lock is used.
If you use `shuttle run` to locally run this project and send "Hello" to the WebSocket connection from a front-end web app, the terminal running the Rust project should print the received message (something like `Text("Hello")`), which means we've successfully received a WebSocket message. Now we just need to figure out how we're going to send it back!
Let's create a function that will broadcast messages along every connected WebSocket:
```rust theme={null}
async fn broadcast_msg(msg: Message, users: &Users) {
    // "if let" is basically a simple match statement, which is perfect for this use case
    // as we only want to match against one condition.
    if let Message::Text(msg) = msg {
        for (&_uid, tx) in users.read().await.iter() {
            tx.send(Message::Text(msg.clone()))
                .expect("Failed to send Message");
        }
    }
}
```
This function checks that the message is a text message and, if it is, iterates through every connected user and sends the message to them. Nothing too crazy here.
Let's have a look at enriching the results:
```rust theme={null}
fn enrich_result(result: Message, id: usize) -> Result<Message, serde_json::Error> {
    match result {
        Message::Text(msg) => {
            let mut msg: Msg = serde_json::from_str(&msg)?;
            msg.uid = Some(id);
            let msg = serde_json::to_string(&msg)?;
            Ok(Message::Text(msg))
        }
        _ => Ok(result),
    }
}
```
This function deserializes an incoming text message, adds the sender's user ID, and re-serializes it. If the message is not a text message, it is returned unchanged.
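The `Msg` type itself isn't shown in this snippet; a plausible shape, with field names assumed from the message, username and user ID the front end exchanges, would be something like:

```rust
// Hypothetical definition - the exact fields depend on what your
// front end sends; `uid` is an Option so the front end can omit it
// and let `enrich_result` fill it in.
#[derive(serde::Serialize, serde::Deserialize)]
struct Msg {
    name: String,
    uid: Option<usize>,
    message: String,
}
```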
Now we will incorporate both of these methods into the respective section in our `handle_socket` function, like so:
```rust theme={null}
while let Some(Ok(result)) = receiver.next().await {
    println!("{:?}", result);
    if let Ok(result) = enrich_result(result, my_id) {
        broadcast_msg(result, &state).await;
    }
}
```
Now if you send a message from your front-end web app to your web server, your React app should receive it back from the server with the message, username and user ID! We're done building the bare bones of the app, but there are a few more things to consider before we're finished.
While the app is technically a minimum viable product, a couple of things still need sorting out: compiling the React assets into our Rust project, and a way to manually disconnect users who are abusive or breaking the rules of the chat.
### Admin Routing
Before we get started, however, let's quickly update our main function so that we can pull in our secrets:
```rust theme={null}
#[shuttle_runtime::main]
async fn axum(
    #[shuttle_runtime::Secrets] secrets: SecretStore,
) -> ShuttleAxum {
    // We use Secrets.toml to set the BEARER key, just like in a .env file, and fetch it here
    let secret = secrets.get("BEARER").unwrap_or("Bear".to_string());
    // set up the router with the secret
    let router = router(secret);
    Ok(router.into())
}
```
Let's quickly dissect the new addition. `#[shuttle_runtime::Secrets]` gives us access to secrets set in a `Secrets.toml` file, much like how you'd typically use a `.env` file for environment variables locally and in production.
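For this project, the `Secrets.toml` file (placed next to `Cargo.toml`) might look like this, with the value being whatever secret you choose:

```toml
# Secrets.toml - keep this out of version control
BEARER = "keyboard cat"
```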
Now that our setup for this section is out of the way, let's cover the admin route first as that'll mean we can make sure our WebSocket service is complete before we compile any assets. Let's make a function that will take a user ID and manually disconnect them using the disconnect function we already have:
```rust theme={null}
async fn disconnect_user(
    Path(user_id): Path<usize>,
    Extension(users): Extension<Users>,
) -> impl IntoResponse {
    disconnect(user_id, &users).await;
    "Done"
}
```
Now we can set up an admin router within the router function that will let us disconnect a user manually, given their user ID and an authentication secret, which you can write like so:
```rust theme={null}
// write this somewhere in your router function
// RequireAuthorizationLayer dictates we must send a Bearer auth token to authorise the kick/remove
let admin = Router::new()
    .route("/disconnect/:user_id", get(disconnect_user))
    .layer(RequireAuthorizationLayer::bearer(&secret));
```
Now that we have this route, we can embed it into our main router using the `nest` method. This method is great for us, as it lets us group several related routes and handlers under a common prefix in a single router. Let's have a look at what the router we're returning should look like:
```rust theme={null}
Router::new()
    .route("/ws", get(ws_handler))
    .nest("/admin", admin)
    .layer(Extension(users))
As you can see, the admin routes now live under the `/admin` prefix, as dictated by the `nest` call. If we want to manually disconnect the user with ID 5, for example, we make a GET request to `/admin/disconnect/5` with a Bearer authorization header - for example, if you've set your secret as "keyboard cat", you need to send the header `Authorization: Bearer keyboard cat`.
At this point, your router function should look like this (if not, you have likely missed a step somewhere):
```rust theme={null}
fn router(secret: String) -> Router {
    // initialise the Users k/v store
    let users = Users::default();
    // make an admin route for kicking users
    let admin = Router::new()
        .route("/disconnect/:user_id", get(disconnect_user))
        .layer(RequireAuthorizationLayer::bearer(&secret));
    // return a new router: nest the admin route and fall back to serving the static files
    Router::new()
        .route("/ws", get(ws_handler))
        .nest("/admin", admin)
        .fallback_service(ServeDir::new("static").not_found_service(ServeFile::new("static/index.html")))
        .layer(Extension(users))
}
```
### Integrating Front & Back Endpoints
Now we can start integrating our front and back end together. Let's set up our npm deploy scripts so that we can build our assets into our Rust folder:
```json theme={null}
"scripts": {
// ... your other package.json scripts
"build": "tsc && vite build --emptyOutDir",
// ... your other package.json scripts
},
```
Our build command tells npm to type-check and compile the TypeScript files with `tsc`, then bundle the assets with Vite.
Now in your `vite.config.ts` file you'll want to have your defineConfig look like so:
```ts theme={null}
export default defineConfig({
  base: '',
  plugins: [react()],
  build: {
    outDir: 'API/static',
    emptyOutDir: true
  }
})
```
This tells Vite exactly where we want our compiled assets to be built, and whether the target directory should be emptied before building. Because the output directory lives outside the project root, `emptyOutDir` defaults to false, so we need to set it to true.
Now if we run `npm run build`, it should build our assets into a subdirectory of the API folder called `static`. We can serve this directory to our users from the Rust project, which is great for us as it means we can use a single deployment instead of managing two - just make sure the path you pass to `ServeDir::new` in the router matches this output directory.
Before we move on, let's re-write the WebSocket URL so that it will dynamically match whatever the URL of our hosted project will be, instead of a fixed string. Let's change our WebSocket connection in the React front-end like so:
```ts theme={null}
// Set up the websocket URL.
const wsUri = ((window.location.protocol == "https:" && "wss://") || "ws://") +
  window.location.host +
  "/ws";
const websocket = new WebSocket(wsUri);
```
Now that we've changed the connection URL, the compiled assets in our Rust folder will work when the Rust project runs locally, without us having to use our local Vite deployment. However, if we run the front end by itself, it won't connect, as this connection string relies on the front end being served by the Rust project - you can change the connection string as required.
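The scheme-selection trick above can be factored into a small helper, which also makes it testable - a sketch, with the function name `wsUrl` being our own invention:

```typescript
// Derive the WebSocket URL from a page's protocol and host,
// mirroring the expression used in the snippet above.
function wsUrl(protocol: string, host: string): string {
  const scheme = protocol === "https:" ? "wss://" : "ws://";
  return scheme + host + "/ws";
}

console.log(wsUrl("https:", "my-app.example.com")); // wss://my-app.example.com/ws
console.log(wsUrl("http:", "localhost:8000"));      // ws://localhost:8000/ws
```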
Now we can update our main function and router together so that the router uses `tower_http`'s `ServeDir` service for serving static files:
```rust theme={null}
#[shuttle_runtime::main]
async fn axum(
    #[shuttle_runtime::Secrets] secrets: SecretStore,
) -> ShuttleAxum {
    // We use Secrets.toml to set the BEARER key, just like in a .env file, and fetch it here
    let secret = secrets.get("BEARER").unwrap_or("Bear".to_string());
    // set up the router with the secret
    let router = router(secret);
    Ok(router.into())
}

fn router(secret: String) -> Router {
    // initialise the Users k/v store
    let users = Users::default();
    // make an admin route for kicking users
    let admin = Router::new()
        .route("/disconnect/:user_id", get(disconnect_user))
        .layer(RequireAuthorizationLayer::bearer(&secret));
    // return a new router: nest the admin route and fall back to serving the static files
    Router::new()
        .route("/ws", get(ws_handler))
        .nest("/admin", admin)
        .fallback_service(ServeDir::new("static").not_found_service(ServeFile::new("static/index.html")))
        .layer(Extension(users))
}
```
### Finishing Up
Now we're pretty much done and ready to deploy! We can call our deploy script from `package.json` by setting up npm scripts like below:
```json theme={null}
"scripts": {
    // ...your other scripts
    "dev": "concurrently \"vite\" \"shuttle run --working-directory ./API\"",
    "build": "tsc && vite build --emptyOutDir",
    "deploy": "npm run build && shuttle deploy --working-directory ./API"
    // ...your other scripts
}
```
Now if we run `npm run deploy`, it should build all of our assets into the required folder and then attempt to deploy to Shuttle - assuming there are no issues, it should deploy successfully!
If you would like to change the name of your project folder while keeping the deployment name the same, you can do so by creating a file called `Shuttle.toml` at the `Cargo.toml` level and setting the `name` key - for example, if I wanted to call my project `keyboard-cat`, I'd put `name = "keyboard-cat"` in the file.
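As a complete file, that `Shuttle.toml` would contain just the one line:

```toml
name = "keyboard-cat"
```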
If you need to check the status of your Shuttle project at any time, you can run `shuttle status` from the directory containing `Cargo.toml`, or add the `--name` flag followed by the project's name to use it from any directory.
# Introduction to Shuttle
Source: https://docs.shuttle.dev/welcome/introduction
Shuttle is a Rust-native cloud development platform that lets you deploy your app while also taking care of all of your infrastructure.
Installation and quickstart guide
Check out some Shuttle examples
Get started from one of many templates
Follow one of our tutorials
## What is Shuttle?
As a platform designed with a focus on providing an exceptional developer experience, our goal is to make building and deploying applications a breeze. Shuttle's **Infrastructure from Code** capabilities make provisioning resources simple and hassle-free. Getting a database is just a matter of asking for one with a macro:
```rust theme={null}
#[shuttle_runtime::main]
async fn main(
    // Automatically provisions a Postgres database
    // and hands you an authenticated connection pool
    #[shuttle_shared_db::Postgres] pool: sqlx::PgPool,
) -> ShuttleAxum {
    // Application code
}
```
You can hit the ground running and swiftly transform your ideas into tangible solutions. Accelerate your project's progress by rapidly building and deploying prototypes, ensuring you bring your vision to life in record time.
Resources added to your project can be inspected and visualized in the [Shuttle Console](https://console.shuttle.dev/).
Our mission is aligned with the wider shift of Rust becoming the future of web development, as we strive to deliver cutting-edge solutions that leverage the full potential of [the most loved programming language](https://survey.stackoverflow.co/2024/technology/#admired-and-desired).
## How Shuttle Works
The simplest way to build and deploy a web app on Shuttle looks like this:
```rust src/main.rs theme={null}
use axum::{routing::get, Router};
async fn hello_world() -> &'static str {
"Hello, world!"
}
#[shuttle_runtime::main]
async fn main() -> shuttle_axum::ShuttleAxum {
let router = Router::new().route("/", get(hello_world));
Ok(router.into())
}
```
This example starts an HTTP server where the `GET /` endpoint returns `Hello, world!`.
Most importantly, the code in the snippet above is all it takes for `shuttle deploy` to deploy it.
This is possible due to the `#[shuttle_runtime::main]` procedural macro.
The macro wraps your app with Shuttle's runtime, which handles resource provisioning and initialization for you.
## Supported Frameworks
Many types of Rust programs can be deployed on Shuttle.
Shuttle provides all hosted apps with proxied HTTPS web traffic.
Therefore, the most common use case is to deploy web apps and APIs.
Any app that can bind to a socket and accept incoming HTTP traffic can run on Shuttle.
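To illustrate what "bind to a socket" means in plain std-only Rust (a sketch, not Shuttle-specific code - a real web framework does this plus HTTP handling under the hood):

```rust
use std::net::TcpListener;

// Bind a TCP socket; port 0 asks the OS for any free port.
fn bind_any() -> std::io::Result<u16> {
    let listener = TcpListener::bind("127.0.0.1:0")?;
    Ok(listener.local_addr()?.port())
}

fn main() {
    let port = bind_any().expect("failed to bind");
    println!("listening on 127.0.0.1:{port}");
}
```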
To make life easier we have implemented all the boilerplate required for these Rust web frameworks. Get started with just a few lines of code.
* [Axum](/examples/axum)
* [Actix Web](/examples/actix)
* [Rocket](/examples/rocket)
* [Rama](/examples/other)
* [Warp](/examples/other)
* [Tower](/examples/other)
* [Salvo](/examples/other)
* [Poem](/examples/other)
The Discord Bot building frameworks [Serenity](/examples/serenity) and [Poise](/examples/poise) are also officially supported.
If you need a custom service, you can take a look at our guide [right here](/templates/tutorials/custom-service).
## Resource Provisioning
One of the great features of Shuttle is the provisioning of resources through macros.
With just a few lines of code, you can get access to various resources. Here are some examples:
### Secrets
```rust theme={null}
use shuttle_runtime::SecretStore;

#[shuttle_runtime::main]
async fn main(
    #[shuttle_runtime::Secrets] secrets: SecretStore,
) -> shuttle_axum::ShuttleAxum {
    // Get secret defined in the `Secrets.toml` file.
    let secret = secrets.get("MY_API_KEY").expect("secret was not found");
}
```
### Postgres Database
```rust theme={null}
#[shuttle_runtime::main]
async fn main(
    #[shuttle_shared_db::Postgres] pool: PgPool,
) -> shuttle_axum::ShuttleAxum {
    // Use the connection pool to query the Postgres database
    pool.execute(include_str!("../schema.sql"))
        .await
        .map_err(CustomError::new)?;
}
```
For more info on resources, head on over to our [Resources](/resources/resources) section.
## Deployment Process
When you run `shuttle deploy`, your project code is archived and sent to our platform where it is built into a Docker image.
Your service will then be started on Shuttle's infrastructure on AWS in London (eu-west-2).
The generated code from `shuttle_runtime::main` handles resource provisioning and initialization, leaving you to focus on what matters.
## Get Involved
Check us out at shuttle-hq/shuttle
Click to accept invite
Go to @shuttle\_dev on Twitter
If you are wondering about the best way to get involved, here's how