Deploying Django at small scale
It feels like common knowledge and an accepted fact that you want to run PostgreSQL in production, and have some sort of load balancer so you can scale your application servers horizontally. Throw in Redis for caching and you have the most generic web application stack I can come up with. At scale this makes a ton of sense. It is a battle-tested stack that will most likely not fail you, and if it does, there are tons of resources on how to fix it. But those are a lot of moving parts to keep updated and maintained, so let us talk about simplifying this stack a bit for small scale deployments.
There are lots of resources out there explaining the setup I mentioned above. Those resources are valuable and will guide you through production setups you will see in many places. But what if you are just starting and want to get your first application online? Things can be a lot easier.
I recently started moving my blog and photo sharing site to their own, small virtual server. I just blogged about the reasons for doing this, so I will not go into detail. Both sites are powered by Django and are set up in exactly the same way. Deploying one of those sites took me roughly fifteen minutes, which I consider reasonably fast.
Depending on how your application is designed and used, you might not need PostgreSQL or any other database server. I am the only person publishing content on both sites, so I can pretty much call it a read-only application, as one or two writes to the database every other day are negligible. For read-heavy applications SQLite does an amazing job. There are surely limitations, but you can get away with using SQLite more often than you would imagine.
I could go into more detail on how SQLite is suitable for production use, but let me link to some articles which do a way better job and have actual data to show where its strengths are.
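For context, there is nothing special to configure on the Django side. The snippet below is a minimal sketch of the SQLite configuration Django ships with by default; BASE_DIR and the file name are the stock values, not something specific to my sites.

# settings.py -- Django's default SQLite configuration, shown as a sketch
from pathlib import Path

BASE_DIR = Path(__file__).resolve().parent.parent

DATABASES = {
    "default": {
        "ENGINE": "django.db.backends.sqlite3",
        # the whole database is a single file on disk
        "NAME": BASE_DIR / "db.sqlite3",
    }
}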
A really neat feature is being able to move your database around with cp. Having a copy of your production database on your local system with a simple copy and paste, or moving a newly prepared database to your production system before going live, is pretty nice. For backups I would still suggest using .backup.
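As a sketch, assuming the database lives at /home/blog/app/db.sqlite3 and a /home/blog/backups directory exists (both hypothetical paths), pulling a copy and taking a proper backup could look like this:

$ cp /home/blog/app/db.sqlite3 ./db-local.sqlite3
$ sqlite3 /home/blog/app/db.sqlite3 ".backup /home/blog/backups/db.sqlite3"

The .backup command uses SQLite's online backup API, so it is safe to run while the application is serving requests.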
When deploying to a single box, general wisdom is to set up a reverse proxy and use it to terminate SSL. The goto way to do this is setting up nginx and using an ACME client to get a certificate from Let's Encrypt. (I have been deploying nginx for roughly ten years now, and I still cannot remember the stupid configuration file syntax.)
Caddy, on the other hand, not only takes care of getting a certificate from Let's Encrypt, but also of keeping it renewed. And the configuration is pretty simple if all you want is to get your site online.
static.screamingatmyscreen.com {
    root * /home/timo/static
    file_server
    encode zstd gzip
}

media.screamingatmyscreen.com {
    root * /home/timo/media
    file_server
    encode zstd gzip
}

www.screamingatmyscreen.com {
    reverse_proxy 127.0.0.1:8000
    encode zstd gzip
}

screamingatmyscreen.com {
    redir https://www.screamingatmyscreen.com{uri} permanent
}
I can actually remember how to do this! Caddy still lacks a few features and might not work for more complex setups, but for many scenarios it is more than good enough. Did I mention that it is easy to set up?
If you want tools like goaccess to be able to parse your webserver logs, you most likely want them stored in the Common Log Format. To turn on logging, you throw a small snippet into each domain's configuration block.
log {
    output file /var/log/sams.log {
        roll_size 1gb
        roll_keep 5
        roll_keep_for 720h
    }
    format single_field common_log
}
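Once logs are being written, a quick way to look at them with goaccess (assuming the log path from the snippet above) could be:

$ goaccess /var/log/sams.log --log-format=COMMON -o report.html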
You might want to consider throwing in some cache headers as well :)
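As a sketch of what that could look like for the static subdomain, assuming the assets can safely be cached for a long time (for example because file names change whenever the content changes), Caddy's header directive does the job:

static.screamingatmyscreen.com {
    root * /home/timo/static
    file_server
    encode zstd gzip
    # long-lived caching; only an assumption that works if asset file names change on updates
    header Cache-Control "public, max-age=31536000"
}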
One of the most common questions I have heard from people deploying their first app is how to make sure the app starts, and restarts, when the server is rebooted. I usually deploy on boxes running Debian, which ships with systemd, so a small unit file takes care of this.
[Unit]
Description=gunicorn
After=network.target
[Service]
User=blog
Group=blog
WorkingDirectory=/home/blog/app
ExecStart=/home/blog/venv/bin/gunicorn --bind 127.0.0.1:8000 --workers 5 blog.wsgi
[Install]
WantedBy=multi-user.target
Assuming you set up a user named blog, created a virtual environment in /home/blog/venv and cloned your app to /home/blog/app, you can paste this into /etc/systemd/system/gunicorn.service and enable and start the service.
$ systemctl enable gunicorn
$ systemctl start gunicorn
$ systemctl status gunicorn
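To quickly verify that gunicorn is answering before pointing Caddy at it, a simple check against the bind address from the unit file should do:

$ curl -I http://127.0.0.1:8000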
I know we wanted to talk about tiny or small scale deployments. But this does not mean you are not hoping your small website goes viral, becomes the next big hit, or that your latest post lands on HackerNews, Reddit and Twitter trending at the same time.
I am currently running my apps on a CPX11 virtual server from Hetzner: 2 cores and 2GB memory for 4,15€ / month. The little box is not even close to maxing out its resources, even when visitor counts spike because a new article goes live and is shared on larger platforms.
Let us assume your website goes viral and millions of unique visitors come each day to read your latest article. First of all: congratulations! You made it further than most of us ever will :) But rest assured, bare metal servers will get you a long way. A PX62-NVME costs 88.06€ / month for 6 real cores, 64GB memory and 2 NVMe disks in RAID 1. A box like this will handle way more traffic than you can imagine if all you have worked with so far are virtual servers from AWS or other cloud providers.
There are obviously very good reasons why we started putting database servers on separate boxes, scaling apps horizontally and adding a load balancer in front of them. If you are running an app, and not just a website, that has hit scale, you will want to start doing all of this and more.
If you ever plan to manipulate data outside of your Django application, you might want to consider adding CHECK constraints to make sure the data going into the DB is actually what you expect. When you want to migrate to another database server, this will make your life a lot easier.
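A minimal sketch of what that can look like in Django, using a hypothetical Article model (the field names are made up for illustration):

# models.py -- sketch of a database-level CHECK constraint on a hypothetical model
from django.db import models
from django.db.models import Q


class Article(models.Model):
    title = models.CharField(max_length=200)
    is_published = models.BooleanField(default=False)
    published_at = models.DateTimeField(null=True, blank=True)

    class Meta:
        constraints = [
            # enforced by the database itself, so it also holds for rows
            # written outside of Django
            models.CheckConstraint(
                check=Q(is_published=False) | Q(published_at__isnull=False),
                name="article_published_requires_date",
            ),
        ]

SQLite enforces CHECK constraints as well, so the rule travels with the database file.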
Small scale websites allow you to get away with things which sound counter-intuitive when you are used to deploying complex web applications with service level agreements, or when you are just starting out and only find resources talking about those production-ready deployments.
You might want to use your small site or app to learn how to do a full scale deployment, which is a really good hands-on way to learn. But if you are only after getting your site online, consider taking shortcuts. Getting the first deployment done and seeing your site live will be a great feeling, and you can always iterate. Do not worry about getting everything 100% right for a scale you do not have from the start.