
From Local to Live: Hosting Your Apps on VPS with Docker and GitHub
A step-by-step guide for aspiring developers on how to host web applications on a Virtual Private Server (VPS) using Docker containers and GitHub Workflows.
Step 1: Acquire a VPS
Provisioning a VPS is now easier and more affordable than ever. Many providers offer budget-friendly solutions, including:
- Hostinger
- DigitalOcean
- RackNerd
- Superhosting
- ...and many more.
"Tip: Choose a server located near your primary audience (e.g., Europe), but don’t worry too much—a server in the US typically adds only tens of milliseconds of latency."
Recommended Specs:
For a personal portfolio or a few small sites, 1 GB RAM and 40 GB SSD is more than enough. Don’t overspend!
Operating System:
Stick with Ubuntu (e.g., 20.04 or 22.04) unless you’re already comfortable with another Linux distribution. Most tutorials and community support are Ubuntu-focused.
Example Setup:
This tutorial uses a 1 GB RAM, 24 GB SSD, Ubuntu 20.04 server from RackNerd.
Once you’ve purchased your server, you’ll receive:
- An IP address
- Login credentials (usually root and a password)
To access your server, you’ll need an SSH-capable terminal. Popular options include PuTTY (Windows) and Tabby Terminal (cross-platform).
"Alternative: You can also repurpose an old computer or laptop as your own VPS!"
Step 2: Initial VPS setup
After logging in to your VPS, perform some essential setup steps.
- Update the Server
apt update
apt upgrade
This may take a while, depending on your server’s initial state.
- Create a new user:
adduser krifod
You will be asked a few questions, starting with the account password.
- Set a strong password.
- You can skip the additional information prompts by pressing ENTER.
- Grant Administrative Privileges:
usermod -aG sudo krifod
- Install and Configure the Firewall (UFW) Install UFW:
apt install ufw
Check available applications:
ufw app list
We need to make sure that the firewall allows SSH connections so that we can log back in next time.
You should see:
Available applications: OpenSSH
Allow SSH connections:
ufw allow OpenSSH
Enable the firewall:
ufw enable
Type y and press ENTER to proceed.
Check status:
ufw status
You should see output similar to:
Status: active

To                         Action      From
--                         ------      ----
OpenSSH                    ALLOW       Anywhere
OpenSSH (v6)               ALLOW       Anywhere (v6)
Note: The firewall now blocks all incoming connections except SSH. If you add more services later, remember to allow their ports. Be aware that Docker publishes container ports by manipulating iptables directly, so published container ports bypass UFW rules—you don't need a UFW rule for them, but UFW won't protect them either.
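Because published container ports bypass UFW, one way to keep a containerized database off the public internet is to publish its port only on the loopback interface. A compose-file sketch (the mssql service name matches the one used later in this guide):

```yaml
services:
  mssql:
    ports:
      # Binding to 127.0.0.1 makes port 1433 reachable from the VPS itself,
      # but not from the public internet. Other containers on the same
      # Docker network still connect directly via the service name, so the
      # web app is unaffected.
      - "127.0.0.1:1433:1433"
```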
- (Optional) Configure SSH
Consider setting up SSH key authentication for better security.
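A minimal key-based setup might look like the sketch below. The key path and the user/host in the usage note (krifod@your.server.ip) are placeholders for your own values:

```shell
# Generate an ed25519 key pair. -N "" means no passphrase (use a real
# passphrase in practice). The key is written to a temp dir here so the
# snippet is safe to re-run; normally you'd use ~/.ssh/id_ed25519.
KEYDIR=$(mktemp -d)
ssh-keygen -q -t ed25519 -f "$KEYDIR/vps_key" -N "" -C "vps-access-key"
ls "$KEYDIR"   # vps_key (private) and vps_key.pub (public)
```

Then copy the public key to the server with `ssh-copy-id -i vps_key.pub krifod@your.server.ip` and log in with `ssh -i vps_key krifod@your.server.ip`. Once key login works, consider disabling password authentication in /etc/ssh/sshd_config.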
- Log in as Your New User
Disconnect and reconnect using your new user credentials.
Step 3: Install Docker Engine
We'll follow the official Docker documentation.
- Set Up the Repository
Add Docker's official GPG key:
sudo apt-get update
sudo apt-get install ca-certificates curl
sudo install -m 0755 -d /etc/apt/keyrings
sudo curl -fsSL https://download.docker.com/linux/ubuntu/gpg -o /etc/apt/keyrings/docker.asc
sudo chmod a+r /etc/apt/keyrings/docker.asc
Add the Docker repository:
echo \
  "deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.asc] https://download.docker.com/linux/ubuntu \
  $(. /etc/os-release && echo "${UBUNTU_CODENAME:-$VERSION_CODENAME}") stable" | \
  sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
sudo apt-get update
- Install the Docker packages
sudo apt-get install docker-ce docker-ce-cli containerd.io docker-buildx-plugin docker-compose-plugin
- Verify the installation:
sudo docker run hello-world
This command downloads a test image and runs it in a container. When the container runs, it prints a message and exits.
- Post-Installation Steps:
Create the docker
group and add your user:
sudo groupadd docker
sudo usermod -aG docker $USER
Log out and back in, or run:
newgrp docker
Test Docker without sudo:
docker run hello-world
- Enable Docker to Start on Boot
On Debian and Ubuntu, the Docker service starts on boot by default. To automatically start Docker and containerd on boot for other Linux distributions using systemd, run the following commands:
sudo systemctl enable docker.service
sudo systemctl enable containerd.service
To disable auto-start:
sudo systemctl disable docker.service
sudo systemctl disable containerd.service
Step 4: Prepare your Application
Now it’s time to get your application ready for deployment. For this tutorial, we’ll use an ASP.NET MVC app as an example, but the process is similar for most web apps.
- Make Your Connection String Configurable
Instead of hardcoding your database connection string, use environment variables.
This is more secure and flexible, especially for deployments.
Open your appsettings.json and replace the connection string with a parameterized version:
For MSSQL:
"DefaultConnection": "Server=${DB_SERVER},${DB_PORT};Database=${DB_DATABASE};User Id=${DB_USERNAME};Password=${DB_PASSWORD};Trust Server Certificate=True;"
For PostgreSQL:
"DefaultConnection": "Host=${DB_HOST};Port=${DB_PORT};Database=${DB_DATABASE};Username=${DB_USERNAME};Password=${DB_PASSWORD};"
"Note: Adjust the string format to match your database provider."
- Update Your Application to Use Environment Variables
In your Program.cs, change how you read the connection string:
Before:
var connectionString = builder.Configuration.GetConnectionString("DefaultConnection")
    ?? throw new InvalidOperationException("Connection string 'DefaultConnection' not found.");
After:
var connectionString = builder.Configuration["ConnectionStrings:DefaultConnection"]
    .Replace("${DB_SERVER}", Environment.GetEnvironmentVariable("DB_SERVER") ?? "localhost")
    .Replace("${DB_PORT}", Environment.GetEnvironmentVariable("DB_PORT") ?? "1433")
    .Replace("${DB_DATABASE}", Environment.GetEnvironmentVariable("DB_DATABASE") ?? "defaultdb")
    .Replace("${DB_USERNAME}", Environment.GetEnvironmentVariable("DB_USERNAME") ?? "SA")
    .Replace("${DB_PASSWORD}", Environment.GetEnvironmentVariable("DB_PASSWORD") ?? "password");
"Tip: Always provide fallback/default values in case an environment variable is missing."
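The same fallback idea exists in shell (and in docker compose variable expansion, which we'll use later): ${VAR:-default} substitutes a default when the variable is unset or empty. A quick demonstration:

```shell
# ${VAR:-default} expands to the value of VAR, or to "default" when VAR
# is unset or empty. Compose files support the same syntax inside ${...}.
unset DB_PORT
echo "${DB_PORT:-1433}"   # prints 1433 because DB_PORT is unset
DB_PORT=5432
echo "${DB_PORT:-1433}"   # prints 5432 because DB_PORT is now set
```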
- Push Your Code to GitHub
Commit your changes and push your repository to GitHub.
- Set Up Docker Hub Credentials in GitHub
To automate Docker image builds, you’ll need to connect your GitHub repository to Docker Hub.
Go to your repository’s Settings → Security → Secrets and variables → Actions.
Click New repository secret and add your Docker Hub token as DOCKERHUB_PASSWORD.
- You can generate a token in Docker Hub under Account Settings → Security → Personal Access Tokens.
Add your Docker Hub username as a variable:
- Switch to the Variables tab, click New repository variable, and name it DOCKERHUB_USERNAME.
"Security Note:
Using environment variables for secrets is better than hardcoding, but for production, consider a dedicated secrets manager (like Azure Key Vault, AWS Secrets Manager, HashiCorp Vault or Infisical). For learning and development, this method is fine."
Step 5: Build and Push Your Docker Image
Now it’s time to package your app as a Docker image and push it to Docker Hub automatically using GitHub Actions.
- Create a GitHub Workflow
In your repository, create a new file at .github/workflows/docker-image.yml with the following content:
name: Build and Push Docker Image

on:
  push:
    branches:
      - master

jobs:
  build-and-push:
    runs-on: ubuntu-latest
    permissions:
      contents: read
      packages: write
      attestations: write
      id-token: write
    steps:
      - name: Checkout Repository
        uses: actions/checkout@v4

      - name: Log in to DockerHub
        uses: docker/login-action@v3
        with:
          username: ${{ vars.DOCKERHUB_USERNAME }}
          password: ${{ secrets.DOCKERHUB_PASSWORD }}

      - name: Set up Docker Buildx
        uses: docker/setup-buildx-action@v3

      - name: Build and Push Docker Image
        uses: docker/build-push-action@3b5e8027fcad23fda98b2e3ac259d8d67585f671
        with:
          context: .
          file: Dockerfile
          push: true
          tags: ${{ vars.DOCKERHUB_USERNAME }}/seminar-hub:latest

      - name: Verify Docker Image
        run: docker images
"Tip: Make sure your Docker Hub username and password/token are set as repository variables and secrets in GitHub (DOCKERHUB_USERNAME and DOCKERHUB_PASSWORD)."
- Create a Dockerfile
Place your Dockerfile in the root of your project (where Program.cs and your main .csproj file are):
FROM mcr.microsoft.com/dotnet/aspnet:6.0 AS base
USER app
WORKDIR /app
EXPOSE 8080
EXPOSE 8081

FROM mcr.microsoft.com/dotnet/sdk:6.0 AS build
ARG BUILD_CONFIGURATION=Production
WORKDIR /src
COPY ["SeminarHub.csproj", "."]
RUN dotnet restore "SeminarHub.csproj"
COPY . .
RUN dotnet build "SeminarHub.csproj" -c $BUILD_CONFIGURATION -o /app/build

FROM build AS publish
ARG BUILD_CONFIGURATION=Production
RUN dotnet publish "SeminarHub.csproj" -c $BUILD_CONFIGURATION -o /app/publish /p:UseAppHost=false

FROM base AS final
WORKDIR /app
COPY --from=publish /app/publish .
ENV ASPNETCORE_ENVIRONMENT=Production
ENTRYPOINT ["dotnet", "SeminarHub.dll"]
"Note: If your project is more complex and each section is separated into its own .csproj file, you need to COPY all of them into /src. The BUILD_CONFIGURATION=Production argument avoids developer-mode error pages (you can set it to something else if needed). If your database data lives in a separate project, make sure it is copied in the final stage as well."
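For a solution split into several projects, the restore stage needs every .csproj before the full source copy. A sketch of what that part of the build stage might look like, where SeminarHub.Data is a hypothetical second project name:

```dockerfile
# Copy each project file first, so Docker's layer cache only re-runs
# "dotnet restore" when a .csproj actually changes
COPY ["SeminarHub.csproj", "./"]
COPY ["SeminarHub.Data/SeminarHub.Data.csproj", "SeminarHub.Data/"]
RUN dotnet restore "SeminarHub.csproj"

# Then copy the rest of the source and build as before
COPY . .
```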
"Note: Visual Studio (not VS Code) can auto-generate Dockerfiles for your project. If you’re unsure, you can use that feature to get started."
- (Optional) Add a .dockerignore File
To keep your Docker image small, add a .dockerignore file to exclude files and folders you don’t need in your image.
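A typical .dockerignore for a .NET project might look like this; adjust the entries to your own repository:

```
bin/
obj/
.vs/
.git/
.github/
*.md
Dockerfile
.dockerignore
```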
- Push Your Changes
Commit and push your workflow and Dockerfile to GitHub.
GitHub Actions will automatically build and push your Docker image to Docker Hub every time you push to the master branch.
You can monitor the build progress under the Actions tab in your GitHub repository.
Step 6: Orchestrate with Docker Compose
Now let’s set up Docker Compose to manage your app, database, and networking on your VPS.
- Create a Docker Compose File
Create a docker-compose.yaml file:
services:
  webapp:
    image: kaiserdmc/seminar-hub:latest
    container_name: seminarhubpage
    environment:
      - ASPNETCORE_ENVIRONMENT=Production
      - DB_SERVER=${DB_SERVER}
      - DB_PORT=${DB_PORT}
      - DB_DATABASE=${DB_DATABASE}
      - DB_USERNAME=${DB_USERNAME}
      - DB_PASSWORD=${DB_PASSWORD}
    ports:
      - "8080:80"
      - "8081:8081"
    user: "root"
    networks:
      - seminarhubpage_network
    depends_on:
      - mssql
    volumes:
      - webapp-data:/app/data

  mssql:
    image: mcr.microsoft.com/azure-sql-edge:latest
    container_name: seminarhubpage_db
    environment:
      - ACCEPT_EULA=1
      - MSSQL_SA_PASSWORD=${MSSQL_SA_PASSWORD}
      - MSSQL_DATA_DIR=/var/opt/mssql/data
      - MSSQL_LOG_DIR=/var/opt/mssql/log
      - MSSQL_BACKUP_DIR=/var/opt/mssql/backup
    ports:
      - "1433:1433"
    networks:
      - seminarhubpage_network
    volumes:
      - mssql-data:/var/opt/mssql

  db-init:
    image: mcr.microsoft.com/azure-sql-edge:latest
    depends_on:
      - mssql
    entrypoint: ["/bin/bash", "/init-db.sh"]
    environment:
      - MSSQL_SA_PASSWORD=${MSSQL_SA_PASSWORD}
      - DB_NAME=${DB_DATABASE}
      - DB_HOST=${DB_SERVER}
    volumes:
      - ./init-db.sh:/init-db.sh:ro
    networks:
      - seminarhubpage_network

networks:
  seminarhubpage_network:
    driver: bridge

volumes:
  webapp-data:
  mssql-data:
"Note: If you want to persist other data, such as images under wwwroot, add those volumes both under the service and in the top-level volumes section. If you use PostgreSQL, some of the variables differ and would need changing; the same goes for the database data path."
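For example, persisting uploaded images under wwwroot could look like this (webapp-images is a hypothetical volume name; note it appears both under the service and in the top-level volumes section):

```yaml
services:
  webapp:
    volumes:
      - webapp-data:/app/data
      - webapp-images:/app/wwwroot/images   # hypothetical extra volume

volumes:
  webapp-data:
  webapp-images:
```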
- Create a .env File
In the same directory, create a .env file to store your environment variables:
MSSQL_SA_PASSWORD=YourStrongPassword123!
DB_SERVER=seminarhubpage_db
DB_PORT=1433
DB_DATABASE=seminarhub_database
DB_USERNAME=sa
DB_PASSWORD=YourStrongPassword123!
- (Optional) Fix Volume Permissions
If you run into issues with database volume permissions, run:
# Create the volume
docker volume create mssql-data

# Create a temporary container to set permissions
docker run --rm -v mssql-data:/mssql-data alpine chmod -R 777 /mssql-data
- Add a Database Initialization Script
Create a file named init-db.sh in your project directory:
#!/bin/bash
# Use the service name as the host
DB_HOST=${DB_HOST:-seminarhubpage_db}

echo "Waiting for SQL Server to be available at $DB_HOST..."
until /opt/mssql-tools/bin/sqlcmd -S $DB_HOST -U sa -P "$MSSQL_SA_PASSWORD" -Q "SELECT 1" &>/dev/null
do
  sleep 2
done

echo "SQL Server is up. Creating database if it does not exist..."
/opt/mssql-tools/bin/sqlcmd -S $DB_HOST -U sa -P "$MSSQL_SA_PASSWORD" -Q "IF NOT EXISTS (SELECT name FROM sys.databases WHERE name = N'$DB_NAME') BEGIN CREATE DATABASE [$DB_NAME]; END"

echo "Database check/creation complete."
Make it executable:
cd ~/websites/seminarhub chmod +x init-db.sh
"Why is this needed? When using Azure SQL Edge (or similar images), the container only creates the sa user and the master.dbo schema by default. Your own database (e.g., seminarhub_database) and its schema are not created automatically. This script waits for SQL Server to be ready, then creates your database if it doesn’t exist. If you use a different database image or provider, you may need a different initialization approach."
- Apply Database Migrations Automatically
Add a helper method to your project to apply migrations on startup.
Create a file called WebApplicationExtensions.cs:
using Microsoft.EntityFrameworkCore;
using SeminarHub.Data;

namespace SeminarHub;

public static class WebApplicationExtensions
{
    public static void ConfigureMigrations(this WebApplication app)
    {
        using (var scope = app.Services.CreateScope())
        {
            var dbContext = scope.ServiceProvider.GetRequiredService<SeminarHubDbContext>();
            dbContext.Database.Migrate();
        }
    }
}
Then, in your Program.cs, call this method after building the app:
var app = builder.Build();
app.ConfigureMigrations();
Step 7: Deploy and Run Your App on the VPS
You’re almost there! Let’s get your app running on your server.
- Upload Your Files
On your VPS, create a directory for your app:
mkdir -p ~/websites/seminarhub
cd ~/websites/seminarhub
Upload your docker-compose.yaml, .env, and init-db.sh files to this directory.
You can use an SFTP client like FileZilla, WinSCP, or your terminal.
- Start Your Containers
From your app directory, run:
docker compose up -d
Check that your containers are running:
docker ps
You should see your app and database containers listed.
CONTAINER ID   IMAGE                                     COMMAND                  CREATED        STATUS        PORTS                                                                                            NAMES
0a394cee2720   kaiserdmc/seminar-hub:latest              "dotnet SeminarHub.d…"   22 hours ago   Up 22 hours   8080/tcp, 0.0.0.0:8081->8081/tcp, [::]:8081->8081/tcp, 0.0.0.0:8080->80/tcp, [::]:8080->80/tcp   seminarhubpage
37c499220b96   mcr.microsoft.com/azure-sql-edge:latest   "/opt/mssql/bin/perm…"   22 hours ago   Up 22 hours   1401/tcp, 0.0.0.0:1433->1433/tcp, [::]:1433->1433/tcp                                            seminarhubpage_db
- Open the Firewall
Allow HTTP traffic to your server:
sudo ufw allow 80
Now, visit your server’s IP address in your browser. Your app should be live!
Step 8: Set Up a Domain and HTTPS
Let’s make your app accessible via a custom domain and secure it with HTTPS.
8.1: Get a Domain Name
Register a domain with any provider (e.g., OVHcloud, GoDaddy, Namecheap, SuperHosting). In your domain’s DNS settings, add:
- An A record for yourdomain.com pointing to your VPS IP
- An A record (or CNAME) for www.yourdomain.com pointing to your VPS IP
"Note: DNS changes can take up to 48 hours to propagate."
8.2: Install Apache as a Reverse Proxy
The Apache web server is among the most popular web servers in the world. It’s well documented, has an active community of users, and has been in wide use for much of the history of the web, which makes it a great choice for hosting a website.
Install Apache:
sudo apt update
sudo apt install apache2
Allow HTTP and HTTPS through the firewall:
sudo ufw allow 'Apache Full'
Create a new Apache site config:
sudo nano /etc/apache2/sites-available/your_domain.conf
Paste the following, replacing the example domain kapybara.cloud with your actual domain:
# Configuration for kapybara.cloud
<VirtualHost *:80>
    ServerAdmin webmaster@kapybara.cloud
    ServerName kapybara.cloud
    ServerAlias www.kapybara.cloud
    DocumentRoot /var/www/html

    ErrorLog ${APACHE_LOG_DIR}/kapybara_error.log
    CustomLog ${APACHE_LOG_DIR}/kapybara_access.log combined

    ProxyPass / http://127.0.0.1:8080/
    ProxyPassReverse / http://127.0.0.1:8080/

    <Directory /var/www/html>
        Options Indexes FollowSymLinks
        AllowOverride None
        Require all granted
    </Directory>
</VirtualHost>
"Note: The last Directory section is not needed unless you want to serve static files from Apache as well as from Docker, but I have left it in for the example."
Enable the necessary modules and your new site:
sudo a2enmod proxy
sudo a2enmod proxy_http
sudo a2ensite your_domain.conf
sudo a2dissite 000-default
sudo systemctl restart apache2
8.3: Secure Your Site with HTTPS (Let’s Encrypt)
Let’s Encrypt is a Certificate Authority (CA) that facilitates obtaining and installing free TLS/SSL certificates, thereby enabling encrypted HTTPS on web servers. It simplifies the process by providing a software client, Certbot, that attempts to automate most (if not all) of the required steps. Currently, the entire process of obtaining and installing a certificate is fully automated on both Apache and Nginx.
In order to obtain an SSL certificate with Let’s Encrypt, we’ll first need to install the Certbot software on your server. We’ll use the default Ubuntu package repositories for that.
Install Certbot for Apache:
sudo apt install certbot python3-certbot-apache
Check your Apache config:
sudo apache2ctl configtest
If you see Syntax OK, run:
sudo certbot --apache
and follow the instructions.
Follow the prompts to set up your SSL certificate. When asked, choose to redirect HTTP to HTTPS.
Check that auto-renewal is enabled:
sudo systemctl status certbot.timer
Test renewal with:
sudo certbot renew --dry-run
Step 9: Celebrate!
That’s it! Your app is now live, running in Docker on your VPS, accessible via your own domain, and secured with HTTPS.
Step 10: Hosting Multiple Websites on One VPS
Once you’ve mastered hosting a single app, you can easily expand your VPS to serve multiple websites! Here’s how to do it:
- Assign Unique Ports for Each App
Each web app must listen on a different port inside Docker and be mapped to a unique port on your VPS. For example:
- exampleone.com → Docker port 8080
- exampletwo.org → Docker port 8082
In your docker-compose.yaml:
services:
  exampleone-app:
    image: yourdockerhubuser/exampleone:latest
    container_name: exampleone_app
    ports:
      - "8080:80"
    # ...other config...

  exampleone-db:
    # ...db config for exampleone...

  exampletwo-app:
    image: yourdockerhubuser/exampletwo:latest
    container_name: exampletwo_app
    ports:
      - "8082:80"
    # ...other config...

  exampletwo-db:
    # ...db config for exampletwo...
"Tip:
Make sure each app and its database use unique container names, ports, and environment variables."
- Update Apache to Proxy Each Domain to the Right App
You’ll need a separate Apache config for each website. Each config listens for its domain and proxies requests to the correct Docker port.
Example: Two Sites on One VPS
Create a file like /etc/apache2/sites-available/exampleone.com.conf and paste:
# Configuration for exampleone.com and exampleone.net
<VirtualHost *:80>
    ServerAdmin webmaster@exampleone.com
    ServerName exampleone.com
    ServerAlias www.exampleone.com exampleone.net www.exampleone.net
    DocumentRoot /var/www/html

    ErrorLog ${APACHE_LOG_DIR}/exampleone-error.log
    CustomLog ${APACHE_LOG_DIR}/exampleone-access.log combined

    # Proxy to your app (change port as needed)
    ProxyPass / http://127.0.0.1:8080/
    ProxyPassReverse / http://127.0.0.1:8080/

    <Directory /var/www/html>
        Options Indexes FollowSymLinks
        AllowOverride None
        Require all granted
    </Directory>

    # Redirect all www and .net traffic to https://exampleone.com
    RewriteEngine on
    RewriteCond %{HTTP_HOST} ^(www\.)?exampleone\.(com|net)$ [NC]
    RewriteRule ^ https://exampleone.com%{REQUEST_URI} [END,NE,R=permanent]

    RewriteCond %{SERVER_NAME} =www.exampleone.net [OR]
    RewriteCond %{SERVER_NAME} =exampleone.net [OR]
    RewriteCond %{SERVER_NAME} =www.exampleone.com [OR]
    RewriteCond %{SERVER_NAME} =exampleone.com
    RewriteRule ^ https://%{SERVER_NAME}%{REQUEST_URI} [END,NE,R=permanent]
</VirtualHost>
And for your second site, /etc/apache2/sites-available/exampletwo.org.conf:
# Configuration for exampletwo.org
<VirtualHost *:80>
    ServerAdmin webmaster@exampletwo.org
    ServerName exampletwo.org
    ServerAlias www.exampletwo.org
    DocumentRoot /var/www/html

    ErrorLog ${APACHE_LOG_DIR}/exampletwo-error.log
    CustomLog ${APACHE_LOG_DIR}/exampletwo-access.log combined

    # Proxy to your app (change port as needed)
    ProxyPass / http://127.0.0.1:8082/
    ProxyPassReverse / http://127.0.0.1:8082/

    <Directory /var/www/html>
        Options Indexes FollowSymLinks
        AllowOverride None
        Require all granted
    </Directory>

    # Redirect all www traffic to https://exampletwo.org
    RewriteEngine on
    RewriteCond %{HTTP_HOST} ^(www\.)?exampletwo\.org$ [NC]
    RewriteRule ^ https://exampletwo.org%{REQUEST_URI} [END,NE,R=permanent]

    RewriteCond %{SERVER_NAME} =exampletwo.org [OR]
    RewriteCond %{SERVER_NAME} =www.exampletwo.org
    RewriteRule ^ https://%{SERVER_NAME}%{REQUEST_URI} [END,NE,R=permanent]
</VirtualHost>
"Note: You can also combine the two configurations in one file!"
- Enable Your New Sites and Restart Apache
Enable the necessary Apache modules and your new site configs:
sudo a2enmod proxy
sudo a2enmod proxy_http
sudo a2ensite exampleone.com.conf
sudo a2ensite exampletwo.org.conf
sudo systemctl reload apache2
"Tip:
You can add as many site configs as you need—just use a unique port for each app and update the ProxyPass lines accordingly."
- Set Up HTTPS for Each Domain
Run Certbot for each domain to get a free SSL certificate:
sudo certbot --apache
Follow the prompts for each domain.
Now you can host as many sites as you want on a single VPS!
Just repeat the process for each new app and domain.