Deploy Tiled on a Single Host#

Depending on your preference, you may run the Tiled server either in a container (e.g. Docker, Podman) or directly on the host. This documentation is organized into two sections, illustrating how to deploy Tiled with or without a container.

When you have a server running, see Test the Server and Next steps at the end of this page.

With a Container#

Using Temporary Storage#

Generate a secure secret and start a Tiled server from the container image.

echo "TILED_SINGLE_USER_API_KEY=$(openssl rand -hex 32)" >> .env
docker run -p 8000:8000 --env-file .env ghcr.io/bluesky/tiled:latest

For development, it can be convenient to use a short memorable secret like TILED_SINGLE_USER_API_KEY=secret. Take caution never to use that approach on a public-facing server, or on a server containing important data.
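If you want to use the generated key from the same shell later (for example, with curl), you can source the .env file. A minimal sketch:

```shell
# Generate a key and store it in .env, as above ('>' starts the file fresh)
echo "TILED_SINGLE_USER_API_KEY=$(openssl rand -hex 32)" > .env

# Export every variable defined in .env into the current shell
set -a
. ./.env
set +a

# openssl rand -hex 32 produces 64 hexadecimal characters
echo "${#TILED_SINGLE_USER_API_KEY}"   # prints 64
```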

The data and (embedded) databases are stored inside the container and will not persist beyond its lifetime. Read on to persist them.

Using Persistent Storage#

Create a local directory for data and metadata storage, and mount it into the container so that it persists after the container stops.

mkdir storage/

We would like to make storage writable by the application in the container, while still being able to directly access the files in it from the host, outside of Tiled.

This is one of those times when Docker and Podman differ.

Docker#

With Docker, it is straightforward because Docker runs with high privileges by default.

echo "TILED_SINGLE_USER_API_KEY=$(openssl rand -hex 32)" >> .env
docker run -p 8000:8000 --env-file .env -v ./storage:/storage ghcr.io/bluesky/tiled:latest

Podman#

With Podman, there are various options. Here is one that is relatively simple to set up.

  1. Make the storage directory group writable.

    chmod g+w storage
    
  2. Run the container with the options --userns=keep-id and --group-add $(id -g) to allow the container to write to the storage directory.

    echo "TILED_SINGLE_USER_API_KEY=$(openssl rand -hex 32)" >> .env
    podman run -p 8000:8000 --env-file .env -v ./storage:/storage --userns=keep-id --group-add $(id -g) ghcr.io/bluesky/tiled:latest
    
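Whichever approach you choose, you can quickly confirm that the group write bit is set on the storage directory:

```shell
mkdir -p storage
chmod g+w storage

# The sixth character of the permission string is the group write bit
ls -ld storage | cut -c6   # prints "w"
```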

Customizing Configuration#

When you need to introduce custom configuration—such as multi-user authentication (e.g., OIDC), access policies, or support for custom file formats—it is time to use a configuration file.

The default configuration used by the container is:

/deploy/config/config.yml#
---
# This configuration is meant to run inside the container specified by the
# Containerfile in the tiled repository.  It presumes that the container has a
# writable directory at the path /storage.

authentication:
  # The default is false. Set to true to enable any HTTP client that can
  # connect to _read_. An API key is still required to write.
  allow_anonymous_access: false
catalog:
  # This database stores metadata and URIs pointing to data.
  uri: "sqlite:////storage/catalog.db"

  writable_storage:

    # File-based data storage
    - "/storage/data"

    # Embedded tabular data storage
    - "duckdb:///storage/data.db"

  # This creates the database if it does not exist. This is convenient, but in
  # a horizontally-scaled deployment, this can be a race condition and multiple
  # containers may simultaneously attempt to create the database.
  # If that is a problem, set this to false, and run:
  #
  # tiled catalog init URI
  #
  # separately.
  init_if_not_exists: true

streaming_cache:
  # Store recent metadata and uploaded data in memory for streaming.
  uri: memory

You can override it by mounting a configuration directory from the host over the container's configuration directory, adding -v ./my-custom-config-dir:/deploy/config.

See the example server configuration for a comprehensive server configuration file with comments.

For example, combining persistent storage with a custom configuration:

docker run --env-file .env \
  -p 8000:8000 \
  -v ./storage:/storage \
  -v ./your/config/directory:/deploy/config \
  ghcr.io/bluesky/tiled:latest

Using Scalable Persistent Storage#

The default Tiled container uses embedded databases (SQLite, DuckDB) and in-process memory for caching. For larger workloads, you can upgrade to externally-managed services:

  • PostgreSQL — for metadata and tabular data

  • Redis — for live data streaming (optional)

Tiled ships with a compose.yml to orchestrate these services. Follow the steps below in order.

  1. Create a project directory and add the compose file.

    Create a directory for your deployment and place the following compose.yml inside it:

    compose.yml#
    ---
    services:
      tiled:
        image: ghcr.io/bluesky/tiled:0.2.9
        environment:
          - TILED_SINGLE_USER_API_KEY=${TILED_SINGLE_USER_API_KEY}
          - TILED_CATALOG_URI=postgresql://tiled:${POSTGRES_PASSWORD}@postgres:5432/tiled_catalog
          - TILED_CATALOG_WRITABLE_STORAGE=["file:///storage", "postgresql://tiled:${POSTGRES_PASSWORD}@postgres:5432/tiled_storage"]
          - TILED_STREAMING_CACHE_URI=redis://:${REDIS_PASSWORD}@redis:6379
        volumes:
          - tiled_data:/storage
        ports:
          - 8000:8000
        restart: unless-stopped
        depends_on:
          postgres:
            condition: service_healthy
          redis:
            condition: service_healthy
        networks:
          - backend
        healthcheck:
          test: curl --fail http://localhost:8000/healthz || exit 1
          interval: 60s
          timeout: 10s
          retries: 3
          start_period: 30s
    
      postgres:
        image: docker.io/postgres:16
        environment:
          - POSTGRES_USER=tiled
          - POSTGRES_PASSWORD=${POSTGRES_PASSWORD}
          # We create two databases, 'tiled_catalog' (metadata) and
          # 'tiled_storage' (tabular data) below. But it is required to
          # specify one here, so we use 'postgres' as a neutral default.
          - POSTGRES_DB=postgres
        volumes:
          - postgres_data:/var/lib/postgresql/data
          - ./initdb:/docker-entrypoint-initdb.d
        restart: unless-stopped
        networks:
          - backend
        healthcheck:
          test: pg_isready -U tiled -d postgres
          interval: 10s
          timeout: 5s
          retries: 5
          start_period: 10s
    
      redis:
        image: docker.io/redis:7-alpine
        command: redis-server --requirepass ${REDIS_PASSWORD}
        volumes:
          - redis_data:/data
        restart: unless-stopped
        networks:
          - backend
        healthcheck:
          test: redis-cli -a ${REDIS_PASSWORD} ping
          interval: 10s
          timeout: 5s
          retries: 5
          start_period: 5s
    
    volumes:
      tiled_data:
      postgres_data:
      redis_data:
    
    networks:
      backend: {}
    

    By default, this uses the configuration file shown above. When you need to introduce a custom configuration file, place a file named compose.override.yml next to compose.yml.

    compose.override.yml#
    ---
    services:
      tiled:
        environment:
          - TILED_CONFIG=/deploy/config
        volumes:
          - ./my-custom-config-dir:/deploy/config:ro
    

    The name compose.override.yml matters: in the step below, docker compose automatically applies this override if one is present.

  2. Create a .env file with secure secrets.

    In the same directory, create a .env file using the format below as a template:

    .env#
    TILED_SINGLE_USER_API_KEY=secret
    POSTGRES_PASSWORD=secret
    REDIS_PASSWORD=secret
    

    Then overwrite it with freshly generated secrets (note that the first command uses > so that the placeholder values are replaced rather than appended to):

    echo "TILED_SINGLE_USER_API_KEY=$(openssl rand -hex 32)" > .env
    echo "POSTGRES_PASSWORD=$(openssl rand -hex 32)" >> .env
    echo "REDIS_PASSWORD=$(openssl rand -hex 32)" >> .env
    
  3. Create the database initialization script.

    Create an initdb/ subdirectory and place the following script inside it. This script initializes the Tiled catalog database (for metadata) and the storage database (for appendable tabular data) when PostgreSQL first starts.

    initdb/01-create-databases.sh#
    #!/bin/bash
    set -e
    psql -v ON_ERROR_STOP=1 --username "$POSTGRES_USER" --dbname "postgres" <<-EOSQL
        CREATE DATABASE tiled_catalog;
        CREATE DATABASE tiled_storage;
    EOSQL
    
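Before starting the stack, you can verify that the script parses cleanly; a quick sketch that writes the script and checks it with bash -n (which parses without executing):

```shell
# Write the initialization script (same content as above)
mkdir -p initdb
cat > initdb/01-create-databases.sh <<'EOF'
#!/bin/bash
set -e
psql -v ON_ERROR_STOP=1 --username "$POSTGRES_USER" --dbname "postgres" <<-EOSQL
    CREATE DATABASE tiled_catalog;
    CREATE DATABASE tiled_storage;
EOSQL
EOF

# Parse the script without executing it; exits non-zero on a syntax error
bash -n initdb/01-create-databases.sh && echo "syntax OK"   # prints "syntax OK"
```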
  4. Start the services.

    docker compose up -d
    

    docker compose will automatically read your .env file. To stop all services, run docker compose down.

    Warning

    Adding -v to docker compose down will permanently delete all persisted storage. Do not use it unless you intend to wipe your data.

Without a Container#

Using Temporary Storage#

tiled serve catalog --temp

Note

By default, this generates a random API key at startup. For development purposes, it’s convenient to set a fixed API key, to avoid needing to copy/paste the API key each time. Take caution never to use this approach on a server that contains important data or is reachable from the public Internet.

tiled serve catalog --temp --api-key secret

Using Persistent Storage#

The “temporary storage” above allocates a temporary directory with:

  • A SQLite database for metadata (the “catalog”)

  • A directory for file-based data storage

  • A DuckDB database for appendable tabular data storage

This command specifies a persistent location for all three.

mkdir storage
mkdir storage/data

tiled serve catalog --init ./storage/catalog.db -w ./storage/data -w duckdb://./storage/data.db

Customizing Configuration#

When you need to introduce custom configuration—such as multi-user authentication (e.g., OIDC), access policies, or support for custom file formats—configuration via CLI command arguments becomes unwieldy, and it is usually best to use a configuration file.

Here is an example configuration file that expresses the same configuration as the long CLI command above.

config.yml#
---
authentication:
  # The default is false. Set to true to enable any HTTP client that can
  # connect to _read_. An API key is still required to write.
  allow_anonymous_access: false
catalog:
  # This database stores metadata and URIs pointing to data.
  uri: "./storage/catalog.db"

  writable_storage:

    # File-based data storage
    - "./storage/data"

    # Embedded tabular data storage
    - "duckdb://./storage/data.db"

  # This creates the database if it does not exist. This is convenient, but in
  # a horizontally-scaled deployment, this can be a race condition and multiple
  # containers may simultaneously attempt to create the database.
  # If that is a problem, set this to false, and run:
  #
  # tiled catalog init URI
  #
  # separately.
  init_if_not_exists: true

streaming_cache:
  # For small, simple deployments, we can use process memory to cache recent
  # data for streaming.
  uri: memory

You can use it like:

tiled serve config path/to/config.yml

The command accepts a single file or a directory of configuration files, which can be combined. (Using multiple files can be convenient for complex deployments.)
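For example, the configuration above could be split by topic into two files in one directory. This is only a sketch: the file names are arbitrary, and the paths are illustrative.

```yaml
# --- config/authentication.yml ---
authentication:
  allow_anonymous_access: false

# --- config/catalog.yml (a separate file in the same directory) ---
catalog:
  uri: "./catalog.db"
  writable_storage:
    - "./data"
  init_if_not_exists: true
```

Then serve the whole directory: tiled serve config config/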

See the example server configuration for a comprehensive server configuration file with comments.

Using Scalable Persistent Storage#

For larger workloads, you need:

  • PostgreSQL - two databases, for metadata and tabular data respectively, that may reside in one PostgreSQL instance

  • Redis - for live data streaming (optional)

This configuration presumes:

  • There is a PostgreSQL server listening at localhost:5432 with databases named tiled_catalog and tiled_storage and a PostgreSQL user named tiled with access to each.

  • There is a Redis server listening at localhost:6379 with a password set.

The secrets POSTGRES_PASSWORD (the password for the tiled user) and REDIS_PASSWORD should be set as environment variables; we avoid storing secrets directly in configuration files. Tiled will “template” these in from the environment when it loads the configuration.
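For example, you can generate and export the secrets in the shell session that will launch the server; a minimal sketch:

```shell
# Generate and export the secrets so that Tiled can substitute
# ${POSTGRES_PASSWORD} and ${REDIS_PASSWORD} when it loads the config
export POSTGRES_PASSWORD="$(openssl rand -hex 32)"
export REDIS_PASSWORD="$(openssl rand -hex 32)"

# Both values are 64-character hex strings
echo "${#POSTGRES_PASSWORD} ${#REDIS_PASSWORD}"   # prints "64 64"
```

With these variables set, tiled serve config path/to/config.yml fills in the placeholders at load time.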

config.yml#
---
authentication:
  # The default is false. Set to true to enable any HTTP client that can
  # connect to _read_. An API key is still required to write.
  allow_anonymous_access: false
catalog:
  # This database stores metadata and URIs pointing to data.
  uri: "postgresql://tiled:${POSTGRES_PASSWORD}@localhost:5432/tiled_catalog"
  writable_storage:
    # File-based data storage
    - "./storage"
    # Tabular data storage
    - "postgresql://tiled:${POSTGRES_PASSWORD}@localhost:5432/tiled_storage"

  # This creates the database if it does not exist. This is convenient, but in
  # a horizontally-scaled deployment, this can be a race condition and multiple
  # containers may simultaneously attempt to create the database.
  # If that is a problem, set this to false, and run:
  #
  # tiled catalog init URI
  #
  # separately.
  init_if_not_exists: true

streaming_cache:
  uri: redis://:${REDIS_PASSWORD}@localhost:6379

Start the server with this configuration:

tiled serve config path/to/config.yml

Test the Server#

The server is ready to accept requests. You can test it with curl, for example. The landing page / and API endpoint /api/v1/ accept unauthenticated requests.

curl 'http://localhost:8000/'  # HTML landing page
curl 'http://localhost:8000/api/v1/'  # REST API

Requests that give access to data must be authenticated using the key configured in the .env file.

curl -H "Authorization: Apikey ${TILED_SINGLE_USER_API_KEY}" 'http://localhost:8000/api/v1/metadata/'

To test from a web browser, provide the API key in the URL: http://localhost:8000?api_key=....

Next steps#

  • Notice that the URL uses http not https. Tiled should be placed behind a reverse proxy that performs TLS termination, such as HAProxy, Caddy, Nginx, or Apache.

  • For large workloads, multiple instances of Tiled should be deployed, sharing the same PostgreSQL, Redis, and network storage volumes for a consistent view of the data. This is addressed in the next section.