Deploying to Space
As previously mentioned, the system is air-gapped, meaning it has no direct access to the internet or external networks. Updating Docker images must therefore be handled differently than in development environments on the ground: instead of pulling images from remote registries, updates must be prepared, packaged, and transferred through a controlled uplink process.
Additionally, since direct command-line access to containers is not available onboard, all data exchange must follow a structured filesystem approach. Each container interacts with a shared, predefined, private directory that holds all relevant files: input data, output results, and any build artifacts required by the application.
This means that:
- Input files must be placed in this shared directory before execution.
- Output files generated by the container must also be written back to the same location.
- Any additional dependencies or build files the application needs at runtime should also be included there.
By standardizing this directory structure, Clustergate-2 ensures reliable and predictable data handling for both uplink and downlink, without requiring interactive shell access to the container itself.
Building Docker Images
There are two main approaches to building Docker images for Clustergate-2: building onboard the satellite or building on the ground. For instructions on how to structure and upload such an image, see the guide: How to Build a Docker Image.
Onboard builds require uplinking the Dockerfile along with all necessary binaries and dependencies while preserving the folder structure; these builds must start from one of the base images already available in space.
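For example, an onboard build might follow the sketch below; the base image name and the final command are placeholders, and FROM must reference one of the base images actually preloaded on the satellite:
# Onboard build sketch. The base image name is a placeholder;
# FROM must reference a base image already available in space.
FROM cg2-base/python:3.11
# Copy the pre-built binaries and dependencies uplinked alongside this
# Dockerfile, preserving their folder structure.
COPY my_build_files/ /app/
WORKDIR /app
CMD ["./run_experiment"]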
Ground builds allow you to create fully packaged Docker images—often using multi-stage builds and cross-compilation for ARM64—and upload them as .tar archives for direct loading on the satellite.
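For example, a ground build might be produced and packaged as follows, assuming Docker Buildx is available on the ground workstation (image and file names are placeholders):
# Cross-build the image for the satellite's ARM64 platform.
docker buildx build --platform linux/arm64 -t myapp:1.0 --load .
# Package it as a .tar archive, ready for uplink and direct loading onboard.
docker save -o myapp.tar myapp:1.0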
Because Clustergate-2 operates in an air-gapped environment with no package manager access, all dependencies must be embedded at build time, and container size should be optimized for limited uplink capacity. Data input, output, and build files are handled via mounted volumes, as containers have no interactive shell access during runtime.
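A multi-stage Dockerfile is one common way to meet both constraints; the sketch below assumes BuildKit/Buildx and uses Go as an example toolchain (names and versions are illustrative):
# Stage 1: build on the ground, where compilers and package registries are reachable.
FROM --platform=$BUILDPLATFORM golang:1.22 AS build
WORKDIR /src
COPY . .
# Cross-compile a self-contained ARM64 binary; no dependencies are needed at runtime.
RUN CGO_ENABLED=0 GOOS=linux GOARCH=arm64 go build -o /out/app .
# Stage 2: minimal runtime image with the binary baked in, keeping uplink size small.
FROM debian:bookworm-slim
COPY --from=build /out/app /usr/local/bin/app
CMD ["app"]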
Filesystem
Each Docker container will have a dedicated volume mounted at /data by default.
This mount point serves as the main interface for file exchange between your application and the Clustergate-2 system.
The default filesystem structure inside the container looks like this:
/data
├── Dockerfile
├── my_build_files/
├── my_first_upload.txt
└── put_here_the_files_you_want_to_transmit_to_ground
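In practice, the pattern is simply to read inputs from /data and write results back to /data, as in this minimal sketch (the processing step and output file name are hypothetical):
#!/bin/sh
# Read an uplinked input file from the shared volume, process it,
# and write the result back to /data so it can be downlinked.
wc -l /data/my_first_upload.txt > /data/line_count_result.txt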
Uplink (Earth to Satellite)
Files uploaded via the Dashboard on the ground will be placed directly into the container’s /data directory.
This allows you to include binaries, scripts, config files, or any other necessary data in your container runtime environment.
Downlink (Satellite to Earth)
To transmit files back to the ground, your application should write output files into the /data directory.
Only this directory will be monitored for downloadable artifacts.
The Dashboard will manage file retrieval using a git-like system, allowing you to fetch only updated or new files when requested.
Custom Mount Path
While /data is the default mount location, it can be customized during the first onboarding transfer if needed.
Scheduling
Clustergate-2 supports multiple ways to schedule the execution of containerized workloads. However, if you do not require scheduling, you can simply deploy your container without specifying any scheduling configuration—it will then be executed as soon as resources allow.
Time-Based Scheduling
To run your workload at a specific time, provide a timestamp in RFC3339 format. Example:
2025-05-22T12:10:00+02:00
This allows precise scheduling in UTC or with a timezone offset.
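On a Linux ground workstation, a valid timestamp can be generated from the shell (this assumes GNU date; other implementations use different format flags):
# Print the current local time in RFC3339 format, e.g. 2025-05-22T12:10:00+02:00.
date +"%Y-%m-%dT%H:%M:%S%:z"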
Location-Based Scheduling
To run your container when the satellite is over a specific geographic location, define the following parameters in the Dashboard:
latitude: 46.27
longitude: 6.96
altitude: 372.0
min_elevation: 5.0
Where:
- latitude / longitude: Target ground location.
- altitude: Altitude above sea level (in meters).
- min_elevation: Minimum elevation angle (in degrees) for the satellite to be considered "in view" of the target.
Note: This feature is currently experimental and operates on a best-effort basis. The min_elevation threshold is not yet guaranteed to be strictly enforced.
Telemetry Access
Onboard telemetry data is available to users who require it. This includes system metrics, orbital data, sensor outputs, and more.
Telemetry is accessed via a REST API from within running containers, allowing them to query live or recent data from the satellite.
For details on available endpoints, authentication, and usage examples, see the documentation: CG2 Telemetry.
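As an illustration only, a query from inside a running container might look like the following; the host, port, endpoint path, and authentication header are placeholders rather than the documented API:
# Hypothetical telemetry request; consult CG2 Telemetry for the real endpoints and auth.
curl -s -H "Authorization: Bearer $CG2_TOKEN" http://cg2-telemetry.local/api/v1/orbit/latest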
Specific Computing Needs
Some applications may require access to specialized onboard hardware. These resources must be explicitly requested in your deployment configuration, which is defined from the Dashboard, and some may involve additional integration work. The available options are:
FPGA (Programmable Logic)
If your application needs access to the onboard FPGA (Xilinx UltraScale+), please contact the team as early as possible. You may need to provide:
- Custom kernel modules
- Bitstreams or hardware descriptors
- Information for feasibility assessment
Due to the complexity of integrating FPGA-based workloads, early coordination is essential.
GPU (Jetson Orin NX)
To use the onboard GPU, specify it in your deployment configuration. It will be passed to your container using the equivalent of Docker’s --gpus option.
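For ground testing before uplink, you can approximate this behaviour with Docker directly (a sketch; the image tag is a placeholder, and on Jetson hosts the NVIDIA container runtime must be installed):
# Expose the GPU to the container, mirroring the onboard deployment option.
docker run --rm --gpus all myapp:1.0
# On Jetson devices, the NVIDIA runtime can also be selected explicitly.
docker run --rm --runtime nvidia myapp:1.0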
Key notes:
- The GPU is provided by a Jetson Orin NX module.
- Several AI-focused base images (e.g., with CUDA, TensorRT) are preloaded and optimized for onboard use.
- For best performance, ensure your application is compatible with ARM64 and the Jetson platform stack.