FAQ
DPhi Pod Execution
What is the difference between a container and a DPhi Pod?
A container is a self-contained environment that runs an application.
A DPhi Pod is a wrapper around a container — technically, it’s a Kubernetes pod that runs your container inside it.
Why does it matter?
For simple jobs, running a container directly or through a DPhi Pod is essentially the same.
Using a DPhi Pod enables more complex setups when needed, such as scheduling, telemetry, inter-pod communication, or resource isolation.
The DPhi Pod name was chosen to highlight user-friendliness and the fact that it’s optimized for running jobs in a space-like execution environment.
Do I need to adapt my Docker image for the onboard system?
No — you do not need to make any changes for Kubernetes.
Our system automatically adapts your image to run inside a DPhi Pod.
The only requirement is that your Docker image is built for arm64, which matches the architecture of the execution environment.
This makes it easy to use your existing images without extra modifications.
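For example, if you build on an x86 machine, Docker Buildx can produce an arm64 image (the image name here is illustrative):
docker buildx build --platform linux/arm64 -t my-experiment:latest .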
Where is the persistent volume mounted for my DPhi Pods?
The persistent volume for your DPhi Pods is mounted at:
/data
All default DPhi Pods (those scheduled without a pod_name) belonging to the same user share this volume, so any files you write to /data are accessible across all your default DPhi Pods.
Because default DPhi Pods are all executed under the default name, only one instance of a default DPhi Pod can run at a time. To execute multiple pods in parallel, a different pod_name must be set for each instance. Note, however, that each named instance uses its own persistent volume, mapped to its pod_name, so data is not shared among them.
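As an illustrative sketch, two parallel instances might be scheduled like this (the endpoint path, base URL, and the image field are assumptions; only pod_name appears in this FAQ):
curl -X POST https://api.example.com/pod/run -H "Content-Type: application/json" -d '{"image": "my-experiment:latest", "pod_name": "run-a"}'
curl -X POST https://api.example.com/pod/run -H "Content-Type: application/json" -d '{"image": "my-experiment:latest", "pod_name": "run-b"}'
Each instance then writes to its own persistent volume mounted at /data.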
What actually runs on the EM (execution machine) when I submit a command with the DPhi Pod run POST request?
Exactly the command you typed. But remember that if you want to run something like:
ls -lh > /data/files.txt
you must run it through a shell by prepending /bin/bash -c and quoting the command. For example:
/bin/bash -c "ls -lh > /data/files.txt"
Your original command remains the same; it just needs to be executed through bash.
Can I use redirection, pipes, and multiple commands?
Yes. Because everything runs inside a normal shell, all of these work:
/bin/sh -c "echo hello > /data/msg.txt"
/bin/sh -c "cat /data/file | grep foo"
/bin/sh -c "echo start && do-something && echo done"
Can I run multiple commands in one submission?
Yes. For example:
/bin/sh -c "mkdir /data/logs && cp file.txt /data/logs/"
Can I run my own script?
Yes. If your image contains a script, run it through the shell:
/bin/sh /path/to/script.sh
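For instance, such a script could log its progress to the persistent volume so the results outlive the pod (the contents below are an illustrative sketch):
#!/bin/sh
# Illustrative script shipped inside the image.
echo "experiment started" >> /data/experiment.log
# ... your actual workload goes here ...
echo "experiment finished" >> /data/experiment.log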
My DPhi Pod was scheduled successfully but I don’t see logs — why?
A DPhi Pod being “scheduled successfully” only means your request was accepted. It does not guarantee that the job actually started running on the execution machine.
Once the scheduler tries to launch the pod, several things can still go wrong, for example:
- the requested Docker image does not exist
- the command inside the image fails immediately
- the pod starts and exits before producing logs
In all these cases, the pod is still considered “scheduled”, but it may never run or may fail instantly. To verify that your DPhi Pod is actually running, always fetch its state with the GET /pod/status endpoint and check for errors.
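For example, you could poll the pod state after submitting it (the base URL is a placeholder; GET /pod/status is the endpoint named above):
curl -s https://api.example.com/pod/status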
Why is my previous DPhi Pod removed when I request a new one?
At the moment, each client can run only one DPhi Pod at a time. This is a temporary limitation that ensures stable resource usage and avoids conflicts inside the execution environment.
Because of this, when you submit a new DPhi Pod, the system automatically removes your previous pod, and then schedules the new one.
This is expected behavior for now.
In the near future, you will be able to run multiple DPhi Pods in parallel for the same client.
However, it is important to know that all DPhi Pods belonging to a given user share the same dedicated volume on the EM. Therefore, data races, file conflicts, and overwrites will be the user’s responsibility to manage.
This gives you more flexibility, but also more responsibility for how your pods interact with shared data.
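One simple way to avoid overwrites in a shared volume is to give each pod's output a unique file name, for example by including a timestamp (the job name is illustrative):
/bin/bash -c "my-job > /data/my-job-$(date +%s).log 2>&1"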
Do I need to provide a command for each DPhi Pod request?
No — you don’t have to provide a command every time.
If you don’t specify a command, the DPhi Pod will automatically run the default command baked into the Docker image you requested.
If you do provide a command, it overrides the image’s default, allowing you to run the same container with different commands without needing to rebuild the Docker image.
This makes it easy to run different experiments using the same container image.
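As an illustrative sketch (the endpoint path, base URL, and the image field are assumptions), the same image can be run with and without an explicit command:
# No command: the image's default command runs
curl -X POST https://api.example.com/pod/run -H "Content-Type: application/json" -d '{"image": "my-experiment:latest"}'
# Command provided: overrides the image's default
curl -X POST https://api.example.com/pod/run -H "Content-Type: application/json" -d '{"image": "my-experiment:latest", "command": "/bin/sh /path/to/script.sh"}'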
Where can I recover the data that is printed to the console by my DPhi Pod?
For now, standard output (stdout) and standard error (stderr) are not logged by our system and are therefore not recoverable. In the near future we will make it possible to downlink the console output generated by your pods. Until then, we recommend logging everything to a file in the /data folder, the persistent location inside DPhi Pods, so that you can downlink it later.
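For example, instead of letting output go to the console, redirect it into the persistent volume (the command name is illustrative):
/bin/bash -c "my-experiment > /data/console.log 2>&1"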
DPhi Pods Storage Model
TBD