Troubleshooting an instant3Dhub Instance

Introduction

Note

For troubleshooting, we focus on Kubernetes, as it is our recommended deployment format.

The following cases cover frequent issues encountered during the rollout of instant3Dhub. This content supplements the other integration guides and will be extended over time. If you need support, it is vital that you look into the following topics first.

License Server

The startup of all instant3Dhub components is independent of the license server setup. All pods will show Running even if the license server address is wrong or the license has expired. Errors only become visible during transcoding or other transactions that require a license checkout.
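To verify a suspected license problem, inspect the logs of the pod that handled the failing transaction, for example a transcoder pod. The grep pattern below is only an assumption about the wording of the log messages; adjust it to what you actually see in the logs:

kubectl logs <podname> -n <namespace> | grep -i license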

Container Stuck in ContainerCreating

This behavior can have many causes, and the root cause is often not an instant3Dhub issue. Possible sources of this behavior:

  • Volume mounts are not set up correctly.

  • All nodes with the required special capabilities are already claimed by other pods.

Refer to the debugging steps below, especially kubectl describe, to determine the exact failure cause.
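In addition to kubectl describe, listing the most recent events in the namespace can reveal scheduling or volume-mount failures at a glance:

kubectl get events -n <namespace> --sort-by=.lastTimestamp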

Pods show CrashLoopBackOff

Several pods are required for the system to work; containers that depend on them will be in CrashLoopBackOff if these pods do not start within about five minutes:

  • i3dhub-postgres

  • i3dhub-rabbitmq

  • i3dhub-consul

  • i3dhub-keystore

Once these have started successfully, the remaining containers should start as well. If any containers are still not starting, use the debugging steps below to determine the root cause.
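A quick way to get an overview of all pods that are not in the Running state is to filter the pod list, for example:

kubectl get pods -n <namespace> | grep -v Running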

PostgreSQL or RabbitMQ in CrashLoopBackOff

If PostgreSQL or RabbitMQ are not starting, the system will not work. This can happen when upgrading to a newer major version of instant3Dhub without first clearing the respective volumes. Clearing the volumes should resolve the issue.
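Assuming the data is stored in persistent volume claims, clearing a volume could look as follows. The claim names depend on your installation, so list them first; after deleting a claim, restart the affected pod so a fresh volume is provisioned:

kubectl get pvc -n <namespace>
kubectl delete pvc <pvc-name> -n <namespace>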

Pods are crashing randomly

If pods are crashing randomly, the system is likely running out of memory. This can happen when running on hardware with insufficient resources or when memory limits are enabled.

To resolve the issue, either increase the node sizes or, if limits are enabled, raise the limits for the respective pods. The transcoder pods in particular can have high memory requirements, depending on input file sizes.
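One way to confirm memory pressure is to check whether a crashed container was terminated by the out-of-memory killer; in that case, Kubernetes records the termination reason OOMKilled:

kubectl describe pod <name> -n <namespace> | grep -i oomkilled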

webvis client is stuck in Offline

This can happen if the webvis client cannot connect to the backend. Possible causes:

  • Missing address in the system configuration. All addresses under which the backend is reachable must be added to the entrypoints section of values.yaml (see the sketch after this list).

  • The backend is down. Check the backend pods for errors.
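As a purely illustrative sketch, the entrypoints section of values.yaml could list every address under which the backend is reachable. The exact schema is an assumption here; consult the configuration reference for your release:

entrypoints:
  - https://hub.example.com
  - https://hub.internal.example.com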

kubectl troubleshooting

If you prefer other tools to monitor or manage your cluster, feel free to use them. Since kubectl should be available everywhere, we use it as the baseline tool to explain our approach to troubleshooting instant3Dhub. For extensive information, consult the official Kubernetes documentation. The following commands should be enough to figure out most error cases:

Determine pod states:

kubectl describe pod <name> -n <namespace>

Print logs of running or failed pods:

kubectl logs <podname> -n <namespace>
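For pods in CrashLoopBackOff, the current container may not have produced useful output yet; the logs of the previous, crashed container instance are often more informative:

kubectl logs <podname> -n <namespace> --previous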