diff --git a/code/container/README.md b/code/container/README.md
index c901b514b214fb75a2a6bc79e6a15626016c0d6d..048b81478f01a923b87f3baced60b9529596175a 100644
--- a/code/container/README.md
+++ b/code/container/README.md
@@ -71,6 +71,8 @@ Below are two examples of how to use HPC JupyterHub. The first example shows the
 5. Stop the JupyterHub server
 	- File -> Hub Control Panel -> Stop My Server 
 
+💎 `cat ~/current.jupyterhub.notebook.log` can give you more details on your JupyterHub server.
+
 ## Using the container via CLI on the HPC 🏭
 
 The full potential of an HPC cluster can only be utilised via the command line interface (CLI). Workflows can be optimised for the available hardware, such as different accelerators (e.g. GPUs, NPUs, ...), and can be run in a highly parallel fashion. Below, a simple workflow for our workshop example is shown, based on the same container that was used in JupyterHub. For more details, please have a look at our [documentation](https://docs.hpc.gwdg.de/index.html), e.g. on [Slurm/GPU Usage](https://docs.hpc.gwdg.de/how_to_use/slurm/gpu_usage/index.html) and [GPU Partitions](https://docs.hpc.gwdg.de/how_to_use/compute_partitions/gpu_partitions/index.html).
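+
+As a minimal sketch of such a CLI workflow, the batch script below runs a script inside the same container via Slurm. The partition name, container path, and script name are placeholders and must be adapted to your project; valid partition names are listed on the GPU Partitions page linked above:
+
+```bash
+#!/bin/bash
+#SBATCH --job-name=container-demo
+#SBATCH --partition=grete        # placeholder: choose a GPU partition from the docs
+#SBATCH --gpus-per-node=1
+#SBATCH --time=00:30:00
+
+# Run a script inside the container used by JupyterHub
+# (container path and script name are placeholders)
+apptainer exec --nv /path/to/workshop-container.sif python my_script.py
+```
+
+Submit the script with `sbatch jobscript.sh` and check its status with `squeue -u $USER`.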