From 50dcfb30cecfbd4ebaa5048186515ba730146b49 Mon Sep 17 00:00:00 2001
From: Hauke Kirchner <hauke.gronenberg@gwdg.de>
Date: Wed, 21 Aug 2024 09:35:01 +0000
Subject: [PATCH] Update docs.hpc.gwdg.de links in README.md

---
 code/container/README.md | 8 ++++----
 1 file changed, 4 insertions(+), 4 deletions(-)

diff --git a/code/container/README.md b/code/container/README.md
index 264f455..c901b51 100644
--- a/code/container/README.md
+++ b/code/container/README.md
@@ -34,7 +34,7 @@ me@local-machine~ % ssh glogin
 [me@glogin9 ~]$ bash build_dlc-dlwgpu.sh
 ```
 
-The container can be built on our various HPC clusters using the `build_dlc-*.sh` scripts. Details on how to use Apptainer and the process of building containers are described in our [documentation](https://docs.hpc.gwdg.de/software/apptainer/) and our blog article [Decluttering your Python environments](https://gitlab-ce.gwdg.de/hpc-team-public/science-domains-blog/-/blob/main/20230907_python-apptainer.md).
+The container can be built on our various HPC clusters using the `build_dlc-*.sh` scripts. Details on how to use Apptainer and the process of building containers are described in our [documentation](https://docs.hpc.gwdg.de/software_stacks/list_of_modules/apptainer/index.html) and our blog article [Decluttering your Python environments](https://gitlab-ce.gwdg.de/hpc-team-public/science-domains-blog/-/blob/main/20230907_python-apptainer.md).
 Running `build_dlc-dlwgpu.sh` will build an image with the software used for our workshop example, as defined in `dlc-dlwgpu.def`. Contrary to the traditional way of using conda to install all packages defined in a `requirements.txt` file, pip is used here to reduce the number of software packages used. However, there are good reasons to use conda, so `build_dlc-conda-example.sh` shows a minimal example of installing Python packages in a container using conda (see `dlc-conda-example.def`).
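+
+The `build_dlc-*.sh` scripts essentially wrap an Apptainer build call; a minimal sketch of the underlying command (the image file name is an assumption, check the script for the exact invocation and flags):
+
+```bash
+# sketch only: the image file name is assumed, see build_dlc-dlwgpu.sh for the real command
+apptainer build dlc-dlwgpu.sif dlc-dlwgpu.def
+```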
 
 If you encounter errors while building the container, have a look at `build_dlc-dlwgpu.log` or `build_dlc-conda-example.log`. You can also use `cat` (in a different terminal) to see the progress of the image building process.
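+
+For example, to follow the build progress from a second terminal (log file names as written by the build scripts above):
+
+```bash
+tail -f build_dlc-dlwgpu.log    # or: cat build_dlc-dlwgpu.log
+```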
@@ -73,7 +73,7 @@ Below are two examples of how to use HPC JupyterHub. The first example shows the
 
 ## Using the container via CLI on the HPC 🏭
 
-The full potential of an HPC cluster can only be utilised using the command line interface (CLI). Workflows can be optimized for the available hardware, such as different accelerators (e.g. GPUs, NPU, ...) and highly parallel workflows. Here a simple workflow using our workshops example is shown, based on the same container, that was used in JupyterHub. For more details please have a look at our [documentation](https://docs.hpc.gwdg.de/index.html), e.g. on [Slurm/GPU Usage](https://docs.hpc.gwdg.de/usage_guide/slurm/gpu_usage/index.html) and [GPU Partitions](https://docs.hpc.gwdg.de/compute_partitions/gpu_partitions/index.html).
+The full potential of an HPC cluster can only be utilised via the command line interface (CLI). Workflows can be optimised for the available hardware, such as different accelerators (e.g. GPUs, NPUs, ...), and run in a highly parallel fashion. Here, a simple workflow based on our workshop example is shown, using the same container that was used in JupyterHub. For more details, please have a look at our [documentation](https://docs.hpc.gwdg.de/index.html), e.g. on [Slurm/GPU Usage](https://docs.hpc.gwdg.de/how_to_use/slurm/gpu_usage/index.html) and [GPU Partitions](https://docs.hpc.gwdg.de/how_to_use/compute_partitions/gpu_partitions/index.html).
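+
+As a minimal sketch, running the workshop container directly on a node could look like the following (image and script names are assumptions for illustration; `--nv` makes the node's NVIDIA GPU visible inside the container):
+
+```bash
+# sketch: image and training script names are assumed
+apptainer exec --nv dlc-dlwgpu.sif python train.py
+```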
 
 ### Interactive HPC usage
 
@@ -82,7 +82,7 @@ Using an HPC cluster interactively gives you direct access to the cluster's comp
 ```bash
 cd path/to/deep-learning-with-gpu-cores/code/
 
-# get available GPUs with (see docs.hpc.gwdg.de/usage_guide/slurm/gpu_usage/)
+# list available GPUs with the following command (see https://docs.hpc.gwdg.de/how_to_use/slurm/gpu_usage/index.html)
 # sinfo -o "%25N  %5c  %10m  %32f  %10G %18P " | grep gpu
 
 # grete
@@ -106,7 +106,7 @@ exit
 
 ### HPC usage via batch scripts
 
-Finally, the defined workflow can be submitted as a job to the workload manager [SLURM](https://docs.hpc.gwdg.de/usage_guide/slurm/index.html). To do this, a job script needs to be defined and submitted via `sbatch`. As you already have a lot of experience with the containerised software environment, starting on your local machine and JupyterHub, a major point of failure when scaling up your analysis and moving to HPC systems has become unlikely. 
+Finally, the defined workflow can be submitted as a job to the workload manager [Slurm](https://docs.hpc.gwdg.de/how_to_use/slurm/index.html). To do this, a job script needs to be defined and submitted via `sbatch`. Because you have already gained experience with the containerised software environment, first on your local machine and then in JupyterHub, a major point of failure when scaling up your analysis and moving to HPC systems has become unlikely.
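+
+A minimal sketch of what such a job script could contain (the `submit_train_dlc.sh` shipped with this repository is authoritative; partition, time limit, image and script names below are placeholder assumptions):
+
+```bash
+#!/bin/bash
+#SBATCH --job-name=train-dlc
+#SBATCH -p grete                 # example GPU partition, see the partition docs linked above
+#SBATCH -G 1                     # request one GPU
+#SBATCH --time=01:00:00
+
+# run the training inside the container built earlier (names assumed)
+apptainer exec --nv dlc-dlwgpu.sif python train.py
+```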
 
 ```bash
 sbatch path/to/deep-learning-with-gpu-cores/code/submit_train_dlc.sh
-- 
GitLab