To forward a port using srun in a Slurm environment, you typically want to set up an SSH tunnel to a compute node where your job is running. Here’s how you can do it:
Slurm itself does not directly support port forwarding — options such as --tunnel that you may see mentioned are site-specific wrappers rather than standard srun flags. In practice, you combine srun with SSH tunneling.
First, start an interactive job on a compute node:

```bash
srun --pty --nodes=1 --ntasks-per-node=1 --cpus-per-task=1 --mem=4G --time=01:00:00 bash
```
Once your job is running, you can set up an SSH tunnel from your local machine to the compute node.
When your srun job starts, note the hostname of the compute node (your shell prompt usually shows it), e.g., node123.
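If the hostname isn't obvious from the prompt, you can print it from inside the interactive shell, or ask squeue from the login node. A minimal sketch (the squeue format string `%N` prints the node list):

```bash
# Inside the srun interactive shell, print the node you landed on:
hostname

# Or, from the login node (if squeue is on your PATH), list the nodes
# of your running jobs:
if command -v squeue >/dev/null; then
  squeue -u "$USER" -h -t RUNNING -o "%N"
fi
```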
On your local machine, run:
```bash
ssh -L <local_port>:localhost:<remote_port> -J <your_username>@<login_node> <your_username>@<compute_node>
```

Here:

- `<local_port>`: the port on your local machine (e.g., 8888).
- `<remote_port>`: the port on the compute node (e.g., 8888).
- `<your_username>`: your username on the cluster.
- `<login_node>`: the login node of the cluster.
- `<compute_node>`: the compute node where your job is running (e.g., node123).

Note that `-J` names the jump host (the login node); the final destination is the compute node. For example:

```bash
ssh -L 8888:localhost:8888 -J benjamin@login.cluster.edu benjamin@node123
```

This forwards localhost:8888 on your machine to node123:8888. Now, any service running on `<remote_port>` (e.g., a Jupyter notebook) on the compute node will be accessible on your local machine at `localhost:<local_port>`.
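Before relying on the tunnel, it can help to confirm something is actually listening on the remote port. A quick sketch, run on the compute node — port 8899 and Python's built-in http.server are arbitrary stand-ins for your real service:

```bash
# On the compute node: start a throwaway HTTP server (port 8899 is arbitrary)
python3 -m http.server 8899 --bind 127.0.0.1 &
server_pid=$!
sleep 1

# Still on the compute node: the port should answer (HTTP 200 for the
# default directory listing)
curl -s -o /dev/null -w "%{http_code}\n" http://127.0.0.1:8899/

kill "$server_pid"
```

If this prints a status code, the port is live and worth tunneling to; if curl cannot connect, fix the service before debugging the SSH side.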
Alternatively, some Slurm setups provide a `--tunnel` option for srun (this is a site-specific addition, not part of stock Slurm — check your cluster's documentation):

```bash
srun --tunnel=8888:8888 --pty bash
```
Tip: if you tunnel only as far as the login node and point the forward at the compute node (e.g., `ssh -L 8888:node123:8888 <your_username>@<login_node>`), the service on the compute node must listen on 0.0.0.0 or the node's hostname, not just localhost, because the connection arrives over the network from the login node.
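You can check which address a service is bound to by inspecting the listening sockets. A minimal sketch, assuming `ss` from iproute2 is available (port 8898 is an arbitrary example):

```bash
# A server bound to 0.0.0.0 accepts connections from other hosts;
# one bound to 127.0.0.1 is reachable only from the node itself.
python3 -m http.server 8898 --bind 0.0.0.0 &
pid=$!
sleep 1

# The wildcard bind shows up as 0.0.0.0:8898 (or *:8898) in the listing:
command -v ss >/dev/null && ss -tln | grep 8898 || true

kill "$pid"
```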