Parallelising a qutip for loop over the cluster

Hi all,

We have been running some code that essentially calls QuTiP’s steadystate() for different input parameters. Because we now need a larger Hilbert space, each run takes about 4 hours on our laptops, so we would like to explore running them on the cluster.

Some googling suggests that QuTiP’s parfor is a basic starting point:

https://qutip.org/docs/latest/guide/guide-parfor.html

But reading further, it seems that QuTiP also has an implementation that can use IPython parallelisation across multiple hosts:

Functions — QuTiP 4.7 Documentation
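For reference, parallel_map just maps a function over a list of parameter values and collects the results. A minimal sketch of that pattern using only the standard library (multiprocessing.Pool stands in for qutip.parallel_map, and the toy solve_one stands in for the expensive steadystate() call; all names here are illustrative, not from the thread):

```python
from multiprocessing import Pool

def solve_one(g):
    # Stand-in for the real task, which would be something like:
    #   H = build_hamiltonian(g)           # hypothetical helper
    #   rho = qutip.steadystate(H, c_ops)  # the expensive 4-hour step
    # Here we just return a cheap placeholder value.
    return g ** 2

def run_sweep(couplings):
    # qutip.parallel_map(solve_one, couplings) has the same calling shape,
    # but note that both it and Pool.map only use the cores of ONE machine.
    with Pool(processes=2) as pool:
        return pool.map(solve_one, couplings)

if __name__ == "__main__":
    print(run_sweep([0.5, 1.0, 2.0]))
```

The key limitation (and the reason for the question below) is that this parallelises over local cores only; distributing across cluster nodes needs extra machinery.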

And now my question:

  • Will qutip.parallel_map() be able to distribute compute tasks directly across the nodes of the cluster?

If yes, then my follow-up question is: how do I do this in practice? Do I log into the hpc05 node, submit a batch job that executes a Python file calling qutip.parallel_map(), and the magic just happens?

Thanks!
Gary

parfor is definitely not what you want, because the documentation says:

Executes a multi-variable function in parallel on the local machine.

To use parallel_map across nodes, you would need to launch an ipyparallel cluster through PBS. My group uses dask.distributed to launch parallel computations on a cluster from the group JupyterHub. For that we use a small wrapper (Quantum Tinkerer / dask-quantumtinkerer · GitLab) that is specific to our environment.
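To make the PBS route a bit more concrete, here is a rough sketch of what a submission script might look like, assuming ipyparallel is installed on the cluster. The resource requests, profile name, sleep time, and the sweep.py script are all placeholders, not details from this thread:

```shell
#!/bin/bash
#PBS -l nodes=4:ppn=8          # placeholder resource request
#PBS -l walltime=24:00:00

# Start an ipyparallel controller and engines on the allocated nodes
# (the profile name and engine count are illustrative).
ipcluster start --profile=pbs -n 32 --daemonize
sleep 30                       # give the engines time to register

# Run the parameter sweep; the script would use an IPython-parallel-aware
# map (see qutip.ipynbtools in QuTiP 4) to dispatch work to the engines.
python sweep.py

ipcluster stop --profile=pbs
```

This is only a sketch of the moving parts; getting the engines to start on the right nodes usually requires a PBS-specific ipyparallel profile as well.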

OK, thanks, that sounds complicated. I’ll look more at what Marios did; I think that is closer to our IT deployment expertise level.
