Hi all,
We have been running some code that essentially calls QuTiP's steadystate() for different input parameters. Because we need a larger Hilbert space, each run now takes 4 hours on our laptops, so we would like to explore running them on the cluster.
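For concreteness, each run is more or less of this shape (a minimal sketch only; the Hamiltonian, collapse operator, and dimension below are placeholders, not our actual model):

import numpy as np
import qutip

N = 60  # placeholder Hilbert space truncation (the large truncation is what makes each run slow)

def solve_one(kappa):
    """One parameter point: build the model and return its steady state."""
    a = qutip.destroy(N)
    H = a.dag() * a + 0.5 * (a + a.dag())   # placeholder Hamiltonian
    c_ops = [np.sqrt(kappa) * a]            # placeholder dissipation
    return qutip.steadystate(H, c_ops)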
Some googling suggests that QuTiP's parfor is a basic starting point:
https://qutip.org/docs/latest/guide/guide-parfor.html
But reading further, it seems that QuTiP also has an implementation that can use IPython parallelisation across multiple hosts (see the "Functions" page of the QuTiP 4.7 documentation).
And now my question:
- Will qutip.parallel_map() directly be able to distribute compute tasks across the nodes of the cluster?
If yes, then my follow-up question is: how do I do this in practice? Do I log into the hpc05 node and submit a batch job that runs a Python file calling qutip.parallel_map(), and then the magic just happens?
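Concretely, would a Python file along these lines, submitted as a single batch job, spread the sweep over several nodes, or only over the cores of whichever node the job lands on? (Again just a sketch, reusing the placeholder solve_one from above; the num_cpus value and output name are made up.)

import numpy as np
import qutip

N = 60  # placeholder truncation, as in the sketch above

def solve_one(kappa):
    a = qutip.destroy(N)
    H = a.dag() * a + 0.5 * (a + a.dag())
    return qutip.steadystate(H, [np.sqrt(kappa) * a])

if __name__ == "__main__":
    kappas = np.linspace(0.1, 2.0, 40)          # the parameter sweep
    # parallel_map uses Python multiprocessing under the hood
    results = qutip.parallel_map(solve_one, kappas, num_cpus=24,
                                 progress_bar=True)
    qutip.qsave(results, "steadystate_sweep")   # QuTiP's pickle-based save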
Thanks!
Gary