meld.comm.MPICommunicator
- class meld.comm.MPICommunicator(n_atoms, n_replicas, timeout=600)[source]
Bases: ICommunicator
Class to handle communications between leader and workers using MPI.
Note
Creating an MPICommunicator will not actually initialize MPI. To do that, call initialize().
- __init__(n_atoms, n_replicas, timeout=600)[source]
Initialize an MPICommunicator
- Parameters
n_atoms (int) – number of atoms
n_replicas (int) – number of replicas
timeout (int) – maximum time to wait before aborting
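Example (a minimal sketch of constructing and starting the communicator; the atom count, replica count, and print statements are illustrative assumptions, not MELD defaults):

from meld.comm import MPICommunicator

n_atoms = 5000      # hypothetical system size
n_replicas = 16     # hypothetical replica ladder size

comm = MPICommunicator(n_atoms, n_replicas, timeout=600)
comm.initialize()   # MPI is only started here, not in __init__

if comm.is_leader():
    print(f"leader coordinating {comm.n_workers} workers")
else:
    print(f"worker rank {comm.rank}")
comm.barrier()      # wait until all workers reach this point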
Methods
__init__(n_atoms, n_replicas[, timeout]) – Initialize an MPICommunicator
barrier() – Wait until all workers reach this point
broadcast_all_states_to_workers(states) – Broadcast all states to all workers.
distribute_alphas_to_workers(all_alphas) – Distribute alphas to workers
distribute_states_to_workers(all_states) – Distribute a block of states to each worker.
gather_energies_from_workers(energies_on_leader) – Receive energies from each worker.
gather_states_from_workers(state_on_leader) – Receive states from all workers
initialize() – Initialize and start MPI
is_leader() – Is this the leader node?
negotiate_device_id() – Negotiate CUDA device id
receive_all_states_from_leader() – Receive all states from leader.
receive_alphas_from_leader() – Receive a block of alphas from leader.
receive_states_from_leader() – Get the block of states to run for this step
send_energies_to_leader(energies) – Send a block of energies to the leader.
send_states_to_leader(block) – Send a block of states to the leader
Attributes
X – alias of TypeVar('X')
n_atoms – number of atoms
n_replicas – number of replicas
n_workers – number of workers
rank – rank of this worker
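The methods summarized above pair up across the leader and the workers. The following is a hedged sketch of how one exchange step might use them, continuing from the construction example above; simulate and compute_energies are hypothetical stand-ins for the surrounding MELD runner code, and all_states and all_alphas are assumed to be built elsewhere:

if comm.is_leader():
    my_states = comm.distribute_states_to_workers(all_states)
    my_alphas = comm.distribute_alphas_to_workers(all_alphas)
    my_states = simulate(my_states, my_alphas)               # hypothetical helper
    all_states = comm.gather_states_from_workers(my_states)
    comm.broadcast_all_states_to_workers(all_states)
    my_energies = compute_energies(all_states, my_alphas)    # hypothetical helper
    energy_matrix = comm.gather_energies_from_workers(my_energies)
else:
    my_states = comm.receive_states_from_leader()
    my_alphas = comm.receive_alphas_from_leader()
    my_states = simulate(my_states, my_alphas)               # hypothetical helper
    comm.send_states_to_leader(my_states)
    all_states = comm.receive_all_states_from_leader()
    my_energies = compute_energies(all_states, my_alphas)    # hypothetical helper
    comm.send_energies_to_leader(my_energies)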
- broadcast_all_states_to_workers(states)[source]
Broadcast all states to all workers.
- Parameters
states (Sequence[IState]) – a list of states
- Return type
None
- distribute_alphas_to_workers(all_alphas)[source]
Distribute alphas to workers
- Parameters
all_alphas (List[float]) – the alpha values to be distributed
- Return type
List[float]
- Returns
the block of alpha values for the leader
- gather_energies_from_workers(energies_on_leader)[source]
Receive energies from each worker.
- Parameters
energies_on_leader (ndarray) – the energies from the leader
- Return type
ndarray
- Returns
a square matrix of the energies of every state on every replica, used for replica exchange
Note
Each row of the output matrix represents a different Hamiltonian. Each column represents a different state. Each worker will compute multiple rows of the output matrix.
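A small illustration of the resulting layout (continuing the sketch above; my_energies is the leader's own block of energies):

energy_matrix = comm.gather_energies_from_workers(my_energies)
# One row per Hamiltonian (alpha value), one column per state, so the
# result is an (n_replicas, n_replicas) ndarray.
assert energy_matrix.shape == (comm.n_replicas, comm.n_replicas)
# energy_matrix[i, j] is the energy of state j evaluated with Hamiltonian i,
# which is the input to the replica-exchange acceptance test.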
- gather_states_from_workers(state_on_leader)[source]
Receive states from all workers
- Parameters
state_on_leader – the block of states on the leader after simulating
- Return type
List[IState]
- Returns
A list of states, one from each replica.
- is_leader()[source]
Is this the leader node?
- Return type
bool
- Returns
True if we are the leader, otherwise False
- property n_atoms: int
number of atoms
- property n_replicas: int
number of replicas
- property n_workers: int
number of workers
- negotiate_device_id()[source]
Negotiate CUDA device id
- Return type
int
- Returns
the CUDA device id to use
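A brief sketch of per-worker GPU selection; how the returned id is consumed downstream (for example as a CUDA platform property of the simulation engine) is an assumption, not part of this method:

device_id = comm.negotiate_device_id()
# Hypothetical downstream use, e.g. selecting the GPU for a CUDA-based engine:
# platform_properties = {"DeviceIndex": str(device_id)}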
- property rank: int
rank of this worker
- receive_all_states_from_leader()[source]
Receive all states from leader.
- Return type
Sequence[IState]
- Returns
a list of states to calculate the energy of
- receive_alphas_from_leader()[source]
Receive a block of alphas from leader.
- Return type
List[float]
- Returns
the block of alpha values for this worker
- receive_states_from_leader()[source]
Get the block of states to run for this step
- Return type
List[IState]
- Returns
the block of states to run for this step