How can I run inference with this model on multiple GPUs? Can anyone please help me with this? @tahirjm
You will have to spawn multiple processes and load a separate instance of the model in each one, moving it to that process's GPU with `.to('cuda:X')`.