Multi-GPU Inference

#2
by munikumar - opened

How do I run inference with this model on multiple GPUs?
Can anyone please help me with this? πŸ™
@tahirjm

AI4Bharat org

You will have to spawn multiple processes and load a separate instance of this model in each one, moving it to the corresponding GPU with `.to('cuda:X')`.
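A minimal sketch of that pattern: shard the inputs, spawn one worker process per GPU, and have each worker load its own model instance. The model-loading line is commented out and the `model_inference` stand-in is a hypothetical placeholder (the real call depends on which AI4Bharat model you use), so the sketch runs even without GPUs; the process/shard/queue structure is the part that carries over.

```python
import multiprocessing as mp

def worker(device_id, sentences, queue):
    # Real usage (hypothetical model name) would look roughly like:
    #   model = AutoModel.from_pretrained("ai4bharat/<model>").to(f"cuda:{device_id}")
    # and then run the model on `sentences`. Here a dummy transform
    # stands in for inference so the sketch is runnable anywhere.
    results = [(device_id, s.upper()) for s in sentences]
    queue.put(results)

def run_parallel(sentences, num_gpus):
    # Round-robin shard of the inputs, one shard per GPU/process.
    shards = [sentences[i::num_gpus] for i in range(num_gpus)]
    queue = mp.Queue()
    procs = [mp.Process(target=worker, args=(i, shard, queue))
             for i, shard in enumerate(shards)]
    for p in procs:
        p.start()
    # Drain one result batch per worker before joining.
    outputs = [item for _ in procs for item in queue.get()]
    for p in procs:
        p.join()
    return outputs

if __name__ == "__main__":
    print(run_parallel(["hello", "namaste", "world"], num_gpus=2))
```

With real GPUs you would replace the dummy transform with the model call and set `num_gpus = torch.cuda.device_count()`; `torch.multiprocessing` can be swapped in for `multiprocessing` if you need to share CUDA tensors between processes.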

tahirjm changed discussion status to closed
