You can also manually replicate the results of the pipeline if you'd like.

Tokenize the text and return the `input_ids` as PyTorch tensors:

```py
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("my_awesome_opus_books_model")
inputs = tokenizer(text, return_tensors="pt").input_ids
```

Use the [`~transformers.generation_utils.GenerationMixin.generate`] method to create the translation.
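A minimal sketch of that step, assuming the fine-tuned checkpoint can be loaded with `AutoModelForSeq2SeqLM` and using illustrative generation parameters (`max_new_tokens`, `do_sample`, `top_k`, `top_p` values are only examples):

```py
from transformers import AutoModelForSeq2SeqLM

# Load the fine-tuned translation model (same checkpoint as the tokenizer above)
model = AutoModelForSeq2SeqLM.from_pretrained("my_awesome_opus_books_model")

# Generate the translation from the tokenized inputs; sampling settings are illustrative
outputs = model.generate(inputs, max_new_tokens=40, do_sample=True, top_k=30, top_p=0.95)

# Decode the generated token ids back into text
tokenizer.decode(outputs[0], skip_special_tokens=True)
```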