@robintux
Created February 25, 2026 19:46
import torch

# Assumes `tokenizer`, `model`, and `device` are already defined elsewhere,
# e.g. a Hugging Face seq2seq model loaded with AutoModelForSeq2SeqLM and
# moved to the target device with `model.to(device)`.

def generar_respuesta(prompt, max_length=200, num_beams=4):
    """Generate a model response for the given prompt."""
    # Tokenize the prompt, truncating to the model's 512-token context window
    inputs = tokenizer(prompt, return_tensors="pt", truncation=True, max_length=512).to(device)
    with torch.no_grad():  # inference only; no gradients needed
        outputs = model.generate(
            **inputs,
            max_length=max_length,
            min_length=20,
            num_beams=num_beams,       # beam search width
            length_penalty=1.5,        # >1.0 favors longer outputs
            no_repeat_ngram_size=3,    # block repeated 3-grams
            early_stopping=True,       # stop once all beams reach EOS
            do_sample=False,           # deterministic decoding
            pad_token_id=tokenizer.pad_token_id,
            eos_token_id=tokenizer.eos_token_id,
        )
    respuesta = tokenizer.decode(outputs[0], skip_special_tokens=True)
    return respuesta