Token Limits

Very Easy · 50 pts · 0 solves
A model's response to a user is cut off mid-sentence. The context window isn't full, but the output stops abruptly. What parameter caps the length of the generated response? (Note: this is the OpenAI/Anthropic name. Google calls it max_output_tokens. Always check your provider's API reference.)

Flag format: CONGRESS{[parameter_name]}
Example: CONGRESS{temperature}
Hint:
This parameter limits how many tokens the model will generate, independent of context size.