Streaming Responses
Very Easy · 50 pts · 0 solves
Instead of waiting for the complete response, modern LLM APIs can send tokens to the user as they're generated. This reduces perceived latency.
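As an illustration of the idea (not part of the challenge itself), here is a minimal sketch simulating token-by-token delivery with a plain Python generator — no real LLM API or network call is involved, and all names are made up for the example:

```python
import time

def generate_tokens():
    """Simulate a model emitting tokens one at a time."""
    for token in ["Hello", ",", " world", "!"]:
        time.sleep(0.01)  # stand-in for per-token generation latency
        yield token

# Streaming consumer: handle each token as soon as it arrives,
# instead of blocking until the full response is ready.
pieces = []
for token in generate_tokens():
    print(token, end="", flush=True)  # user sees partial output immediately
    pieces.append(token)
print()

full_response = "".join(pieces)
```

The user starts seeing output after the first token rather than after the last one, which is exactly the perceived-latency win described above.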
What is this delivery method?
Flag format: CONGRESS{method_in_snake_case}
Hint
Tokens flow to the user in real-time, not all at once.