Streaming response

Can you share any examples or resources on implementing real-time streaming responses from large language models (LLMs), such as OpenAI's models, specifically when integrating with a FastAPI backend for processing and delivery? A rough sketch of the pattern I have in mind is below.
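
For concreteness, here is a minimal sketch of what I'm picturing, assuming the `openai>=1.0` Python SDK and FastAPI are installed and `OPENAI_API_KEY` is set in the environment; the model name and route path are just placeholders:

```python
from fastapi import FastAPI
from fastapi.responses import StreamingResponse
from openai import AsyncOpenAI
from pydantic import BaseModel

app = FastAPI()
client = AsyncOpenAI()  # reads OPENAI_API_KEY from the environment


class ChatRequest(BaseModel):
    prompt: str


@app.post("/chat/stream")
async def chat_stream(request: ChatRequest):
    async def token_generator():
        # stream=True makes the SDK yield chunks as the model produces them
        stream = await client.chat.completions.create(
            model="gpt-4o-mini",  # assumed model name; substitute your own
            messages=[{"role": "user", "content": request.prompt}],
            stream=True,
        )
        async for chunk in stream:
            # each chunk carries an incremental delta; skip empty keep-alive chunks
            if chunk.choices and chunk.choices[0].delta.content:
                yield chunk.choices[0].delta.content

    # plain chunked transfer here; text/event-stream would be the choice for SSE clients
    return StreamingResponse(token_generator(), media_type="text/plain")
```

Is this roughly the right approach, or are there better patterns (e.g. SSE via `text/event-stream`, WebSockets, or handling client disconnects mid-stream) that a production FastAPI backend should use instead?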