You can use the extended streaming feature to stream audio content to
Dialogflow and receive human agent suggestions in return.
Normally, you half-close the stream, or otherwise tell the Dialogflow API when to end it, to generate the final transcript and Agent Assist suggestions. This happens at conversation turns, when the recognition result (StreamingAnalyzeContentResponse.recognition_result) returns is_final=true.
Extended streaming reduces the need for half-closing at conversation turns. It extends the connection timeout to three minutes, during which you can send audio streams without half-closing. The Dialogflow API automatically sends the final transcripts and Agent Assist suggestions back to the stream. You only restart the stream if it times out.
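For context, the following is a minimal sketch of that conventional per-turn pattern using the v2beta1 Python client library. The participant name, audio file, and chunk size are placeholder values, and the exact client-library surface may differ slightly; the key point is that the request iterator ends (half-closes) after one turn of audio, which triggers the final transcript and suggestions for that turn.

```python
from google.cloud import dialogflow_v2beta1 as dialogflow

# Placeholder values -- replace with your own resources.
PARTICIPANT = (
    "projects/my-project/locations/global/conversations/my-conv/participants/my-agent"
)
AUDIO_FILE = "turn.raw"  # One turn of 16-bit linear PCM at 16 kHz.
CHUNK_SIZE = 4096


def one_turn_requests():
    """Yields the audio config, then one turn of audio, then stops (half-close)."""
    yield dialogflow.StreamingAnalyzeContentRequest(
        participant=PARTICIPANT,
        audio_config=dialogflow.InputAudioConfig(
            audio_encoding=dialogflow.AudioEncoding.AUDIO_ENCODING_LINEAR_16,
            sample_rate_hertz=16000,
            language_code="en-US",
        ),
    )
    with open(AUDIO_FILE, "rb") as audio:
        while chunk := audio.read(CHUNK_SIZE):
            yield dialogflow.StreamingAnalyzeContentRequest(input_audio=chunk)
    # Returning here half-closes the stream, which prompts the API to send
    # the final transcript and suggestions for this conversation turn.


client = dialogflow.ParticipantsClient()
for response in client.streaming_analyze_content(requests=one_turn_requests()):
    if response.recognition_result.is_final:
        print("Final transcript:", response.recognition_result.transcript)
```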
Streaming basics
The Agent Assist extended streaming feature is similar to audio
streaming for CCAI Transcription. Your system
streams audio data to the API, and Dialogflow streams back
StreamingAnalyzeContentResponse data. The returned data includes suggestions
for your human agents.
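The sketch below shows one way to consume that returned data with the v2beta1 Python client. The response field names used here (recognition_result, human_agent_suggestion_results) are assumptions based on the v2beta1 API surface and may differ in other versions.

```python
from typing import Iterable

from google.cloud import dialogflow_v2beta1 as dialogflow


def handle_responses(
    responses: Iterable[dialogflow.StreamingAnalyzeContentResponse],
) -> None:
    """Prints transcripts and human agent suggestions as they stream back."""
    for response in responses:
        result = response.recognition_result
        if result.transcript:
            marker = "final" if result.is_final else "interim"
            print(f"[{marker}] {result.transcript}")
        # Suggestion results intended for the human agent (assumed field name).
        # Each result carries one suggestion type (articles, FAQ answers,
        # smart replies, and so on) or an error status.
        for suggestion_result in response.human_agent_suggestion_results:
            print("Suggestion result:", suggestion_result)
```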
Extended Streaming supports only the Agent Assist stage. See conversation
stage. To use this feature, do the following (see the example after these steps):
1. Call the streamingAnalyzeContent method and set the following fields:
   - StreamingAnalyzeContentRequest.audio_config.audio_encoding:
     AUDIO_ENCODING_LINEAR_16 or AUDIO_ENCODING_MULAW
   - enable_extended_streaming: true
2. The first streamingAnalyzeContent request prepares the stream and sets
   your audio configuration.
3. In subsequent requests, send audio bytes to the stream.
4. As long as you continue to send audio, you keep receiving suggestions.
   You don't need to manually close the stream; it closes automatically
   once Agent Assist detects that utterances have stopped.
5. Restart the stream (which includes resending the initial audio
   configuration) in the following cases:
   - The stream is broken (it stopped when it wasn't supposed to).
   - Your audio data is approaching the automatic three-minute timeout.
   - You received a retryable error. You can retry up to three times.
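Putting these steps together, the following sketch uses the v2beta1 Python client library. The participant name and the get_audio_chunks() source are placeholders, and the exact placement of enable_extended_streaming and the suggestion field on the response are assumptions based on this guide; treat this as a sketch rather than a definitive implementation.

```python
import time

from google.api_core import exceptions
from google.cloud import dialogflow_v2beta1 as dialogflow

# Placeholder values -- replace with your own resources.
PARTICIPANT = (
    "projects/my-project/locations/global/conversations/my-conv/participants/my-agent"
)
STREAM_LIMIT_SECONDS = 170  # Restart shortly before the three-minute timeout.
MAX_RETRIES = 3


def get_audio_chunks():
    """Placeholder audio source; replace with your telephony or microphone feed."""
    while True:
        yield b"\x00" * 3200  # 100 ms of 16 kHz, 16-bit PCM silence.
        time.sleep(0.1)


def extended_streaming_requests(deadline):
    # The first request prepares the stream: it carries the audio
    # configuration and enables extended streaming (field name per this guide).
    yield dialogflow.StreamingAnalyzeContentRequest(
        participant=PARTICIPANT,
        audio_config=dialogflow.InputAudioConfig(
            audio_encoding=dialogflow.AudioEncoding.AUDIO_ENCODING_LINEAR_16,
            sample_rate_hertz=16000,
            language_code="en-US",
        ),
        enable_extended_streaming=True,
    )
    # Subsequent requests carry only audio bytes.
    for chunk in get_audio_chunks():
        if time.monotonic() >= deadline:
            return  # Stop sending so the stream can be restarted cleanly.
        yield dialogflow.StreamingAnalyzeContentRequest(input_audio=chunk)


def run_extended_stream():
    client = dialogflow.ParticipantsClient()
    retries = 0
    while True:
        deadline = time.monotonic() + STREAM_LIMIT_SECONDS
        try:
            responses = client.streaming_analyze_content(
                requests=extended_streaming_requests(deadline)
            )
            for response in responses:
                if response.recognition_result.is_final:
                    print("Transcript:", response.recognition_result.transcript)
                # Assumed field name for the suggestions streamed back.
                for suggestion in response.human_agent_suggestion_results:
                    print("Suggestion:", suggestion)
            retries = 0  # The stream ended; restart it with a fresh configuration.
        except (exceptions.ServiceUnavailable, exceptions.DeadlineExceeded) as error:
            retries += 1
            if retries > MAX_RETRIES:
                raise
            print(f"Retryable error: {error}; restarting ({retries}/{MAX_RETRIES}).")
```

Each restart resends the first request with the audio configuration, as described in step 5, whether the restart is triggered by the approaching timeout or by a retryable error.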
[[["Easy to understand","easyToUnderstand","thumb-up"],["Solved my problem","solvedMyProblem","thumb-up"],["Other","otherUp","thumb-up"]],[["Hard to understand","hardToUnderstand","thumb-down"],["Incorrect information or sample code","incorrectInformationOrSampleCode","thumb-down"],["Missing the information/samples I need","missingTheInformationSamplesINeed","thumb-down"],["Other","otherDown","thumb-down"]],["Last updated 2025-08-28 UTC."],[[["\u003cp\u003eExtended Streaming allows streaming audio content to Dialogflow and receiving human agent suggestions in return, similar to audio streaming for CCAI Transcription.\u003c/p\u003e\n"],["\u003cp\u003eThis feature, available "as is" and with potentially limited support under the Pre-GA Offerings Terms, is accessible via the \u003ccode\u003estreamingAnalyzeContent\u003c/code\u003e method in the RPC API and client libraries.\u003c/p\u003e\n"],["\u003cp\u003eTo initiate Extended Streaming, users must set \u003ccode\u003eenable_extended_streaming\u003c/code\u003e to \u003ccode\u003etrue\u003c/code\u003e and provide appropriate audio configurations (\u003ccode\u003eAUDIO_ENCODING_LINEAR_16\u003c/code\u003e or \u003ccode\u003eAUDIO_ENCODING_MULAW\u003c/code\u003e).\u003c/p\u003e\n"],["\u003cp\u003eThe stream remains active as long as audio data is sent, automatically closing when utterances stop, and it will automatically timeout after 3 minutes of activity.\u003c/p\u003e\n"],["\u003cp\u003eUsers should restart the stream if it breaks unexpectedly, if approaching the three-minute timeout, or after receiving a re-tryable error (up to three retries are allowed).\u003c/p\u003e\n"]]],[],null,["# Extended streaming\n\n| **Preview**\n|\n|\n| This feature is subject to the \"Pre-GA Offerings Terms\" in the General Service Terms section\n| of the [Service Specific Terms](/terms/service-terms#1).\n|\n| Pre-GA features are available \"as is\" and might have limited support.\n|\n| For more information, see the\n| [launch stage descriptions](/products#product-launch-stages).\n\nYou can use the extended streaming feature to stream audio content to\nDialogflow and stream human agent suggestions back.\n\nNormally, you half-close or tell the Dialogflow API when to end the stream to generate the final transcript and Agent Assist suggestions. This happens at conversation turns, where the API receives the parameter `is_final=true` from the recognition result, `StreamingAnalyzeContentResponse.recognition_result`.\n\nExtended streaming reduces the need for half-closing at conversation turns. It extends the connection timeout to three minutes, during which you can send audio streams without half-closing. The Dialogflow API automatically sends the final transcripts and Agent Assist suggestions back to the stream. You only restart the stream if it times out.\n| **Note:** Streaming is supported by the RPC API and client libraries only.\n\nStreaming basics\n----------------\n\nThe Agent Assist extended streaming feature is similar to [audio\nstreaming](/agent-assist/docs/transcription) for CCAI Transcription. Your system\nstreams audio data to the API, and Dialogflow streams back\n`StreamingAnalyzeContentResponse` data. The returned data includes suggestions\nfor your human agents.\n| **Note:** Streaming automatically times out after three minutes. 
If your conversation lasts longer than three minutes, you can handle the timeout by closing and re-opening the stream.\n\nTo use Extended Streaming, call the\n[`streamingAnalyzeContent`](/dialogflow/es/docs/reference/rpc/google.cloud.dialogflow.v2beta1#google.cloud.dialogflow.v2beta1.Participants.StreamingAnalyzeContent)\nmethod.\n\nExtended Streaming only supports Agent Assist stage. See [conversation\nstage](/agent-assist/docs/basics). To use this feature:\n\n1. Call the `streamingAnalyzeContent` method and set the following fields:\n - `StreamingAnalyzeContentRequest.audio_config.audio_encoding`: `AUDIO_ENCODING_LINEAR_16` or `AUDIO_ENCODING_MULAW`\n - `enable_extended_streaming`: `true`.\n2. The first `streamingAnalyzeContent` request prepares the stream and sets your audio configuration.\n3. In subsequent requests, you send audio bytes to the stream.\n4. As long as you continue to send audio, you will keep receiving suggestions. You don't need to manually close the stream. It will close automatically once Agent Assist detects that utterances have stopped.\n5. Restart the stream (which includes resending the initial audio configuration) in the following cases:\n - The stream is broken (the stream stopped when it wasn't supposed to).\n - Your audio data is approaching the automatic timeout at 3 minutes.\n - You received a re-tryable error. You can retry up to three times."]]