Google Cloud Dialogflow v2beta1 API - Class StreamingDetectIntentResponse (1.0.0-beta23)
public sealed class StreamingDetectIntentResponse : IMessage<StreamingDetectIntentResponse>, IEquatable<StreamingDetectIntentResponse>, IDeepCloneable<StreamingDetectIntentResponse>, IBufferMessage, IMessage
Reference documentation and code samples for the Google Cloud Dialogflow v2beta1 API class StreamingDetectIntentResponse.
The top-level message returned from the
StreamingDetectIntent method.
Multiple response messages can be returned in order:

1. If the StreamingDetectIntentRequest.input_audio field was set, the
   recognition_result field is populated for one or more messages. See the
   [StreamingRecognitionResult][google.cloud.dialogflow.v2beta1.StreamingRecognitionResult]
   message for details about the result message sequence.

2. The next message contains response_id, query_result,
   alternative_query_results, and optionally webhook_status if a webhook
   was called.

3. If output_audio_config was specified in the request or an agent-level
   speech synthesizer is configured, all subsequent messages contain
   output_audio and output_audio_config.
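A caller typically branches on which of these fields is populated as messages arrive. The following is a minimal sketch of the receive side, assuming the standard GAX bidirectional-streaming surface (SessionsClient.StreamingDetectIntent() returning a SessionsClient.StreamingDetectIntentStream); opening the call and writing the StreamingDetectIntentRequest messages are omitted.

```csharp
using System;
using System.Threading.Tasks;
using Google.Cloud.Dialogflow.V2Beta1;

public static class StreamingResponseReader
{
    // Reads responses from an already-opened StreamingDetectIntent call and
    // branches on which field is populated, following the ordering above.
    public static async Task ConsumeAsync(SessionsClient.StreamingDetectIntentStream stream)
    {
        await foreach (StreamingDetectIntentResponse response in stream.GetResponseStream())
        {
            if (response.RecognitionResult != null)
            {
                // 1. Interim/final speech recognition results (input_audio was set).
                Console.WriteLine($"Transcript: {response.RecognitionResult.Transcript}");
            }
            else if (response.QueryResult != null)
            {
                // 2. The message carrying the detected intent and any webhook status.
                Console.WriteLine($"Intent: {response.QueryResult.Intent?.DisplayName}");
            }
            else if (!response.OutputAudio.IsEmpty)
            {
                // 3. Synthesized audio, present only if output audio was requested
                //    or an agent-level speech synthesizer is configured.
                Console.WriteLine($"Received {response.OutputAudio.Length} audio bytes");
            }
        }
    }
}
```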
public RepeatedField<QueryResult> AlternativeQueryResults { get; }
If Knowledge Connectors are enabled, there could be more than one result
returned for a given query or event, and this field will contain all
results except for the top one, which is captured in query_result. The
alternative results are ordered by decreasing
QueryResult.intent_detection_confidence. If Knowledge Connectors are
disabled, this field will be empty until multiple responses for regular
intents are supported, at which point those additional results will be
surfaced here.
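As an illustration, the top result and its alternatives can be inspected together; a short sketch assuming a fully received response:

```csharp
using System;
using Google.Cloud.Dialogflow.V2Beta1;

public static class AlternativeResultsInspector
{
    // Prints the top result followed by any Knowledge Connector alternatives,
    // which arrive ordered by decreasing intent_detection_confidence.
    public static void Print(StreamingDetectIntentResponse response)
    {
        Console.WriteLine(
            $"Top: {response.QueryResult?.Intent?.DisplayName} " +
            $"(confidence {response.QueryResult?.IntentDetectionConfidence})");

        foreach (QueryResult alternative in response.AlternativeQueryResults)
        {
            Console.WriteLine(
                $"Alternative: {alternative.Intent?.DisplayName} " +
                $"(confidence {alternative.IntentDetectionConfidence})");
        }
    }
}
```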
public ByteString OutputAudio { get; set; }
The audio data bytes encoded as specified in the request.
Note: The output audio is generated based on the values of default platform
text responses found in the query_result.fulfillment_messages field. If
multiple default text responses exist, they will be concatenated when
generating audio. If no default platform text responses exist, the
generated audio content will be empty.
In some scenarios, multiple output audio fields may be present in the
response structure. In these cases, only the top-most-level audio output
has content.
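When output audio is present it can be written out directly; a minimal sketch, where the destination path is the caller's choice and its extension should match the encoding selected in output_audio_config:

```csharp
using System.IO;
using Google.Cloud.Dialogflow.V2Beta1;

public static class OutputAudioWriter
{
    // Saves the synthesized audio, if any, to the given path; the extension
    // should match the encoding selected in output_audio_config.
    public static void Save(StreamingDetectIntentResponse response, string path)
    {
        if (!response.OutputAudio.IsEmpty)
        {
            File.WriteAllBytes(path, response.OutputAudio.ToByteArray());
        }
    }
}
```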
[[["Easy to understand","easyToUnderstand","thumb-up"],["Solved my problem","solvedMyProblem","thumb-up"],["Other","otherUp","thumb-up"]],[["Hard to understand","hardToUnderstand","thumb-down"],["Incorrect information or sample code","incorrectInformationOrSampleCode","thumb-down"],["Missing the information/samples I need","missingTheInformationSamplesINeed","thumb-down"],["Other","otherDown","thumb-down"]],["Last updated 2025-09-04 UTC."],[],[],null,["# Google Cloud Dialogflow v2beta1 API - Class StreamingDetectIntentResponse (1.0.0-beta23)\n\nVersion latestkeyboard_arrow_down\n\n- [1.0.0-beta23 (latest)](/dotnet/docs/reference/Google.Cloud.Dialogflow.V2Beta1/latest/Google.Cloud.Dialogflow.V2Beta1.StreamingDetectIntentResponse)\n- [1.0.0-beta22](/dotnet/docs/reference/Google.Cloud.Dialogflow.V2Beta1/1.0.0-beta22/Google.Cloud.Dialogflow.V2Beta1.StreamingDetectIntentResponse) \n\n public sealed class StreamingDetectIntentResponse : IMessage\u003cStreamingDetectIntentResponse\u003e, IEquatable\u003cStreamingDetectIntentResponse\u003e, IDeepCloneable\u003cStreamingDetectIntentResponse\u003e, IBufferMessage, IMessage\n\nReference documentation and code samples for the Google Cloud Dialogflow v2beta1 API class StreamingDetectIntentResponse.\n\nThe top-level message returned from the\n`StreamingDetectIntent` method.\n\nMultiple response messages can be returned in order:\n\n1. If the `StreamingDetectIntentRequest.input_audio` field was\n set, the `recognition_result` field is populated for one\n or more messages.\n See the\n \\[StreamingRecognitionResult\\]\\[google.cloud.dialogflow.v2beta1.StreamingRecognitionResult\\]\n message for details about the result message sequence.\n\n2. The next message contains `response_id`, `query_result`,\n `alternative_query_results` and optionally `webhook_status` if a WebHook\n was called.\n\n3. 
If `output_audio_config` was specified in the request or agent-level\n speech synthesizer is configured, all subsequent messages contain\n `output_audio` and `output_audio_config`.\n\nInheritance\n-----------\n\n[object](https://learn.microsoft.com/dotnet/api/system.object) \\\u003e StreamingDetectIntentResponse \n\nImplements\n----------\n\n[IMessage](https://cloud.google.com/dotnet/docs/reference/Google.Protobuf/latest/Google.Protobuf.IMessage-1.html)[StreamingDetectIntentResponse](/dotnet/docs/reference/Google.Cloud.Dialogflow.V2Beta1/latest/Google.Cloud.Dialogflow.V2Beta1.StreamingDetectIntentResponse), [IEquatable](https://learn.microsoft.com/dotnet/api/system.iequatable-1)[StreamingDetectIntentResponse](/dotnet/docs/reference/Google.Cloud.Dialogflow.V2Beta1/latest/Google.Cloud.Dialogflow.V2Beta1.StreamingDetectIntentResponse), [IDeepCloneable](https://cloud.google.com/dotnet/docs/reference/Google.Protobuf/latest/Google.Protobuf.IDeepCloneable-1.html)[StreamingDetectIntentResponse](/dotnet/docs/reference/Google.Cloud.Dialogflow.V2Beta1/latest/Google.Cloud.Dialogflow.V2Beta1.StreamingDetectIntentResponse), [IBufferMessage](https://cloud.google.com/dotnet/docs/reference/Google.Protobuf/latest/Google.Protobuf.IBufferMessage.html), [IMessage](https://cloud.google.com/dotnet/docs/reference/Google.Protobuf/latest/Google.Protobuf.IMessage.html) \n\nInherited Members\n-----------------\n\n[object.GetHashCode()](https://learn.microsoft.com/dotnet/api/system.object.gethashcode) \n[object.GetType()](https://learn.microsoft.com/dotnet/api/system.object.gettype) \n[object.ToString()](https://learn.microsoft.com/dotnet/api/system.object.tostring)\n\nNamespace\n---------\n\n[Google.Cloud.Dialogflow.V2Beta1](/dotnet/docs/reference/Google.Cloud.Dialogflow.V2Beta1/latest/Google.Cloud.Dialogflow.V2Beta1)\n\nAssembly\n--------\n\nGoogle.Cloud.Dialogflow.V2Beta1.dll\n\nConstructors\n------------\n\n### StreamingDetectIntentResponse()\n\n public StreamingDetectIntentResponse()\n\n### StreamingDetectIntentResponse(StreamingDetectIntentResponse)\n\n public StreamingDetectIntentResponse(StreamingDetectIntentResponse other)\n\nProperties\n----------\n\n### AlternativeQueryResults\n\n public RepeatedField\u003cQueryResult\u003e AlternativeQueryResults { get; }\n\nIf Knowledge Connectors are enabled, there could be more than one result\nreturned for a given query or event, and this field will contain all\nresults except for the top one, which is captured in query_result. The\nalternative results are ordered by decreasing\n`QueryResult.intent_detection_confidence`. If Knowledge Connectors are\ndisabled, this field will be empty until multiple responses for regular\nintents are supported, at which point those additional results will be\nsurfaced here.\n\n### DebuggingInfo\n\n public CloudConversationDebuggingInfo DebuggingInfo { get; set; }\n\nDebugging info that would get populated when\n`StreamingDetectIntentRequest.enable_debugging_info` is set to true.\n\n### OutputAudio\n\n public ByteString OutputAudio { get; set; }\n\nThe audio data bytes encoded as specified in the request.\nNote: The output audio is generated based on the values of default platform\ntext responses found in the `query_result.fulfillment_messages` field. If\nmultiple default text responses exist, they will be concatenated when\ngenerating audio. If no default platform text responses exist, the\ngenerated audio content will be empty.\n\nIn some scenarios, multiple output audio fields may be present in the\nresponse structure. 
In these cases, only the top-most-level audio output\nhas content.\n\n### OutputAudioConfig\n\n public OutputAudioConfig OutputAudioConfig { get; set; }\n\nThe config used by the speech synthesizer to generate the output audio.\n\n### QueryResult\n\n public QueryResult QueryResult { get; set; }\n\nThe selected results of the conversational query or event processing.\nSee `alternative_query_results` for additional potential results.\n\n### RecognitionResult\n\n public StreamingRecognitionResult RecognitionResult { get; set; }\n\nThe result of speech recognition.\n\n### ResponseId\n\n public string ResponseId { get; set; }\n\nThe unique identifier of the response. It can be used to\nlocate a response in the training example set or for reporting issues.\n\n### WebhookStatus\n\n public Status WebhookStatus { get; set; }\n\nSpecifies the status of the webhook request."]]
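For example, the webhook outcome can be checked from this field; a brief sketch, relying on WebhookStatus being a google.rpc.Status whose Code of 0 means OK, and assuming the field is left null when no webhook was called:

```csharp
using System;
using Google.Cloud.Dialogflow.V2Beta1;

public static class WebhookStatusChecker
{
    // Reports the webhook outcome; WebhookStatus is a google.rpc.Status and a
    // Code of 0 means OK. It is assumed to be null when no webhook was called.
    public static void Report(StreamingDetectIntentResponse response)
    {
        if (response.WebhookStatus == null)
        {
            Console.WriteLine($"Response {response.ResponseId}: no webhook status reported.");
        }
        else if (response.WebhookStatus.Code == 0)
        {
            Console.WriteLine($"Response {response.ResponseId}: webhook succeeded.");
        }
        else
        {
            Console.WriteLine(
                $"Response {response.ResponseId}: webhook failed " +
                $"({response.WebhookStatus.Code}: {response.WebhookStatus.Message})");
        }
    }
}
```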