Starting April 29, 2025, the Gemini 1.5 Pro and Gemini 1.5 Flash models are not available in projects that have not used them before, including new projects. For more details, see Model versions and lifecycle.
# Generate text from a video
This sample demonstrates how to use the Gemini API to generate text from a video.
Code sample
-----------
[[["Fácil de entender","easyToUnderstand","thumb-up"],["Meu problema foi resolvido","solvedMyProblem","thumb-up"],["Outro","otherUp","thumb-up"]],[["Difícil de entender","hardToUnderstand","thumb-down"],["Informações incorretas ou exemplo de código","incorrectInformationOrSampleCode","thumb-down"],["Não contém as informações/amostras de que eu preciso","missingTheInformationSamplesINeed","thumb-down"],["Problema na tradução","translationIssue","thumb-down"],["Outro","otherDown","thumb-down"]],[],[],[],null,["# Generate text from a video\n\nThis sample demonstrates how to use the Gemini API to generate text from a video\n\nCode sample\n-----------\n\n### C#\n\n\nBefore trying this sample, follow the C# setup instructions in the\n[Vertex AI quickstart using\nclient libraries](/vertex-ai/docs/start/client-libraries).\n\n\nFor more information, see the\n[Vertex AI C# API\nreference documentation](/dotnet/docs/reference/Google.Cloud.AIPlatform.V1/latest).\n\n\nTo authenticate to Vertex AI, set up Application Default Credentials.\nFor more information, see\n\n[Set up authentication for a local development environment](/docs/authentication/set-up-adc-local-dev-environment).\n\n\n using https://cloud.google.com/dotnet/docs/reference/Google.Api.Gax/latest/Google.Api.Gax.Grpc.html;\n using https://cloud.google.com/dotnet/docs/reference/Google.Cloud.AIPlatform.V1/latest/Google.Cloud.AIPlatform.V1.html;\n using System.Text;\n using System.Threading.Tasks;\n\n public class MultimodalVideoInput\n {\n public async Task\u003cstring\u003e GenerateContent(\n string projectId = \"your-project-id\",\n string location = \"us-central1\",\n string publisher = \"google\",\n string model = \"gemini-2.0-flash-001\"\n )\n {\n var predictionServiceClient = new https://cloud.google.com/dotnet/docs/reference/Google.Cloud.AIPlatform.V1/latest/Google.Cloud.AIPlatform.V1.PredictionServiceClientBuilder.html\n {\n Endpoint = $\"{location}-aiplatform.googleapis.com\"\n }.Build();\n\n var generateContentRequest = new https://cloud.google.com/dotnet/docs/reference/Google.Cloud.AIPlatform.V1/latest/Google.Cloud.AIPlatform.V1.GenerateContentRequest.html\n {\n Model = $\"projects/{projectId}/locations/{location}/publishers/{publisher}/models/{model}\",\n Contents =\n {\n new https://cloud.google.com/dotnet/docs/reference/Google.Cloud.AIPlatform.V1/latest/Google.Cloud.AIPlatform.V1.Content.html\n {\n Role = \"USER\",\n Parts =\n {\n new https://cloud.google.com/dotnet/docs/reference/Google.Cloud.AIPlatform.V1/latest/Google.Cloud.AIPlatform.V1.Part.html { Text = \"What's in the video?\" },\n new https://cloud.google.com/dotnet/docs/reference/Google.Cloud.AIPlatform.V1/latest/Google.Cloud.AIPlatform.V1.Part.html { FileData = new() { MimeType = \"video/mp4\", FileUri = \"gs://cloud-samples-data/video/animals.mp4\" }}\n }\n }\n }\n };\n\n using PredictionServiceClient.StreamGenerateContentStream response = predictionServiceClient.StreamGenerateContent(generateContentRequest);\n\n StringBuilder fullText = new();\n\n AsyncResponseStream\u003cGenerateContentResponse\u003e responseStream = response.GetResponseStream();\n await foreach (https://cloud.google.com/dotnet/docs/reference/Google.Cloud.AIPlatform.V1/latest/Google.Cloud.AIPlatform.V1.GenerateContentResponse.html responseItem in responseStream)\n {\n fullText.Append(responseItem.Candidates[0].https://cloud.google.com/dotnet/docs/reference/Google.Cloud.AIPlatform.V1/latest/Google.Cloud.AIPlatform.V1.Content.html.Parts[0].Text);\n }\n return fullText.ToString();\n }\n }\n\n### 
### Node.js

Before trying this sample, follow the Node.js setup instructions in the
[Vertex AI quickstart using client libraries](/vertex-ai/docs/start/client-libraries).

For more information, see the
[Vertex AI Node.js API reference documentation](/nodejs/docs/reference/aiplatform/latest).

To authenticate to Vertex AI, set up Application Default Credentials.
For more information, see
[Set up authentication for a local development environment](/docs/authentication/set-up-adc-local-dev-environment).

    const {VertexAI} = require('@google-cloud/vertexai');

    /**
     * TODO(developer): Update these variables before running the sample.
     */
    async function sendMultiModalPromptWithVideo(
      projectId = 'PROJECT_ID',
      location = 'us-central1',
      model = 'gemini-2.0-flash-001'
    ) {
      // Initialize Vertex with your Cloud project and location
      const vertexAI = new VertexAI({project: projectId, location: location});

      const generativeVisionModel = vertexAI.getGenerativeModel({
        model: model,
      });

      // Pass multimodal prompt
      const request = {
        contents: [
          {
            role: 'user',
            parts: [
              {
                fileData: {
                  fileUri: 'gs://cloud-samples-data/video/animals.mp4',
                  mimeType: 'video/mp4',
                },
              },
              {
                text: 'What is in the video?',
              },
            ],
          },
        ],
      };

      // Create the response
      const response = await generativeVisionModel.generateContent(request);
      // Wait for the response to complete
      const aggregatedResponse = await response.response;
      // Select the text from the response
      const fullTextResponse =
        aggregatedResponse.candidates[0].content.parts[0].text;

      console.log(fullTextResponse);
    }

What's next
-----------

To search and filter code samples for other Google Cloud products, see the
[Google Cloud sample browser](/docs/samples?product=generativeaionvertexai).