This page shows you how to create
a TensorFlow Deep Learning VM Images instance
with TensorFlow and other tools pre-installed. You can create
a TensorFlow instance from Cloud Marketplace within
the Google Cloud console or by using the command line.
Before you begin
Sign in to your Google Cloud account. If you're new to
Google Cloud,
create an account to evaluate how our products perform in
real-world scenarios. New customers also get $300 in free credits to
run, test, and deploy workloads.
In the Google Cloud console, on the project selector page,
select or create a Google Cloud project.
If you are using GPUs with your Deep Learning VM, check the
Quotas page
to ensure that you have enough GPUs
available in your project. If GPUs are not listed on the quotas page, or if you need additional GPU quota, request a quota increase.
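If you prefer the command line, one possible way to inspect per-region quotas (including GPU quotas) is sketched below; it assumes the Google Cloud CLI is installed, and us-west1 is only an example region:

    # List the quotas (metric, limit, usage) for a region; substitute your own region.
    gcloud compute regions describe us-west1 --format="yaml(quotas)"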
Creating a TensorFlow Deep Learning VM instance from the Cloud Marketplace
To create a TensorFlow Deep Learning VM instance
from the Cloud Marketplace, complete the following steps:
1. Go to the Deep Learning VM Cloud Marketplace page in the Google Cloud console.
2. Click Get started.
3. Enter a Deployment name, which becomes the root of your VM name. Compute Engine appends -vm to this name when naming your instance.
4. Select a Zone.
5. Under Machine type, select the specifications that you want for your VM.
6. Under GPUs, select the GPU type and the Number of GPUs. If you don't want to use GPUs, click the Delete GPU button and skip to step 7. Learn more about GPUs.
7. Under Framework, select one of the TensorFlow framework versions.
8. If you're using GPUs, an NVIDIA driver is required. You can install the driver yourself, or select Install NVIDIA GPU driver automatically on first startup.
9. You have the option to select Enable access to JupyterLab via URL instead of SSH (Beta). Enabling this Beta feature lets you access your JupyterLab instance using a URL. Anyone with the Editor or Owner role in your Google Cloud project can access this URL. Currently, this feature only works in the United States, the European Union, and Asia.
10. Select a boot disk type and boot disk size.
11. Select the networking settings that you want.
12. Click Deploy.
If you choose to install NVIDIA drivers, allow 3-5 minutes for the installation to complete.
After the VM is deployed, the page updates with instructions for
accessing the instance.
Creating a TensorFlow Deep Learning VM instance from the command line
To use the Google Cloud CLI to create
a new Deep Learning VM instance,
you must first install and initialize the Google Cloud CLI. You can create a TensorFlow instance with or without GPUs.
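Without GPUs
To provision a Deep Learning VM instance without a GPU, a command along the following lines works; the zone and instance name here are illustrative placeholders, so adjust them for your project:

    # Illustrative values; change the zone and instance name as needed.
    export IMAGE_FAMILY="tf-ent-latest-cpu"
    export ZONE="us-west1-b"
    export INSTANCE_NAME="my-instance"

    # Create a CPU-only instance from the latest TensorFlow Enterprise image family.
    gcloud compute instances create $INSTANCE_NAME \
      --zone=$ZONE \
      --image-family=$IMAGE_FAMILY \
      --image-project=deeplearning-platform-release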
Options:
--image-family must be one of the following:
tf-ent-latest-cpu to get the latest TensorFlow Enterprise 2 image
An earlier TensorFlow or TensorFlow Enterprise image family name (see Choosing an image)
--image-project must be deeplearning-platform-release.
With one or more GPUs
Compute Engine offers the option of adding one or more GPUs to your virtual machine instances. GPUs offer faster processing
for many complex data and machine learning tasks. To learn more about GPUs, see GPUs on Compute Engine.
To provision a Deep Learning VM instance with one or more GPUs:
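The following sketch uses illustrative values for the zone, instance name, GPU type, and GPU count; it requests one NVIDIA V100 GPU and automatic driver installation:

    # Illustrative values; change the zone, instance name, and accelerator as needed.
    export IMAGE_FAMILY="tf-ent-latest-gpu"
    export ZONE="us-west1-b"
    export INSTANCE_NAME="my-instance"

    # Create a GPU instance; the TERMINATE maintenance policy is required for GPUs,
    # and the metadata flag asks Compute Engine to install the NVIDIA driver on first boot.
    gcloud compute instances create $INSTANCE_NAME \
      --zone=$ZONE \
      --image-family=$IMAGE_FAMILY \
      --image-project=deeplearning-platform-release \
      --maintenance-policy=TERMINATE \
      --accelerator="type=nvidia-tesla-v100,count=1" \
      --metadata="install-nvidia-driver=True"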
Options:
--image-family must be one of the following:
tf-ent-latest-gpu to get the latest TensorFlow Enterprise 2 image
An earlier TensorFlow or TensorFlow Enterprise image family name (see Choosing an image)
--image-project must be deeplearning-platform-release.
--maintenance-policy must be TERMINATE. To learn more, see
GPU restrictions.
--accelerator specifies the GPU type to use. It must be
specified in the format
--accelerator="type=TYPE,count=COUNT".
For example: --accelerator="type=nvidia-tesla-p100,count=2".
See the GPU models table for a list of available GPU types and counts.
--metadata is used to specify that the NVIDIA driver should
be installed on your behalf. The value is install-nvidia-driver=True.
If specified, Compute Engine loads the latest stable driver
on first boot and performs the necessary steps (including
a final reboot to activate the driver).
If you've elected to install NVIDIA drivers, allow 3-5 minutes
for the installation to complete.
It might take up to 5 minutes before your VM is fully provisioned. During this time, you won't be able to SSH into the machine. When the installation is complete, to verify that the driver installation was successful, you can SSH in and run nvidia-smi.
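For example, a quick check from your workstation, reusing the illustrative instance name and zone from above:

    # SSH into the instance and run nvidia-smi to confirm the driver is active.
    gcloud compute ssh my-instance \
      --zone=us-west1-b \
      --command="nvidia-smi"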
When you've configured your image, you can save a snapshot of it so that you can start derivative instances without having to wait for the driver installation.
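A minimal sketch of taking that snapshot with gcloud, assuming the boot disk keeps its default name (the same as the instance) and reusing the illustrative names from above:

    # Snapshot the boot disk so future instances can skip driver installation.
    gcloud compute disks snapshot my-instance \
      --zone=us-west1-b \
      --snapshot-names=my-instance-with-driver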
Creating a preemptible instance
You can create a preemptible Deep Learning VM instance. A preemptible
instance is an instance that you can create and run at a much lower price than
normal instances. However, Compute Engine might stop (preempt) these
instances if it needs access to those resources for other tasks.
Preemptible instances always stop after 24 hours. To learn more about preemptible instances, see Preemptible VM instances.
To create a preemptible Deep Learning VM instance:
Follow the instructions above to create a new instance using the
command line. To the gcloud compute instances create command, append the
following:
--preemptible
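For example, the CPU-only command from earlier becomes the following; the values are the same illustrative placeholders:

    # Same creation command as before, with the preemptible flag appended.
    gcloud compute instances create my-instance \
      --zone=us-west1-b \
      --image-family=tf-ent-latest-cpu \
      --image-project=deeplearning-platform-release \
      --preemptible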
What's next
For instructions on connecting to your new Deep Learning VM instance
through the Google Cloud console or the command line, see Connecting to
Instances. Your instance name
is the deployment name you specified with -vm appended.
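As a quick illustration, if your deployment name were my-tf-instance (a hypothetical name), you could connect with a command like this:

    # Connect over SSH; Compute Engine appended -vm to the deployment name.
    gcloud compute ssh my-tf-instance-vm --zone=us-west1-b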
[[["Facile da capire","easyToUnderstand","thumb-up"],["Il problema รจ stato risolto","solvedMyProblem","thumb-up"],["Altra","otherUp","thumb-up"]],[["Difficile da capire","hardToUnderstand","thumb-down"],["Informazioni o codice di esempio errati","incorrectInformationOrSampleCode","thumb-down"],["Mancano le informazioni o gli esempi di cui ho bisogno","missingTheInformationSamplesINeed","thumb-down"],["Problema di traduzione","translationIssue","thumb-down"],["Altra","otherDown","thumb-down"]],["Ultimo aggiornamento 2025-09-04 UTC."],[[["\u003cp\u003eThis page guides users on creating a TensorFlow Deep Learning VM instance with pre-installed TensorFlow and other tools, available through the Google Cloud Marketplace or the command line.\u003c/p\u003e\n"],["\u003cp\u003eUsers can customize their VM instance by selecting machine type, zone, GPU type and number, TensorFlow framework version, boot disk specifications, and networking settings.\u003c/p\u003e\n"],["\u003cp\u003eNVIDIA GPU drivers can be automatically installed upon the first startup if using GPUs, or it can be done manually, though automatic installation will require a few minutes to complete.\u003c/p\u003e\n"],["\u003cp\u003eThe command line method offers flexibility to create instances with or without GPUs and specifies required parameters such as image family, zone, and project, with specific options for GPU and driver setup.\u003c/p\u003e\n"],["\u003cp\u003eUsers have the option to create preemptible instances, which are more cost-effective but subject to termination by Compute Engine, by using the \u003ccode\u003e--preemptible\u003c/code\u003e command in the creation process.\u003c/p\u003e\n"]]],[],null,["# Create a TensorFlow Deep Learning VM instance\n\nThis page shows you how to create\na TensorFlow Deep Learning VM Images instance\nwith TensorFlow and other tools pre-installed. You can create\na TensorFlow instance from Cloud Marketplace within\nthe Google Cloud console or using the command line.\n\nBefore you begin\n----------------\n\n- Sign in to your Google Cloud account. If you're new to Google Cloud, [create an account](https://console.cloud.google.com/freetrial) to evaluate how our products perform in real-world scenarios. New customers also get $300 in free credits to run, test, and deploy workloads.\n- In the Google Cloud console, on the project selector page,\n select or create a Google Cloud project.\n\n | **Note**: If you don't plan to keep the resources that you create in this procedure, create a project instead of selecting an existing project. After you finish these steps, you can delete the project, removing all resources associated with the project.\n\n [Go to project selector](https://console.cloud.google.com/projectselector2/home/dashboard)\n-\n [Verify that billing is enabled for your Google Cloud project](/billing/docs/how-to/verify-billing-enabled#confirm_billing_is_enabled_on_a_project).\n\n- In the Google Cloud console, on the project selector page,\n select or create a Google Cloud project.\n\n | **Note**: If you don't plan to keep the resources that you create in this procedure, create a project instead of selecting an existing project. After you finish these steps, you can delete the project, removing all resources associated with the project.\n\n [Go to project selector](https://console.cloud.google.com/projectselector2/home/dashboard)\n-\n [Verify that billing is enabled for your Google Cloud project](/billing/docs/how-to/verify-billing-enabled#confirm_billing_is_enabled_on_a_project).\n\n1. 
If you are using GPUs with your Deep Learning VM, check the [quotas page](https://console.cloud.google.com/quotas) to ensure that you have enough GPUs available in your project. If GPUs are not listed on the quotas page or you require additional GPU quota, [request a\n quota increase](/compute/quotas#requesting_additional_quota).\n\nCreating a TensorFlow Deep Learning VM instance from the Cloud Marketplace\n--------------------------------------------------------------------------\n\nTo create a TensorFlow Deep Learning VM instance\nfrom the Cloud Marketplace, complete the following steps:\n\n1. Go to the Deep Learning VM Cloud Marketplace page in\n the Google Cloud console.\n\n [Go to the Deep Learning VM Cloud Marketplace page](https://console.cloud.google.com/marketplace/details/click-to-deploy-images/deeplearning)\n2. Click **Get started**.\n\n3. Enter a **Deployment name** , which will be the root of your VM name.\n Compute Engine appends `-vm` to this name when naming your instance.\n\n4. Select a **Zone**.\n\n5. Under **Machine type** , select the specifications that you\n want for your VM.\n [Learn more about machine types.](/compute/docs/machine-types)\n\n6. Under **GPUs** , select the **GPU type** and **Number of GPUs** .\n If you don't want to use GPUs,\n click the **Delete GPU** button\n and skip to step 7. [Learn more about GPUs.](/gpu)\n\n 1. Select a **GPU type** . Not all GPU types are available in all zones. [Find a combination that is supported.](/compute/docs/gpus)\n 2. Select the **Number of GPUs** . Each GPU supports different numbers of GPUs. [Find a combination that is supported.](/compute/docs/gpus)\n7. Under **Framework**, select one of the TensorFlow\n framework versions.\n\n8. If you're using GPUs, an NVIDIA driver is required.\n You can install the driver\n yourself, or select **Install NVIDIA GPU driver automatically\n on first startup**.\n\n9. You have the option to select **Enable access to JupyterLab via URL\n instead of SSH (Beta)**. Enabling this Beta feature lets you\n access your JupyterLab\n instance using a URL. Anyone who is in the Editor or Owner role in your\n Google Cloud project can access this URL.\n Currently, this feature only works in\n the United States, the European Union, and Asia.\n\n10. Select a boot disk type and boot disk size.\n\n11. Select the networking settings that you want.\n\n12. Click **Deploy**.\n\nIf you choose to install NVIDIA drivers, allow 3-5 minutes for installation\nto complete.\n\nAfter the VM is deployed, the page updates with instructions for\naccessing the instance.\n\nCreating a TensorFlow Deep Learning VM instance from the command line\n---------------------------------------------------------------------\n\nTo use the Google Cloud CLI to create\na new Deep Learning VM instance,\nyou must first install and initialize the [Google Cloud CLI](/sdk/docs):\n\n1. Download and install the Google Cloud CLI using the instructions given on [Installing Google Cloud CLI](/sdk/downloads).\n2. Initialize the SDK using the instructions given on [Initializing Cloud\n SDK](/sdk/docs/initializing).\n\nTo use `gcloud` in Cloud Shell, first activate Cloud Shell using the\ninstructions given on [Starting Cloud Shell](/shell/docs/starting-cloud-shell).\n\nYou can create a TensorFlow instance with or without GPUs. 
\n\n### Without GPUs\n\nTo provision a Deep Learning VM instance without a GPU: \n\n export IMAGE_FAMILY=\"tf-ent-latest-cpu\"\n export ZONE=\"us-west1-b\"\n export INSTANCE_NAME=\"my-instance\"\n\n gcloud compute instances create $INSTANCE_NAME \\\n --zone=$ZONE \\\n --image-family=$IMAGE_FAMILY \\\n --image-project=deeplearning-platform-release\n\nOptions:\n\n- `--image-family` must be one of the following:\n - `tf-ent-latest-cpu` to get the latest [TensorFlow Enterprise](/tensorflow-enterprise/docs) 2 image\n - An earlier TensorFlow or TensorFlow Enterprise image family name (see [Choosing an image](/deep-learning-vm/docs/images))\n- `--image-project` must be `deeplearning-platform-release`.\n\n### With one or more GPUs\n\nCompute Engine offers the option of adding one or more\nGPUs to your virtual machine instances. GPUs offer faster processing\nfor many complex data and machine learning tasks. To learn more about\nGPUs, see [GPUs on Compute Engine](/compute/docs/gpus).\n\nTo provision a Deep Learning VM instance with one or more GPUs: \n\n export IMAGE_FAMILY=\"tf-ent-latest-gpu\"\n export ZONE=\"us-west1-b\"\n export INSTANCE_NAME=\"my-instance\"\n\n gcloud compute instances create $INSTANCE_NAME \\\n --zone=$ZONE \\\n --image-family=$IMAGE_FAMILY \\\n --image-project=deeplearning-platform-release \\\n --maintenance-policy=TERMINATE \\\n --accelerator=\"type=nvidia-tesla-v100,count=1\" \\\n --metadata=\"install-nvidia-driver=True\"\n\nOptions:\n\n- `--image-family` must be one of the following:\n\n - `tf-ent-latest-gpu` to get the latest [TensorFlow Enterprise](/tensorflow-enterprise/docs) 2 image\n - An earlier TensorFlow or TensorFlow Enterprise image family name (see [Choosing an image](/deep-learning-vm/docs/images))\n- `--image-project` must be `deeplearning-platform-release`.\n\n- `--maintenance-policy` must be `TERMINATE`. To learn more, see\n [GPU Restrictions](/compute/docs/gpus#restrictions).\n\n- `--accelerator` specifies the GPU type to use. Must be\n specified in the format\n `--accelerator=\"type=`\u003cvar translate=\"no\"\u003eTYPE\u003c/var\u003e`,count=`\u003cvar translate=\"no\"\u003eCOUNT\u003c/var\u003e`\"`.\n For example, `--accelerator=\"type=nvidia-tesla-p100,count=2\"`.\n See the [GPU models\n table](/compute/docs/gpus#other_available_nvidia_gpu_models)\n for a list of available GPU types and counts.\n\n Not all GPU types are supported in all regions. For details, see\n [GPU regions and zones availability](/compute/docs/gpus/gpu-regions-zones).\n- `--metadata` is used to specify that the NVIDIA driver should\n be installed on your behalf. The value is `install-nvidia-driver=True`.\n If specified, Compute Engine loads the latest stable\n driver on the first boot and performs the necessary steps (including\n a final reboot to activate the driver).\n\nIf you've elected to install NVIDIA drivers, allow 3-5 minutes\nfor installation to complete.\n\nIt may take up to 5 minutes before your VM is fully provisioned. In this\ntime, you will be unable to SSH into your machine. 
When the installation is\ncomplete, to guarantee that the driver installation was successful, you can\nSSH in and run `nvidia-smi`.\n\nWhen you've configured your image, you can save a snapshot of your\nimage so that you can start derivitave instances without having to wait\nfor the driver installation.\n\n### About TensorFlow Enterprise\n\n[TensorFlow Enterprise](/tensorflow-enterprise/docs) is a\ndistribution of\n[TensorFlow](https://www.tensorflow.org/)\nthat has been optimized to run on Google Cloud and includes\n[Long Term Version\nSupport](/tensorflow-enterprise/docs/overview#long_term_version_support).\n\nCreating a preemptible instance\n-------------------------------\n\nYou can create a preemptible Deep Learning VM instance. A preemptible\ninstance is an instance you can create and run at a much lower price than\nnormal instances. However, Compute Engine might stop (preempt) these\ninstances if it requires access to those resources for other tasks.\nPreemptible instances always stop after 24 hours. To learn more about\npreemptible instances, see [Preemptible VM\nInstances](/compute/docs/instances/preemptible).\n\nTo create a preemptible Deep Learning VM instance:\n\n- Follow the instructions located above to create a new instance using the\n command line. To the `gcloud compute instances create` command, append the\n following:\n\n ```\n --preemptible\n ```\n\nWhat's next\n-----------\n\nFor instructions on connecting to your new Deep Learning VM instance\nthrough the Google Cloud console or command line, see [Connecting to\nInstances](/compute/docs/instances/connecting-to-instance). Your instance name\nis the **Deployment name** you specified with `-vm` appended."]]