Live migration is not supported for the following VM types:

- **Most Confidential VM instances**. Live migration for Confidential VM instances is only supported on N2D machine types with AMD EPYC Milan CPU platforms running AMD SEV. All other Confidential VM instances don't support live migration, and must be set to stop and optionally restart during a host maintenance event. For more details, see Live migration.
- **VMs with GPUs attached**. VM instances with GPUs attached must be set to stop and optionally restart. Compute Engine offers a notice before a VM instance with a GPU attached is stopped, depending on the GPU type:

  - For most GPUs, Compute Engine provides a 60-minute notice.
  - For GPU families running on AI Hypercomputer Cluster Director, Compute Engine provides a 10-minute notice.

  To learn more about these maintenance event notices, read Query metadata server for maintenance event notices. To learn more about handling host maintenance with GPUs, read Handling host maintenance in the GPUs documentation.

- **Cloud TPUs**. Cloud TPUs don't support live migration.

- **Storage-optimized VMs**. Z3 VMs with more than 18 TiB of attached Titanium SSD don't support live migration. The maintenance behavior for these VMs is set to `TERMINATE` and `RESTART`. Compute Engine preserves the data on Titanium SSD during the maintenance event, as described in Disk persistence following instance termination.
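These notices are surfaced through the instance metadata server. The following is an illustrative sketch only, not the documented client: a watcher polls the `maintenance-event` value and runs a drain hook when the value changes from `NONE` (for example, to `MIGRATE_ON_HOST_MAINTENANCE`). The fetch callable is injected so the loop can be exercised without a real metadata server; in production it would perform an HTTP GET with the `Metadata-Flavor: Google` header.

```python
import time
from typing import Callable

# Well-known metadata endpoint per Compute Engine docs; the real request
# must carry the "Metadata-Flavor: Google" header.
MAINTENANCE_URL = ("http://metadata.google.internal/computeMetadata/v1/"
                   "instance/maintenance-event")

def watch_maintenance_event(fetch: Callable[[], str],
                            on_event: Callable[[str], None],
                            poll_seconds: float = 1.0,
                            max_polls: int = 10) -> str:
    """Poll the maintenance-event value and fire on_event when it changes
    from NONE. Returns the last value observed."""
    last = "NONE"
    for _ in range(max_polls):
        value = fetch()  # in production: HTTP GET against MAINTENANCE_URL
        if value != "NONE" and value != last:
            on_event(value)  # e.g. checkpoint state, drain traffic
        last = value
        if value != "NONE":
            break
        time.sleep(poll_seconds)
    return last
```

In practice the documented approach is to use the metadata server's `wait_for_change` support rather than tight polling; this loop only illustrates the shape of the handler.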
How does the live migration process work?

When a VM is scheduled to live migrate, Compute Engine provides a notification so that you can prepare your workloads and applications for the disruption. During live migration, Google Cloud observes a minimum disruption time, which is typically much less than 1 second. If a VM is not set to live migrate, Compute Engine terminates the VM during host maintenance. VMs that are set to terminate during a host event stop and optionally restart.
When Google Cloud migrates a running VM from one host to another, it moves the complete state of the VM from the source to the destination in a way that is transparent to the guest OS and to anything communicating with it. Many components are involved in making this work seamlessly; the high-level steps are described below.
Live migration components
The process begins with a notification that a VM needs to be moved from its current host machine. The notification might start with a file change indicating that a new BIOS version is available, with scheduled hardware maintenance, or with an automatic signal from an impending hardware failure.
Google Cloud's cluster management software constantly watches for these events and schedules them based on policies that control the data centers, such as capacity utilization rates and the number of VMs that a single customer can migrate at once.
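The scheduler's actual policies are internal to Google Cloud; as a toy illustration of one such control (the per-customer cap and data shapes here are invented), a per-customer limit on concurrent migrations might be applied like this:

```python
from collections import Counter
from typing import Iterable, List, Tuple

def schedule_migrations(pending: Iterable[Tuple[str, str]],
                        in_flight: Counter,
                        per_customer_cap: int = 2) -> List[Tuple[str, str]]:
    """Pick (customer, vm) migrations to start now, honoring a cap on how
    many migrations a single customer can have running concurrently.
    A stand-in for one of the data-center control policies mentioned above."""
    started = []
    counts = Counter(in_flight)  # copy so the caller's state is untouched
    for customer, vm in pending:
        if counts[customer] < per_customer_cap:
            started.append((customer, vm))
            counts[customer] += 1
    return started
```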
After a VM is selected for migration, Google Cloud notifies the guest that a migration is coming soon. After a waiting period, a target host is selected and is asked to set up a new, empty "target" VM to receive the migrating "source" VM. Authentication is used to establish a connection between the source and the target.

There are three stages involved in the VM's migration:

**Source brownout.** The VM is still executing on the source while most of its state is sent from the source to the target. For example, Google Cloud copies all the guest memory to the target while tracking the pages that have been changed on the source. The time spent in source brownout is a function of the size of the guest memory and the rate at which pages are being changed.
**Blackout.** A very brief moment when the VM is not running anywhere. The source VM is paused, and all the remaining state required to begin running the VM on the target is sent. The VM enters the blackout stage when sending state changes during the source brownout stage reaches a point of diminishing returns; an algorithm balances the number of bytes of memory being sent against the rate at which the guest VM is making changes.
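Google Cloud doesn't publish the exact algorithm, but the diminishing-returns trade-off can be sketched with a toy pre-copy model (all parameters here are invented): each round copies the memory that was dirtied while the previous round was being transferred, and blackout begins once the remainder either fits a small blackout budget or stops shrinking.

```python
def precopy_rounds(initial_dirty_mb: float,
                   dirty_rate_mbps: float,
                   bandwidth_mbps: float,
                   blackout_budget_mb: float,
                   max_rounds: int = 30) -> int:
    """Toy model of the diminishing-returns decision: keep copying dirty
    memory while each round shrinks the remaining set; enter blackout once
    the remainder fits the blackout budget or stops shrinking."""
    remaining = initial_dirty_mb
    for round_no in range(1, max_rounds + 1):
        copy_seconds = remaining / bandwidth_mbps
        # memory dirtied by the guest while this round was being copied
        next_remaining = dirty_rate_mbps * copy_seconds
        if next_remaining <= blackout_budget_mb or next_remaining >= remaining:
            return round_no  # blackout: pause the source, send the remainder
        remaining = next_remaining
    return max_rounds
```

Note how a guest that dirties memory as fast as it can be copied never converges, which is why the algorithm must cut over to blackout rather than iterate forever.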
Note: During blackout events, the system clock can appear to jump forward by up to 5 seconds. If a blackout event exceeds 5 seconds, Google Cloud stops and synchronizes the clock using a daemon that is included as part of the VM guest packages.
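A minimal sketch of that decision rule (a hypothetical function, not the actual guest daemon): offsets within the 5-second window simply show up as the clock jumping forward, while larger offsets call for an explicit stop-and-sync.

```python
def plan_clock_fix(offset_seconds: float, step_threshold: float = 5.0):
    """Decide how a post-blackout guest clock offset is reconciled.
    Returns an (action, offset) pair. Illustrative only: the real
    daemon's interface is not public."""
    if offset_seconds <= 0:
        return ("none", 0.0)
    if offset_seconds <= step_threshold:
        return ("jump", offset_seconds)  # clock appears to jump forward
    return ("stop_and_sync", offset_seconds)  # daemon steps the clock
```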
**Target brownout.** The VM executes on the target VM. The source VM is still present and might provide support for the target VM. For example, until the network fabric has caught up with the new location of the target VM, the source VM provides forwarding services for packets to and from the target VM.
Finally, the migration is complete and the system deletes the source VM. You can confirm that the migration took place in the Cloud Logging logs for your VM.

Note: During live migration, VMs might experience a short-lived decrease in disk, CPU, memory, and network performance.
Live migration of sole-tenant VMs

As your workload runs, you might want to move VMs to a different sole-tenant node or node group. If you move a VM to a group of nodes, Compute Engine determines which node to place it on. To move sole-tenant VMs to a different node or node group, you can manually initiate a live migration. You can also manually initiate a live migration to move a VM on a multi-tenant host into a sole-tenant node. For more information, see Manually live migrate VMs.
What's next

- Set VM host maintenance policy options to configure your instances to live migrate.
- Learn how to get live migration notices so you can trigger tasks that you want to perform prior to a maintenance event.
- Read tips for designing a robust system that can handle service disruptions.

Last updated 2025-08-04 UTC.