


Best practices for floating IP addresses

This solution outlines alternatives to using floating IP addresses when migrating applications to Compute Engine from an on-premises network environment. Also referred to as "shared" or "virtual" IP addresses, floating IP addresses are often used to make on-premises network environments highly available. Using floating IP addresses, you can pass an IP address between multiple identically configured physical or virtual servers allowing for failover or upgrading of production software. However, you cannot directly implement floating IP addresses in a Compute Engine environment.

Floating IP addresses in on-premises environments

Floating IP addresses are commonly used in on-premises environments. The following list outlines just a few of the use cases for floating IP addresses:

  • Highly available physical appliances, such as a set of firewalls or load balancers, often use floating IP addresses for failovers.
  • Servers that require high availability typically use floating IP addresses, for example, primary-secondary relational databases such as Microsoft SQL Server using Always On Availability Groups.
  • Linux environments implementing load balancers or reverse proxies, such as IPVS, HAProxy, or NGINX, use floating IP addresses. To detect node failures and move floating IP addresses between instances, these environments use daemons such as heartbeat, pacemaker, or keepalived.
  • Floating IP addresses enable high availability for Windows services that use Windows Server Failover Clustering.

There are several ways to implement floating IP addresses in an on-premises environment. In all cases, the servers sharing the IP address must also share each other's state through a heartbeat mechanism. This mechanism enables the servers to communicate their health status to each other; it also enables the secondary server to take over the floating IP address when its partner server fails. This scheme is frequently implemented using Virtual Router Redundancy Protocol (VRRP), but you can also use other, similar mechanisms.
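
As an illustration, the following is a minimal keepalived VRRP configuration sketch for such an on-premises pair; the interface name, virtual router ID, password, and the 192.0.2.100 floating address are placeholder assumptions, not values from this solution:

      vrrp_instance floating_ip {
          state MASTER              # the peer server uses state BACKUP
          interface eth0            # placeholder interface name
          virtual_router_id 51
          priority 100              # the peer uses a lower priority, such as 50
          advert_int 1              # heartbeat (advertisement) interval in seconds
          authentication {
              auth_type PASS
              auth_pass yourpassword
          }
          virtual_ipaddress {
              192.0.2.100/32        # the shared floating IP address
          }
      }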

Once an IP failover is initiated, the server taking over the floating IP address adds the address to its network interface. The server announces the takeover to other devices on the Layer 2 network by sending a gratuitous Address Resolution Protocol (ARP) frame. Alternatively, the IP address is sometimes announced by a routing protocol such as Open Shortest Path First (OSPF) to the upstream Layer 3 router.
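
For example, on a Linux server the takeover can be announced manually with the arping utility; the interface and address below are placeholders:

      # send three gratuitous (unsolicited) ARP frames announcing that
      # 192.0.2.100 is now served by this host's MAC address
      arping -U -I eth0 -c 3 192.0.2.100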

The following diagram shows a typical setup in an on-premises environment.

Figure: A typical floating IP setup in an on-premises environment.

You use a slightly different setup with on-premises load-balancing solutions such as Windows Network Load Balancing or Linux load balancing with Direct Server Return, for example, IP Virtual Server (IPVS). In these cases, the service also sends out gratuitous ARP frames, but with the MAC address of another server as the gratuitous ARP source, essentially spoofing the ARP frames and taking over the source address of another server. This kind of setup is out of scope for this solution. In almost all cases, migrating to Cloud Load Balancing is the preferred migration path.

Challenges with migrating floating IP addresses to Compute Engine

Compute Engine uses a virtualized network stack in a Virtual Private Cloud (VPC) network, so typical implementation mechanisms don't work out of the box. For example, the VPC network handles ARP requests based on the configured routing topology, and ignores gratuitous ARP frames. In addition, it's impossible to directly modify the VPC network routing table with standard routing protocols such as OSPF or Border Gateway Protocol (BGP).

You could use an overlay network to create a configuration that enables full Layer 2 communication and IP takeover using ARP requests. However, setting up an overlay network is complex and makes managing Compute Engine network resources difficult. That approach is also out of scope for this solution. Instead, this solution offers alternative approaches for implementing failover scenarios in a native Compute Engine networking environment.

This solution describes ways to migrate the majority of the outlined use cases into Compute Engine.

The following step-by-step guides already exist for more specific use cases:

  • Running Windows Server Failover Clustering
  • Building a Microsoft SQL Server Always On Availability Group on Compute Engine

Example use case for migration

This solution outlines four different migration options for moving from on-premises floating IP addresses to Compute Engine.

The use case involves migrating two internal HAProxy servers that route traffic to different backends depending on complex Layer 7 header matching and replacement. Due to the complex rules involved, this set of servers cannot be replaced with Internal TCP/UDP Load Balancing or even HTTP Load Balancing. The following figure shows an overview of this use case.

Figure: Overview of the migration use case.

On-premises, the HAProxy servers use keepalived to check each other's availability over a separate cross connect and to pass the floating IP addresses between the two servers.

For this use case, all four options described in the following sections are valid replacements for the on-premises floating IP addresses. For other, possibly more complex use cases, fewer options might be relevant. After describing these options, this solution provides guidance on which options to prefer based on specific use cases.

The next section discusses how to migrate this use case scenario to Compute Engine.

Implementation using Compute Engine

This section outlines several ways to migrate the on-premises scenario to Compute Engine. To reduce complexity, instead of using the header-based matching previously described, all requests are forwarded to a single group of NGINX backends with a minimal backend configuration.

For all of the examples, traffic is routed from the HAProxy to a group of Compute Engine backends placed in an autoscaling instance group. Those backends are accessed using an internal TCP/UDP load balancer. For the example configuration, these backends serve the NGINX default configuration.

To implement the example use case, use a dedicated project for testing.
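
For example, you could create and select such a project with the following commands; the project ID is a placeholder and must be globally unique, and you might also need to link billing and enable the Compute Engine API:

      gcloud projects create ip-failover-testing-12345
      gcloud config set project ip-failover-testing-12345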

Configuring the backends

In this section, you configure the NGINX backends to be accessed by the HAProxy nodes. As a best practice, you create those backends in a VPC dedicated to this deployment instead of the default network.

To set up the backends, follow these steps:

  1. Set your default zone, for example:

      gcloud config set compute/zone us-central1-f
  2. Set up a network for testing, and create firewall rules that allow internal traffic and SSH connections to the network:

      gcloud compute networks create ip-failover

      gcloud compute firewall-rules create failover-internal \
          --network ip-failover --allow all --source-ranges 10.128.0.0/11

      gcloud compute firewall-rules create failover-ssh \
          --network ip-failover --allow tcp:22 --source-ranges 0.0.0.0/0
  3. Create an instance template for the NGINX backends:

      gcloud compute instance-templates create www \
          --machine-type n1-standard-1 --network ip-failover \
          --metadata startup-script="apt-get -y install nginx"
  4. Create an autoscaling zonal managed instance group based on the template:

      gcloud compute instance-groups managed create www \
          --template www --size 1 --zone us-central1-f

      gcloud compute instance-groups managed set-autoscaling www \
          --max-num-replicas 10 --min-num-replicas 1 \
          --target-cpu-utilization 0.8 --zone us-central1-f
  5. Attach an internal TCP/UDP load balancer with a fixed IP address (10.128.2.2) to this instance group:

      gcloud compute health-checks create http simple-check

      gcloud compute backend-services create www-lb \
          --load-balancing-scheme internal \
          --region us-central1 \
          --health-checks simple-check \
          --protocol tcp

      gcloud compute backend-services add-backend www-lb \
          --instance-group www \
          --instance-group-zone us-central1-f \
          --region us-central1

      gcloud compute forwarding-rules create www-rule \
          --load-balancing-scheme internal \
          --ports 80 \
          --network ip-failover \
          --region us-central1 \
          --address 10.128.2.2 \
          --backend-service www-lb
  6. Create an instance for testing, connect to it using the ssh command, and check whether you can reach the Internal TCP/UDP Load Balancing IP address:

      gcloud compute instances create testing \
          --machine-type n1-standard-1 --zone us-central1-f \
          --network ip-failover --scopes compute-ro

      gcloud compute ssh testing --zone us-central1-f
    username@testing:~$ curl 10.128.2.2
    <!DOCTYPE html> [...]
    username@testing:~$ exit

This example configuration uses n1-standard-1 instances, which are limited to 2 Gbps of network throughput per instance. For a real deployment, you would size the instances according to your needs.

In addition, the instances are created with external IP addresses so that their startup scripts can download the necessary packages. In a production setting, you would create custom images and create the instances without external IP addresses.
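
As a sketch of that production approach, assuming a prepared instance whose boot disk is named www-base (a placeholder), you could capture a custom image and then create instances with no external IP address:

      # capture a custom image from a prepared boot disk (names are placeholders)
      gcloud compute images create nginx-image \
          --source-disk www-base --source-disk-zone us-central1-f

      # instances created with --no-address receive no external IP address
      gcloud compute instances create www-internal \
          --machine-type n1-standard-1 --network ip-failover \
          --image nginx-image --no-address --zone us-central1-f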

Option 1: Using Internal TCP/UDP Load Balancing

You can implement the on-premises scenario in Compute Engine by putting the two HAProxy servers in a managed instance group behind Internal TCP/UDP Load Balancing and using its IP address as a virtual IP address, as the following figure shows.

Figure: Option 1 uses Internal TCP/UDP Load Balancing.

This scenario assumes that the migrated service is exposed only internally. For applications using HTTP(S) traffic, you can also use Internal HTTP(S) Load Balancing. If the service you are migrating is externally available, you can implement this scenario in a similar way using HTTP(S) Load Balancing, TCP Proxy, SSL Proxy, or Network Load Balancing.

Optionally, you can use failover for Internal TCP/UDP Load Balancing to have only one HAProxy server receive traffic at a time.
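
As a sketch of that failover setup, assuming the haproxy-lb backend service created in the implementation steps below and a hypothetical second instance group named haproxy-standby, you could designate the standby group as a failover backend:

      # the --failover flag marks this backend as failover-only; it receives
      # traffic only when the primary backend becomes unhealthy
      gcloud compute backend-services add-backend haproxy-lb \
          --instance-group haproxy-standby \
          --instance-group-zone us-central1-b \
          --region us-central1 \
          --failover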

Differences compared to an on-premises setup

The Internal TCP/UDP Load Balancing IP address acts similarly to the floating IP addresses in the on-premises environment, with a few notable differences:

  • Traffic distribution

    The most notable difference is that traffic is shared between the two nodes, whereas in the original setup, traffic reaches only one node at a time. This approach is fine when traffic is routed depending on the content of the request itself, but it doesn't work if the machines hold state that isn't constantly synced, for example, a primary-secondary database.

  • Failover time

    When paired with gratuitous ARP, keepalived in an on-premises environment can fail over an IP address in a few seconds. In the Compute Engine environment, the recovery time depends on the failure mode. If the virtual machine (VM) instance or the VM instance's service fails, the time until traffic fails over depends on health check parameters such as Check Interval and Unhealthy Threshold. With these parameters set to their default values, failover usually takes 15–20 seconds, but you can reduce it by adjusting those parameters (see the example after this list). In Compute Engine, failovers within a zone and between zones take the same amount of time.

  • Health checking

    When used on-premises, keepalived can check the host machine's health in various ways in addition to waiting for an alive signal, such as monitoring the availability of the HAProxy process. In Compute Engine, the health check has to be reachable from outside the host over HTTP, HTTPS, TCP, or SSL. If host specifics have to be checked, you need to install a simple service on the instance that exposes those specifics, or choose an alternative option.

  • Ports

    In an on-premises setup, the floating IP addresses accept all traffic. For the internal TCP/UDP load balancer, you must choose one of the following port specifications in the internal forwarding rule:

    • Specify at least one and up to five ports, by number
    • Specify ALL to forward traffic on all ports
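
As an example of the failover-time tuning mentioned under Failover time above, you could tighten the parameters of the health check used in this solution; these aggressive values are illustrative and increase health-checking traffic:

      gcloud compute health-checks update http simple-check \
          --check-interval 2s \
          --timeout 2s \
          --unhealthy-threshold 2 \
          --healthy-threshold 2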

Implementing option 1

To implement this solution, complete the following steps:

  1. Create an instance template for your HAProxy servers forwarding the requests:

      gcloud compute instance-templates create haproxy \
          --machine-type n1-standard-1 --network ip-failover \
          --metadata "startup-script=
      sudo apt-get install -y haproxy
      cat << EOF >> /etc/haproxy/haproxy.cfg
      frontend www
          bind :80
          option http-server-close
          default_backend web-backend
      backend web-backend
          server web-1 10.128.2.2:80 check
      EOF
      service haproxy restart"
  2. Create a zonal managed instance group with a static size of two, based on the instance template. Attach an autohealing policy to the instances using the health check you previously created:

      gcloud compute instance-groups managed create haproxy \
          --template haproxy --size 2 --zone us-central1-f

      gcloud compute instance-groups managed update \
          haproxy --health-check simple-check --zone us-central1-f
  3. Attach an internal TCP/UDP load balancer to the HAProxy servers with a health check:

      gcloud compute backend-services create haproxy-lb \
          --load-balancing-scheme internal \
          --region us-central1 \
          --health-checks simple-check \
          --protocol tcp

      gcloud compute backend-services add-backend haproxy-lb \
          --instance-group haproxy \
          --instance-group-zone us-central1-f \
          --region us-central1

      gcloud compute forwarding-rules create haproxy-rule \
          --load-balancing-scheme internal \
          --ports 80 \
          --network ip-failover \
          --region us-central1 \
          --address 10.128.1.1 \
          --backend-service haproxy-lb
  4. Test if you can reach the HAProxy through Internal TCP/UDP Load Balancing:

      gcloud compute ssh testing --zone us-central1-f
    username@testing:~$ curl 10.128.1.1
    <!DOCTYPE html> [...]
    username@testing:~$ exit

After you delete one of the HAProxy instances through the console or stop the HAProxy process on one of the instances, the curl command still succeeds after a short failover time.
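
For example, you can trigger such a failover by deleting one of the instances from the group; INSTANCE_NAME is a placeholder you would take from the list output:

      # list the instances in the group, then delete one to simulate a failure
      gcloud compute instance-groups managed list-instances haproxy \
          --zone us-central1-f

      gcloud compute instance-groups managed delete-instances haproxy \
          --instances INSTANCE_NAME --zone us-central1-f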

Option 2: Using a single managed instance

Depending on recovery-time requirements, migrating with a single VM instance might be a viable Compute Engine option even if multiple servers were used on-premises. The reason is that you can spin up a new Compute Engine instance in minutes, while on-premises failures typically require hours or even days to rectify.

Figure: Option 2 uses a single managed instance.

Comparing option 2 to option 1: Internal TCP/UDP Load Balancing

Option 2 comes with major advantages and drawbacks compared to option 1.

Advantages:

  • Traffic distribution

    Because there is only one instance, all traffic hits a single instance, similar to an on-premises primary-secondary scenario.

  • Cost savings

    Using a single VM instance instead of two can cut the cost of the implementation in half.

  • Simplicity

    This solution is easy to implement and comes with little overhead.

Disadvantages:

  • Failover time

    After the health checks detect a machine failure, deleting and recreating the failed instance takes at least a minute, and often significantly longer. This process is much slower than removing an instance from Internal TCP/UDP Load Balancing.

  • Reaction to zone failures

    A managed instance group of size 1 doesn't survive a zone failure. To react to zone failures, consider adding a Cloud Monitoring alert when the service fails, and manually creating an instance group in another zone if a zone fails.

Implementing option 2

Complete the following steps to implement option 2:

  1. Create an instance template with a static internal IP address for your HAProxy VM instance:

      gcloud compute instance-templates create haproxy-single \
          --machine-type n1-standard-1 --network ip-failover \
          --private-network-ip=10.128.3.3 \
          --metadata "startup-script=
      sudo apt-get install -y haproxy
      cat << EOF >> /etc/haproxy/haproxy.cfg
      frontend www
          bind :80
          option http-server-close
          default_backend web-backend
      backend web-backend
          server web-1 10.128.2.2:80 check
      EOF
      service haproxy restart"
  2. Create a managed instance group of size 1 for your HAProxy VM and attach an autohealing policy:

      gcloud compute instance-groups managed create haproxy-single \
          --template haproxy-single --size 1 --zone us-central1-f

      gcloud compute instance-groups managed update \
          haproxy-single --health-check simple-check --zone us-central1-f
  3. Test if you can reach the HAProxy instance through its static internal IP address:

      gcloud compute ssh testing --zone us-central1-f
    username@testing:~$ curl 10.128.3.3
    <!DOCTYPE html> [...]
    username@testing:~$ exit

    When you delete the HAProxy instance through the console or stop the HAProxy process, the instance automatically recovers after a delay with the same instance name and IP address.
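
    One way to observe this recovery, as a sketch: poll the managed instance group while autohealing recreates the VM.

      # the instance status cycles as autohealing deletes and recreates the VM
      watch -n 5 "gcloud compute instance-groups managed list-instances \
          haproxy-single --zone us-central1-f"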

Option 3: Failover using different priority routes

Two Compute Engine routes with differing priorities provide another way to enable traffic failover between two instances when you can't use Internal TCP/UDP Load Balancing.

In this section, you create two VM instances and place each into its own managed instance group with a static size of 1 and an autohealing policy, enabling the instances to be recreated automatically.

You must enable IP forwarding on both of these instances. Then, after creating the instances, you divert all floating IP traffic to them by setting up two routes with different priorities.

Figure: Option 3 uses routes with different priorities.

Comparing option 3 to option 1: Internal TCP/UDP Load Balancing

Using option 3, you can migrate use cases where Internal TCP/UDP Load Balancing cannot be easily used. This option has the following advantages:

  • Traffic distribution

    Traffic always flows to the VM instance targeted by the route with the lowest priority value. When this VM instance isn't available, traffic uses the next best route. This architecture resembles an on-premises environment where only one server is active at a given time.

  • Protocols

    Internal TCP/UDP Load Balancing is applied only to a specific set of protocols or ports, while routes apply to all traffic to a specific destination.

  • Regionality

    Internal TCP/UDP Load Balancing is available only within a region, while routes can be created globally.

Option 3 has drawbacks compared to option 1, which uses Internal TCP/UDP Load Balancing.

  • Health checking

    With option 3, no health check is attached to either of the two routes. Routes are used regardless of the health of the underlying VM service, so traffic is directed to an instance even if the service is unhealthy. Attaching an autohealing policy to the instances kills them after a specified unhealthy period, but once the instances restart, traffic resumes even before the service is up. This can lead to service errors while unhealthy instances are still serving traffic or are in the process of restarting.

  • Failover time

    After you delete or stop a VM instance, the route is automatically withdrawn. However, because there are no health checks, the route is used for as long as the instance is still available. In addition, stopping the instance takes time, so the failover time is considerably higher than with the Internal TCP/UDP Load Balancing approach.

  • Floating IP address selection

    You can set routes only to IP addresses that are not part of any subnet. The floating IP address must be chosen outside all existing subnet ranges.

  • VPC Network Peering

    VM instances can only use routes from their own VPC network, and not from any peered VPC networks.

Implementing option 3

During implementation, you use the 10.191.1.1 IP address, which is outside all active subnets in the ip-failover network. Complete the following steps:

  1. Create an instance template for your HAProxy servers forwarding the requests:

      gcloud compute instance-templates create haproxy-route \
          --machine-type n1-standard-1 --network ip-failover \
          --can-ip-forward \
          --metadata "startup-script=
      apt-get update
      apt-get install -y haproxy
      cat << EOF >> /etc/haproxy/haproxy.cfg
      frontend www
          bind :80
          option http-server-close
          default_backend web-backend
      backend web-backend
          server web-1 10.128.2.2:80 check
      EOF
      cat << EOF >> /etc/network/interfaces
      auto eth0:0
      iface eth0:0 inet static
          address 10.191.1.1
          netmask 255.255.255.255
      EOF
      service haproxy restart
      service networking restart"
  2. Create two managed instance groups, both of size 1, for your HAProxy VM instances, and attach an autohealing policy to them:

      gcloud compute instance-groups managed create haproxy-r1 \
          --template haproxy-route --size 1 --zone us-central1-f

      gcloud compute instance-groups managed update \
          haproxy-r1 --health-check simple-check --zone us-central1-f

      gcloud compute instance-groups managed create haproxy-r2 \
          --template haproxy-route --size 1 --zone us-central1-b

      gcloud compute instance-groups managed update \
          haproxy-r2 --health-check simple-check --zone us-central1-b
  3. Create a primary and backup route to these VM instances after they have started:

      # save the instance name of the first HAProxy instance
      haproxy1=$(gcloud compute instances list | awk '$1 ~ /^haproxy-r1/ { print $1 }')

      # save the instance name of the second HAProxy instance
      haproxy2=$(gcloud compute instances list | awk '$1 ~ /^haproxy-r2/ { print $1 }')

      gcloud compute routes create haproxy-route1 \
          --destination-range 10.191.1.1/32 --network ip-failover \
          --priority 500 --next-hop-instance-zone us-central1-f \
          --next-hop-instance $haproxy1

      gcloud compute routes create haproxy-route2 \
          --destination-range 10.191.1.1/32 --network ip-failover \
          --priority 600 --next-hop-instance-zone us-central1-b \
          --next-hop-instance $haproxy2
  4. Test if you can reach the HAProxy through the route:

      gcloud compute ssh testing --zone us-central1-f
    username@testing:~$ curl 10.191.1.1
    <!DOCTYPE html> [...]
    username@testing:~$ exit

    When you delete the primary HAProxy instance through the console, the route to the secondary instance is used as soon as the primary instance is completely down.
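
    To check both routes and their priorities at any point, a quick sketch:

      # both routes are listed; traffic follows the reachable route
      # with the lowest priority value
      gcloud compute routes list --filter="destRange=10.191.1.1/32"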

Option 4: Failover using routes API calls

Like option 3, option 4 uses routes, but it differs in an important way: instead of relying on autohealing and the re-creation of instances, keepalived or other scripts use API calls to add a route to a new healthy instance or remove a route from an unhealthy instance. This approach is useful in situations where you can't use Compute Engine health checks to track the health of the application or to determine which virtual machine is primary. Any application logic can trigger dynamic reprogramming of routes.

Using routes API calls as a failover method is also useful when application failures are investigated manually and instances are brought back online manually. However, because VMs should log all failures and be replaced automatically as they become healthy, avoid investigating failures manually in Compute Engine.

Figure: Option 4 uses routes API calls for failover.

Comparing option 4 to option 1: Internal TCP/UDP Load Balancing

In contrast to using Internal TCP/UDP Load Balancing, option 4 offers these advantages:

  • Traffic distribution

    As with options 2 and 3, traffic hits only one VM instance at a time.

  • No reliance on Compute Engine health checks

    Failover can be triggered by any custom application logic. With option 4, you use a script to manage keepalived's reaction to communication failures between the primary and secondary HAProxies. This is the only option that works when you can't or don't want to use Compute Engine health checks.

Option 4 also has major disadvantages:

  • Complexity

    This option has to be custom-built using Compute Engine API or gcloud calls that withdraw the old route and set a new one. Building this logic reliably is often complex.

  • Failover time

    Because the custom script needs at least two Compute Engine API calls to withdraw the old route and create a new one, failover is slightly slower than with an internal load balancer.

  • Floating IP address selection

    You can set routes only to IP addresses that are not part of any subnet. Floating IP addresses must be chosen outside of all existing subnet ranges.

  • VPC Network Peering

    VM instances can only use routes from their own VPC network, and not from any peered VPC networks.

Implementing option 4

This implementation uses the 10.190.1.1 IP address, which is outside all active subnets in the ip-failover network. The route for this address is created and deleted automatically by keepalived.

First, you create two HAProxy instances with haproxy and keepalived installed, using static internal IP addresses for both instances. You must also enable IP forwarding so the instances can terminate the route, and the instances need access to the Compute Engine API. To keep things simple, this example doesn't use instance templates and groups.

To implement option 4, complete the following steps:

  1. Create the primary instance with a static IP address of 10.128.4.100:

      gcloud compute instances create haproxy-a \
          --machine-type n1-standard-1 --network ip-failover \
          --can-ip-forward --private-network-ip=10.128.4.100 \
          --scopes compute-rw --zone us-central1-f \
          --metadata 'startup-script=
      apt-get update
      apt-get install -y haproxy keepalived
      cat << EOF >> /etc/haproxy/haproxy.cfg
      frontend www
          bind :80
          option http-server-close
          default_backend web-backend
      backend web-backend
          server web-1 10.128.2.2:80 check
      EOF
      cat << EOF >> /etc/network/interfaces
      auto eth0:0
      iface eth0:0 inet static
          address 10.190.1.1
          netmask 255.255.255.255
      EOF
      cat << EOF >> /etc/keepalived/keepalived.conf
      vrrp_script haproxy {
          script "/bin/pidof haproxy"
          interval 2
      }

      vrrp_instance floating_ip {
          state MASTER
          interface eth0
          track_script {
              haproxy
          }
          unicast_src_ip 10.128.4.100
          unicast_peer {
              10.128.4.200
          }
          virtual_router_id 50
          priority 100
          authentication {
              auth_type PASS
              auth_pass yourpassword
          }
          notify_master /etc/keepalived/takeover.sh
      }
      EOF
      cat << EOF >> /etc/keepalived/takeover.sh
      #!/bin/bash
      gcloud compute routes delete floating --quiet
      gcloud compute routes create floating \
          --destination-range 10.190.1.1/32 --network ip-failover \
          --priority 500 --next-hop-instance-zone us-central1-f \
          --next-hop-instance haproxy-a --quiet
      EOF
      chmod +x /etc/keepalived/takeover.sh
      service haproxy restart
      service networking restart
      service keepalived start'
  2. Create the secondary instance with a static IP address of 10.128.4.200:

      gcloud compute instances create haproxy-b \
          --machine-type n1-standard-1 --network ip-failover \
          --can-ip-forward --private-network-ip=10.128.4.200 \
          --scopes compute-rw --zone us-central1-c \
          --metadata 'startup-script=
      apt-get update
      apt-get install -y haproxy keepalived
      cat << EOF >> /etc/haproxy/haproxy.cfg
      frontend www
          bind :80
          option http-server-close
          default_backend web-backend
      backend web-backend
          server web-1 10.128.2.2:80 check
      EOF
      cat << EOF >> /etc/network/interfaces
      auto eth0:0
      iface eth0:0 inet static
          address 10.190.1.1
          netmask 255.255.255.255
      EOF
      cat << EOF >> /etc/keepalived/keepalived.conf
      vrrp_script haproxy {
          script "/bin/pidof haproxy"
          interval 2
      }

      vrrp_instance floating_ip {
          state BACKUP
          interface eth0
          track_script {
              haproxy
          }
          unicast_src_ip 10.128.4.200
          unicast_peer {
              10.128.4.100
          }
          virtual_router_id 50
          priority 50
          authentication {
              auth_type PASS
              auth_pass yourpassword
          }
          notify_master /etc/keepalived/takeover.sh
      }
      EOF
      cat << EOF >> /etc/keepalived/takeover.sh
      #!/bin/bash
      gcloud compute routes delete floating --quiet
      gcloud compute routes create floating \
          --destination-range 10.190.1.1/32 --network ip-failover \
          --priority 500 --next-hop-instance-zone us-central1-c \
          --next-hop-instance haproxy-b --quiet
      EOF
      chmod +x /etc/keepalived/takeover.sh
      service haproxy restart
      service networking restart
      service keepalived start'
  3. Test if you can reach the HAProxy through the route:

      gcloud compute ssh testing --zone us-central1-f
    username@testing:~$ curl 10.190.1.1
    <!DOCTYPE html> [...]
    username@testing:~$ exit

    When HAProxy on the haproxy-a instance is killed or the instance locks up, the VRRP heartbeats stop arriving and the haproxy-b instance invokes the takeover.sh script. This script moves the route for 10.190.1.1 from haproxy-a to haproxy-b, and the test still works.
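
    You can confirm the takeover by inspecting the route's next hop, which should now point at haproxy-b:

      gcloud compute routes describe floating \
          --format="value(nextHopInstance)"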

Choosing the best option for your use case

For the example use case involving a set of HAProxy nodes making complex routing decisions, the preferred Compute Engine implementation is option 1: Internal TCP/UDP Load Balancing. The VM instances are stateless and can easily work in an active-active scenario, and Compute Engine health checks can be used. For other use cases, option 1 might not be the best choice.

In addition to the advantages and disadvantages listed for each option, the following decision tree can help you decide on an implementation scheme.

Figure: Decision tree for choosing an implementation option.

Highly available and reliable applications are best implemented in Compute Engine using horizontally scaling architectures that minimize the impact of a single node failure. Migrating a typical on-premises scenario, such as two servers with floating IP addresses, is challenging because this scenario cannot be duplicated in Compute Engine. As previously noted, moving IP addresses between different machines in under a second using gratuitous ARP doesn't work due to the nature of the virtual routing infrastructure.

Internal TCP/UDP Load Balancing enables many use cases to be transferred simply and reliably to Compute Engine. For cases where you can't use an internal load balancer, you can implement several other options that require no complex overlay routing mechanisms.

Next steps

  • Learn about Internal TCP/UDP Load Balancing.
  • Learn about failover options for Internal TCP/UDP Load Balancing (Beta).
  • Learn about Internal HTTP(S) Load Balancing (Beta).
  • Learn about routes in Compute Engine.
  • Review the SQL Server Always On Availability Group solution.
  • Review the solution about autoscaled service discovery with Consul in Compute Engine.
  • Review the Cloud Platform for Data Center Professionals guide.

Source: https://cloud.google.com/solutions/best-practices-floating-ip-addresses
