Virtual Networks

Network Virtualisation (NV) is the use of network resources through a logical segmentation of a single physical network. It is achieved by installing software and services to manage the sharing of storage, computing cycles and applications. Services on a Virtual Network, such as a PaaS cloud service or a Virtual Machine, are assigned a Virtual IP (VIP) address. With a VIP, you can create and configure cloud features such as load balancing. Load balancing distributes client traffic evenly over multiple instances of the cloud service; this can be done locally or geographically. When creating an IaaS resource (a Virtual Machine) or a PaaS resource (a Cloud Service), you can logically allocate it to a Virtual Network at creation.

Logical Networks are how devices appear to be connected to the user, whilst Physical Networks are how they are actually connected via cabling and switches. The physical layout is usually very different from how it appears logically.

There are two types of Virtual Networks to consider: Cloud-Only and Cross-Premises. A Cloud-Only Virtual Network is Internet-facing and uses point-to-site Virtual Private Network (VPN) connections. Cross-Premises Virtual Networks, on the other hand, are connected to an on-premises network and can also be Internet-facing. They use point-to-site VPN connections too, but also site-to-site VPN connections.
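The logical segmentation described above can be illustrated with Python's standard `ipaddress` module. This is a minimal sketch, assuming a hypothetical 10.0.0.0/16 address space for the Virtual Network, carved into /24 subnets (e.g. one per application tier):

```python
import ipaddress

# Hypothetical address space for a Virtual Network.
vnet = ipaddress.ip_network("10.0.0.0/16")

# Logically segment the single address space into /24 subnets,
# e.g. one each for web, app and data tiers.
subnets = list(vnet.subnets(new_prefix=24))

print(subnets[0])                # 10.0.0.0/24
print(subnets[1])                # 10.0.1.0/24
print(subnets[0].num_addresses)  # 256
```

The key point is that all of these subnets share one underlying physical network; the segmentation exists only at the logical (addressing) layer.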
Addressing on Virtual Machines on a Virtual Network

Virtual Machines can be addressed using a Virtual IP (VIP), a Direct IP (DIP) or a Public Instance-Level IP (PIP). A VIP address is an IP address that is shared among multiple domain names or multiple servers. It eliminates a host's dependency upon individual network interfaces: incoming packets are sent to the system's VIP address, but all packets travel through the real network interfaces. Essentially, a public IP address is assigned to the cloud service container.

A DIP address is used on the internal, private network. The IP address is assigned to the virtual machine from the Virtual Network, and it is also used for internal load balancing. This IP address is associated with the VM when it is created and remains associated with it while it is deployed. If the VM is deleted, it loses its DIP address.

A PIP address is associated with the VM in addition to the VIP. Traffic to the PIP goes directly to the VM and is not routed through a load balancer. Internet-bound traffic from the VM is sent over the PIP instead of going through the VIP. As a result, there is no need to configure an endpoint if a PIP is implemented; instead, it is firewalled to restrict undesired traffic.
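The relationship between the three address types can be sketched as a toy routing model. All names and addresses below are made up for illustration, not a real cloud API: traffic to the service's VIP is balanced across instance DIPs, while traffic to a PIP bypasses the load balancer entirely.

```python
from itertools import cycle

VIP = "104.40.0.10"               # public address of the cloud service container
DIPS = ["10.0.1.4", "10.0.1.5"]   # private addresses assigned from the VNet
PIP = "104.40.0.99"               # instance-level public IP of the second VM

_rr = cycle(DIPS)  # round-robin over the instance DIPs

def route(dest: str) -> str:
    """Return the DIP that ultimately receives a packet sent to dest."""
    if dest == VIP:
        return next(_rr)   # through the load balancer
    if dest == PIP:
        return DIPS[1]     # straight to the VM, no load balancer
    raise ValueError("unknown destination")

print(route(VIP))  # 10.0.1.4
print(route(VIP))  # 10.0.1.5
print(route(PIP))  # 10.0.1.5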
Virtual Private Networks

Point-to-site VPNs could be classed as cloud-only networks if all their resources are hosted in the cloud, although the resources could also be hosted in an on-premises datacentre. They typically involve remote workers connecting securely to business resources either in the cloud or on-premises, and they provide a securely encrypted 'tunnel' from the remote client PC to the resources.

Site-to-site VPNs extend the premises to the cloud securely by connecting branch offices to the main cloud hub. Cloud services can then be accessed securely from on-premises services and vice versa. This requires hardware, such as a VPN router device at each connected site, and not just software. A secure site-to-site connection also avoids the public Internet and adds an extra layer of isolation to the resources: only resources on the Virtual Network can see each other.
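The routing decision behind a cross-premises setup can be sketched with a simple next-hop lookup. The prefixes below are illustrative assumptions: a 10.0.0.0/16 cloud VNet and a 192.168.0.0/16 on-premises network reachable over a site-to-site tunnel.

```python
import ipaddress

# Illustrative prefixes for the cloud VNet and the on-premises network.
VNET = ipaddress.ip_network("10.0.0.0/16")
ON_PREM = ipaddress.ip_network("192.168.0.0/16")

def next_hop(dest: str) -> str:
    """Decide where a packet to dest should be sent."""
    addr = ipaddress.ip_address(dest)
    if addr in VNET:
        return "local VNet"     # stays inside the Virtual Network
    if addr in ON_PREM:
        return "VPN gateway"    # tunnelled to the branch office
    return "Internet"           # everything else leaves via the public edge

print(next_hop("10.0.1.4"))      # local VNet
print(next_hop("192.168.5.20"))  # VPN gateway
print(next_hop("8.8.8.8"))       # Internet
```

This mirrors the isolation point above: only traffic destined for the on-premises prefix is sent through the tunnel, and neither private range is reachable from the public Internet.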
Load Balancing

Local load balancing is a very similar process across the wide variety of cloud providers: traffic and compute load are simply distributed between instances. Azure offers two main types, external and internal. External load balancing is public-Internet-facing and is configured automatically when you deploy two or more instances in the same cloud service at the same time, whilst internal load balancing is private, not Internet-facing, and is not configured by default.

Geo load balancing is used when you have instances across the world, in different continents. For example, one instance could be deployed in the US and another in Asia. Services must be configured to cope with this: users connecting from Asia are directed to the service in Asia, and likewise for US users. This reduces latency, distributes traffic, and improves the end-user experience overall.
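The geo-routing idea above can be sketched as a nearest-region lookup. Region names, endpoints and latency figures here are all hypothetical; real services (such as a DNS-based traffic manager) measure or estimate latency rather than hard-coding it.

```python
# Hypothetical deployments of the same service in two regions.
DEPLOYMENTS = {"us-east": "us.example.cloud", "asia-east": "asia.example.cloud"}

# Assumed round-trip latencies in ms from each client region to each deployment.
LATENCY_MS = {
    ("US", "us-east"): 20,    ("US", "asia-east"): 180,
    ("Asia", "us-east"): 190, ("Asia", "asia-east"): 25,
}

def endpoint_for(client_region: str) -> str:
    """Direct the client to the deployment with the lowest latency."""
    best = min(DEPLOYMENTS, key=lambda r: LATENCY_MS[(client_region, r)])
    return DEPLOYMENTS[best]

print(endpoint_for("US"))    # us.example.cloud
print(endpoint_for("Asia"))  # asia.example.cloud
```

US clients resolve to the US deployment and Asian clients to the Asian one, which is exactly the latency reduction the section describes.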