This posting is ~5 years old. Keep this in mind: IT is a fast-moving business, and this information might be outdated.
Update
On November 22, 2017, Ajay Patel (Senior Vice President, Product Development, Cloud Services, VMware) published a blog post (VMware – The Platform of Choice in the Cloud) in reaction to Microsoft's announcement. These statements are especially interesting:
No VMware-certified partner names have been mentioned nor have any partners collaborated with VMware in engineering this offering. This offering has been developed independent of VMware, and is neither certified nor supported by VMware.
and
Microsoft recognizing the leadership position of VMware’s offering and exploring support for VMware on Azure as a superior and necessary solution for customers over Hyper-V or native Azure Stack environments is understandable but, we do not believe this approach will offer customers a good solution to their hybrid or multi-cloud future.
Looks like VMware is not happy about Microsoft's announcement. And this blog post clearly states that VMware will not partner with Microsoft to bring the VMware virtualization stack to Azure.
I don't know if this is a wise decision by VMware. The hypervisor, their core product, is a commodity nowadays. We are talking about a bare-metal solution, so it's not different from what VMware built with AWS. It's more about how it is embedded in the cloud services and the cloud control plane. If you use VMware vSphere, Horizon and O365, the step of moving virtualization workloads to VMware on Azure is smaller than moving them to AWS.
Yesterday, Microsoft announced new services to ease the migration from VMware to Microsoft Azure. Corey Sanders (Director of Compute, Azure) posted a blog post (Transforming your VMware environment with Microsoft Azure) and introduced three new Azure services.
Azure Migrate
The free Azure Migrate service does not focus on single-server workloads. It focuses on multi-server applications and will guide customers through three stages:
Discovery and assessment
Migration, and
Resource & Cost Optimization
Azure Migrate can discover your VMware-hosted applications on-premises, visualize dependencies between them, and help customers determine a suitable sizing for the Azure-hosted VMs. Azure Site Recovery (ASR) is used for the migration of workloads from the on-premises VMware infrastructure to Microsoft Azure. At the end, when your applications are running on Microsoft Azure, the free Azure Cost Management service helps you to forecast, track, and optimize your spending.
Integrate VMware workloads with Azure services
Many of the currently available Azure services can be used with your on-premises VMware infrastructure, without the need to migrate workloads to Microsoft Azure. This includes Azure Backup, Azure Site Recovery, Azure Log Analytics, or managing Microsoft Azure resources with VMware vRealize Automation.
But the real game-changer seems to be this:
Host VMware infrastructure with VMware virtualization on Azure
Bam! Microsoft announces the preview of VMware vSphere on Microsoft Azure. It will run on bare metal on Azure hardware, alongside other Azure services. General availability is expected in 2018.
My two cents
This is the second big announcement about VMware stuff on Azure (don't forget VMware Horizon Cloud on Microsoft Azure). And although I believe that this is something Microsoft offers to get more customers onto Azure, this can be a great chance for VMware. VMware customers don't have to go to Amazon when they want to host VMware workloads at a major public cloud provider, especially if they are already Microsoft Azure/O365 customers. This is a pretty bold move from Microsoft, and similar to VMware Cloud on AWS. I'm curious to get more details about this.
This posting is ~6 years old. Keep this in mind: IT is a fast-moving business, and this information might be outdated.
When I talk to customers and colleagues about cloud offerings, most of them are still concerned about the cloud, and especially about the security of public cloud offerings. One of the most frequently mentioned concerns is based on the belief that each and every cloud-based VM is publicly reachable over the internet. This can be the case, but it does not have to be: it depends on your design. Maybe that is only a problem in Germany. German privacy policies are the reason for the two German Azure datacenters. They are run by Deutsche Telekom, not by Microsoft.
Azure Virtual Networks
An Azure Virtual Network (VNet) is a network inside the public Azure cloud. It is isolated from the underlying infrastructure and it is dedicated to you. This allows you to fully control IP addressing, DNS, security policies and routing between subnets. Virtual Networks can include multiple subnets to reflect different security zones and/or multi-tier designs. If you want to connect two or more VNets in the same region, you have to use VNet peering. Microsoft offers excellent documentation about Virtual Networks. Because routing is managed by the Azure infrastructure, you have to set user-defined routes to push traffic through a firewall or load-balancing appliance.
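As a sketch of that last point (resource group, VNet and subnet names as well as all address ranges are examples, using the AzureRM PowerShell cmdlets of that era): a route table with a user-defined route that sends traffic to a virtual appliance, associated with a subnet.

```powershell
# Sketch: send all traffic destined for the backend subnet through a
# firewall appliance at 10.0.0.4 (all names and addresses are examples)
$Route = New-AzureRmRouteConfig -Name "ToBackendViaFw" `
    -AddressPrefix "10.0.2.0/24" `
    -NextHopType VirtualAppliance `
    -NextHopIpAddress "10.0.0.4"

$RouteTable = New-AzureRmRouteTable -ResourceGroupName "MyRG" `
    -Location "West Europe" -Name "FrontendRoutes" -Route $Route

# Associate the route table with the frontend subnet of an existing VNet
$VNet = Get-AzureRmVirtualNetwork -ResourceGroupName "MyRG" -Name "MyVNet"
Set-AzureRmVirtualNetworkSubnetConfig -VirtualNetwork $VNet -Name "Frontend" `
    -AddressPrefix "10.0.1.0/24" -RouteTable $RouteTable
Set-AzureRmVirtualNetwork -VirtualNetwork $VNet
```

Without such a route, the Azure fabric would route the traffic between the subnets directly, bypassing the appliance.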
Who is Palo Alto Networks?
Palo Alto Networks was founded by Nir Zuk in 2005. Nir Zuk is the founder and CTO of Palo Alto Networks, and he is still leading the development. He is a former employee of Check Point and NetScreen (which was acquired by Juniper Networks). His motivation to develop his vision of a Next Generation Firewall (NGF) was the fact that firewalls were unable to look into traffic streams. We all know this: you want your employees to be able to use Google, but you don't want them to access Facebook. Designing policies for this can be a real PITA. You can solve this with a proxy server, but a proxy has other disadvantages.
Gartner has identified Palo Alto Networks as a leader in the enterprise firewall market since 2011.
I was able to get my hands on some Palo Alto firewalls, and I think I understand why Palo Alto Networks is recognized as a leader.
VM-Series for Microsoft Azure
Sometimes you have to separate networks. That's no big deal when your servers are located in your own datacenter, even if they are virtualized. But what if the servers are located in a VNet on Azure? As already mentioned, you can create different subnets in an Azure VNet to build a multi-tier or multi-subnet environment. Because routing is managed by the underlying Azure infrastructure, you have to use Network Security Groups (NSGs) to manage traffic. An NSG contains rules to allow or deny network traffic to VMs in a VNet. Unfortunately, NSGs can only act on layer 4. If you need something that can act on layer 7, you need something different. This is where the Palo Alto Networks VM-Series for Microsoft Azure comes into play.
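To illustrate the layer-4 nature of NSG rules (group names and address prefixes are examples, AzureRM cmdlets): a rule can match protocol, source/destination prefix and port, but nothing above that.

```powershell
# An NSG rule can only match layer-3/4 attributes: protocol, source/
# destination prefix and port. It cannot inspect the application payload.
$Rule = New-AzureRmNetworkSecurityRuleConfig -Name "Allow-HTTPS-In" `
    -Protocol Tcp -Direction Inbound -Priority 100 `
    -SourceAddressPrefix "10.0.1.0/24" -SourcePortRange * `
    -DestinationAddressPrefix "10.0.2.0/24" -DestinationPortRange 443 `
    -Access Allow

New-AzureRmNetworkSecurityGroup -ResourceGroupName "MyRG" `
    -Location "West Europe" -Name "Backend-NSG" -SecurityRules $Rule
```

Anything that needs to distinguish, say, one HTTPS application from another on the same port is beyond what an NSG can express.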
The VM-Series for Microsoft Azure can be deployed directly from the Azure Marketplace. Palo Alto Networks also offers ARM templates on GitHub.
Palo Alto Networks aims at three main use-cases:
Hybrid Cloud
Segmentation Gateway/ Compliance
Internet Gateway
The hybrid cloud use-case is interesting if you want to extend your datacenter to Azure, for example if you move development workloads to Azure. Instead of using Azure's native VPN capabilities, you can use the VM-Series Palo Alto Networks NGF as an IPsec gateway.
If you are running different workloads on Azure, and you need inter-subnet communication between them, you can use the VM-Series as a firewall between the subnets. This allows you to manage traffic more efficiently, and it provides more security compared to the Azure NSGs.
If you are running production workloads on Azure, e.g. an RDS farm, you can use the VM-Series to secure the internet access of that RDS farm. Due to the integration with directory services, like Microsoft Active Directory or plain LDAP, user-based policies allow the management of traffic based on user identity.
There is a fourth use-case: Palo Alto Networks GlobalProtect. With GlobalProtect, the capabilities of the NGF are extended to remote users and devices. Traffic is tunneled to the NGF, and users and devices are protected from threats. User- and application-based policies can be enforced regardless of where the user and the device are located: on-premises, in a remote location or in the cloud.
Palo Alto Networks offers two ways to purchase the VM-Series for Microsoft Azure:
Consumption-based licensing
Bring your own license (BYOL)
The consumption-based licensing is only available for the VM-300. The smaller VM-100, as well as the bigger VM-500 and VM-700, are only available via BYOL. It’s a good idea to offer a mid-sized model with a consumption-based license. If the VM-300 is too big (with consumption-based licensing), you can purchase a permanent license for a VM-100. If you need more performance, purchasing your own license might be the better way. You can start with a VM-300 and then rightsize the model and license.
All models can handle a throughput of 1 Gb/s, but they differ in the number of supported sessions. The VM-100 and VM-300, as well as the VM-500 and VM-700, run on D3_v2 instances.
Just play with it
Just create some Azure VM instances and deploy a VM-300 from the marketplace. Play with it. It's awesome!
This posting is ~7 years old. Keep this in mind: IT is a fast-moving business, and this information might be outdated.
Okay, the headline of this blog post is a bit provocative. This blog post is not written from the vendor perspective. It's the perspective of someone who's sitting between the vendor and the customer. A value-added reseller (VAR) is typically located between vendor and customer, and the business model of a VAR is typically based on selling hardware, software and services.
Added value
The typical customer doesn't have the time, the money or the know-how to transform business requirements into a bill of materials (BOM). It's a "make-or-buy" decision, and "buy" is often better than "make". The customer needs a partner who helps transform the business requirements into a solution and a BOM.
Even "simple" things, like a new server, are sometimes complex. What memory configuration? How many disks? Which controller? Which CPU for which application? Who ensures that the firmware is upgraded? Who labels the cables during rack-and-stack? These things are not self-evident. Sure, servers are commodities. You can buy an HPE ProLiant from an online shop. You can buy expansion enclosures for an HPE 3PAR from an online shop. You can buy nearly everything online. But which customer risks buying crap? At this point, a VAR can offer added value.
The downside of a buyer's market
IT budgets are under considerable cost pressure. The customer always wants the best price, and there are many VARs. As a VAR, you are not in the best position: information technology is a buyer's market. As a VAR, you must offer added value and the best price. Customers love free advice... and then they buy from an online shop, or from another VAR that was cheaper.
Cloud eliminates hardware/ software revenues
Cloud offerings are awesome! For customers... But they are the plague for VARs. With cloud offerings and services, you usually need to sell more billable hours to achieve the same margins as with a combination of hardware, software and services. And your employees need different skills. Take the example of Office 365. Until now, a VAR sold 200 licenses for Microsoft Office (Open License). Now he sells 200 E3 plans. The revenue is not the same. Maybe a little more service for the implementation of Office 365 and AD FS. Or Microsoft Exchange: many customers consider the use of Exchange Online (often as part of an Office 365 deployment). Or Microsoft Azure instead of VMware vSphere on-premises. No hardware, less software, a similar amount of service, but different skills.
Develop your business model
Cloud offerings and “price-conscious” customers are forcing VARs to rethink their business model. Decreasing margins and a highly competitive market make the sale of hardware and software increasingly unattractive. But cloud offerings require other skills from your sales and technical teams. Such fundamental changes need time, patience and leadership to be successful.
This posting is ~7 years old. Keep this in mind: IT is a fast-moving business, and this information might be outdated.
Before you can manage Azure services with Azure Automation, you need to authenticate the Automation account against a subscription. This authentication process is part of each runbook. There are two different ways to authenticate against an Azure subscription:
Active Directory user
Certificate
If you want to use an Active Directory account, you have to create a credential asset in the Automation account and provide username and password for that Active Directory account. You can retrieve the credentials using the Get-AzureAutomationCredential cmdlet. This cmdlet returns a System.Management.Automation.PSCredential object, which can be used with Add-AzureAccount to connect to a subscription. If you want to use a certificate, you need four assets in the Automation account: A certificate and variables with the certificate name, the subscription ID and the subscription name. The values of these assets can be retrieved with Get-AutomationVariable and Get-AutomationCertificate.
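A minimal sketch of the credential-based variant (the asset and subscription names are examples; inside a runbook, the credential asset is retrieved with the Get-AutomationPSCredential activity):

```powershell
workflow Connect-WithCredential
{
    # 'AzureADUser' is the name of a credential asset in the Automation
    # account (example name). The activity returns a PSCredential object.
    $Cred = Get-AutomationPSCredential -Name "AzureADUser"

    # Authenticate against Azure and select the subscription to work with
    Add-AzureAccount -Credential $Cred
    Select-AzureSubscription -SubscriptionName "MySubscription"
}
```

The certificate-based variant is covered in detail in the rest of this post.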
Prerequisites
Before you start, you need a certificate. This can be a self-signed or a CA-signed certificate. Check this blog post from Alice Waddicor if you want to start with a self-signed certificate. I used a certificate that was signed by my lab CA.
At a Glance:
self- or CA-signed certificate
Base64 encoded DER format (file name extension .cer) to upload it as a management certificate
PKCS #12 format with private key (file name extension .pfx or .cer) to use it as an asset inside the Automation account
Upload the management certificate
First, you must upload the certificate to the management certificates. Log in to Azure and click "Settings".
Patrick Terlisten/ www.vcloudnine.de/ Creative Commons CC0
Click on “Management Certificates”
and select “Upload” at the bottom of the website.
Make sure that the certificate has the correct format and file name extension (.cer).
Finish the upload dialog. After a few seconds, the certificate should appear in the listing.
Create a new Automation account
Now it’s time to create the Automation account. Select “Automation” from the left panel.
Click on “Create an Automation account”.
Give your Automation account a descriptive name and select a region. Please note that an Automation account can manage Azure services from all regions!
Click on the newly created account and click on “Assets”.
Select “Add setting” from the bottom of the website.
Add a credential asset by choosing “Add credential” and select “Certificate” as “Credential type”.
Enter a descriptive name for the certificate. You should remember this name. You will need it later. Now you have to upload the certificate. The certificate must have the file name extension .pfx or .cer and it must include the private key!
Finish the upload of the certificate. Now add three additional assets (variables).
Select the name, the value and the type from the table below. The name of the certificate is the descriptive name you've previously entered when uploading the certificate.
Name | Value | Type
AutomationCertificateName | Name of your certificate | String
AzureSubscriptionName | Name of your subscription | String
AzureSubscriptionID | 36-digit ID of the subscription | String
Done. You’ve uploaded and created all the required certificates and variables.
How to use it
To use the certificate and the variables to connect to an Azure subscription, you have to use the two cmdlets Get-AutomationCertificate and Get-AutomationVariable. I use this code block in my runbooks:
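A sketch of that code block, reconstructed from the asset names in the table above (the Azure Service Management cmdlets of that era are assumed):

```powershell
# Retrieve the variable and certificate assets from the Automation account
$AzureSubscriptionName = Get-AutomationVariable -Name "AzureSubscriptionName"
$AzureSubscriptionID = Get-AutomationVariable -Name "AzureSubscriptionID"
$CertificateName = Get-AutomationVariable -Name "AutomationCertificateName"
$Certificate = Get-AutomationCertificate -Name $CertificateName

# Authenticate against the subscription with the certificate and select it
Set-AzureSubscription -SubscriptionName $AzureSubscriptionName `
    -SubscriptionId $AzureSubscriptionID -Certificate $Certificate
Select-AzureSubscription -SubscriptionName $AzureSubscriptionName
```

Everything after this block runs in the context of the selected subscription.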
Certificate-based authentication is an easy way to authenticate an Automation account against an Azure subscription. It’s easy to implement and you don’t have to maintain users and passwords. You can use different certificates for different Automation accounts. I really recommend this, especially if you have separate accounts for dev, test and production.
All you need is to upload a certificate as a management certificate, and as a credential asset in the Automation account. You can use a self-signed or CA-signed certificate. The subscription ID, the subscription name and the name of the certificate are stored in variables.
At the beginning of each runbook, you have to insert a code block. This code block takes care of authentication.
This posting is ~7 years old. Keep this in mind: IT is a fast-moving business, and this information might be outdated.
Automation is essential to reduce friction and to streamline operational processes. It's indispensable when it comes to the automation of manual, error-prone and frequently repeated tasks in a cloud or enterprise environment. Automation is the key to IT industrialization. Azure Automation is used to automate operational processes within Microsoft Azure.
Automation account
The very first thing you have to create is an Automation account. You can have multiple Automation accounts per subscription. An Automation account allows you to separate automation resources from other Automation accounts. Automation resources are runbooks and assets (credentials, certificates, connection strings, variables, schedules etc.). So each Automation account has its own set of runbooks and assets. This is perfect to separate production from development. An Automation account is associated with an Azure region, but it can manage Azure services in all regions.
Runbooks
A runbook is a collection of PowerShell scripts or PowerShell Workflows. You can automate nearly everything with them. If something provides an API, you can use a runbook and PowerShell to automate it. A runbook can run other runbooks, so you can build really complex automation processes. A runbook can access any service that can be accessed by Microsoft Azure, regardless of whether it's an internal or external service.
There are three types of runbooks:
Graphical runbooks
PowerShell Workflow runbooks
PowerShell runbooks
Graphical runbooks can be created and maintained with a graphical editor within the Azure portal. Graphical runbooks use PowerShell Workflow code, but you can't directly view or modify this code. Graphical runbooks are great for customers that don't have much automation and/or PowerShell knowledge. Once you have created a graphical runbook in an Automation account, you can export it and import it into other Automation accounts, but you can modify the runbook only with the account that was used during its creation.
PowerShell Workflow runbooks don't have a graphical representation of the workflow. You can use a text editor to create and modify PowerShell Workflow runbooks. But you need to know how to deal with the logic of PowerShell Workflow code.
PowerShell runbooks are plain PowerShell code. Unlike a PowerShell Workflow runbook, a PowerShell runbook starts faster, because it doesn't have to be compiled before the run. But you have to be familiar with PowerShell, and there is no parallel processing and no checkpointing (if a PowerShell runbook fails, it has to start again from the beginning; a workflow can be resumed at the last successful checkpoint).
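The two workflow features mentioned above can be sketched like this (runbook and server names are examples):

```powershell
workflow Demo-WorkflowFeatures
{
    # Checkpoint: if the runbook is suspended after this point, it can be
    # resumed here instead of starting again from the beginning
    Checkpoint-Workflow

    # Parallel processing: the loop iterations run concurrently,
    # something a plain PowerShell runbook cannot do
    ForEach -Parallel ($Server in @("vm1", "vm2", "vm3"))
    {
        Write-Output "Processing $Server"
    }
}
```

Neither Checkpoint-Workflow nor ForEach -Parallel is available in a plain PowerShell runbook.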
Schedule
Schedules are used to run runbooks at a specific point in time. Runbooks and schedules have an M:N relationship: a schedule can be associated with one or more runbooks, and a runbook can be linked to one or more schedules.
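Creating a schedule and linking it to a runbook might look like this (Azure Service Management cmdlets of that era; account, runbook and schedule names are examples):

```powershell
# Create a daily schedule that first fires tomorrow at 02:00
New-AzureAutomationSchedule -AutomationAccountName "MyAutomationAccount" `
    -Name "DailyAt2AM" `
    -StartTime ([DateTime]::Today.AddDays(1).AddHours(2)) `
    -DayInterval 1

# Link the schedule to a runbook. Because of the M:N relationship, the
# same schedule could be registered with further runbooks as well.
Register-AzureAutomationScheduledRunbook -AutomationAccountName "MyAutomationAccount" `
    -RunbookName "Stop-DevVMs" -ScheduleName "DailyAt2AM"
```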
Summary
This was only a brief introduction to Azure Automation. Azure Automation uses Automation accounts to execute runbooks. A runbook consists of PowerShell Workflow or plain PowerShell code. You can use runbooks to automate nearly all operations of Azure services. To execute runbooks at a specific point in time, you can use schedules. Runbooks, schedules and automation assets, like credentials, certificates etc., are associated with a specific Automation account. This helps you to separate environments, e.g. accounts for development and for production.
This posting is ~8 years old. Keep this in mind: IT is a fast-moving business, and this information might be outdated.
Building networks in the cloud is sometimes hard to understand. A common mistake is to believe that all VMs can talk to one another, regardless of the owner, and that all VMs are reachable over the internet.
Some basics about Cloud Service Endpoints and Virtual Networks
When we talk about Microsoft Azure, a Cloud Service Endpoint is the easiest way to access one or multiple VMs. A Cloud Service contains resources, like VMs, and acts as a communication and security boundary. All VMs in the same Cloud Service get their IPs via DHCP and share the same private IP address range. The VMs can communicate directly with each other. To access these VMs over the internet, a Cloud Service Endpoint is used. Each Cloud Service has an internet-addressable virtual IP address assigned, and that's the Cloud Service Endpoint. With PAT, ports for RDP or PowerShell are forwarded to the VMs by default. If you deploy a webserver and an application server, both can be provisioned to the same Cloud Service and therefore share the same Cloud Service Endpoint. But you can forward only HTTP traffic to the webserver. That way, only the webserver is reachable over the internet, not the application server.
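As a sketch (Azure Service Management cmdlets; service and VM names are examples): an HTTP endpoint is added to the webserver only, so the application server in the same Cloud Service stays unreachable from the internet.

```powershell
# Forward public port 80 of the Cloud Service Endpoint to the webserver
# VM. The application server in the same Cloud Service gets no such
# endpoint and therefore stays unreachable from the internet.
Get-AzureVM -ServiceName "MyCloudService" -Name "WebServer" |
    Add-AzureEndpoint -Name "HTTP" -Protocol tcp -LocalPort 80 -PublicPort 80 |
    Update-AzureVM
```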
If you need more complex networking within Microsoft Azure, you may take a look at Virtual Networks (VNets). VNets are used to create and manage isolated networks within Microsoft Azure. Each VNet has its own selectable IP address range. VNets can be linked to other VNets and to on-premises networks using VPN techniques. VNets allow you to assign "your own" IP addresses to VMs. To be honest: you can define "your own" IP subnet, but even with Set-AzureStaticVNetIP, you can't assign a real static IP to those VMs. It's more a kind request to get the same IP every time the VM boots.
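Requesting such an address might look like this (service, VM and VNet names as well as the address are examples):

```powershell
# "Request" the address 192.168.186.10 for the VM. This behaves like a
# DHCP reservation, not like a static IP configured inside the guest.
Get-AzureVM -ServiceName "MyCloudService" -Name "AppServer" |
    Set-AzureStaticVNetIP -IPAddress "192.168.186.10" |
    Update-AzureVM

# Check whether the address is still available in the VNet
Test-AzureStaticVNetIP -VNetName "MyVNet" -IPAddress "192.168.186.10"
```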
With VNets you can extend your on-premises network to Microsoft Azure. You can deploy test and development environments to Microsoft Azure and connect to the used VNets using IPsec. Or you can run your whole datacenter within Microsoft Azure and only have PCs and laptops on-premises.
Connect an on-premises network to Microsoft Azure
In this blog article, I'd like to show you how to connect your on-premises network to a Microsoft Azure Virtual Network. In this case, I used my AVM FRITZ!Box 7270 as the VPN gateway in my on-premises network. Many well-known manufacturers are listed on the compatibility list.
The very first step is to create a Virtual Network (VNet) in the Microsoft Azure portal. Select “NETWORKS” from the left panel and create a new VNet.
You should assign a descriptive name to your VNet. Just a bit of information: the datacenter for the location "West Europe" is located in the Netherlands. Currently there is no datacenter in Germany.
If you have a DNS server in your local network, you can add it in step two of the "Create a Virtual Network" wizard. Make sure that you select the "Configure a site-to-site VPN" checkbox. Otherwise you can't create an IPsec tunnel.
In the third step, you have to enter details about your on-premises network and the VPN device. The VPN device address is the public address of your VPN gateway. You have to use an IP address here; you can't use an FQDN. So make sure that you have a static IP address! I don't have one, which means I have to change my VNet config each time my public IP changes (every 24 hours). Enter the address spaces of your local IP subnets.
The “Virtual Network Address Space” defines the IP subnet which should be used for the VMs that will be provisioned to this VNet. And you have to add a subnet for the gateway. The gateway subnet is used for the IPsec connection. Please note that this subnet can’t be used for VMs.
The creation of the VNet takes some time. After a couple of minutes you should be able to click on “VIRTUAL NETWORKS”, “LOCAL NETWORKS” and “DNS SERVERS” and you should see the values you’ve entered.
In the "LOCAL NETWORKS" section you can see the address space of your local network, as well as the IP of your VPN gateway. If you don't have a static IP address, you have to change the VPN gateway address here every time the IP address changes.
Last but not least: The DNS server of your local network.
The next step is to create a gateway. The gateway is necessary for the IPsec connection. Click on "CREATE GATEWAY".
Select "Static Routing". That's all. The creation of the gateway will take some time. I had to wait about 5 minutes until the gateway was created.
After the creation of the gateway, you should see the gateway IP address. This address is the VPN endpoint from the perspective of your local network.
Besides the gateway IP address, you also need the shared key to create an IPsec tunnel. Click on "MANAGE KEY" to get the pre-shared key.
Copy the pre-shared key and save it for later use.
Now the configuration of the VNet is finished, and it's time to configure your on-premises VPN gateway. In my case it's an AVM FRITZ!Box, which isn't listed on the compatible device list. You have to create and import a config file. This is the config file I've used; I highlighted the parts of the config that you have to modify. Please note that previously created VPN connections may be deleted when you import this file. If you have additional VPN connections configured on your FRITZ!Box, please make sure that you add them to this file BEFORE you import it.
/*
 * VPN config file for
 * AVM FRITZ!Box
 */
vpncfg {
        connections {
                enabled = yes;
                conn_type = conntype_lan;
                name = "azure-2-home"; /* change this name */
                always_renew = no;
                reject_not_encrypted = no;
                dont_filter_netbios = yes;
                localip = 0.0.0.0;
                local_virtualip = 0.0.0.0;
                remoteip = 40.xxx.xxx.xxx; /* the gateway IP address of your Microsoft Azure gateway */
                remote_virtualip = 0.0.0.0;
                localid {
                        ipaddr = 78.xxx.xxx.xxx; /* your local VPN gateway IP address */
                }
                remoteid {
                        ipaddr = 40.xxx.xxx.xxx; /* the gateway IP address of your Microsoft Azure gateway */
                }
                mode = phase1_mode_aggressive;
                phase1ss = "all/all/all";
                keytype = connkeytype_pre_shared;
                key = "FcYKsPLpeDFxxxxxxxxxxxxxxxxxxxxxxx"; /* the PSK */
                cert_do_server_auth = no;
                use_nat_t = yes;
                use_xauth = no;
                use_cfgmode = no;
                phase2localid {
                        ipnet {
                                ipaddr = 192.168.20.0; /* your local subnet */
                                mask = 255.255.255.224; /* your local subnet mask */
                        }
                }
                phase2remoteid {
                        ipnet {
                                ipaddr = 192.168.186.0; /* the virtual address space */
                                mask = 255.255.255.0; /* the corresponding subnet mask */
                        }
                }
                phase2ss = "esp-all-all/ah-none/comp-all/no-pfs"; /* change pfs to no-pfs */
                accesslist = "permit ip any 192.168.186.0 255.255.255.0"; /* change to your virtual address space */
        }
        ike_forward_rules = "udp 0.0.0.0:500 0.0.0.0:500",
                            "udp 0.0.0.0:4500 0.0.0.0:4500";
}
// EOF
You can use my config as a template for your VPN config. Simply change the highlighted values, remove the comments I added to highlight them (keep the comment block at the beginning of the file), and import the file into your FRITZ!Box. If everything's fine, the IPsec tunnel should be established within a couple of seconds.
As soon as the IPsec tunnel is established, you should be able to ping the gateway IP address. In my case it’s the IP address 192.168.186.37.
If you create a new Azure VM, you can now use your newly created VNet and subnet. Simply choose the VNet from the “REGION/ AFFINITY GROUP/ VIRTUAL NETWORK” drop down menu.
The first VM will get the IP address 192.168.186.4. Why the .4, when the first usable address of the subnet is .1? The first four IP addresses of the address space are reserved by Azure, so the first usable IP address is 192.168.186.4. The second VM will get 192.168.186.5, and so on. The same applies to the gateway address space.
Now, with a working IPsec tunnel to my VNet, I can open a RDP connection to my newly created VM using the IP address 192.168.186.4. And there it is:
If you connect your on-premises network to Microsoft Azure, you should know that you can use up to 80 Mbps with an availability SLA of 99.9% and no performance commitment! If you need more bandwidth, you can use a High Performance Gateway, which can deliver up to 250 Mbps.
I would recommend using VNets whenever possible, not only in the case of connecting an on-premises network. If you wish to use VPN features, take a look at the VPN gateway compatibility list! Otherwise, connecting your on-premises network to Microsoft Azure might be a bit fiddly.
This posting is ~9 years old. Keep this in mind: IT is a fast-moving business, and this information might be outdated.
VMware vCloud Hybrid Service (vCHS) stands in one line with Amazon Web Services, Microsoft Azure, Rackspace Cloud and other cloud offerings. I don't want to compare the different providers with vCHS. To be honest: this article is more a summary for myself than really new content. I just want to summarize information about the IaaS offering of VMware. If you want a comparison of vCHS and AWS, I recommend reading this article written by Alex Mattson (AHEAD).
Introducing VMware vCloud Hybrid Service (vCHS)
VMware vCHS isn't a tasty cheese (please DON'T pronounce it "vCheese"...), it's a public cloud IaaS offering by VMware. And because public cloud concepts are no cheese, you should take a closer look at it. VMware vCHS is built with the same VMware products that you're using in your private datacenter. Because of this, vCHS is compatible with your private VMware environment and you can move VMs between your private datacenter and VMware vCHS. You can use vCHS to move workloads to a public cloud environment, or you can use it to start a new deployment. And sure, you can also move workloads from vCHS back to your private datacenter.
Core Compute Services
VMware offers two core compute services: Dedicated Cloud and Virtual Private Cloud. Both provide a pool of compute, storage and networking resources. Dedicated Cloud is, as the name says, dedicated to a single tenant: physically isolated, with a dedicated management stack. 100% of the resources are reserved and can be allocated depending on customer needs. The customer can assign the resources to virtual dedicated clouds. Each virtual dedicated cloud provides individual access and control over the resources. Virtual Private Cloud is multi-tenant and logically isolated; the infrastructure is shared among several tenants. The Virtual Private Cloud is ideal for testing or peak workloads. Both core services can be extended by various options. CPU, memory, storage and IP addresses can be added in increments to both compute services.
Business Continuity
VMware offers two services to protect your private and cloud-based VMs: vCHS Disaster Recovery and vCHS Data Protection. vCHS Disaster Recovery is based on vSphere Replication. With vCHS Disaster Recovery you can replicate VMs from your private datacenter into your vCHS environment, regardless of whether you have a dedicated cloud or a virtual private cloud. vCHS Data Protection is used to protect the VMs that are running in your vCHS dedicated cloud or virtual private cloud environment.
Management Tools & Networking
Beside the core compute services and services like vCHS Disaster Recovery and vCHS Data Protection, VMware offers and supports tools and applications to increase the value of vCHS. You can use the free vCloud Connector to migrate VMs from your VMware vSphere or vCloud environment to a vCHS dedicated cloud or virtual private cloud. VMware vCloud Automation Center can be used together with vCHS, e.g. users can provision multi-tier applications in vCHS by using vCAC self-service. vCHS takes care of the infrastructure deployment, while vCAC controls the application deployment and enforces governance. With the vSphere Web Client plug-in you can manage your private VMware environment and your vCHS environment through the same client. Offline Data Transfer (ODT) can be used for bulk uploads of VMs, templates, vApps etc. An encrypted device is provided by VMware to store the data for the transfer. Regarding networking, VMware offers the vCloud Hybrid Service Edge Gateway and Direct Connect. The edge gateway provides features like firewalling, NAT, IPsec VPN and load balancing. Direct Connect provides high-bandwidth (1 Gbps and 10 Gbps) connections to connect vCHS to your private datacenter. Direct Connect is a service provided by VMware and VMware Direct Connect partners.
Pricing
For pricing you should visit the vCHS Pricing & Comparison website, but to give you a clue: a virtual private cloud (20 GB memory, 5 GHz CPU, 2 TB storage, 10 Mbps bandwidth and 2 public IPs) costs ~1,200 € per month.
Final words
vCHS is a great product and there are dozens of use cases for it, e.g. disaster recovery with vSphere Replication. Good news for vExperts: VMware has re-launched the vExpert access to vCHS. Participants can use vCHS for 30 days. A great chance to demonstrate this to potential customers!