This posting is roughly six years old. Please keep that in mind: IT is a fast-moving business, and this information might be outdated.
One of my personal predictions for 2017 is that Microsoft Azure will gain more market share, especially here in Germany. Because of this, I have started to refresh my knowledge of Azure. A nice side effect is that I can also improve my PowerShell skills.
Currently, the script creates a couple of VMs and resource groups, nothing more, nothing less; a minimal sketch of this basic idea is shown below. The next features I want to add are:
add additional disks to the DCs (for SYSVOL and NTDS)
promote both servers to domain controllers
change the DNS settings for the Azure vNetwork
deploy a Windows 10 client VM
I created a new repository on GitHub and shared a first v0.1 as a public Gist. Please note that this is REALLY a v0.1.
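The following is only a minimal sketch of that basic idea, written against the current Az cmdlets for illustration (the original v0.1 used the Azure PowerShell module available at the time). All names, sizes and locations are placeholders and not taken from the original script.

```powershell
# Minimal sketch only - the real v0.1 script lives in the linked GitHub repository / Gist.
# Names, sizes and locations below are placeholders, not values from the original script.

Connect-AzAccount

$rgName   = 'lab-rg'            # hypothetical resource group name
$location = 'West Europe'
$cred     = Get-Credential      # local administrator credentials for the VMs

# Resource group for the lab
New-AzResourceGroup -Name $rgName -Location $location

# Two small VMs that will later be promoted to domain controllers
foreach ($vmName in 'lab-dc01', 'lab-dc02') {
    New-AzVM -ResourceGroupName $rgName `
             -Name $vmName `
             -Location $location `
             -Image 'Win2016Datacenter' `
             -Size 'Standard_B2s' `
             -VirtualNetworkName 'lab-vnet' `
             -SubnetName 'lab-subnet' `
             -Credential $cred
}
```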
My lab is separated from my home network and focused on the needs of a lab. A detailed overview of my lab can be found here. I divided it into a lab part and an infrastructure part. The infrastructure part consists of the devices needed to provide basic infrastructure and management. The other part is my playground.
While planning my lab, I focused on these requirements:
Reuse of existing equipment
Separation of traffic within the lab and to the outer world
Scalable, robust and predictable performance
The equipment
To meet my requirements, I had the following equipment available:
HP 1910-24G switch
HP 1910-8G switch
Juniper 5GT firewall
The design
The HP 1910 switch is an awesome product with a very good price/performance ratio, especially because it can do IP routing, which was important for my lab design. Each of my ESXi hosts has 4x 1 GbE interfaces, plus one interface for iLO. In sum, 20 ports are necessary to connect my ESXi hosts to my network. The 1910-24G and the 1910-8G were connected with a 1 GbE RJ45 SFP. The 1910-8G is used to connect the firewall and client devices, e.g. a thin client or a laptop. No other devices are connected to my lab. Because storage is delivered by an HP StoreVirtual VSA, no ports are needed for a NAS or similar.
To separate the traffic, I created a couple of VLANs. Unlike Chris, I’m still using VLAN 1 in my lab. In a customer environment, I would avoid the use of VLAN 1.
VLAN ID | Name             | Usage
--------|------------------|------------------------------------------
1       | Access (Default) | Client connectivity
2       | Management       | iLO, management VMkernel ports
3       | Infra            | VMs and devices for the lab infrastructure
4       | Lab 1            | Lab VLAN
5       | Lab 2            | Lab VLAN
6       | Lab 3            | Lab VLAN
7       | Temp             | Temporary connectivity
10      | iSCSI 1          | iSCSI
11      | iSCSI 2          | iSCSI
100     | NFS              | NFS
200     | vMotion          | vMotion VMkernel ports
VLANs 1 (Default) and 3 are carried to the 1910-8G. All VLANs are carried to the ESXi hosts using trunk ports on the 1910-24G. The Juniper 5GT is connected to the 1910-8G: its trusted interface is connected to an access port in VLAN 3, and the untrusted port is connected to the outer world.
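On the ESXi side, the VLAN tagging happens on the portgroups of the trunked uplinks. A minimal PowerCLI sketch, assuming a standard vSwitch named vSwitch0, a host named esx1.lab.local (both placeholders) and showing only the three lab VLANs for brevity:

```powershell
# Sketch of the ESXi side of the trunk ports (VLAN tagging per portgroup).
# Host name, vSwitch name and the limitation to the lab VLANs are assumptions;
# the post does not describe the actual vSwitch layout.

Connect-VIServer -Server 'esx1.lab.local'          # hypothetical host name

$vmHost  = Get-VMHost -Name 'esx1.lab.local'
$vSwitch = Get-VirtualSwitch -VMHost $vmHost -Name 'vSwitch0'

# Tagged portgroups for the lab VLANs, using the VLAN IDs from the table above
$labVlans = @{ 'Lab 1' = 4; 'Lab 2' = 5; 'Lab 3' = 6 }

foreach ($pg in $labVlans.GetEnumerator()) {
    New-VirtualPortGroup -VirtualSwitch $vSwitch -Name $pg.Key -VLanId $pg.Value
}
```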
The routing looks a bit complex at first glance. I configured a couple of switch virtual interfaces (SVIs) on the 1910-24G, one for each of the VLANs 1, 2, 3, 7, 10, 11 and 100. But how do I get traffic in and out of my lab VLANs? I use a small firewall VM that is housed in VLAN 3 (Infra) and has additional interfaces (vNICs) in VLANs 4, 5 and 6. With this VM, I can carry traffic in and out of my lab VLANs, as long as a policy allows the traffic.
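The firewall VM simply gets one vNIC per lab portgroup in addition to its Infra interface. A PowerCLI sketch, assuming the VM is called fw01 (a placeholder) and the portgroups from the sketch above exist:

```powershell
# Sketch: attach one additional vNIC per lab VLAN to the firewall VM.
# The VM name 'fw01' is a placeholder, not the real name of my firewall VM.

$fw = Get-VM -Name 'fw01'

foreach ($pgName in 'Lab 1', 'Lab 2', 'Lab 3') {
    New-NetworkAdapter -VM $fw -NetworkName $pgName -Type Vmxnet3 -StartConnected:$true
}
```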
I use /27 subnets for VLANs 1 to 7, two /28 subnets for VLANs 100 (NFS) and 200 (vMotion), and two /24 subnets for VLANs 10 and 11 (both iSCSI). Each /27 offers 30 usable host addresses; 192.168.200.96/27, for example, covers 192.168.200.97 to 192.168.200.126.
VLAN ID | Name             | IP subnet
--------|------------------|-------------------
1       | Access (Default) | 192.168.200.0/27
2       | Management       | 192.168.200.32/27
3       | Infra            | 192.168.200.64/27
4       | Lab 1            | 192.168.200.96/27
5       | Lab 2            | 192.168.200.128/27
6       | Lab 3            | 192.168.200.160/27
7       | Temp             | 192.168.200.192/27
10      | iSCSI 1          | 192.168.110.0/24
11      | iSCSI 2          | 192.168.111.0/24
100     | NFS              | 192.168.200.224/28
200     | vMotion          | 192.168.200.240/28
I don’t use a routing protocol inside my lab. It looks complex, but with this design I can easily separate the traffic of my three lab VLANs. The iSCSI VLANs have SVIs and are therefore routable, but I don’t actually route iSCSI traffic; the same applies to NFS. This drawing gives you an overview of the routing.
Patrick Terlisten / www.vcloudnine.de / Creative Commons CC0
To simplify address assignment, I use a central DHCP server on VLAN 3 with several scopes. The HP 1910-24G and my firewall VM act as DHCP relays and forward DHCP requests to this server. For each VLAN, only a small number of dynamic IPs is available; the servers usually get a fixed IP. A hedged example of such a scope follows the table below.
VLAN ID | Name             | DHCP scope
--------|------------------|-------------------
1       | Access (Default) | 192.168.200.0/27
3       | Infra            | 192.168.200.64/27
4       | Lab 1            | 192.168.200.96/27
5       | Lab 2            | 192.168.200.128/27
6       | Lab 3            | 192.168.200.160/27
7       | Temp             | 192.168.200.192/27
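Here is a hedged sketch of what one of these scopes could look like, assuming the central DHCP service is the Windows Server DHCP role (the post does not name the product); all ranges and addresses are illustrative.

```powershell
# Hedged sketch of a DHCP scope for VLAN 4 (Lab 1, 192.168.200.96/27), assuming a
# Windows Server DHCP role. Ranges and addresses are illustrative: only a handful of
# dynamic addresses per VLAN, the servers get fixed IPs.

Import-Module DhcpServer

Add-DhcpServerv4Scope -Name 'Lab 1' `
                      -StartRange 192.168.200.116 `
                      -EndRange   192.168.200.125 `
                      -SubnetMask 255.255.255.224

# Default gateway and DNS server for the scope - both addresses are assumptions
Set-DhcpServerv4OptionValue -ScopeId 192.168.200.96 `
                            -Router 192.168.200.97 `
                            -DnsServer 192.168.200.66
```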
VLAN 10 is used to carry iSCSI traffic from the HP StoreVirtual VSA to my ESXi hosts. The second iSCSI VLAN (ID 11) can be used for tests, e.g. to simulate routed iSCSI traffic. VLANs 4, 5 and 6 are used for lab work. Until I add a rule on my firewall VM, no traffic can enter or leave VLANs 4, 5 and 6. When deploying a new VM, I add it to VLAN 1 or 3. The VM is installed using MDT and PXE. After applying all necessary updates (MDT uses WSUS during the setup), I move the VM to VLAN 4, 5 or 6.
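That last step, moving a freshly deployed and patched VM into one of the lab VLANs, is a one-liner in PowerCLI; the VM name below is a placeholder.

```powershell
# Move a deployed and patched VM from the Infra portgroup into a lab VLAN.
# 'lab-srv01' is a placeholder VM name.

Get-VM -Name 'lab-srv01' |
    Get-NetworkAdapter |
    Set-NetworkAdapter -NetworkName 'Lab 1' -Confirm:$false
```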
Final words
Sure, a lab network design could be simpler. The IP subnets can be a pitfall if you’re not familiar with subnetting, and the routing seems complex if you’re not an expert in IP routing. But to date, the network has done exactly what I expected.