This posting is ~1 year old. Keep this in mind: IT is a fast-moving business, and this information might be outdated.
For once, a blog post in German. The reason is that since January 2022, Claudia Kühn and I have been running a podcast together around the topics of datacenter, cloud and IT. A relaxed fireside chat in which we talk casually about our job and everything related to it.
The podcast is released every two weeks on the usual channels, or you can stop by the podcast's homepage. Feel free to leave a comment or some feedback, and give us a rating on iTunes.
This posting is ~2 years old. Keep this in mind: IT is a fast-moving business, and this information might be outdated.
As part of an Office 365 tenant rebuild, I had to move a custom domain to a new Office 365 tenant. The old tenant was no longer needed, and the customer had to move to a non-profit tenant for compliance reasons. So the migration itself looked like no big deal:
disable AzureAD sync
change UPN of all users
remove the domain
connect the domain to the new tenant
setup a new AzureAD sync
assign licenses
time for a beer
That was, honestly, my naive plan for this migration.
Disabling the AzureAD sync was easy. Even the change from ADFS to Password Hash Sync was easy. Changing the UPN for all users was a bit more challenging, but the PowerShell code in this article was quite helpful.
# Requires the MSOnline module; connect with Connect-MsolService first
$users = Get-MsolUser -All | Where-Object {$_.UserPrincipalName -like "*customdomain.tld"} | Select-Object UserPrincipalName

foreach ($user in $users) {
    # Build the new User Principal Name (dots escaped, because -replace uses regex)
    $newUser = $user.UserPrincipalName -replace "customdomain\.tld$", "customdomain.onmicrosoft.com"

    # Set the new User Principal Name
    Set-MsolUserPrincipalName -UserPrincipalName $user.UserPrincipalName -NewUserPrincipalName $newUser

    # Display the new User Principal Name
    $newUser
}
But after this, I was still unable to remove the custom domain from the tenant. The domain was still referenced in the ProxyAddresses attribute, which had been synced by the AzureAD sync…
Removing the domain from the users in the on-prem Active Directory was not a solution: the users were already cloud-only because the sync had been switched off. With this in mind, my plan was to modify the cloud-only users directly in the tenant. To be honest: this solution worked in this specific case!
The customer was using Microsoft Teams Commercial Cloud trial licenses, so I had no Exchange Online where I could edit the proxy addresses. But luckily, the Exchange Online Management PowerShell module was quite helpful.
This line of code gave me an idea of how many users were affected… quite a lot… Together with my colleague Claudia, I quickly developed some quick-and-dirty PowerShell code to remove all proxy addresses that included the custom domain.
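A minimal sketch of what such a check and cleanup could look like, assuming the ExchangeOnlineManagement module and mail users as the recipient type (both are assumptions, not the original code):

# Hypothetical reconstruction, not the original script.
# Assumes Connect-ExchangeOnline (ExchangeOnlineManagement module) and that the
# affected cloud-only users are mail users; adjust the cmdlets to the actual recipient type.
Connect-ExchangeOnline

# How many users still reference the custom domain?
$affected = Get-MailUser -ResultSize Unlimited |
    Where-Object { $_.EmailAddresses -match "customdomain\.tld" }
$affected.Count

# Remove every proxy address that contains the custom domain
# (the primary SMTP address cannot be removed this way; it has to be changed first)
foreach ($user in $affected) {
    $toRemove = $user.EmailAddresses | Where-Object { $_ -match "customdomain\.tld" }
    Set-MailUser -Identity $user.Identity -EmailAddresses @{ Remove = $toRemove }
}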
It took about 45 minutes to modify ~2,000 users. After this, I was able to remove the domain and connect it to the new tenant.
This solution worked in my case. Another way might be to use the AzureAD sync itself, filtering out the custom domain and waiting until it is removed from all proxy addresses. But I didn't test this.
This posting is ~3 years old. Keep this in mind: IT is a fast-moving business, and this information might be outdated.
A couple of days ago, I wrote about our first steps to move our on-prem stuff to Azure. This post will cover how we adopted Office 365 and how we have started with our Azure deployment.
Our first step into Office 365 was Microsoft Teams. We needed a solution for calls (audio/ video) and chat. We skipped Skype 4 Business and started with Microsoft Teams.
Our Microsoft Teams deployment was pretty simple: we used our Microsoft IUR Office 365 E3 plans. Microsoft Azure AD Connect was quickly deployed, and the Microsoft Exchange Hybrid Configuration Wizard did the rest. Some weeks later we deployed ADFS/ ADFS Proxy. We used this setup over several months; it was pretty slick and worked flawlessly. At this point, we only used Teams, Planner and OneDrive 4 Business (SharePoint).
Some months went by until we decided to move to Azure.
Resource groups in Azure
You can imagine a resource group (RG) as a container that contains one or more resources, like VMs, NICs, SQL instances etc. The resource group can contain all the resources for the solution, or only those resources that you want to manage as a group.
First question: What do we need to deploy?
The answer was easy:
9 VMs in total
VPN gateway
Recovery Services Vault
Automation Account
Log Analytics Workspace
Second question: One or multiple resource groups?
An easy rule of thumb is that a resource group should only contain resources that share the same life cycle and sponsor.
Third question: Who needs delegated privileges to manage this stuff?
In our case there was no need for fine-grained RBAC. All of our technical staff have a personalized admin account and should be able to do whatever is necessary.
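For reference, creating such a resource group with the Az PowerShell module is a one-liner; name, region and tags below are just examples, not our actual values:

# Illustrative values, not a real naming convention
New-AzResourceGroup -Name "rg-core" -Location "westeurope" -Tag @{ sponsor = "IT"; lifecycle = "permanent" }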
To connect our on-prem network to Azure, we had to set up a Site-2-Site VPN. This was the first thing we did after creating our first resource group. We used a Gen1 Basic VPN Gateway, which was sufficient for our needs (max. 100 Mbit, no OpenVPN, no BGP).
Keep in mind to choose your networks and subnets wisely. If you need to deploy 9 VMs, don't use 10.0.0.0/8. ;) In our case we added two network ranges with a single subnet in each range: one for our server VMs, and a second as the gateway subnet.
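A rough sketch of such a network layout and a Basic VPN gateway with the Az module could look like this (address ranges and resource names are made up for illustration; the gateway deployment itself takes around 30 to 45 minutes):

# Two address ranges, one subnet each: servers and the mandatory GatewaySubnet
$serverSubnet  = New-AzVirtualNetworkSubnetConfig -Name "snet-server"   -AddressPrefix "192.168.10.0/24"
$gatewaySubnet = New-AzVirtualNetworkSubnetConfig -Name "GatewaySubnet" -AddressPrefix "192.168.20.0/27"
$vnet = New-AzVirtualNetwork -Name "vnet-azure" -ResourceGroupName "rg-core" -Location "westeurope" `
    -AddressPrefix "192.168.10.0/24","192.168.20.0/27" -Subnet $serverSubnet,$gatewaySubnet

# Basic SKU gateway (route-based, dynamic public IP)
$gwPip  = New-AzPublicIpAddress -Name "pip-vpngw" -ResourceGroupName "rg-core" -Location "westeurope" -AllocationMethod Dynamic
$subnet = Get-AzVirtualNetworkSubnetConfig -Name "GatewaySubnet" -VirtualNetwork $vnet
$ipConf = New-AzVirtualNetworkGatewayIpConfig -Name "gwipconfig" -SubnetId $subnet.Id -PublicIpAddressId $gwPip.Id
New-AzVirtualNetworkGateway -Name "vpngw-azure" -ResourceGroupName "rg-core" -Location "westeurope" `
    -IpConfigurations $ipConf -GatewayType Vpn -VpnType RouteBased -GatewaySku Basic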
VM Deployment
We deployed our VMs as B-series VMs. A common mistake is to use the wrong VM size: start small and right-size a VM if necessary. Most of our VMs are B2s (2 vCPUs, 4 GB RAM). Only the Exchange server (B4ms), the management server (B2ms) and the RDS host (B2ms) differ from this. This looks pretty small for Server 2019, but it works pretty well.
After deploying the VMs, we assigned static IP addresses to them. To our surprise, most things in Azure are lacking proper IPv6 support. :( That hurt a lot.
For most VMs we used Standard HDDs instead of SSDs, even for our file server, because the bottleneck is not the disk, it is the connection between clients and server. Besides this, we used managed disks for all VMs, and we deployed a second data disk where necessary (Exchange, domain controller, file server etc.).
If a server had a DNAT in our on-prem network, we deployed a public IP and secured access to it.
All VMs are connected to the same Network Security Group (NSG), which we use to control what a VM can reach, and who can access a VM.
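A single VM deployment with the simplified New-AzVM parameter set could look roughly like this (names, size and image are placeholders):

# Hypothetical example of one B-series VM; prompts for the local admin credentials
New-AzVM -ResourceGroupName "rg-core" -Name "vm-file01" -Location "westeurope" `
    -VirtualNetworkName "vnet-azure" -SubnetName "snet-server" -SecurityGroupName "nsg-server" `
    -Size "Standard_B2s" -Image "Win2019Datacenter"

# Switch the private IP of the VM's NIC from dynamic to static
$nic = Get-AzNetworkInterface -ResourceGroupName "rg-core" |
    Where-Object { $_.VirtualMachine.Id -like "*vm-file01*" }
$nic.IpConfigurations[0].PrivateIpAllocationMethod = "Static"
$nic | Set-AzNetworkInterface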
Server Migration
Over a couple of days we moved more and more services to Azure, starting with our domain controllers, PKI and file services. These were low-hanging fruit. The file server was easy because we already had a DFS namespace in place, so all we had to do was change the DFS links and point them to the new file server. The data was copied using DFS replication.
DHCP was moved to our on-prem firewall. A print server was no longer necessary. Windows Update was switched back to downloading directly from Microsoft, combined with Delivery Optimization.
The applications that were running on our Linux and Windows application server were also easy to migrate. After a couple of days we had our server workload running on Azure.
To get our ERP running, we deployed a single RDS host (quick deployment) and published our ERP as a RemoteApp. It was too slow to use over the VPN. Unfortunately, the application lacks a proper database backend. :/ But as a RemoteApp, it works pretty well.
A bigger challenge was Exchange, but not because of the mailbox migrations.
Exchange Online
The migration to Exchange Online was pretty simple. Since our first HCW run, we had used centralized mail transport, so that all mail was received and sent through our on-prem mail gateway.
The mailbox migration was pretty easy and we had zero issues. Then we tried to switch the mail transport from centralized transport to Exchange Online. This was flawless too… except for the fact that our ticket system was unable to send e-mails.
Our ticket system relays its mail through our Exchange server. After switching the mail server in our ticket system to the new Azure-based VM, the mails got stuck in the outbound queue, even though the server tried to send them to our on-prem mail gateway. This quote from Microsoft explains the whole problem:
Starting on November 15, 2017, outbound email messages that are sent directly to external domains (such as outlook.com and gmail.com) from a virtual machine (VM) are made available only to certain subscription types in Microsoft Azure. Outbound SMTP connections that use TCP port 25 were blocked. (Port 25 is primarily used for unauthenticated email delivery.)
This change in behavior applies only to new subscriptions and new deployments since November 15, 2017.
This is the case for MSDN, Azure Pass, Azure in Open, Education, BizSpark, and Free Trial subscriptions!
If you created an MSDN, Azure Pass, Azure in Open, Education, BizSpark, Azure Sponsorship, Azure Student, Free Trial, or any Visual Studio subscription after November 15, 2017, you’ll have technical restrictions that block email that’s sent from VMs within these subscriptions directly to email providers. The restrictions are done to prevent abuse. No requests to remove this restriction will be granted.
If you’re using these subscription types, you’re encouraged to use SMTP relay services, as outlined earlier in this article or change your subscription type.
We accelerated our migration and disabled the centralized mail transport earlier than planned. Then we configured our Linux application server to authenticate against Exchange Online using SMTP AUTH and SMTP submission (587/tcp). For incoming mail, messages are routed to the application server using an Exchange Online connector and a transport rule that matches specific mail addresses.
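Assuming Postfix as the MTA on that Linux server (the actual MTA is not mentioned here), the relay part of the configuration would look something like this; the account used for SMTP AUTH needs an Exchange Online mailbox:

# /etc/postfix/main.cf -- relay outbound mail via Exchange Online SMTP submission
relayhost = [smtp.office365.com]:587
smtp_tls_security_level = encrypt
smtp_sasl_auth_enable = yes
smtp_sasl_password_maps = hash:/etc/postfix/sasl_passwd
smtp_sasl_security_options = noanonymous

# /etc/postfix/sasl_passwd -- afterwards run: postmap /etc/postfix/sasl_passwd
[smtp.office365.com]:587 serviceaccount@customdomain.tld:password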
The Azure-based Exchange VM is only needed because we still have Azure AD Connect running. Microsoft plans to replace this requirement with a new solution, and until then, we will run this Exchange 2016 server in Azure. But it is not part of our mail flow.
Moving Azure AD Connect & decommissioning ADFS
Because we had to get rid of the ADFS server and ADFS Proxy, we deployed Pass-Through Authentication and Seamless SSO. Then we decommissioned the ADFS setup.
Moving Azure AD Connect was a bit quirky. We already had conditional access in place, and the Azure AD Connect setup was unable to handle this: the synchronization account was unable to sync because it ran into an MFA request. We optimized our policies and got this sorted out.
Decommissioning old stuff
Whenever we successfully moved a service to Azure, we switched off the on-prem server and updated our documentation to reflect the changes. In the end, we were able to switch off three of our four ESXi hosts. The last ESXi host is still running for our Horizon View deployment and our firewall.
Next steps
The next post will cover how we automated this, how we do backups and whatever you’re interested in. Leave a comment! :)
This posting is ~3 years old. Keep this in mind: IT is a fast-moving business, and this information might be outdated.
It has been a bit quiet here due to the current COVID-19 pandemic. But now I'm back with a pretty interesting story about how my colleagues and I moved most of our on-prem server stuff to Microsoft Azure and Office 365.
It all started with the COVID-19 lockdown in Germany in March 2020. We moved into our home offices after setting up a small VMware Horizon View deployment to access our PCs using physical View Agents and manual desktop pools. Most projects were stopped, and we did most of our work remotely. No lay-offs or short-time work.
We had been running a small VMware vSphere cluster for a couple of years. Nothing fancy: two HPE ProLiants, vCenter, two DCs, file/print server, WSUS, Exchange, Linux machines for web services, Sophos UTM, a pfSense, View Connection Server, UAG, ADFS/ ADFS Proxy, PKI etc. In sum 18 VMs on two hosts, some VLANs with firewalls in between etc. We were running Exchange 2016, AzureAD Sync and Exchange Hybrid, but we only used Microsoft Teams from our Office 365 deployment. Veeam Backup & Replication was used for backups, with a backup copy to a NAS and some Robocopy jobs that moved Veeam backups to USB drives for DR. Everything was pretty simple and designed to work without much operational effort. Our focus is on our customers, not on our internal IT. It was stable, secure and pretty slick.
In March 2020 we asked ourselves: "What if?" What if we lose our offices due to a fire (we are located in a bigger office building and had a couple of fire alarms this year due to remodeling work)? How can we work if our DSL line is cut? How can we get our backups offsite? How can we modernize our IT without big investments? Money that we don't spend on our internal IT can be given to our employees. ;) (By the way, that's the same reason why we try to drive smaller and more efficient cars…)
We developed a couple of ideas, including buying new servers, storage etc. and putting that stuff into a datacenter. But in the end, we decided to move most of our stuff to Microsoft Azure and Office 365.
I want to share some of the things we have learned on the road to Azure.
Initial assessment
We used the Azure Migrate Server Assessment tool to assess our vSphere environment. We wanted to get a ballpark figure on how we had to size the VMs. We knew that we wouldn't need to migrate all VMs. For example, our virtual pfSense firewall, the vCenter, the Sophos UTM and our ADFS setup were not planned for migration.
After the first assessment, we started to play around with the Azure pricing calculator, just to get an idea of how different VM sizes affect the costs.
Subscription
As a Microsoft partner, we were able to use our internal use rights (IUR) for Microsoft Office 365 and Azure. Microsoft offers us 25 Office 365 E3 plans and a 6,000 US-$ budget for Azure (Azure Sponsorship subscription). Our plan was to stretch the Azure budget over 12 months, so that we don't have additional costs until we re-apply for our Microsoft partnership. Starting with a 6,000 US-$ Azure budget, that makes ~16 US-$ per day for our complete Azure deployment.
Sizing
Now that we knew we had 16 US-$ per day, we planned our Azure deployment. First of all, we planned the number of VMs. We had 18 VMs on-prem, and we managed to get down to 9 VMs:
two Domain Controllers
PKI
Fileserver
Management Server
Remote Desktop Host (all-in-one Deployment)
Ticket System
SQL/ App server
Exchange 2016 for Hybrid Deployment
The View Connection Servers and UAG are still running on-prem. Our virtual pfSense will be moved to a WatchGuard Firebox soon. Sophos UTM and ADFS are gone. A dedicated WSUS server is no longer necessary; we moved back to plain Windows Update and Delivery Optimization.
Instead of D-series VMs, we decided to go for B-series VMs. The main reason for this was cost, but today I can say: the performance is quite good. I can't see any reason for us to move to the D-series.
To connect to our Azure deployment, we had to set up a Site-2-Site VPN. We deployed a simple Gen1 Basic SKU VPN gateway. We had no need for more than 100 Mbit (we're using a 50/10 VDSL line at our office location), BGP or zone redundancy.
Backups are kept in a Recovery Services Vault with pretty simple policies. Either a VM only needs to be current, in which case we keep 7 restore points, or we need to keep more restore points: in that case we keep 7 daily, 5 weekly, 12 monthly and 3 yearly restore points. The latter is only the case for our file server.
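Such a policy can be sketched with the Az.RecoveryServices module roughly like this (vault and policy names are made up):

# Hypothetical sketch: extended retention policy, e.g. for the file server
$vault = Get-AzRecoveryServicesVault -Name "rsv-backup"
Set-AzRecoveryServicesVaultContext -Vault $vault

$schedule  = Get-AzRecoveryServicesBackupSchedulePolicyObject -WorkloadType AzureVM
$retention = Get-AzRecoveryServicesBackupRetentionPolicyObject -WorkloadType AzureVM

# 7 daily, 5 weekly, 12 monthly and 3 yearly restore points
$retention.DailySchedule.DurationCountInDays     = 7
$retention.WeeklySchedule.DurationCountInWeeks   = 5
$retention.MonthlySchedule.DurationCountInMonths = 12
$retention.YearlySchedule.DurationCountInYears   = 3

New-AzRecoveryServicesBackupProtectionPolicy -Name "Fileserver" -WorkloadType AzureVM `
    -SchedulePolicy $schedule -RetentionPolicy $retention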
Additional cost savings
But with this setup we would not get under 16 US-$ a day. :( So we took another approach to get under that mark: we shut down VMs at night and on weekends! It took a bit until my colleagues and I got used to this. Nobody wants to shut down servers without a good reason.
But: we are currently at 18 US-$ per workday, and 10 US-$ for Saturday and Sunday. Everything, except the domain controllers and the ticket system, is shut down at night and on weekends.
We are using an Automation Account with some simple scripts and schedules to shut down VMs and start them again.
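A minimal version of such a shutdown runbook could look like this (it assumes the Automation Account can authenticate with a managed identity and has permission to stop the VMs; the start runbook is analogous):

# Hypothetical runbook sketch, not the production script
param (
    [string]$ResourceGroupName = "rg-core",
    [string[]]$ExcludedVms = @("vm-dc01", "vm-dc02", "vm-ticket")
)

# Authenticate as the Automation Account (managed identity)
Connect-AzAccount -Identity | Out-Null

Get-AzVM -ResourceGroupName $ResourceGroupName |
    Where-Object { $ExcludedVms -notcontains $_.Name } |
    ForEach-Object {
        Write-Output "Stopping $($_.Name)..."
        Stop-AzVM -ResourceGroupName $ResourceGroupName -Name $_.Name -Force -NoWait
    }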
What’s next?
The next blog post will be around how we planned the usage of Office 365, and how we started with Azure.
This posting is ~3 years old. Keep this in mind: IT is a fast-moving business, and this information might be outdated.
A customer of mine asked for help analysing a weird OAuth error. They are using a Microsoft Dynamics 365 Outlook plugin, which came up with an error:
“Can’t connect to Exchange”
In addition to this, they also faced an issue accessing shared calendars of Exchange Online mailboxes.
Clearly an OAuth error. So we ran the Hybrid Configuration Wizard again, which finished without any errors. But the errors persisted. Next stop: the OAuth configuration.
We logged into one of the Exchange servers, started an Exchange Management Shell and checked the current OAuth configuration:
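Typical starting points for this check in the Exchange Management Shell are, for example (not necessarily the exact commands used in this case):

# Review the on-prem side of the OAuth configuration
Get-IntraOrganizationConnector | Format-List Name, TargetAddressDomains, DiscoveryEndpoint, Enabled
Get-IntraOrganizationConfiguration
Get-PartnerApplication | Format-List Name, ApplicationIdentifier, Enabled
Get-AuthServer | Format-List Name, IssuerIdentifier, TokenIssuingEndpoint, Enabled

# Verify OAuth connectivity to Exchange Online for a test mailbox
Test-OAuthConnectivity -Service EWS -TargetUri https://outlook.office365.com/ews/exchange.asmx `
    -Mailbox user@customdomain.tld | Format-List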
This posting is ~4 years old. Keep this in mind: IT is a fast-moving business, and this information might be outdated.
The task was simple: change the alias and the primary SMTP address of a Microsoft Teams team. This can be done by changing the alias and the SMTP address of the underlying Office 365 group. But how? All you need is a PowerShell connection to Exchange Online.
All you need is PowerShell on your local computer and Office 365 credentials with the necessary privileges.
First we need to provide the necessary credentials.
$cred = Get-Credential
A window will come up and you must enter your Office 365 credentials.
The next step is to create a PowerShell remote session with Exchange Online.
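A sketch of the session setup and the actual change could look like this (group name and addresses are examples; newer tenants should use the ExchangeOnlineManagement module and Connect-ExchangeOnline instead of basic authentication):

# Classic remote PowerShell session to Exchange Online
$session = New-PSSession -ConfigurationName Microsoft.Exchange `
    -ConnectionUri "https://outlook.office365.com/powershell-liveid/" `
    -Credential $cred -Authentication Basic -AllowRedirection
Import-PSSession $session

# Change alias and primary SMTP address of the underlying Office 365 group;
# the old primary address is kept as a secondary proxy address
Set-UnifiedGroup -Identity "Old Team Name" -Alias "newteamalias" -PrimarySmtpAddress "newteamalias@customdomain.tld"

Remove-PSSession $session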
This posting is ~4 years old. Keep this in mind: IT is a fast-moving business, and this information might be outdated.
When it comes to disaster recovery (DR), dedicated offsite infrastructure is a must. If you follow the 3-2-1 backup rule, then you should have at least three copies of your data, on two different media, and one copy should be offsite.
But an offsite copy of your data can be expensive… You have to set up storage and networking in a suitable colocation. And even if you have an offsite copy of your data, you must be able to recover it. This can be quite "fun" with terabytes of data and an offsite copy on tape.
An offsite copy in the cloud is much more interesting: no need to provide hardware, software or licenses. Just provide internet connectivity, book a suitable plan, and you are ready to go.
Replication to Cloud using Vembu CloudDR
Vembu offers a cloud-based disaster recovery plan through its own cloud services, which are hosted in Amazon Web Services (AWS). This product is designed for businesses that can't afford, or are not willing, to set up a dedicated offsite infrastructure for disaster recovery.
The data backed up by the Vembu BDR server is replicated to the Vembu Cloud. In case of a disaster, the backup data can be restored directly from the cloud at any time and from anywhere. The replication is managed and monitored using the CloudDR portal.
Before you can enable the offsite replication, you have to register your Vembu BDR server with your Vembu Portal account. You can either go to onlinebackup.vembu.com, or you can go to portal.vembu.com and sign up.
After configuring schedule, retention and bandwidth usage, Vembu CloudDR is ready to go.
The end is near – time for recovery
CloudDR offers two types of recovery:
Image Based Recovery
Application Based Recovery
In case of an image-based recovery, you can either download a VMDK or VHD(X) image, or you can do a file-level recovery. In that case you can restore single files from inside a chosen image.
You can even download a VHD(X) image of a VMware backup, which allows for some kind of V2V or P2V restore.
In case of an application-based recovery, you can recover single application items from:
Microsoft Exchange
Microsoft SharePoint
Microsoft SQL Server, or
MySQL
Depending on the type of restore, you will get an encrypted and password-protected ZIP file with documents, or even MDF/ LDF files. These files can then be used to restore the lost data.
Summary
Vembu CloudDR is a pretty interesting add-on for Vembu customers. It's easy to set up, has an attractive price tag and consequently addresses SMB customers.
This posting is ~6 years old. Keep this in mind: IT is a fast-moving business, and this information might be outdated.
In 2014, Microsoft announced the Azure Preview Portal, which went GA in December 2015. Since January 8, 2018, the classic Azure Portal has been turned off. The "Preview Portal" was more than a facelift: the classic Azure Portal was based on the Service Management model, often called the "classic deployment model", whereas the new Azure Portal uses the Resource Manager model. Azure Service Management (ASM) and Azure Resource Manager (ARM) are both deployment models. The Resource Manager model eases the deployment of complex setups by using templates to deploy, update and manage the resources within a resource group as a single operation.
Azure PowerShell vs. Azure RM PowerShell
Different deployment models require different tools. Because of this, Microsoft offers two PowerShell modules for Azure. Depending on your deployment type, you have to use either the Azure or the AzureRM module. Both can be installed directly from the PowerShell Gallery using Install-Module -Name Azure or Install-Module -Name AzureRM.
Connect to Azure
Depending on the used module, the ways to connect to Azure differ.
You will notice that AzureRM sessions do not persist between PowerShell sessions. This behaviour differs from Add-AzureAccount. But you can save and load your AzureRM context once you are connected.
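A short example of both logins and of persisting the AzureRM context (the file path is arbitrary):

# Service Management (classic) module
Add-AzureAccount

# Resource Manager module
Login-AzureRmAccount

# Persist the AzureRM context and re-use it in a new PowerShell session
Save-AzureRmContext -Path "$HOME\azurerm-context.json"
Import-AzureRmContext -Path "$HOME\azurerm-context.json"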
This posting is ~6 years old. Keep this in mind: IT is a fast-moving business, and this information might be outdated.
Microsoft uses two different logins for their services:
Microsoft Account (former Live ID)
work or school account (Azure AD)
Both are located in different directories: a Microsoft account is located in a different user database at Microsoft than a work or school account. The latter is located in an Azure AD, which is associated with a customer. Both account types are identified by an email address. Microsoft accounts are used for services like Skype and OneDrive, but also for the Microsoft Certified Professional portal. Work or school accounts are mainly used for Office 365 and Azure.
You can use your work email address for your Microsoft account, until someone creates an Azure AD and a work or school account with the same email address. From this point on, your login experience with the different Microsoft services will get worse. The main problem is that Microsoft tries to find the given account in one of their directories, and it seems that they prefer Azure AD. So the login will work, but the content of the user profile will be different, because it's a different account.
This is a screenshot from the new login screen. As you can see, there is no way to choose if you wish to login with a work/ school or Microsoft account.
This is the old login experience, where you can choose between work/ school and Microsoft account.
Microsoft calls this an AzureAD and Microsoft account overlap, and they have identified it as an issue. As a result, Microsoft denies the creation of Microsoft accounts using a work or school email address when the email domain is configured in Azure AD. Microsoft has published a blog post to address this issue: Cleaning up the #AzureAD and Microsoft account overlap.
Because of this, you should avoid using your work email address for your personal Microsoft account, even if this Microsoft account is linked to your MCP/ MCSE certifications. It is not a problem to use your personal Microsoft account, with your personal email address, for your certifications. And it is not a problem to link this account to a Microsoft partner account (as in my case).
To remove your work or school email address from your personal Microsoft account, follow the instructions in this support article (Rename your personal account).
This posting is ~6 years old. Keep this in mind: IT is a fast-moving business, and this information might be outdated.
In the last months I have come across several customers that were in the process of evaluating or deploying Office 365. It usually started with an Office 365 trial that some of the IT guys started to play around with. Weeks or months later, during the proof-of-concept or the final deployment, the customer had to choose an Office 365 tenant name. That is the part before .onmicrosoft.com.
I have seen it multiple times that the desired tenant name was already taken. Bummer. But the customer wanted to move on, so they decided to take another name. For example, they added the postal code to the name, or a random string. To their surprise, I put my veto on it. They immediately understood why after I explained the importance of the tenant name.
The tenant name is visible for everyone
When using SharePoint or OneDrive for Business, the Office 365 tenant name is part of the URL used to access the service. Due to this, the tenant name is visible to everyone, including your customers. And no one wants to click on a link that points to noobslayer4711.onmicrosoft.com.
How to build a good tenant name
When thinking about the tenant name, make sure that you involve all necessary people in your company. Make sure that management and marketing have agreed before you recommend a specific tenant name.
Don't use long names, or tenant names with numbers at the end; they might look suspicious and randomly generated. Make sure that the tenant name does not include parts that might change in the near future, for example the legal form of your company.
Don't add the current year, month or a date to it. Don't add things like "online" or "24" to it, unless it's part of the company's name.
If you have created a tenant during a trial or a proof-of-concept, try to reactivate it, especially if that tenant uses the desired name.
Currently, you can’t change the Office 365 tenant name. I don’t know if Microsoft plans to make this possible.
How to reclaim a tenant name
As far as I know, there is no process for reclaiming a tenant name instantly. When the last subscription of a tenant expires, the tenant becomes inactive. After 30 days, the tenant will be decommissioned. But it takes several months until a tenant name can be used again.
As I said: Choose one, choose wisely…