vcloudnine.de is the personal blog of Patrick Terlisten. Patrick has a strong focus on virtualization & cloud solutions, but also on storage, networking, and IT infrastructure in general. He is a fan of Lean Management and agile methods, and practices continuous improvement wherever possible. Feel free to follow him on Twitter and/or leave a comment.
This posting is ~2 years old. You should keep this in mind. IT is a fast-moving business, and this information might be outdated.
During the deployment of a vSAN cluster consisting of multiple HPE ProLiant DL380 Gen10 hosts, I noticed a memory health warning after updating the firmware using the Service Pack for ProLiant (SPP). The error was definitely not shown before the update, so it was clear that this was not a real issue with the hardware. Furthermore, all hosts showed this error.
Memory health status after SPP
The same day, a customer called me and asked about a strange memory health error after he had updated all of his hosts with the latest SPP…
My first guess, that this was not caused by a hardware malfunction, was correct. HPE published an advisory about this issue:
To fix this issue, you have to update the iLO 5 firmware to version 2.31. You can do this manually using the iLO 5 web interface, or you can add the file to the SPP. I’ve added the BIN file to the USB stick with the latest SPP.
If you want to update the firmware manually, simply upload the BIN file using the built-in firmware update function.
Navigate to Firmware & OS Software in the navigation tree, and then click Update Firmware
Select the Local file option and browse to the BIN file
To save a copy of the component to the iLO Repository, select the Also store in iLO Repository check box
To start the update process, click Flash
You can download the latest iLO 5 firmware 2.31 from HPE using this link. After the firmware update, the error will resolve itself.
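If you have to flash a larger number of hosts, the update can also be scripted with the HPE iLO cmdlets for PowerShell. This is only a rough sketch, assuming the HPEiLOCmdlets module is installed and the BIN file is reachable via a web server; the IP address, credentials and URL are placeholders:

# Connect to the iLO and flash the firmware from a web server (placeholder values)
Import-Module HPEiLOCmdlets
$conn = Connect-HPEiLO -IP 192.168.1.100 -Username Administrator -Password 'password' -DisableCertificateAuthentication
Update-HPEiLOFirmware -Connection $conn -Location 'http://webserver/ilo5_231.bin'
Disconnect-HPEiLO -Connection $conn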
Only ESXi 6.7 is affected, and only ESXi 6.7 running on HPE ProLiant hosts, regardless of whether it is an ML, DL or BL series.
This posting is ~2 years old. You should keep this in mind. IT is a fast-moving business, and this information might be outdated.
A couple of days ago, I wrote about our first steps to move our on-prem stuff to Azure. This post will cover how we adopted Office 365 and how we have started with our Azure deployment.
Our first step into Office 365 was Microsoft Teams. We needed a solution for calls (audio/video) and chat. We skipped Skype for Business and started directly with Microsoft Teams.
Our Microsoft Teams deployment was pretty simple: we used our Microsoft IUR Office 365 E3 plans. Microsoft Azure AD Connect was quickly deployed, and the Microsoft Exchange Hybrid Configuration Wizard did the rest. Some weeks later we deployed ADFS/ADFS Proxy. We used this setup for several months, and it was pretty slick and worked flawlessly. At this point, we only used Teams, Planner and OneDrive for Business (SharePoint).
Some months went by until we decided to move to Azure.
Resource groups in Azure
You can imagine a resource group (RG) as a container that contains one or more resources, like VMs, NICs, SQL instances etc. The resource group can contain all the resources for the solution, or only those resources that you want to manage as a group.
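Creating a resource group is a one-liner with the Az PowerShell module. Just as an illustration, with placeholder names:

# Sign in and create a resource group in West Europe (name and location are placeholders)
Connect-AzAccount
New-AzResourceGroup -Name 'rg-infrastructure' -Location 'westeurope'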
First question: What do we need to deploy?
The answer was easy:
9 VMs in total
VPN gateway
Recovery Services Vault
Automation Account
Log Analytics Workspace
Second question: One or multiple resource groups?
An easy rule of thumb is that a resource group should contain only resources that share the same life cycle and sponsor.
Third question: Who needs delegated privileges to manage this stuff?
In our case there was no need for fine-grained RBAC. All of our technical staff have a personalized admin account and should be able to do whatever is necessary.
To connect our on-prem network to Azure, we had to set up a Site-to-Site VPN. This was the first thing we did after creating our first resource group. We used a Gen1 Basic VPN Gateway, which was sufficient for our needs (max. 100 Mbit/s, no OpenVPN, no BGP).
Keep in mind to choose your networks and subnets wisely. If you need to deploy 9 VMs, don’t use 10.0.0.0/8. ;) In our case we added two address ranges with a single subnet in each range: one for our server VMs, and a second one as the gateway subnet.
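To illustrate what that looks like with the Az module, here is a sketch with made-up address ranges; note that the gateway subnet must literally be named GatewaySubnet, and all other names are placeholders:

# Server subnet and gateway subnet, each in its own address range (example prefixes)
$serverSubnet  = New-AzVirtualNetworkSubnetConfig -Name 'snet-servers' -AddressPrefix '10.10.1.0/24'
$gatewaySubnet = New-AzVirtualNetworkSubnetConfig -Name 'GatewaySubnet' -AddressPrefix '10.10.2.0/27'

New-AzVirtualNetwork -Name 'vnet-azure' -ResourceGroupName 'rg-infrastructure' -Location 'westeurope' `
  -AddressPrefix '10.10.1.0/24','10.10.2.0/27' -Subnet $serverSubnet,$gatewaySubnet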
VM Deployment
We deployed our VMs as B-series VMs. A common mistake is to use the wrong VM size: start small and right-size a VM if necessary. Most of our VMs are B2s (2 vCPUs, 4 GB RAM). Only the Exchange server (B4ms), the management server (B2ms) and the RDS host (B2ms) differ from this. This looks pretty small for Server 2019, but it works pretty well.
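A minimal sketch of how such a VM could be deployed with the Az module; all names, the image alias and the credential are placeholders:

# Deploy a small B2s VM into the existing VNet (placeholder values)
$cred = Get-Credential
New-AzVM -ResourceGroupName 'rg-infrastructure' -Name 'srv-file01' -Location 'westeurope' `
  -VirtualNetworkName 'vnet-azure' -SubnetName 'snet-servers' `
  -Image 'Win2019Datacenter' -Size 'Standard_B2s' -Credential $cred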
After deploying the VMs, we assigned static IP addresses to them. To our surprise, most things in Azure are lacking proper IPv6 support. :( That hurt a lot.
For most VMs we used Standard HDDs instead of SSDs, even for the file server, because the bottleneck is not the disk, it is the connection between clients and server. Besides this, we used managed disks for all VMs, and we deployed a second disk for data where necessary (Exchange, domain controller, file server etc.).
If a server had a DNAT in our on-prem network, we deployed a public IP, and secured the access to it.
All VMs are connected to the same Network Security Group (NSG), which we use to get control over what a VM can reach, and who can access a VM.
Server Migration
Over a couple of days we moved more and more services to Azure, starting with our domain controllers, PKI and file services. These were low-hanging fruits. The file server was easy because we already had a DFS namespace in place, so all we had to do was change the DFS links and point them to the new file server. The data was copied using DFS replication.
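Redirecting a DFS link to the new file server boils down to adding the new folder target and removing the old one. A hedged sketch with the DFSN PowerShell module, assuming replication has already finished; the namespace and server paths are placeholders:

# Add the Azure-based file server as a new folder target, then remove the old one (placeholder paths)
New-DfsnFolderTarget -Path '\\corp.example.com\data\projects' -TargetPath '\\srv-file01\projects'
Remove-DfsnFolderTarget -Path '\\corp.example.com\data\projects' -TargetPath '\\oldfileserver\projects'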
DHCP was moved to our on-prem firewall. A print server was not necessary anymore. Windows Update was switched back to downloading directly from Microsoft, combined with Delivery Optimization.
The applications that were running on our Linux and Windows application servers were also easy to migrate. After a couple of days we had our server workload running in Azure.
To get our ERP running, we deployed a single RDS host (quick deployment) and published our ERP as a RemoteApp. It was too slow to use over the VPN. Unfortunately, the application lacks a proper database backend. :/ But as a RemoteApp, it works pretty well.
A bigger challenge was Exchange, but not because of the mailbox migrations.
Exchange Online
The migration to Exchange Online was pretty simple. Since our first HCW run, we had used centralized mail transport, so that all mails were received and sent by our on-prem mail gateway.
The mailbox migration was pretty easy and we had zero issues. Then we tried to switch the mail transport from centralized transport to Exchange Online. This was flawless too… except for the fact that our ticket system was unable to send e-mails.
Our ticket system relays its mail over our Exchange server. After switching the mail server in our ticket system to the new Azure-based VM, the mails got stuck in the outbound queue, even when the server tried to send the mail to our on-prem mail gateway. This quote from Microsoft explains the whole problem:
Starting on November 15, 2017, outbound email messages that are sent directly to external domains (such as outlook.com and gmail.com) from a virtual machine (VM) are made available only to certain subscription types in Microsoft Azure. Outbound SMTP connections that use TCP port 25 were blocked. (Port 25 is primarily used for unauthenticated email delivery.)
This change in behavior applies only to new subscriptions and new deployments since November 15, 2017.
This is the case for MSDN, Azure Pass, Azure in Open, Education, BizSpark, and Free Trial subscriptions!
If you created an MSDN, Azure Pass, Azure in Open, Education, BizSpark, Azure Sponsorship, Azure Student, Free Trial, or any Visual Studio subscription after November 15, 2017, you’ll have technical restrictions that block email that’s sent from VMs within these subscriptions directly to email providers. The restrictions are done to prevent abuse. No requests to remove this restriction will be granted.
If you’re using these subscription types, you’re encouraged to use SMTP relay services, as outlined earlier in this article or change your subscription type.
We accelerated our migration and disabled the centralized mail transport earlier than planned. Then we configured our Linux application server to authenticate against Exchange Online using SMTP AUTH and SMTP submission (587/tcp). For incoming mails, the mails are routed to the application server using an Exchange Online connector and a transport rule that matches specific mail addresses.
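To verify that a server can relay via SMTP submission against Exchange Online, a quick test from PowerShell helps; the addresses are placeholders and the credential must belong to the licensed mailbox used for SMTP AUTH:

# Test SMTP AUTH submission against Exchange Online (placeholder addresses)
$cred = Get-Credential   # mailbox used for SMTP AUTH
Send-MailMessage -SmtpServer 'smtp.office365.com' -Port 587 -UseSsl -Credential $cred `
  -From 'ticketsystem@example.com' -To 'someone@example.com' `
  -Subject 'SMTP submission test' -Body 'Relay via Exchange Online works.'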
The Azure-based Exchange VM is only needed because we still have Azure AD Connect running. Microsoft has planned to replace this requirement with a new solution, and until then we will run this Exchange 2016 in Azure. But it is not part of our mail flow.
Moving Azure AD Connect & decommissioning ADFS
Because we had to get rid of the ADFS server and ADFS Proxy, we deployed Pass-Through Authentication and Seamless SSO. Then we decommissioned the ADFS setup.
Moving Azure AD Connect was a bit quirky. We already had conditional access in place, and the Azure AD Connect setup was unable to handle this: the synchronization account was unable to sync because it ran into an MFA request. We optimized our policies and got this sorted out.
Decommissioning old stuff
Whenever we successfully moved a service to Azure, we switched off the on-prem server and modified our documentation to reflect the changes. In the end, we were able to switch off three of our four ESXi hosts. The last ESXi host is still running for our Horizon View deployment and our firewall.
Next steps
The next post will cover how we automated this, how we do backups and whatever you’re interested in. Leave a comment! :)
This posting is ~2 years old. You should keep this in mind. IT is a fast-moving business, and this information might be outdated.
As part of an ongoing Exchange 2010 to 2016 migration, I had to replace the self-signed certificate with a certificate from the customer’s PKI. Everything went fine: the customer had a suitable template, we added the necessary hostnames and bound IIS and SMTP to the certificate. The mess started with an iisreset /noforce…
The iisreset took longer than expected. After that, I tried to log in to the ECP, entered username and password and got an error.
<Event xmlns="http://schemas.microsoft.com/win/2004/08/events/event">
<System>
<Provider Name="MSExchange Front End HTTP Proxy" />
<EventID Qualifiers="49152">1003</EventID>
<Level>2</Level>
<Task>1</Task>
<Keywords>0x80000000000000</Keywords>
<TimeCreated SystemTime="2020-10-22T12:16:38.934123400Z" />
<EventRecordID>368718</EventRecordID>
<Channel>Application</Channel>
<Computer>server.domain.tld</Computer>
<Security />
</System>
<EventData>
<Data>Owa</Data>
<Data>System.NullReferenceException: Object reference not set to an instance of an object. at
Microsoft.Exchange.HttpProxy.FbaModule.ParseCadataCookies(HttpApplication httpApplication) at
Microsoft.Exchange.HttpProxy.FbaModule.OnBeginRequestInternal(HttpApplication httpApplication) at
Microsoft.Exchange.HttpProxy.ProxyModule.<>c__DisplayClass16_0.<OnBeginRequest>b__0() at
Microsoft.Exchange.Common.IL.ILUtil.DoTryFilterCatch(Action tryDelegate, Func`2 filterDelegate, Action`1 catchDelegate)
</Data>
</EventData>
</Event>
Pretty strange. We switched back to the self-signed certificate, did an iisreset and everything was fine again. So it was pretty obvious that the error was related to the certificate or, to be more precise, to the certificate template.
A bit of research confirmed this. The template was a modified v3 web server template from an Enterprise CA running Windows Server 2008 R2.
With Windows Server 2008, Microsoft introduced a new cryptographic API called Cryptography Next Generation (CNG), which separates cryptographic providers (the algorithm implementations) from key storage providers (which create, delete, export, import, open and store keys). The older CryptoAPI does not make this distinction and implements cryptographic algorithms and key storage in one.
The modified template used CNG instead of CryptoAPI. We noticed this when we checked the certificate with certutil -store my <thumbprint>.
If the listed provider for the certificate is Microsoft Software Key Storage Provider, then you will have to re-import the certificate. If Microsoft RSA SChannel Cryptographic Provider is used, everything is fine.
You have to remove the certificate and then re-import it using certutil, forcing the legacy CSP.
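A hedged example of the re-import from an elevated prompt; the PFX path is a placeholder, the provider name is the legacy RSA SChannel CSP mentioned above:

certutil -csp "Microsoft RSA SChannel Cryptographic Provider" -importpfx C:\temp\exchange.pfx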
You need a PKCS#12 file (PFX) and the password. Re-import it and then you can use the certificate for Exchange. Bind services to it and restart the IIS.
This posting is ~2 years old. You should keep this in mind. IT is a fast-moving business, and this information might be outdated.
It was a bit quiet here due to the current COVID-19 pandemic. But now I’m back with a pretty interesting story about how my colleagues and I moved most of our on-prem server stuff to Microsoft Azure and Office 365.
It all started with the COVID-19 lockdown in Germany in March 2020. We moved into our home offices after setting up a small VMware Horizon View deployment to access our PCs using physical View Agents and manual desktop pools. Most projects were stopped, and we did most of our work remotely. No lay-offs or short-time work.
We had been running a small VMware vSphere cluster for a couple of years. Nothing fancy: two HPE ProLiants, vCenter, two DCs, a file/print server, WSUS, Exchange, Linux machines for web services, Sophos UTM, a pfSense, View Connection Server, UAG, ADFS/ADFS Proxy, PKI etc. In sum 18 VMs on two hosts, some VLANs with firewalls in between etc. We were running Exchange 2016, Azure AD Sync and Exchange Hybrid, but we only used Microsoft Teams from our Office 365 deployment. Veeam Backup & Replication was used for backups, with a backup copy to a NAS and some Robocopy jobs that moved Veeam backups to USB drives for DR. Everything was pretty simple and designed to work without much operational effort. Our focus is on our customers, not on our internal IT. It was stable, secure and pretty slick.
In March 2020 we asked: “What if?” What if we lose our offices due to a fire (we are located in a bigger office building and we had a couple of fire alarms this year due to remodeling work)? How can we work if our DSL line is cut? How can we get our backups offsite? How can we modernize our IT without big investments? Money that we don’t spend on our internal IT can be given to our employees. ;) (By the way, that’s the same reason why we try to drive smaller and more efficient cars…)
We developed a couple of ideas, including new servers, storage etc. and put that stuff into a datacenter. But in the end, we decided to move most of our stuff to Microsoft Azure and Office 365.
I want to share some of the things we have learned on the road to Azure.
Initial assessment
We used the Azure Migrate Server Assessment tool to assess our vSphere environment. We wanted to get a ballpark figure for how we had to size the VMs. We knew that we would not need to migrate all VMs. For example, our virtual pfSense firewall, the vCenter, the Sophos UTM and our ADFS setup were not planned to be migrated.
After the first assessment, we started to play around with the Azure pricing calculator, just to get an idea of how different VM sizes affect the costs.
Subscription
As a Microsoft partner, we were able to use our internal use rights (IUR) for Microsoft Office 365 and Azure. Microsoft offers us 25 Office 365 E3 plans and a 6000 US-$ budget for Azure (Azure Sponsorship Subscription). Our plan was to stretch the Azure budget over 12 months, so that we don’t have additional costs until we re-apply for our Microsoft partnership. Starting with a 6000 US-$ Azure budget, that makes ~16 US-$ per day for our complete Azure deployment.
Sizing
Now that we knew we had 16 US-$ per day, we planned our Azure deployment. First of all, we planned the number of VMs. We had 18 VMs on-prem, and we managed to get down to 9 VMs:
two Domain Controllers
PKI
Fileserver
Management Server
Remote Desktop Host (all-in-one Deployment)
Ticket System
SQL/ App server
Exchange 2016 for Hybrid Deployment
The View Connection Servers and UAG are still running on-prem. Our virtual pfSense will be moved to a WatchGuard Firebox soon. Sophos UTM and ADFS are gone. A dedicated WSUS server is not necessary anymore; we moved back to plain Windows Update and Delivery Optimization.
Instead of D-series VMs, we decided to go for B-series VMs. The main reason for this was cost, but today I can say: the performance is quite good. I can’t see any reason for us to move to the D-series.
To connect to our Azure deployment, we had to set up a Site-to-Site VPN. We deployed a simple Gen1 Basic SKU VPN Gateway. We had no need for more than 100 Mbit/s (we’re using a 50/10 VDSL line at our office location), BGP, or zone redundancy.
Backups are kept in a Recovery Services Vault with pretty simple policies. Either a VM just needs to be current, in which case we keep 7 restore points, or we need to keep more restore points; in that case we keep 7 daily, 5 weekly, 12 monthly and 3 yearly restore points. The latter is only the case for our file server.
Additional cost savings
But with this setup we would not get under 16 US-$ a day. :( So we took another approach to break that mark: we shut down VMs at night and on the weekends! It took a while until my colleagues and I got used to this. Nobody wants to shut down servers without a good reason.
But: we are currently at 18 US-$ per workday, and 10 US-$ for Saturday and Sunday. Everything except the domain controllers and the ticket system is shut down at night and on the weekends.
We are using an Automation Account with some simple scripts and schedules to shut down VMs and start them again.
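Our runbooks are nothing fancy. A minimal sketch of what such a shutdown runbook could look like; it assumes a managed identity on the Automation Account, a placeholder resource group, and a made-up AlwaysOn tag, so treat it as an illustration rather than our exact script:

# Shut down all VMs in a resource group, except the ones tagged to stay online (placeholder names)
Connect-AzAccount -Identity
$vms = Get-AzVM -ResourceGroupName 'rg-infrastructure'
foreach ($vm in $vms) {
    if ($vm.Tags['AlwaysOn'] -ne 'true') {
        Stop-AzVM -ResourceGroupName $vm.ResourceGroupName -Name $vm.Name -Force -NoWait
    }
}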
What’s next?
The next blog post will be around how we planned the usage of Office 365, and how we started with Azure.
This posting is ~2 years old. You should keep this in mind. IT is a fast-moving business, and this information might be outdated.
The View Agent Direct-Connection (VADC) Plug-In was designed as an extension to the Horizon Agent that allows a Horizon Client to connect directly to a VM or physical machine without using a Horizon Connection Server.
The VADC is nothing new; it has been part of the Horizon View ecosystem for a couple of years now. Meanwhile, the VADC supports the Blast Extreme protocol, which makes it pretty interesting for remote access to lab environments or home office equipment.
There are a couple of requirements which I want to highlight:
VADC Plug-In has the following additional requirements:
The VM or physical machine must have a minimum of 128 MB of video RAM
For a virtual machine, you must install VMware Tools before you install Horizon Agent
A physical machine is supported with Windows 10 Enterprise version 1803 or 1809; newer releases tend to work flawlessly
A VM supports Blast and PCoIP protocols
A physical machine supports Blast only
The installation of the VADC is divided into two steps:
Installation of the View Agent
Installation of the VADC
The View Agent has to be installed silently, because there is no Connection Server you could register it with. The silent installation allows you to skip this step.
I used this command line to install the View Agent:
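The command line follows the pattern from VMware’s silent-install documentation; something along these lines, where VDM_VC_MANAGED_AGENT=0 marks the machine as not managed by vCenter (check the documentation of your agent version for the exact properties):

VMware-Horizon-Agent-x86_64-8.0.0-16530789.exe /s /v"/qn VDM_VC_MANAGED_AGENT=0"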
The second step is to install the VADC. This is pretty easy: Setup > Next, next, next. :)
Finally, you can start the View Client on another machine and add a Connection Server with the IP or FQDN of your newly installed VADC machine.
This is the output of netstat on my X250 after connecting using the VADC:
TCP 192.168.20.52:443 t480s:50996 ESTABLISHED
TCP 192.168.20.52:443 t480s:50997 ESTABLISHED
TCP 192.168.20.52:443 t480s:50998 ESTABLISHED
TCP 192.168.20.52:22443 t480s:51014 ESTABLISHED
TCP 192.168.20.52:32111 t480s:51027 ESTABLISHED
You might notice the typical Horizon View ports: 22443 for Blast Extreme and 32111 for USB redirection.
This posting is ~2 years old. You should keep this in mind. IT is a fast-moving business, and this information might be outdated.
This issue was a bit annoying. I faced it not in a customer environment, but on my second Lenovo laptop, an X250 with Windows 10 20H2. My intention was to use it headless in a docking station. So how should I access it? RDP? TeamViewer? Why not use the Horizon Direct Connection Plug-in?
The Horizon Direct Connection Plug-in is not a new feature, and you can think of it as a View Agent without a Connection Server. You can access the machine using the View Client, but you don’t have to run the connection through a Connection Server. A perfect fit for pretty small environments or direct access!
Error message View Agent setup
In order to use the Horizon Direct Connection Plug-in, you have to install the View Agent. So I downloaded the latest View 2006 Agent (VMware-Horizon-Agent-x86_64-8.0.0-16530789.exe) and started the setup.
It fails right at the beginning. Okay, I just installed Windows Updates, so I rebooted my laptop. But the setup fails again. Next reboot. And it fails, and fails, and fails.
There are some registry keys you can check if you get such an error. “PendingFileRenameOperations” is one of the common causes when you face this problem. I found a script, but there was no reboot pending.
I finally found it: RunOnce.
HKLM RunOnce
There was an entry under HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion\RunOnce. After deleting this entry, the setup went through.
Fun fact: There is an MSI property for this if you want to silently install the View Agent: SUPPRESS_RUNONCE_CHECK.
This posting is ~2 years old. You should keep this in mind. IT is a fast-moving business, and this information might be outdated.
After a reboot, a VMware ESXi 6.7 U3 host told me that it had no compatible NICs. Fun fact: right before the reboot, everything was fine.
The iLO also showed no NICs. Unfortunately, I wasn’t onsite to pull the blade server and put it back in. But there is a way to do this “virtually”.
You have to connect to the IP address of the Onboard Administrator via SSH. Then issue the reset server command with the device bay number of the server you want to reset.
OA1-C7000> reset server 13
WARNING: Resetting the server trips its E-Fuse. This causes all power to be momentarily removed from the server. This command should only be used when physical access to the server is unavailable, and the server must be removed and
reinserted.
Any disk operations on direct attached storage devices will be affected. I/O
will be interrupted on any direct attached I/O devices.
Entering anything other than 'YES' will result in the command not executing.
Do you want to continue ? yes
Successfully reset the E-Fuse for device bay 13.
The server will power up automatically. Please note that the OA is unable to display certain information right after this operation. It will take a couple of minutes until all information, like the serial number or the device bay name, is visible again.
This posting is ~2 years old. You should keep this in mind. IT is a fast-moving business, and this information might be outdated.
During a vSphere 6.5 > 6.7 upgrade, a host was continuously failing at the remediation with an “unknown error”. The host was updated from ESXi 6.5 to 6.7 using an upgrade baseline. Other hosts were updated to 6.7 and to the latest patches without any issues. Something strange was going on…
The esxupdate.log and the vua.log on the host itself showed nothing special. So I checked the vmware-vum-server-log4cpp.log which was much more informative!
[2020-07-19 13:03:25:217 'SingleHostScanTask.SingleHostScanTask{262}' 139762329831168 ERROR] [singleHostScanTask, 693] caught an odbc error: "ODBC error: (23505) - ERROR: duplicate key value violates unique constraint "pk_vci_scanresults"; Error while executing the query" is returned when executing SQL statement "INSERT INTO VCI_SCANRESULTS(endtime, id, scan_status, scan_type, starttime, target_component, target_uid) VALUES (?, ?, ?, ?, ?, ?, ?)"
[2020-07-19 13:03:25:219 'SingleHostScanTask.SingleHostScanTask{262}' 139762329831168 ERROR] [singleHostScanTask, 404] SingleHostScan caught exception: caught an odbc error: "ODBC error: (23505) - ERROR: duplicate key value violates unique constraint "pk_vci_scanresults"; Error while executing the query" is returned when executing SQL statement "INSERT INTO VCI_SCANRESULTS(endtime, id, scan_status, scan_type, starttime, target_component, target_uid) VALUES (?, ?, ?, ?, ?, ?, ?)" with code: -1
[2020-07-19 13:03:25:223 'SingleHostScanTask.SingleHostScanTask{262}' 139762329831168 ERROR] [vciTaskBase, 568] Task execution has failed: caught an odbc error: "ODBC error: (23505) - ERROR: duplicate key value violates unique constraint "pk_vci_scanresults"; Error while executing the query" is returned when executing SQL statement "INSERT INTO VCI_SCANRESULTS(endtime, id, scan_status, scan_type, starttime, target_component, target_uid) VALUES (?
Well… ERROR: duplicate key value violates unique constraint “pk_vci_scanresults” is not what I expected, but it is an error, and it occurred every time I tried to remediate the host.
Google found nothing about this error, so I decided to reset the VUM database. Please don’t try this at your customer! Log a call with VMware.
To reset the VUM database:
Connect to vCenter Server Appliance via SSH
Switch to the BASH
Stop the VMware Update Manager Service with this command
service-control --stop vmware-updatemgr
To reset the VMware Update Manager Database (applies only to VCSA 6.7 and 7.0!)
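The reset itself is done with the updatemgr-util tool shipped with the appliance; this is the command documented by VMware, but double-check the current KB article for your vCenter version before running it:

/usr/lib/vmware-updatemgr/bin/updatemgr-util reset-db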
Delete the contents of the VMware Update Manager Patch Store
rm -rf /storage/updatemgr/patch-store/*
Start the VMware Update Manager Service again
service-control --start vmware-updatemgr
You will lose all your baselines, so you have to configure them again. And you need to download all patches again.
For vSAN environments this procedure will also remove the vSAN default baselines, but they will be recreated automatically when there is a configuration change to vSAN or an update to the HCL DB. Again: don’t do this at home!
This posting is ~2 years old. You should keep this in mind. IT is a fast-moving business, and this information might be outdated.
Public Folders are still a thing. And while companies are moving their stuff into the cloud, Public Folders still need to be accessed by cloud-located mailboxes.
Allowing access from Exchange Online mailboxes to on-premises hosted Public Folders is well documented by Microsoft, but there are also some pitfalls. I had to deal with this during an Office 365 transition project at one of my customers.
The background
The customer is running a single Exchange 2016 server in a Windows Server 2012 R2 forest. Azure AD Sync is running and is syncing on-premises identities to Azure AD. The customer uses Office 365 E5 plans and wants to move to Exchange Online, alongside other O365 services like SharePoint Online, Teams etc.
Something was missing
After setting up the Exchange Hybrid, the customer and I were able to migrate the first mailboxes to Exchange Online.
To our surprise, the on-premises Public Folders were not visible from the migrated Exchange Online mailboxes. We still had things to do…
In order to get access to the Public Folders working, the Public Folder mailbox object needs to be synced to Azure AD. This is not complicated, because all you need to make sure is that the user object is synced. If you are using an OU filter for the Azure AD sync, make sure that the OU with the Public Folder mailbox user object is included in the sync.
Check if the user is synced by using the Exchange Online PowerShell. This is how the Public Folder mailbox user looks like from the Exchange Online perspective:
PS C:\Users\p.terlisten> Get-MailUser Mailbox1
Name RecipientType
---- -------------
Mailbox1 MailUser
This is the Public Folder mailbox that is hosted on-premise.
PS C:\Users\p.terlisten> Get-Mailbox -PublicFolder
Name Alias ServerName ProhibitSendQuota
---- ----- ---------- -----------------
Mailbox1 Mailbox1 EX Unlimited
As long as you can’t see the MailUser in Exchange Online, you have no chance to configure the Public Folder access.
The next step is to synchronize the mail-enabled Public Folder objects to Exchange Online. For this, you have to download two scripts from Microsoft.
Sync-ModernMailPublicFolders.ps1
Sync-ModernMailPublicFolders.psd1
Run the Sync-ModernMailPublicFolders.ps1 script on your on-premises Exchange server. You will need your Office 365 admin credentials for this task.
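The typical invocation, as shown in Microsoft’s documentation (the CSV file name is just an example), and the follow-up command in the Exchange Online PowerShell that points Exchange Online at the on-premises Public Folder mailbox, look like this:

# On the on-premises Exchange server: sync the mail-enabled Public Folder objects to Exchange Online
.\Sync-ModernMailPublicFolders.ps1 -Credential (Get-Credential) -CsvSummaryFile:sync_summary.csv

# In the Exchange Online PowerShell: enable access to the on-premises Public Folders
Set-OrganizationConfig -PublicFoldersEnabled Remote -RemotePublicFolderMailboxes Mailbox1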
Please note that some of these steps need some time to become effective! It will take a while for the background tasks to sort things out.
Controlled Connections to Public Folders in Outlook
It is worth mentioning that after enabling access to Public Folders, all Exchange Online users can see the on-premises hosted Public Folders. If you need to enable access only for some Exchange Online users, Microsoft has a solution for you: Controlled Connections to Public Folders.
First, you need to enable the Public Folder access for the users you have selected.
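As far as I know, this boils down to a per-user flag plus an organization-wide switch that tells Outlook to honour it. A hedged sketch from the Exchange Online PowerShell; the user is a placeholder:

# Allow Public Folder access only for selected users
Set-CASMailbox -Identity 'user1@contoso.com' -PublicFolderClientAccess $true

# Tell Outlook clients to evaluate the per-user setting
Set-OrganizationConfig -PublicFolderShowClientControl $true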
This posting is ~2 years old. You should keep this in mind. IT is a fast-moving business, and this information might be outdated.
A customer of mine asked for help to analyse a weird OAuth error. They are using a Microsoft Dynamics 365 Outlook plugin, which came up with an error:
“Can’t connect to Exchange”
In addition to this, they also faced an issue accessing shared calendars of Exchange Online mailboxes.
Clearly an OAuth error. So we ran the Hybrid Configuration Wizard again, which finished without any errors. But the errors persisted. Next stop: the OAuth configuration.
We logged into one of the Exchange servers, started an Exchange Management Shell and checked the current OAuth configuration:
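A few commands that are usually part of such a check; the mailbox address is a placeholder:

# Show the OAuth/hybrid objects on-premises
Get-IntraOrganizationConnector | Format-List Name,TargetAddressDomains,DiscoveryEndpoint,Enabled
Get-PartnerApplication | Format-List Name,Enabled,AuthMetadataUrl

# Test OAuth connectivity to Exchange Online for a given mailbox
Test-OAuthConnectivity -Service EWS -TargetUri 'https://outlook.office365.com/ews/exchange.asmx' -Mailbox 'user1@contoso.com' | Format-List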