Tag Archives: microsoft

Choose one, choose wisely – Office 365 tenant name

This posting is ~2 years old. You should keep this in mind. IT is a fast-moving business. This information might be outdated.

In the last months I came across several customers that were evaluating or deploying Office 365. It usually started with an Office 365 trial that some of the IT guys began to play around with. Weeks or months later, during the proof of concept or the final deployment, the customer had to choose an Office 365 tenant name. That is the part before .onmicrosoft.com.

Office 365 User ID Creation

Patrick Terlisten/ www.vcloudnine.de/ Creative Commons CC0

I have seen it multiple times that the desired tenant name was already taken. Bummer. But the customer wants to move on, so they decided to take another name. For example, they added the post code to the name, or a random string. To their surprise, I put my veto on it. They immediately understood why after I explained the importance of the tenant name.

The tenant name is visible for everyone

When using SharePoint or OneDrive for Business, the Office 365 tenant name is part of the URL used to access the service. Because of this, the tenant name is visible to everyone, including your customers. And no one wants to click on a link that points to noobslayer4711.onmicrosoft.com.

How to build a good tenant name

When thinking about the tenant name, make sure that you involve all necessary people in your company. Make sure that management and marketing have agreed before you recommend a specific tenant name.

Don’t use long names or tenant names with numbers at the end. They might look suspicious and randomly generated. Make sure that the tenant name does not include parts that might change in the near future, for example the legal form of your company.

Don’t add the current year, month or a date to it. Don’t add things like “online” or “24” to it, unless it’s part of the company’s name.

If you have created a tenant during a trial or a proof of concept, try to reactivate it, especially if that tenant uses the desired name.

Currently, you can’t change the Office 365 tenant name. I don’t know if Microsoft plans to make this possible.

How to reclaim a tenant name

As far as I know, there is no process for reclaiming a tenant name instantly. When the last subscription of a tenant expires, the tenant becomes inactive. After 30 days, the tenant will be decommissioned. But it takes several months until a tenant name can be used again.

As I said: Choose one, choose wisely…

Hell freezes over – VMware virtualization on Microsoft Azure

This posting is ~2 years old. You should keep this in mind. IT is a fast-moving business. This information might be outdated.
Update

On November 22, 2017, Ajay Patel (Senior Vice President, Product Development, Cloud Services, VMware) published a blog post in reaction to Microsoft’s announcement (VMware – The Platform of Choice in the Cloud). These statements in particular are interesting:

No VMware-certified partner names have been mentioned nor have any partners collaborated with VMware in engineering this offering. This offering has been developed independent of VMware, and is neither certified nor supported by VMware.

and

Microsoft recognizing the leadership position of VMware’s offering and exploring support for VMware on Azure as a superior and necessary solution for customers over Hyper-V or native Azure Stack environments is understandable but, we do not believe this approach will offer customers a good solution to their hybrid or multi-cloud future.

Looks like VMware is not happy about Microsoft’s announcement. And this blog post clearly states that VMware will not partner with Microsoft to bring the VMware virtualization stack to Azure.

I don’t know if this is a wise decision by VMware. The hypervisor, their core product, is a commodity nowadays. We are talking about a bare-metal solution, so it’s not different from what VMware built with AWS. It’s more about how it is embedded in the cloud services and the cloud control plane. If you use VMware vSphere, Horizon and O365, the step to move virtualization workloads to VMware on Azure is smaller than moving them to AWS.

On November 23, 2017, The Register published this interesting analysis: VMware refuses to support its wares running in Azure.

Original post

Yesterday, Microsoft announced new services to ease the migration from VMware to Microsoft Azure. Corey Sanders (Director of Compute, Azure) posted a blog post (Transforming your VMware environment with Microsoft Azure) and introduced three new Azure services.

Microsoft Azure

Microsoft/ microsoft.com

Azure Migrate

The free Azure Migrate service does not focus on single-server workloads. It is focused on multi-server applications and will help customers through three stages:

  • Discovery and assessment
  • Migration, and
  • Resource & Cost Optimization

Azure Migrate can discover your VMware-hosted applications on-premises, it can visualize dependencies between them, and it will help customers to create a suitable sizing for the Azure-hosted VMs. Azure Site Recovery (ASR) is used for the migration of workloads from the on-premises VMware infrastructure to Microsoft Azure. At the end, when your applications are running on Microsoft Azure, the free Azure Cost Management service helps you to forecast, track, and optimize your spending.

Integrate VMware workloads with Azure services

Many of the currently available Azure services can be used with your on-premises VMware infrastructure, without the need to migrate workloads to Microsoft Azure. This includes Azure Backup, Azure Site Recovery, and Azure Log Analytics, as well as managing Microsoft Azure resources with VMware vRealize Automation.

But the real game-changer seems to be this:

Host VMware infrastructure with VMware virtualization on Azure

Bam! Microsoft announced the preview of VMware vSphere on Microsoft Azure. It will run on bare metal on Azure hardware, alongside other Azure services. General availability is expected in 2018.

My two cents

This is the second big announcement about VMware stuff on Azure (don’t forget VMware Horizon Cloud on Microsoft Azure). And although I believe that this is something Microsoft offers to get more customers onto Azure, this can be a great chance for VMware. VMware customers don’t have to go to Amazon when they want to host VMware at a major public cloud provider, especially if they are already Microsoft Azure/ O365 customers. This is a pretty bold move by Microsoft, and similar to VMware Cloud on AWS. I’m curious to get more details about this.

How to install Visual Studio Code on Linux Mint 18

This posting is ~2 years old. You should keep this in mind. IT is a fast-moving business. This information might be outdated.

I wrote about the installation of PowerShell Core on Linux Mint 18 yesterday. Today, I want to show you how to install Visual Studio Code on Linux Mint 18. The installation is really easy:

  1. Download the deb package
  2. Install the deb package
  3. Run Visual Studio Code

You can download the latest packages for Windows, Linux (deb and rpm, and even a tarball if you want), and Mac on the Visual Studio Code download page. Download the deb file. To install the package, open a terminal window and run dpkg.
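Assuming the downloaded file is called something like code_*_amd64.deb (the real name depends on the release, so the filename here is a placeholder), the install steps look like this sketch:

```shell
# Install the downloaded package; dpkg may report missing dependencies
sudo dpkg -i code_*_amd64.deb

# If dependencies were missing, let apt pull them in
sudo apt-get -f install
```

If dpkg finishes without complaints, the second command simply does nothing.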

Visual Studio Code on Linux

Patrick Terlisten/ www.vcloudnine.de/ Creative Commons CC0

sudo might ask you for a password. That’s it! Now you can simply start VS Code. After you have installed your favorite extensions, VS Code is ready to code.

How to install PowerShell Core on Linux Mint 18

This posting is ~2 years old. You should keep this in mind. IT is a fast-moving business. This information might be outdated.

Beside my Lenovo X250, which is my primary working machine, I’m using a HP ProBook 6450b. This was my primary working machine from 2010 until 2013. With a 128 GB SSD, 8 GB RAM and an Intel i5 M 450 CPU, it is still a pretty usable machine. I used it mainly during projects, when I needed a second laptop (or the PC Express card with the serial port…). It was running Windows 10, until I decided to try Linux Mint. I used Linux as my primary desktop OS more than a decade ago. It was quite productive, but especially with laptops, there were many things that did not work out of the box.

Because I use PowerShell quite often, and PowerShell is available for Windows, macOS and Linux, the installation of PowerShell on this Linux laptop is a must.

How to install PowerShell?

Linux Mint is based on Ubuntu, and I’m currently using Linux Mint 18.2. Microsoft offers different pre-compiled packages in the PowerShell GitHub repo. For Linux Mint 18, you have to download the Ubuntu 16.04 package. For Linux Mint 17, you will need the 14.04 package. Because you need the shell to install the packages, you can download the deb package from the shell as well. I used wget to download the deb package.

The next step is to install the deb package, and to fix broken dependencies. Make sure that you run dpkg with sudo.

Looks like it failed because of broken dependencies. But this can be easily fixed. To fix the broken dependencies, run apt-get -f install. Make sure that you run it with sudo!
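Put together, the whole flow looks like this sketch (the package URL and version are placeholders; grab the current one from the GitHub releases page). The little helper at the top just encodes the Mint-to-Ubuntu mapping mentioned above:

```shell
#!/bin/sh
# Map a Linux Mint release to the Ubuntu release its packages are built for
# (Mint 18.x is based on Ubuntu 16.04, Mint 17.x on Ubuntu 14.04)
mint_to_ubuntu() {
  case "$1" in
    18*) echo "16.04" ;;
    17*) echo "14.04" ;;
    *)   echo "unknown" ;;
  esac
}

mint_to_ubuntu 18.2   # -> 16.04

# Download the matching .deb (URL/version are placeholders -- check the releases page)
# wget https://github.com/PowerShell/PowerShell/releases/download/v6.0.0/powershell_6.0.0-1.ubuntu.16.04_amd64.deb

# Install it, then fix the broken dependencies that dpkg reports
# sudo dpkg -i powershell_*.deb
# sudo apt-get -f install
```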

That’s it! PowerShell is now installed.

Yep, looks like a PowerShell prompt…on Linux. Thank you, Microsoft! :)

Workaround for broken Windows 10 Start Menus with floating desktops

This posting is ~2 years old. You should keep this in mind. IT is a fast-moving business. This information might be outdated.

Last month, I wrote about a very annoying issue that I discovered during a Windows 10 VDI deployment: roaming of the AppData\Local folder breaks the Start Menu of Windows 10 Enterprise (Roaming of AppData\Local breaks Windows 10 Start Menu). During my research, I stumbled over dozens of threads about this issue.

Today, after hours and hours of testing, troubleshooting and reading, I might have found a solution.

The environment

Currently I don’t know if this is a workaround, a weird hack, or no solution at all. Maybe it was luck that none of my 2074203423 logins on different linked clones resulted in a broken start menu. The customer is running:

  • Horizon View 7.1
  • Windows 10 Enterprise N LTSB 2016 (1607)
  • View Agent 7.1 with enabled Persona Management

Searching for a solution

During my tests, I tried to discover WHY the TileDataLayer breaks. As I wrote in my earlier blog post, it is sufficient to delete the TileDataLayer folder. The folder will be recreated during the next logon, and the start menu works again. Today, I added path after path to the “Files and folders excluded from roaming” GPO setting, and at some point I had a working start menu. With this in mind, I did some research and stumbled over a VMware Communities thread (Vmware Horizon View 7.0.3 – Linked clone – Persistent mode – Persona management – Windows 10 (1607) – -> Windows 10 Start Menu doesn’t work).

User oliober did the same: he roamed only a couple of folders, one of them being the TileDataLayer folder, but not the whole AppData\Local folder.

The “solution”

To make a long story short: you have to enable the roaming of AppData\Local, then exclude AppData\Local from roaming, and finally add only the necessary folders (such as TileDataLayer) to the list of folders that are excluded from the exclusion. Sounds funny, but it seems to work.

Horizon View GPO AppData Roaming

Patrick Terlisten/ www.vcloudnine.de/ Creative Commons CC0

Feedback is welcome!

I am very interested in feedback. It would be great if you have the chance to verify this behaviour. Please leave a comment with your results.

As I already said: I don’t know if this is a workaround, a hack, a solution, or no solution at all. But for now, it seems to work. Microsoft deprecated the TileDataLayer in Windows 10 1703, so for this newer Windows 10 build we have to find another working solution. The “solution” described above only works for 1607. But if you are using the Long Term Servicing Branch, it will work for the next 10 years. ;)

Why “Patch Tuesday” is only every four weeks – or never

This posting is ~2 years old. You should keep this in mind. IT is a fast-moving business. This information might be outdated.

Today, this tweet caught my attention.

Patch management is currently a hot topic, primarily because of the latest ransomware attacks.

After the appearance of WannaCry, one of my older blog posts got unusual attention: WSUS on Windows 2012 (R2) and KB3159706 – WSUS console fails to connect. Why? My guess: many admins started updating their Windows servers after the appearance of WannaCry. Nearly a year after Microsoft published KB3159706, their WSUS servers ran into this issue.

The truth about patch management

I know many enterprises that patch their Windows clients and servers only every four or eight weeks, mostly during a maintenance window. Some of them do this because their change processes require the deployment and testing of updates in a test environment. But some of them are simply too lazy to install updates more frequently. So they approve all needed updates every four or eight weeks, push them to their servers, and reboot them.

Trond mentioned golden images and templates in his blog post. I strongly agree with what he wrote, because this is something I see quite often: you deploy a server from a template, and the newly deployed server has to install 172 updates, because the template has never been updated since its creation. But I also know companies that don’t use templates or golden master images. They simply create a new VM, mount an ISO file, and install the server from scratch. And because it’s urgent, the server is not patched when it goes into production.

Sorry, but that’s the truth about patch management: either it is done irregularly, at intervals that are too long, or not at all.

Change Management from hell

Frameworks such as ITIL also play their part in this tragedy. Applying change management processes to something like patch management prevents companies from responding quickly to threats. If your change management process prevents you from deploying critical security patches ASAP, you have a problem – a problem with your change management process.

If your change management process requires the deployment of patches in a test environment first, you should change your change management process. What is the bigger risk? Deploying a faulty patch, or being the victim of an upcoming ransomware attack?

Microsoft Windows Server Update Services (WSUS) offers a way to automatically approve patches. This is something you want! You want to automatically approve critical security patches. And you also want your servers to install these updates automatically, and to restart if necessary. If you can’t restart servers automatically when required, you need short maintenance windows every week to reboot these servers. If this is not possible at all, you have a problem with your infrastructure design. And this does not only apply to Microsoft updates. It applies to ALL systems in your environment. VMware ESXi hosts with uptimes > 100 days are not a sign of stability. They are a sign of missing patches.

Validated environments are ransomwares best friends

This is another topic I come across regularly: validated environments. An environment that was installed with specific applications, in a specific setup. This setup was tested according to a checklist, and its function was documented. At the end of this process, you have a validated environment, and most vendors don’t support changes to this environment without a new validation process. Sorry, but this is a pain in the rear! If you can’t update such an environment, place it behind a firewall, disconnect it from your network, and prohibit the use of removable media such as USB sticks. Do not allow this environment to be ground zero for a ransomware attack.

I know many environments with Windows 2000, XP, 2003, or even older stuff, that are used to run production facilities, test stands, or machinery. In some cases the software or hardware vendor no longer exists, which makes the application that is needed to keep the machinery running yet another security risk.

Patch quick, patch often

IT departments should install patches more often, and shortly after their release. The risk of deploying a faulty patch is lower than the risk of being hit by a security attack, especially when we are talking about critical security patches.

IT departments should focus on the value that they deliver to the business. IT services that are down due to a security attack can’t deliver any value. Security breaches in general are bad for reputation and revenue. If your customers and users complain about frequent maintenance windows due to critical security patches, you should improve your communication about why this is important.

Famous last words

I don’t install Microsoft patches instantly. Some years ago, Microsoft published a patch that caused problems. Imagine if a patch meant that our users couldn’t print?! That would be bad!

We don’t have time to install updates more often. We have to work off tickets.

We don’t have to automate our server deployment. We deploy only x servers a week/ month/ year.

We have a firewall from $VENDOR.

Secure your Azure deployment with Palo Alto VM-Series for Azure

This posting is ~3 years old. You should keep this in mind. IT is a fast-moving business. This information might be outdated.

When I talk to customers and colleagues about cloud offerings, most of them are still concerned about the cloud, and especially about the security of public cloud offerings. One of the most frequently mentioned concerns is based on the belief that each and every cloud-based VM is publicly reachable over the internet. That can be the case, but it doesn’t have to be. It depends on your design. Maybe this is only a problem in Germany. German privacy policies are the reason for the two German Azure datacenters, which are run by Deutsche Telekom, not by Microsoft.

Azure Virtual Networks

An Azure Virtual Network (VNet) is a network inside the public Azure cloud. It is isolated from the underlying infrastructure and it is dedicated to you. This allows you to fully control IP addressing, DNS, security policies and routing between subnets. Virtual Networks can include multiple subnets to reflect different security zones and/ or multi-tier designs. If you want to connect two or more VNets in the same region, you have to use VNet peering. Microsoft offers excellent documentation about Virtual Networks. Because routing is managed by the Azure infrastructure, you will need to set user-defined routes to push traffic through a firewall or load-balancing appliance.
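As a sketch of what such a user-defined route looks like in practice, here is the Azure CLI variant (resource group, VNet, subnet names and the appliance IP are made-up examples): all outbound traffic from a workload subnet is pushed to a firewall appliance at 10.0.1.4.

```shell
# Create a route table (names and addresses are hypothetical)
az network route-table create --resource-group myRG --name fw-routes

# Default route: send 0.0.0.0/0 to the firewall appliance's internal IP
az network route-table route create --resource-group myRG \
  --route-table-name fw-routes --name default-via-fw \
  --address-prefix 0.0.0.0/0 \
  --next-hop-type VirtualAppliance \
  --next-hop-ip-address 10.0.1.4

# Associate the route table with the workload subnet
az network vnet subnet update --resource-group myRG \
  --vnet-name myVNet --name workload-subnet \
  --route-table fw-routes
```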

Who is Palo Alto Networks?

Palo Alto Networks was founded by Nir Zuk in 2005. Nir Zuk is the founder and CTO of Palo Alto Networks, and he is still leading the development. He is a former employee of Check Point and NetScreen (which was acquired by Juniper Networks). His motivation to develop his vision of a Next Generation Firewall (NGF) was the fact that firewalls were unable to look into traffic streams. We all know this: you want your employees to be able to use Google, but you don’t want them to access Facebook. Designing policies for this can be a real PITA. You can solve this with a proxy server, but a proxy has other disadvantages.

Gartner has identified Palo Alto Networks as a leader in the enterprise firewall market since 2011.

I was able to get my hands on some Palo Alto firewalls, and I think I understand why Palo Alto Networks is regarded as a leader.

VM-Series for Microsoft Azure

Sometimes you have to separate networks. That is no big deal when your servers are located in your datacenter, even if they are virtualized. But what if the servers are located in a VNet on Azure? As already mentioned, you can create different subnets in an Azure VNet to create a multi-tier or multi-subnet environment. Because routing is managed by the underlying Azure infrastructure, you have to use Network Security Groups (NSGs) to manage traffic. An NSG contains rules to allow or deny network traffic to VMs in a VNet. Unfortunately, NSGs can only act on layer 4. If you need something that can act on layer 7, you need something different. This is where the Palo Alto Networks VM-Series for Microsoft Azure comes into play.

The VM-Series for Microsoft Azure can be deployed directly from the Azure Marketplace. Palo Alto Networks also offers ARM templates on GitHub.

Palo Alto Networks aims at four main use-cases:

  • Hybrid Cloud
  • Segmentation Gateway Compliance
  • Internet Gateway

The hybrid cloud use-case is interesting if you want to extend your datacenter to Azure, for example if you move development workloads there. Instead of using Azure’s native VPN connection capabilities, you can use the VM-Series Palo Alto Networks NGF as an IPsec gateway.

If you are running different workloads on Azure, and you need inter-subnet communication between them, you can use the VM-Series as a firewall between the subnets. This allows you to manage traffic more efficiently, and it provides more security compared to the Azure NSGs.

If you are running production workloads on Azure, e.g. an RDS farm, you can use the VM-Series to secure the internet access of that RDS farm. Thanks to the integration with directory services, like Microsoft Active Directory or plain LDAP, user-based policies allow the management of traffic based on user identity.

There is a fourth use-case: Palo Alto Networks GlobalProtect. With GlobalProtect, the capabilities of the NGF are extended to remote users and devices. Traffic is tunneled to the NGF, and users and devices are protected from threats. User- and application-based policies can be enforced regardless of where the user and the device are located: on-premises, in a remote location, or in the cloud.

Palo Alto Networks offers two ways to purchase the VM-Series for Microsoft Azure:

  • Consumption-based licensing
  • Bring your own license (BYOL)

The consumption-based licensing is only available for the VM-300. The smaller VM-100, as well as the bigger VM-500 and VM-700, are only available via BYOL. It’s a good idea to offer a mid-sized model with a consumption-based license. If the VM-300 is too big (with consumption-based licensing), you can purchase a permanent license for a VM-100. If you need more performance, purchasing your own license might be the better way. You can start with a VM-300 and then right-size the model and license.

All models can handle a throughput of 1 Gb/s, but they differ in the number of supported sessions. The VM-100 and VM-300 run on D3_v2 instances, as do the VM-500 and VM-700.

Just play with it

Just create some Azure VM instances and deploy a VM-300 from the marketplace. Play with it. It’s awesome!

Single Sign On (SSO) with RemoteApps on Windows Server 2012 (R2)

This posting is ~3 years old. You should keep this in mind. IT is a fast-moving business. This information might be outdated.

A RemoteApp is an application that runs on a Remote Desktop Session Host (RDSH); only the display output is sent to the client. Because the application runs on an RDSH, you can easily deliver applications to end users. Another benefit is that software and data never leave the datacenter. RemoteApps can be used and deployed in various ways:

  • Users can start RemoteApps through the Remote Desktop Web Access
  • Users can start RemoteApps using a special RDP file
  • Users can simply start a link on the desktop or from the start menu (RemoteApps and Desktop connections deployed by an MSI or a GPO)
  • or they can click on a file that is associated with a RemoteApp

Even in times of VDI (LOL…), RemoteApps can be quite handy. You can deploy virtual desktops without any installed applications, and the applications can then be delivered using RemoteApps. This can be handy if you migrate from RDSH/ Citrix published desktops to VMware Horizon View, or if you are already using RDSH and want to try VMware Horizon View.

But three things can really spoil the RemoteApp experience:

  • certificate warnings
  • warnings about an untrusted publisher
  • asking for credentials (no Single Sign On)

Avoid certificate warnings

As part of the RDS deployment, the assistant kindly asks for certificates. Sure, you can deploy self-signed certificates, but that’s not a good idea. You should deploy certificates from your internal certificate authority.

RDS Certificate Settings

Patrick Terlisten/ www.vcloudnine.de/ Creative Commons CC0

This is a screenshot from my tiny single server RDS farm. Make sure that you use the correct names for the certificates! If you are using a RDS farm, make sure that you include the DNS name of the RD Connection Broker HA cluster.

If you want to make the RD Web Access publicly available, make sure that you include the public DNS name into the certificate.

Untrusted Publisher

When you try to open a RemoteApp, you might get this message:

RDS Connection Warning

Patrick Terlisten/ www.vcloudnine.de/ Creative Commons CC0

Annoying, isn’t it? But it is easy to fix. Remember the certificates you deployed during the RDS deployment? You need the certificate thumbprint of the publisher certificate (check the screenshot from the deployment properties > “RD Connection Broker – Publishing”). This is a screenshot from my lab:

RDS Certificate Thumbprint

Patrick Terlisten/ www.vcloudnine.de/ Creative Commons CC0

Take this thumbprint, open a PowerShell window, and convert the thumbprint into a format that can be used with the GPO we have to build.
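The conversion itself is trivial: remove the spaces and force uppercase. The post does it in PowerShell; as a sketch, the same normalization with standard shell tools (the thumbprint below is a made-up example):

```shell
# Normalize a certificate thumbprint for the GPO: strip spaces, force uppercase
# (the thumbprint value is a placeholder)
thumbprint="1a 2b 3c 4d 5e"
echo "$thumbprint" | tr -d ' ' | tr '[:lower:]' '[:upper:]'
# -> 1A2B3C4D5E
```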

The result is a string without spaces, containing only uppercase letters. Now we need to create a GPO. This GPO has to be linked to the OU in which the computers or users that should use the RemoteApp reside. In my example, I use the user part of a GPO, so it has to be linked to the OU in which the users reside. The necessary GPO setting can be found here:

User Configuration > Policies > Administrative Templates > Windows Components > Remote Desktop Services > Remote Desktop Connection Client > Specify SHA1 thumbprints of certificates representing trusted .rdp publishers

RemoteApp GPO Settings SHA1 Thumbprint

Patrick Terlisten/ www.vcloudnine.de/ Creative Commons CC0

I use the same GPO to publish the default connection URL. With this setting configured, the users automatically get the published RemoteApps to their start menu. You can find the setting here:

User Configuration > Policies > Administrative Templates > Windows Components > Remote Desktop Services > RemoteApp and Desktop Connections > Specify default connection URL

RemoteApp Connection URL

Patrick Terlisten/ www.vcloudnine.de/ Creative Commons CC0

Credentials dialog

At this point, you will still get a credentials dialog. To allow the client to pass the current user’s login information to the RDS host, we need to configure an additional setting. Create a new GPO and link it to the OU in which the computers on which the RemoteApps should be used reside. The setting can be found here:

Computer Configuration > Policies > Administrative Templates > System > Credentials Delegation > Allow delegating default credentials

RemoteApp Credential Delegation

Patrick Terlisten/ www.vcloudnine.de/ Creative Commons CC0

You have to add the FQDN of your RD Connection Broker server or farm. Please make sure that you add the “TERMSRV/” prefix! Because I use a single-server deployment, my RD Connection Broker is also my RDS host.
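The entry in the “Allow delegating default credentials” list then looks like this (the FQDN is a made-up example; use the name of your RD Connection Broker server or farm):

```
TERMSRV/rdsfarm.example.local
```

A wildcard for a whole domain is also possible, e.g. TERMSRV/*.example.local.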

The final test

Make sure that all group policies have been applied. Open the Remote Desktop Connection client and enter the RDS farm name. If everything is configured properly, you should be connected without being asked for credentials.

RDS Connection

Patrick Terlisten/ www.vcloudnine.de/ Creative Commons CC0

The same should happen if you try to start a RemoteApp.

RDS Connection Notepad++

Patrick Terlisten/ www.vcloudnine.de/ Creative Commons CC0

Troubleshooting

If you are still being asked for credentials, something is wrong with the credentials delegation. Check the GPO and verify that it is linked to the correct OU. If you are getting certificate warnings, check the names that you have included in the certificates. Warnings about untrusted publishers may be caused by a wrong SHA1 thumbprint (or a wrong format).

Tiny PowerShell/ Azure project: Deploy-AzureLab.ps1

This posting is ~3 years old. You should keep this in mind. IT is a fast-moving business. This information might be outdated.

One of my personal predictions for 2017 is that Microsoft Azure will gain more market share, especially here in Germany. Because of this, I have started to refresh my knowledge about Azure. A nice side effect is that I can also improve my PowerShell skills.

Currently, the script creates a couple of VMs and resource groups. Nothing more, nothing less. The next features I want to add are:

  • add additional disks to the DCs (for SYSVOL and NTDS)
  • promote both servers to domain controllers
  • change the DNS settings for the Azure vNetwork
  • deploy a Windows 10 client VM

I created a new repository on GitHub and shared a first v0.1 as a public Gist. Please note that this is REALLY a v0.1.

Surprise, surprise: Enable/ disable circular logging without downtime

This posting is ~3 years old. You should keep this in mind. IT is a fast-moving business. This information might be outdated.

As part of a troubleshooting process, I had to disable circular logging on a Microsoft Exchange 2013 mailbox database that was part of a Database Availability Group (DAG).

What is circular logging? Microsoft Exchange uses log files that record all transactions and modifications to a mailbox database. In this regard, Exchange works like MS SQL Server or Oracle. These log files are used to bring the database back to a consistent state after a crash, or to recover a database to a specific point in time. Because everything is written to log files, the number of log files increases over time. Log files are truncated as soon as a successful full backup of the database is taken.

If you don’t need the capability to recover to a specific point in time, or if you are running low on disk space, circular logging can be an alternative, especially if you are running low on disk space because your backup isn’t working as expected. With circular logging enabled, Microsoft Exchange maintains only the minimum number of log files necessary to keep the Exchange server running. As soon as a transaction has been written to the database, the log file can be overwritten.

I rarely use circular logging. But this time I had to. As already mentioned, I had a mailbox database with circular logging enabled. This database was part of a DAG, and I had to disable circular logging. Usually, you need to dismount and re-mount the database after enabling or disabling circular logging.

You can enable or disable circular logging using the Microsoft Exchange Control Panel (ECP), or with the Microsoft Exchange Management Shell. I used PowerShell.

To my surprise, dismounting and re-mounting the database was not necessary. Circular logging was disabled without downtime. I checked TechNet, and it confirmed the observed behaviour.
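For reference, the toggle is a one-liner in the Exchange Management Shell; this sketch uses a placeholder database name:

```powershell
# Disable circular logging on a mailbox database (database name is a placeholder).
# On a DAG-replicated (CRCL) database this takes effect without a dismount/remount.
Set-MailboxDatabase -Identity "DB01" -CircularLoggingEnabled $false

# Verify the setting
Get-MailboxDatabase -Identity "DB01" | Select-Object Name, CircularLoggingEnabled
```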

JET vs. CRCL

A non-replicated mailbox database will use JET circular logging. If the database is part of a DAG, it will use continuous replication circular logging (CRCL). A benefit of CRCL is that it can be enabled and disabled without the need to dismount and re-mount the mailbox database.

To be clear: this only works with replicated mailbox databases, because only databases that belong to a DAG use CRCL. If you are using standalone mailbox databases, you have to dismount and re-mount the database after enabling or disabling circular logging.

As I mentioned earlier: I really don’t use circular logging often, but it was very handy today!