Tag Archives: microsoft

How to install PowerShell Core on Linux Mint 18

This posting is ~2 years old. You should keep this in mind. IT is a fast-moving business. This information might be outdated.

Besides my Lenovo X250, which is my primary working machine, I’m using a HP ProBook 6450b. This was my primary working machine from 2010 until 2013. With a 128 GB SSD, 8 GB RAM and the Intel i5 M 450 CPU, it is still a pretty usable machine. I used it mainly during projects, when I needed a second laptop (or the PC Express card with the serial port…). It was running Windows 10, until I decided to try Linux Mint. I used Linux as my primary desktop OS more than a decade ago. It was quite productive, but especially with laptops, there were many things that did not work out of the box.

Because I use PowerShell quite often, and because PowerShell is available for Windows, macOS and Linux, installing PowerShell on this Linux laptop was a must.

How to install PowerShell?

Linux Mint is based on Ubuntu, and I’m currently using Linux Mint 18.2. Microsoft offers different pre-compiled packages on the PowerShell GitHub repo. For Linux Mint 18, you have to download the Ubuntu 16.04 package; for Linux Mint 17, you will need the 14.04 package. Because you need the shell to install the packages, you can download the deb package from the shell as well. I used wget to download the deb package.
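For example (the version and file name below are only illustrative; pick the current Ubuntu 16.04 package from the releases page):

    # download the Ubuntu 16.04 deb package from the PowerShell GitHub releases
    wget https://github.com/PowerShell/PowerShell/releases/download/v6.0.0-beta.3/powershell_6.0.0-beta.3-1ubuntu1.16.04.1_amd64.deb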

The next step is to install the deb package, and to fix broken dependencies. Make sure that you run dpkg with sudo.
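With the file name from the download above:

    # install the deb package - dependency errors at this point are expected
    sudo dpkg -i powershell_6.0.0-beta.3-1ubuntu1.16.04.1_amd64.deb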

Looks like it failed because of broken dependencies. But this can be fixed easily. To fix the broken dependencies, run apt-get -f install. Make sure that you run it with sudo!
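That is a single command:

    # pull in the missing dependencies and finish the PowerShell installation
    sudo apt-get -f install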

That’s it! PowerShell is now installed.
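Depending on the package version, the binary is called powershell (pwsh from 6.0 on), so a first test looks like this:

    # start PowerShell Core
    powershell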

Yep, looks like a PowerShell prompt…on Linux. Thank you, Microsoft! :)

Workaround for broken Windows 10 Start Menus with floating desktops

This posting is ~2 years old. You should keep this in mind. IT is a fast-moving business. This information might be outdated.

Last month, I wrote about a very annoying issue that I discovered during a Windows 10 VDI deployment: Roaming of the AppData\Local folder breaks the Start Menu of Windows 10 Enterprise (Roaming of AppData\Local breaks Windows 10 Start Menu). During research, I stumbled over dozens of threads about this issue.

Today, after hours and hours of testing, troubleshooting and reading, I might have found a solution.

The environment

Currently, I don’t know if this is a workaround, a weird hack, or no solution at all. Maybe it was just luck that none of my 2074203423 logins at different linked clones resulted in a broken start menu. The customer is running:

  • Horizon View 7.1
  • Windows 10 Enterprise N LTSB 2016 (1607)
  • View Agent 7.1 with Persona Management enabled

Searching for a solution

During my tests, I tried to discover WHY the TileDataLayer breaks. As I wrote in my earlier blog post, it is sufficient to delete the TileDataLayer folder. The folder will be recreated during the next logon, and the start menu is working again. Today, I added path after path to the “Files and folders excluded from roaming” GPO setting, and at some point I had a working start menu. With this in mind, I did some research and stumbled over a VMware Communities thread (Vmware Horizon View 7.0.3 – Linked clone – Persistent mode – Persona management – Windows 10 (1607) – -> Windows 10 Start Menu doesn’t work).

In that thread, user oliober did the same: he roamed only a couple of folders, one of them being the TileDataLayer folder, but not the whole AppData\Local folder.
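By the way, if you just need the quick fix mentioned above, deleting the folder can be scripted. A hedged sketch (run in the affected user’s session; the tile data model service may hold a lock on the folder, and the service name is an assumption for build 1607 - verify it first):

    # stop the tile data model service, then remove the broken tile database;
    # it is recreated at the next logon
    Stop-Service -Name tiledatamodelsvc -Force
    Remove-Item -Path "$env:LOCALAPPDATA\TileDataLayer" -Recurse -Force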

The “solution”

To make a long story short: you have to enable the roaming of AppData\Local, then exclude AppData\Local from roaming, and finally add only the necessary folders to the exception list of that exclusion. Sounds funny, but it seems to work.
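Spelled out, the Persona Management policy settings look roughly like this (the setting names are from memory, and the exception list may need more folders, depending on your applications):

    Roam local settings folders:                          Enabled
    Files and folders excluded from roaming:              AppData\Local
    Files and folders excluded from roaming (exceptions): AppData\Local\TileDataLayer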

[Image: Horizon View GPO AppData Roaming (Patrick Terlisten/ www.vcloudnine.de/ Creative Commons CC0)]

Feedback is welcome!

I am very interested in feedback. It would be great if you have the chance to verify this behaviour. Please leave a comment with your results.

As I already said: I don’t know if this is a workaround, a hack, a solution, or no solution at all. But for now, it seems to work. Microsoft deprecated the TileDataLayer in Windows 10 1703, so for this new Windows 10 build, we will have to find another working solution. The “solution” described above only works for 1607. But if you are using the Long Term Servicing Branch, it will work for the next 10 years. ;)

Why “Patch Tuesday” is only every four weeks – or never

This posting is ~2 years old. You should keep this in mind. IT is a fast-moving business. This information might be outdated.

Today, this tweet caught my attention.

Patch management is currently a hot topic, primarily because of the latest ransomware attacks.

After the appearance of WannaCry, one of my older blog posts got unusual attention: WSUS on Windows 2012 (R2) and KB3159706 – WSUS console fails to connect. Why? My guess: many admins started updating their Windows servers after the appearance of WannaCry. Nearly a year after Microsoft had published KB3159706, their WSUS servers ran into this issue.

The truth about patch management

I know many enterprises that patch their Windows clients and servers only every four or eight weeks, mostly during a maintenance window. Some of them do this because their change processes require the deployment and testing of updates in a test environment. But some of them are simply too lazy to install updates more frequently. So they simply approve all needed updates every four or eight weeks, push them to their servers, and reboot them.

Trond mentioned golden images and templates in his blog posts. I strongly agree with what he wrote, because this is something I see quite often: you deploy a server from a template, and the newly deployed server has to install 172 updates, because the template has never been updated since its creation. But I also know companies that don’t use templates or golden master images. They simply create a new VM, mount an ISO file, and install the server from scratch. And because it’s urgent, the server is not patched when it goes into production.

Sorry, but that’s the truth about patch management: either it is done irregularly, at too-long intervals, or not at all.

Change Management from hell

Frameworks such as ITIL also play their part in this tragedy. Applying change management processes to something like patch management prevents companies from responding quickly to threats. If your change management process prevents you from deploying critical security patches ASAP, you have a problem: a problem with your change management process.

If your change management process requires the deployment of patches in a test environment first, you should change your change management process. What is the bigger risk? Deploying a faulty patch, or being the victim of an upcoming ransomware attack?

Microsoft Windows Server Update Services (WSUS) offers a way to automatically approve patches. This is something you want! You want to automatically approve critical security patches. And you also want your servers to install these updates automatically, and to restart if necessary. If you can’t restart servers automatically when required, you need short maintenance windows every week to reboot these servers. If this is not possible at all, you have a problem with your infrastructure design. And this does not only apply to Microsoft updates. This applies to ALL systems in your environment. VMware ESXi hosts with uptimes > 100 days are not a sign of stability. They are a sign of missing patches.
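If you want to try the automatic approval, here is a sketch using the WSUS API from PowerShell (UpdateServices module on the WSUS server; the rule name and the “All Computers” scope are examples, not a recommendation for every environment):

    # connect to the local WSUS server and create an automatic approval rule
    $wsus = Get-WsusServer
    $rule = $wsus.CreateInstallApprovalRule('Auto-approve security updates')

    # scope the rule to the "Critical Updates" and "Security Updates" classifications
    $classes = New-Object Microsoft.UpdateServices.Administration.UpdateClassificationCollection
    $wsus.GetUpdateClassifications() |
        Where-Object { $_.Title -in 'Critical Updates','Security Updates' } |
        ForEach-Object { [void]$classes.Add($_) }
    $rule.SetUpdateClassifications($classes)

    # apply the rule to the "All Computers" target group and enable it
    $groups = New-Object Microsoft.UpdateServices.Administration.ComputerTargetGroupCollection
    $wsus.GetComputerTargetGroups() |
        Where-Object { $_.Name -eq 'All Computers' } |
        ForEach-Object { [void]$groups.Add($_) }
    $rule.SetComputerTargetGroups($groups)

    $rule.Enabled = $true
    $rule.Save()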

Validated environments are ransomware’s best friends

This is another topic I meet regularly: validated environments. An environment that was installed with specific applications, in a specific setup. This setup was tested according to a checklist, and its function was documented. At the end of this process, you have a validated environment, and most vendors don’t support changes to this environment without a new validation process. Sorry, but this is a pain in the rear! If you can’t update such an environment, place it behind a firewall, disconnect it from your network, and prohibit the use of removable media such as USB sticks. Do not allow this environment to become Ground Zero for a ransomware attack.

I know many environments with Windows 2000, XP, 2003, or even older stuff that is used to run production facilities, test stands, or machinery. In some cases, the software or hardware vendor no longer exists, making the application that is needed to keep the machinery running yet another security risk.

Patch quick, patch often

IT departments should install patches more often, and shortly after their release. The risk of deploying a faulty patch is lower than the risk of being hit by a security attack, especially when we are talking about critical security patches.

IT departments should focus on the value that they deliver to the business. IT services that are down due to a security attack can’t deliver any value. Security breaches in general are bad for reputation and revenue. If your customers and users complain about frequent maintenance windows due to critical security patches, you should improve your communication about why this is important.

Famous last words

I don’t install Microsoft patches instantly. Some years ago, Microsoft published a patch that caused problems. Imagine a patch caused our users to be unable to print?! That would be bad!

We don’t have time to install updates more often. We have to work off tickets.

We don’t have to automate our server deployment. We deploy only x servers a week/ month/ year.

We have a firewall from $VENDOR.

Secure your Azure deployment with Palo Alto VM-Series for Azure

This posting is ~2 years old. You should keep this in mind. IT is a fast-moving business. This information might be outdated.

When I talk to customers and colleagues about cloud offerings, most of them are still concerned about the cloud, and especially about the security of public cloud offerings. One of the most mentioned concerns is based on the belief that each and every cloud-based VM is publicly reachable over the internet. This can be the case, but it does not have to be. It depends on your design. Maybe this is only a problem in Germany. German privacy policies are the reason for the two German Azure datacenters, which are run by Deutsche Telekom, not by Microsoft.

Azure Virtual Networks

An Azure Virtual Network (VNet) is a network inside the public Azure cloud. It is isolated from the underlying infrastructure, and it is dedicated to you. This allows you to fully control IP addressing, DNS, security policies, and routing between subnets. Virtual Networks can include multiple subnets to reflect different security zones and/or multi-tier designs. If you want to connect two or more VNets in the same region, you have to use VNet peering. Microsoft offers excellent documentation about Virtual Networks. Because routing is managed by the Azure infrastructure, you will need to set user-defined routes to push traffic through a firewall or load-balancing appliance.
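As a sketch of that last point (AzureRM module; all resource names and address ranges below are made up for illustration): a user-defined route that forces traffic from a frontend subnet to a backend subnet through an appliance at 10.0.0.4.

    $rg = New-AzureRmResourceGroup -Name 'rg-demo' -Location 'West Europe'

    # route backend-bound traffic to a virtual appliance instead of the system routes
    $route = New-AzureRmRouteConfig -Name 'to-backend-via-fw' -AddressPrefix '10.0.2.0/24' `
        -NextHopType VirtualAppliance -NextHopIpAddress '10.0.0.4'
    $rt = New-AzureRmRouteTable -Name 'rt-frontend' -ResourceGroupName $rg.ResourceGroupName `
        -Location $rg.Location -Route $route

    # two subnets; the route table is attached to the frontend subnet
    $frontend = New-AzureRmVirtualNetworkSubnetConfig -Name 'frontend' -AddressPrefix '10.0.1.0/24' -RouteTable $rt
    $backend  = New-AzureRmVirtualNetworkSubnetConfig -Name 'backend' -AddressPrefix '10.0.2.0/24'

    New-AzureRmVirtualNetwork -Name 'vnet-demo' -ResourceGroupName $rg.ResourceGroupName `
        -Location $rg.Location -AddressPrefix '10.0.0.0/16' -Subnet $frontend,$backend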

Who is Palo Alto Networks?

Palo Alto Networks was founded in 2005 by Nir Zuk, who is still the company’s CTO and still leads development. Nir Zuk is a former employee of Check Point and NetScreen (which was acquired by Juniper Networks). His motivation to develop his vision of a Next Generation Firewall (NGF) was the fact that firewalls were unable to look into traffic streams. We all know this: you want your employees to be able to use Google, but you don’t want them to access Facebook. Designing policies for this can be a real PITA. You can solve this with a proxy server, but a proxy has other disadvantages.

Gartner has identified Palo Alto Networks as a leader in the enterprise firewall market since 2011.

I was able to get my hands on some Palo Alto firewalls, and I think I understand why Palo Alto Networks is recognized as a leader.

VM-Series for Microsoft Azure

Sometimes you have to separate networks. That is no big deal when your servers are located in your datacenter, even if they are virtualized. But what if the servers are located in a VNet on Azure? As already mentioned, you can create different subnets in an Azure VNet to build a multi-tier or multi-subnet environment. Because routing is managed by the underlying Azure infrastructure, you have to use Network Security Groups (NSG) to manage traffic. An NSG contains rules to allow or deny network traffic to VMs in a VNet. Unfortunately, NSGs can only act on layer 4. If you need something that can act on layer 7, you need something different. This is where the Palo Alto Networks VM-Series for Microsoft Azure comes into play.
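To illustrate the layer-4 part: an NSG rule knows ports, protocols and address ranges, nothing more. A sketch with the AzureRM module (all names and ranges are examples):

    # allow HTTPS from the internet to the frontend subnet - a pure layer-4 decision
    $rule = New-AzureRmNetworkSecurityRuleConfig -Name 'allow-https' -Access Allow -Protocol Tcp `
        -Direction Inbound -Priority 100 -SourceAddressPrefix 'Internet' -SourcePortRange '*' `
        -DestinationAddressPrefix '10.0.1.0/24' -DestinationPortRange 443

    New-AzureRmNetworkSecurityGroup -Name 'nsg-frontend' -ResourceGroupName 'rg-demo' `
        -Location 'West Europe' -SecurityRules $rule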

The VM-Series for Microsoft Azure can be deployed directly from the Azure Marketplace. Palo Alto Networks also offers ARM templates on GitHub.

Palo Alto Networks aims at three main use-cases:

  • Hybrid Cloud
  • Segmentation Gateway Compliance
  • Internet Gateway

The hybrid cloud use-case is interesting if you want to extend your datacenter to Azure, for example if you move development workloads to Azure. Instead of using Azure’s native VPN connection capabilities, you can use the Palo Alto Networks VM-Series NGF as an IPsec gateway.

If you are running different workloads on Azure, and you need inter-subnet communication between them, you can use the VM-Series as a firewall between the subnets. This allows you to manage traffic more efficiently, and it provides more security compared to the Azure NSGs.

If you are running production workloads on Azure, e.g. an RDS farm, you can use the VM-Series to secure the internet access of that RDS farm. Due to the integration with directory services, like Microsoft Active Directory or plain LDAP, user-based policies allow the management of traffic based on the user identity.

There is a fourth use-case: Palo Alto Networks GlobalProtect. With GlobalProtect, the capabilities of the NGF are extended to remote users and devices. Traffic is tunneled to the NGF, and users and devices are protected from threats. User- and application-based policies can be enforced regardless of where the user and the device are located: on-premises, in a remote location, or in the cloud.

Palo Alto Networks offers two ways to purchase the VM-Series for Microsoft Azure:

  • Consumption-based licensing
  • Bring your own license (BYOL)

The consumption-based licensing is only available for the VM-300. The smaller VM-100, as well as the bigger VM-500 and VM-700, are only available via BYOL. It’s a good idea to offer a mid-sized model with a consumption-based license. If the VM-300 is too big (with consumption-based licensing), you can purchase a permanent license for a VM-100. If you need more performance, purchasing your own license might be the better way. You can start with a VM-300 and then rightsize the model and license.

All models can handle a throughput of 1 Gb/s, but they differ in the number of sessions. The VM-100 and VM-300 use D3_v2 instances, as do the VM-500 and VM-700.

Just play with it

Just create a few Azure VM instances and deploy a VM-300 from the Marketplace. Play with it. It’s awesome!

Single Sign On (SSO) with RemoteApps on Windows Server 2012 (R2)

This posting is ~3 years old. You should keep this in mind. IT is a fast-moving business. This information might be outdated.

A RemoteApp is an application that runs on a Remote Desktop Session Host (RDSH); only the display output is sent to the client. Because the application runs on an RDSH, you can easily deliver applications to end users. Another benefit is that data does not leave the datacenter: software and data are kept inside it. RemoteApps can be used and deployed in various ways:

  • Users can start RemoteApps through the Remote Desktop Web Access
  • Users can start RemoteApps using a special RDP file
  • Users can simply start a link on the desktop or from the start menu (RemoteApps and Desktop connections deployed by an MSI or a GPO)
  • or they can click on a file that is associated with a RemoteApp

Even in times of VDI (LOL…), RemoteApps can be quite handy. You can deploy virtual desktops without any installed applications, and applications can then be delivered using RemoteApps. This can be handy if you migrate from RDSH/ Citrix published desktops to VMware Horizon View, or if you are already using RDSH and want to try VMware Horizon View.

But three things can really spoil the usage of RemoteApps:

  • certificate warnings
  • warnings about an untrusted publisher
  • asking for credentials (no Single Sign On)

Avoid certificate warnings

As part of the RDS deployment, the assistant kindly asks for certificates. Sure, you can deploy self-signed certificates, but that’s not a good idea. You should deploy certificates from your internal certificate authority.

[Image: RDS Certificate Settings (Patrick Terlisten/ www.vcloudnine.de/ Creative Commons CC0)]

This is a screenshot from my tiny single-server RDS farm. Make sure that you use the correct names for the certificates! If you are using an RDS farm, make sure that you include the DNS name of the RD Connection Broker HA cluster.

If you want to make the RD Web Access publicly available, make sure that you include the public DNS name in the certificate.

Untrusted Publisher

When you try to open a RemoteApp, you might get this message:

[Image: RDS Connection Warning (Patrick Terlisten/ www.vcloudnine.de/ Creative Commons CC0)]

Annoying, isn’t it? But it is easy to fix. Remember the certificates you deployed during the RDS deployment? You need the certificate thumbprint of the publisher certificate (check the screenshot of the deployment properties > “RD Connection Broker – Publishing”). This is a screenshot from my lab:

[Image: RDS Certificate Thumbprint (Patrick Terlisten/ www.vcloudnine.de/ Creative Commons CC0)]

Take this thumbprint, open a PowerShell window, and convert the thumbprint into a format that can be used with the GPO we have to build.
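Something like this does the trick (the thumbprint value is a placeholder; paste your own):

    $thumbprint = '4a 5f 3c 21 b8 ...'
    ($thumbprint -replace '\s','').ToUpper()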

The result is a string without spaces and with only uppercase letters. Now we need to create a GPO. This GPO has to be linked to the OU in which the computers or users that should use the RemoteApp reside. In my example, I use the user part of a GPO, so this GPO has to be linked to the OU in which the users reside. The necessary GPO setting can be found here:

User Configuration > Policies > Administrative Templates > Windows Components > Remote Desktop Services > Remote Desktop Connection Client > Specify SHA1 thumbprints of certificates representing trusted .rdp publishers

[Image: RemoteApp GPO Settings SHA1 Thumbprint (Patrick Terlisten/ www.vcloudnine.de/ Creative Commons CC0)]

I use the same GPO to publish the default connection URL. With this setting configured, the users automatically get the published RemoteApps in their start menu. You can find the setting here:

User Configuration > Policies > Administrative Templates > Windows Components > Remote Desktop Services > RemoteApp and Desktop Connections > Specify default connection URL

[Image: RemoteApp Connection URL (Patrick Terlisten/ www.vcloudnine.de/ Creative Commons CC0)]

Credentials dialog

At this point, you will still get a credentials dialog. To allow the client to pass the current user’s login information to the RDS host, we need to configure an additional setting. Create a new GPO and link it to the OU in which the computers reside on which the RemoteApps should be used. The setting can be found here:

Computer Configuration > Policies > Administrative Templates > System > Credentials Delegation > Allow delegating default credentials

[Image: RemoteApp Credential Delegation (Patrick Terlisten/ www.vcloudnine.de/ Creative Commons CC0)]

You have to add the FQDN of your RD Connection Broker server or farm, e.g. TERMSRV/rds01.lab.local (a made-up name). Please make sure that you add the “TERMSRV” prefix! Because I use a single-server deployment, my RD Connection Broker is also my RDS host.

The final test

Make sure that all group policies have been applied. Open the Remote Desktop Connection client and enter the RDS farm name. If everything is configured properly, you should be connected without being asked for credentials.

[Image: RDS Connection (Patrick Terlisten/ www.vcloudnine.de/ Creative Commons CC0)]

The same should happen if you try to start a RemoteApp.

[Image: RDS Connection Notepad++ (Patrick Terlisten/ www.vcloudnine.de/ Creative Commons CC0)]

Troubleshooting

If you are still being asked for credentials, something is wrong with the credentials delegation. Check the GPO, and check whether it is linked to the correct OU. If you are getting certificate warnings, check the names that you have included in the certificates. Warnings about untrusted publishers may be caused by a wrong SHA1 thumbprint (or a wrong format).

Tiny PowerShell/ Azure project: Deploy-AzureLab.ps1

This posting is ~3 years old. You should keep this in mind. IT is a fast-moving business. This information might be outdated.

One of my personal predictions for 2017 is that Microsoft Azure will gain more market share, especially here in Germany. Because of this, I have started to refresh my knowledge about Azure. A nice side effect is that I can also improve my PowerShell skills.

Currently, the script creates a couple of VMs and resource groups. Nothing more, nothing less. The next features I want to add are:

  • add additional disks to the DCs (for SYSVOL and NTDS)
  • promote both servers to domain controllers
  • change the DNS settings for the Azure vNetwork
  • deploy a Windows 10 client VM

I created a new repository on GitHub and shared a first v0.1 as a public Gist. Please note that this REALLY is a v0.1.
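This is not the actual Deploy-AzureLab.ps1, just a hedged sketch of the core pattern it uses (AzureRM module; every name is an example, and a recent AzureRM version creates a managed OS disk automatically):

    $cred = Get-Credential
    $rg   = New-AzureRmResourceGroup -Name 'azurelab' -Location 'West Europe'

    # a VNet with one subnet and a NIC for the first VM
    $subnet = New-AzureRmVirtualNetworkSubnetConfig -Name 'servers' -AddressPrefix '10.0.1.0/24'
    $vnet   = New-AzureRmVirtualNetwork -Name 'azurelab-vnet' -ResourceGroupName $rg.ResourceGroupName `
              -Location $rg.Location -AddressPrefix '10.0.0.0/16' -Subnet $subnet
    $nic    = New-AzureRmNetworkInterface -Name 'dc01-nic' -ResourceGroupName $rg.ResourceGroupName `
              -Location $rg.Location -SubnetId $vnet.Subnets[0].Id

    # build the VM configuration and deploy it
    $vm = New-AzureRmVMConfig -VMName 'dc01' -VMSize 'Standard_A1_v2' |
          Set-AzureRmVMOperatingSystem -Windows -ComputerName 'dc01' -Credential $cred |
          Set-AzureRmVMSourceImage -PublisherName 'MicrosoftWindowsServer' -Offer 'WindowsServer' `
              -Skus '2016-Datacenter' -Version 'latest' |
          Add-AzureRmVMNetworkInterface -Id $nic.Id

    New-AzureRmVM -ResourceGroupName $rg.ResourceGroupName -Location $rg.Location -VM $vm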

Surprise, surprise: Enable/ disable circular logging without downtime

This posting is ~3 years old. You should keep this in mind. IT is a fast-moving business. This information might be outdated.

As part of a troubleshooting process, I had to disable circular logging on a Microsoft Exchange 2013 mailbox database that was part of a Database Availability Group (DAG).

What is circular logging? Microsoft Exchange uses log files that record all transactions and modifications to a mailbox database. In this regard, Exchange works like MS SQL or Oracle. These log files are used to bring the database back to a consistent state after a crash, or to recover a database to a specific point in time. Because everything is written to log files, the number of log files increases over time. Log files are truncated as soon as a successful full backup of the database is taken.

If you don’t need the capability to recover to a specific point in time, or if you are running low on disk space, circular logging can be an alternative. This is especially true if you are running low on disk space because your backup isn’t working as expected. With circular logging enabled, Microsoft Exchange maintains only the minimum number of log files necessary to keep the Exchange server running. As soon as a transaction has been written to the database, the log file can be overwritten.

I rarely use circular logging, but this time I had to. As already mentioned, I had a mailbox database with circular logging enabled. This database was part of a DAG, and I had to disable circular logging. Usually, you need to dismount and re-mount the database after enabling or disabling circular logging.

You can enable or disable circular logging using the Microsoft Exchange Control Panel (ECP), or with the Microsoft Exchange Management Shell. I used PowerShell.
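A minimal example from the Exchange Management Shell (“DB01” is a placeholder database name):

    # disable circular logging on the mailbox database
    Set-MailboxDatabase -Identity 'DB01' -CircularLoggingEnabled $false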

To my surprise, dismounting and re-mounting the database was not necessary. Circular logging was disabled without downtime. I checked TechNet, and it confirmed the observed behaviour.

JET vs. CRCL

A non-replicated mailbox database uses JET circular logging. If the database is part of a DAG, it uses continuous replication circular logging (CRCL). A benefit of CRCL is that it can be enabled and disabled without the need to dismount and re-mount the mailbox database.

To be clear: this only works with replicated mailbox databases, because only databases that belong to a DAG use CRCL. If you are using standalone mailbox databases, you have to dismount and re-mount the database after enabling or disabling circular logging.

As I mentioned earlier: I really don’t use circular logging often, but that was very handy today!

Important foot note: Windows 10 Enterprise LTSB 2016 requires a new KMS host key

This posting is ~3 years old. You should keep this in mind. IT is a fast-moving business. This information might be outdated.

Today, I stumbled upon a fact that is worth documenting.

TL;DR: Use the “Windows Srv 2016 DataCtr/Std KMS” host key (CSVLK) if you want to activate Windows 10 Enterprise LTSB 2016 using KMS, or use AD-based activation. For more information, read the blog post of the Ask the Core Team: Windows Server 2016 Volume Activation Tips.

A customer wants to deploy Windows 10 Enterprise LTSB 2016. A Windows Server 2012 R2 is acting as KMS host, and successfully activates Windows Server 2012 R2 and Microsoft Office 2013 Professional Plus. The “Windows Srv 2012R2 DataCtr/Std KMS for Windows 10” CSVLK was successfully installed. Nevertheless, the “current count” value did not increase. The client logged event 12288.

On the server side, I found event 12290.

Error 0xC004F042 means:

The Software Protection Service determined that the specified Key Management Service (KMS) cannot be used.

It took some time to find the root cause, because I knew that Windows 10 Enterprise LTSB 2015 can be successfully activated with this key. In the end, it was a question of the right search terms… Finally, I found the Ask the Core Team blog post. With the right key, the “current count” value increased, and the deployed Windows 10 Enterprise VMs were finally activated successfully.
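Installing the new CSVLK on the KMS host boils down to two slmgr.vbs calls from an elevated prompt (the key below is a placeholder):

    cscript.exe C:\Windows\System32\slmgr.vbs /ipk XXXXX-XXXXX-XXXXX-XXXXX-XXXXX
    cscript.exe C:\Windows\System32\slmgr.vbs /ato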

Get-MailboxDatabase doesn’t show last backup timestamp

This posting is ~3 years old. You should keep this in mind. IT is a fast-moving business. This information might be outdated.

Sometimes you have to check when the last backup of an Exchange mailbox database was taken. This is pretty simple, because the timestamps of the last full, incremental, and differential backups are stored for each mailbox database. You can check these attributes using the Exchange Control Panel (ECP), or you can use the Get-MailboxDatabase cmdlet.

Backup successful, but no timestamp?

Take a look at this output. As you can see, there is no timestamp for the last full, incremental, or differential backup. But this database was successfully backed up a few minutes earlier.
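The timestamps come from attributes like LastFullBackup, and the output came from a plain call like this (“DB01” is a placeholder database name):

    Get-MailboxDatabase -Identity 'DB01' |
        Format-List Name,LastFullBackup,LastIncrementalBackup,LastDifferentialBackup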

Missing -status switch

The solution is easy: the -Status switch was missing.
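So the command simply becomes:

    # the -Status switch populates the backup attributes (and BackupInProgress)
    Get-MailboxDatabase -Identity 'DB01' -Status |
        Format-List Name,Last*Backup,BackupInProgress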

After adding the -Status switch to the Get-MailboxDatabase command, the timestamps show up in the output. If you run the command while a backup session is running, this is also shown in the output (BackupInProgress).

Disable Outlook cached mode for shared mailboxes

This posting is ~3 years old. You should keep this in mind. IT is a fast-moving business. This information might be outdated.

When you use Microsoft Outlook in cached mode, which I always recommend, and you add additional mailboxes to your Outlook profile, you will notice that the OST file grows. Outlook downloads the mailbox items (mails, calendar entries, contacts etc.) and stores them in the OST file. This has been the default behaviour since Microsoft Outlook 2010. If you want to disable this behaviour, you have two options:

  • Edit the registry
  • Use a group policy object (GPO)

Edit the Windows registry

The easiest way is to use a reg file: copy the text below into a file, save it as disablecachedmode.reg, then double-click the file and confirm that you want to import it.
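For Outlook 2016, the content of the reg file looks like this (CacheOthersMail is the value name I know from the Microsoft documentation):

    Windows Registry Editor Version 5.00

    ; disable caching of shared (non-default) mail folders in Outlook 2016
    [HKEY_CURRENT_USER\Software\Microsoft\Office\16.0\Outlook\Cached Mode]
    "CacheOthersMail"=dword:00000000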

Please note the version number after “Office”.

  • Outlook 2016: version 16.0
  • Outlook 2013: version 15.0
  • Outlook 2010: version 14.0

Make sure that you use the appropriate version number for your Outlook! Otherwise the setting is applied, but it has no effect.

Group Policy

If you want to apply this setting on a bunch of clients, you should use a GPO. Before you can use a GPO, you have to install the necessary template files for Microsoft Office/ Outlook. These ADMX files are part of the “Office 2013 Administrative Template files (ADMX/ADML)” or “Office 2016 Administrative Template files (ADMX/ADML)” package.

Copy the Outlk16.admx or Outlk15.admx file to the PolicyDefinitions folder (either under C:\Windows or in the Central Store), and the Outlk16.adml or Outlk15.adml file to the corresponding language folder.
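For example, for Outlook 2016 and the local PolicyDefinitions folder (paths and the en-US language folder are illustrative):

    Copy-Item .\outlk16.admx C:\Windows\PolicyDefinitions\
    Copy-Item .\outlk16.adml C:\Windows\PolicyDefinitions\en-US\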

Then you can create a new GPO. The desired setting can be found under User Configuration >> Administrative Templates >> Microsoft Outlook 201x >> Outlook Options >> Delegates >> Disable shared mail folder caching. Set this to “Enabled” and apply the GPO.

Still using Microsoft Outlook 2010?

Please note that the GPO template for Microsoft Outlook 2010 doesn’t contain the necessary setting that controls this functionality!