The underestimated compliance policy in Microsoft Intune

Way back when I started to work with Microsoft Intune, I remember that I had a very hard time understanding the use case for the compliance policy.

They don’t do anything, and I can’t even get them for all of my configuration settings?

Part of that is still true today, but not in a bad way. This post will explain what you need to know about compliance policies, how to design them, and why they are a vital part of most Microsoft courses I deliver.

Compliance policy 101

Compliance policies are used to verify that a device has the security settings required by your organization in place. This does not mean that you need to use Intune to configure a specific setting.

Compliance policies are as applicable to a BYO device as to a company-owned one. However, the responsibility for configuring the required settings could lie with the user, the IT department, or both. Regardless, we want to ensure that the settings have been applied before we allow access to company resources.

Evaluation of compliance policies

Compliance policies are evaluated, and the result sent to Intune, on the same schedule as configuration policies – roughly every eight hours. We should therefore not rely on compliance policies to, for example, prevent malicious behavior or attacks in real time. If you force a device sync, that will also trigger a compliance sync.

That allows us to evaluate compliance more often, manually or in an automated way. Other integrations that do not rely on the compliance state being pushed from the device, such as JAMF and Microsoft Defender ATP, will give you an even faster and more frequent evaluation.
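
If you want to trigger that evaluation on demand at scale, something like the sketch below – using the Microsoft Graph PowerShell SDK and the documented syncDevice action – is one way to do it. The device name and the exact permission scope are assumptions for the example, so adjust them to your tenant.

# Sketch: force a device sync (and thereby a compliance evaluation) via Microsoft Graph
# Assumes the Microsoft.Graph module; the scope name may differ in your setup
Connect-MgGraph -Scopes "DeviceManagementManagedDevices.PrivilegedOperations.All"

# Hypothetical device name - replace with a real one
$device = Invoke-MgGraphRequest -Method GET `
    -Uri "https://graph.microsoft.com/v1.0/deviceManagement/managedDevices?`$filter=deviceName eq 'DESKTOP-001'"

# Call the syncDevice action on the first match
Invoke-MgGraphRequest -Method POST `
    -Uri "https://graph.microsoft.com/v1.0/deviceManagement/managedDevices/$($device.value[0].id)/syncDevice"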

Compliance policies are there to ensure that those risks are addressed from the start. There will be a lag between something changing on a device that makes it non-compliant and that information reaching the Intune service – make sure you are aware of that.

Compliance policies can be integrated with Configuration Manager to let you track compliance for just about anything. The settings available to you in the Intune portal are limited by comparison, but in most cases they should be enough from a security point of view.

A part of something bigger

Compliance policies have, on their own, a limited use case, even if we can use them to lock or retire devices that are non-compliant. We will get back to that later in this post. The real benefit is of course to integrate compliance as a part of your conditional access strategy.

CA policy that requires Compliant device

Integrating with Conditional Access has two benefits. The first, and most obvious, is that we can ensure that devices connecting to cloud and/or internal resources follow our security requirements.

The other aspect is that you can guide users into the management scenario you prefer: managed devices or managed apps. If you configure your Conditional Access policy for a specific app to require a compliant device, that will require the device to be enrolled. If not, the user of the device can still connect – as long as they fulfill the other controls in the CA policy.
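
To make that concrete, here is a minimal sketch of such a CA policy created through Microsoft Graph. The application ID is a placeholder and the policy is created in report-only mode, so treat it as an illustration rather than a ready-to-run production script.

# Sketch: CA policy that requires a compliant device, created in report-only mode
# Assumes the Microsoft.Graph module and Policy.ReadWrite.ConditionalAccess permission
Connect-MgGraph -Scopes "Policy.ReadWrite.ConditionalAccess"

$policy = @{
    displayName = "Require compliant device - example app"
    state       = "enabledForReportingButNotEnforced"
    conditions  = @{
        users        = @{ includeUsers = @("All") }
        applications = @{ includeApplications = @("00000000-0000-0000-0000-000000000000") }   # placeholder app id
    }
    grantControls = @{
        operator        = "OR"
        builtInControls = @("compliantDevice")
    }
}

Invoke-MgGraphRequest -Method POST `
    -Uri "https://graph.microsoft.com/v1.0/identity/conditionalAccess/policies" `
    -Body ($policy | ConvertTo-Json -Depth 5)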

Designing compliance policies – more than just the settings

When I visit new customers, I usually find the compliance policies in the same state. They have one compliance policy per platform, with all or most settings configured according to their requirements. This configuration does work, but it has several downsides.

Some also have the default tenant wide settings configured. In rare cases I also find that many of the compliance policies are not assigned or not widely deployed.

Compliance policy settings – default or not?

There are settings that are available and configured in Intune to allow for a smooth onboarding and migration to a zero-trust security model. They aren’t really, in my opinion, supposed to stay in that state. One of them, which I may cover in a later blog post, is Hybrid Azure AD join as a requirement in Conditional Access.

The other one is the setting for how to treat devices (or really users) that aren't targeted by a compliance policy. The default setting is to mark these as compliant, to ensure access to corporate resources. You can find this, and the other settings mentioned in this section, in the Endpoint Manager portal under Devices, Compliance policies, Compliance policy settings.

Once you have implemented your Conditional Access policies and users are able to register their devices, I recommend that you change this setting. Treating any device a user accesses a service from as compliant, simply because that user isn't targeted by a policy, is a bad idea.

Going on vacation or what?

Second is the time before a device that has not communicated with Intune is marked as non-compliant, the Compliance status validity period.

Compliance Settings

In this case you should look at your organization and try to figure out how many days a user is likely to be offline under normal circumstances. That number of days should be the limit for when a device is marked as non-compliant. The default is 30 days, which depending on your organization could be a long time – especially today, when it's very uncommon to go without a connection for an extended period.

We configure this setting to ensure that an attacker cannot block the device's access to the Intune service – while keeping access to other online services – and meanwhile reconfigure the device into a non-compliant state.
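
If you prefer to manage these tenant-wide options through Microsoft Graph rather than the portal, a sketch could look like the one below. The property names (settings, secureByDefault, deviceComplianceCheckinThresholdDays) reflect my reading of the Graph beta schema, so verify them against the current documentation before using this.

# Sketch: tighten the tenant-wide compliance policy settings via Graph (beta endpoint)
# Property names are assumptions from the deviceManagementSettings schema - verify before running
Connect-MgGraph -Scopes "DeviceManagementConfiguration.ReadWrite.All"

$settings = @{
    settings = @{
        secureByDefault                      = $true   # mark devices with no compliance policy assigned as not compliant
        deviceComplianceCheckinThresholdDays = 15      # compliance status validity period, in days
    }
}

Invoke-MgGraphRequest -Method PATCH `
    -Uri "https://graph.microsoft.com/beta/deviceManagement" `
    -Body ($settings | ConvertTo-Json -Depth 3)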

Also, the last setting here is Enhanced jailbreak detection. It requires location services to be turned on to work properly, but I advise you to try it out. If you find that it has a significant impact on battery life, or that your users turn location services off, you can consider turning it off. If not, keep it on.

Do not make the group policy mistake – break down your policies

Do not put a variety of different settings into the same compliance policy. Divide them into logical policies based on the platform.

As an example: Have one policy for password settings and one policy for disk encryption.

This also allows you to apply more granular policies over time, if needed. For an organization that runs Windows, macOS, Android and iOS you'll probably end up with around 20-25 policies in total, instead of 4.

This will give you a much better overview of which devices aren't compliant with a specific setting. It will also allow you to make notifications and compliance timings much more granular, relevant and user friendly.
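
To illustrate what one of those narrow policies could look like if you create it through Graph, here is a sketch of a Windows policy that only checks BitLocker. A scheduledActionsForRule entry is required when creating policies this way; the property and rule names follow the Graph schema as I recall it, so treat this as a starting point rather than a finished script.

# Sketch: a single-purpose compliance policy (BitLocker only) for Windows
$policy = @{
    "@odata.type"    = "#microsoft.graph.windows10CompliancePolicy"
    displayName      = "Windows - Require BitLocker"
    bitLockerEnabled = $true
    scheduledActionsForRule = @(
        @{
            ruleName = "PasswordRequired"   # placeholder rule name commonly seen in exports
            scheduledActionConfigurations = @(
                @{ actionType = "block"; gracePeriodHours = 0 }   # mark as non-compliant right away
            )
        }
    )
}

Invoke-MgGraphRequest -Method POST `
    -Uri "https://graph.microsoft.com/v1.0/deviceManagement/deviceCompliancePolicies" `
    -Body ($policy | ConvertTo-Json -Depth 10)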

A list of Compliance policies

Notify, notify, notify and take action

The main benefit of having multiple policies is that it's much easier to inform the user of a non-compliant device about the reason for non-compliance. This allows the user to either seek help or remediate the non-compliant state on their own. If you are using a single policy, you can't target a specific notification to a specific reason for non-compliance, making it harder to self-remediate.

In general, I see far less use of notifications and actions for non-compliance than would be useful and valuable, both for users and perhaps especially for administrators. The most common configuration I see is simply leaving everything at the default. The default is to change the compliance state to non-compliant instantly when a configuration is changed to a non-compliant value.

In some cases, this is a suitable configuration, such as for jailbreak detection or password requirements. These are settings that are either potentially dangerous or easily and swiftly remediated. For other settings, it can be more suitable to change the actions to more customized options.

Actions not action!

That brings us to another aspect of the actions. You are not limited to a single action per policy; you can have multiple actions and notifications for each one. Below you can find an example of a policy that evaluates the iOS version of a device.

Notification and actions for non-compliance

As you can see, I send customized notifications at a number of intervals, and I combine e-mail notifications with the newly released push notification feature. Lastly, I take additional actions as well as change the compliance state of the device.
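
Expressed as the scheduledActionsForRule part of a compliance policy, a staged set of actions along those lines could look roughly like the sketch below. The grace periods, the template IDs and the exact action type names are illustrative assumptions – verify them against your own tenant and the Graph documentation.

# Sketch: several actions for non-compliance, staged over time (hours since non-compliance)
$scheduledActions = @(
    @{
        ruleName = "PasswordRequired"
        scheduledActionConfigurations = @(
            @{ actionType = "pushNotification"; gracePeriodHours = 0 }      # nudge the user right away
            @{ actionType = "notification"; gracePeriodHours = 24           # e-mail after one day
               notificationTemplateId    = "11111111-1111-1111-1111-111111111111"      # placeholder template id
               notificationMessageCCList = @("22222222-2222-2222-2222-222222222222") } # e.g. a service desk group id
            @{ actionType = "block"; gracePeriodHours = 72 }                # mark non-compliant after three days
            @{ actionType = "retire"; gracePeriodHours = 120 }              # last resort, mostly for BYOD
        )
    }
)

The block action at 72 hours is what actually flips the device to non-compliant in this sketch; everything before that point is communication with the user.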

The available actions for iOS are the ones below, but they vary based on the platform:

iOS actions

This kind of process for non-compliance is what I would advise for most of your compliance policies. Also note that the e-mail notifications can be sent to additional addresses beyond the user targeted by the compliance policy – a very useful feature for administrators or security managers who want to be notified if a user tries to bypass the security configuration.

We also have actions that will retire a device (again, especially useful for BYOD scenarios) and lock devices (which may be mostly applicable to shared devices or kiosks). For personal devices, we would in general apply CA policies and thereby limit the usage of the device.

It is all about timing – give the user a chance

For notifications and actions to work efficiently we need to configure the timings for each action accordingly. Just as in the example above, it can be reasonable to wait before putting the device into a non-compliant state. This is especially applicable to version checks, where it will most likely take the user a while to remediate.

You can also use timings to send additional notifications at intervals, reminding the user to remediate if they have not done so after the first notification was received.

Finding the correct timing

The timings always count from the time when the compliance state reached the Intune service, so there may be a time difference between the configuration change, the reported compliance state and therefore also the notifications. The difference could be up to 8 hours, which is also why we can't be more exact than a day. Keep this in mind when configuring your notifications as well.

The timings should be based on the ability to self-remediate as well as your organization's policies, of course. It's therefore hard to give any general recommendations. Do keep in mind that, again depending on the configuration, some compliance states could be harder to remediate during a weekend, for example – as in the example above, where I allow three days after non-compliance before the device is locked/retired.

Compliance policy integrations

Some aspects of compliance policies depend on integrations with other tools and solutions. Based on your needs, your platforms and your environment, it could be advisable to integrate with these. Most important, however, is to be aware that without the corresponding integration these settings are not evaluated correctly, which may lead to unexpected results.

ME Configuration Manager

If you are using Microsoft Endpoint Configuration Manager, you should be co-managing the devices that aren't exclusively managed by Microsoft Intune. One of the oldest, and most valuable, aspects of co-management is compliance evaluation.

With an option added in Configuration Manager 1910, we can combine the advanced options available in Configuration Manager's configuration items with the compliance evaluation in Intune. In practice, we can evaluate close to anything on a co-managed device and send that compliance state to Intune. This could include settings in LOB apps, more granular options for BitLocker encryption, or a third-party anti-malware product.

Microsoft Defender Advanced Threat Protection

For Windows and Android devices, we can base the compliance state on the device risk score delivered from Microsoft Defender ATP. Apart from providing a very detailed and fast security evaluation of a device, MD ATP also communicates directly with the Intune service, which allows for more frequent updates to the compliance state. Note that MDATP does not necessarily evaluate the activated features or applied configurations of the device, so make sure that is covered by other compliance policies.
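
In a compliance policy, that device-risk requirement boils down to a couple of settings. As a hedged fragment (the property names are my reading of the windows10CompliancePolicy schema, so verify before use), it could look like this:

# Sketch: the Defender ATP device-risk requirement on a Windows compliance policy
$riskSettings = @{
    deviceThreatProtectionEnabled               = $true
    deviceThreatProtectionRequiredSecurityLevel = "medium"   # device must be at or under this risk score
}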

3rd party threat protection

For iOS and Android, we can get a similar experience and state reporting from 3rd party threat protection tools. The protection here is mostly centered around malicious apps, or apps that require higher privileges than they need to function. Just as with MDATP, the 3rd party app, based on your configuration, sets a device risk score that you can evaluate as part of your compliance policy.

JAMF Pro

If you are using JAMF Pro to configure the macOS devices in your environment, the tool is easy to integrate with Microsoft Intune. JAMF will continuously send inventory data from the devices to Intune, which Intune uses to evaluate the compliance state of the device. That allows you to manage Macs with JAMF, while getting the benefits of compliance and Conditional Access with Intune.

Networking

Even though we will not cover this integration in detail in this post, it is possible to use the compliance state of a device to allow or prevent access to internal networks and/or VPNs. The most common hardware vendors that support this from a networking perspective are Cisco and Checkpoint. The integration enables a scenario where a firewall or wireless network controller asks the Intune service for a compliance state; based on that state, network access can be allowed, prevented or limited.

What you can learn from working with compliance policies

Other than being very useful, what can you learn from working with compliance policies?

First, you can learn a lot about integrations with other 1st and 3rd party products. That's also why I usually save compliance policies, together with Conditional Access, for the end of every MS Intune course I deliver. It summarizes everything that the people who take my class have learned in a very hands-on way.

Second, you learn about user interaction. Since compliance policies drive Conditional Access, they will have an impact on the user experience. Imagine the situation yourself:

You are watching TV, when suddenly the program you are watching is interrupted. On the screen a message is visible:

You have broken one of our rules, we have therefore paused your subscription until you make up for what you have done.

Rather frustrating, right? Not knowing why, combined with the negative impact on the experience, will for sure make your users angry.

Therefore, make it easy for users to remain compliant. Give them the proper information to get back to a compliant state. That will help your organization to stay compliant.

What Compliance policies are built for

Lastly, you also learn when to use the proper tool. If you are looking for something that instantly will block access if a malicious app is installed, compliance policies alone are not the tool.

If you are looking for something to report back on policy applicability at a granular level, compliance policies are not the tool.

If you want to configure settings that will apply to and reconfigure devices, compliance policies aren't what you are looking for.

On the other hand: if you want to ensure that your users keep their devices up to date in a prompt fashion – it's the tool to use.

If you want to allow your users to enroll their own devices, but don’t want to enforce policies in an intrusive way, it’s a great tool.

When you want to ensure that your users can't access data on unprotected devices, it's a very good combination together with Conditional Access.

Summary (or TL;DR)

Compliance policies are one of the vital tools in your device management toolbox. Make sure you understand how to use them and how to configure them in an efficient way. There are also a number of aspects that I haven't been able to cover in this post, so it is likely that another one will follow with more details on individual settings.

  • Split the settings into logical policies, to make it easy to communicate the reason for non-compliance.
  • Configure the tenant-wide settings to suit your needs, and ensure that all users have a policy applied to them.
  • Use actions and notifications to minimize the amount of interaction needed with your service desk. When a user needs to remediate, they should know why and how.

Furthermore, evaluate what you want to use compliance policies for, and whether MS Intune is enough to fulfill those needs. You may need to integrate additional services to further protect your data and applications.

In the end, that's what compliance policies are all about: helping us protect what's valuable. It's one of the gatekeepers, and implemented correctly it will be essential in your Stay Current strategy.

Links to official documentation as well as useful community blogs:

https://docs.microsoft.com/en-us/mem/intune/protect/device-compliance-get-started

https://docs.microsoft.com/en-us/mem/intune/configuration/device-profile-troubleshoot#how-long-does-it-take-for-devices-to-get-a-policy-profile-or-app-after-they-are-assigned

https://www.imab.dk/device-compliance-with-configuration-baselines-configuration-manager-version-1910-and-microsoft-intune/

New adventures – with TrueSec

Today is the first day of my new adventure. As of today, I've joined TrueSec Infrastructure and will be taking on the role of Principal Technical Architect. I will be focusing on Microsoft 365 and the technologies related to it.

It's really a dream come true. I still remember the first user group meeting I attended many years ago. Johan Arwidmark was one of the speakers, and he and many of my new colleagues at TrueSec have in different ways been idols and people you have always been able to turn to.

That's of course one of my goals as well: to become a trusted advisor for both customers and the community. I see TrueSec as a great company that will enable me to achieve that. They will also be able to provide me with opportunities to work with some of the world's most interesting and, in a good way, challenging companies and organizations.

From today, I'll be working globally and I'll of course still be doing as many community activities as possible. In the coming months I'm speaking at Techdays Sweden, Microsoft Ignite and Experts Live Europe. Looking further ahead, I'm one of the featured speakers at Igel Disrupt EMEA in February.

Alexander, Toni and I will continue the adventure we have set out on with Knee-Deep in Tech – that won't change. We will still aim to publish weekly podcasts and blog posts and be active on social media.

On top of that, I know that TrueSec has a few things planned for me – so stay tuned and reach out if there's anything I can help you out with. You can reach me on Twitter, LinkedIn and of course via e-mail.

Multiple keys in Power BI

Let's say we need to keep track of certifications in a fictional company. Management has requested a Power BI table that should list the region, the certification, the number of certifications, the goal, and a concatenation of the number of certifications and the goal. The end result is requested to look something like this:

This should be easy – just visualize the columns in a table. Unfortunately the last column called “current” is, in fact, not a part of the table. And it gets worse: the columns in the table are from different tables, and we need more than one key. Let’s tackle this in two blog posts.

Multi-column keys

We have two Excel sheets as base data – one tracks the personnel available and one tracks the goals per region. Apparently Dana Scully believes in Azure.

The keys we need for connecting the two tables are "region" and "certification", respectively. A key on just one of these columns won't ensure uniqueness, and here is hurdle number one: how do we create a relationship in Power BI that is based on more than one key? The simple answer is that we can't. But what we CAN do is create a concatenated column with the data we need to form a unique key, and then base our relationships on that. For starters, let's add a custom column in the personnel table like this:
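
Since the original screenshot isn't reproduced here, a rough Power Query (M) sketch of that step could look like the following. The query and column names (Personnel, Region, Certification) are assumptions based on the description above, and the sample rows are made up for illustration.

let
    // made-up sample data standing in for the personnel sheet
    Personnel = #table(
        {"Region", "Certification", "Person"},
        {{"North", "Azure", "Dana Scully"}, {"South", "M365", "Fox Mulder"}}
    ),
    // the concatenated key column we will base the relationship on
    AddedKey = Table.AddColumn(Personnel, "Key", each [Region] & "|" & [Certification], type text)
in
    AddedKey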

Then we do almost the same thing in the goals table, but as we only need the actual goal numbers and the key, we don’t need a new column like in the personnel table. We just merge the keys together into one column like this:

We now have the prerequisite keys in place to either merge the two tables into one base table or create relationships on the fly. I chose the first alternative for this blog post, but either works fine. To create the new base table that we'll do the visualizations on in a bit, we do a simple "Merge to New Table" like this:
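
Again as a hedged sketch in place of the screenshots, the merged query could boil down to something like this in M, assuming two existing queries called Personnel (with the Key column from above) and Goals (with Region, Certification and Goal columns):

let
    // merge the two key columns in the goals query into one Key column
    GoalsKeyed = Table.CombineColumns(Goals, {"Region", "Certification"}, Combiner.CombineTextByDelimiter("|"), "Key"),
    // left outer join on the concatenated key, then expand the goal value
    Merged = Table.NestedJoin(Personnel, {"Key"}, GoalsKeyed, {"Key"}, "Goals", JoinKind.LeftOuter),
    MergedDataTable = Table.ExpandTableColumn(Merged, "Goals", {"Goal"}, {"Goal"})
in
    MergedDataTable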

We’ll call the resulting table “MergedDataTable” in order to keep track of it. After expanding the goals column and renaming the resulting column we are left with this:

Now we have the base table to tackle the second hurdle – counting rows per key. Stay tuned for the next blog post!

Mackmyra Intelligence – How AI blended technology and whisky

Before I begin – please always drink responsibly. If you feel that you have an unhealthy alcohol consumption level or pattern, there is help available. This post is not written to encourage alcohol consumption, nor has Mackmyra in any way had any influence on the text or the blog as such.

This will be a special kind of blog post, but I do hope that I'll get more opportunities like this one.

One of my passions in life is whisky, and oddly enough what made me discover this interest was my work in IT. Many years ago, on my first trip with Atea, we went north to a (by then) new (and Sweden's first) distillery – Mackmyra. We of course did a lot of other fun things during that trip, but one of the highlights was to visit the building site of what came to be the world's first gravity distillery – and we were of course given an opportunity to taste the different cask types you could order at the time.

Since then I've tasted hundreds if not thousands of whiskies. I've visited close to every distillery in Scotland, and Sweden, that allows visitors, and I have made a start towards a decent collection of wonderful whiskies.

When an opportunity shows up that allows me to combine my interest in technology with my interest in whisky – I jump at the chance! And that is what happened.

Two bottles of Mackmyra

Mackmyra, together with Microsoft and the Microsoft partner INSERT, have created the world's first whisky designed by AI, Intelligence. I will do my best, together with my much more AI-knowledgeable friend Alexander, to get more information on the actual process, the dataset and what's ahead. What we know is that Mackmyra's Master Blender Angela D'Orazio was presented with a number of recipes – which usually specify the kind of barley, the phenol level (the smokiness of the whisky), the yeast, the fermentation, the cuts and so on – and that Angela then chose how to mature it. Angela chose recipe number 36, because the AI can't understand which parts don't mix well. It understands the data it has been fed, the recipes and the ingredients, and probably how successful the previous whiskies of Mackmyra have been.

The back label of Mackmyra Intelligence

In the end – and that's vital to remember before we get to the tasting notes – a whisky was created with the goal of being liked by as many as possible. An easy drinker, and proof that the concept works and that an AI (with some assistance from a human) can create a very successful whisky. So, after that, let's head into my tasting notes of the Mackmyra Intelligence AI:01, at 46 % and without artificial coloring.

Tasting notes (neat)

Color:

It's rather light in color, close to an Instagram filter, hay, and not particularly oily.

Comment: Nothing out of the ordinary. Mackmyra uses a lot of bourbon and new Swedish oak barrels and the whisky is probably rather young so this is to be expected.

Smell:

Oaky, some alcohol vapors (or what could be felt as that, probably a lot of wood again), sawdust, juniper, a light touch of vanilla, dried fruit, peach, old raisins and a maltiness.

Comment: To me, the woody smell with the sting of alcohol is very typical Mackmyra and I personally have always loved it. I like the apparent oakiness with the juniper (which comes from Mackmyra's use of juniper when they smoke their barley). It's a complex smell, not everyone will like it, but it's great fun to find new smells while it warms up.

Taste:

The first taste is very light, almost watery, and I was kind of disappointed, but the longer you keep the whisky in your mouth, the more it grows. A clear tannin/alcohol sting combined with freshly cut wood and white pepper. Later a very, very light smokiness, more black pepper and a number of different kinds of wood. It finishes off with vanilla, burnt sugar and warm marzipan/frangipane.

Comment: It has everything I expect from a Mackmyra, but it's obvious that it's been toned down to suit a broader audience. It's representative and a good whisky to try on someone who has just started to enjoy whisky. My wife, who is a keen whisky drinker, likes easy drinkers with character – and usually doesn't like Mackmyra – but when she tasted this her comment was: "Oh, that's very good!".

Finish:

Short, more of a feeling than a taste, dry (because of the oakiness and tannins) but more elegant than expected. White pepper and dry wood.

Comment: To me, the aftertaste is almost more important than the actual taste. I would have hoped for more here. Not because it's bad, it's not, but because I would have thought that this is something others would enjoy, and therefore that the AI would have chosen a recipe to reflect that.

A poured glass of Mackmyra and the opened bottle

Water and whisky

I always taste my whiskies neat at first, and then add a few drops of water. In my experience, Mackmyra whisky should not go below 46 % (apart from the MACK whisky), but I wanted to try it. Following now are my additional tasting notes for the Intelligence with a few drops of water. If you haven't tried that with your whisky, I highly encourage you to do it.

Smell (with water):

Warmer, more smoke and peatiness, a nice, calm fire outside, fruitier and sweeter with more vanilla.

Comment: Smell-wise, it’s totally different and in some perspectives an improvement.

Taste (with water):

Almost sour to start with, getting watery very quickly. Later, burnt sugar, caramel, burnt marzipan and a more obvious taste of juniper. Woodier and with a more obvious alcohol taste and aftertaste.

To me, drink it neat at room temperature and take your time.

Conclusion:

It's a whisky I do like, but it's not on my list of the best whiskies I've had. If I were to grade it from 0-100 (which many do), this would probably land somewhere between 75-80, where 50 would be drinkable and 100 the best you've ever tried. I would recommend you to buy one, either if it is your first Mackmyra, if you like the Mackmyra taste – or if you just like the idea of owning the world's first AI whisky.

In terms of the technology part, I'll do my best to find out more about it. I think this is a very good way to learn about the limitations of AI and where humans are still required to achieve the task at hand. I'm very happy that I purchased the bottles (yes, I have two) and I'm looking forward to the next one. For that, I would love to see a more advanced whisky, based on as much data as possible from other whiskies (as well as Mackmyra) that have been given praise across the world. Until then, Slainte!

Domain Controller local admin password

Hey there. Toni here back with some thoughts on domain controllers and their local SAM database. You know, the thing that is disabled as soon as the server is promoted to a domain controller.
This is something that is often forgotten about until it's too late. This database is actually critical if something bad happens to your Active Directory. Do you know the local admin password on your domain controller? How long ago was it installed? The local admin password is set when the domain controller is promoted. Did you promote it? Did a consultant? Do you even know the password?


Missing something?

This local admin account comes into play when the domain controller needs to start in DSRM, or Directory Services Restore Mode. This is done when the house is on fire and no one can do anything. Is that the moment you want to discover that you don't know the local admin password and need to find someone who does? I would guess no, since you are probably under enough stress at that point anyway. Here is a quick guide on how to reset the local admin password on a fully functioning domain controller.

Run CMD as Administrator and type ntdsutil [Enter]
Next we switch to the Reset DSRM Password context:
set dsrm password [Enter] (or the short form set d p)
Then we select which server to set the password on:
reset password on server servername [Enter] (or r p o s servername)
Enter your new password twice, type quit twice to exit ntdsutil, and you're done.

Now you are ready to restore your AD in case of emergency by starting the server in DSRM.

Windows Server 2019 gaming

Hey folks, long time no see. How are you doing?

In Episode 80 of Kneedeepintech I briefly mentioned Windows Server and gaming in the same sentence. Now that I have had time to actually test it again, I thought I would post my findings here.

I have the wired Xbox 360 controllers that I've used plenty with Windows 7/8/10, but I previously wanted to try and play on Windows Server 2012 R2. No luck back then. The controller would not light up at all, complaining about driver issues.

So now I tried this again with Windows Server 2019. Same issue. No light, unknown controller and no drivers to be found online.

I checked in Device Manager on my Windows 10 1903 box what the driver files for the controller were, and found that there was only one, called "xusb22.sys", located under "\Windows\system32\drivers".

I copied that file to an empty folder, knowing that I would also need the .inf file, which I found under "\Windows\inf" with the name "xusb22.inf". There was also a "xusb22.PNF", so I copied that too.
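
If you want to script that collection step instead of copying the files by hand, a small sketch could look like this – the destination folder is just an example:

# Sketch: gather the Xbox 360 controller driver files from a Windows 10 box
$destination = "C:\Temp\xusb22"   # example folder
New-Item -ItemType Directory -Path $destination -Force | Out-Null

Copy-Item "$env:SystemRoot\System32\drivers\xusb22.sys" $destination
Copy-Item "$env:SystemRoot\INF\xusb22.inf" $destination
Copy-Item "$env:SystemRoot\INF\xusb22.PNF" $destination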

Next I went back to my Windows Server box, opened Device Manager and just clicked "Update driver" on the unknown controller device. This time Windows said that it found a matching driver, but that it was not signed properly. Ah, now we're getting somewhere.

Now, I rebooted the server and pressed F8 for the start-up options, since there is a workaround there for driver signing. Select the option to disable driver signature enforcement. Once back in Windows I went into Device Manager again, selected Update driver, and this time I got a warning with the option to install anyway. Boom, the controller lit up. Hey hey!

Then I needed to test it out. I installed Steam and downloaded the games Braid and Limbo since I don’t have a hefty graphics card in said Server. Launched the games and both worked fine with the controller. Victory!

You can download the files here: xusb22

Kerberos fails with CIFS using AOVPN

Hey. Today I want to talk about an interesting case that involves Kerberos, Always On VPN and access to CIFS.
A customer has recently deployed Always On VPN in their infrastructure. Most clients worked well with it, but a few machines had mixed issues with old VPN clients that were still installed.

Cisco AnyConnect usually worked fine when installed, but there was another VPN client that disabled the IKEEXT service, which prevented the AO VPN IPsec tunnel from working properly. Uninstalling that software solved those issues. The customer still had it installed on some clients as a backup solution for when IPsec was blocked at the source (for example hotels, airports etc.).

But hey! I mentioned Kerberos, how does that come into play?

Disclaimer: This post might have little to do with Always On VPN itself, but the issue manifested itself when connected through AOVPN.

Well, there were a few clients that actually connected fine. They could ping things on the network and everything seemed fine until they tried to access the file server. They got prompted for credentials, stating that they had no access to the domain controller, even though I could get LDAP access with the AD PowerShell module, so LDAP was obviously working. This was interesting. After a few log checks on both sides of the fence, nothing popped out. So I decided to install Wireshark on the domain controller to try and figure things out. This gave me lots of new and critical information.

I could clearly see that Kerberos was not working – an UN_SUPPORTED error when the client tried to get a Kerberos ticket from the KDC. So I checked the DC logs and found issues with the Kerberos certificate.
Sorry for the lack of screenshots, this all happened really fast and I was definitely not allowed to screenshot the customer's data.

It turned out that the domain controller was using previously issued certificates from an old and retired Certificate Authority. So I deleted all of them and issued the domain controller new certificates for Domain Controller Authentication and Kerberos Authentication. Now my senses were tingling, since I knew that this would fix the problems. And lo and behold, it did! The troubled clients worked right away.
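
If you want to do the same check in your own environment, a small sketch along these lines lists the machine certificates on the DC so you can spot ones chained to a retired CA, and then requests a fresh one from the enterprise CA. The template name is an assumption – use whatever your CA actually publishes for domain controllers.

# Sketch: list the DC's machine certificates to spot ones issued by a retired CA
Get-ChildItem Cert:\LocalMachine\My |
    Select-Object Subject, Issuer, NotAfter, Thumbprint

# Request a new certificate from the enterprise CA (template name is an example)
Get-Certificate -Template "DomainControllerAuthentication" -CertStoreLocation Cert:\LocalMachine\My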

But one thing was, and still is, bothering me. Since this was a server-side fix, why didn't all the clients have this issue? Why were only a select few clients using Kerberos auth? The customer is telling me that all computers are equal, installed from the same image and getting the same policies. So why were only a select few using Kerberos (which failed)? At the time of writing, I don't know – this happened just recently. Maybe you have some ideas? Feel free to contact me on Twitter (@mrblackswe) or post a comment below. Something tells me that the clients are not equal at all, despite what the customer is telling me (usually the case). The clients are Windows 10 Pro 1809, afaik.

Upgrading the lab

Good day to you. Today I've done a little write-up about my home lab equipment. I was noticing a few slowdowns once I got to around 10 VMs running on my old "server", which was an older-gen Intel E3 1240 series CPU with 32 GB of DDR3 RAM, an SSD cache and spinning HDDs for mass storage. Since that gear was closing in on 5 years of service, I thought it was time to invest in some new hardware.

This time I decided to go AMD, specifically the Threadripper 1920X with 12 cores/24 threads and 64 GB of DDR4 memory. So I doubled the RAM amount as well as the RAM speed, and the core count tripled with higher clocks as well. Going all-flash did its thing for sure, as there are now 5 SSDs in a combination of SATA and M.2 drives in RAID-0, hosting the VMs through Storage Spaces. As far as I know, the only limitation of running Hyper-V on Threadripper is that it can't do nested virtualization, but I haven't verified that myself yet as it is a feature I don't specifically need.

I did not invest in networking at all, since I don't really need more than 1 Gbit externally from the host. Everything else in my network runs a single NIC except for the NAS, which cannot reach line speed anyway despite having 4 ports. I could always get a 10 Gbit add-on card later if needed.

So, once the new workhorse was built and Hyper-V installed, it was only a matter of setting the constrained Kerberos delegation correctly and starting to migrate machines. Live migration was out the window due to CPU differences, so I had to make the VMs "migration enabled". This is done for example with PowerShell:

Set-VMProcessor -VMName NameOfVM -CompatibilityForMigrationEnabled $true

Note that the machine has to be turned off for this to run correctly. Once I had run the Move-VM command, I just ran the above command again with -CompatibilityForMigrationEnabled $false and the move was completed.
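
Put together, the whole move for one VM could look something like the sketch below. The host name and storage path are placeholders, and the compatibility flag is switched back once the VM only runs on the new host.

# Sketch: move a VM between hosts with different CPU generations (names and paths are examples)
$vm = "NameOfVM"

Stop-VM -Name $vm
Set-VMProcessor -VMName $vm -CompatibilityForMigrationEnabled $true

Move-VM -Name $vm -DestinationHost "NEW-HYPERV-HOST" `
    -IncludeStorage -DestinationStoragePath "D:\Hyper-V\$vm"

# Turn compatibility mode off again on the new host (the VM must still be powered off)
Set-VMProcessor -VMName $vm -ComputerName "NEW-HYPERV-HOST" -CompatibilityForMigrationEnabled $false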

The new machine feels much faster than the old one, and a new install of WS2019 with desktop experience from MDT took just 4 minutes to finish. I may do some IOPS testing further down the road, but I expect the numbers to be pretty good for consumer/workstation grade hardware.

A Swede went to Finland, spoke and learned

A couple of weeks ago now, I was focused on preparing for, and speaking at, Techdays in Helsinki, Finland. I was really happy to be accepted for the conference after Alexander spoke there last year and praised the arrangement. I was also very happy that Techdays chose to accept my session on Windows Virtual Desktop, since this is one of the topics I'm most passionate about and involved in currently.

I have presented this session previously, at Igel Disrupt, but this time I had another kind of audience, with more mixed backgrounds and more of a focus on "regular" client management. In the end, it turned out great!

I felt that I had a very good interaction with the audience and I’ve received a number of questions during and after the event. Also, the feedback has been amazing and I’m very glad and humbled by that.

So, why do I think that WVD is such a big deal? Well, I've said it before: to me the first and most obvious benefit is that this will democratize the so-called EUC (End User Compute) landscape. The technologies out there today are usually pricey and fairly complicated to configure and maintain (and yes, that includes Windows Server RDS). They usually also require you to buy a number of licenses up front, or at least do the implementation as a project.

This has prevented some, especially smaller, organizations from going down this route, even though they would like to. WVD makes it possible. You can scale DOWN to 1 user on 1 VM if you like, and that's fine. You don't have any upfront cost, you pay for your consumption (even though it can actually be cheaper to buy reserved instances and pay upfront). In its simplest configuration, it's a very easy solution to implement and manage.

You of course get all the benefits that any, or most, EUC solutions have today in terms of connectivity, security and mobility.

One piece of feedback I received both in Munich and in Helsinki was that I almost sound overly positive and don't present the downsides of the service. For this, I'm sorry. It's actually not intentional, and therefore I would like to point out a few downsides I currently see with the service (based on publicly available facts):

  1. It's great to run apps and desktops in the cloud, but you need to consider your apps first. This will be the showstopper for many organizations. If you have systems that require connectivity to your local datacenter, for example, it's perhaps not great from a performance perspective to put the client in the cloud. You can of course see this as an opportunity as well – you are moving your stuff to the cloud – but consider it first.
  2. Second, authentication. Personally, I feel that the current solution could be greatly improved, but that would require more cross product group work. The RDS team can't sort this out by themselves; they need help from the Windows, AD and Azure AD teams among others. I'll dig deeper into this around the time of the public preview.
  3. Since this is a kind of hybrid if we compare it to other solutions, we need tools that make it easier to manage the service, especially the VMs. You don't need to manage and maintain the actual underlying infrastructure – but you need to configure it, secure parts of it and manage your VMs. This will also require some cross-PG work, and this (as well as security) is where I see that I personally can make a difference.

There are of course other downsides as well – and I'm really looking forward to getting more information about the final decision on licensing of the service. We'll see.

This is however feedback I'm struggling with. I do get it, I do see it as important, and I do want to get better at not just looking at the good sides but also (in blogs or when I'm speaking) giving my audience a realistic picture. Again, I'm not trying to hide anything, it's just a matter of me focusing on the amazing technology.

I've actually had this challenge before. In the early days of Windows 10 I did a customer presentation on Windows 10 and why that would be the best OS for this customer. They found the presentation interesting, they saw the benefits, but then they asked me a question: "So, what's bad with Windows 10? There needs to be something, or else we won't be able to trust what you are saying." I do get that feedback, especially now a few years later. So, moving forward I'll do my best to present a more nuanced picture of whatever I'm presenting on.

So, we'll for sure have reasons to get back to WVD in coming blog posts, but for now I'll be focusing a lot on my "core" technologies, which are especially Windows 10 and EMS.

Take care and remember to follow the blog and listen to the Knee Deep in Tech podcast. You can find us wherever you find pods including iTunes and Spotify.