New adventures – with TrueSec

Today is the first day of my new adventure. As of today, I've joined TrueSec Infrastructure and will be taking on the role of Principal Technical Architect. I will be focusing on Microsoft 365 and other technologies related to it.

It's really a dream come true. I still remember the first user group meeting I attended many years ago. Johan Arwidmark was one of the speakers, and he and many of my new colleagues at TrueSec have in different ways been idols and people you could always turn to.

That's of course one of my goals as well: to become a trusted advisor for both customers and the community. I see TrueSec as a great company that will enable me to achieve that. It will also provide me with opportunities to work with some of the world's most interesting and, in a good way, challenging companies and organizations.

From today, I'll be working globally, and I'll of course still be doing as many community activities as possible. In the coming months I'm speaking at Techdays Sweden, Microsoft Ignite and Experts Live Europe. Looking further ahead, I'm one of the featured speakers at Igel Disrupt EMEA in February.

Alexander, Toni and I will continue the adventure we have set out on with Knee-Deep in Tech – that won't change. We will still aim to publish weekly podcasts and blog posts, and to stay active on social media.

On top of that, I know that TrueSec has a few things planned for me – so stay tuned, and reach out if there's anything I can help you with. You can reach me on Twitter, on LinkedIn and of course via e-mail.

Multiple keys in Power BI

Let’s say we need to keep track of certifications in a fictional company. Management has requested a Power BI table that lists the region, the certification, the number of certifications, the goal, and a concatenation of the number of certifications and the goal. The requested end result should look something like this:

This should be easy – just visualize the columns in a table. Unfortunately, the last column, called “current”, is in fact not part of the table. And it gets worse: the columns come from different tables, and we need more than one key. Let’s tackle this in two blog posts.

Multi-column keys

We have two Excel sheets as base data – one tracks the personnel available and one tracks the goals per region. Apparently Dana Scully believes in Azure.

The keys we need for connecting the two tables are “region” and “certification”. A key on just one of these columns won’t ensure uniqueness, and here is hurdle number one: how do we create a relationship in Power BI that is based on more than one key? The simple answer is that we can’t. But what we CAN do is create a concatenated column containing the data we need for a unique key, and then base our relationships on that. For starters, let’s add a custom column in the personnel table like this:
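In Power Query M, the personnel table's custom column could look roughly like this (the step, column and delimiter names are my assumptions; in the UI this is simply Add Column > Custom Column):

```m
// Add a concatenated key column to the personnel table
#"Added Key" = Table.AddColumn(
    #"Previous Step", "Key",
    each [Region] & "|" & [Certification],
    type text)
```

A delimiter such as "|" guards against two different region/certification pairs accidentally concatenating to the same string.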

Then we do almost the same thing in the goals table, but as we only need the actual goal numbers and the key, we don’t need a new column like in the personnel table. We just merge the keys together into one column like this:
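In M, merging the two key columns of the goals table might look like this (again, step and column names are my assumptions; in the UI you select both columns and use Transform > Merge Columns):

```m
// Merge the region and certification columns into a single key column
#"Merged Key" = Table.CombineColumns(
    #"Previous Step",
    {"Region", "Certification"},
    Combiner.CombineTextByDelimiter("|"),
    "Key")
```

Use the same delimiter here as in the personnel table so the keys actually match.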

We now have the prerequisite keys in place to either merge the two tables into one base table or create relations on the fly. I chose the first alternative for this blog post, but either works fine. To create the new base table that we’ll build the visualizations on in a bit, we do a simple “Merge to New Table” like this:
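Under the hood, the merge is a join on the new key column; in M it might look like this (table and column names assumed, not taken from the actual workbook):

```m
// Join the personnel and goals tables on the concatenated key
MergedDataTable = Table.NestedJoin(
    Personnel, {"Key"},
    Goals, {"Key"},
    "Goals", JoinKind.LeftOuter)
```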

We’ll call the resulting table “MergedDataTable” in order to keep track of it. After expanding the goals column and renaming the resulting column we are left with this:


Now we have the base table to tackle the second hurdle – counting rows per key. Stay tuned for the next blog post!

Mackmyra Intelligence – How AI blended technology and whisky

Before I begin – please always drink responsibly. If you feel that you have an unhealthy alcohol consumption level or pattern, there is help available. This post is not written to encourage alcohol consumption, nor has Mackmyra in any way had any influence on the text or the blog as such.

This will be a somewhat special blog post, but I do hope that I'll get more opportunities like this one.

One of my passions in life is whisky, and oddly enough what made me discover this interest was my work in IT. Many years ago, on my first trip with Atea, we went north to a (by then) new distillery – Mackmyra, Sweden’s first. We of course did a lot of other fun things during that trip, but one of the highlights was visiting the building site of what came to be the world’s first gravity distillery – and we were of course given an opportunity to taste the different cask types you could order at the time.

Since then I've tasted hundreds if not thousands of whiskies. I've visited close to every distillery in Scotland, and Sweden, that allows visitors, and I have made a start on a decent collection of wonderful whiskies.

When an opportunity comes along that allows me to combine my interest in technology with my interest in whisky – I jump at the chance! And that is what has happened.

Two bottles of Mackmyra

Mackmyra, together with Microsoft and the Microsoft partner INSERT, has created the world's first whisky designed by AI: Intelligence. I will do my best, together with my much more AI-knowledgeable friend Alexander, to get more information on the actual process, the dataset and what's ahead. What we do know is that Mackmyra's Master Blender Angela D'Orazio was presented with a number of recipes – a recipe usually specifies the kind of barley, the phenol level (the smokiness of the whisky), the yeast, the fermentation, the cuts and so on – and that Angela then chose how to mature it. Angela chose recipe number 36, because the AI can't understand which parts don't mix well. It understands the data it has been fed: the recipes, the ingredients and probably how successful Mackmyra's previous whiskies have been.

The back label of Mackmyra Intelligence

In the end – and that's vital to remember before we get to the tasting notes – a whisky was created with the goal of being liked by as many people as possible: an easy drinker, and proof that the concept works and that an AI (with some assistance from a human) can create a very successful whisky. So, with that said, let's head into my tasting notes of the Mackmyra Intelligence AI:01, at 46 % and without artificial coloring.

Tasting notes (neat)


Color:

It's rather light in color, close to an Instagram filter – hay – and not particularly oily.

Comment: Nothing out of the ordinary. Mackmyra uses a lot of bourbon and new Swedish oak barrels and the whisky is probably rather young so this is to be expected.


Smell:

Oaky, some alcohol vapors (or what could be perceived as that – probably a lot of wood again), sawdust, juniper, a light touch of vanilla, dried fruit, peach, old raisins and a maltiness.

Comment: To me, the woody smell with the sting of alcohol is very typical Mackmyra, and I personally have always loved it. I like the apparent oakiness with the juniper (which comes from Mackmyra’s use of juniper when they smoke their barley). It’s a complex smell – not everyone will like it, but it’s great fun to find new smells as it warms up.


Taste:

The first taste is very light, almost watery, and I was kind of disappointed, but the longer you keep the whisky in your mouth, the more it grows. A clear tannin/alcohol sting combined with freshly cut wood and white pepper. Later a very, very light smokiness, more black pepper and a number of different kinds of wood. It finishes off with vanilla, burnt sugar and warm marzipan/frangipane.

Comment: It has everything I expect from a Mackmyra, but it’s obvious that it has been toned down to suit a broader audience. It’s representative and a good whisky to try on someone who has just started to enjoy whisky. My wife, who is a keen whisky drinker, likes easy drinkers with character – and usually doesn’t like Mackmyra – but when she tasted this her comment was: “Oh, that’s very good!”.


Aftertaste:

Short, more of a feeling than a taste, dry (because of the oakiness and tannins) but more elegant than expected. White pepper and dry wood.

Comment: To me, the aftertaste is almost more important than the actual taste. I would have hoped for more here. Not because it’s bad – it isn’t – but because I would have thought this is something others would enjoy, and therefore that the AI would have chosen a recipe to reflect that.

A poured glass of Mackmyra and the opened bottle

Water and whisky

I always taste my whiskies neat at first, and then add a few drops of water. In my experience, Mackmyra whisky should not go below 46 % (apart from the MACK whisky), but I wanted to try it. The following are my additional tasting notes for the Intelligence with a few drops of water. If you haven’t tried adding water to your whisky, I highly encourage you to do it.

Smell (with water):

Warmer, more smoke and peatiness – a nice, calm fire outside – fruitier and sweeter, with more vanilla.

Comment: Smell-wise, it’s totally different and in some respects an improvement.

Taste (with water):

Almost sour to start with, getting watery very quickly. Later, burnt sugar, caramel, burnt marzipan and a more obvious taste of juniper. Woodier and with a more obvious alcohol taste and aftertaste.

Comment: My advice is to drink it neat, at room temperature, and take your time.


It’s a whisky I do like, but it’s not on my list of the best whiskies I’ve had. If I were to grade it from 0-100 (which many do), this would probably land somewhere between 75 and 80, where 50 is drinkable and 100 is the best you’ve ever tried. I would recommend you buy one, whether it is your first Mackmyra, you like the Mackmyra taste, or you just like the idea of owning the world’s first AI whisky.

In terms of the technology part, I'll do my best to find out more about it. I think this is a very good way to learn about the limitations of AI and where humans are still required to achieve the task at hand. I'm very happy that I purchased the bottles (yes, I have two!) and I'm looking forward to the next one. For that, I would love to see a more advanced whisky, based on as much data as possible from other whiskies (as well as Mackmyra's) which have been given praise across the world. Until then, Slainte!


Domain Controller local admin password

Hey there. Toni here, back with some thoughts on domain controllers and their local SAM database. You know, the thing that is disabled as soon as the server is promoted to a domain controller.
This is something that is often forgotten about until it’s too late. This database is actually critical if something bad happens to your Active Directory. Do you know the local admin password on your domain controller? How long ago was it installed? The local admin password is set when the domain controller is promoted. Did you promote it? Did a consultant? Do you even know the password?

Missing something?

This local admin account comes into play when the domain controller needs to start in DSRM, or Directory Services Restore Mode. That happens when the house is on fire and no one can do anything. Is that really the moment when you want to discover that you don’t know the local admin password and need to find someone who does? I would guess not, since you are probably under enough stress at that point anyway. Here is a quick guide on how to reset the local admin password on a fully functioning domain controller.

Run CMD as Administrator and type ntdsutil [Enter]
Next we switch to the Reset DSRM Password context:
set d p [Enter] (short for “set dsrm password”)
Then we select which server to set the password on:
r p o s servername [Enter] (short for “reset password on server servername”)
Enter your new password twice and you’re done.
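Put together, a full session looks roughly like this (the server name DC01 is a placeholder, and the exact prompt wording may differ slightly between Windows Server versions):

```
C:\> ntdsutil
ntdsutil: set dsrm password
Reset DSRM Administrator Password: reset password on server DC01
Please type password for DS Restore Mode Administrator Account: ********
Please confirm new password: ********
Reset DSRM Administrator Password: quit
ntdsutil: quit
```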

Now you are ready to restore your AD in case of emergency by starting the server in DSRM.

Windows Server 2019 gaming

Hey folks, long time no see. How are you doing?

In episode 80 of Knee Deep in Tech I briefly mentioned Windows Server and gaming in the same sentence. Now that I have had time to actually test it again, I thought I would post my findings here.

I have the wired Xbox 360 controllers that I’ve used plenty with Windows 7/8/10, but I had previously wanted to try and play on Windows Server 2012 R2. No luck back then. The controller would not light up at all, complaining about driver issues.

So now I tried this again with Windows Server 2019. Same issue: no light, unknown controller and no drivers to be found online.

I checked my Windows 10 1903 box in Device Manager to see what the driver files for the controller were, and found that there was only one, called “xusb22.sys”, located under “\Windows\system32\drivers”.

I copied that file to an empty folder, knowing that I would also need the .inf file, which I found under “\Windows\inf” with the name “xusb22.inf”. There was also an “xusb22.PNF”, so I copied that too.
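If you prefer to script the collection of the three files, a small PowerShell sketch could look like this (the destination folder is my own choice; the source paths are the ones found in Device Manager above):

```powershell
# Collect the Xbox 360 controller driver files from a Windows 10 machine
$dest = 'C:\Temp\xusb22'   # destination folder - adjust as needed

New-Item -ItemType Directory -Path $dest -Force | Out-Null

Copy-Item "$env:windir\System32\drivers\xusb22.sys" $dest
Copy-Item "$env:windir\inf\xusb22.inf" $dest
Copy-Item "$env:windir\inf\xusb22.PNF" $dest
```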

Next I went back to my Windows Server box, opened Device Manager and just clicked “Update driver” on the unknown controller device. This time Windows said that it found a matching driver, but that it was not signed properly. Ah, now we’re getting somewhere.

Then I rebooted the server and pressed F8 for start-up options, since there is a workaround there for driver signing: select the option to disable driver signature enforcement. Once back in Windows I went to Device Manager again, selected Update driver, and this time I got a warning with the option to install anyway. Boom, the controller lit up. Hey hey!

Then I needed to test it out. I installed Steam and downloaded the games Braid and Limbo, since I don’t have a hefty graphics card in said server. Launched the games and both worked fine with the controller. Victory!

You can download the files here: xusb22

Kerberos fails with CIFS using AOVPN

Hey. Today I want to talk about an interesting case involving Kerberos, Always On VPN and access to CIFS.
A customer has recently deployed Always On VPN in their infrastructure. Most clients worked well with it, but a few had mixed issues caused by old VPN clients installed on some machines.

Cisco AnyConnect usually worked fine when installed, but there was another VPN client that disabled the IKEEXT service, which prevented the AOVPN IPsec tunnel from working properly. Uninstalling that software solved those issues. The customer still had it installed on some clients as a backup solution for when IPsec is blocked at the source (for example hotels, airports, etc.).

But hey! I mentioned Kerberos, how does that come into play?

Disclaimer: This post might have little to do with Always On VPN as such, but the issue manifested itself when connected through AOVPN.

Well, there were a few clients that actually connected fine. They could ping things on the network, and everything seemed fine until they tried to access the file server. They were prompted for credentials, stating that they had no access to the domain controller, even though I could actually get LDAP access with the AD PowerShell module – so LDAP was obviously working. This was interesting. After a few log checks on both sides of the fence, nothing popped out. So I decided to install Wireshark on the domain controller to try and figure things out. This gave me lots of new and critical information.

I could clearly see that Kerberos was not working: UN_SUPPORTED when the client tried to get a Kerberos ticket from the KDC. So I checked the DC logs and found issues with the Kerberos certificate.
Sorry for the lack of screenshots – this all happened really fast, and I was definitely not allowed to screenshot the customer’s data.

It turned out that the domain controller was using previously issued certificates from an old and retired Certificate Authority. So I deleted all of them and issued the domain controller new certs for domain controller authentication and Kerberos authentication. Now my senses were tingling, since I knew this would fix the problems. And lo and behold, it did! The troubled clients worked right away.
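If you want to check for this on your own domain controllers, a quick way is to list the machine certificates with their issuers and look for anything issued by a CA that no longer exists. A sketch (not the exact commands I ran at the customer):

```powershell
# List the DC's machine certificates with issuer, expiry and intended usage,
# to spot certs issued by an old or retired Certificate Authority
Get-ChildItem Cert:\LocalMachine\My |
    Select-Object Subject, Issuer, NotAfter, EnhancedKeyUsageList |
    Format-List
```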

But one thing was bothering me, and still is. Since this was a server-side fix, why didn’t all the clients have this issue? Why were only a select few clients using Kerberos auth? The customer tells me that all computers are equal, installed from the same image and getting the same policies. So why were only a select few using Kerberos (and failing)? At the time of writing, I don’t know – this happened just recently. Maybe you have some ideas? Feel free to contact me on Twitter (@mrblackswe) or post a comment below. Something tells me that the clients are not equal at all, despite what the customer is telling me (usually the case). The clients are Windows 10 Pro 1809, afaik.

Upgrading the lab

Good day to you. Today I’ve done a little write-up about my home lab equipment. I was noticing a few slowdowns once I got around 10 VMs running on my old “server”, which was an older-generation Intel E3 1240 series CPU with 32 GB of DDR3 RAM, an SSD cache and spinning HDDs for mass storage. Since the machine was closing in on 5 years of service, I thought it was time to invest in some new hardware.

This time I decided to go AMD, specifically the Threadripper 1920X with 12 cores/24 threads and 64 GB of DDR4 memory. So I doubled the RAM amount and the RAM speed, and the core count tripled, with higher clocks as well. Going all flash did its thing for sure too: there are now 5 SSDs, a combination of SATA and M.2 drives, in RAID 0 hosting the VMs through Storage Spaces. As far as I know, the only limitation of running Hyper-V on Threadripper is that it can’t do nested virtualization, but I haven’t verified that myself yet as it is a feature I don’t specifically need.

I did not invest in networking at all, since I don’t really need more than 1 Gbit/s externally from the host. Everything else in my network runs a single NIC, except for the NAS, which cannot reach line speed anyway despite having 4 ports. I could always get a 10 Gbit add-on card later if needed.

So, once the new workhorse was built and Hyper-V installed, it was only a matter of setting up constrained Kerberos delegation correctly and starting to migrate machines. Live migration was out of the window due to the CPU differences, so I had to enable processor compatibility for migration on the VMs. This is done, for example, with PowerShell:

Set-VMProcessor -VMName NameOfVM -CompatibilityForMigrationEnabled 1

Note that the machine has to be turned off for this to run correctly. Once I had run the Move-VM command, I just ran the above command again with -CompatibilityForMigrationEnabled 0, and the move was completed.
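Put together, the whole move could be sketched like this (the VM name, destination host and storage path are placeholders):

```powershell
# Enable processor compatibility while the VM is off, then move it
Stop-VM -Name NameOfVM
Set-VMProcessor -VMName NameOfVM -CompatibilityForMigrationEnabled 1
Move-VM -Name NameOfVM -DestinationHost NewHost -IncludeStorage `
    -DestinationStoragePath 'D:\VMs\NameOfVM'

# On the new host, once the move is done, turn compatibility off again
Set-VMProcessor -VMName NameOfVM -CompatibilityForMigrationEnabled 0
Start-VM -Name NameOfVM
```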

The new machine feels very much faster than the old one, and a fresh install of WS2019 with desktop from MDT took just 4 minutes to finish. I may do some IOPS testing further down the road, but I expect the numbers to be pretty good for consumer/workstation-grade hardware.

A Swede went to Finland, spoke and learned

A couple of weeks ago now, I was focused on preparing for, and speaking at, Techdays in Helsinki, Finland. I was really happy to be accepted to the conference after Alexander spoke there last year and praised the arrangement. I was also very happy that Techdays chose to accept my session on Windows Virtual Desktop, since this is one of the topics I’m most passionate about and involved in at the moment.

I have presented this session before, at Igel Disrupt, but this time I had a different kind of audience, with more mixed backgrounds and more of a focus on “regular” client management. In the end, it turned out great!

I felt that I had very good interaction with the audience, and I received a number of questions during and after the event. The feedback has also been amazing, and I’m very glad and humbled by that.

So, why do I think that WVD is such a big deal? Well, I’ve said it before: to me, the first and most obvious benefit is that this will democratize the so-called EUC (End User Compute) landscape. The technologies out there today are usually pricey and fairly complicated to configure and maintain (and yes, that includes Windows Server RDS). They usually also require you to buy a number of licenses up front, or at least do the implementation as a project.

This has prevented some organizations, especially smaller ones, from going down this route even though they would like to. WVD makes it possible. You can scale DOWN to 1 user on 1 VM if you like, and that’s fine. You don’t have any upfront cost; you pay for your consumption (even though it can actually be cheaper to buy a reserved instance and pay for it upfront). In its simplest configuration, it’s a very easy solution to implement and manage.

You of course get all the benefits that any, or at least most, EUC solutions have today in terms of connectivity, security and mobility.

One of the feedback points I received, both in Munich and in Helsinki, was that I almost sound overly positive and don’t present the downsides of the service. For this, I’m sorry. It’s actually not intentional, and therefore I would like to point out a few downsides I currently see with the service (based on publicly available facts):

  1. It’s great to run apps and desktops in the cloud, but you need to consider your apps first. This will be the showstopper for many organizations. If you have systems that require connectivity to your local datacenter, for example, it’s perhaps not great from a performance perspective to put the client in the cloud. You can of course see this as an opportunity as well – you are moving your stuff to the cloud – but consider it first.
  2. Second, authentication. Personally, I feel that the current solution could be greatly improved, but that would require more cross-product-group work. The RDS team can’t sort this out by themselves; they need help from the Windows, AD and Azure AD teams, among others. I’ll dig deeper into this around the time of the public preview.
  3. Since this is a kind of hybrid, if we compare it to other solutions, we need tools that make it easier to manage the service, especially the VMs. You don’t need to manage and maintain the actual underlying infrastructure – but you do need to configure it, secure parts of it and manage your VMs. This will also require some cross-PG work, and this (as well as security) is where I see that I personally can make a difference.

There are of course other downsides as well – and I’m really looking forward to more information on the final licensing decision for the service. We’ll see.

This is, however, feedback I’m struggling with. I do get it, I do see it as important, and I do want to get better at not just looking at the good sides but also (in blogs or when I’m speaking) giving my audience a realistic picture. Again, I’m not trying to hide anything; it’s just a matter of me focusing on the amazing technology.

I’ve actually had this challenge before. In the early days of Windows 10, I did a customer presentation on Windows 10 and why it would be the best OS for that customer. They found the presentation interesting and saw the benefits, but then they asked me a question: “So, what’s bad about Windows 10? There needs to be something, or else we won’t be able to trust what you are saying.” I do get that feedback, especially now a few years later. So, moving forward, I’ll do my best to present a more nuanced picture of whatever I’m presenting on.

So, we’ll surely have reasons to get back to WVD in coming blog posts, but for now I’ll be focusing on my “core” technologies, which are primarily Windows 10 and EMS.

Take care, and remember to follow the blog and listen to the Knee Deep in Tech podcast. You can find us wherever you find pods, including iTunes and Spotify.

PowerShell for printer creation, with a bonus feature

Hey, the Finn here again with another random post.
A client reached out to me since they are in the process of switching print servers. They have a lot of printers, to say the least. They are all managed manually, and they wanted to avoid the hassle of creating all the print queues manually on the new server. PowerShell to the rescue!

They gave me a list of things the script should do. I might have over-delivered on that, but I like PowerShell. It’s freaking awesome for so many things in Windows and beyond.

So, let’s have a look at what I did. The script is full of comments for explanation, of course, and split into regions for easy reading.

Since we will be dealing with installing printer ports, printers and DNS records, I figured it might be a good idea to check that the script is “run as admin”, or just end it right there.
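A common way to do that check looks like this (a sketch of the idea, not the exact code from my script):

```powershell
# End immediately if the script is not running elevated
$identity  = [Security.Principal.WindowsIdentity]::GetCurrent()
$principal = New-Object Security.Principal.WindowsPrincipal($identity)

if (-not $principal.IsInRole([Security.Principal.WindowsBuiltInRole]::Administrator)) {
    Write-Warning 'This script must be run as administrator. Exiting.'
    exit 1
}
```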

Once that is out of the way, we need to establish some customer-specific variables, such as the input file (CSV), the DNS server name, the zone in which to create the record and, lastly, the template printer that we want to use for custom settings.
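Something along these lines (all names and paths are placeholders, not the customer's values):

```powershell
# Customer-specific variables (all values are placeholders)
$InputCsv        = 'C:\Scripts\printers.csv'   # CSV listing the printers to create
$DnsServer       = 'dns01.corp.example.com'    # DNS server to create records on
$DnsZone         = 'corp.example.com'          # zone for the new A records
$TemplatePrinter = 'PRN-Template'              # printer to copy settings from
```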

With the initial variables now defined and checked by the script, we can move on to creating the actual functions that do the work. First we create the DNS record.

This requires the DnsServer RSAT module to be installed on the machine running the script, and you will not be able to run the script at all if it is missing, since the script actually checks for that as well. However, installing it is really easy on Windows 10. Just run this PowerShell command: Add-WindowsCapability -Name Rsat.Dns.Tools~~~~ -Online and you are all set.
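The DNS part of the script can be sketched as a small function like this (the function name and parameters are my assumptions; the cmdlet comes from the DnsServer module):

```powershell
# Create an A record for the printer (requires the DnsServer RSAT module)
function New-PrinterDnsRecord {
    param($Name, $IPAddress, $DnsServer, $Zone)

    Add-DnsServerResourceRecordA -ComputerName $DnsServer -ZoneName $Zone `
        -Name $Name -IPv4Address $IPAddress
}
```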

For the next portion of the script, we need to create the network port on which the printer resides. We use some input data from the CSV here, with a try/catch approach, and we also try to create the printer itself right after, logging to screen and file if something goes wrong.
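A sketch of that part, with the try/catch and the logging (again, function and parameter names are my own):

```powershell
# Create the TCP/IP port and then the printer, logging failures to screen and file
function New-PrintQueue {
    param($PrinterName, $PortIP, $DriverName, $LogFile)

    try {
        Add-PrinterPort -Name $PortIP -PrinterHostAddress $PortIP
        Add-Printer -Name $PrinterName -DriverName $DriverName -PortName $PortIP
    }
    catch {
        $msg = "Failed to create '$PrinterName': $($_.Exception.Message)"
        Write-Warning $msg
        Add-Content -Path $LogFile -Value $msg
    }
}
```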

Now we have the printer created and ready to go. Next we set the permissions on it from the template printer, along with any other settings that are presented in XML from the originating printer.
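In PowerShell's PrintManagement module, the permissions travel as an SDDL string and the other settings as a print ticket in XML, so this step might look roughly like this ($TemplatePrinter and $NewPrinterName are placeholder variable names):

```powershell
# Copy the ACL and the XML-based configuration from the template printer
$template = Get-Printer -Name $TemplatePrinter -Full   # -Full includes the ACL
Set-Printer -Name $NewPrinterName -PermissionSDDL $template.PermissionSDDL

# Copy other settings (duplexing, color, etc.) via the print ticket XML
$conf = Get-PrintConfiguration -PrinterName $TemplatePrinter
Set-PrintConfiguration -PrinterName $NewPrinterName -PrintTicketXml $conf.PrintTicketXML
```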

That would be the last of the functions in this exercise. Now, on to the trigger of the whole thing.

We do some basic checks before kicking things off: making sure that there is a template printer, that we have the necessary PowerShell modules and that the input CSV actually exists, before doing anything else.
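The pre-flight checks could be sketched like this (variable names are the placeholders I used earlier in this post):

```powershell
# Pre-flight checks before touching anything
if (-not (Get-Printer -Name $TemplatePrinter -ErrorAction SilentlyContinue)) {
    throw "Template printer '$TemplatePrinter' was not found."
}
if (-not (Get-Module -ListAvailable -Name DnsServer)) {
    throw "The DnsServer RSAT module is missing."
}
if (-not (Test-Path -Path $InputCsv)) {
    throw "Input CSV '$InputCsv' was not found."
}
```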

Lastly, we store the variables as arguments to send to the different functions and, voilà, the script is done.
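The trigger itself is then just a loop over the CSV rows, handing each row to the functions (function names and CSV column names here are assumptions – adjust to your own script and input file):

```powershell
# Read the CSV and call the functions for each row
$printers = Import-Csv -Path $InputCsv

foreach ($printer in $printers) {
    New-PrinterDnsRecord -Name $printer.Name -IPAddress $printer.IP `
        -DnsServer $DnsServer -Zone $DnsZone
    New-PrintQueue -PrinterName $printer.Name -PortIP $printer.IP `
        -DriverName $printer.Driver -LogFile 'C:\Scripts\printers.log'
}
```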

Feel free to comment on all of my mistakes. Download the file here.