Podcast episode 107

In this episode Alexander and Simon talk about different ways of conducting training, revisit the how vs why discussion from last week, explore new changes in Windows on ARM and have a philosophical chat on cloud neutrality.

Subscribe to us on Spotify, on iTunes or listen to the episode right here:

Show notes:

Podcast episode 106

In this episode we talk about the why vs the how of technical conferences, the newly announced preview of clustered disks in Azure, Microsoft staying the leader in the Gartner magic quadrant for analytics and BI and some extremely interesting new features in Intune!

Subscribe to us on Spotify, on iTunes or listen to the episode right here:

Show notes:

Clustered disks

Power BI Gartner Report

Intune technical preview

Podcast episode 105

In this episode Alexander managed to corner Trøyel (@Troyel), a Norwegian consultant and budding speaker specializing in CI/CD in ADF – whatever that is. We started by sorting out the abbreviations…

Subscribe to us on Spotify, on iTunes or listen to the episode right here:

Show notes:

The Tech of Knee-Deep in Tech, 2020s edition – part 4

In part 1 of this blog series I outlined the gear we use to record the podcast. The second part was all about actually recording content. The third was a wall of text on post-processing. After getting back from Oslo and the Nordic Infrastructure Conference, it is now time to finish off the series with an outline of how I publish and push the podcast on the different social media platforms.

  1. The gear
  2. Recording
  3. Post-processing
  4. Publication

Base camp

After working off the blog for a bit, we started using the podcasting platform Pippa in January of 2018 (Pippa was subsequently absorbed into Acast). In essence, we upload the MP3 file to Acast, set the metadata for the episode and Acast takes care of the rest for us. This involves registering our podcast with iTunes, Spotify and all the other major podcasting aggregators. It’s not free – we’re using the $14.99 / month “influencer” plan. While we’re not using most of the other features of Acast (website, etc.), the analytics part is nice. I’m still not convinced that the stats are correct, though. I can’t seem to figure out how they are computed either (what triggers a count of an episode?), but they’ll do for now:

As you can see, podcasting is not a 100-meter sprint – it is more of a marathon. The first spike around Sep 22 of 2018 was the first Microsoft Ignite we attended and recorded at. The second huge spike was the following Microsoft Ignite. It kind of blows my mind that during Microsoft Ignite we spiked at 279 listeners in one day (and about 1000 for the whole week). For a project neither of us really expected to get off the ground, that’s pretty amazing.

I generally schedule the episode to go live the next weekday at 11.30 CET. This seems to be a good time for the rest of the world to pick it up. Surprisingly enough, most of our listeners are in the USA. Perhaps less surprising is that the number of listeners we have on Spotify is around 5 per day. Despite Spotify wanting to be a podcast powerhouse, it has some way to go.

Social media

With the file on Acast it’s time to tell the world. Cathrine Wilhelmsen explained the finer points of handling social media to me. This led me to (try to) create images for all episodes. In reality, “all episodes” has turned into “episodes with guests”. Tweets and social media posts with pictures tend to have better reach, and reach is the goal.

We start by creating a post on the blog. This is where we actually link to Acast. We have also thrown in links to Spotify and iTunes for good measure. All our different media outlets link back to the blog, further driving traffic there.

Cathrine also told me to set up a new Twitter account for Knee-Deep in Tech (@KneeDeepInTech). This account is responsible for tweeting about everything we do with the podcast. By adding a secondary account to Twitter I don’t have to log out of my normal @arcticdba account and into the @KneeDeepInTech one. In Tweetdeck it’s even simpler – just choose which of your connected accounts will do the tweeting and off you go! This way @KneeDeepInTech will tweet the new episode, and all of us hosts will retweet with a comment. That way we increase our reach even further.

Another way of scheduling social media posts is by using Buffer (https://www.bufferapp.com). This is a great tool for scheduling posts to Twitter, LinkedIn, Instagram or Facebook. With Buffer I get a good overview of what is happening in my small social media empire in the days ahead. The only issue I’ve found with it is that it sometimes doesn’t do linking quite as I would have liked.

With the episode going live at 11.30 CET, I tend to schedule the tweets, LinkedIn post and the blog post a minute later. This way I’m sure that the episode has been released correctly before anything starts linking to it.
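
The “a minute later” rule is simple enough to express in code. A trivial sketch in Python (the date is made up, and Europe/Stockholm stands in for CET):

from datetime import datetime, timedelta
from zoneinfo import ZoneInfo  # Python 3.9+

# Hypothetical release date - the episode goes live at 11.30 CET
release = datetime(2020, 3, 2, 11, 30, tzinfo=ZoneInfo("Europe/Stockholm"))

# Tweets, LinkedIn post and blog post get scheduled a minute later
social = release + timedelta(minutes=1)
print(social.isoformat())  # 2020-03-02T11:31:00+01:00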

Conclusion

As I said earlier, a podcast is not a sprint. I think of it as a long, mostly uphill marathon. It is a TON of work to figure out content, hunt down guests, do post-processing and finally share the result. It is also a lot of fun. I’ve learned so much from doing this for two years – not only about audio, but also from all the amazing conversations with the guests we’ve had on. Expect more fun content during 2020.

We’ll keep on doing it as long as it’s fun!

The Tech of Knee-Deep in Tech, 2020s edition – part 3

In part 1 of this blog series I outlined the gear we use to record the podcast. The second part was all about actually recording content. It’s now time to dive into the third and most technical part – post-processing.

  1. The gear
  2. Recording
  3. Post-processing
  4. Publication

As a quick recap, you might remember that the starting point for this step is the raw audio files. I will typically have one file per host plus the recording of the Teams meeting. Let’s start with Teams, as we need a way to extract the audio feed from the video so I can use it for lining up the other audio files. The Teams recording results in an MP4 file, and this file has several data streams. One of them is the audio, and using a tool called ffmpeg, this stream can be extracted to a format of your choosing. On the command line I do this:

ffmpeg.exe -i TeamsChatKDiT104.mp4 -vn -acodec pcm_s16le -ar 44100 -ac 2 TeamsChatKDiT104.wav

FFmpeg has a ton of different flags – more about them on the FFmpeg documentation page. The flags I use do the following: -i means input file – the MP4 we got from Teams. -vn means skip the video. -acodec pcm_s16le sets the codec, i.e. the kind of decoder for the audio we’re trying to yank out (more on Wikipedia). -ar 44100 sets the sample rate (the exact same as we all use when recording). -ac 2 means two channels, and finally I specify the output filename TeamsChatKDiT104.wav.
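
Since I do this for every episode, the call is easy to wrap in a small script. A minimal sketch in Python, assuming ffmpeg is on your PATH (the function name is mine, not part of any tool):

import subprocess

def extract_teams_audio(mp4_in: str, wav_out: str) -> None:
    # Pull the audio stream out of the Teams MP4 as 16-bit, 44.1 kHz, 2-channel WAV
    subprocess.run(
        ["ffmpeg",
         "-i", mp4_in,            # input: the MP4 we got from Teams
         "-vn",                   # skip the video stream
         "-acodec", "pcm_s16le",  # 16-bit PCM, same format as the raw host tracks
         "-ar", "44100",          # 44.1 kHz sample rate
         "-ac", "2",              # two channels
         wav_out],
        check=True,               # raise if ffmpeg exits with an error
    )

extract_teams_audio("TeamsChatKDiT104.mp4", "TeamsChatKDiT104.wav")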

Using ffmpeg with the settings above gives me a WAV file that is of the exact same type and quality as the raw audio files of the hosts. Time to go to work!

Aligning the audio tracks and doing basic edits

I open up Audacity and throw all the files in there. This is what it looks like in Audacity before I do anything. As you can see, the audio is not aligned at all – and this is what I’d be left with if I didn’t have the Teams track as a kind of metronome. The only file where all the claps are recorded is the Teams audio track at the bottom.

The unaligned audio tracks in Audacity

After hunting around in the Teams track for a bit, I find the first claps:

Identifying the claps in the Teams audio track

…and then I just need to line this up with the audio tracks, like so:

Lining up the claps using the Teams track as the master

Now I can discard the Teams track, as that audio is not useful for anything else.
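
I do the lining up by eye and ear, but the same idea could be automated with cross-correlation. A sketch, assuming the tracks are loaded as numpy arrays and trimmed to short windows around the claps (full-length correlation would be painfully slow):

import numpy as np

def clap_offset(reference: np.ndarray, track: np.ndarray, rate: int = 44100) -> float:
    # Cross-correlate the two windows; the peak marks the lag at which the
    # claps coincide. A positive result means the track's clap happens earlier
    # than the reference's, so shift the track right by that many seconds.
    corr = np.correlate(reference, track, mode="full")
    lag = int(corr.argmax()) - (len(track) - 1)
    return lag / rate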

Time for the least fun bit – editing out all the “ummms…”, clicks, unwanted sounds, overly long silences and such. This is the part that takes the longest, as I have to listen through the whole episode to catch them. More obvious noises I might be able to spot in the waveforms in Audacity, but I don’t trust myself to always do so.

Dealing with noise

Noise is everywhere. It can be anything from something as obvious as people talking loudly nearby or the drone of an air conditioning unit, to ground hum picked up from AC wiring. Some noise can be minimized, some can be almost completely removed. As always, the key is to start with as clean a signal as possible, as there is only so much one can do in post-production. When it comes to handling noise on recordings from our respective home offices, things are fairly straightforward. I know the characteristics of Simon’s and Toni’s microphones. I know for a fact that Simon’s microphone and amplifier will produce a cleaner signal than Toni’s microphone. Armed with that knowledge I am able to handle noise reduction in a way that is best suited to the different input signals.

For starters, I will use the first couple of seconds to give Audacity’s noise reduction effect something to chew on. I select the first few seconds of quiet, go to the effect and click “Get Noise Profile”. This “teaches” the effect what to listen for. Then I select the whole track and go to noise reduction again, this time applying these settings and pressing OK:

Settings for the noise reduction plug-in

I won’t go into detail about what all these mean, but I’ve found these settings to work for our equipment. Your mileage may vary. Repeat this step for all the host tracks. It is vital to “train” the plug-in (using Get Noise Profile) on the specific track it is to work with to ensure the best noise reduction possible.
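
If you ever want to script this step outside Audacity, the Python library noisereduce is built around the same profile idea. A rough sketch with made-up filenames, assuming a mono 44.1 kHz track with the deliberate silence at the start:

import noisereduce as nr
import soundfile as sf

audio, rate = sf.read("host_track.wav")  # hypothetical filename
noise_clip = audio[: rate * 5]           # the five seconds of deliberate silence

# The noise clip plays the role of "Get Noise Profile"; reduce_noise then
# subtracts that spectral fingerprint from the whole track
cleaned = nr.reduce_noise(y=audio, sr=rate, y_noise=noise_clip)
sf.write("host_track_clean.wav", cleaned, rate)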

Post-production on audio we’ve captured in the field can be very different. If the noise in that recording includes a faint murmur of someone talking in the background, it is impossible for the noise reduction effect to know *which* voice to remove and which voice to keep. In essence, trying the same noise reduction trick as above can lead to very garbled and strange-sounding audio. There is no perfect solution to this conundrum – you just have to experiment a bit to find a setting that gives reasonable reduction of unwanted noise while keeping the recording as clear and normal-sounding as possible. In some ways, the faint background din is part of the charm of an on-site recording.

Compression and dynamics

Time for the slightly more difficult parts of post-processing. Most people have some variation in loudness when they speak, especially over an extended time (like, for instance, the 30 minutes of an episode). This variation in loudness is generally not desirable. What we often wish to do is to amplify the quieter parts and bring down the louder parts to create a cohesive loudness all through the episode. There are several ways to accomplish this: one is compression, another is limiting, and a third is automatic gain control. If you’re unreasonably interested in how this actually works, take a look at Wikipedia here. In short, I use a compressor both to make the audio more uniform and to improve the general dynamics of the sound. However, I was never quite satisfied with the results of the built-in compressor.

Then I was pointed towards a third-party plug-in called “Chris’ Dynamic Compressor”, which did wonders. The Audacity Podcast has a great set of starting parameters, and I found them to suit my needs perfectly.

I select the track(s) I want to apply the compressor to, set the parameters to the following and press OK:

Settings for Chris' Dynamic Compressor plug-in
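
I won’t pretend to know exactly what Chris’ plug-in does under the hood, but the core math of compression is simple. A toy sketch in Python – no attack/release smoothing or make-up gain, which real compressors add:

import numpy as np

def toy_compressor(x: np.ndarray, threshold_db: float = -12.0, ratio: float = 2.5) -> np.ndarray:
    # Per-sample static compression: anything above the threshold is scaled
    # so it only rises 1/ratio as fast as the input does
    level_db = 20 * np.log10(np.abs(x) + 1e-10)         # instantaneous level
    over_db = np.maximum(level_db - threshold_db, 0.0)  # amount above threshold
    gain_db = -over_db * (1.0 - 1.0 / ratio)            # gain reduction to apply
    return x * 10.0 ** (gain_db / 20.0)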

Equalization

It’s now time to tackle unwanted frequencies. Again I won’t go into much detail, as others have written about this much better than I can (for instance the FeedPress blog here). I’ve created an equalization (EQ) curve in the equalizer plug-in that looks like this:

Settings for the equalization plug-in

It’s somewhat hard to tell, but the bass roll-off is between 60 Hz and 100 Hz (a high-pass filter that cuts out the low frequencies we want to avoid). Adult human hearing generally tops out around 15 kHz, so that’s where I roll off the top end, as I don’t need that frequency range either.
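
For the curious, the same shape can be approximated in code. This is not what Audacity’s equalizer does internally – just a band-pass sketch using scipy to show the idea:

import numpy as np
from scipy.signal import butter, sosfilt

def podcast_eq(samples: np.ndarray, rate: int = 44100) -> np.ndarray:
    # Roll off below ~80 Hz (rumble, hum) and above ~15 kHz (nothing useful
    # for speech up there) with a 4th-order Butterworth band-pass
    sos = butter(4, [80, 15000], btype="bandpass", fs=rate, output="sos")
    return sosfilt(sos, samples)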

This step can sometimes amplify and bring out noise that previously was hard to detect. I’ve found that Toni’s microphone has a tendency to be a bit noisy after this step, so in his case I throw in an extra noise reduction step just to clean that track even more.

Finally, I cut out the dead air we recorded at the beginning for the noise profile, as it won’t be needed anymore.

Loudness

The newest step in the chain came about when I grew tired of having episodes with varying volume. Technically they vary in loudness, but the end result is that our listeners can’t listen to two episodes back-to-back without having to fiddle with the volume knob. That had to go. Unfortunately, this is a pretty involved step, but here’s how I do it.

The free Audacity plug-in called “Youlean Loudness Meter” can help me figure out the loudness per track. I select the track I want to check, bring up Youlean and click “apply”. The plug-in does a quick (silent) listen through the track and gives me its integrated loudness (indicated in blue in the picture below).

Settings for the Youlean loudness meter plug-in, aiming for -19 LUFS/LKFS
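
As an aside, the same measurement can be scripted: the Python library pyloudnorm implements the ITU-R BS.1770 standard that LUFS/LKFS figures come from. A sketch with a hypothetical filename:

import pyloudnorm as pyln
import soundfile as sf

data, rate = sf.read("host_track_clean.wav")  # hypothetical filename
meter = pyln.Meter(rate)                      # ITU-R BS.1770 loudness meter
print(meter.integrated_loudness(data))        # integrated LUFS, like Youlean's blue figure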

My target is -19 LUFS/LKFS for mono tracks. What the heck is LUFS or LKFS? Look here for an explanation of LUFS/LKFS for podcasters. Don’t worry if it says “stereo” down in the lower left-hand corner of the window – it is only displaying the loudness of the track I’ve selected. To get to -19 LKFS from -17.7 LKFS I use Audacity’s amplify plug-in. The thing with the amplify plug-in is that it can take both positive and negative values. In this case we’re going for a negative value, as we want to decrease the loudness of the track from -17.7 to -19. According to plain old math, 19 - 17.7 = 1.3, so we’ll put in -1.3. Finally, we check the results using Youlean one more time to make sure we’re around -19 integrated (i.e. over time, not momentarily).
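
Since LUFS is a dB-style scale, plain subtraction gives the gain to apply – simple enough to sanity-check in a one-liner:

def gain_to_target(measured_lufs: float, target_lufs: float = -19.0) -> float:
    # The value to type into Audacity's amplify effect:
    # negative attenuates, positive boosts
    return target_lufs - measured_lufs

print(gain_to_target(-17.7))  # -1.3: pull the track down by 1.3 dB
print(gain_to_target(-21.0))  # 2.0: boost a too-quiet track by 2 dB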

Since that was so much fun, we get to do it again for all the other tracks!

The end result is a set of mono tracks, all normalized to -19 LKFS/LUFS. This is both the unofficial “podcast standard” and a consistent number I can use going forward.

Mix, render and intro music

With most of the post-processing flow done, it’s time to mix and render the audio streams into one mono stream. I then add the intro music file (already normalized to -19 LUFS as above). After shifting things around, I mix and render again and am left with a stereo file almost ready for public consumption.

The final step is to make sure that the stereo track is normalized to -16 LKFS/LUFS using Youlean. Why -16? Didn’t we jump through all those hoops to get to -19? Yes we did, but that was for mono, remember? This is stereo, and -16 in stereo equals -19 in mono. Deal with it.

With all the post-processing done I export the results as an MP3 file using the following settings:

Settings for the MP3 export

Summary

Getting the audio to where I want it involves a lot of work. The final episode of 2019 is probably the one with the worst audio – ever. Audacity decided to record my Skype headset instead of my ProCaster, and I only realized when we were done. That episode boils my blood to this day, and I vowed never to inflict such a crappy episode on my listeners again.

The steps I’ve gone through are the following:

  1. Extract audio from the Teams recording
  2. Use that file to line up the other files
  3. Edit for silences, clicks, “umm…”, coughs, breathing, etc.
  4. First round of noise reduction
  5. Chris’ Dynamic Compressor
  6. Equalization
  7. Second round of noise reduction (if needed)
  8. Loudness adjustment per track using Youlean
  9. Mix, render and add intro music
  10. Final check with Youlean
  11. Export to MP3

The plug-ins I use, apart from what comes with Audacity, are:

Chris’ Dynamic Compressor

Youlean Loudness Meter

Phew. With that out of the way, the fourth and final part of this blog series will deal with getting the episode and word out!

The Tech of Knee-Deep in Tech, 2020s edition – part 2

In part 1 of this blog series I outlined the gear we use to record the podcast. This part will focus on the techniques and settings we use to record an episode.

  1. The gear
  2. Recording
  3. Post-processing
  4. Publication

We share a OneNote file with ideas and information for every episode. We kind of screwed up the episode numbering early on: while we’re technically on episode 104 at the time of this writing, we have recorded several more specials and weirdly numbered episodes. This goes to show that not all ideas are good ideas in the long run, and that starting a podcast isn’t easy… We generally aim for 30 minutes. Some episodes run longer, some run shorter. It always surprises me how difficult it is to judge time while recording.

Some things we’ve had to learn the hard way. Sound treatment matters – more or less depending on the type of microphone used. Proper microphone technique matters even more – especially with condenser microphones. It is easy to breathe loudly, and that means more work for me in post-production. The same goes for distance to the microphone: changing the distance will immediately affect the loudness of the signal – again leading to more work for me in post-production. Clicking the mouse or typing at the keyboard isn’t something you notice while recording, but trust me – the microphone does.

Recording the hosts

All of us record audio locally. This way we’re not dependent on the internet, and should we lose communication we still have all the recorded raw audio. Toni and I use Audacity to record audio and Simon uses his Zoom H6. We all use the same 44.1 kHz sample rate for recording. A small tip about Audacity: when preparing to record, we always save the project in Audacity first, then record. This seems to mitigate some of the weirder bugs in Audacity that can screw up the audio if the project wasn’t saved before recording.

We set our levels individually and aim somewhere close to -18 dB. This gives me ample room to work with the audio in post-production without having to worry about peaks in the audio clipping. To set the levels in Audacity, click on the recording level at the top of the main window. Then talk normally into your microphone as you turn the gain knob on the audio interface. The goal is to find a gain setting that gives you roughly -18 dB on average on the meter.
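
Audacity’s meter does the measuring for us, but for intuition, the average level can be computed like this (a sketch, assuming a mono float track scaled to [-1, 1]):

import numpy as np

def average_level_dbfs(samples: np.ndarray) -> float:
    # RMS level in dB relative to full scale; while setting gain we
    # tweak the knob until readings hover around -18
    rms = np.sqrt(np.mean(np.square(samples)))
    return float(20 * np.log10(rms + 1e-12))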

While we record locally, we also use a Microsoft Teams video chat for audio sync. It works like this: we all wear a secondary headset (on Simon it looks extra weird, as he has two head-worn microphones) connected to Microsoft Teams. This way we can see each other as well as use that audio stream to sync the locally recorded streams. At the beginning of every episode I clap loudly twice, then Simon and Toni do the same. This gives me easy-to-spot spikes in the audio files that I can use to line all the files up in post-production.

After the clapping bonanza it’s time for five seconds of quiet. This gives me ample data to use for noise reduction later in post-production. Then it’s time to start the show with the usual “Hello and welcome! I’m Alexander…” and off we go.

When we’re done recording, Simon and Toni export the audio as 16-bit WAV (we might look at going up to 24-bit down the line) and put the files in my “incoming files” directory on OneDrive. I save the Microsoft Teams stream to this folder as well. All these files are then collected into a raw folder and used as raw input for editing. I never change these original files and keep them as backups.

Recording with remote guests

If we have a remote guest, things become trickier. We need to balance good audio with a minimum of hassle for the guest. The obvious choice would be Skype or Teams, but the sad truth is that the audio quality simply isn’t anywhere near what we need. The least bad solution we’ve settled on is Zencastr. This tool does all the things we do above – automatically. It creates a channel where everyone hears each other while still recording audio locally. Zencastr automatically sends all the resulting files to a Dropbox account when the recording is done. In fact, Zencastr would be the PERFECT tool as it simplifies the workflow tremendously – if it wasn’t for one tiny thing. Audio drift.

In short, most of the time we use Zencastr, the files don’t line up. It would be easy if it was a linear drift, i.e. if the files were a few seconds longer or shorter – that way I could just increase or decrease playback speed ever so slightly. Unfortunately this is not the case. The drift is completely random. This means that I need to chop the episode into a gazillion tiny bits and pieces and then try to move them around to make a whole. That kind of does away with the whole point of Zencastr. I’ve spoken to their excellent support, but this “happens from time to time, especially on slow computers”. I assure you that none of the hosts has a slow computer… Things might be better if we used the paid version and exported in WAV – I honestly don’t know.

The other alternatives we’ve found have been prohibitively expensive. They also try to do a lot of other things beyond recording, which is something I’m not prepared to pay for at this time.

Someone said that the software Reaper might be a solution for this scenario. I have yet to check that out.

Despite the issues with audio drift and the fact that the free version of Zencastr only supports a total of three people, it is still the best choice for recording remote guests – from both a technical and financial standpoint. I will most likely have to do some cutting and splicing to get the audio to line up – but having to do that for one track beats having to do it for all tracks.

Recording on-site

Recording on-site is very similar to recording just the hosts. We all use the same kind of microphone (AKG C520/C520L) connected to the Zoom H6 previously mentioned. The levels are again set for -18 dB. As the three of us (and the guest!) will have slightly different loudness, it is key to set the levels per channel. We rarely find ourselves with more than one guest. The Zoom H6 is indeed a six-channel recording device, but it only has four XLR inputs. The last two channels can either be recorded with the included X/Y microphone (not a good idea) or via the 3.5mm input jack on the X/Y microphone (an excellent idea).

As this is a stereo jack, we have a choice. Either we bust out two Rode SmartLav+ lapel microphones and a 3.5mm stereo-to-mono splitter, or we use the Samson Go Mic Mobile receiver and two belt packs with microphones. The receiver outputs a stereo signal (one channel for each transmitter) that can be fed straight into the Zoom H6.

As the AKG microphones are condensers, we try to find a good recording area. We’ve found that while it might seem like a good idea to record in an empty and quiet room, the reverb will be evident on the recording.

It might actually be a better choice to record out in a hallway, despite the murmur and sound of passers-by. It’s all down to judgement at the time of recording, really. Here it is absolutely vital to leave 5–8 seconds of quiet at the beginning for noise reduction. But do be careful with the noise reduction in post-production, as it might not behave as expected with a noisy background!

A tip for using a head-worn microphone with a guest not used to them: be wary of facial hair. It is easy to miss that a moustache is constantly scratching the foam around the microphone, but the microphone will not miss it. At all.

The next post in the series is about post-processing, and I can already warn you that it’ll be long and technical. Act accordingly.

Podcast episode 104

In this episode (a.k.a “missing in Milan”), Toni and Alexander talk about the Challenger disaster 34 years ago, a much smaller SQL Server disaster this week, Toni’s experiences moving AD around and finally the released information about the Power Platform release wave 1!

Subscribe to us on Spotify, on iTunes or listen to the episode right here:

Show notes:

https://docs.microsoft.com/en-gb/power-platform-release-plan/2020wave1/business-intelligence/planned-features

The Tech of Knee-Deep in Tech, 2020s edition – part 1

Matthew Roche pinged me on Twitter the other day to ask about the workflow and gear I use to record Knee-Deep in Tech. After responding on Twitter I decided it was time to do a new round of “The Tech of Knee-Deep in Tech”.

I’ll divide this into several blog posts due to the sheer amount of information.

  1. The gear
  2. Recording
  3. Post-processing
  4. Publication

Let’s kick off the first part of the series – the gear.

Recording on the move

When we’re at the same place to record (which is rare these days) or when we’re out and about and do interviews with people, we use a Zoom H6 recorder and AKG C520/C520L microphones.

The difference between them is that the C520L has a full-size XLR connector while the C520 has a mini-XLR connector. We use the Samson PM6 phantom power adapter to connect a C520 to the Zoom. We make sure to record all audio on separate tracks. These are condenser microphones that are highly directional and deliver an excellent, clean signal.

While the audio will suffer if we’re in a less-than-ideal recording space (lots of noise around us, or a big, empty room with lots of reverb), I find that the sound is still better with head-worn microphones than if we were using traditional microphones.

Having the microphones literally on people’s faces reduces the risk of talking around the microphone, as everyone is looking at everyone else during the conversation. This also avoids the need for bulky shock mounts and the like. The whole setup (Zoom plus four mics) breaks down and travels easily in a Thule Subterra PowerShuttle Plus padded bag.

For doing impromptu interviews on, for instance, an expo floor or on video, we use the Samson Go Mic Mobile. I have one handheld mic and a receiver, and Simon has one handheld, two belt packs and one receiver. We then pull the stereo signal from the receiver into the Zoom H6 using the 3.5mm stereo input on the Zoom X/Y microphone adapter. What’s nice about the belt packs is that they use mini-XLR plugs – the same connector that the AKG C520 microphone uses. Sometimes we just need two more channels quickly. In that case we bring out the Rode SmartLav+ lapel microphones. They connect (through a 3.5mm stereo-to-mono splitter) to the 3.5mm input jack on the X/Y microphone.

In rare cases we don’t want the microphone to be overly visible. We solve this by using the Rode SmartLav+ lapel microphone connected to the Rode VXLR+ 3.5mm-to-XLR adapter. It features something called “plug-in power” that drives the lapel microphone just as if it were connected to a “normal” 3.5mm input jack.

This image shows most of the components – absent are the belt packs as I didn’t have them at the time of writing.

Recording at home

Most of the time we’re not in the same physical location. This means we need to record remotely, and that took some thinking to figure out. More about that in the second part of this series.

We are currently three hosts, each with their own gear setup:

I run a Rode ProCaster dynamic microphone on a Rode mic stand. The signal runs through a CloudLifter into a Motu M2 audio interface connected to my main computer. I have acoustically treated my home office, but not overly so.

The CloudLifter can’t be seen as it is tucked away underneath my desk.

Simon is using the same AKG C520 condenser microphone connected to his Zoom H6 as he uses when we’re out and about. He has done some acoustic treatment in his recording space, but not too much.

Toni is sporting a HyperX QuadCast USB condenser microphone, run straight into his main computer. He will soon install some serious acoustic treatment.

Microphone choice

Dynamic

A dynamic microphone is most often better for less-than-ideal recording spaces. This is due to how the microphone works and how it picks up the characteristics of the room. One of the drawbacks of a dynamic microphone is the lack of a built-in amplifier: some microphones require a lot of gain in order to produce a usable signal, and depending on the amplifier this can introduce more or less noise. The ProCaster is a “heavy” microphone in the sense that it requires a serious amount of gain from the amplifier to produce a usable signal. This generally results in noise, as we are also amplifying the inherent noise of the amplifier.

There are a few ways to combat this. The most obvious is to opt for a really clean amplifier/audio interface. “Clean” in this case unfortunately often means expensive. The Motu M2 is a brand-new audio interface with an extremely clean amplifier that is still surprisingly inexpensive. It can drive the ProCaster without introducing a lot of noise.

The other choice is to use a pre-amplifier. I bought a CloudLifter to get a good signal from the old mixer. This is an in-line pre-amplifier that helps drive the microphone and still produces an exceptionally clean audio signal. I still use the CloudLifter with the Motu M2 as the signal it creates is simply amazing.

Condenser

The condenser microphones Simon and Toni use present their own challenges. Condenser microphones have built-in amplifiers – that’s why they require power from either the audio interface or the computer itself. They are much more sensitive and tend to pick up room characteristics much more clearly. This in turn means that good room treatment is important – and that’s why Toni is working on setting up acoustic panels in his home office. The reverb is more pronounced in Toni’s audio than in either Alexander’s or Simon’s.

Simon has done some acoustic treatment of his recording space. The treatment is concentrated behind his monitor, and the furniture and carpets in the room help soak up the reverb. The fact that the AKG microphone is a head-worn microphone that sits right in front of his mouth also means that the room reverb becomes less of a problem.

Conclusion

We spent a lot of time and effort on finding good equipment. My extremely good hearing makes me cringe whenever I listen to poor audio, and I refuse to force crappy audio on people who choose to spend their time listening to our podcast. That’s why I will spend an inordinate amount of time and money to make sure the audio is as good as I can make it. There is only so much one can do in post-processing, and having this gear goes a long way towards ensuring the raw audio is as good as we can make it. With great raw audio I have all the tools available to really screw things up in post-production!

Podcast episode 103

In this episode the trio talk about the finally unveiled dates and location for Microsoft Ignite, Microsoft Business Applications Summit switching dates and locations, Microsoft planning on a horrible browser hijack, a weird Power BI performance issue and the recent security announcements.

Subscribe to us on Spotify, on iTunes or listen to the episode right here:

Show notes:

Podcast episode 102

We’re back! Episode 102 is the first episode of the new year and the new decade. We talk about Windows Server 2008/R2 going end-of-support on the 14th (a.k.a. the day we recorded the episode), as well as what the plan looks like for us going forward in 2020!

Subscribe to us on Spotify, on iTunes or listen to the episode right here:

Show notes: