Mr Bates vs The Post Office, or Why Integrity Matters In This Industry

A TV programme is not usually the subject of this blog. What is occasionally the subject of this blog is the application of IT and the real human impact it has. People like me got into IT with a view to the betterment of society. I am not a TV watcher myself, but after being asked many times about Mr Bates vs The Post Office I decided to take the time to watch it and understand its content.

The programme rightly focuses on the human impact of the scandal. Whilst the technical causes of Horizon’s failings are not explored in great detail, there are some points pertinent to the IT industry.

The Software Is Robust?

This claim is made by the then head of the Post Office, Paula Vennells, in a phone conversation. It is a problematic claim to make; all software has the potential to contain bugs. As above, the programme does not go into great depth regarding the technical aspects of the Horizon IT system, but issues with multiple POS terminals and a PIN pad are raised.

The IT industry should always assume room for error within a system and the processes by which people use it. Concerns and patterns of issues should have been picked up by both Fujitsu and the Post Office long before this became a national scandal.

Remote Access?

There is one particular theme running throughout the programme: remote access to systems. Remote access is a normal part of supporting modern IT systems; however, in the case of the Post Office, Fujitsu and Horizon, things went badly wrong.

It is explained in the programme that Fujitsu employees made remote connections to Post Office branch systems and made “corrections” without the knowledge of the sub-postmasters. Under no circumstances should remote access to a system be made without consent. The results of the remote access session should be recorded, and any corrective action made to a ledger should be made under an administrative or support account so that the action is not logged as though it were performed by the user who reported the fault.

It is inevitable that, as a result of the programme, the idea of remote access to systems will be challenged more. However, given the trade-off between rapid support and the confidentiality, accuracy and legitimacy of information, I expect we will remain in a fully “remote accessible” world.

Non-disclosure Agreements?

The use of Non-disclosure Agreements (NDAs) is brought up several times in the programme. In the case of the Horizon system they were used by the Post Office to prevent victims of the scandal from discussing their settlements with other sub-postmasters. The ethics of NDAs continue to be an issue for the industry. Whistleblowers should always be empowered and protected by law to ensure that they are able to raise issues with their employer or the authorities.

Lack of Training & Support

In the programme one of the victims, Jo Hamilton, states that she is not confident with either technology or accountancy. There is another lady later on in the series who joins the JFSA group and reports Horizon recording losses when the power repeatedly goes out at her branch.

Both of these cases highlight the lack of support given to sub-postmasters for using a system which is supposed to help them run a Post Office. There is also a constant theme raised in the programme of helpline staff telling victims that “they are the only ones having this problem”.

I have often thought that labelling someone a “user” outside of a design context is problematic. We often use words in IT that dehumanise. In the case of the Horizon scandal this also underlines why effective training and support should always be available to the people using a system.

The Future?

The themes discussed in Mr Bates vs The Post Office should be a wake-up call to anyone in the IT industry – regardless of their job description – that cover-ups and complacency with the truth have real human impact. Sub-postmasters were wrongly convicted of fraudulent accounting and theft on the basis of faulty evidence. Livelihoods have been lost, and some have tragically taken their own lives as a result. Compensating these people is the only right thing to do.

For further reading I recommend Computer Weekly’s Everything you need to know article. They were the first publication to investigate the story after receiving letters from Mr Bates and other sub-postmasters, and they have done great work tracking it over a considerable amount of time.

There is a character in the programme, Robert Rutherford – who I understand to be a composite of two Second Sight forensic accountants – who, when Jo Hamilton asks during her interview “can I ask a stupid question?”, responds with “There are no stupid questions, only stupid answers”. Damn right.

Updating VMware Tools for VMware ESXi

A bit of a segue from the usual SQL Server posts this week, but I wanted to share a recent challenge I encountered with a VMware ESXi host. I needed to ensure that all high CVSS vulnerabilities were resolved for upcoming Cyber Essentials compliance. This necessitated an upgrade of a VMware ESXi host as a first step, as the version in use had known vulnerabilities.

What I found after upgrading the ESXi host, however, was that this did not fully resolve the security concerns. Our Microsoft Defender for Endpoint dashboard picked up that all the servers on the host were still using a vulnerable version of VMware Tools. Upon further investigation, despite the tools having been upgraded to the higher version included with the ESXi release, there were still unresolved issues with that particular version. It would be necessary to upgrade to an even more recent version of VMware Tools from VMware’s downloads site to fully resolve the vulnerability findings.

You most certainly could log on to each server and perform the tools upgrade manually; however, if your ESXi host has as many servers as ours, the process to upgrade each one might take some time and cause service disruption. After all, this is IT and we pursue the noble art of automation in all areas, right?

I have to apologise for the lack of screenshots in this post as this was done on a server I’m not privileged to take screenshots of but hopefully you can make sense of the steps below. Comment below for any clarifications required.

  1. Ensure compatibility of the updated VMware tools using the VMware Compatibility Guide and then test on an isolated server – you don’t want to risk potential downtime by installing a version of VMware tools that has a compatibility problem.
  2. Enable SSH access to the ESXi host – to do this open the Host tab in the ESXi web front-end then go to Actions > Services > Enable Secure Shell (SSH).
  3. Log on to the ESXi host using an SCP tool such as WinSCP – once in, navigate all the way to /vmimages/tools-isoimages. You’ll notice that this contains ISO images and manifests for Windows and Linux versions of the VMware tools. Note that this folder is actually a symlink to a folder at /vmfs/volumes/<GUID>/packages/<version>/vmtools.
  4. Backup the contents of the folder – just in case y’know.
  5. Get a copy of the VMware Tools and upload it into /vmimages/tools-isoimages. Make sure you overwrite what’s in the folder and include all the files from the download (a scripted alternative to a GUI SCP client is sketched just after this list).
  6. Disable SSH access by repeating step #2 – reducing the ESXi host’s attack surface area is always a good idea.
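If you’d rather script step 5 than use a GUI SCP client, a minimal Python sketch using the paramiko and scp libraries might look like the following. The host name, credentials and file names are placeholders, and it’s worth verifying the upload behaves as expected on a test host before relying on it.

import paramiko
from scp import SCPClient  # pip install paramiko scp

# Placeholders -- substitute your ESXi host, credentials and the files from the VMware Tools download.
ESXI_HOST = "esxi01.example.local"
USERNAME = "root"
PASSWORD = "********"
LOCAL_FILES = ["windows.iso", "windows.iso.sha", "windows_avr_manifest.txt"]
REMOTE_DIR = "/vmimages/tools-isoimages"

ssh = paramiko.SSHClient()
ssh.set_missing_host_key_policy(paramiko.AutoAddPolicy())  # fine for a one-off; verify host keys properly in production
ssh.connect(ESXI_HOST, username=USERNAME, password=PASSWORD)

scp = SCPClient(ssh.get_transport())
for name in LOCAL_FILES:
    # Overwrites any existing file of the same name in the tools-isoimages store
    scp.put(name, remote_path=f"{REMOTE_DIR}/{name}")
    print(f"Uploaded {name}")
scp.close()
ssh.close()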

Shortly after completing the above process the ESXi host will automatically pick up that there’s been a change to the VMware Tools in the store. From that point you can upgrade using one of the following methods:

  1. Right click the VM > Guest OS > Install/Upgrade VMware Tools…
  2. If the option to automatically upgrade tools is selected for a VM then reboot it and it will handle the upgrade itself.

You should at this point have VMs running the latest VMware Tools, which you can verify from the VM list. You can add in a column for the VMware Tools version to check all VMs without logging on to each one.
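If you’d prefer to check programmatically rather than via the UI column, a short sketch with the pyVmomi library (assuming you have it installed) can list every VM along with the tools version it reports. The host name and credentials are placeholders and certificate verification is switched off purely for brevity.

import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

# Placeholders -- point these at your ESXi host or vCenter.
HOST, USER, PWD = "esxi01.example.local", "root", "********"

context = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
context.check_hostname = False
context.verify_mode = ssl.CERT_NONE  # lab shortcut; use proper certificates in production

si = SmartConnect(host=HOST, user=USER, pwd=PWD, sslContext=context)
try:
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(content.rootFolder, [vim.VirtualMachine], True)
    for vm in view.view:
        # toolsVersionStatus2 indicates whether the installed tools are current, unmanaged or out of date
        print(f"{vm.name}: tools {vm.guest.toolsVersion} ({vm.guest.toolsVersionStatus2})")
finally:
    Disconnect(si)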

Seriously, Stop Using Windows Server 2012 & 2012 R2!

(Also SQL Server 2012 please)

Extended support for Windows Server 2012 and 2012 R2 expired on October 10th 2023. We’re coming up to November 2023’s Patch Tuesday which means that there’s really, really, really no life in Server 2012 or 2012 R2 any more in case that first deadline wasn’t important enough. Hacking crews out there will highly likely be able to spot a vulnerability in Server 2012 / R2 by checking out the vulnerabilities for Server 2016 and newer. So in other words if you’ve not planned to be off Windows Server 2012 / R2 by now you’re a bit stuffed. That is unless your organisation’s forking out for Extended Security Updates in which case you can breathe easy a bit longer.

If you are in the UK and have Cyber Essentials renewals coming up, you either need to be shut of the servers or segregate them somewhere off the main network in their own retirement VLAN before the audit starts, otherwise you’ll fail it. Don’t say I didn’t warn you.

Don’t Just Move It To Azure!

Yes, it’s true that you can move your server to Azure and get an extra three years of security updates included in the price of the VM service. Three years sounds like a lot of time, but it will run down before you know it. So don’t kick the proverbial can down the proverbial road.

Moving a series of servers from a private cloud or your own IT infrastructure to a hyperscaler can be costly in direct costs for the VM (CPU, memory, operating system, disks, etc.), but it may also bring hidden costs: building remote access solutions, bringing in consultants and even patching the application. It’s generally cheaper to run VMs in a private cloud if they are needed 24/7, so check costs carefully.

Mark Your Calendars for Windows Server 2016 End of Extended Support

January 12th 2027. It’ll be here before you know it.

TLS on Windows is Dead. Long Live TLS on Windows But Also Avoid Losing Connectivity to SQL Server.

Microsoft have announced this week that future versions of Windows will disable TLS (Transport Layer Security) 1.0 and 1.1 by default. These ageing cryptographic protocols are designed to secure traffic over a network. The move is a bid to improve the security posture in Windows by ensuring that only newer versions of TLS are used between client and server applications.

TLS 1.0 and 1.1 were standardised in 1999(!) and 2006 respectively. Both were deprecated in 2021 via RFC 8996. Although Microsoft claims that no known unpatched exploits exist in its Schannel implementation, newer versions of TLS offer much better security. A number of bodies have mandated that these older versions be avoided; for example, the Payment Card Industry (PCI) has deprecated their use since 2018. There are a number of security flaws in both TLS 1.0 and 1.1 which mean we can no longer rely on them for securing traffic.

In addition, all major browsers have dropped support for anything prior to TLS 1.2 since 2020. As with all things in computing security, it’s best to be ahead rather than behind. There shouldn’t be any browsers or OSes out there that are still supported and can’t use at least TLS 1.2. I fully recommend keeping ahead of developments and planning accordingly to drop anything prior to TLS 1.2.

SQL Server and Applications Impacted

Although Microsoft believe, based on their telemetry, that usage of deprecated versions of TLS is low, it would be wrong to simply assume that you can turn off TLS 1.0/1.1 and call it job done.

If you aren’t sure how this will impact your business, it’s time to start with a review of your applications and how they will be affected. Soon there will be Windows desktops out there that definitely don’t support older versions of TLS out of the box. Whilst Microsoft have stated that you can re-enable TLS 1.0 and 1.1 via the Schannel registry keys in the meantime, you absolutely shouldn’t. There’s a reason things move on. Microsoft will at some point do the right thing and completely remove deprecated versions of TLS from the operating system. Putting off the problem won’t solve anything long-term.

Possibly the most direct way this affects SQL Server based applications is indeed the front-end. Many applications now work via a web UI rather than a Windows application. This is perhaps where your investigations should start.

For internet-facing applications you could run a test via Qualys, which will produce a useful report on how your server is configured. Scroll down and you’ll see the projected impact for client browsers and OSes, along with which versions of TLS they might use.

Yes I do run this test against this blog.
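For a quick check of your own, Python’s standard library can also tell you which protocol version a given endpoint negotiates with a modern client. The hostname below is just a placeholder.

import socket
import ssl

HOST = "example.com"  # placeholder -- point this at the server you want to test

context = ssl.create_default_context()
with socket.create_connection((HOST, 443), timeout=5) as sock:
    with context.wrap_socket(sock, server_hostname=HOST) as tls:
        # Reports the negotiated protocol (e.g. TLSv1.2 or TLSv1.3) and cipher suite
        print(f"{HOST} negotiated {tls.version()} using {tls.cipher()[0]}")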

If your applications are internal only, it’s not wise to assume that your wires and airwaves are safe even if you own them. For these you can check the Schannel registry keys at the following location:

HKLM\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Protocols\

You can check individual protocols at this location to see if they are enabled or disabled.
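On a Windows machine with Python available, a small sketch like this walks those keys and reports whether each protocol has been explicitly enabled or disabled; where no key or value exists, the operating system default applies.

import winreg

BASE = r"SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Protocols"

for proto in ("TLS 1.0", "TLS 1.1", "TLS 1.2"):
    for role in ("Client", "Server"):
        subkey = rf"{BASE}\{proto}\{role}"
        try:
            with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, subkey) as key:
                enabled, _ = winreg.QueryValueEx(key, "Enabled")
                print(f"{proto} {role}: Enabled = {enabled}")
        except FileNotFoundError:
            # No explicit setting -- the OS default for this protocol applies
            print(f"{proto} {role}: not configured (OS default)")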

Getting down to the SQL Server level, things get more interesting. Support for TLS 1.2+ exists out of the box as of SQL Server 2016, so those versions are good to go.

For SQL Server 2014, if you have instances configured with encrypted connections, this version needs a cumulative update applying before it will support TLS 1.2. By now you really, really should have applied an update beyond the version where this was introduced anyway. For any new deployments of SQL Server 2014 instances, remember to apply CUs after you’ve done the install.

For SQL Server 2008, 2008 R2 and 2012, things are arguably beyond the point as those releases are no longer supported. You can get yourself a hotfix to apply to those too, but unless this is for an application that’s segregated away in some corner of the network for legacy purposes, you’ve either got bigger things to worry about or yet another good reason to upgrade if this is a production system.

Potentially you’ll also need to update the client driver if applications use the SQL Server Native Client driver. Check the system requirements with the application vendor.

Last Week in Tech…

Two things caught my attention in the IT world this week.

The first was an unfortunate and catastrophic incident involving Danish cloud provider CloudNordic. The provider told customers this week that, following a ransomware attack, all their data was effectively lost and that it was neither prepared nor able to negotiate with the attackers. The firm believes the attack happened during a move from one data centre to another, as unknowingly infected servers were brought onto the same network as its administration systems. Thankfully it appears that no data was taken by the attackers; however, all the backups the company retained were also encrypted.

This isn’t and shouldn’t be a “told you so” moment or an argument against going to the cloud, but it does serve as a reminder that even though your systems are in the cloud you really need a complete and tested backup of critical data. If a situation like this arises and the provider is totally compromised, you may be left in a disaster recovery situation you can’t fully recover from. It doesn’t matter if you are a customer of a hyperscaler like Microsoft or AWS: whilst your data ought to be safe on their servers, that doesn’t negate the need for your own off-site backups.

I really do feel sorry for the engineers at CloudNordic right now and wish them the best for the recovery effort.

Second was a piece in ISP Review reporting that Ofcom has decided it is time to consult on the use of “Fibre” in marketing for broadband services in the UK which quite clearly aren’t fully fibre optic. We’re talking mainly about ISPs reselling Openreach-based VDSL2 – which is Fibre to the Cabinet (FTTC), with the country’s expansive copper telephone cables used for the “last mile” to the property – and Virgin Media, who use coaxial cables in a similar fashion.

Does anybody remember the halcyon days of the late 00s to late 10s, when ISPs marketing broadband were getting away with vague terms such as “Meg” to describe download speeds, “up to” for ADSL2+ technologies that couldn’t possibly offer the 24Mbps speeds advertised and – a personal favourite, this one – advertising services as “unlimited” while burying an actual limit in the so-called Fair Use Policy (or “FUP”)?

Yes. This would be the next battle in that ongoing saga.

It really should be a no-no that confusing terms like all of the aforementioned are allowed to creep into marketing materials. Advertising a product as “Fibre” (a technology whose attainable speed is theoretically limitless) despite copper being used in the delivery just means the consumer is receiving a service that is not strictly as described.

It’s important for the UK economy that we have access to fast, reliable and affordable broadband services, and advertising them fairly – which means accurately and correctly – is the first step in the uptake process for the end-user. We shouldn’t need terms like “full-fibre” to differentiate FTTC (Fibre to the Cabinet) from FTTP/B (Fibre to the Premises/Building), for example.

ProtonPass

This week the Proton team announced that their take on a password manager, ProtonPass, is now on general release. I tested the product during the beta phase, and now that a stable release has been announced I’ve started a trial with a view to adoption, which I intend to run until the end of July.

ProtonPass has launched with extensions for all the popular browsers such as Chrome, Edge, Firefox, Safari and more, as well as Android and iOS apps. Within the vault there is support for storing logins – for which you can also store 2FA codes – as well as notes. Proton stress that all data is end-to-end encrypted and protected under Swiss privacy laws.

I’m currently using BitWarden but decided ProtonPass was worth a go. Here’s what I’ve found so far.

Loading from my current password manager, BitWarden, was straightforward as ProtonPass can read the exported JSON. For some reason it stumbled on at least one record, a note containing an encryption key: ProtonPass imported the record but did not bring the key across with it. ProtonPass also doesn’t support filling in your addresses or credit/debit cards, which is a useful feature to have. For these records there was at least a note in the import wizard recording that they were skipped. It should be noted that there is import support for many other providers too.

Screenshot from the iOS app.

As noted above there is support for a wide range of web browsers and also both major smartphone OSes. Being a Firefox user on the desktop and an iOS user in my hand, I found no major issue installing either the extension or the app. I’ve found both to have a modern, clean and easy to use design which matches the design language of the existing Proton applications. All in all, no complaints here. It does for some reason struggle to autofill logins to this website, which seems to have something to do with the fact that this blog’s URL appears in several account emails in the vault.

There are some things missing in the product that you’ll find in other password managers, such as the ability to organise items into folders. ProtonPass has a “vaults” concept, but it did not create vaults based on the folders in my BitWarden import, so I am not 100% sure folders and vaults are analogous concepts. Password breach monitoring and reports are also missing, which I would like to see on the roadmap. These can alert you if an account has appeared in a breach and are also useful for spotting weak or reused passwords.

Possibly most missed, however, is a web vault. At the moment you need to use a browser extension or smartphone app to access your vault. Whilst this is OK – and arguably you use a password manager mostly from the apps and extensions – it does mean that if you want a larger UI to look through and organise your vault, you’ll miss having such an interface.

ProtonPass is free for the basic tier, which includes unlimited devices and logins, entries such as passwords and notes in the vault, and up to 10 hide-my-email aliases via SimpleLogin. The “PassPlus” option – €4.99 per month, or €1 a month on a 12-month plan – offers unlimited email aliases, 2FA and vaults to organise items. It is more feature complete but questionable value at €4.99 if you only want to commit month by month. If anything it would be best to either use the free tier or take it as part of an Unlimited or Family subscription.

At this point, if you don’t have a password manager, ProtonPass is worth a try. It is barebones in some regards, such as reporting and the missing web interface, but with strong privacy credentials from the Proton team it’s a solid option, particularly as part of a wider Proton subscription.

Pulling Teeth* or Pulling Contacts? 

I don’t know how this got complicated, but it did, so here’s a blog post on how I rescued a load of contacts off a Microsoft account without owning a Windows device and therefore Outlook on the desktop.

I’ve very much moved away from Microsoft as my email, calendar and storage provider. My new provider is Proton, a privacy-centric outfit based in Switzerland. The very last bit to move has been the contacts, which Proton can store in the Mail client but doesn’t sync to devices, so you can’t use them in the phone and messenger apps. I possess an Apple iPhone, and whilst I don’t use Apple iCloud for mail, calendar, etc., I was using it for Tasks. I decided contacts can be stored there for now.

I’ve tried various tactics to get my contacts away from Microsoft but nothing seemed to work. If I had to make an educated guess, this isn’t straightforward to do on a technical level as Microsoft’s Exchange and Apple’s iCloud (which is presumably an implementation of CardDAV) store information in different formats. Microsoft will spit out a CSV; iCloud only accepts contact cards. There’s never much motivation for a provider to make the export process any easier when it’s about migrating away, so I decided not to expect a straightforward time.

There’s probably a better way of doing this, but as I no longer own a Windows device here’s how I achieved the move, in the abstract:

  1. In Outlook.com export all the contacts into a CSV file. 
  2. Check your CSV file using your favourite text editor for any errors, duplicated contacts or anyone who’s unfortunately become a bit of an enemy. 
  3. In Evolution perform an import into the local contacts folder.
  4. Set up your iCloud Contacts account in Evolution using the CardDAV address https://contacts.icloud.com.
  5. Drag and drop all the contacts into the iCloud account.

You could do the process in fewer steps by importing directly to iCloud instead of the local Evolution folder; however, I found that Evolution would go unresponsive and not provide a progress indicator. I had 162 contacts and the process was slower overall when importing directly, i.e. it seemed to work faster importing locally and then copying to iCloud.

The caveat was that no matter what date format I used in the Microsoft CSV, the dates wouldn’t import into Evolution, and no error was produced to explain why. I had to manually re-enter the dates on my contacts in iCloud.
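Had Evolution’s import not cooperated at all, another route would have been converting the CSV straight into vCards and importing those into iCloud. Here’s a minimal sketch of that idea in Python; the column names (“First Name”, “Last Name” and so on) are assumptions about the Outlook.com export headers, so check them against your own file.

import csv

def csv_to_vcards(csv_path, vcf_path):
    # Writes one vCard per CSV row into a single .vcf file that iCloud can import.
    with open(csv_path, newline="", encoding="utf-8-sig") as src, \
         open(vcf_path, "w", encoding="utf-8", newline="") as dst:
        for row in csv.DictReader(src):
            first = row.get("First Name", "").strip()
            last = row.get("Last Name", "").strip()
            dst.write("BEGIN:VCARD\r\nVERSION:3.0\r\n")
            dst.write(f"N:{last};{first};;;\r\n")
            dst.write(f"FN:{(first + ' ' + last).strip()}\r\n")
            if row.get("E-mail Address"):
                dst.write(f"EMAIL;TYPE=INTERNET:{row['E-mail Address']}\r\n")
            if row.get("Mobile Phone"):
                dst.write(f"TEL;TYPE=CELL:{row['Mobile Phone']}\r\n")
            if row.get("Birthday"):
                # iCloud expects ISO dates (YYYY-MM-DD); reformat here if your CSV uses another format
                dst.write(f"BDAY:{row['Birthday']}\r\n")
            dst.write("END:VCARD\r\n")

csv_to_vcards("outlook_contacts.csv", "contacts.vcf")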

For my next trick I’m considering setting up a local contacts server such as Radicale.

(* Pulling teeth is an expression that means doing something that ends up being quite painful, like pulling out teeth without anaesthetic!)

Looking Back Where You Came From

If you’re stuck in life trying to move on, the internet might throw various quotes back at you in the vein of: if you’re trying to move forward then don’t look back. This blog isn’t about philosophy but sometimes it’s damn well close.

Recently I caught myself reminiscing about the earlier days of my career, when I’d joined the family firm and ended up taking charge of the IT infrastructure. When I started, this was one Fujitsu box running Small Business Server 2003 along with a rabble of Windows XP desktops dotted about. There were a few Vista laptops appearing at the time as well.

When I first joined the company I became aware of an issue that would strike at some point over the weekend. On almost all Monday mornings the internet would be down. Not every Monday but quite a lot.

The simple fix was to exclude the Internet Security and Acceleration Server (ISA, later succeeded by Forefront TMG) proxy cache file from the weekly full backup that Backup Exec was doing. That was the first major IT issue solved in my career.

Curvy boxes were an in thing back then.

Shortly after fixing that, the company decided it had outgrown the Small Business Server 2003 setup, and on the advice of our IT partner we decided to replace it with Small Business Server 2008. Windows 7 had also appeared, which was of keen interest to the company and its long-suffering employees stuck with that all-too-blue XP interface.

But if you don’t know much about the legendary Microsoft product that was Small Business Server, I’ll explain, dear reader.

The big idea with Small Business Server was to bundle together many core products vital to a growing business into one licence, at a reasonable price and all carefully designed to work together more or less out the box. It would then be up to an IT provider to design, implement and support the server. In addition if you needed it you could buy the shiny Premium add-on which granted a second Windows Server plus a licence for my favourite video game SQL Server (at the time the rather advanced SQL Server 2008).

Small Business Server 2008 provided Active Directory, Microsoft Exchange, File & Print, SharePoint and Windows Server Update Services, plus a backup solution built in. To manage all this the server offered a console which also reported on the status of the server as well as the clients connected to it. Somehow it was a product that was sold like an appliance, worked like an appliance but wasn’t actually an appliance.

(There’s probably something important I missed out in that list).

Did I mention this thing also provided Remote Desktop Web Gateway and a PPTP-based VPN? Yes indeed! This was a more innocent time in the age of the internet, when broadband lines weren’t quite as ubiquitous as they are now. You did have to run it behind a router, as having a second network interface was prohibited.

Behold! The SBS 2008 console.

But in my tenure as an SBS admin this simply wasn’t enough. Nope. We decided to add on Symantec’s Backup Exec and Sophos Endpoint AND Sophos PureMessage. Somehow it all continued to work together.

Back in the day this product was on one hand valuable for small/medium businesses wanting access to server technologies, but on the other it was questionable whether it was such a great idea to actually run it. By the standards of today it’s an absolutely crackers product for a small to medium business to run, because the sheer number of moving parts in the installation was asking for trouble.

I would be very surprised if there weren’t horror stories out there of SBS completely falling over, backups not working and entire businesses grinding to a halt. This product was arguably dangerous to run a small to medium business upon.

The world moved on from Small Business Server and the last version would be Small Business Server 2011 based on Windows Server 2008 R2. For the Windows Server 2012 era Microsoft would replace it with Windows Server Essentials and also nudge you towards Office 365.

By the standards of today it’s an absolutely crackers product for a small to medium business to run…

I owe a great deal to Small Business Server. What I learned running the product was the basis for the first 16 years of my career. After 9 years I moved on to a consultancy role and took with me the skills of managing Windows Server, Active Directory and, most importantly, SQL Server – arguably the last remaining “on-premise” skills now that the world is more cloud centric.

The most valuable lessons I learned from supporting Small Business Server?

First was to never run a server on RAID5, because whilst the storage is efficient (only one disk’s worth of capacity goes to parity) the performance was absolutely dire. Taking 10-20 minutes plus to reboot whilst emails were probably getting lost was unacceptable then and would be grounds for dismissal now.

Second was that, given the rise of email, instant messaging and, to a lesser extent, services like SharePoint, it’s absolutely vital to keep these afloat, and therefore a single box running everything is too great a point of failure in the business. It’s time to consider hosted or cloud for such things unless you have the resources to reliably host them and build adequate redundancy on site.

Third, now the product has gone, it’s always worth remembering that there was a time when we needed to run everything ourselves. In a cloud-first era someone still has to do the work in the datacentre to run all of this. Tip a thought to those individuals every so often and appreciate the work that gets put in.

(And no I don’t want Small Business Server back)

LastPass Breached But Don’t Give Up On Your Password Manager

In the news recently has been that the password manager service LastPass was infiltrated and password vaults were stolen. The gist of it is that attackers were able to gain access to the company’s development environment and, by extension, raid a backup environment containing customer password vaults.

Understandably, a lot of people out there who have used LastPass will be very worried. Within the IT profession, questions will begin over how this happened and how we should respond when consulting. From my perspective this isn’t a post to defend LastPass, explain the attack or analyse what they should’ve done. That’s a whole separate subject and, whilst those questions are important, what I’m going into here is the general theory of password managers, the immediate impact of the data loss on users and the potential security responses to it.

With the LastPass hack, much of the data in the stolen vaults was encrypted. In the December 22nd post from Karim Toubba the stolen data is described as:

The threat actor was also able to copy a backup of customer vault data from the encrypted storage container which is stored in a proprietary binary format that contains both unencrypted data, such as website URLs, as well as fully-encrypted sensitive fields such as website usernames and passwords, secure notes, and form-filled data.

Karim Toubba, LastPass, December 22nd 2022

The attacker was also able to retrieve customer account details such as email addresses, billing addresses, phone numbers and the IP addresses used to access the vault. Whilst it might appear that customers got off lightly, this data is still sensitive, and we’ll come back to that in a bit.

Concerning the stolen vaults, each one was protected with AES-256 encryption using a secret derived from the user’s master password. If users have gone with the security defaults of a 12-character password and 100,100 iterations of the Password-Based Key Derivation Function (PBKDF2), then it would take a considerable amount of processing power – potentially millions of years with the technology currently commercially available – to crack a properly secured vault. The attacker would have to keep hold of the trove long enough for a weakness to be found, or wait for the right computing power to become available, by which point the data could be useless anyway.
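To illustrate what that key stretching looks like, here is a minimal Python sketch of PBKDF2 with the 100,100 iteration default mentioned above. The password, salt and output length are illustrative assumptions rather than LastPass’s exact scheme.

import hashlib

master_password = b"correct horse battery staple"  # illustrative only
salt = b"user@example.com"                         # assumption: a per-user value such as the account email
iterations = 100_100                               # the default iteration count discussed above

# Each candidate password an attacker tries costs 100,100 HMAC-SHA256 rounds,
# which is what makes offline brute force of a strong master password impractical.
vault_key = hashlib.pbkdf2_hmac("sha256", master_password, salt, iterations, dklen=32)
print(vault_key.hex())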

Based on what LastPass has said I would remain cautious, but for anything hyper-sensitive in the vault – email accounts, banking or financial accounts, social media accounts and medical accounts – I would change the passwords immediately as a swift precaution.

As touched on above, however, customer account data was not kept encrypted. Arguably the biggest risk LastPass users now face is that the attacker has enough information to run targeted phishing against victims, either to obtain the master password for the vault or credentials for specific websites of interest they’ve identified.

My recommendation on password managers still stands the same, and that is to combine a password manager with a separate multi-factor authentication (MFA) tool for the vault and the logins it contains. At the moment my favoured tools are Bitwarden combined with Yubikeys. An important note on what I’ve just said there: I do not use Bitwarden to store MFA codes. Whilst it might be a good idea to add MFA codes to a vault if you for some reason have to share the account with a team, on balance they really should be separate. Another good point to make is that MFA backup codes don’t belong in the vault either.

YubiKey. Tethered to an old Intel Xeon so it can’t escape.

Yes, there is a considerable argument that an offline password management tool like Keepass is a much safer option, but that in itself brings its own problems. What happens if you lose the vault and your backup is insufficient? What happens if you can’t get to the vault in a critical moment because it’s not on your person? What if the vault is stolen from you and you hadn’t applied security practices as good as you thought? As always with security, it’s a battle between the most secure thing and the most convenient option. Personally I stick with the online option so I don’t have to worry about any of this.

In short: don’t give up on password managers. The benefits of having them far outweigh going back to a shared password for all your accounts. As long as LastPass users had a decent master password on their vault, applied MFA to sensitive accounts, changed passwords for anything hyper-sensitive and, most critically, remain watchful for phishing attempts, then I would hope that victims will remain safe from mass attacks.

I Survived Consulting in 2022

That’s it for 2022. I packed away my work laptop and phone after submitting my final timesheet of the year. Overall it’s been a great year working hard, responding to the challenges of modern working and supporting organisations whatever their mission may be.

Lots happened for me in 2022. Professionally I ascended to membership of the British Computer Society, passed a few Microsoft exams and also formally adopted permanent working from home. In my private life I helped pull off a successful beer festival and bonfire as part of Mirfield Round Table, and I got close to my goal of swimming 10k by swimming… 9k… but I also had my heart broken a couple of times :’-(.

Key Anticipations for 2023

It’s getting a lot cloudier out there. For my part in this I’m going to be focusing a lot lot more on cloud hosted applications whether that be lifts n’ shifts to public cloud VMs or migrating clients to cloud native solutions. Fact is they don’t want anything “on-prem” anymore. Fine by me.

I also anticipate we’ll be talking more about general ethics in IT. Whether that be privacy concerns, making the profession more inclusive or ensuring that we are safeguarding the planet for future generations we do have our work cut out for us and it’s critically important we rise to that challenge.

We’re also inevitably going to see a lot more challenges regarding security, stability and connectivity. As we move to an (arguably) post-“Wintel” desktop and server world – one that’s more cloud native and ARM powered – we will see opportunities and problems arise. A constant challenge of mine is getting applications into the hands of users in a variety of settings, devices and conditions. My personal challenge for 2023 and beyond will be to make sure I can do that for people who aren’t “Wintel native”.

However your 2023 looks I wish you a Merry Christmas and a Happy New Year.