IT administrators are struggling to deal with the ongoing fallout from the faulty CrowdStrike update. One spoke to The Register to share what it is like at the coalface.

Speaking on condition of anonymity, the administrator, who is responsible for a fleet of devices, many of which are used within warehouses, told us: “It is very disturbing that a single AV update can take down more machines than a global denial of service attack. I know some businesses that have hundreds of machines down. For me, it was about 25 percent of our PCs and 10 percent of servers.”

He isn’t alone. An administrator on Reddit said 40 percent of servers were affected, along with 70 percent of client computers stuck in a bootloop, or approximately 1,000 endpoints.

Sadly, for our administrator, things are less than ideal.

Another Redditor posted: "They sent us a patch but it required we boot into safe mode.

"We can’t boot into safe mode because our BitLocker keys are stored inside of a service that we can’t login to because our AD is down.
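
Once an affected machine can be booted into safe mode, the widely reported workaround was to delete the faulty CrowdStrike channel file. A minimal sketch of that step, assuming the publicly reported `C-00000291*` filename pattern and driver directory (illustrative only, not an official remediation tool):

```python
# Hypothetical sketch of the widely reported manual fix: after booting into
# Safe Mode (which is where the BitLocker recovery keys become the blocker),
# locate and remove the faulty CrowdStrike channel files. The path and the
# C-00000291* pattern come from public reports; treat this as illustrative.
from pathlib import Path

def find_faulty_channel_files(driver_dir: str) -> list[Path]:
    """Return channel files matching the reported faulty pattern."""
    return sorted(Path(driver_dir).glob("C-00000291*.sys"))

def remove_faulty_channel_files(driver_dir: str) -> int:
    """Delete matching files; returns how many were removed."""
    files = find_faulty_channel_files(driver_dir)
    for f in files:
        f.unlink()
    return len(files)

# Typical target directory on an affected Windows host:
# remove_faulty_channel_files(r"C:\Windows\System32\drivers\CrowdStrike")
```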

  • catloaf@lemm.ee

    We can’t boot into safe mode because our BitLocker keys are stored inside of a service that we can’t login to because our AD is down.

    Someone never tested their DR plans, if they even have them. Generally locking your keys inside the car is not a good idea.

    • Zron@lemmy.world

      I remember a few career changes ago, I was a back room kid working for an MSP.

      One day I get an email to build a computer for the company, cheap as hell. Basically just enough to boot Windows 7.

      I was to build it, put it online long enough to get all of the drivers installed, and then set it up in the server room, as physically far away from any network ports as possible. IIRC I was even given an IO shield that physically covered the network port for after it updated.

      It was our air-gapped encryption key backup.

      I feel like that shitty company was somehow prepared for this better than some of these companies today. In fact, I wonder if that computer is still running somewhere and just saved someone’s ass.

    • ripcord@lemmy.world

They also don’t seem to have a process for testing updates like these…?

This exposes some really shitty testing practices at a ton of IT departments.

      • catloaf@lemm.ee

        Unfortunately, the pace of attack development doesn’t really give much time for testing.

        • ripcord@lemmy.world

More than the zero time that companies appear to have invested here.

          • TonyOstrich@lemmy.world

            I was just thinking about something similar. I can understand wanting to get a security update as quickly as possible, but it still seems like some kind of rolling update could have mitigated something like this. When I say rolling, I mean for example split all of your customers into 24 groups and push the update once an hour to another group. If it causes a massive fuck up it’s only some or most, but not all.
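
The staged-rollout idea above can be sketched in a few lines: partition the fleet into 24 groups, push to one group at a time, and halt if a group reports failures. The group count, failure threshold, and function names are all illustrative, not any vendor's actual deployment API:

```python
# A minimal sketch of the rolling rollout described above. Push the update
# to one group at a time (e.g. hourly); if a group's failure rate spikes,
# stop so the blast radius is one group rather than the whole fleet.
def make_groups(endpoints: list[str], n_groups: int = 24) -> list[list[str]]:
    """Partition endpoints into n_groups roughly equal batches."""
    return [endpoints[i::n_groups] for i in range(n_groups)]

def staged_rollout(groups, push, failure_rate, max_failure_rate=0.05):
    """Push to one group at a time; stop early if failures spike.

    push(group) deploys the update; failure_rate(group) reports the
    post-deploy failure fraction. Returns (completed_groups, succeeded).
    """
    completed = []
    for group in groups:
        push(group)
        if failure_rate(group) > max_failure_rate:
            return completed, False  # halt: only this group is affected
        completed.append(group)
    return completed, True
```

With a bad update, only the groups pushed before the first failing one complete, rather than 100 percent of endpoints going down at once.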

  • db0@lemmy.dbzer0.com

    Pity the administrators who dutifully kept a list of those keys on a secure server share, only to find that the server is also now showing a screen of baleful blue.

Lol, can you imagine? It hurts me empathetically even thinking of this situation. Enter that brave hero who kept the fileshare decryption key in a local KeePass :D

    • sugar_in_your_tea@sh.itjust.works

      That’s why the 3-2-1 rule exists:

      • 3 copies of everything on
      • 2 different forms of media with
      • 1 copy off site

      For something like keys, that means:

      1. secure server share
      2. server share backup at a different site
3. physical copy (USB drive, printout in a safe, etc.)

      Any IT pro should be aware of this “rule.” Oh, and periodically test restoring from a backup to make sure the backup actually works.
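
The 3-2-1 rule above is mechanical enough to check automatically. A toy checker, with an invented record shape (`media`, `offsite` fields are assumptions for illustration):

```python
# Toy 3-2-1 checker for the rule described above: at least 3 copies,
# on at least 2 different media types, with at least 1 stored off-site.
# The dict shape for a backup record is made up for this sketch.
def satisfies_3_2_1(copies: list[dict]) -> bool:
    """True if the set of backup copies satisfies the 3-2-1 rule."""
    media_types = {c["media"] for c in copies}
    offsite = [c for c in copies if c["offsite"]]
    return len(copies) >= 3 and len(media_types) >= 2 and len(offsite) >= 1
```

Note the check says nothing about whether the backups actually restore, which is why the periodic restore test matters just as much.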

      • IphtashuFitz@lemmy.world

        We have a cron job that once a quarter files a ticket with whoever is on-call that week to test all our documented emergency access procedures to ensure they’re all working, accessible, up-to-date etc.
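
That quarterly-reminder pattern is simple to reproduce: a scheduled job checks when the emergency-access procedures were last exercised and files a ticket when a quarter has passed. A sketch with invented names (the real setup presumably calls a ticketing API):

```python
# Sketch of the quarterly DR-test reminder described above. A scheduled
# job (cron, in their case) runs this; if the last verified test is more
# than a quarter old, it files a ticket for whoever is on-call.
from datetime import date, timedelta

QUARTER = timedelta(days=91)  # roughly one quarter

def dr_test_due(last_tested: date, today: date) -> bool:
    """True if the last successful DR test is at least a quarter old."""
    return today - last_tested >= QUARTER

def maybe_file_ticket(last_tested: date, today: date, file_ticket) -> bool:
    """Call file_ticket(summary) for the on-call engineer when due."""
    if dr_test_due(last_tested, today):
        file_ticket("Quarterly DR check: verify emergency access procedures")
        return True
    return False
```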

    • kescusay@lemmy.world

      Seems like an argument for a heterogeneous environment, perhaps a solid and secure Linux server to host important keys like that.

  • gravitas_deficiency@sh.itjust.works

    Lmao this is incredible

    Another Redditor posted: "They sent us a patch but it required we boot into safe mode.

    "We can’t boot into safe mode because our BitLocker keys are stored inside of a service that we can’t login to because our AD is down.

    “Most of our comms are down, most execs’ laptops are in infinite bsod boot loops, engineers can’t get access to credentials to servers.”

    N.B.: Reddit link is from the source

    I hope a lot of c-suites get fired for this. But I’m pretty sure they won’t be.

    • Codex@lemmy.world

      Our administrator is understandably a little bitter about the whole experience as it has unfolded, saying, "We were forced to switch from the perfectly good ESET solution which we have used for years by our central IT team last year.

      Sounds like a lot of architects and admins are going to get thrown under the bus for this one.

      “Yes, we ordered you to cut costs in impossible ways, but we never told you specifically to centralize everything with a third party, that was just the only financially acceptable solution that we would approve. This is still your fault, so we’re firing the entire IT department and replacing them with an AI managed by a company in Sri Lanka.”

      • Evotech@lemmy.world

Stupid argument though; honestly it's just chance that CrowdStrike was the vendor to shit the bed. Might as well have been ESET. You should still have procedures for this.

  • MrNesser@lemmy.world

    Lemmy appears to be weathering the storm quite well…

    …probably runs on linux

    • cygnus@lemmy.ca

The overwhelming majority of web servers run Linux (it's not even close, like the high-90-percent range). Edit: Upon double-checking it's more like the mid-80s, but the point stands.

    • RBG@discuss.tchncs.de

It runs on hundreds of servers. If any of them ran Windows they might be down, but unless your account is on them you'd be fine with the rest. That's the whole point of federation.

    • Defaced@lemmy.world

      A word of caution, I’ve done this over a dozen times today and I did have one server where the bootloader was wiped after I attached it to another EC2. Always make a snapshot before doing the work just in case.
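
The snapshot-first discipline above is worth making explicit: snapshot the broken instance's root volume *before* detaching it and attaching it to a helper instance for repair, so a wiped bootloader isn't fatal. A sketch that builds the ordered sequence of AWS API calls as data rather than executing them (operation names mirror the EC2 API; wire this to a real client to run it):

```python
# Sketch of the snapshot-first EC2 rescue flow described above. This only
# plans the ordered API calls; executing them (e.g. via an AWS SDK) is
# left out so the ordering itself is the point. IDs are placeholders.
def rescue_plan(volume_id: str, broken_instance: str, helper_instance: str):
    """Return the ordered EC2 operations for a safe volume rescue."""
    return [
        ("CreateSnapshot", volume_id),        # safety net BEFORE any surgery
        ("StopInstances", broken_instance),
        ("DetachVolume", volume_id),
        ("AttachVolume", volume_id, helper_instance),  # fix files here
        ("DetachVolume", volume_id),
        ("AttachVolume", volume_id, broken_instance),
        ("StartInstances", broken_instance),
    ]
```

If the bootloader does get wiped during the attach/repair step, the snapshot taken in step one means you restore a volume instead of rebuilding a server.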

    • CaptPretentious@lemmy.world

In the corporate world, Windows very much gets used. I know Lemmy likes a circlejerk around Linux, but in the corporate world you find various OSes on both desktops and servers. I had to support several different OSes and developed for only two. They all suck in different ways; there are no clear winners.

      • Alborlin@lemmy.world

Thanks for addressing the Lemmy circlejerk for Linux. They really take it far.

    • Hotzilla@sopuli.xyz

The issue is not just on servers, but endpoints too. Servers are relatively easy to fix because they are either virtualized or physically in the same location.

But endpoints might be in a thousand physical locations, and IT needs to visit all of them (POS terminals, info/commercial displays, IoT sensors, etc.).

    • terminhell@lemmy.dbzer0.com

On-prem AD, at least for my MSP's clients. We've been pushing hard the last few years to migrate to Azure.

    • kent_eh@lemmy.ca

      My former employer had a bunch of windows servers providing remote desktops for us to access some proprietary (and often legacy) mission critical software.

      Part of the security policy was that any machines in the possession of end users were assumed to be untrustworthy, so they kept the applications locked down on the servers.

      • TonyOstrich@lemmy.world

I kinda wish my employer would do something like this for our current applications. Right before I started working there, they switched from giving engineers desktops to laptops (workstation laptops, but still). There are some advantages to having a laptop, like being able to work from home or use it in a meeting, but I would much prefer the extra power of a desktop. In my mind the best of both worlds would be a relatively cheap laptop that basically acts as a thin client, so that I can RDP into a dedicated server or workstation for my engineering applications. But what do I know ¯\\_(ツ)_/¯

        • kent_eh@lemmy.ca

          It was a pain in the ass more often than not. If the application server was having trouble the entire department was at a standstill.

          And getting config files, licence files, log files and the like in and out of the system was a long convoluted process.

          We often joked that we were so secure that our hands were tied.

    • stoly@lemmy.world

      I can’t imagine how much work it would be to migrate all your services onto Linux. The problem was people adopting windows in the first place.

      • douglasg14b@lemmy.world

I love the Linux bros coming out of the woodwork on this one, when this could very well have been Linux on the receiving end of this shit show, given that it's a kernel-level software issue and not necessarily an OS one.

It's largely infeasible to use Linux for many, if not most, of these endpoints. But facts are hard.

        • kalleboo@lemmy.world

The Linux kernel has a special extension scheme, eBPF, designed specifically to keep software like CrowdStrike from crashing it: https://ebpf.io/what-is-ebpf/ This is supported by CrowdStrike on recent versions of Linux (if you're running an older version, then yes, CrowdStrike still has the ability to ruin your day).

        • flop_leash_973@lemmy.world

They are just butthurt that this whole thing really shines a light on how inaccurate the line "the world runs on Linux" truly is.

The world runs on a lot of different things for different reasons, and that does not fit nicely into their Richard Stallman-like worldview.

  • Buffalox@lemmy.world

    At least no mission critical services were hit, because nobody would run mission critical services in Windows, right?

    RIGHT??

    • EnderMB@lemmy.world

      To preface, I want to see a tech workers union so, so bad.

      With that said, I genuinely don’t believe that most tech workers would unionize. So many of them are brainwashed into thinking that a union would dictate all salaries, would force hiring to be domestic-only, or would ensure jobs for life for incompetent people. Anyone that knows what a union does in 2024 knows that none of that has to be true. A tech union only needs to be a flat fee every month, guaranteed access to a lawyer with experience in your cases/employer, and the opportunity to strike when a company oversteps. It’s only beneficial.

Even if you could get hundreds of thousands of signatories, the recent layoffs have shown that tech companies at the highest level would gladly fire a sizable number of employees if it meant stamping out a union. As someone that has conducted interviews in big tech, the sheer number of people who had applied for some roles at peak was higher than the number of active employees in the whole company. In theory, Google could terminate everyone and replace them with brand-new workers in a few months. It would be a fucking mess, but it (in theory) shows that if a Google or Apple decided it wanted no part of unions, it could just dig into its fungible talent pool, fire a ton of people, promote the people who stayed, and fill roles with foreign or under-trained talent.

      • slacktoid@lemmy.ml

        I feel you with this. They do not see themselves as workers. Thank you for the preface.

        • EnderMB@lemmy.world

Agreed. Sadly, many still view tech as a meritocracy and believe they're in FAANG because of their hard work over everything else, so fuck everyone else. Naturally, many change their tune once their employer enacts regressive policies, but it's surprising how many people have zero understanding of what a union does. They see cop shows or The Wire and assume it'll be like the unions there…

  • TheObviousSolution@lemm.ee

It might be CrowdStrike's fault, but maybe this will motivate companies to adopt better workflows and actual preproduction deployments to test these sorts of updates before they go live on the rest of their systems.

    • EnderMB@lemmy.world

I know people at big tech companies who work on client engineering, where this downtime has huge implications. Naturally, they've called a sev1, but instead of dedicating resources to fixing these issues, the teams are basically bullied into working insane hours to manually patch while clients scream at them. One dude worked 36 hours straight because his manager outright told him "you can sleep when this is fixed", as if he's responsible for CrowdStrike…

      Companies won’t learn. It’s always a calculated risk, and much of the fallout of that risk lies with the workers.

      • uis@lemm.ee

Sounds so illegal that it would make the labour authority happy.

        • EnderMB@lemmy.world

          Is it illegal? I’m not American so I have no idea if there are laws in your country against on-call maximum hours.

          • uis@lemm.ee
1. It's not about on-call; they are literally in the office
2. See 1
3. Not sure about America, but it is very illegal in Russia.
      • MrAlternateTape@lemm.ee

        That comment about sleep…that’s about where I tell them to go fuck themselves. I’ll find a new job, I’m not going to put up with bullshit like that.

    • cheetah_cheetos@lemmy.world

Might be hard to do. CrowdStrike releases several updates per day to the channel files to match changes in adversarial behaviour. In this case, BCP and backups are what's needed.

  • Disaster@sh.itjust.works

80% of our machines were hit. We were working through 9pm on Friday night, running around punching in BitLocker keys and running the fix. Our organization made it worse by hiding the BitLocker keys from local administrators.

Also gotta say… the way the boot sequence works, combined with the nonsense with RAID/NVMe drivers on some machines, really made it painful.

  • pelletbucket@lemm.ee

I got super lucky. Got paid for my car just before the dealership systems went down, and got my return flight two days before this shit started.

  • 7rokhym@lemmy.ca

    Just a thought from experience: Be wary of any critical products and/or taking a job from a company run by an accountant. CrowdStrike CEO… accountant!

    Accounting firms are an obvious exception.

  • scottywh@lemmy.world

    If it only impacts a percentage of your machines then there was a problem in the deployment strategy or the solution wasn’t worthwhile to begin with.

    • Phoenixz@lemmy.ca

… So your point is that it would have been better if everything went down?

There are plenty of reasons why deployments are done in parts, and I'm guessing that after today strategies will change to apply updates in groups, to avoid everything going down at once.

Also, dear God, stop using Windows as a server, or even as a client for that matter. If you're paying actual money to get this shit, then the results are on you.