Well thank christ I missed out on that one. We only have ~50 Debian hosts, but they run some pretty important stuff and I'm really the only senior tech with any skill in it.
Potentially dodged a bullet there ourselves; had to build some boxens directly facing t'interwebz some years back, and I chose Debian due to personal familiarity (been running it for 20 years now, more or less). Despite me and the in-house red team[1] spending months hardening the things, thanks to the aforementioned procedural idiocy they failed the internal pen test because they didn't have an antivirus installed (the company didn't then, and still doesn't now, have an officially sanctioned AV for Linux, but the pen testers' guidelines hadn't wholly caught up with the heavy shift to *nix within the company). Thankfully the go-live decision is left up to the manager, and because the red team still couldn't get in even with three firewalls disabled and an ssh key for root handed to them, I was given the green light without having to install any AV. God bless ye SELinux.
Had some of the same madness at an insurance company I worked at some years back. Company standards mandated some form of AV regardless of the OS or exposure level; think they picked some McAfee bullshit for Linux. Uptimes previously measured in years were reduced to days thanks to McAfee shitting on the kernel every week or so. In the whole time I remained there it didn't catch a single virus, but it cost £$€ in downtime on systems that were previously bulletproof.
Trending back to the general theme of the thread... security is a process, not a product, and as much as the sales droids would like to tell you otherwise, you can't just press a single button to Make Things Secure. My impression and experience for years has been that most AV is largely useless, because by the time an AV notices something's gone wrong you're normally fucked anyway. Understanding the nature of the exploits, in combination with defence-in-depth, is key. There's a load of vulns out there that flat out don't work when tested against our systems because we've got X, Y, and Z in place; luckily I work in a place where these can actually be tested if and when anyone cares enough. But that still doesn't stop the braindead security automation tools from flagging on the simplest of metrics; the recent OpenSSH vuln was a classic in this regard:
- This system is running SSH below v9.6! It's vulnerable.
- The vuln was only introduced in v8.6. This system is running v7.4. It isn't vulnerable.
- Computer says it is!
- Computer is a fucking idiot that can't grasp the concept of "less than X but more than Y".
- This system is running SSH below v9.6! It's vulnerable.
- System is indeed running v8.2, but it's 64-bit Linux, not the 32-bit build the vuln was announced for. As yet there's no exploit for 64-bit OpenSSH, given that the exploit relies on the 32-bit address space being so exceedingly small that not even ASLR can save it. Under 64-bit addressing, ASLR means that, by current understanding, you'd need to keep the machine online for 37 years for the same attack to work. It isn't vulnerable.
- Computer says it is!
- Computer's fucking wrong.
- This system is running SSH below v9.6! It's vulnerable.
- System is indeed running v9.3, but it's running BSD as the OS. As you'd have understood if you'd read the very well-written CVE report, the vuln is only present when OpenSSH is compiled against the Linux-centric glibc. BSD doesn't use glibc and instead has a thread-safe version of the vulnerable function. It isn't vulnerable.
- Computer says it is!
- Computer's a fucking idiot that needs reprogramming with a very large axe.
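If the scanner actually encoded the affected version range plus the advisory's preconditions, instead of a single "version below X" test, every one of those false positives disappears. Here's a minimal sketch of what that check could look like; the version numbers just mirror the exchange above, and the Advisory/Host structures, field names and preconditions are purely illustrative, not any real scanner's data model:

    # Sketch of version-range and precondition-aware vuln matching, as opposed
    # to a bare "version below 9.6 => vulnerable" check. All numbers, names and
    # preconditions are illustrative, not lifted from any particular CVE feed.
    from dataclasses import dataclass, field

    def parse_version(v: str) -> tuple:
        """Turn '9.3p1'-style strings into comparable tuples, ignoring the p-suffix."""
        return tuple(int(part) for part in v.split("p")[0].split("."))

    @dataclass
    class Advisory:
        introduced: str                 # first affected version
        fixed: str                      # first fixed version
        requires_glibc: bool = False    # vulnerable code path only exists with glibc
        exploitable_arch: set = field(default_factory=lambda: {"32bit", "64bit"})

    @dataclass
    class Host:
        ssh_version: str
        libc: str      # "glibc", "musl", "bsd-libc", ...
        arch: str      # "32bit" or "64bit"

    def is_vulnerable(host: Host, adv: Advisory) -> bool:
        v = parse_version(host.ssh_version)
        # 1. Version must sit inside [introduced, fixed), not merely "below fixed".
        if not (parse_version(adv.introduced) <= v < parse_version(adv.fixed)):
            return False
        # 2. Precondition from the advisory: build must be linked against glibc.
        if adv.requires_glibc and host.libc != "glibc":
            return False
        # 3. Practical exploitability: no known exploit outside these architectures.
        if host.arch not in adv.exploitable_arch:
            return False
        return True

    adv = Advisory(introduced="8.6", fixed="9.6",
                   requires_glibc=True, exploitable_arch={"32bit"})
    for host in [Host("7.4", "glibc", "64bit"),      # below the affected range
                 Host("8.8", "glibc", "64bit"),      # in range, but no 64-bit exploit
                 Host("9.3", "bsd-libc", "32bit"),   # in range, but not built on glibc
                 Host("8.8", "glibc", "32bit")]:     # the one actually worth patching
        print(host.ssh_version, host.libc, host.arch, "->", is_vulnerable(host, adv))

The point isn't the exact fields; it's that "affected range plus preconditions" is the bare minimum a scanner needs to understand before it starts shouting at anyone.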
My point, if it's even discernible at this point, is that there's usually huge pressure to patch stuff and install whatever update to whatever product Right Now! Because We Care About Security And I'm Sure The Vendor Has Done Their Own QA (RNBWCASAISTVHDTOQA for short); almost no-one, from vendor to client, bothers with QA, regression testing, testing patches in a test environment, or even a staggered patch rollout these days. Most people assume it'll just always work, because newer is always better, and someone above my pay grade must have tested this already, right?
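For what it's worth, a staggered rollout doesn't have to be complicated. A hypothetical sketch of the ring-based approach, with made-up host groups and placeholder deploy/health-check steps standing in for whatever config management and monitoring you actually run:

    # Hypothetical ring-based patch rollout: patch the expendable boxes first,
    # let the patch soak, check health, and only then move towards production.
    # Host names, soak time and both placeholder functions are made up.
    import time

    RINGS = [
        ["test-01", "test-02"],             # ring 0: disposable test boxes
        ["staging-01"],                     # ring 1: staging / pre-prod
        ["prod-01", "prod-02", "prod-03"],  # ring 2: the stuff that pays the bills
    ]
    SOAK_SECONDS = 5  # a day or more per ring in real life; seconds here so the sketch runs

    def deploy_patch(host: str) -> None:
        """Placeholder: push the package via whatever config management is in use."""
        print(f"patching {host}")

    def healthy(host: str) -> bool:
        """Placeholder: service checks, log scraping, 'did the kernel fall over'."""
        return True

    def staged_rollout() -> None:
        for ring_no, hosts in enumerate(RINGS):
            for host in hosts:
                deploy_patch(host)
            time.sleep(SOAK_SECONDS)        # soak period: this is where the QA happens
            if not all(healthy(h) for h in hosts):
                print(f"ring {ring_no} unhealthy, halting rollout")
                return
        print("rollout complete")

    staged_rollout()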
[1]For those unfamiliar with the term, at least in IT circles (I think it's descended from US military war-games jargon), a "red team" is what they call the team who are on your side really, but will use the known tactics of the enemy against you in a (hopefully) non-destructive manner in order to show you your weaknesses. They characteristically use the exploit du jour against us to see if we're actually vulnerable to it, or whether the other mitigations we've put in place protect against it, so we can gauge the impact correctly rather than knee-jerking into installing a patch on day 1 instead of regression testing it. Fellow greybeards will understand it in the context of "white-hat" hacker (as opposed to "black-hat" [evil] or "grey-hat" [chaotic neutral]) terminology. Most places I've worked at or know about don't use this strategy, because it's probably very expensive in the real world and it's easier to just patch everything, always, without question. Apart from when the patch hasn't been properly tested and ends up killing you and feasting on your sorry carcass.