• Saik0@lemmy.saik0.com · 6 days ago

    I get your points. But we simply wouldn’t get along at all, even though I’d be able to provide every tool you could possibly want in a secure, policy-compliant way, and probably long before you actually ever needed it.

    but I hate debugging build and runtime issues remotely. There’s always something that remote system is missing that I need

    If the remote system is a dev system… it should never be missing anything. So if something’s missing, then there’s already a disconnect. Also, if you’re debugging runtime issues, you’d want faster compile times anyway, so I’m not sure why your “monolith” comment is even relevant. If it takes you 10 compiles to figure the problem out fully, and each compile finishes 5 minutes quicker on the remote system because it’s not a mobile chip in a shit laptop (and the remote box is already set up to run dev anyway), then you’re saving time to actually do coding. But to you that’s an “inconvenience” because you need root for some reason.

    but my point here is that security should be everyone’s concern, not just a team who locks down your device so you can’t screw the things up.

    No. At least not in the sense you present it. It’s not just about locking down your device so you can’t screw it up; it’s so that you’re never a single point of failure. You’re not advocating for “everyone looking out for the team”. You’re advocating that everyone should just cave and cater to your whim, rest of the team be damned, where your whim is a direct data security risk. This is what the audit body will identify at audit time, and when it’s identified the company will likely face an ultimatum: fix the problem (lock down the machine to the policy standards, or remove your access outright, which would likely mean firing you since your job requires access) or certification will not be renewed. And if insurance has to kick in and it’s found that you were “special”, they’ll very easily deny the whole claim, stating that the company was willfully negligent. You are not special enough. I’m not special enough, even as the C-suite officer in charge of it.

    The policies keep you safe just as much as they keep the company safe. You follow them, and the company’s overall posture is better. You follow them, and if something goes wrong you can point at policy and say “I followed the rules”. Root access to a company machine because you think you might one day need to install something on it is a cop-out answer; the tools you use don’t change so often that the 2-day wait for the IT team to respond (your scenario) would happen more than once in how many days of working for the company. And it only takes one sudo command to install something compromised and bring the device on campus or onto the SDN (which you wouldn’t be able to access on your own install anyway, so you wouldn’t be able to do work regardless, or connect to dev machines at all).

    Edit to add:

    Users can’t even install an alternative browser, which is why our devs only support Chrome (our users are all corporate customers).

    We’re the same! But… it’s Firefox… If you want to use alternate browsers while on our network, you’re using the VDI, which spins up a disposable container with a number of different options, none of them persistent. In our case, catering to Chrome would mean potentially using non-standard Chrome-specific functions, which we specifically don’t do. Most of us are pretty anti-Google overall in our company anyway.

    but it’s nearly impossible to tell the good from the bad when interviewing a company.

    This is fair enough.

  • sugar_in_your_tea@sh.itjust.works · 6 days ago

      you end up compiling 5 minutes quicker

      This implies the entire build still takes a few minutes on that beefier machine, which is in the “check back later” category of tasks. Rebuilds need to be seconds, and going from 10s to 5s (or even 30s) isn’t worth a separate machine.

      If my builds took that long, I’d seriously reconsider how the project is structured to dramatically reduce that. A fresh build taking forever is fine; you can do that at the end of the day or whatever. But edit/reload should be very fast.

      it’s so that you’re never a single point of failure

      That belongs at the system architecture level IMO. A dev machine shouldn’t be that interesting to an attacker since a dev only needs:

      • code and internal docs
      • test environments
      • “personal” stuff (paystubs, contracts, etc)
      • VPN config for remote access to test envs

      My access to all of the source material is behind a login, so IT can easily disable my access and entirely cut an attacker out (and we require refreshing credentials fairly frequently). The biggest loss is IP theft, which only requires read permissions on my home directory, and most competitors won’t touch that type of IP anyway (and my internal docs are dev-level, not strategic). Most of my cached info is stale, since I tend to only work in a particular area at a given time (i.e. if I’m working on reports, I don’t need the latest simulation code). I also don’t have any access to production, and I’ve even told our devOPs team about things I was able to access but shouldn’t have been. I don’t need or even want prod access.

      The main defense here is frequent updates, and I’m 100% fine with having an automated system package monitor, and if IT really wants it, I can configure sudo to send an email every time I use it. I tend to run updates weekly, though sometimes I’ll wait 2 weeks if I’m really involved in a project.
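
      For what it’s worth, the sudo-mail part is stock sudoers functionality; a minimal sketch of a drop-in (the address is a placeholder, and it assumes a working local mailer):

      ```
      # /etc/sudoers.d/mail-audit (sketch; needs a configured local MTA)
      Defaults mailto="it-alerts@example.com"
      Defaults mail_always    # mail on every sudo invocation, not just failures
      ```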

      if something goes wrong you can point at policy and say “I followed the rules”

      And this, right here, is my problem with a lot of C-suite-level IT policy: it’s often more about CYA and less about actual security. If there were another 9/11, the airlines would point to the TSA and say, “not my problem,” when the attack very likely came through their supply chain. “I was just following orders” isn’t a great defense when the actor should have known better. Or on the IT side specifically: if my machine was compromised because IT was late rolling out an update, my machine was still compromised, so it doesn’t really matter whose shoulders the blame lands on.

      The focus should be less on preventing an attack (still important) and more on limiting the impact of an attack. My machine getting compromised means leaked source code, some dev docs, and having to roll back/recreate test environments. Prod keeps on going, and any commits an attacker makes in my name can be specifically audited. It would take maybe a day to assess the damage, and that’s it, and if I’m regularly sending system monitoring packets, an automated system should be able to detect unusual activity pretty quickly (and this has happened with our monitoring SW, and a quick, “yeah, that was me” message to IT was enough).
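
      Auditing commits like that needs nothing beyond plain git; a self-contained sketch (the author name and time window are made up) that lists a suspect author’s recent commits along with their signature status:

      ```shell
      set -eu

      # Throwaway repo so the example is self-contained
      repo=$(mktemp -d)
      cd "$repo"
      git init -q
      git config user.name "suspect"
      git config user.email "suspect@example.com"

      echo hello > f.txt
      git add f.txt
      git commit -qm "initial"

      # Audit: every commit by this author in the window;
      # %G? prints N for commits with no signature
      audit=$(git log --author="suspect" --since="1 month ago" --pretty='%h %G? %s')
      echo "$audit"
      ```

      In a real incident you’d run the same log against the actual repo and diff anything the flagged commits touched.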

      My machine is quite unlikely to be compromised in the first place though. I run frequent updates, I have a high quality password, and I use a password manager (with an even better password, that locks itself after a couple hours) to access everything else. A casual drive-by attacker won’t get much beyond whatever is cached on my system, and compromising root wouldn’t get much more.

      For your average office worker who only needs office software and a browser, sure, lock that sucker down. But when you’re talking about a development team that may need to do system-level tweaks to debug/optimize, do regular training or something so they can be trusted to protect their system.

      tools that you use don’t change all that often

      Sure, but when I need them, I need them urgently. Maybe there’s a super high-priority bug on production that I need to track down, and waiting 2 days isn’t acceptable, because we need same-day turnaround. Yeah, I could escalate and get someone over pretty quickly, but things happen when critical people are on leave, and IT can review things afterward. That’s pretty rare, and if I have time, I definitely run changes like that through our IT pros (i.e. “hey, I want to install X to do Y, any concerns?”).

      Most of us are pretty anti-google overall in our company anyway.

      Then maybe we’d be a better fit than I thought. If, during the interview process, I discovered that IT didn’t use MS or Google for their cloud stuff, I may actually be okay with a locked-down machine, because the IT team is absolutely based. I’d probably ask a lot of follow-up questions, and maybe you’d mitigate my concerns.

      But when shopping around for a new job, I steer clear of red flags, and “even devs use standard IT images” and “we’re an MS shop” completely kill my interest. My current company is an MS shop, but they said we have our own infra for our team, and we use Macs specifically to avoid the standard, locked-down IT images.

      On my personal machines, I use Firefox, openSUSE (due to openQA, YaST, etc; TW on desktop, Leap on NAS and VPS), and full-disk encryption. I’m considering moving to MicroOS as well, for even better security and ease of maintenance. I expose internal services through a WireGuard tunnel, and each of those services runs in a Docker container (planning to switch to podman). I follow cybersecurity news, and I’m usually fully patched at home before we’re patched at work. Cybersecurity is absolutely something I’m passionate about, and I raise concerns a few times a year, which our OPs team almost always acts on.

      All of that said, I absolutely don’t expect the keys to the kingdom, and I actually encourage our OPs team to restrict my access to resources I don’t technically need. However, I do expect admin access on my work machine, because I do sometimes need to get stuff done quickly.

    • Saik0@lemmy.saik0.com · 6 days ago

        And this, right here, is my problem with a lot of C-suite level IT policy, it’s often more about CYA and less about actual security.

        Remediation after an attack happens is part of the security posture. How the company recovers and continues to operate is a vital part of security incident planning. The CYA aspect comes from the legal side of that planning. You can follow every best practice ever, but if something happens anyway, what does the company do without an insurance fallback or other protections? Even a minor data breach can cause all sorts of legal trouble, even ignoring a litigious user base. Having the policies satisfied keeps those protections in place. It keeps the company operating even when an honest mistake causes a significant problem. Unfortunately, it’s a necessary evil.

        A casual drive-by attacker won’t get much beyond whatever is cached on my system, and compromising root wouldn’t get much more.

        On a company computer? That’s presumably on a company network? Able to talk and communicate with all the company infrastructure? You seem to be specifically narrowing the scope to just your machine, when a compromised machine talks to way more than just the shit on the local machine. With a root jump-host on a network, I can get a lot more than just what’s cached on your system.

        I discovered that IT didn’t use MS or Google for their cloud stuff,

        We don’t use Google at all if it’s at all possible to get away with it… We do have disposable Docker images that can be spun up in the VDI interface to do things like test the web side of the program in a Chrome browser (and Brave, Chromium, Edge, Vivaldi, etc.). We do use MS for email (and by extension the other Office suite stuff, because it’s in the license; Teams, as much as I fucking hate what they do to the GUI/app every other fucking month, is useful for communicating with other companies, as we often have to get on calls with API teams from other companies), but that’s it. Nextcloud/LibreOffice is the actual company storage for “cloud”-like functions… and there’s backup local mail host infrastructure lying in wait for the day MS inevitably fucks up their product more than I’m willing to deal with as far as O365 mail goes.

        I’m considering moving to MicroOS as well, for even better security and ease of maintenance.

        I’m pushing for a rewrite out of an archaic ’80s language (probably why compile times suck for us in general) into Rust, running it on Alpine to get rid of the need for Windows Server altogether in our infrastructure… and for the low-maintenance value of a tiny Linux distro. I’m not particularly on the SUSE boat, just because it’s never come up. I float more to the Arch side of Linux personally, and Debian for production stuff typically. Most of our standalone products/infrastructure are already on Debian/Alpine containers. Every year I’ve been here I’ve pushed hard to get rid of more and more, and it’s been huge for stability and security across the company.

        “even devs use standard IT images”

        No, it’s “even devs meet SCA”, not necessarily a standard image. I pointed it out, but only in passing. I can spawn an SCA for many different Linux OSes that enforces/proves a minimum security posture for the company overall. Personally, I honestly wouldn’t care what you did with the system outside of not having root and meeting the SCA. Most of our policy is effectively that, but in nicer terms for auditing people. The root restriction is simply so that you can’t disable the tools that prove compliance at audit time, and so that I know, as the guy ultimately in charge of the security posture, that we’ve done everything reasonable to keep security above industry standard.

        The SCA checks for configuration hardening in most cases. For that same Debian example I posted above, here’s a snippet of the checks:
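
        (The actual snippet didn’t make it into this thread, but as a rough illustration, tools in this space, e.g. Wazuh’s SCA module (an assumption, since the tool isn’t named), express checks along these lines; the rule IDs and titles below are hypothetical:)

        ```yaml
        # Hypothetical rules in the style of Wazuh SCA policy files,
        # not the actual Debian policy referenced above
        checks:
          - id: 3500
            title: "Ensure SSH root login is disabled"
            condition: all
            rules:
              - 'f:/etc/ssh/sshd_config -> r:^\s*PermitRootLogin\s+no'
          - id: 3501
            title: "Ensure SSH password authentication is disabled"
            condition: all
            rules:
              - 'f:/etc/ssh/sshd_config -> r:^\s*PasswordAuthentication\s+no'
        ```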

      • sugar_in_your_tea@sh.itjust.works · 6 days ago

          Able to talk and communicate with all the company infrastructure?

          No, we have hard limits on what people can access. I can’t access prod infra, full stop. I can’t even do a prod deployment w/o OPs spinning up the deploy environment (our Sr. Support Eng. can do it as well if OPs aren’t available).

          We have three (main) VPNs:

          • corporate net - IT administrated internal stuff; don’t need for email and whatnot, but I do need it for our corporate wiki
          • dev net - test infra, source code, etc
          • OPs net - prod infra - few people have access (I don’t)

          I can’t be on two at the same time, and each requires MFA. The IT-supported machines auto-connect to the corporate VPN, whereas as a dev, I only need the corporate VPN like once/year, if that, so I’m almost never connected. Joe over in accounting can’t see our test infra, and I can’t see theirs. If I were in charge of IT, I would have more segmentation like this across the org so a compromise at accounting can’t compromise R&D, for example.

          None of this has anything to do with root on my machine though. Worst case scenario, I guess I infect everyone that happens to be on the VPN at the time and has a similar, unpatched vulnerability, which means a few days of everyone reinstalling stuff. That’s annoying, but we’re talking a week or so of productivity loss, and that’s about it. Having IT handle updates may reduce the chances of a successful attack, but it won’t do much to contain a successful attack.

          If one machine is compromised, you have to assume all devices that machine can talk to are also compromised, so the best course of action is to reduce interaction between devices. Instead of IT spending their time validating and rolling out updates, I’d rather they spend time reducing the potential impact of a single point of failure. Our VPN currently isn’t a proper DMZ (I can access ports my coworkers open if I know their internal IP), and I’d rather they fix that than care about whether I have root access. There’s almost no reason I’d ever need to connect directly to a peer’s machine, so that should be a special, time-limited request, but I may need to grab a switch and bridge my machine’s network if I need to test some IoT crap on a separate net (and I need root for that).
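
          For what it’s worth, the peer-isolation piece is usually a small change on the VPN server itself; a sketch assuming a WireGuard interface named wg0 (interface name and subnets are made up):

          ```
          # On the VPN server: stop forwarding traffic between peers on the
          # same WireGuard interface (interface/subnet names are hypothetical)
          iptables -A FORWARD -i wg0 -o wg0 -j DROP
          # Peers can still reach the shared test infra behind the server
          iptables -A FORWARD -i wg0 -o eth0 -d 10.20.0.0/16 -j ACCEPT
          ```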

          nextcloud/libreoffice is the actual company storage for “cloud”-like functions…

          Nice, we use Google Drive (dev test data) and whatever MS calls their drive (Teams recordings, most shared docs, etc). The first is managed by our internal IT group and is mostly used w/ external teams (we have two groups), and the second is managed by our corporate IT group. I hate both, but it works I guess. We use Slack for internal team communication, and Teams for corporate stuff.

          an archaic 80’s language (probably why compile times suck for us in general) into Rust

          That’s not going to help the compile times. :)

          I don’t use Rust at work (wish I did), but I do use it for personal projects (I’m building a P2P Lemmy alternative), and I’ve been able to keep build times reasonable. We’ll see what happens when SLOC increases, but I’m keeping an eye on projects like Cranelift.
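
          Cranelift is easy to try on nightly, for what it’s worth; a sketch of a project-local `.cargo/config.toml` (the rustup line is the documented prerequisite, and the backend is still experimental):

          ```toml
          # Prerequisite (nightly only):
          #   rustup component add rustc-codegen-cranelift-preview --toolchain nightly
          [unstable]
          codegen-backend = true

          [profile.dev]
          codegen-backend = "cranelift"
          ```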

          I float more on the arch side of linux personally

          That’s fair. I used Arch for a few years, but got tired of manually intervening when updates go sideways, especially Nvidia driver updates. openSUSE Tumbleweed’s openQA seemed to cut that down a bit, which is why I switched, and snapper made rollbacks painless when the odd Nvidia update borked stuff. I’m now on AMD GPUs, so update breakage has been pretty much non-existent. With some orchestration, Arch can be a solid server distro, I just personally want my desktop and servers to run the same family, and openSUSE was the only option that had rolling desktop and stable servers.

          For servers, I used to use Debian, and all our infra uses either Debian or Ubuntu. If I was in charge, I’d probably migrate Ubuntu to MicroOS since we only need a container host anyway. I’m comfortable w/ apt, pacman, and zypper, and I’ve done my share of dpkg shenanigans as well (we did unattended Debian upgrades for an IOT project).

          “even devs meet SCA”.

          SCA is for payment services, no? I’m in the US, and this seems to be an EU thing I’m not very familiar with, but regardless, we don’t touch ecommerce at all, we’re B2B and all payments go through invoices.

          The root restriction is simply so that you can’t disable the tools that prove the audit

          If you’re worried someone will disable your tools, why would you hire them in the first place? Also, that should be painfully obvious because you wouldn’t get reporting updates, no?

          We do auditing, and our devOPs team gets a weekly report from IT about any devices that aren’t updated yet or aren’t reporting. They also do a manual check every quarter or so to verify serials and version numbers and whatnot. I’ve gotten one notice from our local devOPs person, and very few of my team show up either. The ones that do show up tend to be our UX and Product teams, and honestly, they have more access to interesting info than we devs do (i.e. they have planned features for the next 6 months, we just have the next month or so). And they need far fewer exceptions to the rules, since UX mostly just needs their design software and Product just needs office stuff and a browser.

          I obviously can’t speak for all devs, but in general, devs tend to be more interested in applying updates in a timely manner and keeping things secure. In fact, I think all of my devs already used a password manager and MFA before starting, which absolutely isn’t the case for other positions.

        • Saik0@lemmy.saik0.com · 6 days ago

            None of this has anything to do with root on my machine though.

            But it does. If your machine is compromised and they have root permissions to run whatever they want, it doesn’t matter how segmented everything is; you said yourself you jump between them (though rarely).

            Security Configuration Assessment

            SCA is for payment services, no? I’m in the US, and this seems to be an EU thing I’m not very familiar with, but regardless, we don’t touch ecommerce at all, we’re B2B and all payments go through invoices.

            No, it’s just a term for a defined check that configurations meet a standard. An SCA can be configured to check on any particular configuration change.

            Also, that should be painfully obvious because you wouldn’t get reporting updates, no?

            Not necessarily? Hard to tell if something is disabled vs just off.

            If you’re worried someone will disable your tools, why would you hire them in the first place?

            I don’t hire people… especially people in other departments.

            But while I found this discussion fun, I have to get back to work at this point. Shit just came up with a vendor we used for our old archaic code that might accelerate a Rust rewrite… and, logically related to the conversation, I might be in the market for some Rust devs.

          • sugar_in_your_tea@sh.itjust.works · 6 days ago

              you said yourself you jump between them

              Sure, but I need MFA to do so. So both my phone and my laptop would need to be compromised to jump between networks, unless we’re talking about a long-lived, opportunistic trojan or something, which smells a lot like a targeted attack.

              might accelerate a rust-rewrite… and logically related to the conversation I might be in the market for some rust devs.

              Sounds fun, and stressful. Good luck!