

In the US, salaried engineers are exempt from overtime pay regulations. He is telling them to work 20 extra hours, with no extra pay.
Commentary from someone quite trusted in the historical gun community and who’s actually shot multiple Welrods/VP9s: https://www.youtube.com/shorts/POubd0SoCQ8
It’s not a VP9. Even at the very start of the video, on the first shot before the shooter even manually cycles the gun, gas is ejected backwards out of the action rather than forward out of the suppressor.
In general, on bare-metal, I mount below /mnt. For a long time, I just mounted data in from pre-configured host mounts. But I use Kubernetes, and there you can directly specify an NFS mount, so I eventually migrated everything to that as I made other updates. I don't think it's horrible to mount from the host, but if docker-compose supports directly defining an NFS volume, that's one less thing to set up if you need to re-provision your docker host.
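For what it's worth, compose does support this via the local volume driver's nfs type. A rough sketch (the server address, export path, and service are placeholders for whatever you're actually running):

```yaml
# compose.yaml — named volume backed by NFS through the local driver
volumes:
  appdata:
    driver: local
    driver_opts:
      type: nfs
      o: "addr=192.168.1.10,nfsvers=4,rw"   # placeholder NFS server address + mount options
      device: ":/export/appdata"            # placeholder export path on the server

services:
  app:
    image: nginx:alpine   # stand-in image, just to show the volume reference
    volumes:
      - appdata:/data
```

The host still needs NFS client support installed, but the mount itself gets created when a container that references the volume starts, so there's nothing to keep in /etc/fstab across re-provisions.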
(quick edit) I don't think docker compose reads and re-reads compose files. They're read when you invoke docker compose, but that's it.

So… if you're simply invoking docker compose to interact with things, then I'd say store the compose files wherever makes the most sense for your process. Maybe think about setting up a specific directory on your NFS share and mounting that to your docker host(s). I would also consider version-controlling your compose files. If you're concerned about secrets, store them in encrypted env files; something like SOPS can help with this.
As long as the user invoking docker compose can read the compose files, you're good. When it comes to mounting data into containers from NFS… yes, permissions will matter, and it can be a pain depending on how flexible the container you're using is about the user it runs as and filesystem permissions.
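As a rough illustration of what I mean by that flexibility — some images let you pick the uid/gid the container runs as so it lines up with ownership on the NFS export (everything here is a placeholder):

```yaml
services:
  app:
    image: example/app:latest     # hypothetical image — substitute whatever you're running
    user: "1000:1000"             # placeholder uid:gid; should match ownership on the NFS export
    environment:
      - PUID=1000                 # some images (e.g. linuxserver.io ones) want env vars like these
      - PGID=1000                 # instead of (or in addition to) the compose-level user: setting
    volumes:
      - /mnt/nfs/appdata:/config  # bind from a host-mounted NFS path, or use a compose-defined NFS volume
```

If the image insists on starting as root and dropping privileges itself, you may instead end up fighting root_squash on the NFS export side.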
Docker's documentation on the backing filesystems supported for container filesystems.
In general, you should treat your container root filesystems as completely ephemeral. But you will generally want them on low-latency, local storage. If you move most of your data to NFS, you can hopefully keep just a minimal local disk for images/containers.
As for your data volumes, it's likely going to be very application specific. I've got Postgres databases running off remote NFS that are totally happy. I don't fully understand why Plex struggles to run its database/config dir from NFS; disappointingly, I generally have to host that on a filesystem and disk local to my docker host.
In general, container root filesystems and the images backing them will not function on NFS. When deploying containers, you should be mounting data volumes into the containers rather than storing things on the container root filesystems. Hopefully you are already doing that, otherwise you’re going to need to manually copy data out of the containers. Personally, if all you’re talking about is 32 gigs max, I would just stop all of the containers, copy everything to the new NFS locations, and then re-create the containers to point at the new NFS locations.
All this said though, some applications really don't like their data stored on NFS. I know Plex really doesn't function well when its database is on NFS. But the Plex media directories are fine to host from NFS.
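So my Plex service ends up looking something like this — config on a disk local to the docker host, media from the NFS share (paths are placeholders, assuming the official plexinc/pms-docker image):

```yaml
services:
  plex:
    image: plexinc/pms-docker       # official Plex image; lscr.io/linuxserver/plex is another common option
    volumes:
      - /opt/plex/config:/config    # database/config dir on local disk — Plex doesn't like this on NFS
      - /mnt/nfs/media:/media:ro    # media library from the NFS share is fine; read-only is usually enough
```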
Perhaps as the more experienced smoker, you can be a good friend and offer a lower dose that is more suited for their tolerance. Maybe don’t pack a big-ol bong rip for someone who hasn’t smoked in months. Chop up that chocolate bar into something a little more manageable. If they wanna buy something, suggest something a little more controllable like a vape. And most of all, if you’re pressuring people who are on the fence into smoking, maybe just stop doing that.
Yea, I don't think this is necessarily a horrible idea. It's just that it doesn't really provide any extra security, even though the very first line of this blog is talking about security. It will absolutely provide privacy via pretty good traffic obfuscation, but you still need good security configuration on the exposed service.
If I understand this correctly, you're still forwarding a port from one network to another. It's just that in this case, instead of a port on the internet, it's a port on the Tor network. Which is still just as open, but also a massive calling card for anyone trolling around the Tor network for things to hack.
After briefly reading about systemd's tmpfiles.d, I have to ask why it was used to create home directories in the first place. The documentation I read said it was for volatile files. Is a user's home directory considered volatile? Was this something the user set up, or the distro they were using? If the distro, this seems like a lot of ire directed at someone who really doesn't deserve it.
I have a similar issue when I'm visiting my parents. Despite having 30 Mbps upload at my home, I cannot get anywhere near that when trying to access things from my parents' house. Not just Plex either; I host a number of services. I've tested their wifi and download speed, and everything seems fine. I can also stream my Plex just fine from my friends' places. I've chalked it up to poor (or throttled) peering between my parents' ISP and mine. I've been meaning to test it through a VPN next time I go home.
I think I misunderstood what exactly you wanted. I don’t think you’re getting remote GPU passthrough to virtual machines over ROCE without an absolute fuckton of custom work. The only people who can probably do this are Google or Microsoft. And they probably just use proprietary Nvidia implementations.
I believe what you’re looking for is ROCE: https://en.wikipedia.org/wiki/RDMA_over_Converged_Ethernet
But, I don’t know if there’s any FOSS/libre/etc hardware for it.
From the article, “These systems range from ground-based lasers that can blind optical sensors on satellites to devices that can jam signals or conduct cyberattacks to hack into adversary satellite systems.”
In the LastPass case, I believe it was a native Plex install with a remote code execution vulnerability. But still, even in a Linux container environment, I would not trust them for security isolation. Ultimately, they all share the same kernel. One misconfiguration on the container or an errant privilege escalation exploit and you’re in.
You are not being overly cautious. You should absolutely practice isolation. The LastPass hack happened because one of their engineers had a vulnerable Plex server hosted from his work machine. Honestly, the next iteration of my home network is probably going to have four segments: Home/Users, IoT, Lab, and Work.
Keep in mind, RAID is fault tolerant, not fault proof. For critical data, follow the 3-2-1 rule: 3 copies of the data, on 2 different types of media, with 1 copy offsite.
At its most basic, a satellite will have two systems: a highly robust command and control system with a fairly omnidirectional antenna, and then the more complex system that handles the payload(s). So yea, if the payload system crashes, you can restart it via C&C.
Annoying, yes, but I'd argue that's likely the simplest and most performant approach. At best (iptables NAT), you'd be adding an extra network hop to your SMB connections, which would affect latency, and SMB is fairly latency sensitive, especially for small files. And at worst (Traefik), you'd be adding a user-space layer 7 application that needs to forward every bit of traffic going over your SMB connection.
After repeated failures to pass a test, I do not think it is unreasonable for the business to stop paying for your attempts at a certification. Either directly via training sessions and testing fees, or indirectly via your working hours.