This is really more of a home networking issue than anything having to do with self-hosting. Please consider posting this in one of the many Lemmy home networking communities.
This is a question probably better suited for one of the Proxmox communities. But, I’ll give it a try.
Regarding your concerns about new SSDs and old VM configs: why not upgrade to PVE8 on the existing hardware first? That would sidestep the question of PVE8 restoring VMs from a PVE7 system. Still, I wouldn’t expect it to be a problem either way.
Not sure about your TrueNAS question. I wouldn’t expect any issues unless a PVE8 install brings with it a kernel driver change that is relevant to your hardware.
Finally, there are several config files that would be good to capture for backup. Proxmox itself doesn’t have a quick list, but this link has one that looks about right: https://www.hungred.com/how-to/list-of-proxmox-important-configuration-files-directory/
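For example, a rough sketch of grabbing those paths (the list is per that link plus Proxmox defaults; adjust for your setup):

```
# Hedged sketch: archive the commonly cited PVE config paths.
# /etc/pve is the pmxcfs mount holding VM/CT configs, storage.cfg, etc.
tar czf "pve-config-$(hostname)-$(date +%F).tar.gz" \
  /etc/pve \
  /etc/network/interfaces \
  /etc/vzdump.conf \
  /etc/hosts /etc/hostname /etc/resolv.conf
```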
Love having another iOS option.
Nextcloud Photos performs okay, but the interface is very ‘meh’. Plus, the mobile client’s sync is a little unstable. On iOS, there’s no background sync at all.
This seems like the right advice. If the container is on the same host as the data, there’s no need to access the data via Samba. In fact, the container likely doesn’t even include the Samba client needed for that kind of connectivity.
Assuming TrueNAS allows the containers to see local data, a bind mount is the way to go.
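For example, with a Docker-style container (the image and paths here are placeholders, not your actual setup):

```
# Hypothetical sketch: bind-mount a local TrueNAS dataset into a container
# rather than reaching it over SMB.
docker run -d \
  --name jellyfin \
  -v /mnt/tank/media:/media:ro \
  jellyfin/jellyfin
```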
This is good stuff. Has it been posted to the project’s GitHub (issue, discussion, etc.)?
Have you considered searching the GitHub issues?
IMO, this is a discussion that should be taking place on the project’s GitHub. I’m going to lock the comments so I don’t get any more reports about commenters’ behavior.
I imagine this would be up to the application. What you’re describing would be seen by the OS as the device becoming unavailable. That won’t really affect the OS itself. But, it could cause problems with the drivers and/or applications that are expecting the device to be available. The effect could range from “hm, the GPU isn’t responding, oh well” to a kernel panic.
Red Hat (RHEL) is not based on any other distro, like Ubuntu is with Debian. RHEL is downstream of Fedora, meaning that RHEL developers can work on code that affects Fedora AND RHEL. This is not really true of Debian and Ubuntu. They are distinct projects with different goals. In many ways, Ubuntu is beholden to what Debian does. This isn’t usually a problem because Debian is very conservative in its approach to software. Ubuntu doesn’t usually have to worry about Debian screwing with something Ubuntu is trying to do.
Which is all to say that there is no other distribution you can officially equate to RHEL the way you can with Debian & Ubuntu.
I still use the label ‘homelab’ for everything in my house, including the production services. It’s just a convenient term and not something I’ve seen anyone split hairs about until now.
A home lab is an unimportant, transient environment; it’s only a home lab if nothing on it is permanent. You can have a home lab where the things you’re testing are self-hosted apps. But if the server in question is meant to be permanent, like if you’re backing up the data on it, or you’ve got it on a UPS to make sure it stays available, or you would be upset if somebody came by and accidentally unplugged it during the day, it’s not a home lab.
Tailscale is an overlay network. It will use whatever networking is available. If only one of those NICs is a gateway, then that’s what will be used to reach remote Tailnet resources.
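If you want to verify which path is actually in use, a couple of standard checks (the peer name is a placeholder):

```
tailscale netcheck        # how this machine reaches the Tailscale network
tailscale ping some-peer  # reports the endpoint/path used to reach that peer
ip route get 8.8.8.8      # which local NIC/gateway handles general egress
```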
Leaving this post here since it’s an interesting project to keep an eye on, but the conversation isn’t constructive. So, locking the comments.
Would they have to be VLAN aware if the switch port was already tagged AND if OP doesn’t care to consider untagged traffic?
With the disclaimer that Proxmox has nothing to do with this question, I’m forced to assume this is just a networking issue that happens to use OPNsense as the router. Because of that, I must advise that you seek help from a networking-focused community. There’s no clear link to self-hosting in this post, which is required per Rule 3.
If the connections are already tagged as they come into the Proxmox server, then you need only create interfaces for them in Proxmox (vmbr1, vmbr2, etc.). EDIT: if you’re doing PCI passthrough of the physical NICs, ignore this step.
Then, in OPNsense, you just add the individual interfaces. No need to assign a VLAN inside OPNsense because the traffic is already tagged on the network (per your earlier statement).
Whether or not the managed switch tagging each port is also providing VLAN isolation, you’ll simply use the OPNsense firewall for isolation, which it provides by default. You’ll also use it to allow those connections access to the fiber WAN gateway.
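As a rough sketch of the Proxmox side (the NIC name and VLAN IDs are examples only, not your actual values):

```
# /etc/network/interfaces sketch: one bridge per already-tagged VLAN.
# Assumes the physical NIC is eno1 and the switch delivers VLANs 10 and 20.
auto eno1
iface eno1 inet manual

auto vmbr1
iface vmbr1 inet manual
    bridge-ports eno1.10
    bridge-stp off
    bridge-fd 0

auto vmbr2
iface vmbr2 inet manual
    bridge-ports eno1.20
    bridge-stp off
    bridge-fd 0
```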
You’ll need to be far more descriptive than “I can’t get it to work.” I can almost guarantee you that Fedora is not the problem.
I’m a little lost on how a container would mess with your boot loader (GRUB). That aside, most of what you’re describing has to do with the containers. Those are OS-agnostic. What do the container logs tell you?
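If you’re on Docker or Podman, for example (the container name is a placeholder):

```
docker logs --tail 100 my-container   # last 100 log lines
podman logs --tail 100 my-container   # same, if you’re on Podman
```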
I think fans of Nix and NixOS would agree.