

I'm confused by this sentiment; Docker includes a MACVLAN driver, so it's clearly intended to be used. Do you eschew any networking in Docker beyond the default bridge for some reason?
With the default Docker bridge networking the container won't have a unique IP/MAC address on the local network, as far as I am aware. External clients have to contact the host server's IP at the port the container is bound to in order to interact with it. If there's a way to specify a specific parent interface, let me know!
This was very insightful and I'd like to say I grokked 90% of it meaningfully!
For an Incus container with its unique MAC interface: yes, if I run a Docker container in that Incus container and leave the Docker container in its default bridge mode, then I get the desired feature set (with the power of onions).
And thanks for explaining CNI, I’ve seen it referenced but didn’t fully get how it’s involved. I see that podman uses it to make a MACVLAN interface that can do DHCP (until 5.0, but the replacement seems to be feature-compatible for MACVLAN), so podman will sidestep the pain point of having to assign a no-go-zone on the DHCP server for a Docker swath of IPv4s, as you mentioned. Close enough for containers that the host doesn’t need to talk to.
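For reference, the podman side of that is roughly this command sketch (network and image names are my examples; as far as I understand, the DHCP IPAM driver needs podman 4.x+ with netavark and its dhcp proxy service running):

```
podman network create -d macvlan -o parent=br0 --ipam-driver dhcp lan
podman run -d --network lan docker.io/library/nginx:alpine
```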
So in summary:
I’ve got Docker doing the extent it can manage with MACVLAN and there’s no extra magicks to be done on it.
Podman will still use MACVLAN (no host to container comms still) but it’s able to use DHCP to get an address for the MACVLAN container.
If the host must talk to the container with MACVLAN, I can either use the MACVLAN bypass as you linked to above or put the Docker/Podman container inside an Incus container with its bridge mode.
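The Docker side of that summary is roughly this sketch (subnet, gateway, and names are examples from my network, not universal):

```
# Docker's own IPAM hands out addresses from this subnet; no DHCP involved
docker network create -d macvlan \
  --subnet=192.168.1.0/24 --gateway=192.168.1.1 \
  -o parent=br0 lan
# pin the address yourself, or Docker just picks one from the subnet
docker run -d --network lan --ip 192.168.1.50 myimage
```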
Kubernetes continues to sound very powerful and flexible but is definitely beyond my reach yet. (Womp womp)
Thanks again for taking the time to type and explain all of that!
Thanks for taking the time to reply!
The host setup has eth0 as the physical interface to the rest of the network, with br0 replacing it completely. br0 has the same MAC as the eth0 interface, and eth0 just forwards to br0, which then does the bridging internally. Because br0 is a bridge, Incus can split it off without MACVLAN, using its nic device in bridge mode instead, which "Uses an existing bridge on the host (br0) and creates a virtual device pair to connect the host bridge to the instance." That results in a network interface that has its own MAC and is assigned a local IP by the DHCP server on the network, while also being able to talk to the host.
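For anyone else reading, that Incus device is added roughly like this sketch (instance and device names are my examples):

```
incus config device add mycontainer eth0 nic nictype=bridged parent=br0
```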
Incus accomplishes the same goal as Proxmox (Proxmox has similar bridge network devices for its containers/VMs), just without needing to be your OS/distro like Proxmox does - it's just a package.
As for Docker, the parent interface is br0, which has supplanted eth0. MACVLAN is working as intended in Docker, as far as I can tell. The container has a networking device with its own MAC address, and after supplying the MACVLAN network device with my network's subnet, gateway, and a static IP address in the Docker compose file, it works as expected. If I don't supply a static IP in the compose file, Docker just assigns it the first IP in the given subnet - no DHCP interaction. The docker-net-dhcp plugin (I linked to the issue about it no longer working on the latest version of Docker) was made to give Docker network devices the ability to use DHCP to get an IP address, but it's clearly not something to rely on.
If I’m missing something about MACVLAN that makes DHCP work for Docker, let me know! Hardcoding an IP into a docker-compose file adds an extra step to remember compared to everything else being configured on the centralized DHCP server - hence the shoddy implementation claim for Docker.
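For concreteness, here's roughly what that looks like in a compose file (subnet, gateway, and IP are examples from my network; the ipv4_address line is the hardcoding I'm complaining about):

```yaml
services:
  app:
    image: myimage
    networks:
      lan:
        ipv4_address: 192.168.1.50  # omit this and Docker just picks one
networks:
  lan:
    driver: macvlan
    driver_opts:
      parent: br0
    ipam:
      config:
        - subnet: 192.168.1.0/24
          gateway: 192.168.1.1
```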
Thanks for the link to using another MACVLAN and routing around the host<-/->container connection issue inherent to MACVLAN. I'll keep it in mind as an alternative to wrapping the container in an Incus container! I do wish there were something like Incus' hassle-free solution for Docker or Podman.
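For anyone landing here later, that bypass is roughly this sketch on the host (interface name and addresses are my examples; a second macvlan shim interface gives the host a path to the container):

```
ip link add macvlan-shim link br0 type macvlan mode bridge
ip addr add 192.168.1.240/32 dev macvlan-shim
ip link set macvlan-shim up
# route traffic for the container's macvlan IP through the shim
ip route add 192.168.1.50/32 dev macvlan-shim
```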
Not what you asked for but possibly useful: if you have Apple devices and can use AirPlay instead of Bluetooth, https://github.com/mikebrady/shairport-sync works really well. It even runs AirPlay 2 on a Pi Zero smoothly. Don't know of a Bluetooth option otherwise, sadly.
Sad to hear for my quadlet future, do you remember what things were specifically annoying?
Hey bigdickdonkey, I recently tried and wasn’t able to shit my way through podman, there just wasn’t enough chatter and guides about it. I plan to revisit it when Debian 13 comes out, which will include podman quadlets. I also tried to get podman quadlets to work on Ubuntu 24 and got closer, but still didn’t manage and Ubuntu is squicky.
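If it helps anyone, a quadlet is just a unit file along these lines (paths, names, and image are my examples; systemd generates the service from it):

```ini
# ~/.config/containers/systemd/web.container (user) or /etc/containers/systemd/ (root)
[Unit]
Description=Example web container

[Container]
Image=docker.io/library/nginx:alpine
PublishPort=8080:80

[Install]
WantedBy=default.target
```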
I read about true user rootless Docker and decided that was too finicky to keep up to date. It needs some annoying stuff to update, from what I could tell. I was planning on many users having their own containers, and that would have gotten annoying to manage. Maybe a single user would be an OK burden.
The podman people make a good argument for running podman as root and using userns to divvy out UIDs to achieve rootless https://www.redhat.com/en/blog/rootless-podman-user-namespace-modes but since podman is on the back burner till there’s more community and Debian 13, I applied that idea to Docker.
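The userns idea from that post boils down to something like this sketch (run as root; podman carves out a fresh UID range for the container):

```
podman run -d --userns=auto docker.io/library/nginx:alpine
```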
So I went with root Docker with the goals of:
Basically it’s the security best practices from this list https://cheatsheetseries.owasp.org/cheatsheets/Docker_Security_Cheat_Sheet.html
This still has risk of the Docker daemon being hacked from the container itself somehow, which podman eliminates, but it’s as close to the podman ideal I can get within my knowledge now.
Most things will run as rootless+read-only+cap_drop with minor messing. Automatic ripping machine would not, but that project is a wild ride of required permissions. Everything else has succumbed, but I’ve needed to sometimes have a “pre launch container” to do permission changes or make somewhere like /opt writable.
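Concretely, the pattern looks something like this sketch (UID, image, and tmpfs path are my examples; some images need more carve-outs than this):

```
docker run -d \
  --user 1000:1000 \
  --read-only --tmpfs /tmp \
  --cap-drop=ALL \
  --security-opt no-new-privileges \
  myimage
```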
I would transition one app stack at a time to the best security practices, and it’s easier since you don’t need to change container managers. Hope this helps!
Here is a nice summary from https://www.reddit.com/r/firefox/comments/o28yi4/comment/h26mguk/?context=3 :
Privacy Badger is also redundant. It's useless at best and can even do a disservice:
Its local learning is disabled by default. Since they turned off the heuristic, PB just blocks third-party cookies from the yellowlist. Keeping a separate extension to block cookies from ≈800 domains makes no sense when you have uBlock Origin with tens of thousands of domains in filter lists. It’s detectable, that is, it adds extra info to your fingerprint. Even despite the disabled local learning, some of its methods of work are still detectable (function code: API tampering detected). And if you enable local learning, PB can become even more detectable.
Also, it sends Global Privacy Control and Do Not Track headers (which even one of its creators called "a failed experiment") by default, which is useless and only gives extra bits for fingerprinting.
Basically, how Privacy Badger works is noticeable, but you can turn on local learning to get bespoke ad blocking at the cost of your device being much more easily identifiable. Maybe go half-and-half and leave Privacy Badger off in private browsing, so you can shop in that mode without Amazon knowing your life's history as easily.
Tiger, you're very similar to many of the semiconductor EEs I know :) and I mean that in a teasing-but-you-know-cause-you-work-in-the-industry way. Yeah, we only really care about whiskering in the context of electrical devices. That's what it's saying. Read the "Mechanics" section; it tells you nothing about actual electromigration doing it. They describe an E field encouraging metal ions in a fluid to make a reaching whisker, and link to electromigration because it technically is "electromigration" making the targeted whisker occur. But IC-style electromigration is not causing the whisker, clearly, since no currents are flowing, which is why I took the time to write the explanation in the first place.
But just because the semiconductor community called them whiskers, so they share a name with the Big Whiskers, does not make the processes anywhere close to similar. The current densities that cause IC whiskers are absolutely not present for the stress-driven ones, which the wiki article is about.
Tiger I think you’re being pedantic, they linked to Whiskers (metallurgy) not Whiskers (electromigration). There is a difference! But it’s not super clear cut, which is why I took the time to write about it.
Electrons do not always move at the same speed in a given metal. A lot of things affect mobility, and the E field is very important too; both combine to set the speed. But you can simplify in an IC world, because there you're basically always riding the saturation velocity, which I assume is why you keep claiming that.
I want you to know that your experiences from your education and job are valid - you do deal with whiskers in ICs, not denying that; the fact is that whiskers due to stresses and strains aren’t called electromigration which is what the original comment says.
“A similar thing also called whiskers can happen inside ICs and has been a known failure mode for high frequency processors for many years. I work in chip design, and we use software tools to simulate it. It’s due to electromigration and doesn’t rely on stresses but instead high current densities.”
Check out https://en.wikipedia.org/wiki/Hot-carrier_injection hot carrier degradation; it's in the vicinity of electron mobility but in a semiconductor setting. The key link is that it's electrons with momentum doing the work. In this case electrons (much hotter than in electron mobility, where they're limited by the saturation velocity) smash into the gate dielectric, making it a worse dielectric. Hot carrier injection doesn't have to end in damage to the dielectric, but when it does, it's hot carrier degradation. There's a lot going on though; semiconductors are really complex - like electron tunneling also exists.
The metal moves due to very different reasons. I would not say whiskers due to mechanical/residual stresses are due to “electromigration” - electromigration isn’t even there since the wiki definition is “transport of material caused by the gradual movement of the ions in a conductor due to the momentum transfer between conducting electrons and diffusing metal atoms”. You build stresses and strains into semiconductors for better mobility profiles, and I’m sure that can cause whiskers - but again, it’s not electromigration.
Electromigration, as noted, plays a role in the form of encouraging stress whiskers to grow in a direction (with a very relaxed definition).
But in ICs, with their very unique extremely small scales, electromigration can directly form whiskers by moving individual ions via electron collisions. But the generation mechanism for those whiskers shares nothing with Big Whiskers generation mechanism. That’s my point.
Electrons in metal do not always move at the same speed; they move at v = mu*E, where v is the velocity, mu is the electron mobility, and E is the electric field. Crank the E, you go faster. At very high E fields you reach the electron saturation velocity, where slowing factors limit the maximum speed - I assume in your IC world you're basically always there due to the extremely small regions (E = V/m; any V with m at nanometers is big E), which is why you claim that. But even then the electrons are accelerating due to the E field, smashing into ions and losing their momentum (mass is constant, so it's just velocity), and then re-accelerating. The saturation velocity is the average bulk motion of the electrons, but it's not a smooth highway, it's LA traffic (constant crashes).
Electrons can gain significant momentum, which is just their static mass times their velocity. Since velocity is capped by the saturation velocity, current density is what matters for significant momentum exchange. Luckily, ICs are so tiny that the currents they drive amount to massive current densities.
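To put rough numbers on it, here's a sketch with ballpark values for electrons in silicon (mobility ~1400 cm²/(V·s), saturation velocity ~1e7 cm/s; these are my illustrative numbers, not from the thread):

```python
# Ballpark sketch of drift velocity vs. field with the saturation cap.
# Constants are illustrative silicon-ish values, not from the original post.

MU = 1400.0      # electron mobility, cm^2/(V*s)
V_SAT = 1.0e7    # saturation velocity, cm/s
M_E = 9.109e-28  # electron rest mass, g

def drift_velocity(e_field_v_per_cm):
    """v = mu * E, capped at the saturation velocity (cm/s)."""
    return min(MU * e_field_v_per_cm, V_SAT)

def drift_momentum(e_field_v_per_cm):
    """p = m * v for the average drift motion (g*cm/s)."""
    return M_E * drift_velocity(e_field_v_per_cm)

# 1 V across 100 nm (1e-5 cm) is a field of 1e5 V/cm: fully saturated
print(drift_velocity(1e5))   # capped at 1e7 cm/s
print(drift_velocity(10.0))  # linear regime: mu*E = 1.4e4 cm/s
```

The takeaway matches the LA-traffic picture: at IC length scales even 1 V puts you at the cap, so raising the field raises collision violence, not bulk speed.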
What you said originally is correct; it’s just in ICs electromigration can cause whiskers. In the Big World it can’t. But it can influence Big Whiskers to grow to the worst places and fuck up things optimally if you take an extremely relaxed view of electromigration that defines it as “movement of ions encouraged by an electric field”.
This statement is not fully accurate. Whiskers in OP's case are (usually) tin whiskers that grow, often visibly, and can then connect (short) to unintended areas.
Electromigration is effectively when a large potential difference encourages ions to relocate to reduce the potential difference.
Big Whiskers have two methods of formation. The first way is that tin ions are able to move by becoming soluble in some form of water so they’re mobile. The other way whiskers can form is from stress alone. (Stress being force per area that compresses or tensions the metal in question, applied through a multitude of ways) Whiskers can be directed by electromigration so they form tendrils to a differing potential, basically purposefully ruining stuff instead of randomly shorting things.
Now in integrated circuits (ICs), there are extremely high currents running through extremely small regions. Electromigration in ICs is caused by electrons getting yeeted at extremely fast speeds, giving them significant momentum. They collide with ions in their path and dislodge the ions from their matrix. This can result in voids of ions preventing current from flowing (open circuits) or tendrils of ions making a path to an unintended area and connecting to it (shorting it). The tendrils here are also called whiskers, but are generated in a very different way (e.g., no water solubility or inherent stresses required) and on a significantly smaller scale. And probably not in tin.
The more you know!
In Incus, I had the same setup of an LXC container with a Docker container inside of it. I passed 1000/1000 to the LXC container, but the LXC container's default root user has an ID set of 0/0. So I had to pass 0/0 to the Docker container, not 1000/1000, to get the read/write permissions working.
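The mapping itself is set with raw.idmap, roughly like this sketch (container name is my example):

```
# map host UID/GID 1000 onto the container's root (0)
incus config set mycontainer raw.idmap "uid 1000 0
gid 1000 0"
incus restart mycontainer
```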
That may fix your issue as it’s basically the same tech, just different automated things implementing the LXC container!
Good to know Proxmox’s bad updates are more pervasive than the latest bad update.
I have been able to install Docker in the LXC containers and pull images in with the normal commands. I do that container-in-container to get effectively rootless docker containers for stuff that I couldn’t figure out how to run rootless. So you don’t even lose out on docker if you’re determined! And as you said incus goes on any OS, you can docker just fine on the base OS of your choice and use incus for specific things!
Try a different email if you do want one; a friend recently got one via email signup. And wait a few weeks. But I do absolutely agree it fuckin sucks you have to do any of this effort to get one, it just enables scalpers.
I do use it to hold internet-exposed things in LXC containers to sidestep having to figure out how to not run things as Docker root.
You do not need it for everything, but since it’s not an OS that makes it your everything, that’s ok! Run Docker containers as you need, put internet-exposed ones in an LXC container, put home assistant in a VM because it’s special.
Ah, I was wondering which one you updated that made your containers inaccessible!
You have to sign up for the in stock notifications, annoying but it works in a delayed fashion. Sad it does enable scalpers.
I see, do you know of a way in Docker (or Podman) to bind to a specific network interface on the host? (So that a container could use a macvlan adapter on the host)
Or are you more advocating for putting the Docker/Podman containers inside of a VM/LXC that has the macvlan adapter (or fancy incus bridge adapter) attached?
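The closest thing I know of is that port publishing can bind to a single host address, which pins it to whatever interface owns that address (IP, ports, and image are my examples), but that only covers published ports:

```
docker run -d -p 192.168.1.240:8080:80 myimage
```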