  • I select hostnames drawn from the ordinal numerals of whatever language I happen to be trying to learn. Recently, it was Japanese, so the first host was named “ichiro”, the second “jiro”, and the third “saburo”.

    Those are the romanized spellings of the original kanji characters: 一郎, 二郎, and 三郎. These aren’t the ordinal numbers per se (eg first, second, third) but are an old way of assigning given names to male children. They literally mean “first son”, “second son”, “third son”.

    Previously, I did French ordinal numbers, and the benefit of naming this way is that I can enumerate a countably infinite number of hosts lol



  • Ah, now I understand your setup. To answer the title question, I’ll have to be a bit verbose with how I think Incus behaves, so that the Docker behavior can be put into context. Bear with me.

    br0 has the same MAC as the eth0 interface

    This behavior stood out to me, since it’s not a fundamental part of Linux bridging. It turns out this might be a systemd-specific thing: creating a bridge is functionally equivalent to creating a software switch, where every port of the switch has its own MAC and all “clients” of that switch also have their own MACs, so nothing obliges the bridge to copy eth0’s address. If I had to guess, systemd does this so that traffic from the physical interface (eth0) that passes directly through to br0 will carry the MAC already seen on the physical network, making traffic flows easier to follow in Wireshark, for example. I personally can’t agree with this design choice, since it obfuscates what Linux is really doing vis-a-vis a software switch. But reusing this MAC here is merely a weird side effect and doesn’t influence what Incus is doing.
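
    If you want to check this on your own system, the commands below are just for illustration (the throwaway bridge name is made up; eth0/br0 are from your setup):

        # compare the MAC systemd gave br0 with eth0's MAC
        ip -br link show eth0
        ip -br link show br0
        # a bridge created by hand typically gets its own randomly generated MAC
        ip link add name br-test type bridge
        ip -br link show br-test
        ip link del br-test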

    Instead, the reason Incus needs the bridge interface is precisely because a physical interface like eth0 will not automatically forward frames to subordinate interfaces, whereas for a virtual switch, that’s the default. To that end, the bridge interface is combined with virtual ethernet (veth) interfaces – another networking primitive in Linux – running to each container that Incus manages. The behavior of a veth is akin to a point-to-point network cable, plus the NICs on both ends. That means a veth always consists of a pair of interfaces, where traffic into one end comes out the other, and each interface has its own MAC address. Functionally, this is the networking equivalent of a bidirectional pipe.

    By combining a bridge (ie a virtual switch) with veth (ie virtual cables), we have a full Layer 2 network topology that behaves exactly as if it were a physical bridge with physical cables. Thus, your DHCP server is none the wiser when it sends and receives BOOTP traffic for assigning an IP address. This is the most flexible way of constructing a virtual network within Linux, since it has feature parity with physical networks: there is no Macvlan or Ipvlan or tunneling or whatever needed to make this work. Linux is just operating as a switch, with all the attendant flexibility. This architecture is what Calico – a network framework for Kubernetes – uses in order to achieve scalable Layer 3 connectivity to containers; by default, Kubernetes does not depend on Layer 2 to function.
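
    For illustration, this is roughly the plumbing that Incus automates, using a network namespace to stand in for a container; every name below is made up and not what Incus actually creates:

        ip link add veth-host type veth peer name veth-ct   # the virtual "cable"
        ip link set veth-host master br0                    # plug one end into the switch
        ip link set veth-host up
        ip netns add demo-ct                                # stand-in for a container
        ip link set veth-ct netns demo-ct                   # the other end goes inside it
        ip netns exec demo-ct ip link set veth-ct up
        # the namespace now sits on the same Layer 2 segment as eth0, so a DHCP
        # client run inside it (eg udhcpc -i veth-ct) can reach the real DHCP server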

    OK, so we now understand why Incus does things the way it does. For Docker, when using the Macvlan driver, the benefits of the bridge+veth model are not achieved, because Macvlan – although a feature of Linux networking – is implemented against an individual interface on the host. Compare this to a bridge, which is a standalone concept and thus can exist with or without any interfaces to the host: when Linux is actually used as a switch – like on many home routers – the host itself can choose to have zero interfaces attached to the switch, meaning that traffic flows through the box rather than to the box as a destination.

    So when creating subordinate interfaces using Macvlan, we get most of the same bridging behavior as bridge+veth, but the Macvlan implementation in the kernel means that outbound traffic from a subordinate interface always gets put onto the outbound queue of the parent interface. This makes it impossible for a subordinate interface to exchange traffic with the host itself, by design. Had the kernel developers chosen to go the extra mile, they would have just reinvented bridge+veth in an excessively niche form.
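
    If it helps to see how little is involved, this is all a Macvlan subordinate interface amounts to (the interface name is illustrative):

        # hang a macvlan subinterface off the physical NIC
        ip link add mv0 link eth0 type macvlan mode bridge
        ip link set mv0 up
        # siblings of mv0 and hosts out on the physical LAN can reach it, but
        # eth0 on this same host cannot exchange traffic with it, by design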

    We also need to discuss the behavior of Docker networks. Similar to Kubernetes, containers managed by Docker mandate having IP connectivity (Layer 3). But whereas Kubernetes will not start a container unless an IPAM (IP Address Management) plugin explicitly provides an IP address, Docker’s legacy behavior is to always generate a random IP address from a default range, unless given an IP explicitly. So even though bridge+veth or Macvlan provides the Layer 2 connectivity to a DHCP server needed to obtain an IP address, Docker is eager to hand out an IP, just so the container has one from the very start. The distinction between Docker and Kubernetes+Calico is thus one of actual utility: by getting an address from Calico’s IPAM, Kubernetes knows that the address will actually work for networking, because Calico also creates/manages the network. Whereas Docker has no problem assigning an IP without actually checking whether that IP can be used on the network; it’s almost a pro-forma exercise.
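
    You can see that eagerness directly; the subnet, gateway, and network name below are placeholders, not a recommendation:

        # Docker picks an address out of the subnet the instant the container
        # starts, whether or not anything else on the wire agrees with it
        docker network create -d macvlan \
            --subnet 192.168.69.0/24 --gateway 192.168.69.1 \
            -o parent=eth0 demo-net
        docker run --rm --network demo-net alpine ip addr show eth0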

    I will say this about early Docker: although they led the charge for making containers useful, the way they implemented networking was very strange. It produced a whole class of engineers who now have a deep misunderstanding of how real networks operate, and that only causes confusion when scaling up to orchestrated container frameworks like Kubernetes, which depend on a rigorous understanding of networking and its Linux implementation. But all the same, Docker was more interested in getting things working without external dependencies like DHCP servers, so there’s some sense in mandating an IP locally, perhaps because they didn’t yet envision that containers would talk to the physical network.

    The plugin that you mentioned operates by requesting a DHCP-assigned address for each container, but within the Docker runtime. And once it obtains that address, it then statically assigns it to the container. So from the container’s perspective, it’s just getting an IP assigned to it, not aware that DHCP has happened at all. The plugin is thus responsible for renewing that IP periodically. It’s a kludge to satisfy Docker’s networking requirements while still using DHCP-assigned addresses. But Docker just doesn’t play well with Layer 2 physical networks, because otherwise the responsibility for running the DHCP client would fall to the containers; some containers might not even have a DHCP client to run.

    If I’m missing something about MACVLAN that makes DHCP work for Docker, let me know!

    Sadly, there just isn’t a really good way to do this within Docker, and it’s not the kernel’s fault. Other container runtimes like containerd – which relies wholly on the standard CNI plugins and thus doesn’t have Docker’s networking footguns – have no problem with containers running their own DHCP client on a bridged network. But any container manager that handles DHCP assignment without the container’s cooperation ends up with the same kludge that Docker did. And that’s probably why no major container manager does it natively; it’s hard to solve.

    I do wish there could be something like Incus’ hassle-free solution for Docker or Podman.

    Since your containers were able to get their own DHCP addresses from a bridged network in Incus, can you still run the DHCP client on those containers to override Docker’s randomly-assigned local IP address? You’d have to use the bridge network driver in Docker, since you also want host-container traffic to work and we know Macvlan won’t do that. But even this is a delicate solution, since if DHCP fails to assign an address, then your container still has the Docker-assigned address but it won’t be usable on the bridged network.
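
    A very rough sketch of that idea, with the caveat that the names and addresses are made up and, as said, the whole thing is delicate:

        # point Docker at your existing bridge instead of letting it create one,
        # then let the container overwrite Docker's assigned address via DHCP
        docker network create -d bridge \
            -o com.docker.network.bridge.name=br0 \
            --subnet 192.168.69.0/24 --gateway 192.168.69.1 lan
        docker run -it --network lan --cap-add NET_ADMIN alpine sh
        # inside the container:
        #   ip addr flush dev eth0 && udhcpc -i eth0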

    The best solution I’ve seen for containers on DHCP-assigned networks is to not use DHCP assignment at all. Instead, part of the IP subnet is carved out, a region which is dedicated only for containers. So in a home IPv4 network like 192.168.69.0/24, the DHCP server would be restricted to only assigning 192.168.69.2 through 192.168.69.127, and then Docker would be allowed to allocate the addresses from 192.168.69.128 to 192.168.69.254 however it wants, with a subnet mask of 255.255.255.0. This mask allows containers to speak directly to addresses in the entire 192.168.69.0/24 range, which includes the rest of the network. The other physical hosts do the same, allowing them to connect to containers.
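
    With Docker’s Macvlan driver, that carve-out maps directly onto the --ip-range option; the values below just restate the example:

        # Docker self-allocates only from the upper half; the DHCP server owns the rest
        docker network create -d macvlan \
            --subnet 192.168.69.0/24 \
            --ip-range 192.168.69.128/25 \
            --gateway 192.168.69.1 \
            -o parent=eth0 carved-lan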

    This neatly avoids interacting with the DHCP server, but at the cost of central management, and it splits the allocatable addresses into smaller pools, potentially causing exhaustion on one side while the other still has spare addresses. Yet another reason to adopt IPv6 as the standard for containers, but I digress. For Kubernetes and similar orchestration frameworks, DHCP isn’t even considered, since the orchestrator must have full internal authority to assign addresses with its chosen IPAM plugin.

    TL;DR: if your containers are like mini VMs, DHCP assignment is doable. But if they’re pre-packaged appliances, then only sadness results when trying to use DHCP.


  • I want to make sure I’ve understood your initial configuration correctly, as well as what you’ve tried.

    In the original setup, you have eth0 as the interface to the rest of your network, and eth0 obtains a DHCP-assigned address from the DHCP server. Against eth0, you created a bridge interface br0, and your host also obtains a DHCP-assigned address on br0. Then in Incus, you created a Macvlan network against br0, such that each container attached to this network is assigned a random MAC, and all the container Ethernet frames are bridged to br0, which in turn bridges to eth0. In this way, each container can receive a DHCP-assigned address. Also, each container can send traffic to the br0 IP address to access services running on the host. Do I have that right?

    For your Docker attempt, it looks like you created a Docker network using the Macvlan driver, but it wasn’t clear to me if the parent interface here was eth0 or br0, if you still have br0. When you say “I have MACVLAN working”, can you describe which aspect is working? Unique MAC assignment? Bridged traffic to/from the containers or the network?

    I’m not very familiar with Incus, and I’m entirely in the dark about this shoddy plugin you mentioned for getting DHCP to work with Macvlan. So far as I’m aware, modern Docker Engine uses pluggable network drivers when creating networks, so the “-d macvlan” parameter specifies which driver to load. Since this would all be at Layer 2, I don’t see why a plugin is needed to support DHCP – v4 or v6? – traffic.

    And the host cannot contact the container due to the MACVLAN method

    Correct, but this is remedied by what’s to follow…

    Can I make another bridge device off of br0 and bind to that one host-like?

    Yes, this post seems to do exactly that: https://kcore.org/2020/08/18/macvlan-host-access/
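
    The usual shape of that workaround is to give the host its own Macvlan “leg” alongside the containers’ interfaces and route the container range through it; the interface name and addresses below are illustrative:

        # a macvlan shim for the host itself
        ip link add mv-host link eth0 type macvlan mode bridge
        ip addr add 192.168.69.250/32 dev mv-host
        ip link set mv-host up
        # send traffic for the containers' carve-out via the shim instead of eth0
        ip route add 192.168.69.128/25 dev mv-host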

    I can always put a Docker/podman inside of an Incus container, but I’d like to avoid onioning if possible.

    I think you’re right to avoid multiple container management tools, if only because it’s generally unnecessary. Although it kinda looks like Incus is more akin to Proxmox, in that it supports managing VMs and containers, whereas Podman and Docker only manage containers, which is further still distinct from the container runtime (eg CRI-O, containerd, Docker Engine (which uses containerd under the hood)).








  • All the other comments on getting comfortable with the physical operation are apt, so what I’ll add is that with old power tools, a possible concern is electrical safety. If you’re capable with electrics, you might partially take apart the machines to verify that things are in working order. As in, no frayed or loose wires, grounding continuity exists, safety circuits are intact, etc.

    And when you’re using such equipment, make sure you’re using a properly sized extension cord (eg 12 AWG or 4 mm^2) and a GFCI-protected outlet.





  • This might be true, although I do it mostly so I can remove the earplugs and rest them around my neck if someone needs to talk to me.

    The best PPE is the kind with the fewest barriers to actually using it. Even the minor annoyance of having to set down untethered earplugs is best avoided if it acts as a subconscious disincentive to wearing PPE. Good safety policy adapts to and accommodates this aspect of human behavior.


  • I’m nowhere even remotely comparable to a proper furniture maker, but I can tell you some pitfalls to avoid.

    Don’t cut wood without eye, ear, and face protection. The dust, noise, or fumes will get you one day or another if you go without protection. I prefer earmuffs over earplugs, but if you use earplugs, use the ones which tether the pair together. For a face mask, I like low-profile half-masks like this one: https://www.kleintools.com/catalog/respirators/p100-half-mask-respirator-sm

    Resist the urge to dive into woodworking by starting with reclaimed wood. For example, pallets are a cheap/free source of material, but it’s a hodge-podge of different varieties, all riddled with nail holes, dents, and brown stains from rusty fasteners.

    That’s not to say it can’t be done, but it certainly aggravates the process if you’re just starting. I once came across a section of 2x4 recovered from a pallet, thinking that it would cut just like the pine I was used to. Instead, it wrecked two drill bits and burned a circular saw blade as well as itself. I later mailed a sample of it to the USDA Wood Identification Public Service, who informed me that it was Acer (Hard Maple). Up until then, I didn’t even know that maple came in hard and soft varieties.

    It seems hard maple is tougher than nails and drill bits. I’m still learning.


  • Ah, now I understand what you mean. Yes, the stock C80 would indeed legally be a Class 2 ebike in California, by virtue of its operable pedals, whether or not it’s actually practical to use those pedals. That the marketing material suggests the C80 is used primarily with its throttle is no different from other Class 2 ebikes, which are often ridden throttle-only, as many city dwellers have come to fear.

    As for the unlock to Class 3, I wonder how they do that: California’s Class 3 does not allow throttle-only operation, requiring some degree of pedal input.

    The spectrum of two-wheelers in California includes: bicycles, ebikes (class 1, 2, 3), scooters, mopeds (CVC 406), motor-driven cycles, and motorcycles (aka motorbikes; CVC 400).

    The “moped” category, one that had almost been left behind in the 1970s, has seen a resurgence: the now-updated law recognizes electric two- or three-wheelers of up to 30 mph and 4 HP (3 kW). These mopeds are street legal and bike-lane legal, require no annual registration and no insurance, but do need an M1/M2 license. These CVC 406 mopeds are not freeway legal, but darn if they’re not incredibly useful for in-town riding.

    I could get myself an electric dirt bike and plates for it, 100% legally.


  • Do you have a reference for “class 3 e-scooters”? My understanding of the California Vehicle Code is that the class system only applies to bicycles with pedals, per CVC 312.5.

    Whereas e-scooters – the things that Bird and Lime rent through their apps – exist under CVC 407.5, which previously covered the older, gasoline-powered 50 cc types of scooters. But apparently the law has now completely written out the gas-powered ones, only mentioning electric-powered “motorized scooters”.

    Strictly speaking, there isn’t a requirement in the law for e-scooters to have a speed governor, whereas ebikes must have one, set to either 20 mph (32 kph) or 28 mph (45 kph). Instead, riders of e-scooters are subject to a speed limit of 15 mph (25 kph), a holdover from the days of the gas-powered scooters.

    The key distinction here is that an ebike over-speeding beyond its class rating is an equipment violation, akin to an automobile without operational brake lights. But an e-scooter over-speeding beyond 15 mph is a moving violation, potentially incurring points on the rider’s driving license – if they have one – and, somewhat bizarrely, potentially impacting their auto insurance rates.

    I’m not saying CA law is fair to e-scooters – it’s not – but I can’t see a legal scenario where an e-scooter can overtake an ebike rider if both are operating at full legal limits.