Choosing a preferred gateway in Linux with two interfaces configured for DHCP

The situation:

  • Main network and private network (physically separate in this case)
  • Main network has a DHCP server
  • Private network has a DHCP server (server A, OPNsense in this case), which also acts as a router to the main network
  • Want to set up a separate system (server B) that we can SSH, RDP etc. into from the main network to access the private network
    • Desire to use DHCP addressing for both interfaces on server B

A problem can arise with this setup if server B uses server A as its default gateway for whatever reason. Server B then only routes outbound traffic via server A, so when you try to remote into it from the main network (using the IP address of its main-network interface) the connection fails, as the return traffic is routed via server A and doesn’t go anywhere.

In theory this could be fixed by telling server A not to advertise itself as a gateway. In this case we want it to act as a gateway for everything on the private network except this one system, so we’d like to configure dhcpd on server A to omit the routers option just for this one host. For a static mapping entry in OPNsense the gateway field appears to imply that you can type none here and it will unset the default gateway (in the same way that setting this on the main DHCP page for the interface does). This does not in fact happen – see the forum discussion here for example.

There are workarounds:

  • Edit the dhcpd conf file on server A manually (example)
  • Unset the routers option at the server/subnet level and then include it explicitly in every static entry that should get a gateway (but this doesn’t cover dynamic entries) – see the sketch below
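
As a rough illustration of that second workaround, the dhcpd configuration on server A might look something like this (the host names and MAC addresses are made up for the example – only the idea of where option routers does and doesn’t appear matters):

subnet 192.168.60.0 netmask 255.255.255.0 {
    range 192.168.60.100 192.168.60.200;
    # no "option routers" at subnet scope, so dynamic leases get no default gateway
}

# a normal private-network host that should still get server A as its gateway
host serverC {
    hardware ethernet 00:11:22:33:44:55;
    fixed-address 192.168.60.50;
    option routers 192.168.60.1;
}

# server B: no "option routers" line, so no default gateway is handed out
host serverB {
    hardware ethernet 66:77:88:99:aa:bb;
    fixed-address 192.168.60.8;
}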

Alternatively, we can change the behaviour of server B. As this is a one-off issue this may be preferable. We could just set up everything as static addresses, but there is an arguably more elegant way – use interface metrics.

In this case we have two Ethernet interfaces, both configured by /etc/network/interfaces and obtaining their addresses by DHCP. Both will have a metric of zero (running ip route shows this – if there’s no metric number shown then it is 0 by default). The default gateway is probably chosen by whichever DHCP server answers first, or some other semi-random condition. We can alter the metric of one (or both) interfaces to prioritise the one we want. If we have interfaces ens19 (main network) and ens18 (private network), for example, the configuration could look like this:

# This file describes the network interfaces available on your system
# and how to activate them. For more information, see interfaces(5).

source /etc/network/interfaces.d/*

# The loopback network interface
auto lo
iface lo inet loopback

# Main network
allow-hotplug ens19
iface ens19 inet dhcp

# Private network
allow-hotplug ens18
iface ens18 inet dhcp
metric 100

Adding the metric 100 line to the ens18 stanza deprioritises it (ens19 keeps the default metric of 0). Restarting the networking with systemctl restart networking and then running ip route gives something like:

default via 172.40.204.1 dev ens19
default via 192.168.60.1 dev ens18 metric 100
172.40.204.0/24 dev ens19 proto kernel scope link src 172.40.204.27
192.168.60.0/24 dev ens18 proto kernel scope link src 192.168.60.8

Accessing server B from the main network using 172.40.204.27 should then work properly. As a bonus, if ens19 loses its connection it should fall back to the private network gateway (although for our particular use case this isn’t important).
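
To double-check which interface actually carries the default route, something like the following can be used (addresses as in the example above):

ip route show default
ip route get 172.40.204.1    # should go directly out of ens19, not via 192.168.60.1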

autofs/automount failing to mount NFS exports from localhost in Debian 12

We have a setup where various systems export scratch directories to each other for convenience. An export is set up like:

/local/scratch 192.168.2.0/24(rw,root_squash,no_subtree_check,sync)

Autofs maps come from LDAP – the autofs configuration is:

[autofs]
timeout = 300
browse_mode = no
mount_nfs_default_protocol = 4

ldap_uri = ldap:///dc=ldap,dc=your,dc=server,dc=dns,dc=name

search_base = ou=maps,ou=subsubunit,ou=subunit,o=organisation

map_object_class = nisMap
entry_object_class = nisObject
map_attribute = nisMapName
entry_attribute = cn
value_attribute = nisMapEntry

[ amd ]
dismount_interval = 300

If server A (IP address 192.168.2.101) exports the directory, then server B (IP address 192.168.2.102), which also gets the autofs maps, automounts it as expected when the directory is accessed. The findmnt command shows:

└─/scratch                           auto.scratch    autofs     rw,relatime,fd=25,pgrp=2511731,timeout=300,minproto=5,maxproto=5,indirect,pipe_ino=1385785421
  └─/scratch/serverA                 serverA:/local/scratch     nfs4       rw,nosuid,nodev,relatime,vers=4.2,rsize=1048576,wsize=1048576,namlen=255,hard,proto=tcp,timeo=600,retrans=2,sec=sys,clientaddr=192.168.2.102,local_lock=none,addr=192.168.2.101

If server A is a Debian 11 system and we access /scratch/serverA on server A itself, the resulting findmnt output looks like:

└─/scratch                           auto.scratch                              autofs     rw,relatime,fd=25,pgrp=2511731,timeout=300,minproto=5,maxproto=5,indirect,pipe_ino=1385785421
  └─/scratch/serverA                 /dev/nvme0n1p1                            ext4       rw,relatime

The automounter has realised that the export comes from the same system, and does a bind mount instead. In this case /local/scratch is where the partition /dev/nvme0n1p1 is mounted (this appears further up the findmnt tree). The automounter appears to follow the chain back and do a bind mount of the base filesystem to the new location. Note that this behaviour can be changed with the nobind option in the autofs maps – see the auto.master man page.
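
As an aside, if you wanted to force a real NFS mount even on the exporting host, a map entry using the nobind option might look something like this (shown here in flat-file map syntax for clarity – our maps actually come from LDAP, where the same string goes in nisMapEntry):

serverA    -fstype=nfs4,nobind    serverA:/local/scratch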

If server A is then upgraded or reinstalled as Debian 12, the behaviour changes. Initially, the automount fails. When we examine the logs we see:

rpc.mountd[2437]: refused mount request from 127.0.0.1 for /local/scratch (/): not exported

We can add the loopback address to the list of allowed addresses in /etc/exports, and the automount then works. findmnt then shows:

└─/scratch                                               auto.scratch            autofs      rw,relatime,fd=18,pgrp=5367,timeout=300,minproto=5,maxproto=5,indirect,pipe_ino=32005
  └─/scratch/serverA                                     serverA:/local/scratch  nfs4        rw,nosuid,nodev,relatime,vers=4.2,rsize=1048576,wsize=1048576,namlen=255,hard,proto=tcp,timeo=600,retrans=2,sec=sys,clientaddr=127.0.0.1,local_lock=none,addr=127.0.1.1

So the automount can be made to work as an actual network mount going through the loopback address.
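
For reference, the adjusted /etc/exports line is then something along these lines (the exact option set is whatever you already use – the point is the extra loopback entry, 127.0.0.1 being the address the refused request came from in the log above):

/local/scratch 192.168.2.0/24(rw,root_squash,no_subtree_check,sync) 127.0.0.1(rw,root_squash,no_subtree_check,sync)

followed by exportfs -ra to apply the change.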

This actually appears to be the expected behaviour for nfs4 – see this bug report from 2006(!). Notably:

Jeff Moyer 2006-04-17 21:52:34 UTC
In the event that you have “fstype=nfs4” in your options, the automounter should
not attempt a bind mount when the server is localhost. The reason for this is
that the automounter would need nfsv4 specific information to determine where
the mount actually comes from. As you noted, it’s not straight-forward as it is
in v2 and v3.

So, for your nfsv2 and nfsv3 mounts, the automounter should continue to use bind
mounts. If you specify -fstype=nfs4, then you should get a mount via nfs4.

Clear as mud?

So the question should probably be: why does the automounter do bind mounts in Debian 11, even though it is nominally dealing with NFSv4? Is it falling back to v3 somehow, or is there a Debian-specific patch? Whatever it is, I can’t find any Debian-specific discussion of the change (and there was only a minor version bump of autofs between Debian 11 and 12).

Kea DHCP service failing to open network socket on boot

System – Kea IPv4 DHCP server (version 2.2.0) running on Debian 12, serving addresses on a second network interface (a PCIe card).

On reboot the service starts, but warns that it could not open a socket on the prescribed interface, and so is not serving any addresses. Running

systemctl status kea-dhcp4-server.service

shows the error:
kea-dhcp4[582]: WARN DHCPSRV_OPEN_SOCKET_FAIL failed to open socket: the interface enp3s0 is not running
kea-dhcp4[582]: INFO DHCP4_OPEN_SOCKETS_FAILED maximum number of open service sockets attempts: 0, has been exhausted without success

Restarting the service afterwards works normally. This is a known problem. There are various suggestions to fix the systemd ordering (see the sketch at the end of this section), but the simplest (albeit slightly inelegant) way to get it to work is to tell Kea to retry a number of times with the service-sockets-max-retries and service-sockets-retry-wait-time options. So the start of the configuration looks like this:

"Dhcp4": {
    "interfaces-config": {
        "interfaces": [ "enp3s0" ],
        "dhcp-socket-type": "raw",
        "service-sockets-max-retries": 100,
        "service-sockets-retry-wait-time": 5000
    },

This retries every 5 seconds up to 100 times, which should be enough to get things in order. Note that, annoyingly, there is no message logged when a retry succeeds.
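
For completeness, the ordering-based alternative would be a systemd override along these lines (untested here – it also relies on something like systemd-networkd-wait-online or NetworkManager-wait-online actually waiting for enp3s0 to come up):

# systemctl edit kea-dhcp4-server.service
[Unit]
Wants=network-online.target
After=network-online.target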

HTTP 403 code when trying to access Embedded Web Server (EWS) on new HP printers

Scenario: You have a new HP Color LaserJet Pro MFP 4302. It has been powered up and assigned (manually or by DHCP) an IP address which already has a DNS name associated with it. After finding the label with the initial admin password you try to access the Embedded Web Server (EWS) via your web browser, using the DNS name.

What you get after a couple of seconds is an HTTP 403 error.

Try accessing the page using the IP address instead – it should work. The problem seems to be that if the printer thinks its hostname is different from its name in DNS, the EWS breaks.

In this particular case it was fixed by going to Network→Network Settings→Identification and changing the Host Name to match the name in the DNS server. Then (after some refreshing of pages and logging in again) you can use the DNS name to access the EWS as expected.

md array with disk gone missing – recovering data

Had a server that decided to drop a disk (or the disk went faulty) in a RAID5 array. On reboot the array didn’t want to start. Output of mdadm --detail /dev/md0:

/dev/md0:
        Version : 1.2
  Creation Time : Wed May 22 18:17:58 2013
     Raid Level : raid5
  Used Dev Size : -1
   Raid Devices : 3
  Total Devices : 2
    Persistence : Superblock is persistent

    Update Time : Sun Jan 3 06:50:05 2021
          State : active, degraded, Not Started
 Active Devices : 2
Working Devices : 2
 Failed Devices : 0
  Spare Devices : 0

         Layout : left-symmetric
     Chunk Size : 512K

           Name : nebula:0 (local to host nebula)
           UUID : 962b8ff0:00d88161:5a030e1f:236466af
         Events : 31168

    Number   Major   Minor   RaidDevice   State
       4       8       16        0        active sync   /dev/sdb
       1       0        0        1        removed
       3       8       48        2        active sync   /dev/sdd

Try to start the array with mdadm --run /dev/md0:

mdadm: failed to run array /dev/md0: Input/output error

As expected, given it didn’t autorun on boot.

Trying mdadm --assemble --run /dev/md0 /dev/sdb /dev/sdd

mdadm: /dev/sdb is busy - skipping
mdadm: /dev/sdd is busy - skipping

This is because the md device is still holding the disks. Stop it with mdadm --stop /dev/md0

then reassemble with mdadm --assemble --run --force /dev/md0 /dev/sd[bd]

mdadm: Marking array /dev/md0 as 'clean'
mdadm: /dev/md0 has been started with 2 drives (out of 3).

Ran mount -a to rerun fstab. Took a few seconds but worked. Copy data off ASAP!
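
It’s also worth keeping an eye on the array state while copying the data off, e.g.:

cat /proc/mdstat
mdadm --detail /dev/md0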


Running svnadmin verify as root – permission issues on Berkeley DB repositories

After a system crash I ran svnadmin verify on some of the relevant repositories to check things were OK – this was done as root. Afterwards, normal network access (via Apache) was broken with

Internal error: Berkeley DB error for filesystem

appearing in the logs. The fix was to run chown -R www-data:www-data on the broken repositories. It looks like running svnadmin verify as root changes ownership or permissions somewhere inside the repository (at least for Berkeley DB ones), presumably because it creates or touches database files as root.
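
In other words, something like this (the repository path is just an example), and in future it is probably safer to run the check as the Apache user in the first place:

chown -R www-data:www-data /srv/svn/myrepo

sudo -u www-data svnadmin verify /srv/svn/myrepo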

Lesson learned – check normal SVN access after doing this sort of thing!

Blocking or disabling autofs automounts with the -null map

Suppose you have a Linux network setup with automounter maps that come from the network (via NIS, SSSD, LDAP etc.) and you want to stop some of them acting on a particular system. In our case we have an automount map that acts on /opt and mounts various software packages from network shares. The problem with this is that you then can’t install your own stuff locally into /opt, which is what a lot of Debian/Ubuntu packages expect to be able to do.

It turns out there is an option in the automounter for exactly this sort of situation: a built-in map called -null that blocks any further automounts to a particular mountpoint. In our case we want to block auto.opt, so we add a line to auto.master (somewhere before the bottom +auto.master line):

/opt  -null
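
For context, the relevant bit of auto.master might then look something like this (the /misc line is just a typical existing entry, not something you need to add):

/misc   /etc/auto.misc
/opt    -null
+auto.master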

Then restart the autofs service (if stuff was already mounted on /opt, unmount it first), or reboot the system. You should find that you can now put stuff in the local /opt.

To check the map is blocked you can also run

automount --dumpmaps

(also handy for checking what is actually meant to be mapped where).

Another way of doing this that leaves the system auto.master untouched is to create a file /etc/auto.master.d/opt.autofs (the first part of the name can be anything you want). Put the same contents in the file, e.g.

/opt  -null

Note that using this mechanism normally requires two files – one in /etc/auto.master.d/ and a map file that it refers to. In this case -null is a built-in map.
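
For comparison, the usual two-file arrangement for a normal (non-built-in) map would be roughly as follows (the paths and server name here are made up):

# /etc/auto.master.d/data.autofs
/data   /etc/auto.data

# /etc/auto.data
scratch    -fstype=nfs4    someserver:/export/scratch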

Unfortunately this option is not well documented.

There are also other built-in maps, e.g. -passwd, -hosts, -fedfs. Of these only the -hosts map is documented in the auto.master(5) man page.

-null is confirmed to work in CentOS 7, CentOS 8, Ubuntu 20.04, Debian 10.

TrueNAS and Windows clients – NTLMv2 issues

Situation – TrueNAS (or FreeNAS, or other Samba servers) serving an SMB share with NTLMv1 authentication disabled. A standalone Windows 10 system can connect to it, but a domain-joined Windows 10 system constantly claims the password is wrong.

The culprit here was an old group policy setting in the domain:

Network Security: LAN Manager authentication level

(found in Computer Configuration→Windows Settings→Security Settings→Local Policies→Security Options)

This was set to Send LM & NTLM - use NTLMv2 session security if negotiated, for backwards compatibility with Windows 2000 boxes and the like. This sets the registry value LmCompatibilityLevel to 1 under HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Lsa.

Unfortunately this is a bit misleading. According to this article:

Security Watch: The Most Misunderstood Windows Security Setting of All Time

This should negotiate better session security if possible, but does not actually send NTLMv2 requests or responses.

Thus trying to connect to a TrueNAS SMB share fails unless NTLMv1 Auth is explicitly enabled (in the service settings).

Ideally the group policy should be removed and the normal setting restored (NTLMv2 only). Or we can enable NTLMv1 on the share if it isn’t going to be a permanent setup.
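
To confirm what a given client has actually ended up with, the effective registry value can be checked from an elevated command prompt (domain policy will of course re-apply itself at the next policy refresh):

reg query "HKLM\SYSTEM\CurrentControlSet\Control\Lsa" /v LmCompatibilityLevel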