Kea DHCP service failing to open network socket on boot

System – Kea IPv4 DHCP server (version 2.2.0) running on Debian 12, serving addresses on a second network interface (a PCIe card).

On reboot the server starts, but warns that it could not open the socket on the prescribed interface, and so is not serving any addresses. Running

systemctl status kea-dhcp4-server.service

shows the error:
kea-dhcp4[582]: WARN DHCPSRV_OPEN_SOCKET_FAIL failed to open socket: the interface enp3s0 is not running
kea-dhcp4[582]: INFO DHCP4_OPEN_SOCKETS_FAILED maximum number of open service sockets attempts: 0, has been exhausted without success

Restarting the service afterwards works normally. This is a known problem. There are various suggestions for fixing the ordering, but the simplest (albeit slightly inelegant) way to get it working is to tell Kea to retry a number of times, using the service-sockets-max-retries and service-sockets-retry-wait-time options. The start of the configuration then looks like this:

"Dhcp4": {
    "interfaces-config": {
        "interfaces": [ "enp3s0" ],
        "dhcp-socket-type": "raw",
        "service-sockets-max-retries": 100,
        "service-sockets-retry-wait-time": 5000
    },

This retries every 5 seconds, up to 100 times, which should be more than enough time for the interface to come up. Note that, annoyingly, no message is logged when a retry succeeds.
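An alternative to the retry options (untested here, but a common pattern for services that need an interface up) is a systemd drop-in that delays the start until the network is fully online. A minimal sketch, via systemctl edit kea-dhcp4-server.service:

[Unit]
Wants=network-online.target
After=network-online.target

This relies on a working network-online.target (i.e. systemd-networkd-wait-online or NetworkManager-wait-online being enabled), which is why the retry options feel more robust. Either way, after editing the Kea configuration it can be sanity-checked with kea-dhcp4 -t /etc/kea/kea-dhcp4.conf before restarting.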

HTTP 403 code when trying to access Embedded Web Server (EWS) on new HP printers

Scenario: You have a new HP Color LaserJet Pro MFP 4302. It has been powered up and assigned (manually or by DHCP) an IP address which already has a DNS name associated with it. After finding the label with the initial admin password you try to access the Embedded Web Server (EWS) via your web browser, using the DNS name.

What you get after a couple of seconds is an HTTP 403 error.

Try accessing the page using the IP address – it should work. The problem seems to be that if the printer's own idea of its host name differs from the DNS name used to reach it, the EWS refuses the request.
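A quick way to compare the two from a command line (hypothetical name and address – the -k is because the EWS certificate is self-signed):

curl -k -I https://printer.example.com/   # fails with HTTP/1.1 403 Forbidden
curl -k -I https://192.168.1.50/          # works, HTTP/1.1 200 OK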

In this particular case, it was fixed by going to Network→Network Settings→Identification and changing the Host Name to match the name in the DNS server. Then (after some refreshing of pages and re-logging in) you can use the DNS name to access the EWS as expected.

md array with disk gone missing – recovering data

Had a server that decided to drop a disk (or the disk went faulty) in a RAID5 array. On reboot the array didn't want to start. Output of mdadm --detail /dev/md0:

/dev/md0:
Version : 1.2
Creation Time : Wed May 22 18:17:58 2013
Raid Level : raid5
Used Dev Size : -1
Raid Devices : 3
Total Devices : 2
Persistence : Superblock is persistent

Update Time : Sun Jan 3 06:50:05 2021
State : active, degraded, Not Started
Active Devices : 2
Working Devices : 2
Failed Devices : 0
Spare Devices : 0

Layout : left-symmetric
Chunk Size : 512K

Name : nebula:0 (local to host nebula)
UUID : 962b8ff0:00d88161:5a030e1f:236466af
Events : 31168

Number Major Minor RaidDevice State
4 8 16 0 active sync /dev/sdb
1 0 0 1 removed
3 8 48 2 active sync /dev/sdd

Trying to start the array with mdadm --run /dev/md0 gives:

mdadm: failed to run array /dev/md0: Input/output error

As expected, given it didn’t autorun on boot.

Trying mdadm --assemble --run /dev/md0 /dev/sdb /dev/sdd gives:

mdadm: /dev/sdb is busy - skipping
mdadm: /dev/sdd is busy - skipping

This is because the (inactive) md device is still holding the disks. Stop it with mdadm --stop /dev/md0,

then reassemble with mdadm --assemble --run --force /dev/md0 /dev/sd[bd]

mdadm: Marking array /dev/md0 as 'clean'
mdadm: /dev/md0 has been started with 2 drives (out of 3).

Ran mount -a to remount everything from fstab. Took a few seconds but worked. Copy the data off ASAP!
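For reference, before forcing an assembly like this it is worth comparing the event counters recorded in the member superblocks – if they are equal or very close, --force should be safe. A quick check (same device names as above assumed):

# print each device header plus its Events and Array State lines
mdadm --examine /dev/sdb /dev/sdd | grep -E '/dev/|Events|Array State'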


Running svnadmin verify as root – permission issues on Berkeley DB repositories

After a system crash I ran svnadmin verify on some of the relevant repositories to check things were OK – this was done as root. Afterwards normal network access (via Apache) was broken, with

Internal error: Berkeley DB error for filesystem

appearing in the logs. The fix was to chown -R www-data:www-data the broken repositories. It looks like running svnadmin verify as root leaves some files (at least in Berkeley DB repositories – probably the DB region or log files) owned by root.
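A safer pattern for next time (a sketch, assuming the repositories live under /srv/svn and Apache runs as www-data):

# run the check as the same user Apache uses, so no root-owned files are left behind
sudo -u www-data svnadmin verify /srv/svn/myrepo

# if the damage is already done, restore ownership
chown -R www-data:www-data /srv/svn/myrepo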

Lesson learned – check normal SVN access after doing this sort of thing!

Blocking or disabling autofs automounts with the -null map

Suppose you have a Linux network setup with automounter maps that come from the network (via NIS, SSSD, LDAP, etc.) and you want to stop some of them acting on a particular system. In our case we have an automount map that acts on /opt and mounts various software packages from network shares. The problem with this is that you can't then install your own stuff locally in /opt, which is what a lot of Debian/Ubuntu packages expect to be able to do.

It turns out there is an option in the automounter for this sort of situation. There is a built-in map called -null that blocks any further automounts on a particular mountpoint. In our case we want to block auto.opt, so we add a line to auto.master (somewhere before the +auto.master line at the bottom):

/opt  -null

Then restart the autofs service (unmounting anything that was mounted on /opt first), or reboot the system. You should find that you can put stuff in the local /opt.

To check the map is blocked you can also run

automount --dumpmaps

(also handy for checking what is actually meant to be mapped where).

Another way of doing this, which leaves the system auto.master untouched, is to create a file /etc/auto.master.d/opt.autofs (the first part of the name can be anything you want) with the same contents, e.g.

/opt  -null

Note that using this mechanism normally requires two files – one in /etc/auto.master.d/ and the map file it refers to. In this case -null is a built-in map, so the single file is enough.
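A minimal sketch of the drop-in approach (assuming systemd manages autofs):

# mask the network auto.opt map with the built-in -null map
echo '/opt  -null' > /etc/auto.master.d/opt.autofs

# pick up the change and check what is now mapped on /opt
systemctl restart autofs
automount --dumpmaps | grep -A2 /opt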

Unfortunately this option is not well documented.

There are also other built-in maps, e.g. -passwd, -hosts, -fedfs. Of these only the -hosts map is documented in the auto.master(5) man page.

-null is confirmed to work in CentOS 7, CentOS 8, Ubuntu 20.04, Debian 10.

TrueNAS and Windows clients – NTLMv2 issues

Situation – TrueNAS (or FreeNAS, or another Samba server) serving an SMB share with NTLMv1 authentication disabled. A standalone Windows 10 system can connect to it, but a domain-joined Windows 10 system constantly claims the password is wrong.

The culprit here was an old group policy setting in the domain:

Network Security: LAN Manager authentication level

(found in Computer Configuration→Windows Settings→Security Settings→Local Policies→Security Options)

This was set to Send LM & NTLM - use NTLMv2 session security if negotiated, for backwards-compatibility reasons with Windows 2000 boxes and the like. This corresponds to setting the registry value LmCompatibilityLevel to 1 under HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Lsa.

Unfortunately the name of this setting is a bit misleading. According to this article:

Security Watch: The Most Misunderstood Windows Security Setting of All Time

at this level the client negotiates NTLMv2 session security if possible, but it does not actually send NTLMv2 requests or responses.

Thus trying to connect to a TrueNAS SMB share fails unless NTLMv1 authentication is explicitly enabled (in the SMB service settings).

Ideally the group policy should be removed and the normal setting (NTLMv2 only) restored. Alternatively, NTLMv1 can be enabled on the SMB service if it isn't going to be a permanent setup.
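Where the group policy can't be removed straight away, the effective level on a client can be inspected and set by hand from an elevated prompt (a sketch – note that group policy will re-apply its own value at the next refresh):

reg query "HKLM\SYSTEM\CurrentControlSet\Control\Lsa" /v LmCompatibilityLevel

rem level 3 = send NTLMv2 responses only
reg add "HKLM\SYSTEM\CurrentControlSet\Control\Lsa" /v LmCompatibilityLevel /t REG_DWORD /d 3 /f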

OpenProject Apache reverse proxy with https secure connection

These are some notes on setting up OpenProject on a backend server (let’s call it backsrv.example.com), and accessing it via a front-end system (frontsrv.example.com). Normally we’d do the SSL termination at the reverse proxy, and there is some documentation on this. In this case I wanted to do things properly, and protect the login credentials all the way. This means using an https connection between the reverse proxy and the back end server.

Firstly, the reverse proxy has to trust the SSL certificate that the back end uses. There are several ways to go about this. I chose to set up a local certificate authority using the easy-rsa scripts (using another small virtual machine set up only for this purpose). For one connection this is probably overkill, but for multiple backends in the future it will make the administration a lot easier.

  • Set up CA
    • Debian 10, install easy-rsa package, do required setup.
  • Copy CA root certificate to frontsrv
    • For Debian systems, copy to /usr/local/share/ca-certificates/ and run update-ca-certificates
  • Create CSR on backsrv, copy it to the CA, sign it, and copy the resulting certificate back to backsrv (see the easy-rsa sketch after this list). Put the cert and key in sensible places (/etc/ssl/private/ and /etc/ssl/local-certs/). Make sure permissions are correct.
  • Configure Apache on backsrv and check cert works (for OpenProject edit /etc/openproject/installer.dat to put in the correct certificate paths and run openproject configure to update the config).
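The CA and signing steps, roughly (a sketch using the easy-rsa 3 commands – names and paths are illustrative):

# on the CA machine (make-cadir is the Debian helper from the easy-rsa package)
make-cadir ~/ca && cd ~/ca
./easyrsa init-pki
./easyrsa build-ca                # produces pki/ca.crt - copy this to frontsrv

# on backsrv: create a key and CSR
./easyrsa init-pki
./easyrsa gen-req backsrv nopass  # produces pki/reqs/backsrv.req and pki/private/backsrv.key

# back on the CA: import and sign, then copy pki/issued/backsrv.crt to backsrv
./easyrsa import-req /path/to/backsrv.req backsrv
./easyrsa sign-req server backsrv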

Set up Apache to do proxy stuff on frontsrv. Here’s the beginning fragment of default-ssl.conf that should work:

<IfModule mod_ssl.c>
        <VirtualHost _default_:443>
                ServerAdmin webmaster@localhost

                DocumentRoot /var/www/html

                # Rewrite https to http in WebDAV Destination headers before other header processing
                RequestHeader edit Destination ^https http early

                # Needed to proxy to an https back end
                SSLProxyEngine on
                # The back-end cert carries backsrv's name, not frontsrv's (see below)
                SSLProxyCheckPeerName off

                # To openproject server on backserv
                ProxyPass /openproject https://backsrv.example.com/openproject
                ProxyPassReverse /openproject https://backsrv.example.com/openproject
                <Location /openproject>
                        ProxyPreserveHost On
                        Require all granted
                </Location>

You also need to go to the OpenProject web interface admin area, go to System Settings – General and change the Host name to the reverse proxy name, and set the protocol to https. It will complain if there's a hostname mismatch (case-sensitive, even!). You may also want to go to Email→Email notifications and change the Emission email address to be consistent.

Don't forget – SSLProxyEngine on is needed!

For OpenProject the subdirectory locations on the front and back ends do need to match.

The ProxyPreserveHost On is required per the OpenProject documentation. Unfortunately, that means it tries to match the name frontsrv.example.com to the back end cert, and the SSL handshake fails. This is the reason for the SSLProxyCheckPeerName off directive – it disables checking the certificate CN or Subject Alternative Names.

Apparently the SSLProxyCheckPeerName off can go in a <Proxy>...</Proxy> matching block with Apache 2.4.30 or newer, which would be nice. As it is this will turn it off for the whole vhost, which is a small lessening of security.
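With a new enough Apache that would look something like this (untested here – a sketch):

# Apache 2.4.30 or newer: scope the relaxed peer-name check to just this back end
<Proxy "https://backsrv.example.com/*">
        SSLProxyCheckPeerName off
</Proxy>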

I suppose in principle we could create the certificate for the back end with the name of the front end, or add it to the SANs. I haven’t tried this and it seems like it could be a recipe for confusion and subtle bugs.

Secure disk wipe with Windows format command

Starting with Windows 8, Microsoft snuck a refinement into the format command: it is now possible to get it to do multi-pass random-number disk wipes. From the help (Win 10 20H2):

 /P:count  Zero every sector on the volume. After that, the volume
           will be overwritten "count" times using a different
           random number each time. If "count" is zero, no additional
           overwrites are made after zeroing every sector. This switch
           is ignored when /Q is specified.

So to do a single-pass random wipe:

  • Repartition the disk with one partition (if desired) and give it a drive letter (let's say F: for this example). It's probably a good idea to remove any OEM, EFI, or recovery partitions at this point – a quick way to do this is the clean command in diskpart.
  • Run format F: /P:1
  • If you feel like it, finish up with a clean command in diskpart.

This should do a pass with all zeros, and then a random-number pass.
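Put together, the whole sequence looks something like this (a sketch – disk 2 here is an assumption, double-check with list disk before cleaning anything):

diskpart
  list disk
  select disk 2
  clean
  create partition primary
  assign letter=F
  exit
format F: /P:1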

Note this isn’t a full ‘write random data to every block in the drive’ erase, but should still be secure enough for most purposes.