Suppose you have a linux network setup with automounter maps that come from the network (via
LDAP etc.) and you want to block some of them acting on a particular system. In our case we have an automount map that acts on
/opt and mounts various software packages from network shares. The problem with this is that you can’t then install your own stuff locally to
/opt, which is what a lot of Debian/Ubuntu packages expect to be able to do.
It turns out there is an option in the automounter for this sort of situation. There is a built-in map called
-null that blocks any further automounts to a particular mountpoint. In our case we want to block
auto.opt, so we add a line to
auto.master (somewhere before the bottom +auto.master include line):
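The added line would look like this (a minimal sketch; /opt is the mountpoint we want to block and -null is the built-in map):

```
/opt  -null
```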
Then restart the
autofs service (if stuff was mounted on
/opt then unmount it first), or reboot the system. You should find that you can now put stuff in the local /opt directory.
To check that the map is blocked you can also dump the automounter maps (also handy for checking what is actually meant to be mapped where).
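One way to dump the configured maps (assuming a reasonably recent autofs; check your version's man page for the exact flag) is:

```
automount -m
```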
Another way of doing this that leaves the system
auto.master untouched is to create a file
/etc/auto.master.d/opt.autofs (the first part of the name can be anything you want). Put the same contents in the file, e.g.
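For example, assuming the same /opt mountpoint as above:

```
# /etc/auto.master.d/opt.autofs
/opt  -null
```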
Note that using this mechanism normally requires two files – one in
/etc/auto.master.d/ and a map file that it refers to. In this case
-null is a built-in map, so no separate map file is needed.
Unfortunately this option is not well documented. Places where it is referred to are:
There are also other built-in maps, e.g.
-hosts and -fedfs. Of these only the
-hosts map is documented in the
auto.master(5) man page.
-null is confirmed to work in CentOS 7, CentOS 8, Ubuntu 20.04, Debian 10.
Also see Publishing websites with Jekyll, Apache and SVN
If you send console output via email (like, say, the output of
jekyll build as part of an SVN post-commit hook script) and there are ANSI control characters in the string (e.g. colour codes), this can break things. In this case the
mail command (Debian 9 default
exim) was only sending text up to the first ANSI code, which meant that the
jekyll build error messages (which are yellow and red) were missing.
To fix this, pipe the text through
ansi2txt (it comes with the
colorized-logs package in Debian and Ubuntu). This strips out all ANSI control codes, making the string email-safe.
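As a quick illustration of what the stripping does (this uses a plain sed expression rather than ansi2txt, so it runs anywhere; GNU sed is assumed for the \x1b escape):

```shell
# strip ANSI escape sequences of the form ESC [ ... letter (e.g. colour codes)
printf 'build \033[31mFAILED\033[0m here\n' | sed 's/\x1b\[[0-9;]*[A-Za-z]//g'
```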
(After this I pipe it through
unix2dos to convert to CRLF line endings, as this appears to be the standard for email. On Debian this comes with the dos2unix package.)
The last line in the hook script then becomes
echo "$LOGVAR" | /usr/bin/ansi2txt | /usr/bin/unix2dos | mail -s "$REPOS_BASENAME build $REV" "$BUILD_EMAIL"
Setup – Dell Latitude 7490 running Ubuntu 18.04 Bionic and Dell WD15 USB-C dock.
Problem – system freezes when dock unplugged.
This problem started after updates. The solution found was to revert to the previous kernel (4.15.0-44-generic). Did this by setting GRUB to remember the boot setting – change the defaults in /etc/default/grub and update GRUB. Then hit Esc at the loading screen to get to the GRUB menu and boot the older kernel.
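The GRUB settings that enable this (in /etc/default/grub on a stock GRUB 2 setup; run update-grub afterwards to regenerate the config) look like:

```
GRUB_DEFAULT=saved
GRUB_SAVEDEFAULT=true
```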
After updating anything to use systemd-235 NIS logins either don’t work at all (usually for GUI logins), or take a long time to login (console or ssh, sometimes). The culprit is a line (IPAddressDeny=any) in the systemd-logind.service unit file.
This sandboxes the service and doesn’t allow it to talk to the network. Unfortunately this affects nis lookups done via the glibc NSS API. See the links at https://github.com/systemd/systemd/pull/7343
The quick solution is to turn off the sandboxing, either by commenting out or changing the line in systemd-logind.service, or creating a drop-in snippet that overrides it. This can be done by creating a file
/etc/systemd/system/systemd-logind.service.d/IPAddress_clear.conf with the contents:
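A minimal drop-in that clears the restriction (an empty assignment resets list-type systemd settings):

```ini
[Service]
# clear the IPAddressDeny sandbox so NSS/NIS lookups can reach the network
IPAddressDeny=
```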
The file can be called anything you like (as long as it ends in .conf).
Then reload systemd and restart the service:
systemctl daemon-reload
systemctl restart systemd-logind.service
You can check that the drop-in is being loaded with
systemctl status systemd-logind.service
In the output you should see the drop-in listed, something like:
Loaded: loaded (/lib/systemd/system/systemd-logind.service; static; vendor preset: enabled)
Drop-In: /etc/systemd/system/systemd-logind.service.d
└─IPAddress_clear.conf
The other test is to see if NIS logins work correctly, of course…
The slightly slower solution is to use
nscd to cache the lookup requests; apparently it does so in a way that plays nicely with the sandboxing. The much slower solution is to switch to using
sssd or similar and ditch NIS once and for all…
Note – this may also affect other services that use the same IPAddressDeny sandboxing.
Ubuntu 18.04 has switched to netplan for configuring the network interfaces. Netplan generates configurations for
systemd-networkd and effectively replaces
ifupdown and the /etc/network/interfaces file.
In an install of Ubuntu Desktop, the default netplan configuration comes from
/etc/netplan/01-network-manager-all.yaml which reads:
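The shipped file (contents as on a stock 18.04 Desktop install, reproduced from memory so treat as approximate):

```yaml
# Let NetworkManager manage all devices on this system
network:
  version: 2
  renderer: NetworkManager
```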
This basically hands over all network control to NetworkManager. For a static setup we can change the configuration to:
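A hypothetical static configuration (the interface name, addresses and search domain here are examples – substitute your own):

```yaml
network:
  version: 2
  renderer: networkd
  ethernets:
    enp0s25:
      addresses: [192.168.1.10/24]   # /24 = netmask 255.255.255.0
      gateway4: 192.168.1.1          # example gateway
      nameservers:
        search: [example.org]        # default DNS search domain(s)
        addresses: [192.168.1.1]
```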
enp0s25 is my network interface in this case; the address has a netmask of 255.255.255.0 (a /24), and search gives the default DNS search domain(s). (Note this can be vital for getting automounting to work if your setup just uses machine names and assumes the domain is the same.)
Note that if you have a laptop you could put this in a file called, say,
02_ethernet_interface.yaml, and it should override the first configuration for that interface only. I think. Later configurations override earlier ones.
Run sudo netplan try to test: this applies the configuration and then rolls it back after 120 seconds (by default) unless you press Enter to accept the new settings. Run sudo netplan apply to apply the changes without the trial period.
For a desktop I just deleted the first config file.
In theory you could probably use this to generate a configuration for NetworkManager (note that on a server you need to explicitly configure NetworkManager to bring up the interface on boot).
Problem: You have a SVN server sitting behind a reverse web proxy (e.g. for convenient SSL termination purposes). This works for new files, changes etc. but fails when you try to rename something, make a copy or move. The error is:
Unexpected HTTP status 502 'Bad Gateway'
The reason is explained here, but to summarise:
These operations involve sending an HTTP COPY method. This includes a Destination: field in the header, which is not rewritten by Apache’s ProxyPass directives. Thus the destination field starts with https – not http. This confuses mod_dav on the SVN server.
The solution is to change the header. We can do this on Apache (2.2 or higher) using the headers module. This can be done either on the proxy server or the SVN server. As my SVN server is very old (the main reason why it’s behind a proxy) I’ll do this on the proxy server.
Enable the headers module if required. On Debian:
# a2enmod headers
and restart Apache. Then alter your configuration to include:
RequestHeader edit Destination ^https http early
This probably should go before any ProxyPass directives.
Then your config might look like:
RequestHeader edit Destination ^https http early
ProxyPass /svn http://your.real.svn.server/svn
ProxyPassReverse /svn http://your.real.svn.server/svn
Require all granted
(Note this is for a gateway system where other locations can proxy to other application servers.)
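Assembled into one place, a hypothetical <Location> stanza on the proxy might look like this (when ProxyPass appears inside <Location>, the path argument is dropped):

```
<Location /svn>
    RequestHeader edit Destination ^https http early
    ProxyPass http://your.real.svn.server/svn
    ProxyPassReverse http://your.real.svn.server/svn
    Require all granted
</Location>
```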
See the guide for 16.04, but with the following caveats:
Looks like you still need to add nis explicitly to
rpcbind service issue appears to be fixed.
Note – this fix in principle should work on most systemd distributions.
Problem – trying to get a Debian 9 system to mount an NFS share at boot. This was declared in
/etc/fstab in the normal way, but kept failing on boot. However, once the system was up you could log in and do a
mount -a, which would work fine. Reading around, it looks like a case of the system trying to mount before the network is up (and in this case the network should be reliable, as it’s an internal one between a VM and its host…)
Tried using the
bg option first, which should mount in the background and come up eventually, but still got errors on boot:
192.168.45.1:/export /hostshare nfs bg,rw,soft 0 0
There is another option that in theory should help:
_netdev. I haven’t tried this yet.
What does work is adding an option
x-systemd.automount. This, unsurprisingly, tells systemd to try and mount the share on demand. So changing the line in fstab to read:
192.168.45.1:/export /hostshare nfs rw,soft,x-systemd.automount 0 0
works. Booting the system gives no errors (on the console anyway). The share does not show as mounted until the local mountpoint is accessed, and then it works without complaint.
- The context for this is a VM running under VirtualBox. The VM is Debian 9, the host is Ubuntu 17.10. The VM has one network interface with the default NAT setup to talk to the outside world, and a second interface to talk to a host-only network. This allows you to SSH into the guest from the host, and also allows this NFS setup. You can use the VirtualBox shared folder setup to transfer files, but I figured as both the host and guest were Linux, NFS would be easier (and not require the Guest Extensions to be installed on the guest).
- Debian 9 wouldn’t successfully install on the laptop, but I needed it for an easy install of LALSuite (Debian is a reference system for this, Ubuntu isn’t and has dependency issues). Hence this rather complicated setup. Fortunately LALSuite is entirely command line based…
- Yes, Docker or similar would be more efficient. I’m not so familiar with it and it’s a bit of a pain to get it talking to the host filesystem. I’d argue that running a full VM is slightly more portable, although you’d need to change how the filesharing is set up on a Windows or Mac host.
Note – this only sets up the system to use user and group logons, not automounting home directories. I haven’t figured out how to make this work in Ubuntu 16.
Probably a good idea to set network address statically in
/etc/network/interfaces (NetworkManager should recognise this and then leave it alone)
Probably also a good idea to check that
/etc/hosts has the domain name for the system, i.e.
127.0.1.1 machinename.domain.name machinename
Add the yp server to /etc/yp.conf. Edit
/etc/nsswitch.conf to add nis for passwd, group and shadow. Note that compat should include nis by default.
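The relevant /etc/nsswitch.conf lines would then look something like:

```
passwd:         compat nis
group:          compat nis
shadow:         compat nis
```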
Add a dependency to make the rpcbind service start at boot
systemctl add-wants multi-user.target rpcbind.service
(See this Debian bug report or this Ubuntu one)
Note that this is not a complete fix – it is reported that if the network does not come up fast enough things still break.
For users that need to log on to the system, create home directories
Remember to reboot to check everything is working. If that fails, check whether the bind services are running:
systemctl status rpcbind
systemctl status ypbind
First user in Ubuntu 14.04.3 has membership of groups:
Another user created as administrator from the GUI gets: