Posted: 2021-06-04 10:14:27 by Alasdair Keyes
I needed to create a Debian Buster LXC container on my laptop, and when running the lxc-create command below I received this error:
# lxc-create -t debian -n testcontainer -- -r buster
debootstrap is /usr/sbin/debootstrap
Checking cache download in /var/cache/lxc/debian/rootfs-buster-amd64 ...
gpg: key 7638D0442B90D010: 4 signatures not checked due to missing keys
gpg: key 7638D0442B90D010: "Debian Archive Automatic Signing Key (8/jessie) <firstname.lastname@example.org>" not changed
gpg: Total number processed: 1
gpg:              unchanged: 1
Downloading debian minimal ...
I: Retrieving InRelease
I: Checking Release signature
E: Release signed by unknown key (key id DCC9EFBF77E11517)
   The specified keyring /var/cache/lxc/debian/archive-key.gpg may be incorrect or out of date.
   You can find the latest Debian release key at https://ftp-master.debian.org/keys.html
Failed to download the rootfs, aborting.
Failed to download 'debian base'
failed to install debian
lxc-create: testcontainer: lxccontainer.c: create_run_template: 1626 Failed to create container from template
lxc-create: testcontainer: tools/lxc_create.c: main: 319 Failed to create container testcontainer
This is telling me that the key used to sign the Debian release is unknown to LXC. It also shows that LXC is using the file /var/cache/lxc/debian/archive-key.gpg as its GPG keyring.
We can check the keys in that keyring with the following command. To break it down, this runs the regular gpg utility, but the --no-default-keyring and --keyring arguments tell gpg to operate only on the keyring file that LXC uses.
# gpg --no-default-keyring --keyring /var/cache/lxc/debian/archive-key.gpg --list-key
/var/cache/lxc/debian/archive-key.gpg
-------------------------------------
pub   rsa4096 2014-11-21 [SC] [expires: 2022-11-19]
      126C0D24BD8A2942CC7DF8AC7638D0442B90D010
uid           [ unknown] Debian Archive Automatic Signing Key (8/jessie) <email@example.com>
This shows it only has the key for Debian 8 (Jessie)...
Before importing the newer key, we need to check that the key listed in the error is a valid Debian key; otherwise we could be opening ourselves up to downloading malicious files.
Visiting https://ftp-master.debian.org/keys.html shows that the GPG key with ID DCC9EFBF77E11517 listed in the error is the valid Debian 10 Buster release key.
Now that we're satisfied that nothing shady is going on, we can import the key to the keyring.
Download the key from the Debian site...
# wget "https://ftp-master.debian.org/keys/release-10.asc"
...
2021-06-04 10:51:53 (35.6 MB/s) - ‘release-10.asc’ saved [1200/1200]
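Before importing it, it's worth double-checking that the fingerprint of the downloaded key matches the one published on the Debian keys page. Assuming a reasonably recent gpg, you can list the keys in the file without importing anything:

# gpg --show-keys release-10.asc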
Then import into the keyring...
# gpg --no-default-keyring --keyring /var/cache/lxc/debian/archive-key.gpg --import release-10.asc
gpg: key DCC9EFBF77E11517: public key "Debian Stable Release Key (10/buster) <firstname.lastname@example.org>" imported
gpg: Total number processed: 1
gpg:               imported: 1
Running the --list-key command from before now shows the new key in the LXC keyring:
# gpg --no-default-keyring --keyring /var/cache/lxc/debian/archive-key.gpg --list-key
/var/cache/lxc/debian/archive-key.gpg
-------------------------------------
pub   rsa4096 2014-11-21 [SC] [expires: 2022-11-19]
      126C0D24BD8A2942CC7DF8AC7638D0442B90D010
uid           [ unknown] Debian Archive Automatic Signing Key (8/jessie) <email@example.com>
pub   rsa4096 2019-02-05 [SC] [expires: 2027-02-03]
      6D33866EDD8FFA41C0143AEDDCC9EFBF77E11517
uid           [ unknown] Debian Stable Release Key (10/buster) <firstname.lastname@example.org>
We can now run the create container command...
# lxc-create -t debian -n akeyescouk -- -r buster
debootstrap is /usr/sbin/debootstrap
Checking cache download in /var/cache/lxc/debian/rootfs-buster-amd64 ...
gpg: key 7638D0442B90D010: 4 signatures not checked due to missing keys
gpg: key 7638D0442B90D010: "Debian Archive Automatic Signing Key (8/jessie) <email@example.com>" not changed
gpg: Total number processed: 1
gpg:              unchanged: 1
Downloading debian minimal ...
I: Retrieving InRelease
I: Checking Release signature
I: Valid Release signature (key id 6D33866EDD8FFA41C0143AEDDCC9EFBF77E11517)
I: Retrieving Packages
I: Validating Packages
...
Posted: 2020-11-24 13:25:46 by Alasdair Keyes
PHPUnit 9.x coverage reporting
I started a new Laravel project today using the latest Laravel 8.x release. After installation I go through and update a few things, such as adding in laravel-debugbar, and also set up PHPUnit code coverage reports that I can hook into GitLab's code coverage reporting tools.
After making the changes to my
phpunit.xml file I was greeted with the following error
PHPUnit 9.4.3 by Sebastian Bergmann and contributors.

Warning - The configuration file did not pass validation!
The following problems have been detected:

Line 29:
- Element 'log': This element is not expected.

Test results may not be as expected.

..                                                                  2 / 2 (100%)

Time: 00:00.386, Memory: 30.00 MB

OK (2 tests, 2 assertions)
Line 29 is part of the
<logging> block I added in for coverage reporting.
<phpunit ....>
    <logging>
        <log type="coverage-text" target="php://stdout" showUncoveredFiles="true"/>
        <log type="coverage-html" target="build/logs/html/" showUncoveredFiles="true"/>
    </logging>
</phpunit>
After reading through the documentation for PHPUnit 9 (which is what is pulled in with Composer for Laravel 8), the coverage configuration has moved: the <log> elements under <logging> are gone, and coverage reports are now defined in a <report> block under the <coverage> tag, with a changed syntax.
<phpunit ....>
    <coverage processUncoveredFiles="true">
        ...
        <report>
            <text outputFile="php://stdout" showUncoveredFiles="true"/>
            <html outputDirectory="build/logs/html/"/>
        </report>
    </coverage>
</phpunit>
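With the text report going to stdout, GitLab can scrape the overall percentage straight from the job output. This is only a sketch (the job name and script are assumptions; the regex just needs to match the "Lines: ... %" summary line PHPUnit prints), but the relevant part of a .gitlab-ci.yml looks something like:

phpunit:
  script:
    - ./vendor/bin/phpunit
  coverage: '/^\s*Lines:\s*\d+.\d+\%/'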
I'm probably not going to be the only one caught out by this, so I thought it warranted a post.
Posted: 2020-07-25 12:19:25 by Alasdair Keyes
I've recently started working on a new project using NGINX, PHP 7.4, Redis, PostgreSQL and Laravel 7.
As it's a new project I thought I would Dockerise it from the start. After configuring my docker-compose.yml file I built the environment and installed Laravel 7 with the Laravel Debugbar (https://packagist.org/packages/barryvdh/laravel-debugbar).
I noticed that the bootstrapping of the basic Laravel app was taking over 100ms. I cached the configuration with php artisan config:cache and it barely made any difference.
This didn't seem right to me, but I had a number of variables and I was unsure of where to start looking... or whether it was a problem at all.
Thankfully the first part of my investigation found me a solution. I created a phpinfo() page on an existing Laravel 5 setup and on the Docker container. It turns out that opcache isn't enabled on the Docker image by default.
Adding the following RUN statement to my Dockerfile sorted the issue.
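The exact statement depends on your base image; assuming one of the official PHP images (php:7.4-fpm or similar), enabling opcache is along these lines:

# Dockerfile - build and enable the bundled opcache extension
RUN docker-php-ext-install opcache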
After restarting the container, the app bootstraps in 25-30ms. I'm unsure why opcache isn't enabled by default; I can't think of any problems it would cause, and I would imagine that in over 99% of situations users would want it on... no one wants slow PHP.
Posted: 2020-03-24 23:35:20 by Alasdair Keyes
I happened to chance across a post on LinkedIn from an old colleague of mine who has started a new podcast about freelance development. If you've got some time and you're either a contractor or looking to contract in the future, it's worth a listen.
Posted: 2020-01-19 10:17:03 by Alasdair Keyes
NOTE: Running End of Life software is risky; don't do it unless you accept the risks.
So Windows 7 is now EOL for all but the few customers who are paying through the nose for long term support. I run Linux on most of my machines, but I still do have a solitary Windows machine for Steam and a few other Windows only apps that won't run on WINE.
Unfortunately, I dislike Windows 8 and 10. There are a number of reasons, but on a purely practical level I find the interface horrendous, unintuitive and difficult to use. I would like to continue running Windows 7 for as long as I can. I will have to accept the increased security risk of running an OS with no further security updates, but thankfully my use of Windows is very limited and doesn't involve browsing/email or other common attack vectors for viruses and trojans. With a good AV installed too, this should reduce the risk to acceptable levels.
With the EOL status, the Windows Update service for Windows 7 will no doubt end in time. This means that although my current machine is up-to-date, if I need to re-install due to hardware failure, I may not have access to all the updates.
With this in mind, I found the WSUSOffline tool http://www.wsusoffline.net/, which allows you to download all updates for a specific Windows/Office version and store them offline. The main use-case appears to be for sys-admins with network access restrictions to download and install updates on air-gapped machines; however, in this instance it looks well suited to archiving. There are other options open to me, such as installing and maintaining a Windows WSUS server, but that is a lot of extra work.
If you wish to get your own backups of the updates, these are the steps I took.
It took about 30 minutes to download all the updates. Once it's done, it copies a folder structure with all the updates into your shared drive, which you can then back up from your host to wherever you want. The folder also includes the executable to kick off the updates on another machine.
Running the archived updates on another machine is not a run-and-forget process; Windows updates require reboots, which means you will have to click a few buttons now and again, but that is no different from the official update process.
It looks like the WSUSOffline tool works by distributing a list of updates to use. As Windows 7 only went EOL in January, I would imagine that I will have to wait for the next WSUSOffline update to get the last few Windows Updates archived but it looks like I should be able to continue using Windows 7 for some time yet, even if I have to rebuild.
Posted: 2019-07-30 11:15:33 by Alasdair Keyes
For about 10 years I've used a wiki to document everything that I learn and need to keep track of. This contains everything from walkthroughs of installing/configuring software, to lists of interview questions to ask potential hires.
When I first started working in hosting, I began collecting text files with information given to me by other colleagues. Over time this got unwieldy, so I created a MediaWiki wiki https://www.mediawiki.org/wiki/MediaWiki. I mainly picked it because it was the wiki I was already using at my workplace and it has a familiar interface, being the software that Wikipedia uses.
Over time I've kept Mediawiki updated but gradually I've had more and more problems with updates breaking and needing fixing so I started looking around for other wiki tools.
I eventually found Dokuwiki https://www.dokuwiki.org/. It's more lightweight and simple but seems to be up to the tasks that I need it for. It uses flat files as a back-end so I don't need to backup both files and a database and after importing all my data it's only 1/4 of the size on disk.
$ du -hs public_html.mediawiki/
203M    public_html.mediawiki/
$ du -hs public_html.dokuwiki
48M     public_html.dokuwiki
I did have to install the tag and pagelist Dokuwiki plugins to allow me to use tags, which are the Dokuwiki version of Mediawiki's categories.
It would be nice to have been able to copy my articles directly across to the new wiki, but the Mediawiki syntax (https://www.mediawiki.org/wiki/Help:Formatting) and Dokuwiki syntax (https://www.dokuwiki.org/wiki:syntax) are different. The key differences were:
- <pre></pre> tags in Mediawiki that needed to be converted to <code></code> tags in Dokuwiki
- other inline markup (___ and the like) that had to be converted
I knocked up a quick Perl script to connect to the Mediawiki DB and parse the articles into a format suitable for Dokuwiki. This was mostly done with regex replace statements to insert spaces and change tags etc.
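The script itself isn't reproduced here, but to give a flavour of the substitutions involved, a couple of the simpler ones can even be done with sed over an exported page (the filenames are purely illustrative; the real script read the wikitext straight from the MediaWiki database):

$ sed -E -e "s|<pre>|<code>|g" \
         -e "s|</pre>|</code>|g" \
         -e "s|'''([^']+)'''|**\1**|g" \
         -e "s|''([^']+)''|//\1//|g" \
    mediawiki-page.txt > dokuwiki-page.txt

That converts <pre> blocks to <code> blocks and MediaWiki's '''bold''' and ''italic'' markup into DokuWiki's **bold** and //italic//.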
While I was at it, I took this time to delete or update any old articles. So now I have a new wiki with refreshed info and am very pleased with Dokuwiki.
Posted: 2019-07-17 08:47:31 by Alasdair Keyes
I upgraded my home server to Debian 10 (Buster) this week. It's running on quite an old HP Proliant Microserver so I bought a new SSD to use for the OS partitions to give it a little extra life. As such, it was a fresh install rather than an in-place upgrade.
As you would imagine 10 is much the same as 9 in most respects. But there were a couple of points of note...
The Buster Puppet install was using version 5.5.10, whereas my Puppet Master (on Debian Stretch) was using 4.8.2. When connecting to the master, the new install would error with:
Warning: SSL_connect returned=1 errno=0 state=error: dh key too small
The answer to this was found at another chap's blog https://blog.steve.fi/upgraded_my_first_host_to_buster.html and is to do with system-wide SSL settings, although I fixed it slightly differently.
In /etc/ssl/openssl.cnf I updated the line

CipherString = DEFAULT@SECLEVEL=2

to

CipherString = DEFAULT@SECLEVEL=1
It turns out this is a non-standard, custom security setting made by Debian https://wiki.debian.org/ContinuousIntegration/TriagingTips/openssl-1.1.1
It doesn't appear that you can define a custom set of Diffie Hellman params for a Puppet Master as you can for other software like NGINX and Apache. As soon as I have my Puppet Master on the later version I'll be changing this setting back, assuming it doesn't interfere with anything else.
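If you want to confirm the master really is offering a small ephemeral DH key before loosening a system-wide default, you can inspect the handshake yourself. A rough check (puppetmaster.example.com is a placeholder for your own master; 8140 is the default Puppet port) — from the Buster box the handshake will simply fail, while from a Stretch box you should see the key size in the "Server Temp Key" line:

$ openssl s_client -connect puppetmaster.example.com:8140 < /dev/null 2>/dev/null | grep -i 'Server Temp Key'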
The check_disk_io Nagios plugin was failing
It turns out the output of the iostat command had changed slightly and required a tweak to continue working. Commit https://gitlab.com/alasdairkeyes/nagios-plugin-check_disk_io/commit/0708ba7b9cb0017f6f36554d54ee3e37a9b58d63
The debsecan package is enabled by default
I wasn't aware this package existed until it started emailing me with all the system vulnerabilities. I can see a use for it, but as my systems are updated regularly, it's now purged by Puppet.
SMBus PIIX4 adapter device
The sensors utility used by the check_sensors Nagios plugin was erroring that I had a critical alarm.
It turns out that there is no max/critical temperature information for the thermometer on this device, so the reported temperature is always higher than the threshold of 0°C.
# sensors
...
jc42-i2c-0-18
Adapter: SMBus PIIX4 adapter port 0 at 0b00
temp1:        +31.0°C  (low  =  +0.0°C)  ALARM (HIGH, CRIT)
                       (high =  +0.0°C, hyst =  +0.0°C)
                       (crit =  +0.0°C, hyst =  +0.0°C)
...
As I have other temperature sensors available I disabled this one by creating the following file
chip "jc42-i2c-0-18"
bus "i2c-0" "SMBus PIIX4 adapter port 0 at 0b00"
ignore temp1
Other than that it was all pretty seamless.
Posted: 2019-07-09 12:45:23 by Alasdair Keyes
I started up the Tor Browser yesterday and noticed that it didn't start in its usual time frame; 10 minutes later the browser had still not opened.
Checking top, I saw that a GPG process was using 100% CPU.
  PID USER      PR  NI    VIRT    RES    SHR S  %CPU %MEM     TIME+ COMMAND
19330 username  20   0   78364  47020   4448 R  99.7  0.6   0:16.43 gpg
 3145 username  20   0 3458164 139712  63512 R  12.6  1.7  18:43.51 cinnamon
I'd read recently about an attack on GPG where keys were being poisoned with a large number of signatures to exploit a GPG bug and corrupt GPG installs https://threatpost.com/pgp-ecosystem-targeted-in-poisoning-attacks/146240/, and I wondered if this is what was occurring.
I checked what the GPG process was running.
$ ps aux | grep 19330
username 19330 64.6  0.6  82192 50980 ?  RL  10:51   0:31 /usr/bin/gpg --status-fd 2 --homedir /home/username/.local/share/torbrowser/gnupg_homedir --keyserver hkps://hkps.pool.sks-keyservers.net --keyserver-options ca-cert-file /usr/share/torbrowser-launcher/sks-keyservers.netCA.pem include-revoked no-honor-keyserver-url no-honor-pka-record --refresh-keys
It seemed to be running
--refresh-keys which requests updates to keys from the key servers. I ran the following to see what keys were being refreshed.
$ /usr/bin/gpg --homedir /home/username/.local/share/torbrowser/gnupg_homedir --list-keys
/home/username/.local/share/torbrowser/gnupg_homedir/pubring.kbx
----------------------------------------------------------------
pub   rsa4096 2014-12-15 [C] [expires: 2020-08-24]
      EF6E286DDA85EA2A4BA7DE684E2C6E8793298290
uid           [ unknown] Tor Browser Developers (signing key) <firstname.lastname@example.org>
sub   rsa4096 2018-05-26 [S] [expires: 2020-09-12]
I checked the key servers' entry for the key EF6E286DDA85EA2A4BA7DE684E2C6E8793298290 at http://pgp.mit.edu/pks/lookup?op=vindex&search=0x4E2C6E8793298290 and saw the key had received a large number of signatures on 2019-06-30; it does indeed look like it has been poisoned with excessive signatures.
I downloaded the latest Tor Browser for Linux directly from https://www.torproject.org/ and didn't receive this issue during startup which is good news.
However, my Tor install is through the torbrowser-launcher package provided by the Linux Mint repos (originally provided by Ubuntu).
torbrowser-launcher doesn't contain the Tor Browser itself (as the name suggests, it's just a launcher); it is a Python environment that downloads the latest Tor Browser directly from the Tor Project. To do this, it uses the Tor Project's public GPG key to verify that the downloaded files are legitimate, and during this process it does a refresh from the key servers and hits the poisoning issue.
It seems that if you are affected by this, you're best off downloading Tor directly from the Tor Project itself. Unfortunately, verifying that the file you download from the website is legitimate also requires gpg, but you can at least check that the key that created the signature is the correct one...
$ gpg --verify tor-browser-linux64-8.5.3_en-US.tar.xz.asc Downloads/tor-browser-linux64-8.5.3_en-US.tar.xz
gpg: Signature made Fri 21 Jun 2019 02:30:51 PM CEST
gpg:                using RSA key EB774491D9FF06E2
gpg: Can't check signature: No public key
The key ID EB774491D9FF06E2 matches the key listed at https://2019.www.torproject.org/docs/verifying-signatures.html.en and is a subkey of the Tor Project signing key, but without the key in your keyring, this check isn't as secure as it should be.
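To do the check properly you first need the Tor Browser Developers signing key in your keyring. One option that avoids the (currently poisoned) SKS keyservers entirely is a Web Key Directory lookup; this assumes your gpg is new enough to support WKD and that the Tor Project publishes the key that way, so treat it as a sketch rather than gospel:

$ gpg --auto-key-locate nodefault,wkd --locate-keys torbrowser@torproject.org
$ gpg --verify tor-browser-linux64-8.5.3_en-US.tar.xz.asc Downloads/tor-browser-linux64-8.5.3_en-US.tar.xz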
Posted: 2019-07-09 09:06:41 by Alasdair Keyes
I've been interested in running ZFS for a while but have always held off making the leap due to worries about features and stability. ZFS was originally developed for Solaris and has been ported over to Linux by the ZFS on Linux (ZoL) project https://zfsonlinux.org/.
Recently ZoL 0.8 was released with native encryption which is really a must. Unfortunately the latest Debian release 'Buster' only has 0.7.12 so the native encryption feature isn't available.
I've been experimenting with a Virtualbox VM to develop and test a suitable setup that I would be happy with on my production hardware.
My existing production setup runs Debian Stretch using Linux software Raid with LUKS Encryption on top and running ext4 as a filesystem.
For this test setup I'm using Virtualbox with 4x 2GB disks for ZFS in a striped/mirrored configuration; it's essentially ZFS's version of RAID 10. For a configuration like this you should ensure you have at least 2GB RAM; I did try with 1GB, however the LUKS encrypted devices were failing to start at boot with out-of-memory errors. Debian 'Buster' is the OS.
The disk setup is:
/dev/sdb - ZFS disk 1
/dev/sdc - ZFS disk 2
/dev/sdd - ZFS disk 3
/dev/sde - ZFS disk 4
apt update && apt upgrade -y
Add contrib to the Debian apt repo list in your apt sources file:

deb http://deb.debian.org/debian buster main contrib
deb-src http://deb.debian.org/debian buster main contrib
apt update && apt install dpkg-dev linux-headers-amd64 cryptsetup -y
This can take some time, make a cup of tea.
apt install zfs-dkms zfsutils-linux -y
cryptsetup -y luksFormat /dev/sdb
cryptsetup -y luksFormat /dev/sdc
cryptsetup -y luksFormat /dev/sdd
cryptsetup -y luksFormat /dev/sde
Get the UUID for each LUKS device
# ls -l /dev/disk/by-uuid/
total 0
lrwxrwxrwx 1 root root 10 Jul  9 08:55 0af47096-987d-41b5-b5a7-98827850f46d -> ../../sda1
lrwxrwxrwx 1 root root  9 Jul  9 08:55 5888dfc8-4df0-410e-8aec-992aad7abd97 -> ../../sdc
lrwxrwxrwx 1 root root  9 Jul  9 08:55 abd4a557-de16-4ecd-ab73-e4d41293dcf4 -> ../../sde
lrwxrwxrwx 1 root root  9 Jul  9 08:55 e2f1931b-2413-4181-9500-baad1a74c12d -> ../../sdd
lrwxrwxrwx 1 root root  9 Jul  9 08:55 edc129d6-dc90-4338-bc2e-9476843ff41f -> ../../sdb
lrwxrwxrwx 1 root root 10 Jul  9 08:55 fc1b09a1-41e2-4503-8c4f-d2e532dea5aa -> ../../sda5
Update the /etc/crypttab file with your disk configuration. It should look similar to this; the target name can be any unique name that you want.
# <target name>  <source device>                             <key file>  <options>
sdb_crypt        UUID=edc129d6-dc90-4338-bc2e-9476843ff41f   none        luks
sdc_crypt        UUID=5888dfc8-4df0-410e-8aec-992aad7abd97   none        luks
sdd_crypt        UUID=e2f1931b-2413-4181-9500-baad1a74c12d   none        luks
sde_crypt        UUID=abd4a557-de16-4ecd-ab73-e4d41293dcf4   none        luks
As you can see, each UUID from /dev/disk/by-uuid is mapped against a unique name for device mapper.
Now reboot. This isn't required; however, it's good to ensure that your LUKS setup is correct before proceeding. You will be asked for your LUKS passwords on boot. Once you log back in again, you should be able to run ls and see that the LUKS devices are initialized correctly:
$ ls -l /dev/mapper/
total 0
crw------- 1 root root 10, 236 Jul  9 08:55 control
lrwxrwxrwx 1 root root       7 Jul  9 08:55 sdb_crypt -> ../dm-0
lrwxrwxrwx 1 root root       7 Jul  9 08:55 sdc_crypt -> ../dm-1
lrwxrwxrwx 1 root root       7 Jul  9 08:56 sdd_crypt -> ../dm-3
lrwxrwxrwx 1 root root       7 Jul  9 08:55 sde_crypt -> ../dm-2
You will sometimes get a warning that the zfs kernel module isn't loaded; just follow the instructions and run...
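The command the warning asks for is just loading the module, i.e. something like:

# modprobe zfs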
This will only need to be run once; once a pool is configured, the module will be loaded automatically.
# zpool create pool01 mirror /dev/mapper/sdb_crypt /dev/mapper/sdc_crypt mirror /dev/mapper/sdd_crypt /dev/mapper/sde_crypt
Check the setup
# zpool status
  pool: pool01
 state: ONLINE
  scan: none requested
config:

        NAME           STATE     READ WRITE CKSUM
        pool01         ONLINE       0     0     0
          mirror-0     ONLINE       0     0     0
            sdb_crypt  ONLINE       0     0     0
            sdc_crypt  ONLINE       0     0     0
          mirror-1     ONLINE       0     0     0
            sdd_crypt  ONLINE       0     0     0
            sde_crypt  ONLINE       0     0     0

errors: No known data errors
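The pool's root dataset should also have been mounted straight away (at /pool01 by default, since no mountpoint was specified), which you can confirm with:

# zfs list
# df -h /pool01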
Rebooting again will ensure that everything is configured and the LUKS devices are brought up before ZFS mounts the pool; otherwise you will end up with ZFS errors and the pool won't load.
Run zpool status again and you should see the same output as above. If the LUKS devices fail to initialize and none of the devices are available, you will see an error about no pool available.
If only some of the LUKS devices fail to initialize you will see the state being something other than
ONLINE and you can check
/var/log/kern.log for information as to why.
Posted: 2019-07-02 09:15:22 by Alasdair Keyes
I recently registered a new .uk domain and set up a basic website on behalf of a client. I checked the logs to see how long it took for it to be accessed without me having to advertise its presence. The timeline is:
X.X.X.X - - [28/Jun/2019:17:09:47 +0100] "GET / HTTP/1.1" 301 178 "-" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10.12; rv:52.0) Gecko/20100101 Firefox/33.0"
The site uses a name-based virtualhost, so the visitor had to specifically request the domain rather than just hitting port 80/443 on the server IP. Within ~6 hours of registration, the domain was already being scanned. What's of further interest is that at 17:09:47, within the same second, two separate IPs both hit the index page for the first time, indicating it was likely a bot doing a coordinated scan of new sites.
As far as I know the domain hadn't been registered for a while (if ever), and as .uk domains don't publish a list of new registrations, the most likely way for bots/people to be aware of the new website was from the HTTPS Certificate Transparency logs. If you're unaware, every new secure certificate that's issued is published to a public log; these can be searched via a number of sites such as https://crt.sh/ (and you can see all certificates issued for akeyes.co.uk here https://crt.sh/?q=akeyes.co.uk).
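You can query these logs yourself from the command line; crt.sh has a JSON output mode, so something along these lines (jq is optional, just used here to pull out the certificate validity start dates) lists what has been logged for a domain:

$ curl -s 'https://crt.sh/?q=akeyes.co.uk&output=json' | jq '.[].not_before'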
The take-away from this is that you should be aware that nothing goes unnoticed on the web anymore, if you're setting up a new website, ensure that it is secure from the get-go. Make sure passwords are changed from defaults and are secure and ensure software is up-to-date as bots will be looking to exploit it, this is especially important for popular CMS apps like Wordpress.
As an aside, I found it interesting that Bing had crawled the domain within 24 hours, yet Google has still to visit.