The RAID

Posted: 2023-01-08 21:15:00 by Alasdair Keyes



After my failed drive on New Year's Day, I ordered a new disk and rebuilt the array. Thankfully, due to the monthly array checks run by the OS, all the data on the three remaining drives was readable and the array is complete again.
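
For what it's worth, that periodic check can also be kicked off by hand through the kernel's md sync_action interface (assuming the array is md0):

echo check > /sys/block/md0/md/sync_action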

I put the failed disk into another machine and ran the following badblocks command on it.

badblocks -o sdb_badblocks.txt -b 4096 -w -s /dev/sdb

I used the destructive test as the data was not needed now that the array was back to full strength. Incidentally, using a block size of 4096 over the default 1024 seemed to provide about a 2x-3x speed increase.

Even with that, the 2TB disk took just over 33 hours for a full write pass and a confirmation read pass.

At the end of it, the full write pass and the confirmation read pass completed with no errors reported. This is frustrating, as mdadm had clearly detected a read error when it rejected the disk - the error was logged in syslog.

I thought that maybe the bad sectors had been remapped by the drive's firmware during the badblocks test, but checking the SMART stats again I saw that no errors were reported and no reallocation had been logged (ID# 5 below).
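
For reference, the attribute table below was pulled with smartctl, along the lines of:

smartctl -A /dev/sdb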

SMART Attributes Data Structure revision number: 16
Vendor Specific SMART Attributes with Thresholds:
ID# ATTRIBUTE_NAME          FLAG     VALUE WORST THRESH TYPE      UPDATED  WHEN_FAILED RAW_VALUE
  1 Raw_Read_Error_Rate     0x000b   100   100   016    Pre-fail  Always       -       0
  2 Throughput_Performance  0x0005   134   134   054    Pre-fail  Offline      -       103
  3 Spin_Up_Time            0x0007   168   168   024    Pre-fail  Always       -       342 (Average 311)
  4 Start_Stop_Count        0x0012   100   100   000    Old_age   Always       -       75
  5 Reallocated_Sector_Ct   0x0033   100   100   005    Pre-fail  Always       -       0
  7 Seek_Error_Rate         0x000b   100   100   067    Pre-fail  Always       -       0
  8 Seek_Time_Performance   0x0005   146   146   020    Pre-fail  Offline      -       29
  9 Power_On_Hours          0x0012   086   086   000    Old_age   Always       -       99078
 10 Spin_Retry_Count        0x0013   100   100   060    Pre-fail  Always       -       0
 12 Power_Cycle_Count       0x0032   100   100   000    Old_age   Always       -       75
192 Power-Off_Retract_Count 0x0032   100   100   000    Old_age   Always       -       989
193 Load_Cycle_Count        0x0012   100   100   000    Old_age   Always       -       989
194 Temperature_Celsius     0x0002   200   200   000    Old_age   Always       -       30 (Min/Max 16/44)
196 Reallocated_Event_Count 0x0032   100   100   000    Old_age   Always       -       0
197 Current_Pending_Sector  0x0022   100   100   000    Old_age   Always       -       0
198 Offline_Uncorrectable   0x0008   100   100   000    Old_age   Offline      -       0
199 UDMA_CRC_Error_Count    0x000a   200   200   000    Old_age   Always       -       0

So I'm not sure why the error occurred. Maybe the controller is bad, the cable is dodgy, or some stray cosmic ray hit my server and caused some kind of CRC error (yes, it can and does happen). The server has ECC memory, so bit flips in RAM should have been detected had they occurred.

Interestingly, this is the first failed disk I've had within a Linux mdadm array in over 20 years of running servers (I've had plenty of failed disks behind Dell PERC controllers and whatever controllers Supermicro jam into their servers!). All my previous arrays were torn down before a disk failed.

As such, this was also the first time I've had to rebuild an array. This particular RAID had been running for over 11 years before this disk failed. For those interested, I followed this post by Red Hat about the steps to take: https://www.redhat.com/sysadmin/raid-drive-mdadm.
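
The gist of it (a sketch - the device names here are placeholders for the failed member and its replacement) is to remove the failed disk from the array, add the new one and let md rebuild:

mdadm --manage /dev/md0 --remove /dev/sdb1
mdadm --manage /dev/md0 --add /dev/sdc1
watch cat /proc/mdstat # monitor the rebuild progress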

Should something similar happen again, I think I would run badblocks in non-destructive mode on the disk in situ, and if it passed, push it back into the array to be rebuilt before looking at buying a new disk.
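
For reference, a non-destructive read-write test (a sketch - substitute your own device for /dev/sdX) would look something like:

badblocks -n -s -b 4096 -o sdX_badblocks.txt /dev/sdX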


If you found this useful, please feel free to donate via bitcoin to 1NT2ErDzLDBPB8CDLk6j1qUdT6FmxkMmNz

Best Albums of 2022

Posted: 2023-01-02 14:03:49 by Alasdair Keyes



My most enjoyed albums of 2022...

  1. Tears for Fears - The Tipping Point
  2. Scorpions - Rock Believer
  3. Red Hot Chili Peppers - Return of the Dream Canteen
  4. KMFDM - Hyëna
  5. Crystal Method - The Trip Out

A special mention to Röyksopp's trilogy of albums, Profound Mysteries I/II/III, and a special note of disappointment for the new albums from Megadeth and Rammstein; both had a couple of good tracks, but were nowhere near as good as I'd expect.


If you found this useful, please feel free to donate via bitcoin to 1NT2ErDzLDBPB8CDLk6j1qUdT6FmxkMmNz

Happy New Year.... It would be a shame if that drive failed.

Posted: 2023-01-01 10:49:52 by Alasdair Keyes



A delightful New Year's Day gift.

A Fail event had been detected on md device /dev/md/0.
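
The usual first steps after an alert like that are to check the array's state, along the lines of:

cat /proc/mdstat
mdadm --detail /dev/md/0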


If you found this useful, please feel free to donate via bitcoin to 1NT2ErDzLDBPB8CDLk6j1qUdT6FmxkMmNz

Javascript Asynchronous coding in

Posted: 2022-08-10 18:49:54 by Alasdair Keyes



Being a (primarily) back-end developer, my use of Javascript has been fairly limited - mostly jQuery and, in the olden days, some basic raw Javascript for front-end validation, basic animation, alerts etc.

I've been writing a project in Quasar https://quasar.dev/ and delving further into the mysteries of async programming.

I was struggling to understand some documentation I found online. After some digging, it turns out that across the various versions of ECMAScript the concept of asynchronous programming has evolved and changed, giving you multiple ways of doing the same thing with callbacks, promises and async/await.
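
As a rough sketch of the three styles (fetchUser and fetchUserPromise are hypothetical functions that load a record asynchronously):

// Callback style - pass a function to run when the work finishes
// (fetchUser / fetchUserPromise are hypothetical, for illustration only)
fetchUser(42, function (err, user) {
    if (err) return console.error(err);
    console.log(user.name);
});

// Promise style - chain handlers onto the returned Promise
fetchUserPromise(42)
    .then(user => console.log(user.name))
    .catch(err => console.error(err));

// async/await style - the same Promise, written like synchronous code
async function showUser() {
    const user = await fetchUserPromise(42);
    console.log(user.name);
}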

I found this video on YouTube - https://www.youtube.com/watch?v=PoRJizFvM7s - which, although only 24 minutes long, gives a fantastic introduction to how each iteration of callbacks/promises/async/await was developed, how they work and how to code against them.

It gives an intro with examples, and that was all I needed to get it straight in my head and make some real progress. It's a must-watch for anyone starting to get involved with Javascript beyond simple libraries like jQuery.


If you found this useful, please feel free to donate via bitcoin to 1NT2ErDzLDBPB8CDLk6j1qUdT6FmxkMmNz

Setting remote IP on Laravel controller tests

Posted: 2022-08-07 12:45:15 by Alasdair Keyes



When building a Laravel website you might want to create allow/block lists based on a user's IP or on GeoIP information. This is easy enough using geoip2/geoip2 (https://packagist.org/packages/geoip2/geoip2), but how do you test that your code works correctly with specific IP addresses when writing your functional/integration tests?

At the beginning of a test that requires a custom IP you can add $this->serverVariables = ['REMOTE_ADDR' => '1.2.3.4']; and this will be what your controller sees in the Illuminate\Http\Request object.

public function testGeoIpFunctionality(): void
{
    // Set the IP address the controller will see in the Request object
    $this->serverVariables = ['REMOTE_ADDR' => '1.2.3.4'];

    $response = $this->get('/website/endpoint');

    $response->assertStatus(200);
    ...
    // other assertions
    ...
}


If you found this useful, please feel free to donate via bitcoin to 1NT2ErDzLDBPB8CDLk6j1qUdT6FmxkMmNz

"They" are watching... even quicker than before

Posted: 2022-05-06 20:45:08 by Alasdair Keyes



I wrote a blog post in 2019 about the website of a newly registered domain getting visited by a bot within 5 hours of the website coming online. You can read the article here - Security first, "they" are watching.

In short, I had surmised that the Certificate Transparency logs were being monitored to discover new sites so they could be scanned for vulnerabilities before an admin had a chance to harden the website.

I read an article today (https://portswigger.net/daily-swig/wordpress-sites-getting-hacked-within-seconds-of-tls-certificates-being-issued) which looks as if this premonition has come to pass: Wordpress websites are apparently getting hacked 'within seconds' of their TLS certificates being issued.

It looks like the logs are being tailed and new sites visited much more quickly than before... from 5 hours 3 years ago to <1 minute today.

I've steered clear of Wordpress for years now and often advise my clients to do the same. Although the usability and extensibility of Wordpress are fantastic, the scope for vulnerabilities in both plugins and the core code is too great to rely on. If you do run it, assess whether you really need it for a public-facing site; if you don't, add IP or Basic Authentication restrictions to your webserver config to restrict access to only those who need it.
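
As a sketch of that on nginx (the network and file paths are placeholders), satisfy any lets a request through if it either matches the IP allow list or presents valid credentials:

location / {
    satisfy any;
    allow 203.0.113.0/24;
    deny all;
    auth_basic "Restricted";
    auth_basic_user_file /etc/nginx/.htpasswd;
}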


If you found this useful, please feel free to donate via bitcoin to 1NT2ErDzLDBPB8CDLk6j1qUdT6FmxkMmNz

Reducing dependencies and expanding Laravel Blade

Posted: 2022-05-05 08:05:06 by Alasdair Keyes



I've recently spent a couple of days moving my site to the latest version of Laravel. There were some problems upgrading through such a large number of major versions, which I'll likely cover in a later blog post.

Whilst I was re-adding my Composer dependencies I looked at what I really needed. My blog posts are written in Markdown and stored in the database; the Laravel template engine, Blade, converts them from Markdown to HTML. For this task I was using the parsedown/laravel plugin (https://packagist.org/packages/parsedown/laravel), which uses erusev/parsedown (https://packagist.org/packages/erusev/parsedown) underneath to do the actual Markdown processing.

I try to minimise dependencies for two reasons:

  1. Reduced complexity in the codebase
  2. Reduced attack vectors either from attacks directly against my site or supply chain attacks through PHP's Composer system.

Whilst browsing through the Laravel docs I noticed that there is an inbuilt Str::markdown (https://laravel.com/docs/9.x/helpers#method-str-markdown) helper which might allow me to do the same thing. Under the hood it uses Commonmark from the PHP League (https://commonmark.thephpleague.com/).
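
A quick sanity check of the helper from php artisan tinker (the result is shown as a comment):

Str::markdown('# My Title');
// => "<h1>My Title</h1>\n"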

I used a couple of custom options on Parsedown, which I needed to be sure worked with the Laravel version.

$parseDown = Parsedown::instance();
$parseDown->setUrlsLinked(false);    // don't auto-link bare URLs
$parseDown->setMarkupEscaped(false); // allow raw HTML in posts

setUrlsLinked: false means that URLs aren't automatically converted into <a href> links, and setMarkupEscaped: false means that I can include HTML markup in my blog posts if I desire.

After reading through the Commonmark docs I found the relevant options were allow_unsafe_links: true and html_input: 'allow', and I'd be set. Although these are the defaults for Commonmark, I want to declare them explicitly in case the defaults change in future.

I only used Markdown in my templates, and parsedown/laravel automatically adds a Blade directive of @parsedown("# Markdown Title"), which I made use of. My first task was therefore to create a replacement Blade directive to process Markdown in my templates; I decided on the name processMarkdown().

I created app/Providers/CustomBladeFunctionProvider.php.

<?php
  
declare(strict_types=1);

namespace App\Providers;

use Illuminate\Support\Facades\Blade;
use Illuminate\Support\ServiceProvider;
use Illuminate\Support\Str;

class CustomBladeFunctionProvider extends ServiceProvider
{
    /**
     * Bootstrap the application services.
     *
     * @return void
     */
    public function boot(): void
    {
        // Provide @processMarkdown
        Blade::directive('processMarkdown', function ($parameter) {
            return "<?= rtrim(Str::markdown($parameter, [ 'allow_unsafe_links' => true, 'html_input' => 'allow' ])); ?>";
        });
    }
}

Then added this to my providers in config/app.php.

...
    'providers' => [
        ...
        App\Providers\CustomBladeFunctionProvider::class,
        ...
    ],
...

Then all I needed to do was update my templates from

@parsedown($blogPost->body)

to

@processMarkdown($blogPost->body)

Finally, I removed the parsedown/laravel dependency to slim down my codebase.
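
Removing the package is a Composer one-liner:

composer remove parsedown/laravel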


If you found this useful, please feel free to donate via bitcoin to 1NT2ErDzLDBPB8CDLk6j1qUdT6FmxkMmNz

Language Transfer for learning a new language

Posted: 2022-04-13 12:21:46 by Alasdair Keyes

Direct Link | RSS feed


I've recently tried to get back into learning German and I have started using the Language Transfer website.

Language Transfer is run by one chap, Mihalis, who teaches a range of languages - French, Italian, German, Spanish, Greek, Turkish, Arabic and even Swahili - to English speakers. His concept is to understand which parts of English are similar to (or transfer across into) the language you are learning, and to use that as a base to get a fast grounding.

I've been using Duolingo on and off for a while, but often become frustrated with its lack of explanation as to why certain aspects of the language are the way they are. Language Transfer really adds to it by teaching common rules for how to construct sentences in the given language, how verbs conjugate, how nouns pluralise, etc.

If you're learning any language, I highly recommend checking the site to see if he teaches it; I think you'll find it a great help.

https://www.languagetransfer.org/


If you found this useful, please feel free to donate via bitcoin to 1NT2ErDzLDBPB8CDLk6j1qUdT6FmxkMmNz

Mounting an encrypted ZFS dataset at boot on Debian 11 Bullseye

Posted: 2021-11-09 16:43:36 by Alasdair Keyes



I've recently got myself another HP Microserver, which has space for 4 disks, so I decided to set up Debian 11 on one disk and use the other three to create a ZFS zpool for data storage.

The last time I'd experimented with ZFS on Linux (ZoL) on a virtual machine, encryption wasn't available, but it is now, so I enabled it for my dataset. This is fine when the dataset is created, as it will auto-mount, but it doesn't auto-mount on reboot because it's encrypted.

It turns out ZFS handles obtaining the encryption key and mounting the volume as two distinct processes. This means that when the ZFS mount service starts, it skips mounting the encrypted volume because no key is available to it.
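
Run by hand, the two steps look like this (assuming a dataset named tank/data):

zfs load-key tank/data # prompts for the passphrase
zfs mount tank/data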

The standard Linux dm-crypt/LUKS encryption requires you to update /etc/crypttab with each encrypted volume on the system, and it will prompt for a password at boot time. ZFS does have the ability to use a file as the encryption key, but as I already have to enter a password for the OS drive, I was looking to do the same for the ZFS dataset.

After some investigation I found the solution on the Arch Linux Wiki (https://wiki.archlinux.org/title/ZFS#Native_encryption). They provide a snippet for a systemd service file that can be set to run before the ZFS mount service to ask for the encryption keys.

It did require tweaking as the path to the ZFS binary is different on Debian. In short, create the file /etc/systemd/system/zfs-load-key.service with the following content...

[Unit]
Description=Load ZFS encryption keys
DefaultDependencies=no
After=zfs-import.target
Before=zfs-mount.service

[Service]
Type=oneshot
RemainAfterExit=yes
ExecStart=/usr/sbin/zfs load-key -a
StandardInput=tty-force

[Install]
WantedBy=zfs-mount.service

Once that is done, run the following commands to refresh systemd with the new service and set it to run at boot.

systemctl daemon-reload
systemctl enable zfs-load-key.service


If you found this useful, please feel free to donate via bitcoin to 1NT2ErDzLDBPB8CDLk6j1qUdT6FmxkMmNz

LXC Debian containers and unknown GPG signing keys

Posted: 2021-06-04 10:14:27 by Alasdair Keyes



I needed to create a Debian Buster LXC container on my laptop, and when running the LXC create command below I received the following error.

# lxc-create -t debian -n testcontainer -- -r buster
debootstrap is /usr/sbin/debootstrap
Checking cache download in /var/cache/lxc/debian/rootfs-buster-amd64 ...
gpg: key 7638D0442B90D010: 4 signatures not checked due to missing keys
gpg: key 7638D0442B90D010: "Debian Archive Automatic Signing Key (8/jessie) <ftpmaster@debian.org>" not changed
gpg: Total number processed: 1
gpg:              unchanged: 1
Downloading debian minimal ...
I: Retrieving InRelease
I: Checking Release signature
E: Release signed by unknown key (key id DCC9EFBF77E11517)
   The specified keyring /var/cache/lxc/debian/archive-key.gpg may be incorrect or out of date.
   You can find the latest Debian release key at https://ftp-master.debian.org/keys.html
Failed to download the rootfs, aborting.
Failed to download 'debian base'
failed to install debian
lxc-create: testcontainer: lxccontainer.c: create_run_template: 1626 Failed to create container from template
lxc-create: testcontainer: tools/lxc_create.c: main: 319 Failed to create container testcontainer

This is telling me that the key used to sign the Debian release is unknown to LXC. It also shows that LXC is using the file /var/cache/lxc/debian/archive-key.gpg as the GPG keyring.

We can check the keys listed in that keyring with the following command. To break it down, this is running the regular gpg utility, but the --no-default-keyring and --keyring arguments tell gpg to manage just the keyring file that LXC is using.

# gpg --no-default-keyring --keyring /var/cache/lxc/debian/archive-key.gpg --list-key
/var/cache/lxc/debian/archive-key.gpg
-------------------------------------
pub   rsa4096 2014-11-21 [SC] [expires: 2022-11-19]
      126C0D24BD8A2942CC7DF8AC7638D0442B90D010
uid           [ unknown] Debian Archive Automatic Signing Key (8/jessie) <ftpmaster@debian.org>

Which shows it only has the key for Debian 8 - Jessie...

To get the latest version, we first need to check that the key listed in the error is a valid Debian key; otherwise we could be opening ourselves up to downloading malicious files.

Visiting https://ftp-master.debian.org/keys.html shows that the GPG key with fingerprint DCC9EFBF77E11517 listed in the error is the valid Debian 10 Buster release key.

Now that we're satisfied that nothing shady is going on, we can import the key to the keyring.

Download the key from the Debian site...

# wget "https://ftp-master.debian.org/keys/release-10.asc"
...
2021-06-04 10:51:53 (35.6 MB/s) - ‘release-10.asc’ saved [1200/1200]

Then import into the keyring...

# gpg --no-default-keyring --keyring /var/cache/lxc/debian/archive-key.gpg --import release-10.asc 
gpg: key DCC9EFBF77E11517: public key "Debian Stable Release Key (10/buster) <debian-release@lists.debian.org>" imported
gpg: Total number processed: 1
gpg:               imported: 1

Running the --list-key command we ran before shows the new key in the LXC keyring.

# gpg --no-default-keyring --keyring /var/cache/lxc/debian/archive-key.gpg --list-key
/var/cache/lxc/debian/archive-key.gpg
-------------------------------------
pub   rsa4096 2014-11-21 [SC] [expires: 2022-11-19]
      126C0D24BD8A2942CC7DF8AC7638D0442B90D010
uid           [ unknown] Debian Archive Automatic Signing Key (8/jessie) <ftpmaster@debian.org>

pub   rsa4096 2019-02-05 [SC] [expires: 2027-02-03]
      6D33866EDD8FFA41C0143AEDDCC9EFBF77E11517
uid           [ unknown] Debian Stable Release Key (10/buster) <debian-release@lists.debian.org>

We can now run the create container command...

# lxc-create -t debian -n akeyescouk -- -r buster
debootstrap is /usr/sbin/debootstrap
Checking cache download in /var/cache/lxc/debian/rootfs-buster-amd64 ... 
gpg: key 7638D0442B90D010: 4 signatures not checked due to missing keys
gpg: key 7638D0442B90D010: "Debian Archive Automatic Signing Key (8/jessie) <ftpmaster@debian.org>" not changed
gpg: Total number processed: 1
gpg:              unchanged: 1
Downloading debian minimal ...
I: Retrieving InRelease 
I: Checking Release signature
I: Valid Release signature (key id 6D33866EDD8FFA41C0143AEDDCC9EFBF77E11517)
I: Retrieving Packages 
I: Validating Packages 
...


If you found this useful, please feel free to donate via bitcoin to 1NT2ErDzLDBPB8CDLk6j1qUdT6FmxkMmNz
