HMRC Basic PAYE Tools P60 generation error on Linux

Posted: 2024-04-12 09:52:59 by Alasdair Keyes

Direct Link | RSS feed


TL;DR: if you are just looking for the fix, scroll down to The Fix section.

As I run a UK company, I have to supply various bits of information to HMRC (the UK tax office) for payroll.

HMRC offer a free tool called Basic PAYE Tools for small businesses to make this possible (https://www.gov.uk/basic-paye-tools). It's a great tool; without it, small companies would have to shell out money for proprietary solutions just to fulfil basic payroll/tax obligations. The tool is available for Windows, Mac and Linux. This post is specifically about the Linux version, but a similar bug/fix may apply on Mac.

The tool is a binary, rti.linux, that starts a web server listening on http://127.0.0.1:46729/. A browser is then opened automatically to connect to that URL and provide the user with a GUI. The server stores data in a local SQLite database. When any actions are performed that require HMRC to be notified (such as paying employees), the transactions are stored locally and then sent to HMRC in a batch process.

As we have just passed into the 2024-25 tax year, I have to 'close' the old year so that I can start the new one. Part of this process is generating a P60 form for all employees. However, when doing this, the page just appeared to refresh and no form was displayed. This action doesn't require any transfer of information to HMRC, as the P60 is just a summary of an employee's payments and tax contributions for that tax year.

A Google search didn't show any similar issues, and there is no easy 'File bug report' option; the only way to report a problem is to call HMRC on the phone. I didn't really want to try explaining the issue over the phone, so I thought I'd investigate myself.

I could find no information about log files for the tool, so I found the PID of the rti.linux process and looked in /proc/ for any open file handles that might point to logs.
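The general pattern is to locate the PID and then list the process's open file descriptors (the use of pgrep here is my addition; the snippet falls back to the current shell's PID so it runs on any Linux box):

```shell
# Find the PID of the rti.linux process (name assumed) and list its open
# file descriptors; any log files it has open show up as symlink targets.
pid=$(pgrep -f rti.linux | head -n1)
pid=${pid:-$$}   # fall back to this shell's PID for demonstration
ls -al "/proc/${pid}/fd/"
```

Running this against rti.linux gave the listing below.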

$ ls -al /proc/24876/fd/
total 0
dr-x------ 2 user user  0 Apr 12 09:56 . 
dr-xr-xr-x 9 user user  0 Apr 12 09:52 ..
lr-x------ 1 user user 64 Apr 12 09:56 0 -> /dev/null
l-wx------ 1 user user 64 Apr 12 09:56 1 -> /home/user/.xsession-errors
lrwx------ 1 user user 64 Apr 12 09:56 10 -> 'socket:[147745]'
lrwx------ 1 user user 64 Apr 12 09:56 11 -> 'socket:[148555]'
lrwx------ 1 user user 64 Apr 12 09:56 12 -> 'anon_inode:[eventfd]'
lrwx------ 1 user user 64 Apr 12 09:56 13 -> 'socket:[148556]'
lrwx------ 1 user user 64 Apr 12 09:56 14 -> 'socket:[147746]'
lrwx------ 1 user user 64 Apr 12 09:56 15 -> 'socket:[147747]'
lrwx------ 1 user user 64 Apr 12 09:56 18 -> 'socket:[147819]'
l-wx------ 1 user user 64 Apr 12 09:56 2 -> /home/user/.xsession-errors
lrwx------ 1 user user 64 Apr 12 09:56 21 -> 'anon_inode:[eventfd]'
lr-x------ 1 user user 64 Apr 12 09:56 3 -> /dev/urandom
l-wx------ 1 user user 64 Apr 12 09:56 4 -> /tmp/rti.log
lrwx------ 1 user user 64 Apr 12 09:56 5 -> 'anon_inode:[eventfd]'
lr-x------ 1 user user 64 Apr 12 09:56 6 -> 'pipe:[144859]'
l-wx------ 1 user user 64 Apr 12 09:56 7 -> 'pipe:[144859]'
lrwx------ 1 user user 64 Apr 12 09:56 8 -> 'socket:[144860]'
lrwx------ 1 user user 64 Apr 12 09:56 9 -> 'socket:[144024]'

The /tmp/rti.log shows one entry.

24876 Server has asked us to open_file

And /home/user/.xsession-errors shows something a little more helpful.

evince: error while loading shared libraries: libjpeg.so.62: failed to map segment from shared object

So it looks like an error in libjpeg.so.62, but nothing specific.

To get more information, I ran strace to find more about what was being called. (The read() call data is truncated to save space).

$ strace -Ff -v -s 1000 -p 24876
[pid 25497] openat(AT_FDCWD, "/home/user/HMRC/payetools-rti/libjpeg.so.62", O_RDONLY|O_CLOEXEC) = 3
[pid 25497] read(3, "\177ELF\2\1\----TRUNCATED----\0)\0\0\0", 832) = 832
[pid 25497] newfstatat(3, "", {st_dev=makedev(0xfd, 0x1), st_ino=23609456, st_mode=S_IFREG|0644, st_nlink=1, st_uid=1000, st_gid=1000, st_blksize=4096, st_blocks=848, st_size=432128, st_atime=1712911963 /* 2024-04-12T09:52:43.275396143+0100 */, st_atime_nsec=275396143, st_mtime=1602102343 /* 2020-10-07T21:25:43+0100 */, st_mtime_nsec=0, st_ctime=1712611938 /* 2024-04-08T22:32:18.204486045+0100 */, st_ctime_nsec=204486045}, AT_EMPTY_PATH) = 0
[pid 25497] mmap(NULL, 434200, PROT_READ, MAP_PRIVATE|MAP_DENYWRITE, 3, 0) = 0x7f318954d000
[pid 25497] mprotect(0x7f3189551000, 413696, PROT_NONE) = 0
[pid 25497] mmap(0x7f3189551000, 241664, PROT_READ|PROT_EXEC, MAP_PRIVATE|MAP_FIXED|MAP_DENYWRITE, 3, 0x4000) = -1 EACCES (Permission denied)
[pid 25497] close(3)                    = 0
[pid 25497] writev(2, [{iov_base="evince", iov_len=6}, {iov_base=": ", iov_len=2}, {iov_base="error while loading shared libraries", iov_len=36}, {iov_base=": ", iov_len=2}, {iov_base="libjpeg.so.62", iov_len=13}, {iov_base=": ", iov_len=2}, {iov_base="failed to map segment from shared object", iov_len=40}, {iov_base="", iov_len=0}, {iov_base="", iov_len=0}, {iov_base="\n", iov_len=1}], 10) = 102
[pid 25497] exit_group(127)             = ?
[pid 25497] +++ exited with 127 +++

Here we see /home/user/HMRC/payetools-rti/libjpeg.so.62 opened for reading and its ELF header read, then mmap() and mprotect() called. The second mmap(), which requests PROT_EXEC, fails with EACCES; the file is closed, and the error above is written to file descriptor 2 (/home/user/.xsession-errors) before the process exits with a non-zero code.

This does look to be the issue. I don't know enough about the codebase to determine whether the bundled library is corrupt, or perhaps wasn't updated along with the rest of the code. Either way, it's very likely a standard FOSS library, so I checked whether Debian supplied a version.

$ apt-file search libjpeg.so.62
libjpeg62-turbo: /usr/lib/x86_64-linux-gnu/libjpeg.so.62
libjpeg62-turbo: /usr/lib/x86_64-linux-gnu/libjpeg.so.62.3.0

Great, maybe I can use that?

The Fix

  1. Make sure paye-tools is closed.

  2. Install the libjpeg62-turbo package

It turns out I already had the libjpeg62-turbo package installed, but if you don't, you can install it with:

apt install libjpeg62-turbo

I checked whether the files were the same... it turns out they're different versions.

$ sha256sum /home/user/HMRC/payetools-rti/libjpeg.so.62 /usr/lib/x86_64-linux-gnu/libjpeg.so.62
4f3446bc4c2a2d3c75b7c62062305ff8c5fcdaa447d5a2461d5995d40f728d00  /home/user/HMRC/payetools-rti/libjpeg.so.62
dad87949ccad2be7e40a02986306087fdcfb35ccaadd59aea923a3f96d290eec  /usr/lib/x86_64-linux-gnu/libjpeg.so.62
  3. Back up and link to the system version
$ cd /home/user/HMRC/payetools-rti
$ mv libjpeg.so.62 libjpeg.so.62.orig
$ ln -s /usr/lib/x86_64-linux-gnu/libjpeg.so.62 libjpeg.so.62
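If you want to sanity-check the end state before touching the real install, the same back-up-and-symlink pattern can be rehearsed in a throwaway directory (the scratch directory and stand-in file here are mine, not part of the tool):

```shell
# Rehearse the back-up-and-symlink fix in a scratch directory.
scratch=$(mktemp -d)
touch "$scratch/libjpeg.so.62"    # stands in for the bundled library
mv "$scratch/libjpeg.so.62" "$scratch/libjpeg.so.62.orig"
ln -s /usr/lib/x86_64-linux-gnu/libjpeg.so.62 "$scratch/libjpeg.so.62"
ls -l "$scratch"                  # symlink to system copy, .orig kept as backup
```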

I started the tool back up and I was able to get the P60.


If you found this useful, please feel free to donate via bitcoin to 1NT2ErDzLDBPB8CDLk6j1qUdT6FmxkMmNz

Simple Local LLM AI via docker-compose

Posted: 2024-03-13 16:21:36 by Alasdair Keyes



I've been playing about with local LLM (Large Language Model) AIs.

I knocked together this docker-compose.yml file to help people get started with Ollama, fronted by a nice Open-WebUI interface, so you can have the "joy" of AI, but locally.

It's available as a GitLab snippet at https://gitlab.com/-/snippets/3687211, or you can copy and paste it from below.

---
# Created by Alasdair Keyes (https://www.akeyes.co.uk)
# * `docker-compose up`
# * Visit http://127.0.0.1:3000 to create account and login
# * Click 'Select a model'
# * Enter the model name to use. Click the link on the page to see all. `llama2` or `llama2-uncensored` are suitable first options.
# * Chat
version: '3'
services:

  ollama:
    image: "ollama/ollama"
    volumes:
      - ollama-data:/root/.ollama
# Uncomment ports to allow access to ollama API from the host
#    ports:
#      - "127.0.0.1:11434:11434"

  open-webui:
    image: "ghcr.io/open-webui/open-webui:main"
    depends_on:
      - ollama
    ports:
      - "127.0.0.1:3000:8080"
    environment:
      - "OLLAMA_BASE_URL=http://ollama:11434"
    volumes:
      - open-webui-data:/app/backend/data

volumes:
  ollama-data:
  open-webui-data:



Best Albums of 2023

Posted: 2024-01-01 11:17:12 by Alasdair Keyes



My most enjoyed albums of 2023...

A special mention to Jungle - Volcano, which just missed out on a place in the top five. Additional mentions to Metallica - 72 Seasons and Aphex Twin - Blackbox Life Recorder 21f / In a Room7 F760; it's been a while since each had a release, but neither quite had the spark I was hoping for.



Updating Dell BIOS on a machine without Windows Installed

Posted: 2023-12-23 14:53:03 by Alasdair Keyes



I recently had a Dell laptop that required a BIOS update, but the BIOS installers were only for Windows and my machine was running Debian.

This posed a problem as to how to apply the update. I didn't want to go swapping out hard-drives to install Windows, or even wipe the Linux installation to install Windows and then re-install Linux after.

In the end I found that I could start a Windows 10 installation process, drop to a shell and run the BIOS update.

The steps are as follows...

  1. Download a Windows 10 ISO image from Microsoft. Write it to a DVD or use software such as Rufus (https://rufus.ie/) to create a Windows 10 Bootable USB drive.
  2. Download the BIOS update .exe files and put them onto a second USB stick.
  3. Boot into the Windows 10 installer.
  4. At the first screen that asks you for languages, press SHIFT+F10, a Windows Command prompt appears.
  5. Insert the second USB stick with the update into a second USB port. Wait a few seconds for it to be auto-mounted; Windows will typically assign it drive letter E:.
  6. In the command prompt type e: <ENTER> (substitute the drive letter if a different one was assigned).
  7. Run the executable file that you downloaded. The update will run and then reboot into the BIOS update process as normal.

After the update, you can remove the USB sticks and reboot back into Linux.



Sysadmin Appreciation Day 2023

Posted: 2023-07-28 07:59:10 by Alasdair Keyes



It's that time of year again - Give thanks to your sysadmin.

https://sysadminday.com/



Ansible/Jinja2 array concatenation changes in Debian Bookworm

Posted: 2023-07-24 21:50:19 by Alasdair Keyes



When I was first getting into Ansible, I read this article https://www.jeffgeerling.com/blog/2017/adding-strings-array-ansible by Jeff Geerling on how to add items to arrays in a playbook using Jinja2, and I ended up incorporating it into my playbooks.

It's worth reading the article but essentially the code is...

---
- hosts: localhost
  connection: local
  gather_facts: false

  vars:
    my_messages: []

  tasks:
    - name: Add to array
      ansible.builtin.set_fact:
        my_messages: "{{ my_messages }} + ['Message added to array: {{ item }}']"
      loop:
        - element one
        - something else
        - 3
        - false

    - name: Display messages
      ansible.builtin.debug:
        var: my_messages

This produced the following debug output

ok: [127.0.0.1] => {
    "my_messages": [
        "Message added to array: element one",
        "Message added to array: something else",
        "Message added to array: 3",
        "Message added to array: False"
    ]
}

I found this useful for any post-run reminders. The debug block was added at the end of the playbook and could prompt me to perform any other tasks that might be required.

After I upgraded from Debian Bullseye (11) to Bookworm (12), this no longer worked correctly. The code wouldn't error, but the output was one continuous string, including the [] characters.

ok: [127.0.0.1] => {
    "my_messages": "[] + [ 'Message added to array: element one' ] + [ 'Message added to array: something else' ] + [ 'Message added to array: 3' ] + [ 'Message added to array: False' ]"
}

The change has occurred somewhere between Ansible 2.10.7 (with Jinja 2.11.3) and Ansible 2.14.3 (with Jinja 3.1.2).

To resolve this, the correct code is now...

---
- hosts: localhost
  connection: local
  gather_facts: false

  vars:
    my_messages: []

  tasks:
    - name: Add to array
      ansible.builtin.set_fact:
        my_messages: "{{ my_messages + ['Message added to array: ' + item | string] }}"
      loop:
        - element one
        - something else
        - 3
        - false

    - name: Display messages
      ansible.builtin.debug:
        var: my_messages

The only line that has changed is the my_messages: fact setting. The array manipulation is now performed entirely within the {{ }} Jinja2 tags. This has a couple of knock-on effects that you will need to be aware of...

  1. You cannot reference the loop variable with the {{ item }} syntax inside the expression, as that would nest tags within tags and Ansible will throw a templating error. Instead, concatenate your strings/variables using the + operator.

  2. Because you are using the + concatenation operator rather than {{ }} tags (see point 1), Jinja2 expects to be concatenating only strings. As such, you will need to ensure that any variables being joined are strings; if they are not, convert them with the | string filter.

The second example above shows both of these techniques working correctly with later versions of Ansible/Jinja2.

Depending on your use-case it's a little more messy, but won't take too much effort to convert.
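As an aside (my addition, not from Jeff's article): Jinja2 also has a dedicated string-concatenation operator, ~, which converts its operands to strings for you, so the | string filter can be dropped. A sketch of the same fact-setting task using it:

```yaml
# Alternative using Jinja2's ~ operator, which stringifies both operands,
# so non-string loop items such as 3 or false need no explicit | string.
- name: Add to array
  ansible.builtin.set_fact:
    my_messages: "{{ my_messages + ['Message added to array: ' ~ item] }}"
  loop:
    - element one
    - 3
    - false
```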



Migrating calendars and contacts from Nextcloud to Radicale

Posted: 2023-04-09 08:53:16 by Alasdair Keyes



I've been running Nextcloud (https://nextcloud.com/) for several years (and Owncloud (https://owncloud.com/), prior to that) to provide my personal CalDAV calendar and CardDAV contact services.

My original intention was to make use of the other services that Nextcloud provides but, as it turns out, I never have. It's overkill to run a full instance just for this functionality, and it's also a big security footprint to keep updated and maintained when not using it to the full, so I started to look for other solutions.

I tested out Radicale V3 and it seemed to do just what I needed. It provides both those services and is also included in the Debian repos, so it provides an easier install and update.

Although I tested Radicale on Debian 11, which has V3, the server I'm using for this runs Debian 10, so I only get Radicale V2. Both seemed to work well, but this article is about V2.

The configuration I chose was to run Radicale bound to localhost with htpasswd authentication and an Nginx reverse proxy in front to provide access via the internet. This seems to be the most basic and easiest setup. Although the bulk of this article is taken from the Radicale documentation, there are a few tweaks and changes included that I had to/wanted to make.

Note: Radicale refers to each calendar/contact store as a collection, so you will see that terminology used here.

  1. Install

The python3 libraries are not requirements of the Radicale package, but if you're using htpasswd auth you will need them, as they're required for its htpasswd and bcrypt processing.

apt install radicale python3-passlib python3-bcrypt
  2. Configure Radicale

Edit /etc/radicale/config and ensure the following config exists.

[auth]
type = htpasswd
htpasswd_filename = /etc/radicale/users
htpasswd_encryption = bcrypt
delay = 1
  3. Create the user
# htpasswd -B -c /etc/radicale/users yourusername
New password: ************
Re-type new password: ************
# chmod 640 /etc/radicale/users
# chown radicale: /etc/radicale/users
  4. Set up Nginx

This is straight from Radicale docs. The docs also provide a copy/paste Apache config too.

location /radicale/ { # The trailing / is important!
    proxy_pass        http://localhost:5232/; # The / is important!
    proxy_set_header  X-Script-Name /radicale;
    proxy_set_header  X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header  Host $http_host;
    proxy_pass_header Authorization;
}

If you're going to be importing existing data, you'll likely need to extend the reverse-proxy timeouts. Your timings may vary, but as an indicator, my 1.2MB calendar file took about 2-3 minutes to import.

proxy_read_timeout 300;
proxy_connect_timeout 300;
proxy_send_timeout 300;

Restart everything

# systemctl restart radicale.service
# systemctl reload nginx.service
  5. Download calendars and contacts from Nextcloud.

Once logged into your Nextcloud instance, you can get a single .ics and .vcf file for your calendar/contacts by going to the URL with ?export added to the end.

It should go without saying: don't make any changes to your contacts/calendars between taking these backups and pointing your clients at the new Radicale server.

  6. Creation/uploading of calendars

Log in to the web interface provided via the Nginx/Apache config you created above, e.g. https://myradicaleserver.com/radicale/

The collections it creates use UUIDs as paths; I didn't want to bother with those, so I changed them to nicer names...

# cd /var/lib/radicale/collections/collection-root/yourusername
# ls -al
total 188
drwxr-x--- 4 radicale radicale   4096 Apr  8 20:13 .
drwxr-x--- 4 radicale radicale   4096 Apr  8 19:43 ..
drwxr-x--- 3 radicale radicale 167936 Apr  8 20:40 55f41fed-eccb-4feb-a460-69b1745d2c02
drwxr-x--- 3 radicale radicale  12288 Apr  8 20:16 6a2b1bfd-ca46-45e4-8dcf-861b448c519f
# mv 6a2b1bfd-ca46-45e4-8dcf-861b448c519f contacts
# mv 55f41fed-eccb-4feb-a460-69b1745d2c02 calendar
# ls -al
total 188
drwxr-x--- 4 radicale radicale   4096 Apr  8 20:13 .
drwxr-x--- 4 radicale radicale   4096 Apr  8 19:43 ..
drwxr-x--- 3 radicale radicale 167936 Apr  8 20:40 calendar
drwxr-x--- 3 radicale radicale  12288 Apr  8 20:16 contacts

Refreshing the web UI will show the updated paths.

  7. Import the dumps into Radicale
# curl -u 'yourusername:plaintextpassword' -X PUT https://yourradicaleserver.com/radicale/username/calendar --data-binary @personal-2023-04-08.ics
# curl -u 'yourusername:plaintextpassword' -X PUT https://yourradicaleserver.com/radicale/username/contacts --data-binary @Contacts-2023-04-08.vcf

Refresh the web UI and you will notice that the name/description/colours you set will have been changed or removed. Use the Edit link to set them again.

That's it; you then just need to configure your clients to use the new calendars/contacts. I use Thunderbird on my PC and DAVx5 on my phone, and both worked with no issues.



Disabling syslog debugging on open-fprintd

Posted: 2023-04-02 19:23:47 by Alasdair Keyes



I recently installed python-validity (https://github.com/uunicorn/python-validity) and open-fprintd (https://github.com/uunicorn/open-fprintd) to get the fingerprint reader working on a Thinkpad T480s. (The standard fprintd package doesn't support the fingerprint reader device on the T480s).

After setting it up I noticed a lot of debug information in syslog (example truncated)...

Apr  2 09:11:00 hostname01 open-fprintd: >tls> 17: 4b00000b0053746757696e64736f7200
Apr  2 09:11:00 hostname01 open-fprintd: >cmd> 1703030050c00a...a82185cc9399d30625ee3c1451f
Apr  2 09:11:00 hostname01 open-fprintd: <cmd< 1703030050b7a4a...cdb0d7f97fa67b6337329
Apr  2 09:11:00 hostname01 open-fprintd: <tls< 17: 0000030002000b00000...6757696e64736f7200

To disable debug without changing the installed systemd files, run the following to create a custom open-fprintd.service file to override it.

sed 's/\(ExecStart.*\)\-\-debug\(.*\)/\1 \2/' /lib/systemd/system/open-fprintd.service > /etc/systemd/system/open-fprintd.service
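To see what that substitution actually does, here it is applied to a sample ExecStart line (the path in the sample is illustrative; your unit file's may differ):

```shell
# The sed expression strips the --debug flag from the ExecStart line
# while leaving the rest of the line intact.
echo 'ExecStart=/usr/bin/open-fprintd --debug' \
    | sed 's/\(ExecStart.*\)\-\-debug\(.*\)/\1 \2/'
```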

After a reboot it'll stop spewing that data out.



Group code reviews

Posted: 2023-01-29 16:03:38 by Alasdair Keyes



Whilst working with a customer of mine, I was introduced to a way of doing code review that I hadn't seen before. I'm not sure if it's particularly unique to them, but in my experience it's fairly rare... group code review.

At all the previous companies I've worked at or with, a pull request (PR) would be created and some key devs added for code review. For a speedy review, a link to the PR was often thrown into the dev Slack channel with a little prodding or begging to get the task done.

Now, most people in dev teams are busy; or at least if they're not too busy, they find almost anything more interesting than reviewing a PR. It can be tedious and if it's touching code that you're not familiar with, you can often end up spending some time trying to work out both what the code was doing and what the change is supposed to be doing. In the end it was often team leaders who ended up picking up the slack and performing the reviews to get code out the door. This puts an excessive burden on them when it could be shared more equally.

The process

This particular customer held a group code review each morning after stand-up, and then an optional code review mid-afternoon if required. My first thought was that this would cause undue disruption to workflow, but over time I saw immense benefit in this approach and very little drawback.

The developer who wrote the code would create a PR as normal. At the review, they would share their screen showing the PR on a famous git-hosting provider. Usually they would give a sentence about the ticket and what they were trying to achieve; due to stand-ups and backlog planning, everyone usually had a fairly good idea of this already. The dev would then go through the changes on their branch, giving a run-down of the code as they went. Questions or points could be raised by anyone and discussed amongst the group. If new tests were added, they would also be run through briefly so that other devs could see what was being tested, whether it was suitable/adequate, and whether any edge cases might fall through the cracks and need to be added. (It also required that the tests passed!)

Any changes that were required and agreed upon by the team were added to the PR and the dev would go back and implement the changes.

If the changes needed were minor, they were often approved by a senior dev and scheduled for release. Larger changes would have to have a new PR created and go back into the code review process. The coder would only have to go through the changes from the last code review, not the whole shebang.

The benefits...

Things you think are bad... but they're not as bad as you think...

So that's my review on group code-review. If you're looking for possible improvements to your process I strongly advise you give it a shot. Do it for a month or so and see how you get on. You can always tweak the process based on your needs and fall back to your previous ways if you don't like it.



Adding custom Firefox config with Ansible

Posted: 2023-01-26 21:05:42 by Alasdair Keyes



I've been writing Ansible playbooks to build my laptop and I came across a problem with applying my custom Firefox user.js config file to a new build.

Firefox won't create its config/profile directory until it's started for the first time. During this first start, a randomly named profile directory is created, in which the user.js file needs to be placed, e.g. ~/.mozilla/firefox/wxyz1234.default-release/.

As such you can't just install user.js with a basic file copy.

To counter this, I wrote the following tasks: if there is no Firefox profile folder, Ansible starts Firefox, allows it to create the profile folder, and then kills it. It then searches for the newly created profile folder and installs the file.

# Install Firefox
- name: Install Firefox
  become: true
  package:
    name:
      - firefox
    state: present

# Check if a profile folder exists
- name: Check Firefox config folder
  find:
    paths: "{{ ansible_user_dir }}/.mozilla/firefox"
    patterns: '^.*\.default-release'
    use_regex: true
    file_type: directory
  register: firefox_config_folder_search

# If profile folder doesn't exist, start Firefox
- name: Starting Firefox
  shell:
    cmd: "firefox &"
  changed_when: false
  when: firefox_config_folder_search.matched == 0

- name: Waiting for Firefox to start
  command: sleep 10s
  changed_when: false
  when: firefox_config_folder_search.matched == 0

# Kill Firefox
- name: Killing Firefox
  shell:
    cmd: kill $(pidof firefox)
  changed_when: false
  when: firefox_config_folder_search.matched == 0

# Search for the newly created profile directory
- name: Check Firefox config folder again
  find:
    paths: "{{ ansible_user_dir }}/.mozilla/firefox"
    patterns: '^.*\.default-release'
    use_regex: true
    file_type: directory
  register: firefox_config_folder_search

# Set a fact with the profile directory path
- name: Set firefox folder name as fact
  set_fact:
    firefox_config_folder_path: "{{ firefox_config_folder_search.files[0].path }}"

# Add in the custom config
- name: Add in Firefox config
  copy:
    src: files/firefox/user.js
    dest: "{{ firefox_config_folder_path }}/user.js"
    owner: "{{ ansible_user_id }}"
    group: "{{ ansible_user_id }}"
    mode: 0644
  when: firefox_config_folder_path != 'false'

The play is idempotent so you can run it as many times as you want and it will continue to give you a nice green response.

I only ever use a single Firefox profile so I don't need to ensure different configs, but it could be extended to take this into account if you needed.
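For reference, the profile search that the find tasks perform is equivalent to this shell one-liner, demonstrated here against a scratch directory standing in for ~/.mozilla/firefox (the profile name is made up):

```shell
# Create a stand-in profile layout and locate the *.default-release directory,
# mirroring what the Ansible find task does.
profiles=$(mktemp -d)
mkdir "$profiles/wxyz1234.default-release"
find "$profiles" -maxdepth 1 -type d -name '*.default-release'
```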

I later found that some other software I use has the same config issue, so I extracted the process start/kill tasks into a separate file, start-kill-process.yml.

- name: Starting process {{ item }}
  shell:
    cmd: "{{ item }} &"
  changed_when: false

- name: Waiting for process {{ item }} to start
  command: sleep 10s
  changed_when: false

- name: Killing process {{ item }}
  shell:
    cmd: kill $(pidof {{ item }})
  changed_when: false

This can then be called from your playbook with the following:

- name: Start/Kill myprogram to generate config
  include_tasks: start-kill-process.yml
  loop:
    - myprogram



© Alasdair Keyes

IT Consultancy Services

I'm now available for IT consultancy and software development services - Cloudee LTD.


