Reddit and Hacker News time sink

Posted: 2017-10-28 14:25:49 by Alasdair Keyes

Direct Link | RSS feed


Four weeks ago, I finally deleted the Hacker News and Reddit apps from my phone.

I get a great deal of entertainment and information from these two sites; however, I've found that over time I spend more and more of my day on them. If I feel remotely bored or disinterested, my go-to tool is Reddit. Instead of realising that I had spare time and could use it productively, I would just sink it into browsing whatever dross was on there.

On top of this, I found it was also starting to affect my sleep: if I woke up at 4.30am and was unable to get back to sleep, I would often grab my phone and browse. This was doing me no favours, hence my decision to delete the apps.

I still view both sites on my laptop, and I can obviously browse the websites on my phone too, but just removing the one-tap access and the icons from my screen has really had an effect: I find myself a lot less likely to browse for the sake of browsing.

This is, of course, no guarantee that I will use my time more productively, but it certainly won't hurt.


If you found this useful, please feel free to donate via bitcoin to 1NT2ErDzLDBPB8CDLk6j1qUdT6FmxkMmNz

Firefox Multi-Account Containers

Posted: 2017-09-18 12:59:05 by Alasdair Keyes

Direct Link | RSS feed


For anyone that uses Firefox, I strongly recommend you install Multi-Account Containers (https://addons.mozilla.org/en-GB/firefox/addon/multi-account-containers/).

It's written by Mozilla themselves and allows you to carve up Firefox into separate containers, keeping each container's cookies and logins isolated from the others.

The containers are colour coded and each tab has the colour of the container it's running in. There is a Default container which is used for all websites until you decide otherwise.

This means if I open up a new tab in the Personal container and go to Github, I get my personal account. If I open my Work tab and visit the same site, it's logged into my Work account. No more logging in and out or running multiple browsers.

You can also pin websites to specific containers. Create a Finance container and pin your credit card, banking and ISA websites into it, and whenever you visit those sites they'll automatically open in that container. Much less cross-site tracking, and extra protection against possible cross-site scripting vulnerabilities.

Do it, do it now.


If you found this useful, please feel free to donate via bitcoin to 1NT2ErDzLDBPB8CDLk6j1qUdT6FmxkMmNz

iproute and net-tools

Posted: 2017-09-14 18:01:52 by Alasdair Keyes

Direct Link | RSS feed


Linux admins who have been doing their thing for at least 10 years will be very familiar with the standard networking tools (ifconfig, netstat etc.) from the net-tools package. As you'll also know, these are no longer being developed and have been deprecated in favour of the newer iproute2 tool set.

This has been the case for many years, but I bet you still type ifconfig and route, don't you?... Years of muscle memory is a hard habit to break.

Although the iproute2 tools have been available for a long time, I still find myself reaching for the old ones. When I catch myself doing that, I force myself to look up how to do the same thing with the newer tools. I'm slowly getting there, but it'll take many years yet.

A friend passed me this handy guide. It's most useful for anyone trying to transition, and it's well worth a bookmark.

https://dougvitale.wordpress.com/2011/12/21/deprecated-linux-networking-commands-and-their-replacements/
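As a quick taster, here are a few of the everyday equivalents (the guide above covers many more):

# Show interface addresses
ip addr show          # replaces: ifconfig
# Bring an interface up or down
ip link set eth0 up   # replaces: ifconfig eth0 up
# Show the routing table
ip route show         # replaces: route -n
# Show the ARP/neighbour cache
ip neigh show         # replaces: arp -a
# Show listening sockets (ss ships alongside iproute2)
ss -tulpn             # replaces: netstat -tulpn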

As you transition, it's well worth remembering to update any scripts you have to use the new ones. There might come a time when net-tools is completely removed and you'll want to make sure your trusty helper scripts don't fail you!


If you found this useful, please feel free to donate via bitcoin to 1NT2ErDzLDBPB8CDLk6j1qUdT6FmxkMmNz

Save time... automate

Posted: 2017-09-11 17:24:45 by Alasdair Keyes

Direct Link | RSS feed


This is an old post I discovered I'd started writing... and then got sidetracked from for a couple of months, so the Hacker News article is a little old now.


I read this post on Hacker News, "What tasks do you automate?", and it got me thinking.

I'm a great fan of automation and most things get automated if I can manage it. Strangely enough, I quite enjoy writing automation code and configs. So here's a brief list of what I automate day-to-day:

Puppet

Most of my automation is rolled out via Puppet, but Ansible/Chef etc. are great alternatives. I even have a Puppet manifest to configure my Puppet master; it's puppets all the way down.

Backups

Probably the biggest one for anyone that works with computers. You only make the mistake of not having consistent backups once! There's a myriad of tools out there. Whether you're a home Windows/Apple user or an IT professional, make sure your data is backed up. I have a backup system rolled out to all my servers with a Puppet manifest; as soon as a machine connects to the Puppet master, it's backed up.

Project Builds

I do all my development in LXC containers or VMs. Every project that I begin starts with a bash script that I update with the commands I've run as I build up the system. This script is the second item, after a README.md file, to be committed into the repo. This means that after several months I can still check out the repository, run the script and be back where I was. In addition, if I decide to push the project into a CI pipeline, I have the tools available to get test builds working instantly. If I'm using Gitlab, I will also build this script into the CI pipeline to configure my containers for the testing phase.
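As an illustration (a made-up example rather than one of my real scripts), such a build script usually looks something like this:

#!/usr/bin/env bash
# build.sh - example environment build script for a hypothetical Perl project.
# Appended to with each command run while setting up the LXC container/VM.
set -euo pipefail

# Base packages for building and testing
apt-get update
apt-get install -y git build-essential perl libtest-simple-perl

# Project-specific configuration (hypothetical path)
install -m 0644 config/app.conf.example /etc/myapp.conf

# Run the test suite to prove the environment works
prove -r t/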

Server Builds

I run a number of servers for varying uses: web sites, email, gopher, XMPP etc. These are all configured through Puppet. Even simple servers accumulate tiny tweaks and changes over the years, and remembering all of these when you build another is nigh on impossible. Small changes, like adding a new email address to one of my domains, may take a couple of extra seconds because I update my Puppet config and then push it out, but it brings great peace of mind. I can now roll out all my servers in a matter of minutes.

Desktop builds

A continuation of the above: I have a Puppet manifest to install all the software I require and set up my BASH prompt, vim config, firewall config and IDE config.

Desktop builds change very quickly, with software often being installed and removed, so this one really pays off.

Nagios server monitoring

This is another one that I just set and leave. On top of this, it monitors my backups as well as my server build puppet runs. Again this is done through Puppet so my Nagios server is deployed in a matter of minutes on a new server.

Tiny tasks

My crontab is filled with lots of small scripts to do this and that. From emailing me the daily bitcoin price to checking domain availability.

These could be checked manually when I remember, but why spend the effort? Recently I even wrote one to scrape a site to find when the next ice hockey match was on and email me so I could get tickets. The possibilities are endless, but it all saves time and stops me overlooking something important.
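To give a flavour (the script names, paths and schedule here are made-up examples, not my actual crontab):

# m h dom mon dow  command
# Email me the day's bitcoin price every morning (hypothetical script path)
0 8 * * *  /home/alasdair/bin/btc-price-mail.sh
# Check whether any watched domains have become available (hypothetical script path)
30 8 * * * /home/alasdair/bin/check-domain-availability.sh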

I recall reading some advice along the lines of: "If you have to do it once, do it by hand. If you have to do it twice, automate it, because you will need to do it a third time."


If you found this useful, please feel free to donate via bitcoin to 1NT2ErDzLDBPB8CDLk6j1qUdT6FmxkMmNz

Gitlab Perl CI and CD Pipeline

Posted: 2017-09-10 12:38:47 by Alasdair Keyes

Direct Link | RSS feed


I've been looking at Gitlab's CI pipeline to automate testing and deployment of a Perl App I've been writing.

Gitlab's documentation on the subject is very comprehensive (https://docs.gitlab.com/ee/ci/pipelines.html), however there's no Perl example (https://docs.gitlab.com/ee/ci/examples/README.html), so I did a bit of playing to get a working configuration for anyone who is interested.

Gitlab makes it extremely easy to use its CI: you create a .gitlab-ci.yml file in the root of the repository to control the pipeline.

Firstly, we define the container image we wish to use. Gitlab runs jobs in Docker containers, so you can choose any image from Docker Hub (https://hub.docker.com/).

image: ubuntu:artful

Secondly, we define the before_script section; this is a script that is run to prepare the container for your jobs. A before_script can also be defined for specific jobs, as you will see in the deployment jobs later, but this global one will be executed for every stage of the build process.

before_script:
  - echo "Before script installation"
  - apt update
  - apt install libdevel-cover-perl libjson-xs-perl -y

Next, we define the stages of the CI pipeline. This is a fairly small app so there's just a test and deploy stage which we'll hook into.

stages:
  - test
  - deploy

We then define the execution of the unit tests. The test phase runs a single Perl harness script, which will in turn run all the test files under the t/ directory. It returns 0 if all tests are successful, otherwise 1. This makes it slightly easier than putting each test file into its own section.

The test is executed with the Devel::Cover module to produce code coverage output, which we then harvest with the coverage regex. This allows us to place the Coverage: X% badge on our website/README.md files.

test:unit:
  stage: test
  script:
    - perl -MDevel::Cover t/test_harness_script.pl
  coverage: /Total\s+.+\s(\d+\.\d+?)$/

Next, the deployment stages

This could be pretty complicated depending on your setup, so to simplify it for this example, I've just set it to log in to the remote server with ssh and perform a git pull.

On staging this is set to only run when pushing changes to the master branch. For production this runs only when pushing tags.

You will notice the $STAGING_PRIV_KEY and $PRODUCTION_PRIV_KEY variables. These are defined in the settings for your repository in the Gitlab UI under Settings -> Pipelines. They contain the private part of an SSH key used to access each environment. Make sure that you limit each variable to the environment it relates to; this prevents deploying to the wrong environment if you make a mistake in your pipeline configuration.

deploy_staging:
  stage: deploy
  before_script:
    - 'which ssh-agent || ( apt-get update -y && apt-get install openssh-client -y )'
    - eval $(ssh-agent -s)
    - ssh-add <(echo "$STAGING_PRIV_KEY")
  script:
    - echo "Deploy to staging server"
    - ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no user@hostname "cd gitfolder && git pull"
  environment:
    name: staging
    url: http://staging.example.com
  only:
  - master

deploy_production:
  stage: deploy
  before_script:
    - 'which ssh-agent || ( apt-get update -y && apt-get install openssh-client -y )'
    - eval $(ssh-agent -s)
    - ssh-add <(echo "$PRODUCTION_PRIV_KEY")
  script:
    - echo "Deploy to production server"
    - ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no user@hostname "cd gitfolder && git pull"
  environment:
    name: production
    url: http://www.example.com
  only:
  - tags

And that's it. Your testing and deployment pipeline is now building and deploying automatically.

You can add the badges to your README.md or website with the following markdown:

[![pipeline status](https://gitlab.com/account/repo/badges/master/pipeline.svg)](https://gitlab.com/account/repo/commits/master)
[![coverage report](https://gitlab.com/account/repo/badges/master/coverage.svg)](https://gitlab.com/account/repo/commits/master)

The full configuration for .gitlab-ci.yml is here.

image: ubuntu:artful

before_script:
  - echo "Before script installation"
  - apt update
  - apt install libdevel-cover-perl libjson-xs-perl -y

stages:
  - test
  - deploy

test:unit:
  stage: test
  script:
    - perl -MDevel::Cover t/test_harness_script.pl
  coverage: /Total\s+.+\s(\d+\.\d+?)$/

deploy_staging:
  stage: deploy
  before_script:
    - 'which ssh-agent || ( apt-get update -y && apt-get install openssh-client -y )'
    - eval $(ssh-agent -s)
    - ssh-add <(echo "$STAGING_PRIV_KEY")
  script:
    - echo "Deploy to staging server"
    - ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no user@hostname "cd gitfolder && git pull"
  environment:
    name: staging
    url: http://staging.example.com
  only:
  - master

deploy_production:
  stage: deploy
  before_script:
    - 'which ssh-agent || ( apt-get update -y && apt-get install openssh-client -y )'
    - eval $(ssh-agent -s)
    - ssh-add <(echo "$PRODUCTION_PRIV_KEY")
  script:
    - echo "Deploy to production server"
    - ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no user@hostname "cd gitfolder && git pull"
  environment:
    name: production
    url: http://www.example.com
  only:
  - tags


If you found this useful, please feel free to donate via bitcoin to 1NT2ErDzLDBPB8CDLk6j1qUdT6FmxkMmNz

Development Prompt

Posted: 2017-09-10 11:16:00 by Alasdair Keyes

Direct Link | RSS feed


I've been looking at a way to streamline my development recently, and I picked up on a couple of problems that I was facing.

I knocked up the following changes to my BASH prompt to help me tackle this.

My prompt now has the following field showing the previous command's exit status.

alasdair@machine (OK) ~ $ 

OK shows that the last command exited with 0

alasdair@machine (OK) ~ $ sdgsdfgsdfhsdfh
sdgsdfgsdfhsdfh: command not found
alasdair@machine (127) ~ $ 

On error the status will be shown instead

In addition, when I'm in a git repository, an extra field is added showing the following

(reponame.git[branch]-<TRACKED_FILE_CHANGES>:<UNTRACKED_FILES>)

In reality, it looks like this:

alasdair@machine (OK) (myrepo.git[master]-1:3) ~/myrepo $

I can see that on the master branch I have 1 modified file and 3 untracked files.

The code to achieve this is in the following gist: https://gitlab.com/snippets/1731310 - just add it to your .bashrc, .bash_profile or whichever file you use to control such things.

If you modify this, make sure that the last_command_status() function is called before anything else in the prompt, otherwise $? will already have been overwritten by the other prompt commands and the wrong return value will be picked up.
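The gist contains the full version; a minimal sketch of the same idea (not the gist's exact code) might look like this:

# Capture the exit status of the last command before anything else can overwrite $?
last_command_status() {
    local status=$?
    if [ "$status" -eq 0 ]; then
        LAST_STATUS="OK"
    else
        LAST_STATUS="$status"
    fi
}

# Build the git field: (reponame.git[branch]-<modified>:<untracked>), empty outside a repo
git_prompt_info() {
    git rev-parse --is-inside-work-tree >/dev/null 2>&1 || return
    local repo branch modified untracked
    repo=$(basename "$(git rev-parse --show-toplevel)")
    branch=$(git rev-parse --abbrev-ref HEAD)
    modified=$(git status --porcelain | grep -c '^ *M')
    untracked=$(git status --porcelain | grep -c '^??')
    echo " (${repo}.git[${branch}]-${modified}:${untracked})"
}

# last_command_status must run first in PROMPT_COMMAND so $? hasn't been clobbered
PROMPT_COMMAND="last_command_status"
PS1='\u@\h (${LAST_STATUS})$(git_prompt_info) \w \$ '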


If you found this useful, please feel free to donate via bitcoin to 1NT2ErDzLDBPB8CDLk6j1qUdT6FmxkMmNz

Hotmail.com, Outlook.com, Live.com blacklisting... a pleasant experience

Posted: 2017-07-10 16:59:01 by Alasdair Keyes

Direct Link | RSS feed


I've recently migrated my server to https://www.arubacloud.com/.

And upon sending an email to Hotmail, I received the dreaded bounceback...

SMTP error from remote mail server after MAIL FROM:<someemailaddress> SIZE=4705: host hotmail-co-uk.olc.protection.outlook.com [104.44.194.235]: 550 SC-001 (SNT004-MC9F10) Unfortunately, messages from 1.2.3.4 weren't sent. Please contact your Internet service provider since part of their network is on our block list. You can also refer your provider to http://mail.live.com/mail/troubleshooting.aspx#errors.

The URL links to the following description of code SC-001.

Mail rejected by Outlook.com for policy reasons. Reasons for rejection may be related to content with spam-like characteristics or IP/domain reputation. If you are not an email/network admin please contact your Email/Internet Service Provider for help.

My IP is not on any well known block lists such as http://barracudacentral.org/rbl or https://www.spamhaus.org/ so I had no quick and easy way of delisting.

I was hit with a sense of sudden dread and the horrible sinking feeling that you only get when you realise you have to speak to a support team at a large multi-national IT company.

You know what's coming... hours of arguing that it's not a server misconfiguration and that your DNS/SPF/DKIM/MX setup is all valid and correct; waiting days for a reply to your well-reasoned email only to receive a canned response that doesn't address anything close to your complaint; yadda, yadda, yadda.

Or so I thought... I found the following "Sender information form" at https://support.live.com/eform.aspx?productKey=edfsmsbl3&ct=eformts&wa=wsignin1.0&scrx=1 which I filled out, and within an hour I had received a couple of emails back from Microsoft saying that they had conditionally mitigated the restriction on my IP and that emails would be allowed through at a reduced rate limit until its reputation improved.

I left it a couple of hours and was then able to send to Hotmail with no bouncebacks!

OK... so Microsoft could really try harder with their URLs and user/search-engine friendliness, but I was incredibly impressed with their response on this. It used to be that Hotmail would often blackhole your emails with no recourse, but they really seem to be on the ball with this now and I'm most impressed.


If you found this useful, please feel free to donate via bitcoin to 1NT2ErDzLDBPB8CDLk6j1qUdT6FmxkMmNz

Advanced templating in Puppet

Posted: 2017-07-02 10:54:40 by Alasdair Keyes

Direct Link | RSS feed


For my web servers, I have a Puppet class that takes a hostname and sets up all the elements required to create a hosting account, including...

All of the configs are very easily templated except for Nginx. My requirement for this class is that if I have a predefined Nginx config for a site, Puppet should use it; otherwise it should generate one from a default template.

This turned out to be a harder task than I anticipated, but I managed to find a solution which could be useful to others.

    $hostname = $title
    $username = $hostname
    $home     = "/path/to/hosting/space/${hostname}"

    $nginx_config = "/etc/nginx/sites-available/${hostname}.conf"

    file { $nginx_config:
        ensure  => 'present',
        owner   => 'root',
        group   => 'root',
        mode    => '0644',
        # file() returns the content of the first path that exists, so a
        # predefined static config for this site wins over the default template
        content => inline_template(
            file(
                "<MODULENAME>/etc/nginx/static_site_files/${hostname}.conf",
                "<MODULENAME>/etc/nginx/default_site_template.conf"
            )
        ),
        notify  => Service['nginx'],
    }

It's a bit of a messy solution as we are actually putting a template within the files/ folder. As an alternative you can use an absolute path for file() and put the template within the templates/ folder, but this gets a bit problematic if you change your absolute paths.

It works in the following way: the file() function takes a list of paths and returns the content of the first file that exists. If a static per-site config is present under static_site_files/, that content is used; otherwise Puppet falls back to default_site_template.conf. The chosen content is then passed through inline_template(), so the default template's ERB tags are rendered with the class's variables, while a static file with no ERB tags simply passes through untouched.

This means new default sites can be created easily, and then customisations are simple to implement.


If you found this useful, please feel free to donate via bitcoin to 1NT2ErDzLDBPB8CDLk6j1qUdT6FmxkMmNz

Bitcoin Guide for Beginners

Posted: 2017-05-22 14:34:34 by Alasdair Keyes

Direct Link | RSS feed


With the price of 1 Bitcoin reaching 1600 GBP, now is as good a time as any to get on board if you're not already.

I came across this guide on Reddit (https://www.reddit.com), which is well worth reading if you want to get started: https://howtobuybitcoin.io/.


If you found this useful, please feel free to donate via bitcoin to 1NT2ErDzLDBPB8CDLk6j1qUdT6FmxkMmNz

A look at CockroachDB

Posted: 2017-05-20 13:10:28 by Alasdair Keyes

Direct Link | RSS feed


CockroachDB has been floating around for a few years and version 1.0 has just been released ready for production. I had been aware of the system for some time but had never really played with it, so I decided now would be as good a time as any to prod it a little.

This article won't be any kind of how-to because their own documentation (available at https://www.cockroachlabs.com/docs/) is fantastic and if you do look into using CockroachDB, it is by far the best place to start.

My main DB experience is with configuring, maintaining and developing on MySQL (although I've slowly been using Postgresql on projects due to the advanced features it provides), so these two RDBMS systems are my benchmark going forward. (Within the MySQL/Postgresql labels I'm also including the add-ons and enterprise tools such as Percona for MySQL, and EnterpriseDB for Postgresql.)

As the name 'CockroachDB' suggests, the system is designed to be hard to kill, being able to provide a scalable, fault tolerant, distributed DB solution which will continue running with multiple nodes missing.

For any system nowadays, high availability is not a 'nice to have' feature or a requirement to consider later, but something that requires careful thought and planning from the outset, even for the most basic set-up. For many companies this is often just setting up a MySQL master/slave (or master/master if you're a sadist and into hacky solutions) or Postgresql's streaming replication to "kind-of-sort-of" get some duplication of data and redundancy. Although this does provide some quick wins over a single-node setup, in a modern platform that needs to minimise downtime and remove the risk of data loss it is not a good solution. Postgres has some solutions such as PG-Pool, EDB Failover Manager, PgBouncer etc., but these are still tacked on and, from experience, not a solution that I would want to force my business to rely on.

It's with this experience that I've been waiting for something like CockroachDB. On top of this, it's good to see that 'old-fashioned' relational databases are still getting new blood after the rapid rise of NoSQL systems over the last 10 years.

From having a play about, these are the key things that popped out at me (though I'm sure there are many others).

Any node can be used to run SQL queries

With clusters such as MySQL's NDB, there are data nodes and SQL nodes. Clients can only run queries via the SQL nodes, and I've always thought this a limitation: you are not utilising your cluster to the full. With CockroachDB, if a node is running, you can connect to it and run SQL queries against the data in the cluster. You will need some way of managing connections; the simple way is with HAProxy, and they even provide a way of generating the HAProxy config automatically for you: https://www.cockroachlabs.com/docs/manual-deployment.html#step-5-set-up-haproxy-load-balancers
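As a rough sketch on an insecure test cluster (double-check the flags against the docs linked above for your version), generating and using that config looks something like:

# Ask a running node to generate an haproxy.cfg covering every node in the cluster
cockroach gen haproxy --insecure --host=cdbnode01
# Then point HAProxy at the generated file
haproxy -f haproxy.cfg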

Easy to scale

And when I say easy.... I mean easy. With any real-world use-case you will be tweaking and configuring your system to use many more switches, but in testing, I just started a node and told it which cluster to join and it just joined, synced data and became usable within seconds.

cockroach start --insecure --host=cdbnode02 --join=cdbnode01 --background

And that's it.

Note: The --insecure switch just allows you to run a local cluster without generating TLS CA/client certs etc.; it would not be used in a live environment.

Web Interface

I'm not usually a fan of pretty interfaces for server applications; they often sacrifice the brevity and conciseness of a command line for very little benefit. However, CockroachDB starts a web interface by default when the node starts... and it's fantastic. The interface is clean and easily understandable: you can view DB logs, statistics, cluster information and node details all through one screen. With DB systems, interfaces like this usually require installing some bulky Java app or paying a fortune for 'Enterprise' tools, but this is neither, and it's invaluable for monitoring the health and performance of your cluster.

CockroachDB uses the Postgres interface

Any web developer will have got used to using MySQL/Postgresql/MSSQL/other RDBMS client libraries for their chosen language, and it can take some time for a new DB to get a mature, reliable library. With CockroachDB this is not an issue. The system is designed to be compatible with the Postgresql wire protocol, so you can use the existing libraries for your language and get stuck right in: https://www.cockroachlabs.com/docs/install-client-drivers.html
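As a sketch, because the nodes speak the Postgres protocol on port 26257, even the stock psql client should be able to connect to an insecure test node (host, database and user below are placeholders):

# Connect to a local insecure CockroachDB node using the standard Postgres client
psql "postgresql://root@localhost:26257/mydb?sslmode=disable"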

This is also a hedge for users should CockroachDB not work out. A company can go down the CockroachDB road early on, secure in the knowledge that even if the project shuts up shop after 5 years, there is a migration path to Postgresql and their application will require no big re-write. This will really help adoption early on and is a great decision by Cockroach Labs.

Simplicity of installation

I can only speak for Linux, but I assume Mac/Windows is the same. Everything is available in one binary, and installation is just a case of downloading that binary and placing it into /usr/local/bin/ (or elsewhere, should you prefer), and that is it. The same binary provides server and client tools all in one. If I were to use this in production, I would likely use BASH aliases or similar to split out the client/server functionality, but the single binary means that upgrades are a doddle.

Simplicity of configuration and setup

CockroachDB has taken an interesting path with configuration... there are no config files. Everything is configured using command-line switches. There are no SysV Init/Upstart/Systemd units shipped with it; setup is controlled by creating a startup script with all your settings and placing it into a VCS.

One thing to note is that CockroachDB uses $HOME as a base for creating/storing files by default.

Simplicity of Upgrades

Going back to the simplicity of a single binary: CockroachDB is designed for a rolling upgrade path (see here for more information: https://www.cockroachlabs.com/docs/upgrade-cockroach-version.html). To upgrade, you just roll out the new binary and restart each node in turn.

Simplicity of backups

Most MySQL/Postgresql clusters have a slave set aside solely for backups. This is a node that receives updates from the master but accepts no other client connections, so it can be used to back up data without causing locking/load issues. You can do the same with CockroachDB: add another node, firewall off client IPs and set up a backup cron: https://www.cockroachlabs.com/docs/back-up-data.html
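A minimal sketch of such a cron job, assuming an insecure cluster and a database called mydb (the dump command is as described in the 1.0 docs linked above, so verify the flags against your version):

# Nightly logical dump of 'mydb' at 02:00 (paths and database name are hypothetical)
0 2 * * * /usr/local/bin/cockroach dump mydb --insecure --host=localhost > /var/backups/mydb-$(date +\%F).sql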

I will continue to play about with Cockroach. I have not had to use it in anger or run any performance benchmarks against it, so I have no idea how it compares with other RDBMSs, but as a first look, it is outstanding.

One thing that I am impressed with is how stable it feels: the availability of tools such as the web interface and the ease of set-up and configuration make it feel like an extremely mature and safe system. And although it's not at all logical to make decisions on a hunch, feeling safe and confident in a piece of software is nine tenths of the battle when a team or company is deciding whether to adopt a particular database solution.

I'm extremely excited about the idea of an easy-to-use, reliable HA database system such as CockroachDB. The only worry I have is that, in a cloud-driven era where lots of people are already invested in platforms such as MySQL on AWS, it will be hard for CockroachDB to get a foot in the door. It would, however, be fantastic for an in-house system.

Note: Beware that Cockroach does send scrubbed diagnostics information back to Cockroach Labs; see here for information on how to stop it: https://www.cockroachlabs.com/docs/diagnostics-reporting.html

Note #2: Reading the FAQ is a must; there are some use cases for which CockroachDB is not suitable: https://www.cockroachlabs.com/docs/frequently-asked-questions.html


If you found this useful, please feel free to donate via bitcoin to 1NT2ErDzLDBPB8CDLk6j1qUdT6FmxkMmNz

© Alasdair Keyes
