Have you been pwned? Maybe not as fully as you think

Posted: 2018-03-02 09:10:09 by Alasdair Keyes



If you're interested in security breaches, you're probably aware of the site Have I Been Pwned (HIBP), run by Troy Hunt.

I've used it to check email addresses in the past; however, Troy has added some useful new features to the site over the past few years.

I gave the domain search option a go. Instead of searching for a single address, you give it your domain and it identifies every email alias on that domain that has been found in compromised lists: https://haveibeenpwned.com/DomainSearch If you operate your own personal or company domain(s), it's well worth looking into.

It's very straightforward: you can validate domain ownership using a number of methods (DNS, email or HTTP) and download the information in various formats such as MS Excel or JSON.
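As an illustration of the DNS method, services like this typically have you publish a one-off token in a TXT record and then query for it. The record value below is made up for illustration; it is not HIBP's actual token format.

# Publish the token the site gives you in a TXT record, then confirm it's visible:
dig +short TXT akeyes.co.uk
# "verification-token=abc123"   <- illustrative output; the real token format will differ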

When reviewing this information, one thing I noticed is that the Onliner Spambot breach listed quite a few aliases on my domains that I have never used. In particular, I've owned the akeyes.co.uk domain since 2005 and it was unregistered before then, so the data is unlikely to be from a previous domain owner. In fact, on akeyes.co.uk only 2 of the 9 listed aliases could ever have received email and therefore been used to access online services.

My first thought was that these aliases were there as part of a scatter-gun approach to spam. However, as the leak they came from also contained passwords or password hashes, there are some other possible inferences to draw from this data.

  1. There's no indication as to which aliases had passwords (apparently not all did), but given the leak description's note that "many of which were also accompanied by corresponding passwords", we can assume over 25% did. If these addresses have never been used for either mail or online services, it seems unlikely that any legitimate password exists for them. Perhaps a password was obtained for leakedemail@domain.com and then tried against other common aliases on the same domain in an attempt to compromise a mail server account. This would be a far more efficient way of trying to compromise a mailbox than just trying known passwords from other domains.

  2. Although the sale of personal/account details is quite prevalent, the cost per email/password combination is very low. If this list was built through the purchase of compromised details, it could indicate that black-market sellers are padding out their lists with dummy addresses and passwords/password hashes to make them more appealing to buyers.

  3. Nefarious types may have signed up to online services using email addresses on my domain, and those services were later exploited. This might be quite common with well-known domains such as microsoft.com, but I'd say it's unlikely on a domain as obscure as mine, unless an online service had a known issue that could somehow be exploited in this manner.

When we hear of data on 100 million users being leaked, it's worth bearing in mind that a fair proportion of the records may be fake, or at least of dubious origin. This doesn't make the security breaches and data leaks any less serious, as they contain real information as well, and sites like HIBP are doing good work in making people aware of compromises and, hopefully, holding some to account.


If you found this useful, please feel free to donate via bitcoin to 1NT2ErDzLDBPB8CDLk6j1qUdT6FmxkMmNz

JetBrains IntelliJ Community Edition

Posted: 2018-02-28 22:46:48 by Alasdair Keyes



Having used JetBrains' PHPStorm for a long while in my PHP dev roles, I was interested to try their IntelliJ IDEA Community Edition offering.

It's built on the same IDEA platform that PHPStorm is based on, but the main draw for a lot of people will be that it's free.

There are some limitations; for example, it doesn't support JetBrains' PHP plugin, which would otherwise turn it into PHPStorm for free. CSS, Ruby, JavaScript and others are also unsupported (full list here). But if your language is supported through a community plugin, you get the power of JetBrains without the cost!

I still do a large amount of Perl development, and thankfully the fantastic Perl plugin works a treat.

If you're unable to afford the licence fees for your chosen JetBrains product, it's worth seeing if this will work for you in its cut-down form.


If you found this useful, please feel free to donate via bitcoin to 1NT2ErDzLDBPB8CDLk6j1qUdT6FmxkMmNz

Reddit and Hacker News time sink

Posted: 2017-10-28 14:25:49 by Alasdair Keyes



Four weeks ago, I finally deleted the Hacker News and Reddit apps from my phone.

I get a great deal of entertainment and information from these two sites; however, I've found that over time I was spending more and more time on them. If I felt remotely bored or disengaged, my go-to tool was Reddit. Instead of realising that I had spare time and could use it productively, I would just sink it into browsing whatever dross was on there.

On top of this, I found it was starting to affect my sleep: if I woke up at 4.30am and was unable to get back to sleep, I would often grab my phone and browse. This was doing me no favours, hence my decision to delete the apps.

I still view both sites on my laptop, and I can obviously browse the websites on my phone too, but removing the ability to load the sites with one tap, and no longer seeing the icons on my screen, has really had an effect. I find myself a lot less likely to browse for the sake of browsing.

This is, of course, no guarantee that I will use my time more productively, but it certainly won't hurt.


If you found this useful, please feel free to donate via bitcoin to 1NT2ErDzLDBPB8CDLk6j1qUdT6FmxkMmNz

Firefox Multi-Account Containers

Posted: 2017-09-18 12:59:05 by Alasdair Keyes



For anyone that uses Firefox, I strongly recommend installing Multi-Account Containers: https://addons.mozilla.org/en-GB/firefox/addon/multi-account-containers/

It's written by Mozilla themselves and allows you to carve up Firefox into separate containers.

The containers are colour coded and each tab has the colour of the container it's running in. There is a Default container which is used for all websites until you decide otherwise.

This means if I open up a new tab in the Personal container and go to Github, I get my personal account. If I open my Work tab and visit the same site, it's logged into my Work account. No more logging in and out or running multiple browsers.

You can also pin websites to specific containers. Create a Finance container and pin your credit card, banking and ISA websites into it, and those sites will always open in that container automatically. Much less cross-site tracking, and extra protection against possible cross-site scripting vulnerabilities.

Do it, do it now.


If you found this useful, please feel free to donate via bitcoin to 1NT2ErDzLDBPB8CDLk6j1qUdT6FmxkMmNz

iproute and net-tools

Posted: 2017-09-14 18:01:52 by Alasdair Keyes



Linux admins who have been doing their thing for at least 10 years will be very familiar with the standard networking tools, ifconfig, netstat etc., from the net-tools package. As you'll also know, these are no longer being developed and have been deprecated in favour of the newer iproute2 tool set.

This has been the case for many years, but I bet you still type ifconfig and route, don't you?... Years of muscle memory is a hard habit to break.

Although iproute2 has been available for a long time, I still find myself using the old tools as well. When I catch myself doing that, I force myself to look up how to do the same thing with the newer ones; see the quick reference below. I'm slowly getting there, but it'll take many years yet.
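For the most common cases, the translations look like this (a quick reference rather than an exhaustive mapping):

# net-tools                # iproute2 equivalent
ifconfig -a                ip addr show
ifconfig eth0 up           ip link set eth0 up
route -n                   ip route show
netstat -tulpn             ss -tulpn
arp -a                     ip neigh show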

As a handy guide, a friend passed me the link below. It's most useful for those trying to transition, and it's well worth a bookmark.

https://dougvitale.wordpress.com/2011/12/21/deprecated-linux-networking-commands-and-their-replacements/

As you transition, it's well worth remembering to update any scripts of yours to use the new tools. There might come a time when net-tools is removed completely, and you'll want to make sure your trusty helper scripts don't fail you!


If you found this useful, please feel free to donate via bitcoin to 1NT2ErDzLDBPB8CDLk6j1qUdT6FmxkMmNz

Save time... automate

Posted: 2017-09-11 17:24:45 by Alasdair Keyes



This is an old post I discovered I'd started writing... and then got sidetracked from for a couple of months, so the Hacker News article is a little old now.


I read this post on automation, Hacker News: What tasks do you automate?, and it got me thinking.

I'm a great fan of automation and most things get automated if I can manage it. Strangely enough, I quite enjoy writing automation code/configs. So here's a brief list of what I automate day-to-day:-

Puppet

Most of my automation is rolled out via Puppet, but Ansible/Chef etc. are great alternatives. I even have a Puppet manifest to configure my Puppet master; it's puppets all the way down.

Backups

Probably the biggest one for anyone that works with computers: you only make the mistake of not having consistent backups once! There is a myriad of tools out there. Whether you're a home Windows/Apple user or an IT professional, make sure your data is backed up. I have a backup system rolled out to all my servers with a Puppet manifest, so as soon as a machine connects to the Puppet master it's backed up. A sketch of the idea follows.
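Something along these lines; the class, package and paths here are illustrative rather than my actual manifest:

# Hypothetical sketch: install a backup tool, drop its config and schedule it
class backups {
  package { 'rsnapshot':
    ensure => installed,
  }

  file { '/etc/rsnapshot.conf':
    ensure  => file,
    source  => 'puppet:///modules/backups/rsnapshot.conf',
    require => Package['rsnapshot'],
  }

  cron { 'nightly-backup':
    command => '/usr/bin/rsnapshot daily',
    hour    => 2,
    minute  => 0,
  }
}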

Project Builds

I do all my development in LXC containers or VMs. Every project I begin starts with a bash script that I update with the commands I've run as I build up the system. This script is the second item committed into the repo, after the README.md file. It means that after several months I can still check out the repository, run the script and be back where I was. In addition, if I decide to push the project into a CI pipeline, I have the tools available to get test builds working instantly. If I'm using GitLab, I also build this script into the CI pipeline to configure my containers for the testing phase. A sketch is shown below.
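The script itself stays trivial; a sketch of the shape it takes (the package names are placeholders):

#!/usr/bin/env bash
# build.sh - replay the project's setup on a fresh container/VM
set -euo pipefail

apt-get update
apt-get install -y build-essential git libtest-most-perl

# ...each new setup command gets appended here as the project grows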

Server Builds

I run a number of servers for varying uses: web sites, email, gopher, XMPP etc. These are all configured through Puppet. Even simple servers accumulate tiny tweaks and changes over the years, and remembering all of these when you build another is nigh on impossible. Small changes, such as adding a new email address to one of my domains, may take a couple of extra seconds when done by updating my Puppet config and pushing it out, but it brings great peace of mind. I can now roll out all my servers in a matter of minutes.

Desktop builds

A continuation of the above: I have a Puppet manifest to install all the software I require and to set up my BASH prompt, vim config, firewall config and IDE config.

Desktop builds change very quickly, with software often being installed and removed, so this one really pays off.

Nagios server monitoring

This is another one that I just set and leave. On top of the servers themselves, it monitors my backups as well as my server-build Puppet runs. Again, this is done through Puppet, so my Nagios server can be deployed on a new machine in a matter of minutes. A sample check is sketched below.
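For a flavour of what one of those checks looks like, a Nagios service definition along these lines (the host and command names are made up):

define service {
    use                   generic-service
    host_name             web01
    service_description   Backup freshness
    check_command         check_nrpe!check_backup_age
}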

Tiny tasks

My crontab is filled with lots of small scripts to do this and that, from emailing me the daily bitcoin price to checking domain availability.

These could be checked manually whenever I remember, but why spend the effort? Recently I even wrote one to scrape a site to find when the next ice hockey match was on, so I could get tickets, and email me. The possible tasks are endless, but it all saves time and stops me overlooking something important. A couple of example entries follow.
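For illustration, a couple of crontab entries in this vein (the script names are made up):

# m   h   dom mon dow   command
0     8   *   *   *     /usr/local/bin/btc-price-email.sh
30    7   *   *   1     /usr/local/bin/check-domain-availability.sh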

I recall reading some advice along the lines of: "If you have to do it once, do it by hand. If you have to do it twice, automate it, because you will need to do it a third time."


If you found this useful, please feel free to donate via bitcoin to 1NT2ErDzLDBPB8CDLk6j1qUdT6FmxkMmNz

Gitlab Perl CI and CD Pipeline

Posted: 2017-09-10 12:38:47 by Alasdair Keyes



I've been looking at Gitlab's CI pipeline to automate testing and deployment of a Perl app I've been writing.

Gitlab's documentation on the subject is very comprehensive (https://docs.gitlab.com/ee/ci/pipelines.html); however, there's no Perl example (https://docs.gitlab.com/ee/ci/examples/README.html), so I did a bit of playing to get a working configuration for those who are interested.

Gitlab makes it extremely easy to use their CI: you just create a .gitlab-ci.yml file to control the pipeline.

Firstly, we define the container image we wish to use. Gitlab uses Docker containers, so you can choose any image from Docker Hub: https://hub.docker.com/

image: ubuntu:artful

Secondly, we define the before_script section; this is a script that runs to prepare the containers for your tests. A before_script can also be defined per job to override this one, as you will see in the deployment jobs later, but this global version is executed for every stage of the build process.

before_script:
  - echo "Before script installation"
  - apt update
  - apt install libdevel-cover-perl libjson-xs-perl -y

Next, we define the stages of the CI pipeline. This is a fairly small app, so there are just test and deploy stages, which we'll hook into.

stages:
  - test
  - deploy

We then define the execution of the unit tests. The test phase runs a single Perl harness script, which in turn runs all the test files under the t/ directory and returns 0 if all tests are successful, otherwise 1. This makes it slightly easier than putting each test file into its own section.

The test is executed with the Devel::Cover module to produce code-coverage output, which we then harvest with the coverage regex. This allows us to place the Coverage: X% badge on our website/README.md files.

test:unit:
  stage: test
  script:
    - perl -MDevel::Cover t/test_harness_script.pl
  coverage: /Total\s+.+\s(\d+\.\d+?)$/
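For reference, a minimal sketch of what t/test_harness_script.pl might look like; the filename comes from the config above, but the body here is an assumption rather than the app's actual script:

#!/usr/bin/env perl
use strict;
use warnings;
use TAP::Harness;

my $harness    = TAP::Harness->new({ verbosity => 0 });
my @test_files = glob('t/*.t');
my $aggregator = $harness->runtests(@test_files);

# Exit 0 only if every test passed, so the CI job fails on any failure
exit($aggregator->all_passed ? 0 : 1);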

Next, the deployment stages.

This could be pretty complicated depending on your setup, so to simplify things for this example I've just set it to log in to the remote server with SSH and perform a git pull.

On staging, this is set to run only when pushing changes to the master branch. For production, it runs only when pushing tags.

You will notice the $STAGING_PRIV_KEY and $PRODUCTION_PRIV_KEY variables. These are defined in the settings for your repository in the GitLab UI under Settings -> Pipelines. They contain the private part of an SSH key used to access the environments. Make sure you limit each variable to the environment it relates to; this prevents deployment to the wrong environment if you make a mistake in your pipeline configuration.

deploy_staging:
  stage: deploy
  before_script:
    - 'which ssh-agent || ( apt-get update -y && apt-get install openssh-client -y )'
    - eval $(ssh-agent -s)
    - ssh-add <(echo "$STAGING_PRIV_KEY")
  script:
    - echo "Deploy to staging server"
    - ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no user@hostname "cd gitfolder && git pull"
  environment:
    name: staging
    url: http://staging.example.com
  only:
  - master

deploy_production:
  stage: deploy
  before_script:
    - 'which ssh-agent || ( apt-get update -y && apt-get install openssh-client -y )'
    - eval $(ssh-agent -s)
    - ssh-add <(echo "$PRODUCTION_PRIV_KEY")
  script:
    - echo "Deploy to production server"
    - ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no user@hostname "cd gitfolder && git pull"
  environment:
    name: production
    url: http://www.example.com
  only:
  - tags

And that's it. Your testing and deployment pipeline is now building and deploying automatically.

You can add the badges to your README.md or site with the following markdown:

[![pipeline status](https://gitlab.com/account/repo/badges/master/pipeline.svg)](https://gitlab.com/account/repo/commits/master)
[![coverage report](https://gitlab.com/account/repo/badges/master/coverage.svg)](https://gitlab.com/account/repo/commits/master)

The full configuration for .gitlab-ci.yml follows.

image: ubuntu:artful

before_script:
  - echo "Before script installation"
  - apt update
  - apt install libdevel-cover-perl libjson-xs-perl -y

stages:
  - test
  - deploy

test:unit:
  stage: test
  script:
    - perl -MDevel::Cover t/test_harness_script.pl
  coverage: /Total\s+.+\s(\d+\.\d+?)$/

deploy_staging:
  stage: deploy
  before_script:
    - 'which ssh-agent || ( apt-get update -y && apt-get install openssh-client -y )'
    - eval $(ssh-agent -s)
    - ssh-add <(echo "$STAGING_PRIV_KEY")
  script:
    - echo "Deploy to staging server"
    - ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no user@hostname "cd gitfolder && git pull"
  environment:
    name: staging
    url: http://staging.example.com
  only:
  - master

deploy_production:
  stage: deploy
  before_script:
    - 'which ssh-agent || ( apt-get update -y && apt-get install openssh-client -y )'
    - eval $(ssh-agent -s)
    - ssh-add <(echo "$PRODUCTION_PRIV_KEY")
  script:
    - echo "Deploy to production server"
    - ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no user@hostname "cd gitfolder && git pull"
  environment:
    name: production
    url: http://www.example.com
  only:
  - tags


If you found this useful, please feel free to donate via bitcoin to 1NT2ErDzLDBPB8CDLk6j1qUdT6FmxkMmNz

Development Prompt

Posted: 2017-09-10 11:16:00 by Alasdair Keyes



I've been looking at ways to streamline my development recently, and I picked up on a couple of problems I was facing.

I knocked up the following changes to my BASH prompt to help me tackle this.

My prompt now has the following field showing the previous command's status.

alasdair@machine (OK) ~ $ 

OK shows that the last command exited with status 0.

alasdair@machine (OK) ~ $ sdgsdfgsdfhsdfh
sdgsdfgsdfhsdfh: command not found
alasdair@machine (127) ~ $ 

On error, the exit status is shown instead.

In addition, when I'm in a git repository, an extra field is added showing the following

(reponame.git[branch]-<TRACKED_FILE_CHANGES>:<UNTRACKED_FILES>)

In reality, this looks like so

alasdair@machine (OK) (myrepo.git[master]-1:3) ~/myrepo $

I can see that on the master branch I have 1 modified file and 3 untracked files.

The code to achieve this is in the following gist: https://gitlab.com/snippets/1731310 - just add it to your .bashrc, .bash_profile or whichever file you use to control such things.

If you modify this, make sure the last_command_status() function is called first; otherwise incorrect return values will be picked up. A sketch of the mechanism is below.
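A minimal sketch of the approach (this is the shape of the idea, not the gist itself): $? must be captured before anything else in PROMPT_COMMAND can overwrite it.

last_command_status() {
    local status=$?
    if [ "$status" -eq 0 ]; then
        LAST_STATUS="OK"
    else
        LAST_STATUS="$status"
    fi
}
PROMPT_COMMAND="last_command_status"
PS1='\u@\h ($LAST_STATUS) \w \$ '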


If you found this useful, please feel free to donate via bitcoin to 1NT2ErDzLDBPB8CDLk6j1qUdT6FmxkMmNz

Hotmail.com, Outlook.com, Live.com blacklisting... a pleasant experience

Posted: 2017-07-10 16:59:01 by Alasdair Keyes



I've recently migrated my server to https://www.arubacloud.com/.

And upon sending an email to Hotmail, I received the dreaded bounceback...

SMTP error from remote mail server after MAIL FROM:<someemailaddress> SIZE=4705: host hotmail-co-uk.olc.protection.outlook.com [104.44.194.235]: 550 SC-001 (SNT004-MC9F10) Unfortunately, messages from 1.2.3.4 weren't sent. Please contact your Internet service provider since part of their network is on our block list. You can also refer your provider to http://mail.live.com/mail/troubleshooting.aspx#errors.

The URL links to the following description of code SC-001.

Mail rejected by Outlook.com for policy reasons. Reasons for rejection may be related to content with spam-like characteristics or IP/domain reputation. If you are not an email/network admin please contact your Email/Internet Service Provider for help.

My IP is not on any well-known block lists such as http://barracudacentral.org/rbl or https://www.spamhaus.org/, so I had no quick and easy way of getting delisted.
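Checking that is quick to do by hand: reverse the IP's octets and query the blocklist's DNS zone. Using the 1.2.3.4 placeholder from the bounce above:

dig +short 4.3.2.1.zen.spamhaus.org        # Spamhaus
dig +short 4.3.2.1.b.barracudacentral.org  # Barracuda
# No answer (NXDOMAIN) means the IP isn't listed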

I was hit with a sense of sudden dread and the horrible sinking feeling that you only get when you realise you have to speak to a support team at a large multi-national IT company.

You know what's coming... hours of arguing that it's not a server misconfiguration and that my DNS/SPF/DKIM/MX setup is actually valid and correct. Waiting days for a reply to your well-reasoned email, only to receive a canned response that doesn't address anything close to your complaint. Yadda, yadda, yadda.

Or so I thought... I found the "Sender information form" at https://support.live.com/eform.aspx?productKey=edfsmsbl3&ct=eformts&wa=wsignin1.0&scrx=1 which I filled out, and within an hour I had received a couple of emails back from Microsoft saying that they had conditionally mitigated the restriction on my IPs and that emails would be allowed through at a decreased rate limit until the IP's reputation improved.

I left it a couple of hours and I was able to send to Hotmail with no bouncebacks!

OK... so Microsoft could really try harder with their URLs and user/search-engine friendliness, but I was incredibly impressed with their response on this. It used to be that Hotmail would often blackhole your emails and you had no recourse, but they really seem to be on the ball with this and I'm most impressed.


If you found this useful, please feel free to donate via bitcoin to 1NT2ErDzLDBPB8CDLk6j1qUdT6FmxkMmNz

Advanced templating in Puppet

Posted: 2017-07-02 10:54:40 by Alasdair Keyes



For my web servers, I have a Puppet class that takes a hostname and sets up all the elements required to create a hosting account, including a user account, home directory and Nginx configuration.

All the configs are very easily templated except for Nginx's. My requirement for this class is that if I have a predefined Nginx config, Puppet should use it; otherwise it should generate a config from a default template.

This turned out to be a harder task than I anticipated, but I managed to find a solution which could be useful to others.

    $hostname = $title
    $username = $hostname
    $home = "/path/to/hosting/space/$hostname"

    $nginx_config = "/etc/nginx/sites-available/$hostname.conf"

    file { $nginx_config:
        ensure  => 'present',
        owner   => 'root',
        group   => 'root',
        mode    => '0644',
        # file() returns the content of the first path that exists, so a
        # host-specific config wins over the default template; inline_template()
        # then renders whichever was found as an ERB template.
        content => inline_template(
            file(
                "<MODULENAME>/etc/nginx/static_site_files/$hostname.conf",
                "<MODULENAME>/etc/nginx/default_site_template.conf"
            )
        ),
        notify  => Service['nginx'],
    }

It's a bit of a messy solution, as we're actually putting a template within the files/ folder. As an alternative, you can pass file() an absolute path and keep the template within the templates/ folder, but that gets a bit problematic if your absolute paths ever change.

It works in the following way: file() returns the content of the first file in its argument list that actually exists. If a static, host-specific config is present under static_site_files/ it is used verbatim; otherwise the default template's content is returned. inline_template() then evaluates whichever content it received as an ERB template.

This means new default sites can be created easily and then customisations are simple to implement.


If you found this useful, please feel free to donate via bitcoin to 1NT2ErDzLDBPB8CDLk6j1qUdT6FmxkMmNz
