PHP Docker Image and opcache

Posted: 2020-07-25 12:19:25 by Alasdair Keyes



I've recently started working on a new project using NGINX, PHP 7.4, Redis, PostgreSQL and Laravel 7.

As it's a new project I thought I would Dockerise it from the start. After configuring my docker-compose.yml file I built the environment and installed Laravel 7 with the Laravel Debugbar (https://packagist.org/packages/barryvdh/laravel-debugbar).

I noticed that bootstrapping the basic Laravel app was taking over 100ms. I ran Laravel's php artisan config:cache to cache the configuration and it barely made any difference.

This didn't seem right to me, but I had a number of variables and I was unsure of where to start looking... or if it was a problem at all.

Thankfully the first part of my investigation turned up a solution. I created a phpinfo() page on an existing Laravel 5 setup and on the Docker container, and it turns out that OPcache isn't enabled on the PHP Docker image by default.

Adding the following RUN statement to my Dockerfile sorted the issue:

RUN docker-php-ext-install opcache

After restarting the container, the app bootstraps in 25-30ms. I'm unsure why OPcache isn't enabled by default; I can't think of any problems it would cause, and I would imagine in over 99% of situations users would want it on... no one wants slow PHP.
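
If you want to go a step further, OPcache can also be tuned with an ini file baked into the image. A minimal sketch, assuming a docker/opcache.ini file in your build context (the path and values are illustrative starting points, not from my setup):

COPY docker/opcache.ini /usr/local/etc/php/conf.d/opcache.ini

with docker/opcache.ini containing something like:

; Memory for the compiled-script cache, in MB
opcache.memory_consumption=128
; Maximum number of scripts that can be cached
opcache.max_accelerated_files=10000
; Skip checking scripts for changes in production; rebuild the image to deploy
opcache.validate_timestamps=0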


If you found this useful, please feel free to donate via bitcoin to 1NT2ErDzLDBPB8CDLk6j1qUdT6FmxkMmNz

The Freelance Developer Podcast

Posted: 2020-03-24 23:35:20 by Alasdair Keyes



I happened across a post on LinkedIn from an old colleague of mine who has started a new podcast about freelance development. If you've got some time and you're either a contractor or looking to contract in the future, it's worth a listen.

https://www.thefreelancedeveloper.co.uk/


If you found this useful, please feel free to donate via bitcoin to 1NT2ErDzLDBPB8CDLk6j1qUdT6FmxkMmNz

Windows 7 EOL

Posted: 2020-01-19 10:17:03 by Alasdair Keyes



NOTE: Running End of Life software is risky; don't do it unless you accept the risks.

So Windows 7 is now EOL for all but the few customers who are paying through the nose for long-term support. I run Linux on most of my machines, but I still have a solitary Windows machine for Steam and a few other Windows-only apps that won't run on WINE.

Unfortunately, I dislike Windows 8 and 10. There are a number of reasons, but on a purely practical level I find the interface horrendous, unintuitive and difficult to use, so I would like to continue running Windows 7 for as long as I can. I will have to accept the increased security risk of running an OS with no further security updates, but thankfully my use of Windows is very limited and doesn't involve browsing/email or other common attack vectors for viruses and trojans. With a good AV installed too, this should reduce the risk to acceptable levels.

With the EOL status, the Windows Update service for Windows 7 will no doubt end in time. This means that although my current machine is up-to-date, if I need to re-install due to hardware failure, I may not have access to all the updates.

With this in mind, I found the WSUSOffline tool (http://www.wsusoffline.net/), which allows you to download all updates for a specific Windows/Office version and store them offline. The main use-case appears to be for sysadmins with network access restrictions to download and install updates on air-gapped machines; however, in this instance it looks well suited to archiving. There are other options open to me, such as installing and maintaining a Windows WSUS server, but that is a lot of extra work.

If you wish to get your own backup of updates, the process is roughly: download and extract the WSUSOffline archive, run its update generator tool, select the Windows/Office versions you want (Windows 7 in my case) and set the download target; I ran it in a Windows VM with the target on a shared drive.

It took about 30 minutes to download all the updates. Once it's done, it copies a folder structure containing all the updates onto your shared drive, which you can then back up from your host to wherever you want. The folder also includes the executable to kick off the updates on another machine.

Running the archived updates on another machine is not a run-and-forget process; Windows updates require reboots, which means you will have to click a few buttons now and again, but that is no different from the official update process.

It looks like the WSUSOffline tool works by distributing a list of updates to use. As Windows 7 only went EOL in January, I would imagine that I will have to wait for the next WSUSOffline update to get the last few Windows Updates archived, but it looks like I should be able to continue using Windows 7 for some time yet, even if I have to rebuild.


If you found this useful, please feel free to donate via bitcoin to 1NT2ErDzLDBPB8CDLk6j1qUdT6FmxkMmNz

Wiki Migration

Posted: 2019-07-30 11:15:33 by Alasdair Keyes



For about 10 years I've used a wiki to document everything that I learn and need to keep track of. This contains everything from walkthroughs of installing/configuring software, to lists of interview questions to ask potential hires.

When I first started working in hosting, I began collecting text files with information given to me by other colleagues. Over time this got unwieldy, so I created a MediaWiki wiki (https://www.mediawiki.org/wiki/MediaWiki). I mainly picked this as it was both the wiki I was using at my workplace and a familiar interface, being the software that Wikipedia uses.

Over time I've kept MediaWiki updated, but I've had more and more problems with upgrades breaking things and needing fixing, so I started looking around for other wiki tools.

New Wiki

I eventually found Dokuwiki (https://www.dokuwiki.org/). It's more lightweight and simple, but seems to be up to the tasks that I need it for. It uses flat files as a back-end, so I don't need to back up both files and a database, and after importing all my data it's only a quarter of the size on disk:

$ du -hs public_html.mediawiki/
203M        public_html.mediawiki/
$ du -hs public_html.dokuwiki
48M         public_html.dokuwiki

I did have to install the tag and pagelist Dokuwiki plugins to allow me to use tags, which are the Dokuwiki version of Mediawiki's categories.

Migration

It would be nice to have been able to copy my articles directly across to the new wiki, but the Mediawiki syntax (https://www.mediawiki.org/wiki/Help:Formatting) and Dokuwiki syntax (https://www.dokuwiki.org/wiki:syntax) are different. The key differences were in the basic markup: heading markers (the number of = signs is effectively inverted between the two), bold/italic quoting (''' and '' in Mediawiki versus ** and // in Dokuwiki) and list items, which Dokuwiki expects to be indented with leading spaces.

I knocked up a quick Perl script to connect to the Mediawiki DB and parse the articles into a format suitable for Dokuwiki. This was mostly done with regex replace statements to insert spaces and change tags etc.
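
As a rough sketch of the sort of substitutions involved (illustrative patterns that only handle the simple cases, not my original script):

#!/usr/bin/perl
# Naive Mediawiki -> Dokuwiki markup conversion, line by line
use strict;
use warnings;

while (my $line = <>) {
    # Dokuwiki lists indent two spaces per level rather than repeating
    # the bullet character; do this before bold creates ** at line start
    $line =~ s{^(\*+)}{'  ' x length($1) . '*'}e;
    $line =~ s{'''(.+?)'''}{**$1**}g;   # bold: '''x''' -> **x**
    $line =~ s{''(.+?)''}{//$1//}g;     # italic: ''x'' -> //x//
    print $line;
}

Running an exported article through this gets you most of the way there; edge cases still need fixing by hand.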

While I was at it, I took the time to delete or update old articles, so now I have a new wiki with refreshed info, and I'm very pleased with Dokuwiki.


If you found this useful, please feel free to donate via bitcoin to 1NT2ErDzLDBPB8CDLk6j1qUdT6FmxkMmNz

Debian Buster first install

Posted: 2019-07-17 08:47:31 by Alasdair Keyes



I upgraded my home server to Debian 10 (Buster) this week. It's running on quite an old HP ProLiant MicroServer, so I bought a new SSD to use for the OS partitions to give it a little extra life. As such, it was a fresh install rather than an in-place upgrade.

As you would imagine, 10 is much the same as 9 in most respects, but there were a few points of note...

  1. Puppet install was producing DH key error

The Buster Puppet install was using version 5.5.10, whereas my Puppet Master (on Debian Stretch) was using 4.8.2. When connecting to the master, the new install would error with:

Warning: SSL_connect returned=1 errno=0 state=error: dh key too small

The answer to this was found at another chap's blog https://blog.steve.fi/upgraded_my_first_host_to_buster.html and is to do with system-wide SSL settings, although I fixed it slightly differently.

In /etc/ssl/openssl.cnf I updated the line

CipherString = DEFAULT@SECLEVEL=2

to

CipherString = DEFAULT@SECLEVEL=1

It turns out this is a non-standard, custom security setting made by Debian: https://wiki.debian.org/ContinuousIntegration/TriagingTips/openssl-1.1.1

It doesn't appear that you can define a custom set of Diffie-Hellman params for a Puppet Master as you can for other software like NGINX and Apache. As soon as I have my Puppet Master on the later version I'll be changing this setting back, assuming it doesn't interfere with anything else.
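
For reference, this is the sort of per-application setting I mean. With NGINX you can generate your own params and point the config at them (the path here is just an example):

openssl dhparam -out /etc/nginx/dhparam.pem 2048

and then in the TLS server block:

ssl_dhparam /etc/nginx/dhparam.pem;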

  2. check_disk_io Nagios plugin was failing

It turns out the output of the iostat command had changed slightly, and the plugin required a tweak to continue working; the fix is in commit https://gitlab.com/alasdairkeyes/nagios-plugin-check_disk_io/commit/0708ba7b9cb0017f6f36554d54ee3e37a9b58d63

  3. The debsecan package is enabled by default

I wasn't aware this package existed until it started emailing me with all the system vulnerabilities. I can see a use for it, but as my systems are updated regularly, it's now purged by Puppet.

  4. The sensors utility and SMBus PIIX4 adapter device

The sensors utility used by the check_sensors Nagios plugin was reporting a critical alarm.

It turns out that there is no max/critical temp information for the thermometer on this device, so the reported temperature is always higher than the threshold of 0°C:

# sensors
...
jc42-i2c-0-18
Adapter: SMBus PIIX4 adapter port 0 at 0b00
temp1:        +31.0°C  (low  =  +0.0°C)                  ALARM (HIGH, CRIT)
                       (high =  +0.0°C, hyst =  +0.0°C)
                       (crit =  +0.0°C, hyst =  +0.0°C)
... 

As I have other temperature sensors available, I disabled this one by creating the following file at /etc/sensors.d/jc42-i2c-0-18:

chip "jc42-i2c-0-18"
    bus "i2c-0" "SMBus PIIX4 adapter port 0 at 0b00"
    ignore temp1

Other than that it was all pretty seamless.


If you found this useful, please feel free to donate via bitcoin to 1NT2ErDzLDBPB8CDLk6j1qUdT6FmxkMmNz

Tor Project Signing Key Poisoning and Ubuntu's torbrowser-launcher package

Posted: 2019-07-09 12:45:23 by Alasdair Keyes



I started up the Tor browser yesterday and noticed that it didn't start in its usual time frame; 10 minutes later the browser had still not opened.

Checking top, I saw that a GPG process was using 100% CPU.

  PID USER      PR  NI    VIRT    RES    SHR S  %CPU %MEM     TIME+ COMMAND
19330 username  20   0   78364  47020   4448 R  99.7  0.6   0:16.43 gpg
 3145 username  20   0 3458164 139712  63512 R  12.6  1.7  18:43.51 cinnamon

I'd read recently about an attack on GPG where keys were being poisoned with a large number of signatures to exploit a GPG bug and corrupt GPG installs (https://threatpost.com/pgp-ecosystem-targeted-in-poisoning-attacks/146240/), and I wondered if this was what was occurring.

I checked what the GPG process was running.

$ ps aux | grep 19330
username 19330 64.6  0.6  82192 50980 ?        RL   10:51   0:31 /usr/bin/gpg --status-fd 2 --homedir /home/username/.local/share/torbrowser/gnupg_homedir --keyserver hkps://hkps.pool.sks-keyservers.net --keyserver-options ca-cert-file /usr/share/torbrowser-launcher/sks-keyservers.netCA.pem include-revoked no-honor-keyserver-url no-honor-pka-record --refresh-keys

It seemed to be running --refresh-keys, which requests updates to keys from the key servers. I ran the following to see what keys were being refreshed:

$ /usr/bin/gpg --homedir /home/username/.local/share/torbrowser/gnupg_homedir --list-keys
/home/username/.local/share/torbrowser/gnupg_homedir/pubring.kbx
----------------------------------------------------------------
pub   rsa4096 2014-12-15 [C] [expires: 2020-08-24]
      EF6E286DDA85EA2A4BA7DE684E2C6E8793298290
uid           [ unknown] Tor Browser Developers (signing key) <torbrowser@torproject.org>
sub   rsa4096 2018-05-26 [S] [expires: 2020-09-12]

I checked the key server's entry for the key EF6E286DDA85EA2A4BA7DE684E2C6E8793298290 at http://pgp.mit.edu/pks/lookup?op=vindex&search=0x4E2C6E8793298290 and saw the key had received a large number of signatures on 2019-06-30; it does indeed look like it has been poisoned with excessive signatures.

I downloaded the latest Tor Browser for Linux directly from https://www.torproject.org/ and didn't hit this issue during startup, which is good news.

However, my Tor install is through the torbrowser-launcher package provided by the Linux Mint repos (originally provided by Ubuntu).

The torbrowser-launcher package doesn't contain the Tor Browser itself (as the name suggests, it's just a launcher); it's a Python tool that downloads the latest Tor Browser directly from the Tor Project. To verify that the downloaded files are legitimate it uses the Tor Project's public GPG key, and during this process it refreshes the key from the key servers and hits the poisoning issue.

It seems if you are affected by this, you're best off downloading Tor directly from the Tor Project itself. Unfortunately, verifying the file you download from the website also requires GPG; you can at least check that the key that created the signature is the expected one...

$ gpg --verify tor-browser-linux64-8.5.3_en-US.tar.xz.asc Downloads/tor-browser-linux64-8.5.3_en-US.tar.xz
gpg: Signature made Fri 21 Jun 2019 02:30:51 PM CEST
gpg:                using RSA key EB774491D9FF06E2
gpg: Can't check signature: No public key

That key EB774491D9FF06E2 matches the key listed at https://2019.www.torproject.org/docs/verifying-signatures.html.en and is a subkey for the Tor Project Signing key, but without the key in your keyring, this check isn't as secure as it should be.
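
One way to get the key into your keyring without touching the SKS keyservers at all is to fetch it over WKD, which retrieves it directly from torproject.org. This is a suggestion rather than part of my original troubleshooting, and it needs a reasonably recent GnuPG:

$ gpg --auto-key-locate nodefault,wkd --locate-keys torbrowser@torproject.org

With the key imported, the gpg --verify above should then report a good signature from the Tor Browser Developers signing key.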


If you found this useful, please feel free to donate via bitcoin to 1NT2ErDzLDBPB8CDLk6j1qUdT6FmxkMmNz

LUKS Encrypted ZFS on Debian Buster

Posted: 2019-07-09 09:06:41 by Alasdair Keyes



I've been interested in running ZFS for a while but have always held off making the leap due to worries about features and stability. ZFS was originally developed for Solaris and has been ported over to Linux by the ZFS on Linux (ZoL) project https://zfsonlinux.org/.

Recently ZoL 0.8 was released with native encryption, which is really a must. Unfortunately, the latest Debian release 'Buster' only has 0.7.12, so the native encryption feature isn't available.

I've been experimenting with a Virtualbox VM to develop and test a suitable setup that I would be happy with on my production hardware.

My existing production setup runs Debian Stretch using Linux software Raid with LUKS Encryption on top and running ext4 as a filesystem.

For this test setup I'm using Virtualbox with 4x 2GB disks for ZFS in a striped/mirrored configuration; it's essentially ZFS's version of RAID 10. For a configuration like this you should ensure you have at least 2GB RAM; I did try with 1GB, however the LUKS encrypted devices were failing to start up at boot with out-of-memory errors. Debian 'Buster' is the OS.

The disk setup is /dev/sda for the Debian OS partitions, with /dev/sdb, /dev/sdc, /dev/sdd and /dev/sde as the four 2GB disks for ZFS.

Setup process

  1. Install Debian Buster and make sure it's fully updated on first boot
apt update && apt upgrade -y
  2. Add in the contrib repos by adding contrib to the Debian apt repo list in /etc/apt/sources.list
deb http://deb.debian.org/debian buster main contrib
deb-src http://deb.debian.org/debian buster main contrib
  3. Install dependencies
apt update && apt install dpkg-dev linux-headers-amd64 cryptsetup -y
  4. Install ZFS

This can take some time, make a cup of tea.

apt install zfs-dkms zfsutils-linux -y
  5. Setup LUKS encryption on the raw devices. Each time you will be asked to confirm that you want to overwrite the device and also enter a password for the device twice.
cryptsetup -y luksFormat /dev/sdb
cryptsetup -y luksFormat /dev/sdc
cryptsetup -y luksFormat /dev/sdd
cryptsetup -y luksFormat /dev/sde
  6. Setup LUKS initialization on boot

Get the UUID for each LUKS device

# ls -l /dev/disk/by-uuid/
total 0
lrwxrwxrwx 1 root root 10 Jul  9 08:55 0af47096-987d-41b5-b5a7-98827850f46d -> ../../sda1
lrwxrwxrwx 1 root root  9 Jul  9 08:55 5888dfc8-4df0-410e-8aec-992aad7abd97 -> ../../sdc
lrwxrwxrwx 1 root root  9 Jul  9 08:55 abd4a557-de16-4ecd-ab73-e4d41293dcf4 -> ../../sde
lrwxrwxrwx 1 root root  9 Jul  9 08:55 e2f1931b-2413-4181-9500-baad1a74c12d -> ../../sdd
lrwxrwxrwx 1 root root  9 Jul  9 08:55 edc129d6-dc90-4338-bc2e-9476843ff41f -> ../../sdb
lrwxrwxrwx 1 root root 10 Jul  9 08:55 fc1b09a1-41e2-4503-8c4f-d2e532dea5aa -> ../../sda5

Update the /etc/crypttab file with your disk configuration. It should look similar to this; the target name can be any unique name that you want.

# <target name>	<source device>		<key file>	<options>
sdb_crypt UUID=edc129d6-dc90-4338-bc2e-9476843ff41f none luks
sdc_crypt UUID=5888dfc8-4df0-410e-8aec-992aad7abd97 none luks
sdd_crypt UUID=e2f1931b-2413-4181-9500-baad1a74c12d none luks
sde_crypt UUID=abd4a557-de16-4ecd-ab73-e4d41293dcf4 none luks

As you can see, each device's UUID from /dev/disk/by-uuid is mapped to a unique device-mapper name.

  7. Reboot the system

This isn't required; however, it's good to ensure that your LUKS setup is correct before proceeding. You will be asked for your LUKS passwords on boot. Once you log back in, you should be able to run the following ls and see that the LUKS devices are initialized correctly:

$ ls -l /dev/mapper/
total 0
crw------- 1 root root 10, 236 Jul  9 08:55 control
lrwxrwxrwx 1 root root       7 Jul  9 08:55 sdb_crypt -> ../dm-0
lrwxrwxrwx 1 root root       7 Jul  9 08:55 sdc_crypt -> ../dm-1
lrwxrwxrwx 1 root root       7 Jul  9 08:56 sdd_crypt -> ../dm-3
lrwxrwxrwx 1 root root       7 Jul  9 08:55 sde_crypt -> ../dm-2
  8. Setup your ZFS pool

You will sometimes get a warning that the zfs kernel module isn't loaded; just follow the instructions and run:

modprobe zfs

This only needs to be run once; after a pool is configured, the module will be loaded automatically.

# zpool create pool01 mirror /dev/mapper/sdb_crypt /dev/mapper/sdc_crypt mirror /dev/mapper/sdd_crypt /dev/mapper/sde_crypt

Check the setup

# zpool status
  pool: pool01
 state: ONLINE
  scan: none requested
config:

	NAME           STATE     READ WRITE CKSUM
	pool01         ONLINE       0     0     0
	  mirror-0     ONLINE       0     0     0
	    sdb_crypt  ONLINE       0     0     0
	    sdc_crypt  ONLINE       0     0     0
	  mirror-1     ONLINE       0     0     0
	    sdd_crypt  ONLINE       0     0     0
	    sde_crypt  ONLINE       0     0     0

errors: No known data errors
  9. Reboot the system again.

Rebooting again will confirm that everything is configured correctly and the LUKS devices are brought up before ZFS mounts the pool; if they aren't, you will end up with ZFS errors and the pool won't load.

Run zpool status again and you should see the same output as above. If the LUKS devices fail to initialize and none of the devices are available, you will see an error about no pool available.

If only some of the LUKS devices fail to initialize you will see the state being something other than ONLINE and you can check dmesg or /var/log/kern.log for information as to why.
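
From here you can carve the pool into datasets rather than using pool01 directly. A quick sketch; the dataset name, mountpoint and property are just examples:

# zfs create -o mountpoint=/srv/data pool01/data
# zfs set compression=lz4 pool01/data

Each dataset can then have its own properties such as compression and quotas set independently.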


If you found this useful, please feel free to donate via bitcoin to 1NT2ErDzLDBPB8CDLk6j1qUdT6FmxkMmNz

Security first, "they" are watching

Posted: 2019-07-02 09:15:22 by Alasdair Keyes



I recently registered a new .uk domain and set up a basic website on behalf of a client. I checked the logs to see how long it took for it to be accessed without me having to advertise its presence.

The site uses a name-based virtualhost, so the visitor had to specifically request the domain rather than just hitting port 80/443 on the server IP. Within ~6 hours of registration, the domain was already being scanned. What's of further interest is that at 17:09, two separate IPs hit the index page for the first time within the same second, indicating it was likely a bot doing a coordinated scan of new sites.

As far as I know the domain hadn't been registered for a while (if ever), and as the .uk registry doesn't publish new registrations, the most likely way for bots/people to be aware of the new website was the HTTPS Certificate Transparency logs. If you're unaware, every new TLS certificate that's issued is published to a public log; these can be searched via a number of sites such as https://crt.sh/ (and you can see all certificates issued for akeyes.co.uk at https://crt.sh/?q=akeyes.co.uk).
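
crt.sh can also return JSON, so you can keep an eye on certificates issued against your own domains from a script. A quick sketch using my domain (assumes jq is installed):

$ curl -s 'https://crt.sh/?q=akeyes.co.uk&output=json' | jq -r '.[].common_name' | sort -u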

The take-away from this is that nothing goes unnoticed on the web anymore; if you're setting up a new website, ensure that it is secure from the get-go. Make sure passwords are changed from defaults and are strong, and ensure software is up-to-date, as bots will be looking to exploit it. This is especially important for popular CMS apps like Wordpress.

As an aside, I found it interesting that Bing had crawled the domain within 24 hours, while Google still hasn't visited.


If you found this useful, please feel free to donate via bitcoin to 1NT2ErDzLDBPB8CDLk6j1qUdT6FmxkMmNz

Wireguard

Posted: 2019-06-26 08:38:57 by Alasdair Keyes



Last year I made a post on Wireguard and wrote a Nagios plugin to allow monitoring of connected peers. I mentioned that I would likely do a post about my thoughts on Wireguard later, and here it is...

Before I begin, this isn't a copy-and-paste piece about how it's a slim code base, listing off the encryption algos; that is all important, but it's covered in depth in every article about Wireguard on the internet. This is written from a more user/admin point of view.

I will also refer to the Server/Client paradigm; however, Wireguard only really operates on the idea of Peers. Essentially, a "Server" would be a peer with lots of peers connecting to it and routing traffic through it, and a "Client" would be a peer that connects to a single peer (or a limited number) and routes some/all of its traffic across the interface.

It should be noted that this is tested using Debian Linux; Wireguard is also available for lesser operating systems.

The Good

  1. Clear and limited encryption

As I said, I won't just list off the protocols it uses internally, but Wireguard's limited set of encryption algorithms and ciphers means that, as a sysadmin, I know I cannot actively downgrade or harm my VPN's security.

With tools like OpenVPN, having a range of ciphers and the ability to choose different key lengths is good, but at some point I will forget to update these and eventually be running a key size that's too small or a cipher that has a known flaw. Large companies may security-review their setups regularly, but small companies or personal users most likely will not.

This does lead to the potential problem of one vulnerability affecting all Wireguard installs due to their similar configuration; however, this can occur with any software, and I don't have to worry that my lack of knowledge or ability is actively making the tunnel less secure than it should be.

Other nice extras: Wireguard operates on asymmetric cryptography with public/private keys, but also gives the option of a pre-shared key per client for extra security (especially for, say, a post-quantum world), and it offers Perfect Forward Secrecy (PFS), so even if private keys are leaked, previous session data is still secure.

  2. Ease of end-user configuration

The client (or 'peer' in Wireguard parlance) configuration file is very lightweight: often less than 10 lines of config (there's a sample below). Private/public and optional pre-shared keys are all included in-line in the file and are very small. No more need to hand out CA certs, private keys etc. on top of config files to users.

The config file can also contain PreUp, PostUp etc. type commands to enable firewall changes or other relevant tasks that should be performed so you don't have to find ingenious ways of hooking it in with other things on your system.

Some versions of Gnome Network Manager have support for Wireguard, making configuration even easier for the non-tech-savvy.
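
To give an idea of scale, a complete client config can be as small as this sketch (keys elided; the endpoint and addresses are made up):

[Interface]
PrivateKey = <client private key>
Address = 10.10.0.2/24

[Peer]
PublicKey = <server public key>
PresharedKey = <optional pre-shared key>
Endpoint = vpn.example.com:51820
AllowedIPs = 0.0.0.0/0
PersistentKeepalive = 25

Bring it up with wg-quick up wg0 and that's the whole client side.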

  3. Tooling

This part is quite impressive and well thought out. Wireguard config is stored in a single file and can either be edited directly in the file if the interface is down, or configured in real time using the wg tool when the interface is up (you have the option as to whether these changes are persisted, or temporary until the interface is brought down). An example is shown at the end of this section.

The tools and man pages are detailed and easy to follow, and the limited number of options means that there's not too much to get wrong in configuration.

There is also a wg-quick tool which will bring up interfaces and configure default routing for you too.

Having the functionality for editing config via CLI is great for automation. I built a puppet module this weekend to configure a Wireguard server and the wg and wg-quick tools were invaluable.
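
For example, adding a peer on the fly and then persisting the change looks something like this (the key and IP are placeholders):

# wg set wg0 peer <peer public key> allowed-ips 10.10.0.3/32
# wg-quick save wg0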

The Bad

  1. Security reviews

As far as I know Wireguard hasn't been security reviewed. This is not surprising, it's still in development and it takes a lot of time and effort for software to be reviewed but it will be interesting to see the results when it finally does happen.

  2. No definite connection numbers

Due to the connection-less way Wireguard works, there is no defined list of peers that are connected/unconnected. The server knows how long it has been since a handshake occurred and a new PFS session was started with each peer, but not whether a peer is actually connected. This is also in part due to the use of UDP (tunnelling TCP over TCP has some problems, so UDP is best here). Connection information can be extrapolated (as the Nagios plugin does), but it would be nice to know how many connections there are; connection numbers can be a good early warning of problems.

  3. 'Nameless' peers

When viewing the output of Wireguard's configuration, all peers are identified only by their public key. This is good for providing some level of anonymity, but if you were running a large organisation with a lot of Wireguard peers, it would be handy to have a nice-name field to indicate either a particular real-world person or perhaps the data-centre on the other end of the interface. This can be added into the config file as a comment, but it would be nice to see it as an optional extra in the config.

The stuff I'm too lazy to properly look into

Wireguard is touted as being very fast due to both its slim code and the way it's designed to operate.

My VPN servers generally don't have too many users, so I can't make a direct useful comparison. The client's network speed seemed neither faster nor slower than an OpenVPN connection. If I had done some in-depth checks I might have seen a reduction in CPU/RAM/network use, but really, who has the time?

All in all I like Wireguard and plan on moving to it soon, maybe in tandem with my existing VPN software until I have confidence that it is suitable.

I've written a puppet configuration to roll out once I'm ready. The only thing holding me back is waiting for my desktop OS to ship Network Manager with a Wireguard plugin so I can play nicely with my general network configuration.

There are a million articles online on how to get Wireguard up and running, and if you use VPNs I would suggest at least looking into it.


If you found this useful, please feel free to donate via bitcoin to 1NT2ErDzLDBPB8CDLk6j1qUdT6FmxkMmNz

PHP Design Pattern Code Implementations

Posted: 2019-03-15 22:06:15 by Alasdair Keyes

Direct Link | RSS feed


I was refreshing my memory on the Bridge pattern (https://en.wikipedia.org/wiki/Bridge_pattern) for some code I was writing and I came across this GitHub repo https://github.com/domnikl/DesignPatternsPHP with PHP implementations of many common design patterns.

It's well worth bookmarking for when you need to brush up, or even to use as a reference when implementing them.


If you found this useful, please feel free to donate via bitcoin to 1NT2ErDzLDBPB8CDLk6j1qUdT6FmxkMmNz
