
Passwordless Entry for WordPress

Hey everybody, so I wanted to put a quick post up here. I do a lot of work with WordPress and have done for many years. Usually, nowadays, it takes the form of basic maintenance and hosting.

One of the things I have always found to be a problem, particularly if you work from multiple machines, or if you have multiple installations inside the same domain/subdomain, is remembering login credentials. In the event of having different details within the same domain, even browsers can get confused.

With this in mind, I have now released my first WordPress plugin, for passwordless entry, on the WordPress directory. Instead of requesting a password reset, following the link, changing your password, and then logging in (which can take a little while), this plugin allows you to request a one-time-use authentication link.

Anyway, I thought I would share that I’ve released the plugin, you can find the links below.

The plugin is only in its first version, but nonetheless, it is there. Instructions and everything else will be up soon.

WP Passwordless Entry on JTC


Retrieving 3+ million MySQL records in less than 2 minutes

I’m going to jump straight into this one; I have a task to do. This task consists of going through 3.2 million records in a MyISAM table, retrieving some information, doing some basic processing, and moving certain data elsewhere.

This is a very quick cheat, or hack, to get that information out. But first, we need to start with the problem. Let’s say we are using Laravel: we can use DB chunking, and this works pretty well, but it’s worth understanding how it works at its core, alongside MySQL.

We start with a fairly innocuous paginated query…

SELECT * FROM mytable ORDER BY id ASC LIMIT 0, 10000

Great, and then we continue on with that, until we reach the magic 3.2 million records. (3,200,000 divided by 10,000 items per page is 320 pages). Fair enough, cool.

If you’ve never tried to do this, I will save you the pain. Page 1 will be like lightning (say half a second), page 2 will be a bit slower, and by the time you get to page 50 or 100 you will be looking at near enough a minute per page.

I was actually running batches of 100,000, so 32 pages to cover my 3.2 million records. Running it using the LIMIT keyword meant that looping the batches (and I mean retrieving the page and running a foreach in the PHP, without doing anything else) took 14.5 minutes.

Not good. Especially when I have lots of intensive stuff to do for each item in each loop. So here is the little hack to save you time, it does come with caveats.

  1. It assumes you have an id column that is unique and sequential
  2. This will not work if you have to do any kind of ordering
  3. We can’t guarantee every page will be the same size (records may be missing), so your mechanisms need to not rely on that idea

So instead of…

SELECT * FROM mytable ORDER BY id ASC LIMIT 0, 10000

We’re going to do…

SELECT * FROM mytable WHERE id >= 1 AND id <= 100000

What this means is that instead of MySQL scanning past your offset (counting to 300,000, say) and then reading your next 100,000 rows, it can use the primary key index to jump straight to the range you asked for.
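
Putting the two queries together, the whole loop can be sketched in plain PHP like this. It is untested against a real database; `idRanges` is a helper I’ve made up for the sketch, and `mytable` and the 3.2 million ceiling are just the figures from above:

```php
<?php

// Walk the table in fixed id ranges instead of LIMIT/OFFSET pages.
// Assumes an auto-increment `id` column, per caveat 1 above.
function idRanges(int $maxId, int $batchSize): array
{
    $ranges = [];
    for ($start = 1; $start <= $maxId; $start += $batchSize) {
        $ranges[] = [$start, min($start + $batchSize - 1, $maxId)];
    }
    return $ranges;
}

foreach (idRanges(3200000, 100000) as [$from, $to]) {
    $sql = "SELECT * FROM mytable WHERE id >= {$from} AND id <= {$to}";
    // run $sql against your connection and process the batch here
}
```

Remember caveat 3: some ranges may come back short (or empty) where ids have been deleted, and that is fine.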

The result of this for me (querying against a 5 GB, 3.2-million-record table) was that it took around 90 seconds to retrieve and loop through all the records, instead of 14.5 minutes.

When you need to loop through and trigger a process for every single record in a big table, this is definitely a faster way of doing it. Assuming there is no other viable way of retrieving the information, of course.

Laravel Tutorials

Handling large data with cron jobs and queue workers

One problem a lot of developers seem to run into, judging by their code, is trying to handle large sets of data with background scripts. There are a few common mistakes I have seen from inheriting the code of my predecessors, and I thought I would offer some solutions to the problems that can come about.

I will cover off these issues as problems and solutions, and I’m going to use Laravel solutions, but conceptually it doesn’t really matter what framework you’re working in.

Problem #1 – The script takes too long (or many hours) to run!

This is quite a common one, particularly when inheriting code from days gone by, when the business and database were small. Then things grew, and suddenly that script which took ten minutes to run, takes many hours.

When I have to tackle these kinds of issues, assuming I understand the script and what it is doing, there are a couple of ways to solve them:

Firstly, if your script is doing many different things, split it out into multiple scripts which each do a single job and run at their appropriate times. However, this probably isn’t ideal.

Usually a script doing some daily processing looks something like this (in Laravel, but that’s not really relevant). In this awful example, we’re calculating some cumulative business values for the customer.

This is not an actual feature, or actual code; I’ve written it as a deliberately simple example. Honestly, I’ve not even tried to run this code, it’s just an example.

$customers = Customer::all();
foreach ($customers as $customer) {
    $orderTotal = Order::where('customer_id', '=', $customer->id)->sum('subtotal');
    $thisMonthOrders = Order::where('customer_id', '=', $customer->id)
        ->where('created_at', '>=', now()->subMonths(1)->format('Y-m-d'))
        ->sum('subtotal');
    $customer->total_spend = $orderTotal;
    $customer->total_spend_this_month = $thisMonthOrders;
    $customer->save();
}

In this example we’re getting the total value of the orders this customer has made in their lifetime and the last month, and then saving it against the customer record. Presumably for easy/fast search and filtering or something.

This is fine until you start hitting many thousands or hundreds of thousands of sales and/or customers, then the script is going to take a very long time to run.

Firstly, we can optimise the script to only fetch the orders in the last month, and therefore only calculate totals for the affected customers; that would have a huge impact. But that’s not the point I am trying to make here (though it would be relevant).

It would be much more efficient, in terms of execution speed, to do something like the following, though it does assume you have a Queue Worker or Laravel Horizon running (or some kind of mechanism to handle jobs on a queue).

// The cron job now only dispatches; the heavy lifting happens on the queue
foreach (Customer::all() as $customer) {
    CalculateTotals::dispatch($customer);
}

// app/Jobs/CalculateTotals.php
class CalculateTotals implements ShouldQueue
{
    use Dispatchable, InteractsWithQueue, Queueable, SerializesModels;

    protected $customer;

    public function __construct(Customer $customer)
    {
        $this->customer = $customer;
    }

    public function handle()
    {
        $this->customer->total_spend = Order::where('customer_id', '=', $this->customer->id)->sum('subtotal');
        $this->customer->total_spend_this_month = Order::where('customer_id', '=', $this->customer->id)->where('created_at', '>=', now()->subMonths(1)->format('Y-m-d'))->sum('subtotal');
        $this->customer->save();
    }
}

What we have actually done here, to speed up the execution time, is use the cron job to work out what needs handling/processing, and then pass the work off to the job queue. This means that however many queue workers you have can share the processing and the time-consuming parts of the job. The cron will only run for as long as it takes to find the relevant jobs and dispatch them via Redis or whatever queueing mechanism you’re using.

If you have 10,000 contacts to process, and it takes 2 seconds to process a contact, it would take 20,000 seconds to run the script.

If you run 20 queue workers, and all things are equal, the work from your script is now divided between 20 workers, so each worker has to deal with 500 jobs. 500 jobs multiplied by 2 seconds per job is 1,000 seconds to run (instead of 20,000).

NB: Personally, I would calculate these numbers on the fly, if they needed saving in the database I would calculate them on a listener to an event for when the orders were added, so they’re always live-ish, naturally I would not be using an ::all() as the basis to start any kind of processing in a large environment.

Problem #2 – The script uses too much memory, and crashes

This one depends on where the memory is coming from, but sticking with our above example…

$contacts = Contact::all();

Is a pretty bad place to start. It will work for a while, but if your database grows you’re eventually going to max out your memory trying to retrieve your whole data set.

$contacts = Contact::where('last_order_at', '>=', $carbonSomething->format('Y-m-d'))->get();

This is a better place to start: whittle down how much you’re retrieving. But you might still be bringing back too much data.

Again, this is a demonstration, I’ve not run it, it’s just for the concept

$relevantContactsCount = Contact::where($qualifiers)->count();
$perPage = 10000;
$pages = ceil($relevantContactsCount / $perPage);
$currentPage = 1;
while ($currentPage <= $pages) {
    $contacts = Contact::where($qualifiers)->orderBy('field', 'order')->limit($perPage)->offset(($currentPage - 1) * $perPage)->get();
    // Now we have at most 10,000 contacts in memory
    foreach ($contacts as $contact) {
        MyJobFromAbove::dispatch($contact); // Run it in a background job
    }
    $currentPage++;
}

Problem #3 – I can’t run my script as jobs, because it has to report back with totals, etc.

Now, this is an awkward one, but you simply need to change the way that you’re thinking about how you gather that information. You need a persistent way of storing the information, so that it can be summarised later.

There are a couple of ways to achieve this;

  1. When you start running the script, you create a record in the database or somewhere which you can report back to from your jobs (be wary of having every job run an incremental update for a “total contacts processed” type column!), or a way of consolidating the results
  2. Utilise caching mechanisms to remember certain information about what is running, so information can be shared between jobs and later on reported on
  3. Consolidate your results in somewhere very easy to calculate, and run your finalised report (which is perhaps emailed) once you know all processing is finished

Parting Thoughts

The normal rules of performance here still count…

  1. Only pull out from SQL what you actually need (SELECT contact_id, subtotal, items_count FROM orders instead of SELECT * FROM orders)
  2. Do what you can at the SQL level where possible, e.g. SELECT SUM(subtotal) FROM orders WHERE contact_id = ?
  3. Ensure you have effective and efficient indices on the relevant columns
  4. With MySQL, for speed, use InnoDB tables; for saving storage space, use MyISAM (very simplified!)
  5. Having issues with pulling information out of the database (rather than processing it)? Consider NoSQL solutions like MongoDB
  6. Consider if there are better ways to get the data into your database, so that it can be retrieved more efficiently, or if there are other database design implementations which will suit your needs more closely
  7. Consider working in a near-live speed, instead of bulk processing, even if you hold that information off to be implemented the following day (for business logic purposes)

Reflective practice for programmers and developers in 8 points

Hey guys

It’s my birthday tomorrow, a time of year when I always feel quite reflective about my life. Specifically today, I am thinking about my career, as this year marks a decade since I started dabbling with web stuff.

So, here are eight lessons.

1. You are never done learning

This is such a cliché, but it is so true. The moment you think you know everything is the moment you need to grow, seriously. That level of overconfidence, the moment of “I can do anything I want in PHP”, was arguably the most arrogant and dangerous point in my career.

For me, it was about 7 or so years ago, and I knew, comparatively, nothing. If I don’t think the same thing about my present self in 10 years’ time, I will have failed to continue learning.

There’s always a new framework, pattern, feature, methodology, or something you don’t know yet. Respect your lack of knowledge, for it will be your downfall if you do not.

Learning to know what you don’t know, and being self-aware in that context, will set you apart from many other developers.

Recognise the difference between commercial and personal experience; they are worlds apart. Just because you successfully implemented React on a WP website in your spare time does not mean you’re ready to roll React out to a SaaS platform, or a long-term-supported CMS version (etc).

Know what you don’t know. Identify what you need to know that you don’t currently, and figure it out.

It is not your employer’s responsibility (unless you’re a Junior or Trainee) to enhance your career. That’s your job. Own it.

2. You should hate all your old code

Okay, so hate is a strong word, but…

They say anything you wrote more than 6 months ago may as well have been written by someone else. I don’t necessarily agree with that; I leave a trail of flow charts, documentation, tests, and code comments in my wake to mitigate this point. However:

If you don’t hate your old code, or (to be less dramatic), if you wouldn’t rebuild every project you built over a year ago, I would argue you’ve not learned or grown in that time.

I dislike everything I built more than a year ago, for sure, for various reasons, but every project would’ve had (whether large or nuanced) differences.

3. Your defeats are your biggest victories

The worst project. The hardest client. The most demanding project manager. The hardest bug. The slowest system/database, the most unruly repository. The most gut wrenching interview.

Every single one of these things is an opportunity for you to learn.

Every minute of pain characterises things you will never do again, and will never subject another developer to. It builds you up and makes you better. There’s a lesson to be learned, you just have to find it.

Embrace the suck.

4. When you are at your most comfortable, is when you are most vulnerable

I touched on complacency and over-confidence in point 1. But when you feel like this, you’re in dangerous territory.

It is almost certain that you will make mistakes when you feel overly comfortable in what you’re doing.

You are either bored, so not paying attention, or don’t care. Neither of those things characterise a good developer, at all. This is when you’ll make a potentially career altering mistake. Which may well lead you to point 3…

Too comfortable = Complacent = Mistakes

5. If you think you are worth more, you probably are

This is not necessarily about money. Once you’ve been in the industry for a few years, you should start to recognise good and bad signs. If you start to think you are worth more, in terms of working conditions, work/life balance, management, tech stack, or generally being valued, you probably are.

Never be afraid to take the leap. Never be afraid to ask for what you want. Even when you’re afraid, do it anyway.

Side note; honestly, respect your body, your friends and your family. Seriously. Never neglect your own well-being for any employer, ever.

Bet on yourself, and take the leap. You can do it.

6. Experience, exposure, and expertise are not the same thing

I want to set this one straight. When I am hiring developers, I very rarely specify a hard minimum on experience or anything like that.

The reason for this is simple: someone is considered to have 10 years’ experience if they have been employed, as a developer, for 10 years. That 10 years could’ve been spent writing bug fixes for some PHP 5.6 project which should’ve been retired years ago, that nobody uses, and which is only maintained because it still pulls in license fees.

Exposure, however, is different. Being exposed to lots of different tech stacks (frameworks, languages, content management systems) and different ways of working is invaluable. It brings new perspective, and that is valuable insight. Someone with lots of exposure can usually help with transformation, and spot own-goals before you score them. If you listen to them.

Expertise is about how well they know their subject matter. Someone might have 3 years’ experience, a shed load of expertise, and lots of exposure to different ways of working. Are they less valuable than someone who has done the same thing for 10 years?

Pick your heroes wisely. Never evangelise someone for whom you have no comparison. Especially if you are young in your career. Decades of experience does not necessarily equate to an expert developer.

7. You do not know more than the developer next to you, but you know different things to each other

When I lead development teams, and when I work on them, whatever level I am at, I understand what I know (and what I do not), and I respect that the “Junior Developer” (just a job title) next to me probably knows things I do not, the same as I know things that she does not. The difference is likely age (and thus years of experience) and exposure.

Never neglect the little hero sat in your office who is obsessed with technology. Whilst you’re bogged down with the implementation of a 10-million-record schema, she was up until 4am reading about [the latest amazing piece of tech] – you may well be surprised at how she can help you. So don’t dismiss a junior for being a junior.

You both know different things. Respect it and embrace it.

8. Don’t learn on the job unless you are supposed to

“Yeah, we can figure out how to do that!” – “Promise it today, figure out how to deliver it tomorrow” – in technology spheres these things suck.

If you are about to build something in a technology with which you have insufficient knowledge or experience, tell the person managing the project, and get the appropriate amount of R&D time scheduled into it.

Again, personal and commercial experience are not the same thing, and you are setting yourself up to fail if you take on, and say you can do, technical jobs that you don’t know how to accomplish.

Well, that’s all for tonight folks, have a great bank holiday weekend!


Cloud Hosting – Basic Overview and Considerations

This article is aimed at people who are traditionally used to running production environments on bare metal, or using a single VPS to host their sites/platforms; people who now need to understand the practical considerations of hosting their application in a more scalable way.

For the purpose of this article I am going to assume the application in question is Laravel based, realistically this isn’t particularly relevant (save for the fact that you need to be able to control your configurations across all servers).

I will soon be writing some case studies about the work I have done at various assignments, but for now this is all based on experience, yet kept strictly hypothetical.

Additionally, to keep things simple for the purposes of this article, I am going to talk about Digital Ocean, and assume we are using their services. Whilst we all know that essentially AWS is the monarch of cloud scalability, at an entry level Digital Ocean is easier to cost and manage.

This article is not a deep dive! It just outlines, at a very high level, the things one would need to consider before moving to this kind of setup.

Our considerations

A brief list of the things that we need to be thinking about

  • Serving web traffic
  • Running the Laravel Schedule (cron)
  • Running Queue Workers
  • MySQL
  • Queue Driver (I tend to use Redis)
  • Cache Driver (again, I tend to use Redis – but the File driver will not work in this instance)
  • Log Files
  • Sessions (I tend to use either database or Redis, depending on the application, but as always, the File driver will not work in this instance)
  • User-uploaded files, and their distribution
  • Backups
  • Security / Firewalling

Digital Ocean Implementation Overview

For the purpose of this article we’re going to spin up a number of resources with Digital Ocean to facilitate our application. In the real world you will need to decide what resources you need.

  • 3x Web Servers
  • 1x Processing Server
  • 1x Data Server
  • 1x Load Balancer
  • 1x Space

Annotations on the above

Something that is worth taking note of, early doors, is that your servers are expendable at any given time. If you kill a web server, your application should survive.

In the real world, I would suggest that you use a managed MySQL and Redis (or Cache + Queue driver) service, to mitigate your risk on those droplets, making all of them expendable.

Notes on other tools

I personally would always recommend, in this environment, using Continuous Integration to test your application, and using Merge requests to ensure that anything which finds its way into a deployable branch (or a branch from which versions will be tagged) is as safe as it can be.

I would also advise Test Driven Development (TDD) because this is probably the only way you really know that you have 100% test coverage.

I am going to come onto deploying to your environment at the bottom of this article.

Part 1: Building Your Servers

It makes good sense to build the base requirements of your server into an image/snapshot/etc (depending on your provider), and then your redistribution process is much easier, because you can spin up your servers/droplets/instances based on that image.

Whilst the Data Server does not need to have any application knowledge, the others will all need to have your chosen version of PHP and any required extensions that your application needs to run.

Once you’ve built the first server, take a snapshot of it, with SSL certificates and such.

I would advise running supervisor on this server, to ensure that Apache / nginx (or your chosen web server) stays running all the time, to minimise and mitigate downtime.

Part 2: Serving Web Traffic

Whether you are using Digital Ocean, Amazon Web Services, or any other provider; they all offer Load Balancers. The purpose of a Load Balancer is to balance the load, by distributing your web traffic to the servers available. This means that in times of peak traffic you can add more web servers to handle the traffic (this is known as horizontal scaling – the opposite of vertical scaling which is where you upsize your servers).

So, build the servers that you need, and make them capable of serving web traffic. Then fire up a Load Balancer, and you can assign these droplets to your load balancer.

Once you have done this, update your DNS to point to the Load Balancer, so that it can distribute the traffic for that domain.

If you are using CloudFlare, make sure you forward port 443 (SSL) to port 443, and set the SSL to pass-through, so that the handshake between destination server, load balancer, and browser can succeed.

Now that you have web servers, you need to make sure that you have the backing to fulfil this.

Part 3: Running the Laravel Schedule (cron tasks) and Queue Processors

You do not want to run the cron on your web servers. Unless, of course, your desired effect is to have the job running X times per interval (where X is the number of web servers you are running).

For this kind of background processing, you want another server, which is where the processing server comes in. This is the server which is going to do all of your background processing for you.

This server needs to be able to do all the same things that your web servers can do, except it just won’t be serving web traffic (port 80 for HTTP and port 443 for HTTPS).

This is the server on which you will run either artisan queue:work or artisan horizon, it’s the same server which you’re going to be running your schedule on too. Of course, this could be separated out to different servers if that works better for your use case.
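
For reference, the Laravel scheduler itself only needs a single crontab entry on this processing server; everything else is defined in your code. This is the standard entry from the Laravel documentation (the path is hypothetical):

```text
* * * * * cd /var/www/my-app && php artisan schedule:run >> /dev/null 2>&1
```

Your queue workers, by contrast, should be kept alive by supervisor rather than cron, so they restart automatically if they die.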

Part 4: Data Server (MySQL and Redis)

As I say, I personally would be looking for managed solutions for this part of my architecture. If I were using AWS then, of course, they have ElastiCache and RDS to handle these problems for you. If you’re using Google Cloud, they also have a relational database service.

Or, you build a droplet to host this stuff on. If you host on a droplet, ensure you have arranged adequate backups! Digital Ocean will do a weekly backup for a percentage of the droplet’s cost. If you need to back up more frequently, I’ll cover this off later on.

This server needs to have MySQL (make sure to remove the Bind Address) and Redis (take it out of protective mode, and make any modifications as required). Install these, get them running, and if necessary set up how Redis will persist and the resource limits/supervision for MySQL.
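
Concretely, the two settings mentioned are these. The file paths are typical Ubuntu/Debian defaults and may differ on your distribution; treat this as a sketch, not a hardening guide:

```text
# /etc/mysql/mysql.conf.d/mysqld.cnf — comment out the bind address so
# MySQL listens beyond localhost
# bind-address = 127.0.0.1

# /etc/redis/redis.conf — allow connections from your other droplets
protected-mode no
bind 0.0.0.0
```

If you open these up, the firewall rules later in this article stop being optional; both services will otherwise accept connections from anywhere.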

Part 5: Log Files

If, as they should be, log files are an important part of your application, you want to get them off your servers as soon as possible. Cloud servers are like cattle, you must be prepared to kill them at a moment’s notice. If you’re going to do that then you need the logs (which may contain useful information about why the server died) to not be there when you nuke them into oblivion.

There are loads of services out there to manage this. Though what I will say is that it is just as easy to run an Artisan command to migrate them to Digital Ocean Spaces periodically. Spaces start at $5/month. AWS S3 buckets are also exceptionally cost effective.

The key point here is, don’t leave your log files on the web or processing servers for any longer than you need to.

Part 6: Interconnectivity

Quick side note here: when you are dealing with Droplet-to-Droplet communications, use their private IPs, which are routed differently, are thus faster, and don’t count towards your bandwidth limits.

Part 7: Sessions

I am only going to touch on this really quickly. You need to be using a centralised store for your sessions (Redis or MySQL work perfectly well), otherwise (unless you are using sticky sessions) your authentication and any other session/state-dependent functionality will break.

Sticky sessions are great, but be careful if you are storing session information on the local web server (the one the client is stuck to) and that web server goes down, because the assumption on any web server is that anything stored on it is ephemeral.
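
In Laravel terms, moving the session (and, while you are at it, cache and queue) stores over to Redis is a configuration change. The values below are illustrative only; REDIS_HOST would be the private IP of your data server:

```text
# .env
SESSION_DRIVER=redis
CACHE_DRIVER=redis
QUEUE_CONNECTION=redis

REDIS_HOST=10.0.0.12
REDIS_PORT=6379
```

Every web and processing server gets the same values, which is exactly why your deployment process needs to be able to push environment changes everywhere at once (more on that in Part 11).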

Part 8: User Uploaded Files

Your application may well have user uploaded files, which need to be distributed. We know by now that they cannot be stored on any of the web servers due to their ephemerality.

On this one I would advise using whatever your provider offers. In this example we’re using Digital Ocean, so use Spaces (which is essentially file storage + CDN). If you’re using Amazon Web Services, use S3 bucket(s), with CloudFront for the CDN aspect.

Configure your driver(s) as appropriate to use Digital Ocean Spaces, which is S3 compatible. Your user uploaded files still work as they always did, except they will no longer be stored in storage/* for reasons I’ve probably reiterated half a dozen times in this article.
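
As a sketch, a Spaces disk in config/filesystems.php reuses the standard s3 driver with a custom endpoint; the region, bucket, and env key names here are made up:

```php
// config/filesystems.php (fragment)
'spaces' => [
    'driver'   => 's3',
    'key'      => env('DO_SPACES_KEY'),
    'secret'   => env('DO_SPACES_SECRET'),
    'region'   => 'ams3',
    'bucket'   => 'my-app-uploads',
    'endpoint' => 'https://ams3.digitaloceanspaces.com',
],
```

Point your application’s default disk (or the relevant uploads disk) at this entry and your storage calls carry on working unchanged.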

Also make sure that any public linking you’re doing, to article or profile images, for example, is respecting the appropriate CDN location.

Part 9: Backups

It is almost impossible for me to describe how your backups should work, as I don’t know your environment. However, Spaces and S3 are both cost effective, and are where I would advise you store your backups.

Depending on how you want to back up, there are 101 ways to take those backups. But nothing (logs don’t count) from your web servers should be backed up; they should only contain the necessary configurations to serve traffic.

Your files, theoretically, are unlikely to need backing up either, because they are sitting within a managed service, and if you have used managed MySQL / Redis / etc. services then you don’t need to back them up either!
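
One of those 101 ways, sketched as a single crontab entry on the data server: dump, compress, and stream straight into an S3-compatible bucket without touching local disk. The database name, bucket, and schedule are all hypothetical, and it assumes the AWS CLI (which also speaks to Spaces) is configured with your credentials:

```text
# Nightly at 02:30
30 2 * * * mysqldump --single-transaction my_database | gzip | aws s3 cp - s3://my-backup-bucket/db/$(date +\%F).sql.gz
```

The --single-transaction flag keeps the dump consistent for InnoDB tables without locking everything while it runs.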

Part 10: Kill enemies with fire (firewalling)

If you’re using Digital Ocean, Firewalls are just included; but whatever solution you are using, your security policies need to reflect the work your servers are doing, roughly outlined below.

  1. Web servers need to accept port 80 (HTTP) and port 443 (HTTPS) traffic, but only from your Load Balancer
  2. Your Data Server (assuming MySQL and Redis) needs to accept port 3306 (MySQL) and port 6379 (Redis) traffic, but only from the processing and web servers
  3. Your Processing Server should not accept any traffic from anywhere
  4. You may wish to put exceptions in for your deployment tool, and potentially your own IP address so that you can shell into your servers, view MySQL and such
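
With plain ufw on the droplets themselves, those policies sketch out roughly as follows. All the IPs are made-up placeholders (10.0.0.2 for the load balancer, 10.0.0.0/24 for your private network); managed cloud firewalls achieve the same thing at the provider level:

```text
# Web servers: HTTP/HTTPS only from the load balancer
ufw allow from 10.0.0.2 to any port 80 proto tcp
ufw allow from 10.0.0.2 to any port 443 proto tcp

# Data server: MySQL and Redis only from the private network
ufw allow from 10.0.0.0/24 to any port 3306 proto tcp
ufw allow from 10.0.0.0/24 to any port 6379 proto tcp

# Everything else: deny by default
ufw default deny incoming
ufw enable
```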

Part 11: Deployments

Personally, I love Envoyer, I think it’s brilliant. But the key point to take away with deployments in this scenario is that it can no longer be done manually.

If you have 3x web servers + 1x processing server, you cannot shell into all 4 and do git pull and git checkout and all that stuff; you need something to manage that process for you properly.

If you use Envoyer, there are loads of resources out there on how to get it set up just right for your application, but the key point is you cannot just do it by hand.

Your deployment process should be capable of running without you (Continuous Deployment) following successful Continuous Integration, for example when a merge request is successfully merged into Master, you may wish to deploy straight to your production servers.

Deployments will need to cover NPM/Yarn dependencies, composer dependencies, and anything else your application needs to worry about, perhaps migrations, or clearing views, caches, and configs.

You will also need to know how you are going to deploy environment changes (a change of MySQL host address, for example) to your servers instantaneously.

Part 12: Manual Stuff

If, for any reason, you need to run anything manually, like Artisan commands, then you would log into your processing server and run them there.

Ideally you would never need to do this, of course. But it is worth noting how you would go about doing this.


  1. Digital Ocean
  2. Amazon Web Services
  3. Envoyer – Zero Downtime Deployments
  4. ScaleGrid – Managed RDS Solutions (MySQL and Redis)
  5. Laravel – the PHP Framework for Web Artisans
  6. CloudFlare – the web performance and security company