My Blog

Cloud Hosting – Basic Overview and Considerations

This article is aimed at people who are traditionally used to running production environments on bare metal, or on a single VPS hosting their sites/platforms, and who now need to understand the practical considerations of hosting their application in a more scalable way.

For the purpose of this article I am going to assume the application in question is Laravel based. Realistically this isn’t particularly relevant, save for the fact that you need to be able to control your configuration across all servers.

I will soon be writing some case studies about the work I have done on various assignments; for now, everything here is based on experience but kept strictly hypothetical.

Additionally, to keep things simple for the purposes of this article, I am going to talk about Digital Ocean and assume we are using their services. Whilst we all know that AWS is the monarch of cloud scalability, at an entry level Digital Ocean is easier to cost and manage.

This article is not a deep dive! It just outlines, at a very high level, the things one would need to consider before moving to this kind of setup.

Our considerations

A brief list of the things that we need to be thinking about

  • Serving web traffic
  • Running the Laravel Schedule (cron)
  • Running Queue Workers
  • MySQL
  • Queue Driver (I tend to use Redis)
  • Cache Driver (again, I tend to use Redis – but the File driver will not work in this instance)
  • Log Files
  • Sessions (I tend to use either database or Redis, depending on the application, but as always, the File driver will not work in this instance)
  • User-uploaded files, and their distribution
  • Backups
  • Security / Firewalling

Digital Ocean Implementation Overview

For the purpose of this article we’re going to spin up a number of resources with Digital Ocean to facilitate our application. In the real world you will need to decide what resources you need.

  • 3x Web Servers
  • 1x Processing Server
  • 1x Data Server
  • 1x Load Balancer
  • 1x Space

Annotations on the above

Something that is worth taking note of, early doors, is that your servers are expendable at any given time. If you kill a web server, your application should survive.

In the real world, I would suggest that you use a managed MySQL and Redis (or Cache + Queue driver) service, to mitigate your risk on those droplets, making all of them expendable.

Notes on other tools

I personally would always recommend, in this environment, using Continuous Integration to test your application, and using Merge requests to ensure that anything which finds its way into a deployable branch (or a branch from which versions will be tagged) is as safe as it can be.

I would also advise Test Driven Development (TDD) because this is probably the only way you really know that you have 100% test coverage.

I am going to come onto deploying to your environment at the bottom of this article.

Part 1: Building Your Servers

It makes good sense to build the base requirements of your server into an image/snapshot/etc (depending on your provider), and then your redistribution process is much easier, because you can spin up your servers/droplets/instances based on that image.

Whilst the Data Server does not need to have any application knowledge, the others will all need to have your chosen version of PHP and any required extensions that your application needs to run.

Once you’ve built the first server, take a snapshot of it, with SSL certificates and such.

I would advise running Supervisor on these servers, to ensure that Apache or nginx (or your chosen web server) stays running at all times, to minimise and mitigate downtime.

Part 2: Serving Web Traffic

Whether you are using Digital Ocean, Amazon Web Services, or any other provider; they all offer Load Balancers. The purpose of a Load Balancer is to balance the load, by distributing your web traffic to the servers available. This means that in times of peak traffic you can add more web servers to handle the traffic (this is known as horizontal scaling – the opposite of vertical scaling which is where you upsize your servers).

So, build the servers that you need, and make them capable of serving web traffic. Then fire up a Load Balancer and assign those droplets to it.

Once you have done this, point your DNS at the Load Balancer, so that it can distribute the traffic for that domain.

If you are using CloudFlare, make sure you forward port 443 (SSL) to port 443, and set SSL to pass through, so that the handshake between destination server, load balancer, and browser can succeed.

Now that you have web servers, you need to make sure you have the supporting infrastructure behind them.

Part 3: Running the Laravel Schedule (cron tasks) and Queue Processors

You do not want to run the cron on your web servers. Unless, of course, your desired effect is to have the job running X times per interval (where X is the number of web servers you are running).

For this kind of background processing, you want another server, which is where the processing server comes in. This is the server which is going to do all of your background processing for you.

This server needs to be able to do all the same things that your web servers can do, except it just won’t be serving web traffic (port 80 for HTTP and port 443 for HTTPS).

This is the server on which you will run either artisan queue:work or artisan horizon, and it’s also the server on which you’ll run your schedule. Of course, this could be separated out onto different servers if that works better for your use case.
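
To make the “one server runs the schedule” point concrete, here is a minimal sketch of how this hangs together in Laravel. The emails:send command and the project path are made-up examples; the important part is that the crontab entry exists only on the processing server.

// app/Console/Kernel.php (excerpt; Schedule is imported by the default Kernel)
// The only cron entry you need lives on the processing server alone, e.g.:
//
//   * * * * * cd /var/www/app && php artisan schedule:run >> /dev/null 2>&1
//
protected function schedule(Schedule $schedule)
{
    // emails:send is a hypothetical command, purely for illustration
    $schedule->command('emails:send')->hourly();
}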

Part 4: Data Server (MySQL and Redis)

As I say, I personally would be looking for managed solutions for this part of the architecture. If I were using AWS then, of course, they have ElastiCache and RDS to handle these problems for you. If you’re using Google Cloud, they also offer a managed relational database service.

Or, you build a droplet to host this stuff on. If you host on a droplet, ensure you have arranged adequate backups! Digital Ocean will do a weekly backup for a percentage of the droplet’s cost. If you need to back up more frequently, I’ll cover that off later on.

This server needs to have MySQL (change the bind address so it listens on the private network interface rather than just localhost) and Redis (take it out of protected mode, and make any other modifications required). Install these, get them running, and if necessary set up how Redis will persist and the resource limits/supervision for MySQL.

Part 5: Log Files

Log files are (or should be) an important part of your application, and you want to get them off your servers as soon as possible. Cloud servers are like cattle: you must be prepared to kill them at a moment’s notice. If you’re going to do that, then the logs (which may contain useful information about why the server died) need to live somewhere other than the server you’re nuking into oblivion.

There are loads of services out there to manage this. Though what I will say is that it is just as easy to run an Artisan command to ship them to Digital Ocean Spaces periodically. Spaces start at $5/month. AWS S3 buckets are also exceptionally cost effective.
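
As a rough illustration (the command name and the “spaces” disk are my own assumptions, not prescriptions), a command along these lines, run from the scheduler, will do the job:

// app/Console/Commands/ShipLogs.php — hypothetical command
namespace App\Console\Commands;

use Illuminate\Console\Command;
use Illuminate\Support\Facades\File;
use Illuminate\Support\Facades\Storage;

class ShipLogs extends Command
{
    protected $signature = 'logs:ship';
    protected $description = 'Copy local log files to object storage, then remove them';

    public function handle()
    {
        foreach (File::files(storage_path('logs')) as $file) {
            // You may want to skip the current day's log, which is still being written to

            // Namespace by hostname so logs from each server don't overwrite one another
            $remotePath = gethostname() . '/' . $file->getFilename();

            Storage::disk('spaces')->put($remotePath, File::get($file->getPathname()));

            File::delete($file->getPathname());
        }
    }
}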

The key point here is, don’t leave your log files on the web or processing servers for any longer than you need to.

Part 6: Interconnectivity

Quick side note here: when you are dealing with Droplet-to-Droplet communications, use their private IPs, which are routed differently (and therefore faster) and don’t count towards your bandwidth limits.
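
In practice, that just means each droplet’s .env points DB_HOST and REDIS_HOST at the data server’s private address (the IP below is made up), and the stock Laravel config reads those values:

// config/database.php (trimmed to the relevant keys)
//
// In each droplet's .env (example private IP):
//   DB_HOST=10.131.0.5
//   REDIS_HOST=10.131.0.5
return [
    'connections' => [
        'mysql' => [
            'driver'   => 'mysql',
            'host'     => env('DB_HOST', '127.0.0.1'), // private, not public, IP
            'database' => env('DB_DATABASE', 'forge'),
            'username' => env('DB_USERNAME', 'forge'),
            'password' => env('DB_PASSWORD', ''),
        ],
    ],
    'redis' => [
        'default' => [
            'host' => env('REDIS_HOST', '127.0.0.1'), // private IP again
            'port' => env('REDIS_PORT', 6379),
        ],
    ],
];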

Part 7: Sessions

I am only going to touch on this really quickly. You need to be using a centralised store for your sessions (Redis or MySQL work perfectly well), otherwise (unless you are using sticky sessions) your authentication and any other session/state-dependent functionality will break.
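
This is a one-line change per server, because Laravel reads the session driver from the environment:

// config/session.php (trimmed to the relevant key)
// In each server's .env, switch away from the file driver:
//   SESSION_DRIVER=redis   (or "database")
return [
    'driver' => env('SESSION_DRIVER', 'file'),
];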

Sticky sessions are great, but be careful: if you are storing session information on the local web server (the one the client is stuck to) and that web server goes down, the session is lost, because the working assumption on any web server is that anything stored on it is ephemeral.

Part 8: User Uploaded Files

Your application may well have user uploaded files, which need to be distributed. We know by now that they cannot be stored on any of the web servers due to their ephemerality.

On this one I would advise staying within whatever stack you’re already using. In this example we’re using Digital Ocean, so use Spaces (which is essentially file storage + CDN). If you’re using Amazon Web Services, use S3 bucket(s) with CloudFront for the CDN aspect.

Configure your driver(s) as appropriate to use Digital Ocean Spaces, which is S3 compatible. Your user-uploaded files still work as they always did, except they will no longer be stored in storage/* for reasons I’ve probably reiterated half a dozen times in this article.
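
For reference, a Spaces disk in config/filesystems.php just uses the standard S3 driver; the disk name and env keys below are my own choices, not a convention you have to follow:

// config/filesystems.php (excerpt) — Spaces speaks the S3 protocol, so the
// existing s3 driver works; only the endpoint differs from a real S3 bucket.
'disks' => [
    'spaces' => [
        'driver'   => 's3',
        'key'      => env('DO_SPACES_KEY'),
        'secret'   => env('DO_SPACES_SECRET'),
        'region'   => env('DO_SPACES_REGION', 'ams3'),
        'bucket'   => env('DO_SPACES_BUCKET'),
        'endpoint' => env('DO_SPACES_ENDPOINT', 'https://ams3.digitaloceanspaces.com'),
    ],
],

With that disk in place, Storage::disk('spaces')->put() and ->url() behave exactly as they would against S3.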

Also make sure that any public linking you’re doing (to article or profile images, for example) respects the appropriate CDN location.

Part 9: Backups

It is almost impossible for me to describe how your backups should work, as I don’t know your environment. However, Spaces and S3 are both cost effective, and are where I would advise you store your backups.

There are 101 ways to take those backups, depending on how you want to do it. But nothing from your web servers should be backed up (logs don’t count); they should only contain the configuration necessary to serve traffic.

Your files, theoretically, are unlikely to need backing up either, because they are sitting within a managed service; and if you have used managed MySQL/Redis/etc. services then you don’t need to back those up either!
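
To give a flavour of the “more frequent backups” point from Part 4 — and this is only a rough sketch, assuming the “spaces” disk from Part 8 and mysqldump installed on the processing server — a scheduled task can dump the database and push it off-server:

// app/Console/Kernel.php (excerpt) — a nightly dump pushed to object storage.
// Requires: use Illuminate\Support\Facades\Storage;
$schedule->call(function () {
    $file = storage_path('app/backup-' . now()->format('Y-m-d-His') . '.sql');

    // Quick-and-dirty credentials on the command line; in production prefer
    // a --defaults-extra-file so the password never hits the process list.
    exec(sprintf(
        'mysqldump --host=%s --user=%s --password=%s %s > %s',
        escapeshellarg(env('DB_HOST')),
        escapeshellarg(env('DB_USERNAME')),
        escapeshellarg(env('DB_PASSWORD')),
        escapeshellarg(env('DB_DATABASE')),
        escapeshellarg($file)
    ));

    Storage::disk('spaces')->put('backups/' . basename($file), file_get_contents($file));

    unlink($file);
})->dailyAt('02:00');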

Part 10: Kill enemies with fire (firewalling)

If you’re using Digital Ocean, firewalls are included; but whichever solution you are using, your security policies need to reflect the work your servers are doing, roughly outlined below.

  1. Web servers need to accept port 80 (HTTP) and port 443 (HTTPS) traffic, but only from your Load Balancer
  2. Your Data Server (assuming MySQL and Redis) needs to accept port 3306 (MySQL) and port 6379 (Redis) traffic, but only from the processing and web servers
  3. Your Processing Server should not accept any traffic from anywhere
  4. You may wish to put exceptions in for your deployment tool, and potentially your own IP address so that you can shell into your servers, view MySQL and such

Part 11: Deployments

Personally, I love Envoyer, I think it’s brilliant. But the key point to take away with deployments in this scenario is that it can no longer be done manually.

If you have 3x web servers + 1x processing server, you cannot shell into all 4 and do git pull and git checkout and all that stuff; you need something to manage that process for you properly.

If you use Envoyer, there are loads of resources out there on how to get it set up just right for your application, but the key point is you cannot just do it by hand.

Your deployment process should be capable of running without you (Continuous Deployment) following successful Continuous Integration; for example, when a merge request is merged into master, you may wish to deploy straight to your production servers.

Deployments will need to cover NPM/Yarn dependencies, composer dependencies, and anything else your application needs to worry about, perhaps migrations, or clearing views, caches, and configs.

You will also need to know how you are going to deploy environment changes (a change of MySQL host address, for example) to your servers instantaneously.

Part 12: Manual Stuff

If, for any reason, you need to run anything manually, like Artisan commands, then you would log into your processing server and run them there.

Ideally you would never need to do this, of course. But it is worth noting how you would go about doing this.

Links

  1. Digital Ocean
  2. Amazon Web Services
  3. Envoyer – Zero Downtime Deployments
  4. ScaleGrid – Managed RDS Solutions (MySQL and Redis)
  5. Laravel – the PHP Framework for Web Artisans
  6. CloudFlare – the web performance and security company

Using MySQL to return complex tagged results

Hi everyone

In this article I thought I would tackle complex tag searching, using MySQL. There are lots of ways this can be achieved at a PHP level, through looping and checking/comparing.

However, if you are working with a large dataset and tagging is a required piece of functionality (in an enterprise blog or a CRM, for example), then you’ll need to get your results directly from the database, rather than loading records into memory and looping, which only works in smaller applications.

We are going to cover 3 types of tag searching, and how to correlate all 3 of those result sets together in a single query.

  • Record must have all specified tags (e.g. this contact must be both a customer and high value)
  • Record must have any specified tags (e.g. this post must be tagged with laravel or mvc)
  • Record must not have any specified tags (e.g. don’t show anybody from a CRM which has already received the “A” campaign, so that “A” customers don’t receive “B” campaign, too)

Outlining the example

To keep things simple, the example we’re going to work with is as follows;

  1. We have a customers table
  2. Every customer record has one, many, or no corresponding tags in the customer_tags table
  3. The customer_tags table has customer_id and tag_id bridging the customers and tags tables, respectively

The query we are working towards needs to return records which conform to the following tagged search:

  1. Must have tags 999 and 888
  2. Must also have any of the tags 777, 666, 555
  3. Must not have any of the tags 444 or 333

Finding records which have all tags

The first thing we need is a working SQL statement which will do this; of course, you are then going to want it to be programmatically generated.

In the example below we’re joining on the mandatory tags. This is arguably the simplest part of the selection, because we’re doing an inner join – if the tag doesn’t exist, the record won’t be returned, which means we don’t need to do any WHERE filtering.

Notice that I am naming the customer_tags joins as mh1 and mh2 (mh being short for must have)

/* Retrieve the fields we want */
SELECT c.id, c.first_name, c.last_name

/* Start with the customers table */
FROM customers c

/* For each MUST HAVE tag */
JOIN customer_tags mh1
ON c.id = mh1.customer_id
AND mh1.tag_id = 999

JOIN customer_tags mh2
ON c.id = mh2.customer_id
AND mh2.tag_id = 888

Finding records which also have any of the any-specified tags

Now that we have only records tagged with 999 and 888, we need to make sure that we have one or more of the “any” tags (777, 666, and 555)

What we need here is to look for the connections; we need to do these as an outer join, as it is perfectly valid for the corresponding record to not be found.

Following that, we need to add a WHERE clause which ensures that at least one of those records was returned.

For the must-have tags we did an inner join; this means that unless the corresponding record is found, the result will not be returned.

In this case we’re going to do an OUTER JOIN on the LEFT. The join being an OUTER JOIN means that the customer record will be returned even if the corresponding customer_tags record was not found. Being a LEFT JOIN means that we’re basing the query on our customers table; we’re not going to return irrelevant customer_tags records.

/* Retrieve the fields we want */
SELECT c.id, c.first_name, c.last_name

/* Start with the customers table */
FROM customers c

/* For each MUST HAVE tag */
JOIN customer_tags mh1
ON c.id = mh1.customer_id
AND mh1.tag_id = 999
JOIN customer_tags mh2
ON c.id = mh2.customer_id
AND mh2.tag_id = 888

/* Join on the have-any tag relationships */
LEFT OUTER JOIN customer_tags ha1
ON c.id = ha1.customer_id
AND ha1.tag_id = 777
LEFT OUTER JOIN customer_tags ha2
ON c.id = ha2.customer_id
AND ha2.tag_id = 666
LEFT OUTER JOIN customer_tags ha3
ON c.id = ha3.customer_id
AND ha3.tag_id = 555

Now this in itself isn’t going to actually solve the issue we have around making sure we have 777, 666, or 555 – it is simply going to join on those relationships, if they exist.

This is where we need a WHERE statement to filter the results for us.

WHERE (
ha1.id IS NOT NULL
OR ha2.id IS NOT NULL
OR ha3.id IS NOT NULL
)

This statement means that a record will only be valid and returned if at least one of the have-any (ha) tag relationships was found (the bracketed condition thus returning TRUE)

Does not have any of these tags

So the final part of this requirement is to ensure that the records returned do not have any of the specified tags that we have to avoid.

We are going to do this in a very similar way to the have-any tags; see below

LEFT OUTER JOIN customer_tags nh1
ON c.id = nh1.customer_id
AND nh1.tag_id = 444

LEFT OUTER JOIN customer_tags nh2
ON c.id = nh2.customer_id
AND nh2.tag_id = 333

And, to make sure they were not returned, we’re going to do the following

AND nh1.id IS NULL
AND nh2.id IS NULL

So we’re LEFT OUTER JOINing here for the same reason we left outer joined on the have-any tags – it is valid for this tag to not exist; in fact, in this case we actively don’t want it to exist.

Putting the query together

/* Retrieve the fields we want */
SELECT c.id, c.first_name, c.last_name

/* Start with the customers table */
FROM customers c

/* For each MUST HAVE tag */
JOIN customer_tags mh1
ON c.id = mh1.customer_id
AND mh1.tag_id = 999
JOIN customer_tags mh2
ON c.id = mh2.customer_id
AND mh2.tag_id = 888

/* Join on the have-any tag relationships */
LEFT OUTER JOIN customer_tags ha1
ON c.id = ha1.customer_id
AND ha1.tag_id = 777
LEFT OUTER JOIN customer_tags ha2
ON c.id = ha2.customer_id
AND ha2.tag_id = 666
LEFT OUTER JOIN customer_tags ha3
ON c.id = ha3.customer_id
AND ha3.tag_id = 555

/* Join on the must not have tag relationships */
LEFT OUTER JOIN customer_tags nh1
ON c.id = nh1.customer_id
AND nh1.tag_id = 444
LEFT OUTER JOIN customer_tags nh2
ON c.id = nh2.customer_id
AND nh2.tag_id = 333

/* Ensure at least one of the have-any relationships is found */
WHERE (
ha1.id IS NOT NULL
OR ha2.id IS NOT NULL
OR ha3.id IS NOT NULL
)

/* Ensure the not-have relationships don't exist */
AND nh1.id IS NULL
AND nh2.id IS NULL

/* Never return duplicate customers */
GROUP BY c.id

/* Order by last updated customer records */
ORDER BY c.updated_at DESC

Adding further search filtering

I am not actually going to fully cover this; it will be on you, with your application, to make the best decisions. If you want to build a raw SQL query (rather than a prepared statement) then you may find this line handy, though I can’t vouch for its safety.

$sanitised = DB::connection()->getPdo()->quote($string);

Dynamically Generating Your Query

All you really need to do is variablise (yes, I made that word up) your query generation;

I’m sorry, this isn’t PSR-2, I’m not going to lie to you, I wrote it straight into WordPress, and don’t want to spin up containers to run PHPCBF/PHPCS against it

// These are the must have tags
$mustHaveAllTags = [999, 888, 777];
// Must have 1+ of these tags
$mustHaveAnyTags = [666, 555];
// Must not have any of these tags
$mustNotHaveTags = [444, 333];
// Pagination of the query
$page = 1;
$perPage = 100;
// These are the components of our WHERE statement
$wheres = [];
// This is where we will build our SQL query
$sql = "SELECT c.id, c.first_name, c.last_name FROM customers c";

// Loop the must have tags (inner joins)
foreach ($mustHaveAllTags as $tagId) {
    $joinName = "mh$tagId";
    $sql .= "
    JOIN customer_tags $joinName
    ON $joinName.customer_id = c.id
    AND $joinName.tag_id = $tagId";
    // Strictly redundant (the inner join already guarantees the tag exists),
    // but harmless, and it keeps the WHERE clause from ever being empty
    $wheres[] = "$joinName.id IS NOT NULL";
}

// Handle the must have any tags (outer joins, plus an OR-grouped WHERE)
if (!empty($mustHaveAnyTags)) {
    $anyWheres = [];
    foreach ($mustHaveAnyTags as $tagId) {
        $joinName = "ha$tagId";
        $sql .= "
        LEFT OUTER JOIN customer_tags $joinName
        ON $joinName.customer_id = c.id
        AND $joinName.tag_id = $tagId";
        $anyWheres[] = "$joinName.id IS NOT NULL";
    }
    $wheres[] = "(" . implode(" OR ", $anyWheres) . ")";
}

// Must not have any of these tags (outer joins which must not match)
foreach ($mustNotHaveTags as $tagId) {
    $joinName = "nh$tagId";
    $sql .= "
    LEFT OUTER JOIN customer_tags $joinName
    ON $joinName.customer_id = c.id
    AND $joinName.tag_id = $tagId";
    $wheres[] = "$joinName.id IS NULL";
}

// Append all the WHERE statements we created
// (the tag ids here are integers we control; anything user-supplied should
// go through a prepared statement or be escaped, as mentioned above)
$sql .= " WHERE " . implode(" AND ", $wheres);
// Group by the customer so duplicates are never returned
$sql .= " GROUP BY c.id ";
// Calculate pagination
$offset = ($page - 1) * $perPage;
$limit = $perPage;
// Append query limiting
$sql .= " LIMIT $offset, $limit ";

Next Steps

The next steps on this, to be honest, are beyond the scope of this article. What you now have is a query which will provide your tagged results as appropriate.

For me, I would now extract that functionality into a class, probably two (one responsible for providing a search “interface”, and one for generating SQL). Then, depending on whether you want to use Eloquent or raw SQL, either move the custom searching (if you wanted to search by name, for example) into Eloquent, using prepared statements, or escape strings and build the query by hand (I would personally discourage the latter approach).

If you do want to do it in Eloquent you will incur a second query, but there are (from a programmatic perspective) some advantages, like not having to worry about generating and executing prepared statements. If this is the case, only select c.id from your query. From there, run whatever normal Eloquent stuff you want, and simply add

$query->whereIn('id', $idsReturnedFromYourQuery);
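
For illustration, the glue might look something like this (Customer is a hypothetical Eloquent model; $sql is the query generated above, trimmed to select only c.id):

use Illuminate\Support\Facades\DB;

// Run the generated SQL once to get the matching ids...
$ids = array_map(
    function ($row) { return $row->id; },
    DB::select($sql)
);

// ...then let Eloquent do everything else (relations, scopes, pagination, etc.)
$customers = Customer::whereIn('id', $ids)
    ->orderBy('updated_at', 'desc')
    ->get();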

What about performance? Joins are not performant

I’ve tested this kind of query with up to about 10 joins (10 tags), on a dataset of about 500,000 base records and it ran in about 300ms on a $40 Digital Ocean droplet.

Make sure that customer_tags has a compound index on customer_id + tag_id to make the lookups faster, and add foreign keys on both fields in that table.
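
If you’re managing the schema through Laravel migrations, that looks roughly like this (table and column names as used throughout this article):

use Illuminate\Database\Schema\Blueprint;
use Illuminate\Support\Facades\Schema;

Schema::table('customer_tags', function (Blueprint $table) {
    // Compound index covering the join condition used by every tag lookup
    $table->index(['customer_id', 'tag_id']);

    // Referential integrity back to the parent tables
    $table->foreign('customer_id')->references('id')->on('customers');
    $table->foreign('tag_id')->references('id')->on('tags');
});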

In a lot of applications, this will be performant enough. When you start hitting the millions of records I am sure it will slow down and you will need a different approach.

Dependency Inversion and Interface Segregation using Laravel’s Service Container

In short, we’re going to cover the Interface Segregation Principle (ISP) and the Dependency Inversion Principle (DIP) in this article, and how we practically implement them within a Laravel environment.

Dependency Inversion Principle

  1. The principle is handy for decoupling modules (making them independent of one another)
  2. Modules should not depend upon each other, but instead upon abstractions (in PHP these are generally interfaces)
  3. Abstractions should not be concerned with the implementation details of functionality

Interface Segregation Principle

The next part we’re going to look at in this article is the Interface Segregation Principle, roughly covered in these two points

  1. No client should be forced to depend upon methods which it does not use
  2. Many specific interfaces are better than a single general purpose interface

Approaching the task

Now that we have recapped the principles, very briefly, here’s how we implement this.

  1. Split the required functionality into any parts which, according to the Single Responsibility Principle, would need to be separate – this will create interface segregation
  2. Understand the communication between any of these parts at its most generic level
  3. Create the interfaces (abstractions) through which this communication can be achieved
  4. Inject your dependencies based on the abstractions, achieving dependency inversion
  5. Resolve the abstractions to concretions with the Laravel service container

Implementing the Approach

The specification I was working to was (roughly);

Players should be able to send cash, bullets, and bonds to one another

Paraphrased specification

I’m going to quote some code below, which comes from this Pull Request

1. Split the requirements into single responsibilities

Several things happen when funds are transferred between accounts, so we need to logically split these

  1. Currency is credited to a character
  2. Currency is debited from a character
  3. A log of the transaction is stored
  4. A notification is sent

Logically, only the first 3 parts of this need to be programmed, because we can use Laravel Events, Listeners, and Notifications to handle the fourth part.

Parts 1 and 2 of this can be handled by a single class, which is responsible for adjusting currency amounts on a specific character.

So that leaves part 3, and the wrapping of parts 1 and 2 for our transferrer. With this separation, I now have 2 classes with single responsibilities:

  1. CurrencyHandler which is responsible for handling currency on characters (checking, crediting, debiting), single responsibility: this class will only change if the way in which we store/calculate currency is changed
  2. CurrencyTransferrer which is responsible for tying up the process of transferring currency from one character to another, single responsibility: this class will only change if the way which we carry out a transfer of currency changes

2. Understand the required communication

The transferrer depends upon the handler, because the transfer cannot happen without the handler. The next question is; what does the handler need to be able to communicate (whether answering questions, or receiving instructions)?

  1. Question: Is this (string) a valid currency? We have three in game
  2. Question: Is this amount valid for this character? i.e. can they afford to expend this amount
  3. Instruction: Add X amount of Y currency to this character
  4. Instruction: Remove X amount of Y currency from this character

And we also have to think about the transferrer, what do we need to be able to communicate with a currency transferrer?

  1. Instruction: Transfer X amount of Y currency from A character to B character

3. Create the Interfaces

From these single responsibilities, we can create 2 interfaces

  1. HandlesCurrency
  2. TransfersCurrency

HandlesCurrency defines the following methods to be implemented, according to the lines of communication we defined

public function creditCurrency(Character $character, string $currency, int $amount): void;

public function debitCurrency(Character $character, string $currency, int $amount): void;

public function characterHasOnHand(Character $character, string $currency, int $amount): bool;

public function validateCurrency(string $currency): bool;

Whereas, based on the required communication, the TransfersCurrency interface declares the following

public function transfer(Character $from, Character $to, string $currency, int $amount);

4. Segregate these interfaces from their concretions, and inject the abstractions

Laravel makes this bit exceptionally easy… (the following is within a Service Provider)

$this->app->bind(HandlesCurrency::class, CurrencyHandler::class);
$this->app->bind(TransfersCurrency::class, function () {
    return new CurrencyTransferrer(
        resolve(HandlesCurrency::class)
    );
});

So what we are doing here is…

  1. When I request the HandlesCurrency interface, Laravel will give me an instance of the CurrencyHandler class (concretion)
  2. When I request the TransfersCurrency interface, Laravel will give me an instance of CurrencyTransferrer
  3. The constructor of CurrencyTransferrer requires its HandlesCurrency dependency to be injected, so we tell Laravel to inject the appropriate resolution of HandlesCurrency (which is the CurrencyHandler) for us
  4. Now, whenever we want to transfer currency between players we can simply request a CurrencyTransferrer

There are some really important things to note here;

  1. The CurrencyTransferrer does not know, nor care, how currency is handled throughout the game, it just knows that it needs to be able to ask some questions, and issue some instructions to the implementation
  2. Nowhere in the codebase (unit tests excluded) will the CurrencyHandler or the CurrencyTransferrer be mentioned; every implementation will only ever depend upon the abstractions. Thus, if I wanted to completely replace the CurrencyHandler (because currency is now handled by a microservice, for example), I would create a class that implements the interface and change the service binding, and every usage would be swapped – as sketched below
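
To make that wiring tangible, here is a simplified sketch (not the actual code from the pull request) of how the CurrencyTransferrer might lean entirely on the abstraction:

// Namespaces/imports omitted; method names match the interfaces defined above
class CurrencyTransferrer implements TransfersCurrency
{
    protected $handler;

    public function __construct(HandlesCurrency $handler)
    {
        // We depend on the abstraction only; the container decides the concretion
        $this->handler = $handler;
    }

    public function transfer(Character $from, Character $to, string $currency, int $amount)
    {
        if (! $this->handler->validateCurrency($currency)) {
            throw new \InvalidArgumentException("Unknown currency: {$currency}");
        }

        if (! $this->handler->characterHasOnHand($from, $currency, $amount)) {
            throw new \InvalidArgumentException('Insufficient funds for transfer');
        }

        $this->handler->debitCurrency($from, $currency, $amount);
        $this->handler->creditCurrency($to, $currency, $amount);

        // Logging and notifications would hang off an event fired here
    }
}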

Just to wrap this full circle, let’s assume we have an API endpoint which transfers funds from one character to another. When we declare our Controller, we would define the constructor something like the below (and Laravel would do the rest for us)

class CurrencyTransferController extends Controller
{
    protected $transferrer;
    public function __construct(TransfersCurrency $transferrer)
    {
        $this->transferrer = $transferrer;
    }
}

In Summary

To summarise: what we’ve done is taken a piece of functionality which could’ve become very messy. We have split it into its reusable components, and then inverted any dependency on those concretions, so we only depend upon the channels of communication which they expose.

After doing all that we have completely interchangeable classes, and we’ve used the Laravel service container to resolve our abstractions (and to inject dependencies into their concretions).

This is a very simple example, but what it does is demonstrate how to create highly decoupled code, following the I and the D from the SOLID principles.

Some disclaimers

  1. The Character model is injected directly, and is not inverted via an interface. I am porting old functionality across, and nothing throughout the system can function without the concept of a Character; however, I think I will probably go and implement an interface to the effect of HasCurrency rather than accessing the fields directly. The other thing is that, if I’m not careful, I create lasagne here: I may, later on, decide to implement the HandlesCurrency interface directly upon the Character model, but in doing that I would be breaking the Single Responsibility Principle – I’m still toying with the different ways I could do this
  2. Strictly speaking the CurrencyHandler could be split into CreditsCurrency and DebitsCurrency – the reason I haven’t done this is because they are direct opposite functionalities. But now that I have noticed this, I may well go and change it
  3. I’m not perfect – there will be mistakes, I’m okay with that, but feel free to flag them, I am always happy to take on board other perspectives and learnings

Laravel Deep Dive – Mafia Online – Contents
I will update this as I add new articles

  1. Introduction
  2. Dependency Inversion and Interface Segregation using Laravel’s Service Container
  3. Achieving Single Responsibility with HTTP Requests in Laravel (coming soon)

Introduction to the Deep Dive

Hi everyone

I’ve been meaning to do deep dive articles for a while, and have always struggled to find the right code samples and stuff.

This weekend I made the decision to take a project I’ve been working on (Mafia Online) in a different direction, and port the codebase (which is shocking, for the reasons I talk about here) into a fresh Laravel installation.

Now, the scary part of that is as follows: lots of people talk the talk, but you can rarely see what they’ve actually done, the things they’re capable of, and how they go about practising what they preach.

Mafia Online was never supposed to go live. It only went live because a few of my friends liked it and I thought “well, why not?”, but because it was never supposed to go into production, it was built, well, shoddily.

This brings me to the next point of these deep dive articles. I want to cover, in depth, programming principles, software design patterns, and various other things. I need a real, working codebase to do this.

I also wanted to hold myself to account to not take shortcuts, so, I made the new Mafia Online codebase open source. No license attached, if you want to contribute to the project, awesome. It’s not a commercial project, so if you want to fork off and do your own stuff, great. Want to create your own clone of the game? Go for it.

You can find the repository on GitHub, and every now and then I am going to post real world code samples and examples of how to implement the things that you’ve heard of.

I realise I am likely to open myself up to a world of pain here, but that is kind of the point. I want to empower developers new and old to learn new things and to make themselves vulnerable to criticism. This gives you the ability to learn through critique, but it also empowers you to own your decisions. In front of the whole internet.


How to attack technical tests like a boss

I’ve been asked about technical tests more times than I can count, so I thought I would write this article. I feel qualified to do so as I have participated in literally dozens of technical tests throughout my career, and I’ve tested numerous candidates.

So, if you want to know how to attack a technical test as a candidate, this article might help you. Alternatively if you’re looking for how to technical test your candidates, because you’re new to the process, this is also for you!

The ultimate one-way ticket to unlimited job offers, by bossing your technical tests – listen closely, it’ll blow your mind…

Brace yourselves, this one will change your career and your outlook on life. Nobody will have ever given you such accurate advice, and never will again…

You can’t. The only way to nail technical tests and technical interviews is to actually be exactly as skilled as, or more skilled than, your CV implied.

It is absolutely acceptable not to be an expert in everything; in fact, nobody is. That’s why the phrase “Jack of all trades” is completed (in part) by “…master of none”, with the full version being “…but oftentimes better than a master of one”.

The point here is to understand where you’re at. Whether you’re an absolute master of a single technology, or someone with a breadth of knowledge across multiple fields without “mastery”, that’s fine. Be honest about your knowledge and expertise; only that way can employers and agents match you correctly.

I’m confident in my ability. How are they going to test me?

This is the specific part I’ve been asked about on several occasions: that fear of the unknown – how are they going to gauge my skill? In all my experience, there are 5 main ways to test a candidate’s technical knowledge, experience, exposure and expertise.

These all have different advantages and disadvantages, and all come with different twists, largely based on the (technical) hiring managers; but I will cover them all here.

Many companies will use several of the below, or may combine multiple forms of testing. To summarise, the main types of test are:

  1. The low level technical question
  2. The whiteboard test
  3. The code test
  4. The high level chat
  5. The takeaway specification

The low level technical question

This is best used when trying to confirm that the candidate understands what they are doing, and the technologies they are using. This might include something like:

What is the difference between “==” and “===” in PHP?

PHP Developer role, about 5 or 6 years ago

What would the following code do? [insert code sample]

Various developer roles from 2011 onwards

These kinds of questions are where you can unearth true understanding vs conceptual understanding, and depth of experience vs dabbling.

When interviewing low to mid level candidates, these kinds of questions are perfect for gauging where the candidate sits in terms of exposure, and consequently salary.

Preparation tips: Make sure you know the languages you’re going to be tested on (probably the ones you said you know), and brush up on as much obscure syntax as you can remember using; especially the bits of that language you hate. For me, in PHP, that’s ternary operators. Not sure why; it’s entirely subjective, and I’m not a fan.

I’ve been on the receiving end of this test probably a dozen times, if not more, particularly in the first 4 years or so of my career.

The Whiteboard Test

The infamous whiteboard test, or problem solving technical test. This usually involves being given some kind of broad problem, with you expected to design and/or explain how you would approach solving it.

Usually used in more heavyweight to senior/lead type roles, these questions are handy for understanding the candidate’s ability to solve problems or design solutions rather than their ability to write code.

Show me how you would implement a modular, tenanted system which could allow for bespoke modules, as well as those which were to be redistributed

Technical test for a lead developer role, about 2.5 years ago

There is a stock table. This is a transactional table, so no updates or deletes, only inserts. Stock is moved from one shelf to the other by movements in this table. Design the solution which would tell you how much stock is available at any given time

Technical test for a lead developer role, about 2.5 years ago

Preparation Tip: Think about the problems you’ve solved in the past. You may well have done this several times in the past. Consider the approaches you took, the problems you faced, and understand how that will likely be relevant in this interview. Then prepare how you would articulate those problem solving approaches. Being able to relay your given scenario to your real world experience is always good.

I’ve been on the receiving end of this one maybe 3 times, mostly when I was sitting between 3 and 5 years of experience.

Write some code (on site)

These ones are horrible, usually for 2 simple reasons – you’re given notepad rather than an IDE, and generally speaking Google is forbidden. Which is crazy considering we all know Google is your friend.

I’ve found a couple of variations on this one, and oftentimes this test is run in conjunction with either the whiteboard test or the low level test (especially if the task is to find problems or fix bugs).

This particular style of testing can be adapted for any level of developer, and will help to gauge skill and speed, as well as experience (especially if the task is one an experienced developer will have solved many times over).

Write a PHP function, in as few lines as possible, to give the entire contents of a specified directory, to infinite depth

Technical test for a developer role, about 5 years ago

Write a PHP function, which will give me the nth number in the Fibonacci sequence

Technical test for a developer role, about 4 years ago

Preparation Tip: There isn’t one for this really, except be as good as you said you are, and get hands on with code as many times as possible before your interview. There are plenty of coding challenges online that can help prepare for this.

In the early part of my career (up to about 4/5 years in) I had this one nearly every interview, so probably half a dozen times or more.

The High Level Chat

More often found in heavyweight, senior, and lead type roles. The high level chat is a great way to understand someone’s expertise and breadth of understanding.

High level chats are fluid ways to have a conversation with a candidate and understand how they solve problems, how they decide what technologies to use, and how well they know their stuff. The interesting part of this, for interviewer and candidate, is the ability to derail the conversation to follow new lines of technical enquiry, which can be very revealing of someone’s true level of skill and experience.

What is your experience with caching mechanisms?

Technical test for PHP Developer role, about a month ago

What do you know about Automation?

Technical test for Senior PHP developer role, about a month ago

These conversations are usually based on questions, which may be based on the company’s tech stack, or your cited experience in your CV.

Preparation Tip: Revisit your CV and reflect on all the projects you’ve said you did. Research the tech stack of your prospective employer; they’re going to be interested in that stuff specifically.

Since I was about 5 years in, this has become the most common. Almost every process includes it.

The Takeaway Specification

Usually delivered by the agent (if there is one), this is a specification – often provided via a GitHub readme and a repository to be forked. The specification could be anything, but your job will be to hit it, usually within a rough timeframe or for a specified amount of time.

The specification can be anything from “deliver a small system to do this” or “here are some details, build X integration with Y system”. It might be 2 hours of work, it might be 4 hours.

Deliver a web page which has a form, that captures X data, and submits it to Y third party endpoint.

Technical test for a PHP Developer role, about 3 years ago

Deliver a versioned API which handles video files

Technical test for a Senior PHP Developer role, about a month ago

Preparation tip: This one really holds the candidate to account to be able to deliver autonomously, like they no doubt said they could. You can’t really prepare for this other than being as capable as you say you are. If you do something a bit controversial, leave a code comment or similar to explain what you’re doing; annotating the readme is also fine. Be prepared to go above and beyond.

5 top tips!

So, time for my general advice in preparation for technical tests.

  1. Use the agent. They want to get you placed (it’s how agencies earn their commission) and will want to help you, they may have insight, so ask for it
  2. Be as skilled as your CV suggests, being a mid-weight dev is acceptable, trying to blag a lead’s job when you don’t meet their requirements, is not
  3. Be prepared, if you think your weakness may be obscure syntax or functionality, learn it. If you have loads of experience, spend some time revising your CV, and thinking about projects.
  4. Be genuine and sincere, if you don’t know something, tell them, make note, learn it later. If you’re passionate, let them see it. If you’re experienced, talk about it. Proud of a project? Sing from the rooftops!
  5. Understand the trends in the industry, that’s what people will want to test you on. When IE8 vs Chrome was the major thing, I got asked about it constantly. When Laravel first started booming, that was the major thing people asked me about. Automation is now all the rage, understand it and be ahead of the trend