Cloud Hosting – Basic Overview and Considerations

This article is aimed at people who are traditionally used to running production environments on bare metal, or on a single VPS hosting their sites/platforms – people who now need to understand the practical considerations of hosting their application in a more scalable way.

For the purpose of this article I am going to assume the application in question is Laravel based. Realistically this isn’t particularly relevant (save for the fact that you need to be able to control your configurations across all servers).

I will soon be writing some case studies about the work I have done on various assignments, but for now this is all based on experience, yet kept strictly hypothetical.

Additionally, to keep things simple for the purposes of this article, I am going to talk about Digital Ocean, and assume we are using their services. Whilst we all know that AWS is, essentially, the monarch of cloud scalability, at an entry level Digital Ocean is easier to cost and manage.

This article is not a deep dive! It just outlines, at a very high level, the things one would need to consider before moving to this kind of setup.

Our considerations

A brief list of the things that we need to be thinking about

  • Serving web traffic
  • Running the Laravel Schedule (cron)
  • Running Queue Workers
  • MySQL
  • Queue Driver (I tend to use Redis – see the configuration sketch after this list)
  • Cache Driver (again, I tend to use Redis – but the File driver will not work in this instance)
  • Log Files
  • Sessions (I tend to use either database or Redis, depending on the application, but as always, the File driver will not work in this instance)
  • User-uploaded files, and their distribution
  • Backups
  • Security / Firewalling
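
For the Redis-backed drivers above, the wiring is mostly environment configuration rather than code. As a minimal sketch (the exact key names vary between Laravel versions – older releases use QUEUE_DRIVER rather than QUEUE_CONNECTION – and the IP shown is a hypothetical private address for the data server):

QUEUE_CONNECTION=redis
CACHE_DRIVER=redis
SESSION_DRIVER=redis
REDIS_HOST=10.132.0.5
DB_HOST=10.132.0.5

Note the File drivers are conspicuously absent, for the reasons given in the list above.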

Digital Ocean Implementation Overview

For the purpose of this article we’re going to spin up a number of resources with Digital Ocean to facilitate our application. In the real world you will need to decide what resources you need.

  • 3x Web Servers
  • 1x Processing Server
  • 1x Data Server
  • 1x Load Balancer
  • 1x Space

Annotations on the above

Something that is worth taking note of, early doors, is that your servers are expendable at any given time. If you kill a web server, your application should survive.

In the real world, I would suggest that you use managed MySQL and Redis (or cache + queue driver) services, to mitigate your risk on those droplets, making all of them expendable.

Notes on other tools

I personally would always recommend, in this environment, using Continuous Integration to test your application, and using Merge requests to ensure that anything which finds its way into a deployable branch (or a branch from which versions will be tagged) is as safe as it can be.

I would also advise Test Driven Development (TDD) because this is probably the only way you really know that you have 100% test coverage.

I am going to come onto deploying to your environment at the bottom of this article.

Part 1: Building Your Servers

It makes good sense to build the base requirements of your server into an image/snapshot/etc (depending on your provider); your redistribution process is then much easier, because you can spin up your servers/droplets/instances based on that image.

Whilst the Data Server does not need to have any application knowledge, the others will all need to have your chosen version of PHP and any required extensions that your application needs to run.

Once you’ve built the first server, take a snapshot of it, with SSL certificates and such.

I would advise running supervisor on this server, to ensure that Apache / nginx (or your chosen web server) stays running all the time, to minimise and mitigate downtime.

Part 2: Serving Web Traffic

Whether you are using Digital Ocean, Amazon Web Services, or any other provider; they all offer Load Balancers. The purpose of a Load Balancer is to balance the load, by distributing your web traffic to the servers available. This means that in times of peak traffic you can add more web servers to handle the traffic (this is known as horizontal scaling – the opposite of vertical scaling which is where you upsize your servers).

So, build the servers that you need, and make them capable of serving web traffic. Then fire up a Load Balancer, and assign these droplets to it.

Once you have done this, update your DNS to point to the Load Balancer, so that it can distribute the traffic for that domain.

If you are using CloudFlare, make sure you forward port 443 (SSL) to port 443, and set the SSL to pass-through, so that the handshake between destination server, load balancer, and browser can succeed.

Now that you have web servers, you need to make sure you have the backing services to support them.

Part 3: Running the Laravel Schedule (cron tasks) and Queue Processors

You do not want to run the cron on your web servers. Unless, of course, your desired effect is to have the job running X times per interval (where X is the number of web servers you are running).

For this kind of background processing, you want another server, which is where the processing server comes in. This is the server which is going to do all of your background processing for you.

This server needs to be able to do all the same things that your web servers can do, except it just won’t be serving web traffic (port 80 for HTTP and port 443 for HTTPS).

This is the server on which you will run either artisan queue:work or artisan horizon, and it’s the same server on which you’re going to run your schedule too. Of course, this could be separated out to different servers if that works better for your use case.
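
To make that concrete, a minimal sketch of the scheduling side (the emails:send command is a hypothetical example; the single crontab entry in the comment is the standard Laravel one, and only lives on this server):

// app/Console/Kernel.php
// Triggered every minute by one crontab entry on the processing server:
// * * * * * cd /path-to-your-project && php artisan schedule:run >> /dev/null 2>&1

protected function schedule(Schedule $schedule)
{
    // 'emails:send' is a hypothetical Artisan command
    $schedule->command('emails:send')->daily();
}

The queue workers themselves are best kept alive with Supervisor, in the same way as suggested for the web server processes in Part 1.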

Part 4: Data Server (MySQL and Redis)

As I say, I personally would be looking for managed solutions for this part of my architecture. If I was using AWS then, of course, they have ElastiCache and RDS to handle these problems for you. If you’re using Google Cloud, they also have a relational database service (Cloud SQL).

Or, you build a droplet to host this stuff on. If you host on a droplet, ensure you have arranged adequate backups! Digital Ocean will do a weekly backup for a percentage of the droplet’s cost. If you need to back up more frequently, I’ll cover this off later on.

This server needs to have MySQL (make sure to remove the bind address, so your other droplets can connect) and Redis (take it out of protected mode, and make any other modifications as required). Install these, get them running, and if necessary configure Redis persistence and MySQL’s resource limits/supervision.

Part 5: Log Files

Log files are, or should be, an important part of your application, so you want to get them off your servers as soon as possible. Cloud servers are like cattle; you must be prepared to kill them at a moment’s notice. If you’re going to do that then you need the logs (which may contain useful information about why the server died) to be somewhere else before you nuke them into oblivion.

There are loads of services out there to manage this. Though what I will say is that it is just as easy to run an Artisan command to migrate them to Digital Ocean Spaces periodically. Spaces start at $5/month. AWS S3 Buckets are also exceptionally cost effective.
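
To illustrate, a minimal sketch of such an Artisan command, assuming a ‘spaces’ disk has been configured (see Part 8) – the class name, disk name, and approach are my own invention:

<?php

namespace App\Console\Commands;

use Illuminate\Console\Command;
use Illuminate\Support\Facades\Storage;

class ShipLogs extends Command
{
    protected $signature = 'logs:ship';

    protected $description = 'Push local log files to Spaces, then truncate them';

    public function handle()
    {
        foreach (glob(storage_path('logs/*.log')) as $path) {
            // Namespace by hostname, so servers don't overwrite each other
            $remote = 'logs/' . gethostname() . '/' . basename($path);

            Storage::disk('spaces')->put($remote, file_get_contents($path));

            // Truncate rather than delete, so the logger keeps its file handle
            file_put_contents($path, '');
        }
    }
}

Unlike the application schedule, this is a job you would run from each server’s own crontab, since every server needs to ship its own logs.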

The key point here is, don’t leave your log files on the web or processing servers for any longer than you need to.

Part 6: Interconnectivity

Quick side note here: when you are dealing with Droplet-to-Droplet communications, use their private IPs, which are routed differently (and thus faster) and don’t count towards your bandwidth limits.

Part 7: Sessions

I am only going to touch on this really quickly. You need to be using a centralised store for your sessions (Redis or MySQL work perfectly well), otherwise (unless you are using sticky sessions) your authentication and any other session/state dependent functionality will break.

Sticky sessions are great, but be careful about storing session information on the local web server (the one the client is stuck to): if that web server goes down, the session goes with it, because the assumption on any web server is that anything stored on it is ephemeral.

Part 8: User Uploaded Files

Your application may well have user uploaded files, which need to be distributed. We know by now that they cannot be stored on any of the web servers due to their ephemerality.

On this one I would advise using whatever your chosen stack provides. In this example we’re using Digital Ocean, so use Spaces (which is essentially file storage + CDN). If you’re using Amazon Web Services, use S3 bucket(s), with CloudFront for the CDN aspect.

Configure your driver(s) as appropriate to use Digital Ocean Spaces, which is S3 compatible. Your user uploaded files still work as they always did, except they will no longer be stored in storage/* for reasons I’ve probably reiterated half a dozen times in this article.
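
As a sketch, the disk definition just uses Laravel’s s3 driver pointed at the Spaces endpoint (the region, env keys, and CDN URL here are all hypothetical):

// config/filesystems.php – a 'spaces' disk via the S3-compatible driver
'spaces' => [
    'driver'   => 's3',
    'key'      => env('SPACES_KEY'),
    'secret'   => env('SPACES_SECRET'),
    'region'   => 'ams3',
    'bucket'   => env('SPACES_BUCKET'),
    'endpoint' => 'https://ams3.digitaloceanspaces.com',
    'url'      => env('SPACES_CDN_URL'), // used by Storage::url() for public links
],

With that in place, Storage::disk('spaces')->put() and Storage::disk('spaces')->url() behave as they would for any other disk.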

Also make sure that any public linking you’re doing – to article or profile images, for example – is respecting the appropriate CDN location.

Part 9: Backups

It is almost impossible for me to describe how your backups should work, as I don’t know your environment. However, Spaces and S3 are both cost effective, and are where I would advise you store your backups.

Depending on how you want to back up, there are 101 ways to take those backups. But nothing (logs don’t count) from your web servers should be backed up; they should only contain the necessary configurations to serve traffic.

Your files, theoretically, are unlikely to need backing up either, because they are sitting within a managed service, and if you have used managed MySQL / Redis / etc services then you don’t need to back those up either!

Part 10: Kill enemies with fire (firewalling)

If you’re using Digital Ocean, Firewalls are simply included; but whatever solution you are using, your security policies need to reflect the work your servers are doing, as roughly outlined below.

  1. Web servers need to accept port 80 (HTTP) and port 443 (HTTPS) traffic, but only from your Load Balancer
  2. Your Data Server (assuming MySQL and Redis) needs to accept port 3306 (MySQL) and port 6379 (Redis) traffic, but only from the processing and web servers
  3. Your Processing Server should not accept any traffic from anywhere
  4. You may wish to put exceptions in for your deployment tool, and potentially your own IP address so that you can shell into your servers, view MySQL and such

Part 11: Deployments

Personally, I love Envoyer, I think it’s brilliant. But the key point to take away with deployments in this scenario is that it can no longer be done manually.

If you have 3x web servers + 1x processing server, you cannot shell into all 4 and do git pull and git checkout and all that stuff; you need something that manages that process for you properly.

If you use Envoyer, there are loads of resources out there on how to get it set up just right for your application, but the key point is you cannot just do it by hand.

Your deployment process should be capable of running without you (Continuous Deployment) following successful Continuous Integration; for example, when a merge request is successfully merged into Master, you may wish to deploy straight to your production servers.

Deployments will need to cover NPM/Yarn dependencies, composer dependencies, and anything else your application needs to worry about, perhaps migrations, or clearing views, caches, and configs.
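
To make those moving parts concrete, here is a minimal sketch of the same steps expressed as a Laravel Envoy task (Envoy being the command-line sibling of Envoyer – Envoyer itself is configured through its UI). Server addresses and paths are hypothetical:

@servers(['web' => ['deployer@web1', 'deployer@web2', 'deployer@web3'], 'processing' => ['deployer@processing1']])

@task('deploy', ['on' => ['web', 'processing'], 'parallel' => true])
    cd /var/www/app
    git pull origin master
    composer install --no-dev --optimize-autoloader
    npm ci && npm run production
    php artisan config:cache
    php artisan view:clear
@endtask

@task('migrate', ['on' => 'processing'])
    cd /var/www/app
    php artisan migrate --force
@endtask

Note that migrations run once, from the processing server, rather than on every box; and re-running php artisan config:cache is how an environment change (like that new MySQL host address) gets picked up once the .env has been updated.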

You will also need to know how you are going to deploy environment changes (a change of MySQL host address, for example) to your servers instantaneously.

Part 12: Manual Stuff

If, for any reason, you need to run anything manually, like Artisan commands, then you would log into your processing server and run them there.

Ideally you would never need to do this, of course. But it is worth noting how you would go about doing this.

Links

  1. Digital Ocean
  2. Amazon Web Services
  3. Envoyer – Zero Downtime Deployments
  4. ScaleGrid – Managed RDS Solutions (MySQL and Redis)
  5. Laravel – the PHP Framework for Web Artisans
  6. CloudFlare – the web performance and security company

How to attack technical tests like a boss

I’ve been asked about technical tests more times than I can count, so I thought I would write this article. I feel qualified to do so as I have participated in literally dozens of technical tests throughout my career, and I’ve tested numerous candidates.

So, if you want to know how to attack a technical test as a candidate, this article might help you. Alternatively, if you’re looking at how to run technical tests for your candidates, because you’re new to the process, this is also for you!

The ultimate one-way ticket to unlimited job offers, by bossing your technical tests – listen closely, it’ll blow your mind…

Brace yourselves, this one will change your career and your outlook on life. Nobody will have ever given you such accurate advice, and never will again…

You can’t. The only way to nail technical tests and technical interviews is to actually be exactly as skilled, or maybe more skilled, than your CV implied.

It is absolutely acceptable to not be an expert in everything; in fact, nobody is. That’s why the phrase “Jack of all trades” is completed (in part) by “…master of none” – though the full version continues “, but oftentimes better than a master of one”.

The point here is to understand where you’re at. Whether you’re an absolute master of a single technology, or someone with a breadth of knowledge across multiple fields without “mastery”, that’s fine. Being honest about your knowledge and expertise is what you should do; only that way can employers and agents match you correctly.

I’m confident in my ability. How are they going to test me?

This is the specific part I’ve been asked about on several occasions: that fear of the unknown – how are they going to gauge my skill? In all my experience, there are 5 main ways to test a candidate’s technical knowledge, experience, exposure and expertise.

These all have different advantages and disadvantages, and all come with different twists, largely based on the (technical) hiring managers, but I will cover them all here.

Many companies will use more than one of the below, or may offer a mix of testing styles. To summarise, the main types of test are:

  1. The low level technical question
  2. The whiteboard test
  3. The code test
  4. The high level chat
  5. The takeaway specification

The low level technical question

This is best used when trying to verify that the candidate understands what they are doing, and the technologies they are using. This might include something like:

What is the difference between “==” and “===” in PHP?

PHP Developer role, about 5 or 6 years ago

What would the following code do? [insert code sample]

Various developer roles from 2011 onwards
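
For what it’s worth, the first of those can be answered in two lines at the interpreter – loose comparison juggles types, strict comparison does not:

var_dump(1 == '1');  // bool(true)  – values compared after type juggling
var_dump(1 === '1'); // bool(false) – the types must match as well as the values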

These kinds of questions are where you can unearth true understanding vs conceptual understanding, and depth of experience vs dabbling.

When interviewing low to mid level candidates, these kinds of questions are perfect to gauge where the candidate sits in terms of exposure, and consequent salary.

Preparation tips: Make sure you know the languages you’re going to be tested on (probably the ones you said you know), and brush up on as much obscure syntax as you can think of that you’ve used – especially the bits of the language you hate. For me, in PHP, that’s ternary operators. Not sure why; it’s entirely subjective, but I’m not a fan.

I’ve been on the receiving end of this test probably a dozen times, if not more, particularly in the first 4 years or so of my career.

The Whiteboard Test

The infamous whiteboard test, or a problem solving technical test. This is usually some kind of broad problem provided, with you expected to design and/or explain how you would approach solving the problem.

Usually used in more heavyweight to senior/lead type roles, these questions are handy for understanding the candidate’s ability to solve problems or design solutions rather than their ability to write code.

Show me how you would implement a modular, tenanted system which could allow for bespoke modules, as well as those which were to be redistributed

Technical test for a lead developer role, about 2.5 years ago

There is a stock table. This is a transactional table, so no updates or deletes, only inserts. Stock is moved from one shelf to the other by movements in this table. Design the solution which would tell you how much stock is available at any given time

Technical test for a lead developer role, about 2.5 years ago

Preparation Tip: Think about the problems you’ve solved in the past – you may well have done something similar several times before. Consider the approaches you took, the problems you faced, and understand how that will likely be relevant in this interview. Then prepare how you would articulate those problem solving approaches. Being able to relate your given scenario to your real world experience is always good.

I’ve been on the receiving end of this one maybe 3 times, mostly when I was sitting between 3 and 5 years’ experience.

Write some code (on site)

These ones are horrible, usually for 2 simple reasons – you’re given notepad rather than an IDE, and generally speaking Google is forbidden. Which is crazy considering we all know Google is your friend.

I’ve found a couple of variations on this one, and oftentimes this test is in conjunction with either the whiteboard test or the low level test (especially if the task is to find problems or fix bugs).

This particular style of testing can be adapted for any level of developer, and will help to gauge skill and speed, as well as experience (especially if the task is one an experienced developer will have solved many times over).

Write a PHP function, in as few lines as possible, to give the entire contents of a specified directory, in infinite depth

Technical test for a developer role, about 5 years ago

Write a PHP function, which will give me the nth number in the Fibonacci sequence

Technical test for a developer role, about 4 years ago
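
For a flavour of what answers to those might look like (the first leans on PHP’s SPL iterators, which is about as few lines as it honestly gets; both are sketches rather than the “official” answers):

// Entire contents of a directory, at infinite depth
function listDirectory(string $dir): array
{
    $files = [];
    $iterator = new RecursiveIteratorIterator(
        new RecursiveDirectoryIterator($dir, FilesystemIterator::SKIP_DOTS)
    );
    foreach ($iterator as $file) {
        $files[] = $file->getPathname();
    }
    return $files;
}

// The nth number in the Fibonacci sequence (0-indexed)
function fibonacci(int $n): int
{
    [$a, $b] = [0, 1];
    for ($i = 0; $i < $n; $i++) {
        [$a, $b] = [$b, $a + $b];
    }
    return $a;
}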

Preparation Tip: There isn’t one for this really, except be as good as you said you are, and get hands on with code as many times as possible before your interview. There are plenty of coding challenges online that can help prepare for this.

In the early part of my career (up to about 4/5 years in) I had this one nearly every interview, so probably half a dozen times or more.

The High Level Chat

More often found in heavyweight, senior, and lead type roles. The high level chat is a great way to understand someone’s expertise and breadth of understanding.

High level chats are fluid ways to have a conversation with a candidate and understand how they solve problems, how they decide what technologies to use, and how well they know their stuff. The interesting part of this, for interviewer and candidate, is the ability to derail the conversation to follow new lines of technical enquiry, which can be very revealing of someone’s true level of skill and experience.

What is your experience with caching mechanisms?

Technical test for PHP Developer role, about a month ago

What do you know about Automation?

Technical test for Senior PHP developer role, about a month ago

These conversations are usually based on questions, which may be based on the company’s tech stack, or your cited experience in your CV.

Preparation Tip: Revisit your CV and reflect on all the projects you’ve said you did. Research the tech stack of your new employer; they’re going to be interested in this stuff specifically.

Since I was about 5 years in, this has become the most common. Almost every process includes it.

The Takeaway Specification

Usually delivered by the agent (if there is one), this is a specification – often delivered via a GitHub readme and a repository to be forked. The specification could be anything, but your job will be to hit that specification, usually within a rough timeframe or for a specified time.

The specification can be anything from “deliver a small system to do this” or “here are some details, build X integration with Y system”. It might be 2 hours of work, it might be 4 hours.

Deliver a web page which has a form, that captures X data, and submits it to Y third party endpoint.

Technical test for a PHP Developer role, about 3 years ago

Deliver a versioned API which handles video files

Technical test for a Senior PHP Developer role, about a month ago

Preparation tip: This one really holds the candidate to account to be able to deliver autonomously, like they no doubt said they could. You can’t really prepare for this other than being as capable as you say you are. If you do something maybe a bit controversial, leave a code comment or something to explain what you’re doing, annotating the readme is also fine. Be prepared to go above and beyond.

5 top tips!

So, time for my general advice in preparation for technical tests.

  1. Use the agent. They want to get you placed (it’s how agencies earn their commission) and will want to help you; they may have insight, so ask for it
  2. Be as skilled as your CV suggests; being a mid-weight dev is acceptable, trying to blag a lead’s job when you don’t meet the requirements is not
  3. Be prepared: if you think your weakness may be obscure syntax or functionality, learn it. If you have loads of experience, spend some time revising your CV, and thinking about projects.
  4. Be genuine and sincere: if you don’t know something, tell them, make a note, learn it later. If you’re passionate, let them see it. If you’re experienced, talk about it. Proud of a project? Sing from the rooftops!
  5. Understand the trends in the industry; that’s what people will want to test you on. When IE8 vs Chrome was the major thing, I got asked about it constantly. When Laravel first started booming, that was the major thing people asked me about. Automation is now all the rage, so understand it and be ahead of the trend

What is a memory leak? A quick analogy

This is something that came up in conversation: some friends and I were discussing deploying code that runs in the background to production environments.

One of the things I raised was what can happen with daemon processes: should you have a very small inefficiency, given enough time to run (usually by the time it gets to production), it can, and will, destroy live servers.

I then realised that, at this point in the conversation, an explanation of what a memory leak actually is had become appropriate.

Anybody who knows me, knows I love an analogy; so this is the analogy I gave, to give a really simple explanation as to what a memory leak is:

Every morning, you go to a fast food drive through.

You order a meal, eat it, and throw the paper bag with some leftovers into the passenger footwell.

At the end of the day, you arrive at home. You pick up the bag of rubbish, and put it into your bin. Without realising it, you drop a single french fry in the car.

In development and testing you run this same process 50 times, dropping a single french fry each time. The fries are not visible, they’re under the passenger seat, or with the momentum of the car have ended up in the back.

When you go live, the process runs more frequently, and instead of a single meal you’re buying 10 at a time.

Very quickly, those single french fries culminate in an unusable car, because you can’t fit in a Honda Civic if it has 1,000,000 festering french fries inside.

Matt “Johno the Coder” Johnson, on a cold winter morning

So there it is, a quick explanation of what a memory leak is, in an easy to understand analogy.

I’ve been asked to do walk throughs on practical implementations on daemons and a few other topics, so I am going to write those up soon.

Practically, what might it look like?

Imagine your daemon script looks something like this…

// Store the jobs that have been processed
$jobsProcessed = [];

// This is a daemon script, it needs to run, forever
while (true) {

    // This is just for demonstration purposes!
    $job = getNewJob();

    // Do whatever you need to, to handle the job

    // Let's store the job we've processed
    $jobsProcessed[] = $job;

}

This all looks fairly innocent, right? In testing there are probably, at tops, a few thousand test jobs. In production, when this is running forever, that very small array can become very big. That’s the bit that could cause a server to topple.

For reference, if you do need to keep this information, store it somewhere, anywhere, else. A log file is usually a good shout (as long as you’re periodically cleaning out your log files), or perhaps a database (I’d recommend a MyISAM table for this, as you’re dumping a whole load of plain text data). If you keep this information in a variable in your script, it’ll be held in memory, which is exactly where you don’t want it.
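
To make that concrete, the leak in the earlier script disappears if each iteration writes its record out and keeps nothing in the array – a minimal sketch (getNewJob() remains the hypothetical stand-in from above, and the log path is invented):

// This is a daemon script, it needs to run, forever
while (true) {

    $job = getNewJob();

    // Do whatever you need to, to handle the job

    // Record the processed job on disk, not in a variable –
    // memory usage now stays flat however long this runs
    file_put_contents(
        '/var/log/myapp/jobs-processed.log',
        date('c') . ' ' . json_encode($job) . PHP_EOL,
        FILE_APPEND
    );
}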

So there it is, a quick and easy analogy, with an (overly) simplified example of what it might look like.

Beware, the Anti Pattern!

In this article I am going to cover the application of patterns within your… application. In short, I am going to show you how to use design patterns in a logical manner.

Patterns are always sometimes awesome!

The first thing here is that all design patterns have a purpose, every design pattern has its place. Whilst I’m talking about design patterns the same can be said of development methodologies, database designs, and really any other kind of concept.

The proper application of design patterns can take a frustrating piece of software, and make it easy to maintain, or hyper-secure, or crazy performant.

The bad application of a design pattern will do the exact opposite.

But, why would you ever not use a pattern?

Well, when it’s not the right time to use it. Every pattern has its place, but not every pattern should be used everywhere – that’s a bad idea.

Don’t take my word for it, let’s look at some practical examples and experiences that really demonstrate what I am saying here.

I’m going to be deliberately controversial here, and I’m going to pick stuff that all developers seem to love and then prove where it will ruin your application.

I’ve heard of Patterns, what is an Anti Pattern?

An anti-pattern is a pattern. Subjectively an anti-pattern is when you take a pattern and either apply it in the wrong circumstances, or implement it badly.

When the application of an otherwise great pattern causes adverse impacts (usually to maintainability, or performance, etc.), it is an anti-pattern.

Model View Controller (MVC)

We all love MVC, right? I mean, what’s not to love? CodeIgniter, CakePHP, Laravel, Angular, and Joomla all follow this pattern. It is arguably one of the most used design patterns of recent times.

For good reason, it’s awesome! Because it’s awesome, developers are pretty darn impassioned about using it everywhere, and they are right, in the vast majority of cases.

So when, I hear you ask, would this not be a good idea? When could it be an anti-pattern?

What about if I am writing a daemon, which is going to continuously monitor the usage of the mounted hard drives on a server and, in certain scenarios, send an email to a system administrator?

We don’t need models – we’re not handling any data. We don’t have anything, not even CLI output, so no need for views. Realistically there’s not a controller either – it’s a standalone script. MVC would be a bit overkill, in this instance, wouldn’t it?
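
Something like this, in its entirety, is all it would take – there is nowhere sensible for models, views, or controllers to live (the threshold, mount point, and address are hypothetical):

// Standalone monitoring daemon – no models, views, or controllers in sight
while (true) {
    $free  = disk_free_space('/mnt/data');
    $total = disk_total_space('/mnt/data');

    // Warn the sysadmin when less than 10% of the mount remains
    if ($free / $total < 0.10) {
        mail('sysadmin@example.com', 'Low disk space', '/mnt/data is nearly full.');
    }

    sleep(300); // check again in five minutes
}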

That’s a bit of a drastic example though, let’s look at something more subtle.

Single Responsibility Principle

Ah, right. Let’s get controversial then, shall we?

A class should only have one reason to change

Single Responsibility Principle, SOLID Principles

Everyone loves this, and likes to really preach about it. It is crazy controversial, widely adopted, and I personally think that it’s a good idea.

The Single Responsibility Principle is like the best song in the world, that you hear 1,000 times per day on every music channel and radio station. It is best practice, yes. But sometimes, it’s okay to break it!

Oh gosh, quick, get the smelling salts and a wet flannel, they’ve fainted!

The whole point of the SOLID principles is to make well structured, easy to maintain code.

So, time for a real-life example. I have a class, it is an Eloquent Model. It is responsible for some mission-critical, core functionality. This class has a method within it. This method is 150 lines long (probably 50 lines of code, once you take out empty lines and code comments).

This method, in and of itself, could (and maybe even should) be abstracted into its own class. For some context, it is a static method, responsible for fetching records based on a set of arguments.

Every part of the Single Responsibility Principle dictates this method should be abstracted to its own class, perhaps even a set of classes.

Internally, I have had this code reviewed, and to quote the developers who have checked (and indeed worked on) it, it is “exceptionally easy to follow” and “super easy to add and change the functionality”.

It is, essentially, a set of if statements. Based on the outcome of those if statements, the Eloquent Query is modified, then returning either limited, or paginated, results.
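
In spirit, it looks something like this (a heavily trimmed sketch – the names are invented, and the real method has many more branches):

// Simplified sketch of the static fetching method on the Eloquent model
public static function fetchRecords(array $args)
{
    $query = static::query();

    if (!empty($args['status'])) {
        $query->where('status', $args['status']);
    }

    if (!empty($args['from'])) {
        $query->where('created_at', '>=', $args['from']);
    }

    // ...many more conditions in the real thing...

    return !empty($args['paginate'])
        ? $query->paginate($args['per_page'] ?? 25)
        : $query->limit($args['limit'] ?? 100)->get();
}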

So, I hear you scream, “why won’t you abstract it?!” – well I could. But following conversations with the development team, the code would actually be harder to follow if I were to abstract and refactor it.

In this instance, the code can be easily found, easily changed, and is incredibly stable.

Following Single Responsibility to the letter, this time, would be an anti-pattern. Rather than making life easier, it would make life more difficult. Abstraction can be bad! If it has no performance, functional, or security perks, and it makes the code more difficult to maintain and follow, then abstracting it would not make any sense – except to follow a pattern for the sake of following a pattern, at which point it becomes an anti-pattern.

Dependency Injection

Oh God. I’ve been here before. It is bad. I remember the flames, vividly. As the Reddit Hellfire engulfed my computer. Just kidding, but this one does really evoke emotional reactions.

Just quickly and very simply, dependency injection is passing an object (a dependency) into another object which depends upon it. Thus, injecting the dependency. It usually looks (forgetting containers and autowiring) something like this.

class Calculator
{
    private $additionService;
    private $subtractionService;

    public function __construct(
        AdditionServiceProvider $addition,
        SubtractionServiceProvider $subtraction
    ) {
        $this->additionService = $addition;
        $this->subtractionService = $subtraction;
    }
}

The point here is that I can control the services that the Calculator is using. So if I were, at a later date, wanting to swap out my AdditionServiceProvider for AcmeIncAdditionServiceProvider (assuming they implement the same interface, or extend the same base class) then I could, and the rest of the class would work as expected.

However, I have, several times, seen things like this.

class PaymentsIncorporatedWrapper
{
    private $payer;
    private $refunder;

    public function __construct(
        PaymentsInc\Payer $payer,
        PaymentsInc\Refunder $refunder
    ) {
        $this->payer = $payer;
        $this->refunder = $refunder;
    }
}

Right, this makes sense, doesn’t it? Wire up the payer and the refunder, then drop them into your Wrapper. As I said above, taking the dependency injection container out of the equation, whenever I want to do something with the PaymentsIncorporatedWrapper I have to do something like

$payer = new PaymentsInc\Payer($apiKey, $somethingElse);
$refunder = new PaymentsInc\Refunder($apiKey, $somethingElse);
$wrapper = new PaymentsIncorporatedWrapper($payer, $refunder);

That’s a kind of annoying amount of code to write, to instantiate a class. “But the container does that for you!” I hear you scream. Yes, yes it will.

But why? You don’t know, at the time of instantiation, that I even need the refunder. I might just be querying a payment. Why do I need the refunder? I don’t. Ah, maybe this is an anti-pattern.

Also, this set of classes is specific to the PaymentsInc integration. So it’s not like I’m going to swap in another payment provider (otherwise this would all make sense).

In this instance, when I couldn’t possibly want to swap anything else in/out, perhaps this would make more sense?

class PaymentsIncFactory
{
    public static function getPayer(): PaymentsInc\Payer
    {
        $factory = new static();
        return $factory->buildPayer();
    }

    private function getApiKey(): string
    { ... }

    private function getSomethingElse(): string
    { ... }

    // Named differently to the static method above, as PHP won't
    // allow a static and an instance method to share a name
    private function buildPayer(): PaymentsInc\Payer
    { ... }

    ...
}

class PaymentsIncorporatedWrapper
{
    public function takePayment(float $amount)
    {
        $payer = PaymentsIncFactory::getPayer();
        $payer->takePayment($amount);
    }
}

Anybody who is passionate about Dependency Injection will argue this is wrong, and they will probably cite Unit Testing as the reason. But, to my knowledge, unit testing isn’t justification for using Dependency Injection.

In fact, I have implemented both Unit Testing, and Test Driven Development, without unnecessarily using Dependency Injection. Of course, Dependency Injection was used, just only where it was truly necessary.

And the point of those tales was….

Just because a pattern is the best thing since sliced bread, doesn’t mean you should apply it liberally, everywhere, without thinking about it.

In the above I’ve taken three of the most beloved patterns we possess, and given you three good places where perhaps those patterns were best not applied.

So think about the patterns you’re using; never blindly use one because someone on [insert social website or influencer here] said it is amazing.

The key, as with all things, is to genuinely understand the pattern, its application, its benefits, and its constraints. And then think, and make a decision, about whether it makes sense to apply it in your use case.

Further Reading / Sources

  1. Model View Controller (MVC) – Wikipedia
  2. Single Responsibility Principle – Wikipedia
  3. Dependency Injection – Wikipedia

Practical Implementations of Agile Software Development

So, Agile. It seems like a bit of a buzzword, and my favourite experience of someone completely missing the concept of Agile was someone I interviewed. I won’t name them, but our conversation went something like this:

“Do you have any experience working in Agile?”

“Is there any other way to work?”

Okay, this sounds promising! The blank expression on this candidate’s face when I mentioned things like Scrum, Velocity, and Retrospective showed me that the misconception here was that “Agile” simply meant “working to changing specifications”.

Now, moving into Agile can be a scary and daunting task, and the Agile manifesto is a bit obscure, as are the principles of Agile. So the aim of this article is to give some tiny snippets as to how you can start to adhere to this manifesto and these principles (and you may already be doing so!).

My plan, for this article, is to demonstrate the manifesto, and how it should worm its way into your everyday working!


Value individuals and interactions over processes and tools

Let’s just let that sink in for a minute. Anybody who has worked with me, especially those who have worked with me at Speed, will agree that I love a flow chart, I love things working like a machine; so this one took a little while to sit properly with me.

The more I think about that “value individuals”, the happier it makes me. What this means is that we need to collaborate, and understand the skills everyone brings to the table.

This one is arguably the easiest to achieve: when you’re mulling over specifications and features, get everybody involved. Start to snowball, get creative. Sit everybody around a table with a whiteboard or some post-it notes (Jay Heal loves a good post-it session! Or a workshop, as they are more accurately described).

People involved in brand will have different thoughts to offer from those thinking of the product from a marketing perspective, which will be different from the UX gurus, and the frontend ninjas, and the data scientists, and the software engineers.

Magic happens when these people are collaborating, and are each individually valued.

Value working software over comprehensive documentation

Again, this was a culture shock to me. I like my software specifications to be really, really comprehensive. However, drawing up these documents takes time, and from a technical delivery standpoint there’s very little value – actually, it’s more valuable from a commercial standpoint.

So, what’s the alternative? User story planning. Rather than drawing up a million specification points, let’s plan the user’s story, and build that.

This pulls really nicely into Test Driven Development, and into YAGNI (You Aren’t Gonna Need It), because you develop and deliver what’s needed.

It is better to deliver small pieces of working software, frequently, than spend months hashing out the too-fine details of a contract and specification.

From a commercial standpoint this sounds counter-productive, but hear me out; if you keep delivering working software, you are going to avoid cash-flow problems where you’re waiting to get the project signed off, and consequently paid for. Additionally, you’re going to continue to gain the confidence of your client, so they’re going to be far, far more inclined to continue to invest into a working relationship, than a black hole of “you’ll get to see it one day” development team.

Practically then, what do we do? Go through the wireframes and the high level conceptual specification, enough to know what you need to develop. Then, as soon as you have enough to work on, start coding something, and make sure you show the client. Prototypes are acceptable, but have something to show and deliver. Don’t wait for an unveil moment; make the client part of your development process.

Side note; you’re also likely to avoid costly test cycles and amendment rounds at the end of a project by working like this.

Value customer collaboration over contract negotiation

This leads on really nicely from the last point. Collaborate with your customer. They know their business as well as your team knows the technology.

Rather than having “no” conversations, have collaborative conversations. Using velocity and backlogs and the point system, once the client has bought into the concept themselves (which you might have to do), they can make decisions about how to spend the resources they’re paying for.

Let the sky be blue, let the creativity flow.

Practically, liaise with your clients almost constantly. Depending on your environment you probably can’t have them in your scrum every morning, but you can certainly keep them involved. Collaborate with your clients, don’t just negotiate terms.

Value responding to change over following a plan

This is the key thing with Agile, that makes it considerably different from other project management methodologies, like Waterfall.

This is where your sprints, and an accurate velocity, come into their own. If everything is planned and scoped (with points) in your backlog, and you know the velocity of your team, changing direction on a penny shouldn’t be a problem.

Adjust your sprint, and continue collaborating. I’ve been on projects where the entire success of the project has been based on my team’s ability to change direction in an instant, to match a new opportunity, respond to a threat, or based on a new strength or weakness identified.

This is about letting the deliverable lead the project, not the commercials. By working against sprints, you can change direction without necessarily worrying about commercials in the first instance.


Further Reading

Agile Manifesto courtesy of AgileManifesto

Agile Principles courtesy of The Agile Alliance
