This is something that came up in conversation: some friends and I were discussing deploying code that runs in the background to production environments.
One of the things I raised was what can happen with daemon processes: given a very small inefficiency and enough time to run (usually by the time it gets to production), it can, and will, bring down live servers.
At that point in the conversation, I realised an explanation of what a memory leak actually is was in order.
Anybody who knows me knows I love an analogy, so here is the one I gave to explain, really simply, what a memory leak is:
Every morning, you go to a fast-food drive-through.
You order a meal, eat it, and throw the paper bag with some leftovers into the passenger footwell.
At the end of the day, you arrive at home. You pick up the bag of rubbish, and put it into your bin. Without realising it, you drop a single french fry in the car.
In development and testing you run this same process 50 times, dropping a single french fry each time. The fries are not visible: they're under the passenger seat, or the momentum of the car has carried them into the back.
When you go live, the process runs more frequently, and instead of a single meal you’re buying 10 at a time.
Very quickly, those single french fries accumulate into an unusable car, because you can’t fit in a Honda Civic if it has 1,000,000 festering french fries inside.
Matt “Johno the Coder” Johnson, on a cold winter morning
So there it is: a quick explanation of what a memory leak is, in an easy-to-understand analogy.
I’ve been asked to do walk-throughs of practical implementations of daemons and a few other topics, so I am going to write those up soon.
Practically, what might it look like?
Imagine your daemon script looks something like this…
// Store the jobs that have been processed
$jobsProcessed = [];

// This is a daemon script, it needs to run, forever
// This is just for demonstration purposes!
while (true) {
    $job = getNewJob();

    // Do whatever you need to, to handle the job

    // Let's store the job we've processed
    $jobsProcessed[] = $job;
}
This all looks fairly innocent, right? In testing there are probably, at most, a few thousand test jobs. In production, when this is running forever, that very small array can become very big. That’s the bit that could topple a server.
For reference, if you do need to keep this information, store it somewhere, anywhere, else. A log file is usually a good shout (as long as you’re periodically cleaning out your log files), or perhaps a database (I’d recommend a MyISAM table for this, as you’re dumping a whole load of plain text data). If you keep this information in a variable in your script, it’ll be held in memory, which is exactly where you don’t want it.
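To make that concrete, here is a minimal sketch of the log-file approach. The `getNewJob()` stub and the log file path are hypothetical stand-ins, and the loop is bounded purely for demonstration (a real daemon would run forever); the point is that each job is appended to a file and nothing accumulates in a variable:

```php
<?php

// Hypothetical stand-in for whatever fetches the next job in your system.
function getNewJob(): array
{
    static $id = 0;
    return ['id' => ++$id];
}

// Append one line per processed job; memory usage stays flat.
function recordProcessedJob(array $job, string $logFile): void
{
    file_put_contents($logFile, json_encode($job) . PHP_EOL, FILE_APPEND);
}

$logFile = tempnam(sys_get_temp_dir(), 'jobs-processed-');

// A real daemon would be: while (true) { ... }
for ($i = 0; $i < 3; $i++) {
    $job = getNewJob();
    // ... handle the job ...
    recordProcessedJob($job, $logFile);
}
```

The memory footprint is now constant regardless of how long the script runs; the history lives on disk, where periodic log rotation can clean it up.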
So there it is: a quick and easy analogy, with an (overly) simplified example of what a memory leak might look like.