Some quick scribblings re: the content of my talk, and the feedback from the Djangonauts present.
I’m talking about Performance Testing, not particularly about Performance Improvement, although I’ll touch on a few common traps later. Specifically, I’m splitting testing up into:
This is really just a plug for Amazon AWS
Empty databases go fast! You need to work with the business people, or your own business plan, to work out what “success” looks like for your project, and then generate synthetic test data which fits that profile, so that your testing actually produces meaningful results.
Django makes it very easy to generate test data from simple Python code. I like the ‘random’, ‘inflect’ and ‘loremipsum’ modules for this. Generating test data which looks a lot like real data not only makes your testing more realistic, it also makes the test data useful as dummy data for your designers etc.
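As a rough sketch of what I mean, here’s a generator using just the stdlib ‘random’ module (the name lists and field names are made up for illustration; in a real project you’d mix in loremipsum sentences and inflect plurals, and match the fields to your actual models):

```python
import random

# Illustrative word lists -- swap in loremipsum / inflect output, or
# word lists scraped from real (anonymised) data, for better realism.
FIRST_NAMES = ["Alice", "Bob", "Carol", "Dave", "Erin", "Frank"]
LAST_NAMES = ["Smith", "Jones", "Nguyen", "Patel", "Garcia", "Kim"]

def make_users(count, seed=None):
    """Generate `count` plausible-looking user dicts as synthetic test data."""
    rng = random.Random(seed)  # seedable, so test runs are repeatable
    users = []
    for i in range(count):
        first = rng.choice(FIRST_NAMES)
        last = rng.choice(LAST_NAMES)
        users.append({
            "username": f"{first.lower()}{i}",
            "email": f"{first.lower()}.{last.lower()}{i}@example.com",
            "first_name": first,
            "last_name": last,
        })
    return users
```

In a Django management command you’d then turn these into real rows in bulk, e.g. `User.objects.bulk_create(User(**u) for u in make_users(100000))`, rather than saving one at a time.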
Sam Stewart pointed out Factory Boy which I haven’t used but looks like an easy way to set this up.
Siege is one of the easier load testing tools to get into.
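For flavour, a couple of typical siege invocations (the URL and file name are placeholders; point it at a staging box, not production):

```shell
# 25 concurrent simulated users hammering one URL for a minute.
siege -c 25 -t 1M http://localhost:8000/

# More realistic: pick URLs at random (-i) from a file of representative
# paths, with up to 5 seconds of think-time between requests (-d 5).
siege -c 25 -t 5M -d 5 -i -f urls.txt
```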
The important thing here is to make sure your load testing model also matches the business model … Three big things to consider are:
I had collected some data showing this off but it didn’t really fit into the time available, maybe next time!
I somehow didn’t end up with a slide for this, but …
Look at where the CPU is going:
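The cheapest way to do that for a chunk of Python code is the stdlib profiler; a minimal sketch (the `slow_view` function is a made-up stand-in for whatever view you’re investigating):

```python
import cProfile
import io
import pstats

def slow_view():
    """Stand-in for a suspect Django view; deliberately does redundant work."""
    total = 0
    for _ in range(200):
        total += sum(i * i for i in range(1000))
    return total

profiler = cProfile.Profile()
profiler.enable()
slow_view()
profiler.disable()

# Print the ten most expensive calls by cumulative time.
stream = io.StringIO()
pstats.Stats(profiler, stream=stream).sort_stats("cumulative").print_stats(10)
print(stream.getvalue())
```

The same idea scales up: wrap a whole request in the profiler (or use a profiling middleware) and the cumulative-time column tells you where the CPU is actually going.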
Or: how to not torture your SQL layer.
Postgres & MySQL both offer quite good options for logging queries.
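The settings I had in mind are roughly these (thresholds are examples, tune to taste):

```ini
# postgresql.conf -- log any statement slower than 200ms
log_min_duration_statement = 200

# my.cnf, under [mysqld] -- the slow query log, with a 0.2s threshold
slow_query_log = 1
long_query_time = 0.2
```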
There are also lots of middleware options, which I think I forgot to mention.
Entropy: I thought I’d written a blog post about this but it turns out I never got around to finishing it. Some refs:
I couldn’t remember the name of the system command to check the entropy pool in Linux, because all you have to do is:
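…read it straight out of /proc:

```shell
# No special tool needed: the kernel exposes the entropy pool size here.
cat /proc/sys/kernel/random/entropy_avail
```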
It may sound a bit far-fetched, but I’ve run into this problem on a couple of production systems which happen to be running on virtual machines.