Premature optimisation is the root of all evil
The most important word in the above quote from Donald Knuth, who knows a thing or two about programming, is 'premature'. If premature optimisation really is the root of all evil, then how do you know when to begin? If you're anything like us, "when people start complaining" is the stock answer. That's not a great answer for anyone outside the development team, so we decided to run a profiler on our live servers.
Now there are plenty of Python profiling tools, and there are a few Django-specific tools. But they're generally (a) useful only once you know you have a problem, and (b) too heavy to run in production. We wanted something that we could run all day on our live servers that would log only the information we wanted, and only when we wanted it. So we wrote django-request-profiler. It's not really a profiler in the true sense - it's more of a selective logging tool, but it's handy for keeping an eye on things.
It logs the basic request info (path, method), basic response info (status code, length), the name of any view function used to process the request, the user id if appropriate, and of course the duration taken to process it.
You can filter the requests to profile by URL (regex) or user (all, auth-only, or auth+group), with fine-grained control, via a signal, over exactly what data is captured. As an example, we ran it across every single request for a day to get some idea of the overall behaviour of the site, and now we log only those requests that take over 1s (internally) to process.
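Installing it is the usual Django drill. The sketch below is based on the project's README at the time of writing, so treat the exact middleware path as an assumption; REQUEST_THRESHOLD is our own custom setting, not part of the library.

```python
# settings.py -- a sketch; the middleware path follows the project's
# README, and REQUEST_THRESHOLD is our own custom setting.
INSTALLED_APPS = [
    # ... your other apps ...
    "request_profiler",
]

MIDDLEWARE_CLASSES = [
    # ... your other middleware ...
    "request_profiler.middleware.ProfilingMiddleware",
]

# requests slower than this (in seconds) get logged as warnings
REQUEST_THRESHOLD = 1.0
```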
Here's how it looks:
import logging

from django.conf import settings
from django.dispatch import receiver

from request_profiler.signals import request_profile_complete


@receiver(request_profile_complete)
def on_request_profile_complete(sender, **kwargs):
    """Log long-running requests as WARNING."""
    profiler = kwargs.get('instance')
    if profiler.elapsed > settings.REQUEST_THRESHOLD:
        logging.warning("Long-running request warning: %s", profiler)
        # cancel profiling to prevent us drowning in data
It's up on GitHub and PyPI.