Tuesday, April 3, 2012

Re: Profiling Django (WAS Django database-api)

On Tue, Apr 3, 2012 at 5:25 PM, Andre Terra <andreterra@gmail.com> wrote:
> To make things a little more complicated, the task involves writing a large
> amount of data to a temp database, handling it and then saving some
> resulting queries to the permanent DB. This makes it a tad harder to analyze
> what goes on in the first part of the code.

I haven't had that kind of problem, but these are the things I would try:

- rate-limiting logs: if the last message was emitted too recently, just
discard the new one (for some log levels, e.g. those used in the inner loops)
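
For example, a minimal sketch of such a filter with the stdlib logging
module (the class name, interval and logger name are just placeholders):

import logging
import time

class RateLimitFilter(logging.Filter):
    """Discard records that arrive less than min_interval seconds
    after the last one that was let through."""

    def __init__(self, min_interval=5.0):
        logging.Filter.__init__(self)
        self.min_interval = min_interval
        self.last_emit = 0.0

    def filter(self, record):
        now = time.time()
        if now - self.last_emit < self.min_interval:
            return False            # too soon, drop it
        self.last_emit = now
        return True

# attach it only to the noisy inner-loop logger
logging.getLogger("myapp.importer").addFilter(RateLimitFilter())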

- send the logfiles to fast devices (maybe even in RAM, e.g. a tmpfs
mount on Linux) and rotate them aggressively so they don't accumulate
needlessly
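
Something like this, for example (it assumes a tmpfs mount such as
/dev/shm, which is RAM-backed on most Linux systems, and that the
target directory already exists):

import logging
from logging.handlers import RotatingFileHandler

handler = RotatingFileHandler("/dev/shm/myapp/profile.log",
                              maxBytes=10 * 1024 * 1024,  # rotate at 10 MB
                              backupCount=3)              # keep only 3 old files
handler.setFormatter(logging.Formatter("%(asctime)s %(levelname)s %(message)s"))
logging.getLogger("myapp").addHandler(handler)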

- log to a fast database (like Redis) and run automated analysis every
hour, or maybe even every minute, discarding the raw log entries
afterwards.
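
A rough sketch of such a handler, assuming the redis-py client (the key
name and the fields stored per entry are arbitrary):

import json
import logging
import redis   # assumes the redis-py package is installed

class RedisListHandler(logging.Handler):
    """Push each record onto a Redis list; a periodic job can
    aggregate the entries and then delete the raw list."""

    def __init__(self, key="myapp:logs", **redis_kwargs):
        logging.Handler.__init__(self)
        self.key = key
        self.client = redis.StrictRedis(**redis_kwargs)

    def emit(self, record):
        try:
            entry = {"time": record.created,
                     "level": record.levelname,
                     "msg": record.getMessage()}
            self.client.rpush(self.key, json.dumps(entry))
        except Exception:
            self.handleError(record)

logging.getLogger("myapp").addHandler(RedisListHandler(host="localhost"))

The periodic job would then LRANGE the list, compute whatever summaries
you need, and DELETE the key to drop the raw entries.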

- log to a listening process that checks whether each (rate-limited)
entry is out of the ordinary; if so, keep it for analysis, if not,
discard it. 'Ordinary' could mean that some indicator is growing or
shrinking as expected, or has barely changed since the last entry, or
is stable, or whatever you would expect from your intended
calculations.
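
I don't have ready-made code for the listening process itself, but the
"is this ordinary?" check could also be done in-process with a logging
filter; something like this (the 'indicator' attribute and the
threshold are made up for the example):

import logging

class AnomalyOnlyFilter(logging.Filter):
    """Keep a record only when the reported indicator moved more than
    expected compared to the previous value."""

    def __init__(self, max_relative_change=0.10):
        logging.Filter.__init__(self)
        self.max_relative_change = max_relative_change
        self.previous = None

    def filter(self, record):
        value = getattr(record, "indicator", None)   # passed via extra={...}
        if value is None:
            return True                              # not a metric record, keep it
        if self.previous is None:
            self.previous = value
            return True                              # nothing to compare against yet
        change = abs(value - self.previous) / max(abs(self.previous), 1e-9)
        self.previous = value
        return change > self.max_relative_change     # keep only the unusual ones

logger = logging.getLogger("myapp.metrics")
logger.addFilter(AnomalyOnlyFilter())
logger.info("rows processed so far", extra={"indicator": 15000})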

--
Javier

