On Wed, 2008-11-12 at 08:32 +0000, Arjen Lentz wrote:
> On 12/11/2008, at 4:42 PM, Wultsch wrote:
> >> Bug description:
> >> When long_query_time is really small, the slow log will also see
> >> queries that are not actually executed, for instance because of a
> >> syntax error. Naturally this is not desirable behaviour.
> >>
> >> The probable cause is actually likely to be with the original slow
> >> query log code and not with the microslow patch as such; with
> >> second-granularity, no syntax error query would ever take longer
> >> than a second to get through the parsing stage....
> >>
> >
> > I think this is not a bug. If a DBA decides that any query that
> > takes over some period of time should be logged, and a query (with a
> > syntax error, or not) takes longer than that period, it __should__ be
> > logged.
> >
> > As long as the behavior is documented I do not see it as problematic.
>
>
> The slow log is intended for execution time, queries with syntax
> errors are not executed.
> An error is an error, it already reports back instantly on the client
> side... what would be the practical purpose of logging it in the slow
> log?
>
> I use a very low long_query_time in training to show how queries are
> executed (mem/disk tmp table, mem/disk sort, etc); of course I make
> typos on the cmdline client while typing, don't want all that junk
> logged.
Suggestion: log_long_parse_time/log_long_parse_fail as extra options.
I can see advantages both ways for DBAs. So why not give them both, and
have a second cfg option that allows them to log queries that spend more
than XXX microseconds in the parser, with whether or not they're
executed as a boolean?
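To illustrate, if such options were added, a my.cnf fragment might look
like this. Note that log_long_parse_time and log_long_parse_fail are just
the names proposed above, not existing server variables; long_query_time
is the real option under discussion:

```ini
[mysqld]
# Existing option: log queries whose execution time exceeds this.
# Sub-second values assume the microslow patch (stock MySQL of this
# era only supports whole seconds here).
long_query_time = 0.5

# Hypothetical options as proposed above -- not real MySQL variables:
# log queries that spend more than this many microseconds in the parser,
log_long_parse_time = 100000
# and control whether queries that fail to parse/execute are logged at all.
log_long_parse_fail = OFF
```

With log_long_parse_fail = OFF, the training-session typos mentioned above
would stay out of the slow log, while a DBA who wants parser-time evidence
could turn it on.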
Don't underestimate the need for a DBA to have numbers to beat their
developers over the head with ;-)
- P.
--
Peter Lieverdink counter.li.org #108200 -37.807478, 144.94465
0x969F3F57 9662 1CB5 8E54 450D 2E12 9D7E 580E 2519 969F 3F57
2.6.27-7-generic #1 SMP Tue Nov 4 19:33:06 UTC 2008 x86_64 GNU/Linux