How Should an Application's Logs Work?

emmjayell writes "You've been there: you loaded up a new application (think server-based app like Apache or Samba ...), it's working okay for a few days or a few months, then the intermittent problems start. Usually it's the CEO or someone else of relative importance who is the first victim. You can't readily duplicate the problem, so you go to find out where the application puts its logs -- maybe it's in /var/log/messages, maybe in its own directory, sometimes it's right there and available in some administrative GUI. So what makes you happiest when diagnosing the problem? Do you want tools to access it? UI or command line? Do you want it formatted for tools like cut and sed? Do you have any examples of an app that does a great job with system logging and diag logging? Background: my team is working on an application that is gearing up for a first release. We have a logging framework in place already (we are using Apache: -- so that covers how we are logging, but not what we should log and how it should be laid out for optimal use."
  • by Zugot ( 17501 ) * <bryan AT osesm DOT com> on Monday May 09, 2005 @08:39AM (#12476593)
    JBoss 4. Even though I can't always figure it out the first time I look at it, the answer is always in the log.

    Even though it is resource intensive, I prefer that the developer log everything and let me decide how verbose or terse I want the logs to be.
      • Too many posts hit +4. Decrease the number of moderators.

        I hate going off-topic, but we all know /. discourages "meta-discussion" (e.g. discussion about slash itself). Anyway, I'd say a better solution would be to expand the moderation cap. Instead of topping out at +5, let the range run from +20 to -20, for example. I find a few of the -1 posts amusing or interesting -- after all I don't always agree with the mods. Really worthless GNAA crap or whatever will disappear down a deep dark hole (no pun intend
    • While we are on the subject, let me make the following recommendations :

      1. When your app creates a new log file, put in a header. In the header put the following information : retention period, logging level, full application path and name, where the source code to this application is located, who (at the time the code was last changed) can accurately interpret the contents of this log file, the original intent / purpose of the log file, the time / date stamp of the creation of the file. If you can look
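The header idea above can be sketched in a few lines; every field value here is an illustrative assumption, not a prescribed format:

```java
import java.time.Instant;

public class LogHeader {
    // Build the kind of self-describing header the comment suggests.
    // All values below are placeholders for a real deployment.
    static String header(Instant created) {
        StringBuilder sb = new StringBuilder();
        sb.append("# retention-period: 90 days\n");
        sb.append("# logging-level: INFO\n");
        sb.append("# application: /usr/local/bin/myapp\n");
        sb.append("# source: https://example.com/repo/myapp\n");
        sb.append("# contact: jane@example.com\n");
        sb.append("# purpose: request and error diagnostics\n");
        sb.append("# created: ").append(created).append('\n');
        return sb.toString();
    }

    public static void main(String[] args) {
        System.out.print(header(Instant.parse("2005-05-09T08:39:00Z")));
    }
}
```

Commented-out header lines (`#`) keep the file friendly to grep and awk, since a reader can skip them with `grep -v '^#'`.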
  • Should the administrator of the software already know where the log file is saved? If he/she wanted the application to create a log file, then he/she would have made sure it worked before unleashing the application.
  • by Velex ( 120469 ) on Monday May 09, 2005 @08:43AM (#12476627) Journal

    Any kind of log is fine with me as long as it's there and it gives me some kind of insight into what's going wrong, e.g. "can't open this file," "that file's corrupt," "null pointer." Of course, text files are nice, because you can actually search through them.

    Sadly, most applications for M$ operating systems usually just leave things like, "Error #543892157893421 occurred." When you go to look up what error 84901257893423 is, no one in the world seems to have had it. Tech support proceeds to blame your hardware vendor, who blames your software vendor, ad nauseam. Seems like most applications for m$ operating systems just pull error numbers out of their asses.

  • by NoSuchGuy ( 308510 ) <do-not-harvest-m ...> on Monday May 09, 2005 @08:47AM (#12476655) Journal
    Maybe you want to know

    When from Where did What by Whom

    When = ISO 8601 timestamp
    Where = IP address / name of computer
    Who = which login / registered user
    What = requested file foo/bar?a=213b=dfg
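One possible line layout for the when/where/who/what fields above, with the ISO 8601 timestamp first (field order and the sample request are assumptions):

```java
import java.time.Instant;

public class AccessLine {
    // One log line answering when / where / who / what.
    // Instant.toString() is already ISO 8601 (e.g. 2005-05-09T08:47:00Z).
    static String line(Instant when, String where, String who, String what) {
        return when + " " + where + " " + who + " " + what;
    }

    public static void main(String[] args) {
        System.out.println(line(
            Instant.parse("2005-05-09T08:47:00Z"),
            "192.0.2.10",
            "alice",
            "GET foo/bar?a=213b=dfg"));
    }
}
```

Putting the timestamp first means merged logs from several sources sort chronologically with a plain `sort`.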
    • I also log every sql query with that info, and if it failed/went through and what the error was if any. This helps a lot when database entries mysteriously change values.

      I log it all in the database (ya ya, I know, but it's not a critical site) and search through it with phpMyAdmin
      • >I also log every sql query with that info

        For "big" applications you really can't do this because of bind variables. (You can, but it would add another level of complexity to your logging)

        If you don't know what I mean, then lucky you. :)
        • true, but if you were using bind variables, I doubt you would want another db call as overhead anyways :)
        • What the heck is a bind variable?
          • Thank you for encouraging me to go learn something new. I'll try to summarise; others more experienced than myself will have to forgive or correct any misinterpretation.

            'Bind variables' are, essentially, a database optimization. If you repeatedly execute many queries with the same structure, but different data values, you'll cause the underlying DBMS to repeatedly parse those statements. If you give the DBMS enough of a hint that the statements are, essentially, of the same structure, you can buy quite a l
            • You described "Parameterized Queries."

              I'm not sure "parameterized queries" is what the GGP meant by "bind variables", because he mentioned that "bind variables" might only be used on "large applications." Truthfully, I thought everyone always used parameterized queries on all calls to the database, from all applications, large and small.

              And yes, the Perl MySQL drivers have supported this for at least the last 6 years.
              • The description was based on a reading of the article to which I linked. Bind variables may well be the same as parameterized queries; maybe this is all just application-specific semantics.

                Parameterized queries are pretty much a given if you're writing stored procs anyway (yes, we're all doing that). And I bet there's a single occurrence somewhere of a non-parameterized query that's more efficient than a parameterized one.
              • I did mean exactly what was explained.

                By "large" I meant "heavily used". It just has to be heavily used enough that the performance win from not parsing the same statement for the 1000th time in an hour is worth it.

                Not everyone uses parameterized queries because not everyone has to worry/encountered this optimization. One of the biggest negatives I get from programmers is that using bind variables makes it hard to figure out exactly what is happening.
                • To me, the performance boost by re-using prepared statements is one of the least important reasons to use bind variables.

                  With bind variables you are sidestepping the entire problem of escaping special characters in your data, and many charset problems are solved automatically by the database driver.

                  SQL injection becomes almost[*] impossible, that alone justifies using it everywhere (IMHO).

                  [*] Only almost, because sometimes outside parameters are used to choose the name of a table or column, bind variable
        • For "big" applications you really can't do this because of bind variables. (You can, but it would add another level of complexity to your logging)

          I've done it in Java, and it's no big deal. You simply implement PreparedStatement with a logging feature, then dump the parameters + sql in the event of an exception. If you don't want to do that, you can usually get the pre-bind query in the event of an exception.
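A minimal sketch of the idea the parent describes. It does not implement the real java.sql.PreparedStatement interface (which has dozens of methods); it only shows how the SQL template and its bind values can be kept together so both can be dumped when an exception occurs. Class and method names are made up for illustration:

```java
import java.util.ArrayList;
import java.util.List;

public class LoggedQuery {
    // Keep the pre-bind SQL and its parameters side by side so that,
    // on an exception, both can be written to the log.
    private final String sql;
    private final List<Object> params = new ArrayList<>();

    LoggedQuery(String sql) { this.sql = sql; }

    LoggedQuery bind(Object value) { params.add(value); return this; }

    // What we'd emit to the log: the template plus the bound values.
    String describe() { return sql + " -- params=" + params; }

    public static void main(String[] args) {
        LoggedQuery q = new LoggedQuery(
            "SELECT * FROM employees WHERE dept = ? AND hired > ?")
            .bind("sales").bind("2005-01-01");
        System.out.println(q.describe());
    }
}
```

In a real wrapper, `describe()` would be called from the catch block around `executeQuery()`, so the overhead is only paid when something actually fails.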

      • You log the DB query to the database, but do you log the insert of the log data? Then do you log the insert of the insert of the log data?

        What about the insert of the insert of the insert of the log data?

        And then you should probably be logging the insert of the insert of the insert of the insert of the log data.. and don't forget the insert of the insert of the insert of the insert^#(I*$&%*&

        Error: maximum recursion depth exceeded.
        • Oracle products can do this. Yes, it has been shown that it isn't 100% failsafe, but failures are rare enough, and surface as an error on the original query fast enough, that it isn't an issue.
    • Yes, standard timestamps! (Especially when timezones are involved.) It's extremely irritating parsing stuff into a db for processing when someone uses a non-standard format for no good reason. I've seen sites with different formats in HTTP headers for Date and Last-Modified. Huh? And almost every domain registrar uses a slightly different format.
  • syslog! (Score:4, Insightful)

    by Anonymous Coward on Monday May 09, 2005 @09:06AM (#12476850)
    The distressing part is that there's an answer: the syslog daemon, a quite capable application. The problem is that some prominent apps don't use it at all. This is in part due to the limitations of syslog, but the solution is not to reinvent the wheel, it's to modify the wheel so that it does as you wish. syslog doesn't have the facilities you want? Change it so it does. It doesn't deal with things like splitting logs for virtual (web) hosts? Change it so it does.

    I say this as a sysadmin for a large number (several thousands) of servers. So many problems could be solved by Apache et al using, and modifying where necessary, the existing solutions. But they don't, they roll their own, and so we see problems.

    I realize this is a sorta OT rant, but it's causing major problems for me at work and so I think it's justified. Stuff like rotatelogs has functionality to tell Apache "dump your logs, I'm rotating the file!" (which by the way doesn't always work) which could be easily rolled into a single notification of the syslog daemon. But it isn't. For whatever perverse reason, Apache and many others decided to roll their own.

    It pisses me the fuck off, having to deal with it. The greatest shortcoming by far of OSS is this insistence on reimplementing proven, robust, existing solutions rather than making a trivial fix. This is a particularly egregious example, one the OSS world would be well served by acknowledging.

    • Re:syslog! (Score:4, Interesting)

      by Sentry21 ( 8183 ) on Monday May 09, 2005 @09:25AM (#12477064) Journal
      Well, I'd like to abandon syslog in favour of logging to an SQL database for everything (a central sysloggable database would be nice, a standard API and what have you). Text logs are nice, but given the choice, I'd rather put everything into MySQL. It's searchable, it's archivable, it's a lot faster to process than plaintext. A hundred megs of logs takes forever to process with perl, but if it's in the database, you can make a lot of queries live.

      SELECT SUM(transfersize)/1024 FROM logs.apache WHERE vhost = "" AND date > "2005-05-01 00:00:00" AND date < "2005-06-01 00:00:00"

      Now I have the amount of bandwidth (in kilobytes) that was served from my files site in the month of May. Doing this with raw logs is absurdly slow in comparison, and only gets more so as time goes on. If you want to archive them in compressed format, you can do 'mysqldump --where="date ..." --database logs' and such.

      So my recommendation for logging: support mysql!
      • Re:syslog! (Score:4, Insightful)

        by DrSkwid ( 118965 ) on Monday May 09, 2005 @09:38AM (#12477217) Homepage Journal
        In my experience, logging each request to a database isn't fast enough, and eventually the database server becomes CPU-overloaded.

        tab separated plain text logging is just fine and dandy. Syslog is great because it has a standard format so any tools you make will just keep on trucking.

        If you want a db of it do it in batch mode when you need it i.e.

        zcat messages.0.gz | syslogtodb | grep httpd | mysql httpd

        Why waste those CPU cycles indexing millions of lines of logging you'll never look at ?

        • In Oracle at least, you can map a text file as a table (an external table I think it's called). Then you get the benefit of all your database processing without the overhead during the log.
        • Or, just do the Best of Both Worlds(tm) and dump your log to the database every night as part of the log rotation.

      • syslog-ng.
        Alternatively, there is a Pg logging module for Apache (dblog, IIRC).
    • Re:syslog! (Score:4, Informative)

      by EnronHaliburton2004 ( 815366 ) * on Monday May 09, 2005 @11:02AM (#12478036) Homepage Journal
      Apache and many others decided to roll their own.

      That's partially because Apache runs on Windows, which doesn't use syslog by default. Syslog also runs differently on different Unixes, and since not all Unixes are open-source, you can't always fix it.
    • The problem with not being able to independently log virtual web hosts separately can be solved by using different facilities: local1 for host1, etc. There are up to 16 of these depending on the syslog variant. A bit of a kludge but it is there.

      I agree with the parent's rant that Unix progs don't necessarily exploit existing interfaces like syslog or tcp wrappers. My pet peeve is people not universally supporting --version or --help as parameters - perhaps if everyone continues to complain....

      Another si
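The per-vhost facility trick the parent mentions might look like this in syslog.conf (file paths and facility assignments are assumptions; most syslogds offer local0 through local7):

```
# route each virtual host's facility to its own file
local1.*    /var/log/httpd/host1.log
local2.*    /var/log/httpd/host2.log
```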
      It pisses me the fuck off, having to deal with it. The greatest shortcoming by far of OSS is this insistence on reimplementing proven, robust, existing solutions rather than making a trivial fix. This is a particularly egregious example, one the OSS world would be well served by acknowledging.

      If you're adminning apps that use log4j, know that a syslog appender exists and can be used with a configuration change.

  • by sydb ( 176695 ) <michael AT wd21 DOT co DOT uk> on Monday May 09, 2005 @09:15AM (#12476953)
    As I once said to a colleague: /var/log

    If you have simple logging needs, log via syslog and leave the details to the site.

    For more complex needs, especially if you have several logs, /var/log/appname/* is good.

    Obviously, the logs should be a text file. You ask if special tools should be provided. For text files we already have grep, sed, awk, perl.

    The exception is if you are providing some kind of administrative GUI, say a web app. Logs that relate to specific functionality should be near the controls for that functionality. By using a GUI you are saying "I don't want to get my hands dirty" which, for time-pressed admins, is a perfectly legitimate approach for apps with complicated configuration architectures (Sendmail, WebSphere 5). So the GUI should take away the complexity of having to know where the logs are. It should always be possible, though, to get at the text of the logs and run standard tools against them.

  • by rjh ( 40933 ) <> on Monday May 09, 2005 @09:19AM (#12477008)
    My favorite logs are the ones where I get control over what events get logged and in what detail they get logged. There's no such thing as a one-size-fits-all software solution; why do we believe in one-size-fits-all logging?

    The alternative is to log everything in great detail, but do so in such a way as to make it truly trivial for me to strip out everything except the specific events in which I'm interested, in the level of detail in which I'm interested.
    • Configurable logging with wide control over groups and level of logging is a blessing, especially when you're trying to figure out what's going wrong at a customer site with Idiot Inside. You can always turn it down/off when you don't need it or the customer site is close to a tropical beach.
    • My favorite logs are the ones where I get control over what events get logged and in what detail they get logged. There's no such thing as a one-size-fits-all software solution; why do we believe in one-size-fits-all logging?

      We, who don't, use the log4j [] framework. If I had one word to put into an ad for log4j it would be "flexibility". You can configure (at runtime!) what, where, and how your app logs things. Severity level, application-defined types of events, event data (want dd/yyyy/mm date format?

  • log4j (Score:3, Informative)

    by Anonymous Coward on Monday May 09, 2005 @09:30AM (#12477116)
    is easily the best logging package.

    Configurable log levels, and you can define your own appenders.

    Eg, I typically configure an email appender for severe/fatal errors, so they come straight to my inbox. Often I know of problems before the users do as a result of this.

    Also, something described in one the Pragmatic Series of books is an RSS appender - just point your RSS reader at the channel, and wait for any errors to occur.
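The email appender the parent describes can be wired up in log4j 1.x roughly like this; host names and addresses are placeholders, and the exact properties depend on your log4j version:

```
# log4j.properties sketch: mail severe errors straight to the admin
log4j.rootLogger=INFO, mail
log4j.appender.mail=org.apache.log4j.net.SMTPAppender
log4j.appender.mail.SMTPHost=mail.example.com
log4j.appender.mail.From=app@example.com
log4j.appender.mail.To=admin@example.com
log4j.appender.mail.Subject=Application error
log4j.appender.mail.Threshold=ERROR
log4j.appender.mail.layout=org.apache.log4j.PatternLayout
log4j.appender.mail.layout.ConversionPattern=%d %-5p %c - %m%n
```

The Threshold setting keeps routine INFO chatter out of your inbox while still mailing every ERROR and FATAL event.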
    • agreed, log4j is excellent

      the big problem with logging from java web pages is that many programmers forget that the server is multi-threaded; the answer is to generate a per-request ID at the start of each page (e.g. from a synchronized static counter) so you can tell which thread is generating the debug output.
  • Here's a great app to look at both for how to do logs right and how to do them wrong. The default setup gives one line for each email, including time/IP addr/status, all of which is good. It can also be incomprehensible and not well documented - bad.
  • A good reference (Score:5, Informative)

    by delirium of disorder ( 701392 ) on Monday May 09, 2005 @09:47AM (#12477332) Homepage Journal
    Not log specific, but this describes the Unix-Way(tm) of doing file formats. []

  • by Proteus ( 1926 ) on Monday May 09, 2005 @09:51AM (#12477386) Homepage Journal

    I really don't much care where logs are kept or what particular format they are in. However, it's important that the man page tells me where the logs are, and clearly documents the format of the log files. What do flags mean? What do particular messages mean?

    Also, formatting the logs in such a way that they can be quickly searched with grep or parsed by a simple script is most helpful. One of my favorite loggers does this:

    MESG: 2005-05.May-09@09.02.54CDT: Started run
    WARN: 2005-05.May-09@09.03.17CDT: Couldn't find file 'control.rc', creating
    ERR!: 2005-05.May-09@09.03.18CDT: Unable to create 'control.rc', terminating
    MESG: 2005-05.May-09@09.02.54CDT: Completed run. 1 error, 1 warning.

    This lets me see everything in chronological order, but I can quickly parse the log. Splitting on ':' will yield the first two fields consistently, and the first four chars are *always* the type of log message. So doing something like:

    $ egrep '^ERR!: 2005-05.May-09' report.log

    Lets me immediately see all the errors for a given day. The key to good logging is, IMO, making sure that the logs can be parsed effectively.
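The format above can be pulled apart with an ordinary bounded split. Splitting at most twice means colons inside the free-form message don't break parsing (a sketch, using one of the sample lines):

```java
public class ParseLog {
    public static void main(String[] args) {
        String line = "WARN: 2005-05.May-09@09.03.17CDT: Couldn't find file 'control.rc', creating";
        // Split on ": " at most twice, as the format promises:
        // message type, timestamp, then the free-form message.
        String[] parts = line.split(": ", 3);
        System.out.println(parts[0]);
        System.out.println(parts[1]);
        System.out.println(parts[2]);
    }
}
```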

    • by Anonymous Coward
      That's actually a pretty bad log format

      #1 wastes space with colons. That means you technically have to split on ": ", not " ".

      #2 doesn't put the date first. That means if you're merging logs from multiple sources, you can't just sort them.

      #3 uses some bizarre ridiculous date format that the author probably made up himself. Where does "May" come from? Is it dependent on the locale? Why does it have "05.May"?? What is CDT? There is more than one time zone that has abbreviation CDT. And if you're talking C
      • 2005-05-06 02:04:06 error The printer is on fire! 2005-05-06 02:05:12 notice The printer is no longer on fire.

        These errors are famously generated by the unix lp drivers, which assumed that any error a printer reported that wasn't offline or out of paper meant it was on fire.

      • OK, I generally agree, but I'll play devil's advocate on a couple of points.

        "#1 wastes space with colons."
        It actually looks like it's using the colon as the field separator to avoid having to deal with whitespace in the message. Although colons could also be present in the message, they're far less likely to occur than spaces. True, it's probably a nasty optimization, and I'm not sure why the colons are augmented with spaces, but it's not all bad.

        "#2 doesn't put the date first."
        Agreed, but, as you point o
  • by EnronHaliburton2004 ( 815366 ) * on Monday May 09, 2005 @10:58AM (#12477993) Homepage Journal
    Whatever your logging strategy, please remember to rotate your logfiles.

    You'd be amazed how many Internet applications crash simply because the logs fill up the partition that hosts the application.

    This problem was threefold, because:

    1. The logfiles should never get that size
    2. Logs should usually exist on a separate partition like /var or your custom directory under /a
    3. People usually ignore the messages in a Production debug log

    I run 12 moderate websites. We easily generate 5GB of logfiles a week (Rotated). It used to be 10GB, but then I switched off debug logging and fixed the most common errors.

    When I started my current job 1 year ago, I deleted 40GB of logs which ran back to year 2000.
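On most Linux systems, rotation like the parent recommends is a few lines of logrotate configuration (the path and retention policy here are assumptions):

```
# /etc/logrotate.d/myapp (sketch)
/var/log/myapp/*.log {
    weekly
    rotate 8
    compress
    missingok
    notifempty
}
```

This keeps eight compressed weeks of history and then discards, so the partition can't silently fill up.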
  • Coerce Consistency (Score:5, Informative)

    by BoyBlunder ( 882644 ) on Monday May 09, 2005 @11:05AM (#12478071) Homepage
    I work for a log analysis firm, and the bane of our lives is logs where the information is presented inconsistently from one message to the next. So one message might have, say, an IP address as the first word, while in another message it's somewhere else in the line as IP address:port, etc. It's a right royal PITA to write code to extract the IP address in this example if you have to find out every potential message that the app will ever issue in order to automate analysis.

    So, in order to make storage, analysis and reporting easy, your framework should attempt to coerce a consistent approach to the data logged - even the plaintext "human readable" data if you can. If you can do the same with metadata about the event (e.g. ID fields, links to online KBs etc), so much the better.
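One common way to coerce that consistency is to emit every event as ordered key=value pairs, so an analyzer can find the IP (or anything else) without knowing each message's shape. A sketch with made-up field names:

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class KvLog {
    // Render an event as space-separated key=value pairs in a
    // fixed, predictable order (LinkedHashMap preserves insertion order).
    static String event(Map<String, String> fields) {
        StringBuilder sb = new StringBuilder();
        for (Map.Entry<String, String> e : fields.entrySet()) {
            if (sb.length() > 0) sb.append(' ');
            sb.append(e.getKey()).append('=').append(e.getValue());
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        Map<String, String> f = new LinkedHashMap<>();
        f.put("ts", "2005-05-09T11:05:00Z");
        f.put("ip", "192.0.2.10");
        f.put("port", "8080");
        f.put("msg", "login_failed");
        System.out.println(event(f));
    }
}
```

With this layout, `grep -o 'ip=[^ ]*'` extracts the address from every message the app will ever emit.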


  • by Goyuix ( 698012 ) on Monday May 09, 2005 @11:54AM (#12478641) Homepage
    It is nice to allow the end user to specify the verbosity of the log file - do you log each request, only failures, or every entrance/exit to a function? If the log is to assist in diagnosing the error, it is nice to be able to turn on extra information that would normally just quickly fill up disk space.
  • by __david__ ( 45671 ) * on Monday May 09, 2005 @12:21PM (#12478948) Homepage
    I agree with what others have said--I don't care about the format too much as long as it's text and has a timestamp in front.

    What really matters to me is that errors get logged nicely. "Error number #57575" or even "permission denied" doesn't help at all. It needs to be specific: "permission denied while opening file /blah/blah" is infinitely better, since it lets you actually fix the error without looking something up on google. Speaking of that, if you *do* have some kind of strange error you are reporting and you know it's not going to make sense, at least make it long and unique so I *can* look it up on google. Terseness can be a good quality in a program, but it is *not* a good quality in an error log.

    Just remember, the people installing and using your software probably don't know the first thing about the way it works on the inside. You have to explain to them what went wrong and give them clues on how to fix it in a short error line.

    Oh, it's also nice to have a link back to the original source line that caused the problem. In C, use __FILE__, __LINE__, and __func__ (if you have it) so that if I'm really stumped I can download your source, quickly find the error and start working backwards to find the cause. However, this could get confusing if there are a number of different versions of your code floating around, and it's not as important as the other things I mentioned.
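The comment above is about C, but a similar source reference is available in Java without macros: a Throwable records where it was created, so file and line can be stamped into an error message. A sketch:

```java
public class Where {
    // A Throwable captures its creation site, so we can recover the
    // caller's file name and line number, much like __FILE__/__LINE__.
    static String here() {
        StackTraceElement caller = new Throwable().getStackTrace()[1];
        return caller.getFileName() + ":" + caller.getLineNumber();
    }

    public static void main(String[] args) {
        System.out.println("permission denied while opening /blah/blah (at " + here() + ")");
    }
}
```

Constructing a Throwable has a cost, so this is best reserved for error-level messages rather than every log line.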

  • by Parsec ( 1702 ) on Monday May 09, 2005 @01:22PM (#12479558) Homepage Journal

    To paraphrase an Apache (1.x) error I recently encountered: "The client was denied access by configuration rule.".

    First of all: which rule denied the request? Second: does the program record which file and line the rule came from? Simply having the text or location of the rule would be a huge help in debugging.

  • pflogsumm [].

    Set it up with cron, get a mail every day and keep them for a few days. Saves you a lot of headache.

    For all other stuff use egrep (or grep), awk and sed. I did my own scripts to search for specific abuses. vim in command line mode may also come in handy.
  • Related to logging, is program output. If you are writing tools that run at a Unix console, then someday you'll want to run them from scripts and use their output:

    1. Use stdout and stderr appropriately: I should be able to run "foo > file" and see warnings on my console, but only get useful data in the file. Don't write messages to stdout if they aren't useful output. C++ has std::clog too. Don't make scripts use "grep -v" to remove random status messages or whatever.

    2. prefix your warnings/error
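Point 1 above in miniature (the result and warning strings are invented for illustration): data goes to stdout so `foo > file` captures only results, while status chatter goes to stderr and stays on the console.

```java
public class Streams {
    public static void main(String[] args) {
        // Useful data: redirectable with "java Streams > file"
        System.out.println("result: 42");
        // Status/warnings: stay visible on the console via stderr
        System.err.println("warning: config file missing, using defaults");
    }
}
```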
  • Windows 2000 (Score:3, Informative)

    by Chemisor ( 97276 ) on Monday May 09, 2005 @06:24PM (#12483062)
    You know what logs I really like? The Windows 2000 system logs. All the services write in them, so there is never any hunting for files. They can have a lot of information packed into them, if the application takes some time to do it. They rotate automatically. They have severity icons so you know which ones are the errors and which are not. And they are all in a nice GUI list, so you don't need a command line PhD to view them. User friendly, indeed!
    • Well, as it seems you mostly like the gui rather than the logs themselves; you need someone to whip up a clone for OS tools that allows for profiles of where logs are kept and displays them in a GUI similar to the windows one...
      • > you mostly like the gui rather than the logs themselves

        It's more than that. The format matters too. The Linux way is to dump everything into a text file and let the human figure it out. There is no clear boundary between log messages, for example. Some messages are logged with several entries. There is no way to tell what the severity of the error is except by filtering certain logs into different files.
    • That's funny, I hate the windows logging GUI....

      Any real problem, and it's like finding a needle in a haystack. grep is a much more useful tool in sorting out what's going on.

      To be fair, the Windows Event Log GUI has a filtering mechanism, but I find grep to be much easier to use in practice. Further, the filtering mechanism is limited to the logging service's notion of categories ("Source", "Event Category", "Severity", "System"). Most of the time that I need to troubleshoot problems these categories obscur
  • I usually make a list of event codes to help ID the type of event or error and its severity. Then I make sure that whoever reads the log entry can find the line of code that generated the error.

    For example, if I am coding in Perl on Apache I might have a handler called '' ....
    sub add_employee {
        try { [something that generates an error] }
        catch {
            log("Error 1001 at employees.add_employee.100: Can't find the file $filename");
        };

        try { [something else that generates an error] }
    • frankly, I think that's not a good way to write code.

      Exceptions are great to *REDUCE* the clutter created by the endless if(doit){ error() }; sequences in non-exception-savvy programming languages, but you just seem to use exceptions as some different type of return code for your functions.

      You assume that some specific exception is thrown by your functions, because you don't distinguish between the possible causes in your catch{} blocks.

      I'd say, a better way of writing would have been:

      try {
  • We use log4j, which lets us dynamically flip into debug mode, with more verbose info, when we want. So I've recently added support in our app for turning on debug mode logging on the fly. That's pretty big if the app's going into a hosting center...restarting the app may mean a walk to a secured room in another building, or at least remoting into the box. But even then, there's work in progress with that app you may not want to lose. What I did was set up a timer so that debug mode is turned off automagical
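The auto-off timer described above can be sketched with java.util.Timer; the flag here stands in for whatever actually flips the log level (in log4j that would be a Logger.setLevel call), and the short delays are just to make the sketch observable:

```java
import java.util.Timer;
import java.util.TimerTask;
import java.util.concurrent.atomic.AtomicBoolean;

public class DebugWindow {
    static final AtomicBoolean debug = new AtomicBoolean(false);

    // Turn debug logging on, and schedule it to switch itself back off,
    // so a forgotten flag can't fill the disk at a hosting center.
    static void enableDebug(long millis) {
        debug.set(true);
        new Timer(true).schedule(new TimerTask() {
            public void run() { debug.set(false); }
        }, millis);
    }

    public static void main(String[] args) throws InterruptedException {
        enableDebug(100);
        System.out.println("debug on: " + debug.get());
        Thread.sleep(300);   // wait past the auto-off deadline
        System.out.println("debug on: " + debug.get());
    }
}
```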
  • So your log's filled with interesting tidbits of code gone wild...who is going to find out about it? Good options are: SNMP (for hosting centers), email (for hosting centers, and otherwise), or sysout (if somebody watches the screen).
  • Do not invent your own.

    Use openlog(3) and syslog(3). Document the facility you are going to use (like LOG_LOCAL0) and allow it to be changed through a config file or command-line option.

    Once there, use different priorities for messages of different importance (from LOG_DEBUG to LOG_EMERG) -- the verbosity can be set once during the config-processing through the openlog(3).

    There are tools to process such logs, rotate them, and to send messages to a central location. You don't need to worry about t
