System logging is an essential management function. The information in log files provides statistics for auditing security and network access, insight into system performance and failures, and a baseline measure of your system's behavior. All Unix implementations provide logging facilities, but some logs must be explicitly enabled before they record anything. As a general rule, enable detailed logging and, if possible, use a central loghost: keep your local logs running, but mirror all messages to a central location as well. Retain not only system logs but also the logs of any important network applications you run.
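As a sketch of the mirroring idea, Python's standard `logging` module can write to a local file and forward the same messages to a syslog server at once. The hostname and path below are placeholders for your own loghost and log file, and real Unix daemons would use syslogd/rsyslog rather than application-level handlers:

```python
import logging
import logging.handlers

logger = logging.getLogger("myapp")
logger.setLevel(logging.DEBUG)
fmt = logging.Formatter("%(asctime)s %(levelname)s %(message)s")

# Local log file (a real daemon would typically use /var/log/myapp.log).
local = logging.FileHandler("myapp.log")
local.setFormatter(fmt)
logger.addHandler(local)

# Mirror every message to the central loghost over UDP syslog (port 514).
# 127.0.0.1 is a stand-in -- substitute your site's central log server.
remote = logging.handlers.SysLogHandler(address=("127.0.0.1", 514))
remote.setFormatter(fmt)
logger.addHandler(remote)

logger.info("application started")
```

Because both handlers hang off one logger, every event is recorded locally and shipped centrally without any extra calls at the point of logging.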
Logs depend heavily on timing: accurate timestamps are the critical piece of data that makes log entries interpretable and comparable. It is therefore incumbent upon you to make sure that your timekeeping utility (e.g., NTP) is functioning correctly, not only on your servers but consistently across every system you log.
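To see why this matters, consider merging entries from two hosts when one clock runs fast. The log lines and the 30-second skew below are hypothetical, but the effect is real: a naive merge by timestamp reports the events in the wrong order, and only correcting for the known offset recovers the true sequence.

```python
from datetime import datetime, timedelta

# Hypothetical entries: host B's clock runs 30 seconds fast, so its
# 10:00:02 event is stamped 10:00:32.
host_a = [("2024-05-01T10:00:05", "A: connection accepted")]
host_b = [("2024-05-01T10:00:32", "B: authentication failure")]

skew = timedelta(seconds=30)  # host B's measured clock offset

# Naive merge by raw timestamp puts A's event first...
merged = sorted(host_a + host_b)

# ...but subtracting B's skew shows B's failure actually came first.
corrected = sorted(
    [(datetime.fromisoformat(ts), msg) for ts, msg in host_a]
    + [(datetime.fromisoformat(ts) - skew, msg) for ts, msg in host_b]
)
```

With synchronized clocks the correction step is unnecessary, which is exactly why NTP across all logged systems is worth the effort.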
Because you are going to generate a great deal of data, it is important to develop a system for rotating and backing up your logs both regularly and automatically. Consider your log-retention policies carefully, and request comments on proposed policies from knowledgeable colleagues and managers.
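On most Unix systems rotation is handled by a tool such as logrotate, but the parameters involved are the same everywhere; as an illustration, Python's standard library exposes them directly. The file name, size limit, and backup count below are arbitrary stand-ins for whatever your written retention policy specifies:

```python
import logging
import logging.handlers

logger = logging.getLogger("rotated")
logger.setLevel(logging.INFO)

# Rotate when the file reaches ~1 MB, keeping 7 old copies
# (app.log.1 ... app.log.7). Size and count are policy decisions.
handler = logging.handlers.RotatingFileHandler(
    "app.log", maxBytes=1_000_000, backupCount=7
)
handler.setFormatter(logging.Formatter("%(asctime)s %(levelname)s %(message)s"))
logger.addHandler(handler)

logger.info("rotation configured")
```

Whichever tool you use, the key point is that rotation and retention happen automatically, on a schedule, rather than when someone remembers that the disk is filling up.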
Finally, logs are only useful if they are examined -- and not just during emergencies. Scan your logs routinely for anomalies, and use log-monitoring packages both to alert you to urgent events and to refine and analyze the data.
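A minimal sketch of the alerting idea: scan for a pattern of interest, aggregate by source, and flag anything over a threshold. The sample sshd-style lines, the regular expression, and the threshold are all illustrative; a real monitoring package would tail live logs and notify you rather than build a list.

```python
import re
from collections import Counter

# Illustrative pattern: failed SSH logins, grouped by source address.
FAIL = re.compile(r"Failed password for .* from (\S+)")

sample = """\
May  1 10:00:01 sshd[100]: Failed password for root from 192.0.2.10
May  1 10:00:03 sshd[101]: Failed password for admin from 192.0.2.10
May  1 10:00:07 sshd[102]: Accepted password for barrie from 198.51.100.7
"""

failures = Counter(
    m.group(1) for line in sample.splitlines() if (m := FAIL.search(line))
)

# Fire an alert for any host over the (arbitrary) threshold.
ALERT_THRESHOLD = 2
alerts = [host for host, n in failures.items() if n >= ALERT_THRESHOLD]
```

The same scan-aggregate-threshold loop underlies most log-monitoring tools; the value of a dedicated package is that it runs continuously and handles the notification for you.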
Barrie Sosinsky is president of the consulting company Sosinsky and Associates (Medfield, MA). He has written extensively on a variety of computer topics. His company specializes in custom software (database- and Web-related), training, and technical documentation.