
Table of Contents
1. Use a Centralized Logging System
2. Rotate Logs Regularly with logrotate
3. Monitor Log Sizes and Disk Usage
4. Secure and Protect Log Files
5. Filter and Structure Log Output
6. Set Up Alerts for Critical Events

Best Practices for Managing Logs on a Linux System

Jul 28, 2025 am 12:33 AM

1. Use a centralized logging system: configure tools like rsyslog or syslog-ng, or modern platforms such as the ELK Stack or Graylog, to aggregate logs from multiple servers, improving visibility and auditability.
2. Rotate logs regularly with logrotate (configured in /etc/logrotate.conf or /etc/logrotate.d/) to prevent disk exhaustion, compress old logs, and retain them for a defined period; test configurations with logrotate -d before applying.
3. Monitor log sizes and disk usage with commands like journalctl --disk-usage and du -sh /var/log/*; set up alerts when /var/log exceeds a threshold, and cap systemd journal growth via SystemMaxUse, SystemMaxFileSize, and MaxRetentionSec in journald.conf.
4. Secure log files with proper permissions (e.g., chmod 644 or 600), keep them out of public directories, use ACLs or encryption at rest where compliance requires, and send logs to a remote, write-only server to prevent tampering.
5. Filter and structure log output: adopt machine-readable formats like JSON, use structured logging libraries, tag logs by service or host, and filter noise in rsyslog with conditional rules to improve parsing and searchability.
6. Set up alerts for critical events with tools like fail2ban, logwatch, or SIEM solutions such as Wazuh; write custom grep/awk scripts to detect anomalies like repeated failed logins, and automate checks via cron for proactive monitoring and incident response.


Managing logs effectively on a Linux system is essential for troubleshooting, security monitoring, and maintaining system performance. Poor log management can lead to disk exhaustion, difficulty in diagnosing issues, or missing critical alerts. Here are key best practices to keep your logging under control.


1. Use a Centralized Logging System

Relying solely on local logs limits visibility, especially in multi-server environments. Centralized logging allows you to aggregate logs from multiple systems into a single location for easier analysis.

  • Tools like rsyslog or syslog-ng can forward logs to a central server.
  • Modern solutions include ELK Stack (Elasticsearch, Logstash, Kibana) or Graylog for advanced searching and visualization.
  • For lightweight setups, consider Fluentd or Grafana Loki.

Example: Configure rsyslog to send logs over TCP/UDP to a central server by editing /etc/rsyslog.conf:

*.* @central-logging-server:514

This improves auditability and simplifies monitoring across infrastructure.
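Note that a single `@` forwards over UDP, which can silently drop messages. A more robust sketch forwards over TCP (`@@`) with a disk-assisted queue, so messages buffer locally if the central server is unreachable (the file name and queue sizes below are illustrative):

```
# /etc/rsyslog.d/50-forward.conf  (file name is illustrative)
$ActionQueueType LinkedList         # in-memory queue that can spill to disk
$ActionQueueFileName fwdq           # prefix for on-disk queue files
$ActionQueueMaxDiskSpace 1g         # cap the buffered data
$ActionResumeRetryCount -1          # retry forever instead of discarding
*.* @@central-logging-server:514    # @@ = TCP; a single @ = UDP
```

Restart rsyslog after editing so the new rules take effect.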


2. Rotate Logs Regularly with logrotate

Uncontrolled log growth can fill up disks quickly. The logrotate utility automates the process of archiving, compressing, and removing old logs.

  • Default configuration is in /etc/logrotate.conf, with app-specific rules in /etc/logrotate.d/.
  • Set rotation frequency (daily, weekly), retention period, and compression.

Example configuration for a custom app:

/var/log/myapp/*.log {
    daily
    missingok
    rotate 7
    compress
    delaycompress
    notifempty
    create 644 root root
}

This rotates logs daily, keeps 7 days of history, and compresses old files to save space.

💡 Tip: Test your logrotate config with logrotate -d /etc/logrotate.conf before applying.
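One thing the config above doesn't handle: if the application keeps its log file open, it will continue writing to the renamed file after rotation. A postrotate hook can tell it to reopen its logs; a sketch, assuming a hypothetical myapp.service that reopens logs on SIGHUP:

```
/var/log/myapp/*.log {
    daily
    rotate 7
    compress
    delaycompress
    notifempty
    postrotate
        # Ask the app to reopen its log files after rotation
        systemctl kill -s HUP myapp.service 2>/dev/null || true
    endscript
}
```

If the application can't reopen logs on a signal, `copytruncate` is an alternative, at the cost of possibly losing a few lines written during the copy.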


3. Monitor Log Sizes and Disk Usage

Even with rotation, misconfigurations or sudden spikes in logging can consume disk space.

  • Use tools like du, df, and journalctl to check log sizes:
    journalctl --disk-usage        # For systemd journals
    du -sh /var/log/*              # Check individual log directories
  • Set up alerts using monitoring tools (e.g., Nagios, Zabbix, or Prometheus) when /var/log exceeds a threshold.

For systemd-based systems, limit journal size in /etc/systemd/journald.conf:

SystemMaxUse=500M
SystemMaxFileSize=50M
MaxRetentionSec=1month

This prevents journals from growing indefinitely.
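When a full monitoring stack is overkill, the threshold alerting mentioned above can be a small cron job. A minimal sketch (the 80% threshold and the warning destination are illustrative):

```shell
#!/bin/sh
# Warn when the filesystem holding /var/log passes a usage threshold.
THRESHOLD=80
usage=$(df --output=pcent /var/log | tail -n 1 | tr -dc '0-9')
if [ "$usage" -ge "$THRESHOLD" ]; then
    # Replace with mail/Slack/webhook as appropriate for your environment
    echo "WARNING: /var/log filesystem at ${usage}% (limit ${THRESHOLD}%)" >&2
fi
```

Drop it into /etc/cron.hourly/ (or a crontab entry) so a runaway logger is noticed before the disk fills.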


4. Secure and Protect Log Files

Logs often contain sensitive data (e.g., IP addresses, usernames, error details). Unauthorized access can expose system vulnerabilities.

  • Set proper permissions:
    chmod 644 /var/log/*.log       # World-readable, writable only by the owner
    chmod 600 /var/log/sensitive.log  # Restrict access
  • Use ACLs or group-based access if needed.
  • Consider encrypting logs at rest if compliance (e.g., GDPR, HIPAA) requires it.
  • Prevent log tampering by sending logs to a remote, write-only log server.

⚠️ Never store logs in publicly accessible directories (e.g., the web root).
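As a concrete illustration of the permission commands above, here is a self-contained sketch using a throwaway file in a temp directory (on a real system the target would be something like /var/log/myapp.log and the commands would need root):

```shell
#!/bin/sh
# Demonstrate restrictive log permissions on a throwaway file.
dir=$(mktemp -d)
log="$dir/myapp.log"
touch "$log"
chmod 640 "$log"           # owner read/write, group read, others nothing
stat -c '%a %n' "$log"     # prints: 640 <path>
rm -rf "$dir"
```

On Debian-style systems, `chgrp adm` plus mode 640 gives the conventional log-reader group access without opening the file to everyone.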


5. Filter and Structure Log Output

Applications should log in a consistent, machine-readable format (like JSON) when possible. This makes parsing and alerting easier.

  • Use structured logging in apps (e.g., via libraries like logrus or winston).
  • Filter unnecessary noise in rsyslog/syslog-ng to avoid clutter.
  • Tag logs by service, environment, or host for better filtering.

Example rsyslog filter to route specific logs:

if $programname == 'nginx' then /var/log/nginx/filtered.log
& stop

This reduces noise and improves searchability.
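Structured output doesn't require a big framework; even a shell script can emit one JSON object per event. A sketch (the field names service/level/msg are illustrative, not a standard):

```shell
#!/bin/sh
# Emit one machine-parseable JSON log line per event.
log_json() {
    printf '{"ts":"%s","service":"myapp","level":"%s","msg":"%s"}\n' \
        "$(date -u '+%Y-%m-%dT%H:%M:%SZ')" "$1" "$2"
}

log_json INFO  "service started"
log_json ERROR "db connection refused"
```

Caveat: this naive printf does not escape quotes inside the message; for anything beyond fixed strings, build the line with `jq -n` or use a structured logging library.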


6. Set Up Alerts for Critical Events

Don’t wait for an outage before you look at your logs. Proactively monitor for patterns like repeated authentication failures, service crashes, or disk errors.

  • Use tools like fail2ban (for SSH brute-force detection) or logwatch (daily summaries).
  • Integrate with SIEM tools (e.g., Wazuh, OSSEC) for real-time alerting.
  • Write custom scripts with grep, awk, or jq to detect anomalies.

Example: Alert on repeated failed logins:

grep "Failed password" /var/log/auth.log | grep "$(date -d '1 hour ago' '+%b %e %H')"

Automate such checks via cron and email alerts.
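Putting the pieces together, a cron-friendly sketch that only alerts past a threshold (the path assumes a Debian-style auth.log; the limit and mail command are illustrative):

```shell
#!/bin/sh
# Alert if failed SSH logins in the previous hour exceed a limit.
LOG=/var/log/auth.log
LIMIT=5
hour=$(date -d '1 hour ago' '+%b %e %H')   # %e space-pads the day, matching syslog timestamps
count=$(grep -c "^$hour.*Failed password" "$LOG" 2>/dev/null || true)
count=${count:-0}
if [ "$count" -gt "$LIMIT" ]; then
    echo "ALERT: $count failed logins since $hour:00" | mail -s "SSH brute-force?" root
fi
```

On journald-only systems, replace the grep with `journalctl -u ssh --since '1 hour ago' | grep -c 'Failed password'`.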


Basically, good log management balances retention, performance, and security. Use rotation, centralization, monitoring, and access control together — it’s not complex, but it’s easy to overlook one piece.

The above is the detailed content of Best Practices for Managing Logs on a Linux System. For more information, please follow other related articles on the PHP Chinese website!
