Understand Java log levels, duplicate recording, and log loss issues
Dec 11, 2020 05:26 PM · Java Basic Tutorial column. This article explains how to troubleshoot common Java logging problems: wrong log levels, duplicate records, and log loss.
## 1 Common causes of logging mistakes

### 1.1 Too many logging frameworks

Different class libraries may use different logging frameworks, and compatibility between them is a problem.

### 1.2 Configuration is complex and error-prone

Log configuration files are usually complicated. Many developers simply copy a configuration file from another project or a blog post without studying how to adapt it. The most common mistakes involve duplicate logging, the performance of synchronous logging, and misconfigured asynchronous logging.

### 1.3 Misunderstandings about logging itself

For example, ignoring the cost of building log messages, or using log levels indiscriminately.

## 2 SLF4J

Logback, Log4j, Log4j2, commons-logging, the JDK's own java.util.logging, and others are all logging frameworks in the Java ecosystem, and there are indeed many of them. Different class libraries may also choose different logging frameworks, which makes unified log management very difficult.
SLF4J (Simple Logging Facade for Java) provides a unified logging facade API to solve this problem:

- The purple part of the figure is the neutral logging facade API itself.
- The bridging modules (blue part) bridge the APIs of the various logging frameworks (green part) to the SLF4J API. This way, even if a program records logs through several different logging APIs, they can all eventually be routed to the SLF4J facade.
- The adaptation/binding modules (red part) bind the SLF4J API to an actual logging framework (gray part). SLF4J is only a logging standard; it still needs a real logging framework underneath. A framework that does not itself implement the SLF4J API needs a binding module to do the conversion. Logback is implemented directly against the SLF4J API, so no binding module is needed.
You can use log4j-over-slf4j to bridge Log4j to SLF4J, and slf4j-log4j12 to bind SLF4J to Log4j; they are drawn in the same column of the figure, but they must not be used at the same time, otherwise an infinite loop occurs. The same applies to JCL (commons-logging) and JUL (java.util.logging).
The spring-boot-starter-logging module automatically introduces logback-classic (which contains both SLF4J and the Logback logging framework) and several SLF4J adapters. Among them, log4j-to-slf4j bridges the Log4j2 API to SLF4J, and jul-to-slf4j bridges the java.util.logging API to SLF4J.
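As a minimal sketch of the facade idea (assuming logback-classic is on the classpath, for example via spring-boot-starter-logging), application code depends only on the SLF4J API and Logback does the actual writing:

```java
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class Slf4jDemo {
    private static final Logger log = LoggerFactory.getLogger(Slf4jDemo.class);

    public static void main(String[] args) {
        // The call goes through the SLF4J facade; whichever framework is bound
        // at runtime (here Logback) performs the actual output.
        log.info("Hello from SLF4J");
    }
}
```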
## Logger inheritance causes duplicate logs

- Define a method that records debug, info, warn, and error logs (a sketch is shown below).
- Add a Logback configuration (also sketched below).
- The configuration itself looks fine, yet duplicate log records appear after the method runs.
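A sketch of what the method and configuration might look like (class names are illustrative; the package follows the one used later in the article):

```java
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class LoggingDemo {
    private static final Logger log = LoggerFactory.getLogger(LoggingDemo.class);

    public void log() {
        // One line per level, to observe which ones are written and how many times
        log.debug("debug");
        log.info("info");
        log.warn("warn");
        log.error("error");
    }
}
```

```xml
<configuration>
    <appender name="CONSOLE" class="ch.qos.logback.core.ConsoleAppender">
        <encoder>
            <pattern>%d{HH:mm:ss.SSS} [%thread] %-5level %logger{40} - %msg%n</pattern>
        </encoder>
    </appender>

    <!-- Both the custom logger AND root reference CONSOLE: this is the source of the duplicates -->
    <logger name="org.javaedge.time.commonmistakes.logging" level="DEBUG">
        <appender-ref ref="CONSOLE"/>
    </logger>

    <root level="INFO">
        <appender-ref ref="CONSOLE"/>
    </root>
</configuration>
```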
**Analysis**: the CONSOLE Appender is attached to two Loggers at the same time, the custom <logger> and <root>. Because the custom <logger> inherits from <root>, the same log event is recorded by the logger and then passed up to root and recorded again, so every log under the application package appears twice.
The original intent was to define a custom logger configuration so that logs under the application package could temporarily be recorded at DEBUG level. There is no need to attach the Appender a second time for that; simply remove the Appender mounted under the custom <logger>:
```xml
<logger name="org.javaedge.time.commonmistakes.logging" level="DEBUG"/>
```

If a custom <logger> does need to send logs to a different Appender, for example:

- the application's logs go to the file app.log
- logs from other frameworks go to the console

then set the additivity attribute of the custom <logger> to false, so that its events are not propagated up to <root> and the <root> Appenders are not applied to them.
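A sketch of such a configuration: the application logger writes to app.log with additivity="false", while everything else goes to the console through <root> (file name and package are illustrative):

```xml
<configuration>
    <appender name="FILE" class="ch.qos.logback.core.FileAppender">
        <file>app.log</file>
        <encoder>
            <pattern>%d{HH:mm:ss.SSS} %-5level %logger{40} - %msg%n</pattern>
        </encoder>
    </appender>

    <appender name="CONSOLE" class="ch.qos.logback.core.ConsoleAppender">
        <encoder>
            <pattern>%d{HH:mm:ss.SSS} %-5level %logger{40} - %msg%n</pattern>
        </encoder>
    </appender>

    <!-- additivity="false": events handled here are NOT passed up to root -->
    <logger name="org.javaedge.time.commonmistakes.logging" level="DEBUG" additivity="false">
        <appender-ref ref="FILE"/>
    </logger>

    <root level="INFO">
        <appender-ref ref="CONSOLE"/>
    </root>
</configuration>
```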
## Incorrect configuration of LevelFilter causes duplicate logs
The goal: while still logging to the console, also record logs into two separate files split by level.

Execution results:

- The info.log file contains logs at all three levels INFO, WARN, and ERROR, which is not what was expected.
- error.log contains WARN and ERROR level logs, so WARN and ERROR entries are collected twice.
Why this matters in practice: some companies use an automated ELK pipeline to collect logs, with the same logs written to both the console and files. Developers testing locally usually look only at the console output and ignore the files, and in test and production environments they often have no server access, so duplicated entries in the log files are hard to discover.
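A sketch of the kind of configuration that produces this behaviour (appender and file names are illustrative): the info.log appender uses a LevelFilter with only the level configured, and the error.log appender uses a ThresholdFilter set to WARN:

```xml
<appender name="INFO_FILE" class="ch.qos.logback.core.FileAppender">
    <file>info.log</file>
    <!-- Only the level is set; onMatch/onMismatch keep their defaults (NEUTRAL),
         so this filter effectively lets every event through -->
    <filter class="ch.qos.logback.classic.filter.LevelFilter">
        <level>INFO</level>
    </filter>
    <encoder>
        <pattern>%d{HH:mm:ss.SSS} %-5level %logger{40} - %msg%n</pattern>
    </encoder>
</appender>

<appender name="ERROR_FILE" class="ch.qos.logback.core.FileAppender">
    <file>error.log</file>
    <!-- ThresholdFilter: WARN and above pass, anything below is denied -->
    <filter class="ch.qos.logback.classic.filter.ThresholdFilter">
        <level>WARN</level>
    </filter>
    <encoder>
        <pattern>%d{HH:mm:ss.SSS} %-5level %logger{40} - %msg%n</pattern>
    </encoder>
</appender>
```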
How ThresholdFilter works:

- When the log level ≥ the configured level, it returns NEUTRAL and the next filter on the filter chain is consulted.
- Otherwise it returns DENY and the event is rejected outright.

So setting ThresholdFilter to WARN lets WARN and ERROR level logs be recorded.
LevelFilter compares the event's level with the configured level and then acts on the result:

- If they match, the action defined by onMatch is applied; by default the event is handed to the next filter (the default defined in the AbstractMatcherFilter base class is NEUTRAL).
- Otherwise the action defined by onMismatch is applied; by default the event is likewise handed to the next filter.

Unlike ThresholdFilter, LevelFilter does not really filter anything if only the level is configured; onMatch and onMismatch must be set explicitly.
With onMatch=ACCEPT and onMismatch=DENY configured, the info.log file contains only INFO level logs and the duplicates disappear.
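A corrected sketch of the info.log appender, with onMatch and onMismatch set explicitly:

```xml
<appender name="INFO_FILE" class="ch.qos.logback.core.FileAppender">
    <file>info.log</file>
    <filter class="ch.qos.logback.classic.filter.LevelFilter">
        <level>INFO</level>
        <onMatch>ACCEPT</onMatch>     <!-- exactly INFO: write it -->
        <onMismatch>DENY</onMismatch> <!-- anything else: reject -->
    </filter>
    <encoder>
        <pattern>%d{HH:mm:ss.SSS} %-5level %logger{40} - %msg%n</pattern>
    </encoder>
</appender>
```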
CONSOLE is a ConsoleAppender used to record only the log lines that carry a time marker. When a large volume of logs is written to a file, the file becomes huge, and if the performance-test result is mixed in with everything else, that one line is hard to find. So an EvaluatorFilter is used to filter logs by marker, and the matching lines are sent separately to the console; the log line that reports the test result is tagged with the time marker.

Markers and EvaluatorFilter are used together to filter logs by marker.
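A sketch of a console appender that only accepts events carrying a "time" marker, using Logback's OnMarkerEvaluator (the marker name follows the text):

```xml
<appender name="CONSOLE" class="ch.qos.logback.core.ConsoleAppender">
    <filter class="ch.qos.logback.core.filter.EvaluatorFilter">
        <evaluator class="ch.qos.logback.classic.boolex.OnMarkerEvaluator">
            <marker>time</marker>
        </evaluator>
        <onMismatch>DENY</onMismatch> <!-- events without the marker never reach the console -->
        <onMatch>ACCEPT</onMatch>
    </filter>
    <encoder>
        <pattern>%d{HH:mm:ss.SSS} %-5level %logger{40} - %msg%n</pattern>
    </encoder>
</appender>
```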
- Test code: record a specified number of large log entries, each containing 1 MB of simulated data, and finally record one time-marked line with the method's execution time.
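A minimal sketch of such a test method (sizes and names are illustrative; String.repeat requires Java 11+):

```java
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.slf4j.Marker;
import org.slf4j.MarkerFactory;

public class PerformanceLogDemo {
    private static final Logger log = LoggerFactory.getLogger(PerformanceLogDemo.class);

    public void performance(int count) {
        long begin = System.currentTimeMillis();
        // Roughly 1 MB of simulated payload per log line
        String payload = "a".repeat(1_000_000);
        for (int i = 0; i < count; i++) {
            log.info("{}", payload);
        }
        // Only this line carries the "time" marker, so only it reaches the marker-filtered console appender
        Marker timeMarker = MarkerFactory.getMarker("time");
        log.info(timeMarker, "took {} ms", System.currentTimeMillis() - begin);
    }
}
```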
When appending logs this way, each event is written directly into the OutputStream, i.e. logging is synchronous. That is why writing a large volume of logs takes so long. How can we write large amounts of logs without significantly slowing down the business logic and hurting throughput?
## AsyncAppender

Using Logback's AsyncAppender enables asynchronous logging. AsyncAppender works like the decorator pattern: it adds new behaviour without changing the wrapped class's basic functionality, so attaching an AsyncAppender to any other Appender turns that Appender asynchronous.

Define an asynchronous Appender ASYNCFILE that wraps the previous synchronous FileAppender, and logging to the file becomes asynchronous.
- Recording 1,000 log entries took 537 ms, and recording 10,000 took 1,019 ms.

Is asynchronous logging really that fast? Not quite, because not all of the logs actually got written.
## Pitfalls of AsyncAppender

- Asynchronous logging can exhaust memory.
- Asynchronous logging can lose log entries.
- Asynchronous logging can still block.
Case: simulating a slow logging scenario.

First, define MySlowAppender, a custom appender extending ConsoleAppender that writes to the console but sleeps for one second on every write.

Then, in the configuration file, wrap MySlowAppender in an AsyncAppender so that logging through it becomes asynchronous. Sketches of both follow.
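A sketch of such an appender and its configuration (the package name follows the one used earlier in the article; the sleep duration follows the text):

```java
import ch.qos.logback.classic.spi.ILoggingEvent;
import ch.qos.logback.core.ConsoleAppender;
import java.util.concurrent.TimeUnit;

public class MySlowAppender extends ConsoleAppender<ILoggingEvent> {
    @Override
    protected void subAppend(ILoggingEvent event) {
        try {
            // Simulate a slow downstream (disk, network, ...) before actually writing
            TimeUnit.SECONDS.sleep(1);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        super.subAppend(event);
    }
}
```

```xml
<appender name="MYSLOW" class="org.javaedge.time.commonmistakes.logging.MySlowAppender">
    <encoder>
        <pattern>%d{HH:mm:ss.SSS} %-5level %logger{40} - %msg%n</pattern>
    </encoder>
</appender>

<appender name="ASYNC" class="ch.qos.logback.classic.AsyncAppender">
    <appender-ref ref="MYSLOW"/>
</appender>

<root level="INFO">
    <appender-ref ref="ASYNC"/>
</root>
```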
測(cè)試代碼
耗時(shí)很短但出現(xiàn)日志丟失:要記錄1000條日志,最終控制臺(tái)只能搜索到215條日志,而且日志行號(hào)變問(wèn)號(hào)。
**Root-cause analysis**: AsyncAppender exposes several configuration parameters, and here they were not used correctly.

**Source-code walkthrough** (the relevant code is shown below):

- includeCallerData: defaults to false, so caller details such as method name and line number are not collected, which is why the line numbers appear as question marks.
- queueSize: the size of the blocking queue, an ArrayBlockingQueue with a default capacity of 256, so at most 256 log events are buffered in memory.
- discardingThreshold: the threshold for discarding logs, meant to prevent blocking once the queue fills up. By default, when the remaining capacity drops below 20% of the queue length, TRACE, DEBUG, and INFO level logs are discarded.
- neverBlock: controls whether, when the queue is full, new events are simply dropped rather than blocking; defaults to false. When the queue is full, offer does not block while put does; when neverBlock is true, offer is used.
```java
public class AsyncAppender extends AsyncAppenderBase<ILoggingEvent> {
    // Whether to collect caller data (method name, line number, ...)
    boolean includeCallerData = false;

    protected boolean isDiscardable(ILoggingEvent event) {
        Level level = event.getLevel();
        // Events at INFO level or below are discardable
        return level.toInt() <= Level.INFO_INT;
    }

    protected void preprocess(ILoggingEvent eventObject) {
        eventObject.prepareForDeferredProcessing();
        if (includeCallerData)
            eventObject.getCallerData();
    }
}

public class AsyncAppenderBase<E> extends UnsynchronizedAppenderBase<E> implements AppenderAttachable<E> {
    // Blocking queue: the core of asynchronous logging
    BlockingQueue<E> blockingQueue;
    // Default queue size
    public static final int DEFAULT_QUEUE_SIZE = 256;
    int queueSize = DEFAULT_QUEUE_SIZE;
    static final int UNDEFINED = -1;
    int discardingThreshold = UNDEFINED;
    // When the queue is full: whether to drop new events instead of blocking
    boolean neverBlock = false;

    @Override
    public void start() {
        ...
        blockingQueue = new ArrayBlockingQueue<E>(queueSize);
        if (discardingThreshold == UNDEFINED)
            // Default discard threshold: remaining capacity below 20% of the queue size,
            // see isQueueBelowDiscardingThreshold
            discardingThreshold = queueSize / 5;
        ...
    }

    @Override
    protected void append(E eventObject) {
        if (isQueueBelowDiscardingThreshold() && isDiscardable(eventObject)) { // can this event be dropped?
            return;
        }
        preprocess(eventObject);
        put(eventObject);
    }

    private boolean isQueueBelowDiscardingThreshold() {
        return (blockingQueue.remainingCapacity() < discardingThreshold);
    }

    private void put(E eventObject) {
        if (neverBlock) {
            // neverBlock decides between the non-blocking offer and the blocking put
            blockingQueue.offer(eventObject);
        } else {
            putUninterruptibly(eventObject);
        }
    }

    // Add an event to the queue, blocking until there is room
    private void putUninterruptibly(E eventObject) {
        boolean interrupted = false;
        try {
            while (true) {
                try {
                    blockingQueue.put(eventObject);
                    break;
                } catch (InterruptedException e) {
                    interrupted = true;
                }
            }
        } finally {
            if (interrupted) {
                Thread.currentThread().interrupt();
            }
        }
    }
}
```
默認(rèn)隊(duì)列大小256,達(dá)到80%后開(kāi)始丟棄<=INFO級(jí)日志后,即可理解日志中為什么只有兩百多條INFO日志了。
queueSize 過(guò)大
可能導(dǎo)致OOM
queueSize 較小
默認(rèn)值256就已經(jīng)算很小了,且discardingThreshold設(shè)置為大于0(或?yàn)槟J(rèn)值),隊(duì)列剩余容量少于discardingThreshold的配置就會(huì)丟棄<=INFO日志。這里的坑點(diǎn)有兩個(gè):
- 因?yàn)?strong>discardingThreshold,所以設(shè)置queueSize時(shí)容易踩坑。
比如本案例最大日志并發(fā)1000,即便置queueSize為1000,同樣會(huì)導(dǎo)致日志丟失 - discardingThreshold參數(shù)容易有歧義,它
不是百分比,而是日志條數(shù)
。對(duì)于總?cè)萘?0000隊(duì)列,若希望隊(duì)列剩余容量少于1000時(shí)丟棄,需配置為1000
neverBlock 默認(rèn)false
意味總可能會(huì)出現(xiàn)阻塞。
- 若discardingThreshold = 0,那么隊(duì)列滿時(shí)再有日志寫入就會(huì)阻塞
- 若discardingThreshold != 0,也只丟棄≤INFO級(jí)日志,出現(xiàn)大量錯(cuò)誤日志時(shí),還是會(huì)阻塞
queueSize, discardingThreshold, and neverBlock are tightly coupled and must be set according to business needs (a configuration sketch follows this list):

- If absolute performance comes first, set neverBlock = true so logging never blocks.
- If never losing data comes first, set discardingThreshold = 0 so that even logs at INFO level and below are kept. It is also best to enlarge queueSize, since the default is clearly too small and blocks far too easily.
- As a compromise, drop unimportant logs: set a larger queueSize and a reasonable discardingThreshold.
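For illustration, a sketch of the "never lose data" choice (the values are illustrative, not recommendations):

```xml
<appender name="ASYNCFILE" class="ch.qos.logback.classic.AsyncAppender">
    <appender-ref ref="FILE"/>
    <!-- Larger in-memory buffer: reduces the chance of the caller blocking -->
    <queueSize>8192</queueSize>
    <!-- 0 = never discard, not even TRACE/DEBUG/INFO events -->
    <discardingThreshold>0</discardingThreshold>
    <!-- false = block the caller when the queue is full instead of dropping events -->
    <neverBlock>false</neverBlock>
    <!-- true = keep caller data so line numbers do not show up as '?' -->
    <includeCallerData>true</includeCallerData>
</appender>
```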
Those are the two most common pitfalls in log configuration. Now let's look at misunderstandings in log recording itself.
## Does using log placeholders eliminate the need to check the log level?

The claim: SLF4J's {} placeholder syntax only resolves the actual arguments when the log is really written, thereby solving the performance problem of building log data. Is that correct?
- Verification code: a slowString method that takes 1 second to return its result.

If a DEBUG log is recorded with slowString as its argument while the logger is configured to record only logs at INFO level and above, will the program still take 1 second?
Three approaches to test (sketched below):

- Concatenate strings to record slowString.
- Use the {} placeholder to record slowString.
- First check whether the DEBUG level is enabled.
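A sketch of the three approaches (slowString stands in for any expensive argument):

```java
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import java.util.concurrent.TimeUnit;

public class PlaceholderDemo {
    private static final Logger log = LoggerFactory.getLogger(PlaceholderDemo.class);

    // Simulates an expensive parameter: takes about 1 second to produce its value
    private static String slowString(String s) {
        try {
            TimeUnit.SECONDS.sleep(1);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        return "slow:" + s;
    }

    public static void main(String[] args) {
        // 1. String concatenation: slowString always runs, even if DEBUG is disabled
        log.debug("debug1:" + slowString("from debug1"));

        // 2. {} placeholder: slowString still runs to produce the argument
        log.debug("debug2:{}", slowString("from debug2"));

        // 3. Explicit level check: slowString is skipped when DEBUG is disabled
        if (log.isDebugEnabled()) {
            log.debug("debug3:{}", slowString("from debug3"));
        }
    }
}
```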
The first two approaches both call slowString, so each takes 1 second. The second approach uses the {} placeholder; although it lets you pass an Object without explicitly concatenating Strings, what it defers (and skips when the log is not written) is only the parameter's toString() call and the String concatenation, not the computation that produces the parameter. In this case, unless the log level is checked in advance, slowString is always invoked.

Therefore, the {} placeholder cannot solve the performance problem of building log data by deferring parameter evaluation.
Besides checking the log level in advance, you can also defer parameter evaluation with lambda expressions. SLF4J's API does not yet support lambdas, however, so you need to use the Log4j2 logging API: replace Lombok's @Slf4j annotation with @Log4j2 to get logging methods that accept lambda (Supplier) parameters. Calling debug with the Supplier<?> varargs signature defers the parameters until the log actually needs to be written, so the debug4 method never calls slowString. Only the logging API is swapped to Log4j2; the actual logging still goes through Logback, which is the benefit of SLF4J-style adaptation. A sketch follows.
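A hedged sketch, assuming Lombok and the log4j-api dependency are available:

```java
import lombok.extern.log4j.Log4j2;

@Log4j2
public class LambdaDemo {

    private String slowString(String s) {
        // ... expensive computation, about 1 second ...
        return "slow:" + s;
    }

    public void debug4() {
        // The Supplier is only evaluated if DEBUG is enabled,
        // so slowString is never called when the level is INFO or above.
        log.debug("debug4:{}", () -> slowString("from debug4"));
    }
}
```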
## Summary
- SLF4J unifies the Java logging frameworks. When using SLF4J, it is important to understand its bridging and binding modules. If SLF4J error messages appear at program startup, it is likely a dependency or configuration problem; Maven's dependency:tree command helps sort out the conflicting dependencies.
- Asynchronous logging trades space for time to solve a performance problem. But space is finite; once the buffer is full, you must decide whether to block and wait or to discard logs. If you would rather not lose important logs, choose blocking; if you would rather the program never block on logging, you have to discard some logs.
- The parameterized logging methods provided by logging frameworks cannot fully replace log-level checks. If your log volume is large and building the log parameters is expensive, check the log level first so you do not pay for parameters that are never logged.
The above is the detailed content of Understand Java log levels, duplicate recording, and log loss issues. For more information, please follow other related articles on the PHP Chinese website!

Hot AI Tools

Undress AI Tool
Undress images for free

Undresser.AI Undress
AI-powered app for creating realistic nude photos

AI Clothes Remover
Online AI tool for removing clothes from photos.

Clothoff.io
AI clothes remover

Video Face Swap
Swap faces in any video effortlessly with our completely free AI face swap tool!

Hot Article

Hot Tools

Notepad++7.3.1
Easy-to-use and free code editor

SublimeText3 Chinese version
Chinese version, very easy to use

Zend Studio 13.0.1
Powerful PHP integrated development environment

Dreamweaver CS6
Visual web development tools

SublimeText3 Mac version
God-level code editing software (SublimeText3)

The settings.json file is located in the user-level or workspace-level path and is used to customize VSCode settings. 1. User-level path: Windows is C:\Users\\AppData\Roaming\Code\User\settings.json, macOS is /Users//Library/ApplicationSupport/Code/User/settings.json, Linux is /home//.config/Code/User/settings.json; 2. Workspace-level path: .vscode/settings in the project root directory

To correctly handle JDBC transactions, you must first turn off the automatic commit mode, then perform multiple operations, and finally commit or rollback according to the results; 1. Call conn.setAutoCommit(false) to start the transaction; 2. Execute multiple SQL operations, such as INSERT and UPDATE; 3. Call conn.commit() if all operations are successful, and call conn.rollback() if an exception occurs to ensure data consistency; at the same time, try-with-resources should be used to manage resources, properly handle exceptions and close connections to avoid connection leakage; in addition, it is recommended to use connection pools and set save points to achieve partial rollback, and keep transactions as short as possible to improve performance.

DependencyInjection(DI)isadesignpatternwhereobjectsreceivedependenciesexternally,promotingloosecouplingandeasiertestingthroughconstructor,setter,orfieldinjection.2.SpringFrameworkusesannotationslike@Component,@Service,and@AutowiredwithJava-basedconfi

itertools.combinations is used to generate all non-repetitive combinations (order irrelevant) that selects a specified number of elements from the iterable object. Its usage includes: 1. Select 2 element combinations from the list, such as ('A','B'), ('A','C'), etc., to avoid repeated order; 2. Take 3 character combinations of strings, such as "abc" and "abd", which are suitable for subsequence generation; 3. Find the combinations where the sum of two numbers is equal to the target value, such as 1 5=6, simplify the double loop logic; the difference between combinations and arrangement lies in whether the order is important, combinations regard AB and BA as the same, while permutations are regarded as different;

fixture is a function used to provide preset environment or data for tests. 1. Use the @pytest.fixture decorator to define fixture; 2. Inject fixture in parameter form in the test function; 3. Execute setup before yield, and then teardown; 4. Control scope through scope parameters, such as function, module, etc.; 5. Place the shared fixture in conftest.py to achieve cross-file sharing, thereby improving the maintainability and reusability of tests.

java.lang.OutOfMemoryError: Javaheapspace indicates insufficient heap memory, and needs to check the processing of large objects, memory leaks and heap settings, and locate and optimize the code through the heap dump analysis tool; 2. Metaspace errors are common in dynamic class generation or hot deployment due to excessive class metadata, and MaxMetaspaceSize should be restricted and class loading should be optimized; 3. Unabletocreatenewnativethread due to exhausting system thread resources, it is necessary to check the number of threads, use thread pools, and adjust the stack size; 4. GCoverheadlimitexceeded means that GC is frequent but has less recycling, and GC logs should be analyzed and optimized.

Use classes in the java.time package to replace the old Date and Calendar classes; 2. Get the current date and time through LocalDate, LocalDateTime and LocalTime; 3. Create a specific date and time using the of() method; 4. Use the plus/minus method to immutably increase and decrease the time; 5. Use ZonedDateTime and ZoneId to process the time zone; 6. Format and parse date strings through DateTimeFormatter; 7. Use Instant to be compatible with the old date types when necessary; date processing in modern Java should give priority to using java.timeAPI, which provides clear, immutable and linear

TheJVMenablesJava’s"writeonce,runanywhere"capabilitybyexecutingbytecodethroughfourmaincomponents:1.TheClassLoaderSubsystemloads,links,andinitializes.classfilesusingbootstrap,extension,andapplicationclassloaders,ensuringsecureandlazyclassloa
