MAPREDUCE-7453. Revert HADOOP-18649. #6102
Conversation
" 0" + | ||
" 1><LOG_DIR>/stdout" + | ||
" 2><LOG_DIR>/stderr ]", app.launchCmdList.get(0)); | ||
"[" + MRApps.crossPlatformify("JAVA_HOME") + "/bin/java" + |
avoid space
I didn't want to change this, but the checkstyle run reports a problem,
see https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6102/5/artifact/out/results-checkstyle-root.txt:
./hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/test/java/org/apache/hadoop/mapreduce/v2/app/job/impl/TestMapReduceChildJVM.java:67: " -Dyarn.app.container.log.filesize=10240" +: '" -Dyarn.app.container.log.filesize=10240"' has incorrect indentation level 6, expected level should be 8. [Indentation]
./hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/test/java/org/apache/hadoop/mapreduce/v2/app/job/impl/TestMapReduceChildJVM.java:172: " -Dyarn.app.container.log.filesize=10240" +: '" -Dyarn.app.container.log.filesize=10240"' has incorrect indentation level 6, expected level should be 8. [Indentation]
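For reference, a tiny self-contained sketch (hypothetical code, not the actual TestMapReduceChildJVM test) of the continuation indentation checkstyle is asking for here, with the wrapped operands of the concatenated command string at indentation level 8:

```java
public class IndentationSketch {
  public static void main(String[] args) {
    // Checkstyle's Indentation rule wants the wrapped operands of the
    // concatenated string at indentation level 8, not 6.
    String expectedCmd =
        "[" + System.getenv("JAVA_HOME") + "/bin/java" +
        " -Dyarn.app.container.log.filesize=10240" +
        " 1><LOG_DIR>/stdout" +
        " 2><LOG_DIR>/stderr ]";
    System.out.println(expectedCmd);
  }
}
```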
" 0" + | ||
" 1><LOG_DIR>/stdout" + | ||
" 2><LOG_DIR>/stderr ]", app.launchCmdList.get(0)); | ||
"[" + MRApps.crossPlatformify("JAVA_HOME") + "/bin/java" + |
avoid space
Thank you for the explanation @zhengchenyu
On the other hand, as a second option: since this is now going to be an incompatible change (a change of default value), and most likely we will not make much progress on the log4j2 migration, you should also feel free to revert cf4a678. Though maintaining such custom logic (maxEvents in CLA) is going to be painful in the future anyway.
Thanks @zhengchenyu, exploring a bit more.
Now, how does this change restore that behaviour? You did set a cap; if I don't want any capping, how will I achieve that? And with this change in, would there still be behavioural differences? The problem and reasoning do make sense to me, though not the fix. If there is no option but this one, then OK, but I am more inclined towards reverting the cause. Just FYI @jojochuang.
Another observation: the day this PR was reverted, some tests which had been broken since eternity broke again. So in some way it fixes those tests, which we should fix anyway.
@ayushtkn Thanks for your reply! log4j.appender.CLA changes from 'org.apache.hadoop.yarn.ContainerLogAppender' to 'org.apache.log4j.RollingFileAppender'. If we use RollingFileAppender, I think there is no way to disable the cap; we can only set large enough values for RollingFileAppender::maxBackupIndex and RollingFileAppender::maxFileSize. See: https://github.com/apache/logging-log4j1/blob/b7e9154128cd4ae1244c877a6fda8f834a0f2247/src/main/java/org/apache/log4j/RollingFileAppender.java#L50
In my opinion, CLA should use FileAppender and CRLA should use RollingFileAppender. FileAppender effectively removes the cap, while CRLA stays limited by maxFileSize and maxBackupIndex. Users can then choose CLA or CRLA as they want.
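A minimal log4j 1.x sketch of that split, assuming the stock FileAppender/RollingFileAppender stand in for CLA/CRLA (the file names and the 10MB/5-backup values are made up for illustration, not taken from container-log4j.properties):

```java
import org.apache.log4j.FileAppender;
import org.apache.log4j.PatternLayout;
import org.apache.log4j.RollingFileAppender;

public class AppenderSplitSketch {
  public static void main(String[] args) throws Exception {
    PatternLayout layout = new PatternLayout("%d{ISO8601} %p [%t] %c: %m%n");

    // "CLA-like": a plain FileAppender, no size cap at all.
    FileAppender uncapped = new FileAppender(layout, "target/syslog", true);

    // "CRLA-like": a RollingFileAppender capped via maxFileSize/maxBackupIndex.
    RollingFileAppender capped =
        new RollingFileAppender(layout, "target/syslog-rolling", true);
    capped.setMaxFileSize("10MB");   // only meaningful when non-zero
    capped.setMaxBackupIndex(5);

    // A job would attach one of the two to its container logger, depending
    // on whether it wants its log size limited.
  }
}
```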
@ayushtkn
This fix was maybe fixing those tests, so when we reverted it, they started failing again. As for the fix, @zhengchenyu, I think we should update the PR to revert the source fix and maintain the old behaviour and compatibility; once the problem is fixed, we can explore anything further around that, rather than changing defaults.
Thanks for your reply! I reproduced the 'testThreadDumpOnTaskTimeout' problem on my laptop. Here we can see the log dir:
We can see this is exactly the problem I describe in this PR: for RollingFileAppender, when RollingFileAppender::maxFileSize is 0 and RollingFileAppender::maxBackupIndex is 1, we always roll the log and only ever see the latest entry. This PR does solve that problem, and I think it is necessary for us to maintain the old behaviour. Because @virajjasani said that "Maintenance of custom appenders for Log4J2 is costly", CLA would be replaced with FileAppender rather than ContainerLogAppender; then totalLogFileSize would become ineffective and we could not limit the container log size, because FileAppender cannot limit the log size.
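A standalone log4j 1.x sketch of that rollover behaviour (the file names and the loop are hypothetical; it only mimics what the container config does once the size property resolves to 0):

```java
import org.apache.log4j.Logger;
import org.apache.log4j.PatternLayout;
import org.apache.log4j.RollingFileAppender;

public class ZeroSizeRolloverSketch {
  public static void main(String[] args) throws Exception {
    // Mimic the effective settings when yarn.app.container.log.filesize
    // resolves to its default of 0: MaxFileSize=0, MaxBackupIndex=1.
    RollingFileAppender appender = new RollingFileAppender(
        new PatternLayout("%d{ISO8601} %p %c: %m%n"), "target/syslog", true);
    appender.setMaximumFileSize(0);
    appender.setMaxBackupIndex(1);

    Logger logger = Logger.getRootLogger();
    logger.addAppender(appender);

    for (int i = 0; i < 100; i++) {
      logger.info("event " + i);
    }

    // Every append exceeds the 0-byte cap, so the appender rolls after each
    // event: "target/syslog" ends up empty and "target/syslog.1" holds only
    // the very last message, which is the symptom described above.
  }
}
```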
It has been there and maintained for a long time, and those changes have been breaking the existing tests for a while as well, I believe. So let's revert that and fix the existing problems; if it is required, folks will figure out a way to redo it while maintaining compat. Changing defaults is an incompatible change, and we have a way forward without breaking compat, so it is a no from my side to change the defaults.
Thanks for your reply. I think what you said is reasonable; we should try to maintain the old behaviour. There are two appenders for regular container logs: CLA and CRLA. CLA is the regular appender, CRLA is the rolling appender. Let's analyze the original behavior before HADOOP-18649.
Then let's analyze the behavior after HADOOP-18649.
For CRLA, there is no change in behavior (see line 635 in 5f47f09).
For CLA, I would replace it with org.apache.log4j.FileAppender. There would then be no way to limit the size for FileAppender, but I think there is no need to limit the size for CLA; if we want to limit the size, we can use CRLA.
If I got you right, we are sorted with CRLA.
Was this possible before that Hadoop patch? If yes, maybe we can leave the CRLA stuff as is and revert the CLA change to get back the ContainerLogAppender. If there was a way to limit the size previously, we should allow that now as well. I don't find that Hadoop patch very relevant either, if we have to compromise on behavioural aspects. BTW, nice summary :-) I can directly revert that Hadoop patch, but it would be better to do it as part of this PR and get the tests run. Let me know if you are good with it.
We can get back the ContainerLogAppender for CLA, but how about CRLA? If we have to keep the old behavior, I think we can just revert HADOOP-18649.
Yeah, then we should just revert… Can you update this PR with the revert commit? We can get a test run and merge this.
@ayushtkn OK, I have changed the title to 'Revert HADOOP-18649' and reverted it. @virajjasani How about reverting cf4a678? I found that RollingFileAppender and FileAppender have changed in log4j2. Will this revert have any impact on upgrading to log4j2?
LGTM
Description of PR
JIRA: MAPREDUCE-7453. Container logs are missing when yarn.app.container.log.filesize is set to default value 0.
Since HADOOP-18649, in container-log4j.properties, log4j.appender.{APPENDER}.MaxFileSize is set to ${yarn.app.container.log.filesize}, but yarn.app.container.log.filesize is 0 by default. So logs go missing: the log is always rolling and only shows the latest entry.
This is what the running log looks like:
When we read the log from the web UI, syslog is always empty and syslog.1 only shows the latest log.
How was this patch tested?
Tested in a cluster.
For code changes:
Just fix the default value to avoid missing logs.