java.lang.OutOfMemoryError
We are running log4j2-scan.exe v2.7.1 on our Windows servers, using TrueSight Server Automation (TSSA) to deploy and execute the package. The deployment uses a mapped-user elevation to a local-administrator account, and the scanner is executed via a command shell as that user.
On a particular server, we’re seeing this error in the scanner’s output:
Logpresso CVE-2021-44228 Vulnerability Scanner 2.7.1 (2022-01-02) (Time in agent's deploy log:: 01/17/2022 13:50:37)
Scanning drives: C:\, M:\ (without P:, Z:) (Time in agent's deploy log:: 01/17/2022 13:50:38)
Scanned 3191 directories and 27196 files
Found 0 vulnerable files
Found 0 potentially vulnerable files
Found 0 mitigated files
Completed in 10403.38 seconds
Error: Garbage-collected heap size exceeded.
java.lang.OutOfMemoryError: Garbage-collected heap size exceeded.
"C:\temp\stage\b197902652953cc29ef9df4465ff0232\bldeploycmd-2.bat": Item 'Execute log4j2-scan.exe' returned exit code -1 (Time in agent's deploy log:: 01/17/2022 16:44:03)
"C:\temp\stage\b197902652953cc29ef9df4465ff0232\bldeploycmd-2.bat": Command returned non-zero exit code: -1 (Time in agent's deploy log:: 01/17/2022 16:44:03)
The scanner is executed with the following command string. Note that %RPTFILE% is defined before the scanner runs.
log4j2-scan.exe --silent --scan-zip --scan-log4j1 --all-drives --report-path "%RPTFILE%" --report-dir "C:\Temp" --exclude "P:" --exclude "Z:" --exclude-fs afs,cifs,autofs,tmpfs,devtmpfs,fuse.sshfs,iso9660 2>&1
This scan takes an inordinately long time because it gets stuck trying to scan more than 6 million machine-key files belonging to an application called PortalProtect by TrendMicro. The file names are lengthy GUID-style names, and each file is roughly 1 KB in size. The files have no specific type other than the generic “System File”, and the names are not patterned in any way that would allow an easy exclusion filter.
When I ran the scan manually without the --silent option and with the --trace and --debug options added, it got as far as this directory on the C: drive, printed a single status-update line after 10 seconds, and then effectively hung.
Edit: The process was steadily consuming all available memory, and I had to kill it before it exhausted the server’s RAM. It appears the scanner needs to periodically flush its in-memory results to the log and then cycle through the next batch of files and directories, especially when there are significant numbers of files to be processed (i.e., millions).
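Until such batching exists inside the scanner, a rough external workaround is to run one scanner process per top-level directory so that no single run has to hold state for millions of files at once. This is only a sketch: it assumes the scanner accepts a target directory as a positional argument (as its other modes do), and the paths and per-directory report names below are illustrative, not part of the original command.

```shell
@echo off
rem Sketch of a workaround: scan each top-level directory of C:\ in a
rem separate scanner process, so each process's heap stays bounded.
rem Paths and report file names here are examples, not the original setup.
for /D %%D in (C:\*) do (
    log4j2-scan.exe --silent --scan-zip --scan-log4j1 --report-path "C:\Temp\report_%%~nD.txt" "%%D" 2>&1
)
```

Splitting the scan this way trades one long-running process for many short ones; a directory that still fails (such as the PortalProtect machine-key folder) can then be isolated and handled separately.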
Issue Analytics
- Created 2 years ago
- Comments: 10 (5 by maintainers)
Top GitHub Comments
@greg-michael I will. Thank you for your understanding 😄
@greg-michael Wow, that’s good news. The Xmx switch is supported by the JVM and by SubstrateVM; it simply sets the maximum memory limit for the Java process. If the scanner cannot allocate memory beyond the specified limit, it fails with an OOM as usual. Therefore, if the scanner completed the scan without any error under the Xmx1000M option, it means the scanner successfully scanned all files.
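For reference, the memory limit described above is passed slightly differently depending on the distribution. The sketch below assumes the standard file names; the JAR name and target drive are illustrative:

```shell
rem JAR version: -Xmx is a standard JVM flag passed to java itself.
rem (JAR file name is an assumption.)
java -Xmx1000m -jar log4j2-scan-2.7.1.jar --scan-zip C:\

rem Native binary: GraalVM/SubstrateVM native images accept -Xmx as a
rem run-time option on the executable's own command line.
log4j2-scan.exe -Xmx1000m --scan-zip C:\
```

In both cases the process fails fast with an OutOfMemoryError if the limit is exceeded, rather than silently skipping files, which is why a clean completion under a fixed limit indicates a full scan.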
By the way, which one did you use, the JAR version or the native binary?