LEAK: ByteBuf.release() was not called before it's garbage-collected.
"LEAK: ByteBuf.release() was not called before it's garbage-collected." is shown on every run with the elasticsearch output. Even if I change the output type to 'null', the result is the same. After about 5 million records have been inserted, Embulk stops and is killed (out of memory).
Issue Type: Bug Report
- OS version : CentOS Linux release 7.7.1908 (Core)
- Java version : openjdk version "1.8.0_232"
- Embulk version : Embulk v0.9.22
- Your Embulk configuration (YAML) : config.yml
in:
  type: file
  path_prefix: /data/201912
  decoders:
  - {type: gzip}
  parser:
    default_timezone: 'Asia/Seoul'
    charset: UTF-8
    newline: CRLF
    type: csv
    delimiter: ','
    quote: '"'
    escape: '"'
    trim_if_not_quoted: false
    skip_header_lines: 1
    allow_extra_columns: false
    allow_optional_columns: false
    columns:
    - {name: createdat, type: timestamp, format: '%Y-%m-%d %H:%M:%S.%N %z'}
    - {name: url, type: string}
    - {name: referer, type: string}
    - {name: useragent, type: string}
out:
  type: elasticsearch
  index: t_201912
  index_type: _doc
  bulk_actions: 100
  nodes:
  - {host: 1.1.1.1, port: 9200}
- Plugin versions : embulk-input-postgresql (0.10.1), embulk-output-command (0.1.4), embulk-output-elasticsearch (0.4.7)
- Commands executed : $ embulk -J-Dio.netty.leakDetection.level=advanced run config.yml
- Log :
2020-01-07 10:51:56.343 +0900 [ERROR] (0025:task-0002): LEAK: ByteBuf.release() was not called before it's garbage-collected. See http://netty.io/wiki/reference-counted-objects.html for more information.
Recent access records: 0
Created at:
    io.netty.util.ResourceLeakDetector.track(ResourceLeakDetector.java:229)
    io.netty.buffer.PooledByteBufAllocator.newHeapBuffer(PooledByteBufAllocator.java:286)
    io.netty.buffer.AbstractByteBufAllocator.heapBuffer(AbstractByteBufAllocator.java:158)
    io.netty.buffer.AbstractByteBufAllocator.heapBuffer(AbstractByteBufAllocator.java:149)
    io.netty.buffer.AbstractByteBufAllocator.buffer(AbstractByteBufAllocator.java:107)
    org.embulk.deps.buffer.PooledBufferAllocatorImpl.allocate(PooledBufferAllocatorImpl.java:26)
    org.embulk.spi.PageBuilder.newBuffer(PageBuilder.java:46)
    org.embulk.spi.PageBuilder.flush(PageBuilder.java:221)
    org.embulk.spi.PageBuilder.addRecord(PageBuilder.java:198)
    org.embulk.standards.CsvParserPlugin.run(CsvParserPlugin.java:368)
    org.embulk.spi.FileInputRunner.run(FileInputRunner.java:140)
    org.embulk.spi.util.Executors.process(Executors.java:62)
    org.embulk.spi.util.Executors.process(Executors.java:38)
    org.embulk.exec.LocalExecutorPlugin$DirectExecutor$1.call(LocalExecutorPlugin.java:170)
    org.embulk.exec.LocalExecutorPlugin$DirectExecutor$1.call(LocalExecutorPlugin.java:167)
    java.util.concurrent.FutureTask.run(FutureTask.java:266)
    java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    java.lang.Thread.run(Thread.java:748)
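For context, the stack trace shows Embulk's PageBuilder allocating page buffers from Netty's pooled allocator, so the warning means some allocated buffer was never released before being garbage-collected. The following is a minimal, Netty-only sketch of the reference-counting contract the detector enforces; the class name and buffer contents are made up for illustration, and this is not Embulk's actual code path.

```java
import java.nio.charset.StandardCharsets;

import io.netty.buffer.ByteBuf;
import io.netty.buffer.ByteBufAllocator;
import io.netty.util.ResourceLeakDetector;

public class ByteBufReleaseSketch {
    public static void main(String[] args) {
        // Programmatic equivalent of -J-Dio.netty.leakDetection.level=advanced;
        // ADVANCED samples allocations, PARANOID tracks every one of them.
        ResourceLeakDetector.setLevel(ResourceLeakDetector.Level.ADVANCED);

        ByteBuf buf = ByteBufAllocator.DEFAULT.heapBuffer(256);
        try {
            buf.writeBytes("sample record".getBytes(StandardCharsets.UTF_8));
            // ... hand the buffer to whatever consumes it ...
        } finally {
            // If this release() is skipped, the buffer is eventually garbage-collected
            // without being returned to the pool, and the detector prints the warning above.
            buf.release();
        }
    }
}
```

In this report the allocation happens inside Embulk and the output plugin, so the fix has to come from the plugin side rather than user code, but the sketch shows what the detector expects of whichever component owns the buffer.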
Top GitHub Comments
@hiroyuki-sato
Hi,
Thank you for helping me on Twitter: https://twitter.com/csk_pos/status/1418400742801481729
The "Buffer detected double release()" error has been solved by increasing the storage capacity. I have pasted the error logs and how it was solved in https://github.com/embulk/embulk-output-bigquery/issues/138
Thank you so much!
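For reference, a "double release" is the opposite failure mode of the leak above: the reference count is decremented more times than the buffer was retained. This is a minimal, Netty-level sketch of what a second release() looks like; it is only illustrative and is not the Embulk code that printed the message quoted in the comment.

```java
import io.netty.buffer.ByteBuf;
import io.netty.buffer.ByteBufAllocator;
import io.netty.util.IllegalReferenceCountException;

public class DoubleReleaseSketch {
    public static void main(String[] args) {
        ByteBuf buf = ByteBufAllocator.DEFAULT.heapBuffer(16);
        System.out.println(buf.refCnt()); // 1: a freshly allocated buffer starts with refCnt == 1
        buf.release();                    // refCnt drops to 0 and the buffer goes back to the pool
        try {
            buf.release();                // second release on a count that is already 0
        } catch (IllegalReferenceCountException e) {
            System.out.println("double release detected: " + e.getMessage());
        }
    }
}
```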
Hello, @case-k-git
Could you try
This is a long-standing issue, but we can't reproduce it in a minimal environment.