ELK Log System Error: [circuit_breaking_exception] Data too large [How to Solve]
The error information is as follows:
{"statusCode":429,"error":"Too Many Requests","message":"[circuit_breaking_exception] [parent] Data too large, data for [indices:data/write/bulk[s]] would be [2087165840/1.9gb], which is larger than the limit of [2040109465/1.8gb], real usage: [2087165392/1.9gb], new bytes reserved: [448/448b], usages [request=0/0b, fielddata=182738/178.4kb, in_flight_requests=448/448b, model_inference=0/0b, accounting=89449992/85.3mb], with { bytes_wanted=2087165840 & bytes_limit=2040109465 & durability=\"PERMANENT\" }"}
A quick Baidu search suggests that the heap memory given to Elasticsearch is not enough, and that memory is not being reclaimed in time: too much data is held on the heap, so the circuit breaker trips to protect the node. One fix is to adjust the fielddata memory limit, which defaults to 60% of the heap.
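Before raising any limits, it is worth confirming how much heap is actually in use and which breaker is tripping. A minimal check using standard Elasticsearch APIs (run from Kibana Dev Tools or via curl against the cluster):

```
GET /_cat/nodes?v&h=name,heap.percent,heap.max

GET /_nodes/stats/breaker
```

The second call reports each breaker's estimated size, limit, and tripped count; a climbing tripped count on the parent breaker matches the error above.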
Solution 1: modify the configuration file
Edit the ES configuration file and add the following settings:
[root@sjyt-node-1 ~]# vim /etc/elasticsearch/elasticsearch.yml
# Parent (total) breaker: the combined limit for all child breakers. Raising it
# from the 70% default to 80% gives bulk requests more headroom, at the cost of
# a higher OOM risk to the cluster.
indices.breaker.total.limit: 80%
# Cap the fielddata cache; the least recently used (LRU) entries are evicted
# to make room for new data.
indices.fielddata.cache.size: 10%
# Fielddata breaker: limits how much of the heap fielddata may use (default 60%).
indices.breaker.fielddata.limit: 60%
# Request breaker: estimates the size of per-request structures, such as
# aggregation buckets (default limit 40% of the heap).
indices.breaker.request.limit: 40%
# Make the parent breaker use estimated reserved memory instead of real heap
# usage. This setting must be added, otherwise the real-memory check keeps tripping.
indices.breaker.total.use_real_memory: false
The same settings without the comments:
indices.breaker.total.limit: 80%
indices.fielddata.cache.size: 10%
indices.breaker.fielddata.limit: 60%
indices.breaker.request.limit: 40%
indices.breaker.total.use_real_memory: false
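These are static settings, so each node must be restarted before they take effect. A sketch for a systemd-based install (the service name `elasticsearch` is assumed):

```
[root@sjyt-node-1 ~]# systemctl restart elasticsearch
[root@sjyt-node-1 ~]# curl -s 'localhost:9200/_cluster/settings?include_defaults=true&flat_settings=true' | grep breaker
```

The second command confirms which breaker limits the node actually loaded.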
Solution 2: dynamic setting
Reference link: https://blog.csdn.net/sdlyjzh/article/details/48035723
PUT /_cluster/settings
{
  "persistent" : {
    "indices.breaker.fielddata.limit" : "40%"
  }
}
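If the node is already under memory pressure, lowering the limit alone does not free what fielddata has already consumed. The fielddata cache can be dropped immediately with the standard clear-cache API:

```
POST /_cache/clear?fielddata=true
```

After that, LRU eviction keeps the cache under the new limit as data is reloaded.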
When fielddata memory usage exceeds the breaker's configured limit, the "Data too large" error shown at the beginning is thrown.
Read More:
- [Solved] ES Query SIZE too large Error: ENTITY CONTENT IS TOO LONG [105539255] FOR THE CONFIGURED BUFFER LIMIT [104857600]
- Grafana Error: 414 Request-URI Too Large [How to Solve]
- [How to Solve] java.lang.IllegalArgumentException: Request header is too large
- [Solved] nodejs Error: request entity too large
- How to Solve Nginx 413 Error (request entity too large)
- [Solved] Browser Access Error: Request Header or Cookie too large
- WordPress update failed 429 too many requests (How to Fix)
- [Solved] Nginx Error: 400 Request Header Or Cookie Too Large
- Error in plot.new() : figure margins too large
- .Net Core 5.0 Upload File limit via Swagger Api report error: error: request entity too large [Three Methods]
- [Solved] sqoop Error: jSQLException in nextKeyValue Caused by: ORA-24920:column size too large for client
- [Solved] Laravel Create Data Table Error: Syntax error or access violation: 1071 Specified key was too long
- [Solved] waterdrop Import hive to clickhouse Error: Too many partitions for single INSERT block (more than 100).
- Mongodb Crash Error: Too many open files [How to Solve]
- log4j Error: Please initialize the log4j system properly [How to Solve]
- [Solved] Doris Error: too many filtered rows
- [Solved] Intellij IDEA Run Error: Command line is too long
- How to Solve Cocos creator label text is too many error
- error RC2247 : SYMBOL name too long [How to Solve]
- [Solved] Intellij IDEA Error: Command line is too long