[Solved] ELK Log System Error: "statusCode":429, "error":"Too Many Requests", "message": Data too large

The error information is as follows:

{"statusCode":429,"error":"Too Many Requests","message":"[circuit_breaking_exception] [parent] Data too large, data for [indices:data/write/bulk[s]] would be [2087165840/1.9gb], which is larger than the limit of [2040109465/1.8gb], real usage: [2087165392/1.9gb], new bytes reserved: [448/448b], usages [request=0/0b, fielddata=182738/178.4kb, in_flight_requests=448/448b, model_inference=0/0b, accounting=89449992/85.3mb], with { bytes_wanted=2087165840 & bytes_limit=2040109465 & durability=\"PERMANENT\" }"}

A search on Baidu suggests the cause: the heap given to ES is too small, and memory is not reclaimed promptly, so accumulated data eventually exhausts it. One remedy is to cap fielddata with its circuit breaker, whose limit here is 60% of the heap.
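The numbers in the error message above line up once you notice that the limit, 2040109465 bytes, is exactly 95% of a 2 GB heap, which is the default parent-breaker limit when `indices.breaker.total.use_real_memory` is true (the 7.x default). A small sketch to decode them (the 2 GB heap size is inferred from the arithmetic, not stated in the log):

```python
GB = 1024 ** 3

bytes_wanted = 2087165840  # estimated usage if the bulk request were allowed
bytes_limit = 2040109465   # parent circuit-breaker limit
new_bytes = 448            # memory reserved for this bulk request itself

def es_fmt(n):
    # Elasticsearch truncates (not rounds) to one decimal place,
    # which is why ~1.9 GB is printed as "1.8gb" in the log.
    return f"{n * 10 // GB / 10}gb"

print("wanted:", es_fmt(bytes_wanted))   # 1.9gb
print("limit: ", es_fmt(bytes_limit))    # 1.8gb
print("limit is 95% of heap:", round(bytes_limit / 0.95 / GB, 2), "GB")  # 2.0 GB
print("request rejected:", bytes_wanted > bytes_limit)  # True
```

So the cluster is running with a 2 GB heap, and even a tiny 448-byte bulk request trips the breaker because the heap is already almost full.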

Solution 1: modify the configuration file

Modify the ES configuration file and add the following configuration:

[root@sjyt-node-1 ~]# vim /etc/elasticsearch/elasticsearch.yml

# Parent breaker: cap the combined memory tracked by all child breakers
# (request, fielddata, in-flight requests, ...) to avoid OOM, which can have
# a significant impact on the cluster; raised here to 80% of the heap.
indices.breaker.total.limit: 80%

# When the fielddata cache exceeds this size, the least recently used (LRU)
# entries are evicted to make room for new data.
indices.fielddata.cache.size: 10%

# Fielddata breaker: the maximum share of the heap that fielddata may use
# (60% is the default here).
indices.breaker.fielddata.limit: 60%

# Request breaker: limits memory for per-request data structures, such as
# aggregation buckets; 40% of the heap is the default.
indices.breaker.request.limit: 40%

# Base the parent breaker on the sum of the child breakers' estimates rather
# than on real JVM memory usage. This configuration must be added.
indices.breaker.total.use_real_memory: false

With the comments removed, the added configuration is:

indices.breaker.total.limit: 80%
indices.fielddata.cache.size: 10%
indices.breaker.fielddata.limit: 60%
indices.breaker.request.limit: 40%
indices.breaker.total.use_real_memory: false
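Settings in elasticsearch.yml are only read at startup, so each node must be restarted for them to take effect. A sketch assuming a systemd-managed install (which matches the /etc/elasticsearch path above; adjust for your service manager):

```shell
# Restart the node so the new elasticsearch.yml is read.
sudo systemctl restart elasticsearch

# Verify the node is back up and the breaker settings were applied
# (assumes ES on localhost:9200 with no authentication).
curl -s 'http://localhost:9200/_nodes/settings?filter_path=**.breaker&pretty'
```

On a multi-node cluster, restart the nodes one at a time so the cluster stays available.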

Solution 2: dynamic setting

Reference link: https://blog.csdn.net/sdlyjzh/article/details/48035723

PUT /_cluster/settings
{
  "persistent": {
    "indices.breaker.fielddata.limit": "40%"
  }
}
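The same request can be issued from the shell with curl (localhost:9200 and an unsecured cluster are assumed; add -u user:pass if security is enabled):

```shell
# Lower the fielddata breaker limit cluster-wide, without a restart.
curl -s -X PUT 'http://localhost:9200/_cluster/settings' \
  -H 'Content-Type: application/json' \
  -d '{"persistent": {"indices.breaker.fielddata.limit": "40%"}}'

# Confirm the setting was stored:
curl -s 'http://localhost:9200/_cluster/settings?pretty'
```

Note that "persistent" settings survive a full cluster restart; use "transient" instead for a change that should be discarded on restart.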

Whenever fielddata usage would exceed the breaker's limit, Elasticsearch rejects the request with the "Data too large" error shown at the beginning.