When the Elasticsearch circuit breaker trips, the JVM heap does not have enough memory to load the data needed by the current query, so ES reports a data too large error and rejects (breaks) the request.
1、 Why the error occurs
It happens when a bulk import carries too much data or a query touches too much data, which is fairly common.
When the cluster status is yellow or red, find out why:
GET _cluster/allocation/explain
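The explain API also accepts a request body for one particular shard; a minimal sketch (the index name collect_data is reused from the cache example below, adjust it to your own):

GET _cluster/health

GET _cluster/allocation/explain
{
  "index": "collect_data",
  "shard": 0,
  "primary": true
}

Called without a body, allocation explain reports on an arbitrary unassigned shard; with the body it explains the named shard.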
2、 Clear the cache periodically
Clearing the cache periodically cannot by itself guarantee the server's availability, but when memory runs short it does improve the availability of the ES service.
Queries may become slower, but that is far better than failing outright, with queries erroring out and writes not going through.
1. How to clear the cache
Clear the cache of the data-collection indexes: POST /collect_data*/_cache/clear
Monitor the fielddata cache: GET /_stats/fielddata?fields=*
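The clear-cache API can also target a single cache type through query parameters; a small sketch against the same collect_data* pattern:

POST /collect_data*/_cache/clear?fielddata=true   (only the fielddata cache)
POST /collect_data*/_cache/clear?query=true       (only the query cache)
POST /collect_data*/_cache/clear?request=true     (only the request cache)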
2. Restart the cluster node
If clearing the cache with _cache/clear does not help, restart the node.
If the data too large exception occurs frequently, consider lowering the indices.fielddata.cache.size setting so that cached fielddata is evicted sooner.
At the same time, increase the memory available to ES and raise the circuit-breaker limit that corresponds to it, e.g. indices.breaker.fielddata.limit; a sketch of both settings follows.
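As a sketch of both settings: indices.fielddata.cache.size is a static node setting (it goes into elasticsearch.yml and takes effect after a restart), while the breaker limit can be changed dynamically; the 20% and 40% values are only illustrative, not recommendations:

In elasticsearch.yml:
indices.fielddata.cache.size: 20%

As a dynamic cluster setting:
PUT _cluster/settings
{
  "persistent": {
    "indices.breaker.fielddata.limit": "40%"
  }
}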
Use monitoring to find the cause of the problem: GET /_cluster/stats?pretty
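The node stats API also reports how often each circuit breaker has tripped, which helps pinpoint the one rejecting requests:

GET /_nodes/stats/breaker

In the response, a non-zero tripped counter under fielddata, request or parent identifies the breaker responsible for the data too large errors.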
3、 Optimize queries
1. Add server memory and expand the ES cluster
When there is too much index data and too little server memory, even clearing the cache regularly with _cache/clear still trips the circuit breaker. At that point the only real options are to add memory and to expand the ES cluster.
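When adding memory, remember that the ES heap size is set in config/jvm.options rather than picked up automatically; a sketch with illustrative sizes (the usual guidance is to give the heap at most half of physical RAM and keep it below about 32 GB so compressed object pointers stay enabled):

-Xms8g
-Xmx8g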
2. Avoid wildcard index names such as index*
Querying with a wildcard index name is similar to SELECT * FROM table.
For example, suppose a new index is created every year or every month: user_2020, user_2019, and so on.
If queries then target user*, performance inevitably suffers: for a complex aggregation, ES has to aggregate over every index whose name starts with user, which takes a lot of memory.
In that case, review how the ES query statements are written and optimize them; see the sketch below.
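A small sketch of the difference, using the example indexes above:

GET /user*/_search                  (fans out to every index starting with user)
GET /user_2020,user_2019/_search    (touches only the indexes actually needed)

Listing the needed indexes explicitly, or at least narrowing the pattern, keeps the aggregation from loading data for indexes the query does not care about.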
More tuning references: https://blog.csdn.net/andy_only/article/details/98172044
3. Configure the JVM to use the G1 garbage collector
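Older ES versions ship with CMS enabled in config/jvm.options; a hedged sketch of switching to G1 (exact flags vary by ES and JDK version, and recent releases already default to G1):

Comment out the CMS flags:
#-XX:+UseConcMarkSweepGC
#-XX:CMSInitiatingOccupancyFraction=75
#-XX:+UseCMSInitiatingOccupancyOnly

Enable G1 instead:
-XX:+UseG1GC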