Tag Archives: elasticsearch

[Windows] elasticsearch.exceptions.RequestError: <unprintable RequestError object>

elasticsearch.exceptions.RequestError: <unprintable RequestError object>

There are several ways to solve this problem. One is to run the following two commands in PyCharm's terminal:
$ pip install django-haystack
$ pip install elasticsearch==2.4.1

Note that the server-side Elasticsearch version should match the client library you install with pip (here 2.4.1).
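
As a quick sanity check, you can ask the server for its version before installing (a minimal sketch; localhost:9200 assumes a default local install):

curl http://localhost:9200    # the "number" field under "version" should match the client you pin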

How to Solve elasticsearch and logstash Install Error

Starting the logstash service produces: Failed to start logstash.service: Unit not found.

[root@localhost ~]# systemctl start logstash
Failed to start logstash.service: Unit not found.

Issue 1: Failed to start logstash.service: Unit not found.
Solution idea:
Generate the logstash.service file

[root@localhost ~]# sudo /usr/share/logstash/bin/system-install /etc/logstash/startup.options systemd

Then check whether the service can now be started normally.
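
For example (assuming systemd; the status call is just a quick confirmation that the unit file was generated):

[root@localhost ~]# systemctl daemon-reload    # pick up the newly generated unit file
[root@localhost ~]# systemctl start logstash
[root@localhost ~]# systemctl status logstash  # should report "active (running)"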

Issue 2:
Running the system-install command reports: Could not find any executable java binary. Please install java in your PATH or set JAVA_HOME.

[root@localhost ~]# sudo /usr/share/logstash/bin/system-install /etc/logstash/startup.options systemd
Could not find any executable java binary. Please install java in your PATH or set JAVA_HOME.

Reason: logstash cannot see the JAVA_HOME variable; the profile needs to be sourced from Logstash's startup scripts.
Solution:

[root@localhost ~]# vi /etc/profile                #Add the specified version of the JDK directory installed on the local machine
export JAVA_HOME=/usr/local/jdk1.8
export CLASSPATH=$JAVA_HOME/lib:$JAVA_HOME/jre/lib
export PATH=$JAVA_HOME/lib:$JAVA_HOME/jre/bin:$PATH:$HOME/bin

[root@localhost ~]# vi /usr/share/logstash/bin/logstash.lib.sh
Append "source /etc/profile" as the last line
[root@localhost ~]# vi /usr/share/logstash/bin/logstash
Append "source /etc/profile" as the last line

Refresh the profile, then check whether the service can now be started normally.
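
A quick way to confirm the variable is now visible (assuming the JDK path configured above):

[root@localhost ~]# source /etc/profile
[root@localhost ~]# echo $JAVA_HOME    # should print /usr/local/jdk1.8
[root@localhost ~]# java -version      # should print the installed JDK version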

Issue 3:
/usr/share/logstash/vendor/jruby/bin/jruby: line 388: /usr/bin/java: No such file or directory
Unable to install system startup script for Logstash.
Reason: the java executable cannot be found at /usr/bin/java.
Solution:

[root@localhost ~]# ln -s /usr/local/jdk1.8/bin/java /usr/bin/java

Then reinstall the package and regenerate the service:

[root@localhost ~]# rpm -e logstash
error: package logstash is not installed
[root@localhost ~]# rpm -ivh /mnt/logstash-5.5.1.rpm
warning: /mnt/logstash-5.5.1.rpm: Header V4 RSA/SHA512 Signature, key ID d88e42b4: NOKEY
Preparing...                          ################################# [100%]
Updating / installing...
   1:logstash-1:5.5.1-1              ################################# [100%]

Generate the logstash.service file again:

[root@localhost ~]# sudo /usr/share/logstash/bin/system-install /etc/logstash/startup.options systemd
Using provided startup.options file: /etc/logstash/startup.options

Now the service starts successfully:

[root@localhost ~]# systemctl start logstash

[Solved] Docker Elasticsearch8.4.0 Error: Exception in thread “main” java.nio.file.FileSystemException

Exception in thread "main" java.nio.file.FileSystemException: /usr/share/elasticsearch/config/elasticsearch.yml.Dym72YkCRZ-GMAliqWE2IA.tmp -> /usr/share/elasticsearch/config/elasticsearch.yml: Device or resource busy
	at java.base/sun.nio.fs.UnixException.translateToIOException(UnixException.java:100)
	at java.base/sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:106)
	at java.base/sun.nio.fs.UnixCopyFile.move(UnixCopyFile.java:416)
	at java.base/sun.nio.fs.UnixFileSystemProvider.move(UnixFileSystemProvider.java:266)
	at java.base/java.nio.file.Files.move(Files.java:1432)
	at org.elasticsearch.xpack.security.cli.AutoConfigureNode.fullyWriteFile(AutoConfigureNode.java:1127)
	at org.elasticsearch.xpack.security.cli.AutoConfigureNode.fullyWriteFile(AutoConfigureNode.java:1139)
	at org.elasticsearch.xpack.security.cli.AutoConfigureNode.execute(AutoConfigureNode.java:687)
	at org.elasticsearch.server.cli.ServerCli.autoConfigureSecurity(ServerCli.java:161)
	at org.elasticsearch.server.cli.ServerCli.execute(ServerCli.java:85)
	at org.elasticsearch.common.cli.EnvironmentAwareCommand.execute(EnvironmentAwareCommand.java:54)
	at org.elasticsearch.cli.Command.mainWithoutErrorHandling(Command.java:85)
	at org.elasticsearch.cli.Command.main(Command.java:50)
	at org.elasticsearch.launcher.CliToolLauncher.main(CliToolLauncher.java:64)

Cause: most likely a problem with the mounting of the configuration file. Bind-mounting elasticsearch.yml as a single file prevents Elasticsearch's auto-configuration step from renaming its temporary file over it, which fails with "Device or resource busy".

My solution: start the container without mounting the configuration file, and it runs successfully.
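
For reference, a minimal start command without the file mount might look like this (the container name and the single-node setting are assumptions for a local test):

docker run -d --name es -p 9200:9200 -p 9300:9300 \
  -e "discovery.type=single-node" \
  docker.elastic.co/elasticsearch/elasticsearch:8.4.0

If you need to keep custom settings, mounting the whole config directory instead of the single yml file also avoids the failed rename.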

1. Port 9200 cannot be accessed from the browser

Add the following to elasticsearch.yml inside the container:

http.host: 0.0.0.0

2. After the above is configured, accessing port 9200 still asks for a username and password

After looking up some solutions: you also need to add the following to the configuration file inside the container:

xpack.security.enabled: false

After restarting the container, ES runs successfully.

Note: vim needs to be installed before you can edit files inside the container.
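
For example (a sketch; the container name "es" is an assumption, and the package manager depends on the base image):

docker exec -it es bash
apt-get update && apt-get install -y vim   # Ubuntu-based 8.x images; older CentOS-based images use yum
vim /usr/share/elasticsearch/config/elasticsearch.yml
exit
docker restart es   # back on the host, restart to apply the changes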

[Solved] SpringBoot Integrate ES Error: Elasticsearch health check failed

Recently, when starting a Spring Boot project integrated with ES, an error was reported after a seemingly successful startup: Elasticsearch health check failed.

There are two methods to solve this error:

1. Disable the actuator health check for Elasticsearch (I tried this method and the project would not start afterwards; it is not recommended):

management:
  health:
    elasticsearch:
      enabled: false

2. Configure spring.elasticsearch.rest.uris correctly (the problem was solved after a restart):

spring:
  # ES search engine
  data:
    elasticsearch:
      cluster-nodes: 47.103.5.190:9300
      cluster-name: docker-cluster
      repositories:
        enabled: true
  elasticsearch:
    rest:
      uris: ["http://47.103.5.190:9200"]
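
Before touching the application, it is worth confirming that the node itself is reachable; the actuator indicator essentially performs a cluster health request (the host below is the one from the configuration above):

curl "http://47.103.5.190:9200/_cluster/health?pretty"   # "status" should be green or yellow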

[Solved] Logstash Error: Logstash – java.lang.IllegalStateException: Logstash stopped processing because of an err

I recently tried to use Elasticsearch and IK together with Logstash to connect to MySQL, and got the following error message while testing Logstash.

First, enter the command: logstash -e 'input {stdin{}} output {stdout{}}'

D:\myworkspace\es\logstash-6.4.3\bin>logstash -e 'input {stdin{}} output {stdout{}}'

The pipeline definition itself is valid, but the result is:

D:\myworkspace\es\logstash-6.4.3\bin>logstash -e 'input {stdin{}} output {stdout{}}'
ERROR: Unknown command '{stdin{}}'

See: 'bin/logstash --help'
[ERROR] 2022-08-23 09:06:42.875 [main] Logstash - java.lang.IllegalStateException: Logstash stopped processing because of an error: (SystemExit) exit


Solution:

First try the simplest possible command:

logstash -e ""

The result was successful:

D:\myworkspace\es\logstash-6.4.3\bin>logstash -e ""
Sending Logstash logs to D:/myworkspace/es/logstash-6.4.3/logs which is now configured via log4j2.properties
[2022-08-23T09:16:16,950][INFO ][logstash.setting.writabledirectory] Creating directory {:setting=>"path.queue", :path=>"D:/myworkspace/es/logstash-6.4.3/data/queue"}
[2022-08-23T09:16:16,958][INFO ][logstash.setting.writabledirectory] Creating directory {:setting=>"path.dead_letter_queue", :path=>"D:/myworkspace/es/logstash-6.4.3/data/dead_letter_queue"}
[2022-08-23T09:16:17,054][WARN ][logstash.config.source.multilocal] Ignoring the 'pipelines.yml' file because modules or command line options are specified
[2022-08-23T09:16:17,164][INFO ][logstash.agent           ] No persistent UUID file found. Generating new UUID {:uuid=>"0777ac0f-9efb-463d-8e2c-874bc1dc9feb", :path=>"D:/myworkspace/es/logstash-6.4.3/data/uuid"}
[2022-08-23T09:16:17,592][INFO ][logstash.runner          ] Starting Logstash {"logstash.version"=>"6.4.3"}
[2022-08-23T09:16:20,129][INFO ][logstash.pipeline        ] Starting pipeline {:pipeline_id=>"main", "pipeline.workers"=>8, "pipeline.batch.size"=>125, "pipeline.batch.delay"=>50}
[2022-08-23T09:16:20,231][INFO ][logstash.pipeline        ] Pipeline started successfully {:pipeline_id=>"main", :thread=>"#<Thread:0x5fba80a0 run>"}
The stdin plugin is now waiting for input:
[2022-08-23T09:16:20,277][INFO ][logstash.agent           ] Pipelines running {:count=>1, :running_pipelines=>[:main], :non_running_pipelines=>[]}
[2022-08-23T09:16:20,611][INFO ][logstash.agent           ] Successfully started Logstash API endpoint {:port=>9600}
[2022-08-23T09:16:43,203][WARN ][logstash.runner          ] SIGINT received. Shutting down.
[2022-08-23T09:16:43,338][INFO ][logstash.pipeline        ] Pipeline has terminated {:pipeline_id=>"main", :thread=>"#<Thread:0x5fba80a0 run>"}
[2022-08-23T09:16:43,340][FATAL][logstash.runner          ] SIGINT received. Terminating immediately.

The cause is that cmd.exe on Windows does not treat single quotes as quoting characters, so Logstash sees '{stdin{}}' as an unknown command. Replace the single quotes with double quotes:

logstash -e "input { stdin {} }  output {stdout {} }"

Done!

D:\myworkspace\es\logstash-6.4.3\bin>logstash -e "input { stdin {} }  output {stdout {} }"
Sending Logstash logs to D:/myworkspace/es/logstash-6.4.3/logs which is now configured via log4j2.properties
[2022-08-23T09:17:48,125][WARN ][logstash.config.source.multilocal] Ignoring the 'pipelines.yml' file because modules or command line options are specified
[2022-08-23T09:17:48,690][INFO ][logstash.runner          ] Starting Logstash {"logstash.version"=>"6.4.3"}
[2022-08-23T09:17:50,871][INFO ][logstash.pipeline        ] Starting pipeline {:pipeline_id=>"main", "pipeline.workers"=>8, "pipeline.batch.size"=>125, "pipeline.batch.delay"=>50}
[2022-08-23T09:17:50,964][INFO ][logstash.pipeline        ] Pipeline started successfully {:pipeline_id=>"main", :thread=>"#<Thread:0x268e4bb5 run>"}
The stdin plugin is now waiting for input:
[2022-08-23T09:17:51,008][INFO ][logstash.agent           ] Pipelines running {:count=>1, :running_pipelines=>[:main], :non_running_pipelines=>[]}
[2022-08-23T09:17:51,209][INFO ][logstash.agent           ] Successfully started Logstash API endpoint {:port=>9600}

ES Startup error: ERROR: [2] bootstrap checks failed

ERROR: [2] bootstrap checks failed
[1]: max file descriptors [4096] for elasticsearch process is too low,   increase to at least [65536]

Reason: Generally, check whether the following entries are present in /etc/security/limits.conf on the server (they do not belong in the ES configuration file):

elsearch soft nofile 65536
elsearch hard nofile 65536

If not, configure them, replacing elsearch with your own server user.

It is also possible that the error is still reported even though the server has been configured, because the current login session (for example, one predating a server reboot) has not picked up the new limits.
Re-login with su - <user> to solve the problem.
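
A quick way to verify that the limit has actually taken effect for the ES startup user (the user name elsearch is the example from above):

su - elsearch
ulimit -n    # should print 65536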

[Solved] ElasticsearchStatusException[Elasticsearch exception [type=mapper_parsing_exception, reason=failed t

Project scenario:

Today, when I was writing Java code to query data in ES, the following error occurred. After checking for a while I found the problem.


Console output:

ElasticsearchStatusException[Elasticsearch exception [type=mapper_parsing_exception, reason=failed to parse field [location] of type [geo_point]]
]; nested: ElasticsearchException[Elasticsearch exception [type=parse_exception, reason=unsupported symbol [.] in geohash [30.871729121.81959]]]; nested: ElasticsearchException[Elasticsearch exception [type=illegal_argument_exception, reason=unsupported symbol [.] in geohash [30.871729121.81959]]];
	at org.elasticsearch.rest.BytesRestResponse.errorFromXContent(BytesRestResponse.java:176)
	at org.elasticsearch.client.RestHighLevelClient.parseEntity(RestHighLevelClient.java:2011)
	at org.elasticsearch.client.RestHighLevelClient.parseResponseException(RestHighLevelClient.java:1988)
	at org.elasticsearch.client.RestHighLevelClient.internalPerformRequest(RestHighLevelClient.java:1745)
	at org.elasticsearch.client.RestHighLevelClient.performRequest(RestHighLevelClient.java:1702)
	at org.elasticsearch.client.RestHighLevelClient.performRequestAndParseEntity(RestHighLevelClient.java:1672)
	at org.elasticsearch.client.RestHighLevelClient.index(RestHighLevelClient.java:1029)
	at com.woniu.EsDemoApplicationTests.insertDoc(EsDemoApplicationTests.java:115)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:498)
	at 
	at org.junit.platform.launcher.core.SessionPerRequestLauncher.execute(SessionPerRequestLauncher.java:53)
	at com.intellij.junit5.JUnit5IdeaTestRunner.startRunnerWithArgs(JUnit5IdeaTestRunner.java:69)
	at com.intellij.rt.junit.IdeaTestRunner$Repeater.startRunnerWithArgs(IdeaTestRunner.java:33)
	at com.intellij.rt.junit.JUnitStarter.prepareStreamsAndStart(JUnitStarter.java:230)
	at com.intellij.rt.junit.JUnitStarter.main(JUnitStarter.java:58)
	Suppressed: org.elasticsearch.client.ResponseException: method [PUT], host [http://120.48.46.177:9200], URI [/hotel/_doc/45870?timeout=1m], status line [HTTP/1.1 400 Bad Request]
Warnings: [Elasticsearch built-in security features are not enabled. Without authentication, your cluster could be accessible to anyone. See https://www.elastic.co/guide/en/elasticsearch/reference/7.16/security-minimal-setup.html to enable security.]
{"error":{"root_cause":[{"type":"parse_exception","reason":"unsupported symbol [.] in geohash [30.871729121.81959]"}],"type":"mapper_parsing_exception","reason":"failed to parse field [location] of type [geo_point]","caused_by":{"type":"parse_exception","reason":"unsupported symbol [.] in geohash [30.871729121.81959]","caused_by":{"type":"illegal_argument_exception","reason":"unsupported symbol [.] in geohash [30.871729121.81959]"}}},"status":400}
		at org.elasticsearch.client.RestClient.convertResponse(RestClient.java:326)
		at org.elasticsearch.client.RestClient.performRequest(RestClient.java:296)
		at org.elasticsearch.client.RestClient.performRequest(RestClient.java:270)
		at org.elasticsearch.client.RestHighLevelClient.performClientRequest(RestHighLevelClient.java:2082)
		at org.elasticsearch.client.RestHighLevelClient.internalPerformRequest(RestHighLevelClient.java:1732)
		... 71 more
Caused by: ElasticsearchException[Elasticsearch exception [type=parse_exception, reason=unsupported symbol [.] in geohash [30.871729121.81959]]]; nested: ElasticsearchException[Elasticsearch exception [type=illegal_argument_exception, reason=unsupported symbol [.] in geohash [30.871729121.81959]]];
	at org.elasticsearch.ElasticsearchException.innerFromXContent(ElasticsearchException.java:485)
	at org.elasticsearch.ElasticsearchException.fromXContent(ElasticsearchException.java:396)
	at org.elasticsearch.ElasticsearchException.innerFromXContent(ElasticsearchException.java:426)
	at org.elasticsearch.ElasticsearchException.failureFromXContent(ElasticsearchException.java:592)
	at org.elasticsearch.rest.BytesRestResponse.errorFromXContent(BytesRestResponse.java:168)
	... 74 more
Caused by: ElasticsearchException[Elasticsearch exception [type=illegal_argument_exception, reason=unsupported symbol [.] in geohash [30.871729121.81959]]]
	at org.elasticsearch.ElasticsearchException.innerFromXContent(ElasticsearchException.java:485)
	at org.elasticsearch.ElasticsearchException.fromXContent(ElasticsearchException.java:396)
	at org.elasticsearch.ElasticsearchException.innerFromXContent(ElasticsearchException.java:426)

Cause analysis:

The geo_point string format in ES is "latitude,longitude". When I spliced the strings I left out the comma, so ES tried to parse the whole value as a geohash and failed on the dots. Here is the genuinely useful part of the output above.

Real error reason:

{"error":{"root_cause":[{"type":"parse_exception","reason":"unsupported symbol [.] in 
geohash [30.871729121.81959]"}],"type":"mapper_parsing_exception","reason":"failed to parse 
field [location] of type [geo_point]","caused_by":{"type":"parse_exception","reason":"unsupported symbol [.] in geohash 
[30.871729121.81959]","caused_by":
{"type":"illegal_argument_exception","reason":"unsupported symbol [.] in geohash 
[30.871729121.81959]"}}},"status":400}

Incorrect writing:

this.location = hotel.getLatitude()+hotel.getLongitude();

Correct writing:

this.location = hotel.getLatitude()+","+hotel.getLongitude();

Solution:

Check the string before splicing: include the comma, and do not reverse the order (latitude first, then longitude), otherwise the indexed geographic location may be wrong.
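
If the string format feels error-prone, geo_point also accepts an object form that makes the order explicit (a sketch; the index and document id are taken from the error above, and the host is assumed to be local):

curl -X PUT "http://localhost:9200/hotel/_doc/45870" -H 'Content-Type: application/json' -d'
{
  "location": { "lat": 30.871729, "lon": 121.81959 }
}'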

ElasticsearchSecurityException[failed to load SSL configuration [xpack.security.transport.ssl]]

ElasticsearchSecurityException[failed to load SSL configuration [xpack.security.transport.ssl]]

[ERROR][o.e.b.ElasticsearchUncaughtExceptionHandler] [master] uncaught exception in thread [main]
org.elasticsearch.bootstrap.StartupException: ElasticsearchSecurityException[failed to load SSL configuration [xpack.security.transport.ssl]]; nested: ElasticsearchException[failed to initialize SSL TrustManager]; nested: IOException[parseAlgParameters failed: ObjectIdentifier() -- data isn't an object ID (tag = 48)]; nested: IOException[ObjectIdentifier() -- data isn't an object ID (tag = 48)];


Solution:
1. Configure xpack.security.transport.ssl.keystore.path: elastic-certificates.p12 in elasticsearch.yml, change the path of elastic-certificates.p12 to an absolute path, give the file 777 permissions, and restart.
2. When step 1 did not work, I kept thinking and switched the JDK to the JDK 11 bundled with ES, then restarted.
That solved the problem.
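
For step 2, one way is to point Elasticsearch at the JDK it ships with before starting (a sketch; the installation path is an assumption, and recent versions read ES_JAVA_HOME instead of JAVA_HOME):

export JAVA_HOME=/usr/local/elasticsearch/jdk   # the bundled JDK inside the ES distribution
/usr/local/elasticsearch/bin/elasticsearch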

How to Solve elasticSearch8.1.2 Install Error in Win10


1. Error: the startup script chokes on the JDK path \java\jdk1.8.0_20

Solution:
Even when the Java environment variables are set correctly, this can happen because the Java installation path contains spaces and brackets. Reinstall Java to a path without spaces and ES starts successfully.
For example, my JAVA_HOME environment variable originally pointed to:

C:\Program Files (x86)\Java\jdk1.8.0_20

Change it to:

C:\Progra\Java\jdk1.8.0_20

2. Error: 'elasticsearch.bat' is not recognized as an internal or external command, operable program or batch file

Check whether you have filled in the configuration file; if not, add the following parameters at the bottom of the elasticsearch.yml file:

# Change the name of the cluster so that it doesn't get mixed up with someone else's cluster
cluster.name: el-m

# Change the name of the node
node.name: el_node_m1

# Change the listening address of the ES so that other machines can access it
network.host: 0.0.0.0

# Set the http port for the external service, the default is 9200
http.port: 9200

# Set the path to store the index data
path.data: E:\elasticsearch-8.1.2\data #Switch to your own path
# Set the path to store log files
path.logs: E:\elasticsearch-8.1.2\logs #Switch to your own path

# Turn off http access restrictions
xpack.security.enabled: false

# Add new parameter, head plugin can access es

http.cors.enabled: true
http.cors.allow-origin: "*"

3. The command line prints the error log:

please check that any required plugins are installed, or check the breaking changes documentation for removed settings

Solution: refer to the configuration file above.
Elasticsearch has changed considerably between versions; when the configuration file contains settings that have since been removed, this error message may appear.

failed to obtain in-memory shard lock [How to Solve]

Cause of problem:

1. The reason for this problem is that the original shard was not properly closed and cleaned up, so the shard lock cannot be acquired when the shard is reallocated back to the problem node.
2. This does not cause shard data loss; the allocation just needs to be retriggered.

Retrigger the allocation:

curl -XPOST http://localhost:9200/_cluster/reroute?retry_failed

View the allocation details:

curl -XGET http://localhost:9200/_cluster/allocation/explain

ElasticsearchStatusException[Elasticsearch exception [type=search_phase_execution_exception, reason=all shards failed]]

Phenomenon
When doing a location search with elasticsearch, an error is reported:
ElasticsearchStatusException[Elasticsearch exception [type=search_phase_execution_exception, reason=all shards failed]]

I am using GeoDistanceQueryBuilder for Elasticsearch's geolocation search and sorting.

Investigation
Later, I logged in to the elasticsearch server to check the error logs and found the cause: my location field is not of the geo_point type. Troubleshooting this took quite a while.
The reason is simple: the index was created automatically by IndexRequest, so dynamic mapping typed the location value as an ordinary field rather than geo_point, and geo queries against it fail.

For example:

String string = JSONObject.fromObject(entity).toString();
IndexRequest indexRequest = new IndexRequest(INDEX).type(DOC).id(INDEX + "_" + entity.getId()).source(string, XContentType.JSON);
bulkRequest.add(indexRequest);

Solution:
Create the index manually, or through Java code. Make sure the type of the attribute in the mapping is geo_point, otherwise it will not work.

In my index, the field "position" holds the location information.

# Create the index manually with Java

// Imports for the 6.x high-level REST client used in this example
import org.apache.http.HttpHost;
import org.elasticsearch.action.admin.indices.create.CreateIndexRequest;
import org.elasticsearch.action.admin.indices.create.CreateIndexResponse;
import org.elasticsearch.action.admin.indices.delete.DeleteIndexRequest;
import org.elasticsearch.action.admin.indices.get.GetIndexRequest;
import org.elasticsearch.action.support.master.AcknowledgedResponse;
import org.elasticsearch.client.RequestOptions;
import org.elasticsearch.client.RestClient;
import org.elasticsearch.client.RestHighLevelClient;
import org.elasticsearch.common.settings.Settings;
import org.elasticsearch.common.xcontent.XContentBuilder;
import org.elasticsearch.common.xcontent.json.JsonXContent;
import org.junit.Test;

public class CreateElsIndexMain {

    static final String INDEX_NAME = "t_els_mock";

    @Test
    public void test() throws Exception {
        RestHighLevelClient client = new RestHighLevelClient(RestClient.builder(
                new HttpHost(
                        "127.0.0.1",
                        9200,
                        "http"
                )));
        boolean exists = checkExists(client);
        if (exists) {
            deleteIndex(client);
        }
        createIndex(client);

    }

    public static boolean checkExists(RestHighLevelClient client) throws Exception {
        GetIndexRequest existsRequest = new GetIndexRequest();
        existsRequest.indices(INDEX_NAME);
        boolean exists = client.indices().exists(existsRequest, RequestOptions.DEFAULT);
        return exists;
    }

    public static void createIndex(RestHighLevelClient client) throws Exception {
        Settings.Builder setting = Settings.builder().put("number_of_shards", "5").put("number_of_replicas", 1);
        XContentBuilder mappings = JsonXContent.contentBuilder().
                startObject().startObject("properties").startObject("id").field("type", "text").endObject().
                startObject("name").field("type", "keyword").endObject().
                startObject("createTime").field("type", "keyword").endObject().
                startObject("score").field("type","keyword").endObject().
                startObject("longitude").field("type","float").endObject().
                startObject("latitude").field("type","float").endObject().
                startObject("position").field("type","geo_point").endObject().endObject().endObject();
        CreateIndexRequest request = new CreateIndexRequest(INDEX_NAME).settings(setting).mapping("doc",mappings);
        CreateIndexResponse createIndexResponse = client.indices().create(request, RequestOptions.DEFAULT);
        System.out.println(createIndexResponse);
    }


    public static void deleteIndex(RestHighLevelClient client) throws Exception {
        DeleteIndexRequest deleteIndexRequest = new DeleteIndexRequest(); // request object for deleting an index
        deleteIndexRequest.indices(INDEX_NAME);                           // pass the index name
        // execute the delete
        AcknowledgedResponse delete = client.indices().delete(deleteIndexRequest, RequestOptions.DEFAULT);
    }

}

Kibana data

We can index the test data into ES and then view the index contents through Kibana's Dev Tools (or an equivalent curl request), for example:
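
A quick check of the mapping and the documents (a sketch with curl against a local node; in Kibana's Dev Tools you would drop the host part):

curl "http://localhost:9200/t_els_mock/_mapping?pretty"   # position should show "type": "geo_point"
curl "http://localhost:9200/t_els_mock/_search?pretty"    # lists the indexed documents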

Code to add data to ES:


    @Autowired
    private RestHighLevelClient client;

    private static final String INDEX = "t_els_mock";

    private static final String DOC = "doc";

    public void addData() {
        BulkRequest bulkRequest = new BulkRequest();
        List<MockLocationEntity> entities = getEntities();
        for (MockLocationEntity entity : entities) {
            String json = JSONObject.fromObject(entity).toString();
            IndexRequest indexRequest = new IndexRequest(INDEX).type(DOC)
                    .id(INDEX + "_" + entity.getId())
                    .source(json, XContentType.JSON);
            bulkRequest.add(indexRequest);
        }
        try {
            BulkResponse bulk = client.bulk(bulkRequest, RequestOptions.DEFAULT);
            if (bulk.hasFailures()) {          // surface partial failures instead of ignoring them
                System.err.println(bulk.buildFailureMessage());
            }
        } catch (IOException e) {
            e.printStackTrace();               // don't swallow the exception silently
        }
    }

    private static List<MockLocationEntity> getEntities(){
        List<MockLocationEntity> list = new ArrayList<>();

        MockLocationEntity one = new MockLocationEntity();
        one.setId(UUID.randomUUID().toString());
        one.setName("YuanYan GuoJi");
        one.setScore("10");
        one.setCreateTime("20220322145900");
        one.setLongitude(117.20);
        one.setLatitude(38.14);
        one.setPosition(one.getLatitude() + "," +one.getLongitude());


        MockLocationEntity two = new MockLocationEntity();
        two.setId(UUID.randomUUID().toString());
        two.setName("WenGuang DaSha");
        two.setScore("9");
        two.setCreateTime("20220322171100");
        two.setLongitude(116.01);
        two.setLatitude(38.89);
        two.setPosition(two.getLatitude() + "," +two.getLongitude());


        MockLocationEntity three = new MockLocationEntity();
        three.setId(UUID.randomUUID().toString());
        three.setName("NeiMengGu JiuDian");
        three.setScore("8");
        three.setCreateTime("20220322171101");
        three.setLongitude(117.99);
        three.setLatitude(39.24);
        three.setPosition(three.getLatitude() + "," +three.getLongitude());


        MockLocationEntity four = new MockLocationEntity();
        four.setId(UUID.randomUUID().toString());
        four.setName("GuoXianSheng");
        four.setScore("10");
        four.setCreateTime("20220322171102");
        four.setLongitude(117.20);
        four.setLatitude(39.50);
        four.setPosition(four.getLatitude() + "," +four.getLongitude());


        MockLocationEntity five = new MockLocationEntity();
        five.setId(UUID.randomUUID().toString());
        five.setName("NongYe YinHang");
        five.setScore("8");
        five.setCreateTime("20220322171103");
        five.setLongitude(116.89);
        five.setLatitude(39.90);
        five.setPosition(five.getLatitude() + "," +five.getLongitude());

        MockLocationEntity six = new MockLocationEntity();
        six.setId(UUID.randomUUID().toString());
        six.setName("XingBaKe");
        six.setScore("9");
        six.setCreateTime("20220322171104");
        six.setLongitude(117.25);
        six.setLatitude(39.15);
        six.setPosition(six.getLatitude() + "," +six.getLongitude());


        MockLocationEntity seven = new MockLocationEntity();
        seven.setId(UUID.randomUUID().toString());
        seven.setName("JuFuYuan");
        seven.setScore("6");
        seven.setCreateTime("20220322171104");
        seven.setLongitude(117.30);
        seven.setLatitude(39.18);
        seven.setPosition(seven.getLatitude() + "," +seven.getLongitude());

        list.add(one);
        list.add(two);
        list.add(three);
        list.add(four);
        list.add(five);
        list.add(six);
        list.add(seven);

        return list;
    }
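
With the mapping in place, the query that GeoDistanceQueryBuilder builds can also be tested directly over REST (a sketch; host, distance, and the center point are illustrative):

curl -X GET "http://localhost:9200/t_els_mock/_search?pretty" -H 'Content-Type: application/json' -d'
{
  "query": {
    "geo_distance": {
      "distance": "10km",
      "position": { "lat": 39.15, "lon": 117.25 }
    }
  }
}'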

[Solved] Elasticsearch 7.10 Startup Error: bootstrap checks failed

Error Messages:

ERROR: [1] bootstrap checks failed
[1]: max file descriptors [4096] for elasticsearch process is too low, increase to at least [65535]
[2]: max number of threads [3795] for user [es] is too low, increase to at least [4096]
[3]: max virtual memory areas vm.max_map_count [65530] is too low, increase to at least [262144]


Problem cause:
Higher versions of ES require more resources than the default Linux configuration provides, so the limits must be raised separately.
Solution:
[1]: max file descriptors [4096] for elasticsearch process is too low, increase to at least [65535]
Edit the /etc/security/limits.conf file and add the configuration:

* hard nofile 65535  # "*" means all users; it can be replaced with the ES startup user
* soft nofile 65535

[2]: max number of threads [3795] for user [es] is too low, increase to at least [4096]
Edit the /etc/security/limits.conf file and add the configuration:

es - nproc 4096  # es is my startup user; "-" sets both the soft and hard limit

After these two items are changed, log out and log back in as the ES startup user for them to take effect.

[3]: max virtual memory areas vm. max_map_count [65530] is too low, increase to at least [262144]

Edit the /etc/sysctl.conf file and add the following configuration:

vi /etc/sysctl.conf 

vm.max_map_count=262144

Run the command for it to take effect immediately:

/sbin/sysctl -p
vm.max_map_count = 262144
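
To double-check all three limits as the ES user before starting again (values should match the numbers above):

ulimit -n                 # max open files, expect 65535
ulimit -u                 # max user processes, expect 4096
sysctl vm.max_map_count   # expect 262144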


ES now starts successfully.