Category Archives: JAVA

[Solved] HttpPost Call https Interface error: PKIX path building failed

When using HttpPost to call an https interface, the following error is reported: PKIX path building failed: sun.security.provider.certpath.SunCertPathBuilderException: unable to find valid certification path to the requested target.

Here is a solution:

    import javax.net.ssl.SSLContext;
    import javax.net.ssl.TrustManager;
    import javax.net.ssl.X509TrustManager;

    import org.apache.http.client.HttpClient;
    import org.apache.http.conn.ssl.NoopHostnameVerifier;
    import org.apache.http.conn.ssl.SSLConnectionSocketFactory;
    import org.apache.http.impl.client.CloseableHttpClient;
    import org.apache.http.impl.client.HttpClients;

    /**
     * Fixes the HttpClient error "SSLPeerUnverifiedException: peer not authenticated"
     * and the PKIX error without importing the server's SSL certificate.
     * Warning: the returned client trusts every certificate and skips hostname
     * verification, so only use it against servers you already trust.
     *
     * @param base the original HttpClient (a default client is returned on failure)
     * @return an HttpClient that accepts any certificate
     */
    public static HttpClient wrapClient(HttpClient base) {
        try {
            SSLContext ctx = SSLContext.getInstance("TLS");
            // Trust manager that accepts all client and server certificates
            X509TrustManager tm = new X509TrustManager() {

                @Override
                public void checkClientTrusted(java.security.cert.X509Certificate[] chain, String authType)
                        throws java.security.cert.CertificateException {
                    // accept all client certificates
                }

                @Override
                public void checkServerTrusted(java.security.cert.X509Certificate[] chain, String authType)
                        throws java.security.cert.CertificateException {
                    // accept all server certificates
                }

                @Override
                public java.security.cert.X509Certificate[] getAcceptedIssuers() {
                    return new java.security.cert.X509Certificate[0];
                }
            };
            ctx.init(null, new TrustManager[]{tm}, null);
            // NoopHostnameVerifier additionally disables hostname verification
            SSLConnectionSocketFactory ssf = new SSLConnectionSocketFactory(ctx, NoopHostnameVerifier.INSTANCE);
            return HttpClients.custom().setSSLSocketFactory(ssf).build();
        } catch (Exception ex) {
            ex.printStackTrace();
            return HttpClients.createDefault();
        }
    }

Calling the wrapClient method to create the HttpClient avoids the problem, as in the example below:

    @Inject(target = "/infoResourcesManageRest/custom/getAllDataNum", type = InjectTypeExt.CUSTOM_URL)
    public synchronized WSResult getAllDataNum(JSONObject json) throws Exception {
        Integer talentNum = 0;
        Integer companyNum = 0;
        Integer organizationNum = 0;
        CloseableHttpClient httpClient = HttpClients.createDefault();
        // Wrap the client so that the https calls below do not fail with the PKIX error
        httpClient = (CloseableHttpClient) wrapClient(httpClient);
        CloseableHttpResponse response = null;
        /**
         * 1. Little data
         **/
        String[] zx = new String[]{"qyjbxx", "dwjbxx"};
        for (String s : zx) {
            JSONObject jsonObject1 = JSONObject.fromObject("{\n" +
                    "    \"size\": 1,\n" +
                    "    \"page\": 1,\n" +
                    "    \"params\": {\n" +
                    "        \"englishName\": \"" + s + "\"\n" +
                    "    },\n" +
                    "    \"filter\": {}\n" +
                    "}");
            String jsonString = JSON.toJSONString(jsonObject1);
            HttpPost httpPost = new HttpPost("https://www.baidu.com");
            StringEntity entity = new StringEntity(jsonString, "UTF-8");
            httpPost.setEntity(entity);
            httpPost.setHeader("Content-Type", "application/json;charset=utf8");
            response = httpClient.execute(httpPost);
            JSONObject jsonObject = JSONObject.fromObject(EntityUtils.toString(response.getEntity(), "utf-8"));
            JSONObject result = (JSONObject) jsonObject.get("result");
            if (result != null) {
                Integer totalElements = (Integer) result.get("totalElements");
                if (s.equals("qyjbxx")) {
                    companyNum = companyNum + totalElements;
                } else if (s.equals("dwjbxx")) {
                    organizationNum = organizationNum + totalElements;
                }
            }
        }
        // The rest of the method (talentNum and building the WSResult return value)
        // is omitted in the original post; null is returned here only as a placeholder.
        return null;
    }
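Alternatively, instead of trusting every certificate, the proper fix for the PKIX error is to import the server's certificate into the JVM truststore. A sketch with keytool (the alias, certificate file, truststore path, and the default changeit password all depend on your environment):

keytool -importcert -alias myserver -file server.crt -keystore $JAVA_HOME/jre/lib/security/cacerts -storepass changeit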

[Solved] Failed to re-init queues: Illegal queue capacity setting (abs-capacity=0.6) > (abs-maximum-capacity=0.4)

The following exception was thrown today when allocating queues to YARN:

llq@hadoop001:/software/hadoop-3.1.3$ yarn rmadmin -refreshQueues
2022-07-30 05:43:14,554 INFO client.RMProxy: Connecting to ResourceManager at hadoop002/192.168.86.102:8033
refreshQueues: java.io.IOException: Failed to re-init queues : Illegal queue capacity setting (abs-capacity=0.6) > (abs-maximum-capacity=0.4) for queue=[root.default],label=[]
	at org.apache.hadoop.yarn.ipc.RPCUtil.getRemoteException(RPCUtil.java:38)
	at org.apache.hadoop.yarn.server.resourcemanager.AdminService.logAndWrapException(AdminService.java:920)
	at org.apache.hadoop.yarn.server.resourcemanager.AdminService.refreshQueues(AdminService.java:406)
	at org.apache.hadoop.yarn.server.api.impl.pb.service.ResourceManagerAdministrationProtocolPBServiceImpl.refreshQueues(ResourceManagerAdministrationProtocolPBServiceImpl.java:114)
	at org.apache.hadoop.yarn.proto.ResourceManagerAdministrationProtocol$ResourceManagerAdministrationProtocolService$2.callBlockingMethod(ResourceManagerAdministrationProtocol.java:271)
	at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:527)
	at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1036)
	at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:1000)
	at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:928)
	at java.security.AccessController.doPrivileged(Native Method)
	at javax.security.auth.Subject.doAs(Subject.java:422)
	at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1729)
	at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2916)
Caused by: java.io.IOException: Failed to re-init queues : Illegal queue capacity setting (abs-capacity=0.6) > (abs-maximum-capacity=0.4) for queue=[root.default],label=[]
	at org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler.reinitialize(CapacityScheduler.java:477)
	at org.apache.hadoop.yarn.server.resourcemanager.AdminService.refreshQueues(AdminService.java:430)
	at org.apache.hadoop.yarn.server.resourcemanager.AdminService.refreshQueues(AdminService.java:401)
	... 10 more
Caused by: java.lang.IllegalArgumentException: Illegal queue capacity setting (abs-capacity=0.6) > (abs-maximum-capacity=0.4) for queue=[root.default],label=[]
	at org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CSQueueUtils.capacitiesSanityCheck(CSQueueUtils.java:75)
	at org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CSQueueUtils.loadUpdateAndCheckCapacities(CSQueueUtils.java:116)
	at org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.AbstractCSQueue.setupConfigurableCapacities(AbstractCSQueue.java:179)
	at org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.AbstractCSQueue.setupQueueConfigs(AbstractCSQueue.java:356)
	at org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.LeafQueue.setupQueueConfigs(LeafQueue.java:177)
	at org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.LeafQueue.<init>(LeafQueue.java:162)
	at org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.LeafQueue.<init>(LeafQueue.java:141)
	at org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacitySchedulerQueueManager.parseQueue(CapacitySchedulerQueueManager.java:259)
	at org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacitySchedulerQueueManager.parseQueue(CapacitySchedulerQueueManager.java:283)
	at org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacitySchedulerQueueManager.reinitializeQueues(CapacitySchedulerQueueManager.java:171)
	at org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler.reinitializeQueues(CapacityScheduler.java:726)
	at org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler.reinitialize(CapacityScheduler.java:472)

Reason: the configured capacity of the default queue (abs-capacity = 0.6) is greater than its maximum capacity (abs-maximum-capacity = 0.4).

Solution:

<!-- Lower the default queue's capacity to 40% (default 100%) -->
<property>
    <name>yarn.scheduler.capacity.root.default.capacity</name>
    <value>40</value>
</property>

<!-- Lower the default queue's maximum capacity to 60% (default 100%) -->
<property>
    <name>yarn.scheduler.capacity.root.default.maximum-capacity</name>
    <value>60</value>
</property>
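Both properties belong in capacity-scheduler.xml; after saving the file, run yarn rmadmin -refreshQueues again. With capacity (40) no longer above maximum-capacity (60), the refresh succeeds.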

Failed to scan osdt_cert.jar & osdt_core.jar [How to Solve]

java.io.FileNotFoundException: D:\WorkSpace\repository\com\oracle\ojdbc\oraclepki\oracle.osdt\osdt_cert.jar (The system cannot find the path specified.)
java.io.FileNotFoundException: D:\WorkSpace\repository\com\oracle\ojdbc\oraclepki\oracle.osdt\osdt_core.jar (The system cannot find the path specified.)

The problem is caused by the dependency added in the POM for Tomcat's built-in JSP support (tomcat-embed-jasper), which is used to compile JSPs; during startup its JAR scanner tries to scan every JAR on the classpath, including the Oracle JARs missing from the local repository.

<dependency>
    <groupId>org.apache.tomcat.embed</groupId>
    <artifactId>tomcat-embed-jasper</artifactId>
</dependency>

Solution: skip scanning the JAR packages.
Method 1: add the following code to the startup class:

System.setProperty(org.apache.tomcat.util.scan.Constants.SKIP_JARS_PROPERTY,"*.jar");
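For reference, a minimal sketch of a Spring Boot startup class with this line in place (the Application class name is illustrative):

import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;

@SpringBootApplication
public class Application {
    public static void main(String[] args) {
        // Skip Tomcat's TLD JAR scanning before the embedded server starts,
        // so the unresolvable osdt_cert.jar/osdt_core.jar are never scanned
        System.setProperty(org.apache.tomcat.util.scan.Constants.SKIP_JARS_PROPERTY, "*.jar");
        SpringApplication.run(Application.class, args);
    }
}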

Method 2: add the following configuration to application.yml:

server:
  tomcat:
    additional-tld-skip-patterns:
      - osdt_cert.jar
      - osdt_core.jar

[Solved] org.springframework.web.util.NestedServletException: Handler dispatch failed; nested exception is java.lang.NoSuchMethodError

Error Messages: org.springframework.web.util.NestedServletException: Handler dispatch failed; nested exception is java.lang.NoSuchMethodError: javax.servlet.http.HttpServletResponse.setContentLengthLong(J)V

Problem Description

type Exception report

message Handler dispatch failed; nested exception is java.lang.NoSuchMethodError: javax.servlet.http.HttpServletResponse.setContentLengthLong(J)V

description The server encountered an internal error that prevented it from fulfilling this request.

exception

org.springframework.web.util.NestedServletException: Handler dispatch failed; nested exception is java.lang.NoSuchMethodError: javax.servlet.http.HttpServletResponse.setContentLengthLong(J)V
org.springframework.web.servlet.DispatcherServlet.doDispatch(DispatcherServlet.java:1082)
org.springframework.web.servlet.DispatcherServlet.doService(DispatcherServlet.java:963)
org.springframework.web.servlet.FrameworkServlet.processRequest(FrameworkServlet.java:1006)
org.springframework.web.servlet.FrameworkServlet.doGet(FrameworkServlet.java:898)
javax.servlet.http.HttpServlet.service(HttpServlet.java:621)
org.springframework.web.servlet.FrameworkServlet.service(FrameworkServlet.java:883)
javax.servlet.http.HttpServlet.service(HttpServlet.java:728)
org.apache.tomcat.websocket.server.WsFilter.doFilter(WsFilter.java:51)
The POM declared the following dependencies:

<dependency>
    <groupId>javax.servlet</groupId>
    <artifactId>javax.servlet-api</artifactId>
    <version>3.1.0</version>
    <scope>provided</scope>
</dependency>
<dependency>
    <groupId>org.springframework</groupId>
    <artifactId>spring-webmvc</artifactId>
    <version>5.2.10.RELEASE</version>
</dependency>

Solution:

Use compatible versions of javax.servlet-api and spring-webmvc. HttpServletResponse.setContentLengthLong(long) was added in Servlet 3.1, so this NoSuchMethodError means the servlet container at runtime only provides an older Servlet API: either run on a container that supports Servlet 3.1 (Tomcat 8 or later), or downgrade the declared versions to match what the container provides.
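To check which servlet-api version actually ends up on the classpath, mvn dependency:tree can help.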

[Solved] Failed to configure a DataSource: ‘url‘ attribute is not specified and no embedded datasource could be configured

An error occurred when starting the Spring Boot application.

Error prompt:

Description:

Failed to configure a DataSource: 'url' attribute is not specified and no embedded datasource could be configured.

Reason: Failed to determine a suitable driver class


Action:

Consider the following:
	If you want an embedded database (H2, HSQL or Derby), please put it on the classpath.
	If you have database settings to be loaded from a particular profile you may need to activate it (no profiles are currently active).

Solution:

This is a dependency issue: the application configures its database connection pool with Druid, but the Druid package was not imported in pom.xml. Add it:

<dependency>
    <groupId>com.alibaba</groupId>
    <artifactId>druid-spring-boot-starter</artifactId>
    <version>1.1.23</version>
</dependency>
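With the starter on the classpath, the datasource still needs its connection settings; a minimal application.yml sketch (URL, credentials, and driver are placeholders for your own database):

spring:
  datasource:
    type: com.alibaba.druid.pool.DruidDataSource
    url: jdbc:mysql://localhost:3306/test
    username: root
    password: root
    driver-class-name: com.mysql.cj.jdbc.Driver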


[Solved] swagger Failed to start bean ‘documentationPluginsBootstrapper‘; nested exception is java.lang.NullPointerException

A null pointer exception is thrown after adding the swagger dependency to the project:

Failed to start bean ‘documentationPluginsBootstrapper’; nested exception is java.lang.NullPointerException

The swagger and swagger-ui dependencies used here are both version 2.9.2, while Spring Boot is version 2.6.

Springfox matches paths with AntPathMatcher, while Spring Boot 2.6.x defaults to PathPatternParser. There are two solutions:

1. Downgrade Spring Boot to a version earlier than 2.6;

2. Add the following to the configuration file:

spring.mvc.pathmatch.matching-strategy=ant_path_matcher

[Solved] JavaFX Error: java.lang.instrument ASSERTION FAILED ***: “!errorOutstanding“ with message transform method call failed


Problem description

Errors encountered in JavaFX:

In the fxml file, the controller is bound via fx:controller,
and FXMLLoader.load is called inside that same controller to load the fxml file.

An error is reported

java.lang.instrument ASSERTION FAILED ***: "!errorOutstanding" with message transform method call failed at JPLISAgent

Cause analysis:

The error is caused by a loading cycle, similar to a circular-dependency problem:

a.fxml declares AController.java as its controller via fx:controller,
but AController.java loads this same fxml again, so loading recurses endlessly.
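A minimal sketch of the cycle, using the hypothetical names above:

import java.io.IOException;

import javafx.fxml.FXMLLoader;

public class AController {
    public AController() throws IOException {
        // a.fxml declares fx:controller="AController", so loading it constructs
        // another AController, which loads a.fxml again -- endless recursion
        FXMLLoader.load(getClass().getResource("a.fxml"));
    }
}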


Solution:

Remove the FXMLLoader.load call from the controller, or remove fx:controller from the fxml file.
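For example, a sketch that loads the fxml from the Application class instead of from the controller (class names are illustrative):

import javafx.application.Application;
import javafx.fxml.FXMLLoader;
import javafx.scene.Parent;
import javafx.scene.Scene;
import javafx.stage.Stage;

public class MainApp extends Application {
    @Override
    public void start(Stage stage) throws Exception {
        // a.fxml keeps its fx:controller binding; the controller itself no longer calls FXMLLoader.load
        Parent root = FXMLLoader.load(getClass().getResource("a.fxml"));
        stage.setScene(new Scene(root));
        stage.show();
    }
}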


[Solved] kafka Error: java.net.UnknownHostException: ls-bptysztw

Kafka connection error:

java.net.UnknownHostException: ls-bptysztw

2022-07-20 15:48:28.701  INFO 15924 --- [ntainer#0-0-C-1] org.apache.kafka.clients.Metadata        : [Consumer clientId=consumer-abc-1, groupId=abc] Cluster ID: LFbHxG8qSSu7PyPKXoDD4g
2022-07-20 15:48:28.703  INFO 15924 --- [ntainer#0-0-C-1] o.a.k.c.c.internals.ConsumerCoordinator  : [Consumer clientId=consumer-abc-1, groupId=abc] Discovered group coordinator ls-bptysztw:9092 (id: 2147483647 rack: null)
2022-07-20 15:48:30.990  WARN 15924 --- [ntainer#0-0-C-1] org.apache.kafka.clients.NetworkClient   : [Consumer clientId=consumer-abc-1, groupId=abc] Error connecting to node ls-bptysztw:9092 (id: 2147483647 rack: null)

java.net.UnknownHostException: ls-bptysztw
	at java.net.Inet6AddressImpl.lookupAllHostAddr(Native Method) ~[na:1.8.0_144]
	at java.net.InetAddress$2.lookupAllHostAddr(InetAddress.java:928) ~[na:1.8.0_144]
	at java.net.InetAddress.getAddressesFromNameService(InetAddress.java:1323) ~[na:1.8.0_144]
	at java.net.InetAddress.getAllByName0(InetAddress.java:1276) ~[na:1.8.0_144]
	at java.net.InetAddress.getAllByName(InetAddress.java:1192) ~[na:1.8.0_144]
	at java.net.InetAddress.getAllByName(InetAddress.java:1126) ~[na:1.8.0_144]
	at org.apache.kafka.clients.DefaultHostResolver.resolve(DefaultHostResolver.java:27) ~[kafka-clients-3.1.1.jar:na]
	at org.apache.kafka.clients.ClientUtils.resolve(ClientUtils.java:110) ~[kafka-clients-3.1.1.jar:na]
	at org.apache.kafka.clients.ClusterConnectionStates$NodeConnectionState.currentAddress(ClusterConnectionStates.java:511) ~[kafka-clients-3.1.1.jar:na]
	at org.apache.kafka.clients.ClusterConnectionStates$NodeConnectionState.access$200(ClusterConnectionStates.java:468) ~[kafka-clients-3.1.1.jar:na]
	at org.apache.kafka.clients.ClusterConnectionStates.currentAddress(ClusterConnectionStates.java:173) ~[kafka-clients-3.1.1.jar:na]
	at org.apache.kafka.clients.NetworkClient.initiateConnect(NetworkClient.java:988) [kafka-clients-3.1.1.jar:na]
	at org.apache.kafka.clients.NetworkClient.ready(NetworkClient.java:301) [kafka-clients-3.1.1.jar:na]
	at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.tryConnect(ConsumerNetworkClient.java:575) [kafka-clients-3.1.1.jar:na]
	at org.apache.kafka.clients.consumer.internals.AbstractCoordinator$FindCoordinatorResponseHandler.onSuccess(AbstractCoordinator.java:854) [kafka-clients-3.1.1.jar:na]
	at org.apache.kafka.clients.consumer.internals.AbstractCoordinator$FindCoordinatorResponseHandler.onSuccess(AbstractCoordinator.java:830) [kafka-clients-3.1.1.jar:na]
	at org.apache.kafka.clients.consumer.internals.RequestFuture$1.onSuccess(RequestFuture.java:206) [kafka-clients-3.1.1.jar:na]
	at org.apache.kafka.clients.consumer.internals.RequestFuture.fireSuccess(RequestFuture.java:169) [kafka-clients-3.1.1.jar:na]
	at org.apache.kafka.clients.consumer.internals.RequestFuture.complete(RequestFuture.java:129) [kafka-clients-3.1.1.jar:na]
	at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient$RequestFutureCompletionHandler.fireCompletion(ConsumerNetworkClient.java:602) [kafka-clients-3.1.1.jar:na]
	at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.firePendingCompletedRequests(ConsumerNetworkClient.java:412) [kafka-clients-3.1.1.jar:na]
	at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:297) [kafka-clients-3.1.1.jar:na]
	at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:236) [kafka-clients-3.1.1.jar:na]
	at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:215) [kafka-clients-3.1.1.jar:na]
	at org.apache.kafka.clients.consumer.internals.AbstractCoordinator.ensureCoordinatorReady(AbstractCoordinator.java:246) [kafka-clients-3.1.1.jar:na]
	at org.apache.kafka.clients.consumer.internals.ConsumerCoordinator.coordinatorUnknownAndUnready(ConsumerCoordinator.java:459) [kafka-clients-3.1.1.jar:na]
	at org.apache.kafka.clients.consumer.internals.ConsumerCoordinator.poll(ConsumerCoordinator.java:487) [kafka-clients-3.1.1.jar:na]
	at org.apache.kafka.clients.consumer.KafkaConsumer.updateAssignmentMetadataIfNeeded(KafkaConsumer.java:1262) [kafka-clients-3.1.1.jar:na]
	at org.apache.kafka.clients.consumer.KafkaConsumer.poll(KafkaConsumer.java:1231) [kafka-clients-3.1.1.jar:na]
	at org.apache.kafka.clients.consumer.KafkaConsumer.poll(KafkaConsumer.java:1211) [kafka-clients-3.1.1.jar:na]
	at org.springframework.kafka.listener.KafkaMessageListenerContainer$ListenerConsumer.pollConsumer(KafkaMessageListenerContainer.java:1522) [spring-kafka-2.8.7.jar:2.8.7]
	at org.springframework.kafka.listener.KafkaMessageListenerContainer$ListenerConsumer.doPoll(KafkaMessageListenerContainer.java:1512) [spring-kafka-2.8.7.jar:2.8.7]
	at org.springframework.kafka.listener.KafkaMessageListenerContainer$ListenerConsumer.pollAndInvoke(KafkaMessageListenerContainer.java:1340) [spring-kafka-2.8.7.jar:2.8.7]
	at org.springframework.kafka.listener.KafkaMessageListenerContainer$ListenerConsumer.run(KafkaMessageListenerContainer.java:1252) [spring-kafka-2.8.7.jar:2.8.7]
	at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) [na:1.8.0_144]
	at java.util.concurrent.FutureTask.run(FutureTask.java:266) [na:1.8.0_144]
	at java.lang.Thread.run(Thread.java:748) [na:1.8.0_144]

Analysis

The log shows that the group coordinator is advertised as ls-bptysztw:9092, the broker's hostname. The client machine cannot resolve ls-bptysztw to an IP address, which causes the UnknownHostException. Mapping the hostname to the correct IP solves the error.

Solution:

Add a mapping to the C:\Windows\System32\drivers\etc\hosts file (123.123.123.123 stands in for the broker's real IP):

123.123.123.123       ls-bptysztw
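After saving the hosts file, the mapping can be verified with ping ls-bptysztw before reconnecting.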

[Solved] org.apache.maven.archiver.MavenArchiver.getManifest(org.apache.maven.project.MavenProject…

Error message:

org.apache.maven.archiver.MavenArchiver.getManifest(org.apache.maven.project.MavenProject, org.apache.maven.archiver.MavenArchiveConfiguration)


Solution:

Add or update the following plugin configuration in the POM (inside <build><plugins>):

<plugin>
    <artifactId>maven-war-plugin</artifactId>
    <version>2.6</version>
</plugin>

[Solved] Redis error: NOAUTH Authentication required.

1. Development environment

redis

2. Redis reports an error: NOAUTH Authentication required.

1. If a password is set: open the redis.windows.conf file and search for requirepass to see your password, then authenticate with it when connecting.
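For example, a minimal sketch with the Jedis client (host, port, and password are placeholders):

import redis.clients.jedis.Jedis;

public class RedisAuthDemo {
    public static void main(String[] args) {
        try (Jedis jedis = new Jedis("localhost", 6379)) {
            // must match the requirepass value, otherwise commands fail with NOAUTH
            jedis.auth("yourPassword");
            System.out.println(jedis.ping()); // PONG
        }
    }
}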

2. If no password is set, this error can still occur when a stale Redis instance is running in the background: end the redis-server process in Task Manager and restart Redis.

Hive /tmp File Content Viewing Error [How to Solve]

1. Viewing the content of Hive's /tmp files in the HDFS web UI reports an error:

Permission denied: user=dr.who, access=READ_EXECUTE, inode="/tmp":hadoopadmin:supergroup:drwx-wx-wx

2. Cause analysis:

The user has insufficient permissions: as the inode shows (drwx-wx-wx), the /tmp directory gives other users no read (r) permission, and the web page's default login user is dr.who.


3. Solution:

1. Change the permissions of /tmp

Execute:

hdfs dfs -chmod -R 777 /tmp
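Note that 777 grants every user full access to /tmp; on a shared cluster, a narrower permission may be preferable.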

2. Modify the default login user of the 50070 web UI

In core-site.xml, configure the static web user to the user name that Hadoop runs as:

<property>
    <name>hadoop.http.staticuser.user</name>
    <value>username</value>
</property>
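Restart HDFS after modifying core-site.xml so that the new static user takes effect.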