Category Archives: JAVA

[Solved] Spring Boot upload error: failed to find the temporary directory


1. Problem description

Online users reported that the file upload feature suddenly started failing. Checking the log file shows the following error:
failed to parse multipart servlet request; nested exception is java.lang.RuntimeException: java.nio.file.NoSuchFileException: /tmp/undertow.8099.1610110889708131476/undertow1171565914461190366upload

2. Cause investigation

When a Spring Boot application is launched with java -jar on Linux, a temporary directory is created under /tmp by default (on Windows, under C:\Users\Default\AppData\Local\Temp). The directory name has the format undertow.PORT.* (with the Tomcat container it is tomcat.PORT.*; this article uses Undertow as the example, and Tomcat behaves the same). Uploaded files are first written here as temporary files. However, files under /tmp that have not been used for more than 10 days are automatically cleaned up by the system, so the directory no longer exists by the time the next upload happens, which produces the error above.

3. Problem recurrence

Since the temporary directory is created automatically when the service starts, you can reproduce the problem in a local or test environment: restart the service, delete the undertow.PORT.* directory (tomcat.PORT.* for Tomcat) generated under /tmp, and upload a file again, as shown below.
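
For example (a minimal sketch; the exact directory name contains the port and a random suffix):

# locate the container's temporary directory
ls -d /tmp/undertow.*
# delete it to simulate the system clean-up
rm -rf /tmp/undertow.*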

4. Solution

1. Manually create the temporary directory (not recommended)

mkdir -p /tmp/undertow.8099.1610110889708131476/undertow1171565914461190366upload

PS: if no file is uploaded for more than 10 days, the directory is cleaned up again and the same problem recurs. This treats the symptom, not the root cause.

2. Modify Linux system configuration (not recommended)

vim /usr/lib/tmpfiles.d/tmp.conf
# Add at the end of the file; this excludes folders whose names start with undertow from clean-up
x /tmp/undertow*

PS: if multiple servers are deployed, each server needs to be modified.

3. Modify spring boot configuration file (recommended)

spring:
  servlet:
    multipart:
      # Specify a custom upload directory
      location: /mnt/tmp

PS: when using this method, you must ensure that /mnt/tmp exists; if it does not, the same error occurs. Therefore, check at every service start-up: if the directory exists, do nothing; if not, create it. The code is as follows:

import java.io.File;

import javax.servlet.MultipartConfigElement;

import org.springframework.beans.factory.annotation.Value;
import org.springframework.boot.web.servlet.MultipartConfigFactory;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

import lombok.extern.slf4j.Slf4j;

@Slf4j
@Configuration
public class MultipartConfig {

    @Value("${spring.servlet.multipart.location}")
    private String fileTempDir;

    @Bean
    MultipartConfigElement multipartConfigElement() {
        String os = System.getProperty("os.name");
        // On Windows, prefix the configured path with a drive letter
        if (os.toLowerCase().startsWith("win")) {
            fileTempDir = "C:" + fileTempDir;
        }
        log.info("fileTempDir:{}", fileTempDir);
        MultipartConfigFactory factory = new MultipartConfigFactory();
        File tmpDirFile = new File(fileTempDir);
        // Create the upload directory if it does not exist yet
        if (!tmpDirFile.exists()) {
            boolean mkdirSuccess = tmpDirFile.mkdirs();
            log.info("create temp dir, result:{}", mkdirSuccess);
        }
        factory.setLocation(fileTempDir);
        return factory.createMultipartConfig();
    }

}

Spring integrated HBase error [How to Solve]

Problem 1
ClassNotFoundException:org/springframework/data/hadoop/configuration/ConfigurationFactoryBean
Solution
Replace the jar package with spring-data-hadoop-1.0.0.RELEASE version
Problem 2
ClassNotFoundException:org/apache/hadoop/conf/Configuration
Solution
Introduce hadoop-client-3.1.3.jar and hadoop-common-3.1.3.jar
Problem 3
java.lang.NoClassDefFoundError: org/apache/commons/configuration2/Configuration
Solution
Introduce commons-configuration2-2.3.jar
Problem 4
java.lang.NoClassDefFoundError: org/apache/hadoop/util/PlatformName
Solution
Introduce hadoop-auth-3.1.3.jar
Problem 5
java.lang.NoClassDefFoundError: org/apache/hadoop/mapred/JobConf
Solution
Introduce hadoop-mapreduce-client-common-3.1.3.jar, hadoop-mapreduce-client-core-3.1.3.jar and
hadoop-mapreduce-client-jobclient-3.1.3.jar
Problem 6
java.lang.NoClassDefFoundError: com/ctc/wstx/io/SystemId
Solution
Introduce woodstox-core-5.0.3.jar
Problem 7
java.lang.NoClassDefFoundError: com/google/common/collect/Interners
Solution
Introduce guava-30.1.1-jre.jar
Problem 8
java.lang.NoSuchMethodError: com.google.common.collect.MapMaker.keyEquivalence(Lcom/google/common/base/Equivalence;)Lcom/google/common/collect/MapMaker;
Solution
Remove the google-collect-1.0.jar package; it conflicts with guava
Problem 9
java.lang.NoClassDefFoundError: com/fasterxml/jackson/core/JsonGenerator
Solution
Introduce jackson-annotations-2.12.4.jar, jackson-core-2.12.4.jar and jackson-databind-2.12.4.jar
Problem 10
java.lang.NoClassDefFoundError: org/apache/hadoop/hbase/HBaseConfiguration
Solution
Introduce hbase-common-2.2.4.jar
Problem 11
java.lang.NoClassDefFoundError: org/apache/hadoop/hbase/client/HTableInterface
Solution
After searching for a long time, I found this in the configuration file:
<bean id="htemplate" class="org.springframework.data.hadoop.hbase.HbaseTemplate">
<property name="configuration" ref="hbaseConfiguration">
</property>
</bean>
Comment it out.

Summary

Most of these problems come down to missing jar packages; integrating Spring with HBase requires 15 packages, among them:
spring-data-hadoop-1.0.0.RELEASE.jar
hadoop-client-3.1.3.jar
hadoop-common-3.1.3.jar
hadoop-auth-3.1.3.jar
hadoop-mapreduce-client-common-3.1.3.jar
hadoop-mapreduce-client-core-3.1.3.jar
hadoop-mapreduce-client-jobclient-3.1.3.jar
commons-configuration2-2.3.jar
guava-30.1.1-jre.jar
jackson-annotations-2.12.4.jar
jackson-core-2.12.4.jar
jackson-databind-2.12.4.jar
These packages are also required when integrating with HDFS.
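
If the project uses Maven instead of hand-managed jars, the same libraries can be declared as dependencies. A minimal sketch (coordinates inferred from the jar names above; hadoop-client pulls in hadoop-common, hadoop-auth and the mapreduce client jars transitively, so verify against your repository):

<dependency>
    <groupId>org.springframework.data</groupId>
    <artifactId>spring-data-hadoop</artifactId>
    <version>1.0.0.RELEASE</version>
</dependency>
<dependency>
    <groupId>org.apache.hadoop</groupId>
    <artifactId>hadoop-client</artifactId>
    <version>3.1.3</version>
</dependency>
<dependency>
    <groupId>org.apache.hbase</groupId>
    <artifactId>hbase-common</artifactId>
    <version>2.2.4</version>
</dependency>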

[Solved] Multithreading uses JSch to obtain a session error: Session.connect: java.net.SocketException: Connection reset

Phenomenon

The project uses the Spring Batch framework. Multiple partitions use JSch to obtain SFTP connections to read files, and errors are reported.

Essentially, multiple threads obtaining a JSch session connection at the same time trigger the error:

com.jcraft.jsch.JSchException: Session.connect: java.net.SocketException: Connection reset

JSch version

version=0.1.54
groupId=com.jcraft
artifactId=jsch

Reason

Various explanations can be found online: some say the number of SSH terminal connections is limited, others point to TCP connection problems. The root cause has not been identified yet; if you know it, please share it in the comments.

Reproduction

public static Session getSshSession(String sftpHost, int sftpPort, String userName, String password) {
	JSch jsch = new JSch();
	// Get the SSH session; fail fast instead of continuing with a null session
	Session sshSession;
	try {
		sshSession = jsch.getSession(userName, sftpHost, sftpPort);
	} catch (JSchException e) {
		throw new IllegalStateException("Failed to create SSH session", e);
	}
	if (StringUtils.isNotBlank(password)) {
		sshSession.setPassword(password);
	}
	Properties sshConfig = new Properties();
	sshConfig.put("StrictHostKeyChecking", "no");
	sshSession.setConfig(sshConfig);
	return sshSession;
}


static void test() {
	for (int i = 1; i < 50; i++) {
		new Thread(() -> {
			Session sshSession = getSshSession("*.*.*.*", 22, "root", "***");
			try {
				Thread.sleep(100);
				sshSession.connect();
			} catch (Exception e) {
				e.printStackTrace();
			} finally {
				sshSession.disconnect();
			}

		}).start();
	}
}

Solution:

Create a channel pool using Apache commons-pool2.

Since the project's SFTP configuration is dynamic rather than fixed, the code below is not encapsulated as a Spring-managed bean.

Connection pool configuration:

public class ConnPoolConfig extends GenericObjectPoolConfig {
    public ConnPoolConfig() {
        // https://blog.csdn.net/weixin_42340670/article/details/108431381
        // The minimum number of idle objects to keep in the pool
        setMinIdle(4);
        // The maximum capacity of the pool. The maximum number of objects to be stored in the pool
        setMaxTotal(10);
        // Check the validity of an object when it is borrowed from the pool.
        setTestOnBorrow(true);
        // How often the recycler thread performs idle object recovery (polling interval, in milliseconds)
        setTimeBetweenEvictionRunsMillis(60 * 60000);
        // Whether to validate idle objects while the evictor thread scans them.
        // If an object has not yet reached the idle-time eviction threshold and testWhileIdle is true,
        // it is validated; if its resources have expired (e.g. the connection is broken), it can be evicted.
        setTestWhileIdle(true);
    }
}

Connection pool factory:

public class ConnPoolFactory extends BasePooledObjectFactory<ChannelSftp> {

    private String host;
    private Integer port;
    private String userName;
    private String password;
    private final String strictHostKeyChecking = "no";

    public ConnPoolFactory(String host, Integer port, String userName, String password) {
        this.host = host;
        this.port = port;
        this.userName = userName;
        this.password = password;
    }

    @Override
    public ChannelSftp create() throws Exception {
        JSch jsch = new JSch();
        Session session = jsch.getSession(userName, host, port);
        session.setPassword(password);
        Properties config = new Properties();
        config.put("StrictHostKeyChecking", strictHostKeyChecking);
        session.setConfig(config);
        session.connect();
        ChannelSftp channel = (ChannelSftp) session.openChannel("sftp");
        channel.connect();
        return channel;
    }

    @Override
    public PooledObject<ChannelSftp> wrap(ChannelSftp obj) {
        return new DefaultPooledObject<>(obj);
    }

    // https://segmentfault.com/a/1190000003920723
    // Destroy the object, if the object pool detects that an "object" idle timeout,
    // or if the operator detects that the "object" is no longer valid when "returning the object" to the object pool, then this will result in "object destruction";
    // Implementations of the "destroy object" operation vary widely, but one thing must be clear:
    // When this method is called, the life of the "object" must end. If object is a thread, then the thread must exit at this point;
    // If object is a socket operation, then the socket must be closed;
    // If object is a file stream operation, then "data flush" is done and closed normally.
    @Override
    public void destroyObject(PooledObject<ChannelSftp> pooledObject) throws Exception {
        Channel channel = pooledObject.getObject();
        Session session = channel.getSession();
        channel.disconnect();
        session.disconnect();
    }

    // Check if the object is "valid";
    // The Pool cannot hold invalid "objects", so the "background detection thread" will periodically check the validity of the "objects" in the Pool,
    // If the object is invalid, it will be removed from the Pool and destroyed;
    // In addition, when the caller gets an "object" from the Pool, it also checks the validity of the "object" to make sure that no "invalid" objects can be output to the caller;
    // When the caller returns the "object" to the Pool after use, its validity is checked again. Validity here means
    // whether the object is in the expected state and can be used directly by the caller;
    // If the object is a socket, then its validity is whether the socket's channel is open/blocking timeout, etc.
    @Override
    public boolean validateObject(PooledObject<ChannelSftp> pooledObject) {
        return pooledObject.getObject().isConnected();
    }

    // "Activate" an object, an additional "activation" action when the Pool decides to remove an object for delivery to the caller,
    // For example, you can "reset" the list of parameters in the activateObject method to make it feel like a "newly created" object when the caller uses it;
    // If the object is a thread, you can reset the "thread break flag" in the "activate" operation, or wake up the thread from blocking, etc;
    // If the object is a socket, then you can refresh the channel in the "activate" operation,
    // or rebuild the link to the socket (if the socket is unexpectedly closed), etc.
    @Override
    public void activateObject(PooledObject<ChannelSftp> pooledObject) throws Exception {
        ChannelSftp channelSftp = pooledObject.getObject();
        Session session = channelSftp.getSession();
        if (!session.isConnected()) {
            session.connect();
            channelSftp.connect();
        }
    }

    // "Passivate" the object, when the caller "returns the object", the Pool will "passivate the object".
    // The implication of passivate is that the "object" needs a "rest" for a while.
    // If the object is a socket, then you can passivateObject to clear the buffer and block the socket;
    // If the object is a thread, you can sleep the thread or wait for an object in the thread during the "passivate" operation.
    // Note that the methods activateObject and passivateObject need to correspond to each other to avoid deadlocks or confusion about the state of the "object".
    @Override
    public void passivateObject(PooledObject<ChannelSftp> pooledObject) throws Exception {
    }
}

Connection pool:

public class ConnPool extends GenericObjectPool<ChannelSftp> {

    private static final Map<String, ConnPool> MAP = new ConcurrentHashMap<>();

    private ConnPool(String host, Integer port, String userName, String password) {
        super(new ConnPoolFactory(host, port, userName, password), new ConnPoolConfig());
    }

    public static ConnPool getConnPool(String host, Integer port, String userName, String password) {
        String key = host + ":" + port;
        ConnPool connPool = MAP.get(key);
        if (connPool == null) {
            synchronized (ConnPool.class) {
                connPool = MAP.get(key);
                if (connPool == null) {
                    connPool = new ConnPool(host, port, userName, password);
                    MAP.put(key, connPool);
                }
            }
        }
        return connPool;
    }
}

The connection pool creates a separate pool for each remote host and port.

Tool class encapsulation:

public static ChannelSftp borrowChannel(ConnectionConfig connCfg) {
	ConnPool connPool = ConnPool.getConnPool(connCfg.getHost(), connCfg.getPort(), connCfg.getUserName(),
			connCfg.getPassword());
	try {
		return connPool.borrowObject();
	} catch (Exception e) {
		logger.error("Get channelSftp from pool fail", e);
		// Rethrow so callers never receive a null channel
		throw new IllegalStateException(e);
	}
}

public static void returnChannel(ConnectionConfig connCfg, ChannelSftp channel) {
	ConnPool connPool = ConnPool.getConnPool(connCfg.getHost(), connCfg.getPort(), connCfg.getUserName(),
			connCfg.getPassword());
	try {
		connPool.returnObject(channel);
	} catch (Exception e) {
		logger.error("Return channelSftp to pool fail", e);
	}
}

The test now runs without errors:

static void test2() {
	AtomicInteger j = new AtomicInteger(0);
	for (int i = 0; i < 50; i++) {
		new Thread(() -> {
			ConnPool connPool = ConnPool.getConnPool("*", 22, "root", "*");
			System.out.println(connPool + "--" + j.getAndIncrement());
			ChannelSftp channelSftp = null;
			try {
				channelSftp = connPool.borrowObject();
			} catch (Exception e) {
				e.printStackTrace();
			} finally {
				// Only return objects that were actually borrowed
				if (channelSftp != null) {
					connPool.returnObject(channelSftp);
				}
			}
		}).start();
	}
}

How to Solve Shiro sessionIdUrlRewritingEnabled Error (Removing the JSESSIONID)

Project scenario:

When using Shiro for authentication, the login URL automatically carries a JSESSIONID on first access. It needs to be removed and must not be displayed.

Problem Description:

First, I searched Baidu and found that most solutions set a SessionManager when the DefaultWebSecurityManager is injected. This approach requires Shiro 1.3.2 or above; mine was 1.3.0, which certainly would not work, so I went straight to the POM and changed the version number.
Annotation style:

    @Bean
    public DefaultWebSessionManager sessionManager(){
        DefaultWebSessionManager sessionManager = new DefaultWebSessionManager();
        sessionManager.setSessionIdUrlRewritingEnabled(false);
        return sessionManager;
    }

    @Bean
    public DefaultWebSecurityManager securityManager(){
        DefaultWebSecurityManager securityManager = new DefaultWebSecurityManager();
        securityManager.setSessionManager(sessionManager());
        return securityManager;
    }

XML mode:

		<bean id="sessionManager" class="org.apache.shiro.web.session.mgt.DefaultWebSessionManager">
		<property name="sessionIdUrlRewritingEnabled" value="false"/>
	</bean>

	<!-- Shiro Security Manager -->
	<bean id="securityManager" class="org.apache.shiro.web.mgt.DefaultWebSecurityManager">
		<property name="sessionManager" ref="sessionManager"/>
	</bean>

Remember to add package scanning in XML mode. Mine:

<context:component-scan base-package="config" />

Then I ran the project and it reported the same error as before; it still complained that there is no getter method.

Cause analysis:

During debugging, I found three versions of Shiro in the project; two came from other modules, which should not matter. However, while debugging, the line numbers did not match the source. After downloading the source, the line this.sessionIdUrlRewritingEnabled = true; could not take a breakpoint. I tried rebuilding the project, running mvn clean install, repackaging, and restarting IDEA, after which the breakpoint could be set. But once debugging started, the breakpoint icon turned into a circle with a slash: the class being loaded was not the Shiro version of my current project at all.
In the end, I could only guess that having multiple versions present at the same time was the problem.

Solution:

One of the three versions turned out to be a very old 1.2.4, but nothing in the project imported it directly. Tracing it through the repository showed it was still pulled in via shiro-all, so I temporarily renamed that jar out of the way. After that, the project ran as expected.
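
A cleaner alternative to renaming the jar, assuming the old version really is dragged in transitively, is a Maven exclusion on the offending dependency (the coordinates of the module that pulls in shiro-all are hypothetical here):

<dependency>
    <groupId>some.module</groupId>
    <artifactId>module-that-pulls-shiro-all</artifactId>
    <version>1.0.0</version>
    <exclusions>
        <exclusion>
            <groupId>org.apache.shiro</groupId>
            <artifactId>shiro-all</artifactId>
        </exclusion>
    </exclusions>
</dependency>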

[Solved] Spring Boot Package Error: Failed to execute goal org.apache.maven.plugins:maven-resources-plugin:3.2.0

When starting the Spring Boot project, the following error is suddenly reported:

[ERROR] Failed to execute goal org.apache.maven.plugins:maven-resources-plugin:3.2.0:resources (default-resources) on project xxxxxxx: Input length = 1 -> [Help 1]

The error message points at the maven-resources-plugin version. The Spring Boot version I use is 2.5.7, which pulls in maven-resources-plugin 3.2.0 by default, and that default is what triggers the error.
Pin the plugin to an earlier version (3.1.0 is commonly used to avoid this issue) as follows, and then refresh Maven:

<plugin>
	<groupId>org.apache.maven.plugins</groupId>
	<artifactId>maven-resources-plugin</artifactId>
	<version>3.1.0</version>
</plugin>

MyBatis primary key auto-increment error: No setter found for the keyProperty


SQL statement in XML:

<insert id="registerReader" parameterType="com.by.tsgl.bean.Reader" useGeneratedKeys="true" keyProperty="reader_id">
    insert into reader(deposit_num,borrowing_num,reader_state,grade_id,user_id)
    values(0,0,"normal",1,#{user_id});
</insert>

Running the test reports the error:
org.mybatis.spring.MyBatisSystemException: nested exception is org.apache.ibatis.executor.ExecutorException: Error getting generated key or setting result to parameter object. Cause: org.apache.ibatis.executor.ExecutorException: No setter found for the keyProperty 'reader_id' in 'java.lang.String'.
Solution:
Remove the keyProperty attribute from the insert tag.
Change it to:

<insert id="registerReader" parameterType="com.by.tsgl.bean.Reader" useGeneratedKeys="true">

Cause analysis

The value of keyProperty refers to a property of the entity class, not a database column.

For a column that is already set to auto-increment in the database, only the useGeneratedKeys attribute needs to be configured.

useGeneratedKeys="true" keyProperty="id"
When useGeneratedKeys is set to true, it means that if the target table has an auto-increment column as its primary key, JDBC is allowed to retrieve the automatically generated key.

keyProperty="id" writes the automatically generated primary key back into the id property of the passed-in object. Since the object we passed in has no id field, it naturally has no setter for it, so an error is reported. (See the sketch below.)
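
For illustration, a minimal sketch of what MyBatis expects on the parameter object (the field and setter here are assumptions, not the article's actual Reader class):

public class Reader {
    // keyProperty="readerId" requires this property and its setter,
    // so MyBatis can write the generated key back after the insert
    private Integer readerId;

    public Integer getReaderId() {
        return readerId;
    }

    public void setReaderId(Integer readerId) {
        this.readerId = readerId;
    }
}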

What if there is no primary key in the inserted table?

You can use the keyColumn attribute:

<insert id="registerReader" parameterType="com.by.tsgl.bean.Reader" useGeneratedKeys="true" keyProperty="userId" keyColumn="user_id">

This means: take the primary key auto-generated by the database from the table's user_id column and put it into the userId member variable of the passed-in object. If the primary key is already specified in the database table, the keyColumn attribute can be omitted.

The following is from the MyBatis documentation:

useGeneratedKeys (insert and update only): this tells MyBatis to use JDBC's getGeneratedKeys method to retrieve keys generated internally by the database (for example, auto-increment fields in RDBMSs such as MySQL and SQL Server). The default value is false.
keyProperty (insert and update only): specifies the property that uniquely identifies the object. MyBatis will set its value using the return value of getGeneratedKeys, or the selectKey child element of the insert statement. The default value is unset. If more than one column is generated, multiple property names can be separated by commas.

Client Error: Could not get a resource from the pool [How to Solve]


1. Reason & Solution

Causes and corresponding fixes:

1. Concurrency is genuinely too high and the connection pool parameters are unreasonable. Solution: adjust the configuration parameters; add nodes.
2. The Redis execution queue is occupied by a large number of operations or by time-consuming operations. Solution: optimize slow operations; forbid slow operations.
3. There is a hot key. Solution: split the key and spread the pressure across the Redis nodes; add a local in-process cache and check it first before going to a Redis node.
4. One node's connection pool is exhausted. Solution: fix the data skew.
5. Time-consuming commands cause ping timeouts. Solution: disable time-consuming commands such as keys *; optimize the time-consuming operations.
6. A bug in an old version of the jedis package. Solution: upgrade the jedis version.
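
As a concrete example of "adjust the configuration parameters", a minimal Jedis pool sketch (the values are illustrative assumptions, not tuned recommendations):

import redis.clients.jedis.Jedis;
import redis.clients.jedis.JedisPool;
import redis.clients.jedis.JedisPoolConfig;

public class PoolExample {
    public static void main(String[] args) {
        JedisPoolConfig poolConfig = new JedisPoolConfig();
        poolConfig.setMaxTotal(200);        // upper bound on concurrent connections
        poolConfig.setMaxIdle(50);          // keep some connections warm
        poolConfig.setMinIdle(10);
        poolConfig.setMaxWaitMillis(2000);  // fail fast instead of waiting forever
        poolConfig.setTestOnBorrow(true);   // validate a connection before handing it out

        try (JedisPool pool = new JedisPool(poolConfig, "127.0.0.1", 6379);
             Jedis jedis = pool.getResource()) {
            jedis.ping();
        }
    }
}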

2. Hot key scenarios

Question 01:

Frequent IP access in a region

Solution:

Add an application-local cache and use LRU to keep a bounded number of hot IPs

Question 02:

Frequent queries against a large ZSet

Solution:

Split by business dimension; Split by data number segment

3. Scenario: a single node's connection pool is exhausted

A disaster caused by hash tag abuse: keys that share the same {hash tag} all map to one cluster slot, concentrating traffic and connections on a single node.
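
A sketch of the effect (the key names are made up): in Redis Cluster only the substring inside {...} is hashed, so all of the following keys land in the same slot, on the same node:

{order}:2021:item:1
{order}:2021:item:2
{order}:2022:detail:9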

[Solved] Tomcat runs JavaWeb servlet Error 404

Problem description

A new server template project was built with IDEA. After Tomcat was configured, accessing the page returned 404.

Reason

Tomcat returns 404 because the project's war artifact is not loaded.

Solution:

The error occurs because the war artifact of this project is not loaded. You need to add this project as an artifact,

and then select war exploded: compared with war, the exploded artifact keeps the same structure as the source directory, which is convenient during development.

Now the URL defaults to the artifact's home page.

Access it again and the page loads.

Spring project import @Resource Annotation Error [How to Solve]

An error is reported after the @Resource annotation is introduced into a Spring project.

Solution:

The @Resource annotation is provided by J2EE (package javax.annotation).

However, JDK 9 and above no longer provide it by default, so a javax.annotation dependency must be imported in Maven's pom.xml:

<dependencies>
    <dependency>
        <groupId>javax.annotation</groupId>
        <artifactId>javax.annotation-api</artifactId>
        <version>1.3.2</version>
    </dependency>
</dependencies>
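
With the dependency in place, @Resource injects a bean by name first, falling back to type. A minimal sketch (the bean and field names are made up):

import javax.annotation.Resource;

import org.springframework.stereotype.Repository;
import org.springframework.stereotype.Service;

@Repository
class ReaderDao {
    // data-access methods would live here
}

@Service
public class ReaderService {

    // Resolved by bean name "readerDao" first, then by type
    @Resource
    private ReaderDao readerDao;
}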

[Solved] Log4j2 log startup error: javax.xml.parsers.ParserConfigurationException…

Startup error log:

 javax.xml.parsers.ParserConfigurationException: Feature 'http://apache.org/xml/features/xinclude' is not recognized.
	at org.apache.xerces.jaxp.DocumentBuilderFactoryImpl.newDocumentBuilder(Unknown Source)
	at org.apache.logging.log4j.core.config.xml.XmlConfiguration.newDocumentBuilder(XmlConfiguration.java:191)
	at org.apache.logging.log4j.core.config.xml.XmlConfiguration.<init>(XmlConfiguration.java:89)
	at org.apache.logging.log4j.core.config.xml.XmlConfigurationFactory.getConfiguration(XmlConfigurationFactory.java:46)
	at org.apache.logging.log4j.core.config.ConfigurationFactory$Factory.getConfiguration(ConfigurationFactory.java:458)
	at org.apache.logging.log4j.core.config.ConfigurationFactory$Factory.getConfiguration(ConfigurationFactory.java:385)
	at org.apache.logging.log4j.core.config.ConfigurationFactory.getConfiguration(ConfigurationFactory.java:293)
	at org.apache.logging.log4j.core.LoggerContext.reconfigure(LoggerContext.java:616)
	at org.apache.logging.log4j.core.LoggerContext.reconfigure(LoggerContext.java:637)
	at org.apache.logging.log4j.core.LoggerContext.start(LoggerContext.java:231)
	at org.apache.logging.log4j.core.async.AsyncLoggerContext.start(AsyncLoggerContext.java:76)
	at org.apache.logging.log4j.core.impl.Log4jContextFactory.getContext(Log4jContextFactory.java:153)
	at org.apache.logging.log4j.core.impl.Log4jContextFactory.getContext(Log4jContextFactory.java:45)
	at org.apache.logging.log4j.LogManager.getContext(LogManager.java:307)
	at org.apache.log4j.Logger$PrivateManager.getContext(Logger.java:59)
	at org.apache.log4j.Logger.getLogger(Logger.java:41)

Solution:
Add the following JVM startup parameter:

-Djavax.xml.parsers.DocumentBuilderFactory=com.sun.org.apache.xerces.internal.jaxp.DocumentBuilderFactoryImpl
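
For example, when launching with java -jar (the jar name here is a placeholder):

java -Djavax.xml.parsers.DocumentBuilderFactory=com.sun.org.apache.xerces.internal.jaxp.DocumentBuilderFactoryImpl -jar app.jar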