
[Solved] Redis Startup Error: QForkMasterInit: system error caught. error code=0x000005af

 

1. Problems

When redis-server.exe is launched directly, the window flashes and closes immediately. Starting it with a script plus the configuration file crashes the same way. Starting it from the command line reports this error:

[23848] 16 Mar 16:10:32.565 # QForkMasterInit: system error caught. error code=0x000005af, message=VirtualAllocEx failed.: unknown error

2. Solutions

Set the maxmemory and maxheap parameters in Redis's .conf file:

maxmemory 120MB

maxheap 180MB

The right values for maxmemory and maxheap depend on your machine's memory. The usual rule of thumb is: maxheap = 1.5 * maxmemory.
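For example (hypothetical values; size them to the RAM you can actually spare), giving Redis 1 GB for data would look like this in redis.windows.conf:

maxmemory 1gb
maxheap 1536mb

On Windows the maxheap region backs Redis's fork emulation, so it must hold the dataset plus copy-on-write overhead; that is why the guideline sizes it above maxmemory.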

[Solved] org.apache.flink.client.program.ProgramInvocationException: The main method caused an error

After enabling checkpointing and setting the state backend for a Flink job, submitting the job failed with the errors below:

org.apache.flink.client.program.ProgramInvocationException: The main method caused an error: org.apache.flink.runtime.concurrent.FutureUtils$RetryException: Could not complete the operation. Number of retries has been exhausted.
  at org.apache.flink.client.program.PackagedProgram.callMainMethod(PackagedProgram.java:366)
  at org.apache.flink.client.program.PackagedProgram.invokeInteractiveModeForExecution(PackagedProgram.java:219)
  at org.apache.flink.client.ClientUtils.executeProgram(ClientUtils.java:114)
  at org.apache.flink.client.cli.CliFrontend.executeProgram(CliFrontend.java:812)
  at org.apache.flink.client.cli.CliFrontend.run(CliFrontend.java:246)
  at org.apache.flink.client.cli.CliFrontend.parseAndRun(CliFrontend.java:1054)
  at org.apache.flink.client.cli.CliFrontend.lambda$main$10(CliFrontend.java:1132)
  at java.security.AccessController.doPrivileged(Native Method)
  at javax.security.auth.Subject.doAs(Subject.java:422)
  at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1761)
  at org.apache.flink.runtime.security.contexts.HadoopSecurityContext.runSecured(HadoopSecurityContext.java:41)
  at org.apache.flink.client.cli.CliFrontend.main(CliFrontend.java:1132)
Caused by: java.util.concurrent.ExecutionException: org.apache.flink.runtime.concurrent.FutureUtils$RetryException: Could not complete the operation. Number of retries has been exhausted.
  at java.util.concurrent.CompletableFuture.reportGet(CompletableFuture.java:357)
  at java.util.concurrent.CompletableFuture.get(CompletableFuture.java:1908)
  at org.apache.flink.client.program.StreamContextEnvironment.getJobExecutionResult(StreamContextEnvironment.java:123)
  at org.apache.flink.client.program.StreamContextEnvironment.execute(StreamContextEnvironment.java:80)
  at org.apache.flink.streaming.api.environment.StreamExecutionEnvironment.execute(StreamExecutionEnvironment.java:1782)
  at org.apache.flink.streaming.api.environment.StreamExecutionEnvironment.execute(StreamExecutionEnvironment.java:1765)
  at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
  at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
  at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
  at java.lang.reflect.Method.invoke(Method.java:498)
  at org.apache.flink.client.program.PackagedProgram.callMainMethod(PackagedProgram.java:349)
  ... 11 more
Caused by: org.apache.flink.runtime.concurrent.FutureUtils$RetryException: Could not complete the operation. Number of retries has been exhausted.
  at org.apache.flink.runtime.concurrent.FutureUtils.lambda$retryOperationWithDelay$9(FutureUtils.java:386)
  at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:774)
  at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:750)
  at java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:488)
  at java.util.concurrent.CompletableFuture.completeExceptionally(CompletableFuture.java:1990)
  at org.apache.flink.runtime.rest.RestClient.lambda$submitRequest$1(RestClient.java:430)
  at org.apache.flink.shaded.netty4.io.netty.util.concurrent.DefaultPromise.notifyListener0(DefaultPromise.java:577)
  at org.apache.flink.shaded.netty4.io.netty.util.concurrent.DefaultPromise.notifyListeners0(DefaultPromise.java:570)
  at org.apache.flink.shaded.netty4.io.netty.util.concurrent.DefaultPromise.notifyListenersNow(DefaultPromise.java:549)
  at org.apache.flink.shaded.netty4.io.netty.util.concurrent.DefaultPromise.notifyListeners(DefaultPromise.java:490)
  at org.apache.flink.shaded.netty4.io.netty.util.concurrent.DefaultPromise.setValue0(DefaultPromise.java:615)
  at org.apache.flink.shaded.netty4.io.netty.util.concurrent.DefaultPromise.setFailure0(DefaultPromise.java:608)
  at org.apache.flink.shaded.netty4.io.netty.util.concurrent.DefaultPromise.tryFailure(DefaultPromise.java:117)
  at org.apache.flink.shaded.netty4.io.netty.channel.nio.AbstractNioChannel$AbstractNioUnsafe.fulfillConnectPromise(AbstractNioChannel.java:321)
  at org.apache.flink.shaded.netty4.io.netty.channel.nio.AbstractNioChannel$AbstractNioUnsafe.finishConnect(AbstractNioChannel.java:337)
  at org.apache.flink.shaded.netty4.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:702)
  at org.apache.flink.shaded.netty4.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:650)
  at org.apache.flink.shaded.netty4.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:576)
  at org.apache.flink.shaded.netty4.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:493)
  at org.apache.flink.shaded.netty4.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989)
  at org.apache.flink.shaded.netty4.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
  at java.lang.Thread.run(Thread.java:748)
Caused by: java.util.concurrent.CompletionException: org.apache.flink.shaded.netty4.io.netty.channel.AbstractChannel$AnnotatedConnectException: Connection refused: xxxxx
  at java.util.concurrent.CompletableFuture.encodeThrowable(CompletableFuture.java:292)
  at java.util.concurrent.CompletableFuture.completeThrowable(CompletableFuture.java:308)
  at java.util.concurrent.CompletableFuture.uniCompose(CompletableFuture.java:957)
  at java.util.concurrent.CompletableFuture$UniCompose.tryFire(CompletableFuture.java:940)
  ... 19 more
Caused by: org.apache.flink.shaded.netty4.io.netty.channel.AbstractChannel$AnnotatedConnectException: Connection refused: xxxxx
Caused by: java.net.ConnectException: Connection refused
  at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
  at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:714)
  at org.apache.flink.shaded.netty4.io.netty.channel.socket.nio.NioSocketChannel.doFinishConnect(NioSocketChannel.java:330)
  at org.apache.flink.shaded.netty4.io.netty.channel.nio.AbstractNioChannel$AbstractNioUnsafe.finishConnect(AbstractNioChannel.java:334)
  at org.apache.flink.shaded.netty4.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:702)
  at org.apache.flink.shaded.netty4.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:650)
  at org.apache.flink.shaded.netty4.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:576)
  at org.apache.flink.shaded.netty4.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:493)
  at org.apache.flink.shaded.netty4.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989)
  at org.apache.flink.shaded.netty4.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
  at java.lang.Thread.run(Thread.java:748)


Checking the YARN logs suggested a JAR conflict. After commenting out the Hadoop dependency in the POM file (the cluster already provides those jars), the job was resubmitted and ran successfully. A sketch of the change is below.
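Assuming the conflict came from an explicitly declared hadoop-client artifact (adjust to whatever your POM actually pulls in), either comment the dependency out or mark it provided so it is not bundled into the job jar:

<!-- Hadoop classes already exist on the YARN classpath; do not ship them again -->
<dependency>
    <groupId>org.apache.hadoop</groupId>
    <artifactId>hadoop-client</artifactId>
    <version>${hadoop.version}</version>
    <scope>provided</scope>
</dependency>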

[vite] http proxy error: Error: self signed certificate in certificate chain

To avoid cross-origin (CORS) problems when calling the API, the Vite dev-server proxy is used.

For example, with the API served at https://172.1.1.0:8080, the Vite configuration looks like this:

...

server: {
        host: '0.0.0.0',
        port: 12000,
        proxy: {
            '/local/': {
                target: 'https://172.1.1.0:8080',
                changeOrigin: true,
                rewrite: (path) => path.replace(/^\/local\//, ''),
            },
        },
},

...

Local requests then just need the /local/ prefix; for example, the login endpoint becomes '/local/Login'.

On the first request, though, an error came back immediately:

[vite] http proxy error: Error: self signed certificate

The backend's certificate is self-signed, and the proxy rejects it by default.

Solution: add secure: false to the proxy entry. The full proxy configuration becomes:

proxy: {
    '/local/': {
        target: 'https://172.1.1.0:8080',
        // Added: skip TLS certificate verification, since the backend cert is self-signed
        secure: false,
        changeOrigin: true,
        rewrite: (path) => path.replace(/^\/local\//, ''),
    },
},

Try again, and sure enough the request goes through without problems.

[Solved] Swagger Error: Fetch error: Internal Server Error /swagger/v1/swagger.json

When using Swagger to document an API, this error often occurs.

When it does, look closely at the console output: it reports "Ambiguous HTTP method for action", meaning ASP.NET Core cannot tell which HTTP verb an action handles.

Check the corresponding controller: one of its actions is missing its HTTP method attribute ([HttpPost], [HttpGet], etc.). Adding the attribute fixes the error; a minimal sketch follows.
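A sketch with hypothetical controller and action names:

using Microsoft.AspNetCore.Mvc;

// Every action needs an explicit verb attribute; without one, Swashbuckle cannot
// map the action and GET /swagger/v1/swagger.json returns 500.
[ApiController]
[Route("api/[controller]")]
public class UserController : ControllerBase
{
    [HttpGet("{id}")]   // this attribute was the missing piece
    public IActionResult Get(int id) => Ok(id);

    [HttpPost]
    public IActionResult Create([FromBody] string name) => Ok(name);
}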

[Solved] Fatal error C1083: unable to open include file: ‘d3dx9.h’

1. First, check in Control Panel whether the Microsoft DirectX SDK (June 2010) is installed; if not, download it from https://www.microsoft.com/en-us/download/details.aspx?id=6812.

After installation, d3dx9.h lives in: D:\Program Files (x86)\Microsoft DirectX SDK (June 2010)\Include

2. Set the path.

Project > Properties > C/C++ > General > Additional Include Directories: enter the directory that contains the header (separate multiple directories with semicolons), i.e.: D:\Program Files (x86)\Microsoft DirectX SDK (June 2010)\Include
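With the include directory set, a typical usage looks like the sketch below; linking via #pragma comment is one option (alternatively, add d3dx9.lib under Linker > Input, with the SDK's Lib\x86 or Lib\x64 folder in the library directories):

#include <d3d9.h>
#include <d3dx9.h>

// MSVC-specific: link the Direct3D import libraries shipped in the SDK's Lib folder
#pragma comment(lib, "d3d9.lib")
#pragma comment(lib, "d3dx9.lib")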

[Solved] java.io.IOException: Got error, status=ERROR, status message, ack with firstBadLink as

Today, uploading a file to Hadoop failed with the following error:

2022-03-17 17:17:11,994 INFO hdfs.DataStreamer: Exception in createBlockOutputStream blk_1073741946_1137
java.io.IOException: Got error, status=ERROR, status message , ack with firstBadLink as 120.78.239.136:9866
	at org.apache.hadoop.hdfs.protocol.datatransfer.DataTransferProtoUtil.checkBlockOpStatus(DataTransferProtoUtil.java:134)
	at org.apache.hadoop.hdfs.protocol.datatransfer.DataTransferProtoUtil.checkBlockOpStatus(DataTransferProtoUtil.java:110)
	at org.apache.hadoop.hdfs.DataStreamer.createBlockOutputStream(DataStreamer.java:1778)
	at org.apache.hadoop.hdfs.DataStreamer.nextBlockOutputStream(DataStreamer.java:1679)
	at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:716)
2022-03-17 17:17:11,998 WARN hdfs.DataStreamer: Abandoning BP-1890970308-172.25.12.163-1646541195774:blk_1073741946_1137
2022-03-17 17:17:12,007 WARN hdfs.DataStreamer: Excluding datanode DatanodeInfoWithStorage[120.78.239.136:9866,DS-87287b18-21ac-4314-884e-d78b139945b8,DISK]

The result was that slave1 ended up with no replica.

Reason and solution:

The firewall on slave1 was never turned off, so the other nodes could not reach its DataNode port; simply turning the firewall off fixes it. A sketch of the commands follows.
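Assuming slave1 is CentOS 7 with firewalld (adjust for your distribution):

# on slave1
systemctl stop firewalld
systemctl disable firewalld    # keep it off across reboots

# Alternatively, keep the firewall on and open only the DataNode transfer port:
# firewall-cmd --permanent --add-port=9866/tcp && firewall-cmd --reload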

 

[Solved] Tomcat Does Not Start, Error: Application Server not specified

Tomcat would not start, reporting Error: Application Server not specified. After checking, I found that although I had followed the teacher's steps, I had never imported the local Tomcat into IDEA. I assumed the IDE already had it, but because it was never imported it could not be used; the locally downloaded Tomcat must be imported first.

After importing the correct path, Tomcat started normally. I had always added the Tomcat plugin to the POM file before, which is why I overlooked this step.

[Solved] MacOS Compile ffmpeg Error: ERROR: openssl not found

On a whim today, I decided to start studying the FFmpeg source code. After installing all the dependent libraries, I ran configure:

sh configure --prefix=/usr/local/ffmpeg \
--enable-gpl \
--enable-version3 \
--enable-nonfree \
--enable-postproc \
--enable-libass \
--disable-libcelt \
--enable-libfdk-aac \
--enable-libfreetype \
--enable-libmp3lame \
--enable-libopencore-amrnb \
--enable-libopencore-amrwb \
--enable-libopenjpeg \
--enable-openssl \
--enable-libopus \
--enable-libspeex \
--enable-libtheora \
--enable-libvorbis \
--enable-libvpx \
--enable-libx264 \
--enable-libxvid \
--disable-static \
--disable-x86asm \
--enable-shared

configure aborted with ERROR: openssl not found, even though OpenSSL was installed (here, the Homebrew build under /usr/local/opt/openssl).

The solution is to point the build at that OpenSSL explicitly:

export LDFLAGS="-L/usr/local/opt/openssl/lib"
export CPPFLAGS="-I/usr/local/opt/openssl/include"

With those variables exported, configure succeeded!
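If you are not sure where Homebrew installed OpenSSL, it can tell you (on Apple Silicon the prefix is /opt/homebrew rather than /usr/local):

brew --prefix openssl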

If you encounter other problems, such as the "is unchanged" messages below, just delete the corresponding generated files and reconfigure; a sketch follows the list.

config.h is unchanged
libavutil/avconfig.h is unchanged
libavfilter/filter_list.c is unchanged
libavcodec/codec_list.c is unchanged
libavcodec/parser_list.c is unchanged
libavcodec/bsf_list.c is unchanged
libavformat/demuxer_list.c is unchanged
libavformat/muxer_list.c is unchanged
libavdevice/indev_list.c is unchanged
libavdevice/outdev_list.c is unchanged
libavformat/protocol_list.c is unchanged
ffbuild/config.sh is unchanged
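A sketch of the cleanup, assuming a previous configure has already generated these files:

make distclean    # removes the generated config files so configure starts fresh
sh configure --prefix=/usr/local/ffmpeg ...    # rerun with your original flags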

[Solved] init datasource error, url: jdbc:mysql://lcoalhost:3306/test com.mysql.cj.jdbc.exceptions.CommunicationsException

Error message:

com.mysql.cj.jdbc.exceptions.CommunicationsException: Communications link failure

The last packet sent successfully to the server was 0 milliseconds ago. The driver has not received any packets from the server.
	at com.mysql.cj.jdbc.exceptions.SQLError.createCommunicationsException(SQLError.java:174) ~[mysql-connector-java-8.0.18.jar:8.0.18]
	at com.mysql.cj.jdbc.exceptions.SQLExceptionsMapping.translateException(SQLExceptionsMapping.java:64) ~[mysql-connector-java-8.0.18.jar:8.0.18]
	at com.mysql.cj.jdbc.ConnectionImpl.createNewIO(ConnectionImpl.java:836) ~[mysql-connector-java-8.0.18.jar:8.0.18]
	at com.mysql.cj.jdbc.ConnectionImpl.<init>(ConnectionImpl.java:456) ~[mysql-connector-java-8.0.18.jar:8.0.18]
	at com.mysql.cj.jdbc.ConnectionImpl.getInstance(ConnectionImpl.java:246) ~[mysql-connector-java-8.0.18.jar:8.0.18]
	at com.mysql.cj.jdbc.NonRegisteringDriver.connect(NonRegisteringDriver.java:199) ~[mysql-connector-java-8.0.18.jar:8.0.18]
	at com.alibaba.druid.filter.FilterChainImpl.connection_connect(FilterChainImpl.java:156) ~[druid-1.1.10.jar:1.1.10]
	at com.alibaba.druid.filter.stat.StatFilter.connection_connect(StatFilter.java:218) ~[druid-1.1.10.jar:1.1.10]
	at com.alibaba.druid.filter.FilterChainImpl.connection_connect(FilterChainImpl.java:150) ~[druid-1.1.10.jar:1.1.10]
	at com.alibaba.druid.pool.DruidAbstractDataSource.createPhysicalConnection(DruidAbstractDataSource.java:1560) ~[druid-1.1.10.jar:1.1.10]
	at com.alibaba.druid.pool.DruidAbstractDataSource.createPhysicalConnection(DruidAbstractDataSource.java:1623) ~[druid-1.1.10.jar:1.1.10]
	at com.alibaba.druid.pool.DruidDataSource.init(DruidDataSource.java:861) ~[druid-1.1.10.jar:1.1.10]
	at com.alibaba.druid.pool.DruidDataSource.getConnection(DruidDataSource.java:1229) ~[druid-1.1.10.jar:1.1.10]
	at com.alibaba.druid.pool.DruidDataSource.getConnection(DruidDataSource.java:1225) ~[druid-1.1.10.jar:1.1.10]
	at com.alibaba.druid.pool.DruidDataSource.getConnection(DruidDataSource.java:90) ~[druid-1.1.10.jar:1.1.10]
	at org.springframework.jdbc.datasource.DataSourceTransactionManager.doBegin(DataSourceTransactionManager.java:263) ~[spring-jdbc-5.2.0.RELEASE.jar:5.2.0.RELEASE]
	at org.springframework.transaction.support.AbstractPlatformTransactionManager.getTransaction(AbstractPlatformTransactionManager.java:376) ~[spring-tx-5.2.0.RELEASE.jar:5.2.0.RELEASE]
	at org.springframework.transaction.interceptor.TransactionAspectSupport.createTransactionIfNecessary(TransactionAspectSupport.java:560) ~[spring-tx-5.2.0.RELEASE.jar:5.2.0.RELEASE]
	at org.springframework.transaction.interceptor.TransactionAspectSupport.invokeWithinTransaction(TransactionAspectSupport.java:347) ~[spring-tx-5.2.0.RELEASE.jar:5.2.0.RELEASE]
	at org.springframework.transaction.interceptor.TransactionInterceptor.invoke(TransactionInterceptor.java:99) ~[spring-tx-5.2.0.RELEASE.jar:5.2.0.RELEASE]
	at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:186) ~[spring-aop-5.2.0.RELEASE.jar:5.2.0.RELEASE]
	at org.springframework.aop.framework.CglibAopProxy$CglibMethodInvocation.proceed(CglibAopProxy.java:747) ~[spring-aop-5.2.0.RELEASE.jar:5.2.0.RELEASE]
	at org.springframework.aop.framework.CglibAopProxy$DynamicAdvisedInterceptor.intercept(CglibAopProxy.java:689) ~[spring-aop-5.2.0.RELEASE.jar:5.2.0.RELEASE]
	at com.baidu.fsg.uid.worker.DisposableWorkerIdAssigner$$EnhancerBySpringCGLIB$$17ab474d.assignWorkerId(<generated>) ~[classes/:?]
	at com.baidu.fsg.uid.impl.DefaultUidGenerator.afterPropertiesSet(DefaultUidGenerator.java:89) ~[classes/:?]
	at com.baidu.fsg.uid.impl.CachedUidGenerator.afterPropertiesSet(CachedUidGenerator.java:69) ~[classes/:?]
	at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.invokeInitMethods(AbstractAutowireCapableBeanFactory.java:1862) ~[spring-beans-5.2.0.RELEASE.jar:5.2.0.RELEASE]
	at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.initializeBean(AbstractAutowireCapableBeanFactory.java:1799) ~[spring-beans-5.2.0.RELEASE.jar:5.2.0.RELEASE]
	at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.doCreateBean(AbstractAutowireCapableBeanFactory.java:595) ~[spring-beans-5.2.0.RELEASE.jar:5.2.0.RELEASE]
	at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.createBean(AbstractAutowireCapableBeanFactory.java:517) ~[spring-beans-5.2.0.RELEASE.jar:5.2.0.RELEASE]
	at org.springframework.beans.factory.support.AbstractBeanFactory.lambda$doGetBean$0(AbstractBeanFactory.java:323) ~[spring-beans-5.2.0.RELEASE.jar:5.2.0.RELEASE]
	at org.springframework.beans.factory.support.DefaultSingletonBeanRegistry.getSingleton(DefaultSingletonBeanRegistry.java:222) ~[spring-beans-5.2.0.RELEASE.jar:5.2.0.RELEASE]
	at org.springframework.beans.factory.support.AbstractBeanFactory.doGetBean(AbstractBeanFactory.java:321) ~[spring-beans-5.2.0.RELEASE.jar:5.2.0.RELEASE]
	at org.springframework.beans.factory.support.AbstractBeanFactory.getBean(AbstractBeanFactory.java:202) ~[spring-beans-5.2.0.RELEASE.jar:5.2.0.RELEASE]
	at org.springframework.beans.factory.config.DependencyDescriptor.resolveCandidate(DependencyDescriptor.java:276) ~[spring-beans-5.2.0.RELEASE.jar:5.2.0.RELEASE]
	at org.springframework.beans.factory.support.DefaultListableBeanFactory.doResolveDependency(DefaultListableBeanFactory.java:1287) ~[spring-beans-5.2.0.RELEASE.jar:5.2.0.RELEASE]
	at org.springframework.beans.factory.support.DefaultListableBeanFactory.resolveDependency(DefaultListableBeanFactory.java:1207) ~[spring-beans-5.2.0.RELEASE.jar:5.2.0.RELEASE]
	at org.springframework.context.annotation.CommonAnnotationBeanPostProcessor.autowireResource(CommonAnnotationBeanPostProcessor.java:537) ~[spring-context-5.2.0.RELEASE.jar:5.2.0.RELEASE]
	at org.springframework.context.annotation.CommonAnnotationBeanPostProcessor.getResource(CommonAnnotationBeanPostProcessor.java:513) ~[spring-context-5.2.0.RELEASE.jar:5.2.0.RELEASE]
	at org.springframework.context.annotation.CommonAnnotationBeanPostProcessor$ResourceElement.getResourceToInject(CommonAnnotationBeanPostProcessor.java:653) ~[spring-context-5.2.0.RELEASE.jar:5.2.0.RELEASE]
	at org.springframework.beans.factory.annotation.InjectionMetadata$InjectedElement.inject(InjectionMetadata.java:224) ~[spring-beans-5.2.0.RELEASE.jar:5.2.0.RELEASE]
	at org.springframework.beans.factory.annotation.InjectionMetadata.inject(InjectionMetadata.java:116) ~[spring-beans-5.2.0.RELEASE.jar:5.2.0.RELEASE]
	at org.springframework.context.annotation.CommonAnnotationBeanPostProcessor.postProcessProperties(CommonAnnotationBeanPostProcessor.java:334) ~[spring-context-5.2.0.RELEASE.jar:5.2.0.RELEASE]
	at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.populateBean(AbstractAutowireCapableBeanFactory.java:1429) ~[spring-beans-5.2.0.RELEASE.jar:5.2.0.RELEASE]
	at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.doCreateBean(AbstractAutowireCapableBeanFactory.java:594) ~[spring-beans-5.2.0.RELEASE.jar:5.2.0.RELEASE]
	at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.createBean(AbstractAutowireCapableBeanFactory.java:517) ~[spring-beans-5.2.0.RELEASE.jar:5.2.0.RELEASE]
	at org.springframework.beans.factory.support.AbstractBeanFactory.lambda$doGetBean$0(AbstractBeanFactory.java:323) ~[spring-beans-5.2.0.RELEASE.jar:5.2.0.RELEASE]
	at org.springframework.beans.factory.support.DefaultSingletonBeanRegistry.getSingleton(DefaultSingletonBeanRegistry.java:222) [spring-beans-5.2.0.RELEASE.jar:5.2.0.RELEASE]
	at org.springframework.beans.factory.support.AbstractBeanFactory.doGetBean(AbstractBeanFactory.java:321) [spring-beans-5.2.0.RELEASE.jar:5.2.0.RELEASE]
	at org.springframework.beans.factory.support.AbstractBeanFactory.getBean(AbstractBeanFactory.java:202) [spring-beans-5.2.0.RELEASE.jar:5.2.0.RELEASE]
	at org.springframework.beans.factory.support.DefaultListableBeanFactory.preInstantiateSingletons(DefaultListableBeanFactory.java:879) [spring-beans-5.2.0.RELEASE.jar:5.2.0.RELEASE]
	at org.springframework.context.support.AbstractApplicationContext.finishBeanFactoryInitialization(AbstractApplicationContext.java:878) [spring-context-5.2.0.RELEASE.jar:5.2.0.RELEASE]
	at org.springframework.context.support.AbstractApplicationContext.refresh(AbstractApplicationContext.java:550) [spring-context-5.2.0.RELEASE.jar:5.2.0.RELEASE]
	at org.springframework.boot.web.servlet.context.ServletWebServerApplicationContext.refresh(ServletWebServerApplicationContext.java:141) [spring-boot-2.2.0.RELEASE.jar:2.2.0.RELEASE]
	at org.springframework.boot.SpringApplication.refresh(SpringApplication.java:747) [spring-boot-2.2.0.RELEASE.jar:2.2.0.RELEASE]
	at org.springframework.boot.SpringApplication.refreshContext(SpringApplication.java:397) [spring-boot-2.2.0.RELEASE.jar:2.2.0.RELEASE]
	at org.springframework.boot.SpringApplication.run(SpringApplication.java:315) [spring-boot-2.2.0.RELEASE.jar:2.2.0.RELEASE]
	at org.springframework.boot.SpringApplication.run(SpringApplication.java:1226) [spring-boot-2.2.0.RELEASE.jar:2.2.0.RELEASE]
	at org.springframework.boot.SpringApplication.run(SpringApplication.java:1215) [spring-boot-2.2.0.RELEASE.jar:2.2.0.RELEASE]
	at cn.com.icbf.framework.uid.UidStartApplication.main(UidStartApplication.java:20) [classes/:?]
Caused by: com.mysql.cj.exceptions.CJCommunicationsException: Communications link failure

Solution:

Newer MySQL drivers require the connection URL to state whether to connect over SSL; append ?useSSL=false to the JDBC URL.
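A sketch of the corrected datasource URL (host and database name are examples; Connector/J 8.x usually also wants an explicit serverTimezone):

jdbc:mysql://localhost:3306/test?useSSL=false&serverTimezone=UTC

Note also that the URL in the error title reads lcoalhost: a misspelled hostname produces the same "Communications link failure", so double-check the spelling too.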

[Solved] HBase Error: ERROR: org.apache.hadoop.hbase.PleaseHoldException: Master is initializing

Error reporting

After installing HBase and entering the shell for the first time, you are likely to hit this problem:

[root@zhiyong2 /]# hbase shell
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/opt/usdp-srv/srv/udp/2.0.0.0/hdfs/share/hadoop/common/lib/slf4j-log4j12-1.7.25.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/opt/usdp-srv/srv/udp/2.0.0.0/hbase/lib/client-facing-thirdparty/slf4j-log4j12-1.7.25.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
HBase Shell
Use "help" to get list of supported commands.
Use "exit" to quit this interactive shell.
For Reference, please visit: http://hbase.apache.org/2.0/book.html#shell
Version 2.1.10, rUnknown, Mon Nov 23 09:56:35 WIB 2020
Took 0.0036 seconds
hbase(main):001:0> list_namespace
NAMESPACE

ERROR: org.apache.hadoop.hbase.PleaseHoldException: Master is initializing
        at org.apache.hadoop.hbase.master.HMaster.checkInitialized(HMaster.java:3003)
        at org.apache.hadoop.hbase.master.HMaster.getNamespaces(HMaster.java:3299)
        at org.apache.hadoop.hbase.master.MasterRpcServices.listNamespaceDescriptors(MasterRpcServices.java:1237)
        at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java)
        at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:413)
        at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:133)
        at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:324)
        at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:304)

For usage try 'help "list_namespace"'

Took 9.8027 seconds

In this case waiting does not help; I waited for a while and the master still never finished initializing…

Solution

Because this is a new cluster, there is no useful data, so the metadata can simply be cleared to force reinitialization. Before clearing metadata, stop the HBase components in USDP's web UI.

Delete the ZooKeeper metadata

Since part of HBase's metadata is stored in ZooKeeper, remove that znode first:

[root@zhiyong2 /]# zkCli.sh
Connecting to localhost:2181
2022-03-03 00:09:47,689 [myid:] - INFO  [main:Environment@100] - Client environment:zookeeper.version=3.4.13-2d71af4dbe22557fda74f9a9b4309b15a7487f03, built on 06/29/2018 04:05 GMT
2022-03-03 00:09:47,693 [myid:] - INFO  [main:Environment@100] - Client environment:host.name=zhiyong2
2022-03-03 00:09:47,693 [myid:] - INFO  [main:Environment@100] - Client environment:java.version=1.8.0_202
2022-03-03 00:09:47,695 [myid:] - INFO  [main:Environment@100] - Client environment:java.vendor=Oracle Corporation
2022-03-03 00:09:47,695 [myid:] - INFO  [main:Environment@100] - Client environment:java.home=/usr/java/jdk1.8.0_202/jre
2022-03-03 00:09:47,696 [myid:] - INFO  [main:Environment@100] - Client environment:java.class.path=/srv/udp/2.0.0.0/zookeeper/bin/../zookeeper-server/target/classes:/srv/udp/2.0.0.0/zookeeper/bin/../build/classes:/srv/udp/2.0.0.0/zookeeper/bin/../zookeeper-server/target/lib/*.jar:/srv/udp/2.0.0.0/zookeeper/bin/../build/lib/*.jar:/srv/udp/2.0.0.0/zookeeper/bin/../lib/slf4j-log4j12-1.7.25.jar:/srv/udp/2.0.0.0/zookeeper/bin/../lib/slf4j-api-1.7.25.jar:/srv/udp/2.0.0.0/zookeeper/bin/../lib/netty-3.10.6.Final.jar:/srv/udp/2.0.0.0/zookeeper/bin/../lib/log4j-1.2.17.jar:/srv/udp/2.0.0.0/zookeeper/bin/../lib/jline-0.9.94.jar:/srv/udp/2.0.0.0/zookeeper/bin/../lib/audience-annotations-0.5.0.jar:/srv/udp/2.0.0.0/zookeeper/bin/../zookeeper-3.4.13.jar:/srv/udp/2.0.0.0/zookeeper/bin/../zookeeper-server/src/main/resources/lib/*.jar:/srv/udp/2.0.0.0/zookeeper/bin/../conf:.:/usr/java/jdk1.8.0_202/jre/lib/rt.jar:/usr/java/jdk1.8.0_202/lib/dt.jar:/usr/java/jdk1.8.0_202/lib/tools.jar
2022-03-03 00:09:47,696 [myid:] - INFO  [main:Environment@100] - Client environment:java.library.path=/usr/java/packages/lib/amd64:/usr/lib64:/lib64:/lib:/usr/lib
2022-03-03 00:09:47,696 [myid:] - INFO  [main:Environment@100] - Client environment:java.io.tmpdir=/tmp
2022-03-03 00:09:47,696 [myid:] - INFO  [main:Environment@100] - Client environment:java.compiler=<NA>
2022-03-03 00:09:47,696 [myid:] - INFO  [main:Environment@100] - Client environment:os.name=Linux
2022-03-03 00:09:47,696 [myid:] - INFO  [main:Environment@100] - Client environment:os.arch=amd64
2022-03-03 00:09:47,697 [myid:] - INFO  [main:Environment@100] - Client environment:os.version=3.10.0-957.el7.x86_64
2022-03-03 00:09:47,697 [myid:] - INFO  [main:Environment@100] - Client environment:user.name=root
2022-03-03 00:09:47,697 [myid:] - INFO  [main:Environment@100] - Client environment:user.home=/root
2022-03-03 00:09:47,697 [myid:] - INFO  [main:Environment@100] - Client environment:user.dir=/
2022-03-03 00:09:47,698 [myid:] - INFO  [main:ZooKeeper@442] - Initiating client connection, connectString=localhost:2181 sessionTimeout=30000 watcher=org.apache.zookeeper.ZooKeeperMain$MyWatcher@41906a77
2022-03-03 00:09:47,721 [myid:] - INFO  [main-SendThread(localhost:2181):ClientCnxn$SendThread@1029] - Opening socket connection to server localhost/127.0.0.1:2181. Will not attempt to authenticate using SASL (unknown error)
Welcome to ZooKeeper!
JLine support is enabled
2022-03-03 00:09:47,808 [myid:] - INFO  [main-SendThread(localhost:2181):ClientCnxn$SendThread@879] - Socket connection established to localhost/127.0.0.1:2181, initiating session
2022-03-03 00:09:47,817 [myid:] - INFO  [main-SendThread(localhost:2181):ClientCnxn$SendThread@1303] - Session establishment complete on server localhost/127.0.0.1:2181, sessionid = 0x1000009708f000e, negotiated timeout = 30000

WATCHER::

WatchedEvent state:SyncConnected type:None path:null
[zk: localhost:2181(CONNECTED) 0] ls /
[cluster, brokers, zookeeper, yarn-leader-election, hadoop-ha, admin, isr_change_notification, dolphinscheduler, log_dir_event_notification, controller_epoch, rmstore, consumers, latest_producer_id_block, config, hbase]
[zk: localhost:2181(CONNECTED) 1] rmr /hbase
[zk: localhost:2181(CONNECTED) 2] [root@zhiyong2 /]# ^C
[root@zhiyong2 /]# ^C
[root@zhiyong2 /]#

HDFS metadata deletion

Since part of HBase's data is stored in HDFS, the metadata HBase keeps there must also be deleted. Because USDP ships with Ranger and enforces real users and permissions, root cannot delete important HDFS data; switch to the hadoop user first.

[root@zhiyong2 /]# hadoop fs -rmr /hbase/data/hbase/meta/*
rmr: DEPRECATED: Please use '-rm -r' instead.
rmr: Failed to move to trash: hdfs://zhiyong-1/hbase/data/hbase/meta/.tabledesc: Permission denied: user=root, access=WRITE, inode="/hbase/data/hbase/meta":hadoop:supergroup:drwxr-xr-x
rmr: Failed to move to trash: hdfs://zhiyong-1/hbase/data/hbase/meta/.tmp: Permission denied: user=root, access=WRITE, inode="/hbase/data/hbase/meta":hadoop:supergroup:drwxr-xr-x
rmr: Failed to move to trash: hdfs://zhiyong-1/hbase/data/hbase/meta/1588230740: Permission denied: user=root, access=WRITE, inode="/hbase/data/hbase/meta":hadoop:supergroup:drwxr-xr-x
[root@zhiyong2 /]# cd /etc/passwd/
-bash: cd: /etc/passwd/: Not a directory
[root@zhiyong2 /]# cat /etc/passwd
root:x:0:0:root:/root:/bin/bash
bin:x:1:1:bin:/bin:/sbin/nologin
daemon:x:2:2:daemon:/sbin:/sbin/nologin
adm:x:3:4:adm:/var/adm:/sbin/nologin
lp:x:4:7:lp:/var/spool/lpd:/sbin/nologin
sync:x:5:0:sync:/sbin:/bin/sync
shutdown:x:6:0:shutdown:/sbin:/sbin/shutdown
halt:x:7:0:halt:/sbin:/sbin/halt
mail:x:8:12:mail:/var/spool/mail:/sbin/nologin
operator:x:11:0:operator:/root:/sbin/nologin
games:x:12:100:games:/usr/games:/sbin/nologin
ftp:x:14:50:FTP User:/var/ftp:/sbin/nologin
nobody:x:99:99:Nobody:/:/sbin/nologin
systemd-network:x:192:192:systemd Network Management:/:/sbin/nologin
dbus:x:81:81:System message bus:/:/sbin/nologin
polkitd:x:999:998:User for polkitd:/:/sbin/nologin
libstoragemgmt:x:998:997:daemon account for libstoragemgmt:/var/run/lsm:/sbin/nologin
abrt:x:173:173::/etc/abrt:/sbin/nologin
rpc:x:32:32:Rpcbind Daemon:/var/lib/rpcbind:/sbin/nologin
apache:x:48:48:Apache:/usr/share/httpd:/sbin/nologin
sshd:x:74:74:Privilege-separated SSH:/var/empty/sshd:/sbin/nologin
postfix:x:89:89::/var/spool/postfix:/sbin/nologin
ntp:x:38:38::/etc/ntp:/sbin/nologin
chrony:x:997:995::/var/lib/chrony:/sbin/nologin
tcpdump:x:72:72::/:/sbin/nologin
hadoop:x:1000:1000::/home/hadoop:/bin/bash
mysql:x:27:27:MySQL Server:/var/lib/mysql:/bin/false
saslauth:x:996:76:Saslauthd user:/run/saslauthd:/sbin/nologin
elastic:x:1001:1001::/home/elastic:/bin/bash
hue:x:1002:1002::/home/hue:/bin/bash
[root@zhiyong2 /]# su - hadoop
Last login: Thu Mar  3 00:24:33 CST 2022
[hadoop@zhiyong2 ~]$ hadoop fs -rmr /hbase/data/hbase/meta/*
rmr: DEPRECATED: Please use '-rm -r' instead.
2022-03-03 00:27:31 INFO fs.TrashPolicyDefault: Moved: 'hdfs://zhiyong-1/hbase/data/hbase/meta/.tabledesc' to trash at: hdfs://zhiyong-1/user/hadoop/.Trash/Current/hbase/data/hbase/meta/.tabledesc
2022-03-03 00:27:31 INFO fs.TrashPolicyDefault: Moved: 'hdfs://zhiyong-1/hbase/data/hbase/meta/.tmp' to trash at: hdfs://zhiyong-1/user/hadoop/.Trash/Current/hbase/data/hbase/meta/.tmp
2022-03-03 00:27:31 INFO fs.TrashPolicyDefault: Moved: 'hdfs://zhiyong-1/hbase/data/hbase/meta/1588230740' to trash at: hdfs://zhiyong-1/user/hadoop/.Trash/Current/hbase/data/hbase/meta/1588230740
[hadoop@zhiyong2 ~]$ hadoop fs -rmr /hbase/data/hbase/namespace/*
rmr: DEPRECATED: Please use '-rm -r' instead.
2022-03-03 00:27:34 INFO fs.TrashPolicyDefault: Moved: 'hdfs://zhiyong-1/hbase/data/hbase/namespace/.tabledesc' to trash at: hdfs://zhiyong-1/user/hadoop/.Trash/Current/hbase/data/hbase/namespace/.tabledesc
2022-03-03 00:27:34 INFO fs.TrashPolicyDefault: Moved: 'hdfs://zhiyong-1/hbase/data/hbase/namespace/.tmp' to trash at: hdfs://zhiyong-1/user/hadoop/.Trash/Current/hbase/data/hbase/namespace/.tmp
2022-03-03 00:27:34 INFO fs.TrashPolicyDefault: Moved: 'hdfs://zhiyong-1/hbase/data/hbase/namespace/98fb8a0448305b2f9af4f9a72495b6df' to trash at: hdfs://zhiyong-1/user/hadoop/.Trash/Current/hbase/data/hbase/namespace/98fb8a0448305b2f9af4f9a72495b6df
[hadoop@zhiyong2 ~]$ hadoop fs -rmr /hbase/MasterProcWALs/*
rmr: DEPRECATED: Please use '-rm -r' instead.
2022-03-03 00:27:38 INFO fs.TrashPolicyDefault: Moved: 'hdfs://zhiyong-1/hbase/MasterProcWALs/pv2-00000000000000000009.log' to trash at: hdfs://zhiyong-1/user/hadoop/.Trash/Current/hbase/MasterProcWALs/pv2-00000000000000000009.log
2022-03-03 00:27:38 INFO fs.TrashPolicyDefault: Moved: 'hdfs://zhiyong-1/hbase/MasterProcWALs/pv2-00000000000000000010.log' to trash at: hdfs://zhiyong-1/user/hadoop/.Trash/Current/hbase/MasterProcWALs/pv2-00000000000000000010.log
2022-03-03 00:27:38 INFO fs.TrashPolicyDefault: Moved: 'hdfs://zhiyong-1/hbase/MasterProcWALs/pv2-00000000000000000011.log' to trash at: hdfs://zhiyong-1/user/hadoop/.Trash/Current/hbase/MasterProcWALs/pv2-00000000000000000011.log
2022-03-03 00:27:38 INFO fs.TrashPolicyDefault: Moved: 'hdfs://zhiyong-1/hbase/MasterProcWALs/pv2-00000000000000000012.log' to trash at: hdfs://zhiyong-1/user/hadoop/.Trash/Current/hbase/MasterProcWALs/pv2-00000000000000000012.log
2022-03-03 00:27:38 INFO fs.TrashPolicyDefault: Moved: 'hdfs://zhiyong-1/hbase/MasterProcWALs/pv2-00000000000000000013.log' to trash at: hdfs://zhiyong-1/user/hadoop/.Trash/Current/hbase/MasterProcWALs/pv2-00000000000000000013.log
[hadoop@zhiyong2 ~]$ exit
logout
[root@zhiyong2 /]#

Restart HBase

It is available again after the restart:

[hadoop@zhiyong2 ~]$ hbase shell
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/opt/usdp-srv/srv/udp/2.0.0.0/hdfs/share/hadoop/common/lib/slf4j-log4j12-1.7.25.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/opt/usdp-srv/srv/udp/2.0.0.0/hbase/lib/client-facing-thirdparty/slf4j-log4j12-1.7.25.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
HBase Shell
Use "help" to get list of supported commands.
Use "exit" to quit this interactive shell.
For Reference, please visit: http://hbase.apache.org/2.0/book.html#shell
Version 2.1.10, rUnknown, Mon Nov 23 09:56:35 WIB 2020
Took 0.0045 seconds
hbase(main):001:0> list_namespace
list_namespace          list_namespace_tables
hbase(main):001:0> list_namespace
NAMESPACE
default
hbase
2 row(s)
Took 0.5645 seconds
hbase(main):002:0> exit
[hadoop@zhiyong2 ~]$ exit
logout
[root@zhiyong2 /]# hbase shell
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/opt/usdp-srv/srv/udp/2.0.0.0/hdfs/share/hadoop/common/lib/slf4j-log4j12-1.7.25.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/opt/usdp-srv/srv/udp/2.0.0.0/hbase/lib/client-facing-thirdparty/slf4j-log4j12-1.7.25.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
HBase Shell
Use "help" to get list of supported commands.
Use "exit" to quit this interactive shell.
For Reference, please visit: http://hbase.apache.org/2.0/book.html#shell
Version 2.1.10, rUnknown, Mon Nov 23 09:56:35 WIB 2020
Took 0.0035 seconds
hbase(main):001:0> list_namespace
list_namespace          list_namespace_tables
hbase(main):001:0> list_namespace
NAMESPACE
default
hbase
2 row(s)
Took 0.6270 seconds
hbase(main):002:0> exit
[root@zhiyong2 /]#

At this point HBase works for both root and the hadoop user. As the failed deletion above showed, the Linux root user does not necessarily hold the highest permissions over HDFS; that is decided by the HDFS component's own user and permission model.

Testlink 1.9.16 Error: Fatal error: Uncaught Error: Cannot use string offset as an array in D:\softest\htdocs\testlink\lib\execute\execSetResults.php

After installing TestLink 1.9.16 on Windows 10, running tests failed with this error:

Fatal error: Uncaught Error: Cannot use string offset as an array in D:\softest\htdocs\testlink\lib\execute\execSetResults.php:1533 Stack trace: #0 D:\softest\htdocs\testlink\lib\execute\execSetResults.php(93): processTestCase(NULL, Object(stdClass), Object(stdClass), Array, Object(tree), Object(testcase), Object(tlAttachmentRepository)) #1 {main} thrown in D:\softest\htdocs\testlink\lib\execute\execSetResults.php on line 1533

 

Open the file named in the error: D:\softest\htdocs\testlink\lib\execute\execSetResults.php

Open it with a text editor (e.g. Notepad) and search (Ctrl+F) for $finalFilters.

Replace

foreach($locationFilters as $locationKey => $filterValue)
{
    $finalFilters = $cf_filters + $filterValue;
    $guiObj->design_time_cfields[$tcase_id][$locationKey] = ........;

With

$finalFilters = array();
$guiObj->design_time_cfields = array();


After saving the file, refresh the TestLink page in the browser and the error is gone.