Flink 1.12 Integration with Hadoop 3.x Error [How to Solve]

Due to project requirements, we need to perform rule-based early warning over a period of time, so I first installed the Flink cluster and Hadoop on the project's Alibaba Cloud server. However, when running the example from the official website, an error was reported immediately. The specific information is as follows:

Caused by: java.lang.RuntimeException: org.apache.flink.runtime.client.JobInitializationException: Could not instantiate JobManager.
	at org.apache.flink.util.ExceptionUtils.rethrow(ExceptionUtils.java:309)
	at org.apache.flink.util.function.FunctionUtils.lambda$uncheckedFunction$2(FunctionUtils.java:76)
	at java.util.concurrent.CompletableFuture.uniApply(CompletableFuture.java:616)
	at java.util.concurrent.CompletableFuture$UniApply.tryFire(CompletableFuture.java:591)
	at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:457)
	at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:289)
	at java.util.concurrent.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1067)
	at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1703)
	at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:172)
Caused by: org.apache.flink.runtime.client.JobInitializationException: Could not instantiate JobManager.
	at org.apache.flink.runtime.dispatcher.Dispatcher.lambda$createJobManagerRunner$5(Dispatcher.java:463)
	at java.util.concurrent.CompletableFuture$AsyncSupply.run(CompletableFuture.java:1604)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
	at java.lang.Thread.run(Thread.java:748)
Caused by: org.apache.flink.util.FlinkRuntimeException: Failed to create checkpoint storage at checkpoint coordinator side.
	at org.apache.flink.runtime.checkpoint.CheckpointCoordinator.<init>(CheckpointCoordinator.java:316)
	at org.apache.flink.runtime.checkpoint.CheckpointCoordinator.<init>(CheckpointCoordinator.java:231)
	at org.apache.flink.runtime.executiongraph.ExecutionGraph.enableCheckpointing(ExecutionGraph.java:495)
	at org.apache.flink.runtime.executiongraph.ExecutionGraphBuilder.buildGraph(ExecutionGraphBuilder.java:347)
	at org.apache.flink.runtime.scheduler.SchedulerBase.createExecutionGraph(SchedulerBase.java:291)
	at org.apache.flink.runtime.scheduler.SchedulerBase.createAndRestoreExecutionGraph(SchedulerBase.java:256)
	at org.apache.flink.runtime.scheduler.SchedulerBase.<init>(SchedulerBase.java:238)
	at org.apache.flink.runtime.scheduler.DefaultScheduler.<init>(DefaultScheduler.java:134)
	at org.apache.flink.runtime.scheduler.DefaultSchedulerFactory.createInstance(DefaultSchedulerFactory.java:108)
	at org.apache.flink.runtime.jobmaster.JobMaster.createScheduler(JobMaster.java:323)
	at org.apache.flink.runtime.jobmaster.JobMaster.<init>(JobMaster.java:310)
	at org.apache.flink.runtime.jobmaster.factories.DefaultJobMasterServiceFactory.createJobMasterService(DefaultJobMasterServiceFactory.java:96)
	at org.apache.flink.runtime.jobmaster.factories.DefaultJobMasterServiceFactory.createJobMasterService(DefaultJobMasterServiceFactory.java:41)
	at org.apache.flink.runtime.jobmaster.JobManagerRunnerImpl.<init>(JobManagerRunnerImpl.java:141)
	at org.apache.flink.runtime.dispatcher.DefaultJobManagerRunnerFactory.createJobManagerRunner(DefaultJobManagerRunnerFactory.java:80)
	at org.apache.flink.runtime.dispatcher.Dispatcher.lambda$createJobManagerRunner$5(Dispatcher.java:450)
	... 4 more
Caused by: org.apache.flink.core.fs.UnsupportedFileSystemSchemeException: Could not find a file system implementation for scheme 'hdfs'. The scheme is not directly supported by Flink and no Hadoop file system to support this scheme could be loaded. For a full list of supported file systems, please see https://ci.apache.org/projects/flink/flink-docs-stable/ops/filesystems/.
	at org.apache.flink.core.fs.FileSystem.getUnguardedFileSystem(FileSystem.java:491)
	at org.apache.flink.core.fs.FileSystem.get(FileSystem.java:389)
	at org.apache.flink.core.fs.Path.getFileSystem(Path.java:292)
	at org.apache.flink.runtime.state.filesystem.FsCheckpointStorageAccess.<init>(FsCheckpointStorageAccess.java:64)
	at org.apache.flink.runtime.state.filesystem.FsStateBackend.createCheckpointStorage(FsStateBackend.java:501)
	at org.apache.flink.runtime.checkpoint.CheckpointCoordinator.<init>(CheckpointCoordinator.java:313)
	... 19 more
Caused by: org.apache.flink.core.fs.UnsupportedFileSystemSchemeException: Hadoop is not in the classpath/dependencies.
	at org.apache.flink.core.fs.UnsupportedSchemeFactory.create(UnsupportedSchemeFactory.java:58)
	at org.apache.flink.core.fs.FileSystem.getUnguardedFileSystem(FileSystem.java:487)
	... 24 more

Only later, after searching on Baidu, did I find that jar packages are needed to bridge Flink and HDFS, so I put the two jar packages into Flink's lib directory.
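
For reference, a sketch of this step. The post does not name the two jars; a common pairing for Flink 1.12 with Hadoop 3.x is the flink-shaded-hadoop-3 uber jar plus commons-cli, so the file names and versions below are assumptions:

cp flink-shaded-hadoop-3-uber-<version>.jar $FLINK_HOME/lib/
cp commons-cli-1.4.jar $FLINK_HOME/lib/

Alternatively, exporting HADOOP_CLASSPATH=$(hadoop classpath) before starting the cluster lets Flink pick up the Hadoop classes without copying jars into lib.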

Then, on running the job again, an error was still reported. The information is as follows:

[root@k8s-master flink-1.12.0]# bin/flink run examples/batch/WordCount.jar --input hdfs://k8s-master:9000/test/word.txt --output hdfs://k8s-master:9000/test/result.txt --parallelism 2
Job has been submitted with JobID 058a2b79197c7cdb1dd04269fcc2a0b6

------------------------------------------------------------
 The program finished with the following exception:

org.apache.flink.client.program.ProgramInvocationException: The main method caused an error: org.apache.flink.client.program.ProgramInvocationException: Job failed (JobID: 058a2b79197c7cdb1dd04269fcc2a0b6)
        at org.apache.flink.client.program.PackagedProgram.callMainMethod(PackagedProgram.java:330)
        at org.apache.flink.client.program.PackagedProgram.invokeInteractiveModeForExecution(PackagedProgram.java:198)
        at org.apache.flink.client.ClientUtils.executeProgram(ClientUtils.java:114)
        at org.apache.flink.client.cli.CliFrontend.executeProgram(CliFrontend.java:743)
        at org.apache.flink.client.cli.CliFrontend.run(CliFrontend.java:242)
        at org.apache.flink.client.cli.CliFrontend.parseAndRun(CliFrontend.java:971)
        at org.apache.flink.client.cli.CliFrontend.lambda$main$10(CliFrontend.java:1047)
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.security.auth.Subject.doAs(Subject.java:422)
        at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1876)
        at org.apache.flink.runtime.security.contexts.HadoopSecurityContext.runSecured(HadoopSecurityContext.java:41)
        at org.apache.flink.client.cli.CliFrontend.main(CliFrontend.java:1047)
Caused by: java.util.concurrent.ExecutionException: org.apache.flink.client.program.ProgramInvocationException: Job failed (JobID: 058a2b79197c7cdb1dd04269fcc2a0b6)
        at java.util.concurrent.CompletableFuture.reportGet(CompletableFuture.java:357)
        at java.util.concurrent.CompletableFuture.get(CompletableFuture.java:1908)
        at org.apache.flink.client.program.ContextEnvironment.getJobExecutionResult(ContextEnvironment.java:112)
        at org.apache.flink.client.program.ContextEnvironment.execute(ContextEnvironment.java:76)
        at org.apache.flink.examples.java.wordcount.WordCount.main(WordCount.java:93)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:498)
        at org.apache.flink.client.program.PackagedProgram.callMainMethod(PackagedProgram.java:316)
        ... 11 more
Caused by: org.apache.flink.client.program.ProgramInvocationException: Job failed (JobID: 058a2b79197c7cdb1dd04269fcc2a0b6)
        at org.apache.flink.client.deployment.ClusterClientJobClientAdapter.lambda$null$6(ClusterClientJobClientAdapter.java:119)
        at java.util.concurrent.CompletableFuture.uniApply(CompletableFuture.java:616)
        at java.util.concurrent.CompletableFuture$UniApply.tryFire(CompletableFuture.java:591)
        at java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:488)
        at java.util.concurrent.CompletableFuture.complete(CompletableFuture.java:1975)
        at org.apache.flink.client.program.rest.RestClusterClient.lambda$pollResourceAsync$22(RestClusterClient.java:602)
        at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:774)
        at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:750)
        at java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:488)
        at java.util.concurrent.CompletableFuture.complete(CompletableFuture.java:1975)
        at org.apache.flink.runtime.concurrent.FutureUtils.lambda$retryOperationWithDelay$9(FutureUtils.java:379)
        at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:774)
        at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:750)
        at java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:488)
        at java.util.concurrent.CompletableFuture.postFire(CompletableFuture.java:575)
        at java.util.concurrent.CompletableFuture$UniCompose.tryFire(CompletableFuture.java:943)
        at java.util.concurrent.CompletableFuture$Completion.run(CompletableFuture.java:456)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
        at java.lang.Thread.run(Thread.java:748)
Caused by: org.apache.flink.runtime.client.JobExecutionException: Job execution failed.
        at org.apache.flink.runtime.jobmaster.JobResult.toJobExecutionResult(JobResult.java:147)
        at org.apache.flink.client.deployment.ClusterClientJobClientAdapter.lambda$null$6(ClusterClientJobClientAdapter.java:117)
        ... 19 more
Caused by: org.apache.flink.runtime.JobException: Recovery is suppressed by NoRestartBackoffTimeStrategy
        at org.apache.flink.runtime.executiongraph.failover.flip1.ExecutionFailureHandler.handleFailure(ExecutionFailureHandler.java:116)
        at org.apache.flink.runtime.executiongraph.failover.flip1.ExecutionFailureHandler.getFailureHandlingResult(ExecutionFailureHandler.java:78)
        at org.apache.flink.runtime.scheduler.DefaultScheduler.handleTaskFailure(DefaultScheduler.java:224)
        at org.apache.flink.runtime.scheduler.DefaultScheduler.maybeHandleTaskFailure(DefaultScheduler.java:217)
        at org.apache.flink.runtime.scheduler.DefaultScheduler.updateTaskExecutionStateInternal(DefaultScheduler.java:208)
        at org.apache.flink.runtime.scheduler.SchedulerBase.updateTaskExecutionState(SchedulerBase.java:610)
        at org.apache.flink.runtime.scheduler.SchedulerNG.updateTaskExecutionState(SchedulerNG.java:89)
        at org.apache.flink.runtime.jobmaster.JobMaster.updateTaskExecutionState(JobMaster.java:419)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:498)
        at org.apache.flink.runtime.rpc.akka.AkkaRpcActor.handleRpcInvocation(AkkaRpcActor.java:286)
        at org.apache.flink.runtime.rpc.akka.AkkaRpcActor.handleRpcMessage(AkkaRpcActor.java:201)
        at org.apache.flink.runtime.rpc.akka.FencedAkkaRpcActor.handleRpcMessage(FencedAkkaRpcActor.java:74)
        at org.apache.flink.runtime.rpc.akka.AkkaRpcActor.handleMessage(AkkaRpcActor.java:154)
        at akka.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:26)
        at akka.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:21)
        at scala.PartialFunction.applyOrElse(PartialFunction.scala:123)
        at scala.PartialFunction.applyOrElse$(PartialFunction.scala:122)
        at akka.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:21)
        at scala.PartialFunction$OrElse.applyOrElse(PartialFunction.scala:171)
        at scala.PartialFunction$OrElse.applyOrElse(PartialFunction.scala:172)
        at scala.PartialFunction$OrElse.applyOrElse(PartialFunction.scala:172)
        at akka.actor.Actor.aroundReceive(Actor.scala:517)
        at akka.actor.Actor.aroundReceive$(Actor.scala:515)
        at akka.actor.AbstractActor.aroundReceive(AbstractActor.scala:225)
        at akka.actor.ActorCell.receiveMessage(ActorCell.scala:592)
        at akka.actor.ActorCell.invoke(ActorCell.scala:561)
        at akka.dispatch.Mailbox.processMailbox(Mailbox.scala:258)
        at akka.dispatch.Mailbox.run(Mailbox.scala:225)
        at akka.dispatch.Mailbox.exec(Mailbox.scala:235)
        at akka.dispatch.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260)
        at akka.dispatch.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1339)
        at akka.dispatch.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979)
        at akka.dispatch.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107)
Caused by: org.apache.flink.core.fs.UnsupportedFileSystemSchemeException: Could not find a file system implementation for scheme 'hdfs'. The scheme is not directly supported by Flink and no Hadoop file system to support this scheme could be loaded. For a full list of supported file systems, please see https://ci.apache.org/projects/flink/flink-docs-stable/ops/filesystems/.
        at org.apache.flink.core.fs.FileSystem.getUnguardedFileSystem(FileSystem.java:491)
        at org.apache.flink.core.fs.FileSystem.get(FileSystem.java:389)
        at org.apache.flink.core.fs.Path.getFileSystem(Path.java:292)
        at org.apache.flink.api.common.io.FileOutputFormat.open(FileOutputFormat.java:218)
        at org.apache.flink.api.java.io.CsvOutputFormat.open(CsvOutputFormat.java:161)
        at org.apache.flink.runtime.operators.DataSinkTask.invoke(DataSinkTask.java:209)
        at org.apache.flink.runtime.taskmanager.Task.doRun(Task.java:722)
        at org.apache.flink.runtime.taskmanager.Task.run(Task.java:547)
        at java.lang.Thread.run(Thread.java:748)
Caused by: org.apache.flink.core.fs.UnsupportedFileSystemSchemeException: Hadoop is not in the classpath/dependencies.
        at org.apache.flink.core.fs.UnsupportedSchemeFactory.create(UnsupportedSchemeFactory.java:58)
        at org.apache.flink.core.fs.FileSystem.getUnguardedFileSystem(FileSystem.java:487)
        ... 8 more

Since I installed a Flink cluster here, I also need to scp the two jars to the other two nodes; after that, everything runs normally.
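
For example, a sketch of distributing the jars, with hypothetical node names node2/node3 and a placeholder install path:

scp lib/flink-shaded-hadoop-3-uber-<version>.jar lib/commons-cli-1.4.jar root@node2:/path/to/flink-1.12.0/lib/
scp lib/flink-shaded-hadoop-3-uber-<version>.jar lib/commons-cli-1.4.jar root@node3:/path/to/flink-1.12.0/lib/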

[Solved] MIT Cheetah make error: 'ioctl' was not declared in this scope

Question:

error: ‘ioctl’ was not declared in this scope
   35 |   ioctl(fd, TCGETS2, &tty);

Solution:
Open Cheetah-Software-master/robot/src/rt/rt_serial.cpp
Change

#define termios asmtermios

#include <asm/termios.h>

#undef termios

#include <termios.h>
#include <math.h>
#include <pthread.h>
#include <stropts.h>
#include <endian.h>
#include <stdint.h>

to

#define termios asmtermios

//#include <asm/termios.h>
#include <asm/ioctls.h>
#include <asm/termbits.h>

#undef termios

#include <sys/ioctl.h>
#include <termios.h>
#include <math.h>
#include <pthread.h>
#include <stropts.h>
#include <endian.h>
#include <stdint.h>

An error is reported when the jeecg-boot project connects to the MySQL database running on Docker

An error is reported when the jeecg-boot project connects to the MySQL database running on Docker. The error information is:

2021-08-23 11:39:45.271 [MyScheduler_QuartzSchedulerThread] ERROR druid.sql.Statement:149 - {conn-10004, pstmt-20010} execute error. SELECT TRIGGER_NAME, TRIGGER_GROUP, NEXT_FIRE_TIME, PRIORITY FROM QRTZ_TRIGGERS WHERE SCHED_NAME = 'MyScheduler' AND TRIGGER_STATE = ? AND NEXT_FIRE_TIME <= ? AND (MISFIRE_INSTR = -1 OR (MISFIRE_INSTR != -1 AND NEXT_FIRE_TIME >= ?)) ORDER BY NEXT_FIRE_TIME ASC, PRIORITY DESC
java.sql.SQLSyntaxErrorException: Table 'water-cloud-dev.QRTZ_TRIGGERS' doesn't exist
	at com.mysql.cj.jdbc.exceptions.SQLError.createSQLException(SQLError.java:120)
	at com.mysql.cj.jdbc.exceptions.SQLError.createSQLException(SQLError.java:97)
	at com.mysql.cj.jdbc.exceptions.SQLExceptionsMapping.translateException(SQLExceptionsMapping.java:122)
	at com.mysql.cj.jdbc.ClientPreparedStatement.executeInternal(ClientPreparedStatement.java:953)
	at com.mysql.cj.jdbc.ClientPreparedStatement.executeQuery(ClientPreparedStatement.java:1003)
	at com.alibaba.druid.filter.FilterChainImpl.preparedStatement_executeQuery(FilterChainImpl.java:3240)
	at com.alibaba.druid.filter.FilterEventAdapter.preparedStatement_executeQuery(FilterEventAdapter.java:465)
	at com.alibaba.druid.filter.FilterChainImpl.preparedStatement_executeQuery(FilterChainImpl.java:3237)
	at com.alibaba.druid.wall.WallFilter.preparedStatement_executeQuery(WallFilter.java:647)
	at com.alibaba.druid.filter.FilterChainImpl.preparedStatement_executeQuery(FilterChainImpl.java:3237)
	at com.alibaba.druid.filter.FilterEventAdapter.preparedStatement_executeQuery(FilterEventAdapter.java:465)
	at com.alibaba.druid.filter.FilterChainImpl.preparedStatement_executeQuery(FilterChainImpl.java:3237)
	at com.alibaba.druid.proxy.jdbc.PreparedStatementProxyImpl.executeQuery(PreparedStatementProxyImpl.java:181)
	at com.alibaba.druid.pool.DruidPooledPreparedStatement.executeQuery(DruidPooledPreparedStatement.java:227)
	at org.quartz.impl.jdbcjobstore.StdJDBCDelegate.selectTriggerToAcquire(StdJDBCDelegate.java:2613)
	at org.quartz.impl.jdbcjobstore.JobStoreSupport.acquireNextTrigger(JobStoreSupport.java:2844)
	at org.quartz.impl.jdbcjobstore.JobStoreSupport$41.execute(JobStoreSupport.java:2805)
	at org.quartz.impl.jdbcjobstore.JobStoreSupport$41.execute(JobStoreSupport.java:2803)
	at org.quartz.impl.jdbcjobstore.JobStoreSupport.executeInNonManagedTXLock(JobStoreSupport.java:3864)
	at org.quartz.impl.jdbcjobstore.JobStoreSupport.acquireNextTriggers(JobStoreSupport.java:2802)
	at org.quartz.core.QuartzSchedulerThread.run(QuartzSchedulerThread.java:287)

Error reason: table names in the MySQL database running on Docker are case-sensitive, so the uppercase QRTZ_TRIGGERS in the query does not match the table name as it actually exists.

Solution (I):

Delete the MySQL container and re-create a new container with the case-sensitivity parameter lower_case_table_names=1, for example:

docker run --name mysql -p 3306:3306 -e MYSQL_ROOT_PASSWORD=123456 -d mysql --lower_case_table_names=1

Solution (II):

1. Run the docker ps command to view the MySQL container ID:

C:\Users\Administrator>docker ps
CONTAINER ID   IMAGE     COMMAND                  CREATED      STATUS          PORTS                                                  NAMES
86136bf6eebe   redis     "docker-entrypoint.s…"   3 days ago   Up 54 minutes   0.0.0.0:6379->6379/tcp, :::6379->6379/tcp              redis
5721f675c819   mysql     "docker-entrypoint.s…"   4 days ago   Up 54 minutes   0.0.0.0:3306->3306/tcp, :::3306->3306/tcp, 33060/tcp   mysql

2. Run the docker exec command to enter the MySQL container:

C:\Users\Administrator>docker exec -it 5721f675c819 bash
root@5721f675c819:/#

3. Run the apt-get update command to update the package lists:

root@5721f675c819:/# apt-get update
Get:1 http://deb.debian.org/debian buster InRelease [122 kB]
Get:2 http://security.debian.org/debian-security buster/updates InRelease [65.4 kB]
Get:3 http://repo.mysql.com/apt/debian buster InRelease [21.5 kB]
Get:4 http://repo.mysql.com/apt/debian buster/mysql-8.0 amd64 Packages [8341 B]
Get:5 http://deb.debian.org/debian buster-updates InRelease [51.9 kB]
Get:6 http://deb.debian.org/debian buster/main amd64 Packages [7907 kB]
Get:7 http://security.debian.org/debian-security buster/updates/main amd64 Packages [301 kB]
Get:8 http://deb.debian.org/debian buster-updates/main amd64 Packages [15.2 kB]
Fetched 8492 kB in 7min 56s (17.8 kB/s)
Reading package lists... Done

4. Run the apt-get install vim command to install the vim editor:

root@5721f675c819:/# apt-get install vim
Reading package lists... Done
Building dependency tree
Reading state information... Done
The following additional packages will be installed:
  vim-common vim-runtime xxd
Suggested packages:
  ctags vim-doc vim-scripts
The following NEW packages will be installed:
  vim vim-common vim-runtime xxd
0 upgraded, 4 newly installed, 0 to remove and 0 not upgraded.
Need to get 7390 kB of archives.
After this operation, 33.7 MB of additional disk space will be used.
Do you want to continue? [Y/n] y
Get:1 http://deb.debian.org/debian buster/main amd64 xxd amd64 2:8.1.0875-5 [140 kB]
Get:2 http://deb.debian.org/debian buster/main amd64 vim-common all 2:8.1.0875-5 [195 kB]
Get:3 http://deb.debian.org/debian buster/main amd64 vim-runtime all 2:8.1.0875-5 [5775 kB]
Get:4 http://deb.debian.org/debian buster/main amd64 vim amd64 2:8.1.0875-5 [1280 kB]
Fetched 7390 kB in 5min 35s (22.0 kB/s)
debconf: delaying package configuration, since apt-utils is not installed
Selecting previously unselected package xxd.
(Reading database ... 9284 files and directories currently installed.)
Preparing to unpack .../xxd_2%3a8.1.0875-5_amd64.deb ...
Unpacking xxd (2:8.1.0875-5) ...
Selecting previously unselected package vim-common.
Preparing to unpack .../vim-common_2%3a8.1.0875-5_all.deb ...
Unpacking vim-common (2:8.1.0875-5) ...
Selecting previously unselected package vim-runtime.
Preparing to unpack .../vim-runtime_2%3a8.1.0875-5_all.deb ...
Adding 'diversion of /usr/share/vim/vim81/doc/help.txt to /usr/share/vim/vim81/doc/help.txt.vim-tiny by vim-runtime'
Adding 'diversion of /usr/share/vim/vim81/doc/tags to /usr/share/vim/vim81/doc/tags.vim-tiny by vim-runtime'
Unpacking vim-runtime (2:8.1.0875-5) ...
Selecting previously unselected package vim.
Preparing to unpack .../vim_2%3a8.1.0875-5_amd64.deb ...
Unpacking vim (2:8.1.0875-5) ...
Setting up xxd (2:8.1.0875-5) ...
Setting up vim-common (2:8.1.0875-5) ...
Setting up vim-runtime (2:8.1.0875-5) ...
Setting up vim (2:8.1.0875-5) ...
update-alternatives: using /usr/bin/vim.basic to provide /usr/bin/vim (vim) in auto mode
update-alternatives: using /usr/bin/vim.basic to provide /usr/bin/vimdiff (vimdiff) in auto mode
update-alternatives: using /usr/bin/vim.basic to provide /usr/bin/rvim (rvim) in auto mode
update-alternatives: using /usr/bin/vim.basic to provide /usr/bin/rview (rview) in auto mode
update-alternatives: using /usr/bin/vim.basic to provide /usr/bin/vi (vi) in auto mode
update-alternatives: warning: skip creation of /usr/share/man/da/man1/vi.1.gz because associated file /usr/share/man/da/man1/vim.1.gz (of link group vi) doesn't exist
update-alternatives: warning: skip creation of /usr/share/man/de/man1/vi.1.gz because associated file /usr/share/man/de/man1/vim.1.gz (of link group vi) doesn't exist
update-alternatives: warning: skip creation of /usr/share/man/fr/man1/vi.1.gz because associated file /usr/share/man/fr/man1/vim.1.gz (of link group vi) doesn't exist
update-alternatives: warning: skip creation of /usr/share/man/it/man1/vi.1.gz because associated file /usr/share/man/it/man1/vim.1.gz (of link group vi) doesn't exist
update-alternatives: warning: skip creation of /usr/share/man/ja/man1/vi.1.gz because associated file /usr/share/man/ja/man1/vim.1.gz (of link group vi) doesn't exist
update-alternatives: warning: skip creation of /usr/share/man/pl/man1/vi.1.gz because associated file /usr/share/man/pl/man1/vim.1.gz (of link group vi) doesn't exist
update-alternatives: warning: skip creation of /usr/share/man/ru/man1/vi.1.gz because associated file /usr/share/man/ru/man1/vim.1.gz (of link group vi) doesn't exist
update-alternatives: warning: skip creation of /usr/share/man/man1/vi.1.gz because associated file /usr/share/man/man1/vim.1.gz (of link group vi) doesn't exist
update-alternatives: using /usr/bin/vim.basic to provide /usr/bin/view (view) in auto mode
update-alternatives: warning: skip creation of /usr/share/man/da/man1/view.1.gz because associated file /usr/share/man/da/man1/vim.1.gz (of link group view) doesn't exist
update-alternatives: warning: skip creation of /usr/share/man/de/man1/view.1.gz because associated file /usr/share/man/de/man1/vim.1.gz (of link group view) doesn't exist
update-alternatives: warning: skip creation of /usr/share/man/fr/man1/view.1.gz because associated file /usr/share/man/fr/man1/vim.1.gz (of link group view) doesn't exist
update-alternatives: warning: skip creation of /usr/share/man/it/man1/view.1.gz because associated file /usr/share/man/it/man1/vim.1.gz (of link group view) doesn't exist
update-alternatives: warning: skip creation of /usr/share/man/ja/man1/view.1.gz because associated file /usr/share/man/ja/man1/vim.1.gz (of link group view) doesn't exist
update-alternatives: warning: skip creation of /usr/share/man/pl/man1/view.1.gz because associated file /usr/share/man/pl/man1/vim.1.gz (of link group view) doesn't exist
update-alternatives: warning: skip creation of /usr/share/man/ru/man1/view.1.gz because associated file /usr/share/man/ru/man1/vim.1.gz (of link group view) doesn't exist
update-alternatives: warning: skip creation of /usr/share/man/man1/view.1.gz because associated file /usr/share/man/man1/vim.1.gz (of link group view) doesn't exist
update-alternatives: using /usr/bin/vim.basic to provide /usr/bin/ex (ex) in auto mode
update-alternatives: warning: skip creation of /usr/share/man/da/man1/ex.1.gz because associated file /usr/share/man/da/man1/vim.1.gz (of link group ex) doesn't exist
update-alternatives: warning: skip creation of /usr/share/man/de/man1/ex.1.gz because associated file /usr/share/man/de/man1/vim.1.gz (of link group ex) doesn't exist
update-alternatives: warning: skip creation of /usr/share/man/fr/man1/ex.1.gz because associated file /usr/share/man/fr/man1/vim.1.gz (of link group ex) doesn't exist
update-alternatives: warning: skip creation of /usr/share/man/it/man1/ex.1.gz because associated file /usr/share/man/it/man1/vim.1.gz (of link group ex) doesn't exist
update-alternatives: warning: skip creation of /usr/share/man/ja/man1/ex.1.gz because associated file /usr/share/man/ja/man1/vim.1.gz (of link group ex) doesn't exist
update-alternatives: warning: skip creation of /usr/share/man/pl/man1/ex.1.gz because associated file /usr/share/man/pl/man1/vim.1.gz (of link group ex) doesn't exist
update-alternatives: warning: skip creation of /usr/share/man/ru/man1/ex.1.gz because associated file /usr/share/man/ru/man1/vim.1.gz (of link group ex) doesn't exist
update-alternatives: warning: skip creation of /usr/share/man/man1/ex.1.gz because associated file /usr/share/man/man1/vim.1.gz (of link group ex) doesn't exist
update-alternatives: using /usr/bin/vim.basic to provide /usr/bin/editor (editor) in auto mode
update-alternatives: warning: skip creation of /usr/share/man/da/man1/editor.1.gz because associated file /usr/share/man/da/man1/vim.1.gz (of link group editor) doesn't exist
update-alternatives: warning: skip creation of /usr/share/man/de/man1/editor.1.gz because associated file /usr/share/man/de/man1/vim.1.gz (of link group editor) doesn't exist
update-alternatives: warning: skip creation of /usr/share/man/fr/man1/editor.1.gz because associated file /usr/share/man/fr/man1/vim.1.gz (of link group editor) doesn't exist
update-alternatives: warning: skip creation of /usr/share/man/it/man1/editor.1.gz because associated file /usr/share/man/it/man1/vim.1.gz (of link group editor) doesn't exist
update-alternatives: warning: skip creation of /usr/share/man/ja/man1/editor.1.gz because associated file /usr/share/man/ja/man1/vim.1.gz (of link group editor) doesn't exist
update-alternatives: warning: skip creation of /usr/share/man/pl/man1/editor.1.gz because associated file /usr/share/man/pl/man1/vim.1.gz (of link group editor) doesn't exist
update-alternatives: warning: skip creation of /usr/share/man/ru/man1/editor.1.gz because associated file /usr/share/man/ru/man1/vim.1.gz (of link group editor) doesn't exist
update-alternatives: warning: skip creation of /usr/share/man/man1/editor.1.gz because associated file /usr/share/man/man1/vim.1.gz (of link group editor) doesn't exist
root@5721f675c819:/etc/mysql# apt-get install vim
Reading package lists... Done
Building dependency tree
Reading state information... Done
vim is already the newest version (2:8.1.0875-5).
0 upgraded, 0 newly installed, 0 to remove and 0 not upgraded.
root@5721f675c819:/#

5. Enter the MySQL configuration folder:

root@5721f675c819:/# cd /etc/mysql
root@5721f675c819:/etc/mysql# ls
conf.d  my.cnf  my.cnf.fallback

6. Modify the my.cnf configuration file:

root@5721f675c819:/etc/mysql# vi my.cnf
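
The post does not show the edited content; as a sketch, the change is to add the case-sensitivity setting under the [mysqld] section of my.cnf:

[mysqld]
lower_case_table_names=1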

7. Restart the MySQL service:

Note: the restart may fail. With MySQL 8, lower_case_table_names must be set when the data directory is initialized; changing it for an existing data directory prevents the server from starting, which is why Solution (I) re-creates the container.

8. View the modification results:
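
The screenshot of the result is missing here; one way to check the setting from inside the container would be:

mysql -uroot -p -e "SHOW VARIABLES LIKE 'lower_case_table_names';"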

Graphviz error: FileNotFoundError [How to Solve]

First, run pip install graphviz in CMD.

Then enter dot -version in CMD. If it is not recognized:

Note: pip install graphviz is not a complete installation by itself, because it only installs the Python interface to Graphviz. To actually use it, you also need to download the Graphviz installer. Go to the official website to download it:

Graphviz official website

After downloading and running the installer, remember to select the option to add Graphviz to the PATH.

Re-open CMD and enter dot -version; when version information is printed, the installation has succeeded.

After installation, the FileNotFoundError is still reported. Right-click My Computer → Advanced system settings → Environment Variables and open "Path": the Graphviz path clearly already exists there.

Check which PATH the Python process actually sees by entering the following in a Python session:

import os
os.environ['PATH']
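
If the Graphviz bin directory is missing from that output, one workaround is to append it at runtime before calling Graphviz (the install path below is an assumption; use the directory chosen during installation):

import os

# hypothetical install location chosen in the Graphviz installer
os.environ['PATH'] += os.pathsep + r'C:\Program Files\Graphviz\bin'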

Finally, restart the computer so that the updated PATH takes effect.

configure.ac: error: possibly undefined macro [How to Solve]

When compiling libzmq on Ubuntu, an error is reported while generating the configure script and makefiles:

libzmq-master$ ./autogen.sh
autoreconf: Entering directory `.’
autoreconf: configure.ac: not using Gettext
autoreconf: running: aclocal -I config --force -I config
autoreconf: configure.ac: tracing
autoreconf: configure.ac: not using Libtool
autoreconf: running: /usr/bin/autoconf --include=config --force
configure.ac:28: error: possibly undefined macro: AC_SUBST
If this token and others are legitimate, please use m4_pattern_allow.
See the Autoconf documentation.
configure.ac:74: error: missing some pkg-config macros (pkg-config package)
configure.ac:83: error: possibly undefined macro: AC_LIBTOOL_WIN32_DLL
configure.ac:84: error: possibly undefined macro: AC_PROG_LIBTOOL
configure.ac:132: error: possibly undefined macro: AC_MSG_RESULT
configure.ac:145: error: possibly undefined macro: AC_DEFINE
configure.ac:253: error: possibly undefined macro: AC_CHECK_LIB
configure.ac:335: error: possibly undefined macro: AC_CHECK_HEADERS
configure.ac:337: error: possibly undefined macro: AC_MSG_ERROR
configure.ac:521: error: missing some pkg-config macros (pkg-config package)
configure.ac:526: error: possibly undefined macro: AC_SEARCH_LIBS
configure.ac:555: error: possibly undefined macro: AC_MSG_NOTICE
configure.ac:831: error: possibly undefined macro: AC_MSG_WARN
configure:6458: error: possibly undefined macro: AC_DISABLE_STATIC
configure:6462: error: possibly undefined macro: AC_ENABLE_STATIC
autoreconf: /usr/bin/autoconf failed with exit status: 1
autogen.sh: error: autoreconf exited with status 1

The commonly suggested fix is

sudo apt install automake libtool m4 autoconf

but I found that all of these were already installed, so I tried other approaches.
Software installed via apt-get usually lives under the /usr directory. Since the error says AC_PROG_LIBTOOL cannot be found, there are two likely causes:
1. The package was not installed, or a version problem means AC_PROG_LIBTOOL is not defined anywhere.
2. The macro is defined, but there is a problem with the search path.
To tell these apart, search for AC_PROG_LIBTOOL globally under the /usr directory, as shown below.
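
A sketch of that search, listing the files that define the macro (it may take a moment over all of /usr):

grep -rl "AC_PROG_LIBTOOL" /usr 2>/dev/null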


The output shows that AC_PROG_LIBTOOL can be found in the m4 files, so it is probably a path issue.
Most m4 files are in the /usr/share/aclocal/ directory, but the default aclocal path for configure is actually /usr/local/share/aclocal.
There are two ways to fix it.
First, copy all the *.m4 files under /usr/share/aclocal/ into the /usr/local/share/aclocal/ directory:

sudo cp /usr/share/aclocal/*.m4 /usr/local/share/aclocal/

Running ./autogen.sh again, it passes.
Second, alternatively, point aclocal at the system macro directory; see the sketch below.
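
A sketch of the second approach, assuming a version of automake whose aclocal honors the ACLOCAL_PATH environment variable:

export ACLOCAL_PATH=/usr/share/aclocal
./autogen.sh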

[Solved] Response Export error on submit request on future invoke, java.lang.OutOfMemoryError: Java heap space

Background

The HSF distributed framework implements Excel export based on EasyExcel. When the controller layer sends a GET request and passes the HttpServletRequest and HttpServletResponse down to the business service, an exception occurs.

Reason

In a distributed application, when the controller layer hands the response over to the middle-tier service, the response is closed prematurely.

Solution

The export implementation is changed to: a POST endpoint receives the parameters, writes the export into a ByteArrayOutputStream, uploads it to the file server, and returns an upload ID; a sketch follows.
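
A minimal sketch of that idea using the EasyExcel API (ExportRow, FileClient, and the file name are hypothetical placeholders, not the project's real classes):

import com.alibaba.excel.EasyExcel;
import com.alibaba.excel.annotation.ExcelProperty;
import java.io.ByteArrayOutputStream;
import java.util.List;

// hypothetical row model for the exported sheet
class ExportRow {
    @ExcelProperty("name")
    private String name;

    public String getName() { return name; }
    public void setName(String name) { this.name = name; }
}

// hypothetical file-server client; substitute the project's real upload API
interface FileClient {
    String upload(String fileName, byte[] content);
}

public class ExportService {
    private final FileClient fileClient;

    public ExportService(FileClient fileClient) {
        this.fileClient = fileClient;
    }

    // Write the workbook into memory instead of the servlet response,
    // upload the bytes to the file server, and return the upload id.
    public String export(List<ExportRow> rows) {
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        EasyExcel.write(out, ExportRow.class).sheet("export").doWrite(rows);
        return fileClient.upload("export.xlsx", out.toByteArray());
    }
}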

Detailed log

com.taobao.hsf.exception.HSFException: 
error message : error on submit request on future invoke:
	at com.taobao.hsf.remoting.service.RemotingRPCProtocolComponent.invokeForOne(RemotingRPCProtocolComponent.java:109)
	at com.taobao.hsf.remoting.service.RemotingRPCProtocolComponent.invoke(RemotingRPCProtocolComponent.java:84)
	at com.taobao.hsf.protocol.MultiplexingProtocol$ProtocolMultiplexingInvocationHandler.invoke(MultiplexingProtocol.java:126)
	at com.taobao.hsf.invocation.filter.RPCAfterFilterBuilder$TailNode.invoke(RPCAfterFilterBuilder.java:165)
	at com.taobao.hsf.rpc.generic.GenericInvocationAfterClientFilter.invoke(GenericInvocationAfterClientFilter.java:43)
	at com.taobao.hsf.invocation.filter.RPCAfterFilterNode.invoke(RPCAfterFilterNode.java:70)
	at com.taobao.hsf.invocation.filter.RPCAfterFilterBuilder$HeadNode.invoke(RPCAfterFilterBuilder.java:135)
	at com.taobao.hsf.registry.RegistryInvocationHandler.invoke(RegistryInvocationHandler.java:139)
	at com.taobao.hsf.invocation.DelegateInvocationHandler.invoke(DelegateInvocationHandler.java:12)
	at com.taobao.hsf.remoting.service.LocalInvocationHandler.invoke(LocalInvocationHandler.java:100)
	at com.taobao.hsf.invocation.filter.RPCFilterBuilder$TailNode.invoke(RPCFilterBuilder.java:165)
	at com.taobao.hsf2dubbo.context.DubboRPCContextClientFilter.invoke(DubboRPCContextClientFilter.java:43)
	at com.taobao.hsf.invocation.filter.RPCFilterNode.invoke(RPCFilterNode.java:71)
	at com.taobao.hsf.context.RPCContextClientFilter.invoke(RPCContextClientFilter.java:31)
	at com.taobao.hsf.invocation.filter.RPCFilterNode.invoke(RPCFilterNode.java:71)
	at com.taobao.hsf.rpc.generic.GenericInvocationClientFilter.invoke(GenericInvocationClientFilter.java:59)
	at com.taobao.hsf.invocation.filter.RPCFilterNode.invoke(RPCFilterNode.java:71)
	at com.taobao.hsf.remoting.service.InvocationValidationFilter.invoke(InvocationValidationFilter.java:48)
	at com.taobao.hsf.invocation.filter.RPCFilterNode.invoke(RPCFilterNode.java:71)
	at com.taobao.hsf.filter.QosClientFilter.invoke(QosClientFilter.java:64)
	at com.taobao.hsf.invocation.filter.RPCFilterNode.invoke(RPCFilterNode.java:71)
	at com.taobao.hsf.plugins.spas.SpasClientFilter.invoke(SpasClientFilter.java:82)
	at com.taobao.hsf.invocation.filter.RPCFilterNode.invoke(RPCFilterNode.java:71)
	at com.taobao.hsf.plugins.eagleeye.EagleEyeClientFilter.invoke(EagleEyeClientFilter.java:79)
	at com.taobao.hsf.invocation.filter.RPCFilterNode.invoke(RPCFilterNode.java:71)
	at com.taobao.hsf.dpath.DPathTagClientFilter.invoke(DPathTagClientFilter.java:38)
	at com.taobao.hsf.invocation.filter.RPCFilterNode.invoke(RPCFilterNode.java:71)
	at com.taobao.hsf.invocation.stats.InvocationStatsClientFilter.invoke(InvocationStatsClientFilter.java:45)
	at com.taobao.hsf.invocation.filter.RPCFilterNode.invoke(RPCFilterNode.java:71)
	at com.taobao.hsf.monitor.log.filter.MonitorLogClientFilter.invoke(MonitorLogClientFilter.java:51)
	at com.taobao.hsf.invocation.filter.RPCFilterNode.invoke(RPCFilterNode.java:71)
	at com.taobao.hsf.common.filter.CommonClientFilter.invoke(CommonClientFilter.java:37)
	at com.taobao.hsf.invocation.filter.RPCFilterNode.invoke(RPCFilterNode.java:71)
	at com.taobao.hsf.invocation.filter.RPCFilterBuilder$HeadNode.invoke(RPCFilterBuilder.java:134)
	at com.taobao.hsf.invocation.filter.FilterInvocationHandler.invoke(FilterInvocationHandler.java:28)
	at com.taobao.hsf.cluster.TestAddressInvocationHandler.invoke(TestAddressInvocationHandler.java:50)
	at com.taobao.hsf.remoting.service.AsyncInvocationHandler.invoke(AsyncInvocationHandler.java:44)
	at com.taobao.hsf.invocation.FutureInvocationHandler.invoke(FutureInvocationHandler.java:44)
	at com.taobao.hsf2dubbo.DubboAsyncInvocationHandler.invoke(DubboAsyncInvocationHandler.java:93)
	at com.taobao.hsf.invocation.AsyncToSyncInvocationHandler.invokeType(AsyncToSyncInvocationHandler.java:226)
	at com.taobao.hsf.invocation.AsyncToSyncInvocationHandler.invoke(AsyncToSyncInvocationHandler.java:52)
	at com.taobao.hsf.profiler.ProfilerSyncInvocationHandler.invoke(ProfilerSyncInvocationHandler.java:35)
	at com.taobao.hsf.rpc.client.ErrorLogSyncInvocationHandler.invoke(ErrorLogSyncInvocationHandler.java:47)
	at com.taobao.hsf2dubbo.DubboClientFilterSyncInvocationHandlerInterceptor.invoke(DubboClientFilterSyncInvocationHandlerInterceptor.java:56)
	at com.taobao.hsf.rpc.client.ClientConcurrencyLimiter.invoke(ClientConcurrencyLimiter.java:41)
	at com.taobao.hsf.InvocationUtil.invoke(InvocationUtil.java:51)
	at com.taobao.hsf.proxy.ProxyInvocationHandler.invoke(ProxyInvocationHandler.java:31)
	at com.taobao.hsf.proxy.JdkProxyFactory$JdkProxyInvocationHandler.invoke(JdkProxyFactory.java:99)
	at com.sun.proxy.$Proxy180.exportDetailExcel(Unknown Source)
	at cn.ccccltd.smp.smp.controller.ExportController.transSettlementWriteOffDetail(ExportController.java:57)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:498)
	at org.springframework.web.method.support.InvocableHandlerMethod.doInvoke(InvocableHandlerMethod.java:189)
	at org.springframework.web.method.support.InvocableHandlerMethod.invokeForRequest(InvocableHandlerMethod.java:138)
	at org.springframework.web.servlet.mvc.method.annotation.ServletInvocableHandlerMethod.invokeAndHandle(ServletInvocableHandlerMethod.java:102)
	at org.springframework.web.servlet.mvc.method.annotation.RequestMappingHandlerAdapter.invokeHandlerMethod(RequestMappingHandlerAdapter.java:892)
	at org.springframework.web.servlet.mvc.method.annotation.RequestMappingHandlerAdapter.handleInternal(RequestMappingHandlerAdapter.java:797)
	at org.springframework.web.servlet.mvc.method.AbstractHandlerMethodAdapter.handle(AbstractHandlerMethodAdapter.java:87)
	at org.springframework.web.servlet.DispatcherServlet.doDispatch(DispatcherServlet.java:1038)
	at org.springframework.web.servlet.DispatcherServlet.doService(DispatcherServlet.java:942)
	at org.springframework.web.servlet.FrameworkServlet.processRequest(FrameworkServlet.java:1005)
	at org.springframework.web.servlet.FrameworkServlet.doGet(FrameworkServlet.java:897)
	at javax.servlet.http.HttpServlet.service(HttpServlet.java:687)
	at org.springframework.web.servlet.FrameworkServlet.service(FrameworkServlet.java:882)
	at javax.servlet.http.HttpServlet.service(HttpServlet.java:790)
	at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:231)
	at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:166)
	at org.apache.tomcat.websocket.server.WsFilter.doFilter(WsFilter.java:53)
	at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:193)
	at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:166)
	at cn.ccccltd.biz.plugin.security.interceptor.CasFilterSecurityInterceptor.invoke(CasFilterSecurityInterceptor.java:43)
	at cn.ccccltd.biz.plugin.security.interceptor.CasFilterSecurityInterceptor.doFilter(CasFilterSecurityInterceptor.java:33)
	at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:193)
	at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:166)
	at cn.ccccltd.biz.plugin.security.filter.CSRFilter.doFilter(CSRFilter.java:54)
	at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:193)
	at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:166)
	at org.springframework.boot.actuate.web.trace.servlet.HttpTraceFilter.doFilterInternal(HttpTraceFilter.java:90)
	at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:107)
	at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:193)
	at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:166)
	at org.springframework.security.web.FilterChainProxy.doFilterInternal(FilterChainProxy.java:209)
	at org.springframework.security.web.FilterChainProxy.doFilter(FilterChainProxy.java:178)
	at org.springframework.web.filter.DelegatingFilterProxy.invokeDelegate(DelegatingFilterProxy.java:357)
	at org.springframework.web.filter.DelegatingFilterProxy.doFilter(DelegatingFilterProxy.java:270)
	at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:193)
	at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:166)
	at org.springframework.web.filter.RequestContextFilter.doFilterInternal(RequestContextFilter.java:99)
	at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:107)
	at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:193)
	at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:166)
	at org.springframework.web.filter.FormContentFilter.doFilterInternal(FormContentFilter.java:92)
	at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:107)
	at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:193)
	at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:166)
	at org.springframework.web.filter.HiddenHttpMethodFilter.doFilterInternal(HiddenHttpMethodFilter.java:93)
	at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:107)
	at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:193)
	at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:166)
	at org.springframework.session.web.http.SessionRepositoryFilter.doFilterInternal(SessionRepositoryFilter.java:167)
	at org.springframework.session.web.http.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:80)
	at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:193)
	at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:166)
	at org.springframework.boot.actuate.metrics.web.servlet.WebMvcMetricsFilter.filterAndRecordMetrics(WebMvcMetricsFilter.java:117)
	at org.springframework.boot.actuate.metrics.web.servlet.WebMvcMetricsFilter.doFilterInternal(WebMvcMetricsFilter.java:106)
	at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:107)
	at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:193)
	at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:166)
	at org.springframework.web.filter.CharacterEncodingFilter.doFilterInternal(CharacterEncodingFilter.java:200)
	at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:107)
	at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:193)
	at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:166)
	at org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:200)
	at org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:96)
	at org.apache.catalina.authenticator.AuthenticatorBase.invoke(AuthenticatorBase.java:490)
	at org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:139)
	at org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:92)
	at org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:74)
	at org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:343)
	at org.apache.coyote.http11.Http11Processor.service(Http11Processor.java:408)
	at org.apache.coyote.AbstractProcessorLight.process(AbstractProcessorLight.java:66)
	at org.apache.coyote.AbstractProtocol$ConnectionHandler.process(AbstractProtocol.java:834)
	at org.apache.tomcat.util.net.NioEndpoint$SocketProcessor.doRun(NioEndpoint.java:1415)
	at org.apache.tomcat.util.net.SocketProcessorBase.run(SocketProcessorBase.java:49)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
	at org.apache.tomcat.util.threads.TaskThread$WrappingRunnable.run(TaskThread.java:61)
	at java.lang.Thread.run(Thread.java:748)
Caused by: com.taobao.hsf.io.serialize.HSFSerializeException: SerializeError
error message : HSF Serialize request arguments error.Please make sure your DO is serializable and your dependency is latest.
	at com.taobao.hsf.io.remoting.hsf.HSFPacketFactory.clientCreate(HSFPacketFactory.java:68)
	at com.taobao.hsf.io.stream.AbstractClientStream.write(AbstractClientStream.java:121)
	at com.taobao.hsf.remoting.service.RemotingRPCProtocolComponent.invokeForOne(RemotingRPCProtocolComponent.java:106)
	... 129 more
Caused by: java.lang.OutOfMemoryError: Java heap space
	at com.taobao.hsf.io.common.BytesUtil.copyOf(BytesUtil.java:38)
	at com.taobao.hsf.io.serialize.UnsafeByteArrayOutputStream.write(UnsafeByteArrayOutputStream.java:63)
	at com.taobao.hsf.com.caucho.hessian.io.Hessian2Output.flushBuffer(Hessian2Output.java:1553)
	at com.taobao.hsf.com.caucho.hessian.io.Hessian2Output.writeBytes(Hessian2Output.java:1136)
	at com.taobao.hsf.com.caucho.hessian.io.BasicSerializer.writeObject(BasicSerializer.java:175)
	at com.taobao.hsf.com.caucho.hessian.io.Hessian2Output.writeObject(Hessian2Output.java:421)
	at com.taobao.hsf.com.caucho.hessian.io.UnsafeSerializer$ObjectFieldSerializer.serialize(UnsafeSerializer.java:293)
	at com.taobao.hsf.com.caucho.hessian.io.UnsafeSerializer.writeInstance(UnsafeSerializer.java:212)
	at com.taobao.hsf.com.caucho.hessian.io.UnsafeSerializer.writeObject(UnsafeSerializer.java:166)
	at com.taobao.hsf.com.caucho.hessian.io.Hessian2Output.writeObject(Hessian2Output.java:421)
	at com.taobao.hsf.com.caucho.hessian.io.UnsafeSerializer$ObjectFieldSerializer.serialize(UnsafeSerializer.java:293)
	at com.taobao.hsf.com.caucho.hessian.io.UnsafeSerializer.writeInstance(UnsafeSerializer.java:212)
	at com.taobao.hsf.com.caucho.hessian.io.UnsafeSerializer.writeObject(UnsafeSerializer.java:166)
	at com.taobao.hsf.com.caucho.hessian.io.Hessian2Output.writeObject(Hessian2Output.java:421)
	at com.taobao.hsf.com.caucho.hessian.io.UnsafeSerializer$ObjectFieldSerializer.serialize(UnsafeSerializer.java:293)
	at com.taobao.hsf.com.caucho.hessian.io.UnsafeSerializer.writeInstance(UnsafeSerializer.java:212)
	at com.taobao.hsf.com.caucho.hessian.io.UnsafeSerializer.writeObject(UnsafeSerializer.java:166)
	at com.taobao.hsf.com.caucho.hessian.io.Hessian2Output.writeObject(Hessian2Output.java:421)
	at com.taobao.hsf.com.caucho.hessian.io.ArraySerializer.writeObject(ArraySerializer.java:69)
	at com.taobao.hsf.com.caucho.hessian.io.Hessian2Output.writeObject(Hessian2Output.java:421)
	at com.taobao.hsf.com.caucho.hessian.io.UnsafeSerializer$ObjectFieldSerializer.serialize(UnsafeSerializer.java:293)
	at com.taobao.hsf.com.caucho.hessian.io.UnsafeSerializer.writeInstance(UnsafeSerializer.java:212)
	at com.taobao.hsf.com.caucho.hessian.io.UnsafeSerializer.writeObject(UnsafeSerializer.java:166)
	at com.taobao.hsf.com.caucho.hessian.io.Hessian2Output.writeObject(Hessian2Output.java:421)
	at com.taobao.hsf.com.caucho.hessian.io.UnsafeSerializer$ObjectFieldSerializer.serialize(UnsafeSerializer.java:293)
	at com.taobao.hsf.com.caucho.hessian.io.UnsafeSerializer.writeInstance(UnsafeSerializer.java:212)
	at com.taobao.hsf.com.caucho.hessian.io.UnsafeSerializer.writeObject(UnsafeSerializer.java:171)
	at com.taobao.hsf.com.caucho.hessian.io.Hessian2Output.writeObject(Hessian2Output.java:421)
	at com.taobao.hsf.com.caucho.hessian.io.UnsafeSerializer$ObjectFieldSerializer.serialize(UnsafeSerializer.java:293)
	at com.taobao.hsf.com.caucho.hessian.io.UnsafeSerializer.writeInstance(UnsafeSerializer.java:212)
	at com.taobao.hsf.com.caucho.hessian.io.UnsafeSerializer.writeObject(UnsafeSerializer.java:171)
	at com.taobao.hsf.com.caucho.hessian.io.Hessian2Output.writeObject(Hessian2Output.java:421)

[Solved] UserWarning: CUDA initialization: CUDA unknown error

An accident caused by a school power failure.

UserWarning: CUDA initialization: CUDA unknown error – this may be due
to an incorrectly set up environment, e.g. changing env variable CUDA_VISIBLE_DEVICES after program start. Setting the available devices to be zero. (Triggered internally at /pytorch/c10/cuda/CUDAFunctions.cpp:109.) return torch._C._cuda_getDeviceCount() > 0

Solution:
I reinstalled PyTorch and then restarted the machine.
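
A quick sanity check that the GPU is visible again after reinstalling (a generic check, not from the original post):

import torch

# True plus a non-zero device count means CUDA initialized correctly
print(torch.cuda.is_available())
print(torch.cuda.device_count())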

But today's problems didn't end there.

UnknownError: Failed to get convolution algorithm. This is probably because cuDNN failed to initialize, so try looking to see if a warning log message was printed above.

Solution:

conda install -c anaconda keras-gpu

But a new error appeared:

ImportError: cannot import name 'keras_export'

At this point I was completely stuck and could only start removing packages:

conda list

Looking at all the packages configured in the conda environment, we can see that three of the packages are associated with tensorflow.

There is no way around it; we can only uninstall the packages and reconfigure the environment:

pip uninstall <package-name>

Spring Project Error: Error creating bean with name [How to Solve]

I have been running into this kind of error constantly while doing exercises recently:

org.springframework.beans.factory.UnsatisfiedDependencyException: Error creating bean with name 'com.example.Demo2MybatisApplicationTests': Unsatisfied dependency expressed through field 'userDao'; nested exception is org.springframework.beans.factory.NoSuchBeanDefinitionException: No qualifying bean of type 'com.example.dao.UserDao' available: expected at least 1 bean which qualifies as autowire candidate. Dependency annotations: {@org.springframework.beans.factory.annotation.Autowired(required=true)}

It starts with a long stack trace; there are many solutions on the Internet, and in my case the problem was that I had not added the package-scanning annotation.

Add the @MapperScan("interface package path") annotation to the Spring Boot startup class, as sketched below.
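
A minimal sketch; the class name is inferred from the test class in the error message, and the scanned package matches the missing UserDao:

import org.mybatis.spring.annotation.MapperScan;
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;

@SpringBootApplication
@MapperScan("com.example.dao") // scan the package that contains UserDao
public class Demo2MybatisApplication {
    public static void main(String[] args) {
        SpringApplication.run(Demo2MybatisApplication.class, args);
    }
}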

Its function is to specify the package where the mapper interfaces live; corresponding implementation beans are then generated for all interfaces under that package.

With this in place, the project no longer reports the bean-not-found error.

[Solved] RuntimeError: CUDA error: CUBLAS_STATUS_ALLOC_FAILED when calling `cublasCreate(handle)`

Today I ran into a baffling error and wrestled with it for a long time… I was extracting features with a BERT network, but it didn't work… I checked the tensor sizes and found that was not the problem; in the end it turned out a tensor was not on CUDA.

The specific error:

RuntimeError: Caught RuntimeError in replica 0 on device 0.
RuntimeError: CUDA error: CUBLAS_STATUS_ALLOC_FAILED when calling `cublasCreate(handle)`

Solution:

ext_hashCodes.unsqueeze(1).repeat(1, B, 1)
# change to
ext_hashCodes.unsqueeze(1).repeat(1, B, 1).cuda()
# It's OK