Tag Archives: problem

[Solved] Illegal access: this web application instance has been stopped already

The environment at that time:

While testing the project in UAT, I suddenly found that it could no longer be accessed. This happens often: everything is fine one second and blows up the next, which is very frustrating, especially during joint debugging.

So I opened the log, curious to see what had happened, and found this error:

java.lang.IllegalStateException: Illegal access: this web application instance has been stopped already. Could not load [java.beans.PropertyChangeEvent]. The following stack trace is thrown for debugging purposes as well as to attempt to terminate the thread which caused the illegal access.

Roughly, it means the web application instance has already been stopped and can no longer load the class it was asked to load.

So I killed the Tomcat process and brought it back up, and then set about actually solving the problem…

Be careful:

Simply restarting Tomcat does not solve the problem.
Instead, modify server.xml: inside the <Host> tag, find (or add) the <Context> element for your application and set its reloadable attribute to reloadable="false".

The reloadable attribute controls hot deployment (automatically reloading the application when its classes change), which is convenient for developers, but that automatic reload is what stops the old application instance and triggers this error.
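For reference, a minimal sketch of what this might look like in conf/server.xml (docBase and path are placeholders for your own application, not values from this post):

<Host name="localhost" appBase="webapps" unpackWARs="true" autoDeploy="true">
    <Context docBase="myApp" path="/myApp" reloadable="false" />
</Host>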

Caused by: java.io.IOException: APR error: -730053

After the project started, I closed the Swagger web page and this error appeared. It had been fine a moment earlier, so why the error now? I had never seen this one before and had no idea what the problem was; I assumed it was a code problem and went to search Baidu.

The general cause: the server was still writing output to the browser when I closed the page, and that is what triggered the error. Doing it again with the page left open, the problem went away.

Postscript: it's a small problem, but if you don't know about it you can waste a lot of time on it; a good reason to change bad habits.

Learn through hands-on practice, and grow happily.

How to use JUnit unit tests in Java, and how to fix the "initialization error"

Usage:
1. Select the project – right click – Build Path – Add Libraries – JUnit – Next – JUnit 4 – Finish
2. Add @Test above each method you want to test

Note:
1. The test method must be public
2. Its return type must be void
3. It must take no parameters
4. There must be no class named Test in the same package
If the test code looks correct but an initialization error is still reported, and the error does not appear in other classes:
1. There may be another method in the same class that contains an error
2. Or there are multiple @Test methods in the class and one of them is static or has some other error
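As a quick illustration, here is a minimal JUnit 4 test that follows the rules above (the class and method names are made up for the example):

import org.junit.Assert;
import org.junit.Test;

public class CalculatorTest {

    // public, void return type, no parameters
    @Test
    public void addShouldReturnSum() {
        int result = 1 + 2;
        Assert.assertEquals(3, result);
    }
}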

Installing a Hadoop environment on Linux Mint

Using hadoop-streaming-2.8.4.jar, the command was as follows:
./share/hadoop/tools/lib/hadoop-streaming-2.8.4.jar -input /mr/input/* -output /mr/output -file /home/LZH/external/Mapper.py -mapper 'Mapper.py' -file /home/LZH/external/Reducer.py -reducer 'Reducer.py'
Problem 1: bash: ./share/hadoop/tools/lib/hadoop-streaming-2.8.4.jar: Permission denied
Solution: broaden the file permissions: chmod -R 777 ./share/hadoop/tools/lib/hadoop-streaming-2.8.4.jar
Problem 2: Invalid File (bad Magic Number): Exec format Error
Solution: I was careless and left out hadoop jar at the front of the command. With it added, the command becomes: hadoop jar ./share/hadoop/tools/lib/hadoop-streaming-2.8.4.jar -input /mr/input/* -output /mr/output -file /home/LZH/external/Mapper.py -mapper 'Mapper.py' -file /home/LZH/external/Reducer.py -reducer 'Reducer.py'
Problem 3 (may be encountered): Mapper.py and Reducer.py have to be made executable; use chmod +x filename.
When writing MapReduce in Python, it is also a good idea to start each script with the line #!/usr/bin/env python (see the sketch below).
And finally it works.
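For reference, a minimal sketch of the shape a streaming Mapper.py can take (a generic example, not the exact mapper from the referenced post): it starts with the shebang line mentioned above, reads lines from standard input, and writes tab-separated key/value pairs to standard output.

#!/usr/bin/env python
# Minimal Hadoop streaming mapper sketch: emit each non-empty input line
# under a constant key so a single reducer can aggregate the values.
import sys

for line in sys.stdin:
    value = line.strip()
    if value:
        print('num\t%s' % value)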

Hadoop had been working fine, but for unrelated reasons I pressed the power button and forced a shutdown. After booting up again I ran start-all.sh, and jps showed that the DataNode was missing; hadoop fs -ls /input reported that it could not connect to 127.0.0.1. After restarting Hadoop once more, I found I could still create folders and run commands such as hadoop fs -ls /input, but putting a file failed with: put: File /input/inputFile.txt._COPYING_ could only be replicated to 0 nodes instead of minReplication (=1). There are 0 datanode(s) running and no node(s) are excluded in this operation.
At this point, stop-all.sh reported "no proxyserver to stop" and "no datanode to stop". (I tested the first solution below and it worked.)
Reason 1: every namenode format creates a new namenodeId, while the directory under hadoop.tmp.dir still contains the ID generated by the previous format. The namenode format cleans up the data under the NameNode but not the data under the DataNode, which causes the failure at startup.
Here are two solutions:

Scheme 1 (delete the data and reformat):
1) remove the contents of dfs.name.dir:
rm -rf /opt/hadoop/DFS/name/*
remove the contents of dfs.data.dir:
rm -rf /opt/hadoop/DFS/data/*
2) delete the files beginning with "hadoop" under hadoop.tmp.dir:
rm -rf /opt/hadoop/TMP/hadoop*
3) reformat hadoop:
hadoop namenode -format
4) start hadoop:
start-all.sh
The disadvantage of this scheme is that all the important data on the original cluster is lost, so the second scheme is recommended:
1) modify the namespaceID of each Slave so that it is consistent with the Master’s namespaceID.
or
2) modify the Master’s namespaceID to match the Slave’s namespaceID.
The Master's namespaceID is in the /opt/hadoop/DFS/name/current/VERSION file, and the Slave's namespaceID is in the /opt/hadoop/DFS/data/current/VERSION file.
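A rough sketch of how scheme 2 can be done from the shell, using the paths quoted above (the namespaceID value below is only a placeholder):

cat /opt/hadoop/DFS/name/current/VERSION   # on the Master (namenode)
cat /opt/hadoop/DFS/data/current/VERSION   # on each Slave (datanode)
# then make the Slave's namespaceID match the Master's, e.g.:
sed -i 's/^namespaceID=.*/namespaceID=123456789/' /opt/hadoop/DFS/data/current/VERSION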

Reason 2: when Hadoop stops, it uses the pid files of the mapred and dfs processes on the DataNode. By default these pid files are saved under /tmp, and Linux periodically deletes files in that directory (typically after 7 days or a month). Once hadoop-hadoop-jobtracker.pid and hadoop-hadoop-namenode.pid have been deleted, the NameNode naturally cannot find those two processes on the DataNode anymore.
Configuring export HADOOP_PID_DIR in the configuration file hadoop-env.sh solves this problem.
In that configuration file, the default path for HADOOP_PID_DIR is /var/hadoop/pids. Manually create a hadoop folder under /var (skip this if it already exists) and remember to chown it to the hadoop user. Then kill the DataNode and TaskTracker processes on the Slave (kill -9 <pid>), run start-all.sh and then stop-all.sh, and if there is no "no datanode to stop" message the problem is solved.
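Roughly, the steps described above look like this (assuming the user is called hadoop and the default /var/hadoop/pids path is kept):

sudo mkdir -p /var/hadoop/pids
sudo chown -R hadoop:hadoop /var/hadoop
# then in hadoop-env.sh:
export HADOOP_PID_DIR=/var/hadoop/pids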

run.sh: bash: ./run.sh: Permission denied
Solution:
Use chmod to make the .sh file executable,
e.g. chmod u+x *.sh
Container killed on request. Exit code is 143.
This simply means the job ran out of memory. There are two ways to solve it:
1. Specify more mappers and reducers at runtime (a full command is sketched after the XML below):
-D mapred.map.tasks=10 \   # command [genericOptions] [commandOptions]
-D mapred.reduce.tasks=10 \   # note that -D is a genericOption and must come before the other parameters
-numReduceTasks 10
2. Modify yarn-site.xml to add the following properties:

<property>
   <name>yarn.nodemanager.vmem-check-enabled</name>
   <value>false</value>
   <description>Whether virtual memory limits will be enforced for containers</description>
</property>

<property>
   <name>yarn.nodemanager.vmem-pmem-ratio</name>
   <value>4</value>
   <description>Ratio between virtual memory to physical memory when setting memory limits for containers</description>
</property> 
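For option 1, here is a sketch of where the -D generic options have to go in the streaming command, i.e. right after the jar and before -input/-output and the other command options (paths reused from the earlier section; adjust to your own setup):

hadoop jar ./share/hadoop/tools/lib/hadoop-streaming-2.8.4.jar \
    -D mapred.map.tasks=10 \
    -D mapred.reduce.tasks=10 \
    -input /mr/input/* \
    -output /mr/output \
    -file /home/LZH/external/Mapper.py -mapper 'Mapper.py' \
    -file /home/LZH/external/Reducer.py -reducer 'Reducer.py'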

Reference:

[Python] Implement Hadoop MapReduce program in Python: calculate the mean and variance of a set of data

! LaTeX Error: Option clash for package hyperref.


When uploading a paper to arXiv and compiling it online, the following error occurred:

! LaTeX Error: Option clash for package hyperref.

Cause
\usepackage[hidelinks]{hyperref} conflicts with another package that already loads hyperref with different options.

Solution
Comment out the package: %\usepackage[hidelinks]{hyperref}

Result
It compiles successfully.
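If you want to keep the hidelinks behaviour rather than dropping the package, a common alternative (not what was done here) is to pass the option before whatever other package loads hyperref, or to set it after loading:

\PassOptionsToPackage{hidelinks}{hyperref} % before \documentclass, or before the package that loads hyperref
% or, once hyperref is already loaded elsewhere:
\hypersetup{hidelinks}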

Caffe deep learning [3] compilation error: fatal error: hdf5.h: No such file or directory, compilation terminated.

Problem:
While configuring Caffe-SSD today and preparing to compile Caffe, I ran into:

fatal error: hdf5.h: No such file or directory compilation terminated.

The reason is that the hdf5.h header file could not be found.
Solution:
1. Modify the Makefile.config file
Go to the Caffe download directory.
In Makefile.config, press Ctrl+F to search for INCLUDE_DIRS.
Note that it is Makefile.config, not Makefile.config.example!!
Add /usr/include/hdf5/serial/ to INCLUDE_DIRS.
Namely, the original:

INCLUDE_DIRS := $(PYTHON_INCLUDE) /usr/local/include

Now it’s:

INCLUDE_DIRS := $(PYTHON_INCLUDE) /usr/local/include /usr/include/hdf5/serial/

 
 
2. Modify the Makefile file
In the Makefile, press Ctrl+F to search for LIBRARIES +=.
Note that this is the Makefile, not the Makefile.config from step 1 above!!
Change hdf5_hl and hdf5 to hdf5_serial_hl and hdf5_serial.
Namely, the original:

LIBRARIES += glog gflags protobuf boost_system boost_filesystem m hdf5_hl hdf5

Now it’s:

LIBRARIES += glog gflags protobuf boost_system boost_filesystem m hdf5_serial_hl hdf5_serial

 
 
 
 
Results: Caffe now compiles successfully.

unrecognized relocation (0x2a) in section `.text`

The problem
One of the libraries I use was updated today. With the new version, compiling the Linux x86-64 build on the server fails with:
/usr/bin/ld: libsdk.a(Imagexxx.cpp.o): unrecognized relocation (0x2a) in section .text
/usr/bin/ld: final link failed: Bad value
collect2: error: ld returned 1 exit status
I then tested it on my own computer, where it compiled fine.
Based on related information found on Stack Overflow:
https://stackoverflow.com/questions/46058050/unable-to-compile-unrecognized-relocation
the most likely cause is an ld version that is too old.

local ld version:
GNU ld (GNU Binutils for Ubuntu) 2.26
server ld version:
GNU ld (GNU Binutils for Ubuntu) 2.24
Solution
With root access:

$ sudo apt-get update
$ sudo apt-get install binutils-2.26

export PATH="/usr/lib/binutils-2.26/bin:$PATH"

Without root access:
Download the source:
https://ftp.gnu.org/gnu/binutils/

tar -zxvf binutils-2.26.tar.gz
cd binutils-2.26
./configure --prefix=/home/xxx/binutils
make
make install
export PATH="/home/xxx/binutils/bin:$PATH"

Apache2 cannot be started and reports "apache2.service failed because the control process exited with error code"

Today, while getting ready to use Kali to build a website, the Apache server would not run properly. After patiently reading the error messages and searching Baidu, I finally solved the problem.
1. The error:
When starting apache2, the following error occurred. At first I thought it was because I was not root, but switching to root still produced the error.

root@kali:/root# service apache2 start
Job for apache2.service failed because the control process exited with error code.
See "systemctl  status apache2.service" and "journalctl  -xe" for details.

Since this Kali installation had not been used for a long time, I suspected the packages were too old, so I updated them first.

root@kali:~# apt-get update

But the same error was still reported, so I looked at the error message with the apache2 --help command.

apache2 --help
[Mon Apr 13 23:45:17.772837 2020] [core:warn] [pid 24587] AH00111: Config variable ${APACHE_RUN_DIR} is not defined
apache2: Syntax error on line 80 of /etc/apache2/apache2.conf: DefaultRuntimeDir must be a valid directory, absolute or relative to ServerRoot

The Apache configuration file changed after the upgrade, and the new Apache environment variables had not been loaded. Solution:

source /etc/apache2/envvars

After updating, Apache still reported an error, but the error message was different:

root@kali:/root# apache2 --help
AH00558: apache2: Could not reliably determine the server's fully qualified domain name, using 127.0.1.1. Set the 'ServerName' directive globally to suppress this message
(98)Address already in use: AH00072: make_sock: could not bind to address [::]:80
(98)Address already in use: AH00072: make_sock: could not bind to address 0.0.0.0:80
no listening sockets available, shutting down
AH00015: Unable to open logs

2. Solution:
From the error we can tell that Apache cannot bind to port 80, its default port. There are two choices: kill the process occupying the port, or change Apache's default port. I chose the first.
First look at what process is occupying the port.

root@kali:/root# netstat -lnp|grep 80
tcp        0      0 127.0.0.1:80            0.0.0.0:*               LISTEN      18594/gsad          
unix  2      [ ACC ]     STREAM     LISTENING     18198    808/VGAuthService    /var/run/vmware/guestServicePipe

Then kill the process.

root@kali:/root# kill -9 18594

Finally, apache2 was successfully restarted.

root@kali:/root# /etc/init.d/apache2 start
[ ok ] Starting apache2 (via systemctl): apache2.service.

You can also change apache2's default port instead, as suggested in answers on Stack Overflow.
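For that second choice, a hedged sketch of changing Apache's port from 80 to, say, 8080 (file paths are the Debian/Kali defaults; any virtual hosts under /etc/apache2/sites-enabled/ that reference port 80 would need the same change):

sed -i 's/^Listen 80$/Listen 8080/' /etc/apache2/ports.conf
service apache2 restart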

CentOS7 nginx Failed to read PID from file /run/nginx.pid: Invalid argument?

1. On CentOS 7, after configuring the nginx proxy service, running

systemctl status nginx.service

error:

Failed to read PID from file /run/nginx.pid: Invalid argument

2. Many posts say to delete or recreate the nginx.pid file; I tried that and it did not work.

3. Then I found a solution that did work:

mkdir -p /etc/systemd/system/nginx.service.d

printf "[Service]\nExecStartPost=/bin/sleep 0.1\n" > /etc/systemd/system/nginx.service.d/override.conf

4. Finally, execute the commands

systemctl daemon-reload

systemctl restart nginx.service

and the problem is solved.

Conda HTTP 000 CONNECTION FAILED for url


Problem

On a Mac, when running conda create --name tensorflow1.4python3.6 python=3.6, the following problem occurs:

==============
XXX ~ % conda create --name tensorflow1.4python3.6 python=3.6
Collecting package metadata (current_repodata.json): failed

CondaHTTPError: HTTP 000 CONNECTION FAILED for url https://repo.anaconda.com/pkgs/main/osx-64/current_repodata.json
Elapsed: –

An HTTP error occurred when trying to retrieve this URL.
HTTP errors are often intermittent, and a simple retry will get you on your way.

If your current network has https://www.anaconda.com blocked, please file
a support request with your network engineering team.

‘https://repo.anaconda.com/pkgs/main/osx-64’

===================

Cause

The download failed because of an incorrect channel selection, which made access to Anaconda's URLs slow or blocked.
You can view all configured channels with the command conda config --show-sources, which prints something like:
==> /Users/xx/.condarc <==
ssl_verify: True
channels:

  • https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/free/
  • https://mirrors.tuna.tsinghua.edu.cn/anaconda/cloud/conda-forge/
  • https://mirrors.tuna.tsinghua.edu.cn/anaconda/cloud/msys2/
  • defaults
show_channel_urls: True
As you can see, there are three mirror channels (plus defaults) configured here, and one of them may be wrong.

Solution

Delete all the other channels and keep only the first one.
(1) Open the .condarc file, with the command: open .condarc
(2) Change it to:
ssl_verify: True
channels:
  • https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/free/
show_channel_urls: True
(3) Save the file.
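If you prefer not to edit .condarc by hand, roughly the same cleanup can be done with conda config itself (a sketch based on the channel list shown above):

conda config --remove channels https://mirrors.tuna.tsinghua.edu.cn/anaconda/cloud/conda-forge/
conda config --remove channels https://mirrors.tuna.tsinghua.edu.cn/anaconda/cloud/msys2/
conda config --remove channels defaults
conda config --show-sources   # verify that only the first mirror is left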