Tag Archives: zookeeper

[Solved] Error occurred during initialization of VM: java/lang/NoClassDefFoundError: java/lang/Object

When you run java, javac, or java -version at the Windows command prompt (cmd), the following error is reported:

Error occurred during initialization of VM java/lang/NoClassDefFoundError: java/lang/Object

Eclipse may also fail to open for the same reason.

I have summarized the following three causes:

1: The Java environment variables are misconfigured. Check each variable carefully, especially CLASSPATH.

Typical values (when only the JDK is configured):

JAVA_HOME ========= "your JDK installation directory", e.g. "C:\Program Files\Java\jdk1.8.0_121"

Path ========= "%JAVA_HOME%\bin;%JAVA_HOME%\jre\bin"

CLASSPATH ========= "%JAVA_HOME%\lib\dt.jar;%JAVA_HOME%\lib\tools.jar"

2: If the environment variables are fine, go to the JDK installation directory (such as C:\Program Files\Java\jdk1.8.0_121) and check whether tools.jar exists under lib and rt.jar exists under jre\lib. It is possible that only rt.pack and tools.pack are present.

In that case, just unpack the corresponding .pack files into rt.jar and tools.jar using the unpack200 tool in the JDK's bin directory:

#cd /usr/java/j2sdk1.4.2/lib
#../bin/unpack200 tools.pack tools.jar
#cd ../jre/lib
#../../bin/unpack200 rt.pack rt.jar

3: tools.jar is missing under lib, or rt.jar is missing under jre\lib, or other files are missing under lib or jre\lib (about 40 files in total); just copy them from another working installation.

If the java command still fails after copying tools.jar or rt.jar, some files under jre\lib may still be missing; check carefully, or simply copy a complete jre\lib directory, which solves it.
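To verify the first cause quickly, you can echo the variables and confirm that the two critical jars exist. A minimal sketch for a Windows cmd session (the paths are the example values above; adjust to your installation):

:: Check the variables and the jars they should point to
echo %JAVA_HOME%
echo %CLASSPATH%
dir "%JAVA_HOME%\lib\tools.jar"
dir "%JAVA_HOME%\jre\lib\rt.jar"
java -version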

[Solved] nacos Startup Error: Unable to start embedded Tomcat

Project scenario:

Nacos is to be used as the configuration center, version 1.3.1 (the officially recommended stable version as of January 11, 2021).
MySQL version: 5.7 (8.x reportedly changed a great deal, and the nacos-mysql.sql script cannot be run on it).
JDK version: 1.8.0_144 (lower versions do not work).


Problem Description:

1. Running the Spring Cloud Alibaba Nacos source code locally in IDEA reports an error: Unable to start embedded Tomcat

org.springframework.context.ApplicationContextException: Unable to start web server; nested exception is org.springframework.boot.web.server.WebServerException: Unable to start embedded Tomcat
	at org.springframework.boot.web.servlet.context.ServletWebServerApplicationContext.onRefresh(ServletWebServerApplicationContext.java:157)
	at org.springframework.context.support.AbstractApplicationContext.refresh(AbstractApplicationContext.java:540)
	at org.springframework.boot.web.servlet.context.ServletWebServerApplicationContext.refresh(ServletWebServerApplicationContext.java:142)
	at org.springframework.boot.SpringApplication.refresh(SpringApplication.java:775)
	at org.springframework.boot.SpringApplication.refreshContext(SpringApplication.java:397)
	at org.springframework.boot.SpringApplication.run(SpringApplication.java:316)
	at org.springframework.boot.SpringApplication.run(SpringApplication.java:1260)
	at org.springframework.boot.SpringApplication.run(SpringApplication.java:1248)
	at com.alibaba.nacos.Nacos.main(Nacos.java:35)
Caused by: org.springframework.boot.web.server.WebServerException: Unable to start embedded Tomcat
	at org.springframework.boot.web.embedded.tomcat.TomcatWebServer.initialize(TomcatWebServer.java:125)
	at org.springframework.boot.web.embedded.tomcat.TomcatWebServer.<init>(TomcatWebServer.java:86)
	at org.springframework.boot.web.embedded.tomcat.TomcatServletWebServerFactory.getTomcatWebServer(TomcatServletWebServerFactory.java:414)
	at org.springframework.boot.web.embedded.tomcat.TomcatServletWebServerFactory.getWebServer(TomcatServletWebServerFactory.java:174)
	at org.springframework.boot.web.servlet.context.ServletWebServerApplicationContext.createWebServer(ServletWebServerApplicationContext.java:181)
	at org.springframework.boot.web.servlet.context.ServletWebServerApplicationContext.onRefresh(ServletWebServerApplicationContext.java:154)
	... 8 common frames omitted

Cause analysis:

The application.properties configuration file had been modified as below to use MySQL as the datasource:

#*************** Config Module Related Configurations ***************#
### If use MySQL as datasource:
spring.datasource.platform=mysql

### Count of DB:
db.num=1

### Connect URL of DB:
db.url.0=jdbc:mysql://127.0.0.1:3306/nacos?characterEncoding=utf8&connectTimeout=1000&socketTimeout=3000&autoReconnect=true&useUnicode=true&useSSL=false&serverTimezone=UTC
db.user=root
db.password=root
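Note that the nacos database referenced by db.url.0 must already exist and contain the Nacos tables. A minimal sketch, assuming the nacos-mysql.sql script from the 1.3.1 source tree (the script path may differ in your checkout):

# Create the database and import the Nacos schema; credentials match the config above
mysql -uroot -proot -e "CREATE DATABASE IF NOT EXISTS nacos DEFAULT CHARACTER SET utf8;"
mysql -uroot -proot nacos < distribution/conf/nacos-mysql.sql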

Solution:

The IDEA solution
The source code was downloaded and imported into IDEA; the import itself completes (although it may warn that various classes cannot be found). The startup class is in the console module, so you need to modify that module's application.properties in the same way as the nacos-server configuration file above. When building and starting Nacos from local source in non-cluster mode, you must also add -Dnacos.standalone=true to the startup parameters:
1. Click Edit Configurations in the upper right corner

2. In the pop-up dialog, enter -Dnacos.standalone=true in VM options and save
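For reference, a packaged Nacos server accepts the same flag; either of the following is a reasonable sketch (the jar name and path are examples):

# Equivalent of the IDEA VM option, from the command line:
java -Dnacos.standalone=true -jar nacos-server.jar
# Or, with the official distribution's startup script:
sh bin/startup.sh -m standalone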

[Solved] Error:java: Compilation failed: internal java compiler error

Error Messages:

Error:java: Compilation failed: internal java compiler error

Solution:

1. Check the project's JDK (Ctrl+Alt+Shift+S)
File -> Project Structure -> Project Settings -> Project

2. Check the module's JDK and language level (Ctrl+Alt+Shift+S)
File -> Project Structure -> Project Settings -> Modules -> (name of the module to modify) -> Sources

3. Check the Java compiler configuration in IDEA
File -> Settings -> Build, Execution, Deployment -> Compiler -> Java Compiler

The JDK version and language/target level in all three places must be consistent.

If the above three steps still fail,
clear the IDEA caches and restart IDEA:
File -> Invalidate Caches / Restart

[Solved] dubbo Connect zk Error: zookeeper not connected

Error Message:

Error creating bean with name 'dubboBootstrapApplicationListener': Initialization of bean failed; nested exception is java.lang.IllegalStateException: zookeeper not connected

Problem:
The project fails to start with the following error:
IllegalStateException: zookeeper not connected

Reason:
First, dubbo's default registry connection timeout is 5 seconds.
The ZooKeeper server I was connecting to responded slowly, which caused the connection to time out.

Solution:
Increase dubbo's registry timeout; here I set it to 200 seconds (200000 ms).
The default registry timeout is 5 seconds, and most of the other timeouts default to 1 second,
so I increased the timeouts for service calls as well.
If you only need to change the registry timeout, just add or modify the registry entry.

dubbo:
  registry:
    timeout: 200000   # registry connection timeout, in milliseconds (200 s)
  service:
    timeout: 200000
  consumer:
    timeout: 200000
  provider:
    timeout: 200000

[Solved] Win 10 Kafka error: failed to construct Kafka consumer

After updating the code, packaging succeeds, but an error is reported when the project starts: failed to construct Kafka consumer.

The problem clearly lies in Kafka; after checking the configuration, it turned out to be a problem with how the broker address was configured.

Instead of putting an IP address in the application configuration, I mapped a host name in the Windows hosts file.

Add the entry: 127.0.0.1 kafka-server

Then change the Kafka service address to kafka-server:9092 in the project's application configuration file.
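A minimal sketch of the two changes, assuming a Spring Boot project (spring.kafka.bootstrap-servers is Spring Boot's standard Kafka property; use your project's own key if it differs):

# C:\Windows\System32\drivers\etc\hosts -- append:
127.0.0.1 kafka-server

# application.properties -- point the client at the mapped name:
spring.kafka.bootstrap-servers=kafka-server:9092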

[Solved] zookeeper Startup Error: already running as process

1. Problem description

ZooKeeper used to start normally, but one startup reports an error: already running as process

2. Problem analysis

Run the jps command to check whether ZooKeeper really started; it turns out it did not (ZooKeeper's process name is QuorumPeerMain).

This is usually caused by a stale PID file left in the data directory after the machine shut down abnormally (a forced kill of the process, etc.);
just clean up the stale file.

3. Problem handling

Enter the data directory (the dataDir configured in zoo.cfg; zoo.cfg itself lives under the conf directory, e.g. /export/server/zookeeper-3.4.9/conf):

(base) [root@node2 zkdatas]# cd /export/server/apache-zookeeper-3.5.6-bin/zkdatas

(base) [root@node2 zkdatas]# ll

Clean up cache files

(base) [root@node2 zkdatas]# rm -rf zookeeper_server.pid

Restart, and it starts normally:

(base) [root@node2 apache-zookeeper-3.5.6-bin]# bin/zkServer.sh start
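To confirm the fix, check the server status and the Java process list; QuorumPeerMain should now be present:

(base) [root@node2 apache-zookeeper-3.5.6-bin]# bin/zkServer.sh status
(base) [root@node2 apache-zookeeper-3.5.6-bin]# jps | grep QuorumPeerMain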

[Solved] HBase Error: ERROR: org.apache.hadoop.hbase.PleaseHoldException: Master is initializing

Error reporting

After installing HBase and entering the HBase shell for the first time, you may encounter this problem:

[root@zhiyong2 /]# hbase shell
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/opt/usdp-srv/srv/udp/2.0.0.0/hdfs/share/hadoop/common/lib/slf4j-log4j12-1.7.25.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/opt/usdp-srv/srv/udp/2.0.0.0/hbase/lib/client-facing-thirdparty/slf4j-log4j12-1.7.25.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
HBase Shell
Use "help" to get list of supported commands.
Use "exit" to quit this interactive shell.
For Reference, please visit: http://hbase.apache.org/2.0/book.html#shell
Version 2.1.10, rUnknown, Mon Nov 23 09:56:35 WIB 2020
Took 0.0036 seconds
hbase(main):001:0> list_namespace
NAMESPACE

ERROR: org.apache.hadoop.hbase.PleaseHoldException: Master is initializing
        at org.apache.hadoop.hbase.master.HMaster.checkInitialized(HMaster.java:3003)
        at org.apache.hadoop.hbase.master.HMaster.getNamespaces(HMaster.java:3299)
        at org.apache.hadoop.hbase.master.MasterRpcServices.listNamespaceDescriptors(MasterRpcServices.java:1237)
        at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java)
        at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:413)
        at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:133)
        at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:324)
        at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:304)

For usage try 'help "list_namespace"'

Took 9.8027 seconds

In this situation waiting does not help; I waited for quite a while and the master still had not finished initializing.

Solution

Because this is a new cluster with no useful data, you can clear the metadata as follows and let HBase reinitialize. Before clearing the metadata, stop the HBase components in USDP's Web UI.

Delete ZK’s metadata

Since part of HBase's metadata is stored in ZooKeeper, you should do this as below:

[root@zhiyong2 /]# zkCli.sh
Connecting to localhost:2181
2022-03-03 00:09:47,689 [myid:] - INFO  [main:Environment@100] - Client environment:zookeeper.version=3.4.13-2d71af4dbe22557fda74f9a9b4309b15a7487f03, built on 06/29/2018 04:05 GMT
2022-03-03 00:09:47,693 [myid:] - INFO  [main:Environment@100] - Client environment:host.name=zhiyong2
2022-03-03 00:09:47,693 [myid:] - INFO  [main:Environment@100] - Client environment:java.version=1.8.0_202
2022-03-03 00:09:47,695 [myid:] - INFO  [main:Environment@100] - Client environment:java.vendor=Oracle Corporation
2022-03-03 00:09:47,695 [myid:] - INFO  [main:Environment@100] - Client environment:java.home=/usr/java/jdk1.8.0_202/jre
2022-03-03 00:09:47,696 [myid:] - INFO  [main:Environment@100] - Client environment:java.class.path=/srv/udp/2.0.0.0/zookeeper/bin/../zookeeper-server/target/classes:/srv/udp/2.0.0.0/zookeeper/bin/../build/classes:/srv/udp/2.0.0.0/zookeeper/bin/../zookeeper-server/target/lib/*.jar:/srv/udp/2.0.0.0/zookeeper/bin/../build/lib/*.jar:/srv/udp/2.0.0.0/zookeeper/bin/../lib/slf4j-log4j12-1.7.25.jar:/srv/udp/2.0.0.0/zookeeper/bin/../lib/slf4j-api-1.7.25.jar:/srv/udp/2.0.0.0/zookeeper/bin/../lib/netty-3.10.6.Final.jar:/srv/udp/2.0.0.0/zookeeper/bin/../lib/log4j-1.2.17.jar:/srv/udp/2.0.0.0/zookeeper/bin/../lib/jline-0.9.94.jar:/srv/udp/2.0.0.0/zookeeper/bin/../lib/audience-annotations-0.5.0.jar:/srv/udp/2.0.0.0/zookeeper/bin/../zookeeper-3.4.13.jar:/srv/udp/2.0.0.0/zookeeper/bin/../zookeeper-server/src/main/resources/lib/*.jar:/srv/udp/2.0.0.0/zookeeper/bin/../conf:.:/usr/java/jdk1.8.0_202/jre/lib/rt.jar:/usr/java/jdk1.8.0_202/lib/dt.jar:/usr/java/jdk1.8.0_202/lib/tools.jar
2022-03-03 00:09:47,696 [myid:] - INFO  [main:Environment@100] - Client environment:java.library.path=/usr/java/packages/lib/amd64:/usr/lib64:/lib64:/lib:/usr/lib
2022-03-03 00:09:47,696 [myid:] - INFO  [main:Environment@100] - Client environment:java.io.tmpdir=/tmp
2022-03-03 00:09:47,696 [myid:] - INFO  [main:Environment@100] - Client environment:java.compiler=<NA>
2022-03-03 00:09:47,696 [myid:] - INFO  [main:Environment@100] - Client environment:os.name=Linux
2022-03-03 00:09:47,696 [myid:] - INFO  [main:Environment@100] - Client environment:os.arch=amd64
2022-03-03 00:09:47,697 [myid:] - INFO  [main:Environment@100] - Client environment:os.version=3.10.0-957.el7.x86_64
2022-03-03 00:09:47,697 [myid:] - INFO  [main:Environment@100] - Client environment:user.name=root
2022-03-03 00:09:47,697 [myid:] - INFO  [main:Environment@100] - Client environment:user.home=/root
2022-03-03 00:09:47,697 [myid:] - INFO  [main:Environment@100] - Client environment:user.dir=/
2022-03-03 00:09:47,698 [myid:] - INFO  [main:ZooKeeper@442] - Initiating client connection, connectString=localhost:2181 sessionTimeout=30000 watcher=org.apache.zookeeper.ZooKeeperMain$MyWatcher@41906a77
2022-03-03 00:09:47,721 [myid:] - INFO  [main-SendThread(localhost:2181):ClientCnxn$SendThread@1029] - Opening socket connection to server localhost/127.0.0.1:2181. Will not attempt to authenticate using SASL (unknown error)
Welcome to ZooKeeper!
JLine support is enabled
2022-03-03 00:09:47,808 [myid:] - INFO  [main-SendThread(localhost:2181):ClientCnxn$SendThread@879] - Socket connection established to localhost/127.0.0.1:2181, initiating session
2022-03-03 00:09:47,817 [myid:] - INFO  [main-SendThread(localhost:2181):ClientCnxn$SendThread@1303] - Session establishment complete on server localhost/127.0.0.1:2181, sessionid = 0x1000009708f000e, negotiated timeout = 30000

WATCHER::

WatchedEvent state:SyncConnected type:None path:null
[zk: localhost:2181(CONNECTED) 0] ls /
[cluster, brokers, zookeeper, yarn-leader-election, hadoop-ha, admin, isr_change_notification, dolphinscheduler, log_dir_event_notification, controller_epoch, rmstore, consumers, latest_producer_id_block, config, hbase]
[zk: localhost:2181(CONNECTED) 1] rmr /hbase
[zk: localhost:2181(CONNECTED) 2] [root@zhiyong2 /]# ^C
[root@zhiyong2 /]# ^C
[root@zhiyong2 /]#
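Note: the rmr command shown above is deprecated on ZooKeeper 3.5+ clients; deleteall removes the subtree the same way:

[zk: localhost:2181(CONNECTED) 0] deleteall /hbase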

HDFS metadata deletion

Since part of HBase's data is stored in HDFS, the metadata HBase keeps in HDFS must also be deleted. Because USDP ships with Ranger and enforces real users and permissions, you cannot use root to delete important HDFS data; you must switch to the hadoop user first.

[root@zhiyong2 /]# hadoop fs -rmr /hbase/data/hbase/meta/*
rmr: DEPRECATED: Please use '-rm -r' instead.
rmr: Failed to move to trash: hdfs://zhiyong-1/hbase/data/hbase/meta/.tabledesc: Permission denied: user=root, access=WRITE, inode="/hbase/data/hbase/meta":hadoop:supergroup:drwxr-xr-x
rmr: Failed to move to trash: hdfs://zhiyong-1/hbase/data/hbase/meta/.tmp: Permission denied: user=root, access=WRITE, inode="/hbase/data/hbase/meta":hadoop:supergroup:drwxr-xr-x
rmr: Failed to move to trash: hdfs://zhiyong-1/hbase/data/hbase/meta/1588230740: Permission denied: user=root, access=WRITE, inode="/hbase/data/hbase/meta":hadoop:supergroup:drwxr-xr-x
[root@zhiyong2 /]# cd /etc/passwd/
-bash: cd: /etc/passwd/: Not a directory
[root@zhiyong2 /]# cat /etc/passwd
root:x:0:0:root:/root:/bin/bash
bin:x:1:1:bin:/bin:/sbin/nologin
daemon:x:2:2:daemon:/sbin:/sbin/nologin
adm:x:3:4:adm:/var/adm:/sbin/nologin
lp:x:4:7:lp:/var/spool/lpd:/sbin/nologin
sync:x:5:0:sync:/sbin:/bin/sync
shutdown:x:6:0:shutdown:/sbin:/sbin/shutdown
halt:x:7:0:halt:/sbin:/sbin/halt
mail:x:8:12:mail:/var/spool/mail:/sbin/nologin
operator:x:11:0:operator:/root:/sbin/nologin
games:x:12:100:games:/usr/games:/sbin/nologin
ftp:x:14:50:FTP User:/var/ftp:/sbin/nologin
nobody:x:99:99:Nobody:/:/sbin/nologin
systemd-network:x:192:192:systemd Network Management:/:/sbin/nologin
dbus:x:81:81:System message bus:/:/sbin/nologin
polkitd:x:999:998:User for polkitd:/:/sbin/nologin
libstoragemgmt:x:998:997:daemon account for libstoragemgmt:/var/run/lsm:/sbin/nologin
abrt:x:173:173::/etc/abrt:/sbin/nologin
rpc:x:32:32:Rpcbind Daemon:/var/lib/rpcbind:/sbin/nologin
apache:x:48:48:Apache:/usr/share/httpd:/sbin/nologin
sshd:x:74:74:Privilege-separated SSH:/var/empty/sshd:/sbin/nologin
postfix:x:89:89::/var/spool/postfix:/sbin/nologin
ntp:x:38:38::/etc/ntp:/sbin/nologin
chrony:x:997:995::/var/lib/chrony:/sbin/nologin
tcpdump:x:72:72::/:/sbin/nologin
hadoop:x:1000:1000::/home/hadoop:/bin/bash
mysql:x:27:27:MySQL Server:/var/lib/mysql:/bin/false
saslauth:x:996:76:Saslauthd user:/run/saslauthd:/sbin/nologin
elastic:x:1001:1001::/home/elastic:/bin/bash
hue:x:1002:1002::/home/hue:/bin/bash
[root@zhiyong2 /]# su - hadoop
Last login: Thu Mar  3 00:24:33 CST 2022
[hadoop@zhiyong2 ~]$ hadoop fs -rmr /hbase/data/hbase/meta/*
rmr: DEPRECATED: Please use '-rm -r' instead.
2022-03-03 00:27:31 INFO fs.TrashPolicyDefault: Moved: 'hdfs://zhiyong-1/hbase/data/hbase/meta/.tabledesc' to trash at: hdfs://zhiyong-1/user/hadoop/.Trash/Current/hbase/data/hbase/meta/.tabledesc
2022-03-03 00:27:31 INFO fs.TrashPolicyDefault: Moved: 'hdfs://zhiyong-1/hbase/data/hbase/meta/.tmp' to trash at: hdfs://zhiyong-1/user/hadoop/.Trash/Current/hbase/data/hbase/meta/.tmp
2022-03-03 00:27:31 INFO fs.TrashPolicyDefault: Moved: 'hdfs://zhiyong-1/hbase/data/hbase/meta/1588230740' to trash at: hdfs://zhiyong-1/user/hadoop/.Trash/Current/hbase/data/hbase/meta/1588230740
[hadoop@zhiyong2 ~]$ hadoop fs -rmr /hbase/data/hbase/namespace/*
rmr: DEPRECATED: Please use '-rm -r' instead.
2022-03-03 00:27:34 INFO fs.TrashPolicyDefault: Moved: 'hdfs://zhiyong-1/hbase/data/hbase/namespace/.tabledesc' to trash at: hdfs://zhiyong-1/user/hadoop/.Trash/Current/hbase/data/hbase/namespace/.tabledesc
2022-03-03 00:27:34 INFO fs.TrashPolicyDefault: Moved: 'hdfs://zhiyong-1/hbase/data/hbase/namespace/.tmp' to trash at: hdfs://zhiyong-1/user/hadoop/.Trash/Current/hbase/data/hbase/namespace/.tmp
2022-03-03 00:27:34 INFO fs.TrashPolicyDefault: Moved: 'hdfs://zhiyong-1/hbase/data/hbase/namespace/98fb8a0448305b2f9af4f9a72495b6df' to trash at: hdfs://zhiyong-1/user/hadoop/.Trash/Current/hbase/data/hbase/namespace/98fb8a0448305b2f9af4f9a72495b6df
[hadoop@zhiyong2 ~]$ hadoop fs -rmr /hbase/MasterProcWALs/*
rmr: DEPRECATED: Please use '-rm -r' instead.
2022-03-03 00:27:38 INFO fs.TrashPolicyDefault: Moved: 'hdfs://zhiyong-1/hbase/MasterProcWALs/pv2-00000000000000000009.log' to trash at: hdfs://zhiyong-1/user/hadoop/.Trash/Current/hbase/MasterProcWALs/pv2-00000000000000000009.log
2022-03-03 00:27:38 INFO fs.TrashPolicyDefault: Moved: 'hdfs://zhiyong-1/hbase/MasterProcWALs/pv2-00000000000000000010.log' to trash at: hdfs://zhiyong-1/user/hadoop/.Trash/Current/hbase/MasterProcWALs/pv2-00000000000000000010.log
2022-03-03 00:27:38 INFO fs.TrashPolicyDefault: Moved: 'hdfs://zhiyong-1/hbase/MasterProcWALs/pv2-00000000000000000011.log' to trash at: hdfs://zhiyong-1/user/hadoop/.Trash/Current/hbase/MasterProcWALs/pv2-00000000000000000011.log
2022-03-03 00:27:38 INFO fs.TrashPolicyDefault: Moved: 'hdfs://zhiyong-1/hbase/MasterProcWALs/pv2-00000000000000000012.log' to trash at: hdfs://zhiyong-1/user/hadoop/.Trash/Current/hbase/MasterProcWALs/pv2-00000000000000000012.log
2022-03-03 00:27:38 INFO fs.TrashPolicyDefault: Moved: 'hdfs://zhiyong-1/hbase/MasterProcWALs/pv2-00000000000000000013.log' to trash at: hdfs://zhiyong-1/user/hadoop/.Trash/Current/hbase/MasterProcWALs/pv2-00000000000000000013.log
[hadoop@zhiyong2 ~]$ exit
logout
[root@zhiyong2 /]#

Restart HBase
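USDP restarts HBase from its Web UI; on a plain Apache distribution, the equivalent would be the standard scripts (a sketch, not the USDP procedure):

# Run these from the HBase installation directory, as the user that owns it
bin/stop-hbase.sh
bin/start-hbase.sh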

Available after restart:

[hadoop@zhiyong2 ~]$ hbase shell
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/opt/usdp-srv/srv/udp/2.0.0.0/hdfs/share/hadoop/common/lib/slf4j-log4j12-1.7.25.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/opt/usdp-srv/srv/udp/2.0.0.0/hbase/lib/client-facing-thirdparty/slf4j-log4j12-1.7.25.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
HBase Shell
Use "help" to get list of supported commands.
Use "exit" to quit this interactive shell.
For Reference, please visit: http://hbase.apache.org/2.0/book.html#shell
Version 2.1.10, rUnknown, Mon Nov 23 09:56:35 WIB 2020
Took 0.0045 seconds
hbase(main):001:0> list_namespace
list_namespace          list_namespace_tables
hbase(main):001:0> list_namespace
NAMESPACE
default
hbase
2 row(s)
Took 0.5645 seconds
hbase(main):002:0> exit
[hadoop@zhiyong2 ~]$ exit
logout
[root@zhiyong2 /]# hbase shell
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/opt/usdp-srv/srv/udp/2.0.0.0/hdfs/share/hadoop/common/lib/slf4j-log4j12-1.7.25.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/opt/usdp-srv/srv/udp/2.0.0.0/hbase/lib/client-facing-thirdparty/slf4j-log4j12-1.7.25.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
HBase Shell
Use "help" to get list of supported commands.
Use "exit" to quit this interactive shell.
For Reference, please visit: http://hbase.apache.org/2.0/book.html#shell
Version 2.1.10, rUnknown, Mon Nov 23 09:56:35 WIB 2020
Took 0.0035 seconds
hbase(main):001:0> list_namespace
list_namespace          list_namespace_tables
hbase(main):001:0> list_namespace
NAMESPACE
default
hbase
2 row(s)
Took 0.6270 seconds
hbase(main):002:0> exit
[root@zhiyong2 /]#

At this point both the hadoop user and root can use the HBase shell normally. Note that the OS root user does not necessarily hold the highest permissions on HDFS; here the HBase directories in HDFS are owned by the hadoop user.

Node Kubelet Error: node “xxxxx“ not found [How to Solve]

Nov 05 10:58:35 localhost.localdomain kubelet[974]: E1105 10:58:35.108952     974 kubelet.go:2412] "Error getting node" err="node \"localhost.localdomain\" not found"
Nov 05 10:58:35 localhost.localdomain kubelet[974]: E1105 10:58:35.209293     974 kubelet.go:2412] "Error getting node" err="node \"localhost.localdomain\" not found"
Nov 05 10:58:35 localhost.localdomain kubelet[974]: E1105 10:58:35.310543     974 kubelet.go:2412] "Error getting node" err="node \"localhost.localdomain\" not found"
Nov 05 10:58:35 localhost.localdomain kubelet[974]: E1105 10:58:35.411121     974 kubelet.go:2412] "Error getting node" err="node \"localhost.localdomain\" not found"
Nov 05 10:58:35 localhost.localdomain kubelet[974]: E1105 10:58:35.511949     974 kubelet.go:2412] "Error getting node" err="node \"localhost.localdomain\" not found"
Nov 05 10:58:35 localhost.localdomain kubelet[974]: E1105 10:58:35.612822     974 kubelet.go:2412] "Error getting node" err="node \"localhost.localdomain\" not found"
Nov 05 10:58:35 localhost.localdomain kubelet[974]: E1105 10:58:35.713249     974 kubelet.go:2412] "Error getting node" err="node \"localhost.localdomain\" not found"
Nov 05 10:58:35 localhost.localdomain kubelet[974]: E1105 10:58:35.781263     974 controller.go:144] failed to ensure lease exists, will retry in 7s, error: leases.coordination.k8s.io "localhost.localdomain" is forbidden: User "system:node:k8s222" cannot get resource "leases" in API group "coordination.k8s.io" in the namespace "kube-node-lease": can only access node lease with the same name as the requesting node
Nov 05 10:58:35 localhost.localdomain kubelet[974]: E1105 10:58:35.813355     974 kubelet.go:2412] "Error getting node" err="node \"localhost.localdomain\" not found"
Nov 05 10:58:35 localhost.localdomain kubelet[974]: E1105 10:58:35.913495     974 kubelet.go:2412] "Error getting node" err="node \"localhost.localdomain\" not found"

1.1 The node is always NotReady

[root@crust-m01 ~]# kubectl get node
NAME        STATUS     ROLES                  AGE   VERSION
k8s220   NotReady   control-plane,master   44d   v1.21.3
k8s221   NotReady   <none>                 44d   v1.21.3
k8s222   NotReady   <none>                 44d   v1.21.3

1.2 View details of the node

[root@localhost ~]# kubectl describe node k8s221


……
Unschedulable:      false
Lease:
  HolderIdentity:  k8s221
  AcquireTime:     <unset>
  RenewTime:       Tue, 28 Sep 2021 14:37:08 +0800
Conditions:
  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
  ----             ------    -----------------                 ------------------                ------              -------
  MemoryPressure   Unknown   Tue, 28 Sep 2021 14:32:16 +0800   Tue, 28 Sep 2021 14:38:17 +0800   NodeStatusUnknown   Kubelet stopped posting node status.
  DiskPressure     Unknown   Tue, 28 Sep 2021 14:32:16 +0800   Tue, 28 Sep 2021 14:38:17 +0800   NodeStatusUnknown   Kubelet stopped posting node status.
  PIDPressure      Unknown   Tue, 28 Sep 2021 14:32:16 +0800   Tue, 28 Sep 2021 14:38:17 +0800   NodeStatusUnknown   Kubelet stopped posting node status.
  Ready            Unknown   Tue, 28 Sep 2021 14:32:16 +0800   Tue, 28 Sep 2021 14:38:17 +0800   NodeStatusUnknown   Kubelet stopped posting node status.
……

1.3 View the kubelet logs on the node

[root@crust-m2 ~]# service kubelet status -l
Redirecting to /bin/systemctl status  -l kubelet.service
● kubelet.service - kubelet: The Kubernetes Node Agent
   Loaded: loaded (/usr/lib/systemd/system/kubelet.service; enabled; vendor preset: disabled)
  Drop-In: /usr/lib/systemd/system/kubelet.service.d
           └─10-kubeadm.conf
   Active: active (running) since Tue 2021-09-28 14:51:57 CST; 4min 6s ago
     Docs: https://kubernetes.io/docs/
 Main PID: 21165 (kubelet)
    Tasks: 19
   Memory: 43.0M
   CGroup: /system.slice/kubelet.service
           └─21165 /usr/bin/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --kubeconfig=/etc/kubernetes/kubelet.conf --config=/var/lib/kubelet/config.yaml --network-plugin=cni --pod-infra-container-image=registry.aliyuncs.com/google_containers/pause:3.4.1

Sep 28 14:56:03 crust-m2 kubelet[21165]: E0928 14:56:03.119645   21165 kubelet.go:2291] "Error getting node" err="node \"crust-m2\" not found"
Sep 28 14:56:03 crust-m2 kubelet[21165]: E0928 14:56:03.220694   21165 kubelet.go:2291] "Error getting node" err="node \"crust-m2\" not found"
Sep 28 14:56:03 crust-m2 kubelet[21165]: E0928 14:56:03.321635   21165 kubelet.go:2291] "Error getting node" err="node \"crust-m2\" not found"
Sep 28 14:56:03 crust-m2 kubelet[21165]: E0928 14:56:03.385100   21165 eviction_manager.go:255] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crust-m2\" not found"
Sep 28 14:56:03 crust-m2 kubelet[21165]: E0928 14:56:03.422387   21165 kubelet.go:2291] "Error getting node" err="node \"crust-m2\" not found"
Sep 28 14:56:03 crust-m2 kubelet[21165]: E0928 14:56:03.523341   21165 kubelet.go:2291] "Error getting node" err="node \"crust-m2\" not found"
Sep 28 14:56:03 crust-m2 kubelet[21165]: E0928 14:56:03.624021   21165 kubelet.go:2291] "Error getting node" err="node \"crust-m2\" not found"
Sep 28 14:56:03 crust-m2 kubelet[21165]: E0928 14:56:03.724418   21165 kubelet.go:2291] "Error getting node" err="node \"crust-m2\" not found"
Sep 28 14:56:03 crust-m2 kubelet[21165]: E0928 14:56:03.825475   21165 kubelet.go:2291] "Error getting node" err="node \"crust-m2\" not found"
Sep 28 14:56:03 crust-m2 kubelet[21165]: E0928 14:56:03.926199   21165 kubelet.go:2291] "Error getting node" err="node \"crust-m2\" not found"

2. [troubleshooting]

The startup command shown in the logs in 1.3 is:

/usr/bin/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --kubeconfig=/etc/kubernetes/kubelet.conf --config=/var/lib/kubelet/config.yaml --network-plugin=cni --pod-infra-container-image=registry.aliyuncs.com/google_containers/pause:3.4.1

Viewing and analyzing all the configuration files referenced in this startup command shows no problem.

The logs in 1.3 report err="node \"localhost.localdomain\" not found",
while kubectl get node on the master shows the nodes as k8s220, k8s221, k8s222.

Conclusion
When Kubernetes was installed, the master was named k8s220 and the nodes k8s221 and k8s222; because /etc/hostname on this node was still the default localhost.localdomain, kubelet kept reporting errors.

3. [modification]

Modify the /etc/hostname file and set the correct server name with the hostname command,

then restart kubelet.
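A minimal sketch for one worker node, using the names this cluster expects (run the kubectl check from the master):

# On the node that should be k8s221:
hostnamectl set-hostname k8s221
systemctl restart kubelet
# On the master, the node should return to Ready shortly:
kubectl get node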

Kafka Topic Creation Script Error: ERROR org.apache.kafka.common.errors.InvalidReplicationFactorException: Replication factor larger than available brokers

Question:

To test the integration of Spark Streaming and Kafka, two topics need to be created in Kafka in advance, but the creation script reports the following error:

 kafka-topics.sh --zookeeper linux1:2181,linux2:2181,linux3:2181 --create --topic wufabao_topic01 --replication-factor 2 --partitions 3

WARNING: Due to limitations in metric names, topics with a period ('.') or underscore ('_') could collide. To avoid issues it is best to use either, but not both.
Error while executing topic command : Replication factor: 2 larger than available brokers: 0.
[2022-02-09 17:27:18,432] ERROR org.apache.kafka.common.errors.InvalidReplicationFactorException: Replication factor: 2 larger than available brokers: 0.
 (kafka.admin.TopicCommand$)

Reason:

The ZooKeeper path given to the script does not match the metadata path configured for Kafka. In Kafka's server.properties I had configured:
zookeeper.connect=linux1:2181,linux2:2181,linux3:2181/myKafka

Solution:

Use the same metadata path (the /myKafka chroot) in the script:
kafka-topics.sh --zookeeper linux1:2181,linux2:2181,linux3:2181/myKafka --create --topic wufabao_topic01 --replication-factor 2 --partitions 3
The topic is now created successfully.
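You can verify with the same chroot path; --describe should list the new topic's partitions and replicas:

kafka-topics.sh --zookeeper linux1:2181,linux2:2181,linux3:2181/myKafka --describe --topic wufabao_topic01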

[Solved] Error contacting service. It is probably not running.

First, check whether ZooKeeper started on more than half of the servers. If so, run the jps command; here it shows that the QuorumPeerMain main class is not started:

[atguigu@Hadoop103 zookeeper-3.5.7]$ jps
14850 Jps

The most likely cause: in the conf folder under the ZooKeeper installation path, zoo.cfg (renamed from the sample in my case) has trailing spaces after the cluster entries that were added, or the myid file has blank lines above/below the id or spaces around it. Open the files, delete the stray whitespace, and then check jps again.

#######################cluster########################## 
server.2=hadoop102:2888:3888 
server.3=hadoop103:2888:3888 
server.4=hadoop104:2888:3888
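Stray whitespace is hard to spot by eye; cat -A marks every line end with $, so trailing spaces and blank lines stand out. A sketch (the paths are examples for this layout; myid lives in your dataDir):

[atguigu@Hadoop103 zookeeper-3.5.7]$ cat -A conf/zoo.cfg | tail -5
[atguigu@Hadoop103 zookeeper-3.5.7]$ cat -A zkData/myid
[atguigu@Hadoop103 zookeeper-3.5.7]$ bin/zkServer.sh start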
