
[Solved] Internal error XFS_WANT_CORRUPTED_GOTO at line 1635 of file fs/xfs/libxfs/xfs_alloc.c.

Error Messages:

Internal error XFS_WANT_CORRUPTED_GOTO at line 1635 of file fs/xfs/libxfs/xfs_alloc.c. Caller xfs_free_extent

Internal error xfs_trans_cancel at line 990 of file fs/xfs/xfs_trans.c.

xfs_repair: /dev/mapper/cl-root contains a mounted filesystem
xfs_repair: /dev/mapper/cl-root contains a mounted writable filesystem
fatal error – couldn’t initialize XFS library

 

Reason description:

Most of the solutions found online do not explain the cause, so people simply follow the steps without understanding why. Some of them happen to fix the problem because the damaged partition is the one mounted for a system directory, while others fail because the damaged partition is a different mount.

The errors above are mainly caused by corruption in a disk partition's filesystem, which therefore needs to be repaired. First check which mounted directory's partition has the problem, and then repair that partition.

 

Solution:

1. First, after the errors above are reported, check the prompt shown on the console.

The console asks for the root password; after entering it you are logged in as root and can run commands.

Start with the two commands below: df shows the partitions currently mounted, and cat /etc/fstab shows the directories configured for persistent mounting. Comparing the two shows that the directory mounted at /book is missing, so it can be inferred that the partition /dev/sdb1 has the problem and needs to be repaired.

df -h
cat /etc/fstab

2. Next, repair the partition with the command below. If you run xfs_repair without the -L option, it refuses to proceed because the filesystem still has a dirty metadata log.

Function of the -L option:

The -L option forces xfs_repair to zero the metadata log so the repair can complete; any transactions still in the log are discarded.

xfs_repair -L /dev/sdb1

Finally, reboot with the following command and the problem is solved:

 init 6

Note: if you run xfs_repair on a partition that is not damaged, it will report an error, so be sure to identify the damaged partition first and only then run xfs_repair.
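A minimal sketch of the whole repair flow, assuming the damaged partition is /dev/sdb1 and it is not the root filesystem:

# xfs_repair refuses to run on a mounted filesystem (see the errors above),
# so make sure the partition is unmounted first.
umount /dev/sdb1

# Zero the dirty metadata log and repair; transactions still in the log are discarded.
xfs_repair -L /dev/sdb1

# Reboot so everything is remounted cleanly.
init 6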

 

[Solved] hive on spark: Execution Error, return code 30041 from org.apache.hadoop.hive.ql.exec.spark.SparkTask

Problem Description:
While deploying Hive on Spark, the test reports an error: creating the table succeeds, but the following error occurs when running an insert:

Failed to execute spark task, with exception ‘org.apache.hadoop.hive.ql.metadata.HiveException(Failed to create Spark client for Spark session 2df0eb9a-15b4-4d81-aea1-24b12094bf44)’
FAILED: Execution Error, return code 30041 from org.apache.hadoop.hive.ql.exec.spark.SparkTask. Failed to create Spark client for Spark session 2df0eb9a-15b4-4d81-aea1-24b12094bf44

Check the Hive log under the /tmp/xiaobai path for the time the error occurred:

Cause analysis:
The log shows "timed out waiting for client connection", which indicates that the connection between Hive and Spark timed out.

Solution
1). Rename the spark-env.sh.template file in the /opt/module/spark/conf/ directory to spark-env.sh, then add the line export SPARK_DIST_CLASSPATH=$(hadoop classpath)
2). Modify hive-site.xml in the /opt/module/hive/conf directory to increase the connection timeout between Hive and Spark (see the sketch below).
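A hedged sketch of the two changes (the paths are the ones used in this deployment; the timeout property names are the ones commonly adjusted for Hive on Spark and may differ by Hive version):

# /opt/module/spark/conf/spark-env.sh
export SPARK_DIST_CLASSPATH=$(hadoop classpath)

<!-- /opt/module/hive/conf/hive-site.xml : raise the Hive-on-Spark client connection timeouts -->
<property>
    <name>hive.spark.client.connect.timeout</name>
    <value>100000ms</value>
</property>
<property>
    <name>hive.spark.client.server.connect.timeout</name>
    <value>300000ms</value>
</property>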

Execute the insert statement again. Success! Tears of joy.

I hit this error last night and spent the whole night checking without solving it; it was finally fixed today.

Common errors and solutions in MapReduce stage

1) Import statements are error prone, especially Text and CombineTextInputFormat (make sure the Hadoop classes are imported rather than same-named classes from other packages).
2) The first input parameter of a Mapper must be LongWritable or NullWritable, not IntWritable; otherwise a type conversion exception is reported (see the sketch below).
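A minimal sketch for item 2 (class and field names are illustrative): a mapper over text input whose input key is LongWritable, the byte offset of each line.

import java.io.IOException;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;

// The input key of a mapper reading text input is LongWritable (the byte
// offset of the line), not IntWritable.
public class WordCountMapper extends Mapper<LongWritable, Text, Text, IntWritable> {
    private final Text word = new Text();
    private final IntWritable one = new IntWritable(1);

    @Override
    protected void map(LongWritable key, Text value, Context context)
            throws IOException, InterruptedException {
        for (String token : value.toString().split("\\s+")) {
            word.set(token);
            context.write(word, one);
        }
    }
}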
3) java.lang.Exception: java.io.IOException: Illegal partition for 13926435656 (4) indicates that the number of partitions does not match the number of ReduceTasks, so adjust the number of ReduceTasks.
4) If the number of partitions is not 1 but the number of ReduceTasks is 1, does the partitioning step still run? The answer is no: in the MapTask source code, partitioning only runs after checking that the number of reduce tasks is greater than 1; if it is not greater than 1, partitioning is skipped (see the sketch below).
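A hedged sketch for items 3 and 4 (the partitioner and the partition count are hypothetical): the ReduceTask count set in the driver must match the number of partitions the partitioner can return, and with only one reduce task the partitioner is never invoked.

import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Partitioner;

// Hypothetical partitioner that routes keys into 5 partitions by their first character.
public class PhonePartitioner extends Partitioner<Text, IntWritable> {
    @Override
    public int getPartition(Text key, IntWritable value, int numPartitions) {
        switch (key.toString().substring(0, 1)) {
            case "1": return 0;
            case "2": return 1;
            case "3": return 2;
            case "4": return 3;
            default:  return 4;   // everything else goes to the last partition
        }
    }
}

// In the driver: the ReduceTask count must match the 5 partitions above,
// otherwise "Illegal partition" is thrown; if it were set to 1, MapTask
// would skip the partitioner entirely.
//   job.setPartitionerClass(PhonePartitioner.class);
//   job.setNumReduceTasks(5);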
5) When a jar built in a Windows environment is copied to a Linux environment and run with
hadoop jar wc.jar com.atguigu.mapreduce.wordcount.WordCountDriver /user/atguigu/ /user/atguigu/output
the following error is reported:
Exception in thread "main" java.lang.UnsupportedClassVersionError:
com/atguigu/mapreduce/wordcount/WordCountDriver : Unsupported major.minor version 52.0
The reason is that the JDK used to compile on Windows differs from the JDK installed on Linux (one environment uses JDK 1.7, the other JDK 1.8).
Solution: unify the JDK version (see the sketch below).
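A hedged sketch, assuming the project is built with Maven: pin the compiler level so the jar produced on Windows matches the JDK installed on the Linux cluster (1.8 here).

<!-- pom.xml: compile for the same Java version that the cluster runs -->
<properties>
    <maven.compiler.source>1.8</maven.compiler.source>
    <maven.compiler.target>1.8</maven.compiler.target>
</properties>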
6) When caching the small file pd.txt, the job reports that pd.txt cannot be found.
Reason: most of the time the path is written incorrectly. Also check whether the file is actually named pd.txt.txt. On some machines a relative path cannot be resolved and pd.txt is not found; change it to an absolute path (see the sketch after item 7).
7) A type conversion exception is reported.
This is usually a mistake when setting the map output types and the final output types in the driver. If the key output by map cannot be sorted (for example, it does not implement WritableComparable), a type conversion exception is also reported.
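For item 6, a minimal sketch (the cluster address and path are hypothetical) of adding the small file to the distributed cache with an absolute HDFS URI so every task can locate pd.txt:

import java.net.URI;
import org.apache.hadoop.mapreduce.Job;

public class CacheFileConfig {
    // Use an absolute URI rather than a relative path when caching pd.txt.
    public static void addSmallFileCache(Job job) throws Exception {
        job.addCacheFile(new URI("hdfs://hadoop102:8020/cache/pd.txt")); // hypothetical NameNode address and path
    }
}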
8) When running wc.jar on the cluster, an error occurs because the input file cannot be found.
Reason: the input file for the WordCount case cannot be placed in the root directory of the HDFS cluster; put it under a user directory instead (see the sketch below).
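A minimal sketch with hypothetical paths: put the WordCount input under a user directory on HDFS instead of the root directory, then point the job at it.

hadoop fs -mkdir -p /user/atguigu/input
hadoop fs -put wc.txt /user/atguigu/input
hadoop jar wc.jar com.atguigu.mapreduce.wordcount.WordCountDriver /user/atguigu/input /user/atguigu/output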
9) The following exception appears:
Exception in thread "main" java.lang.UnsatisfiedLinkError: org.apache.hadoop.io.nativeio.NativeIO$Windows.access0(Ljava/lang/String;I)Z
at org.apache.hadoop.io.nativeio.NativeIO$Windows.access0(Native Method)
at org.apache.hadoop.io.nativeio.NativeIO$Windows.access(NativeIO.java:609)
at org.apache.hadoop.fs.FileUtil.canRead(FileUtil.java:977)
java.io.IOException: Could not locate executable null\bin\winutils.exe in the Hadoop binaries.
at org.apache.hadoop.util.Shell.getQualifiedBinPath(Shell.java:356)
at org.apache.hadoop.util.Shell.getWinUtilsPath(Shell.java:371)
at org.apache.hadoop.util.Shell.<clinit>(Shell.java:364)
Solution: copy the hadoop.dll file to the Windows directory C:\Windows\System32. Some students may also need to modify the Hadoop source code.
Scheme 2: create the same package name (org.apache.hadoop.io.nativeio) in your project and copy NativeIO.java into it.
10) When customizing an output format, note that the close method of the RecordWriter must close the stream resources; otherwise the output files are empty.
@Override
public void close(TaskAttemptContext context) throws IOException, InterruptedException {
    if (atguigufos != null) {
        atguigufos.close();
    }
    if (otherfos != null) {
        otherfos.close();
    }
}