Tag Archives: ubuntu

[Problem solving] Error: failed to get canonical path of /cow

This post applies to Ubuntu 18.04.

Problem description:
Error: failed to get canonical path of /cow

Solution

Use the repair tool boot-repair (once installed, you can find "Boot Repair" in the Dash):

Boot the machine from the system CD or a USB drive and click "Try Ubuntu".

To install boot-repair, add its PPA and update the apt package lists:

sudo add-apt-repository ppa:yannubuntu/boot-repair && sudo apt-get update

If adding the PPA fails with an error, see: https://blog.csdn.net/weixin_44436677/article/details/107133371

Then install it:

sudo apt-get install -y boot-repair

Repair with boot-repair:

Launch boot-repair from the command line or open it from the Dash, then use the automatic repair in the software window, or select advanced mode and configure accordingly.

Wait a moment, then restart.

Solving the VirtualBox error: [drm:vmw_host_log [vmwgfx]] *ERROR* Failed to send host log message

Environment:
Hypervisor: VirtualBox 6.0.14 r133895 (Qt5.6.2)
Guest: Ubuntu 18.04 LTS

Problem description: [drm:vmw_host_log [vmwgfx]] *ERROR* Failed to send host log message

solution:

  1. Shut down the guest system.
  2. In the VirtualBox main panel, select the virtual machine to modify on the left and click "Settings" on the right.
  3. On the Settings page, click "Display" on the left, select the "Screen" tab, and set the graphics controller to VBoxVGA.
  4. Start the guest system; the error prompts no longer appear.
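If you prefer the command line, the same change can be made on the host with VBoxManage while the VM is powered off (the VM name below is a placeholder; substitute your own):

```shell
# Set the graphics controller of a powered-off VM to VBoxVGA.
# "Ubuntu 18.04" is a placeholder VM name.
VBoxManage modifyvm "Ubuntu 18.04" --graphicscontroller vboxvga
```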

Ubuntu error: flAbsPath on /var/lib/dpkg/status failed – realpath (2: No such file or directory)

Error:

E: flAbsPath on /var/lib/dpkg/status failed - realpath (2: No such file or directory)

Solution

sudo mkdir -p /var/lib/dpkg/{alternatives,info,parts,triggers,updates}
sudo cp /var/backups/dpkg.status.0 /var/lib/dpkg/status
apt-get download dpkg
sudo dpkg -i dpkg*.deb
apt-get download base-files
sudo dpkg -i base-files*.deb
sudo apt-get update
sudo apt-get check

Stop: job failed while stopping

System: Ubuntu 14.04


I had left a program running; after dinner I came back to find the computer frozen at the login screen, with the mouse barely able to move. I did not want to restart, since I would then have to re-run the program, so I searched for a way to get back into the system without rebooting.

Press Ctrl+Alt+F1 to enter a tty, run the top command, and look among the processes with a large memory footprint for unimportant ones that can be closed. The first column is the process ID (PID). Once you have decided which process to kill, run sudo kill <pid>. After the memory pressure drops, you should be able to get into the system without incident.

After logging in, I found that the network icon in the upper right corner was an empty fan: the cable was plugged in, yet the system showed neither a wired nor a wireless connection, so something was wrong with the network stack. Restarting would probably have fixed it, but for the reason above I still did not want to reboot, so I searched again. None of the methods I found worked for my situation, but I record them here in case they are useful to others; my problem was solved in the end.

The first method I looked up:

sudo /etc/init.d/networking restart

This prompted:

start: Job is already running: networking

which indicates that the network service is running but cannot be stopped normally. Further searching suggested checking the error log:

sudo tail -f /var/log/upstart/networking.log

Stopping or restarting the networking job is not supported.
Use ifdown & ifup to reconfigure desired interface.

Following the log's advice, I continued with:
sudo ifdown eth0 && sudo ifup eth0

It failed again:

ifdown: interface eth0 not configured
Ignoring unknown interface eth0=eth0.

Then, trusting to luck, I tried the first command again, but it failed once more:
sudo /etc/init.d/networking restart

Other commands tried along the way:

ifconfig eth0 down
ifconfig eth0 up

Finally it occurred to me that I had previously taken notes on some Ubuntu problems. After reviewing them, I tried the following command:

sudo NetworkManager restart

This did not solve the problem, but the hint it printed was useful to me:

NetworkManager is running (pid 1082)

Since the network service could not be shut down normally, why not simply kill the process and start the service again? So I kept trying:

sudo kill 1082
sudo NetworkManager restart

It now prompted:

NetworkManager is running (pid 30417)

The process ID has changed, indicating that the network service restarted successfully.
The network fault was then gone and I could access the Internet normally.
I hope this solves your problem too.
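If you want to script the kill-and-restart trick, the PID can be parsed out of the status message with sed (the message text here is just the one from the session above):

```shell
# Extract the PID from a "NetworkManager is running (pid NNNN)" message.
msg="NetworkManager is running (pid 1082)"
pid=$(printf '%s\n' "$msg" | sed -n 's/.*(pid \([0-9][0-9]*\)).*/\1/p')
echo "$pid"
# one could then run: sudo kill "$pid" && sudo NetworkManager restart
```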


That's all; comments and exchanges are welcome.

Solve the error in Ubuntu 18.04: called “net usershare info” but it failed: failed to execute child process “net”

1. Problem description

While using Ubuntu 18.04, I suddenly encountered the following error:

Called "net usershare info" but it failed: Failed to execute child process “net” (No such file or directory)

The error comes from Ubuntu's nautilus file manager; it is triggered when nautilus is closed.

2. Solution

S1. Install samba-common-bin by running the following command in the terminal:

sudo apt install samba-common-bin

If the error still appears when closing nautilus after completing S1, proceed to S2.

S2. Execute the following command:

sudo mkdir /var/lib/samba/usershares

Closing nautilus will no longer report an error; the problem is resolved.


Reference:

https://askubuntu.com/questions/1024593/failed-to-execute-child-process-net-when-entering-nautilus

Solving "add-apt-repository: command not found" in Ubuntu

Launchpad PPA repositories are personal third-party repositories, not maintained by Ubuntu, that make it easy to install third-party software.

However, running the add-apt-repository command sometimes reports that the command does not exist, in which case you cannot add a PPA.

The solution is to install software-properties-common:

sudo apt-get install software-properties-common

Restarting and shutting down an Ubuntu system from the terminal

If you want to keep the current user logged in, execute the commands as root:

su root

Enter the password when prompted. Restart commands:

1. reboot

2. shutdown -r now: restart immediately (root user)

3. shutdown -r 10: restart automatically after 10 minutes (root user)

If a restart was scheduled with the shutdown command, it can be cancelled with shutdown -c.

Shutdown commands:

1. halt: shut down immediately

2. poweroff: shut down immediately

3. shutdown -h now: shut down immediately (root user)

If a shutdown was scheduled with the shutdown command, it can be cancelled with shutdown -c.

Linux offers several commands for shutting down and restarting the system: shutdown, halt, reboot, and init. Their internal workings differ.

1. The shutdown command

safely shuts down the system. When it runs, all logged-in users are notified that the system is going down and logins are frozen, i.e. no new users can log in. The command can shut down or restart either immediately or after a delay. (Note: only the superuser can use this command.)

Command syntax:

shutdown [option] [time] [warning message]

-k: only send a warning message to users, without shutting down

-r: restart the system after shutting it down

-h: halt the system after shutting it down, without restarting

-f: shut down quickly and skip the disk check on the next boot (in my test the system entered the BIOS screen and could not be operated)

-n: shut down quickly without going through the init program

-c: cancel a shutdown in progress (I did not test this specifically)

2. The halt command

Using halt amounts to calling shutdown -h to shut the system down.

Command syntax:

halt [option]

-w: do not actually shut down; only write a wtmp (/var/log/wtmp) record

-d: do not write to wtmp

-f: force a halt without calling shutdown

-i: shut down all network interfaces before halting or restarting the system

-p: call poweroff when halting the system; this is the default

3. The reboot command

reboot works much like halt, except that it triggers a host restart. Its parameters are similar to halt's.

4. The init command

init controls the system through runlevels. It is the ancestor of all processes and its process number is always 1, so sending init the TERM signal kills all user processes, daemons, and so on; shutdown uses this mechanism. init 0 shuts down the system and init 6 restarts it.

How to search for files or folders in Ubuntu

1. whereis <filename>

whereis searches for a program's files. The results are limited to binaries (option -b), man pages (option -m), and source files (option -s); if the options are omitted, all of them are returned.



2. find / -name <filename>

find searches under the specified directory; using / means searching every directory. Because find traverses the disk, it is expensive and somewhat slow.



3. locate <filename>
Linux records all the files on the system in a database file. locate looks the target up in that database, which is much more efficient than find's traversal of the disk.

The problem is that the database is not updated in real time (typically it is updated weekly), so locate's results may be out of date. You can run updatedb before locate to make sure the results are current.


4. which <executable name>

which looks for executable files in the directories listed in the $PATH environment variable.
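As a rough illustration of what `which` does (a toy sketch, not the real implementation), you can walk `$PATH` by hand in the shell:

```shell
# Toy re-implementation of `which`: print the first directory in $PATH
# that contains an executable file with the given name.
find_in_path() {
    name=$1
    old_ifs=$IFS
    IFS=':'
    for dir in $PATH; do
        if [ -f "$dir/$name" ] && [ -x "$dir/$name" ]; then
            IFS=$old_ifs
            printf '%s/%s\n' "$dir" "$name"
            return 0
        fi
    done
    IFS=$old_ifs
    return 1
}

find_in_path sh
```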


Ubuntu error resolution: 'aclocal-1.14' is missing on your system


Problem

We ran into the following problem when running make while installing protobuf:

CDPATH="${ZSH_VERSION+.}:" && cd . && /bin/bash /home/user/protobuf-2.6.1/protobuf-2.6.1/missing aclocal-1.14 -I m4
/home/user/protobuf-2.6.1/protobuf-2.6.1/missing: line 81: aclocal-1.14: command not found
WARNING: 'aclocal-1.14' is missing on your system.
         You should only need it if you modified 'acinclude.m4' or
         'configure.ac' or m4 files included by 'configure.ac'.
         The 'aclocal' program is part of the GNU Automake package:
         <http://www.gnu.org/software/automake>
         It also requires GNU Autoconf, GNU m4 and Perl in order to run:
         <http://www.gnu.org/software/autoconf>
         <http://www.gnu.org/software/m4/>
         <http://www.perl.org/>
Makefile:641: recipe for target 'aclocal.m4' failed
make: *** [aclocal.m4] Error 127

Solution

Reference: http://blog.csdn.net/wwt18946637566/article/details/46602305

In fact I needed only one command:
sudo autoreconf -ivf

How to generate and view SSH keys in Ubuntu 16.04


Check whether an SSH key already exists locally

Enter at the terminal:

ls -al ~/.ssh

If the output is:

No such file or directory

then there is no SSH key yet; if one exists, the listing will show files such as id_rsa and id_rsa.pub.

Generate a new SSH key

First enter in the terminal:

ssh-keygen -t rsa -C "[email protected]"

Here [email protected] is the email address you registered with on GitHub or GitLab.

The terminal will then display:

Created directory '/Users/xxx/.ssh'.
Enter passphrase (empty for no passphrase):

You are asked where to save the key; the default path is /Users/xxx/.ssh/id_rsa, so just press enter.

Note that if you already have an SSH key and recreate it with the command above, it will ask whether you want to overwrite it; just type y and press enter.

and the terminal will prompt:

Enter passphrase (empty for no passphrase):

You are prompted to set a passphrase, which you will have to enter each time you communicate over Git; setting one is recommended to guard against mishaps.

After success, the terminal will prompt:

Your identification has been saved in /Users/xxx/.ssh/id_rsa.

Your public key has been saved in /Users/xxx/.ssh/id_rsa.pub.

The key fingerprint is:

16:27:ac:a5:76:28:2d:36:63:1b:56:4d:eb:df:a6:48 [email protected]

The key's randomart image is:

(a heart-shaped randomart image)

Then input at the terminal:

ssh-add ~/.ssh/id_rsa

You will be asked for the passphrase you entered above.

After success, the terminal displays:

Identity added: /Users/xxx/.ssh/id_rsa (/Users/xxx/.ssh/id_rsa)

Finally, two files have been generated in /Users/xxx/.ssh/: id_rsa and id_rsa.pub.

input at the terminal:

cat /Users/xxx/.ssh/id_rsa.pub

The terminal will display your SSH public key; just copy it.

That's all ~ O(∩_∩)O haha ~

Error: java.lang.RuntimeException: PipeMapRed.waitOutputThreads(): subprocess failed with code 127

I recently started MapReduce programming for a big data course, using the code below. Running it with hadoop-streaming kept reporting an error:

hadoop jar /opt/hadoop-2.7.3/share/hadoop/tools/lib/hadoop-streaming-2.7.3.jar -input /ncdc -output /ncdc_out -mapper max_temp_map.py -reducer max_temp_reduce.py -file max_temp_map.py -file max_temp_reduce.py

Error: java.lang.RuntimeException: PipeMapRed.waitOutputThreads(): subprocess failed with code 127

It took me two days to solve this problem.

During that time I searched many places on Baidu and Google. The suggested methods were all about CRLF vs. LF file-encoding problems, but for me the solution was the following.
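For reference on the CRLF/LF explanation: exit code 127 means "command not found", and a mapper saved with Windows CRLF line endings makes Hadoop look for an interpreter literally named /usr/bin/env python followed by a carriage return, which does not exist. Below is a minimal streaming-mapper sketch in that style; the "year,temperature" record format is hypothetical, not the author's actual max_temp_map.py:

```python
#!/usr/bin/env python
# Minimal Hadoop-streaming-style mapper (hypothetical "year,temperature"
# input records). If this file were saved with CRLF endings, the shebang
# would end in "\r" and every task would fail with exit code 127.
import sys


def map_records(lines):
    """Turn "year,temp" lines into tab-separated key/value pairs."""
    pairs = []
    for line in lines:
        fields = line.strip().split(",")
        if len(fields) == 2:
            pairs.append("%s\t%s" % (fields[0], fields[1]))
    return pairs


if __name__ == "__main__":
    for pair in map_records(sys.stdin):
        print(pair)
```

In that case, converting the script with dos2unix (and making sure it is executable) is the usual fix.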

My solution:

The number of live DataNodes needs to be at least 2 — this is my guess.

Go into the Hadoop directory and find the slaves file. Mine contained only slave1, so I simply added master, which is the hostname of the master node.

A side complaint: I had deleted my snapshots too quickly and ended up reinstalling the cluster twice; honestly exasperating.

Below is the error message:

root@master:/usr/bin# hadoop jar /opt/hadoop-2.7.3/share/hadoop/tools/lib/hadoop-streaming-2.7.3.jar -input /ncdc -output /ncdc_out -mapper max_temp_map.py -reducer max_temp_reduce.py -file max_temp_map.py -file max_temp_reduce.py
20/05/30 14:21:17 WARN streaming.StreamJob: -file option is deprecated, please use generic option -files instead.
packageJobJar: [max_temp_map.py, max_temp_reduce.py, /tmp/hadoop-unjar5025674109683727172/] [] /tmp/streamjob1773442556914840065.jar tmpDir=null
20/05/30 14:21:18 INFO client.RMProxy: Connecting to ResourceManager at master/192.168.150.131:8032
20/05/30 14:21:18 INFO client.RMProxy: Connecting to ResourceManager at master/192.168.150.131:8032
20/05/30 14:21:19 WARN hdfs.DFSClient: Caught exception 
java.lang.InterruptedException
	at java.lang.Object.wait(Native Method)
	at java.lang.Thread.join(Thread.java:1252)
	at java.lang.Thread.join(Thread.java:1326)
	at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.closeResponder(DFSOutputStream.java:609)
	at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.endBlock(DFSOutputStream.java:370)
	at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:546)
20/05/30 14:21:19 INFO mapred.FileInputFormat: Total input paths to process : 1
20/05/30 14:21:19 WARN hdfs.DFSClient: Caught exception 
java.lang.InterruptedException
	at java.lang.Object.wait(Native Method)
	at java.lang.Thread.join(Thread.java:1252)
	at java.lang.Thread.join(Thread.java:1326)
	at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.closeResponder(DFSOutputStream.java:609)
	at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.endBlock(DFSOutputStream.java:370)
	at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:546)
20/05/30 14:21:19 INFO mapreduce.JobSubmitter: number of splits:2
20/05/30 14:21:19 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1590817515448_0008
20/05/30 14:21:19 INFO impl.YarnClientImpl: Submitted application application_1590817515448_0008
20/05/30 14:21:19 INFO mapreduce.Job: The url to track the job: http://master:8088/proxy/application_1590817515448_0008/
20/05/30 14:21:19 INFO mapreduce.Job: Running job: job_1590817515448_0008
20/05/30 14:21:27 INFO mapreduce.Job: Job job_1590817515448_0008 running in uber mode : false
20/05/30 14:21:27 INFO mapreduce.Job:  map 0% reduce 0%
20/05/30 14:21:33 INFO mapreduce.Job: Task Id : attempt_1590817515448_0008_m_000001_0, Status : FAILED
Error: java.lang.RuntimeException: PipeMapRed.waitOutputThreads(): subprocess failed with code 127
	at org.apache.hadoop.streaming.PipeMapRed.waitOutputThreads(PipeMapRed.java:322)
	at org.apache.hadoop.streaming.PipeMapRed.mapRedFinished(PipeMapRed.java:535)
	at org.apache.hadoop.streaming.PipeMapper.close(PipeMapper.java:130)
	at org.apache.hadoop.mapred.MapRunner.run(MapRunner.java:61)
	at org.apache.hadoop.streaming.PipeMapRunner.run(PipeMapRunner.java:34)
	at org.apache.hadoop.mapred.MapTask.runOldMapper(MapTask.java:453)
	at org.apache.hadoop.mapred.MapTask.run(MapTask.java:343)
	at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:164)
	at java.security.AccessController.doPrivileged(Native Method)
	at javax.security.auth.Subject.doAs(Subject.java:422)
	at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1698)
	at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:158)

20/05/30 14:21:33 INFO mapreduce.Job: Task Id : attempt_1590817515448_0008_m_000000_0, Status : FAILED
Error: java.lang.RuntimeException: PipeMapRed.waitOutputThreads(): subprocess failed with code 127
	at org.apache.hadoop.streaming.PipeMapRed.waitOutputThreads(PipeMapRed.java:322)
	at org.apache.hadoop.streaming.PipeMapRed.mapRedFinished(PipeMapRed.java:535)
	at org.apache.hadoop.streaming.PipeMapper.close(PipeMapper.java:130)
	at org.apache.hadoop.mapred.MapRunner.run(MapRunner.java:61)
	at org.apache.hadoop.streaming.PipeMapRunner.run(PipeMapRunner.java:34)
	at org.apache.hadoop.mapred.MapTask.runOldMapper(MapTask.java:453)
	at org.apache.hadoop.mapred.MapTask.run(MapTask.java:343)
	at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:164)
	at java.security.AccessController.doPrivileged(Native Method)
	at javax.security.auth.Subject.doAs(Subject.java:422)
	at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1698)
	at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:158)

20/05/30 14:21:39 INFO mapreduce.Job: Task Id : attempt_1590817515448_0008_m_000001_1, Status : FAILED
Error: java.lang.RuntimeException: PipeMapRed.waitOutputThreads(): subprocess failed with code 127
	at org.apache.hadoop.streaming.PipeMapRed.waitOutputThreads(PipeMapRed.java:322)
	at org.apache.hadoop.streaming.PipeMapRed.mapRedFinished(PipeMapRed.java:535)
	at org.apache.hadoop.streaming.PipeMapper.close(PipeMapper.java:130)
	at org.apache.hadoop.mapred.MapRunner.run(MapRunner.java:61)
	at org.apache.hadoop.streaming.PipeMapRunner.run(PipeMapRunner.java:34)
	at org.apache.hadoop.mapred.MapTask.runOldMapper(MapTask.java:453)
	at org.apache.hadoop.mapred.MapTask.run(MapTask.java:343)
	at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:164)
	at java.security.AccessController.doPrivileged(Native Method)
	at javax.security.auth.Subject.doAs(Subject.java:422)
	at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1698)
	at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:158)

Container killed by the ApplicationMaster.
Container killed on request. Exit code is 143
Container exited with a non-zero exit code 143

20/05/30 14:21:40 INFO mapreduce.Job: Task Id : attempt_1590817515448_0008_m_000000_1, Status : FAILED
Error: java.lang.RuntimeException: PipeMapRed.waitOutputThreads(): subprocess failed with code 127
	at org.apache.hadoop.streaming.PipeMapRed.waitOutputThreads(PipeMapRed.java:322)
	at org.apache.hadoop.streaming.PipeMapRed.mapRedFinished(PipeMapRed.java:535)
	at org.apache.hadoop.streaming.PipeMapper.close(PipeMapper.java:130)
	at org.apache.hadoop.mapred.MapRunner.run(MapRunner.java:61)
	at org.apache.hadoop.streaming.PipeMapRunner.run(PipeMapRunner.java:34)
	at org.apache.hadoop.mapred.MapTask.runOldMapper(MapTask.java:453)
	at org.apache.hadoop.mapred.MapTask.run(MapTask.java:343)
	at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:164)
	at java.security.AccessController.doPrivileged(Native Method)
	at javax.security.auth.Subject.doAs(Subject.java:422)
	at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1698)
	at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:158)

Container killed by the ApplicationMaster.
Container killed on request. Exit code is 143
Container exited with a non-zero exit code 143

20/05/30 14:21:46 INFO mapreduce.Job: Task Id : attempt_1590817515448_0008_m_000001_2, Status : FAILED
Error: java.lang.RuntimeException: PipeMapRed.waitOutputThreads(): subprocess failed with code 127
	at org.apache.hadoop.streaming.PipeMapRed.waitOutputThreads(PipeMapRed.java:322)
	at org.apache.hadoop.streaming.PipeMapRed.mapRedFinished(PipeMapRed.java:535)
	at org.apache.hadoop.streaming.PipeMapper.close(PipeMapper.java:130)
	at org.apache.hadoop.mapred.MapRunner.run(MapRunner.java:61)
	at org.apache.hadoop.streaming.PipeMapRunner.run(PipeMapRunner.java:34)
	at org.apache.hadoop.mapred.MapTask.runOldMapper(MapTask.java:453)
	at org.apache.hadoop.mapred.MapTask.run(MapTask.java:343)
	at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:164)
	at java.security.AccessController.doPrivileged(Native Method)
	at javax.security.auth.Subject.doAs(Subject.java:422)
	at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1698)
	at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:158)

Container killed by the ApplicationMaster.
Container killed on request. Exit code is 143
Container exited with a non-zero exit code 143

20/05/30 14:21:47 INFO mapreduce.Job: Task Id : attempt_1590817515448_0008_m_000000_2, Status : FAILED
Error: java.lang.RuntimeException: PipeMapRed.waitOutputThreads(): subprocess failed with code 127
	at org.apache.hadoop.streaming.PipeMapRed.waitOutputThreads(PipeMapRed.java:322)
	at org.apache.hadoop.streaming.PipeMapRed.mapRedFinished(PipeMapRed.java:535)
	at org.apache.hadoop.streaming.PipeMapper.close(PipeMapper.java:130)
	at org.apache.hadoop.mapred.MapRunner.run(MapRunner.java:61)
	at org.apache.hadoop.streaming.PipeMapRunner.run(PipeMapRunner.java:34)
	at org.apache.hadoop.mapred.MapTask.runOldMapper(MapTask.java:453)
	at org.apache.hadoop.mapred.MapTask.run(MapTask.java:343)
	at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:164)
	at java.security.AccessController.doPrivileged(Native Method)
	at javax.security.auth.Subject.doAs(Subject.java:422)
	at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1698)
	at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:158)

Container killed by the ApplicationMaster.
Container killed on request. Exit code is 143
Container exited with a non-zero exit code 143

20/05/30 14:21:54 INFO mapreduce.Job:  map 100% reduce 100%
20/05/30 14:21:54 INFO mapreduce.Job: Job job_1590817515448_0008 failed with state FAILED due to: Task failed task_1590817515448_0008_m_000001
Job failed as tasks failed. failedMaps:1 failedReduces:0

20/05/30 14:21:54 INFO mapreduce.Job: Counters: 17
	Job Counters 
		Failed map tasks=7
		Killed map tasks=1
		Killed reduce tasks=1
		Launched map tasks=8
		Other local map tasks=6
		Data-local map tasks=2
		Total time spent by all maps in occupied slots (ms)=36658
		Total time spent by all reduces in occupied slots (ms)=0
		Total time spent by all map tasks (ms)=36658
		Total time spent by all reduce tasks (ms)=0
		Total vcore-milliseconds taken by all map tasks=36658
		Total vcore-milliseconds taken by all reduce tasks=0
		Total megabyte-milliseconds taken by all map tasks=37537792
		Total megabyte-milliseconds taken by all reduce tasks=0
	Map-Reduce Framework
		CPU time spent (ms)=0
		Physical memory (bytes) snapshot=0
		Virtual memory (bytes) snapshot=0
20/05/30 14:21:54 ERROR streaming.StreamJob: Job not successful!
Streaming Command Failed!
root@master:/usr/bin#