
ModelSim Error: (vlog-13067) Syntax error, unexpected non-printable character.

 ** Error: (vlog-13067) Syntax error, unexpected non-printable character.

The reason is that the .v file is encoded as UTF-8, while ModelSim expects ANSI. UTF-8 contains whitespace-like characters (such as non-breaking spaces) that are not ordinary blanks. After converting the .v file to ANSI encoding, these unusual whitespace characters become visible; simply delete them.
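If you want to locate the offending characters without converting the file, a small script can list every character outside printable ASCII. This is only a minimal sketch, assuming the .v file is UTF-8 encoded; the file name top.v is just a placeholder:

# find_nonascii.py - list non-printable / non-ASCII characters in a Verilog source file
import sys

def report_suspect_chars(path):
    with open(path, encoding="utf-8") as f:
        for lineno, line in enumerate(f, start=1):
            for col, ch in enumerate(line.rstrip("\n"), start=1):
                # anything other than tab and printable ASCII (0x20-0x7E) is suspect
                if ch != "\t" and not (0x20 <= ord(ch) <= 0x7E):
                    print("%s:%d:%d: U+%04X" % (path, lineno, col, ord(ch)))

if __name__ == "__main__":
    report_suspect_chars(sys.argv[1] if len(sys.argv) > 1 else "top.v")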

Installing a Hadoop environment on Linux Mint

Using hadoop-streaming-2.8.4.jar, the command is as follows: ./share/hadoop/tools/lib/hadoop-streaming-2.8.4.jar -input /Mr-input/* -output /Mr-output -file /home/LZH/external/Mapper.py -mapper 'Mapper.py' -file /home/LZH/external/Reducer.py -reducer 'Reducer.py'
Problem 1: bash: ./share/hadoop/tools/lib/hadoop-streaming-2.8.4.jar: Permission denied
Solution: broaden the file permissions: chmod -R 777 ./share/hadoop/tools/lib/hadoop-streaming-2.8.4.jar
Problem 2: Invalid File (bad magic number): Exec format error
Solution: I was careless and omitted "hadoop jar" at the front of the command. Adding it gives: hadoop jar ./share/hadoop/tools/lib/hadoop-streaming-2.8.4.jar -input /Mr-input/* -output /Mr-output -file /home/LZH/external/Mapper.py -mapper 'Mapper.py' -file /home/LZH/external/Reducer.py -reducer 'Reducer.py'
Problem 3 that may be encountered: Mapper.py and Reducer.py have to be made executable: chmod +x filename
When writing MapReduce scripts in Python, it's a good idea to start each script with the line #!/usr/bin/env python
And finally it works
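For reference, Hadoop streaming mappers and reducers are just programs that read lines from stdin and write tab-separated key/value pairs to stdout. The following is a minimal word-count sketch, not the actual mean/variance scripts used above (whose contents are not shown here); it only illustrates the structure that Mapper.py and Reducer.py take:

#!/usr/bin/env python
# Mapper.py - emit "<word>\t1" for every word read from stdin
import sys

for line in sys.stdin:
    for word in line.strip().split():
        print("%s\t%d" % (word, 1))

#!/usr/bin/env python
# Reducer.py - sum the counts per word (Hadoop feeds the reducer input sorted by key)
import sys

current_word, count = None, 0
for line in sys.stdin:
    word, value = line.rstrip("\n").split("\t", 1)
    if word == current_word:
        count += int(value)
    else:
        if current_word is not None:
            print("%s\t%d" % (current_word, count))
        current_word, count = word, int(value)
if current_word is not None:
    print("%s\t%d" % (current_word, count))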

Hadoop had been working fine, but for unrelated reasons I pressed the power button to force a shutdown. After booting up again and running start-all.sh, jps showed that the DataNode was missing, and hadoop fs -ls /input could not connect to 127.0.0.1. After restarting Hadoop yet again, hadoop fs -ls /input worked and folders could be created, but putting a file failed with: put: File /input/inputFile.txt._COPYING_ could only be replicated to 0 nodes instead of minReplication (=1). There are 0 datanode(s) running and no node(s) are excluded in this operation.
At this point, stop-all.sh reports "no proxyserver to stop" and "no datanode to stop". (Personally tested: the first solution below worked.)
Reason 1: every namenode format creates a new namespaceID, while hadoop.tmp.dir still contains the ID generated by the previous format. Formatting cleans up the data under the NameNode but not the data under the DataNode, so startup fails.
Here are two solutions:

Scheme 1:
1) delete the contents of "dfs.namenode.name.dir":
rm -rf /opt/hadoop/dfs/name/*
delete the contents of "dfs.datanode.data.dir":
rm -rf /opt/hadoop/dfs/data/*
2) delete the files beginning with "hadoop" under "hadoop.tmp.dir":
rm -rf /opt/hadoop/tmp/hadoop*
3) reformat hadoop:
hadoop namenode -format
4) start hadoop:
start-all.sh
The disadvantage of this scheme is that all the important data on the original cluster is gone. Therefore, the second scheme is recommended:
1) modify the namespaceID of each Slave so that it is consistent with the Master’s namespaceID.
or
2) modify the Master’s namespaceID to match the Slave’s namespaceID.
The Master's namespaceID is located in the "/opt/hadoop/dfs/name/current/VERSION" file, and the Slave's namespaceID is located in the "/opt/hadoop/dfs/data/current/VERSION" file.
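If the cluster is small, the copy in option 1) can also be scripted. The following is a minimal sketch, assuming the VERSION file paths shown above and that both files are reachable from the machine running the script (on a real cluster, copy the Master's VERSION file to each Slave first, or adjust the paths):

# sync_namespace_id.py - copy the Master's namespaceID into the Slave's VERSION file
MASTER_VERSION = "/opt/hadoop/dfs/name/current/VERSION"  # Master's VERSION file
SLAVE_VERSION = "/opt/hadoop/dfs/data/current/VERSION"   # Slave's VERSION file

def read_namespace_id(path):
    with open(path) as f:
        for line in f:
            if line.startswith("namespaceID="):
                return line.strip().split("=", 1)[1]
    raise ValueError("namespaceID not found in " + path)

def write_namespace_id(path, new_id):
    with open(path) as f:
        lines = f.readlines()
    with open(path, "w") as f:
        for line in lines:
            if line.startswith("namespaceID="):
                f.write("namespaceID=%s\n" % new_id)
            else:
                f.write(line)

if __name__ == "__main__":
    write_namespace_id(SLAVE_VERSION, read_namespace_id(MASTER_VERSION))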

Reason 2: when Hadoop stops, it uses the mapred and dfs process IDs recorded for the DataNode. By default these PID files are saved under /tmp, and Linux periodically deletes files in that directory (typically after 7 days or a month). Once the files hadoop-hadoop-jobtracker.pid and hadoop-hadoop-namenode.pid have been deleted, the NameNode naturally cannot find those two processes on the DataNode.
Configuring export HADOOP_PID_DIR in the configuration file hadoop-env.sh solves this problem.
In the configuration file, the path given for HADOOP_PID_DIR is "/var/hadoop/pids". Manually create a "hadoop" folder under "/var" (skip this if it already exists) and remember to chown it to the hadoop user. Then kill the DataNode and TaskTracker processes on the Slave (kill -9 <process number>), and run start-all.sh followed by stop-all.sh again; if there is no longer a "no datanode to stop" message, the problem has been solved.
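A quick way to confirm that missing PID files are the culprit is to check whether they still exist. The following is a minimal sketch, assuming the file names mentioned above (the datanode file name is an additional assumption following the same pattern) and that the PID directory is /tmp unless HADOOP_PID_DIR says otherwise:

# check_hadoop_pids.py - report which Hadoop PID files are present or missing
import os

pid_dir = os.environ.get("HADOOP_PID_DIR", "/tmp")  # PID files land in /tmp by default
pid_files = [
    "hadoop-hadoop-jobtracker.pid",
    "hadoop-hadoop-namenode.pid",
    "hadoop-hadoop-datanode.pid",  # assumed name, following the same pattern
]

for name in pid_files:
    path = os.path.join(pid_dir, name)
    print("%-40s %s" % (path, "found" if os.path.exists(path) else "MISSING"))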

run.sh: bash: ./run.sh: Permission denied
Solution:
Use the chmod command to make the .sh files in the directory executable,
e.g. chmod u+x *.sh
Container killed on request. Exit code is 143.
This simply means it ran out of memory. There are two ways to solve it:
1. Specify a few more mappers and reducers at runtime:
-D mapred.map.tasks=10 \    # command [genericOptions] [commandOptions]
-D mapred.reduce.tasks=10 \ # note that -D is a genericOption and must come before the other parameters
-numReduceTasks 10
2. Modify yarn-site.xml to add the following properties:

<property>
   <name>yarn.nodemanager.vmem-check-enabled</name>
   <value>false</value>
   <description>Whether virtual memory limits will be enforced for containers</description>
</property>

<property>
   <name>yarn.nodemanager.vmem-pmem-ratio</name>
   <value>4</value>
   <description>Ratio between virtual memory to physical memory when setting memory limits for containers</description>
</property> 

Reference:

[Python] Implement Hadoop MapReduce program in Python: calculate the mean and variance of a set of data

Solution to "Failed to load Main-Class manifest attribute from ..." when running a jar file


"Failed to load Main-Class manifest attribute from ..." is caused by the program's entry point not being set. Open the jar file with WinRAR, expand the META-INF folder, and check the MANIFEST.MF file: you will find that Main-Class is not set, which is the cause of the exception.
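For illustration, a MANIFEST.MF that does declare the entry point contains a Main-Class line such as the one below; the class name com.example.Main is only a placeholder for your own main class:

Manifest-Version: 1.0
Main-Class: com.example.Main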


Solution to “error code is 0x4001” when Intel SGX is running

by Qiu Pengfei

I was able to run Intel SGX applications on my computer yesterday, but this morning, when I ran a slightly modified version of the example program provided by Intel, a 0x4001 error occurred. The prompt was as follows:

Error code is 0x4001. Please refer to the "Intel SGX SDK Developer Reference" for more details.

This really spoiled my mood! I had spent half of yesterday afternoon writing the program; how could there be a problem? I wondered whether something was wrong with my code. Rather than debugging with print statements, I started deleting functions one by one, but that did not help. Then I replaced my files with the example files provided by Intel one by one, and finally ran Intel's example program directly to rule out my own code; the 0x4001 error still appeared. I consulted the Intel SGX SDK developer reference documentation (available at https://download.01.org/intel-sgx/linux-2.0/docs/Intel_SGX_SDK_Developer_Reference_Linux_2.0_Open_Source.pdf) and looked up 0x4001. Sure enough, it says:

AE service did not respond or the requested service is not supported.

This means that the architectural enclave (AE) service provided by Intel SGX did not respond, or the requested service is not supported. Intel SGX itself provides several enclaves that assist with enclave creation, report generation, and so on, so the problem should lie with SGX itself. Some people reported that the same code compiled and ran fine in simulation mode but not in hardware mode, which further pointed to SGX itself. I rebooted and entered the BIOS to check whether the SGX feature was enabled; it was, which puzzled me: the computer supports SGX and the feature is on, so how could there be a problem? Later I suspected the driver, so I went into the folder containing the downloaded SGX driver and executed the following command to reinstall it.

sudo ./sgx_linux_x64_driver_eb61a95.bin

I compiled and ran the SGX application again, and there you go! Although I still do not know why or where the driver went wrong, the problem was finally solved, which is quite gratifying.