1. Problem: Hash Verification failed
During the CDH 5.8.2 cluster installation through the Cloudera Manager web interface, the parcel hash check failed.
2. Solution 1 (Workaround)
1. Modify the hash value in manifest.json
[root@hadoop-01 parcels]# cat CDH-5.8.2-1.cdh5.8.2.p0.3-el6.parcel.sha
227d11e344698c70e654ca6768fc017735b73ae3
[root@hadoop-01 parcels]# sha1sum CDH-5.8.2-1.cdh5.8.2.p0.3-el6.parcel
cb7c70d07e68a256a1cb3b06e79e688ac64f3432  CDH-5.8.2-1.cdh5.8.2.p0.3-el6.parcel
[root@hadoop-01 parcels]# cat manifest.json | grep CDH-5.8.2-1.cdh5.8.2.p0.3-el6.parcel
"parcelName": "CDH-5.8.2-1.cdh5.8.2.p0.3-el6.parcel",
[root@hadoop-01 parcels]# vi manifest.json
# Enter search mode with /, search for CDH-5.8.2-1.cdh5.8.2.p0.3-el6.parcel,
# and change the hash from 227d11e344698c70e654ca6768fc017735b73ae3 to cb7c70d07e68a256a1cb3b06e79e688ac64f3432
2. In the Web interface, click Back, and then click Continue
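For reference, the same edit can be scripted rather than done in vi. A minimal sketch, assuming the working directory of the prompts above; the backup filename is my own addition:
# Back up manifest.json, then swap the expected hash for the actual one
# (both hash values copied from the session above).
cp manifest.json manifest.json.bak
sed -i 's/227d11e344698c70e654ca6768fc017735b73ae3/cb7c70d07e68a256a1cb3b06e79e688ac64f3432/' manifest.json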
3. Solution 2 (Root cause)
If Solution 1 still does not resolve the issue, the parcel file itself may be corrupted, so it needs to be re-downloaded. Do not close the web interface; after re-downloading, click Back and then Continue.
1. Delete the local CDH-5.8.2-1.cdh5.8.2.p0.3-el6.parcel file
[root@hadoop-01 parcels]# rm -f CDH-5.8.2-1.cdh5.8.2.p0.3-el6.parcel
2. Redownload
[root@hadoop-01 parcels]# wget http://archive.cloudera.com/cdh5/parcels/5.8.2/CDH-5.8.2-1.cdh5.8.2.p0.3-el6.parcel
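If you also want a local copy of the published checksum, the archive directory typically hosts a matching .sha1 file next to each parcel; this exact URL is an assumption:
[root@hadoop-01 parcels]# wget http://archive.cloudera.com/cdh5/parcels/5.8.2/CDH-5.8.2-1.cdh5.8.2.p0.3-el6.parcel.sha1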
3. Check the hash value again for comparison
[root@hadoop-01 parcels]# sha1sum CDH-5.8.2-1.cdh5.8.2.p0.3-el6.parcel
227d11e344698c70e654ca6768fc017735b73ae3  CDH-5.8.2-1.cdh5.8.2.p0.3-el6.parcel
[root@hadoop-01 parcels]#
[root@hadoop-01 parcels]# cat CDH-5.8.2-1.cdh5.8.2.p0.3-el6.parcel.sha
227d11e344698c70e654ca6768fc017735b73ae3
# The hashes match, which means the downloaded file is complete and not corrupted.
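The same comparison can be done as a single scripted check; a minimal sketch (the variable name is my own, filenames as above):
# Compare the computed SHA-1 against the expected one in the .sha file.
P=CDH-5.8.2-1.cdh5.8.2.p0.3-el6.parcel
[ "$(sha1sum "$P" | awk '{print $1}')" = "$(cat "$P.sha")" ] && echo "hash OK" || echo "hash MISMATCH"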
4. In the Web interface, click Back and then click Continue; the following error appears:
ERROR:
Src file /opt/cloudera/parcels/.flood/CDH-5.8.2-1.cdh5.8.2.p0.3-el6.parcel/CDH-5.8.2-1.cdh5.8.2.p0.3-el6.parcel does not exist - hadoop-01 ... and 4 others
5. In the Web interface, click Back quickly and decisively
6. On each machine, manually delete the /opt/cloudera/parcels/.flood folder and restart the agent
[root@hadoop-01 ~]# rm -rf /opt/cloudera/parcels/.flood
[root@hadoop-02 ~]# rm -rf /opt/cloudera/parcels/.flood
[root@hadoop-03 ~]# rm -rf /opt/cloudera/parcels/.flood
[root@hadoop-04 ~]# rm -rf /opt/cloudera/parcels/.flood
[root@hadoop-05 ~]# rm -rf /opt/cloudera/parcels/.flood
[root@hadoop-06 ~]# rm -rf /opt/cloudera/parcels/.flood
[root@hadoop-01 ~]# service cloudera-scm-agent restart
[root@hadoop-02 ~]# service cloudera-scm-agent restart
[root@hadoop-03 ~]# service cloudera-scm-agent restart
[root@hadoop-04 ~]# service cloudera-scm-agent restart
[root@hadoop-05 ~]# service cloudera-scm-agent restart
[root@hadoop-06 ~]# service cloudera-scm-agent restart
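The same cleanup can be driven from one machine instead of logging in to each host; a minimal sketch, assuming passwordless root SSH to the six hostnames above:
# Delete the staging directory and restart the agent on every host.
for h in hadoop-01 hadoop-02 hadoop-03 hadoop-04 hadoop-05 hadoop-06; do
  ssh root@"$h" 'rm -rf /opt/cloudera/parcels/.flood && service cloudera-scm-agent restart'
done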
7. In the Web interface, click Continue again; the following error appears:
Untar failed with return code: 2, with tar output: stdout: [], stderr: [gzip: stdin: invalid compressed data--crc error, tar: Child returned status 1, tar: Error is not recoverable: exiting now] - hadoop-04
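To confirm the corruption outside Cloudera Manager, the gzip stream of the copy on the failing host can be tested directly; a parcel is a gzipped tarball, and the path below is taken from the error in step 4 (assuming the file is still present):
[root@hadoop-04 ~]# gzip -t /opt/cloudera/parcels/.flood/CDH-5.8.2-1.cdh5.8.2.p0.3-el6.parcel/CDH-5.8.2-1.cdh5.8.2.p0.3-el6.parcel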
8. Searched Google and the Cloudera community, but could not resolve it
9. Perform step 6 again
10. Close the current page and open http://172.16.101.54:7180
11. Click Add Cloudera Management Service to create the CMS service
12. Click Add Cluster to create the cluster; it failed and Continue could not be clicked, which appears to be a bug
13. Force-uninstall and clean up the cluster's files and installed packages (see the sketch after this list)
14. Reinstall (the screenshot of the resulting interface is not reproduced here)
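A rough per-host sketch of the kind of cleanup step 13 implies, based on Cloudera's generic uninstall guidance rather than the author's exact commands; package names and paths may need adjusting to your environment:
# Stop the agent, remove the CM packages, and wipe the CM directories.
service cloudera-scm-agent stop
yum remove -y cloudera-manager-agent cloudera-manager-daemons
rm -rf /opt/cloudera /var/lib/cloudera-scm-agent /etc/cloudera-scm-agent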
4. Lessons learned
A. Always verify downloaded files.
B. When Hash Verification failed occurs, do not bother with Solution 1; go straight to steps 1-3 of Solution 2, then click the Back and Continue buttons.
C. Failing that, simply follow steps 13-14 of Solution 2.
5. Note
When you run into a problem, fight it: exhaust every method, improve yourself, and start again.
Source: ITPUB blog, http://blog.itpub.net/30089851/viewspace-2128607/. Please indicate the source when reprinting; otherwise legal responsibility may be pursued.