Category Archives: How to Fix

[Solved] Spark job failed during runtime. Please check stacktrace for the root cause.

Hive on Spark reports an error when a Hive command is executed:

[42000][3] Error while processing statement: FAILED: Execution Error, return code 3 from org.apache.hadoop.hive.ql.exec.spark.SparkTask. Spark job failed during runtime. Please check stacktrace for the root cause.

[Reason]
Checking the running tasks on YARN and querying the error log gives:

Map operator initialization failed: org.apache.hadoop.hive.ql.metadata.HiveException: Unexpected column vector type LIST

This is a LIST column vector type error: LIST corresponds to Hive's array type, and a Hive array corresponds to a List in Java.

[Solution]
Temporarily change the execution engine to MR

set hive.execution.engine=mr;

Hive on Spark still has many bugs. When an unknown error occurs, first try switching the underlying execution engine to MR and re-running the SQL statement.

[Switching engines]
1. Check Hive's current execution engine:

set hive.execution.engine;

2. Set Hive's execution engine to Spark:

set hive.execution.engine=spark;

3. Set Hive's execution engine to MR:

set hive.execution.engine=mr;

[Solved] ONNXImporter::handleNode DNN/ONNX: Can't create layer "onnx::Gather_384" of type "NonMaxSuppression"

Today, while debugging YOLOv7 model conversion and loading, I ran into a long OpenCV model-loading error. It cannot be shown in full in the title because of the length limit, so I am posting it here in its entirety.

[ERROR:0] global D:\opencv-python\opencv\modules\dnn\src\onnx\onnx_importer.cpp (720) cv::dnn::dnn4_v20211004::ONNXImporter::handleNode DNN/ONNX: ERROR during processing node with 5 inputs and 1 outputs: [NonMaxSuppression]:(onnx::Gather_384)
cv2.error: OpenCV(4.5.4) D:\opencv-python\opencv\modules\dnn\src\onnx\onnx_importer.cpp:739: error: (-2:Unspecified error) in function 'cv::dnn::dnn4_v20211004::ONNXImporter::handleNode'
> Node [NonMaxSuppression]:(onnx::Gather_384) parse error: OpenCV(4.5.4) D:\opencv-python\opencv\modules\dnn\src\dnn.cpp:615: error: (-2:Unspecified error) Can't create layer "onnx::Gather_384" of type "NonMaxSuppression" in function 'cv::dnn::dnn4_v20211004::LayerData::getLayerInstance'

At that point, I decided to compare my own model with the official model node by node, and finally found the problem at the very end of the graph.
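If a graphical viewer is not at hand, a minimal Python sketch along the following lines can list the nodes of the two graphs for that comparison (the onnx package and the file names here are my own assumptions, not part of the original workflow):

# Minimal sketch: dump the operators of two ONNX graphs so their tails can be compared.
# Assumes the `onnx` package is installed; the file names are placeholders.
import onnx

def list_nodes(path):
    model = onnx.load(path)
    return [(i, node.op_type, node.name) for i, node in enumerate(model.graph.node)]

official = list_nodes("yolov7-official.onnx")
mine = list_nodes("best.onnx")

# In this case the difference shows up at the end of the graph.
for idx, op, name in official[-10:]:
    print("official:", idx, op, name)
for idx, op, name in mine[-10:]:
    print("mine:    ", idx, op, name)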

[Official Model]

[My own model]

Seeing this, I wondered how there could be such a big difference. There shouldn't be; both models are built from the same code. So I started tracing back through the export process, and sure enough, I found the problem.

At the position of my red box, the official model ends, while mine continues with a long tail of extra nodes (I printed the tensor shapes of both for debugging). I guessed that some parameter was set wrongly when exporting the model, so I went through essentially all of the uncertain parameters and found the problem.

To make the comparison easier, here is my original conversion command:

python export.py --weights best.pt --grid --end2end --simplify --topk-all 100 --iou-thres 0.65 --conf-thres 0.35 --img-size 640 640 --max-wh 640 

And this is the corrected command:

python38 export.py --weights best.pt --grid --simplify --topk-all 100 --iou-thres 0.65 --conf-thres 0.35 --img-size 640 640 --max-wh 640

Spot the difference: the problem is caused by the --end2end parameter. After removing it and re-exporting, my model looks like this:

Because I am detecting only a single category here, my final output is 1x25200x6, while the official model's output is 1x25200x85.
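As a quick sanity check, a small Python sketch like the one below (my own addition; the file name is a placeholder) loads the re-exported model with OpenCV's DNN module and prints the output shape:

# Sketch: confirm the model exported without --end2end now loads in OpenCV's DNN module.
# On the --end2end model this readNetFromONNX call raised the NonMaxSuppression error above.
import cv2
import numpy as np

net = cv2.dnn.readNetFromONNX("best.onnx")
blob = cv2.dnn.blobFromImage(np.zeros((640, 640, 3), dtype=np.uint8), 1 / 255.0, (640, 640))
net.setInput(blob)
out = net.forward()
print(out.shape)  # expect (1, 25200, 6) for this single-class model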

ZooKeeper Failed to Start: Error: JAVA_HOME is not set and java could not be found in PATH

JAVA_HOME is clearly configured correctly, yet the error is still reported:

/data/apache-zookeeper-3.7.1-bin/bin$ sh zkServer.sh start
zkServer.sh: 78: /data/apache-zookeeper-3.7.1-bin/bin/zkEnv.sh: [[: not found
-p: not found
java is /data/hadoop/jdk1.8.0_202/bin/java
Error: JAVA_HOME is not set and java could not be found in PATH.

 

Solution:

The error occurs because zkEnv.sh contains bash-specific syntax ([[ ... ]]), and running it with sh hands it to a POSIX shell (dash on many systems) that does not support that syntax. Launch the script with bash instead, either via its own shebang or explicitly:

./zkServer.sh start

or

bash zkServer.sh start

make Error: error: cast from ‘int32_t*’ {aka ‘int*’} to ‘int’ loses precision [-fpermissive]

Error message:

     xxx.h:117:59: error: cast from ‘int32_t*’ {aka ‘int*’} to ‘int’ loses precision [-fpermissive]
     int m_MinValidLen = (int)(&(((DataOnAir *)0)->rx_ts_s));

 

Reason for the error:

On a 64-bit Linux system a pointer occupies 8 bytes while int occupies only 4 bytes, so casting a pointer to int loses precision.

Cast the pointer to long instead: long is 8 bytes on a 64-bit Linux system, so the value fits, and it can still be narrowed to int later if needed. Either long or long long works here.

Modified:

long m_MinValidLen = (long)(&(((DataOnAir *)0)->rx_ts_s));

[Solved] uiautomatorviewer Tool Error: Error while obtaining UI hierarchy XML file: com.android.ddmlib.SyncException

Error Message:

Error while obtaining UI hierarchy XML file: com.android.ddmlib.SyncException: Remote object doesn't exist!

 

 

Solution: Enable developer options on the phone, toggle the USB debugging switch off and back on, and then restart the emulator.

Reference link: https://www.cnblogs.com/uniquefu/p/11496211.html

[Solved] This dependency was not found: * core-js/modules/es.error.cause.js in ./node_modules/@babel

[Error] This dependency was not found: * core-js/modules/es.error.cause.js in ./node_modules/@babel/runtime

Solution: Delete the node_modules folder in the project directory, then install core-js from the terminal:

npm install --save core-js

After installation, continue to run the following command

npm install

Start project:

npm run dev

The project should now run normally.

[EndNote X9 Error] Unable to search online – Windows error 12029

Endnote failed to search online, prompting Windows error 12029. The solution is as follows:

In Internet Explorer, open Internet Options, go to the Advanced tab, and enable TLS 1.2 in the Settings list.

The official solution did not work on my computer:

Download the updated connection file from the link below:

Connection Files

#Update Pubmed File

Open the downloaded file, then click File > Save As, remove the word “Copy” from the name, and save to overwrite the existing file.

[Solved] Unity Error: Assertion failed on expression: ‘m_ErrorCode == MDB_MAP_RESIZED

Unity keeps reporting the following errors:

Assertion failed on expression: 'm_ErrorCode == MDB_MAP_RESIZED || !HasAbortingErrors()'

Asset database transaction committed twice!

Assertion failed on expression: 'errors == MDB_SUCCESS || errors == MDB_NOTFOUND'

These three errors are reported constantly and do not point to any specific code; the reason is that the Unity license has expired.

Solution: Restart Unity, open Unity Hub and reactivate the license, then reopen the project.

[Solved] Vivado System Generator for DSP: “Error evaluating ‘OpenFcn‘ callback of Xilinx Block“

When using Vivado System Generator for DSP, I encountered the error “Error evaluating ‘OpenFcn’ callback of Xilinx Block”. The solution is as follows.

1. Check that the installed System Generator and MATLAB versions match. I use MATLAB R2019b with Vivado 2019.2; see Baidu for the specific steps to install the support package.

2. Make sure you launch from System Generator; it opens MATLAB automatically, so there is no need to start MATLAB separately. MATLAB then opens with the following content.

3. If the error still occurs, open the “System Generator 20xx.x MATLAB Configurator” tool.

In the dialog that pops up, check the MATLAB entry, remove it, close the dialog, and then apply it again.

4. Type simulink in MATLAB.

When the following interface appears, select Blank Model.

5. Click Library Browser,

find the Xilinx Blockset, and the corresponding options appear.

Add a block; the parameter configuration dialog box appears, and the problem is solved!

pod install error: Oh no, an error occurred. (Ultimate Solution)

Project scenario:

Recently, pod install started failing with the error:

[!] Oh no, an error occurred.


Problem description


JSON::ParserError - 416: unexpected token at '"SharedTestUtilities/FIROptionsMock'
/Users/tiger/.rvm/rubies/ruby-2.6.3/lib/ruby/2.6.0/json/common.rb:156:in `parse'
/Users/tiger/.rvm/rubies/ruby-2.6.3/lib/ruby/2.6.0/json/common.rb:156:in `parse'
/Users/tiger/.rvm/rubies/ruby-2.6.3/lib/ruby/gems/2.6.0/gems/cocoapods-core-1.11.3/lib/cocoapods-core/specification/json.rb:61:in `from_json'
/Users/tiger/.rvm/rubies/ruby-2.6.3/lib/ruby/gems/2.6.0/gems/cocoapods-core-1.11.3/lib/cocoapods-core/specification.rb:748:in `from_string'
/Users/tiger/.rvm/rubies/ruby-2.6.3/lib/ruby/gems/2.6.0/gems/cocoapods-core-1.11.3/lib/cocoapods-core/specification.rb:722:in `from_file'
/Users/tiger/.rvm/rubies/ruby-2.6.3/lib/ruby/gems/2.6.0/gems/cocoapods-core-1.11.3/lib/cocoapods-core/source.rb:188:in `specification'
/Users/tiger/.rvm/rubies/ruby-2.6.3/lib/ruby/gems/2.6.0/gems/cocoapods-core-1.11.3/lib/cocoapods-core/specification/set.rb:58:in `block in specification_name'
/Users/tiger/.rvm/rubies/ruby-2.6.3/lib/ruby/gems/2.6.0/gems/cocoapods-core-1.11.3/lib/cocoapods-core/specification/set.rb:56:in `each'
/Users/tiger/.rvm/rubies/ruby-2.6.3/lib/ruby/gems/2.6.0/gems/cocoapods-core-1.11.3/lib/cocoapods-core/specification/set.rb:56:in `specification_name'
/Users/tiger/.rvm/rubies/ruby-2.6.3/lib/ruby/gems/2.6.0/gems/cocoapods-core-1.11.3/lib/cocoapods-core/cdn_source.rb:216:in `search'
/Users/tiger/.rvm/rubies/ruby-2.6.3/lib/ruby/gems/2.6.0/gems/cocoapods-core-1.11.3/lib/cocoapods-core/source/aggregate.rb:83:in `block in search'
/Users/tiger/.rvm/rubies/ruby-2.6.3/lib/ruby/gems/2.6.0/gems/cocoapods-core-1.11.3/lib/cocoapods-core/source/aggregate.rb:83:in `select'
/Users/tiger/.rvm/rubies/ruby-2.6.3/lib/ruby/gems/2.6.0/gems/cocoapods-core-1.11.3/lib/cocoapods-core/source/aggregate.rb:83:in `search'
/Users/tiger/.rvm/gems/ruby-2.6.3/gems/cocoapods-1.11.3/lib/cocoapods/resolver.rb:416:in `create_set_from_sources'
/Users/tiger/.rvm/gems/ruby-2.6.3/gems/cocoapods-1.11.3/lib/cocoapods/resolver.rb:385:in `find_cached_set'
/Users/tiger/.rvm/gems/ruby-2.6.3/gems/cocoapods-1.11.3/lib/cocoapods/resolver.rb:360:in `specifications_for_dependency'
/Users/tiger/.rvm/gems/ruby-2.6.3/gems/cocoapods-1.11.3/lib/cocoapods/resolver.rb:165:in `search_for'
/Users/tiger/.rvm/gems/ruby-2.6.3/gems/cocoapods-1.11.3/lib/cocoapods/resolver.rb:274:in `block in sort_dependencies'
/Users/tiger/.rvm/gems/ruby-2.6.3/gems/cocoapods-1.11.3/lib/cocoapods/resolver.rb:267:in `each'
/Users/tiger/.rvm/gems/ruby-2.6.3/gems/cocoapods-1.11.3/lib/cocoapods/resolver.rb:267:in `sort_by'
/Users/tiger/.rvm/gems/ruby-2.6.3/gems/cocoapods-1.11.3/lib/cocoapods/resolver.rb:267:in `sort_by!'
/Users/tiger/.rvm/gems/ruby-2.6.3/gems/cocoapods-1.11.3/lib/cocoapods/resolver.rb:267:in `sort_dependencies'
/Users/tiger/.rvm/rubies/ruby-2.6.3/lib/ruby/gems/2.6.0/gems/molinillo-0.8.0/lib/molinillo/delegates/specification_provider.rb:60:in `block in sort_dependencies'
/Users/tiger/.rvm/rubies/ruby-2.6.3/lib/ruby/gems/2.6.0/gems/molinillo-0.8.0/lib/molinillo/delegates/specification_provider.rb:77:in `with_no_such_dependency_error_handling'
/Users/tiger/.rvm/rubies/ruby-2.6.3/lib/ruby/gems/2.6.0/gems/molinillo-0.8.0/lib/molinillo/delegates/specification_provider.rb:59:in `sort_dependencies'
/Users/tiger/.rvm/rubies/ruby-2.6.3/lib/ruby/gems/2.6.0/gems/molinillo-0.8.0/lib/molinillo/resolution.rb:754:in `push_state_for_requirements'
/Users/tiger/.rvm/rubies/ruby-2.6.3/lib/ruby/gems/2.6.0/gems/molinillo-0.8.0/lib/molinillo/resolution.rb:744:in `require_nested_dependencies_for'
/Users/tiger/.rvm/rubies/ruby-2.6.3/lib/ruby/gems/2.6.0/gems/molinillo-0.8.0/lib/molinillo/resolution.rb:727:in `activate_new_spec'
/Users/tiger/.rvm/rubies/ruby-2.6.3/lib/ruby/gems/2.6.0/gems/molinillo-0.8.0/lib/molinillo/resolution.rb:684:in `attempt_to_activate'
/Users/tiger/.rvm/rubies/ruby-2.6.3/lib/ruby/gems/2.6.0/gems/molinillo-0.8.0/lib/molinillo/resolution.rb:254:in `process_topmost_state'
/Users/tiger/.rvm/rubies/ruby-2.6.3/lib/ruby/gems/2.6.0/gems/molinillo-0.8.0/lib/molinillo/resolution.rb:182:in `resolve'
/Users/tiger/.rvm/rubies/ruby-2.6.3/lib/ruby/gems/2.6.0/gems/molinillo-0.8.0/lib/molinillo/resolver.rb:43:in `resolve'
/Users/tiger/.rvm/gems/ruby-2.6.3/gems/cocoapods-1.11.3/lib/cocoapods/resolver.rb:94:in `resolve'
/Users/tiger/.rvm/gems/ruby-2.6.3/gems/cocoapods-1.11.3/lib/cocoapods/installer/analyzer.rb:1078:in `block in resolve_dependencies'
/Users/tiger/.rvm/gems/ruby-2.6.3/gems/cocoapods-1.11.3/lib/cocoapods/user_interface.rb:64:in `section'
/Users/tiger/.rvm/gems/ruby-2.6.3/gems/cocoapods-1.11.3/lib/cocoapods/installer/analyzer.rb:1076:in `resolve_dependencies'
/Users/tiger/.rvm/gems/ruby-2.6.3/gems/cocoapods-1.11.3/lib/cocoapods/installer/analyzer.rb:124:in `analyze'
/Users/tiger/.rvm/gems/ruby-2.6.3/gems/cocoapods-1.11.3/lib/cocoapods/installer.rb:416:in `analyze'
/Users/tiger/.rvm/gems/ruby-2.6.3/gems/cocoapods-1.11.3/lib/cocoapods/installer.rb:241:in `block in resolve_dependencies'
/Users/tiger/.rvm/gems/ruby-2.6.3/gems/cocoapods-1.11.3/lib/cocoapods/user_interface.rb:64:in `section'
/Users/tiger/.rvm/gems/ruby-2.6.3/gems/cocoapods-1.11.3/lib/cocoapods/installer.rb:240:in `resolve_dependencies'
/Users/tiger/.rvm/gems/ruby-2.6.3/gems/cocoapods-1.11.3/lib/cocoapods/installer.rb:161:in `install!'
/Users/tiger/.rvm/gems/ruby-2.6.3/gems/cocoapods-1.11.3/lib/cocoapods/command/install.rb:52:in `run'
/Users/tiger/.rvm/rubies/ruby-2.6.3/lib/ruby/gems/2.6.0/gems/claide-1.0.3/lib/claide/command.rb:334:in `run'
/Users/tiger/.rvm/gems/ruby-2.6.3/gems/cocoapods-1.11.3/lib/cocoapods/command.rb:52:in `run'
/Users/tiger/.rvm/gems/ruby-2.6.3/gems/cocoapods-1.11.3/bin/pod:55:in `<top (required)>'
/Users/tiger/.rvm/gems/ruby-2.6.3/bin/pod:23:in `load'
/Users/tiger/.rvm/gems/ruby-2.6.3/bin/pod:23:in `<main>'
/Users/tiger/.rvm/gems/ruby-2.6.3/bin/ruby_executable_hooks:24:in `eval'
/Users/tiger/.rvm/gems/ruby-2.6.3/bin/ruby_executable_hooks:24:in `<main>'

――― TEMPLATE END ――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――

[!] Oh no, an error occurred.

Search for existing GitHub issues similar to yours:
https://github.com/CocoaPods/CocoaPods/search?q=416%3A+unexpected+token+at+%27%22SharedTestUtilities%2FFIROptionsMock%27&type=Issues

If none exists, create a ticket, with the template displayed above, on:
https://github.com/CocoaPods/CocoaPods/issues/new

Be sure to first read the contributing guide for details on how to properly submit a ticket:
https://github.com/CocoaPods/CocoaPods/blob/master/CONTRIBUTING.md

Don't forget to anonymize any private data!

Looking for related issues on cocoapods/cocoapods...
Found no similar issues. To create a new issue, please visit:
https://github.com/cocoapods/cocoapods/issues/new


Cause analysis:

After upgrading CocoaPods, the old local spec repo cache can be left in a broken state and needs to be cleared.


Solution:

Use the following command to clear the repo cache, then run pod install again:

sudo rm -rf ~/.cocoapods/repos

[Solved] JMeter Save Testing File Error: Error loading results file – see file log

When saving a test file with JMeter, an error occurs: Error loading results file – see file log

Solution:
Create a new text file anywhere and add:

<?xml version="1.0" encoding="UTF-8"?>
<testResults version="1.2">
</testResults>

After saving the text file, change its extension to .jmx and select this .jmx file when saving the test file. Run the script in JMeter; when the pop-up box appears, select “overwrite existing file”.

[Solved] RuntimeError: CUDA error: CUBLAS_STATUS_ALLOC_FAILED when calling `cublasCreate(handle)`

[Problem description]

The code used to run normally. After the dataset was expanded, the deep learning training program running on the GPU reports the following error, but no CUDA out of memory error is shown.

RuntimeError: CUDA error: CUBLAS_STATUS_ALLOC_FAILED when calling `cublasCreate(handle)`

[Solution 1]

Run the program on the CPU instead; it then works normally, but training is very slow and takes a long time.

--device cpu

[Solution 2]

Reduce the batch size used for training; the program then runs normally.
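For reference, here is a minimal PyTorch-style sketch of the fix; the random TensorDataset and the concrete batch sizes are placeholders for the real training setup (in scripts that expose a --batch-size flag, lowering that flag achieves the same thing).

# Sketch: shrink the batch size until the job fits in GPU memory.
# The random TensorDataset stands in for the real training data.
import torch
from torch.utils.data import DataLoader, TensorDataset

train_dataset = TensorDataset(torch.randn(1024, 3, 224, 224), torch.randint(0, 10, (1024,)))

batch_size = 16  # was 32; keep reducing until the CUBLAS_STATUS_ALLOC_FAILED / OOM error disappears
loader = DataLoader(train_dataset, batch_size=batch_size, shuffle=True)

# Solution 1 above is the same idea as this CPU fallback:
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")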