
[Solved] wxauto error: ImportError: DLL load failed while importing win32gui: Can’t find the specified program

Background

While using wxauto to develop a WeChat robot, an error occurred when running the program in PyCharm.

Error message

Traceback (most recent call last):
  File "D:\Project\wechatBot\test.py", line 2, in <module>
    from wxauto import WeChat
  File "C:\Users\pokeu\anaconda3\envs\wechatbot\lib\site-packages\wxauto\__init__.py", line 2, in <module>
    from .wxauto import WxParam, WxUtils, WeChat, COPYDICT
  File "C:\Users\pokeu\anaconda3\envs\wechatbot\lib\site-packages\wxauto\wxauto.py", line 10, in <module>
    import win32gui, win32con
ImportError: DLL load failed while importing win32gui: Can't find the specified program.

Solution:

Check whether the win32gui.pyd file exists in the C:\Users\username\anaconda3\envs\wechatbot\Lib\site-packages\win32 directory.

If not, run pip install pywin32 to install it.

Add C:\Users\username\anaconda3\envs\wechatbot\Lib\site-packages\pywin32_system32 to the system PATH environment variable.

Notes:
a. Replace username with your own user name.
b. The first half, C:\Users\username\anaconda3, is the Anaconda installation path; replace it with your own.
c. \envs\wechatbot is the path of the environment I created (wechatbot); replace it with your own environment, or ignore it if you did not create one and just find \Lib\site-packages\win32.
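
Alternatively, instead of editing the system PATH, the DLL directory can be registered at runtime. Here is a minimal sketch, assuming Python 3.8 or later; the environment path is illustrative and must be replaced with your own:

import os

# Illustrative path -- replace with your own environment's site-packages.
dll_dir = r"C:\Users\username\anaconda3\envs\wechatbot\Lib\site-packages\pywin32_system32"

# On Windows with Python 3.8+, directories for native DLL resolution must be
# registered explicitly; this mirrors adding the directory to PATH.
if os.path.isdir(dll_dir):
    os.add_dll_directory(dll_dir)

import win32gui  # the pywintypes DLLs can now be resolved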

Then, before the original imports in your script, import pywintypes first, e.g.:

import pywintypes
# import pythoncom  # uncomment this if some other DLL fails to load
from wxauto import WeChat
import time, random

Now run the program again, and there should be no error.

[Solved] Error: java.io.EOFException: Premature EOF from inputStream


1. Problem

1. Problem process

A log-parsing task that had always run very stably suddenly reported an error. How could a stable task suddenly fail? My heart sank.

2. Detailed error

Checking the log revealed the following error:

21/11/18 14:36:29 INFO mapreduce.Job: Task Id : attempt_1628497295151_1290365_m_000002_2, Status : FAILED
Error: java.io.EOFException: Premature EOF from inputStream
	at com.hadoop.compression.lzo.LzopInputStream.readFully(LzopInputStream.java:75)
	at com.hadoop.compression.lzo.LzopInputStream.readHeader(LzopInputStream.java:114)
	at com.hadoop.compression.lzo.LzopInputStream.<init>(LzopInputStream.java:54)
	at com.hadoop.compression.lzo.LzopCodec.createInputStream(LzopCodec.java:83)
	at com.hadoop.mapreduce.LzoSplitRecordReader.initialize(LzoSplitRecordReader.java:58)
	at org.apache.hadoop.mapred.MapTask$NewTrackingRecordReader.initialize(MapTask.java:548)
	at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:786)
	at org.apache.hadoop.mapred.MapTask.run(MapTask.java:341)
	at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:164)
	at java.security.AccessController.doPrivileged(Native Method)
	at javax.security.auth.Subject.doAs(Subject.java:422)
	at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1907)
	at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:158)

Searching for the error pointed to the upper limit of the dfs.datanode.max.transfer.threads parameter, e.g.:
https://blog.csdn.net/zhoujj303030/article/details/44422415

However, the cluster configuration showed that this parameter had already been raised to 8192, so the cause had to lie elsewhere.

Later, an empty LZO file was found among the log files. An empty .lzo file has no LZO header, so LzopInputStream.readHeader() hits end-of-stream immediately, producing the Premature EOF error above. After the file was deleted, the task was rerun and succeeded.

2. Solution

To prevent the problem from recurring, write a script that deletes empty LZO files before the parsing task runs.

1. Traverse the files under the specified path

# sed '1d' drops the "Found N items" header line; s/  */ /g squeezes repeated
# spaces so cut can split on single spaces; field 8 is the file path.
for file in `hdfs dfs -ls /xxx/xxx/2037-11-05/pageview | sed '1d;s/  */ /g' | cut -d\  -f8`;
do
	echo $file;
done

Result output:

/xxx/xxx/2037-11-05/pageview/log.1631668209557.lzo
/xxx/xxx/2037-11-05/pageview/log.1631668211445.lzo

2. Check whether each file is empty

for file in `hdfs dfs -ls /xxx/xxx/2037-11-05/pageview | sed '1d;s/  */ /g' | cut -d\  -f8`;
do
	echo $file;
	# hdfs dfs -count prints: DIR_COUNT FILE_COUNT CONTENT_SIZE PATHNAME,
	# so the third field is the file size in bytes.
	lzoIsEmpty=$(hdfs dfs -count $file | awk '{print $3}')
	echo $lzoIsEmpty;
	if [[ $lzoIsEmpty -eq 0 ]];then
		# the file is empty: delete it
		hdfs dfs -rm $file;
	else
		echo "Loading data"
	fi
done
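
If you prefer to script the same check in Python, here is a minimal sketch built on the hdfs CLI; the path is illustrative, and the field positions match the hdfs dfs -count output described above:

import subprocess

base = "/xxx/xxx/2037-11-05/pageview"  # illustrative path

# Skip the "Found N items" header; the file path is the last field of each line.
ls = subprocess.run(["hdfs", "dfs", "-ls", base],
                    capture_output=True, text=True, check=True)
for line in ls.stdout.splitlines()[1:]:
    path = line.split()[-1]
    # -count prints: DIR_COUNT FILE_COUNT CONTENT_SIZE PATHNAME
    size = int(subprocess.run(["hdfs", "dfs", "-count", path],
                              capture_output=True, text=True,
                              check=True).stdout.split()[2])
    if path.endswith(".lzo") and size == 0:
        subprocess.run(["hdfs", "dfs", "-rm", path], check=True)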

3. Final script

# $do_date (the processing date) is assumed to be set earlier in the full script.
for type in webclick error pageview exposure login
do
	# field 2 of hdfs dfs -count is FILE_COUNT: 0 means the directory has no files.
	isEmpty=$(hdfs dfs -count /xxx/xxx/$do_date/$type | awk '{print $2}')
	if [[ $isEmpty -eq 0 ]];then
		echo "------ Given Path:/xxx/xxx/$do_date/$type is empty"
	else
		for file in `hdfs dfs -ls /xxx/xxx/$do_date/$type | sed '1d;s/  */ /g' | cut -d\  -f8`;
		do
			echo $file;
			# field 3 of hdfs dfs -count is CONTENT_SIZE: 0 means an empty file.
			lzoIsEmpty=$(hdfs dfs -count $file | awk '{print $3}')
			echo $lzoIsEmpty;
			if [[ $lzoIsEmpty -eq 0 ]];then
				echo Delete Files: $file
				hdfs dfs -rm $file;
			fi
		done

		echo ================== Import log data of type $do_date $type into ods layer ==================
		... Handling log parsing logic
	fi
done

An error is reported when installing a package directly in PyCharm (non zero exit code (2)), but it can be installed through the terminal


The problem: installing a package directly through PyCharm's package manager fails with non zero exit code (2) (the original post shows the error dialog in a screenshot). The package can still be installed from the terminal inside PyCharm, but installing every package that way is troublesome, so the cause needed analysis.

The pip used here is version 21.3.1. Checking my previous projects, I found that pip 21.2.4 installs packages normally, so I downgraded this project's pip to 21.2.4, which solved the problem. I do not know the specific reason why pip 21.3.1 fails.

    Enter the directory where the project's environment is located.
    Open Windows PowerShell in administrator mode, change into that directory, and run the command with the project's interpreter: python.exe -m pip install pip==21.2.4

    The goal is to run the command with the project's Python rather than the global Python (a Python equivalent is sketched below). Then open PyCharm and check the pip version in the package list; barring surprises, the problem is solved, as it was for me. If you like, you can upgrade pip again afterwards; I upgraded back to 21.3.1 and packages installed normally.
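
For reference, the same downgrade can be run from a short Python snippet, which guarantees the project's interpreter is used (a minimal sketch; sys.executable is whatever interpreter runs the script):

import subprocess, sys

# Downgrade pip inside the environment of the *current* interpreter --
# equivalent to running the command with the project's python.exe.
subprocess.run([sys.executable, "-m", "pip", "install", "pip==21.2.4"], check=True)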

    Reference blogs:

      https://blog.csdn.net/CNWorldisyourFC/article/details/110468251?utm_medium=distribute.pc_relevant.none-task-blog-title-2&spm=1001.2101.3001.4242
      https://blog.csdn.net/weixin_51119842/article/details/110469060

[Solved] MybatisPlusException: Error: Method queryTotal execution error of sql

Cause: com.baomidou.mybatisplus.core.exceptions.MybatisPlusException: Error: Method queryTotal execution error of sql :

Error reason:
The custom SQL in the mapper added a redundant where. When custom SQL is combined with a QueryWrapper via ${ew.customSqlSegment}, the where must not be written by hand, because customSqlSegment already begins with WHERE.
Incorrect code:

String customSql = "select * from (" + queryAll + ") as q where ${ew.customSqlSegment}";

@Select(customSql)
IPage<BranchBasic> baseQuery(Page<Object> objectPage, @Param(Constants.WRAPPER) QueryWrapper queryWrapper);

Correct code (SQL keywords work in either lower or upper case):

String customSql = "select * from (" + queryAll + ") as q ${ew.customSqlSegment}";

@Select(customSql)
IPage<BranchBasic> baseQuery(Page<Object> objectPage, @Param(Constants.WRAPPER) QueryWrapper queryWrapper);

Hiding (shielding) big data components in Ambari

Article directory

  • 1. Problem
  • 2. Steps if the Ambari cluster has already been deployed
  • 3. Steps if the Ambari cluster has not been deployed

1. Problem

  • On the Ambari page, hide the big data components so that they are not visible to users on the installation page and cannot be installed.

2. If the Ambari cluster has already been deployed, the steps are as follows:

  1. In the /var/lib/ambari-server/resources/stacks/HDP/3.1/services directory, delete the directory of each big data component to be hidden, together with all its contents; then clear the ambari-agent cache: rm -rf /var/lib/ambari-agent/cache/*
  2. Restart ambari-agent on all nodes: ambari-agent restart

3. If the Ambari cluster has not been deployed yet, the steps are as follows:

  1. Modify the Ambari source code: in the ambari-server\src\main\resources\stacks\HDP\3.1\services directory, delete the big data component directories and all their contents;
  2. Recompile the Ambari source code;
  3. Deploy the Ambari cluster as usual.