Python MinIO Client Error: [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: self signed certificate

A MinIO service was set up with HTTPS enabled; calling it from Python reported the following error.

urllib3.exceptions.MaxRetryError: HTTPSConnectionPool(host='xx.xx.xx.xxx', port=9000): Max retries exceeded with url: /allstruct?location= (Caused by SSLError(SSLCertVerificationError(1, '[SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: self signed certificate (_ssl.c:1108)')))

The demo script below works around the problem by ignoring the certificate verification error:


import os
from urllib.parse import urlparse

import certifi
import urllib3
from minio import Minio

# Suppress the "Unverified HTTPS request" warnings that urllib3 emits
# once certificate verification is turned off.
urllib3.disable_warnings(urllib3.exceptions.InsecureRequestWarning)


minio_endpoint = os.getenv("MINIO_ENDPOINT", "https://xxx.xxx.xxx.xxx:9000")
minio_endpoint = urlparse(minio_endpoint)

# Use TLS only when the endpoint scheme is https
secure = minio_endpoint.scheme == 'https'

# Custom HTTP client: cert_reqs='CERT_NONE' disables certificate verification,
# which is what silences the self-signed certificate error.
ok_http_client = urllib3.PoolManager(
    timeout=urllib3.util.Timeout(connect=10, read=10),
    maxsize=10,
    cert_reqs='CERT_NONE',
    ca_certs=os.environ.get('SSL_CERT_FILE') or certifi.where(),
    retries=urllib3.Retry(
        total=5,
        backoff_factor=0.2,
        status_forcelist=[500, 502, 503, 504]
    )
)

minioClient = Minio(minio_endpoint.netloc,
                    access_key='username',
                    secret_key='password',
                    http_client=ok_http_client,
                    secure=secure)

print(minioClient.list_buckets())
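Disabling verification is acceptable for a quick test, but if you have the server's self-signed certificate (or the CA that issued it) at hand, a safer variant is to keep verification on and point the pool manager at that file. This is only a sketch; the certificate path is an assumed placeholder:

import urllib3
from minio import Minio

# Sketch: keep TLS verification enabled and trust the self-signed certificate instead.
# '/path/to/minio-ca.crt' is a placeholder for wherever your certificate/CA file lives.
verified_http_client = urllib3.PoolManager(
    cert_reqs='CERT_REQUIRED',
    ca_certs='/path/to/minio-ca.crt'
)

minioClient = Minio('xxx.xxx.xxx.xxx:9000',
                    access_key='username',
                    secret_key='password',
                    http_client=verified_http_client,
                    secure=True)

print(minioClient.list_buckets())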

[Solved] KDevelop Error: Failed to specify program to start

The build succeeded, but running the project failed with the error:

Failed to specify program to start

Solution:

    1. Check whether Run/Current Launch Configuration is set to the current project.
    2. If it shows New Compiled Binary Launcher, select the current project instead.
    3. If the current project name is not listed, open the next option, Configure Launches.
    4. Click on the current project name.
    5. Add a Compiled Binary.
    6. Go back to step 1; the project now appears among the options, so select it.
    7. Execute; the program runs successfully.

Android Studio: Gradle project sync failed [How to Solve]

Question:

Unable to find method 'java.lang.String org.gradle.api.artifacts.result.ComponentSelectionReason.getDescription()'

Reference:

gradle issues details in discription – Stack Overflow

Solution:

Upgrade Android Studio.

The Stack Overflow answer suggests upgrading IntelliJ IDEA; by the same reasoning, the Android Studio version here is too old and does not match the Gradle version in use.
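If upgrading the IDE is not immediately possible, the other direction is to pin a Gradle version that the IDE does support in the wrapper configuration. A sketch only; the version number is an example, pick one compatible with your Android Gradle Plugin and IDE:

# gradle/wrapper/gradle-wrapper.properties
distributionUrl=https\://services.gradle.org/distributions/gradle-7.5-bin.zip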

[Solved] Hikvision SDK: NET_DVR_GetDVRConfig failed Device does not support this function

Problem:

I have written some code based on Hikvision's SDK for controlling cameras. The section of the program below is used to get the NVR channel configuration information.

The code is as follows; it uses the function NET_DVR_GetDVRConfig:

#include <iostream>
#include <cstdio>
#include <cstring>
#include "HCNetSDK.h"

int main()
{
    NET_DVR_Init();
    // Set connection timeout and reconnect interval
    NET_DVR_SetConnectTime(2000, 1);
    NET_DVR_SetReconnect(10000, true);

    // Register (log in to) the device
    LONG lUserID;
    // Login parameters, including device address, login user, password, etc.
    NET_DVR_USER_LOGIN_INFO struLoginInfo = { 0 };
    struLoginInfo.bUseAsynLogin = 0;                         // synchronous login
    strcpy(struLoginInfo.sDeviceAddress, "192.168.20.106");  // device IP address
    struLoginInfo.wPort = 8000;                              // device service port
    strcpy(struLoginInfo.sUserName, "admin");                // device login user name
    strcpy(struLoginInfo.sPassword, "111111hk");             // device login password
    // Device information, output parameter
    NET_DVR_DEVICEINFO_V40 struDeviceInfoV40 = { 0 };
    lUserID = NET_DVR_Login_V40(&struLoginInfo, &struDeviceInfoV40);
    if (lUserID < 0)
    {
        printf("Login failed, error code: %d\n", NET_DVR_GetLastError());
        NET_DVR_Cleanup();
        return -1;
    }

    NET_DVR_IPPARACFG_V40 ipcfg;
    DWORD bytesReturned = 0;
    ipcfg.dwSize = sizeof(NET_DVR_IPPARACFG_V40);
    int iGroupNO = 0;
    bool resCode = NET_DVR_GetDVRConfig(lUserID, NET_DVR_GET_IPPARACFG_V40, iGroupNO,
                                        &ipcfg, sizeof(NET_DVR_IPPARACFG_V40), &bytesReturned);
    if (!resCode)
    {
        DWORD code = NET_DVR_GetLastError();
        std::cout << "NET_DVR_GetDVRConfig failed " << NET_DVR_GetErrorMsg((LONG*)(&code)) << std::endl;
        NET_DVR_Logout(lUserID);
        NET_DVR_Cleanup();
        return -1;
    }

    std::cout << "Device group " << ipcfg.dwGroupNum
              << " digital channel count " << ipcfg.dwDChanNum
              << " start channel " << ipcfg.dwStartDChan << std::endl << std::endl;

    for (int i = 0; i < ipcfg.dwDChanNum; i++)
    {
        NET_DVR_PICCFG_V30 channelInfo;
        bytesReturned = 0;
        channelInfo.dwSize = sizeof(NET_DVR_PICCFG_V30);
        int channelNum = i + ipcfg.dwStartDChan;
        NET_DVR_GetDVRConfig(lUserID, NET_DVR_GET_PICCFG_V30, channelNum,
                             &channelInfo, sizeof(NET_DVR_PICCFG_V30), &bytesReturned);
        std::cout << "Channel no. " << channelNum << "\tchannel name " << channelInfo.sChanName;
        std::cout << "\tuser name " << ipcfg.struIPDevInfo[i].sUserName
                  << "\tpassword " << ipcfg.struIPDevInfo[i].sPassword;
        std::cout << "\tdevice ID " << (int)ipcfg.struIPDevInfo[i].szDeviceID;
        std::cout << "\tIP address " << ipcfg.struIPDevInfo[i].struIP.sIpV4
                  << "\tport " << ipcfg.struIPDevInfo[i].wDVRPort << std::endl;
    }

    // Release SDK resources
    NET_DVR_Logout(lUserID);
    NET_DVR_Cleanup();
    return 0;
}

The code ran fine, but after migrating it to another machine it failed with the error:
NET_DVR_GetDVRConfig failed Device does not support this function

Note this sentence in the SDK manual:
If the number of IP channels supported by the device is greater than 0, then the remote parameter configuration interface NET_DVR_GetDVRConfig can be used.
In other words, to use this function you first need to check how many IP channels the device supports.
The manual also gives a sample program that does the check before the call; it is reproduced below.
One complaint: the manual's description of the NET_DVR_GetDVRConfig function itself never mentions this requirement, so it took me a long time to track the problem down to this passage. Since not every device supports the function, the documented practice really should be: check first, then call.

#include <stdio.h>
#include <iostream>
#include "Windows.h"
#include "string.h"
#include "HCNetSDK.h"
using namespace std;

void main()
{
    int i = 0;
    BYTE byIPID, byIPIDHigh;
    int iDevInfoIndex, iGroupNO, iIPCh;
    DWORD dwReturned = 0;

    //---------------------------------------
    // Initialize the SDK
    NET_DVR_Init();
    // Set connection timeout and reconnect interval
    NET_DVR_SetConnectTime(2000, 1);
    NET_DVR_SetReconnect(10000, true);

    //---------------------------------------
    // Register (log in to) the device
    LONG lUserID;
    // Login parameters, including device address, login user, password, etc.
    NET_DVR_USER_LOGIN_INFO struLoginInfo = {0};
    struLoginInfo.bUseAsynLogin = 0;                     // synchronous login
    strcpy(struLoginInfo.sDeviceAddress, "192.0.0.64");  // device IP address
    struLoginInfo.wPort = 8000;                          // device service port
    strcpy(struLoginInfo.sUserName, "admin");            // device login user name
    strcpy(struLoginInfo.sPassword, "abcd1234");         // device login password
    // Device information, output parameter
    NET_DVR_DEVICEINFO_V40 struDeviceInfoV40 = {0};
    lUserID = NET_DVR_Login_V40(&struLoginInfo, &struDeviceInfoV40);
    if (lUserID < 0)
    {
        printf("Login failed, error code: %d\n", NET_DVR_GetLastError());
        NET_DVR_Cleanup();
        return;
    }

    printf("The max number of analog channels: %d\n", struDeviceInfoV40.struDeviceV30.byChanNum); // analog channel count
    printf("The max number of IP channels: %d\n", struDeviceInfoV40.struDeviceV30.byIPChanNum
                                                  + struDeviceInfoV40.struDeviceV30.byHighDChanNum * 256); // IP channel count

    // Get the IP channel parameter information
    NET_DVR_IPPARACFG_V40 IPAccessCfgV40;
    memset(&IPAccessCfgV40, 0, sizeof(NET_DVR_IPPARACFG));
    iGroupNO = 0;
    if (!NET_DVR_GetDVRConfig(lUserID, NET_DVR_GET_IPPARACFG_V40, iGroupNO, &IPAccessCfgV40,
                              sizeof(NET_DVR_IPPARACFG_V40), &dwReturned))
    {
        printf("NET_DVR_GET_IPPARACFG_V40 error, %d\n", NET_DVR_GetLastError());
        NET_DVR_Logout(lUserID);
        NET_DVR_Cleanup();
        return;
    }
    else
    {
        for (i = 0; i < IPAccessCfgV40.dwDChanNum; i++)
        {
            switch (IPAccessCfgV40.struStreamMode[i].byGetStreamType)
            {
            case 0: // stream fetched directly from the device
                if (IPAccessCfgV40.struStreamMode[i].uGetStream.struChanInfo.byEnable)
                {
                    byIPID = IPAccessCfgV40.struStreamMode[i].uGetStream.struChanInfo.byIPID;
                    byIPIDHigh = IPAccessCfgV40.struStreamMode[i].uGetStream.struChanInfo.byIPIDHigh;
                    iDevInfoIndex = byIPIDHigh * 256 + byIPID - 1 - iGroupNO * 64;
                    printf("IP channel no.%d is online, IP: %s\n", i + 1,
                           IPAccessCfgV40.struIPDevInfo[iDevInfoIndex].struIP.sIpV4);
                }
                break;
            case 1: // stream fetched through a stream-media server
                if (IPAccessCfgV40.struStreamMode[i].uGetStream.struPUStream.struStreamMediaSvrCfg.byValid)
                {
                    printf("IP channel %d connected with the IP device by stream server.\n", i + 1);
                    printf("IP of stream server: %s, IP of IP device: %s\n",
                           IPAccessCfgV40.struStreamMode[i].uGetStream.struPUStream.struStreamMediaSvrCfg.struDevIP.sIpV4,
                           IPAccessCfgV40.struStreamMode[i].uGetStream.struPUStream.struDevChanInfo.struIP.sIpV4);
                }
                break;
            default:
                break;
            }
        }
    }

    // Configure IP channel 5
    iIPCh = 4;

    // Custom protocol support
    NET_DVR_CUSTOM_PROTOCAL struCustomPro;
    if (!NET_DVR_GetDVRConfig(lUserID, NET_DVR_GET_CUSTOM_PRO_CFG, 1, &struCustomPro,
                              sizeof(NET_DVR_CUSTOM_PROTOCAL), &dwReturned)) // get custom protocol 1
    {
        printf("NET_DVR_GET_CUSTOM_PRO_CFG error, %d\n", NET_DVR_GetLastError());
        NET_DVR_Logout(lUserID);
        NET_DVR_Cleanup();
        return;
    }
    struCustomPro.dwEnabled = 1;                                   // enable the main stream
    struCustomPro.dwEnableSubStream = 1;                           // enable the sub stream
    strcpy((char *)struCustomPro.sProtocalName, "Protocal_RTSP");  // custom protocol name: Protocal_RTSP, max 16 bytes
    struCustomPro.byMainProType = 1;                               // main stream protocol type: 1 - RTSP
    struCustomPro.byMainTransType = 2;                             // main stream transport: 0 - auto, 1 - UDP, 2 - RTP over RTSP
    struCustomPro.wMainPort = 554;                                 // main stream port
    strcpy((char *)struCustomPro.sMainPath, "rtsp://192.168.1.65/h264/ch1/main/av_stream"); // main stream URL
    struCustomPro.bySubProType = 1;                                // sub stream protocol type: 1 - RTSP
    struCustomPro.bySubTransType = 2;                              // sub stream transport: 0 - auto, 1 - UDP, 2 - RTP over RTSP
    struCustomPro.wSubPort = 554;                                  // sub stream port
    strcpy((char *)struCustomPro.sSubPath, "rtsp://192.168.1.65/h264/ch1/sub/av_stream");   // sub stream URL
    if (!NET_DVR_SetDVRConfig(lUserID, NET_DVR_SET_CUSTOM_PRO_CFG, 1, &struCustomPro,
                              sizeof(NET_DVR_CUSTOM_PROTOCAL))) // set custom protocol 1
    {
        printf("NET_DVR_SET_CUSTOM_PRO_CFG error, %d\n", NET_DVR_GetLastError());
        NET_DVR_Logout(lUserID);
        NET_DVR_Cleanup();
        return;
    }
    printf("Set the custom protocol: %s\n", "Protocal_RTSP");

    NET_DVR_IPC_PROTO_LIST m_struProtoList;
    if (!NET_DVR_GetIPCProtoList(lUserID, &m_struProtoList)) // get the front-end protocols supported by the device
    {
        printf("NET_DVR_GetIPCProtoList error, %d\n", NET_DVR_GetLastError());
        NET_DVR_Logout(lUserID);
        NET_DVR_Cleanup();
        return;
    }

    IPAccessCfgV40.struIPDevInfo[iIPCh].byEnable = 1; // enable the channel
    for (i = 0; i < m_struProtoList.dwProtoNum; i++)
    {
        if (strcmp((char *)struCustomPro.sProtocalName, (char *)m_struProtoList.struProto[i].byDescribe) == 0)
        {
            IPAccessCfgV40.struIPDevInfo[iIPCh].byProType = m_struProtoList.struProto[i].dwType; // select the custom protocol
            break;
        }
    }
    //IPAccessCfgV40.struIPDevInfo[iIPCh].byProType = 0; // vendor private protocol
    strcpy((char *)IPAccessCfgV40.struIPDevInfo[iIPCh].struIP.sIpV4, "192.168.1.65"); // IP address of the front-end IP device
    IPAccessCfgV40.struIPDevInfo[iIPCh].wDVRPort = 8000;                              // service port of the front-end IP device
    strcpy((char *)IPAccessCfgV40.struIPDevInfo[iIPCh].sUserName, "admin");           // login user name of the front-end IP device
    strcpy((char *)IPAccessCfgV40.struIPDevInfo[iIPCh].sPassword, "12345");           // login password of the front-end IP device
    IPAccessCfgV40.struStreamMode[iIPCh].byGetStreamType = 0;
    IPAccessCfgV40.struStreamMode[iIPCh].uGetStream.struChanInfo.byChannel = 1;
    IPAccessCfgV40.struStreamMode[iIPCh].uGetStream.struChanInfo.byIPID = (iIPCh + 1) % 256;
    IPAccessCfgV40.struStreamMode[iIPCh].uGetStream.struChanInfo.byIPIDHigh = (iIPCh + 1) / 256;
    // IP channel configuration, including adding, deleting and modifying IP channels
    if (!NET_DVR_SetDVRConfig(lUserID, NET_DVR_SET_IPPARACFG_V40, iGroupNO, &IPAccessCfgV40,
                              sizeof(NET_DVR_IPPARACFG_V40)))
    {
        printf("NET_DVR_SET_IPPARACFG_V40 error, %d\n", NET_DVR_GetLastError());
        NET_DVR_Logout(lUserID);
        NET_DVR_Cleanup();
        return;
    }
    else
    {
        printf("Set IP channel no.%d, IP: %s\n", iIPCh + 1, IPAccessCfgV40.struIPDevInfo[iIPCh].struIP.sIpV4);
    }

    // Log out the user
    NET_DVR_Logout(lUserID);
    // Release SDK resources
    NET_DVR_Cleanup();
    return;
}
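Distilled down to the point of this post, the essential pattern is: check the IP channel count from the login result before asking for IP channel parameters. A minimal sketch based on the fields used in the sample above:

// Sketch: guard NET_DVR_GetDVRConfig(NET_DVR_GET_IPPARACFG_V40) with the IP channel count
DWORD dwIPChanNum = struDeviceInfoV40.struDeviceV30.byIPChanNum
                  + struDeviceInfoV40.struDeviceV30.byHighDChanNum * 256;
if (dwIPChanNum > 0)
{
    NET_DVR_IPPARACFG_V40 ipcfg = {0};
    ipcfg.dwSize = sizeof(NET_DVR_IPPARACFG_V40);
    DWORD dwReturned = 0;
    if (!NET_DVR_GetDVRConfig(lUserID, NET_DVR_GET_IPPARACFG_V40, 0,
                              &ipcfg, sizeof(NET_DVR_IPPARACFG_V40), &dwReturned))
    {
        printf("NET_DVR_GET_IPPARACFG_V40 error, %d\n", NET_DVR_GetLastError());
    }
}
else
{
    // Device reports no IP channels (e.g. a plain DVR/camera), so this config call is not supported
    printf("Device reports no IP channels; skipping NET_DVR_GET_IPPARACFG_V40\n");
}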

LsInventorySession failed: RawInventory gets null OracleHomeInfo [How to Solve]

Error:

List of Homes on this system:

  Home name= OraDb11g_home1, Location= "/opt/oracle/products/11.2.0"
LsInventorySession failed: RawInventory gets null OracleHomeInfo

OPatch failed with error code 73


Solution:

Switch to the $ORACLE_HOME/oui/bin directory and attach the home to the oraInventory:

% cd $ORACLE_HOME/oui/bin

% ./attachHome.sh    (on Linux; on Windows execute attachHome.cmd)
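For completeness, the whole sequence as one shell session, using the home location shown in the listing above (adjust the path to your own installation):

% cd /opt/oracle/products/11.2.0/oui/bin
% ./attachHome.sh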


[Solved] Spring MVC Error: A child container failed during start

Error:

WARNING: A child container failed during start
java.util.concurrent.ExecutionException: org.apache.catalina.LifecycleException: Failed to start component [StandardEngine[Tomcat].StandardHost[localhost].StandardContext[]]
at java.util.concurrent.FutureTask.report(FutureTask.java:122)
at java.util.concurrent.FutureTask.get(FutureTask.java:192)
at org.apache.catalina.core.ContainerBase.startInternal(ContainerBase.java:1123)
at org.apache.catalina.core.StandardHost.startInternal(StandardHost.java:800)
at org.apache.catalina.util.LifecycleBase.start(LifecycleBase.java:150)
at org.apache.catalina.core.ContainerBase$StartChild.call(ContainerBase.java:1559)
at org.apache.catalina.core.ContainerBase$StartChild.call(ContainerBase.java:1549)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Caused by: org.apache.catalina.LifecycleException: Failed to start component [StandardEngine[Tomcat].StandardHost[localhost].StandardContext[]]
at org.apache.catalina.util.LifecycleBase.start(LifecycleBase.java:154)
… 6 more
Caused by: java.lang.ClassCastException: org.springframework.web.SpringServletContainerInitializer cannot be cast to javax.servlet.ServletContainerInitializer
at org.apache.catalina.startup.ContextConfig.getServletContainerInitializer(ContextConfig.java:1661)
at org.apache.catalina.startup.ContextConfig.processServletContainerInitializers(ContextConfig.java:1569)
at org.apache.catalina.startup.ContextConfig.webConfig(ContextConfig.java:1277)
at org.apache.catalina.startup.ContextConfig.configureStart(ContextConfig.java:878)
at org.apache.catalina.startup.ContextConfig.lifecycleEvent(ContextConfig.java:369)
at org.apache.catalina.util.LifecycleSupport.fireLifecycleEvent(LifecycleSupport.java:119)
at org.apache.catalina.util.LifecycleBase.fireLifecycleEvent(LifecycleBase.java:90)
at org.apache.catalina.core.StandardContext.startInternal(StandardContext.java:5179)
at org.apache.catalina.util.LifecycleBase.start(LifecycleBase.java:150)
… 6 more

Sep 18, 2022 8:24:53 PM org.apache.catalina.core.ContainerBase startInternal
WARNING: A child container failed during start
java.util.concurrent.ExecutionException: org.apache.catalina.LifecycleException: Failed to start component [StandardEngine[Tomcat].StandardHost[localhost]]
at java.util.concurrent.FutureTask.report(FutureTask.java:122)
at java.util.concurrent.FutureTask.get(FutureTask.java:192)
at org.apache.catalina.core.ContainerBase.startInternal(ContainerBase.java:1123)
at org.apache.catalina.core.StandardEngine.startInternal(StandardEngine.java:302)
at org.apache.catalina.util.LifecycleBase.start(LifecycleBase.java:150)
at org.apache.catalina.core.StandardService.startInternal(StandardService.java:443)
at org.apache.catalina.util.LifecycleBase.start(LifecycleBase.java:150)
at org.apache.catalina.core.StandardServer.startInternal(StandardServer.java:732)
at org.apache.catalina.util.LifecycleBase.start(LifecycleBase.java:150)
at org.apache.catalina.startup.Tomcat.start(Tomcat.java:335)
at org.apache.tomcat.maven.plugin.tomcat7.run.AbstractRunMojo.startContainer(AbstractRunMojo.java:1091)
at org.apache.tomcat.maven.plugin.tomcat7.run.AbstractRunMojo.execute(AbstractRunMojo.java:512)
at org.apache.maven.plugin.DefaultBuildPluginManager.executeMojo(DefaultBuildPluginManager.java:137)
at org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:210)
at org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:156)
at org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:148)
at org.apache.maven.lifecycle.internal.LifecycleModuleBuilder.buildProject(LifecycleModuleBuilder.java:117)
at org.apache.maven.lifecycle.internal.LifecycleModuleBuilder.buildProject(LifecycleModuleBuilder.java:81)
at org.apache.maven.lifecycle.internal.builder.singlethreaded.SingleThreadedBuilder.build(SingleThreadedBuilder.java:56)
at org.apache.maven.lifecycle.internal.LifecycleStarter.execute(LifecycleStarter.java:128)
at org.apache.maven.DefaultMaven.doExecute(DefaultMaven.java:305)
at org.apache.maven.DefaultMaven.doExecute(DefaultMaven.java:192)
at org.apache.maven.DefaultMaven.execute(DefaultMaven.java:105)
at org.apache.maven.cli.MavenCli.execute(MavenCli.java:956)
at org.apache.maven.cli.MavenCli.doMain(MavenCli.java:288)
at org.apache.maven.cli.MavenCli.main(MavenCli.java:192)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:497)
at org.codehaus.plexus.classworlds.launcher.Launcher.launchEnhanced(Launcher.java:282)
at org.codehaus.plexus.classworlds.launcher.Launcher.launch(Launcher.java:225)
at org.codehaus.plexus.classworlds.launcher.Launcher.mainWithExitCode(Launcher.java:406)
at org.codehaus.plexus.classworlds.launcher.Launcher.main(Launcher.java:347)
at org.codehaus.classworlds.Launcher.main(Launcher.java:47)
Caused by: org.apache.catalina.LifecycleException: Failed to start component [StandardEngine[Tomcat].StandardHost[localhost]]
at org.apache.catalina.util.LifecycleBase.start(LifecycleBase.java:154)
at org.apache.catalina.core.ContainerBase$StartChild.call(ContainerBase.java:1559)
at org.apache.catalina.core.ContainerBase$StartChild.call(ContainerBase.java:1549)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Caused by: org.apache.catalina.LifecycleException: A child container failed during start
at org.apache.catalina.core.ContainerBase.startInternal(ContainerBase.java:1131)
at org.apache.catalina.core.StandardHost.startInternal(StandardHost.java:800)
at org.apache.catalina.util.LifecycleBase.start(LifecycleBase.java:150)
… 6 more

 

Solution:
I found that the javax.servlet-api dependency in the pom file was missing the provided scope, so I added it:

<dependency>
    <groupId>javax.servlet</groupId>
    <artifactId>javax.servlet-api</artifactId>
    <version>3.1.0</version>
    <scope>provided</scope>
</dependency>

Reason:

In servlet code, the HttpServletRequest and HttpServletResponse objects are types provided by servlet-api; without that dependency we cannot compile against them. At runtime, however, Tomcat ships its own copies of these classes. To prevent a conflict, the servlet-api dependency is given the provided scope in the pom file, meaning it is used for compilation but not packaged with the application: when the project runs on Tomcat it uses Tomcat's servlet classes, and at compile time it uses the servlet-api dependency.

[Solved] Job for mysqld.service failed because the control process exited with error code

After shutting down the virtual machine and starting it up again, running the command systemctl start mysqld to start the MySQL service fails with: Job for mysqld.service failed because the control process exited with error code. See "systemctl status mysqld.service" and "journalctl -xe" for details.
This happens because the MySQL control process loses its runtime files when the virtual machine is shut down. After some digging, the solution I found was: go into the /run directory, create a mysqld entry there, and then grant it the proper permissions; after that the MySQL service starts normally.
In fact, if you run systemctl enable mysqld when installing and starting MySQL, so that the service starts automatically at boot, the error above does not occur.
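A minimal sketch of the fix described above (the exact path and ownership are assumptions; on most distributions MySQL expects a /run/mysqld directory owned by the mysql user):

# Recreate the runtime directory MySQL expects and hand it to the mysql user
mkdir -p /run/mysqld
chown mysql:mysql /run/mysqld

# Start the service again, and optionally enable it at boot
systemctl start mysqld
systemctl enable mysqld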

[Solved] JPA query data error: Page 1 of 0 containing UNKNOWN instances

The error means that your query conditions matched nothing, i.e. the result is empty, so double-check the conditions you wrote.
In my case the error came from older code that used cb.equal(root.get("userName"), res.getName()), an exact-match query. I copied that code directly, but this time I wanted a fuzzy query, so I changed it to cb.equal(root.get("userName"), "%" + res.getName().trim() + "%"). Notice that I only turned the parameter value into a fuzzy pattern while still calling equal, an exact match, and that is what caused the error. It took me more than half an hour to find; changing it to cb.like(root.get("userName"), "%" + res.getName().trim() + "%") made it work.

Specification<TUser> specification = (root, query, cb) -> {
            List<Predicate> predicates = Lists.newArrayList();
            predicates.add(cb.or(cb.equal(root.get("role"), StateEnum.ecgDoctor.name()), cb.equal(root.get("role"), StateEnum.intern.name())));
            //Add name condition
            if (res.getName() != null) {
                predicates.add(cb.like(root.get("userName"), "%" + res.getName().trim() + "%"));
            }
            if (res.getPhoneno() != null) {
                predicates.add(cb.like(root.get("phone"), "%" + res.getPhoneno().trim() + "%"));
            }
            return cb.and(predicates.toArray(new Predicate[predicates.size()]));
        };

To sum up, if you want an exact-match (equal) query, write it like this:

//Add name condition
if (res.getName() != null) {
    predicates.add(cb.equal(root.get("userName"), res.getName().trim()));
}

A fuzzy (like) query is written like this:

//Add name condition
if (res.getName() != null) {
    predicates.add(cb.like(root.get("userName"), "%" + res.getName().trim() + "%"));
}

Written this way, nothing will go wrong.

Also note that JPA paging starts from page 0. And in root.get("property"), the property is the field name in the entity class, i.e. the private String userName below, not the database column name given by @Column(name = "user_name"); do not mix the two up.

@Column(name = "user_name")
private String userName;

[Solved] Maven Project Packaging Error: Unable to find main class

1. Description of the problem

When I package the parent project of a Maven aggregate (multi-module) project, the build fails with the error Unable to find main class, which means the main startup class cannot be found.


2. Cause of the problem

The project contains several utility-class modules. These modules do not need to be started; they only provide code for the other service modules to reference. "No need to start" means they have no main startup class, yet the parent project's pom file declares the Spring Boot packaging plugin spring-boot-maven-plugin:

<plugins>
    <plugin>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-maven-plugin</artifactId>
        <configuration>
            <excludes>
                <exclude>
                    <groupId>org.projectlombok</groupId>
                    <artifactId>lombok</artifactId>
                </exclude>
            </excludes>
        </configuration>
    </plugin>
</plugins>

So when packaging, mvn will scan all dependent modules. If it finds that there is no main startup class under a module, it will report an error.

3. Solution

My solution: comment out the spring-boot-maven-plugin packaging plugin in the parent project's pom, then run package/install again.

You can see that the packaging is successful
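An alternative, if you would rather keep the plugin declared in the parent, is to switch off repackaging in the modules that have no main class. This is only a sketch; whether it fits depends on your module layout:

<!-- In the pom of a utility module that has no main class (sketch) -->
<plugin>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-maven-plugin</artifactId>
    <configuration>
        <!-- skip the Spring Boot repackage goal for this module -->
        <skip>true</skip>
    </configuration>
</plugin>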

[Solved] conda Error When Activating a Virtual Environment: Invoke-Expression

My problem was more or less the same as what others have reported for this error: activating the conda virtual environment fails with an Invoke-Expression error.
I checked the system environment variable Path and found that the Python path in the offending directory was wrong; after deleting that entry, the environment activated normally.

[Solved] Kafka Restarts error | Cloudera Manager Access Returns 500 | HDFS Startup Error

Hi~ Long time no update

1. What to watch out for after restarting Kafka:
While data is being written to the target storage location, there is an open write file a. This file stays in write state for a while, usually about an hour, before a new write file b is created and the previous file a is closed (how long that close takes depends on each cluster's configuration). Here is the problem: if you restart during that window, the old write file a is left behind while a new write file is created after the restart, so a stays in write state permanently. Reading or writing file a then reports an error, and importing it into Hive also reports an error (loading into the Hive table succeeds, but a select on it fails), because the file is stuck open for write and cannot be operated on; this is the write lock (lease) you have probably heard of.
Solution: we need to manually end the write state of those files. First, find out which files are open for write by running this on the command line:
hdfs fsck /data/logs/ -openforwrite
(replace /data/logs/ with the directory your files are in)
Every file it lists is in write state.


Once you can see the open files, run the command that ends their write state. Why end all of them? Logically you would only need to close the file that was open before the restart, but closing all of them also solves the problem and is simpler, if a bit brute-force: the manual close automatically rolls over to a new write file, so it is safe to do it for every file listed. Now execute:
hdfs debug recoverLease -path /logs/common_log/2022-09-16/FlumeData.1663292498820.tmp -retries 3
(the path is one of the files shown by the previous command)
Run it once for each open file, and the problem is solved. One more note: if such a file has already been loaded into Hive, you also need to look under /user/warehouse/hive/ for the corresponding open-for-write file.


2. Cloudera Manager's web UI returns a 500 error in the browser (CDH):
① First check the /etc/hosts file: keep only the two lines below plus the cluster's intranet IP mappings.
127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6

② Also check whether the CM-related ports are being blocked by the firewall.

③ Then restart CM by executing the commands in this order:
On the NameNode host: systemctl stop cloudera-scm-server
Then on every node: systemctl stop cloudera-scm-agent

On the NameNode host: systemctl start cloudera-scm-server
Then on every node: systemctl start cloudera-scm-agent
Attention! The order of these commands must not be reversed, otherwise the cluster may fail to start.
Afterwards you can check the result with systemctl status cloudera-scm-server and systemctl status cloudera-scm-agent.

3. If CM starts and the UI is reachable, but starting HDFS reports error 1 or 2 below:
1. Unable to retrieve non-local non-loopback IP address. Seeing address: cm/127.0.0.1
2. ERROR ScmActive-0:com.cloudera.server.cmf.components.ScmActive: ScmActive was not able to access CM identity to validate it. 2017-04-18 09:40:29,308 ERROR ScmActive-0

Then congratulations, there is a solution.
First find CM's backing database. It was configured when the cluster was installed; if you do not know where it is, ask whoever installed it (in most cases it lives on the NameNode host; don't ask me for the account and password ~). Then run show databases; and you will see a cm or scm database.


use that database, then run show tables;
You will see a table called HOSTS; view its data with select * from HOSTS;


You will find one row that is different: its NAME and IP_ADDRESS do not match the host. Change them back to the intranet host name and IP_ADDRESS (I trust everyone can handle that update), then restart CM and you are done!
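A sketch of that correction in SQL (the host name, IP and key column here are placeholders; check the row returned by the select above and use your cluster's real values):

-- Hypothetical values: replace node01 / 192.168.1.101 / HOST_ID = 1 with what your select shows
UPDATE HOSTS SET NAME = 'node01', IP_ADDRESS = '192.168.1.101' WHERE HOST_ID = 1;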