
[Solved] Qt Error: qt.network.ssl: QSslSocket::connectToHostEncrypted: TLS initialization failed

Project scenario:

A Qt network program reports the following error when requesting a website: qt.network.ssl: QSslSocket::connectToHostEncrypted: TLS initialization failed


Cause analysis:

The computer may not have an OpenSSL installation that matches the version Qt was built against.


Solution:

1. Use the following code to determine the OpenSSL version your Qt build supports:

#include <QSslSocket>
#include <QDebug>

qDebug() << QSslSocket::sslLibraryBuildVersionString();

My output shows that the OpenSSL version Qt was built against is 1.1.1, so download the corresponding version of OpenSSL.
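If you also want to confirm at runtime whether Qt can actually load a TLS backend, here is a minimal sketch (standard QSslSocket static calls wrapped in a small console program) that prints both the build-time and the run-time library versions:

#include <QCoreApplication>
#include <QSslSocket>
#include <QDebug>

int main(int argc, char *argv[])
{
    QCoreApplication app(argc, argv);
    // Version Qt was compiled against vs. the library actually found at runtime
    qDebug() << "Build-time SSL:" << QSslSocket::sslLibraryBuildVersionString();
    qDebug() << "Run-time SSL:" << QSslSocket::sslLibraryVersionString();
    qDebug() << "TLS supported:" << QSslSocket::supportsSsl();
    return 0;
}

If supportsSsl() returns false, the OpenSSL libraries are missing or mismatched, which is exactly what the steps below address.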
2. Download the corresponding OpenSSL version from this website: http://slproweb.com/products/Win32OpenSSL.html. One thing to note: if you use a MinGW 32-bit kit, download Win32 OpenSSL;

if you use a MinGW 64-bit kit, download Win64 OpenSSL. For either compiler, the EXE installer of the Light version is sufficient.

3. Install OpenSSL
Click Next through the installer. In the last step, when asked where to copy the OpenSSL DLLs, choose the OpenSSL binaries (/bin) directory.
4. Copy the files
I downloaded the Win64 version of OpenSSL here, so copy libcrypto-1_1-x64.dll and libssl-1_1-x64.dll from the OpenSSL folder into the Qt kit's bin directory, in my case D:\Qt\Qt5.14.0\5.14.0\mingw73_64\bin. If you downloaded the 32-bit build, put the 32-bit DLLs into D:\Qt\Qt5.14.0\5.14.0\mingw73_32\bin instead; other kits are similar.
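Assuming the installer's default location of C:\Program Files\OpenSSL-Win64 (adjust both paths to your own machine), the copy step looks like this in a Windows command prompt:

copy "C:\Program Files\OpenSSL-Win64\bin\libcrypto-1_1-x64.dll" "D:\Qt\Qt5.14.0\5.14.0\mingw73_64\bin"
copy "C:\Program Files\OpenSSL-Win64\bin\libssl-1_1-x64.dll" "D:\Qt\Qt5.14.0\5.14.0\mingw73_64\bin"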
5. Done
Run the project again and Qt will no longer report the error: qt.network.ssl: QSslSocket::connectToHostEncrypted: TLS initialization failed

[Solved] FATAL ERROR: Ineffective mark-compacts near heap limit Allocation failed – JavaScript heap out of memory

Building a React project produces the following error message:


<--- Last few GCs --->

[1118:0x104c00000]   210060 ms: Scavenge (reduce) 2020.9 (2049.8) -> 2020.1 (2051.3) MB, 3.4/0.0 ms  (average mu = 0.148, current mu = 0.006) allocation failure 
[1118:0x104c00000]   213180 ms: Mark-sweep (reduce) 2021.0 (2050.3) -> 2020.1 (2051.3) MB, 3117.6/0.0 ms  (average mu = 0.089, current mu = 0.010) allocation failure scavenge might not succeed


<--- JS stacktrace --->

FATAL ERROR: Ineffective mark-compacts near heap limit Allocation failed - JavaScript heap out of memory
 1: 0x101319fc5 node::Abort() (.cold.1) [/usr/local/bin/node]
 2: 0x1000b6169 node::Abort() [/usr/local/bin/node]
 3: 0x1000b62df node::OnFatalError(char const*, char const*) [/usr/local/bin/node]
 4: 0x100200ba7 v8::Utils::ReportOOMFailure(v8::internal::Isolate*, char const*, bool) [/usr/local/bin/node]
 5: 0x100200b43 v8::internal::V8::FatalProcessOutOfMemory(v8::internal::Isolate*, char const*, bool) [/usr/local/bin/node]

Solution:

1. Install increase-memory-limit.

 npm install -g increase-memory-limit

2. After the installation succeeds, run:

increase-memory-limit

Run npm run build again and the build will succeed.
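Note: an alternative that needs no extra package is to raise V8's heap limit directly through Node's own --max-old-space-size flag; a sketch, assuming a bash shell:

# Give Node a 4 GB old-generation heap for this shell session
export NODE_OPTIONS=--max-old-space-size=4096
npm run build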

[Solved] Kubernetes ingress-srv.yaml error: failed calling webhook “validate.nginx.ingress.kubernetes.io”

Problem scenario

While applying the Ingress configuration file for the Kubernetes nginx ingress controller:

kubectl apply -f ingress-srv.yaml

Three errors appeared:

Error 1: no matches for kind “Ingress” in version “networking.k8s.io/v1beta1”

Error 2: after changing apiVersion to networking.k8s.io/v1, pathType is reported as invalid.

Error 3: Error from server (InternalError): error when creating “ingress-srv.yaml”: Internal error occurred: failed calling webhook “validate.nginx.ingress.kubernetes.io”: Post “https://ingress-nginx-controller-admission.ingress-nginx.svc:443/networking/v1/ingresses?timeout=10s”: dial tcp 10.102.20.133:443: connect: connection refused

Solution 1:
For no matches for kind “Ingress” in version “networking.k8s.io/v1beta1”, change

apiVersion: networking.k8s.io/v1beta1

to

apiVersion: networking.k8s.io/v1


Solution 2:

pathType is reported as invalid after changing apiVersion to v1.

This is a version issue: in networking.k8s.io/v1, every path requires a pathType, and the backend format changed. The corrected manifest is below.

Original manifest (reports the error):

apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: ingress-srv
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/use-regex: "true"
spec:
  rules:
    - host: posts.com
      http:
        paths:
          - path: /posts/create
            backend:
              serviceName: posts-clusterip-srv
              servicePort: 4000
          - path: /posts
            backend:
              serviceName: query-srv
              servicePort: 4002
          - path: /posts/?(.*)/comments
            backend:
              serviceName: comments-srv
              servicePort: 4001
          - path: /?(.*)
            backend:
              serviceName: client-srv
              servicePort: 3000

Here is the modified manifest:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress-srv
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/use-regex: "true"
spec:
  rules:
    - host: posts.com
      http:
        paths:
          - pathType: Prefix
            path: "/posts/create"
            backend:
              service:
                name: posts-clusterip-srv
                port:
                  number: 4000
          - pathType: Prefix
            path: "/posts"
            backend:
              service:
                name: query-srv
                port:
                  number: 4002
          - pathType: Prefix
            path: "/posts/?(.*)/comments"
            backend:
              service:
                name: comments-srv
                port:
                  number: 4001
          - pathType: Prefix
            path: "/?(.*)"
            backend:
              service:
                name: client-srv
                port:
                  number: 3000


Solution 3:
1. First list the validating webhook configurations: kubectl get validatingwebhookconfigurations
2. Then delete the admission webhook: kubectl delete -A ValidatingWebhookConfiguration ingress-nginx-admission
3. With that admission webhook deleted, the apply works normally, as shown below.
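With the webhook gone, re-apply the manifest:

kubectl apply -f ingress-srv.yaml

Note that deleting the admission webhook disables validation of Ingress objects entirely, so malformed manifests will no longer be rejected up front.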

ArcGIS error 999999: error executing function, table not found.

A Python script is used to run the Identity operation in ArcGIS, but it keeps reporting that the output table does not exist or that there is a problem with the path.

My solution:

If the workspace is set to a file geodatabase or to a non-geodatabase folder, then for some unknown reason you may be unable to create a new feature class in the personal geodatabase, which produces this error. It may be a matter of permissions or timing; I have not studied it in depth.
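For reference, the pattern involved looks roughly like this; a sketch with hypothetical paths and layer names, using arcpy's standard Identity tool and pointing the workspace at the same geodatabase that receives the output:

import arcpy

# Point the workspace at the geodatabase that will hold the output,
# rather than at a plain folder or an unrelated database.
arcpy.env.workspace = r"C:\data\work.gdb"  # hypothetical path
arcpy.env.overwriteOutput = True

# Identity analysis (requires an Advanced license); the result feature
# class is created inside the workspace geodatabase.
arcpy.Identity_analysis("parcels", "zones", "parcels_identity")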

[Solved] Nacos Cluster startup error: error=‘Cannot allocate memory‘ (errno=12)

Problem discovery

1. Start one node of the Nacos cluster.

2. Count the running Nacos processes:

ps -ef|grep nacos|grep -v grep|wc -l

This was the second startup; one node had already been started, so the count should be two. Why is it still one after the second startup?

3. Use tail -f to follow the startup log, watch it grow, and find the reason:

tail -f <log file path>   # the log file path is printed after the start command

Normal startup

Error on the second startup

The Nacos startup log shows that the memory is insufficient.

4. Check memory usage with the free -h command.

Only about 70 MB of available memory is left.

5. View the JVM startup options in the startup.sh file:

-Xms2g sets the initial heap size to 2 GB
-Xmx2g sets the maximum JVM heap size to 2 GB
-Xmn1g sets the young generation size to 1 GB

Solution:

1. Increase the system memory, or

2. Modify the JVM startup parameters in the startup script to reduce the memory allocated to the JVM.

Size the heap according to your usage pattern and the memory of your virtual machine, as in the sketch below.
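For example, a sketch (the exact line and suitable values depend on your Nacos version and how much RAM the host has): in bin/startup.sh, change the cluster-mode JVM options from

JAVA_OPT="${JAVA_OPT} -server -Xms2g -Xmx2g -Xmn1g"

to something the machine can afford, e.g.

JAVA_OPT="${JAVA_OPT} -server -Xms512m -Xmx512m -Xmn256m"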

After the modification, the node starts successfully.

SQL Server Error: Arithmetic overflow error converting expression to data type int.

1. Problem description

Counting the rows of a table in SQL Server (SQL DW) with count reports an error:

select count(*)  from test.test_t;

Then an error is reported:

SQL ERROR [8115] [S0002]: Arithmetic overflow error converting expression to data type int.

2. Cause of the problem

The table is large: the result of count is of type int, and the row count exceeds the int range.

tinyint: integers from 0 to 255
smallint: integers from -2^15 (-32,768) to 2^15-1 (32,767)
int: integers from -2^31 (-2,147,483,648) to 2^31-1 (2,147,483,647)
bigint: integers from -2^63 (-9,223,372,036,854,775,808) to 2^63-1 (9,223,372,036,854,775,807)
decimal: fixed-precision numeric data from -10^38+1 to 10^38-1

3. Solution

SQL Server provides the count_big function, which returns a bigint:

select count_big(*)  from test.test_t;
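As an aside, when an approximate count of a very large table is enough, reading partition metadata is far faster than scanning; a sketch using SQL Server's sys.dm_db_partition_stats (exact except while concurrent writes are in flight):

-- Row count from partition metadata (fast, not transactionally exact)
SELECT SUM(row_count) AS row_cnt
FROM sys.dm_db_partition_stats
WHERE object_id = OBJECT_ID('test.test_t')
  AND index_id IN (0, 1);  -- heap or clustered index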

kernel module insert error: ERROR: could not insert module …../file.ko : File exists

When inserting a kernel module, I was told the file already exists; yet an earlier attempt to insert it had failed. To rule out a problem with the stale module, I decided to remove the original module and insert it again. Here is the method.

Check the currently loaded modules:

lsmod

You can see the modules currently loaded. If the module you want to insert is already in the list, remove it:

sudo rmmod openvswitch

Then re-insert it:

sudo insmod datapath/linux/openvswitch.ko

Then you can insert the module normally.
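The check can also be scripted; a small sketch that removes a stale copy only if one is resident:

# Reload the module only after removing any resident copy
if lsmod | grep -q '^openvswitch'; then
    sudo rmmod openvswitch
fi
sudo insmod datapath/linux/openvswitch.ko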

[Solved] SSM Project Error: Error during artifact deployment. See server log for details.

The SSM project reports "Error during artifact deployment. See server log for details." Here is the solution.

First look at the Tomcat localhost log and find:

java.lang.ClassNotFoundException:org.springframework.web.context.ContextLoaderListener

We can fix this error with a setting in IDEA.

In IDEA, click File > Project Structure > Artifacts

Delete the jar file at the same level as WEB-INF

Right-click the project name in the Output Layout pane on the right and select Put into Output Root.

After that, the jar files are placed in the lib directory of WEB-INF; click OK.

Finally, restart Tomcat for the change to take effect.

spark Program Error: ERROR 01: java.lang.NullPointerException

Running a Spark program in IDEA, the DataFrame can execute df.show(), but df.count() throws the following exception:

2022-03-25 17:56:13,691 ERROR executor.Executor: Exception in task 14.0 in stage 7.0 (TID 222)
java.lang.NullPointerException
	at $line33.$read$$iw$$iw$$iw$$iw$$iw$$iw$$iw$$iw$$iw$$iw.$anonfun$rdd01$1(<console>:26)
	at scala.collection.Iterator$$anon$10.next(Iterator.scala:461)
	at scala.collection.Iterator$$anon$11.nextCur(Iterator.scala:486)
	at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:492)
	at scala.collection.Iterator$$anon$10.hasNext(Iterator.scala:460)
	at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIteratorForCodegenStage1.agg_doAggregateWithKeys_0$(Unknown Source)
	at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIteratorForCodegenStage1.processNext(Unknown Source)
	at org.apache.spark.sql.execution.BufferedRowIterator.hasNext(BufferedRowIterator.java:43)
	at org.apache.spark.sql.execution.WholeStageCodegenExec$$anon$1.hasNext(WholeStageCodegenExec.scala:759)
	at scala.collection.Iterator$$anon$10.hasNext(Iterator.scala:460)
	at org.apache.spark.shuffle.sort.BypassMergeSortShuffleWriter.write(BypassMergeSortShuffleWriter.java:140)
	at org.apache.spark.shuffle.ShuffleWriteProcessor.write(ShuffleWriteProcessor.scala:59)
	at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:99)
	at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:52)
	at org.apache.spark.scheduler.Task.run(Task.scala:131)
	at org.apache.spark.executor.Executor$TaskRunner.$anonfun$run$3(Executor.scala:506)
	at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1462)
	at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:509)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
	at java.lang.Thread.run(Thread.java:748)
2022-03-25 17:56:13,728 WARN scheduler.TaskSetManager: Lost task 14.0 in stage 7.0 (TID 222) (westgis-134 executor driver): java.lang.NullPointerException
	at $line33.$read$$iw$$iw$$iw$$iw$$iw$$iw$$iw$$iw$$iw$$iw.$anonfun$rdd01$1(<console>:26)
	at scala.collection.Iterator$$anon$10.next(Iterator.scala:461)
	at scala.collection.Iterator$$anon$11.nextCur(Iterator.scala:486)
	at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:492)
	at scala.collection.Iterator$$anon$10.hasNext(Iterator.scala:460)
	at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIteratorForCodegenStage1.agg_doAggregateWithKeys_0$(Unknown Source)
	at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIteratorForCodegenStage1.processNext(Unknown Source)
	at org.apache.spark.sql.execution.BufferedRowIterator.hasNext(BufferedRowIterator.java:43)
	at org.apache.spark.sql.execution.WholeStageCodegenExec$$anon$1.hasNext(WholeStageCodegenExec.scala:759)
	at scala.collection.Iterator$$anon$10.hasNext(Iterator.scala:460)
	at org.apache.spark.shuffle.sort.BypassMergeSortShuffleWriter.write(BypassMergeSortShuffleWriter.java:140)
	at org.apache.spark.shuffle.ShuffleWriteProcessor.write(ShuffleWriteProcessor.scala:59)
	at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:99)
	at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:52)
	at org.apache.spark.scheduler.Task.run(Task.scala:131)
	at org.apache.spark.executor.Executor$TaskRunner.$anonfun$run$3(Executor.scala:506)
	at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1462)
	at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:509)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
	at java.lang.Thread.run(Thread.java:748)

The cause of this error is that a column of the DataFrame contained null values from the earlier processing; removing those rows with na.drop() resolves the error.

df05=df05.select("direction","station_name","order_no","lat","lng").na.drop()
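If dropping whole rows is too aggressive, filling the nulls is an alternative; a sketch over this DataFrame's columns (the replacement values are illustrative):

// Replace nulls instead of dropping the rows (values are illustrative)
val df06 = df05.na.fill(Map("lat" -> 0.0, "lng" -> 0.0, "station_name" -> "unknown"))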

[Solved] CMake Error: Error: generator : Unix Makefiles

The following problem occurred when running cmake -G "MinGW Makefiles":

CMake Error: Error: generator : Unix Makefiles
Does not match the generator used previously: MinGW Makefiles
Either remove the CMakeCache.txt file and CMakeFiles directory or choose a different binary directory.

The cause is that the build directory was configured earlier with a different generator, and the stale cache was not deleted before regenerating.
Solution:
1. Delete the previously generated files inside the build directory, or
2. Create a new folder, cd into it, and re-run cmake, as in the sketch below
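A sketch of the clean re-run (the directory name is illustrative):

rm -rf build                 # or just delete CMakeCache.txt and CMakeFiles/ inside it
mkdir build && cd build
cmake -G "MinGW Makefiles" ..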

[Solved] Error from server (InternalError): error when creating “ingress.yaml”: Internal error occurred: failed calling webhook

When exposing a service through an Ingress, kubectl apply -f ingress.yaml reports the following error:

Error from server (InternalError): error when creating “ingress.yaml”: Internal error occurred: failed calling webhook “validate.nginx.ingress.kubernetes.io”: failed to call webhook: Post “https://ingress-nginx-controller-admission.ingress-nginx.svc:443/networking/v1/ingresses?timeout=10s”: x509: certificate has expired or is not yet valid: current time 2022-03-26T14:45:34Z is before 2022-03-26T20:16:32Z


Solution:
The log shows the admission webhook's certificate is not valid at the current time (the current time is before the start of the certificate's validity window, which points to clock skew or a stale certificate). The workaround is to remove the validating webhook. First list the configurations:

kubectl get validatingwebhookconfigurations

Then delete ingress-nginx-admission:

kubectl delete -A ValidatingWebhookConfiguration ingress-nginx-admission

Then execute

kubectl apply -f ingress.yaml
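Finally, confirm that the Ingress object was created:

kubectl get ingress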