Category Archives: Python

urllib.error.URLError: <urlopen error [Errno -3] Temporary failure in name resolution>

When training a model, you may need to load a pretrained model such as VGG. The code is as follows:

model = torchvision.models.vgg19(pretrained=True)

Training will display:

Downloading: "https://download.pytorch.org/models/vgg19-dcbb9e9d.pth" to /root/.cache/torch/checkpoints/vgg19-dcbb9e9d.pth

Then an error occurred:

socket.gaierror: [Errno -3] Temporary failure in name resolution
and
urllib.error.URLError: <urlopen error [Errno -3] Temporary failure in name resolution>

This happens because the pretrained model cannot be downloaded: the machine cannot resolve the download host, typically because it has no Internet access.
It is therefore more convenient to download the model in advance. On a machine that can reach the Internet, download https://download.pytorch.org/models/vgg19-dcbb9e9d.pth directly.
Then put the downloaded .pth model file under a fixed path, such as

/home/team/torch/models/pre_model/vgg19-dcbb9e9d.pth

Finally, change the code to

model = torchvision.models.vgg19(pretrained=False)
pthfile = r'/home/team/torch/models/pre_model/vgg19-dcbb9e9d.pth'
model.load_state_dict(torch.load(pthfile))
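
If the machine running the code has no GPU, loading a checkpoint that was saved on a GPU can fail as well; a minimal sketch of the same fix with map_location added (my own addition, not part of the original post):

```python
import torch
import torchvision

model = torchvision.models.vgg19(pretrained=False)
pthfile = r'/home/team/torch/models/pre_model/vgg19-dcbb9e9d.pth'
# map_location remaps stored tensors onto the CPU, so the checkpoint
# loads even on a machine without CUDA.
model.load_state_dict(torch.load(pthfile, map_location=torch.device('cpu')))
```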

[Solved] Pygame.error: mixer not initialized & pygame.error: WASAPI can‘t find requested audio endpoint: Could not Find the Element.

When developing games in Python, we inevitably use the pygame module, which includes a sound facility; with it, we can add sound effects to our games.


Problem Description:

To use the sound module, we must initialize the game at the beginning of the main function, so we add the following statement there:

# Game initialization
pygame.init()

However, when I run the program, the game window closes immediately and an error message appears, as follows:

D:\Game\TankWar\venv\Scripts\python.exe D:/Game/TankWar/main.py
pygame 2.0.2 (SDL 2.0.16, Python 3.8.5)
Hello from the pygame community. https://www.pygame.org/contribute.html
Traceback (most recent call last):
  File "D:/Game/TankWar/main.py", line 41, in <module>
    is_quit_game = run_Game(config)
  File "D:/Game/TankWar/main.py", line 22, in run_Game
    sounds[key] = pygame.mixer.Sound(value)
pygame.error: mixer not initialized

Process finished with exit code 1

It says I didn't initialize the mixer. Fair enough; following the error message, let's initialize the mixer separately:

pygame.init()
pygame.mixer.init()

The following error message still appears, and the game window still closes immediately:

D:\Game\TankWar\venv\Scripts\python.exe D:/Game/TankWar/main.py
pygame 2.0.2 (SDL 2.0.16, Python 3.8.5)
Hello from the pygame community. https://www.pygame.org/contribute.html
Traceback (most recent call last):
  File "D:/Game/TankWar/main.py", line 42, in <module>
    is_quit_game = run_Game(config)
  File "D:/Game/TankWar/main.py", line 17, in run_Game
    pygame.mixer.init()
pygame.error: WASAPI can't find requested audio endpoint: Could not find the element.

Process finished with exit code 1

Solution:

After repeated tests, I found that the program sometimes runs normally and sometimes fails with the error above. Eventually I found an article that solved the problem.

It is an audio output device problem.

Because I use a desktop computer with no speakers or headphones connected, there was no audio output device, so pygame did not know where to output the sound (hence the "can't find requested audio endpoint" error). After plugging in an audio device (my headset), the problem was solved.
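
As a defensive measure (a sketch of my own, not from the original program), you can catch the mixer failure so the game still runs, just without sound, when no audio device is present:

```python
import pygame

pygame.init()
try:
    pygame.mixer.init()
    audio_ok = True
except pygame.error:
    # No usable audio output device (e.g. a desktop with nothing plugged
    # in): run the game silently instead of crashing.
    audio_ok = False

# Later, only create Sound objects when the mixer is available:
# if audio_ok:
#     sounds[key] = pygame.mixer.Sound(value)
```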

[Solved] RuntimeError: CUDA error: device-side assert triggered CUDA kernel errors

Error reporting reason:

Most likely the code indexes an array out of bounds somewhere. My guess is the cross-entropy loss function, e.g. a class label outside the valid range of [0, num_classes); that is where it was in my case.

There is a small chance the cause is something else, but the following solution is generic.

Solution:
Run with device = "cpu" first. Because CUDA errors are reported asynchronously, the CPU run gives an exact stack trace, so you can locate where the array goes out of bounds and fix the code. Make sure it is correct before running on the GPU again.
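
A small debugging sketch (the environment variable mentioned below is a standard CUDA/PyTorch debugging aid, not something from the original post):

```python
import torch

# Run on the CPU to get a precise Python stack trace instead of the
# asynchronous "device-side assert triggered" message.
device = torch.device("cpu")  # switch back to "cuda" once the bug is fixed
```

Alternatively, keep the GPU but make kernel launches synchronous so the traceback points at the failing operation: run the script as CUDA_LAUNCH_BLOCKING=1 python train.py.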

[Solved] Python urllib Sending Request Error: urllib.error.URLError: <urlopen error [SSL: CERTIFICATE_VERIFY_FAILED]….>

Error: urllib.error.URLError: <urlopen error [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed (_ssl.c:xxx)>

Solution:
Add the following code before you use urllib.request.Request(url):

import ssl
ssl._create_default_https_context = ssl._create_unverified_context

Problem analysis

This happens because the site being visited uses HTTPS, which requires SSL certificate verification, and calling urllib directly fails local verification (I did not track down the exact reason), so ssl._create_unverified_context is used to turn verification off.

Reproducing the error

When request = urllib.request.Request(url, data) is executed, the error is raised. Uncommenting the two lines above it in the snippet below solves the problem.

import json
import urllib.request


def baidu_search():
    url = "https://www.baidu.com/s?"
    data = {"wd": "AHA"}
    data = json.dumps(data).encode('GBK')
    # import ssl
    # ssl._create_default_https_context = ssl._create_unverified_context  # If these two lines are not added, the next line reports an error
    request = urllib.request.Request(url, data)
    response = urllib.request.urlopen(request)
    content = response.read()
    print(str(content))


if __name__ == '__main__':
    baidu_search()

Error when downloading a built-in dataset of PyTorch: urllib.error.URLError: <urlopen error [SSL: CERTIFICATE_VERIFY_FAILED]>

Error reason:

This is an SSL certificate verification error: it is raised when an HTTPS site is requested and certificate verification fails.

Solution:

Just add the following two lines to the code to skip the certificate check; the download then succeeds.

# Global removal of certificate validation
import ssl
ssl._create_default_https_context = ssl._create_unverified_context
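
For example (a sketch, using MNIST as a stand-in for whichever built-in dataset fails to download), place the two lines before the download is triggered:

```python
# Globally disable certificate verification before the download starts.
import ssl
ssl._create_default_https_context = ssl._create_unverified_context

import torchvision

# Without the lines above, this download would fail with
# CERTIFICATE_VERIFY_FAILED on an affected machine.
train_set = torchvision.datasets.MNIST(root="./data", train=True, download=True)
```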

[Solved] Pytest Error: E ModuleNotFoundError: No module named ‘common

Pytest reports the following error:
_____________________________________________________________________________________ ERROR collecting test_panda_1.py ______________________________________________________________________________________
ImportError while importing test module 'D:\pythonhome\pandabus_API_test_pytest\case\test_panda_1.py'.
Hint: make sure your test modules/packages have valid Python names.
Traceback:
test_panda_1.py:7: in <module>
from common.logger import log
E ModuleNotFoundError: No module named 'common'

Solution:

Method 1: Create a new conftest.py file in the root directory from which you run pytest, with the following contents:

import os
import sys
sys.path.insert(0, os.path.abspath(os.path.join(os.path.dirname(__file__), '.')))

Method 2: add an __init__.py file to each directory under the test-case directory, and run pytest from the project's root directory. This also works, as tested. A possible layout is sketched below.
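
For reference, a hypothetical layout matching the traceback above (the directory names are taken from the error message; the rest is illustrative):

```
pandabus_API_test_pytest/        # run pytest from here
├── conftest.py                  # Method 1: puts this directory on sys.path
├── common/
│   ├── __init__.py              # Method 2: mark the packages explicitly
│   └── logger.py                # provides `log`
└── case/
    ├── __init__.py
    └── test_panda_1.py          # from common.logger import log
```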

Pytorch torch.cuda.FloatTensor Error: RuntimeError: one of the variables needed for gradient computation has…

pytorch 1.9 Error:
RuntimeError: one of the variables needed for gradient computation has been modified by an inplace operation: [torch.cuda.FloatTensor [4, 512, 16, 16]], which is output 0 of ConstantPadNdBackward, is at version 1; expected version 0 instead. Hint: enable anomaly detection to find the operation that failed to compute its gradient, with torch.autograd.set_detect_anomaly(True). #23
At first I thought the cause was the input z = torch.randn(batch_size, 128, 1, 1).to(device), but it was not.
Solution:
Downgrade PyTorch and torchvision (newer versions check in-place modifications more strictly):

pip install torch==1.4.0 torchvision==0.5.0
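
Alternatively, before downgrading, the hint in the error message itself can help locate the offending in-place operation (a debugging sketch):

```python
import torch

# Report which forward operation produced the tensor that was later
# modified in place. Slows training noticeably; use for debugging only.
torch.autograd.set_detect_anomaly(True)
```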

[Solved] Grid Search Error (GridSearchCV): ‘ascii‘ codec can‘t encode characters in position 18-20: ordinal not in r

Grid Search Error: UnicodeEncodeError: ‘ascii’ codec can’t encode characters in position 18-20: ordinal not in range(128)

E:\DLstudy\Scripts\python.exe E:/PycharmProjects/DLstudy/run/train_model.py
[INFO] tuning hyperparameters...
Traceback (most recent call last):
  File "E:\PycharmProjects\DLstudy\run\train_model.py", line 22, in <module>
    model.fit(trainX, trainY)
  File "E:\DLstudy\lib\site-packages\sklearn\model_selection\_search.py", line 820, in fit
    with parallel:
  File "E:\DLstudy\lib\site-packages\joblib\parallel.py", line 725, in __enter__
    self._initialize_backend()
  File "E:\DLstudy\lib\site-packages\joblib\parallel.py", line 735, in _initialize_backend
    n_jobs = self._backend.configure(n_jobs=self.n_jobs, parallel=self,
  File "E:\DLstudy\lib\site-packages\joblib\_parallel_backends.py", line 494, in configure
    self._workers = get_memmapping_executor(
  File "E:\DLstudy\lib\site-packages\joblib\executor.py", line 20, in get_memmapping_executor
    return MemmappingExecutor.get_memmapping_executor(n_jobs, **kwargs)
  File "E:\DLstudy\lib\site-packages\joblib\executor.py", line 42, in get_memmapping_executor
    manager = TemporaryResourcesManager(temp_folder)
  File "E:\DLstudy\lib\site-packages\joblib\_memmapping_reducer.py", line 531, in __init__
    self.set_current_context(context_id)
  File "E:\DLstudy\lib\site-packages\joblib\_memmapping_reducer.py", line 535, in set_current_context
    self.register_new_context(context_id)
  File "E:\DLstudy\lib\site-packages\joblib\_memmapping_reducer.py", line 560, in register_new_context
    self.register_folder_finalizer(new_folder_path, context_id)
  File "E:\DLstudy\lib\site-packages\joblib\_memmapping_reducer.py", line 590, in register_folder_finalizer
    resource_tracker.register(pool_subfolder, "folder")
  File "E:\DLstudy\lib\site-packages\joblib\externals\loky\backend\resource_tracker.py", line 191, in register
    self._send('REGISTER', name, rtype)
  File "E:\DLstudy\lib\site-packages\joblib\externals\loky\backend\resource_tracker.py", line 204, in _send
    msg = '{0}:{1}:{2}\n'.format(cmd, name, rtype).encode('ascii')
UnicodeEncodeError: 'ascii' codec can't encode characters in position 18-20: ordinal not in range(128)

Process finished with exit code 1

Solution:

Original error code:

model = GridSearchCV(LogisticRegression(), params, cv=3, n_jobs=-1)

Delete the n_jobs=-1 parameter:

model = GridSearchCV(LogisticRegression(), params, cv=3)

Looking into it, this parameter sets how many processors to use:

n_jobs : int, default=None
        Number of jobs to run in parallel.
        ``None`` means 1 unless in a :obj:`joblib.parallel_backend` context.
        ``-1`` means using all processors. See :term:`Glossary <n_jobs>`
        for more details.

When n_jobs=-1 is specified, a lower-level step encodes a message with ASCII, and the encoding fails every time, most likely because the temporary-folder path in the message contains non-ASCII characters.
Therefore, if we do not specify this parameter, one processor is used by default and the failing step is never reached.
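
Before editing library code, an alternative worth trying (an assumption on my part: it relies on the failure coming from non-ASCII characters in joblib's temporary folder path) is to point joblib at an ASCII-only temp folder via its documented JOBLIB_TEMP_FOLDER environment variable:

```python
import os

# Illustrative ASCII-only path; create the folder beforehand.
os.environ["JOBLIB_TEMP_FOLDER"] = r"C:\joblib_tmp"
```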

If you really want to specify multiple processors,

then we need to modify the library code at the problem path shown in the error message.
For example, in our error messages:

  File "E:\DLstudy\lib\site-packages\joblib\externals\loky\backend\resource_tracker.py", line 204, in _send
    msg = '{0}:{1}:{2}\n'.format(cmd, name, rtype).encode('ascii')
UnicodeEncodeError: 'ascii' codec can't encode characters in position 18-20: ordinal not in range(128)

Note the location of my error: line 204 of E:\DLstudy\lib\site-packages\joblib\externals\loky\backend\resource_tracker.py, in the _send method.
Source code of the _send function:

  def _send(self, cmd, name, rtype):
        msg = '{0}:{1}:{2}\n'.format(cmd, name, rtype).encode('ascii')
        if len(name) > 512:
            # posix guarantees that writes to a pipe of less than PIPE_BUF
            # bytes are atomic, and that PIPE_BUF >= 512
            raise ValueError('name too long')
        nbytes = os.write(self._fd, msg)
        assert nbytes == len(msg)

Change
msg = '{0}:{1}:{2}\n'.format(cmd, name, rtype).encode('ascii')
to
msg = '{0}:{1}:{2}\n'.format(cmd, name, rtype).encode('utf8')
That is, the encoding is changed to utf-8, and the changed code is as follows.

  def _send(self, cmd, name, rtype):
        msg = '{0}:{1}:{2}\n'.format(cmd, name, rtype).encode('utf8')
        if len(name) > 512:
            # posix guarantees that writes to a pipe of less than PIPE_BUF
            # bytes are atomic, and that PIPE_BUF >= 512
            raise ValueError('name too long')
        nbytes = os.write(self._fd, msg)
        assert nbytes == len(msg)

Then run the code again.
Don't worry, it will still report an error, because we modified only the encoding side, not the decoding side.
The error information is as follows:

     .............(...)
    splitted = line.strip().decode('ascii').split(':')
UnicodeDecodeError: 'ascii' codec can't decode byte 0xe5 in position 18: ordinal not in range(128)
Traceback (most recent call last):
  File "E:\DLstudy\lib\site-packages\joblib\externals\loky\backend\resource_tracker.py", line 253, in main
    splitted = line.strip().decode('ascii').split(':')
UnicodeDecodeError: 'ascii' codec can't decode byte 0xe5 in position 18: ordinal not in range(128)

Similarly, find the error path in the error message: this time the failure is at line 253 of E:\DLstudy\lib\site-packages\joblib\externals\loky\backend\resource_tracker.py. We find the corresponding location:

......(...)
       with open(fd, 'rb') as f:
            while True:
                line = f.readline()
                if line == b'':  # EOF
                    break
                try:
                    splitted = line.strip().decode('ascii').split(':')
                    # name can potentially contain separator symbols (for
                    # instance folders on Windows)
                    cmd, name, rtype = (
                        splitted[0], ':'.join(splitted[1:-1]), splitted[-1])
......(...)

We just need to replace

splitted = line.strip().decode('ascii').split(':')

with

splitted = line.strip().decode('utf8').split(':')

Run the file again, and it succeeds.

[Solved] Pycharm from xx import xx Error: Unresolved reference

The problem: imported classes cannot be resolved, even though the classes really are in the project.

The reason: the import fails because the paths do not correspond; by default, PyCharm treats the project root as the source directory.

Solution:

In PyCharm's project structure settings, find the corresponding project item (here, searchtest) and mark the folder as "Sources"; finally, be sure to click "Apply".

Set the folders where the packages live as sources, so that imported modules can be found using these source folders as root paths; that is, PyCharm looks for the imported things inside these source folders.

Or, if you prefer not to change IDE settings, extend sys.path at runtime; a sketch of my own (the relative path is illustrative):
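
```python
# Put the folder that contains the packages on sys.path before the imports
# that otherwise fail; adjust ".." to wherever your packages live.
import os
import sys

sys.path.insert(0, os.path.abspath(os.path.join(os.path.dirname(__file__), "..")))
```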

[Solved] conda env create -f environment.yml Error under Win10

The numbered error descriptions below correspond to the solutions with the same numbers. Everyone hits different problems when installing, so read as needed. (Of course, I stumbled all the way through, so if you are installing for the first time I still suggest reading the error descriptions first, then jumping to what you need.)

Error descriptions

1. The CMD console keeps printing warning messages.

2. The pinned version matplotlib==2.2.2 cannot be found (if another package's pinned version cannot be found, handle it the same way), and pandas/numpy fail to build.

Solution

1. The pip entry may be missing from the environment.yml file; add pip in the corresponding position of the file (see the sketch after this list).

2. Delete the version number that cannot be found, leaving the package unpinned.

3. For the "failed to build pandas numpy" error: to be honest, I ignored it, and it does not seem to affect my subsequent work. For example, I can still activate the gluon environment and open Jupyter Notebook in it. (If there is any subsequent impact, I will continue to solve it and update this post.)
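
A sketch of the relevant part of environment.yml (the environment name and package list are illustrative, not the original file):

```yaml
name: gluon
dependencies:
  - python=3.6
  - pip            # the entry that was missing
  - pip:
    - matplotlib   # version pin removed (was matplotlib==2.2.2)
    - pandas
```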


Reference link:

1. After installing Miniconda3, running conda env create -f environment.yml reports an error: miniconda installs numpy but Python can't import it

A bonus tip: the command to delete the gluon environment is as follows:

conda remove -n gluon --all  

[Solved] ERROR PythonRunner: Python worker exited unexpectedly (crashed)

Some time ago, a reader sent me a private message about an error when running in PyCharm: ERROR PythonRunner: Python worker exited unexpectedly (crashed).

A test run of print(input_rdd.first()) prints fine, but the action print(input_rdd.count()) reports the error:

print(input_rdd.count())

The message ERROR PythonRunner: Python worker exited unexpectedly (crashed) means exactly that: the Python worker process crashed. The full log:

21/10/24 10:24:48 ERROR PythonRunner: Python worker exited unexpectedly (crashed)
java.net.SocketException: Connection reset by peer: socket write error
	at java.net.SocketOutputStream.socketWrite0(Native Method)
	at java.net.SocketOutputStream.socketWrite(SocketOutputStream.java:111)
	at java.net.SocketOutputStream.write(SocketOutputStream.java:155)
	at java.io.BufferedOutputStream.flushBuffer(BufferedOutputStream.java:82)
	at java.io.BufferedOutputStream.write(BufferedOutputStream.java:95)
	at java.io.DataOutputStream.writeInt(DataOutputStream.java:199)
	at org.apache.spark.api.python.PythonRDD$.writeUTF(PythonRDD.scala:476)
	at org.apache.spark.api.python.PythonRDD$.write$1(PythonRDD.scala:297)
	at org.apache.spark.api.python.PythonRDD$.$anonfun$writeIteratorToStream$1(PythonRDD.scala:307)
	at org.apache.spark.api.python.PythonRDD$.$anonfun$writeIteratorToStream$1$adapted(PythonRDD.scala:307)
	at scala.collection.Iterator.foreach(Iterator.scala:941)
	at scala.collection.Iterator.foreach$(Iterator.scala:941)
	at scala.collection.AbstractIterator.foreach(Iterator.scala:1429)
	at org.apache.spark.api.python.PythonRDD$.writeIteratorToStream(PythonRDD.scala:307)
	at org.apache.spark.api.python.PythonRunner$$anon$2.writeIteratorToStream(PythonRunner.scala:621)
	at org.apache.spark.api.python.BasePythonRunner$WriterThread.$anonfun$run$1(PythonRunner.scala:397)
	at org.apache.spark.util.Utils$.logUncaughtExceptions(Utils.scala:1996)
	at org.apache.spark.api.python.BasePythonRunner$WriterThread.run(PythonRunner.scala:232)
21/10/24 10:24:48 ERROR PythonRunner: This may have been caused by a prior exception:
java.net.SocketException: Connection reset by peer: socket write error
	at java.net.SocketOutputStream.socketWrite0(Native Method)
	at java.net.SocketOutputStream.socketWrite(SocketOutputStream.java:111)
	at java.net.SocketOutputStream.write(SocketOutputStream.java:155)
	at java.io.BufferedOutputStream.flushBuffer(BufferedOutputStream.java:82)
	at java.io.BufferedOutputStream.write(BufferedOutputStream.java:95)
	at java.io.DataOutputStream.writeInt(DataOutputStream.java:199)
	at org.apache.spark.api.python.PythonRDD$.writeUTF(PythonRDD.scala:476)
	at org.apache.spark.api.python.PythonRDD$.write$1(PythonRDD.scala:297)
	at org.apache.spark.api.python.PythonRDD$.$anonfun$writeIteratorToStream$1(PythonRDD.scala:307)
	at org.apache.spark.api.python.PythonRDD$.$anonfun$writeIteratorToStream$1$adapted(PythonRDD.scala:307)
	at scala.collection.Iterator.foreach(Iterator.scala:941)
	at scala.collection.Iterator.foreach$(Iterator.scala:941)
	at scala.collection.AbstractIterator.foreach(Iterator.scala:1429)
	at org.apache.spark.api.python.PythonRDD$.writeIteratorToStream(PythonRDD.scala:307)
	at org.apache.spark.api.python.PythonRunner$$anon$2.writeIteratorToStream(PythonRunner.scala:621)
	at org.apache.spark.api.python.BasePythonRunner$WriterThread.$anonfun$run$1(PythonRunner.scala:397)
	at org.apache.spark.util.Utils$.logUncaughtExceptions(Utils.scala:1996)
	at org.apache.spark.api.python.BasePythonRunner$WriterThread.run(PythonRunner.scala:232)
21/10/24 10:24:48 ERROR Executor: Exception in task 0.0 in stage 0.0 (TID 0)
java.net.SocketException: Connection reset by peer: socket write error
	at java.net.SocketOutputStream.socketWrite0(Native Method)
	at java.net.SocketOutputStream.socketWrite(SocketOutputStream.java:111)
	at java.net.SocketOutputStream.write(SocketOutputStream.java:155)
	at java.io.BufferedOutputStream.flushBuffer(BufferedOutputStream.java:82)
	at java.io.BufferedOutputStream.write(BufferedOutputStream.java:95)
	at java.io.DataOutputStream.writeInt(DataOutputStream.java:199)
	at org.apache.spark.api.python.PythonRDD$.writeUTF(PythonRDD.scala:476)
	at org.apache.spark.api.python.PythonRDD$.write$1(PythonRDD.scala:297)
	at org.apache.spark.api.python.PythonRDD$.$anonfun$writeIteratorToStream$1(PythonRDD.scala:307)
	at org.apache.spark.api.python.PythonRDD$.$anonfun$writeIteratorToStream$1$adapted(PythonRDD.scala:307)
	at scala.collection.Iterator.foreach(Iterator.scala:941)
	at scala.collection.Iterator.foreach$(Iterator.scala:941)
	at scala.collection.AbstractIterator.foreach(Iterator.scala:1429)
	at org.apache.spark.api.python.PythonRDD$.writeIteratorToStream(PythonRDD.scala:307)
	at org.apache.spark.api.python.PythonRunner$$anon$2.writeIteratorToStream(PythonRunner.scala:621)
	at org.apache.spark.api.python.BasePythonRunner$WriterThread.$anonfun$run$1(PythonRunner.scala:397)
	at org.apache.spark.util.Utils$.logUncaughtExceptions(Utils.scala:1996)
	at org.apache.spark.api.python.BasePythonRunner$WriterThread.run(PythonRunner.scala:232)
21/10/24 10:24:48 WARN TaskSetManager: Lost task 0.0 in stage 0.0 (TID 0) (LAPTOP-RK2V2UMB executor driver): java.net.SocketException: Connection reset by peer: socket write error
	at java.net.SocketOutputStream.socketWrite0(Native Method)
	at java.net.SocketOutputStream.socketWrite(SocketOutputStream.java:111)
	at java.net.SocketOutputStream.write(SocketOutputStream.java:155)
	at java.io.BufferedOutputStream.flushBuffer(BufferedOutputStream.java:82)
	at java.io.BufferedOutputStream.write(BufferedOutputStream.java:95)
	at java.io.DataOutputStream.writeInt(DataOutputStream.java:199)
	at org.apache.spark.api.python.PythonRDD$.writeUTF(PythonRDD.scala:476)
	at org.apache.spark.api.python.PythonRDD$.write$1(PythonRDD.scala:297)
	at org.apache.spark.api.python.PythonRDD$.$anonfun$writeIteratorToStream$1(PythonRDD.scala:307)
	at org.apache.spark.api.python.PythonRDD$.$anonfun$writeIteratorToStream$1$adapted(PythonRDD.scala:307)
	at scala.collection.Iterator.foreach(Iterator.scala:941)
	at scala.collection.Iterator.foreach$(Iterator.scala:941)
	at scala.collection.AbstractIterator.foreach(Iterator.scala:1429)
	at org.apache.spark.api.python.PythonRDD$.writeIteratorToStream(PythonRDD.scala:307)
	at org.apache.spark.api.python.PythonRunner$$anon$2.writeIteratorToStream(PythonRunner.scala:621)
	at org.apache.spark.api.python.BasePythonRunner$WriterThread.$anonfun$run$1(PythonRunner.scala:397)
	at org.apache.spark.util.Utils$.logUncaughtExceptions(Utils.scala:1996)
	at org.apache.spark.api.python.BasePythonRunner$WriterThread.run(PythonRunner.scala:232)

21/10/24 10:24:48 ERROR TaskSetManager: Task 0 in stage 0.0 failed 1 times; aborting job
Traceback (most recent call last):
  File "D:/Code/pycode/exercise/pyspark-study/pyspark-learning/pyspark-day04/main/01_web_analysis.py", line 28, in <module>
    print(input_rdd.first())
  File "D:\opt\Anaconda3-2020.11\lib\site-packages\pyspark\rdd.py", line 1586, in first
    rs = self.take(1)
  File "D:\opt\Anaconda3-2020.11\lib\site-packages\pyspark\rdd.py", line 1566, in take
    res = self.context.runJob(self, takeUpToNumLeft, p)
  File "D:\opt\Anaconda3-2020.11\lib\site-packages\pyspark\context.py", line 1233, in runJob
    sock_info = self._jvm.PythonRDD.runJob(self._jsc.sc(), mappedRDD._jrdd, partitions)
  File "D:\opt\Anaconda3-2020.11\lib\site-packages\py4j\java_gateway.py", line 1304, in __call__
    return_value = get_return_value(
  File "D:\opt\Anaconda3-2020.11\lib\site-packages\py4j\protocol.py", line 326, in get_return_value
    raise Py4JJavaError(
py4j.protocol.Py4JJavaError: An error occurred while calling z:org.apache.spark.api.python.PythonRDD.runJob.
: org.apache.spark.SparkException: Job aborted due to stage failure: Task 0 in stage 0.0 failed 1 times, most recent failure: Lost task 0.0 in stage 0.0 (TID 0) (LAPTOP-RK2V2UMB executor driver): java.net.SocketException: Connection reset by peer: socket write error
	at java.net.SocketOutputStream.socketWrite0(Native Method)
	at java.net.SocketOutputStream.socketWrite(SocketOutputStream.java:111)
	at java.net.SocketOutputStream.write(SocketOutputStream.java:155)
	at java.io.BufferedOutputStream.flushBuffer(BufferedOutputStream.java:82)
	at java.io.BufferedOutputStream.write(BufferedOutputStream.java:95)
	at java.io.DataOutputStream.writeInt(DataOutputStream.java:199)
	at org.apache.spark.api.python.PythonRDD$.writeUTF(PythonRDD.scala:476)
	at org.apache.spark.api.python.PythonRDD$.write$1(PythonRDD.scala:297)
	at org.apache.spark.api.python.PythonRDD$.$anonfun$writeIteratorToStream$1(PythonRDD.scala:307)
	at org.apache.spark.api.python.PythonRDD$.$anonfun$writeIteratorToStream$1$adapted(PythonRDD.scala:307)
	at scala.collection.Iterator.foreach(Iterator.scala:941)
	at scala.collection.Iterator.foreach$(Iterator.scala:941)
	at scala.collection.AbstractIterator.foreach(Iterator.scala:1429)
	at org.apache.spark.api.python.PythonRDD$.writeIteratorToStream(PythonRDD.scala:307)
	at org.apache.spark.api.python.PythonRunner$$anon$2.writeIteratorToStream(PythonRunner.scala:621)
	at org.apache.spark.api.python.BasePythonRunner$WriterThread.$anonfun$run$1(PythonRunner.scala:397)
	at org.apache.spark.util.Utils$.logUncaughtExceptions(Utils.scala:1996)
	at org.apache.spark.api.python.BasePythonRunner$WriterThread.run(PythonRunner.scala:232)

Driver stacktrace:
	at org.apache.spark.scheduler.DAGScheduler.failJobAndIndependentStages(DAGScheduler.scala:2258)
	at org.apache.spark.scheduler.DAGScheduler.$anonfun$abortStage$2(DAGScheduler.scala:2207)
	at org.apache.spark.scheduler.DAGScheduler.$anonfun$abortStage$2$adapted(DAGScheduler.scala:2206)
	at scala.collection.mutable.ResizableArray.foreach(ResizableArray.scala:62)
	at scala.collection.mutable.ResizableArray.foreach$(ResizableArray.scala:55)
	at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:49)
	at org.apache.spark.scheduler.DAGScheduler.abortStage(DAGScheduler.scala:2206)
	at org.apache.spark.scheduler.DAGScheduler.$anonfun$handleTaskSetFailed$1(DAGScheduler.scala:1079)
	at org.apache.spark.scheduler.DAGScheduler.$anonfun$handleTaskSetFailed$1$adapted(DAGScheduler.scala:1079)
	at scala.Option.foreach(Option.scala:407)
	at org.apache.spark.scheduler.DAGScheduler.handleTaskSetFailed(DAGScheduler.scala:1079)
	at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.doOnReceive(DAGScheduler.scala:2445)
	at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:2387)
	at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:2376)
	at org.apache.spark.util.EventLoop$$anon$1.run(EventLoop.scala:49)
	at org.apache.spark.scheduler.DAGScheduler.runJob(DAGScheduler.scala:868)
	at org.apache.spark.SparkContext.runJob(SparkContext.scala:2196)
	at org.apache.spark.SparkContext.runJob(SparkContext.scala:2217)
	at org.apache.spark.SparkContext.runJob(SparkContext.scala:2236)
	at org.apache.spark.api.python.PythonRDD$.runJob(PythonRDD.scala:166)
	at org.apache.spark.api.python.PythonRDD.runJob(PythonRDD.scala)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:498)
	at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:244)
	at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:357)
	at py4j.Gateway.invoke(Gateway.java:282)
	at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:132)
	at py4j.commands.CallCommand.execute(CallCommand.java:79)
	at py4j.GatewayConnection.run(GatewayConnection.java:238)
	at java.lang.Thread.run(Thread.java:748)
Caused by: java.net.SocketException: Connection reset by peer: socket write error
	at java.net.SocketOutputStream.socketWrite0(Native Method)
	at java.net.SocketOutputStream.socketWrite(SocketOutputStream.java:111)
	at java.net.SocketOutputStream.write(SocketOutputStream.java:155)
	at java.io.BufferedOutputStream.flushBuffer(BufferedOutputStream.java:82)
	at java.io.BufferedOutputStream.write(BufferedOutputStream.java:95)
	at java.io.DataOutputStream.writeInt(DataOutputStream.java:199)
	at org.apache.spark.api.python.PythonRDD$.writeUTF(PythonRDD.scala:476)
	at org.apache.spark.api.python.PythonRDD$.write$1(PythonRDD.scala:297)
	at org.apache.spark.api.python.PythonRDD$.$anonfun$writeIteratorToStream$1(PythonRDD.scala:307)
	at org.apache.spark.api.python.PythonRDD$.$anonfun$writeIteratorToStream$1$adapted(PythonRDD.scala:307)
	at scala.collection.Iterator.foreach(Iterator.scala:941)
	at scala.collection.Iterator.foreach$(Iterator.scala:941)
	at scala.collection.AbstractIterator.foreach(Iterator.scala:1429)
	at org.apache.spark.api.python.PythonRDD$.writeIteratorToStream(PythonRDD.scala:307)
	at org.apache.spark.api.python.PythonRunner$$anon$2.writeIteratorToStream(PythonRunner.scala:621)
	at org.apache.spark.api.python.BasePythonRunner$WriterThread.$anonfun$run$1(PythonRunner.scala:397)
	at org.apache.spark.util.Utils$.logUncaughtExceptions(Utils.scala:1996)
	at org.apache.spark.api.python.BasePythonRunner$WriterThread.run(PythonRunner.scala:232)


Process finished with exit code 1

I looked into this problem online, and it can be caused by many different situations. In the case I helped with, Spark was running locally on a Windows machine, the data volume was somewhat large, and running inside PyCharm can then produce this error.

Without further ado, the solution that worked for this reader is very simple: close PyCharm, open it again, and rerun. Note that if it does not work the first time, close and rerun once more.

npm install Error: gyp ERR! stack Error: Could not find any Python installation to use

When installing a Vue project's dependencies with npm install, the following error is reported:

gyp verb command rebuild []
gyp verb command clean []
gyp verb clean removing "build" directory
gyp verb command configure []
gyp verb find Python Python is not set from command line or npm configuration
gyp verb find Python Python is not set from environment variable PYTHON
gyp verb find Python checking if "python3" can be used
gyp verb find Python - executing "python3" to get executable path
gyp verb find Python - "python3" is not in PATH or produced an error
gyp verb find Python checking if "python" can be used
gyp verb find Python - executing "python" to get executable path
gyp verb find Python - "python" is not in PATH or produced an error
gyp verb find Python checking if "python2" can be used
gyp verb find Python - executing "python2" to get executable path
gyp verb find Python - "python2" is not in PATH or produced an error
gyp verb find Python checking if Python is C:\Python37\python.exe
gyp verb find Python - executing "C:\Python37\python.exe" to get version
gyp verb find Python - "C:\Python37\python.exe" could not be run
gyp verb find Python checking if Python is C:\Python27\python.exe
gyp verb find Python - executing "C:\Python27\python.exe" to get version
gyp verb find Python - "C:\Python27\python.exe" could not be run
gyp verb find Python checking if the py launcher can be used to find Python
gyp verb find Python - executing "py.exe" to get Python executable path
gyp verb find Python - "py.exe" is not in PATH or produced an error
gyp ERR! find Python
gyp ERR! find Python Python is not set from command line or npm configuration
gyp ERR! find Python Python is not set from environment variable PYTHON
gyp ERR! find Python checking if "python3" can be used
gyp ERR! find Python - "python3" is not in PATH or produced an error
gyp ERR! find Python checking if "python" can be used
gyp ERR! find Python - "python" is not in PATH or produced an error
gyp ERR! find Python checking if "python2" can be used
gyp ERR! find Python - "python2" is not in PATH or produced an error
gyp ERR! find Python checking if Python is C:\Python37\python.exe
gyp ERR! find Python - "C:\Python37\python.exe" could not be run
gyp ERR! find Python checking if Python is C:\Python27\python.exe
gyp ERR! find Python - "C:\Python27\python.exe" could not be run
gyp ERR! find Python checking if the py launcher can be used to find Python
gyp ERR! find Python - "py.exe" is not in PATH or produced an error
gyp ERR! find Python
gyp ERR! find Python **********************************************************
gyp ERR! find Python You need to install the latest version of Python.
gyp ERR! find Python Node-gyp should be able to find and use Python. If not,
gyp ERR! find Python you can try one of the following options:
gyp ERR! find Python - Use the switch --python="C:\Path\To\python.exe"
gyp ERR! find Python   (accepted by both node-gyp and npm)
gyp ERR! find Python - Set the environment variable PYTHON
gyp ERR! find Python - Set the npm configuration variable python:
gyp ERR! find Python   npm config set python "C:\Path\To\python.exe"
gyp ERR! find Python For more information consult the documentation at:
gyp ERR! find Python https://github.com/nodejs/node-gyp#installation
gyp ERR! find Python **********************************************************
gyp ERR! find Python
gyp ERR! configure error
gyp ERR! stack Error: Could not find any Python installation to use
gyp ERR! stack     at PythonFinder.fail (E:\project\DBApi-master\dbapi-ui\node_modules\node-gyp\lib\find-python.js:302:47)
gyp ERR! stack     at PythonFinder.runChecks (E:\project\DBApi-master\dbapi-ui\node_modules\node-gyp\lib\find-python.js:136:21)
gyp ERR! stack     at PythonFinder.<anonymous> (E:\project\DBApi-master\dbapi-ui\node_modules\node-gyp\lib\find-python.js:200:18)
gyp ERR! stack     at PythonFinder.execFileCallback (E:\project\DBApi-master\dbapi-ui\node_modules\node-gyp\lib\find-python.js:266:16)
gyp ERR! stack     at exithandler (child_process.js:390:5)
gyp ERR! stack     at ChildProcess.errorhandler (child_process.js:402:5)
gyp ERR! stack     at ChildProcess.emit (events.js:400:28)
gyp ERR! stack     at Process.ChildProcess._handle.onexit (internal/child_process.js:280:12)     
gyp ERR! stack     at onErrorNT (internal/child_process.js:469:16)
gyp ERR! stack     at processTicksAndRejections (internal/process/task_queues.js:82:21)
gyp ERR! System Windows_NT 10.0.19042
gyp ERR! command "D:\\nodejs\\node.exe" "E:\\project\\DBApi-master\\dbapi-ui\\node_modules\\node-gyp\\bin\\node-gyp.js" "rebuild" "--verbose" "--libsass_ext=" "--libsass_cflags=" "--libsass_ldflags=" "--libsass_library="
gyp ERR! cwd E:\project\DBApi-master\dbapi-ui\node_modules\node-sass
gyp ERR! node -v v14.18.1
gyp ERR! node-gyp -v v7.1.2
gyp ERR! not ok
Build failed with error code: 1
npm WARN optional SKIPPING OPTIONAL DEPENDENCY: fsevents@~2.3.2 (node_modules\chokidar\node_modules\fsevents):
npm WARN notsup SKIPPING OPTIONAL DEPENDENCY: Unsupported platform for [email protected]: wanted {"os":"darwin","arch":"any"} (current: {"os":"win32","arch":"x64"})
npm WARN optional SKIPPING OPTIONAL DEPENDENCY: fsevents@^1.2.7 (node_modules\watchpack-chokidar2\node_modules\chokidar\node_modules\fsevents):
npm WARN notsup SKIPPING OPTIONAL DEPENDENCY: Unsupported platform for [email protected]: wanted {"os":"darwin","arch":"any"} (current: {"os":"win32","arch":"x64"})
npm WARN optional SKIPPING OPTIONAL DEPENDENCY: fsevents@^1.2.7 (node_modules\webpack-dev-server\node_modules\chokidar\node_modules\fsevents):
npm WARN notsup SKIPPING OPTIONAL DEPENDENCY: Unsupported platform for [email protected]: wanted {"os":"darwin","arch":"any"} (current: {"os":"win32","arch":"x64"})

npm ERR! code ELIFECYCLE
npm ERR! errno 1
npm ERR! [email protected] postinstall: `node scripts/build.js`
npm ERR! Exit status 1
npm ERR!
npm ERR! Failed at the [email protected] postinstall script.
npm ERR! This is probably not a problem with npm. There is likely additional logging output above.

npm ERR! A complete log of this run can be found in:
npm ERR!     C:\Users\WRD\AppData\Roaming\npm-cache\_logs\2021-10-20T05_42_34_767Z-debug.log 

This is due to a missing Python installation: node-gyp needs Python to build native modules such as node-sass. Download and install Python from the official website, delete the node_modules dependencies, and re-run npm install.
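
A sketch of the steps on Windows (the Python path is illustrative; the npm config command is the one suggested in the log above):

```
:: point npm/node-gyp at the freshly installed Python
npm config set python "C:\Python310\python.exe"

:: delete the old dependencies and reinstall
rd /s /q node_modules
npm install
```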