Tag Archives: python

[Solved] ERROR PythonRunner: Python worker exited unexpectedly (crashed)

Some time ago, a reader wrote to me about an error they hit when running PySpark code in PyCharm: ERROR PythonRunner: Python worker exited unexpectedly (crashed).

In their test, print(input_rdd.first()) printed fine, but calling the action count() reported the error:

print(input_rdd.count())
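For context, here is a minimal sketch of the failing pattern (the file path, master setting, and app name are hypothetical, reconstructed from the traceback below). first() pulls a single record from one partition, while count() pushes every partition through the Python worker, which is why only the latter crashed:

from pyspark import SparkConf, SparkContext

# Hypothetical local setup mirroring the reader's script
conf = SparkConf().setMaster("local[2]").setAppName("web_analysis")
sc = SparkContext(conf=conf)

input_rdd = sc.textFile("data/access.log")  # hypothetical input file

print(input_rdd.first())  # reads one record from one partition -- worked
print(input_rdd.count())  # scans every partition -- crashed the worker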

ERROR PythonRunner: Python worker exited unexpectedly (crashed) means just that: the Python worker process serving the task died mid-job. The full log:

21/10/24 10:24:48 ERROR PythonRunner: Python worker exited unexpectedly (crashed)
java.net.SocketException: Connection reset by peer: socket write error
	at java.net.SocketOutputStream.socketWrite0(Native Method)
	at java.net.SocketOutputStream.socketWrite(SocketOutputStream.java:111)
	at java.net.SocketOutputStream.write(SocketOutputStream.java:155)
	at java.io.BufferedOutputStream.flushBuffer(BufferedOutputStream.java:82)
	at java.io.BufferedOutputStream.write(BufferedOutputStream.java:95)
	at java.io.DataOutputStream.writeInt(DataOutputStream.java:199)
	at org.apache.spark.api.python.PythonRDD$.writeUTF(PythonRDD.scala:476)
	at org.apache.spark.api.python.PythonRDD$.write$1(PythonRDD.scala:297)
	at org.apache.spark.api.python.PythonRDD$.$anonfun$writeIteratorToStream$1(PythonRDD.scala:307)
	at org.apache.spark.api.python.PythonRDD$.$anonfun$writeIteratorToStream$1$adapted(PythonRDD.scala:307)
	at scala.collection.Iterator.foreach(Iterator.scala:941)
	at scala.collection.Iterator.foreach$(Iterator.scala:941)
	at scala.collection.AbstractIterator.foreach(Iterator.scala:1429)
	at org.apache.spark.api.python.PythonRDD$.writeIteratorToStream(PythonRDD.scala:307)
	at org.apache.spark.api.python.PythonRunner$$anon$2.writeIteratorToStream(PythonRunner.scala:621)
	at org.apache.spark.api.python.BasePythonRunner$WriterThread.$anonfun$run$1(PythonRunner.scala:397)
	at org.apache.spark.util.Utils$.logUncaughtExceptions(Utils.scala:1996)
	at org.apache.spark.api.python.BasePythonRunner$WriterThread.run(PythonRunner.scala:232)
21/10/24 10:24:48 ERROR PythonRunner: This may have been caused by a prior exception:
java.net.SocketException: Connection reset by peer: socket write error
	at java.net.SocketOutputStream.socketWrite0(Native Method)
	at java.net.SocketOutputStream.socketWrite(SocketOutputStream.java:111)
	at java.net.SocketOutputStream.write(SocketOutputStream.java:155)
	at java.io.BufferedOutputStream.flushBuffer(BufferedOutputStream.java:82)
	at java.io.BufferedOutputStream.write(BufferedOutputStream.java:95)
	at java.io.DataOutputStream.writeInt(DataOutputStream.java:199)
	at org.apache.spark.api.python.PythonRDD$.writeUTF(PythonRDD.scala:476)
	at org.apache.spark.api.python.PythonRDD$.write$1(PythonRDD.scala:297)
	at org.apache.spark.api.python.PythonRDD$.$anonfun$writeIteratorToStream$1(PythonRDD.scala:307)
	at org.apache.spark.api.python.PythonRDD$.$anonfun$writeIteratorToStream$1$adapted(PythonRDD.scala:307)
	at scala.collection.Iterator.foreach(Iterator.scala:941)
	at scala.collection.Iterator.foreach$(Iterator.scala:941)
	at scala.collection.AbstractIterator.foreach(Iterator.scala:1429)
	at org.apache.spark.api.python.PythonRDD$.writeIteratorToStream(PythonRDD.scala:307)
	at org.apache.spark.api.python.PythonRunner$$anon$2.writeIteratorToStream(PythonRunner.scala:621)
	at org.apache.spark.api.python.BasePythonRunner$WriterThread.$anonfun$run$1(PythonRunner.scala:397)
	at org.apache.spark.util.Utils$.logUncaughtExceptions(Utils.scala:1996)
	at org.apache.spark.api.python.BasePythonRunner$WriterThread.run(PythonRunner.scala:232)
21/10/24 10:24:48 ERROR Executor: Exception in task 0.0 in stage 0.0 (TID 0)
java.net.SocketException: Connection reset by peer: socket write error
	at java.net.SocketOutputStream.socketWrite0(Native Method)
	at java.net.SocketOutputStream.socketWrite(SocketOutputStream.java:111)
	at java.net.SocketOutputStream.write(SocketOutputStream.java:155)
	at java.io.BufferedOutputStream.flushBuffer(BufferedOutputStream.java:82)
	at java.io.BufferedOutputStream.write(BufferedOutputStream.java:95)
	at java.io.DataOutputStream.writeInt(DataOutputStream.java:199)
	at org.apache.spark.api.python.PythonRDD$.writeUTF(PythonRDD.scala:476)
	at org.apache.spark.api.python.PythonRDD$.write$1(PythonRDD.scala:297)
	at org.apache.spark.api.python.PythonRDD$.$anonfun$writeIteratorToStream$1(PythonRDD.scala:307)
	at org.apache.spark.api.python.PythonRDD$.$anonfun$writeIteratorToStream$1$adapted(PythonRDD.scala:307)
	at scala.collection.Iterator.foreach(Iterator.scala:941)
	at scala.collection.Iterator.foreach$(Iterator.scala:941)
	at scala.collection.AbstractIterator.foreach(Iterator.scala:1429)
	at org.apache.spark.api.python.PythonRDD$.writeIteratorToStream(PythonRDD.scala:307)
	at org.apache.spark.api.python.PythonRunner$$anon$2.writeIteratorToStream(PythonRunner.scala:621)
	at org.apache.spark.api.python.BasePythonRunner$WriterThread.$anonfun$run$1(PythonRunner.scala:397)
	at org.apache.spark.util.Utils$.logUncaughtExceptions(Utils.scala:1996)
	at org.apache.spark.api.python.BasePythonRunner$WriterThread.run(PythonRunner.scala:232)
21/10/24 10:24:48 WARN TaskSetManager: Lost task 0.0 in stage 0.0 (TID 0) (LAPTOP-RK2V2UMB executor driver): java.net.SocketException: Connection reset by peer: socket write error
	at java.net.SocketOutputStream.socketWrite0(Native Method)
	at java.net.SocketOutputStream.socketWrite(SocketOutputStream.java:111)
	at java.net.SocketOutputStream.write(SocketOutputStream.java:155)
	at java.io.BufferedOutputStream.flushBuffer(BufferedOutputStream.java:82)
	at java.io.BufferedOutputStream.write(BufferedOutputStream.java:95)
	at java.io.DataOutputStream.writeInt(DataOutputStream.java:199)
	at org.apache.spark.api.python.PythonRDD$.writeUTF(PythonRDD.scala:476)
	at org.apache.spark.api.python.PythonRDD$.write$1(PythonRDD.scala:297)
	at org.apache.spark.api.python.PythonRDD$.$anonfun$writeIteratorToStream$1(PythonRDD.scala:307)
	at org.apache.spark.api.python.PythonRDD$.$anonfun$writeIteratorToStream$1$adapted(PythonRDD.scala:307)
	at scala.collection.Iterator.foreach(Iterator.scala:941)
	at scala.collection.Iterator.foreach$(Iterator.scala:941)
	at scala.collection.AbstractIterator.foreach(Iterator.scala:1429)
	at org.apache.spark.api.python.PythonRDD$.writeIteratorToStream(PythonRDD.scala:307)
	at org.apache.spark.api.python.PythonRunner$$anon$2.writeIteratorToStream(PythonRunner.scala:621)
	at org.apache.spark.api.python.BasePythonRunner$WriterThread.$anonfun$run$1(PythonRunner.scala:397)
	at org.apache.spark.util.Utils$.logUncaughtExceptions(Utils.scala:1996)
	at org.apache.spark.api.python.BasePythonRunner$WriterThread.run(PythonRunner.scala:232)

21/10/24 10:24:48 ERROR TaskSetManager: Task 0 in stage 0.0 failed 1 times; aborting job
Traceback (most recent call last):
  File "D:/Code/pycode/exercise/pyspark-study/pyspark-learning/pyspark-day04/main/01_web_analysis.py", line 28, in <module>
    print(input_rdd.first())
  File "D:\opt\Anaconda3-2020.11\lib\site-packages\pyspark\rdd.py", line 1586, in first
    rs = self.take(1)
  File "D:\opt\Anaconda3-2020.11\lib\site-packages\pyspark\rdd.py", line 1566, in take
    res = self.context.runJob(self, takeUpToNumLeft, p)
  File "D:\opt\Anaconda3-2020.11\lib\site-packages\pyspark\context.py", line 1233, in runJob
    sock_info = self._jvm.PythonRDD.runJob(self._jsc.sc(), mappedRDD._jrdd, partitions)
  File "D:\opt\Anaconda3-2020.11\lib\site-packages\py4j\java_gateway.py", line 1304, in __call__
    return_value = get_return_value(
  File "D:\opt\Anaconda3-2020.11\lib\site-packages\py4j\protocol.py", line 326, in get_return_value
    raise Py4JJavaError(
py4j.protocol.Py4JJavaError: An error occurred while calling z:org.apache.spark.api.python.PythonRDD.runJob.
: org.apache.spark.SparkException: Job aborted due to stage failure: Task 0 in stage 0.0 failed 1 times, most recent failure: Lost task 0.0 in stage 0.0 (TID 0) (LAPTOP-RK2V2UMB executor driver): java.net.SocketException: Connection reset by peer: socket write error
	at java.net.SocketOutputStream.socketWrite0(Native Method)
	at java.net.SocketOutputStream.socketWrite(SocketOutputStream.java:111)
	at java.net.SocketOutputStream.write(SocketOutputStream.java:155)
	at java.io.BufferedOutputStream.flushBuffer(BufferedOutputStream.java:82)
	at java.io.BufferedOutputStream.write(BufferedOutputStream.java:95)
	at java.io.DataOutputStream.writeInt(DataOutputStream.java:199)
	at org.apache.spark.api.python.PythonRDD$.writeUTF(PythonRDD.scala:476)
	at org.apache.spark.api.python.PythonRDD$.write$1(PythonRDD.scala:297)
	at org.apache.spark.api.python.PythonRDD$.$anonfun$writeIteratorToStream$1(PythonRDD.scala:307)
	at org.apache.spark.api.python.PythonRDD$.$anonfun$writeIteratorToStream$1$adapted(PythonRDD.scala:307)
	at scala.collection.Iterator.foreach(Iterator.scala:941)
	at scala.collection.Iterator.foreach$(Iterator.scala:941)
	at scala.collection.AbstractIterator.foreach(Iterator.scala:1429)
	at org.apache.spark.api.python.PythonRDD$.writeIteratorToStream(PythonRDD.scala:307)
	at org.apache.spark.api.python.PythonRunner$$anon$2.writeIteratorToStream(PythonRunner.scala:621)
	at org.apache.spark.api.python.BasePythonRunner$WriterThread.$anonfun$run$1(PythonRunner.scala:397)
	at org.apache.spark.util.Utils$.logUncaughtExceptions(Utils.scala:1996)
	at org.apache.spark.api.python.BasePythonRunner$WriterThread.run(PythonRunner.scala:232)

Driver stacktrace:
	at org.apache.spark.scheduler.DAGScheduler.failJobAndIndependentStages(DAGScheduler.scala:2258)
	at org.apache.spark.scheduler.DAGScheduler.$anonfun$abortStage$2(DAGScheduler.scala:2207)
	at org.apache.spark.scheduler.DAGScheduler.$anonfun$abortStage$2$adapted(DAGScheduler.scala:2206)
	at scala.collection.mutable.ResizableArray.foreach(ResizableArray.scala:62)
	at scala.collection.mutable.ResizableArray.foreach$(ResizableArray.scala:55)
	at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:49)
	at org.apache.spark.scheduler.DAGScheduler.abortStage(DAGScheduler.scala:2206)
	at org.apache.spark.scheduler.DAGScheduler.$anonfun$handleTaskSetFailed$1(DAGScheduler.scala:1079)
	at org.apache.spark.scheduler.DAGScheduler.$anonfun$handleTaskSetFailed$1$adapted(DAGScheduler.scala:1079)
	at scala.Option.foreach(Option.scala:407)
	at org.apache.spark.scheduler.DAGScheduler.handleTaskSetFailed(DAGScheduler.scala:1079)
	at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.doOnReceive(DAGScheduler.scala:2445)
	at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:2387)
	at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:2376)
	at org.apache.spark.util.EventLoop$$anon$1.run(EventLoop.scala:49)
	at org.apache.spark.scheduler.DAGScheduler.runJob(DAGScheduler.scala:868)
	at org.apache.spark.SparkContext.runJob(SparkContext.scala:2196)
	at org.apache.spark.SparkContext.runJob(SparkContext.scala:2217)
	at org.apache.spark.SparkContext.runJob(SparkContext.scala:2236)
	at org.apache.spark.api.python.PythonRDD$.runJob(PythonRDD.scala:166)
	at org.apache.spark.api.python.PythonRDD.runJob(PythonRDD.scala)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:498)
	at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:244)
	at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:357)
	at py4j.Gateway.invoke(Gateway.java:282)
	at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:132)
	at py4j.commands.CallCommand.execute(CallCommand.java:79)
	at py4j.GatewayConnection.run(GatewayConnection.java:238)
	at java.lang.Thread.run(Thread.java:748)
Caused by: java.net.SocketException: Connection reset by peer: socket write error
	at java.net.SocketOutputStream.socketWrite0(Native Method)
	at java.net.SocketOutputStream.socketWrite(SocketOutputStream.java:111)
	at java.net.SocketOutputStream.write(SocketOutputStream.java:155)
	at java.io.BufferedOutputStream.flushBuffer(BufferedOutputStream.java:82)
	at java.io.BufferedOutputStream.write(BufferedOutputStream.java:95)
	at java.io.DataOutputStream.writeInt(DataOutputStream.java:199)
	at org.apache.spark.api.python.PythonRDD$.writeUTF(PythonRDD.scala:476)
	at org.apache.spark.api.python.PythonRDD$.write$1(PythonRDD.scala:297)
	at org.apache.spark.api.python.PythonRDD$.$anonfun$writeIteratorToStream$1(PythonRDD.scala:307)
	at org.apache.spark.api.python.PythonRDD$.$anonfun$writeIteratorToStream$1$adapted(PythonRDD.scala:307)
	at scala.collection.Iterator.foreach(Iterator.scala:941)
	at scala.collection.Iterator.foreach$(Iterator.scala:941)
	at scala.collection.AbstractIterator.foreach(Iterator.scala:1429)
	at org.apache.spark.api.python.PythonRDD$.writeIteratorToStream(PythonRDD.scala:307)
	at org.apache.spark.api.python.PythonRunner$$anon$2.writeIteratorToStream(PythonRunner.scala:621)
	at org.apache.spark.api.python.BasePythonRunner$WriterThread.$anonfun$run$1(PythonRunner.scala:397)
	at org.apache.spark.util.Utils$.logUncaughtExceptions(Utils.scala:1996)
	at org.apache.spark.api.python.BasePythonRunner$WriterThread.run(PythonRunner.scala:232)


Process finished with exit code 1

As for the solution, I searched around online; this error can have many causes. In the case I helped with here, Spark was running locally on Windows and the data volume was somewhat large, a combination under which runs inside PyCharm can fail with this error.

Without further ado, the fix that worked for this reader is very simple: close PyCharm, open it again, and re-run. Note that if it still fails, close and re-run once more.

npm install Error: gyp ERR! stack Error: Could not find any Python installation to use

When installing the dependencies of a Vue project with npm install, the following error is reported:

gyp verb command rebuild []
gyp verb command clean []
gyp verb clean removing "build" directory
gyp verb command configure []
gyp verb find Python Python is not set from command line or npm configuration
gyp verb find Python Python is not set from environment variable PYTHON
gyp verb find Python checking if "python3" can be used
gyp verb find Python - executing "python3" to get executable path
gyp verb find Python - "python3" is not in PATH or produced an error
gyp verb find Python checking if "python" can be used
gyp verb find Python - executing "python" to get executable path
gyp verb find Python - "python" is not in PATH or produced an error
gyp verb find Python checking if "python2" can be used
gyp verb find Python - executing "python2" to get executable path
gyp verb find Python - "python2" is not in PATH or produced an error
gyp verb find Python checking if Python is C:\Python37\python.exe
gyp verb find Python - executing "C:\Python37\python.exe" to get version
gyp verb find Python - "C:\Python37\python.exe" could not be run
gyp verb find Python checking if Python is C:\Python27\python.exe
gyp verb find Python - executing "C:\Python27\python.exe" to get version
gyp verb find Python - "C:\Python27\python.exe" could not be run
gyp verb find Python checking if the py launcher can be used to find Python
gyp verb find Python - executing "py.exe" to get Python executable path
gyp verb find Python - "py.exe" is not in PATH or produced an error
gyp ERR! find Python
gyp ERR! find Python Python is not set from command line or npm configuration
gyp ERR! find Python Python is not set from environment variable PYTHON
gyp ERR! find Python checking if "python3" can be used
gyp ERR! find Python - "python3" is not in PATH or produced an error
gyp ERR! find Python checking if "python" can be used
gyp ERR! find Python - "python" is not in PATH or produced an error
gyp ERR! find Python checking if "python2" can be used
gyp ERR! find Python - "python2" is not in PATH or produced an error
gyp ERR! find Python checking if Python is C:\Python37\python.exe
gyp ERR! find Python - "C:\Python37\python.exe" could not be run
gyp ERR! find Python checking if Python is C:\Python27\python.exe
gyp ERR! find Python - "C:\Python27\python.exe" could not be run
gyp ERR! find Python checking if the py launcher can be used to find Python
gyp ERR! find Python - "py.exe" is not in PATH or produced an error
gyp ERR! find Python
gyp ERR! find Python **********************************************************
gyp ERR! find Python You need to install the latest version of Python.
gyp ERR! find Python Node-gyp should be able to find and use Python. If not,
gyp ERR! find Python you can try one of the following options:
gyp ERR! find Python - Use the switch --python="C:\Path\To\python.exe"
gyp ERR! find Python   (accepted by both node-gyp and npm)
gyp ERR! find Python - Set the environment variable PYTHON
gyp ERR! find Python - Set the npm configuration variable python:
gyp ERR! find Python   npm config set python "C:\Path\To\python.exe"
gyp ERR! find Python For more information consult the documentation at:
gyp ERR! find Python https://github.com/nodejs/node-gyp#installation
gyp ERR! find Python **********************************************************
gyp ERR! find Python
gyp ERR! configure error
gyp ERR! stack Error: Could not find any Python installation to use
gyp ERR! stack     at PythonFinder.fail (E:\project\DBApi-master\dbapi-ui\node_modules\node-gyp\lib\find-python.js:302:47)
gyp ERR! stack     at PythonFinder.runChecks (E:\project\DBApi-master\dbapi-ui\node_modules\node-gyp\lib\find-python.js:136:21)
gyp ERR! stack     at PythonFinder.<anonymous> (E:\project\DBApi-master\dbapi-ui\node_modules\node-gyp\lib\find-python.js:200:18)
gyp ERR! stack     at PythonFinder.execFileCallback (E:\project\DBApi-master\dbapi-ui\node_modules\node-gyp\lib\find-python.js:266:16)
gyp ERR! stack     at exithandler (child_process.js:390:5)
gyp ERR! stack     at ChildProcess.errorhandler (child_process.js:402:5)
gyp ERR! stack     at ChildProcess.emit (events.js:400:28)
gyp ERR! stack     at Process.ChildProcess._handle.onexit (internal/child_process.js:280:12)     
gyp ERR! stack     at onErrorNT (internal/child_process.js:469:16)
gyp ERR! stack     at processTicksAndRejections (internal/process/task_queues.js:82:21)
gyp ERR! System Windows_NT 10.0.19042
gyp ERR! command "D:\\nodejs\\node.exe" "E:\\project\\DBApi-master\\dbapi-ui\\node_modules\\node-gyp\\bin\\node-gyp.js" "rebuild" "--verbose" "--libsass_ext=" "--libsass_cflags=" "--libsass_ldflags=" "--libsass_library="
gyp ERR! cwd E:\project\DBApi-master\dbapi-ui\node_modules\node-sass
gyp ERR! node -v v14.18.1
gyp ERR! node-gyp -v v7.1.2
gyp ERR! not ok
Build failed with error code: 1
npm WARN optional SKIPPING OPTIONAL DEPENDENCY: fsevents@~2.3.2 (node_modules\chokidar\node_modules\fsevents):
npm WARN notsup SKIPPING OPTIONAL DEPENDENCY: Unsupported platform for [email protected]: wanted {"os":"darwin","arch":"any"} (current: {"os":"win32","arch":"x64"})
npm WARN optional SKIPPING OPTIONAL DEPENDENCY: fsevents@^1.2.7 (node_modules\watchpack-chokidar2\node_modules\chokidar\node_modules\fsevents):
npm WARN notsup SKIPPING OPTIONAL DEPENDENCY: Unsupported platform for [email protected]: wanted {"os":"darwin","arch":"any"} (current: {"os":"win32","arch":"x64"})
npm WARN optional SKIPPING OPTIONAL DEPENDENCY: fsevents@^1.2.7 (node_modules\webpack-dev-server\node_modules\chokidar\node_modules\fsevents):
npm WARN notsup SKIPPING OPTIONAL DEPENDENCY: Unsupported platform for [email protected]: wanted {"os":"darwin","arch":"any"} (current: {"os":"win32","arch":"x64"})

npm ERR! code ELIFECYCLE
npm ERR! errno 1
npm ERR! [email protected] postinstall: `node scripts/build.js`
npm ERR! Exit status 1
npm ERR!
npm ERR! Failed at the [email protected] postinstall script.
npm ERR! This is probably not a problem with npm. There is likely additional logging output above.

npm ERR! A complete log of this run can be found in:
npm ERR!     C:\Users\WRD\AppData\Roaming\npm-cache\_logs\2021-10-20T05_42_34_767Z-debug.log 

This is caused by a missing Python installation: node-gyp needs Python to build native modules such as node-sass. Download and install Python from the official website, then delete the node_modules directory and re-run npm install. (Alternatively, as the error output above suggests, you can point node-gyp at an existing interpreter with npm config set python "C:\Path\To\python.exe".)

RuntimeError: stack expects each tensor to be equal size [How to Solve]

While debugging DenseNet code for a classification task, I ran into the following error during image preprocessing:
RuntimeError: stack expects each tensor to be equal size, but got [640, 640] at entry 0 and [560, 560] at entry 2
It means the loaded images have inconsistent sizes. After some searching, the problem had to be somewhere in my preprocessing pipeline when loading the images.
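To see why unequal sizes fail, here is a tiny standalone demonstration with the shapes from the error message (pure PyTorch; the DataLoader's default collate step does essentially this when batching):

import torch

a = torch.zeros(640, 640)
b = torch.zeros(560, 560)
try:
    torch.stack([a, b])  # what the default collate_fn does per batch
except RuntimeError as e:
    print(e)  # stack expects each tensor to be equal size, ...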
Below is the instantiation of the training-data preprocessing:

train_transform = Compose(
    [
        LoadImaged(keys=keys),
        AddChanneld(keys=keys),
        CropForegroundd(keys=keys[:-1], source_key="tumor"),
        ScaleIntensityd(keys=keys[:-1]),
        # Orientationd(keys=keys[:-1], axcodes="RAI"),
        Resized(keys=keys[:-1], spatial_size=(64, 64), mode='bilinear'),
        ConcatItemsd(keys=keys[:-1], name="image"),
        RandGaussianNoised(keys=["image"], std=0.01, prob=0.15),
        RandFlipd(keys=["image"], prob=0.5),  # spatial_axis=[0, 1]
        RandAffined(keys=["image"], mode='bilinear', prob=1.0,
                    spatial_size=[64, 64],  # matches the Resized output above
                    rotate_range=(0, 0, np.pi/15), scale_range=(0.1, 0.1)),
        ToTensord(keys=keys),
    ]
)

My keys are ["t2_img", "dwi_img", "adc_img", "tumor"].
The error shows loaded tensors of size [640, 640] and [560, 560], which are the sizes of my original images, so the problem had to be in the cropping or resizing step. After checking each one, the culprit was the Resized step: I passed keys=keys[:-1], which excludes "tumor". The tumor image therefore kept its original size, and since each sample dictionary is loaded as a whole, every dimension is expanded to the largest corresponding dimension among all its entries, so the loaded data still had the original image size. Make the following correction:

train_transform = Compose(
    [
        LoadImaged(keys=keys),
        AddChanneld(keys=keys),
        CropForegroundd(keys=keys[:-1], source_key="tumor"),
        ScaleIntensityd(keys=keys[:-1]),
        # Orientationd(keys=keys[:-1], axcodes="RAI"),
        Resized(keys=keys, spatial_size=(64, 64), mode='bilinear'),  # removed [:-1] so "tumor" is resized too
        ConcatItemsd(keys=keys[:-1], name="image"),
        RandGaussianNoised(keys=["image"], std=0.01, prob=0.15),
        RandFlipd(keys=["image"], prob=0.5),  # spatial_axis=[0, 1]
        RandAffined(keys=["image"], mode='bilinear', prob=1.0,
                    spatial_size=[64, 64],  # matches the Resized output above
                    rotate_range=(0, 0, np.pi/15), scale_range=(0.1, 0.1)),
        ToTensord(keys=keys),
    ]
)

It now runs successfully!

AttributeError: module ‘time‘ has no attribute ‘clock‘ [How to Solve]

Error message:

# `flask_sqlalchemy` Error:
File "D:\python38-flasky\lib\site-packages\sqlalchemy\util\compat.py", line 172, in <module>
    time_func = time.clock
AttributeError: module 'time' has no attribute 'clock'

Reason:

time.clock() was deprecated since Python 3.3 and removed in Python 3.8, but this older library still calls it, so the failure is a version mismatch.

Solution:

Use the replacement time.perf_counter(), for example:

import time

# as in sqlalchemy/util/compat.py, where win32 and jython are defined
if win32 or jython:
    # time.clock was removed in Python 3.8; assign the replacement
    # function itself (not its result), mirroring the branch below
    time_func = time.perf_counter
else:
    time_func = time.time
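For reference, time.perf_counter() is a drop-in high-resolution timer; a minimal usage sketch:

import time

start = time.perf_counter()
sum(range(1_000_000))  # any workload to time
print(f"took {time.perf_counter() - start:.4f}s")

Upgrading SQLAlchemy (and flask_sqlalchemy) to a release that supports Python 3.8 should also remove the need to patch library code.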

[Solved] Pdfplumber Read PDF Sheet Error: AttributeError: function/symbol ‘ARC4_stream_init‘ not found in library

pdfplumber reports an error when reading a PDF table: AttributeError: function/symbol 'ARC4_stream_init' not found in library.

The error

When using pdfplumber to extract tables from a PDF, it reports that ARC4_stream_init is missing:

Traceback (most recent call last):

  File "C:\Users\Stan\Python\ALIRT\pdf extracter\test.py", line 50, in <module>
    text = convert_pdf_to_txt('test_pdf.pdf')

  File "C:\Users\Stan\Python\ALIRT\pdf extracter\test.py", line 40, in convert_pdf_to_txt
    for page in PDFPage.get_pages(fp, pagenos, maxpages=maxpages, password=password,caching=caching, check_extractable=True):

  File "C:\Users\Stan\anaconda3\lib\site-packages\pdfminer\pdfpage.py", line 127, in get_pages
    doc = PDFDocument(parser, password=password, caching=caching)

  File "C:\Users\Stan\anaconda3\lib\site-packages\pdfminer\pdfdocument.py", line 564, in __init__
    self._initialize_password(password)

  File "C:\Users\Stan\anaconda3\lib\site-packages\pdfminer\pdfdocument.py", line 590, in _initialize_password
    handler = factory(docid, param, password)

  File "C:\Users\Stan\anaconda3\lib\site-packages\pdfminer\pdfdocument.py", line 283, in __init__
    self.init()

  File "C:\Users\Stan\anaconda3\lib\site-packages\pdfminer\pdfdocument.py", line 291, in init
    self.init_key()

  File "C:\Users\Stan\anaconda3\lib\site-packages\pdfminer\pdfdocument.py", line 304, in init_key
    self.key = self.authenticate(self.password)

  File "C:\Users\Stan\anaconda3\lib\site-packages\pdfminer\pdfdocument.py", line 354, in authenticate
    key = self.authenticate_user_password(password)

  File "C:\Users\Stan\anaconda3\lib\site-packages\pdfminer\pdfdocument.py", line 361, in authenticate_user_password
    if self.verify_encryption_key(key):

  File "C:\Users\Stan\anaconda3\lib\site-packages\pdfminer\pdfdocument.py", line 368, in verify_encryption_key
    u = self.compute_u(key)

  File "C:\Users\Stan\anaconda3\lib\site-packages\pdfminer\pdfdocument.py", line 326, in compute_u
    result = ARC4.new(key).encrypt(hash.digest())  # 4

  File "C:\Users\Stan\anaconda3\lib\site-packages\Crypto\Cipher\ARC4.py", line 132, in new
    return ARC4Cipher(key, *args, **kwargs)

  File "C:\Users\Stan\anaconda3\lib\site-packages\Crypto\Cipher\ARC4.py", line 60, in __init__
    result = _raw_arc4_lib.ARC4_stream_init(c_uint8_ptr(key),

  File "C:\Users\Stan\anaconda3\lib\site-packages\cffi\api.py", line 912, in __getattr__
    make_accessor(name)

  File "C:\Users\Stan\anaconda3\lib\site-packages\cffi\api.py", line 908, in make_accessor
    accessors[name](name)

  File "C:\Users\Stan\anaconda3\lib\site-packages\cffi\api.py", line 838, in accessor_function
    value = backendlib.load_function(BType, name)

AttributeError: function/symbol 'ARC4_stream_init' not found in library 'C:\Users\Stan\anaconda3\lib\site-packages\Crypto\Util\..\Cipher\_ARC4.cp37-win_amd64.pyd': error 0x7f

Solution:

Downgrade pycryptodome:

pip install pycryptodome==3.0.0
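After the downgrade, the original extraction should run again. For reference, a minimal pdfplumber table-extraction sketch (the filename is hypothetical):

import pdfplumber

with pdfplumber.open("test_pdf.pdf") as pdf:  # hypothetical file
    table = pdf.pages[0].extract_table()
    for row in table or []:
        print(row)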

Alternatively, there are two other ways to obtain a working ARC4 implementation.

Method 1:

# Installation
$ pip install arc4
# Import ARC4 package
from arc4 import ARC4

Method 2:

# Installation
$ pip install crypto
# Import ARC4 package
from Crypto.Cipher import ARC4

[Solved] forrtl: error (200): program aborting due to control-C event

PyCharm reports the following error:
forrtl: error (200): program aborting due to control-C event
Image PC Routine Line Source
libifcoremd.dll 00007FFD5FCA3B58 Unknown Unknown Unknown
KERNELBASE.dll 00007FFDC015B933 Unknown Unknown Unknown
KERNEL32.DLL 00007FFDC15D7034 Unknown Unknown Unknown
ntdll.dll 00007FFDC2762651 Unknown Unknown Unknown

Solution:

pip install --upgrade scipy

Just run this in the terminal. The exact mechanism is unclear; the crash originates in the Intel Fortran runtime (libifcoremd.dll in the trace above), which ships with some SciPy builds for Windows, so upgrading SciPy likely replaces it with a fixed version.

RuntimeError: Unable to locate turbojpeg library automatically. You may specify the turbojpeg library path manually.

This error occurs when running the official example code of PyTurboJPEG. Recording the solution here.

Environment

Ubuntu 16.04
conda
Python 3.7

Error details

Traceback (most recent call last):
  File "/home/ubuntu/PycharmProjects/python_code_test/JPEGTurbo/official_demo1.py", line 30, in <module>
    turbo_jpeg = TurboJPEG()
  File "/home/ubuntu/.conda/envs/pytorch-1.7/lib/python3.7/site-packages/turbojpeg.py", line 286, in __init__
    self.__find_turbojpeg() if lib_path is None else lib_path)
  File "/home/ubuntu/.conda/envs/pytorch-1.7/lib/python3.7/site-packages/turbojpeg.py", line 895, in __find_turbojpeg
    'Unable to locate turbojpeg library automatically. '
RuntimeError: Unable to locate turbojpeg library automatically. You may specify the turbojpeg library path manually.
e.g. jpeg = TurboJPEG(lib_path)

Solution

sudo apt-get update -y
sudo apt-get install -y libturbojpeg
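Once libturbojpeg is installed, auto-detection should succeed. If it still does not, the error message itself suggests the workaround: pass the library path to TurboJPEG explicitly. A sketch (the path below is a typical Ubuntu location, an assumption to verify on your system, e.g. with dpkg -L libturbojpeg):

from turbojpeg import TurboJPEG

# Auto-detection, which works once libturbojpeg is installed system-wide
turbo_jpeg = TurboJPEG()

# Explicit fallback (hypothetical path -- check your system):
# turbo_jpeg = TurboJPEG('/usr/lib/x86_64-linux-gnu/libturbojpeg.so.0')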

References

PyTurboJPEG · PyPI

https://github.com/lilohuang/PyTurboJPEG/issues/27

ERROR: GLEW initalization error: Missing GL version

System: Ubuntu 18.04
Since there is no nvidia-XXX folder under my /usr/lib directory, I first tried the following in the .bashrc file:

export PATH=/usr/local/cuda-10.2/bin${PATH:+:${PATH}}
export LD_LIBRARY_PATH=/usr/local/cuda-10.2/lib64${LD_LIBRARY_PATH:+:${LD_LIBRARY_PATH}}
export LD_PRELOAD=/usr/lib/x86_64-linux-gnu/libGLEW.so:/usr/lib/nvidia-460/libGL.so

but it does not work.

The final solution is:

Manually add the environment variable to the current run configuration: Run -> Edit Configurations -> add it under Environment variables:

Variable name: LD_PRELOAD
Variable value: /usr/lib/x86_64-linux-gnu/libGLEW.so
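To confirm the run configuration took effect, the variable can be printed from inside the script:

import os

# Should print the libGLEW path set in the PyCharm run configuration
print(os.environ.get("LD_PRELOAD"))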

Solution to [SSL: CERTIFICATE_VERIFY_FAILED] when downloading videos with you-get

The [SSL: CERTIFICATE_VERIFY_FAILED] problem appears when downloading videos with you-get and ffmpeg.

Since you-get --debug shows that this is a certificate verification problem, add code that skips SSL certificate verification and run it in PyCharm (rather than from the command line). Set url to the video page address and output_dir to your save path:

import ssl
from you_get import common

# Ignore certificate validation issues
ssl._create_default_https_context = ssl._create_unverified_context

# Call any_download_playlist in you_get.common to download a collection
common.any_download_playlist(url='https://www.bilibili.com/video/BVXXX', stream_id='', info_only=False,
                             output_dir=r'F:\StudyLesson\YouGet', merge=True)

# Call any_download in you_get.common to download a single episode
common.any_download(url='https://www.bilibili.com/video/BVXXX?p=8', stream_id='',
                    info_only=False, output_dir=r'F:\StudyLesson\YouGet', merge=True)

With that, the video downloads successfully. Recorded here for future reference.