
[Solved] AttributeError: module 'PIL.Image' has no attribute 'open'

AttributeError: module 'PIL.Image' has no attribute 'open' means that PIL.Image has no open method. I searched many solutions online, but none of them worked. Finally, I happened to notice the path of Image.py (c:\users\lenovo\pycharmprojects\kk\venv\lib\site-packages\PIL\Image.py) and understood the cause of the error.

from PIL import Image
import os
import csv
import time

Reason: the Image.py file under the PIL package was accidentally emptied, so Image.open() cannot be resolved.

temp_img_now = Image.open(temp_file)  # this call raised the AttributeError because Image.py was empty

Solution: uninstall the Pillow (and any stray PIL) packages, then reinstall them; this restores the emptied Image.py.
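
A quick diagnostic (a minimal sketch) to check which Image.py is actually being imported and whether it still defines open:

from PIL import Image

# Show the path of the Image module actually being imported; if the
# file at this path has been emptied, 'open' will be missing.
print(Image.__file__)
print(hasattr(Image, "open"))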

[Solved] OperatorNotAllowedInGraphError & AttributeError: 'Tensor' object has no attribute 'numpy'

When compiling custom functions, the errors below occur because TF 2.x's Keras compile() does not, by default, support extracting concrete tensor values.

Problem

When using a wrapper to customize the loss function of a Keras model and needing to compute accuracy metrics such as precision or recall, or to extract the concrete values of the inputs y_true and y_pred (operations such as y_true.numpy()), an error message appears:

OperatorNotAllowedInGraphError: using a `tf.Tensor` as a Python `bool` is not allowed: AutoGraph did convert this function. This might indicate you are trying to use an unsupported feature.

Or

 AttributeError: 'Tensor' object has no attribute 'numpy'

 

Solution:

Pass the following parameter in the compile function:

run_eagerly=True

 

Reason:

TF 2.x enables eager mode (eager execution, i.e. a dynamic computation graph) by default. Compared with the static computation graph of TF 1.x, the advantage of eager mode is that debugging is convenient: tensor values can easily be printed and results evaluated, and it interoperates well with NumPy, so converting between tensors and ndarrays is straightforward. The tradeoff is that it runs significantly slower: once a static computation graph is defined, it is almost always executed as C++ code in the TensorFlow core, so it is more efficient and faster.

Even so, run_eagerly defaults to False in the model.compile method, which means the model's logic is wrapped in tf.function to achieve faster execution (the AutoGraph mechanism converts the dynamic computation graph into a static one via the @tf.function wrapper). But the @tf.function wrapper requires the function to use basic TF operations only, not arbitrary Python operations or functions from other packages, so the first error occurs when calling functions such as sklearn.metrics' accuracy_score or imblearn.metrics' geometric_mean_score. The second error occurs when calling the y_true.numpy() method. The root cause is that, inside the static graph produced by the @tf.function wrapper, model.compile does not support these operations, even though TF 2.x enables dynamic graphs by default.

After passing run_eagerly=True to the model.compile method, the model runs with the dynamic computation graph and the above operations work normally, at the cost of the dynamic graph's lower execution efficiency.
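
As a minimal sketch (the model, the data, and the eager_accuracy metric are made up for illustration), this is how run_eagerly=True lets a wrapped metric extract concrete values:

import numpy as np
import tensorflow as tf
from sklearn.metrics import accuracy_score

def eager_accuracy(y_true, y_pred):
    # Only possible in eager mode: pull concrete values out of the tensors
    y_true = y_true.numpy().ravel().astype(int)
    y_pred = (y_pred.numpy().ravel() > 0.5).astype(int)
    return accuracy_score(y_true, y_pred)

model = tf.keras.Sequential([
    tf.keras.layers.Dense(1, activation="sigmoid", input_shape=(4,))
])
model.compile(optimizer="adam",
              loss="binary_crossentropy",
              metrics=[eager_accuracy],
              run_eagerly=True)  # without this, the two errors above appear

x = np.random.rand(32, 4).astype("float32")
y = np.random.randint(0, 2, size=(32, 1))
model.fit(x, y, epochs=1, verbose=0)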

[Solved] ERROR: lib/bridge_generated.dart:837:9: Error: The parameter 'ptr' of the method 'FlutterRustBridgeExampleWire.store_dart_post_cobject'

[error reporting]

Launching lib/main.dart on Linux in debug mode...
Finished dev [unoptimized + debuginfo] target(s) in 0.04s
ERROR: lib/bridge_generated.dart:837:9: Error: The parameter 'ptr' of the method 'FlutterRustBridgeExampleWire.store_dart_post_cobject' has type 'int', which does not match the corresponding type, 'Pointer<NativeFunction<Bool Function(Int64, Pointer)>>', in the overridden method, 'FlutterRustBridgeWireBase.store_dart_post_cobject'.
ERROR: - 'Pointer' is from 'dart:ffi'.
ERROR: - 'NativeFunction' is from 'dart:ffi'.
ERROR: - 'Bool' is from 'dart:ffi'.
ERROR: - 'Int64' is from 'dart:ffi'.
ERROR: - 'Void' is from 'dart:ffi'.
ERROR: Change to a supertype of 'Pointer<NativeFunction<Bool Function(Int64, Pointer)>>', or, for a covariant parameter, a subtype.
ERROR: int ptr,
ERROR: ^
ERROR: ../../frb_dart/lib/src/basic.dart:153:8: Context: This is the overridden method ('store_dart_post_cobject').
ERROR: void store_dart_post_cobject(
ERROR: ^
Building Linux application...
Exception: Build process failed

[Solution]

The generated lib/bridge_generated.dart no longer matches the frb_dart runtime it overrides, so regenerate the bridge code:

$ export REPO_DIR=$PWD
$ cd /

$ flutter_rust_bridge_codegen \
    --rust-input $REPO_DIR/rust/src/api.rs \
    --dart-output $REPO_DIR/lib/bridge_generated.dart \
    --c-output $REPO_DIR/ios/Classes/bridge_generated.h
[2021-10-22T14:39:33Z INFO  flutter_rust_bridge_codegen] Picked config: Opts { rust_input_path: "/home/consulting/Documents/native_add/rust/src/api.rs", dart_output_path: "/home/consulting/Documents/native_add/lib/bridge_generated.dart", c_output_path: "/home/consulting/Documents/native_add/ios/Classes/bridge_generated.h", rust_crate_dir: "/home/consulting/Documents/native_add/rust", rust_output_path: "/home/consulting/Documents/native_add/rust/src/bridge_generated.rs", class_name: "NativeAdd", dart_format_line_length: 80, skip_add_mod_to_lib: false }
[2021-10-22T14:39:33Z INFO  flutter_rust_bridge_codegen] Phase: Parse source code to AST
[2021-10-22T14:39:33Z INFO  flutter_rust_bridge_codegen] Phase: Parse AST to IR
[2021-10-22T14:39:33Z INFO  flutter_rust_bridge_codegen] Phase: Transform IR
[2021-10-22T14:39:33Z INFO  flutter_rust_bridge_codegen] Phase: Generate Rust code
[2021-10-22T14:39:33Z INFO  flutter_rust_bridge_codegen] Phase: Generate Dart code
[2021-10-22T14:39:33Z INFO  flutter_rust_bridge_codegen] Phase: Other things
[2021-10-22T14:39:34Z INFO  flutter_rust_bridge_codegen] Success! Now go and use it :)

$ cd $REPO_DIR

[Solved] Failed to instantiate java.util.List using constructor NO_CONSTRUCTOR with arguments

Error Message:

MappingInstantiationException: Failed to instantiate java.util.List using constructor NO_CONSTRUCTOR with arguments

Reason

The entity class mapped to the Mongo collection declares the field b as a List, but the stored data is a single object.

@Data
@NoArgsConstructor
@AllArgsConstructor
@Document(collection = "a")
public class A {

    private List<B> b; // this is a list

    @Data
    @Builder
    @NoArgsConstructor
    @AllArgsConstructor
    public static class B {

        private String bb;
    }
}

The offending document in the collection has this format:

{ 
    "_id" : ObjectId("62df884326d4311d9c80de8d"), 
    "b" : {
        "bb" : "test" // this is an object, not an array
    }
}

Solution:

Either change the entity class so the field type matches the data (e.g. declare b as B instead of List<B>), or fix the problem documents so that b is stored as an array, as shown below.
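
For example, if the entity keeps List<B>, a conforming document would look like this (values are illustrative):

{ 
    "_id" : ObjectId("62df884326d4311d9c80de8d"), 
    "b" : [
        { "bb" : "test" } // now an array of objects
    ]
}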

 

ERROR: Failed building wheel for osgeo [How to Solve]

Problem: running pip3 install osgeo reports: ERROR: Failed building wheel for osgeo

Solution:

Method 1

conda install gdal

Method 2:

1. Download the GDAL wheel matching your Python version.

For example, for Python 3.8 download GDAL-3.4.3-cp38-cp38-win_amd64.whl

(cp38 stands for CPython 3.8, and win_amd64 for 64-bit Windows)

2. install:

pip3 install gdal-3.4.3-cp38-cp38-win_amd64.whl
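
Either way, a quick import check confirms the binding works (a minimal sketch; the version printed depends on what you installed):

from osgeo import gdal

# If the wheel/conda package installed correctly, this prints the
# GDAL version instead of raising an ImportError.
print(gdal.__version__)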

 

[Solved] UserWarning: mkl-service package failed to import, therefore Intel(R) MKL initialization ensuring it

After installing Anaconda, importing certain modules in IPython reports the following warning:

UserWarning: mkl-service package failed to import, therefore Intel(R) MKL initialization ensuring its correct out-of-the box operation under condition when Gnu OpenMP had already been loaded by Python process is not assured. Please install mkl-service package

 

Solution:

First make sure all three of these environment variables are configured (usually one of them is missing):

E:\Anaconda3
E:\Anaconda3\Scripts
E:\Anaconda3\Library\bin

Then copy these two files

libcrypto-1_1-x64.dll
libssl-1_1-x64.dll

from E:\Anaconda3\Library\bin to

E:\Anaconda3\DLLs

Restart PyCharm and it should work.
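
To verify the fix, restart Python and import an MKL-backed package; assuming numpy is installed, the warning should no longer appear:

import numpy as np

# With mkl-service and the two DLLs in place, this import completes
# without the mkl-service UserWarning.
print(np.__version__)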

[Solved] Nacos offline service error: errCode: 500

Error Messages:

caused: errCode: 500, errMsg: do metadata operation failed ;caused: com.alibaba.nacos.consistency.exception.ConsistencyException: com.alibaba.nacos.core.distributed.raft.exception.NoLeaderException: The Raft Group [naming_instance_metadata] did not find the Leader node;caused: com.alibaba.nacos.core.distributed.raft.exception.NoLeaderException: The Raft Group [naming_instance_metadata] did not find the Leader node;

 

Solution:

The error is caused by confused registration metadata (for example, a changed registered IP), which leaves the Raft group without a leader.

1. Stop Nacos first.
2. Delete the protocol folder in the data directory.
3. Restart Nacos. Done!

[Solved] fontconfig cross-compilation Error: PRI_CHAR_WIDTH_STRONG 

make reports the following error:

"fontconfig-2.12.1/src/fcmatch.c:324:63: error: 'PRI_CHAR_WIDTH_STRONG' undeclared here (not in a function); did you mean 'PRI_WIDTH_STRONG'?"

Solution:
Enter the source code directory; my path is fontconfig-2.12.1. Then modify the following files:
1.
fontconfig-2.12.1/fontconfig/fontconfig.h
Find and delete:
#define FC_CHAR_WIDTH "charwidth" /* Int */
Add:
#define FC_CHARWIDTH "charwidth" /* Int */
#define FC_CHAR_WIDTH FC_CHARWIDTH
2.
fontconfig-2.12.1/src/fcobjs.h
Find and delete:
FC_OBJECT (CHAR_WIDTH, FcTypeInteger, NULL)
Add:
FC_OBJECT (CHARWIDTH, FcTypeInteger, NULL)
3.
fontconfig-2.12.1/src/fcobjshash.gperf
Find and delete:
"CHARWIDTH", FC_CHAR_WIDTH_OBJECT
Add:
"charwidth", FC_CHARWIDTH_OBJECT
4.
fontconfig-2.12.1/src/fcobjshash.h
Find and delete:
{(int)(long)&((struct FcObjectTypeNamePool_t *)0)->FcObjectTypeNamePool_str45,FC_CHAR_WIDTH_OBJECT},
Add:
{(int)(long)&((struct FcObjectTypeNamePool_t *)0)->FcObjectTypeNamePool_str45,FC_CHARWIDTH_OBJECT},
After these edits, make && make install completes successfully. The whole fix is just renaming a few macro definitions, after which cairo can be built happily. Newer versions of fontconfig do not have this error, but my cross-compilation of them failed for other reasons that I could not resolve.

 

[Solved] Vue3.2 component computed Error: Write operation failed: computed value is readonly

<template>
	<component
        ref="formComponent"
        :is="formComponent"
    />
</template>

<script setup lang='ts'>
	import { computed } from 'vue'
	import { useRoute } from 'vue-router'
	import Record from './Record.vue' // hypothetical component being mapped

	const route = useRoute()
	const mapComp: any = {
	  Record
	};
	const formComponent = computed(() => {
	  return mapComp[route.params.type as keyof typeof mapComp];
	});
</script>

The browser then reports two warnings, including "Write operation failed: computed value is readonly": the template ref named formComponent tries to write the component instance into the computed ref of the same name, and computed refs are read-only.

After modification:

<template>
	<component
        ref="formComponent"
		:is="mapComp[route.params.type as string]"
    />
</template>

<script setup lang='ts'>
	import { ref } from 'vue'
	import { useRoute } from 'vue-router'
	import Record from './Record.vue' // hypothetical component being mapped

	const route = useRoute()
	const mapComp: any = {
	  Record
	};
	const formComponent = ref<string>(route.params.type as string);
</script>

At this point, the problem is solved

[Solved] M1 Chip MacBook Pro Error: snappy-java FAILED_TO_LOAD_NATIVE_LIBRARY

Solution:

Find the xml file that introduces the snappy-java dependency (in my case, the pom of spark-core), change the version of snappy-java in it to 1.1.8.4, and reload the dependencies.

    <dependency>
      <groupId>org.xerial.snappy</groupId>
      <artifactId>snappy-java</artifactId>
      <version>1.1.8.4</version>
      <scope>compile</scope>
    </dependency>

 

How to Solve: (Detailed exploration process)

A MacBook Pro with the new M1 Pro chip arrived a few days ago, and I couldn't wait to try it out. Because the chip uses the ARM instruction set while previous Intel-chip machines used x86, some problems were inevitable. But after a year of ecosystem adaptation there are basically no major issues, so after a few days of trying it out I decided to use the machine for production work.

Since I mainly use JDK 1.8 (Oracle only provides an ARM build for JDK 17), I initially installed the traditional Intel build of the JDK, and some programs ran noticeably slower during development (about the same speed as my 2018 MacBook Pro), since everything had to go through Rosetta 2 translation. So after a few days I replaced the Intel JDK with an ARM build. There was a catch in the documentation, though: its direct download link was not actually the ARM version, and after installing I found java was still running the Intel build; the real ARM JDK has to be downloaded manually from the official website.

After installing the ARM JDK, compilation speed really did improve, but when executing a Spark program locally I hit the following problem; the detailed log is as follows:

Caused by: org.xerial.snappy.SnappyError: [FAILED_TO_LOAD_NATIVE_LIBRARY] no native library is found for os.name=Mac and os.arch=aarch64
	at org.xerial.snappy.SnappyLoader.findNativeLibrary(SnappyLoader.java:331)
	at org.xerial.snappy.SnappyLoader.loadNativeLibrary(SnappyLoader.java:171)
	at org.xerial.snappy.SnappyLoader.load(SnappyLoader.java:152)
	at org.xerial.snappy.Snappy.<clinit>(Snappy.java:47)
	at org.apache.parquet.hadoop.codec.SnappyDecompressor.decompress(SnappyDecompressor.java:62)
	at org.apache.parquet.hadoop.codec.NonBlockedDecompressorStream.read(NonBlockedDecompressorStream.java:51)
	at java.io.DataInputStream.readFully(DataInputStream.java:195)
	at java.io.DataInputStream.readFully(DataInputStream.java:169)
	at org.apache.parquet.bytes.BytesInput$StreamBytesInput.toByteArray(BytesInput.java:205)
	at org.apache.parquet.column.values.dictionary.PlainValuesDictionary$PlainBinaryDictionary.<init>(PlainValuesDictionary.java:89)
	at org.apache.parquet.column.values.dictionary.PlainValuesDictionary$PlainBinaryDictionary.<init>(PlainValuesDictionary.java:72)
	at org.apache.parquet.column.Encoding$1.initDictionary(Encoding.java:90)
	at org.apache.parquet.column.Encoding$4.initDictionary(Encoding.java:149)
	at org.apache.spark.sql.execution.datasources.parquet.VectorizedColumnReader.<init>(VectorizedColumnReader.java:103)
	at org.apache.spark.sql.execution.datasources.parquet.VectorizedParquetRecordReader.checkEndOfRowGroup(VectorizedParquetRecordReader.java:280)
	at org.apache.spark.sql.execution.datasources.parquet.VectorizedParquetRecordReader.nextBatch(VectorizedParquetRecordReader.java:225)
	at org.apache.spark.sql.execution.datasources.parquet.VectorizedParquetRecordReader.nextKeyValue(VectorizedParquetRecordReader.java:137)
	at org.apache.spark.sql.execution.datasources.RecordReaderIterator.hasNext(RecordReaderIterator.scala:39)
	at org.apache.spark.sql.execution.datasources.FileScanRDD$$anon$1.hasNext(FileScanRDD.scala:105)
	at org.apache.spark.sql.execution.datasources.FileScanRDD$$anon$1.nextIterator(FileScanRDD.scala:177)
	at org.apache.spark.sql.execution.datasources.FileScanRDD$$anon$1.hasNext(FileScanRDD.scala:105)
	at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIterator.scan_nextBatch$(Unknown Source)
	at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIterator.processNext(Unknown Source)
	at org.apache.spark.sql.execution.BufferedRowIterator.hasNext(BufferedRowIterator.java:43)
	at org.apache.spark.sql.execution.WholeStageCodegenExec$$anonfun$8$$anon$1.hasNext(WholeStageCodegenExec.scala:395)
	at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIterator.agg_doAggregateWithKeys$(Unknown Source)
	at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIterator.processNext(Unknown Source)
	at org.apache.spark.sql.execution.BufferedRowIterator.hasNext(BufferedRowIterator.java:43)
	at org.apache.spark.sql.execution.WholeStageCodegenExec$$anonfun$8$$anon$1.hasNext(WholeStageCodegenExec.scala:395)
	at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:408)
	at org.apache.spark.shuffle.sort.BypassMergeSortShuffleWriter.write(BypassMergeSortShuffleWriter.java:125)
	at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:96)
	at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:53)
	at org.apache.spark.scheduler.Task.run(Task.scala:108)
	at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:335)
	... 3 more

org.xerial.snappy is a compression/decompression library. I am using Spark 2.2.0, which depends on snappy-java 1.1.2.6. Inspecting the contents of that jar shows that it only ships native Mac libraries for x86 and x86_64, not arm64 (aarch64). The fix, then, is to find the dependency that introduces snappy-java and upgrade it to the latest version.

Searching IDEA's dependency graph, I found that the library is imported by spark-core; my spark-core is 2.2.0 and the snappy-java it introduces is 1.1.2.6.

The latest snappy-java on mvnrepository is 1.1.8.4, so I replaced the original version in the spark-core dependency directly, as in the snippet above.

After reloading the dependencies and running again, the code executed smoothly and the problem was solved.

[Solved] socket failed: EPERM (Operation not permitted)

Reason & Solution:

1. The network permission is not enabled.
2. Cleartext HTTP is not supported.

Add the network permissions to AndroidManifest.xml:

<uses-permission android:name="android.permission.INTERNET" />
<uses-permission android:name="android.permission.ACCESS_NETWORK_STATE" />

If the error is still reported, add the following attribute to the <application> element in AndroidManifest.xml:

android:usesCleartextTraffic="true"
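
For placement, the attribute goes on the <application> element itself; a minimal sketch (the other attributes are placeholders from a typical manifest):

<application
    android:label="@string/app_name"
    android:usesCleartextTraffic="true">
    <!-- activities, services, etc. -->
</application>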

Reason: since Android 9.0, cleartext HTTP access is disabled by default and network access is expected to use HTTPS. Android has supported HTTPS for many versions; what changed in 9.0 is that plain HTTP is blocked unless explicitly allowed.

If the error persists after the above changes, uninstall the application first and then run it again.