The errors described below occur when compiling custom Keras functions because tf2.x’s model.compile traces them into a static graph by default, where concrete tensor values are not available.

Problem

When using the wrapper method to customize the loss function of a Keras model, and you need to compute accuracy metrics such as precision or recall, or need to extract the concrete values of the inputs y_true and y_pred (operations such as y_true.numpy()), an error message appears:

OperatorNotAllowedInGraphError: using a `tf.Tensor` as a Python `bool` is not allowed: AutoGraph did convert this function. This might indicate you are trying to use an unsupported feature.

Or

AttributeError: 'Tensor' object has no attribute 'numpy'

**Solution:**

Pass the following parameter to the compile function:

run_eagerly=True
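A minimal sketch of where the flag goes, assuming TensorFlow 2.x with the Keras API; the `eager_accuracy` metric is a hypothetical example of a function that needs concrete values (it calls `.numpy()`), standing in for sklearn-style metrics:

```python
import numpy as np
import tensorflow as tf
from tensorflow import keras

# Hypothetical custom metric that needs concrete (eager) values.
# In the default graph mode, y_true.numpy() would fail.
def eager_accuracy(y_true, y_pred):
    y_true = y_true.numpy().ravel()
    y_pred = (y_pred.numpy().ravel() > 0.5).astype("float32")
    return float((y_true == y_pred).mean())

model = keras.Sequential([
    keras.layers.Input(shape=(4,)),
    keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(
    optimizer="sgd",
    loss="binary_crossentropy",
    metrics=[eager_accuracy],
    run_eagerly=True,  # the fix: run the model as a dynamic graph
)

x = np.random.rand(8, 4).astype("float32")
y = np.random.randint(0, 2, size=(8, 1)).astype("float32")
history = model.fit(x, y, epochs=1, verbose=0)
```

Without `run_eagerly=True`, the same `fit` call would fail inside `eager_accuracy` with one of the errors shown above.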

**Reason:**

tf2.x enables eager mode by default, i.e. eager execution with a dynamic computation graph. Compared with the static computation graph of tf1.x, eager mode is convenient for debugging: tensor values can be printed and results evaluated easily, and it interoperates well with NumPy, so converting between tensors and ndarrays is straightforward. The tradeoff is that it runs significantly slower. Once a static computation graph is defined, it is executed almost entirely as C++ code in the TensorFlow core, so computation is more efficient and faster.
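The eager-by-default behavior can be checked directly; at the top level of a tf2.x program, operations execute immediately and `.numpy()` works:

```python
import tensorflow as tf

# tf2.x runs eagerly by default: ops execute immediately
print(tf.executing_eagerly())    # True at top level

t = tf.constant([1.0, 2.0, 3.0])
print(t.numpy())                 # converting to an ndarray works in eager mode
print(tf.reduce_sum(t).numpy())
```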

Even so, run_eagerly defaults to False in the model.compile method, which means the model’s logic is wrapped in tf.function for faster computation (the AutoGraph mechanism converts the dynamic computation graph into a static one via the @tf.function wrapper). But the @tf.function wrapper requires the function to use basic tf operations only, not arbitrary Python operations or functions from other packages. Hence the first error occurs when calling functions such as sklearn.metrics’ accuracy_score or imblearn.metrics’ geometric_mean_score, and the second error occurs when using the y_true.numpy() method. The root cause is that the static computation graph produced by the @tf.function wrapper inside model.compile does not support these operations, even though tf2.x enables dynamic computation graphs by default.

After passing run_eagerly=True to the model.compile method, the model runs as a dynamic computation graph and the operations above work normally. The downside is the lower execution efficiency of the dynamic computation graph.

### Read More:

- [Solved] Pytorch Tensor to numpy error: RuntimeError: Can‘t call numpy() on Tensor that requires grad.
- `Model.XXX` is not supported when the `Model` instance was constructed with eager mode enabled
- [Solved] ValueError: only one element tensors can be converted to Python scalars
- Error:output with shape [1, 224, 224] doesn‘t match the broadcast shape [3, 224, 224]
- [Solved] RuntimeError: Numpy is not available (Associated Torch or Tensorflow)
- torch.nn.functional.normalize() Function Interpretation
- Python error collection: NameError: name ‘numpy’ is not defined
- [Solved] Pytorch-transformers Error: AttributeError: ‘str‘ object has no attribute ‘shape‘
- How to Solve Pytorch DataLoader Loading Error: UnicodeDecodeError: ‘utf-8‘ codec can‘t decode byte 0xe5 in position 1023
- [ONNXRuntimeError] : 10 : INVALID_Graph loading model error
- [Solved] mnn Import Error: initMNN: init numpy failed
- Autograd error in Python: runtimeerror: grad can be implicitly created only for scalar outputs
- [Solved] RuntimeError: scatter(): Expected dtype int64 for index
- [Solved] Original error was: No module named ‘numpy.core._multiarray_umath‘
- How to Solve Error: RuntimeError CUDA out of memory
- Python: Torch.nn.functional.normalize() Function
- Python: RNN principle realized by numpy
- Keras import a custom metric model error: unknown metric function: Please ensure this object is passed to`custom_object‘
- Pytorch directly creates a tensor on the GPU error [How to Solve]
- Here is the difference and connection of Torch. View (), Transpose (), and Permute ()