This error usually occurs on Windows when the DataLoader uses multiple worker processes. For example, execute the following code in PyCharm:
import torch
import torch.utils.data as Data
import numpy as np
from sklearn.datasets import load_iris
iris_x, irisy = load_iris(return_X_y=True)
print("iris_x.dtype:", iris_x.dtype)
print("irisy:", irisy.dtype)
## transform the training set x into a tensor, and the training set y into a tensor
train_xt = torch.from_numpy(iris_x.astype(np.float32))
train_yt = torch.from_numpy(irisy.astype(np.int64))
print("train_xt.dtype:", train_xt.dtype)
print("train_yt.dtype:", train_yt.dtype)
## After converting the training set into a tensor, use TensorDataset to collate X and Y together
train_data = Data.TensorDataset(train_xt, train_yt)
## Define a data loader to batch the training dataset
train_loader = Data.DataLoader(
    dataset=train_data,   # the dataset to use
    batch_size=10,        # batch sample size
    shuffle=True,         # shuffle the data before each epoch
    num_workers=2,        # [Note: 2 worker processes are used here]
)
## Check if the dimensionality of the samples of a batch of the training dataset is correct
for step, (b_x, b_y) in enumerate(train_loader):
    if step > 0:
        break
## Output the dimensions of the training images and the labels, and their data types
print("b_x.shape:", b_x.shape)
print("b_y.shape:", b_y.shape)
print("b_x.dtype:", b_x.dtype)
print("b_y.dtype:", b_y.dtype)
## ------------ The correct result is as follows ------------
# iris_x.dtype: float64
# irisy: int32
# train_xt.dtype: torch.float32
# train_yt.dtype: torch.int64
# b_x.shape: torch.Size([10, 4])
# b_y.shape: torch.Size([10])
# b_x.dtype: torch.float32
# b_y.dtype: torch.int64
The following error is reported. (No error occurs when the same code is run in a Jupyter notebook in the same environment; I don't know why…)
RuntimeError:
        An attempt has been made to start a new process before the
        current process has finished its bootstrapping phase.

        This probably means that you are not using fork to start your
        child processes and you have forgotten to use the proper idiom
        in the main module:

            if __name__ == '__main__':
                freeze_support()
                ...

        The "freeze_support()" line can be omitted if the program
        is not going to be frozen to produce an executable.
Solution 1:
Remove the statement that sets up multiple worker processes. In this example, comment out or delete the following line:
num_workers=2,        # [Note: 2 worker processes are used here]
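For reference, a minimal sketch of the same DataLoader with worker processes disabled (num_workers defaults to 0, which means batches are loaded in the main process and no child processes are started):

train_loader = Data.DataLoader(
    dataset=train_data,   # the dataset to use
    batch_size=10,        # batch sample size
    shuffle=True,         # shuffle the data before each epoch
    num_workers=0,        # load batches in the main process; no workers are spawned
)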
Solution 2:
Move the code that triggers the multiprocessing (iterating over the DataLoader) under an if __name__ == '__main__': guard.
if __name__ == '__main__':
    ## Check if the dimensionality of the samples of a batch of the training dataset is correct
    for step, (b_x, b_y) in enumerate(train_loader):
        if step > 0:
            break
    ## Output the dimensions of the training images and the labels, and their data types
    print("b_x.shape:", b_x.shape)
    print("b_y.shape:", b_y.shape)
    print("b_x.dtype:", b_x.dtype)
    print("b_y.dtype:", b_y.dtype)
However, in PyCharm, the part before for step, (b_x, b_y) in enumerate(train_loader): is executed again (the prints appear twice). This is because on Windows the worker processes are started with spawn rather than fork: each worker re-imports the main module and re-runs its module-level code, and only the part under the if __name__ == '__main__': guard is skipped in the workers.
## —————————— The result of running in PyCharm is as follows ——————————
iris_x.dtype: float64
irisy: int32
train_xt.dtype: torch.float32
train_yt.dtype: torch.int64
iris_x.dtype: float64
irisy: int32
train_xt.dtype: torch.float32
train_yt.dtype: torch.int64
b_x.shape: torch.Size([10, 4])
b_y.shape: torch.Size([10])
b_x.dtype: torch.float32
b_y.dtype: torch.int64
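To get rid of the duplicated output as well, the whole script can be placed under the guard, so that the spawned workers skip all of it when they re-import the module (the DataLoader pickles the dataset and sends it to the workers, so nothing needs to be rebuilt at module level). A minimal sketch of the same example reorganized this way:

import torch
import torch.utils.data as Data
import numpy as np
from sklearn.datasets import load_iris

if __name__ == '__main__':
    iris_x, irisy = load_iris(return_X_y=True)
    ## transform the training set x and y into tensors
    train_xt = torch.from_numpy(iris_x.astype(np.float32))
    train_yt = torch.from_numpy(irisy.astype(np.int64))
    ## collate X and Y together and define the batched loader
    train_data = Data.TensorDataset(train_xt, train_yt)
    train_loader = Data.DataLoader(
        dataset=train_data,
        batch_size=10,
        shuffle=True,
        num_workers=2,  # worker processes are safe here because everything is under the guard
    )
    ## grab one batch and check its shape and dtype
    for step, (b_x, b_y) in enumerate(train_loader):
        if step > 0:
            break
    print("b_x.shape:", b_x.shape)
    print("b_y.shape:", b_y.shape)
    print("b_x.dtype:", b_x.dtype)
    print("b_y.dtype:", b_y.dtype)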