Wrapping the evaluation loop in with torch.no_grad(): (placed just before for i, (inp, target) in enumerate(test_loader):) stops the CUDA out-of-memory error from being raised:
with torch.no_grad():
    for i, (inp, target) in enumerate(test_loader):
        target = target.cuda()
        gt = torch.cat((gt, target), 0)
        # each image arrives as n_crops crops (e.g. from a TenCrop transform)
        bs, n_crops, c, h, w = inp.size()
        # torch.autograd.Variable is deprecated; plain tensors carry autograd state
        input_var = inp.view(-1, c, h, w).cuda()
        output = model(input_var)
        # average the predictions over the crops of each image
        output_mean = output.view(bs, n_crops, -1).mean(1)
        pred = torch.cat((pred, output_mean.data), 0)
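The reason this fixes the out-of-memory error is that torch.no_grad() disables graph construction: intermediate activations are no longer kept alive for backpropagation, so GPU memory is freed as soon as each forward pass finishes. A minimal sketch of the difference (standalone, not part of the evaluation code above):

```python
import torch

x = torch.randn(4, 3, requires_grad=True)
w = torch.randn(3, 2, requires_grad=True)

# Normally the result tracks gradients, keeping the autograd graph
# (and every intermediate tensor) in memory until backward() or deletion.
y = x @ w
print(y.requires_grad)  # True

# Inside no_grad, no graph is recorded, so activations can be freed
# immediately after each forward pass.
with torch.no_grad():
    y = x @ w
print(y.requires_grad)  # False
```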
The error is gone, but the results are poor: the average AUROC is only 0.531.
The AUROC of Atelectasis is 0.552835646499197
The AUROC of Cardiomegaly is 0.4891981635698571
The AUROC of Effusion is 0.5078801344734772
The AUROC of Infiltration is 0.5471494224277326
The AUROC of Mass is 0.5019205110036506
The AUROC of Nodule is 0.48564796421763534
The AUROC of Pneumonia is 0.49787587924670523
The AUROC of Pneumothorax is 0.46888930706822896
The AUROC of Consolidation is 0.45797680791836254
The AUROC of Edema is 0.5500617972215441
The AUROC of Emphysema is 0.7189693346796524
The AUROC of Fibrosis is 0.548014619318718
The AUROC of Pleural_Thickening is 0.47339353023337755
The AUROC of Hernia is 0.638215609588036
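Most of these values sit near 0.5, i.e. chance level, which points to a training or data problem rather than an evaluation bug. To rule out the metric itself, the per-class AUROC can be sanity-checked with a small rank-based implementation (a sketch; the auroc helper below is hypothetical and not part of the original code):

```python
import numpy as np

def auroc(y_true, y_score):
    """AUROC via the Mann-Whitney U statistic: the fraction of
    (positive, negative) pairs whose scores are ranked correctly."""
    y_true = np.asarray(y_true, dtype=bool)
    pos = np.asarray(y_score)[y_true]   # scores of positive samples
    neg = np.asarray(y_score)[~y_true]  # scores of negative samples
    # count correctly ordered pairs; ties count as half
    wins = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    return (wins + 0.5 * ties) / (len(pos) * len(neg))

print(auroc([0, 0, 1, 1], [0.1, 0.4, 0.35, 0.8]))  # 0.75
```

Applying this per column of the accumulated gt/pred tensors (one column per pathology) and averaging should reproduce the numbers above.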