Tag Archives: gpu

CUDA_ERROR_SYSTEM_DRIVER_MISMATCH [How to Solve]

nvidia/cuda:11.4.2-cudnn8-devel-ubuntu20.04: CUDA_ERROR_SYSTEM_DRIVER_MISMATCH

Problem description and solution

Question

When running a program that calls the cuDNN library, the error CUDA_ERROR_SYSTEM_DRIVER_MISMATCH occurs. This was quite baffling and I did not know why at first. I am using the Docker image nvidia/cuda:11.4.2-cudnn8-devel-ubuntu20.04.

Diagnosis

This problem is related to the version of libcuda. Check nvidia-smi to confirm whether the libcuda version seen inside the container (i.e. the driver version) is consistent with the host's driver version:

The problem I encountered was exactly this version inconsistency.
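If you want to check this programmatically from inside the container, here is a minimal sketch (my own addition, not from the original post) that asks libcuda for the driver API version via ctypes, which you can compare against the host's nvidia-smi output:

import ctypes

# Load the driver library the container actually resolves and query its version.
libcuda = ctypes.CDLL("libcuda.so.1")
version = ctypes.c_int()
ret = libcuda.cuDriverGetVersion(ctypes.byref(version))
assert ret == 0, f"cuDriverGetVersion failed with CUDA error {ret}"
# e.g. 11040 corresponds to a CUDA 11.4 capable driver
print("CUDA driver API version:", version.value)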

Solution:

libcuda.so and libcuda.so.1 live in the /usr/lib/x86_64-linux-gnu folder. Enter this folder and repoint the libcuda.so.1 symlink to the driver version that matches the host:

ln -sf libcuda.so.465.19.01 libcuda.so.1

This resolves the problem.

failed call to cuInit: CUDA_ERROR_UNKNOWN: unknown error

Question

Everything was fine before. After restarting the computer and running the program again, this error is reported:

failed call to cuInit: CUDA_ERROR_UNKNOWN: unknown error
retrieving CUDA diagnostic information for host: ...

TensorFlow then falls back to running on the old, slow CPU.

Environment

Ubuntu 20.04
TensorFlow 2.5
cudatoolkit 11.2
cudnn 8.1

Solution

Most likely, the graphics card driver got broken somehow.

Because the system (Ubuntu) happened to auto-update right before, it probably replaced some NVIDIA files, and after the restart the driver no longer worked.

I opened the Software Updater, applied all pending updates, restarted, and that fixed it.
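After the update and reboot, a quick sanity check like the following (my own snippet, assuming TensorFlow 2.x) confirms whether the GPU is visible again:

import tensorflow as tf

# Should list at least one GPU device if the driver is healthy again
print(tf.config.list_physical_devices("GPU"))
# True if this TensorFlow build was compiled with CUDA support
print(tf.test.is_built_with_cuda())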

TensorFlow GPU errors (4 types of errors and their solutions)

I have just changed laptops and kept running into errors when training models with tensorflow-gpu, so I am writing down my solutions here. While trying to get the GPU to work I swapped the CUDA version, the cuDNN version, and the tensorflow-gpu and keras versions, so the errors reported below are a bit of a jumble.
Error 1:
2021-08-09 21:04:53.637764: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library cublas64_100.dll
2021-08-09 21:04:58.598447: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library cudnn64_7.dll
2021-08-09 21:17:47.603456: W tensorflow/stream_executor/cuda/redzone_allocator.cc:312] Internal: Invoking ptxas not supported on Windows
Relying on driver to perform ptx compilation. This message will be only logged once.
2021-08-09 21:17:47.675868: E tensorflow/stream_executor/cuda/cuda_blas.cc:428] failed to run cuBLAS routine: CUBLAS_STATUS_EXECUTION_FAILED
2021-08-09 21:17:47.676730: I tensorflow/stream_executor/stream.cc:4963] [stream=000001774007A1F0,impl=00000177393F7250] did not memzero GPU location; source: 000000726209DF28
2021-08-09 21:17:47.676867: I tensorflow/stream_executor/stream.cc:316] did not allocate timer: 000000726209DED0
2021-08-09 21:17:47.676954: I tensorflow/stream_executor/stream.cc:1964] [stream=000001774007A1F0,impl=00000177393F7250] did not enqueue 'start timer': 000000726209DED0
2021-08-09 21:17:47.677084: I tensorflow/stream_executor/stream.cc:1976] [stream=000001774007A1F0,impl=00000177393F7250] did not enqueue 'stop timer': 000000726209DED0
2021-08-09 21:17:47.677201: F tensorflow/stream_executor/gpu/gpu_timer.cc:65] Check failed: start_event_ != nullptr && stop_event_ != nullptr
Error 2:
Error 3:
tensorflow.python.framework.errors_impl.UnknownError:  Failed to get convolution algorithm. This is probably because cuDNN failed to initialize, so try looking to see if a warning log message was printed above.
Error 4:
CuDNN library: 7.4.1 but source was compiled with: 7.6.0.  CuDNN library major and minor version needs to match or have higher minor version in case of CuDNN 7.0 or later version. If using a binary install, upgrade your CuDNN library.  If building from sources, make sure the library loaded at runtime is compatible with the version specified during compile configuration
In fact, all of these errors had the same cause: my laptop's graphics card is a 3050 Ti, which belongs to the 30 series and can only use CUDA 11 or higher, so I reinstalled CUDA 11.3.1 and the matching cuDNN 8.2.0 (cuDNN 8.2.1 still produced an error).
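To match the locally installed CUDA/cuDNN against what the TensorFlow binary expects, something like the following helps (a hedged sketch of my own; tf.sysconfig.get_build_info() is available in TensorFlow 2.3+):

import tensorflow as tf

# Reports the CUDA and cuDNN versions this TensorFlow wheel was built against
info = tf.sysconfig.get_build_info()
print("built for CUDA:", info.get("cuda_version"))
print("built for cuDNN:", info.get("cudnn_version"))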

After a new RTX 3060 video card arrives: how to solve tf.test.is_gpu_available() outputting False when configuring TensorFlow

First of all, install according to the normal installation method.
The necessary conditions for success are:
1. The version number must be right: CUDA 11.1 or higher must be installed (support for the 30-series Ampere-architecture cards starts at CUDA 11.1).
link: https://developer.nvidia.com/zh-cn/cuda-downloads
2. The matching cuDNN must be installed. Link (requires registering and logging in with an NVIDIA account): https://developer.nvidia.com/zh-cn/cudnn
If you haven't installed them yet, see other posts: https://so.csdn.net/so/search/all?q=3060%20tensorflow&t=all&p=1&s=0&tm=0&lv=-1&ft=0&l=&u=
After installation, enter the created environment and run tf.test.is_gpu_available().
If the computer can detect the graphics card, it will print the number of cores, compute capability and other parameters for each card, but the final answer may still be False.
If the command line shows that the file cusolver64_10.dll cannot be found, go to C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.1\bin, rename cusolver64_11.dll to cusolver64_10.dll, and then run tf.test.is_gpu_available() again.
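Instead of renaming the file by hand, you can also copy it under the new name so the original is preserved. A minimal sketch (my own, assuming the default CUDA v11.1 install path; it needs administrator rights to write into Program Files):

import os
import shutil

cuda_bin = r"C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.1\bin"
src = os.path.join(cuda_bin, "cusolver64_11.dll")
dst = os.path.join(cuda_bin, "cusolver64_10.dll")
# Copy cusolver64_11.dll to cusolver64_10.dll so TensorFlow finds the name it expects
if not os.path.exists(dst):
    shutil.copyfile(src, dst)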

And just like that, it works!

How to solve the ImportError when installing TensorFlow: libcublas.so.10.0 not found, failed to load the native TensorFlow runtime

Recently, installing tensorflow-gpu on a server kept running into the following error:
ImportError: libcublas.so.10.0: cannot open shared object file: No such file or directory
Failed to load the native TensorFlow runtime.
The error is about libcublas.so. Different tensorflow-gpu versions correspond to different CUDA and cuDNN versions, so how do I check which CUDA and cuDNN versions are needed? Roughly:

TensorFlow 1.6/1.5 → CUDA 9.0 → cuDNN v7.0.5
TensorFlow 1.4/1.3 → CUDA 8.0 → cuDNN v6.0
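To answer the question above, a small sketch like this (my own addition, assuming a standard /usr/local/cuda install on Linux) prints the installed CUDA and cuDNN versions:

import subprocess

# CUDA toolkit version as reported by the nvcc compiler
print(subprocess.run(["nvcc", "--version"], capture_output=True, text=True).stdout)

# The cuDNN version is recorded as #define lines in cudnn.h (cudnn_version.h on cuDNN 8+)
for header in ("/usr/local/cuda/include/cudnn.h",
               "/usr/local/cuda/include/cudnn_version.h"):
    try:
        with open(header) as f:
            print(header, [line.strip() for line in f
                           if line.startswith(("#define CUDNN_MAJOR",
                                               "#define CUDNN_MINOR",
                                               "#define CUDNN_PATCHLEVEL"))])
        break
    except FileNotFoundError:
        continue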
As a result, specify the matching tensorflow-gpu version and reinstall it (note: you do not need to uninstall the existing TF first; you can run pip directly, because it will detect the installed TF and uninstall it automatically).

pip install  tensorflow-gpu==1.5

Note: it is best to use a mirror such as the Tsinghua mirror; the download speed difference is substantial.

pip install -i https://pypi.tuna.tsinghua.edu.cn/simple tensorflow-gpu==1.5

You can run a quick test once it is installed:

import tensorflow as tf
sess = tf.Session(config=tf.ConfigProto(log_device_placement=True))

If a large amount of device placement information is printed, the installation was successful!

Introduction to the principle of Mali tile-based rendering

Foreword

This is a slightly in-depth look at Mali's architecture: it compares the basic pipeline of a conventional GPU with Mali's and lays out the advantages and disadvantages of each. Original address: https://developer.arm.com/graphics/developer-guides/tile-based-rendering
Traditional GPU

The architecture of a traditional GPU is generally called an immediate-mode GPU. The main flow is a vertex shader followed by a fragment shader, executed in sequence. The pseudocode is:

for draw in renderPass:
    for primitive in draw:
        for vertex in primitive:
            execute_vertex_shader(vertex)
        for fragment in primitive:
            execute_fragment_shader(fragment)

The data flow looks like this:

Advantages

The main advantage is that the output of the vertex shader stays on-chip and can be read directly and quickly by the next stage.

Disadvantages

If a lot of geometry (mostly triangles) has to be rendered, the framebuffer working set becomes very large. For example, the color and depth buffers for the entire screen consume far more storage than is available on-chip, so DDR has to be read and written frequently. Many operations on the current frame (such as blending, depth testing or stencil testing) all need to touch this working set, so the required bandwidth is huge and so is the energy consumption. For mobile devices this is not sustainable.
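To get a feel for the numbers, here is a rough back-of-the-envelope illustration of my own (not from the original article):

# Full-screen color + depth working set that an immediate-mode GPU keeps in DDR
width, height = 2560, 1440
bytes_per_pixel = 4 + 4          # RGBA8 color + 32-bit depth/stencil (approx.)
working_set_mb = width * height * bytes_per_pixel / 1e6
print(f"~{working_set_mb:.1f} MB touched per full-screen pass")   # ~29.5 MB
# By contrast, a single 16x16 tile (see below) needs only 16 * 16 * 8 = 2048 bytes of on-chip storage.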

Tile-based GPU

So Mali's GPUs use the tile-based concept: the image is divided into 16×16 tiles, each small tile is rendered and then written to DDR, which solves the problem by reducing the frequency of DDR reads and writes. But tiling requires knowing the geometry of the entire image first, so the work is broken into two passes:

  1. First pass: perform the geometry-related operations and generate a tile list.
  2. Second pass: run the fragment operations on each tile and, when a tile is finished, write it out to memory.

The pseudocode is as follows:

# Pass one
for draw in renderPass:
    for primitive in draw:
        for vertex in primitive:
            execute_vertex_shader(vertex)
        append_tile_list(primitive)

# Pass two
for tile in renderPass:
    for primitive in tile:
        for fragment in primitive:
            execute_fragment_shader(fragment)

The data flow is as follows:

Advantages

This obviously solves the bandwidth problem of the immediate-mode model: the fragment shader works on one small tile at a time, held in on-chip tile memory, so main memory does not need to be touched until the tile is finished and written out. Reads and writes to memory can be reduced further by compressing tiles, and when some areas of the image do not change, the GPU can compare tiles to decide whether they are identical and skip writing them, avoiding repeated rendering.

Disadvantages

The geometry output from the vertex stage has to be written out to DDR and then read back by the fragment shader, so there is a trade-off between the cost of writing tile data to DDR and the cost of the fragment shader reading its working set from DDR. Operations such as tessellation, which greatly expand the amount of geometry, are also not a good fit for a tile-based GPU.

Summary

Screen resolutions keep increasing, from 1080p to 1440p to 4K, so Mali's architecture is likely to be used on a large scale in the future.

But there are some pitfalls developers need to avoid: first, set up render passes properly so the architecture's strengths are actually exploited; second, understand the costs and benefits of splitting the work by geometry in this way.

Solution to unbalanced load across multiple GPUs (card 0 memory usage too high) in PyTorch model training (simple and effective)

This post mainly addresses the problem that, during PyTorch multi-GPU training, card 0 occupies far more video memory than the other cards. As shown in the figure below: the local GPUs are TITAN RTX cards with 24220 MB of video memory, batch_size = 9, and three cards are used. Card 0 already occupies 24207 MB right after training starts, when only a small amount of data has been moved to the GPU; once more data arrives, card 0's memory is bound to blow up. The reason card 0 uses more memory is that, during back-propagation, the gradients of the loss are by default computed and gathered on card 0, so it uses somewhat more memory than the other cards; how much more depends mainly on the structure of the network.

As a result, to prevent training from being interrupted by out-of-memory errors, the brute-force option is to set batch_size to 6, i.e. 2 samples per card.
batch_size = 6, everything else unchanged, as shown in the figure below.

Have you spotted the problem? Cards 1 and 2 now use less than 16 GB of video memory each; the batch_size has been sacrificed just because card 0 might exceed its memory by a small amount.
So is there a more elegant way? Yes: borrow the BalancedDataParallel class used in transformer-xl. The code is as follows (source):

import torch
from torch.nn.parallel.data_parallel import DataParallel
from torch.nn.parallel.parallel_apply import parallel_apply
from torch.nn.parallel._functions import Scatter


def scatter(inputs, target_gpus, chunk_sizes, dim=0):
    r"""
    Slices tensors into approximately equal chunks and
    distributes them across given GPUs. Duplicates
    references to objects that are not tensors.
    """

    def scatter_map(obj):
        if isinstance(obj, torch.Tensor):
            try:
                return Scatter.apply(target_gpus, chunk_sizes, dim, obj)
            except Exception:
                print('obj', obj.size())
                print('dim', dim)
                print('chunk_sizes', chunk_sizes)
                quit()
        if isinstance(obj, tuple) and len(obj) > 0:
            return list(zip(*map(scatter_map, obj)))
        if isinstance(obj, list) and len(obj) > 0:
            return list(map(list, zip(*map(scatter_map, obj))))
        if isinstance(obj, dict) and len(obj) > 0:
            return list(map(type(obj), zip(*map(scatter_map, obj.items()))))
        return [obj for targets in target_gpus]

    # After scatter_map is called, a scatter_map cell will exist. This cell
    # has a reference to the actual function scatter_map, which has references
    # to a closure that has a reference to the scatter_map cell (because the
    # fn is recursive). To avoid this reference cycle, we set the function to
    # None, clearing the cell
    try:
        return scatter_map(inputs)
    finally:
        scatter_map = None


def scatter_kwargs(inputs, kwargs, target_gpus, chunk_sizes, dim=0):
    """Scatter with support for kwargs dictionary"""
    inputs = scatter(inputs, target_gpus, chunk_sizes, dim) if inputs else []
    kwargs = scatter(kwargs, target_gpus, chunk_sizes, dim) if kwargs else []
    if len(inputs) < len(kwargs):
        inputs.extend([() for _ in range(len(kwargs) - len(inputs))])
    elif len(kwargs) < len(inputs):
        kwargs.extend([{} for _ in range(len(inputs) - len(kwargs))])
    inputs = tuple(inputs)
    kwargs = tuple(kwargs)
    return inputs, kwargs


# A DataParallel variant that assigns a smaller, user-chosen share of the batch
# to GPU 0, so the extra memory GPU 0 needs for gathering outputs and gradients
# is balanced against the other cards.
class BalancedDataParallel(DataParallel):

    def __init__(self, gpu0_bsz, *args, **kwargs):
        self.gpu0_bsz = gpu0_bsz
        super().__init__(*args, **kwargs)

    def forward(self, *inputs, **kwargs):
        if not self.device_ids:
            return self.module(*inputs, **kwargs)
        if self.gpu0_bsz == 0:
            device_ids = self.device_ids[1:]
        else:
            device_ids = self.device_ids
        inputs, kwargs = self.scatter(inputs, kwargs, device_ids)
        if len(self.device_ids) == 1:
            return self.module(*inputs[0], **kwargs[0])
        replicas = self.replicate(self.module, self.device_ids)
        if self.gpu0_bsz == 0:
            replicas = replicas[1:]
        outputs = self.parallel_apply(replicas, device_ids, inputs, kwargs)
        return self.gather(outputs, self.output_device)

    def parallel_apply(self, replicas, device_ids, inputs, kwargs):
        return parallel_apply(replicas, inputs, kwargs, device_ids)

    def scatter(self, inputs, kwargs, device_ids):
        bsz = inputs[0].size(self.dim)
        num_dev = len(self.device_ids)
        gpu0_bsz = self.gpu0_bsz
        bsz_unit = (bsz - gpu0_bsz) // (num_dev - 1)
        if gpu0_bsz < bsz_unit:
            chunk_sizes = [gpu0_bsz] + [bsz_unit] * (num_dev - 1)
            delta = bsz - sum(chunk_sizes)
            for i in range(delta):
                chunk_sizes[i + 1] += 1
            if gpu0_bsz == 0:
                chunk_sizes = chunk_sizes[1:]
        else:
            return super().scatter(inputs, kwargs, device_ids)
        return scatter_kwargs(inputs, kwargs, device_ids, chunk_sizes, dim=self.dim)

As you can see, BalancedDataParallel inherits from torch.nn.DataParallel and lets you set the batch size of card 0 through the custom gpu0_bsz argument, i.e. card 0 gets a bit less data, balancing its memory usage against the other cards. The invocation code is as follows:

from balanced_data_parallel import BalancedDataParallel  # adjust the module name to wherever the class above is saved

if n_gpu > 1:
    model = BalancedDataParallel(2, model, dim=0).to(device)  # gpu0_bsz = 2
    # model = torch.nn.DataParallel(model)

gpu0_bsz: the batch_size assigned to GPU 0;
model: the model to wrap;
dim: the batch dimension.
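For example, with batch_size = 8, gpu0_bsz = 2 and three cards, the scatter method above computes the per-card chunks like this (a small worked illustration of my own):

bsz, gpu0_bsz, num_dev = 8, 2, 3
bsz_unit = (bsz - gpu0_bsz) // (num_dev - 1)            # (8 - 2) // 2 = 3
chunk_sizes = [gpu0_bsz] + [bsz_unit] * (num_dev - 1)   # [2, 3, 3]
for i in range(bsz - sum(chunk_sizes)):                 # spread any remainder over cards 1..n
    chunk_sizes[i + 1] += 1
print(chunk_sizes)   # [2, 3, 3] -> card 0 gets 2 samples, cards 1 and 2 get 3 each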

So we set batch_size to 8 with gpu0_bsz = 2 and tried it; the results are as follows:

The batch_size went from 6 to 8 successfully. Because card 0 gets one sample fewer per batch, its memory usage is now smaller than the other cards'. Trading a little of the other cards' memory headroom to relieve card 0 ultimately allows a larger batch_size, which is still a win. The advantage of this method is even more obvious when the number of cards is large.