Category Archives: Error

Uncaught (in promise) Error: Delete success at __webpack_exports__.default 405 error

Problem description

After deleting a user, the page should fetch the user list again, but the request never completes. The console shows: Uncaught (in promise) Error: Delete success at __webpack_exports__.default.

Solution

The reason is that requests are wrapped by the project, and the response is hijacked based on its status code: the DELETE returns a non-200 code (here 405), so the wrapper rejects the promise even though the deletion succeeded. Find where the response is processed in the project, comment that check out, and return the response directly.
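
The fix can be sketched as follows, assuming the project wraps requests with something like an axios response interceptor (the function body and status field below are illustrative assumptions, not the project's actual code):

```javascript
// Hypothetical response interceptor. Before the fix it rejected the
// promise for any non-200 status, so even a successful DELETE (which
// here returns 405 with a "Delete success" message) surfaced as
// "Uncaught (in promise) Error: ...".
function responseInterceptor(response) {
  // Before: hijack based on the status code
  // if (response.status !== 200) {
  //   return Promise.reject(new Error(response.data.msg));
  // }

  // After: return the response directly and let each caller decide
  return response;
}

// With axios this would be registered as:
// axios.interceptors.response.use(responseInterceptor);
```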

ffmpeg: error while loading shared libraries: libopenh264.so.5

Problem Description:
On an Ubuntu 10.04 system, we wanted to use ffmpeg for face cropping, but it failed with the error in the title. Even ffmpeg -version would not print a version number. After Googling [ https://stackoverflow.com/questions/62213783/ffmpeg-error-while-loading-shared-libraries-libopenh264-so-5 ], it turned out the installed libopenh264 was too new.

My conda virtual environment is named houyw; wherever houyw appears below, substitute your own environment name.

(1) First, I tried the command from the accepted answer:

sudo ln -s ~/anaconda3/lib/libopenh264.so ~/anaconda3/envs/houyw/lib/libopenh264.so.5

However, there is no libopenh264.so in my ~/anaconda3/lib directory, so this just produced a "file does not exist" error.

(2) Although ~/anaconda3/lib has no libopenh264.so, ~/anaconda3/envs/houyw/lib does. So I tried linking against the libopenh264.so inside ~/anaconda3/envs/houyw/lib instead (deleting the stale libopenh264.so.5 first):

cd ~/anaconda3/envs/houyw/lib
rm -rf libopenh264.so.5
sudo ln -s libopenh264.so libopenh264.so.5

(3) Done: ffmpeg -version now prints the version number correctly.
(4) Summary: if libopenh264.so exists in ~/anaconda3/lib, use method 1; if not, use method 2. I hope everyone can solve this problem!
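
The two methods can be combined into one small helper script; this is a sketch based on the paths in this post (adjust the base and env lib directories to your own installation):

```shell
# Prefer method 1 (link the base anaconda copy into the env);
# fall back to method 2 (reuse the env's own libopenh264.so).
fix_openh264() {
  base_lib="$1"   # e.g. ~/anaconda3/lib
  env_lib="$2"    # e.g. ~/anaconda3/envs/houyw/lib
  if [ -e "$base_lib/libopenh264.so" ]; then
    # Method 1: link the base install's library into the env
    ln -sf "$base_lib/libopenh264.so" "$env_lib/libopenh264.so.5"
  else
    # Method 2: remove the stale .so.5 and point it at the env's own copy
    rm -f "$env_lib/libopenh264.so.5"
    ln -s libopenh264.so "$env_lib/libopenh264.so.5"
  fi
}
```

Afterwards, ffmpeg -version should print the version again.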

CDH-hue : Could not start SASL: Error in sasl_client_start (-4) SASL(-4): no mechanism available

Problem: encountered during server-side development work.
Solution:
Run either one of the following commands on the server, then restart hue:

yum install cyrus-sasl-plain cyrus-sasl-devel cyrus-sasl-gssapi

sudo yum install apache-maven ant asciidoc cyrus-sasl-devel cyrus-sasl-gssapi gcc gcc-c++ krb5-devel libxml2-devel libxslt-devel make mysql mysql-devel openldap-devel python-devel sqlite-devel gmp-devel

[error record] Android application release packaging errors (turn off lint checks / log handling / release configuration)

1. Turn off lint checks

When an Android application is packaged, a series of lint checks run (for example, on the placement of layout files), which is cumbersome;

In the module's build.gradle, add the following configuration to keep checking but ignore minor errors during packaging:

android {
    lintOptions {
        checkReleaseBuilds false
        // Or, if you prefer, you can continue to check for errors in release builds,
        // but continue the build even when errors are found:
        abortOnError false
    }
}
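
Note: on newer Android Gradle Plugin versions (7.0 and later), lintOptions is deprecated in favor of a lint block; a sketch of the equivalent configuration, assuming AGP 7+:

```groovy
android {
    lint {
        checkReleaseBuilds = false
        abortOnError = false
    }
}
```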

2. Log handling

Use BuildConfig.DEBUG, the flag generated for the current build type, to decide whether to print logs;

public final class BuildConfig {
  public static final boolean DEBUG = Boolean.parseBoolean("true");
  public static final String APPLICATION_ID = "cn.zkhw.midi";
  public static final String BUILD_TYPE = "debug";
  public static final int VERSION_CODE = 1;
  public static final String VERSION_NAME = "0.1";
}

If the current build is a release build, the value of BuildConfig.DEBUG is false;

Example log utility class:

public class L {

    public static void i(String TAG, String msg) {
        if (BuildConfig.DEBUG)
            Log.i(TAG, msg);
    }
}

3. Release build optimization configuration

In general, a release build needs the following configuration;

android {
    buildTypes {
        debug {
        }

        release {
            zipAlignEnabled true     // align APK contents for faster loading
            shrinkResources true     // remove unused resources
            minifyEnabled true       // enable code shrinking and obfuscation
            proguardFiles getDefaultProguardFile('proguard-android.txt'), 'proguard-rules.pro'
        }
    }
}

WeChat Mini Program TypeError: a solution to "Cannot read property 'setData' of undefined"

Problem

In a custom tap event handler, calling setData throws an error saying it is undefined.

Cause

Inside the success callback function, this no longer refers to the page instance, so this.setData is undefined.

delTaskTap(e){
    wx.showModal({
      title:'Tips',
      content:'Do I have to clear all my to-do lists?',
      success:function(res){
        if(res.confirm){
          this.setData({
            taskText:[]
          })
          console.log('Clear all to-do items')
        }
        else if(res.cancel){
          console.log('Not clearing all to-do items')
        }
      }
    })
}

Solution

delTaskTap(e){
    var that=this
    wx.showModal({
      title:'Tips',
      content:'Do I have to clear all my to-do lists?',
      success:function(res){
        if(res.confirm){
          that.setData({
            taskText:[]
          })
          console.log('Clear all to-do items')
        }
        else if(res.cancel){
          console.log('Not clearing all to-do items')
        }
      }
    })
}
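
An alternative fix, assuming the Mini Program build supports ES6: use an arrow function for the success callback, which has no this of its own and therefore keeps the page instance. The wx stub and page object below are made up so the sketch is self-contained and runnable outside the Mini Program environment:

```javascript
// Stub of the Mini Program API: immediately "confirms" the modal.
const wx = {
  showModal({ success }) { success({ confirm: true, cancel: false }); }
};

// Stand-in for the page instance.
const page = {
  data: { taskText: ['buy milk', 'write blog'] },
  setData(patch) { Object.assign(this.data, patch); },

  delTaskTap(e) {
    wx.showModal({
      title: 'Tips',
      content: 'Do I have to clear all my to-do lists?',
      // Arrow function: `this` is still the page instance here,
      // so no `var that = this` is needed.
      success: (res) => {
        if (res.confirm) {
          this.setData({ taskText: [] });
          console.log('Clear all to-do items');
        } else if (res.cancel) {
          console.log('Not clearing all to-do items');
        }
      }
    });
  }
};
```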

[HBase Error]“java.lang.OutOfMemoryError: Requested array size exceeds VM limit”

Versions used: CDH 5.4.5, HBase 1.0.0.

Shortly after I joined the new company, a RegionServer went down. The exception reported was as follows:


2017-05-12 21:15:26,396 FATAL [B.defaultRpcServer.handler=123,queue=6,port=60020] regionserver.RSRpcServices: Run out of memory; RSRpcServices will abort itself immediately
java.lang.OutOfMemoryError: Requested array size exceeds VM limit
    at java.nio.HeapByteBuffer.<init>(HeapByteBuffer.java:57)
    at java.nio.ByteBuffer.allocate(ByteBuffer.java:331)
    at org.apache.hadoop.hbase.io.ByteBufferOutputStream.checkSizeAndGrow(ByteBufferOutputStream.java:77)
    at org.apache.hadoop.hbase.io.ByteBufferOutputStream.write(ByteBufferOutputStream.java:116)
    at org.apache.hadoop.hbase.KeyValue.oswrite(KeyValue.java:2532)
    at org.apache.hadoop.hbase.KeyValueUtil.oswrite(KeyValueUtil.java:548)
    at org.apache.hadoop.hbase.codec.KeyValueCodec$KeyValueEncoder.write(KeyValueCodec.java:58)
    at org.apache.hadoop.hbase.ipc.IPCUtil.buildCellBlock(IPCUtil.java:122)
    at org.apache.hadoop.hbase.ipc.RpcServer$Call.setResponse(RpcServer.java:376)
    at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:130)
    at org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:130)
    at org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:107)
    at java.lang.Thread.run(Thread.java:745)

The key phrase is "Requested array size exceeds VM limit".

In OpenJDK's HotSpot VM, the maximum array length is capped a couple of elements below Integer.MAX_VALUE (2^31 − 1); requesting a larger array makes the JVM throw this error.

This is really a bug in HBase IPC: in some cases the length of the array it creates exceeds the JVM limit. Searching turned up a patch that fixes the problem:
HBASE-14598 checks the array length and, when it is too large, sends an exception directly to the client instead of crashing. That covers the direct cause, but from an operations point of view the more important question is: which table's requests trigger this?

Another patch, HBASE-16033, adds more logging. With it applied, we finally found log lines of the following kind:


[B.defaultRpcServer.handler=90,queue=12,port=60020] ipc.RpcServer: (responseTooLarge): {"processingtimems":2822,"call":"Multi(org.apache.hadoop.hbase.protobuf.generated.ClientProtos$MultiRequest)","client":"10.120.69.147:43481","param":"region= ., for 1 actions and 1st row key=A","starttimems":1494609020832,"queuetimems":0,"class":"HRegionServer","responsesize":31697082,"method":"Multi"}

The key field is responsesize: a single call returned far too much data (about 31 MB here), which is what triggers the problem.

During the search we also found others hitting what was essentially the same problem. Notably, two more patches, HBASE-14946 and HBASE-14978, address batch reads and writes exceeding the limit: the patches above only improve the error reporting, while these two tackle the underlying cause.

We need to find time to upgrade. I hope this helps you.

Deep learning model error + 1: CUDA error: device side assert triggered

Scenario:
Some time ago, the Faster R-CNN model ran without problems in Google's Colab. Later, running the same code on a server rented from Featurize, it kept failing with "CUDA error: device-side assert triggered".
The last two days nearly drove me crazy. There are many blog posts about this error; most say the labels are out of bounds, and some blame the loss computation.
I could only debug step by step, and in the end solved my own problem.

'''When running with GPU, this function reports an error “CUDA error: device-side assert triggered”'''
perm1 = torch.randperm(positive.numel(), device=positive.device)[:num_pos]
perm2 = torch.randperm(negative.numel(), device=negative.device)[:num_neg]

'''After modification, change device to cpu'''
perm1 = torch.randperm(positive.numel(), device="cpu")[:num_pos]
perm2 = torch.randperm(negative.numel(), device="cpu")[:num_neg]

Making a note of it here, hoping it helps someone in the same situation.

[environment] docker: error response from daemon: OCI runtime

Background

When an image built on one machine is exported and then loaded on another machine, the following error occurs:

# import the image
docker import example.tar

# run the container
docker run -it example:v20210119 /bin/bash

# error:
docker: Error response from daemon: OCI runtime create failed: container_linux.go:348: 
starting container process caused "exec: \"/bin/bash\": stat /bin/bash: no such file or directory": unknown.

Googling descriptions of this problem turns up roughly two schools of explanation.

Left wing school

The image may have no /bin/bash; try /bin/sh instead. Tried it; the problem remains.

Right wing

A compatibility problem between the Linux distribution and Docker: uninstall the old Docker version and install the latest one.

The left-wing solution is easy to verify; after repeated attempts, the problem remained unsolved.

The right-wing solution is harder to verify, but intuition said this was not the real cause.

Sure enough, I eventually found the right answer. After a careful read, a short description, and a quick verification, the problem was solved.

This error occurs at docker run and is caused by how the image was saved. docker import is meant for container filesystems created with docker export; an image tarball created with docker save must be loaded with docker load. Importing a save'd tarball does not report an error at import time, but the resulting image is broken, and docker run then fails as above.

Solution: load the image with docker load instead:

docker load < buildroot_v20210119.tar

Compiling a .h file fails with the error "error: backslash-newline at end of file [-Werror]"

Solution: make sure the file ends with a plain newline (add a blank line at the end of the file).

For example, compiling the following .h file reports the error:

#define func1(name, begin)          \
    static thread_local A __x_y_z_agg_##name(#name); \
    (__x_y_z_agg_##name).B(begin)
    
#define func2 A::C

It needs to be changed to the following (the fix is invisible in print: an ordinary blank line is added after the last line, so the file no longer ends in a backslash-newline):

#define func1(name, begin)          \
    static thread_local A __x_y_z_agg_##name(#name); \
    (__x_y_z_agg_##name).B(begin)
    
#define func2 A::C
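
The difference is easy to miss because it is invisible: the error fires when the last bytes of the file are a backslash followed by a newline. A small reproduction sketch (the file names are made up; assumes gcc is installed):

```shell
# bad.h: the file's final line ends with a backslash (a line continuation
# with nothing to continue into), i.e. backslash-newline at end of file.
printf '#define BAD 1 \\\n' > bad.h

# good.h: ends with a plain newline (a trailing blank line also works).
printf '#define GOOD 1\n\n' > good.h

gcc -Werror -fsyntax-only -x c good.h                      # compiles cleanly
gcc -Werror -fsyntax-only -x c bad.h || echo "bad.h rejected"
```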

AspectJ cannot intercept method annotations on an interface

AspectJ cannot intercept a method annotation declared on an interface; the @annotation pointcut only matches when the annotation is on the implementation class's own method. To handle the interface case, a MethodInterceptor is needed.
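
The underlying reason is plain Java reflection: method annotations are never inherited from an interface (and @Inherited only applies to class-level annotations inherited from superclasses), so the implementation's Method object simply does not carry @AopTest. A minimal self-contained demonstration (the names mirror this post, but the class is hypothetical):

```java
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.reflect.Method;

public class AnnotationDemo {
    @Retention(RetentionPolicy.RUNTIME)
    @interface AopTest {}

    interface TestAOPI {
        @AopTest
        String test();
    }

    static class TestAOPService implements TestAOPI {
        @Override
        public String test() { return "service"; }
    }

    public static void main(String[] args) throws Exception {
        Method onInterface = TestAOPI.class.getMethod("test");
        Method onImpl = TestAOPService.class.getMethod("test");
        // The interface method carries the annotation ...
        System.out.println(onInterface.isAnnotationPresent(AopTest.class)); // true
        // ... but the implementing method does not, which is why an
        // @annotation pointcut never matches implementation class 1.
        System.out.println(onImpl.isAnnotationPresent(AopTest.class));      // false
    }
}
```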

@Target(ElementType.METHOD)
@Retention(RetentionPolicy.RUNTIME)
@Documented
@Inherited
public @interface AopTest {
}

Interface

public interface TestAOPI {
    @AopTest
    public String test();

}

Implementation class 1

@Service
public class TestAOPService implements TestAOPI{
    @Override
    public String test() {
        return "service";
    }
}

Implementation class 2

@Service
public class TestAOPService2 implements TestAOPI{
	@AopTest
    @Override
    public String test() {
        return "service";
    }
}

AspectJ (partially effective)

The aspect takes effect if and only if the @AopTest annotation is placed on the implementation class's method (implementation class 2); implementation class 1 is not intercepted.

@Aspect
@Configuration
public class AopTestAspect {
    /**
     * Matches methods annotated with @AopTest
     */
    @Pointcut("@annotation(com.example.demo1.config.AopTest)")
    public void methodHasAopTestAnnotation() {
    }
    
    @Around("methodHasAopTestAnnotation()")
    public Object doAround(ProceedingJoinPoint joinPoint) throws Throwable {
        System.out.println("aop!!!");
        return joinPoint.proceed();
    }
}

Solution

Instead, register a pointcut advisor manually:

@Configuration
public class AopTestConfiguration {
	@Bean
    public Advisor methodPointcutAdvisor() {
        AopTestMethodPointcutAdvisor advisor = new AopTestMethodPointcutAdvisor();
        advisor.setAdvice(new AopTestInterceptor());
        return advisor;
    }

    class AopTestInterceptor implements MethodInterceptor {
        @Override
        public Object invoke(MethodInvocation invocation) throws Throwable {
            String name = invocation.getMethod().getName();
            System.out.println("==============" + name + " before ================");
            Object result = invocation.proceed();
            System.out.println("==============" + name + " after ================");
            return result;
        }
    }
    public class AopTestMethodPointcutAdvisor extends StaticMethodMatcherPointcutAdvisor {
        @Override
        public boolean matches(Method method, Class<?> targetClass) {
        	// Implementing a class method with a target annotation on it
            if(method.isAnnotationPresent(AopTest.class)){
                return true;
            }
            // The method has a corresponding interface method and the interface method is annotated
            Class<?>[] interfaces = method.getDeclaringClass().getInterfaces();
            for (int i = 0; i < interfaces.length; i++) {
                Method[] methods = interfaces[i].getMethods();
                for (int j = 0; j < methods.length; j++) {
                    // NOTE: matched by name only; overloaded methods would also need a parameter-type check
                    if(methods[j].getName().equals(method.getName())){
                        return methods[j].isAnnotationPresent(AopTest.class);
                    }
                }
            }
            return false;
        }
    }
}