
[Solved] ABAP: BAPI_ACC_DOCUMENT_POST Posting Error: FI/CO interface: inconsistent FI/CO document header data to be updated

Problem: When posting documents automatically with BAPI_ACC_DOCUMENT_POST, the BAPI returns the error “FI/CO interface: inconsistent FI/CO document header data to be updated”.

Possible reasons:

1. Check whether the company code in the document header and the company code on the line items are consistent. If they are the same, do not assign the company code (BUKRS) to the line items; remove or comment out this assignment:

" it_item-comp_code = wa_account-bukrs.

2. Check whether any line item amount is 0; a zero line item amount also triggers this error.

3. Check whether the sign of each amount matches its debit/credit posting key: posting key 40 (debit) requires a positive amount, posting key 50 (credit) requires a negative amount. The amount is assigned here (a sign-correction sketch follows):

it_curr-amt_doccur = wa_account-wrbtr.
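
A minimal sketch of the sign correction, assuming the source structure wa_account also carries the BSEG debit/credit indicator SHKZG ('S' = debit, 'H' = credit); this indicator field is an assumption, not shown in the original post:

" Force the sign of the amount to match the posting key.
it_curr-amt_doccur = abs( wa_account-wrbtr ).
IF wa_account-shkzg = 'H'. " credit item (posting key 50)
  it_curr-amt_doccur = - it_curr-amt_doccur.
ENDIF.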

Solution:

Work through the three checks above, identify which one applies to your program, and correct the corresponding assignment.

petalinux-build Error: ERROR: Task (/opt/pkg/petalinux/components/yocto/source/aarch64/layers/meta-xilinx-tools/recipes-bsp/fsbl/fsbl_git.bb:do_compile) failed with exit code ‘1’

The full build output is as follows:

co@nvdla:~/petaproj/nvdla$ petalinux-build 
[INFO] building project
[INFO] sourcing bitbake
[INFO] generating user layers
INFO: bitbake petalinux-user-image
Loading cache: 100% |############################################| Time: 0:00:00
Loaded 3811 entries from dependency cache.
Parsing recipes: 100% |##########################################| Time: 0:00:03
Parsing of 2777 .bb files complete (2775 cached, 2 parsed). 3812 targets, 150 skipped, 0 masked, 0 errors.
NOTE: Resolving any missing task queue dependencies
Initialising tasks: 100% |#######################################| Time: 0:00:02
Checking sstate mirror object availability: 100% |###############| Time: 0:00:11
Sstate summary: Wanted 174 Found 9 Missed 330 Current 757 (5% match, 82% complete)
NOTE: Executing SetScene Tasks
NOTE: Executing RunQueue Tasks
ERROR: fsbl-2019.1+gitAUTOINC+26c14d9861-r0 do_compile: Function failed: do_compile (log file is located at /home/co/petaproj/nvdla/build/tmp/work/plnx_zynqmp-xilinx-linux/fsbl/2019.1+gitAUTOINC+26c14d9861-r0/temp/log.do_compile.899)
ERROR: Logfile of failure stored in: /home/co/petaproj/nvdla/build/tmp/work/plnx_zynqmp-xilinx-linux/fsbl/2019.1+gitAUTOINC+26c14d9861-r0/temp/log.do_compile.899
Log data follows:
| DEBUG: Executing shell function do_compile
| aarch64-none-elf-gcc -MMD -MP      -Wall -fmessage-length=0 -DARMA53_64 -Os -flto -ffat-lto-objects -Wall -fmessage-length=0 -DARMA53_64 -Os -flto -ffat-lto-objects    -c xfsbl_board.c -o xfsbl_board.o -Izynqmp_fsbl_bsp/psu_cortexa53_0/include -I.
| In file included from xfsbl_board.c:54:
| xfsbl_board.h:64:10: fatal error: xiicps.h: No such file or directory
|  #include "xiicps.h"
|           ^~~~~~~~~~
| compilation terminated.
| make: *** [Makefile:34: xfsbl_board.o] Error 1
| WARNING: /home/co/petaproj/nvdla/build/tmp/work/plnx_zynqmp-xilinx-linux/fsbl/2019.1+gitAUTOINC+26c14d9861-r0/temp/run.do_compile.899:1 exit 2 from 'make'
| ERROR: Function failed: do_compile (log file is located at /home/co/petaproj/nvdla/build/tmp/work/plnx_zynqmp-xilinx-linux/fsbl/2019.1+gitAUTOINC+26c14d9861-r0/temp/log.do_compile.899)
ERROR: Task (/opt/pkg/petalinux/components/yocto/source/aarch64/layers/meta-xilinx-tools/recipes-bsp/fsbl/fsbl_git.bb:do_compile) failed with exit code '1'
NOTE: Tasks Summary: Attempted 2926 tasks of which 2910 didn't need to be rerun and 1 failed.

Summary: 1 task failed:
  /opt/pkg/petalinux/components/yocto/source/aarch64/layers/meta-xilinx-tools/recipes-bsp/fsbl/fsbl_git.bb:do_compile
Summary: There was 1 ERROR message shown, returning a non-zero exit code.
ERROR: Failed to build project

Solution:

1) Check whether the file <plnx-proj-root>/project-spec/meta-user/recipes-bsp/fsbl/fsbl_%.bbappend exists, and create it if it does not:

vim <plnx-proj-root>/project-spec/meta-user/recipes-bsp/fsbl/fsbl_%.bbappend

2) Add the following to the .bbappend:

do_compile_prepend() {
    install -m 0644 ${TOPDIR}/../project-spec/hw-description/psu_init.c ${B}/fsbl/psu_init.c
    install -m 0644 ${TOPDIR}/../project-spec/hw-description/psu_init.h ${B}/fsbl/psu_init.h
}

3) Clean and rebuild the fsbl component:

$ petalinux-build -c fsbl -x cleanall

$ petalinux-build -c fsbl
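
If the rebuild succeeds, the regenerated FSBL should appear in the project's build output directory (the file name below is from a ZynqMP project and may differ per version):

$ ls images/linux/zynqmp_fsbl.elf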

Problem solved!

Zuul Gateway Failed to Import Dependency [How to Solve]

Zuul gateway dependency import error.

Error message: the Zuul annotation on the startup class reports an error because the class cannot be found.

Analysis: check whether the dependency is declared in pom.xml.

Cause: the dependency was not declared. Declare it in pom.xml as follows:

<dependency>
    <groupId>org.springframework.cloud</groupId>
    <artifactId>spring-cloud-starter-netflix-zuul</artifactId>
</dependency>

The problem is solved after the dependency is introduced.
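
For reference, a minimal sketch of a startup class that relies on this dependency (the class name is illustrative):

import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.cloud.netflix.zuul.EnableZuulProxy;

// @EnableZuulProxy is the annotation that fails to resolve
// when spring-cloud-starter-netflix-zuul is missing from the pom.
@SpringBootApplication
@EnableZuulProxy
public class GatewayApplication {
    public static void main(String[] args) {
        SpringApplication.run(GatewayApplication.class, args);
    }
}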

gerapy Runs a scrapy Program and Reports a Timeout Error [How to Solve]

Recently I have been developing a scrapy project that uses a proxy.
It runs fine locally and crawls data normally.
But after deploying it to the online gerapy and running it there, it fails, and the logs show:

2022-10-08 17:03:24 [scrapy.downloadermiddlewares.retry] ERROR: Gave up retrying <GET https://www.xxx.com/en/product?o=100&p=100> (failed 3 times): User timeout caused connection failure: Getting https://www.xxx.com/en/product?o=100&p=100 took longer than 180.0 seconds..

2022-10-08 17:03:24 [xxx] ERROR: <twisted.python.failure.Failure twisted.internet.error.TimeoutError: User timeout caused connection failure: Getting https://www.xxx.com/en/product?o=100&p=100 took longer than 180.0 seconds..>

2022-10-08 17:03:24 [xxx] ERROR: TimeoutError on https://www.xxx.com/en/product?o=100&p=100

I suspected a proxy problem, so I removed the proxy; it still ran fine locally.
But after deploying to the online gerapy and running, the error was reported again, and the logs showed:

2022-10-09 10:39:45 [scrapy.downloadermiddlewares.retry] DEBUG: Retrying <GET https://www.xxx.com/en/product?o=100&p=100> (failed 3 times): [<twisted.python.failure.Failure twisted.internet.error.ConnectionLost: Connection to the other side was lost in a non-clean fashion: Connection lost.>]

2022-10-09 10:39:56 [scrapy.downloadermiddlewares.retry] ERROR: Gave up retrying <GET https://www.xxx.com/en/product?o=100&p=100> (failed 3 times): [<twisted.python.failure.Failure twisted.internet.error.ConnectionLost: Connection to the other side was lost in a non-clean fashion: Connection lost.>]

A workaround found on Stack Overflow:

Add an errback method to the spider:

# Imports
from scrapy.spidermiddlewares.httperror import HttpError
from twisted.internet.error import DNSLookupError
from twisted.internet.error import TimeoutError


    # Usage: attach the errback when yielding the request
    yield scrapy.Request(url=url,
                         meta={'dont_redirect': True},
                         dont_filter=True,
                         callback=self.parse_list,
                         errback=self.errback_work)


    # Errback definition
    def errback_work(self, failure):
        self.logger.error(repr(failure))
        if failure.check(HttpError):
            response = failure.value.response
            self.logger.error('HttpError on %s', response.url)
        elif failure.check(DNSLookupError):
            request = failure.request
            self.logger.error('DNSLookupError on %s', request.url)
        elif failure.check(TimeoutError):
            request = failure.request
            self.logger.error('TimeoutError on %s', request.url)

After deploying it to the online gerapy and running it again, it still reported errors.

Checking the scrapy version showed 2.6.1, so I downgraded it to 2.5.1:

pip3 install scrapy==2.5.1

Running it locally then reported an error:

AttributeError: module 'OpenSSL.SSL' has no attribute 'SSLv3_METHOD'

Checking the version of the pyopenssl library:

pip3 show pyopenssl

The installed version turned out to be incompatible.

So I pinned the library to version 22.0.0:

pip3 install pyopenssl==22.0.0

Running it locally again works normally!

And after deploying it to the online gerapy, it also runs normally!
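
To keep the two working versions pinned together, one option is to record them in the project's requirements.txt (a sketch; adjust to your environment):

scrapy==2.5.1
pyopenssl==22.0.0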

[Solved] shiro Error: org.apache.shiro.authc.UnknownAccountException

Error: the following is reported during login:

org.apache.shiro.authc.UnknownAccountException: Realm [org.apache.shiro.realm.SimpleAccountRealm@30cecdca] was unable to find account data for the submitted AuthenticationToken [org.apache.shiro.authc.UsernamePasswordToken - wmyskxz, rememberMe=false].

	at org.apache.shiro.authc.pam.ModularRealmAuthenticator.doSingleRealmAuthentication(ModularRealmAuthenticator.java:184)
	at org.apache.shiro.authc.pam.ModularRealmAuthenticator.doAuthenticate(ModularRealmAuthenticator.java:267)
	at org.apache.shiro.authc.AbstractAuthenticator.authenticate(AbstractAuthenticator.java:198)
	at org.apache.shiro.mgt.AuthenticatingSecurityManager.authenticate(AuthenticatingSecurityManager.java:106)
	at org.apache.shiro.mgt.DefaultSecurityManager.login(DefaultSecurityManager.java:274)
	at org.apache.shiro.subject.support.DelegatingSubject.login(DelegatingSubject.java:260)
	at com.example.demo.DemoApplicationTests.contextLoads(DemoApplicationTests.java:33)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:498)
	at org.junit.platform.commons.util.ReflectionUtils.invokeMethod(ReflectionUtils.java:725)
	at org.junit.jupiter.engine.execution.MethodInvocation.proceed(MethodInvocation.java:60)
	at org.junit.jupiter.engine.execution.InvocationInterceptorChain$ValidatingInvocation.proceed(InvocationInterceptorChain.java:131)
	at org.junit.jupiter.engine.extension.TimeoutExtension.intercept(TimeoutExtension.java:149)
	at org.junit.jupiter.engine.extension.TimeoutExtension.interceptTestableMethod(TimeoutExtension.java:140)
	at org.junit.jupiter.engine.extension.TimeoutExtension.interceptTestMethod(TimeoutExtension.java:84)
	at org.junit.jupiter.engine.execution.ExecutableInvoker$ReflectiveInterceptorCall.lambda$ofVoidMethod$0(ExecutableInvoker.java:115)
	at org.junit.jupiter.engine.execution.ExecutableInvoker.lambda$invoke$0(ExecutableInvoker.java:105)
	at org.junit.jupiter.engine.execution.InvocationInterceptorChain$InterceptedInvocation.proceed(InvocationInterceptorChain.java:106)
	at org.junit.jupiter.engine.execution.InvocationInterceptorChain.proceed(InvocationInterceptorChain.java:64)
	at org.junit.jupiter.engine.execution.InvocationInterceptorChain.chainAndInvoke(InvocationInterceptorChain.java:45)
	at org.junit.jupiter.engine.execution.InvocationInterceptorChain.invoke(InvocationInterceptorChain.java:37)
	at org.junit.jupiter.engine.execution.ExecutableInvoker.invoke(ExecutableInvoker.java:104)
	at org.junit.jupiter.engine.execution.ExecutableInvoker.invoke(ExecutableInvoker.java:98)
	at org.junit.jupiter.engine.descriptor.TestMethodTestDescriptor.lambda$invokeTestMethod$7(TestMethodTestDescriptor.java:214)
	at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73)
	at org.junit.jupiter.engine.descriptor.TestMethodTestDescriptor.invokeTestMethod(TestMethodTestDescriptor.java:210)
	at org.junit.jupiter.engine.descriptor.TestMethodTestDescriptor.execute(TestMethodTestDescriptor.java:135)
	at org.junit.jupiter.engine.descriptor.TestMethodTestDescriptor.execute(TestMethodTestDescriptor.java:66)
	at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$6(NodeTestTask.java:151)
	at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73)
	at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$8(NodeTestTask.java:141)
	at org.junit.platform.engine.support.hierarchical.Node.around(Node.java:137)
	at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$9(NodeTestTask.java:139)
	at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73)
	at org.junit.platform.engine.support.hierarchical.NodeTestTask.executeRecursively(NodeTestTask.java:138)
	at org.junit.platform.engine.support.hierarchical.NodeTestTask.execute(NodeTestTask.java:95)
	at java.util.ArrayList.forEach(ArrayList.java:1257)
	at org.junit.platform.engine.support.hierarchical.SameThreadHierarchicalTestExecutorService.invokeAll(SameThreadHierarchicalTestExecutorService.java:41)
	at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$6(NodeTestTask.java:155)
	at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73)
	at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$8(NodeTestTask.java:141)
	at org.junit.platform.engine.support.hierarchical.Node.around(Node.java:137)
	at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$9(NodeTestTask.java:139)
	at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73)
	at org.junit.platform.engine.support.hierarchical.NodeTestTask.executeRecursively(NodeTestTask.java:138)
	at org.junit.platform.engine.support.hierarchical.NodeTestTask.execute(NodeTestTask.java:95)
	at java.util.ArrayList.forEach(ArrayList.java:1257)
	at org.junit.platform.engine.support.hierarchical.SameThreadHierarchicalTestExecutorService.invokeAll(SameThreadHierarchicalTestExecutorService.java:41)
	at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$6(NodeTestTask.java:155)
	at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73)
	at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$8(NodeTestTask.java:141)
	at org.junit.platform.engine.support.hierarchical.Node.around(Node.java:137)
	at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$9(NodeTestTask.java:139)
	at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73)
	at org.junit.platform.engine.support.hierarchical.NodeTestTask.executeRecursively(NodeTestTask.java:138)
	at org.junit.platform.engine.support.hierarchical.NodeTestTask.execute(NodeTestTask.java:95)
	at org.junit.platform.engine.support.hierarchical.SameThreadHierarchicalTestExecutorService.submit(SameThreadHierarchicalTestExecutorService.java:35)
	at org.junit.platform.engine.support.hierarchical.HierarchicalTestExecutor.execute(HierarchicalTestExecutor.java:57)
	at org.junit.platform.engine.support.hierarchical.HierarchicalTestEngine.execute(HierarchicalTestEngine.java:54)
	at org.junit.platform.launcher.core.EngineExecutionOrchestrator.execute(EngineExecutionOrchestrator.java:107)
	at org.junit.platform.launcher.core.EngineExecutionOrchestrator.execute(EngineExecutionOrchestrator.java:88)
	at org.junit.platform.launcher.core.EngineExecutionOrchestrator.lambda$execute$0(EngineExecutionOrchestrator.java:54)
	at org.junit.platform.launcher.core.EngineExecutionOrchestrator.withInterceptedStreams(EngineExecutionOrchestrator.java:67)
	at org.junit.platform.launcher.core.EngineExecutionOrchestrator.execute(EngineExecutionOrchestrator.java:52)
	at org.junit.platform.launcher.core.DefaultLauncher.execute(DefaultLauncher.java:114)
	at org.junit.platform.launcher.core.DefaultLauncher.execute(DefaultLauncher.java:86)
	at org.junit.platform.launcher.core.DefaultLauncherSession$DelegatingLauncher.execute(DefaultLauncherSession.java:86)
	at org.junit.platform.launcher.core.SessionPerRequestLauncher.execute(SessionPerRequestLauncher.java:53)
	at com.intellij.junit5.JUnit5IdeaTestRunner.startRunnerWithArgs(JUnit5IdeaTestRunner.java:69)
	at com.intellij.rt.execution.junit.IdeaTestRunner$Repeater.startRunnerWithArgs(IdeaTestRunner.java:47)
	at com.intellij.rt.execution.junit.JUnitStarter.prepareStreamsAndStart(JUnitStarter.java:242)
	at com.intellij.rt.execution.junit.JUnitStarter.main(JUnitStarter.java:70)

Code:

package com.example.demo;

import org.apache.shiro.SecurityUtils;
import org.apache.shiro.authc.UsernamePasswordToken;
import org.apache.shiro.mgt.DefaultSecurityManager;
import org.apache.shiro.realm.SimpleAccountRealm;
import org.apache.shiro.subject.Subject;
import org.junit.Before;
import org.junit.jupiter.api.BeforeEach;
import org.junit.jupiter.api.Test;
import org.springframework.boot.test.context.SpringBootTest;


@SpringBootTest
public class DemoApplicationTests {

    private SimpleAccountRealm simpleAccountRealm = new SimpleAccountRealm();

    // Note: @Before is a JUnit 4 annotation; the JUnit 5 engine silently ignores it,
    // so this setup method never runs and the account is never registered.
    @Before
    public void addUsr(){
        simpleAccountRealm.addAccount("wmyskxz","123456");
    }

    @Test
    void contextLoads() {
        DefaultSecurityManager defaultSecurityManager =  new DefaultSecurityManager();
        defaultSecurityManager.setRealm(simpleAccountRealm);

        SecurityUtils.setSecurityManager(defaultSecurityManager);
        Subject subject = SecurityUtils.getSubject();

        UsernamePasswordToken token = new UsernamePasswordToken("wmyskxz","123456");
        subject.login(token);

        System.out.println("isAuthenticated:"+subject.isAuthenticated());

        subject.logout();

        System.out.println("isAuthenticated:"+subject.isAuthenticated());

    }

}

The cause of the error is that JUnit's @Before method never ran, so the account was never registered. This comes down to the JUnit version: Spring Boot 2.2.9 uses JUnit 5 by default, and in JUnit 5 @Before was replaced by @BeforeEach. Changing @Before to @BeforeEach fixes it.
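
For clarity, the corrected setup method (only the annotation changes):

    @BeforeEach
    public void addUsr() {
        simpleAccountRealm.addAccount("wmyskxz", "123456");
    }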

[Solved] Flink Error: Flink Hadoop is not in the classpath/dependencies

Error background:

While setting up Flink on a YARN cluster, the Flink cluster fails to start.

Version:

flink-1.14.6

hadoop-3.2.3

org.apache.flink.runtime.entrypoint.ClusterEntrypointException: Failed to initialize the cluster entrypoint StandaloneSessionClusterEntrypoint.
	at org.apache.flink.runtime.entrypoint.ClusterEntrypoint.startCluster(ClusterEntrypoint.java:216) ~[flink-dist_2.12-1.14.6.jar:1.14.6]
	at org.apache.flink.runtime.entrypoint.ClusterEntrypoint.runClusterEntrypoint(ClusterEntrypoint.java:617) [flink-dist_2.12-1.14.6.jar:1.14.6]
	at org.apache.flink.runtime.entrypoint.StandaloneSessionClusterEntrypoint.main(StandaloneSessionClusterEntrypoint.java:59) [flink-dist_2.12-1.14.6.jar:1.14.6]
Caused by: java.io.IOException: Could not create FileSystem for highly available storage path (hdfs:/flink/ha/default)
	at org.apache.flink.runtime.blob.BlobUtils.createFileSystemBlobStore(BlobUtils.java:92) ~[flink-dist_2.12-1.14.6.jar:1.14.6]
	at org.apache.flink.runtime.blob.BlobUtils.createBlobStoreFromConfig(BlobUtils.java:76) ~[flink-dist_2.12-1.14.6.jar:1.14.6]
	at org.apache.flink.runtime.highavailability.HighAvailabilityServicesUtils.createHighAvailabilityServices(HighAvailabilityServicesUtils.java:121) ~[flink-dist_2.12-1.14.6.jar:1.14.6]
	at org.apache.flink.runtime.entrypoint.ClusterEntrypoint.createHaServices(ClusterEntrypoint.java:361) ~[flink-dist_2.12-1.14.6.jar:1.14.6]
	at org.apache.flink.runtime.entrypoint.ClusterEntrypoint.initializeServices(ClusterEntrypoint.java:318) ~[flink-dist_2.12-1.14.6.jar:1.14.6]
	at org.apache.flink.runtime.entrypoint.ClusterEntrypoint.runCluster(ClusterEntrypoint.java:243) ~[flink-dist_2.12-1.14.6.jar:1.14.6]
	at org.apache.flink.runtime.entrypoint.ClusterEntrypoint.lambda$startCluster$1(ClusterEntrypoint.java:193) ~[flink-dist_2.12-1.14.6.jar:1.14.6]
	at org.apache.flink.runtime.security.contexts.NoOpSecurityContext.runSecured(NoOpSecurityContext.java:28) ~[flink-dist_2.12-1.14.6.jar:1.14.6]
	at org.apache.flink.runtime.entrypoint.ClusterEntrypoint.startCluster(ClusterEntrypoint.java:190) ~[flink-dist_2.12-1.14.6.jar:1.14.6]
	... 2 more
Caused by: org.apache.flink.core.fs.UnsupportedFileSystemSchemeException: Could not find a file system implementation for scheme 'hdfs'. The scheme is not directly supported by Flink and no Hadoop file system to support this scheme could be loaded. For a full list of supported file systems, please see https://nightlies.apache.org/flink/flink-docs-stable/ops/filesystems/.
	at org.apache.flink.core.fs.FileSystem.getUnguardedFileSystem(FileSystem.java:532) ~[flink-dist_2.12-1.14.6.jar:1.14.6]
	at org.apache.flink.core.fs.FileSystem.get(FileSystem.java:409) ~[flink-dist_2.12-1.14.6.jar:1.14.6]
	at org.apache.flink.core.fs.Path.getFileSystem(Path.java:274) ~[flink-dist_2.12-1.14.6.jar:1.14.6]
	at org.apache.flink.runtime.blob.BlobUtils.createFileSystemBlobStore(BlobUtils.java:89) ~[flink-dist_2.12-1.14.6.jar:1.14.6]
	at org.apache.flink.runtime.blob.BlobUtils.createBlobStoreFromConfig(BlobUtils.java:76) ~[flink-dist_2.12-1.14.6.jar:1.14.6]
	at org.apache.flink.runtime.highavailability.HighAvailabilityServicesUtils.createHighAvailabilityServices(HighAvailabilityServicesUtils.java:121) ~[flink-dist_2.12-1.14.6.jar:1.14.6]
	at org.apache.flink.runtime.entrypoint.ClusterEntrypoint.createHaServices(ClusterEntrypoint.java:361) ~[flink-dist_2.12-1.14.6.jar:1.14.6]
	at org.apache.flink.runtime.entrypoint.ClusterEntrypoint.initializeServices(ClusterEntrypoint.java:318) ~[flink-dist_2.12-1.14.6.jar:1.14.6]
	at org.apache.flink.runtime.entrypoint.ClusterEntrypoint.runCluster(ClusterEntrypoint.java:243) ~[flink-dist_2.12-1.14.6.jar:1.14.6]
	at org.apache.flink.runtime.entrypoint.ClusterEntrypoint.lambda$startCluster$1(ClusterEntrypoint.java:193) ~[flink-dist_2.12-1.14.6.jar:1.14.6]
	at org.apache.flink.runtime.security.contexts.NoOpSecurityContext.runSecured(NoOpSecurityContext.java:28) ~[flink-dist_2.12-1.14.6.jar:1.14.6]
	at org.apache.flink.runtime.entrypoint.ClusterEntrypoint.startCluster(ClusterEntrypoint.java:190) ~[flink-dist_2.12-1.14.6.jar:1.14.6]
	... 2 more
Caused by: org.apache.flink.core.fs.UnsupportedFileSystemSchemeException: Hadoop is not in the classpath/dependencies.
	at org.apache.flink.core.fs.UnsupportedSchemeFactory.create(UnsupportedSchemeFactory.java:55) ~[flink-dist_2.12-1.14.6.jar:1.14.6]
	at org.apache.flink.core.fs.FileSystem.getUnguardedFileSystem(FileSystem.java:528) ~[flink-dist_2.12-1.14.6.jar:1.14.6]
	at org.apache.flink.core.fs.FileSystem.get(FileSystem.java:409) ~[flink-dist_2.12-1.14.6.jar:1.14.6]
	at org.apache.flink.core.fs.Path.getFileSystem(Path.java:274) ~[flink-dist_2.12-1.14.6.jar:1.14.6]
	at org.apache.flink.runtime.blob.BlobUtils.createFileSystemBlobStore(BlobUtils.java:89) ~[flink-dist_2.12-1.14.6.jar:1.14.6]
	at org.apache.flink.runtime.blob.BlobUtils.createBlobStoreFromConfig(BlobUtils.java:76) ~[flink-dist_2.12-1.14.6.jar:1.14.6]
	at org.apache.flink.runtime.highavailability.HighAvailabilityServicesUtils.createHighAvailabilityServices(HighAvailabilityServicesUtils.java:121) ~[flink-dist_2.12-1.14.6.jar:1.14.6]
	at org.apache.flink.runtime.entrypoint.ClusterEntrypoint.createHaServices(ClusterEntrypoint.java:361) ~[flink-dist_2.12-1.14.6.jar:1.14.6]
	at org.apache.flink.runtime.entrypoint.ClusterEntrypoint.initializeServices(ClusterEntrypoint.java:318) ~[flink-dist_2.12-1.14.6.jar:1.14.6]
	at org.apache.flink.runtime.entrypoint.ClusterEntrypoint.runCluster(ClusterEntrypoint.java:243) ~[flink-dist_2.12-1.14.6.jar:1.14.6]
	at org.apache.flink.runtime.entrypoint.ClusterEntrypoint.lambda$startCluster$1(ClusterEntrypoint.java:193) ~[flink-dist_2.12-1.14.6.jar:1.14.6]
	at org.apache.flink.runtime.security.contexts.NoOpSecurityContext.runSecured(NoOpSecurityContext.java:28) ~[flink-dist_2.12-1.14.6.jar:1.14.6]
	at org.apache.flink.runtime.entrypoint.ClusterEntrypoint.startCluster(ClusterEntrypoint.java:190) ~[flink-dist_2.12-1.14.6.jar:1.14.6]
	... 2 more

The reason for the error:
Flink needs two extra jar dependencies to access HDFS. They are not shipped with the Flink distribution, so you have to add them yourself:

  1. flink-shaded-hadoop-3-3.1.1.7.2.9.0-173-9.0.jar
  2. commons-cli-1.5.0.jar

Solution:

Search the Maven repository (https://mvnrepository.com/) for the two jars listed above and download them.

Put the jars into the flink/lib directory.
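
A minimal sketch of fetching them from the command line (the commons-cli URL follows the standard Maven Central layout; the shaded Hadoop jar is published in a vendor repository, so locate its exact download link via mvnrepository.com first):

cd $FLINK_HOME/lib
wget https://repo1.maven.org/maven2/commons-cli/commons-cli/1.5.0/commons-cli-1.5.0.jar
# flink-shaded-hadoop-3 (3.1.1.7.2.9.0-173-9.0): download via the link found on mvnrepository.com

As an alternative, Flink's own documentation suggests exposing an existing Hadoop installation instead, via export HADOOP_CLASSPATH=$(hadoop classpath).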

JBOSS jBPM 7.73 Startup Error [How to Solve]

Download the zip distribution from this link: jBPM – Open Source Business Automation Toolkit – Getting Started. After unzipping it locally, executing jbpm-server\bin\standalone.bat is always interrupted by errors. The answer was finally found on YouTube: the cause is a system parameter setting in JBoss.

Solution:

1. Add the maven command path to the PATH environment variable.

2. Set the JVM parameter in the startup command: add -Djboss.as.management.blocking.timeout=2400 (the value is in seconds). This can be done in standalone.bat:

:RESTART
  "%JAVA%" %JAVA_OPTS% ^
   "-Dorg.jboss.boot.log.file=%JBOSS_LOG_DIR%\server.log" ^
   "-Dlogging.configuration=file:%JBOSS_CONFIG_DIR%/logging.properties" ^
   "-Dfile.encoding=utf-8"  ^
   "-Djboss.as.management.blocking.timeout=2400" ^

Or modify the standalone\configuration\standalone.xml configuration file and add the system-properties property

   <system-properties>
    ....
    ....
    <property name="jboss.as.management.blocking.timeout" value="2400"/>
   </system-properties>

After the modification, the service starts normally. Open http://127.0.0.1:8080/business-central and import the sample project.

[Solved] Mac M1 Debug Error: could not launch process: can not run under Rosetta

Debugging with VS Code or GoLand in an M1 environment reports the following error:

could not launch process: can not run under Rosetta, check that the installed build of Go is right for your CPU architecture

Main cause:

The M1 chip is based on the ARM architecture. If the installed Go SDK is the x86-64 (Intel) build, it runs under Rosetta, and the above error is reported during debugging.

Solution:

Re-download the ARM64 version of the Go SDK; the Go installer automatically overwrites the previous version.

After installation, verify the architecture with go env:
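
go env GOARCH   # should now print arm64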

Go debugging requires dlv (Delve). If dlv is not installed, install it:

go install github.com/go-delve/delve/cmd/dlv@latest

Debug again; it now works normally.

crypto-js Error: Malformed UTF-8 data [How to Solve]

crypto-js throws “Malformed UTF-8 data” when the decrypted bytes are not valid UTF-8, which typically happens when the server side does not encode its data with UTF-8.

Note: If your application is started from a jar package, refer to Solution 1; if it runs under Tomcat, refer to Solution 2.

Solution 1: jar package startup configuration

 java -jar -Dfile.encoding=utf-8 itemapi-1.0.1.jar

Solution 2: Tomcat configuration

Set the following in Tomcat’s catalina.bat:

set "JAVA_OPTS=%JAVA_OPTS% %LOGGING_CONFIG% -Dfile.encoding=UTF-8"

[Solved] Cannot read properties of undefined (reading ‘ajax’); Cannot read property ‘ajax’ of undefined

When sending requests using ajax in jQuery, the following error is reported: Cannot read properties of undefined (reading 'ajax'); Cannot read property 'ajax' of undefined

Code that reports the error:

            $.ajax({
                type:"POST",
                url:"pageServlet",
                data:jsonData,
                dataType:"json",
                success:function (data) {
                    alert(data);
                }
            })

Solution: The $ alias is undefined here (typically because jQuery.noConflict() released it or another library overwrote it), so call jQuery directly instead of $:

            jQuery.ajax({
                type:"POST",
                url:"pageServlet",
                data:jsonData,
                dataType:"json",
                success:function (data) {
                    alert(data);
                }
            })
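
Alternatively, to keep writing $ inside your own code, a common pattern is to pass jQuery into an immediately invoked function (a sketch):

(function ($) {
    // inside this scope, $ is guaranteed to be jQuery
    $.ajax({
        type: "POST",
        url: "pageServlet",
        data: jsonData,
        dataType: "json",
        success: function (data) {
            alert(data);
        }
    });
})(jQuery);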

[Solved] PyTorch Error: TypeError: exceptions must derive from BaseException

Project scenario:

PyTorch reports an error: TypeError: exceptions must derive from BaseException


Problem description

In base_options.py, the --netG parameter is restricted to the following choices:

self.parser.add_argument('--netG', type=str, default='p2hed', choices=['p2hed', 'refineD', 'p2hed_att'], help='selects model to use for netG')

However, the code that selects netG is written as follows:

def define_G(input_nc, output_nc, ngf, netG, n_downsample_global=3, n_blocks_global=9, n_local_enhancers=1, 
             n_blocks_local=3, norm='instance', gpu_ids=[]):    
    norm_layer = get_norm_layer(norm_type=norm)     
    if netG == 'p2hed':    
        netG = DDNet_p2hED(input_nc, output_nc, ngf, n_downsample_global, n_blocks_global, norm_layer)
    elif netG == 'refineDepth':
        netG = DDNet_RefineDepth(input_nc, output_nc, ngf, n_downsample_global, n_blocks_global, n_local_enhancers, n_blocks_local, norm_layer)
    elif netG == 'p2h_noatt':        
        netG = DDNet_p2hed_noatt(input_nc, output_nc, ngf, n_downsample_global, n_blocks_global, n_local_enhancers, n_blocks_local, norm_layer)
    else:
        raise('generator not implemented!')
    #print(netG)
    if len(gpu_ids) > 0:
        assert(torch.cuda.is_available())   
        netG.cuda(gpu_ids[0])
    netG.apply(weights_init)
    return netG

Cause analysis:

The code has no branch for ‘refineD’ (it checks ‘refineDepth’ instead), so when --netG refineD is selected, the program cannot find the network to build and falls into the else branch. The TypeError itself appears because raise is given a plain string (‘generator not implemented!’) rather than an exception instance, and Python only allows raising objects that derive from BaseException.


Solution:

Change “elif netG == ’refineDepth’:” to “elif netG == ’refineD’:” and it works.
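
Optionally, beyond the original fix, the else branch can raise a proper exception so that any future mismatch produces a readable error instead of this TypeError:

    else:
        raise NotImplementedError('generator [%s] not implemented!' % netG)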