
[Solved] Eureka Startup Error: Root name ('timestamp') does not match expected type EurekaApplications


1. Eureka startup error

Recently, while learning Eureka, the client reported an error on startup, and I checked a pile of useless answers online:

  • Some people say Spring Security's CSRF protection is turned on and you have to add configuration to disable CSRF authentication.
    But my configuration does not even include the Spring Security package, so there is no login authentication to turn off; that advice does not apply here.
  • Some people say you need to add userName:password to the Eureka URL. My Eureka has no username or password, so the problem should not be there either.

If you have actually introduced Spring Security and enabled login authentication, those settings do need to be added, but the fundamental problem is not there.

The error message is:

Root name ('timestamp') does not match expected ('applications') for type EurekaApplications


Caused by: com.fasterxml.jackson.databind.exc.MismatchedInputException: Root name ('timestamp') does not match expected ('applications') for type `org.springframework.cloud.netflix.eureka.http.EurekaApplications`
 at [Source: (org.springframework.util.StreamUtils$NonClosingInputStream); line: 1, column: 2] (through reference chain: org.springframework.cloud.netflix.eureka.http.EurekaApplications["timestamp"])

2. Solutions

Eureka requires that your service URL end with the /eureka suffix.
What I wrote, http://localhost:${server.port}/eurekajzj, is wrong.

Keep the server consistent with the client: http://localhost:${server.port}/eureka will do.

Otherwise, the error above appears.

3. Eureka server configuration

First, the server side. Here is the configuration used by the EurekaServer startup class:

spring.application.name=eureka-server
server.port=8761

# Turn off Eureka self-preservation
eureka.server.enable-self-preservation=false

eureka.instance.hostname=eureka-hostname-jzj
# Do not fetch the registry from itself
eureka.client.fetch-registry=false
# Do not register itself with Eureka
eureka.client.register-with-eureka=false

eureka.client.service-url.defaultZone = http://localhost:${server.port}/eurekajzj

4. Eureka client configuration

server.port=8088

#user service
spring.application.name=user-client

#eureka
eureka.client.serviceUrl.defaultZone=http://localhost:8761/eurekajzj

5. The problem

The problem is the eurekajzj suffix.
The serviceUrl configured on my Eureka server:
eureka.client.service-url.defaultZone = http://localhost:${server.port}/eurekajzj
The Eureka address configured on my client:
eureka.client.serviceUrl.defaultZone = http://localhost:8761/eurekajzj
Neither suffix ends with /eureka.

After changing the suffix, all Eureka services and clients work fine.
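
For reference, a minimal sketch of the corrected configuration, with the suffix changed to /eureka on both sides (hostnames and ports are the ones used in this post):

# Eureka server (application.properties)
eureka.client.service-url.defaultZone=http://localhost:${server.port}/eureka

# Eureka client (application.properties)
eureka.client.serviceUrl.defaultZone=http://localhost:8761/eureka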

How to Solve kubelet Start Error after k8s Cluster Restart


1. Versions: k8s 1.23.0, Docker CE 20.10.14

2. Starting kubelet reports the following error:

May 16 09:47:13 k8s-master kubelet: E0516 09:47:13.512956    7403 server.go:302] "Failed to run kubelet" err="failed to run Kubelet: misconfiguration: kubelet cgroup driver: \"systemd\" is different from docker cgroup driver: \"cgroupfs\""
May 16 09:47:13 k8s-master systemd: kubelet.service: main process exited, code=exited, status=1/FAILURE
May 16 09:47:13 k8s-master systemd: Unit kubelet.service entered failed state.
May 16 09:47:13 k8s-master systemd: kubelet.service failed

3. Problem analysis: according to the error message, kubelet's cgroup driver (systemd) is inconsistent with Docker's (cgroupfs).

4. Solution: modify the Docker configuration so Docker also uses the systemd cgroup driver

cat > /etc/docker/daemon.json <<EOF
{"exec-opts": ["native.cgroupdriver=systemd"]}
EOF

5. Restart Docker and kubelet to resolve the problem

[root@k8s-master ~]# systemctl restart docker
[root@k8s-master ~]# systemctl restart kubelet
[root@k8s-master ~]# 
[root@k8s-master ~]# systemctl status  kubelet
● kubelet.service - kubelet: The Kubernetes Node Agent
   Loaded: loaded (/usr/lib/systemd/system/kubelet.service; enabled; vendor preset: disabled)
  Drop-In: /usr/lib/systemd/system/kubelet.service.d
           └─10-kubeadm.conf
   Active: active (running) since Mon 2022-05-16 09:48:06 CST; 3s ago
     Docs: https://kubernetes.io/docs/
 Main PID: 8226 (kubelet)
    Tasks: 23
   Memory: 56.9M
   CGroup: /system.slice/kubelet.service
           ├─8226 /usr/bin/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --kubeconfig=/etc/kubernetes/kubelet.conf --config...
           └─8745 /opt/cni/bin/calico
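
To double-check that Docker and kubelet now agree on the cgroup driver, Docker's reported driver can be queried directly:

docker info | grep -i "cgroup driver"
# Cgroup Driver: systemd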

[Solved] pod Error: back off restarting failed container


Solution:

1. Find the corresponding deployment
2. Add command: [ "/bin/bash", "-ce", "tail -f /dev/null" ]
as follows (see the notes after the YAML):

kind: Deployment
apiVersion: apps/v1beta2
metadata:
  labels:
    app: jenkins-master
  name: jenkins-master-deploy
spec:
  replicas: 1
  selector:
    matchLabels:
      app: jenkins-master
  template:
    metadata:
      labels:
        app: jenkins-master
    spec:
      containers:
      - name: jenkins-master
        image: drud/jenkins-master:v0.29.0
        imagePullPolicy: IfNotPresent
        command: [ "/bin/bash", "-ce", "tail -f /dev/null" ]
        volumeMounts:
        - mountPath: /var/jenkins_home/
          name: masterjkshome
        ports:
        - containerPort: 8080
      volumes:
      - name: masterjkshome
        persistentVolumeClaim:
          claimName: pvcjkshome
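
The added command keeps PID 1 of the container running, so the container no longer exits and restarts, and you can get a shell into it to investigate. To locate the failing pod and apply the modified deployment, commands along these lines can be used (the pod name and file name below are placeholders):

kubectl get pods                              # find the pod stuck in CrashLoopBackOff
kubectl describe pod <pod-name>               # check the events for the restart reason
kubectl logs <pod-name> --previous            # logs of the previous failed container
kubectl apply -f jenkins-master-deploy.yaml   # apply the deployment with the added command
kubectl exec -it <pod-name> -- /bin/bash      # open a shell in the now-running container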

How to Solve UE Shader Development Error

From what I have observed, UE generally starts compiling shaders when the editor load reaches about 39%-45%. Because we are sometimes careless and write shaders with errors, the editor crashes outright without reporting any error message. Fortunately, only the ConsoleVariables.ini file needs to be configured.

Original:

[Startup]
; Uncomment to get detailed logs on shader compiles and the opportunity to retry on errors
;r.ShaderDevelopmentMode=1
; Uncomment to dump shaders in the Saved folder
; Warning: leaving this on for a while will fill your hard drive with many small files and ...

After modification:

[Startup]
; Uncomment to get detailed logs on shader compiles and the opportunity to retry on errors
r.ShaderDevelopmentMode=1
; Uncomment to dump shaders in the Saved folder
; Warning: leaving this on for a while will fill your hard drive with many small files and ...

Can't see the change? Pay attention to this line:

r.ShaderDevelopmentMode=1

The ";" in front has been removed; that is the whole difference. Now, when we write a shader, the editor tells us directly what the errors are, which greatly helps eliminate mistakes such as undefined variables (misspelled through carelessness).

[Solved] Parcel Service Error: regeneratorRuntime is not defined

When using the Parcel front-end packaging tool to start the local service, the console reports an error: Uncaught ReferenceError: regeneratorRuntime is not defined. According to the available information, regeneratorRuntime is a global helper generated by Babel for compatibility with async/await syntax, so the corresponding Babel plugin needs to be configured.

Front-end engineering Parcel

First, configure babel

There are two ways to configure the Babel plugin:

1. Create a separate configuration file .babelrc.

On Windows, a file whose name starts with a dot cannot be created directly in Explorer, but it can be created with the echo command on the cmd command line, as follows:

echo > .babelrc

Edit the .babelrc file and configure it as follows:

{  "plugins": [    '@babel/plugin-transform-runtime'  ]}

2. Configure babel in package.json

"babel": {  "plugins": [    '@babel/plugin-transform-runtime'  ]}

After the configuration is in place, restart the service; Parcel downloads and installs the required dependencies automatically, without a manual npm install, which is really friendly.

2. Summary

Note: the babel field in package.json carries more weight than .babelrc.

If the project is not too complicated, it is highly recommended to use Parcel to build web applications; it is absolutely worry-free, convenient, and fast.

[Solved] Flink Hadoop is not in the classpath/dependencies

Error background

After installing Flink on a YARN cluster, the Flink cluster cannot be started.

Version:

flink-1.14.4

hadoop-3.2.3

Error message

2022-04-18 10:22:31,395 ERROR org.apache.flink.runtime.entrypoint.ClusterEntrypoint[] - Could not start cluster entrypoint StandaloneSessionClusterEntrypoint.
org.apache.flink.runtime.entrypoint.ClusterEntrypointException: Failed to initialize the cluster entrypoint StandaloneSessionClusterEntrypoint.
    at org.apache.flink.runtime.entrypoint.ClusterEntrypoint.startCluster(ClusterEntrypoint.java:216) ~[flink-dist_2.12-1.14.4.jar:1.14.4]
    at org.apache.flink.runtime.entrypoint.ClusterEntrypoint.runClusterEntrypoint(ClusterEntrypoint.java:617) [flink-dist_2.12-1.14.4.jar:1.14.4]
    at org.apache.flink.runtime.entrypoint.StandaloneSessionClusterEntrypoint.main(StandaloneSessionClusterEntrypoint.java:59) [flink-dist_2.12-1.14.4.jar:1.14.4]
Caused by: java.io.IOException: Could not create FileSystem for highly available storage path (hdfs://flink/ha/default)
    at org.apache.flink.runtime.blob.BlobUtils.createFileSystemBlobStore(BlobUtils.java:92) ~[flink-dist_2.12-1.14.4.jar:1.14.4]
    at org.apache.flink.runtime.blob.BlobUtils.createBlobStoreFromConfig(BlobUtils.java:76) ~[flink-dist_2.12-1.14.4.jar:1.14.4]
    at org.apache.flink.runtime.highavailability.HighAvailabilityServicesUtils.createHighAvailabilityServices(HighAvailabilityServicesUtils.java:121) ~[flink-dist_2.12-1.14.4.jar:1.14.4]
    at org.apache.flink.runtime.entrypoint.ClusterEntrypoint.createHaServices(ClusterEntrypoint.java:361) ~[flink-dist_2.12-1.14.4.jar:1.14.4]
    at org.apache.flink.runtime.entrypoint.ClusterEntrypoint.initializeServices(ClusterEntrypoint.java:318) ~[flink-dist_2.12-1.14.4.jar:1.14.4]
    at org.apache.flink.runtime.entrypoint.ClusterEntrypoint.runCluster(ClusterEntrypoint.java:243) ~[flink-dist_2.12-1.14.4.jar:1.14.4]
    at org.apache.flink.runtime.entrypoint.ClusterEntrypoint.lambda$startCluster$1(ClusterEntrypoint.java:193) ~[flink-dist_2.12-1.14.4.jar:1.14.4]
    at org.apache.flink.runtime.security.contexts.NoOpSecurityContext.runSecured(NoOpSecurityContext.java:28) ~[flink-dist_2.12-1.14.4.jar:1.14.4]
    at org.apache.flink.runtime.entrypoint.ClusterEntrypoint.startCluster(ClusterEntrypoint.java:190) ~[flink-dist_2.12-1.14.4.jar:1.14.4]
    ... 2 more
Caused by: org.apache.flink.core.fs.UnsupportedFileSystemSchemeException: Could not find a file system implementation for scheme 'hdfs'. The scheme is not directly supported by Flink and no Hadoop file system to support this scheme could be loaded. For a full list of supported file systems, please see https://nightlies.apache.org/flink/flink-docs-stable/ops/filesystems/.
    at org.apache.flink.core.fs.FileSystem.getUnguardedFileSystem(FileSystem.java:532) ~[flink-dist_2.12-1.14.4.jar:1.14.4]
    at org.apache.flink.core.fs.FileSystem.get(FileSystem.java:409) ~[flink-dist_2.12-1.14.4.jar:1.14.4]
    at org.apache.flink.core.fs.Path.getFileSystem(Path.java:274) ~[flink-dist_2.12-1.14.4.jar:1.14.4]
    at org.apache.flink.runtime.blob.BlobUtils.createFileSystemBlobStore(BlobUtils.java:89) ~[flink-dist_2.12-1.14.4.jar:1.14.4]
    at org.apache.flink.runtime.blob.BlobUtils.createBlobStoreFromConfig(BlobUtils.java:76) ~[flink-dist_2.12-1.14.4.jar:1.14.4]
    at org.apache.flink.runtime.highavailability.HighAvailabilityServicesUtils.createHighAvailabilityServices(HighAvailabilityServicesUtils.java:121) ~[flink-dist_2.12-1.14.4.jar:1.14.4]
    at org.apache.flink.runtime.entrypoint.ClusterEntrypoint.createHaServices(ClusterEntrypoint.java:361) ~[flink-dist_2.12-1.14.4.jar:1.14.4]
    at org.apache.flink.runtime.entrypoint.ClusterEntrypoint.initializeServices(ClusterEntrypoint.java:318) ~[flink-dist_2.12-1.14.4.jar:1.14.4]
    at org.apache.flink.runtime.entrypoint.ClusterEntrypoint.runCluster(ClusterEntrypoint.java:243) ~[flink-dist_2.12-1.14.4.jar:1.14.4]
    at org.apache.flink.runtime.entrypoint.ClusterEntrypoint.lambda$startCluster$1(ClusterEntrypoint.java:193) ~[flink-dist_2.12-1.14.4.jar:1.14.4]
    at org.apache.flink.runtime.security.contexts.NoOpSecurityContext.runSecured(NoOpSecurityContext.java:28) ~[flink-dist_2.12-1.14.4.jar:1.14.4]
    at org.apache.flink.runtime.entrypoint.ClusterEntrypoint.startCluster(ClusterEntrypoint.java:190) ~[flink-dist_2.12-1.14.4.jar:1.14.4]
    ... 2 more
Caused by: org.apache.flink.core.fs.UnsupportedFileSystemSchemeException: Hadoop is not in the classpath/dependencies.
    at org.apache.flink.core.fs.UnsupportedSchemeFactory.create(UnsupportedSchemeFactory.java:55) ~[flink-dist_2.12-1.14.4.jar:1.14.4]
    at org.apache.flink.core.fs.FileSystem.getUnguardedFileSystem(FileSystem.java:528) ~[flink-dist_2.12-1.14.4.jar:1.14.4]
    at org.apache.flink.core.fs.FileSystem.get(FileSystem.java:409) ~[flink-dist_2.12-1.14.4.jar:1.14.4]
    at org.apache.flink.core.fs.Path.getFileSystem(Path.java:274) ~[flink-dist_2.12-1.14.4.jar:1.14.4]
    at org.apache.flink.runtime.blob.BlobUtils.createFileSystemBlobStore(BlobUtils.java:89) ~[flink-dist_2.12-1.14.4.jar:1.14.4]
    at org.apache.flink.runtime.blob.BlobUtils.createBlobStoreFromConfig(BlobUtils.java:76) ~[flink-dist_2.12-1.14.4.jar:1.14.4]
    at org.apache.flink.runtime.highavailability.HighAvailabilityServicesUtils.createHighAvailabilityServices(HighAvailabilityServicesUtils.java:121) ~[flink-dist_2.12-1.14.4.jar:1.14.4]
    at org.apache.flink.runtime.entrypoint.ClusterEntrypoint.createHaServices(ClusterEntrypoint.java:361) ~[flink-dist_2.12-1.14.4.jar:1.14.4]
    at org.apache.flink.runtime.entrypoint.ClusterEntrypoint.initializeServices(ClusterEntrypoint.java:318) ~[flink-dist_2.12-1.14.4.jar:1.14.4]
    at org.apache.flink.runtime.entrypoint.ClusterEntrypoint.runCluster(ClusterEntrypoint.java:243) ~[flink-dist_2.12-1.14.4.jar:1.14.4]
    at org.apache.flink.runtime.entrypoint.ClusterEntrypoint.lambda$startCluster$1(ClusterEntrypoint.java:193) ~[flink-dist_2.12-1.14.4.jar:1.14.4]
    at org.apache.flink.runtime.security.contexts.NoOpSecurityContext.runSecured(NoOpSecurityContext.java:28) ~[flink-dist_2.12-1.14.4.jar:1.14.4]
    at org.apache.flink.runtime.entrypoint.ClusterEntrypoint.startCluster(ClusterEntrypoint.java:190) ~[flink-dist_2.12-1.14.4.jar:1.14.4]
    ... 2 more
Reason for error

Flink needs two jar dependencies to access HDFS. Flink does not ship with them, so you have to add them yourself.

flink-shaded-hadoop-3-3.1.1.7.2.9.0-173-9.0.jar

commons-cli-1.5.0.jar

Error solution

Search for these two jar packages directly in the Maven repository and download them: https://mvnrepository.com/

Put the jar packages into the /flink/lib directory.
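
For example, assuming Flink is installed under /flink and the two jars have already been downloaded into the current directory, the steps look roughly like this (paths are illustrative):

cp flink-shaded-hadoop-3-3.1.1.7.2.9.0-173-9.0.jar /flink/lib/
cp commons-cli-1.5.0.jar /flink/lib/
# restart the Flink cluster so the new jars are picked up
/flink/bin/stop-cluster.sh
/flink/bin/start-cluster.sh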

[Solved] error processing package libapache2-mod-php7.2

Error in installing libpciaccess:


Setting up php7.2-cli (7.2.24-0ubuntu0.18.04.11) ...
dpkg: error processing package php7.2-cli (--configure):
 installed php7.2-cli package post-installation script subprocess returned error exit status 10
No apport report written because MaxReports is reached already
                                                              Setting up python-libxml2 (2.9.4+dfsg1-6.1ubuntu1.5) ...
Processing triggers for man-db (2.8.3-2ubuntu0.1) ...
dpkg: dependency problems prevent configuration of libapache2-mod-php7.2:
 libapache2-mod-php7.2 depends on php7.2-cli; however:
  Package php7.2-cli is not configured yet.

dpkg: error processing package libapache2-mod-php7.2 (--configure):
 dependency problems - leaving unconfigured
Setting up libsqlite0 (2.8.17-14fakesync1) ...
No apport report written because MaxReports is reached already
                                                              Setting up librpm8 (4.14.1+dfsg1-2) ...

... ...

Errors were encountered while processing:
 ufw
 nfs-common
 openssh-server
 php7.2-cli
 libapache2-mod-php7.2
E: Sub-process /usr/bin/dpkg returned an error code (1)

To view installation information:

 apt list | grep libapache2-mod-php

WARNING: apt does not have a stable CLI interface. Use with caution in scripts.

libapache2-mod-php/bionic,bionic,now 1:7.2+60ubuntu1 all [installed]
libapache2-mod-php5filter/trusty 5.5.9+dfsg-1ubuntu4 amd64
libapache2-mod-php7.2/bionic-security,bionic-updates,now 7.2.24-0ubuntu0.18.04.11 amd64 [installed]

Just remove the contents reported as errors:

apt-get remove --purge libapache2-mod-php7.2
apt-get remove --purge nfs-common
apt-get remove --purge php7.2-cli
apt-get remove --purge ufw
apt-get remove --purge openssh-server
... ...

Update the package lists:

apt-get update

Check the installed packages again to confirm the removal:

apt list | grep libapache2-mod-php*

WARNING: apt does not have a stable CLI interface. Use with caution in scripts.

libapache2-mod-php/bionic,bionic 1:7.2+60ubuntu1 all
libapache2-mod-php5filter/trusty 5.5.9+dfsg-1ubuntu4 amd64
libapache2-mod-php7.2/bionic-security,bionic-updates,now 7.2.24-0ubuntu0.18.04.11 amd64 [residual-config]

Reinstall, and it completes successfully:

apt install libpciaccess
Reading package lists... Done
Building dependency tree       
Reading state information... Done
E: Unable to locate package libpciaccess
e0005055@ibudev20:~/wk/bak_load/win2030/buildroot/dl$ sd apt-get install libpciaccess-dev
Reading package lists... Done
Building dependency tree       
Reading state information... Done
libpciaccess-dev is already the newest version (0.14-1).
The following packages were automatically installed and are no longer required:
  keyutils libnfsidmap2 libtirpc1 ncurses-term openssh-sftp-server php7.2-common php7.2-json php7.2-opcache php7.2-readline rpcbind ssh-import-id
Use 'sudo apt autoremove' to remove them.
0 upgraded, 0 newly installed, 0 to remove and 32 not upgraded.

Error: Host Key Verification Failed [How to Solve]

I deleted the previous GitHub account information (including the files under ~/.ssh and the git user.name and user.email settings), generated a new public/private key pair for the new GitHub account, and configured the public key in the new account's settings. After that, git clone reported an error: host key verification failed.

Solution:
Open git bash

Enter the following commands in sequence; the ssh-keyscan step adds GitHub's host key to known_hosts, which resolves the verification failure:

mkdir -p ~/.ssh
ssh-keyscan -t rsa github.com >> ~/.ssh/known_hosts
ssh-keygen -t rsa -C "user.email"
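
Once the new public key has been added to the GitHub account settings, the SSH connection can be verified before cloning again:

ssh -T git@github.com
# expected: Hi <username>! You've successfully authenticated, but GitHub does not provide shell access.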

Reference: stackoverflow

[Solved] sqoop Error: SQLException in nextKeyValue Caused by: ORA-24920: column size too large for client

Question

When importing Oracle data with sqoop, the following error is reported:

INFO mapreduce.Job: Task Id : attempt_1646802944907_15460_m_000000_1, Status : FAILED
Error: java.io.IOException: SQLException in nextKeyValue
        at org.apache.sqoop.mapreduce.db.DBRecordReader.nextKeyValue(DBRecordReader.java:275)
        at org.apache.hadoop.mapred.MapTask$NewTrackingRecordReader.nextKeyValue(MapTask.java:568)
        at org.apache.hadoop.mapreduce.task.MapContextImpl.nextKeyValue(MapContextImpl.java:80)
        at org.apache.hadoop.mapreduce.lib.map.WrappedMapper$Context.nextKeyValue(WrappedMapper.java:91)
        at org.apache.hadoop.mapreduce.Mapper.run(Mapper.java:145)
        at org.apache.sqoop.mapreduce.AutoProgressMapper.run(AutoProgressMapper.java:64)
        at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:799)
        at org.apache.hadoop.mapred.MapTask.run(MapTask.java:347)
        at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:174)
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.security.auth.Subject.doAs(Subject.java:422)
        at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1875)
        at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:168)
Caused by: java.sql.SQLException: ORA-24920: column size too large for client

Reason

Importing from other databases with sqoop had always worked; the problem only appeared when importing data from a new database. First, check what differs between the two databases: the old Oracle instance is version 11, while the new one is version 19, which may be the cause.
Searching for the ORA-24920 error online suggests upgrading the Oracle client, so the Oracle JDBC driver is the likely culprit.
Under sqoop's lib directory, the Oracle JDBC driver sqoop uses turns out to be ojdbc6.jar, which does not match Oracle version 19.
You can check the Oracle version and the corresponding Oracle JDBC driver version on this page:
https://www.oracle.com/database/technologies/faq-jdbc.html#02_03

The download page is here:
https://www.oracle.com/database/technologies/appdev/jdbc-downloads.html

Solution:

Based on the version table, I downloaded ojdbc8.0.jar. After uploading it, delete the original driver and re-import the data.
The old driver must be deleted or moved out of the lib directory, otherwise the import still fails; presumably, when two versions are present, the old one may be the one that gets loaded.
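
As a rough sketch, assuming sqoop lives under $SQOOP_HOME and the new driver has already been downloaded (paths and file names below are illustrative):

cd $SQOOP_HOME/lib
mv ojdbc6.jar /tmp/            # move the old driver out of the lib directory
cp /path/to/ojdbc8.jar .       # add the driver that matches Oracle 19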

[Solved] standard_init_linux.go:190: exec user process caused “exec format error“

Scenario

While packaging a Golang application into a Docker image, I executed the following command:

docker run -it -P --name docker_client -m 1024m --net host docker_client:1.0

After execution, the server reported this error

standard_init_linux.go:190: exec user process caused "exec format error"

The methods found online were useless, and the image runs fine on my virtual machine. Looking carefully at my Dockerfile:

FROM golang:alpine

ENV GO111MODULE=on \
    GOPROXY=https://goproxy.cn,direct \
    CGO_ENABLED=0 \
    GOOS=linux \
    GOARCH=amd64

# Set the working directory to /build inside the container
WORKDIR /build

# Copy the project source from the current directory
COPY . .

# Compile our code into a binary executable app
RUN go build -o app .

# Move to the /dist directory where the generated binaries are stored
WORKDIR /dist

# Copy the binaries from the /build directory to here
RUN cp /build/app .

# Expose the port
EXPOSE 8080

# The command to run the golang program
CMD ["/dist/app"]

Notice that the GOARCH parameter is amd64. Now check the relevant versions on the server:

 docker version
 # check the Docker version and architecture

The output reveals the problem: one line shows the architecture is arm64.

 OS/Arch:           linux/arm64
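
The host architecture can also be checked directly with uname; on an arm64 host it prints aarch64:

uname -m
# aarch64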

So I modified the Dockerfile:

FROM golang:alpine

ENV GO111MODULE=on \
    GOPROXY=https://goproxy.cn,direct \
    CGO_ENABLED=0 \
    GOOS=linux \
    GOARCH=arm64

# Set the working directory to /build inside the container
WORKDIR /build

# Copy the project source from the current directory
COPY . .

# Compile our code into a binary executable app
RUN go build -o app .

# Move to the /dist directory where the generated binaries are stored
WORKDIR /dist

# Copy the binaries from the /build directory to here
RUN cp /build/app .

# Expose the port
EXPOSE 8080

# The command to run the golang program
CMD ["/dist/app"]

After rebuilding the image from the modified Dockerfile, the container runs normally.

[Solved] latex Import ntheorem Package Error: Package ntheorem Error: Theorem style plain already defined.

This may be caused by a conflict between the ntheorem package and the amsthm package.

Solution: do not use both; just delete the \usepackage line for one of them.
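
A minimal sketch of a working preamble that keeps only amsthm (uncommenting the ntheorem line would reproduce the error):

\documentclass{article}
\usepackage{amsthm}        % keep amsthm ...
%\usepackage{ntheorem}     % ... and do not load ntheorem as well
\newtheorem{theorem}{Theorem}

\begin{document}
\begin{theorem}
Loading only one of the two packages avoids the error.
\end{theorem}
\end{document}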

Note that some template class files (.cls) may load one of these packages in advance. For example, the template class file (ccjnl.cls) of China Communications loads the amsthm package in advance.

In that case, you can only use the amsthm package and cannot use ntheorem.