Category Archives: Error

[jv-convert] Error 1, [all-recursive] Error 1 (How to Solve)

The following error occurs when compiling GCC:

make profiledbootstrap  
  
/var/tmp/gcc4/gcc-4.0.1/sparc-sun-solaris2.8/gcc/gcj  
-B/var/tmp/gcc4/gcc-4.0.1/sparc-sun-solaris2.8/gcc/  
-B/tools/freeware/gcc4.0/sparc-sun-solaris2.8/bin/  
-B/tools/freeware/gcc4.0/sparc-sun-solaris2.8/lib/  
-isystem /tools/freeware/gcc4.0/sparc-sun-solaris2.8/include  
-isystem /tools/freeware/gcc4.0/sparc-sun-solaris2.8/sys-include -g -O2  
-o .libs/jv-convert --main=gnu.gcj.convert.Convert -shared-libgcc   
-L/var/tmp/gcc4/gcc-4.0.1/sparc-sun-solaris2.8/sparc-sun-solaris2.8/libjava  
-L/var/tmp/gcc4/gcc-4.0.1/sparc-sun-solaris2.8/sparc-sun-solaris2.8/libjava/.libs ./.libs/libgcj.so  
-L/var/tmp/gcc4/gcc-4.0.1/sparc-sun-solaris2.8/sparc-sun-solaris2.8/libstdc++-v3/src  
-L/var/tmp/gcc4/gcc-4.0.1/sparc-sun-solaris2.8/sparc-sun-solaris2.8/libstdc++-v3/src/.libs  
-lpthread -lrt -ldl -L/var/tmp/gcc4/gcc-4.0.1/sparc-sun-solaris2.8/gcc  
-L/tools/freeware/gcc4.0/sparc-sun-solaris2.8/bin  
-L/tools/freeware/gcc4.0/sparc-sun-solaris2.8/lib -L/usr/dt/lib  
-L/usr/openwin/lib -L/usr/lib/X11 -L/usr/ucblib -L/usr/atria/lib  
-L/tools/freeware/gcc4.0/lib -L/tools/freeware/lib -L/tools/sun5/lib  
-L/tools/freeware/gcc4.0/lib/gcc/sparc-sun-solaris2.8/../../../sparc-sun-solaris2.8/lib  
-L/usr/ccs/bin -L/usr/ccs/lib  
-L/tools/freeware/gcc4.0/lib/gcc/sparc-sun-solaris2.8/../.. -lgcc_s -lgcc_s  
-Wl, --rpath -Wl,/tools/freeware/gcc4.0/lib  
/tools/freeware/gcc4.0/bin/ld: unrecognized option'-Wl,-rpath'  
/tools/freeware/gcc4.0/bin/ld: use the --help option for usage information  
collect2: ld returned 1 exit status  
make[3]: *** [jv-convert] Error 1  
make[3]: Leaving directory  
`/var/tmp/gcc4/gcc-4.0.1/sparc-sun-solaris2.8/sparc-sun-solaris2.8/libjava'  
make[2]: *** [all-recursive] Error 1  
rm gnu/gcj/tools/gcj_dbtool/Main.class  
make[2]: Leaving directory  
`/var/tmp/gcc4/gcc-4.0.1/sparc-sun-solaris2.8/sparc-sun-solaris2.8/libjava'  
make[1]: *** [all-target-libjava] Error 2  
make[1]: Leaving directory `/var/tmp/gcc4/gcc-4.0.1/sparc-sun-solaris2.8'  
make: *** [profiledbootstrap] Error 2  

The fix at this point is to configure the build with only the C and C++ languages enabled, as follows:

./configure --prefix=/usr/local/gcc-4.6.1 --enable-threads=posix --disable-checking --disable-multilib --enable-languages=c,c++

[Solved] error [email protected]: The engine "node" is incompatible with this module.

The following error was reported when initializing a React project:

error [email protected]: The engine "node" is incompatible with this module. Expected version "^6.14.0 || ^8.10.0 || >=9.10.0". Got "8.9.4"
error Found incompatible module
info Visit https://yarnpkg.com/en/docs/cli/add for documentation about this command.

This is caused by an incompatible Node.js version. Upgrading Node resolves it; alternatively, it can be worked around without upgrading by using the following command:

npx create-react-app my-app --use-npm

 

Official issues: https://github.com/facebook/create-react-app/issues/5714 and https://github.com/facebook/create-react-app/issues/3974

problem:

Guys, a few days ago, I installed `yarn` on my Mac just because it’s necessary for a project where I’m involved.

My problem is that, now, when I create a new app with `CRA`, it’s going to use `yarn` instead of `npm`. And, to be honest, I prefer `npm`.

Is there a way to force to use `npm` instead of `yarn`?
Or should I wait until the initial setup finishes, then remove the `yarn.lock` file and the `node_modules` folder, and run `npm i`?

answer:

You can run CRA with the `--use-npm` flag to force it to use npm.

[Solved] Hadoop Mapreduce Error: GC overhead limit exceeded

When running a MapReduce job, Error: GC overhead limit exceeded appears. Checking the log shows the exception:

2015-12-11 11:48:44,716 FATAL [main] org.apache.hadoop.mapred.YarnChild: Error running child: java.lang.OutOfMemoryError: GC overhead limit exceeded
    at java.io.DataInputStream.readUTF(DataInputStream.java:661)
    at java.io.DataInputStream.readUTF(DataInputStream.java:564)
    at xxxx.readFields(DateDimension.java:186)
    at xxxx.readFields(StatsUserDimension.java:67)
    at xxxx.readFields(StatsBrowserDimension.java:68)
    at org.apache.hadoop.io.WritableComparator.compare(WritableComparator.java:158)
    at org.apache.hadoop.mapreduce.task.ReduceContextImpl.nextKeyValue(ReduceContextImpl.java:158)
    at org.apache.hadoop.mapreduce.task.ReduceContextImpl$ValueIterator.next(ReduceContextImpl.java:239)
    at xxx.reduce(BrowserReducer.java:37)
    at xxx.reduce(BrowserReducer.java:16)
    at org.apache.hadoop.mapreduce.Reducer.run(Reducer.java:171)
    at org.apache.hadoop.mapred.ReduceTask.runNewReducer(ReduceTask.java:627)
    at org.apache.hadoop.mapred.ReduceTask.run(ReduceTask.java:389)
    at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:168)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:415)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1614)
    at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:163)

The exception shows that memory runs out when the reducer reads the next record: the reduce side builds an in-memory map from what it reads, which exhausts the heap. In Hadoop 2.x, the default YARN child JVM heap size for a container is 200 MB, specified by the parameter mapred.child.java.opts. It is a client-side parameter: it can be supplied when the job is submitted, or configured in the mapred-site.xml file. Changing it to -Xms200m -Xmx1000m enlarges the JVM heap and resolves the exception.
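For reference, the change above expressed as a mapred-site.xml entry would look like the following (a minimal sketch; the same value can instead be passed per job, e.g. with -D on the command line):

```xml
<property>
    <name>mapred.child.java.opts</name>
    <value>-Xms200m -Xmx1000m</value>
</property>
```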

mapred.child.java.opts (default: -Xmx200m): JVM options for the containers in which MapReduce tasks run.
mapred.map.child.java.opts: JVM options for the map phase only.
mapred.reduce.child.java.opts: JVM options for the reduce phase only.
mapreduce.admin.map.child.java.opts (default: -Djava.net.preferIPv4Stack=true -Dhadoop.metrics.log.level=WARN): administrator-specified JVM options for the map phase.
mapreduce.admin.reduce.child.java.opts (default: -Djava.net.preferIPv4Stack=true -Dhadoop.metrics.log.level=WARN): administrator-specified JVM options for the reduce phase.

 

The precedence among the five parameters above is:

Map phase: mapreduce.admin.map.child.java.opts < mapred.child.java.opts < mapred.map.child.java.opts. That is, if the settings conflict, the JVM parameters defined by mapred.map.child.java.opts are the ones that take effect.

Reduce phase: mapreduce.admin.reduce.child.java.opts < mapred.child.java.opts < mapred.reduce.child.java.opts
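The fallback order above can be sketched as follows (a hypothetical illustration, not Hadoop code; the plain dictionary stands in for the job configuration):

```python
# Illustrates which JVM-options parameter wins for a given phase.
# The phase-specific user setting overrides the generic one; the admin
# setting is only prepended in the real code, so user settings win on conflict.
def effective_child_java_opts(conf, phase):
    generic = conf.get("mapred.child.java.opts", "-Xmx200m")  # global default
    return conf.get("mapred.%s.child.java.opts" % phase, generic)

conf = {"mapred.child.java.opts": "-Xmx1000m",
        "mapred.reduce.child.java.opts": "-Xmx2000m"}
print(effective_child_java_opts(conf, "map"))     # falls back to the generic setting
print(effective_child_java_opts(conf, "reduce"))  # phase-specific setting wins
```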

 Hadoop source code reference: org.apache.hadoop.mapred.MapReduceChildJVM.getChildJavaOpts method.

private static String getChildJavaOpts(JobConf jobConf, boolean isMapTask) {
    String userClasspath = "";
    String adminClasspath = "";
    if (isMapTask) {
        userClasspath = jobConf.get(JobConf.MAPRED_MAP_TASK_JAVA_OPTS,
                jobConf.get(JobConf.MAPRED_TASK_JAVA_OPTS,
                        JobConf.DEFAULT_MAPRED_TASK_JAVA_OPTS));
        adminClasspath = jobConf.get(
                MRJobConfig.MAPRED_MAP_ADMIN_JAVA_OPTS,
                MRJobConfig.DEFAULT_MAPRED_ADMIN_JAVA_OPTS);
    } else {
        userClasspath = jobConf.get(JobConf.MAPRED_REDUCE_TASK_JAVA_OPTS,
                jobConf.get(JobConf.MAPRED_TASK_JAVA_OPTS,
                        JobConf.DEFAULT_MAPRED_TASK_JAVA_OPTS));
        adminClasspath = jobConf.get(
                MRJobConfig.MAPRED_REDUCE_ADMIN_JAVA_OPTS,
                MRJobConfig.DEFAULT_MAPRED_ADMIN_JAVA_OPTS);
    }

    // Add admin classpath first so it can be overridden by user.
    return adminClasspath + " " + userClasspath;
}

[Solved] Vue.js error: Module build failed: Error: No parser and no file path given, couldn’t infer a parser.

ERROR  Failed to compile with 2 errors                                              12:00:33

 error  in ./src/App.vue

Module build failed: Error: No parser and no file path given, couldn't infer a parser.
    at normalize (C:\Users\admin\Desktop\222\demo\node_modules\prettier\index.js:7051:13)
    at formatWithCursor (C:\Users\admin\Desktop\222\demo\node_modules\prettier\index.js:10370:12)
    at C:\Users\admin\Desktop\222\demo\node_modules\prettier\index.js:31115:15
    at Object.format (C:\Users\admin\Desktop\222\demo\node_modules\prettier\index.js:31134:12)
    at Object.module.exports (C:\Users\admin\Desktop\222\demo\node_modules\vue-loader\lib\template-compiler\index.js:80:23)

 @ ./src/App.vue 11:0-354
 @ ./src/main.js
 @ multi (webpack)-dev-server/client?http://localhost:8081 webpack/hot/dev-server ./src/main.js

 error  in ./src/components/HelloWorld.vue

Module build failed: Error: No parser and no file path given, couldn't infer a parser.
    at normalize (C:\Users\admin\Desktop\222\demo\node_modules\prettier\index.js:7051:13)
    at formatWithCursor (C:\Users\admin\Desktop\222\demo\node_modules\prettier\index.js:10370:12)
    at C:\Users\admin\Desktop\222\demo\node_modules\prettier\index.js:31115:15
    at Object.format (C:\Users\admin\Desktop\222\demo\node_modules\prettier\index.js:31134:12)
    at Object.module.exports (C:\Users\admin\Desktop\222\demo\node_modules\vue-loader\lib\template-compiler\index.js:80:23)

 @ ./src/components/HelloWorld.vue 11:0-366
 @ ./src/router/index.js
 @ ./src/main.js
 @ multi (webpack)-dev-server/client?http://localhost:8081 webpack/hot/dev-server ./src/main.js

Solution: pin prettier to a compatible version:

npm i prettier@~1.12.0

Re-run:

npm run dev

 

K8s Install Error: Error: unknown flag: --experimental-upload-certs

When installing Kubernetes v1.16 today, the kubeadm init command suddenly failed with an unknown-flag error. The same command worked when installing v1.15, so the flag was evidently removed in the version upgrade.

Solution:

unknown flag: --experimental-upload-certs. Replace --experimental-upload-certs with --upload-certs:

[root@k8s-master opt]# kubeadm init --config=kubeadm-config.yaml --experimental-upload-certs | tee kubeadm-init.log
Error: unknown flag: --experimental-upload-certs
Usage:
  kubeadm init [flags]
  kubeadm init [command]

Available Commands:
  phase       Use this command to invoke single phase of the init workflow

Flags:
      --apiserver-advertise-address string   The IP address the API Server will advertise it's listening on. If not set the default network interface will be used.
      --apiserver-bind-port int32            Port for the API Server to bind to. (default 6443)
      --apiserver-cert-extra-sans strings    Optional extra Subject Alternative Names (SANs) to use for the API Server serving certificate. Can be both IP addresses and DNS names.
      --cert-dir string                      The path where to save and store the certificates. (default "/etc/kubernetes/pki")
      --certificate-key string               Key used to encrypt the control-plane certificates in the kubeadm-certs Secret.
      --config string                        Path to a kubeadm configuration file.
      --control-plane-endpoint string        Specify a stable IP address or DNS name for the control plane.
      --cri-socket string                    Path to the CRI socket to connect. If empty kubeadm will try to auto-detect this value; use this option only if you have more than one CRI installed or if you have non-standard CRI socket.
      --dry-run                              Don't apply any changes; just output what would be done.
  -k, --experimental-kustomize string        The path where kustomize patches for static pod manifests are stored.
      --feature-gates string                 A set of key=value pairs that describe feature gates for various features. Options are:
                                             IPv6DualStack=true|false (ALPHA - default=false)
  -h, --help                                 help for init
      --ignore-preflight-errors strings      A list of checks whose errors will be shown as warnings. Example: 'IsPrivilegedUser,Swap'. Value 'all' ignores errors from all checks.
      --image-repository string              Choose a container registry to pull control plane images from (default "k8s.gcr.io")
      --kubernetes-version string            Choose a specific Kubernetes version for the control plane. (default "stable-1")
      --node-name string                     Specify the node name.
      --pod-network-cidr string              Specify range of IP addresses for the pod network. If set, the control plane will automatically allocate CIDRs for every node.
      --service-cidr string                  Use alternative range of IP address for service VIPs. (default "10.96.0.0/12")
      --service-dns-domain string            Use alternative domain for services, e.g. "myorg.internal". (default "cluster.local")
      --skip-certificate-key-print           Don't print the key used to encrypt the control-plane certificates.
      --skip-phases strings                  List of phases to be skipped
      --skip-token-print                     Skip printing of the default bootstrap token generated by 'kubeadm init'.
      --token string                         The token to use for establishing bidirectional trust between nodes and control-plane nodes. The format is [a-z0-9]{6}\.[a-z0-9]{16} - e.g. abcdef.0123456789abcdef
      --token-ttl duration                   The duration before the token is automatically deleted (e.g. 1s, 2m, 3h). If set to '0', the token will never expire (default 24h0m0s)
      --upload-certs                         Upload control-plane certificates to the kubeadm-certs Secret.

Global Flags:
      --add-dir-header           If true, adds the file directory to the header
      --log-file string          If non-empty, use this log file
      --log-file-max-size uint   Defines the maximum size a log file can grow to. Unit is megabytes. If the value is 0, the maximum file size is unlimited. (default 1800)
      --rootfs string            [EXPERIMENTAL] The path to the 'real' host root filesystem.
      --skip-headers             If true, avoid header prefixes in the log messages
      --skip-log-headers         If true, avoid headers when opening log files
  -v, --v Level                  number for the log level verbosity

Use "kubeadm init [command] --help" for more information about a command.

unknown flag: --experimental-upload-certs
To see the stack trace of this error execute with --v=5 or higher
[root@k8s-master opt]# kubeadm init --config=kubeadm-config.yaml --upload-certs | tee kubeadm-init.log
[init] Using Kubernetes version: v1.16.1
[preflight] Running pre-flight checks
        [WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Activating the kubelet service
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [k8s-master kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 192.168.180.121]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [k8s-master localhost] and IPs [192.168.180.121 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [k8s-master localhost] and IPs [192.168.180.121 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file

 

[Solved] error: failed to run custom build command for `librocksdb-sys v6.17.3`

A Rust build reports the following error:

error: failed to run custom build command for `librocksdb-sys v6.17.3`

Details:


...

   Compiling ed25519-dalek v1.0.1
   Compiling tracing-subscriber v0.2.17
   Compiling schnorrkel v0.9.1
   Compiling addr2line v0.14.1
   Compiling prost-build v0.7.0
   Compiling mio-uds v0.6.8
error: failed to run custom build command for `librocksdb-sys v6.17.3`

Caused by:
  process didn't exit successfully: `/home/y/IdeaProjects/MinixChain/target/release/build/librocksdb-sys-6de902cd8dc81c39/build-script-build` (exit code: 101)
  --- stderr
  thread 'main' panicked at 'Unable to find libclang: "couldn\'t find any valid shared libraries matching: [\'libclang.so\', \'libclang-*.so\', \'libclang.so.*\', \'libclang-*.so.*\'], set the `LIBCLANG_PATH` environment variable to a path where one of these files can be found (invalid: [])"', /home/y/.cargo/registry/src/github.com-1ecc6299db9ec823/bindgen-0.57.0/src/lib.rs:1975:31
  note: run with `RUST_BACKTRACE=1` environment variable to display a backtrace
warning: build failed, waiting for other jobs to finish...
error: build failed

Solution:

sudo apt install llvm clang


[Solved] Hive tez due to: ROOT_INPUT_INIT_FAILURE java.lang.IllegalArgumentException: Illegal Capacity: -38297

Hive on Tez reports the following error:
Status: Failed
Vertex failed, vertexName=Map 1, vertexId=vertex_1625122203217_0010_1_00, diagnostics=[Vertex vertex_1625122203217_0010_1_00 [Map 1] killed/failed due to:ROOT_INPUT_INIT_FAILURE, Vertex Input: info initializer failed, vertex=vertex_1625122203217_0010_1_00 [Map 1], java.lang.IllegalArgumentException: Illegal Capacity: -38297

----------------------------------------------------------------------------------------------
        VERTICES      MODE        STATUS  TOTAL  COMPLETED  RUNNING  PENDING  FAILED  KILLED
----------------------------------------------------------------------------------------------
Map 1            container  INITIALIZING     -1          0        0       -1       0       0
Map 2            container  INITIALIZING     -1          0        0       -1       0       0
----------------------------------------------------------------------------------------------
VERTICES: 00/02  [>>--------------------------] 0%    ELAPSED TIME: 1.61 s
----------------------------------------------------------------------------------------------
Status: Failed
Vertex failed, vertexName=Map 1, vertexId=vertex_1625122203217_0010_1_00, diagnostics=[Vertex vertex_1625122203217_0010_1_00 [Map 1] killed/failed due to:ROOT_INPUT_INIT_FAILURE, Vertex Input: info initializer failed, vertex=vertex_1625122203217_0010_1_00 [Map 1], java.lang.IllegalArgumentException: Illegal Capacity: -38297
at java.util.ArrayList.<init>(ArrayList.java:156)
at org.apache.hadoop.mapred.FileInputFormat.getSplits(FileInputFormat.java:350)
at org.apache.hadoop.hive.ql.io.HiveInputFormat.addSplitsForGroup(HiveInputFormat.java:519)
at org.apache.hadoop.hive.ql.io.HiveInputFormat.getSplits(HiveInputFormat.java:765)
at org.apache.hadoop.hive.ql.exec.tez.HiveSplitGenerator.initialize(HiveSplitGenerator.java:243)
at org.apache.tez.dag.app.dag.RootInputInitializerManager$InputInitializerCallable$1.run(RootInputInitializerManager.java:280)
at org.apache.tez.dag.app.dag.RootInputInitializerManager$InputInitializerCallable$1.run(RootInputInitializerManager.java:271)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1730)
at org.apache.tez.dag.app.dag.RootInputInitializerManager$InputInitializerCallable.call(RootInputInitializerManager.java:271)
at org.apache.tez.dag.app.dag.RootInputInitializerManager$InputInitializerCallable.call(RootInputInitializerManager.java:255)
at com.google.common.util.concurrent.TrustedListenableFutureTask$TrustedFutureInterruptibleTask.runInterruptibly(TrustedListenableFutureTask.java:125)
at com.google.common.util.concurrent.InterruptibleTask.run(InterruptibleTask.java:57)
at com.google.common.util.concurrent.TrustedListenableFutureTask.run(TrustedListenableFutureTask.java:78)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
]

To fix it, edit the Hive configuration (vim $HIVE_HOME/conf/hive-site.xml) and add the following property:

<property>
    <name>hive.tez.container.size</name>
    <value>1024</value>
</property>

[Solved] Error: Failure while executing; `tar --extract --no-same-owner --file /Users/wangchuangyan/Library/C

On macOS, running brew install npm reports the error:
Error: Failure while executing; tar --extract --no-same-owner --file /Users/wangchuangyan/Library/Caches/Homebrew/downloads/01840f175b09e7eb3d4ca7f11492bb1bee74fa7569a41a884c7ffb3418e11a02--libuv-1.41.0.catalina.bottle.tar.gz --directory /private/tmp/d20210708-6134-w2f3oo exited with

Here's the output:
tar: Error opening archive: Failed to open '/Users/wangchuangyan/Library/Caches/Homebrew/downloads/01840f175b09e7eb3d4ca7f11492bb1bee74fa7569a41a884c7ffb3418e11a02--libuv-1.41.0.catalina.bottle.tar.gz'
This means the libuv bottle archive cannot be opened.
Solution: manually install the dependency first: brew install libuv

How to Solve Cocoapods Installation Failure

Background

Recently, I replaced my hard disk with a large 512 GB one and reinstalled macOS 10.15. When installing CocoaPods, the following error occurred:

Building native extensions. This could take a while...
ERROR:  Error installing cocoapods:
    ERROR: Failed to build gem native extension.

    current directory: /Library/Ruby/Gems/2.6.0/gems/ffi-1.15.0/ext/ffi_c
/System/Library/Frameworks/Ruby.framework/Versions/2.6/usr/bin/ruby -I /System/Library/Frameworks/Ruby.framework/Versions/2.6/usr/lib/ruby/2.6.0 -r ./siteconf20210324-1667-1wwdce5.rb extconf.rb
checking for ffi.h... no
checking for ffi.h in /usr/local/include,/usr/include/ffi,/Library/Developer/CommandLineTools/SDKs/MacOSX.sdk/usr/include/ffi,/Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX.sdk/usr/include/ffi... yes
checking for ffi_prep_closure_loc() in -lffi... yes
checking for ffi_prep_cif_var()... yes
checking for ffi_raw_call()... yes
checking for ffi_prep_raw_closure()... yes
creating extconf.h
creating Makefile

current directory: /Library/Ruby/Gems/2.6.0/gems/ffi-1.15.0/ext/ffi_c
make "DESTDIR=" clean

current directory: /Library/Ruby/Gems/2.6.0/gems/ffi-1.15.0/ext/ffi_c
make "DESTDIR="
make: *** No rule to make target `"/Volumes/macOS', needed by `AbstractMemory.o'.  Stop.

make failed, exit code 2

Gem files will remain installed in /Library/Ruby/Gems/2.6.0/gems/ffi-1.15.0 for inspection.
Results logged to /Library/Ruby/Gems/2.6.0/extensions/universal-darwin-20/2.6.0/ffi-1.15.0/gem_make.out

Solution:

    1. It may be a compatibility problem between the system version and the latest CocoaPods, so install an older version directly. When I hit this (2021.07.06) the latest release was 1.10.1; the 1.9.3 release (2020.05) should be compatible with 10.15:
# Specify the version to install
sudo gem install -n /usr/local/bin cocoapods -v 1.9.3
    2. Alternatively, install with brew:
brew install cocoapods

MOTR compiling error: cannot call member function ‘void std::basic_string<_CharT, _Traits, _Alloc>::_R

Key error information

/usr/local/include/c++/8.2.0/bits/basic_string.tcc:1067:1: error: cannot call member function 'void std::basic_string<_CharT, _Traits, _Alloc>::_Rep::_M_set_sharable() [with _CharT = char32_t; _Traits = std::
char_traits<char32_t>; _Alloc = std::allocator<char32_t>]' without object
ninja: build stopped: subcommand failed.
Traceback (most recent call last):
  File "/home/miniconda3/envs/motr/lib/python3.7/site-packages/torch/utils/cpp_extension.py", line 1423, in _run_ninja_build
    check=True)
  File "/home/miniconda3/envs/motr/lib/python3.7/subprocess.py", line 512, in run
    output=stdout, stderr=stderr)
subprocess.CalledProcessError: Command '['ninja', '-v']' returned non-zero exit status 1.

Solution:

In /usr/local/include/c++/8.2.0/bits/basic_string.tcc, line 1067, change:

__p->_M_set_sharable();

Change to:

(*__p)._M_set_sharable();

Appendix: the full error output is as follows

which: no hipcc in (/home/miniconda3/envs/motr/bin:/home/miniconda3/bin:/usr/local/nvidia/bin:/usr/local/cuda/bin:/usr/local/bin:/usr/bin:/home/bin:/usr/local/sbin:/usr/sbin)
running build
running build_py
creating build/lib.linux-x86_64-3.7
creating build/lib.linux-x86_64-3.7/functions
copying functions/__init__.py -> build/lib.linux-x86_64-3.7/functions
copying functions/ms_deform_attn_func.py -> build/lib.linux-x86_64-3.7/functions
creating build/lib.linux-x86_64-3.7/modules
copying modules/__init__.py -> build/lib.linux-x86_64-3.7/modules
copying modules/ms_deform_attn.py -> build/lib.linux-x86_64-3.7/modules
running build_ext
building 'MultiScaleDeformableAttention' extension
creating /github/MOTR/models/ops/build/temp.linux-x86_64-3.7
creating /github/MOTR/models/ops/build/temp.linux-x86_64-3.7/nfs
creating /github/MOTR/models/ops/build/temp.linux-x86_64-3.7/nfs/volume-95-4
creating /github/MOTR/models/ops/build/temp.linux-x86_64-3.7/nfs/volume-95-4/liushuai
creating /github/MOTR/models/ops/build/temp.linux-x86_64-3.7/github
creating /github/MOTR/models/ops/build/temp.linux-x86_64-3.7/github/MOTR
creating /github/MOTR/models/ops/build/temp.linux-x86_64-3.7/github/MOTR/models
creating /github/MOTR/models/ops/build/temp.linux-x86_64-3.7/github/MOTR/models/ops
creating /github/MOTR/models/ops/build/temp.linux-x86_64-3.7/github/MOTR/models/ops/src
creating /github/MOTR/models/ops/build/temp.linux-x86_64-3.7/github/MOTR/models/ops/src/cpu
creating /github/MOTR/models/ops/build/temp.linux-x86_64-3.7/github/MOTR/models/ops/src/cuda
/home/miniconda3/envs/motr/lib/python3.7/site-packages/torch/utils/cpp_extension.py:220: UserWarning:

                               !! WARNING !!

!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
Your compiler (c++) is not compatible with the compiler Pytorch was
built with for this platform, which is g++ on linux. Please
use g++ to to compile your extension. Alternatively, you may
compile PyTorch from source using c++, and then you can also use
c++ to compile your extension.

See https://github.com/pytorch/pytorch/blob/master/CONTRIBUTING.md for help
with compiling PyTorch from source.
!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!

                              !! WARNING !!

  platform=sys.platform))
Emitting ninja build file /github/MOTR/models/ops/build/temp.linux-x86_64-3.7/build.ninja...
Compiling objects...
Allowing ninja to set a default number of workers... (overridable by setting the environment variable MAX_JOBS=N)
[1/3] c++ -MMD -MF /github/MOTR/models/ops/build/temp.linux-x86_64-3.7/github/MOTR/models/ops/src/cpu/ms_deform_attn_cpu.o.d -pthread -B /home/miniconda3/envs/motr/compiler_compat -Wl,--sysroot=/ -Wsign-compare -DNDEBUG -g -fwrapv -O3 -Wall -Wstrict-prototypes -fPIC -DWITH_CUDA -I/github/MOTR/models/ops/src -I/home/miniconda3/envs/motr/lib/python3.7/site-packages/torch/include -I/home/miniconda3/envs/motr/lib/python3.7/site-packages/torch/include/torch/csrc/api/include -I/home/miniconda3/envs/motr/lib/python3.7/site-packages/torch/include/TH -I/home/miniconda3/envs/motr/lib/python3.7/site-packages/torch/include/THC -I/usr/local/cuda/include -I/home/miniconda3/envs/motr/include/python3.7m -c -c /github/MOTR/models/ops/src/cpu/ms_deform_attn_cpu.cpp -o /github/MOTR/models/ops/build/temp.linux-x86_64-3.7/github/MOTR/models/ops/src/cpu/ms_deform_attn_cpu.o -DTORCH_API_INCLUDE_EXTENSION_H -DTORCH_EXTENSION_NAME=MultiScaleDeformableAttention -D_GLIBCXX_USE_CXX11_ABI=0 -std=c++14
cc1plus: warning: command line option '-Wstrict-prototypes' is valid for C/ObjC but not for C++
[2/3] c++ -MMD -MF /github/MOTR/models/ops/build/temp.linux-x86_64-3.7/github/MOTR/models/ops/src/vision.o.d -pthread -B /home/miniconda3/envs/motr/compiler_compat -Wl,--sysroot=/ -Wsign-compare -DNDEBUG -g -fwrapv -O3 -Wall -Wstrict-prototypes -fPIC -DWITH_CUDA -I/github/MOTR/models/ops/src -I/home/miniconda3/envs/motr/lib/python3.7/site-packages/torch/include -I/home/miniconda3/envs/motr/lib/python3.7/site-packages/torch/include/torch/csrc/api/include -I/home/miniconda3/envs/motr/lib/python3.7/site-packages/torch/include/TH -I/home/miniconda3/envs/motr/lib/python3.7/site-packages/torch/include/THC -I/usr/local/cuda/include -I/home/miniconda3/envs/motr/include/python3.7m -c -c /github/MOTR/models/ops/src/vision.cpp -o /github/MOTR/models/ops/build/temp.linux-x86_64-3.7/github/MOTR/models/ops/src/vision.o -DTORCH_API_INCLUDE_EXTENSION_H -DTORCH_EXTENSION_NAME=MultiScaleDeformableAttention -D_GLIBCXX_USE_CXX11_ABI=0 -std=c++14
cc1plus: warning: command line option '-Wstrict-prototypes' is valid for C/ObjC but not for C++
In file included from /github/MOTR/models/ops/src/vision.cpp:11:
/github/MOTR/models/ops/src/ms_deform_attn.h: In function 'at::Tensor ms_deform_attn_forward(const at::Tensor&, const at::Tensor&, const at::Tensor&, const at::Tensor&, const at::Tensor&, int)':
/github/MOTR/models/ops/src/ms_deform_attn.h:29:20: warning: 'at::DeprecatedTypeProperties& at::Tensor::type() const' is deprecated: Tensor.type() is deprecated. Instead use Tensor.options(), which in many ca
ses (e.g. in a constructor) is a drop-in replacement. If you were using data from type(), that is now available from Tensor itself, so instead of tensor.type().scalar_type(), use tensor.scalar_type() instead
and instead of tensor.type().backend() use tensor.device(). [-Wdeprecated-declarations]
     if (value.type().is_cuda())
                    ^
In file included from /home/miniconda3/envs/motr/lib/python3.7/site-packages/torch/include/ATen/Tensor.h:11,
                 from /home/miniconda3/envs/motr/lib/python3.7/site-packages/torch/include/ATen/Context.h:4,
                 from /home/miniconda3/envs/motr/lib/python3.7/site-packages/torch/include/ATen/ATen.h:5,
                 from /home/miniconda3/envs/motr/lib/python3.7/site-packages/torch/include/torch/csrc/api/include/torch/types.h:3,
                 from /home/miniconda3/envs/motr/lib/python3.7/site-packages/torch/include/torch/csrc/api/include/torch/data/dataloader_options.h:4,
                 from /home/miniconda3/envs/motr/lib/python3.7/site-packages/torch/include/torch/csrc/api/include/torch/data/dataloader/base.h:3,
                 from /home/miniconda3/envs/motr/lib/python3.7/site-packages/torch/include/torch/csrc/api/include/torch/data/dataloader/stateful.h:3,
                 from /home/miniconda3/envs/motr/lib/python3.7/site-packages/torch/include/torch/csrc/api/include/torch/data/dataloader.h:3,
                 from /home/miniconda3/envs/motr/lib/python3.7/site-packages/torch/include/torch/csrc/api/include/torch/data.h:3,
                 from /home/miniconda3/envs/motr/lib/python3.7/site-packages/torch/include/torch/csrc/api/include/torch/all.h:4,
                 from /home/miniconda3/envs/motr/lib/python3.7/site-packages/torch/include/torch/extension.h:4,
                 from /github/MOTR/models/ops/src/cpu/ms_deform_attn_cpu.h:12,
                 from /github/MOTR/models/ops/src/ms_deform_attn.h:13,
                 from /github/MOTR/models/ops/src/vision.cpp:11:
/home/miniconda3/envs/motr/lib/python3.7/site-packages/torch/include/ATen/core/TensorBody.h:262:30: note: declared here
   DeprecatedTypeProperties & type() const {
                              ^~~~
In file included from /github/MOTR/models/ops/src/vision.cpp:11:
/github/MOTR/models/ops/src/ms_deform_attn.h: In function 'std::vector<at::Tensor> ms_deform_attn_backward(const at::Tensor&, const at::Tensor&, const at::Tensor&, const at::Tensor&, const at::Tensor&, const at::Tensor&, int)':
/github/MOTR/models/ops/src/ms_deform_attn.h:51:20: warning: 'at::DeprecatedTypeProperties& at::Tensor::type() const' is deprecated: Tensor.type() is deprecated. Instead use Tensor.options(), which in many cases (e.g. in a constructor) is a drop-in replacement. If you were using data from type(), that is now available from Tensor itself, so instead of tensor.type().scalar_type(), use tensor.scalar_type() instead and instead of tensor.type().backend() use tensor.device(). [-Wdeprecated-declarations]
     if (value.type().is_cuda())
                    ^
In file included from /home/miniconda3/envs/motr/lib/python3.7/site-packages/torch/include/ATen/Tensor.h:11,
                 from /home/miniconda3/envs/motr/lib/python3.7/site-packages/torch/include/ATen/Context.h:4,
                 from /home/miniconda3/envs/motr/lib/python3.7/site-packages/torch/include/ATen/ATen.h:5,
                 from /home/miniconda3/envs/motr/lib/python3.7/site-packages/torch/include/torch/csrc/api/include/torch/types.h:3,
                 from /home/miniconda3/envs/motr/lib/python3.7/site-packages/torch/include/torch/csrc/api/include/torch/data/dataloader_options.h:4,
                 from /home/miniconda3/envs/motr/lib/python3.7/site-packages/torch/include/torch/csrc/api/include/torch/data/dataloader/base.h:3,
                 from /home/miniconda3/envs/motr/lib/python3.7/site-packages/torch/include/torch/csrc/api/include/torch/data/dataloader/stateful.h:3,
                 from /home/miniconda3/envs/motr/lib/python3.7/site-packages/torch/include/torch/csrc/api/include/torch/data/dataloader.h:3,
                 from /home/miniconda3/envs/motr/lib/python3.7/site-packages/torch/include/torch/csrc/api/include/torch/data.h:3,
                 from /home/miniconda3/envs/motr/lib/python3.7/site-packages/torch/include/torch/csrc/api/include/torch/all.h:4,
                 from /home/miniconda3/envs/motr/lib/python3.7/site-packages/torch/include/torch/extension.h:4,
                 from /github/MOTR/models/ops/src/cpu/ms_deform_attn_cpu.h:12,
                 from /github/MOTR/models/ops/src/ms_deform_attn.h:13,
                 from /github/MOTR/models/ops/src/vision.cpp:11:
/home/miniconda3/envs/motr/lib/python3.7/site-packages/torch/include/ATen/core/TensorBody.h:262:30: note: declared here
   DeprecatedTypeProperties & type() const {
                              ^~~~
[3/3] /usr/local/cuda/bin/nvcc -DWITH_CUDA -I/github/MOTR/models/ops/src -I/home/miniconda3/envs/motr/lib/python3.7/site-packages/torch/include -I/home/miniconda3/envs/motr/lib/python3.7/site-packages/torch/include/torch/csrc/api/include -I/home/miniconda3/envs/motr/lib/python3.7/site-packages/torch/include/TH -I/home/miniconda3/envs/motr/lib/python3.7/site-packages/torch/include/THC -I/usr/local/cuda/include -I/home/miniconda3/envs/motr/include/python3.7m -c -c /github/MOTR/models/ops/src/cuda/ms_deform_attn_cuda.cu -o /github/MOTR/models/ops/build/temp.linux-x86_64-3.7/github/MOTR/models/ops/src/cuda/ms_deform_attn_cuda.o -D__CUDA_NO_HALF_OPERATORS__ -D__CUDA_NO_HALF_CONVERSIONS__ -D__CUDA_NO_HALF2_OPERATORS__ --expt-relaxed-constexpr --compiler-options ''"'"'-fPIC'"'"'' -DCUDA_HAS_FP16=1 -D__CUDA_NO_HALF_OPERATORS__ -D__CUDA_NO_HALF_CONVERSIONS__ -D__CUDA_NO_HALF2_OPERATORS__ -DTORCH_API_INCLUDE_EXTENSION_H -DTORCH_EXTENSION_NAME=MultiScaleDeformableAttention -D_GLIBCXX_USE_CXX11_ABI=0 -gencode=arch=compute_61,code=sm_61 -std=c++14
FAILED: /github/MOTR/models/ops/build/temp.linux-x86_64-3.7/github/MOTR/models/ops/src/cuda/ms_deform_attn_cuda.o
/usr/local/cuda/bin/nvcc -DWITH_CUDA -I/github/MOTR/models/ops/src -I/home/miniconda3/envs/motr/lib/python3.7/site-packages/torch/include -I/home/miniconda3/envs/motr/lib/python3.7/site-packages/torch/include/torch/csrc/api/include -I/home/miniconda3/envs/motr/lib/python3.7/site-packages/torch/include/TH -I/home/miniconda3/envs/motr/lib/python3.7/site-packages/torch/include/THC -I/usr/local/cuda/include -I/home/miniconda3/envs/motr/include/python3.7m -c -c /github/MOTR/models/ops/src/cuda/ms_deform_attn_cuda.cu -o /github/MOTR/models/ops/build/temp.linux-x86_64-3.7/github/MOTR/models/ops/src/cuda/ms_deform_attn_cuda.o -D__CUDA_NO_HALF_OPERATORS__ -D__CUDA_NO_HALF_CONVERSIONS__ -D__CUDA_NO_HALF2_OPERATORS__ --expt-relaxed-constexpr --compiler-options ''"'"'-fPIC'"'"'' -DCUDA_HAS_FP16=1 -D__CUDA_NO_HALF_OPERATORS__ -D__CUDA_NO_HALF_CONVERSIONS__ -D__CUDA_NO_HALF2_OPERATORS__ -DTORCH_API_INCLUDE_EXTENSION_H -DTORCH_EXTENSION_NAME=MultiScaleDeformableAttention -D_GLIBCXX_USE_CXX11_ABI=0 -gencode=arch=compute_61,code=sm_61 -std=c++14
/github/MOTR/models/ops/src/cuda/ms_deform_im2col_cuda.cuh(261): warning: variable "q_col" was declared but never referenced
          detected during instantiation of "void ms_deformable_im2col_cuda(cudaStream_t, const scalar_t *, const int64_t *, const int64_t *, const scalar_t *, const scalar_t *, int, int, int, int, int, int, int, scalar_t *) [with scalar_t=double]"
/github/MOTR/models/ops/src/cuda/ms_deform_attn_cuda.cu(64): here

/github/MOTR/models/ops/src/cuda/ms_deform_im2col_cuda.cuh(762): warning: variable "q_col" was declared but never referenced
          detected during instantiation of "void ms_deformable_col2im_cuda(cudaStream_t, const scalar_t *, const scalar_t *, const int64_t *, const int64_t *, const scalar_t *, const scalar_t *, int, int, int, int, int, int, int, scalar_t *, scalar_t *, scalar_t *) [with scalar_t=double]"
/github/MOTR/models/ops/src/cuda/ms_deform_attn_cuda.cu(134): here

/github/MOTR/models/ops/src/cuda/ms_deform_im2col_cuda.cuh(872): warning: variable "q_col" was declared but never referenced
          detected during instantiation of "void ms_deformable_col2im_cuda(cudaStream_t, const scalar_t *, const scalar_t *, const int64_t *, const int64_t *, const scalar_t *, const scalar_t *, int, int, int, int, int, int, int, scalar_t *, scalar_t *, scalar_t *) [with scalar_t=double]"
/github/MOTR/models/ops/src/cuda/ms_deform_attn_cuda.cu(134): here

/github/MOTR/models/ops/src/cuda/ms_deform_im2col_cuda.cuh(331): warning: variable "q_col" was declared but never referenced
          detected during instantiation of "void ms_deformable_col2im_cuda(cudaStream_t, const scalar_t *, const scalar_t *, const int64_t *, const int64_t *, const scalar_t *, const scalar_t *, int, int, int, int, int, int, int, scalar_t *, scalar_t *, scalar_t *) [with scalar_t=double]"
/github/MOTR/models/ops/src/cuda/ms_deform_attn_cuda.cu(134): here

/github/MOTR/models/ops/src/cuda/ms_deform_im2col_cuda.cuh(436): warning: variable "q_col" was declared but never referenced
          detected during instantiation of "void ms_deformable_col2im_cuda(cudaStream_t, const scalar_t *, const scalar_t *, const int64_t *, const int64_t *, const scalar_t *, const scalar_t *, int, int, int, int, int, int, int, scalar_t *, scalar_t *, scalar_t *) [with scalar_t=double]"
/github/MOTR/models/ops/src/cuda/ms_deform_attn_cuda.cu(134): here

/github/MOTR/models/ops/src/cuda/ms_deform_im2col_cuda.cuh(544): warning: variable "q_col" was declared but never referenced
          detected during instantiation of "void ms_deformable_col2im_cuda(cudaStream_t, const scalar_t *, const scalar_t *, const int64_t *, const int64_t *, const scalar_t *, const scalar_t *, int, int, int, int, int, int, int, scalar_t *, scalar_t *, scalar_t *) [with scalar_t=double]"
/github/MOTR/models/ops/src/cuda/ms_deform_attn_cuda.cu(134): here

/github/MOTR/models/ops/src/cuda/ms_deform_im2col_cuda.cuh(649): warning: variable "q_col" was declared but never referenced
          detected during instantiation of "void ms_deformable_col2im_cuda(cudaStream_t, const scalar_t *, const scalar_t *, const int64_t *, const int64_t *, const scalar_t *, const scalar_t *, int, int, int, int, int, int, int, scalar_t *, scalar_t *, scalar_t *) [with scalar_t=double]"
/github/MOTR/models/ops/src/cuda/ms_deform_attn_cuda.cu(134): here

/usr/local/include/c++/8.2.0/bits/basic_string.tcc: In instantiation of 'static std::basic_string<_CharT, _Traits, _Alloc>::_Rep* std::basic_string<_CharT, _Traits, _Alloc>::_Rep::_S_create(std::basic_string<_CharT, _Traits, _Alloc>::size_type, std::basic_string<_CharT, _Traits, _Alloc>::size_type, const _Alloc&) [with _CharT = char16_t; _Traits = std::char_traits<char16_t>; _Alloc = std::allocator<char16_t>; std::basic_string<_CharT, _Traits, _Alloc>::size_type = long unsigned int]':
/usr/local/include/c++/8.2.0/bits/basic_string.tcc:578:28:   required from 'static _CharT* std::basic_string<_CharT, _Traits, _Alloc>::_S_construct(_InIterator, _InIterator, const _Alloc&, std::forward_iterator_tag) [with _FwdIterator = const char16_t*; _CharT = char16_t; _Traits = std::char_traits<char16_t>; _Alloc = std::allocator<char16_t>]'
/usr/local/include/c++/8.2.0/bits/basic_string.h:5043:20:   required from 'static _CharT* std::basic_string<_CharT, _Traits, _Alloc>::_S_construct_aux(_InIterator, _InIterator, const _Alloc&, std::__false_type) [with _InIterator = const char16_t*; _CharT = char16_t; _Traits = std::char_traits<char16_t>; _Alloc = std::allocator<char16_t>]'
/usr/local/include/c++/8.2.0/bits/basic_string.h:5064:24:   required from 'static _CharT* std::basic_string<_CharT, _Traits, _Alloc>::_S_construct(_InIterator, _InIterator, const _Alloc&) [with _InIterator = const char16_t*; _CharT = char16_t; _Traits = std::char_traits<char16_t>; _Alloc = std::allocator<char16_t>]'
/usr/local/include/c++/8.2.0/bits/basic_string.tcc:656:134:   required from 'std::basic_string<_CharT, _Traits, _Alloc>::basic_string(const _CharT*, std::basic_string<_CharT, _Traits, _Alloc>::size_type, const _Alloc&) [with _CharT = char16_t; _Traits = std::char_traits<char16_t>; _Alloc = std::allocator<char16_t>; std::basic_string<_CharT, _Traits, _Alloc>::size_type = long unsigned int]'
/usr/local/include/c++/8.2.0/bits/basic_string.h:6716:95:   required from here
/usr/local/include/c++/8.2.0/bits/basic_string.tcc:1067:1: error: cannot call member function 'void std::basic_string<_CharT, _Traits, _Alloc>::_Rep::_M_set_sharable() [with _CharT = char16_t; _Traits = std::char_traits<char16_t>; _Alloc = std::allocator<char16_t>]' without object
       __p->_M_set_sharable();
 ^     ~~~~~~~~~
/usr/local/include/c++/8.2.0/bits/basic_string.tcc: In instantiation of 'static std::basic_string<_CharT, _Traits, _Alloc>::_Rep* std::basic_string<_CharT, _Traits, _Alloc>::_Rep::_S_create(std::basic_string<_CharT, _Traits, _Alloc>::size_type, std::basic_string<_CharT, _Traits, _Alloc>::size_type, const _Alloc&) [with _CharT = char32_t; _Traits = std::char_traits<char32_t>; _Alloc = std::allocator<char32_t>; std::basic_string<_CharT, _Traits, _Alloc>::size_type = long unsigned int]':
/usr/local/include/c++/8.2.0/bits/basic_string.tcc:578:28:   required from 'static _CharT* std::basic_string<_CharT, _Traits, _Alloc>::_S_construct(_InIterator, _InIterator, const _Alloc&, std::forward_iterator_tag) [with _FwdIterator = const char32_t*; _CharT = char32_t; _Traits = std::char_traits<char32_t>; _Alloc = std::allocator<char32_t>]'
/usr/local/include/c++/8.2.0/bits/basic_string.h:5043:20:   required from 'static _CharT* std::basic_string<_CharT, _Traits, _Alloc>::_S_construct_aux(_InIterator, _InIterator, const _Alloc&, std::__false_type) [with _InIterator = const char32_t*; _CharT = char32_t; _Traits = std::char_traits<char32_t>; _Alloc = std::allocator<char32_t>]'
/usr/local/include/c++/8.2.0/bits/basic_string.h:5064:24:   required from 'static _CharT* std::basic_string<_CharT, _Traits, _Alloc>::_S_construct(_InIterator, _InIterator, const _Alloc&) [with _InIterator = const char32_t*; _CharT = char32_t; _Traits = std::char_traits<char32_t>; _Alloc = std::allocator<char32_t>]'
/usr/local/include/c++/8.2.0/bits/basic_string.tcc:656:134:   required from 'std::basic_string<_CharT, _Traits, _Alloc>::basic_string(const _CharT*, std::basic_string<_CharT, _Traits, _Alloc>::size_type, const _Alloc&) [with _CharT = char32_t; _Traits = std::char_traits<char32_t>; _Alloc = std::allocator<char32_t>; std::basic_string<_CharT, _Traits, _Alloc>::size_type = long unsigned int]'
/usr/local/include/c++/8.2.0/bits/basic_string.h:6721:95:   required from here
/usr/local/include/c++/8.2.0/bits/basic_string.tcc:1067:1: error: cannot call member function 'void std::basic_string<_CharT, _Traits, _Alloc>::_Rep::_M_set_sharable() [with _CharT = char32_t; _Traits = std::char_traits<char32_t>; _Alloc = std::allocator<char32_t>]' without object
ninja: build stopped: subcommand failed.
Traceback (most recent call last):
  File "/home/miniconda3/envs/motr/lib/python3.7/site-packages/torch/utils/cpp_extension.py", line 1423, in _run_ninja_build
    check=True)
  File "/home/miniconda3/envs/motr/lib/python3.7/subprocess.py", line 512, in run
    output=stdout, stderr=stderr)
subprocess.CalledProcessError: Command '['ninja', '-v']' returned non-zero exit status 1.

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "setup.py", line 70, in <module>
    cmdclass={"build_ext": torch.utils.cpp_extension.BuildExtension},
  File "/home/miniconda3/envs/motr/lib/python3.7/site-packages/setuptools/__init__.py", line 153, in setup
    return distutils.core.setup(**attrs)
  File "/home/miniconda3/envs/motr/lib/python3.7/distutils/core.py", line 148, in setup
    dist.run_commands()
  File "/home/miniconda3/envs/motr/lib/python3.7/distutils/dist.py", line 966, in run_commands
    self.run_command(cmd)
  File "/home/miniconda3/envs/motr/lib/python3.7/distutils/dist.py", line 985, in run_command
    cmd_obj.run()
  File "/home/miniconda3/envs/motr/lib/python3.7/distutils/command/build.py", line 135, in run
    self.run_command(cmd_name)
  File "/home/miniconda3/envs/motr/lib/python3.7/distutils/cmd.py", line 313, in run_command
    self.distribution.run_command(command)
  File "/home/miniconda3/envs/motr/lib/python3.7/distutils/dist.py", line 985, in run_command
    cmd_obj.run()
  File "/home/miniconda3/envs/motr/lib/python3.7/site-packages/setuptools/command/build_ext.py", line 79, in run
    _build_ext.run(self)
  File "/home/miniconda3/envs/motr/lib/python3.7/site-packages/Cython/Distutils/old_build_ext.py", line 186, in run
    _build_ext.build_ext.run(self)
  File "/home/miniconda3/envs/motr/lib/python3.7/distutils/command/build_ext.py", line 340, in run
    self.build_extensions()
  File "/home/miniconda3/envs/motr/lib/python3.7/site-packages/torch/utils/cpp_extension.py", line 603, in build_extensions
    build_ext.build_extensions(self)
  File "/home/miniconda3/envs/motr/lib/python3.7/site-packages/Cython/Distutils/old_build_ext.py", line 195, in build_extensions
    _build_ext.build_ext.build_extensions(self)
  File "/home/miniconda3/envs/motr/lib/python3.7/distutils/command/build_ext.py", line 449, in build_extensions
    self._build_extensions_serial()
  File "/home/miniconda3/envs/motr/lib/python3.7/distutils/command/build_ext.py", line 474, in _build_extensions_serial
    self.build_extension(ext)
  File "/home/miniconda3/envs/motr/lib/python3.7/site-packages/setuptools/command/build_ext.py", line 196, in build_extension
    _build_ext.build_extension(self, ext)
  File "/home/miniconda3/envs/motr/lib/python3.7/distutils/command/build_ext.py", line 534, in build_extension
    depends=ext.depends)
  File "/home/miniconda3/envs/motr/lib/python3.7/site-packages/torch/utils/cpp_extension.py", line 437, in unix_wrap_ninja_compile
    with_cuda=with_cuda)
  File "/home/miniconda3/envs/motr/lib/python3.7/site-packages/torch/utils/cpp_extension.py", line 1163, in _write_ninja_file_and_compile_objects
    error_prefix='Error compiling objects for extension')
  File "/home/miniconda3/envs/motr/lib/python3.7/site-packages/torch/utils/cpp_extension.py", line 1436, in _run_ninja_build
    raise RuntimeError(message)
RuntimeError: Error compiling objects for extension
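Note that the fatal errors come from GCC 8.2's own libstdc++ headers (basic_string.tcc), not from the MOTR sources: nvcc is known to fail on the char16_t/char32_t basic_string code in GCC 8's standard library. A commonly reported workaround is to rebuild with an older host compiler that your CUDA release supports. The sketch below is an assumption-laden example, not a guaranteed fix: the gcc-7/g++-7 paths are placeholders for whatever compatible compiler is installed on your system.

```shell
# Sketch only: the /usr/bin/gcc-7 and /usr/bin/g++-7 paths are assumptions;
# substitute a host compiler version your CUDA toolkit actually supports.
export CC=/usr/bin/gcc-7           # C compiler for the extension build
export CXX=/usr/bin/g++-7          # C++ compiler; torch cpp_extension reads CXX
export CUDAHOSTCXX=/usr/bin/g++-7  # host compiler hint for nvcc (CMake-style builds)
echo "building with CXX=$CXX"
# Then rebuild the ops package from a clean tree:
#   cd models/ops && rm -rf build && python setup.py build install
```

If the build still picks up the newer compiler, nvcc can be pointed at the host compiler directly with its `-ccbin` flag.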

Vue3 Warning: [Vue warn]: Extraneous non-emits event listeners (changeParentProps) were passed to component

The following warning appears during child-to-parent component communication in Vue 3.
[Vue warn]: Extraneous non-emits event listeners (changeParentProps) were passed to component but could not be automatically inherited because component renders fragment or text root nodes. If the listener is intended to be a component custom event listener only, declare it using the "emits" option.

Solution: declare the custom event name in the child component's emits option:

emits: ['changeParentProps']

<template>
  <div>
    Child component
  </div>
  <button @click="changeParentProps">Change the parent component's prop</button>
</template>
<script lang="ts">
import { defineComponent } from 'vue' // in Vue 3, defineComponent comes from the core 'vue' package
export default defineComponent({
  emits: ['changeParentProps'],
  props: {
    data: {
      type: String,
      default: ''
    }
  },
  setup (props, { emit }) {
    // console.log(props)
    const changeParentProps = () => {
      emit('changeParentProps', '123')
    }
    return {
      changeParentProps
    }
  }
})
</script>
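For reference, the parent side receives the declared event like any other listener. A minimal sketch follows; the `Child.vue` import path, the `Child` component name, and the parent's `msg` ref are illustrative assumptions, not part of the original code:

```vue
<template>
  <!-- a kebab-case listener matches the child's camelCase emit name -->
  <Child :data="msg" @change-parent-props="onChange" />
</template>
<script lang="ts">
import { defineComponent, ref } from 'vue'
import Child from './Child.vue' // assumed path to the child component above
export default defineComponent({
  components: { Child },
  setup () {
    const msg = ref('initial')
    // val is the '123' payload emitted by the child
    const onChange = (val: string) => { msg.value = val }
    return { msg, onChange }
  }
})
</script>
```

Because `changeParentProps` is declared in the child's `emits` option, Vue no longer tries to fall through the listener as an inherited attribute, and the warning disappears.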