Author Archives: Robins

No supported authentication methods available

When TortoiseGit pushes to a remote GitHub repository, it reports the error "No supported authentication methods available".

Solution: TortoiseGit's bundled SSH client (PuTTY's plink) conflicts with Git's OpenSSH keys, so point TortoiseGit at Git's own ssh.exe instead.

Open TortoiseGit → Settings → Network.

Set the "SSH client" field to the ssh.exe that ships with Git for Windows, e.g. E:\Git\Git\usr\bin\ssh.exe.

After changing the SSH path, click Apply and then OK.
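The same idea works outside the TortoiseGit GUI: plain Git can be told which SSH client to use via `core.sshCommand`. A minimal sketch, assuming Git for Windows is installed at the example path below (adjust it to your own installation):

```shell
# Point Git at the OpenSSH client bundled with Git for Windows
# (the path is an example, not a required location).
git config --global core.sshCommand "E:/Git/Git/usr/bin/ssh.exe"

# Verify that GitHub now accepts the key-based login.
ssh -T git@github.com
```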


[Unity] [FairyGUI] [ILRuntime] Hot update error: registering the delegate for GList's item extension

When GList's item extension is used in ILRuntime hot-update code, the Unity editor reports an error at runtime:

KeyNotFoundException: Cannot find convertor for FairyGUI.ListItemProvider

Add the following code to the void RegistDelegate() function of ILRuntimeWrapper.cs in the Unity project:

appdomain.DelegateManager.RegisterDelegateConvertor<FairyGUI.ListItemProvider>((act) =>
{
    return new FairyGUI.ListItemProvider((index) =>
    {
        return ((Func<System.Int32, System.String>)act)(index);
    });
});

Running in the Unity editor, another error appears:

KeyNotFoundException: Cannot find Delegate Adapter for:uicreatetroops.GetListItemResource(Int32 index), Please add following code:
appdomain.DelegateManager.RegisterFunctionDelegate<System.Int32, System.String>();

Add the following code to the same void RegistDelegate() function of ILRuntimeWrapper.cs:

appdomain.DelegateManager.RegisterFunctionDelegate<System.Int32, System.String>();

The final void RegistDelegate() in ILRuntimeWrapper.cs should look like this:

...
void RegistDelegate()
{
...
        appdomain.DelegateManager.RegisterDelegateConvertor<FairyGUI.ListItemProvider>((act) =>
        {
            return new FairyGUI.ListItemProvider((index) =>
            {
                return ((System.Func<System.Int32, System.String>)act)(index);
            });
        });
        
        appdomain.DelegateManager.RegisterFunctionDelegate<System.Int32, System.String>();
...
}
...


Undefined symbol: cblas_sgemm_alloc error after installing PyTorch 1.0.0

After installing PyTorch 1.0, I ran into the following problem:

>>> import torch
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "xxxxx/site-packages/torch/__init__.py", line 84, in <module>
    from torch._C import *
ImportError: xxxxx/site-packages/torch/lib/libmkldnn.so.0: undefined symbol: cblas_sgemm_alloc

Some people online have solved this by opening ~/.bashrc and finding a declaration like this:

export LD_LIBRARY_PATH=/usr/local/lib:$LD_LIBRARY_PATH

Then comment it out and the problem is solved.

But when I open my ~/.bashrc there is no such line. After much searching, I finally solved the problem by running:

conda install -c anaconda mkl

After running this command, import torch succeeded!
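A quick way to check which of the two situations you are in (a diagnostic sketch; the commands assume a conda environment is active):

```shell
# Check whether LD_LIBRARY_PATH points at a system MKL/BLAS that
# shadows the conda one (if it does, commenting it out in ~/.bashrc
# is the first fix to try; mine was empty).
echo "$LD_LIBRARY_PATH"

# Otherwise, install MKL into the active conda environment, then re-test.
conda install -c anaconda mkl
python -c "import torch; print(torch.__version__)"
```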

Hope this helps!

NoClassDefFoundError: org.springframework.validation.annotation.ValidationAnnotationUtils

After a recent project upgrade, this error appeared inexplicably. A colleague had just run into it and helped solve it, so I'm recording it here.

This class lives in the spring-context package, so upgrading that package fixes it:

<spring-context.version>5.3.7</spring-context.version>

<dependency>
    <groupId>org.springframework</groupId>
    <artifactId>spring-context</artifactId>
    <version>${spring-context.version}</version>
</dependency>
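If the error persists after the upgrade, an older spring-context may still be resolved transitively; Maven's dependency tree shows which version actually wins. A sketch, run from the project root:

```shell
# List every resolved path to spring-context and the version chosen.
mvn dependency:tree -Dincludes=org.springframework:spring-context
```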

On the usage of ‘ref.stor.type search’ field in SAP WM movement type


I am responsible for the MM and WM implementation in the current project. Based on the layout of the customer's warehouse storage areas, I enabled the following storage types in the WM-level storage type design:

001: high rack storage area for raw materials

005: fixed shelf storage area for raw materials

PBL: isolation area beside the production line (production red zone)

QBL: quality isolation area (quality red zone)

as well as storage areas for in-house semi-finished products and finished products.

A storage type indicator is maintained when raw materials are received; different types of materials get different indicators, so that during putaway or picking the system automatically proposes the designated storage type.

For example, some raw materials picked for production staging should be issued first from 001 and then from 005, while others should be issued first from 005 and then from 001. I completed the corresponding configuration in storage type search for these requirements.

When a business department needs to quarantine materials for some reason, it moves them to PBL or QBL. If the quality department then decides the quarantined materials must be scrapped, the business does not want the transfer order (TO) created at goods issue to propose 001 or 005, which the system does by default; instead the system should automatically propose issuing from PBL or QBL.

To achieve this, enable 'Ref. stor. type search' in the WM movement type settings. In other words, for the same material, different storage types can be proposed in different business scenarios.

Then, in the storage type search configuration, add the following entries:

The prerequisite, of course, is that the storage type search takes 'Ref. storage type search' into account, as shown in the figure below:

With this in place, after a scrap goods issue is posted in the foreground with MB1A + movement type 555, executing LT06 to create the TO automatically proposes issuing from PBL & QBL:

In other scenarios, such as goods issue to a cost center, the system still proposes issuing from 001/005 and the other storage areas according to the normal picking strategy.

2016-12-01, written in Wuhan Economic Development Zone

AttributeError: ‘_io.TextIOWrapper‘ object has no attribute ‘softspace‘


1. The problem 2. Solutions

1. The problem

While learning the file module, I ran into the error shown in the title. How can it be solved?

2. Solutions

This is a version issue. As you may know, Python is split into Python 2 and Python 3, and checking the official website (www.python.org) shows that the softspace attribute was removed in Python 3.

So switch to Python 2.7 with the py -2 launcher (on Windows, Python 2 and Python 3 can be installed side by side, and py -2 / py -3 switch between them; install Python 2 first if it is missing). Under Python 2.7, opening a file and reading its softspace attribute works normally.
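The version difference is easy to demonstrate: Python 2 file objects carried a softspace attribute used by the print statement, while Python 3's io text streams do not. A minimal sketch under Python 3:

```python
import io

# Python 3 text streams come from the io module and no longer carry
# a 'softspace' attribute, which is why accessing it on objects like
# _io.TextIOWrapper raises AttributeError.
stream = io.StringIO()
print(hasattr(stream, "softspace"))  # False
```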

Error when adapting rem for mobile with Vue and Vant

Using Vant with rem adaptation (see the Vant documentation):

If you need rem adaptation, the following two tools are recommended:

postcss-pxtorem — a PostCSS plugin that converts px units into rem
amfe-flexible (lib-flexible) — sets the rem base value

npm install postcss-pxtorem --save-dev
npm i -S amfe-flexible

Create postcss.config.js in the project root:

module.exports = {
    plugins: {
      'autoprefixer': {
        browsers: ['Android >= 4.0', 'iOS >= 8']
      },
      'postcss-pxtorem': {
        rootValue: 37.5,
        propList: ['*']
      }
    }
  }
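With rootValue: 37.5, the plugin divides every px value by 37.5 to produce rem, matching a 375px design draft where 1rem = 37.5px. The conversion itself is simple arithmetic; the helper below is a hypothetical illustration, not part of the plugin:

```python
def px_to_rem(px: float, root_value: float = 37.5) -> float:
    """Convert a px value to rem the way postcss-pxtorem does:
    divide by the configured rootValue."""
    return px / root_value

print(px_to_rem(75))   # 2.0  (75px -> 2rem)
print(px_to_rem(375))  # 10.0 (full 375px design width -> 10rem)
```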

Then import it in main.js:

import 'amfe-flexible'

Running npm run serve then reported an error.

Checking package.json showed the postcss-pxtorem version was too high:

"postcss-pxtorem": "^6.0.0",

Downgrade the version:

npm i [email protected]

Run npm run serve again and it starts successfully.

[Solved] error: failed to run custom build command for `librocksdb-sys v6.17.3`

The Rust build reports an error:

error: failed to run custom build command for `librocksdb-sys v6.17.3`

Details:


...

   Compiling ed25519-dalek v1.0.1
   Compiling tracing-subscriber v0.2.17
   Compiling schnorrkel v0.9.1
   Compiling addr2line v0.14.1
   Compiling prost-build v0.7.0
   Compiling mio-uds v0.6.8
error: failed to run custom build command for `librocksdb-sys v6.17.3`

Caused by:
  process didn't exit successfully: `/home/y/IdeaProjects/MinixChain/target/release/build/librocksdb-sys-6de902cd8dc81c39/build-script-build` (exit code: 101)
  --- stderr
  thread 'main' panicked at 'Unable to find libclang: "couldn\'t find any valid shared libraries matching: [\'libclang.so\', \'libclang-*.so\', \'libclang.so.*\', \'libclang-*.so.*\'], set the `LIBCLANG_PATH` environment variable to a path where one of these files can be found (invalid: [])"', /home/y/.cargo/registry/src/github.com-1ecc6299db9ec823/bindgen-0.57.0/src/lib.rs:1975:31
  note: run with `RUST_BACKTRACE=1` environment variable to display a backtrace
warning: build failed, waiting for other jobs to finish...
error: build failed

Solution:

sudo apt install llvm clang
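As the panic message itself suggests, there is also an alternative when clang is already installed but libclang sits outside the default search path: point bindgen at the directory containing libclang.so via LIBCLANG_PATH. A sketch; the path below is an example, adjust it to your system:

```shell
# Tell bindgen where libclang.so lives if it is not on the
# default search path (example directory, adjust to yours).
export LIBCLANG_PATH=/usr/lib/llvm-10/lib
cargo build --release
```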


[Solved] Hive tez due to: ROOT_INPUT_INIT_FAILURE java.lang.IllegalArgumentException: Illegal Capacity: -38297

Hive on Tez error:
Status: Failed
Vertex failed, vertexName=Map 1, vertexId=vertex_1625122203217_0010_1_00, diagnostics=[Vertex vertex_1625122203217_0010_1_00 [Map 1] killed/failed due to:ROOT_INPUT_INIT_FAILURE, Vertex Input: info initializer failed, vertex=vertex_1625122203217_0010_1_00 [Map 1], java.lang.IllegalArgumentException: Illegal Capacity: -38297

----------------------------------------------------------------------------------------------
VERTICES      MODE        STATUS  TOTAL  COMPLETED  RUNNING  PENDING  FAILED  KILLED
----------------------------------------------------------------------------------------------
Map 1            container  INITIALIZING     -1          0        0       -1       0       0
Map 2            container  INITIALIZING     -1          0        0       -1       0       0
----------------------------------------------------------------------------------------------
VERTICES: 00/02  [>>--------------------------] 0%    ELAPSED TIME: 1.61 s
----------------------------------------------------------------------------------------------
Status: Failed
Vertex failed, vertexName=Map 1, vertexId=vertex_1625122203217_0010_1_00, diagnostics=[Vertex vertex_1625122203217_0010_1_00 [Map 1] killed/failed due to:ROOT_INPUT_INIT_FAILURE, Vertex Input: info initializer failed, vertex=vertex_1625122203217_0010_1_00 [Map 1], java.lang.IllegalArgumentException: Illegal Capacity: -38297
at java.util.ArrayList.<init>(ArrayList.java:156)
at org.apache.hadoop.mapred.FileInputFormat.getSplits(FileInputFormat.java:350)
at org.apache.hadoop.hive.ql.io.HiveInputFormat.addSplitsForGroup(HiveInputFormat.java:519)
at org.apache.hadoop.hive.ql.io.HiveInputFormat.getSplits(HiveInputFormat.java:765)
at org.apache.hadoop.hive.ql.exec.tez.HiveSplitGenerator.initialize(HiveSplitGenerator.java:243)
at org.apache.tez.dag.app.dag.RootInputInitializerManager$InputInitializerCallable$1.run(RootInputInitializerManager.java:280)
at org.apache.tez.dag.app.dag.RootInputInitializerManager$InputInitializerCallable$1.run(RootInputInitializerManager.java:271)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1730)
at org.apache.tez.dag.app.dag.RootInputInitializerManager$InputInitializerCallable.call(RootInputInitializerManager.java:271)
at org.apache.tez.dag.app.dag.RootInputInitializerManager$InputInitializerCallable.call(RootInputInitializerManager.java:255)
at com.google.common.util.concurrent.TrustedListenableFutureTask$TrustedFutureInterruptibleTask.runInterruptibly(TrustedListenableFutureTask.java:125)
at com.google.common.util.concurrent.InterruptibleTask.run(InterruptibleTask.java:57)
at com.google.common.util.concurrent.TrustedListenableFutureTask.run(TrustedListenableFutureTask.java:78)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
]

Solution: increase the Tez container size for Hive. Edit the configuration with vim $HIVE_HOME/conf/hive-site.xml and add the following property:

<property>
    <name>hive.tez.container.size</name>
    <value>1024</value>
</property>
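The same property can also be set for the current session only, which is handy for testing the value before committing it to hive-site.xml (a sketch, assuming the Hive CLI or Beeline):

```sql
-- Set the Tez container memory for this session only (in MB),
-- then re-run the failing query.
set hive.tez.container.size=1024;
```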