
CDH oozie SSH error [How to Solve]

Problem:

Reason: the oozie user cannot SSH to the remote host as the root user.

Solution:

1. vi /etc/passwd

oozie:x:973:967:Oozie User:/var/lib/oozie:/bin/false

Change it to: oozie:x:973:967:Oozie User:/var/lib/oozie:/bin/bash

2. You can now switch to the user: su - oozie

3. ssh-keygen (generate a key pair)

4. cat /var/lib/oozie/.ssh/id_rsa.pub

5. Append the generated public key to the remote server's /root/.ssh/authorized_keys
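Put together, the whole sequence looks roughly like this (a sketch of the interactive steps; remote-host is a placeholder for the target server, and usermod is an equivalent way to change the login shell):

# On the Oozie host, as root: give the oozie user a login shell
usermod -s /bin/bash oozie

# Switch to the oozie user and generate a key pair (accept the defaults)
su - oozie
ssh-keygen -t rsa

# Install the public key into the remote root account's authorized_keys
ssh-copy-id root@remote-host

# Verify that passwordless SSH now works
ssh root@remote-host 'hostname'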

[Solved] Vue Less Error: Webpack project reports expected indentation of 0 spaces but found 2

Problem Description:

The webpack project reports: expected indentation of 0 spaces but found 2


Solution:

1. Find .eslintrc.js in the project root directory

2. Add 'indent': 'off' to the rules section

3. Restart the project: npm run dev
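The relevant part of .eslintrc.js would look something like this (a sketch; the surrounding configuration is a placeholder for whatever the project already defines):

module.exports = {
    // ...existing configuration...
    rules: {
        // disable ESLint's indentation check so 2-space-indented
        // Less/Vue code no longer triggers the error
        'indent': 'off'
    }
}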



An error is reported when Flink writes files in Hive Parquet format

CDH version: 6.3.2
Flink version: 1.13.2
CDH Hive version: 2.1.1

Error message:

java.lang.NoSuchMethodError: org.apache.parquet.hadoop.ParquetWriter$Builder.<init>(Lorg/apache/parquet/io/OutputFile;)V
	at org.apache.flink.formats.parquet.row.ParquetRowDataBuilder.<init>(ParquetRowDataBuilder.java:55) ~[flink-parquet_2.11-1.13.2.jar:1.13.2]
	at org.apache.flink.formats.parquet.row.ParquetRowDataBuilder$FlinkParquetBuilder.createWriter(ParquetRowDataBuilder.java:124) ~[flink-parquet_2.11-1.13.2.jar:1.13.2]
	at org.apache.flink.formats.parquet.ParquetWriterFactory.create(ParquetWriterFactory.java:56) ~[flink-parquet_2.11-1.13.2.jar:1.13.2]
	at org.apache.flink.table.filesystem.FileSystemTableSink$ProjectionBulkFactory.create(FileSystemTableSink.java:624) ~[flink-table-blink_2.11-1.13.2.jar:1.13.2]
	at org.apache.flink.streaming.api.functions.sink.filesystem.BulkBucketWriter.openNew(BulkBucketWriter.java:75) ~[flink-table-blink_2.11-1.13.2.jar:1.13.2]
	at org.apache.flink.streaming.api.functions.sink.filesystem.OutputStreamBasedPartFileWriter$OutputStreamBasedBucketWriter.openNewInProgressFile(OutputStreamBasedPartFileWriter.java:90) ~[flink-table-blink_2.11-1.13.2.jar:1.13.2]
	at org.apache.flink.streaming.api.functions.sink.filesystem.BulkBucketWriter.openNewInProgressFile(BulkBucketWriter.java:36) ~[flink-table-blink_2.11-1.13.2.jar:1.13.2]
	at org.apache.flink.streaming.api.functions.sink.filesystem.Bucket.rollPartFile(Bucket.java:243) ~[flink-dist_2.11-1.13.2.jar:1.13.2]
	at org.apache.flink.streaming.api.functions.sink.filesystem.Bucket.write(Bucket.java:220) ~[flink-dist_2.11-1.13.2.jar:1.13.2]
	at org.apache.flink.streaming.api.functions.sink.filesystem.Buckets.onElement(Buckets.java:305) ~[flink-dist_2.11-1.13.2.jar:1.13.2]
	at org.apache.flink.streaming.api.functions.sink.filesystem.StreamingFileSinkHelper.onElement(StreamingFileSinkHelper.java:103) ~[flink-dist_2.11-1.13.2.jar:1.13.2]
	at org.apache.flink.table.filesystem.stream.AbstractStreamingWriter.processElement(AbstractStreamingWriter.java:140) ~[flink-table-blink_2.11-1.13.2.jar:1.13.2]
	at org.apache.flink.streaming.runtime.tasks.CopyingChainingOutput.pushToOperator(CopyingChainingOutput.java:71) ~[flink-dist_2.11-1.13.2.jar:1.13.2]
	at org.apache.flink.streaming.runtime.tasks.CopyingChainingOutput.collect(CopyingChainingOutput.java:46) ~[flink-dist_2.11-1.13.2.jar:1.13.2]
	at org.apache.flink.streaming.runtime.tasks.CopyingChainingOutput.collect(CopyingChainingOutput.java:26) ~[flink-dist_2.11-1.13.2.jar:1.13.2]
	at org.apache.flink.streaming.api.operators.CountingOutput.collect(CountingOutput.java:50) ~[flink-dist_2.11-1.13.2.jar:1.13.2]
	at org.apache.flink.streaming.api.operators.CountingOutput.collect(CountingOutput.java:28) ~[flink-dist_2.11-1.13.2.jar:1.13.2]
	at StreamExecCalc$35.processElement(Unknown Source) ~[?:?]
	at org.apache.flink.streaming.runtime.tasks.CopyingChainingOutput.pushToOperator(CopyingChainingOutput.java:71) ~[flink-dist_2.11-1.13.2.jar:1.13.2]
	at org.apache.flink.streaming.runtime.tasks.CopyingChainingOutput.collect(CopyingChainingOutput.java:46) ~[flink-dist_2.11-1.13.2.jar:1.13.2]
	at org.apache.flink.streaming.runtime.tasks.CopyingChainingOutput.collect(CopyingChainingOutput.java:26) ~[flink-dist_2.11-1.13.2.jar:1.13.2]
	at org.apache.flink.streaming.api.operators.CountingOutput.collect(CountingOutput.java:50) ~[flink-dist_2.11-1.13.2.jar:1.13.2]
	at org.apache.flink.streaming.api.operators.CountingOutput.collect(CountingOutput.java:28) ~[flink-dist_2.11-1.13.2.jar:1.13.2]
	at org.apache.flink.table.runtime.operators.source.InputConversionOperator.processElement(InputConversionOperator.java:128) ~[flink-table-blink_2.11-1.13.2.jar:1.13.2]
	at org.apache.flink.streaming.runtime.tasks.CopyingChainingOutput.pushToOperator(CopyingChainingOutput.java:71) ~[flink-dist_2.11-1.13.2.jar:1.13.2]
	at org.apache.flink.streaming.runtime.tasks.CopyingChainingOutput.collect(CopyingChainingOutput.java:46) ~[flink-dist_2.11-1.13.2.jar:1.13.2]
	at org.apache.flink.streaming.runtime.tasks.CopyingChainingOutput.collect(CopyingChainingOutput.java:26) ~[flink-dist_2.11-1.13.2.jar:1.13.2]
	at org.apache.flink.streaming.api.operators.CountingOutput.collect(CountingOutput.java:50) ~[flink-dist_2.11-1.13.2.jar:1.13.2]
	at org.apache.flink.streaming.api.operators.CountingOutput.collect(CountingOutput.java:28) ~[flink-dist_2.11-1.13.2.jar:1.13.2]
	at org.apache.flink.streaming.api.operators.StreamSourceContexts$ManualWatermarkContext.processAndCollectWithTimestamp(StreamSourceContexts.java:322) ~[flink-dist_2.11-1.13.2.jar:1.13.2]
	at org.apache.flink.streaming.api.operators.StreamSourceContexts$WatermarkContext.collectWithTimestamp(StreamSourceContexts.java:426) ~[flink-dist_2.11-1.13.2.jar:1.13.2]
	at org.apache.flink.streaming.connectors.kafka.internals.AbstractFetcher.emitRecordsWithTimestamps(AbstractFetcher.java:365) ~[flink-connector-kafka_2.11-1.13.2.jar:1.13.2]
	at org.apache.flink.streaming.connectors.kafka.internals.KafkaFetcher.partitionConsumerRecordsHandler(KafkaFetcher.java:183) ~[flink-connector-kafka_2.11-1.13.2.jar:1.13.2]
	at org.apache.flink.streaming.connectors.kafka.internals.KafkaFetcher.runFetchLoop(KafkaFetcher.java:142) ~[flink-connector-kafka_2.11-1.13.2.jar:1.13.2]
	at org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumerBase.run(FlinkKafkaConsumerBase.java:826) ~[flink-connector-kafka_2.11-1.13.2.jar:1.13.2]
	at org.apache.flink.streaming.api.operators.StreamSource.run(StreamSource.java:110) ~[flink-dist_2.11-1.13.2.jar:1.13.2]
	at org.apache.flink.streaming.api.operators.StreamSource.run(StreamSource.java:66) ~[flink-dist_2.11-1.13.2.jar:1.13.2]
	at org.apache.flink.streaming.runtime.tasks.SourceStreamTask$LegacySourceFunctionThread.run(SourceStreamTask.java:269) ~[flink-dist_2.11-1.13.2.jar:1.13.2]
2021-08-15 10:45:37,863 INFO  org.apache.flink.runtime.resourcemanager.slotmanager.DeclarativeSlotManager [] - Clearing resource requirements of job e8f0af4bb984507ec9f69f07fa2df3d5
2021-08-15 10:45:37,865 INFO  org.apache.flink.runtime.executiongraph.failover.flip1.RestartPipelinedRegionFailoverStrategy [] - Calculating tasks to restart to recover the failed task cbc357ccb763df2852fee8c4fc7d55f2_0.
2021-08-15 10:45:37,866 INFO  org.apache.flink.runtime.executiongraph.failover.flip1.RestartPipelinedRegionFailoverStrategy [] - 1 tasks should be restarted to recover the failed task cbc357ccb763df2852fee8c4fc7d55f2_0. 
2021-08-15 10:45:37,867 INFO  org.apache.flink.runtime.executiongraph.ExecutionGraph 

Following the guidelines on the official Flink website, I added the flink-parquet dependency together with the parquet-hadoop-1.11.1.jar and parquet-common-1.11.1.jar packages. The error above persisted: the specified constructor still cannot be found.

Reason:

The Parquet version bundled in CDH Hive's parquet-hadoop-bundle.jar is inconsistent with the version flink-parquet depends on.

Solution:

1. Flink's flink-parquet module already provides the required dependencies, so it is enough to ensure that those dependencies are loaded first when the Flink job runs; flink-parquet can be packaged and distributed together with the job code.
2. Since the package versions are inconsistent, you can also consider upgrading the corresponding component. Note that simply swapping in a different parquet-hadoop-bundle.jar does not work: a check of the Maven repository shows no usable version. The remaining options are to upgrade Hive or downgrade Flink.
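A sketch of option 1 with a Maven build (the job class, jar name, and build layout are assumptions; the flink-parquet coordinates are the standard ones from Maven Central):

# In pom.xml, declare flink-parquet with compile scope so it is bundled
# into the job jar instead of coming from the cluster classpath:
#   <dependency>
#     <groupId>org.apache.flink</groupId>
#     <artifactId>flink-parquet_2.11</artifactId>
#     <version>1.13.2</version>
#   </dependency>
mvn clean package

# Flink's default child-first classloading then resolves the bundled
# Parquet classes before CDH's parquet-hadoop-bundle.jar
# (com.example.MyJob and the jar name are placeholders):
flink run -c com.example.MyJob target/my-job-1.0.jar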

Troubleshooting of errors in installing elasticsearch

When installing Elasticsearch, various errors are reported at startup. They are summarized as follows:

Error 1

java.lang.RuntimeException: can not run elasticsearch as root
Solution: start ES as a non-root user
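A minimal sketch of setting up such a user (the user name es and the install path /opt/elasticsearch are assumptions):

# As root: create a dedicated user and give it the install directory
useradd es
chown -R es:es /opt/elasticsearch

# Start Elasticsearch as that user (daemonized)
su - es -c '/opt/elasticsearch/bin/elasticsearch -d'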
 

Error 2

max virtual memory areas vm.max_map_count [65530] is too low, increase to at least [262144]

Solution:
switch to the root user
vi /etc/sysctl.conf
add this line at the end:
vm.max_map_count=655360
run the command: sysctl -p
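The same change can be made non-interactively (a sketch, run as root):

# Append the setting and reload kernel parameters
echo 'vm.max_map_count=655360' >> /etc/sysctl.conf
sysctl -p

# Verify the new value
sysctl vm.max_map_count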
 

Error 3

the default discovery settings are unsuitable for production use; at least one of [discovery.seed_hosts, discovery.seed_providers, cluster.initial_master_nodes] must be configured

Solution:
in the config directory of Elasticsearch, edit the elasticsearch.yml configuration file and add the following line:
cluster.initial_master_nodes: ["node-1"]

Error 4

max file descriptors [4096] for elasticsearch process is too low, increase to at least [65536]
The maximum number of files each process may open simultaneously is too small. The current limits can be checked with the following two commands:

ulimit -Hn
ulimit -Sn

Switch to the root user, modify the /etc/security/limits.conf file, add the configuration below, then log out and log in again for it to take effect:

*               soft    nofile          65536
*               hard    nofile          65536
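Applied from a root shell, the change looks like this (a sketch that appends the same lines as above):

# Append file-descriptor limits for all users
cat >> /etc/security/limits.conf <<'EOF'
*               soft    nofile          65536
*               hard    nofile          65536
EOF

# The limits apply to new login sessions; verify after logging in again
ulimit -Hn
ulimit -Sn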

Error 5

max number of threads [3818] for user [es] is too low, increase to at least [4096]

Same kind of problem as above: the maximum number of threads is too low. Modify /etc/security/limits.conf (the same file as in error 4) and add:

*               soft    nproc           4096
*               hard    nproc           4096

The current limits can be viewed with the commands:

ulimit -Hu
ulimit -Su

Modified file: (screenshot of the updated /etc/security/limits.conf omitted)

[Solved] Postcss Error: Invalid options object. PostCSS Loader has been initialized using an options object that does not match the API schema.

This error occurs when packaging with PostCSS: Invalid options object. PostCSS Loader has been initialized using an options object that does not match the API schema.

I searched for information for a long time; the cause is that the package versions are incompatible.

Solution: create a new postcss.config.js file under the root directory

The file configuration is as follows

module.exports = {
    plugins: [
        require("postcss-preset-env")
    ]
}

Then delete the postcss-loader options in webpack.config.js and package again.
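If postcss-preset-env is not already installed, add it first (a sketch, assuming an npm-based project):

# Install the plugin referenced by postcss.config.js as a dev dependency
npm install --save-dev postcss-preset-env

# Package again (the build script name is an assumption; use your project's command)
npm run build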

Rsync client synchronization error


Symptom: the password is entered correctly, but synchronization still fails.

Solution: the permissions of the password file on the server are not 600; set them to 600 and synchronization works.
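A sketch of the fix on the server (the secrets-file path comes from your rsyncd.conf; /etc/rsyncd.passwd is an assumption):

# rsyncd refuses a secrets file that other users can read
chmod 600 /etc/rsyncd.passwd
ls -l /etc/rsyncd.passwd   # should show -rw-------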

How to Solve Hexo init error: bash: hexo: command not found

Project scenario:

deploying a Hexo + GitHub blog: with the Node.js and Git environments ready, register a GitHub account and create a repository, then start setting up the blog

    1. Run npm install -g hexo to install the Hexo environment, then type hexo to check that the command runs
    2. Create a working folder myblog for saving the local blog
    3. Initialize the Hexo blog project: hexo init
    4. Generate the blog: hexo g
    5. Start the local server for preview: hexo s

If Hexo is working properly, open http://localhost:4000/ to see the initial appearance of the blog.
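The whole sequence, as a sketch (note that current Hexo documentation installs the command-line tool via the hexo-cli package):

# Install the Hexo CLI globally and scaffold a blog
npm install -g hexo-cli
hexo init myblog    # creates and populates the myblog folder
cd myblog
hexo g              # generate the static site
hexo s              # serve it at http://localhost:4000/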


Problem Description:

after installing the local environment and using Git Bash on Windows, the hexo init command cannot be executed, and the following error is reported:

bash: hexo: command not found

Cause analysis:

Node's npm environment may not be configured correctly. You can try reinstalling Node to the default path; the installation may also have been incomplete when done from Git Bash.


Solution:

method 1: go to the folder where you downloaded Node.js, right-click to open it with Git Bash, and enter the command there
method 2: open the previously created folder (your blog folder; mine is blog), hold Shift, right-click, choose the PowerShell option to open a command prompt, and enter the command there:

after that, a new folder named blog will be generated in your folder; this is the content to be deployed

After running npx hexo server, an address appears: http://localhost:4000 ; enter it in your browser to preview the blog.

Note: method 1 is recommended, because if you deploy with method 2, you will have to prefix every Hexo command with npx when writing articles later.

[Solved] MindSpore Error: RuntimeError: _kernel.cc:88 CheckParam] AddN output shape must be equal to input…

When overriding MindSpore's WithLossCell and TrainOneStepCell interfaces:

Error: RuntimeError: _kernel.cc:88 CheckParam] AddN output shape must be equal to input shape. Trace: In file add_impl.py(272)/ return F.addn((x, y))/

Solution: do not return multiple values from the construct method of WithLossCell; otherwise the GradOperation in TrainOneStepCell's construct will raise this error.
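A minimal sketch of a compliant loss cell (class and parameter names are illustrative; nn.Cell and construct are the MindSpore APIs involved):

import mindspore.nn as nn

class MyWithLossCell(nn.Cell):
    """Wraps a network and a loss function, as WithLossCell does."""
    def __init__(self, backbone, loss_fn):
        super().__init__(auto_prefix=False)
        self._backbone = backbone
        self._loss_fn = loss_fn

    def construct(self, data, label):
        out = self._backbone(data)
        # Return a single scalar loss only. Returning a tuple such as
        # (loss, out) makes GradOperation inside TrainOneStepCell fail
        # with the AddN shape error above.
        return self._loss_fn(out, label)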

[Solved] JIRA startup error: JIRA startup failed, JIRA has been locked.

After the machine was moved, its IP address changed, and several problems came up when restarting JIRA. The key points and solutions are recorded below.

Key points:
check the database configuration:
/var/atlassian/application-data/jira/dbconfig.xml
since LDAP authentication is used, also check the user/group authentication configuration:
in the database, find the cwd_directory_attribute table and check the entry whose attribute name is ldap.url
Problems and solutions:
JIRA startup failed, JIRA has been locked.
******************************************

JIRA startup failed
unable to create and acquire lock file for jira.home directory '/var/atlassian/application-data/jira… (I don't remember the rest)
Solution: delete the hidden .jira-home.lock file under the directory named in the prompt, then enter the bin directory and restart the JIRA service.

Then another error was reported:
JIRA startup failed
unable to clean the cache directory: /var/atlassian/application-data/jira/plugins/.osgi-plugins/felix

Checking the startup log:
Caused by: java.io.IOException: Unable to delete file: /var/atlassian/application-data/jira/plugins/.osgi-plugins/felix/felix-cache/bundle163/bundle.state
Solution: delete the hidden .osgi-plugins folder in the directory named in the prompt and restart the JIRA service.

The original error page then changes to "startup succeeded", and JIRA can be used normally.

Running ./shutdown.sh followed by ./startup.sh is the recommended way to restart the JIRA service.
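Putting the two fixes together, a sketch (paths are reconstructed from the messages above; the install directory /opt/atlassian/jira is an assumption):

# Remove the stale home-directory lock left by the previous run
rm -f /var/atlassian/application-data/jira/.jira-home.lock

# Remove the plugin cache that could not be cleaned automatically
rm -rf /var/atlassian/application-data/jira/plugins/.osgi-plugins

# Restart the JIRA service from its bin directory
cd /opt/atlassian/jira/bin
./shutdown.sh
./startup.sh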