Tag Archives: linux

su - mysql: switching user fails with "Resource temporarily unavailable"

The problem
Running su - mysql to switch to the mysql user fails:

su: failed to execute /bin/bash: Resource temporarily unavailable

The solution
Create a file such as /etc/security/limits.d/mysql.conf with the following content:

mysql soft nofile 131072
mysql hard nofile 131072
mysql soft nproc 65535
mysql hard nproc 65535

Settings in limits.d override those in limits.conf, so the new limits take effect for the mysql user.
Problem solved.
For details on the limits.conf file format, see this article:
https://blog.csdn.net/fanren224/article/details/79971359
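To check that the new limits actually apply, you can read a process's effective limits from /proc; a minimal sketch (using /proc/self as a stand-in for the real mysqld PID):

```shell
# Inspect the effective process/file limits of a running process via /proc.
# Replace "self" with the PID of mysqld to check the real server.
grep -E 'Max processes|Max open files' /proc/self/limits
```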

waitpid returns the error "No child processes"

The problem
Today a function that calls waitpid failed intermittently with the error "No child processes". Since the message suggested the child PID might not exist, I added some prints; when the problem reproduced, the PID did correspond to a real child, so the PID itself was fine.
Searching online showed that "No child processes" corresponds to the error code ECHILD, and the waitpid man page says that if a process sets the SIGCHLD disposition to SIG_IGN, waitpid returns ECHILD.
Looking at the code, the parent process does indeed catch the SIGCHLD signal, and the handler calls:

rc = waitpid(-1, &status, WNOHANG);

so that any exited child is reaped immediately. My suspicion was that calling waitpid again after the handler has already reaped the child produces the error, which would also explain why the problem was intermittent. The following code verifies this.

#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>
#include <signal.h>
#include <sys/types.h>
#include <sys/wait.h>

void sig_handle(int sig)
{
    waitpid(-1, NULL, 0);
    printf("llm->%s(%d)\n", __FUNCTION__, __LINE__);
}

int main(int argc, char *argv[])
{
    int rtn = 0;
    int pid = 0;
    char *arg[] = {"date", NULL};
    signal(SIGCHLD, sig_handle);
    //signal(SIGCHLD, SIG_IGN);

    while(1)
    {
        pid = fork();
        if(!pid)
        {
            execvp("date", arg);
            exit(1);
        }
        usleep(10*1000);
        rtn = waitpid(pid, NULL, 0);

        if(rtn < 0)
            perror("waitpid");
        //usleep(10*1000);
    }
    return 0;
}

With the usleep before waitpid, the problem reproduces 100% of the time; remove it and the problem disappears.
The solution
Given the current code logic, neither waitpid call can simply be removed, so the best course is to distinguish the cases: if waitpid fails and errno is ECHILD, the child has already been reaped, and the error can be ignored.
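A defensive wrapper along those lines might look like this (a sketch; the function name is illustrative):

```c
#include <errno.h>
#include <stdio.h>
#include <sys/types.h>
#include <sys/wait.h>

/* Reap a child, tolerating the case where it has already been reaped
 * (e.g. by a SIGCHLD handler): ECHILD is treated as success. */
int reap_child(pid_t pid)
{
    int status = 0;

    if (waitpid(pid, &status, 0) < 0) {
        if (errno == ECHILD)   /* already collected elsewhere: not an error */
            return 0;
        perror("waitpid");     /* a real failure */
        return -1;
    }
    return 0;
}
```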
The system() call
This is not a problem with the system call itself. The C library's system() implementation faces the same situation, since it also forks and waits for a child, and it handles the signals accordingly: the SIGCHLD disposition is reset to the default before the fork and restored afterwards, as follows:

int __libc_system(char *command)
{
	int wait_val, pid;
	__sighandler_t save_quit, save_int, save_chld;

	if (command == 0)
		return 1;

	save_quit = signal(SIGQUIT, SIG_IGN);
	save_int = signal(SIGINT, SIG_IGN);
	save_chld = signal(SIGCHLD, SIG_DFL);

	if ((pid = vfork()) < 0) {
		signal(SIGQUIT, save_quit);
		signal(SIGINT, save_int);
		signal(SIGCHLD, save_chld);
		return -1;
	}
	if (pid == 0) {
		signal(SIGQUIT, SIG_DFL);
		signal(SIGINT, SIG_DFL);
		signal(SIGCHLD, SIG_DFL);

		execl("/bin/sh", "sh", "-c", command, (char *) 0);
		_exit(127);
	}
	/* Signals are not absolutly guarenteed with vfork */
	signal(SIGQUIT, SIG_IGN);
	signal(SIGINT, SIG_IGN);

#if 0
	printf("Waiting for child %d\n", pid);
#endif

	if (wait4(pid, &wait_val, 0, 0) == -1)
		wait_val = -1;

	signal(SIGQUIT, save_quit);
	signal(SIGINT, save_int);
	signal(SIGCHLD, save_chld);
	return wait_val;
}
weak_alias(__libc_system, system)

Resolving the "-bash: fork: retry: Resource temporarily unavailable" error

1. The "-bash: fork: retry: Resource temporarily unavailable" error usually means the maximum number of Linux processes has been exceeded, so the process limit needs to be raised.

Connect to the server with a terminal tool such as SecureCRT and check the current limits:

[support@localhost ~]$ ulimit -a
core file size          (blocks, -c) 0
data seg size           (kbytes, -d) unlimited
scheduling priority             (-e) 0
file size               (blocks, -f) unlimited
pending signals                 (-i) 127405
max locked memory       (kbytes, -l) 64
max memory size         (kbytes, -m) unlimited
open files                      (-n) 1024
pipe size            (512 bytes, -p) 8
POSIX message queues     (bytes, -q) 819200
real-time priority              (-r) 0
stack size              (kbytes, -s) 10240
cpu time               (seconds, -t) unlimited
max user processes              (-u) 1024
virtual memory          (kbytes, -v) unlimited
file locks                      (-x) unlimited
[support@localhost ~]$ 

After logging in you cannot switch to root; even typing the clear command produces the error.
Answers online say the user has opened too many processes: the number of processes a user may create is capped by the "max user processes" parameter.
Max user processes can be changed with ulimit -u 4096, but that only applies to the current terminal session; after logging in again the default value is restored, so this is not a real fix.
The right way is to modify the values in the /etc/security/limits.d/90-nproc.conf file:

cd /etc/security/limits.d/
cat 90-nproc.conf

# Default limit for number of user's processes to prevent
# accidental fork bombs.
# See rhbz #432903 for reasoning.

*          soft    nproc     4096
root       soft    nproc     unlimited

You can change the value, but note that the change must be made as root; by default no other user has permission.
Without root access, another workaround is to find and kill a resource-hogging process belonging to the user, after which the user can connect again.
To list the user's processes:
ps -eLf | grep username
Reference links: http://www.nginx.cn/3002.html
https://zhidao.baidu.com/question/1640745287732090500.html
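The session-local nature of ulimit is easy to demonstrate: a limit changed in a child shell does not affect the parent (a sketch; the value 512 is arbitrary):

```shell
# Lowering "max user processes" inside a child shell only affects that
# shell; the parent keeps its original limit, which is why ulimit -u 4096
# typed in a terminal does not survive a new login.
before=$(ulimit -u)
child=$(bash -c 'ulimit -u 512; ulimit -u')   # changed only in the child
after=$(ulimit -u)
echo "parent before=$before child=$child parent after=$after"
```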

su prompts "Resource temporarily unavailable" when switching users

Today FTP authentication from WinSCP to our server kept failing, which felt really strange: nothing had been changed and everything worked before.
Logging in to the server and switching to the FTP user with su kept printing "bash: fork: Resource temporarily unavailable":
[root@cls vsftpd]# su bupdate
bash: fork: retry: Resource temporarily unavailable
bash: fork: retry: Resource temporarily unavailable
bash: fork: retry: Resource temporarily unavailable
bash: fork: retry: Resource temporarily unavailable
^C
1. First check the disk and memory; both are OK:
[bupdate@cls vsftpd]$ top
  PID USER     PR NI  VIRT  RES  SHR S %CPU %MEM   TIME+ COMMAND
32141 bupdate  20  0 16296 2548 1004 R  2.3  0.1 0:00.61 top
    1 root     20  0 19272 1576 1312 S  0.0  0.0 0:06.43 init
    2 root     20  0     0    0    0 S  0.0  0.0 0:00.00 kthreadd
    3 root     RT  0     0    0    0 S  0.0  0.0 0:01.45 migration/0
    4 root     20  0     0    0    0 S  0.0  0.0 0:00.12 ksoftirqd/0
    5 root     RT  0     0    0    0 S  0.0  0.0 0:00.00 migration/0
    6 root     RT  0     0    0    0 S  0.0  0.0 0:00.00 watchdog/0
    7 root     RT  0     0    0    0 S  0.0  0.0 0:02.26 migration/1
    8 root     RT  0     0    0    0 S  0.0  0.0 0:00.00 migration/1
    9 root     20  0     0    0    0 S  0.0  0.0 0:00.25 ksoftirqd/1
   10 root     RT  0     0    0    0 S  0.0  0.0 0:27.92 watchdog/1
   11 root     RT  0     0    0    0 S  0.0  0.0 0:00.92 migration/2
   12 root     RT  0     0    0    0 S  0.0  0.0 0:00.00 migration/2
   13 root     20  0     0    0    0 S  0.0  0.0 0:00.01 ksoftirqd/2

2. ulimit -a gives the following result:

core file size          (blocks, -c) 0
data seg size           (kbytes, -d) unlimited
max nice                        (-e) 0
file size               (blocks, -f) unlimited
pending signals                 (-i) 71679
max locked memory       (kbytes, -l) 32
max memory size         (kbytes, -m) unlimited
open files                      (-n) 1024
pipe size            (512 bytes, -p) 8
POSIX message queues     (bytes, -q) 819200
max rt priority                 (-r) 0
stack size              (kbytes, -s) 10240
cpu time               (seconds, -t) unlimited
max user processes              (-u) 2047
virtual memory          (kbytes, -v) unlimited
file locks                      (-x) unlimited

     
3. Modify /etc/security/limits.conf:

* soft nproc 2047
* hard nproc 16384

limits.conf format:

username|@groupname type resource limit
username|@groupname: the user to restrict; prefix a group name with @ to restrict a group. The wildcard * restricts all users.
type: soft, hard, or -. soft is the limit currently in effect; hard is the maximum value that can be set. The soft value cannot exceed the hard value. Using - sets both the soft and hard values.
resource:
core - maximum size of core files
data - maximum data segment size
fsize - maximum file size
memlock - maximum locked-in-memory address space
nofile - maximum number of open files
rss - maximum resident set size
stack - maximum stack size
cpu - maximum CPU time, in minutes
nproc - maximum number of processes
as - address space limit
maxlogins - maximum number of logins allowed for this user
For the limits.conf configuration to take effect, the pam_limits.so module must be listed in the PAM configuration:
session required /lib/security/pam_limits.so
       
4. Or modify the /etc/profile file.
ulimit [-acdfHlmnpsStvw] [size]
Parameter details:

-a: Displays all current limits.
-c size: Sets the maximum size of core files, in blocks.
-d size: Sets the maximum size of the data segment, in kbytes.
-f size: Sets the maximum size of files that can be created, in blocks.
-l size: Sets the maximum amount of memory that can be locked, in kbytes.
-m size: Sets the maximum amount of resident memory that can be used, in kbytes.
-n size: Sets the maximum number of file descriptors that can be open at the same time.
-p size: Sets the maximum size of the pipe buffer.
-s size: Sets the maximum size of the stack, in kbytes.
-t size: Sets the maximum CPU time, in seconds.
-v size: Sets the maximum amount of virtual memory, in kbytes.
Add something like ulimit -f 1000 to the end of /etc/profile so that it takes effect for each login session.

PS:

Permanent change

To change the limits on the maximum number of processes and the maximum number of open files on a Linux system:

vi /etc/security/limits.conf

# Add the following lines

* soft nproc 11000
* hard nproc 11000
* soft nofile 4100
* hard nofile 4100

Note: * stands for all users, nproc is the maximum number of processes, and nofile is the maximum number of open files.

Reference address: http://blog.csdn.net/jlds123/article/details/9146865

PHP7 compilation fails with "collect2: error: ld returned 1 exit status"

Problem description
While compiling PHP7, make fails with:

/usr/bin/ld: ext/ldap/.libs/ldap.o: undefined reference to symbol ‘ber_strdup’
/usr/bin/ld:note: ‘ber_strdup’ is defined in DSO /lib64/liblber-2.4.so.2 so try adding it to the linker command line
/lib64/liblber-2.4.so.2:could not read symbols: Invalid operation
collect2:error: ld returned 1 exit status
make: *** [sapi/cli/php] Error 1

The solution
In the PHP source directory, open the Makefile with vi, find the EXTRA_LIBS line, append -llber to the end of that line, save, and run make again.
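The edit can also be scripted with sed; a sketch on a tiny stand-in Makefile (point the same command at the real Makefile generated by configure):

```shell
# Append -llber to the EXTRA_LIBS line of a Makefile.
# Demonstrated on a stand-in file; the library list here is illustrative.
printf 'EXTRA_LIBS = -lcrypt -lresolv -lldap\n' > /tmp/Makefile.demo
sed -i 's/^EXTRA_LIBS = .*/& -llber/' /tmp/Makefile.demo
grep '^EXTRA_LIBS' /tmp/Makefile.demo   # the line now ends with -llber
```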

CentOS 7 in a VMware virtual machine: nginx not accessible after installation

After installing nginx on CentOS 7 in a VMware virtual machine, the host machine cannot access it.
To install nginx on Linux, see: Linux CentOS 7 install Nginx.
The CentOS 7 firewall is now firewalld and is no longer managed with iptables, so the port must be opened with firewall-cmd:

firewall-cmd --zone=public --add-port=80/tcp --permanent  

Command meaning:
--zone              # scope
--add-port=80/tcp   # add a port, in the format port/protocol
--permanent         # permanent; without this parameter the rule is lost after a restart
Restart the firewall:

systemctl stop firewalld.service  
systemctl start firewalld.service  

Refresh the page again; the site is now accessible.

How to install the Postman tool on Ubuntu 16.04

Basic steps to install Postman on Ubuntu 16.04:

1) Download the package from the official website: https://www.getpostman.com/apps

2) Unzip the package:

sudo tar -xzf Postman-linux-x64-6.0.10.tar.gz

3) Open a terminal in the extracted folder and start Postman:

./Postman/Postman

4) Create a launcher icon for quick startup:
Create a soft link in /usr/bin/ pointing to the Postman binary in the extracted folder:

sudo ln -s  /home/c/Downloads/Postman/Postman   /usr/bin/

 

Ubuntu 18.04 installing postman

Download the tar package
Download the package from the official site.
Installation
1. Enter the download directory and extract the package:

sudo  tar -xzf postman.tar.gz	-C /usr/local/tools

2. Try running Postman:

/usr/local/tools/Postman/Postman

3. Create a global command:

sudo ln -s /usr/local/tools/Postman/Postman /usr/bin/postman

4. Add launcher application icon

sudo vim /usr/share/applications/postman.desktop

Add content

[Desktop Entry]
Encoding=UTF-8
Name=Postman
Exec=postman
Icon=/usr/local/tools/Postman/app/resources/app/assets/icon.png
Terminal=false
Type=Application
Categories=Development;

docker load fails with "json: no such file or directory"

1. Problem description: an ordinary image export and load.
Export: docker save -o gz_dockerlnfsmorev2.0.tar gz_docker:morev2.0
Load: docker load -i gz_dockerlnfsmorev2.0.tar fails with "no such file or directory".

At first I suspected the tar file itself was incomplete, but comparing SHA checksums ruled that out.
Most of the answers found on Baidu said the problem was mixing up save/load with export/import. That did not solve it either, since I was already pairing save with load, so that explanation was abandoned. (Check the kernel version with cat /proc/version and the Docker version with docker -v.)
Later I wondered whether the Linux kernel version and the Docker version were incompatible.
Machine 1: Ubuntu, Docker version 18.06.1-ce, build e68fc7a; kernel: Linux version 4.15.0-112-generic (buildd@lcy01-amd64-021) (gcc version 5.4.0 20160609 (Ubuntu 5.4.0-6ubuntu1~16.04.12)) #113~16.04.1-Ubuntu SMP Fri Jul 10 04:37:08 UTC 2020
Machine 2: Docker version 19.03.1, build 74b1e89e8a; kernel: el7.x86_64 ([email protected]) (gcc version 4.8.5 20150623 (Red Hat 4.8.5-39) (GCC)) #1 SMP Tue Mar 31 23:36:51 UTC 2020
Machine 3: Linux version 3.10.0-1127.el7.x86_64 ([email protected]) (gcc version 4.8.5 20150623 (Red Hat 4.8.5-39) (GCC))
The image loaded fine on the first and second machines; the problem was only on the third. I then suspected the Docker version there was too high and downgraded it to Docker version 18.06.1-ce. That did not help either, so version incompatibility was also ruled out.
By this point I was close to giving up; after calming down and re-examining the problem, the following fixed it:
mkdir mydocker
1. tar -zxvf gz_dockerlnfsmorev2.0.tar -C mydocker
2. cd mydocker
3. tar -cvf gz_dockerlnfsmorev2.0.tar *
4. docker load -i gz_dockerlnfsmorev2.0.tar
This finally solved the problem completely.
Note: when re-running tar -cvf, be sure to archive the files from inside the directory where they were extracted; otherwise "no such file or directory" will be reported again.
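The underlying path issue can be reproduced with tar alone: archiving the directory from its parent prefixes every entry with the directory name, which hides manifest.json from docker load, while archiving from inside the directory keeps the entries at the top level. A sketch (the file names are illustrative):

```shell
# docker load expects manifest.json at the top level of the archive.
mkdir -p /tmp/mydocker && cd /tmp/mydocker
echo '[]' > manifest.json

# Wrong: archiving from the parent, entries become "mydocker/manifest.json"
tar -cf /tmp/wrong.tar -C /tmp mydocker
tar -tf /tmp/wrong.tar | head -n 1      # mydocker/

# Right: archive the contents from inside the directory
tar -cf /tmp/right.tar -C /tmp/mydocker .
tar -tf /tmp/right.tar | grep manifest  # ./manifest.json
```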
 
 

Qt installation on CentOS: qt.qpa.plugin: Could not load the Qt platform plugin "xcb"

I could hardly believe how hard installing Qt on Linux turned out to be; I reinstalled the system several times. Here are the problems I encountered:
Round 1:
After installation, launching Qt Creator prints:
qt.qpa.plugin: Could not load the Qt platform plugin "xcb"

Edit ~/.bashrc and add the following lines:

sudo gedit ~/.bashrc

export QT_DEBUG_PLUGINS=1
export LD_LIBRARY_PATH=/opt/Qt5.13.1/5.13.1/gcc_64/lib:$LD_LIBRARY_PATH
export LD_LIBRARY_PATH=/opt/Qt5.13.1/Tools/QtCreator/lib:$LD_LIBRARY_PATH

The last two lines add the Qt library paths to the dynamic linker search path; I am not sure whether they are strictly required.
The important line is actually the first one: with QT_DEBUG_PLUGINS set, Qt Creator outputs a lot of diagnostic information, so the problems can be tackled one by one.
Make the changes take effect:

source ~/.bashrc

Round 2:
Run Qt Creator from the command line again; it outputs a pile of log information. Skipping to the end, the key messages are:

Cannot load library /opt/Qt5.13.1/Tools/QtCreator/lib/Qt/plugins/platforms/libqxcb.so: (libxkbcommon-x11.so.0: cannot open shared object file: No such file or directory)
QLibraryPrivate::loadPlugin failed on "/opt/Qt5.13.1/Tools/QtCreator/lib/Qt/plugins/platforms/libqxcb.so"

This is clearly a failure to load a dynamic library. First check that the file exists, then check whether all of its dependencies can be resolved:

ldd  /opt/Qt5.13.1/Tools/QtCreator/lib/Qt/plugins/platforms/libqxcb.so

libxkbcommon-x11 is missing, so install the library with yum:

yum -y install libxkbcommon-x11-devel
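The same ldd check works for any binary or plugin: unresolved dependencies show up as "not found" lines. A sketch using /bin/sh as a stand-in for the Qt plugin:

```shell
# List dynamic dependencies; any line containing "not found" names a
# library that must be installed before the binary or plugin will load.
ldd /bin/sh
ldd /bin/sh | grep 'not found' || echo "all dependencies resolved"
```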

Round 3: missing D-Bus
Continuing, there is still an error. Skipping to the last line:

/opt/Qt5.13.1/Tools/QtCreator/bin/qtcreator: relocation error: /opt/Qt5.13.1/Tools/QtCreator/lib/Qt/plugins/platforms/../../lib/libQt5DBus.so.5: symbol dbus_message_set_allow_interactive_authorization, version LIBDBUS_1_3 not defined in file libdbus-1.so.3 with link time reference

yum -y install dbus-devel

Round 4:
Continuing produces the following error:

/opt/Qt5.13.1/Tools/QtCreator/bin/qtcreator: symbol lookup error: /opt/Qt5.13.1/Tools/QtCreator/lib/Qt/plugins/platforms/../../lib/libQt5XcbQpa.so.5: undefined symbol: FT_Get_Font_Format

yum -y install freetype-devel

In short: export QT_DEBUG_PLUGINS=1. You may not hit exactly the same problems as I did, but with that output you can solve them step by step.

Notes on fixing Ubuntu 18.04 failing to enter the GUI

Recently I tried vnote, a fairly pleasant Markdown editor. Its releases seem to be hosted on AWS, which makes them very slow to download in China; a Baidu network disk link is provided, but the Baidu client cannot be used on Linux, so I tried to build the AppImage from source. The code directory contains a .travis_linux.sh script that appears to be used to generate the AppImage, so I ran it, and the nightmare began.
After a reboot, the system could no longer enter the graphical interface; it hung at "Started GNOME Display Manager" with no other useful information, and I could only search for a solution from recovery mode.
The usual advice is to reinstall gdm3 or lightdm and select one of them as the display manager. Normally that works, but it never did for me.
Going back through .travis_linux.sh, I ran make uninstall for its components one by one, rebooting each time, and still could not reach the graphical interface. The worst offender turned out to be the following package, although unfortunately I did not find it at first:

wget http://xkbcommon.org/download/libxkbcommon-0.5.0.tar.xz
tar xf libxkbcommon-0.5.0.tar.xz
cd libxkbcommon-0.5.0
./configure --prefix=/usr --libdir=/usr/lib/x86_64-linux-gnu --disable-x11
make -j$(nproc) && sudo make install

The log showed the GNOME session repeatedly failing to start, ending with:

/usr/bin/gnome-session: unable to find libxkbcommon.so.0

which can be confirmed with:

ldd /usr/bin/gnome-session | grep libxkbcommon.so

Solving the "Unable to locate package" problem

After installing Ubuntu 12.04 in VMware Player, installing packages failed with an "Unable to locate package" error. The fix is to run:

sudo apt-get update

The reason is that the package lists had not been updated, so apt could not find the package. This problem is also likely to occur after the software sources are changed.