Category Archives: How to Fix

How to locate a segmentation fault (SIGSEGV)

When we develop programs under Linux, insufficiently rigorous code often triggers a

segmentation fault

error. The process is killed outright, which makes it hard to locate the offending code in the program.

Cause

A segmentation fault is an invalid memory access: when a process references memory it is not allowed to touch (a NULL or wild pointer, a buffer overrun, and so on), the kernel sends SIGSEGV, whose default action is to terminate the process.
For example, dereferencing an invalid pointer:


 1 #include <stdio.h>
 2 #include <string.h>
 3 
 4 int main(void)
 5 {
 6         char *str = "abcd123";
 7         char *p = NULL;
 8         char a;
 9 
10         p = strstr(str, "456");
11 
12         a = *p;
13 
14         return 0;
15 
16 }

Running this code crashes: strstr() does not find "456" in str, so p is NULL, and dereferencing the NULL pointer triggers SIGSEGV.
Revision:

char *str = "abcd123";
char *p = NULL;
char a;
p = strstr(str, "456");
if(p != NULL){
  a = *p;
}

So the key is to write rigorous code: check pointers before dereferencing them.

Solutions

1. Run the program under GDB and step through it; locating the fault step by step is clearly not optimal.
2. Use the core file. On SIGSEGV, Linux can dump a core file that records the stack at the moment of the crash, and `gdb <program> core` traces the faulting code quickly. The program must be compiled with the -g flag:

gcc -g a.c -o test

View the stack in GDB with the bt or where command:

jsq@jsq:/opt/tmp/sigsegv-test$ gdb ./test core
GNU gdb (Ubuntu 8.1-0ubuntu3) 8.1.0.20180409-git
Copyright (C) 2018 Free Software Foundation, Inc.
License GPLv3+: GNU GPL version 3 or later <http://gnu.org/licenses/gpl.html>
This is free software: you are free to change and redistribute it.
There is NO WARRANTY, to the extent permitted by law.  Type "show copying"
and "show warranty" for details.
This GDB was configured as "x86_64-linux-gnu".
Type "show configuration" for configuration details.
For bug reporting instructions, please see:
<http://www.gnu.org/software/gdb/bugs/>.
Find the GDB manual and other documentation resources online at:
<http://www.gnu.org/software/gdb/documentation/>.
For help, type "help".
Type "apropos word" to search for commands related to "word"...
Reading symbols from ./test...done.
[New LWP 5868]
Core was generated by `./test'.
Program terminated with signal SIGSEGV, Segmentation fault.
#0  0x0000561a6e023680 in main () at a.c:12
12		a = *p;
(gdb) bt
#0  0x0000561a6e023680 in main () at a.c:12
(gdb) where
#0  0x0000561a6e023680 in main () at a.c:12
(gdb) 

From the output above it is easy to see that the fault is at line 12 of a.c.

What to do when no core file is generated

The core file is usually generated in the directory the program runs in. If it is not generated, there are two common reasons:
1. The kernel limit is 0. Raise it with `ulimit -c`; check the current limits with `ulimit -a`:

jsq@jsq:/opt/tmp/sigsegv-test$ ulimit -a
core file size          (blocks, -c) 0
data seg size           (kbytes, -d) unlimited
scheduling priority             (-e) 0
file size               (blocks, -f) unlimited
pending signals                 (-i) 15071
max locked memory       (kbytes, -l) 16384
max memory size         (kbytes, -m) unlimited
open files                      (-n) 1024
pipe size            (512 bytes, -p) 8
POSIX message queues     (bytes, -q) 819200
real-time priority              (-r) 0
stack size              (kbytes, -s) 8192
cpu time               (seconds, -t) unlimited
max user processes              (-u) 15071
virtual memory          (kbytes, -v) unlimited
file locks                      (-x) unlimited

The output above shows that the core file size limit is 0, i.e. core dumps are disabled. Set a size with the -c option, for example:

ulimit -c 1024

This allows core files of up to 1024 blocks (ulimit -c unlimited removes the limit).
2. The program has no write permission in the current directory.
I have run into this before: on some systems or special paths, no core file appears even though the core size limit is non-zero. In that case check the write permissions (ls -l), which come down to the owning user and group; these can be changed with the chown and chgrp commands, for example:

chown root test
chgrp root test

This sets the owner and group of the test program to root.

Summary

A segmentation fault is an invalid memory access. The fastest way to locate one is to trace the core file, so it is worth configuring the Linux environment to generate core files.

How to Use the Reverse() Function


1. Reversing a string

string N;
cin >> N;
reverse(N.begin(), N.end());    // reverse() is declared in <algorithm>

2. Reversing a character array

char s[101];
cin.getline(s, sizeof(s));      // cin >> s also works when the input contains no spaces
int m = strlen(s);
reverse(s, s + m);
puts(s);

3. Reversing an integer array

int a[100];
reverse(a, a + 10);      // the second argument points one past the last element to reverse

The ioremap() and iounmap() functions

Original post address: http://blog.chinaunix.net/uid-21289517-id-1828602.html

void *__ioremap(unsigned long phys_addr, unsigned long size, unsigned long flags)

Parameters:
phys_addr: the starting physical I/O address to map;

size: the size of the region to map;

flags: flags describing the permissions of the I/O region being mapped;

Purpose: maps an I/O address range into the kernel's virtual address space so it can be accessed conveniently.

void *ioremap(unsigned long offset, unsigned long size);

Parameters:
offset: physical address
size: the size of the region to map

Return value: the virtual address of the mapping

iounmap() releases a mapping created by ioremap();
for example: iounmap(gpsetl0);

Table 'sell.hibernate_sequence' doesn't exist

Problem Description:

A row was inserted without specifying its primary key, and the entity's ID was not configured for auto-increment, so the insert into the database fails.

Solution:

Configure the entity's primary key as auto-increment, or specify the primary key explicitly when inserting:

    @Id
    @GeneratedValue(strategy = GenerationType.IDENTITY)
    private Integer categoryId;

com.mysql.jdbc.exceptions.jdbc4.CommunicationsException: Communications link failure when executing hive -f script

cd $HIVE_HOME/conf
vi hive-site.xml

The error is caused by an incomplete or incorrect configuration in hive-site.xml:

<configuration>
<property>
<name>hive.metastore.warehouse.dir</name>
<value>/opt/software/hadoop/hive110/warehouse</value>
</property>
<property>
<name>javax.jdo.option.ConnectionURL</name>
<value>jdbc:mysql://192.168.56.130:3306/hive110?createDatabaseIfNotExist=true</value>
</property>
<property>
<name>javax.jdo.option.ConnectionDriverName</name>
 <value>com.mysql.jdbc.Driver</value>
</property>
<property>
<name>javax.jdo.option.ConnectionUserName</name>
 <value>root</value>
</property>
<property>
<name>javax.jdo.option.ConnectionPassword</name>
 <value>root</value>
</property>
</configuration>

In particular, check that the IP address in the connection URL configuration item is correct:

<name>javax.jdo.option.ConnectionURL</name>
<value>jdbc:mysql://192.168.56.130:3306/hive110?createDatabaseIfNotExist=true</value>

VSCode fails to auto-complete the HTML skeleton

Typing ! followed by Tab does not expand the HTML boilerplate.

Solution:

1. With the HTML file open for editing, press the shortcut Ctrl + Shift + P.

2. Type "Change Language Mode" in the command palette, then select "HTML" from the drop-down list.

Implementing the forward pass of a CNN with numpy

Thanks to numpy's native tensor support, the implementation is very concise and has few parameters.

In this version only a small number of operations are vectorized with numpy; most of the work is plain for loops, the goal being simply to understand the algorithm.

import numpy as np

def zero_pad(X, pad):
    """
    Pad with zeros all images of the dataset X. The padding is applied to the height and width of an image, 
    as illustrated in Figure 1.
    
    Argument:
    X -- python numpy array of shape (m, n_H, n_W, n_C) representing a batch of m images
    pad -- integer, amount of padding around each image on vertical and horizontal dimensions
    
    Returns:
    X_pad -- padded image of shape (m, n_H + 2*pad, n_W + 2*pad, n_C)
    """

    return np.pad(X, ((0, 0), (pad, pad), (pad, pad), (0, 0)), 'constant', constant_values=0)


def conv_single_step(a_slice_prev, W, b):
    """
    Apply one filter defined by parameters W on a single slice (a_slice_prev) of the output activation 
    of the previous layer.
    
    Arguments:
    a_slice_prev -- slice of input data of shape (f, f, n_C_prev)
    W -- Weight parameters contained in a window - matrix of shape (f, f, n_C_prev)
    b -- Bias parameters contained in a window - matrix of shape (1, 1, 1)
    
    Returns:
    Z -- a scalar value, result of convolving the sliding window (W, b) on a slice x of the input data
    """
    s = np.multiply(a_slice_prev, W) + b
    Z = np.sum(s)
    return Z

def conv_forward(A_prev, W, b, hparameters):
    """
    Implements the forward propagation for a convolution function
    
    Arguments:
    A_prev -- output activations of the previous layer, numpy array of shape (m, n_H_prev, n_W_prev, n_C_prev)
    W -- Weights, numpy array of shape (f, f, n_C_prev, n_C)
    b -- Biases, numpy array of shape (1, 1, 1, n_C)
    hparameters -- python dictionary containing "stride" and "pad"
        
    Returns:
    Z -- conv output, numpy array of shape (m, n_H, n_W, n_C)
    cache -- cache of values needed for the conv_backward() function
    """
    
    # Retrieve dimensions from A_prev's shape  
    (m, n_H_prev, n_W_prev, n_C_prev) = A_prev.shape
    
    # Retrieve dimensions from W's shape
    (f, f, n_C_prev, n_C) = W.shape

    # Retrieve information from "hparameters"
    stride = hparameters['stride']
    pad = hparameters['pad']
    
    # Compute the dimensions of the CONV output volume using the formula given above.
    n_H = int((n_H_prev - f + 2 * pad) / stride) + 1
    n_W = int((n_W_prev - f + 2 * pad) / stride) + 1
    
    # Initialize the output volume Z with zeros.
    Z = np.zeros((m, n_H, n_W, n_C))
    
    # Create A_prev_pad by padding A_prev
    A_prev_pad = zero_pad(A_prev, pad)
    
    for i in range(m):                                 
        a_prev_pad = A_prev_pad[i]                    
        for h in range(n_H):                           
            for w in range(n_W):                       
                for c in range(n_C):                  
                    # Find the corners of the current "slice"
                    vert_start = h * stride
                    vert_end = vert_start + f
                    horiz_start = w * stride
                    horiz_end = horiz_start + f
                    # Use the corners to define the (3D) slice of a_prev_pad
                    a_slice_prev = a_prev_pad[vert_start:vert_end, horiz_start:horiz_end, :]
                    # Convolve the (3D) slice with the correct filter W and bias b, to get back one output neuron.
                    Z[i, h, w, c] = conv_single_step(a_slice_prev, W[...,c], b[...,c])
                                        
    # Making sure your output shape is correct
    assert(Z.shape == (m, n_H, n_W, n_C))
    
    # Save information in "cache" for the backprop
    cache = (A_prev, W, b, hparameters)
    
    return Z, cache

def relu(Z):
    return np.maximum(0, Z)


def pool_forward(A_prev, hparameters, mode = "max"):
    """
    Implements the forward pass of the pooling layer
    
    Arguments:
    A_prev -- Input data, numpy array of shape (m, n_H_prev, n_W_prev, n_C_prev)
    hparameters -- python dictionary containing "f" and "stride"
    mode -- the pooling mode you would like to use, defined as a string ("max" or "average")
    
    Returns:
    A -- output of the pool layer, a numpy array of shape (m, n_H, n_W, n_C)
    cache -- cache used in the backward pass of the pooling layer, contains the input and hparameters 
    """
    
    # Retrieve dimensions from the input shape
    (m, n_H_prev, n_W_prev, n_C_prev) = A_prev.shape
    
    # Retrieve hyperparameters from "hparameters"
    f = hparameters["f"]
    stride = hparameters["stride"]
    
    # Define the dimensions of the output
    n_H = int(1 + (n_H_prev - f) / stride)
    n_W = int(1 + (n_W_prev - f) / stride)
    n_C = n_C_prev
    
    # Initialize output matrix A
    A = np.zeros((m, n_H, n_W, n_C))              
    
    for i in range(m):                           
        for h in range(n_H):                     
            for w in range(n_W):                
                for c in range (n_C):            
                    vert_start = h * stride
                    vert_end = vert_start + f
                    horiz_start = w * stride
                    horiz_end = horiz_start + f
                    
                    # Use the corners to define the current slice on the ith training example of A_prev, channel c. (≈1 line)
                    a_prev_slice = A_prev[i, vert_start:vert_end, horiz_start:horiz_end, c]
                    
                    # Compute the pooling operation on the slice. Use an if statment to differentiate the modes. Use np.max/np.mean.
                    if mode == "max":
                        A[i, h, w, c] = np.max(a_prev_slice)
                    elif mode == "average":
                        A[i, h, w, c] = np.mean(a_prev_slice)
    
    # Store the input and hparameters in "cache" for pool_backward()
    cache = (A_prev, hparameters)
    
    # Making sure your output shape is correct
    assert(A.shape == (m, n_H, n_W, n_C))
    
    return A, cache


def conv_relu_pooling_forward(A_prev, W, b, hparameters_conv, hparameters_pool, mode_pool = "max"):
    # Retrieve dimensions from A_prev's shape  
    (m, n_H_prev, n_W_prev, n_C_prev) = A_prev.shape
    
    # Retrieve dimensions from W's shape
    (f, f, n_C_prev, n_C) = W.shape

    # Retrieve information from "hparameters"
    stride_conv = hparameters_conv['stride']
    pad_conv = hparameters_conv['pad']

    stride_pool = hparameters_pool['stride']
    f_pool = hparameters_pool['f']   # pooling window size; kept distinct from the conv filter size f

    # Compute the dimensions of the CONV output volume using the formula given above.
    n_H_conv = int((n_H_prev - f + 2 * pad_conv) / stride_conv) + 1
    n_W_conv = int((n_W_prev - f + 2 * pad_conv) / stride_conv) + 1

    # Initialize the output volume Z with zeros.
    Z = np.zeros((m, n_H_conv, n_W_conv, n_C))

    # Create A_prev_pad by padding A_prev
    A_prev_pad = zero_pad(A_prev, pad_conv)

    # Pooling output dimensions; the channel count stays n_C, the number of filters in W
    n_H = int(1 + (n_H_conv - f_pool) / stride_pool)
    n_W = int(1 + (n_W_conv - f_pool) / stride_pool)

    Z_out = np.zeros((m, n_H, n_W, n_C))


    for i in range(m):                                 
        a_prev_pad = A_prev_pad[i]

        for h in range(n_H_conv):                           
            for w in range(n_W_conv):                       
                for c in range(n_C):                  
                    # Find the corners of the current "slice"
                    vert_start = h * stride_conv
                    vert_end = vert_start + f
                    horiz_start = w * stride_conv
                    horiz_end = horiz_start + f
                    # Use the corners to define the (3D) slice of a_prev_pad
                    a_slice_prev = a_prev_pad[vert_start:vert_end, horiz_start:horiz_end, :]
                    # Convolve the (3D) slice with filter W and bias b, then apply ReLU in place.
                    Z[i, h, w, c] = conv_single_step(a_slice_prev, W[..., c], b[..., c])
                    Z[i, h, w, c] = max(0.0, Z[i, h, w, c])

        for h in range(n_H):                     
            for w in range(n_W):                
                for c in range(n_C):            
                    vert_start = h * stride_pool
                    vert_end = vert_start + f_pool
                    horiz_start = w * stride_pool
                    horiz_end = horiz_start + f_pool

                    # Define the current slice on the ith example of the conv output Z, channel c
                    a_prev_slice = Z[i, vert_start:vert_end, horiz_start:horiz_end, c]

                    # Pool the slice with np.max or np.mean depending on the mode
                    if mode_pool == "max":
                        Z_out[i, h, w, c] = np.max(a_prev_slice)
                    elif mode_pool == "average":
                        Z_out[i, h, w, c] = np.mean(a_prev_slice)

    # Making sure your output shape is correct
    assert(Z_out.shape == (m, n_H, n_W, n_C))

    # Save information in "cache" for the backprop
    # cache = (A_prev, W, b, hparameters_conv, hparameters_pool)

    return Z_out
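As a sanity check on the loop implementation above, the same NHWC convolution can be written in vectorized form. conv_forward_vec below is a hypothetical helper, not part of the original code, and it needs numpy >= 1.20 for sliding_window_view; the tiny all-ones case makes the expected output easy to verify by hand.

```python
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

def conv_forward_vec(A_prev, W, b, stride, pad):
    """Vectorized NHWC convolution, equivalent to the nested-loop version."""
    f = W.shape[0]
    A_pad = np.pad(A_prev, ((0, 0), (pad, pad), (pad, pad), (0, 0)),
                   'constant', constant_values=0)
    # windows: (m, H_out_full, W_out_full, n_C_prev, f, f); then apply the stride
    win = sliding_window_view(A_pad, (f, f), axis=(1, 2))[:, ::stride, ::stride]
    # contract the (f, f, n_C_prev) window axes against W's (f, f, n_C_prev) axes
    Z = np.tensordot(win, W, axes=([4, 5, 3], [0, 1, 2]))
    return Z + b.reshape(1, 1, 1, -1)

# Tiny hand-checked case: all-ones 4x4 input, all-ones 2x2 filter,
# stride 2, no padding -> every output value is 2*2 = 4.
A = np.ones((1, 4, 4, 1))
W = np.ones((2, 2, 1, 1))
b = np.zeros((1, 1, 1, 1))
Z = conv_forward_vec(A, W, b, stride=2, pad=0)
print(Z.shape)    # (1, 2, 2, 1)
print(Z.ravel())  # [4. 4. 4. 4.]
```

Replacing the four nested loops with one tensordot over sliding windows is what makes numpy convolution implementations fast in practice.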


 

[How to Solve]Warning: connect.static is not a function

grunt-contrib-connect no longer supports connect.static and connect.directory since version 0.11.x.

You should install serve-static (serves static files) and serve-index (serves directory listings):

npm install --save-dev grunt-contrib-connect serve-static serve-index

 

Examples

 

var path = require('path');
var serveStatic = require('serve-static');
var serveIndex = require('serve-index');

grunt.initConfig({
    connect: {
        test: {
            options: {
                directory: 'somePath',
                middleware: function(connect, options){
                    var _staticPath = path.resolve(options.directory);
                    return [serveStatic(_staticPath), serveIndex(_staticPath)];
                }
            }
        }
    }
});


Reference link
http://stackoverflow.com/questions/32961124/warning-connect-static-is-not-a-function-use-force-to-continue

Diamond types are not supported at this language level appears in IntelliJ

Solution:
File -> Project Structure -> Modules -> Sources -> Language level -> 8 - Lambdas, type annotations etc.
File -> Project Structure -> Project -> Project language level -> 8 - Lambdas, type annotations etc.
Settings -> Build, Execution, Deployment -> Compiler -> Java Compiler (check the per-module target bytecode version here as well)

TypeError: ufunc 'isnan' not supported for the input types

It took me quite a while to track down the cause of this error, so I hope this walkthrough saves you the trouble.

Step through the code:

da1
Out[1]: 
          a   b  c        aa
0  0.200000  a1  1  0.200000
1  0.500000  a2  2  0.500000
2  0.428571  a3  3  0.428571
3       NaN  a2  4       NaN
4  0.833333  a1  5  0.833333
5  0.750000  a1  6  0.750000
6  0.777778  a3  7  0.777778
7       NaN  a1  8       NaN
8      test  a3  9       NaN

In [2]: ddn1 = da1['a'].values

In [3]: ddn1
Out[3]: 
array([0.2, 0.5, 0.42857142857142855, nan, 0.8333333333333334, 0.75,
       0.7777777777777778, nan, 'test'], dtype=object)

The dtype of the numpy array is object.

In [4]: np.isnan(ddn1)
---------------------------------------------------------------------------
TypeError                                 Traceback (most recent call last)
<ipython-input-414-406cd3e92434> in <module>
----> 1 np.isnan(ddn1)

TypeError: ufunc 'isnan' not supported for the input types, and the inputs could not be safely coerced to any supported types according to the casting rule ''safe''

The error occurs because the array's dtype is object, not a numeric type.

In [5]: type(ddn1[:8])
Out[5]: numpy.ndarray
In [6]: type(ddn1[8])
Out[6]: str

Although the leading values are all numbers, the last value is a string, so the array's elements are not all of one type.

In [7]: ddn1 = ddn1[:8]

In [8]: ddn1
Out[8]: 
array([0.2, 0.5, 0.42857142857142855, nan, 0.8333333333333334, 0.75,
       0.7777777777777778, nan], dtype=object)

Even after slicing off the trailing string, the dtype of the array does not change.

ddn1 = ddn1.astype('float')

ddn1
Out[9]: 
array([0.2       , 0.5       , 0.42857143,        nan, 0.83333333,
       0.75      , 0.77777778,        nan])

np.isnan(ddn1)
Out[10]: array([False, False, False,  True, False, False, False,  True])

The array must be explicitly converted to a numeric type (here, float).

In [11]: ddn1 = np.append(ddn1,'test')

In [12]: ddn1
Out[12]: 
array(['0.2', '0.5', '0.42857142857142855', 'nan', '0.8333333333333334',
       '0.75', '0.7777777777777778', 'nan', 'test'], dtype='<U32')
In [13]: np.isnan(np.append(ddn1,'test'))
---------------------------------------------------------------------------
TypeError                                 Traceback (most recent call last)
<ipython-input-440-26598f53c9e6> in <module>
----> 1 np.isnan(np.append(ddn1,'test'))

TypeError: ufunc 'isnan' not supported for the input types, and the inputs could not be safely coerced to any supported types according to the casting rule ''safe''

As soon as a text value is appended, the array's dtype changes again to a non-numeric type ('<U32'), and calling np.isnan fails once more.

The conclusion: to avoid this error, the values in the array must be of a numeric type such as float or int.
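One defensive pattern, sketched below with a hypothetical helper to_float (not a numpy function): coerce the object array element by element, mapping anything that cannot be parsed as a number to nan, so np.isnan always gets a float array.

```python
import numpy as np

def to_float(arr):
    """Coerce an object array to float, turning unparseable entries into nan."""
    out = np.empty(len(arr), dtype=float)
    for i, x in enumerate(arr):
        try:
            out[i] = float(x)
        except (TypeError, ValueError):
            out[i] = np.nan
    return out

ddn1 = np.array([0.2, 0.5, np.nan, 'test'], dtype=object)
clean = to_float(ddn1)
print(clean.dtype)       # float64
print(np.isnan(clean))   # [False False  True  True]
```

This treats strings like 'test' the same as missing values, which matches how np.isnan is being used in the session above.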

cannot import name '_validate_lengths' from 'numpy.lib.arraypad'

Error when importing skimage:

>>> import skimage
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/opt/AI/AN/lib/python3.7/site-packages/skimage/__init__.py", line 157, in <module>
    from .util.dtype import *
  File "/opt/AI/AN/lib/python3.7/site-packages/skimage/util/__init__.py", line 8, in <module>
    from .arraycrop import crop
  File "/opt/AI/AN/lib/python3.7/site-packages/skimage/util/arraycrop.py", line 8, in <module>
    from numpy.lib.arraypad import _validate_lengths
ImportError: cannot import name '_validate_lengths' from 'numpy.lib.arraypad' (/opt/AI/AN/lib/python3.7/site-packages/numpy/lib

The cause is a numpy/scikit-image version mismatch; my numpy is 1.16.

Either downgrade numpy or upgrade scikit-image. I went with the latter; with the mismatched pair still installed, the import also reported:

ValueError: numpy.ufunc has the wrong size, try recompiling. Expected 192, got 216

Solution:

1) List the available versions (asking pip for a nonexistent version makes it print them all):

[root@localhost datasets]# pip install scikit-image==9999
Looking in indexes: https://pypi.tuna.tsinghua.edu.cn/simple
ERROR: Could not find a version that satisfies the requirement scikit-image==9999 (from versions: 0.7.2, 0.8.0, 0.8.1, 0.8.2, 0.9.0, 0.9.1, 0.9.3, 0.10.0, 0.10.1, 0.11.2, 0.11.3, 0.12.0, 0.12.1, 0.12.2, 0.12.3, 0.13.0, 0.13.1, 0.14.0, 0.14.1, 0.14.2, 0.14.3, 0.14.4, 0.14.5, 0.15.0, 0.16.1, 0.16.2)
ERROR: No matching distribution found for scikit-image==9999

2) Install the latest

[root@localhost datasets]# pip install scikit-image==0.16.2
Looking in indexes: https://pypi.tuna.tsinghua.edu.cn/simple
Collecting scikit-image==0.16.2
  Downloading https://pypi.tuna.tsinghua.edu.cn/packages/dc/48/454bf836d302465475e02bc0468b879302145b07a005174c409a5b5869c7/scikit_image-0.16.2-cp37-cp37m-manylinux1_x86_64.whl (26.5MB)
     |████████████████████████████████| 26.5MB 1.8MB/s 
Requirement already satisfied: matplotlib!=3.0.0,>=2.0.0 in /opt/AI/AN/lib/python3.7/site-packages (from scikit-image==0.16.2) (2.2.3)
Requirement already satisfied: scipy>=0.19.0 in /opt/AI/AN/lib/python3.7/site-packages (from scikit-image==0.16.2) (1.1.0)
Requirement already satisfied: networkx>=2.0 in /opt/AI/AN/lib/python3.7/site-packages (from scikit-image==0.16.2) (2.1)
Requirement already satisfied: imageio>=2.3.0 in /opt/AI/AN/lib/python3.7/site-packages (from scikit-image==0.16.2) (2.4.1)
Requirement already satisfied: PyWavelets>=0.4.0 in /opt/AI/AN/lib/python3.7/site-packages (from scikit-image==0.16.2) (1.0.0)
Requirement already satisfied: pillow>=4.3.0 in /opt/AI/AN/lib/python3.7/site-packages (from scikit-image==0.16.2) (5.2.0)
Requirement already satisfied: numpy>=1.7.1 in /opt/AI/AN/lib/python3.7/site-packages (from matplotlib!=3.0.0,>=2.0.0->scikit-image==0.16.2) (1.16.0)
Requirement already satisfied: cycler>=0.10 in /opt/AI/AN/lib/python3.7/site-packages (from matplotlib!=3.0.0,>=2.0.0->scikit-image==0.16.2) (0.10.0)
Requirement already satisfied: pyparsing!=2.0.4,!=2.1.2,!=2.1.6,>=2.0.1 in /opt/AI/AN/lib/python3.7/site-packages (from matplotlib!=3.0.0,>=2.0.0->scikit-image==0.16.2) (2.4.5)
Requirement already satisfied: python-dateutil>=2.1 in /opt/AI/AN/lib/python3.7/site-packages (from matplotlib!=3.0.0,>=2.0.0->scikit-image==0.16.2) (2.7.3)
Requirement already satisfied: pytz in /opt/AI/AN/lib/python3.7/site-packages (from matplotlib!=3.0.0,>=2.0.0->scikit-image==0.16.2) (2018.5)
Requirement already satisfied: six>=1.10 in /opt/AI/AN/lib/python3.7/site-packages (from matplotlib!=3.0.0,>=2.0.0->scikit-image==0.16.2) (1.13.0)
Requirement already satisfied: kiwisolver>=1.0.1 in /opt/AI/AN/lib/python3.7/site-packages (from matplotlib!=3.0.0,>=2.0.0->scikit-image==0.16.2) (1.0.1)
Requirement already satisfied: decorator>=4.1.0 in /opt/AI/AN/lib/python3.7/site-packages (from networkx>=2.0->scikit-image==0.16.2) (4.3.0)
Requirement already satisfied: setuptools in /opt/AI/AN/lib/python3.7/site-packages (from kiwisolver>=1.0.1->matplotlib!=3.0.0,>=2.0.0->scikit-image==0.16.2) (41.0.0)
Installing collected packages: scikit-image
  Found existing installation: scikit-image 0.14.0
    Uninstalling scikit-image-0.14.0:
      Successfully uninstalled scikit-image-0.14.0
Successfully installed scikit-image-0.16.2

3) Try:

[root@localhost datasets]# python
Python 3.7.0 (default, Jun 28 2018, 13:15:42) 
[GCC 7.2.0] :: Anaconda, Inc. on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import skimage
>>> 

The import now works.