Tag Archives: ProgrammerAH

Uninstall Windows computer software, so that the uninstall is clean

When we uninstall software from a Windows computer, we often find that even after the uninstall completes, a lot of junk is left behind, such as registry entries and residual configuration files.
Many people uninstall software simply by opening Control Panel > Apps > Uninstall. Do you think that gives you a clean uninstall? In fact, this method can still leave plenty of junk behind, and against rogue software it does not work at all.
So how do we uninstall software properly?
Here’s a cleaner uninstaller tool: Geek Uninstaller
This software is tiny, only a few megabytes in size, and the portable (installation-free) version is also very convenient to use.

After opening Geek Uninstaller, we find the software we want to remove and right-click it to uninstall it. Unlike an ordinary uninstaller, after the uninstall it also scans the computer for the leftover folders and registry entries.

This software supports uninstalling normally installed desktop software as well as Windows 10 (Store) apps.
Select the app store view under View, and you can then see the software downloaded from the app store and uninstall it accordingly.

Forced removal: for special cases where a normal uninstall cannot complete, you can use the force-removal function, which directly deletes the corresponding folder and scans for registry leftovers. It is also very easy to use, but for most software the method above is enough for a clean uninstall.


How to download Google Chrome offline installation package from the official website

Google Chrome is already the default browser for many people, but for “you know what” reasons, the online installer rarely succeeds and the built-in automatic update usually stalls forever. So we turn to third-party download sites, but I have found many times that the packages they serve are usually 32-bit.

Since I’m using a 64-bit version of Windows 7 (and I suspect many of you are too), and a 64-bit browser tends to run more smoothly on a 64-bit OS, what we want is the latest 64-bit Google Chrome offline installer.

1. Open the Chrome browser home page: http://www.google.cn/chrome
2. The URL in the address bar is then: http://www.google.cn/chrome/browser/desktop/index.html
3. Append ?standalone=1&platform=win64 to the end of that URL and press Enter to open the 64-bit download page. Click the “Download Chrome” button to get the Google Chrome 64-bit offline installation package.
To explain the added parameters: standalone=1 requests the offline (standalone) installer, and platform=win64 requests the 64-bit Windows build.
What if you add only ?standalone=1? Then you get the 32-bit Chrome offline package. Replace win64 with mac and you can download the macOS version.
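For reference, those query parameters can also be assembled programmatically; a minimal sketch using Python’s standard library (the parameter names standalone and platform come from the steps above):

```python
from urllib.parse import urlencode

base = "http://www.google.cn/chrome/browser/desktop/index.html"

# standalone=1 -> offline installer; platform=win64 -> 64-bit Windows build
params = {"standalone": 1, "platform": "win64"}
url = base + "?" + urlencode(params)
print(url)
# http://www.google.cn/chrome/browser/desktop/index.html?standalone=1&platform=win64
```

Swapping "win64" for "mac" in the dictionary yields the macOS download URL the same way.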

Encapsulating database insert, delete, and update operations with JDBC

import java.sql.*;
import java.util.Objects;

private static final String CLASS_FORNAME = "com.mysql.jdbc.Driver";
private static final String URL = "jdbc:mysql://127.0.0.1:3306/test";
private static final String USER = "root";
private static final String PASSWORD = "123456";

static{
    try {
        Class.forName(CLASS_FORNAME);
    } catch (ClassNotFoundException e) {
        e.printStackTrace();
    }
}

public static Connection getConn(){
    try {
        return DriverManager.getConnection(URL,USER,PASSWORD);
    } catch (SQLException e) {
        e.printStackTrace();
    }
    return null;
}

public static void close(Connection conn, Statement ps, ResultSet rs){
    // Close in reverse order of creation, and null-check each resource:
    // callers may pass null (e.g. there is no ResultSet for an update).
    try {
        if (rs != null) rs.close();
        if (ps != null) ps.close();
        if (conn != null) conn.close();
    } catch (SQLException e) {
        e.printStackTrace();
    }
}
public static boolean update(String sql, Object... obj){
    Connection conn = getConn();
    PreparedStatement ps = null;
    try {
        ps = conn.prepareStatement(sql);
        if (Objects.nonNull(obj)) {
            for (int i = 0; i < obj.length; i++) {
                // JDBC parameter indexes are 1-based
                ps.setObject(i + 1, obj[i]);
            }
        }
        int rows = ps.executeUpdate();
        return rows > 0;
    } catch (SQLException e) {
        e.printStackTrace();
    } finally {
        close(conn, ps, null);
    }
    return false;
}

Thinking: the pieces of the JDBC workflow that never change are set as constants (driver class, URL, user, password); obtaining the connection and closing the resources are each factored out into a method; and insert, delete, and update are unified into the single update method, since all three run SQL through executeUpdate. The caller passes the SQL plus a varargs array of parameters; we first check whether the array is non-empty, and if so bind each element to its placeholder (JDBC parameter indexes start at 1, hence setObject(i+1, obj[i])).

Why namenode can’t be started and its solution


Problem: After starting Hadoop, I checked with JPS and found no NameNode

Each NameNode format creates a new namespace ID, while tmp/dfs/data still holds the ID from the previous format. Formatting clears the data under the NameNode but does not clean the data under the DataNode, and the mismatched IDs cause the startup failure.
Solutions:

    stop running hadoop

stop-dfs.sh

    delete the files mapped by hadoop.tmp.dir in the core-site.xml configuration file, generally the hadoop/tmp folder

sudo rm -r tmp

    performs formatting

./bin/hdfs namenode -format

    restart

start-dfs.sh

Python failed to read TIF file exported by envi.

Python fails to read files exported directly as TIF from ENVI.
The error is reported as follows:

Traceback (most recent call last):
  File "/data/wdh/.conda/envs/AI_studywdh/lib/python3.6/site-packages/tifffile/tifffile.py", line 2296, in __init__
    byteorder = {b'II': '<', b'MM': '>', b'EP': '<'}[header[:2]]
KeyError: b'\x00\x00'

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "beijing_landsat.py", line 19, in <module>
    band1 = imageio.imread(path+img_path[:-1]+'1.tif')
  File "/data/wdh/.conda/envs/AI_studywdh/lib/python3.6/site-packages/imageio/core/functions.py", line 265, in imread
    reader = read(uri, format, "i", **kwargs)
  File "/data/wdh/.conda/envs/AI_studywdh/lib/python3.6/site-packages/imageio/core/functions.py", line 186, in get_reader
    return format.get_reader(request)
  File "/data/wdh/.conda/envs/AI_studywdh/lib/python3.6/site-packages/imageio/core/format.py", line 170, in get_reader
    return self.Reader(self, request)
  File "/data/wdh/.conda/envs/AI_studywdh/lib/python3.6/site-packages/imageio/core/format.py", line 221, in __init__
    self._open(**self.request.kwargs.copy())
  File "/data/wdh/.conda/envs/AI_studywdh/lib/python3.6/site-packages/imageio/plugins/tifffile.py", line 226, in _open
    self._tf = _tifffile.TiffFile(f, **kwargs)
  File "/data/wdh/.conda/envs/AI_studywdh/lib/python3.6/site-packages/tifffile/tifffile.py", line 2298, in __init__
    raise TiffFileError('not a TIFF file')
tifffile.tifffile.TiffFileError: not a TIFF file

Solution:
use the Save As option instead:
File > Save As
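The traceback shows why tifffile gives up: a valid TIFF file must begin with the byte-order marker II or MM, while the bad export starts with b'\x00\x00'. A minimal stand-alone sketch (plain Python, no third-party libraries) for checking a suspect file before handing it to imageio:

```python
def looks_like_tiff(path):
    """Return True if the file starts with a TIFF byte-order marker."""
    with open(path, "rb") as f:
        header = f.read(2)
    # b'II' = little-endian TIFF, b'MM' = big-endian TIFF
    return header in (b"II", b"MM")

if __name__ == "__main__":
    import os, tempfile
    # Simulate the bad ENVI export, whose header is b'\x00\x00'
    with tempfile.NamedTemporaryFile(suffix=".tif", delete=False) as tmp:
        tmp.write(b"\x00\x00" + bytes(100))
    print(looks_like_tiff(tmp.name))  # False
    os.remove(tmp.name)
```

A file that fails this check needs to be re-exported via File > Save As, as described above.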

CRM related SQL statements

CustomerReport.xml

select ${groupType} groupType,count(c.id) number
    from customer c
    left join employee e
    on c.seller_id = e.id
    <where>
      c.status = 0
      <if test="keyword!=null">
        and e.name like concat('%',#{keyword},'%')
      </if>
      <if test="beginDate!=null">
        and c.input_time &gt;=#{beginDate}
      </if>
      <if test="endDate!=null">
        and c.input_time &lt;=#{endDate}
      </if>
    </where>
    group by ${groupType}

EmployeeMapper

 <select id="selectByName" resultMap="BaseResultMap">
      select * from employee where name = #{name}
    </select>

PermissionMapper

<select id="list" resultType="cn.wolfcode.domain.Permission">
      select * from permission
    </select>
  <select id="selectExpressionByCurrentuserId" resultType="java.lang.String">
    select p.expression
    from employee_role er
    left join role_permission rp
    on er.role_id = rp.role_id
    left join permission p
    on rp.permission_id = p.id
    where er.employee_id = #{id}
  </select>
  <select id="selectAllExpression" resultType="java.lang.String">
    select expression from permission
  </select>

RoleMapper

 <select id="selectByEmployeeId" resultType="cn.wolfcode.domain.Role">
    select r.*
    from employee_role er
    left join role r
    on er.role_id = r.id
    where er.employee_id = #{id}
  </select>

Flicker problem of Vue

Vue has to go through a series of steps: it loads the template first and only then renders the data into it, so the raw template can flash on screen briefly.
Solution:
hide the element while loading and display it only after Vue has rendered the data.

<div v-cloak>
   {{message}}
</div>
 <style>
       [v-cloak]{
           display: none;
       }
   </style>

Difference between getElementsByName and getElementById

The difference between getElementsByName and getElementById:
getElementsByName returns a collection (an array-like NodeList) of elements;
getElementById returns a single element.

function sum(n, m) {
    var summary = 0;
    // getElementsByName returns a collection, so we loop over it
    var a = document.getElementsByName(n.toString());
    for (var i = 0; i < a.length; i++) {
        summary = summary + Number(a[i].value);
    }

    // Way 1: getElementById returns a single element directly
    var b = document.getElementById(m);
    b.value = summary;
    // Way 2: getElementsByName returns a collection; take the first item
    var f = document.getElementsByName(m);
    f[0].value = summary;
}

Location and optimization of server IO high problem

This share is mainly about performance-related problems that often come up in interviews: the likely causes, how to locate them, and how to solve them. In interviews I found that many candidates had no clear approach.
The purpose and objectives of this share:
• Common causes of high server IO
• Methods for locating common problems
 
Common causes of high server IO
Summary: Disks are usually the slowest subsystem of a computer and the most prone to performance bottlenecks because disks are the farthest from the CPU and CPU access to disks involves mechanical operations such as shaft rotation, track seeking, and so on.
If IO occupancy is too high, consider the following:
1) Writing too much log content (or the traffic is simply heavy)
  1. Is the content printed in the log reasonable? On front-end application servers, avoid frequent local logging and noisy exception logging.
  2. Is the log level reasonable?
  3. Consider asynchronous log writing (this generally smooths out CPU sawtooth fluctuation). To reduce disk IO operations, log writes are first buffered in memory; but if the log volume is large the buffer fills easily, so also consider compressing the logs.
2) The disk is full (symptoms under load testing: TPS decreases and response time increases)
  1. Find the large files filling the disk and delete them where reasonable; better still, have a scheduled cleanup script.
  2. Expand the disk capacity.
  3. If cleanup is difficult, keep reads and writes on the main disk and periodically move historical data to a mounted disk.
3) The number of database connections exceeds the limit, leaving too many sleeping tasks:
  1. Close the database connection every time the program is done with it.
  2. Or lower the MySQL timeout wait_timeout in the configuration file; it defaults to eight hours.
4) Database IO is too high because the query volume is large: use read/write splitting (add read replicas) or split the database to reduce disk pressure, and tune buffer parameters to reduce the IO write frequency.
5) High disk IO is caused by reading and writing files: RAID can be used to spread the load.
6) The disk itself performs poorly: consider replacing it with a faster disk.
Methods for locating common problems
When a Linux system has a performance problem, we can generally use commands such as top, iostat, iotop, free, and vmstat to get an initial fix on it.
Today we will talk about iostat and iotop. The general steps for locating a problem:
Step 1: use iostat to see whether there is an IO performance bottleneck; it gives us a wealth of IO status data.
Step 2: use iotop to find the process with high IO.
1. Common usage of iostat:
iostat -d -k 1 10    # view TPS and throughput information
The -d parameter shows device (disk) usage;
-k forces the relevant columns to use kilobytes as the unit;
1 10 means the display refreshes every 1 second, 10 times in total.

iostat -d -x -k 1 10    # view device utilization (%util) and response time (await)
We can get more (extended) statistics using the -x parameter.
Note: generally, when %util is above 70%, the IO pressure is relatively high: too many IO requests are being generated, the IO system is running at full load, and the disk may be a bottleneck.

iostat can also be used to get some CPU state values:
iostat -c 1 10    # view the CPU status
Note: when idle falls below 70%, the IO pressure is relatively high and processes spend more of their time waiting on IO.

2. With the common iostat commands above, we can basically determine whether there is an IO bottleneck; then we can use the iotop command to catch the culprit process. This part is relatively simple: just enter the command and run it (usually the process caught is java or mysqld; the processes doing the most IO are listed first).
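To automate the %util check described above, the extended device report can be parsed with a short script. A minimal sketch (it assumes sysstat’s `iostat -d -x -k` layout, where %util is the last column; the 70% threshold is the rule of thumb from the note above):

```python
def high_util_devices(iostat_output, threshold=70.0):
    """Parse `iostat -d -x -k`-style output and return (device, %util)
    pairs whose utilization exceeds the threshold. Assumes %util is
    the last column of each device line."""
    devices = []
    for line in iostat_output.splitlines():
        parts = line.split()
        if not parts or parts[0] in ("Device", "Device:"):
            continue  # skip blank and header lines
        try:
            util = float(parts[-1])
        except ValueError:
            continue  # skip non-device lines (e.g. avg-cpu summary text)
        if util > threshold:
            devices.append((parts[0], util))
    return devices

sample = """Device            r/s     w/s   rkB/s   wkB/s  await  %util
sda             10.00  250.00   80.00 4000.00  12.50  85.30
sdb              1.00    2.00    8.00   16.00   0.40   1.20"""
print(high_util_devices(sample))  # [('sda', 85.3)]
```

In practice the output of `iostat -d -x 1` would be fed in; any device the function flags is a candidate to investigate further with iotop.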

Password retrieval of Android keystore

Some background: when packaging I could always remember the password. But while my colleague was on a business trip he asked me for the packaging password, and I had forgotten it. Embarrassing…

I tried the usual passwords and they were all wrong. With no other way out, after all kinds of Baidu and Google searching, I finally found the answer on the omnipresent Stack Overflow.
(Note: this method only works if you have successfully packaged on this machine before, because only then will your local history contain the password from when you packaged.)
In the Project view, open the .gradle folder and find the directory for the Gradle version the project currently runs on (mine is 4.6). Inside there is a taskHistory folder containing a file named taskHistory.bin. Open this file and display it as text, and you will see a lot of garbled “Martian” characters.
Press Command+F to search for the keyword keyAlias plus your key alias; near the match you can find the password you stored when you packaged.
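The manual Command+F search can also be done with a short script. A minimal sketch (the taskHistory.bin path and the alias are placeholders; since the file is binary, we scan raw bytes and decode the text around each match):

```python
def find_near(path, keyword, context=120):
    """Return decoded text snippets around each occurrence of keyword
    in a binary file."""
    with open(path, "rb") as f:
        data = f.read()
    needle = keyword.encode()
    hits, start = [], 0
    while (i := data.find(needle, start)) != -1:
        snippet = data[max(0, i - context): i + context]
        # Undecodable binary bytes are replaced, keeping readable text intact
        hits.append(snippet.decode("utf-8", errors="replace"))
        start = i + 1
    return hits

# Usage (hypothetical path and alias):
# for s in find_near(".gradle/4.6/taskHistory/taskHistory.bin", "keyAlias"):
#     print(s)
```

Each printed snippet shows the readable text surrounding the keyAlias entry, which is where the stored password appears.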
 

Ranger yarn plug-in installation

The ranger-0.6.0-yarn-plugin needs to be installed on all ResourceManager nodes of YARN; NodeManager nodes
do not need it.
Log in as the HDFS install user garrison/zdh1234 (user group hadoop), fetch the installation package and unpack it:
scp /home/backup/ranger/ranger-0.6.0-yarn-plugin.tar.gz .
tar -zxvf ranger-0.6.0-yarn-plugin.tar.gz
vi install.properties
Modify the parameters as follows:

POLICY_MGR_URL=http://10.43.159.245:6080
SQL_CONNECTOR_JAR=/usr/share/java/mysql-connector-java.jar
REPOSITORY_NAME=yarndev
CUSTOM_USER=garrison
CUSTOM_GROUP=hadoop

To install the Ranger YARN plugin, note that the enable-yarn-plugin.sh script should be run as root:
./enable-yarn-plugin.sh
YARN needs to be restarted after the plugin is enabled.
Copy the package from ZDH-245 to the garrison user on ZDH-240:
scp -r garrison@ZDH-245:/home/garrison/ranger-0.6.0-yarn-plugin .
Run the installation script as root, and restart YARN there as well.
Registering the Service for the plugin:
a new YARN Service is registered in Ranger Admin with the following settings

Service Name = yarnpdev
UserName = garrison
Password = zdh1234
YARN REST URL = http://10.43.159.240:8188

Then click Test Connection; the Service saves successfully.
Close the all-queue policy,
create a root.default policy, and give the mysql user the right to submit to the queue.
To run a MapReduce task as the mysql user, give mysql access to the corresponding HDFS directories:

export JAVA_HOME=/usr/share/java/jdk1.7.0_80
/home/garrison/hadoop-2.7.1/bin/hadoop jar /home/garrison/hadoop-2.7.1/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.7.1.jar wordcount -Dmapreduce.job.queuename=default hdfs://gagcluster/usr/wordcout.txt /usr/wordresult_002
/home/garrison/hadoop-2.7.1/bin/hadoop fs -text /usr/wordresult_002/part-r-00000

Error: User usersync cannot submit applications to queue root.default.
YARN queue permissions are enforced by enabling Capacity Scheduler queues.
Add this configuration item to the yarn-site.xml configuration file:

<property>
    <name>yarn.resourcemanager.scheduler.class</name>    
    <value>org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler</value>
</property>

The capacity-scheduler.xml configuration is as follows, allowing only the garrison user to submit jobs:

<property>
  <name>yarn.scheduler.capacity.root.acl_submit_applications</name>
   <value>garrison</value>
   <description>
     The ACL of who can submit jobs to the root queue.
   </description>
 </property>
 <property>
  <name>yarn.scheduler.capacity.root.acl_administer_queue</name>
  <value>garrison</value>
  <description>
    The ACL of who can administer jobs on the default queue.
  </description>
</property>
<property>
  <name>yarn.scheduler.capacity.root.default.acl_submit_applications</name>
  <value>garrison</value>
  <description>
    The ACL of who can submit jobs to the default queue.
  </description>
</property>
<property>
  <name>yarn.scheduler.capacity.root.default.acl_administer_queue</name>
  <value>garrison</value>
  <description>
    The ACL of who can administer jobs on the default queue.
  </description>
</property>