Tag Archives: cache

[Solved] Redis Startup Error: FATAL CONFIG FILE ERROR

1. Redis Startup Error: Reading the configuration file, at line 194 >>> 'always-show-logo yes' Bad directive or wrong number of arguments
Error Messages:

[root@xxx-0001 src]# redis-server /etc/redis-cluster/redis-7001.conf
*** FATAL CONFIG FILE ERROR ***
Reading the configuration file, at line 194
>>> 'always-show-logo yes'
Bad directive or wrong number of arguments

Cause analysis:

The error means that a directive in the configuration file was not recognized, or was given the wrong number of arguments.
Here the real cause is a version mismatch: redis-4.0.8 was installed first, and its directory was written into the PATH environment variable. When `redis-server` is executed, the shell resolves it through the environment variable and finds the old 4.0.8 binary, while the configuration file in use targets Redis 5.0, so the old binary rejects the `always-show-logo` directive. In short, the `redis-server` on the PATH was imported from the previously installed version; after changing the Redis version, that stale PATH entry can no longer be used to start the server.

Solution:

From this, the solution is clear:
Method 1: re-import the new version's redis-server into the environment variable (PATH).
Method 2: invoke the redis-server binary inside the new version's directory directly to run the startup command.

Finally, here is the output after applying the fix:

[root@xxx-0001 src]# ./redis-server /etc/redis-cluster/redis-7001.conf
27895:C 06 Dec 2021 13:09:29.818 # oO0OoO0OoO0Oo Redis is starting oO0OoO0OoO0Oo
27895:C 06 Dec 2021 13:09:29.818 # Redis version=5.0.7, bits=64, commit=00000000, modified=0, pid=27895, just started
27895:C 06 Dec 2021 13:09:29.818 # Configuration loaded
[root@apm-0003 src]# ./redis-server /etc/redis-cluster/redis-7002.conf
27952:C 06 Dec 2021 13:09:37.218 # oO0OoO0OoO0Oo Redis is starting oO0OoO0OoO0Oo
27952:C 06 Dec 2021 13:09:37.218 # Redis version=5.0.7, bits=64, commit=00000000, modified=0, pid=27952, just started
27952:C 06 Dec 2021 13:09:37.218 # Configuration loaded
[root@apm-0003 src]# ./redis-server /etc/redis-cluster/redis-7003.conf
27996:C 06 Dec 2021 13:09:40.829 # oO0OoO0OoO0Oo Redis is starting oO0OoO0OoO0Oo
27996:C 06 Dec 2021 13:09:40.829 # Redis version=5.0.7, bits=64, commit=00000000, modified=0, pid=27996, just started
27996:C 06 Dec 2021 13:09:40.829 # Configuration loaded
[root@apm-0003 src]# ./redis-server /etc/redis-cluster/redis-7004.conf
28021:C 06 Dec 2021 13:09:43.651 # oO0OoO0OoO0Oo Redis is starting oO0OoO0OoO0Oo
28021:C 06 Dec 2021 13:09:43.651 # Redis version=5.0.7, bits=64, commit=00000000, modified=0, pid=28021, just started
28021:C 06 Dec 2021 13:09:43.651 # Configuration loaded
[root@apm-0003 src]# ./redis-server /etc/redis-cluster/redis-7005.conf
28065:C 06 Dec 2021 13:09:46.736 # oO0OoO0OoO0Oo Redis is starting oO0OoO0OoO0Oo
28065:C 06 Dec 2021 13:09:46.737 # Redis version=5.0.7, bits=64, commit=00000000, modified=0, pid=28065, just started
28065:C 06 Dec 2021 13:09:46.737 # Configuration loaded
[root@apm-0003 src]# ./redis-server /etc/redis-cluster/redis-7006.conf
28124:C 06 Dec 2021 13:09:50.963 # oO0OoO0OoO0Oo Redis is starting oO0OoO0OoO0Oo
28124:C 06 Dec 2021 13:09:50.963 # Redis version=5.0.7, bits=64, commit=00000000, modified=0, pid=28124, just started
28124:C 06 Dec 2021 13:09:50.963 # Configuration loaded
[root@xxx-0001 src]# ps -ef|grep redis
root      6227     1  0 12:35 ?       00:00:04 redis-server 0.0.0.0:6379
root     27896     1  0 13:09 ?       00:00:00 ./redis-server 0.0.0.0:7001 [cluster]
root     27953     1  0 13:09 ?       00:00:00 ./redis-server 0.0.0.0:7002 [cluster]
root     27998     1  0 13:09 ?       00:00:00 ./redis-server 0.0.0.0:7003 [cluster]
root     28022     1  0 13:09 ?       00:00:00 ./redis-server 0.0.0.0:7004 [cluster]
root     28066     1  0 13:09 ?       00:00:00 ./redis-server 0.0.0.0:7005 [cluster]
root     28125     1  0 13:09 ?       00:00:00 ./redis-server 0.0.0.0:7006 [cluster]
root     28276  4581  0 13:10 pts/4    00:00:00 grep --color=auto redis

[Solved] Qt CMake Compile Error: CMake Error: The source… does not match the source used to generate cache

CMake Error: The source… does not match the source… used to generate cache. Re-run cmake…

Solution:
Delete the CMakeLists.txt.user file in the project.

That solves the problem. It turns out CMakeLists.txt.user is also a cache file: it stores the project's previous build state, such as the debug directory. If this file is present, CMake reuses the stale cache information inside it, which produces all kinds of errors.

Delete the file and re-run CMake, and a fresh one is regenerated.

[Solved] Redis Execute redis-cli shutdown Error: (error) ERR Errors trying to SHUTDOWN. Check logs.

Executing `redis-cli shutdown` reports the error: (error) ERR Errors trying to SHUTDOWN. Check logs.

1. After installing Redis, start a pseudo cluster (the configuration file is /data/server/redis/etc/redis.conf):

redis-server /data/server/redis/etc/redis.conf
redis-server /data/server/redis/etc/redis.conf --port 6380
redis-server /data/server/redis/etc/redis.conf --port 6381
redis-server /data/server/redis/etc/redis.conf --port 6382

This brings up multiple Redis nodes.

2. Later we want to shut down the redundant nodes:

redis-cli -p 6380 shutdown

The error above is reported.

3. Solution: modify the Redis configuration file

vim /data/server/redis/etc/redis.conf
# Modify
logfile "/data/server/redis/log/redis.log"

Kill the redis process and restart it.

`redis-cli shutdown` now works.

[Solved] Redis Startup Error: Creating Server TCP listening socket 127.0.0.1:6379: bind: No error

Error: [52904] 08 Dec 15:09:41.278 # Creating Server TCP listening socket 127.0.0.1:6379: bind: No error

Solution:

Entering the following commands in sequence lets the server start and accept connections:

D:\zip\redis\Redis-x64-3.2.100>redis-server.exe redis.windows.conf
[52904] 08 Dec 15:09:41.278 # Creating Server TCP listening socket 127.0.0.1:6379: bind: No error

D:\zip\redis\Redis-x64-3.2.100>redis-cli.exe
127.0.0.1:6379> shutdown
not connected> exit

D:\zip\redis\Redis-x64-3.2.100>redis-server.exe  redis.windows.conf
                _._
           _.-``__ ''-._
      _.-``    `.  `_.  ''-._           Redis 3.2.100 (00000000/0) 64 bit
  .-`` .-```.  ```\/    _.,_ ''-._
 (    '      ,       .-`  | `,    )     Running in standalone mode
 |`-._`-...-` __...-.``-._|'` _.-'|     Port: 6379
 |    `-._   `._   /    _.-'    |     PID: 47344
  `-._    `-._  `-./  _.-'    _.-'
 |`-._`-._    `-.__.-'    _.-'_.-'|
 |    `-._`-._        _.-'_.-'    |           http://redis.io
  `-._    `-._`-.__.-'_.-'    _.-'
 |`-._`-._    `-.__.-'    _.-'_.-'|
 |    `-._`-._        _.-'_.-'    |
  `-._    `-._`-.__.-'_.-'    _.-'
      `-._    `-.__.-'    _.-'
          `-._        _.-'
              `-.__.-'

[47344] 08 Dec 15:10:36.580 # Server started, Redis version 3.2.100
[47344] 08 Dec 15:10:36.583 * DB loaded from disk: 0.000 seconds
[47344] 08 Dec 15:10:36.584 * The server is now ready to accept connections on port 6379

Arduino cache multiple definition error [How to Solve]

ESP32 compile error message:
C:\Users\……\Temp\arduino_build_26414\sketch\src\myMp3Data.cpp.o:(.data.MySampleMp3+0x0): multiple definition of `MySampleMp3′
C:\Users\……\Temp\arduino_build_26414\sketch\Vs1053Test.ino.cpp.o:(.data.MySampleMp3+0x0): first defined here

 

Solution:
Go to the directory below, delete everything inside it (or just the files named in the error), and then recompile:
C:\Users\……\Temp\arduino_build_26414\sketch\src

[Solved] Redis Cache Error: org.springframework.data.redis.serializer.SerializationException: Could not read JSON..

Problem description


Redis is used to cache data, and saving the data to Redis succeeds.

The value serializer is GenericJackson2JsonRedisSerializer, but the second access, which reads from the cache, fails with the error below.


Key error message, indicating that there is no default constructor:
org.springframework.data.redis.serializer.SerializationException: Could not read JSON: Cannot construct instance of `io.renren.common.utils.PageUtils` (no Creators, like default construct, exist): cannot deserialize from Object value (no delegate- or property-based Creator)

Problem-solving:

Method 1: change the value serializer to a String type. Method 2: add a no-argument constructor to the cached class.

Method 2 was adopted; the drawback of method 1 is that Chinese text may end up escaped.
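For illustration, method 2 amounts to giving the cached class a no-argument constructor. A minimal sketch (only the class name PageUtils comes from the error message; the field and accessors are hypothetical): Jackson-style deserializers instantiate the object through the no-arg constructor and then populate its properties, which is simulated here with reflection:

```java
class PageUtils {
    private int totalCount;                          // hypothetical field for illustration
    public PageUtils() { }                           // the empty constructor that fixes the error
    public PageUtils(int totalCount) { this.totalCount = totalCount; }
    public int getTotalCount() { return totalCount; }
    public void setTotalCount(int t) { this.totalCount = t; }

    public static void main(String[] args) throws Exception {
        // A deserializer creates the instance via the no-arg constructor,
        // then fills in properties; without that constructor this step fails.
        PageUtils p = PageUtils.class.getDeclaredConstructor().newInstance();
        p.setTotalCount(42);
        System.out.println(p.getTotalCount());
    }
}
```

With only the two-argument constructor present, `getDeclaredConstructor()` throws NoSuchMethodException, which is essentially what the serializer is reporting.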

Research on LRU algorithm

LRU stands for Least Recently Used.

It is a page-replacement algorithm used in memory management. Data blocks that sit in memory but are not being used are the "least recently used" candidates; the operating system moves them out of memory to make room for loading other data.
What is the LRU algorithm? LRU is the abbreviation of Least Recently Used, a page-replacement algorithm that serves virtual page storage management.
For operating-system memory management, how to use a small amount of memory to serve the largest number of processes has always been an important research direction, and virtual memory management is currently the most common and successful approach. With limited physical memory, part of external storage is used as an extension of memory, while real memory holds only the information needed at run time. This greatly extends the effective size of memory and improves the computer's degree of concurrency. Virtual page storage management divides a process's address space into pages, keeps only the currently needed pages in memory, and leaves the rest in external storage.

Note:
in virtual page storage, information moves between internal and external memory page by page. When a page in external storage is needed, it must be brought into memory; to keep the resident set the same size, another page must be swapped out, and the fewer pages we swap, the more efficiently the process runs. So which page should be swapped out to minimize swapping? We need an algorithm.
The ideal would be to always swap out the page that will not be used again for the longest time, which delays page replacement as much as possible. This is called the ideal (optimal) page-replacement algorithm. Unfortunately, it cannot be implemented, because it requires knowing future accesses.

To get as close as possible to the ideal algorithm, many ingenious algorithms have been devised, and least-recently-used replacement is one of them. LRU rests on the observation that pages used heavily by recent instructions are likely to be used heavily by the following instructions, and conversely, pages unused for a long time are likely to stay unused for a long time. This is the famous principle of locality (CPU caches, which are faster than main memory, rely on the same principle). So on every swap we simply evict the least recently used page. That is the whole of the LRU algorithm.
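The rule can be made concrete with a small simulation. A sketch (the page-reference string and frame count are invented for illustration) that replays accesses against three frames and always evicts the least recently used page:

```java
import java.util.*;

class LruPaging {
    public static void main(String[] args) {
        int frames = 3;
        int[] refs = {1, 2, 3, 1, 4, 5};        // hypothetical page-reference string
        // LinkedHashSet iteration order doubles as recency order (oldest first)
        LinkedHashSet<Integer> mem = new LinkedHashSet<>();
        int faults = 0;
        for (int p : refs) {
            if (mem.remove(p)) {                // hit: re-insert to refresh recency
                mem.add(p);
                continue;
            }
            faults++;                           // miss: page must be brought in
            if (mem.size() == frames) {         // memory full: evict least recently used
                Iterator<Integer> it = mem.iterator();
                it.next();
                it.remove();
            }
            mem.add(p);
        }
        System.out.println("faults=" + faults + " resident=" + mem);
    }
}
```

Accessing page 1 again before pages 4 and 5 arrive is what saves it: the re-access refreshes its recency, so pages 2 and 3 are evicted instead.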

FIFO, LRU and LFU algorithms

When it comes to caching, there are two points that must be considered:
(1) the consistency between cached data and target data.
(2) Cache expiration policy (mechanism).
Among them, cache expiration strategy involves elimination algorithm. Common elimination algorithms are as follows:
(1) FIFO: First In, First Out
(2) LRU: Least Recently Used
(3) LFU: Least Frequently Used
Note the difference between LRU and LFU: LFU selects its victim by how many times each item was used over a period of time, i.e. by difference in use count, while LRU selects by difference in last-use time.
A good caching framework implements all of the mechanisms above; Ehcache, for example, implements all of these policies.
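The LRU/LFU difference shows up on a concrete trace. A minimal sketch (capacity and access pattern invented for illustration): A is used three times but early, B once but recently; on inserting C, LRU evicts A (stalest access) while a toy count-based LFU evicts B (lowest count):

```java
import java.util.*;

class LruVsLfu {
    static final int CAP = 2;
    static Map<String, Integer> data = new LinkedHashMap<>(); // toy LFU store
    static Map<String, Integer> freq = new HashMap<>();       // use counts

    static void lfuPut(String k, int v) {
        if (!data.containsKey(k) && data.size() == CAP) {
            // evict the key with the smallest use count
            String victim = Collections.min(data.keySet(), Comparator.comparing(freq::get));
            data.remove(victim);
        }
        data.put(k, v);
        freq.merge(k, 1, Integer::sum);
    }

    static void lfuGet(String k) {
        if (data.containsKey(k)) freq.merge(k, 1, Integer::sum);
    }

    public static void main(String[] args) {
        // LRU: capacity-2 LinkedHashMap in access order
        LinkedHashMap<String, Integer> lru = new LinkedHashMap<String, Integer>(16, 0.75f, true) {
            protected boolean removeEldestEntry(Map.Entry<String, Integer> e) { return size() > CAP; }
        };
        lru.put("A", 1); lru.get("A"); lru.get("A"); // A: used often, but early
        lru.put("B", 2);                             // B: used once, recently
        lru.put("C", 3);                             // LRU evicts A (oldest access)
        System.out.println("LRU keeps " + lru.keySet());

        lfuPut("A", 1); lfuGet("A"); lfuGet("A");    // A: use count 3
        lfuPut("B", 2);                              // B: use count 1
        lfuPut("C", 3);                              // LFU evicts B (lowest count)
        System.out.println("LFU keeps " + data.keySet());
    }
}
```

LinkedHashMap with accessOrder=true plus removeEldestEntry is the standard Java shortcut for an LRU cache; the LFU half here is only a sketch (real LFU implementations use frequency buckets rather than a linear minimum scan).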

Doubly linked list + Hashtable

package com.example.test;

import java.util.Hashtable;

public class LRUCache{

    private int cacheSize;
    private Hashtable<Object,Entry> nodes;
    private int currentSize;
    private Entry first; 
    private Entry last; 

    public LRUCache(int i){
        currentSize = 0;
        cacheSize = i;
        nodes = new Hashtable<Object,Entry>(i);
    }

    /**
     * Get the object from the cache and move it to the head of the list
     */
    public Entry get(Object key){
        Entry node = nodes.get(key);
        if(node != null){
            moveToHead(node);
            return node;
        }else{
            return null;
        }
    }

    /**
     * add 
     */
    public void put(Object key,Object value){
        //First see if the hashtable has the entry, if it exists, then only update it
        Entry node = nodes.get(key);

        if(node == null){
            if(currentSize >= cacheSize){
                nodes.remove(last.key);
                removeLast();
            }else{
                currentSize++;
            }
            node = new Entry();
            node.key = key; //Bug fix: record the key, otherwise eviction can't remove this entry from the hashtable
        }
        node.value = value;
        //Place the most recently used node at the head of the chain table to indicate the most recently used
        moveToHead(node);
        nodes.put(key, node);
    }

    public void remove(Object key){
        Entry node = nodes.get(key);
        //Delete in the chain table
        if(node != null){
            if(node.prev != null){
                node.prev.next = node.next;
            }
            if(node.next != null){
                node.next.prev = node.prev;
            }
            if(last == node)
                last = node.prev;
            if(first == node)
                first = node.next;
        }
        //Remove it from the hashtable as well
        nodes.remove(key);
    }

    /**
     * Remove the last node of the list, i.e. the least recently used entry
     */
    private void removeLast(){
        //If the tail is not null, unlink the tail node (deleting the least recently used entry)
        if(last != null){
            if(last.prev != null)
                last.prev.next = null;
            else
                first = null;
            last = last.prev;
        }
    }

    /**
     * Move the node to the head of the list to mark it as most recently used
     */
    private void moveToHead(Entry node){
        if(node == first)
            return;
        if(node.prev != null)
            node.prev.next = node.next;
        if(node.next != null)
            node.next.prev = node.prev;
        if(last == node)
            last =node.prev;
        if(first != null){
            node.next = first;
            first.prev = node;
        }
        first = node;
        node.prev = null;
        if(last == null){
            last = first;
        }
    }
    /*
     * clear the cache
     */
    public void clear(){
        first = null;
        last = null;
        currentSize = 0;
    }


}

class Entry{
    Entry prev;
    Entry next;
    Object value;
    Object key;
}

Bitmap, cache and fresco Android image loading Library

Fresco Android image loading Library — Facebook

Fresco is a powerful image loading component.

A module called the image pipeline is designed in Fresco. It is responsible for loading images from the network, the local file system, and local resources. To save space and CPU time as much as possible, it contains a three-level cache design (two levels in memory, one on disk).

There is also a module in Fresco called Drawee, which conveniently shows a placeholder while an image loads. When the picture is no longer displayed on screen, it promptly releases the memory the image occupies.

Fresco supports Android 2.3 (API level 9) and above.

memory management

The decoded image (the Bitmap in Android) takes up a lot of memory. Heavy memory consumption inevitably causes more frequent GC, and below Android 5.0, GC pauses make the interface stutter.

On systems below 5.0, Fresco places images in a special memory region, and when a picture is no longer displayed, its memory is released automatically. This makes the app smoother and reduces OOMs caused by image memory.

Fresco performs just as well on low-end devices, so you don't have to think twice about how much image memory you use.

Progressive image presentation

The progressive JPEG format has been popular for several years. A progressive image first shows a rough outline and then becomes gradually sharper as the download continues. This is a great benefit on mobile devices, especially on slow networks, and gives a better user experience.

Android's own image library does not support this format, but Fresco does. As usual, you just supply the image's URI and Fresco handles the rest.

Displaying pictures quickly and efficiently on Android devices is very important, and over the past few years many problems have surfaced in how to store images efficiently. Pictures are large, but phone memory is small: each pixel's R, G, B and alpha channels take 4 bytes in total, so on a 480 x 800 screen a single screen-sized image occupies about 1.5 MB of memory. Phone memory is usually tight, especially since Android has to share it among many applications; on some devices only 16 MB is allotted to the Facebook app, so a single picture can take a tenth of that.
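The 1.5 MB figure is just width x height x 4 bytes per pixel; a quick check:

```java
class BitmapSize {
    public static void main(String[] args) {
        // ARGB_8888: R, G, B and alpha, 1 byte each = 4 bytes per pixel
        long bytes = 480L * 800 * 4;
        System.out.println(bytes + " bytes = " + (bytes / 1024.0 / 1024.0) + " MiB");
    }
}
```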

What happens when your app runs out of memory? It crashes, of course! We developed a library to solve this problem; we call it Fresco. It manages the images and memory used so that the app no longer crashes.

Memory area

To understand what Facebook did, we need to understand the differences between the heap memories Android can use. Every Android app's Java heap size is strictly limited. Every object instantiated with Java's new lives in the Java heap, a relatively safe region of memory because it is garbage collected: when the app stops using memory, the system automatically reclaims it.

Unfortunately, the garbage collection process is exactly the problem: while the heap is being collected, the Android application is completely paused. This is one of the most common reasons an app appears stuck or briefly frozen. It frustrates users, who may anxiously swipe the screen or tap buttons while the app's only response is to ask them to wait patiently until it comes back to life.

In contrast, the native heap is allocated by C++'s new. There is much more memory available there: the app is limited only by the device's physical memory, and there is no garbage collection mechanism. But the C++ programmer must free every allocation, otherwise memory leaks eventually crash the program.

Android has another memory region called ashmem. It behaves much like the native heap but adds extra system calls: instead of freeing a region that still holds data, Android can unpin it, a weak form of release, and the unpinned memory is actually reclaimed only when the system genuinely needs more memory. When Android pins the region again, the previous data is still in place as long as the space was not reclaimed in the meantime.

Three-level cache
1. Bitmap cache
The bitmap cache stores Bitmap objects, ready for immediate display or post-processing.

On systems below 5.0, the bitmap cache lives in ashmem, so creating and releasing Bitmap objects does not trigger GC, and fewer GCs make your app run more smoothly.

On 5.0 and above, in contrast, memory management is much improved, so the bitmap cache is placed directly on the Java heap.

When the application goes to the background, this cache is cleared.

2. Encoded memory cache
This cache stores images in their original compressed (encoded) format; images retrieved from it must be decoded before use.

If any resizing, rotation, or WebP transcoding needs to be done, it happens before decoding.

When the app is in the background, this cache is also cleared.

3. Disk (file) cache
Like the encoded memory cache, the file cache stores undecoded images in their original compressed format, and they too must be decoded before use.

Unlike the memory caches, its contents are not cleared when the app goes to the background, nor even when the app is closed. Users can clear it at any time from the system settings menu.

Bitmap and cache

Bitmap is special on Android because Android caps each app's memory, for example at 16 MB, though domestic custom ROMs are usually more generous. Two common caching strategies on Android are LruCache and DiskLruCache: the former serves as the memory cache, the latter as the disk cache.

A brief reading of the LruCache source code in the android.support.v4 package

package android.util;  

import java.util.LinkedHashMap;  
import java.util.Map;  

/** 
 * A cache that holds strong references to a limited number of values. Each time 
 * a value is accessed, it is moved to the head of a queue. When a value is 
 * added to a full cache, the value at the end of that queue is evicted and may 
 * become eligible for garbage collection. 
 * Cache keeps a strong reference to limit the number of contents. Whenever an Item is accessed, this Item is moved to the head of the queue.
 * When a new item is added when the cache is full, the item at the end of the queue is reclaimed.
 * <p>If your cached values hold resources that need to be explicitly released, 
 * override {@link #entryRemoved}. 
 * If a value in your cache needs to be explicitly freed, override entryRemoved()
 * <p>If a cache miss should be computed on demand for the corresponding keys, 
 * override {@link #create}. This simplifies the calling code, allowing it to 
 * assume a value will always be returned, even when there's a cache miss. 
 * If the value for a key may be missing, override create(); the calling code can then assume a value is always returned, even on a cache miss.
 * <p>By default, the cache size is measured in the number of entries. Override 
 * {@link #sizeOf} to size the cache in different units. For example, this cache 
 * is limited to 4MiB of bitmaps: 
 * By default the cache size is the number of entries; override sizeOf() to measure entries in different units.
 * <pre>   {@code 
 *   int cacheSize = 4 * 1024 * 1024; // 4MiB 
 *   LruCache<String, Bitmap> bitmapCache = new LruCache<String, Bitmap>(cacheSize) { 
 *       protected int sizeOf(String key, Bitmap value) { 
 *           return value.getByteCount(); 
 *       } 
 *   }}</pre> 
 * 
 * <p>This class is thread-safe. Perform multiple cache operations atomically by 
 * synchronizing on the cache: <pre>   {@code 
 *   synchronized (cache) { 
 *     if (cache.get(key) == null) { 
 *         cache.put(key, value); 
 *     } 
 *   }}</pre> 
 * 
 * <p>This class does not allow null to be used as a key or value. A return 
 * value of null from {@link #get}, {@link #put} or {@link #remove} is 
 * unambiguous: the key was not in the cache.
 * Do not allow key or value to be null
 * When get(), put(), remove() return null, the corresponding item of the key is not in the cache
 */  
public class LruCache<K, V> {  
    private final LinkedHashMap<K, V> map;  

    /** Size of this cache in units. Not necessarily the number of elements. */  
    private int size; //The size of the already stored
    private int maxSize; //the maximum storage space specified

    private int putCount; //the number of times to put
    private int createCount; //the number of times to create
    private int evictionCount; //the number of times to recycle
    private int hitCount; //number of hits
    private int missCount; //number of misses

    /** 
     * @param maxSize for caches that do not override {@link #sizeOf}, this is 
     *     the maximum number of entries in the cache. For all other caches, 
     *     this is the maximum sum of the sizes of the entries in this cache. 
     */  
    public LruCache(int maxSize) {  
        if (maxSize <= 0) {  
            throw new IllegalArgumentException("maxSize <= 0");  
        }  
        this.maxSize = maxSize;  
        this.map = new LinkedHashMap<K, V>(0, 0.75f, true);  
    }  

    /** 
     * Returns the value for {@code key} if it exists in the cache or can be 
     * created by {@code #create}. If a value was returned, it is moved to the 
     * head of the queue. This returns null if a value is not cached and cannot 
     * be created. The corresponding item is returned by key, or created. the corresponding item is moved to the head of the queue.
     * If the value of the item is not cached or cannot be created, null is returned.
     */  
    public final V get(K key) {  
        if (key == null) {  
            throw new NullPointerException("key == null");  
        }  

        V mapValue;  
        synchronized (this) {  
            mapValue = map.get(key);  
            if (mapValue != null) {  
                hitCount++;  
                return mapValue;  
            }  
            missCount++;  
        }  

        /* 
         * Attempt to create a value. This may take a long time, and the map 
         * may be different when create() returns. If a conflicting value was 
         * added to the map while create() was working, we leave that value in 
         * the map and release the created value. 
         * If it's missing, try to create an item
         */  

        V createdValue = create(key);  
        if (createdValue == null) {  
            return null;  
        }  

        synchronized (this) {  
            createCount++;
            mapValue = map.put(key, createdValue);  

            if (mapValue != null) {  
                // There was a conflict so undo that last put  
                //If oldValue exists before it, then undo put() 
                map.put(key, mapValue);  
            } else {  
                size += safeSizeOf(key, createdValue);  
            }  
        }  

        if (mapValue != null) {  
            entryRemoved(false, key, createdValue, mapValue);  
            return mapValue;  
        } else {  
            trimToSize(maxSize);  
            return createdValue;  
        }  
    }  

    /** 
     * Caches {@code value} for {@code key}. The value is moved to the head of 
     * the queue. 
     * 
     * @return the previous value mapped by {@code key}. 
     */  
    public final V put(K key, V value) {  
        if (key == null || value == null) {  
            throw new NullPointerException("key == null || value == null");  
        }  

        V previous;  
        synchronized (this) {  
            putCount++;  
            size += safeSizeOf(key, value);  
            previous = map.put(key, value);  
            if (previous != null) {  //The previous value returned
                size -= safeSizeOf(key, previous);  
            }  
        }  

        if (previous != null) {  
            entryRemoved(false, key, previous, value);  
        }  

        trimToSize(maxSize);  
        return previous;  
    }  

    /** 
     * @param maxSize the maximum size of the cache before returning. May be -1 
     *     to evict even 0-sized elements. 
     *  Empty cache space
     */  
    private void trimToSize(int maxSize) {  
        while (true) {  
            K key;  
            V value;  
            synchronized (this) {  
                if (size < 0 || (map.isEmpty() && size != 0)) {  
                    throw new IllegalStateException(getClass().getName()  
                            + ".sizeOf() is reporting inconsistent results!");  
                }  

                if (size <= maxSize) {  
                    break;  
                }  

                Map.Entry<K, V> toEvict = map.eldest();  
                if (toEvict == null) {  
                    break;  
                }  

                key = toEvict.getKey();  
                value = toEvict.getValue();  
                map.remove(key);  
                size -= safeSizeOf(key, value);  
                evictionCount++;  
            }  

            entryRemoved(true, key, value, null);  
        }  
    }  

    /** 
     * Removes the entry for {@code key} if it exists. 
     * Delete the corresponding cache item of the key and return the corresponding value
     * @return the previous value mapped by {@code key}. 
     */  
    public final V remove(K key) {  
        if (key == null) {  
            throw new NullPointerException("key == null");  
        }  

        V previous;  
        synchronized (this) {  
            previous = map.remove(key);  
            if (previous != null) {  
                size -= safeSizeOf(key, previous);  
            }  
        }  

        if (previous != null) {  
            entryRemoved(false, key, previous, null);  
        }  

        return previous;  
    }  

    /** 
     * Called for entries that have been evicted or removed. This method is 
     * invoked when a value is evicted to make space, removed by a call to 
     * {@link #remove}, or replaced by a call to {@link #put}. The default 
     * implementation does nothing. 
     * This method is called when an entry is evicted or removed: when a value is evicted to free storage space, removed by remove(),
     * or replaced by a call to put(). The default implementation does nothing.
     * <p>The method is called without synchronization: other threads may 
     * access the cache while this method is executing. 
     * 
     * @param evicted true if the entry is being removed to make space, false 
     *     if the removal was caused by a {@link #put} or {@link #remove}. 
     * true: the entry was removed to free space; false: the removal was caused by put() or remove()
     * @param newValue the new value for {@code key}, if it exists. If non-null, 
     *     this removal was caused by a {@link #put}. Otherwise it was caused by 
     *     an eviction or a {@link #remove}. 
     */  
    protected void entryRemoved(boolean evicted, K key, V oldValue, V newValue) {}  

    /** 
     * Called after a cache miss to compute a value for the corresponding key. 
     * Returns the computed value or null if no value can be computed. The 
     * default implementation returns null. 
     * Called when an Item is missing and returns the corresponding calculated value or null
     * <p>The method is called without synchronization: other threads may 
     * access the cache while this method is executing. 
     * 
     * <p>If a value for {@code key} exists in the cache when this method 
     * returns, the created value will be released with {@link #entryRemoved} 
     * and discarded. This can occur when multiple threads request the same key 
     * at the same time (causing multiple values to be created), or when one 
     * thread calls {@link #put} while another is creating a value for the same 
     * key. 
     */  
    protected V create(K key) {  
        return null;  
    }  

    private int safeSizeOf(K key, V value) {  
        int result = sizeOf(key, value);  
        if (result < 0) {  
            throw new IllegalStateException("Negative size: " + key + "=" + value);  
        }  
        return result;  
    }  

    /** 
     * Returns the size of the entry for {@code key} and {@code value} in 
     * user-defined units.  The default implementation returns 1 so that size 
     * is the number of entries and max size is the maximum number of entries. 
     * Note: returns the entry size in user-defined units; the default of 1 makes size() the number of entries and maxSize the maximum number of entries.
     * <p>An entry's size must not change while it is in the cache. 
     */  
    protected int sizeOf(K key, V value) {  
        return 1;  
    }  

    /** 
     * Clear the cache, calling {@link #entryRemoved} on each removed entry. 
     * Empties the cache.
     */  
    public final void evictAll() {  
        trimToSize(-1); // -1 will evict 0-sized elements  
    }  

    /** 
     * For caches that do not override {@link #sizeOf}, this returns the number 
     * of entries in the cache. For all other caches, this returns the sum of 
     * the sizes of the entries in this cache. 
     */  
    public synchronized final int size() {  
        return size;  
    }  

    /** 
     * For caches that do not override {@link #sizeOf}, this returns the maximum 
     * number of entries in the cache. For all other caches, this returns the 
     * maximum sum of the sizes of the entries in this cache. 
     */  
    public synchronized final int maxSize() {  
        return maxSize;  
    }  

    /** 
     * Returns the number of times {@link #get} returned a value that was 
     * already present in the cache. 
     */  
    public synchronized final int hitCount() {  
        return hitCount;  
    }  

    /** 
     * Returns the number of times {@link #get} returned null or required a new 
     * value to be created. 
     */  
    public synchronized final int missCount() {  
        return missCount;  
    }  

    /** 
     * Returns the number of times {@link #create(Object)} returned a value. 
     */  
    public synchronized final int createCount() {  
        return createCount;  
    }  

    /** 
     * Returns the number of times {@link #put} was called. 
     */  
    public synchronized final int putCount() {  
        return putCount;  
    }  

    /** 
     * Returns the number of values that have been evicted. 
     * Note: the count of values that have been evicted.
     */  
    public synchronized final int evictionCount() {  
        return evictionCount;  
    }  

    /** 
     * Returns a copy of the current contents of the cache, ordered from least 
     * recently accessed to most recently accessed.
     */  
    public synchronized final Map<K, V> snapshot() {  
        return new LinkedHashMap<K, V>(map);  
    }  

    @Override public synchronized final String toString() {  
        int accesses = hitCount + missCount;  
        int hitPercent = accesses != 0 ? (100 * hitCount / accesses) : 0;
        return String.format("LruCache[maxSize=%d,hits=%d,misses=%d,hitRate=%d%%]",  
                maxSize, hitCount, missCount, hitPercent);  
    }  
}
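The class above delegates its ordering to a LinkedHashMap kept in access order. As a minimal, self-contained sketch of that underlying mechanism (not the class above itself), an LRU cache can be derived directly from java.util.LinkedHashMap:

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Minimal LRU sketch: accessOrder=true moves each accessed entry to the
// tail, and removeEldestEntry evicts from the head when over capacity.
class SimpleLruCache<K, V> extends LinkedHashMap<K, V> {
    private final int maxEntries;

    SimpleLruCache(int maxEntries) {
        super(16, 0.75f, true); // accessOrder = true
        this.maxEntries = maxEntries;
    }

    @Override
    protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
        return size() > maxEntries; // evict the least recently accessed entry
    }
}
```

With a capacity of 2, putting a third key evicts whichever of the first two was accessed least recently.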

Cache penetration, cache breakdown and cache avalanche solutions

1. Preface

Caching is a standard part of program design: the front end sends a data request to the back end.

case 1: the data is found in the cache and returned to the front end directly.

case 2: the data is not in the cache, so it is fetched from the database; the cache is updated first, and the data is then returned to the front end.

case 3: the data is not found in the database either, so null is returned directly.
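The three cases above amount to a read-through lookup. A minimal sketch, assuming an in-process map as the cache and a caller-supplied function standing in for the real database query (names here are illustrative):

```java
import java.util.HashMap;
import java.util.Map;
import java.util.function.Function;

class ReadThroughCache {
    private final Map<String, String> cache = new HashMap<>();
    private final Function<String, String> loadFromDb; // stands in for the real DB query

    ReadThroughCache(Function<String, String> loadFromDb) {
        this.loadFromDb = loadFromDb;
    }

    String get(String key) {
        String value = cache.get(key);            // case 1: cache hit
        if (value != null) return value;
        value = loadFromDb.apply(key);            // case 2: fall back to the database
        if (value != null) cache.put(key, value); // update the cache before returning
        return value;                             // case 3: null if the DB has nothing
    }
}
```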

2. Cache penetration [misses the cache; no data in the database either]

definition: cache penetration means the requested data exists in neither the cache nor the database, yet the user keeps issuing requests, e.g. for an id of "-1" or an absurdly large, nonexistent id. Such a user is likely an attacker, and the attack puts excessive pressure on the database.

solutions:

1) Add validation at the interface layer. For example: ① user authentication; ② basic id validation (intercept and reject requests with id <= 0 directly).

2) Use a temporary caching mechanism. If the value is found in neither the cache nor the database, write the key with a null value and give it a short expiry time (for example 30 seconds; if the expiry is set too long, normal use may be affected). This prevents a user from repeatedly brute-forcing queries with the same id.
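"Caching the miss" can be sketched with a sentinel value plus a short expiry. This is a simplified in-process illustration; in a real deployment the Redis key TTL would play the role of the expiresAt field, and all names here are illustrative:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Sketch of "cache the miss": a missing key is stored under a NULL sentinel
// with a short expiry, so repeated lookups of nonexistent ids skip the DB.
class NullCachingStore {
    private static final String NULL_SENTINEL = "__NULL__";
    private static final long MISS_TTL_MS = 30_000; // kept short on purpose (~30 s)

    private static final class Entry {
        final String value;
        final long expiresAt;
        Entry(String value, long expiresAt) { this.value = value; this.expiresAt = expiresAt; }
    }

    private final Map<String, Entry> cache = new ConcurrentHashMap<>();
    private final Map<String, String> db; // stands in for the real database

    NullCachingStore(Map<String, String> db) { this.db = db; }

    String get(String key) {
        Entry e = cache.get(key);
        if (e != null && e.expiresAt > System.currentTimeMillis()) {
            return NULL_SENTINEL.equals(e.value) ? null : e.value;
        }
        String value = db.get(key);
        // Cache the miss under the sentinel with a short TTL, so it cannot
        // poison the cache for long if the row appears later.
        cache.put(key, new Entry(value == null ? NULL_SENTINEL : value,
                System.currentTimeMillis() + MISS_TTL_MS));
        return value;
    }

    boolean missIsCached(String key) {
        Entry e = cache.get(key);
        return e != null && NULL_SENTINEL.equals(e.value)
                && e.expiresAt > System.currentTimeMillis();
    }
}
```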

3. Cache breakdown [misses the cache; the data is in the database]

definition: cache breakdown means the data is absent from the cache but present in the database (typically because the cached entry has expired). With many concurrent users, none of them can read the data from the cache, so they all hit the database at the same moment, causing an instant spike in database load.

solutions:

1) Hotspot data is set to never expire.

2) Add a mutex lock to serialize the query. Reference code follows.

import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.locks.Lock;
import java.util.concurrent.locks.ReentrantLock;

static Lock reenLock = new ReentrantLock();

   public List<String> getData() throws InterruptedException {
       List<String> result = new ArrayList<String>();

       // Try the cache first
       result = getDataFromCache();

       if (result.isEmpty()) {
           if (reenLock.tryLock()) {
               try {
                   System.out.println("Got the lock; fetching from the DB and writing to the cache");
                   // Fetch the data from the database
                   result = getDataFromDB();

                   // Write the queried data to the cache
                   setDataToCache(result);
               } finally {
                   reenLock.unlock(); // Release the lock
               }
           } else {
               result = getDataFromCache(); // Check the cache again first

               if (result.isEmpty()) {
                   System.out.println("No lock and no data in the cache; waiting...");
                   Thread.sleep(100); // Wait
                   return getData();  // Retry
               }
           }
       }

       return result;
   }

Note:

1) If there is data in the cache, the result will be returned directly.

2) If the cache is empty, the thread that acquires the lock fetches the data from the database. Until the lock is released, other concurrent threads wait 100 ms and then check the cache again. This prevents the same data from being fetched from the database, and written back to the cache, over and over.

3) Of course, this is a simplified flow. In theory it would be better to lock per key: thread A fetching key1 from the database should not block thread B fetching key2, which the code above clearly cannot do. Improvement: make the lock fine-grained down to the key.
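Fine-graining the lock to the key, as point 3 suggests, can be sketched with a map of per-key locks, so a thread rebuilding key1 never blocks a thread rebuilding key2. This is an illustrative sketch, not the article's code; loadFromDb stands in for the real database call:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.locks.Lock;
import java.util.concurrent.locks.ReentrantLock;
import java.util.function.Function;

// Per-key locking sketch: each key gets its own ReentrantLock, so
// concurrent rebuilds of *different* keys do not serialize each other.
class PerKeyLoader {
    private final Map<String, Lock> locks = new ConcurrentHashMap<>();
    private final Map<String, String> cache = new ConcurrentHashMap<>();
    private final Function<String, String> loadFromDb; // stands in for the DB call

    PerKeyLoader(Function<String, String> loadFromDb) { this.loadFromDb = loadFromDb; }

    String get(String key) {
        String value = cache.get(key);
        if (value != null) return value;
        Lock lock = locks.computeIfAbsent(key, k -> new ReentrantLock());
        lock.lock(); // only threads asking for THIS key contend here
        try {
            value = cache.get(key); // re-check: another thread may have filled it
            if (value == null) {
                value = loadFromDb.apply(key);
                if (value != null) cache.put(key, value);
            }
            return value;
        } finally {
            lock.unlock();
        }
    }
}
```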

4. Cache avalanche

definition: cache avalanche is when a large amount of cached data expires at roughly the same time while query volume is huge, so the load on the database becomes excessive and may even bring the machine down.

difference from "cache breakdown": cache breakdown is concurrent queries for the same piece of data; cache avalanche is many different pieces of data expiring at essentially the same time, so many lookups miss the cache and fall through to the database.

solutions:

1) When saving data to redis in batches, give each key a randomized expiry time, so the entries cannot all expire over the same short interval.

setRedis(key, value, time + Math.random() * 10000);
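The randomized expiry above spreads a batch of keys' expirations across a window so they cannot all lapse at once. A small sketch of just the TTL calculation (setRedis itself is the article's pseudo-code, not a real client API):

```java
import java.util.concurrent.ThreadLocalRandom;

class TtlJitter {
    // Base TTL plus a random offset of up to jitterMs, so a batch of keys
    // written at the same moment expires spread across the jitter window.
    static long withJitter(long baseTtlMs, long jitterMs) {
        return baseTtlMs + ThreadLocalRandom.current().nextLong(jitterMs + 1);
    }
}
```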

2) If redis is deployed as a cluster, spread the hotspot data evenly across the redis instances, so that no single failure loses all of it at once.

3) Set hotspot data to never expire, and update the cache whenever the underlying data is updated.

Solution to Flex 4 Error #2032 stream error

Recently, while working on a project, the initially released build showed no problems, but some users reported that they could not see the site at all, with screenshots showing Error #2032. It worked fine in the R&D center; later, by luck, the error was reproduced on one test machine, while the other 9 machines tested at the same time displayed normally. Searching the Internet turned up nothing: clearing the cache, changing compile settings, trying a pile of suggestions, with no effect. A weird problem! I was even considering going back to Flex 3, because a Flex 3 project I stumbled upon worked fine here. Then I came across a post on a foreign site:
Salesforce Flex: “Error #2032: Stream Error. URL: “
Give it a try: re-select the framework linkage and set it to merge into code.
Oh, my God, it worked.
This setting can be found under the project properties, in the Flex build path's library path.
Originally, the default in Flash Builder 4 is to use the SDK default; this is exactly the difference from Flex 3.
It is still not clear what the first option, the SDK default, actually does differently, but judging from the released output, the build is a lot smaller when the framework is merged into code.

Chrome Failed to load resource: net::ERR_CACHE_MISS

There is no such error message in IE/FF, but the following error message appears on the Chrome command line:
Failed to load resource: net::ERR_CACHE_MISS
The issue is a bug in the Chrome browser developer tools that appears to be related to caching, and it has been submitted to the Chromium issue tracker
(https://code.google.com/p/chromium/issues/detail?id=424599).
It does not affect normal use, can be ignored, and will be fixed in Chrome 40.x.x.x.

After installation, Ubuntu encounters [sdb] Asking for cache data failed, Assuming drive cache: write through

Original source: http://blog.csdn.net/liufei_learning/article/details/8521221

When installing Ubuntu 12.10 64-bit server, the following error occurs:

[11690.011238] [sdb] Asking for cache data failed

[11690.011248] [sdb] Assuming drive cache: write through

Googling turned up the following solution; this is a bug in Ubuntu. On my machine the problem disappears after the module is removed, but it comes back when the module is loaded again. As a temporary workaround, write a startup script that runs:

sudo rmmod ums_realtek
http://askubuntu.com/questions/132100/errors-in-dmesg-test-wp-failed-assume-write-enabled
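One way to apply the temporary workaround at boot, assuming an rc.local-style init (as on Ubuntu 12.x; paths may differ per release):

```shell
# /etc/rc.local — runs at the end of boot on Ubuntu 12.x-era systems.
# Unload the Realtek USB mass-storage module so the noisy card reader is
# disabled; reversible at any time with `modprobe ums_realtek`.
rmmod ums_realtek
exit 0
```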


I'm having the same issue on the official 12.04 LTS release. I also believe it is causing the system to be

less responsive. According to some sources it's harmless. (I can apparently only post 2 links.)

The following thread suggests this is error output from an onboard card reader:

https://bbs.archlinux.org/viewtopic.php?pid=1059099

It’s confirmed to be an upstream issue in

https://bugs.launchpad.net/ubuntu/+source/linux/+bug/987993

Run lsusb and find the offending device

nathan@Ham-Bone:~$ lsusb 

Bus 001 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub
Bus 002 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub
Bus 003 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub
Bus 004 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub
Bus 005 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub
Bus 006 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub
Bus 007 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub
Bus 008 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub
Bus 001 Device 002: ID 0bda:0158 Realtek Semiconductor Corp. USB 2.0 multicard reader

In my case it is the Realtek multicard reader; a quick check

$ dmesg | grep realtek
[    4.716068] usbcore: registered new interface driver ums-realtek
$ lsmod | grep realtek
ums_realtek            17920  0 

shows the module ums-realtek is loaded.

$ sudo rmmod ums_realtek

solved the problem for me, reversibly:

$ sudo modprobe ums_realtek

re-enables the card reader. I have not tested whether it still works, since I never use it.
If this does not work, there are other ways to disable a USB device, by unbinding it under the /sys/ directory.
