
Research on LRU algorithm

LRU stands for "least recently used."

It is a page replacement algorithm used in memory management. Among the data blocks (memory pages) resident in memory, those that have gone unused the longest are the least recently used; the operating system evicts these first to make room for loading other data.

What is the LRU algorithm? LRU is the abbreviation of "least recently used": the least-recently-used page replacement algorithm, which serves virtual page storage management.

In operating-system memory management, how to serve as many processes as possible with a limited amount of physical memory has long been an important research direction, and virtual memory is currently the most common and successful approach. When memory is limited, part of external storage is used as an extension of memory (virtual memory), and physical memory holds only the information the current run actually uses. This greatly expands the effective capacity of memory and improves the concurrency of the computer. Virtual page storage management divides the space a process needs into pages, keeps only the currently needed pages in memory, and leaves the remaining pages in external storage.

Note:
under virtual page storage, information moves between internal and external memory in units of pages. When a page residing in external storage is needed, it must be brought into memory, and to keep the resident set the same size, another page must be swapped out. The fewer pages that have to be swapped, the more efficiently the process runs. So which page should be evicted so that as few swaps as possible are needed? We need an algorithm.
The ideal algorithm is easy to state: each time, evict the page that will not be used for the longest time in the future, which delays the next page fault as long as possible. This is called the optimal (ideal) page replacement algorithm. Unfortunately, it cannot be implemented, because it would require knowing future references in advance.

To get as close as possible to the ideal algorithm, a variety of ingenious algorithms have been devised, and the least-recently-used (LRU) page replacement algorithm is one of them. LRU rests on the observation that pages used frequently by recent instructions are likely to be used frequently by upcoming instructions; conversely, pages that have not been used for a long time are likely to remain unused for a long time. This is the famous principle of locality, and the CPU cache, which is faster than memory, is built on the same principle. Therefore, on each swap we simply evict the least recently used page. That is the whole of the LRU algorithm.

FIFO, LRU and LFU algorithms

When it comes to caching, two points must be considered:
(1) consistency between the cached data and the backing data;
(2) the cache expiration (eviction) policy.

The expiration policy involves an eviction algorithm. Common eviction algorithms include:
(1) FIFO: first in, first out
(2) LRU: least recently used
(3) LFU: least frequently used

Note the difference between LRU and LFU. LFU evicts the data item used the fewest times over a period of time, i.e., it decides by differences in use count; LRU decides by differences in last-use time.
A mature caching framework implements all of the above mechanisms; Ehcache, for example, implements all of these policies.
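The difference between FIFO and LRU eviction can be demonstrated with the standard library alone: java.util.LinkedHashMap's accessOrder constructor flag switches between insertion order (FIFO) and access order (LRU). A minimal sketch (class and method names are my own, not from a framework):

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class EvictionDemo {
    // A size-capped map: accessOrder = false evicts in FIFO order,
    // accessOrder = true evicts in LRU order.
    static <K, V> Map<K, V> boundedMap(int maxEntries, boolean accessOrder) {
        return new LinkedHashMap<K, V>(16, 0.75f, accessOrder) {
            @Override
            protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
                return size() > maxEntries;
            }
        };
    }

    public static void main(String[] args) {
        for (boolean accessOrder : new boolean[] {false, true}) {
            Map<String, Integer> m = boundedMap(2, accessOrder);
            m.put("a", 1);
            m.put("b", 2);
            m.get("a");      // touch "a": reorders only when accessOrder is true
            m.put("c", 3);   // third entry exceeds the cap of 2 -> evict eldest
            System.out.println((accessOrder ? "LRU:  " : "FIFO: ") + m.keySet());
        }
    }
}
```

For the same access sequence, FIFO evicts "a" (oldest insertion, printing `FIFO: [b, c]`), while LRU evicts "b" (least recently used, printing `LRU:  [a, c]`). LFU would need a use counter per entry, which LinkedHashMap does not track.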

Doubly linked list + Hashtable

package com.example.test;

import java.util.Hashtable;

public class LRUCache{

    private int cacheSize;
    private Hashtable<Object,Entry> nodes;
    private int currentSize;
    private Entry first;   // head of the list: most recently used
    private Entry last;    // tail of the list: least recently used

    public LRUCache(int i){
        currentSize = 0;
        cacheSize = i;
        nodes = new Hashtable<Object,Entry>(i);
    }

    /**
     * Get an object from the cache and move it to the head of the list.
     */
    public Entry get(Object key){
        Entry node = nodes.get(key);
        if(node != null){
            moveToHead(node);
            return node;
        }else{
            return null;
        }
    }

    /**
     * Add an entry, evicting the least recently used one if the cache is full.
     */
    public void put(Object key,Object value){
        //First see if the hashtable already has the entry; if so, only update it
        Entry node = nodes.get(key);

        if(node == null){
            if(currentSize >= cacheSize){
                nodes.remove(last.key);
                removeLast();
            }else{
                currentSize++;
            }
            node = new Entry();
            node.key = key; //Without this, eviction would try to remove a null key
        }
        node.value = value;
        //Place the most recently used node at the head of the list
        moveToHead(node);
        nodes.put(key, node);
    }

    public void remove(Object key){
        Entry node = nodes.get(key);
        //Unlink from the list
        if(node != null){
            if(node.prev != null){
                node.prev.next = node.next;
            }
            if(node.next != null){
                node.next.prev = node.prev;
            }
            if(last == node)
                last = node.prev;
            if(first == node)
                first = node.next;
            currentSize--; //Keep the size count in step with the table
        }
        //Remove from the hashtable
        nodes.remove(key);
    }

    /**
     * Remove the tail of the list, i.e. the least recently used entry.
     */
    private void removeLast(){
        //If the tail is not empty, detach it from the list
        if(last != null){
            if(last.prev != null)
                last.prev.next = null;
            else
                first = null;
            last = last.prev;
        }
    }

    /**
     * Move a node to the head of the list to mark it as most recently used.
     */
    private void moveToHead(Entry node){
        if(node == first)
            return;
        if(node.prev != null)
            node.prev.next = node.next;
        if(node.next != null)
            node.next.prev = node.prev;
        if(last == node)
            last = node.prev;
        if(first != null){
            node.next = first;
            first.prev = node;
        }
        first = node;
        node.prev = null;
        if(last == null){
            last = first;
        }
    }

    /**
     * Clear the cache.
     */
    public void clear(){
        first = null;
        last = null;
        nodes.clear(); //Also drop the table entries so they can be collected
        currentSize = 0;
    }
}

class Entry{
    Entry prev;
    Entry next;
    Object value;
    Object key;
}

Bitmap optimization and memory optimization

Is it necessary to call Bitmap's recycle() method on Android?

Before Android 3.0, bitmap pixel data was managed by native C code, so recycle() should be called. Although finalize() will eventually call recycle(), developers with Java experience know that relying on finalize() alone to release resources is fraught with problems.

From Android 3.0 on, the pixel data lives in a member variable mBuffer[] of the Bitmap object itself, so it is no longer necessary to call recycle(): once the Bitmap is set to null, the pixel data is reclaimed by the GC.

Now that Android is in the 5.0 era, it is reasonable not to support versions before 3.0.

In fact, the documentation describes Bitmap.recycle() as an advanced call that normally need not be invoked, since the normal GC process will free this memory when there are no more references to the bitmap.

Managing bitmap memory: Google's official explanation of bitmaps

In addition to the steps described in Caching Bitmaps, there are specific things you can do to facilitate garbage collection and bitmap reuse. The recommended strategy depends on which version(s) of Android you are targeting. The BitmapFun sample app included with this class shows you how to design your app to work efficiently across different versions of Android.

To set the stage for this lesson, here is how Android’s management of bitmap memory has evolved:

On Android 2.2 (API level 8) and lower, when garbage collection occurs, your app’s threads get stopped. This causes a lag that can degrade performance. Android 2.3 adds concurrent garbage collection, which means that the memory is reclaimed soon after a bitmap is no longer referenced.

On Android 2.3.3 (API level 10) and lower, the backing pixel data for a bitmap is stored in native memory. It is separate from the bitmap itself, which is stored in the Dalvik heap. The pixel data in native memory is not released in a predictable manner, potentially causing an application to briefly exceed its memory limits and crash. As of Android 3.0 (API level 11), the pixel data is stored on the Dalvik heap along with the associated bitmap.

The following sections describe how to optimize bitmap memory management for different Android versions.

Memory optimization

Memory optimization from: http://my.oschina.net/u/1753339/blog/223379

Loading a single image onto your application's UI is easy, but things get more complicated when you need to load many images. In many cases (such as ListView, GridView, or ViewPager), the set of images shown on screen keeps growing as the user scrolls, which eventually leads to an OutOfMemoryError (OOM).

To keep memory usage within a reasonable range, images scrolled off screen are usually released; the garbage collector then sees that you no longer hold references to them and reclaims their memory. This is fine in principle, but to keep the program fast and the interface loading quickly, you also have to handle the case where the user scrolls a recycled image back onto the screen. Reloading an image that was just loaded is an obvious performance bottleneck, so you need a way to avoid it.

Memory caching solves this problem: it lets components quickly reload and display already-processed images. Let's look at how to use a memory cache so that your application stays responsive and fluid while loading many images.

A memory cache provides fast access to images at the cost of valuable application memory. The core class is LruCache (provided in the android-support-v4 package), which is well suited to caching images. Its main principle is to keep strong references to the most recently used objects in a LinkedHashMap and to remove the least recently used objects from memory before the cache exceeds its configured size.
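LruCache itself is Android-only, but the mechanism just described, strong references in an access-ordered LinkedHashMap with eviction against a size budget, can be sketched in plain Java. The class name ByteLruCache is illustrative, not an Android API:

```java
import java.util.Iterator;
import java.util.LinkedHashMap;
import java.util.Map;

// Sketch of LruCache's core idea: an access-ordered LinkedHashMap holding
// strong references, evicting least-recently-used entries once a
// caller-defined byte budget is exceeded (LruCache's sizeOf analogue).
public class ByteLruCache<K> {
    private final LinkedHashMap<K, byte[]> map =
            new LinkedHashMap<>(16, 0.75f, true); // accessOrder = true
    private final int maxBytes;
    private int usedBytes;

    public ByteLruCache(int maxBytes) { this.maxBytes = maxBytes; }

    public synchronized void put(K key, byte[] value) {
        byte[] old = map.put(key, value);
        if (old != null) usedBytes -= old.length; // replaced an existing entry
        usedBytes += value.length;
        trim();
    }

    // get() also marks the entry as most recently used (access order).
    public synchronized byte[] get(K key) { return map.get(key); }

    public synchronized int size() { return usedBytes; }

    // Evict from the eldest (least recently used) end until under budget.
    private void trim() {
        Iterator<Map.Entry<K, byte[]>> it = map.entrySet().iterator();
        while (usedBytes > maxBytes && it.hasNext()) {
            usedBytes -= it.next().getValue().length;
            it.remove();
        }
    }

    public static void main(String[] args) {
        ByteLruCache<String> cache = new ByteLruCache<>(10);
        cache.put("a", new byte[4]);
        cache.put("b", new byte[4]);
        cache.get("a");               // touch "a" so "b" becomes the eldest
        cache.put("c", new byte[4]);  // 12 bytes > 10 -> evict "b"
        System.out.println(cache.get("b") == null); // true
    }
}
```

The methods are synchronized because, like LruCache, a cache is typically shared between a UI thread and background loader threads.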

In the past, a very popular memory-cache implementation used soft references or weak references. However, this method is no longer recommended: since Android 2.3 (API level 9), the garbage collector aggressively reclaims softly and weakly referenced objects, which makes them unreliable for caching. In addition, before Android 3.0 (API level 11), bitmap pixel data was stored in native memory and was not released in a predictable way, risking briefly exceeding the application's memory limits and crashing.

To select an appropriate cache size for an LruCache, consider factors such as the following:

How much memory can your device allocate to each application?

How many images are displayed on the device screen at a time? How many need to be preloaded because they may be displayed on screen soon?

What are the screen size and resolution of the device? An extra-high-resolution device (such as a Galaxy Nexus) needs more cache space than a lower-resolution device (such as a Nexus S) to hold the same number of images.

What are the dimensions and configuration of the images, and therefore how much memory will each one occupy?

How frequently are the images accessed? Will some be accessed more often than others? If so, you may want to keep certain images resident in memory, or use multiple LruCache objects for different groups of images.

Can you balance quantity against quality? Sometimes it is more useful to cache a larger number of low-resolution images and load the high-resolution versions in a background task.

There is no cache size that suits all applications; it is up to you. Analyze your program's memory usage and work out an appropriate limit. A cache that is too small causes images to be released and reloaded constantly, with no benefit; a cache that is too large can still cause java.lang.OutOfMemoryError exceptions.

Here is an example of using LruCache to cache images:

private LruCache<String, Bitmap> mMemoryCache;

@Override
protected void onCreate(Bundle savedInstanceState) {
    // Maximum available memory; exceeding it raises an OutOfMemoryError.
    // LruCache takes its capacity in kilobytes through the constructor.
    int maxMemory = (int) (Runtime.getRuntime().maxMemory() / 1024);
    // Use 1/8 of the maximum available memory as the cache size.
    int cacheSize = maxMemory / 8;
    mMemoryCache = new LruCache<String, Bitmap>(cacheSize) {
        @Override
        protected int sizeOf(String key, Bitmap bitmap) {
            // Measure each image in KB; the default sizeOf counts entries, not bytes.
            return bitmap.getByteCount() / 1024;
        }
    };
}

public void addBitmapToMemoryCache(String key, Bitmap bitmap) { 
    if (getBitmapFromMemCache(key) == null) { 
        mMemoryCache.put(key, bitmap); 
    } 
} 

public Bitmap getBitmapFromMemCache(String key) { 
    return mMemoryCache.get(key); 
}

In this example, one eighth of the memory the system allocates to the application is used as the cache size. On a mid-to-high-end phone this is about 4 MB (32/8). A full screen of images on an 800×480 device uses about 1.5 MB (800×480×4 bytes), so this cache can hold roughly 2.5 screens of images.
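The arithmetic in that paragraph can be checked directly; the 32 MB per-app heap is an assumption about a typical device of that era, not a fixed Android value:

```java
public class CacheBudget {
    public static void main(String[] args) {
        // Assume a 32 MB per-app heap, typical of mid-range devices of the time.
        int maxMemoryKb = 32 * 1024;
        int cacheKb = maxMemoryKb / 8;    // 1/8 of the heap for the cache
        int screenBytes = 800 * 480 * 4;  // ARGB_8888 uses 4 bytes per pixel
        System.out.println(cacheKb);      // 4096 KB = 4 MB
        System.out.println(screenBytes);  // 1536000 bytes, i.e. about 1.5 MB
        // How many full screens of pixels fit in the cache:
        System.out.println((double) (cacheKb * 1024) / screenBytes);
    }
}
```

The last line comes out a little above 2.5, which matches the "roughly 2.5 screens" figure in the text.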

When an image is loaded into an ImageView, the LruCache is checked first. If the corresponding key is found, the ImageView is updated immediately; otherwise a background thread is started to load the image.

public void loadBitmap(int resId, ImageView imageView) { 
    final String imageKey = String.valueOf(resId); 
    final Bitmap bitmap = getBitmapFromMemCache(imageKey); 
    if (bitmap != null) { 
        imageView.setImageBitmap(bitmap); 
    } else { 
        imageView.setImageResource(R.drawable.image_placeholder); 
        BitmapWorkerTask task = new BitmapWorkerTask(imageView); 
        task.execute(resId); 
    } 
}

BitmapWorkerTask also puts the key-value pair of the newly loaded image into the cache.

class BitmapWorkerTask extends AsyncTask<Integer, Void, Bitmap> { 
    // Load the image in the background
    @Override
    protected Bitmap doInBackground(Integer... params) { 
        final Bitmap bitmap = decodeSampledBitmapFromResource( 
                getResources(), params[0], 100, 100); 
        addBitmapToMemoryCache(String.valueOf(params[0]), bitmap); 
        return bitmap; 
    } 
}

(How should the key of an image key-value pair be defined? Use the image's path, i.e. its URI.)

What are the common memory leaks in Android

Querying a database without closing the Cursor

In Android, Cursor is a very commonly used object, but when writing code people often forget to call close(), or a flaw in the code's logic means close() is never reached.

Usually, in an Activity, we can call startManagingCursor() or simply use managedQuery() to let the Activity manage the Cursor object automatically.
Note, however, that once the Activity ends, the Cursor is no longer usable!
If the code operating on the Cursor does not run on the UI thread (for example, a background thread), you must check whether the Activity has ended, or wait for the background thread to finish, before onDestroy() runs.
The following is another common way a Cursor goes unclosed:
although Cursor.close() appears to be called, if an exception is thrown first, close() is skipped, causing a memory leak.
Therefore, our code should be written in the following way:

// Correct: close() in finally always runs, even if an exception is thrown.
Cursor c = queryCursor();
try {
    int a = c.getInt(1);
    ......
} catch (Exception e) {
} finally {
    c.close();
}

// Incorrect: if getInt() throws, close() is skipped and the Cursor leaks.
try {
    Cursor c = queryCursor();
    int a = c.getInt(1);
    ......
    c.close();
} catch (Exception e) {
}
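Since Cursor implements Closeable on modern Android versions, try-with-resources gives the same guarantee as the manual try/finally with less code. Cursor itself is Android-only, so this runnable sketch uses a stand-in resource (FakeCursor is a made-up name) to show that close() runs even when the body throws:

```java
// Demonstrates that try-with-resources calls close() even when the
// body throws, which is exactly the guarantee the finally block gives.
public class CloseDemo {
    static class FakeCursor implements AutoCloseable {
        boolean closed = false;
        int getInt(int column) {
            throw new IllegalStateException("bad column"); // simulate a failure
        }
        @Override public void close() { closed = true; }
    }

    public static void main(String[] args) {
        FakeCursor c = new FakeCursor();
        try (FakeCursor cursor = c) {
            cursor.getInt(1);          // throws mid-body
        } catch (IllegalStateException e) {
            // swallowed for the demo
        }
        System.out.println(c.closed); // true: close() ran despite the exception
    }
}
```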

unregisterReceiver() not called after registerReceiver()

After calling registerReceiver(), if unregisterReceiver() is never called, the receiver stays registered and occupies a large amount of memory.
We often see code like the following.
This is a very serious error: the anonymous BroadcastReceiver can never be unregistered, resulting in a memory leak.

registerReceiver(new BroadcastReceiver() { 
... 
}, filter); ... 

InputStream/OutputStream not closed

When using files or accessing network resources, failing to close InputStream/OutputStream objects will also lead to memory leaks.

recycle() not called after a Bitmap is used
According to the SDK description, it is not strictly necessary to call recycle(). However, calling recycle() when a bitmap is no longer used releases its memory as early as possible.

Context leak
This is a very subtle memory leak.
Consider the common pattern of caching a Drawable in a static field.
This usually happens when a Drawable is needed frequently and is expensive to load, so we do not want to recreate it every time an Activity loads.
Using a static field is undoubtedly the quickest way to write the code, but it is also very bad.
When a Drawable is attached to a View, the View is set as the Drawable's callback (via Drawable.setCallback()).
This means the Drawable holds a reference to the TextView, which in turn holds a reference to the Activity.
As a result, the Activity's memory cannot be released after it is destroyed.

Python error: MemoryError

A 32-bit Python process can use at most about 2 GB of memory; once usage exceeds that, a MemoryError is raised.

64-bit Python has no such limitation, so 64-bit Python is recommended.
One possible catch: the official NumPy and SciPy builds used to support only 32-bit Python, but corresponding 64-bit versions are now released.

OpenCV error: insufficient memory

Recently, I have been using the haartraining tool provided by OpenCV to train a classifier. With 20×20 training images the classifier trained fine; after switching to 80×86 images, training failed, and the error was due to insufficient memory.

Searching online turned up a solution: switch the build target from 32-bit to 64-bit, because a 32-bit process can only use around 2-3 GB no matter how much physical memory the machine has. Another option is to make the training images smaller.

JVM startup error: Could not reserve enough space for object heap

First, understand what the parameters mean:

-Xms2G / -Xmx2G: the minimum and maximum heap memory available to the JVM
-XX:PermSize / -XX:MaxPermSize: the size of the JVM's permanent generation (class metadata)

Solving the problem:

    The Eclipse startup error was: Could not reserve enough space for object heap.
    The configuration at the time was: -Xms512m -Xmx1024m -XX:PermSize=512M.
    Many Stack Overflow answers recommend setting the JAVA_OPTS variable, but one answer points out that a 32-bit process on Windows 7 cannot obtain much more than about 1200 MB of memory. That answer seemed plausible, so I tried it: after downloading and installing a 64-bit JDK, startup succeeded without the error.

Appendix:

    After switching to a 64-bit JDK, JRebel stopped working. After a long search, an answer on the official forums was to back up jrebel32.dll and put the 64-bit JRebel DLL in its place under the name jrebel32.dll.

    A summary of JVM out-of-memory errors:
    Java heap space: increase -Xmx
    PermGen space: increase -XX:PermSize / -XX:MaxPermSize
    Requested array size exceeds VM limit: the array being created is larger than the maximum heap, so the obvious fix is to either increase -Xmx or reduce the size of the array.
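For reference, the flags discussed above are passed on the java command line. The values here are illustrative and should be tuned to the machine; the PermSize flags apply only to JVMs before Java 8, which replaced the permanent generation with Metaspace:

```shell
# Heap bounds: 512 MB initial, 1 GB maximum.
# Note: 32-bit JVMs on Windows often cannot reserve much more than
# ~1.2-1.5 GB of contiguous heap, which is what triggers this error.
java -Xms512m -Xmx1024m -jar app.jar

# Pre-Java-8 JVMs size the permanent generation separately:
java -Xms512m -Xmx1024m -XX:PermSize=128m -XX:MaxPermSize=256m -jar app.jar
```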