Analytical View Of Garbage Collection In Solid State Media File Systems


1.0 Overview
Solid state media such as flash memory are non-volatile computer memory that can be electrically erased and reprogrammed. These are specific types of EEPROMs that are erased and programmed in large blocks.
Flash memory is non-volatile, which means that it does not need power to maintain the information stored in the chip. In addition, flash memory offers fast read access times and better kinetic shock resistance than hard disks (Table 1).
Though flash memory has many advantages, its special hardware characteristics impose design challenges on storage systems.
Media        Read (512B)   Write (512B)   Erase
DRAM         2.56µs        2.56µs         -
NOR Flash    14.4µs        3.53ms         1.2s (128kB)
NAND Flash   135.9µs       226µs          2-3ms (16kB)
Disk         12.4ms        12.4ms         -

Table 1. Characteristics of different storage media. (NOR Flash: Intel 28F128JF3A-150, NAND Flash: K9F5608U0M)

1.1 Architecture
1.1.1 Partitions
Flash memory is divided into one or more sections of memory called partitions. A multi-partition architecture allows the system processor to multi-task I/O operations on the flash memory: while the processor reads from one partition, it can write to or erase another.

Figure 1. Partitions, Blocks and Pages
1.1.2 Blocks
In addition to partitions, flash memory is further divided into sections of memory called blocks (Figure 1). Flash memory devices in which all blocks are the same size are symmetrically blocked, while asymmetrically blocked devices generally have several blocks that are significantly smaller than the main array of flash blocks. Small blocks are typically used for storing small data or boot code. Block sizes vary from 64kB to 256kB.
1.1.3 Pages
Each block in flash memory comprises a fixed number of pages (Figure 1). A page is typically 512B to 2kB in size. While erase operations can be performed only on whole blocks, I/O operations can be performed on individual pages.
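To make the partition/block/page hierarchy concrete, the sketch below maps a linear byte address to its (block, page, offset) coordinates. The 128kB block and 2kB page sizes are example values chosen from the ranges quoted above, not the geometry of any particular device.

```python
# Example geometry, chosen from the ranges above (hypothetical device).
BLOCK_SIZE = 128 * 1024   # bytes per erase block
PAGE_SIZE = 2 * 1024      # bytes per page
PAGES_PER_BLOCK = BLOCK_SIZE // PAGE_SIZE  # 64 pages in this example

def locate(addr):
    """Map a linear byte address to (block, page-within-block, byte-within-page)."""
    block, in_block = divmod(addr, BLOCK_SIZE)
    page, offset = divmod(in_block, PAGE_SIZE)
    return block, page, offset

# Erases operate on whole blocks; reads and writes on individual pages.
assert locate(0) == (0, 0, 0)
assert locate(BLOCK_SIZE) == (1, 0, 0)          # first byte of the second block
assert locate(130 * 1024 + 7) == (1, 1, 7)      # second page of the second block
```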
1.2 Programming Data
Flash devices allow programming bit values from logical "1" to "0", but not from "0" to "1" (Figure 2). Returning values to "1" requires erasing a full block. When data is edited, it can either be re-written to the same block (by caching the original, erasing the block, and re-writing it) or written to a new location in flash, with the old copy invalidated. Eventually, the invalid data must be reclaimed, which is usually done as a background process.
Erased:      1 1 1 1 1 1 1 1 1 1
Programmed:  1 0 1 0 0 1 1 0 1 0
Re-erased:   1 1 1 1 1 1 1 1 1 1

Figure 2. Flash programming limitations: bits can only be programmed from 1 to 0; restoring 1s requires erasing the whole block.
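The one-way programming constraint can be modelled with a bitwise AND: programming can only clear bits, so any write that would need a 0-to-1 transition first requires a block erase. A minimal byte-level sketch (real devices track this per page and per block, which is simplified away here):

```python
ERASED = 0xFF  # an erased flash byte reads as all 1s

def can_program(old, new):
    """A byte can be overwritten in place only if no bit goes from 0 to 1."""
    return (old & new) == new

def program(old, new):
    """Flash programming can only clear bits: the stored result is old AND new."""
    return old & new

assert can_program(ERASED, 0xA5)       # an erased byte accepts any value
assert not can_program(0xA5, 0xFF)     # restoring 1s requires an erase
assert program(0b11110000, 0b10101010) == 0b10100000
```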

2.0 Flash File System Functions
While flash file systems have many functions in common with file systems for other media, there are many that are unique to file systems for flash devices.
2.1 Wear Leveling
Each block in a flash memory device has a finite number of erase-write cycles (~ 10,000 to 100,000).


To increase the life of a flash device, writes and erases should be spread as evenly as possible over all the blocks of the device. This is called wear leveling. Care must be taken in software to balance performance against evenly spread wear across blocks.
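One common wear-leveling heuristic, shown below as a sketch rather than any particular product's policy, is to track a per-block erase count and steer new writes toward the least-worn block among those that are currently erased:

```python
def pick_erased_block(erase_counts, erased_blocks):
    """Among currently erased blocks, pick the one erased the fewest times.

    erase_counts:  dict mapping block id -> number of erases so far
    erased_blocks: set of block ids that are clean and ready to accept writes
    """
    return min(erased_blocks, key=lambda b: erase_counts[b])

# Blocks 0, 2 and 3 are erased; block 3 has the least wear, so it is chosen.
counts = {0: 120, 1: 95, 2: 301, 3: 96}
assert pick_erased_block(counts, {0, 2, 3}) == 3
```

Real wear levelers also have to migrate long-lived ("cold") data off low-wear blocks occasionally, otherwise those blocks never become erasable; that refinement is omitted here.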
2.2 Garbage Collection
As mentioned previously, edits to files are usually not done "in place"; rather, data is written to a new location and the old data is invalidated. Since blocks must be erased before being rewritten, in-place updates are not efficient.
The invalid data needs to be cleaned up at regular intervals, and this process is called garbage collection or reclaim. When a block is reclaimed, the valid data in the block is migrated to a clean (erased) block called the spare block. When the reclaim process completes, the old block is erased and becomes the new spare block.
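The spare-block reclaim described above can be sketched as follows. Here `valid` marks which pages of the victim still hold live data; the function names and the flat-list block representation are illustrative, not taken from any particular file system.

```python
def reclaim(blocks, valid, victim, spare, erased_value=None):
    """Migrate valid pages from `victim` into `spare`, erase `victim`,
    and return the id of the new spare block (the freshly erased victim).

    blocks: dict mapping block id -> list of pages
    valid:  dict mapping block id -> list of bools, one per page
    """
    live = [p for p, ok in zip(blocks[victim], valid[victim]) if ok]
    pad = len(blocks[victim]) - len(live)
    blocks[spare] = live + [erased_value] * pad
    valid[spare] = [True] * len(live) + [False] * pad
    # Erase the old block: it becomes the new spare.
    blocks[victim] = [erased_value] * len(blocks[victim])
    valid[victim] = [False] * len(valid[victim])
    return victim

# Block 0 holds two live pages ("a", "c") and two invalidated ones.
blocks = {0: ["a", "b", "c", "d"], 1: [None] * 4}
valid = {0: [True, False, True, False], 1: [False] * 4}
new_spare = reclaim(blocks, valid, victim=0, spare=1)
assert blocks[1][:2] == ["a", "c"]   # live data compacted into the spare
assert new_spare == 0                # the erased victim is the new spare
```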
There has been much research on garbage collection algorithms (or cleaning policies) for log-structured file systems on disk-based storage. A garbage collection algorithm must decide how many blocks to erase, which blocks to erase, and where to migrate the valid data from the erased blocks. The primary concern of garbage collection algorithms has been reducing cleaning cost. For a flash memory file system, however, the number of victim blocks is also a concern: erase operations cost much more than read/write operations, so garbage collection can disturb normal I/O to the device.
3.0 Garbage Collection in Detail
Data on flash memory cannot be over-written unless the blocks are erased in advance. Also, erase operations occur only in larger units than write operations and take an order of magnitude longer. The slow erase operation therefore tends to decrease system performance and consume more power.
Therefore, if every update operation were performed in place, system performance would be poor, since updating even one byte would require one erase and several write operations. To avoid erasing on every update, a logging approach can be used; it is effective in several ways.
First, logging solves the inability to update in place: an update results in a new write at the end of the log and invalidation of the old data. The natural separation of asynchronous erases from writes allows write operations to fully utilize the fixed I/O bandwidth, preventing the performance degradation that can occur when writes and erases are performed simultaneously.
When valid data are updated to empty space at the end of the log, the obsolete data are left in place as garbage, which a garbage collector process can later reclaim. Garbage collection can run as a background process so that update operations remain efficient.
3.1 Steps to Collecting Garbage
The process of garbage collection can be performed in three stages:
1. Select the victim block to be erased.
2. Identify valid pages and copy them to the end of the log.
3. Erase the victim block.

Figure 3. Three steps of garbage collection.

The cleaning cost and the degree of wear leveling are the two primary concerns of the garbage collector. The garbage collector tries to minimize cleaning cost and to wear down all blocks as evenly as possible. Sometimes the objective of minimizing cleaning cost conflicts with that of wear leveling. For example, excessive wear leveling generates a large number of invalidated blocks, which degrades cleaning performance.

3.2 Garbage Collection Issues
There are several issues in garbage collection:
1. When: When should garbage collection start or stop? It may run periodically, or be triggered when the number of free blocks falls below a defined threshold.
2. How many: How many blocks should be cleaned at once? The more blocks cleaned at once, the more valid data can be reorganized. However, cleaning several blocks takes considerable time, which can disturb normal I/O; for this reason, many garbage collection algorithms select only one block at a time.
3. Which: Which block should be erased first? Victim selection can consider the amount of garbage in each block, or details such as block age and update time. Blocks containing more valid data take longer to reclaim, since their migration cost is higher. This choice is made by a victim selection algorithm.
4. Where: Where should the valid data go? This is decided by a data migration algorithm. Reorganization of valid data can be improved, for example, by clustering pages according to their age or type.
4.0 Garbage Collection Algorithms
Two algorithms are mainly used for cleaning:
1. Greedy algorithm: Selects the block with the maximum amount of garbage, hoping to reclaim the most space with the minimum cleaning work. It does not take data access patterns into consideration.
2. Cost-benefit algorithm: Selects a block by weighing the age and utilization of the data in it.
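The two policies can be contrasted in a few lines. The cost-benefit score used below is the classic log-structured file system formula, benefit/cost = age x (1 - u) / (1 + u), where u is the fraction of still-valid data in the block; the dictionary block representation is illustrative.

```python
def greedy_victim(blocks):
    """Pick the block with the most garbage (lowest utilization)."""
    return min(blocks, key=lambda b: b["utilization"])

def cost_benefit_victim(blocks, now):
    """Weigh free space gained against migration cost, favoring older blocks."""
    def score(b):
        u = b["utilization"]
        age = now - b["last_modified"]
        return age * (1 - u) / (1 + u)
    return max(blocks, key=score)

blocks = [
    {"id": 0, "utilization": 0.20, "last_modified": 90},  # fresh, mostly garbage
    {"id": 1, "utilization": 0.50, "last_modified": 10},  # old, half garbage
]
assert greedy_victim(blocks)["id"] == 0
# Cost-benefit prefers the cold block: age 90 vs. 10 outweighs its higher
# utilization (score 30.0 vs. 6.67 at now=100).
assert cost_benefit_victim(blocks, now=100)["id"] == 1
```

The example shows the key behavioral difference: greedy always chases the emptiest block, while cost-benefit will clean a fuller but long-stable (cold) block whose contents are unlikely to be invalidated soon.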
Garbage collection performance depends on the combination of victim selection policy and data migration policy. The cost-benefit policy generally performs better than the greedy policy. However, it does not perform well under high locality of access unless it is combined with an efficient data migration policy. After a large number of logging and cleaning operations under high locality of access, cold data mixes with hot data within each block, and the cold data is then uselessly moved along with the hot data. As a result, the utilization of cleaned blocks stays high and the amount of free space collected becomes small; in other words, migration cost and erasure cost increase. To overcome this problem, the cost-benefit policy has to be combined with a data migration policy that separates cold data from hot data when migrating valid pages.
The separate block cleaning algorithm uses separate blocks during cleaning: one for cleaning not-cold blocks and writing new data, the other for cleaning cold segments. Separate segment cleaning was shown to perform better than cleaning with a single segment, since hot data are less likely to mix with cold data. The dynamic data clustering algorithm clusters data according to their update frequencies by actively migrating data. However, these two policies cannot be applied directly to the many flash memory file systems that are based on the log-structured file system.
The age-sort policy is used in the log-structured file system. It sorts the valid pages in victim blocks by the time they were last modified and migrates them to the end of the log, oldest pages first. We use the cost-benefit with age-sort algorithm, as in the log-structured file system. If we can predict the I/O workload, such as the number of I/O request arrivals during the next garbage collection run, we can control the number of victim blocks accordingly: if the predicted workload is high, the garbage collector selects at most one victim block; otherwise, it can select several victim blocks and thus improve its performance.
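The age-sort migration step can be sketched in a few lines: valid pages from the victim blocks are ordered by last-modified time, oldest first, before being appended to the log. The `mtime` field name and page representation are illustrative.

```python
def age_sort_migrate(victim_pages, log):
    """Append valid pages to the log, oldest (least recently modified) first,
    so that data of similar age ends up clustered together in the log."""
    for page in sorted(victim_pages, key=lambda p: p["mtime"]):
        log.append(page)
    return log

log = []
pages = [{"data": "b", "mtime": 50},
         {"data": "a", "mtime": 10},
         {"data": "c", "mtime": 70}]
age_sort_migrate(pages, log)
assert [p["data"] for p in log] == ["a", "b", "c"]  # oldest page first
```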
The proposed garbage collection module consists of three components (Figure 4):
1. Monitor: measures the request arrival rate.
2. Predictor: uses the measurements from the monitor to estimate the workload characteristics of the near future.
3. Garbage collector: performs the garbage collection task.
The monitor tracks the number of request arrivals in each measurement interval and records this value, maintaining a finite history of the most recent arrival counts.
The garbage collector uses the cost-benefit with age-sort algorithm. It uses the predicted workload to determine the number of victim blocks.
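The three components can be sketched as a small pipeline. The predictor below uses a simple moving average over the monitor's history to estimate the next interval's arrival rate, and the collector chooses more victims when the predicted load is low. The window size, busy threshold, and victim cap are illustrative choices, not values from the paper.

```python
from collections import deque

class Monitor:
    """Records the number of I/O request arrivals per measurement interval,
    keeping only a finite history of the most recent intervals."""
    def __init__(self, history_len=8):
        self.history = deque(maxlen=history_len)

    def record(self, arrivals):
        self.history.append(arrivals)

class Predictor:
    """Estimates the near-future arrival rate from the monitor's history
    (moving average; the paper's exact predictor may differ)."""
    def __init__(self, monitor):
        self.monitor = monitor

    def predict(self):
        h = self.monitor.history
        return sum(h) / len(h) if h else 0.0

def choose_victim_count(predicted_rate, busy_threshold=100, max_victims=4):
    """Under heavy predicted load, clean only one block so normal I/O is not
    disturbed; otherwise clean several blocks at once."""
    return 1 if predicted_rate >= busy_threshold else max_victims

busy = Monitor()
for arrivals in [120, 130, 110]:
    busy.record(arrivals)
assert choose_victim_count(Predictor(busy).predict()) == 1   # busy: one victim

idle = Monitor()
idle.record(5)
assert choose_victim_count(Predictor(idle).predict()) == 4   # idle: clean more
```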

Figure 4. Proposed Garbage Collector Architecture
5.0 Conclusion
In this paper, we studied an intelligent garbage collection algorithm that predicts the I/O workload of the near future and determines the number of victim blocks according to the predicted workload. If the number of I/O request arrivals during the next garbage collection run can be predicted, this information can be used to control the number of victim blocks so that the garbage collector gathers valid data from as many victim blocks as possible. The proposed garbage collection scheme reduces cleaning cost by performing data migration efficiently.
6.0 References
• A Flash-Memory Based File System
• A Space-Efficient Flash Memory Software For Mobile Devices
• An Efficient NAND Flash File System For Flash Memory Storage
• An Intelligent Garbage Collection Algorithm For Flash Memory Storages
• Efficient Allocation Algorithms For Flash File Systems
• Fast Initialization And Memory Management Techniques For Log-Based Flash Memory File Systems
• Flash File System
• Flash File Systems Overview
• Flash Memory File Caching For Mobile Computers
• JFFS: The Journaling Flash File System
• Memory-Efficient Compressed File System Architecture For NAND Flash-Based Embedded Systems
• Reverse Indirect Flash File System
• The Design And Implementation Of A Log-Structured File System
