Everything you need to know about virtual memory
Virtual memory consists of virtualizing the address space. It is an operating system mechanism that requires hardware support. Here we will discuss what virtual memory is and how it works. Let's delve into it!
What is Virtual Memory?
Virtual memory is an operating system mechanism that allows running programs larger than RAM. It gives processes the illusion of having a potentially unlimited address space. The immediate consequences are these:
· We can run processes larger than the free space in RAM.
· We can process data larger than the free space in RAM.
It looks like witchcraft because we know that everything that needs to be processed must reside in primary memory. In fact, the thing that makes virtual memory brilliant is that we use the disk as if it were RAM.
In reality, that is not entirely accurate.
More precisely, we use secondary memory as temporary storage for pieces of processes/data that we do not need right now and that probably will not be needed for a while.
Imagine this scenario:
1. You need to run a process.
2. You load only its first pages into RAM.
3. The rest stays allocated on disk.
4. As the process requires more pages, you take them from the disk and swap them into RAM.
Or:
1. You have some processes in RAM.
2. You have to load another process, but you have no free space.
3. You take a piece of another process whose pages have not been used for a long time, and move it to disk.
4. In the newly freed space, you load the first pages of the new process.
A nice save, right?
We have just introduced a term: swap, which indicates a transfer between primary and secondary memory. More precisely:
· Swap in: disk -> RAM.
· Swap out: RAM -> disk.
Addressing in Virtual Memory (Segmentation with Paging)
Now that we know what virtual memory is, let's see how addressing works.
Since we use RAM and disk somewhat interchangeably, we need an address that takes this property into account. In fact, virtual addresses are used to address pages both in RAM and on disk.
A virtual address is a triple formed by <segment number, page number, offset>.
Below this address, we have the two familiar levels of segmentation and paging.
Segment table entry: <control bits, length, segment base>.
Page table entry: <P, M, control bits, frame number>.
We can see that page addressing is now slightly different. In particular, we have added two bits, P and M (there is a small code sketch after the two bullets below):
· P (present): indicates whether the page is currently in RAM or on disk.
· M (modified): indicates whether the page has been modified since it was loaded (and therefore needs to be written back to disk before it can be replaced).
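To make these two bits concrete, here is a minimal Python sketch of a page table entry. The class, field, and function names are assumptions for illustration, not how a real OS packs the entry into bits.

```python
from dataclasses import dataclass

@dataclass
class PageTableEntry:
    # Hypothetical field names; a real entry packs these flags into single bits.
    present: bool = False    # P: the page currently occupies a RAM frame
    modified: bool = False   # M: the page was written since it was loaded
    frame_number: int = -1   # meaningful only when present is True

def touch(entry: PageTableEntry, is_write: bool) -> None:
    """Reference the page: fault if it is not resident, mark it dirty on writes."""
    if not entry.present:
        raise RuntimeError("page fault: the page must be swapped in from disk")
    if is_write:
        entry.modified = True  # it will need a write-back before being replaced
```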
Before continuing with address translation, it is worth remembering that:
· Each process has its own segment table, which stores the starting point (base) and the length of each of its segments.
· Each process has several page tables, which address its various pages.
· Each segment addresses a specific page table.
Well, let's see the general scheme of addressing.
If something looks out of place, refer back to the three things we recalled a moment ago.
It may be useful to comment on the image just seen (a code sketch follows the steps):
1. We have a virtual address, therefore: a segment number, a page number, and an offset.
2. We also have the pointer to the segment table for the process.
3. Let's take the segment table and go to the entry denoted by the segment number.
4. In that entry, we find the address of the page table for that segment.
5. Let's take that page table and go to the entry denoted by the page number. In that entry, we find the frame number.
6. Let's go to RAM at the address denoted by that frame number, and move within the frame according to the offset.
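To make the walkthrough concrete, here is a minimal Python sketch of the two-level lookup. The dictionary layout, the page size, and the example tables are simplifying assumptions, not how a real MMU stores things.

```python
PAGE_SIZE = 4096  # assumed page/frame size in bytes

# Segment table for one process: segment number -> (length in pages, page table).
# Each page table maps a page number to (present bit, frame number).
segment_table = {
    0: {"length": 2, "page_table": {0: (True, 7), 1: (False, None)}},
}

def translate(seg: int, page: int, offset: int) -> int:
    """Translate a virtual address <seg, page, offset> into a physical address."""
    entry = segment_table.get(seg)
    if entry is None or page >= entry["length"]:
        raise MemoryError("invalid address: bad segment or page out of bounds")
    present, frame = entry["page_table"][page]
    if not present:
        raise RuntimeError("page fault: the page is on disk and must be swapped in")
    return frame * PAGE_SIZE + offset  # frame base + offset within the frame

print(hex(translate(0, 0, 0x2A)))  # frame 7 -> 0x702A
```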
Page Swap Policies
Ok, we know what swap is. Now imagine a situation where the system is in a constant state of swapping in and out. This phenomenon is called thrashing. Since swapping is expensive, thrashing devastates system performance. Notice that the general principle of locality is, as always, lurking around the corner.
Fetch Policy (loading policies)
This type of policy decides how pages are loaded into RAM.
When there is a reference to a page not present in memory, we have a page fault, which triggers the fetch mechanism, or replacement (which we will see shortly).
We have two types of fetch policies (see the sketch after the list):
· Demand paging: a page is loaded only when it is referenced; initially this causes many page faults.
· Prepaging: several contiguous pages are loaded at once (exploiting the locality principle).
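As a rough illustration, here is a tiny Python sketch contrasting the two policies; the function name and the window size are made up for the example.

```python
def pages_to_fetch(faulting_page: int, policy: str, window: int = 4) -> list[int]:
    """Decide which page numbers to load from disk after a fault (hypothetical helper)."""
    if policy == "demand":
        return [faulting_page]  # demand paging: only the referenced page
    if policy == "prepaging":
        # Prepaging: also bring in the next contiguous pages, betting on locality.
        return list(range(faulting_page, faulting_page + window))
    raise ValueError(f"unknown fetch policy: {policy}")

print(pages_to_fetch(10, "demand"))     # [10]
print(pages_to_fetch(10, "prepaging"))  # [10, 11, 12, 13]
```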
Replacement Policy (replacement policies)
This type of policy decides which page to replace, when necessary.
These policies are quite important; in fact, there are several algorithms that implement them. The quality of an algorithm is measured by the number of page faults it triggers.
Replaced pages are first inserted into a page buffer, which is typically split between modified and unmodified pages.
After a page fault, before loading the page from the disk, the memory manager checks whether it is still in the page buffer.
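A minimal Python sketch of that check; the two dictionaries standing in for the buffers and the helper names are assumptions for illustration.

```python
modified_buffer: dict[int, bytes] = {}    # evicted dirty pages awaiting write-back
unmodified_buffer: dict[int, bytes] = {}  # evicted clean pages

def read_page_from_disk(page: int) -> bytes:
    return bytes(4096)  # placeholder for a real (slow) disk read

def handle_page_fault(page: int) -> bytes:
    """Reclaim the page from the buffer if it is still there; otherwise go to disk."""
    for buffer in (unmodified_buffer, modified_buffer):
        if page in buffer:
            return buffer.pop(page)  # cheap recovery: no disk access needed
    return read_page_from_disk(page)
```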
Let's look at the following page replacement algorithms.
OPT (Optimal Replacement)
It is the utopian, optimal replacement policy. It is not feasible because it requires replacing the page whose next use is furthest in the future, which we cannot know in advance.
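Since OPT needs the future reference string, it can only be simulated offline. Here is a small Python sketch of how a simulator might pick the victim; the names and the example string are assumptions.

```python
def opt_victim(resident_pages: list[int], future_refs: list[int]) -> int:
    """Pick the resident page whose next reference is furthest away (or never comes)."""
    def next_use(page: int):
        return future_refs.index(page) if page in future_refs else float("inf")
    return max(resident_pages, key=next_use)

print(opt_victim([1, 2, 3], [2, 1, 2, 1, 4]))  # 3: it is never referenced again
```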
LRU (Least Recently Used)
Replaces the least recently used page. It needs a label on each page representing the time of its last use.
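A minimal LRU simulation in Python, using an OrderedDict as the "time of last use" bookkeeping; the reference string is just an example.

```python
from collections import OrderedDict

def lru_faults(refs: list[int], n_frames: int) -> int:
    """Count page faults for a reference string under LRU with n_frames frames."""
    frames = OrderedDict()  # least recently used page first
    faults = 0
    for page in refs:
        if page in frames:
            frames.move_to_end(page)        # refresh its "last used" position
        else:
            faults += 1
            if len(frames) == n_frames:
                frames.popitem(last=False)  # evict the least recently used page
            frames[page] = None
    return faults

print(lru_faults([1, 2, 3, 1, 2, 4, 1, 2, 5, 1, 2, 6], 3))  # 6 page faults
```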
FIFO (First In - First Out)
Replaces the page that has been in memory the longest; it works like a circular buffer.
Simple and cheap, too bad it does not take advantage of the principle of locality, so it performs poorly.
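The same kind of simulation for FIFO, with a deque playing the role of the circular buffer. On the same reference string as above it causes more faults than LRU, because it ignores which resident pages are still being used.

```python
from collections import deque

def fifo_faults(refs: list[int], n_frames: int) -> int:
    """Count page faults under FIFO: the oldest resident page is always the victim."""
    frames = deque()
    faults = 0
    for page in refs:
        if page not in frames:
            faults += 1
            if len(frames) == n_frames:
                frames.popleft()   # evict the page that was loaded longest ago
            frames.append(page)
    return faults

print(fifo_faults([1, 2, 3, 1, 2, 4, 1, 2, 5, 1, 2, 6], 3))  # 8 page faults
```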
Clock
We call it a clock because we can represent it graphically as a clock in which:
· Instead of numbers we have pages.
· Each page is assigned a use bit (or reference bit) with values {0, 1}.
· The hand acts as a pointer.
From a general point of view, we can say that the clock algorithm works like this:
· The pointer points to the oldest page.
· Initially the use bit is 0.
· When a page is referenced, its use bit is set to 1. While searching for a page to replace, the hand sets to 0 the use bit of every page it passes over.
· The page to be replaced is the first one the hand encounters with a use bit of 0.
This algorithm is also called second chance because pages with use bit 1 are given a second chance to remain in memory.
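A minimal Python sketch of the clock algorithm over a fixed set of frames; the class name and the reference string are assumptions, and the list scan stands in for real per-frame hardware bits.

```python
class Clock:
    def __init__(self, n_frames: int) -> None:
        self.pages = [None] * n_frames   # page held in each frame
        self.use = [0] * n_frames        # use (reference) bit per frame
        self.hand = 0                    # the clock hand

    def reference(self, page: int) -> bool:
        """Reference a page; return True if it caused a page fault."""
        if page in self.pages:
            self.use[self.pages.index(page)] = 1   # hit: set the use bit
            return False
        # Fault: advance the hand, clearing use bits, until a 0 is found.
        while self.use[self.hand] == 1:
            self.use[self.hand] = 0                # second chance spent
            self.hand = (self.hand + 1) % len(self.pages)
        self.pages[self.hand] = page               # replace the victim
        self.use[self.hand] = 1
        self.hand = (self.hand + 1) % len(self.pages)
        return True

clock = Clock(3)
faults = sum(clock.reference(p) for p in [1, 2, 3, 1, 2, 4, 1, 2, 5, 1, 2, 6])
print(faults)  # 8 page faults on this reference string
```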
Author: Vicki Lezama