operating system - What are the differences between virtual memory and physical memory?

I am often confused by the concept of virtualization in operating systems. Considering RAM as the physical memory, why do we need virtual memory to execute a process?

Where does this virtual memory stand when a process (program) is brought from the external hard drive into main memory (physical memory) for execution?

Who takes care of the virtual memory, and what is the size of the virtual memory?

Suppose the size of the RAM is 4 GB (i.e. 2^32 byte addresses, 0 through 2^32-1); what is the size of the virtual memory?



1 Answer


Software runs on the OS on a very simple premise: it requires memory. The OS provides that memory in the form of RAM. The amount required varies: some programs need huge amounts of memory, some need very little. Most (if not all) users run multiple applications at the same time, and since memory is expensive (and device size is finite), the amount available is always limited. So, given that every program requires a certain amount of RAM and that any of them may be running at the same time, the OS has to take care of two things:

  1. That a program keeps running until the user aborts it, i.e. it should not be killed just because the OS has run out of memory.
  2. Doing the above while maintaining respectable performance for the programs that are running.

Now the main question boils down to how memory is managed. What exactly decides where in memory the data belonging to a given program will reside?

Possible solution 1: Let individual programs explicitly specify the memory addresses they will use on the device. Suppose Photoshop declares that it will always use memory addresses 0 through 1023 (imagine memory as a linear array of bytes, so the first byte is at location 0 and the 1024th byte is at location 1023), i.e. occupying the first 1 KB of memory. Similarly, VLC declares that it will occupy the range 1244 through 1876, and so on.

Advantages:

  1. Every application is pre-assigned a memory slot, so when it is installed and executed, it just stores its data in that memory area, and everything works fine.

Disadvantages:

  1. This does not scale. Theoretically, an app may require a huge amount of memory when it is doing something really heavy-duty. So, to ensure that it never runs out of memory, the memory area allocated to it must always be at least that large. What if a program whose maximal theoretical memory usage is 2 GB (and which therefore requires a 2 GB allocation of RAM) is installed on a machine with only 1 GB of memory? Should the program abort on startup, saying that the available RAM is less than 2 GB? Or should it keep going and, the moment its memory requirement exceeds 2 GB, bail out with a message that not enough memory is available?

  2. It is not possible to prevent programs from mangling each other's memory. There are millions of programs out there; even if each of them were allotted just 1 kB of memory, the total memory required would exceed 16 GB, which is more than most devices offer. How, then, can different programs be allotted memory slots that do not encroach upon each other's areas? Firstly, there is no centralized software market that could regulate that every newly released program must assign itself a certain amount of memory from some still-unoccupied area. Secondly, even if there were, it would not be possible, because the number of programs is practically unbounded (thus requiring unbounded memory to accommodate all of them), and the total RAM available on any device is not sufficient to accommodate even a fraction of what would be required, making it inevitable that one program's memory bounds encroach upon another's. So what happens when Photoshop is assigned memory locations 1 through 1023 and VLC is assigned 1000 through 1676? If Photoshop stores some data at location 1008, VLC then overwrites it with its own data, and Photoshop later reads that location back believing it is still the data it had stored there, bad things will happen (a toy sketch of this clash follows below).

So clearly, as you can see, this idea is rather naive.
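
To make the clash concrete, here is a toy sketch in Python; the address ranges are the hypothetical ones from the example above, and the 2 KB "RAM" is invented purely for illustration:

    # Toy model of "solution 1": RAM as a flat byte array, every program hard-coding
    # its own address range (the ranges are the hypothetical ones from the example above).
    RAM = bytearray(2048)                 # pretend the machine has 2 KB of physical memory

    PHOTOSHOP_RANGE = range(0, 1024)      # Photoshop claims bytes 0..1023
    VLC_RANGE       = range(1000, 1677)   # VLC claims bytes 1000..1676 -- they overlap!

    overlap = range(max(PHOTOSHOP_RANGE.start, VLC_RANGE.start),
                    min(PHOTOSHOP_RANGE.stop, VLC_RANGE.stop))
    print(len(overlap))                   # 24 bytes that both programs believe are theirs

    # Photoshop writes its data at address 1008...
    RAM[1008] = 123
    # ...then VLC, unaware of Photoshop, writes to the same physical byte:
    RAM[1008] = 77
    # When Photoshop reads "its" byte back, the data it stored is gone:
    print(RAM[1008])                      # 77, not the 123 Photoshop expects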

Possible solution 2: Let's try another scheme, where the OS does the majority of the memory management. Programs, whenever they need memory, simply request it from the OS, and the OS accommodates them. Say the OS ensures that whenever a new process requests memory, the allocation comes from the lowest byte address possible (as said earlier, RAM can be imagined as a linear array of bytes, so for 4 GB of RAM the byte addresses range from 0 to 2^32-1); if it is an already running process asking for more, the allocation continues from the last memory location where that process resides. Since programs will be emitting addresses without knowing the actual physical address where the data ends up, the OS has to maintain a mapping, per program, from the address emitted by the program to the actual physical address. (Note: that is one of the two reasons we call this concept virtual memory. Programs do not care about the real memory address where their data is stored; they just spit out addresses on the fly, and the OS finds the right place to put the data, and to find it again later when asked.)
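
To make the idea concrete, here is a minimal sketch of such per-program bookkeeping, with a made-up write/read interface purely for illustration; it is byte-granular for clarity, whereas a real OS translates addresses in fixed-size chunks rather than individual bytes:

    # Minimal sketch: the OS hands out physical bytes from the lowest free address and
    # remembers, per program, which physical byte backs each address the program emits.
    RAM = bytearray(4 * 1024)        # pretend physical memory is 4 KB
    next_free = 0                    # "allocate from the lowest address" policy
    page_table = {}                  # program name -> {program's address: physical address}

    def write(program, addr, value):
        """Store `value` at the address the program thinks it is using."""
        global next_free
        mapping = page_table.setdefault(program, {})
        if addr not in mapping:              # first time this address is used:
            mapping[addr] = next_free        # back it with the next free physical byte
            next_free += 1
        RAM[mapping[addr]] = value

    def read(program, addr):
        """Return whatever the program previously stored at `addr`."""
        return RAM[page_table[program][addr]]

    # VLC emits address 45 with data 123; the OS decides where that byte really lives.
    write("vlc", 45, 123)
    write("browser", 45, 200)        # same "address", different program, different byte
    print(read("vlc", 45), read("browser", 45))   # 123 200 -- no clash this time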

Say the device has just been turned on and the OS has just launched; right now no other process is running (ignoring the OS, which is itself a process!), and you decide to launch VLC. So VLC is allocated a part of RAM starting from the lowest byte addresses. Good. Now, while the video is playing, you start your browser to view some webpage. Then you launch Notepad to scribble some text. And then Eclipse to do some coding... Pretty soon your 4 GB of memory is all used up, and the RAM looks like this:

[Figure: RAM completely filled by VLC, the browser, Notepad, Eclipse and other running processes]

Problem 1: Now you cannot start any other process, because all the RAM is used up. Thus programs have to be written with the maximum available memory in mind (in practice even less will be available, since other programs will be running in parallel as well!). In other words, you cannot run a memory-hungry app on your ramshackle 1 GB PC.

Okay, so now you decide that you no longer need to keep Eclipse and Chrome open; you close them to free up some memory. The space those processes occupied in RAM is reclaimed by the OS, and it looks like this now:

[Figure: RAM with two free holes left behind by Eclipse and Chrome]

Suppose that closing these two frees up 700 MB of space: (400 + 300) MB. Now you need to launch Opera, which will take up 450 MB. Well, you do have more than 450 MB available in total, but... it is not contiguous; it is divided into individual chunks, none of which is big enough to fit 450 MB. So you hit upon a brilliant idea: let's move all the lower processes as far up as possible, which will leave the 700 MB of empty space in one chunk at the bottom. This is called compaction.

Great, except that... all of those processes are running. Moving them means moving the addresses of all their contents (remember, the OS maintains a mapping from the addresses spat out by a program to the actual physical addresses). Imagine a program had spat out address 45 with data 123, and the OS had stored it at location 2012, creating an entry in the map from 45 to 2012. If the program is now moved in memory, what used to be at location 2012 will no longer be there but at some new location, and the OS has to update the map so that 45 points to the new address and the program still gets the expected data (123) when it asks for memory location 45. As far as the program is concerned, all it knows is that address 45 contains the data 123! Now imagine a process referencing a local variable i. By the time it is accessed again, its address has changed, and the process won't be able to find it any more. The same holds for all functions, objects and variables; basically everything has an address, and moving a process means changing the addresses of all of them. Which leads us to:

Problem 2: You cannot move a process. The values of all the variables, functions and objects within that process were hardcoded by the compiler during compilation; the process depends on them being at the same location during its lifetime, and changing them is expensive. As a result, processes leave behind big "holes" when they exit. This is called External Fragmentation.
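
To see what compaction means for that bookkeeping, here is a continuation of the earlier sketch, with made-up sizes (in MB) chosen to mirror the story: a 400 MB and a 300 MB hole, and Opera needing 450 MB of contiguous space. It only shuffles the OS's tables (a real compaction would also have to copy the bytes themselves), but it shows why every single mapping, and, without such a map, every hardcoded address, would have to change:

    # Hypothetical layout after Eclipse and Chrome exit: blocks are
    # (program, physical start, size), sorted by start address, sizes in MB.
    RAM_SIZE = 3200
    blocks = [
        ("vlc",        0,  700),
        # 400 MB hole freed by Eclipse
        ("notepad", 1100,  200),
        # 300 MB hole freed by Chrome
        ("photos",  1600, 1600),
    ]
    # Per-program map of "program address -> physical address" (one sample entry each).
    page_table = {"vlc": {45: 300}, "notepad": {7: 1150}, "photos": {99: 2000}}

    def largest_hole(blocks):
        """Size of the biggest contiguous free chunk."""
        holes, cursor = [], 0
        for _, start, size in blocks:
            holes.append(start - cursor)     # gap before this block
            cursor = start + size
        holes.append(RAM_SIZE - cursor)      # gap after the last block
        return max(holes)

    def compact(blocks, page_table):
        """Slide every block toward address 0 and fix up every mapping it contains."""
        cursor = 0
        for i, (program, start, size) in enumerate(blocks):
            delta = cursor - start                       # how far this block moves
            blocks[i] = (program, cursor, size)
            page_table[program] = {a: p + delta for a, p in page_table[program].items()}
            cursor += size

    print(largest_hole(blocks))      # 400 -- Opera (450 MB) does not fit anywhere
    compact(blocks, page_table)
    print(largest_hole(blocks))      # 700 -- one big free chunk; Opera fits now
    print(page_table["photos"][99])  # 1300 -- same program address, new physical byte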

Fine. Suppose somehow, by some miraculous manner, you do manage to move the processes up. Now there is 700 MB of free space at the bottom:

[Figure: RAM after moving the processes up, leaving a single 700 MB free chunk at the bottom]

Opera smoothly fits in at the bottom. Now your RAM looks like this:

[Figure: RAM with Opera slotted into the freed 700 MB chunk]

Good. Everything is looking fine. However, there is not much space left, and now you need to launch Chrome again, a known memory hog! It needs lots of memory to start, and you have hardly any left... except... you now notice that some of the processes which initially occupied a lot of space no longer need as much. Maybe you have paused your video in VLC, so it still occupies some space, but not as much as it did while playing a high-resolution video. Similarly for Notepad and Photos. Your RAM now looks like this:

[Figure: RAM with fresh holes inside the regions of the processes that now need less memory]

Holes, once again! Back to square one! Except that previously the holes appeared because processes terminated, while now they appear because processes need less memory than they did before.



...