Memory is usually divided into two partitions: one for the resident operating system, and one for the user processes.
We may place the operating system in either low memory or high memory. The major factor affecting this decision is the location of the interrupt vector. Since the interrupt vector is often in low memory, programmers usually place the operating system in low memory as well.
In contiguous memory allocation, each process is contained in a single contiguous section of memory.
Memory Protection
We must protect the operating system from the user processes and protect the user processes from one another. The relocation register, together with the limit register, provides this protection. The relocation register contains the value of the smallest physical address; the limit register contains the range of logical addresses. With relocation and limit registers, each logical address must be less than the value in the limit register; the MMU maps the logical address dynamically by adding the value in the relocation register. Because the CPU checks every address generated in this way, it can protect the operating system and the other users' programs and data from being modified by the running process.
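The following is a minimal sketch of the check the MMU performs on every memory reference; the register values (relocation = 100040, limit = 74600) and the helper function are assumed purely for illustration:

```c
/* Minimal sketch of dynamic relocation with a limit check.
 * The register values below are assumed for illustration only. */
#include <stdio.h>
#include <stdlib.h>

typedef unsigned long addr_t;

static addr_t relocation = 100040; /* smallest physical address of the process */
static addr_t limit      = 74600;  /* range of logical addresses */

/* Translate a logical address; trap (here: exit) if it is out of range. */
addr_t translate(addr_t logical)
{
    if (logical >= limit) {
        fprintf(stderr, "trap: addressing error (logical %lu >= limit %lu)\n",
                logical, limit);
        exit(EXIT_FAILURE);
    }
    return logical + relocation; /* dynamic relocation by the MMU */
}

int main(void)
{
    printf("logical 2048  -> physical %lu\n", translate(2048));
    printf("logical 80000 -> physical %lu\n", translate(80000)); /* traps */
    return 0;
}
```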
The relocation-register scheme also provides an effective way to allow the operating system's size to change dynamically. For example, code for a device driver that is not commonly used can be loaded into memory only when it is needed.
Such code is sometimes called transient operating-system code; it comes and goes as needed.
Memory Allocation
The hole
Initially, all memory is available for user processes and is considered one large block of available memory, a hole. When a process arrives and needs memory, we search for a hole large enough for this process. If we find one, we allocate only as much memory as is needed, keeping the rest available to satisfy future requests.
The allocation procedure for a process
As processes enter the system, they are put into an input queue. The operating system takes into account the memory requirements of each process and the amount of available memory space in determining which processes are allocated memory. When a process is allocated space, it is loaded into memory and it can then compete for the CPU. When a process terminates, it releases its memory, which the operating system may then fill with another process from the input queue.
The allocation procedure from the input queue
At any given time, we have a list of available block sizes and the input queue. The operating system can order the input queue according to a scheduling algorithm. Memory is allocated to processes until, finally, the memory requirements of the next process cannot be satisfied; no available block of memory (or hole) is large enough to hold that process. The operating system can then wait until a large enough block is available, or it can skip down the input queue to see whether the smaller memory requirements of some other process can be met.
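Below is a toy, self-contained sketch of this procedure; the hole and request sizes are invented for illustration, and a first-fit placement with a simple skip-down policy is assumed:

```c
/* Toy sketch of allocating from an input queue, assuming holes and requests
 * are plain size arrays; a real system tracks addresses and process state too. */
#include <stdio.h>

#define NHOLES 3
#define NPROCS 4

int main(void)
{
    long hole[NHOLES]    = { 300, 600, 150 };      /* free block sizes (KB) */
    long request[NPROCS] = { 500, 700, 100, 250 }; /* input queue, FCFS order */

    for (int p = 0; p < NPROCS; p++) {
        int placed = 0;
        for (int h = 0; h < NHOLES; h++) {
            if (hole[h] >= request[p]) {           /* first hole big enough */
                hole[h] -= request[p];             /* allocate only what is needed */
                printf("P%d (%ldKB) -> hole %d, %ldKB left over\n",
                       p, request[p], h, hole[h]);
                placed = 1;
                break;
            }
        }
        if (!placed)  /* either wait for memory, or skip down the queue */
            printf("P%d (%ldKB) skipped: no hole large enough\n", p, request[p]);
    }
    return 0;
}
```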
The choice of hole during allocation
In general, a set of holes of various sizes is scattered throughout memory at any given time. When a process arrives and needs memory, the system searches this set for a hole that is large enough for the process. If the hole is too large, it is split into two: one part is allocated to the arriving process; the other is returned to the set of holes. When a process terminates, it releases its block of memory, which is placed back in the set of holes. If the new hole is adjacent to other holes, these adjacent holes are merged to form one larger hole. At this point, the system may need to check whether there are processes waiting for memory and whether this newly freed and recombined memory could satisfy the demands of any of these waiting processes.
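The sketch below illustrates splitting a hole on allocation and merging adjacent holes on release. It assumes a small fixed-capacity table of (start, size) holes kept sorted by start address; real allocators typically use linked lists or more elaborate structures:

```c
/* Sketch of hole splitting and coalescing over a sorted (start, size) table. */
#include <stdio.h>

#define MAX_HOLES 16

struct hole { long start, size; };
static struct hole holes[MAX_HOLES] = { { 0, 1000 } }; /* one 1000KB hole */
static int nholes = 1;

/* Allocate sz from the first hole large enough, splitting off the remainder. */
long alloc_first_fit(long sz)
{
    for (int i = 0; i < nholes; i++) {
        if (holes[i].size >= sz) {
            long start = holes[i].start;
            holes[i].start += sz;       /* remainder stays in the hole set */
            holes[i].size  -= sz;
            if (holes[i].size == 0) {   /* exact fit: drop the empty hole  */
                for (int j = i; j < nholes - 1; j++) holes[j] = holes[j + 1];
                nholes--;
            }
            return start;
        }
    }
    return -1;                          /* no hole large enough */
}

/* Return a block to the hole set and merge it with any adjacent holes. */
void free_block(long start, long sz)
{
    int i = 0;                          /* insert, keeping the table sorted */
    while (i < nholes && holes[i].start < start) i++;
    for (int j = nholes; j > i; j--) holes[j] = holes[j - 1];
    holes[i].start = start;
    holes[i].size  = sz;
    nholes++;

    /* merge with the following hole if adjacent */
    if (i + 1 < nholes && holes[i].start + holes[i].size == holes[i + 1].start) {
        holes[i].size += holes[i + 1].size;
        for (int j = i + 1; j < nholes - 1; j++) holes[j] = holes[j + 1];
        nholes--;
    }
    /* merge with the preceding hole if adjacent */
    if (i > 0 && holes[i - 1].start + holes[i - 1].size == holes[i].start) {
        holes[i - 1].size += holes[i].size;
        for (int j = i; j < nholes - 1; j++) holes[j] = holes[j + 1];
        nholes--;
    }
}

int main(void)
{
    long a = alloc_first_fit(200);   /* occupies [0,200)   */
    long b = alloc_first_fit(300);   /* occupies [200,500) */
    free_block(a, 200);              /* holes: [0,200) and [500,1000) */
    free_block(b, 300);              /* merges back into one 1000KB hole */
    printf("%d hole(s), first: start=%ld size=%ld\n",
           nholes, holes[0].start, holes[0].size);
    return 0;
}
```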
dynamic storage-allocation problem
How to satisfy a request of size n from a list of free holes.
The following are the most common strategies used to select a free hole from the set of available holes:
First fit
Allocate the first hole that is big enough. Searching can stop as soon as a free hole that is large enough is found.
Best fit
Allocate the smallest hole that is big enough. We must search the entire list, unless it is ordered by size; this strategy produces the smallest leftover hole.
Worst fit
Allocate the largest hole. Again, we must search the entire list, unless it is sorted by size; this strategy produces the largest leftover hole.
Both first fit and best fit are better than worst fit in terms of both speed and storage utilization.
Neither first fit nor best fit is clearly better than the other in terms of storage utilization, but first fit is generally faster.
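A compact comparison of the three strategies over the same set of holes is sketched below; the pick_hole helper and the hole sizes are invented for illustration:

```c
/* Small, self-contained comparison of first fit, best fit, and worst fit. */
#include <stdio.h>

#define NHOLES 5

/* Return the index of the chosen hole for a request, or -1 if none fits.
 * mode: 0 = first fit, 1 = best fit, 2 = worst fit. */
int pick_hole(const long hole[], int n, long request, int mode)
{
    int chosen = -1;
    for (int i = 0; i < n; i++) {
        if (hole[i] < request) continue;              /* must be big enough  */
        if (mode == 0) return i;                      /* first fit: stop now */
        if (chosen == -1 ||
            (mode == 1 && hole[i] < hole[chosen]) ||  /* best fit: smallest  */
            (mode == 2 && hole[i] > hole[chosen]))    /* worst fit: largest  */
            chosen = i;
    }
    return chosen;
}

int main(void)
{
    long hole[NHOLES] = { 100, 500, 200, 300, 600 };  /* free block sizes (KB) */
    long request = 212;
    printf("first fit -> hole %d\n", pick_hole(hole, NHOLES, request, 0)); /* 1 (500) */
    printf("best fit  -> hole %d\n", pick_hole(hole, NHOLES, request, 1)); /* 3 (300) */
    printf("worst fit -> hole %d\n", pick_hole(hole, NHOLES, request, 2)); /* 4 (600) */
    return 0;
}
```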
external fragmentation
As processes are loaded and removed from memory, the free memory space is broken into little pieces. External fragmentation exists when enough total memory space exists to satisfy a request but it is not contiguous; storage is fragmented into a large number of small holes.
External fragmentation in first fit and best fit
The selection of the first-fit versus best-fit strategies can affect the amount of fragmentation. (First fit is better for some systems, whereas best fit is better for others)
Another factor is which end of a free block is allocated. (Which is the leftover piece – the one on the top, or the one on the bottom?)
No matter which algorithm is used, external fragmentation will be a problem.
50-percent rule: statistical analysis of first fit reveals that, given N allocated blocks, another 0.5N blocks will be lost to fragmentation. That is, one-third of memory may be unusable!
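The one-third figure follows directly from the ratio of lost blocks to total blocks implied by the rule:

\[
\frac{0.5N}{N + 0.5N} = \frac{0.5N}{1.5N} = \frac{1}{3}
\]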
One solution to the problem of external fragmentation is compaction. The goal is to shuffle the memory contents so as to place all free memory together in one large block.
If relocation is static and is done at assembly or load time, compaction cannot be done;
Compaction is possible only if relocation is dynamic and is done at execution time.
The simplest compaction algorithm is to move all processes toward one end of memory, so that all holes move in the other direction and merge into one large hole of available memory. This scheme can be expensive.
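The sketch below illustrates this simplest scheme: every allocated block is slid toward low memory so that the free space coalesces into one large hole at the other end; the per-block copy is what makes compaction expensive. The memory contents and block table are invented for illustration:

```c
/* Sketch of the simplest compaction: slide all allocated blocks to low memory. */
#include <stdio.h>
#include <string.h>

#define MEM_SIZE 40

struct block { long start, size; };            /* allocated regions */

int main(void)
{
    char memory[MEM_SIZE + 1] = "....AAAA......BBBBBB....CC..............";
    struct block blocks[] = { { 4, 4 }, { 14, 6 }, { 24, 2 } };
    int nblocks = 3;

    long next_free = 0;                        /* next compacted address */
    for (int i = 0; i < nblocks; i++) {
        if (blocks[i].start != next_free) {
            /* relocate: copy the block's contents and update its base address */
            memmove(&memory[next_free], &memory[blocks[i].start], blocks[i].size);
            blocks[i].start = next_free;
        }
        next_free += blocks[i].size;
    }
    memset(&memory[next_free], '.', MEM_SIZE - next_free); /* one large hole */
    printf("after compaction: %s\n", memory);
    return 0;
}
```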
internal fragmentation
Consider a hole that is only slightly larger than a request: if we allocate exactly the requested memory, we are left with a tiny hole, and the overhead to keep track of this hole will be substantially larger than the hole itself. The general approach to avoiding this problem is to break physical memory into fixed-sized blocks and allocate memory in units of block size. With this approach, the memory allocated to a process may be slightly larger than the requested memory. The difference between these two numbers is internal fragmentation – memory that is internal to a partition but is not being used.
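A tiny sketch of this rounding, assuming a 4 KB block size and an arbitrary request, is shown below:

```c
/* Sketch of fixed-size block allocation and the internal fragmentation it causes.
 * The 4 KB block size and the request are assumed for illustration. */
#include <stdio.h>

#define BLOCK_SIZE 4096L   /* bytes per block (assumed) */

int main(void)
{
    long requested = 18462;                                     /* bytes asked for */
    long nblocks   = (requested + BLOCK_SIZE - 1) / BLOCK_SIZE; /* round up */
    long allocated = nblocks * BLOCK_SIZE;
    printf("requested %ld B, allocated %ld blocks = %ld B, "
           "internal fragmentation %ld B\n",
           requested, nblocks, allocated, allocated - requested);
    return 0;
}
```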
Another possible solution to the external-fragmentation problem is to permit the logical-address space of a process to be noncontiguous, thus allowing a process to be allocated physical memory wherever the latter is available.
Two complementary techniques achieve this solution: paging and segmentation. These techniques can also be combined.