OS Walkthrough 03 — Memory Management

PKU’s OS on Coursera: Week 07 ~ Week 08

Yu-Cheng Kuo
Feb 1, 2023

Now we move forward to concepts such as virtual memory, paging, demand paging, segmented paging, TLB (translation lookaside buffer), page fault, page replacement algorithm, Belady anomaly, thrashing, & COW (copy-on-write).

(1) Week 07: Memory Management-1

(2) Week 08: Memory Management-2

(1) Week 07: Memory Management-1

1. Memory allocation problems:

  • Multi-processing
  • A process may access chunks of memory another process is using.
  • External fragmentation

2. Handling the problem of process size > physical memory

  • Overlaying
  • Swapping (swap in / swap out) (page in / page out)
  • Virtual memory

3. MMU (Memory Management Unit)

4. Locality of reference

  • Spatial locality
  • Temporal locality

5. Relocation / translation / mapping [mmap()]

  • logical address / virtual address space
MMU & relocation [1]

6. Virtual memory size is determined by the CPU's address width (32-bit or 64-bit), not by the amount of RAM installed.

  • x86 (32 bits): 4 GB (2^32 bytes)
  • x86_64 (64 bits): 16 EB in principle (2^64 bytes); current CPUs implement 48-bit addressing, so e.g. Linux gives each process a 128 TB user address space

7. Page VS. page frame

  • Page: A page (or memory page, or virtual page, or logical page) is a fixed-length contiguous block of virtual memory. In other words, logical memory is divided into fixed-sized blocks. A page is normally 4 KB.
  • Page frame/frame: A frame (or memory frame, or physical page) is a fixed-length block of RAM, i.e., physical memory, which actually exists in hardware ("physical"); virtual memory, by contrast, is an abstraction invented so that memory can be managed efficiently and securely. In other words, physical memory is divided into fixed-sized blocks of the same size as a page.

8. Paging: eliminates external fragmentation, but still suffers from internal fragmentation (the unused tail of a process's last page).

  • Demand paging
  • Prepaging

9. Demand paging

  • Keep pages in secondary memory until they are required; in other words, do not load any page into main memory until it is referenced.
  • When a page is referenced for the first time, it is found only in secondary memory and must be brought into main memory.
  • After that, it may or may not remain in main memory, depending on the page replacement algorithm covered later in this tutorial.
Paging hardware [1]

10. Page table / page table entry / page-table page / page table descriptor

11. Logical address = page number + page offset

(2) Week 08: Memory Management-2

1. Page directory / segment / extent / page

2. Segmentation / segmented paging

3. Paging VS. Segmentation

  • In Paging, a process address space is broken into fixed-sized blocks called pages.
  • In Segmentation, a process address space is broken into varying-sized blocks called segments.

4. Translation lookaside buffer (TLB)

  • Composed of costly SRAM
  • Associative (content-addressable) memory
Paging hardware with TLB [1]

5. Page fault: If the referenced page is not present in main memory, the access misses; this is called a page miss or page fault.

  • The missed page must then be fetched from secondary memory. If page faults are frequent, the effective access time of the system becomes very high.

6. Page replacement algorithm

  • OPT (Optimal Page Replacement Algorithm): used as the benchmark
  • FIFO (First In, First Out) [cf. FCFS in CPU scheduling]
  • SCR (Second Chance Replacement Policy) [an upgraded version of FIFO]
  • Clock [an upgraded version of SCR]
  • NRU (Not Recently Used)
  • LRU (Least Recently Used)
  • NFU (Not Frequently Used)
  • Aging [an upgraded version of NFU that approximates LRU]
  • Working set model

7. Belady anomaly

  • The phenomenon in which increasing the number of page frames increases the number of page faults for certain memory access patterns.
  • This phenomenon is commonly experienced when using the first-in-first-out (FIFO) page replacement algorithm.

8. Thrashing

  • If the number of page faults approaches the number of memory references, the CPU stays busy just reading pages from secondary memory. The effective access time then degenerates toward the time needed to read one word from secondary memory, which is orders of magnitude slower than RAM.

9. Copy-on-write (COW): When a parent process creates a child process [fork()], the two initially share the same physical pages. Only when either process writes to a shared page does the OS copy that page to a new physical address, which avoids duplicating memory that is never modified.

(3) References

[1] https://zhuanlan.zhihu.com/p/351646541

(Chinese)

1. Clock page replacement algorithm

https://zhuanlan.zhihu.com/p/196478796

Yu-Cheng Kuo

CS/DS blog with C/C++/Embedded Systems/Python. Embedded Software Engineer. Email: yc.kuo.28@gmail.com