Winter 2003 Midterm Solutions

For this exam, the mean was 30.4, the median 31, and the std dev 6.87. If you have not picked up your exam, it is available from Chris Lilly in Gates 305.

Problems 1, 2, 3, 4: John
Problems 5, 6, 11, 12: Mendel
Problems 7, 8, 9, 10: Paul

If you believe that part of your exam was graded in error, please submit a written request for a regrade. We'd prefer that the request go to the staff member who graded that question. Arithmetic errors should be e-mailed to the staff list and those exams can be brought to the office hours of any of the staff.

## Problem 1

No, deadlock is not possible. Each process will request first one tape drive, and then ask for a second. Since there are three processes and four drives, at least one process must be able to get two drives (by the pigeon hole principle - if every process has only one drive, only three drives will be used). That process with two drives will be satisfied, because the processes need a maximum of two drives. Since a process in the system is making progress, the system is not deadlocked. Eventually the satisfied process will complete, and the other two processes will get two drives and make progress.
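The pigeonhole argument can be checked exhaustively. This is a hypothetical sketch (not part of the original solution) that enumerates every allocation of the four drives among the three processes and confirms no state exists where every process is still waiting and no drive is free:

```python
from itertools import product

# Each of 3 processes needs 2 drives; there are 4 drives total.
# Enumerate every allocation where each process holds 0, 1, or 2 drives.
deadlock_possible = False
for held in product(range(3), repeat=3):
    if sum(held) > 4:
        continue  # can't allocate more drives than exist
    free = 4 - sum(held)
    # A deadlock state would need every process still waiting
    # (holding fewer than 2 drives) with no free drive to hand out.
    if all(h < 2 for h in held) and free == 0:
        deadlock_possible = True

print(deadlock_possible)  # False
```

If every process holds at most one drive, at most three drives are in use, so a fourth is always free; the loop confirms no deadlock state is reachable.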

## Problem 2

This solution was found to be wrong in Fall 2005. For the correct answer, see the Fall 2005 midterm solutions.

No, this will not change the working set of a process. The working set is defined as the set of pages touched by a process in the past T seconds. Each process in a given time period T will run the same code, touching the same pages, independent of whether the libraries are statically- or dynamically-linked (since the same lib will be the same size, static or dynamic). Thus the set of pages touched will not change, so the working set and working set size will not change either.

Note that with dynamically linked libraries, the libs are shared and so the working sets of several processes will overlap (if they use the same libs). The balance set (the superset of the working sets of the processes being run) will decrease with dynamic libs because of this overlap, but the working set of each process will be the same.

The problem with this solution is that it assumes the code layout will be the same with both static and dynamic linking. This is unlikely to be the case. When statically linking, the routines of the library that are used by the program are packed together in the pages of the code segment in the object file. When dynamically linking, the code segment of the library (including all the routines, both used and unused by the program) is mapped into the process. The most likely result is that you will end up touching more pages, since the in-use routines will be spread out among more pages.

## Problem 3

Yes, although it's not common. A page table entry includes {vpn, pfn, status bits}, where the status bits include the use, valid, dirty, and other bits. Many systems have a 32-bit physical address space with 4K pages; in that case the pfn will be 20 bits (32 total bits - 12 bits for the 4K page). However, if we had 4MB pages, the pfn would be 10 bits (32 total bits - 22 bits of page index); assuming a 32-bit virtual address space we would also have a 10-bit vpn, and if we have just use, valid, and dirty bits the page table entry will be 10+10+3=23 bits, which is fewer than the 32 bits in the physical address space.
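The bit arithmetic above can be sketched directly. This is an illustrative helper (the function name and structure are my own, not from the exam):

```python
PHYS_BITS = 32  # physical address space
VIRT_BITS = 32  # virtual address space

def pte_bits(page_size, status_bits=3):
    # Bits of offset within a page (e.g. 4K -> 12, 4M -> 22).
    offset = page_size.bit_length() - 1
    vpn = VIRT_BITS - offset
    pfn = PHYS_BITS - offset
    return vpn + pfn + status_bits  # {vpn, pfn, status bits}

print(pte_bits(4 * 1024))         # 4K pages: 20 + 20 + 3 = 43 bits
print(pte_bits(4 * 1024 * 1024))  # 4M pages: 10 + 10 + 3 = 23 bits
```

With 4MB pages the entry is 23 bits, smaller than the 32-bit physical address, matching the answer above.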

## Problem 4

Basically, we use the valid bit to emulate a reference bit. To clear the "reference" bit, we set the valid bit in the TLB to 0. This will mean that when the user program tries to access that memory, it will take a page fault, at which point we know the page is being referenced (and can set the reference bit in our kernel page table, or alternate structure); we then set the valid bit in the TLB and the machine runs. This provides reasonable performance since we only clear the valid bit when we would be clearing the reference bit (hopefully not frequent), costing us just a soft page fault, and otherwise the machine can use the TLB entry as usual.
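The mechanism can be sketched as a small simulation. The dictionaries below are stand-ins for the hardware TLB and a kernel-side structure; none of these names come from a real system:

```python
tlb_valid = {}    # vpn -> valid bit the hardware checks
referenced = {}   # vpn -> software-maintained "reference" bit

def clear_reference(vpn):
    referenced[vpn] = False
    tlb_valid[vpn] = False     # next access will take a soft page fault

def access(vpn):
    if not tlb_valid.get(vpn, False):
        # Soft page fault: we now know the page is being referenced.
        referenced[vpn] = True
        tlb_valid[vpn] = True  # re-validate so later accesses are fast
    # ... the normal memory access proceeds here ...

clear_reference(7)
access(7)                  # faults once, records the reference
print(referenced[7])       # True
```

Only the first access after clearing pays the fault; subsequent accesses hit the valid TLB entry as usual.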

## Problem 5

The page fault frequency algorithm sets thresholds for the rate of page faults that a process takes. This rate can be expressed as page faults per instruction executed by the process. If a process's page fault rate is above a certain threshold, more memory will be given to the process, and if it is below a threshold, memory can be taken away.
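The threshold logic can be sketched as follows; the threshold values and function name are made up for illustration:

```python
HIGH = 0.01   # faults per instruction above which we add frames
LOW  = 0.001  # faults per instruction below which we reclaim frames

def adjust_frames(frames, faults, instructions):
    rate = faults / instructions
    if rate > HIGH:
        return frames + 1            # faulting too much: give more memory
    if rate < LOW:
        return max(1, frames - 1)    # plenty of memory: take some away
    return frames                    # rate is acceptable: leave it alone

print(adjust_frames(10, 500, 10_000))  # rate 0.05 > HIGH, grows to 11
print(adjust_frames(10, 5, 10_000))    # rate 0.0005 < LOW, shrinks to 9
```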

The working set algorithm defines the working set of a process to be the pages touched in the last T seconds of the process's execution. The working set algorithm keeps the working set of each process in memory.
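The definition translates directly into code. This is a sketch over a hypothetical reference trace of (time, page) pairs:

```python
def working_set(trace, now, T):
    # Pages touched in the window [now - T, now].
    return {page for t, page in trace if now - T <= t <= now}

trace = [(0, 'A'), (1, 'B'), (2, 'A'), (5, 'C'), (6, 'A')]
print(working_set(trace, now=6, T=2))  # pages touched in [4, 6]: {'A', 'C'}
```

Page 'B' was last touched at time 1, outside the window, so it has dropped out of the working set.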

## Problem 6

Sure, using LRU replacement should work well in any situation in which there is locality. There is no reason to expect that a local replacement policy would be different from a global replacement policy in this regard.

## Problem 7

The statement is true. Segmentation has less internal fragmentation because it uses variable-sized ranges that can exactly match an allocation request if there is enough contiguous free space. Because these ranges need to be contiguous in memory, segmentation suffers external fragmentation as many variable-sized allocations are made over time.

In contrast, paging fits allocations into some number of potentially discontiguous, fixed-size physical pages. The discontiguity lets paging use any free page in the system, eliminating external fragmentation. However, since allocations must be rounded up to a multiple of the page size, they often don't use the entire last page, leading to internal fragmentation.
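The internal fragmentation from rounding up to whole pages is easy to quantify; this helper is an illustrative sketch assuming 4K pages:

```python
import math

def internal_fragmentation(request, page_size=4096):
    pages = math.ceil(request / page_size)
    return pages * page_size - request   # wasted bytes in the last page

# A segment could match the request exactly; paging wastes the page tail.
print(internal_fragmentation(10_000))  # 3 pages = 12288 bytes, 2288 wasted
print(internal_fragmentation(8192))    # exact multiple of 4K: 0 wasted
```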

## Problem 8

No, busy waiting never makes sense because there can be only one running thread on a uniprocessor system. If the thread loops waiting for a resource to become available it will prevent other threads from running and freeing the resource until it is context switched out.

OR

Yes, but only in the limited context of polling for an external event that we have good reason to believe will happen soon. An example is packet arrival on an ethernet card done by ALTQ networking in FreeBSD. In other contexts it does not make sense, because if a thread loops waiting for a resource to become available it will prevent other threads from running and freeing the resource until it is context switched out.

## Problem 9

Yes, spin locks make sense on a superthreaded processor because switching between the two execution streams is "instant" and the other running thread may be holding the resource the current thread desires.

## Problem 10

An operating system can save less state on a system call than on a page fault because an application can control when it enters a system call, while it has no real control over when it takes a page fault. When an application page faults, the operating system needs to preserve all the application registers and the entire application processor state, because the application should never know the fault occurred. In contrast, applications know exactly when they are making a system call. They could treat it as a normal function call that could clobber some of the registers (\$t0-\$t9 on MIPS), relieving the operating system of the duty of saving those registers.

## Problem 11

Wait-free or non-blocking synchronization refers to a technique that deals with synchronization by detecting conflicts rather than using locks to prevent them. An example would be code that reads a data structure, modifies it, then checks to make sure the original data structure didn’t change before atomically replacing it with the modified copy.
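The read-modify-check pattern described above can be sketched as follows. The `Cell` class and its lock-based `compare_and_swap` are stand-ins for a hardware CAS instruction, not a real wait-free primitive:

```python
import threading

class Cell:
    def __init__(self, value):
        self._value = value
        self._lock = threading.Lock()

    def load(self):
        return self._value

    def compare_and_swap(self, expected, new):
        # Models an atomic CAS: install `new` only if the cell still
        # holds the exact object we read earlier.
        with self._lock:
            if self._value is expected:
                self._value = new
                return True
            return False

def append_nonblocking(cell, item):
    while True:
        old = cell.load()          # read the data structure
        new = old + [item]         # modify a private copy
        if cell.compare_and_swap(old, new):
            return                 # original unchanged: replacement succeeded
        # Someone raced us and changed the cell; retry with the fresh value.

cell = Cell([])
append_nonblocking(cell, 1)
append_nonblocking(cell, 2)
print(cell.load())  # [1, 2]
```

The key property is that a conflicting update is detected (the CAS fails) and the operation retries, rather than being prevented in advance by holding a lock over the whole read-modify-write.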

## Problem 12

An MLFQ scheduler moves jobs to lower priority queues when they use their entire timeslice. As a result, I/O-bound jobs that don't use their entire timeslice end up in different queues than CPU-bound jobs that do use their entire timeslice.
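The demotion rule can be sketched in a few lines; the queue count and function name are illustrative assumptions:

```python
NUM_QUEUES = 3  # queue 0 is highest priority

def next_queue(queue, used_full_slice):
    if used_full_slice:
        # CPU-bound behavior: demote to the next lower-priority queue.
        return min(queue + 1, NUM_QUEUES - 1)
    # I/O-bound behavior: blocked before the slice expired, keep priority.
    return queue

print(next_queue(0, used_full_slice=True))   # CPU-bound job: 0 -> 1
print(next_queue(0, used_full_slice=False))  # I/O-bound job: stays at 0
```

Repeated full-slice use drives a CPU-bound job to the bottom queue, while an I/O-bound job that always blocks early stays near the top, which is how the two classes separate.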