Fall 2005 Midterm Solutions
About This Midterm
For this exam, the mean was 31.54 and the standard deviation 4.69. If you have not picked up your exam, it is available from Judy Polenta in Gates 351.
Problems | Grader
1, 2     | Jorge
3, 4     | Mendel
5, 6     | Ben
If you believe that part of your exam was graded in error, please submit a written request for a regrade. We'd prefer that the request go to the staff member who graded that question. Arithmetic errors should be emailed to the staff list, and those exams can be brought to the office hours of any staff member.
Problem 1
Part (a)
Answer:
The 1-bit clock has only a use bit, which is set when the page is referenced and cleared as the clock hand sweeps the list of pages; to approximate LRU, eviction is done by a second sweep "hand" that runs ahead of the reset "hand". An 8-bit clock algorithm instead keeps an 8-bit use count per page, giving many gradations of recency. On each sweep, instead of just clearing the use bit, we age the count:
use_count = (use_bit << (n-1)) | (use_count >> 1)
(with n = 8, the current use bit is shifted into the high-order bit, and older references decay exponentially). We then evict the page with the lowest use count. This yields a better approximation of LRU.
Part (b)
Answer:
This makes sense. Regardless of the number of bits in the clock, there are reasons to choose local over global replacement. A local replacement policy may be chosen to prevent a process from paging out other processes' pages while it has the CPU; this reduces the number of pages that must be reloaded when a process is next scheduled, and keeps a process that touches many pages from evicting its neighbors' pages. The 8-bit clock simply improves the accuracy of the LRU approximation, independent of the choice of local versus global replacement.
Problem 2
Part (a)
Answer:
The working set is expected to increase. The reason is the layout of the DLLs: because they are shared by several processes, their functions are not arranged to minimize the pages touched by any particular process. When a dynamically linked function is called, the entire page it resides on is loaded. Therefore, in expectation, the working set of a process increases with dynamic linking versus static linking.
Part (b)
Answer:
The working set is defined as the set of pages accessed by the process in the last T ms. Such a pattern might arise as follows: while executing a given module, the process uses a roughly constant set of pages, causing a plateau. When the process then starts running a new module, it may still use the same number of pages, but since the pages are different, the working set just after the transition is the union of the pages used by both modules, causing a spike. After some time, the working set includes only pages used by the later module, since the earlier module has not run in the last T ms, causing another plateau.
Problem 3
Answer:
One of the key differences between static and dynamic linking is that with dynamic linking, symbol information must be kept around at runtime to perform the linking. This additional information, such as the names of the routines being called, helps in reverse engineering. Statically linked binaries can be fully stripped of their symbol tables, so less information is available, making reverse engineering more difficult.
Problem 4
Answer:
If the OS kernel does not run in the address space of the user process, then it will not be possible to directly use the user’s virtual address to access the process’ memory. Instead, the kernel can translate the user’s virtual address to a physical address using the page table the kernel maintains for the process. That physical address can be mapped into the kernel address space to access the process’ memory. In fact, our Pintos kernel already keeps every page of physical memory mapped into the kernel (starting at PHYS_BASE) so we need only translate to a physical address and index into this mapped version of the physical memory.
Problem 5
Part (a)
Answer:
The priority of a thread in a lottery system is determined by how many tickets it has, which determines how likely it is to run. For thread A to donate priority to thread B, thread A can donate some of its tickets to thread B, hence increasing the chance that B will run.
Part (b)
Answer:
A long term scheduler can control the degree of multiprogramming of a system, specifically how many processes are resident in memory. The long term scheduler can choose wisely to avoid thrashing, and maintain a good mix of I/O bound and CPU bound processes.
(Many people talked about the difference between the two schedulers, i.e., frequency of execution, but didn’t explain the purpose of the long-term scheduler. Mentioning conservation of memory/thrashing avoidance was required to receive full credit.)
Part (c)
Answer:
Higher priority.
A thrashing process spends more time paging than running on the CPU. Hence it is an I/O bound process, and will rarely use up its time slice. The Unix scheduler raises the priority of threads that don’t use up their time slice, similar to the Pintos MLFQS implementation.
Problem 6
Answer:
The simplest solution:
init(sema1,0); //initialize sema1 to 0
init(sema2,0); //initialize sema2 to 0
Thread T1      | Thread T2      | Thread T3      | Thread T4
               | down(sema1)    | down(sema1)    | down(sema2)
               |                |                | down(sema2)
// computation | // computation | // computation | // computation
up(sema1)      | up(sema2)      | up(sema2)      |
up(sema1)      |                |                |
Most people did some variation of this with three semaphores and received full credit. Some people used four and were deducted one point for not finding a simpler solution.