Fall 2003 Midterm Solutions
About This Midterm
For this exam, the Mean was 34, the Median 34, the Min 19, the Max 47,
and the Standard Deviation 6.3. If you have not picked up your exam, it
is available from Theresa Wan in Gates 351.
If you believe that part of your exam was graded in error, please submit
a written request for a regrade. We'd prefer that the request go to the
staff member who graded that question. Arithmetic errors should be
emailed to the staff
list and those exams can be brought to the office hours of any of the
I was looking for at least 2 advantages of preemptive scheduling. If
you only gave one unique advantage, I took off one point.
- Better utilization of resources like I/O
- Allows scheduler to provide fairness
- Allows scheduler to prevent starvation
- Allows scheduler to switch between high and low priority threads.
I wasn't looking for a description of "how" different levels of the
multi-level feedback work in terms of quantum. I was looking for the
motivation behind the entire scheme. Possible reasons are:
- Reduces context switch time when scheduling cpu-bound threads
- Improves I/O utilization
- Improves response time
- Provides fairness
If you only had 1 of these, I took off one point. If you described how
the system works, I took off 2 points.
The average response time of jobs increases when round robin is used on
jobs of equal length.
Time sharing:
- Resource is not easily shared
- Given exclusively to one entity for a specific period of time
- ex) CPU

Space sharing:
- Resource is divided up into chunks
- Different chunks are allocated to different entities at the same time
- ex) memory
Note: Saying that "space sharing is when you share space" is not a reasonable answer.
In this question, we were looking for you to realize that JIT compilation
requires a process to use resources (like CPU and memory) during execution
in order to compile the code. For full credit, you had to specify that
when the overhead of compilation was larger than the benefits accrued from
performing runtime optimizations, runtime code generation hurts performance.
I took off points if you only specified that resources were used by
compilation and said nothing about the relationship between compilation
overhead and the benefits of the resulting optimizations.
In general, many answers were accepted. A lot depended on your explanations,
and if you showed that you understood the question and had the right idea(s)
in your answer. Partial credit was given if you had the right answer mixed
in with an incorrect one.
Many students said that this was approximating LRU, or that the past
approximates the future -- that because a page was modified it must be in
use, while an unmodified page is not being used. This is not true, because
such an assumption cannot be made (code pages are never modified, but are
always being used). No credit was given for this answer.
Answer (thanks to Ian Comfort):
The original suggestion must have been based on the fact that swapping out
unmodified pages requires only a disk read (of the new page) while swapping out
modified pages requires both a disk write (of the page being replaced) and
a disk read.
Some students with the correct answer also said that it takes twice the amount
of time to evict a modified page as compared to evicting an unmodified page
(because of the extra write). This is not entirely true, because writes are
usually slower than reads -- so the time for evicting a modified page is
actually more than twice the time for evicting an unmodified page. But no
points were taken off for this small technical detail.
Aging / LRU was the key concept here. Also, this algorithm was briefly
discussed in class.
Answer (thanks to Seungbeom Kim):
The use_bit will occupy the highest-order bit, while past use_count will be
shifted down. The effect is that use_count for a more recently used page will
have a high value (MSB set), while a page not recently used will have a lower
value because the "history" of use_count will decay exponentially.
We were looking for the right general idea in an answer for this question,
and most credit was given even if all the details weren't mentioned.
Answer (thanks to Lei Zhang):
A pure paging system needs a page table to map virtual pages for the entire
address space. Adding segmentation requires page tables only for the segments
that are in use, which cover significantly less than the entire address space.
Also, it does not require contiguous memory to store page tables for the
entire address space.
Reducing external fragmentation was the key point we were looking for in this
question.
Answer (thanks to Eric Sirianni):
Yes. It would allow the memory allocated within the base-and-bound chunk to
come from non-contiguous locations in physical memory (via paging). This would
significantly reduce external fragmentation while adding only minimal
overhead.
There was no right or wrong answer for this question. It all depended on the
explanation that was given with the answer. It was key to recognize that if a
page table is shared between two processes, then all the information is
shared between the two processes, including the code and the
stack -- and hence it isn't really possible to do this.
An answer which said that this was a good idea for processes that were sharing
memory / code / libraries did not receive any credit (because as mentioned
above, the entire address space is shared). Answers that recognized that
threads share address spaces, or that these two processes would actually be
threads if they were sharing page tables, received full credit. A few answers
also mentioned fork() or vfork(), and said that the new child process shares
the same address space as the parent. These answers were also given full credit
for having the right concept.
Many answers were accepted for this part -- even those that said that no
allocation algorithm is better than any other, or that an adversary could
always construct a sequence of allocations that works for one allocation
method but not another. However, the answer we were looking for was that over
a long period of time (in the average case), first-fit will have less
fragmentation than best-fit, because best-fit leaves a lot of "sawdust".
Another perfectly valid answer would be to give an example (like the one in
class) where best-fit fails during a sequence of allocations but first-fit
succeeds.
Answer (thanks to Seungbeom Kim):
Yes. For example, if allocation requests are made so that the smallest (best)
size of the free memory block is almost always a little larger than the
requested size (instead of exactly the same), a best-fit allocator will leave
a lot of very small free memory blocks that are not very useful ("sawdust").
In this case, which we can expect to be fairly common, a first-fit allocator
can have less fragmentation.
It seems likely that the locks used in your Nachos implementation
actually protect some of Nachos' state from concurrent access. Killing
the process holding a lock and releasing the lock would violate this
protection guarantee and can leave the protected state in an
inconsistent condition, which will likely cause something to stop
working correctly.
try_again:
    load the variable into register R1
    add one to R1, putting the result into register R2
    compare-and-swap: if the variable matches R1, swap the variable with R2
    if the compare failed, goto try_again
This code works fine even when used in an interrupt handler. The
compare-and-swap is atomic with respect to interrupts; any
interleaved interrupt that changes the variable will cause the
compare-and-swap to fail and the operation to be tried again.
Starvation and deadlock are different things, so preventing deadlock
doesn't prevent starvation. In a deadlock there exists no schedule
that would end the waiting, while in starvation there exists a
possible exit path but that path is not being taken.
It never makes sense to spin on a lock on a uniprocessor, since the
lock holder cannot run to release the lock while you spin; the optimal
spin time would be zero. For a multiprocessor, you should spin for a
time equal to the blocking cost, which keeps the total cost within a
factor of two of optimal.
This question can be answered in a number of different ways. One way
is to argue that it would be easier to add multiple threads per
address space to OS-A than it would be to add address spaces to
OS-B. A plausible argument for this is that adding address spaces to
an OS would impact more of the system than simply having threads
(which are already implemented) share the same address space.
It also was acceptable to argue the concurrency of threads in the same
address space would cause more difficulties than simply subdividing
threads into different address spaces. Hence it would be easier to
start with OS-B.
Your grade was based on how well you justified your answer.
Yes. Deadlock requires waiting; it doesn't matter whether that waiting
is done with busy-waiting (i.e., spin locks) or with blocking locks.
Since the compiler gets to control the context switch, which is done
as part of a procedure call, the amount of state that needs to be
saved and restored can be smaller than for context switches between
threads, which can happen at an arbitrary point in execution.