For this exam, the mean was 35.2 and the standard deviation was 7.4. If you
have not picked up your exam, it is available from Theresa Wan in Gates
351.
Problem    Grader
1, 2       Mendel
3, 4       Ben
5, 6, 7    Sameer
If you believe that part of your exam was graded in error, please submit
a written request for a regrade; we'd prefer that the request go to the
staff member who graded that question. Arithmetic errors should be
emailed to the staff list, and those exams can be brought to the office
hours of any staff member.
Problem 1
Part (a)
Answer:
One obvious extension would be to have the CPU scheduler take into account
the memory needs of the processes it schedules to avoid thrashing
caused by running too many processes at once. An example would be
using a working set/balance set algorithm to estimate the memory needs of
a process and deactivate processes as needed to avoid overloading
memory. If thrashing is caused by a single job that is too large to
fit into memory, we are going to thrash whenever we run it, regardless
of the scheduling algorithm we use.
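As a minimal sketch of the balance-set idea in C: the proc structure, the
ws_estimate field, and mem_available() are all assumptions invented for this
illustration, not a real kernel API.

    #include <stddef.h>

    struct proc {
        size_t ws_estimate;  /* estimated working-set size, in bytes */
        int    active;       /* 1 if in the balance set (may run)    */
    };

    /* Hypothetical: how much physical memory is currently free. */
    static size_t mem_available(void) { return 64 * 1024 * 1024; }

    /* Admit processes until memory would be oversubscribed;
     * deactivate the rest so the active set fits in memory. */
    static void balance_set(struct proc *procs, int n) {
        size_t budget = mem_available();
        for (int i = 0; i < n; i++) {
            if (procs[i].ws_estimate <= budget) {
                procs[i].active = 1;
                budget -= procs[i].ws_estimate;
            } else {
                procs[i].active = 0;   /* deactivated (swapped out) */
            }
        }
    }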
Part (b)
Answer:
Up. Suffering many page faults will cause a compute-bound job to have to
wait for pages to be transferred from disk, effectively turning it into an
I/O-bound job from the point of view of the scheduler. Since it will
spend much of its time blocked waiting for a page-in, a Unix-like
scheduler will raise its priority.
Part (c)
Answer:
Priority inversion occurs when a lower-priority job runs instead of a
higher-priority job due to events outside of the CPU scheduler. Since the
virtual memory subsystem can take memory from one process and give it to
another, it can effectively override the scheduler's priorities and cause a
high-priority job to block while a low-priority job runs. So yes, a VM
subsystem can cause priority inversion.
Problem 2
The working set of a process is the set of pages touched in the last T
seconds of the process's execution. It is a function of the memory reference
pattern of the process. Adding physical memory to the system might cause the
process to run faster, but it will not alter the memory reference pattern,
and hence the working set will stay unchanged.
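In symbols (a standard formulation added here for clarity; the notation
W(t, T) does not appear in the original answer):

    W(t, T) = { p : page p was referenced during the interval (t - T, t] }

Adding physical memory changes neither the reference string nor T, so
W(t, T) is unchanged.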
Problem 3
Yes, it is possible. One way to do it is to use one instance of the clock
algorithm for each process. In the clock algorithm, the pages in
a process are arranged in a circular queue. When a page must be
evicted, a pointer or "hand" traverses the queue. If a page it
encounters has its "use" bit set, it clears the bit and continues to
the next page. Otherwise, it chooses that page for eviction.
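As a hedged sketch, one way the per-process clock could look in C; the
clock_state struct and its fields are assumptions made for this illustration.

    /* Per-process clock state: each process sweeps only its own
     * frames, so replacement decisions stay local (an LRU approximation). */
    struct clock_state {
        int *use_bit;   /* "use" (referenced) bit for each frame */
        int  nframes;   /* number of frames this process owns    */
        int  hand;      /* current position of the clock hand    */
    };

    /* Return the index of the frame this process should evict. */
    static int clock_evict(struct clock_state *c) {
        for (;;) {
            if (c->use_bit[c->hand]) {
                c->use_bit[c->hand] = 0;              /* second chance */
                c->hand = (c->hand + 1) % c->nframes;
            } else {
                int victim = c->hand;
                c->hand = (c->hand + 1) % c->nframes;
                return victim;                        /* evict this frame */
            }
        }
    }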
-3 inefficiently searching a global list for local pages
-2 no description of algorithm or egregious lack of detail
-7 claim that local LRU not possible
Problem 4
Part (a)
Answer:
Static data within a single file is laid out by the compiler. The linker
lays out static data within a whole program or library, and it also
decides the location (the addresses) of the static data.
-1 didn't mention linker's role in layout
-2 each part wrong
-3 max
Part (b)
Answer:
Dynamically allocated data is laid out and located at run time by the
memory allocator (e.g., malloc).
-2 each part wrong
-3 max
Part (c)
Answer:
The compiler lays out local variables on the stack and the OS decides
where to locate the top of the stack.
-2 each part wrong
-3 max
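To make all three parts concrete, here is a small hedged C example; the
names table, counts, and buf are invented for illustration.

    #include <stdlib.h>

    int table[100];               /* static data: the compiler lays it out
                                     within this file; the linker lays it
                                     out within the program and decides
                                     its location */

    int example(void) {
        int counts[10] = {0};     /* stack data: the compiler lays it out
                                     within the frame; the OS decides where
                                     the stack is located */
        int *buf = malloc(64);    /* heap data: laid out and located at
                                     run time by the allocator */
        counts[0] = table[0];
        free(buf);
        return counts[0];
    }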
Problem 5
Numerous students thought that "processor affinity CPU scheduling" had to
do with CPU-bound processes, Shortest-Time-to-Completion-First, etc. This
was not the case. Processor affinity CPU scheduling means preferentially
scheduling a process on the same processor it ran on before.
Some students mentioned that such scheduling might be simpler, while
others said it might be more complex. Although we were not looking for
this, some credit was given if the explanation accompanying that claim
made sense.
Answer (thanks to Daniel Parkes):
The advantages of processor affinity scheduling come mostly from the
ability to preserve caches. If a process is always scheduled on the same
CPU, its caches will already be warm, and performance will improve.
Disadvantages include the complexity of load balancing the CPUs so that
they are fully utilized. If several processes are always scheduled on the
same CPU, then other CPUs may become idle.
Problem 6
Some students thought that the condition was "Hold and Wait"; they
received partial credit if they gave a sensible explanation along with
this.
Answer (thanks to Daniel Parkes):
Deadlock requires four conditions to hold true:
- Limited Access To Resources
- No Preemption
- Hold and Wait
- Circularity In Dependency Graph
Rank ordering allows us to avoid circularity by ensuring that resources
are always obtained in a certain order. For example, the rank-ordering
algorithm could require that resources always be acquired in ascending
rank order. This breaks the circularity: a process can never hold a
high-ranking resource while waiting for a low-ranking one, so the
dependency link from a high-ranking resource to a low-ranking resource
is broken.
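As a minimal sketch in C (using POSIX threads; the lock names and rank
assignments are assumptions for this illustration), ascending-rank
acquisition looks like:

    #include <pthread.h>

    /* Two resources with fixed ranks; every thread that needs both
     * must lock them in ascending rank order. */
    pthread_mutex_t lock_a = PTHREAD_MUTEX_INITIALIZER;   /* rank 1 */
    pthread_mutex_t lock_b = PTHREAD_MUTEX_INITIALIZER;   /* rank 2 */

    void use_both(void) {
        pthread_mutex_lock(&lock_a);    /* rank 1 first ... */
        pthread_mutex_lock(&lock_b);    /* ... then rank 2  */
        /* ... use both resources ... */
        pthread_mutex_unlock(&lock_b);
        pthread_mutex_unlock(&lock_a);
    }

Because no thread ever waits for a lower-ranked lock while holding a
higher-ranked one, no cycle can form in the dependency graph.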
Problem 7
It was important to recognize that "I++" is actually broken up into three
instructions (read, increment, write), and is not atomic. Full credit was
given only if this detail was mentioned, along with some example of how
this piece of code might generate incorrect or undesired results. Just
saying "this is a race condition" was not enough.
Answer (thanks to Ozgun Erdogan):
The "I++" operation is executed as: a read from the memory, an increment
and a write back to memory. If you don't provide a way to make these
three instructions atomic, this piece of code would lead to race
conditions, and we may get indeterministic results (each execution may
lead to different output).
For example, suppose we have two threads, T1 and T2, and I=0.
1. If T1 and T2 execute serially, I=2 at the end of the execution.
2. If the operations interleave as T1 reads; T1 increments; T2 reads; T2
increments; T1 writes; T2 writes, then I=1.
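As a hedged sketch in C (using POSIX threads; the variable and function
names are invented for this example), here is one standard fix: a mutex
around the read-increment-write sequence.

    #include <pthread.h>

    int I = 0;                              /* shared counter */
    pthread_mutex_t I_lock = PTHREAD_MUTEX_INITIALIZER;

    /* Safe increment: the lock makes the read, increment, and write
     * execute atomically with respect to other threads. Without the
     * lock/unlock pair this is exactly the racy I++ described above. */
    void *safe_inc(void *arg) {
        (void)arg;
        pthread_mutex_lock(&I_lock);
        I++;
        pthread_mutex_unlock(&I_lock);
        return NULL;
    }

    int main(void) {
        pthread_t t1, t2;
        pthread_create(&t1, NULL, safe_inc, NULL);
        pthread_create(&t2, NULL, safe_inc, NULL);
        pthread_join(t1, NULL);
        pthread_join(t2, NULL);
        return I == 2 ? 0 : 1;   /* with the lock, I is always 2 */
    }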