CS 140 Winter 2005 Midterm Solutions

Statistics

For this exam, the mean was 33 and the standard deviation 7.48.

Retrieving Exams

If you have not picked up your exam, it is available from Judy Polenta in Gates 351.

SCPD Students

If you're an SCPD student, we'll send your exam back to you through SCPD (this applies to both local and remote students).

Grading Errors

If you believe that part of your exam was graded in error, please request a regrade. Preferably, attend the office hours of the person who graded the question (shown below); as a second choice, email the staff list. Arithmetic errors can go to the staff list or to any staff member's office hours.

Problem   Grader
1         Mendel
2, 7      Adam
3, 6      Nick
4, 5      Sameer

Problem 1

Question

(10 points) In class I talked about how semaphores can be used to control the scheduling of a set of threads. The example I gave had three threads (T1,T2,T3) working together to compute print(f(x,y)):
float x, y, z;
sem Sx = 0, Sy = 0, Sz = 0;

T1()
{
  x = ...;
  V(Sx);
  y = ...; 
  V(Sy);
}

T2()
{
  P(Sx);
  P(Sy);
  z = f(x,y);
  V(Sz);
}

T3()
{
  P(Sz);
  print(z);
}

Recode the above code using monitors rather than semaphores. Show the code for routines T1(), T2(), and T3() that use monitors to get the same functionality. Do not simply emulate semaphores with monitors.

Solution

monitor printIt {
    float x, y, z;
    Boolean t1Done = FALSE, t2Done = FALSE;
    Condition t2Wait, t3Wait;

    T1() {
        x = ...;
        y = ...;
        t1Done = TRUE;
        signal(t2Wait);
    }

    T2() {
        while (!t1Done) {
            wait(t2Wait);
        }
        z = f(x,y);
        t2Done = TRUE;
        signal(t3Wait);
    }

    T3() {
        while (!t2Done) {
            wait(t3Wait);
        }
        print(z);
        t1Done = FALSE; // Reset for next time
        t2Done = FALSE;
    }
}

Note that simply replacing V(Sx) with a signal(CondX) and P(Sx) with a wait(CondX) doesn't work. Unlike semaphores, condition variables have no history. For example, should T1 run before T2, the signal in T1 would be lost because no thread is waiting yet; when T2 later waits, there is no remembered V to let it proceed, as there would be with semaphores, so the T2 thread would simply wait forever. Adding the boolean to check whether waiting is necessary at all solves this problem.

Using only a single condition variable that both T2 and T3 wait on also has a problem: since the signal done by T1 will wake only one of the threads, the system will hang if T3 is woken instead of T2.
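For those who want something they can compile, here is a sketch of the same structure using POSIX threads, with an explicit mutex standing in for the monitor lock. The values assigned to x and y, and f() itself, are placeholders rather than anything from the problem.

#include <pthread.h>
#include <stdio.h>

/* Sketch of the monitor solution with POSIX threads.  The mutex plays the
 * role of the monitor lock; f() and the values given to x and y are
 * placeholders for the real computations. */
static float x, y, z;
static int t1Done = 0, t2Done = 0;
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t t2Wait = PTHREAD_COND_INITIALIZER;
static pthread_cond_t t3Wait = PTHREAD_COND_INITIALIZER;

static float f(float a, float b) { return a + b; }    /* placeholder */

void *T1(void *arg) {
    pthread_mutex_lock(&lock);
    x = 1.0f;                        /* x = ...; */
    y = 2.0f;                        /* y = ...; */
    t1Done = 1;
    pthread_cond_signal(&t2Wait);    /* wake T2 if it is already waiting */
    pthread_mutex_unlock(&lock);
    return NULL;
}

void *T2(void *arg) {
    pthread_mutex_lock(&lock);
    while (!t1Done)                  /* the flag remembers a signal that  */
        pthread_cond_wait(&t2Wait, &lock);   /* arrived before we waited  */
    z = f(x, y);
    t2Done = 1;
    pthread_cond_signal(&t3Wait);
    pthread_mutex_unlock(&lock);
    return NULL;
}

void *T3(void *arg) {
    pthread_mutex_lock(&lock);
    while (!t2Done)
        pthread_cond_wait(&t3Wait, &lock);
    printf("%f\n", z);
    t1Done = t2Done = 0;             /* reset for next time */
    pthread_mutex_unlock(&lock);
    return NULL;
}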

Problem 2

Question

(10 points) Assume that you have an exponential feedback queue (also called a multilevel feedback queue) that has 8 queues. The highest priority queue has a timeslice of 10 milliseconds. The machine has a CPU that executes 1000 MIPS (millions of instructions per second) and a disk that responds in 15 milliseconds. Assume that you have two jobs:

JobA) A large memory job that is thrashing with a page fault occurring every 100 instructions.

JobB) An I/O intensive job that alternates reading a block from the disk and then computing for 25 milliseconds on the block.

a) Describe which queue will contain each job once the system reaches steady state.

b) If you were the owner of JobB, would you prefer the system have a global or local page replacement policy?

Be sure to justify your answer for both parts a) and b).

Solution

The queues will be 10ms, 20ms, 40ms, 80ms, etc. When a job runs for the whole slice, it gets put on a lower (longer) queue. When a job yields/blocks early it gets moved up one to a shorter queue.

a) Job A will be in the 1st queue. At 1000 MIPS its 100-instruction CPU bursts last only about 0.1 microseconds, so it will never run long enough to exhaust its timeslice and will never be put in a lower queue. Job B will alternate between the 2nd and 3rd queues: it exhausts a 20ms slice when it runs for 25ms before blocking, but cannot fill the 40ms slice.

Numbering the queues starting at 0, 1, or 8 was all fine, so long as you ended up with Job A in the "top" one and Job B moving around. 90% of answers did not say where Job B would be.
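The steady state for part (a) can be traced with a small sketch. The queue sizes and the promote/demote rule are the ones described above; comparing each whole CPU burst against the slice of whatever level the job has sunk to is a simplifying assumption about how the accounting is done.

#include <stdio.h>

/* Rough trace of part (a).  Queue i has a slice of 10 * 2^i ms.  A burst
 * is demoted past every level whose slice it exhausts, and the job is
 * promoted one level when it finally blocks for I/O before its slice
 * runs out (a simplifying assumption, not the exam's exact wording). */
static const int slice[8] = { 10, 20, 40, 80, 160, 320, 640, 1280 };

static int finish_queue(double burst_ms, int q) {
    while (q < 7 && burst_ms > slice[q])
        q++;                 /* the burst exhausts this slice: demote */
    return q;                /* the job blocks for I/O at this level  */
}

int main(void) {
    int qa = 0, qb = 0;      /* both jobs start in the top (10ms) queue */
    for (int i = 1; i <= 4; i++) {
        int fa = finish_queue(1e-4, qa);   /* Job A: 100 instr / 1000 MIPS = 0.0001 ms */
        int fb = finish_queue(25.0, qb);   /* Job B: 25 ms of computation per burst    */
        printf("burst %d: Job A blocks in queue %d, Job B blocks in queue %d\n",
               i, fa, fb);
        qa = fa > 0 ? fa - 1 : 0;          /* blocking early promotes one level */
        qb = fb > 0 ? fb - 1 : 0;
    }
    return 0;
}

With the queues numbered from 0, the trace shows Job A always blocking in queue 0, while Job B settles into restarting in queue 1 and blocking in queue 2, i.e., the 2nd and 3rd queues of the answer above.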

b) Job B would prefer a local policy. Since Job A is thrashing, under a global policy it would immediately start swapping all of B's pages out. Job B would then start thrashing as well, and the whole system would slow to a crawl.

Problem 3

Question

(8 points) Describe the type of code that would cause the linker to encounter each of the following conditions:

a) External references to the same symbol in two different object files.
b) Global definitions of the same symbol in two different object files.
c) A global definition of a symbol that has external references to two different object files.
d) A global definition with no external references in any of the object files.

Solution

Multiple answers were accepted for most parts of this problem. Please note that anything declared static was not accepted as a solution, since a static symbol is not actually global.

a) Any global variable / function referenced in two files where it is not defined (printf is one example).

b) Two files defining the same global variable / function. This is a linker error.

c) A number of different answers were accepted because the problem was open to some interpretation. One example of this is to have a global function that takes two classes defined in two different files as parameters. Another accepted answer was a global variable that is referenced in two other files.

d) An unused global variable.
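For a concrete picture, three hypothetical C files along the following lines would produce all four conditions. The file and symbol names are invented for illustration.

/* ---- file1.c -------------------------------------------------------- */
#include <stdio.h>

int counter = 0;      /* (c) one global definition, referenced externally
                         from both file2.c and file3.c below              */
int unused_global;    /* (d) a global definition nothing ever references  */

/* int bump_count;       (b) defining the same global here AND in file2.c
                          would give the linker two global definitions    */

/* ---- file2.c -------------------------------------------------------- */
#include <stdio.h>

extern int counter;   /* external reference to counter                    */

void bump(void) {
    counter++;
    printf("%d\n", counter);   /* (a) printf: an external reference that
                                  also appears in file3.c                 */
}

/* ---- file3.c -------------------------------------------------------- */
#include <stdio.h>

extern int counter;   /* a second external reference to counter           */

void show(void) {
    printf("counter is %d\n", counter);   /* (a) printf again             */
}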

Problem 4

Question

(8 points) For each of the memory management schemes below describe if you would expect to find:
(1) a translation lookaside buffer (TLB)
(2) internal fragmentation
(3) external fragmentation

The memory management schemes are:
a) Base and bound.
b) Segmentation
c) Paging
d) Segmentation and Paging.

Be sure to explain your answer.

Solution

It was important to explain your answers. Answers without any explanation received less than half credit. For parts (a) and (b), if you mentioned that a TLB was expected, then you didn't get any credit for that part.

a) A TLB is not required, because there is no real translation happening. External fragmentation is expected, because the base-and-bound regions are of variable size, and as processes start and exit we can get "holes" or "sawdust" in memory that cannot be allocated. There is no internal fragmentation, because the base and bound are set to exactly the amount of memory the process needs.

b) This is similar to part (a). A TLB is not required. External fragmentation is expected, because the segments allocated by the processes are of varying size, leaving behind smaller chunks of memory that cannot be allocated. There is no internal fragmentation because each process allocates a segment for exactly the amount of memory that it needs.

c) There is no external fragmentation, because a process allocates pages, and each page is a fixed size. However, there could be internal fragmentation if a process does not use all of the memory that it has allocated within the pages. Usually this happens in the last page that was allocated. A TLB is expected to keep a cache of virtual to physical pages to make lookups faster.

d) This is similar to part (c). Segmentation and paging is just a more sophisticated form of paging. There is no external fragmentation because we are still using pages. Internal fragmentation is expected because a process may not use all of the memory in the last page. A TLB is required to keep a cache of virtual-to-physical page translations. The segments in segmentation-with-paging are themselves made up of pages rather than contiguous regions of physical memory, so the "segmentation" does not reintroduce external fragmentation. A lot of students said that they would expect to see both internal and external fragmentation because they saw both "segmentation" and "paging". This is not true.
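To put a number on the internal fragmentation in parts (c) and (d), here is a tiny example; the 4 KB page size and 9 KB request are invented for illustration and do not come from the problem.

#include <stdio.h>

/* Internal fragmentation under paging: the last page is only partly used. */
int main(void) {
    const long page_size = 4096;
    const long request   = 9000;      /* bytes the process actually needs */

    long pages  = (request + page_size - 1) / page_size;   /* round up: 3 pages */
    long wasted = pages * page_size - request;             /* unused tail       */

    printf("%ld pages allocated, %ld bytes lost to internal fragmentation\n",
           pages, wasted);            /* prints: 3 pages, 3288 bytes */
    return 0;
}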

Problem 5

Question

(5 points) Assume that you have some code that detects when a cycle exists in the wait-for graph of a system. Does the presence of a cycle in the wait-for graph signal the presence of a deadlock in the system? Justify your answer.

Solution

There are four necessary conditions for deadlock to occur: mutual exclusion, hold and wait, no preemption, and circular wait. The presence of a cycle in the wait-for graph establishes only one of these conditions (circular wait), so by itself it does not mean that a deadlock exists in the system.

Problem 6

Question

(5 points) Describe the optimal spinlock blocking algorithm assuming that you have perfect future knowledge.

Solution

Assuming perfect future knowledge, you would know exactly how long it would take for a particular lock to become available.

if ( lock_duration < time_to_block )
{
  spin on the lock    // the lock will be free before a block would pay for itself
}
else
{
  block the process   // the lock is held too long; give up the CPU instead
}

Note that the original algorithm (without future knowledge) costs at most two times the cost of blocking, while here the maximum cost is just the cost of blocking itself. Also, time_to_block is really the cost of two context switches (one to switch the process out and one to switch it back in).
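The two-times bound mentioned above is the one you get from the no-future-knowledge strategy of spinning for at most the cost of blocking and then giving up. A rough sketch of that strategy follows; BLOCK_COST_NS is an invented figure and block_on_lock() a made-up stand-in for whatever kernel primitive actually puts the thread to sleep.

#include <stdatomic.h>
#include <time.h>

/* Spin-then-block without future knowledge: spin for at most the cost of
 * blocking, then block.  In the worst case it pays for both the spin and
 * the block, hence at most twice the cost of blocking. */
#define BLOCK_COST_NS 20000                  /* assumed cost of blocking (two context switches) */

extern void block_on_lock(atomic_flag *l);   /* hypothetical blocking primitive */

static long long now_ns(void) {
    struct timespec ts;
    clock_gettime(CLOCK_MONOTONIC, &ts);
    return ts.tv_sec * 1000000000LL + ts.tv_nsec;
}

void acquire(atomic_flag *l) {
    long long start = now_ns();
    while (atomic_flag_test_and_set(l)) {            /* spin on the lock        */
        if (now_ns() - start >= BLOCK_COST_NS) {      /* spun as long as a block */
            block_on_lock(l);                         /* costs: give up the CPU  */
            return;
        }
    }
    /* got the lock while spinning: paid less than the cost of blocking */
}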

Also, spinning is in general only useful on multiprocessor systems; on a single-processor system it can make sense only in a few cases involving hardware interrupts.

Problem 7

Question

(4 points) Is it possible for a CPU scheduler with a 100-millisecond timeslice to spend over half its time in the OS context switch code? Assume that it takes the OS 1 millisecond to context switch the CPU. Justify your answer.

Solution

Yes. The context switch (scheduler) takes 1ms, so if on average a job yields or blocks after running for less than 1ms, more than half the CPU time will be spent switching tasks. For example, if each job runs only 0.5ms before blocking, the CPU spends 1ms switching for every 0.5ms of work, i.e., two thirds of its time in the context switch code.

Examples are two threads busy waiting, many jobs doing I/O, etc. The CPU having nothing to do (the idle thread runs), or time "waiting" for page faults (waiting/paging doesn't count as switching), are not valid answers.


cs140-win0405-staff@lists.stanford.edu