Fall 2001 Midterm Solutions
Note that Question 3 (about multilevel feedback queue scheduling) has been changed to an extra credit question. Your score on it can thus only raise, never lower, your final grade.
Ref    | LRU-Gl | MIN-Gl | FIFO-Loc
P1-A   | A**    | A**    | A**
P1-B   | AB*    | AB*    | AB*
P2-C   | ABC    | ABC    | ABC
P2-F   | FBC    | ABF    | ABF
P2-E   | FEC    | ABE    | ABE
P1-A   | FEA    | ABE    | ABE
P2-D   | DEA    | ADE    | ABD
P2-E   | DEA    | ADE    | ABE
P2-D   | DEA    | ADE    | ABD
P1-A   | DEA    | ADE    | ABD
P1-B   | DBA    | ABD    | ABD
Faults | 8      | 7      | 8
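The global-LRU column can be reproduced with a short simulation. This is a sketch, not part of the original solutions; it assumes 3 global frames and the reference string A, B, C, F, E, A, D, E, D, A, B from the table:

```python
# Sketch: simulate global LRU with 3 frames on the reference string above
# and count page faults.
from collections import OrderedDict

def simulate_lru(refs, nframes=3):
    frames = OrderedDict()  # page -> None, ordered least- to most-recently used
    faults = 0
    for page in refs:
        if page in frames:
            frames.move_to_end(page)        # hit: mark most recently used
        else:
            faults += 1                     # miss: page fault
            if len(frames) >= nframes:
                frames.popitem(last=False)  # evict least recently used
            frames[page] = None
    return faults, list(frames)

refs = ["A", "B", "C", "F", "E", "A", "D", "E", "D", "A", "B"]
faults, final = simulate_lru(refs)
print(faults, final)  # 8 ['D', 'A', 'B'] -- matches the final DBA entry
```

The same loop with a different eviction rule would reproduce the MIN and local-FIFO columns.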
First calculate the number of page table entries (2^31 / 2^12 = 2^19) and multiply by four bytes per entry to get your answer: 2MB.
This question asks how big the page table is that maps the page table in part a. In other words, how big a page table do you need to map 2MB of memory? 2MB / 4KB * 4 bytes = 2KB.
20 bits of physical page number and 12 bits of offset give 32 bits of physical addressing, or 4GB of memory.
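The arithmetic in the three parts above can be checked in a few lines. A sketch; the parameters (2^31-byte virtual address space, 4KB pages, 4-byte PTEs, 20-bit physical page numbers) come from the problem:

```python
# Sketch: verify the page-table sizes and physical memory size quoted above.
PAGE_SIZE = 2 ** 12   # 4KB pages
PTE_SIZE = 4          # 4-byte page table entries

entries = 2 ** 31 // PAGE_SIZE                 # number of PTEs: 2^19
top_level = entries * PTE_SIZE                 # part a: 2MB page table
second = (top_level // PAGE_SIZE) * PTE_SIZE   # part b: 2KB table mapping it

physical = 2 ** (20 + 12)                      # part c: 20-bit PPN + 12-bit offset

print(top_level, second, physical)  # 2097152 2048 4294967296
```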
The OS must ensure that two processes with the same ASID are not in the TLB at the same time. The simplest solution is to flush the TLB when switching to a process whose ASID is already in the TLB.
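The bookkeeping this implies can be sketched as follows. The `TLBManager` class and its fields are hypothetical, purely to illustrate the policy: track which process currently owns each ASID, and flush when an incoming process would reuse an ASID that another process's entries still occupy:

```python
# Sketch (hypothetical bookkeeping, not from the solutions): the OS records
# which process "owns" each ASID in the TLB; on a context switch it flushes
# if the incoming process's ASID is currently owned by a different process.
class TLBManager:
    def __init__(self):
        self.asid_owner = {}   # asid -> pid whose entries use it in the TLB
        self.flushes = 0

    def context_switch(self, pid, asid):
        owner = self.asid_owner.get(asid)
        if owner is not None and owner != pid:
            self.flushes += 1        # another process's entries share this ASID
            self.asid_owner = {}     # model a full TLB flush
        self.asid_owner[asid] = pid

tlb = TLBManager()
tlb.context_switch(pid=1, asid=7)
tlb.context_switch(pid=2, asid=7)   # ASID collision: must flush
print(tlb.flushes)  # 1
```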
The point of this question was to determine if you understand how multi-level feedback queue (MLFQ) scheduling works. To answer the problem you needed to make some assumptions and the answer you got depended greatly on the assumptions.
The grading of this problem looked for signs you understood multi-level feedback queues. Signs included a schedule that showed priorities dropping and timeslices expanding for jobs that filled the timeslice. This could also be shown in the stated assumptions or scratch notes used to compute the schedule.
Unfortunately, I believe this problem failed to achieve its goal of determining if you understand multi-level feedback queues. Pretty much any schedule of the jobs can be justified by tweaking the assumptions. It was not possible to distinguish an answer that had different, but unstated, assumptions from an answer that had no knowledge of MLFQs.
Because of this I have recorded the grade for problem #3 separately in the grade spreadsheet. I intend to treat this problem as extra credit in the final grade (i.e., it can only help your final grade, not hurt it).
No, this will not work. Semaphores have a count associated with them, while condition variables do not. If we take this approach, a scenario such as the following is possible: a condition variable is signaled x times, causing the next x wait calls on it to be no-ops. This is not how condition variables are supposed to behave. They should cause a waiting thread to sleep regardless of how many times they have been signaled beforehand.
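The failure mode is easy to demonstrate. A sketch using Python's `threading` module (not part of the original solutions): a semaphore "remembers" V() operations, so P() calls that arrive after stored signals return immediately instead of blocking the way a condition variable's wait must:

```python
# Sketch: why a semaphore's V()/P() cannot substitute for signal()/wait().
# A semaphore remembers signals; a condition variable must not.
import threading

sem = threading.Semaphore(0)

# "Signal" twice before anyone is waiting.
sem.release()   # V()
sem.release()   # V()

# The next two "waits" are no-ops instead of putting the caller to sleep:
got_first = sem.acquire(blocking=False)   # P() -- consumes a stored signal
got_second = sem.acquire(blocking=False)  # P() -- consumes the other
got_third = sem.acquire(blocking=False)   # would block: no signals stored

print(got_first, got_second, got_third)  # True True False
```

A real condition variable's wait would block in all three cases, no matter how many signals preceded it.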
0 | Answered yes
1 | Answered no
2 | Justified answer of no and justification was on the right track, but too general or vague |
2 | Justified answer of no by stating that broadcast would not work (question asked if signal using V() and wait using P() would work; it did not mention broadcast) |
5 | Justified answer of no correctly |
No, this will not work. Deadlock can occur in the presence of any limited nonpreemptible shared resources in the system. For example, a producer thread and a consumer thread sharing a buffer can deadlock if the producer is waiting for more bytes in the buffer to be consumed before producing, but the consumer is waiting for more bytes to be produced before consuming.
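The general point, that locks and monitors by themselves do not prevent deadlock, can be shown with the classic two-lock example rather than the producer/consumer one. A sketch (not from the original solutions); the events just force the fatal interleaving so the deadlock happens deterministically:

```python
# Sketch: two threads acquire two locks in opposite orders and deadlock.
# Events ensure each thread holds its first lock before trying the second.
# Illustrative only -- the daemon threads never finish.
import threading

lock_a, lock_b = threading.Lock(), threading.Lock()
t1_has_a, t2_has_b = threading.Event(), threading.Event()

def thread1():
    with lock_a:
        t1_has_a.set()
        t2_has_b.wait()      # wait until thread2 holds lock_b
        with lock_b:         # blocks forever: thread2 holds lock_b
            pass

def thread2():
    with lock_b:
        t2_has_b.set()
        t1_has_a.wait()      # wait until thread1 holds lock_a
        with lock_a:         # blocks forever: thread1 holds lock_a
            pass

t1 = threading.Thread(target=thread1, daemon=True)
t2 = threading.Thread(target=thread2, daemon=True)
t1.start(); t2.start()
t1.join(timeout=0.5); t2.join(timeout=0.5)
print(t1.is_alive(), t2.is_alive())  # True True -- both stuck: deadlock
```

Each thread holds one nonpreemptible resource while waiting for the other's, which is exactly the circular wait in the producer/consumer scenario above.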
0 | Answered yes
1 | Answered yes, but mentioned limited shared resources
1 | Answered no
2 | Justified answer of no and justification was on the right track, but too general or vague |
2 | Justified answer of no by stating that semaphores could also cause deadlock (question intended "locks" to mean "all synchronization primitives") |
3 | Justified answer of no incorrectly, but mentioned limited shared resources |
5 | Justified answer of no correctly |
Justifications involving one thread were not accepted because deadlock by definition involves more than one thread.
(1) or (2) were acceptable answers. During the first pass, the linker builds the global symbol table, and duplicate entries in this symbol table (such as one caused by the duplicate definition of an external variable) can be detected during or at the end of the first pass.
(1) or (2) were acceptable answers. As the linker goes through each object file, it learns the size of each segment in memory, and can therefore determine whether a program fits into the virtual addressable space by the end of the first pass.
(4). The linker knows what all the definitions and references are at the end of the first pass and resolves all refs to defs during the second pass. Therefore, it only knows what the unreferenced defs (such as an external variable that is def'd but never used) are at the end of the second pass.
(3). The linker resolves all refs to defs during the second pass, so a ref that cannot be resolved to a def (such as a ref to a nonexistent external variable) is detected in this phase.
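The four answers above fall out of a toy model of the two passes. This is a sketch, not a real linker; the object-file dictionaries and the `link` function are invented for illustration (the startup-code-style reference to `main` keeps the example's entry point from looking unused):

```python
# Sketch of a toy two-pass linker. Pass 1 collects definitions, so duplicate
# defs (error 1) and total program size (error 2) are known by its end.
# Pass 2 resolves references, so unresolved refs (error 3) surface here, and
# only after it do we know which defs were never referenced (error 4).

def link(object_files, address_space=2 ** 31):
    defs, duplicates, total_size = {}, [], 0

    # Pass 1: build the global symbol table.
    for obj in object_files:
        total_size += obj["size"]
        for sym in obj["defs"]:
            if sym in defs:
                duplicates.append(sym)       # error 1: multiply-defined symbol
            defs[sym] = obj["name"]
    fits = total_size <= address_space       # error 2: program too large?

    # Pass 2: resolve references to definitions.
    referenced, unresolved = set(), []
    for obj in object_files:
        for sym in obj["refs"]:
            if sym in defs:
                referenced.add(sym)
            else:
                unresolved.append(sym)       # error 3: undefined reference
    unused = set(defs) - referenced          # error 4: known only after pass 2
    return duplicates, fits, unresolved, unused

objs = [
    {"name": "a.o", "size": 100, "defs": ["main", "x"], "refs": ["helper"]},
    {"name": "b.o", "size": 200, "defs": ["helper"], "refs": ["main"]},
]
print(link(objs))  # ([], True, [], {'x'}) -- x is def'd but never referenced
```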
0 | Incorrect answer
0.5 | Correct answer, but incorrect or no justification |
1 | Correct answer and justification was on the right track, but too general or vague |
2 | Correct answer and justification |
CS 140 Homepage
cs140-staff@cs.stanford.edu
Last modified: Thu Feb 7 14:57:11 PST 2002