Please note: these solutions are not an official solution set; they are based on one of the TA's exam papers from when he was a student in the class, with answers clarified as needed.
1) True: you can just increment the heap pointer as you allocate. Fragmentation can only occur when you free things in the middle of allocated space.
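The allocate-only case can be sketched as a bump allocator (hypothetical sizes; a minimal model, not a real malloc):

```python
# Sketch: a bump allocator never fragments, because allocation just
# advances a pointer; holes can only appear if something in the middle
# of the allocated region is later freed.
heap_size = 1024
brk = 0  # offset of the next free byte

def alloc(n):
    """Allocate n bytes by bumping the break pointer; no search, no holes."""
    global brk
    assert brk + n <= heap_size, "out of memory"
    start = brk
    brk += n
    return start

a = alloc(100)   # occupies bytes [0, 100)
b = alloc(200)   # occupies bytes [100, 300)
# Freeing b (the most recent allocation) could simply move brk back to 100;
# freeing a instead would leave a 100-byte hole at the start: fragmentation.
```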
2) True: By decreasing main memory to increase the buffer cache, you may cause processes to need to swap more often, which can decrease performance.
3) False: It's possible that at any given time, only a small subset of the files are in use; in this case, that subset can be entirely cached throughout its use.
4) False: Connection-based networks are designed to handle such traffic well. Connectionless networking, by contrast, tends to be bursty and would exhibit that behavior.
5) Suppose the page table is currently swapped out. Then, when a TLB fault occurs, epc and badva will be set, and the TLB handler will attempt to get the translation. This will result in another fault, which then overwrites epc and badva.
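The lost-state problem in 5) can be sketched abstractly (a hypothetical register model, not real MIPS semantics: the key point is that a fault saves state by unconditional overwrite):

```python
# Sketch: taking a fault overwrites epc/badva unconditionally, so a fault
# inside the TLB handler (e.g. because the page table is swapped out)
# destroys the state needed to restart the original faulting instruction.
regs = {"epc": None, "badva": None}

def take_fault(pc, vaddr):
    # Hardware saves the faulting PC and bad address -- clobbering old values.
    regs["epc"], regs["badva"] = pc, vaddr

take_fault(pc=0x400, vaddr=0x7000)       # user code takes a TLB fault
first = dict(regs)
take_fault(pc=0x80000080, vaddr=0x9000)  # handler faults on the page table
# The original epc/badva are gone; the first fault can never be restarted.
```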
6) The Honor Code assumes that students want the policy enforced. It also assumes that students are competent to make judgements about what constitutes a policy violation. When these assumptions are valid, it can prevent an attack where a group of students try to collaborate to cheat.
7)
a) FCFS
b) Large packets can monopolize the link. You can put an upper limit on
packet size to force "time slicing".
8)
a) 51 threads required, since each thread runs for 0.02 ms, blocks for
1 ms, during which time 50 other threads can run.
b) 41 threads: 0.025 ms/thread before it blocks.
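The counts in (a) and (b) follow directly from the run/block ratio; a quick check using the figures above:

```python
# While one thread blocks for block_ms, (block_ms / run_ms) other threads
# can each get a turn on the CPU; add 1 for the blocked thread itself.
def threads_needed(run_ms, block_ms):
    return round(block_ms / run_ms) + 1

print(threads_needed(0.02, 1))    # part (a): 51
print(threads_needed(0.025, 1))   # part (b): 41
```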
c) A bad scheduler can switch threads at inappropriate times, ruining
performance. Also, [not sure]...
9)
a) It works: initiate_gc doesn't run while malloc is resetting the heap,
and it's OK to run at any other time.
b) The program might have suffered from much external fragmentation, so
compaction can put more things on the same page.
10)
a) 10 round trips/second => 4800 bytes/second => ~0.0046 MB/s
b) 100 packets = 48000 bytes; 100 packets sent = 10000 ms xfer time; 1
timeout = 1000 ms; 11 seconds total, 48000 bytes => ~0.0042 MB/s
c) 2 round trips = 300 ms; 960 bytes per .3 sec; ~ 0.0031 MB/s
d) Same as for c): send p1, wait 150 ms, resend, 50 ms later get ack,
send p2 exactly 200 ms after p1 sent, which is exactly what happened
before.
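The throughput figures above can be rechecked in one place (packet size of 480 bytes is inferred from "10 round trips/second => 4800 bytes/second"; 1 MB is taken as 2^20 bytes to match the results):

```python
MB = 1 << 20  # the solution's figures use 1 MB = 2**20 bytes

# Part (a): 10 round trips/s, one 480-byte packet per round trip.
a_bps = 10 * 480
print(a_bps / MB)            # ~0.0046 MB/s

# Part (b): 100 packets of 480 bytes; 100 ms per packet plus one
# 1000 ms timeout = 11 s total.
b_bps = (100 * 480) / 11.0
print(b_bps / MB)            # ~0.0042 MB/s

# Parts (c)/(d): 960 bytes delivered every 0.3 s.
c_bps = 960 / 0.3
print(c_bps / MB)            # ~0.0031 MB/s
```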
12)
a) - Requires 2^32 bytes of actual data: 1048576 data blocks.
- 1024 block ptrs per indirect block.
- 1048576 blocks / 1024 = 1024 indirect blocks needed.
- 1 double-indirect block needed.
- 256 bytes for inode.
- Total: 4299165952 bytes.
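The total in (a) can be recomputed from the block counts above (4096-byte blocks and 4-byte block pointers, which give the 1024 pointers per indirect block):

```python
# Max file: 2**32 bytes of data, plus the indirect blocks and inode
# needed to address it.
BLOCK = 4096
data_blocks = 2**32 // BLOCK           # 1048576 data blocks
indirect_blocks = data_blocks // 1024  # 1024 indirect blocks
double_indirect = 1                    # one double-indirect block
total = (data_blocks + indirect_blocks + double_indirect) * BLOCK + 256
print(total)  # 4299165952 bytes, matching the answer above
```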
b) To read: 1 read required for the inode, 1 to read the double indirect
block, 1 to read indirect, 1 to read the data: 4 disk accesses. To
write: 4 reads, plus 1 write = 5 disk accesses.
c) Allysa is right: the high-order bits denote the subnet, so hosts on the
same subnet have numerically close addresses. So, bringing a block into
the cache brings in 4096 bytes of entries for hosts on the same subnet.
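Concretely, if each ip:eth mapping takes 10 bytes (4-byte IP plus 6-byte Ethernet address, the sizes given in 13(c)), one cached block covers hundreds of adjacent hosts:

```python
# Mappings per 4096-byte cache block, assuming 10-byte ip:eth entries.
entries_per_block = 4096 // 10
print(entries_per_block)  # 409 numerically adjacent (same-subnet) hosts
```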
13)
a) A crash can point metadata at uninitialized blocks. We can use write
ahead logging for recovery. [Full solution requires an explanation of
write-ahead logging.]
b) He may be trying to write an entry that straddles 2 sectors, so
that 4 bytes are written to one sector but the other 2 bytes are never
written to the other sector (or vice versa).
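This can be made concrete (assuming 512-byte sectors; the 6-byte entry size is implied by the 4 + 2 byte split above):

```python
# With 512-byte sectors and 6-byte entries, 512 % 6 != 0, so some entry
# must straddle a sector boundary. A crash between the two sector writes
# leaves that entry half-updated.
SECTOR, ENTRY = 512, 6
straddlers = [i for i in range(200)
              if (i * ENTRY) // SECTOR != (i * ENTRY + ENTRY - 1) // SECTOR]
print(straddlers[0])  # 85: bytes 510-515 span the first sector boundary
```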
c) You can store the ip and eth addresses in NVRAM, which together take
10 bytes. Then, after a system crash, you can force an update of the
ip:eth pair to the file.