CSE 330 - Operating Systems
Practice Homework
No deadline, do not submit


There are four processes executing concurrently. Process P0 is in an infinite loop, incrementing the value of the variable x (x is initialized to 0). P0 is the only process that changes the value of x.

The rest of the processes Pi (1 <= i <= 3) monitor the value of x. Whenever x reaches a value divisible by i, Pi prints that value. For example, P3 will print the sequence 3 6 9 12 … as the value of x reaches 3, 6, 9, 12, and so on.

Write the code for all four processes using semaphores. Note that P1 through P3 should be identical (apart from the value of i), and it is Pi, not P0, that decides whether x is to be printed.
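One possible arrangement (a sketch, not the only correct answer): P0 signals every Pi after each increment and then waits for all of them to acknowledge before incrementing again, so each Pi inspects a stable value of x. The names go, ack, and printed are ours, and the infinite loop is bounded at N = 12 steps so the sketch terminates:

```python
import threading

N = 12                                                # bound on the "infinite" loop
x = 0
go = {i: threading.Semaphore(0) for i in (1, 2, 3)}   # P0 -> Pi: a new x is ready
ack = threading.Semaphore(0)                          # Pi -> P0: done inspecting x
printed = {1: [], 2: [], 3: []}                       # stands in for printing

def p0():
    global x
    for _ in range(N):
        x += 1                    # P0 is the only process that changes x
        for i in (1, 2, 3):
            go[i].release()       # V(go[i]): let each Pi inspect x
        for _ in (1, 2, 3):
            ack.acquire()         # P(ack): wait until every Pi has looked

def pi(i):
    for _ in range(N):
        go[i].acquire()           # P(go[i]): wait for a new value of x
        if x % i == 0:            # Pi itself decides whether to print
            printed[i].append(x)
        ack.release()             # V(ack)

threads = [threading.Thread(target=p0)]
threads += [threading.Thread(target=pi, args=(i,)) for i in (1, 2, 3)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(printed[3])                 # P3's output: [3, 6, 9, 12]
```

The P1–P3 code is identical except for the value of i, and the divisibility test lives entirely in Pi, as required.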

A synchronization mechanism consists of two atomic routines, ENQ(r) and DEQ(r). "r" is a resource variable that has two fields: inuse (a boolean) and queue (a queue of processes waiting to acquire the resource). The definitions of ENQ and DEQ are:

 ENQ(r) : if r.inuse then begin
                insert current process in r.queue;
                block current process
           end
           else r.inuse = true;


 DEQ(r) : if r.queue == nil then r.inuse = false
           else delete a process from r.queue and activate it;

Construct an implementation of ENQ/DEQ using semaphores. You may use any other variables you need, but no atomic code or synchronization constructs other than P and V.
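A common construction (one possibility, with names of our choosing): a binary semaphore mutex makes each routine effectively atomic, and every blocked process waits on its own private semaphore, which a later DEQ releases. Sketched in Python:

```python
import threading

class Resource:
    def __init__(self):
        self.inuse = False
        self.queue = []                       # private semaphores of waiters
        self.mutex = threading.Semaphore(1)   # guards inuse and queue

def ENQ(r):
    me = threading.Semaphore(0)   # private semaphore for the calling process
    r.mutex.acquire()             # P(mutex)
    if r.inuse:
        r.queue.append(me)        # insert current process in r.queue
        r.mutex.release()         # V(mutex): release before blocking!
        me.acquire()              # block until some DEQ activates us
    else:
        r.inuse = True
        r.mutex.release()         # V(mutex)

def DEQ(r):
    r.mutex.acquire()             # P(mutex)
    if not r.queue:
        r.inuse = False
        r.mutex.release()
    else:
        nxt = r.queue.pop(0)      # delete a process from r.queue ...
        r.mutex.release()         # (inuse stays True: ownership is handed off)
        nxt.release()             # ... and activate it
```

Note that ENQ releases mutex before blocking on the private semaphore; blocking while still holding mutex would deadlock every later ENQ and DEQ.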

Do the reverse of the previous problem, that is, implement semaphores using ENQ/DEQ.
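A sketch of one possible direction (the names mutex, gate, and v are ours; the ENQ/DEQ bodies below merely re-create the given atomic routines in Python so the example can run): mutex serializes access to the counter v, and gate is a resource that is held from the start, so waiters queue behind it and each V can activate one of them. The while loop re-checks v after every wakeup, so a spurious pass through the gate is harmless.

```python
import threading

class Resource:                      # the given ENQ/DEQ primitive, re-created
    def __init__(self):
        self.inuse = False
        self.queue = []
        self._lock = threading.Lock()   # stands in for the routines' atomicity

def ENQ(r):
    me = threading.Semaphore(0)
    with r._lock:
        if r.inuse:
            r.queue.append(me)
            wait = True
        else:
            r.inuse = True
            wait = False
    if wait:
        me.acquire()                 # block current process

def DEQ(r):
    nxt = None
    with r._lock:
        if not r.queue:
            r.inuse = False
        else:
            nxt = r.queue.pop(0)     # delete a process from r.queue
    if nxt:
        nxt.release()                # and activate it

class Sem:
    def __init__(self, init):
        self.v = init                # available units (never negative)
        self.mutex = Resource()      # serializes access to v
        self.gate = Resource()
        ENQ(self.gate)               # hold the gate so waiters queue behind it

def P(s):
    ENQ(s.mutex)
    while s.v == 0:
        DEQ(s.mutex)
        ENQ(s.gate)                  # block, or consume a pending wakeup
        ENQ(s.mutex)                 # re-check v under the mutex
    s.v -= 1
    if s.v > 0:
        DEQ(s.gate)                  # units remain: let another waiter through
    DEQ(s.mutex)

def V(s):
    ENQ(s.mutex)
    s.v += 1
    DEQ(s.gate)                      # wake one waiter, or leave the gate open
    DEQ(s.mutex)
```

This is only one scheme under our stated assumptions; any construction that blocks P-callers via ENQ and activates them via DEQ, while keeping the count consistent, answers the question.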

Attempt to solve more semaphore problems (also solve the same problems with monitors). The known problems have names such as:

- Sleeping barber’s problem

- Cigarette smokers’ problem


The problems below have numbers from the textbook, 9th edition. A brief description is provided as the problem numbers may be different in other editions. 



       [Write a bounded-buffer monitor in which the buffers (portions) are embedded within the monitor itself]


5.27   [The strict mutual exclusion within a monitor makes the bounded-buffer monitor of Exercise 6.13 mainly suitable for small portions.

          Explain why this assertion is true.

          Design a new scheme that is suitable for larger portions.]


5.37  [finite number of resources, single resource type, licenses]





7.3, 7.23 Table of numbers, snapshots of allocation, max and available. Determine safety.


7.11  -- traffic deadlock (diagram with cars)


7.4 -- Higher order containment resource


7.25  -- Deadlocks in villages of Vermont



Memory management


8.10  About a compiler, linker and generating binaries


8.25 – Time taken to access memory and adding TLBs


8.28 Address translation – a table of segment, base and length provided.


8.30 VAX memory cycles



Virtual Memory


9.2 page reference string with m frames, p pages…


9.7 Two dimensional array


9.12 and 9.27 numbers provided for CPU utilization, Paging disk and I/O devices


9.30  description of a page replacement algorithm (distribute heavily used pages)



--- some more questions-------------



Deadlocks can be detected, avoided, or prevented. Comment on the following statements (i.e., state whether each is correct or wrong, and why):

·         Deadlock detection is more general than deadlock prevention or deadlock avoidance.

·         It is better to prevent deadlocks than to detect them.

·         If a deadlock is detected, we can use a deadlock avoidance technique to prevent the deadlock.

·         A cycle in a resource-allocation graph is an early warning signal of possible future deadlock.

·         A cycle in a wait-for graph is an early warning signal of possible future deadlock.

·         The Banker’s algorithm ensures that there is no cycle in the resource-allocation graph.

·         If there is a deadlock, there must be a cycle in the resource allocation graph.


Some operating systems use base and bounds registers to protect memory. The base and bounds registers are updated (or changed):

·         When?

·         Why?

·         By whom?


Swapping is a method of increasing the effective capacity of the main memory. Note that swapping is not the same as demand paging.

·         When is a process swapped in?

·         When is a process swapped out?

·         Does a whole process have to be in memory before it can be run? Why?



Consider a demand paged system using "lazy" or "pure" demand paging.

·         What is "lazy/pure" demand paging?

·         What are the factors that limit the MAXIMUM amount of memory that can be allocated to 1 process?

·         What are the factors that limit the MAXIMUM amount of memory that can be allocated to all the processes (total)?

·         What are the factors that determine the page fault rate?


The virtual memory system using demand paging also provides memory protection.

·         Explain clearly (and briefly) how protection is achieved.

·         Show how the paging system determines when a page reference is an illegal page or a page not currently in memory.


Show that the “Optimal Page Replacement Policy (OPT)” is indeed optimal; that is, there is no algorithm, known or unknown, that can perform better than the OPT policy.


A method of deadlock prevention is the use of ordered resource allocation.

Does this scheme work when there are multiple instances of each resource?

Justify your answer.


The device driver for disk devices is used to read and write blocks of data to and from disk units.

·         Why is the driver split into two "halves" and what are these two halves?

·         Where in the driver is the disk scheduling policy implemented?


·         If the page replacement policy is FIFO, should the disk scheduling policy also be FIFO? Why or why not?


·         Which policy is better at minimizing disk head movement: SSTF or LOOK?