Computer Science GRE Study Guide, Program control and structure

Table of Contents

I.    Software Systems and Methodology – 40%

B.     Program control and structure

2.      Procedures, functions, methods, and exception handlers

a.       Definition: A procedure and a function are very similar. Both are subroutines that the programmer can call from multiple places in the program. If the called subroutine returns a value, we typically call it a function; if it returns nothing, we call it a procedure. (A short C++ sketch after item d illustrates items a, b, and c.)

b.      Definition: In object-oriented programming, we have a special type of subroutine called a method. A method is defined inside a class and operates on objects (instances) of that class.

c.       Definition: From time to time, the software or hardware may need to interrupt the normal flow of execution of the program. To signal this, an exception is generated. Your program or operating system must have some way to deal with the exception; the code that processes the exception is called the exception handler.

d.      A macro is very similar to a function or procedure. Originally, when you used a macro, the compiler (more precisely, the preprocessor) literally substituted the text of the macro for the name of the macro. Therefore, if your program used the macro twice, the macro's code was inserted into the program twice. Compare this to a function call: if you call a function twice, the compiler still inserts the function's code only once and generates a call to it at each call site. (A second sketch after this list shows the difference.)
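To make items a through c concrete, here is a minimal C++ sketch. It is only an illustration, not part of the original guide, and every name in it (printGreeting, square, Counter) is invented. It shows a procedure that returns nothing, a function that returns a value, a method defined in a class, and an exception handler.

#include <iostream>
#include <stdexcept>
#include <string>

// A procedure: a named subroutine we can call from many places; it returns no value.
void printGreeting(const std::string& name) {
    std::cout << "Hello, " << name << '\n';
}

// A function: like a procedure, but it returns a value to the caller.
int square(int x) {
    return x * x;
}

// A method: a subroutine defined in a class that operates on objects of that class.
class Counter {
public:
    void increment() { ++count_; }        // method
    int value() const { return count_; }  // method
private:
    int count_ = 0;
};

int main() {
    printGreeting("GRE student");     // calling the procedure
    std::cout << square(7) << '\n';   // calling the function

    Counter c;
    c.increment();                    // calling a method on object c
    std::cout << c.value() << '\n';

    // An exception handler: the catch block is the code that processes
    // the exception generated by the throw statement.
    try {
        throw std::runtime_error("something went wrong");
    } catch (const std::runtime_error& e) {
        std::cout << "handled: " << e.what() << '\n';
    }
    return 0;
}

Compiled and run, this prints the greeting, 49, 1, and the handled exception message.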
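And for item d, here is a second minimal sketch, again invented for illustration (the SQUARE macro and squareFn function are hypothetical). It shows that the preprocessor pastes a macro's text at every use, while a function's body is compiled once and called from each call site.

#include <iostream>

// Macro: the preprocessor substitutes the text ((x) * (x)) wherever
// SQUARE(...) appears, so using the macro twice pastes the code twice.
#define SQUARE(x) ((x) * (x))

// Function: the body below is compiled once; each call site jumps to it.
int squareFn(int x) {
    return x * x;
}

int main() {
    // After preprocessing, this line literally contains ((3) * (3)) and ((4) * (4)).
    std::cout << SQUARE(3) << ' ' << SQUARE(4) << '\n';

    // These two calls reuse the single compiled body of squareFn.
    std::cout << squareFn(3) << ' ' << squareFn(4) << '\n';
    return 0;
}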

3.      Concurrency, communication, and synchronization

a.       Definition: Concurrency refers to the capability of a system to make progress on multiple programs at the same time. This is commonly referred to as multi-tasking. It is very different from having multiple processors: a single processor can provide concurrency by rapidly switching among tasks.

b.      Definition: Coordinating multiple subroutines (or processes) that may pass information to each other can be a tricky task. There are many techniques for communication and synchronization.

c.       We often produce data with one function and then use it in another function; we say that these two functions have a producer-consumer relationship. This relationship can cause problems if the two functions operate at different rates. To solve this, we implement a buffer: the producing function places data in the buffer and the consuming function takes the oldest data out of it (the buffer is FIFO). Unfortunately, the buffer itself generates new issues that we must deal with. There are four basic implementation techniques for the buffer (a monitor-style buffer is sketched after item d).

The most basic technique is to give both functions direct access to the buffer. This is poor design: every producing function must check that the buffer is not full, and when it is full there may be multiple functions waiting to place data in it; since the functions are independent of each other, there are no rules of precedence among them. It is also relatively difficult to keep the buffer FIFO when a producer and a consumer could be accessing it at the same time.

The second technique is to use a semaphore. The semaphore acts as a synchronizer and does not allow simultaneous access to the buffer. A simple semaphore scheme can force busy-waiting: when the buffer is full or empty, the waiting function loops in place until the state changes, and this looping consumes processor cycles and is inefficient.

Monitors are the third technique. A monitor also allows only one function to access the buffer, but it does so in a more stringent way. The cost of that extra protection is overhead: if a function gets control of the monitor and cannot continue (e.g., a producer when the buffer is full), it must give up the monitor and wait until another process changes the buffer's state before it is offered control again. Remember that monitors are the inefficient option here.

The most modern technique is to make the buffer a totally separate component with its own managing function. With this technique, the producer does not even attempt to produce when the buffer is full, and the consumer does not attempt to take anything off the buffer when the buffer is empty. This is the most efficient technique we currently have.

d.      There are times when two (or more) functions are in deadlock: each is waiting for something that only the other can provide, so neither can proceed. For example, imagine two functions that send information to each other but cannot, because all of the buffers are full; neither function will empty a buffer, because each wants to place something into a buffer first. There are many ways to deal with deadlock. You could require that a process request exclusive access to all of the resources necessary to complete an operation before beginning the operation; if the resources are not available, the process waits. You can prioritize the resources and require that all processes request them in that order (the lock-ordering sketch at the end of this section shows this remedy). You can have processes time out and restart if they wait too long. You can have the operating system restart processes if the wait queues back up.
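To make the buffer discussion in item c concrete, here is a minimal monitor-style bounded-buffer sketch in C++. It is an illustration only, not from the original guide: the BoundedBuffer class, its capacity, and the item counts are invented, and a std::mutex plus two condition variables stand in for the monitor, so a producer or consumer that cannot continue waits (without busy-waiting) until the other side changes the buffer.

#include <condition_variable>
#include <cstddef>
#include <iostream>
#include <mutex>
#include <queue>
#include <thread>

// A FIFO bounded buffer guarded in monitor style: only one thread is inside
// the critical section at a time, and a thread that cannot continue waits on
// a condition variable instead of looping in place.
class BoundedBuffer {
public:
    explicit BoundedBuffer(std::size_t capacity) : capacity_(capacity) {}

    void put(int item) {
        std::unique_lock<std::mutex> lock(mutex_);
        notFull_.wait(lock, [&] { return queue_.size() < capacity_; });
        queue_.push(item);
        notEmpty_.notify_one();      // wake a waiting consumer
    }

    int take() {
        std::unique_lock<std::mutex> lock(mutex_);
        notEmpty_.wait(lock, [&] { return !queue_.empty(); });
        int item = queue_.front();   // oldest item first: FIFO
        queue_.pop();
        notFull_.notify_one();       // wake a waiting producer
        return item;
    }

private:
    std::mutex mutex_;
    std::condition_variable notFull_;
    std::condition_variable notEmpty_;
    std::queue<int> queue_;
    std::size_t capacity_;
};

int main() {
    BoundedBuffer buffer(4);   // small capacity so the producer must sometimes wait

    std::thread producer([&] {
        for (int i = 0; i < 10; ++i) buffer.put(i);
    });
    std::thread consumer([&] {
        for (int i = 0; i < 10; ++i) std::cout << buffer.take() << ' ';
    });

    producer.join();
    consumer.join();
    std::cout << '\n';
    return 0;
}

The same FIFO discipline could instead be enforced with counting semaphores or by handing the queue to a separate buffer-manager process, matching the other techniques in item c. (With GCC or Clang, compile the threaded sketches with something like g++ -std=c++11 -pthread.)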
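And to illustrate one remedy from item d, prioritizing the resources and requiring every process to request them in the same order, here is a minimal C++ sketch; the two mutexes and the useBothResources function are invented for illustration.

#include <iostream>
#include <mutex>
#include <thread>

// Two shared resources. Deadlock can occur if one thread locks A and then
// waits for B while another thread locks B and then waits for A.
std::mutex resourceA;   // priority 1: always lock first
std::mutex resourceB;   // priority 2: always lock second

// Remedy: prioritize the resources and require every thread to request them
// in the same order, so a circular wait can never form.
void useBothResources(const char* who) {
    std::lock_guard<std::mutex> lockA(resourceA);  // first A...
    std::lock_guard<std::mutex> lockB(resourceB);  // ...then B, never the reverse
    std::cout << who << " holds both resources\n";
}

int main() {
    std::thread t1(useBothResources, "thread 1");
    std::thread t2(useBothResources, "thread 2");
    t1.join();
    t2.join();
    return 0;
}

C++'s std::scoped_lock(resourceA, resourceB), which acquires several mutexes together without deadlocking, is closer in spirit to the first remedy in item d: requesting everything the operation needs before beginning it.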
