Friday 18 September 2020

Round Robin (RR) CPU Scheduling Algorithm

 Round Robin Scheduling (RR):

Time-sharing systems use the Round Robin algorithm. With a small time quantum, Round Robin provides good response time. It is a preemptive algorithm.

 

The CPU scheduler selects a process from the ready queue. To implement RR scheduling, the ready queue is maintained as a FIFO queue: new processes are added to the tail. The CPU scheduler picks the first process from the head of the ready queue, sets a timer to interrupt after one time quantum, and dispatches the process.

 

With the RR algorithm, the principal design issue is the length of the time quantum (time slice). If the time quantum is very short, short processes will move through the system relatively quickly. However, a short quantum also increases the processing overhead of handling the clock interrupt and performing the scheduling and dispatch functions. So a very short time quantum should be avoided.
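This trade-off can be made concrete with a small back-of-the-envelope calculation. The numbers below are purely illustrative (a hypothetical 1 ms context-switch cost), not measurements from any real system:

```python
def switch_overhead(quantum_ms, switch_cost_ms):
    """Fraction of CPU time lost to context switching when every
    quantum expiry costs one context switch."""
    return switch_cost_ms / (quantum_ms + switch_cost_ms)

# Illustrative only: assume each context switch costs 1 ms.
for q in (1, 10, 100):
    print(f"quantum={q:>3} ms -> overhead={switch_overhead(q, 1):.1%}")
```

With a 1 ms quantum, half the CPU time goes to switching; with a 100 ms quantum the overhead is about 1%, which is why a very short quantum should be avoided.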

 

Consider the following set of processes, with burst times given in milliseconds. All processes arrive at time 0 and the time quantum is 2 ms. The Gantt chart, waiting times and so on are as follows:


Process    Burst time
P1         3
P2         6
P3         4
P4         2


RR Gantt chart (time quantum = 2 ms):

| P1 | P2 | P3 | P4 | P1 | P2 | P3 | P2 |
0    2    4    6    8    9    11   13   15


Waiting time:

Process    Waiting time
P1         0 + 6 = 6
P2         2 + 5 + 2 = 9
P3         4 + 5 = 9
P4         6


Average waiting time = (6 + 9 + 9 + 6) / 4 = 30 / 4 = 7.5 ms


Turnaround time:

It is the sum of the burst time and the waiting time of each process.

 

Process    Turnaround time
P1         3 + 6 = 9
P2         6 + 9 = 15
P3         4 + 9 = 13
P4         2 + 6 = 8



Average turnaround time = (9 + 15 + 13 + 8) / 4 = 45 / 4 = 11.25 ms
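The waiting and turnaround times above can be reproduced with a short simulation. This is a minimal sketch, assuming all processes arrive at time 0 and a 2 ms time quantum (the quantum that matches the waiting times in the tables):

```python
from collections import deque

def round_robin(bursts, quantum):
    """Simulate RR for processes that all arrive at time 0.
    bursts: dict mapping process name -> CPU burst (ms).
    Returns (waiting, turnaround) dicts in ms."""
    remaining = dict(bursts)
    ready = deque(bursts)           # FIFO ready queue, in arrival order
    clock = 0
    finish = {}
    while ready:
        p = ready.popleft()
        run = min(quantum, remaining[p])
        clock += run
        remaining[p] -= run
        if remaining[p] == 0:
            finish[p] = clock
        else:
            ready.append(p)         # back to the tail of the queue
    turnaround = {p: finish[p] for p in bursts}           # arrival time is 0
    waiting = {p: turnaround[p] - bursts[p] for p in bursts}
    return waiting, turnaround

bursts = {"P1": 3, "P2": 6, "P3": 4, "P4": 2}
waiting, turnaround = round_robin(bursts, quantum=2)
print(waiting)      # {'P1': 6, 'P2': 9, 'P3': 9, 'P4': 6}
print(turnaround)   # {'P1': 9, 'P2': 15, 'P3': 13, 'P4': 8}
```

The simulation yields the same averages as the tables: 7.5 ms waiting and 11.25 ms turnaround.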



Priority CPU Scheduling Algorithm

 

Priority Scheduling:

The CPU is allocated to the highest-priority process in the ready queue. Each process has a number associated with it, called its priority number. If two or more processes have the same priority, the FCFS algorithm is applied to break the tie.

 

The meaning of the priority values is not fixed. In some systems a lower value denotes lower priority, whereas in others a lower value denotes higher priority. In our example, we assume that lower numbers denote higher priority.


Priority scheduling can be preemptive or non-preemptive. The priority of a process can be defined either internally or externally. Internal priorities are based on measurable quantities such as time limits, the number of open files, memory use and IO use. External priorities are set by criteria outside the operating system, such as the importance of the process, the cost of running it, etc.

 

When a process arrives at the ready queue, its priority is compared with the priority of the currently running process. A non-preemptive priority algorithm will simply put the new process at the head of the ready queue.

 

A preemptive priority scheduling algorithm will preempt the CPU if the priority of the newly arrived process is higher than that of the currently running process. In that case, the currently executing process changes its state from running to ready. With a non-preemptive algorithm, the currently executing process does not change state.

 

Consider the following set of processes, with burst times in milliseconds. All processes arrive at time 0.

 

Process    Burst time    Priority
P1         3             2
P2         6             4
P3         4             1
P4         2             3

 

The processes arrive in the order P1, P2, P3, P4. The Gantt chart, waiting times and turnaround times for the priority scheduling algorithm are as follows:

Priority Gantt chart:

| P3 | P1 | P4 | P2 |
0    4    7    9    15


Waiting time:

Process    Waiting time
P1         4
P2         9
P3         0
P4         7


Average waiting time = (4 + 9 + 0 + 7) / 4 = 20 / 4 = 5 ms

Turnaround time:

It is the sum of the burst time and the waiting time of each process.

 

Process    Turnaround time
P1         3 + 4 = 7
P2         6 + 9 = 15
P3         4 + 0 = 4
P4         2 + 7 = 9

 

Average turnaround time = (7 + 15 + 4 + 9) / 4 = 35 / 4 = 8.75 ms
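The schedule above can be checked with a few lines of Python. This is a minimal sketch of non-preemptive priority scheduling, assuming all processes arrive at time 0 and that a lower number means higher priority, as in this example:

```python
def priority_nonpreemptive(procs):
    """Non-preemptive priority scheduling for processes arriving at time 0.
    procs: list of (name, burst_ms, priority); lower number = higher priority.
    Python's stable sort breaks priority ties in FCFS (list) order."""
    clock = 0
    waiting, turnaround = {}, {}
    for name, burst, _priority in sorted(procs, key=lambda p: p[2]):
        waiting[name] = clock        # time spent in the ready queue
        clock += burst
        turnaround[name] = clock     # burst + waiting, since arrival is 0
    return waiting, turnaround

procs = [("P1", 3, 2), ("P2", 6, 4), ("P3", 4, 1), ("P4", 2, 3)]
waiting, turnaround = priority_nonpreemptive(procs)
print(waiting)     # {'P3': 0, 'P1': 4, 'P4': 7, 'P2': 9}
print(turnaround)  # {'P3': 4, 'P1': 7, 'P4': 9, 'P2': 15}
```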

A priority scheduling algorithm can leave some low-priority processes waiting indefinitely for the CPU. This problem is called starvation. It is solved using the aging technique: the priority of processes that have been waiting in the ready queue for a long time is gradually increased.
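One possible sketch of the aging technique (the boost interval and amount here are hypothetical; real systems choose their own): on every aging pass, each waiting process's priority number is lowered, i.e. its priority is raised, so a long waiter eventually reaches the top.

```python
def age(ready_queue, boost=1):
    """One aging pass: raise the priority (lower the number) of every
    process still waiting in the ready queue."""
    for entry in ready_queue:               # entry is [name, priority_number]
        entry[1] = max(0, entry[1] - boost)

queue = [["P_low", 10], ["P_high", 1]]
for _ in range(10):                         # ten aging passes
    age(queue)
print(queue)   # [['P_low', 0], ['P_high', 0]] -- the low-priority process caught up
```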


Shortest Job First (SJF) CPU Scheduling Algorithm

 

Shortest Job First Scheduling (SJF):

SJF associates with each process the length of that process's next CPU burst. When the CPU is free, it is assigned to the process in the ready queue with the smallest next CPU burst. If two processes have bursts of the same length, FCFS scheduling is used to break the tie.

 

The SJF scheduling algorithm is used frequently in long-term scheduling. It may be either preemptive or non-preemptive. A preemptive SJF algorithm will preempt the currently executing process, whereas a non-preemptive SJF algorithm will allow the currently running process to finish its CPU burst. Consider the following set of processes, with burst times in milliseconds:

Process    Burst time
P1         3
P2         6
P3         4
P4         2

All processes arrive at time 0, in the order P1, P2, P3, P4. The Gantt chart, waiting times and turnaround times are as follows:

 

Gantt chart:

| P4 | P1 | P3 | P2 |
0    2    5    9    15


Waiting time:

Process    Waiting time
P1         2
P2         9
P3         5
P4         0

Average waiting time = (2 + 9 + 5 + 0) / 4 = 16 / 4 = 4 ms

 


Turnaround time:

It is the sum of the burst time and the waiting time of each process.

 

Process    Turnaround time
P1         3 + 2 = 5
P2         6 + 9 = 15
P3         4 + 5 = 9
P4         2 + 0 = 2

 

Average turnaround time = (5 + 15 + 9 + 2) / 4 = 31 / 4 = 7.75 ms
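A minimal sketch of non-preemptive SJF for this example (all processes arriving at time 0) reproduces the tables above:

```python
def sjf_nonpreemptive(bursts):
    """Non-preemptive SJF for processes that all arrive at time 0.
    bursts: dict of name -> burst (ms). Python's stable sort breaks
    equal-burst ties in insertion (FCFS) order."""
    clock = 0
    waiting, turnaround = {}, {}
    for name in sorted(bursts, key=bursts.get):   # shortest burst first
        waiting[name] = clock
        clock += bursts[name]
        turnaround[name] = clock
    return waiting, turnaround

waiting, turnaround = sjf_nonpreemptive({"P1": 3, "P2": 6, "P3": 4, "P4": 2})
print(waiting)     # {'P4': 0, 'P1': 2, 'P3': 5, 'P2': 9}
print(turnaround)  # {'P4': 2, 'P1': 5, 'P3': 9, 'P2': 15}
```

Note that SJF's average waiting time (4 ms) beats both RR (7.5 ms) and priority scheduling (5 ms) on the same workload, illustrating its optimality.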


The SJF algorithm is optimal: it gives the minimum average waiting time for a given set of processes. However, SJF cannot be implemented exactly at the level of short-term CPU scheduling, because there is no way to know the length of the next CPU burst in advance.


First Come First Serve (FCFS) CPU Scheduling Algorithm

 

CPU Scheduling Algorithms:

These algorithms decide which of the processes in the ready queue is to be allocated the CPU. There are many different CPU scheduling algorithms. Among them, an algorithm that maximizes CPU utilization and throughput while minimizing turnaround time, waiting time, and response time is considered the best.


First Come First Serve Scheduling (FCFS):

 

FCFS is the simplest scheduling method. The CPU is allocated to processes in the order of their arrival: the process that requests the CPU first is allocated the CPU first. FCFS is also called FIFO (First In, First Out). It is a non-preemptive scheduling algorithm, and the FCFS policy is easily implemented with a FIFO queue.

 

When a process enters the ready queue, its process control block (PCB) is linked onto the tail of the queue. When the CPU is free, it is allocated to the process at the head of the queue. The running process is removed from the queue.


The performance metric used is the average waiting time, which is often quite long under FCFS. FCFS is usually used in batch systems. A real-life analogy of FCFS is buying tickets at a ticket counter. FCFS is not suitable for real-time systems.

 

Consider the following set of processes that arrive at time 0, with the length of the CPU burst given in milliseconds:


Process    Burst time
P1         24
P2         3
P3         3



Now we calculate the average waiting time when the processes arrive in the following orders:

 

a)    P1, P2, P3     b)  P2, P3, P1

 

The Gantt chart for the same is as follows:

 

For process arrival order,

 

a)    P1, P2, P3


| P1 | P2 | P3 |
0    24   27   30


Waiting time:

 

Process    Waiting time
P1         0
P2         24
P3         27

 

Average waiting time = (0 + 24 + 27) / 3 = 51 / 3 = 17 ms


b)  P2, P3, P1

Gantt chart:

| P2 | P3 | P1 |
0    3    6    30


Waiting time:

Process    Waiting time
P1         6
P2         0
P3         3


Average waiting time = (6 + 0 + 3) / 3 = 9 / 3 = 3 ms
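Both orderings can be compared with a small FCFS sketch (assuming, as above, that all three processes arrive at time 0):

```python
def fcfs_waiting(order, bursts):
    """FCFS waiting times for processes that all arrive at time 0
    and are served in the given order."""
    clock, waiting = 0, {}
    for name in order:
        waiting[name] = clock   # everything served earlier delays this process
        clock += bursts[name]
    return waiting

bursts = {"P1": 24, "P2": 3, "P3": 3}
w_a = fcfs_waiting(["P1", "P2", "P3"], bursts)
w_b = fcfs_waiting(["P2", "P3", "P1"], bursts)
print(w_a, sum(w_a.values()) / 3)   # {'P1': 0, 'P2': 24, 'P3': 27} 17.0
print(w_b, sum(w_b.values()) / 3)   # {'P2': 0, 'P3': 3, 'P1': 6} 3.0
```

Merely serving the short bursts before the long one cuts the average waiting time from 17 ms to 3 ms.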

The long CPU burst of P1 seriously delays the completion of the shorter bursts of P2 and P3. The CPU burst of P1 can be very long, and while it runs no other process gets a chance to run at all, so FCFS is not responsive. If P2 and P3 are interactive, their users experience delays arbitrarily longer than the time actually needed. FCFS also lengthens the average completion time, since all processes may complete only after P1 completes.

 

It also reduces IO device utilisation, since IO-bound processes wait a long time behind CPU-bound ones. This is called the convoy effect. For these reasons, FCFS is usable only when all CPU bursts are known to be short, e.g. within the kernel.

 

 Convoy Effect:

Consider 10 IO-bound processes and 1 CPU-bound process in the system. The IO-bound processes pass quickly through the ready queue and suspend themselves waiting for IO. The CPU-bound process then arrives at the head of the queue and executes until its burst completes. Meanwhile the IO-bound processes finish their IO, rejoin the ready queue, and wait for the CPU-bound process to release the CPU.

The IO devices sit idle until the CPU-bound process completes. A convoy effect happens when a set of processes needs a resource only briefly, but one process holds that resource for a long time, blocking all the others.

 

Advantages:

·         FCFS is relatively simple to implement.

·         It incurs minimum overhead.

 

Disadvantages:

·         It has unpredictable turnaround time.

·         In FCFS, average waiting time is more.


Performance Metrics or CPU Scheduling Criteria

 

Scheduling Criteria or Scheduling Methodology:


Many CPU scheduling algorithms are available, and different algorithms have different properties. In choosing which algorithm to use in a particular situation, we must consider the properties of the various algorithms. Performance metrics are used for this purpose: they let us determine which algorithm is the best and which is the worst. These performance metrics are as follows:

 

1. Throughput:

 

Throughput is the number of jobs completed by the CPU within a given period. For long processes, this rate may be one process per hour; for short processes, throughput might be many processes per minute.

 

 2. Turnaround time:

The time interval between the submission of a process and its completion is called the turnaround time. It is the sum of the periods spent waiting to get into memory, waiting in the ready queue, executing on the CPU, and doing IO.

 

Turnaround time = waiting time in the ready queue + execution time + waiting time in the waiting queue for IO
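As a trivial sketch of this relationship (the figures below are made up, purely for illustration):

```python
def turnaround_time(ready_wait_ms, exec_ms, io_wait_ms):
    """Turnaround time as defined above: ready-queue waiting time,
    plus CPU execution time, plus time spent waiting for IO."""
    return ready_wait_ms + exec_ms + io_wait_ms

# Hypothetical process: waits 9 ms in the ready queue, executes for
# 6 ms, and does no IO.
print(turnaround_time(ready_wait_ms=9, exec_ms=6, io_wait_ms=0))   # 15
```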

 

 3. Waiting time:

Waiting time is the sum of the periods a process spends waiting in the ready queue. Generally, the CPU scheduling algorithm with the least average waiting time is considered the best. Waiting time can also be expressed as turnaround time minus the actual execution time.

 

4. Response time:

Response time is the duration between the submission of a process and its first response. For example, if a process enters the ready queue at 10:05 a.m. and gets its first response from the CPU at 10:10 a.m., the interval between 10:05 and 10:10 is the response time.


5. CPU utilization:

This is the percentage of time that the processor is busy. CPU utilization ranges from 0 to 100%. In single-user systems and real-time systems this criterion is less important than in time-sharing systems. The load on the system affects the level of utilization that can be achieved. On large, expensive systems, it may be the primary criterion.

 

6. Priority:

 Give superior treatment to processes having higher priorities.

 

7. Balanced utilization:

Apart from CPU utilization, the utilization of other resources such as memory and IO devices is also considered.

 

8. Fairness:

Processes must be protected from starvation; all processes must be given a fair opportunity to execute.

Tuesday 15 September 2020

3 types of schedulers in OS: Long, short and medium term

 


 

An operating system has many schedulers. There are three main schedulers: the long-term scheduler, the short-term scheduler, and the medium-term scheduler.

 

1. Long Term Scheduler:

 

It is also termed the job scheduler. Its function is to select jobs from the pool of jobs (the job pool) and load them into the main memory (ready queue) of the computer.

 

The main objective of the job scheduler is to provide a balanced mix of IO-bound and CPU-bound processes. It also controls the degree of multiprogramming. If the degree of multiprogramming is stable, the average rate of process creation must equal the average rate at which processes leave the system.

A ready queue is simply a queue: a data structure with two ends, one called the front and the other the rear. Jobs are inserted into the ready queue at the rear and removed from the front.


(Figure: insertion at the rear and deletion at the front of the ready queue)
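The FIFO ready queue can be sketched with Python's collections.deque: new jobs enter at the rear and the scheduler takes the next job from the front (the process names here are just placeholders):

```python
from collections import deque

ready_queue = deque()
for job in ("P1", "P2", "P3"):
    ready_queue.append(job)        # jobs are inserted at the rear

next_job = ready_queue.popleft()   # the scheduler removes from the front
print(next_job)                    # P1
print(list(ready_queue))           # ['P2', 'P3']
```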


A long-term scheduler may be absent or minimal in some systems; it is absent in time-sharing systems. The long-term scheduler comes into play when a process changes its state from new to ready.


2. Short Term Scheduler:

 

It is also called the CPU scheduler. Its function is to select a job from the ready queue and give control of the CPU to that process with the help of a dispatcher. The main objective is to increase system performance with respect to the chosen set of criteria. The method of selecting a process from the ready queue depends on the CPU scheduling algorithm. The short-term scheduler executes most frequently and is faster than the long-term scheduler.

 

Dispatcher:

 

A dispatcher is the module that gives control of the CPU to the process selected by the short-term scheduler. The main functions of the dispatcher are:

·         switching the CPU from one process to another process (context switch)

·         switching to user mode

·         jumping to the proper location in the user program to restart its execution

 

The dispatcher should be fast because it is invoked during every process switch. The time taken by the dispatcher to stop one process and start another running is known as the dispatch latency. The degree of multiprogramming depends greatly on the dispatch latency: if the dispatch latency increases, the degree of multiprogramming decreases.

 

3. Medium Term Scheduler:

 

It is part of the swapping function. If a process requests IO in the middle of its execution, that process is removed from main memory and placed in the waiting queue. When the IO operation completes, the job is moved from the waiting queue back to the ready queue. These two operations, swapping out and swapping in, are performed by the medium-term scheduler. Swapping a process out reduces the degree of multiprogramming.