Academic Integrity: tutoring, explanations, and feedback — we don’t complete graded work or submit on a student’s behalf.



Question

On C++

Simulating Fair-share Process Scheduling

Problem statement: Implement a simulation of a process scheduler that is responsible for scheduling a given list of processes belonging to a given list of users. The scheduler runs on a machine with one CPU. The scheduling policy is a type of fair-share scheduling and works as follows:

- Several users may exist in the system and each user may own several processes.

- The scheduler works in a cyclic manner: in each scheduling cycle it distributes a pre-defined time quantum of CPU between the different users and processes. The first scheduling cycle starts at second 1.

- In any scheduling cycle, the scheduler distributes the time quantum equally between the users that have at least one process ready for execution.

- Furthermore, the scheduler distributes each user’s time share equally between the processes that belong to that user.

- The order in which the processes are scheduled within one cycle is not important, i.e. there is no priority among users or processes.

Example: Assume that at the beginning of a scheduling cycle, users A, B and C exist in the system. User A owns processes P1 and P2, both ready for execution. User B owns process P3, ready for execution. User C does not have any ready process. The scheduler should distribute this cycle’s time quantum 50% to user A and 50% to user B. Therefore, P1 and P2 each receive 25% of the quantum, while P3 receives 50%. This means that if the fair-share scheduler has a quantum of 4 seconds, then in this scheduling cycle P1 and P2 receive 1 second each, while P3 receives 2 seconds of CPU usage.

The input of the algorithm consists of a file named “input.txt” and the output should be written to “output.txt”.

Implementation Requirements:

The processes (both the scheduler and the user processes) MUST be simulated using threads. The scheduler should be able to pause and resume each process, and only one process should be running at a time (because the simulated system has only one CPU). This means that the scheduler thread will ask a simulated process (e.g. T1) to suspend its execution and give the CPU to the next simulated process (e.g. T2) according to the scheduling policy described above. Each process is responsible for keeping track of its total execution time and informing the scheduler thread when it is finished. This continues until all processes are done.

Sample "input.txt": 2

Explanation / Answer

CPU Scheduling

References:

5.1 Basic Concepts

5.1.1 CPU-I/O Burst Cycle

5.1.2 CPU Scheduler

5.1.3. Preemptive Scheduling

5.1.4 Dispatcher

5.2 Scheduling Criteria

5.3 Scheduling Algorithms

The following subsections will explain several common scheduling strategies, looking at only a single CPU burst each for a small number of processes. Obviously real systems have to deal with a lot more simultaneous processes executing their CPU-I/O burst cycles.

5.3.1 First-Come First-Serve Scheduling, FCFS

Process   Burst Time
P1        24
P2        3
P3        3

5.3.2 Shortest-Job-First Scheduling, SJF

Process   Burst Time
P1        6
P2        8
P3        7
P4        3

estimate[ i + 1 ] = alpha * burst[ i ] + ( 1.0 - alpha ) * estimate[ i ]

Process   Arrival Time   Burst Time
P1        0              8
P2        1              4
P3        2              9
P4        3              5

5.3.3 Priority Scheduling

Process   Burst Time   Priority
P1        10           3
P2        1            1
P3        2            4
P4        1            5
P5        5            2

5.3.4 Round Robin Scheduling

Process   Burst Time
P1        24
P2        3
P3        3

5.3.5 Multilevel Queue Scheduling

5.3.6 Multilevel Feedback-Queue Scheduling

5.4 Thread Scheduling

5.4.1 Contention Scope

5.4.2 Pthread Scheduling


Figure 5.8

5.5 Multiple-Processor Scheduling

5.5.1 Approaches to Multiple-Processor Scheduling

5.5.2 Processor Affinity

5.5.3 Load Balancing

5.5.4 Multicore Processors

5.5.5 Virtualization and Scheduling

5.6 Operating System Examples

5.6.1 Example: Solaris Scheduling

Figure 5.12

5.6.2 Example: Windows XP Scheduling


Figure 5.14

5.6.3 Example: Linux Scheduling


Figure 5.15


Figure 5.16

5.7 Algorithm Evaluation

5.7.1 Deterministic Modeling

Process   Burst Time
P1        10
P2        29
P3        3
P4        7
P5        12

5.7.2 Queuing Models

n = lambda * W   (Little’s formula: average queue length = average arrival rate × average waiting time)

5.7.3 Simulations


Figure 5.17

5.7.4 Implementation

5.8 Summary

Material Omitted from the Eighth Edition:

Was 5.4.4 Symmetric Multithreading (omitted from the 8th edition)

Interacting processes: Concurrent processes executing in the operating system are interacting (or cooperating) processes if they can affect each other.

Any process that shares data with other processes is an interacting process.

Two methods of implementing interacting processes are as follows:

(i) Shared-memory solution: This scheme requires that the processes share a common buffer pool, and that the code for implementing the buffer be written by the application programmer.

For example, a shared-memory solution can be provided to the bounded-buffer problem. The producer and consumer processes share the following variables:

#define BUFFER_SIZE 10

typedef struct {
    ……….
} item;

item buffer[BUFFER_SIZE];
int in = 0;
int out = 0;

The shared buffer is implemented as a circular array with two logical pointers: in and out. The variable in points to the next free position in the buffer; out points to the first full position in the buffer. The buffer is empty when in==out; the buffer is full when (( in + 1)%BUFFER_SIZE)==out.

The producer process has a local variable nextProduced in which the new item to be produced is stored:

while (1) {
    /* produce an item in nextProduced */
    while (((in + 1) % BUFFER_SIZE) == out)
        ; /* do nothing */
    buffer[in] = nextProduced;
    in = (in + 1) % BUFFER_SIZE;
}

The consumer process has a local variable nextConsumed in which the item to be consumed is stored:

while (1) {
    while (in == out)
        ; /* do nothing */
    nextConsumed = buffer[out];
    out = (out + 1) % BUFFER_SIZE;
    /* consume the item in nextConsumed */
}

(ii) Interprocess communication: The OS provides the means for cooperating processes to communicate with each other via an interprocess communication (IPC) facility. IPC provides a mechanism to allow processes to communicate and to synchronize their actions without sharing the same address space. IPC is particularly useful in a distributed environment where the communicating processes may reside on different computers connected by a network. IPC is best implemented by a message-passing system, where communication among the user processes is accomplished through the passing of messages. An IPC facility provides at least two operations:

send(message) and receive(message).

Some types of message passing system are as follows:

Direct or Indirect Communication: With direct communication, each process that wants to communicate must explicitly name the recipient or sender of the communication. In this scheme, the send and receive primitives are defined as:                    

- send(P, message): Send a message to process P.

- receive(Q, message): Receive a message from process Q.

A communication link in this scheme has the following properties:

- A link is established automatically between every pair of processes that want to communicate. The processes need to know only each other’s identity to communicate.

- A link is associated with exactly two processes.

- Exactly one link exists between each pair of processes.

With indirect communication, the messages are sent to and received from mailboxes, or ports. Each mailbox has a unique identification. In this scheme, a process can communicate with some other process via a number of different mailboxes. Two processes can communicate only if they share a mailbox. The send and receive primitives are defined as follows:

- send(A, message): Send a message to mailbox A.

- receive(A, message): Receive a message from mailbox A.

In this scheme, a communication link has the following properties:

- A link is established between a pair of processes only if both members of the pair have a shared mailbox.

- A link may be associated with more than two processes.

- A number of different links may exist between each pair of communicating processes, with each link corresponding to one mailbox.

Chart for First Come First Served scheduling:

Job:   |  1  |    2    |    3    |       4       |  5  |
Time:  0     5         20        32              57    62

Turnaround time = Terminated time – Arrival time

i.e., T = Tr − Ta

So, turnaround time for various jobs are

For job 1, T1 = 5-0=5 unit time

For job 2, T2 = 20-1=19 unit time

For job 3, T3 = 32-3=29 unit time

For job 4, T4 = 57-7=50 unit time

For job 5, T5 = 62-10=52 unit time

Mean turnaround time, Tm = (T1+T2+T3+T4+T5)/5= 155/5=31 unit time/job

Throughput = number of processes completed per unit time = 5/62 ≈ 0.081 jobs/unit time

Chart for Shortest Job Next scheduling:

Job:   |  1  |   3   |  5  |    2    |       4       |
Time:  0     5       17    22        37              62

Turnaround time for various jobs are

For job 1, T1 = 5-0=5 unit time

For job 2, T2 = 37-1=36 unit time

For job 3, T3 = 17-3=14 unit time

For job 4, T4 = 62-7=55 unit time

For job 5, T5 = 22-10=12 unit time

Mean turnaround time, Tm = (T1+T2+T3+T4+T5)/5 = 122/5 = 24.4 unit time/job

Throughput = number of processes completed per unit time = 5/62 ≈ 0.081 jobs/unit time

(i)    In the given question,

Available vector for resources [R1 R2 R3] = total resource units − total allocation = [7 7 10] − [5 4 10] = [2 3 0]

The Need matrix is defined as (Max − Allocation):

Process   Need of resources
          R1   R2   R3
P1        1    4    5
P2        2    3    0
P3        2    2    0

Using the Safety Algorithm, we get the sequence:

Process       Available resources after satisfying need
              R1   R2   R3
(initially)   2    3    0
P2            4    3    3
P3            5    5    7
P1            7    7    10

The sequence <P2, P3, P1> satisfies the safety criteria, so the current allocation state is safe.

(ii) Request made by process P1: Request(P1) = [1 1 0]

Here, Request(P1) ≤ Need(P1) and Request(P1) ≤ Available,

i.e. [1 1 0] ≤ [1 4 5] and [1 1 0] ≤ [2 3 0]

Pretending that request can be fulfilled, we get new state:

Process   Allocation      Need            Available
          R1  R2  R3      R1  R2  R3      R1  R2  R3
P1        3   3   3       0   3   5       1   2   0
P2        2   0   3       2   3   0
P3        1   2   4       2   2   0

Since Need > Available for every process, no process’s need can be satisfied.

So the resulting state is not safe, i.e. the request made by process P1 cannot be granted.

