From cd3ed905d507db7fb3fd2a791c13685bed02cba4 Mon Sep 17 00:00:00 2001 From: Khwaish Chawla <126390524+khwaishchawla@users.noreply.github.com> Date: Sun, 10 Nov 2024 00:47:48 +0530 Subject: [PATCH 01/12] Create program.c --- .../proportional scheduling/program.c | 66 +++++++++++++++++++ 1 file changed, 66 insertions(+) create mode 100644 Miscellaneous Algorithms/proportional scheduling/program.c diff --git a/Miscellaneous Algorithms/proportional scheduling/program.c b/Miscellaneous Algorithms/proportional scheduling/program.c new file mode 100644 index 00000000..5f36a709 --- /dev/null +++ b/Miscellaneous Algorithms/proportional scheduling/program.c @@ -0,0 +1,66 @@ +#include +#include +#include + +#define NUM_PROCESSES 5 + +// Structure for process details +struct Process { + int pid; // Process ID + int burstTime; // Burst time of the process + int group; // Group ID (for fair share) +}; + +// Function to simulate fair share scheduling +void fairShareScheduling(struct Process processes[], int numProcesses) { + int timeQuantum = 2; // Time quantum for each round + int timeElapsed = 0; + + // Keep track of remaining burst times + int remainingBurst[numProcesses]; + for (int i = 0; i < numProcesses; i++) { + remainingBurst[i] = processes[i].burstTime; + } + + printf("Starting Fair Share Scheduling...\n"); + + // Continue until all processes are done + while (1) { + int allDone = 1; + + for (int i = 0; i < numProcesses; i++) { + if (remainingBurst[i] > 0) { + allDone = 0; + + // Execute for time quantum or remaining burst time + int execTime = (remainingBurst[i] > timeQuantum) ? timeQuantum : remainingBurst[i]; + remainingBurst[i] -= execTime; + timeElapsed += execTime; + + printf("Process %d (Group %d) ran for %d units\n", processes[i].pid, processes[i].group, execTime); + + // If process finished + if (remainingBurst[i] == 0) { + printf("Process %d completed at time %d\n", processes[i].pid, timeElapsed); + } + } + } + + // Check if all processes are done + if (allDone) break; + } +} + +int main() { + struct Process processes[NUM_PROCESSES] = { + {1, 8, 1}, // Process ID, Burst Time, Group + {2, 4, 2}, + {3, 9, 1}, + {4, 5, 2}, + {5, 7, 1} + }; + + fairShareScheduling(processes, NUM_PROCESSES); + + return 0; +} From b797d6d163a5d2be103cb394a4d5860c70fef8f2 Mon Sep 17 00:00:00 2001 From: Khwaish Chawla <126390524+khwaishchawla@users.noreply.github.com> Date: Sun, 10 Nov 2024 00:48:18 +0530 Subject: [PATCH 02/12] Create readme.md --- .../proportional scheduling/readme.md | 38 +++++++++++++++++++ 1 file changed, 38 insertions(+) create mode 100644 Miscellaneous Algorithms/proportional scheduling/readme.md diff --git a/Miscellaneous Algorithms/proportional scheduling/readme.md b/Miscellaneous Algorithms/proportional scheduling/readme.md new file mode 100644 index 00000000..89273911 --- /dev/null +++ b/Miscellaneous Algorithms/proportional scheduling/readme.md @@ -0,0 +1,38 @@ +Fair Share Scheduling is a scheduling strategy that distributes CPU time fairly among users or groups of users. Instead of focusing solely on individual processes, fair share scheduling ensures that each user or group gets a specified proportion of CPU time, helping prevent any single user or group from monopolizing resources. + +Description +In a fair share scheduling system, the CPU is allocated based on user-defined shares or groups. Each group is given an equal or specified share of CPU resources. 
For example, if two users each have processes that need CPU time, the scheduler will ensure both users receive a fair amount of CPU time, regardless of how many processes each user has running. + +The scheduler operates in rounds (usually called time slices or time quanta) and allocates CPU time to processes within each user group. If a user has more processes than another user, the time is divided among that user's processes accordingly. This way, fair share scheduling attempts to prevent cases where one user's processes consume an excessive amount of CPU time, starving other users. + +Advantages (Pros) +Fairness Across Users or Groups: Fair share scheduling ensures that each user or group receives an equitable share of CPU time, promoting fairness in resource allocation. + +Prevents Starvation: By ensuring each group gets a proportion of CPU time, this method prevents any user or process from monopolizing the CPU, reducing the chance of starvation for low-priority users or processes. + +Customizable Resource Distribution: It can be configured to assign specific shares based on group importance, enabling priority allocation to certain users or critical processes. + +Enhanced Multitasking: By distributing CPU time fairly, it improves the responsiveness of the system across multiple users and processes, which is beneficial for environments with diverse workloads. + +Disadvantages (Cons) +Increased Complexity: Fair share scheduling can be more complex to implement compared to simpler algorithms like round-robin or first-come-first-served, as it requires managing and tracking groups and their allocated shares. + +Overhead in Resource Tracking: The system must monitor the CPU time used by each group, adding overhead to maintain this information, which can slightly reduce efficiency. + +Not Optimized for Real-Time Tasks: Fair share scheduling does not prioritize time-sensitive tasks, potentially leading to delays for high-priority processes if they are part of a lower-priority group. + +Suboptimal Performance for Single-User Systems: In environments with only one user or where most resources are used by a single user, fair share scheduling may be unnecessary and could add unnecessary complexity. + +Use Cases +Fair share scheduling is ideal in multi-user or multi-group environments such as: + +Academic or Research Institutions: Where multiple researchers or students share computational resources. +Enterprise Environments: Where resources need to be equitably divided among departments or teams. +Shared Server Systems: Cloud environments or shared servers where multiple users or clients access limited computational resources. +Overall, fair share scheduling balances CPU usage among users or groups, making it well-suited for multi-user systems, but it may add complexity and be less efficient in simpler or single-user systems. 
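The description above says each user or group receives a specified proportion of CPU time, while the accompanying program.c hands every process the same fixed two-unit quantum regardless of group. The sketch below shows one minimal way the quantum could be scaled by a per-group weight; the `share` field, the weighting scheme, and the sample numbers are illustrative assumptions, not part of the submitted program. A fuller implementation would also divide each group's allocation among that group's processes, as the description notes.

```c
#include <stdio.h>

#define NUM_PROCESSES 3
#define BASE_QUANTUM 2          /* base slice, scaled by the group's share below */

struct Proc {
    int pid;    /* process ID */
    int burst;  /* remaining burst time */
    int group;  /* group ID */
    int share;  /* hypothetical weight assigned to the group */
};

/* Weighted round robin: each pass gives a process BASE_QUANTUM * share units,
   so accumulated CPU time tends toward the configured proportions. */
void weightedFairShare(struct Proc p[], int n) {
    int timeElapsed = 0, allDone;
    do {
        allDone = 1;
        for (int i = 0; i < n; i++) {
            if (p[i].burst <= 0) continue;
            allDone = 0;
            int quantum = BASE_QUANTUM * p[i].share;
            int exec = (p[i].burst > quantum) ? quantum : p[i].burst;
            p[i].burst -= exec;
            timeElapsed += exec;
            printf("Process %d (group %d, share %d) ran for %d units\n",
                   p[i].pid, p[i].group, p[i].share, exec);
            if (p[i].burst == 0)
                printf("Process %d completed at time %d\n", p[i].pid, timeElapsed);
        }
    } while (!allDone);
}

int main(void) {
    /* group 1 is given twice the share of group 2 (illustrative values) */
    struct Proc procs[NUM_PROCESSES] = {
        {1, 12, 1, 2},
        {2,  6, 2, 1},
        {3, 10, 1, 2}
    };
    weightedFairShare(procs, NUM_PROCESSES);
    return 0;
}
```

Running this, the group 1 processes accumulate CPU time roughly twice as fast as the group 2 process, which is the proportional behaviour the text describes.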
+ + + + + + From f253bba7775894985df921260670935f9ca0fb4b Mon Sep 17 00:00:00 2001 From: Khwaish Chawla <126390524+khwaishchawla@users.noreply.github.com> Date: Sun, 10 Nov 2024 01:02:19 +0530 Subject: [PATCH 03/12] Update readme.md --- Miscellaneous Algorithms/proportional scheduling/readme.md | 1 + 1 file changed, 1 insertion(+) diff --git a/Miscellaneous Algorithms/proportional scheduling/readme.md b/Miscellaneous Algorithms/proportional scheduling/readme.md index 89273911..25b4598c 100644 --- a/Miscellaneous Algorithms/proportional scheduling/readme.md +++ b/Miscellaneous Algorithms/proportional scheduling/readme.md @@ -1,3 +1,4 @@ +fair share is another name of proportional scheduling.. Fair Share Scheduling is a scheduling strategy that distributes CPU time fairly among users or groups of users. Instead of focusing solely on individual processes, fair share scheduling ensures that each user or group gets a specified proportion of CPU time, helping prevent any single user or group from monopolizing resources. Description From 0d8941a779565cb169eab47fae10dfecfee6d414 Mon Sep 17 00:00:00 2001 From: Khwaish Chawla <126390524+khwaishchawla@users.noreply.github.com> Date: Sun, 10 Nov 2024 01:04:27 +0530 Subject: [PATCH 04/12] Create program.c --- .../two level scheduling/program.c | 76 +++++++++++++++++++ 1 file changed, 76 insertions(+) create mode 100644 Miscellaneous Algorithms/proportional scheduling/two level scheduling/program.c diff --git a/Miscellaneous Algorithms/proportional scheduling/two level scheduling/program.c b/Miscellaneous Algorithms/proportional scheduling/two level scheduling/program.c new file mode 100644 index 00000000..5dfb294f --- /dev/null +++ b/Miscellaneous Algorithms/proportional scheduling/two level scheduling/program.c @@ -0,0 +1,76 @@ +#include +#include +#include + +#define NUM_CPUS 2 // Number of CPUs available +#define NUM_PROCESSES 5 // Number of processes + +// Structure for process details +struct Process { + int pid; // Process ID + int burstTime; // CPU burst time needed + int assignedCPU; // Assigned CPU ID +}; + +// Function to perform round-robin scheduling on a CPU +void roundRobinScheduler(struct Process processes[], int numProcesses, int cpuID, int timeQuantum) { + printf("\nCPU %d scheduling processes:\n", cpuID); + + int timeElapsed = 0; + int allDone; + + do { + allDone = 1; + for (int i = 0; i < numProcesses; i++) { + // Skip processes not assigned to this CPU + if (processes[i].assignedCPU != cpuID) + continue; + + if (processes[i].burstTime > 0) { + allDone = 0; + int execTime = (processes[i].burstTime > timeQuantum) ? timeQuantum : processes[i].burstTime; + + processes[i].burstTime -= execTime; + timeElapsed += execTime; + + printf("Process %d ran for %d units on CPU %d. 
Remaining burst time: %d\n", + processes[i].pid, execTime, cpuID, processes[i].burstTime); + + if (processes[i].burstTime == 0) { + printf("Process %d completed on CPU %d at time %d\n", processes[i].pid, cpuID, timeElapsed); + } + } + } + } while (!allDone); +} + +// Function to perform high-level CPU assignment for processes +void twoLevelScheduling(struct Process processes[], int numProcesses, int numCPUs, int timeQuantum) { + printf("Assigning processes to CPUs...\n"); + + // High-level scheduling: assign each process to a CPU in a round-robin fashion + for (int i = 0; i < numProcesses; i++) { + processes[i].assignedCPU = i % numCPUs; + printf("Process %d assigned to CPU %d\n", processes[i].pid, processes[i].assignedCPU); + } + + // Low-level scheduling: each CPU runs a round-robin scheduler for its assigned processes + for (int cpuID = 0; cpuID < numCPUs; cpuID++) { + roundRobinScheduler(processes, numProcesses, cpuID, timeQuantum); + } +} + +int main() { + struct Process processes[NUM_PROCESSES] = { + {1, 10, -1}, // Process ID, Burst Time, Assigned CPU (-1 indicates not assigned yet) + {2, 5, -1}, + {3, 8, -1}, + {4, 6, -1}, + {5, 12, -1} + }; + + int timeQuantum = 3; // Time quantum for round-robin scheduling + twoLevelScheduling(processes, NUM_PROCESSES, NUM_CPUS, timeQuantum); + + return 0; +} From 103ab764ed5b74d8b63751170ebf06c317b817e0 Mon Sep 17 00:00:00 2001 From: Khwaish Chawla <126390524+khwaishchawla@users.noreply.github.com> Date: Sun, 10 Nov 2024 01:05:03 +0530 Subject: [PATCH 05/12] Create README.md --- .../two level scheduling/README.md | 50 +++++++++++++++++++ 1 file changed, 50 insertions(+) create mode 100644 Miscellaneous Algorithms/proportional scheduling/two level scheduling/README.md diff --git a/Miscellaneous Algorithms/proportional scheduling/two level scheduling/README.md b/Miscellaneous Algorithms/proportional scheduling/two level scheduling/README.md new file mode 100644 index 00000000..28d8886d --- /dev/null +++ b/Miscellaneous Algorithms/proportional scheduling/two level scheduling/README.md @@ -0,0 +1,50 @@ +Two-Level Scheduling is a CPU scheduling technique used primarily in systems with multiple processors or distributed environments. It separates scheduling into two distinct levels: a high-level (or global) scheduler and a low-level (or local) scheduler. The high-level scheduler assigns processes to specific CPUs, while the low-level scheduler manages the execution of those processes on each individual CPU. This approach allows for more efficient CPU utilization and better performance, especially in multi-core or distributed systems. + +How Two-Level Scheduling Works +High-Level Scheduling: +In this first stage, the high-level scheduler assigns processes to different CPUs or processors based on factors like CPU load, process priority, and resource requirements. This scheduling happens less frequently and helps balance the load across multiple CPUs. +Low-Level Scheduling: +Once a process is assigned to a CPU, the low-level scheduler (often using algorithms like round-robin, shortest job next, or priority scheduling) manages the execution of processes on that CPU. This level of scheduling operates more frequently, focusing on the efficient and fair use of the CPU it controls. +Pros of Two-Level Scheduling +Improved Load Balancing: + +The high-level scheduler helps distribute workload across multiple CPUs, avoiding situations where some CPUs are overloaded while others are underutilized. This improves overall system performance and resource usage. 
+Enhanced Scalability: + +By offloading the process distribution responsibility to the high-level scheduler, the system can handle a large number of processes efficiently, making it ideal for multi-core and distributed systems. +Better Responsiveness and Fairness: + +With a dedicated scheduler on each CPU, low-level scheduling ensures quick, responsive task handling within each CPU, while the high-level scheduler maintains fairness across CPUs. +Efficient Multi-Core Usage: + +Two-level scheduling maximizes the use of multi-core processors by allowing CPUs to independently manage tasks assigned to them, leading to higher throughput. +Reduced Context Switching: + +Processes are assigned to a specific CPU by the high-level scheduler, reducing the need for frequent process migrations between CPUs, which minimizes context switching overhead. +Cons of Two-Level Scheduling +Increased Complexity: + +Managing two levels of scheduling requires more complex algorithms and data structures, especially when balancing load across CPUs and handling process migrations. +Higher Overhead: + +The two-layered approach can introduce overhead, as there are now two schedulers operating at different levels. The system needs to track which processes are assigned to which CPUs and manage load distribution. +Potential CPU Idle Time: + +If the high-level scheduler doesn't efficiently assign processes to each CPU, it may lead to scenarios where some CPUs remain idle or underutilized, which can reduce efficiency. +Challenges in Process Migration: + +Migrating processes between CPUs, if required, can be complex and may cause performance penalties. Process migration often requires additional mechanisms to transfer process states between CPUs without affecting performance. +Suboptimal for Single-Core Systems: + +Two-level scheduling adds unnecessary complexity in single-core or less complex environments, where simpler scheduling techniques (like round-robin or priority-based scheduling) would suffice. +Use Cases for Two-Level Scheduling +Multi-Core Systems: In systems with multiple cores or processors, two-level scheduling maximizes core usage and balances the load effectively. +Distributed Systems: In distributed computing environments, where multiple machines or nodes are available, two-level scheduling helps in resource allocation across nodes. +Real-Time and High-Performance Applications: For systems needing high responsiveness and parallelism, such as web servers, scientific computing, or cloud-based applications, two-level scheduling improves resource allocation and responsiveness. +Overall, two-level scheduling is a powerful approach for handling large-scale and multi-core workloads efficiently. However, it introduces additional complexity and overhead, making it most suitable for high-performance, multi-core, or distributed systems. 
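The README states that the high-level scheduler can consider CPU load when placing processes, whereas program.c assigns them purely round-robin (`i % numCPUs`). Below is a minimal sketch of a load-aware high-level step that places each process on the CPU with the least total assigned burst time; the least-loaded heuristic and the `cpuLoad` bookkeeping are assumptions for illustration, not part of the submitted code. The low-level per-CPU round-robin stage would then run unchanged.

```c
#include <stdio.h>

#define NUM_CPUS 2
#define NUM_PROCESSES 5

struct Proc {
    int pid;          /* process ID */
    int burst;        /* CPU burst time needed */
    int assignedCPU;  /* chosen by the high-level scheduler */
};

/* High-level stage only: pick the CPU whose queued burst total is smallest. */
void assignLeastLoaded(struct Proc p[], int n) {
    int cpuLoad[NUM_CPUS] = {0};
    for (int i = 0; i < n; i++) {
        int best = 0;
        for (int c = 1; c < NUM_CPUS; c++)
            if (cpuLoad[c] < cpuLoad[best]) best = c;
        p[i].assignedCPU = best;
        cpuLoad[best] += p[i].burst;
        printf("Process %d (burst %d) assigned to CPU %d (load now %d)\n",
               p[i].pid, p[i].burst, best, cpuLoad[best]);
    }
}

int main(void) {
    struct Proc procs[NUM_PROCESSES] = {
        {1, 10, -1}, {2, 5, -1}, {3, 8, -1}, {4, 6, -1}, {5, 12, -1}
    };
    assignLeastLoaded(procs, NUM_PROCESSES);
    return 0;
}
```

With the burst times used in program.c, this placement spreads the work more evenly across the two CPUs (16 vs. 25 units) than strict round-robin assignment does (30 vs. 11).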
+ + + + + + From 07247e7a6ed672a79b769e54df95988a8c3f5e43 Mon Sep 17 00:00:00 2001 From: Khwaish Chawla <126390524+khwaishchawla@users.noreply.github.com> Date: Sun, 10 Nov 2024 01:06:25 +0530 Subject: [PATCH 06/12] Create readme.md --- Miscellaneous Algorithmstw/twolevel/readme.md | 50 +++++++++++++++++++ 1 file changed, 50 insertions(+) create mode 100644 Miscellaneous Algorithmstw/twolevel/readme.md diff --git a/Miscellaneous Algorithmstw/twolevel/readme.md b/Miscellaneous Algorithmstw/twolevel/readme.md new file mode 100644 index 00000000..28d8886d --- /dev/null +++ b/Miscellaneous Algorithmstw/twolevel/readme.md @@ -0,0 +1,50 @@ +Two-Level Scheduling is a CPU scheduling technique used primarily in systems with multiple processors or distributed environments. It separates scheduling into two distinct levels: a high-level (or global) scheduler and a low-level (or local) scheduler. The high-level scheduler assigns processes to specific CPUs, while the low-level scheduler manages the execution of those processes on each individual CPU. This approach allows for more efficient CPU utilization and better performance, especially in multi-core or distributed systems. + +How Two-Level Scheduling Works +High-Level Scheduling: +In this first stage, the high-level scheduler assigns processes to different CPUs or processors based on factors like CPU load, process priority, and resource requirements. This scheduling happens less frequently and helps balance the load across multiple CPUs. +Low-Level Scheduling: +Once a process is assigned to a CPU, the low-level scheduler (often using algorithms like round-robin, shortest job next, or priority scheduling) manages the execution of processes on that CPU. This level of scheduling operates more frequently, focusing on the efficient and fair use of the CPU it controls. +Pros of Two-Level Scheduling +Improved Load Balancing: + +The high-level scheduler helps distribute workload across multiple CPUs, avoiding situations where some CPUs are overloaded while others are underutilized. This improves overall system performance and resource usage. +Enhanced Scalability: + +By offloading the process distribution responsibility to the high-level scheduler, the system can handle a large number of processes efficiently, making it ideal for multi-core and distributed systems. +Better Responsiveness and Fairness: + +With a dedicated scheduler on each CPU, low-level scheduling ensures quick, responsive task handling within each CPU, while the high-level scheduler maintains fairness across CPUs. +Efficient Multi-Core Usage: + +Two-level scheduling maximizes the use of multi-core processors by allowing CPUs to independently manage tasks assigned to them, leading to higher throughput. +Reduced Context Switching: + +Processes are assigned to a specific CPU by the high-level scheduler, reducing the need for frequent process migrations between CPUs, which minimizes context switching overhead. +Cons of Two-Level Scheduling +Increased Complexity: + +Managing two levels of scheduling requires more complex algorithms and data structures, especially when balancing load across CPUs and handling process migrations. +Higher Overhead: + +The two-layered approach can introduce overhead, as there are now two schedulers operating at different levels. The system needs to track which processes are assigned to which CPUs and manage load distribution. 
+Potential CPU Idle Time: + +If the high-level scheduler doesn't efficiently assign processes to each CPU, it may lead to scenarios where some CPUs remain idle or underutilized, which can reduce efficiency. +Challenges in Process Migration: + +Migrating processes between CPUs, if required, can be complex and may cause performance penalties. Process migration often requires additional mechanisms to transfer process states between CPUs without affecting performance. +Suboptimal for Single-Core Systems: + +Two-level scheduling adds unnecessary complexity in single-core or less complex environments, where simpler scheduling techniques (like round-robin or priority-based scheduling) would suffice. +Use Cases for Two-Level Scheduling +Multi-Core Systems: In systems with multiple cores or processors, two-level scheduling maximizes core usage and balances the load effectively. +Distributed Systems: In distributed computing environments, where multiple machines or nodes are available, two-level scheduling helps in resource allocation across nodes. +Real-Time and High-Performance Applications: For systems needing high responsiveness and parallelism, such as web servers, scientific computing, or cloud-based applications, two-level scheduling improves resource allocation and responsiveness. +Overall, two-level scheduling is a powerful approach for handling large-scale and multi-core workloads efficiently. However, it introduces additional complexity and overhead, making it most suitable for high-performance, multi-core, or distributed systems. + + + + + + From a3921c2fddaf740f86dee8ddd972b10e88daa175 Mon Sep 17 00:00:00 2001 From: Khwaish Chawla <126390524+khwaishchawla@users.noreply.github.com> Date: Sun, 10 Nov 2024 01:06:48 +0530 Subject: [PATCH 07/12] Create program.c --- Miscellaneous Algorithmstw/twolevel/program.c | 66 +++++++++++++++++++ 1 file changed, 66 insertions(+) create mode 100644 Miscellaneous Algorithmstw/twolevel/program.c diff --git a/Miscellaneous Algorithmstw/twolevel/program.c b/Miscellaneous Algorithmstw/twolevel/program.c new file mode 100644 index 00000000..e9b406b1 --- /dev/null +++ b/Miscellaneous Algorithmstw/twolevel/program.c @@ -0,0 +1,66 @@ +#include +#include +#include + +#define NUM_PROCESSES 5 + +// Structure for process details +struct Process { + int pid; // Process ID + int burstTime; // Burst time of the process + int group; // Group ID (for proportional scheduling) +}; + +// Function to simulate proportional scheduling +void proportionalScheduling(struct Process processes[], int numProcesses) { + int timeQuantum = 2; // Time quantum for each round + int timeElapsed = 0; + + // Keep track of remaining burst times + int remainingBurst[numProcesses]; + for (int i = 0; i < numProcesses; i++) { + remainingBurst[i] = processes[i].burstTime; + } + + printf("Starting Proportional Scheduling...\n"); + + // Continue until all processes are done + while (1) { + int allDone = 1; + + for (int i = 0; i < numProcesses; i++) { + if (remainingBurst[i] > 0) { + allDone = 0; + + // Execute for time quantum or remaining burst time + int execTime = (remainingBurst[i] > timeQuantum) ? 
timeQuantum : remainingBurst[i]; + remainingBurst[i] -= execTime; + timeElapsed += execTime; + + printf("Process %d (Group %d) ran for %d units\n", processes[i].pid, processes[i].group, execTime); + + // If process finished + if (remainingBurst[i] == 0) { + printf("Process %d completed at time %d\n", processes[i].pid, timeElapsed); + } + } + } + + // Check if all processes are done + if (allDone) break; + } +} + +int main() { + struct Process processes[NUM_PROCESSES] = { + {1, 8, 1}, // Process ID, Burst Time, Group + {2, 4, 2}, + {3, 9, 1}, + {4, 5, 2}, + {5, 7, 1} + }; + + proportionalScheduling(processes, NUM_PROCESSES); + + return 0; +} From 2f0e5e79772364ef2fc838a59dfbf2ba85895063 Mon Sep 17 00:00:00 2001 From: Khwaish Chawla <126390524+khwaishchawla@users.noreply.github.com> Date: Sun, 10 Nov 2024 01:08:55 +0530 Subject: [PATCH 08/12] Delete Miscellaneous Algorithmstw/twolevel directory --- Miscellaneous Algorithmstw/twolevel/program.c | 66 ------------------- Miscellaneous Algorithmstw/twolevel/readme.md | 50 -------------- 2 files changed, 116 deletions(-) delete mode 100644 Miscellaneous Algorithmstw/twolevel/program.c delete mode 100644 Miscellaneous Algorithmstw/twolevel/readme.md diff --git a/Miscellaneous Algorithmstw/twolevel/program.c b/Miscellaneous Algorithmstw/twolevel/program.c deleted file mode 100644 index e9b406b1..00000000 --- a/Miscellaneous Algorithmstw/twolevel/program.c +++ /dev/null @@ -1,66 +0,0 @@ -#include -#include -#include - -#define NUM_PROCESSES 5 - -// Structure for process details -struct Process { - int pid; // Process ID - int burstTime; // Burst time of the process - int group; // Group ID (for proportional scheduling) -}; - -// Function to simulate proportional scheduling -void proportionalScheduling(struct Process processes[], int numProcesses) { - int timeQuantum = 2; // Time quantum for each round - int timeElapsed = 0; - - // Keep track of remaining burst times - int remainingBurst[numProcesses]; - for (int i = 0; i < numProcesses; i++) { - remainingBurst[i] = processes[i].burstTime; - } - - printf("Starting Proportional Scheduling...\n"); - - // Continue until all processes are done - while (1) { - int allDone = 1; - - for (int i = 0; i < numProcesses; i++) { - if (remainingBurst[i] > 0) { - allDone = 0; - - // Execute for time quantum or remaining burst time - int execTime = (remainingBurst[i] > timeQuantum) ? timeQuantum : remainingBurst[i]; - remainingBurst[i] -= execTime; - timeElapsed += execTime; - - printf("Process %d (Group %d) ran for %d units\n", processes[i].pid, processes[i].group, execTime); - - // If process finished - if (remainingBurst[i] == 0) { - printf("Process %d completed at time %d\n", processes[i].pid, timeElapsed); - } - } - } - - // Check if all processes are done - if (allDone) break; - } -} - -int main() { - struct Process processes[NUM_PROCESSES] = { - {1, 8, 1}, // Process ID, Burst Time, Group - {2, 4, 2}, - {3, 9, 1}, - {4, 5, 2}, - {5, 7, 1} - }; - - proportionalScheduling(processes, NUM_PROCESSES); - - return 0; -} diff --git a/Miscellaneous Algorithmstw/twolevel/readme.md b/Miscellaneous Algorithmstw/twolevel/readme.md deleted file mode 100644 index 28d8886d..00000000 --- a/Miscellaneous Algorithmstw/twolevel/readme.md +++ /dev/null @@ -1,50 +0,0 @@ -Two-Level Scheduling is a CPU scheduling technique used primarily in systems with multiple processors or distributed environments. It separates scheduling into two distinct levels: a high-level (or global) scheduler and a low-level (or local) scheduler. 
The high-level scheduler assigns processes to specific CPUs, while the low-level scheduler manages the execution of those processes on each individual CPU. This approach allows for more efficient CPU utilization and better performance, especially in multi-core or distributed systems. - -How Two-Level Scheduling Works -High-Level Scheduling: -In this first stage, the high-level scheduler assigns processes to different CPUs or processors based on factors like CPU load, process priority, and resource requirements. This scheduling happens less frequently and helps balance the load across multiple CPUs. -Low-Level Scheduling: -Once a process is assigned to a CPU, the low-level scheduler (often using algorithms like round-robin, shortest job next, or priority scheduling) manages the execution of processes on that CPU. This level of scheduling operates more frequently, focusing on the efficient and fair use of the CPU it controls. -Pros of Two-Level Scheduling -Improved Load Balancing: - -The high-level scheduler helps distribute workload across multiple CPUs, avoiding situations where some CPUs are overloaded while others are underutilized. This improves overall system performance and resource usage. -Enhanced Scalability: - -By offloading the process distribution responsibility to the high-level scheduler, the system can handle a large number of processes efficiently, making it ideal for multi-core and distributed systems. -Better Responsiveness and Fairness: - -With a dedicated scheduler on each CPU, low-level scheduling ensures quick, responsive task handling within each CPU, while the high-level scheduler maintains fairness across CPUs. -Efficient Multi-Core Usage: - -Two-level scheduling maximizes the use of multi-core processors by allowing CPUs to independently manage tasks assigned to them, leading to higher throughput. -Reduced Context Switching: - -Processes are assigned to a specific CPU by the high-level scheduler, reducing the need for frequent process migrations between CPUs, which minimizes context switching overhead. -Cons of Two-Level Scheduling -Increased Complexity: - -Managing two levels of scheduling requires more complex algorithms and data structures, especially when balancing load across CPUs and handling process migrations. -Higher Overhead: - -The two-layered approach can introduce overhead, as there are now two schedulers operating at different levels. The system needs to track which processes are assigned to which CPUs and manage load distribution. -Potential CPU Idle Time: - -If the high-level scheduler doesn't efficiently assign processes to each CPU, it may lead to scenarios where some CPUs remain idle or underutilized, which can reduce efficiency. -Challenges in Process Migration: - -Migrating processes between CPUs, if required, can be complex and may cause performance penalties. Process migration often requires additional mechanisms to transfer process states between CPUs without affecting performance. -Suboptimal for Single-Core Systems: - -Two-level scheduling adds unnecessary complexity in single-core or less complex environments, where simpler scheduling techniques (like round-robin or priority-based scheduling) would suffice. -Use Cases for Two-Level Scheduling -Multi-Core Systems: In systems with multiple cores or processors, two-level scheduling maximizes core usage and balances the load effectively. 
-Distributed Systems: In distributed computing environments, where multiple machines or nodes are available, two-level scheduling helps in resource allocation across nodes. -Real-Time and High-Performance Applications: For systems needing high responsiveness and parallelism, such as web servers, scientific computing, or cloud-based applications, two-level scheduling improves resource allocation and responsiveness. -Overall, two-level scheduling is a powerful approach for handling large-scale and multi-core workloads efficiently. However, it introduces additional complexity and overhead, making it most suitable for high-performance, multi-core, or distributed systems. - - - - - - From 2a9a60017b769e3b05d7d13361b6b4189005ca0f Mon Sep 17 00:00:00 2001 From: Khwaish Chawla <126390524+khwaishchawla@users.noreply.github.com> Date: Sun, 10 Nov 2024 01:09:27 +0530 Subject: [PATCH 09/12] Delete Miscellaneous Algorithms/proportional scheduling/two level scheduling directory --- .../two level scheduling/README.md | 50 ------------ .../two level scheduling/program.c | 76 ------------------- 2 files changed, 126 deletions(-) delete mode 100644 Miscellaneous Algorithms/proportional scheduling/two level scheduling/README.md delete mode 100644 Miscellaneous Algorithms/proportional scheduling/two level scheduling/program.c diff --git a/Miscellaneous Algorithms/proportional scheduling/two level scheduling/README.md b/Miscellaneous Algorithms/proportional scheduling/two level scheduling/README.md deleted file mode 100644 index 28d8886d..00000000 --- a/Miscellaneous Algorithms/proportional scheduling/two level scheduling/README.md +++ /dev/null @@ -1,50 +0,0 @@ -Two-Level Scheduling is a CPU scheduling technique used primarily in systems with multiple processors or distributed environments. It separates scheduling into two distinct levels: a high-level (or global) scheduler and a low-level (or local) scheduler. The high-level scheduler assigns processes to specific CPUs, while the low-level scheduler manages the execution of those processes on each individual CPU. This approach allows for more efficient CPU utilization and better performance, especially in multi-core or distributed systems. - -How Two-Level Scheduling Works -High-Level Scheduling: -In this first stage, the high-level scheduler assigns processes to different CPUs or processors based on factors like CPU load, process priority, and resource requirements. This scheduling happens less frequently and helps balance the load across multiple CPUs. -Low-Level Scheduling: -Once a process is assigned to a CPU, the low-level scheduler (often using algorithms like round-robin, shortest job next, or priority scheduling) manages the execution of processes on that CPU. This level of scheduling operates more frequently, focusing on the efficient and fair use of the CPU it controls. -Pros of Two-Level Scheduling -Improved Load Balancing: - -The high-level scheduler helps distribute workload across multiple CPUs, avoiding situations where some CPUs are overloaded while others are underutilized. This improves overall system performance and resource usage. -Enhanced Scalability: - -By offloading the process distribution responsibility to the high-level scheduler, the system can handle a large number of processes efficiently, making it ideal for multi-core and distributed systems. 
-Better Responsiveness and Fairness: - -With a dedicated scheduler on each CPU, low-level scheduling ensures quick, responsive task handling within each CPU, while the high-level scheduler maintains fairness across CPUs. -Efficient Multi-Core Usage: - -Two-level scheduling maximizes the use of multi-core processors by allowing CPUs to independently manage tasks assigned to them, leading to higher throughput. -Reduced Context Switching: - -Processes are assigned to a specific CPU by the high-level scheduler, reducing the need for frequent process migrations between CPUs, which minimizes context switching overhead. -Cons of Two-Level Scheduling -Increased Complexity: - -Managing two levels of scheduling requires more complex algorithms and data structures, especially when balancing load across CPUs and handling process migrations. -Higher Overhead: - -The two-layered approach can introduce overhead, as there are now two schedulers operating at different levels. The system needs to track which processes are assigned to which CPUs and manage load distribution. -Potential CPU Idle Time: - -If the high-level scheduler doesn't efficiently assign processes to each CPU, it may lead to scenarios where some CPUs remain idle or underutilized, which can reduce efficiency. -Challenges in Process Migration: - -Migrating processes between CPUs, if required, can be complex and may cause performance penalties. Process migration often requires additional mechanisms to transfer process states between CPUs without affecting performance. -Suboptimal for Single-Core Systems: - -Two-level scheduling adds unnecessary complexity in single-core or less complex environments, where simpler scheduling techniques (like round-robin or priority-based scheduling) would suffice. -Use Cases for Two-Level Scheduling -Multi-Core Systems: In systems with multiple cores or processors, two-level scheduling maximizes core usage and balances the load effectively. -Distributed Systems: In distributed computing environments, where multiple machines or nodes are available, two-level scheduling helps in resource allocation across nodes. -Real-Time and High-Performance Applications: For systems needing high responsiveness and parallelism, such as web servers, scientific computing, or cloud-based applications, two-level scheduling improves resource allocation and responsiveness. -Overall, two-level scheduling is a powerful approach for handling large-scale and multi-core workloads efficiently. However, it introduces additional complexity and overhead, making it most suitable for high-performance, multi-core, or distributed systems. 
- - - - - - diff --git a/Miscellaneous Algorithms/proportional scheduling/two level scheduling/program.c b/Miscellaneous Algorithms/proportional scheduling/two level scheduling/program.c deleted file mode 100644 index 5dfb294f..00000000 --- a/Miscellaneous Algorithms/proportional scheduling/two level scheduling/program.c +++ /dev/null @@ -1,76 +0,0 @@ -#include -#include -#include - -#define NUM_CPUS 2 // Number of CPUs available -#define NUM_PROCESSES 5 // Number of processes - -// Structure for process details -struct Process { - int pid; // Process ID - int burstTime; // CPU burst time needed - int assignedCPU; // Assigned CPU ID -}; - -// Function to perform round-robin scheduling on a CPU -void roundRobinScheduler(struct Process processes[], int numProcesses, int cpuID, int timeQuantum) { - printf("\nCPU %d scheduling processes:\n", cpuID); - - int timeElapsed = 0; - int allDone; - - do { - allDone = 1; - for (int i = 0; i < numProcesses; i++) { - // Skip processes not assigned to this CPU - if (processes[i].assignedCPU != cpuID) - continue; - - if (processes[i].burstTime > 0) { - allDone = 0; - int execTime = (processes[i].burstTime > timeQuantum) ? timeQuantum : processes[i].burstTime; - - processes[i].burstTime -= execTime; - timeElapsed += execTime; - - printf("Process %d ran for %d units on CPU %d. Remaining burst time: %d\n", - processes[i].pid, execTime, cpuID, processes[i].burstTime); - - if (processes[i].burstTime == 0) { - printf("Process %d completed on CPU %d at time %d\n", processes[i].pid, cpuID, timeElapsed); - } - } - } - } while (!allDone); -} - -// Function to perform high-level CPU assignment for processes -void twoLevelScheduling(struct Process processes[], int numProcesses, int numCPUs, int timeQuantum) { - printf("Assigning processes to CPUs...\n"); - - // High-level scheduling: assign each process to a CPU in a round-robin fashion - for (int i = 0; i < numProcesses; i++) { - processes[i].assignedCPU = i % numCPUs; - printf("Process %d assigned to CPU %d\n", processes[i].pid, processes[i].assignedCPU); - } - - // Low-level scheduling: each CPU runs a round-robin scheduler for its assigned processes - for (int cpuID = 0; cpuID < numCPUs; cpuID++) { - roundRobinScheduler(processes, numProcesses, cpuID, timeQuantum); - } -} - -int main() { - struct Process processes[NUM_PROCESSES] = { - {1, 10, -1}, // Process ID, Burst Time, Assigned CPU (-1 indicates not assigned yet) - {2, 5, -1}, - {3, 8, -1}, - {4, 6, -1}, - {5, 12, -1} - }; - - int timeQuantum = 3; // Time quantum for round-robin scheduling - twoLevelScheduling(processes, NUM_PROCESSES, NUM_CPUS, timeQuantum); - - return 0; -} From c740a0900617dea5c757b54ccb8aa5d61b12d1c1 Mon Sep 17 00:00:00 2001 From: Khwaish Chawla <126390524+khwaishchawla@users.noreply.github.com> Date: Sun, 10 Nov 2024 01:10:25 +0530 Subject: [PATCH 10/12] Create program.c --- Miscellaneous Algorithms/twolevel/program.c | 66 +++++++++++++++++++++ 1 file changed, 66 insertions(+) create mode 100644 Miscellaneous Algorithms/twolevel/program.c diff --git a/Miscellaneous Algorithms/twolevel/program.c b/Miscellaneous Algorithms/twolevel/program.c new file mode 100644 index 00000000..e9b406b1 --- /dev/null +++ b/Miscellaneous Algorithms/twolevel/program.c @@ -0,0 +1,66 @@ +#include +#include +#include + +#define NUM_PROCESSES 5 + +// Structure for process details +struct Process { + int pid; // Process ID + int burstTime; // Burst time of the process + int group; // Group ID (for proportional scheduling) +}; + +// Function to simulate 
proportional scheduling +void proportionalScheduling(struct Process processes[], int numProcesses) { + int timeQuantum = 2; // Time quantum for each round + int timeElapsed = 0; + + // Keep track of remaining burst times + int remainingBurst[numProcesses]; + for (int i = 0; i < numProcesses; i++) { + remainingBurst[i] = processes[i].burstTime; + } + + printf("Starting Proportional Scheduling...\n"); + + // Continue until all processes are done + while (1) { + int allDone = 1; + + for (int i = 0; i < numProcesses; i++) { + if (remainingBurst[i] > 0) { + allDone = 0; + + // Execute for time quantum or remaining burst time + int execTime = (remainingBurst[i] > timeQuantum) ? timeQuantum : remainingBurst[i]; + remainingBurst[i] -= execTime; + timeElapsed += execTime; + + printf("Process %d (Group %d) ran for %d units\n", processes[i].pid, processes[i].group, execTime); + + // If process finished + if (remainingBurst[i] == 0) { + printf("Process %d completed at time %d\n", processes[i].pid, timeElapsed); + } + } + } + + // Check if all processes are done + if (allDone) break; + } +} + +int main() { + struct Process processes[NUM_PROCESSES] = { + {1, 8, 1}, // Process ID, Burst Time, Group + {2, 4, 2}, + {3, 9, 1}, + {4, 5, 2}, + {5, 7, 1} + }; + + proportionalScheduling(processes, NUM_PROCESSES); + + return 0; +} From 644469c7834714c3fcbe5913fa7170d53463583b Mon Sep 17 00:00:00 2001 From: Khwaish Chawla <126390524+khwaishchawla@users.noreply.github.com> Date: Sun, 10 Nov 2024 01:10:59 +0530 Subject: [PATCH 11/12] Create readme.md --- Miscellaneous Algorithms/twolevel/readme.md | 51 +++++++++++++++++++++ 1 file changed, 51 insertions(+) create mode 100644 Miscellaneous Algorithms/twolevel/readme.md diff --git a/Miscellaneous Algorithms/twolevel/readme.md b/Miscellaneous Algorithms/twolevel/readme.md new file mode 100644 index 00000000..641e97fe --- /dev/null +++ b/Miscellaneous Algorithms/twolevel/readme.md @@ -0,0 +1,51 @@ +**Two-Level Scheduling** is a CPU scheduling technique used primarily in systems with multiple processors or distributed environments. It separates scheduling into two distinct levels: a high-level (or global) scheduler and a low-level (or local) scheduler. The high-level scheduler assigns processes to specific CPUs, while the low-level scheduler manages the execution of those processes on each individual CPU. This approach allows for more efficient CPU utilization and better performance, especially in multi-core or distributed systems. + +### How Two-Level Scheduling Works + +1. **High-Level Scheduling**: + - In this first stage, the high-level scheduler assigns processes to different CPUs or processors based on factors like CPU load, process priority, and resource requirements. This scheduling happens less frequently and helps balance the load across multiple CPUs. + +2. **Low-Level Scheduling**: + - Once a process is assigned to a CPU, the low-level scheduler (often using algorithms like round-robin, shortest job next, or priority scheduling) manages the execution of processes on that CPU. This level of scheduling operates more frequently, focusing on the efficient and fair use of the CPU it controls. + +### Pros of Two-Level Scheduling + +1. **Improved Load Balancing**: + - The high-level scheduler helps distribute workload across multiple CPUs, avoiding situations where some CPUs are overloaded while others are underutilized. This improves overall system performance and resource usage. + +2. 
**Enhanced Scalability**: + - By offloading the process distribution responsibility to the high-level scheduler, the system can handle a large number of processes efficiently, making it ideal for multi-core and distributed systems. + +3. **Better Responsiveness and Fairness**: + - With a dedicated scheduler on each CPU, low-level scheduling ensures quick, responsive task handling within each CPU, while the high-level scheduler maintains fairness across CPUs. + +4. **Efficient Multi-Core Usage**: + - Two-level scheduling maximizes the use of multi-core processors by allowing CPUs to independently manage tasks assigned to them, leading to higher throughput. + +5. **Reduced Context Switching**: + - Processes are assigned to a specific CPU by the high-level scheduler, reducing the need for frequent process migrations between CPUs, which minimizes context switching overhead. + +### Cons of Two-Level Scheduling + +1. **Increased Complexity**: + - Managing two levels of scheduling requires more complex algorithms and data structures, especially when balancing load across CPUs and handling process migrations. + +2. **Higher Overhead**: + - The two-layered approach can introduce overhead, as there are now two schedulers operating at different levels. The system needs to track which processes are assigned to which CPUs and manage load distribution. + +3. **Potential CPU Idle Time**: + - If the high-level scheduler doesn't efficiently assign processes to each CPU, it may lead to scenarios where some CPUs remain idle or underutilized, which can reduce efficiency. + +4. **Challenges in Process Migration**: + - Migrating processes between CPUs, if required, can be complex and may cause performance penalties. Process migration often requires additional mechanisms to transfer process states between CPUs without affecting performance. + +5. **Suboptimal for Single-Core Systems**: + - Two-level scheduling adds unnecessary complexity in single-core or less complex environments, where simpler scheduling techniques (like round-robin or priority-based scheduling) would suffice. + +### Use Cases for Two-Level Scheduling + +- **Multi-Core Systems**: In systems with multiple cores or processors, two-level scheduling maximizes core usage and balances the load effectively. +- **Distributed Systems**: In distributed computing environments, where multiple machines or nodes are available, two-level scheduling helps in resource allocation across nodes. +- **Real-Time and High-Performance Applications**: For systems needing high responsiveness and parallelism, such as web servers, scientific computing, or cloud-based applications, two-level scheduling improves resource allocation and responsiveness. + +Overall, two-level scheduling is a powerful approach for handling large-scale and multi-core workloads efficiently. However, it introduces additional complexity and overhead, making it most suitable for high-performance, multi-core, or distributed systems. 
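The two-level program.c added earlier in this PR runs its per-CPU round-robin loops one after another on a single thread, so the CPUs are simulated sequentially. As a rough illustration of the "independent scheduler per CPU" idea described above, the sketch below gives each simulated CPU its own POSIX thread; the threading layout, the `CpuArgs` struct, and the burst values are assumptions added for this example, not part of the PR's code (compile with `-pthread`).

```c
#include <stdio.h>
#include <pthread.h>

#define NUM_CPUS 2
#define NUM_PROCESSES 4
#define TIME_QUANTUM 3

struct Proc { int pid; int burst; int assignedCPU; };

struct CpuArgs { int cpuID; struct Proc *procs; int n; };

/* Low-level stage: a round-robin scheduler per CPU, each in its own thread.
   A thread only touches processes assigned to its own cpuID, so the Proc
   entries are never shared between threads. */
static void *cpuScheduler(void *arg) {
    struct CpuArgs *a = (struct CpuArgs *)arg;
    int done;
    do {
        done = 1;
        for (int i = 0; i < a->n; i++) {
            struct Proc *p = &a->procs[i];
            if (p->assignedCPU != a->cpuID || p->burst <= 0) continue;
            done = 0;
            int exec = (p->burst > TIME_QUANTUM) ? TIME_QUANTUM : p->burst;
            p->burst -= exec;
            printf("CPU %d: process %d ran %d units (remaining %d)\n",
                   a->cpuID, p->pid, exec, p->burst);
        }
    } while (!done);
    return NULL;
}

int main(void) {
    struct Proc procs[NUM_PROCESSES] = {
        {1, 10, -1}, {2, 5, -1}, {3, 8, -1}, {4, 6, -1}
    };

    /* High-level stage: simple round-robin placement, the same rule used by
       the two-level program.c in this PR. */
    for (int i = 0; i < NUM_PROCESSES; i++)
        procs[i].assignedCPU = i % NUM_CPUS;

    pthread_t threads[NUM_CPUS];
    struct CpuArgs args[NUM_CPUS];
    for (int c = 0; c < NUM_CPUS; c++) {
        args[c] = (struct CpuArgs){ .cpuID = c, .procs = procs, .n = NUM_PROCESSES };
        pthread_create(&threads[c], NULL, cpuScheduler, &args[c]);
    }
    for (int c = 0; c < NUM_CPUS; c++)
        pthread_join(threads[c], NULL);
    return 0;
}
```

Because each thread loops only over its own CPU's processes, the interleaved output shows both simulated CPUs making progress concurrently, which is closer to how a two-level scheduler behaves on a real multi-core machine.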
From 24567ecfbd00800802d9b17a7395a20dc2fb75d4 Mon Sep 17 00:00:00 2001 From: Khwaish Chawla <126390524+khwaishchawla@users.noreply.github.com> Date: Sun, 10 Nov 2024 01:12:00 +0530 Subject: [PATCH 12/12] Delete Miscellaneous Algorithms/twolevel directory --- Miscellaneous Algorithms/twolevel/program.c | 66 --------------------- Miscellaneous Algorithms/twolevel/readme.md | 51 ---------------- 2 files changed, 117 deletions(-) delete mode 100644 Miscellaneous Algorithms/twolevel/program.c delete mode 100644 Miscellaneous Algorithms/twolevel/readme.md diff --git a/Miscellaneous Algorithms/twolevel/program.c b/Miscellaneous Algorithms/twolevel/program.c deleted file mode 100644 index e9b406b1..00000000 --- a/Miscellaneous Algorithms/twolevel/program.c +++ /dev/null @@ -1,66 +0,0 @@ -#include -#include -#include - -#define NUM_PROCESSES 5 - -// Structure for process details -struct Process { - int pid; // Process ID - int burstTime; // Burst time of the process - int group; // Group ID (for proportional scheduling) -}; - -// Function to simulate proportional scheduling -void proportionalScheduling(struct Process processes[], int numProcesses) { - int timeQuantum = 2; // Time quantum for each round - int timeElapsed = 0; - - // Keep track of remaining burst times - int remainingBurst[numProcesses]; - for (int i = 0; i < numProcesses; i++) { - remainingBurst[i] = processes[i].burstTime; - } - - printf("Starting Proportional Scheduling...\n"); - - // Continue until all processes are done - while (1) { - int allDone = 1; - - for (int i = 0; i < numProcesses; i++) { - if (remainingBurst[i] > 0) { - allDone = 0; - - // Execute for time quantum or remaining burst time - int execTime = (remainingBurst[i] > timeQuantum) ? timeQuantum : remainingBurst[i]; - remainingBurst[i] -= execTime; - timeElapsed += execTime; - - printf("Process %d (Group %d) ran for %d units\n", processes[i].pid, processes[i].group, execTime); - - // If process finished - if (remainingBurst[i] == 0) { - printf("Process %d completed at time %d\n", processes[i].pid, timeElapsed); - } - } - } - - // Check if all processes are done - if (allDone) break; - } -} - -int main() { - struct Process processes[NUM_PROCESSES] = { - {1, 8, 1}, // Process ID, Burst Time, Group - {2, 4, 2}, - {3, 9, 1}, - {4, 5, 2}, - {5, 7, 1} - }; - - proportionalScheduling(processes, NUM_PROCESSES); - - return 0; -} diff --git a/Miscellaneous Algorithms/twolevel/readme.md b/Miscellaneous Algorithms/twolevel/readme.md deleted file mode 100644 index 641e97fe..00000000 --- a/Miscellaneous Algorithms/twolevel/readme.md +++ /dev/null @@ -1,51 +0,0 @@ -**Two-Level Scheduling** is a CPU scheduling technique used primarily in systems with multiple processors or distributed environments. It separates scheduling into two distinct levels: a high-level (or global) scheduler and a low-level (or local) scheduler. The high-level scheduler assigns processes to specific CPUs, while the low-level scheduler manages the execution of those processes on each individual CPU. This approach allows for more efficient CPU utilization and better performance, especially in multi-core or distributed systems. - -### How Two-Level Scheduling Works - -1. **High-Level Scheduling**: - - In this first stage, the high-level scheduler assigns processes to different CPUs or processors based on factors like CPU load, process priority, and resource requirements. This scheduling happens less frequently and helps balance the load across multiple CPUs. - -2. 
**Low-Level Scheduling**: - - Once a process is assigned to a CPU, the low-level scheduler (often using algorithms like round-robin, shortest job next, or priority scheduling) manages the execution of processes on that CPU. This level of scheduling operates more frequently, focusing on the efficient and fair use of the CPU it controls. - -### Pros of Two-Level Scheduling - -1. **Improved Load Balancing**: - - The high-level scheduler helps distribute workload across multiple CPUs, avoiding situations where some CPUs are overloaded while others are underutilized. This improves overall system performance and resource usage. - -2. **Enhanced Scalability**: - - By offloading the process distribution responsibility to the high-level scheduler, the system can handle a large number of processes efficiently, making it ideal for multi-core and distributed systems. - -3. **Better Responsiveness and Fairness**: - - With a dedicated scheduler on each CPU, low-level scheduling ensures quick, responsive task handling within each CPU, while the high-level scheduler maintains fairness across CPUs. - -4. **Efficient Multi-Core Usage**: - - Two-level scheduling maximizes the use of multi-core processors by allowing CPUs to independently manage tasks assigned to them, leading to higher throughput. - -5. **Reduced Context Switching**: - - Processes are assigned to a specific CPU by the high-level scheduler, reducing the need for frequent process migrations between CPUs, which minimizes context switching overhead. - -### Cons of Two-Level Scheduling - -1. **Increased Complexity**: - - Managing two levels of scheduling requires more complex algorithms and data structures, especially when balancing load across CPUs and handling process migrations. - -2. **Higher Overhead**: - - The two-layered approach can introduce overhead, as there are now two schedulers operating at different levels. The system needs to track which processes are assigned to which CPUs and manage load distribution. - -3. **Potential CPU Idle Time**: - - If the high-level scheduler doesn't efficiently assign processes to each CPU, it may lead to scenarios where some CPUs remain idle or underutilized, which can reduce efficiency. - -4. **Challenges in Process Migration**: - - Migrating processes between CPUs, if required, can be complex and may cause performance penalties. Process migration often requires additional mechanisms to transfer process states between CPUs without affecting performance. - -5. **Suboptimal for Single-Core Systems**: - - Two-level scheduling adds unnecessary complexity in single-core or less complex environments, where simpler scheduling techniques (like round-robin or priority-based scheduling) would suffice. - -### Use Cases for Two-Level Scheduling - -- **Multi-Core Systems**: In systems with multiple cores or processors, two-level scheduling maximizes core usage and balances the load effectively. -- **Distributed Systems**: In distributed computing environments, where multiple machines or nodes are available, two-level scheduling helps in resource allocation across nodes. -- **Real-Time and High-Performance Applications**: For systems needing high responsiveness and parallelism, such as web servers, scientific computing, or cloud-based applications, two-level scheduling improves resource allocation and responsiveness. - -Overall, two-level scheduling is a powerful approach for handling large-scale and multi-core workloads efficiently. 
However, it also introduces additional complexity and overhead, so it is best reserved for high-performance, multi-core, or distributed systems where those costs are outweighed by the gains.