“Operating System” Chapters to Read
CPU Scheduling
CPU scheduling is the method used by the operating system to decide which process gets the CPU next when multiple processes are ready to run.
Since the CPU can execute only one process per core at a time, scheduling ensures efficient and fair usage of the CPU.
Scheduling Criteria
Scheduling criteria are the standards used to evaluate and compare different CPU scheduling algorithms in an operating system. When the OS chooses which process runs next, it aims to optimize certain performance goals.
Goals
- CPU utilization: keep the CPU as busy as possible; higher utilization generally means better performance.
- Turnaround time: total time from submission of a process to its completion (waiting time + execution time).
- Waiting time: time a process spends waiting in the ready queue.
- Response time: time from request submission to the first response; important for interactive systems (like web apps).
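The relationships among these criteria can be shown with a small sketch. The process timings below are invented for illustration, not taken from the chapter:

```python
# Hypothetical sketch: computing turnaround and waiting time for one
# completed process, using the definitions above.

def metrics(arrival, completion, burst):
    """Turnaround = completion - arrival; waiting = turnaround - burst."""
    turnaround = completion - arrival
    waiting = turnaround - burst
    return turnaround, waiting

# A process arrives at t=0, finishes at t=10, and needed 4 units of CPU.
t, w = metrics(arrival=0, completion=10, burst=4)
print(t, w)  # turnaround 10, waiting 6
```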
Scheduling algorithms
Scheduling algorithms are methods used by the operating system to decide which process gets the CPU next from the ready queue. The goal is to optimize CPU utilization, waiting time, turnaround time, response time, and fairness.
FCFS (first-come, first-served)
- Processes are executed in the order they arrive.
- Simple but may cause long waiting time (convoy effect).
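A minimal FCFS sketch (the burst values are assumed example data): each process waits for the total burst time of everything that arrived before it.

```python
# FCFS waiting-time sketch: processes run strictly in arrival order.

def fcfs_waiting_times(bursts):
    """Return the waiting time of each process, given bursts in arrival order."""
    waits, elapsed = [], 0
    for burst in bursts:
        waits.append(elapsed)  # waits for all earlier bursts to finish
        elapsed += burst
    return waits

# One long job at the front delays everyone behind it (convoy effect).
print(fcfs_waiting_times([24, 3, 3]))  # [0, 24, 27]
```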
SJF (shortest job first)
- Process with the shortest execution time runs first.
- Minimizes average waiting time.
- Hard to predict exact job length.
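Non-preemptive SJF can be sketched by sorting the ready jobs by burst length and then applying the same FCFS accounting. This assumes all jobs arrive at time 0, which is a simplification:

```python
# SJF sketch: run the shortest burst first, then compute the average wait.

def sjf_average_wait(bursts):
    order = sorted(bursts)  # shortest job first
    waits, elapsed = [], 0
    for burst in order:
        waits.append(elapsed)
        elapsed += burst
    return sum(waits) / len(waits)

# Same workload as the FCFS example, but a much lower average wait.
print(sjf_average_wait([24, 3, 3]))  # 3.0
```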
Priority scheduling
- Each process is assigned a priority number.
- Higher priority runs first.
- Can cause starvation (low-priority processes may never run).
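Priority selection is naturally expressed with a min-heap. In this sketch lower numbers mean higher priority, which is one common convention; the process names are made up:

```python
# Priority scheduling sketch using a heap of (priority, name) pairs.

import heapq

def priority_order(procs):
    """procs: (priority, name) pairs; lower number = higher priority."""
    heap = list(procs)
    heapq.heapify(heap)
    order = []
    while heap:
        _, name = heapq.heappop(heap)  # highest-priority process next
        order.append(name)
    return order

print(priority_order([(3, "editor"), (1, "kernel"), (2, "logger")]))
# ['kernel', 'logger', 'editor']
```

Note that nothing here prevents starvation: a steady stream of high-priority arrivals would keep low-priority entries at the bottom of the heap forever. Real schedulers counter this with aging (gradually raising the priority of waiting processes).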
Round robin (RR)
- Each process gets a fixed time slice.
- After its time expires, it goes to the back of the queue.
- Fair and widely used in time-sharing systems.
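The rotation described above can be sketched with a queue; the burst values and quantum are illustrative assumptions:

```python
# Round-robin sketch: each process runs for one quantum, then goes to
# the back of the queue if it still has work left.

from collections import deque

def round_robin(bursts, quantum):
    """bursts: {name: total CPU time needed}. Returns the slice order."""
    queue = deque(bursts.items())
    trace = []
    while queue:
        name, remaining = queue.popleft()
        trace.append(name)           # this process gets one time slice
        remaining -= quantum
        if remaining > 0:
            queue.append((name, remaining))  # back of the queue
    return trace

print(round_robin({"A": 5, "B": 3}, quantum=2))  # ['A', 'B', 'A', 'B', 'A']
```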
Thread scheduling
Thread scheduling is the process by which the operating system decides which thread gets the CPU next.
- Instead of scheduling whole processes, the OS may schedule individual threads.
- Threads within the same process may compete for CPU.
- Important in multithreaded applications.
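Two threads inside one process competing for the CPU can be demonstrated directly. This is only a sketch (the thread names and loop counts are invented); the interpreter and OS decide how the two threads are interleaved:

```python
# Two threads of one process, scheduled by the OS rather than as a
# single unit. Each thread updates only its own counter.

import threading

counts = {"worker-1": 0, "worker-2": 0}

def work(name):
    for _ in range(10000):
        counts[name] += 1  # threads share the process's memory

threads = [threading.Thread(target=work, args=(n,)) for n in counts]
for t in threads:
    t.start()   # both threads are now competing for CPU time
for t in threads:
    t.join()    # wait for both to finish

print(counts)
```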
Multiprocessor scheduling
Multiprocessor scheduling is used in systems that have multiple CPUs or multiple cores.
The goal is to efficiently distribute processes/threads across processors to maximize performance and ensure load balancing.
Types: Asymmetric multiprocessing
- One master processor handles scheduling, I/O, and system activities.
- Other processors (slaves) execute tasks assigned by the master.
Advantages: simple design, easier to manage.
Types: Symmetric multiprocessing
- Each CPU runs its own scheduler.
- All CPUs are treated equally.
- Any process or thread can run on any processor.
Advantages: better scalability, no single point of failure, efficient load balancing.
Load balancing
Load balancing is the process of distributing work evenly across multiple CPUs or cores so that no processor is overloaded while others are idle. It is especially important in multiprocessor or multicore systems.
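A toy model of the goal: assign each task to the currently least-loaded CPU so total work is spread evenly. The greedy rule and the task costs here are illustrative assumptions, not a real kernel policy:

```python
# Greedy load-balancing sketch: biggest tasks first, each to the
# least-loaded CPU.

def balance(tasks, n_cpus):
    loads = [0] * n_cpus
    for cost in sorted(tasks, reverse=True):  # place biggest tasks first
        idx = loads.index(min(loads))         # least-loaded CPU so far
        loads[idx] += cost
    return loads

# Five tasks spread across two CPUs end up perfectly balanced here.
print(balance([7, 4, 3, 2, 2], n_cpus=2))  # [9, 9]
```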
Real-time CPU scheduling
Real-time CPU scheduling is a scheduling method used in systems where tasks must complete within specific time limits (deadlines).
Types: Hard real-time systems
- Deadlines are strict and must never be missed.
- Missing a deadline is considered a system failure.
- Often used in safety-critical systems.
- Examples: airbag control systems, pacemakers, flight control systems.
Types: Soft real-time systems
- Deadlines are important but occasional misses are acceptable.
- Missing deadlines may reduce performance, but the system continues working.
- Examples: video streaming, online gaming, multimedia applications.
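One widely used real-time policy is Earliest-Deadline-First (EDF): always run the ready task whose deadline is closest. The sketch below checks whether every deadline is met; the task names, bursts, and deadlines are invented for illustration:

```python
# EDF sketch: sort tasks by deadline, run them in that order, and flag
# any missed deadline (which a hard real-time system treats as failure).

def edf_schedule(tasks):
    """tasks: (name, burst, deadline) triples. Returns (order, all_met)."""
    order, clock, all_met = [], 0, True
    for name, burst, deadline in sorted(tasks, key=lambda t: t[2]):
        clock += burst          # task runs to completion
        order.append(name)
        if clock > deadline:
            all_met = False     # deadline missed
    return order, all_met

print(edf_schedule([("brake", 2, 3), ("log", 4, 10), ("sensor", 1, 5)]))
# (['brake', 'sensor', 'log'], True)
```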