Name: Nguyen Phuong Nam (Andrew)
Class: Diploma in Information Technology
– CPU scheduling is the process of allowing one process to use the CPU while the execution of another is put on hold (in the waiting state). The aim of CPU scheduling is to make the system more efficient, fast, and fair.
– There are four conditions under which CPU scheduling takes place:
1. When a process switches from the running state to the waiting state.
2. When the process switches from the running state to the ready state.
3. When the process switches from the waiting state to the ready state.
4. When the process terminates.
For conditions (1) and (4): there is no choice (non-preemptive) – a new process must be selected.
For conditions (2) and (3): there is a choice (preemptive) – either select a new process or continue running the current one.
Common CPU scheduling algorithms include:
First-Come, First-Served (FCFS) scheduling
Shortest-Job-First (SJF) scheduling
Round-Robin scheduling
Multilevel Queue scheduling
Multilevel Feedback Queue scheduling
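To make the difference between the first two algorithms concrete, here is a minimal sketch (the burst times are a made-up example, not from any real workload) comparing the average waiting time of FCFS with non-preemptive SJF when all jobs arrive at time 0:

```python
# Sketch: average waiting time under FCFS vs. non-preemptive SJF.
# All jobs are assumed to arrive at time 0; burst times are example values.

def avg_waiting_time(bursts):
    """Each job waits for the sum of the bursts that run before it."""
    waiting, elapsed = 0, 0
    for burst in bursts:
        waiting += elapsed
        elapsed += burst
    return waiting / len(bursts)

bursts = [24, 3, 3]                      # hypothetical bursts, in arrival order
fcfs = avg_waiting_time(bursts)          # FCFS: run in arrival order
sjf = avg_waiting_time(sorted(bursts))   # SJF: run shortest burst first

print(fcfs)  # 17.0
print(sjf)   # 3.0
```

Running the long job first makes the two short jobs wait behind it; SJF reorders them and cuts the average waiting time sharply, which is why SJF is provably optimal for this metric.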
2.1 Windows 10
– Scheduling Priorities: Threads are scheduled to run based on their scheduling priority. Each thread is assigned a scheduling priority, which ranges from zero (lowest priority) to thirty-one (highest priority). Only the zero-page thread has a priority of zero. The system treats all threads with the same priority as equal: it assigns time slices in round-robin fashion to all threads at the highest priority. If none of those threads is ready to run, the system assigns time slices in round-robin fashion to all threads at the next highest priority. If a higher-priority thread becomes ready to run, the system stops executing the lower-priority thread (without letting it finish its time slice) and assigns a full time slice to the higher-priority thread.
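The dispatch rule described above can be sketched as follows. This is a toy model, not Windows code: the thread names and priority values are made-up examples, and a real scheduler tracks far more state.

```python
# Toy model of the dispatch rule: time slices go round-robin to the
# ready threads at the highest priority level; lower levels run only
# when no higher-priority thread is ready.

from collections import deque

def build_queues(threads):
    """Map each priority level (0-31) to a FIFO queue of ready threads."""
    queues = {p: deque() for p in range(32)}
    for name, priority in threads:
        queues[priority].append(name)
    return queues

def next_thread(queues):
    """Dispatch from the highest non-empty priority queue, rotating it."""
    for priority in range(31, -1, -1):
        if queues[priority]:
            thread = queues[priority].popleft()
            queues[priority].append(thread)  # back of its own level's queue
            return thread
    return None  # no ready thread at any level

queues = build_queues([("A", 8), ("B", 8), ("C", 4)])
print([next_thread(queues) for _ in range(4)])  # ['A', 'B', 'A', 'B']
```

Note how thread C at priority 4 never runs while A and B are ready: this starvation of lower levels is exactly what the priority boosts described later are meant to counteract.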
– Priority class
Idle priority class
Below normal priority class
Normal priority class
Above normal priority class
High priority class
Real-time priority class
– Priority Inversion: Priority inversion occurs when two or more threads with different priorities contend for a shared resource. For example, take three threads. Thread 1 has high priority and is ready to be scheduled; thread 2 has low priority and is executing code in a critical section. Thread 1 begins to wait for a resource held by thread 2. Thread 3 has medium priority, and it receives all the processing time: the high-priority thread (thread 1) is blocked waiting for a resource held by the low-priority thread (thread 2), and thread 2 never leaves the critical section because it does not have the highest priority and so is never scheduled.
– The scheduler solves this problem by randomly boosting the priority of the ready threads (in this case, the low priority lock-holders). The low priority threads run long enough to exit the critical section, and the high-priority thread can enter the critical section. If the low-priority thread does not get enough CPU time to exit the critical section the first time, it will get another chance during the next round of scheduling.
– Priority Boosts: Each thread has a dynamic priority. This is the priority the scheduler uses to decide which thread to execute. Initially, a thread's dynamic priority is the same as its base priority. The system can raise and lower the dynamic priority to keep the system responsive and to ensure that no thread is starved of processing time. The system does not boost threads with a base priority of 16 to 31; only threads with a base priority in the range 0 to 15 receive dynamic priority boosts.
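The boost rule in the last paragraph can be sketched as a small function. This is an illustrative model only: the boost amount is a made-up parameter, and the cap of 15 on boosted dynamic priority is an assumption that keeps boosted threads out of the real-time range, consistent with the rule that priorities 16 to 31 are never boosted.

```python
# Sketch of the boost rule: only base priorities 0-15 receive a dynamic
# boost, and (assumption) the boosted dynamic priority is capped at 15
# so a boost can never push a thread into the real-time range (16-31).

def boosted_priority(base, boost):
    """Apply a dynamic boost to a base priority, per the rule above."""
    if base >= 16:                   # real-time range: never boosted
        return base
    return min(base + boost, 15)     # dynamic priority capped at 15

print(boosted_priority(8, 4))   # 12
print(boosted_priority(14, 4))  # 15 (capped below the real-time range)
print(boosted_priority(24, 4))  # 24 (real-time base priority, unchanged)
```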