Scheduling (computing)
In computing, scheduling is the action of assigning resources to perform tasks. The resources may be processors, network links or expansion cards. The tasks may be threads, processes or data flows.
The scheduling activity is carried out by a process called a scheduler. Schedulers are often designed so as to keep all computer resources busy (as in load balancing), allow multiple users to share system resources effectively, or to achieve a target quality of service.
Scheduling is fundamental to computation itself, and an intrinsic part of the execution model of a computer system; the concept of scheduling makes it possible to have computer multitasking with a single central processing unit (CPU).
Goals
A scheduler may aim at one or more goals, for instance (a small code sketch computing these metrics follows the list):
- maximizing throughput (the total amount of work completed per time unit);
- minimizing wait time (time from work becoming ready until the first point it begins execution);
- minimizing latency or response time (time from work becoming ready until it is finished in the case of batch activity,[1][2][3] or until the system responds and hands the first output to the user in the case of interactive activity);[4]
- maximizing fairness (equal CPU time to each process, or more generally appropriate times according to the priority and workload of each process).
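As a minimal illustration of how these metrics relate, the following Python sketch computes throughput, average waiting time and average turnaround time for a workload that has already finished. The job names and times are assumptions made purely for the example.

```python
from dataclasses import dataclass

@dataclass
class Completed:
    arrival: float   # time the job became ready
    start: float     # time it first received the CPU
    finish: float    # time it completed

def metrics(jobs):
    """Compute common scheduling metrics for a finished workload."""
    n = len(jobs)
    makespan = max(j.finish for j in jobs) - min(j.arrival for j in jobs)
    return {
        "throughput": n / makespan,                                     # jobs per time unit
        "avg_waiting": sum(j.start - j.arrival for j in jobs) / n,      # ready -> first run
        "avg_turnaround": sum(j.finish - j.arrival for j in jobs) / n,  # ready -> finished
    }

# Three hypothetical jobs scheduled back to back
print(metrics([Completed(0, 0, 3), Completed(1, 3, 5), Completed(2, 5, 9)]))
```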
In practice, these goals often conflict (e.g. throughput versus latency), thus a scheduler will implement a suitable compromise. Preference is given to any one of the concerns mentioned above, depending upon the user's needs and objectives.
In real-time environments, such as embedded systems for automatic control in industry (for example robotics), the scheduler also must ensure that processes can meet deadlines; this is crucial for keeping the system stable. Scheduled tasks can also be distributed to remote devices across a network and managed through an administrative back end.
Types of operating system schedulers
The scheduler is an operating system module that selects the next jobs to be admitted into the system and the next process to run. Operating systems may feature up to three distinct scheduler types: a long-term scheduler (also known as an admission scheduler or high-level scheduler), a mid-term or medium-term scheduler, and a short-term scheduler. The names suggest the relative frequency with which their functions are performed.
Process scheduler
The process scheduler is a part of the operating system that decides which process runs at a certain point in time. It usually has the ability to pause a running process, move it to the back of the running queue and start a new process; such a scheduler is known as a preemptive scheduler, otherwise it is a cooperative scheduler.[5]
We distinguish between "long-term scheduling", "medium-term scheduling", and "short-term scheduling" based on how frequently decisions must be made.[6]
Long-term scheduling
The long-term scheduler, or admission scheduler, decides which jobs or processes are to be admitted to the ready queue (in main memory); that is, when an attempt is made to execute a program, its admission to the set of currently executing processes is either authorized or delayed by the long-term scheduler. Thus, this scheduler dictates what processes are to run on a system, and the degree of concurrency to be supported at any one time – whether many or few processes are to be executed concurrently, and how the split between I/O-intensive and CPU-intensive processes is to be handled. The long-term scheduler is responsible for controlling the degree of multiprogramming.
In general, most processes can be described as either I/O-bound or CPU-bound. An I/O-bound process is one that spends more of its time doing I/O than it spends doing computations. A CPU-bound process, in contrast, generates I/O requests infrequently, using more of its time doing computations. It is important that a long-term scheduler selects a good process mix of I/O-bound and CPU-bound processes. If all processes are I/O-bound, the ready queue will almost always be empty, and the short-term scheduler will have little to do. On the other hand, if all processes are CPU-bound, the I/O waiting queue will almost always be empty, devices will go unused, and again the system will be unbalanced. The system with the best performance will thus have a combination of CPU-bound and I/O-bound processes. In modern operating systems, this is used to make sure that real-time processes get enough CPU time to finish their tasks.[7]
Long-term scheduling is also important in large-scale systems such as batch processing systems, computer clusters, supercomputers, and render farms. For example, in concurrent systems, coscheduling of interacting processes is often required to prevent them from blocking due to waiting on each other. In these cases, special-purpose job scheduler software is typically used to assist these functions, in addition to any underlying admission scheduling support in the operating system.
Some operating systems only allow new tasks to be added if it is certain all real-time deadlines can still be met. The specific heuristic algorithm used by an operating system to accept or reject new tasks is the admission control mechanism.[8]
Medium-term scheduling
The medium-term scheduler temporarily removes processes from main memory and places them in secondary memory (such as a hard disk drive) or vice versa, which is commonly referred to as "swapping out" or "swapping in" (also incorrectly as "paging out" or "paging in"). The medium-term scheduler may decide to swap out a process which has not been active for some time, or a process which has a low priority, or a process which is page faulting frequently, or a process which is taking up a large amount of memory in order to free up main memory for other processes, swapping the process back in later when more memory is available, or when the process has been unblocked and is no longer waiting for a resource. [Stallings, 396] [Stallings, 370]
In many systems today (those that support mapping virtual address space to secondary storage other than the swap file), the medium-term scheduler may actually perform the role of the long-term scheduler, by treating binaries as "swapped out processes" upon their execution. In this way, when a segment of the binary is required it can be swapped in on demand, or "lazy loaded",[Stallings, 394] also called demand paging.
Short-term scheduling
The short-term scheduler (also known as the CPU scheduler) decides which of the ready, in-memory processes is to be executed (allocated a CPU) after a clock interrupt, an I/O interrupt, an operating system call or another form of signal. Thus the short-term scheduler makes scheduling decisions much more frequently than the long-term or mid-term schedulers – a scheduling decision will at a minimum have to be made after every time slice, and these are very short. This scheduler can be preemptive, implying that it is capable of forcibly removing processes from a CPU when it decides to allocate that CPU to another process, or non-preemptive (also known as "voluntary" or "co-operative"), in which case the scheduler is unable to "force" processes off the CPU.
A preemptive scheduler relies upon a programmable interval timer which invokes an interrupt handler that runs in kernel mode and implements the scheduling function.
Dispatcher
Another component that is involved in the CPU-scheduling function is the dispatcher, which is the module that gives control of the CPU to the process selected by the short-term scheduler. It receives control in kernel mode as the result of an interrupt or system call. The functions of a dispatcher involve the following:
- Context switches, in which the dispatcher saves the state (also known as context) of the process or thread that was previously running; the dispatcher then loads the initial or previously saved state of the new process.
- Switching to user mode.
- Jumping to the proper location in the user program to restart that program indicated by its new state.
The dispatcher should be as fast as possible, since it is invoked during every process switch. During the context switches, the processor is virtually idle for a fraction of time, thus unnecessary context switches should be avoided. The time it takes for the dispatcher to stop one process and start another is known as the dispatch latency.[7]: 155
Scheduling disciplines
A scheduling discipline (also called scheduling policy or scheduling algorithm) is an algorithm used for distributing resources among parties which simultaneously and asynchronously request them. Scheduling disciplines are used in routers (to handle packet traffic) as well as in operating systems (to share CPU time among both threads and processes), disk drives (I/O scheduling), printers (print spooler), most embedded systems, etc.
The main purposes of scheduling algorithms are to minimize resource starvation and to ensure fairness among the parties utilizing the resources. Scheduling deals with the problem of deciding which of the outstanding requests is to be allocated resources. There are many different scheduling algorithms. In this section, we introduce several of them.
In packet-switched computer networks and other statistical multiplexing, the notion of a scheduling algorithm is used as an alternative to first-come, first-served queuing of data packets.
The simplest best-effort scheduling algorithms are round-robin, fair queuing (a max-min fair scheduling algorithm), proportional-fair scheduling and maximum throughput. If differentiated or guaranteed quality of service is offered, as opposed to best-effort communication, weighted fair queuing may be utilized.
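The following sketch illustrates the idea behind weighted fair queuing in a simplified, fully backlogged setting: each packet is tagged with a virtual finish time proportional to its size divided by its flow's weight, and packets are sent in tag order. The flows, sizes and weights are assumptions for the example; real WFQ implementations also track a system-wide virtual time and packet arrivals.

```python
def weighted_fair_queue(packets, weights):
    """packets: list of (flow, size), all backlogged at once.
    Each packet is tagged with previous_finish(flow) + size / weight(flow);
    packets are then sent in order of these virtual finish times."""
    last_finish = {flow: 0.0 for flow in weights}
    tagged = []
    for seq, (flow, size) in enumerate(packets):
        finish = last_finish[flow] + size / weights[flow]
        last_finish[flow] = finish
        tagged.append((finish, seq, flow, size))
    return [(flow, size) for _, _, flow, size in sorted(tagged)]

# Flow "a" has twice the weight of flow "b"; even though both of b's packets
# were enqueued first, the output interleaves them so neither flow is starved.
print(weighted_fair_queue([("b", 100), ("b", 100), ("a", 100), ("a", 100)],
                          {"a": 2.0, "b": 1.0}))
```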
In advanced packet radio wireless networks such as the HSDPA (High-Speed Downlink Packet Access) 3.5G cellular system, channel-dependent scheduling may be used to take advantage of channel state information. If the channel conditions are favourable, the throughput and system spectral efficiency may be increased. In even more advanced systems such as LTE, the scheduling is combined by channel-dependent packet-by-packet dynamic channel allocation, or by assigning OFDMA multi-carriers or other frequency-domain equalization components to the users that best can utilize them.[9]
First come, first served
First in, first out (FIFO), also known as first come, first served (FCFS), is the simplest scheduling algorithm. FIFO simply queues processes in the order that they arrive in the ready queue. This is commonly used for a task queue; a minimal simulation of the policy follows the list below.
- Since context switches only occur upon process termination, and no reorganization of the process queue is required, scheduling overhead is minimal.
- Throughput can be low, because long processes can be holding the CPU, causing the short processes to wait for a long time (known as the convoy effect).
- No starvation, because each process gets a chance to be executed after a definite time.
- Turnaround time, waiting time and response time depend on the order of their arrival and can be high for the same reasons above.
- No prioritization occurs, thus this system has trouble meeting process deadlines.
- The lack of prioritization means that as long as every process eventually completes, there is no starvation. In an environment where some processes might not complete, there can be starvation.
- It is based on queuing.
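A minimal FCFS sketch, assuming hypothetical job names, arrival times and burst lengths: each job simply runs to completion in arrival order, which also makes the convoy effect described above visible.

```python
from collections import deque

def fcfs(jobs):
    """jobs: list of (name, arrival, burst). Returns (name, start, finish)
    tuples; each job runs to completion in arrival order."""
    queue = deque(sorted(jobs, key=lambda j: j[1]))
    clock, schedule = 0, []
    while queue:
        name, arrival, burst = queue.popleft()
        clock = max(clock, arrival)          # CPU may sit idle until the job arrives
        schedule.append((name, clock, clock + burst))
        clock += burst
    return schedule

# Short jobs B and C wait behind the long job A (the convoy effect)
print(fcfs([("A", 0, 8), ("B", 1, 2), ("C", 2, 2)]))
```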
Priority scheduling
Earliest deadline first (EDF) or least time to go is a dynamic scheduling algorithm used in real-time operating systems to place processes in a priority queue. Whenever a scheduling event occurs (a task finishes, a new task is released, etc.), the queue will be searched for the process closest to its deadline, which will be the next to be scheduled for execution.
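A non-preemptive sketch of the idea, with hypothetical release times, deadlines and burst lengths: at every scheduling event the ready task with the earliest deadline is taken from a priority queue. A real EDF scheduler in an RTOS is preemptive, so this is only an approximation of the policy.

```python
import heapq

def edf(tasks):
    """tasks: list of (release, deadline, burst, name), non-preemptive sketch.
    At each scheduling event the ready task with the earliest deadline runs."""
    tasks = sorted(tasks)                      # by release time
    ready, clock, order, i = [], 0, [], 0
    while i < len(tasks) or ready:
        while i < len(tasks) and tasks[i][0] <= clock:
            rel, deadline, burst, name = tasks[i]
            heapq.heappush(ready, (deadline, burst, name))
            i += 1
        if not ready:                          # idle until the next release
            clock = tasks[i][0]
            continue
        deadline, burst, name = heapq.heappop(ready)
        clock += burst
        order.append((name, clock, "met" if clock <= deadline else "missed"))
    return order

print(edf([(0, 7, 3, "T1"), (1, 6, 2, "T2"), (2, 10, 1, "T3")]))
```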
Shortest remaining time first
Similar to shortest job first (SJF). With this strategy the scheduler arranges processes with the least estimated processing time remaining to be next in the queue. This requires advance knowledge or estimations about the time required for a process to complete. A small simulation follows the list below.
- If a shorter process arrives during another process's execution, the currently running process is interrupted (known as preemption), dividing that process into two separate computing blocks. This creates excess overhead through additional context switching. The scheduler must also place each incoming process into a specific place in the queue, creating additional overhead.
- This algorithm is designed for maximum throughput in most scenarios.
- Waiting time and response time increase as the process's computational requirements increase. Since turnaround time is based on waiting time plus processing time, longer processes are significantly affected by this. Overall waiting time is smaller than FIFO, however, since no process has to wait for the termination of the longest process.
- No particular attention is given to deadlines; the programmer can only attempt to make processes with deadlines as short as possible.
- Starvation is possible, especially in a busy system with many small processes being run.
- To use this policy we should have at least two processes of different priority.
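A unit-time simulation of shortest remaining time first with assumed arrival and burst times: whenever a job with a smaller remaining time becomes ready, it preempts the running one.

```python
def srtf(jobs):
    """jobs: list of (name, arrival, burst). Simulates preemptive
    shortest-remaining-time-first one time unit at a time."""
    remaining = {name: burst for name, _, burst in jobs}
    arrival = {name: arr for name, arr, _ in jobs}
    clock, finish = 0, {}
    while remaining:
        ready = [n for n in remaining if arrival[n] <= clock]
        if not ready:
            clock += 1
            continue
        n = min(ready, key=lambda x: remaining[x])   # shortest remaining time wins
        remaining[n] -= 1
        clock += 1
        if remaining[n] == 0:
            finish[n] = clock
            del remaining[n]
    return finish

# A is preempted by B, which is in turn preempted by the very short C
print(srtf([("A", 0, 7), ("B", 2, 4), ("C", 4, 1)]))
```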
Fixed priority pre-emptive scheduling
The operating system assigns a fixed priority rank to every process, and the scheduler arranges the processes in the ready queue in order of their priority. Lower-priority processes get interrupted by incoming higher-priority processes. A small simulation follows the list below.
- Overhead is not minimal, nor is it significant.
- FPPS has no particular advantage in terms of throughput over FIFO scheduling.
- If the number of rankings is limited, it can be characterized as a collection of FIFO queues, one for each priority ranking. Processes in lower-priority queues are selected only when all of the higher-priority queues are empty.
- Waiting time and response time depend on the priority of the process. Higher-priority processes have smaller waiting and response times.
- Deadlines can be met by giving processes with deadlines a higher priority.
- Starvation of lower-priority processes is possible with large numbers of high-priority processes queuing for CPU time.
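A unit-time sketch of fixed-priority preemptive scheduling with assumed jobs and priorities (lower number means higher priority): a newly arrived high-priority job immediately takes the CPU from a lower-priority one.

```python
def fixed_priority(jobs, until):
    """jobs: list of (name, arrival, burst, priority), lower number = higher
    priority. Unit-time simulation of fixed-priority preemptive scheduling."""
    remaining = {name: burst for name, _, burst, _ in jobs}
    info = {name: (arrival, prio) for name, arrival, _, prio in jobs}
    trace = []
    for clock in range(until):
        ready = [n for n in remaining if info[n][0] <= clock and remaining[n] > 0]
        if not ready:
            trace.append((clock, "idle"))
            continue
        n = min(ready, key=lambda x: info[x][1])   # highest priority preempts the rest
        remaining[n] -= 1
        trace.append((clock, n))
    return trace

# Low-priority "L" is preempted when high-priority "H" arrives at t=2
print(fixed_priority([("L", 0, 5, 10), ("H", 2, 3, 1)], until=10))
```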
Round-robin scheduling
The scheduler assigns a fixed time unit per process, and cycles through them. If the process completes within that time slice it gets terminated; otherwise it is rescheduled after giving a chance to all other processes. A small simulation follows the list below.
- RR scheduling involves extensive overhead, especially with a small time unit.
- Balanced throughput between FCFS/FIFO and SJF/SRTF: shorter jobs are completed faster than in FIFO and longer processes are completed faster than in SJF.
- Good average response time; waiting time is dependent on the number of processes, and not average process length.
- Because of high waiting times, deadlines are rarely met in a pure RR system.
- Starvation can never occur, since no priority is given. Order of time unit allocation is based upon process arrival time, similar to FIFO.
- If the time slice is large it becomes FCFS/FIFO, or if it is short then it becomes SJF/SRTF.
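A minimal round-robin sketch with assumed jobs and a quantum of two time units: a job that does not finish within its quantum goes to the back of the queue.

```python
from collections import deque

def round_robin(jobs, quantum=2):
    """jobs: list of (name, burst), all assumed ready at t=0. Each job gets
    one quantum, then rejoins the back of the queue until it finishes."""
    queue = deque(jobs)
    clock, finish = 0, {}
    while queue:
        name, burst = queue.popleft()
        run = min(quantum, burst)
        clock += run
        if burst > run:
            queue.append((name, burst - run))   # rescheduled after the others get a turn
        else:
            finish[name] = clock
    return finish

print(round_robin([("A", 5), ("B", 2), ("C", 3)], quantum=2))
```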
Multilevel queue scheduling
This is used for situations in which processes are easily divided into different groups. For example, a common division is made between foreground (interactive) processes and background (batch) processes. These two types of processes have different response-time requirements and so may have different scheduling needs. It is very useful for shared memory problems.
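A two-level sketch of the foreground/background split described above, with assumed jobs: interactive jobs are served round-robin and always take precedence, while batch jobs run first-come, first-served only when the foreground queue is empty.

```python
from collections import deque

def multilevel_queue(foreground, background, quantum=2):
    """foreground/background: lists of (name, burst). Foreground jobs are
    served round-robin and have strict priority over background jobs, which
    run FCFS to completion when the foreground queue is empty."""
    fg, bg = deque(foreground), deque(background)
    clock, trace = 0, []
    while fg or bg:
        if fg:
            name, burst = fg.popleft()
            run = min(quantum, burst)
            if burst > run:
                fg.append((name, burst - run))
        else:
            name, burst = bg.popleft()
            run = burst                          # batch jobs run to completion
        clock += run
        trace.append((name, clock))
    return trace

print(multilevel_queue([("editor", 3), ("shell", 2)], [("backup", 6)]))
```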
Work-conserving schedulers
A work-conserving scheduler is a scheduler that always tries to keep the scheduled resource busy if there are submitted jobs ready to be scheduled. In contrast, a non-work-conserving scheduler is a scheduler that, in some cases, may leave the scheduled resources idle despite the presence of jobs ready to be scheduled.
Scheduling optimization problems
There are several scheduling problems in which the goal is to decide which job goes to which station at what time, such that the total makespan is minimized:
- Job shop scheduling – there are n jobs and m identical stations. Each job should be executed on a single machine. This is usually regarded as an online problem.
- Open-shop scheduling – there are n jobs and m different stations. Each job should spend some time at each station, in a free order.
- Flow shop scheduling – there are n jobs and m different stations. Each job should spend some time at each station, in a pre-determined order.
Manual scheduling
A very common method in embedded systems is to schedule jobs manually. This can for example be done in a time-multiplexed fashion. Sometimes the kernel is divided in three or more parts: manual scheduling, preemptive and interrupt level. Exact methods for scheduling jobs are often proprietary. A sketch of a time-multiplexed schedule follows the list below.
- No resource starvation problems
- Very high predictability; allows implementation of hard real-time systems
- Almost no overhead
- May not be optimal for all applications
- Effectiveness is completely dependent on the implementation
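One common manual, time-multiplexed arrangement is a cyclic executive: each minor frame runs a fixed, hand-chosen set of functions, so timing is fully predictable. The task names, frame length and schedule table below are assumptions made for the example.

```python
import time

def read_sensors():  pass    # placeholder tasks, assumed for the example
def control_loop():  pass
def log_telemetry(): pass

FRAME_MS = 10
SCHEDULE = [
    [read_sensors, control_loop],                  # minor frame 0
    [read_sensors, control_loop, log_telemetry],   # minor frame 1
]

def cyclic_executive(frames_to_run=4):
    for i in range(frames_to_run):
        start = time.monotonic()
        for task in SCHEDULE[i % len(SCHEDULE)]:
            task()
        # sleep out the rest of the minor frame so the period stays fixed
        remaining = FRAME_MS / 1000 - (time.monotonic() - start)
        if remaining > 0:
            time.sleep(remaining)

cyclic_executive()
```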
Choosing a scheduling algorithm
When designing an operating system, a programmer must consider which scheduling algorithm will perform best for the use the system is going to see. There is no universal "best" scheduling algorithm, and many operating systems use extended or combinations of the scheduling algorithms above.
For example, Windows NT/XP/Vista uses a multilevel feedback queue, a combination of fixed-priority preemptive scheduling, round-robin, and first in, first out algorithms. In this system, threads can dynamically increase or decrease in priority depending on whether they have been serviced already, or have been waiting extensively. Every priority level is represented by its own queue, with round-robin scheduling among the high-priority threads and FIFO among the lower-priority ones. In this sense, response time is short for most threads, and short but critical system threads get completed very quickly. Since threads can only use one time unit of the round-robin in the highest-priority queue, starvation can be a problem for longer high-priority threads.
Operating system process scheduler implementations
The algorithm used may be as simple as round-robin in which each process is given equal time (for instance 1 ms, usually between 1 ms and 100 ms) in a cycling list. So, process A executes for 1 ms, then process B, then process C, then back to process A.
More advanced algorithms take into account process priority, or the importance of the process. This allows some processes to use more time than other processes. The kernel always uses whatever resources it needs to ensure proper operation of the system, and so can be said to have infinite priority. In SMP systems, processor affinity is considered to increase overall system performance, even if it may cause a process itself to run more slowly. This generally improves performance by reducing cache thrashing.
OS/360 and successors
IBM OS/360 was available with three different schedulers. The differences were such that the variants were often considered three different operating systems:
- The Single Sequential Scheduler option, also known as the Primary Control Program (PCP), provided sequential execution of a single stream of jobs.
- The Multiple Sequential Scheduler option, known as Multiprogramming with a Fixed Number of Tasks (MFT), provided execution of multiple concurrent jobs. Execution was governed by a priority which had a default for each stream or could be requested separately for each job. MFT version II added subtasks (threads), which executed at a priority based on that of the parent job. Each job stream defined the maximum amount of memory which could be used by any job in that stream.
- The Multiple Priority Schedulers option, or Multiprogramming with a Variable Number of Tasks (MVT), featured subtasks from the start; each job requested the priority and memory it required before execution.
Later virtual storage versions of MVS added a Workload Manager feature to the scheduler, which schedules processor resources according to an elaborate scheme defined by the installation.
Windows
Very early MS-DOS and Microsoft Windows systems were non-multitasking, and as such did not feature a scheduler. Windows 3.1x used a non-preemptive scheduler, meaning that it did not interrupt programs. It relied on the program to end or tell the OS that it didn't need the processor so that it could move on to another process. This is usually called cooperative multitasking. Windows 95 introduced a rudimentary preemptive scheduler; however, for legacy support it opted to let 16-bit applications run without preemption.[10]
Windows NT-based operating systems use a multilevel feedback queue. 32 priority levels are defined, 0 through to 31, with priorities 0 through 15 being "normal" priorities and priorities 16 through 31 being soft real-time priorities, requiring privileges to assign. 0 is reserved for the Operating System. User interfaces and APIs work with priority classes for the process and the threads in the process, which are then combined by the system into the absolute priority level.
The kernel may change the priority level of a thread depending on its I/O and CPU usage and whether it is interactive (i.e. accepts and responds to input from humans), raising the priority of interactive and I/O bounded processes and lowering that of CPU bound processes, to increase the responsiveness of interactive applications.[11] The scheduler was modified in Windows Vista to use the cycle counter register of modern processors to keep track of exactly how many CPU cycles a thread has executed, rather than just using an interval-timer interrupt routine.[12] Vista also uses a priority scheduler for the I/O queue so that disk defragmenters and other such programs do not interfere with foreground operations.[13]
Classic Mac OS and macOS
Mac OS 9 uses cooperative scheduling for threads, where one process controls multiple cooperative threads, and also provides preemptive scheduling for multiprocessing tasks. The kernel schedules multiprocessing tasks using a preemptive scheduling algorithm. All Process Manager processes run within a special multiprocessing task, called the "blue task". Those processes are scheduled cooperatively, using a round-robin scheduling algorithm; a process yields control of the processor to another process by explicitly calling a blocking function such as WaitNextEvent. Each process has its own copy of the Thread Manager that schedules that process's threads cooperatively; a thread yields control of the processor to another thread by calling YieldToAnyThread or YieldToThread.[14]
macOS uses a multilevel feedback queue, with four priority bands for threads – normal, system high priority, kernel mode only, and real-time.[15] Threads are scheduled preemptively; macOS also supports cooperatively scheduled threads in its implementation of the Thread Manager in Carbon.[14]
AIX
In AIX Version 4 there are three possible values for thread scheduling policy:
- First In, First Out: Once a thread with this policy is scheduled, it runs to completion unless it is blocked, it voluntarily yields control of the CPU, or a higher-priority thread becomes dispatchable. Only fixed-priority threads can have a FIFO scheduling policy.
- Round Robin: This is similar to the AIX Version 3 scheduler round-robin scheme based on 10 ms time slices. When an RR thread has control at the end of the time slice, it moves to the tail of the queue of dispatchable threads of its priority. Only fixed-priority threads can have a Round Robin scheduling policy.
- OTHER: This policy is defined by POSIX 1003.4a as implementation-defined. In AIX Version 4, this policy is defined to be equivalent to RR, except that it applies to threads with non-fixed priority. The recalculation of the running thread's priority value at each clock interrupt means that a thread may lose control because its priority value has risen above that of another dispatchable thread. This is the AIX Version 3 behaviour.
Threads are primarily of interest for applications that currently consist of several asynchronous processes. These applications might impose a lighter load on the system if converted to a multithreaded structure.
AIX 5 implements the following scheduling policies: FIFO, round robin, and a fair round robin. The FIFO policy has three different implementations: FIFO, FIFO2, and FIFO3. The round robin policy is named SCHED_RR in AIX, and the fair round robin is called SCHED_OTHER.[16]
Linux
Linux 2.4
In Linux 2.4, an O(n) scheduler with a multilevel feedback queue with priority levels ranging from 0 to 140 was used; 0–99 are reserved for real-time tasks and 100–140 are considered nice task levels. For real-time tasks, the time quantum for switching processes was approximately 200 ms, and for nice tasks approximately 10 ms.[citation needed] The scheduler ran through the run queue of all ready processes, letting the highest priority processes go first and run through their time slices, after which they will be placed in an expired queue. When the active queue is empty the expired queue will become the active queue and vice versa.
However, some enterprise Linux distributions such as SUSE Linux Enterprise Server replaced this scheduler with a backport of the O(1) scheduler (which was maintained by Alan Cox in his Linux 2.4-ac kernel series) to the Linux 2.4 kernel used by the distribution.
Linux 2.6.0 to Linux 2.6.22
In versions 2.6.0 to 2.6.22, the kernel used an O(1) scheduler developed by Ingo Molnar and many other kernel developers during the Linux 2.5 development. For many kernels in that time frame, Con Kolivas developed patch sets which improved interactivity with this scheduler or even replaced it with his own schedulers.
Since Linux 2.6.23
Con Kolivas' work, most significantly his implementation of "fair scheduling" named "Rotating Staircase Deadline", inspired Ingo Molnár to develop the Completely Fair Scheduler as a replacement for the earlier O(1) scheduler, crediting Kolivas in his announcement.[17] CFS is the first implementation of a fair queuing process scheduler widely used in a general-purpose operating system.[18]
The Completely Fair Scheduler (CFS) uses a well-studied, classic scheduling algorithm called fair queuing originally invented for packet networks. Fair queuing had been previously applied to CPU scheduling under the name stride scheduling. The fair queuing CFS scheduler has a scheduling complexity of O(log N), where N is the number of tasks in the runqueue. Choosing a task can be done in constant time, but reinserting a task after it has run requires O(log N) operations, because the run queue is implemented as a red–black tree.
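A toy sketch of the fair-queuing idea behind CFS, with assumed task names and weights: the task with the smallest virtual runtime always runs next, and after each slice its virtual runtime advances by the slice divided by its weight, so heavier-weighted tasks receive proportionally more CPU. A binary heap stands in here for the kernel's red–black tree; this illustrates the policy, not the kernel's implementation.

```python
import heapq

def cfs_like(weights, slice_len=1.0, slices=12):
    """weights: {task: weight}. Repeatedly runs the task with the smallest
    virtual runtime, then advances its vruntime by slice_len / weight."""
    queue = [(0.0, name) for name in weights]   # (vruntime, task)
    heapq.heapify(queue)
    trace = []
    for _ in range(slices):
        vruntime, name = heapq.heappop(queue)   # leftmost task: smallest vruntime
        trace.append(name)
        heapq.heappush(queue, (vruntime + slice_len / weights[name], name))
    return trace

# "heavy" has twice the weight of "light" and receives about twice the slices
print(cfs_like({"heavy": 2.0, "light": 1.0}))
```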
The Brain Fuck Scheduler, also created by Con Kolivas, is an alternative to the CFS.
FreeBSD
FreeBSD uses a multilevel feedback queue with priorities ranging from 0–255. 0–63 are reserved for interrupts, 64–127 for the top half of the kernel, 128–159 for real-time user threads, 160–223 for time-shared user threads, and 224–255 for idle user threads. Also, like Linux, it uses the active queue setup, but it also has an idle queue.[19]
NetBSD
NetBSD uses a multilevel feedback queue with priorities ranging from 0–223. 0–63 are reserved for time-shared threads (default, SCHED_OTHER policy), 64–95 for user threads which entered kernel space, 96–128 for kernel threads, 128–191 for user real-time threads (SCHED_FIFO and SCHED_RR policies), and 192–223 for software interrupts.
Solaris
Solaris uses a multilevel feedback queue with priorities ranging between 0 and 169. Priorities 0–59 are reserved for time-shared threads, 60–99 for system threads, 100–159 for real-time threads, and 160–169 for low priority interrupts. Unlike Linux,[19] when a process is done using its time quantum, it is given a new priority and put back in the queue. Solaris 9 introduced two new scheduling classes, namely the fixed priority class and the fair share class. The threads with fixed priority have the same priority range as that of the time-sharing class, but their priorities are not dynamically adjusted. The fair share scheduling class uses CPU shares to prioritize threads for scheduling decisions. CPU shares indicate the entitlement to CPU resources. They are allocated to a set of processes, which are collectively known as a project.[7]
Summary
Operating System | Preemption | Algorithm |
---|---|---|
Amiga OS | Yes | Prioritized round-robin scheduling |
FreeBSD | Yes | Multilevel feedback queue |
Linux kernel before 2.6.0 | Yes | Multilevel feedback queue |
Linux kernel 2.6.0–2.6.23 | Yes | O(1) scheduler |
Linux kernel after 2.6.23 | Yes | Completely Fair Scheduler |
Classic Mac OS pre-9 | None | Cooperative scheduler |
Mac OS 9 | Some | Preemptive scheduler for MP tasks, and cooperative for processes and threads |
macOS | Yes | Multilevel feedback queue |
NetBSD | Yes | Multilevel feedback queue |
Solaris | Yes | Multilevel feedback queue |
Windows 3.1x | None | Cooperative scheduler |
Windows 95, 98, Me | Half | Preemptive scheduler for 32-bit processes, and cooperative for 16-bit processes |
Windows NT (including 2000, XP, Vista, 7, and Server) | Yes | Multilevel feedback queue |
See also
- Activity selection problem
- Aging (scheduling)
- Atropos scheduler
- Automated planning and scheduling
- Cyclic executive
- Dynamic priority scheduling
- Foreground-background
- Interruptible operating system
- Least slack time scheduling
- Lottery scheduling
- Priority inversion
- Process states
- Queuing Theory
- Rate-monotonic scheduling
- Resource-Task Network
- Scheduling (production processes)
- Stochastic scheduling
- Time-utility function
Notes
- ^ Liu, C. L.; Layland, James W. (January 1973). "Scheduling Algorithms for Multiprogramming in a Hard-Real-Time Environment". Journal of the ACM. ACM. 20 (1): 46–61. doi:10.1145/321738.321743. S2CID 207669821.
We define the response time of a request for a certain task to be the time span between the request and the end of the response to that request.
- ^ Kleinrock, Leonard (1976). Queueing Systems, Vol. 2: Computer Applications (1 ed.). Wiley-Interscience. p. 171. ISBN 047149111X.
For a customer requiring x sec of service, his response time will equal his service time x plus his waiting time.
- ^ Feitelson, Dror G. (2015). Workload Modeling for Computer Systems Performance Evaluation. Cambridge University Press. Section 8.4 (Page 422) in Version 1.03 of the freely available manuscript. ISBN 9781107078239. Retrieved 2015-10-17.
if we denote the time that a job waits in the queue by tw, and the time it actually runs by tr, then the response time is r = tw + tr.
- ^ Silberschatz, Abraham; Galvin, Peter Baer; Gagne, Greg (2012). Operating System Concepts (9 ed.). Wiley Publishing. p. 187. ISBN 978-0470128725.
In an interactive system, turnaround time may not be the best criterion. Often, a process can produce some output fairly early and can continue computing new results while previous results are being output to the user. Thus, another measure is the time from the submission of a request until the first response is produced. This measure, called response time, is the time it takes to start responding, not the time it takes to output the response.
- ^ Paul Krzyzanowski (2014-02-19). "Process Scheduling: Who gets to run next?". cs.rutgers.edu. Retrieved 2015-01-11.
- ^ Raphael Finkel. "An Operating Systems Vade Mecum". Prentice Hall. 1988. "Chapter 2: Time Management". p. 27.
- ^ a b c Abraham Silberschatz, Peter Baer Galvin and Greg Gagne (2013). Operating System Concepts. Vol. 9. John Wiley & Sons, Inc. ISBN 978-1-118-06333-0.
- ^ Robert Kroeger (2004). "Admission Control for Independently-authored Realtime Applications". UWSpace. http://hdl.handle.net/10012/1170. Section "2.6 Admission Control". p. 33.
- ^ Guowang Miao; Jens Zander; Ki Won Sung; Ben Slimane (2016). Fundamentals of Mobile Data Networks. Cambridge University Press. ISBN 978-1107143210.
- ^ Early Windows at the Wayback Machine (archive index)
- ^ Sriram Krishnan. "A Tale of Two Schedulers Windows NT and Windows CE". Archived from the original on July 22, 2012.
- ^ "Windows Administration: Inside the Windows Vista Kernel: Part one". Technet.microsoft.com. 2016-xi-14. Retrieved 2016-12-09 .
- ^ "Archived copy". web log.gabefrost.com. Archived from the original on xix Feb 2008. Retrieved 15 January 2022.
{{cite web}}
: CS1 maint: archived copy equally title (link) - ^ a b "Technical Note TN2028: Threading Architectures". developer.apple tree.com . Retrieved 2019-01-xv .
- ^ "Mach Scheduling and Thread Interfaces". developer.apple.com . Retrieved 2019-01-xv .
- ^ [1] Archived 2011-08-11 at the Wayback Machine
- ^ Molnár, Ingo (2007-04-13). "[patch] Modular Scheduler Core and Completely Fair Scheduler [CFS]". linux-kernel (Mailing list).
- ^ Tong Li; Dan Baumberger; Scott Hahn. "Efficient and Scalable Multiprocessor Fair Scheduling Using Distributed Weighted Round-Robin" (PDF). Happyli.org. Retrieved 2016-12-09.
- ^ a b "Comparison of Solaris, Linux, and FreeBSD Kernels" (PDF). Archived from the original (PDF) on August 7, 2008.
References
- Błażewicz, Jacek; Ecker, K.H.; Pesch, E.; Schmidt, G.; Weglarz, J. (2001). Scheduling computer and manufacturing processes (2 ed.). Berlin [u.a.]: Springer. ISBN 3-540-41931-4.
- Stallings, William (2004). Operating Systems: Internals and Design Principles (4th ed.). Prentice Hall. ISBN 0-13-031999-6.
- Information on the Linux 2.6 O(1)-scheduler
Further reading
- Operating Systems: Three Easy Pieces by Remzi H. Arpaci-Dusseau and Andrea C. Arpaci-Dusseau. Arpaci-Dusseau Books, 2014. Relevant chapters: Scheduling: Introduction, Multi-level Feedback Queue, Proportional-share Scheduling, Multiprocessor Scheduling
- Brief discussion of Job Scheduling algorithms
- Understanding the Linux Kernel: Chapter 10 Process Scheduling
- Kerneltrap: Linux kernel scheduler articles
- AIX CPU monitoring and tuning
- Josh Aas' introduction to the Linux 2.6.8.1 CPU scheduler implementation
- Peter Brucker, Sigrid Knust. Complexity results for scheduling problems [2]
- TORSCHE Scheduling Toolbox for Matlab is a toolbox of scheduling and graph algorithms.
- A survey on cellular networks packet scheduling
- Large-scale cluster management at Google with Borg