D.2.1 The Task Dispatching Model
1/2
{AI95-00321-01} [The task dispatching model specifies task scheduling, based on conceptual priority-ordered ready queues.]
Static Semantics
1.1/2
{AI95-00355-01} The following language-defined library package exists:
1.2/3
{AI05-0166-1} package Ada.Dispatching is
    pragma Preelaborate(Dispatching);
1.3/3
{AI05-0166-1}     procedure Yield;
1.4/3
    Dispatching_Policy_Error : exception;
end Ada.Dispatching;
1.5/2
Dispatching serves as the
parent of other language-defined library units concerned with task dispatching.
Dynamic Semantics
2/2
{AI95-00321-01} A task can become a running task only if it is ready (see 9) and the execution resources required by that task are available. Processors are allocated to tasks based on each task's active priority.
3
It is implementation defined whether, on a multiprocessor,
a task that is waiting for access to a protected object keeps its processor
busy.
3.a
Implementation defined: Whether, on a
multiprocessor, a task that is waiting for access to a protected object
keeps its processor busy.
4/2
{AI95-00321-01} Task dispatching is the process by which one ready task is selected for execution on a processor. This selection is done at certain points during the execution of a task called task dispatching points. A task reaches a task dispatching point whenever it becomes blocked, and when it terminates. [Other task dispatching points are defined throughout this Annex for specific policies.]
4.a
Ramification: On multiprocessor systems,
more than one task can be chosen, at the same time, for execution on
more than one processor, as explained below.
5/2
{AI95-00321-01} Task dispatching policies are specified in terms of conceptual ready queues and task states. A ready
queue is an ordered list of ready tasks. The first position in a queue
is called the
head of the queue, and the last position is called
the
tail of the queue. A task is
ready if it is in a ready
queue, or if it is running. Each processor has one ready queue for each
priority value. At any instant, each ready queue of a processor contains
exactly the set of tasks of that priority that are ready for execution
on that processor, but are not running on any processor; that is, those
tasks that are ready, are not running on any processor, and can be executed
using that processor and other available resources. A task can be on
the ready queues of more than one processor.
5.a
Discussion: The core language defines
a ready task as one that is not blocked. Here we refine this definition
and talk about ready queues.
6/2
{AI95-00321-01} Each processor also has one running task, which is the task currently being executed by that processor. Whenever a task running on a processor reaches a task dispatching point, one task is selected to run on that processor.
The task selected is the one at the head of the highest priority nonempty
ready queue; this task is then removed from all ready queues to which
it belongs.
6.a
Discussion: There is always at least
one task to run, if we count the idle task.
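The selection rule above can be sketched as a reader's model only; the standard explicitly does not require an implementation to maintain such queues (see the note below that ready queues are purely conceptual). In this sketch the priority range, type names, and the sentinel value are all invented for illustration:

```ada
with Ada.Containers.Doubly_Linked_Lists;

package Ready_Queue_Model is
   type Task_Id is new Positive;
   subtype Priority is Integer range 0 .. 31;  --  assumed range

   package Task_Lists is
     new Ada.Containers.Doubly_Linked_Lists (Task_Id);

   --  One conceptual ready queue per priority value, for one processor.
   Queues : array (Priority) of Task_Lists.List;

   No_Task : constant Task_Id := Task_Id'Last;  --  sentinel for "idle"

   --  Remove and return the task at the head of the highest-priority
   --  nonempty ready queue, mirroring the selection rule above.
   function Select_Running_Task return Task_Id;
end Ready_Queue_Model;

package body Ready_Queue_Model is
   function Select_Running_Task return Task_Id is
   begin
      for P in reverse Priority loop
         if not Queues (P).Is_Empty then
            declare
               Chosen : constant Task_Id := Queues (P).First_Element;
            begin
               Queues (P).Delete_First;
               return Chosen;
            end;
         end if;
      end loop;
      return No_Task;  --  no ready task: the idle task would run
   end Select_Running_Task;
end Ready_Queue_Model;
```

In a full model the chosen task would also be removed from the ready queues of every other processor on which it appears, per the rule that a task is removed from all ready queues to which it belongs.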
7/3
{AI95-00321-01} {AI05-0166-1} A preemptible resource is a
resource that while allocated to one task can be allocated (temporarily)
to another instead. Processors are preemptible resources. Access to a
protected object (see 9.5.1) is a nonpreemptible
resource. When a higher-priority task is dispatched
to the processor, and the previously running task is placed on the appropriate
ready queue, the latter task is said to be preempted. A
call of Yield is a task dispatching point. Yield is a potentially blocking
operation (see 9.5.1).
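As a hedged illustration (the task name and loop bound are invented, and the work step is a placeholder), a compute-bound task can use Yield to create explicit task dispatching points, which matters chiefly under a non-preemptive dispatching policy:

```ada
with Ada.Dispatching;

procedure Yield_Example is
   task Worker;

   task body Worker is
   begin
      for Step in 1 .. 1_000 loop
         --  ... one unit of application work would go here ...

         --  This call is a task dispatching point: another ready task
         --  of the same active priority may now be selected to run.
         Ada.Dispatching.Yield;
      end loop;
   end Worker;
begin
   null;  --  the procedure completes when Worker terminates
end Yield_Example;
```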
7.a/2
This paragraph was deleted.
8/2
This paragraph was deleted. {AI95-00321-01}
8.a/2
This paragraph was deleted.
Implementation Permissions
9/2
{AI95-00321-01} An implementation is allowed to define additional resources as execution resources, and to define the corresponding allocation policies for them. Such resources may have an implementation-defined effect on task dispatching (see D.2.2).
9.a/2
Implementation defined: The effect of implementation-defined execution resources on task dispatching.
10
An implementation may place implementation-defined
restrictions on tasks whose active priority is in the Interrupt_Priority
range.
10.a/3
Ramification: {AI05-0229-1} For example, on some operating systems, it might be necessary to disallow them altogether. This permission applies to tasks whose priority is set to interrupt level for any reason: via an aspect, via a call to Dynamic_Priorities.Set_Priority, or via priority inheritance.
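For illustration only (the task name is invented, and an implementation exercising the permission above might restrict or reject such a declaration), a task acquires an interrupt-level base priority via the Interrupt_Priority aspect:

```ada
with System;

package Interrupt_Level_Tasks is
   --  Device_Monitor's base priority lies in the Interrupt_Priority
   --  range, so implementation-defined restrictions may apply to it.
   task Device_Monitor
     with Interrupt_Priority => System.Interrupt_Priority'First;
end Interrupt_Level_Tasks;

package body Interrupt_Level_Tasks is
   task body Device_Monitor is
   begin
      null;  --  placeholder for device-handling work
   end Device_Monitor;
end Interrupt_Level_Tasks;
```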
10.1/2
{AI95-00321-01}
[For optimization purposes,] an implementation
may alter the points at which task dispatching occurs, in an implementation-defined
manner. However, a delay_statement
always corresponds to at least one task dispatching point.
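A familiar consequence of this rule, shown as a minimal sketch: a zero-length delay_statement completes immediately yet still guarantees a dispatching point, which an implementation may not optimize away.

```ada
procedure Dispatch_Here is
begin
   --  A delay_statement, even of zero duration, always corresponds to
   --  at least one task dispatching point; depending on the policy,
   --  another ready task of the same active priority may run here.
   delay 0.0;
end Dispatch_Here;
```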
NOTES
11
7 Section 9 specifies under which circumstances
a task becomes ready. The ready state is affected by the rules for task
activation and termination, delay statements, and entry calls.
When
a task is not ready, it is said to be blocked.
12
8 An example of a possible implementation-defined
execution resource is a page of physical memory, which needs to be loaded
with a particular page of virtual memory before a task can continue execution.
13
9 The ready queues are purely conceptual;
there is no requirement that such lists physically exist in an implementation.
14
10 While a task is running, it is not on
any ready queue. Any time the task that is running on a processor is
added to a ready queue, a new running task is selected for that processor.
15
11 In a multiprocessor system, a task can
be on the ready queues of more than one processor. At the extreme, if
several processors share the same set of ready tasks, the contents of
their ready queues is identical, and so they can be viewed as sharing
one ready queue, and can be implemented that way. [Thus, the dispatching
model covers multiprocessors where dispatching is implemented using a
single ready queue, as well as those with separate dispatching domains.]
16
This paragraph was deleted.
17/2
13 {AI95-00321-01}
The setting of a task's base priority as a result
of a call to Set_Priority does not always take effect immediately when
Set_Priority is called. The effect of setting the task's base priority
is deferred while the affected task performs a protected action.
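The deferral can be illustrated with a sketch (the protected object, the task, and the chosen priority value are invented; the Finish rendezvous merely keeps T alive so the Set_Priority call cannot target a terminated task):

```ada
with System;
with Ada.Dynamic_Priorities;

procedure Set_Priority_Deferred is
   protected PO is
      procedure Long_Action;
   end PO;

   protected body PO is
      procedure Long_Action is
      begin
         null;  --  placeholder: while a task executes this protected
                --  action, changes to its base priority are deferred
      end Long_Action;
   end PO;

   task T is
      entry Finish;
   end T;

   task body T is
   begin
      PO.Long_Action;
      accept Finish;  --  keeps T from terminating prematurely
   end T;
begin
   --  If T happens to be inside PO.Long_Action at this moment, the
   --  new base priority takes effect only once that protected action
   --  completes, not immediately on return from Set_Priority.
   Ada.Dynamic_Priorities.Set_Priority (System.Priority'First, T'Identity);
   T.Finish;
end Set_Priority_Deferred;
```

Whether the deferral is actually observed depends on timing; the point of the sketch is only that the effect of Set_Priority on a task inside a protected action is postponed to the end of that action.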
Wording Changes from Ada 95
17.a/2
{AI95-00321-01}
This description is simplified to describe only
the parts of the dispatching model common to all policies. In particular,
rules about preemption are moved elsewhere. This makes it easier to add
other policies (which may not involve preemption).
Incompatibilities With Ada 2005
17.b/3
{AI05-0166-1}
Procedure Yield is newly added
to Dispatching. If Dispatching is referenced in a use_clause,
and an entity E with a defining_identifier
of Yield is defined in a package that is also referenced in a use_clause,
the entity E may no longer be use-visible, resulting in errors.
This should be rare and is easily fixed if it does occur.
Ada 2005 and 2012 Editions sponsored in part by Ada-Europe