Overview of Embedded Operating Systems
Embedded systems use specialized operating systems to meet system size, real-time response, or system capacity requirements.
We don't mention specific embedded operating systems by name in this article as there are so many good ones.
Editor's note: the choice of operating system for a specific processor or hardware platform will often be heavily influenced by the availability of development tools for the processor or platform, all other considerations being equal.
There are several types of embedded operating systems.
Perhaps the simplest operating system is a single system control loop that acts much like an operating system but runs a single task. Whether such a system can truly be classified as an operating system is subject to debate. This type of operating system suits simple (or sometimes critical) systems but often becomes unmaintainable, from both a functional and a software development standpoint, as system complexity grows.
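A single control loop of this kind is often called a "super loop". The sketch below illustrates the idea; the device helpers are hypothetical stand-ins for reading real hardware registers, not part of any particular system.

```c
#include <stdbool.h>

/* Hypothetical device helpers; a real system would read hardware
 * status and data registers instead. */
static bool sensor_ready(void)   { return true; }
static int  read_sensor(void)    { return 42; }

static int last_output = 0;
static void update_output(int v) { last_output = v; }

/* One pass of the single control loop: poll each device in turn,
 * service whatever is ready, then return so the loop repeats.
 * There is no scheduler; every activity shares this one loop. */
int control_loop_iteration(void)
{
    if (sensor_ready()) {
        update_output(read_sensor());
    }
    /* Further polled activities are added inline here, which is why
     * such systems become hard to maintain as complexity grows. */
    return last_output;
}
```

In production such an iteration runs inside an infinite `while (1)` loop after hardware initialization; every new feature adds another polled branch to the same loop body.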
Multitasking operating systems are the norm in embedded systems. In a multitasking operating system, several tasks or processes appear to execute concurrently. More than one task may in fact execute concurrently if the system processor has more than one core or there is more than one processor.
The operating system switches between tasks as tasks wait for events and other tasks receive events and become ready to run.
Multitasking operating systems simplify software development as the different software components can be developed independently of each other.
A preemptive operating system is a multitasking operating system that defines preemptive priorities for tasks. A higher priority task always preempts and runs before a lower priority task.
Preemptive operating systems increase system responsiveness to events, simplify software development and increase system reliability.
A system designer can calculate the time taken to service interrupts in a system and the time taken by the system scheduler to switch tasks to arrive at the minimum and maximum time required before a higher priority task will resume execution. System and processor delays such as processor cache fill and spill times have to be taken into account on top of operating system overheads.
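The bound described above reduces to simple arithmetic: the longest interrupt-service path, plus the scheduler's context switch time, plus processor-level delays. The sketch below shows the calculation; the cycle figures in the usage example are illustrative assumptions, not measurements of any real system.

```c
/* Worst-case time (in CPU cycles) before a newly ready higher
 * priority task resumes execution: the longest interrupt-service
 * path, plus the scheduler's context switch, plus processor-level
 * delays such as cache fill and spill.  Inputs are assumptions a
 * designer measures or derives for a specific platform. */
unsigned long worst_case_resume_cycles(unsigned long max_isr_cycles,
                                       unsigned long context_switch_cycles,
                                       unsigned long cache_fill_spill_cycles)
{
    return max_isr_cycles + context_switch_cycles + cache_fill_spill_cycles;
}
```

For example, with an assumed 500-cycle worst-case interrupt service path, a 200-cycle context switch, and 300 cycles of cache refill, a higher priority task resumes within 1000 cycles of becoming ready.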
A preemptive operating system may still fail to meet a system deadline without the system software being aware that the deadline was missed (unless the system software implements other tests).
A natural way to measure CPU loading in a preemptive operating system is to define a task of the lowest priority that does nothing but increment a counter. The counter can be calibrated once (according to the speed of the CPU) to establish the maximum value the counter can reach when the CPU is idle. During system execution, any task and interrupt activity will prevent the counter from incrementing.
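The load calculation that follows from this idle-counter technique can be sketched as below; the function names are illustrative, and the calibrated maximum is assumed to have been measured once at startup with the system otherwise idle.

```c
/* CPU load estimation from a lowest-priority idle counter.
 * calibrated_max is the counter value reached per measurement
 * interval when the CPU is idle (measured once at calibration);
 * idle_count is the value reached during the same interval under
 * real load.  Returns load as a percentage (0..100). */
unsigned cpu_load_percent(unsigned long idle_count,
                          unsigned long calibrated_max)
{
    if (calibrated_max == 0 || idle_count >= calibrated_max)
        return 0;  /* counter ran freely: CPU effectively idle */
    return (unsigned)(100UL - (idle_count * 100UL) / calibrated_max);
}
```

If the idle task manages only 250 increments in an interval calibrated to 1000, tasks and interrupts consumed roughly 75% of the CPU.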
A rate monotonic operating system guarantees that tasks in the system can run at a given interval and for a given period of time. When this guarantee is not met, the system software can be notified of the failure and take appropriate action. The guarantee may not be met if the system is oversubscribed when configured, or when other events at runtime cause the guarantee to fail.
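Whether a configuration is oversubscribed can be checked with an admission test. The article does not name a specific test; the sketch below uses one classical sufficient condition, comparing total utilization against the ln 2 ≈ 0.693 limit of the Liu and Layland rate monotonic bound, which is conservative for any task count.

```c
#include <stdbool.h>

/* A classical rate monotonic admission test: n periodic tasks, each
 * with worst-case execution time exec[i] and period period[i], are
 * schedulable if total utilization stays under the Liu & Layland
 * bound.  The bound n*(2^(1/n)-1) approaches ln 2 ~ 0.693 as n
 * grows, so using that limit is a simple, conservative sufficient
 * test (a set failing it may still be schedulable). */
bool rm_schedulable(const double exec[], const double period[], int n)
{
    const double LN2_BOUND = 0.693;  /* conservative for all n */
    double utilization = 0.0;
    for (int i = 0; i < n; i++)
        utilization += exec[i] / period[i];
    return utilization <= LN2_BOUND;
}
```

For example, two tasks needing 1 time unit every 4 and every 5 units (utilization 0.45) pass the test, while tasks needing 3 units each over the same periods (utilization 1.35) are clearly oversubscribed.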
Generally, embedded operating systems provide constant, or near constant, times for system operations (accounting for processor cache fills, system interrupts, and other such delays). For example, memory allocation in an embedded system is often implemented such that the time spent allocating memory is roughly the same each time.
Constant time operations are the cornerstone of realtime responsiveness and predictable capacity loading.
Embedded operating systems normally provide fast interrupt response times by separating interrupt handling into two phases.
In the first phase, the interrupt handler reacts to an interrupt and satisfies the interrupt condition from a hardware perspective.
In the second phase, the interrupt handler processes the interrupt condition. While executing in the second phase, other interrupts in the system are enabled and may be handled to allow higher priority interrupts to take precedence over lower priority interrupts.
A performance advantage of this approach is that the first phase normally disables further interrupts on the interrupting device; multiple device events that occur before the second phase runs do not cause multiple interrupts to the processor, nor do they cost repeated CPU cycles handling the same interrupt.
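The two phases described above can be sketched as follows. The device functions are hypothetical placeholders for hardware register access, and the event counter stands in for real second-phase processing.

```c
#include <stdbool.h>

/* Two-phase interrupt handling.  Phase one runs with interrupts
 * disabled and only satisfies the hardware; phase two runs later
 * with interrupts enabled and does the actual processing. */
static bool work_pending = false;
static int  processed_events = 0;

static void device_ack_and_disable_irq(void) { /* touch hardware */ }
static void device_reenable_irq(void)        { /* touch hardware */ }

/* Phase one: the interrupt handler proper, kept minimal.  Because
 * the device interrupt is disabled here, repeated device events
 * before phase two runs do not re-interrupt the processor. */
void isr_first_phase(void)
{
    device_ack_and_disable_irq();
    work_pending = true;           /* defer real work to phase two */
}

/* Phase two: scheduled by the OS after phase one, with interrupts
 * enabled so higher priority interrupts can preempt it. */
void isr_second_phase(void)
{
    if (work_pending) {
        work_pending = false;
        processed_events++;        /* stand-in for real processing */
        device_reenable_irq();
    }
}

int events_processed(void) { return processed_events; }
```

Note how two device events arriving before phase two runs coalesce into a single pass of second-phase processing, which is exactly the performance advantage described above.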
Priority inversion is a condition in preemptive operating systems where a lower priority task holds a resource that is subsequently required by a higher priority task. The priorities are effectively inverted: the lower priority task prevents the higher priority task from continuing execution until it gives up the resource.
Some preemptive operating systems address priority inversion with priority inheritance: the priority of the lower priority task is temporarily elevated to at least the priority of the higher priority task for as long as the lower priority task holds the resource the higher priority task is waiting for.
While priority inheritance can be a helpful feature on the part of an operating system, a system designer must ensure that a low priority task whose priority may be elevated does not spend more CPU cycles, while holding the resource, than the higher priority task it is blocking can tolerate.
Additionally, it is a system design fault for a priority-elevated low priority task to block, since this prevents the originally higher priority task from making progress while the lower priority task still holds the resource the higher priority task requires to proceed.
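The elevation-and-restore mechanism can be sketched as below. The task and mutex structures are simplified illustrations, not a real operating system API; higher numbers mean higher priority.

```c
#include <stddef.h>

/* A sketch of priority inheritance on a mutex.  Structures are
 * illustrative; a real OS would also manage wait queues and nested
 * ownership.  Higher number = higher priority. */
typedef struct {
    int base_priority;
    int current_priority;
} task_t;

typedef struct {
    task_t *owner;   /* NULL when the mutex is free */
} mutex_t;

/* Called when `waiter` blocks on `m`: elevate the owner to at least
 * the waiter's priority, so medium-priority tasks cannot starve the
 * owner while it holds the resource. */
void mutex_block_on(mutex_t *m, const task_t *waiter)
{
    if (m->owner != NULL &&
        waiter->current_priority > m->owner->current_priority)
        m->owner->current_priority = waiter->current_priority;
}

/* Called when the owner releases `m`: restore its base priority. */
void mutex_release(mutex_t *m)
{
    if (m->owner != NULL) {
        m->owner->current_priority = m->owner->base_priority;
        m->owner = NULL;
    }
}
```

While elevated, the low priority task runs at the blocked task's priority, finishes its critical section quickly, and drops back to its base priority on release.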
A monolithic operating system includes all operating system code such as device drivers and file system handlers as part of a single system image.
A micro kernel operating system includes only the bare necessities, such as task switching, scheduling, and device handling interfaces, in the operating system code. All other operating system code, such as device drivers and file system handlers, is then installed depending on the needs of the system software.
Micro kernel operating systems are generally more scalable than monolithic operating systems since they only require the code and memory resources that are needed by the system software.
Some micro kernel implementations protect the kernel from all other system components with virtual memory techniques. This prevents, for example, an installed device driver from corrupting kernel memory and allows the kernel to clearly identify the misbehaving system component.
Some micro kernels carry memory protection further by isolating all software components from each other. This can often be implemented efficiently with a little help from the system processor, together with operating system interfaces for read-only access to data in another software component.
Not all embedded systems have to perform realtime operations, that is, meet realtime event response requirements. Many non-realtime operating systems are employed in embedded systems today, though these have normally been reduced in both code size and memory requirements.
Some embedded systems increase their reliability by restarting the system after performing important control operations. For example, an embedded system may be more reliable if it restarts itself immediately after performing an important control operation.
Factors to consider when designing a system for frequent restarts include the time required to restart the system, the delayed response time to system events while the system is restarting, and the persistence of system data across restarts.
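One common way to persist data across a warm restart, sketched below, is to keep it in a RAM region the startup code does not zero, guarded by a magic value and checksum so a cold boot (with random RAM contents) is detected. The layout, magic value, and linker placement are illustrative assumptions.

```c
#include <stdint.h>

/* Data intended to survive a warm restart.  In a real system this
 * struct would be placed in a .noinit RAM section via the linker
 * script; the magic value here is an arbitrary illustration. */
#define PERSIST_MAGIC 0x5AFEDA7AUL

typedef struct {
    uint32_t magic;
    uint32_t value;      /* the data to survive a restart */
    uint32_t checksum;   /* magic ^ value */
} persist_t;

static persist_t persist;

void persist_store(uint32_t value)
{
    persist.magic = PERSIST_MAGIC;
    persist.value = value;
    persist.checksum = PERSIST_MAGIC ^ value;
}

/* Returns 1 and writes *value if the saved data survived the
 * restart intact; returns 0 on a cold boot or corruption. */
int persist_load(uint32_t *value)
{
    if (persist.magic == PERSIST_MAGIC &&
        persist.checksum == (persist.magic ^ persist.value)) {
        *value = persist.value;
        return 1;
    }
    return 0;
}
```

On startup the system calls `persist_load` first: a valid result means this is a warm restart with state intact, while a failed check means the system should fall back to cold-boot initialization.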
© 1996-2009 emspace Embedded Spaces Inc