What do you mean by NRE cost in a system design?

Design costs, also called Non-Recurring Engineering (NRE) costs, matter most when only a few units of a particular embedded system are being built. Conversely, production costs dominate in high-volume production. Embedded systems range from single units to millions of units, and so span the whole range of tradeoffs between NRE and production costs.

With continuous advances in the electronic design process and in the quality of CAD/CAE tools, NRE costs for simple embedded systems should be falling. In practice, however, those same advances mean that more and more complex embedded systems are being built, which keeps NRE (development) costs quite high.

What do you mean by Interrupt Latency?

When an electronic device causes an interrupt, the intermediate results (registers) have to be saved before the software responsible for handling the interrupt can run, and they must be restored after that software is finished. The more registers there are, the longer this saving and restoring takes, and the greater the latency.

The interrupt latency period Tla is the sum of:

  • The time to respond to the interrupt and initiate the ISR instructions (this includes the time to save or switch the context).
  • The periods needed to service all interrupts of higher priority than the present one.
  • The maximum period for which execution of the ISR can be held off by “critical region” instructions that disable interrupts.

Ways to reduce this save/restore latency include having relatively few registers in the central processing unit (undesirable because it slows down most non-interrupt processing substantially), or at least not having the hardware save them all (hoping that the software does not then need to compensate by saving the rest “manually”). Another technique spends silicon gates on “shadow registers”: one or more duplicate register sets used only by the interrupt software, perhaps supporting a dedicated stack.
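
The third latency component above is bounded by the longest stretch of code that runs with interrupts disabled. As a minimal sketch, assuming an ARM Cortex-M target with the CMSIS interrupt-masking intrinsics __disable_irq()/__enable_irq() (shared_counter and update_shared_counter are hypothetical names):

    #include <stdint.h>

    /* Provided by the CMSIS headers on a real target. */
    extern void __disable_irq(void);
    extern void __enable_irq(void);

    static volatile uint32_t shared_counter;  /* data shared with an ISR */

    void update_shared_counter(uint32_t delta)
    {
        /* Critical region: any interrupt raised here is delayed until
         * __enable_irq(), so the region's length adds directly to the
         * worst-case Tla. Keep it as short as possible. */
        __disable_irq();
        shared_counter += delta;   /* read-modify-write must not be interrupted */
        __enable_irq();
    }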

What do you mean by Priority Inversion?

Priority inversion is a resource-sharing problem that arises under priority-based scheduling.

Most commercial real-time operating systems (RTOSes) employ a priority-based preemptive scheduler. These systems assign each task a unique priority level. The scheduler ensures that, of the tasks that are ready to run, the one with the highest priority is always the task actually running. To meet this goal, the scheduler may preempt a lower-priority task in mid-execution. However, the scheduler may not have any control over the resources in the system, and a resource once allocated to a low-priority process remains allocated to it until released.

This situation may force a high-priority process to wait for the resource to be released. The wait lengthens if a medium-priority process preempts the low-priority process to execute itself. Now the medium-priority process is running, the low-priority process is waiting for the medium-priority process to finish, and the high-priority process will get a chance to execute only after both lower-priority processes finish. This is illogical and improper, as ideally the high-priority process should finish first. This is known as the priority inversion problem.
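
The standard mitigation is priority inheritance: while a low-priority task holds the lock, it temporarily runs at the priority of the highest-priority task blocked on it, so the medium-priority task can no longer preempt it. A minimal sketch, assuming POSIX threads with the priority-inheritance protocol available (the function names here are hypothetical placeholders):

    #include <pthread.h>

    static pthread_mutex_t resource_lock;

    /* Initialise the shared-resource lock with the priority-inheritance
     * protocol, defeating the inversion scenario described above. */
    int init_resource_lock(void)
    {
        pthread_mutexattr_t attr;
        pthread_mutexattr_init(&attr);
        pthread_mutexattr_setprotocol(&attr, PTHREAD_PRIO_INHERIT);
        return pthread_mutex_init(&resource_lock, &attr);
    }

    /* Hypothetical task body: low- and high-priority tasks both take
     * the same lock around their use of the shared resource. */
    void *task_body(void *arg)
    {
        (void)arg;
        pthread_mutex_lock(&resource_lock);
        /* ... use the shared resource ... */
        pthread_mutex_unlock(&resource_lock);
        return 0;
    }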

What do you mean by hierarchical RTOS?

A hierarchical RTOS is a configurable RTOS in which only a limited set of scheduler functions forms the core. All other functions, such as memory allocation, IPC, memory management, and file-system operations, are kept outside the scheduler. These functions link and bind dynamically as and when needed, so the RTOS carries only the functions that are actually used and can be extended if required. Such a hierarchical RTOS can be configured for specific processors and devices.
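
As a minimal sketch of such on-demand binding, assuming a POSIX host with dlopen()/dlsym() (the module name fs_module.so and its entry point fs_service_init are hypothetical):

    #include <dlfcn.h>
    #include <stdio.h>

    /* Bind an optional service (here, a hypothetical file-system module)
     * into the running system only when it is first needed; the scheduler
     * core itself stays minimal. */
    int bind_fs_service(void)
    {
        void *mod = dlopen("fs_module.so", RTLD_NOW);  /* hypothetical module */
        if (!mod) {
            fprintf(stderr, "fs module unavailable: %s\n", dlerror());
            return -1;
        }

        /* hypothetical entry point exported by the module */
        int (*fs_init)(void) = (int (*)(void))dlsym(mod, "fs_service_init");
        if (!fs_init)
            return -1;

        return fs_init();
    }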

Enumerate the sequence of events that takes place in interrupt handling.

1. Hardware stacks program counter, etc.
2. Hardware loads new program counter from interrupt vector.
3. Assembly language procedure saves registers.
4. Assembly language procedure sets up new stack.
5. C interrupt service procedure runs (typically reads and buffers input).
6. Scheduler decides which process is to run next.
7. C procedure returns to the assembly code.
8. Assembly language procedure starts up new current process.
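
A compressed C sketch of steps 5–8, assuming a hypothetical UART device and kernel API (uart_read_byte, buffer_put, scheduler_pick_next, and dispatch are invented names); steps 1–4 happen in hardware and in the assembly stub before this handler is entered:

    #include <stdint.h>

    /* Hypothetical kernel/driver primitives (names invented for illustration). */
    extern uint8_t uart_read_byte(void);        /* step 5: read the device      */
    extern void    buffer_put(uint8_t b);       /* step 5: buffer the input     */
    extern void   *scheduler_pick_next(void);   /* step 6: choose next process  */
    extern void    dispatch(void *proc);        /* steps 7-8: return to the asm */
                                                /* stub, which starts the new   */
                                                /* current process              */

    /* C-level interrupt service routine, called from the assembly stub
     * after it has saved registers and set up a stack (steps 3-4). */
    void uart_isr(void)
    {
        buffer_put(uart_read_byte());        /* service the device          */
        dispatch(scheduler_pick_next());     /* may resume a different task */
    }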

What do you mean by plug and play devices? Explain any protocol that supports plug and play feature.

In computing, plug and play describes the characteristic of a computer bus or device specification that facilitates the discovery of a hardware component in a system without the need for physical device configuration, or for user intervention in resolving resource conflicts.
Plug and play refers both to the boot-time assignment of device resources and to hotplug systems.
ISA PnP, or (legacy) Plug & Play ISA, was a plug-and-play system that used a combination of modifications to the hardware, the system BIOS, and operating-system software to automatically manage resource allocations. It was superseded by the PCI bus during the mid-1990s.
One protocol that supports the plug-and-play feature is Universal Plug and Play (UPnP), a set of networking protocols promulgated by the UPnP Forum.
Universal Plug and Play functionality involves five processes:
Discovery: A Universal Plug and Play device advertises its presence on the network to other devices and control points by using the Simple Service Discovery Protocol (SSDP). A new control point uses SSDP to discover Universal Plug and Play devices on the network. The information that is exchanged between the device and the control point is limited to discovery messages that provide basic information about the devices and their services, and a description URL, which can be used to gather additional information about the device.
Description: Using the URL that is provided in the discovery process, a control point receives XML information about the device, such as the make, model, and serial number. Additionally, the description process can include a list of embedded devices, embedded services, and URLs that are used to access device features.
Control: Control points use URLs that are provided during the description process to access additional XML information that describes actions to which the Universal Plug and Play device services respond, with parameters for each action. Control messages are formatted in XML and use SOAP.
Eventing: When a control point subscribes to a service, the service sends event messages to the control point to announce changes in device status. Event messages are formatted in XML and use General Event Notification Architecture (GENA).
Presentation: If a Universal Plug and Play device provides a presentation URL, a browser can be used to access interface control features, device or service information, or any device-specific abilities that are implemented by the manufacturer.
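
As a minimal sketch of the discovery step, the following sends the standard SSDP M-SEARCH request to the well-known multicast address 239.255.255.250, port 1900 (response parsing is omitted; the search target ssdp:all asks every UPnP device to respond):

    #include <string.h>
    #include <sys/types.h>
    #include <sys/socket.h>
    #include <netinet/in.h>
    #include <arpa/inet.h>
    #include <unistd.h>

    /* Send an SSDP M-SEARCH over UDP multicast (UPnP discovery). */
    int ssdp_discover(void)
    {
        const char *msg =
            "M-SEARCH * HTTP/1.1\r\n"
            "HOST: 239.255.255.250:1900\r\n"
            "MAN: \"ssdp:discover\"\r\n"
            "MX: 2\r\n"                /* devices reply within 2 seconds */
            "ST: ssdp:all\r\n\r\n";    /* search target: all devices     */

        int sock = socket(AF_INET, SOCK_DGRAM, 0);
        if (sock < 0)
            return -1;

        struct sockaddr_in dst = {0};
        dst.sin_family = AF_INET;
        dst.sin_port = htons(1900);
        inet_pton(AF_INET, "239.255.255.250", &dst.sin_addr);

        ssize_t n = sendto(sock, msg, strlen(msg), 0,
                           (struct sockaddr *)&dst, sizeof dst);
        /* Each responding device sends back a UDP datagram whose LOCATION
         * header carries the description URL used in the next step; those
         * replies would be read here with recvfrom(). */
        close(sock);
        return n < 0 ? -1 : 0;
    }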

What problem might occur in a shared memory process communication? How can you overcome that problem? Illustrate your answer with an example.

Sharing memory is a classical IPC mechanism in operating systems: one process creates an area in RAM that other processes can then access. Since the processes access the shared area like regular working memory, this is a very fast way of communicating. The Linux 2.6 kernel series provides /dev/shm (a world-writable directory stored in memory, with a limit defined in /etc/default/tmpfs) for shared-memory IPC; the program PulseAudio uses it extensively.
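
A minimal sketch of this style of IPC, assuming POSIX shm_open()/mmap() (on Linux the object appears under /dev/shm; the object name /demo_shm is hypothetical):

    #include <fcntl.h>
    #include <string.h>
    #include <sys/mman.h>
    #include <unistd.h>

    /* Create a small POSIX shared-memory object and write a message into it.
     * A second process can shm_open() the same name and mmap() it to read
     * the message. Link with -lrt on older glibc versions. */
    int main(void)
    {
        int fd = shm_open("/demo_shm", O_CREAT | O_RDWR, 0666); /* hypothetical name */
        if (fd < 0 || ftruncate(fd, 4096) < 0)
            return 1;

        char *mem = mmap(NULL, 4096, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
        if (mem == MAP_FAILED)
            return 1;

        strcpy(mem, "hello from process X");  /* visible to every process mapping it */
        munmap(mem, 4096);
        close(fd);
        return 0;
    }
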
A problem arises on multiple-processor architectures. If a cache-coherence policy is not in place, each accessing processor may work on its own cached copy of the shared memory, which may not be current, eventually leading to the shared memory being overwritten with inaccurate data.
Example:
Processes X and Y share location L.

  1. X reads L.

  2. X writes L.

  3. Y reads L.

  4. Y writes L.

  5. X reads L, but this is not a real read: the content of L is already in X's cache, and because cache coherence is not enforced, the stale old value is returned.

  6. X writes L, and now writes wrong data, because the earlier read was stale: garbage in, garbage out.

If a cache-coherence policy is adopted, the caches are kept consistent across accesses, so each processor sees the updated value.
Otherwise, use of the cache can be disabled for the shared memory locations, though this requires a rather complicated implementation.
Another method is to disable the cache altogether. This is not advised, as it seriously impacts performance.
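
As a minimal sketch of the manual discipline on a non-coherent system: the writer flushes its cache after writing and the reader invalidates its cache before reading, so both sides always go through main memory. Here cache_flush() and cache_invalidate() are hypothetical placeholders for the platform's cache-maintenance operations:

    #include <stddef.h>

    /* Hypothetical platform cache-maintenance hooks (names invented). */
    extern void cache_flush(void *addr, size_t len);       /* write dirty lines back  */
    extern void cache_invalidate(void *addr, size_t len);  /* drop stale cached lines */

    /* Writer side: push the update out to main memory. */
    void shared_write(volatile int *L, int value)
    {
        *L = value;
        cache_flush((void *)L, sizeof *L);
    }

    /* Reader side: force the next access to come from main memory. */
    int shared_read(volatile int *L)
    {
        cache_invalidate((void *)L, sizeof *L);
        return *L;
    }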