Indian Script Code for Information Interchange or ISCII


1 Character Encoding

1.1 Introduction

Internally, a computer handles all information as numbers. So a word like “word” is stored and handled in a numeric representation. Characters are mapped to numbers with the help of a method called ‘Character Encoding’.

In the simplest case, for English characters we can use:

a = 00; b = 01; c = 02; d = 03; e = 04; f = 05; g = 06; h = 07; i = 08; j = 09;
k = 10; l = 11; m = 12; n = 13; o = 14; p = 15; q = 16; r = 17; s = 18; t = 19;
u = 20; v = 21; w = 22; x = 23; y = 24; z = 25.

So if we want to encode “word” using the above encoding, the word will look like this inside a computer’s memory:

22 14 17 03

Now if we want to represent “a word”, we realise we cannot do so, as we do not have a character encoding for ‘blank’. We can write “aword” but not “a word”. So we add a new character to the above list.

a = 00; b = 01; c = 02; d = 03; e = 04; f = 05; g = 06; h = 07; i = 08; j = 09;
k = 10; l = 11; m = 12; n = 13; o = 14; p = 15; q = 16; r = 17; s = 18; t = 19;
u = 20; v = 21; w = 22; x = 23; y = 24; z = 25;
"blank" = 26

So “a word” using the above encoding will look like this inside a computer’s memory:

00 26 22 14 17 03

Now if we want to represent “A word” using the above encoding, we face another problem. Our encoding has numeric codes only for lower case letters and not upper case ones. So we need to add those too. And how about punctuation marks? We need to add them. How about often-used symbols like Rs to represent rupees, @ used in email addresses, or %, +, – and others used in mathematics? Or the characters that represent a number, like 0, 1, 2, 3, etc.?
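
The toy table above can be sketched as a tiny function. This is purely illustrative, not part of any real standard, and the name toy_encode is ours:

```c
/* Toy 27-symbol encoding from the table above: a = 00 .. z = 25, blank = 26.
   Illustrative only; not part of any real standard. */
int toy_encode(char c)
{
	if (c == ' ')
		return 26;              /* our "blank" character */
	if (c >= 'a' && c <= 'z')
		return c - 'a';         /* a maps to 0, b to 1, and so on */
	return -1;                      /* character not in our table */
}
```

A real encoder would also have to handle upper case, punctuation and digits, which is exactly the problem discussed above.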


1.2 ASCII

ASCII stands for American Standard Code for Information Interchange and is pronounced “ask-key”.

In the early days of computing, people realised that even though the problem of encoding characters (assigning them numeric values) had a simple solution, without a standard it could lead to a very confusing situation.

To illustrate, let us look at the above example. Say computer manufacturer A builds a machine using the same encoding as above, but computer manufacturer B decides, for some reason, to use:

a = 10; b = 11; c = 12; d = 13; e = 14; f = 15; g = 16; h = 17; i = 18; j = 19;
k = 20; l = 21; m = 22; n = 23; o = 24; p = 25; q = 26; r = 27; s = 28; t = 29;
u = 30; v = 31; w = 32; x = 33; y = 34; z = 35;

So B’s representation of “none” is 23 24 23 14; if the same code is carried to A’s machine, it becomes “xyxo”.
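
A quick sketch of this mismatch, with made-up decoder names for the two manufacturers’ tables:

```c
/* Decode a numeric code with manufacturer A's table (a = 00)
   and with manufacturer B's table (a = 10). */
char decode_A(int code) { return (char)('a' + code); }
char decode_B(int code) { return (char)('a' + code - 10); }

/* The codes 23 24 23 14 decode to "none" on B's machine
   but to "xyxo" on A's machine. */
```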

Realising this could lead to incompatibility and confusion, a standard for character encoding was decided upon.

Much of the credit for this standard goes to Robert W. Bemer’s work. The work started in 1963 and, by 1968, a standard seven-bit code was finalised by ANSI (as the ANSI X3.4 standard). It was called American Standard Code for Information Interchange, or ASCII.

This standard included non-printing characters (like blank, etc), typographic symbols, punctuation marks, English lower case and upper case characters, numbers and other symbols.

As it was a 7-bit code, the maximum number of characters it could encode was 128; starting from 0, the numbers went up to 127. It was around this time that another standard was decided upon: that a byte would be 8 bits. The 8th, unused bit was used for parity checks and, in some systems, to mark the end of a string.

As the usage of computers spread to other countries, this standard came to be referred to as US-ASCII. ASCII and its national variants were declared the international standard ISO 646 in 1972. So the names were of the form ISO-646-xx, where xx was a two-character country code (CA for Canada, CN for China, CU, DE, DK, ES, FR, HU, IT, JP, KR, NO, PT, SE, UK, YU and so on). No, there was no IN for India.

A general symbol for currency, “¤”, was chosen because many socialist countries did not want to use “$”.

As computing progressed, people realised that 128 characters were not enough, especially as more countries began to participate in computing efforts. Also, the practice of having a parity bit was no longer needed or considered a good idea. It was time for an upgrade.


1.3 EBCDIC

EBCDIC stands for Extended Binary Coded Decimal Interchange Code and is pronounced “eb-sih-dik”.

It was an extension of the 4-bit Binary Coded Decimal encoding. It was devised by IBM in the 1963–64 timeframe and predates ASCII, which was finalised in 1968. EBCDIC is an 8-bit encoding vs. the 7-bit encoding of ASCII.

Being an 8-bit code, the maximum number of characters it could encode was 256; starting from 0, the numbers went up to 255.

It is a very IBM-specific code and is not used much outside their mainframe family.

1.4 Extended ASCII

Extended ASCII is also referred to as 8-bit ASCII.

Realising the shortcomings of the 7-bit code, an 8-bit version was standardised. It included the US-ASCII encoding as the first 128 characters (0-127), and another 128 (128-255) were added. It was first used by IBM for their PCs. Eventually, ISO released a standard, ISO 8859, describing 8-bit ASCII extensions. The US-ASCII based one was ISO 8859-1 (popularly called ISO Latin1). The one for Eastern European languages was standardised as ISO 8859-2, the one for Cyrillic as ISO 8859-5, the one for Thai as ISO 8859-11, and so on. No, none was standardised for any Indian language. ISO 8859-15, or Latin-9, was a revision of 8859-1 to include the euro symbol.

As usage of computers spread to more countries and more information was being shared using character encoding, limitations of this encoding began to surface.

One was that 256 codes were not sufficient to encode the characters of many languages. Also, because the different ISO 8859-x standards assigned overlapping codes, strange characters would show up if text was viewed using a different variant than the one it was encoded in. It was time for an upgrade.

2 Unicode

2.1 Introduction

In 1991, a new standard was proposed. It was called Unicode and aimed at providing one big character encoding table that has all characters of almost all languages being used in computing or otherwise and internationally used symbols.

Unicode is a 16-bit code. Compared to the 256 of extended ASCII, it can encode (provide numeric codes for) up to 65536 characters (0-65535). The extended ASCII codes were also preserved: the first 256 characters encoded are the same as extended ASCII. Thus, to convert ASCII to Unicode, take all one-byte ASCII codes and add an all-zero byte in front to extend them to 16 bits.

While on one hand it meant that simple text files were now double the size, on the other hand a standard was achieved that could serve as a cross-language character encoding method.
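
The conversion described above amounts to zero-extending each byte. A minimal sketch (the function name is ours):

```c
#include <stddef.h>

/* Widen 8-bit extended-ASCII codes to 16-bit Unicode code units
   by prepending an all-zero byte, as described above. */
void ascii_to_unicode(const unsigned char *in, unsigned short *out, size_t n)
{
	for (size_t i = 0; i < n; i++)
		out[i] = (unsigned short)in[i];   /* high byte is zero */
}
```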

2.2 Indian Languages Supported

There are 18 Indian languages (at the last count) listed in the Eighth Schedule of the Indian Constitution.

They are: Assamese, Bengali, Gujarati, Hindi, Kannada, Kashmiri, Konkani, Malayalam, Manipuri, Marathi, Nepali, Oriya, Punjabi, Sanskrit, Sindhi, Tamil, Telugu and Urdu.

Out of these, characters from the scripts of the following languages are currently part of the Unicode standard (4.0): Urdu, Hindi, Bengali, Punjabi, Gujarati, Oriya, Tamil, Telugu, Kannada and Malayalam.

The Unicode standard is a work in progress. There are many proposals being considered for inclusion. One of those being investigated is the encoding of Vedic characters.

2.3 Codes for Indian Scripts

Language       Script            Code (hex)
                               Start     End
Urdu           Arabic          0600      067f
Hindi          Devanagari      0900      097f
Bengali        Bengali         0980      09ff
Punjabi        Gurmukhi        0a00      0a7f
Gujarati       Gujarati        0a80      0aff
Oriya          Oriya           0b00      0b7f
Tamil          Tamil           0b80      0bff
Telugu         Telugu          0c00      0c7f
Kannada        Kannada         0c80      0cff
Malayalam      Malayalam       0d00      0d7f

The code for the Rs (rupee) symbol is 20a8.
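
The ranges in the table can be used to tell which script block a 16-bit code falls in. A sketch (the function name is hypothetical):

```c
/* Classify a code point into an Indic script block using the
   Unicode ranges from the table above. */
const char *indic_script(unsigned int cp)
{
	if (cp >= 0x0900 && cp <= 0x097f) return "Devanagari";
	if (cp >= 0x0980 && cp <= 0x09ff) return "Bengali";
	if (cp >= 0x0a00 && cp <= 0x0a7f) return "Gurmukhi";
	if (cp >= 0x0a80 && cp <= 0x0aff) return "Gujarati";
	if (cp >= 0x0b00 && cp <= 0x0b7f) return "Oriya";
	if (cp >= 0x0b80 && cp <= 0x0bff) return "Tamil";
	if (cp >= 0x0c00 && cp <= 0x0c7f) return "Telugu";
	if (cp >= 0x0c80 && cp <= 0x0cff) return "Kannada";
	if (cp >= 0x0d00 && cp <= 0x0d7f) return "Malayalam";
	return "other";
}
```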


3 ISCII

3.1 Introduction

ISCII stands for Indian Script Code for Information Interchange. It is commonly read as an abbreviation, but at times it is also pronounced (“is” as in “this”) “is-kii”, making it sound like the two Hindi words “is” and “kii”. In Hindi, it is referred to as “Soochna Antrvinimay kay liye Bhartiye Lipi Sahita”.

It was established as a standard by the Bureau of Indian Standards (IS 13194:1991) in 1991 and is based on an earlier Indian Standard, IS 10401:1982. ISCII is an 8-bit standard in which the lower 128 characters (0-127) conform to the ASCII standard. The higher 128 characters (128-255) are used to encode characters from an Indian script. Unicode has largely preserved the ISCII encoding strategy. Though it allocates different codes, Unicode is based on the ISCII-1988 revision and is a superset of the ISCII-1991 character encoding. Thus, texts encoded in ISCII-1991 may be automatically converted to Unicode values and back to their original encoding without loss of information.

The Indian languages Urdu, Sindhi and Kashmiri are primarily written in Perso-Arabic scripts, but they can be (and sometimes are) written in Devanagari too. Sindhi is also written in the Gujarati script. Apart from the Perso-Arabic scripts, all the other scripts used for Indian languages have evolved from the ancient Brahmi script and share a common phonetic structure, making a common character set possible. By simply switching the script in use, an automatic transliteration is achieved.

As per the standard, the following mnemonics are used for Indian scripts:

DEV: Devanagari, PNJ: Punjabi, GJR: Gujarati, ORI: Oriya, BNG: Bengali, ASM: Assamese, TLG: Telugu, KND: Kannada, MLM: Malayalam, TML: Tamil, RMN: Roman.

Thus, ISCII-91 DEV is the character encoding for characters from Devanagari, which is used to write Hindi, etc. ISCII-91 ORI is the character encoding for Oriya, and so on.

3.2 Indian Languages Supported

Character encoding for the following scripts has been standardised by ISCII, and they can be used to write the Indian languages. The scripts are:

Devanagari, Punjabi, Gujarati, Oriya, Bengali, Assamese, Telugu, Kannada, Malayalam, Tamil and Roman.


4 ISFOC

4.1 Introduction

ISFOC stands for Indian Script Font Codes.

Originally designed along with ISCII, ISFOC has not yet been standardised by BIS. A new draft, however, was written in 2003.


4.2 Related Terms

TTF stands for TrueType Font. It is a font standard developed in the late 1980s. The biggest advantage of using TrueType was that a user could increase or decrease a font’s size without losing quality. With earlier fonts, if a user used any non-standard size, the fonts would get jagged and lose their clarity and visual quality.

OTF stands for OpenType Font. It is a font standard announced in 1996. OpenType fonts are Unicode-based and hence can support any language Unicode supports. Being based on Unicode, they can support up to 65,536 glyphs (as characters/symbols are referred to in fonts).

USP stands for Unicode Script Processor. This is a technology developed for complex scripts like Arabic, Hebrew, Thai, Indic, etc.

UTF stands for Unicode Transformation Format. There are several mechanisms to physically implement Unicode, based on storage-space considerations, source-code compatibility, and interoperability with other systems. These mapping methods are referred to as UTFs.

One of the most popular UTF mapping methods is UTF-8. It was created by Rob Pike and Ken Thompson (yes, it is those guys). They even wrote “Hello World” in several languages in the title of the paper describing it.

It is a variable-length encoding which uses groups of bytes to represent the Unicode code points of many of the world’s languages. Thus, it may use 1 to 4 bytes per character, depending on the Unicode symbol.
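
A sketch of the variable-length scheme for code points up to U+FFFF (the 4-byte forms for the higher planes are omitted, and the function name utf8_encode is ours):

```c
/* Encode one code point (up to U+FFFF) as UTF-8.
   Returns the number of bytes written to buf. */
int utf8_encode(unsigned int cp, unsigned char *buf)
{
	if (cp < 0x80) {                 /* 1 byte:  0xxxxxxx */
		buf[0] = (unsigned char)cp;
		return 1;
	} else if (cp < 0x800) {         /* 2 bytes: 110xxxxx 10xxxxxx */
		buf[0] = 0xc0 | (cp >> 6);
		buf[1] = 0x80 | (cp & 0x3f);
		return 2;
	} else {                         /* 3 bytes: 1110xxxx 10xxxxxx 10xxxxxx */
		buf[0] = 0xe0 | (cp >> 12);
		buf[1] = 0x80 | ((cp >> 6) & 0x3f);
		buf[2] = 0x80 | (cp & 0x3f);
		return 3;
	}
}
```

Note how plain ASCII passes through as single bytes, while a Devanagari letter takes three bytes.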

All Internet protocols have to identify the encoding used for character data. As per IETF requirements, UTF-8 is at least one of the encodings supported by all Internet protocols.

4.3 Fonts for Indian Script

ASCII, ISCII, Unicode, etc. come into the picture in the back-end, for storing and processing text/data. For front-end visual rendering, fonts come into the picture too.

In the case of Indic scripts, displays usually need an on-the-fly conversion from ISCII-to-Font and Font-to-ISCII. This is needed, for example, for conjuncts of characters and matras. In the absence of such rendering, ISCII characters are displayed in the order in which they are stored instead of how they are expected to be seen visually.

For example, in Hindi the small “ee” matra comes after a character but is visually placed before it. A display that has the on-the-fly ISCII-to-Font conversion will show it before the character. But a display without that conversion will show it after the character for the same ISCII file. That is not how one expects the small “ee” matra to be placed.

Another example from Hindi: the “halant” symbol is placed under a character. A display that has the on-the-fly ISCII-to-Font conversion will show it under the character, but one without will not.

Due to this complexity in the rendering and editing of Indic scripts, their display is referred to as being non-linear in nature. Some of the challenges in Indic script rendering are:

  • Glyphs have variable widths and have positional attributes.
  • Vowel signs can be attached to the top, bottom, left and right sides of the base consonant.
  • Vowel signs may also combine with consonants to form independent glyphs.
  • Consonants frequently combine with each other to form complex conjunct glyphs.

4.4 Searchability

As Indic fonts have not been standardised and ISFOC or any other font-encoding standard has not been developed, most Indic-language-supporting websites and software vendors had to develop their own fonts. These fonts are totally incompatible with each other.

This incompatibility leads to these issues:

  • Text composed in an editor using one Indic font cannot be opened/edited in another editor using some other font.
  • As vendors would have to pay to support other vendors’ fonts, that would increase the cost of their products. That discourages compatibility.
  • Indian-language processing remains limited to word processing, DTP and printing only.
  • Web pages and email messages composed using one Indic font can be viewed only if that font is also attached/downloaded and installed at the receiver’s end. This leads to a user having to install and manage an unnecessary number of fonts.

This has led to hundreds of web pages all over the net in Indic languages, none compatible with another. This severely handicaps web search engines, which are unable to provide meaningful search results from those web pages, virtually making the information inaccessible.

With inputs from Hariram Pansari – hrpansari at yahoo dot com


Introduction to Real-Time Operating Systems (RTOS)

1 Introduction to RTOS

1.1 Understanding Realtime Operating System (RTOS)

To offer an analogy, Realtime systems work just like a driver driving his racing car at very high speed.

The driver has to take the right decision quickly and in time. A decision to turn left is of no use after the turn has been missed.

The driver has to know his environment very well. The car would skid if the road were wet or icy.

The driver should know the capability of the car. If the driver tries to push it beyond its limits, the car will break down. And a breakdown at high speed can lead to fatal accidents.

The driver should know what information the various meters on the dashboard provide. The speed of the car, how much fuel is left, the temperature of the engine, the air pressure in the tyres, etc. are vital for the driver of such a fast car so that valid decisions can be made.

What happens if the driver turns on a wet road at very high speed? The car may have an accident. The driver of a racing car will always be prepared for such a possibility in terms of safeguards like helmets, escapes like a door that will open in one click, and recovery like an ambulance nearby.

As we study more about Realtime systems, we will notice the similarities between such a system and the driver.

Now consider you are cycling down a road that is in a very bad shape.

A quick veer to either side may be needed, just in time to avoid a pothole you had almost failed to notice. But if you fail, you will get a total-body jerk that might shake you up a bit, but you will not need an ambulance to recover.

Or consider your average day in front of your home PC.

You are moving the mouse and the pointer moves accordingly. Now, what happens if, due to overload, the mouse is slow to respond? You will be irritated, but the world will not come crashing down. High efficiency in the home PC’s response to the mouse’s movement is desired, but it is not critical.

That is the essence of the difference between a Realtime system and other systems: fatality, as compared to irritation.

2 Some Basic Concepts and Definitions


2.1 POSIX

POSIX is an acronym that stands for Portable Operating System Interface.

The POSIX standard is heavily influenced by the UNIX operating system. As the name suggests, the aim of this standard is to provide a common interface for operating systems for the purpose of portability at the source-code level.

The following quote appears in the Introduction to POSIX.1: “The name POSIX was suggested by Richard Stallman. It is expected to be pronounced pahz-icks as in positive, not poh-six, or other variations. The pronunciation has been published in an attempt to promulgate a standardized way of referring to a standard operating system interface”.

For the sake of uniformity, we shall use POSIX interfaces in examples.

POSIX includes interfaces for:

  • asynchronous I/O
  • semaphores
  • message queues
  • memory management
  • queued signals
  • scheduling
  • clocks and timers

For more on POSIX, please refer to

Note: The meaning of terms process, thread, task or job varies from one environment to another. Therefore, it is necessary to properly define these terms as used here.

2.2 Process

A process typically refers to a “program in execution” and the set of resources associated with it. As per POSIX definition, a process is an address space with one or more threads executing within that address space, and the required system resources for those threads. Many of the system resources are shared among all of the threads within a process.

2.3 Thread

A thread typically refers to a unit of flow of control that can be scheduled. A process contains at least one thread. As per the POSIX definition, a thread is a single flow of control within a process. Each thread has its own thread ID, scheduling priority, errno value, etc. All threads executing in the same process share the same address space (and hence can mess each other up).

2.4 Task

Task is not a standard term. Sometimes it refers to a process and sometimes to a thread. In Realtime systems, it often refers to a thread and that is how it is used here.

2.5 Job

Job is again not a standard term. It is often used in place of the term task. We will not use this term.

2.6 Multi-Tasking

Multitasking refers to the ability of a system to execute more than one task at the same time. However, in reality, multitasking just creates the appearance of many tasks running concurrently; the tasks are actually interleaved rather than concurrent.

RTOS - Task A, Task B and Task C running concurrently

Fig 1: Task A, Task B and Task C running concurrently

RTOS - Task A, Task B and Task C running interleaved

Fig 2: Task A, Task B and Task C running interleaved

2.6.1 Cooperative Multitasking
In cooperative multitasking, each task can control the CPU for as long as it needs it. Thus, the task currently controlling the CPU must offer control to other tasks. It is called cooperative because all tasks must cooperate for it to work. If one task acts selfishly and does not cooperate, it can hog the CPU.

2.6.2 Preemptive Multitasking
In preemptive multitasking, the operating system allocates the CPU time slices to each task. Thus, preemptive multitasking forces tasks to share the CPU whether they want to or not.

2.7 Finite State machine

In computer science, a finite-state machine (FSM) or finite-state automaton (FSA) is an abstract concept.

An FSM can have only a fixed number of conditions or states, and it can go from one state to another via a fixed number of paths. All this is properly defined during the design of the FSM.

Imagine a small toy windmill whose arms are always rotating. The windmill arms can rotate in two directions, clockwise and counter-clockwise. It has a small remote that has two buttons. If one is clicked, the arms rotate in clockwise direction and if the other is clicked, they rotate in a counter-clockwise direction.

This whole system is a simple FSM. The windmill has two states, and there are two transitions to move between the states. One thing to note is that a state does not have any further internal structure: when an FSM is in a state, it is simply in that state; a state cannot have any sub-states.

RTOS - States and State Transitions of the Windmill

Fig 3: States and State Transitions of the Windmill
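
The windmill FSM can be written down directly; the type and function names below are illustrative:

```c
/* A sketch of the two-state windmill FSM described above. */
typedef enum { CLOCKWISE, COUNTER_CLOCKWISE } WindmillState;
typedef enum { BUTTON_CW, BUTTON_CCW } Button;

WindmillState windmill_next(WindmillState s, Button b)
{
	(void)s;  /* here the next state depends only on the button pressed */
	return (b == BUTTON_CW) ? CLOCKWISE : COUNTER_CLOCKWISE;
}
```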

2.8 State

Operating systems keep track of the various tasks running with help of numerous parameters. One of them is the internal state of a task. This is a very common model and many operating systems use states to keep track of the internal condition of a task.

Thus, the state of a task is the last known or current status of the task.

The states and their transition are different for different operating systems. However, typical states of a task that we will use here can be:

READY: The state of a task that is not waiting for any resource other than the CPU.
PEND: The state of a task that is blocked due to the unavailability of some resource (other than the CPU).
DELAY: The state of a task that has been put to sleep for some duration.

2.9 State Transitions

As we saw in the windmill example, there are always well-defined ways to change the state of a system.

The following table lists some reasons or ways we can change the state of a system:

READY --> wait for a resource --> PEND
READY --> delay command       --> DELAY
PEND  --> resource acquired   --> READY
DELAY --> delay expires       --> READY


RTOS - State Diagram

Fig 4: State Diagram
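
The transition table above can be sketched as a function; the enum and function names are ours:

```c
/* Sketch of the task-state transitions tabulated above. */
typedef enum { READY, PEND, DELAY } TaskState;
typedef enum { WAIT_RESOURCE, DELAY_CMD, RESOURCE_ACQUIRED, DELAY_EXPIRED } Event;

TaskState next_state(TaskState s, Event e)
{
	if (s == READY && e == WAIT_RESOURCE)     return PEND;
	if (s == READY && e == DELAY_CMD)         return DELAY;
	if (s == PEND  && e == RESOURCE_ACQUIRED) return READY;
	if (s == DELAY && e == DELAY_EXPIRED)     return READY;
	return s;  /* no transition defined for this pair: stay put */
}
```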

2.10 Preemption

Preemption is defined as the act of a higher-priority process taking control of the processor from a lower-priority task.

2.11 Context Switch

As we saw earlier, in a multitasking system various tasks can be interleaved, and the task switch can be preemptive. When one task preempts another task out of the CPU to use the CPU for itself, the CPU is said to have undergone a context switch.

Thus, context switch refers to the changes necessary in the CPU in response to preemption when the scheduler schedules a different task to run on the CPU. This involves two things:

  • switching out the outgoing task, and
  • switching in the incoming task

For the outgoing task, this may involve typical things like saving the contents of the registers, the stack, etc. For the incoming task, it may involve loading the saved values back into the registers, stack, etc.
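
A real context switch is done in assembly and saves the actual CPU registers; the sketch below only models the idea, with the “CPU state” as a plain struct copied between hypothetical task control blocks:

```c
/* Hypothetical CPU state saved and restored on a context switch. */
typedef struct {
	unsigned long regs[16];   /* general-purpose registers */
	unsigned long sp;         /* stack pointer */
	unsigned long pc;         /* program counter */
} Context;

void context_switch(Context *cpu, Context *outgoing, const Context *incoming)
{
	*outgoing = *cpu;     /* switch out: save the outgoing task's state */
	*cpu = *incoming;     /* switch in: load the incoming task's state */
}
```

The time taken by these two copies (plus cache and pipeline effects on real hardware) is the context switch overhead discussed next.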

2.12 Context Switch Overhead

It takes a finite amount of time for the system to switch from one running task to a different task. This is referred to as context switching overhead.

2.13 Synchronization

Tasks typically share resources and services, for which they must be prepared to wait if these are not available immediately. Synchronization is the management of resources and tasks such that resources are allocated fairly among the tasks.

3 Scheduling

As we saw earlier, in a multi-tasking system various tasks can be interleaved. That is, one task can run for some time, then the CPU is allocated to another task, and so on. We also saw that the task switch could be preemptive or cooperative.

The process of determining which task runs when, in a multi-tasking system, is referred to as CPU scheduling, or plain scheduling. The algorithm followed to decide who gets the next turn on the CPU is called the scheduling algorithm. The program that does this is called the scheduler.

Thus, when we have a pool of tasks in READY state, contending for the CPU time, the job of the scheduler is to distribute the resource of the CPU to the different processes in a fair manner. The definition of fair depends on the designer of the system and varies. Based on this need of fairness, various scheduling algorithms are chosen.

There are two scheduling actions per task instance: one when the task is context switched in and begins to execute, and another when it completes.

3.1 Round-Robin Scheduling

A round-robin scheduling algorithm attempts to share the CPU fairly among all READY tasks. Round-robin scheduling achieves this by time slicing: each task is given CPU time for a defined interval, or time slice. All tasks get an equal interval, and all are executed in rotation.

RTOS - Round-Robin Scheduling

Fig 5: Round-Robin Scheduling

The disadvantage of this scheduling is that if a task of very high priority needs CPU time, it has to wait for its turn. Since, in a Realtime system, a high-priority task should be processed first, this scheduling as such does not serve the purpose.

3.2 Priority Scheduling

A priority-based scheduling algorithm allocates a priority to each task. The priority level is usually expressed as a number. If there are 256 levels of priority, zero could be the highest priority level a task can have and 255 the lowest. The CPU is allocated to the highest-priority task that is in READY state. This can also be called Fixed-Priority Scheduling, as the scheduler does not change the priority of a task.

RTOS - Priority Scheduling

Fig 6: Priority Scheduling

Only in Fig 6 has the context switch overhead been shown explicitly, between the end of one task’s run and the start of another, for the sake of illustration.

The disadvantage of this scheduling is that if two tasks having the same priority need CPU time, the one starting earlier will starve the other of processor time till it is done. Thus, even if the second task is a high-priority task, the system may not be able to complete it in time because of another task of equal priority. Thus, this scheduling as such does not serve the purpose.

3.3 Fixed-Priority Preemptive Round-Robin Scheduling

One way of making use of the advantages of both scheduling methods, without letting the disadvantages spoil the Realtime-ness, is to use a mix of the two.

A priority-based preemptive round-robin scheduling algorithm allocates a priority to each task. The CPU is allocated to the highest-priority task that is in READY state. However, if there is more than one task at that priority level, they are run in a round-robin manner.

RTOS - Fixed-Priority Preemptive Round-Robin Scheduling

Fig 7: Fixed-Priority Preemptive Round-Robin Scheduling

Most Realtime Operating Systems (RTOSes) support this scheme. From now on, we will refer to it simply as Fixed-Priority Preemptive Scheduling.
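
The selection rule of this scheme can be sketched as follows. The task table and function name are hypothetical, and, as in the text, a lower priority number means a higher priority:

```c
/* Sketch of a fixed-priority preemptive round-robin pick. */
typedef enum { READY, PEND, DELAY } TState;
typedef struct { int priority; TState state; } Task;

/* Return the index of the next task to run: the READY task with the
   best (lowest) priority number; among equal priorities, the first
   one after 'last' in rotation, which gives round-robin behaviour.
   Returns -1 if no task is READY. */
int schedule(const Task *t, int n, int last)
{
	int best = -1, bestpri = 1 << 30;
	for (int k = 1; k <= n; k++) {
		int i = (last + k) % n;        /* scan starting after 'last' */
		if (t[i].state == READY && t[i].priority < bestpri) {
			bestpri = t[i].priority;
			best = i;
		}
	}
	return best;
}
```

A real RTOS would typically keep per-priority ready queues instead of scanning, but the selection rule is the same.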

3.4 Rate-Monotonic Scheduling

This is a fixed-priority preemptive scheduling in which the priority of a (periodic) task is decided by the scheduler based on its frequency (the inverse of its period). Thus, the higher the frequency, the higher the priority.

3.5 Dynamic-Priority Preemptive Scheduling

It is similar to Fixed-Priority Preemptive Scheduling with one basic difference: the priority of a task can change from instance to instance, or within the execution of an instance.

3.6 Deadline-Monotonic Scheduling

In Deadline-Monotonic Scheduling, the deadline of a task is a fixed point in time relative to the beginning of the task. The shorter this deadline, the higher the priority, so that the task can finish in time.

4 Inter-Task Communication

When we have multi-tasking, there is a need for tasks to communicate with each other. That may be for the purpose of data sharing, synchronization, error handling or even exception handling.

4.1 Mutual Exclusion

Mutual exclusion ensures that there is no contention for resource access. It ensures that two tasks cannot access the same resource at the same time, which would lead to unpredictability. It also ensures that there is a proper method to request, hold and release a resource.

4.2 Inter-Task Communication Methods

  • Shared memory
  • Semaphores
  • Signals

4.2.1 Shared Memory

Shared memory is the simplest method for tasks to communicate. Since all tasks share the address space, all tasks can access a given memory address.

A data structure can be used to define a memory block. By accessing this shared data structure, data can be shared between various tasks.

The flip side of this method is that it can lead to horrible errors if proper care is not taken. Let us consider one such issue.

A RAW (Read After Write) error can happen when one task is writing to a memory location and another task is reading from it. The ‘Read’ by the reading task should follow the ‘Write’ by the writing task. But if the reading task reads that memory before the writing task has written any data to it, it ends up reading invalid data. This, in a Realtime system, can lead to catastrophic failures.

Such errors can be avoided using various synchronization techniques like resource locking using semaphores, temporarily disabling interrupts or temporarily disabling preemption.

4.2.2 Semaphores

Semaphores often provide the fastest intertask communication mechanism and address the need for both mutual exclusion and synchronization.

A semaphore can be viewed as a flag or a marker or a red/green jhandi that can be used to specify whether a resource is available or unavailable.

When a task tries to take a binary semaphore, the outcome depends on whether the semaphore is available or unavailable at the time of the call.

If the semaphore is available, the task takes it and the semaphore becomes unavailable. It is up to this task to release the semaphore once its need is over.

If the semaphore is unavailable, the task is put on a queue of blocked tasks and enters a state of PEND on the availability of the semaphore.

Mutual Exclusion Using Binary Semaphores

Suppose that at any given time, only one task is allowed to write into a memory location.

/* sample program 1 - mutual exclusion using a binary semaphore */

#include <semaphore.h>

sem_t sem;            /* a binary semaphore, initialised elsewhere
                         with sem_init(&sem, 0, 1) */
int someGlobalVar;

void sometaskA(void)
{
	sem_wait(&sem);    /* wait for the semaphore */

	/* got it! */
	someGlobalVar = 1;

	sem_post(&sem);    /* let go */
}

void sometaskB(void)
{
	sem_wait(&sem);    /* wait for the semaphore */

	/* got it! */
	someGlobalVar = 2;

	sem_post(&sem);    /* let go */
}

This way, when these two tasks are running in a multi-tasking environment, they will have to wait for the semaphore to be available before writing into the shared global memory.

Synchronization Using Binary Semaphores

Suppose there is a task that produces one unit of data randomly. It can write that data into a global memory. There is another task that has been written to process the data. A semaphore can be used to synchronize these two tasks.

/* sample program 2 - synchronization using a binary semaphore */

#include <semaphore.h>

sem_t sem;            /* initialised elsewhere with sem_init(&sem, 0, 1) */
int someGlobalVar;

void generatorTask(void)
{
	while (1) {
		sem_wait(&sem);    /* wait for the semaphore */

		/* wait for some random time */

		/* got it! generate data */
		someGlobalVar = 3;

		sem_post(&sem);    /* let go */
	}
}

void processorTask(void)
{
	while (1) {
		sem_wait(&sem);    /* wait for the semaphore */

		/* got it! that means generatorTask wrote something
		   to someGlobalVar */
		/* do something */

		sem_post(&sem);    /* let go */
	}
}

The processorTask is forever waiting on the semaphore. Whenever it becomes available, it means that there is valid data to process. Once it has done that, it lets go of the semaphore. In the meantime, generatorTask is already in the queue to grab it. As soon as processorTask lets go, generatorTask grabs it back and holds onto it till it has some valid data to send again.

This way when these two tasks are running in a multi-tasking environment, they can be synchronized.

4.2.3 Signals

Signals can be seen as software interrupts. They can asynchronously alter the control flow of a task. Any task or ISR can raise a signal for a particular task. The task being signaled, on receiving the signal, suspends its current thread of execution and executes the task-specified signal handler routine the next time it is scheduled to run.

Synchronization Using Signals

Suppose there is a task that produces one unit of data randomly. It can write that data into a global memory. There is another task that has been written to process the data. A signal can be used to synchronize these two tasks.

/* sample program 3 - synchronization using signals */

#include <signal.h>
#include <sys/types.h>

int someGlobalVar;

void catchProcessorTaskSignals(int signal)
{
	/* implement the processing of generatorTask-generated data here */
}

void generatorTask(void)
{
	pid_t processorID;

	/* use an OS-specific method to get the ID of the processor task */
	processorID = someOSCall();

	while (1) {
		/* wait for some random time */

		/* generate data */
		someGlobalVar = 3;

		/* send a signal */
		kill(processorID, SOME_SIGNAL);
	}
}

void processorTask(void)
{
	struct sigaction aSigaction;

	/* specify the signal handler */
	aSigaction.sa_handler = catchProcessorTaskSignals;
	sigemptyset(&aSigaction.sa_mask);
	aSigaction.sa_flags = 0;
	sigaction(SOME_SIGNAL, &aSigaction, NULL);

	/* do something else */
}

The generatorTask raises a signal as soon as the data is ready. When the processorTask was launched, it specified a handler function for that signal. As soon as the generatorTask sends the signal, the processorTask will execute the code in catchProcessorTaskSignals(), inside the context of processorTask only.

This way when these two tasks are running in a multi-tasking environment, they can be synchronized.

4.3 Priority Inversion

Priority inversion arises when a higher-priority task is forced to wait for an indefinite period for a lower-priority task to complete and give up the resource.

Let us consider two tasks, taskHighPri and taskLowPri, that have high and low priority respectively. taskLowPri acquires some resource by taking its associated binary semaphore. Along comes taskHighPri and preempts taskLowPri. Now it wants to use the same resource and waits on the same semaphore that taskLowPri has taken. Since taskLowPri has been preempted, it is unable to finish what it was doing and give up the semaphore. Thus, taskHighPri stays blocked for an indefinite period.

There are ways to handle such scenarios. One way is that a task that holds a resource executes at the same priority as the highest-priority task blocked on that resource.

So in the above case, taskLowPri’s priority is raised to that of taskHighPri. That way, having the same priority as taskHighPri, taskLowPri preempts it (round robin is followed when many tasks of the same priority are in READY state), finishes its work and gives up the semaphore. As soon as its work gets done, its priority goes back to normal and the situation is taken care of.
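
This priority-inheritance fix can be sketched as follows. The struct and function names are ours, and, as in the scheduling examples, a lower number means a higher priority:

```c
/* Hypothetical task record: base priority and current (possibly
   boosted) priority; lower number = higher priority. */
typedef struct { int base_pri; int cur_pri; } Task;

/* When a task blocks on a resource, boost the resource holder to the
   blocked task's priority if that is higher (priority inheritance). */
void inherit_priority(Task *holder, const Task *blocked)
{
	if (blocked->cur_pri < holder->cur_pri)
		holder->cur_pri = blocked->cur_pri;
}

/* When the holder releases the resource, its priority is restored. */
void restore_priority(Task *holder)
{
	holder->base_pri = holder->base_pri;
	holder->cur_pri = holder->base_pri;
}
```

Many RTOSes offer this as an option on their mutual-exclusion semaphores rather than as something the application codes by hand.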