What is an RTOS (Real-Time Operating System)?
A real-time operating system (RTOS) is a multitasking OS intended for real-time applications. Such applications include embedded systems (programmable thermostats, household appliance controllers, mobile telephones), industrial robots, spacecraft, industrial control (SCADA), and scientific research equipment.
An RTOS facilitates the creation of a real-time system, but does not by itself guarantee that the final result will be real-time; that requires correct development of the software. An RTOS does not necessarily have high throughput; rather, it provides facilities which, if used properly, guarantee that deadlines can be met generally (soft real-time) or deterministically (hard real-time). An RTOS typically uses specialized scheduling algorithms to give the developer the tools necessary to produce deterministic behaviour in the final system. An RTOS is valued more for how quickly and predictably it can respond to a particular event than for the amount of work it can perform over time. Key factors in an RTOS are therefore minimal interrupt latency and minimal thread-switching latency.
What is REAL TIME?
- Correctness of output depends on timing as well as result
- Hard vs. soft real time
- Are Windows and Linux real time? (In their stock configurations they are not hard real-time, though real-time variants and patches exist.)
Posted at 00:06 | Labels: RTOS | 0 Comments
Semaphore
A semaphore is a protected variable (an entity grouping one or more values, which may or may not be numerical) and constitutes the classic method for restricting access to shared resources, such as shared memory, in a multiprogramming environment (a system where several programs may be executing, or taking turns to execute, at once). Semaphores exist in many variants, though usually the term refers to a counting semaphore, since a binary semaphore is better known as a mutex. A counting semaphore is a counter for a set of available resources, rather than a locked/unlocked flag for a single resource. It was invented by Dijkstra. Semaphores are the classic solution for preventing race conditions in the dining philosophers problem, although they do not by themselves prevent resource deadlock.
Semaphores can only be accessed using the following operations. Those marked atomic must not be interrupted (that is, if the system decides that the "turn is up" for the program performing them, it must not stop it in the middle of those instructions), for the reasons explained below.
P(Semaphore s) // Acquire Resource
{
wait until s > 0, then s := s-1;
/* must be atomic once s > 0 is detected */
}
V(Semaphore s) // Release Resource
{
s := s+1; /* must be atomic */
}
Init(Semaphore s, Integer v)
{
s := v;
}
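The P/V pseudocode above maps directly onto a counting semaphore in practice. Here is a minimal sketch using Python's `threading.Semaphore` (the worker function, slot count, and sleep are illustrative, not from the original post): five workers compete for two resource slots, and the observed peak concurrency never exceeds the semaphore's count.

```python
import threading
import time

MAX_SLOTS = 2                      # resources guarded by the counting semaphore
sem = threading.Semaphore(MAX_SLOTS)

active = 0                         # workers currently holding the semaphore
peak = 0                           # highest concurrency observed
counter_lock = threading.Lock()    # protects the two counters above

def worker():
    global active, peak
    sem.acquire()                  # P(s): wait until s > 0, then decrement
    try:
        with counter_lock:
            active += 1
            peak = max(peak, active)
        time.sleep(0.05)           # simulate using the shared resource
        with counter_lock:
            active -= 1
    finally:
        sem.release()              # V(s): increment, possibly waking a waiter

threads = [threading.Thread(target=worker) for _ in range(5)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(peak)  # never exceeds MAX_SLOTS
```

`acquire()` blocks exactly like the "wait until s > 0" step in P, and the library guarantees the decrement is atomic once the wait is over.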
Posted at 23:57 | Labels: operating systems, os | 0 Comments
PROCESS vs THREAD
PROCESS
A process is an executing instance of a program. It owns resources such as memory and open files, and it may contain one or more threads.
THREAD
A thread is a sequence of instructions that the processor executes; it is the basic unit of scheduling within a process.
Difference
In a nutshell, a process can contain multiple threads.
A process, in the simplest terms, is an executing program. One or more threads run in the context of the process. A thread is the basic unit to which the operating system allocates processor time. A thread can execute any part of the process code, including parts currently being executed by another thread. A fiber is a unit of execution that must be manually scheduled by the application. Fibers run in the context of the threads that schedule them.
In most multithreading operating systems, a process gets its own memory address space; a thread doesn't. Threads typically share the heap belonging to their parent process.
For instance, a JVM runs in a single process in the host O/S. Threads in the JVM share the heap belonging to that process; that's why several threads may access the same object.
Typically, even though they share a common heap, threads have their own stack space. This is how one thread's invocation of a method is kept separate from another's.
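A minimal sketch of this sharing, assuming Python's `threading` module for illustration: both threads append to the same list object, which lives on the shared heap, while each invocation's local variables live on that thread's own stack.

```python
import threading

shared = []            # one object on the heap, visible to every thread
lock = threading.Lock()

def worker(name, n):
    # local variables like `i` live on this thread's own stack
    for i in range(n):
        with lock:     # serialize access to the shared object
            shared.append((name, i))

t1 = threading.Thread(target=worker, args=("a", 3))
t2 = threading.Thread(target=worker, args=("b", 3))
t1.start(); t2.start()
t1.join(); t2.join()

print(len(shared))  # 6: both threads wrote into the same list
```

Two separate processes running this code would each get their own `shared` list; only threads within one process see each other's appends.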
This is all a gross oversimplification, but it's accurate enough at a high level. Lots of details differ between operating systems.
A process is an execution of a program; a program contains a set of instructions, while a thread is a single sequential stream of execution within a process. A thread is sometimes called a lightweight process. A single thread allows the OS to perform only one task at a time within that process.
Similarities between processes and threads:
1) Both share the CPU.
2) Both execute sequentially.
3) Both can create children.
4) If one thread is blocked, the next can start to run, just as with processes.
Dissimilarities:
1) Threads are not independent of one another, unlike processes.
2) All threads can access every address in the task, unlike processes.
3) Threads are designed to assist one another, whereas processes may or may not assist one another.
A thread is a flow of execution, whereas a process is a group of instructions similar to a program, except that it may be stopped and started by the OS itself.
1. Different processes cannot work in the same memory area; each has its own memory segment. The threads of one process, however, work in the same memory area and can access any location within it.
2. A process carries the overhead of organizing and summing up all the work within it, whereas a thread is created for a single unit of work and is finished when that work is done.
3. Multiple processes must each acquire resources explicitly and share little; this is why a multi-process design is more expensive than a multithreaded one, in which threads share resources cheaply.
4. One process can run more than one thread.
This answer requires some knowledge of the Linux kernel. In Linux, a process is a program in execution. Two processes can share the text (code) section (as after a fork() call), but sharing the data sections of the processes is not possible. Before Linux kernel 2.0, the POSIX thread library lived entirely in user space, so multithreaded applications using the pthread library were seen by the kernel as a single process. Threads can share the heap but not the stack. Scheduling, resource management, and so on for those multithreaded applications took place in the user's address space. After kernel 2.4, the concept of the lightweight process (LWP) was implemented, in which two LWPs can share an address space, open files, and so on. With pthread-library threads attached to LWPs, real multithreading became visible to the kernel. So in Linux there is a difference between a process, an LWP, and a thread.
Don't look at the kernel level. Think of a process as a set of tasks. The problem arises when two or more tasks must be performed concurrently, because a process that contains exactly one thread cannot perform concurrent tasks. Threading is the mechanism used to divide the work into sets of tasks that can be performed independently of one another. By default a process contains exactly one thread; if the process is multithreaded, it can perform many tasks simultaneously. For example, take an editor program; that is a process: the process of editing a file. This editing process contains many tasks, such as typing (editing), saving, and printing. Suppose the process is automated: it takes input from the user, saves that input to a file automatically, and prints the file automatically. If the process contains only one thread, it cannot save your typing while you edit; it can save the file only when you stop editing, and it cannot print while you type, since only one task is done at a time. But if it contains three threads, with the second thread given the task of saving and the third given the task of printing, the file is saved automatically while you type, without your having to stop editing, and likewise the third thread can print your edits while you type. All three tasks can be performed simultaneously.
Consider a very simple database server running on a computer.
This server is a process. Now if the process was single threaded, then there would only be one point of execution. This would mean that only one client could connect to the database at a time. Any other clients would have to wait, unless multiple processes were created, for each client - this could have quite a large overhead.
A more realistic database would be multi-threaded. One database server 'process' would be running. For each client which connects to the database a thread, (or point of execution) would be created. In this way, many clients could connect, using a thread to interact with the database.
A process is a term used to describe the execution of a sequential program. A thread is a point of execution within the process.
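The thread-per-client pattern above can be sketched as a minimal simulation, assuming Python's `threading` module; the "database" here is just a dictionary and the client queries are hypothetical. One server process runs, and each connected client gets its own thread (point of execution) that queries the shared data.

```python
import threading

# A toy "database": a dict guarded by a lock so concurrent
# client threads do not interleave their accesses.
db = {"alice": 30, "bob": 25}
db_lock = threading.Lock()

results = {}                       # per-client answers
results_lock = threading.Lock()

def handle_client(client_id, key):
    # One thread per connected client: each is an independent
    # point of execution inside the single server process.
    with db_lock:
        value = db.get(key)
    with results_lock:
        results[client_id] = value

clients = [(1, "alice"), (2, "bob"), (3, "carol")]
threads = [threading.Thread(target=handle_client, args=c) for c in clients]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(results)  # {1: 30, 2: 25, 3: None}
```

A real server would accept sockets instead of a fixed list, but the structure is the same: the expensive alternative of forking a whole process per client is replaced by cheap threads sharing the server's data.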
Early computers were dedicated mainframes used for a single purpose and handled by a single group of people or a closely related group of people. The programs that ran on these systems were also developed by a related group of people. Everyone knew what kind of program was in use and also the limitations of this program. Even if this program was run concurrently all the concurrent instances of execution could share the same memory. This was fine as long as there was only a single dedicated program running. Later people started using a computer for a variety of computational tasks all running at the same time. (Like one program might be listing the number of records that are stored while another program might be computing how many users had accessed these stored records.) Now both the programs might have been written by two different groups of people. There is a risk that the second program might crash the first one. Hence an idea was proposed that each program 'thread' will have its own working space or memory which the other programs cannot access. Such an independent execution of a program came to be known as a 'process'. This could be contrasted with multiple instances of the same program that share the single working space or memory and retained the name 'thread'.
A process is a collection of virtual memory space, code, data, and system resources. A thread is code that is serially executed within a process. A processor executes threads, not processes.
The basic difference between a process and a thread is that every process has its own data memory, whereas all related threads can share the same data memory while keeping their own individual stacks.
A thread is a lightweight process; a collection of threads makes up a process.
Multiple processes can share resources by using IPC objects. When a process accesses such a resource, a context switch (from user mode to kernel mode) is required, which increases execution time.
With multiple threads, however, resources are shared while the threads stay in user mode, so that switch is not required and execution time is reduced.
A thread is a lightweight process with its own stack, but a collection of related threads can share the same execution memory. A process is a program, or part of a program, under execution; every process can have its own execution environment.
A thread is a lightweight process or sub-process. In fact, a thread is also a kind of process, but a thread cannot exist by itself: a process has to start a thread, and a process can start multiple threads.
Posted at 23:26 | Labels: operating systems, os | 0 Comments
OS Process
A process is an execution stream in the context of a particular process state.
* An execution stream is a sequence of instructions.
* Process state determines the effect of the instructions. It usually includes (but is not restricted to):
o Registers
o Stack
o Memory (global variables and dynamically allocated memory)
o Open file tables
o Signal management information
Key concept: processes are separated: no process can directly affect the state of another process.
# Process is a key OS abstraction that users see - the environment you interact with when you use a computer is built up out of processes.
* The shell you type stuff into is a process.
* When you execute a program you have just compiled, the OS generates a process to run the program.
* Your WWW browser is a process.
# Organizing system activities around processes has proved to be a useful way of separating out different activities into coherent units.
# Two concepts: uniprogramming and multiprogramming.
* Uniprogramming: only one process at a time. Typical example: DOS. Problem: users often wish to perform more than one activity at a time (load a remote file while editing a program, for example), and uniprogramming does not allow this. So DOS and other uniprogrammed systems put in things like memory-resident programs that are invoked asynchronously, but these still have separation problems. One key problem with DOS is that there is no memory protection - one program may write the memory of another program, causing weird bugs.
* Multiprogramming: multiple processes at a time. Typical of Unix plus all currently envisioned new operating systems. Allows system to separate out activities cleanly.
# Multiprogramming introduces the resource sharing problem - which processes get to use the physical resources of the machine when? One crucial resource: CPU. Standard solution is to use preemptive multitasking - OS runs one process for a while, then takes the CPU away from that process and lets another process run. Must save and restore process state. Key issue: fairness. Must ensure that all processes get their fair share of the CPU.
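As an illustrative sketch of preemptive time-slicing, here is a toy round-robin simulation in Python (the process names, burst times, and quantum are made up for the example): the OS runs each process for one quantum, then preempts it and puts it at the back of the ready queue, so every process gets a fair share of the CPU.

```python
from collections import deque

def round_robin(bursts, quantum):
    """Simulate round-robin scheduling.

    bursts:  dict mapping process name -> CPU time still needed
    quantum: time slice given to a process before preemption
    Returns the order in which processes finish.
    """
    ready = deque(bursts.items())   # FIFO ready queue
    finished = []
    while ready:
        name, remaining = ready.popleft()
        remaining -= quantum        # run for one time slice (or less)
        if remaining > 0:
            ready.append((name, remaining))  # preempted: back of the queue
        else:
            finished.append(name)   # completed within this slice
    return finished

order = round_robin({"A": 3, "B": 1, "C": 2}, quantum=1)
print(order)  # ['B', 'C', 'A']
```

Shorter jobs finish first even though "A" arrived first, which is the fairness property preemption buys; a real scheduler must also save and restore each process's registers at every preemption.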
# How does the OS implement the process abstraction? Uses a context switch to switch from running one process to running another process.
# How does machine implement context switch? A processor has a limited amount of physical resources. For example, it has only one register set. But every process on the machine has its own set of registers. Solution: save and restore hardware state on a context switch. Save the state in Process Control Block (PCB). What is in PCB? Depends on the hardware.
* Registers - almost all machines save registers in PCB.
* Processor Status Word.
* What about memory? Most machines allow memory from multiple processes to coexist in the physical memory of the machine. Some may require Memory Management Unit (MMU) changes on a context switch. But some early personal computers switched all of a process's memory out to disk (!!!).
# Operating Systems are fundamentally event-driven systems - they wait for an event to happen, respond appropriately to the event, then wait for the next event. Examples:
* User hits a key. The keystroke is echoed on the screen.
* A user program issues a system call to read a file. The operating system figures out which disk blocks to bring in, and generates a request to the disk controller to read the disk blocks into memory.
* The disk controller finishes reading in the disk block and generates an interrupt. The OS moves the read data into the user program and restarts the user program.
* A Mosaic or Netscape user asks for a URL to be retrieved. This eventually generates requests to the OS to send request packets out over the network to a remote WWW server. The OS sends the packets.
* The response packets come back from the WWW server, interrupting the processor. The OS figures out which process should get the packets, then routes the packets to that process.
* Time-slice timer goes off. The OS must save the state of the current process, choose another process to run, then give the CPU to that process.
# When building an event-driven system with several distinct serial activities, threads are a key structuring mechanism of the OS.
# A thread is again an execution stream in the context of a thread state. The key difference between processes and threads is that multiple threads share parts of their state. Typically, multiple threads are allowed to read and write the same memory. (Recall that no process can directly access the memory of another process.) But each thread still has its own registers. It also has its own stack, but other threads can read and write the stack memory.
# What is in a thread control block? Typically just registers. Nothing needs to be done to the MMU when switching threads, because all threads can access the same memory.
# Typically, an OS will have a separate thread for each distinct activity. In particular, the OS will have a separate thread for each process, and that thread will perform OS activities on behalf of the process. In this case we say that each user process is backed by a kernel thread.
* When process issues a system call to read a file, the process's thread will take over, figure out which disk accesses to generate, and issue the low level instructions required to start the transfer. It then suspends until the disk finishes reading in the data.
* When process starts up a remote TCP connection, its thread handles the low-level details of sending out network packets.
# Having a separate thread for each activity allows the programmer to program the actions associated with that activity as a single serial stream of actions and events. Programmer does not have to deal with the complexity of interleaving multiple activities on the same thread.
# Why allow threads to access same memory? Because inside OS, threads must coordinate their activities very closely.
* If two processes issue read file system calls at close to the same time, must make sure that the OS serializes the disk requests appropriately.
* When one process allocates memory, its thread must find some free memory and give it to the process. Must ensure that multiple threads allocate disjoint pieces of memory.
Having threads share the same address space makes it much easier to coordinate activities - can build data structures that represent system state and have threads read and write data structures to figure out what to do when they need to process a request.
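A minimal sketch of that coordination (a hypothetical free-list allocator in Python; the block list and lock are illustrative, not an actual OS data structure): allocator threads consult a shared structure under a lock, so no two threads are ever granted the same block.

```python
import threading

free_blocks = list(range(8))       # shared system state: free memory blocks
alloc_lock = threading.Lock()      # serializes allocator decisions
given_out = []                     # record of the blocks each thread received

def allocate(n):
    grabbed = []
    for _ in range(n):
        with alloc_lock:           # the read-modify-write must be atomic
            if free_blocks:
                grabbed.append(free_blocks.pop())
    given_out.append(grabbed)

threads = [threading.Thread(target=allocate, args=(3,)) for _ in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()

all_blocks = [b for grant in given_out for b in grant]
print(sorted(all_blocks))          # six distinct blocks, none granted twice
```

Without the lock, two threads could both observe the same head of the free list and hand out overlapping memory - exactly the disjointness violation the notes warn about.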
# One complication that threads must deal with: asynchrony. Asynchronous events happen arbitrarily as the thread is executing, and may interfere with the thread's activities unless the programmer does something to limit the asynchrony. Examples:
* An interrupt occurs, transferring control away from one thread to an interrupt handler.
* A time-slice switch occurs, transferring control from one thread to another.
* Two threads running on different processors read and write the same memory.
# Asynchronous events, if not properly controlled, can lead to incorrect behavior. Examples:
* Two threads need to issue disk requests. First thread starts to program disk controller (assume it is memory-mapped, and must issue multiple writes to specify a disk operation). In the meantime, the second thread runs on a different processor and also issues the memory-mapped writes to program the disk controller. The disk controller gets horribly confused and reads the wrong disk block.
* Two threads need to write to the display. The first thread starts to build its request, but before it finishes a time-slice switch occurs and the second thread starts its request. The combination of the two threads issues a forbidden request sequence, and smoke starts pouring out of the display.
* For accounting reasons the operating system keeps track of how much time is spent in each user program. It also keeps a running sum of the total amount of time spent in all user programs. Two threads increment their local counters for their processes, then concurrently increment the global counter. Their increments interfere, and the recorded total time spent in all user processes is less than the sum of the local times.
# So, programmers need to coordinate the activities of the multiple threads so that these bad things don't happen. Key mechanism: synchronization operations. These operations allow threads to control the timing of their events relative to events in other threads. Appropriate use allows programmers to avoid problems like the ones outlined above.
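The lost-update accounting example above can be sketched as follows (Python threading; the tick counts are illustrative): each thread keeps a local counter and also increments the shared global, and holding a lock around the global's read-modify-write is the synchronization operation that prevents increments from interfering.

```python
import threading

total = 0                          # global accounting counter
total_lock = threading.Lock()      # synchronization operation guarding it

def charge_time(ticks):
    global total
    local = 0                      # per-process local counter
    for _ in range(ticks):
        local += 1
        with total_lock:           # make the read-modify-write atomic
            total += 1

threads = [threading.Thread(target=charge_time, args=(1000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(total)  # 4000: no increments lost
```

If the `with total_lock:` line were removed, two threads could both read the same old value of `total` and write back the same new value, so the recorded total would be less than the sum of the local counters - the exact interference described above.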
Posted at 19:24 | Labels: operating systems, os | 0 Comments