Saturday 6 April 2013

Operating Systems Short Questions and Answers

What are the basic functions of an operating system?
The operating system controls and coordinates the use of the hardware among the various application programs for the various users. It acts as a resource allocator and manager: since there may be many conflicting requests for resources, the operating system must decide which requests are granted so that the computer system operates efficiently and fairly. The operating system is also a control program that controls the execution of user programs to prevent errors and improper use of the computer; it is especially concerned with the operation and control of I/O devices.

Why is paging used?
Paging is a solution to the external fragmentation problem. It permits the logical address space of a process to be noncontiguous, so a process can be allocated physical memory wherever it is available.
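As a rough sketch of the mechanics (my own example, not part of the original answer; the 4 KB page size is an assumption), a logical address is split into a page number that indexes the page table and an offset within the frame:

/* Minimal sketch: splitting a logical address into a page number and
 * an offset, assuming a hypothetical 4 KB page size. */
#include <stdio.h>
#include <stdint.h>

#define PAGE_SIZE 4096u                       /* assumed page size: 4 KB */

int main(void) {
    uint32_t logical = 0x12345;               /* example logical address   */
    uint32_t page    = logical / PAGE_SIZE;   /* index into the page table */
    uint32_t offset  = logical % PAGE_SIZE;   /* offset within the frame   */
    printf("page=%u offset=%u\n", page, offset);
    return 0;
}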

What resources are used when a thread is created? How do they differ from those used when a process is created?
When a thread is created it does not require many new resources; it shares the resources, such as memory, of the process to which it belongs. The benefit of this sharing is that an application can have several different threads of activity all within the same address space. Process creation, by contrast, is heavyweight because it always requires a new address space to be created, and even if the processes share memory, inter-process communication is expensive compared to communication between threads.
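A minimal POSIX threads sketch (my own illustration, not from the original answer) showing that a new thread needs no new address space and sees the same memory as the thread that created it:

/* The new thread runs in the same address space, so it sees the same
 * global counter that the main thread sees. */
#include <pthread.h>
#include <stdio.h>

static int counter = 0;               /* shared by all threads in the process */

static void *worker(void *arg) {
    (void)arg;
    counter++;                        /* no new address space was created */
    return NULL;
}

int main(void) {
    pthread_t tid;
    pthread_create(&tid, NULL, worker, NULL);  /* far cheaper than fork() */
    pthread_join(tid, NULL);
    printf("counter = %d\n", counter);         /* prints 1: memory is shared */
    return 0;
}

Compile with -pthread. The same update made in a child created by fork() would not be visible in the parent, because fork() copies the address space.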

What is virtual memory?
Virtual memory is a technique by which the system appears to have more memory than it actually does. This is done by time-sharing the physical memory and keeping parts of memory on disk when they are not actively being used.

What is Throughput, Turnaround time, Waiting time and Response time?
Throughput: number of processes that complete their execution per time unit.
Turnaround time: amount of time to execute a particular process.
Waiting time: amount of time a process has been waiting in the ready queue.
Response time: amount of time from when a request was submitted until the first response is produced, not the final output (for a time-sharing environment).
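A worked sketch of these metrics (the burst values below are assumed example numbers, not from the post), using first-come, first-served scheduling of three processes that all arrive at time 0:

/* Computing average turnaround and waiting times under FCFS. */
#include <stdio.h>

int main(void) {
    int burst[3] = {24, 3, 3};        /* assumed CPU bursts, all arriving at t=0 */
    int waiting = 0, turnaround = 0, clock = 0;

    for (int i = 0; i < 3; i++) {
        waiting    += clock;          /* time spent in the ready queue     */
        clock      += burst[i];       /* process runs to completion (FCFS) */
        turnaround += clock;          /* completion time minus arrival (0) */
    }
    printf("avg waiting = %.2f, avg turnaround = %.2f\n",
           waiting / 3.0, turnaround / 3.0);   /* 17.00 and 27.00 */
    return 0;
}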

What is the important aspect of a real-time system or mission-critical system?
A real-time operating system has well-defined, fixed time constraints. Processing must be done within the defined constraints or the system will fail. An example is the operating system for the flight-control computer of an advanced jet airplane. Real-time systems are often used as control devices in dedicated applications such as controlling scientific experiments, medical imaging systems, industrial control systems, and some display systems.

What are the types of Real-Time System?
Real-time systems may be either hard or soft real-time.
Hard real-time: secondary storage is limited or absent, with data stored in short-term memory or read-only memory (ROM); conflicts with time-sharing systems and is not supported by general-purpose operating systems.
Soft real-time: limited utility in industrial control or robotics; useful in applications (multimedia, virtual reality) requiring advanced operating-system features.

What is the difference between hard and soft real-time systems?
A hard real-time system guarantees that critical tasks complete on time. This goal requires that all delays in the system be bounded, from the retrieval of stored data to the time the operating system takes to finish any request made of it.
In a soft real-time system, a critical real-time task gets priority over other tasks and retains that priority until it completes. As in hard real-time systems, kernel delays need to be bounded.

What is the cause of thrashing? How does the system detect thrashing? Once it detects thrashing, what can the system do to eliminate this problem?
Thrashing is caused by under-allocation of the minimum number of pages required by a process, forcing it to continuously page-fault.
The system can detect thrashing by evaluating the level of CPU utilization as compared to the level of multiprogramming.
It can be eliminated by reducing the level of multiprogramming.

What are multitasking, multiprogramming and multithreading?
Multiprogramming:
Multiprogramming is the technique of running several programs at a time using time-sharing. It allows a computer to do several things at the same time and creates logical parallelism. The idea is that the operating system keeps several jobs in memory simultaneously; it selects a job from the job pool and starts executing it, and when that job needs to wait for an I/O operation the CPU is switched to another job. The main goal is that the CPU is never idle.
Multitasking: Multitasking is the logical extension of multiprogramming. The concept is similar, but the difference is that switching between jobs occurs so frequently that users can interact with each program while it is running. This is also known as time-sharing. A time-shared operating system uses CPU scheduling and multiprogramming to provide each user with a small portion of the time-shared computer.
Multithreading: An application is typically implemented as a separate process with several threads of control. In some situations a single application may be required to perform several similar tasks; for example, a web server accepts client requests for web pages, images, sound, and so forth. A busy web server may have many clients concurrently accessing it. If the web server ran as a traditional single-threaded process, it would be able to service only one client at a time, and the amount of time a client might have to wait for its request to be serviced could be enormous. It is more efficient to have one process that contains multiple threads: the server creates a separate thread that listens for client requests, and when a request arrives, rather than creating another process it creates another thread to service the request. Multithreading is used to gain responsiveness, resource sharing, economy, and utilization of multiprocessor architectures.

What is a hard disk and what is its purpose?
A hard disk is a secondary storage device that holds data in bulk on a magnetic medium. Hard disks have rigid platters that hold the magnetic medium, which can be easily erased and rewritten; a typical desktop machine has a hard disk with a capacity of hundreds of gigabytes or more. Data is stored on the disk in the form of files.

What is fragmentation? What are the different types of fragmentation?
Fragmentation occurs in a dynamic memory-allocation system when many of the free blocks are too small to satisfy any request.
External Fragmentation: External Fragmentation happens when a dynamic memory allocation algorithm allocates some memory and a small piece is left over that cannot be effectively used. If too much external fragmentation occurs, the amount of usable memory is drastically reduced. Total memory space exists to satisfy a request, but it is not contiguous.
Internal Fragmentation: Internal fragmentation is the space wasted inside allocated memory blocks because of restrictions on the allowed sizes of allocated blocks. Allocated memory may be slightly larger than the requested memory; this size difference is memory internal to a partition that is not being used.
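A quick worked sketch of internal fragmentation (the numbers are my own assumptions, not from the post): if memory is handed out in fixed 4 KB pages and a process asks for 10,000 bytes, the last page is only partly used.

/* Internal fragmentation when memory is handed out in fixed-size 4 KB pages. */
#include <stdio.h>

#define PAGE_SIZE 4096u

int main(void) {
    unsigned request = 10000;                                  /* bytes actually needed */
    unsigned pages   = (request + PAGE_SIZE - 1) / PAGE_SIZE;  /* round up              */
    unsigned wasted  = pages * PAGE_SIZE - request;            /* unused bytes inside
                                                                  the last page         */
    printf("%u pages allocated, %u bytes of internal fragmentation\n",
           pages, wasted);                                     /* 3 pages, 2288 bytes   */
    return 0;
}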

What is DRAM? In which form does it store data?
DRAM (dynamic random access memory) is not the fastest memory, but it is cheap, does the job, and is available almost everywhere you look. DRAM stores each bit of data in a cell made of a capacitor and a transistor. The capacitor tends to lose its charge unless it is refreshed every few milliseconds, and this refreshing slows down the performance of DRAM compared to speedier RAM types.

What is a Dispatcher?
The dispatcher module gives control of the CPU to the process selected by the short-term scheduler; this involves switching context, switching to user mode, and jumping to the proper location in the user program to restart that program. Dispatch latency is the time it takes for the dispatcher to stop one process and start another running.

What is the CPU Scheduler?
The CPU scheduler selects from among the processes in memory that are ready to execute and allocates the CPU to one of them. CPU scheduling decisions may take place when a process:
1. Switches from running to waiting state.
2. Switches from running to ready state.
3. Switches from waiting to ready.
4. Terminates.
Scheduling under 1 and 4 is non-preemptive. All other scheduling is preemptive.

What is a Context Switch?
Switching the CPU to another process requires saving the state of the old process and loading the saved state of the new process. This task is known as a context switch. Context-switch time is pure overhead, because the system does no useful work while switching. Its speed varies from machine to machine, depending on the memory speed, the number of registers that must be copied, and the existence of special instructions (such as a single instruction to load or store all registers).

What is cache memory?Cache memory is random access memory (RAM) that a computer microprocessor can access more quickly than it can access regular RAM. As the microprocessor processes data, it looks first in the cache memory and if it finds the data there (from a previous reading of data), it does not have to do the more time-consuming reading of data from larger memory.

What is a Safe State and what is its use in deadlock avoidance?
Safe State:
When a process requests an available resource, the system must decide whether immediate allocation leaves the system in a safe state. The system is in a safe state if there exists a safe sequence of all processes.
Deadlock Avoidance: ensure that the system will never enter an unsafe state.
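The safety check behind deadlock avoidance (as in the banker's algorithm) can be sketched as follows; the matrix sizes and values are illustrative assumptions, not from the post. The system is safe if every process can be driven to completion in some order using the currently available resources.

#include <stdbool.h>
#include <stdio.h>

#define P 3   /* number of processes (assumed) */
#define R 2   /* number of resource types (assumed) */

bool is_safe(int avail[R], int need[P][R], int alloc[P][R]) {
    int  work[R];
    bool done[P] = {false};
    for (int r = 0; r < R; r++) work[r] = avail[r];

    for (int count = 0; count < P; ) {
        bool progressed = false;
        for (int p = 0; p < P; p++) {
            if (done[p]) continue;
            bool can_run = true;
            for (int r = 0; r < R; r++)
                if (need[p][r] > work[r]) { can_run = false; break; }
            if (can_run) {                     /* pretend p finishes and      */
                for (int r = 0; r < R; r++)    /* returns what it was holding */
                    work[r] += alloc[p][r];
                done[p] = true;
                progressed = true;
                count++;
            }
        }
        if (!progressed) return false;         /* no safe sequence exists */
    }
    return true;
}

int main(void) {
    int avail[R]    = {3, 3};                  /* assumed example state */
    int alloc[P][R] = {{0, 1}, {2, 0}, {3, 0}};
    int need[P][R]  = {{7, 3}, {1, 2}, {2, 0}};
    printf("safe = %d\n", is_safe(avail, need, alloc));  /* prints 1 */
    return 0;
}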

What is a Real-Time System?
A real-time process is a process that must respond to events within a certain time period. A real-time operating system is an operating system that can run real-time processes successfully.

What is a MUTEX?
A mutex (mutual exclusion object) is a program object that allows multiple program threads to share the same resource, such as file access, but not simultaneously. When a program is started, a mutex is created with a unique name. After this stage, any thread that needs the resource must lock the mutex while it is using the resource, excluding the other threads. The mutex is unlocked when the data is no longer needed or the routine is finished.
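A minimal POSIX sketch of mutex use (my own illustration, not from the post): two threads increment a shared counter, and the mutex ensures only one of them is in the critical section at a time.

#include <pthread.h>
#include <stdio.h>

static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static long counter = 0;

static void *worker(void *arg) {
    (void)arg;
    for (int i = 0; i < 100000; i++) {
        pthread_mutex_lock(&lock);    /* block other threads here    */
        counter++;                    /* critical section            */
        pthread_mutex_unlock(&lock);  /* let waiting threads proceed */
    }
    return NULL;
}

int main(void) {
    pthread_t a, b;
    pthread_create(&a, NULL, worker, NULL);
    pthread_create(&b, NULL, worker, NULL);
    pthread_join(a, NULL);
    pthread_join(b, NULL);
    printf("counter = %ld\n", counter);   /* always 200000 with the mutex */
    return 0;
}

Compile with -pthread; without the lock/unlock pair the final count would frequently be less than 200000 because of lost updates.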

What is the difference between a ‘thread’ and a ‘process’?
A process is a collection of virtual memory space, code, data, and system resources. A thread is code that is to be serially executed within a process. A processor executes threads, not processes, so each application has at least one process, and a process always has at least one thread of execution, known as the primary thread. A process can have multiple threads in addition to the primary thread. Prior to the introduction of multiple threads of execution, applications were all designed to run on a single thread of execution. When a thread begins to execute, it continues until it is killed or until it is interrupted by a thread with higher priority (by a user action or the kernel’s thread scheduler). Each thread can run separate sections of code, or multiple threads can execute the same section of code. Threads executing the same block of code maintain separate stacks. Each thread in a process shares that process’s global variables and resources.

What is a Semaphore?
A semaphore is a locking mechanism used inside resource managers and resource dispensers. A semaphore object is a synchronization object that maintains a count between zero and a specified maximum value. The count is decremented each time a thread completes a wait for the semaphore object and incremented each time a thread releases the semaphore. When the count reaches zero, no more threads can successfully wait for the semaphore object state to become signaled. The state of a semaphore is set to ‘signaled’ when its count is greater than zero and ‘non-signaled’ when its count is zero. The semaphore object is useful in controlling a shared resource that can support a limited number of users. It acts as a gate that limits the number of threads sharing the resource to a specified maximum number.
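A counting-semaphore sketch using POSIX semaphores (my own illustration, not from the post; MAX_USERS and the sleep are assumptions): at most three of the five threads can hold the resource at once.

#include <semaphore.h>
#include <pthread.h>
#include <stdio.h>
#include <unistd.h>

#define MAX_USERS 3                   /* assumed maximum concurrent users */

static sem_t gate;

static void *user(void *arg) {
    long id = (long)arg;
    sem_wait(&gate);                  /* decrement; blocks when count is 0 */
    printf("thread %ld using the resource\n", id);
    sleep(1);                         /* pretend to do work */
    sem_post(&gate);                  /* increment; wakes one waiter */
    return NULL;
}

int main(void) {
    pthread_t t[5];
    sem_init(&gate, 0, MAX_USERS);    /* unnamed semaphore, initial count 3 */
    for (long i = 0; i < 5; i++)
        pthread_create(&t[i], NULL, user, (void *)i);
    for (int i = 0; i < 5; i++)
        pthread_join(t[i], NULL);
    sem_destroy(&gate);
    return 0;
}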

What is Marshalling?
Marshalling is the process of packaging and sending interface method parameters across thread or process boundaries. More generally, it is the process of gathering data and transforming it into a standard format before it is transmitted over a network so that the data can cross network boundaries. In order for an object to be moved around a network, it must be converted into a data stream that corresponds with the packet structure of the network transfer protocol. This conversion is known as data marshalling. Data pieces are collected in a message buffer before they are marshalled. When the data is transmitted, the receiving computer converts the marshalled data back into an object.
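A toy marshalling sketch (my own illustration, not from the post; the struct and helper names are assumptions): a small struct is flattened into a byte buffer in network byte order so it could be sent across a process or machine boundary and rebuilt on the other side.

#include <arpa/inet.h>   /* htonl / ntohl */
#include <stdint.h>
#include <string.h>
#include <stdio.h>

struct point { uint32_t x, y; };

/* Pack the struct field by field; returns the number of bytes written. */
size_t marshal(const struct point *p, unsigned char *buf) {
    uint32_t nx = htonl(p->x), ny = htonl(p->y);
    memcpy(buf,     &nx, 4);
    memcpy(buf + 4, &ny, 4);
    return 8;
}

void unmarshal(const unsigned char *buf, struct point *p) {
    uint32_t nx, ny;
    memcpy(&nx, buf,     4);
    memcpy(&ny, buf + 4, 4);
    p->x = ntohl(nx);
    p->y = ntohl(ny);
}

int main(void) {
    struct point in = {10, 20}, out;
    unsigned char buf[8];
    marshal(&in, buf);        /* bytes are now safe to send over a socket */
    unmarshal(buf, &out);
    printf("x=%u y=%u\n", out.x, out.y);
    return 0;
}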

What is an INODE?
Inodes are data structures that contain information about files; they are created when a UNIX file system is created. Each file has an inode and is identified by an inode number (i-number) in the file system where it resides. The inode provides important information about a file, such as its group ownership and access mode (read, write, execute permissions).
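A small sketch of reading inode information with stat(2) on a UNIX-like system (my own illustration; /etc/passwd is just an assumed example path):

#include <sys/stat.h>
#include <stdio.h>

int main(void) {
    struct stat st;
    if (stat("/etc/passwd", &st) == 0) {       /* fills st from the file's inode */
        printf("inode number : %lu\n", (unsigned long)st.st_ino);
        printf("owner uid/gid: %u/%u\n", (unsigned)st.st_uid, (unsigned)st.st_gid);
        printf("access mode  : %o\n", (unsigned)(st.st_mode & 0777));
        printf("link count   : %lu\n", (unsigned long)st.st_nlink);
    }
    return 0;
}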

What is a Mutex Object?
A mutex object is a synchronization object whose state is set to signaled when it is not owned by any thread, and non-signaled when it is owned. For example, to prevent two threads from writing to shared memory at the same time, each thread waits for ownership of a mutex object before executing the code that accesses the memory. After writing to the shared memory, the thread releases the mutex object.

Explain memory partitioning, paging, and segmentation.
Memory partitioning
is the way the kernel and user-space areas are divided in memory.
A page is the minimum unit of memory that can be swapped in and out of main memory.
Modern server operating systems support multiple page sizes, which helps tune OS performance depending on the type of application. Segmentation is a way to keep similar objects in one place: for example, the stack in one place (stack segment), binary code in another (text segment), and data in another (data and BSS segments).
Linux does not use a segmented memory architecture; AIX does.

What are the different Dynamic Storage-Allocation methods?
How do we satisfy a request of size n from a list of free holes?
First-fit: Allocate the first hole that is big enough (see the sketch after this list).
Best-fit: Allocate the smallest hole that is big enough; must search the entire list, unless it is ordered by size. It produces the smallest leftover hole.
Worst-fit: Allocate the largest hole; must also search the entire list. Produces the largest leftover hole. First-fit and best-fit are better than worst-fit in terms of speed and storage utilization.
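A first-fit sketch over a list of free holes (my own illustration with assumed hole sizes, not from the post):

#include <stddef.h>
#include <stdio.h>

struct hole { size_t size; int used; };

/* Returns the index of the chosen hole, or -1 if no hole is large enough. */
int first_fit(struct hole holes[], int n, size_t request) {
    for (int i = 0; i < n; i++)
        if (!holes[i].used && holes[i].size >= request)
            return i;                  /* stop at the first hole that fits */
    return -1;
}

int main(void) {
    struct hole holes[] = {{100, 0}, {500, 0}, {200, 0}, {300, 0}};
    int i = first_fit(holes, 4, 212);  /* best-fit would pick 300, worst-fit 500 */
    printf("first-fit chose hole %d\n", i);   /* prints 1 (the 500-byte hole)    */
    return 0;
}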

Under what circumstances do page faults occur? Describe the actions taken by the operating system when a page fault occurs.
A page fault occurs when an access is made to a page that has not been brought into main memory. The operating system verifies the memory access, aborting the program if it is invalid. If it is valid, a free frame is located and I/O is requested to read the needed page into the free frame. Upon completion of the I/O, the process table and page table are updated and the instruction is restarted. In other words, when a process executes with only a few pages in memory and encounters an instruction that refers to an instruction or data in some other page not present in main memory, a page fault occurs.

Explain briefly what a processor, assembler, compiler, loader, and linker are and the functions they perform.
Processor:
A processor is the part of a computer system that executes instructions. It is also called a CPU.
Assembler: An assembler is a program that takes basic computer instructions and converts them into a pattern of bits that the computer's processor can use to perform its basic operations. Some people call these instructions assembler language and others use the term assembly language.
Compiler: A compiler is a special program that processes statements written in a particular programming language and turns them into machine language or "code" that a computer's processor uses.
Typically, a programmer writes language statements in a language such as Pascal or C one line at a time using an editor. The file that is created contains what are called the source statements. The programmer then runs the appropriate language compiler, specifying the name of the file that contains the source statements.
Loader: In a computer operating system, a loader is a component that locates a given program (which can be an application or, in some cases, part of the operating system itself) in offline storage (such as a hard disk), loads it into main storage (in a personal computer, it's called random access memory), and gives that program control of the computer.
Linker: A linker links libraries with the object code to turn the object code into executable machine code.
