Exit Interview Form Check Type Of Termination: Fill & Download for Free

GET FORM

Download the form

How to Edit The Exit Interview Form Check Type Of Termination quickly and easily Online

Start editing, signing and sharing your Exit Interview Form Check Type Of Termination online by following these easy steps:

  • Click the Get Form or Get Form Now button on the current page to open the PDF editor.
  • Wait a moment for the Exit Interview Form Check Type Of Termination to load.
  • Use the tools in the top toolbar to edit the file; your edits are saved automatically.
  • Download your completed file.

The best-rated Tool to Edit and Sign the Exit Interview Form Check Type Of Termination

Start editing an Exit Interview Form Check Type Of Termination in seconds


A quick guide on editing Exit Interview Form Check Type Of Termination Online

Editing your PDF files online has become quite simple, and CocoDoc is an excellent online PDF editor for making changes to your file and saving them. Follow our simple tutorial to get started!

  • Click the Get Form or Get Form Now button on the current page to start modifying your PDF
  • Add, change or delete your text using the editing tools on the tool pane above.
  • After altering your content, add the date and a signature to finish it.
  • Go over your form again before you click to download it.

How to add a signature on your Exit Interview Form Check Type Of Termination

Though most people are used to signing paper documents with a pen, electronic signatures are becoming more common. Follow these steps to sign a PDF for free!

  • Click the Get Form or Get Form Now button to begin editing the Exit Interview Form Check Type Of Termination in the CocoDoc PDF editor.
  • Click the Sign tool in the top toolbar.
  • A window will pop up; click the Add new signature button and you'll be given three choices: Type, Draw, and Upload. Once you're done, click the Save button.
  • Drag, resize and position the signature inside your PDF file.

How to add a textbox on your Exit Interview Form Check Type Of Termination

If you need to add a text box to your PDF and customize its content, follow these steps to carry it through.

  • Open the PDF file in CocoDoc PDF editor.
  • Click Text Box on the top toolbar and move your mouse to position it wherever you want to put it.
  • Write in the text you need to insert. After you've inserted the text, you can use the text editing tools to resize, color or bold it.
  • When you're done, click OK to save it. If you're not happy with the text, click the trash can icon to delete it and start over.

A quick guide to Edit Your Exit Interview Form Check Type Of Termination on G Suite

If you are looking for a solution for PDF editing on G Suite, the CocoDoc PDF editor is a recommended tool that can be used directly from Google Drive to create or edit files.

  • Find the CocoDoc PDF editor and set up the add-on for Google Drive.
  • Right-click on a PDF document in your Google Drive and choose Open With.
  • Select CocoDoc PDF from the pop-up list to open your file, and give CocoDoc access to your Google account.
  • Modify your PDF document by adding text and images, editing existing text, and highlighting important parts, then polish it up in the CocoDoc PDF editor before clicking the Download button.

PDF Editor FAQ

Can a company give a relieving letter saying something bad about me in it if I resigned from it?

An exit form has nothing to do with your relieving letter; in fact, the purpose of an exit form or exit interview is to gather open feedback. It is meant to receive honest feedback, whether positive or negative. But your fear of a negative relieving letter is genuine, and to answer it I will give a brief overview of the types of exit and relieving.

Voluntary exit: the employee leaves the company of their own will, submits a resignation and serves the notice period; they will receive a standard relieving letter.

Involuntary exit: the company asks the employee to leave.
- Layoff: when, due to cost cutting or another business decision, the company revokes its employment agreement; the employee will receive a standard relieving letter.
- Low performance: low performers are generally asked to leave after a review of their PIP (performance improvement plan); they will receive a standard relieving letter.
- Termination: for a severe offence such as criminal or illegal activity, the company may terminate employment immediately; the employee will receive a termination letter.

No relieving letter and negative background verification when:
- The employee did not serve the notice period or did not complete the full-and-final (F&F) formalities and dues. In this case the company can issue a legal notice and may send it to your current employer, which of course creates a negative impression. The company can also report a negative background check, stating that your exit process is incomplete.
- Under termination, the employee gets a termination letter instead of a relieving letter, which is itself negative, and they will always receive a negative background check.

Hope the above information helps.

What are some of the most commonly asked operating system questions in interviews for undergraduates?

Commonly asked operating system questions :1) Explain the definition and main purpose of an operating system?operating system?An operating system is a collection of software programs which control the allocation and usage of various hardware resources in the system. It is the first program to be loaded in the computer and it runs in the memory till the system is shut down.Some of the popular Operating Systems are DOS, Windows, Ubuntu, Solaris etc.Operating systems exist for two main purposes. One is that it is designed to make sure a computer system performs well by managing its computational activities. Another is that it provides an environment for the development and execution of programs.functions of operating system?The operating system controls and coordinates the use of hardware among the different processes and applications. It provides the various functionalities to the users. The following are the main job of operating system.- Resource utilization- Resource allocation- Process management- Memory management- File management- I/O management- Device management2) What is demand paging?Demand paging is referred when not all of a process’s pages are in the RAM, then the OS brings the missing(and required) pages from the disk into the RAM.3) What are the advantages of a multiprocessor system?With an increased number of processors, there is considerable increase in throughput. It can also save more money because they can share resources. Finally, overall reliability is increased as well.4) What is kernel?Kernel is the core of every operating system. It connects applications to the actual processing of data. It also manages all communications between software and hardware components to ensure usability and reliability.5) What are real-time systems?Real-time systems are used when rigid time requirements have been placed on the operation of a processor. It has well defined and fixed time constraints.6) What is virtual memory?Virtual memory is a memory management technique for letting processes execute outside of memory. This is very useful especially is an executing program cannot fit in the physical memory.7) Describe the objective of multiprogramming.The main objective of multiprogramming is to have process running at all times. With this design, CPU utilization is said to be maximized.8 ) What are time sharing systems?In a Time sharing system, the CPU executes multiple jobs by switching among them, also known as multitasking. This process happens so fast that users can actually interact with each program while it is running.9) What is SMP?SMP is short for Symmetric MultiProcessing, and is the most common type of multiple-processor systems. In this system, each processor runs an identical copy of the operating system, and these copies communicate with one another as needed.10) How are server systems classified?Server systems can be classified as either computer-server systems or file server systems. In the first case, an interface is made available for clients to send requests to perform an action. In the second case, provisions are available for clients to create, access and update files.11) What is asymmetric clustering?In asymmetric clustering, a machine is in a state known as hot standby mode where it does nothing but to monitor the active server. That machine takes the active server’s role should the server fails.12) What is a thread?A thread is a basic unit of CPU utilization. 
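To make the idea of a thread as a basic unit of CPU utilization concrete, here is a minimal sketch using POSIX threads (the choice of the pthreads API is an assumption made purely for illustration; compile with gcc -pthread). Both threads run inside one process and share its address space, but each has its own thread ID and stack.

    #include <pthread.h>
    #include <stdio.h>

    /* Start routine executed by each worker thread. */
    static void *worker(void *arg) {
        int id = *(int *)arg;
        printf("worker %d running in the shared address space\n", id);
        return NULL;
    }

    int main(void) {
        pthread_t threads[2];
        int ids[2] = {1, 2};

        /* Create two threads; each gets its own ID, registers and stack. */
        for (int i = 0; i < 2; i++)
            pthread_create(&threads[i], NULL, worker, &ids[i]);

        /* Wait for both threads to finish before the process exits. */
        for (int i = 0; i < 2; i++)
            pthread_join(threads[i], NULL);

        return 0;
    }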
In general, a thread is composed of a thread ID, program counter, register set and the stack.13) Give some benefits of multithreaded programming.– there is an increased responsiveness to the user– resource sharing within the process– economy– utilization of multiprocessing architecture14) Briefly explain FCFS.FCFS is short for First-come, first-served, and is one type of scheduling algorithm. In this scheme, the process that requests the CPU first is allocated the CPU first. Implementation is managed by a FIFO queue.15) What is RR scheduling algorithm?RR (round-robin) scheduling algorithm is primarily aimed for time-sharing systems. A circular queue is setup in such a way that the CPU scheduler goes around that queue, allocating CPU to each process for a time interval of up to around 10 to 100 milliseconds.16) What necessary conditions can lead to a deadlock situation in a system?Deadlock situations occur when four conditions occur simultaneously in a system: Mutual exclusion; Hold and Wait; No preemption; and Circular wait.17) Enumerate the different RAID levels.RAID 0 – Non-redundant stripingRAID 1 – Mirrored DisksRAID 2 – Memory-style error-correcting codesRAID 3 – Bit-interleaved ParityRAID 4 – Block-interleaved ParityRAID 5 – Block-interleaved distributed ParityRAID 6 – P+Q Redundancy18) Describe Banker’s algorithmBankers AlgorithmBanker’s algorithm is one form of deadlock-avoidance in a system. It gets its name from a banking system wherein the bank never allocates available cash in such a way that it can no longer satisfy the needs of all of its customers.19) What factors determine whether a detection-algorithm must be utilized in a deadlock avoidance system?One is that it depends on how often a deadlock is likely to occur under the implementation of this algorithm. The other has to do with how many processes will be affected by deadlock when this algorithm is applied.20) Differentiate logical from physical address space.Logical address refers to the address that is generated by the CPU. On the other hand, physical address refers to the address that is seen by the memory unit.21) How does dynamic loading aid in better memory space utilization?With dynamic loading, a routine is not loaded until it is called. This method is especially useful when large amounts of code are needed in order to handle infrequently occurring cases such as error routines.22) What are overlays?Overlays are used to enable a process to be larger than the amount of memory allocated to it. The basic idea of this is that only instructions and data that are needed at any given time are kept in memory.23) What is the basic function of paging?Paging is a memory management scheme that permits the physical-address space of a process to be noncontiguous. It avoids the considerable problem of having to fit varied sized memory chunks onto the backing store.24) What is fragmentation?Fragmentation is memory wasted. It can be internal if we are dealing with systems that have fixed-sized allocation units, or external if we are dealing with systems that have variable-sized allocation units.25) How does swapping result in better memory management?During regular intervals that are set by the operating system, processes can be copied from main memory to a backing store, and then copied back later. 
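To make the round-robin scheme from question 15 above concrete, here is a small simulation sketch; the burst times and quantum are made-up values, and a real scheduler would dispatch actual processes rather than decrement counters.

    #include <stdio.h>

    #define NPROC   3
    #define QUANTUM 4   /* time slice in arbitrary units */

    int main(void) {
        int remaining[NPROC] = {10, 5, 8};   /* hypothetical CPU bursts */
        int time = 0, done = 0;

        while (done < NPROC) {
            for (int p = 0; p < NPROC; p++) {
                if (remaining[p] == 0)
                    continue;                 /* process already finished */
                int run = remaining[p] < QUANTUM ? remaining[p] : QUANTUM;
                time += run;
                remaining[p] -= run;
                printf("t=%2d  P%d ran for %d units (%d left)\n",
                       time, p, run, remaining[p]);
                if (remaining[p] == 0)
                    done++;                   /* process completed */
            }
        }
        return 0;
    }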
Swapping allows more processes to be run that can fit into memory at one time.26) Give an example of a Process State.– New State – means a process is being created– Running – means instructions are being executed– Waiting – means a process is waiting for certain conditions or events to occur– Ready – means a process is waiting for an instruction from the main processor– Terminate – means a process is done executing27) What is a socket?A socket provides a connection between two applications. Each endpoint of a communication is a socket.28) What is Direct Access Method?Direct Access method is based on a disk model of a file, such that it is viewed as a numbered sequence of blocks or records. It allows arbitrary blocks to be read or written. Direct access is advantageous when accessing large amounts of information.29) When does thrashing occur?Thrashing refers to an instance of high paging activity. This happens when it is spending more time paging instead of executing.30) What is the best page size when designing an operating system?The best paging size varies from system to system, so there is no single best when it comes to page size. There are different factors to consider in order to come up with a suitable page size, such as page table, paging time, and its effect on the overall efficiency of the operating system.31) When designing the file structure for an operating system, what attributes are considered?Typically, the different attributes for a file structure are naming, identifier, supported file types, and location for the files, size, and level of protection.32) What is root partition?Root partition is where the operating system kernel is located. It also contains other potentially important system files that are mounted during boot time.33) What are device drivers?Device drivers provides a standard means of representing I/O devices that maybe manufactured by different companies. This prevents conflicts whenever such devices are incorporated in a systems unit.34) What are the primary functions of VFS?VFS, or Virtual File System, separates file system generic operations from their implementation by defining a clean VFS interface. It is also based on a file-representation structure known as vnode, which contains a numerical designator needed to support network file systems.35) What are the different types of CPU registers in a typical operating system design?– Accumulators– Index Registers– Stack Pointer– General Purpose Registers36) What is the purpose of an I/O status information?I/O status information provides info about which I/O devices are to be allocated for a particular process. It also shows which files are opened, and other I/O device state.37) What is multitasking?Multitasking is the process within an operating system that allows the user to run several applications at the same time. However, only one application is active at a time for user interaction, although some applications can run “behind the scene”.38) What are some pros and cons of a command line interface?A command line interface allows the user to type in commands that can immediately provide results. Many seasoned computer users are well accustomed to using the command line because they find it quicker and simpler. The main problem with a command line interface is that users have to be familiar with the commands, including the switches and parameters that come with it. 
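Circling back to question 27 above, a minimal sketch of one socket endpoint on a POSIX system might look like the following; the choice of TCP, the example port 9000 and the omission of error handling are all assumptions made purely for illustration.

    #include <string.h>
    #include <unistd.h>
    #include <sys/socket.h>
    #include <netinet/in.h>
    #include <arpa/inet.h>

    int main(void) {
        /* One endpoint of the communication: a listening TCP socket. */
        int srv = socket(AF_INET, SOCK_STREAM, 0);

        struct sockaddr_in addr;
        memset(&addr, 0, sizeof(addr));
        addr.sin_family = AF_INET;
        addr.sin_addr.s_addr = htonl(INADDR_ANY);
        addr.sin_port = htons(9000);          /* arbitrary example port */

        bind(srv, (struct sockaddr *)&addr, sizeof(addr));
        listen(srv, 1);

        /* accept() yields the server-side socket of a connected pair. */
        int conn = accept(srv, NULL, NULL);
        const char *msg = "hello from the server\n";
        write(conn, msg, strlen(msg));

        close(conn);
        close(srv);
        return 0;
    }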
This is a downside for people who are not fond of memorizing commands.39) What is caching?Caching is the processing of utilizing a region of fast memory for a limited data and process. A cache memory is usually much efficient because of its high access speed.40) What is spooling?Spooling is normally associated with printing. When different applications want to send an output to the printer at the same time, spooling takes all of these print jobs into a disk file and queues them accordingly to the printer.41) What is an Assembler?An assembler acts as a translator for low level language. Assembly codes, written using mnemonic commands are translated by the Assembler into machine language.42) What are interrupts?Interrupts are part of a hardware mechanism that sends a notification to the CPU when it wants to gain access to a particular resource. An interrupt handler receives this interrupt signal and “tells” the processor to take action based on the interrupt request.43) What is GUI?GUI is short for Graphical User Interface. It provides users with an interface wherein actions can be performed by interacting with icons and graphical symbols. People find it easier to interact with the computer when in a GUI especially when using the mouse. Instead of having to remember and type commands, users just click on buttons to perform a process.44) What is preemptive multitasking?Preemptive multitasking allows an operating system to switch between software programs. This in turn allows multiple programs to run without necessarily taking complete control over the processor and resulting in system crashes.45) Why is partitioning and formatting a prerequisite to installing an operating system?Partitioning and formatting creates a preparatory environment on the drive so that the operating system can be copied and installed properly. This includes allocating space on the drive, designating a drive name, determining and creating the appropriate file system structure.46) What is plumbing / piping?It is the process of using the output of one program as an input to another. For example, instead of sending the listing of a folder or drive to the main screen, it can be piped and sent to a file, or sent to the printer to produce a hard copy.47) What is NOS?NOS is short for Network Operating System. It is a specialized software that will allow a computer to communicate with other devices over the network, including file/folder sharing.48) Differentiate internal commands from external commands.Internal commands are built-in commands that are already part of the operating system. External commands are separate file programs that are stored in a separate folder or directory.49) Under DOS, what command will you type when you want to list down the files in a directory, and at the same time pause after every screen output?a) dir /wb) dir /pc) dir /sd) dir /w /pAnswer: d) dir /w /p50) How would a filenamed EXAMPLEFILE.TXT appear when viewed under the DOS command console operating in Windows 98?The filename would appear as EXAMPL~1.TXT . The reason behind this is that filenames under this operating system is limited to 8 characters when working under DOS environment.51) What is a folder in Ubuntu ?There is no concept of Folder in Ubuntu. 
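To illustrate the piping idea from question 46 above, the following POSIX sketch wires the output of ls into the input of wc -l, roughly what a shell does for ls | wc -l; error handling is omitted for brevity.

    #include <unistd.h>
    #include <sys/wait.h>

    int main(void) {
        int fd[2];
        pipe(fd);                      /* fd[0] = read end, fd[1] = write end */

        if (fork() == 0) {
            /* First child: send standard output into the pipe and run ls. */
            dup2(fd[1], STDOUT_FILENO);
            close(fd[0]);
            close(fd[1]);
            execlp("ls", "ls", (char *)NULL);
        }

        if (fork() == 0) {
            /* Second child: read standard input from the pipe and run wc -l. */
            dup2(fd[0], STDIN_FILENO);
            close(fd[0]);
            close(fd[1]);
            execlp("wc", "wc", "-l", (char *)NULL);
        }

        /* Parent: close both ends and wait for the children. */
        close(fd[0]);
        close(fd[1]);
        wait(NULL);
        wait(NULL);
        return 0;
    }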
Everything including your hardware is a FILE52) Explain why Ubuntu is safe and not affected by viruses?It does not support malicious e-mails and contents, and before any e-mail is opened by users it will go through many security checksUbuntu uses Linux , which is a super secure O.S systemUnlike other O.S, countless Linux users can see the code at any time and can fix the problem if there is anyGenerally, Malwares and viruses are coded to take advantage of weakness in Windows53) Explain what is Unity in Ubuntu ? How can you add new entries to the launcher?In Ubuntu, Unity is the default windows manager. On left side of the Ubuntu it introduces the launcher and Dash to start programs.In order to add new entries to the launcher you can create a file name like .desktop and then drag file on the launcher.54) Explain what is the purpose of using libaio package in Ubuntu?Libaio is Linux Kernel Asynchronous I/O (A/O). A/O allows even a single application thread to overlap I/O operations with other processing, by providing an interface for submitting one or more I/O requests in one system call without waiting for completion. And a separate interface to reap completed I/O operations associated with a given completion group.55) What is the use of behaviour tab in Ubuntu?Through behaviours tab you can make many changes on the appearance of desktopAuto hide the launcher : You can use this option to reveal the launcher when moving the pointer to the defined hot spot.Enable workspaces: By checking this option you can enable workspaceAdd show desktop icon to the launcher: This option is used to display the desktop icon on the launcher56) Explain what is the meaning of “export” command in Ubuntu?Export is a command in Bash shell language, when you try to set a variable, it is visible or exported to any subprocess started from that instance of bash. The variable will not exist in the sub-process without the export command.57) Explain how you can reset Unity Configuration?To reset the unity configuration the simplest way to do is to hit open a Terminal or hit Atl-F2 and run the command # unity –reset58) Explain how to access Terminal?To access terminal , you have to go under Application Menu -> Accessories -> Terminal .59) Describe system calls and its typeSystem calls works as a mediator between user program and service provided by operating system. In actual situation, functions that make up an API (application program interface) typically invoke the actual system calls on behalf of the application programmer.Types of System CallSystem calls can be grouped roughly into five major categories:(i)Process control:-Create process, terminate process,end,allocate and free memory etc.(ii)File manipulation:-Create file, delete file, open file, close file, read, write.(iii)Device manipulation:-request device, release device, read, write, reposition, get device attributes, set device attributes etc.(iv)Information maintenance:-get or set process, file, or device attributes(v)Communications:-Send, receive messages, transfer status information60)Explain Booting the system and Bootstrap program in operating system.The procedure of starting a computer by loading the kernel is known as booting the system.When a user first turn on or booted the computer, it needs some initial program to run. This initial program is known as Bootstrap Program. It is stored in read-only memory (ROM) or electrically erasable programmable read-only memory (EEPROM). 
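To connect the file-manipulation category from question 59 above to real code, here is a small POSIX sketch that creates, writes and closes a file through system calls; the filename is just an example.

    #include <fcntl.h>
    #include <unistd.h>
    #include <string.h>

    int main(void) {
        /* open() creates/opens the file (file-manipulation system call). */
        int fd = open("example.txt", O_CREAT | O_WRONLY | O_TRUNC, 0644);
        if (fd < 0)
            return 1;

        const char *text = "written through the write() system call\n";
        write(fd, text, strlen(text));   /* write system call */
        close(fd);                       /* close system call */
        return 0;
    }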
Bootstrap program locates the kernel and loads it into main memory and starts its execution.61) Describe Main memory and Secondary memory storage in brief.Main memory is also called random access memory (RAM). CPU can access Main memory directly. Data access from main memory is much faster than Secondary memory. It is implemented in a semiconductor technology, called dynamic random-access memory (DRAM).Main memory is usually too small to store all needed programs. It is a volatile storage device that loses its contents when power is turned off. Secondary memory can stores large amount of data and programs permanently. Magnetic disk is the most common secondary storage device. If a user wants to execute any program it should come from secondary memory to main memory because CPU can access main memory directly.62) What are the advantages of multiprocessor system?Systems which have more than one processor are called multiprocessor system. These systems are also known as parallel systems or tightly coupled systems.Multiprocessor systems have the following advantages.- Increased Throughput: Multiprocessor systems have better performance than single processor systems. It has shorter response time and higher throughput. User gets more work in less time.- Reduced Cost: Multiprocessor systems can cost less than equivalent multiple single processor systems. They can share resources such as memory, peripherals etc.- Increased reliability: Multiprocessor systems have more than one processor, so if one processor fails, complete system will not stop. In these systems, functions are divided among the different processors.63) Is it possible to have a deadlock involving only one process? Explain your answer.Deadlock with one process is not possible. Here is the explanation.A deadlock situation can arise if the following four conditions hold simultaneously in a system.- Mutual Exclusion.- Hold and Wait.- No Preemption.- Circular-wait.It is not possible to have circular wait with only one process, thus failing a necessary condition for Circular wait. There is no second process to form a circle with the first one. So it is not possible to have a deadlock involving only one process.64) What is a Kernel?- Kernel is the part of OS which handles all details of sharing resources and device handling.- It can be considered as the core of OS which manages the core features of an OS.- Its purpose is to handle the communication between software and hardware- Its services are used through system calls.- A layer of software called shell wraps around the Kernel.65) What are the main functions of a Kernel?The main functions of a Kernel are:- Process management- Device management- Memory management- Interrupt handling- I/O communication- File system management66) What are the different types of Kernel?Kernels are basically of two types:a. Monolithic Kernels - In this architecture of kernel, all the system services were packaged into a single system module which lead to poor maintainability and huge size of kernel.b. Microkernels - They follow the modular approach of architecture. Maintainability became easier with this model as only the concerned module is to be altered and loaded for every function. This model also keeps a tab on the ever growing code size of the kernel.67) What are the disadvantages of Microkernels?Following are the main disadvantages of Microkernels. Usually these disadvantages are situation based.a. Larger running memory footprintb. 
Performance loss due to the requirement of more software for interfacing.c. Difficulty in fixing the messaging bugs.d. Complicated process management.68) What is a command interpreter?It is a program that interprets the command input through keyboard or command batch file. It helps the user to interact with the OS and trigger the required system programs or execute some user application.Command interpreter is also referred to as:- Control card interpreter- Command line interpreter- Console command processor- Shell69) Explain Process and basic functions of process management.A process is a program that is running and under execution. On batch systems, it is called as a "job" while on time sharing systems, it is called as a "task".basic functions of process management.Important functions of process management are:- Creation and deletion of system processes.- Creation and deletion of users.- CPU scheduling.- Process communication and synchronization.70) What do you know about interrupt?- Interrupt can be understood as a signal from a device causing context switch.- To handle the interrupts, interrupt handlers or service routines are required.- The address of each Interrupt service routine is provided in a list which is maintained in interrupt vector.71) What is a daemon?- Daemon - Disk and execution monitor, is a process that runs in the background without user’s interaction. They usually start at the booting time and terminate when the system is shut down.72) How would you identify daemons in Unix?- The name of daemons usually end with 'd' at the end in Unix.- For e.g. httpd, named, lpd.73) What do you mean by a zombie process?- These are dead processes which are not yet removed from the process table.- It happens when the parent process has terminated while the child process is still running. This child process now stays as a zombie.74) What do you know about a Pipe? When is it used?- It is an IPC mechanism used for one way communication between two processes which are related.- A single process doesn't need to use pipe. It is used when two process wish to communicate one-way.75) What is a named pipe?- A traditional pipe is unnamed and can be used only for the communication of related process. If unrelated processes are required to communicate - named pipes are required.- It is a pipe whose access point is a file available on the file system. When this file is opened for reading, a process is granted access to the reading end of the pipe. Similarly, when the file is opened for writing, the process is granted access to writing end of the pipe.- A named pipe is also referred to as FIFO or named FIFO.76) What are the various IPC mechanisms?IPC - Inter Process Communication.77) Various IPC mechanisms are:a. Socketsb. Pipesc. Shared memoryd. Signalse. Message Queues78) What is a semaphore?- A semaphore is a hardware or a software tag variable whose value indicates the status of a common resource.- Its purpose is to lock the common resource being used. 
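As a small illustration of locking a common resource with a semaphore, here is a sketch using POSIX unnamed semaphores (an assumed API; compile with gcc -pthread). The semaphore value acts as the status tag for the shared counter.

    #include <semaphore.h>
    #include <pthread.h>
    #include <stdio.h>

    static sem_t sem;              /* value 1 = resource free, 0 = in use */
    static int shared_counter = 0;

    static void *worker(void *arg) {
        (void)arg;
        for (int i = 0; i < 100000; i++) {
            sem_wait(&sem);        /* "wait": block until the resource is free */
            shared_counter++;      /* critical section on the common resource */
            sem_post(&sem);        /* "signal": mark the resource free again */
        }
        return NULL;
    }

    int main(void) {
        pthread_t a, b;
        sem_init(&sem, 0, 1);      /* binary semaphore, initially available */

        pthread_create(&a, NULL, worker, NULL);
        pthread_create(&b, NULL, worker, NULL);
        pthread_join(a, NULL);
        pthread_join(b, NULL);

        printf("counter = %d\n", shared_counter);
        sem_destroy(&sem);
        return 0;
    }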
A process which needs the resource will check the semaphore to determine the status of the resource followed by the decision for proceeding.- In multitasking operating systems, the activities are synchronized by using the semaphore techniques.79) What kind of operations are possible on a semaphore?Two kind of operations are possible on a semaphore - 'wait' and 'signal'.80) What is context switching?- Context is associated with each process encompassing all the information describing the current execution state of the process- When the OS saves the context of program that is currently running and restores the context of the next ready to run process, it is called as context switching.- It is important for multitasking OS.81) Tell us something about Mutex.- Mutex - ‘Mutual Exclusion Lock’ is a lock which protects access to shared data resource.- Threads can create and initialize a mutex to be used later.- Before entering a critical region the mutex is locked. It is unlocked after exiting the critical region. If any thread tries to lock the mutex during this time, it can't do so.82) What is a critical section?It is a section of code which can be executed only by one process at a time.83) What is synchronization? What are the different synchronization mechanisms?Synchronization means controlling access to a resource that is available to two or more threads or process. Different synchronization mechanisms are:- Mutex- Semaphores- Monitors- Condition variables- Critical regions- Read/ Write locks84) What is the basic difference between pre-emptive and non-pre-emptive scheduling.Pre-emptive scheduling allows interruption of a process while it is executing and taking the CPU to another process while non-pre-emptive scheduling ensures that a process keeps the CPU under control until it has completed execution.85) Is non-pre-emptive scheduling frequently used in a computer? Why?No, it is rarely used for the reasons mentioned below:- It can not ensure that each user gets a share of CPU regularly.- The idle time with this increases reducing the efficiency and overall performance of the system.- It allows program to run indefinitely which means that other processes have to wait for very long.86) Explain condition variable.- These are synchronization objects which help threads wait for particular conditions to occur.- Without condition variable, the thread has to continuously check the condition which is very costly on the resources.- Condition variable allows the thread to sleep and wait for the condition variable to give it a signal.87) What are read-write locks?- Read - write locks provide simultaneous read access to many threads while the write access stays with one thread at a time. They are especially useful in protecting the data that is not frequently written but read simultaneously by many threads.- They are slower than mutexes.88) What is a deadlock?- It is a condition where a group of two or more waiting for the resources currently in use by other processes of the same group.- In this situation every process is waiting for an event to be triggered by another process of the group.- Since no thread can free up the resource a deadlock occurs and the application hangs.89) What are the necessary conditions for deadlock to occur?a. At least one resource should be occupied in a non-sharable condition.b. A process holding at least one resource is waiting for more resources currently in use by other processes.c. It is not possible to pre-empt the resource.d. 
There exists a circular wait for processes.90) Name the functions constituting the OS's memory management.- Memory allocation and de-allocation- Integrity maintenance- Swapping- Virtual memory91) Name the different types of memory?a. Main memory also called primary memory or RAMb. Secondary memory or backing storagec. Cached. Internal process memory92) Throw some light on Internal Process Memory.- This memory consists of a set of high-speed registers. They work as temporary storage for instructions and data.93) Explain compaction.During the process of loading and removal of process into and out of the memory, the free memory gets broken into smaller pieces. These pieces lie scattered in the memory. Compaction means movement of these pieces close to each other to form a larger chunk of memory which works as a resource to run larger processes.94) What are page frames?Page frames are the fixed size contiguous areas into which the main memory is divided by the virtual memory.95) What are pages?- Pages are same sized pieces of logical memory of a program. Usually they range from 4 KB to 8 KB depending on the addressing hardware of the machine.- Pages improve the overall system performance and reduces requirement of physical storage as the data is read in 'page' units.96) Differentiate between logical and physical address.- Physical addresses are actual addresses used for fetching and storing data in main memory when the process is under execution.- Logical addresses are generated by user programs. During process loading, they are converted by the loader into physical address.97) When does page fault error occur?- It occurs when a page that has not been brought into main memory is accessed.98) Explain thrashing.- In virtual memory system, thrashing is a high page fault scenario. It occurs due to under-allocation of pages required by a process.- The system becomes extremely slow due to thrashing leading to poor performance.99) What are the basic functions of file management in OS?- Creation and deletion of files/ directories.- Support of primitives for files/ directories manipulation.- Backing up of files on storage media.- Mapping of files onto secondary storage.100) Explain thread.- It is an independent flow of control within a process.- It consists of a context and a sequence of instructions for execution.101) What are the advantage of using threads?The main advantages of using threads are:a.) No special communication mechanism is required.b.) Readability and simplicity of program structure increases with threads.c.) System becomes more efficient with less requirement of system resources.102) What are the disadvantages of using threads?The main disadvantages of using threads are:- Threads can not be re-used as they exist within a single process.- They corrupt the address space of their process.- They need synchronization for concurrent read-write access to memory.103) What is a compiler?A compiler is a program that takes a source code as an input and converts it into an object code. 
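Relating to questions 94 to 96 above, the split of a logical address into a page number and an offset can be shown numerically; the page size, page-table contents and the sample address below are made-up values for illustration.

    #include <stdio.h>

    #define PAGE_SIZE 4096                       /* 4 KB pages (example value) */

    int main(void) {
        /* Hypothetical page table: page number -> physical frame number. */
        int page_table[4] = {7, 2, 9, 5};

        unsigned logical = 13000;                /* example logical address */
        unsigned page    = logical / PAGE_SIZE;  /* which page of the program */
        unsigned offset  = logical % PAGE_SIZE;  /* position inside that page */

        unsigned frame    = page_table[page];
        unsigned physical = frame * PAGE_SIZE + offset;

        printf("logical %u -> page %u, offset %u -> physical %u\n",
               logical, page, offset, physical);
        return 0;
    }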
During the compilation process the source code goes through lexical analysis, parsing and intermediate code generation which is then optimized to give final output as an object code.104) What is a library?It is a file which contains object code for subroutines and data to be used by the other program.105) What are the advantages of distributed system?Advantages of distributed system are:- Resources get shared- Load gets shared- Reliability is improved- Provide a support for inter-process communication106) What are the different types of scheduling algorithms?The scheduling algorithms decide which processes in the ready queue are to be allocated to the CPU for execution. Scheduling algorithms can be broadly classified on the basis of:- Preemptive algorithms- Round Robin Scheduling- Shortest Job First Scheduling (can be both)- Priority Scheduling (can be both)- Non-preemptive algorithms- First Come First Served SchedulingNon-Preemptive algorithms: In this type of scheduling once a CPU has been allocated to a process it would not release the CPU till a request for termination or switching to waiting state occurs.Preemptive algorithms: In this type of scheduling a process maybe interrupted during execution and the CPU maybe allocated to another process.107) Why is round robin algorithm considered better than first come first served algorithm?The first come first served algorithm is the simplest scheduling algorithm known. The processes are assigned to the CPU on the basis of their arrival time in the ready queue. Since, it is non-preemptive once a process is assigned to the CPU, it will run till completion. Since a process takes the CPU till it is executed it is not very good in providing good response times. It can make other important processes wait un-necessarily.On the other hand, the round robin algorithm works on the concept of time slice or also known as quantum. In this algorithm, every process is given a predefined amount of time to complete the process. In case, a process is not completed in its predefined time then it is assigned to the next process waiting in queue. In this way, a continuous execution of processes is maintained which would not have been possible in case of FCFS algorithm108) Explain how a copying garbage collector works. How can it be implemented using semispaces?The copying garbage collector basically works by going through live objects and copying them into a specific region in the memory. This collector traces through all the live objects one by one. This entire process is performed in a single pass. Any object that is not copied in memory is garbage.The copying garbage collector can be implemented using semispaces by splitting the heap into two halves. Each half is a contiguous memory region. All the allocations are made from a single half of the heap only. When the specified heap is half full, the collector is immediately invoked and it copies the live objects into the other half of the heap. In this way, the first half of the heap then only contains garbage and eventually is overwritten in the next pass.109) How does reference counting manage memory allocated objects? When can it fail to reclaim objects?Reference counting augments every object with a count of the number of times an object has been referenced. This count is incremented every time a reference to that object is made. Also every time a reference is destroyed the reference is decremented. This process is repeated till the reference count becomes zero. 
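A minimal sketch of that reference-counting bookkeeping is shown below; the names object_new, retain and release are invented for illustration, and a real collector would also need thread safety and a strategy for cycles.

    #include <stdio.h>
    #include <stdlib.h>

    /* Every managed object carries a count of live references to it. */
    typedef struct {
        int refcount;
        int payload;
    } Object;

    static Object *object_new(int payload) {
        Object *o = malloc(sizeof(Object));
        o->refcount = 1;           /* the creator holds the first reference */
        o->payload = payload;
        return o;
    }

    static void retain(Object *o)  { o->refcount++; }

    static void release(Object *o) {
        /* When the last reference goes away the object is reclaimed. */
        if (--o->refcount == 0) {
            printf("reclaiming object with payload %d\n", o->payload);
            free(o);
        }
    }

    int main(void) {
        Object *o = object_new(42);
        retain(o);     /* a second reference is taken somewhere */
        release(o);    /* ...and later dropped */
        release(o);    /* last reference dropped: the object is reclaimed */
        return 0;
    }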
Once the reference count of an object reaches zero the object can be reclaimed. In this way, reference counting systems can perform automatic memory management by keeping a count in every object. Any object that does not have a reference count can be considered to be dead and that memory can be reclaimed.The reference counting method can fail to reclaim objects in case of cyclic references. There are no concrete ways to avoid this problem and it is always suggested to create an architecture that does not use a circular reference.110) What differences are there between a semaphore wait signal and a condition variable wait signal?Semaphore wait signal:- They can be used anywhere except in a monitor.- The wait() function does not always blocks its caller.- The signal() function increments the semaphore counter and can release a process.- If the signal() releases a process, the released and the caller both continue.Condition Variable wait signal:- It can only be used in monitors.- The wait() function always blocks its caller.- The signal() can either release a process or it is lost as if it never occurred.- On signal() releasing a process either the caller or the released continues but not both at the same time.111) For a deadlock to occur what are the necessary conditionsIn order for deadlocks to occur there are four necessary conditions:- Mutual Exclusion: The resources available are not sharable. This implies that the resources used must be mutually exclusive.- Hold and Wait: Any process requires some resources in order to be executed. In case of insufficient availability of resources a process can take the available resources, hold them and wait for more resources to be available.- No Preemption: The resources that a process has on hold can only be released by the process itself voluntarily. This resource cannot be preempted by the system.- Circular Waiting: A special type of waiting in which one process is waiting for the resources held by a second process. The second process is in turn waiting for the resources held by the first process.112) Why is the context switch overhead of a user-level threading as compared to the overhead for processes? Explain.This is due to the reason that a context switch implementation is done by the kernel. During this process the state information is copied between the processor and the PCB (process control block) or the TCB (thread control block). Since the kernel does not know anything about user-level threads, technically it is not possible for it to be a user level thread context switch. The user level scheduler can do some limited state copying on the behalf of a thread prior to the control being handed to that thread. But this copying of state information is smaller compared to that of a kernel-level process. 
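Returning to the wait/signal comparison in question 110 above, here is a compact sketch of the monitor-style pattern using a pthread condition variable (an assumed API; compile with gcc -pthread); note that pthread_cond_wait always blocks its caller until it is signalled.

    #include <pthread.h>
    #include <stdio.h>

    static pthread_mutex_t lock  = PTHREAD_MUTEX_INITIALIZER;
    static pthread_cond_t  ready = PTHREAD_COND_INITIALIZER;
    static int data_available = 0;

    static void *consumer(void *arg) {
        (void)arg;
        pthread_mutex_lock(&lock);
        while (!data_available)               /* re-check: wakeups may be spurious */
            pthread_cond_wait(&ready, &lock); /* always blocks the caller */
        printf("consumer: data is available\n");
        pthread_mutex_unlock(&lock);
        return NULL;
    }

    int main(void) {
        pthread_t t;
        pthread_create(&t, NULL, consumer, NULL);

        pthread_mutex_lock(&lock);
        data_available = 1;                   /* make the condition true... */
        pthread_cond_signal(&ready);          /* ...then signal the waiter */
        pthread_mutex_unlock(&lock);

        pthread_join(t, NULL);
        return 0;
    }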
Also the process does not involve going into the kernel mode with the help of a system call.113) State the advantages of segmented paging over pure segmentation?In broad terms paging is a memory management technique that allows a physical address space of a process to be non-contiguous.Segmented paging has a certain set of advantages over pure segmentation such as:- Segmented paging does not have any source of external fragmentation.- Since a segment existence is not restricted to a contiguous memory range it can be easily grown and does not have to adjust into a physical memory medium.- With segmented paging the addition of an offset and a base is simpler as it is only an append operation instead of it being a full addition operation.114) When does the Belady's anomaly occur?The Belady's anomaly is a situation in which the number of page faults increases when additional physical memory is added to a system. This anomaly arises in some algorithms that implement virtual memory. The virtual memory allows programs larger than the physical memory space to execute. An algorithm suffers from this problem when it cannot guarantee that a page will be kept when a small number of frames are available. An optimal algorithm would not suffer from this problem as it replaces the page not to be used for the longest time. The anomaly occurs when the page replacement algorithm will remove a page that will be needed in the immediate future. An optimal algorithm will not select such a page that will be required immediately. This anomaly is also stated to be unbounded.115) What complications does concurrent processing add to an operating system?There are various complications of concurrent processing such as:- A time sharing method must be implemented to allow multiple processes to have an access to the system. This will involve the preemption of processes that do not give up CPU on their own i.e. more than one process may be executing kernel code simultaneously.- The amount of resources that a process can use and the operations that it may perform must be limited. The system resources and the processes must be protected from each other.- Kernel must be designed to prevent deadlocks between the various processes, i.e. Cyclic waiting or hold and waiting must not occur.- Effective memory management techniques must be used to better utilize the limited resources.116) How can a VFS layer allow multiple file systems support?The VFS layer also known as the virtual file system functions in many ways similar to object oriented programming techniques. It acts like an abstraction layer on top of a more specific file system. The VFS layer enables the OS to make system calls independent of the file system type used. Any file system that is used gives its function calls used and the data structures to the layer of VFS. The VFS layer translates a system call into the correct specific functions for the targeted file system. The program that is used for calling does not have a file system specific code also the system call structures used in upper levels are file system independent. The VFS layer translation translates the non-file system specific calls into a file system specific operation.117) What are the pros and cons of using circuit switching?The primary advantage of using circuit switching is that it ensures the availability of resources. That is it reserves the network resources required for a specific transfer prior to the transmission taking place. 
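Belady's anomaly from question 114 above is easiest to see by simulating FIFO page replacement. The sketch below uses the classic textbook reference string; with 3 frames it produces 9 page faults, and with 4 frames it produces 10.

    #include <stdio.h>

    /* Count page faults for FIFO replacement with a given number of frames. */
    static int fifo_faults(const int *refs, int n, int nframes) {
        int frames[16];
        int used = 0, next = 0, faults = 0;

        for (int i = 0; i < n; i++) {
            int hit = 0;
            for (int j = 0; j < used; j++)
                if (frames[j] == refs[i]) { hit = 1; break; }
            if (hit)
                continue;
            faults++;
            if (used < nframes) {
                frames[used++] = refs[i];        /* free frame available */
            } else {
                frames[next] = refs[i];          /* evict the oldest page */
                next = (next + 1) % nframes;
            }
        }
        return faults;
    }

    int main(void) {
        /* Classic reference string used to demonstrate Belady's anomaly. */
        int refs[] = {1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5};
        int n = sizeof(refs) / sizeof(refs[0]);

        printf("3 frames: %d faults\n", fifo_faults(refs, n, 3));  /* 9  */
        printf("4 frames: %d faults\n", fifo_faults(refs, n, 4));  /* 10 */
        return 0;
    }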
By doing so it ensures that no packet would be dropped and the required quality of service is met.The disadvantage of using circuit switching is that it requires a round trip message to setup a reservation. By doing so as it provisions the resources ahead of the transmission it might lead to the suboptimal use of resources.Circuit switching can be implemented for applications that have constant demand for network resources for long periods of time.118) What problems are faced during the implementation of a network-transparent system?A designer primarily faces two major problems while implementing a network-transparent system. They are as follows:- The primary problem is to make all the processors and storage devices to appear transparent on the network. This implies that the distributed system should appear as a single centralized system to the users using the network.There are two solutions to it:- The Andrews files system- The NFS system.- Both these file systems (distributed) appear as a single file system to the user whereas in reality it may be distributed over the network.- The secondary issue is regarding the user mobility. The designer would want any user to connect to the entire system overall rather than to a particular machine.119) Explain the layers of a Windows XP system.The layers of Windows XP system boot-up is as follows:- A situation of operating system portability is created by the hardware abstraction layer by hiding hardware differences from the operating systems upper layers. A virtual machine interface is provided by the hardware abstraction layer to be used by the kernel dispatcher and the device drivers.- The foundation provided by the kernel layer is used by the executive functions and the user mode sub systems. The kernel would always remain in memory and cannot be preempted. The functions of the kernel are thread scheduling, interrupt and exception handling etc.- The executive layer is responsible for providing services to be used by all subsystems. These can be object manager, process manager, i/o manager etc.120) Explain the booting process of a Windows XP system.The steps involved are as follows:- As the computer is powered on, the BIOS begins execution from ROM, it loads and executes the bootstrap loader.- The NTLDR program is loaded from the root directory of the system disk and determines which boot disk contains the operating system.- NTLDR loads the HAL library, kernel and system hive. The system hive indicates the required boot drivers and loads them one by one.- Kernel execution begins by initializing the system and creating two processes: the system process containing all internal worker threads and the first user-mode initialization process: SMSS.- SMSS further initializes the system by establishing paging files and loading device drivers.- SMSS creates two processes: WINLOGON, which brings up the rest of the system and CSRSS, the Win32 subsystem process.121) How are data structures handled by NTFS and how does it recover from a crash?In an NTFS file system inside the transactions all the data structure updates are performed. Prior to the alteration of a data structure a transaction creates log record containing information on redo and undo functions. Once a transaction is completed commit record information is stored in the logs.An NTFS system recovers from a crash by accessing information from the created log records. 
The first step is to redo operations of committed transactions and undoing those transactions which could not be successfully committed. Although the NTFS file system after recovering from a crash might not reflect the same user data prior to a crash but it can guarantee the file data structures are undamaged. It restores the structure to a pre-crash and consistent state.122) What are the benefits and losses of placing the functionality in a device controller rather than in placing it in the kernel?The benefits of placing functionality in the device controller are:- System crasher due to the occurrence of a bug is greatly reduced.- By the utilization of dedicated hardware and algorithms that are hard coded the performance can be improved greatly.- Since the algorithms are hard coded the kernel gets simplified.The banes of placing functionality in the controller rather than the kernel are:- Once a bug occurs they are difficult to fix, a new firmware or revision may be required.- For performance improvement of algorithms hardware upgrades are required rather than a device driver update.123) What are merits and demerits of systems supporting multiple file structure and systems supporting a stream of bytes?The main advantage of having a system that supports multiple file structures is that the support for it is provided by the system itself no other individual application is required to provide the multiple structure support. Since the support is provided by the system itself the implementation is much more efficient as compared to application level.A demerit of such kind of implementation is that it can increase the overall size of the system. Also, since the support is provided by the system, for an application that requires a different file type may not be executable on such a system.A good alternative for this is that the OS does not define any support for file structures instead all files are considered to be a series of bytes. By doing so the support for file systems is simplified as the OS does not have to specify the different structures for the file systems. It allows the applications to define the file structures. This kind of implementation can be found in UNIX.124) What do you understand by transaction atomicity?The transaction process can be considered to be a series of read and write operations upon some data which is followed by a commit operation. By transaction atomicity it means that if a transaction is not completed successfully then the transaction must be aborted and any changes that the transactions did while execution must be roll backed. It means that a transaction must appear as a single operation that cannot be divided. This ensures that integrity of the data that is being updated is maintained. If the concept of atomicity in transaction is not used any transaction that is aborted midway may result in data to be inconsistent as there might be a possibility two transactions may be sharing the same data value.125) Why is a single serial port managed with a single interrupt-driven I/O but a front-end processor is managed using a polling I/O, such as a terminal concentrator?When the I/O is frequent and of very short durations polling is considered to be more efficient than an interrupt driven I/O. 
Although, a serial port individually can have fairly infrequent number of I/O and hence should ideally use interrupts the case of serial ports in a terminal concentrator is different.A terminal concentrator consists of multiple serial ports and this can lead to the creation of multiple short I/O instances this can create un-necessary load on the system in case of interrupts usage.Instead, if a polling loop is used it can greatly reduce the amount of load on the system by looping through without the requirement of I/O.Due to this reason interrupts are used for single ports as the frequency of I/O on such a port is less and can be managed effectively, whereas we use polling for multiple ports as the frequency of I/O increases and are of short durations which suits polling.126) What is graceful degradation?- It is the ability to continue providing service proportional to level of hardware.- Systems designed for graceful degradation are called fault tolerant.- If we have several processors connected together, then failure of one would not stop the system.- Then the entire system runs only 10% slower.- This leads to increased reliability of the system.127) What are loosely coupled systems?- These systems are also called as the distributed systems.- It consist of collection of processors that do not share memory or clock.- The processors communicate through high speed buses or telephone lines.- It can be a centralized system where the server responds to client requests.- It can also be a peer to peer system.128) Explain SMP.- It is called as symmetric multiprocessing which is multiprocessor system.- In it each processor runs an identical copy of the operating system.- These copies communicate with one another as needed.- These processor systems lead to increased throughput.- These systems are also called parallel systems or tightly coupled systems.129) What is DLM?- It is the service called as distributed lock manager.- In cluster systems to avoid file sharing the distributed systems must provide the access control and file locking.- This ensures that no conflicting operations occur in the system.- Here the distributed file systems are not general purpose therefore it requires locking.130) Explain the handheld systems. 
List the issues related to the handheld system.- Handheld devices are palm tops and cellular telephones with connectivity to a network.- These devices are of limited size which leads to limited applications.- They use a memory 512KB to 16MB as a result the operating system and applications must use the memory efficiently.- The speed of the processors is only a fraction of speed of the PC processors and for faster processors larger battery is required.- These devices use very small display screens so reading mails and browsing must be condensed to smaller displays.131)Why is interrupt vector used in operating systems?- The operating system these days are interrupt driven and this requires the interrupt vector.- This interrupt vector contains the addresses of the interrupt service routines for various devices.- Here the interrupts can be indirectly called through the table with no intermediate routine needed.- This leads to interrupt handling at a faster rate.- Operating systems like MS DOS and UNIX are using the interrupt vector.132) What is the need of device status table?- This table gives the device type, its address and status.- It is required to keep a track of many input output requests at the same time.- The state of the device can be functioning, idle or busy.- If a device is busy, type of request and other parameters are stored in the table entry.- If more than one processor issues request for the same device then a wait queue is maintained.133) How can the speed of interrupt driven input output systems be improved?- Direct memory access is used to enhance the speed of the input output systems.- Here, buffers, counters and pointers are set for the devices.- The device controller transfers the block of data directly from own buffer storage to memory.- The data is not given to the CPU for further transfer between CPU and input output devices or CPU and memory.- Only one interrupt is generated per block than one interrupt per byte which enhances the speed.134) Explain the execution cycle for a von Neumann architecture.- Initially the system will fetch the instruction and stores it in instruction register.- Instruction is then decoded and may cause operands to be fetched from memory.- After execution the result is stored in the memory.- Here the memory unit sees only the memory addresses irrespective of how they are generated.- Memory unit is also unaware of what addresses are for.135) Explain the positioning time for a disk.- It is also called as the random access time used by a disk to perform operations.- It consists of time to move the disk arm to the desired cylinder called the seek time.- The time required for the desired sector to rotate to the disk head is called rotational latency.- Typical disks can transfer megabytes of data per second.- Seek time and rotational latency is always in milliseconds.136) What is EIDE?- EIDE is a bus called enhanced integrated drive electronics.- The input output devices are attached to the computer by a set of wires called the bus.- The data transfer on a bus are carried out by electronic processes called controllers.- The host controller sends messages to device controller and device controller performs the operations.- These device controllers consist of built in cache so that data transfer occurs at faster speed.137) Differentiate between the user mode and monitor mode.- User mode and monitor mode are distinguished by a bit called the mode bit.- User mode uses bit 1 and monitor mode uses bit 0.- At the boot time hardware starts with the 
monitor mode.- Also, at the time of interrupt user mode is shifted to the transfer mode.- System always switches to the user mode before passing control to the user program.- Whenever system gains control of the computer it works in monitor mode otherwise in user mode.138) What is time slice?- The timer in CPU is set to interrupt every N milliseconds where this N is called the time slice.- It is the time each user gets to execute before control is given to next user.- At the end of each time slice the value of N is incremented and the record is maintained.- It also maintains the record of the total time user program has executed thus far.- This method helps in time sharing among the various users.139) What are the activities related to the Time Shared User Program Management?- An Operating System is responsible for the creation and deletion of both user and system processes.- It also provides mechanism for the process synchronization.- Suspending and resuming of windows is done by the operating system itself.- Program needs resources like CPU time, memory, files, input output devices to complete the task which is provided by the operating system.- Mechanisms are also provided for deadlock handling.140) When an input file is opened, what are the possible errors that may occur?- 1st condition may be that the file is protected against access, here it terminates abruptly.- 2nd condition may be that file exists, then we need to create the output file.- If file with the same name exists then it may be deleted or program may be aborted.- In another case the system may ask the user to replace the existing file or abort the program.141) Explain PCB.- PCB, process control block, is also called as the task control block.- It contains information about the process state like new, ready, running, waiting and halt.- It also includes the information regarding the process priority and pointers to scheduling queues .- Its counter indicates the address of the next instruction to be executed for the process.- It basically serves as the storage for any information that may vary from process to process.142) What is context switching ?- It is the process of switching the CPU from one process to another.- This requires to save the state of the old process and loading the saved state for the new process.- The context of the process is represented in the process control block.- During switching the system does no useful work.- How the address space is preserved and what amount of work is needed depends on the memory management.143) What is cascading termination?- If one process is terminated, its related processes are also terminated abnormally then it is called cascade termination.- It occurs in the case of parent child process.- If the parent process is terminated normally or abnormally then all its child processes must be terminated.- The parent is existing and the operating system does not allow a child to continue if its parent terminates.- This child process is the new process created by the process called the parent process.144) Explain IPC.- It is called as the inter process communication.- The scheme requires that processes share a common buffer pool and code for implementing the buffer.- It allows processes to communicate and to synchronize their actions.- Example : chat program used on the world wide web.- It is useful in distributed computer systems where communicating processes reside on different computers connected with a network.145) What are sockets?- A socket is defined as endpoint for 
communication, a pair of sockets is used by the pair of processes.- It is made of IP address chained with a port number.- They use the client server architecture.- Server waits for incoming client requests by listening to specified port.- On reception of request, server accepts connection from client socket to complete the connection.146) What is virtual memory, how is it implemented, and why do operating systems use it?Real, or physical, memory exists on RAM chips inside the computer. Virtual memory, as its name suggests, doesn’t physically exist on a memory chip. It is an optimization technique and is implemented by the operating system in order to give an application program the impression that it has more memory than actually exists. Virtual memory is implemented by various operating systems such as Windows, Mac OS X, and Linux.So how does virtual memory work? Let’s say that an operating system needs 120 MB of memory in order to hold all the running programs, but there’s currently only 50 MB of available physical memory stored on the RAM chips. The operating system will then set up 120 MB of virtual memory, and will use a program called the virtual memory manager (VMM) to manage that 120 MB. The VMM will create a file on the hard disk that is 70 MB (120 – 50) in size to account for the extra memory that’s needed. The O.S. will now proceed to address memory as if there were actually 120 MB of real memory stored on the RAM, even though there’s really only 50 MB. So, to the O.S., it now appears as if the full 120 MB actually exists. It is the responsibility of the VMM to deal with the fact that there is only 50 MB of real memory.The paging file and the RAMNow, how does the VMM function? As mentioned before, the VMM creates a file on the hard disk that holds the extra memory that is needed by the O.S., which in our case is 70 MB in size. This file is called a paging file (also known as a swap file), and plays an important role in virtual memory. The paging file combined with the RAM accounts for all of the memory. Whenever the O.S. needs a ‘block’ of memory that’s not in the real (RAM) memory, the VMM takes a block from the real memory that hasn’t been used recently, writes it to the paging file, and then reads the block of memory that the O.S. needs from the paging file. The VMM then takes the block of memory from the paging file, and moves it into the real memory – in place of the old block. This process is called swapping (also known as paging), and the blocks of memory that are swapped are called pages. The group of pages that currently exist in RAM, and that are dedicated to a specific process, is known as the working set for that process.As mentioned earlier, virtual memory allows us to make an application program think that it has more memory than actually exists. There are two reasons why one would want this: the first is to allow the use of programs that are too big to physically fit in memory. The other reason is to allow for multitasking – multiple programs running at once. Before virtual memory existed, a word processor, e-mail program, and browser couldn’t be run at the same time unless there was enough memory to hold all three programs at once. This would mean that one would have to close one program in order to run the other, but now with virtual memory, multitasking is possible even when there is not enough memory to hold all executing programs at once.Virtual Memory Can Slow Down PerformanceHowever, virtual memory can slow down performance. 
If the size of virtual memory is quite large in comparison to the real memory, then more swapping to and from the hard disk will occur as a result. Accessing the hard disk is far slower than using system memory. Using too many programs at once in a system with an insufficient amount of RAM results in constant disk swapping – also called thrashing, which can really slow down a system’s performance.147) Suppose we have a paging system with 4 frames and 12 pages, where the number of frames denotes the number of pages that can be held in RAM at any given time. Assume the pages are accessed by some program in the order shown below, from left to right. Also, assume that the program has just started, so the frames are initially empty. How many page faults will be generated assuming that the LRU (Least Recently Used) algorithm is being used?Order in which pages are accessed:3, 4, 2, 1, 4, 7, 2, 5, 3, 6, 1, 3Reading the previous discussion on virtual memory is recommended to better understand this problem. A page fault occurs when a program tries to access a page that is mapped in address space, but not loaded in the physical memory (the RAM). In other words, a page fault occurs when a program can not find a page that it’s looking for in the physical memory, which means that the program would have to access the paging file (which resides on the hard disk) to retrieve the desired page.The term page fault is a bit misleading as it implies that something went seriously wrong. Although page faults are undesirable – as they result in slow accesses to the hard disk – they are quite common in any operating system that uses virtual memory.Now, we need to actually solve the problem. The easiest way to do this is to break the problem down into 12 steps (where 12 is the number of pages) to see what happens each time a page is referenced by the program, and at each step see whether a page fault is generated or not. Of course, we want to keep track of what pages are currently in the physical memory (the RAM). The first four page accesses will result in page faults because the frames are initially empty. After that, if the program tries to access a page that’s already in one of the frames then there’s no problem. But if the page that the program is trying to access is not already in one of the frames then that results in a page fault. In this case, we have to determine which page we want to take out (or ‘swap’) from the RAM, and for that we use the LRU algorithm.Some other algorithm could be used as well – FIFO and NRU are other possibilities – and as a group these are known as page replacement algorithms. Applying the LRU algorithm to this problem is fairly straightforward – you simply remove the page that was least recently used. Proceeding in this manner leads to the chart shown below – you should try this out yourself before looking at the answer.We can see that 9 page faults will be generated in this scenario.148) What is the purpose of swapping in virtual memory?Swapping is exchanging data between the hard disk and the RAMThe goal of the virtual memory technique is to make an application think that it has more memory than actually exists. If you read the recommended question then you know that the virtual memory manager (VMM) creates a file on the hard disk called a swap file. Basically, the swap file (also known as a paging file) allows the application to store any extra data that can’t be stored in the RAM – because the RAM has limited memory. 
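To tie this back to the page-replacement exercise in question 147, here is a minimal, purely illustrative Java sketch (the class and variable names are my own) that replays the same reference string against 4 frames under LRU and counts the faults; running it prints 9, matching the answer given above.

import java.util.ArrayDeque;
import java.util.Deque;

public class LruPageFaults {
    public static void main(String[] args) {
        int frames = 4;                                      // pages that fit in RAM at once
        int[] refs = {3, 4, 2, 1, 4, 7, 2, 5, 3, 6, 1, 3};  // reference string from question 147

        // Deque ordered from least recently used (front) to most recently used (back).
        Deque<Integer> ram = new ArrayDeque<>();
        int faults = 0;

        for (int page : refs) {
            if (ram.remove(page)) {
                ram.addLast(page);            // hit: just refresh the page's recency
            } else {
                faults++;                     // fault: the page must be brought in from the paging file
                if (ram.size() == frames) {
                    ram.pollFirst();          // evict the least recently used page
                }
                ram.addLast(page);
            }
        }
        System.out.println("page faults: " + faults);        // prints 9
    }
}

The deque simply stands in for the set of frames; a real operating system typically tracks recency with hardware-assisted approximations rather than an exact list.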
Keep in mind that an application program can only use the data when it’s actually in the RAM. Data can be stored in the paging file on the hard disk, but it is not usable until that data is brought into the RAM. Together, the data being stored on the hard disk combined with the data being stored in the RAM comprise the entire data set needed by the application program.So, the way virtual memory works is that whenever a piece of data needed by an application program cannot be found in the RAM, then the program knows that the data must be in the paging file on the hard disk.But in order for the program to be able to access that data, it must transfer that data from the hard disk into the RAM. This also means that a piece of existing data in the RAM must be moved to the hard disk in order to make room for the data that it wants to bring in from the hard disk. So, you can think of this process as a trade in which an old piece of data is moved from the RAM to the hard disk in exchange for a ‘new’ piece of data to bring into the RAM from the hard disk. This trade is known as swapping or paging. Another term used for this is a ‘page fault’ – which occurs when an application program tries to access a piece of data that is not currently in the RAM, but is in the paging file on the hard disk. Remember that page faults are not desirable since they cause expensive accesses to the hard disk. Expensive in this context means that accessing the hard disk is slow and takes time.The Purpose Of SwappingSo, we can say that the purpose of swapping, or paging, is to access data being stored in hard disk and to bring it into the RAM so that it can be used by the application program. Remember that swapping is only necessary when that data is not already in the RAM.Excessive Swapping Causes ThrashingExcessive use of swapping is called thrashing and is undesirable because it lowers overall system performance, mainly because hard drives are far slower than RAM.149) What is the difference between a thread and a process?Processes vs ThreadsA process is an executing instance of an application. What does that mean? Well, for example, when you double-click the Microsoft Word icon, you start a process that runs Word. A thread is a path of execution within a process. Also, a process can contain multiple threads. When you start Word, the operating system creates a process and begins executing the primary thread of that process.It’s important to note that a thread can do anything a process can do. But since a process can consist of multiple threads, a thread could be considered a ‘lightweight’ process. Thus, the essential difference between a thread and a process is the work that each one is used to accomplish. Threads are used for small tasks, whereas processes are used for more ‘heavyweight’ tasks – basically the execution of applications.Another difference between a thread and a process is that threads within the same process share the same address space, whereas different processes do not. This allows threads to read from and write to the same data structures and variables, and also facilitates communication between threads. Communication between processes – also known as IPC, or inter-process communication – is quite difficult and resource-intensive.MultiThreadingThreads, of course, allow for multi-threading. 
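Before the word-processor scenario below, a small illustrative Java sketch of the two units of execution just compared (the names are mine, and the second half assumes a java launcher is available on the PATH): the thread shares this program's address space, while the child process gets its own.

public class ThreadVsProcess {
    public static void main(String[] args) throws Exception {
        // A thread: a second path of execution inside this same process,
        // sharing its address space (it can touch the 'message' variable directly).
        StringBuilder message = new StringBuilder("written by the main thread");
        Thread worker = new Thread(() -> message.append(" and by the worker thread"));
        worker.start();
        worker.join();
        System.out.println(message);

        // A process: a separate program with its own address space; the parent
        // can only interact with it through IPC (here, its streams and exit status).
        Process child = new ProcessBuilder("java", "-version").inheritIO().start();
        System.out.println("child exited with status " + child.waitFor());
    }
}

Nothing the child process does can touch the message variable, which is exactly the address-space separation described above.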
A common example of the advantage of multithreading is the fact that you can have a word processor that prints a document using a background thread, but at the same time another thread is running that accepts user input, so that you can type up a new document.If we were dealing with an application that uses only one thread, then the application would only be able to do one thing at a time – so printing and responding to user input at the same time would not be possible in a single threaded application.Each process has it’s own address space, but the threads within the same process share that address space. Threads also share any other resources within that process. This means that it’s very easy to share data amongst threads, but it’s also easy for the threads to step on each other, which can lead to bad things.Multithreaded programs must be carefully programmed to prevent those bad things from happening. Sections of code that modify data structures shared by multiple threads are called critical sections. When a critical section is running in one thread it’s extremely important that no other thread be allowed into that critical section. This is called synchronization, which we wont get into any further over here. But, the point is that multithreading requires careful programming.Also, context switching between threads is generally less expensive than in processes. And finally, the overhead (the cost of communication) between threads is very low relative to processes.Here’s a summary of the differences between threads and processes:1. Threads are easier to create than processes since they  don't require a separate address space.   2. Multithreading requires careful programming since threads  share data strucures that should only be modified by one thread at a time. Unlike threads, processes don't share the same  address space.   3. Threads are considered lightweight because they use far  less resources than processes.   4. Processes are independent of each other. Threads, since they  share the same address space are interdependent, so caution  must be taken so that different threads don't step on each other.  This is really another way of stating #2 above.   5. A process can consist of multiple threads. 150) Does Vmware Fusion come with Windows?Vmware Fusion is a Virtual MachineIf you’ve never used Vmware or a similar product before, it helps to give a quick summary of what a virtual machine does. Virtual machines like Vmware fusion allow people who have Macs to run other operating systems at the same time. So, if you’re running the Mac operating system, then you can also be running another operating system like Microsoft Windows, Linux, NetWare or Solaris simultaneously. Vmware accomplishes this through some fancy and very complex software which we won’t get into.Answering the original question, Vmware does not actually come with Windows – you will have to purchase a copy of the Windows operating system on your own to install with the Vmware software. And the same is true for any other operating system you would like to run simultaneously with the Mac operating system.The reasons for this should be clear – since Vmware allows people to run other operating systems as well, it wouldn’t make sense to just be limited to Windows. 
Also, Windows is not known to be bundled with other non Microsoft products.151) Does Parallels Desktop come with Windows?Parallels is a Virtual MachineIf you’ve never used Parallels or a similar product before, it helps to give a quick summary of what a virtual machine does. Virtual machines like Parallels Desktop allow people who have Macs to run other operating systems at the same time. So, if you’re running the Mac operating system, then you can also be running another operating system like Microsoft Windows, Linux, NetWare or Solaris simultaneously. Parallels Desktop accomplishes this through some fancy and very complex software which we won’t get into.Answering the original question, Parallels Desktop does not actually come with Windows – you will have to purchase a copy of the Windows operating system on your own to install with the Parallels Desktop software. And the same is true for any other operating system you would like to run simultaneously with the Mac operating system.The reasons for this should be clear – since Parallels Desktop allows people to run other operating systems as well, it wouldn’t make sense to just be limited to Windows. Also, Windows is not known to be bundled with other non Microsoft products.152) What is the difference between a monitor and semaphore?The reason that semaphores and monitors are needed is because multi-threaded applications (like Microsoft Word, Excel, etc) must control how threads access shared resources. This is known as thread synchronization – which is absolutely necessary in a multi-threaded application to ensure that threads work well with each other. If applications do not control the threads then it may result in corruption of data and other problems.Do I use a monitor or a semaphore?Monitors and semaphores are both programming constructs used to accomplish thread synchronization.Whether you use a monitor or a semaphore depends on what your language or system supports.What is a Monitor?A monitor is a set of multiple routines which are protected by a mutual exclusion lock. None of the routines in the monitor can be executed by a thread until that thread acquires the lock. This means that only ONE thread can execute within the monitor at a time. Any other threads must wait for the thread that’s currently executing to give up control of the lock.However, a thread can actually suspend itself inside a monitor and then wait for an event to occur. If this happens, then another thread is given the opportunity to enter the monitor. The thread that was suspended will eventually be notified that the event it was waiting for has now occurred, which means it can wake up and reacquire the lock.What is a Semaphore?A semaphore is a simpler construct than a monitor because it’s just a lock that protects a shared resource – and not a set of routines like a monitor. The application must acquire the lock before using that shared resource protected by a semaphore.Example of a Semaphore – a MutexA mutex is the most basic type of semaphore, and mutex is short for mutual exclusion. In a mutex, only onethread can use the shared resource at a time. If another thread wants to use the shared resource, it must wait for the owning thread to release the lock.Differences between Monitors and SemaphoresBoth Monitors and Semaphores are used for the same purpose – thread synchronization. But, monitors are simpler to use than semaphores because they handle all of the details of lock acquisition and release. 
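As an illustrative Java sketch of that difference (the counter class is hypothetical, while java.util.concurrent.Semaphore and the synchronized keyword are standard Java), compare a monitor-style method, where the lock is taken and released automatically, with a binary semaphore, where the routine must do both by hand:

import java.util.concurrent.Semaphore;

public class SharedCounter {
    private int monitorCount = 0;
    private int semaphoreCount = 0;
    private final Semaphore mutex = new Semaphore(1);    // binary semaphore acting as a mutex

    // Monitor style: entering the method acquires the object's lock,
    // and leaving it (normally or via an exception) releases it.
    public synchronized void incrementWithMonitor() {
        monitorCount++;
    }

    // Semaphore style: the routine must acquire and release explicitly,
    // and must remember the release even when an exception is thrown.
    public void incrementWithSemaphore() throws InterruptedException {
        mutex.acquire();
        try {
            semaphoreCount++;
        } finally {
            mutex.release();
        }
    }
}

Forgetting the release in the second method is exactly the kind of mistake described next.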
An application using semaphores has to release any locks a thread has acquired when the application terminates - this must be done by the application itself. If the application does not do this, then any other thread that needs the shared resource will not be able to proceed. Another difference when using semaphores is that every routine accessing a shared resource has to explicitly acquire a lock before using the resource. This can easily be forgotten when coding the routines that deal with multithreading. Monitors, unlike semaphores, automatically acquire the necessary locks.

Is there a cost to using a monitor or semaphore?
Yes, there is a cost associated with using synchronization constructs like monitors and semaphores: the time required to get the necessary locks whenever a shared resource is accessed.

153) Provide an example of threading and synchronization in Java
The best way to really understand threading and the need for synchronization is through a great example. Here we will present an example of an online banking system to help show the potential problems with multi-threading, and their solution through the use of a thread synchronization construct like a monitor.

Let's suppose we have an online banking system where people can log in and access their account information. Whenever someone logs in to their account online, they receive a separate and unique thread, so that different bank account holders can access the central system simultaneously.

Now let's create a Java class to represent those individual bank accounts. Instances of this class are created when people actually log in online. Let's name the class BankAccount. This class has a method called "deposit" that's used to deposit funds into the bank account, and another method called "transfer" to transfer funds from this account to another account.

Here is some simple Java code that represents the BankAccount class:

public class BankAccount {
    int accountNumber;
    double accountBalance;

    // to withdraw funds from the account
    public boolean transfer(double amount)
    {
        double newAccountBalance;
        if (amount > accountBalance)
        {
            // there are not enough funds in the account
            return false;
        }
        else
        {
            newAccountBalance = accountBalance - amount;
            accountBalance = newAccountBalance;
            return true;
        }
    }

    public boolean deposit(double amount)
    {
        double newAccountBalance;
        if (amount < 0.0)
        {
            return false; // can not deposit a negative amount
        }
        else
        {
            newAccountBalance = accountBalance + amount;
            accountBalance = newAccountBalance;
            return true;
        }
    }
}

Example of a race condition in Java
You've now seen the code above, but let's get into the problems that we can run into when we have a multi-threaded application. The problem that we will be presenting below is what's called a race condition. A race condition occurs when a program or application malfunctions because of an unexpected ordering of events that produces contention over the same resource. That sounds confusing, but it will make a lot more sense once you read the example below.

So, let's get into the actual problem. Let's say that there's a husband and wife - Jack and Jill - who share a joint account. They currently have $1,000 in their account. They both log in to their online bank account at the same time, but from different locations. They both decide, at the same time, to deposit $200 each into the account through wire transfers from other bank accounts that they have.
So, the total account balance after these 2 deposits should be $1,000 + ($200 * 2), which equals $1,400.

Let's say Jill's transaction goes through first, but Jill's thread of execution is switched out (to Jack's transaction thread) right after executing this line of code in the deposit method:

newAccountBalance = accountBalance + amount;

Now, the processor is running the thread for Jack, who is also depositing $200 into their account. When Jack's thread deposits $200, the account balance is still only $1,000, because the accountBalance variable has not yet been updated by Jill's thread. Remember that Jill's thread stopped execution right before the accountBalance variable was updated. So, Jack's thread runs until it completes the deposit function and updates the value of the accountBalance variable to $1,200. After this, control returns to Jill's thread, where newAccountBalance already holds the value $1,200. It simply assigns this value of $1,200 to accountBalance and returns. And that is the end of execution.

What is the result of these 2 deposits of $200? The accountBalance variable ends up being set to only $1,200, when it should have been $1,400. This means Jack and Jill lost $200. This is good for the bank, but a huge problem for Jack and Jill, and any of the bank's other customers.

The cause of the race condition
So, do you see how the problem was caused here? Because Jill's thread was switched out (to Jack's thread) right before the accountBalance variable was updated, one of the two $200 deposits was effectively lost. If you remember the definition of a race condition, the example we just gave should clear it up. Here's the definition again, in case you forgot: a race condition occurs when a program or application malfunctions because of an unexpected ordering of events that produces contention over the same resource. Hopefully, it now makes a lot more sense.

Synchronization fixes race conditions in multi-threaded programs
But the real question is: how can this problem be fixed? It should be clear that the code needs to allow the deposit function to run to completion without switching to a different thread. This is what synchronization is all about - fixing issues like this! It can be accomplished with a synchronization construct like a monitor.

Example of using the synchronized keyword in Java
This problem is easily fixed in Java. In the code below, all we do is add the synchronized keyword to the transfer and deposit methods to create a monitor.

public class BankAccount {
    int accountNumber;
    double accountBalance;

    // to withdraw funds from the account
    public synchronized boolean transfer(double amount)
    {
        double newAccountBalance;
        if (amount > accountBalance)
        {
            // there are not enough funds in the account
            return false;
        }
        else
        {
            newAccountBalance = accountBalance - amount;
            accountBalance = newAccountBalance;
            return true;
        }
    }

    public synchronized boolean deposit(double amount)
    {
        double newAccountBalance;
        if (amount < 0.0)
        {
            return false; // can not deposit a negative amount
        }
        else
        {
            newAccountBalance = accountBalance + amount;
            accountBalance = newAccountBalance;
            return true;
        }
    }
}

Synchronized keyword locks methods
What does the synchronized keyword do for us here? If a thread is executing inside either the deposit or transfer method, then it is impossible for any other thread to enter either of those methods.
This means that only one thread can execute those functions at a time - which is exactly what we want to prevent the problem with the accountBalance variable that we described earlier.First, it is not possible for two invocations of synchronized methods on the same object to interleave - so one thread can not interrupt another thread until it is done executing all of the code in a synchronized method. So, when one thread is executing a synchronized method all other threads are blocked from entering that method.154) What is the difference between a System Thread and a User Thread?There is a difference between user threads and system threads, and it helps to explain that difference. The system creates the system thread (no surprise there). Everything starts with the system thread – it is the first and main thread. The application usually ends when the system thread terminates. User threads are created by the application to do tasks that either cannot or should not be done by the system thread.Applications which display user interfaces have to be careful when using threads. The system (or main) thread in these types of applications is also called the event thread – because it waits for and submits events (like clicks of the mouse and keyboard actions) to the application for processing. Allowing the event/system thread to be blocked for any period of time is generally considered bad programming practice, because it can lead to an unresponsive application or even a frozen computer. The problem of a blocked event thread is avoided by creating user threads to handle time consuming operations.So, What are The Differences Between System and User Threads?From the discussion above, you can probably extract this information, but we thought we would make it very clear. System threads are the main threads used in an application, whereas user threads are created to handle different tasks as they come to the application.Enjoy learning.
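As a small addendum to the system-thread versus user-thread discussion, here is an illustrative Java sketch (the names are mine) of the usual pattern: a user thread takes on the slow work so the main thread is never blocked.

public class BackgroundWork {
    public static void main(String[] args) throws InterruptedException {
        // A user thread created by the application for a time-consuming task,
        // so the main (event) thread is not blocked while it runs.
        Thread worker = new Thread(() -> {
            try {
                Thread.sleep(2000);                    // stand-in for a slow operation, e.g. printing
                System.out.println("background task finished");
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });
        worker.start();

        // Meanwhile the main thread stays free to keep responding to the user.
        System.out.println("main thread is still responsive");
        worker.join();                                 // wait for the worker before the program exits
    }
}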

What are Operating system interview questions?

What are the functions of an operating system?
The operating system controls and coordinates the use of hardware among the different processes and applications, and provides various services to users. Its main jobs are:- Resource utilization- Resource allocation- Process management- Memory management- File management- I/O management- Device management

Describe system calls and their types.
System calls work as a mediator between user programs and the services provided by the operating system. In practice, the functions that make up an API (application program interface) typically invoke the actual system calls on behalf of the application programmer.

Types of System Call
System calls can be grouped roughly into five major categories:
1. Process control – create process, terminate process, end, allocate and free memory, etc.
2. File manipulation – create file, delete file, open, close, read, write.
3. Device manipulation – request device, release device, read, write, reposition, get device attributes, set device attributes, etc.
4. Information maintenance – get or set process, file, or device attributes.
5. Communications – send and receive messages, transfer information.

Explain booting the system and the bootstrap program in an operating system.
The procedure of starting a computer by loading the kernel is known as booting the system. When a user first turns the computer on, it needs some initial program to run. This initial program is known as the bootstrap program. It is stored in read-only memory (ROM) or electrically erasable programmable read-only memory (EEPROM). The bootstrap program locates the kernel, loads it into main memory and starts its execution.

Describe main memory and secondary memory storage in brief.
Main memory is also called random access memory (RAM). The CPU can access main memory directly, and data access from main memory is much faster than from secondary memory. It is implemented in a semiconductor technology called dynamic random-access memory (DRAM). Main memory is usually too small to store all needed programs, and it is a volatile storage device that loses its contents when power is turned off. Secondary memory can store large amounts of data and programs permanently. The magnetic disk is the most common secondary storage device. If a user wants to execute a program, it must be brought from secondary memory into main memory, because the CPU can access main memory directly.

What are the advantages of a multiprocessor system?
Systems which have more than one processor are called multiprocessor systems. They are also known as parallel systems or tightly coupled systems. Multiprocessor systems have the following advantages.- Increased throughput: multiprocessor systems have better performance than single-processor systems, with shorter response time and higher throughput, so the user gets more work done in less time.- Reduced cost: multiprocessor systems can cost less than equivalent multiple single-processor systems because they can share resources such as memory and peripherals.- Increased reliability: multiprocessor systems have more than one processor, so if one processor fails the complete system does not stop, since functions are divided among the different processors.

Is it possible to have a deadlock involving only one process? Explain your answer.
Deadlock with one process is not possible.
Here is the explanation.A deadlock situation can arise if the following four conditions hold simultaneously in a system.- Mutual Exclusion.- Hold and Wait.- No Preemption.- Circular-wait.It is not possible to have circular wait with only one process, thus failing a necessary condition for Circular wait. There is no second process to form a circle with the first one. So it is not possible to have a deadlock involving only one process.What is an operating system?An operating system is a collection of software programs which control the allocation and usage of various hardware resources in the system. It is the first program to be loaded in the computer and it runs in the memory till the system is shut down.Some of the popular Operating Systems are DOS, Windows, Ubuntu, Solaris etc.What are its main functions?The main functions of an OS are:a. Process Managementb. Memory Managementc. Input/ Output Managementd. Storage/ File system managementWhat is a Kernel?- Kernel is the part of OS which handles all details of sharing resources and device handling.- It can be considered as the core of OS which manages the core features of an OS.- Its purpose is to handle the communication between software and hardware- Its services are used through system calls.- A layer of software called shell wraps around the Kernel.What are the main functions of a Kernel?The main functions of a Kernel are:- Process management- Device management- Memory management- Interrupt handling- I/O communication- File system managementWhat are the different types of Kernel?Kernels are basically of two types:a. Monolithic Kernels - In this architecture of kernel, all the system services were packaged into a single system module which lead to poor maintainability and huge size of kernel.b. Microkernels - They follow the modular approach of architecture. Maintainability became easier with this model as only the concerned module is to be altered and loaded for every function. This model also keeps a tab on the ever growing code size of the kernel.What are the disadvantages of Microkernels?Following are the main disadvantages of Microkernels. Usually these disadvantages are situation based.a. Larger running memory footprintb. Performance loss due to the requirement of more software for interfacing.c. Difficulty in fixing the messaging bugs.d. Complicated process management.Explain Process.A process is a program that is running and under execution. On batch systems, it is called as a "job" while on time sharing systems, it is called as a "task".Explain the basic functions of process management.Important functions of process management are:- Creation and deletion of system processes.- Creation and deletion of users.- CPU scheduling.- Process communication and synchronization.What do you know about interrupt?- Interrupt can be understood as a signal from a device causing context switch.- To handle the interrupts, interrupt handlers or service routines are required.- The address of each Interrupt service routine is provided in a list which is maintained in interrupt vector.What do you mean by a zombie process?- These are dead processes which are not yet removed from the process table.- It happens when the parent process has terminated while the child process is still running. This child process now stays as a zombie.What do you know about a Pipe? When is it used?- It is an IPC mechanism used for one way communication between two processes which are related.- A single process doesn't need to use pipe. 
It is used when two process wish to communicate one-way.What is a named pipe?- A traditional pipe is unnamed and can be used only for the communication of related process. If unrelated processes are required to communicate - named pipes are required.- It is a pipe whose access point is a file available on the file system. When this file is opened for reading, a process is granted access to the reading end of the pipe. Similarly, when the file is opened for writing, the process is granted access to writing end of the pipe.- A named pipe is also referred to as FIFO or named FIFO.What are the various IPC mechanisms?IPC - Inter Process Communication.Various IPC mechanisms are:a. Socketsb. Pipesc. Shared memoryd. Signalse. Message QueuesWhat is a semaphore?- A semaphore is a hardware or a software tag variable whose value indicates the status of a common resource.- Its purpose is to lock the common resource being used. A process which needs the resource will check the semaphore to determine the status of the resource followed by the decision for proceeding.- In multitasking operating systems, the activities are synchronized by using the semaphore techniques.What kind of operations are possible on a semaphore?Two kind of operations are possible on a semaphore - 'wait' and 'signal'.What is context switching?- Context is associated with each process encompassing all the information describing the current execution state of the process- When the OS saves the context of program that is currently running and restores the context of the next ready to run process, it is called as context switching.- It is important for multitasking OS.Tell us something about Mutex.- Mutex - ‘Mutual Exclusion Lock’ is a lock which protects access to shared data resource.- Threads can create and initialize a mutex to be used later.- Before entering a critical region the mutex is locked. It is unlocked after exiting the critical region. If any thread tries to lock the mutex during this time, it can't do so.What is a critical section?It is a section of code which can be executed only by one process at a time.What is synchronization? What are the different synchronization mechanisms?Synchronization means controlling access to a resource that is available to two or more threads or process. Different synchronization mechanisms are:- Mutex- Semaphores- Monitors- Condition variables- Critical regions- Read/ Write locksWhat is the basic difference between pre-emptive and non-pre-emptive scheduling.Pre-emptive scheduling allows interruption of a process while it is executing and taking the CPU to another process while non-pre-emptive scheduling ensures that a process keeps the CPU under control until it has completed execution.Is non-pre-emptive scheduling frequently used in a computer? 
Why?No, it is rarely used for the reasons mentioned below:- It can not ensure that each user gets a share of CPU regularly.- The idle time with this increases reducing the efficiency and overall performance of the system.- It allows program to run indefinitely which means that other processes have to wait for very long.Explain condition variable.- These are synchronization objects which help threads wait for particular conditions to occur.- Without condition variable, the thread has to continuously check the condition which is very costly on the resources.- Condition variable allows the thread to sleep and wait for the condition variable to give it a signal.What are read-write locks?- Read - write locks provide simultaneous read access to many threads while the write access stays with one thread at a time. They are especially useful in protecting the data that is not frequently written but read simultaneously by many threads.- They are slower than mutexes.What is a deadlock?- It is a condition where a group of two or more waiting for the resources currently in use by other processes of the same group.- In this situation every process is waiting for an event to be triggered by another process of the group.- Since no thread can free up the resource a deadlock occurs and the application hangs.What are the necessary conditions for deadlock to occur?a. At least one resource should be occupied in a non-sharable condition.b. A process holding at least one resource is waiting for more resources currently in use by other processes.c. It is not possible to pre-empt the resource.d. There exists a circular wait for processes.Name the functions constituting the OS's memory management.- Memory allocation and de-allocation- Integrity maintenance- Swapping- Virtual memoryName the different types of memory?a. Main memory also called primary memory or RAMb. Secondary memory or backing storagec. Cached. Internal process memoryThrow some light on Internal Process Memory.- This memory consists of a set of high-speed registers. They work as temporary storage for instructions and data.Explain compaction.During the process of loading and removal of process into and out of the memory, the free memory gets broken into smaller pieces. These pieces lie scattered in the memory. Compaction means movement of these pieces close to each other to form a larger chunk of memory which works as a resource to run larger processes.What are page frames?Page frames are the fixed size contiguous areas into which the main memory is divided by the virtual memory.What are pages?- Pages are same sized pieces of logical memory of a program. Usually they range from 4 KB to 8 KB depending on the addressing hardware of the machine.- Pages improve the overall system performance and reduces requirement of physical storage as the data is read in 'page' units.Differentiate between logical and physical address.- Physical addresses are actual addresses used for fetching and storing data in main memory when the process is under execution.- Logical addresses are generated by user programs. During process loading, they are converted by the loader into physical address.When does page fault error occur?- It occurs when a page that has not been brought into main memory is accessed.Explain thrashing.- In virtual memory system, thrashing is a high page fault scenario. 
It occurs due to under-allocation of pages required by a process.- The system becomes extremely slow due to thrashing leading to poor performance.What are the basic functions of file management in OS?- Creation and deletion of files/ directories.- Support of primitives for files/ directories manipulation.- Backing up of files on storage media.- Mapping of files onto secondary storage.Explain thread.- It is an independent flow of control within a process.- It consists of a context and a sequence of instructions for execution.What are the advantage of using threads?The main advantages of using threads are:a.) No special communication mechanism is required.b.) Readability and simplicity of program structure increases with threads.c.) System becomes more efficient with less requirement of system resources.What are the disadvantages of using threads?The main disadvantages of using threads are:- Threads can not be re-used as they exist within a single process.- They corrupt the address space of their process.- They need synchronization for concurrent read-write access to memory.What is a compiler?A compiler is a program that takes a source code as an input and converts it into an object code. During the compilation process the source code goes through lexical analysis, parsing and intermediate code generation which is then optimized to give final output as an object code.What is a library?It is a file which contains object code for subroutines and data to be used by the other program.What are the advantages of distributed system?Advantages of distributed system are:- Resources get shared- Load gets shared- Reliability is improved- Provide a support for inter-process communicationWhat are the different types of scheduling algorithms?The scheduling algorithms decide which processes in the ready queue are to be allocated to the CPU for execution. Scheduling algorithms can be broadly classified on the basis of:- Preemptive algorithms- Round Robin Scheduling- Shortest Job First Scheduling (can be both)- Priority Scheduling (can be both)- Non-preemptive algorithms- First Come First Served SchedulingNon-Preemptive algorithms: In this type of scheduling once a CPU has been allocated to a process it would not release the CPU till a request for termination or switching to waiting state occurs.Preemptive algorithms: In this type of scheduling a process maybe interrupted during execution and the CPU maybe allocated to another process.Why is round robin algorithm considered better than first come first served algorithm?The first come first served algorithm is the simplest scheduling algorithm known. The processes are assigned to the CPU on the basis of their arrival time in the ready queue. Since, it is non-preemptive once a process is assigned to the CPU, it will run till completion. Since a process takes the CPU till it is executed it is not very good in providing good response times. It can make other important processes wait un-necessarily.On the other hand, the round robin algorithm works on the concept of time slice or also known as quantum. In this algorithm, every process is given a predefined amount of time to complete the process. In case, a process is not completed in its predefined time then it is assigned to the next process waiting in queue. In this way, a continuous execution of processes is maintained which would not have been possible in case of FCFS algorithmExplain how a copying garbage collector works. 
How can it be implemented using semispaces?The copying garbage collector basically works by going through live objects and copying them into a specific region in the memory. This collector traces through all the live objects one by one. This entire process is performed in a single pass. Any object that is not copied in memory is garbage.The copying garbage collector can be implemented using semispaces by splitting the heap into two halves. Each half is a contiguous memory region. All the allocations are made from a single half of the heap only. When the specified heap is half full, the collector is immediately invoked and it copies the live objects into the other half of the heap. In this way, the first half of the heap then only contains garbage and eventually is overwritten in the next pass.How does reference counting manage memory allocated objects? When can it fail to reclaim objects?Reference counting augments every object with a count of the number of times an object has been referenced. This count is incremented every time a reference to that object is made. Also every time a reference is destroyed the reference is decremented. This process is repeated till the reference count becomes zero. Once the reference count of an object reaches zero the object can be reclaimed. In this way, reference counting systems can perform automatic memory management by keeping a count in every object. Any object that does not have a reference count can be considered to be dead and that memory can be reclaimed.The reference counting method can fail to reclaim objects in case of cyclic references. There are no concrete ways to avoid this problem and it is always suggested to create an architecture that does not use a circular reference.What differences are there between a semaphore wait signal and a condition variable wait signal?Semaphore wait signal:- They can be used anywhere except in a monitor.- The wait() function does not always blocks its caller.- The signal() function increments the semaphore counter and can release a process.- If the signal() releases a process, the released and the caller both continue.Condition Variable wait signal:- It can only be used in monitors.- The wait() function always blocks its caller.- The signal() can either release a process or it is lost as if it never occurred.- On signal() releasing a process either the caller or the released continues but not both at the same time.For a deadlock to occur what are the necessary conditionsIn order for deadlocks to occur there are four necessary conditions:- Mutual Exclusion: The resources available are not sharable. This implies that the resources used must be mutually exclusive.- Hold and Wait: Any process requires some resources in order to be executed. In case of insufficient availability of resources a process can take the available resources, hold them and wait for more resources to be available.- No Preemption: The resources that a process has on hold can only be released by the process itself voluntarily. This resource cannot be preempted by the system.- Circular Waiting: A special type of waiting in which one process is waiting for the resources held by a second process. The second process is in turn waiting for the resources held by the first process.Why is the context switch overhead of a user-level threading as compared to the overhead for processes? Explain.This is due to the reason that a context switch implementation is done by the kernel. 
During this process the state information is copied between the processor and the PCB (process control block) or the TCB (thread control block). Since the kernel does not know anything about user-level threads, technically it is not possible for it to be a user level thread context switch. The user level scheduler can do some limited state copying on the behalf of a thread prior to the control being handed to that thread. But this copying of state information is smaller compared to that of a kernel-level process. Also the process does not involve going into the kernel mode with the help of a system call.State the advantages of segmented paging over pure segmentation?In broad terms paging is a memory management technique that allows a physical address space of a process to be non-contiguous.Segmented paging has a certain set of advantages over pure segmentation such as:- Segmented paging does not have any source of external fragmentation.- Since a segment existence is not restricted to a contiguous memory range it can be easily grown and does not have to adjust into a physical memory medium.- With segmented paging the addition of an offset and a base is simpler as it is only an append operation instead of it being a full addition operation.When does the Belady's anomaly occur?The Belady's anomaly is a situation in which the number of page faults increases when additional physical memory is added to a system. This anomaly arises in some algorithms that implement virtual memory. The virtual memory allows programs larger than the physical memory space to execute. An algorithm suffers from this problem when it cannot guarantee that a page will be kept when a small number of frames are available. An optimal algorithm would not suffer from this problem as it replaces the page not to be used for the longest time. The anomaly occurs when the page replacement algorithm will remove a page that will be needed in the immediate future. An optimal algorithm will not select such a page that will be required immediately. This anomaly is also stated to be unbounded.What complications does concurrent processing add to an operating system?There are various complications of concurrent processing such as:- A time sharing method must be implemented to allow multiple processes to have an access to the system. This will involve the preemption of processes that do not give up CPU on their own i.e. more than one process may be executing kernel code simultaneously.- The amount of resources that a process can use and the operations that it may perform must be limited. The system resources and the processes must be protected from each other.- Kernel must be designed to prevent deadlocks between the various processes, i.e. Cyclic waiting or hold and waiting must not occur.- Effective memory management techniques must be used to better utilize the limited resources.How can a VFS layer allow multiple file systems support?The VFS layer also known as the virtual file system functions in many ways similar to object oriented programming techniques. It acts like an abstraction layer on top of a more specific file system. The VFS layer enables the OS to make system calls independent of the file system type used. Any file system that is used gives its function calls used and the data structures to the layer of VFS. The VFS layer translates a system call into the correct specific functions for the targeted file system. 
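The layering is easiest to see as an interface with interchangeable implementations. The following Java sketch is only an analogy, not a real kernel API; the types are invented. Callers program against a file-system-independent interface, and the "VFS" simply dispatches to whichever concrete file system happens to be mounted.

// File-system-independent operations that callers use.
interface FileSystem {
    String read(String path);
}

// Two concrete "file systems"; each supplies its own implementation.
class FakeNtfs implements FileSystem {
    public String read(String path) { return "NTFS block for " + path; }
}

class FakeExt4 implements FileSystem {
    public String read(String path) { return "ext4 block for " + path; }
}

public class VfsSketch {
    public static void main(String[] args) {
        FileSystem mounted = new FakeExt4();     // swap in FakeNtfs without changing any caller
        System.out.println(mounted.read("/home/user/notes.txt"));
    }
}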
The program that is used for calling does not have a file system specific code also the system call structures used in upper levels are file system independent. The VFS layer translation translates the non-file system specific calls into a file system specific operation.What are the pros and cons of using circuit switching?The primary advantage of using circuit switching is that it ensures the availability of resources. That is it reserves the network resources required for a specific transfer prior to the transmission taking place. By doing so it ensures that no packet would be dropped and the required quality of service is met.The disadvantage of using circuit switching is that it requires a round trip message to setup a reservation. By doing so as it provisions the resources ahead of the transmission it might lead to the suboptimal use of resources.Circuit switching can be implemented for applications that have constant demand for network resources for long periods of time.What problems are faced during the implementation of a network-transparent system?A designer primarily faces two major problems while implementing a network-transparent system. They are as follows:- The primary problem is to make all the processors and storage devices to appear transparent on the network. This implies that the distributed system should appear as a single centralized system to the users using the network.There are two solutions to it:- The Andrews files system- The NFS system.- Both these file systems (distributed) appear as a single file system to the user whereas in reality it may be distributed over the network.- The secondary issue is regarding the user mobility. The designer would want any user to connect to the entire system overall rather than to a particular machine.Explain the layers of a Windows XP system.The layers of Windows XP system boot-up is as follows:- A situation of operating system portability is created by the hardware abstraction layer by hiding hardware differences from the operating systems upper layers. A virtual machine interface is provided by the hardware abstraction layer to be used by the kernel dispatcher and the device drivers.- The foundation provided by the kernel layer is used by the executive functions and the user mode sub systems. The kernel would always remain in memory and cannot be preempted. The functions of the kernel are thread scheduling, interrupt and exception handling etc.- The executive layer is responsible for providing services to be used by all subsystems. These can be object manager, process manager, i/o manager etc.Explain the booting process of a Windows XP system.The steps involved are as follows:- As the computer is powered on, the BIOS begins execution from ROM, it loads and executes the bootstrap loader.- The NTLDR program is loaded from the root directory of the system disk and determines which boot disk contains the operating system.- NTLDR loads the HAL library, kernel and system hive. 
The system hive indicates the required boot drivers and loads them one by one.- Kernel execution begins by initializing the system and creating two processes: the system process containing all internal worker threads and the first user-mode initialization process: SMSS.- SMSS further initializes the system by establishing paging files and loading device drivers.- SMSS creates two processes: WINLOGON, which brings up the rest of the system and CSRSS, the Win32 subsystem process.How are data structures handled by NTFS and how does it recover from a crash?In an NTFS file system inside the transactions all the data structure updates are performed. Prior to the alteration of a data structure a transaction creates log record containing information on redo and undo functions. Once a transaction is completed commit record information is stored in the logs.An NTFS system recovers from a crash by accessing information from the created log records. The first step is to redo operations of committed transactions and undoing those transactions which could not be successfully committed. Although the NTFS file system after recovering from a crash might not reflect the same user data prior to a crash but it can guarantee the file data structures are undamaged. It restores the structure to a pre-crash and consistent state.What are the benefits and losses of placing the functionality in a device controller rather than in placing it in the kernel?The benefits of placing functionality in the device controller are:- System crasher due to the occurrence of a bug is greatly reduced.- By the utilization of dedicated hardware and algorithms that are hard coded the performance can be improved greatly.- Since the algorithms are hard coded the kernel gets simplified.The banes of placing functionality in the controller rather than the kernel are:- Once a bug occurs they are difficult to fix, a new firmware or revision may be required.- For performance improvement of algorithms hardware upgrades are required rather than a device driver update.What are merits and demerits of systems supporting multiple file structure and systems supporting a stream of bytes?The main advantage of having a system that supports multiple file structures is that the support for it is provided by the system itself no other individual application is required to provide the multiple structure support. Since the support is provided by the system itself the implementation is much more efficient as compared to application level.A demerit of such kind of implementation is that it can increase the overall size of the system. Also, since the support is provided by the system, for an application that requires a different file type may not be executable on such a system.A good alternative for this is that the OS does not define any support for file structures instead all files are considered to be a series of bytes. By doing so the support for file systems is simplified as the OS does not have to specify the different structures for the file systems. It allows the applications to define the file structures. This kind of implementation can be found in UNIX.What do you understand by transaction atomicity?The transaction process can be considered to be a series of read and write operations upon some data which is followed by a commit operation. By transaction atomicity it means that if a transaction is not completed successfully then the transaction must be aborted and any changes that the transactions did while execution must be roll backed. 
What are the benefits and drawbacks of placing functionality in a device controller rather than in the kernel?
The benefits of placing functionality in the device controller are:
- The chance of a system crash caused by a bug in that code is greatly reduced.
- Performance can be improved greatly by using dedicated hardware and hard-coded algorithms.
- Since the algorithms are implemented in the controller, the kernel itself is simplified.
The drawbacks of placing functionality in the controller rather than the kernel are:
- Bugs are difficult to fix once they occur; a new firmware revision or new hardware may be required.
- Improving the algorithms requires a hardware upgrade rather than just a device-driver update.

What are the merits and demerits of systems supporting multiple file structures and systems supporting a stream of bytes?
The main advantage of a system that supports multiple file structures is that the support is provided by the system itself; no individual application has to provide it. Because the support is in the system, the implementation can be more efficient than an application-level one.
A demerit of this approach is that it increases the overall size of the operating system. Moreover, an application that needs a file type the system does not define may not be able to run on such a system.
A good alternative is for the OS to define no file structures at all and to treat every file as a sequence of bytes. This simplifies the operating system's file support, since it does not have to specify different structures, and it leaves applications free to define their own file structures. This is the approach taken in UNIX.

What do you understand by transaction atomicity?
A transaction can be considered a series of read and write operations on some data, followed by a commit operation. Transaction atomicity means that if a transaction does not complete successfully it must be aborted, and any changes it made during execution must be rolled back.
In other words, a transaction must appear as a single operation that cannot be divided. This preserves the integrity of the data being updated. Without atomicity, a transaction aborted midway could leave the data inconsistent, since two transactions may be sharing the same data value.

Why might a system use interrupt-driven I/O to manage a single serial port but polling I/O to manage a front-end processor such as a terminal concentrator?
Polling is more efficient than interrupt-driven I/O when the I/O operations are frequent and of very short duration. A single serial port generates fairly infrequent I/O, so it is best handled with interrupts. A terminal concentrator, however, multiplexes many serial ports, which produce many short I/O events; servicing each of them with an interrupt would place an unnecessary load on the system. A polling loop can sweep through all the ports and service them with far less overhead. For this reason interrupts are used for a single port, where I/O is infrequent and easily managed, while polling is used for the front-end processor, where I/O is frequent and short, which suits polling.
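As a rough illustration of the contrast just described, the sketch below simulates the two styles in C: a polling loop that sweeps a set of port status flags versus a handler invoked once per event. The "device registers" are plain variables invented for the example; nothing here corresponds to a real driver API.

```c
#include <stdio.h>
#include <stdbool.h>

#define NPORTS 8

/* Simulated device status registers: true means "a byte is ready". */
static bool data_ready[NPORTS];
static char data_reg[NPORTS];

static void handle_byte(int port, char c) {
    printf("port %d received '%c'\n", port, c);
}

/* Polling: one sweep services every ready port with no per-event
 * interrupt overhead -- attractive when events are many and short. */
static void poll_all_ports(void) {
    for (int p = 0; p < NPORTS; p++) {
        if (data_ready[p]) {
            handle_byte(p, data_reg[p]);
            data_ready[p] = false;
        }
    }
}

/* Interrupt style: the handler runs once per event on one port --
 * fine when events are rare, wasteful when they are frequent. */
static void serial_interrupt_handler(int port) {
    handle_byte(port, data_reg[port]);
    data_ready[port] = false;
}

int main(void) {
    data_reg[2] = 'a'; data_ready[2] = true;
    data_reg[5] = 'b'; data_ready[5] = true;
    poll_all_ports();                 /* terminal-concentrator style */

    data_reg[0] = 'c'; data_ready[0] = true;
    serial_interrupt_handler(0);      /* single-serial-port style    */
    return 0;
}
```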
What is graceful degradation?
- It is the ability to continue providing service in proportion to the level of surviving hardware.
- Systems designed for graceful degradation are also called fault tolerant.
- If several processors are connected together, the failure of one does not stop the system.
- With ten processors, for example, losing one means the entire system runs only about 10% slower.
- This leads to increased reliability of the system.

What are loosely coupled systems?
- These systems are also called distributed systems.
- They consist of a collection of processors that share neither memory nor a clock.
- The processors communicate through high-speed buses or telephone lines.
- Such a system can be a centralized system, where a server responds to client requests.
- It can also be a peer-to-peer system.

Explain SMP.
- SMP stands for symmetric multiprocessing, a form of multiprocessor system.
- Each processor runs an identical copy of the operating system.
- These copies communicate with one another as needed.
- Such multiprocessor systems provide increased throughput.
- They are also called parallel systems or tightly coupled systems.

What is DLM?
- DLM is the service called the distributed lock manager.
- In clustered systems, where machines share access to files, the distributed software must provide access control and file locking to avoid conflicting operations.
- This locking ensures that no conflicting operations occur in the system.
- Because the shared cluster file systems are not general-purpose distributed file systems, they require this explicit locking service, which the DLM provides.

Explain handheld systems and list the issues related to them.
- Handheld devices are palmtops and cellular telephones with connectivity to a network.
- These devices are of limited physical size, which limits the applications they can run.
- They have small amounts of memory (on the order of 512 KB to 16 MB), so the operating system and applications must use memory efficiently.
- Their processors run at only a fraction of the speed of PC processors, since faster processors would require larger batteries.
- They have very small display screens, so tasks such as reading e-mail and browsing must be condensed onto smaller displays.

Why is an interrupt vector used in operating systems?
- Operating systems today are interrupt driven, and this requires an interrupt vector.
- The interrupt vector contains the addresses of the interrupt service routines for the various devices.
- An interrupt can be dispatched indirectly through the table, with no intermediate routine needed.
- This allows interrupts to be handled at a faster rate.
- Operating systems such as MS-DOS and UNIX use an interrupt vector.

What is the need for a device status table?
- The table records each device's type, address and state.
- It is required to keep track of many I/O requests at the same time.
- The state of a device can be not functioning, idle or busy.
- If a device is busy, the type of request and other parameters are stored in its table entry.
- If more than one process issues a request for the same device, a wait queue is maintained.

How can the speed of interrupt-driven I/O systems be improved?
- Direct memory access (DMA) is used to increase the speed of the I/O system.
- Buffers, counters and pointers are set up for the device.
- The device controller then transfers a block of data directly between its own buffer storage and memory.
- The data does not pass through the CPU for transfer between the device and memory.
- Only one interrupt is generated per block rather than one interrupt per byte, which improves speed.

Explain the execution cycle for a von Neumann architecture.
- The system first fetches an instruction and stores it in the instruction register.
- The instruction is then decoded and may cause operands to be fetched from memory.
- After execution, the result may be stored back in memory.
- The memory unit sees only a stream of memory addresses, regardless of how they are generated.
- The memory unit is also unaware of what the addresses are for (instructions or data).
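A toy fetch-decode-execute loop in C makes the cycle concrete. The tiny instruction set (LOAD/ADD/STORE/HALT) is invented purely for illustration and is not any real machine's encoding.

```c
#include <stdio.h>

/* Toy machine: each instruction is {opcode, operand address}. */
enum { OP_HALT, OP_LOAD, OP_ADD, OP_STORE };
struct instr { int op; int addr; };

int main(void) {
    int memory[8] = { 5, 7, 0 };                 /* data memory        */
    struct instr program[] = {                   /* instruction memory */
        { OP_LOAD, 0 }, { OP_ADD, 1 }, { OP_STORE, 2 }, { OP_HALT, 0 }
    };
    int pc = 0, acc = 0, running = 1;

    while (running) {
        struct instr ir = program[pc++];          /* fetch into the IR */
        switch (ir.op) {                          /* decode            */
        case OP_LOAD:  acc = memory[ir.addr];  break;   /* execute     */
        case OP_ADD:   acc += memory[ir.addr]; break;
        case OP_STORE: memory[ir.addr] = acc;  break;
        case OP_HALT:  running = 0;            break;
        }
    }
    printf("memory[2] = %d\n", memory[2]);        /* prints 12 */
    return 0;
}
```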
Explain the positioning time for a disk.
- Positioning time is also called the random-access time of a disk.
- It consists of the time to move the disk arm to the desired cylinder, called the seek time.
- It also includes the time for the desired sector to rotate under the disk head, called the rotational latency.
- Typical disks can transfer several megabytes of data per second.
- Seek time and rotational latency are measured in milliseconds.

What is EIDE?
- EIDE (enhanced integrated drive electronics) is a bus.
- I/O devices are attached to the computer by a set of wires called a bus.
- Data transfers on a bus are carried out by electronic components called controllers.
- The host controller sends messages to the device controller, and the device controller performs the operations.
- Device controllers contain built-in caches so that data transfer can occur at a higher speed.

Differentiate between user mode and monitor mode.
- User mode and monitor mode are distinguished by a bit called the mode bit.
- The mode bit is 1 in user mode and 0 in monitor mode.
- At boot time the hardware starts in monitor mode.
- Whenever an interrupt occurs, the hardware switches from user mode to monitor mode.
- The system always switches to user mode before passing control to a user program.
- Whenever the operating system gains control of the computer it runs in monitor mode; otherwise the machine runs in user mode.

What is a time slice?
- The timer in the CPU is set to interrupt every N milliseconds, where N is called the time slice.
- It is the amount of time each user gets to execute before control is given to the next user.
- At the end of each time slice, control returns to the operating system, which updates its records.
- The system also maintains a record of the total time each user program has executed so far.
- This method enables time sharing among multiple users.

What are the activities related to time-shared user program management?
- The operating system is responsible for the creation and deletion of both user and system processes.
- It provides mechanisms for process synchronization.
- Suspending and resuming of processes is done by the operating system itself.
- A program needs resources such as CPU time, memory, files and I/O devices to complete its task, and these are provided by the operating system.
- Mechanisms are also provided for deadlock handling.

When an input file is opened, what are the possible errors that may occur?
- The input file may be protected against access (or may not exist), in which case the program terminates abruptly.
- If the input file exists and is accessible, the output file may then need to be created.
- If a file with the same name as the output file already exists, it may be deleted or the program may be aborted.
- Alternatively, the system may ask the user whether to replace the existing file or abort the program.

Explain PCB.
- The PCB, or process control block, is also called the task control block.
- It contains information about the process state, such as new, ready, running, waiting or halted.
- It also includes information about the process priority and pointers to scheduling queues.
- Its program counter field indicates the address of the next instruction to be executed for the process.
- It essentially serves as the repository for any information that may vary from process to process.
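A simplified PCB can be sketched as a C structure. The fields shown follow the description above (state, priority, program counter, saved registers, accounting); the exact layout is kernel-specific, and every name here is hypothetical.

```c
#include <stdint.h>

/* Illustrative process control block -- field names are invented. */
enum proc_state { P_NEW, P_READY, P_RUNNING, P_WAITING, P_HALTED };

struct pcb {
    int              pid;             /* process identifier                  */
    enum proc_state  state;           /* new / ready / running / waiting ... */
    int              priority;        /* scheduling priority                 */
    uintptr_t        program_counter; /* next instruction to execute         */
    uintptr_t        registers[16];   /* saved CPU registers                 */
    struct pcb      *next_in_queue;   /* link into a scheduling queue        */
    unsigned long    cpu_time_used;   /* accounting information              */
};

int main(void) {
    struct pcb p = { .pid = 1, .state = P_NEW, .priority = 5 };
    (void)p;   /* a real kernel would link this into its scheduling queues */
    return 0;
}
```

During a context switch (described next), the old process's register state is saved into its PCB and the new process's state is loaded from its own PCB.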
What is context switching?
- It is the act of switching the CPU from one process to another.
- It requires saving the state of the old process and loading the saved state of the new process.
- The context of a process is represented in its process control block.
- During the switch the system does no useful work.
- How the address space is preserved, and how much work is needed, depends on the memory-management method in use.

What is cascading termination?
- If terminating one process causes its related processes to be terminated as well, this is called cascading termination.
- It occurs with parent and child processes.
- If the parent process terminates, normally or abnormally, all of its child processes must also be terminated.
- On such systems the operating system does not allow a child to continue existing once its parent has terminated.
- A child process is a new process created by another process, called the parent process.

Explain IPC.
- IPC stands for inter-process communication.
- One common scheme requires that processes share a common buffer pool along with the code for implementing the buffer.
- IPC allows processes to communicate and to synchronize their actions.
- Example: a chat program used on the World Wide Web.
- It is especially useful in distributed systems, where the communicating processes may reside on different computers connected by a network.

What are sockets?
- A socket is defined as an endpoint for communication; a pair of sockets is used by a pair of communicating processes.
- A socket is identified by an IP address concatenated with a port number.
- Sockets use the client-server architecture.
- The server waits for incoming client requests by listening on a specified port.
- On receiving a request, the server accepts a connection from the client socket to complete the connection.
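To ground the description, here is a minimal TCP client sketch in C using the standard POSIX sockets API (socket, connect, send). The address 127.0.0.1 and port 8080 are arbitrary example values; a server would need to be listening there for the connection to succeed.

```c
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <arpa/inet.h>
#include <sys/socket.h>

int main(void) {
    /* Create an endpoint for communication. */
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0) { perror("socket"); return 1; }

    /* The peer is identified by an IP address concatenated with a port number. */
    struct sockaddr_in server = {0};
    server.sin_family = AF_INET;
    server.sin_port   = htons(8080);                    /* example port    */
    inet_pton(AF_INET, "127.0.0.1", &server.sin_addr);  /* example address */

    /* Client side of the client-server model: connect to a listening server. */
    if (connect(fd, (struct sockaddr *)&server, sizeof server) < 0) {
        perror("connect");
        close(fd);
        return 1;
    }

    const char *msg = "hello\n";
    send(fd, msg, strlen(msg), 0);
    close(fd);
    return 0;
}
```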
