Friday, 24 July 2020

Types of Operating Systems (OS):

Types of OS are as follows:

Desktop Systems:

Personal computers (PCs) appeared in the 1970s. Initially, the CPUs in PCs lacked the features needed to protect an operating system from user programs.

Early PC operating systems were neither multiuser nor multitasking. Over time, however, the goals of these operating systems changed; instead of maximizing CPU and peripheral utilization, they chose to maximize user convenience and responsiveness.

The most popular examples of these systems are PCs running Microsoft Windows and the Apple Macintosh. Developments in operating systems for mainframes benefited desktop operating systems in several ways: microcomputers were immediately able to adopt some of the technology developed for larger operating systems.

The hardware cost of microcomputers is sufficiently low that individuals have sole use of the computer, and CPU utilization is no longer a prime concern. As such, some of the design decisions made in operating systems for mainframes are inappropriate for these smaller systems.

However, other design decisions still apply. For example, file protection was at first not necessary on a personal machine. But nowadays these computers are often connected to other computers via LANs or Internet connections. When other computers and other users can access the files on a PC, file protection becomes a necessary feature of the operating system.

MS-DOS and the early Mac OS initially did not have such a protection mechanism, which made it easy for malicious programs to destroy data. These programs may be self-replicating and may spread rapidly via worm or virus mechanisms.

With newer hardware advances such as virtual memory and multitasking, there is no longer a need for the entire program to reside in main memory.

 

Multiprocessor Systems:

Uniprocessor systems are systems that contain only a single CPU. Multiprocessor systems consist of more than one processor in close communication. The processors share the computer bus, the system clock and sometimes memory and IO devices. These systems are also known as parallel systems or tightly coupled systems, and they are growing in importance in today's world.

 Features:

· If one of the processors fails, the other processors should be able to retrieve the interrupted process state so that the process continues to execute.

· Context switching should be efficiently supported by the processors.

· These systems support large physical address space as well as virtual address space.

· There is provision for IPC (Inter-Process Communication), often implemented in hardware, which makes communication easy and efficient.

 

Advantages:

1.     Increased throughput:  By increasing the number of processors, we hope to get more work done in less time. However, the speedup ratio with N processors is not N; it is less than N. When multiple processors cooperate on a task, a certain amount of overhead is incurred in keeping all the parts working correctly. This overhead, plus contention for shared resources, lowers the expected gain from additional processors (see the sketch after this list).

2.     Economy of scale:  They are cheaper than multiple single-processor systems because they share resources.

3.     Increased reliability: The failure of one processor will not halt the system; it only slows the system down. This ability to continue providing service proportional to the level of surviving hardware is called graceful degradation. Systems designed for graceful degradation are also called fault-tolerant systems.
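The reduced speedup can be made concrete with a simple worked model. The sketch below uses an Amdahl-style formula with an added overhead term; the serial fraction and per-processor overhead values are assumptions invented for illustration, not figures from the text above.

```python
# A minimal sketch (assumed model): estimate the speedup obtained with N
# processors when part of the task is serial and each extra processor adds
# a little coordination overhead.

def estimated_speedup(n_processors, serial_fraction=0.05, overhead_per_cpu=0.01):
    """Amdahl-style speedup estimate with a simple linear overhead term."""
    parallel_fraction = 1.0 - serial_fraction
    # Ideal speedup is limited by the serial part of the task.
    ideal = 1.0 / (serial_fraction + parallel_fraction / n_processors)
    # Coordination overhead grows with the number of cooperating processors.
    return ideal / (1.0 + overhead_per_cpu * (n_processors - 1))

if __name__ == "__main__":
    for n in (1, 2, 4, 8, 16):
        print(f"{n:2d} processors -> speedup ~ {estimated_speedup(n):.2f}")
```

With these assumed numbers, 4 processors give a speedup of roughly 3.4 rather than 4, which is the "less than N" behaviour described in point 1.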

 

There are two types of multiprocessing:

A] Symmetric Multiprocessing (SMP):

 ·         In SMP, each processor runs an identical copy of the operating system and these copies communicate with one another as needed.

 ·         SMP means that all processors are peers; no master-slave relationship exists between processors. Each processor performs all the tasks within the OS.

 

 

 

B] Asymmetric Multiprocessing (ASMP):

·         In ASMP, each processor is assigned a specific task.

·         A master processor controls the system. The other processors either look to the master for instruction or have predefined tasks.

·         This scheme defines a master-slave relationship.

·         The master processor schedules and allocates work to the slave processors.

·         Each processor has its own memory address space.

  

 Distributed Systems:

A network is a communication path between two or more systems. Distributed systems depend on networking for their functionality. Networks vary by the protocols used, the distances between nodes and the transport media. TCP/IP is the most common network protocol, although ATM and other protocols are also in widespread use. Likewise, operating system support for protocols varies.

A distributed OS runs on and controls the resources of multiple machines. It provides resource sharing across those machines, and the user gets the feel of an OS for just a single machine. It owns the whole network and appears to the user as a virtual uniprocessor or multiprocessor.

Definition: A Distributed OS is a system that looks like an ordinary operating system to its users but it runs on multiple, independent CPUs.

Advantages:

1.    Resource sharing: It allows both hardware and software resources to be shared effectively among all the computers and users.

2.    Higher reliability: Reliability is the degree of tolerance against errors and component failures. Availability is a key aspect of reliability; it can be defined as the time span for which the system is available for use. We can increase availability by having multiple hard disks located at different sites: if one of the hard disks fails or becomes unavailable, another can be used. This in turn provides higher reliability.

3.    Higher throughput rates with shorter response times.

4.    Easier expansion: It is easy to extend power and functionality by adding additional resources.

5.    Better price-performance ratio: Nowadays, microprocessors are getting cheaper while their computing power increases, which yields a better price-performance ratio.

 

Disadvantages:

·        They are hard to build.

·         No commercial examples of such systems are available.

·         Increased overhead of protocols used in communication.

 

A] Client-Server Systems:

  • As PCs have become faster, more powerful and cheaper, designers have shifted away from centralized system architecture.
  • Terminals connected to centralized systems are now being replaced by PCs.
  • The user interface functionality that used to be handled directly by the centralized system is increasingly being handled by the PCs.
  • As a result, centralized systems today act as server systems that satisfy requests generated by client systems.
  • A server is typically a high-performance machine, and clients usually interact with it through a request-response mechanism.
  • A client sends a request to the server; the server processes the request and generates a response for the client (see the sketch after this list).
  • Server systems can be broadly categorized as compute servers and file servers.
  • Compute-server systems provide an interface to which clients can send requests to perform an action; in response, they execute the action and send the results back to the client.
  • File-server systems provide a file-system interface where clients can create, update, read and delete files.
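As a rough illustration of the request-response mechanism described above, the sketch below implements a tiny compute server over TCP sockets in Python. The loopback address, port number and the "upper-case the text" action are arbitrary choices made up for the example, not anything prescribed by the text.

```python
# Minimal client-server sketch (illustrative only): the server performs a
# trivial "compute" action by upper-casing whatever request it receives.
import socket
import threading

HOST, PORT = "127.0.0.1", 50007      # assumed example address and port

# Bind and listen before starting the client so the connection cannot race.
server_sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server_sock.bind((HOST, PORT))
server_sock.listen(1)


def serve_once():
    """Accept a single client request and send back a response."""
    conn, _ = server_sock.accept()
    with conn:
        request = conn.recv(1024)        # receive the client's request
        conn.sendall(request.upper())    # "compute" a response and reply


threading.Thread(target=serve_once, daemon=True).start()

with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as client:
    client.connect((HOST, PORT))
    client.sendall(b"hello server")      # the client sends a request
    print(client.recv(1024))             # prints b'HELLO SERVER'
server_sock.close()
```

The same request-response shape applies whether the server is a compute server (it performs an action) or a file server (it reads or writes files on the client's behalf).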

 

B] Peer-to-Peer Systems:

  • The computer networks used in these applications consist of a collection of processors that do not share memory or a clock.
  • Instead, each processor has its own local memory. The processors communicate with one another through various communication lines, such as high-speed buses or telephone lines (see the sketch after this list).
  • These systems are usually referred to as loosely coupled systems or distributed systems.
  • A Network OS is an operating system that provides features such as file sharing across the network and that includes a communications scheme that allows different processors on different computers to exchange messages.
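As a loose illustration of processors with separate memories exchanging messages, the sketch below uses Python's multiprocessing module. It runs two processes on one machine connected by a pipe, so it only models the message-passing idea, not a real network OS facility; the message contents are made up.

```python
# Minimal message-exchange sketch between processes that share no memory.
from multiprocessing import Process, Pipe


def peer(conn):
    """The remote peer: receive a message, then send one back."""
    msg = conn.recv()
    conn.send("reply to: " + msg)
    conn.close()


if __name__ == "__main__":
    local_end, remote_end = Pipe()            # a two-way communication line
    p = Process(target=peer, args=(remote_end,))
    p.start()
    local_end.send("hello from this node")    # exchange messages, no shared memory
    print(local_end.recv())                   # -> "reply to: hello from this node"
    p.join()
```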

 

 Clustered Systems:

A clustered system is a group of computer systems connected by a high-speed communication link. Each computer system has its own memory and peripheral devices. Clustering is usually performed to provide high availability.

Like parallel systems, clustered systems gather together multiple CPUs to accomplish computational work. However, they differ from parallel systems in that they are composed of two or more individual systems coupled together.

The definition of the term cluster is not concrete. The generally accepted definition is that clustered computers share storage and are closely linked via LAN networking. These systems involve both hardware clustering and software clustering. In hardware clustering, high-performance disks are shared, while in software clustering the cluster takes the form of unified control of the system.

A layer of cluster software runs on the cluster nodes. Each node can monitor one or more of the others over the LAN. If the monitored machine fails, the monitoring machine can take ownership of its storage and restart the applications that were running on the failed machine. The failed machine can remain down, but the users and clients of the application see only a brief interruption of service.

In asymmetric clustering, one machine is in hot-standby mode while the other is running the applications. The hot-standby host machine monitors the active server. If that server fails, the hot-standby host becomes the active server.

In symmetric clustering, two or more hosts are running applications, and they monitor each other. This mode is more efficient, as it uses all of the available hardware, but it does require that more than one application be available to run. A rough sketch of the heartbeat monitoring used for failover follows.
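The sketch below is a made-up illustration of such monitoring in Python, not any particular cluster product: a standby node pings the active node periodically and triggers a failover once several heartbeats in a row are missed. The host name, interval, miss threshold and ping flags (which assume Linux) are all assumptions.

```python
# Minimal heartbeat-and-failover sketch (illustrative only).
import subprocess
import time

ACTIVE_NODE = "active-node.example"   # hypothetical host name
CHECK_INTERVAL = 2                    # seconds between heartbeats (assumed)
MAX_MISSES = 3                        # missed heartbeats before failover (assumed)


def node_is_alive(host):
    """One heartbeat: a single ping with a one-second timeout (Linux ping flags)."""
    result = subprocess.run(["ping", "-c", "1", "-W", "1", host],
                            stdout=subprocess.DEVNULL,
                            stderr=subprocess.DEVNULL)
    return result.returncode == 0


def monitor_and_failover():
    misses = 0
    while True:
        if node_is_alive(ACTIVE_NODE):
            misses = 0
        else:
            misses += 1
            if misses >= MAX_MISSES:
                print("Active node lost: take over its storage, restart its applications")
                break   # real cluster software would perform the takeover here
        time.sleep(CHECK_INTERVAL)


if __name__ == "__main__":
    monitor_and_failover()
```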

Parallel clusters and clustering over a WAN are also available. Parallel clusters allow multiple hosts to access the same data on shared storage. Clusters provide all the key advantages of distributed systems, and they also provide better reliability than SMP.

  

 Real Time Systems (RTOS):

A real-time operating system is one that must react to inputs and respond to them quickly. These systems cannot afford to be late in providing a response to an event.

A real-time system has well-defined, fixed time constraints. Processing must be done within the defined constraints, or the system will fail. A real-time system functions correctly only if it returns the correct result within its time constraints.

Examples of RTOS are the systems that control scientific experiments, medical imaging systems, industrial control systems, certain display systems, automobile engine fuel injection systems, home appliance controllers, weapon systems etc.

Deterministic scheduling algorithms are used in an RTOS. Real-time systems are divided into two categories: hard real-time systems and soft real-time systems.

 

A hard real-time system guarantees that critical tasks are completed on time. This goal requires that all delays in the system be bounded, from the retrieval of stored data to the time it takes the operating system to finish any request made of it. Hard real-time requirements conflict with the operation of time-sharing systems, and as such the two cannot be mixed.

A soft real-time system is a less restrictive type in which a critical real-time task gets priority over other tasks and retains that priority until it completes. A real-time task cannot be kept waiting indefinitely for the kernel to run it. Soft real time is an achievable goal that can be mixed with other types of systems.

Soft real-time systems have more limited utility than hard real-time systems. They are risky to use for industrial control and robotics because they lack support for fixed time constraints. They are, however, useful in many areas such as multimedia, virtual reality and advanced scientific projects such as undersea exploration and planetary rovers.

Implementing soft real-time functionality requires two conditions: first, CPU scheduling must be priority based, and second, dispatch latency must be small. These systems also need advanced operating system features that cannot be supported by hard real-time systems. A rough sketch of priority-based scheduling is shown below.
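To make the priority-based scheduling condition concrete, here is a minimal sketch of a priority ready queue in Python. The task names, the priority numbers and the convention that a lower number means higher priority are illustrative assumptions, not part of any particular RTOS.

```python
# Minimal priority-based ready queue (illustrative sketch only).
# Lower priority number = more urgent; the real-time task is dispatched
# before ordinary tasks even though it arrived last.
import heapq
import itertools

arrival = itertools.count()   # tie-breaker so equal priorities stay FIFO
ready_queue = []              # heap of (priority, arrival_order, task_name)


def make_ready(priority, task_name):
    heapq.heappush(ready_queue, (priority, next(arrival), task_name))


def dispatch():
    """Pick the highest-priority ready task (smallest priority number)."""
    _, _, task_name = heapq.heappop(ready_queue)
    return task_name


make_ready(5, "log rotation")        # ordinary background task
make_ready(5, "mail indexing")       # ordinary background task
make_ready(1, "video frame decode")  # critical soft real-time task

print(dispatch())   # -> video frame decode
print(dispatch())   # -> log rotation
```

Keeping dispatch latency small is the other half of the requirement: the interval between a task becoming ready and the scheduler actually running it must stay short.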


Handheld Systems:

Handheld systems include PDAs (Personal Digital Assistants), such as Palm Pilots, and cellular telephones with connectivity to a network such as the Internet.

The developers of handheld systems face many challenges, most of which are due to the limited size of such devices. Because of this limited size, most handheld devices have a small amount of memory, slow processors and small display screens. The typical memory of such devices ranges from 512 KB to 8 MB. As a result, the operating system and applications must manage memory efficiently, which includes returning all allocated memory back to the memory manager once it is no longer being used.

The speed of the processor is a second concern for developers of handheld devices. A faster processor requires more power, so to include a faster processor in a handheld device, a larger battery is required, which needs to be replaced or recharged more frequently.

To minimize size, a smaller, slower processor that consumes less power is used. The display screens of these devices are also small, usually not more than 3 inches square, so tasks such as reading email or browsing web pages must be condensed onto these smaller displays. One approach for displaying web-page content is web clipping, where only a small subset of a web page is delivered and displayed on the handheld device.

Wireless technologies such as Bluetooth allow remote access to email and web browsing. The limited functionality of PDAs is balanced by their convenience and portability, and their use continues to expand as options such as cameras and MP3 players extend their utility.


What are the objectives of DBMS?

Objectives of DBMS:

 

The main objectives of DBMS are as follows: 

1.  Minimal Redundancy: Data redundancy is the duplication of the same data in more than one storage location. This duplication wastes storage space and time and incurs extra cost. Redundancy has to be minimized by integrating the data in one place.

2.  Consistency: Data duplication creates the problem of multi-level updates. In some cases, updating only some of the redundant data entries leaves incorrect or conflicting information in the database; such a database is called an inconsistent database. Consistency of data has to be achieved through redundancy control.

3.  Data sharing: By data sharing we mean that multiple users can use the same data in the database. New applications can also be developed, as the need arises, to operate on the same stored data. An objective of a DBMS is therefore to satisfy the data requirements of new applications without needing separate data for each application.

4.  Provision of multiple user interfaces: In order to allow different users to access the database, a DBMS should provide:

    1. Query Language: a query language, such as SQL, for casual users to access the database.
    2. Programming Language Interfaces: For application programmers.
    3. Menu Driven Interfaces: For stand-alone users.

5. Simplicity: One of the objectives of a DBMS is to make the application development task simpler and easier. To achieve this, a DBMS is accompanied by powerful query manipulation and report generation tools.

6.   Flexibility: A DBMS allows us to change the structure of a database without affecting the stored data or the existing applications. As such, it makes the process of application development cheap, fast and flexible.

7.  Data Migration: A key objective is to make the database economical. Data migration refers to placing data on costly or cheap storage media according to how it is used. Not all data in a database are referenced frequently; some data are accessed often, while other data are accessed only rarely. Frequently accessed data can be stored on fast, direct-access media, while rarely accessed data can be stored on slower, cheaper devices.

8.    Restriction of unauthorized access: Data in the database must be secured in all cases. Thus, an important objective of a DBMS is to restrict unauthorized access. To ensure this, it must provide:

    1. User identification - users must be identified before they can use the database.
    2. Monitoring of user actions – if users do anything improper, they are likely to be found out.
    3. Access controls – so that improper access to the data is not easy.

9.   Privacy and Security: Privacy and security are important objectives of a DBMS. Privacy defines when, how and to what extent data access should be given to users. Databases are costly; thus, their security is a prime concern. Data needs to be secured against accidental as well as intentional loss.

10. Integrity Enforcement: Integrity relates to data accuracy; it implies that incorrect information cannot be stored in the database. To achieve this objective, a DBMS should provide the capability to define and enforce consistency constraints on the data (a small sketch follows this list).

11. Maintaining Standards: All applicable standards should be followed in the representation of data, such as formats, naming conventions and documentation. Standardized data is very helpful when migrating or interchanging data, and it results in uniformity of the entire database as well as its usage.
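As a small illustration of integrity enforcement (and of the query-language interface from objective 4), the sketch below uses Python's built-in sqlite3 module. The account table, its columns and the rule that a balance must exceed 5000 are assumptions made up for the example.

```python
# Minimal sketch of a consistency constraint enforced by the DBMS itself.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE account (
        acc_no  INTEGER PRIMARY KEY,
        balance INTEGER NOT NULL CHECK (balance > 5000)  -- integrity constraint
    )
""")

conn.execute("INSERT INTO account VALUES (1, 12000)")      # accepted

try:
    conn.execute("INSERT INTO account VALUES (2, 100)")    # violates the constraint
except sqlite3.IntegrityError as err:
    print("Rejected by the DBMS:", err)

# A casual user's query through the query-language interface.
for row in conn.execute("SELECT acc_no, balance FROM account"):
    print(row)   # only the valid account (1, 12000) is stored
```

The point is that the constraint lives in the database definition, so every application and every user interface is subject to it automatically.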

 


Wednesday, 22 July 2020

DBMS vs FPS

Differences between DBMS and FPS are as follows:


Fig: DBMS vs FPS comparison table

Features and Limitations of Traditional File Processing System (FPS)

Traditional File Processing System (FPS):

 

A file system is a method for storing and organizing computer files and the data they contain so that they are easy to find and access. It may use a storage device such as a hard disk or CD-ROM and involves maintaining the physical location of the files. A typical example of a file processing system is one in which each department stores and manages its data in its own set of files. This often results in data redundancy and data isolation.

 

These files are stored permanently using a conventional operating system, and application programs are created independently to access the data in these files. For example, consider a bank that keeps information about all its customers and savings accounts. One way to keep the information on a computer is to store it in operating system files.

 

To allow users to manipulate the information, the system has a number of application programs, which include:

·         program to debit or credit an account

·         program to add a new account

·         program to find the balance of an account

·         program to generate monthly statements

 

System programmers wrote these application programs to meet the needs of the bank. But as new needs arise, new application programs are added to the system. For example, suppose the bank decides to offer checking accounts. The bank then creates new permanent files containing information about all the checking accounts it maintains, and it may have to write new application programs to deal with situations that do not arise in savings accounts, such as overdrafts.

 

The system acquires more files and more application programs from time to time.  This typical file processing system is supported by a conventional operating system.

The system stores permanent records in various files, and it needs different application programs to extract records from, and add records to, the appropriate files, as the sketch below illustrates.
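As a rough sketch of one such application program, the code below finds the balance of an account stored in a flat file. The file name, the record layout (account number, owner, balance separated by commas) and the sample records are assumptions for illustration; every file format would need its own program like this, which is exactly the inflexibility listed among the limitations below.

```python
# Minimal "find the balance of an account" program over a flat file.
# File name and record layout (acc_no,owner,balance) are made-up assumptions.
import csv

ACCOUNTS_FILE = "savings_accounts.txt"


def find_balance(acc_no):
    """Scan the file sequentially and return the balance, or None if absent."""
    with open(ACCOUNTS_FILE, newline="") as f:
        for record in csv.reader(f):
            if record and record[0] == acc_no:
                return float(record[2])
    return None


if __name__ == "__main__":
    # Create a small sample data file so the example is self-contained.
    with open(ACCOUNTS_FILE, "w", newline="") as f:
        csv.writer(f).writerows([["101", "A. Kumar", "7500.00"],
                                 ["102", "B. Shah", "12000.00"]])
    print(find_balance("102"))   # -> 12000.0
```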

 

Limitations:

 

1.   It is difficult to retrieve information using a conventional file processing system.

2.   Getting the exact result matching the query is difficult.

3.   Data duplication:

·     In many cases, the same information is stored in more than one file. This duplication wastes resources: it costs time and money to enter the data more than once, and it takes up additional storage space in the system. Duplication can also lead to data that is no longer consistent.

4.    Separated and isolated data:

·      To make a decision, a user might need data from two or more separate files. Analysts and programmers had to evaluate these files to determine the specific data required from each, and applications were then written in a programming language to process and extract the needed data.

·      It is difficult to write new application programs to retrieve the appropriate data, because the data is scattered across various files that may be in different formats.

5.    Data security :

·       Data security is low, as data maintained in flat files is easily accessible.

6.    Data dependence:

·     In an FPS, files and records are described by specific physical formats that are coded into the application programs by programmers. If the format of a certain record is changed, the code in every program that uses that format must be updated.

·   Moreover, changes in storage structure or access methods could greatly affect the processing or results of an application.

7.    Data inflexibility:

·   Program and data inter-dependency and data isolation limited the flexibility of file processing systems in providing users with the results of information requests.

8.    Incompatible file formats:

·   The structure of a file depends on the application programming language. For example, the structure of a file generated by a COBOL program may differ from that of a file generated by a C program. The incompatibility of such files makes combined processing difficult.

9.    Concurrency problems:

·     Concurrency means that multiple users access the same piece of data in the same time interval. When two or more users only read the data simultaneously there is no problem, but when they try to write or update a file simultaneously, serious problems can result.

10.  Integrity problems:

·       The data values may need to satisfy certain integrity constraints; for example, the balance field value must be greater than 5000. In a database we can declare such integrity constraints along with the data definition itself, whereas in an FPS they have to be enforced separately by every application program.

11.  Atomicity problems:

·    It is difficult to ensure atomicity in an FPS. For example, while transferring $100 from account A to account B, if a failure occurs during execution, the $100 may be deducted from account A but never credited to account B (a sketch of how a DBMS avoids this follows).
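For contrast, the sketch below shows how a DBMS keeps the transfer atomic: both updates are wrapped in one transaction, so a failure rolls everything back. Python's sqlite3 module stands in for a full DBMS here; the table, account names and amounts are made up for the example.

```python
# Minimal sketch: an atomic transfer of $100 using a DBMS transaction.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE account (name TEXT PRIMARY KEY, balance INTEGER)")
conn.executemany("INSERT INTO account VALUES (?, ?)", [("A", 500), ("B", 300)])
conn.commit()

try:
    with conn:   # one transaction: commit on success, roll back on any error
        conn.execute("UPDATE account SET balance = balance - 100 WHERE name = 'A'")
        # ... the credit to account B would follow here, but a failure strikes first:
        raise RuntimeError("simulated crash mid-transfer")
except RuntimeError:
    pass

# The partial debit was rolled back, so both balances are unchanged.
print(list(conn.execute("SELECT name, balance FROM account")))
# -> [('A', 500), ('B', 300)]
```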


Tuesday, 21 July 2020

Operating System Goals

System Goals:

 

It is easier to define an operating system by what it does than by what it is. The primary goal of some operating systems is convenience for the user. Operating systems exist because they are supposed to make it easier to compute with them than without them.

 

This view is particularly clear when you look at operating systems for small PCs. The primary goal of other operating systems is efficient operation of the computer system. This is the case for large, shared, multiuser systems. These systems are expensive, so it is desirable to make them as efficient as possible.

 

These two goals, convenience and efficiency, are sometimes contradictory. In the past, efficiency was often more important than convenience, and thus much of OS theory concentrates on optimal use of computing resources. Operating systems have also evolved over time; graphical user interfaces (GUIs) were added to make them more convenient for users while still concentrating on efficiency.

 

The design of an operating system is a complex task. Operating systems and computer architecture have influenced each other a great deal.  To facilitate the use of the hardware, researchers developed operating systems. Users of the OS then proposed changes in hardware design to simplify them.

           

Fig: Relation between OS and Computer Architecture


User View and System View in OS

User View:

The user view of the computer varies according to the interface being used. Most computer users sit in front of a PC consisting of a monitor, keyboard, mouse and system unit. Such a system is designed for one user to monopolize its resources, to maximize the work that the user is performing. In this case, the operating system is designed mostly for ease of use, with some attention paid to performance and none paid to resource utilization.

 

Some users sit at a terminal connected to a mainframe or minicomputer. Other users are accessing the same computer through other terminals.  These users share resources and may exchange information.  The operating system is designed to maximize resource utilization- to ensure that all available CPU time, memory, and IO are used efficiently and that no individual user takes more than his/her fair share.

  

Other users sit at workstations connected to networks of other workstations and servers. These users have dedicated resources at their disposal, but they also share resources such as networking and servers (file, compute and print servers). Therefore, their operating system is designed to compromise between individual usability and resource utilization.

 

Recently, many varieties of handheld computers have come into fashion. These devices are mostly standalone, used singly by individual users. Some are connected to networks, either directly by wire or through wireless modems. Due to power and interface limitations they perform relatively few remote operations. Their operating systems are designed mostly for individual usability, but performance per unit of battery life is important as well.

 

Some computers have little or no user view. For example, embedded computers in home devices and automobiles may have a numeric keypad and may turn indicator lights on or off to show status, but mostly they and their operating systems are designed to run without user intervention.

  

System View:

 

·         OS as a Resource Manager:

From the computer’s point of view, the operating system is the program most intimately involved with the hardware. We can view an operating system as a resource allocator. A computer system has many resources, hardware and software, that may be required to solve a problem: CPU time, memory space, file storage, IO devices and so on. The operating system acts as the manager of these resources.

 

·         OS as a Control Program:

 An operating system is a control program. A control program manages execution of user programs to prevent errors and improper use of the computer. It is especially concerned with the operation and control of IO devices.

 

A more common definition is that the operating system is the one program running at all times on the computer usually called the kernel, with all else being application programs.

 

 

Fig: Various resources of a computer system

 

 


What is an Operating System (OS)?


Operating System (OS):

An operating system (OS) is a program that manages the computer hardware. It also provides a basis for application programs and acts as an intermediary between a user of a computer and the computer hardware. OS is an important part of every computer system.

 

 A computer system can be divided roughly into four components. These are:

1.  Hardware – the CPU, memory and the input/output devices provide the basic computing resources.

2.  Operating system – it controls and coordinates the use of the hardware among the various application programs for the various users.

3.   Application programs – such as word processors, spreadsheets, compilers and web browsers, define the ways in which these resources are used to solve the computing problems of the users.

4.     Users – people who interact with the system.

 

The following fig. shows an abstract view of a computer system.


Fig: Abstract view of a computer system



An OS is similar to a government. Like a government, it performs no useful function by itself; it simply provides an environment within which other programs can do useful work. Operating systems can be explored from two viewpoints: the user and the system.

 

A more common definition is that the operating system is the one program running at all times on the computer usually called the kernel, with all else being application programs.