Wednesday 22 July 2020

Features and Limitations of Traditional File Processing System (FPS)

Traditional File Processing System (FPS):

 

A file system is a method for storing and organizing computer files and the data they contain so that they are easy to find and access. It may use a storage device such as a hard disk or CD-ROM and involves maintaining the physical location of the files. A typical example of a file processing system is one in which each department stores and manages its data in its own set of files, which often results in data redundancy and data isolation.

 

These files are stored on permanent storage managed by a conventional operating system, and application programs are created independently to access the data in them. For example, consider a bank that keeps information about all its customers and savings accounts. One way to keep this information on a computer is to store it in operating system files.

 

To allow users to manipulate the information, the system has a number of application programs, which include:

·         program to debit or credit an account

·         program to add a new account

·         program to find the balance of an account

·         program to generate monthly statements
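As a concrete illustration, the "find the balance of an account" program above might look like the following Python sketch. The file name, record layout, and account numbers are hypothetical, used only to show how such a program scans a flat file record by record.

```python
# Hypothetical flat-file layout: each line of "accounts.txt" holds one
# comma-separated record of the form account_no,name,balance.

def find_balance(path, account_no):
    """Scan the whole file line by line to find one account's balance."""
    with open(path) as f:
        for line in f:
            acc, name, balance = line.strip().split(",")
            if acc == account_no:
                return float(balance)
    return None  # account not found

# Build a small sample data file, then query it.
with open("accounts.txt", "w") as f:
    f.write("A101,Alice,5000\n")
    f.write("A102,Bob,7200\n")

print(find_balance("accounts.txt", "A102"))  # 7200.0
```

Note that even this simple query must read the file sequentially; every new kind of query means another hand-written program like this one.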

 

System programmers wrote these application programs to meet the needs of the bank, and as new needs arise, new application programs are added to the system. For example, suppose the bank decides to offer checking accounts. It then creates new permanent files containing information about all the checking accounts it maintains, and it may have to write new application programs to handle situations that do not arise with savings accounts, such as overdrafts.

 

Over time, the system acquires more files and more application programs. This typical file processing system is supported by a conventional operating system.

The system stores permanent records in various files, and it needs different application programs to extract records from, and add records to, the appropriate files.

 

Limitations:

 

1.   It is difficult to retrieve information using a conventional file processing system.

2.   Getting the exact result matching a query is difficult.

3.   Data duplication:

·     In many cases, the same information is stored in more than one file. This duplication is a waste of resources: it costs time and money to enter the data more than once, and it occupies additional storage space. Worse, when one copy is updated and the others are not, duplication leads to data that is no longer consistent.
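The inconsistency risk can be sketched in a few lines. Here two department files (modelled as hypothetical dictionaries) hold duplicate copies of a customer's address; updating only one copy leaves the data inconsistent.

```python
# Hypothetical sketch: the same customer address stored by two departments.
savings = {"A101": {"name": "Alice", "address": "12 Oak St"}}
loans   = {"A101": {"name": "Alice", "address": "12 Oak St"}}

# The customer moves; only the savings department updates its file.
savings["A101"]["address"] = "98 Elm Ave"

print(savings["A101"]["address"])  # 98 Elm Ave
print(loans["A101"]["address"])    # 12 Oak St  -- stale duplicate
```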

4.    Separated and isolated data:

·      To make a decision, a user might need data from two or more separate files. Analysts and programmers had to determine the specific data required from each file, then write applications in a programming language to process and extract the needed data.

·      It is difficult to write new application programs to retrieve the appropriate data, because the data is scattered across various files that may be in different formats.
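For instance, a question such as "which customers hold a balance over 6000, and at which branch?" forces a hand-written join across two files. The file contents below are hypothetical; the point is that the correlation logic lives in application code rather than in a query language.

```python
# Hypothetical sketch: joining two separate files by hand.
accounts = [  # from the accounts file: (account_no, customer, balance)
    ("A101", "Alice", 5000.0),
    ("A102", "Bob", 7200.0),
]
branches = {  # from the branches file: account_no -> branch name
    "A101": "Downtown",
    "A102": "Uptown",
}

# The "join" must be coded explicitly for this one query.
result = [(cust, branches[acc]) for acc, cust, bal in accounts if bal > 6000]
print(result)  # [('Bob', 'Uptown')]
```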

5.    Data security:

·       Data security is low, as data maintained in flat files is easily accessible.

6.    Data dependence:

·     In FPS, files and records were described by specific physical formats that programmers coded into the application programs. If the format of a record changed, every program that accessed files in that format had to be updated.

·   Moreover, changes in storage structure or access methods could greatly affect the processing or results of an application.
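Data dependence can be seen in a sketch like the following, where a hypothetical fixed-width record layout is hard-coded as slice offsets. If the layout changes (say, the balance field widens), these offsets silently break, and every program containing them must be edited.

```python
# Hypothetical fixed-width layout: account(4) + name(10) + balance(6).
RECORD = "A101Alice     005000"

# The physical format is baked into the program as slice offsets.
account = RECORD[0:4]
name    = RECORD[4:14].strip()
balance = int(RECORD[14:20])

print(account, name, balance)  # A101 Alice 5000
```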

7.    Data inflexibility:

·   Program and data inter-dependency and data isolation limited the flexibility of file processing systems in providing users with the results of information request.

8.    Incompatible file formats:

·   The structures of files are dependent on the application programming language. For example, the structure of a file generated by a COBOL program may differ from the structure of a file generated by a C program. The direct incompatibility of such files makes combined processing difficult.
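A sketch of the incompatibility, using two hypothetical encodings of the same logical record: a fixed-width layout (as a COBOL program might write) and a comma-separated one. Combined processing first requires a custom parser per format.

```python
# Two incompatible encodings of the same logical record (hypothetical).
fixed_width = "A101Alice     005000"  # account(4) + name(10) + balance(6)
csv_line    = "A102,Bob,7200"

def parse_fixed(rec):
    """Parse a fixed-width record into (account, name, balance)."""
    return rec[0:4], rec[4:14].strip(), float(rec[14:20])

def parse_csv(rec):
    """Parse a comma-separated record into (account, name, balance)."""
    acc, name, bal = rec.split(",")
    return acc, name, float(bal)

# Each format needs its own converter before the data can be combined.
records = [parse_fixed(fixed_width), parse_csv(csv_line)]
print(records)  # [('A101', 'Alice', 5000.0), ('A102', 'Bob', 7200.0)]
```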

9.    Concurrency problems:

·     Concurrency arises when multiple users access the same piece of data at the same time. When two or more users read the data simultaneously there is no problem, but when two or more try to write or update a file simultaneously, the result is a serious problem.
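The classic failure here is the "lost update", sketched below with two hypothetical tellers interleaved sequentially for clarity: each reads the same balance, computes a new value, and writes it back, so the second write silently discards the first deposit.

```python
# Hypothetical sketch of a lost update (interleaving shown sequentially).
balance = 1000.0

teller1_view = balance          # teller 1 reads 1000
teller2_view = balance          # teller 2 also reads 1000

balance = teller1_view + 100    # teller 1 deposits 100 -> writes 1100
balance = teller2_view + 50     # teller 2 deposits 50  -> writes 1050

print(balance)  # 1050.0 instead of the correct 1150.0 -- one deposit lost
```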

10.  Integrity problems:

·       Data values may need to satisfy integrity constraints; for example, a balance field value must stay above 5000. In an FPS, such constraints are buried in application code and must be re-implemented in every program that updates the file, whereas in a database they can be declared along with the data definition itself.
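In code, the minimum-balance rule ends up duplicated across programs, as in this hypothetical sketch; a DBMS would instead enforce it once, declaratively, in the schema.

```python
# Hypothetical sketch: the constraint lives in application code, and every
# program that modifies balances must repeat this same check.
MIN_BALANCE = 5000.0

def withdraw(balance, amount):
    """Withdraw, enforcing the minimum-balance rule in application code."""
    if balance - amount < MIN_BALANCE:
        raise ValueError("balance would fall below the minimum")
    return balance - amount

print(withdraw(8000.0, 2000.0))  # 6000.0
```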

11.  Atomicity problems:

·    It is difficult to ensure atomicity in an FPS. For example, while transferring $100 from account A to account B, if a failure occurs during execution, $100 may be deducted from account A but never credited to account B.
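The failure mode can be sketched as two separate file updates with a simulated crash between them (the account names and values are hypothetical). Because neither step is part of a transaction, nothing rolls the debit back and the $100 vanishes.

```python
# Hypothetical sketch: a non-atomic transfer that crashes mid-way.
accounts = {"A": 500.0, "B": 300.0}

def transfer(src, dst, amount, crash=False):
    accounts[src] -= amount    # step 1: debit source account
    if crash:
        raise RuntimeError("system failure mid-transfer")
    accounts[dst] += amount    # step 2: credit destination (never reached)

try:
    transfer("A", "B", 100.0, crash=True)
except RuntimeError:
    pass  # in a real FPS, the debit simply remains on disk

print(accounts)  # {'A': 400.0, 'B': 300.0} -- the $100 is lost
```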

