The earliest business computers were used to process business records and produce information. They were generally faster and more accurate than the equivalent manual systems. These systems stored groups of records in separate files, and so they were called file processing systems. Although file processing systems were a great improvement over manual systems, they have the following limitations:
Data is separated and isolated.
Data is often duplicated.
Application programs are dependent on file formats.
It is difficult to represent complex objects.
Data is separated and isolated. Recall that as the marketing manager you needed to relate sales data to customer data. Somehow you must extract data from both the CUSTOMER and ORDER files and combine it into a single file for processing. To do this, programmers determine which parts of each file are needed, work out how the files are related to one another, and finally coordinate the processing of the files so that the correct data is extracted. This data is then used to produce the information. Imagine the problems of extracting data from ten or fifteen files instead of just two!

Data is often duplicated. In the record club example, a member's name, address, and membership number are stored in both files. Although this duplicated data wastes a small amount of file space, that is not the most serious problem. The more serious problem concerns data integrity. A collection of data has integrity if the data is logically consistent. This means, in part, that duplicated data items agree with one another. Poor data integrity often develops in file processing systems. If a member were to change his or her name or address, then every file containing that data would need to be updated. The danger lies in the risk that some files will not be updated, causing discrepancies between the files. Data integrity problems are serious. If data items differ, inconsistent results will be produced. A report from one application might disagree with a report from another. At least one of them will be incorrect, but who can tell which? When this happens, the credibility of the stored data comes into question.

Application programs are dependent on file formats.
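The manual "join" described above can be sketched in a few lines. This is a minimal illustration, not code from the original text; the file contents, field layouts, and the ORDER.customer_id → CUSTOMER.customer_id relationship are all hypothetical assumptions standing in for the CUSTOMER and ORDER files.

```python
# Hypothetical flat-file contents standing in for the CUSTOMER and
# ORDER files described in the text (assumed field layouts).
import csv
import io

# Assumed CUSTOMER file: customer_id, name, city
customer_data = """C01,Ann Smith,Dallas
C02,Bob Jones,Austin
"""

# Assumed ORDER file: order_id, customer_id, amount
order_data = """O1,C01,250.00
O2,C02,75.50
O3,C01,120.00
"""

# Step 1: the programmer decides which fields of each file are needed
# and how the files relate (here: ORDER.customer_id -> CUSTOMER.customer_id).
customers = {row[0]: row[1] for row in csv.reader(io.StringIO(customer_data))}

# Step 2: the programmer coordinates the processing of both files to
# combine them into a single result for reporting.
report = []
for order_id, cust_id, amount in csv.reader(io.StringIO(order_data)):
    report.append((order_id, customers[cust_id], float(amount)))

for line in report:
    print(line)
```

Every new report that crosses file boundaries requires this kind of hand-written coordination, which is exactly the burden the text describes; with ten or fifteen files, the bookkeeping multiplies accordingly.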
In file processing systems, the physical formats of files and records are embedded in the application programs that process the files. In COBOL, for instance, file formats are written in the DATA DIVISION. The problem with this arrangement is that changes in file formats require program updates. For example, if the CUSTOMER record were modified to expand the Zipcode field from five to nine digits, every program that uses the CUSTOMER record would need to be modified, even programs that do not use the Zipcode field. There might be twenty programs that process the CUSTOMER file. A change like this one means that a programmer must identify all of the affected programs, then modify and retest them. This is both time-consuming and error-prone. It is also very frustrating to have to modify programs that do not even use the field whose format changed.

It is difficult to represent complex objects using file processing systems. This last weakness of file processing systems may seem a bit theoretical, but it is an important disadvantage.
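The format-dependence problem can be made concrete with fixed-width records. The sketch below is hypothetical (the record layout and sample data are assumptions, and Python's struct module stands in for a COBOL DATA DIVISION), but it shows the same failure mode: every program hard-codes the field widths, so widening Zipcode from five to nine digits forces changes to all of them.

```python
# Each program that reads the CUSTOMER file hard-codes the physical
# record layout, just as a COBOL program does in its DATA DIVISION.
import struct

# Assumed old CUSTOMER record layout: 10-byte name, 5-byte zipcode.
OLD_LAYOUT = struct.Struct("10s5s")

old_record = b"Ann Smith 75201"            # 15 bytes, matches OLD_LAYOUT
name, zipcode = OLD_LAYOUT.unpack(old_record)
print(name.decode().strip(), zipcode.decode())

# After the format change, records carry 9-digit zipcodes. A program
# still compiled with OLD_LAYOUT can no longer unpack the new records
# (the sizes disagree), even if it never uses the zipcode field.
NEW_LAYOUT = struct.Struct("10s9s")
new_record = b"Ann Smith 752014216"        # 19 bytes, matches NEW_LAYOUT

try:
    OLD_LAYOUT.unpack(new_record)          # old program, new data
except struct.error as exc:
    print("old program breaks:", exc)

name, zipcode = NEW_LAYOUT.unpack(new_record)   # every program must be updated
```

A database system avoids this by keeping the physical format in one place (the schema) instead of duplicating it across every program.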