Data integrity is the maintenance of, and the assurance of, data accuracy and consistency over its entire life-cycle, and is a critical aspect of the design, implementation, and usage of any system that stores, processes, or retrieves data. The term is broad in scope and may have widely different meanings depending on the specific context, even under the same general umbrella of computing. It is at times used as a proxy term for data quality, while data validation is a prerequisite for data integrity.

Data integrity is the opposite of data corruption. The overall intent of any data integrity technique is the same: ensure data is recorded exactly as intended (such as a database correctly rejecting mutually exclusive possibilities) and, upon later retrieval, ensure the data is the same as when it was originally recorded. In short, data integrity aims to prevent unintentional changes to information. Data integrity is not to be confused with data security, the discipline of protecting data from unauthorized parties.

Any unintended change to data as the result of a storage, retrieval, or processing operation, including malicious intent, unexpected hardware failure, and human error, is a failure of data integrity. If the changes are the result of unauthorized access, it may also be a failure of data security.

Physical integrity deals with challenges associated with correctly storing and fetching the data itself. Challenges with physical integrity may include electromechanical faults, design flaws, material fatigue, corrosion, power outages, natural disasters, and other environmental hazards such as ionizing radiation, extreme temperatures, pressures, and g-forces. Depending on the data involved, the consequences can range from the benign, such as a single pixel in an image appearing a different color than was originally recorded, to the loss of vacation pictures or a business-critical database, to catastrophic loss of human life in a life-critical system.

Ensuring physical integrity includes methods such as redundant hardware, an uninterruptible power supply, certain types of RAID arrays, radiation-hardened chips, error-correcting memory, use of a clustered file system, file systems that employ block-level checksums such as ZFS, storage arrays that compute parity with exclusive or (XOR) or use a cryptographic hash function, and even a watchdog timer on critical subsystems. Physical integrity often makes extensive use of error-detecting algorithms known as error-correcting codes. Human-induced data integrity errors are often detected through the use of simpler checks and algorithms, such as the Damm algorithm or the Luhn algorithm. Computer-induced transcription errors can be detected through hash functions, which are used to maintain data integrity after manual transcription from one computer system to another by a human intermediary.

In production systems, these techniques are used together to ensure various degrees of data integrity. For example, a computer file system may be configured on a fault-tolerant RAID array, but might not provide block-level checksums to detect and prevent silent data corruption.
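The use of hash functions to catch transcription errors can be sketched briefly. The example below is a minimal illustration in Python, assuming SHA-256 as the hash and an illustrative record format; the function name `fingerprint` and the sample data are hypothetical, not part of any particular system.

```python
import hashlib

def fingerprint(data: bytes) -> str:
    """Return a SHA-256 digest that can be compared before and after transcription."""
    return hashlib.sha256(data).hexdigest()

# The digest of the source record travels alongside the data; after a human
# re-enters the record on the target system, recomputing the digest reveals
# any transcription error.
source = b"ACCT-4417,2024-03-01,1532.50"
retyped_ok = b"ACCT-4417,2024-03-01,1532.50"
retyped_bad = b"ACCT-4417,2024-03-01,1532.05"  # transposed digits

print(fingerprint(source) == fingerprint(retyped_ok))   # True: faithful copy
print(fingerprint(source) == fingerprint(retyped_bad))  # False: error detected
```

Any cryptographic hash would serve here; the point is that even a one-character difference produces a completely different digest.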
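The Luhn algorithm mentioned above is simple enough to show in full. A minimal Python sketch follows; the check doubles every second digit from the right and tests the sum modulo 10, which catches any single-digit typo and most adjacent transpositions.

```python
def luhn_valid(number: str) -> bool:
    """Check a numeric string against the Luhn formula (used on e.g. card numbers)."""
    digits = [int(d) for d in number]
    # Double every second digit counting from the right; the left-hand
    # starting index therefore depends on the string's length.
    parity = len(digits) % 2
    total = 0
    for i, d in enumerate(digits):
        if i % 2 == parity:
            d *= 2
            if d > 9:
                d -= 9  # equivalent to summing the two digits of the product
        total += d
    return total % 10 == 0

print(luhn_valid("79927398713"))  # True: standard Luhn test number
print(luhn_valid("79927398711"))  # False: a single-digit typo is caught
```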
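The XOR parity computed by storage arrays can likewise be sketched in a few lines. This is an illustrative Python model of the RAID-style idea, not a real array implementation: the parity block is the byte-wise XOR of equal-sized data blocks, so any one lost block can be rebuilt from the parity and the survivors.

```python
def xor_parity(blocks):
    """Compute a parity block as the byte-wise XOR of equal-sized blocks."""
    parity = bytearray(len(blocks[0]))
    for block in blocks:
        for i, b in enumerate(block):
            parity[i] ^= b
    return bytes(parity)

stripe = [b"data", b"more", b"tape"]
parity = xor_parity(stripe)

# If one block is lost, XOR-ing the parity with the surviving blocks
# reconstructs it, because x ^ x == 0 cancels every surviving block out.
recovered = xor_parity([parity, stripe[0], stripe[2]])
print(recovered)  # b'more'
```

The same cancellation property is why a RAID 5 array tolerates the loss of exactly one disk per stripe.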