Management teams are growing more reliant on the ability to immediately access and quickly sort through massive amounts of data to find the information they need – Data Governance for Financial Institutions

A “data tiedown” is a reliable and cross-checked timestamp that secures a data item or group of data items to one or more fixed times – such as the time of collection, generation, or storage. Data analysis cannot compensate for bad data, and distributed processing makes it easy to generate bad data, whether from new inputs or by corrupting existing data. As transaction rates increase and data sets grow, traditional locking techniques become both more expensive to implement and easier to get wrong. Timestamps can be an important part of data integrity assurance because they impose order on data, even when it is generated or stored by multiple processing elements that are only loosely coupled.

But timestamping is itself a fragile process. One common error mode for networks relying on IEEE 1588 Precision Time Protocol (PTP) for clock synchronization is a loss or incorrect calculation of the number of leap seconds since the epoch. If data integrity depends on timestamps, a sudden jump of 35 or 16 seconds (the full TAI-to-UTC or GPS-to-UTC offset, respectively) can have a devastating impact. So the integrity of the timestamps themselves also needs to be safeguarded.

One way to do that is what TimeKeeper does: track multiple reference time sources so that the clock can be continuously cross-checked against other clocks. The timestamp is then provided with a record of how well it matches secondary, tertiary, or deeper sources. When these all match, the timestamp has high reliability. Errors can also be traced – there is a forensic methodology available to validate time and locate problems. Data is then tied to a timestamp, and the timestamp is tied to a log that verifies its accuracy – the combination is an example of a data tiedown.
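To make the idea concrete, here is a minimal sketch in Python of what a data tiedown record could look like. Everything in it is illustrative, not TimeKeeper's actual interface: the names (`read_reference_clocks`, `tiedown`), the tolerance, and the leap-second check are assumptions, and the reference clocks are stubbed with the local clock where a real deployment would query independent GPS, PTP, and NTP sources.

```python
# A minimal sketch of a "data tiedown": a timestamp cross-checked against
# several reference clocks, bundled with the data's hash and a verification
# log. The reference-clock readers are hypothetical stand-ins; this is an
# illustration of the concept, not TimeKeeper's implementation.

import hashlib
import json
import time
from statistics import median

TOLERANCE = 0.001            # max allowed disagreement between sources (s)
LEAP_OFFSETS = (16.0, 35.0)  # whole-second jumps typical of leap-second errors

def read_reference_clocks():
    """Return {source_name: unix_time} from independent references.
    Stubbed with the local clock; a real reader would query GPS/PTP/NTP."""
    now = time.time()
    return {"gps": now, "ptp": now, "ntp": now}

def tiedown(data: bytes) -> dict:
    """Tie `data` to a timestamp plus a record of how well the clocks agree."""
    refs = read_reference_clocks()
    local = time.time()
    offsets = {name: t - local for name, t in refs.items()}
    spread = max(offsets.values()) - min(offsets.values())

    # Flag any source that disagrees by a whole leap-second-style offset:
    # that pattern suggests a lost or miscounted TAI/GPS-to-UTC correction.
    suspects = [name for name, off in offsets.items()
                if any(abs(abs(off) - leap) < TOLERANCE for leap in LEAP_OFFSETS)]

    return {
        "sha256": hashlib.sha256(data).hexdigest(),
        "timestamp": local + median(offsets.values()),  # cross-checked time
        "source_offsets": offsets,
        "spread": spread,
        "agreed": spread <= TOLERANCE and not suspects,
        "leap_suspects": suspects,
    }

if __name__ == "__main__":
    print(json.dumps(tiedown(b"trade#1842 AAPL 100@187.25"), indent=2))
```

The design point is that the verification record travels with the data: a consumer can later check not only when the item was stamped, but how well the independent clocks agreed at that moment and whether any source showed the whole-second jump characteristic of a leap-second miscount.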

