Data Quality
Do you have it?

Everybody wants it. Some have it. Most don’t. Data quality is very hard for organizations to define, measure, monitor, and improve. But it matters because inaccurate information can lead to bad business decisions and ultimately affect the bottom line.

Addressing data quality requires an enterprise approach. A company’s data and its systems are distributed. The data comes in every conceivable format. Some systems process huge numbers of records or transactions each day. Some systems are batch, some are real-time. Many are built with Ab Initio. Most are legacy. Only a general-purpose approach can address data quality in such a wide variety of situations.

Ab Initio’s approach to this challenge is comprehensive. It is based on a powerful, customizable design pattern for an end-to-end data quality program. The steps in this design pattern are:

  • Problem detection and correction are integrated into the processing pipeline of applications across the enterprise. The Co>Operating System makes this easy – it can be deployed on practically any platform, can scale to any data volume or transaction rate, and can process any kind of data. It comes with a library of standard validation rules and can graphically express complex data validation and cleansing rules. The Business Rules Environment allows business analysts to specify and test their rules in a spreadsheet-like environment. All of this logic can be integrated directly into existing systems, whether or not they were built with Ab Initio. (A generic sketch of this kind of in-pipeline validation follows this list.)
  • Issue reporting is handled by Ab Initio’s Enterprise Meta>Environment (EME). The EME collects statistics from data profiling and data validation and computes data quality metrics. It then provides a single point for data quality reporting, combining data-level statistics and metrics into data quality dashboards. System lineage diagrams include data quality metrics, so the source of a data quality problem can be identified graphically. (The second sketch below shows the kind of aggregates such a reporting layer computes.)
  • Quality monitoring is performed by the Data Profiler. It can reveal issues with the contents of datasets, including data values, distributions, and relationships. Running the Data Profiler operationally allows subtle changes in data distributions to be detected and studied. (The third sketch below illustrates this kind of drift check.)
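
The Co>Operating System and Business Rules Environment are graphical, proprietary tools, so the following is only a minimal, generic Python sketch of the pattern the first bullet describes: a table of named validation rules applied to every record in a stream, with failing records routed to a correction path. The rule names and record fields here are invented for illustration.

    # Sketch of rule-based validation inside a processing pipeline.
    # The rules and fields are hypothetical; a real deployment would use
    # Ab Initio's graphical components rather than hand-written Python.
    import re

    RULES = {
        "customer_id is numeric": lambda r: str(r.get("customer_id", "")).isdigit(),
        "amount is non-negative": lambda r: r.get("amount", 0) >= 0,
        "email looks valid": lambda r: re.match(r"[^@\s]+@[^@\s]+\.[^@\s]+",
                                                r.get("email", "")) is not None,
    }

    def validate(record):
        """Return the list of rule names this record fails."""
        return [name for name, rule in RULES.items() if not rule(record)]

    def pipeline(records):
        """Split a record stream into clean records and flagged records."""
        clean, flagged = [], []
        for record in records:
            failures = validate(record)
            if failures:
                flagged.append((record, failures))  # route to correction/reporting
            else:
                clean.append(record)                # continue downstream
        return clean, flagged

    clean, flagged = pipeline([
        {"customer_id": "1001", "amount": 25.00, "email": "a@example.com"},
        {"customer_id": "10x2", "amount": -5.00, "email": "not-an-email"},
    ])
    print(len(clean), "clean;", len(flagged), "flagged")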
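Likewise, the EME’s metrics and dashboards are its own; this second sketch only shows, under assumed metric names, the kind of aggregate a reporting layer might compute from per-rule failure counts.

    # Sketch of turning per-rule failure counts into data quality metrics,
    # the kind of aggregates a reporting dashboard might display.
    # The metric names and the "overall" rollup are illustrative, not the EME's.
    def quality_metrics(total_records, failures_by_rule):
        """Compute a pass rate per rule and a worst-case overall score."""
        metrics = {}
        for rule, failed in failures_by_rule.items():
            metrics[rule] = 1.0 - failed / total_records
        metrics["overall"] = min(metrics.values()) if metrics else 1.0
        return metrics

    m = quality_metrics(10_000, {"customer_id is numeric": 37,
                                 "amount is non-negative": 4})
    for rule, score in m.items():
        print(f"{rule}: {score:.2%}")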
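And for operational profiling, a drift check can be as simple as comparing today’s value distribution for a field against a stored baseline. The Data Profiler gathers far richer statistics than this; the total variation measure and the threshold below are illustrative assumptions.

    # Sketch of operational profiling: compare today's value distribution for a
    # field against a stored baseline and flag drift.
    from collections import Counter

    def distribution(values):
        """Relative frequency of each distinct value."""
        counts = Counter(values)
        total = sum(counts.values())
        return {v: c / total for v, c in counts.items()}

    def total_variation(p, q):
        """Half the L1 distance between two discrete distributions."""
        keys = set(p) | set(q)
        return 0.5 * sum(abs(p.get(k, 0.0) - q.get(k, 0.0)) for k in keys)

    baseline = distribution(["US"] * 80 + ["CA"] * 15 + ["MX"] * 5)
    today    = distribution(["US"] * 60 + ["CA"] * 15 + ["MX"] * 25)

    drift = total_variation(baseline, today)
    if drift > 0.10:  # illustrative threshold
        print(f"distribution shift detected: TV distance = {drift:.2f}")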

Ab Initio’s data quality design pattern is built from a set of powerful, reusable building blocks that can be customized and integrated into all aspects of a production environment. Because they all run on the Co>Operating System, they can be deployed across the enterprise.

Finally, Ab Initio technology supports collecting, formatting, and storing information about individual data quality problems, both for root cause analysis and to inform the design of solutions to rare, “one of a kind” issues as well as systemic data quality problems. A sketch of such a structured issue record follows.
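
This is a minimal sketch of capturing one data quality problem as a structured issue record, assuming a JSON-lines store and invented field names; the actual record format and storage are Ab Initio’s own.

    # Sketch of capturing individual data quality problems as structured issue
    # records so they can be aggregated later for root cause analysis.
    # The fields and the JSON-lines store are assumptions for illustration.
    import json, datetime

    def record_issue(store_path, source_system, rule, record_key, detail):
        """Append one data quality issue to a JSON-lines issue store."""
        issue = {
            "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
            "source_system": source_system,  # where the bad record came from
            "rule": rule,                    # which validation rule failed
            "record_key": record_key,        # key for tracing the record
            "detail": detail,                # the offending value or message
        }
        with open(store_path, "a") as f:
            f.write(json.dumps(issue) + "\n")

    record_issue("dq_issues.jsonl", "billing_feed",
                 "amount is non-negative", "cust-10x2", {"amount": -5.00})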

Learn more about Ab Initio’s approach to Data Quality.
