Simplify Your Data Quality Capture

Using a standardised protocol to identify and record DQ errors means a faster time-to-market for tracking DQ issues.

Identified issues are saved in the system as a query that conforms to an approved API. These queries are then run on a schedule and the results recorded for posterity. This allows us to identify new DQ issues, issues that remain unfixed since the last time the DQ check was run, and the mean time to fix a specific instance of an error.
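As a rough illustration of the idea (the function and field names here are hypothetical, not Overwatch's actual API), comparing two scheduled runs is enough to classify new versus still-unfixed issues and to measure the mean time to fix:

```python
from datetime import datetime

# Each scheduled run is modelled as a mapping of
# business-entity identifier -> timestamp the issue was first seen.

def classify_run(previous, current):
    """Split the current run into new, still-unfixed, and fixed issues."""
    new_issues = {k for k in current if k not in previous}
    unfixed = {k for k in current if k in previous}
    fixed = {k for k in previous if k not in current}
    return new_issues, unfixed, fixed

def mean_time_to_fix(first_seen, fixed_at, fixed_ids):
    """Average days between an issue first appearing and it disappearing."""
    durations = [(fixed_at - first_seen[k]).days for k in fixed_ids]
    return sum(durations) / len(durations) if durations else 0.0

previous = {"CUST-001": datetime(2024, 1, 1), "CUST-002": datetime(2024, 1, 8)}
current = {"CUST-002": datetime(2024, 1, 8), "CUST-003": datetime(2024, 1, 15)}

new, unfixed, fixed = classify_run(previous, current)
print(new)      # {'CUST-003'}
print(unfixed)  # {'CUST-002'}
print(mean_time_to_fix(previous, datetime(2024, 1, 15), fixed))  # 14.0
```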

Because this data is captured in a standard format, we can run system-level reports that give a visual representation of the key metrics, allowing us to demonstrate to key stakeholders that DQ issues are being identified and addressed proactively.

Leverage discovery into DQ reports

The hardest part of DQ management is getting the right query: one that isolates the issues and provides the relevant context.

Once you have that rich query, you can strip it down into the data-capture element for the API and the data-enrichment element that provides context and targets for notification.

Using the power of our component architecture, issues relating to the same business entity can share the same data-enrichment queries.

DRY principles with component architecture

One of the keys to efficient development is the DRY principle: Don't Repeat Yourself.

Project Overwatch achieves this by breaking the capture, analysis and dissemination of DQ metrics down into component parts, which can then be assembled back together as required.

  • One identifier for a business entity
  • One query to capture the DQ issue using the entity identifier
  • Reusable queries to enrich the DQ entity for context and distribution
  • Reusable templates to prepare the email for DQ advice
  • A custom ruleset to match the query fields with the email template fields for data merge
  • Reusable schedules to trigger DQ capture and Email generation
  • Bespoke options available when business rules are too complex for a simple custom field match
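The component parts above can be pictured as in the following sketch (the class and field names are illustrative, not Overwatch's actual model); note how one enrichment query is reused by two different checks on the same business entity:

```python
from dataclasses import dataclass, field

@dataclass(frozen=True)
class EnrichmentQuery:
    name: str
    sql: str            # keyed on the business-entity identifier

@dataclass(frozen=True)
class EmailTemplate:
    name: str
    fields: tuple       # placeholders matched against query fields at merge

@dataclass
class DQCheck:
    entity_id_column: str           # one identifier per business entity
    capture_sql: str                # one query to capture the DQ issue
    enrichments: list = field(default_factory=list)  # reusable context queries
    template: EmailTemplate = None  # reusable email template
    field_map: dict = field(default_factory=dict)    # ruleset for data merge
    schedule: str = "daily@06:00"   # reusable trigger

# The same enrichment serves any check on the same business entity.
customer_context = EnrichmentQuery(
    "customer_context",
    "SELECT CustomerId, Name, AccountManager FROM dbo.Customer",
)

missing_email = DQCheck(
    "CustomerId",
    "SELECT CustomerId FROM dbo.Customer WHERE Email IS NULL",
    enrichments=[customer_context],
)
stale_address = DQCheck(
    "CustomerId",
    "SELECT CustomerId FROM dbo.Customer WHERE AddressVerified = 0",
    enrichments=[customer_context],
)

print(missing_email.enrichments[0] is stale_address.enrichments[0])  # True
```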

Logging and reporting of historic DQ runs

With multiple schedules, it can be hard to keep track of what ran and when.

Overwatch provides a full historical breakdown of which DQ issues were run in a batch, how long the process took and how many records were created or updated. This metadata can itself be reported on to provide a holistic view of the system and identify trends or performance bottlenecks.
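A small sketch of how such run metadata can be mined, with hypothetical field names, assuming one log row per check per batch:

```python
# Each row records one DQ check executed within a scheduled batch.
batch_log = [
    {"batch": 101, "check": "missing_email", "seconds": 4.2, "created": 12, "updated": 3},
    {"batch": 101, "check": "stale_address", "seconds": 61.0, "created": 0, "updated": 40},
    {"batch": 102, "check": "missing_email", "seconds": 3.9, "created": 2, "updated": 11},
]

def slow_checks(log, threshold_seconds):
    """Flag checks whose runtime suggests a performance bottleneck."""
    return sorted({r["check"] for r in log if r["seconds"] > threshold_seconds})

def records_touched(log, batch):
    """Total records created or updated in one batch run."""
    return sum(r["created"] + r["updated"] for r in log if r["batch"] == batch)

print(slow_checks(batch_log, 30))       # ['stale_address']
print(records_touched(batch_log, 101))  # 55
```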

Easy deployment between test and production environments

Leveraging the power of linked servers and synonyms, tables in remote databases can be mapped to synonyms so that every query reads as if it were against local tables. By avoiding long (four-part) table names and abstracting that information into the synonym, code can be deployed quickly between development environments and production without having to modify the script and invalidate QA testing and sign-off.
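To illustrate (server and database names here are hypothetical), only the synonym definition differs per environment, while the DQ query itself references the synonym and is deployed verbatim:

```python
def synonym_ddl(local_name, server, database, schema, table):
    """Build the T-SQL that maps a remote four-part name to a local-looking synonym."""
    return (f"CREATE SYNONYM dbo.{local_name} "
            f"FOR [{server}].[{database}].[{schema}].[{table}];")

# Only this mapping changes between environments...
dev = synonym_ddl("Customer", "DevLinkedSrv", "CrmDev", "dbo", "Customer")
prod = synonym_ddl("Customer", "ProdLinkedSrv", "Crm", "dbo", "Customer")

# ...while the DQ query never mentions a four-part name, so the exact
# script that passed QA is the one that runs in production.
dq_query = "SELECT CustomerId FROM dbo.Customer WHERE Email IS NULL"

print(dev)
print(prod)
```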

When generating emails in a test environment, the 'real' target audience can be overridden at send time and the emails dispatched to the development or QA team instead, while retaining visibility of who the correct recipients would be in a production environment.
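A minimal sketch of that send-time redirection, assuming an environment flag and a QA override list (both names hypothetical):

```python
ENVIRONMENT = "test"                    # e.g. "test" or "production"
QA_OVERRIDE = ["dq-team@example.com"]   # where non-production mail goes

def resolve_recipients(intended):
    """Redirect mail to QA outside production, but keep the intended list visible."""
    if ENVIRONMENT == "production":
        return {"to": intended, "intended": intended}
    return {"to": QA_OVERRIDE, "intended": intended}

mail = resolve_recipients(["ops@example.com", "finance@example.com"])
print(mail["to"])        # ['dq-team@example.com']
print(mail["intended"])  # ['ops@example.com', 'finance@example.com']
```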

Dynamic management of distribution groups

Keeping your email lists up to date by changing email addresses and names inside the code violates source code control principles and makes changes difficult to deploy. Using a named distribution group (even for a single user) and allowing Overwatch to substitute the group with its current active members means that mailing lists are managed through convention and database entries rather than code changes, so updates do not need to be passed by QA or go through the Change Control Board.

See who is in a group at any one time, which groups a user belongs to, and which emails they have been scheduled to receive as a result of their group membership.
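In outline (group and member names here are invented), membership lives in data rather than code, and both lookups fall out naturally:

```python
# Groups are database rows in practice; a dict stands in for the table here.
groups = {
    "DQ-Finance": ["alice@example.com", "bob@example.com"],
    "DQ-Ops": ["bob@example.com"],
}

def expand_group(name):
    """Substitute a named group with its current active members at send time."""
    return list(groups.get(name, []))

def groups_for(member):
    """Reverse lookup: which groups (and therefore emails) a user receives."""
    return sorted(g for g, members in groups.items() if member in members)

print(expand_group("DQ-Finance"))     # ['alice@example.com', 'bob@example.com']
print(groups_for("bob@example.com"))  # ['DQ-Finance', 'DQ-Ops']
```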