Types of Data in CRM and Identifying Data Quality Issues


CRM software integrates data from multiple departments across the firm to provide a single, up-to-date picture of each individual customer, so that personnel who have direct contact with customers in sales, marketing, and customer service can make rapid, informed decisions on a wide range of topics, including upsells and cross-sells, customer service improvements, and the coordination of sales and marketing campaigns.

Types of Data in CRM

Reference Data and Master Data

In general, master data is not particularly volatile, but it must still be maintained so that it reflects the most current information. Each product whose distinguishing attributes are known in advance is represented by a single record in the master data set. Almost all, if not all, decisions about what the product is, how it is made, how much it should sell for, and so on are settled long before the product is added to the product master file.

For example, the ISO 3166-1 country code list published by the International Organization for Standardization is a relatively simple master file that anyone can understand.
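
To make the distinction concrete, here is a minimal Python sketch of master data as a closed, pre-defined lookup. The SKUs, prices, and the three-country subset of ISO 3166-1 are illustrative only, not a real product file.

    # Minimal sketch: master data as a closed, pre-defined lookup.
    # The country codes are a tiny subset of ISO 3166-1 alpha-2;
    # the PRODUCT_MASTER entries are hypothetical examples.

    ISO_3166_1 = {"US": "United States", "DE": "Germany", "IN": "India"}

    PRODUCT_MASTER = {
        "SKU-1001": {"name": "Widget", "list_price": 19.99},
        "SKU-1002": {"name": "Gadget", "list_price": 49.50},
    }

    def validate_order_line(sku, ship_country):
        # Orders may only reference products and countries that already
        # exist in the master data; anything else is rejected up front.
        if sku not in PRODUCT_MASTER:
            raise ValueError(f"Unknown product {sku}: not in product master")
        if ship_country not in ISO_3166_1:
            raise ValueError(f"Unknown country code {ship_country}")

    validate_order_line("SKU-1001", "DE")    # OK
    # validate_order_line("SKU-9999", "DE")  # would raise: not in the master file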

Reference data differs from master data in that it does not require any prior configuration before a valid entry can be made. Reference data reflects reality, while master data defines it.

For example, most businesses will not accept an order for a product that has not yet been entered into the product file, but they will accept an order from a consumer making a first purchase. The customer's reference record can be created at the same moment the order is placed. Once created, the reference file prevents duplicate entries for the same customer in subsequent transactions; the customer's information is stored in that reference file. Reference data is, as a rule, more volatile than master data.
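
A minimal sketch of that behaviour, assuming (purely for illustration) that the customer reference file is keyed on a normalised e-mail address:

    # Minimal sketch: a customer reference record created "on the fly"
    # at order time, with duplicates prevented on later transactions.

    customer_reference = {}

    def get_or_create_customer(email, name):
        key = email.strip().lower()            # simple normalisation
        if key in customer_reference:
            return customer_reference[key]     # reuse the existing reference record
        record = {"email": key, "name": name}
        customer_reference[key] = record       # created at the moment of the first order
        return record

    first = get_or_create_customer("Ann@Example.com", "Ann Smith")
    again = get_or_create_customer("ann@example.com ", "A. Smith")
    assert first is again                      # the second order reuses the same record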

Transactional Data

Transaction data is highly volatile and extremely time-sensitive, and it must be updated continually to reflect current conditions. As an illustration, every single order (hundreds or thousands each day, depending on volume) generates a new transaction record, typically a collection of records, in the order file.

If a customer changes his or her mind about the quantity of a product before it has been dispatched, the record must be updated to reflect the new amount so that the customer receives what he or she wants. However, it is also vital to know what the original quantity was, because changes in quantity can affect order transactions farther down the line, such as sales rep quotas.

If a sales representative is concerned that his or her quota credit does not match what it should be, the difference between the original and current purchase quantity can explain the mismatch. When an order has been entirely shipped and is no longer active, its transactions are typically withdrawn from the open-order database and archived in a data warehouse, file cabinet, or other offline storage in case they are needed again.
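
The sketch below illustrates the idea with hypothetical field names: the original quantity is kept alongside the current one, and the transaction is moved to an archive once the order ships.

    # Minimal sketch: an order transaction that preserves the original
    # quantity next to the current one, and is archived when shipped.

    open_orders, archive = [], []

    order = {
        "order_id": "SO-42",
        "original_qty": 10,   # what the customer first asked for
        "current_qty": 10,    # what will actually ship
        "status": "open",
    }
    open_orders.append(order)

    # The customer reduces the quantity before dispatch: only current_qty
    # changes, so the original figure is still available to explain quota differences.
    order["current_qty"] = 7

    # Once fully shipped, the transaction leaves the open-order file.
    order["status"] = "shipped"
    open_orders.remove(order)
    archive.append(order)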

Transaction systems are notorious for collecting low-quality data, because perfect accuracy is not required to execute the transaction properly. And because performance indicators for transaction systems depend on characteristics other than quality, little attention is paid to data quality at the point of capture.

Unfortunately, correcting bad data is considerably more expensive than capturing it accurately in the first place, yet the problem is rarely addressed at the source. Most firms today concentrate their early efforts on ensuring that transaction data is cleaned as it is transferred into the data warehouse.
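
As a rough illustration, the following sketch quarantines obviously bad transaction rows on their way into the warehouse; the specific rules (a non-empty customer ID and a positive quantity) are assumptions, not a prescribed standard.

    # Minimal sketch: cleaning transaction rows before the warehouse load.

    def clean_for_warehouse(rows):
        loaded, rejected = [], []
        for row in rows:
            # Trim stray whitespace from string fields first.
            row = {k: v.strip() if isinstance(v, str) else v for k, v in row.items()}
            if not row.get("customer_id") or row.get("qty", 0) <= 0:
                rejected.append(row)          # quarantine instead of loading bad data
            else:
                loaded.append(row)
        return loaded, rejected

    good, bad = clean_for_warehouse([
        {"customer_id": " C-1 ", "qty": 3},
        {"customer_id": "",      "qty": 5},   # fails validation, never reaches the warehouse
    ])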

Warehouse Data

Data warehouses are populated from transaction systems, reference and master files, and other sources. Once loaded, warehouse records should not be modified unless the source transaction changes; rather than updating the original warehouse record, we create a new record that reflects the change. The data warehouse thus keeps track of all transactions and changes over time, allowing businesses to identify patterns and trends in the data.
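
A minimal sketch of this append-only behaviour, with hypothetical field names: a later change to an order arrives as a new warehouse row rather than an update to the old one.

    # Minimal sketch: the warehouse is append-only, so history is preserved.

    from datetime import datetime, timezone

    warehouse = []

    def load_fact(order_id, qty):
        warehouse.append({
            "order_id": order_id,
            "qty": qty,
            "loaded_at": datetime.now(timezone.utc),  # each version keeps its own timestamp
        })

    load_fact("SO-42", 10)   # original state of the order
    load_fact("SO-42", 7)    # the later change arrives as a *new* row, not an update

    # Both versions remain visible for trend analysis.
    history = [row for row in warehouse if row["order_id"] == "SO-42"]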

According to a white paper by Larry English, one of the ten pitfalls to avoid when building a quality warehouse is assuming the source data is correct just because the operational system appears to be functioning properly. The data warehouse serves a different purpose and has different quality requirements than the operational systems that feed it.

Business View Data

The data for the business view is derived from the data warehouse. Business views are calculated summaries of historical information, frequently used to compare patterns across time. Each view is focused on a specific business question, such as how many clients in North America increased their purchases by more than 10% per year over the past three years, or on a particular area such as financial analysis.

Reporting and analysis tools perform significantly better against a consolidated business view than against the complete data warehouse. The data in a business view is never updated in place; each time the view needs refreshing, it is simply regenerated from the data warehouse.
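
The sketch below rebuilds one such view from scratch, answering the North America growth question mentioned above; the customer names, years, and totals are invented purely for illustration.

    # Minimal sketch: a business view regenerated from warehouse data,
    # never updated in place. It answers one narrow question: which North
    # American clients grew purchases by more than 10% in each of the
    # last three years?

    def rebuild_growth_view(yearly_totals, region="North America", years=(2021, 2022, 2023)):
        view = []
        for customer, data in yearly_totals.items():
            if data["region"] != region:
                continue
            totals = [data["totals"].get(y, 0) for y in years]
            grew_every_year = all(
                prev > 0 and (curr - prev) / prev > 0.10
                for prev, curr in zip(totals, totals[1:])
            )
            if grew_every_year:
                view.append(customer)
        return view                    # thrown away and rebuilt on the next refresh

    yearly_totals = {
        "Acme":   {"region": "North America", "totals": {2021: 100, 2022: 115, 2023: 130}},
        "Globex": {"region": "North America", "totals": {2021: 100, 2022: 105, 2023: 140}},
    }
    print(rebuild_growth_view(yearly_totals))   # ['Acme']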


Identifying Data Quality Issues

We understand why it is necessary to maintain the quality of raw material inventory, but why is data quality such an issue in the first place? Like raw material, data can be poor when it arrives from the source, damaged during handling, degraded while sitting in storage, or misused during processing. The following are the most frequently encountered sources of data quality issues:

Customer Data Deterioration (Customers Move and Change Jobs)

The minute one begins entering information into a client database, the quality of the data begins to erode. Approximately ten per cent of customers and businesses physically move every year, rendering their previous contact information obsolete. When the large number of businesspeople who change jobs is taken into account, the rate of deterioration for B2B names can be closer to 30 to 35 per cent per year. Unless a significant investment is made in data management, more than 50% of the names in a B2B database will be wrong in less than two years.
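
A quick back-of-the-envelope check shows how a 30 to 35 per cent annual decay rate compounds to the 50 per cent figure within two years:

    # Compounding decay: the share of records still accurate after two
    # years is (1 - annual_decay) squared.

    for annual_decay in (0.30, 0.35):
        still_good = (1 - annual_decay) ** 2
        print(f"{annual_decay:.0%}/year -> {1 - still_good:.0%} of records wrong after 2 years")

    # 30%/year -> 51% of records wrong after 2 years
    # 35%/year -> 58% of records wrong after 2 years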

Customer data erodes with time: registered customers may change their phone number, email address, or residential address, and many people move from one city to another for their jobs, all of which degrades the quality of the data on file.

Source Data Quality (Database Design, Lack of Standards)

Transactional data sources have widely varying levels of quality, and the data is frequently poor. The people accountable for source transaction data were never judged on its quality beyond what the transaction itself required. For order information, for example, all that mattered was that the product was correctly dispatched and the bill successfully mailed.

It didn’t matter that there were three or four distinct addresses associated with the same client, at least not until firms began to care about knowing their customers and what they wanted. It was far more crucial to get the order in than to get it right the first time. Quality issues have come to the fore as corporations have recognised the gold mine of customer knowledge they have amassed across all of these transaction platforms.

To acquire the much-sought-after 360-degree view of the customer for analytical and planning purposes, they want to integrate all of their data into a single repository. However, because of the lack of standards and because the database design was built to support operational activities, much of the data cannot simply be merged together. The shipping address provided on a purchase, for example, is used to print the mailing label: four or five lines of information whose only job is to get the product to its destination.

The first line may contain the customer’s name, an attention line, or specific shipping instructions to the carrier, among other things. Even though the transaction systems continue to function normally, the customer reference file or data warehouse built from such an inconsistent source ends up full of errors.
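
The toy example below shows why such labels resist merging: the same customer appears under three slightly different first lines, and even a crude normalisation (dropping attention lines, upper-casing) cannot collapse them into one record. The company and address are invented.

    # Minimal sketch: free-text ship-to lines for one customer that a
    # naive exact match would treat as three different customers.

    labels = [
        ["Acme Corp", "12 Main St", "Springfield"],
        ["ATTN: Receiving Dock", "Acme Corp", "12 Main St", "Springfield"],
        ["ACME CORP.", "12 Main Street", "Springfield"],
    ]

    def crude_key(lines):
        # Drop lines that look like handling instructions, then normalise lightly.
        kept = [l for l in lines if not l.upper().startswith("ATTN")]
        return " / ".join(l.upper().rstrip(".") for l in kept)

    print(len({crude_key(l) for l in labels}))  # still 2, not 1: "St" vs "Street" defeats it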

Lack of Trust (Sales Rep Resistance and Customer Resistance)

The need to develop good relationships with customers so that they trust us enough to provide their information has been discussed previously. Trust, or the lack of it, can also affect the quality of the database inside the organisation. The politics of data originates from the reality that information is power, and that, like political power, most individuals do not want to share it with others.

Many personnel have developed personal ties with certain customers, which enables them to provide exceptional service. The problem is that the sales representative, or anyone else in the firm who has regular direct contact with the customer, is one of the most comprehensive sources of information about a client, and the widely held belief that no one else will have any information about a sales representative’s customers unless the rep gives it away is a complete illusion.

Duplicate Data

Modern organisations gather data from various sources:

  • Local databases
  • Cloud data lakes
  • Streaming data

Additionally, they may have application and system silos. There is bound to be a lot of duplication and overlap in these sources. Duplication of contact details, for example, affects customer experience significantly.
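
A minimal de-duplication sketch, matching on a normalised e-mail address; real matching engines also compare names and phone numbers fuzzily, so this is only the simplest case.

    # Minimal sketch: de-duplicating contact records pulled from several sources.

    def deduplicate(contacts):
        seen = {}
        for c in contacts:
            key = c["email"].strip().lower()
            if key not in seen:
                seen[key] = dict(c, email=key)
            else:
                # Merge: keep existing values, fill in anything missing.
                for field, value in c.items():
                    seen[key].setdefault(field, value)
        return list(seen.values())

    contacts = [
        {"email": "Ann@Example.com", "name": "Ann Smith"},                       # local database
        {"email": "ann@example.com", "name": "Ann Smith", "phone": "555-0100"},  # cloud data lake
    ]
    print(deduplicate(contacts))   # one record, with the phone number retained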

Inaccurate Data

Data inaccuracies can be traced back to several factors, including human error, data drift, and data decay. According to Gartner, around 3% of data decays globally every month, which is alarming. Data quality degrades over time, and data can lose its integrity during its journey across various systems.
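
A quick calculation shows how that monthly figure compounds over a year:

    # Compounding the quoted 3% monthly decay rate over twelve months.

    monthly_decay = 0.03
    still_good_after_year = (1 - monthly_decay) ** 12
    print(f"{1 - still_good_after_year:.0%} of data degraded after 12 months")  # about 31%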

Recent experience makes the need to improve data quality for COVID-19 and subsequent pandemics more evident than ever. Inaccurate data does not give a correct real-world picture and cannot help plan an appropriate response. Two years into the COVID-19 pandemic, we still do not have a confirmed count of deaths caused by the coronavirus in 2021.
