
Friday 29 June 2007

Data Integration Challenge – Facts Arrive Earlier than Dimension


Fact transactions that arrive earlier than their dimension (master) records are not bad data; such fact records need to be handled in the ETL process as a special case. Facts arriving before their dimensions is quite a common situation, as in the case of a customer opening a bank account and his transactions starting to flow into the data warehouse immediately, while the customer id creation in the Customer Reconciliation System gets delayed and the customer data reaches the data warehouse only after a few days.
How we handle this scenario differs based on the business process being addressed; there are two different requirements:
  • Make the fact available and report it under an “In Process” category; commonly followed in financial reporting systems to enable reconciliation
  • Make the fact available only when the dimension is present; commonly followed in status reporting systems
Requirement 1: Make the fact available and report under “In Process” category
For this requirement, follow the steps below (a short sketch after the list ties them together):
  1. Insert into the dimension table a record that represents a default or ‘In Process’ status; in the banking example, the Customer Dimension would have a ‘default record’ inserted to indicate that the customer details have not yet arrived

  2. In the ETL process that populates the Fact table, assign a default Dimension key to the transactions that do not have a corresponding entry in the Dimension table and insert them into the Fact. In the same process, insert the dimension lookup values into a ‘temporary’ or ‘error’ table

  3. Build an ETL process that checks for new records inserted into the Dimension table, queries the temporary table to identify the fact records whose dimension key has to be updated, and updates each fact’s dimension key
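A minimal sketch of these three steps, assuming Python with an in-memory SQLite database; the table and column names (customer_dim, txn_fact, pending_dim_lookup) are invented for illustration and will differ in a real warehouse.

```python
# Sketch of Requirement 1 with SQLite; table/column names are hypothetical.
import sqlite3

DEFAULT_KEY = -1  # surrogate key of the default 'In Process' dimension record

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.executescript("""
    CREATE TABLE customer_dim       (customer_key INTEGER PRIMARY KEY, customer_id TEXT, name TEXT);
    CREATE TABLE txn_fact           (txn_id TEXT, customer_key INTEGER, amount REAL);
    CREATE TABLE pending_dim_lookup (txn_id TEXT, customer_id TEXT);  -- the 'temporary'/'error' table
    -- Step 1: the default 'In Process' record in the dimension
    INSERT INTO customer_dim VALUES (-1, 'UNKNOWN', 'In Process');
""")

def load_fact(txn_id, customer_id, amount):
    """Step 2: load the fact; use the default key when the dimension has not arrived yet."""
    row = cur.execute("SELECT customer_key FROM customer_dim WHERE customer_id = ?",
                      (customer_id,)).fetchone()
    cur.execute("INSERT INTO txn_fact VALUES (?, ?, ?)",
                (txn_id, row[0] if row else DEFAULT_KEY, amount))
    if row is None:  # remember the natural key so the fact can be re-pointed later
        cur.execute("INSERT INTO pending_dim_lookup VALUES (?, ?)", (txn_id, customer_id))

def patch_late_dimensions():
    """Step 3: once the real dimension record arrives, update the parked facts' keys."""
    cur.execute("""
        UPDATE txn_fact
           SET customer_key = (SELECT d.customer_key
                                 FROM pending_dim_lookup p JOIN customer_dim d
                                   ON p.customer_id = d.customer_id
                                WHERE p.txn_id = txn_fact.txn_id)
         WHERE txn_id IN (SELECT p.txn_id
                            FROM pending_dim_lookup p JOIN customer_dim d
                              ON p.customer_id = d.customer_id)""")
    cur.execute("""DELETE FROM pending_dim_lookup
                    WHERE customer_id IN (SELECT customer_id FROM customer_dim)""")

load_fact("T1", "C100", 250.0)                                          # fact arrives first, parked under -1
cur.execute("INSERT INTO customer_dim VALUES (1, 'C100', 'John Doe')")  # dimension arrives later
patch_late_dimensions()
print(cur.execute("SELECT * FROM txn_fact").fetchall())                 # [('T1', 1, 250.0)]
```

The fact is never rejected; it is simply parked under the default ‘In Process’ key and re-pointed to the real dimension record once it arrives.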

Requirement 2: Make the fact available only when the dimension is present
For this requirement, follow the steps below (a short sketch follows the list):
  1. Build an ETL process that populates the fact into a staging table

  2. Build an ETL process that pushes only the records that have a dimension value to the data warehouse tables

  3. At the end of the ETL process, delete all the processed records from the staging table, leaving the unprocessed records available to be picked up in the next run
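A minimal sketch of this staging-table approach, again assuming Python with an in-memory SQLite database and invented table names (stg_txn, customer_dim, txn_fact):

```python
# Sketch of Requirement 2 with SQLite; table/column names are hypothetical.
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.executescript("""
    CREATE TABLE stg_txn      (txn_id TEXT, customer_id TEXT, amount REAL);  -- staging table
    CREATE TABLE customer_dim (customer_key INTEGER PRIMARY KEY, customer_id TEXT);
    CREATE TABLE txn_fact     (txn_id TEXT, customer_key INTEGER, amount REAL);
""")

def publish_matched_facts():
    # Step 2: push only the staged rows whose dimension record already exists
    cur.execute("""
        INSERT INTO txn_fact (txn_id, customer_key, amount)
        SELECT s.txn_id, d.customer_key, s.amount
          FROM stg_txn s JOIN customer_dim d ON s.customer_id = d.customer_id""")
    # Step 3: delete the processed rows; unmatched rows stay for the next run
    cur.execute("""
        DELETE FROM stg_txn
         WHERE customer_id IN (SELECT customer_id FROM customer_dim)""")

# Run 1: the dimension for C200 has not arrived, so its fact stays in staging
cur.executemany("INSERT INTO stg_txn VALUES (?, ?, ?)",
                [("T1", "C100", 90.0), ("T2", "C200", 45.0)])
cur.execute("INSERT INTO customer_dim VALUES (1, 'C100')")
publish_matched_facts()

# Run 2: the C200 dimension arrives and the parked fact is picked up
cur.execute("INSERT INTO customer_dim VALUES (2, 'C200')")
publish_matched_facts()
print(cur.execute("SELECT * FROM txn_fact").fetchall())  # [('T1', 1, 90.0), ('T2', 2, 45.0)]
```

Unmatched rows simply wait in the staging table and are picked up by a later run once their dimension arrives.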

You can read more about the Data Integration Challenge series.

Monday 25 June 2007

Business Intelligence: Gazing at the Crystal Ball


Circa 2015 – 8 years from now
The CEO of a multinational organization enters the corner office overlooking the busy city below. On flicking a switch near the seat, the wall in front lights up with a colorful dashboard, known in CEO circles then as the Rainbow Chart.
The Rainbow Chart is the CEO’s lifeline: it gives a snapshot of the current business position (the left portion) and also figures and colors that serve as a premonition of the company’s future (the right portion).
The current state/left portion of the dashboard, on closer examination, reveals 4 sub-parts. On the extreme left is the Balance Sheet of the business and next to it is the Income statement. The Income statement has more colors that are changing dynamically as compared to the Balance sheet. Each line item has links to it, using which the CEO can drill down further to specific geographies, business units and even further to individual operating units. The third part has the cash flow details (the colors are changing far more rapidly here) and the fourth one gives the details on inventory, raw materials position and other operational details.
The business future state/right portion of the dashboard has a lot of numbers that fall into two categories. The first category is specific to the business – sales in the pipeline, revenue and cost projections, top 5 initiatives, strategy maps etc. – and the second category is macroeconomic indicators from across the world. At the bottom of the dashboard is a stock ticker (what else?) with the company’s stock price shown in bold.
All these numbers & colors change in real-time and the CEO can drill up/down/across/through all the line items. Similar such dashboards are present across the organization and each one covers details that are relevant for the person’s level and position in the company.
This in essence is the real promise of BI.
Whether it happens in 2015 or earlier (hopefully not later!) is open to speculation, but the next few posts from my side will zero in on some of the prerequisites for such a scenario – the Business Intelligence Utopia!

Business Intelligence @ Crossroads


Business Intelligence (BI) is well & truly at the crossroads and so are BI practitioners like me. On one hand there is tremendous improvement in BI tools & techniques almost on a daily basis but on the other hand there is still a big expectation gap among business users on Business Intelligence’s usage/value to drive core business decisions.

This ensures that every Business Intelligence (BI) practitioner develops a ‘split’ personality, a la Jekyll and Hyde: one moment fascinated by the awesome power of databases, smart techniques in data integration tools and the like, and the very next moment getting into trouble with a business user over why ‘that’ particular metric cannot be captured in an analytical report.
For the BI technologists, there is never going to be a dull moment in the near future. With all the big product vendors like Microsoft, Oracle, SAP etc. throwing their might behind BI and with all the specialty BI product vendors showing no signs of slowing down, just get ready to join the big swinging party.
For the business users, the promise of BI is still very enticing – ‘Data to Information to Knowledge to Actions that drive business decisions’. But they are not ready to give their verdict just yet. Operational folks are really not getting much out of BI right now (wait for BI 2.0) and the strategic thinkers are not completely satisfied with what they get to see.
The techno-functional managers, the split-personality types, are the ones in the middle, grappling with increasing complexity on the technology side and the ever-increasing clamor for insights from the business side.
Take sides right away – there is more coming from this space on the fascinating world of Business Intelligence.

Thursday 14 June 2007

DI Challenge – Handling Files of Different Formats with the Same Subject Content


In a Data Integration environment with multiple OLTP systems serving the same business functionality, a scenario that occurs quite commonly is these systems ‘providing files of different formats with the same subject content’.
Different OLTP systems with the same functionality arise in organizations, for instance when a bank runs its core banking on different products following an acquisition or merger, or simply when the same application has multiple instances with country-specific customizations.
For example, data about the same subject, such as ‘loan payment details’, would be received on a monthly basis from different OLTP systems in different layouts and formats. These files might arrive at different frequencies and may be incremental or full files.
Files with the same subject content always reach the same set of target tables in the data warehouse.
How do we handle such scenarios effectively and build a scalable Data Integration process?
The following steps help in handling such situations effectively (a small sketch after the list illustrates the mapping and common layers):
• Since all the files provide data related to one common subject, prepare a Universal Set of fields that represents that subject. For example, for the loan payment subject we would identify a Universal Set of fields representing details about the guarantors, borrower, loan account etc. This Universal Field list is called the Common Standard Layout (CSL)
• Define the CSL fields with a Business Domain specialist and mark certain fields in the CSL as mandatory or NOT NULL fields, which all source files should provide
• Build a set of ETL processes that process the data based on the CSL layout and populate the target tables. The CSL layout could be a table or a flat file; if the CSL is a table, define the fields as character. All validations that are common to the subject are built in this layer
• Build an individual ETL process for each file that maps the source file’s fields to the CSL structure. All file-specific validations are built in this layer
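The sketch below is a rough Python illustration of the file-specific mapping layer and the common validation layer working against one CSL; the CSL field names, source system names and file layouts are all made up for the example.

```python
# Sketch of the CSL approach; field names, mappings and layouts are invented.
import csv, io

# Universal Set of fields for the 'loan payment' subject: the Common Standard Layout (CSL)
CSL_FIELDS = ["loan_account_no", "borrower_id", "payment_date", "payment_amount", "currency"]
MANDATORY  = {"loan_account_no", "payment_date", "payment_amount"}   # NOT NULL fields

# One small mapping per source system: source field name -> CSL field name
SOURCE_MAPPINGS = {
    "core_banking_A": {"ACCT_NO": "loan_account_no", "CUST": "borrower_id",
                       "PAY_DT": "payment_date", "AMT": "payment_amount", "CCY": "currency"},
    "core_banking_B": {"loan_id": "loan_account_no", "borrower": "borrower_id",
                       "paid_on": "payment_date", "paid_amount": "payment_amount"},
}

def to_csl(source_name, row):
    """File-specific layer: reshape one source record into the CSL structure."""
    record = {field: "" for field in CSL_FIELDS}
    for src_field, csl_field in SOURCE_MAPPINGS[source_name].items():
        record[csl_field] = row.get(src_field, "")
    return record

def validate_csl(record):
    """Common layer: rules shared by every source, e.g. the mandatory fields."""
    return all(record.get(field) for field in MANDATORY)

# A file from system B, in its own layout, flows through the same common layer
file_b = io.StringIO("loan_id,borrower,paid_on,paid_amount\nLN42,B007,2007-06-01,1200.50\n")
for row in csv.DictReader(file_b):
    csl_record = to_csl("core_banking_B", row)
    if validate_csl(csl_record):
        print("load to target:", csl_record)
```

Adding a new source file then only means adding one more mapping (plus any file-specific validations); the common layer stays untouched.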
Benefits of this approach
• Conversion of all source file formats to the CSL ensures that all the common rules are developed as reusable components
• Adding a new file that provides the same subject content is easier; we just need to build a process to map the new file to the CSL structure
Read more about: Data Integration Challenge

Monday 11 June 2007

First Step in Knowing your Data – ‘Profile It’


The Chief Data Officer (CDO), the protagonist introduced earlier on this blog, has the unenviable task of understanding the data within the organization’s boundaries. Having categorized the data into 6 MECE sets (read the post dated May 29 on this blog), the data reconnaissance team starts its mission with the first step – ‘Profiling’.
Data Profiling at the most fundamental level involves understanding of:
1) How is the data defined?
2) What is the range of values that the data element can take?
3) How is the data element related to others?
4) What is the frequency of occurrence of certain values? And so on.
A slightly more sophisticated definition of Data Profiling would include analysis of data elements in terms of the following (a small sketch after the list shows a few of these measures in practice):
  • Basic statistics, frequencies, ranges and outliers
  • Numeric range analysis
  • Identification of duplicate name-and-address and non-name-and-address information
  • Identification of multiple spellings of the same content
  • Identification and validation of redundant data and primary/foreign key relationships across data sources
  • Validation of data-specific business rules within a single record or across sources
  • Discovery and validation of data patterns and formats
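As a rough illustration, here is a small Python sketch that computes a few of these measures (completeness, cardinality, value frequencies, numeric ranges and a simple format check) over a made-up, column-oriented sample; dedicated profiling tools obviously go much further.

```python
# A small, hypothetical profiling pass over a column-oriented sample.
from collections import Counter
import re, statistics

sample = {
    "customer_id": ["C100", "C101", "C101", "C102", None, "c103"],
    "balance":     [1200.0, 250.5, 250.5, -90.0, 0.0, 18000.0],
}

def profile_column(name, values):
    non_null = [v for v in values if v is not None]
    report = {
        "column": name,
        "rows": len(values),
        "null_count": len(values) - len(non_null),        # completeness
        "distinct": len(set(non_null)),                   # cardinality / duplicates
        "top_values": Counter(non_null).most_common(3),   # frequency of certain values
    }
    if all(isinstance(v, (int, float)) for v in non_null):
        report["min"], report["max"] = min(non_null), max(non_null)  # numeric range
        report["mean"] = round(statistics.mean(non_null), 2)
    else:
        # pattern/format discovery, e.g. customer ids expected to look like 'C' + digits
        pattern = re.compile(r"^C\d+$")
        report["format_violations"] = [v for v in non_null if not pattern.match(str(v))]
    return report

for column, values in sample.items():
    print(profile_column(column, values))
```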
Armed with statistical information about critical data present in enterprise-wide systems, the CDO’s team can devise specific strategies to improve the quality of data and hence the quality of information and business decision-making.

To add more variety to your thoughts, you can read more about Data Profiling.


Friday 1 June 2007

What is Data Integration or ETL?


ETL represents the three basic steps:
  1. Extraction of data from a source system

  2. Transformation of the extracted data and

  3. Loading the transformed data into a target environment

In general, ‘ETL’ represented more of a batch process that gathered data from flat files or relational structures. When ETL systems started supporting data from wider sources such as XML, industry-standard formats like SWIFT, unstructured data and real-time feeds like message queues, ‘ETL’ evolved into ‘Data Integration’. That is why all ETL product vendors are now called Data Integrators.
Now let us see how Data Integration or ETL has evolved over time. The ways of performing DI are:
  • Write Code
  • Generate Code
  • Configure Engine
Write Code: Write a piece of code in a programming language, compile and execute
Generate Code: Use a Graphical User Interface to input the requirements of data movement, generate the code in a programming language, compile and execute
Configure Engine: Use a Graphical User Interface to input the requirements and save the inputs (metadata) in a data store (repository). A generic pre-compiled engine then interprets the metadata from the repository and executes it.
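To make the contrast concrete, here is a toy example of the ‘Write Code’ approach: a hand-written extract-transform-load in Python from a CSV source into a SQLite target, with made-up file layout and table names.

```python
# A toy hand-coded ETL: extract from CSV, transform in code, load into SQLite.
import csv, io, sqlite3

# Extract: read rows from the source (an in-memory CSV stands in for a flat file)
source = io.StringIO("order_id,amount,currency\n1001,250.00,usd\n1002,99.90,eur\n")
rows = list(csv.DictReader(source))

# Transform: apply simple cleansing/derivation rules in code
for row in rows:
    row["amount"] = float(row["amount"])
    row["currency"] = row["currency"].upper()

# Load: write the transformed rows into the target table
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (order_id TEXT, amount REAL, currency TEXT)")
conn.executemany("INSERT INTO orders VALUES (:order_id, :amount, :currency)", rows)
print(conn.execute("SELECT * FROM orders").fetchall())
```

Every change in the data movement requirements means changing, retesting and redeploying such code; the ‘Generate Code’ and ‘Configure Engine’ approaches try to reduce exactly this burden by capturing the requirements as metadata.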
Pros and Cons of each approach
Write Code
Pros:
  • Easy to get started for smaller tasks
  • Complex data handling requirements can be met
Cons:
  • Large effort in maintenance of the code
  • Labor-intensive development, error prone and time consuming

Generate Code
Pros:
  • Developer friendly to design the requirements
  • Metadata of requirements captured
Cons:
  • Large effort in maintenance of the code
  • Metadata and code deployed can be out of sync

Configure Engine
Pros:
  • Developer friendly to design the requirements
  • Metadata of requirements captured
  • Easier code maintenance
  • Flexibility to access any type of data source
  • Scalable for huge data volumes; supports architectures like SMP, MPP, NUMA-Q, GRID etc.
Cons:
  • Certain data handling requirements might require adding ‘hand-written code’
  • Dedicated environment, servers and an initial configuration process are needed

To add more variety to your thoughts, you can read more about Data Integration.