Wednesday, 29 October 2008

Business Intelligence Challenge – Product Upgrades & Migrations, Impact Assessment – 3

The next step after ‘Object Consolidation’ is Impact Assessment.
What is Impact Assessment? The process of determining the variations or gaps in the existing objects/reports by comparing them against the target platform.
These gaps or variations arise when an existing function has been replaced by a new function in the target platform, or when an existing function has no supported equivalent there.
Steps Involved in Impact Assessment
We try to perform this comparison against the target platform in an automated way by leveraging the underlying metadata of the existing environment.
1. Take the ‘object metadata’ collected as part of the Object Consolidation
2. Collect details of the possible issues faced during the upgrade or migration process. The sources of these details would be:
  • prior experience in executing similar projects
  • the manuals and release notes provided by the product vendor
  • the pilot project executed with the subset of objects from the existing setup
3. Convert the identified ‘issues’ into a relational table.
  • An issue could be an observation such as: the function ‘sum’ has been changed to ‘sumif’ in the newer version of the product
  • One way of converting this issue into a relational structure is to have two columns, ‘issue_case’ and ‘issue_type’, where issue_case carries the value ‘sum’ and issue_type carries the value ‘aggregate’
  • Converting to a relational structure enables automated searches for the impacted objects through SQL queries that join the ‘object metadata’ table with the ‘issue’ table
4. Run SQL queries joining the ‘issue’ table with the ‘object metadata’ table to determine the impacted existing objects (a sketch of such queries follows this list)
5. Classify each object by its degree of impact (number of impact points) and decide on the upgrade/migration strategy for each of these groups
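To make steps 3 to 5 concrete, here is a minimal sketch of the ‘issue’ table and the impact queries. The table and column names (OBJECT_METADATA, object_name, element_name) are assumptions for illustration; the actual metadata layout depends on the BI product.

    -- Assumed 'issue' table: one row per known upgrade issue
    CREATE TABLE ISSUE_TABLE (
        issue_case VARCHAR(100),  -- e.g. 'sum'
        issue_type VARCHAR(50)    -- e.g. 'aggregate'
    );

    -- Step 4: find the impacted objects by joining the issue table
    -- with the object metadata collected during Object Consolidation
    SELECT om.object_name, it.issue_case, it.issue_type
    FROM OBJECT_METADATA om
    JOIN ISSUE_TABLE it ON om.element_name = it.issue_case;

    -- Step 5: classify each object by its degree of impact
    SELECT om.object_name, COUNT(*) AS impact_points
    FROM OBJECT_METADATA om
    JOIN ISSUE_TABLE it ON om.element_name = it.issue_case
    GROUP BY om.object_name
    ORDER BY impact_points DESC;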
Benefits of Impact Assessment
  • Foresee the challenges and be well prepared for the system upgrade or migration
  • Estimate and plan the execution and testing phases effectively
  • Build comprehensive test cases
  • Minimize surprises and give confidence to the execution team
  • Help decide whether an object should be rebuilt from scratch or upgraded/migrated
Impact Assessment Challenges
The first challenge is gathering the knowledge of issues; the options are to talk to a person who has executed a similar upgrade project and collect the issue details, or to perform a quick pilot with an appropriate sample of objects to determine the issues.
The second challenge is converting the issue logs into a relational structure and running the queries to determine the impacts; both of these require a good understanding of the underlying metadata structure, so explore the metadata structure and understand it fully from the point of view of analysis.
Next time let us discuss another key task in an upgrade project…

Tuesday, 21 October 2008

Business Intelligence Value Curve

Every business software system has an economic life. This essentially means that a software application exists for a period of time to accomplish its intended business functionality, after which it has to be replaced or re-engineered. This is a fundamental truth that has to be taken into account whether a product is bought or a system is developed from scratch.
During its useful life, the software system goes through a maturity life cycle – I would like to call it the “Value Curve” to establish the fact that the real intention of creating the system is to provide business value. As a BI practitioner, my focus is on the “Business Intelligence Value Curve”, and in my humble opinion it typically goes through the phases shown in the diagram below.
[Diagram: The Business Intelligence Value Curve and its four stages]
Stage 1 – Deployment and Proliferation
The BI infrastructure is created at this stage, catering to one or two subject areas. Both the process and technology infrastructure are established, and there are tangible benefits to the business users (usually the finance team!). Seeing the initial success, more subject areas are brought into the BI landscape, which leads to the first list of problems – poor data quality, incomplete data, and duplication of data across data marts / repositories.
Stage 2 – Leveraging for Enterprise Decision Making
This stage takes off by addressing the problems seen in Stage-1, and the overall enterprise data warehouse architecture starts taking shape. There is increased business value as compared to Stage-1, as the Enterprise Data Warehouse becomes a single source of truth for the enterprise. But as the data volume grows, the value is diminished due to scalability issues. For example, data loads that used to take ‘x’ hours to complete now need at least ‘2x’ hours.
Stage 3 – Integrating and Sustaining
The scalability issues seen at the end of Stage-2 are alleviated and the BI landscape sees much higher levels of integration. Knowledge is built into the setup by leveraging the metadata, and user adoption of the BI system is almost complete. But the emergence of a disruptive technology (for example, BI Appliances), a completely different service model for BI (e.g. Cloud Analytics), or a regulatory mandate (e.g. IFRS) may force the organization to start evaluating completely different ways of analyzing information.
Stage 4 – Reinvent
The organization, after appropriate feasibility tests and ROI calculations, reinvents its business intelligence landscape and starts constructing one that is relevant for its future.
I do acknowledge the fact that not all organizations will go through this particular lifecycle but based on my experience in architecting BI solutions, most of them do have stages of evolution similar to the one described in this blog. A good understanding of the value curve would help BI practitioners provide the right solutions to the problems encountered at different stages.

Friday, 10 October 2008

Business Intelligence Challenge – Product Upgrades & Migrations, Object Consolidation – 2

As an initial step, one of the key tasks to be considered in any Business Intelligence product upgrade or migration is ‘Object Consolidation’.
What is Object Consolidation? The process of understanding the current BI environment by means of its metadata and analysing it with a view to determining and eliminating redundant objects. The ‘objects’ in a BI product are its reports and semantic layer definitions (like a Universe in Business Objects).
Steps Involved in Object Consolidation
1. Locate all objects (reports and semantic definitions). These objects could be in a central repository as well as in individual user folders and desktops
2. Check whether the objects’ metadata is available in relational storage (a metadata repository); otherwise build processes that collect the metadata of the objects and store it in a relational structure
3. Run SQL queries against the relational structure to determine (a sketch of such queries follows this list)
a. ‘Duplicates’; objects that have the same metadata elements
b. ‘Clusters’; objects that have similar metadata elements – when objects (reports) differ from each other by only one or two metadata elements, they are grouped as ‘Clusters’
c. ‘Dormant’; objects that are no longer used
d. Complexity of the objects, in terms of factors like the number of metadata elements used in an object
4. Share the object consolidation findings with the users for confirmation and verification
5. Prepare the consolidated list of objects by eliminating the duplicate and dormant objects and keeping only the prime object from each cluster
a. Duplicate objects are directly removed
b. From the objects in a cluster, only the key object is considered for upgrade; after the upgrade of the key object, the rest of the objects in the same cluster are derived from this upgraded key object
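As an illustration of step 3, the sketch below shows the kind of SQL that could surface duplicates, clusters and dormant objects. It assumes a normalized layout – an OBJECT_METADATA table with one row per object (object_id, object_name, metadata_signature, last_used_date) and an OBJECT_ELEMENTS table with one row per (object_id, element_name). These names and columns are assumptions; the real structure varies by product.

    -- 'Duplicates': objects whose metadata elements are identical
    -- (approximated here by a pre-computed signature of the elements)
    SELECT metadata_signature, COUNT(*) AS copies
    FROM OBJECT_METADATA
    GROUP BY metadata_signature
    HAVING COUNT(*) > 1;

    -- 'Clusters': pairs of objects sharing most of their elements;
    -- comparing shared_elements with each object's total element count
    -- reveals objects that differ by only one or two elements
    SELECT a.object_id AS obj_a, b.object_id AS obj_b,
           COUNT(*) AS shared_elements
    FROM OBJECT_ELEMENTS a
    JOIN OBJECT_ELEMENTS b
      ON a.element_name = b.element_name
     AND a.object_id < b.object_id
    GROUP BY a.object_id, b.object_id;

    -- 'Dormant': objects not used since a cut-off date
    SELECT object_name
    FROM OBJECT_METADATA
    WHERE last_used_date < DATE '2007-10-01';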
The consolidated list of objects, together with an understanding of the complexity of the existing environment, becomes one of the key inputs for planning the upgrade process.
Benefits of Object Consolidation
1. Eliminates the upgrade of unwanted objects, saving effort and cost
2. Enables building a clean system in the newer version or platform, ensuring easier system maintenance
3. Enables effective upgrade planning based on an understanding of the environment
4. Improves the understanding of the existing environment through the metadata links
Object Consolidation Challenge: Accessing the metadata of the objects can be a challenge, since many BI products don’t expose metadata that can be queried through SQL. But almost every product provides an SDK through which the metadata can be accessed, or exposes the metadata as XML files. We would need to build tools that pull the metadata using the SDKs or, in the case of XML files, build XML readers/parsers to pull the required metadata.

Friday, 3 October 2008

Data Integration Challenge – Storing Timestamps

Storing timestamps along with a record, indicating its new arrival or a change in its value, is a must in a data warehouse. We take this for granted, adding timestamp fields to table structures, and tend to miss how much storage space a timestamp field occupies: in many databases, like SQL Server and Oracle, a timestamp takes almost double the storage of an integer data type, and if we have two such fields – one for the insert timestamp and another for the update timestamp – the required storage doubles again. There are many instances where we could avoid timestamps, especially when they are used primarily for determining incremental records or stored just for audit purposes.

How to effectively manage the data storage and also leverage the benefit of a timestamp field?
One way of managing the storage of the timestamp fields is to introduce a process id field and a process table. Following are the steps involved in applying this method, both in the table structures and as part of the ETL process.
Data Structure
  1. Consider a table named PAYMENT with two fields of the timestamp data type, INSERT_TIMESTAMP and UPDATE_TIMESTAMP, used for capturing the changes for every record present in the table
  2. Create a table named PROCESS_TABLE with columns PROCESS_NAME Char(25), PROCESS_ID Integer and PROCESS_TIMESTAMP Timestamp
  3. Now drop the fields of the TIMESTAMP data type from the table PAYMENT
  4. Create two fields of integer data type in the table PAYMENT, INSERT_PROCESS_ID and UPDATE_PROCESS_ID
  5. These newly created id fields INSERT_PROCESS_ID and UPDATE_PROCESS_ID are logically linked to the table PROCESS_TABLE through its field PROCESS_ID (a sketch of the resulting structures follows this list)
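A sketch of the resulting structures, in generic SQL (exact data types vary by database); the business columns of PAYMENT are illustrative placeholders.

    -- Process table: one row per ETL run, storing the timestamp once
    CREATE TABLE PROCESS_TABLE (
        PROCESS_NAME      CHAR(25),
        PROCESS_ID        INTEGER,
        PROCESS_TIMESTAMP TIMESTAMP
    );

    -- PAYMENT after the change: the two integer id columns replace
    -- INSERT_TIMESTAMP and UPDATE_TIMESTAMP
    CREATE TABLE PAYMENT (
        PAYMENT_ID        INTEGER,        -- illustrative business column
        PAYMENT_AMOUNT    DECIMAL(10,2),  -- illustrative business column
        INSERT_PROCESS_ID INTEGER,        -- links to PROCESS_TABLE.PROCESS_ID
        UPDATE_PROCESS_ID INTEGER         -- links to PROCESS_TABLE.PROCESS_ID
    );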
ETL Process
  1. Let us consider an ETL process called ‘Payment Process’ that loads data into the table PAYMENT
  2. Now create a pre-process that runs before the ‘payment process’; in the pre-process, build the logic by which a record is inserted with values like (‘payment process’, sequence number, current timestamp) into the PROCESS_TABLE table. The PROCESS_ID in the PROCESS_TABLE table could be generated by a database sequence function.
  3. Pass the newly generated PROCESS_ID of PROCESS_TABLE as ‘current_process_id’ from the pre-process step to the ‘payment process’ ETL process
  4. In the ‘payment process’, if a record is to be inserted into the PAYMENT table, the current_process_id value is set in both the columns INSERT_PROCESS_ID and UPDATE_PROCESS_ID; if a record is getting updated in the PAYMENT table, the current_process_id value is set only in the column UPDATE_PROCESS_ID
  5. So now the timestamp values for the records inserted or updated in the table PAYMENT can be picked from PROCESS_TABLE by joining its PROCESS_ID with the INSERT_PROCESS_ID and UPDATE_PROCESS_ID columns of the PAYMENT table (see the sketch after this list)
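A sketch of the pre-process insert (step 2) and of the join described in step 5. The sequence name payment_process_seq is an assumption, and the NEXTVAL syntax shown is Oracle-style; other databases use their own sequence or identity mechanisms.

    -- Pre-process: register the current run and generate its process id
    INSERT INTO PROCESS_TABLE (PROCESS_NAME, PROCESS_ID, PROCESS_TIMESTAMP)
    VALUES ('payment process', payment_process_seq.NEXTVAL, CURRENT_TIMESTAMP);

    -- Step 5: recover the insert and update timestamps of PAYMENT records
    SELECT p.PAYMENT_ID,
           ins.PROCESS_TIMESTAMP AS insert_timestamp,
           upd.PROCESS_TIMESTAMP AS update_timestamp
    FROM PAYMENT p
    JOIN PROCESS_TABLE ins ON ins.PROCESS_ID = p.INSERT_PROCESS_ID
    JOIN PROCESS_TABLE upd ON upd.PROCESS_ID = p.UPDATE_PROCESS_ID;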
Benefits
  • The fields INSERT_PROCESS_ID and UPDATE_PROCESS_ID occupy less space when compared to the timestamp fields
  • Both the columns INSERT_PROCESS_ID and UPDATE_PROCESS_ID are Index friendly
  • It is easier to handle these process id fields when picking records for determining incremental changes or for audit reporting, as illustrated below
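For example, assuming process ids increase with every run, the incremental records since a known run can be picked with a plain integer comparison, and audit reporting becomes a simple join:

    -- Incremental extraction: records touched after process id 100
    SELECT * FROM PAYMENT WHERE UPDATE_PROCESS_ID > 100;

    -- Audit: each record with the name and time of the run that last changed it
    SELECT p.PAYMENT_ID, pt.PROCESS_NAME, pt.PROCESS_TIMESTAMP
    FROM PAYMENT p
    JOIN PROCESS_TABLE pt ON pt.PROCESS_ID = p.UPDATE_PROCESS_ID;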

Tuesday, 30 September 2008

Business Intelligence – The Reusability Gene

One issue that confronts me time and again while executing BI projects is “Reusability”, actually the lack of it. Let me give an example. 
In the many migration and upgrade projects that Hexaware (my company) has executed, I always find that the number of reports finally migrated/upgraded to the new environment is only 40-50% of the number initially provided to us by the customer. Report rationalization has become such a critical step that we have developed specific metadata tools that help rationalize the reporting environment. Coming back to the topic – the reason for such a divergence between the final number of reports and the initial number is the lack of ‘reusability’. Business users have their own versions of standardized (?) reports stored on their desktops, which are nothing but small variations (usually with a new filter added) of an already existing report.
Another similar example, on the data integration side, is the creation of ad-hoc ETL routines as and when required. This results in duplication of ETL jobs and a non-standard BI environment.
Lack of re-use causes two major problems:
1) The BI environment becomes bloated with unwanted components that use valuable computing resources, resulting in delays in the availability of more important information.
2) Any attempt at upgrading/re-engineering the existing system results in high costs and undesirable heartburn among business users.
The Prescription:
1) Establish a corporate-level BI team whose primary responsibility is to ensure that any component addition (ETL jobs, reports, models, etc.) is justified by its purpose. This team has to ensure that existing standards and components are reused to the maximum extent.
2) Strengthen the “Business Metadata” architecture within the organization. In one of my earlier posts, I had explained my view of BI metadata and that is very relevant to the task of improving reusability.
Basically, the “Reusability gene” seems to be a little muted in its functioning among BI practitioners. It is time that BI teams within organizations and system integrators like Hexaware look at reusability as a critical parameter while developing and deploying BI solutions.