
Monday 24 November 2008

Business Intelligence Challenge – Product Upgrades & Migrations, Moving the Code – 4

Last time we discussed Impact Assessment; the next logical step after this is to perform the actual upgrade or migration of the code.
Moving the Code: Performing Upgrade or Migration of the Objects
When we talk about product upgrades, the product vendor almost always provides tools by which the objects in the earlier version can be upgraded to the latest version. Some objects will fail to come through such tools; these are the ones that need rework after the upgrade process.
When we talk about product migration, like moving from Cognos to Business Objects or Business Objects to Cognos, there is good scope to automate the code migration. Earlier discussions covered how to leverage the metadata for understanding the environment; now we look at an option for manipulating or transforming the metadata so that an object in platform ‘A’ becomes compliant with platform ‘B’.
Steps involved in building an automated product migration process
Perform metadata-level object mapping between the two platforms and determine the gaps. This would actually be a ‘by-product’ of ‘Step 2’ in Impact Assessment
Build individual components that would (a minimal sketch follows this list)
  • Read the metadata from the source platform and prepare a repository
  • Hold the knowledge of the matches and gaps between the platforms, for example as reference tables
  • Transform the ‘source’ metadata and write it out in a form understood by the ‘target’ platform, using the reference tables
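To make this concrete, here is a minimal sketch of such a bridge in Python. The object types, names and the in-memory repository are hypothetical; a real bridge would read and write through the vendors’ SDKs or XML exports.

```python
# Minimal sketch of an automated migration bridge (illustrative only).
# Object names and types are hypothetical; a real bridge would read the
# source repository through the vendor's SDK or an XML export.

# Reference table: source-platform object types mapped to target equivalents.
# A None value marks a known gap between the platforms.
OBJECT_TYPE_MAP = {
    "list_report": "table_report",
    "crosstab": "crosstab",
    "conditional_block": None,  # no direct equivalent on the target platform
}

def transform_object(source_obj, gap_log):
    """Turn one source-platform object into a target-platform definition."""
    target_type = OBJECT_TYPE_MAP.get(source_obj["type"])
    if target_type is None:
        # Known gap or unknown type: record it for manual rework.
        gap_log.append(f"UNMAPPED: {source_obj['name']} ({source_obj['type']})")
        return None
    return {"name": source_obj["name"], "type": target_type,
            "columns": source_obj.get("columns", [])}

source_repository = [
    {"name": "Sales Summary", "type": "list_report", "columns": ["region", "sales"]},
    {"name": "Margin Alert", "type": "conditional_block"},
]
gap_log = []
target_repository = [t for obj in source_repository
                     if (t := transform_object(obj, gap_log)) is not None]
print(target_repository)  # objects ready for the target platform
print(gap_log)            # objects needing manual rework
```

In practice the reference tables grow iteratively: every object the bridge rejects is examined, and either a new mapping is added or the object is routed to the manual-rework queue.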
Benefits of Automated Migration
  • Helps avoid creating objects from scratch
  • Ensures time is available for testing (the core task) rather than code development
  • Enables the team to have a flexible skill set
  • A faster way of delivering when a ‘one to one’ migration from the source platform is seen as a must
Automated Migration Challenges
Transforming the source metadata to the target platform is a challenge, especially with data manipulation functions. Having a good understanding of the gaps helps; a reference table mapping the functions between the platforms is useful. In scenarios where a function cannot be converted to the target platform, a comment can be written to a log file, enabling quicker attention; a sketch of this translate-or-log pattern follows.
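A minimal sketch of the translate-or-log pattern, with a hypothetical function map (the function names here are illustrative and not tied to any specific product):

```python
import re

# Hypothetical function-mapping reference table between two BI platforms;
# the names are illustrative, not from any specific product.
FUNCTION_MAP = {"substr": "substring", "to_char": "format", "nvl": "coalesce"}

def translate_expression(expr, log):
    """Rewrite known function calls; log the ones with no target equivalent."""
    def swap(match):
        name = match.group(1).lower()
        if name in FUNCTION_MAP:
            return FUNCTION_MAP[name] + "("
        log.append(f"NO EQUIVALENT for '{name}' in: {expr}")
        return match.group(0)  # left as-is for manual rework
    return re.sub(r"(\w+)\s*\(", swap, expr)

log = []
print(translate_expression("nvl(substr(name, 1, 3), 'N/A')", log))
print(translate_expression("decode(flag, 1, 'Y', 'N')", log))
print(log)  # 'decode' lands here for quicker attention
```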
We have seen good success in writing such automated migration components, though never 100% coverage. With almost every product providing good SDKs for reading as well as writing metadata, along with support for XML structures, writing such bridges for object migration is getting easier.
Whether the objects in a product are migrated or upgraded in an automated way or not, the activity that follows, ‘Validation’, plays a key role in ensuring the final quality. Next time, let us discuss some of the means for effective validation.

Thursday 20 November 2008

Zachman Framework for BI Assessments

The Zachman Framework for Enterprise Architecture has become the model around which major organizations view and communicate their enterprise information infrastructure. Enterprise Architecture provides the blueprint, or architecture, for the organization’s information infrastructure. More information on the Zachman Framework can be obtained at www.zifa.com.
For BI practitioners, the Zachman Framework provides a way of articulating the current state of the BI infrastructure in the organization. Ralph Kimball in his eminently readable book “The Data Warehouse Lifecycle Toolkit” illustrates how the Zachman Framework can be adapted to the Business Intelligence context.
Given below is a version of the Zachman Framework that I have used in some of my consulting engagements. This is just one way of using this framework but does illustrate the power of this model in some measure.
[Figure: Zachman Framework adapted for BI assessments]
Some Salient Points with respect to the above diagram are:
  • The framework answers the basic questions of “What”, “How”, “Who” and “Where” across 4 important dimensions – Business Requirements, Conceptual Model, Logical/Physical Model and Actual Implementation.
  • Zachman Framework reinforces the fact that a successful enterprise system combines the ingredients of business, process, people and technology in proper measure.
  • It is typically used to assess the current state of the BI infrastructure in an organization.
  • Each cell that lies at the intersection of a row and a column (e.g., Information Requirements of the Business) has to be documented in detail as part of the assessment document.
  • Information on each cell is gathered through subjective and objective questionnaires.
  • Scoring models can be developed to provide an assessment score for each of the cells; based on the scores, a set of recommendations can be provided to achieve the intended goals (a minimal scoring sketch follows this list).
  • Another interesting thought is to create an As-Is Zachman framework and overlay it with a To-Be one in situations where re-engineering of a BI environment is undertaken. This helps provide a transition path from the current state to the future state.
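As an illustration, here is a minimal scoring sketch over the framework cells; the cells, scores and threshold shown are assumptions for demonstration, not a prescribed calibration.

```python
# Minimal sketch of a scoring model over Zachman cells (illustrative
# numbers; real scores would come from the questionnaires noted above).
cell_scores = {
    ("Business Requirements", "What"): 4,
    ("Business Requirements", "How"): 2,
    ("Conceptual Model", "What"): 3,
    ("Logical/Physical Model", "How"): 5,
    ("Actual Implementation", "Who"): 1,
}
THRESHOLD = 3  # assumed: cells scoring below this trigger a recommendation

for (row, column), score in sorted(cell_scores.items()):
    status = "OK" if score >= THRESHOLD else "needs attention"
    print(f"{row} / {column}: score={score} -> {status}")

overall = sum(cell_scores.values()) / len(cell_scores)
print(f"Overall assessment score: {overall:.1f} / 5")
```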
Thanks for reading. If you have used the Zachman framework differently in your environment, please do share your thoughts.

Monday 10 November 2008

Valuing your Business Intelligence System – Part 1

Sample these statements:
  • Dow Jones Industrial Average jumped 200 points today, a 2% increase from the previous close
  • The carbon footprint of an average individual in the world is about 4 tonnes per year which is a 3% increase over last year
  • The number of unique URLs as of July 2008 on the World Wide Web is 1 trillion. The previous landmark of 1 billion was reached in 2000
  • One day 5% VaR (Value at Risk) for the portfolio is $ 1 Million as compared to the VaR of $ 1.3 Million a couple of weeks back
Most of us buy into the idea of having a single number that encapsulates complex phenomena. Though the details of the underlying processes are important, the single number (and the trend) does act like a bellwether of sorts helping us quickly get a feel of the current situation.
As a BI practitioner, I feel that it is about time that we formulated a way for valuing the BI infrastructure in organizations. Imagine a scenario where the Director of BI in company X can announce thus: “The value of the BI system in this organization has grown 15% over the past 1 year to touch $50 Million” (substitute your appropriate currencies here!).
The core idea of this post is to find a way to “scientifically put a number to your data warehouse”. Here are a few level setting points:
  1. Valuation of BI systems is different from computing the Return on Investment (ROI) for BI initiatives. ROI calculations are typically done using Discounted Cash Flow techniques and are already used in organizations to some extent.
  2. More than the absolute number, the trends are important, which means that the BI system has to be valued using the same norms at different points in time. Scientific/mathematical rigor helps in bringing this consistency.
My perspective on valuation is based on “Outside-in” logic, where the fundamental premise is that the value of the BI infrastructure is completely determined by its consumption. In other words, if there are no consumers for your data warehouse, the value of such a system is zero. One simple yet powerful technique in the “Outside-in” category is RFM Analysis. RFM stands for Recency, Frequency and Monetary, and it is very popular in the direct-marketing world. My 2-step hypothesis for BI system valuation using the RFM technique is:
  • Step 1: Value of BI system = Sum of the values of individual BI consumers
  • Step 2: Value of each individual consumer = Function (Recency, Frequency, Monetary parameters)
Qualitatively speaking, from the business-user standpoint, one who has accessed information from the BI system more recently, has been using data more frequently, and uses that information to make decisions that are critical to the organization will be given a higher value. A calibration chart provides the specific value associated with the RFM parameters based on the categories within them. For example, for the Recency parameter, usage of information within the last 1 day can be fixed at 10 points, while access 10 days back fetches 1 point. I will explain my version of the calibration chart in detail in subsequent posts. (Please note that the conversion of points to dollar values is also an interesting, non-trivial exercise.)
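To illustrate the two-step hypothesis, here is a minimal valuation sketch. Only the Recency calibration (10 points within a day, 1 point at ten days) comes from the example above; every other calibration value, including the point-to-dollar rate, is a placeholder assumption.

```python
# Minimal sketch of the 2-step RFM valuation. Only the Recency figures
# come from the text above; all other calibrations are assumptions.
def recency_points(days_since_last_access):
    if days_since_last_access <= 1:
        return 10   # used within the last day (from the example above)
    if days_since_last_access <= 10:
        return 1    # access ~10 days back fetches 1 point
    return 0

def frequency_points(accesses_per_month):
    return min(accesses_per_month // 5, 10)   # assumed calibration

def monetary_points(decision_criticality):
    return decision_criticality               # assumed 0-10 business scale

def consumer_value(days, per_month, criticality, dollars_per_point=1000):
    # Step 2: value of an individual consumer = f(R, F, M).
    # The point-to-dollar conversion is the non-trivial part noted above.
    points = (recency_points(days) + frequency_points(per_month)
              + monetary_points(criticality))
    return points * dollars_per_point

# Step 1: value of the BI system = sum over its individual consumers.
consumers = [(1, 40, 8), (12, 3, 5), (0, 25, 10)]  # (days, uses/month, criticality)
print(sum(consumer_value(*c) for c in consumers))
```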
I am sure people acknowledge that valuing data assets is difficult, tricky at best. But then, far more difficult questions about nature and behavior have been reduced to mathematical equations; the day when BI practitioners can apply standardized techniques to value their BI infrastructure is probably not too far off.

Tuesday 21 October 2008

Business Intelligence Value Curve

Every business software system has an economic life. This essentially means that a software application exists for a period of time to accomplish its intended business functionality, after which it has to be replaced or re-engineered. This fundamental truth has to be taken into account whether a product is bought or a system is developed from scratch.
During its useful life, the software system goes through a maturity life cycle. I like to call it the “Value Curve” to establish the fact that the real intention of creating the system is to provide business value. As a BI practitioner, my focus is on the “Business Intelligence Value Curve”, and in my humble opinion it typically goes through the phases shown in the diagram below.
[Figure: Business Intelligence Value Curve]
Stage 1 – Deployment and Proliferation
The BI infrastructure is created at this stage, catering to one or two subject areas. Both the process and technology infrastructure are established, and there are tangible benefits to the business users (usually the finance team!). Seeing the initial success, more subject areas are brought into the BI landscape, which leads to the first list of problems: lack of data quality and completeness, and duplication of data across data marts/repositories.
Stage 2 – Leveraging for Enterprise Decision Making
This stage takes off by addressing the problems seen in Stage 1, and an overall enterprise data warehouse architecture starts taking shape. There is increased business value compared to Stage 1 as the Enterprise Data Warehouse becomes a single source of truth for the enterprise. But as the data volume grows, the value diminishes due to scalability issues. For example, the data loads that used to take ‘x’ hours to complete now need at least ‘2x’ hours.
Stage 3 – Integrating and Sustaining
The scalability issues seen at the end of Stage 2 are alleviated, and the BI landscape sees much higher levels of integration. Knowledge is built into the setup by leveraging the metadata, and user adoption of the BI system is almost complete. But the emergence of a disruptive technology (for example, BI appliances), a completely different service model for BI (e.g., cloud analytics) or a regulatory mandate (e.g., IFRS) may force the organization to start evaluating completely different ways of analyzing information.
Stage 4 – Reinvent
The organization, after appropriate feasibility tests and ROI calculations, reinvents its business intelligence landscape and starts constructing one that is relevant for its future.
I do acknowledge that not all organizations will go through this particular lifecycle, but based on my experience architecting BI solutions, most do have stages of evolution similar to the ones described in this post. A good understanding of the value curve helps BI practitioners provide the right solutions to the problems encountered at different stages.

Friday 3 October 2008

Data Integration Challenge – Storing Timestamps

Storing timestamps along with a record to indicate its new arrival or a change in its value is a must in a data warehouse. We take this for granted, adding timestamp fields to table structures while tending to miss how much storage a timestamp field can occupy: in many databases, such as SQL Server and Oracle, a timestamp takes almost double the storage of an integer data type, and if we have two fields, one for the insert timestamp and another for the update timestamp, the space required doubles again. There are many instances where we could avoid timestamps, especially when they are used primarily for determining incremental records or stored just for audit purposes.

How can we effectively manage data storage and still get the benefit of a timestamp field?
One way of managing the storage of timestamp fields is by introducing a process-id field and a process table. Following are the steps involved in applying this method to the table structures and the ETL process.
Data Structure
  1. Consider a table named PAYMENT with two fields of timestamp data type, INSERT_TIMESTAMP and UPDATE_TIMESTAMP, used for capturing the changes to every record present in the table
  2. Create a table named PROCESS_TABLE with columns PROCESS_NAME Char(25), PROCESS_ID Integer and PROCESS_TIMESTAMP Timestamp
  3. Now drop the fields of the TIMESTAMP data type from the table PAYMENT
  4. Create two fields of integer data type in the table PAYMENT, INSERT_PROCESS_ID and UPDATE_PROCESS_ID
  5. These newly created id fields INSERT_PROCESS_ID and UPDATE_PROCESS_ID are logically linked with the table PROCESS_TABLE through its field PROCESS_ID (see the sketch after this list)
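As a concrete illustration, here is a minimal sketch of the revised structures using SQLite through Python; exact data types, storage sizes and sequence mechanisms differ by database.

```python
import sqlite3

# Minimal sketch of the revised structures (SQLite syntax; data types,
# storage sizes and sequence mechanisms differ by database).
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE PROCESS_TABLE (
        PROCESS_ID        INTEGER PRIMARY KEY,  -- a database sequence in practice
        PROCESS_NAME      CHAR(25),
        PROCESS_TIMESTAMP TIMESTAMP
    );
    -- PAYMENT carries two small integer ids in place of the two timestamp
    -- columns; each logically references PROCESS_TABLE.PROCESS_ID.
    CREATE TABLE PAYMENT (
        PAYMENT_ID        INTEGER,
        AMOUNT            NUMERIC,
        INSERT_PROCESS_ID INTEGER,
        UPDATE_PROCESS_ID INTEGER
    );
""")
print([r[0] for r in conn.execute("SELECT name FROM sqlite_master WHERE type='table'")])
```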
ETL Process
  1. Let us consider an ETL process called ‘payment process’ that loads data into the table PAYMENT
  2. Now create a pre-process which runs before the ‘payment process’; in the pre-process, build the logic by which a record is inserted with values like (‘payment process’, sequence number, current timestamp) into the PROCESS_TABLE table. The PROCESS_ID in the PROCESS_TABLE table could be generated by a database sequence function
  3. Pass the newly generated PROCESS_ID of PROCESS_TABLE as ‘current_process_id’ from the pre-process step to the ‘payment process’ ETL process
  4. In the ‘payment process’, if a record is to be inserted into the PAYMENT table, the current_process_id value is set in both the columns INSERT_PROCESS_ID and UPDATE_PROCESS_ID; if a record is being updated in the PAYMENT table, the current_process_id value is set only in the column UPDATE_PROCESS_ID
  5. The timestamp values for the records inserted or updated in the table PAYMENT can now be picked from the PROCESS_TABLE by joining its PROCESS_ID with the INSERT_PROCESS_ID and UPDATE_PROCESS_ID columns of the PAYMENT table (see the end-to-end sketch after this list)
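Here is a minimal end-to-end sketch of the pre-process and load logic, again using SQLite through Python. The compressed DDL repeats the structures from the sketch above; a real warehouse would use a database sequence and an ETL tool rather than hand-written SQL.

```python
import sqlite3

# Same structures as the sketch above, compressed.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE PROCESS_TABLE (PROCESS_ID INTEGER PRIMARY KEY AUTOINCREMENT,
                                PROCESS_NAME CHAR(25), PROCESS_TIMESTAMP TIMESTAMP);
    CREATE TABLE PAYMENT (PAYMENT_ID INTEGER, AMOUNT NUMERIC,
                          INSERT_PROCESS_ID INTEGER, UPDATE_PROCESS_ID INTEGER);
""")

# Pre-process: register this run and capture the generated process id.
cur = conn.execute("INSERT INTO PROCESS_TABLE (PROCESS_NAME, PROCESS_TIMESTAMP) "
                   "VALUES (?, CURRENT_TIMESTAMP)", ("payment process",))
current_process_id = cur.lastrowid

# Insert path: both id columns carry the current process id.
conn.execute("INSERT INTO PAYMENT VALUES (?, ?, ?, ?)",
             (101, 250.0, current_process_id, current_process_id))
# Update path: only UPDATE_PROCESS_ID changes.
conn.execute("UPDATE PAYMENT SET AMOUNT = ?, UPDATE_PROCESS_ID = ? WHERE PAYMENT_ID = ?",
             (300.0, current_process_id, 101))

# Timestamps are recovered by joining back to PROCESS_TABLE.
for row in conn.execute("""
    SELECT p.PAYMENT_ID, i.PROCESS_TIMESTAMP, u.PROCESS_TIMESTAMP
    FROM PAYMENT p
    JOIN PROCESS_TABLE i ON p.INSERT_PROCESS_ID = i.PROCESS_ID
    JOIN PROCESS_TABLE u ON p.UPDATE_PROCESS_ID = u.PROCESS_ID"""):
    print(row)
```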
Benefits
  • The fields INSERT_PROCESS_ID and UPDATE_PROCESS_ID occupy less space compared to the timestamp fields
  • Both the columns INSERT_PROCESS_ID and UPDATE_PROCESS_ID are index-friendly
  • It is easier to handle these process-id fields when picking records for incremental changes or for audit reporting

Tuesday 30 September 2008

Business Intelligence – The Reusability Gene

One issue that confronts me time and again while executing BI projects is “Reusability”, or rather the lack of it. Let me give an example.
In the many migration and upgrade projects that Hexaware (my company) has executed, I always find that the number of reports finally migrated or upgraded to the new environment is only 40-50% of the number provided to us by the customer initially. Report rationalization has become such a critical step that we have developed specific metadata tools that help rationalize the reporting environment. Coming back to the topic: the reason for such a divergence between the final number of reports and the initial number is lack of ‘reusability’. Business users have their own versions of standardized (?) reports stored on their desktops, which are nothing but small variations (usually with a new filter added) of an already existing report.
Another similar example on the data integration side is the creation of ad-hoc ETL routines as and when required. This duplicates ETL jobs and results in a non-standard BI environment.
Lack of re-use causes two major problems:
1) The BI environment becomes bloated with an increasing number of unwanted components that use valuable computing resources, resulting in delays in the availability of more important information.
2) Any attempt at upgrading or re-engineering the existing system results in high costs and undesirable heartburn among business users.
The Prescription:
1) Establish a corporate-level BI team whose primary responsibility is to ensure that any component addition (ETL, reports, models, etc.) is justified by its purpose. This team has to ensure that existing standards and components are reused to the maximum extent.
2) Strengthen the “Business Metadata” architecture within the organization. In one of my earlier posts, I had explained my view of BI metadata and that is very relevant to the task of improving reusability.
Basically, the “Reusability gene” seems to be a little muted in its functioning among BI practitioners. It is time that BI teams within organizations and system integrators like Hexaware look at reusability as a critical parameter while developing and deploying BI solutions.

Wednesday 17 September 2008

Business Intelligence Challenge – Product Upgrades and Migrations – I

Product upgrades are situations where we move from one version of a product to the latest version of the same product. Upgrades happen
  • to ensure support from the product vendor
  • to leverage new features provided by the latest version in terms of performance and user experience
  • because some new product being added to the architecture doesn’t talk to the existing versions
Product migrations are situations where we move from one vendor’s platform to another vendor’s platform. Migrations happen
  • as ‘BI standardization’ initiatives drive organizations to move towards a common platform to operate BI systems at a lower cost and provide a uniform user experience
  • because of bad experience with the current product not meeting the business needs in terms of performance, usability, product support or license cost
  • because recent mergers and acquisitions lead organizations to think of a ‘safer’ platform
Why is an upgrade a challenge? Major products, especially ones like Business Objects and Cognos, undergo such rapid change that newer versions of the same product come out on a different architecture with an entirely new set of components. Upgrades are no longer upgrades; they have become effort-intensive product migrations, almost similar to moving from one BI vendor to another.
Let us call either an upgrade or a migration an ‘Upgrade’, as any such initiative is for a better, upgraded experience for both the business and IT.
“Can we do this upgrade next year?” is a common dialogue when an IT team requests a Business Intelligence product upgrade. The upgrade is one of the key items that comes up for discussion during BI budget allocation in every organization. Fears persist among the business that upgrade projects will consume many of their hours without much benefit to them. For IT, an upgrade is a bigger challenge because of the unpredictability of the problems they will face during the course of the project, and the need to ensure minimal disturbance to the business team. Hence BI initiatives related to product upgrades go through multiple rounds of scrutiny before budget approval. Such projects are seen as an IT initiative, and a clear definition of business benefits becomes difficult to build.

Tuesday 9 September 2008

Business Intelligence – The Unconquered Territories

Bill Bryson, one of my favorite authors, writes this way in the book “A Short History of Nearly Everything” and I quote:
“As the nineteenth century drew to a close, scientists could reflect with satisfaction that they had pinned down most of the mysteries of the physical world: electricity, magnetism, gases, optics, kinetics, and statistical mechanics, to name just a few. If a thing could be oscillated, accelerated, perturbed, distilled, combined, weighed or made gaseous they had done it, and in the process produced a body of universal laws so weighty and majestic that we still tend to write them out in capitals. The whole world clanged and chuffed with the machinery and instruments that their ingenuity had produced. Many wise people believed that there was nothing much left for science to do”
Now we all know how much science went on to invent and discover in the 20th century.
Sitting here in 2008, when I hear people speaking about BI, I sometimes get a feeling that we are on the verge of accomplishing everything in this space. Alas! That is as far from the truth as it gets. There are so many “unconquered territories” in BI that if you thought the past was challenging enough, it is time to get rejuvenated for wrestling with bigger challenges in the future.
My top ten “Unconquered Territories” for BI Practitioners are:
1) The majority of BI decision making is geared towards analysis of structured data. Usage of unstructured data is minimal at best and non-existent in many cases.
2) There is still a lot of work to be done in integrating the process rigor of Six Sigma or a quality management methodology (say CMMI) into the BI paradigm. Unless that is done, BI will not be sustainable in the long run.
3) Lack of valuation techniques. BI systems are corporate assets like human resources, brands etc., and there have to be concrete models for valuing them.
4) Predictive analytics and data mining are used effectively by only a handful of organizations. There is no shortage of techniques, but the world is probably short of people who can apply high-end analytical techniques to solve “common-sense”, real-world business problems.
5) Let’s face it: there are technology limitations. Operational BI (lack of real-time data access), guided analytics (lack of comprehensive business metadata) and Information as a Service (lack of SOA-based BI architecture) are some of the technology limitations that come to my mind.
6) Data quality is a nightmare in most organizations. Either the data is already ‘dirty’, or there is really no governance process, which means the data will become ‘dirty’ eventually.
7) Here is a mindset challenge – BI Practitioners, in my view, need to develop a higher level of “business process” oriented thinking that seems to be lacking given the ever increasing technology complexity of BI tools.
8) Simulations!! – Businesses run with a lot of interdependent variables. Unless a simulation model of the business is built into the analytical landscape, there is really no way of having a handle on the future state of business. Of course, ‘Black Swans’ will continue to exist but that’s a different subject matter altogether.
9) On-demand analytics – I accept that I am being a little unfair here in expecting BI to catch up with the nascent world of “cloud” computing so early. But the fact remains that much work can be done in this area of “Cloud Analytics”.
10) Packaged analytics is a step in the right direction – organizations can quickly deploy analytical packages and spend more time on optimizing business decisions. Having said that, the implementation difficulty combined with the lack of flexibility in packages is an area of concern to be alleviated.
Each one of us will have our own list of “unconquered territories”. Probably it is worthwhile to put everything down on paper and nudge your BI environments towards conquering all those areas and beyond.

Monday 1 September 2008

Business Intelligence Challenge – Understanding Requirements, System Object Analysis

In the earlier discussion we looked at understanding BI requirements through User Object Analysis; now let us look at another aspect.
What is unique about building BI systems, compared to other systems, is that they are built over the data collected by transaction (source) systems for effective data analysis. In principle a BI system should enable any kind of analysis on the data from the source(s), but in many cases we initially pull only the required elements into the data warehouse, based on predefined analyses, to get the BI system up. Defining the requirements for a BI system means defining the scope: which business processes, scenarios and data are of immediate need, and making them available for analysis.
Even though many system owners or functional experts provide the details of the transaction system, there are still many data elements and relationships that are not reachable through inputs from the business. We have all experienced new scenarios pointed out by the business, like ‘this data element should not be updated’ or ‘we need the value populated based on a certain flag’, emerging during the testing phase or in production. Such surprises occur not because the requirements keep changing, but because of a lack of understanding of the actual scenarios reflected in the data present in the source system.
The means of understanding the business process and the system functions of a source system by looking at its data elements and their values is called ‘System Object Analysis’.
Following are the steps in ‘System Object Analysis’:
1. Collect all tables from the source system, along with physical structure metadata like table name, column name, data type, etc.
2. Define descriptions of the kind of data each of these tables stores
3. Group the tables by function, through understanding of the descriptions or through naming conventions present among the tables. Certain tables or groups can get eliminated here through interaction with the users. A table can also belong to multiple groups
4. Reverse engineering the underlying data model would be useful as well
5. Perform data profiling for each of the tables (see the profiling sketch after this list)
6. Understand the domain values, their significance in terms of when each value can occur, and the relationships between tables
7. Determine the different scenarios by which data has arrived in each table
8. Determine the facts, dimensions and the attributes of dimensions within each functional area/group
9. Now, with clear details on each group and the facts and dimensions they contribute, prepare questions that the business can get answered within and across the functional areas (groups). Validate the questions and possibly collect more questions from the business
10. Present to the business what can be done with the system, prioritize and prepare the implementation plan
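As an illustration of step 5, here is a minimal data-profiling sketch using SQLite through Python; the ORDERS table and its columns are hypothetical stand-ins for real source tables.

```python
import sqlite3

# Minimal data-profiling sketch (SQLite for illustration; the ORDERS
# table and its columns are hypothetical).
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE ORDERS (ORDER_ID INTEGER, STATUS CHAR(1), AMOUNT NUMERIC);
    INSERT INTO ORDERS VALUES (1, 'O', 100.0), (2, 'C', 55.5), (3, 'C', NULL);
""")

def profile_table(conn, table):
    """Row count, plus per-column null counts, distinct counts and sample domain values."""
    columns = [row[1] for row in conn.execute(f"PRAGMA table_info({table})")]
    total = conn.execute(f"SELECT COUNT(*) FROM {table}").fetchone()[0]
    print(f"{table}: {total} rows")
    for col in columns:
        nulls, distinct = conn.execute(
            f"SELECT SUM({col} IS NULL), COUNT(DISTINCT {col}) FROM {table}"
        ).fetchone()
        domain = [r[0] for r in conn.execute(
            f"SELECT DISTINCT {col} FROM {table} LIMIT 5")]
        print(f"  {col}: nulls={nulls} distinct={distinct} sample={domain}")

profile_table(conn, "ORDERS")
```

Profiles like these feed steps 6 and 7 directly: the domain values and null patterns suggest when each value can occur and how records arrive in the table.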
Based on the analysis of the tables, the groups or functional areas defined initially can undergo changes: the table list within a group may shift, or a new group may emerge. During the above steps, regular interaction with the business users happens, and the requirements of the BI system get defined.
Benefits of System Object Analysis
Ensures a complete understanding of the process by which data gets modified in the source system, enabling us to deliver more than what the business asks for
Helps group and prioritize requirements, build the case for dependencies, and prepare the roll-out plan
Provides a means to trigger requirements definition from users through an interactive process, and lets us raise many questions to the business about their system and process
Many a time the requirement defined by the business is simply to build an ad-hoc query environment for a transaction system; System Object Analysis, which enables users to navigate the requirements with inputs from the technical team, becomes almost mandatory for building an effective BI system.