
Monday 1 September 2008

Business Intelligence Challenge – Understanding Requirements, System Object Analysis

In an earlier discussion we looked at understanding BI requirements through User Object Analysis; now let us look at another aspect.
What makes building BI systems unique compared to other systems is that BI systems are built over the data collected by transaction (source) systems, for the purpose of effective data analysis. In principle a BI system should enable any kind of analysis on the data from its source(s), but in many cases we initially pull only the required elements into the data warehouse, based on predefined analyses, to get the BI system up. Requirements definition for a BI system is therefore about defining the scope in terms of which business processes, scenarios and data are of immediate need, and making them available for analysis.
Even though system owners and functional experts provide details of the transaction system, there are still many data elements and relationships that are not reachable through inputs from the business. Most of us have experienced new scenarios pointed out by the business, like 'this data element should not be updated' or 'we need the value to be populated based on a certain flag', emerging during the testing phase or in production. Such surprises occur not because the requirements keep changing but due to a lack of understanding of the actual scenarios present in the source system's data.
The means of understanding the business process and the system functions of a source system by looking at its data elements and their values is called ‘System Object Analysis’.
Following are the steps in 'System Object Analysis':
1. Collect all tables from the source system, along with physical structure metadata like table name, column name, data type, etc.
2. Define descriptions of the kind of data each of these tables stores
3. Group the tables by function, based on the descriptions or on naming conventions present among the tables. Certain tables or groups can be eliminated here through interaction with the users; a table can also belong to multiple groups
4. Reverse engineering the underlying data model would be useful as well
5. Perform data profiling on each of the tables (a small sketch of this step follows the list below)
6. Understand the domain values, their significance in terms of when each value can occur, and the relationships between tables
7. Determine the different scenarios by which data arrives in each table
8. Determine the facts, dimensions and dimension attributes within each functional area/group
9. Now, with clear details on each group and the facts and dimensions it contributes, prepare questions that the business can get answered within and across the functional areas (groups). Validate the questions and possibly collect more questions from the business
10. Present to the business what can be done with the system, prioritize and prepare the implementation plan
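As a minimal sketch of the profiling step, here is a Python example; the connection string and table name are hypothetical placeholders, and a real source system would need its own driver and credentials:

import pandas as pd
from sqlalchemy import create_engine

# hypothetical connection string; replace with the actual source system details
engine = create_engine("oracle+cx_oracle://user:pwd@source-db")

def profile_table(table_name):
    df = pd.read_sql_table(table_name, engine)
    return pd.DataFrame({
        "null_pct": df.isna().mean(),    # how sparsely each column is populated
        "distinct_count": df.nunique(),  # flag/domain columns have few distinct values
        "sample_values": [df[c].dropna().unique()[:5].tolist() for c in df.columns],
    })

# columns with a handful of distinct values are candidates for the
# domain-value analysis in step 6
print(profile_table("ORDER_HEADER"))

Columns that profile with very few distinct values usually carry the flags and status codes whose scenarios surface as surprises later, which is exactly what this analysis is meant to catch early.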
Based on the analysis of the tables, the groups or functional areas defined initially can undergo changes in terms of the table list within a group, and even a new group can come up. Throughout the above steps, regular interaction with the business users happens and the requirements of the BI system get defined.
Benefits of System Object Analysis
Ensures a complete understanding of the process by which data gets modified in the source system, enabling the team to deliver more than what the business asks for
Helps group and prioritize requirements, build the case for dependencies and prepare the roll-out plan
Provides a means to trigger requirements definition from users through an interactive process, and lets us raise many questions to the business about their system and process
Many a time the requirement defined by the business is simply to build an ad-hoc query environment for a transaction system; System Object Analysis, which enables users to navigate the requirements through inputs from the technical team, then becomes almost mandatory for building an effective BI system.

Saturday 30 August 2008

Introduction to External Business Component (EBC)

In almost every implementation of the Siebel application we come across the requirement "bring the data from an external source and display it". In Siebel, the solutions for this are the External Business Component (EBC) and the Virtual Business Component (VBC). This blog is specific to EBC, so let me give the definition of EBC first and then the process of creating one.
Definition:
An EBC is used when there is a need to display data that is external to Siebel. The external schema is imported into Siebel tables using a wizard. Once the external schema is imported, displaying this data in an applet involves the same configuration as creating a BC, an applet, a view, etc.
Steps to create one:
1. Get the DDL file for your external table. Here is what a sample DDL file looks like:
CREATE TABLE TPMS.EBC_VEC
(
demo1 VARCHAR2(20),
demo2 VARCHAR2(20),
demo3 NUMBER(10)
)
2. Use the Siebel object creation wizard to import this table:
File –> New Object –> External Table Schema Import
3. The wizard will ask for the following inputs:
i. Select the project this table will be part of from the list
ii. Select the database where the external table resides – for this example it is Oracle Server Enterprise Edition
iii. Specify the full path of the file where the table definition resides
iv. Specify a 3-digit batch code for this import – e.g. 001
v. Click Next and then click Finish
4. This will create your external table, with a name like EX_001_0000001. The names of external tables begin with “EX_”, the next 3 characters are the batch code and the rest is a serial number.
* The Type field will be “External” for this table.
* You will also have to map one of the table columns to Siebel’s Id field. To do this, go to the desired table column and in the “System Field Mapping” column select “Id”.
5. Changes now need to be made in the cfg file; follow the steps below:
  • Create an entry for the new data source under the [DataSources] section
TPMS = TPMS
  • Add a new section [TPMS] to describe the data source parameters:
[TPMS]
Docked = TRUE
ConnectString = VECDEV
TableOwner = TPMS_INT
DLL = abc.dll
SqlStyle = OracleCBO
DSUserName = vecdev
DSPassword = vecdev
  • Now that you have defined the data source in the cfg file, go back to Siebel Tools and add the data source to your external table: go to the external table's Data Source list and add a new record:
    Name = TPMS
  • The external table is now ready for use in an EBC.
Use the Siebel object wizard to create a BC based on this table. Once the BC is created, change the Data Source property of the BC to “TPMS”. You are now ready to use this BC in an applet/view.
The above process describes an external data source called TPMS, from which we fetch data into Siebel.
But what if we come across a slightly more complex requirement? Suppose the data is in Siebel but it should not be modified from the front end or from the back end (unless one has the right to do so), just like an external data source schema. Or, to put it differently: can we build an EBC on the same database we are working on, i.e. the Siebel database?
The answer is yes, we can create an EBC based on the same database, but for that we need to create a different DSN and then follow the steps given above.
Please feel free to post comments/questions/ideas.

Monday 25 August 2008

To Build or Buy? – The Answer is ROI

For Business Intelligence project managers, sponsors and decision makers, things are getting a lot more interesting (and complicated) with the advent of packaged BI applications. Packaged BI is not new, but the domain has been getting a big push in recent years from all the major enterprise application vendors.
The logic behind packaged BI looks sound and bullet-proof. It goes like this: the enterprise application vendors understand the business aspects very well and have handled complexity of a high order. Their collective experience over many years has been distilled into specific BI solutions (Financials, Supply Chain, Operations, Sales etc.), and these come packaged with data models, pre-built ETL jobs, standardized reports and high-end predictive analytics. As an example, take a look at this blog describing the packaged BI Applications from Oracle.
So what’s the problem – Why can’t everybody buy packaged BI applications and live happily ever after?
It appears that the choice is not so simple. Packaged BI has certain drawbacks, some of which are outlined below:
Packaged BI imposes a certain way of capturing business entities and metrics (euphemistically termed best practices), which might go against an organization’s way of doing things.
The pre-packaged data integration (ETL) jobs stay relevant only for a plain-vanilla implementation of the enterprise apps.
Customization done to transaction systems requires customization of the pre-packaged ETL jobs and reports, which involves considerable effort and is error-prone.
Packaged BI apps come with embedded ETL and reporting tools that might differ from the already chosen enterprise standard tools.
From my own experience, I have seen that the packaged BI comes with so many entities and attributes for each domain that it appears “bloated” for companies taking a first step into performing analytics for that particular domain.
Ultimately, the current situation is such that BI decision makers are grappling with the question of “Build or Buy”: should I build the BI application from scratch or buy one of the packaged applications? One way to resolve this is to build a strong ROI (Return on Investment) framework for BI initiatives in your organization. ROI is computed by dividing the Net Present Value of cash flows over a time horizon by the initial investment. The details of ROI computation and Hexaware’s proprietary tool for financial assessments in BI will be discussed in subsequent blogs. For now, let’s assume that you have computed the ROI for a build solution and also for a packaged BI solution. Once this is done, the choice becomes a little clearer: if the ROI for the packaged BI solution is better than expected and the organization can manage the typical pains of implementing a packaged solution, then consider the “Buy” option; else look at the “Build” option.
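To make the computation concrete, here is a minimal Python sketch of the ROI formula described above; the cash flows, discount rate and investment figures are purely hypothetical:

def npv(rate, cash_flows):
    # cash_flows[t] is the net cash inflow at the end of year t+1
    return sum(cf / (1 + rate) ** (t + 1) for t, cf in enumerate(cash_flows))

def roi(initial_investment, rate, cash_flows):
    # ROI = NPV of cash flows over the horizon / initial investment
    return npv(rate, cash_flows) / initial_investment

# hypothetical 5-year horizon at a 10% discount rate
build_roi = roi(500_000, 0.10, [150_000, 200_000, 250_000, 250_000, 250_000])
buy_roi = roi(800_000, 0.10, [300_000, 350_000, 350_000, 350_000, 350_000])
print(f"Build ROI: {build_roi:.2f}  Buy ROI: {buy_roi:.2f}")

Running the same computation over different time horizons is what surfaces the twist described next: the build option often wins on a short horizon while the buy option wins on a longer one.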
Now here comes the little twist. In my experience, I have seen customers look at a shorter time horizon, where the ROI of a build solution is typically higher, and then move on to a buy solution with a longer time frame in mind. The extra advantage of this approach is that the organization understands its analytical needs much better before implementing a packaged BI solution. So it is strictly not a “Build vs Buy” question; it can also be a “Build and Buy” scenario.
Thanks for reading. Please do share your thoughts.

Thursday 14 August 2008

End Point in the Business Intelligence Value Chain

An interesting aspect of Business Intelligence is the fact that there are many end points possible in a BI value chain. Let me explain a bit here and build a case for creating “Reference Architectures” in the BI domain.
In my view, there are typically 5 different configurations for the BI value chain, leading to 5 possible end points. They are:
End Point 1: Reporting and Ad-hoc Analysis
This is the most common type of enterprise BI landscape. The objective here is to provide business users with standardized reports and ad-hoc analysis capabilities to analyze the business. With that objective in mind, data warehouses and/or data marts are created as data repositories, with semantic layers on top for analysis flexibility.
End Point 2: Data Hub or Master Data Repository
This is a scenario where the objective is to consolidate data and create master data repositories. The consumption of this master data is typically left to individual consumers to figure out for themselves. The complexity in this type of configuration lies more in the data quality and governance mechanisms around the data hub, as the business value increases only as more systems utilize the hub.
End Point 3: Source Systems
This configuration indicates a fairly mature landscape where the feedback loop from the analytical systems to the operational ones is in place. The concept of Operational BI is built on this foundation: data from transaction systems goes through the analytical layers, gets enriched and reaches its place of origination, with the intent of helping the business make better-informed transactional decisions.
End Point 4: Data Mining Models
This is a configuration that helps organizations compete on analytics. Integrated, subject-oriented, cleansed data taken out of data warehouses/marts is fed into data mining models in a seamless fashion. The results obtained from the data mining exercise are used to optimize business decisions.
End Point 5: Simulations
Here is a configuration that I haven’t seen in practice but have a strong feeling will be the future of BI. I have some experience working with simulation tools (Powersim and Promodel, to name a few), where the idea is to create a model of the business with appropriate leads, lags, dependencies etc. The starting criteria (a set of initial parameters) would typically be fed in by a business analyst, and the output of the model would indicate the state of the business (or the specific business area being modeled) after a period of time. Given this context, I think it would be even more powerful to have the simulation models fed with data from analytical systems in an automated fashion. Presuming that the simulation models are built correctly by experts in that particular area, the output tends to be a better illustration of the future state of the business than “gut feel” extrapolation.
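To illustrate the idea, here is a minimal Python sketch of such a model; the model itself (a customer base with acquisition and churn) and all its parameters are hypothetical, standing in for whatever a real simulation tool would capture:

from dataclasses import dataclass

@dataclass
class State:
    # starting criteria; in the configuration described above these would be
    # fed automatically from the analytical systems rather than keyed in
    customers: float = 10_000
    monthly_acquisition: float = 400
    monthly_churn_rate: float = 0.03

def simulate(state: State, months: int) -> float:
    customers = state.customers
    for _ in range(months):
        # leads, lags and cross-dependencies would enter here in a real model
        customers += state.monthly_acquisition - customers * state.monthly_churn_rate
    return customers

print(f"Projected customer base after 24 months: {simulate(State(), 24):,.0f}")

The point of the configuration is the data feed: when State is populated from the warehouse instead of by hand, the simulation always starts from the current, cleansed picture of the business.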
Outlined above are the 5 different configurations of BI systems. The logical next step from the technology standpoint is to publish reference architectures for each of these configurations. This would help organizations get an idea of the components involved once they decide on a particular configuration for their business.
Reference Architectures and Simulations in BI environments are areas that will be explored more in the subsequent posts.
Thanks for reading. Have a great day!

Tuesday 5 August 2008

Business Intelligence Challenge – Understanding Requirements, User Object Analysis

Let us start with the Law of (BI) Requirements: “Requirements can not be created nor destroyed; they can only be transformed from one form to another”. The thought is that in every customer environment the requirements for a BI system are always available in some form or the other. We need to find the ‘base object form’ of the requirements and build upon it for further improvement.
In general, the data in every transaction system gets analyzed and reported in one way or another; a BI system is built only to make that process of analysis easier and more sophisticated. Typical requirements ‘understanding’ has been through questionnaires, interviews and joint discussions. These kinds of requirements gathering can miss things the user needs, because we might not ask the right questions, or the user is not in a good mood during the discussion, or the user provides only the details he can remember at that point in time. When we are talking about thousands of users located across the globe, it becomes a much bigger challenge.
The way to cover all aspects of requirements understanding from a user perspective is to analyze the objects that a user ‘creates or uses’ in his day-to-day activities; we can call this ‘User Object Analysis’.
A ‘User Object’ is any artifact that a user creates as part of his data preparation, analysis and reporting. This object could be an Excel sheet, a PowerPoint slide, an Access database, a Word document, a notepad file or an e-mail.
Following are the steps in ‘User Object Analysis’:
  • Collect all the ‘Objects’ from all users; the objects collected can go back across years, but the key is to collect everything the user feels is relevant and applicable
  • Convert the content of each ‘User Object’ into a relational structure; the conversion process involves mapping the data in the objects to metadata like business names/elements, tables-columns, username, department etc. (see the sketch after this list)
  • Analysis of this collected metadata gives a wider view, enables questioning, makes us understand the needs of the users, and enables us to define improvements or provide another perspective on existing ones
  • Prepare and submit the ‘User Object Analysis’ report highlighting the needs of each user (or user cluster) to get user confirmation
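As a sketch of the conversion step, here is a minimal Python example that walks a folder of collected Excel objects and flattens them into rows of metadata ready for relational storage; the folder layout and column names are hypothetical:

import os
import pandas as pd

records = []
root = "/shared/user_objects"  # hypothetical staging folder, one subfolder per user
for dirpath, _, files in os.walk(root):
    for name in files:
        if not name.endswith((".xls", ".xlsx")):
            continue  # only Excel objects in this sketch; Word, Access etc. need their own readers
        # one DataFrame per sheet; each sheet's header row carries the business names
        sheets = pd.read_excel(os.path.join(dirpath, name), sheet_name=None)
        for sheet, df in sheets.items():
            for col in df.columns:
                records.append({"user": os.path.basename(dirpath), "object": name,
                                "sheet": sheet, "business_name": str(col)})

metadata = pd.DataFrame(records)
# business names shared by many users hint at the user clusters mentioned below
print(metadata.groupby("business_name")["user"].nunique().sort_values(ascending=False).head(20))

Grouping the resulting metadata by business name or by user is what turns a pile of spreadsheets into the wider view described in the analysis step.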
Benefits of User Object Analysis
  • An effective means of understanding the needs of a user based on what he does as a daily routine
  • Easy for the user, as he just has to read through the final report for approval and need not provide inputs through questionnaires or discussions
  • Easily managed for users in large numbers or at multiple locations
  • A good base for defining improvements to the existing process of analysis
  • A platform to consolidate needs across multiple users and carve out the user clusters who perform the same kind of analysis
  • Enables us to think through the business process and improves business understanding
Next time let us discuss another perspective on requirements understanding called ‘System Object Analysis’.

Tuesday 8 July 2008

Competencies for Business Intelligence Professionals

The world of BI seems to be so largely driven by proficiency in tools that I was stumped during a recent workshop when we were asked to identify BI competencies. The objective of the workshop, conducted by the training wing of my company, was to identify the competencies required for different roles within our practice and to define 5 levels (Beginner to Expert) for each of the identified competencies.
We were a team of 4 people and started listing the areas where expertise is required to be a successful BI practice. For the first version we came up with 20-odd competencies ranging from architecture definition to tool expertise to data mining to domain expertise. This was not an elegant proposition, considering that for each competency we had to define 5 levels and also create assessment mechanisms for evaluating them. The initial list was far too big for any meaningful competency building, so we decided to fit it all into a maximum of 5 buckets.
After some intense discussions and soul searching, we came up with the final list of BI competencies as given below:
1) BI Platform
2) BI Solutions
3) Data Related
4) Project / Process Management
5) Domain Expertise
BI Platform covers all tool-related expertise, ranging from working on a tool with guidance to being an industry authority on specific tools (covering ETL, databases and OLAP).
BI Solutions straddles the spectrum of solutions available out of the box, from packages available with system integrators to jump-start BI implementations at one end (for example, Hexaware has a strong proprietary solution around HR Analytics) to packaged analytics provided by the major product companies at the other (examples are Oracle Peoplesoft EPM, Oracle BI Applications (OBIA), Business Objects Rapid Marts etc.).
Data Related competency has ‘data’ at its epicenter. The levels here range from understanding and writing SQL queries at one end to Predictive Analytics / Data Mining at the other. We decided to keep this as a separate bucket because it is critical from a BI standpoint: nobody has as much “data” focus as the tribe of BI professionals.
Project / Process Management covers all aspects of managing projects, with specific attention to the risks and issues that can crop up during execution of Business Intelligence projects. This area also includes the assimilation and application of software quality processes such as CMMI for project execution and Six Sigma for process optimization.
The fifth area was “Domain Expertise”. We decided to keep this as a separate category considering the fact that for BI to be really effective it has to be implemented in the context of that particular industry. The levels here range from being a business analyst with the ability to understand business processes across domains to being a specialist in a particular industry domain.
This list can serve as a litmus test for all BI professionals to rate themselves on these competencies and find ways of scaling up across these dimensions.
I found this exercise really interesting and hope the final list is useful for some of you. If you feel that there are other areas that have been missed out, please do share your thoughts.
The team involved in this exercise: Sundar, Pandian, Mohammed Rafi and I. All of us are part of the Business Intelligence and Analytics Practice at Hexaware.

Thursday 26 June 2008

Lessons From CMMI (A Software Process Model) For BI Practitioners

Hexaware recently completed its CMMI Level 5 re-certification, with KPMG auditing and certifying the company’s software process to be in line with Version 1.2 of the model. This is the highest level in the Capability Maturity Model Integration (CMMI) model developed by the Software Engineering Institute at Carnegie Mellon. For the uninitiated, CMMI is a process improvement approach that provides organizations with the essential elements of effective process.
Now, what has CMMI got to do with Business Intelligence?
I participated in the re-certification audit as one of the project managers and learnt some lessons which I think would be useful for all of us as BI practitioners. The CMMI model has 22 different process areas covering close to 420-odd specific practices. Though the specifics are daunting, the ultimate goal of the model is simple to understand, and therein lies our lesson.
In the CMMI model, Maturity Levels 2 and 3 act as building blocks in creating the process infrastructure that makes the higher maturity levels achievable and sustainable. The high-maturity practices (Levels 4 and 5) of the model focus on the following loop (a small sketch follows the list):
1) Establish quantitative goals in line with the business objectives
2) Measure performance with respect to the goals using statistical tools
3) Take corrective action to bring performance in line with the goals
4) Measure again to ensure that the action taken has contributed positively to performance improvement
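Here is a minimal Python sketch of the measurement step; the metric (weekly defect density) and its values are hypothetical, and real high-maturity practice uses richer statistical tools than a plain 3-sigma check:

from statistics import mean, stdev

# hypothetical defect-density measurements from a stable baseline period
baseline = [2.1, 1.8, 2.4, 2.0, 1.9, 2.2, 2.1, 2.3]
center, sigma = mean(baseline), stdev(baseline)
ucl, lcl = center + 3 * sigma, center - 3 * sigma  # 3-sigma control limits

# new weekly measurements checked against the goal-derived limits
for week, value in enumerate([2.2, 2.0, 3.1], start=1):
    if not (lcl <= value <= ucl):
        # an out-of-control point triggers the corrective action in step 3
        print(f"Week {week}: {value} outside [{lcl:.2f}, {ucl:.2f}] - investigate")

The same close-the-loop pattern, measured against business goals instead of process goals, is what the lessons below ask of BI systems.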
Key Lessons for BI Practitioners:
1) Single-minded focus to “close the loop” – CMMI model evaluates every project management action in the context of project goals and measures them quantitatively. Business Intelligence, ideally, should measure all actions in the context of business goals and provide the facility to compare metrics before and after the decision implementation.
2) Strong information infrastructure – Higher levels of maturity in CMMI are sustainable only if the lower maturity levels are strongly established. In the BI context, this translates to a robust architecture that makes such measurements possible.
3) Accuracy + Precision is the key – Controlling variation (sustainability) is as important as hitting your targets. BI in organizations is weak along the sustainability dimension. For instance, enterprises do have analytics around “How am I doing now” but not much on questions like a) How long will this growth continue? b) When will we get out of this declining trend? etc.
In a way, this post relates to one of my earlier blogs on BI and Six Sigma, with the central idea being that, for enterprises to be analytics-driven, both the numbers and the processes behind those numbers are equally important. The CMMI model, in its simplest form, has that as its core theme for achieving high process maturity in an organization.
Thanks for reading and please do share your thoughts.

Monday 9 June 2008

Hybrid OLAP – The Future of Information Delivery

As I get to see more enterprise BI initiatives, it is becoming increasingly clear (at least to me!) that when it comes to information dissemination, Hybrid Online Analytical Processing (HOLAP) is the way to go. Let me explain my position here.
As you might be aware, Relational (ROLAP), Multi-dimensional (MOLAP) and Hybrid OLAP (HOLAP) are the 3 modes of information delivery for BI systems. In a ROLAP environment, the data is stored in a relational structure and is accessed through a semantic layer (usually!). MOLAP, on the other hand, stores data in a proprietary format, providing users the notion of a multi-dimensional cube. HOLAP combines the power of both ROLAP and MOLAP systems and, with the rapid improvements made by BI tool vendors, seems to have finally arrived on the scene.
In my mind, the argument for subscribing to the HOLAP paradigm goes back to the “classic” article by Ralph Kimball on the different types of fact table grains. According to him, there are 3 types of fact tables – transaction grained, periodic snapshot and accumulating snapshot – and at least 2 of them are required to model a business situation completely. From an analytical standpoint, this means that operational data has to be analyzed along with summarized data (snapshots) for business users to take informed decisions.
Traditionally, the BI world has handled this problem in 2 ways:
1) Build everything on the ROLAP architecture. Handle the summarization either on the fly or through summarized reporting tables at the database level. This is not a very elegant solution, as everybody in the organization (even those analysts working only with summarized information) gets penalized by the slow performance of SQL queries issued against the relational database through the semantic layer.
2) Profile users and segregate operational analysts from strategic analysts. Operational users are provided ROLAP tools while business users working primarily with summarized information are provided their “own” cubes (MOLAP) for high-performance analytics.
Both solutions are rapidly becoming passé. In many organizations now, business users want to look at summarized information and, based on what they see, need the facility to drill down to granular-level information. A good example is analyzing ledger information (income statement & balance sheet) and then drilling down to journal entries as required. All this drilling has to happen through a common interface – either an independent BI tool or an enterprise portal with an underlying OLAP engine.
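As a minimal sketch of that drill pattern – the grain rules, names and query strings here are my own illustration, not any specific tool's API – a common interface routes summary-grain requests to the cube and detail-grain requests to the relational store:

from dataclasses import dataclass

@dataclass
class Query:
    measure: str
    grain: str  # e.g. "account_month" (summary) or "journal_entry" (detail)

SUMMARY_GRAINS = {"account_year", "account_month"}  # grains served by the MOLAP cube

def run(query: Query) -> str:
    if query.grain in SUMMARY_GRAINS:
        return f"MDX against the cube: {query.measure} by {query.grain}"   # MOLAP path
    return f"SQL against the warehouse: {query.measure} by {query.grain}"  # ROLAP drill-through

# a user starts at the ledger summary, then drills to journal entries
print(run(Query("amount", "account_month")))
print(run(Query("amount", "journal_entry")))

The user sees one interface; the engine decides, grain by grain, whether the cube or the relational database answers.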
This is the world of HOLAP and it is here to stay. The technology improvement that is making this possible is the relatively new wonder-kid, XMLA (XML for Analysis). More about XMLA in my subsequent posts.
As an example of HOLAP architecture, you can take a look at this link to understand the integration of Essbase cubes (MOLAP at its best) with OBIEE (Siebel Analytics – a ROLAP platform) to provide a common semantic model for end-user analytics.
Information Nugget: If you are interested in Oracle Business Intelligence, please do stop by http://www.rittmanmead.com/blog/. The articles there are very informative and thoroughly practical.
Thanks for reading. Please do share your thoughts.