
Monday 19 November 2012

HP DIAGNOSTICS


Overview
Identifying and correcting availability and performance problems can be costly, time-consuming, and risky. IT organizations often spend more time identifying an owner for a problem than resolving it.
HP Diagnostics helps improve application availability and performance in pre-production and production environments. HP's Diagnostics software is used to drill down from the end user into application components and cross-platform service calls to resolve the toughest problems, including slow services, methods, and SQL, out-of-memory errors, threading problems, and more.

How HP Diagnostics software works
During a performance test, HP Diagnostics software traces J2EE, .NET, ERP, and CRM business processes from the client side across all tiers of the infrastructure. The modules then break down each transaction's response time into time spent in the various tiers and within individual components.

• An easy-to-use view of how individual tiers, components, memory, and SQL statements impact the overall performance of a business process under load. During or after a load test, you can inform the application team that the application is not scaling and provide them with actionable data.

• The ability to triage and find problems effectively with business context, which enables you to focus on the problems impacting business processes.
Why? The Benefits
Diagnostics falls into the middle ground between Quality Assurance and Operations Performance Validation.
For developers, having Diagnostics means that tracing code does not have to be added and removed by hand; this instrumentation-free approach is a big part of how Diagnostics helps improve performance work.
Diagnostics is the science of pinpointing the root cause of a problem. LoadRunner is the first load testing tool to provide a set of Diagnostics modules that trace, time, and troubleshoot end-user transactions across all tiers of the system. These modules extend LoadRunner to provide a unified view of both end-user experience and application component (method, SQL) level performance. The intuitive visual interface allows the user to drill down from a problematic business process all the way to the poorly performing component. This granularity of results ensures that every load test provides development with actionable results, thus reducing the cost and time required to optimize J2EE/.NET applications.
Diagnostics can be integrated with HP Business Availability Center software, HP LoadRunner, and HP Performance Center.
Response times alone do not make a complete report; stakeholders (the client, developers, and others) want to know where the bottlenecks are and why they occur. Identifying the root cause of a bottleneck, both where it is and why it happens, is a core part of performance engineering.

Any application framework we test has numerous lines of code. It is difficult for a developer to identify why the application responds more slowly under load if we give them only response times; the team that has to fix the problem will be left wondering which part of the code and which methods are causing the increased response time.

Supported platforms
• WebSphere, WebLogic, Oracle 10g, SAP Web Application Server, JBoss, Tomcat, Sun ONE, ATG, Borland ES, FUJITSU Interstage, Tmax Soft JEUS, .NET 1.1 to 3.5
• WebSphere Portal Server, WebLogic Portal Server, SAP Enterprise Portal, Oracle 12i applications

Consider a J2EE/.NET framework
Probes are installed on each layer (web, application, and database), and the Diagnostics tool collects metrics from them, illustrating the behavior of each layer when a request is sent.
Key concerns when it comes to metrics:
1. J2EE/.NET framework: average method response time
2. J2EE/.NET framework: server request response time
3. J2EE/.NET framework: server method calls per second

When Diagnostics is invoked directly, we have the following metrics:
1. Average memory used
2. Average CPU used
3. JVM heap memory used
4. Connection pool, Thread pool
5. Collection leaks
6. EJB Methods /time
7. Server requests/time
8. Worst transaction
9. Worst SQL Queries
10. Network latency
11. Server request -exceptions

The consolidated report should answer clearly, for each audience:
Developer: which method or part of the code should be fixed (methods and calls)?
DBA: which query should be tuned (are appropriate indexes used for the query)?
Integration team: is an increase in servers or CPU necessary for scalability?

Key Functions of Diagnostics:
Probes capture various metrics (such as JVM heap size, garbage collection frequency, and method invocation counts) and pass the metric data to the Profiler web service, which is installed with, and runs on the same server as, the probe. The Profiler produces pages in HTML or XML format that can be parsed dynamically by scripts running within LoadRunner and stored as user-defined data points alongside the metrics LoadRunner maintains itself (such as the number of Vusers running concurrently).
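
As a rough sketch of that scripted approach (the Profiler URL, port, and XML tags below are hypothetical; inspect your own Profiler page to find the real ones), a Vuser could capture a metric from the Profiler page and record it as a user-defined data point:

Action()
{
    // Capture the heap-used figure from the Profiler page.
    // The LB/RB boundaries are hypothetical; match them to your Profiler's actual markup.
    web_reg_save_param("HeapUsed", "LB=<heapUsed>", "RB=</heapUsed>", LAST);

    // Hypothetical Profiler URL; substitute your probe host and port.
    web_url("ProbeMetrics",
        "URL=http://probehost:35002/profiler/metrics.xml",
        LAST);

    // Store the captured value as a user-defined data point so it
    // appears alongside LoadRunner's own metrics in Analysis.
    lr_user_data_point("JVM Heap Used", atof(lr_eval_string("{HeapUsed}")));

    return 0;
}
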
The HP (formerly Mercury) Tuning Console product tracks the impact of server configuration changes on these metrics.

When many application servers are involved, the Diagnostics add-in for LoadRunner displays metrics obtained from the Diagnostics Server, also called the Commander, which stores data from the Collectors and Mediators that filter and aggregate the data obtained from the probes on the application servers.
Probe Profiler Tabs
Below is a sample of the probe metric pages; a few of the tabs and metrics are listed here.

Summary
Memory
Load
Slowest Requests
Hotspots
Slowest Methods
CPU Hotspots (Methods)
Slowest SQL
Metrics
System (Host): CPU, Memory Usage, PageInsPerSec, PageOutsPerSec, Disk, Network
JVM: Probe: HeapFree, HeapTotal, HeapUsed
Java Platform: Classes, GC, Threads
Mercury System
WebLogic: EJB, Execute Queues, JDBC, etc.

In summary, this report plays a major role in making the application perform as desired by the user (fast and scalable); response times can be brought down by fixing these issues.
Diagnostics is therefore the heart and soul of the performance engineering practice.

 

Performing Manual Correlation with Dynamic Boundaries in LR

What is Correlation: Correlation is the process of handling dynamic values in a script. The dynamic value is replaced by a parameter whose value we assign or capture from the server response.

Ways to do correlation: There are two ways to do this Correlation.

They are as follows:

  • Auto-Correlation: the Correlation Engine in the LoadRunner package captures the dynamic value and replaces it with a parameter automatically
  • Manual Correlation: a good understanding of the script and its responses is needed for this. Manual correlation can sometimes be a bit complex, but it is always the preferred method for handling dynamic values in a script

Usually, manual correlation is done by capturing the dynamic value that appears between static left and right boundaries, as in the sketch below.
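
For example, a conventional static-boundary correlation might look like this (the boundaries, URL, and parameter name are hypothetical, for illustration only):

// Register the capture before the request that returns the value.
web_reg_save_param("SessionID", "LB=sessionid=", "RB=&", LAST);

web_url("Login",
    "URL=http://server/login",
    LAST);

// The captured value is then referenced as {SessionID} in later steps.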

Objective: The intention of this article is to give a method that is useful when we want to capture and handle dynamic values even when the left and right boundaries themselves are dynamic.

The solution is quite simple: instead of pinning down exact boundary strings, we can use text flags.

Before Getting into the Topic we should know about the Text Flags:

A text flag is appended just after the boundary text, following a forward slash.

Some of the commonly known and used Text flags are:

  • /IC to ignore the case
  • /BIN to specify binary data
  • /DIG to interpret the pound sign (#) as a wildcard for a single digit
  • /ALNUM<case> to interpret the caret sign (^) as a wildcard for a single US–ASCII alphanumeric character

Case 1: Digit Value

Suppose we want to capture the string "Boundary" from the response data, but the left boundary changes every time: it appears as axb, where x ranges between 0 and 9, as follows:
a0b=Boundaryrb
a1b=Boundaryrb
a2b=Boundaryrb
——–
——–

a9b=Boundaryrb

We can capture the desired string by putting the following correlation function in place, using the /DIG text flag in combination with Left Boundary:

web_reg_save_param("Corr_Param", "LB/DIG=a#b\=", "RB=rb", LAST);

The corresponding place, which you expect to be dynamically filled in with a digit, should be replaced by a pound sign (#).

If there are multiple digits, we can use '##', as below.
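
For instance, if the boundary contains two digits (a00b through a99b), the capture could be written as (same hypothetical boundaries as above):

web_reg_save_param("Corr_Param", "LB/DIG=a##b\=", "RB=rb", LAST);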

Case 2: The boundary is a string whose case may vary (use /IC to ignore case)

web_reg_save_param("Corr_Param", "LB/IC/DIG=a#b\=", "RB/IC=rb", LAST);

Case 3: A place to be filled by either a digit or a letter

web_reg_save_param("Corr_Param", "LB/ALNUM=a^b\=", "RB/IC=rb", LAST);
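
In all three cases, the captured value is then used by referencing the parameter in later steps, for example (hypothetical request):

web_url("NextStep",
    "URL=http://server/page?token={Corr_Param}",
    LAST);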

HP Ajax TruClient – Overview with Tips and Tricks

Overview

  • In LoadRunner 11.5, TruClient for Internet Explorer has been introduced. It is now possible to use TruClient on IE-only web applications.

Note: This still supports only HTML + JavaScript websites. It does not support ActiveX objects or Flash or Java Applets, etc.

  • TruClient IE was developed as an add-in for IE 9, so it will not work on earlier versions of IE. IE 9 was the first version to expose enough of the DOM to be usable by TruClient-style Vusers. Note that your web application must support IE 9 in "standard mode".
  • Some features have also been added to TruClient Firefox. These include:
    • The ability to specify think time
    • The ability to set HTTP headers
    • URL filters
    • Event handlers, which can automatically handle intermittent pop-up windows, etc.
  • Web page breakdown graphs have been added to TruClient (visible in LoadRunner Analysis). Previously they were only available for standard web Vusers.

Tips and Tricks

NTLM authentication -

Scenario: Some applications, when accessed in Firefox, demand NTLM authentication. These authentication steps do not get recorded, and so, due to their absence during replay, the application fails to perform the intended transactions.

Solution: To avoid a situation in which an application asks for NTLM authentication while recording and replaying, one has to specify the application as a trusted NTLM resource. To do that, follow these steps.

  • Open the file "user.js" located in "%lr_path%\dat\LrWeb2MasterProfile".
  • Locate the preference setting "network.automatic-ntlm-auth.trusted-uris".
  • Specify the URL of the trusted resource as the value of this setting (see the example below).
  • Save the file “user.js”
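
For example, the resulting entry in user.js might look like this (the application URL is hypothetical):

user_pref("network.automatic-ntlm-auth.trusted-uris", "http://myapp.example.com");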

These changes need to be made only on the machine where VuGen is used to develop the script. They get saved with the script and apply on other machines during load tests.

Disable pop-ups during recording -

Scenario: The occurrence of unwanted pop-ups creates hurdles during script development.

Solution: To disable the pop-ups, we can do it by following the below mentioned steps –

  • In the Firefox address bar, enter 'about:config' and click the 'I'll be careful, I promise!' button
  • In the filter field, enter disable_open_during_load
  • Right-click 'disable_open_during_load' and select 'Toggle'. The value changes to 'false'
  • Record the initial Navigation step again
  • Your pop-ups will be disabled

Displaying the value in a parameter or variable -

Scenario: To understand the value that gets stored in a parameter while replaying the script.

Solution: This can be achieved using the alert() function.

Example:

var x = "Good Morning";

window.alert(x);

Calculating number of text occurrences -

Scenario: Scripting most modern internet applications, with their many dynamic features, demands this capability, whether it is to check the presence of a piece of text on the web page or to count the number of tickets generated in the application at run time; calculating the number of text occurrences lets you drive the right logical code with that count.

Solution: In TruClient, we can achieve this using JavaScript functions, as follows:

  • Drag 'Evaluate JavaScript code' from the toolbox
  • In the arguments section add the following code:
    var splitBySearchWord = document.body.textContent.split('Text to search for');
  • Then display the total number of occurrences of the text using the alert() method. Note that split() returns one more element than the number of occurrences, hence the minus one:
    window.alert(splitBySearchWord.length - 1);

 

 

Inserting random think time -

Scenario: End-user behavior is unpredictable, and as performance testers executing a performance test, our aspiration should always be to get as close as possible to the real-world scenario. Some end users may spend only 2 seconds before navigating to the next page, while others may think for longer. Hence, in many test scenarios it is not ideal to insert a fixed think-time value before a web request; one should use a random think time instead.

Solution: The above scenario can be achieved using JavaScript, as follows:

  • From 'Toolbox', copy a wait function and paste it before the web request
  • In the argument section, replace the interval value 3 with Math.floor(11*Math.random() + 5);

The above expression returns a random integer between 5 and 15.

Math.floor() rounds a number down to the nearest integer (e.g., Math.floor(1.8) returns 1). Math.random() returns a random number between 0 (inclusive) and 1 (exclusive), so 11*Math.random() lies between 0 and 11 (exclusive), and adding 5 before rounding down yields an integer from 5 to 15.
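
For comparison, in a conventional C-based Vuser the same randomized think time could be sketched as follows (seed the generator once per Vuser, e.g. srand(time(NULL)); in vuser_init()):

// Random think time of 5 to 15 seconds.
lr_think_time(5 + rand() % 11);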

Handling browser cache -

Scenario: You may wish to manage the cache handling features of the browser to replicate different types of test scenarios.

Solution: This can be achieved by following these steps -

  • Open the script in interactive mode.
  • Go to VUser > Run-Time Settings > General > Load mode Browser Settings
  • Inside the Settings frame, expand the Advanced option
  • Select the option "Compare the page in cache to the page on the network" and choose one of the four values below according to your test requirements

0 = Once per session

1 = Every time the page is accessed

2 = Never

3 = When the page is out of date (Default value)

Conclusion

In Hexaware, we have used the TruClient protocol to record many applications for different clients. Some of the benefits we reaped are as follows: the HP TruClient protocol works with many frameworks, such as jQuery, Ajax, YUI, GWT, and plain JavaScript. Rich internet applications developed on Web 2.0 technologies can be easily scripted and replayed. Script development is interactive, with the script flow on one side of the window and the application open in the browser on the other; this makes scripting with the Ajax TruClient protocol easier and faster. Object identification features minimize the use of complex correlations and make scripts more dynamic, so the scripts become more resilient to back-end changes. Complex client-side events like mouse-overs, slider bars, calendar items, and dynamic lists can be very easily scripted, customized, and replayed. The testing cycle is thus much shorter with Ajax TruClient than with other web protocols. Using Ajax TruClient, API + GUI response time can be obtained, as opposed to other protocols that provide only API response time.

 

XML Optimization through custom Properties

1. Problem Statement:

I am creating an XML file as output. If my source is empty, is there a way to avoid the creation of an empty XML file?

Sample output data with source data:


 

Case 1: Empty Source – Creation of a Minimal XML file

We have to set the following properties of an XML Target at session level under the Mapping tab.

Null Content Representation – “No Tag”

Empty String Content Representation – “No Tag”

Null Attribute Representation – “No Attribute”

Empty String Attribute Representation – “No Attribute”

The Output file is as follows

Note: It generates a minimal XML file containing only the parent tags, which appear as unary tags in the browser, as illustrated below.
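
For illustration, such a minimal file might look like the following (the Employees/Employee element names are hypothetical):

<?xml version="1.0" encoding="UTF-8"?>
<Employees>
   <Employee/>
</Employees>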

Case 2:  Creation of Zero Byte XML file.

Even after setting all the above properties, you will still get an XML file with no data or with only parent tags. If a downstream system like MFT (Managed File Transfer) consumes this garbage file, you will end up with errors while processing. To avoid these kinds of errors, we have to set two custom properties on the Integration Service:

WriteNullXMLFile = No

The WriteNullXMLFile custom property skips creating an XML file when the XML Generator transformation or the XML target does not receive data. The default value for this property is Yes; if you set it to No, the minimal XML document will not be generated and the target XML file will be zero bytes.

 

2) Suppress the Empty Parent Tag

 

A PowerCenter session with an XML target writes empty parent tags to the XML file when all child elements are null.  This may occur even when the Null Content Representation option is set to No Tag in the session properties.

SuppressNilContentMethod = ByTree

The SuppressNilContentMethod server parameter will suppress the parent tags as well as the child tags when all the child elements are null. To achieve this, set the custom property to “ByTree”.

 

 

ByTree

The ByTree flag suppresses non-leaf elements up to (but not including) the document root when the entire element chain originating at the specified element contains no data. The ByTree flag is generally the best choice.

For example, suppose the Street1 and Street2 values are empty. Without setting the property, you get output containing a unary Street tag; if you set SuppressNilContentMethod=ByTree, the entire Street tag vanishes, as sketched below.
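
A hypothetical before-and-after (an Address document root with a Street element is assumed purely for illustration):

Without the property:
<Address>
   <Street/>
</Address>

With SuppressNilContentMethod=ByTree:
<Address/>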

3) To reduce the session log size when using an XML target

XMLWarnDupRows =No

By default it is Yes, meaning the Informatica server writes duplicate-row warnings and duplicate rows for XML targets to the session log; setting it to No suppresses them and keeps the log smaller.

4) To reduce the cache file size created by the XML target and increase the performance of reading large XML files:

XMLSendChildFirst=Yes

How to set the Custom Properties?

Infa 8.x and Above

1. Connect to the Administration Console

2. Stop the Integration Service

3. Select the Integration Service

4. Under the Properties tab, click Edit in the Custom Properties section

5. Under Name enter WriteNullXMLFile

6. Under Value enter No

7. Under Name enter SuppressNilContentMethod

8. Under Value enter ByTree

9. Click OK

10. Restart the Integration Service

Starting with PowerCenter 8.5, this change can be made at the session task itself, as follows (these session-level custom properties override the Integration Service level properties):

1. Edit the session

2. Select Config Object tab

3. Under Custom Properties add the attribute WriteNullXMLFile=No and SuppressNilContentMethod=ByTree

4. Save the session

Session Properties:

Advanced Replication Setup for High availability and Performance

In my personal opinion, Oracle leads the market in directory product offerings (LDAP directories). From Oracle Internet Directory (OID) to the latest Oracle Unified Directory (OUD), Oracle provides a variety of LDAP Directory-related products for integration.

With the increasing demand for mobile and cloud computing offerings, there is a need to standardize LDAP deployments for Identification, Authentication and (sometimes) Authorization (IAA) services. With a highly scalable, high-performing, highly available, stable, and secure LDAP Directory, these IAA services become easier to integrate with applications in the cloud or with mobile applications.

Introduction

Oracle Unified Directory (OUD) is the latest LDAP Directory offering from Oracle Corp. As mentioned in my previous post, OUD comes with three main components. They are:

  • Directory Server
  • Proxy Server
  • Replication Server

Here, the Directory Server provides the main LDAP functionality (I assume you already know what an LDAP Directory Server is). The Proxy Server is used to proxy LDAP requests. And the Replication Server is used for replicating (copying) data from one OUD instance to another OUD instance, or even to an ODSEE server (we will talk more about replication in this post). You can read my first post on OUD here. In this article, I will write about the Replication Server and an advanced replication setup for Oracle Unified Directory.

Many people want a step-by-step guide (a kind of cheat sheet) for setting up something like OUD or OID replication. Unfortunately, I am not going to give you that here. In my personal opinion, a cheat sheet is not the right approach and will not be helpful in the long run for building concepts or knowledge. We need to give importance to the basic concepts behind how something works.

First of all, read OUD Documentation

Product documentation must be read before you plan your deployment. You can find the OUD documentation here. This link is for OUD version 11.1.1; make sure to refer to the latest product manual. The documentation provides a lot of detail about the product and saves a lot of investigation time later. For replication, you need to start with the "Architecture Reference" guide.

When do you want to setup replication?

There should be a reason, right? If there is no reason, then there is no need for you to setup replication at all. Instead, you can have a beer and pass the time happily doing something else.

Ideally, you need replication setup for “High Availability” and “Performance”. Usually, there will be multiple instances of OUD Directory Server processes running in Production. Let’s say we need to have around four OUD Directory Servers (and four more for Business Continuity/Disaster Recovery).

Unfortunately, there is no single process that updates all eight OUD Directory Servers in our example. We need a mechanism to synchronize the directory entries across these servers. For this, we use the OUD Replication Server component.

Securing the Replication Traffic

We do not want network sniffers carrying away critical user information (this is possible even inside the internal network). We need to encrypt the traffic between the replication servers. Do not consider setting up replication server communication without encrypted traffic.

Since OUD provides identity data, all its network traffic is prone to sniffing attacks. Always use encrypted or secure connections to OUD or to any LDAP Directory.

Deciding a Replication Method to use

The next important thing is to decide which replication method you are going to use. This is mostly site specific, and you need to know a lot of details before deciding on a replication method. I am planning to use the following sample architecture for this post. Let's understand our sample OUD architecture first.

 

Here are the quick components of the architecture:

  • We have one master OUD server called PROD-01. All updates to the directory happen here. Most probably, an HR system will update the directory. Updates can also happen through a custom-developed application plug-in for the LDAP Directory, or through an Identity and Access Management (IAM) system such as Oracle Identity Manager or Tivoli Identity Manager.
  • PROD-02 will be used with PROD-01 for High Availability and Performance in this Production Deployment.
  • In the Disaster Recovery deployment, we have the PROD-03 and PROD-04 servers. These servers need to synchronize the user data from the master server PROD-01.

One way to set this up is to have an Identity and Access Management (IAM) system (such as Oracle Identity Manager or Tivoli Identity Manager) provision users into every one of the OUD Directory Servers. However, this provisioning can be time consuming, because it is treated as updating several different LDAP Directories. A better way to achieve this is to use a Replication Server, as sketched below.
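
As a rough sketch (the host names, ports, and passwords here are hypothetical, and the exact options vary by OUD version, so check the dsreplication reference in the product manual), enabling secured replication between PROD-01 and PROD-02 and initializing the second server might look like this:

# Enable replication between the two servers (secured replication ports).
dsreplication enable \
  --host1 prod-01.example.com --port1 4444 \
  --bindDN1 "cn=Directory Manager" --bindPassword1 password \
  --replicationPort1 8989 --secureReplication1 \
  --host2 prod-02.example.com --port2 4444 \
  --bindDN2 "cn=Directory Manager" --bindPassword2 password \
  --replicationPort2 8989 --secureReplication2 \
  --baseDN "dc=example,dc=com" \
  --adminUID admin --adminPassword password -X -n

# Copy the existing data from PROD-01 to PROD-02.
dsreplication initialize \
  --baseDN "dc=example,dc=com" \
  --hostSource prod-01.example.com --portSource 4444 \
  --hostDestination prod-02.example.com --portDestination 4444 \
  --adminUID admin --adminPassword password -X -n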

We will continue setting up the Replication Server for this architecture in another post. Until then!