Monday, December 14, 2015

HP SiteScope 1 - Introduction



There are many tools available in the market for monitoring application software and hardware resources in pre-production and production environments. Some have diagnostic capabilities that let us dive straight into a bottleneck and figure out the root cause, while others are used passively, just to monitor resources. Some are agent-based, i.e. they communicate with agents installed on the server that are designed to understand the server platform, then correlate those readings and present them to the user, while others are agentless and do not require any installation on the server side.

HP SiteScope is an agentless resource-metrics monitoring tool and one of the most easily deployable monitoring tools around. Its main purpose is to check application availability and the performance of applications and infrastructure in a distributed application environment.

Thursday, December 10, 2015

Mobile Network Conditions Capture (Shunra)

There are huge variations in network quality and network technologies across the globe, and hence testing under realistic network conditions is one of the most important aspects of non-functional software testing.

To test the network conditions that exist in the real world between two points, say between New Delhi (India) and New York (USA), we need the properties of the network between the two locations. Tools are available for collecting this information either by installing agents at the network endpoints or without the use of any agents.

Agent-based network capture tools give us more accurate readings than their agentless counterparts, but they cannot be used in every case. For example, if we want to test the network conditions between our application and Facebook's servers, we would need to install agents on Facebook's servers, which Facebook will surely not allow that easily. In such cases agentless network capture tools come into the picture.

Network conditions captured using these tools may then be used in HP LoadRunner scenarios, via the HP Network Virtualization tool, to emulate the same conditions.

I will be explaining one simple tool for capturing network conditions, available free of cost on the Google Play Store and the Apple App Store, called Shunra Network Catcher Express.




Tuesday, December 8, 2015

Mobile Network Emulation and Performance Testing Using HP Network Virtualization

Performance testing an application that is used from many client devices over networks of widely varying quality is a difficult task. Performance testing without incorporating those different network conditions will produce results that are not close to the real-world user experience and hence cannot be used for capacity planning or tuning of the application.

Network condition testing becomes even more relevant when mobile devices come into the picture, since the network technologies and quality for these devices show the greatest variation.

Below I will explain the process of emulating these real-world network qualities using the HP Network Virtualization tool (formerly known as Shunra) and how these emulated network conditions can be incorporated into HP LoadRunner scenarios. I will walk through the process using a simple scenario containing a single script with a single transaction.


Friday, December 4, 2015

Mobile Application Performance Testing and HP Network Virtualization - Introduction

Mobile Application Performance Testing

Applications installed on mobile phones with different CPUs, memory, screen sizes, resolutions and networks should all perform well.
As in performance testing of web applications, standalone applications or databases, the focus of mobile performance testing is to identify or anticipate performance bottlenecks in the application and to identify and diagnose the reasons for those bottlenecks.

Performance Parameters for mobile performance testing are:

1) Application response time – How fast does the application respond?

2) Reliability – Will the application be able to handle the load in a particular environment? How long can the application run without any issues? How does a particular piece of functionality behave when it is repeated again and again?

3) Configuration – Is this the best configuration for performance? If not, which one is?

4) Scalability – Will the application be able to scale to higher load requirements if required in the future?

5) Transaction response time – How long did it take to complete a particular business process?

6) Bottleneck identification – If any hardware or software in the environment changes, what will be the impact on application performance?

7) Power management – Could the application use the battery more efficiently? How much power does the application use while running for a long time? How does the application's behavior change with the battery level?

8) Resource utilization – How much CPU, memory or network does the application use? How much memory is used when the application is installed? How much memory and CPU does the application use while it is running? What is the throughput of the application? How much bandwidth does it use? And what is the latency in the network?

Focus areas of mobile performance testing –
Mobile performance testing can be performed from four different perspectives.

1) Device perspective –

a) Power management – key performance indicator (KPI) is battery usage.
b) CPU performance – KPI is CPU utilization percentage.
c) Memory – KPIs are cache memory, free memory and used memory.

2) End-user perspective – What we expect from the device so that we can justify the money spent on it.

a)      Transition time
b)      Application launch time
c)       Load time
d)      Page component analysis
e)      Response time
For all these metrics the KPI is the data processed per second by the device, i.e. all the indicators depend on how fast the device processes the application's data, taking into consideration its hardware and software restrictions.

3) Network perspective – Since most applications available on mobile devices use a network, and there are many types of networks available for mobile devices, the following network aspects also require performance testing, because they may severely hamper the performance of an application that depends on a network:

a) Connectivity strength – KPIs are the application's features under low, medium and high signal-strength conditions.
b) Connection switching or handovers – How the application behaves when the network changes from 3G to 2G or vice versa, and in other network scenarios.
c) Call interference – How the application behaves when an incoming voice call arrives while it is in use.

4) Server perspective – Mobile applications often use a server that takes requests from clients and processes them. We need to test this side of the application too.

a)      Backend server load – KPIs are bandwidth, latency and transaction response time.

Measuring performance parameters for Mobile performance testing:

1) Server –

a)      Load
b)      Process time
c)       Bytes total
d)      User time
e)      Packet sent/receive

2) Network –

a)      Packets and Bytes sent
b)      Packets and Bytes received
c)       Average delay/Latency
d)      Packet drops
e)      Bandwidth usage

3) Device –

a)      CPU and Memory usage
b)      Method level profiling
c)       Web application component level
d)      Response time
e)      Page rendering time

4) Transaction –

a)      Response time
b)      Throughput

There are many tools available in the market for testing mobile applications from these different perspectives, and they provide different levels of detail while doing so. Some of the common tools and their features are listed below:

1) HP LoadRunner (Server Perspective)
-          Transaction response time
-          Bandwidth
-          Latency

2) ARO (Device Perspective)
-          Battery level
-          Signal strength
-          Network packets

3)      Anritsu Simulator (Network Perspective)
-          Application behaviour for different network conditions
-          Handovers
-          Incoming call/SMS 

4)   Perfecto Mobile (End-User Perspective)
-          Screen transition time
-          Application launch time
-          CPU consumption
-          Memory consumption

Mobile Load Testing using HP Load Runner and HP Network Virtualization –

The most important aspect of mobile performance testing is the network. As we have seen, network conditions in mobile ecosystems are very diverse, ranging from 2G and 3G to 4G/LTE and beyond. Mobile networks also have coverage and signal-strength limitations, which makes the network one of the most important factors in mobile application performance testing.

LoadRunner provides some basic features for network virtualization. We can find these options in the LoadRunner run-time settings as shown below.

If we go to Replay -> Run-time Settings -> Network -> Speed Simulation, we will find the following three options for network speed simulation –

a)      Use the maximum bandwidth
b)      Use a predefined bandwidth (bps)
c)       Use a custom bandwidth (bps)

Bps here means bits per second.
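To make those bps figures concrete, here is a small Python sketch converting rough downlink speeds for common mobile network types into the bits-per-second values the custom bandwidth setting expects. The speeds below are assumed, illustrative ballpark numbers, not official ratings.

```python
# Rough, assumed ballpark downlink speeds per mobile network type (in kbps).
# These are illustrative values only, not official figures.
typical_downlink_kbps = {
    "2G (EDGE)": 236,
    "3G (UMTS)": 384,
    "3.5G (HSPA)": 7200,
    "4G (LTE)": 20000,
}

for network, kbps in typical_downlink_kbps.items():
    bps = kbps * 1000  # the run-time setting expects plain bits per second
    print(f"{network}: enter {bps} bps as the custom bandwidth")
```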


As we can see above, although LoadRunner provides some basic network simulation options, they are not very robust: networks have many characteristics that cannot be emulated by virtualizing bandwidth alone. Two issues that come to mind immediately are the following –

a) Random network delay (not a consistent delay)

b) Network packet loss (data packets lost during transmission)

Hence we need additional tools to virtualize network properties correctly and make mobile performance tests more realistic. There are many tools in the market to emulate network conditions, but here I will discuss the HP Network Virtualization tool.

The HP Network Virtualization agents are installed on the load generator machines, and the network conditions are applied to the traffic they generate. The network virtualization component called HP Network Editor is installed on the ALM/Performance Center client machine or on the Controller host. HP Network Editor provides the different network simulation options for the performance tester to use in performance test scenarios; some pre-built sample network configurations are also included and can be used as is.

HP Network Virtualization tool set contains following tools –

1) HP Network Catcher – This tool helps to capture real-world network scenarios; it captures the network conditions from one location to another in real time.

There are also tools available specifically to capture mobile network conditions. Once such a tool captures a real-world scenario from a mobile device (iOS or Android), the captured files can be exported and then imported into HP Network Editor for use in a performance scenario.

For example, HP Network Capture Express is a tool available on the Apple App Store and the Google Play Store for iOS and Android phones respectively.

2) HP Network Editor – This tool is used to modify network scenario files. Scenario files are the real-world network condition files collected using HP Network Catcher (.ntxx files).

3) HP Network Virtualization for LoadRunner / Performance Center – This allows us to run load tests with network virtualization enabled.

4) HP Analytics – This tool helps to understand the root cause of a performance bottleneck from a single-user perspective. Its input is the output of a single-user test. It details load times, component download analysis, response time breakdown and errors.

5) HP Predictor – This tool helps to compare application performance against the KPIs after real-world network configurations are applied. It analyzes the test results from HP LoadRunner or HP Performance Center tests run with WAN emulation.

The following diagram shows where these components are installed in an HP Performance Center setup –




Why can't we simply put our load generators in remote locations to emulate the mobile network conditions?

There are many reasons not to do this; some are as follows –

a) Remote load generators cannot be set up on a mobile network.

b) Remote load generators incur more cost in terms of hosting and support, and they are also difficult to manage.

c) To emulate the behavior of the network from a particular location we would need to deploy a load generator at that location, which is near to impossible for every location we want to test.

d) There are limitations on the volume of users that can be tested and on the times at which tests can be executed.

I will cover the complete process of performance testing a mobile application from the network perspective using HP Network Virtualization in another post; please watch out for it.

Wednesday, December 2, 2015

Performance Testing Work Load Model Creation

Performance test workload model – This document is the baseline for the performance test scenarios that need to be executed to test the scalability, durability and availability of the application under test. It contains the characteristics of the Vusers that will be executed to test the system's performance, including the number of Vusers required and the pacing and think times assigned to them.
Observations about the current system behavior and predictions about its future behavior all depend on how accurately the performance test workload model is documented.

If the workload model is not correct, the behavior of the application under test cannot be predicted with the desired accuracy.

The performance test workload model contains –

1) The list of transactions/services being tested from the performance testing point of view.
2) The Vuser, think time and pacing settings used to test these services in the different performance scenarios, such as baseline, peak or endurance tests.

Inputs for creating a performance test workload model –

1) Low-level design document
2) High-level design document
3) Current production load metrics (if the system is already live)
4) Anticipated production load
5) Test environment details
6) Production environment details

Inputs detailed –

The low-level and high-level design documents help us analyze the new changes that are going to be deployed in production and single out the transactions that are impacted by the current release of the application.
New transactions – Transactions that are impacted by the current release and are receiving code changes.
Regression transactions – Transactions that are not impacted by the current release and were part of earlier test suites (if the application was tested earlier).

Current production load metrics – If the system to be tested is already live in production, the details of live transaction volumes need to be pulled from there. This helps us understand current user behavior in production and lets us emulate it accurately in our workload model.

Anticipated production load – If the whole system, or some part of it, is new, the anticipated production volumes are required to document the workload model correctly.

Test environment details – The performance testing environment details are required to come up with an accurate and useful workload model. Most of the time the testing environment is not scaled up to (and is not scalable to) the production environment, so we need to adjust our volumes accordingly. Knowing the test environment details also helps in predicting the future hardware or system capacity requirements of the current system.

There are also many cases where parts of the production system, such as third-party systems, cannot be reproduced in the testing environment. These systems need to be virtualized using service virtualization software such as iTKO LISA, which itself needs further analysis for bottlenecks. Since these virtualized systems are not equally capable and are only a workaround to test a complete transaction flow, we may need to account for them in our workload model.

Production environment details – Just as the test environment details are required, and for the same reasons, we also need the production environment details to document an accurate workload model.

Performance model documentation process:

1) Document the changes going into production for the current release of the application. If the application is going live for the first time, document the business-critical processes.

2) Identify the transactions or business flows that require performance testing. Not all code changes impact the performance of the application; for example, if the label of a button or the text of a hyperlink is modified, it will not impact performance in any way, so such changes can be ignored from the performance testing point of view. Make a list of the transactions identified for testing; this is generally done in a spreadsheet.

3) The list created in step 2 forms the basis of our workload model. The listed transactions are further divided into high-, medium- and low-priority categories based on their business impact and their volumes. For example, a service or transaction that updates a customer's account information or underwrites an insurance product is much more critical than one that merely pulls out the customer's company address. Similarly, a service or transaction that is executed 10 times per second in production is more eligible for testing than one that is executed 50 or 100 times a day.

4) For all the live transactions, the production load during peak traffic needs to be documented; for new transactions this value can be obtained from the application architects or developers. Since the testing environment is not scaled to production, these loads need to be scaled down proportionally. This scaling down is accomplished through the pacing, think times and Vuser counts in the performance scenarios (a short sketch after this list shows how pacing can be derived from a target rate). The values of pacing, think time and Vusers are generally modified, individually or together, to create the different performance scenarios such as baseline, peak and endurance tests. All this information needs to be documented in the workload model.

5) For all the services, expected response time metrics should be specified; these form the baseline for all the testing. For existing services the response times should be taken from production; for new services the SLA should be provided by the application architect or developer.
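As referenced in step 4, here is a minimal Python sketch of how pacing can be derived when scaling a production rate down to the test environment. The production rate, scaling factor and Vuser count below are hypothetical values chosen purely for illustration.

```python
# Minimal sketch: deriving pacing from a target transaction rate.
# All numbers below are hypothetical, for illustration only.

production_rate_per_hour = 7200   # assumed peak transactions/hour observed in production
env_scaling_factor = 0.5          # test environment assumed to be half of production capacity
vusers = 10                       # Vusers allocated to this transaction in the scenario

target_rate_per_hour = production_rate_per_hour * env_scaling_factor
iterations_per_user_per_hour = target_rate_per_hour / vusers
pacing_seconds = 3600 / iterations_per_user_per_hour  # time between iteration starts

print(f"Target rate: {target_rate_per_hour:.0f} transactions/hour")
print(f"Pacing per Vuser: {pacing_seconds:.1f} seconds between iteration starts")
```

The same relationship (pacing = 3600 × Vusers / target hourly rate) can also be rearranged to choose a Vuser count for a fixed pacing.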

Performance model creation explained with an example

1) For a brand new banking project with HTTP requests (also called a greenfield project) –

a) Manually navigate through the application. Understand the user activities on the application by walking through it. Select the most important business activities based on their execution frequency on a high-load business day.

Note – Ignore the negative flows. This is not system integration testing, and negative flows are rarely performance tested.
Divide each flow into individual transactions. A transaction is generally a request from the client to the server, so whenever the client interacts with the server, that instance can be called a transaction, with the transaction name describing the purpose of the interaction.
Now suppose we have the following three business flows for our example banking application –

Flow 1 – User checks balance
Flow 2 – User transfers money to another account
Flow 3 – User requests a checkbook

For simplicity we take a limited set of transactions for these flows, as below:



(Table: the individual transactions in each flow, with the HTTP requests generated and the think time for each transaction.)
b) Gather information about each individual flow. What we mostly need is how many requests per hour the application is expected to receive in production, and how users will use the application, i.e. which business flows will be executed and at what frequency. For our example, assume Flow 1 is executed 60% of the time, while Flow 2 and Flow 3 are executed 30% and 10% of the time in production. Also assume we are expecting 72,000 HTTP requests per hour in production. This information can be provided either by the marketing team or by the application architects.

So the data gathered by the above process is –

Expected requests per hour in production – 72,000.
Flow mix – Flow 1: 60%, Flow 2: 30%, Flow 3: 10%.

The number of requests generated by each flow can be measured with a tool like Fiddler or any other HTTP traffic analyzer. Once we have this information we can start building the workload model.
                  
c) Calculations and derivations for think time, pacing and number of Vusers. We need to answer a simple question to find these values:

If a single user is given one hour to execute these flows, how many requests will be generated?

To answer it, a few simple deductions are needed:

The total number of HTTP requests generated by a flow is the sum of the requests generated by each individual transaction in that flow.
The total time a flow takes to complete is simply the sum of all the think times in that flow.
From the total time taken by a flow we can calculate how many times that flow can be executed per hour.

For example, for Flow 1: its think times add up to 35 seconds, so in one hour it can be executed 3600 / 35 = 102.857 times, and each execution generates 15 HTTP requests.
This is the point where the question we are trying to answer can be answered for Flow 1: if a single user is given an hour and executes only this flow, he will complete 102.857 executions, generating 102.857 × 15 requests. Since this covers only one flow, the answer is incomplete and we move on.

For Flow 2 and Flow 3 the corresponding values work out to 60 executions per hour and 65.55 executions per hour respectively.
Now, since Flow 1, Flow 2 and Flow 3 are assumed to be executed 60%, 30% and 10% of the time in production, a user will devote his hour to them in the same proportions: 0.6 hour to Flow 1, 0.3 hour to Flow 2 and 0.1 hour to Flow 3. So the user will execute Flow 1 0.6 × 102.857 times, Flow 2 0.3 × 60 times and Flow 3 0.1 × 65.55 times.

In one hour a single user will therefore execute roughly 61.71 iterations of Flow 1, 18 iterations of Flow 2 and 6.56 iterations of Flow 3.
If we multiply the number of executions of each flow by the total number of requests in that flow, and then add up the requests across the three flows, we get the number of requests generated by a single user in one hour while executing Flow 1, Flow 2 and Flow 3 in the production proportions.
Moving forward from here is relatively easy. Since we are expected to generate 72,000 requests per hour, and one user generates about 1013 requests in an hour, the total number of users required is 72000 / 1013, i.e. roughly 71 users.

Now we just need to distribute these users among Flow 1, Flow 2 and Flow 3 in the 60% / 30% / 10% ratio.
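The whole derivation above can be captured in a few lines of Python. Flow 1's 15 requests and 35-second think-time total can be read back from the 102.857 figure, but the Flow 2 and Flow 3 request counts below are hypothetical placeholders, so the totals printed here will not exactly match the 1013 requests and 71 users worked out above; treat this as a sketch of the method rather than the exact numbers.

```python
# Sketch of the greenfield workload calculation described above.
# Flow 1's values are consistent with the worked example (15 requests, 35 s of think time);
# the request counts for Flow 2 and Flow 3 are hypothetical placeholders.
flows = {
    "Flow 1 - Check balance":     {"requests_per_iteration": 15, "think_time_s": 35, "mix": 0.6},
    "Flow 2 - Transfer money":    {"requests_per_iteration": 12, "think_time_s": 60, "mix": 0.3},  # hypothetical
    "Flow 3 - Request checkbook": {"requests_per_iteration": 8,  "think_time_s": 55, "mix": 0.1},  # hypothetical
}

target_requests_per_hour = 72000

requests_per_user_hour = 0.0
for name, f in flows.items():
    iterations_per_hour = 3600 / f["think_time_s"]       # how often one user could run this flow
    iterations_in_mix = iterations_per_hour * f["mix"]   # share of the user's hour given to this flow
    requests = iterations_in_mix * f["requests_per_iteration"]
    requests_per_user_hour += requests
    print(f"{name}: {iterations_in_mix:.2f} iterations, {requests:.0f} requests per user-hour")

vusers = target_requests_per_hour / requests_per_user_hour
print(f"Requests per user per hour: {requests_per_user_hour:.0f}")
print(f"Vusers required: {vusers:.0f}")
```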


2) For an existing project with HTTP requests – The process of creating a workload model for an existing performance testing project is similar to the one for a greenfield project, except that all the details required to create one are already at hand. There may be new code changes or hardware changes that affect the performance of the application, and these need to be tested with high priority. But since these changes may affect the performance of the whole system, knowingly or unknowingly, we may have to retest the whole application again as regression.

3) For a middleware application that contains only web services – Sometimes we need to test a middleware application, which requires us to test all the web service calls involved. Preparing a workload model for this kind of application needs a different, simpler approach. The steps involved are as follows –

a) Gather the list of web services to be tested and the expected transactions per second (TPS) at which each service is expected to operate. Generally it is not possible to test all the web services in an application, since the list is huge for an enterprise application and some web services cannot be tested in standalone fashion. The usual approach is to include web services that can be tested standalone and that are responsible for 90 or 95% of the production volume.

Suppose we have a list of 15 web services with expected TPS as detailed below –

(Table: the 15 web services with their individual expected TPS values; the total works out to 20.4 TPS.)
Calculate the total TPS and 95% of it (assuming you are targeting 95% of production volume in the performance test; change the percentage if you are targeting something else).
Arrange the web services in order of decreasing TPS, then select services from the top of the list until their combined TPS reaches 95% of the total. In our case the total TPS is 20.4 and 95% of it is 19.38, so our selection includes the services at the top of the sorted list below –

(Table: the web services sorted by decreasing TPS, with the selected services, whose TPS adds up to at least 19.38, highlighted.)
This is the list of web services we want to test; the rest can be ignored, since they do not carry enough volume to be worth testing.
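A minimal Python sketch of this selection step, using hypothetical service names and TPS figures (the real numbers would come from production monitoring):

```python
# Select the web services that together account for 95% of total production TPS.
# The service names and TPS figures are hypothetical placeholders.
services_tps = {
    "getAccountDetails": 6.0,
    "transferFunds": 4.5,
    "getBalance": 3.8,
    "updateProfile": 2.5,
    "requestCheckBook": 1.6,
    "getBranchAddress": 0.4,
    # remaining low-volume services would be listed here
}

coverage = 0.95
total_tps = sum(services_tps.values())
threshold = coverage * total_tps

selected, running_total = [], 0.0
for name, tps in sorted(services_tps.items(), key=lambda kv: kv[1], reverse=True):
    if running_total >= threshold:
        break
    selected.append(name)
    running_total += tps

print(f"Total TPS: {total_tps:.2f}, 95% threshold: {threshold:.2f}")
print("Services selected for the workload model:", selected)
```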

b) Now that we have the list of services and the volumes we need to emulate in our performance tests, we proceed to find the think time and the number of Vusers to assign to each of them to generate the correct load.

For this we can use Little's law from queuing theory, which states –

The average number of customers in a system (over some time interval) is equal to their average arrival rate multiplied by the average time each spends in the system.
Translated into performance testing terms, this statement is equivalent to the following equation –
N = TPS * (RT + T)

where
N – number of Vusers,
TPS – transactions per second,
RT – response time in seconds,
T – think time in seconds.
To proceed further we need the expected response times for these web services from the business. Once those are received, we document them in our workload model as below –

(Table: each selected web service with its expected TPS and expected response time.)
c) Calculations of the number of users and think time. Since a web service call is generally a single request and we usually create one Action per web service in our scripts, think time and pacing can be used interchangeably here. If there is more than one web service call in the Action section of a script, pacing and think time are different entities.

Now, to calculate the number of Vusers and the think time to assign to each web service, we rearrange Little's law as follows –

Think time = (Number of Vusers / Transactions per second) – Response time

TPS and response time are already known, so the equation still has two unknowns (Vusers and think time), and many combinations of values satisfy it. To find a practical solution, we increase the number of Vusers from 1 upwards and take the smallest value that gives a think time greater than zero.
So, for example, for the first web service we increase the number of Vusers as follows –

For Vusers = 1, think time = -0.75
For Vusers = 2, think time = -0.5
For Vusers = 3, think time = -0.25
For Vusers = 4, think time = 0
For Vusers = 5, think time = 0.25

Hence we document Vusers = 5 and think time = 0.25 s for this web service.
Similarly, we calculate the values for the other web services.
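A small Python sketch of this trial-and-increase step, assuming TPS = 4 and a 1-second response time for the first web service (values consistent with the -0.75, -0.5, ... sequence above):

```python
# Find the smallest Vuser count that gives a positive think time, using Little's law
# rearranged as: think_time = vusers / tps - response_time.
# TPS and response time are assumed to match the worked example above.
def vusers_and_think_time(tps: float, response_time_s: float) -> tuple[int, float]:
    vusers = 1
    while True:
        think_time = vusers / tps - response_time_s
        if think_time > 0:
            return vusers, think_time
        vusers += 1

vusers, think_time = vusers_and_think_time(tps=4.0, response_time_s=1.0)
print(f"Vusers = {vusers}, think time = {think_time:.2f} s")  # Vusers = 5, think time = 0.25 s
```

Plugging the result back into Little's law as a check: N = TPS * (RT + T) = 4 * (1 + 0.25) = 5 Vusers.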
This is how we can create a performance model for web services testing.