
Easy setup to dockerize a JMeter, InfluxDB & Grafana monitoring solution

In this blog I’m going to show you how to create a performance monitoring solution on the go.

The complete code can be found in the GitHub repo.
We will create a Docker Compose file with JMeter, InfluxDB and Grafana services to automate performance test execution and monitor live results in Grafana, using InfluxDB as the time-series database storage.

With a single docker-compose up -d command we can start the JMeter execution, monitor the live results in Grafana, and save the raw results and HTML report of the JMeter run on the host machine.

Create a folder in which the docker-compose file resides, and add grafana, influxdb and jmeter sub-folders to hold each Dockerfile and the related config and data files, mirroring the layout of the GitHub repo.

For setting up only the JMeter Docker image, please refer to the links below: https://performanceengineeringsite.wordpress.com/2020/02/01/setup-jmeter-in-linux-docker-container-with-all-plugins/

https://github.com/manojkumar542/Jmeter-docker

Below are the basics of the docker-compose file. Tweak it according to your needs and always go with the latest image versions.

Docker Compose file
version: '3.8'
services:
  influxdb:
    image: 'influxdb:1.8.9-alpine'
    container_name: influxdb
    restart: always
    ports: 
      - '8086:8086'
    networks:
      - monitoring
    environment:
      - INFLUXDB_DB=jmeter
    volumes:
      - influxdb-storage:/var/lib/influxdb
    
  grafana:
    build:
      context: ./grafana
    image: 'grafana:8.1.0'
    container_name: grafana
    ports:
      - '3000:3000'
    restart: always
    networks: 
      - monitoring
    environment:
      - GF_SECURITY_ADMIN_USER=admin
      - GF_SECURITY_ADMIN_PASSWORD=grafana
      - GF_INSTALL_PLUGINS=grafana-clock-panel,briangann-gauge-panel,natel-plotly-panel,grafana-simple-json-datasource
    volumes:
      - grafana-volume:/etc/grafana/provisioning
      - grafana-data:/var/lib/grafana
    depends_on: 
      - influxdb

  jmeter:
    build:
      context: ./jmeter
    image: 'jmeter-docker:latest'
    container_name: jmeter
    restart: always
    networks:
      - monitoring
    volumes:
      - ./jmeter:/results
    depends_on: 
      - influxdb
      - grafana

networks:
  monitoring:
volumes:
  influxdb-storage:
  grafana-data:
  grafana-volume:

Steps to follow to create the Docker Compose YAML file

1. Create customized Dockerfiles for JMeter and Grafana and place them in their respective folders; the InfluxDB image can be pulled directly from Docker Hub.

2. Create a network so the containers can communicate with each other. The default driver is bridge; just define the network in the YAML, there is no need to create it explicitly.

3. Create named volumes to persist data; if a container unexpectedly exits, the volumes let you get the data back. Just define them in the YAML and Docker will create them for you.

4. Pass environment variables to the running containers, for example INFLUXDB_DB=jmeter, which creates a time-series database named jmeter in InfluxDB before the JMeter execution starts.

5. Use docker-compose build to build each image from its build context folder (where the Dockerfile resides), so the latest changes make it into the image.

6. Finally, run docker-compose up -d to start all the services defined in docker-compose.yaml in detached mode, as shown below.
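
For reference, a typical end-to-end run with this compose file looks like the sketch below (service and container names as defined above):

# build the jmeter and grafana images from their build contexts
docker-compose build
# start influxdb, grafana and jmeter in detached mode
docker-compose up -d
# follow the jmeter container logs while the test runs
docker logs -f jmeter
# tear everything down once the run is finished
docker-compose down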

Grafana Dockerfile and YAML files for automatic creation of the datasource and dashboard

FROM grafana/grafana:8.1.0

ENV GF_SECURITY_ADMIN_USER=admin
ENV GF_SECURITY_ADMIN_PASSWORD=grafana
ENV GF_INSTALL_PLUGINS=grafana-clock-panel,briangann-gauge-panel,natel-plotly-panel,grafana-simple-json-datasource

# automatically provisions datasources and dashboards in grafana
ADD ./provisioning /etc/grafana/provisioning

# Add configuration file
#ADD ./grafana.ini /etc/grafana/grafana.ini


Grafana Datasource Yaml format
apiVersion: 1
datasources:
  - name: 'Influx-Jmeter'
    type: influxdb
    access: proxy
    database: jmeter
    url: http://influxdb:8086
    isDefault: true
    editable: true

The Grafana dashboard JSON format can be downloaded here.
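
For the dashboards, Grafana also expects a provider file under the provisioning folder that the Dockerfile copies in. A minimal sketch, assuming the dashboard JSON files sit next to it (the folder and file names here are my assumptions, not taken from the repo):

# create ./grafana/provisioning/dashboards/dashboards.yml
cat > ./grafana/provisioning/dashboards/dashboards.yml <<'EOF'
apiVersion: 1
providers:
  - name: 'JMeter dashboards'
    orgId: 1
    folder: ''
    type: file
    disableDeletion: false
    editable: true
    options:
      # any dashboard .json files in this folder are loaded at startup
      path: /etc/grafana/provisioning/dashboards
EOF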

Now go to the folder containing the docker-compose file and run docker-compose up -d.

Use docker exec -it containerid /bin/sh to connect to the containers. In InfluxDB, check whether the database named jmeter was created; in the same way, connect to Grafana and check that the dashboards and datasources were provisioned correctly from the copied files.
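
A quick sanity check is also possible without opening a shell in the containers (container name, credentials and ports as defined in the compose file above):

# list InfluxDB databases; 'jmeter' should be present
docker exec -it influxdb influx -execute 'SHOW DATABASES'
# list the provisioned Grafana datasources over the HTTP API
curl -s -u admin:grafana http://localhost:3000/api/datasources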

Since port mapping is enabled to expose the container ports on localhost, both Grafana and InfluxDB can be accessed at http://localhost:portnumber.

Inside the Docker network, however, containers communicate over internal IPs, and Docker's internal DNS maps each container name to its IP. That is why the container name is used in the InfluxDB URL when configuring the datasource in Grafana.

Using docker-compose build with a build context, we can build the image from the specific Dockerfile folder via the docker-compose YAML itself, as below:
  jmeter:
    build:
      context: ./jmeter
    image: 'jmeter-docker:latest'
    container_name: jmeter
    restart: always
    networks:
      - monitoring
    volumes:
      - ./jmeter:/results
    depends_on: 
      - influxdb
      - grafana

context - points to where the Dockerfile resides; the image is built and tagged with the image name specified, in our case 'jmeter-docker:latest'.

depends_on - ensures the listed services are started before the current service, since it depends on them.
JMeter InfluxDB configuration
Run docker-compose build to build the images from the specified contexts so the latest changes are picked up.
Run docker-compose up -d to start the container services in detached mode so that each container can be connected to separately.
Run docker ps to list the running containers, then connect to the InfluxDB container to check that the jmeter database was created.
Type the influx command and the influx shell opens.
show databases - lists the databases
use databasename - connects to the specified database
show measurements - lists all the tables in the database
select * from measurement - lists all the data in the table
See the above image for reference.
Grafana login screen; the username is admin and the password is grafana, since they were set via environment variables in the docker-compose file.
The first section of the JMeter Grafana dashboard shows the aggregated summary for all transactions.
A specific transaction selected via the Grafana template variables, as in the screenshot above.
Run docker-compose down to stop and remove all the running containers.

The complete code repo can be found here.


Handle duplicate values captured by a RegExp from the same response in JMeter

There are times when we might need to capture all the unique values from a response and send each unique value in a subsequent request. For this scenario I have created UDVs (User Defined Variables) with repeated values and pass only the unique values to each HTTP Sampler. Here I have created 5 variables, as in the screenshot.

Create a JSR223 Sampler and write a small snippet of code to drop the duplicates and send only the unique values. Using a HashSet collection we can eliminate duplicate values and store only unique ones.

// number of matches captured by the extractor (reference name "Url")
int matches = Integer.parseInt(vars.get("Url_matchNr"));
// a HashSet stores each value only once, dropping duplicates
Set links = new HashSet();
for (int i = 1; i <= matches; i++) {
  links.add(vars.get("Url_" + i));
  vars.remove("Url_" + i);   // remove the original (possibly duplicated) variable
}
// write the unique values back as Url_1, Url_2, ... for the ForEach Controller
int counter = 1;
for (String link : links) {
  vars.put("Url_" + counter, link);
  counter++;
}

The ForEach Controller loops through the values of a set of related variables, repeatedly sending the HTTP request until all the values in the list have been used.

Finally we can see only unique values being sent to each request.

Dynamically construct a string out of captured values using LoadRunner functions

In LoadRunner, most requirements can be met with the built-in web Vuser functions, but at times we need to extend the logic with C scripting code to fulfill a requirement.
One such use case is constructing a dynamic string from correlated values by placing delimiters between each value.

Example: You have a list of assetIds captured from the response, and you need to pass the captured values with delimiters such as "," (quote & comma) between them into the next request's body parameter.
"Sample1","Sample2","Sample3","Sample4","Sample5", ... and so on.
Each Sample value can be captured using correlation from a JSON response as below.

web_reg_save_param_json("ParamName=param", "QueryString=$..assetId", "SelectAll=Yes", LAST); // SelectAll=Yes captures all occurrences of matched values in the JSON response
// C code snippet as follows
Action_DynamicString()
{
	int i, count;
	char value[200], str[10000] = "";

	// register the correlation before the request; SelectAll=Yes captures every assetId occurrence
	web_reg_save_param_json("ParamName=param", "QueryString=$..assetId", "SelectAll=Yes", LAST);

	web_custom_request("sample1",
		"URL=https://{url}",
		"Method=GET",
		"Resource=0",
		"RecContentType=application/json",
		"Referer={url}/",
		"Snapshot=t1.inf",
		"Mode=HTTP",
		"EncType=application/json",
		LAST);

	// construct a dynamic string out of the captured values
	count = atoi(lr_eval_string("{param_count}"));
	for (i = 1; i <= count; i++)
	{
		sprintf(value, "{param_%d}", i);   // writes the parameter placeholder into a string
		//lr_output_message("New String is %s", lr_eval_string(value));
		strcat(str, "\"");
		strcat(str, value);
		strcat(str, "\"");
		if (i < count) {
			strcat(str, ",");
		}
	}

	// evaluate all the {param_N} placeholders at once and save the result
	lr_save_string(lr_eval_string(str), "dynStr");
	lr_output_message("New String is %s", lr_eval_string("{dynStr}"));

	web_custom_request("sample2",
		"URL=https://{url1}",
		"Method=POST",
		"Resource=0",
		"RecContentType=application/json",
		"Referer={url1}/",
		"Snapshot=t2.inf",
		"Mode=HTTP",
		"EncType=application/json",
		"Body={\"filters\":[{dynStr}]}",
		LAST);

	return 0;
}

Output
param_1=Sample1
param_2=Sample2
param_3=Sample3
param_4=Sample4
param_5=Sample5
param_count=5
dynStr="Sample1","Sample2","Sample3","Sample4","Sample5"

Hope this at least gives you some idea of how to handle dynamic strings 🙂

Huge difference in response times with socket vs WinInet replay in LoadRunner

Our LR Controller and load generator (LG) infrastructure sits behind an internal proxy, i.e. each request from the LG first goes to the proxy and then to the target server, and the response comes back through the proxy to the client. This setup adds some negligible latency.
Step 1: We recorded and customized the script flow and started running it in the LR Controller. We were seeing much higher response times for the transactions and sub-transactions of the API responses than we saw manually from the browser. When we checked the LR run-time settings (RTS), we were running with WinInet replay mode and the proxy setting "use the default browser proxy".

Step 2: We then tried the socket-mode implementation of the HTTP traffic simulation and found that the internal proxy was not allowing socket mode; it threw connection timeouts for the website because the proxy requires authentication for each request.
After several attempts we got it working by changing the settings below:
provide the Windows credentials used for the LR machine (the password is masked, so no one can see it).
Step 3: We compared the WinInet and socket runs: socket-mode replay was in line with the real application's performance numbers, while WinInet showed roughly 3x higher response times.
We then learned that the WinInet implementation was developed by Microsoft, works only on Windows, and its simulation behaves quite differently.
The socket-based engine is the default engine implemented by the HP LoadRunner team.

Please make sure your script replays in socket mode to accurately measure the performance of a web application. This only applies to HTTP/HTML based simulation; with the TruClient protocol (proprietary to HP) you will not face this problem, since it simulates end-to-end response times at the browser level.

Run jmeter in docker and save results in CSV

Today I'm going to describe a detailed approach to handling performance results while running performance executions. One option is SaaS-based tools like BlazeMeter, OctoPerf, flood.io etc. In this blog I will show you how to install plugins from the Plugins Manager via the command line.

1. Make sure the cmdrunner jar (cmdrunner-2.2.jar in this setup) is present in the jmeter/lib directory. If not, take it from here: https://repo1.maven.org/maven2/kg/apc/cmdrunner/2.2/

2. Make sure PluginsManagerCMD.sh or PluginsManagerCMD.bat is present in the jmeter/bin directory.

If not, run java -cp jmeter/lib/ext/jmeter-plugins-manager-x.x.jar org.jmeterplugins.repository.PluginManagerCMDInstaller to have the files created. Once those steps are done, you can update the Dockerfile accordingly to automate the Plugins Manager setup and install the required plugins for your use case.

Now you can use PluginsManagerCMD to get the status of plugins and to install or uninstall them. The command line is simple: PluginsManagerCMD <command> [<params>], where command is one of "help", "status", "upgrades", "available", "install", "install-all-except", "install-for-jmx", "uninstall".
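
A few example invocations, run from the jmeter/bin directory (the plugin names and versions here are just illustrations):

./PluginsManagerCMD.sh status
./PluginsManagerCMD.sh available
./PluginsManagerCMD.sh install jpgc-graphs-basic=2.0,jpgc-synthesis=2.2
./PluginsManagerCMD.sh install-for-jmx /path/to/test.jmx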

Here is a sample Dockerfile for a custom JMeter setup, for reference.
Go to the GitHub URL or pull it from Docker Hub.

FROM ubuntu:latest

# setup jmeter version to use
ARG JMETER_VERSION="5.3"
ARG JMETER_PLUGINS_MANAGER_VERSION="1.6"
ARG CMDRUNNER_VERSION="2.2"
ENV JMETER_HOME /opt/apache-jmeter-${JMETER_VERSION}
ENV JMETER_BIN  ${JMETER_HOME}/bin
ENV MIRROR_HOST https://archive.apache.org/dist/jmeter
ENV JMETER_DOWNLOAD_URL ${MIRROR_HOST}/binaries/apache-jmeter-${JMETER_VERSION}.tgz
ENV JMETER_PLUGINS_DOWNLOAD_URL https://repo1.maven.org/maven2/kg/apc
ENV JMETER_PLUGINS_FOLDER ${JMETER_HOME}/lib/ext/
ENV PATH $PATH:$JMETER_BIN

# Install curl, Java and JMeter itself (JMETER_DOWNLOAD_URL is defined above; the openjdk package name assumes an Ubuntu base image)
RUN apt-get update && \
    apt-get install -y curl openjdk-11-jre-headless && \
    curl -L --silent ${JMETER_DOWNLOAD_URL} -o /tmp/apache-jmeter.tgz && \
    tar -xzf /tmp/apache-jmeter.tgz -C /opt && \
    rm /tmp/apache-jmeter.tgz

# Install jmeter plugins manager and dependency jars, then install the required plugins
RUN curl -L --silent ${JMETER_PLUGINS_DOWNLOAD_URL}/jmeter-plugins-manager/${JMETER_PLUGINS_MANAGER_VERSION}/jmeter-plugins-manager-${JMETER_PLUGINS_MANAGER_VERSION}.jar -o ${JMETER_PLUGINS_FOLDER}/jmeter-plugins-manager-${JMETER_PLUGINS_MANAGER_VERSION}.jar
RUN curl -L --silent ${JMETER_PLUGINS_DOWNLOAD_URL}/cmdrunner/${CMDRUNNER_VERSION}/cmdrunner-${CMDRUNNER_VERSION}.jar -o ${JMETER_HOME}/lib/cmdrunner-${CMDRUNNER_VERSION}.jar && \
    java -cp ${JMETER_PLUGINS_FOLDER}/jmeter-plugins-manager-${JMETER_PLUGINS_MANAGER_VERSION}.jar org.jmeterplugins.repository.PluginManagerCMDInstaller && \
    PluginsManagerCMD.sh install jpgc-cmd=2.2,jpgc-dummy=0.4,jpgc-filterresults=2.2,jpgc-synthesis=2.2,jpgc-graphs-basic=2.0 && \
    jmeter --version && \
    PluginsManagerCMD.sh status

Go to the GitHub repo https://github.com/manojkumar542/Jmeter-docker for reference. You should keep the JMeter execution setup isolated from the Dockerfile for easier maintenance and customization, and use a run.sh script file to start the Docker execution, as below.

run.sh
------------------------------

#!/bin/bash -e
#define variables to store info
version=5.3
scriptname="Sample"
#
# override the HEAP settings and run the jmeter script.
JVM_ARGS="-Xms512m -Xmx2048m" jmeter -Jjmeter.save.saveservice.subresults=false -n -t /${scriptname}.jmx -l /${scriptname}.jtl -e -o /results/output 2>&1
java -jar /opt/apache-jmeter-${version}/lib/cmdrunner-2.2.jar --tool Reporter --plugin-type AggregateReport --input-jtl /${scriptname}.jtl --generate-csv /results/results.csv 2>&1
cat /results/results.csv

We can use cmdrunner or JMeterPluginsCMD to generate the CSV from the raw results JTL.

Generating CSV:

JMeterPluginsCMD.bat --generate-csv test.csv --input-jtl results.jtl --plugin-type AggregateReport 
You may generate CSV and PNG in a single tool run. --help will show you a short list of the available parameters.

--generate-png <file> - generate a PNG file containing the graph
--generate-csv <file> - generate a CSV file containing the graph data
--input-jtl <file> - load data from the specified JTL file
--plugin-type <class> - which type of graph to use for results generation

Use --input-jtl merge-results.properties with --plugin-type MergeResults; the merge-results.properties file is in the JMETER_HOME/bin dir (see the sketch below).
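
For example, a merge run might look like this (assuming merge-results.properties lists your JTL files):

JMeterPluginsCMD.bat --generate-csv merged.csv --input-jtl merge-results.properties --plugin-type MergeResults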

The next blog will show you how to handle the entire performance stack using Docker Compose and save off the results, along with real-time monitoring of performance executions.

Integrate LoadRunner results into a Grafana dashboard

At times we need to run the scenario in the Controller, but the results should be visible to everyone working on the team. Using the open-source tool Grafana, we can overcome the difficulty of viewing the results from a web browser.

Older versions of LoadRunner (before LR 2020) do not have built-in support for pushing results to the InfluxDB time-series database from the Controller/ALM.

We need to manually convert the LR raw results to InfluxDB data, and then use Grafana's InfluxDB datasource to view the stored data graphically.
An InfluxDB setup already exists in the LR installation folder; once the scenario has been executed, we run the command below to convert the LR raw results into InfluxDB data.

Please see below blogs for influxDB and grafana installation

InfluxDB installation details - https://influxdb.com/docs/v0.9/introduction/installation.html

To access InfluxDB from the command line interface (CLI) and use the Influx query language - https://influxdb.com/docs/v0.9/introduction/getting_started.html

Step 1:
Make sure you run the influxd service and influx.exe for running influx commands, as below.

  • All the tools, like InfluxDB and the LR exporter, are present in the LR installation/bin folder.
  • D:\SystemApps32\HPE\LoadRunner\bin>LrRawResultsExporter.exe -source_dir "<LR raw results folder directory>" -to_influx
  • Once the export to influx is successful, check in the influxd service for the database where the results were stored; it should be the name of the results folder appended with a number, like smoke_test_210824162539 (see the sketch after this list).
  • Add the InfluxDB datasource in Grafana and import the readily available LoadRunner JSON Grafana dashboard from the Grafana site.
  • Import the JSON dashboard into Grafana.
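
A quick way to sanity-check the export from the command line (the database name below is just an example taken from the run above):

influx.exe -execute "SHOW DATABASES"
influx.exe -database "smoke_test_210824162539" -execute "SHOW MEASUREMENTS"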

After a successful import you will see graphs like the below for the InfluxDB data.

From LR 2020, InfluxDB integration is part of scenario execution and can be configured during scenario creation, as below.

To learn more about Grafana, please check the Grafana posts.

Setup Jmeter in Linux docker container with all plugins

Jmeter Docker image with all plugins installed

Apache JMeter : an application designed to load test functional behavior and measure performance – https://jmeter.apache.org

JMeter Plugins : an independent set of plugins – https://jmeter-plugins.org

The version number is composed of two parts: the first is the version of Apache JMeter embedded in this Docker image, the second is the version of the Docker image itself.

Clone this repo https://github.com/manojkumar542/Jmeter-docker

You can directly pull the image from dockerhub by using the below command

docker pull manojkumar542/jmeter-docker:latest

Build and Run docker jmeter images as below

docker build -t <tag name> .

ex: docker build -t jmeter-docker:latest .

docker run -it --name <container name> <image name>

ex: docker run -it --name jmeterimage jmeter-docker:latest

To store the results on your local machine, use volumes as below:

docker run -it --name <container name> -v <host folder>:<container folder> <image name>

ex: docker run -it --name jmeterimage -v ${pwd}/output:/results jmeter-docker:latest

In order to use your own JMX files and data files in your JMeter executions:

Step 1: Clone this Git repo 

Step 2: Update the Dockerfile to copy your JMeter (.jmx) and data files, whatever you want to run, into the Docker container using the commands below.

# copy jmeter script files and data files to execute jmeter runs

COPY sample.jmx /script.jmx

COPY sample.csv /data.csv
# shell script has script to convert JTL to CSV
COPY run.sh /run.sh

Step 3: Edit the run.sh and update the script path as below

#!/bin/bash -e
# override the HEAP settings and run the jmeter script.
JVM_ARGS="-Xms512m -Xmx2048m" jmeter -n -t /script.jmx -l /jmeter.jtl 2>&1
cat /results/results.csv   # check the output in the console

 


Done. Now you can build the Docker image and run it with the following commands to store the results from the Docker container on your local machine.

# building docker image

ex: docker build -t jmeter-docker:latest .

# Running docker image

docker run -it --name <container name> <image name>

ex: docker run -it --name jmeterimage jmeter-docker:latest

To store the results on your local machine, use volumes as below:

docker run -it --name <container name> -v <host folder>:<container folder> <image name>

ex: docker run -it --name jmeterimage -v ${pwd}/output:/results jmeter-docker:latest

Docker volumes examples
https://training.play-with-docker.com/docker-volumes/

Creating Dashboards in Kibana

Once the logs are flowing into Kibana, the next step is to analyse them, get meaningful insights out of the logs, and represent them in a graphical format.

Example : Kibana GeoIP example

1. Once data is loaded into Elasticsearch, open Kibana UI and go to Management tab => Kibana Index pattern.

2. Create Kibana index with “logstash-iplocation” pattern and hit Next.

3. Select timestamp if you want to show it with your index and hit create index pattern.

 

4. Now go to Discover tab and select “logstash-iplocation” to see the data which we just loaded.

You can expand the fields and see that geoip.location has the datatype geo_point. You can verify this by the "globe" sign you will find just before the geoip.location field. If it's not there, then something has gone wrong and the datatype mapping is incorrect.
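
If you want to double-check the mapping outside Kibana, a direct query against Elasticsearch works too (the host and port below are assumptions for a default local install):

curl -s 'http://localhost:9200/logstash-iplocation*/_mapping?pretty'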

5. Now go to Visualize tab and select coordinate map from the types of visualization and index name as “logstash-iplocation“.

 

6. Apply the filters (Buckets: Geo coordinates, Aggregation: Geohash & Field: geoip.location) as shown below and hit the “Play” button. That’s it !! You have located all the ip addresses.

Kibana Visualization and dashboard example

Refer to this for complete tutorial ELK Stack

Use Case 1: Top 10 Requested URL’s (Pie chart)

Open Kibana UI on your machine and go to Visualize tab => Create a visualization:

Select the type of visualization. For our first use case, select pie chart:

Select the index squidal which we created earlier.

Now go to Options and uncheck Donut check box as we need a simple pie chart. Check Show Labels or you can leave it blank if you don’t want labels, it’s up to you.

Next, go to the Data tab, where you will find Metrics (you could call it a Measure in reporting terminology); by default it will show Count.

Click the blue button next to Split Slices in order to choose the aggregation type. Let's choose the Terms aggregation for our use case (in simple words, think of Terms like GROUP BY in SQL). Refer to this for more details.

Further, choose the Field => Requested_URL.keyword, which will act as the dimension for us. Hit the blue arrow button next to Options in order to generate the chart. You can also give this chart a custom label as shown below.

Save the chart => Hit Save button on top right corner of your dashboard. You can name the visualization as Top 10 Requested URL.

Now go to Dashboard tab => Create a dashboard => Add => Select Top 10 Requested URL

Hit Save button on top of your dashboard. Give it a meaningful name, for instance Squid Access Log Dashboard.

Use Case 2: Number of events per hour (Bar chart)

Go to Visualize tab again (top left corner of your screen) and click on “+” sign. Choose chart type as vertical bar and select squidal index.

In this case, instead of choosing aggregation type as Terms, we will be using X-Axis bucket with Date Histogram and Interval as Hourly as shown below:

Hit Save button and give it an appropriate name, for instance Events per Hour.

Now go back to Dashboard tab => Squid Access Log Dashboard => Edit => Add and select Events per hour to add it in your dashboard.

Hit Save again. At this point your dashboard should look like this:

Linux basics with explanation

Linux basics:

$1, $2, … - The first, second, etc. command line arguments to the script.

variable=value - Set a value for a variable. Remember, no spaces on either side of =.

Quotes " ' - Double quotes will do variable substitution, single quotes will not.

variable=$( command ) - Save the output of a command into a variable.

export var1 - Make the variable var1 available to child processes.

read varName - Read input from the user and store it in the variable varName.

/dev/stdin - A file you can read to get the STDIN for the Bash script.

let expression - Make a variable equal to an expression.

expr expression - Print out the result of the expression.

$(( expression )) - Return the result of the expression.

${#var} - Return the length of the variable var.
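
A tiny script tying several of these together (purely illustrative; the file name greet.sh is made up):

#!/bin/bash
# usage: ./greet.sh <name>
name=$1                     # first command line argument
today=$( date +%F )         # save the output of a command into a variable
export GREETING="Hello"     # make the variable available to child processes
echo "$GREETING $name, today is $today"
echo "Your name is ${#name} characters long"
read copies                 # read input from the user into the variable copies
total=$(( copies * 2 ))     # arithmetic with $(( ))
echo "Twice that many copies is $total"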

 

Test

The square brackets ( [ ] ) in the if statement above are actually a reference to the command test. This means that all of the operators that test allows may be used here as well. Look up the man page for test to see all of the possible operators (there are quite a few), but some of the more common ones are listed below.

Operator - Description
! EXPRESSION - The EXPRESSION is false.
-n STRING - The length of STRING is greater than zero.
-z STRING - The length of STRING is zero (i.e. it is empty).
STRING1 = STRING2 - STRING1 is equal to STRING2.
STRING1 != STRING2 - STRING1 is not equal to STRING2.
INTEGER1 -eq INTEGER2 - INTEGER1 is numerically equal to INTEGER2.
INTEGER1 -gt INTEGER2 - INTEGER1 is numerically greater than INTEGER2.
INTEGER1 -lt INTEGER2 - INTEGER1 is numerically less than INTEGER2.
-d FILE - FILE exists and is a directory.
-e FILE - FILE exists.
-r FILE - FILE exists and the read permission is granted.
-s FILE - FILE exists and its size is greater than zero (i.e. it is not empty).
-w FILE - FILE exists and the write permission is granted.
-x FILE - FILE exists and the execute permission is granted.

 

A few points to note:

  • = is slightly different to -eq. [ 001 = 1 ] will return false as = does a string comparison (ie. character for character the same) whereas -eq does a numerical comparison meaning [ 001 -eq 1 ] will return true (see the short example after this list).
  • When we refer to FILE above we are actually meaning a path. Remember that a path may be absolute or relative and may refer to a file or a directory.
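
A short example illustrating both points (a minimal sketch):

#!/bin/bash
num=001
path=$1

# = does a string comparison, -eq does a numerical comparison
if [ "$num" = "1" ]; then echo "string equal"; else echo "string not equal"; fi   # prints "string not equal"
if [ "$num" -eq 1 ]; then echo "numerically equal"; fi                            # prints "numerically equal"

# FILE operators accept absolute or relative paths, files or directories
if [ -d "$path" ]; then
  echo "$path is a directory"
elif [ -e "$path" ]; then
  echo "$path exists but is not a directory"
fi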