Set up JMeter in a Linux Docker container with all plugins

Jmeter Docker image with all plugins installed

Apache JMeter : an application designed to load test functional behavior and measure performance – https://jmeter.apache.org

JMeter Plugins : an independent set of plugins – https://jmeter-plugins.org

The version number is composed of two version numbers: the first is the version of Apache JMeter embedded in this Docker image, and the second is the version of the Docker image itself.

Clone this repo https://github.com/manojkumar542/Jmeter-docker

You can pull the image directly from Docker Hub using the command below:

docker pull manojkumar542/jmeter-docker:latest

Build and run the JMeter Docker image as below:

docker build -t <image-name>:<tag> .

ex: docker build -t jmeter-docker:latest .

docker run -it --name <container-name> <image-name>

ex: docker run -it --name jmeterimage jmeter-docker:latest

To store the results into your local machine use volumes as below

docker run -it --name <container-name> -v <host-folder>:<container-folder> <image-name>

ex: docker run -it --name jmeterimage -v $(pwd)/output:/results jmeter-docker:latest

To use your own JMeter (.jmx) test plans and data files in your JMeter runs:

Step 1: Clone this Git repo 

Step 2: Update the Dockerfile to copy the JMeter script (.jmx) and data files you want to run into the Docker container, using the commands below.

# copy jmeter script files and data files to execute jmeter runs

COPY sample.jmx /script.jmx

COPY sample.csv /data.csv
# shell script has script to convert JTL to CSV
COPY run.sh /run.sh

Step 3: Edit the run.sh and update the script path as below

#!/bin/bash -e
# override the HEAP settings and run the jmeter script.
JVM_ARGS="-Xms512m -Xmx2048m" jmeter -n -t /script.jmx -l /jmeter.jtl 2>&1
cat /results/results.csv  # print the converted CSV output to the console
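The exact contents of run.sh in the repo are not reproduced here, but a minimal sketch of the JTL-to-CSV conversion mentioned above, assuming the jmeter-plugins command-line tool (JMeterPluginsCMD) is installed in the image, could look like this:

# Convert the raw JTL results into an aggregate CSV report (assumes JMeterPluginsCMD is on the PATH)
JMeterPluginsCMD.sh --generate-csv /results/results.csv --input-jtl /jmeter.jtl --plugin-type AggregateReport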

 


Done. Now you can build the Docker image and run it with the following commands to store the results from the Docker container on your local machine.

# building docker image

ex: docker build -t jmeter-docker:latest .

# Running docker image

docker run -it --name <container-name> <image-name>

ex: docker run -it --name jmeterimage jmeter-docker:latest

To store the results into your local machine use volumes as below

docker run -it --name <container-name> -v <host-folder>:<container-folder> <image-name>

ex: docker run -it --name jmeterimage -v $(pwd)/output:/results jmeter-docker:latest

Docker volumes examples
https://training.play-with-docker.com/docker-volumes/

Creating Dashboards in Kibana

Once we have logs flowing into Kibana, the next step is to analyse them, extract meaningful insights, and represent those insights graphically.

Example : Kibana GeoIP example

1. Once data is loaded into Elasticsearch, open Kibana UI and go to Management tab => Kibana Index pattern.

2. Create a Kibana index with the "logstash-iplocation" pattern and hit Next.

3. Select the timestamp field if you want it shown with your index and hit Create index pattern.

 

4. Now go to the Discover tab and select "logstash-iplocation" to see the data we just loaded.

You can expand the fields and see that geoip.location has the datatype geo_point. You can verify this by the "globe" icon just before the geoip.location field. If it is missing, the datatype mapping is incorrect.

5. Now go to the Visualize tab, select Coordinate Map from the visualization types, and choose the index "logstash-iplocation".

 

6. Apply the filters (Buckets: Geo coordinates, Aggregation: Geohash & Field: geoip.location) as shown below and hit the "Play" button. That's it!! You have located all the IP addresses.

Kibana Visualization and dashboard example

Refer to this complete tutorial: ELK Stack

Use Case 1: Top 10 Requested URLs (Pie chart)

Open Kibana UI on your machine and go to Visualize tab => Create a visualization:

Select the type of visualization. For our first use case, select pie chart:

Select the index squidal which we created earlier.

Now go to Options and uncheck Donut check box as we need a simple pie chart. Check Show Labels or you can leave it blank if you don’t want labels, it’s up to you.

Next, go to the Data tab where you will find Metrics (a measure, in reporting terminology); by default it shows Count.

Click the blue button next to Split Slices to choose the aggregation type. Let's choose the Terms aggregation for our use case (in simple words, think of Terms like GROUP BY in SQL). Refer to this for more details.

Further, choose the Field => Requested_URL.keyword, which will act as the dimension for us. Hit the blue arrow button next to Options to generate the chart. You can also give this chart a custom label as shown below.

Save the chart => Hit Save button on top right corner of your dashboard. You can name the visualization as Top 10 Requested URL.

Now go to Dashboard tab => Create a dashboard => Add => Select Top 10 Requested URL

Hit Save button on top of your dashboard. Give it a meaningful name, for instance Squid Access Log Dashboard.

Use Case 2: Number of events per hour (Bar chart)

Go to Visualize tab again (top left corner of your screen) and click on “+” sign. Choose chart type as vertical bar and select squidal index.

In this case, instead of choosing aggregation type as Terms, we will be using X-Axis bucket with Date Histogram and Interval as Hourly as shown below:

Hit Save button and give it an appropriate name, for instance Events per Hour.

Now go back to Dashboard tab => Squid Access Log Dashboard => Edit => Add and select Events per hour to add it in your dashboard.

Hit Save again. At this point your dashboard should look like this:

Linux basics with explanation

Linux basics:

$1, $2, …

The first, second, etc command line arguments to the script.

variable=value

To set a value for a variable. Remember, no spaces on either side of =

Quotes " '

Double quotes will perform variable substitution; single quotes will not.

variable=$( command )

Save the output of a command into a variable

export var1

Make the variable var1 available to child processes.

read varName

Read input from the user and store it in the variable varName.

/dev/stdin

A file you can read to get the STDIN for the Bash script

let expression

Make a variable equal to an expression.

expr expression

Print out the result of the expression.

$(( expression ))

Return the result of the expression.

${#var}

Return the length of the variable var.
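As a quick illustration, a small hypothetical script pulling several of these together:

#!/bin/bash
# greet.sh, a hypothetical example exercising the basics above
name=$1                    # first command line argument
now=$( date )              # save the output of a command into a variable
greeting="Hello $name"     # double quotes perform variable substitution
echo "$greeting, it is now $now"
echo "Your name is ${#name} characters long"
echo $(( ${#name} + 1 ))   # return the result of an arithmetic expression
export greeting            # make greeting available to child processes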

 

Test

The square brackets ( [ ] ) in an if statement are actually a reference to the command test. This means that all of the operators that test allows may be used here as well. Look up the man page for test to see all of the possible operators (there are quite a few), but some of the more common ones are listed below.

Operator Description
! EXPRESSION The EXPRESSION is false.
-n STRING The length of STRING is greater than zero.
-z STRING The length of STRING is zero (ie it is empty).
STRING1 = STRING2 STRING1 is equal to STRING2
STRING1 != STRING2 STRING1 is not equal to STRING2
INTEGER1 -eq INTEGER2 INTEGER1 is numerically equal to INTEGER2
INTEGER1 -gt INTEGER2 INTEGER1 is numerically greater than INTEGER2
INTEGER1 -lt INTEGER2 INTEGER1 is numerically less than INTEGER2
-d FILE FILE exists and is a directory.
-e FILE FILE exists.
-r FILE FILE exists and the read permission is granted.
-s FILE FILE exists and its size is greater than zero (ie. it is not empty).
-w FILE FILE exists and the write permission is granted.
-x FILE FILE exists and the execute permission is granted.

 

A few points to note:

  • = is slightly different from -eq. [ 001 = 1 ] will return false, as = does a string comparison (ie. character for character the same), whereas -eq does a numerical comparison, meaning [ 001 -eq 1 ] will return true.
  • When we refer to FILE above we are actually meaning a path. Remember that a path may be absolute or relative and may refer to a file or a directory.
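For example, a short script exercising a few of these operators (the file path is only an illustration):

#!/bin/bash
file="/etc/hosts"
if [ -e "$file" ] && [ -s "$file" ]
then
    echo "$file exists and is not empty"
fi
if [ 001 -eq 1 ]; then echo "numerically equal"; fi   # true: -eq compares numbers
if [ 001 = 1 ]; then echo "strings equal"; fi         # false: = compares character by character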

Docker commands and examples

Everything about docker

# Build a Docker image

$ docker build -t [image_name]:[tag] .

# Run a Docker container specifying a name

$ docker run --name [container_name] [image_name]:[tag]

# Fetch the logs of a container

$ docker logs -f [container_id_or_name]

# Run a command in a running container

$ docker exec -it [container_id_or_name] bash

# Show running containers

$ docker ps

# Show all containers

$ docker ps -a

# Show Docker images

$ docker images

# Stop a Docker container

$ docker stop [container_id_or_name]

# Remove a Docker container

$ docker rm [container_id_or_name]

# Remove a Docker image

$ docker rmi [image_id_or_name]

 

Every command has its own help page; for example you can call:

$ docker build --help

$ docker run --help

And see the OPTIONS you can pass to it.

Useful Commands

Some other useful commands to perform operations on multiple items:

# Stop all running containers

$ docker stop $(docker ps -q)

# Remove all containers

$ docker rm $(docker ps -aq) --force

# Remove all images

$ docker rmi $(docker images -aq) --force
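On newer Docker versions, docker system prune offers a one-shot cleanup of unused objects:

# Remove all stopped containers, unused networks, dangling images and build cache
$ docker system prune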

 

Parsing data with Ingest node of Elastic Search

Please go through this link for an ELK overview and an explanation of each tool: Elastic Stack Centralized logging solution practical explanation

There are 2 ways to parse the fields from log data

  1. Shipping log data from Filebeat to Logstash and using grok filters to parse the log lines
  2. Using the Ingest Node of Elasticsearch, which preprocesses data before indexing

The second approach is the easier one, since Filebeat is a lightweight shipper that is easy to set up and run, and the ingest node removes the need for a separate Logstash instance.

When you use Elasticsearch for output, you can configure Filebeat to use ingest node to pre-process documents before the actual indexing takes place in Elasticsearch. Ingest node is a convenient processing option when you want to do some extra processing on your data, but you do not require the full power of Logstash.

We use grok processors to extract structured fields out of a single text field within a document. A grok pattern is like a regular expression that supports aliased expressions which can be reused. Go through this blog on how to define grok processors; you can use the grok debugger to validate your grok patterns.

Grok Processors in Elastic stack

 

Optionally you can add a data type conversion to your grok pattern. By default all semantics are saved as strings. If you wish to convert a semantic’s data type, for example change a string to an integer then suffix it with the target data type. For example %{NUMBER:num:int} which converts the num semantic from a string to an integer. Currently the only supported conversions are int and float.

So, if I add :int into my grok field definition, suddenly my value will be cast as an integer. Caveat: The NUMBER grok pattern will also detect numbers with decimal values. If you cast a number with a decimal as :int, it will truncate the decimal value leaving only the whole number portion.

In that case, the number can be cast as :float, meaning floating-point value.


One of the great things about Elasticsearch is its extensive REST API, which allows you to integrate, manage and query the indexed data in countless different ways. We use the Ingest REST API of Elasticsearch to define preprocessing rules for log lines, as sketched below. Here the pipeline id is parse_logs (a user-defined name); use the same name in the Elasticsearch output configuration in filebeat.yml.
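The exact pipeline used in this post is not reproduced here, but as a rough sketch, a pipeline named parse_logs with a single grok processor could be created like this (the grok pattern is purely illustrative, matching only the first two comma-separated fields of the sample message shown below; your own pattern will differ):

curl -X PUT "http://localhost:9200/_ingest/pipeline/parse_logs" -H 'Content-Type: application/json' -d'
{
  "description": "Pre-process log lines before indexing",
  "processors": [
    {
      "grok": {
        "field": "message",
        "patterns": ["%{WORD:label},%{NUMBER:samples:int},%{GREEDYDATA:rest}"]
      }
    }
  ]
}'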

The PUT pipeline API also instructs all ingest nodes to reload their in-memory representation of pipelines, so that pipeline changes take effect immediately. After defining the pipeline in Elasticsearch as above, we simply configure Filebeat to use it: specify the pipeline ID in the pipeline option under output.elasticsearch in the filebeat.yml file:

output.elasticsearch:
  hosts: ["localhost:9200"]
  pipeline: parse_logs

Sample message:

Here the field is message:

"message": "Home,103,1245,1005,1196,1343,4045,677,21840,0.00%,.3,21.4,2086.37",

After creating the pipeline, we should be able to see the individual fields in Kibana under the Discover page.


This allows us to use advanced features like statistical analysis on value fields, faceted search, filters, and more.

The next blog will analyse and visualize the parsed fields from the log data.

Implicit & Explicit waits in WebDriver and when to use them

Why do users need Selenium Wait?

Most web applications are developed with Ajax and Javascript. When a page loads on a browser, the various web elements that someone wants to interact with may load at various time intervals.

This obviously creates difficulty in identifying any element. On top of that, if an element is not located, an "ElementNotVisibleException" is thrown. Selenium Wait commands help resolve this issue. Read more about the Common Exceptions in Selenium.

Implicit Wait in Selenium

Implicit Wait directs the Selenium WebDriver to wait for a certain measure of time before throwing an exception. Once this time is set, WebDriver waits up to that long for an element before the exception occurs.

Once the command is in place, Implicit Wait stays in place for the entire duration for which the browser is open. Its default setting is 0, and the specific wait time needs to be set by the following protocol.

To add implicit waits in test scripts, import the following package.

import java.util.concurrent.TimeUnit;

Syntax

driver.manage().timeouts().implicitlyWait(10, TimeUnit.SECONDS);

Add the above code into the test script. It sets an implicit wait right after the instantiation of the WebDriver instance variable.
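Put together, a minimal snippet looks like the following (the ChromeDriver instantiation and URL are illustrative assumptions, not part of the original steps):

import java.util.concurrent.TimeUnit;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.chrome.ChromeDriver;

WebDriver driver = new ChromeDriver();                           // illustrative driver setup
driver.manage().timeouts().implicitlyWait(10, TimeUnit.SECONDS); // applies to every subsequent findElement call
driver.get("https://example.com");                               // placeholder URL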

 

Explicit Wait in Selenium

By using Explicit Wait command, the WebDriver is directed to wait until a certain condition occurs before proceeding with executing the code.

Setting Explicit Wait is important in cases where there are certain elements that naturally take more time to load. If one sets an implicit wait command, then the browser will wait for the same time frame before loading every web element. This causes an unnecessary delay in executing the test script.

Explicit wait is more intelligent, but can only be applied for specified elements. However, it is an improvement on implicit wait since it allows the program to pause for dynamically loaded Ajax elements.

In order to declare an explicit wait, one has to use "ExpectedConditions". The following expected conditions can be used with Explicit Wait.

  • alertIsPresent()
  • elementSelectionStateToBe()
  • elementToBeClickable()
  • elementToBeSelected()
  • frameToBeAvailableAndSwitchToIt()
  • invisibilityOfTheElementLocated()
  • invisibilityOfElementWithText()
  • presenceOfAllElementsLocatedBy()
  • presenceOfElementLocated()
  • textToBePresentInElement()
  • textToBePresentInElementLocated()
  • textToBePresentInElementValue()
  • titleIs()
  • titleContains()
  • visibilityOf()
  • visibilityOfAllElements()
  • visibilityOfAllElementsLocatedBy()
  • visibilityOfElementLocated()

To use Explicit Wait in test scripts, import the following packages into the script.

import org.openqa.selenium.support.ui.ExpectedConditions;
import org.openqa.selenium.support.ui.WebDriverWait;

Then initialize a wait object using the WebDriverWait class.

WebDriverWait wait = new WebDriverWait(driver,30);

Here, the reference variable named wait is declared for the WebDriverWait class and instantiated using the WebDriver instance and a maximum wait time, after which the wait times out. Note that the wait time is measured in seconds.

Refer to these Javadocs for an explanation of each method:

http://javadox.com/org.seleniumhq.selenium/selenium-support/2.53.0/org/openqa/selenium/support/ui/WebDriverWait.html

http://javadox.com/org.seleniumhq.selenium/selenium-support/2.53.0/org/openqa/selenium/support/ui/ExpectedConditions.html

Example: JMeter WebDriver script showing explicit waits

 

import org.openqa.selenium.*;
import org.openqa.selenium.support.ui.*;
import java.util.concurrent.TimeUnit;

WebDriverWait wait = new WebDriverWait(WDS.browser, 20); // explicit wait with a 20-second timeout

// An expectation for checking that an element is present on the DOM of a page and visible.
// Visibility means that the element is not only displayed but also has a height and width
// greater than 0.
wait.until(ExpectedConditions.visibilityOfElementLocated(By.cssSelector("p[ng-if^='vm.assetCountWithoutLocation']")));

// An expectation for checking that an element is present on the DOM of a page.
// This does not necessarily mean that the element is visible.
wait.until(ExpectedConditions.presenceOfElementLocated(By.cssSelector("p[ng-if^='vm.assetCountWithoutLocation']")));

// An expectation for checking if the given text is present in the given element.
wait.until(ExpectedConditions.textToBePresentInElement(WDS.browser.findElement(By.id("displayed-asset-count")), "50"));

// An expectation for checking an element is visible and enabled such that you can click it.
wait.until(ExpectedConditions.elementToBeClickable(By.id("toplevel-dropdown")));

Elastic Stack Centralized logging solution practical explanation

Elastic Stack is the world’s most popular log management platform.

The ELK Stack is a collection of three open-source products — Elasticsearch, Logstash, and Kibana — all developed, managed and maintained by Elastic. Elasticsearch is a NoSQL database that is based on the Lucene search engine. Logstash is a log pipeline tool that accepts inputs from various sources, executes different transformations, and exports the data to various targets. Kibana is a visualization layer that works on top of Elasticsearch.

The stack also includes a family of log shippers (lightweight data shippers) called Beats, which led Elastic to rename ELK as the Elastic Stack.

ELK Stack Architecture

 

I will show you how to install Elasticsearch, Kibana and Filebeat (a log file shipper) and leverage them to ship our daily logs, analyse them, and visualize meaningful insights.

There are lots of ways to install them, but for demo purposes I'm going with downloading the zip files on Windows; later I will also walk through installing and running them in Linux containers.

Infrastructure Setup:

  • Download & Setup ElasticSearch
    • Run elasticsearch
    • Ensure that you are able to access elastic search using http://ip-of-elasticsearch-host:9200
  • Download & Setup Kibana
    • Update config/kibana.yml with the elastic url to fetch the data
    • Run kibana.bat / .sh
    • Ensure that you are able to access kibana using http://ip-of-kibana-host:5601
  • Download & Setup Filebeat
    • We need one Filebeat setup for each application instance whose logs we wish to ship.
    • Filebeat is responsible for collecting the log info and sending it to the Elasticsearch engine.
    • Update filebeat.yml file as shown below

Just enable a setting by removing the # before the line you wish to use.

  • Paths: the file path where I kept jmeter.log, using a wildcard (*)
  • A sample log file (any format) can be used.
  • By default, Filebeat logs each line in the log file as a separate log entry. Sometimes exceptions might span multiple lines, so we need to update filebeat.yml with a multiline pattern (see the sketch after this list).
  • For jmeter.log, each log entry has its own timestamp, so we can configure the pattern as multiline.pattern: ^[0-9]{4}/[0-9]{2}/[0-9]{2} (starting with a timestamp).
  • When there is no timestamp, Filebeat appends the line to the previous line, based on this configuration.
  • Install Filebeat as a service by running the install-service-filebeat PowerShell script in the extracted Filebeat folder, so that it runs as a service and starts collecting the logs configured under the paths in the yml file.
  • Filebeat will start monitoring the log file – whenever the log file is updated, data will be sent to ElasticSearch. Using Kibana we can monitor the log entries in ElasticSearch.
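A minimal sketch of the relevant filebeat.yml fragment (the input section key differs across Filebeat versions: newer versions use filebeat.inputs, older ones filebeat.prospectors; the path is illustrative):

filebeat.inputs:
  - type: log
    paths:
      - /path/to/jmeter*.log                          # point this at your own log files
    multiline.pattern: '^[0-9]{4}/[0-9]{2}/[0-9]{2}'  # a new entry starts with a timestamp
    multiline.negate: true                            # lines that do NOT match the pattern...
    multiline.match: after                            # ...are appended to the previous entry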

Verify the Filebeat logs under the ProgramData/filebeat/logs folder and check for the message indicating that Filebeat has started harvesting logs.

Log in to Kibana at http://ip-of-kibana-host:5601

Now you can ship all your logs to Elasticsearch, and from the Kibana UI we can visualize the data held in Elasticsearch.

In the next blog I will explain how to parse the fields in the log message using grok filters, and show how to create graphs and dashboards out of the log data.

 

Running Selenium webdriver script in JMeter part -1

JMeter simulates heavy request load and calculates response times for those requests, but that doesn't capture the client-side UI performance of the application.

JMeter does simulate requests, but it is not actually a browser, so it can't measure browser rendering time, JavaScript execution time, etc.

Using the JMeter WebDriver Sampler we can automate the execution and collection of performance metrics in the browser (client-side). A large part of performance testing, up to this point, has been on the server side of things. However, with the advancement of technology, HTML5, JS and CSS improvements, more and more logic and behaviour have been pushed down to the client.

All these things add to the overall browser execution time, and this sampler aims to measure the time it takes to complete rendering all this content.

It's always advised to run an HTTP sampler along with the WebDriver sampler to measure both the back-end and front-end performance of an application.

Steps:

  1. First upgrade JMeter to the latest version and install the Plugins Manager.
  2. Copy the Plugins Manager jar into the JMeter installation folder under lib/ext and restart JMeter.
  3. Open JMeter, click on the Plugins Manager icon, and add the Selenium WebDriver support plugin along with whatever else you require.
  4. Now create a Thread Group and add Config Element -> Chrome Driver Config.
  5. Download the latest ChromeDriver and add the chromedriver path under the Chrome tab.
  6. Add a WebDriver Sampler, add the code below into the sampler, select the script language as groovy (to support Java-like syntax), and add a listener.

WDS.sampleResult.sampleStart()
WDS.browser.get('http://google.com')
WDS.sampleResult.sampleEnd()


You will see a real Chrome browser open and perform the activities between the start and end calls of the WebDriver script.

The main fields that the scripter can make use of are:

  1. Name – for the test that each Web Driver Sampler will have. This is used by the reports.
  2. Parameters – is optional and allows the reader to inject variables used in the script section.
  3. Script – allows the scripter to control the Web Driver and capture times and success/failure outcomes.

Within the Script section, the Sampler automatically injects a WebDriverScriptable object with the name of WDS. This object has the following properties for the scripter to use:

  1. WDS.name – is the value provided in the Name field (above).
  2. WDS.vars – JMeterVariables – e.g.

vars.get("VAR1"); vars.put("VAR2","value"); vars.remove("VAR3"); vars.putObject("OBJ1", new Object());

  3. WDS.props – JMeterProperties (class java.util.Properties) – e.g.

props.get("START.HMS"); props.put("PROP1","1234");

  4. WDS.ctx – JMeterContext
  5. WDS.parameters – is the value provided in the Parameters field (above).
  6. WDS.args – is an array of the strings provided in the Parameters field, split by the space ' ' character. This allows the scripter to provide a number of strings as input and access each one by position.
  7. WDS.log – is a Logger instance allowing the scripter to debug their scripts by writing information to the JMeter log file (JMeter provides a GUI for its log entries).
  8. WDS.browser – is the configured Web Driver browser that the scripter can script and control. There is detailed documentation on this object on the Selenium Javadocs page.
  9. WDS.sampleResult – is used to log when the timing should start and end. Furthermore, the scripter can set success/failure state on this object, and this SampleResult is then used by the JMeter reporting suite. The JMeter Javadocs provide more information on the API of this object.

As mentioned previously, this sampler provides the scripter with a lot of control. This means that unlike most samplers, the scripter is responsible for performing several tasks within the Script panel:

  1. Capture the sample times by calling the `WDS.sampleResult.sampleStart()` and `WDS.sampleResult.sampleEnd()` methods.
  2. Executing the sampler task on the `WDS.browser` instance.
  3. Verifying if the task completed successfully. By default the sampler assumes success.

The following is a very simple sampler that captures all responsibilities mentioned above.

// 1a. Start capturing the sampler timing
WDS.sampleResult.sampleStart()
// 2. Perform the Sampler task
WDS.browser.get('http://google.com.au')
// 1b. Stop the sampler timing
WDS.sampleResult.sampleEnd()
// 3. Verify the results
if(WDS.browser.getTitle() != 'Google') {
    WDS.sampleResult.setSuccessful(false)
    WDS.sampleResult.setResponseMessage('Expected title to be Google')
}

 

Resource Footprint

Because it runs a real browser, a WebDriver test requires a lot of resources. In general you need one CPU core per virtual user. This makes it difficult to scale a WebDriver test to hundreds or thousands of virtual users. However, cloud load testing providers can help scale WebDriver tests up to thousands of real browsers; look at BlazeMeter for example.

 

 

 

 

Running Selenium webdriver script in JMeter part -2

This is a continuation of the previous post, so I suggest reading that one first.

A JMeter WebDriver script is the same as Selenium WebDriver code, but JMeter gives us a scriptable object, WDS, through which we do our automation.

The `Browser` object exposed in the `Script` section is an instance of the `WebDriver` object documented in the Selenium documentation. It is recommended that the reader have a look at the documentation to see what methods are available on the WebDriver API, to better understand what can be scriptable on the Browser instance.

To calculate the response time of the specific component enclose the call with sample start and sample end methods.

The JMeter WebDriver sampler locates elements using locators (object identifiers).

Different types of Locators in Selenium are as follows:

i. ID
ii. Name
iii. Class Name
iv. Tag Name
v. Link Text & Partial Link Text
vi. CSS Selector
vii. XPath

Locating elements in WebDriver is done by using the method findElement(By.locator()).

AJAX

Testing for AJAX content is one of the more complex parts, and if not scripted properly, it may be quite brittle as well. The following script will automate the following steps:

  1. Go to a web form
  2. Declare variables used for controlling AJAX
  3. Initiate interactions that will cause AJAX content to appear
  4. Verify results

// Sample WebDriver code with explanation
import org.openqa.selenium.*;
import org.openqa.selenium.support.ui.*;
import java.util.concurrent.TimeUnit;

WebDriverWait wait = new WebDriverWait(WDS.browser, 20); // explicit wait: if locating any element takes longer than 20 seconds, we get a timeout error
//WDS.browser.manage().timeouts().implicitlyWait(25, TimeUnit.SECONDS); // (implicit wait alternative)

WDS.browser.get("${samplewebsite}"); // open the specific website
WDS.log.info("opening url"); // logging; error and debug levels can be used as well
Thread.sleep(1000); // pause the thread before proceeding to the next step
WDS.browser.findElement(By.id("user")).sendKeys("${username}"); // By.<locator> locates an element using any locator mechanism
Thread.sleep(1000);
WDS.browser.findElement(By.id("password")).sendKeys("${userPassword}");
Thread.sleep(2000);
WDS.browser.findElement(By.className("btn-sign-in")).click();
wait.until(ExpectedConditions.elementToBeClickable(By.id("toplevel-dropdown")));
if (WDS.browser.findElements(By.cssSelector(".toast-message")).size() != 0) { // check for any on-screen messages and close them
    WDS.browser.findElement(By.cssSelector(".toast-close-button")).click();
}
Thread.sleep(1000);

WDS.browser.findElement(By.id("toplevel-dropdown")).click();

WDS.browser.findElement(By.id("toplevel_search")).click();

WDS.browser.findElement(By.id("toplevel_search")).sendKeys("${customerName}");

Thread.sleep(1000);
WDS.browser.findElement(By.cssSelector("[ng-data-key='${customerName}']")).click();
Thread.sleep(1000);
WDS.sampleResult.sampleStart(); // start the execution timer for the page
try {
    WDS.browser.findElement(By.id("account-select")).click();
    //Thread.sleep(5000);
    // wait for the AJAX content to display; times out after 20 seconds if the element is not located
    wait.until(ExpectedConditions.presenceOfElementLocated(By.cssSelector("p[ng-if^='vm.assetCountWithoutLocation']")));
    WDS.sampleResult.sampleEnd(); // stop the execution timer
} catch (ex) { // catch any exception that occurs
    WDS.log.error("exception details: " + ex);
    WDS.sampleResult.sampleEnd();
    WDS.sampleResult.setSuccessful(false); // mark the sample as failed
    WDS.sampleResult.setResponseMessage("Failed to identify element, timeout occurred"); // failure message
}

Enable JMeter logging to identify any issues in your scripts. 🙂