Blog

Latest Updates. News. Insights. Ideas.

August 2016 - Sahi Pro

Team Sahi Pro’s solution at the Test Automation Contest


After the blog post here, we asked Rahul Verma if we could publish our solution on this blog. We were initially concerned that the same problem might be set in future contests. After getting the go-ahead, we are publishing and describing our solution here. The application for this contest was “WordPress”. There were three scenarios to be automated within three hours, and one of the main requirements was that these scenarios should run unattended.

Scenario 1: Publish five unique blog posts. Verify that the blog posts have been published successfully.
Scenario 2: Create a user and login with the created user.
Scenario 3: Search for a particular post and delete the blog post.

Our Approach:
We used Sahi Pro to automate these scenarios, following what we recommend to our users:
– Record with the Accessor Repository enabled and create snippets of code
– Extract functions into a functions library
– Create scenario files (.s.csv)
– Keep the data external to the script and read the file when required
– Club the scenario files into a data-driven suite (.dd.csv)
– Add tags for each scenario
– Execute the .dd.csv from an ant target

So, the final .dd.csv file is as follows:

blogTest.dd.csv

Here, we are running three scenario files, tagging each scenario with ‘all’ plus a scenario-specific tag (scn1, scn2 or scn3).
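To illustrate, a hypothetical blogTest.dd.csv could look like the sketch below. The file names for scenarios 2 and 3 and the exact column layout are placeholders – the real layout follows Sahi Pro’s data-driven suite format:

```csv
Scenario,Tags
createBlogPosts.s.csv,all scn1
createUserAndLogin.s.csv,all scn2
searchAndDeletePost.s.csv,all scn3
```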
The scenario file for the first scenario is as follows:
We log in to the application and add five unique blog posts. Once the posts are published, we verify them and finally log out of the application.

createBlogPosts.s.csv

Let me describe the components of a scenario file here.
The first line of a scenario file is the column header – TestCase, Key Word, Argument 1, Argument 2.
loadSahi loads a Sahi script with the required function definitions. In this case, we load ‘blogARBlog.sah’ and ‘createBlogPost.sah’. blogARBlog.sah is the Accessor Repository file, which we will describe in detail later.
We read the data from an external file – in this case, a CSV file – using the _readCSVFile API.
[Documentation] is for documentation purposes. One can note the purpose of the test case here.
In [Data] [data:Posts:title] [data:Posts:content] [data:Posts:title], we read the data from Posts.csv as per the column headings.
The other keywords are functions – login, addPostsAndVerify, logout.
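Putting these components together, a rough sketch of the scenario file could look like the following. The column layout and keyword placement here are only indicative – consult Sahi Pro’s scenario-file documentation for the exact format:

```csv
TestCase,Key Word,Argument 1,Argument 2
,loadSahi,blogARBlog.sah,
,loadSahi,createBlogPost.sah,
TC01,[Documentation],Publish five unique posts and verify them,
,login,,
,[Data] [data:Posts:title] [data:Posts:content],,
,addPostsAndVerify,[data:Posts:title],[data:Posts:content]
,logout,,
```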

As we recommend to our users, we also extracted functions and saved them in a separate functions library file.
The functions library file and the first script file are as follows:
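As a flavour of what such a library could contain, here is a hand-written sketch in Sahi Script. The function bodies and element accessors (user_login, wp-submit, post_title, etc.) are illustrative assumptions, not the code we wrote in the contest:

```javascript
// functionsLib.sah -- illustrative sketch; element accessors are hypothetical
function login($username, $password) {
  _setValue(_textbox("user_login"), $username);
  _setValue(_password("user_pass"), $password);
  _click(_submit("wp-submit"));
}

function addPostsAndVerify($title, $content) {
  _click(_link("Add New"));
  _setValue(_textbox("post_title"), $title);
  _setValue(_textarea("content"), $content);
  _click(_submit("publish"));
  _assertExists(_link($title)); // verify the post appears in the list
}

function logout() {
  _click(_link("Log Out"));
}
```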

functionsLib.sah

createBlogPost.sah (Part 1)

createBlogPost.sah (Part 2)

We also used the Accessor Repository file in our scripts. This prevents the same element from being identified in multiple ways in different scripts. If a change is expected in an element’s accessor, we need to update it in only one place – the Accessor Repository (AR) file – and all the scripts which include the AR file pick up the change. This saves a lot of time and effort. The AR file for the first scenario is as follows:
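Conceptually, an AR file is just a Sahi script that names each element once. Sahi Pro’s recorder generates these entries automatically; the sketch below is a hand-written illustration with hypothetical variable names and accessors:

```javascript
// blogARBlog.sah -- Accessor Repository sketch (names are hypothetical)
var $login_username = _textbox("user_login");   // WordPress login form
var $login_password = _password("user_pass");
var $login_button   = _submit("wp-submit");
var $add_new_post   = _link("Add New");
var $post_title     = _textbox("post_title");
```

Scripts that load this file refer to $login_username and friends, so a changed accessor is fixed in one place.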

blogARBlog.sah

And to identify the elements, we used our powerful recorder and object identifier, which works on any modern browser. Once we ran the scenarios, the logs were generated automatically.

OverallLogs

Scenario1Logs

Finally, we generated the ant target, added it to build.xml, configured Jenkins and ran the ant command from the command line.
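One way such a target could be wired up is by shelling out to Sahi Pro’s testrunner script from Ant. The target name, paths, start URL and argument order below are placeholders – check your Sahi Pro installation’s documentation for the exact testrunner arguments:

```xml
<!-- build.xml sketch: names and arguments are illustrative placeholders -->
<target name="runBlogSuite">
  <exec executable="cmd" dir="${sahi.userdata}/bin" failonerror="true">
    <arg value="/c"/>
    <arg value="testrunner.bat"/>
    <arg value="blogTest.dd.csv"/>            <!-- suite to run -->
    <arg value="http://localhost/wordpress"/> <!-- hypothetical start URL -->
    <arg value="firefox"/>                    <!-- browser -->
  </exec>
</target>
```

Jenkins then only needs a build step that invokes `ant runBlogSuite`.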

Some of the unique features of our approach:

  • We did not use any waits throughout the whole exercise. Sahi Pro automatically waits for the whole page to load and for all AJAX activity to complete, so one need not add any explicit waits.
  • We used relational APIs like _near, which identify an element in relation to another element we are sure of – one that will not change its position.
  • We recorded on one browser and, without any change to the script, could play it back across browsers. We could also do a parallel playback using Sahi Pro.
  • Sahi Pro generated the reports after playback without us writing any extra code.
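As a small illustration of the relational style, a delete action could be anchored to the row of a known post title. The link text, cell content and surrounding markup here are hypothetical:

```javascript
// Hypothetical example: act on the row containing a known post title.
// _near anchors identification to an element we trust not to move.
_click(_link("Trash", _near(_cell("My unique post title"))));
```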

You can download the code for the whole exercise here: stepin

If you also used Sahi Pro in the competition and followed a different approach, feel free to share your approach with us. Hope you liked our solution. Till next time, happy testing with Sahi Pro!

Sahi Pro powers Test Automation of eBaoTech’s Digital Insurance Platform

Sahi+eBaoTech

Case study on eBaoTech

eBaoTech’s mission is to “make insurance easy”. With more than 150 installations in more than 30 countries, eBaoTech is one of the global leaders in insurance technology. From its inception in 2000, eBaoTech has been an innovation leader at the intersection of insurance and the internet. eBaoTech offers world-leading insurance software solutions for both life and general insurance.

“Sahi Pro creates a win-win situation for employee as well as employer: the manual tester’s skill enhancement provides better job satisfaction to the employee, and the employer gets the benefits of employee retention, automation and load testing.”
– Rahul Agarwal, Manager Testing, Automation & QA
eBaoTech-Collaborus

Click here to download the complete case study.

Test_Automation_Contest

Team Sahi Pro – runner up in the Test Automation Contest at STeP-IN SUMMIT 2016


Team Sahi Pro believes in continuous learning, and we nominated ourselves for the Test Automation Contest conducted by the STeP-IN Forum as part of STeP-IN SUMMIT 2016. Each team could have up to five members. We went with three – our lead developer Kshitij Gupta, our support guru Pratik Shah and yours truly (Ajay Balamurugadas). As soon as we got the email from the organizing committee about the pre-requisites, we kept asking them questions about the contest – the format, the purpose of the VM, the pre-requisites and what else was necessary from our side. Once we understood the expectations, we had all the pre-requisites on a pen drive, ready for the contest.

Contest Day:

We arrived on time for the contest, ready for the instructions. There was only one working socket in the power strip near our table; we quickly got it replaced and tested it. Then, once the pen drive with the VM image was given to us, we faced issues with the network and BIOS settings – virtualization was disabled in the BIOS. Rahul explained the different virtual-machine networking modes like Host-only, Bridged and NAT, and the volunteers from Test Mile helped us get ready for the contest. We received the problem statement at 9:35 am, quickly read the instructions and jumped to asking questions. It was soon announced that we would get dedicated time for questions.

There were three scenarios to be automated, and the criterion for selection to the final round was straightforward: the teams that had the scenarios automated would be through to the next round. There were a total of 32 teams participating and only 8 final slots. While Pratik started with the first scenario and Kshitij worked on the second, I took the role of a business user who would keep cross-checking the implementation against the requirement document. I also acted as timekeeper, noting down the team’s progress in a notepad.

Test Automation Contest

120 mins left for the contest to end:
We were clear on what to achieve by the end of the three hours and the progress was steady. None of the scenarios was fully automated yet, and work on the third scenario had not even started. At the same time, we believed that whatever we had done till now was without any errors.

60 mins left for the contest to end:
The first two scenarios were automated, with the code on two different machines. We also asked the judges clarifying questions and were ready with some features over and above the expectations. While Kshitij refactored his code for scenario 2, Pratik ensured that his scenario 1 code worked without any errors. We wanted both code snippets to be consistent with each other, so we also cross-checked the function names, password masking and the comments. A backup of each scenario was emailed to all of us.

30 mins left for the contest to end:
By now, we had integrated the code and started work on the third scenario. Some merge issues were resolved quickly. There were also some issues with the Accessor Repository files: each scenario was played back individually and the problematic line was identified. Though we were tempted to hard-code the value due to lack of time, good sense prevailed and we troubleshot it the right way.

15 mins left for the contest to end:
We started our work on Jenkins integration and tested our code on both machines; one of them did not have Jenkins configured. As the clock struck 1 pm, we were asked to close the laptops, wait for the judges and head to lunch. We were satisfied with what we had achieved. Before we left, we had started the playback to verify one last time. The slides were also ready with our progress timeline, approach and the CTC (Challenges, Team work and Components) used. Then the judges came to us and asked us to demonstrate our work. They checked whether our code worked, which tool we used and the approach we followed. A few questions were asked, and we confidently answered all of them to our satisfaction.

After a good lunch, we were back to find out who had qualified for the final round. We waited for more than 15 mins without knowing whether we had qualified. Then the team names were announced: we were in the top 8. We were ready with the tweet, and as soon as our name was announced, we pressed the ‘Tweet’ button.

Tweet


It was time for the presentations to begin; we were third in the presentation order. We connected our laptop to the projector via the HDMI cable, but the input was not detected. We were given some time to sort it out and tried installing drivers, but it did not help. We had to switch to the other laptop – the one without Jenkins – so we quickly configured Jenkins on this machine and were ready for our presentation.

Presentation Time:
We presented our approach, the code for each scenario, the challenges we faced and how we solved each of them – the latest challenge being the HDMI connectivity issue. We also demonstrated our solution by playing back the whole code. We highlighted the strengths of our approach and answered questions on why a particular approach was followed. There were also questions on which parts we built in the three hours and which came pre-packaged with the tool.

Team Presentation

Time for results:
Then the results of the Test Automation Contest were announced, along with highlights of each of the 8 finalists’ presentations. Team Sahi Pro came second, and we were happy about it. One reason we thought we missed the first prize: Sahi Pro is so feature-rich that it might look as though we did not do much in the 3 hours, which is acceptable. It is a testimony to the power of Sahi Pro. 🙂

Runner Up

Our special thanks to our CTO Mr. Narayan Raman for sponsoring this team for the contest. We also want to thank everyone at Sahi Pro for making Sahi Pro – The Tester’s Web Automation Tool. Looking forward to the next contest. Till then, happy testing with Sahi Pro!!!
