What is Functional Testing? Process, Challenges and Best Practices



Hardik Shah
in Quality Assurance
- 17 minutes

A few readers of my previous blog on Unit Testing had different opinions on its utility. Some went so far as to say that unit testing is a huge waste of time because those tests revolve around mock objects and mock method calls, and that functional testing is what really helps to find real-world bugs.

I strongly believe that unit testing has its own place in the software development lifecycle, though its results are implicit, showing up as code quality. When we talk about delivering quality software, functional testing has the highest ROI since it is done with real data. Functional testing verifies that the software performs its stated functions in a way that users expect.

The process of functional testing involves a series of tests: Smoke, Sanity, Integration, Regression, Interface, System and finally User Acceptance Testing. Tests are conducted on each feature of the software to determine its behavior, using a combination of inputs simulating normal operating conditions, and deliberate anomalies and errors.

Thus, after rigorous functional testing, you receive software with a consistent user interface, proper integration with business processes, and a well-designed API with robust security and network features.

What is functional testing?

Functional testing is a type of black-box testing performed to verify that each function/feature/module of the application works as per the given requirements. Are you able to log in to the system when you enter correct credentials? Does the payment system show an error message when you enter a card number with fewer than 16 digits? Does the “add a customer” screen successfully add a customer to your records? You get the idea.

Testers ensure that the functionality as a whole performs as the user requires. This is quite different from the testing done by developers, which is unit testing: it checks whether each piece of code works as expected individually. No matter how flawless the individual code components may be, it is essential to check that the app functions as expected when all components are combined.

Functional Testing Process

Functional testing aims to address the core purpose of the software and hence it relies heavily on the software’s requirements. Thus like requirements, testing scenarios and test cases require a great deal of organization.

For any given scenario, like “log in,” you have to understand the inputs and their expected outputs, and how to navigate to the relevant part of the application. With that information in hand, you also need to keep careful track of what you did while interacting with the software and then of whether the test succeeded or failed.

Let’s walk through functional testing step by step, taking the online payment process as an example. Here I’ve divided the whole process into 12 steps, but it may vary depending on the functionality and the testing process of the organization.


1. Testing Goals

In functional testing, we can describe goals as intended outputs of the software testing process. The main goal of functional testing is to check how closely the feature is working according to the specifications. For better understanding, we can divide functional testing goals into two parts: validation testing and defect testing.

Validation testing:

  • To demonstrate to the developer and client that the software meets the requirements.
  • A successful test should show that the system works as intended.

Defect testing:

  • To discover the defects in the functionality in terms of user interface, error messages and text handling.
  • A successful test should expose the defects when the functionality does not work as expected.

For the example of the checkout process, we can expect the following goals to be completed:

  • Payment Gateway should securely encrypt sensitive information like card numbers, account holder name, CVV number, and password.
  • This information should be transmitted securely from the customer to the merchant.
  • The system should show an error message when incorrect details are entered.
  • The user should get a confirmation message upon a successful transaction.
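As a minimal sketch of how these goals translate into automated checks, the snippet below uses a hypothetical `validate_payment_details` function standing in for the system under test; the function, its fields, and its messages are all illustrative, not a real payment API:

```python
import re

def validate_payment_details(card_number: str, cvv: str) -> dict:
    """Hypothetical stand-in for the payment form's validation logic."""
    if not re.fullmatch(r"\d{16}", card_number):
        return {"ok": False, "error": "Invalid card number"}
    if not re.fullmatch(r"\d{3}", cvv):
        return {"ok": False, "error": "Invalid CVV"}
    return {"ok": True, "message": "Transaction successful"}

# Validation testing: show that the system works as intended.
assert validate_payment_details("4111111111111111", "123")["ok"]

# Defect testing: expose the failure when functionality does not work as expected.
assert validate_payment_details("1234", "123")["error"] == "Invalid card number"
```

The validation test demonstrates correct behavior to the developer and client; the defect test deliberately feeds bad input to surface error handling.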

2. Team Member Assignments

The testing team must be properly structured, with defined roles and responsibilities that allow the testers to perform their functions with minimal overlap. One way to divide testing resources is to assign features based on priority, from high to low. The testing team may also have more role requirements than it has members, which must be considered by the test manager.

In our example of the checkout process, one member is sufficient to test all scenarios.

3. Scope

This section details the features that will be included in the functional testing phase(s).

  • Payment gateway
  • Debit/Credit Card Options
  • Notification/OTP check

4. Testing Approach & Tools

Here we can use Ranorex. In Ranorex 7.1, you can easily use conditions to cover the checkout process for all payment methods in a single test case, instead of having to create a test case for each payment method – without having to dip into code.

To do so, start by creating a smart folder in your test case for each payment method. Each smart folder should contain a module that covers the specific payment steps. Next, you will have to define when each smart folder should run. That’s when we need the conditions.

5. Entry and Exit criteria

A list of Entry Criteria must be completed before starting a Test Phase, and a list of Exit Criteria must be completed before exiting it. Both lists are detailed in the Test Plan.

Entry Criteria:

  • The project is code-complete: there are no missing features or media elements.
  • The feature satisfies the performance and memory requirements specified by the Functional Spec.
  • The software/curriculum has run and passed a Sanity Test provided by the QA group.

Exit Criteria:

  • All priority bugs are fixed and verified.
  • If any medium or low-priority errors are outstanding – the implementation risk must be signed off as acceptable by Business Analyst and/or Client.
  • Internal documentation has been updated to reflect the current state of the product.

6. Test Plans Identification

In this step, we list all the possible test scenarios for the given specification. A ‘test scenario’ is a summary of a product’s functionality, i.e. what will be tested. Test cases are then prepared based on these scenarios.

Scenarios: There can be thousands of test scenarios for an application with a decent amount of traction. A few of them are as follows.

1) User data transmitted to the gateway must be sent over a secure (HTTPS or other) channel.

2) Some applications ask the user to store card information. In that case, the system should store the card information in encrypted format.

3) Check validation for all mandatory fields. The system should not proceed with the payment process if data for any mandatory field is missing.

4) Test with Valid Card Number + Valid Expiry Date + Invalid CVV Number.

5) Test with Valid Card Number + Invalid Expiry Date + Valid CVV Number.

6) Test with Invalid Card Number + Valid Expiry Date + Valid CVV Number.

7) Test all Payment Options. Each payment option should trigger respective payment flow.

8) Test with multiple currency formats (if available).

9) Test with blocked card information.

10) Try to submit the Payment information after Session Timeout.

11) From the payment gateway confirmation page, click the browser’s Back button to check whether the session is still active.

12) Verify that the end user gets a notification email upon successful payment.

13) Verify that the end user gets a notification email with the proper reason upon payment failure.

14) Test authorization receipt after successful payment. Verify all fields carefully.

Out of all the above scenarios, we are considering only the payment functionality done via credit/debit card. Here we are using a combination of card enrollment and OTP success.

In the staging environment, you can use any credit card number which passes a basic Luhn check. Once you have completed successful staging tests, you can move onto production testing.
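The Luhn check mentioned above can be sketched in a few lines of Python; staging gateways typically accept any number that passes it:

```python
def luhn_valid(card_number: str) -> bool:
    """Check a card number with the Luhn algorithm."""
    digits = [int(d) for d in card_number if d.isdigit()]
    checksum = 0
    # Double every second digit from the right; subtract 9 if the result exceeds 9.
    for i, d in enumerate(reversed(digits)):
        if i % 2 == 1:
            d *= 2
            if d > 9:
                d -= 9
        checksum += d
    return checksum % 10 == 0

# "4111111111111111" is a well-known VISA test number that passes the check.
print(luhn_valid("4111111111111111"))  # True
print(luhn_valid("4111111111111112"))  # False
```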

7. Test Case Design

It is a standard testing methodology to create test cases to validate the requirements of the system. The following represents a specific methodology that will be utilized for the project as it pertains to requirements and test case design.

  • For each business process requirement, one or more test cases will be created to validate the business process.
  • Test cases can also be created to verify software feature requirements.
  • In the absence of business process requirements, the ‘Test Case Summary’ definition will become the requirement.
  • Test cases can be created outside the scope of a pre-existing requirement. In these cases, the ‘Test Case Summary’ will be considered the business process or software requirement.
  • A Test Case review can be initiated by either the business area or Component integration testing(CIT) to help ensure testing coverage & accuracy. This activity is recommended but not currently a mandated activity for the release.

Test case design techniques:

Equivalence Partitioning

Equivalence partitioning is a test case design technique that divides the input data of software into different equivalence classes. Test cases are designed for each equivalence class. The equivalence partitions are frequently derived from the requirements specification for input data that influence the processing of the test object. Using this method reduces the time necessary for testing software by using fewer, more effective test cases.
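As an illustration, consider partitioning a CVV field (assumed here to require exactly three digits) into one valid and several invalid equivalence classes, with one representative value per class:

```python
# Hypothetical validator for a 3-digit CVV field.
def cvv_is_valid(cvv: str) -> bool:
    return cvv.isdigit() and len(cvv) == 3

# One representative input stands in for each whole partition.
partitions = {
    "valid: exactly 3 digits": ("123", True),
    "invalid: too short": ("12", False),
    "invalid: too long": ("1234", False),
    "invalid: non-numeric": ("12a", False),
}

for name, (sample, expected) in partitions.items():
    assert cvv_is_valid(sample) == expected, name
```

Four test cases cover the entire input space of the field, instead of testing every possible string.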

Boundary Value Analysis

Boundary value analysis (BVA) is based on testing the boundary values of valid and invalid partitions. The behavior at the edge of each equivalence partition is more likely to be incorrect than the behavior within the partition, so boundaries are an area where testing is likely to yield defects.

Every partition has its maximum and minimum values, and these are the boundary values of the partition.

A boundary value for a valid partition is a valid boundary value. Similarly, a boundary value for an invalid partition is an invalid boundary value.
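For example, if a transaction amount field accepts values from 1 to 100,000 (a hypothetical range chosen for this sketch), BVA tests the values on and just outside those boundaries:

```python
# Hypothetical rule: valid transaction amounts are 1..100000 inclusive.
def amount_is_valid(amount: int) -> bool:
    return 1 <= amount <= 100_000

# Boundaries of the valid partition, plus the invalid values just outside them.
boundary_cases = {0: False, 1: True, 100_000: True, 100_001: False}

for value, expected in boundary_cases.items():
    assert amount_is_valid(value) == expected, value
```

An off-by-one mistake such as `1 < amount` would immediately fail the `1: True` boundary case.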

Here we take the example of a payment made with a VISA card, using a combination of card enrollment and OTP success.


8. Create input data

You can keep test data in an Excel sheet to be entered manually while executing test cases, or it can be read automatically from files (XML, flat files, databases, etc.) by automation tools. There are mainly two types of input data.

Fixed input data

  • Fixed input data is available before the start of the test, and can be seen as part of the test conditions.

Consumable Input data

  • Consumable input data is consumed during test execution and forms the actual test input.

In our example, we have to create input data manually. We’ll require the following types of data:

  • Card Type
  • 16 digit Card Number
  • CVV number
  • Expiry date
  • Name on the card.

We can use http://www.getcreditcardnumbers.com/ to generate some test card numbers.
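Instead of an external generator, a small script can also produce staging card numbers with a valid Luhn check digit. This is a sketch for staging use only; the “4” prefix loosely mimics VISA numbering and is an assumption, not a guarantee of gateway acceptance:

```python
import random

def make_test_card_number(prefix: str = "4", length: int = 16) -> str:
    """Generate a random card number whose Luhn check digit is valid."""
    # Random body: prefix plus random digits, leaving room for the check digit.
    body = prefix + "".join(str(random.randint(0, 9))
                            for _ in range(length - len(prefix) - 1))
    # Compute the Luhn check digit for the body.
    total = 0
    for i, ch in enumerate(reversed(body)):
        d = int(ch)
        if i % 2 == 0:  # these positions get doubled once the check digit is appended
            d *= 2
            if d > 9:
                d -= 9
        total += d
    check_digit = (10 - total % 10) % 10
    return body + str(check_digit)

print(make_test_card_number())  # e.g. a 16-digit number starting with 4
```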

9. Determine output for the functionality under test

For each test case, determine the expected output from the requirement specification. This expected output becomes the baseline against which the actual output is compared during test execution.

10. Defect tracking system

When defects are found, the testers complete a defect report in the defect tracking system, e.g. JIRA. The defect tracking system is accessible to testers, developers, and all members of the project team. When a defect has been fixed, or when more information is needed, the developer changes the status of the defect to indicate its current state. Once a defect is verified as FIXED by the testers, they close it.

Defect Categories:

For the given example, I’ve divided the defects found during testing based on their severity:

  1. Major: After clicking the submit button, authorization is not requested from the customer’s issuing bank to confirm the card holder’s validity.
  2. Blocker: The user cannot select the card type and hence cannot proceed further.
  3. Minor: The “Invalid Card Number” error message is not shown when the user enters a wrong card number.
  4. Trivial (Cosmetic): The cursor does not move to the next box when the user presses the Tab key.
  5. Enhancement: The color of the error message is black instead of red. This is not related to a business requirement and can be fixed in the next testing phase.

11. Executing Test Cases

It is the comparison of the test case’s expected results to its actual results (obtained from the test execution run) that determines whether the test has a ‘Pass’ or ‘Fail’ status.

Test Case status definitions are:

Passed (P): The test run’s result matches the expected result.

Failed (F): The test run’s result did not match the expected result. In some cases, the result matched the expectation but caused another problem. A defect must be logged and referenced for every failed test case.

Not Run (NR): The test has not yet been executed. In a test case database, all tests start from the default status of ‘NR’.

In Progress (IP): The test has been started but not all of its steps have been completed.

Investigating (I): The test has been run, but the team is still investigating whether to declare it passed or failed.

Blocked (B): The test cannot be executed due to a blocking issue. For example, some test cases can’t be run because of hardware issues.
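These statuses can be modeled directly, for example as a Python enum with a counter summarizing a run log (the test case names below are illustrative):

```python
from enum import Enum
from collections import Counter

class TestStatus(Enum):
    PASSED = "P"
    FAILED = "F"
    NOT_RUN = "NR"
    IN_PROGRESS = "IP"
    INVESTIGATING = "I"
    BLOCKED = "B"

# A sketch of a run log; in a real test case database every test starts as NOT_RUN.
run_log = {
    "TC-01 valid card": TestStatus.PASSED,
    "TC-02 invalid CVV": TestStatus.FAILED,
    "TC-03 session timeout": TestStatus.BLOCKED,
    "TC-04 blocked card": TestStatus.NOT_RUN,
}

summary = Counter(run_log.values())
print(summary[TestStatus.PASSED])  # 1
```

A summary like this feeds directly into the status reporting described in the next step.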

12. Test Status Reporting

This is a report of testing activities from a specific functional area to the project manager.

  • For all Test Cases in Failed state, provide a total count of problems in each of the severity levels (1-5).
  • For all Test Cases in Failed state, provide a description of each open problem with a severity level of 1 or 2 and current status.
  • For all Test Cases in Investigating, Blocked, or Deferred state, provide comments (indicating the RT ticket, if applicable, and the reason for the block or deferment).

The actual output, i.e. the output obtained after executing the test case, is compared with the expected output (determined from the requirement specification) to find out whether the functionality works as expected.

Functional testing types

Sanity Testing

Sanity testing is performed when testers don’t have enough time for thorough testing. It is surface-level testing in which the QA engineer verifies that all the menus, functions, and commands available in the product are working fine.

Smoke testing

Smoke testing is a kind of software testing performed after a software build to ascertain that the critical functionalities of the program are working fine. It is executed before any detailed functional or regression tests are run on the build. The purpose is to reject a badly broken build so that the QA team does not waste time installing and testing the software application.

Regression tests

In regression testing, test cases are re-executed to check that previous functionality still works and that new changes have not introduced any new bugs. This test can be performed on a new build whenever there is a significant change to the original functionality, or even after a single bug fix.

Integration tests

The most common use of the concept of integration testing is directly after unit testing. A unit will be developed, it will be tested by itself during unit test, and then it will be integrated with surrounding units in the program. Remember, the point of integration testing is to verify proper functionality between components, not to retest every combination of lower-level functionality.

Beta/Usability testing

Functionality testing confirms whether something works. Usability testing confirms whether it works well for users. Functional testing precedes usability testing so that the testers using the application can give valid feedback. It is often the last step before a feature or piece of software goes live. Usability tests make sure the software meets user needs.

Functional Testing Best Practices

Start writing testing cases early in the requirement analysis & design phase

If you start writing test cases during an early phase of the software development lifecycle, you will learn whether all requirements are testable. While writing test cases, first consider valid/positive test cases that cover all expected behavior of the application under test. After that, you can consider invalid conditions/negative test cases.

Keep balance across testing types

If all of the tests for a functionality run through the GUI, a lot isn’t getting tested, and the tests tend to cost more to maintain because interfaces change more often than classes and services. GUI tests also run later in the product cycle, since they tend to require that all of the product layers be built (regardless of the order in which they are created).

Automated functional testing

Automated tests help avoid repeated manual work, give faster feedback, and save time on running tests over and over again. It is impossible to automate all test cases, so it is important to determine which test cases should be automated first.

However, it’s hard to determine which tests need to be automated, as the choice is largely subjective, depending on the functionality of the app or software. To get the best ROI, I’ve listed some of the common parameters for selecting test cases for automation testing.

  • Test case executed with a different set of data
  • Test case executed with a different browser
  • Test case executed with different environment
  • Test case executed with complex business logic
  • Test case executed with a different set of users
  • Test case Involves a large amount of data
  • Test case has any dependency
  • Test case requires Special data

Test most important feature first

You should test the thing that makes you money first and should test supporting functionalities later. The place where you make money is the place that will have the largest demand for new and changing functionality. And where things change the most is where you need tests to protect against regressions.

Understanding How the User Thinks

The main distinction between QA and development is the state of mind. While developers write pieces of code that later become features in the application, testers are expected to understand how the application satisfies user needs.

Let’s take the example of eBay. In such a mature online platform, there are different types of users, such as sellers, buyers, and support agents. When planning tests, all these personas must be taken into consideration, with a particular test plan for each.

Prepare traceability matrix

A Requirement Traceability Matrix (RTM) captures all requirements proposed by the client or the software development team, along with their traceability, in a single document delivered at the conclusion of the life cycle.

In other words, it is a document that maps and traces user requirements to test cases. The main purpose of the Requirement Traceability Matrix is to verify that all requirements are covered by test cases so that no functionality is missed during testing.
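A minimal sketch of such a matrix, with illustrative requirement and test case IDs, makes uncovered requirements easy to spot:

```python
# A tiny traceability matrix: requirements mapped to the test cases covering them.
# All IDs below are illustrative, not from a real project.
rtm = {
    "REQ-001 secure transmission": ["TC-01"],
    "REQ-002 mandatory field validation": ["TC-02", "TC-03"],
    "REQ-003 payment confirmation email": [],
}

# Any requirement with no covering test case is a coverage gap.
uncovered = [req for req, cases in rtm.items() if not cases]
print(uncovered)  # ['REQ-003 payment confirmation email']
```

In practice the same mapping usually lives in a spreadsheet or test management tool, but the coverage check is identical: every requirement must map to at least one test case.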

Challenges Faced in Performing Functional Testing

Unavailability of Documentation of the Projects

QA has to depend on the release build available on QA servers to understand the project, write test cases, and then execute them. Often, no proper documentation accompanies the releases provided to the test team. The test engineers are not aware of the known issues, the main features to be tested, etc. Hence, a lot of effort is wasted.

Increasing size and complexity of software systems

Existing functional testing techniques do not take advantage of test cases already available for parts of the artifact under test. Compositional approaches that derive test cases for a given system by taking advantage of the test cases available for its subsystems remain an important open research problem.

Budget and quality constraints

Different quality and budget constraints may lead to different choices. For example, random test case generation may be appropriate if the primary constraint is rapid, automated testing, and reliability requirements are not stringent. In contrast, thorough testing of a safety-critical application may require the use of sophisticated methods for functional test case generation.

It is important to evaluate all relevant costs while choosing an approach. For example, generating a large number of random test cases may necessitate the design and construction of sophisticated test oracles. Also, the cost of training to use a new tool may exceed the advantages of adopting a new approach.


As a product manager, you need to understand that not every function or requirement needs to be tested, and in fact not everything can possibly be tested. High-priority, “big rock” items need testing, preferably automated testing, before release. “The rest” can be completed as time allows, prioritized according to the importance of the particular functionality you are testing.

At Simform, we believe – Not done testing it? Don’t ship it. If you’ve determined that a certain set of tests is necessary for release, then stick to it and hustle if you’ve not finished testing. Functional testing is basic and essential because it’s foundational. Without it, no concept of done exists. So make sure you do adequate functional testing and pay attention to the results.

Hardik Shah

Working for the last 8 years in consumer and enterprise mobility, Hardik leads large-scale mobility programs covering platforms, solutions, governance, standardization, and best practices.

Stomp out bugs with Functional Testing

