A
A/B Testing
A technique in which a new feature, or different variants of a feature, are made available to different sets of users and evaluated by comparing metrics and user behavior.
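As a minimal illustration (not tied to any particular experimentation platform, and with a hypothetical salt and user IDs), users can be assigned to variants deterministically by hashing their ID, so each user always sees the same variant while metrics are compared between groups:

```python
import hashlib

def assign_variant(user_id: str, variants=("control", "treatment"), salt="experiment-42"):
    """Deterministically assign a user to a variant by hashing the user ID.

    The same user always lands in the same bucket, which keeps the
    comparison of metrics between groups stable across sessions.
    """
    digest = hashlib.sha256(f"{salt}:{user_id}".encode()).hexdigest()
    bucket = int(digest, 16) % len(variants)
    return variants[bucket]

# Example: split a handful of users across the two variants
for uid in ("alice", "bob", "carol"):
    print(uid, "->", assign_variant(uid))
```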
Acceptance Criteria
- The external quality characteristics specified by the product owner from a business or stakeholder perspective. Acceptance criteria define desired behavior and are used to determine whether a product backlog item has been successfully developed.
- The exit criteria that a component or a system must satisfy in order to be accepted by a user, customer, or other authorized entity.
Acceptance Testing
Typically used by a client or a stakeholder, acceptance testing is a set of tests that prove the product is suitable for its intended use. Acceptance tests are often also end-to-end tests, but with the specific purpose of preventing the release or handover of a solution if they fail.
Acceptance-Test-Driven-Development (ATDD)
A technique in which the participants collaboratively discuss acceptance criteria, using examples, and then distill them into a set of concrete acceptance tests before development begins.
Actual Result/Outcome
Actual outcome, also known as actual result, is what a tester gets after performing the test. The actual outcome is always documented along with the test case during the test execution phase. After performing the tests, the actual outcome is compared with the expected outcome and the deviations are noted.
Ad Hoc Testing
Ad hoc testing is a non-methodical approach to assessing the viability of a product.
Agent
A part of the server-agent duo of programs, running in some instance or container to provide input for the centralized server app (like Zabbix monitoring agent).
Agent, also called softbot (“software robot”), is a computer program that performs various actions continuously and autonomously on behalf of an individual or an organization. For example, an agent may archive various computer files or retrieve electronic messages on a regular schedule. Such simple tasks barely begin to tap the potential uses of agents, however. This is because an intelligent agent can observe the behaviour patterns of its users and learn to anticipate their needs or at least their repetitive actions. Such intelligent agents frequently rely on techniques from other fields of artificial intelligence, such as expert systems and neural networks, and aim to achieve complex goals.
Agile
A precursor to DevOps. Agile is a software development and, more broadly, business methodology, that emphasizes short, iterative planning and development cycles to provide better control and predictability and to support changing requirements as projects evolve.
Used in the DevOps world to describe infrastructure, processes or tools that are adaptable and scalable. Being agile is a key focus of DevOps.
Agile Project Management
Agile Project Management (APM) is an iterative approach to planning and guiding project processes.
Agile Software Development
A methodology of software delivery based on short iterative sprints of development, where every sprint should result in an operational product. This allows for easy adjustment of the project requirements should the need arise and empowers creativity and flexibility within the development teams.
AI
The ability of a digital computer or computer-controlled robot to perform tasks commonly associated with intelligent beings. The term is frequently applied to the project of developing systems endowed with the intellectual processes characteristic of humans, such as the ability to reason, discover meaning, generalize, or learn from past experience.
Artificial intelligence (AI) is the simulation of human intelligence processes by machines, especially computer systems. These processes include learning (the acquisition of information and rules for using the information), reasoning (using rules to reach approximate or definite conclusions) and self-correction. Particular applications of AI include expert systems, speech recognition and machine vision.
AIOps
AIOps combines big data and machine learning to automate IT operations processes, including event correlation, anomaly detection and causality determination.
AIOps Platform
An AIOps platform combines big data and machine learning functionality to support all primary IT operations functions through the scalable ingestion and analysis of the ever-increasing volume, variety and velocity of data generated by IT. The platform enables the concurrent use of multiple data sources, data collection methods, and analytical and presentation technologies.
Alpha Testing
Alpha testing is a type of acceptance testing performed to identify all possible issues/bugs before releasing the product to everyday users or the public. The focus of this testing is to simulate real users using black box and white box techniques. The aim is to carry out the tasks that a typical user might perform. Alpha testing is carried out in a lab environment and, usually, the testers are internal employees of the organization. To put it as simply as possible, this kind of testing is called alpha only because it is done early on, near the end of the development of the software, and before beta testing.
Anomaly Detection
Anomaly detection is the identification of data points, items, observations or events that do not conform to the expected pattern of a given group. These anomalies occur very infrequently but may signify a large and significant threat such as cyber intrusions or fraud.
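A minimal sketch of the idea using a simple z-score rule (real systems typically use far more sophisticated statistical or machine-learning models; the latency figures below are illustrative):

```python
from statistics import mean, stdev

def find_anomalies(values, threshold=2.5):
    """Flag points lying more than `threshold` standard deviations from the mean."""
    mu, sigma = mean(values), stdev(values)
    return [v for v in values if sigma > 0 and abs(v - mu) / sigma > threshold]

latencies_ms = [102, 98, 105, 99, 101, 103, 97, 100, 560]  # 560 ms is the outlier
print(find_anomalies(latencies_ms))  # -> [560]
```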
Application Infrastructure
Application infrastructure comprises the software platforms for the delivery of business applications, including development and runtime enablers.
Application Level Testing
Application testing is a type of software testing, conducted through scripts, with the goal of finding errors in software. It deals with tests for the entire application. It helps to enhance the quality of your applications while reducing costs, maximizing ROI, and saving development time.
Application Performance Monitoring (APM)
Application performance monitoring (APM) is a suite of monitoring software comprising digital experience monitoring (DEM), application discovery, tracing and diagnostics, and purpose-built artificial intelligence for IT operations.
Application Release Automation (ARA)
Application Release Automation, or ARA, is the consistent, repeatable and auditable process of packaging and deploying an application or update of an application from development, across various environments, and ultimately to production. Successful ARA eliminates the need to build and maintain custom scripts for application deployments, while simultaneously reducing configuration errors and downtime. By providing a model-based approach to performing critical automation tasks, it effectively increases the speed to market associated with agile development and gives stakeholders the ability to coordinate and automate releases between multiple groups and people.
Application Release Orchestration (ARO)
Tools, scripts, or products that automatically install and correctly configure a given version of an application in a target environment, ready for use. Also referred to as “Application Release Automation” (ARA) or “Continuous Delivery and Release Automation” (CDRA).
Application Software Services
The application software services segment includes back-office, ERP and supply chain management (SCM) software services, as well as collaborative and personal software services. It also covers engineering software and front-office CRM software services.
Artifact
Any process description in the software delivery pipeline that can be referred to. The most widespread artifacts are use cases, class diagrams, UML models and design documents.
A tangible by-product produced during product development. The product backlog, sprint backlog, and potentially shippable product increment are examples of Scrum artifacts.
Assertion
Assertions are statements that perform an actual check on the software’s output. In general, a single function called assert is enough to express any check. In practice, test libraries often have many assert functions to meet specific needs (such as assertFalse, assertEqual and more) to offer better analysis and friendlier output.
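For example, in Python's built-in unittest library the specialised assert methods look like this (apply_discount is a hypothetical function under test):

```python
import unittest

def apply_discount(amount, percent):
    """Hypothetical function under test."""
    return amount * (1 - percent / 100)

class TestDiscount(unittest.TestCase):
    def test_discount_is_applied(self):
        price = apply_discount(100.0, percent=10)
        self.assertAlmostEqual(price, 90.0)   # check a numeric value
        self.assertFalse(price > 100.0)       # check a boolean condition

if __name__ == "__main__":
    unittest.main()
```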
Augmented Intelligence
Augmented intelligence is a design pattern for a human-centered partnership model of people and artificial intelligence (AI) working together to enhance cognitive performance, including learning, decision making and new experiences.
Automated Provisioning
Automated provisioning is defined as the fully automated delivery and maintenance of application environment components. Application environment components are the deployment target containers of the application. For example, a database server or application runtime server. In a DevOps organization, automated provisioning can be the responsibility of DevOps Platform teams.
Automated Testing
Automated testing applies to commercially or internally developed software or services to assist in the testing process, including functional and load/stress testing. Automated tests provide consistent results and data points. The benefits are ease of maintenance, the ability to efficiently use resources in off-peak hours, and the capability to create reports based on the executed tests. Associated quality management tools include functionality for test planning, test case management and defect management (the governance piece of quality).
Automation
The technology by which a process or procedure is performed without manual intervention. In DevOps, automation allows for the creation of real-time reports, integrating various tools used by different stakeholders, and workflows—integrating technology to bring tools together from different domains and break down the silos.
Automation is the reduction of manual tasks by automating them. Usually the goal of automation is to increase the efficiency of processes by speeding up steps or by reducing the risk of errors. Automation is worth it if there is a reasonable ROI (Return on Investment), meaning that the initial investment of work will be more than compensated in the long run by the increased efficiency and reduced manual work.
B
Backlog Refinement Session
Scrum Term – This session is used to anticipate and define the User Stories expected in the next sprint and to communicate uncertainties where User Stories are unclear. The session typically takes place halfway through a sprint, leaving room for the Business and the Product Owner to improve User Stories where needed before the start of the next sprint.
Backlogs
An interactive list of work items that corresponds to a team’s project plan or roadmap for what the team plans to deliver. The product backlog supports prioritizing work, forecasting work by sprints, and quickly linking work to portfolio backlog items. You can define your backlog items and then manage their status using the Kanban board.
Behaviour Driven Development (BDD)
A development methodology that asserts software should be specified in terms of the desired behavior of the application, and with syntax that is readable for business managers.
A subset of TDD driven by the need for clearer communication and proper documentation. BDD is perhaps the biggest recent development in TDD. Its core idea is to replace confusing and developer-centric terminology (tests, suites, assertions etc.) with ubiquitous language that all participating stakeholders (including non-technical staff and, possibly, clients) can understand.
Best-in-class
Best-in-class is defined as the superior product within a category of hardware or software. It does not necessarily mean best product overall, however. For example, the best-in-class product in a low-priced category may be inferior to the best product on the market, which could sell for much more.
Beta Testing
In software development, a beta test is the second phase of software testing in which a sampling of the intended audience tries the product out.
Beta testing is the stage at which a new product is tested under actual usage conditions.
Big-Bang Integration
Big Bang Integration Testing is an integration testing strategy wherein all units are linked at once, which results in a complete system. In this type of integration testing, all the components and modules of the software are integrated simultaneously, after which everything is tested as a whole. During big bang integration testing, most of the developed modules are coupled together to form a complete software system or a major part of the system, which is then used for integration testing. This approach can enable software testers to save time and effort during the integration testing process.
Big Bang Integration Testing is an integration testing strategy wherein all units are linked at once, resulting in a complete system. When this type of testing strategy is adopted, it is difficult to isolate any errors found, because attention is not paid to verifying the interfaces across individual units.
Binning
A technique for accurately grouping together items of similar size. Useful when we don’t have the precision necessary to discriminate among similarly sized items, so instead we group together all items that fall within a given small interval and label all such items with a value (often the central value) that is representative of the interval.
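A minimal sketch in Python: response times are grouped into 10 ms wide intervals and each item is labelled with the centre of its interval (the bin width and the sample values are illustrative):

```python
def bin_value(value, width=10):
    """Map a value to the centre of the `width`-sized interval it falls into."""
    lower = (value // width) * width
    return lower + width / 2

response_times_ms = [101, 104, 109, 117, 123, 128]
print([bin_value(t) for t in response_times_ms])
# -> [105.0, 105.0, 105.0, 115.0, 125.0, 125.0]
```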
Black Box Testing
A testing or quality assurance practice that assumes no knowledge of the inner workings of the system being tested, and which thus attempts to verify external rather than internal behavior or state.
A general principle in testing where the person writing tests does not know or avoids the internals of the software, choosing instead to test the public interface of the software strictly by its interface or specification. See white-box testing.
Blueprints
Blueprints enable you to onboard projects, applications, and teams across the enterprise to the DevOps toolchain, without a lot of administrative overhead. XebiaLabs’ blueprints guide you through a process that automatically generates YAML files for your applications and infrastructure.
Boards (Kanban)
An interactive, electronic sign board that supports visualization of the flow of work from concept to completion and lean methods.
Bottom-Up Integration
In the bottom-up strategy, modules at the lower levels are tested first and then integrated and tested together with the higher-level modules until all modules have been tested. Drivers are used to support this testing.
Boundary Testing
Boundary testing is the process of testing between extreme ends or boundaries between partitions of the input values.
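For example, if an input accepts ages from 18 to 65, boundary tests concentrate on values at and just beyond the edges of that range. A minimal pytest-style sketch, with is_valid_age as a hypothetical function under test:

```python
import pytest

def is_valid_age(age: int) -> bool:
    """Hypothetical rule under test: ages 18-65 inclusive are accepted."""
    return 18 <= age <= 65

@pytest.mark.parametrize("age, expected", [
    (17, False),  # just below the lower boundary
    (18, True),   # lower boundary
    (65, True),   # upper boundary
    (66, False),  # just above the upper boundary
])
def test_age_boundaries(age, expected):
    assert is_valid_age(age) is expected
```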
Branch/Branching
A Git branch allows each developer to branch out from the original code base, work independently, and isolate their work from affecting others.
“A Git branch is essentially an independent line of development. You can take advantage of branching when working on new features or bug fixes as it helps to isolate your work from that of other team members.”
Branches are separate copies of the project code on GitHub or other code version control system, allowing many developers to work on the project at once.
A branch in a computer program is an instruction that tells a computer to begin executing different instructions rather than simply executing the instructions in order. In high-level languages, these are typically referred to as flow control procedures and are built into the language. In assembly programming, branch instructions are built into a CPU.
BS 7925-1
BS 7925-1 is a Glossary of Software Testing Terms.
BS 7925-2
BS 7925-2 is the Software Component Testing Standard.
Bug
A bug is an unexpected problem with software or hardware. Typical problems are often the result of external interference with the program’s performance that was not anticipated by the developer. Minor bugs can cause small problems like frozen screens or unexplained error messages that do not significantly affect usage. Major bugs may not only affect software and hardware, but could also have unintended effects on connected devices or integrated software and may damage data files.
Build Agent
A type of agent used in Continuous Integration that can be installed locally or remotely in relation to the Continuous Integration server. It sends and receives messages about handling software builds.
Build Artifact Repository
A tool used to organize artifacts with metadata constructs and to allow automated publication and consumption of those artifacts.
Build Automation
Tools or frameworks that allow source code to be automatically compiled into releasable binaries. Usually includes code-level unit testing to ensure individual pieces of code behave as expected.
Build automation transforms code changes, committed by team members, automatically to published deployment artifacts, ready for deployment and validation in (test) environments.
Burn Down Chart
Scrum Term – During Planning Poker, features are assigned so-called velocity points. As time progresses, team estimations become more reliable. The Burn Down Chart shows the burn rate for the running sprint over time, so the team can steer toward the progress needed to burn all points for the sprint.
C
Canary Release
A go-live strategy in which a new application version is released to a small subset of production servers and heavily monitored to determine whether it behaves as expected. If everything seems stable, the new version is rolled out to the entire production environment.
The staging server, which is an exact duplicate of the production environment. New software builds run there to ensure compliance with the existing features and code before rolling them out to the whole user base.
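A minimal sketch of the first sense above: a small, configurable share of requests is routed to the new version while the rest continue to hit the stable one (the service names and the 5% figure are illustrative; real canary routing is normally done by a load balancer or service mesh rather than in application code):

```python
import random

CANARY_PERCENT = 5  # share of traffic sent to the new version

def route_request(request_id: str) -> str:
    """Send roughly CANARY_PERCENT of requests to the canary, the rest to stable."""
    return "app-v2-canary" if random.uniform(0, 100) < CANARY_PERCENT else "app-v1-stable"

# Rough check of the split over many simulated requests
counts = {"app-v1-stable": 0, "app-v2-canary": 0}
for i in range(10_000):
    counts[route_request(str(i))] += 1
print(counts)  # roughly a 95% / 5% split
```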
Capability Maturity Model Integration (CMMI)
The Capability Maturity Model Integration (CMMI)® is a proven set of global best practices that drives business performance through building and benchmarking key capabilities… CMMI best practices focus on what needs to be done to improve performance and align operations to business goals. Designed to be understandable, accessible, flexible, and integrate with other methodologies such as agile, CMMI models help organizations understand their current level of capability and performance and offer a guide to optimize business results.
Capacity
- The quantity of resources available to perform useful work.
- A concept used to help establish a WIP limit by ensuring that we only start work to match the available capacity to complete work.
Capacity Requirements Planning
Capacity requirements planning (CRP) is the process of specifying the level of resources (facilities, equipment and labor force size) that best supports the enterprise’s competitive strategy for production.
Capacity Testing
A capacity test is a test to determine how many users your application can handle before either performance or stability becomes unacceptable. By knowing the number of users your application can handle “successfully”, you will have better visibility into events that might push your site beyond its limitations. This is a way to avoid potential problems in the future.
Capacity Utilisation
Capacity utilization is the production of a fab divided by its maximum potential production.
Capital Expenditure (CapEx)
- Funds used by a company to acquire or upgrade physical assets such as property, industrial buildings or equipment. It is often used to undertake new projects or investments by the firm. This type of outlay is also made by companies to maintain or increase the scope of their operations (Investopedia).
- Capitalizing an expense means you don’t offset your revenue against the expense in the year you purchased or built it. Instead you list the purchase as an asset on your balance sheet and then each year (through depreciation) you offset revenue on your income statement against that year’s depreciated amount.
Captive Centers
Captive centers are client-owned-and-operated service delivery centers, typically in a nondomestic, low-cost location, that provide service resources directly to their organization. The personnel in a captive facility are legal employees of the organization, not the vendor.
Capture and Replay Tool
Capture and replay tools have been developed for testing applications through their graphical user interfaces. Using a capture and replay tool, testers can run an application and record the interaction between a user and the application. The script is recorded with all user actions, including mouse movements, and the tool can then automatically replay the exact same interactive session any number of times without requiring human intervention. This supports fully automatic regression testing of graphical user interfaces.
A capture/replay tool is a kind of test execution tool in which the inputs are recorded during manual testing with the goal of generating automated test scripts that can be replayed afterwards. These tools are often used to support automated regression tests.
Change Control
Change control is a systematic approach to managing all changes made to a product or system.
Change Management
Change management is the automated support for development, rollout and maintenance of system components (i.e., intelligent regeneration, package versioning, state control, library control, configuration management, turnover management and distributed impact sensitivity reporting).
Change Request
A petition for modifying the behavior of a system due to normal business changes or because there is a bug in the system.
Chatbot
A chatbot is a domain-specific conversational interface that uses an app, messaging platform, social network or chat solution for its conversations. Chatbots vary in sophistication, from simple, decision-tree-based marketing stunts, to implementations built on feature-rich platforms. They are always narrow in scope. A chatbot can be text- or voice-based, or a combination of both.
Client
A system or a program that requests the activity of one or more other systems or programs, called servers, to accomplish specific tasks. In a client/server environment, the workstation is usually the client.
Client Management Tools
Client management tools (previously known as PC configuration life cycle management [PCCLM] tools) manage the configurations of client systems. Specific functionality includes OS deployment, inventory, software distribution, patch management, software usage monitoring and remote control. Desktop support organizations use client management tools to automate system administration and support functions that would otherwise be done manually.
Cloud
“The cloud is not a physical entity, but instead is a vast network of remote servers around the globe which are hooked together and meant to operate as a single ecosystem. These servers are designed to either store and manage data, run applications, or deliver content or a service such as streaming videos, web mail, office productivity software, or social media. Instead of accessing files and data from a local or personal computer, you are accessing them online from any Internet-capable device—the information will be available anywhere you go and anytime you need it.”
Cloud Computing
Cloud Management Platform
A cloud management platform (CMP) is a product that gives the user integrated management of public, private, and hybrid cloud environments.
Cloud management platforms are integrated products that provide for the management of public, private and hybrid cloud environments. The minimum requirements to be included in this category are products that incorporate self-service interfaces, provision system images, enable metering and billing, and provide for some degree of workload optimization through established policies. More-advanced offerings may also integrate with external enterprise management systems, include service catalogs, support the configuration of storage and network resources, allow for enhanced resource management via service governors and provide advanced monitoring for improved “guest” performance and availability.
Code Coverage
Code coverage is a measurement of how many lines/blocks/arcs of your code are executed while the automated tests are running. Code coverage is collected by using a specialized tool to instrument the binaries to add tracing calls and run a full set of automated tests against the instrumented product. A good tool will give you not only the percentage of the code that is executed, but will also allow you to drill into the data and see exactly which lines of code were executed during a particular test. In short, code coverage measures how much of your code is exercised by tests; if you have 90% code coverage, then 10% of the code is not covered by tests.
A goal to achieve at least a minimum amount of code coverage for a given application. Furthermore, code coverage is not a panacea. Coverage generally follows an 80-20 rule. Increasing coverage values becomes difficult, with new tests delivering less and less incrementally. If you follow defensive programming principles, where failure conditions are often checked at many levels in your software, some code can be very difficult to reach with practical levels of testing. Coverage measurement is not a replacement for good code review and good programming practices. In general you should adopt a sensible coverage target and aim for even coverage across all of the modules that make up your code. Relying on a single overall coverage figure can hide large gaps in coverage.
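As a concrete illustration (with a hypothetical function, independent of any particular coverage tool): the test below only exercises the positive branch, so a coverage report would flag the other return line as never executed:

```python
def describe_balance(balance: float) -> str:
    """Hypothetical function under test."""
    if balance >= 0:
        return "in credit"   # executed by the test below
    return "overdrawn"       # never executed -> reported as uncovered

def test_positive_balance():
    assert describe_balance(100.0) == "in credit"

test_positive_balance()
```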
Code Inspection
Code inspection in the case of safety is called critical code review (CCR); it is an activity that involves reviewing the entirety or part of the code of a software application.
Code Review
Code review is a phase in the computer program development process in which the authors of code, peer reviewers, and perhaps quality assurance reviewers get together to review code, line by line.
Codebase
A codebase refers to a whole collection of source code that is used to build a particular software system, application, or software component. Typically, a codebase includes only human-written source code files.
Coding Standard
Coding standards are collections of coding rules, guidelines, and best practices. Using the right one will help you write cleaner code.
Commercial Off-The-Shelf (COTS)
Commercial off-the-shelf (COTS) is a term that references non-developmental items (NDI) sold in the commercial marketplace and used or obtained through government contracts. The set of rules for COTS is defined by the Federal Acquisition Regulation (FAR). A COTS product is usually a computer hardware or software product tailored for specific uses and made available to the general public. Such products are designed to be readily available and user friendly. A typical example of a COTS product is Microsoft Office or antivirus software. A COTS product is generally any product available off-the-shelf and not requiring custom development before installation.
Commit
The point in a transaction when all updates to any resources involved in the transaction are made permanent.
The process of recording a set of changes in a Git (or other version control) repository, and the resulting recorded change itself.
Compilation
The process of compiling a program, i.e., translating source code into a lower-level or executable form.
Compiler
- Software that converts a set of high-level language statements into a lower-level representation. For example, a help compiler converts a text document embedded with appropriate commands into an online help system. A dictionary compiler converts terms and definitions into a dictionary lookup system.
- Software that translates a program written in a high-level programming language (C/C++, COBOL, etc.) into machine language. A compiler usually generates assembly language first and then translates the assembly language into machine language. A utility known as a linker then combines all required machine language modules into an executable program that can run in the computer.
Component
Complex Adaptive System
A system with many entities interacting with each other in various ways, where these interactions are governed by simple, localized rules operating in a context of constant feedback. Examples include the stock market, the brain, ant colonies, and Scrum teams.
Component Integration Testing
Combining individual components that have already been tested, and seeing how they work together.
Component Team
- A team that focuses on the creation of one or more components of a larger product that a customer would purchase. Component teams create assets or components that are then reused by other teams to assemble customer-valuable solutions.
- Team that is cross-functional (multi-disciplinary), single component focused.
Component Testing
Testing a single component of the solution. Typically, in Java, a module would be considered a component. The focus is on whether the component delivers the required functionality for the rest of the solution. It should not rely on other modules, as these would typically be mocked or stubbed.
Component-Based Development (CBD)
Component-based development (CBD) is defined as a set of reuse-enabling technologies, tools and techniques that allow application development (AD) organizations to go through the entire AD process (i.e., analysis, design, construction and assembly) or through any particular stage via the use of predefined component-enabling technologies (such as AD patterns, frameworks, design templates) tools and application building blocks.
Computer Aided Software Testing (CAST)
Computer aided software testing (CAST) refers to the computing-based processes, techniques and tools for testing software applications or programs. CAST is the computing-enabled process of software testing performed using a combination of software- and hardware-based tools and techniques.
Confidence Threshold
- The definition of done for envisioning (product-level planning).
- The set of information that decision makers need in order to have sufficient confidence to make a go/no-go funding decision for more detailed development.
Configuration as Code
A system configuration management technique in which the configuration for machines, applications, jobs, etc. is specified in code and kept in version control, allowing teams to configure new applications/systems/jobs in seconds.
Configuration Control Board (CCB)
Establishment of and charter for a group of qualified people with responsibility for the process of controlling and approving changes throughout the development and operational lifecycle of products and systems; may also be referred to as a change control board.
Computer Security Resource Center
Configuration Drift
A term for the general tendency of software and hardware configurations to drift, or become inconsistent, with the template version of the system due to manual ad hoc changes (like hotfixes) that are not introduced back into the template.
The undesirable result of updating various servers independently, leading to different software configurations and states. Best removed through the practice of deployment of Immutable Infrastructure as Code.
Configuration Management
A term for establishing and maintaining consistent settings and functional attributes for a system. It includes tools for system administration tasks, such as IT infrastructure automation.
The process of setting and maintaining the desired software ecosystem parameters with the help of automated configuration management tools like Kubernetes, Ansible, Puppet, Chef, Saltstack, etc.
Configuration Testing
Configuration testing is defined as a software testing type that checks an application with multiple combinations of software and hardware in order to find the optimal configurations under which the system works without any flaws or bugs.
Container/Containerization
A software envelope separating the app and all resources required to run it from the infrastructure it runs on. Due to using Docker containers, any apps can run on any OS with Docker and any issues of a single container don’t affect the rest of the system.
A container is a virtualization instance in which the kernel of an operating system allows for multiple isolated user-space instances. Unlike virtual machines (VMs), containers do not need to run a full-blown operating system (OS) image for each instance. Instead, containers are able to run separate instances of an application within a single shared OS.
Containers
A software development container groups applications and their dependencies together, making it possible to build applications where test and live environments can be faithfully replicated.
Context-Driven Testing
Context-driven testing is a model for developing and debugging computer software that takes into account the ways in which the programs will be used or are expected to be used in the real world. In order to successfully conduct this type of testing, software developers must identify the intended market and evaluate the environments in which people are likely to employ the product.
Continuous Automation
“Continuous Automation is the practice of automating every aspect of an application’s lifecycle to build and deploy software and changes quickly, consistently, and safely. It integrates automation of infrastructure, applications, and compliance, defining elements as code to make it easy to manage multiple versions, test for a variety of conditions, change when needed, and apply at scale. It is a sophisticated approach to building, deploying, and managing software.”
Continuous Delivery (CD)
“Continuous Delivery is the ability to get changes of all types, including new features, configuration changes, bug fixes and experiments—into production, or into the hands of users, safely and quickly in a sustainable way.”
A set of processes and practices that radically remove waste from your software production process, enable faster delivery of high- quality functionality, and set up a rapid and effective feedback loop between your business and your users.
A software delivery process wherein updates are planned, implemented and released to end-users on a steady, constant basis. It’s the opposite of waterfall delivery, in which updates are released at an irregular, static pace.
Continuous Deployment
A particular case of Continuous Delivery, where the deployment of new code to production is also done automatically. This is not appropriate in some cases, though, and greatly depends on the particular requirements of your product and business model.
Changes are being deployed continuously on a server for internal use, e.g. for manual testing.
Continuous Improvement
“Continuous improvement, sometimes called continual improvement, is the ongoing improvement of products, services or processes through incremental and breakthrough improvements.”
“Continuous improvement, or Kaizen, is a method for identifying opportunities for streamlining work and reducing waste. The practice was formalized by the popularity of Lean/Agile/Kaizen in manufacturing and business, and it is now being used by thousands of companies all over the world to identify savings opportunities.”
Continuous Integration (CI)
A development practice that requires developers to integrate code into a shared repository several times a day. Each check-in is then verified by an automated build, allowing teams to detect problems early.
A process that allows software changes to be tested and integrated into a code base on a continuous basis each time a change is made to code. Most DevOps teams view continuous integration as an improvement over the traditional process of waiting until a large number of code changes are written before testing and integrating them.
Changes are being continuously built and automatically tested in order to find bugs or problems as early as possible.
Continuous Quality
Continuous quality is a systematic approach to finding and fixing software defects during all phases of the software development cycle. CQ reduces the risk of security vulnerabilities and software defects (bugs) by helping developers find and fix problems as early as possible in the development cycle.
Continuous Testing
Continuous Testing is the process of executing automated tests as part of the software delivery pipeline in order to obtain feedback on the business risks associated with a software release candidate as rapidly as possible.
Cross Browser Testing
Cross Browser Testing is a type of testing to verify if an application works across different browsers as expected and degrades gracefully. It is the process of verifying your application’s compatibility with different browsers.
Culture, Automation, Lean, Measure and Sharing (CALMS)
Key ingredients for DevOps as defined by Damon Edwards and John Willis. Culture, Automation, Lean, Measure and Sharing.
CALMS is a conceptual framework for the integration of development and operations (DevOps) teams, functions and systems within an organization. The CALMS framework is often used as a maturity model, helping managers to evaluate whether or not their organization is ready for DevOps — and if not, what needs to change. The acronym CALMS is credited to Jez Humble, co-author of ‘The DevOps Handbook’.
D
Daily Build
Completing a software build of the latest version of a program, on a daily (or nightly) basis.
Daily Scrum
A synchronization, inspection, and adaptive planning activity that a development team performs each day. This core practice in the Scrum framework is timeboxed to no more than 15 minutes. Synonymous with daily stand-up.
Dark Launch
A go-live strategy in which code implementing new features is released to a subset of the production environment but is not visibly, or only partially, activated. The code is exercised, however, in a production setting without users being aware of it.
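A minimal feature-flag sketch of the idea (the flag set and function names are illustrative): the new code path is deployed to production but is only visible to a small allow-list of users:

```python
NEW_SEARCH_ENABLED_FOR = {"internal-qa", "beta-tester-7"}  # illustrative allow-list

def legacy_search(query):
    return [f"legacy result for {query}"]

def new_search(query):
    return [f"new result for {query}"]

def search(query: str, user_id: str):
    if user_id in NEW_SEARCH_ENABLED_FOR:
        return new_search(query)   # dark-launched path, visible only to a few users
    return legacy_search(query)    # existing behaviour, shown to everyone else

print(search("devops", user_id="ordinary-user"))   # -> legacy result
print(search("devops", user_id="internal-qa"))     # -> new result
```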
Debugging
Debugging, in computer programming and engineering, is a multistep process that involves identifying a problem, isolating the source of the problem, and then either correcting the problem or determining a way to work around it.
Decision Table Testing
Decision table testing is a software testing technique used to test system behavior for different input combinations. This is a systematic approach where the different input combinations and their corresponding system behavior (output) are captured in a tabular form. It is therefore also called a cause-effect table, because causes and effects are captured for better test coverage.
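For example, a hypothetical discount rule with two conditions (member? order over 50?) yields four combinations; each row of the decision table becomes one test case in this pytest-style sketch:

```python
import pytest

def discount(is_member: bool, order_total: float) -> float:
    """Hypothetical rule under test: members get 10%, plus 5% for orders over 50."""
    pct = 0.0
    if is_member:
        pct += 0.10
    if order_total > 50:
        pct += 0.05
    return pct

# Each tuple is one row of the decision table: (member?, total, expected discount)
@pytest.mark.parametrize("is_member, total, expected", [
    (False, 40, 0.00),
    (False, 60, 0.05),
    (True,  40, 0.10),
    (True,  60, 0.15),
])
def test_discount_decision_table(is_member, total, expected):
    assert discount(is_member, total) == pytest.approx(expected)
```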
Defect
Rework that is required because an activity was not properly executed in the first instance. This requires task-switching back to the originating activity, stopping progress, analyzing the issue, and fixing it.
Defect Report
Defect report is a document that identifies and describes a defect detected by a tester. The purpose of a defect report is to state the problem as clearly as possible so that developers can replicate the defect easily and fix it.
Defects Per Unit (DPU)
A quality measure of how many defects are associated with a single product or service unit.
Deliverable
Deliverable, as an adjective, describes something that can be delivered, such as a product or service. For example, a software application might be said to be deliverable by a certain date. In an IT context, deliverable is more frequently used as a noun. For example, a software application might be listed as a project deliverable in a request for proposal (RFP).
Delivery Pipeline
A sequence of orchestrated, automated tasks implementing the software delivery process for a new application version. Each step in the pipeline is intended to increase the level of confidence in the new version to the point where a go/no-go decision can be made. A delivery pipeline can be considered the result of optimizing an organization’s release process.
Deployment
“Deployments represent state changes to systems. To deploy means to get a program to a stable, running state in whatever environment you’re working in. You may make multiple deployments to testing environments throughout development.”
The release of software updates to users. In DevOps environments, deployment is fully automated so users get updates as soon as they are written and tested.
Deployment Automation
The streamlining of applications and configurations to the various environments used in the SDLC. Using a deployment automation solution ensures that teams have secure, self-service deployment capabilities for Continuous Integration, environment provisioning, and testing. A deployment automation solution can help you to deploy more often while greatly reducing the rate of errors and failed deployments.
Deployment Pipeline
A model of the automated, connected tools and processes used to release a software update.
Desk Checking
A desk check is an informal non-computerized or manual process for verifying the programming and logic of an algorithm before the program is launched. A desk check helps programmers to find bugs and errors which would prevent the application from functioning properly.
DevOps
DevOps (development and operations) is an enterprise software development phrase used to mean a type of agile relationship between development and IT operations. The goal of DevOps is to change and improve the relationship by advocating better communication and collaboration between these two business units.
“DevOps is the practice of operations and development engineers participating together in the entire service lifecycle, from design through the development process to production support.”
DevOps Automation
DevOps automation ensures a consistent and repeatable deployment process, allowing application enhancements to be developed, tested, implemented securely, and managed across all environments—including production. This enables IT to support the needs of the business more directly while stimulating revenue growth, customer loyalty, and innovation.
DevSecOps
The practice of integrating security into the DevOps process.
Automation of core security tasks by embedding security controls and processes into the DevOps workflow. The goal is to bring security into the process as early as possible in order to minimize vulnerabilities and risks.
Diffy
Diffblue’s wise and helpful mascot.
Document Management
Document management (DM) is a function in which applications or middleware perform data management tasks tailored for typical unstructured documents (including compound documents). It may also be used to manage the flow of documents through their life cycles. Long-established document management products have traditionally focused on managing a small group of documents vital to the business. However, the DM market is transforming into a two-tier market, with new competitors building out horizontal capabilities to manage the many documents created in the course of everyday work life. Today, enterprises are looking for ways to cut costs, reduce risk, and enable competitive opportunities, resulting in new market opportunities and competitive forces. Vendors are scrambling to compete by leveraging their existing market positions as well as experimenting with new approaches such as open source and software as a service (SaaS).
Documentation Review
Documentation review determines if the technical aspects of policies and procedures are current and comprehensive. These documents provide the foundation for an organization’s security posture, but are often overlooked during technical assessments.
Driver
A driver, or device driver, is a software program that enables a specific hardware device to work with a computer’s operating system. Drivers may be required for internal components, such as video cards and optical media drives, as well as external peripherals, such as printers and monitors.
Dummy
A dummy is a type of test double that is never used by the actual software, but is only used in testing to fill required parameters.
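A minimal sketch (class and method names are illustrative): the logger argument below is required by the constructor but never exercised by the behaviour being tested, so a do-nothing dummy is passed in purely to satisfy the signature:

```python
class DummyLogger:
    """Satisfies the 'logger' parameter; the test never expects it to be called."""
    def info(self, message): pass
    def error(self, message): pass

class OrderCalculator:
    def __init__(self, logger):
        self.logger = logger

    def total(self, prices):
        return sum(prices)

def test_total_ignores_logging():
    calc = OrderCalculator(logger=DummyLogger())  # dummy fills the required parameter
    assert calc.total([10, 20, 12]) == 42

test_total_ignores_logging()
```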
Dynamic Application Security Testing (DAST)
Dynamic application security testing (DAST) technologies are designed to detect conditions indicative of a security vulnerability in an application in its running state. Most DAST solutions test only the exposed HTTP and HTML interfaces of Web-enabled applications; however, some solutions are designed specifically for non-Web protocol and data malformation (for example, remote procedure call, Session Initiation Protocol [SIP] and so on).
Dynamic System Development Method (DSDM)
DSDM is an Agile method that focuses on the full project lifecycle. DSDM (formerly known as the Dynamic Systems Development Method) was created in 1994, after project managers using RAD (Rapid Application Development) sought more governance and discipline for this new iterative way of working. DSDM’s success is due to the philosophy “that any project must be aligned to clearly defined strategic goals and focus upon early delivery of real benefits to the business.” Supporting this philosophy with the eight principles allows teams to maintain focus and achieve project goals.
Dynamic Testing
Dynamic testing is defined as a software testing type in which the dynamic behaviour of the code is analysed… Testing is verification and validation, and it takes both Vs to make testing complete. Of the two Vs, verification is called static testing and the other V, validation, is known as dynamic testing.
E
End to End Agility
Agile scaling that includes all of the business and development/IT functions that are necessary to achieve fast, flexible, flow of business value across the target product (or value stream, capability, or customer journey).
End to End Testing
Testing the entire solution as the user is expected to use the system. The testing should be done via the user interface. This will most likely involve the use of specific automation tools, such as Selenium, for interacting with a Web UI.
Entry Criteria
Entry Criteria gives the prerequisite items that must be completed before testing can begin.
Equivalence Class Partitioning
Equivalence class partitioning is a black box technique (code is not visible to the tester) which can be applied to all levels of testing, such as unit, integration, system, etc. In this technique, you divide the set of test conditions into partitions whose members can be considered the same.
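For example, if passwords of 8-20 characters are valid, three partitions emerge (too short, valid, too long) and one representative value per partition is enough, because every value in the same class is expected to behave the same way. A pytest-style sketch with a hypothetical rule under test:

```python
import pytest

def is_valid_password_length(pw: str) -> bool:
    """Hypothetical rule under test: passwords of 8-20 characters are accepted."""
    return 8 <= len(pw) <= 20

# One representative value per equivalence class: too short, valid, too long.
@pytest.mark.parametrize("password, expected", [
    ("abc",           False),  # class 1: fewer than 8 characters
    ("sensible-pass", True),   # class 2: 8-20 characters
    ("x" * 30,        False),  # class 3: more than 20 characters
])
def test_password_length_partitions(password, expected):
    assert is_valid_password_length(password) is expected
```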
Error
An error describes any issue that arises unexpectedly and causes a computer to not function properly. Computers can encounter either software errors or hardware errors.
Error Checking
Testing for accurate transmission of data over a communications network or internally within the computer system.
Error Guessing
Error Guessing is a software testing technique based on guessing the errors that may be present in the code. It is an experience-based testing technique in which the test analyst uses his/her experience to guess the problematic areas of the application. This technique necessarily requires skilled and experienced testers. It is a black-box testing technique and can be viewed as an unstructured approach to software testing.
Error guessing is a testing technique that makes use of a tester’s skill, intuition and experience in testing similar applications to identify defects that may not be easy to capture by the more formal techniques. It is usually done after more formal techniques are completed.
Error Handling
Error handling refers to the routines in a program that respond to abnormal input or conditions. The quality of such routines is based on the clarity of the error messages and the options given to users for resolving the problem. Contrast with exception handling, which deals with responses to abnormal conditions that are built into the programming language or the hardware.
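A minimal, illustrative sketch in Python: the routine catches an abnormal condition, gives the caller a clear message, and offers a way to recover (the file name, defaults and messages are hypothetical):

```python
def parse_settings(text: str) -> dict:
    if "=" not in text:
        raise ValueError("expected key=value lines")
    return dict(line.split("=", 1) for line in text.splitlines() if line)

def read_settings(path: str) -> dict:
    """Return settings from a file, falling back to defaults with a clear message."""
    try:
        with open(path) as f:
            return parse_settings(f.read())
    except FileNotFoundError:
        print(f"Settings file '{path}' not found; using built-in defaults.")
        return {"theme": "light", "retries": 3}
    except ValueError as exc:
        print(f"Settings file '{path}' is malformed ({exc}); please fix or delete it.")
        raise

print(read_settings("missing-file.conf"))
```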
Event-Driven Architecture
An event-driven architecture (EDA) is a framework that orchestrates behavior around the production, detection and consumption of events as well as the responses they evoke.
Everything as Code
Refers to a development technique where all of the components needed to build and deliver software – deployment packages, infrastructure, environments, release templates, dashboards – are defined as code. Defining your delivery pipeline as code gives you a standardized, controlled way to on-board projects, applications, and teams.
Execute
Execute and execution are terms that describe the process of running a computer software program or command. For example, each time you open your Internet browser you are executing the program. In Windows to execute a program, double-click the executable file or double-click the shortcut icon pointing to the executable file.
To run a program, which causes the computer to carry out its instructions.
Exhaustive Testing
Exhaustive testing is a testing or quality assurance approach in which all possible combinations of scenarios and use/test cases are used for testing.
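A small sketch of what this means in practice, and why it rarely scales: for three boolean options there are only 2³ = 8 combinations, so all of them can be generated and checked; for inputs with large value ranges the number of combinations explodes (the pricing rule below is hypothetical):

```python
from itertools import product

def shipping_cost(express: bool, gift_wrap: bool, international: bool) -> float:
    """Hypothetical pricing rule under test."""
    cost = 5.0
    if express: cost += 10.0
    if gift_wrap: cost += 2.0
    if international: cost += 15.0
    return cost

def test_all_combinations():
    # 2 * 2 * 2 = 8 cases: every possible combination of the three flags.
    for express, gift_wrap, international in product([False, True], repeat=3):
        expected = 5.0 + 10.0 * express + 2.0 * gift_wrap + 15.0 * international
        assert shipping_cost(express, gift_wrap, international) == expected

test_all_combinations()
```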
Exit Criteria
Exit Criteria defines the items that must be completed before testing can be concluded.
Expected Result
The expected result of the test.
Exploration
The act of acquiring or buying knowledge by performing some activity such as building a prototype, creating a proof of concept, performing a study, or conducting an experiment.
Exploratory Testing
In addition to being used to discover specific bugs, exploratory testing is also understood as a way to learn about the application and design functional and regression test cases to be executed in the future.
External Services Provider (ESP)
An external services provider (ESP) is an enterprise that is a separate legal entity from the contracting company that provides services such as consulting, software development — including system integration and application service providers (ASPs) — and outsourcing. ESPs supplement the skills and resources of an in-house IS department.
External Stakeholders
Stakeholders who are typically external to the organization that is developing a product, for example, customers, partners, and regulators.
Extreme Programming
- A software development methodology that is intended to improve software quality and responsiveness to changing customer requirements (source Wikipedia).
- An agile development approach that is complementary to Scrum. Extreme Programming specifies important technical practices that development teams use to manage the flow of task-level work during sprint execution.
F
Factory Acceptance Test (FAT)
The Factory Acceptance Test (FAT) is a process that evaluates the equipment during and after the assembly process by verifying that it is built and operating in accordance with design specifications. FAT ensures that the components and controls are working properly according to the functionality of the equipment itself. As the name suggests, this testing is performed at the factory.
Fail Fast
The strategy of software design in which ideas are tested quickly to ensure rapid feedback. Once the feedback is applied, the experiment is repeated until a satisfactory result is achieved.
Fail fast is a philosophy that values the development or implementation of many small experimental products, changes or approaches before committing large amounts of time or resources.
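At the code level the same principle is often applied by rejecting invalid input or configuration immediately at startup, so the failure surfaces at once instead of as a confusing error later. A minimal, illustrative sketch (the variable names are hypothetical):

```python
import os

def load_config() -> dict:
    """Read required settings at startup and fail immediately if any are missing."""
    required = ("DATABASE_URL", "API_KEY")  # illustrative variable names
    missing = [name for name in required if name not in os.environ]
    if missing:
        # Failing here, at startup, is cheaper than a cryptic error mid-request.
        raise RuntimeError(f"Missing required configuration: {', '.join(missing)}")
    return {name: os.environ[name] for name in required}
```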
Failure
“The [incorrect] result of a fault”
Software Engineering Body of Knowledge (cited on Stack Exchange)
Fake
Fakes are test doubles that implement the required functionality in a way that is useful in testing, but which also effectively disqualifies it from being used in production environment. For example, a key-value database that stores all values in memory and loses them after every execution potentially allows tests to run faster, but its tendency to destroy data would not allow it to be used in production.
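A minimal sketch of the in-memory key-value example described above (class and method names are illustrative):

```python
class FakeKeyValueStore:
    """In-memory stand-in for a real key-value database: fast for tests,
    but data is lost when the process ends, so it must never be used in production."""
    def __init__(self):
        self._data = {}

    def put(self, key, value):
        self._data[key] = value

    def get(self, key, default=None):
        return self._data.get(key, default)

def test_session_is_stored():
    store = FakeKeyValueStore()
    store.put("session:42", {"user": "alice"})
    assert store.get("session:42") == {"user": "alice"}

test_session_is_stored()
```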
Fast Feedback
A principle that states that feedback today is much more valuable than the same feedback tomorrow, because today’s feedback can be used to correct a problem before it compounds into a much larger problem, and provides the ability to truncate economically undesirable paths sooner (to fail faster).
Fault Detection and Isolation
Online diagnostics that detect and isolate faults in real time, prevent contamination into other areas, and attempt to retry operations.
Fault Injection
Fault injection testing is a software testing method which deliberately introduces errors to a system to ensure it can withstand and recover from error conditions. Fault injection testing is typically carried out prior to deployment to uncover any potential faults that may have been introduced during production. Similar to stress testing, fault injection testing is used to identify specific weaknesses in a hardware or software system so they can be fixed or avoided.
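A minimal sketch of the idea: an error is deliberately injected into a dependency so the test can check that the system recovers as intended (the retry logic and names are illustrative):

```python
class FlakyNetwork:
    """Test double that deliberately raises errors for the first `failures` calls."""
    def __init__(self, failures=2):
        self.remaining_failures = failures

    def send(self, payload):
        if self.remaining_failures > 0:
            self.remaining_failures -= 1
            raise ConnectionError("injected fault")
        return "ok"

def send_with_retries(network, payload, attempts=3):
    """Code under test: retry the send a few times before giving up."""
    last_error = None
    for _ in range(attempts):
        try:
            return network.send(payload)
        except ConnectionError as exc:
            last_error = exc
    raise last_error

def test_recovers_from_injected_faults():
    assert send_with_retries(FlakyNetwork(failures=2), "ping") == "ok"

test_recovers_from_injected_faults()
```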
Feedback Loops
Creating fast and continuous feedback between Operations and Development early in the software delivery process is a major principle underpinning DevOps. Doing so not only helps to ensure that you’re giving customers what they actually want, it lightens the load on development, reduces the fear of deployment, creates a better relationship between Dev and Ops, and heightens productivity.
Formal Review
A type of peer review, formal review follows a formal process and has a specific formal agenda. It has a well structured and regulated process, which is usually implemented at the end of each life cycle. During this process, a formal review panel or board considers the necessary steps for the next life cycle.
Functional Integration Testing
Functional testing means testing a slice of functionality in the system (which may interact with dependencies) to confirm that the code is doing the right things. Functional tests are related to integration tests; however, the term usually refers to tests that check the entire application’s functionality with all the code running together, nearly a super integration test.
Functional Testing
Testing of the end-to-end system to validate functionality. With executable specifications, Functional Testing is carried out by running the specifications against the application.
G
Git
Git is a free and open source distributed version control system designed to handle everything from small to very large projects with speed and efficiency.
Governance
In IT, governance refers to the process by which organizations evaluate and ensure that their tech investments are performing as expected and not introducing new risk. A formal governance process also helps companies ensure that IT activities are aligned with business goals, while also ensuring that everything is compliant with common standards, such as OWASP, PCI 3.2, and CWE/SANS.
Granularity
The ability to increase a system’s capacity and performance through incremental processor expansion.
Gray-Box Testing
Gray box testing, also called gray box analysis, is a strategy for software debugging based on limited knowledge of the internal details of the program.
H
Hybrid Cloud
A cloud computing environment that uses a mix of cloud services––on-premises, private cloud, and third-party. As enterprises scale their software delivery processes, their usage needs and costs change. Using a hybrid cloud solution offers greater flexibility and more deployment options.
Hypervisor
A hypervisor or virtual machine monitor (VMM) is a piece of software that allows physical devices to share their resources among virtual machines (VMs) running on top of that physical hardware. The hypervisor creates, runs and manages VMs.
I
Immutable Infrastructure
An application service or hosting environment that, once set up, cannot be changed. If a DevOps team wishes to change a configuration on immutable infrastructure, the entire component must be re-initialized. While this may seem inefficient, the advantage of immutable infrastructure is that it makes environments more robust and reliable because inadvertent changes are impossible to introduce.
Impact Analysis
The impact analysis of an anomaly consists of identifying the changes to be made in the descending phase of the realization: impact on the documents, impact on the code, and impact on the description and implementation of tests.
Impact analysis is defined as analyzing the impact of changes in the deployed product or application. It gives information about the areas of the system that may be affected by a change to a particular section or feature of the application.
Impediment
A hindrance or obstruction to doing something. Frequently used to describe some issue or blocker that is preventing a team or organization from performing Scrum in an effective way.
Impediment Board
A visible board (physical or digital) on which a team lists its current impediments so they can be tracked, prioritized, and resolved.
Incident
An incident, in the context of information technology, is an event that is not part of normal operations that disrupts operational processes. An incident may involve the failure of a feature or service that should have been delivered or some other type of operation failure.
Incident Response
Incident Response is a documented, formalized set of policies and procedures for managing cyber attacks, security breaches and other types of IT or security incidents.
The follow-up to an unplanned event such as a hardware or software failure or attack against a computer or network. Incident response requires preparation, especially for attacks, because the breach may still be in the process of causing damage.
Incremental Development
- Development based on the principle of building some before building all.
- A staging strategy in which parts of the product are developed and delivered to users at different times, with the intention to adapt to external feedback.
Independent Testing
Independent testing is testing performed by an independent team rather than the developers themselves; it avoids author bias and is often more effective at finding defects and failures.
Informal Review
As suggested by its name, this is an informal type of review, which is extremely popular and widely used. An informal review does not require any documentation, entry criteria, or a large group of people. It is a time-saving process that is not documented.
Infrastructure as a Service (IaaS)
Cloud-hosted virtualized machines, usually billed on a “pay as you go” basis. Users have full control of the machines but need to install and configure any required middleware and applications themselves.
Infrastructure as a Service, or IaaS, is the delivery of computing resources such as data storage, networks, and virtual and physical computers to end users via a service model.
Infrastructure-as-a-Service, the IT management model where the computing resources and the services needed to run them are provided as a service to enable the functioning of various platforms and apps.
Infrastructure as Code
A system configuration management technique in which machines, network devices, operating systems, middleware, and so on are specified in a fully automatable format. The specification, or “blueprint,” is regarded as code that is executed by provisioning tools, kept in version control, and generally subject to the same practices used for application code development.
An approach to infrastructure configuration that allows DevOps teams to use scripts in order to provision servers or hosting environments automatically. This saves them from having to set up infrastructure by hand, a time-consuming and mistake-prone process.
One of the basic principles of DevOps. It means that infrastructure configuration is done with machine-readable declarative files, not manually or using interactive tools. These files (like Kubernetes or Terraform manifests) can be stored in GitHub repositories, adjusted and versioned the same as code, thus providing efficient automation of infrastructure provisioning.
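As a purely illustrative sketch of the idea (not a real provisioning tool or cloud API), the Python snippet below treats a declarative, machine-readable spec as the single source of truth and feeds it to an automated provisioning routine; every name in it, including provision() and the spec layout, is hypothetical.

```python
# Hypothetical illustration of the infrastructure-as-code idea: a declarative,
# machine-readable spec (which would normally live in version control) is fed
# to a provisioning routine instead of setting servers up by hand.
# Nothing here calls a real cloud API; provision() is a stand-in.

SPEC = {
    "web": {"image": "ubuntu-22.04", "count": 2, "ports": [80, 443]},
    "db":  {"image": "postgres-15",  "count": 1, "ports": [5432]},
}


def provision(spec):
    """Pretend-provision every machine described in the spec and return their names."""
    machines = []
    for role, cfg in spec.items():
        for i in range(cfg["count"]):
            name = f"{role}-{i}"
            print(f"creating {name} from {cfg['image']} exposing {cfg['ports']}")
            machines.append(name)
    return machines


if __name__ == "__main__":
    provision(SPEC)
```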
Insight Backlog
A prioritized list of previously generated insights or process improvement ideas that have not yet been acted upon. The insight backlog is generated and used during sprint retrospectives.
Installation Testing
Installation testing checks that a software application is successfully installed and works as expected after installation. It is the testing phase carried out before end users first interact with the actual application. Installation testing is also called “implementation testing”. It is one of the most important steps in the software testing life cycle.
Institute of Electrical and Electronics Engineers (IEEE)
Nonprofit professional association of scientists and engineers founded in 1963 with more than 365,000 members in 150 countries. It is best known for setting global standards for computing and communications and has 1,300 standards and projects under development.
Instrumentation
“Instrumentation refers to an ability to monitor or measure the level of a product’s performance, to diagnose errors and to write trace information” (Wikipedia). In that context, ‘instrumentation’ is the word you use when talking about how you’re recording data to be viewed and monitored.
Integrated Development Environment (IDE)
An integrated development environment (IDE) is an application that provides a programming environment for developers. An IDE typically includes a code editor, automation tools, and a debugger.
Integration
The combining of the various components or assets of some or all of a product to form a coherent, larger-scope work product that can be validated to function correctly as a whole.
Integration Testing
Testing the interaction between two or more components in the product. One of the key parts of an integration test is that it is testing, or relying on, the actual behavior of an interface between two components. Essentially, unlike component or unit testing, a change in either component can cause the test to fail.
IntelliJ IDEA
IntelliJ IDEA is an integrated development environment (IDE) written in Java for developing computer software. It is developed by JetBrains (formerly known as IntelliJ), and is available as an Apache 2 Licensed community edition and in a proprietary commercial edition. Both can be used for commercial development.
Internal Stakeholder
Stakeholders who are internal to the organization that is developing the product, for example, senior executives, managers, and internal users.
Issue Tracking
The management of change requests. It might be a stand-alone system or part of a help desk system that tracks bug reports.
ISTQB
ISTQB (International Software Testing Qualifications Board) certification is an internationally accepted software testing certification that is conducted online by its Member Boards through a testing Exam Provider. An Exam Provider is an organization licensed by a Member Board(s) to offer exams locally and internationally, including online testing certification.
Iteration
A single development cycle, typically measured as one week or two weeks.
A self-contained development cycle focused on performing all of the work necessary to produce a valuable outcome.
Iterative development
A planned rework strategy where multiple passes over the work are used to converge on a good solution.
J
Java
Java is a high-level programming language developed by Sun Microsystems. It was originally designed for developing programs for set-top boxes and handheld devices, but later became a popular choice for creating web applications. The Java syntax is similar to C++, but is strictly an object-oriented programming language.
Java Applications
A Java program that runs standalone on a client or server. The Java Virtual Machine interprets the instructions, and, like any programming language running in its native environment, Java programs have full access to all the resources in the computer. Contrast with Java applet.
Jenkins
Jenkins, the open source automation server written in Java, has long been the de facto standard for Continuous Integration. With Jenkins, developers can integrate their code into a shared repository several times a day. As organizations look to scale their software delivery processes, they often find that Jenkins requires too much scripting and/or maintaining of workflows, and that they need to expand to Continuous Delivery. Continuous Delivery not only leverages tools for Continuous Integration, but also for end-to-end release orchestration, test automation, security, IT service management, and more.
An open source Java server enabling software delivery automation out-of-the-box.
JUnit
JUnit is an open source unit testing framework for Java. It is useful for Java developers to write and run repeatable tests. Erich Gamma and Kent Beck initially developed it. It is an instance of the xUnit architecture. As the name implies, it is used for unit testing of a small chunk of code.
Just-in-time (JIT) Compiler
A compiler that converts all of the bytecode into native machine code just as a Java program is run. This results in run-time speed improvements over code that is interpreted by a Java virtual machine.
A characteristic of a process whereby the assets or activities of a work stream become available or occur just as they are needed.
K
Kaizen (Continuous Improvement)
Kaizen is an approach to creating continuous improvement based on the idea that small, ongoing positive changes can reap major improvements.
Kanban
An agile approach overlaid on an existing process that advocates visualizing how work flows through a system, limiting the work in process, and measuring and optimizing the flow of work.
Kano Model Analysis
A customer satisfaction model developed by Japanese researcher Noriaki Kano in the early 1980s. Kano analysis is a set of ideas and techniques that help determine perceived user satisfaction with product features (e.g., product backlog items). Kano analysis uses a questionnaire where customers are asked both a functional (“How would you feel if the feature is included?”) and dysfunctional (“How would you feel if the feature were not included?”) form of a question about individual features. Based on how users answer these questions, Kano analysis classifies a feature as either mandatory (has to be in the product), linear (customer satisfaction will be linear with increases in quantity or quality of the feature), or exciter/delighter (customer satisfaction will be very high since customers didn’t even know they wanted this feature until they saw it, and now they believe they can’t live without it). Kano model analysis is one technique that can be used to determine the level of investment (if any) to make in each product backlog item and therefore can be used to help prioritize product backlog items.
Kubernetes
Container-based applications require the same release process as other enterprise applications. In fact, things may even get more complicated as applications evolve to rely on more and more microservices and more and more containers—across Dev, Test, Staging, and Production environments. Kubernetes is an open source system for managing containerized applications across multiple hosts, providing basic mechanisms for deployment, maintenance, and scaling of applications.
An open-source container management platform from Google. Kubernetes and Docker are the pillars of running modern workloads in the cloud.
L
Last Responsible Moment (LRM)
A strategy of not making a premature decision but instead delaying commitment and keeping important and irreversible decisions open until the cost of not making a decision becomes greater than the cost of making a decision.
Lead Time
The time needed to move the new code batch from commit to release.
Lean
“Lean manufacturing,” or “lean production,” is an approach or methodology that aims to reduce waste in a production process by focusing on preserving value. Largely derived from practices developed by Toyota in car manufacturing, lean concepts have been applied to software development as part of agile methodologies. The Value Stream Map (VSM), which attempts to visually identify valuable and wasteful process steps, is a key lean tool.
A production philosophy that focuses on reducing waste and improving the flow of processes to improve customer value. (Global Knowledge)
Lean Software Development
Lean software development is a concept that emphasizes optimizing efficiency and minimizing waste in the development of software.
Lean UX
A practice that integrates design thinking, core agile principles, and lean startup principles that is used by a cross-functional team of designers, developers, and product managers to bring the true nature of the work to light faster, with less emphasis on deliverables and greater focus on the actual experience being designed. Lean UX is based on the following core principles:
- Development + Product Management + UX = 1 Product Team
- Visualize thinking for all to see
- Goal-driven and outcome focused
- FLOW: Think-Make-Check
- Focus on solving the right problem
- Generate many options
- Decide quickly which options to pursue and hold decisions lightly
- Recognize hypotheses and validate them
- Research with users is the best source of information
Legacy Code
Legacy code refers to an application system source code type that is no longer supported. Legacy code can also refer to unsupported operating systems, hardware and formats. In most cases, legacy code is converted to a modern software language and platform.
Load Testing Software
Load testing software is an evaluation tool for determining how an application will perform as the work level approaches the limits of the application’s specifications.
M
Maintainability
The ease with which a software system or component can be modified to correct faults, improve performance or other attributes, or adapt to a changed environment.
IEEE Standard Glossary of Software Engineering Terminology as cited by Steve Croach
Maintenance
- Hardware maintenance is the testing and cleaning of equipment.
- Software maintenance is the updating of operating systems and application programs in order to add new functions and change data formats. It also includes fixing bugs and adapting the software to new hardware devices.
- Information system maintenance is the routine updating of databases, such as adding or deleting employees and customers, as well as changing credit limits and product prices.
- Disk and file maintenance is the periodic reorganizing of disk files that have become fragmented due to continuous updating.
Marginal Economics
Determining if spending the next chunk of money is justified by the return that investment would generate. When applying marginal economics, we consider all work that has been performed on the product up to the decision point as a “sunk cost” and therefore don’t consider the sunk cost when determining whether to spend the next chunk of money.
Mean Time Between Failures (MTBF)
Mean time between failure (MTBF) refers to the average amount of time that a device or product functions before failing. This unit of measurement includes only operational time between failures and does not include repair times, assuming the item is repaired and begins functioning again. MTBF figures are often used to project how likely a single unit is to fail within a certain period of time.
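A minimal worked example of the usual calculation, total operational time divided by the number of failures, with invented figures:

```python
# Worked example of the usual MTBF calculation: total operational time
# divided by the number of failures observed. The figures are invented.
operational_hours = [720.0, 350.0, 130.0]   # uptime observed between failures
failures = len(operational_hours)

mtbf = sum(operational_hours) / failures
print(f"MTBF = {mtbf:.1f} hours")            # MTBF = 400.0 hours
```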
Mean Time to Recovery (MTTR)
“We all agree the MTTR (mean time to repair/resolve) metric is core to any Incident Management practice. Teams who focus on reducing Mean Time To Repair (MTTR) rightly focus heavily on the Remediation phase.”
Microservices
A software architecture design pattern in which complex applications are composed of small, independent processes communicating with each other using language-agnostic APIs. These services are small, highly decoupled, and focus on doing a small task.
A type of application architecture in which applications are broken into multiple small pieces. For example, a microservices-based Web server might have its storage, front-end and security layers each operating as a separate service. Docker containers have become a popular deployment mechanism for microservices applications.
Mock
A type of test double created for a particular test or test case. It expects to be called a specific number of times and gives a predefined answer. At the end of the test, a mock raises an error if it was not called as many times as expected. A mock with strict expectations is part of the assertion framework.
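A minimal sketch using Python’s standard unittest.mock: the mock returns a predefined answer, and the test asserts exactly how it was called. The send_welcome function and the mailer interface are invented for illustration.

```python
# Minimal sketch of a mock: it returns a predefined answer and the test
# asserts exactly how it was called. Function names are illustrative only.
from unittest import mock
import unittest


def send_welcome(mailer, user_email):
    """Send a greeting through whatever mailer object is supplied."""
    return mailer.send(to=user_email, subject="Welcome!")


class MockExampleTest(unittest.TestCase):
    def test_mailer_is_called_exactly_once_with_expected_arguments(self):
        mailer = mock.Mock()
        mailer.send.return_value = "queued"      # predefined answer

        result = send_welcome(mailer, "dev@example.com")

        self.assertEqual(result, "queued")
        # The assertion-framework part: fail if the call did not happen as expected.
        mailer.send.assert_called_once_with(to="dev@example.com", subject="Welcome!")


if __name__ == "__main__":
    unittest.main()
```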
Model-Based Testing
Model-based testing (MBT) requires a test team to create a second, lightweight implementation of a software build, typically only the business logic, called the model.
Module
A self-contained hardware or software component that interacts with a larger system. A software module (program module) comes in the form of a file and typically handles a specific task within a larger software system. Hardware modules are units that often plug into a main system.
Module Testing
Module testing is defined as a software testing type which checks individual subprograms, subroutines, classes, or procedures in a program. Instead of testing the whole software program at once, module testing recommends testing the smaller building blocks of the program. Module testing is largely white-box oriented. The objective of module testing is not to demonstrate proper functioning of the module but to demonstrate the presence of an error in the module. Module-level testing allows parallelism in the testing process by making it possible to test multiple modules simultaneously.
N
Naming Conventions/Standard
Naming conventions are general rules applied when creating text scripts for software programming. They have many different purposes, such as adding clarity and uniformity to scripts, readability for third-party applications, and functionality in certain languages and applications. They range from capitalization and punctuation to adding symbols and identifiers to signify certain functions.
Negative Testing
Negative testing is the process of applying as much creativity as possible and validating the application against invalid data. Its intended purpose is to check whether errors are shown to the user where they should be, and whether bad values are handled gracefully.
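A short sketch of a negative test using Python’s unittest: invalid inputs should be rejected with a clear error rather than accepted silently. The validate_age function is an invented example.

```python
# Sketch of a negative test: invalid input should be rejected with a clear
# error rather than silently accepted. validate_age() is an invented example.
import unittest


def validate_age(value):
    """Accept an integer age between 0 and 150, otherwise raise ValueError."""
    if not isinstance(value, int) or not 0 <= value <= 150:
        raise ValueError(f"invalid age: {value!r}")
    return value


class NegativeTests(unittest.TestCase):
    def test_rejects_bad_values(self):
        for bad in (-1, 200, "twelve", None, 3.5):
            with self.assertRaises(ValueError):
                validate_age(bad)


if __name__ == "__main__":
    unittest.main()
```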
Non-Functional Requirements (NFRs)
The specification of system qualities, such as ease-of-use, clarity of design, latency, speed, and ability to handle large numbers of users, that describe how easily or effectively a piece of functionality can be used, rather than simply whether it exists. These characteristics can also be addressed and improved using the Continuous Delivery feedback loop.
Non-Functional Testing
Non-functional testing is done to verify the non-functional requirements of the application, such as performance and usability. It verifies whether the behavior of the system meets those requirements. It covers all the aspects that are not covered by functional testing.
NoOps
A type of organization in which the management of systems on which applications run is either handled completely by an external party (such as a PaaS vendor) or fully automated. A NoOps organization aims to maintain little or no in-house operations capability or staff.
NUnit
NUnit is an evolving, open source framework designed for writing and running tests in Microsoft .NET programming languages. NUnit, like JUnit, is an aspect of test-driven development (TDD), which is part of a larger software design paradigm known as Extreme Programming (XP).
O
Object-Oriented Design
A software design method that models the characteristics of abstract or real objects using classes and objects.
On-Premises
On-Premises means that a system is located inside an organisation’s territory or locally. Applications that are operated on-premises are hosted on servers that are owned or rented by the organisation.
Open Source
The term open source refers to something people can modify and share because its design is publicly accessible. The term originated in the context of software development to designate a specific approach to creating computer programs. Today, however, open source designates a broader set of values—what we call the open source way. Open source projects, products, or initiatives embrace and celebrate principles of open exchange, collaborative participation, rapid prototyping, transparency, meritocracy, and community-oriented development.
If a program is open-source, its source code is freely available to its users. Its users – and anyone else – have the ability to take this source code, modify it, and distribute their own versions of the program. The users also have the ability to distribute as many copies of the original program as they want. Anyone can use the program for any purpose; there are no licensing fees or other restrictions on the software.
Operational Expense (OpEx)
- A category of expenditure that a business incurs as a result of performing its normal business operations (investopedia).
- Expensing the full cost of building or buying something in the year in which you incur the cost. So, if you build something that costs $50k, and you make $150k in revenue that same year, your profit would be $150k-$50k = $100k in that year.
Operational Testing
Operational testing refers to the evaluation of a software application prior to the production phase. Operational testing ensures system and component compliance in the application’s standard operating environment (SOE). Operational testing is applied in a specified environment during various software development life cycle (SDLC) phases for the evaluation of software system functionality.
Orchestration Pipeline
Tools or products that enable the various automated tasks that make up a Continuous Delivery pipeline to be invoked at the right time. They generally also record the state and output of each of those tasks and visualize the flow of features through the pipeline.
Out-of-the-Box Tools
Hardware that has just been removed from its original carton and plugged in or software just removed from its original package and installed.
P
Pair Programming
Pair programming is an Agile technique originating from XP in which two developers team together and work on one computer.
Pair Testing
Pair Testing is a software testing technique in which two people test the same feature at the same place and at the same time, continuously exchanging ideas. It generates more ideas, which results in better testing of the application under test.
Performance Testing
Performance testing is the process of determining the speed, responsiveness and stability of a computer, network, software program or device under a workload.
Platform as a Service (PaaS)
Cloud-hosted application runtimes, usually billed on a “pay as you go” basis. Customers provide the application code and limited configuration settings, while the middleware, databases, and so on are part of the provided runtime.
Platform-as-a-Service, the model of software delivery when the developers get all the required libraries, tools and services to develop the software, with all the underlying infrastructure being handled by the platform providing the service.
Platform-as-a-service (PaaS) is a model of cloud service delivery where a cloud service provider delivers some hardware and software tools to customers over the internet.
Positive Testing
Positive testing is the type of testing that can be performed on the system by providing valid data as input. It checks whether an application behaves as expected with positive inputs. This test is done to check that the application does what it is supposed to do.
Postconditions
A postcondition associated with a method invocation is a condition that must be true when we return from a method. For example, if a natural logarithm method was called with input X, and the method returns Y, we must have e^Y = X (within the limits of the level of precision being used).
Preconditions
The precondition of a method (or function, or subroutine, depending on the programming language) is a logical condition that must be true when that method is called. For example, if we are operating in the domain of real numbers and invoke a method to calculate the square root of a number, an obvious precondition is that this number must be non-negative.
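A small Python sketch tying the two entries above together: the first assert enforces the precondition (non-negative input) and the second checks the postcondition (the result squared reproduces the input, within floating-point tolerance).

```python
# Sketch tying the two entries above together: the first assert enforces the
# precondition, the second checks the postcondition within a small tolerance.
import math


def square_root(x):
    assert x >= 0, "precondition violated: input must be non-negative"
    result = math.sqrt(x)
    assert math.isclose(result * result, x, rel_tol=1e-9, abs_tol=1e-12), \
        "postcondition violated: result squared should reproduce the input"
    return result


print(square_root(2.0))   # 1.4142135623730951
```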
Priority
A particular order, or sequence, in which things take place (items processed, users served, etc.). A priority is based on a predetermined assignment of value, or importance, to different types of events and people.
Product Owner
A person or role responsible for the definition, prioritization, and maintenance of the list of outstanding features and other work to be tackled by a development team. Product Owners are common in agile software development methodologies and often represent the business or customer organization. Product Owners need to play a more active, day-to-day role in the development process than their counterparts in more traditional software development processes.
Production
A production or production rule in computer science is a rewrite rule specifying a symbol substitution that can be recursively performed to generate new symbol sequences. A finite set of productions is the main component in the specification of a formal grammar (specifically a generative grammar). The other components are a finite set of nonterminal symbols, a finite set (known as an alphabet) of terminal symbols that is disjoint from the nonterminal symbols, and a distinguished symbol that is the start symbol.
Q
Quality
A continuous delivery pipeline enables you to act and deliver more quickly, but you still need to deliver a quality product to your users. You can build the expectation of quality into your software development process from the start. Design your tests before a line of code is written. Create a test architecture that can be woven into your CD pipeline. Build a self-adjusting system that applies the right tests at the appropriate time in development, covering unit tests through to performance testing. That way you always have the real-time insight you need into your software quality.
Quality Assurance (QA)
A department, procedure or program within an organization that is involved in testing hardware and/or software. QA ensures that all products and systems perform as originally specified.
Quality Assurance Analyst
A person who is responsible for maintaining software quality within an organization. Such individuals develop and use stringent testing methods and may also be involved with ISO 9000 and the SEI models.
R
Rational Unified Process (RUP)
Software from IBM that provides guidelines, templates and examples for each team member in the system development process. Supporting the Unified Modeling Language (UML), RUP can be used with other Rational tools to provide a uniform set of best practices for iterative development, an approach whose roots date back to the 1970s. Rational calls its product the e-coach for software teams. The product was originally from Rational Software, which was acquired by IBM in 2003.
Re-testing
Re-testing is executing a previously failed test against new software to check if the problem is resolved. After a defect has been fixed, re-testing is performed to check the scenario under the same environmental conditions.
Refactoring
The process of improving implementation details of code without changing its functionality. Refactoring without tests is a very brittle process, as the developer doing the refactoring can never be sure that his improvements are not breaking some parts of functionality. If the code was written using test-driven development, the developer can be sure that his refactoring was successful as soon as all tests pass, as all the required functionality of the code is still correct.
A technique for restructuring an existing body of code by improving/simplifying its internal structure (design) without changing its external behavior. Refactoring is one of the principal techniques for managing technical debt.
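A before-and-after sketch of the idea, with invented code: the external behaviour, and therefore the test that protects it, stays the same while the internal structure improves.

```python
# Illustration of refactoring: behavior is unchanged, only the internal
# structure improves, so the same check passes before and after. Invented code.

# Before: duplicated logic and a magic number.
def order_total_before(prices):
    total = 0
    for p in prices:
        total = total + p
    return total + total * 0.25   # 25% tax


# After: clearer names, with the loop and the magic number factored out.
TAX_RATE = 0.25


def order_total_after(prices):
    subtotal = sum(prices)
    return subtotal * (1 + TAX_RATE)


# The unchanged "contract" that protects the refactoring:
assert order_total_before([10, 20]) == order_total_after([10, 20]) == 37.5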
Regression
A software defect which appears in a particular feature after some event (usually a change in the code).
Regression Testing
Testing of the end-to-end system to verify that changes to an application did not negatively impact existing functionality.
A type of software testing that verifies that previously developed and tested software still performs correctly even after the software itself was changed or after other software in the same package or configuration has been changed. Basically, testing to determine that previously working software is still working in the presence of recently made changes (e.g., new features added, defects repaired, or code refactoring).
Release
One or more system changes that are built, tested and deployed together.
- A combination of features that when packaged together make for a coherent deliverable to customers or users.
- A version of a product that is promoted for use or deployment. Releases represent the rhythm of business-value delivery and should align with defined business cycles.
Release Coordination
The definition and execution of all the actions required to take a new feature or set of features from code check-in to go-live. In a Continuous Delivery environment, this is largely or entirely automated and carried out by the pipeline.
Release Goal
A clear statement of the purpose and desired outcome of a release. A release goal is created by considering many factors, including the target customers, high-level architectural issues, and significant marketplace events.
Release Management
The process of managing software releases from the development stage to the actual software release itself.
Release Orchestration
Helps enterprises efficiently manage and optimize their release pipelines and is necessary for enterprises that want to realize the benefits of Continuous Delivery and DevOps. Enterprise-focused release orchestration solutions offer crucial real-time visibility into release status and, through detailed reporting and analytics, provide the intelligence needed to make the best decisions. Release orchestration tools offer control over the release process, enforcing compliance requirements and also making it easy to modify release plans in an auditable manner. And they manage a mixture of manual and automated tasks that need to be coordinated across multiple teams, both business and technical.
Release Plan
In agile software development, a release plan is an evolving flowchart that describes which features will be delivered in upcoming releases.
- The output of release planning. On a fixed-date release, the release plan will specify the range of features available on the fixed future date. On a fixed scope release, the release plan will specify the range of sprints and costs required to deliver the fixed scope.
- A plan that communicates, to the level of accuracy that is reasonably possible, when the release will be available, what features will be in the release, and how much it will cost.
Release Testing
Testing a new version of software to ensure that it is ready to be released, i.e. beta testing
Release Train
- An approach to aligning the vision, planning, and interdependencies of many teams by providing cross-team synchronization based on a common cadence. A release train focuses on fast, flexible flow at the level of a larger product.
- In the Scaled Agile Framework (SAFe), an Agile Release Train is a long-lived, self-organizing collection of agile teams that plans, commits, and executes together. Agile release trains are organized around the enterprise’s significant value streams (source, Scaled Agile Framework).
Repository
In software development, a repository is a central file storage location. It is used by version control systems to store multiple versions of files. While a repository can be configured on a local machine for a single user, it is often stored on a server, which can be accessed by multiple users.
Requirements
The information needed to support a business or other activity. Systems analysts turn information requirements (the what and when) into functional specifications (the how) of an information system.
Requirements Definition and Management
Requirements definition and management (RDM) tools streamline development teams’ analysis of requirements, capture requirements in a database-based tool to enable collaborative review for accuracy and completeness, ease use-case and/or test-case creation, provide traceability, and facilitate documentation and versioning/change control. Increasingly, RDM tools support business analysts with graphical tools for process workflow definition, application simulation and prototyping, and other visual, collaborative tools. The database approach uses special-purpose repositories that are part of the requirements management solution or ship with a general-purpose commercial database integrated with the tool.
Requirements Management
The administration and control of the information needs of users. In order to achieve business objectives within an organization via information systems, user requirements must be defined in a consistent manner, prioritized and monitored.
Resource Requirements Planning
The process of converting the production plan or the master production schedule into the impact on key resources, e.g., man hours, machine hours, storage, standard cost dollars, shipping dollars and inventory levels.
Retrospective
An inspect-and-adapt activity performed at the end of every sprint. The sprint retrospective is a continuous improvement opportunity for a Scrum team to review its process (approaches to performing Scrum) and to identify opportunities to improve it.
Retrospective Meeting
A retrospective meeting or agile retrospective is a meeting that’s held at the end of an iteration in Agile software development (ASD). During the retrospective, the team reflects on what happened in the iteration and identifies actions for improvement going forward.
Review
A process or meeting during which a software product is examined by a project personnel, managers, users, customers, user representatives, or other interested parties for comment or approval.
IEEE Standard Glossary of Software Engineering Terminology as cited by Steve Croach
Risk
- The likelihood that an event will be accompanied by undesirable consequences. Risk is measured by both the probability of the event and the seriousness of the consequences.
- Any uncertainty that is expected to have a negative outcome for the activity.
Risk-Based Testing
Risk-based testing is basically testing performed for the project based on risks. Risk-based testing uses risk to prioritize and emphasize the appropriate tests during test execution. In simple terms, risk is the probability of occurrence of an undesirable outcome. This outcome is also associated with an impact. Since there might not be sufficient time to test all functionality, risk-based testing involves testing the functionality which has the highest impact and probability of failure.
Risk-based testing (RBT) is essentially a test performed for projects depending on the risks. Risk-based testing strategies make use of risks to prioritize and highlight the right tests at the time of test execution. Considering that there might not be ample time to check all kinds of functionality, risk-based testing mainly concentrates on testing the functionality that carries the biggest impact and the possibility of failure.
ROI Impact
Return on Investment (ROI) is a performance measure used to evaluate the efficiency of an investment or compare the efficiency of a number of different investments. ROI tries to directly measure the amount of return on a particular investment, relative to the investment’s cost. To calculate ROI, the benefit (or return) of an investment is divided by the cost of the investment. The result is expressed as a percentage or a ratio.
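A small worked example of one common form of the calculation (net return divided by cost), with invented figures:

```python
# Worked example of one common ROI calculation: net return divided by cost.
# The figures are invented for illustration.
cost_of_investment = 40_000.0     # e.g. licences plus engineering time
benefit = 55_000.0                # e.g. value delivered or revenue attributed

roi = (benefit - cost_of_investment) / cost_of_investment
print(f"ROI = {roi:.1%}")          # ROI = 37.5%
```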
Rollback
A manual or automated restoration of a previously saved state of a program or database.
S
Sandwich Integration Testing
Sandwich testing is the combination of the bottom-up and top-down approaches, so it gains the advantages of both. It uses stubs and drivers, where stubs simulate the behaviour of missing components. It is also known as hybrid integration testing.
SAP HANA
SAP HANA is an in-memory database and application development platform for processing high volumes of data in real time.
Scalability
Scalability is the ability of a process, system, or framework to handle a growing workload. In other words, a scalable system is adaptable to increasing demands. The ability to scale on demand is one of the biggest advantages of cloud computing.
Scalability Testing
Scalability Testing is a non-functional test methodology in which an application’s performance is measured in terms of its ability to scale up or scale down the number of user requests or other such performance measure attributes. Scalability testing can be performed at a hardware, software or database level.
Scaling
Generally, the scaling or scalability of a system is the ability of that system, network, or process to change its size or to grow.
Scenario
A scenario, in the context of business planning and strategy, is a potential event or combination of events that could be relevant to the organization — typically because it could create a significant risk or provide a significant opportunity.
Scrum
An iterative, time-bound, and incremental Agile framework for completing complex projects.
In software development, Scrum is a project management framework or methodology that is used to efficiently produce quality work while adapting quickly to change.
Scrum is a collaborative Agile development framework that breaks large processes down into small pieces in order to streamline efficiency.
Self-Service Deployment
The ability for team members to deploy to pre-production environments with relative ease.
Server-less Cloud Computing
Server-less cloud computing (or “cloud functions hosting”) is a cloud service that allows building and deploying applications on cloud without involving infrastructure management. With server-less, you can avoid provisioning or managing servers, virtual machines or containers.
Service Level Agreement (SLA)
A service level agreement (SLA) is a contractual agreement between a customer and a cloud service provider (CSP) which defines the level of service, availability and performance guaranteed by the CSP.
Service-Oriented Architecture (SOA)
Service-oriented architecture (SOA) is a software development model for distributed application components that incorporates discovery, access control, data mapping and security features.
Session-Based Testing
A structured and time-based approach to carrying out exploratory testing. This type of testing involves running the exploratory testing phase in multiple sessions. The basic idea behind SBT is to divide the whole exploratory testing effort into multiple sessions of equal duration, and then to define and create the test plan and strategies, along with reporting the test status for each session. Carrying out exploratory testing this way ensures that defects are discovered quickly, in a very short time.
Severity
Severity is defined as the degree of impact a defect has on the development or operation of a component application being tested. Higher effects on the system functionality will lead to the assignment of higher severity to the bug. A Quality Assurance engineer usually determines the severity level of the defect.
Serverless Computing
A type of service that provides access to computing resources on demand, without requiring users to configure or manage an entire server environment. AWS Lambda is the most famous serverless computing product currently, but a number of competitors have arisen recently, including Azure Serverless Functions and IBM OpenWhisk.
A cloud-computing execution model in which the cloud provider dynamically manages the allocation of machine resources. Pricing is based on the actual amount of resources consumed by an application. This is often referred to as Function as a Service.
Shift Left Testing
Shift left testing is an approach used to speed software testing and facilitate development by moving the testing process to an earlier point in the development cycle.
Shifting Left
With increasing delivery speed comes increasing security risks and compliance issues across different applications, teams, and environments. Shifting left refers to integrating risk assessment, security testing, and compliance evaluation processes earlier in the delivery pipeline. Doing so makes it cheaper and easier to address potential release delays or failures, security vulnerabilities that threaten Production, and IT governance violations that result in expensive fines.
Site Reliability Engineering (SRE)
SRE can be defined as a more opinionated and prescriptive way of doing DevOps — a way pioneered by Google… Thinking in terms of programming languages, an SRE is a concrete class that implements a DevOps interface.
Smoke Testing
Smoke Testing, also known as “Build Verification Testing”, is a type of software testing that comprises a non-exhaustive set of tests aiming to ensure that the most important functions work. The result of this testing is used to decide if a build is stable enough to proceed with further testing. The term ‘smoke testing’, it is said, came to software testing from a similar type of hardware testing, in which the device passed the test if it did not catch fire (or smoke) the first time it was turned on.
Software as a Service (SaaS)
Software as a service (SaaS) is a model of cloud computing in which applications (software) are hosted by a vendor and provided to the user as a service. SaaS applications are licensed on a subscription basis and are made available to users over a network, typically the internet. Because SaaS applications can be accessed at any time, at any place, and on any platform, they have become a popular model for delivery of many business applications. A well-known example of SaaS is Microsoft’s Office 365, which provides Microsoft’s famous suite of productivity software, including MS Word and Excel, as a service.
Software Development
Software development is the body of processes involved in creating software programs, embodying all the stages throughout the systems development life cycle (SDLC).
Software Development Kit (SDK)
A Software development kit (SDK), also known as a developer’s toolkit or devkit, is a set of development tools that aids or allows the creation of applications for a certain platform. SDKs typically include APIs, sample code, documentation, debuggers and other utilities.
Software Development Life Cycle
The SDLC describes the phases that a software product runs through during its development.
Software Engineering
Software engineering is the application of principles used in the field of engineering, which usually deals with physical systems, to the design, development, testing, deployment and management of software systems.
Software Review
Software review is an important part of Software Development Life Cycle (SDLC) that assists software engineers in validating the quality, functionality, and other vital features and components of the software… It is a complete process that involves testing the software product and ensuring that it meets the requirements stated by the client.
Source Control
The system for storing, managing and tracking the changes to the source code. The most popular are GitHub, GitLab, and BitBucket.
Sprint (Software Development)
A sprint is a set period of time during which specific work has to be completed and made ready for review.
A short-duration, timeboxed iteration. Typically a timebox between one week and a calendar month during which the Scrum team is focused on producing a potentially shippable product increment that meets the Scrum team’s agreed-upon definition of done.
Sprint Backlog
An interactive list of work items that have been assigned to the same sprint or iteration path for a team. The sprint backlog supports teams that use Scrum methodologies.
Staging Environment
The controlled copy of your production environment, resembling it to the fullest possible extent. This allows testing new software versions to find bugs before the release to production.
State Transition Testing
State transition testing is a type of software testing performed to check the change in the state of the application under varying input: the input condition is changed and the change in state is observed. It is a black-box testing technique carried out to observe the behavior of the system or application for different input conditions passed in a sequence. Both positive and negative input values are provided and the behavior of the system is observed. State transition testing is used where different system transitions need to be tested.
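A tiny invented sketch: a simple order state machine plus tests that drive it through a valid sequence of inputs and one invalid transition, checking the resulting state each time.

```python
# Tiny invented state machine plus tests that feed it a sequence of inputs
# and check every resulting state, including an invalid transition.
import unittest

TRANSITIONS = {
    ("new", "pay"): "paid",
    ("paid", "ship"): "shipped",
    ("paid", "cancel"): "cancelled",
}


def next_state(state, event):
    """Return the next state, or raise ValueError for an invalid transition."""
    try:
        return TRANSITIONS[(state, event)]
    except KeyError:
        raise ValueError(f"invalid transition: {event!r} from {state!r}") from None


class OrderStateTransitionTest(unittest.TestCase):
    def test_valid_transition_sequence(self):
        state = "new"
        for event, expected in [("pay", "paid"), ("ship", "shipped")]:
            state = next_state(state, event)
            self.assertEqual(state, expected)

    def test_invalid_transition_is_rejected(self):
        with self.assertRaises(ValueError):
            next_state("new", "ship")   # cannot ship before paying


if __name__ == "__main__":
    unittest.main()
```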
Static Application Security Testing (SAST)
Static application security testing (SAST) is a set of technologies designed to analyze application source code, byte code and binaries for coding and design conditions that are indicative of security vulnerabilities. SAST solutions analyze an application from the “inside out” in a nonrunning state.
Static Testing
Static testing is a software testing method that involves examination of the program’s code and its associated documentation but does not require the program be executed. Dynamic testing, the other main category of software testing methods, involves interaction with the program while it runs. The two methods are frequently used together to try to ensure the functionality of a program. Static testing may be conducted manually or through the use of various software testing tools. Specific types of static software testing include code analysis, inspection, code reviews and walkthroughs.
Static testing is defined as a software testing technique by which we can check the defects in software without actually executing it. Its counterpart is dynamic testing, which checks an application when the code is run. Static testing is done to avoid errors at an early stage of development, as it is easier to find the sources of failures than the failures themselves. Static testing helps to find errors that may not be found by dynamic testing.
Stress Testing
Determining the durability of a system by pushing it to its limits. Stress testing a network is performed by transmitting excessive numbers of packets or attempting to break in illegally. Software stress testing is done by feeding the program erroneous data as well as activating all interface options in all possible sequences. Hardware stress testing involves using the devices in extreme temperatures and hazardous environments.
Structural Testing
Structural testing is the type of testing carried out to test the structure of the code. It is also known as white-box testing or glass-box testing. This type of testing requires knowledge of the code, so it is mostly done by the developers. It is more concerned with how the system does something than with the functionality of the system. It provides more coverage to the testing.
Stub
A small software routine placed into a program that provides a common function. Stubs are used for a variety of purposes. For example, a stub might be installed in a client machine, and a counterpart installed in a server, where both are required to resolve some protocol, remote procedure call (RPC) or other interoperability requirement.
System
- A group of related components that interact to perform a task.
- A computer system is made up of the CPU, operating system and peripheral devices. All desktop computers, laptop computers, network servers, minicomputers and mainframes are computer systems. Most references to computer imply the computer system.
- An information system is a business application made up of the database, the data entry, update, query and report programs as well as manual and machine procedures. Order processing systems, payroll systems, inventory systems and accounts payable systems are examples of information systems.
- The system often refers to the operating system, the master control program that runs the computer.
System Integration
The process of creating a complex information system that may include designing or building a customized architecture or application, integrating it with new or existing hardware, packaged and custom software, and communications. Most enterprises rely on an external contractor for program management of most or all phases of system development. This external vendor generally also assumes a high degree of the project’s risks.
System Integration Testing
System integration testing (SIT) is a high-level software testing process in which testers verify that all related systems maintain data integrity and can operate in coordination with other systems in the same environment. The testing process ensures that all subcomponents are integrated successfully to provide expected results.
System Test
Running a complete system for testing purposes.
System Integrator (SI)
An enterprise that specializes in implementing, planning, coordinating, scheduling, testing, improving and sometimes maintaining a computing operation. SIs try to bring order to disparate suppliers.
T
Technical Debt
A concept describing the extra developer work eventually required to correct quick, simple code that was used to gain fast results instead of spending the time to design and implement the best solution.
In software development, technical debt is a metaphor equating Extreme Programming’s incremental, get-something-started approach with the easy acquisition of money through fast loans.
Test Automation
The process of using a specific software to test the new software versions against unit tests and compare the actual test outcomes with the predicted results.
The use of special software (separate from the software being tested) to control the execution of tests and the comparison of actual outcomes with predicted outcomes (Wikipedia). Test automation is critical for supporting many other technical practices such as code refactoring, continuous integration, and continuous delivery.
Test Case
A set of test data and test programs (test scripts) and their expected results. A test case validates one or more system requirements and generates a pass or fail.
Test Coverage
Test coverage is defined as a metric in software testing that measures the amount of testing performed by a set of tests. It includes gathering information about which parts of a program are executed when running the test suite, for example to determine which branches of conditional statements have been taken. In simple terms, it is a technique to ensure that your tests are exercising your code, and to measure how much of your code is exercised by running the tests.
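A small sketch of why branch coverage matters, with an invented classify function: with only the first test, the else branch never runs and a coverage tool would report an untested branch; the second test brings branch coverage to 100%.

```python
# Sketch of branch coverage: with only test_adult(), the "else" branch of
# classify() never runs, so a coverage tool would report an untested branch.
# Adding test_minor() exercises both branches. classify() is an invented example.
import unittest


def classify(age):
    if age >= 18:
        return "adult"
    else:
        return "minor"


class CoverageExample(unittest.TestCase):
    def test_adult(self):
        self.assertEqual(classify(30), "adult")

    def test_minor(self):                      # without this, branch coverage < 100%
        self.assertEqual(classify(12), "minor")


if __name__ == "__main__":
    unittest.main()
```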
Test Data
A set of data created for testing new or revised applications. Test data should be developed by the user as well as the programmer and must contain a sample of every category of valid data and as many invalid conditions as possible.
Test Data Generator
A tool or set of instructions for forming files containing sets of test data developed specifically to ensure the adequacy of a computer run or system.
Test Driven Development (TDD)
The primary goal of TDD is to make the code clearer, simpler, and bug-free. Test-driven development starts with designing and developing tests for every small piece of functionality of an application. In the TDD approach, the test is developed first, specifying and validating what the code will do; in the normal software testing process, we first write the code and then test it. TDD can be defined as a programming practice that instructs developers to write new code only if an automated test has failed, in order to avoid duplication of code. Tests may fail since they are developed even before the development. In order to pass the test, the development team has to develop and refactor the code. Refactoring code means changing some code without affecting its behavior.
A development practice in which small tests to verify the behavior of a piece of code are written before the code itself. The tests initially fail, and the aim of the developer(s) is then to add code to make them succeed.
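A compact sketch of that test-first rhythm: the test below is written before slugify exists (so it fails at first), and only then is just enough code added to make it pass. slugify is an invented example.

```python
# Sketch of the test-first rhythm: the test below is written before slugify()
# exists, so it fails first; the implementation is then added to make it pass.
# slugify() is an invented example, kept deliberately minimal.
import unittest


def slugify(title):
    """Just enough code to satisfy the test: lowercase and hyphenate words."""
    return "-".join(title.lower().split())


class SlugifyTest(unittest.TestCase):
    def test_turns_a_title_into_a_url_slug(self):
        self.assertEqual(slugify("Hello Brave World"), "hello-brave-world")


if __name__ == "__main__":
    unittest.main()
```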
Test Driver
Test drivers are used during bottom-up integration testing in order to simulate the behaviour of the upper-level modules that are not yet integrated. Test drivers are modules that act as a temporary replacement for a calling module and give the same output as the actual product. Drivers are also used when the software needs to interact with an external system, and they are usually more complex than stubs.
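An invented sketch of a driver in bottom-up integration: the lower-level module tax_for exists, but its calling module does not yet, so a throwaway driver exercises it the way the future upper layer would.

```python
# Invented sketch of a test driver: the lower-level module (tax_for) exists,
# but its caller does not yet, so a throwaway driver calls it the way the
# future upper layer would and checks the results.
def tax_for(amount, rate=0.2):
    """Lower-level module under test."""
    return round(amount * rate, 2)


def driver():
    """Temporary stand-in for the not-yet-written calling module."""
    cases = [(100.0, 20.0), (19.99, 4.0), (0.0, 0.0)]
    for amount, expected in cases:
        result = tax_for(amount)
        assert result == expected, f"tax_for({amount}) returned {result}, expected {expected}"
    print("driver: all cases passed")


if __name__ == "__main__":
    driver()
```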
Test Environment
A testing environment is a setup of software and hardware for the testing teams to execute test cases. In other words, it supports test execution with hardware, software and network configured.
Test Execution
Test execution is the process of executing the code and comparing the expected and actual results.
Test Log
The detailed, time-stamped file of tests that have been run and the outcomes of each (e.g. passed or failed).
Test Manager
The role of the software test manager is to lead the testing team. Test Manager plays a central role in the Team. The Test Manager takes full responsibility for the project’s success. The role involves quality & test advocacy, resource planning & management, and resolution of issues that impede the testing effort.
Test Plan
A test plan is a document describing software testing scope and activities. It is the basis for formally testing any software/product in a project.
Test Policy
A document that represents the testing philosophy of the company as a whole and provides a direction which the testing department should adhere to.
Test Report
A test report is a document that contains a summary of test activities and final test results, together with an assessment of how well the testing was performed.
Test Suite
A collection of test cases that test a large portion of software. Alternatively, all test cases for a particular software.
A collection of test scenarios and/or test cases that are related or that cooperate with each other.
Tester
A person whose role is to check the quality of code, e.g. verify if observed results match expected results, fix bugs, write tests, create documentation, etc.
Toolchain
From source code management and continuous integration to environment provisioning and application deployment, there are a ton of tools that get specific processes done in an enterprise DevOps practice. A DevOps toolchain refers to the set of tools that work together in the delivery, development, and management of an application.
A toolchain is a set of tools that are connected or interlinked to each other. These connections create a chain of tools to automate processes.
U
Unit Testing
Testing the smallest testable part of the code base. In Java, you could consider a standalone method within a class as a unit, or you could consider the class itself as the unit. There certainly should not be any interaction with services outside of the product (e.g. Databases, Kafka, Web Servers). These would typically be stubbed or mocked. Unit tests often have a huge overlap with component testing.
Code-level (i.e., does not require a fully installed end-to-end system to run) testing to verify the behavior of individual pieces of code. Test-driven development makes extensive use of unit tests to describe and verify intended behavior.
Unit Tests
Unit testing is a level of software testing where individual units/ components of a software are tested. The purpose is to validate that each unit of the software performs as designed. A unit is the smallest testable part of any software. It usually has one or a few inputs and usually a single output. Unit testing frameworks, drivers, stubs, and mock/ fake objects are used to assist in unit testing.
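A minimal, self-contained unit test of the kind described in the two entries above, using Python’s standard unittest module; the word_count function under test is invented and has no external dependencies, so no stubs or mocks are needed.

```python
# Minimal unit test of a single, dependency-free function (invented example):
# a few inputs, a single clear expected output for each.
import unittest


def word_count(text):
    """Unit under test: count whitespace-separated words."""
    return len(text.split())


class WordCountTest(unittest.TestCase):
    def test_counts_words(self):
        self.assertEqual(word_count("continuous delivery pipeline"), 3)

    def test_empty_string_has_no_words(self):
        self.assertEqual(word_count(""), 0)


if __name__ == "__main__":
    unittest.main()
```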
User Acceptance Testing
In software development, user acceptance testing (UAT), also called application testing and end-user testing, is a phase of software development in which the software is tested in the real world by the intended audience.
User Experience (UX)
The nature of a user’s interaction with and perception of a system.
User Interface (UI)
User interface (UI) is the way that the user and computer system interact.
User Story
A convenient format for expressing the desired business value for many types of product backlog items. User stories are crafted in a way that makes them understandable for both business people and technical people. They are structurally simple and typically expressed in a format such as: “As a (blank), I want to achieve (blank) so that I get (blank).” They provide a great placeholder for a conversation. Additionally, they can be written at various levels of granularity and are easy to progressively refine.
V
Value Stream Mapping
A process visualization and improvement technique that is used heavily in lean manufacturing and engineering approaches. In a software delivery pipeline, Value Stream Maps are used to identify essential process steps so that “waste” can be eliminated from the process.
Lean term: Value Stream Mapping (VSM) is a tool for gaining insight into the workflow of a process and can be used to identify both value-adding and non-value-adding activities in a process stream, while providing starting points for optimizing the process chain (source: DevOps Agile Skills Association).
Version Control System (VCS)
A system that records changes to a file or set of files over time so that you can recall specific versions later (e.g. Git or Subversion, often hosted on platforms such as GitHub or GitLab).
Version control is the practice of managing code in versions—tracking revisions and change history to make code easy to review and recover. This practice is usually implemented using version control systems such as Git which allow multiple developers to collaborate in authoring code. These systems provide a clear process to merge code changes that happen in the same files, handle conflicts and roll back changes to earlier states. The use of version control is a fundamental DevOps practice, helping development teams work together, divide coding tasks between team members and store all code for easy recovery if needed. Version control is also a necessary element in other practices such as continuous integration and infrastructure as code.
Virtual Desktop Infrastructure (VDI)
Virtual desktop infrastructure (VDI) is a desktop operating system hosted within a virtual machine.
Virtual Machine
An abstract specification for a computing device that can be implemented in different ways, in software or hardware. You compile to the instruction set of a virtual machine much like you’d compile to the instruction set of a microprocessor. The Java virtual machine consists of a bytecode instruction set, a set of registers, a stack, a garbage-collected heap, and an area for storing methods.
W
Waterfall
A software development methodology based on a phased approach to projects, from “Requirements Gathering” through “Development” and so on, to “Release.” Phases late in the process (typically related to testing and QA) tend to be squeezed, as delays put projects under time pressure.
White-box testing
White-box testing is a testing technique in which the person performing the testing knows about, or can read, the internals of the system under test. Unlike the more common black-box testing, white-box testing allows a deeper analysis of possible problems in the code. In addition, code coverage techniques are usually, by definition, part of white-box testing.
A testing or quality assurance practice that is based on verifying the correct functioning of the internals of a system by examining its (internal) behavior and state as it runs.
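For example, a tester who can read the (hypothetical) discountedPrice method below can deliberately write one test per branch, achieving branch coverage that black-box testing cannot guarantee; a minimal JUnit 4 sketch:

```java
import static org.junit.Assert.assertEquals;
import org.junit.Test;

public class DiscountTest {

    // Hypothetical code under test; the tester can read this implementation.
    static double discountedPrice(double price, boolean loyalCustomer) {
        if (loyalCustomer) {
            return price * 0.9;   // branch 1: 10% loyalty discount
        }
        return price;             // branch 2: no discount
    }

    // One test per branch, chosen by inspecting the code above,
    // giving full branch coverage of the method.
    @Test
    public void loyalCustomerGetsTenPercentOff() {
        assertEquals(90.0, discountedPrice(100.0, true), 0.001);
    }

    @Test
    public void otherCustomersPayFullPrice() {
        assertEquals(100.0, discountedPrice(100.0, false), 0.001);
    }
}
```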
X
XP (Extreme Programming)
An agile software development framework that aims to produce higher quality software, and higher quality of life for the development team. XP is the most specific of the agile frameworks regarding appropriate engineering practices for software development.
Y
YAML
YAML, an acronym for “YAML Ain’t Markup Language,” is a human-readable data serialization language. YAML files can be used in software delivery to specify and automate deployment and release processes. With YAML files, you can capture the configuration of your existing applications and pipelines as familiar constructs to use in your development environment.
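As a small illustration, a hypothetical pipeline configuration expressed in YAML; the keys and values are invented for the example and do not follow any particular tool's schema:

```yaml
# Hypothetical pipeline configuration expressed in YAML.
pipeline:
  name: checkout-service
  stages:
    - name: build
      image: maven:3.9
      commands:
        - mvn package
    - name: deploy
      environment: staging
      replicas: 3
```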
Z
Z Notation
The Z notation (pronounced “zed,” named after the German mathematician Ernst Zermelo) originated at the Oxford University Computing Laboratory, UK, and has evolved into a conceptually clear and mathematically well-defined specification language. The mathematical basis for Z is ZF (Zermelo–Fraenkel) set theory and classical two-valued predicate logic. A distinctive feature of the Z specification language is its schema notation: using schemas, one can develop modular specifications in Z and compose them using the schema calculus.
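As a brief illustration of the schema notation, a LaTeX sketch of the classic birthday-book schema; it assumes the zed-csp package (which supplies the zed and schema environments and symbols such as \power, \pfun, and \dom) is available:

```latex
\documentclass{article}
\usepackage{zed-csp} % assumed available: provides the zed/schema environments and Z symbols

\begin{document}

% Given sets used by the specification.
\begin{zed}
  [NAME, DATE]
\end{zed}

% A state schema: declarations above the dividing line, the invariant below it.
\begin{schema}{BirthdayBook}
  known : \power NAME \\
  birthday : NAME \pfun DATE
\where
  known = \dom birthday
\end{schema}

\end{document}
```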