www.ChineseStandard.us -- Field Test Asia Pte. Ltd.

GB/T 43499-2023 English PDF (GB/T43499-2023)


GB/T 43499-2023: Testing methods for the software of motor vehicle inspection system

GB/T 43499-2023
GB
NATIONAL STANDARD OF THE
PEOPLE’S REPUBLIC OF CHINA
ICS 43.180
CCS R 86
Testing methods for the software of motor vehicle inspection system
ISSUED ON: DECEMBER 28, 2023
IMPLEMENTED ON: JULY 01, 2024
Issued by: State Administration for Market Regulation;
National Standardization Administration.
Table of Contents
Foreword
1 Scope
2 Normative references
3 Terms and definitions
4 Test content
5 Test methods
6 Test document set
Appendix A (Informative) List of typical defects in software perspective testing
Appendix B (Informative) Version registration record form
Appendix C (Informative) Testing report
References
Testing methods for the software of motor vehicle inspection system
1 Scope
This document specifies the test content, test methods, and test document set of the
motor vehicle inspection agency’s inspection system software.
This document is applicable to the testing of inspection system software of motor
vehicle inspection agencies.
2 Normative references
The following documents are normatively referenced in the text and their contents constitute essential provisions of this document. For dated references, only the edition corresponding to that date applies to this document; for undated references, the latest edition (including all amendments) applies to this document.
GB/T 25000.51-2016 Systems and software engineering - Systems and software
Quality Requirements and Evaluation (SQuaRE) - Part 51: Requirements for quality
of Ready to Use Software Product (RUSP) and instructions for testing
GB/T 26765 Specifications for motor vehicle safety inspection business information
system and networking
GB/T 38634.4-2020 Systems and software engineering - Software testing - Part 4:
Test techniques
GB/T 42685 Terminology of power-driven vehicle inspection
HJ 1238 Technical specification for data collection and transmission of vehicles
periodic emissions inspection
3 Terms and definitions
The terms and definitions defined in GB/T 42685 and the following apply to this
document.
3.1
Vehicle inspection institution
4.2.2.2 Data non-repudiation
Test the extent to which the occurrence of an activity or event can be proven and cannot subsequently be denied. Data non-repudiation testing includes the following:
a) Function to provide evidence to the data originator: Test whether the software has
the function to provide evidence of the origin of the data to the data originator
upon request;
b) Function to provide evidence to the data recipients: Test whether the software has
the function of providing data recipients with evidence of data receipt upon
request.
4.2.2.3 Data verifiability
Test the extent to which an entity's activities can be uniquely traced to that entity. Data
verifiability testing includes the following:
a) User process association and traceability: Test whether the software can associate
the user process with the owner user, so that the behavior of the user process can
be traced back to the owner user of the process;
b) Process dynamic association and traceability: Test whether the software can
associate the system process dynamics with the current service requester user, so
that the behavior of the system process can be traced back to the current service
requester user.
4.2.2.4 Data authenticity
Test the extent to which the identity of a subject or resource can be verified to be consistent with its claim. Data authenticity testing includes the following:
a) User list and configuration table: Test whether the software has the user list and
configuration table of the current system;
b) Access login record: Test whether the access login record of the software is
complete in the system's access history database;
c) Historical logs and log management: Test whether the software has historical logs
and log management functions for users to use the system;
d) Simulated intrusion log records: Test whether the software log content has
relevant records when the software is invaded by a simulated attack event;
e) Virus detection records: Test whether the records of software users' access to systems and data include virus detection records, for virus prevention.
4.2.2.5 Data transmission security
Test how well data is protected during transmission and processing. Data transmission
security testing includes the following:
a) Verification: Test whether a verification-code algorithm is used to generate a verification code from the source data, and to verify the integrity of important data during transmission and processing, so as to prevent key data from being illegally tampered with;
b) Data encryption: Test whether encryption technology is used to encrypt important
data and private information to protect data confidentiality and prevent
information leakage;
c) Network transmission security: Test whether measures are taken to ensure the
security of data transmission between different networks.
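As an illustration of item a), a keyed verification code can be generated for the source data and re-checked on receipt. The following is a minimal Python sketch, not part of the standard: the record content and shared key are hypothetical, and HMAC-SHA256 stands in for whatever verification-code algorithm the software under test actually uses.

```python
import hashlib
import hmac

def make_verification_code(data: bytes, key: bytes) -> str:
    """Generate a keyed verification code (HMAC-SHA256) for the source data."""
    return hmac.new(key, data, hashlib.sha256).hexdigest()

def verify_integrity(data: bytes, key: bytes, code: str) -> bool:
    """Recompute the code on the received data and compare in constant time."""
    return hmac.compare_digest(make_verification_code(data, key), code)

key = b"shared-secret"                      # hypothetical shared key
record = b"plate=ABC123;result=pass"        # hypothetical inspection record
code = make_verification_code(record, key)

assert verify_integrity(record, key, code)                           # intact data passes
assert not verify_integrity(b"plate=ABC123;result=fail", key, code)  # tampering detected
```

A test of this function would feed both unmodified and deliberately altered data through the channel and confirm that only the unmodified data verifies.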
4.2.3 Source code standardization
Check the program and discover possible exceptions in the program. Source code
testing includes but is not limited to the following:
a) Statement label does not exist: Test whether any statement redirects to a
statement label that does not exist;
b) Unused statement labels: Test whether there are unused statement labels;
c) Unused subroutine definitions: Test whether there are unused subroutine
definitions;
d) Subroutine does not exist: Test whether to call a subroutine that does not exist;
e) Unreachable statements: Test whether there are statements that cannot be reached
after entering from the program entrance;
f) Statements that cannot reach the stop statement: Test whether there are statements
that cannot reach the stop statement;
g) Special trigger pop-up window: Test whether there are special trigger
conditions under which a pop-up window appears.
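Check e), unreachable statements, is normally performed by a static-analysis tool. As a hedged sketch, assuming the source under test is Python (the standard does not prescribe a language), the standard-library `ast` module can flag statements that follow a `return`, `raise`, `break`, or `continue` in the same block:

```python
import ast

def unreachable_after_return(source: str) -> list:
    """Report line numbers of statements that follow a return/raise/break/
    continue in the same block and therefore can never execute."""
    flagged = []
    for node in ast.walk(ast.parse(source)):
        body = getattr(node, "body", None)
        if not isinstance(body, list):
            continue
        terminated = False
        for stmt in body:
            if terminated:
                flagged.append(stmt.lineno)
            elif isinstance(stmt, (ast.Return, ast.Raise, ast.Break, ast.Continue)):
                terminated = True
    return flagged

sample = (
    "def f(x):\n"
    "    return x\n"
    "    print('never runs')\n"   # line 3 is unreachable
)
print(unreachable_after_return(sample))  # → [3]
```

This only covers straight-line blocks; a production checker would also examine `orelse` and `finally` bodies and build a full control-flow graph.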
4.2.4 Others
Check software version registration, upgrades, changes and other records. Other tests
include but are not limited to the following:
a) Registration form: Check whether the software has a version registration form,
change and upgrade record form (if it has been changed or upgraded), etc.;
b) Registration form and software consistency: Check the version registration form,
upgrade (change) internal approval form, upgrade (change) record form, etc., to
check whether they are consistent with the current software.

5.2.1 Equivalence class division
Equivalence class division uses the test item model to divide the inputs and outputs
of the test item into equivalence classes (also called "partitions"), where each
equivalence class shall be used as a test condition. These equivalence classes shall
be derived from the test basis, such that all values in each partition are treated in
the same way by the test item (i.e., the values in the equivalence class are
"equivalent"). Equivalence classes may be derived from valid as well as invalid
inputs and outputs.
Each equivalence class shall be a test coverage item (that is, in the equivalence class
division, the test conditions and test coverage items are the same equivalence class).
Exported test cases shall implement each test coverage item (i.e., equivalence class).
The steps to export test cases are as follows.
a) Determine how the selected test cases combine test coverage items. The
following are two common methods:
1) One-to-one, each exported test case is used to cover a specific equivalence
class;
2) Minimization, where equivalence classes are covered by test cases such that
the minimum number of test cases derived covers all equivalence classes at
least once.
b) Use the method in step a) to select the test coverage items included in the current
test case.
c) Determine the input values for the test coverage items covered by the test case for
execution, as well as any valid values for any other input variables required by
the test case.
d) Apply the inputs to the test basis, to determine the expected results of the test case.
e) Repeat steps b) ~ d) until the required test coverage is achieved.
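The steps above can be sketched for a hypothetical test item, an age field that is valid only between 18 and 70, using the one-to-one method (one representative value per equivalence class). The partitions, the representative values, and the implementation under test are all illustrative assumptions, not content of the standard:

```python
# Hypothetical test item: an age field that is valid only in 18..70.
partitions = {
    "too_low":  range(-5, 18),    # invalid equivalence class
    "valid":    range(18, 71),    # valid equivalence class
    "too_high": range(71, 130),   # invalid equivalence class
}

def accepts_age(age: int) -> bool:
    """Assumed implementation of the test item."""
    return 18 <= age <= 70

# One-to-one derivation: one test case (a representative value) per class.
test_cases = {name: next(iter(values)) for name, values in partitions.items()}

# Expected results come from the test basis (the 18..70 specification).
expected = {"too_low": False, "valid": True, "too_high": False}

results = {name: accepts_age(value) for name, value in test_cases.items()}
assert results == expected   # every equivalence class covered exactly once
```

The minimization method would instead pack several coverage items into one test case where the test item allows it; here each class needs its own input, so both methods yield three cases.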
5.2.2 Classification tree
The classification tree method uses the test item model to divide the inputs of the test
item and represents the division graphically in the form of a classification tree. The
input of the test item is divided into several "classifications"; each division consists of
several independent (non-overlapping) "categories" and subcategories; meanwhile the
category set is complete (all input domains of the modeled test item are recognized and
included in the categories). Each category shall be a test condition. Depending on the
rigor of the test, a "category" obtained by decomposing a classification may be further
divided into "subcategories". Depending on the required test coverage, the derived
partitions and categories may include both valid and invalid input data. The hierarchical
relationship among classifications, categories, and subcategories is shaped into a tree:
the input domain of the test item is the root node, the classifications are the branch
nodes, and the categories and subcategories are the leaf nodes.
Test coverage items shall be derived by combining categories from the classifications
using the selected combination method; the exported test cases shall implement each
test coverage item. The steps to export test cases are as follows:
a) Export test coverage items; select a combination for the current test case,
requiring that the combination is not yet covered by existing test cases;
b) Determine the input values in each category that have not yet been assigned a
value;
c) Determine the expected results of the test case by applying the inputs to the test
basis;
d) Repeat steps a) ~ c), until the required test coverage level is reached.
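As an illustrative sketch (the classifications and categories below are invented for the example, not taken from the standard), a classification tree can be held as a mapping from classifications to their categories; the full combination and a simple "each-choice" minimization then look like this in Python:

```python
from itertools import product

# Hypothetical classification tree for an inspection-slot request:
# root = input domain, branches = classifications, leaves = categories.
tree = {
    "vehicle_type": ["passenger", "truck", "motorcycle"],   # classification 1
    "fuel":         ["petrol", "diesel", "electric"],       # classification 2
    "first_visit":  [True, False],                          # classification 3
}

# Full combination: one test coverage item per combination of leaves.
coverage_items = [dict(zip(tree, combo)) for combo in product(*tree.values())]
print(len(coverage_items))   # 3 * 3 * 2 = 18 combinations

# "Each-choice" minimization: cover every leaf at least once using only
# max(len(categories)) test cases, by cycling through each classification.
width = max(len(cats) for cats in tree.values())
minimized = [
    {name: cats[i % len(cats)] for name, cats in tree.items()}
    for i in range(width)
]
assert all(cat in {tc[name] for tc in minimized}
           for name, cats in tree.items() for cat in cats)
```

The choice between full combination and minimization corresponds to the rigor of the test: 18 coverage items versus 3 test cases that still touch every category once.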
5.2.3 Boundary value analysis
Boundary value analysis divides the input and output of the test item into multiple
ordered sets and subsets (partitions and sub-partitions) with identifiable boundaries,
through the analysis of the boundary value of the test item model, where each boundary
is a test condition. Boundaries shall be derived from the test basis.
The exported test cases shall implement each test coverage item. Below are the steps to
export test cases.
a) Determine how the selected test cases combine test coverage items. There are
two common methods:
1) One-to-one, each test case implements a specified boundary value;
2) Minimization, which derives a minimum number of test cases to cover all
boundary values at least once.
b) Use the method in step a) to select the test coverage items included in the current
test case.
c) Other input variables not selected by the test case in step b) take any valid value.
d) Determine the expected results of the test case by applying the inputs to the test
basis.
e) Repeat steps b) ~ d), until the required test coverage level is reached.
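A minimal sketch of two-value boundary analysis for a single ordered partition: each boundary contributes the values on either side of it. The valid range 18..70 and the implementation under test are illustrative assumptions, not content of the standard:

```python
def derive_boundary_values(low: int, high: int) -> list:
    """Two-value boundary analysis for an ordered partition [low, high]:
    each of the two boundaries contributes the value on either side of it."""
    return [low - 1, low, high, high + 1]

# Hypothetical test item: a field that is valid only in the range 18..70.
cases = derive_boundary_values(18, 70)
print(cases)   # → [17, 18, 70, 71]

def in_range(x: int) -> bool:
    """Assumed implementation of the test item."""
    return 18 <= x <= 70

# Expected results derived from the test basis (the 18..70 specification).
expected = [False, True, True, False]
assert [in_range(v) for v in cases] == expected
```

Three-value boundary analysis would add the value one step inside each boundary as well; the one-to-one method shown here gives one test case per boundary value.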
5.2.4 Cause-and-effect diagram
The cause-and-effect diagram method uses a cause-and-effect diagram to represent the
logical relationship model between the cause (such as input) and the result (such as
output) of the test item, including:
b) Determine the test case input values that cover the test coverage items;
c) Determine the expected results of the test case by applying the inputs to the test
basis (the expected results can be defined using the access states described in the
output and state models);
d) Repeat steps a) ~ c), until the required test coverage level is reached.
5.2.6 Scenario testing
Scenario testing uses a sequence model of interactions between a test item and other
systems (in this context, users are often considered to be other systems), to test the
involved test item usage processes. The test condition shall be one interaction sequence
(i.e., one scenario) or all interaction sequences (i.e., all scenarios).
Scenario testing shall include the following scenarios:
The "main" scenario is the expected typical action sequence of the test item, or an
arbitrary choice taken where there is no typical action sequence; the "alternative"
scenario represents the optional (non-main) scenario of the test item.
Test coverage items shall be the main scenario and alternative scenarios (that is, the test
coverage items are the same as the test conditions). Among the test cases exported by
scenario testing, one test case covers at least one scenario (test coverage item). The
steps to export test cases are as follows:
a) Select the test coverage items implemented by the current test case;
b) Determine the input values of test coverage items covered by test cases;
c) Determine the expected results of the test case by applying the input to the test
basis;
d) Repeat steps a) ~ c), until the required test coverage level is reached.
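One way to make the main and alternative scenarios concrete is to model the test item's allowed interactions as a transition table and replay each scenario against it. The inspection workflow below is a hypothetical example, not a process defined by the standard:

```python
# Hypothetical workflow of the test item: allowed state transitions.
TRANSITIONS = {
    ("registered", "inspect"):   "inspected",
    ("inspected",  "pass"):      "passed",
    ("inspected",  "fail"):      "failed",
    ("failed",     "reinspect"): "inspected",
    ("passed",     "report"):    "done",
}

def run(scenario):
    """Replay one interaction sequence (one scenario) against the model;
    a KeyError means the scenario attempted an invalid interaction."""
    state = "registered"
    for action in scenario:
        state = TRANSITIONS[(state, action)]
    return state

# Main scenario: the expected typical action sequence of the test item.
assert run(["inspect", "pass", "report"]) == "done"
# Alternative scenario: a non-main path through the same test item.
assert run(["inspect", "fail", "reinspect", "pass", "report"]) == "done"
```

Each scenario is one test coverage item; a test case executes the sequence and compares the final state (and any intermediate outputs) with the expected results from the test basis.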
5.2.7 Random testing
Random testing uses an input domain model of the test item to define the set of all
possible input values. An input distribution shall be chosen for generating the random
input values. The test condition shall be the entire input domain, tested randomly.
Random testing has no predetermined test coverage items; test cases for random testing
shall select input values randomly from the input domain of the test item (or pseudo-
randomly, if a tool is used), according to the selected input distribution. The steps to
export test cases are as follows:
a) Select an input distribution for the test input;
b) Generate random values of the test input according to the input distribution in step
a);
c) Determine the expected results of the test case by applying the input to the test
basis;
d) Repeat steps b) ~ c), until the required tests are completed.
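The steps above can be sketched in Python with a seeded pseudo-random generator for step b); the speed check, its valid range 0..120, and the uniform distribution are illustrative assumptions, not content of the standard:

```python
import random

def speed_check(v: float) -> bool:
    """Assumed test item: accepts a speed reading valid in 0..120."""
    return 0.0 <= v <= 120.0

rng = random.Random(42)   # seeded, so the pseudo-random run is reproducible

# Step a) choose an input distribution: uniform over a domain wider than
# the valid range, so that invalid inputs are also generated.
def draw() -> float:
    return rng.uniform(-50.0, 170.0)

# Steps b)-d): generate inputs, derive the expected result from the test
# basis (the 0..120 specification), and compare with the test item.
for _ in range(1000):
    v = draw()
    expected = 0.0 <= v <= 120.0   # oracle derived from the test basis
    assert speed_check(v) == expected
print("1000 random cases passed")
```

Seeding the generator is a common practical choice: it keeps the run random with respect to the input distribution while making any failure reproducible.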
5.3 Structure-based testing methods
5.3.1 Statement testing
Statement testing shall derive a source code model of the test item and identify each
statement as executable or non-executable. Each executable statement shall be a test
condition.
The steps to export test cases are as follows:
a) Identify control flow subpaths that reach one or more test coverage items that have
not yet been executed to test coverage; ...