Did your last Automotive SPICE Assessment fail and you don't know why? Or is your first assessment coming up?
This article series is about preparing for an assessment of the Automotive SPICE process Software Unit Verification (SWE.4). We look at the process, the expected deliverables, and the assessor's point of view, always with one question in mind: what do you have to do to get through an assessment successfully?
An Automotive SPICE Assessment is only successful if all participants from the project know what is expected of them and can provide evidence for it.
For every process, Automotive SPICE version 3.1 requires two basic types of deliverables: process outcomes and output work products.
The concrete outcomes should be known by all participants for a successful assessment, because an assessor will pay particular attention to them when assessing the process.
Good to know: The “Software Unit Verification (SWE.4)” process is often equated only with the dynamic testing of software units. Although this is an essential component, much more is expected here.
An Automotive SPICE Assessment is intended to determine the capability level of an organization's processes. The capability level is seen as an indicator of high quality. The assessment itself is performed for each process using the base practice descriptions from the reference process model. For a rating of Level 1, "performed process", the required achievements must be fulfilled to more than 50% ("largely achieved").
Pro-Tip: The Software Unit Verification process has 7 base practices (see figure). Consider all of them; do not ignore any. The qualitative requirements for a Level 1 rating are very high, and the results of the base practices have cross-dependencies with upstream and downstream processes. Poor performance can therefore result in downgrades in other areas.
The software unit verification strategy is the basis for all activities in the Software Unit Verification process, and therefore also the basis in an assessment. The strategy is required by Base Practice 1: Develop Software Unit Verification Strategy including Regression Strategy.
For an assessor, a unit verification strategy must include at least the following 10 aspects:
1. Definition of all units. The definition can be generic or specific. Make sure that units are uniquely identifiable. In the simplest case, there is a list of functions or files that are classified as units.
2. Definition of how specific requirements related to verification and testing are covered. This means functional, non-functional and process requirements.
3. Definition of methods for the development of test cases and test data derived from the detailed design and non-functional requirements.
4. Definition of the methods and tools for static verification and reviews.
5. Definition of each test environment and each test methodology used.
6. Definition of the test coverage depending on the project and release phase (see the sketch after this list).
7. Definition of the test start conditions and test end criteria for dynamic unit tests.
8. Documentation of sufficient test coverage of each test level, if the test levels are combined.
9. Procedure for dealing with failed test cases, failed static checks, and check results.
10. Definition for performing regression testing.
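To illustrate aspects 6 and 7, such a definition could look like the following sketch. The phases, coverage targets, and criteria are purely illustrative assumptions, not requirements from Automotive SPICE:

```
Release phase    Required test coverage    Test end criterion
--------------   -----------------------   --------------------------------
Prototype        70% statement coverage    All planned test cases executed
Series release   100% branch coverage      All test cases passed, no open
                                           major findings
```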
Notes on the assessment.
If your Software Unit Verification Strategy does not cover all 10 aspects mentioned above, you must expect not to receive the rating "Fully" for BP1, "Develop Software Unit Verification Strategy including Regression Strategy". Not fulfilling points 2 to 4 will result in a rating of "Partly" or worse for BP1.
Implicitly, the assessor also expects all personnel involved in the process to know the contents of the Software Unit Verification Strategy. If there is no evidence of this, e.g. in the form of mails, logs or similar, it may happen that a tester is called into the assessment and their knowledge is probed in an interview.
In Automotive SPICE, the higher-level work product Verification Strategy (WP ID 19-10) is characterized in more detail. It requires the scheduling of activities, the handling of risks and constraints, the degree of independence in verification, and other aspects of a verification strategy.
How do you define the criteria for verification in Base Practice 2? With the strategic guidelines defined in Base Practice 1, you are ready for the next step. This BP applies to both static checks and dynamic tests. The expected results are specific test cases for the units and the definition of static checks at unit level.
Base Practice 2: Develop Criteria for Unit Verification
The ASPICE process expects that criteria are defined to ensure that the unit does what is described in both the software detailed design and the non-functional requirements.
All work products are expected to be produced as described in the Software Unit Verification Strategy.
Criteria shall also be defined for the static checks, for example compliance with coding guidelines, a maximum cyclomatic complexity, or a minimum comment density.
You can set unit verification criteria generically for all units, or specifically for categories of units or for individual units. To keep the effort from getting out of hand, it is advisable to be conservative with generic definitions, since every unit must then fulfill them.
Pro-Tip: Coverage goals (e.g. code coverage) are usually not suitable as unit verification criteria. They are better used as test end criteria, determining when a test can be considered done.
For each test specification, Base Practice 6, "Ensure consistency", requires a content check between the test specification and the software detailed design. In most cases, this is done through quality assurance measures such as a review. The aim of this check is to demonstrate that the test cases correctly test the content of the linked requirements. It is explicitly expected that each review is documented.
BP2 may be downrated if missing or insufficient non-functional requirements (SWE.1) or software detailed design (SWE.3) are identified during the assessment.
In other words, if the preceding processes are incomplete, this base practice cannot receive a good rating either.
Base Practice 3: Perform static verification of software units
Using the criteria defined in Base Practice 2, static verification of software units should be performed in Base Practice 3.
The execution can take place through static analysis tools or through manual reviews.
Whether a check is successful or failed is determined by the criteria from BP2. The basis can be coverage criteria or compliance with maximum values (e.g. a maximum cyclomatic complexity of Y) or minimum values (e.g. a minimum of X lines of comments per lines of code).
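As a minimal sketch, assume BP2 defined a maximum cyclomatic complexity of 10 and a minimum comment density of 20% (both thresholds are illustrative assumptions, not Automotive SPICE requirements). A unit passing these static checks could look like this:

```c
#include <stdint.h>

/* Clamps a raw sensor value to the valid range [min, max].
 * Static verification (illustrative): two decision points give a
 * cyclomatic complexity of 3, well below the assumed limit of 10,
 * and the comment lines keep the comment density above the assumed
 * minimum of 20%.
 */
int16_t clamp_sensor_value(int16_t raw, int16_t min, int16_t max)
{
    if (raw < min) {
        return min; /* below range: clamp to lower bound */
    }
    if (raw > max) {
        return max; /* above range: clamp to upper bound */
    }
    return raw; /* within range: pass through unchanged */
}
```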
Base Practice 4: Test software units
Using the test specifications created in Base Practice 2, software unit tests are to be performed in Base Practice 4. It is expected that the tests will be performed as described in the software unit verification strategy.
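A minimal sketch of such a dynamic unit test, here written with the open-source Unity test framework; the unit under test, clamp_sensor_value(), and the test case and design IDs in the comments are illustrative assumptions:

```c
#include <stdint.h>
#include "unity.h"

/* Unit under test (see the static verification example above). */
extern int16_t clamp_sensor_value(int16_t raw, int16_t min, int16_t max);

void setUp(void) {}    /* executed before each test case */
void tearDown(void) {} /* executed after each test case */

/* TC-UNIT-017, derived from detailed design element DD-0042 */
static void test_value_below_range_is_clamped_to_min(void)
{
    TEST_ASSERT_EQUAL_INT16(-100, clamp_sensor_value(-500, -100, 100));
}

/* TC-UNIT-018, derived from detailed design element DD-0042 */
static void test_value_inside_range_is_unchanged(void)
{
    TEST_ASSERT_EQUAL_INT16(42, clamp_sensor_value(42, -100, 100));
}

int main(void)
{
    UNITY_BEGIN();
    RUN_TEST(test_value_below_range_is_clamped_to_min);
    RUN_TEST(test_value_inside_range_is_unchanged);
    return UNITY_END(); /* non-zero if any test failed */
}
```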
For Base Practice 3 and Base Practice 4 it is explicitly expected that all tests including results are recorded and documented. In case of anomalies and findings, it is expected that these are documented, evaluated and reported.
In addition, it is expected that all data are summarized in a meaningful way. Software unit verification generally produces a large amount of test data. The verification results of both manual and automated test execution should be prepared at multiple levels of detail. One solution is a meaningful summary, e.g. an aggregation of all test results in the form of a pie chart.
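A simple sketch of such an aggregation; the result values are hypothetical and in practice would come from the test tool's report:

```c
#include <stdio.h>

typedef enum { RESULT_PASSED, RESULT_FAILED, RESULT_ERROR } test_result_t;

int main(void)
{
    /* Hypothetical results of four unit test cases. */
    const test_result_t results[] = {
        RESULT_PASSED, RESULT_PASSED, RESULT_FAILED, RESULT_PASSED
    };
    unsigned passed = 0, failed = 0, error = 0;

    for (unsigned i = 0; i < sizeof results / sizeof results[0]; i++) {
        switch (results[i]) {
        case RESULT_PASSED: passed++; break;
        case RESULT_FAILED: failed++; break;
        case RESULT_ERROR:  error++;  break;
        }
    }
    /* One-line summary as input for reporting (BP7), e.g. a pie chart. */
    printf("Unit tests: %u passed, %u failed, %u error\n",
           passed, failed, error);
    return 0;
}
```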
Notes on the assessment for Base Practice 3 and Base Practice 4.
Deviations in the execution of verification compared to the software unit verification strategy (BP1) lead to a downgrade of BP3 or BP4.
For BP3 and BP4, a lack of meaningful summaries leads to a downgrade. If a test is only rated as passed/failed without additional information, an assessor will not rate the affected base practice better than "Partly". For automated software unit tests, a report that presents the stimulation of the unit and the calculated results can be considered sufficient additional information.
An assessor will want to see an example each for the assessment of BP3 and BP4. Specifically, they will use it to verify that findings are handled consistently with the Software Unit Verification Strategy and with SUP.9 Problem Resolution Management.
Base Practice 5: Establish Bidirectional Traceability
Bidirectional traceability is required in several places in Automotive SPICE; how you implement it is up to you. In this case, you are expected to link the requirements from the detailed design with the results of test cases and static checks, and the test cases in turn with the requirements from the detailed design.
In the simplest case, this can be done in tabular form (columns = test cases; rows = requirements). However, this implementation is very maintenance-intensive and error-prone.
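A minimal illustration of such a matrix; the requirement and test case IDs are hypothetical:

```
            TC-UNIT-017   TC-UNIT-018   TC-UNIT-019
DD-0042          x             x
DD-0043                                      x
```

Each "x" marks a link. The matrix must be updated whenever test cases or design elements change, which is why tool support pays off quickly.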
Pro-Tip: Use tools such as TPT, in which links are created as easily as possible and, ideally, a report is generated automatically. You can use this traceability report as an overview for consistency reviews (SWE.4 BP6), and in case of change requests you can analyze dependencies on test cases faster.
The assessor explicitly expects you to link test cases and requirements bidirectionally (BP5).
Base Practice 7: Summarize and communicate results
All unit verification results should be summarized and communicated to relevant parties. It is explicitly expected that there is evidence that the results have been reported. All types of communication media, such as letters, mails, videos, forum posts, etc. are accepted as evidence (as long as they are documented and thus traceable).
If SWE.4 BP3 and/or BP4 are rated "None" or "Partly", a downgrade of BP7 by the assessor must also be expected.
Identifying the relevant parties and their information needs is required by BP7 of the process ACQ.13 Project Requirements.
The ACQ.13 Project Requirements process is not reviewed as part of an Automotive SPICE Assessment. It is, however, good practice for a project not to ignore processes just because they are not assessed.
Automotive SPICE demands many activities and outcomes for quality assurance, and many of the required results must also be demonstrated in a verifiable way.
Knowing and applying these assessment rules increases the likelihood of achieving a good assessment result. Typically, a project reaches Level 1 after two years and Level 2 after another two years.
Experience shows that success is achieved most quickly when the team is willing to learn and works continuously to meet the requirements.