There is an ever-widening range of automotive electrical, electronic, and programmable electronic (E/E/PE) systems, such as advanced driver assistance systems, anti-lock braking systems, steering and airbags. Their increasing levels of integration and connectivity provide almost as many challenges as their proliferation, with non-critical systems such as entertainment systems sharing the same communications infrastructure as steering, braking and control systems. The net result is a need for exacting development processes.
Automotive SPICE®, first released in 2005 and revised regularly ever since, was developed in response to these challenges.
ASPICE (full form: Automotive Software Process Improvement and Capability dEtermination) is a set of process standards. It defines a structured mechanism for the definition, implementation, and evaluation of a process for software system development in the automotive industry, and for the measurement of software development process maturity.
ASPICE (Automotive SPICE®) provides a structured framework that defines specific process areas for software development, including requirements analysis, design, implementation, integration, and testing. Each of these activities is required to apply defined practices and produce verifiable work products, promoting consistency and quality across development teams.
One of the central tenets of ASPICE is traceability, requiring that all development artifacts—from requirements through to test results—are linked and controlled. This structure encourages thorough documentation, early defect detection, and systematic change management.
In addition to shaping the technical development process, ASPICE drives organizational maturity through its capability levels which assess how well an organization manages and improves its processes. Software teams are encouraged to adopt tools and workflows that support process discipline, traceability, and continuous improvement. Later versions have introduced guidance that supports Agile, DevOps, and integration with safety (ISO 26262) and cybersecurity (ISO/SAE 21434) standards.
ASPICE (Automotive SPICE®) has evolved through several iterations, each of which has impacted software developers to some degree as detailed in the following table:
| Version | Release Year | Key Updates / Significant Changes |
|---|---|---|
| v1.0 | 2005 | Initial release, based on the ISO/IEC 15504 (SPICE) series of standards |
| v2.3 | 2011 | |
| v3.0 | 2015 | Realignment from ISO/IEC 15504 to the ISO/IEC 330XX series as the underlying assessment framework |
| v3.1 | 2017 | Clarifications and refinements to v3.0 |
| v4.0 | 2023 | Scope extended beyond software; SWE.1–SWE.6 refined; enhanced support for Agile and DevOps; alignment with ISO/SAE 21434; new validation process |
Automotive SPICE® 4.0 was released in late 2023, superseding Automotive SPICE® 3.1.
ASPICE 4.0 expands the scope of software engineering to reflect evolving industry practice, for example by including enhanced support for Agile and DevOps methodologies. It refines the six core software development activities (SWE.1–SWE.6) for greater clarity, consistency, and traceability, with clearer expectations for work products. It places stronger emphasis on cybersecurity by aligning with ISO/SAE 21434 and introduces a new validation process to better distinguish between system and software validation.
These updates support a more robust, secure, and adaptable approach to automotive software development while aligning better with prevalent industry practices.
Automotive SPICE® (Software Process Improvement and Capability dEtermination) is developed and maintained by the AUTOSIG (Automotive Special Interest Group). This group consists of the SPICE User Group, the Procurement Forum, and representatives of automotive manufacturers including Audi, BMW, Daimler, Fiat, Ford, Jaguar, Land Rover, Porsche, Volkswagen, and Volvo.
SPICE (Software Process Improvement and Capability dEtermination) was initially developed in the early 1990s as part of an international effort to create a standardized framework for assessing and improving software development processes across industries.
The resulting ISO/IEC 15504 “Information technology – Process assessment” series of technical standards, which focused on software development processes and related business management functions, collectively defined the SPICE framework. It formally emerged as an international standard in 2003/2004.
The ISO/IEC 15504 series of standards has now been superseded by the ISO/IEC 330XX series.
As stated above, ASPICE was originally based on the ISO/IEC 15504 series of standards. Its successor, the ISO/IEC 330XX series, provides a more modern and flexible foundation for process assessment and improvement, and ASPICE remains conformant with it.
ISO/IEC TS 33061 “Information technology — Process assessment — Process assessment model for software life cycle processes” supersedes ISO/IEC 15504-5, which was a part of the standards series on which ASPICE was based. ISO/IEC TS 33061 defines a process assessment model for software life cycle processes, conformant with the requirements of ISO/IEC 33004, for use in performing a conformant assessment in accordance with the requirements of ISO/IEC 33002.
ASPICE defines a structured mechanism for the definition, implementation, and evaluation of a process for software system development in automotive applications, and for the measurement of software development process maturity.
The ISO 26262 “Road vehicles – Functional safety” standard series takes a slightly different perspective. By focusing on functional safety in the automotive sector, it also promotes the development of high-quality software, but with the specific aim of ensuring that developments are adequately safe, with due consideration of the risk involved should they fail. ISO 26262 is now the de facto automotive functional safety standard, adhered to across almost all road vehicle product development.
There is overlap between the scope of ASPICE and that of ISO 26262, and often a requirement to comply with both. In practice, development teams often use ASPICE to structure their software development processes, and ISO 26262 to ensure those processes meet functional safety requirements. The two standards are often mapped together internally via process compliance matrices, so that ASPICE assessments and ISO 26262 audits complement each other without duplication.
Just as ISO 26262 complements ASPICE from a functional safety perspective, the ISO/SAE 21434 “Road vehicles – Cybersecurity engineering” standard takes a complementary approach by focusing specifically on the identification, assessment, and mitigation of cybersecurity risks throughout the vehicle software development lifecycle. Its aim is not only to ensure robust software development, but also to ensure that vehicle systems are resilient to malicious attacks and unintentional cybersecurity threats.
There is increasing convergence between the scope of ASPICE and ISO/SAE 21434, particularly where the secure design, implementation, and verification of software and electronic systems are concerned. In practice, development teams may use ASPICE to establish and assess process maturity, while applying ISO/SAE 21434 to ensure cybersecurity considerations are systematically addressed across the development lifecycle.
As with ISO 26262, organizations often align the two standards via process mappings or compliance matrices. This allows cybersecurity-related work products, risk assessments, and threat analyses defined in ISO/SAE 21434 to be linked to the relevant engineering and support processes defined in ASPICE. The result is a development process that is both mature and cybersecure, with ASPICE assessments and ISO/SAE 21434 audits complementing one another without redundancy.
It naturally follows that the approach can be extended to create a development process that embraces the principles of all three standards – ASPICE, ISO 26262, and ISO/SAE 21434.
Although Agile SPICE was developed when ASPICE 3.x was current, its principles and mappings remain pertinent and so it is still fully applicable under ASPICE 4.0. Agile SPICE maps common Agile practices—such as sprints, backlogs, user stories, and retrospectives—to SPICE (and therefore ASPICE) base practices and work products. This helps Agile teams demonstrate compliance without needing to change their development style.
The concept underpinning ASPICE is that the continuous and ongoing refinement of software development processes will similarly enhance the quality of the resulting code. The analyses and metrics associated with the framework are equally well suited to product development organisations looking to improve their quality, and to purchasers looking to establish the credentials of their suppliers.
The ASPICE Process Assessment Model (PAM) is made up of the ASPICE process dimension and ASPICE Capability Levels (CL0 to CL5). It can be visualized as a two-dimensional matrix, where one axis describes a collection of desirable actions, and the other indicates how well and how completely those actions are performed.
In ASPICE 4.0, the PAM is no longer exclusive to software, putting greater emphasis on system-level processes. That said, software does remain a core focus.

The Process Dimension axis of this PAM matrix is called the Process Reference Model. It is composed of Process Categories which include Process Groups, and these in turn group together individual processes. Each process has a purpose and outcomes, and base practices and work products contribute to achieving one or more outcomes.
The Capability Dimension axis is called the Process Measurement Framework. The Capability Levels are further subdivided into Process Attributes (PA). In practice, the processes themselves consist of the activities to be performed during the software development lifecycle (while noting the earlier reference to system-level activities).
ASPICE describes several Software Engineering Processes, known as SWEs. In ASPICE v4, these are no longer shown as a sector-specific interpretation of the V-model lifecycle, although the notion of an ASPICE V-cycle or ASPICE V-model still holds true, requiring a testing phase corresponding to each development phase.

This slight shift in emphasis results from the realignment with the ISO/IEC 330XX series, with its greater emphasis on system/software co-engineering, traceability, and iterative or incremental development.
Each of these SWE processes is broken down in the standard into several base practices (BPs), with verification and validation playing a significant part.
For example, the base practices associated with software unit verification are represented in TBmanager as shown below.

The level of thoroughness and expertise applied to each of these activities can vary enormously, and with it the level of quality assurance. For example, consider the testing of software units (BP4). The standard requires the team to “Test software units using the unit test specification according to the software unit verification strategy. Record the test results and logs.”
The standard provides for the assessment of Capability Levels associated with each of the processes, and the comprehensiveness of the implementation of each base practice contributes to that assessment. However, that is not to be confused with the required level of rigor for a particular verification or validation task, which is more likely to be within the remit of functional safety or security standards.
Notwithstanding those considerations, ASPICE assessment ratings are based on an assessment of the thoroughness and completeness afforded to each Process. Assessment will result in specific ASPICE levels that demonstrate a level of compliance and competence for comparison purposes. The ASPICE capability scoring ranges from 0 to 5:

- Level 0: Incomplete process
- Level 1: Performed process
- Level 2: Managed process
- Level 3: Established process
- Level 4: Predictable process
- Level 5: Innovating process
The measurement of process adherence reflected in the ASPICE levels applies in tandem with the level of thoroughness and expertise applied to each base practice activity. That variation is reflected in most functional safety standards (IEC 61508, ISO 26262, EN 50128…) and cybersecurity standards (IEC 62443, ISO/SAE 21434…), such that the more critical applications demand more rigorous analysis.
Consider again the testing of software units (BP4). The ASPICE standard expands on that a little: “Test software units using the unit test specification according to the software unit verification strategy. Record the test results and logs.”
The thoroughness of unit testing can vary considerably, ranging from simple checks of nominal behaviour, through requirements-based testing with boundary value analysis, to tests demonstrating structural coverage at the level of rigor the project demands. Decisions about which of these practices should apply, and under what circumstances, form part of the software unit verification strategy defined under BP1, and will need to reflect the demands of any applicable functional safety standard.
This is not to be confused with the assessment of Capability Levels associated with each of the ASPICE processes. It would be possible, for example, to have a Level 5 process that specifies relatively rudimentary unit tests for non-critical software.
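To make that distinction concrete, consider a hypothetical C unit with a compound decision (the function and thresholds below are invented purely for illustration). A single happy-path test achieves full statement coverage, but a strategy demanding branch or MC/DC coverage forces additional test cases that exercise each condition independently.

```c
#include <stdint.h>

/* Hypothetical unit: request braking assistance when the pedal is pressed
   hard AND either the vehicle is moving quickly or ABS is already active. */
int32_t assist_required(int32_t pedal_pct, int32_t speed_kph, int32_t abs_active)
{
    int32_t assist = 0;

    if ((pedal_pct > 80) && ((speed_kph > 50) || (abs_active != 0)))
    {
        assist = 1;
    }
    return assist;
}

/*
 * One test vector (pedal=90, speed=60, abs=0) executes every statement,
 * giving 100% statement coverage. Branch coverage additionally requires a
 * vector for which the decision is false, and MC/DC requires vectors
 * demonstrating that each of the three conditions can independently
 * change the outcome.
 */
```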
The practices and processes defined within the standard align with the eight primary software verification tasks supported by the LDRA tool suite: traceability verification and process standard objective management, static analysis (design, code, and quality reviews), unit testing, target testing, test verification (code coverage) and test management. Focus on all these key areas is required to achieve an organization’s software development and maintenance goals.
The diagram below superimposes the functionality of the LDRA tool suite on the Traceability and consistency diagram shown in Automotive SPICE® Annex C.5.

The purpose of the software requirements analysis process is to transform the software related parts of the system requirements into a set of software requirements.
The products of this phase potentially include CAD drawings, spreadsheets, textual documents, and many other artefacts, and clearly a variety of tools can be involved in their production. Automating the management of the status of each of those elements and maintaining traceability between them and subsequent phases can address a project management headache. Such an approach also aligns well with supporting process groups like SUP.10 (Change Management) and SUP.1 (Quality Assurance).
The ideal requirements management tool depends largely on the scale of the development. If there are few developers in a local office, a simple spreadsheet or Microsoft Word document may suffice. Bigger projects, perhaps with contributors in geographically diverse locations, are likely to benefit from an Application Lifecycle Management (ALM) tool such as IBM® Engineering Requirements Management DOORS® Family, or Siemens Polarion ALM. Each of these can be integrated with the LDRA tool suite using its TBmanager component.

The purpose of the software architectural design process is to establish an architectural design, identify which software requirements are to be allocated to which elements of the software, and to evaluate the software architectural design against defined criteria.
There are many tools available for the generation of the software architectural design, with graphical representation of that design becoming an increasingly popular approach. Appropriate tools include those exemplified by IBM® Engineering Systems Design Rhapsody®, MathWorks Simulink and Ansys SCADE Suite.
The purpose of the software detailed design and unit construction process is to provide an evaluated detailed design for the software components, and to specify, implement, and verify the software units.
The standard requires that techniques used during this phase are appropriate, justified, and contribute to code reliability, testability, and maintainability. For example, the use of language subsets such as MISRA C fits those criteria by restricting the use of a programming language to those elements known to be least susceptible to causing problems.
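As a brief, hedged illustration of the principle (the function below is invented, and the rule reference is to MISRA C:2012), a checker would typically flag a switch statement with no default clause, because out-of-range inputs are then handled by an implicit, unreviewed path:

```c
#include <stdint.h>
#include <stdio.h>

/* Illustrative only: a switch with no default clause is flagged by
   MISRA C:2012 Rule 16.4 ("Every switch statement shall have a
   default label"); unexpected gear values take an unreviewed path. */
static int32_t gear_ratio_noncompliant(int32_t gear)
{
    int32_t ratio = 0;
    switch (gear)
    {
        case 1: ratio = 35; break;
        case 2: ratio = 21; break;
    }
    return ratio;
}

/* Compliant alternative: every input is handled explicitly, which makes
   the unit easier to review, test, and trace to its detailed design. */
static int32_t gear_ratio_compliant(int32_t gear)
{
    int32_t ratio;
    switch (gear)
    {
        case 1:  ratio = 35; break;
        case 2:  ratio = 21; break;
        default: ratio = 0;  break;  /* defined fallback for invalid input */
    }
    return ratio;
}

int main(void)
{
    printf("%ld %ld\n", (long)gear_ratio_noncompliant(3), (long)gear_ratio_compliant(3));
    return 0;
}
```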

Bidirectional traceability between software detailed design and software units is a key objective of the standard. Automating the fulfilment of ASPICE traceability reduces both management overhead and the potential for error, particularly when unanticipated changes arise. In such circumstances, impact analysis reports help to quantify the overhead associated with such changes and ensure that they are implemented in full.
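Tooling support varies, but one common convention, shown here as a hypothetical sketch rather than an LDRA-specific mechanism, is to tag software units and their tests with requirement and design identifiers so that trace links and impact analysis reports can be generated automatically. The identifiers and scaling below are invented for the example.

```c
#include <stdint.h>

/*
 * Traces to: SWREQ-042  (hypothetical software requirement)
 *            SWDD-042.3 (hypothetical detailed design element)
 * A traceability tool can harvest tags like these to maintain the
 * bidirectional links between design, code, and test, and to report
 * which units and tests are affected if SWREQ-042 changes.
 */
uint16_t scale_sensor_reading(uint16_t raw_counts)
{
    /* SWDD-042.3: convert raw ADC counts to tenths of a degree Celsius. */
    return (uint16_t)((raw_counts * 5U) / 8U);
}
```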

The purpose of the software unit verification process is to provide evidence for compliance of the software units with the software detailed design and with the non-functional software requirements.
Each developed software unit needs to be tested with reference to the software detailed design. Test procedures need to be authored, reviewed, and executed on the target hardware and/or simulated environment to confirm that each software unit behaves as specified and does not produce unintended behaviour.
Actual outputs are captured and compared with the expected results, pass/fail results are reported, and requirements are validated accordingly.
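The mechanics are typically automated by a unit test tool, but the underlying idea can be sketched in plain C. The unit and threshold below are hypothetical, invented purely to show the input/expected-output comparison described above.

```c
#include <stddef.h>
#include <stdint.h>
#include <stdio.h>

/* Hypothetical software unit under test: report whether a measured
   temperature exceeds a warning threshold (both in degrees Celsius). */
static int32_t over_temperature(int32_t temp_c, int32_t threshold_c)
{
    return (temp_c > threshold_c) ? 1 : 0;
}

/* Minimal harness: drive the unit with the inputs from the unit test
   specification, capture actual outputs, and compare with expectations. */
int main(void)
{
    const struct { int32_t temp; int32_t threshold; int32_t expected; } cases[] = {
        {  90, 100, 0 },  /* comfortably below the threshold */
        { 100, 100, 0 },  /* boundary: equal is not "over"   */
        { 101, 100, 1 },  /* just above the threshold        */
    };
    int failures = 0;

    for (size_t i = 0u; i < (sizeof cases / sizeof cases[0]); i++)
    {
        const int32_t actual = over_temperature(cases[i].temp, cases[i].threshold);
        if (actual != cases[i].expected)
        {
            printf("case %u: FAIL (expected %ld, got %ld)\n",
                   (unsigned)i, (long)cases[i].expected, (long)actual);
            failures++;
        }
    }
    printf("%s\n", (failures == 0) ? "PASS" : "FAIL");
    return failures;
}
```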

The TBrun component of the LDRA tool suite automates the unit test process by exposing the software interface at the function scope, allowing the user to enter inputs and expected outputs. The tool suite then generates a test harness, which is compiled and executed on the target hardware. Actual outputs are captured, along with structural coverage data, and then compared with the expected outputs specified in the test cases.
These static and dynamic analyses can be integrated with several different model-based development tools, such as IBM® Engineering Systems Design Rhapsody®, MathWorks Simulink, and Ansys SCADE Suite, among many others. The development phase itself involves the creation of the model in the usual way, with the integration becoming more pertinent once source code has been auto-generated from that model.
Model-based development offers many advantages to automotive software developers, and many modelling tools now include integrated model and auto-generated code testing features. However, ASPICE 4.0 focuses on process outcomes, meaning that model-based tools must produce artefacts that support those outcomes—such as design documentation, traceability links, and verification evidence. In this context, an automated testing approach that is integrated with the modelling tool but independent of it can help address concerns about systemic faults.
For example, IBM Engineering Systems Design Rhapsody can be deployed using an approach appropriate for use with “Back-to-back” testing. Design models are developed with Rhapsody and verified using Rhapsody Test Conductor. Then, code is generated from Rhapsody, instrumented by the LDRA tool suite, and executed in Software-In-the-Loop (SIL or host), or Processor-In-the-Loop (PIL or target) mode. Structural coverage is then collected, and structural coverage reports can be generated at the source code level.

Static analysis of the generated source code using the TBvision component of the LDRA tool suite can ensure compliance with an appropriate coding standard, such as MISRA C:2025, with Appendix E offering guidance on compliance for generated code. Additional dynamic testing can be performed at the source level from within the LDRA tool suite. Requirements-based tests can be created to verify functionality and collate structural coverage. Test data can also be imported from Rhapsody into the LDRA tool suite for efficiency.
Real-time embedded systems based on auto-generated code usually also include some level of conventionally written code. Software for board support packages, interrupt handlers, drivers, and other lower-level code is typically hand-coded. Legacy code is almost always part of deployed systems. These portions of the system can be verified through traditional methods using the LDRA tool suite alongside the auto-generated code.
The purpose of the software integration and integration test process is to integrate the software units into larger software items up to a complete integrated software consistent with the software architectural design. It is also to ensure that the software items are tested to provide evidence for compliance of the integrated software items with the software architectural design, including the interfaces between the software units and between the software items.
The TBvision component of the LDRA tool suite contributes to the verification of the design by means of the control and data flow analysis of the code derived from it. This provides graphical representations of the relationship between code components for comparison with the intended design. A similar approach can also be used to generate a graphical representation of legacy system code, providing a path for future modifications to be integrated and verified in alignment with ASPICE-compliant development and verification processes.

Integration testing is designed to ensure that when the units are working together in accordance with the software architectural design, they meet the related specified requirements. Where practical, integration testing should be performed in environments representative of the target hardware to validate hardware-software interactions.
Within the LDRA tool suite, unit tests become integration tests as units are tested as part of a call tree, rather than in isolation. The same test data can be used to validate the code in both cases.
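A hedged sketch of that principle, with invented function names: during unit testing the callee is replaced by a stub whose outputs can be controlled, and during integration testing the same test data is re-run with the real callee linked in beneath the unit.

```c
#include <stdint.h>
#include <stdio.h>

int32_t read_wheel_speed(void);  /* lower-level unit in the call tree */

/* Unit under test: depends on the lower-level unit beneath it. */
int32_t vehicle_moving(void)
{
    return (read_wheel_speed() > 0) ? 1 : 0;
}

#ifdef UNIT_TEST_ISOLATION
/* Unit test build: the callee is stubbed so its output is controllable. */
static int32_t stub_speed_kph = 12;
int32_t read_wheel_speed(void) { return stub_speed_kph; }
#else
/* Integration test build: the real callee is linked into the call tree
   (a trivial placeholder implementation is shown here). */
int32_t read_wheel_speed(void) { return 12; }
#endif

int main(void)
{
    /* The same test expectation applies in both configurations. */
    printf("vehicle_moving() = %ld (expected 1)\n", (long)vehicle_moving());
    return 0;
}
```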
The inputs and expected outputs defined in the test cases are typically derived from requirements to ensure intended functionality is verified. Various other forms of tests including negative tests, fault injection and robustness tests are available using the same mechanism.
The analysis of boundary values can be automated using the “extreme test” capabilities provided by TBextreme and TBextremePLUS. In the LDRA context, the term “extreme testing” refers to the ability of LDRA’s unit test tools to create a sequence of test cases (“extreme tests”), automatically creating the associated test vectors for each of those test cases.
TBextreme provides two types of extreme test. Standard extreme tests create one test per function in the code under test, whereas tabular extreme tests create several tests per function.
TBextremePLUS is an enhancement to TBextreme and extends its capabilities by providing additional configuration options and more functionality. For example, automated Requirements Based Testing (RBT) is achieved through the application of user-defined test values, dictated by project requirements, which TBextremePLUS leverages to create appropriate test cases.
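The vectors generated in practice depend on the interface of the code under test, but the idea behind boundary value analysis can be shown with a hypothetical example (function and range invented for illustration): for an input specified as 0 to 100, a boundary-oriented set of vectors sits at, just inside, and just outside each limit.

```c
#include <stddef.h>
#include <stdint.h>
#include <stdio.h>

/* Hypothetical unit: clamp a requested duty cycle to its valid 0..100 range. */
static int32_t clamp_duty_cycle(int32_t requested_pct)
{
    if (requested_pct < 0)   { return 0;   }
    if (requested_pct > 100) { return 100; }
    return requested_pct;
}

int main(void)
{
    /* Boundary-value-style vectors: at, just inside, and just outside
       each limit of the specified input range. */
    const int32_t vectors[] = { -1, 0, 1, 99, 100, 101 };

    for (size_t i = 0u; i < (sizeof vectors / sizeof vectors[0]); i++)
    {
        printf("input %4ld -> output %3ld\n",
               (long)vectors[i], (long)clamp_duty_cycle(vectors[i]));
    }
    return 0;
}
```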
The purpose of the Software Qualification Test Process is to ensure that the integrated software is tested to provide evidence for compliance with the software requirements.
ASPICE is often illustrated using the V-model to highlight verification and traceability relationships. However, in practice, development is rarely strictly sequential — changes in requirements or test failures often trigger updates that must be traced and managed carefully.
Consider what happens if there is a code change in response to a failed integration test, or where there is a change in a customer requirement. Such scenarios can quickly lead to situations where the traceability between the products of software development breaks down if the integrity of each development task and the artefacts generated by it is not maintained. A sequence of similar issues can ultimately lead to a situation where the completed project does not fulfil functional, functional safety, or cybersecurity requirements.
For that reason, ASPICE incorporates the principle of bidirectional traceability, requiring ongoing maintenance and integrity of the artefacts generated throughout the development lifecycle. Adherence to this principle ensures not only that the delivered system accurately reflects the requirements of the stakeholders, as confirmed during software qualification testing, but also that there is no superfluous code within the system – important from both a safety and a security perspective.
ASPICE represents best practice in the development of high-quality software for the automotive industry. A key concept of the standard is its capability dimension: a set of structured levels that describe how well the behaviours, practices, and processes of an organization can reliably and sustainably produce required outcomes.
These practices and processes can be represented by the preferred process model (V-model, Agile…) enhanced by bidirectional traceability to ensure that each phase of development always accurately reflects the one before it. The development and verification and validation processes required for each phase are broken down by the standard, but there is no provision in ASPICE for the variation in thoroughness proportional to the functional safety demanded of an application.
Leveraging ISO 26262 in tandem with ASPICE addresses that issue, and the principle can be extended to cybersecurity too, by leveraging ISO/SAE 21434 and integrating its requirements and processes into the overarching software development lifecycle.
Email: info@ldra.com
EMEA: +44 (0)151 649 9300
USA: +1 (855) 855 5372
INDIA: +91 80 4080 8707