Systems analysis in IT
Systems Analysis in IT (Systems Analysis and Design) is an approach to designing and developing information systems from concept to operation, encompassing needs identification, requirements formalization, domain and process modeling, and the evaluation of alternatives and risks. In other words, systems analysis in IT is the development phase where specialists study a problem, define what the system should do, and design solutions for its creation.
Classical systems analysis covers a broad range of applications beyond software development, including organizational change, strategy, and other domains.
Subject and Objectives of Systems Analysis in IT
The subject of systems analysis in IT is an information system (software product and/or service) throughout its entire lifecycle—from concept and justification to implementation and operation.
The objective of systems analysis is to transform business needs into a consistent and verifiable set of requirements and architectural solutions: to identify and document stakeholder goals and constraints, formalize requirements, model the domain and processes, assess the feasibility and risks of alternatives, and justify the chosen architecture. The result is a set of agreed-upon documents and traceable links between requirements, design decisions, and tests. This ensures manageability and control over the development process.
Key tasks in systems analysis include:
- Identifying stakeholder needs and goals. The analyst gathers and refines the expectations of customers, users, and other interested parties using interviews, surveys, observation, and analysis of current processes. The output is an initial requirements specification that distinguishes between functional ("what the system must do") and non-functional requirements (reliability, performance, security, etc.).[1][2]
- Formalizing and documenting requirements. Requests are transformed into verifiable requirements. A well-formed requirement must be clear and unambiguous, complete, consistent, verifiable, and traceable to higher-level goals; the set of requirements must be coherent and integral.[3][4] In practice, standardized documents are used, such as the SRS (Software Requirements Specification) per ISO/IEC/IEEE 29148, and in some industries, the URS (User Requirements Specification) and functional specifications.[5][6][7]
- Analyzing and modeling the system. To understand how the system will work and interact with the external world, models are built: Use Case diagrams for usage scenarios, DFDs for data flows and business processes, class/component diagrams, etc. These models serve as a basis for comparing alternative solutions and architectures.[8][9][10]
- Assessing feasibility and selecting a solution. A feasibility study is conducted (technical, organizational, economic, and schedule feasibility), and architectural alternatives are compared through trade-off analysis. To evaluate architecture quality against attributes such as performance, scalability, and modifiability, methods like ATAM (Architecture Tradeoff Analysis Method) are used.[11][12] The choice between, for example, a monolithic and a microservices architecture[1] is based on explicit trade-offs (operational complexity vs. independent scalability and delivery speed) according to industry guides.[13][14]
- Preparing project artifacts. The analysis phase produces:
  - An approved requirements specification (indicating the importance of each requirement).
  - A conceptual model of the system (diagrams/descriptions).
  - Architectural and design decisions (data schemas, external system interfaces).
  - An implementation plan (phases/modules).
- It is critical to ensure bidirectional traceability of requirements to design elements and tests.[15][4]
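The traceability links described above can be represented in code. The following Python sketch, using hypothetical requirement, design, and test identifiers, shows how a simple check can surface requirements that lack links to design elements or tests:

```python
from dataclasses import dataclass, field

@dataclass
class Requirement:
    req_id: str                  # unique identifier, e.g. "REQ-12" (hypothetical)
    text: str
    goal_id: str                 # higher-level business goal this requirement traces to
    design_ids: list = field(default_factory=list)  # linked design elements
    test_ids: list = field(default_factory=list)    # linked test cases

def untraced(requirements):
    """Return IDs of requirements lacking a link to a design element or a test."""
    return [r.req_id for r in requirements if not r.design_ids or not r.test_ids]

reqs = [
    Requirement("REQ-1", "The system shall export reports as PDF",
                goal_id="GOAL-3", design_ids=["CMP-7"], test_ids=["TC-21"]),
    Requirement("REQ-2", "Export shall finish within 5 s for 100 pages",
                goal_id="GOAL-3", design_ids=["CMP-7"], test_ids=[]),
]
print(untraced(reqs))  # ['REQ-2'] — REQ-2 has no test yet
```

In practice this bookkeeping lives in an ALM tool rather than hand-written code, but the underlying check — every requirement linked forward to design and tests — is the same.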
The success of an IT project largely depends on mature practices for managing requirements and architecture. A study by McKinsey and Oxford found that large IT projects often exceed their budget and timeline. The study also highlighted the importance of proper strategy management, stakeholder engagement, and competent requirements gathering, all of which can significantly impact a project's success or failure.[16]
Approaches and Methodologies in Systems Analysis for IT
Systems analysis in IT relies on systems thinking principles and methodologies adapted for software development. In practice, it combines "hard" and "soft" approaches, structured methodologies, object-oriented notations, and process and requirements modeling languages.
- Hard and soft approaches. In IT projects, a hard systems approach involves pre-formalized goals and requirements, decomposition, and top-down design. A soft systems approach is used when goals are unclear and multiple viewpoints exist. It employs elements of Soft Systems Methodology (SSM) (e.g., rich pictures, root definitions, CATWOE) to align understanding of the problem and desired changes; the results are then translated into formal requirements.[17][18]
- SSM (Soft Systems Methodology). Originally developed by Peter Checkland for organizational change, SSM is useful in the pre-project stages of IT: from investigating the problem situation and formulating root definitions (including via CATWOE) to comparing conceptual models with reality and achieving accommodation among stakeholders.[19][20]
- Structured methodologies: SADT/IDEF0. SADT models a system as a hierarchy of functions; the standard IDEF0 notation (IEEE 1320.1) captures functions and their I-C-O-M interfaces (Inputs, Controls, Outputs, Mechanisms). This method is useful for functional decomposition and agreeing on system boundaries, independent of algorithms.[21][22]
- Object-oriented analysis: UML and SysML (MBSE). UML has become the standard language for requirements and design (use case, class, sequence diagrams, etc.) and facilitates scenario validation with users. SysML extends UML for systems engineering (requirements diagrams, parametric diagrams) and is based on the MBSE approach, where the model is the central artifact across all stages, from requirements to testing.[23][24][25]
- Business process modeling: BPMN. The BPMN standard is used for graphically describing processes (pools, workflows, events, gateways), including comparing as-is/to-be states in requirements specifications and integrations.[26][27]
- Relation to requirements engineering. The process includes elicitation–analysis–specification–validation–change management stages. The criteria for a "good requirement" and the structure of an SRS are regulated by ISO/IEC/IEEE 29148. For prioritization, techniques like MoSCoW (Must/Should/Could/Won’t) and multi-criteria decision-making methods like AHP are used. In agile processes, systems analysis activities are reflected in backlog refinement and requirements traceability.[28][29][30][31]
- Relation to systems engineering. For complex (cyber-physical) systems, the V-model is applied: the "left" branch covers systems analysis and architecture, while the "right" branch covers integration, verification, and validation, linked to the artifacts from the left branch. Methods for evaluating architecture based on quality attributes include ATAM (trade-off analysis).[32][33]
Systems analysis in IT combines proven approaches—from soft methodologies for vision alignment to formal notations and standards. The choice of tools is determined by the problem's level of certainty: high uncertainty calls for a greater role for SSM and facilitation, while clearly defined boundaries favor formal models (UML/SysML, IDEF0, BPMN) and regulations.
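As a small illustration of the prioritization techniques mentioned above, the following Python sketch (requirement IDs are hypothetical) groups backlog items into MoSCoW buckets:

```python
from collections import defaultdict

MOSCOW_ORDER = ["Must", "Should", "Could", "Won't"]

def bucket_by_moscow(requirements):
    """Group (req_id, priority) pairs into MoSCoW buckets, preserving input order."""
    buckets = defaultdict(list)
    for req_id, priority in requirements:
        if priority not in MOSCOW_ORDER:
            raise ValueError(f"Unknown MoSCoW priority: {priority}")
        buckets[priority].append(req_id)
    # Return buckets in Must → Should → Could → Won't order
    return {p: buckets[p] for p in MOSCOW_ORDER}

backlog = [("REQ-1", "Must"), ("REQ-2", "Could"), ("REQ-3", "Must"), ("REQ-4", "Should")]
print(bucket_by_moscow(backlog))
```

The real value of MoSCoW lies in the negotiation that assigns the labels; the bucketing itself is trivially mechanical, as the sketch shows.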
Relation to IT Architecture and Enterprise Architecture
Systems analysis in IT projects is closely linked to architectural design. The roles of the analyst and architect overlap: the analyst formulates requirements and the logical model, while the architect defines the target solution structure and technical trade-offs; they work collaboratively.
- IT systems architecture. In a narrow sense, software architecture is the organization of components, their relationships, and the principles guiding the solution's design. It is important for the analyst to consider architectural styles (layered, client-server, microservices, event-driven, etc.), as non-functional requirements (reliability, scalability, modifiability) often drive architectural decisions and their trade-offs.[34][35] At an early stage of analysis, an architectural vision is formed and a preliminary solution outline is developed to test the viability of the requirements (iteration length and detail depend on the methodology).[36]
- Patterns and preliminary decisions. Architectural patterns are used to satisfy non-functional requirements. For example, for asynchronous interaction and loose coupling, a publish-subscribe pattern via a message broker is used in an event-driven architecture.[37]
- TOGAF (The Open Group Architecture Framework). One of the most common enterprise architecture frameworks, it includes the ADM (Architecture Development Method) and artifacts for managing architecture (repository, catalogs/matrices, principles). In TOGAF, requirements management is a cross-cutting process integrated into all phases of the ADM.[36] To support requirements and traceability, catalogs and matrices are used (e.g., requirements ↔ services, functions ↔ components), and a distinction is made between Architecture Building Blocks and Solution Building Blocks.[38][39] Enterprise principles and standards are recorded in corresponding catalogs and act as external non-functional requirements for project teams.[40] The compliance of solutions with the target architecture is confirmed through an Architecture Compliance Review.[41] The TOGAF approach involves a preliminary Architecture Vision followed by detailed design (data/applications/technology) with a migration plan and management of requirements changes.[42][43]
- Zachman Framework. An early and influential ontology for enterprise architecture artifacts, presented as a 6×6 matrix (perspectives × aspects: "what/how/where/who/when/why"). The "designer" row corresponds to systems analysis and design; the columns define the completeness of consideration for data, functions/processes, roles, location, and motivation. The framework serves as a classification (not a methodology) and helps ensure a complete description of the solution within the enterprise landscape.[44]
- Relation to Enterprise Architecture (EA). The systems analyst works within the context of EA: new requirements are traced to business capabilities and the operating model; enterprise standards and key constraints (security, compatibility, etc.) are applied.[45][36] During the initiation phase, an Architecture Vision is formed (goals/constraints, high-level requirements). The analyst then details these requirements while maintaining traceability to the vision and corporate standards. Non-compliance with standards is identified during architectural reviews and may lead to solution rework.[36][46]
In summary, systems analysis and architectural design form a chain of "requirements → architectural solutions → trade-offs on quality attributes." The choice of methods (styles/patterns, TOGAF artifacts, Zachman classification) is determined by the project's nature and the enterprise architecture framework.
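The publish-subscribe pattern mentioned above can be illustrated with a minimal in-process sketch; a real event-driven system would use a message broker such as Kafka or RabbitMQ, but the decoupling principle is the same:

```python
from collections import defaultdict

class MessageBroker:
    """Minimal in-process broker illustrating publish-subscribe decoupling."""
    def __init__(self):
        self._subscribers = defaultdict(list)

    def subscribe(self, topic, handler):
        """Register a handler for a topic."""
        self._subscribers[topic].append(handler)

    def publish(self, topic, event):
        # Publishers know only the topic name, never the concrete consumers —
        # this is the loose coupling the pattern exists to provide.
        for handler in self._subscribers[topic]:
            handler(event)

broker = MessageBroker()
received = []
broker.subscribe("order.created", lambda e: received.append(e))
broker.publish("order.created", {"order_id": 42})
print(received)  # [{'order_id': 42}]
```

New consumers can subscribe to `order.created` without any change to the publisher — the property that makes the pattern attractive when loose coupling is a stated non-functional requirement.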
Processes and Practices
Systems analysis is integrated throughout the entire software development and operations lifecycle, connecting business goals, architecture, and delivery. It includes pre-project investigation, approach selection, creation of verifiable artifacts, and defining requirements for reliability, performance, security, and maintenance. In a waterfall model, analysis is performed before design and implementation. In agile methods, it is a continuous, iterative process. In DevOps, it emphasizes operational goals. Regardless of the approach, analysis ensures traceability, change and risk management, documentation of architectural trade-offs, and compliance with regulatory constraints, making development predictable and manageable.
- Classic SDLC (Waterfall). The System Analysis & Requirements Definition phase precedes design and implementation; requirements are fixed in a detailed SRS as a basis for planning and contracts. This is effective in stable and regulated domains. The risks of "frozen" requirements are mitigated by SRRs/reviews and change management through a CCB.[47][48][49]
- Agile Methodologies. Analysis is continuous: instead of a final SRS, a product backlog of user stories with acceptance criteria is maintained and refined during backlog refinement. BDD (Given–When–Then) is applied. The risk of losing architectural integrity is offset by early architectural design and transparent traceability from requirements to implementation/tests.[50][51][52]
- DevOps and SRE. Frequent releases demand operational requirements "by default": automation, observability, and rollback capabilities. Non-functional requirements are formulated as SLOs/SLIs, and an error budget is managed. The backlog includes tasks for logs/metrics/traces/alerts. To achieve zero-downtime releases, patterns like blue/green deployment are used.[53][54][55]
- Requirements and Risk Management. Requirements in ALM have statuses and links to tasks/releases/defects. Version control, change impact analysis, and regular reprioritization are mandatory.[56][57]
- Quality Assurance (QA). Quality is built in at the requirements stage: reviews, "Three Amigos" sessions, an Acceptance Test Plan, and automated tests for acceptance criteria (BDD/ATDD).[58][59]
- Observability and Reliability. Requirements include SLAs/SLOs, MTTR, and MTBF with measurable goals and monitoring methods. These parameters come from business/operations and are incorporated into the architecture and reliability tests.[60][61]
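The SLO and reliability figures above lend themselves to simple arithmetic. The following sketch (all values illustrative) derives an error budget from an availability SLO and steady-state availability from MTBF/MTTR:

```python
def error_budget_minutes(slo, period_minutes):
    """Allowed downtime over a period for a given availability SLO."""
    return (1 - slo) * period_minutes

def availability(mtbf_hours, mttr_hours):
    """Steady-state availability from mean time between failures and mean time to repair."""
    return mtbf_hours / (mtbf_hours + mttr_hours)

# A 99.9% availability SLO over a 30-day month leaves about 43 minutes of error budget.
print(round(error_budget_minutes(0.999, 30 * 24 * 60), 1))  # 43.2
# An MTBF of 500 h with an MTTR of 2 h gives roughly 99.6% availability.
print(round(availability(500, 2), 4))  # 0.996
```

Framing downtime as a budget makes the trade-off explicit: each risky release spends some of the budget, and when it is exhausted, the backlog shifts toward reliability work.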
Metrics and Artifact Quality
Commonly accepted criteria are used to evaluate the work of a systems analyst and the quality of their deliverables. High-quality requirements and models are the foundation of a successful project and are managed throughout the entire lifecycle (elicitation → specification → verification/validation → change management). The basic quality attributes of requirements are established in standards like ISO/IEC/IEEE 29148 and (historically) IEEE 830.[3][62][1]
- Correctness — The requirement reflects a genuine need and is agreed upon by domain experts; confirmed through validation (review/inspection, prototypes, scenarios).[1][4]
- Completeness — All essential aspects and conditions are considered.
  - Completeness of an individual requirement: necessary details are specified (e.g., "the indicator turns red upon failure" rather than just "becomes red").
  - Completeness of the specification: scenarios/roles are covered and NFRs are defined; achieved through checklists and traceability to business goals; an independent completeness audit (QA/review) is useful.[3][63]
- Unambiguity — Formulations can be interpreted in only one way; supported by a glossary, templates like "the system shall do A when B if C", and examples; diagrams are accompanied by a legend. Verified by the four-eyes principle.[3][1]
- Consistency — Requirements do not contradict each other or external constraints; achieved through structuring, summary attribute tables, team reviews; checked against regulations/standards.[3][63]
- Verifiability/Testability — Achievement can be confirmed by a test/demonstration/analysis; non-verifiable statements are replaced with measurable criteria; metrics and acceptance criteria are predefined for NFRs.[3][63]
- Modifiability & Traceability — Unique IDs, logical structure ("one thought, one paragraph"), no duplicates; links are maintained "requirement ↔ source/goal/design/test"; a Requirements Traceability Matrix (RTM) is kept.[64][3]
- Ranking and Prioritization — A quality attribute of the requirements set; techniques like MoSCoW and MCDM (e.g., AHP) are used; prioritization in collaboration with the business impacts planning and risks.[65][66]
Requirements quality metrics (examples):[63][1]
- Requirements defect density (issues per 100 requirements);
- Number of changes after baselining;
- Coverage metrics: percentage of requirements with tests; percentage of requirements traced to business goals;
- Requirements stability (ratio of added/deleted to total requirements over a period);
- Specification size/complexity (average number of requirements per use case, depth of decomposition);
- Stakeholder satisfaction (survey).
In mature processes (e.g., CMMI Level 3+), requirements quality procedures are in place: formal reviews, audits for template compliance, and metrics collection/analysis.[67] In critical domains (avionics, aerospace, etc.), formal methods are used to enhance reliability.[68]
Common Mistakes
IT projects frequently suffer from systems analysis errors: incomplete and ambiguous requirements, contradictions, unclear boundaries, neglect of non-functional aspects, missed integrations, and belated security considerations. This leads to rework, delays, increased costs, and defects.
Typical problems, their consequences, and prevention methods:
- Incompleteness and missed requirements. Roles with special permissions, edge cases, and NFRs are often overlooked. Consequences: architectural rework and launch delays. How to avoid: checklists, "what if..." brainstorming sessions, early involvement of testers, traceability to business goals.[1][69]
- Vague, ambiguous formulations. Consequences: developers implement the "wrong thing," leading to customer dissatisfaction. How to avoid: measurable criteria, a glossary, templates like "A, when B, if C", peer review.[3][69]
- Contradictory requirements. Consequences: delays for clarification, rework during integration. How to avoid: structuring requirements, verifying business rules/regulations, conflict resolution sessions, consistency checks during reviews.[3][1]
- Gold plating syndrome — adding features nobody asked for. Consequences: scope creep, increased complexity, new points of failure. How to avoid: link every requirement to a goal/metric; in Agile, keep unnecessary items out of the backlog; fix the scope; see YAGNI.
- Excessive detail where unnecessary. How to avoid: separate what/why (requirements) from how (design/implementation); use design-free requirements where appropriate.[3]
- Lack of requirements management. Consequences: version confusion, implementing the "wrong thing." How to avoid: a single source of truth in an ALM tool, version history and statuses, RTM and change impact analysis; change management via a CCB.[64][1]
- Lack of user involvement. How to avoid: interviews, observation, prototypes, regular demos; explicit validation with stakeholders.[3][1]
- Prolonged "analysis paralysis". How to avoid: a "good enough" approach, iteration and timeboxing; launch an MVP/increment and adjust based on feedback.[1]
- Ignoring non-functional requirements. How to avoid: dedicate a section to NFRs (e.g., using FURPS+), define measurable criteria, and include them in test plans and architectural decisions.[1][3]
- Communication errors and the "human factor". How to resolve: develop interviewing and facilitation skills, remain neutral, document decisions and requirement sources (traceability to goals).[1]
Most problems boil down to the quality of formulations, completeness, and manageability of requirements. Applying standards like ISO/IEC/IEEE 29148 and SWEBOK practices (verifiability, traceability, iterativeness) significantly reduces the risk of schedule overruns and rework.[3][1]
Limitations
Despite its effectiveness in reducing uncertainty, systems analysis has its limitations:
- Reality is volatile and complex. It is impossible to account for all factors, especially in long-term projects. Some requirements will inevitably emerge only after the system is launched. The goal is to minimize surprises, but one must be prepared for change.
- Requirements depend on people. Business priorities, laws, and market conditions can change. Systems analysis captures the current state but cannot predict all external shifts. To adapt, requirements must be regularly updated, and work should be iterative.
- Users often don't know what they want until they see it. This is a well-known limitation, which prototyping and agile methodologies help overcome. Analysis on paper has its limits; feedback from working implementations is needed for accurate requirements.
- The balance between time and quality. Overly detailed analysis can become obsolete. In innovative fields, it is often better to quickly build a minimum viable product (MVP) and gather real-world data. Systems analysis is effective in stable domains, but its role is limited in research and development (R&D) projects.
- The human factor. Even the best methodologies cannot compensate for an incompetent analyst or an unavailable customer. It is crucial that all process participants are engaged and motivated.
Impact of Modern Technologies on Systems Analysis in IT
Systems analysis in IT is constantly evolving under the influence of technological innovations. The 21st-century analyst operates amidst explosive data growth, widespread AI adoption, rapid development cycles, and heightened attention to security. Successful systems analysis practice requires mastering new knowledge (Data Science, cybersecurity, cloud technologies) and flexibility in applying methods.
- Data and AI/ML: What is added to the analysis. For systems with AI, the goals and context of use, requirements for data sources and quality, and metrics for model trustworthiness (reliability, safety, explainability, privacy, fairness) are defined from the outset. TEVV (testing, evaluation, verification, validation) is planned, along with operational monitoring and safe model shutdown/decommissioning. These steps align with the GOVERN–MAP–MEASURE–MANAGE functions of the NIST AI Risk Management Framework and are reflected in the SRS, architecture, and verification/operations plans.[70]
- DevSecOps: 'Shift-left' and security by default. Integrating security into every stage of the CI/CD pipeline is becoming standard: automated checks (SAST/DAST), dependency and container scanning, deployment policies, and baseline observability. Trusted artifact registries and standardized hardened images are used, and zero-trust principles are applied. In systems analysis, pipeline control gates (conditions for passing stages), the link between requirements and security controls, and rules for promotion between environments (dev/test/stage/prod) are described in advance.[71]
- How documents (artifacts) are changing. Sections that are added or refined in key documents in the presence of Big Data, AI/ML, and DevSecOps:
- SRS / Requirements Specification: AI goals and context of use; data requirements (provenance, quality, ethical and legal constraints); model metrics (accuracy, reliability, response time); TEVV plan; transparency/explainability and privacy requirements; criteria for model shutdown/decommissioning.[70]
- Architecture and Design (Architecture, ADR): Threat modeling results; "security by default" measures (encryption, access control, secret management, principle of least privilege); restrictions on data/model usage; ADRs with risk and trade-off assessments.[71][70]
- Verification and Validation Plan (V&V / TEVV): Test scenarios for models and data; acceptance thresholds for quality metrics; monitoring for data/model drift; procedures for periodic reassessment and re-validation.[70]
- CI/CD Policies and Pipeline Gates: Automated SAST/DAST, SCA (dependencies), and container scans; signing and storing artifacts in trusted registries; rules for promotion between environments (dev/test/stage/prod) and conditions for blocking builds on check failures; requirements for observability by default.[71]
- Data and Model Management Plan: Source and lineage catalog; data quality and availability criteria; dataset/model versioning; (re)training schedule and bias control; access and storage policies; plan for secure model deactivation and data deletion, if required.[70]
- Operations and Observability (Ops/Runbook): AI trust metrics and SLOs; auditing and logging; alerts for degradation/anomalies; incident response plan; fallback/kill-switch for AI components; requirements for reporting and post-incident analysis.[70][71]
- End-to-end Traceability: Explicit links from "requirement ↔ control/check in the pipeline" and "requirement ↔ test/monitoring in operation" to demonstrably verify security and quality throughout the entire lifecycle.[71][70]
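Monitoring for data/model drift, mentioned in the V&V/TEVV items above, is often based on a distribution-comparison statistic. A minimal sketch using the Population Stability Index (the 0.2 threshold is a common rule of thumb, not a standard):

```python
import math

def psi(expected, actual, eps=1e-6):
    """Population Stability Index over pre-binned probability distributions.
    Values above ~0.2 are commonly treated as significant drift."""
    return sum((a - e) * math.log((a + eps) / (e + eps))
               for e, a in zip(expected, actual))

baseline = [0.25, 0.25, 0.25, 0.25]  # training-time feature distribution (illustrative)
live     = [0.40, 0.30, 0.20, 0.10]  # distribution observed in production

drift = psi(baseline, live)
print(drift > 0.2)  # True — flag the model for re-validation
```

In a real pipeline this check runs continuously over production data, and crossing the threshold triggers the reassessment and re-validation procedures described above.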
- The role of the systems analyst:
  - Manages the context and risks of AI (actors, usage scenarios, data assumptions and limitations).
  - Ensures "requirement ↔ security control in the pipeline" traceability.
  - Formulates verifiable non-functional requirements (security, transparency, observability) across the system's entire lifecycle.[70][71]
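The pipeline control gates described in this section can be expressed as explicit, checkable conditions. A minimal sketch with hypothetical gate names and thresholds (real pipelines encode these as CI policy, not application code):

```python
# Hypothetical gate thresholds — each one should trace back to a documented requirement.
GATES = {
    "sast_critical": 0,       # no critical static-analysis findings allowed
    "sca_high": 0,            # no high-severity vulnerable dependencies allowed
    "test_coverage_pct": 80,  # minimum test coverage to promote a build
}

def evaluate_gates(scan_results):
    """Return the names of failed gates; an empty list means the build may promote."""
    failed = []
    if scan_results["sast_critical"] > GATES["sast_critical"]:
        failed.append("sast_critical")
    if scan_results["sca_high"] > GATES["sca_high"]:
        failed.append("sca_high")
    if scan_results["test_coverage_pct"] < GATES["test_coverage_pct"]:
        failed.append("test_coverage_pct")
    return failed

results = {"sast_critical": 0, "sca_high": 1, "test_coverage_pct": 85}
print(evaluate_gates(results))  # ['sca_high'] — this finding blocks promotion
```

Writing gates as data rather than prose makes the "requirement ↔ control in the pipeline" link auditable: each threshold can carry the ID of the requirement that justifies it.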
Differences from Classical Systems Analysis
The term "Systems analysis" is historically broader than software development. Classical systems analysis is an approach to solving complex, interdisciplinary problems (social, economic, managerial), based on systems thinking and quantitative methods, typically to support management decisions. In IT, systems analysis refers to an applied discipline within software engineering focused on creating information systems.
Below are the key differences.
- Goals and object of analysis. Classical analysis addresses ill-structured, "fuzzy" problems and improves existing socio-technical systems (e.g., an urban transport network, a company's strategy, environmental policy). The object is a real-world system; the task is to help a decision-maker choose a course of action. For systems analysis in IT, the goal is to design and create a new information system or software product that meets requirements. The object is the system being designed; the focus is on the behavior and characteristics needed by users.
- Methodological foundations. Classical schools rely on systems thinking and often on mathematics. A hard systems approach involves problem formalization, quantitative criteria, and optimization (as in operations research). Soft systems methodologies acknowledge multiple viewpoints; an example is Soft Systems Methodology (SSM), where discussions and conceptual models are used to agree on desired changes. In IT, the foundation is engineering disciplines: requirements engineering, software design, and architectural frameworks. Standardized processes (ISO/IEC/IEEE 15288, 12207, 29148), UML/SysML notations, and change management practices are applied.
- Roles and artifacts. In classical analysis, the role of a "systems analyst" is often informal; the outputs are analytical reports, recommendations, mathematical models, and "what-if" scenarios. In IT, the role of an analyst (or business analyst) is formalized; they produce requirements specifications, system models (UML, ER), interface specifications, user stories, and backlogs—artifacts directly used by developers and testers.
- Lifecycle and process. Classical analysis has no single template: the steps depend on the problem (in SSM, from studying the situation to implementing changes). In IT, standard SDLCs are adopted: in the waterfall model, there is a distinct requirements analysis phase; in iterative and agile approaches, analysis is a continuous activity in every sprint. Modern practices (DevOps, CI/CD) extend the scope of analysis to operations, considering requirements for maintenance, observability, and updatability. In other words, systems analysis in IT is embedded in the development lifecycle, whereas classical analysis is more often conducted as a project-based or consulting activity.
The Systems Analyst
The Systems Analyst in IT is a specialist responsible for applying systems thinking to the design and development of information systems. This includes forming and validating requirements, modeling (UML/BPMN), aligning on architectural decisions, and ensuring integration. The role and qualification requirements in the Russian Federation are defined in a professional standard and federal educational standards.
The primary goal of this professional activity: To ensure that an IT service, automated system, automated information system, automated control system, software, information product, or tool (hereinafter referred to as the System) conforms to its environment, initial requirements and constraints, and the goals of automation and the automated activity by developing and delivering high-quality, interconnected design solutions to stakeholders and by launching and coordinating the work of individual performers throughout the entire lifecycle of the System (Professional Standard "Systems Analyst" (Order of the Ministry of Labor of the Russian Federation No. 367n of April 27, 2023)).
Glossary of Key Terms
Basic Concepts and Participants
- Systems Analysis in IT — A discipline whose subject is an information system throughout its entire lifecycle, from concept to operation.
- Stakeholders — Individuals or groups interested in or affected by a project (customers, users, managers).
- Project Artifacts — Documents and deliverables created during a project, such as specifications, models, plans, and decisions.
Requirements: Types and Documentation
- Functional Requirements — Describe what the system must do; its functions and behavior.
- Non-Functional Requirements (NFRs) — Describe the quality attributes of a system (reliability, performance, security, usability, scalability, etc.).
- (Initial) Requirements Specification — A document containing the initial set of requirements gathered during the early stages of a project.
- SRS (Software Requirements Specification) — A standardized document that details software requirements according to international standards (e.g., ISO/IEC/IEEE 29148).
- URS (User Requirements Specification) — A document describing user requirements for a system from the perspective of business processes and end-user expectations.
- Architecturally Significant Requirements (ASRs) — Requirements that significantly influence architectural decisions and trade-offs.
- Cross-Functional Requirements (CFRs) — A synonym for non-functional requirements, emphasizing their cross-cutting nature.
- Acceptance Criteria — Verifiable conditions that must be met for work on a requirement to be considered accepted.
- Definition of Ready (DoR) — An agreement on when a backlog item is ready for development (clarity, estimation, criteria).
- Definition of Done (DoD) — An agreement on what "done" means for a piece of work (code, tests, documentation, deployment).
- Constraint — A strict condition that limits design choices (deadlines, platforms, standards, licenses).
- Assumption — A proposition taken as true without proof, which requires subsequent validation.
- Requirements Quality — Attributes per ISO 29148: unambiguous, complete, consistent, verifiable, atomic.
Requirements Formalization, Traceability, and Prioritization
- Requirements Formalization — The process of transforming informal requests into clear, verifiable, and unambiguous requirements.
- Requirements Traceability — The ability to track the lifecycle of a requirement from its source to its implementation, testing, and deployment.
- Bidirectional Traceability — The ability to track links between requirements, design elements, and test cases in both forward and backward directions.
- MoSCoW — A prioritization technique that classifies requirements as Must-have, Should-have, Could-have, and Won't-have.
- BDD (Behavior-Driven Development) — A development methodology where tests are written in a natural language focused on system behavior from the user's perspective (Given–When–Then format).
Notations and Modeling
- UML (Unified Modeling Language) — A standardized graphical modeling language for specifying, visualizing, constructing, and documenting the components of software systems.
- SysML (Systems Modeling Language) — An extension of UML for systems engineering that supports the modeling of various aspects of complex systems, including requirements, behavior, structure, and parameters.
- BPMN (Business Process Model and Notation) — A standard graphical notation for describing business processes, allowing visualization of workflows, events, gateways, and pools.
- MBSE (Model-Based Systems Engineering) — An approach to systems engineering where the model is the central artifact at all stages of the system lifecycle, from requirements to testing.
- ArchiMate — A notation for enterprise architecture (business, application, technology layers) and their relationships.
- DMN (Decision Model and Notation) — A standard for modeling business decisions and rules tables.
- DFD (Data Flow Diagram) — Diagrams illustrating data flows (context, decomposition levels).
- ERD (Entity-Relationship Diagram) — A model of a domain with entities, relationships, and attributes.
- CRUD Matrix — A matrix that maps Create/Read/Update/Delete operations to entities and roles/functions.
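A CRUD matrix can be represented directly as a lookup table. The roles, entities, and permissions below are invented for illustration:

```python
# CRUD matrix: role -> entity -> allowed operations (letters C/R/U/D).
# Roles and entities are illustrative examples, not a standard schema.
CRUD_MATRIX = {
    "analyst": {"Order": "R",    "Customer": "R"},
    "manager": {"Order": "CRUD", "Customer": "CRU"},
}

def allowed(role: str, entity: str, op: str) -> bool:
    """Check whether a role may perform operation op ('C', 'R', 'U', or 'D') on an entity."""
    return op in CRUD_MATRIX.get(role, {}).get(entity, "")

print(allowed("manager", "Order", "D"))   # True
print(allowed("analyst", "Order", "U"))   # False
```

Beyond access control, analysts use such a matrix to spot gaps: an entity that no function ever creates or deletes usually signals a missing requirement.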
Architectural Styles and Solution Evaluation
- Monolithic Architecture — An architectural approach where the entire system is developed and deployed as a single unit.
- Microservices Architecture — An architectural approach where a system is built as a collection of small, independently deployable and scalable services.
- Trade-off — A choice between mutually exclusive or conflicting characteristics or decisions, where improving one aspect comes at the expense of another.
- ATAM (Architecture Tradeoff Analysis Method) — A software architecture evaluation method used to analyze trade-offs between quality attributes (e.g., performance, scalability).
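ATAM itself is a scenario-based review method, but a first-cut comparison of alternatives is often done with a simple weighted sum over quality attributes. The weights and scores below are invented for illustration:

```python
# Simplified weighted-sum comparison of architecture alternatives.
# Not ATAM itself (ATAM is scenario-based); weights and scores are illustrative.
WEIGHTS = {"performance": 0.5, "scalability": 0.3, "simplicity": 0.2}

ALTERNATIVES = {
    "monolith":      {"performance": 4, "scalability": 2, "simplicity": 5},
    "microservices": {"performance": 3, "scalability": 5, "simplicity": 2},
}

def score(scores: dict[str, int]) -> float:
    """Weighted sum of attribute scores (1-5 scale here)."""
    return sum(WEIGHTS[attr] * val for attr, val in scores.items())

ranked = sorted(ALTERNATIVES, key=lambda a: score(ALTERNATIVES[a]), reverse=True)
print(ranked)  # ['monolith', 'microservices'] with these illustrative weights
```

The value of the exercise is less the final number than making the trade-off explicit: changing the weight on scalability flips the ranking, which is exactly the conversation a trade-off analysis is meant to force.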
Enterprise Architecture and Frameworks
- TOGAF (The Open Group Architecture Framework) — One of the most common enterprise architecture frameworks, which includes the ADM (Architecture Development Method) for developing and managing architecture.
- Zachman Framework — An ontology of enterprise architecture artifacts, presented as a 6×6 matrix that classifies different architectural aspects from various perspectives.
Approaches to Analysis and Development Processes
- Hard Systems Approach — A systems analysis methodology that assumes pre-formalized goals and requirements, decomposition, and top-down design, effective for well-defined problems.
- Soft Systems Approach — A systems analysis methodology used for problems with unclear goals and multiple stakeholder viewpoints, aimed at aligning understanding of the problem and desired changes.
- SSM (Soft Systems Methodology) — A specific soft systems methodology developed by Peter Checkland, which uses tools like rich pictures, root definitions, and CATWOE.
- Waterfall Model — A classic software development methodology where phases (analysis, design, implementation, testing, deployment) are executed sequentially, with each phase fully completed before the next begins.
- Agile — A group of flexible software development methodologies focused on iterative development, adapting to change, customer collaboration, and continuous delivery of value.
Selection Methods and Common Mistakes
- AHP (Analytic Hierarchy Process) — A multi-criteria decision-making method that allows for structuring complex problems and evaluating alternatives based on a hierarchy of criteria.
- Gold-plating — A common mistake in systems analysis that involves adding functionality not required by stakeholders, leading to increased project scope and complexity.
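AHP derives criterion weights from pairwise comparisons on Saaty's 1–9 scale. The sketch below uses the row geometric-mean approximation of the principal eigenvector, a common shortcut; a full AHP implementation would also compute a consistency ratio. The comparison values are invented:

```python
import math

# Pairwise comparison matrix for three criteria (cost, quality, speed) on
# Saaty's 1-9 scale; the judgments are illustrative. A[i][j] = importance of
# criterion i relative to criterion j, with A[j][i] = 1 / A[i][j].
A = [
    [1.0,   3.0,   5.0],
    [1 / 3, 1.0,   3.0],
    [1 / 5, 1 / 3, 1.0],
]

def ahp_weights(matrix: list[list[float]]) -> list[float]:
    """Approximate the principal eigenvector via normalized row geometric means."""
    gmeans = [math.prod(row) ** (1 / len(row)) for row in matrix]
    total = sum(gmeans)
    return [g / total for g in gmeans]

w = ahp_weights(A)
print([round(x, 3) for x in w])  # weights sum to 1; the first criterion ranks highest
```

The resulting weights can then feed a weighted scoring of alternatives, tying the prioritization of criteria to the evaluation of design options.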
Links
- ISO/IEC/IEEE 15288:2023 — System life cycle processes
- ISO/IEC/IEEE 12207:2017 — Software life cycle processes
- ISO/IEC/IEEE 29148:2018 — Requirements engineering
- ISO/IEC/IEEE 42010:2022 — Architecture description
- ISO/IEC 25010:2023 — Product quality model (SQuaRE)
- ISO/IEC/IEEE 24748-2:2024 — Life cycle management — Guidelines for applying ISO/IEC/IEEE 15288
- ISO/IEC/IEEE 15289:2019 — Content of life-cycle information items (documentation)
- ISO/IEC/IEEE 42020:2019 — Architecture processes
- ISO/IEC/IEEE 29119-1:2022 — Software testing — Part 1: General concepts
- IEEE Std 1012-2024 — System, Software, and Hardware Verification and Validation
- SEI ATAM — Architecture Tradeoff Analysis Method
- NASA Systems Engineering Handbook, SP-2016-6105 Rev2 (PDF)
- SWEBOK Guide v4.0a — IEEE Computer Society (PDF)
- Guide to the Systems Engineering Body of Knowledge (SEBoK)
- BABOK Guide v3 — IIBA
- Google SRE Books — Official site
- Microsoft Azure Well-Architected Framework — Official docs
Literature
- ISO/IEC/IEEE (2023). 15288: System Life Cycle Processes.
- INCOSE (2023). INCOSE Systems Engineering Handbook, 5th ed.
- ISO/IEC/IEEE (2018). 29148: Systems and Software Engineering — Life Cycle Processes — Requirements Engineering.
- IIBA (2015). A Guide to the Business Analysis Body of Knowledge (BABOK® Guide), v3.
- The Open Group (2022). The TOGAF® Standard, 10th Edition. Official free version.
- OMG (2017). Unified Modeling Language (UML®) 2.5.1 Specification. PDF.
- OMG (2014). Business Process Model and Notation (BPMN™) 2.0.2 Specification. PDF.
- OMG (2024). Systems Modeling Language (SysML®) 1.7 Specification. PDF.
- The Open Group (2022). ArchiMate® 3.2 Specification. Official free download (under license).
- Bass, L.; Clements, P.; Kazman, R. (2021). Software Architecture in Practice, 4th ed.
- Wiegers, K.; Beatty, J. (2013). Software Requirements, 3rd ed.
- Rozanski, N.; Woods, E. (2012). Software Systems Architecture: Working with Stakeholders Using Viewpoints and Perspectives, 2nd ed.
- Meadows, D. (2008). Thinking in Systems: A Primer.
- Senge, P. M. (2006). The Fifth Discipline: The Art & Practice of the Learning Organization (rev. ed.).
- Blanchard, B. S.; Fabrycky, W. J. (2010). Systems Engineering and Analysis, 5th ed.
- Robertson, J.; Robertson, S. (2012). Mastering the Requirements Process: Getting Requirements Right, 3rd ed.
- van Lamsweerde, A. (2009). Requirements Engineering: From System Goals to UML Models to Software Specifications.
- Hull, E.; Jackson, K.; Dick, J. (2017). Requirements Engineering, 4th ed.
- Kendall, K. E.; Kendall, J. E. (2023). Systems Analysis and Design, 11th ed.
- Dennis, A.; Wixom, B. H.; Tegarden, D. (2021). Systems Analysis and Design: An Object-Oriented Approach with UML, 8th ed.
- Satzinger, J. W.; Jackson, R. B.; Burd, S. D. (2015). Systems Analysis and Design in a Changing World, 7th ed.
- Fowler, M. (2003). UML Distilled: A Brief Guide to the Standard Object Modeling Language, 3rd ed.
- Delligatti, L. (2013). SysML Distilled: A Brief Guide to the Systems Modeling Language.
- Silver, B. (2011). BPMN Method and Style, 2nd ed.
- Lankhorst, M. et al. (2017). Enterprise Architecture at Work: Modelling, Communication and Analysis, 4th ed.
- Richards, M.; Ford, N. (2020). Fundamentals of Software Architecture.
- Fairbanks, G. (2010). Just Enough Software Architecture: A Risk-Driven Approach.
- Keeling, M. (2017). Design It!: From Programmer to Software Architect.
- Evans, E. (2003). Domain-Driven Design: Tackling Complexity in the Heart of Software.
- Vernon, V. (2013). Implementing Domain-Driven Design.
- Brandolini, A. (2018). Introducing EventStorming: An Act of Deliberate Collective Learning.
- Simsion, G.; Witt, G. (2015). Data Modeling Essentials, 4th ed.
- Silverston, L. (2008–2009). The Data Model Resource Book, Vols. 1–3 (rev. eds.).
- Keeney, R. L.; Raiffa, H. (1993). Decisions with Multiple Objectives: Preferences and Value Trade-Offs, 2nd ed.
- Saaty, T. L. (1980). The Analytic Hierarchy Process.
- Saaty, T. L. (1990). Decision Making for Leaders.
References
- IEEE Computer Society (2025). Guide to the Software Engineering Body of Knowledge (SWEBOK), v4.0a. Requirements Engineering: elicitation techniques. https://ieeecs-media.computer.org/media/education/swebok/swebok-v4.pdf
- Zowghi, D.; Coulin, C. (2005/2014). Requirements Elicitation: A Survey of Techniques, Approaches, and Tools. https://eecs481.org/readings/requirements.pdf
- ISO/IEC/IEEE 29148 (2011/2018). Systems and software engineering — Requirements engineering. ISO overview page: “Defines the construct of a good requirement…”. https://www.iso.org/standard/45171.html
- NASA (2020). NPR 7123.1C — Systems Engineering Processes and Requirements. Definition: “well-formed (clear and unambiguous), complete, consistent, individually verifiable and traceable”. https://nodis3.gsfc.nasa.gov/displayAll.cfm?Internal_ID=N_PR_7123_001C_&page_name=all
- GMU (George Mason University). IEEE Software Requirements Specification Template (SRS). https://cs.gmu.edu/~rpettit/files/project/SRS-template.doc
- Westfall, L. (Cal Poly, .edu). The What, Why, Who, When and How of Software Requirements (mentions URS). https://users.csc.calpoly.edu/~csturner/courses/300f06/readings/%5B3%5D_%20The_Why_What_Who_When_and_How_of_Software_Requirements.pdf
- Stanford University IT (.edu). Functional Specification Document Template. https://uit.stanford.edu/sites/default/files/2017/08/30/Functional%20Specification%20Document%20Template.docx
- Penn State (.edu). Elements of a Use Case Diagram. https://www.e-education.psu.edu/geog468/l8_p4.html
- UC Irvine (.edu). Data Flow Diagram. https://www.security.uci.edu/program/risk-assessment/data-flow-diagram/
- ISO/IEC/IEEE 42010 (2011/2022). Architecture description — requirements for architecture description and viewpoints. https://standards.ieee.org/ieee/42010/5334/
- Cornell University (.edu). CS 5150 — Feasibility Studies. https://www.cs.cornell.edu/courses/cs5150/2015fa/slides/C1-feasibility.pdf
- Carnegie Mellon SEI. Architecture Tradeoff Analysis Method (ATAM) — overview. https://www.sei.cmu.edu/library/architecture-tradeoff-analysis-method-collection/
- Microsoft Azure Architecture Center. Architecture styles: microservices — benefits & complexity. https://learn.microsoft.com/en-us/azure/architecture/guide/architecture-styles/
- Google Cloud. What Is Microservices Architecture? — Monolithic vs. microservices (overview). https://cloud.google.com/learn/what-is-microservices-architecture
- NASA (2023). Requirements Management — Traceability, Bidirectional traceability (definitions). https://www.nasa.gov/reference/6-2-requirements-management/
- McKinsey (2012). Delivering large-scale IT projects on time, on budget, and on value. https://www.mckinsey.com/capabilities/mckinsey-digital/our-insights/delivering-large-scale-it-projects-on-time-on-budget-and-on-value
- University of Cambridge, IfM. Soft Systems Methodology (SSM) — CATWOE, 3Es. https://www.ifm.eng.cam.ac.uk/research/dstools/soft-systems-methodology/
- Lancaster University (ePrints). Soft Systems Methodology and root definitions. https://eprints.lancs.ac.uk/id/eprint/48770/1/Document.pdf
- University of Cambridge, IfM. Soft Systems Methodology (SSM). https://www.ifm.eng.cam.ac.uk/research/dstools/soft-systems-methodology/
- UCL Discovery (.ac.uk). M. Haklay. Soft System Methodology (SSM). https://discovery.ucl.ac.uk/1296/1/paper13.pdf
- IEEE Std 1320.1-1998 (R2004). Functional Modeling Language — Syntax and Semantics for IDEF0. https://standards.ieee.org/ieee/1320.1/2003/
- ISO/IEC/IEEE 31320-1:2012. IDEF0: Function Modeling. https://cdn.standards.iteh.ai/samples/60615/9c848e7a1bc54042b774b3cb050872e7/ISO-IEC-IEEE-31320-1-2012.pdf
- University of Washington (.edu). UML Class Diagrams / UML overview (course material). https://courses.cs.washington.edu/courses/cse403/16au/lectures/L07.pdf
- JHU/APL (.edu). Modeling with SysML — tutorial. https://www.jhuapl.edu/sites/default/files/2023-03/ModelingwithSysMLTutorial.pdf
- MIT OCW (.edu). O. de Weck. Introduction to Systems Modeling Languages (incl. SysML, MBSE). https://ocw.mit.edu/courses/16-842-fundamentals-of-systems-engineering-fall-2015/23fea897f48d11f45593fa4be698d749_MIT16_842F15_Ses3_sysmodlg.pdf
- IBM. What is Business Process Modeling and Notation (BPMN)?. https://www.ibm.com/think/topics/bpmn
- IBM Docs. Business Process Modeling Notation (BPMN) model. https://www.ibm.com/docs/en/iis/11.5.0?topic=types-business-process-modeling-notation-bpmn-model
- IEEE/ISO/IEC 29148:2018. Systems and software engineering — Requirements engineering (overview). https://standards.ieee.org/ieee/29148/6937/
- King’s College London (.ac.uk). What is MoSCoW prioritization?. https://kdl.kcl.ac.uk/faqs/what-is-moscow-prioritization/
- Saaty, T. L. (1994). How to Make a Decision: The Analytic Hierarchy Process. Interfaces 24(6), 19–43. https://pubsonline.informs.org/doi/10.1287/inte.24.6.19
- Microsoft Learn (Azure Boards). Best practices for Agile project management — Refine each backlog. https://learn.microsoft.com/en-us/azure/devops/boards/best-practices-agile-project-management
- MIT OCW (.edu). V-Model — Fundamentals of Systems Engineering. https://ocw.mit.edu/courses/16-842-fundamentals-of-systems-engineering-fall-2015/resources/v-model/
- Carnegie Mellon SEI (.edu). Architecture Tradeoff Analysis Method (ATAM) — overview. https://www.sei.cmu.edu/library/architecture-tradeoff-analysis-method-collection/
- Microsoft Azure Architecture Center. Architecture styles. https://learn.microsoft.com/en-us/azure/architecture/guide/architecture-styles/
- Kazman, R.; Klein, M.; Clements, P. (2000). ATAM: Method for Architecture Evaluation. CMU/SEI Technical Report. https://www.sei.cmu.edu/documents/629/2000_005_001_13706.pdf
- The Open Group. Introduction — The TOGAF® Standard. https://www.togaf.org/chap01.html
- Microsoft Azure Architecture Center. Event-Driven Architecture style. https://learn.microsoft.com/en-us/azure/architecture/guide/architecture-styles/event-driven
- Jonkers, H. et al. ArchiMate® and the TOGAF® Framework. The Open Group (white paper). https://pubs.opengroup.org/onlinepubs/7698909899/toc.pdf
- Estrem, W. Building Blocks Revisited. The Open Group (presentation). https://archive.opengroup.org/public/member/proceedings/q411b/presentations/Estrem%20-%20Building%20Blocks%20Revisted.pdf
- The Open Group. Architecture Principles. https://pubs.opengroup.org/onlinepubs/7499919799/toc.pdf
- The Open Group. IT Architecture Compliance. https://www.opengroup.org/architecture/togaf7-doc/arch/p4/comp/comp.htm
- The Open Group. The TOGAF® Standard, Version 9.2 (specification). https://university.sk/wp-content/uploads/2020/01/TOGAF_v9_2_specifikacia.pdf
- Engelsman, W.; van Sinderen, M. Supporting Requirements Management in TOGAF and ArchiMate. The Open Group (white paper). https://pubs.opengroup.org/onlinepubs/7698999899/toc.pdf
- Zachman, J. A. (1987). A Framework for Information Systems Architecture. IBM Systems Journal, 26(3), 276–292. https://doi.org/10.1147/sj.263.0276
- MIT CISR. Classic Topics — Enterprise Architecture (defining EA as the “organizing logic for business process and IT capabilities…”). https://cisr.mit.edu/content/classic-topics-enterprise-architecture
- The Open Group. IT Architecture Compliance. https://www.opengroup.org/architecture/togaf7-doc/arch/p4/comp/comp.htm
- Royce, W. W. (1970). Managing the Development of Large Software Systems. IEEE WESCON. Reprint (PDF): https://www.praxisframework.org/files/royce1970.pdf
- Defense Acquisition University. System Requirements Review (SRR) — Acquipedia. https://aaf.dau.edu/acquipedia/article/system-requirements-review-srr/
- NASA (2023). Requirements Management — baseline, change control (CCB). https://www.nasa.gov/reference/6-2-requirements-management/
- Microsoft Learn (Azure Boards). Best practices for Agile project management — Refine each backlog. https://learn.microsoft.com/en-us/azure/devops/boards/best-practices-agile-project-management
- MSDN Magazine (Microsoft). BDD Primer: Behavior-Driven Development with SpecFlow. https://learn.microsoft.com/en-us/archive/msdn-magazine/2010/december/msdn-magazine-bdd-primer-behavior-driven-development-with-specflow-and-watin
- Microsoft Learn. End-to-end traceability in Azure DevOps. https://learn.microsoft.com/en-us/azure/devops/cross-service/end-to-end-traceability
- Google SRE. Service Level Objectives; Error Budget Policy. https://sre.google/sre-book/service-level-objectives/ ; https://sre.google/workbook/error-budget-policy/
- Microsoft Learn. Azure Monitor — Overview. https://learn.microsoft.com/en-us/azure/azure-monitor/fundamentals/overview
- AWS Whitepaper. Blue/Green Deployments on AWS. https://docs.aws.amazon.com/whitepapers/latest/blue-green-deployments/welcome.html
- Microsoft Learn. Manage change — track, triage, and implement change requests. https://learn.microsoft.com/en-us/azure/devops/cross-service/manage-change
- Microsoft Learn (Azure Boards). Backlogs overview — create and manage your product backlog. https://learn.microsoft.com/en-us/azure/devops/boards/backlogs/backlogs-overview
- Microsoft Learn (Azure Test Plans). What is Azure Test Plans?. https://learn.microsoft.com/en-us/azure/devops/test/overview
- MSDN Magazine. Behavior-Driven Design with SpecFlow. https://learn.microsoft.com/en-us/archive/msdn-magazine/2013/july/data-points-behavior-driven-design-with-specflow
- IBM. MTTR vs. MTBF: What’s the difference?. https://www.ibm.com/think/topics/mttr-vs-mtbf
- Google Cloud. Google Cloud Observability. https://cloud.google.com/products/observability
- IEEE Std 830-1998. IEEE Recommended Practice for Software Requirements Specifications (historical standard). Educational copy (PDF): https://www.math.uaa.alaska.edu/~afkjm/cs401/IEEE830.pdf
- NASA. Systems Engineering Handbook (NASA/SP-2016-6105 Rev2). https://www.nasa.gov/wp-content/uploads/2018/09/nasa_systems_engineering_handbook_0.pdf
- NASA (2023). Requirements Management — traceability, baseline, CCB. https://www.nasa.gov/reference/6-2-requirements-management/
- King’s College London (.ac.uk). What is MoSCoW prioritization?. https://kdl.kcl.ac.uk/faqs/what-is-moscow-prioritization/
- Saaty, T. L. (1994). How to Make a Decision: The Analytic Hierarchy Process. Interfaces 24(6), 19–43. https://doi.org/10.1287/inte.24.6.19
- SEI / CMMI Institute. Capability Maturity Model Integration (CMMI) — Overview. https://www.sei.cmu.edu/cmmi/
- NASA Langley. What is Formal Methods?; NASA-GB-002-95 Guidebook. https://shemesh.larc.nasa.gov/fm/fm-what.html ; https://ntrs.nasa.gov/api/citations/19980228002/downloads/19980228002.pdf
- SEI (CMU). Common Testing Problems: Pitfalls to Prevent and Mitigate (on typical requirements/traceability issues). https://www.sei.cmu.edu/blog/common-testing-problems-pitfalls-to-prevent-and-mitigate/
- NIST (2023). Artificial Intelligence Risk Management Framework (AI RMF 1.0). NIST AI 100-1. https://nvlpubs.nist.gov/nistpubs/ai/nist.ai.100-1.pdf
- U.S. DoD CIO (2021). DoD Enterprise DevSecOps Reference Design. https://dodcio.defense.gov/Portals/0/Documents/Library/DevSecOpsReferenceDesign.pdf