To evaluate and increase modularity, this paper combines a method for visualizing and measuring software architectures with two algorithms for decoupling. The combination is tested on a software system at Ericsson. Our analysis shows that the system has one large cluster of components (18% of the system, a Core), all interacting with each other. By employing cluster and dominator analysis, we identify 19 dependencies whose removal would decouple the Core. When validating the analysis output with experts at Ericsson, six of the suggested dependencies were deemed impossible to remove. By removing the remaining 13 dependencies, Ericsson would improve the architecture of its system considerably; e.g., the Core size would go down to 5%.
Advanced metering infrastructure (AMI) is a key component of the concept of smart power grids. Although several functional/logical reference models of AMI exist, they are not suited for automated analysis of properties such as cyber security. This paper briefly presents a reference model of AMI that follows a tested and even commercially adopted formalism allowing automated analysis of cyber security. Finally, this paper presents an example cyber security analysis, and discusses its results.
Enterprise architecture advocates model-based decision-making on enterprise-wide information system issues. In order to provide decision-making support, enterprise architecture models should not only be descriptive but also enable analysis. This paper presents a software tool, currently under development, for the evaluation of enterprise architecture models. In particular, the paper focuses on how to encode scientific theories so that they can be used for model-based analysis and reasoning under uncertainty. The tool architecture is described, and a case study shows how the tool supports the process of enterprise architecture analysis.
Nowadays, both agile development and enterprise architecture are often employed in large organizations. However, there is still some confusion about whether the two can and should be used together, and there is little research on their possible interplay. The aim of this study is to bring new knowledge to the field of enterprise architecture and its relation to agile development. Twelve qualitative interviews with professionals in different roles, such as developers and architects, were carried out. The participants belong to five different companies, and the information obtained from them has been used to compare opinions and stated challenges regarding agile and EA. We found the following common opinions among the interviewees: 1) agile development and enterprise architecture can be combined, 2) there are clear communication problems among architects, different teams, and project owners, and 3) there is a lack of system and application reusability. © 2018 IEEE.
A tool for Enterprise Architecture analysis using a probabilistic mathematical framework is demonstrated. The Model-View-Controller tool architecture is outlined before the use of the tool is considered. A sample abstract maintainability model is created, showing the dependence of system maintainability on documentation quality, developer expertise, etc. Finally, a concrete model of an ERP system is discussed.
This paper presents a CAD tool for enterprise cyber security management called securiCAD. It is a software tool developed during ten years of research at KTH Royal Institute of Technology, and it is now being commercialized by foreseeti (a KTH spin-off company). The idea of the tool is similar to that of CAD tools used when engineers design and test cars, buildings, etc. Specifically, the securiCAD user first models the IT environment, an existing one or one under development; then securiCAD, using attack graphs, calculates and highlights potential weaknesses and avenues of attack. The main benefits of securiCAD are: 1) built-in security expertise, 2) visualization, 3) holistic security assessments, and 4) scenario comparison (decision-making) capabilities.
In recent years, Enterprise Architecture (EA) has become a discipline for business and IT-system management. While much research focuses on theoretical contributions related to EA, very few studies use statistical tools to analyze empirical data. This paper investigates the actual application of EA by giving a broad overview of the usage of enterprise architecture in Swedish, German, Austrian and Swiss companies. 162 EA professionals answered a survey originally focusing on the relation between IT/business alignment (ITBA) and EA. The dataset provides answers to questions such as: For how many years have companies been using EA models, tools, processes and roles? How is ITBA in relation to EA perceived at companies? In particular, the survey investigated quality attributes of EA related to IT systems, business, and IT governance. One important result is a set of interesting correlations between how these qualities are prioritized. For example, a high concern for interoperability correlates with a high concern for maintainability.
What constitutes an enterprise architecture framework is a contested subject. The contents of present enterprise architecture frameworks thus differ substantially. This paper aims to alleviate the confusion regarding which framework contains what by proposing a meta framework for enterprise architecture frameworks. By using this meta framework, decision makers are able to express their requirements on what their enterprise architecture framework must contain and also to evaluate whether the existing frameworks meet these requirements. An example classification of common EA frameworks illustrates the approach.
Creating accurate models of information systems is an important but challenging task. It is generally well understood that such modeling encompasses general scientific issues, but the monetary aspects of the modeling of software systems are not equally well acknowledged. The present paper describes a method using Bayesian networks for optimizing modeling strategies, perceived as a trade-off between these two aspects. Using GeNIe, a graphical tool with the proper Bayesian algorithms implemented, decision support can thus be provided to the modeling process. Specifically, an informed trade-off can be made, based on the modeler's prior knowledge of the predictive power of certain models, combined with his projection of their costs. It is argued that this method might enhance modeling of large and complex software systems in two principal ways: Firstly, by enforcing rigor and making hidden assumptions explicit. Secondly, by enforcing cost awareness even in the early phases of modeling. The method should be used primarily when the choice of modeling can have great economic repercussions.
Creating accurate models of information systems is an important but challenging task. While the scientific aspects of such modeling are generally acknowledged, the monetary aspects of the modeling of software systems are not. The present paper describes a Bayesian method for optimizing modeling strategies, perceived as a trade-off between these two aspects. Specifically, an informed trade-off can be made, based on the modeler's prior knowledge of the predictive power of certain models, combined with her projection of the costs. It is argued that this method enhances modeling of large and complex software systems in two principal ways: Firstly, by enforcing rigor and making hidden assumptions explicit. Secondly, by enforcing cost awareness even in the early phases of modeling. The method should be used primarily when the choice of modeling can have great economic repercussions.
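The trade-off described in this abstract can be illustrated as a small expected-utility calculation. The model names, prior accuracies, costs, and benefit value below are hypothetical, assuming only that the modeler can state a prior belief about each candidate model's predictive power and a projection of its cost:

```python
# Hypothetical sketch: choose a modeling strategy by expected net utility.
# All numbers are illustrative priors, not data from the paper.

candidate_models = {
    # name: (prior probability the model predicts correctly, modeling cost)
    "coarse_model":   (0.60, 10.0),
    "detailed_model": (0.85, 40.0),
}

VALUE_OF_CORRECT_PREDICTION = 100.0  # assumed monetary benefit of a correct call

def expected_net_utility(p_correct: float, cost: float) -> float:
    """Expected benefit of acting on the model, minus the cost of building it."""
    return p_correct * VALUE_OF_CORRECT_PREDICTION - cost

best = max(candidate_models,
           key=lambda m: expected_net_utility(*candidate_models[m]))
print(best)
```

With these illustrative numbers the cheaper coarse model wins; doubling the value of a correct prediction would flip the choice, which is exactly the kind of sensitivity the method makes explicit.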
Models are an integral part of the discipline of Enterprise Architecture (EA). To stay relevant to management decision-making needs, the models need to be based upon suitable metamodels. These metamodels, in turn, need to be properly and continuously maintained. While there exist several methods for metamodel development and maintenance, these typically focus on internal metamodel qualities and metamodel engineering processes, rather than on the actual decision-making needs and their impact on the metamodels used. The present paper employs techniques from information theory and the learning of classification trees to propose a method for metamodel management based upon the value added by entities and attributes to the decision-making process. This allows for the removal of those metamodel parts that give the least "bang for the buck" in terms of decision support. The method proposed is illustrated using real data from an ongoing research project on systems modifiability.
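The information-theoretic idea behind this kind of metamodel pruning can be sketched with a plain entropy/information-gain computation, as used when learning classification trees. The attribute name and the toy decision records below are made up for illustration:

```python
import math
from collections import Counter

def entropy(labels):
    """Shannon entropy (in bits) of a list of decision outcomes."""
    n = len(labels)
    return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

def information_gain(records, attribute, decision_key="decision"):
    """How much knowing `attribute` reduces uncertainty about the decision.
    A metamodel attribute with near-zero gain adds little decision support."""
    total = entropy([r[decision_key] for r in records])
    remainder = 0.0
    for value in {r[attribute] for r in records}:
        subset = [r[decision_key] for r in records if r[attribute] == value]
        remainder += len(subset) / len(records) * entropy(subset)
    return total - remainder

# Hypothetical data: does modeling "coupling" help the modify-or-keep decision?
records = [
    {"coupling": "low",  "decision": "modify"},
    {"coupling": "low",  "decision": "modify"},
    {"coupling": "high", "decision": "keep"},
    {"coupling": "high", "decision": "keep"},
]
print(information_gain(records, "coupling"))  # 1.0 bit: fully informative here
```

Attributes could then be ranked by gain, and the lowest-ranked candidates considered for removal from the metamodel.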
This paper presents a case study that evaluates the performance of the product development performance measurement system used in a Swedish company that is part of a global corporate group. The study is based on internal documentation and eighteen in-depth interviews with stakeholders involved in the product development process. The results from the case study include a description of which metrics are in use, how they are employed, and their effect on the quality of the performance measurement system. In particular, having a well-defined process proved to have a major impact on the quality of the performance measurement system in this case.
Performance evaluation of product development processes is becoming increasingly important as many companies experience tougher competition and shorter product life cycles. This article, based on a case study at a Swedish company, investigates the needs and requirements that the company has for a future performance measurement system for product development. The requirements were found to mostly concern cooperation between functions, co-worker motivation, and cost-efficient product solutions. These focus areas are common problems in product development, as they are addressed in development concepts like Lean Product Development and Design for Six Sigma. Therefore, more research on how they can be supported by a performance measurement system for product development would be of interest.
Large investments are made annually to develop and maintain IT systems. The successful outcome of IT projects is therefore crucial for the economy. Yet, many IT projects fail completely, are delayed, run over budget, or end up with less functionality than planned. This article describes a Bayesian decision-support model based on data elicited from 51 experts. Using this model, the effect management decisions have upon projects can be estimated beforehand, thus providing decision support for the improvement of IT project performance.
EA initiatives usually span the entire enterprise at a high level. While a typical development organization (e.g., a business unit within a larger enterprise) often has detailed models describing its product, the enterprise architecture at the business unit level is handled in an ad hoc or detached way. However, research shows that there is a tight link between a product architecture and its developing organization. In this paper we have studied an organization within Ericsson that focuses on the development of large software and hardware products. We have applied the hidden structure method, which is based on the Design Structure Matrix approach, to analyze organizational transformations. The to-be scenarios are possible alternatives for becoming more agile and lean. Our analysis shows that one scenario would likely increase the complexity of developing the product, while the other two are promising to-be scenarios.
Large amounts of software are running on what are considered to be legacy platforms. These systems are often business critical and cannot be phased out without a proper replacement. Migration of these legacy applications can be troublesome due to poor documentation and a changing workforce. Estimating the cost of such projects is nontrivial. Expert estimation is the most common method, but it relies heavily on the experience, knowledge, and intuition of the estimator. The use of a complementary estimation method can increase the accuracy of the assessment. This paper presents a metamodel that combines enterprise architecture modeling concepts with the COCOMO II estimation model. Our study proposes a method combining expert estimation with the metamodel-based approach to increase estimation accuracy. The combination was tested with four project samples at a large Nordic manufacturing company, which resulted in a mean magnitude of relative error of 10%.
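The COCOMO II post-architecture effort equation that the metamodel builds on can be sketched as follows. The constants A = 2.94 and B = 0.91 are the commonly cited COCOMO II.2000 calibration values; the scale-factor and effort-multiplier inputs below are illustrative, not calibrated for any real project:

```python
def cocomo2_effort(ksloc, scale_factors, effort_multipliers, a=2.94, b=0.91):
    """COCOMO II effort in person-months: PM = A * Size^E * prod(EM),
    where E = B + 0.01 * sum(scale factors) and Size is in KSLOC."""
    e = b + 0.01 * sum(scale_factors)
    pm = a * ksloc ** e
    for em in effort_multipliers:
        pm *= em
    return pm

# Illustrative inputs: a 50 KSLOC migration with made-up ratings.
effort = cocomo2_effort(50, [3.72, 3.04, 4.24, 3.29, 4.68], [1.0, 1.1, 0.9])
print(round(effort, 1))  # estimated person-months for this hypothetical project
```

In the paper's setting, the size and rating inputs would be derived from the enterprise architecture model rather than entered by hand.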
Enterprise Architecture (EA) is an approach used to provide decision support based on organization-wide models. The creation of such models is, however, cumbersome, as multiple aspects of an organization need to be considered, making manual efforts time-consuming and error-prone. Thus, the EA approach would be significantly more promising if the data used when creating the models could be collected automatically, a topic not yet properly addressed by either academia or industry. This paper proposes network scanning for automatic data collection and uses an existing software tool for generating EA models (ArchiMate is employed as an example) based on the IT infrastructure of enterprises. While some manual effort is required to make the models fully useful in many practical scenarios (e.g., to detail the actual services provided by IT components), empirical results show that the methodology is accurate and (in its default state) requires little effort to carry out.
Management of various Distributed Energy Resources (DERs) in microgrids requires the integration of heterogeneous control devices and systems. Design and management of such integrated systems would benefit from the application of models that capture structural and functional aspects. These models are important for abstracting technical detail during planning and design, and for providing a basis for discussion amongst stakeholders and technical experts. Such models should provide semantics that adequately describe and define these aspects, from the electro-technical to the information management perspective, during design and implementation. In the discipline of IT management, Enterprise Architecture (EA) is a commonly used approach. The EA approach is typically based on metamodels, with ArchiMate being one of the best known. ArchiMate aims to enable holistic descriptions of businesses and their supporting IT using three layers, namely business, application and technology, from three perspectives, namely information, behavior and structure. While invaluable for planning and management of large organizational IT, ArchiMate in its original form lacks the descriptive semantics required to specifically capture the high level of systems integration required for electrical process management. This paper proposes an extended ArchiMate metamodel for modeling microgrid components, the control systems, and the management and control of these integrated systems. The paper provides an example of how this can be applied to a proposed microgrid development project.
Enterprise architecture modeling and model maintenance are time-consuming and error-prone activities that are typically performed manually. This position paper presents new and innovative ideas on how to automate the modeling of enterprise architectures. We propose to view the problem of modeling as a probabilistic state estimation problem, which is addressed using Dynamic Bayesian Networks (DBN). The proposed approach is described using a motivating example. Sources of machine-readable data about Enterprise Architecture entities are reviewed.
Time between vulnerability disclosure (TBVD) for individual analysts is proposed as a meaningful measure of the likelihood of finding a zero-day vulnerability within a given timeframe. Based on publicly available data, probabilistic estimates of the TBVD of various software products are provided. Sixty-nine thousand six hundred forty-six vulnerabilities from the National Vulnerability Database (NVD) and the SecurityFocus Vulnerability Database were harvested, integrated and categorized according to the analysts responsible for their disclosure as well as by the affected software products. Probability distributions were fitted to the TBVD per analyst and product. Among competing distributions, the Gamma distribution demonstrated the best fit, with the shape parameter, k, similar for most products and analysts, while the scale parameter, θ, differed significantly. For forecasting, autoregressive models of the first order were fitted to the TBVD time series for various products. Evaluation demonstrated that forecasting of TBVD on a per-product basis was feasible. Products were also characterized by their relative susceptibility to vulnerabilities with impact on confidentiality, integrity and availability, respectively. The differences in TBVD between products are significant, e.g. spanning differences of over 500% among the 20 most common software products in our data. Differences are further accentuated by the differing impact, so that, e.g., the mean working time between disclosure of vulnerabilities with a complete impact on integrity (as defined by the Common Vulnerability Scoring System) for Linux (110 days) exceeds that of Windows 7 (6 days) by over 18 times.
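The two analysis steps described here, fitting a Gamma distribution to TBVD samples and fitting a first-order autoregressive model for forecasting, can be sketched with stdlib-only method-of-moments estimates. The TBVD sample below is synthetic, not from the study's databases:

```python
import statistics

def fit_gamma_moments(samples):
    """Method-of-moments Gamma fit: shape k = mean^2 / var, scale theta = var / mean.
    (The paper's fit may use a different estimator; this is the simplest one.)"""
    mean = statistics.fmean(samples)
    var = statistics.variance(samples)
    return mean * mean / var, var / mean  # (k, theta)

def ar1_forecast(series):
    """One-step AR(1) forecast using the lag-1 autocorrelation of the series."""
    mean = statistics.fmean(series)
    dev = [x - mean for x in series]
    phi = sum(a * b for a, b in zip(dev[:-1], dev[1:])) / sum(d * d for d in dev)
    return mean + phi * (series[-1] - mean)

tbvd_days = [12, 30, 7, 45, 22, 60, 15, 33, 9, 41]  # synthetic TBVD sample (days)
k, theta = fit_gamma_moments(tbvd_days)
print(k, theta, ar1_forecast(tbvd_days))
```

With per-product (k, θ) estimates in hand, the probability of disclosure within a given timeframe follows from the Gamma CDF.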
Attack simulations may be used to assess the cyber security of systems. In such simulations, the steps taken by an attacker in order to compromise sensitive system assets are traced, and a time estimate may be computed from the initial step to the compromise of assets of interest. Attack graphs constitute a suitable formalism for the modeling of attack steps and their dependencies, allowing the subsequent simulation. To avoid the costly proposition of building new attack graphs for each system of a given type, domain-specific attack languages may be used. These languages codify the generic attack logic of the considered domain, thus facilitating the modeling, or instantiation, of a specific system in the domain. Examples of possible cyber security domains suitable for domain-specific attack languages are generic types such as cloud systems or embedded systems, but domains may also be highly specialized, e.g. Ubuntu installations; the objects of interest as well as the attack logic will differ significantly between such domains. In this paper, we present the Meta Attack Language (MAL), which may be used to design domain-specific attack languages such as the aforementioned. The MAL provides a formalism that allows the semi-automated generation as well as the efficient computation of very large attack graphs. We present the formal background of MAL, define its syntax and semantics, exemplify its use with a small domain-specific language and instance model, and report on the computational performance.
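A minimal sketch of the kind of attack-graph computation that MAL-based simulations perform is given below. Nothing here is MAL syntax; the graph, step names, exponential time-to-compromise assumption, and treatment of every step as an OR step are all illustrative simplifications:

```python
import heapq
import random

# Hypothetical attack graph: edges are attack steps with mean times (days).
attack_graph = {
    "internet":    [("phish_user", 5.0), ("exploit_vpn", 30.0)],
    "phish_user":  [("workstation", 2.0)],
    "exploit_vpn": [("server", 1.0)],
    "workstation": [("server", 10.0)],
    "server":      [],
}

def sampled_time_to_compromise(graph, source, target, rng):
    """Dijkstra over one sample of exponentially distributed step times,
    i.e. the fastest attack path in this simulated trial (OR steps only)."""
    dist = {source: 0.0}
    queue = [(0.0, source)]
    while queue:
        t, node = heapq.heappop(queue)
        if node == target:
            return t
        if t > dist.get(node, float("inf")):
            continue
        for nxt, mean_days in graph.get(node, []):
            cand = t + rng.expovariate(1.0 / mean_days)
            if cand < dist.get(nxt, float("inf")):
                dist[nxt] = cand
                heapq.heappush(queue, (cand, nxt))
    return float("inf")

rng = random.Random(0)
samples = [sampled_time_to_compromise(attack_graph, "internet", "server", rng)
           for _ in range(1000)]
print(sum(samples) / len(samples))  # estimated mean time to compromise (days)
```

Real MAL-generated graphs additionally distinguish AND steps (all parents required) from OR steps, which this sketch omits.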
The Common Vulnerability Scoring System (CVSS) is the state-of-the-art system for assessing software vulnerabilities. However, it has been criticized for lack of validity and practitioner relevance. In this paper, the credibility of the CVSS scoring data found in five leading databases – NVD, X-Force, OSVDB, CERT-VN, and Cisco – is assessed. A Bayesian method is used to infer the most probable true values underlying the imperfect assessments of the databases, thus circumventing the problem that ground truth is not known. It is concluded that with the exception of a few dimensions, the CVSS is quite trustworthy. The databases are relatively consistent, but some are better than others. The expected accuracy of each database for a given dimension can be found by marginalizing confusion matrices. By this measure, NVD is the best and OSVDB is the worst of the assessed databases.
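The marginalization step mentioned here can be illustrated directly: given a confusion matrix P(assessed | true) for one CVSS dimension and a prior over true values, a database's expected accuracy is the prior-weighted probability mass on the diagonal. The three-level dimension, the prior, and the matrix values below are hypothetical:

```python
def expected_accuracy(confusion, prior):
    """Sum over v of P(true=v) * P(assessed=v | true=v): the probability that
    the database reports the true value, marginalized over the prior."""
    return sum(prior[v] * confusion[v][v] for v in prior)

# Hypothetical CVSS dimension with three levels and an assumed prior.
prior = {"low": 0.2, "medium": 0.5, "high": 0.3}
confusion = {  # rows: true value; columns: probability of each assessment
    "low":    {"low": 0.9, "medium": 0.1, "high": 0.0},
    "medium": {"low": 0.1, "medium": 0.8, "high": 0.1},
    "high":   {"low": 0.0, "medium": 0.2, "high": 0.8},
}
print(expected_accuracy(confusion, prior))  # 0.2*0.9 + 0.5*0.8 + 0.3*0.8 = 0.82
```

In the paper, the confusion matrices themselves are the quantities inferred by the Bayesian method; here they are simply assumed.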
The Multi-Attribute Prediction Language (MAPL), an analysis metamodel for non-functional qualities of systems-of-systems, is introduced. MAPL features analysis in five non-functional areas: service cost, service availability, data accuracy, application coupling, and application size. In addition, MAPL explicitly includes utility modeling to make trade-offs between the qualities. The paper introduces how each of the five non-functional qualities is modeled and quantitatively analyzed based on the ArchiMate standard for enterprise architecture modeling and the previously published Predictive, Probabilistic Architecture Modeling Framework, building on the well-known UML and OCL formalisms. The main contribution of MAPL lies in combining all five non-functional analyses into a single unified framework.
The discipline of enterprise architecture advocates the use of models to support decision-making on enterprise-wide information system issues. In order to provide such support, enterprise architecture models should be amenable to analyses of various properties, such as the level of enterprise information security. This paper proposes the use of a formal language to support such analysis. Such a language needs to be able to represent causal relations between, and definitions of, various concepts as well as uncertainty with respect to both concepts and relations. To support decision-making properly, the language must also allow the representation of goals and decision alternatives. This paper evaluates a number of languages with respect to these requirements, and selects influence diagrams for further consideration. The influence diagrams are then extended to fully satisfy the requirements. The syntax and semantics of the extended influence diagrams are detailed in the paper, and their use is demonstrated in an example.
Making major changes in enterprise information systems, such as large IT investments, often has a significant impact on business operations. Moreover, when deliberating which IT changes to make, the consequences of choosing a certain scenario may be difficult to grasp. One way to ascertain the quality of IT investment decisions is through the use of methods from decision theory. This paper proposes the use of one such method to facilitate IT-investment decision-making, viz. extended influence diagrams. An extended influence diagram is a tool able to completely describe and analyse a decision situation. The applicability of extended influence diagrams is demonstrated at the end of the paper by using an extended influence diagram in combination with the ISO/IEC 9126 software quality characteristics and metrics as a means to assist a decision maker in a decision regarding an IT investment.
In this paper we introduce pwnPr3d, a probabilistic threat modeling approach for automatic attack graph generation based on network modeling. The aim is to provide stakeholders in organizations with a holistic approach that offers both a high-level overview and technical details. Unlike many other threat modeling and attack graph approaches that rely heavily on manual work and security expertise, our language comes with built-in security analysis capabilities. pwnPr3d generates probability distributions over the time to compromise assets.
This paper proposes an approach, called pwnPr3d, for quantitatively estimating information security risk in ICT systems. Unlike many other risk analysis approaches that rely heavily on manual work and security expertise, this approach comes with built-in security risk analysis capabilities. pwnPr3d combines a network architecture modeling language and a probabilistic inference engine to automatically generate an attack graph, making it possible to identify threats along with the likelihood of these threats exploiting a vulnerability. After defining the value of information assets to their organization with regard to confidentiality, integrity and availability breaches, pwnPr3d allows users to automatically quantify information security risk over time, depending on the possible progression of the attacker. As a result, pwnPr3d provides stakeholders in organizations with a holistic approach that offers both a high-level overview and technical details.
Attack simulations are a feasible means to assess the cyber security of systems. The simulations trace the steps taken by an attacker to compromise sensitive system assets. Moreover, they allow estimating the time required by the intruder from the initial step to the compromise of assets of interest. One commonly accepted approach for such simulations is attack graphs, which model the attack steps and their dependencies in a formal way. To reduce the effort of creating new attack graphs for each system of a given type, domain-specific attack languages may be employed. They codify the common attack logic of the considered domain. Consequently, they ease the reuse of models and, thus, facilitate the modeling of a specific system in the domain. Previously, MAL (the Meta Attack Language) was proposed, which serves as a framework to develop domain-specific attack languages. In this article, we present vehicleLang, a Domain Specific Language (DSL) which can be used to model vehicles with respect to their IT infrastructure and to analyze their weaknesses related to known attacks. To model domain specifics in our language, we rely on existing literature and verify the language in an interview with a domain expert from the automotive industry. To evaluate our results, we performed a Systematic Literature Review (SLR) to identify possible attacks against vehicles. Those attacks serve as a blueprint for test cases checked against the vehicleLang specification.
Authorization and its enforcement, access control, have stood at the beginning of the art and science of information security, and remain a crucial pillar of the secure operation of IT. Dozens of different models of access control have been proposed. Although enterprise architecture as a discipline strives to support the management of IT, support for modeling authorization in enterprises is lacking, both in terms of supporting the variety of individual models in use today, and in terms of providing a unified metamodel capable of flexibly expressing configurations of all or most of the models. This study summarizes a number of existing models of access control, proposes a unified metamodel mapped to ArchiMate, and illustrates its use on a selection of simple cases.
Authorization and its enforcement, access control, have stood at the beginning of the art and science of information security, and remain a crucial pillar of security in information technology and enterprise operations. Dozens of different models of access control have been proposed. Although Enterprise Architecture as a discipline strives to support the management of IT, support for modeling access policies in enterprises is often lacking, both in terms of supporting the variety of individual models of access control in use today, and in terms of providing a unified ontology capable of flexibly expressing access policies for all or most of the models. This study summarizes a number of existing models of access control, proposes a unified metamodel mapped to ArchiMate, and illustrates its use on a selection of example scenarios and two cases.
Enterprise architecture (EA) has become an essential part of managing technology in large enterprises. These days, automated analysis of EA is gaining increased attention: using models of business and technology in combination to analyze aspects such as cyber security, complexity, cost, performance, and availability. However, gathering all the information needed and creating models for such analysis is a demanding and costly task. To lower the effort needed, a number of approaches have been proposed, the most common being automatic data collection and reference models. However, these approaches are all still very immature and not efficient enough for the discipline, especially when it comes to using the models for analysis and not only for documentation and communication purposes. In this paper we propose a format for representing reference models focused on analysis. The format is tested in a case within a large European project focusing on security in advanced metering infrastructure. Based on the format, we have thus created a reference model for smart metering architecture and cyber security analysis. On a theoretical level, we discuss the potential impact such a reference model can have.
The SCADA infrastructure is a key component for power grid operations. Securing the SCADA infrastructure against cyber intrusions is thus vital for a well-functioning power grid. However, the task remains a particular challenge, not least since not all available security mechanisms are easily deployable in these reliability-critical and complex, multi-vendor environments that host modern systems alongside legacy ones to support a range of sensitive power grid operations. This paper examines how effective a few countermeasures are likely to be in SCADA environments, including those that are commonly considered out of bounds. The results show that granular network segmentation is a particularly effective countermeasure, followed by frequent patching of systems (which is unfortunately still difficult to date). The results also show that the enforcement of a password policy and a restrictive network configuration, including whitelisting of devices, contribute to increased security, though best in combination with granular network segmentation.
This paper presents a mapping between the enterprise architecture framework ArchiMate and the Substation Configuration Language (SCL) of IEC 61850. Enterprise Architecture (EA) is a discipline for managing an enterprise's information system portfolio in relation to the business it supports. Metamodels, descriptive models of how to model and one of the core components of EA, can assist stakeholders in many ways, for example in decision-making. Moreover, the power industry is a domain with an increasing reliance on the support of information systems. IEC 61850 is a standard for the design of substation automation (SA) systems and provides a vendor-independent framework for interoperability by defining communication networks and functions. The SCL is the descriptive language of IEC 61850 for the configuration of substation Intelligent Electronic Devices (IEDs); it describes the structure of an SA system together with its physical components and their related functions. By mapping SCL, which models the architecture of SA systems, to ArchiMate, stakeholders are assisted in understanding their SA system and its architecture. The mapping is intended to support the integration of SA systems applying IEC 61850 into the enterprise architecture, and it is demonstrated on an example SA configuration expressed in SCL.
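As a rough illustration of what such a mapping can look like in practice, the sketch below parses a minimal SCL fragment and emits ArchiMate-style elements. The SCL snippet, and the choice to map IEDs to ArchiMate devices and logical devices to application components, are simplified assumptions for illustration, not the paper's actual mapping rules.

```python
# Hedged sketch: extracting IEDs from a minimal SCL (IEC 61850) document and
# mapping them to ArchiMate-style elements. The element-type choices
# (IED -> Device, LDevice -> ApplicationComponent) are illustrative only.
import xml.etree.ElementTree as ET

SCL_NS = "{http://www.iec.ch/61850/2003/SCL}"

scl_doc = """<SCL xmlns="http://www.iec.ch/61850/2003/SCL">
  <IED name="ProtectionIED">
    <AccessPoint name="AP1">
      <Server><LDevice inst="PROT"/></Server>
    </AccessPoint>
  </IED>
  <IED name="ControlIED">
    <AccessPoint name="AP1">
      <Server><LDevice inst="CTRL"/></Server>
    </AccessPoint>
  </IED>
</SCL>"""

def scl_to_archimate(xml_text):
    """Return (archimate_element_type, name) pairs for IEDs and their LDevices."""
    root = ET.fromstring(xml_text)
    elements = []
    for ied in root.iter(f"{SCL_NS}IED"):
        elements.append(("Device", ied.get("name")))
        for ldev in ied.iter(f"{SCL_NS}LDevice"):
            elements.append(("ApplicationComponent",
                             f"{ied.get('name')}/{ldev.get('inst')}"))
    return elements

for kind, name in scl_to_archimate(scl_doc):
    print(kind, name)
```

A full mapping would also cover substations, communication subnetworks, and logical nodes, but the pattern of walking the SCL tree and emitting typed architecture elements stays the same.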
A fast and continuously changing business environment demands flexible software systems that are easy to modify and maintain. Due to the extent of interconnection between systems and the internal quality of each system, many IT decision-makers find it difficult to predict the effort of making changes to their systems. To aid IT decision-makers in making better decisions regarding what modifications to make to their systems, this paper proposes extended influence diagrams and enterprise architecture models for maintainability analysis. A framework for assessing maintainability using enterprise architecture models is presented, and the approach is illustrated by a fictional example decision situation.
A fast and continuously changing business environment demands flexible software systems that are easy to modify and maintain. Due to the extent of interconnection between systems and the internal quality of each system, many IT decision-makers find it difficult to predict the effort of making changes to their systems. To aid IT decision-makers in making better decisions regarding what modifications to make to their systems, this article proposes extended influence diagrams and enterprise architecture models for maintainability analysis. A framework for assessing maintainability using enterprise architecture models is presented, and the approach is illustrated by a fictional example decision situation.
Contemporary enterprises depend to a great extent on software systems. During the past decades the number of systems has been constantly increasing, and these systems have become more integrated with one another. This has led to a growing complexity in managing software systems and their environment. At the same time, business environments today need to progress and change rapidly to keep up with evolving markets. As the business processes change, the systems need to be modified in order to continue supporting the processes.
The increase in complexity and the growing demand for rapid change make the management of enterprise systems a very important issue. In order to achieve effective and efficient management, it is essential to be able to analyze system modifiability (i.e., to estimate the future change cost). This is addressed in the thesis by employing architectural models. The contribution of this thesis is a method for software system modifiability analysis using enterprise architecture models. The contribution includes an enterprise architecture analysis formalism, a modifiability metamodel (i.e., a modeling language), and a method for creating metamodels. The proposed approach allows IT decision-makers to model and analyze change projects, and thereby obtain high-quality decision support regarding change project costs.
This thesis is a composite thesis consisting of five papers and an introduction. Paper A evaluates a number of analysis formalisms and proposes extended influence diagrams for enterprise architecture analysis. Paper B presents the first version of the modifiability metamodel. In Paper C, a method for creating enterprise architecture metamodels is proposed. This method aims to be general, i.e., it can be employed for other IT-related quality analyses such as interoperability, security, and availability; the paper does, however, use modifiability as a running case. The second version of the modifiability metamodel, for change project cost estimation, is fully described in Paper D. Finally, Paper E validates the proposed method and metamodel by surveying 110 experts and studying 21 change projects at four large Nordic companies. The validation indicates that the method and metamodel are useful, contain the right set of elements, and provide good estimation capabilities.
In this paper, we present a case where we employ the hidden structure method for product feature prioritization at Ericsson. The method extends the more common Design Structure Matrix (DSM) approach, which has long been used in technology management (e.g., project management and systems engineering) to model complex systems and processes. The hidden structure method focuses on analyzing a DSM based on coupling and modularity theory, and it has been used in a number of software architecture and software portfolio cases. In previous work by the authors the method was tested on organization transformation at Ericsson; however, this is the first time it has been employed in the domain of product feature prioritization. Today, at Ericsson, features are prioritized based on a business case approach in which each feature is handled in isolation from other features and the main focus is on customer or market-based requirements. By employing the hidden structure method we show that features are heavily dependent on each other in a complex network, and thus should not be treated as isolated islands. These dependencies need to be considered when prioritizing features in order to save time and money, as well as increase end-customer satisfaction.
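The core-identification step underlying this kind of analysis can be sketched briefly: direct dependencies form a binary DSM, its transitive closure gives a visibility matrix, and the "core" is the largest cyclic group, i.e., the largest set of mutually reachable components. The sketch below, with invented feature names and dependencies (not Ericsson data), illustrates the idea.

```python
# Sketch of the core-identification step of the hidden structure method.
# The feature names and the dependency graph are illustrative assumptions.

# deps[a] = set of items that a directly depends on (a binary DSM)
deps = {
    "F1": {"F2"},
    "F2": {"F3"},
    "F3": {"F1", "F4"},  # F1, F2, F3 form a dependency cycle
    "F4": set(),
    "F5": {"F1"},
}

def transitive_closure(deps):
    """Visibility: reach[a] = everything reachable from a via dependencies."""
    reach = {a: set(ds) for a, ds in deps.items()}
    changed = True
    while changed:
        changed = False
        for a in reach:
            new = set().union(*(reach[b] for b in reach[a]), reach[a])
            if new != reach[a]:
                reach[a] = new
                changed = True
    return reach

def cyclic_groups(reach):
    """Partition items into groups of mutually reachable items."""
    seen, groups = set(), []
    for a in reach:
        if a in seen:
            continue
        group = {a} | {b for b in reach if a in reach[b] and b in reach[a]}
        seen |= group
        groups.append(group)
    return groups

reach = transitive_closure(deps)
core = max(cyclic_groups(reach), key=len)
print(sorted(core), f"-> core is {100 * len(core) / len(deps):.0f}% of items")
```

In this toy graph the cycle F1-F2-F3 is the core (60% of items); removing a single dependency, e.g. F3 -> F1, would break the cycle and shrink the core, which is the kind of decoupling suggestion the analysis produces.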