Modern enterprises face the challenge of surviving in an ever-changing environment. One commonly accepted means to address this challenge and further enhance survivability is enterprise architecture (EA) management, which provides a holistic model-based approach to business/IT alignment. Decisions taken in the context of EA management are therefore based on accurate documentation of IT systems and business processes. Maintaining such documentation requires substantial investments from enterprises, especially in the absence of information on the change rates of different systems and processes. In this paper we propose a method for gathering and analyzing such information. The method is used to analyze the life spans of the application portfolios of three companies from different industry sectors. Based on the results of the three case studies, implications and limitations of the method are discussed.
Cybersecurity is the backbone of a successful digitalization of society, and cyber situation awareness is an essential aspect of managing it. The COVID-19 pandemic has sped up an already ongoing digitalization of Swedish government agencies, but the cybersecurity maturity level varies across agencies. In this study, we conduct a census of Swedish government administrative authority communications on cybersecurity to employees at the beginning of the COVID-19 pandemic. The census shows that employee communications at the beginning of the pandemic focused more on first-order risks, such as video meetings and telecommuting, than on second-order risks, such as invoice fraud or social engineering. We also find that almost two thirds of the administrative authorities have not yet implemented, but only initiated or documented, their cybersecurity policies.
The COVID-19 pandemic has accelerated the digitalization of the Swedish public sector, and cybersecurity plays an integral part in ensuring the success of this ongoing process. While Sweden has come far in digitalization, the maturity of cybersecurity work varies widely across entities. One way of improving cybersecurity is through communication, thereby enhancing employee cyber situation awareness. In this paper, we conduct a census of Swedish public sector employee communication on cybersecurity at the beginning of the COVID-19 pandemic using questionnaires. The study shows that public sector entities find the same sources of information useful for their cybersecurity work. We find that nearly two thirds of administrative authorities and almost three quarters of municipalities are not yet at the implemented cybersecurity level. We also find that 71% of municipalities have less than one dedicated staff member for cybersecurity.
In recent years, the Swedish public sector has undergone rapid digitalization, while cybersecurity efforts have not kept pace. This study investigates the conditions for cybersecurity work at Swedish administrative authorities by examining organizational conditions at the authorities, what cybersecurity staff do to acquire the cyber situation awareness required for their role, and what experience cybersecurity staff have with incidents. In this study, 17 semi-structured interviews were held with respondents from Swedish administrative authorities. The results showed the diverse conditions for cybersecurity work that exist at the authorities and that a variety of roles are involved in that work. It was found that national-level support for cybersecurity was perceived as somewhat lacking. There were also challenges in getting access to the information elements required for sufficient cyber situation awareness.
We study the impact of data sharing policies on cyber insurance markets. These policies have been proposed to address the scarcity of data about cyber threats, which is essential to manage cyber risks. We propose a Cournot duopoly competition model in which two insurers choose the number of policies they offer (i.e., their production level) and also the resources they invest to ensure the quality of data regarding the cost of claims (i.e., the data quality of their production cost). We find that enacting mandatory data sharing sometimes creates situations in which at most one of the two insurers invests in data quality, whereas both insurers would invest when information sharing is not mandatory. This raises concerns about the merits of making data sharing mandatory.
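As a rough illustration of the kind of competition model described above, the following sketch computes the Nash equilibrium of a standard Cournot duopoly with asymmetric marginal costs, where a hypothetical data-quality investment removes a risk loading from one insurer's expected claim cost. All parameter values, and the simple way the investment enters, are illustrative assumptions rather than the paper's actual model.

```python
# Minimal sketch (not the paper's model): Cournot duopoly with asymmetric
# marginal costs, where a hypothetical "data quality" investment removes the
# risk loading an insurer must add to its expected claim cost.

def cournot_equilibrium(a, b, c1, c2):
    """Closed-form Nash equilibrium quantities for inverse demand P = a - b(q1 + q2)."""
    q1 = (a - 2 * c1 + c2) / (3 * b)
    q2 = (a - 2 * c2 + c1) / (3 * b)
    return q1, q2

def profit(a, b, qi, qj, ci, investment=0.0):
    price = a - b * (qi + qj)
    return (price - ci) * qi - investment

# Hypothetical numbers: expected claim cost 40; poor data adds a risk loading of 10,
# which an investment of 150 removes.
a, b = 100.0, 1.0
base_cost, loading, invest_cost = 40.0, 10.0, 150.0

# Insurer 1 invests (effective cost 40), insurer 2 does not (effective cost 50).
q1, q2 = cournot_equilibrium(a, b, base_cost, base_cost + loading)
print("quantities:", round(q1, 2), round(q2, 2))
print("profit 1 (invests):   ", round(profit(a, b, q1, q2, base_cost, invest_cost), 2))
print("profit 2 (does not):  ", round(profit(a, b, q2, q1, base_cost + loading), 2))
```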
Cybersecurity is an important concern in systems-of-systems (SoS), where the effects of cyber incidents, whether deliberate attacks or unintentional mistakes, can propagate from an individual constituent system (CS) throughout the entire SoS. Unfortunately, the security of an SoS cannot be guaranteed by separately addressing the security of each CS. Security must also be addressed at the SoS level. This paper reviews some of the most prominent cybersecurity risks within the SoS research field and combines this with the cyber and information security economics perspective. This sets the scene for a structured assessment of how various cyber risks can be addressed in different SoS architectures. More precisely, the paper discusses the effectiveness and appropriateness of five cybersecurity policy options in each of the four assessed SoS archetypes and concludes that cybersecurity risks should be addressed using both traditional design-focused and more novel policy-oriented tools.
In order to successfully defend an IT system, it is useful to have an accurate appreciation of the cyber threat that goes beyond stereotypes. To effectively counter potentially decisive and skilled attackers it is necessary to understand, or at least model, their behavior. Although the real motives of untraceable anonymous attackers will remain a mystery, a thorough understanding of their observable actions can still help to create well-founded attacker profiles that can be used to design effective countermeasures and in other ways enhance cyber defense efforts. In recent work, empirically founded attacker profiles, so-called attacker personas, have been used to assess the overall threat situation for an organization. In this paper we elaborate on 1) the use of attacker personas as a technique for attacker profiling, 2) the design of tailor-made cyber defense exercises for the purpose of obtaining the necessary empirical data for the construction of such attacker personas, and 3) how attacker personas can be used for enhancing situational awareness within the cyber domain. The paper concludes by discussing the possibilities and limitations of using cyber defense exercises for data gathering, and what can and cannot be studied in such exercises.
In the cyber security landscape, the human ability to comprehend and adapt to existing and emerging threats is crucial. Not only technical solutions but also the operator’s ability to grasp the complexities of the threats affects the level of success or failure that is achieved in cyber defence. In this paper we discuss the general concept of situation awareness and associated measurement techniques. Further, we describe the cyber domain and how it differs from other domains, and show how predictive knowledge can help improve cyber defence. We discuss how selected existing models and measurement techniques for situation awareness can be adapted and applied in the cyber domain to measure actual levels of cyber situation awareness. We identify generic relevant criteria and other factors to consider, and propose a methodology to set up cyber situation awareness measurement experiments within the context of simulated cyber defence exercises. Such experiments can be used to test the viability of different cyber solutions. A number of concrete possible experiments are also suggested.
Enterprise Architecture (EA) management involves tasks that substantially contribute to the operations of an enterprise, and to its sustainable market presence. One important aspect of this is the availability of services to customers. However, the increasing interconnectedness of systems with other systems and with business processes makes it difficult to get a clear view on change impacts and dependency structures. While management level decision makers need this information to make sound decisions, EA models often do not include quality attributes (such as availability), and very rarely provide quantitative means to assess them. We address these shortcomings by augmenting an information model for EA modeling with concepts from Probabilistic Relational Models, thus enabling quantitative analysis. A sample business case is evaluated as an example of the technique, showing how decision makers can benefit from information on availability impacts on enterprise business services.
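To give a flavor of the quantitative analysis enabled by such an augmented information model, the following sketch propagates component availabilities through a small hypothetical architecture with serial dependencies and one redundant pair. It is a minimal illustration of availability arithmetic, not the Probabilistic Relational Model formalism itself; all components and figures are invented.

```python
# Minimal sketch (illustrative only): propagating component availabilities
# through an architecture where some systems are required in series and
# others are redundant (parallel), as a quantitative complement to an EA model.

def series(*avail):
    """All components must be up for the service to be up."""
    p = 1.0
    for a in avail:
        p *= a
    return p

def parallel(*avail):
    """The service is up if at least one redundant component is up."""
    p_down = 1.0
    for a in avail:
        p_down *= (1.0 - a)
    return 1.0 - p_down

# Hypothetical architecture: a customer service depends on an application server,
# a database cluster (two redundant nodes), and a network link.
app_server = 0.995
db_cluster = parallel(0.99, 0.99)
network = 0.999

service_availability = series(app_server, db_cluster, network)
print(f"estimated service availability: {service_availability:.5f}")
```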
Enterprise architecture advocates for model-based decision-making on enterprise-wide information system issues. In order to provide decision-making support, enterprise architecture models should not only be descriptive but also enable analysis. This paper presents a software tool, currently under development, for the evaluation of enterprise architecture models. In particular, the paper focuses on how to encode scientific theories so that they can be used for model-based analysis and reasoning under uncertainty. The tool architecture is described, and a case study shows how the tool supports the process of enterprise architecture analysis.
Enterprise architecture advocates model-based decision-making on enterprise-wide information system issues. In order to provide decision-making support, enterprise architecture models should not only be descriptive but also enable analysis. This paper presents a software tool, currently under development, for the evaluation of enterprise architecture models. In particular, the paper focuses on how to encode scientific theories so that they can be used for model-based analysis and reasoning under uncertainty. The tool architecture is described, and a case study shows how the tool supports the process of enterprise architecture analysis.
Getting ahead on the global stage of AI technologies requires vast resources or novel approaches. The Nordic countries have tried to find a novel path, claiming that responsible and ethical AI is not only morally right but confers a competitive advantage. In this article, eight official AI policy documents from Denmark, Finland, Norway and Sweden are analysed according to the AI4People taxonomy, which proposes five ethical principles for AI: beneficence, non-maleficence, autonomy, justice and explicability. The principles are described in terms such as growth, innovation, efficiency gains, cybersecurity, malicious use or misuse of AI systems, data use, effects on labour markets, and regulatory environments. The authors also analyse how the strategies describe the link between ethical principles and a competitive advantage, and what measures are proposed to facilitate that link. Links such as a first-mover advantage and measures such as influencing international standards and regulations are identified. The article concludes by showing that while ethical principles are present, neither the ethical principles nor the links and measures are made explicit in the policy documents.
In the past few years, the ethics and transparency of AI and other digital systems have received much attention. There is a vivid discussion on explainable AI, both among practitioners and in academia, with contributions from diverse fields such as computer science, human-computer interaction, law, and philosophy. Using the Value Sensitive Design (VSD) method as a point of departure, this paper explores how VSD can be used in the context of transparency. More precisely, it is investigated (i) if the VSD Envisioning Cards facilitate transparency as a pro-ethical condition, (ii) if they can be improved to realize ethical principles through transparency, and (iii) if they can be adapted to facilitate reflection on ethical principles in large groups. The research questions are addressed through a two-fold case study, combining one case where a larger audience participated in a reduced version of VSD with another case where a smaller audience participated in a more traditional VSD workshop. It is concluded that while the Envisioning Cards are effective in promoting ethical reflection in general, the realization of ethical values through transparency is not always similarly promoted. Therefore, it is proposed that a transparency card be added to the Envisioning Card deck. It is also concluded that a lightweight version of VSD seems useful in engaging larger audiences. The paper is concluded with some suggestions for future work.
The General Data Protection Regulation (GDPR) establishes a right for individuals to get access to information about automated decision-making based on their personal data. However, the application of this right comes with caveats. This paper investigates how European insurance companies have navigated these obstacles. By recruiting volunteering insurance customers, requests for information about how insurance premiums are set were sent to 26 insurance companies in Denmark, Finland, The Netherlands, Poland and Sweden. Findings illustrate the practice of responding to GDPR information requests and the paper identifies possible explanations for shortcomings and omissions in the responses. The paper also adds to existing research by showing how the wordings in the different language versions of the GDPR could lead to different interpretations. Finally, the paper discusses what can reasonably be expected from explanations in consumer-oriented information.
The GDPR aims to strengthen the rights of data subjects and to build trust in the digital single market. This is manifested by the introduction of a new principle of transparency. It is, however, not obvious what this means in practice: What kind of answers can be expected to GDPR requests citing the right to “meaningful information”? This is the question addressed in this article. Seven insurance companies, representing 90–95% of the Swedish home insurance market, were asked by consumers to disclose information about how premiums are set. Results are presented by first giving descriptive statistics, then characterizing the pricing information given, and lastly describing the procedural information offered by insurers as part of their answers. Overall, several different approaches to answering the request can be discerned, including different uses of examples, lists, descriptions of logic, legal basis as well as data related to the process of answering the requests. Results are analyzed in light of GDPR requirements. A number of potential improvements are identified—at least three responses are likely to fail the undue delay requirement. The article is concluded with a discussion about future work.
A tool for Enterprise Architecture analysis using a probabilistic mathematical framework is demonstrated. The Model-View-Controller tool architecture is outlined, before the use of the tool is considered. A sample abstract maintainability model is created, showing the dependence of system maintainability on documentation quality, developer expertise, etc. Finally, a concrete model of an ERP system is discussed.
Today, with rapidly developing technology and changing business models, organizations face rapid changes in both internal and external environments. To be able to rapidly respond to such changing environments, integration of software systems has become a top priority for many organizations. However, despite extensive use of software systems integration, quantitative methods for estimating the business value of such integrations are still missing. Using Data Envelopment Analysis (DEA) and the microeconomic concept of marginal rates, this study proposes a method for quantifying the effects of enterprise integration on firm performance. In the paper, we explain how DEA can be used to evaluate the marginal benefits of enterprise integration. Our proposed method measures and compares the productive efficiency of firms using enterprise integration, specifically by relating the benefits produced to the resources consumed in the process. The method is illustrated on data collected from 12 organizations. The method has a solid theoretical foundation, eliminating the need for a priori information about the relationship between different measures. Furthermore, the framework could be used not only to quantify the business value of enterprise integration, but also to estimate trade-offs and impacts of other subjective managerial goals on the results. The major limitation of the proposed method is the absence of a comprehensive theory relating IT architecture changes to organizational outcomes. The underlying model is strongly dependent on the relevancy and accuracy of the included variables, as well as the number of data units, introducing uncertainties into the outcomes of the model.
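For readers unfamiliar with DEA, the sketch below solves the input-oriented CCR model in multiplier form for a handful of hypothetical firms using scipy.optimize.linprog. The data, the choice of inputs and outputs, and the firm count are invented for illustration; the paper's method additionally derives marginal rates from such efficiency estimates.

```python
# Minimal sketch (not the paper's exact model): input-oriented CCR DEA in
# multiplier form, estimating the relative efficiency of each firm from
# hypothetical input/output data. Requires numpy and scipy.
import numpy as np
from scipy.optimize import linprog

# Hypothetical data: rows are firms, inputs = [integration spend, IT staff],
# outputs = [process throughput, revenue index].
X = np.array([[10.0, 5.0], [12.0, 4.0], [8.0, 6.0], [15.0, 7.0]])          # inputs
Y = np.array([[100.0, 80.0], [110.0, 70.0], [90.0, 95.0], [120.0, 90.0]])  # outputs

def ccr_efficiency(X, Y, o):
    n, m = X.shape
    _, s = Y.shape
    # Decision variables: output weights u (s values) followed by input weights v (m values).
    c = np.concatenate([-Y[o], np.zeros(m)])           # maximize u'y_o (minimize its negative)
    A_eq = np.concatenate([np.zeros(s), X[o]])[None]   # v'x_o = 1
    b_eq = [1.0]
    A_ub = np.hstack([Y, -X])                          # u'y_j - v'x_j <= 0 for all firms j
    b_ub = np.zeros(n)
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
                  bounds=[(0, None)] * (s + m), method="highs")
    return -res.fun

for o in range(len(X)):
    print(f"firm {o}: efficiency = {ccr_efficiency(X, Y, o):.3f}")
```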
This article reports the findings of a literature review concerning the potential benefits of Enterprise Integration (EI) for organizations. The review reveals the current state of the scientific literature concerning the potential benefits of EI, classified using a conceptual model of the enterprise. We believe that the results provide a consolidated and comprehensive picture of such potential benefits, useful as a baseline for future research. Additionally, the review is expected to assist practitioners in establishing business cases for EI by means of scientifically grounded reasoning about how EI benefits can contribute to the achievement of certain business goals. Additionally, results could be employed to develop methods or models capable of measuring such benefits in financial terms.
With increasing use of automated algorithmic decision-making, issues of algorithmic fairness have attracted much attention lately. In this growing literature, existing concepts from ethics and political philosophy are often applied to new contexts. The reverse—that novel insights from the algorithmic fairness literature are fed back into ethics and political philosophy—is far less established. However, this short commentary on Baumann and Loi (Philosophy & Technology, 36(3), 45, 2023) aims to do precisely this. Baumann and Loi argue that among the proposed algorithmic group fairness measures, one, sufficiency (well-calibration), is morally defensible for insurers to use, whereas independence (statistical parity or demographic parity) and separation (equalized odds) are not normatively appropriate in the insurance context. Such a result may seem to be of relatively narrow interest to insurers and insurance scholars only. We argue, however, that arguments such as that offered by Baumann and Loi have an important but so far overlooked connection to the derivation of the minimal state offered by Nozick (1974), and thus to political philosophy at large.
This short commentary on Peters (Philosophy & Technology 35, 2022) identifies the entrenchment of political positions as one additional concern related to algorithmic political bias, beyond those identified by Peters. First, it is observed that the political positions detected and predicted by algorithms are typically contingent and largely explained by “political tribalism”, as argued by Brennan (2016). Second, following Hacking (1999), the social construction of political identities is analyzed and it is concluded that algorithmic political bias can contribute to such identities. Third, following Nozick (1989), it is argued that purist political positions may stand in the way of the pursuit of all worthy values and goals to be pursued in the political realm and that to the extent that algorithmic political bias entrenches political positions, it also hinders this healthy “zigzag of politics”.
Information technology has become increasingly important to individuals and organizations alike. Not only does IT allow us to do what we always did faster and more effectively, but it also allows us to do new things, organize ourselves differently, and work in ways previously unimaginable. However, these advantages come at a cost: as we become increasingly dependent upon IT services, we also demand that they are continuously and uninterruptedly available for use. Despite advances in reliability engineering, the complexity of today's increasingly integrated systems offers a non-trivial challenge in this respect. How can high availability of enterprise IT services be maintained in the face of constant additions and upgrades, decade-long life-cycles, dependencies upon third-parties and the ever-present business-imposed requirement of flexible and agile IT services?
The contribution of this thesis includes (i) an enterprise architecture framework that offers a unique and action-guiding way to analyze service availability, (ii) identification of causal factors that affect the availability of enterprise IT services, (iii) a study of the use of fault trees for enterprise architecture availability analysis, and (iv) principles for how to think about availability management.
This thesis is a composite thesis of five papers. Paper 1 offers a framework for thinking about enterprise IT service availability management, highlighting the importance of variance of outage costs. Paper 2 shows how enterprise architecture (EA) frameworks for dependency analysis can be extended with Fault Tree Analysis (FTA) and Bayesian networks (BN) techniques. FTA and BN are proven formal methods for reliability and availability modeling. Paper 3 describes a Bayesian prediction model for systems availability, based on expert elicitation from 50 experts. Paper 4 combines FTA and constructs from the ArchiMate EA language into a method for availability analysis on the enterprise level. The method is validated by five case studies, where annual downtime estimates were always within eight hours from the actual values. Paper 5 extends the Bayesian prediction model from paper 3 and the modeling method from paper 4 into a full-blown enterprise architecture framework, expressed in a probabilistic version of the Object Constraint Language. The resulting modeling framework is tested in nine case studies of enterprise information systems.
Modern society is increasingly dependent on digital services, making their dependability a top priority. But while there is a consensus that cybersecurity is important, there is no corresponding agreement on the true extent of the problem, the most effective countermeasures, or the proper division of labor and responsibilities. This makes cybersecurity policy very difficult. This article addresses this issue based on observations and experiences from a period of guest research at the Swedish Financial Supervisory Authority (Finansinspektionen), which made it possible to study how cybersecurity policy is developed and implemented in the Swedish financial sector. Observations include policy implementation challenges related to squaring different roles and perspectives mandated by different laws, and to collaboration between independent government authorities, but also policy development challenges: How can the full range of perspectives and tools be included in cybersecurity policy development? As Sweden now revises its cybersecurity policy, this is a key issue.
The legendary Russian literary critic Belinsky famously described Pushkin’s novel in verse Eugene Onegin as an encyclopedia of Russian life. However, this encyclopedia seems seriously incomplete in that it largely leaves out elements of oppression, war, and insurrection. There are many valid explanations for this, but one, very blunt and prosaic, is that oppression and censorship actually worked – that it is absent in the fiction because it was present in reality. As a case in point, this article presents a novel translation into Swedish, with rhymes and meter preserved, of the fragments remaining of the unfinished tenth chapter of Eugene Onegin. This tenth chapter deals with the failed Decembrist uprising of 1825, and the misrule precipitating it, and it is not surprising that it could not be published at the time it was written. Though well known in the academic community, this fragment is rarely published in foreign translations, and as far as is known, this is the first translation into a Scandinavian language. The article offers some commentary on the translation and concludes with a few remarks on the value of reading the classics even in times of turmoil.
Recent advances in artificial intelligence offer many beneficial prospects. However, concerns have been raised about the opacity of decisions made by these systems, some of which have turned out to be biased in various ways. This article makes a contribution to a growing body of literature on how to make systems for automated decision-making more transparent, explainable, and fair by drawing attention to and further elaborating a distinction first made by Nozick (1993) between first-level bias in the application of standards and second-level bias in the choice of standards, as well as a second distinction between discrimination and arbitrariness. Applying the typology developed, a number of illuminating observations are made. First, it is observed that some reported bias in automated decision-making is first-level arbitrariness, which can be alleviated by explainability techniques. However, such techniques have only a limited potential to alleviate first-level discrimination. Second, it is argued that second-level arbitrariness is probably quite common in automated decision-making. In contrast to first-level arbitrariness, however, second-level arbitrariness is not straightforward to detect automatically. Third, the prospects for alleviating arbitrariness are discussed. It is argued that detecting and alleviating second-level arbitrariness is a profound problem because there are many contrasting and sometimes conflicting standards from which to choose, and even when we make intentional efforts to choose standards for good reasons, some second-level arbitrariness remains.
Wang (Philosophy & Technology 35, 2022) introduces a Foucauldian power account of algorithmic transparency. This short commentary explores when this power account is appropriate. It is first observed that the power account is a constructionist one, and that such accounts often come with both factual and evaluative claims. In an instance of Hume’s law, the evaluative claims do not follow from the factual claims, leaving open the question of how much constructionist commitment (Hacking, 1999) one should have. The concept of acts in equilibrium (Nozick, 1981) is then used to explain how different individuals reading Wang can end up with different evaluative attitudes towards algorithmic transparency, despite factual agreement. The commentary concludes by situating constructionist commitment inside a larger question of how much to think of our actions, identifying conflicting arguments.
High enterprise IT service availability is a key success factor throughout many industries. While understanding of the economic importance of availability management is becoming more widespread, the implications for management of Service Level Agreements (SLAs) and thinking about availability risk management are just beginning to unfold. This paper offers a framework within which to think about availability management, highlighting the importance of variance of outage costs. The importance of variance is demonstrated using simulations on existing data sets of revenue data. An important implication is that when outage costs are proportional to outage duration, more but shorter outages should be preferred to fewer but longer, in order to minimize variance. Furthermore, two archetypal cases where the cost of an outage depends non-linearly on its duration are considered. An optimal outage length is derived, and some guidance is also given for its application when the variance of hourly downtime costs is considered. The paper is concluded with a discussion about the feasibility of the method, its practitioner relevance and its implications for SLA management.
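The variance argument can be illustrated with a small Monte Carlo simulation: two outage regimes with the same expected annual downtime, one with many short outages and one with few long ones, and a cost proportional to duration. The rates, durations, and cost figure below are hypothetical and are not taken from the paper's data sets.

```python
# Minimal sketch (illustrative, not the paper's data): Monte Carlo comparison of
# annual outage-cost variance when the same expected downtime is split into
# many short outages versus a few long ones, with cost proportional to duration.
import numpy as np

rng = np.random.default_rng(0)
cost_per_hour = 10_000.0   # hypothetical outage cost rate
years = 100_000

def annual_costs(rate_per_year, mean_duration_h):
    n = rng.poisson(rate_per_year, size=years)
    # Total downtime per year = sum of exponentially distributed outage durations.
    downtime = np.array([rng.exponential(mean_duration_h, k).sum() for k in n])
    return cost_per_hour * downtime

many_short = annual_costs(rate_per_year=12, mean_duration_h=0.5)  # E[downtime] = 6 h
few_long = annual_costs(rate_per_year=2, mean_duration_h=3.0)     # E[downtime] = 6 h

print("mean cost (many short):", round(many_short.mean()))
print("mean cost (few long):  ", round(few_long.mean()))
print("std  cost (many short):", round(many_short.std()))
print("std  cost (few long):  ", round(few_long.std()))
```

With these illustrative parameters the two regimes have the same expected cost, but the few-long regime shows a clearly higher standard deviation, which is the effect the framework highlights.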
Modern society makes extensive use of automated algorithmic decisions, fueled by advances in artificial intelligence. However, since these systems are not perfect, questions about fairness are increasingly investigated in the literature. In particular, many authors take a Rawlsian approach to algorithmic fairness. Based on complications with this approach identified in the literature, this article discusses how Rawls’s theory in general, and especially the difference principle, should reasonably be applied to algorithmic fairness decisions. It is observed that proposals to achieve Rawlsian algorithmic fairness often aim to uphold the difference principle in the individual situations where automated decision-making occurs. However, the Rawlsian difference principle applies to society at large and does not aggregate in such a way that upholding it in constituent situations also upholds it in the aggregate. But such aggregation is a hidden premise of many proposals in the literature and its falsity explains many complications encountered.
Modern society makes extensive use of automated algorithmic decisions, fueled by advances in artificial intelligence. However, since these systems are not perfect, questions about fairness are increasingly investigated in the literature. In particular, many authors take a Rawlsian approach to algorithmic fairness. This article aims to identify some complications with this approach: Under which circumstances can Rawls’s original position reasonably be applied to algorithmic fairness decisions? First, it is argued that there are important differences between Rawls’s original position and a parallel algorithmic fairness original position with respect to risk attitudes. Second, it is argued that the application of Rawls’s original position to algorithmic fairness faces a boundary problem in defining relevant stakeholders. Third, it is observed that the definition of the least advantaged, necessary for applying the difference principle, requires some attention in the context of algorithmic fairness. Finally, it is argued that appropriate deliberation in algorithmic fairness contexts often requires more knowledge about probabilities than the Rawlsian original position allows. Provided that these complications are duly considered, the thought-experiment of the Rawlsian original position can be useful in algorithmic fairness decisions.
Today, most enterprises are increasingly reliant on information technology to carry out their operations. This also entails an increasing need for cyber situational awareness—roughly, to know what is going on in the cyber domain, and thus be able to adequately respond to events such as attacks or accidents. This chapter argues that cyber situational awareness is best understood by combining three complementary points of view: the technological, the socio-cognitive, and the organizational perspectives. In addition, the chapter investigates the prospects for reasoning about adversarial actions. This part also reports on a small empirical investigation where participants in the Locked Shields cyber defense exercise were interviewed about their information needs with respect to threat actors. The chapter is concluded with a discussion regarding important challenges to be addressed along with suggestions for further research.
As more enterprises buy information technology services, studying their underpinning contracts becomes more important. With cloud computing and outsourcing, these service level agreements (SLAs) are now often the only link between the business and the supporting IT services. This paper presents an experimental economics investigation of decision-making with regard to availability SLAs, among enterprise IT professionals. The method and the ecologically valid subjects make the study unique to date among IT service SLA studies. The experiment consisted of pairwise choices under uncertainty, and subjects (N = 46) were incentivized by payments based on one of their choices, randomly selected. The research question investigated in this paper is: Do enterprise IT professionals maximize expected value when procuring availability SLAs, as would be optimal from the business point of view? The main result is that enterprise IT professionals fail to maximize expected value. Whereas some subjects do maximize expected value, others are risk-seeking, risk-averse, or exhibit non-monotonic preferences. The nonmonotonic behavior in particular is an interesting observation, which has no obvious explanation in the literature. For a subset of the subjects (N = 29), a few further hypotheses related to associations between general attitude to risk or professional experience on the one hand, and behavior in SLAs on the other hand, were investigated. No support for these associations was found. The results should be interpreted with caution, due to the limited number of subjects. However, given the prominence of SLAs in modern IT service management, the results are interesting and call for further research, as they indicate that current professional decision-making regarding SLAs can be improved. In particular, if general attitude to risk and professional experience do not impact decision-making with regard to SLAs, more extensive use of decision-support systems might be called for in order to facilitate proper risk management.
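The normative benchmark used in the experiment, expected value maximization, can be stated in a few lines of code. The two SLA offers below are hypothetical and are not the stimuli used in the study; the point is only to make concrete what choosing the offer with the lowest expected total cost means.

```python
# Minimal sketch (hypothetical numbers, not the experiment's stimuli): comparing
# two availability SLA offers as lotteries over outage outcomes and choosing the
# one with the lowest expected total cost, i.e., the expected-value-maximizing choice.

def expected_cost(outcomes):
    """outcomes: list of (probability, annual outage cost) pairs."""
    return sum(p * cost for p, cost in outcomes)

sla_a = {"fee": 120_000, "outages": [(0.9, 0), (0.1, 400_000)]}
sla_b = {"fee": 150_000, "outages": [(0.98, 0), (0.02, 400_000)]}

for name, sla in [("A", sla_a), ("B", sla_b)]:
    total = sla["fee"] + expected_cost(sla["outages"])
    print(f"SLA {name}: expected total cost = {total}")
# An expected-value maximizer picks the offer with the lowest expected total cost.
```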
In recent years, Enterprise Architecture (EA) has become a discipline for business and IT-system management. While much research focuses on theoretical contributions related to EA, very few studies use statistical tools to analyze empirical data. This paper investigates the actual application of EA by giving a broad overview of the usage of enterprise architecture in Swedish, German, Austrian and Swiss companies. 162 EA professionals answered a survey originally focusing on the relation between IT/business alignment (ITBA) and EA. The dataset provides answers to questions such as: For how many years have companies been using EA models, tools, processes and roles? How is ITBA in relation to EA perceived at companies? In particular, the survey investigated quality attributes of EA related to IT-systems, business and IT governance. One important result is a set of interesting correlations between how these qualities are prioritized. For example, a high concern for interoperability correlates with a high concern for maintainability.
Analysis of dependencies between technical systems and business processes is an important part of the discipline of Enterprise Architecture (EA). However, EA models typically provide only visual and qualitative decision support. This paper shows how EA frameworks for dependency analysis can be extended into the realm of quantitative methods by use of the Fault Tree Analysis (FTA) and Bayesian networks (BN) techniques. Using DoDAF, the Department of Defense Architecture Framework, as an example, we provide a method for how these EA models can be adapted for use of FTA and BN. Furthermore, we use this method to perform dependency analysis and scenario evaluation on a sample DoDAF model.
Analysis of dependencies between technical systems and business processes is an important part of the discipline of Enterprise Architecture (EA). However, EA models typically provide only visual and qualitative decision support. This paper shows how EA frameworks for dependency analysis can be extended into the realm of quantitative methods by use of the Fault Tree Analysis (FTA) and Bayesian networks (BN) techniques. Using DoDAF - the Department of Defense Architecture Framework - as an example, we provide a method for how these EA models can be adapted for use of FTA and BN. Furthermore, we use this method to perform dependency analysis and scenario evaluation on a sample DoDAF model.
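A minimal fault tree calculation of the kind underlying the FTA and BN approach described above might look as follows: a business process fails if any of its supporting elements fails, and a redundant server pair fails only if both servers fail. Independence of basic events is assumed, as in basic FTA; the Bayesian network extension additionally allows dependent failures and evidence propagation. The architecture and probabilities are illustrative and not taken from the DoDAF examples.

```python
# Minimal sketch (illustrative, not the paper's DoDAF model): a tiny fault tree
# with OR and AND gates over independent basic events.

def or_gate(*p):
    """Failure if any input fails (independent basic events)."""
    ok = 1.0
    for pi in p:
        ok *= (1.0 - pi)
    return 1.0 - ok

def and_gate(*p):
    """Failure only if all inputs fail."""
    prod = 1.0
    for pi in p:
        prod *= pi
    return prod

server_fail = 0.02
database_fail = 0.01
network_fail = 0.005

service_fail = and_gate(server_fail, server_fail)       # redundant server pair
process_fail = or_gate(service_fail, database_fail, network_fail)
print(f"estimated probability of process failure: {process_fail:.5f}")
```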
Consolidation of IT resources is a frequently cited task for IT decision makers, aiming to remove redundancy and thereby to cut costs. However, while economically motivated, the methods described in the literature rarely address costs directly. Instead, the focus often remains on purely IT-related considerations. In this paper, IT consolidation is addressed from an operations research perspective, applying a binary integer programming model to find optimal solutions to consolidation problems. Since accurate cost estimates are vital to successful consolidation, and play an important role in the presented binary integer program, the paper also addresses the costs involved in consolidation, with a particular focus on the costs of modifying business processes. Applying the mathematical method, with accurate cost estimates, enables decision makers to make optimal decisions in a transparent and rigorous way. The use of the proposed method is demonstrated with an example based on a real consolidation problem from a large European power supplier.
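The following toy example shows what such a binary integer program can look like: x_i = 1 keeps application i, every business function must remain covered, and the objective sums the running costs of kept applications and the process-modification costs of retired ones. For this size the program is solved by enumeration; the applications, functions, and costs are hypothetical and not from the power supplier case.

```python
# Minimal sketch (hypothetical costs, not the paper's model): a tiny consolidation
# problem stated as a binary integer program and solved by enumeration.
from itertools import product

apps = ["A", "B", "C"]
run_cost = {"A": 100, "B": 80, "C": 60}
retire_process_cost = {"A": 10, "B": 40, "C": 70}   # cost of adapting processes if retired
covers = {"A": {"billing", "crm"}, "B": {"crm"}, "C": {"billing"}}
functions = {"billing", "crm"}

best = None
for x in product([0, 1], repeat=len(apps)):
    kept = {a for a, xi in zip(apps, x) if xi}
    # Feasibility: every business function must still be supported by some kept application.
    if not all(any(f in covers[a] for a in kept) for f in functions):
        continue
    cost = sum(run_cost[a] for a in kept) + \
           sum(retire_process_cost[a] for a in apps if a not in kept)
    if best is None or cost < best[0]:
        best = (cost, kept)

print("optimal portfolio:", sorted(best[1]), "total cost:", best[0])
```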
With the emergence of global digital service providers, concerns about digital oligopolies have increased, with a wide range of potentially harmful effects being discussed. One of these relates to cyber security, where it has been argued that market concentration can increase cyber risk. Such a state of affairs could have dire consequences for insurers and reinsurers, who underwrite cyber risk and are already very concerned about accumulation risk. Against this background, the paper develops some theory about how convex cyber risk affects Cournot oligopoly markets of data storage. It is demonstrated that with constant or increasing marginal production cost, the addition of increasing marginal cyber risk cost decreases the differences between the optimal numbers of records stored by the oligopolists, in effect offsetting the advantage of lower marginal production cost. Furthermore, based on the empirical literature on data breach cost, two possibilities are found: (i) that such cyber risk exhibits decreasing marginal cost in the number of records stored and (ii) the opposite possibility that such cyber risk instead exhibits increasing marginal cost in the number of records stored. The article is concluded with a discussion of the findings and some directions for future research.
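A numerical sketch of the offsetting effect is given below: in a Cournot duopoly with a quadratic (convex) cyber risk cost k*q^2 added to each firm's objective, the gap between the firms' equilibrium quantities shrinks as k grows. The demand and cost parameters are hypothetical, chosen only to illustrate the mechanism described in the paper rather than to reproduce its formal results.

```python
# Minimal sketch (hypothetical parameters): Cournot duopoly where each firm bears a
# convex cyber risk cost k*q^2 on the number of records stored. Increasing k narrows
# the gap between the firms' optimal quantities, offsetting the lower-cost firm's advantage.
import numpy as np

def equilibrium(a, b, c1, c2, k):
    # First-order conditions: (2b + 2k) q_i + b q_j = a - c_i
    A = np.array([[2 * b + 2 * k, b], [b, 2 * b + 2 * k]])
    rhs = np.array([a - c1, a - c2])
    return np.linalg.solve(A, rhs)

a, b, c1, c2 = 100.0, 1.0, 20.0, 40.0   # firm 1 has the lower marginal production cost
for k in [0.0, 0.5, 2.0, 10.0]:
    q1, q2 = equilibrium(a, b, c1, c2, k)
    print(f"k={k:>4}: q1={q1:6.2f}, q2={q2:6.2f}, gap={q1 - q2:6.2f}")
```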
What constitutes an enterprise architecture framework is a contested subject. The contents of present enterprise architecture frameworks thus differ substantially. This paper aims to alleviate the confusion regarding which framework contains what by proposing a meta framework for enterprise architecture frameworks. By using this meta framework, decision makers are able to express their requirements on what their enterprise architecture framework must contain and also to evaluate whether existing frameworks meet these requirements. An example classification of common EA frameworks illustrates the approach.
In this paper, application consolidation using Enterprise Architecture methods is considered, with an ongoing project in the Swedish Armed Forces as the point of departure. The decision-making features of application consolidation are first analyzed and formalized from the perspective of decision theory. Applying these insights, a more practical framework is then proposed, based primarily on the ISO/IEC 9126 standard, the Ministry of Defence Architecture Framework (MODAF), and the formalism of Probabilistic Relational Models (PRM). This framework supports cost-benefit analysis of application consolidation decision-making, thus helping to make these decisions more structured and transparent.
Analysis of dependencies between information systems, business processes, and strategic goals is an important part of the discipline of Enterprise Architecture (EA). However, EA models typically provide only visual and qualitative decision support. This paper shows how EA frameworks for dependency analysis can be extended into the realm of quantitative methods by the use of techniques from Fault Tree Analysis (FTA). Using MODAF, the UK Ministry of Defence Architecture Framework, as an example, we give a list of criteria for the extraction of a metamodel for FTA use, and provide such a metamodel for MODAF. Furthermore, we use this MODAF FTA metamodel to perform dependency analysis on a sample MODAF model.
This paper presents an integrated enterprise architecture framework for qualitative and quantitative modeling and assessment of enterprise IT service availability. While most previous work has either focused on formal availability methods such as fault trees or qualitative methods such as maturity models, this framework offers a combination. First, a modeling and assessment framework is described. In addition to metamodel classes, relationships and attributes suitable for availability modeling, the framework also features a formal computational model written in a probabilistic version of the object constraint language. The model is based on 14 systemic factors impacting service availability and also accounts for the structural features of the service architecture. Second, the framework is empirically tested in nine enterprise information system case studies. Based on an initial availability baseline and the annual evolution of the 14 factors of the model, annual availability predictions are made and compared with the actual outcomes as reported in SLA reports and system logs. The practical usefulness of the method is discussed based on the outcomes of a workshop conducted with the participating enterprises, and some directions for future research are offered.
Ensuring the availability of enterprise IT systems is a challenging task. The factors that can bring systems down are numerous, and their impact on various system architectures is difficult to predict. At the same time, maintaining high availability is crucial in many applications, ranging from control systems in the electric power grid and electronic trading systems on the stock market to specialized command and control systems for military and civilian purposes. This paper describes a Bayesian decision support model, designed to help enterprise IT systems decision makers evaluate the consequences of their decisions by analyzing various scenarios. The model is based on expert elicitation from 50 experts on IT systems availability, obtained through an electronic survey. The Bayesian model uses a leaky Noisy-OR method to weigh together the expert opinions on 16 factors affecting systems availability. Using this model, the effect of changes to a system can be estimated beforehand, providing decision support for improvement of enterprise IT systems availability. The Bayesian model thus obtained is then integrated within a standard, reliability block diagram-style, mathematical model for assessing availability on the architecture level. In this model, the IT systems play the role of building blocks. The overall assessment framework thus addresses measures to ensure high availability both on the level of individual systems and on the level of the entire enterprise architecture. Examples are presented to illustrate how the framework can be used by practitioners aiming to ensure high availability.
Ensuring the availability of enterprise IT systems is a challenging task. The factors that can bring systems down are numerous, and their impact on various system architectures is difficult to predict. At the same time, maintaining high availability is crucial in many applications, ranging from control systems in the electric power grid and electronic trading systems on the stock market to specialized command and control systems for military and civilian purposes. The present paper describes a Bayesian decision support model, designed to help enterprise IT systems decision makers evaluate the consequences of their decisions by analyzing various scenarios. The model is based on expert elicitation from 50 academic experts on IT systems availability, obtained through an electronic survey. The Bayesian model uses a leaky Noisy-OR method to weigh together the expert opinions on 16 factors affecting systems availability. Using this model, the effect of changes to a system can be estimated beforehand, providing decision support for improvement of enterprise IT systems availability.
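A leaky Noisy-OR combination of availability factors, of the kind used in the two models described above, can be sketched as follows. The factor names and probabilities are invented for illustration and do not reproduce the 16 elicited factors or their expert-derived weights.

```python
# Minimal sketch (hypothetical probabilities, not the elicited model): a leaky
# Noisy-OR combination of factors that can cause unavailability. Each deficient
# factor independently causes a problem with its own probability, and the leak
# term captures causes not modeled explicitly.

def leaky_noisy_or(p_leak, factor_probs, deficient):
    """P(availability problem) given which factors are deficient."""
    p_ok = 1.0 - p_leak
    for name, p in factor_probs.items():
        if name in deficient:
            p_ok *= (1.0 - p)
    return 1.0 - p_ok

factor_probs = {           # hypothetical per-factor causal strengths
    "no redundancy": 0.10,
    "poor change management": 0.07,
    "insufficient monitoring": 0.05,
    "outdated documentation": 0.02,
}
p_leak = 0.01

baseline = leaky_noisy_or(p_leak, factor_probs, deficient=set())
worst = leaky_noisy_or(p_leak, factor_probs, deficient=set(factor_probs))
improved = leaky_noisy_or(p_leak, factor_probs,
                          deficient={"poor change management", "outdated documentation"})

print(f"all factors addressed:              {baseline:.3f}")
print(f"all factors deficient:              {worst:.3f}")
print(f"after adding redundancy+monitoring: {improved:.3f}")
```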
Creating accurate models of information systems is an important but challenging task. It is generally well understood that such modeling encompasses general scientific issues, but the monetary aspects of the modeling of software systems are not equally well acknowledged. The present paper describes a method using Bayesian networks for optimizing modeling strategies, perceived as a trade-off between these two aspects. Using GeNIe, a graphical tool with the proper Bayesian algorithms implemented, decision support can thus be provided to the modeling process. Specifically, an informed trade-off can be made, based on the modeler's prior knowledge of the predictive power of certain models, combined with his projection of their costs. It is argued that this method might enhance modeling of large and complex software systems in two principal ways: Firstly, by enforcing rigor and making hidden assumptions explicit. Secondly, by enforcing cost awareness even in the early phases of modeling. The method should be used primarily when the choice of modeling can have great economic repercussions.
Creating accurate models of information systems is an important but challenging task. While the scientific aspects of such modeling are generally acknowledged, the monetary aspects of the modeling of software systems are not. The present paper describes a Bayesian method for optimizing modeling strategies, perceived as a trade-off between these two aspects. Specifically, an informed trade-off can be made, based on the modeler's prior knowledge of the predictive power of certain models, combined with her projection of the costs. It is argued that this method enhances modeling of large and complex software systems in two principal ways: Firstly, by enforcing rigor and making hidden assumptions explicit. Secondly, by enforcing cost awareness even in the early phases of modeling. The method should be used primarily when the choice of modeling can have great economic repercussions.
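The trade-off described in the two papers above can be illustrated even without a full Bayesian network: for each candidate modeling strategy, combine the modeler's prior belief in its predictive power with its projected cost into an expected net value, and pick the best. The strategies, probabilities, and costs below are hypothetical, and the GeNIe-based method in the papers is considerably richer.

```python
# Minimal sketch (hypothetical numbers, not the GeNIe model): choosing a modeling
# strategy by trading off prior belief in predictive power against projected cost.

decision_value = 500_000          # assumed value of making the right system decision
strategies = {
    # name: (prior probability the model yields a correct prediction, modeling cost)
    "coarse architecture sketch":    (0.60, 20_000),
    "detailed static model":         (0.75, 80_000),
    "detailed model + measurements": (0.90, 200_000),
}

def expected_net_value(p_correct, cost):
    return p_correct * decision_value - cost

for name, (p, cost) in strategies.items():
    print(f"{name:32s}: expected net value = {expected_net_value(p, cost):>9,.0f}")
# The strategy with the highest expected net value is the rational choice under
# these (illustrative) priors and cost projections.
```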
Today, engineering and other technology-intensive businesses are increasingly carried out in project form. This allows better use of scarce resources, a simplified decision-making process as well as specific organization forms being tailored to the task at hand. However, despite these advantages, complex industrial projects often fail in the sense that budgets, time-frames and customer requirements are not met. Organizations also spend lots of money on introducing project models and certifying their project managers. Are these actions really efficient, i.e. do they lead to improved management of projects? The aim of this article is to describe the impact of (i) use of a project model, (ii) project manager certification, and (iii) size of organization on (a) employment of risk analysis, (b) existence of a project sponsor, (c) existence of a steering committee, and (d) existence of a project manager task description, respectively. This paper presents correlations derived from N=59 semi-structured interviews with project managers of technology centered projects from several countries. Two statistically significant relations were found through a correlation analysis: between the use of a project model and the existence of a steering committee, and between the use of a project model and the existence of a project manager task description.
The JQRR metrics for Information Assurance (IA) and Computer Network Defense (CND) are combined with a framework based on defense graphs. This enables the use of architectural models for rational decision making, based on the mathematical rigor of extended influence diagrams. A sample abstract model is provided, along with a simple example of its usage to assess access control vulnerability.
The NIS Directive aims to increase the overall level of cyber security in the EU and establishes a mandatory reporting regime for operators of essential services and digital service providers. While this reporting has attracted much attention, both in society at large and in the scientific community, the non-public nature of reports has led to a lack of empirically based research. This paper uses the unique set of all the mandatory NIS reports in Sweden in 2020 to shed light on incident costs. The costs reported exhibit large variability and skewed distributions, where a single or a few higher values push the average upwards. Numerical values are in the range of tens to hundreds of kSEK per incident. The most common incident causes are malfunctions and mistakes, whereas attacks are rare. No operators funded their incident costs using loans or insurance. Even though the reporting is mandated by law, operator cost estimates are incomplete and sometimes difficult to interpret, calling for additional assistance and training of operators to make the data more useful.
Models are an integral part of the discipline of Enterprise Architecture (EA). To stay relevant to management decision-making needs, the models need to be based upon suitable metamodels. These metamodels, in turn, need to be properly and continuously maintained. While several methods exist for metamodel development and maintenance, these typically focus on internal metamodel qualities and metamodel engineering processes, rather than on the actual decision-making needs and their impact on the metamodels used. The present paper employs techniques from information theory and learning classification trees to propose a method for metamodel management based upon the value added by entities and attributes to the decision-making process. This allows for the removal of those metamodel parts that give the least "bang for the buck" in terms of decision support. The method proposed is illustrated using real data from an ongoing research project on systems modifiability.
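The core idea, ranking metamodel elements by the decision support they add, can be sketched with a plain information gain computation over a toy data set. The attribute names, values, and decision variable below are invented for illustration; the paper applies the same principle to real modifiability data.

```python
# Minimal sketch (toy data, not the project's data set): ranking metamodel
# attributes by information gain with respect to a decision variable, so that
# attributes adding little decision support become candidates for removal.
from collections import Counter
from math import log2

def entropy(labels):
    n = len(labels)
    return -sum((c / n) * log2(c / n) for c in Counter(labels).values())

def information_gain(rows, attribute, target):
    base = entropy([r[target] for r in rows])
    remainder = 0.0
    for value in {r[attribute] for r in rows}:
        subset = [r[target] for r in rows if r[attribute] == value]
        remainder += len(subset) / len(rows) * entropy(subset)
    return base - remainder

# Each row: attribute values recorded in the EA model and the decision outcome.
rows = [
    {"coupling": "high", "documentation": "poor", "modifiability": "low"},
    {"coupling": "high", "documentation": "good", "modifiability": "low"},
    {"coupling": "low",  "documentation": "poor", "modifiability": "high"},
    {"coupling": "low",  "documentation": "good", "modifiability": "high"},
    {"coupling": "low",  "documentation": "poor", "modifiability": "high"},
    {"coupling": "high", "documentation": "good", "modifiability": "low"},
]

for attr in ["coupling", "documentation"]:
    print(f"{attr}: information gain = {information_gain(rows, attr, 'modifiability'):.3f}")
# Attributes with near-zero gain contribute little to the decision and are
# candidates for pruning from the metamodel.
```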
In this paper we explore cyber security practices in Swedish manufacturing firms. Manufacturing is being transformed by new technologies under the label of smart industry or industry 4.0. Most of these technologies are either digital themselves or depend on digital connectivity. Their use is made possible by electronic sensors, actuators, and other devices as well as by data-driven analysis. This technological change entails a fundamental shift in risk and security as devices become interconnected, making information and control transmissible both within and to varying degree outside the firm's organization. These issues must be addressed to prevent both unintentional and intentional security incidents. Thus, there will be no smart industry without cyber security. Based on a sector-wide survey with 649 respondents (17% response rate) carried out in collaboration with the Association of Swedish Engineering Industries, we map risk perception and the controls put in place to address these risks across firms. We present three primary findings: (i) Compared to how firms value further investments in digitalization, risk perception related to cyber security issues is fairly low and business interruption is a greater cause for worry than data breach, (ii) there is a gap between the anticipated impact of digitalization and the perceived need for cyber security measures across business functions within firms, and (iii) the implementation of cyber security measures is still in its infancy with a significant bias towards technological measures, leaving organizational and social cyber security measures underrepresented. The paper is concluded with the identification of a few interesting follow-up questions for future work.
Large investments are made annually to develop and maintain IT systems. Successful outcomes of IT projects are therefore crucial for the economy. Yet, many IT projects fail completely, are delayed or over budget, or end up with less functionality than planned. This article describes a Bayesian decision-support model. The model is based on expert-elicited data from 51 experts. Using this model, the effect management decisions have upon projects can be estimated beforehand, thus providing decision support for the improvement of IT project performance.