A company's Chief Information Officer (CIO) is responsible for the management and evolution of the enterprise information system. An approach suggested as an aid for the CIO's decision-making process is Enterprise Architecture, based on architectural models of both the enterprise information system and its context. For architectural models to function as decision-making support, this paper argues that they must be amenable to architectural analysis. The purpose of this paper is to demonstrate the importance of architectural theory in the analysis of architectural models of the enterprise information system. Architectural theory diagrams are proposed as means for presenting and comparing architectural theories as well as for assessing the analytical value of architectural models.
Time between vulnerability disclosure (TBVD) for individual analysts is proposed as a meaningful measure of the likelihood of finding a zero-day vulnerability within a given timeframe. Based on publicly available data, probabilistic estimates of the TBVD of various software products are provided. Sixty-nine thousand six hundred forty-six vulnerabilities from the National Vulnerability Database (NVD) and the SecurityFocus Vulnerability Database were harvested, integrated and categorized according to the analysts responsible for their disclosure as well as by the affected software products. Probability distributions were fitted to the TBVD per analyst and product. Among competing distributions, the Gamma distribution demonstrated the best fit, with the shape parameter, k, similar for most products and analysts, while the scale parameter, θ, differed significantly. For forecasting, autoregressive models of the first order were fitted to the TBVD time series for various products. Evaluation demonstrated that forecasting of TBVD on a per-product basis was feasible. Products were also characterized by their relative susceptibility to vulnerabilities with impact on confidentiality, integrity and availability, respectively. The differences in TBVD between products are significant, e.g. spanning differences of over 500% among the 20 most common software products in our data. Differences are further accentuated by the differing impact, so that, e.g., the mean working time between disclosure of vulnerabilities with a complete impact on integrity (as defined by the Common Vulnerability Scoring System) for Linux (110 days) exceeds that of Windows 7 (6 days) by over 18 times.
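The two modeling steps described above, distribution fitting and first-order autoregressive forecasting, can be sketched as follows. The data below is synthetic and all parameter values are purely illustrative; reproducing the study's estimates would require the harvested NVD/SecurityFocus data.

```python
import numpy as np

rng = np.random.default_rng(42)
# Hypothetical TBVD series (days between disclosures), synthetic stand-in data.
tbvd = rng.gamma(shape=1.2, scale=30.0, size=1000)

# Method-of-moments Gamma fit: k = mean^2 / var, theta = var / mean.
mean, var = tbvd.mean(), tbvd.var()
k_hat = mean**2 / var        # shape parameter k
theta_hat = var / mean       # scale parameter theta

# First-order autoregressive model x_t = c + phi * x_{t-1}, fitted by
# ordinary least squares on the lagged series.
X = np.column_stack([np.ones(len(tbvd) - 1), tbvd[:-1]])
c_hat, phi_hat = np.linalg.lstsq(X, tbvd[1:], rcond=None)[0]
one_step_forecast = c_hat + phi_hat * tbvd[-1]
```

A maximum-likelihood fit (e.g. `scipy.stats.gamma.fit`) would typically be preferred in practice; the method of moments keeps the sketch self-contained.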
In the design phase of business collaboration, it is desirable to be able to predict the profitability of the business-to-be. Therefore, techniques to assess qualities such as costs, revenues, risks, and profitability have been previously proposed. However, they do not allow the modeler to properly manage uncertainty with respect to the design of the considered business collaboration. In many real collaboration projects today, uncertainty regarding the business' present or future characteristics is so significant that ignoring it becomes problematic. In this paper, we propose an approach based on the Predictive, Probabilistic Architecture Modeling Framework (P2AMF), capable of advanced and probabilistically sound reasoning about profitability risks. The P2AMF-based approach for profitability risk prediction is also based on the e3-value modeling language and on the Object Constraint Language (OCL). The paper introduces the prediction and modeling approach, and a supporting software tool. The use of the approach is illustrated by means of a case.
In the design phase of business collaboration, it is desirable to be able to predict the profitability of the business-to-be. Therefore, techniques to assess qualities such as costs, revenues, risks, and profitability have been previously proposed. However, they do not allow the modeler to properly manage uncertainty with respect to the design of the considered business collaboration. In many real collaboration projects today, uncertainty regarding the business' present or future characteristics is so significant that ignoring it becomes problematic. In this paper, we propose an approach based on the Predictive, Probabilistic Architecture Modeling Framework (P2AMF), capable of advanced and probabilistically sound reasoning about profitability risks. The P2AMF-based approach for profitability risk prediction is also based on the e3-value modeling language and on the Object Constraint Language (OCL). The paper introduces the prediction and modeling approach, and a supporting software tool. The use of the approach is illustrated by means of a case study originated from the Stockholm Royal Seaport smart city project.
Most academic disciplines emphasize the importance of their general theories. Examples of well-known general theories include the Big Bang theory, Maxwell's equations, the theory of the cell, the theory of evolution, and the theory of demand and supply. Less known to the wider audience, but established within their respective fields, are theories with names such as the general theory of crime and the theory of marriage. Few general theories of software engineering have, however, been proposed, and none have achieved significant recognition. This workshop, organized by the SEMAT initiative, aims to provide a forum for discussing the concept of a general theory of software engineering. The topics considered include the benefits, the desired qualities, the core components and the form of such a theory.
Business processes are increasingly dependent on their supporting information systems. With this dependence comes an increased security risk with respect to the information flowing through the processes. This paper presents a method for assessment of the level of information security within business processes in the form of a percentage number, where a high score indicates good information security and a low score indicates a poor level of information security. The method also provides a numerical estimate of the credibility of the information security score, so that an assessment based on few and uncertain pieces of evidence is associated with low credibility and an assessment based on a large set of trustworthy evidence is associated with high credibility. A common problem with information security assessments is the cost related to collecting the required evidence. The paper proposes an evidence collection strategy designed to minimize the effort spent on gathering assessment data while maintaining the desired credibility of the results. A case study is presented, demonstrating the use of the method.
The discipline of enterprise architecture advocates the use of models to support decision-making on enterprise-wide information system issues. In order to provide such support, enterprise architecture models should be amenable to analyses of various properties, such as the availability, performance, interoperability, modifiability, and information security of the modeled enterprise information systems. This paper presents a software tool for such analyses. The tool guides the user in the generation of enterprise architecture models and subjects these models to analyses resulting in quantitative measures of the chosen quality attribute. The paper describes and exemplifies both the architecture and the usage of the tool.
The discipline of enterprise architecture advocates the use of models to support decision-making on enterprise-wide information system issues. In order to provide such support, enterprise architecture models should be amenable to analyses of various properties, such as the level of enterprise information security. This paper proposes the use of a formal language to support such analysis. Such a language needs to be able to represent causal relations between, and definitions of, various concepts as well as uncertainty with respect to both concepts and relations. To support decision-making properly, the language must also allow the representation of goals and decision alternatives. This paper evaluates a number of languages with respect to these requirements, and selects influence diagrams for further consideration. The influence diagrams are then extended to fully satisfy the requirements. The syntax and semantics of the extended influence diagrams are detailed in the paper, and their use is demonstrated in an example.
Making major changes in enterprise information systems, such as large IT-investments, often has a significant impact on business operations. Moreover, when deliberating which IT-changes to make, the consequences of choosing a certain scenario may be difficult to grasp. One way to ascertain the quality of IT-investment decisions is through the use of methods from decision theory. This paper proposes the use of one such method to facilitate IT-investment decision making, viz. extended influence diagrams. An extended influence diagram is a tool able to completely describe and analyse a decision situation. The applicability of extended influence diagrams is demonstrated at the end of the paper by using an extended influence diagram in combination with the ISO/IEC 9126 software quality characteristics and metrics as means to assist a decision maker in a decision regarding an IT-investment.
Software engineering needs a general theory, i.e., a theory that applies across the field and unifies existing empirical and theoretical work. General theories are common in other domains, such as physics. While many software engineering theories exist, no general theory of software engineering is evident. Consequently, this report reviews the emerging consensus on a general theory in software engineering from the Second SEMAT General Theory of Software Engineering workshop co-located with the International Conference on Software Engineering in 2013. Participants agreed that a general theory is possible and needed, should explain and predict software engineering phenomena at multiple levels, including social processes and technical artifacts, should synthesize existing theories from software engineering and reference disciplines, should be developed iteratively, should avoid common misconceptions and atheoretical concepts, and should respect the complexity of software engineering phenomena. However, several disputes remain, including concerns regarding ontology, epistemology, level of formality, and how exactly to proceed with formulating a general theory.
In the design phase of business and IT system development, it is desirable to predict the properties of the system-to-be. A number of formalisms to assess qualities such as performance, reliability and security have therefore previously been proposed. However, existing prediction systems do not allow the modeler to express uncertainty with respect to the design of the considered system. Yet, in contemporary business, the high rate of change in the environment leads to uncertainties about present and future characteristics of the system, so significant that ignoring them becomes problematic. In this paper, we propose a formalism, the Predictive, Probabilistic Architecture Modeling Framework (P2AMF), capable of advanced and probabilistically sound reasoning about business and IT architecture models, given in the form of Unified Modeling Language class and object diagrams. The proposed formalism is based on the Object Constraint Language (OCL). To OCL, P2AMF adds a probabilistic inference mechanism. The paper introduces P2AMF, describes its use for system property prediction and assessment and proposes an algorithm for probabilistic inference.
In the design phase of business and software system development, it is desirable to predict the properties of the system-to-be. Existing prediction systems do, however, not allow the modeler to express uncertainty with respect to the design of the considered system. In this paper, we propose a formalism, the Predictive, Probabilistic Architecture Modeling Framework (P2AMF), capable of advanced and probabilistically sound reasoning about architecture models given in the form of UML class and object diagrams. The proposed formalism is based on the Object Constraint Language (OCL). To OCL, P2AMF adds a probabilistic inference mechanism. The paper introduces P2AMF, describes its use for system property prediction and assessment, and proposes an algorithm for probabilistic inference.
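The general idea of adding probabilistic inference to OCL-style constraints can be illustrated with a deliberately minimal Monte Carlo sketch. This is not the P2AMF engine or its inference algorithm; the element names and availability probabilities are invented for illustration only.

```python
import random

# Toy model: a boolean constraint over two uncertain model elements,
#   server.isAvailable AND backup.isAvailable,
# where each attribute is only known up to an (assumed, independent)
# probability attached to the architecture model.
random.seed(1)

p_server, p_backup = 0.95, 0.80   # hypothetical attribute probabilities
N = 100_000

# Monte Carlo: sample concrete worlds and evaluate the constraint in each.
hits = sum(
    (random.random() < p_server) and (random.random() < p_backup)
    for _ in range(N)
)
estimate = hits / N   # converges to 0.95 * 0.80 = 0.76 under independence
```

Sampling over entire object diagrams (including uncertain existence of objects and links) generalizes this same evaluate-per-sampled-world pattern.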
In order to successfully govern an enterprise, business and information technology must be aligned. With proper alignment come many benefits. Undoubtedly, the challenge and complexity of this endeavour is enormous and has been addressed previously in research literature and by practitioners. The Strategic Alignment Maturity Assessment Framework has been proposed as a comprehensive tool for the evaluation of business-IT alignment. The framework has roots in academic research and aims at analyzing the current situation in an organization. Further, it provides hands-on suggestions on how alignment can be improved. This framework has been used in three case studies as a main tool for the identification of actions that would lead to an improvement of business-IT alignment, each in its respective company. All of the companies involved in the study had problems with the translation of the output gained through maturity assessment into actions. Starting with the knowledge acquired from the case studies, this paper analyzes Enterprise Architecture and Generic Framework for the Business-IT relationship and discusses them from the practitioners' point of view.
The ongoing process of globalization drives the offshoring trend of IT-development and services in Sweden as well as in other industrialized countries. In contrast to the industrial companies that lead the trend, Swedish banks are extremely restrictive in this sense. In comparison to the US banking business, this is considered atypical behavior. In this paper we present the results of an investigation, based on interviews with the four largest Swedish banks, into their main reasons not to offshore IT services.
The role of quality control is to correctly translate customers' requirements into materialized products in order to achieve customer satisfaction. Today, the Product Development Process (PDP) has become the stage on which to improve and obtain successful products. In the 1980s, Six Sigma appeared as an attempt to improve certain aspects of products and services in order to reduce failures in production as much as possible. But there was still a lack of customer perspective in product development projects. This thesis approaches the lack of customer insight with a new product development process, Design for Six Sigma (DFSS). Examples that describe how DFSS works and how it is implemented are still missing; the purpose of this master thesis is therefore to investigate whether an implementation of DFSS would be efficient, effective and more closely connected to the customers of mass-production companies. DFSS is based on processes, roles and tools. However, it is questionable whether the suggested role structure and a uniform process can be implemented in the real world.
The methods used in this research project were interviews with two mass-customization companies, qualitative data collection, and analysis of the collected data through comparison with the theory of DFSS.
This PDP keeps the project on track and enhances creativity and innovation, taking the process beyond its previous capabilities.
Moreover, the conclusions of the thesis show that there is no need for a uniform model, which makes DFSS more useful and harder for competitors to copy. The absence of a uniform model leaves room for innovative ability. The skills required to fill the roles, and the creation of a team within the company, are more important than the roles and backgrounds themselves. Beyond knowledge of the areas of expertise, the task is mainly about managing the collaboration, communication and flow of information within the group. DFSS proves a successful solution for customer insight and a good combination of tools and techniques. But this does not mean that accurately implementing a particular methodology is in itself the right solution.
Firstly, this paper reviews the current process of reporting reliability data in Sweden. Limitations of reliability indices such as SAIDI and SAIFI are discussed, and the need for additional reliability measures is stated. The paper suggests the introduction of a reliability performance scorecard to analyse reliability measures in an organized system under different aspects. Furthermore, a set of measurements is provided that can be used to assess a utility's reliability performance. The use of the scorecard is discussed, as is its applicability for implementing and tracking regulation. This would result in better policy-making and decreased pressure on electric utilities, owing to a greater understanding of what companies invest to achieve a reliable supply of energy.
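For reference, the two indices whose limitations are discussed above are customer-weighted totals per customer served (per IEEE Std 1366): SAIFI counts interruptions, SAIDI accumulates interruption duration. A minimal computation over hypothetical outage records looks like:

```python
# Hypothetical outage events for one reporting year:
# (customers_affected, duration_minutes)
outages = [
    (1200, 45),
    (300, 120),
    (5000, 10),
]
total_customers = 50_000  # customers served by the utility (assumed)

# SAIFI: total customer interruptions per customer served.
saifi = sum(c for c, _ in outages) / total_customers          # 0.13
# SAIDI: total customer-minutes of interruption per customer served.
saidi = sum(c * d for c, d in outages) / total_customers      # 2.8 min
```

As the paper notes, such system-wide averages hide, e.g., how interruptions are distributed among customers, which motivates the scorecard of additional measures.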
Commercial and operational use of residential and commercial demand-side flexibility in energy grids will play an important but, as yet, undetermined role in realising the required increase in the penetration of DG-RES. An increased embedding potential is needed to satisfy ambitious energy efficiency and carbon dioxide emission targets. The introduction of smart technologies is seen as a facilitator but also encounters technological, operational, market and regulatory challenges. By assessing a number of technologies and field test projects, Task 17 of the IEA/DSM program has explored the nature of this demand-side flexibility and the stakeholder context. It was found that a portfolio of services and control strategies exists for coordinating flexibility in an efficient, safe, reliable and scalable manner. The boundary conditions, valuation and practical implementation are also discussed.
The reliability of a power system depends on up-to-date knowledge of the system state for operation and control. The shift from large conventional production units to small and/or renewable distributed generation (DG) connected in the distribution network means that the distribution system operator requires more control and monitoring, owing to active power generation and reactive power consumption by DG. It is therefore interesting to explore concepts for fast and scalable topology processors for monitoring and control applications such as state estimation, OPF, and static and dynamic stability assessment in electrical distribution networks. The need is evident to validate the proposed methodology, "Decentralized Topology Inference of Electrical Distribution Networks", against a meshed network in order to analyze its overall performance. The topology inference processor requires minimal prior knowledge of the electrical network structure: it takes a series of time-stamped process measurements from each bay of each substation in the network and distinguishes between connected and unconnected bays. This master thesis project has implemented an IEEE reference electric power distribution network on the Simulink platform and integrated the reference electrical network with the Java-based multi-agent topology inference application. The project has included real-time simulation of a standard IEEE reference distribution network, OPC server interfacing between the reference model and the topology inference application, and testing and analysis of the application. The reference model was selected to provide a sufficient case for analyzing and validating the methodology.
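One plausible toy illustration of how time-stamped bay measurements can separate connected from unconnected bays is signal correlation: bays on the same electrical node see essentially the same waveform. This sketch is assumed for illustration only and does not reproduce the thesis's Java-based multi-agent application.

```python
import math
import random

random.seed(0)

def corr(a, b):
    """Pearson correlation of two equal-length measurement series."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    sa = math.sqrt(sum((x - ma) ** 2 for x in a))
    sb = math.sqrt(sum((y - mb) ** 2 for y in b))
    return cov / (sa * sb)

# Shared node voltage plus independent sensor noise for two connected bays;
# an unconnected bay sees an unrelated signal (here: pure noise).
base = [math.sin(t / 10) for t in range(200)]
bay_a = [v + random.gauss(0, 0.01) for v in base]
bay_b = [v + random.gauss(0, 0.01) for v in base]
bay_c = [random.gauss(0, 1) for _ in range(200)]

connected = corr(bay_a, bay_b) > 0.9        # same node: near-perfect correlation
unconnected = abs(corr(bay_a, bay_c)) < 0.5  # different node: near-zero correlation
```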
The technological advancements in substation automation give rise to many new challenges for engineers. The IEC 61850 standard defines the most advanced techniques for digital substation development. It describes the communication mappings for the substation automation of both conventional and digital substations. The most important challenge is to replace old, successful and reliable protection relays with the newly introduced microprocessor-based relays called intelligent electronic devices (IEDs). The IEC 61850 standard, together with IEC 60044-8 and IEC 61850-9-2, presents novel ideas about digital communication and sampled-value transmission over an Ethernet link called the process bus. This thesis is Part A and is mainly concerned with the development of the conventional instrument transformers, an analog-to-digital data converter and a multi-bus power system. The scope of this study covers the development of current and voltage transformer models in SIMULINK that reproduce the ideal behaviour of conventional instrument transformers for voltage and current measurements. The methodology of this study is to model a Sigma-Delta analog-to-digital converter in SIMULINK, after which the simulated results are verified against the standard. The 4 kHz output (voltage/current) signal is obtained in digital form with 16-bit resolution. The SNR (Signal-to-Noise Ratio) and ENOB (Effective Number of Bits) of the data converter are verified both theoretically and practically. In the next phase, the multi-bus power system is modelled in SIMULINK using the SimPowerSystems library in order to run the final tests on the developed product. Finally, the developed models of Project Part A have been integrated with the transmission model developed in Project Part B, collectively known as the Merging Unit.
The functionality of this complete developed product is to acquire 3-phase analog current and voltage signals from the instrument transformers, perform signal processing on these signals and then transmit them over the Ethernet port in the form of an SV (Sampled Value) stream according to the IEC 61850-9-2 standard. The developed Merging Unit is then connected to different nodes of the power system to test its performance and reliability. The overcurrent and differential protection functions are tested on ABB's RET 670 IED (protection relay for transformers).
In both test cases, a three-phase short-circuit fault is applied to the power system to check the behaviour of the Merging Unit during normal and abnormal conditions. It detects all values correctly during the pre-fault, fault and post-fault conditions.
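The theoretical SNR/ENOB verification mentioned above typically relies on the standard quantization-noise formula for an ideal N-bit converter with a full-scale sine input, SNR ≈ 6.02·N + 1.76 dB. A minimal sketch of both directions of the relation:

```python
def ideal_snr_db(bits):
    # Ideal SNR (dB) of an N-bit quantizer for a full-scale sine input.
    return 6.02 * bits + 1.76

def enob(snr_db):
    # Effective number of bits recovered from a measured SNR (dB).
    return (snr_db - 1.76) / 6.02

# For the 16-bit resolution mentioned above, the ideal SNR is 98.08 dB;
# a measured SNR below that value maps to an ENOB below 16 bits.
snr_16 = ideal_snr_db(16)
```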
A study of the function, application and necessity of rate-of-change-of-frequency (ROCOF) protection in the Swedish power system has been carried out. ROCOF protection is used as a protection against islanding. Islanding means that production facilities maintain the operation of a smaller part of the grid even when the connection to the larger power system is lost. The study was initiated because ROCOF relays had been reported to trip without cause. The investigation has included a literature study on islanding, ROCOF protection and the problem of unwarranted disconnections. Furthermore, an empirical analysis of measurement data was carried out to estimate the variations of the frequency derivative in the power grid. Based on laboratory tests, a correlation analysis was performed to more clearly establish the differences in how different devices measure voltage and calculate the frequency derivative. Finally, based on the results, suitable setting ranges and the necessity of ROCOF protection are discussed.
Results from the empirical analysis of measurement data show that the estimated frequency derivative in the system reaches at most 0.14 Hz/s. The laboratory tests show that, at constant voltage, the ROCOF relays detect within the specified error margins. In other cases, however, the relays are affected by rapid voltage changes, which cause the frequency derivative to be calculated incorrectly. Because of this, and because of the large differences in frequency-derivative calculation between PMU devices and relays, it is difficult to define a suitable setting range that achieves sufficient sensitivity and stability. The analyses show that if the relays are to achieve sufficiently high stability, a long time delay must be used, estimated at 0.5 s. This will, however, reduce the sensitivity of the protection. In one analyzed islanding case, the frequency changes over a much shorter period, approximately 20 ms, which means that a ROCOF relay with this time delay fails to detect the islanding. By contrast, a frequency relay, provided that the frequency reaches the threshold and that a time delay shorter than 0.5 s is used, would in this case trip faster. Similar islanding cases with a short-lived frequency derivative are, given the large expansion of generation with low inertia, a likely scenario. That said, ROCOF protection with today's applied settings can be considered insufficient protection against certain types of islanding. However, frequency and voltage protection can be complemented with ROCOF protection against islanding; in cases where the island retains a somewhat higher inertia, the ROCOF relay can then detect the islanding faster. A time delay of at least 0.5 s should then be used to avoid unwarranted trips.
To prevent a power grid from suffering disturbances that may, in a worst-case scenario, black out entire countries, Transmission System Operators (TSOs) want to be able to see what happens in different parts of the network faster, in order to have time to make the necessary changes and regulate the power grid. For this reason, a new technology called PMU-based Wide Area Monitoring Systems (WAMS) has been introduced to allow real-time monitoring of the power grid. With real-time information from many points in a large geographical area, the PMU technology allows the development of advanced applications to control, protect and monitor the entire power grid. The information gathered by the PMUs is sent via a TCP/IP communication network to a PDC to be processed. The applications that use the
PMU data have different requirements on the data arriving within a certain time frame. As an IP network can only provide best-effort service, the delay, jitter and packet loss can never be guaranteed, which means that the PMU data can never be guaranteed to arrive in time, with potentially serious consequences. This problem may be remedied by implementing different quality of service (QoS) schemes on the TCP/IP communication network to assure that the PMU data meets its requirements.
The purpose of this master thesis is to evaluate different QoS schemes in a TCP/IP communication network dedicated to WAMS and the effect they have on the delay, jitter and packet loss of the different traffic flows. For this purpose, a typical TSO's TCP/IP traffic data and the communication network specifics have been studied and modeled using the simulator tool OPNET. The results of the different scenarios have been collected and analyzed with regard to the different QoS schemes.
Methods for risk assessment in information security suggest that users collect and consider sets of input information that often differ notably, both in type and size. To explore these differences, this study compares twelve established methods with respect to how their input suggestions map to the concepts of ArchiMate, a widely used modeling language for enterprise architecture. Hereby, the study also tests the extent to which ArchiMate accommodates the information suggested by the methods (e.g., for the use of ArchiMate models as a source of information for risk assessment). The results show how the methods differ in the quantity of input information they suggest, as well as in their coverage of the ArchiMate structure. Although the translation between ArchiMate and the methods' input suggestions is not perfect, our results indicate that ArchiMate is capable of modeling a fair portion of the information needed by the methods for information security risk assessment, which makes ArchiMate models a promising source of guidance for performing risk assessments.
Authorization and its enforcement, access control, have stood at the beginning of the art and science of information security, and remain a crucial pillar of the secure operation of IT. Dozens of different models of access control have been proposed. Although enterprise architecture as a discipline strives to support the management of IT, support for modeling authorization in enterprises is lacking, both in terms of supporting the variety of individual models in use today and in terms of providing a unified metamodel capable of flexibly expressing configurations of all or most of these models. This study summarizes a number of existing models of access control, proposes a unified metamodel mapped to ArchiMate, and illustrates its use on a selection of simple cases.
Developing software without considering the potential changes it might have to undergo in the future can be a costly mistake. This is because its maintenance costs can become very expensive, as they can consume over 90% of the total life-cycle costs. Incorporating maintainability in software has for this reason become highly attractive, since it can significantly reduce the maintenance costs and therefore save companies and software developers a fortune. This thesis presents a software tool that has been developed to aid Bombardier in the verification of computer-based interlocking (CBI) systems. The tool automatically generates test cases which represent the different tests that verify the interlocking system. The paper is divided into two parts. The first part focuses on the maintainability of the tool while the second part investigates whether the tool can speed up the testing process of CBI-systems at Bombardier. The results show that the tool is highly maintainable and that tests on CBI-systems can be performed significantly faster with it.
Good IT decision making is a highly desirable property that can be furthered by the use of enterprise architecture, an approach to IT management using diagrammatic models. Since the creation of enterprise architecture models is often a demanding task, the models must contain only relevant information in order to support decision making. This paper suggests a method for constructing an enterprise architecture model framework in which the enterprise concerns in need of architectural support and rational decision making are designated. The paper also describes the outcome of applying the method in a case study at a Swedish utility company, Vattenfall AB.
Modern society is unquestionably heavily reliant on the supply of electricity. Hence, the power system is one of the most important infrastructures for future growth. However, the power system of today was designed for a stable radial flow of electricity from large power plants to the customers, not for the type of changes it is presently being exposed to, such as large-scale integration of electric vehicles, wind power plants, residential photovoltaic systems, etc. One aspect particularly exposed to these changes is the design of power system control and protection functionality. Problems occur when the flow of electricity changes from a unidirectional radial flow to a bidirectional one. Such a change requires redesign of control and protection functionality as well as the introduction of new information and communication technology (ICT). To make matters worse, the closer the interaction between the power system and the ICT systems, the more complex the matter becomes from a reliability perspective. This problem is inherently cyber-physical, encompassing everything from system software to power cables and transformers, rather than the traditional reliability concern of focusing only on power system components.
The contribution of this thesis is a framework for reliability analysis, utilizing system modeling concepts that support the industrial engineering issues that follow from the implementation of modern substation automation systems. The framework is based on a Bayesian probabilistic analysis engine represented by Probabilistic Relational Models (PRMs) in combination with an Enterprise Architecture (EA) modeling formalism. The gradual development of the framework is demonstrated through a number of application scenarios based on substation automation system configurations.
This thesis is a composite thesis consisting of seven papers. Paper 1 presents the framework combining EA, PRMs and Fault Tree Analysis (FTA). Paper 2 adds primary substation equipment as part of the framework. Paper 3 presents a mapping between modeling entities from the EA framework ArchiMate and substation automation system configuration objects from the IEC 61850 standard. Paper 4 introduces object definitions and relations consistent with the EA modeling formalism and suitable for the purposes of the analysis framework.
Paper 5 describes an extension of the analysis framework by adding logical operators to the probabilistic analysis engine. Paper 6 presents enhanced failure rates for software components, derived by studying failure logs, and an application of the framework to a utility substation automation system. Finally, Paper 7 describes the ability to utilize domain standards for coherent modeling of functions and their interrelations, along with an application of the framework utilizing software-tool support.
The future smart electricity grids will exhibit tight integration between control and automation systems and primary power system equipment. Optimal and safe operation of the power system will be completely dependent on well-functioning information and communication technology (ICT) systems. Considering this, it is essential that the control and automation systems do not constitute the weak link in ensuring reliable power supply to society. At the same time, studies of reliability that consider the complex interdependencies between integrated ICT systems become increasingly difficult to perform due to the large number of integrated entities with varying characteristics involved. To manage this challenge, there is a need for structured modeling and analysis methods that accommodate these characteristics and interdependencies. In other fields, the analysis of large interconnected systems is done using models that capture the system and its context as well as its components and interactions. This paper addresses this issue by combining enterprise architecture methods that utilize these modeling concepts with fault tree analysis and probabilistic relational models. This novel approach enables a holistic overview thanks to the use of formalized models, and it allows rigorous analysis thanks to the adaptation of the models to enable Fault Tree Analysis. The paper is concluded with an example applying the analysis method to a proposed smart grid function in a distribution network.
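The fault tree evaluation underlying such an analysis can be illustrated with a minimal sketch, assuming statistically independent component failures. The gate structure, component names and failure probabilities below are purely illustrative assumptions, not values from the paper.

```python
# Minimal fault tree evaluation, assuming independent component failures.

def p_or(*probs):
    """Failure probability of an OR gate: the subsystem fails if any
    input fails (1 minus the probability that all inputs survive)."""
    p = 1.0
    for q in probs:
        p *= (1.0 - q)
    return 1.0 - p

def p_and(*probs):
    """Failure probability of an AND gate: the subsystem fails only
    if every (redundant) input fails."""
    p = 1.0
    for q in probs:
        p *= q
    return p

# Illustrative annual failure probabilities (hypothetical components):
sensor, comms, server_a, server_b = 0.02, 0.05, 0.10, 0.10

# Redundant servers (AND gate) in series with sensor and comms (OR gate).
top = p_or(sensor, comms, p_and(server_a, server_b))
print(f"P(function fails)        = {top:.5f}")
print(f"P(successful operation)  = {1 - top:.5f}")
```

The same AND/OR structure is what the PRM-based engine evaluates, only embedded in an architectural model rather than written out by hand.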
This paper presents an approach for reliability-centred asset management of active distribution management systems. The application of Probabilistic Relational Models (PRMs), together with a set of defined components and their attributes, provides both the ability to capture system configurations in architectural models and the probabilistic reasoning offered by Bayesian networks. The approach is based on two key concepts: first, it addresses both the reliability of primary system components and the supporting secondary, ICT-based systems. Secondly, it enables representation of the architecture of the ICT systems, including for instance redundancy of hardware and allocation of software functions to several hardware devices. The increasing number of software-dependent systems for controlling and supervising the power grid increases the risk of software-caused failures. Thus, for reliable operation it is highly important to consider not only the primary components, but also the software and hardware of the secondary systems controlling them. The approach is demonstrated using an example comprising both primary equipment and secondary systems.
This position paper presents initial research on the assessment of ICT system characteristics and their impact on the controllability and observability of future electricity distribution grids. Controllability and observability of active grids are key factors for guaranteeing safe, efficient and reliable network operation. An assessment framework is proposed for analyzing the impact of ICT system quality on the controllability and observability of the power distribution grid. The proposed assessment method is based on architectural models extended with a probabilistic inference analysis engine. For this, a combination of Enterprise Architecture metamodels and extended influence diagrams is proposed. The paper presents how the framework is constructed and the method for applying it in analysis. The paper is concluded with an example providing an instantiation of the assessment framework.
This paper presents a case study applying a framework developed for the analysis of substation automation system function reliability. The analysis framework is based on Probabilistic Relational Models (PRMs) and includes the analysis of both primary equipment and the supporting information and communication technology (ICT) systems. Furthermore, the reliability analysis also considers the logical structure and its relation to the physical infrastructure. The system components comprising the physical infrastructure are assigned failure probabilities, and depending on the logical structure, the reliability of the studied functionality is evaluated. Software failures are also accounted for in the analysis. As part of the case study, failure rates of modern digital control and protection relays were identified by studying failure logs from a Nordic power utility. According to the failure logs, software accounts for approximately 35% of the causes of failures related to modern control and protection relays. The framework, including failure probabilities, is applied to a system for voltage control that consists of a voltage transformer with an on-load tap changer and a control system for controlling the tap. The result shows a 96% probability of successful operation over a period of one year for the automatic voltage control. A concluding remark is that when analyzing substation automation system business functions, it is important to reduce the modeling effort. The expressiveness of the presented modeling framework has proven somewhat cumbersome when modeling a single business function with a small number of components. Instead, the analysis framework's full usefulness may be expected to emerge when a larger number of business functions are evaluated for a system with a high degree of dependency between the components in the physical infrastructure. The identification of accurate failure rates is also a limiting factor for the analysis and is an interesting topic for further work.
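As a rough illustration of how per-component failure rates translate into a function-level figure of the kind reported above, the sketch below assumes a series system with constant (exponential) failure rates, so that R(t) = exp(-Σλᵢ·t). The component names and rates are hypothetical, chosen only so the result lands near the reported magnitude; they are not the study's data.

```python
import math

# Hypothetical annual failure rates for a voltage-control chain modeled
# as a series system (any component failure fails the function).
failure_rates_per_year = {
    "voltage_transformer": 0.005,
    "tap_changer":         0.010,
    "relay_hardware":      0.008,
    "relay_software":      0.015,  # software modeled as a component in its own right
}

# Series system with exponential lifetimes: R(t) = exp(-sum(lambda_i) * t).
t = 1.0  # one year
total_rate = sum(failure_rates_per_year.values())
reliability = math.exp(-total_rate * t)
print(f"P(successful operation over one year) = {reliability:.3f}")
```

With these assumed rates the result is about 0.96, the same order as the case study's reported probability.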
This paper presents an architecture-based framework for reliability analysis of ICT systems for power system protection, monitoring and control. The analysis framework applies Probabilistic Relational Models (PRMs), combining entity-relationship diagrams with the probabilistic analysis engine of Bayesian networks. Moreover, the framework allows modeling and analysis of the relations between the ICT and the power system, including both physical and logical relations. Three architectural scenarios are presented, to which the analysis framework is applied. Using component failure rates gathered from various sources, each scenario is analyzed based on the probability of successful operation. Calculations are performed using Bayesian networks and contrasted with the application of Reliability Block Diagrams. The outcome verifies the successful use of PRMs for reliability analysis of ICT system architectures for power system protection, monitoring and control.
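The contrast between a Reliability Block Diagram calculation and exact probabilistic inference can be sketched on a toy redundant configuration. The components and failure probabilities below are illustrative assumptions; the exhaustive enumeration mirrors the exact computation a Bayesian network performs on the same structure, so the two results coincide.

```python
from itertools import product

# Illustrative failure probabilities (hypothetical components).
p_fail = {"ied_a": 0.10, "ied_b": 0.10, "switch": 0.02}

# RBD: two redundant IEDs in parallel, in series with a network switch.
rbd = (1 - p_fail["ied_a"] * p_fail["ied_b"]) * (1 - p_fail["switch"])

def operates(ied_a_ok, ied_b_ok, switch_ok):
    """The function works if at least one IED and the switch are up."""
    return (ied_a_ok or ied_b_ok) and switch_ok

# Enumeration: sum the probability of every component-state combination
# in which the function still operates (what exact BN inference computes).
names = list(p_fail)
enum = 0.0
for states in product([True, False], repeat=len(names)):
    p = 1.0
    for name, ok in zip(names, states):
        p *= (1 - p_fail[name]) if ok else p_fail[name]
    if operates(*states):
        enum += p

print(f"RBD:         {rbd:.4f}")
print(f"Enumeration: {enum:.4f}")
```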
This paper presents the use of Probabilistic Relational Models (PRMs) for reliability analysis of control systems for active distribution grids. The approach is based on two key concepts: first, it addresses both the reliability of primary system components and the supporting secondary, ICT-based systems. Secondly, the use of PRMs enables representation of the architecture of the ICT systems, including for instance redundancy of hardware and allocation of software functions to several hardware devices. This latter aspect is important, since allocation of software across different hardware platforms is a feature enabled by, for instance, the IEC 61850 standard. The increasing number of software-dependent systems for controlling and supervising the power grid increases the risk of software-caused failures. Thus, for reliable operation it is highly important to consider not only the primary components, but also the software and hardware of the secondary systems controlling them. A variety of methods exist for reliability analysis of secondary systems; however, few address the issue of failing software together with failing primary components. The paper presents the underlying theory of Probabilistic Relational Models and the steps necessary to use the technique. The paper is concluded with an example application of the approach.
This paper presents the application of a framework for reliability analysis of substation automation (SA) system functions. The framework is based on Probabilistic Relational Models, which combine the probabilistic reasoning offered by Bayesian networks with architecture models in the form of entity-relationship diagrams. In the analysis, both the physical infrastructure and the logical structure of the system are considered, in terms of qualitative modeling and quantitative analysis. Moreover, the framework treats the aspect of failures caused by software. An example is detailed with the framework applied to an IEC 61850-based SA system. The logical structure, including functions and their relations, is modeled in accordance with the Pieces of Information for COMmunication (PICOM) defined in the IEC 61850 standard. By applying PICOMs as a frame of reference when modeling functions, the model instantiation becomes more standardized compared to subjectively defining functions. A quantitative reliability analysis is performed on a function for tripping a circuit breaker in case of a mismatch between currents. The result is presented both as a qualitative architecture model and as a quantitative result showing the probability of successful operation during a period of one year.
This paper presents an extended probabilistic framework for reliability analysis of Information and Communication Technology (ICT) systems for power systems. The framework is based on Probabilistic Relational Models (PRMs) and includes the analysis of both primary equipment and the supporting ICT systems. The framework also distinguishes between the functional structure and the physical infrastructure. To enable the analysis of architectural properties such as redundancy, the framework is extended with logical gates based on AND and OR logic. The gates serve an important purpose when integrating the framework into an analysis tool that supports PRMs: without logical gates, the conditional probability tables must be sized dynamically, since their size depends on the number of parent nodes. This problem is solved by using aggregation functions. The application of the extended framework is demonstrated by applying it to an example system architecture with two types of redundant configurations.
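The table-sizing problem that the logical gates address can be sketched as follows. With binary nodes, a child's conditional probability table needs one entry per combination of parent states, i.e. it grows exponentially, whereas a deterministic AND/OR gate is a fixed aggregation function of its parents regardless of their number. The gate functions shown are a minimal stand-in for the framework's aggregation functions, and all figures are illustrative.

```python
# CPT size for a binary child node grows as states**n_parents.
def cpt_rows(n_parents, states=2):
    return states ** n_parents

for n in (2, 5, 10, 20):
    print(f"{n:2d} parents -> {cpt_rows(n):>9d} CPT entries")

# A deterministic gate sidesteps the table: the child's state is an
# aggregation of the parents, independent of how many there are.
def or_gate(parent_failed):
    """Gate output fails if any parent has failed."""
    return any(parent_failed)

def and_gate(parent_failed):
    """Gate output fails only if all (redundant) parents have failed."""
    return all(parent_failed)

print(or_gate([False, False, True]))   # a single failed parent trips an OR gate
print(and_gate([False, False, True]))  # redundancy still covers an AND gate
```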
This paper presents a mapping between the Enterprise Architecture framework ArchiMate and the Substation Configuration Language (SCL) of IEC 61850. Enterprise Architecture (EA) is a discipline for managing an enterprise's information system portfolio in relation to the supported business. Metamodels, descriptive models of how to model and one of the core components of EA, can assist stakeholders in many ways, for example in decision-making. Moreover, the power industry is a domain with an increasing reliance on the support of information systems. IEC 61850 is a standard for the design of Substation Automation (SA) systems and provides a vendor-independent framework for interoperability by defining communication networks and functions. The SCL is a descriptive language in IEC 61850 for the configuration of substation Intelligent Electronic Devices (IEDs), describing the structure of the substation together with its physical components and their related functions. By mapping SCL, which models the architecture of SA systems, to ArchiMate, stakeholders are assisted in understanding their SA system and its architecture. The mapping is intended to support the integration of SA systems applying IEC 61850 into the enterprise architecture. The mapping is demonstrated with an example applying it to an SA configuration based on SCL.
A fast and continuously changing business environment demands flexible software systems that are easy to modify and maintain. Due to the extent of interconnection between systems and the internal quality of each system, many IT decision-makers find it difficult to predict the effort of making changes to their systems. To aid IT decision-makers in making better decisions regarding what modifications to make to their systems, this paper proposes extended influence diagrams and enterprise architecture models for maintainability analysis. A framework for assessing maintainability using enterprise architecture models is presented, and the approach is illustrated by a fictional example decision situation.
A fast and continuously changing business environment demands flexible software systems that are easy to modify and maintain. Due to the extent of interconnection between systems and the internal quality of each system, many IT decision-makers find it difficult to predict the effort of making changes to their systems. To aid IT decision-makers in making better decisions regarding what modifications to make to their systems, this article proposes extended influence diagrams and enterprise architecture models for maintainability analysis. A framework for assessing maintainability using enterprise architecture models is presented, and the approach is illustrated by a fictional example decision situation.
Contemporary enterprises depend to a great extent on software systems. During the past decades, the number of systems has been constantly increasing, and these systems have become more integrated with one another. This has led to a growing complexity in managing software systems and their environment. At the same time, business environments today need to progress and change rapidly to keep up with evolving markets. As the business processes change, the systems need to be modified in order to continue supporting the processes.
The increasing complexity and growing demand for rapid change make the management of enterprise systems a very important issue. In order to achieve effective and efficient management, it is essential to be able to analyze system modifiability (i.e., to estimate the future change cost). This is addressed in the thesis by employing architectural models. The contribution of this thesis is a method for software system modifiability analysis using enterprise architecture models. The contribution includes an enterprise architecture analysis formalism, a modifiability metamodel (i.e., a modeling language), and a method for creating metamodels. The proposed approach allows IT decision-makers to model and analyze change projects. By doing so, high-quality decision support regarding change project costs is obtained.
This thesis is a composite thesis consisting of five papers and an introduction. Paper A evaluates a number of analysis formalisms and proposes extended influence diagrams to be employed for enterprise architecture analysis. Paper B presents the first version of the modifiability metamodel. In Paper C, a method for creating enterprise architecture metamodels is proposed. This method aims to be general, i.e., it can be employed for other IT-related quality analyses such as interoperability, security, and availability; the paper does, however, use modifiability as a running case. The second version of the modifiability metamodel, for change project cost estimation, is fully described in Paper D. Finally, Paper E validates the proposed method and metamodel by surveying 110 experts and studying 21 change projects at four large Nordic companies. The validation indicates that the method and metamodel are useful, contain the right set of elements and provide good estimation capabilities.
In this paper, we test a Design Structure Matrix (DSM) based method for visualizing and measuring software portfolio architectures. Our data is drawn from a power utility company, comprising 192 software applications with 614 dependencies between them. We show that the architecture of this system can be classified as a “core-periphery” system, meaning it contains a single large dominant cluster of interconnected components (the “Core”) representing 40% of the system. The system has a propagation cost of 44% and architecture flow through of 93%. This case and these findings add another piece of the puzzle suggesting that the method could be effective in uncovering the hidden structure in software portfolio architectures.
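One common definition of propagation cost, the density of the DSM's transitive closure (the "visibility matrix", with each element counted as visible to itself), can be sketched on a toy dependency matrix. The four-application portfolio below is a made-up illustration, not the utility company's data, and other variants of the metric exist.

```python
def propagation_cost(dsm):
    """Density of the visibility matrix of a square 0/1 dependency matrix."""
    n = len(dsm)
    # Start from the DSM plus the identity (every element "sees" itself).
    v = [[bool(dsm[i][j]) or (i == j) for j in range(n)] for i in range(n)]
    # Warshall's algorithm computes the transitive closure in place.
    for k in range(n):
        for i in range(n):
            for j in range(n):
                v[i][j] = v[i][j] or (v[i][k] and v[k][j])
    return sum(map(sum, v)) / (n * n)

# Toy portfolio: app 0 depends on app 1, app 1 on app 2, app 3 is isolated.
dsm = [
    [0, 1, 0, 0],
    [0, 0, 1, 0],
    [0, 0, 0, 0],
    [0, 0, 0, 0],
]
print(f"propagation cost = {propagation_cost(dsm):.4f}")
```

A change to app 2 can thus propagate indirectly to apps 1 and 0, which is exactly what the closure captures and what a raw dependency count would miss.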
We test a method for visualizing and measuring enterprise application architectures. The method was designed and previously used to reveal the hidden internal architectural structure of software applications. The focus of this paper is to test if it can also uncover new facts about the applications and their relationships in an enterprise architecture, i.e., if the method can reveal the hidden external structure between software applications. Our test uses data from a large international telecom company. In total, we analyzed 103 applications and 243 dependencies. Results show that the enterprise application structure can be classified as a core-periphery architecture with a propagation cost of 25%, core size of 34%, and architecture flow through of 64%. These findings suggest that the method could be effective in uncovering the hidden structure of an enterprise application architecture.