Byzantine Fault Tolerant protocols are complicated and hard to implement. Today's software industry is reluctant to adopt these protocols because of the high overhead of message exchange in the agreement phase and the high resource consumption necessary to tolerate faults (as 3f + 1 replicas are required to tolerate f faults). Moreover, total ordering of messages is needed by most classical protocols to provide strong consistency in both the agreement and execution phases. Research in recent years has improved the throughput of the execution phase by introducing concurrency on modern multicore infrastructures. However, improving the agreement phase remains an open problem.
Byzantine Fault Tolerant systems use State Machine Replication to tolerate a wide range of faults. The approach uses leader-based consensus algorithms for the deterministic execution of the service on all replicas, ensuring that all correct replicas reach the same state. For this purpose, several algorithms have been proposed that provide total ordering of messages through an elected leader. A single leader is usually a bottleneck, as it cannot provide the throughput required by real-time software services. To achieve higher throughput, a solution is needed that can execute multiple consensus rounds concurrently.
We present a solution that enables multiple consensus rounds in parallel by choosing multiple leaders. By enabling concurrent consensus, our approach can execute several requests in parallel. We incorporate application-specific knowledge to split the total order of events into multiple partial orders that are causally consistent, in order to ensure safety. Furthermore, a dependency check is required for every client request before it is assigned to a particular leader for agreement. This methodology relies on optimistic prediction of dependencies to provide higher throughput. We also propose a solution to correct the course of execution without rolling back if dependencies were predicted incorrectly.
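To make the dependency-based assignment concrete, the following is a minimal sketch (not the implementation evaluated here) of how client requests could be routed to one of several leaders based on an optimistically predicted conflict relation; the `conflicts` predicate, the per-request key sets, and all names are illustrative assumptions.

```python
# Hypothetical sketch of dependency-aware request dispatch to multiple
# consensus leaders; data structures and names are illustrative only.

class MultiLeaderDispatcher:
    def __init__(self, num_leaders):
        # One independent consensus instance (partial order) per leader.
        self.queues = [[] for _ in range(num_leaders)]

    def conflicts(self, req_a, req_b):
        # Application-specific knowledge: two requests conflict if they are
        # predicted to touch overlapping state objects (optimistic prediction).
        return bool(req_a["keys"] & req_b["keys"])

    def assign(self, request):
        # Dependency check: a request must be ordered after every earlier
        # request it conflicts with, so route it to that leader's queue.
        for leader, pending in enumerate(self.queues):
            if any(self.conflicts(request, earlier) for earlier in pending):
                pending.append(request)
                return leader
        # No predicted dependency: pick the least-loaded leader so
        # independent requests run through consensus concurrently.
        leader = min(range(len(self.queues)), key=lambda i: len(self.queues[i]))
        self.queues[leader].append(request)
        return leader

dispatcher = MultiLeaderDispatcher(num_leaders=3)
print(dispatcher.assign({"id": 1, "keys": {"x"}}))       # independent request
print(dispatcher.assign({"id": 2, "keys": {"x", "y"}}))  # depends on request 1
```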
Our evaluation shows that in normal cases this approach can achieve up to 100% higher throughput than conventional approaches for large numbers of clients. We also show that this approach has the potential to perform better in complex scenarios.
Given the need for modern researchers to produce open, reproducible scientific output, the lack of standards and best practices for sharing data and workflows used to produce and analyze molecular dynamics (MD) simulations has become an important issue in the field. There are now multiple well-established packages to perform molecular dynamics simulations, often highly tuned for exploiting specific classes of hardware, each with strong communities surrounding them, but with very limited interoperability/transferability options. Thus, the choice of the software package often dictates the workflow for both simulation production and analysis. The level of detail in documenting the workflows and analysis code varies greatly in published work, hindering reproducibility of the reported results and the ability for other researchers to build on these studies. An increasing number of researchers are motivated to make their data available, but many challenges remain in order to effectively share and reuse simulation data. To discuss these and other issues related to best practices in the field in general, we organized a workshop in November 2018 (https://bioexcel.eu/events/workshop-on-sharing-data-from-molecular-simulations/). Here, we present a brief overview of this workshop and topics discussed. We hope this effort will spark further conversation in the MD community to pave the way toward more open, interoperable, and reproducible outputs coming from research studies using MD simulations.
Trigger-Action Platforms (TAPs) play a vital role in fulfilling the promise of the Internet of Things (IoT) by seamlessly connecting otherwise unconnected devices and services. While enabling novel and exciting applications across a variety of services, security and privacy issues must be taken into consideration because TAPs essentially act as persons-in-the-middle between trigger and action services. The issue is further aggravated since the triggers and actions on TAPs are mostly provided by third parties extending the trust beyond the platform providers. Node-RED, an open-source JavaScript-driven TAP, provides the opportunity for users to effortlessly employ and link nodes via a graphical user interface. Being built upon Node.js, third-party developers can extend the platform’s functionality through publishing nodes and their wirings, known as flows. This paper proposes an essential model for Node-RED, suitable to reason about nodes and flows, be they benign, vulnerable, or malicious. We expand on attacks discovered in recent work, ranging from exfiltrating data from unsuspecting users to taking over the entire platform by misusing sensitive APIs within nodes. We present a formalization of a runtime monitoring framework for a core language that soundly and transparently enforces fine-grained allowlist policies at module-, API-, value-, and context-level. We introduce the monitoring framework for Node-RED that isolates nodes while permitting them to communicate via well-defined API calls complying with the policy specified for each node.
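As a rough illustration of what a fine-grained allowlist policy at module-, API-, value-, and context-level can look like, consider the sketch below; the policy format and the `check_call` helper are hypothetical and are not the actual Node-RED or monitor APIs.

```python
# Illustrative allowlist policy for a single node, enforced at module-,
# API-, value-, and context-level; the format and names are hypothetical.
policy = {
    "modules": {"http", "fs"},                             # module-level allowlist
    "apis": {("http", "request"), ("fs", "readFile")},     # API-level allowlist
    "values": {("http", "request"): {"https://api.example.com"}},  # value-level
    "contexts": {"flow"},                                  # context-level (no global context)
}

def check_call(module, api, arg=None, context=None):
    """Return True iff the node is allowed to perform this call."""
    if module not in policy["modules"]:
        return False
    if (module, api) not in policy["apis"]:
        return False
    allowed_values = policy["values"].get((module, api))
    if allowed_values is not None and arg not in allowed_values:
        return False
    if context is not None and context not in policy["contexts"]:
        return False
    return True

assert check_call("http", "request", "https://api.example.com", "flow")
assert not check_call("http", "request", "https://evil.example", "flow")
```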
The goal of this thesis is to explore whether it is possible to automate regression testing for a SaaS application with a serverless approach. The thesis covers the fundamentals of the software development lifecycle, cloud concepts, different types of testing frameworks, and SaaS applications. The report investigates various testing tools that can be used in accordance with Polestar’s needs. The testing framework must run the existing tests and deliver the results of the tests. The system must be able to coexist with the testing strategy that is in place today. The result is a testing framework that can run a number of selected tests on the SaaS application Salesforce. The system was deployed with serverless Docker containers through Amazon Web Services. The report also covers what a future implementation could look like and potential improvements.
This report describes a master thesis performed at SICS (Swedish Institute of Computer Science) and KTH (The Royal Institute of Technology) in Stockholm.
Ajax stands for "Asynchronous JavaScript and XML"; it is not a programming language but a suite of technologies used to develop web applications with more interactivity than traditional web pages.
Ajax applications can be adapted for mobile and constrained devices. This has been called Mobile Ajax. While the technique is the same, Mobile Ajax generally is considered to be a special case of Ajax, because it deals with problems specific to the mobile market.
The purpose of this thesis has been to examine the possibilities and disadvantages of Mobile Ajax from the developers' and users' perspectives. In addition, we compare Mobile Ajax with Java Micro Edition (Java ME) and Flash Lite.
This has been done through literature studies and the development of a web-based chat client (MAIM - Mobile Ajax Instant Messenger). The application sends and receives direct messages in real time between different mobile devices. The MAIM application has then been compared with our own Java ME and Flash Lite chat clients.
We have tested all three applications with different models of mobile devices and in different web browsers. The results show that Mobile Ajax makes it possible to create sophisticated and dynamic mobile web applications and is better than the classic web application model, but this requires that the mobile device has a modern and compatible web browser such as Opera Mobile.
In an industrial project, we addressed the challenge of developing a software-based video generator such that consumers and providers of video processing algorithms can benchmark them on a wide range of video variants. This article aims to report on our positive experience in modeling, controlling, and implementing software variability in the video domain. We describe how we have designed and developed a variability modeling language, called VM, resulting from close collaboration with industrial partners over two years. We expose the specific requirements and advanced variability constructs we developed and used to characterize and derive variations of video sequences. The results of our experiments and industrial experience show that our solution is effective to model complex variability information and supports the synthesis of hundreds of realistic video variants. From the software language perspective, we learned that basic variability mechanisms are useful but not enough; attributes and multi-features are of prior importance; meta-information and specific constructs are relevant for scalable and purposeful reasoning over variability models. From the video domain and software perspective, we report on the practical benefits of a variability approach. With more automation and control, practitioners can now envision benchmarking video algorithms over large, diverse, controlled, yet realistic datasets (videos that mimic real recorded videos), something impossible at the beginning of the project.
Queueing theory is widely used in practical queueing applications. It can be applied to specific models of queueing systems, especially the ones that follow the Markovian property. Its purpose is to predict system behaviour in order to be used for performance optimization. In this case study, it was used to evaluate an extended queueing model with agents serving multiple queues. The purpose was to try to capture more variability and input factors into the theoretical model and test its applicability on more extended models. The main objective was to use relevant queueing theory models to estimate the wait time using real contact center data. Different from the theoretical model, the service rates of the system model depended on how many queues an agent served concurrently, which increased the complexity of the model. The obtained results demonstrated some limitations that made the models too restrictive to be applied to a model with multi-skilled agents that were not equally available. Moreover, it was shown that heuristic approaches might be more suitable for more complex queueing systems that are not covered by queueing theory models.
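For reference, the classical starting point for such wait-time estimates is the M/M/c (Erlang C) model, where λ is the arrival rate, μ the service rate per agent, and c the number of agents; the extended model studied here departs from this baseline because service rates depend on how many queues an agent serves concurrently.

\[
\rho = \frac{\lambda}{c\mu}, \qquad
P_{\mathrm{wait}} = \frac{\dfrac{(c\rho)^c}{c!}\,\dfrac{1}{1-\rho}}
{\displaystyle\sum_{k=0}^{c-1}\frac{(c\rho)^k}{k!} \;+\; \frac{(c\rho)^c}{c!}\,\frac{1}{1-\rho}}, \qquad
\mathbb{E}[W_q] = \frac{P_{\mathrm{wait}}}{c\mu - \lambda}.
\]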
Motivated by emerging vision-based intelligent services, we consider the problem of rate adaptation for high-quality and low-delay visual information delivery over wireless networks using scalable video coding. Rate adaptation in this setting is inherently challenging due to the interplay between the variability of the wireless channels, the queuing at the network nodes, and the frame-based decoding and playback of the video content at the receiver at very short time scales. To address the problem, we propose a low-complexity model-based rate adaptation algorithm for scalable video streaming systems, building on a novel performance model based on stochastic network calculus. We validate the analytic model using extensive simulations. We show that it allows fast near-optimal rate adaptation for fixed transmission paths, as well as cross-layer optimized routing and video rate adaptation in mesh networks, with less than 10% quality degradation compared to the best achievable performance.
Model checking temporal properties of software is algorithmically hard. To be practically feasible, it usually requires the creation of simpler, abstract models of the software, over which the properties are checked. However, creating suitable abstractions is another difficult problem. We argue that such abstract models can be obtained with little effort, when the state transformation properties of the software components have already been deductively verified. As a concrete, language-independent representation of such abstractions we propose the use of flow graphs, a formalism previously developed for the purposes of compositional model checking. In this paper, we describe how we envisage the work flow and tool chain to support the proposed verification approach in the context of embedded, safety-critical software written in C.
This paper addresses the question of whether there is an environmental advantage to using DECT phones instead of GSM phones in offices. The paper also addresses the environmental compatibility of Electrochemical Pattern Replication (ECPR) compared to classical photolithography-based microscale metallization (CL) for pattern transfer. Both environmental assessments consider electricity consumption and CO2 emissions. The projects undertaken were two comparative studies, of DECT phone/GSM phone and of ECPR/CL respectively. The research method used was probabilistic uncertainty modelling with a limited number of inventory parameters, implemented in the MATLAB tool. Within the chosen system boundaries and with the uncertainties added to the input data, ECPR is with 100% probability better than CL, and the DECT phone is with 90% probability better than the GSM phone.
MOOSE2 is a MATLAB®-based toolbox for solving least-costly application-oriented input design problems in system identification. MOOSE2 provides the spectrum of the input signal to be used in the identification experiment made to estimate a linear parametric model of the system. The objective is to find a spectrum that minimizes experiment cost while fulfilling constraints imposed in the experiment and on the obtained model. The constraints considered by MOOSE2 are: frequency or power constraints on the signal spectra in the experiment, and application or quality specifications on the obtained model.
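As background, a common convexified form of the least-costly application-oriented input design problem (the exact formulation used internally by MOOSE2 may differ) minimizes input power over the spectrum Φ_u subject to an information-matrix bound derived from the application cost V_app; here I_F denotes the Fisher information matrix of the experiment, γ the required application accuracy, χ²_α(n) the chi-squared quantile for the confidence level, and C the set of admissible spectra (frequency and power constraints). All symbols are standard-notation assumptions, not MOOSE2 identifiers.

\[
\min_{\Phi_u(\omega)\,\succeq\,0}\;\; \frac{1}{2\pi}\int_{-\pi}^{\pi}\Phi_u(\omega)\,\mathrm{d}\omega
\quad\text{s.t.}\quad
I_F(\Phi_u)\;\succeq\;\gamma\,\frac{\chi^2_{\alpha}(n)}{2}\,V''_{\mathrm{app}}(\theta_0),
\qquad \Phi_u \in \mathcal{C}.
\]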
In this study, we present a meta-learning model to adapt the predictions of the network's capacity between viewers who participate in a live video streaming event. We propose the MELANIE model, where an event is formulated as a Markov Decision Process, performing meta-learning on reinforcement learning tasks. By considering a new event as a task, we design an actor-critic learning scheme to compute the optimal policy on estimating the viewers' high-bandwidth connections. To ensure fast adaptation to new connections or changes among viewers during an event, we implement a prioritized replay memory buffer based on the Kullback-Leibler divergence of the reward/throughput of the viewers' connections. Moreover, we adopt a model-agnostic meta-learning framework to generate a global model from past events. As viewers scarcely participate in several events, the challenge resides in how to account for the low structural similarity of different events. To combat this issue, we design a graph signature buffer to calculate the structural similarities of several streaming events and adjust the training of the global model accordingly. We evaluate the proposed model on the link weight prediction task on three real-world datasets of live video streaming events. Our experiments demonstrate the effectiveness of our proposed model, with an average relative gain of 25% against state-of-the-art strategies. For reproduction purposes, our evaluation datasets and implementation are publicly available at https://github.com/stefanosantaris/melanie
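The following is a small sketch of the KL-based prioritization idea (not the MELANIE code): transitions whose viewer-connection throughput distribution has drifted the most from what was stored get a higher chance of being replayed. All class and function names are illustrative.

```python
import numpy as np

def kl_divergence(p, q, eps=1e-12):
    # KL(P || Q) between two discretized reward/throughput histograms.
    p = np.asarray(p, dtype=float) + eps
    q = np.asarray(q, dtype=float) + eps
    p, q = p / p.sum(), q / q.sum()
    return float(np.sum(p * np.log(p / q)))

class KLPrioritizedReplay:
    """Replay buffer whose sampling priority grows with the KL divergence
    between a connection's current throughput distribution and the one
    observed when the transition was stored (illustrative sketch)."""
    def __init__(self):
        self.items, self.priorities = [], []

    def add(self, transition, stored_hist, current_hist):
        self.items.append(transition)
        self.priorities.append(kl_divergence(current_hist, stored_hist))

    def sample(self, batch_size, rng=None):
        rng = rng or np.random.default_rng()
        pri = np.asarray(self.priorities) + 1e-6
        probs = pri / pri.sum()
        idx = rng.choice(len(self.items), size=batch_size, p=probs, replace=True)
        return [self.items[i] for i in idx]

buf = KLPrioritizedReplay()
buf.add(("state", "action", 1.0), stored_hist=[5, 3, 2], current_hist=[1, 2, 7])  # large drift
buf.add(("state", "action", 0.5), stored_hist=[4, 4, 2], current_hist=[4, 4, 2])  # no drift
print(buf.sample(batch_size=2))
```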
Automatic Program Repair (APR) is a field that has gained much attention in recent years. The idea of automatically fixing bugs could save time and money for companies. Template-based Automatic Program Repair is an area within APR that uses fix templates for generating patches and a test suite for evaluating them. However, there exist many different tools and datasets, and the concept has not been widely evaluated at companies or tried in production. A criticism of current research is that the bug datasets are gathered from only a few projects, are sparsely updated, and are not representative of real-world projects. This thesis evaluates TBar, a template-based automatic program repair tool for Java, on the large open-source bug dataset Bears and on a small company dataset. Further, TBar is modified into kBar to be used for the experiments. The results show that kBar produces plausible patches for 35% (19/54) of the selected bugs, and 13% (7/54) of them are correct. Finally, a prototype was implemented at Saab, awaiting the first real-world bug submitted by the developers to fix.
Within software engineering there is a diversity of process methods, each with its specific purpose. A process method can be described as a repeatable set of steps with the purpose of achieving a task and reaching a specific result. The majority of process methods found in this study focus on the software product being developed. There seems to be a lack of process methods that can be used by software developers for their individual software process improvement. Individual software process improvement refers to how the individual software developer chooses to structure their own work with the purpose of obtaining a specific result.
The Self-Governance Developer Framework (also called the SGD-framework) is, at the time of writing, a newly developed process framework with the purpose of aiding the individual software developer in improving their own individual software process. Briefly explained, the framework is intended to contain all the activities that can come up in a software project. The problem is that this tool has not yet been evaluated, and it is therefore unknown whether it is relevant for its purpose. To frame and guide the study, the following problem questions have been formulated: (1) Is the framework complete for a smaller company with regard to its activities? (2) How high is the cost of the SGD-framework in regard to time?
The goal of the study is to contribute to future studies of the framework by performing an action study in which the Self-Governance Developer Framework is evaluated against a set of chosen evaluation criteria.
An inductive qualitative research method was used when conducting the study. An inductive method means that conclusions are derived from empirically gathered data, and general theories are formed from that data. Specifically, the action study method was used. Data was gathered by keeping a logbook and by time logging during the action study. To evaluate the framework, the following evaluation criteria were used: (1) completeness, (2) semantic correctness, and (3) cost. A narrative analysis was conducted over the data gathered for these criteria. The analysis took the problem formulations into account.
The results from the evaluation showed that the framework was not complete with regard to the activities, although it was close to complete, as only a few additional activities were needed during the action study. A total of 3 extra activities were added on top of the regular 40 activities. Around 10% of the time spent in activities was spent in activities outside of the Self-Governance Developer Framework. The activities were considered too finely comminuted for the context of a smaller company. The framework was considered highly relevant for improving the individual software developer's own process. The introduction cost in this study reflects the time it took until the usage of the framework was considered consistent; in this study it was approximately 24 working days, with a usage of about 3.54% of an eight-hour work day. The total application cost of using the framework in the performed action study was on average 4.143 SEK/hour, or 662.88 SEK/month. The template cost used was 172.625 SEK/hour.
In this paper, we extend work on model‐based testing for Apache ZooKeeper, to handle watchers (triggers) and improve scalability. In a distributed asynchronous shared storage like ZooKeeper, watchers deliver notifications on state changes. They are difficult to test because watcher notifications involve an initial action that sets the watcher, followed by another action that changes the previously seen state.
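For readers unfamiliar with ZooKeeper watchers, the pattern under test looks roughly like the following; the sketch uses the Python kazoo client against a local server purely for illustration, whereas the paper itself drives the Java client through Modbat models.

```python
# Minimal illustration of the watcher pattern the tests must cover:
# one action sets the watch, a later action changes the watched state.
from kazoo.client import KazooClient

zk = KazooClient(hosts="127.0.0.1:2181")  # assumes a locally running ZooKeeper
zk.start()
zk.ensure_path("/demo")

def on_change(event):
    # Delivered asynchronously, at most once, after the state seen at get() changes.
    print("watch fired:", event.type, event.path)

data, stat = zk.get("/demo", watch=on_change)  # action 1: read and set the watcher
zk.set("/demo", b"new-value")                  # action 2: change the state, triggering the notification
zk.stop()
```

In the paper's setting these two actions may come from different concurrent client sessions, so the oracle must consider all network timings under which the notification can legally be delivered.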
We show how to generate test cases for concurrent client sessions executing against ZooKeeper with the tool Modbat. The tests are verified against an oracle that takes into account all possible timings of network communication. The oracle has to verify that there exists a chain of events that triggers both the initial callback and the subsequent watcher notification. We show in detail how the oracle computes whether watch triggers are correct and how the model was adapted and improved to handle these features. Together with a new search improvement that increases both speed and accuracy, we are able to verify large test setups and confirm several defects with our model.
Preconditions indicate when it is permitted to use a given function. However, it is not always the case that both outcomes of a precondition are observed during testing. A precondition that is always false makes a function unusable; a precondition that is always true may actually turn out to be an invariant. In model-based testing, preconditions describe when a transition may be executed from a given state. If no outgoing transition is enabled in a given state because all preconditions of all outgoing transitions are false, the test model may be flawed. Experiments show a low test coverage of preconditions in the Scala library. We also investigate preconditions in Modbat models for model-based testing; in that case, a certain number of test cases is needed to produce sufficient coverage, but the remaining cases of low coverage indeed point to legitimate flaws in test models or code.
Apache ZooKeeper is a distributed data storage system that is highly concurrent and asynchronous due to network communication; testing such a system is very challenging. Our solution, using the tool Modbat, generates test cases for concurrent client sessions and processes results from synchronous and asynchronous callbacks. We use an embedded model checker to compute the test oracle for non-deterministic outcomes; the oracle model evolves dynamically with each new test step. Our work has detected multiple previously unknown defects in ZooKeeper. Finally, a thorough coverage evaluation of the core classes shows how code and branch coverage strongly relate to feature coverage in the model, and hence to modeling effort.
Random test case generation produces relatively diverse test sequences, but the validity of the test verdict is always uncertain. Because tests are generated without taking the specification and documentation into account, many tests are invalid. To understand the prevalent types of successful and invalid tests, we present a classification of 56 issues that were derived from 208 failed, randomly generated test cases. While the existing workflow successfully eliminated more than half of the tests as irrelevant, half of the remaining failed tests are false positives. We show that the new @NonNull annotation of Java 8 has the potential to eliminate most of the false positives, highlighting the importance of machine-readable documentation.
FluidDyn is a project to foster open-science and open-source in the fluid dynamics community. It is thought of as a research project to channel open-source dynamics, methods and tools to do science. We propose a set of Python packages forming a framework to study fluid dynamics with different methods, in particular laboratory experiments (package fluidlab), simulations (packages fluidfft, fluidsim and fluidfoam) and data processing (package fluidimage). In the present article, we give an overview of the specialized packages of the project and then focus on the base package called fluiddyn, which contains common code used in the specialized packages. Packages fluidfft and fluidsim are described with greater detail in two companion papers [4, 5]. With the project FluidDyn, we demonstrate that specialized scientific code can be written with methods and good practices of the open-source community. The Mercurial repositories are available in Bitbucket (https://bitbucket.org/fluiddyn/). All codes are documented using Sphinx and Read the Docs, and tested with continuous integration run on Bitbucket Pipelines and Travis. To improve the reuse potential, the codes are as modular as possible, leveraging the simple object-oriented programming model of Python. All codes are also written to be highly efficient, using C++, Cython and Pythran to speed up the performance of critical functions.
Today little is known about what tools software companies are using to support their Agile methods and whether they are satisfied or dissatisfied with them. This is due to a lack of objective surveys on the subject. The surveys that have been conducted so far are of a subjective nature and have mostly been performed by tool vendors. They are very limited in number and focus mainly on company structure and adherence to a specific Agile method rather than on tool usage and needs. For this reason many companies have difficulties choosing appropriate tools to support their Agile process. One such company is the Swedish telecommunications giant Ericsson. To account for this lack of data, Ericsson commissioned us to conduct an independent survey focusing on the tool usage and needs as experienced by the Agile software community today. In this paper we report on the results of our survey. The survey covers 121 responses from 120 different companies coming from 35 different countries. Our results show that the most satisfactory tool aspect is ease of use, whereas the least satisfactory one is lack of integration with other systems. Finally, our results provide a list of features that are most desired by the software companies today.
To fully utilize multi-processors, new tools are required to manage software complexity. We present a novel technique that enables automating hierarchical process network transformations to derive optimized parallel applications. Designers leverage a library of process constructors and data-parallel algorithmic skeletons, utilizing the well-defined semantics of a restricted set of operators. This carefully chosen set addresses both temporal and spatial aspects of computation, enabling the automated identification of various parallel patterns. We utilize an augmented version of a meta-modeling framework grounded in system graphs and trait hierarchies to generate an intermediate representation (IR) of the system model to simplify automatic transformations and evaluations. Our augmentation allows for capturing skeletons and hierarchical networks. By meticulously selecting the underlying framework, we alleviate the need for tool integration in our design flow. We validate our approach through a proof-of-concept implementation, where our automated tool applied 193 transformations to fully parallelize an image processing application.
Software bills of materials (SBOMs) promise to become the backbone of software supply chain hardening. We deep-dive into six tools and the SBOMs they produce for complex open source Java projects, revealing challenges regarding the accurate production and usage of SBOMs.
IoT platforms enable users to connect various smart devices and online services via reactive apps running on the cloud. These apps, often developed by third parties, perform simple computations on data triggered by external information sources and actuate the results of computations on external information sinks. Recent research shows that unintended or malicious interactions between the different (even benign) apps of a user can cause severe security and safety risks. These works leverage program analysis techniques to build tools for unveiling unexpected interference across apps for specific use cases. Despite these initial efforts, we are still lacking a semantic framework for understanding interactions between IoT apps. The question of what security policy cross-app interference embodies remains largely unexplored. This paper proposes a semantic framework capturing the essence of cross-app interactions in IoT platforms. The framework generalizes and connects syntactic enforcement mechanisms to bisimulation-based notions of security, thus providing a baseline for formulating soundness criteria of these enforcement mechanisms. Specifically, we present a calculus that models the behavioral semantics of a system of apps executing concurrently, and use it to define desirable semantic policies targeting the security and safety of IoT apps. To demonstrate the usefulness of our framework, we define and implement static analyses for enforcing cross-app security and safety, and prove them sound with respect to our semantic conditions. We also leverage real-world apps to validate the practical benefits of our tools based on the proposed enforcement mechanisms.
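A toy example of the kind of cross-app interference the framework targets: two individually benign trigger-action apps whose composition leaks private data. The platform API, app code, and event names below are invented for illustration and are not the calculus or the analyzed apps.

```python
# Toy platform and two individually benign trigger-action apps.

class Platform:
    """Minimal in-memory stand-in for a trigger-action platform."""
    def __init__(self):
        self.events = {}

    def trigger(self, name):
        return self.events.get(name)

    def action(self, name, data):
        print("ACTION:", name, data)
        if name == "storage.save":
            # Saving a file is itself an event that other apps can react to.
            self.events["storage.new_file"] = data

def app_backup(p):
    # App 1: back up every new photo to cloud storage (private intent).
    photo = p.trigger("camera.new_photo")
    if photo:
        p.action("storage.save", photo)

def app_share(p):
    # App 2: publish every new storage file to a public feed.
    file = p.trigger("storage.new_file")
    if file:
        p.action("social.post", file)

p = Platform()
p.events["camera.new_photo"] = "private.jpg"
app_backup(p)   # camera -> storage
app_share(p)    # storage -> public feed: the composed flow leaks the photo
```

A cross-app security policy has to reject this composition even though each app, viewed in isolation, respects its own trigger and action interface.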
User experience has been extensively discussed in the literature, yet the idea of applying it to explain and comprehend the conceptualization of Mobile Learning (ML) is relatively new. Consequently, much of the existing work is mainly theoretical and concentrates on establishing and explaining the relationship between ML and experience. Little has been done to apply or adopt it in practice. In contrast to currently existing approaches, this paper presents an ontology to support Citywide Mobile Learning (CML). The ontology presented in this paper addresses three fundamental aspects of CML, namely the User Model, User Experience, and the Places/Spaces which exist in the city. The ontology presented here not only attempts to model and translate theoretical concepts such as user experience and Places/Spaces into the citywide context of Mobile Learning, but also to apply them in practice. The discussed ontology is used in our system to support Place/Space-based CML.
Software bugs are common, and correcting them accounts for a significant portion of the costs in the software development and maintenance process. In this article, we discuss R-Hero, our novel system for learning how to fix bugs based on continual training.
High-quality data is essential for designing effective software test suites. We propose three original methods for using large language models to generate representative test data, which fit the domain of the program under test and are culturally adequate.
It's a period of unrest. Rebel developers, striking from continuous deployment servers, have won their first victory. During the battle, rebel spies managed to push an epic commit in the HTML code of https://pro.sony. Pursued by sinister agents, the rebels are hiding in commits, buttons, tooltips, API, HTTP headers, and configuration screens.
Object-orientation increases the portability, flexibility and reuse of software components; distribution improves performance, scalability and collaboration. We propose a distributed object computing (DOC) framework for the development of sound processing software on workstations and personal computers, which, apart from providing low-level support for object distribution, also mediates between program parts that have been written in different languages. The system's design is based on a generalization and unification of concepts from the various prevalent paradigms of computer music.
Systems of Systems (SoSs) integrate many critical systems our society relies on. In designing individual systems, stakeholders use bespoke protocols, custom information models, and proprietary components with limited computational resources from various vendors. We present a reference architecture that allows multiple stakeholders to carry out a flexible integration without giving up control to a single entity in the presence of the aforementioned limitations. Our architecture relies on rule engines and a graph data model to integrate systems flexibly even when black-box components are used. At the same time, a federation of the rule engines allows each stakeholder to retain control over the rules that reflect their policies. We also rely on a common information model based on ontologies to account for the information model mismatch and reduce the duplication of integration efforts. Moving rule execution to standalone rule engines allows deployment in resource-constrained and proprietary environments. A uniform application programming interface (API) is used to integrate rule engines across systems as well as components within each system with a respective rule engine. We also present a novel algorithm to determine dependencies across rules deployed in different rule engines within the federation. This allows domain experts to develop rules as usual without having to deal with the distributed aspect of the system. We also present a proof of a sufficient condition ensuring that all notifications necessary for correct rule activation across different rule engines will be sent. Compared to other systems involving distributed rules, the proposed architecture is well suited for the integration of transactional workloads commonly found in enterprises. The qualitative evaluation based on the Architecture Tradeoff Analysis Method (ATAM), applied to a telecommunications use case, shows that the architecture possesses the “interoperability”, “modifiability”, and “functional completeness” quality attributes, with a trade-off around rule expressiveness. The quantitative evaluation demonstrates speedup over the single-node setup in most scenarios, except in the case of highly optimized rules combined with poor network performance (tr < 10 ms, tn = 100 ms).
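To illustrate the cross-engine dependency idea (not the actual algorithm presented here), one can view each rule as reading and writing sets of facts; a dependency edge exists whenever a rule in one engine writes a fact that a rule in another engine reads, and these edges determine which notifications the federation must forward. Rule names and fact names below are invented.

```python
# Illustrative sketch of cross-rule dependency detection across federated
# rule engines; rule structure and names are hypothetical.

rules = {
    "engine-1/overload":  {"reads": {"cell.load"},        "writes": {"cell.overloaded"}},
    "engine-2/rebalance": {"reads": {"cell.overloaded"},  "writes": {"traffic.rerouted"}},
    "engine-2/report":    {"reads": {"traffic.rerouted"}, "writes": {"report.sent"}},
}

def dependencies(rules):
    """Rule B depends on rule A if A writes a fact that B reads."""
    deps = []
    for producer, p in rules.items():
        for consumer, c in rules.items():
            if producer != consumer and p["writes"] & c["reads"]:
                deps.append((producer, consumer))
    return deps

# Each cross-engine edge tells the federation which notifications must be
# forwarded so the downstream rule activates correctly.
for src, dst in dependencies(rules):
    print(f"{src} -> {dst}")
```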
SMLocalizer combines the availability of ImageJ with the power of GPU processing for fast and accurate analysis of single molecule localization microscopy data. Analysis of 2D and 3D data in multiple channels is supported.
On-demand resource provisioning is an important feature of cloud computing, offering several benefits such as scalability, enhanced performance, low maintenance cost, elasticity, adequate storage, ubiquitous accessibility, and minimal infrastructure. At the same time, the usage of mobile phones is becoming increasingly common throughout the world. These phones also come to market with appealing features such as fast processing units, internal and external memory up to MBs and GBs respectively, web browsing, built-in sensors, and powerful cameras. This master’s thesis project proposes a novel approach to utilizing all these features of semi-autonomous mobile devices (especially smartphones) together with cloud infrastructure.
The specific aim of the project is to design, implement and present a framework for multi-featured smartphones on top of a cloud infrastructure. The proposed framework is implemented and tested for two different communication methods, known as client poll and server push. In the second phase, a performance analysis of the implemented framework is carried out via simulation to compare the two methods of interaction and to observe the server’s load. As a result, it is first found that server-initiated communication (i.e. server push) requires 40% to 50% less time compared to client-initiated communication (i.e. client poll). Second, it is observed that the application server load of the framework is not significantly affected by an increasing number of client requests.
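The difference between the two interaction methods can be illustrated with a toy in-process model (not the actual framework or its simulation): in client poll the phone repeatedly asks for updates, whereas in server push the connection stays open and updates are delivered as soon as they are produced.

```python
import queue, threading, time

def server_produce(channel, n=3):
    # Cloud side: produce updates over time.
    for i in range(n):
        time.sleep(0.1)
        channel.put(f"update-{i}")

def client_poll(channel, n=3, interval=0.05):
    # Client poll: the phone repeatedly asks whether anything is new,
    # paying a round trip (and waiting up to `interval`) per attempt.
    received = []
    while len(received) < n:
        try:
            received.append(channel.get_nowait())
        except queue.Empty:
            time.sleep(interval)
    return received

def server_push(channel, n=3):
    # Server push: the connection stays open and each update is delivered
    # as soon as the cloud produces it (modelled here as a blocking get).
    return [channel.get() for _ in range(n)]

for client in (client_poll, server_push):
    ch = queue.Queue()
    threading.Thread(target=server_produce, args=(ch,), daemon=True).start()
    print(client.__name__, client(ch))
```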
Tool chains are expected to increase the productivity of product development by providing automation and integration. If, however, the tool chain does not have the features required to support the product development process, it falls short of this expectation. Tool chains could reach their full potential if it could be ensured that the features of a tool chain are aligned with the product development process. As part of a systematic development approach for tool chains, we propose a verification method that measures the extent to which a tool chain design conforms to the product development process and identifies misalignments. The verification method can be used early in tool chain development, when it is relatively easy and cheap to perform the necessary corrections. Our verification method is automated, which allows for quick feedback and enables iterative design. We apply the proposed method on an industrial tool chain, where it is able to identify improvements to the design of the tool chain.
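In spirit, the conformance measurement can be thought of as checking which activities of the product development process are supported by at least one feature of the tool chain design. The toy sketch below, with invented activity and feature names and far simpler than the actual verification method, shows the kind of coverage figure and misalignment list such a check can produce.

```python
# Toy conformance check between a process and a tool chain design;
# all activity and feature names are invented for illustration.

process_activities = {"create requirement", "derive test case",
                      "trace test to requirement", "generate report"}

toolchain_features = {
    "req-tool adapter":  {"create requirement"},
    "test-tool adapter": {"derive test case"},
    "report generator":  {"generate report"},
}

supported = set().union(*toolchain_features.values())
misaligned = process_activities - supported
coverage = len(supported & process_activities) / len(process_activities)

print(f"coverage: {coverage:.0%}")            # 75%
print("unsupported activities:", misaligned)  # activities the design misses
```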
The development of complex systems requires tool support for the different phases of the system life cycle. To allow for an efficient development process, the involved tools need to be integrated, e.g. by exchanging tool data or providing traceability between the data. Despite the availability of tool integration platforms and frameworks, it is labor-intensive and costly to build tool integration solutions. Industrial tool integration initiatives such as OSLC (Open Services for Lifecycle Collaboration) demand complex configurations and adherence to integration standards. This further complicates building an integration solution. We propose an approach that uses formalized specifications to systematize tool integration and specialized code generators to automate the process of building tool adapters. We evaluate our approach with the implementation of a code generator that creates service-oriented tool adapters conforming to the OSLC industry initiative.
The engineering of software-intensive systems is supported by a variety of development tools. While development tools are traditionally desktop tools, they are more and more complemented and replaced by web-based development tools. The resulting blend of desktop and web-based tools is difficult to integrate into a seamless tool chain, which supports product development by data, control and presentation integration. Moreover, the construction of such tool chains is a significant engineering challenge. We propose an approach for the efficient, automated construction of tool chains, which integrate both web-based and desktop development tools; and provide a proof of concept of the approach in a case study. Our approach suggests that companies can selectively take advantage of hosted web-based development tools, while maintaining a seamless flow of integration with legacy desktop tools.
We present StochasticPrograms.jl, a user-friendly and powerful open-source framework for stochastic programming written in the Julia language. The framework includes both modeling tools and structure-exploiting optimization algorithms. Stochastic programming models can be efficiently formulated using an expressive syntax, and models can be instantiated, inspected, and analyzed interactively. The framework scales seamlessly to distributed environments. Small instances of a model can be run locally to ensure correctness, whereas larger instances are automatically distributed in a memory-efficient way onto supercomputers or clouds and solved using parallel optimization algorithms. These structure-exploiting solvers are based on variations of the classical L-shaped, progressive-hedging, and quasi-gradient algorithms. We provide a concise mathematical background for the various tools and constructs available in the framework along with code listings exemplifying their usage. Both software innovations related to the implementation of the framework and algorithmic innovations related to the structured solvers are highlighted. We conclude by demonstrating strong scaling properties of the distributed algorithms on numerical benchmarks in a multinode setup.
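For context, the models handled by the framework are instances of the classical two-stage stochastic program with recourse, which is the structure the L-shaped and progressive-hedging solvers exploit (textbook notation, not necessarily the framework's):

\[
\begin{aligned}
\min_{x \in \mathcal{X}}\quad & c^{\top}x + \mathbb{E}_{\xi}\big[\,Q(x,\xi)\,\big],\\
\text{where}\quad & Q(x,\xi) \;=\; \min_{y \ge 0}\;\big\{\, q(\xi)^{\top}y \;:\; W y = h(\xi) - T(\xi)\,x \,\big\},
\end{aligned}
\]

with first-stage decision x taken before the uncertainty ξ is revealed and recourse decision y taken afterwards; the scenario-wise separability of Q is what enables the distributed, structure-exploiting algorithms.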