  • Public defence: 2018-08-24 10:00 Sal F3, Stockholm
    Fan, Xuge
    KTH, School of Electrical Engineering and Computer Science (EECS).
    Integration of graphene into MEMS and NEMS for sensing applications (2018). Doctoral thesis, comprehensive summary (Other academic)
    Abstract [en]

    This thesis presents a novel approach to integrating chemical vapor deposition (CVD) graphene into silicon micro- and nanoelectromechanical systems (MEMS/NEMS) in order to fabricate different graphene-based MEMS/NEMS structures, explore the mechanical properties of graphene, and demonstrate applications such as acceleration sensing, humidity sensing and CO2 sensing. The thesis also presents a novel method for characterizing grain-boundary-based defects in CVD graphene.

    The first section of this thesis presents a robust, scalable and flexible route for integrating double-layer graphene membranes onto a silicon substrate so that large silicon masses are suspended by the graphene membranes.

    In the second section, doubly clamped suspended graphene beams with attached silicon masses are fabricated and used both as model systems for studying the mechanical properties of graphene and as transducer elements for NEMS resonators and extremely small accelerometers, occupying die areas at least two orders of magnitude smaller than those of the most compact state-of-the-art silicon accelerometers. An average Young's modulus of double-layer graphene of ~0.22 TPa and non-negligible built-in stresses of the order of 200-400 MPa in the suspended graphene beams are extracted using analytical and FEA models. In addition, fully clamped suspended graphene membranes with attached proof masses are realized and used for acceleration sensing.
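
    The analytical extraction can be illustrated with a short sketch. It assumes one commonly used point-load model for a pre-tensioned doubly clamped beam, F = k1*d + k3*d^3, where k1 is set by the built-in stress and the bending stiffness and k3 by the Young's modulus; this is a simplification of the thesis's analytical/FEA modelling, and the geometry and synthetic data below are illustrative only.

        import numpy as np

        # Hypothetical geometry of a doubly clamped double-layer graphene beam.
        L = 4e-6      # length (m)
        w = 1e-6      # width (m)
        t = 0.67e-9   # double-layer graphene thickness (m)

        # A measured centre-point force-deflection curve would replace this
        # synthetic one (chosen to mimic the reported magnitudes).
        d = np.linspace(1e-9, 100e-9, 50)        # deflection (m)
        F = 0.25 * d + 2.8e13 * d**3             # force (N)

        # Least-squares fit of F = k1*d + k3*d^3 (no even terms by symmetry).
        A = np.vstack([d, d**3]).T
        k1, k3 = np.linalg.lstsq(A, F, rcond=None)[0]

        # Invert the assumed model:
        #   k1 = pi^2*sigma0*w*t/(2L) + pi^4*E*w*t^3/(6L^3),  k3 = pi^4*E*w*t/(8L^3)
        E = 8 * L**3 * k3 / (np.pi**4 * w * t)
        sigma0 = (k1 - np.pi**4 * E * w * t**3 / (6 * L**3)) * 2 * L / (np.pi**2 * w * t)
        print(f"E ~ {E/1e12:.2f} TPa, built-in stress ~ {sigma0/1e6:.0f} MPa")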

    In the third section, CO2 sensing with single-layer graphene and the cross-sensitivity between CO2 and humidity are shown; the cross-sensitivity to CO2 is negligible at the CO2 concentrations typically present in air. The response of double-layer graphene to humidity and CO2 has also been characterized: it shows similarly fast response and recovery behaviour but weaker resistance responses than single-layer graphene.

    In the fourth section, a fast and simple method for large-area visualization of grain boundaries in CVD graphene transferred to a SiO2 surface is demonstrated. The method requires only vapor hydrofluoric acid (VHF) etching and optical microscope inspection; it could therefore help speed up the development of large-scale, high-quality graphene synthesis, and it can also be used to analyze the influence of grain boundaries on the properties of emerging graphene devices that use CVD graphene patches placed on a SiO2 substrate.

  • Public defence: 2018-08-27 12:39 F3, Stockholm
    Toscani, Giulio
    KTH, School of Industrial Engineering and Management (ITM), Industrial Economics and Management (Dept.), Industrial Marketing and Entrepreneurship.
    Understanding the Sponsee's Experience: An Assessment of the Sponsor-Sponsee Relationship (2018). Doctoral thesis, monograph (Other academic)
    Abstract [en]

    Sponsorship is the fastest-growing marketing communication tool, both in terms of volume and complexity. The two central players in any sponsorship arrangement are the sponsor and the sponsored institution (the sponsee). Sponsors are gradually increasing the amounts that they invest in sponsorships and elevating the outcome requirements for their investments, as reflected in the large body of research dedicated to sponsors' needs in terms of brand awareness, consumer loyalty and evaluation of results. The sponsee's needs, on the other hand, are relatively neglected, especially in the arts sector, where there has been little research focused on what arts sponsees require from a sponsorship arrangement. This research fills this gap by investigating the sponsorship process that arts sponsees go through and provides the first theoretical model of this process. Because of the need to inductively explain the process, taking into account its causes and consequences, the grounded theory method is used to develop a substantive theoretical model. In-depth interviews with 31 arts sponsorship managers, globally dispersed and with demonstrated experience in sponsorship, were collected; they indicate that the arts sponsee's reciprocity with a sponsor in a sponsorship interaction is a highly complex experience involving actors both internal to the arts sponsee and external at the sponsor. Within the complexity of the experience, the relationship is arguably not a developmentally normal experience, given arts sponsees' professional situations. The conclusion is that the reciprocity that arts sponsees experience throughout the sponsorship interaction is often not acknowledged or understood and would benefit from further empirical research.

  • Public defence: 2018-08-27 13:00 F3, Stockholm
    Saranurak, Thatchaphol
    KTH, School of Electrical Engineering and Computer Science (EECS), Theoretical Computer Science, TCS.
    Dynamic algorithms: new worst-case and instance-optimal bounds via new connections (2018). Doctoral thesis, comprehensive summary (Other academic)
    Abstract [en]

    This thesis studies a series of questions about dynamic algorithms, which are algorithms for quickly maintaining some information about input data undergoing a sequence of updates. The first question asks how small the update time for handling each update can be for each dynamic problem. To obtain fast algorithms, several relaxations are often used, including allowing amortized update time, allowing randomization, or even assuming an oblivious adversary. Hence, the second question asks whether these relaxations and assumptions can be removed without sacrificing speed. Some dynamic problems are successfully solved by fast dynamic algorithms without any relaxation. The guarantee of such algorithms, however, is for a worst-case scenario. This leads to the last question, which asks for an algorithm whose cost is nearly optimal for every scenario, namely an instance-optimal algorithm. This thesis shows new progress on all three questions.

    For the first question, we give two frameworks for showing the inherent limitations of fast dynamic algorithms. First, we propose the Online Boolean Matrix-vector Multiplication (OMv) conjecture. Assuming this conjecture, we obtain new tight conditional lower bounds on update time for more than ten dynamic problems, even when algorithms are allowed large polynomial preprocessing time. Second, we establish the first analogue of "NP-completeness" for dynamic problems, and show that many natural problems are "NP-hard" in the dynamic setting. This hardness result is based on the hardness of all problems in a huge class that includes a number of natural and hard dynamic problems; all previous conditional lower bounds for dynamic problems were based on the hardness of specific problems/conjectures.
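
    For readers new to OMv, the conjectured-hard problem is simple to state: preprocess a Boolean n-by-n matrix M, then answer each Boolean matrix-vector product Mv online, before the next query vector arrives. The naive baseline below spends O(n^2) per query, i.e. O(n^3) over n queries; the conjecture asserts that no algorithm is polynomially faster overall, even with polynomial preprocessing (illustrative sketch only).

        import numpy as np

        def omv_naive(M):
            """Online Boolean matrix-vector multiplication: preprocess Boolean M,
            then answer each query vector v with Mv (over the Boolean semiring)
            before seeing the next one. Naive cost: O(n^2) per query."""
            M = np.asarray(M, dtype=bool)
            def query(v):
                v = np.asarray(v, dtype=bool)
                return (M & v).any(axis=1)   # OR over AND: Boolean product row by row
            return query

        # n queries against an n x n matrix cost O(n^3) total for this baseline;
        # the OMv conjecture says no algorithm does polynomially better.
        q = omv_naive([[1, 0], [1, 1]])
        print(q([0, 1]).astype(int))  # -> [0 1]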

    For the second question, we give an algorithm for maintaining a minimum spanning forest in an n-node graph undergoing edge insertions and deletions using n^{o(1)} worst-case update time with high probability. This significantly improves the long-standing O(sqrt(n)) bound of [Frederickson STOC'83; Eppstein, Galil, Italiano and Nissenzweig FOCS'92]. Previously, a spanning forest (possibly not minimum) could be maintained with polylogarithmic update time only if either amortized update time was allowed or an oblivious adversary was assumed. Therefore, our work shows how to eliminate these relaxations without slowing down updates too much.
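
    To make the notion of update time concrete, here is the trivial fully dynamic baseline that such results improve upon: store the edge set and rerun Kruskal's algorithm after every insertion or deletion, paying O(m log m) per update. This is an illustrative sketch, not the thesis algorithm.

        class DynamicMSFBaseline:
            """Trivial baseline: store the edge set and rerun Kruskal per update.
            Every update costs O(m log m) -- the kind of bound the thesis improves."""
            def __init__(self, n):
                self.n, self.edges = n, {}

            def insert(self, u, v, w):
                self.edges[(u, v)] = w
                return self.msf()

            def delete(self, u, v):
                self.edges.pop((u, v), None)
                return self.msf()

            def msf(self):
                parent = list(range(self.n))
                def find(x):
                    while parent[x] != x:
                        parent[x] = parent[parent[x]]  # path halving
                        x = parent[x]
                    return x
                forest = []
                for (u, v), w in sorted(self.edges.items(), key=lambda e: e[1]):
                    ru, rv = find(u), find(v)
                    if ru != rv:
                        parent[ru] = rv
                        forest.append((u, v, w))
                return forest

        msf = DynamicMSFBaseline(4)
        msf.insert(0, 1, 1.0); msf.insert(1, 2, 2.0)
        print(msf.insert(0, 2, 1.5))  # -> [(0, 1, 1.0), (0, 2, 1.5)]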

    For the last question, we show two main contributions to the theory of instance-optimal dynamic algorithms. First, we use forbidden submatrix theory from combinatorics to show that a binary search tree (BST) algorithm called Greedy has almost optimal cost when its input avoids a pattern. This is significant progress towards the Traversal Conjecture [Sleator and Tarjan JACM'85] and its generalization. Second, we initiate the theory of instance optimality for heaps by showing a general transformation between BSTs and heaps and then transferring the rich analogous theory of BSTs to heaps. Via this connection, we discover a new heap, called the smooth heap, which is very simple to implement yet inherits most guarantees from the BST literature on being instance-optimal for various kinds of inputs. The common approach behind all our results is making new connections between dynamic algorithms and other fields, including fine-grained and classical complexity theory, approximation algorithms for graph partitioning, local clustering algorithms, and forbidden submatrix theory.

  • Public defence: 2018-08-29 14:00 Salongen, Stockholm
    Buckley, Jeffrey
    KTH, School of Industrial Engineering and Management (ITM).
    Investigating the role of spatial ability as a factor of human intelligence in technology education: Towards a causal theory of the relationship between spatial ability and STEM education (2018). Doctoral thesis, comprehensive summary (Other academic)
    Abstract [en]

    Education is a particularly complex discipline due to the numerous variables which impact on teaching and learning. Due to the large effect of human intelligence on the variance in student educational achievement, there is a substantial need to further contemporary understandings of its role in education. Multiple paradigms exist regarding the study of human intelligence. One in particular, the psychometric tradition, has offered many critical findings which have had a substantial impact on STEM education. One of the most significant offerings of this approach is the wealth of empirical evidence demonstrating the importance of spatial ability in STEM education. However, while categorically identified as important, a causal relationship between spatial ability and STEM is yet to be confirmed.

    As there is insufficient evidence to support a causal investigation, this thesis aims to develop an empirically based causal theory to make this possible. Five studies were conducted to achieve this aim and are described in the appended papers. As the research explores spatial ability in technology education, Paper I examines the epistemological position of technology education within STEM education. Based on the evidence showing spatial ability is important in Science, Engineering and Mathematics, Paper II explores its relevance to Technology. Paper III offers an empirically based definition for spatial ability through a synthesis of contemporary research and illustrates empirically where it has been observed as important to STEM learning. Paper IV examines the perceived importance of spatial ability relative to intelligence in STEM education from the perspective of technology education. Finally, Paper V examines the psychometric relationship between spatial ability and fluid intelligence (Gf) based on a hypothesis generated throughout the preceding papers.

    The main results of this thesis illustrate the predictive capacity of visualization (Vz), memory span (MS), and inductive reasoning (I) for fluid intelligence (Gf), which is posited to offer a causal explanation based on the creative, innovative, and applied nature of STEM. Additional findings include the observations that learners use problem-solving strategies which align with their cognitive strengths, that external representations of problems can scaffold the use of spatial ability or alleviate the need for it, that the variability of knowledge types across STEM sub-disciplines may affect the nature of reasoning within disciplines, and that for technology education specifically, acquiring an explicit knowledge base is not perceived to denote intelligence while the capacity to reason abstractly to solve novel problems is. This epistemological fluidity and focus on reasoning highlight the unique way in which technology education can provide insight into intelligence in STEM education. The implications of these results are discussed with specific focus on their theoretical validity and potential application in applied educational contexts.
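
    The flavor of such a psychometric prediction can be shown with a minimal least-squares sketch. The variable names Vz, MS, I and Gf follow the abstract; the synthetic data, coefficients and linear model form are hypothetical, not the thesis results.

        import numpy as np

        rng = np.random.default_rng(0)
        n = 200
        # Synthetic standardized test scores (illustrative only, not thesis data).
        Vz = rng.standard_normal(n)   # visualization
        MS = rng.standard_normal(n)   # memory span
        I  = rng.standard_normal(n)   # inductive reasoning
        Gf = 0.5*Vz + 0.3*MS + 0.4*I + 0.3*rng.standard_normal(n)  # fluid intelligence

        # Ordinary least squares: how well do Vz, MS and I predict Gf?
        X = np.column_stack([np.ones(n), Vz, MS, I])
        beta, *_ = np.linalg.lstsq(X, Gf, rcond=None)
        pred = X @ beta
        r2 = 1 - np.sum((Gf - pred)**2) / np.sum((Gf - Gf.mean())**2)
        print("coefficients:", np.round(beta[1:], 2), "R^2:", round(r2, 2))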

  • Public defence: 2018-09-03 13:15 Sal Ka-208, Kista, Stockholm
    Castañeda Lozano, Roberto
    KTH, School of Electrical Engineering and Computer Science (EECS), Software and Computer systems, SCS. RISE SICS (Swedish Institute of Computer Science).
    Constraint-Based Register Allocation and Instruction Scheduling (2018). Doctoral thesis, comprehensive summary (Other academic)
    Abstract [en]

    Register allocation (mapping variables to processor registers or memory) and instruction scheduling (reordering instructions to improve latency or throughput) are central compiler problems. This dissertation proposes a combinatorial optimization approach to these problems that delivers optimal solutions according to a model, captures trade-offs between conflicting decisions, accommodates processor-specific features, and handles different optimization criteria.

    The use of constraint programming and a novel program representation enables a compact model of register allocation and instruction scheduling. The model captures the complete set of global register allocation subproblems (spilling, assignment, live range splitting, coalescing, load-store optimization, multi-allocation, register packing, and rematerialization) as well as additional subproblems that handle processor-specific features beyond the usual scope of conventional compilers.
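
    The core constraint in any such model can be sketched in a few lines: simultaneously live (interfering) temporaries must receive different registers, and a temporary spills to memory when no register is feasible. The toy backtracking solver below illustrates only that one constraint; Unison's actual model covers the full list of subproblems above.

        def allocate(temps, interferes, registers):
            """Assign each temporary a register such that interfering temporaries
            differ, or 'spill' if no register fits. Toy backtracking search."""
            assignment = {}
            def solve(i):
                if i == len(temps):
                    return True
                t = temps[i]
                for r in registers:
                    if all(assignment.get(u) != r for u in interferes.get(t, ())):
                        assignment[t] = r
                        if solve(i + 1):
                            return True
                        del assignment[t]
                assignment[t] = "spill"          # fall back to memory
                return solve(i + 1)
            solve(0)
            return assignment

        # Three pairwise-interfering temporaries, two registers: one must spill.
        print(allocate(["t1", "t2", "t3"],
                       {"t1": {"t2", "t3"}, "t2": {"t1", "t3"}, "t3": {"t1", "t2"}},
                       ["r0", "r1"]))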

    The approach is implemented in Unison, an open-source tool used in industry and research that complements the state-of-the-art LLVM compiler. Unison applies general and problem-specific constraint solving methods to scale to medium-sized functions, solving functions of up to 647 instructions optimally and improving functions of up to 874 instructions. The approach is evaluated experimentally using different processors (Hexagon, ARM and MIPS), benchmark suites (MediaBench and SPEC CPU2006), and optimization criteria (speed and code size reduction). The results show that Unison generates code of slightly to significantly better quality than LLVM, depending on the characteristics of the targeted processor (1% to 9.3% mean estimated speedup; 0.8% to 3.9% mean code size reduction). Additional experiments for Hexagon show that its estimated speedup has a strong monotonic relationship to the actual execution speedup, resulting in a mean speedup of 5.4% across MediaBench applications.

    The approach contributed by this dissertation is the first of its kind that is practical (it captures the complete set of subproblems, scales to medium-sized functions, and generates executable code) and effective (it generates better code than the LLVM compiler, fulfilling the promise of combinatorial optimization). It can be applied to trade compilation time for code quality beyond the usual optimization levels, explore and exploit processor-specific features, and identify improvement opportunities in conventional compilers.

  • Public defence: 2018-09-07 11:59 F3, Stockholm
    Sjölander, Jens
    KTH, School of Engineering Sciences (SCI), Aeronautical and Vehicle Engineering, Lightweight Structures.
    Improving Forming of Aerospace Composite Components through Process Modelling (2018). Doctoral thesis, comprehensive summary (Other academic)
    Abstract [en]

    In the aerospace industry there is a constant effort to reduce the weight of aircraft, since weight reduction has a direct impact on fuel consumption. Reducing fuel consumption brings both economic benefits, through less money spent on fuel, and environmental benefits, through reduced CO2 emissions. One way that weight savings have been achieved in the last couple of decades is by replacing metals with carbon fiber composites in structural components, where a common choice is unidirectional pre-impregnated (UD prepreg) carbon fiber. Traditionally, manufacturing is done by hand lay-up, where one ply at a time is laid up on a tool. However, the need to make large production volumes feasible has led to a demand for automated manufacturing processes. One way to rationalize production is to form the whole laminate at once instead of layer by layer, as is presently done with the single and double diaphragm forming techniques. The challenge with forming stacked laminates is that the individual plies interact with each other as they conform to the geometry, increasing the likelihood that defects develop.

    This thesis investigates the effect of forming method and process parameters on the development of manufacturing faults and on the geometry of the finished formed part, and studies whether these faults can be predicted in numerical simulations. First, a method for forming stacked laminates using an industrial robot, with methods inspired by human forming techniques, is presented. Using this system the effect of different forming sequences on the appearance of wrinkles can be investigated. Forming simulations were done to relate the appearance of wrinkles to ply strains detected in the simulated forming process. The method is used to manufacture joggled spars with a length of 1.4 m and a laminate consisting of 20 plies. Thereafter, process simulation of hot drape forming (HDF) is used to determine why wrinkling occurs when plies with specific fiber directions are combined in a stack. This study is supported by an experimental study where plies using two different material systems were mixed in the stack to promote or suppress different types of wrinkles. This led to the discovery that the observed wrinkles could be divided into two main types: global wrinkles, where the whole laminate is under compression due to the geometry, and local wrinkles, where wrinkling is initiated by compression of one layer due to interaction with surrounding layers.

    In the fifth paper the impact of the forming method on radius thinning is investigated. By comparing hand lay-up and HDF it is shown that a majority of the radius thinning of a laminate can occur already in the forming step if HDF is used. In the last study, inter-ply shear of prepreg under a variety of testing parameters is investigated, including different relative fiber directions between the plies. The study shows that the relative fiber direction is an important parameter to take into account when characterizing inter-ply shear, as the force required to shear an interface with a fiber-direction difference of 0° is significantly higher than the force required to shear interfaces with differences of 45° and 90°. Taking this difference into account also has a significant impact on the results of forming simulations: models that included the difference in inter-ply shear behavior showed a higher tendency for in-plane wrinkling.
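
    The inter-ply shear finding lends itself to a small illustration: given a stacking sequence, the relative fiber angle at each ply interface indicates how strongly that interface resists shear, with 0° interfaces the stiffest. The layup and tagging below are hypothetical; the measured shear forces are in the appended papers.

        def interface_angles(layup):
            """Relative fiber angle (0-90 degrees) at each ply interface."""
            diffs = []
            for a, b in zip(layup, layup[1:]):
                d = abs(a - b) % 180
                diffs.append(min(d, 180 - d))
            return diffs

        # A hypothetical 8-ply quasi-isotropic layup; per the study, 0-degree
        # interfaces resist inter-ply shear far more than 45/90-degree ones.
        layup = [0, 45, 90, -45, -45, 90, 45, 0]
        for i, d in enumerate(interface_angles(layup)):
            tag = "high shear force" if d == 0 else "low shear force"
            print(f"interface {i}: {d:>2} deg -> {tag}")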

  • Public defence: 2018-09-07 13:00 FR4, Stockholm
    Larsson, Jakob C.
    KTH, School of Engineering Sciences (SCI), Applied Physics, Biomedical and X-ray Physics.
    Laboratory x-ray fluorescence tomography (2018). Doctoral thesis, comprehensive summary (Other academic)
    Abstract [en]

    X-ray fluorescence (XRF) tomography is an emerging bio-imaging modality with potential for high-resolution molecular imaging in 3D. In this technique the fluorescence signal from targeted nanoparticles (NPs) is measured, providing information about the spatial distribution and concentration of the NPs inside the object. However, present laboratory XRF tomography systems typically have limited spatial resolution (>1 mm) and suffer from long scan times and high radiation dose even at high NP concentrations, mainly due to low efficiency and poor signal-to-noise ratio (SNR). Other macroscopic biomedical imaging methods provide either structural information with high spatial resolution (e.g., CT) or functional/molecular information with lower resolution (e.g., PET).

    In this Thesis we present a laboratory XRF tomography system with high spatial resolution (sub-200 μm), low NP concentration and vastly reduced scan times and dose, opening up possibilities for in vivo small-animal imaging research. The system consists of a high-brightness liquid-metal-jet microfocus x-ray source, x-ray focusing optics and two photon-counting detectors. By using the source's characteristic 24 keV line emission together with spectrally matched molybdenum NPs, the Compton background is greatly reduced, increasing the SNR. Each measurement provides information about the spatial distribution and concentration of the NPs, as well as the absorption of the object. An iterative method is used to obtain a quantitative reconstruction of the XRF image. The reconstructed absorption and XRF images are finally combined into a single 3D overlay image.
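
    The abstract does not name the iterative method, so as a generic stand-in, a Kaczmarz/ART-style iteration for the quantitative reconstruction step (solve A x = b for the NP distribution x from fluorescence ray sums b) can be sketched as follows, with all values hypothetical.

        import numpy as np

        def art(A, b, iters=50, relax=0.5):
            """Kaczmarz/ART iteration for A x = b with a non-negativity constraint,
            a generic stand-in for quantitative XRF reconstruction."""
            x = np.zeros(A.shape[1])
            for _ in range(iters):
                for ai, bi in zip(A, b):
                    denom = ai @ ai
                    if denom > 0:
                        x += relax * (bi - ai @ x) / denom * ai
                np.maximum(x, 0, out=x)      # NP concentrations cannot be negative
            return x

        # Toy 2-pixel "object" measured along three ray sums.
        A = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
        x_true = np.array([0.2, 0.8])        # hypothetical NP concentrations
        print(np.round(art(A, A @ x_true), 3))  # -> close to [0.2, 0.8]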

    Using this system we have demonstrated high-resolution dual CT and XRF imaging of both phantoms and mice at radiation doses compatible with in vivo small-animal imaging.

  • Public defence: 2018-09-21 10:00 rum 4301, Stockholm
    Zheng, Weisen
    KTH, School of Industrial Engineering and Management (ITM), Materials Science and Engineering.
    Thermodynamic and kinetic investigation of systems related to lightweight steels (2018). Doctoral thesis, comprehensive summary (Other academic)
    Abstract [en]

    Lightweight steels have attracted considerable interest for automobile applications due to the weight reduction they offer without loss of high strength and with retained excellent plasticity. In austenitic Fe-Mn-Al-C steels, nano-precipitation of the κ-carbide within the austenitic matrix contributes significantly to the increase in yield strength. In the present work, precipitation strengthening simulations have been carried out within the framework of the ICME approach. Thermodynamic assessments of the quaternary Fe-Mn-Al-C system as well as its sub-ternary systems were performed with the CALPHAD method. All available information on phase equilibria and thermochemical properties was critically evaluated and used to optimize the thermodynamic model parameters. By means of the partitioning model, the κ-carbide was described using a five-sublattice model (four substitutional and one interstitial sublattice), which can reflect the ordering between metallic elements and reproduce the wide homogeneity range of the κ-carbide. Based on the present thermodynamic description, a thermodynamic database for lightweight steels was created. Using the database, the evolution of phase equilibria in lightweight steels can be satisfactorily predicted, as can the partitioning of alloying elements.

    In order to accelerate the development of a kinetic database for multicomponent systems, a high-throughput optimization method was adopted to optimize the diffusion mobilities. This method may largely reduce the diffusion-couple experiments necessary in multicomponent systems. Based on the developed thermodynamic and kinetic databases for lightweight steels, the precipitation of the κ-carbide was simulated using TC-PRISMA. The volume fraction and particle size were reasonably reproduced. Finally, the precipitation strengthening contribution to the yield strength was predicted. The calculation results show that the anti-phase boundary effect is predominant in the precipitation strengthening. Overall, the relationship between composition, processing parameters, microstructure and mechanical properties is established in the thesis.
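
    The predominance of the anti-phase boundary (APB) effect can be illustrated with one textbook (Brown-Ham-type) weak-coupling estimate of the strengthening from APB shearing of ordered precipitates. The choice of expression and all input values below are illustrative assumptions, not the thesis model.

        import numpy as np

        def apb_strengthening(gamma, f, r, G, b, M=3.06):
            """Brown-Ham-type weak-coupling estimate of the yield-stress increase
            from APB shearing of ordered precipitates (one textbook form).
            gamma: APB energy (J/m^2), f: volume fraction, r: mean radius (m),
            G: shear modulus (Pa), b: Burgers vector (m), M: Taylor factor."""
            T = G * b**2 / 2                   # dislocation line tension estimate
            dtau = gamma / (2 * b) * (np.sqrt(6 * gamma * f * r / (np.pi * T)) - f)
            return M * dtau                    # polycrystal yield-stress increment

        # Illustrative inputs of roughly the right order for kappa-carbide in
        # Fe-Mn-Al-C steels (hypothetical, not fitted to the thesis):
        dsigma = apb_strengthening(gamma=0.2, f=0.2, r=10e-9, G=70e9, b=2.5e-10)
        print(f"APB strengthening ~ {dsigma/1e6:.0f} MPa")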

  • Public defence: 2018-09-21 10:00 Videoconferencing Sal C, Kista
    Apolonia, Nuno
    KTH, School of Information and Communication Technology (ICT). Universitat Politecnica de Catalunya (UPC) Barcelona, Spain.
    On Service Optimization in Community Network Micro-Clouds (2018). Doctoral thesis, monograph (Other academic)
    Abstract [en]

    Internet coverage in the world is still weak, and local communities are required to come together and build their own network infrastructures. People collaborate for the common goal of accessing the Internet and cloud services by building community networks (CNs). The use of Internet cloud services has grown over the last decade, and community network cloud infrastructures (i.e. micro-clouds) have been introduced to run services inside the network, without the need to consume them from the Internet. CN micro-clouds aim to provide not only improved service performance, but also an entry point to an alternative to Internet cloud services in CNs. However, adapting services for use in CN micro-clouds has its own challenges, since low-capacity devices and wireless connections without central management predominate in CNs. Further, the large and irregular topology of the network, high software and hardware diversity, and different service requirements make CN micro-clouds a challenging environment in which to run local services and to achieve service performance and quality similar to Internet cloud services.

    In this thesis, our main objective is the optimization of services (performance, quality) in CN micro-clouds, facilitating the entrance of other services and motivating members to use CN micro-cloud services as an alternative to Internet services. We present an approach to handling services in CN micro-cloud environments that improves service performance and quality towards that of Internet services, while also giving the community motivation to use CN micro-cloud services. Furthermore, we break the problem into different levels (resource, service and middleware), propose a model that provides improvements for each level, and contribute information that helps to support the improvements (in terms of service performance and quality) at the other levels.

    At the resource level, we facilitate the use of community devices by utilizing virtualization techniques that isolate and manage CN micro-cloud services, in order to have a multi-purpose environment that fosters services in the CN micro-cloud environment. At the service level, we build a monitoring tool tailored for CN micro-clouds that helps us analyze service behavior and performance in CN micro-clouds. The information gathered subsequently enables adaptation of the services to the environment, improving their quality and performance under CN conditions. At the middleware level, we build overlay networks as the main communication system according to social information, in order to improve the paths and routes of the nodes and to improve the transmission of data across the network, utilizing the relationships already established in the social network or in communities of practice related to the CNs. As a result, service performance in CN micro-clouds can become more stable with respect to resource usage, performance and user-perceived quality.
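
    The middleware-level idea can be sketched as shortest-path routing over an overlay whose link costs are discounted by the strength of the social tie between node owners. The cost model and values below are a toy assumption, not the thesis's actual overlay construction.

        import heapq

        def social_overlay_paths(links, social, src):
            """Dijkstra over an overlay graph whose link costs are divided by the
            social-tie strength between node owners (toy middleware model)."""
            graph = {}
            for (u, v), latency in links.items():
                w = latency / social.get((u, v), social.get((v, u), 1.0))
                graph.setdefault(u, []).append((v, w))
                graph.setdefault(v, []).append((u, w))
            dist, heap = {src: 0.0}, [(0.0, src)]
            while heap:
                d, u = heapq.heappop(heap)
                if d > dist.get(u, float("inf")):
                    continue
                for v, w in graph.get(u, []):
                    if d + w < dist.get(v, float("inf")):
                        dist[v] = d + w
                        heapq.heappush(heap, (d + w, v))
            return dist

        links = {("a", "b"): 10.0, ("b", "c"): 10.0, ("a", "c"): 25.0}
        social = {("a", "b"): 2.0, ("b", "c"): 2.0}  # strong ties halve the cost
        print(social_overlay_paths(links, social, "a"))  # a->c via b beats direct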

  • Public defence: 2018-09-28 10:00 Kollegiesalen, Stockholm
    Montecchio, Francesco
    KTH, School of Engineering Sciences in Chemistry, Biotechnology and Health (CBH), Chemical Engineering, Process Technology.
    Process Optimization of UV-Based Advanced Oxidation Processes in VOC Removal Applications (2018). Doctoral thesis, comprehensive summary (Other academic)
    Abstract [en]

    Air pollution is a major concern in developed countries due to its hazardous health effects. Recent studies by the WHO (World Health Organization) estimate that urban air pollution causes a number of diseases of the respiratory tract and is associated with 150,000 deaths each year. Volatile organic compounds (VOCs) are among the major pollutants affecting outdoor air quality. Given that industrial processes are the main source of atmospheric VOC emissions, national and international authorities have issued regulations to limit such emissions. However, traditional removal technologies, such as incineration, have low energy efficiency and high investment costs. AOPs (advanced oxidation processes) offer a promising alternative in which very reactive conditions can be achieved at room temperature, thus greatly increasing energy efficiency. However, AOPs are still not a mature technology, due to challenges that limit their range of applications.

    This thesis focuses on two types of UV-based AOP: photocatalysis and UV-ozone. The goal is to improve VOC conversion and achieve a process that is competitive with traditional technologies. The research on photocatalysis presents an innovative UV reactor design that is closer to industrial conditions and has the ability to effectively screen different samples. Effort was put into finding a metallic support for the photocatalyst without using additional adhesives. Several electrochemical treatments were performed on metals to restructure the surface. One treatment proved to be superior when it came to stabilizing the TiO2 coating, especially when compared with the traditional ceramic support.

    Research on UV-ozone AOPs focused on reactor modelling, developing a numerical model and a fluid dynamics model. The goal was to gain a deep understanding of the governing phenomena of UV-ozone reactors so as to optimize the reactor configuration. The numerical model described the UV irradiation and the reaction kinetics accurately, while a computational fluid dynamics (CFD) simulator modelled the flow at a larger scale, simulating two prototypes. The work resulted in general guidelines for the design of UV-ozone reactors as well as full-scale units.
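
    The kinetic side of such a model can be given a generic flavor: first-order photolysis of ozone generates oxidizing radicals that consume the VOC. The lumped mechanism, rate constants and concentrations below are hypothetical, for illustration only.

        def uv_ozone_kinetics(c_voc, c_o3, I_uv, dt=1e-3, t_end=5.0,
                              k_phot=0.8, k_ox=3.0e4):
            """Explicit-Euler integration of a lumped UV/ozone mechanism:
            O3 + hv -> radicals (rate k_phot*I_uv*[O3]); radicals + VOC -> products.
            Radicals are taken as quasi-steady, so VOC decay tracks the
            photolysis rate times the VOC concentration."""
            t = 0.0
            while t < t_end:
                r_phot = k_phot * I_uv * c_o3            # ozone photolysis rate
                c_o3 = max(c_o3 - r_phot * dt, 0.0)
                c_voc = max(c_voc - k_ox * r_phot * c_voc * dt, 0.0)
                t += dt
            return c_voc, c_o3

        # Hypothetical inlet: 1e-4 mol/m^3 VOC, 4e-4 mol/m^3 O3, normalized UV field.
        voc_out, o3_out = uv_ozone_kinetics(1e-4, 4e-4, I_uv=1.0)
        print(f"VOC conversion: {100 * (1 - voc_out / 1e-4):.0f}%")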

  • Public defence: 2018-10-05 10:00 Kollegiesalen, Stockholm
    Mölleryd, Bengt A
    KTH, School of Industrial Engineering and Management (ITM), Energy Technology.
    Governance of innovation - creating an architectural framework for innovation of technological systems for energy, security and defence (2018). Doctoral thesis, monograph (Other academic)
    Abstract [en]

    Innovation has a great deal of attraction but is associated with serious uncertainties and downsides. It is potentially beneficial for growth, sector and industrial development and competitiveness. Innovation brings hope of solving societal challenges, such as climate change and environment protection, and could help secure a supply of energy. Furthermore, it improves resilience and strengthens security and defence.

    The downside of innovations of some magnitude concerns severe transitions and disruptions. Digitisation, with its associated network technologies, is an illustrative example: an innovation that creates new services, competitiveness and other benefits, which is enormously positive and attractive, while it simultaneously dismantles and destroys existing systems, firms, branches, and whole sectors and practices.

    The thesis deals with systemic innovation, defined as a value-adding (to the customer and user) set or convergence of new products/services from technological systems in processes which emerge by evolutionary association and integration of systems, transforming businesses, industries and sectors (with disruptions as a consequence). On the basis of the systemicity of innovations and innovation processes, an architectural framework for innovation aimed at governance of innovation is suggested. Platform-based ecosystems exemplify an emulation of systemic innovation that aligns with the proposed framework. The suggested framework is a product of innovation case studies, of analysis of events, patterns, landscapes and models of innovation in the literature, supported by theoretical deliberations and studies of systemicity, and of a review of governance approaches and models for innovation and innovation processes. The architectural framework is intended as a lightweight complement to the usually linear, R&D-based enterprise architectures, systems engineering methods, standards and protocols (e.g. ISO/IEC 15288:2015, ISO/IEC 42010:2011, NISP), with their demands for data and series of projects.