  • 1.
    Abbasi, Abdul Ghafoor
    KTH, School of Information and Communication Technology (ICT), Communication Systems, CoS.
    CryptoNET: Generic Security Framework for Cloud Computing Environments (2011). Doctoral thesis, monograph (Other academic)
    Abstract [en]

    The area of this research is security in distributed environments such as cloud computing and network applications. The specific focus was the design and implementation of a high-assurance network environment, comprising various secure and security-enhanced applications. “High Assurance” means that

    - our system is guaranteed to be secure,

    - it is verifiable to provide the complete set of security services,

    - we prove that it always functions correctly, and

    - we justify our claim that it cannot be compromised without user neglect and/or consent.

     

    We do not know of any equivalent research results or even commercial security systems with such properties. Based on that, we claim several significant research and development contributions to the state of the art of computer network security.

    In the last two decades there have been many activities and contributions to protect data, messages and other resources in computer networks, to provide privacy for users, reliability, availability and integrity of resources, and to provide other security properties for network environments and applications. Governments, international organizations, private companies and individuals are investing a great deal of time, effort and money to install and use various security products and solutions. However, in spite of all these needs, activities, ongoing efforts, and all current solutions, it is a general belief that security in today's networks and applications is not adequate.

    At the moment there are two general approaches to network application security. One approach is to enforce isolation of users, network resources, and applications. In this category we have solutions like firewalls, intrusion-detection systems, port scanners, spam filters, and virus detection and elimination tools. The goal is to protect resources and applications by isolation after their installation in the operational environment. The second approach is to apply methodology, tools and security solutions already in the process of creating network applications. This approach includes methodologies for secure software design, ready-made security modules and libraries, rules for the software development process, and formal and strict testing procedures. The goal is to create secure applications even before their operational deployment. Current experience clearly shows that both approaches have failed to provide an adequate level of security, where users would be guaranteed to deploy and use secure, reliable and trusted network applications.

    Therefore, in the current situation, it is obvious that a new approach and new thinking towards creating strongly protected and guaranteed secure network environments and applications are needed. In our research we have therefore taken an approach completely different from the two mentioned above. Our first principle is to use cryptographic protection of all application resources. Based on this principle, in our system data in local files and database tables are encrypted, messages and control parameters are encrypted, and even software modules are encrypted. The principle is that if all resources of an application are always encrypted, i.e. “enveloped in a cryptographic shield”, then

    - its software modules are not vulnerable to malware and viruses,

    - its data are not vulnerable to illegal reading and theft,

    - all messages exchanged in a networking environment are strongly protected, and

    - all other resources of an application are also strongly protected.

     

    Thus, we strongly protect applications and their resources before they are installed, after they are deployed, and also all the time during their use.

    Furthermore, our methodology for creating such systems and applying total cryptographic protection was based on the design of security components in the form of generic security objects. First, each of those objects, whether a data object or a functional object, is itself encrypted. If an object is a data object, representing a file, database table, communication message, etc., its encryption means that its data are protected all the time. If an object is a functional object, like a cryptographic mechanism or an encapsulation module, this principle means that its code cannot be damaged by malware. Protected functional objects are decrypted only on the fly, before being loaded into main memory for execution. Each of our objects is complete in terms of its content (data objects) and its functionality (functional objects), each supports multiple functional alternatives, all provide transparent handling of security credentials and management of security attributes, and all are easy to integrate with individual applications. In addition, each object is designed and implemented using well-established security standards and technologies, so the complete system, created as a combination of those objects, is itself compliant with security standards and, therefore, interoperable with existing security systems.

    By applying our methodology, we first designed the enabling components of our security system. They are collections of simple and composite objects that also mutually interact in order to provide various security services. The enabling components of our system are: Security Provider, Security Protocols, Generic Security Server, Security SDKs, and Secure Execution Environment. They are the engine components of our security system and they provide the same set of cryptographic and network security services to all other security-enhanced applications.

    Furthermore, for our individual security objects and also for larger security systems, in order to prove their structural and functional correctness, we applied a deductive scheme for the verification and validation of security systems. We used the following principle: “if individual objects are verified and proven to be secure, if their instantiation, combination and operations are secure, and if the protocols between them are secure, then the complete system, created from such objects, is also verifiably secure”. Data and attributes of each object are protected and secure, and they can only be accessed by authenticated and authorized users in a secure way. This means that the structural security properties of objects can be verified upon their installation. In addition, each object is maintained and manipulated within our secure environment, so each object is protected and secure in all its states, even after its closing state, because the original objects are encrypted and their data and states stored in a database or in files are also protected.

    Formal validation of our approach and our methodology is performed using a threat model. We analyzed our generic security objects individually and identified various potential threats to their data, attributes, actions, and various states. We also evaluated the behavior of each object against potential threats and established that our approach provides better protection than some alternative solutions against the various threats mentioned. In addition, we applied the threat model to our composite generic security objects and secure network applications and showed that the deductive approach provides a better methodology for designing and developing secure network applications. We also quantitatively evaluated the performance of our generic security objects and found that a system developed using our methodology performs cryptographic functions efficiently.

    We have also solved some additional important aspects required for the full scope of security services for network applications and cloud environments: manipulation and management of cryptographic keys, execution of encrypted software, and even secure and controlled collaboration of our encrypted applications in cloud computing environments. During our research we created a set of development tools and a development methodology which can be used to create cryptographically protected applications. The same resources and tools are also used as a run-time supporting environment for the execution of our secure applications. We call this total cryptographic protection system for the design, development and run-time of secure network applications the CryptoNET system. The CryptoNET security system is structured in the form of components categorized in three groups: Integrated Secure Workstation, Secure Application Servers, and Security Management Infrastructure Servers. Furthermore, our enabling components provide the same set of security services to all components of the CryptoNET system.

    The Integrated Secure Workstation is designed and implemented in the form of a collaborative secure environment for users. It protects local IT resources, messages and operations for multiple applications. It comprises the four most commonly used PC applications as client components: Secure Station Manager (equivalent to Windows Explorer), Secure E-Mail Client, Secure Web Browser, and Secure Documents Manager. For their security extensions, these four client components use functions and credentials of the enabling components in order to provide standard security services (authentication, confidentiality, integrity and access control) as well as additional, extended security services, such as transparent handling of certificates, use of smart cards, a Strong Authentication protocol, a Security Assertion Markup Language (SAML) based Single Sign-On protocol, secure sessions, and other security functions.

    Secure Application Servers are the components of our secure network applications: Secure E-Mail Server, Secure Web Server, Secure Library Server, and Secure Software Distribution Server. These servers provide application-specific services to client components. Some of the common security services provided by Secure Application Servers to client components are the Single Sign-On protocol, secure communication, and user authorization. In our system, application servers are installed in a domain, but they can also be installed in a cloud environment as services. Secure Application Servers are designed and implemented using the concept and implementation of the Generic Security Server, which provides extended security functions using our engine components. By adopting this approach, the same set of security services is available to each application server.

    Security Management Infrastructure Servers provide domain-level and infrastructure-level services to the components of the CryptoNET architecture. They are standard security servers, known as cloud security infrastructure, deployed as services in our domain-level cloud environment.

    The CryptoNET system is complete in terms of the functions and security services it provides. It is internally integrated, so that the same cryptographic engines are used by all applications. And finally, it is completely transparent to users: it applies its security services without expecting any special interventions by users. In this thesis, we developed and evaluated the secure network applications of our CryptoNET system and applied the threat model to their validation and analysis. We found that the deductive scheme of using our generic security objects is effective for the verification and testing of verifiably secure network applications.

    Based on all these theoretical research and practical development results, we believe that our CryptoNET system is completely and verifiably secure and, therefore, represents a significant contribution to the current state of the art of computer network security.
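
    As a rough illustration of the "always encrypted, decrypted only on use" principle described above, here is a minimal sketch in Python, assuming the third-party cryptography package (Fernet); the class and method names are hypothetical illustrations, not CryptoNET's actual API.

        # A minimal sketch of a "generic security object": the payload is
        # stored encrypted at rest and decrypted transiently on access,
        # analogous to loading a protected module into memory on the fly.
        # SecureDataObject and its methods are hypothetical, not CryptoNET's.
        from cryptography.fernet import Fernet

        class SecureDataObject:
            def __init__(self, payload: bytes, key: bytes):
                self._fernet = Fernet(key)
                self._ciphertext = self._fernet.encrypt(payload)  # kept encrypted

            def read(self) -> bytes:
                # Decrypt only at the moment of use.
                return self._fernet.decrypt(self._ciphertext)

        key = Fernet.generate_key()
        obj = SecureDataObject(b"record #42", key)
        print(obj.read())  # b'record #42'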

  • 2.
    Aid, Graham
    KTH, School of Industrial Engineering and Management (ITM), Industrial Ecology.
    Industrial Ecology Methods within Engagement Processes for Industrial Resource Management (2013). Licentiate thesis, comprehensive summary (Other academic)
    Abstract [en]

    The global use of resources such as materials, energy, and water has surpassed sustainable levels by many accounts. The research presented here was explicitly normative in its aim to improve the understanding of, and make sustainable change toward, highly systemic issues of resource management. The core methods chosen to work toward this aim were bottom-up action research procedures (including stakeholder engagement processes) and industrial ecology analysis tools. These methods were employed and tested in pragmatic combination through two of the author's case study projects. The first case study, performed between 2009 and 2012, employed a multi-stakeholder process aimed at improving the cycling of construction and demolition waste in the Stockholm region. The second case study produced a strategic tool (Looplocal) built for facilitating more efficient regional industrial resource networks. While the highly participative aim of the cases required a larger contribution of resources than that of more closed studies, it is arguable that their employment improved the efficacy of approaching the project aims.

  • 3.
    Aid, Graham
    et al.
    KTH, School of Industrial Engineering and Management (ITM), Industrial Ecology (moved 20130630).
    Brandt, Nils
    KTH, School of Industrial Engineering and Management (ITM), Industrial Ecology (moved 20130630).
    Lysenkovac, Mariya
    Smedberg, Niklas
    KTH, School of Industrial Engineering and Management (ITM), Industrial Ecology (moved 20130630).
    Looplocal: a Heuristic Visualization Tool for the Strategic Facilitation of Industrial Symbiosis (2012). In: Greening of Industry Network Proceedings / [ed] Leo Baas, 2012. Conference paper (Refereed)
    Abstract [en]

    Industrial symbiosis (IS) developments have been differentiated as ‘self organized’, ‘facilitated’, and ‘planned’. This article introduces a tool built to support the strategic facilitation of IS. ‘Looplocal’ is a visualization tool built to assist in 1) identifying regions prone to new industrial symbiosis activities, 2) marketing potential exchanges to key actors, and 3) helping aspiring facilitators assess the various strategies and social methodologies available for the initial phases of a facilitated industrial symbiosis venture. The tool combines life cycle inventory (LCI) data, waste statistics, and national industrial data (including geographic, activity, economic, and contact information) to perform a heuristic analysis of raw material and energy inputs and outputs (wastes). Along with an extensive list of ‘waste to raw material’ substitutions (which may be direct, combined, or upgraded) gathered from IS uncovering studies, IS organizations, and waste and energy professionals, heuristic regional output-to-input ‘matching’ can be visualized. On a national or regional scale the tool gives a quick overview of the most interesting regions in which to prioritize resources for IS facilitation. Focusing in on a regional level, the tool visualizes the potential structure of the network in that region (centralized, decentralized, or distributed), allowing a facilitator to adapt the networking approach correspondingly. The tool also visualizes potential IS transfer information, along with key stakeholder data. The authors have performed a proof-of-concept run of this tool in the industrially disperse context of Sweden. In its early stages of application, the method has proven capable of identifying regions that merit the investment of facilitators' resources. The material focus and customization possibilities of the tool show potential for a wide spectrum of potential facilitators: from waste management companies (using the tool for strategic market analysis) to national or regional authorities looking to lower negative environmental impacts, to ‘sustainable’ industry sectors looking to strengthen market positioning. In conjunction with proper long-term business models, such a tool could itself be reusable over the evolution of facilitation activities and aims.
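
    A toy sketch of the heuristic output-to-input matching described above, in Python; the facility data and substitution table are invented placeholders, not Looplocal's actual data model.

        # Propose a symbiosis link wherever one facility's (substituted)
        # output matches another facility's raw-material input.
        substitutions = {"fly ash": "cement filler", "waste heat": "district heat"}

        facilities = [
            {"name": "PowerPlant A", "outputs": ["fly ash", "waste heat"], "inputs": []},
            {"name": "CementWorks B", "outputs": [], "inputs": ["cement filler"]},
            {"name": "Greenhouse C", "outputs": [], "inputs": ["district heat"]},
        ]

        for src in facilities:
            for out in src["outputs"]:
                as_input = substitutions.get(out, out)
                for dst in facilities:
                    if dst is not src and as_input in dst["inputs"]:
                        print(f"{src['name']} --{out}--> {dst['name']} (as {as_input})")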

  • 4.
    Akay, Altug
    et al.
    KTH, School of Technology and Health (STH), Health Systems Engineering, Systems Safety and Management.
    Dragomir, A.
    Department of Biomedical Engineering, University of Houston, Houston, TX, US.
    Erlandsson, Björn-Erik
    KTH, School of Technology and Health (STH), Health Systems Engineering, Systems Safety and Management.
    A novel data-mining approach leveraging social media to monitor and respond to outcomes of diabetes drugs and treatment (2013). In: 2013 IEEE Point-of-Care Healthcare Technologies (PHT), New York: IEEE, 2013, p. 264-266. Conference paper (Refereed)
    Abstract [en]

    A novel data-mining method was developed to gauge the experiences of patients with diabetes mellitus with medical devices and drugs. Self-organizing maps were used to analyze forum posts numerically to better understand user opinion of medical devices and drugs. The end result is a word-list compilation that correlates certain positive and negative word cluster groups with medical drugs and devices. The implications of this novel data-mining method could open new avenues of research into rapid data collection, feedback, and analysis that would enable improved outcomes and solutions for public health.
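
    A minimal sketch of clustering forum posts with a self-organizing map, assuming Python and the third-party minisom package; the random vectors stand in for real forum-post features (e.g. TF-IDF), which the paper does not specify here.

        from minisom import MiniSom
        import numpy as np

        rng = np.random.default_rng(0)
        posts = rng.random((200, 50))  # placeholder feature vectors for 200 posts

        som = MiniSom(6, 6, input_len=50, sigma=1.0, learning_rate=0.5, random_seed=0)
        som.train_random(posts, num_iteration=1000)

        # Posts mapped to the same grid node form a cluster whose word
        # statistics can then be inspected for positive/negative terms.
        for vec in posts[:3]:
            print(som.winner(vec))  # (row, col) of the winning SOM node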

  • 5.
    Akay, Altug
    et al.
    KTH, School of Technology and Health (STH), Health Systems Engineering, Systems Safety and Management.
    Dragomir, A
    Erlandsson, Björn-Erik
    KTH, School of Technology and Health (STH), Health Systems Engineering, Systems Safety and Management.
    A Novel Data-Mining Approach Leveraging Social Media to Monitor Consumer Opinion of Sitagliptin (2015). In: IEEE journal of biomedical and health informatics, ISSN 2168-2194, E-ISSN 2168-2208, Vol. 19, no 1, p. 389-396. Article in journal (Refereed)
    Abstract [en]

    A novel data mining method was developed to gauge the experience of the drug Sitagliptin (trade name Januvia) by patients with diabetes mellitus type 2. To this end, we devised a two-step analysis framework. Initial exploratory analysis using self-organizing maps was performed to determine structures based on user opinions among the forum posts. The results were a compilation of user clusters and their correlated (positive or negative) opinions of the drug. Subsequent modeling using network analysis methods was used to determine influential users among the forum members. These findings can open new avenues of research into rapid data collection, feedback, and analysis that can enable improved outcomes and solutions for public health, as well as important feedback for the manufacturer.
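
    A small sketch of the second step, finding influential forum members via network analysis, assuming Python and networkx; the reply graph is a made-up stand-in for real forum data, and in-degree centrality is one simple influence proxy, not necessarily the paper's exact measure.

        import networkx as nx

        g = nx.DiGraph()
        g.add_edges_from([  # edge u -> v means user u replied to user v
            ("ann", "bob"), ("carl", "bob"), ("dee", "bob"), ("bob", "ann"),
        ])

        # Users who receive many replies score high on in-degree centrality.
        ranking = sorted(nx.in_degree_centrality(g).items(),
                         key=lambda kv: kv[1], reverse=True)
        print(ranking[0])  # ('bob', 1.0)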

  • 6.
    Al-Battat, Ahmed
    et al.
    KTH, School of Technology and Health (STH), Medical Engineering, Computer and Electronic Engineering.
    Anwer, Noora
    KTH, School of Technology and Health (STH), Medical Engineering, Computer and Electronic Engineering.
    Utvärdering utifrån ett mjukvaruutveckling perspektiv av ramverk för SharePoint [Evaluation of a SharePoint framework from a software development perspective] (2017). Independent thesis Basic level (university diploma), 10 credits / 15 HE credits. Student thesis
    Abstract [en]

    The functionality was tested by two different tests, which showed that the product is suitable for use in the intranet of a company or an organization. There are great benefits from using an intranet as a tool for sharing information: a good intranet contributes to a better flow of information and effective cooperation. SharePoint is a platform for intranets with interactive features that makes the job easier for staff and the company. The framework Omnia is a solution designed for Microsoft SharePoint 2013. This essay evaluates how Omnia acts as a framework and what the product is suitable for. The Omnia framework is evaluated carefully in an independent assessment carried out during this essay. The evaluation is based on scientific studies grounded in qualitative and quantitative research methodology. The evaluation's main areas are system performance, scalability, architecture and functionality. A test prototype, in the form of an employee vacation request application, was developed during the process using the Omnia development framework. The framework Omnia is considered suitable for the development of interactive web-based applications for SharePoint. The architecture of the system meets the requirements for scalable systems because it is based on a tier architecture. The system also has good performance, but it needs to be improved if the number of users exceeds one thousand. The functionality of the product is quite suitable for the system's intended usage.

  • 7. Alonso, O.
    et al.
    Kamps, J.
    Karlgren, Jussi
    KTH, School of Computer Science and Communication (CSC), Theoretical Computer Science, TCS.
    Seventh workshop on exploiting semantic annotations in information retrieval (ESAIR'14) (2014). In: CIKM 2014 - Proceedings of the 2014 ACM International Conference on Information and Knowledge Management, Association for Computing Machinery (ACM), 2014, p. 2094-2095. Conference paper (Refereed)
    Abstract [en]

    There is an increasing amount of structure on the Web as a result of modern Web languages, user tagging and annotation, emerging robust NLP tools, and an ever growing volume of linked data. These meaningful, semantic annotations hold the promise to significantly enhance information access by enhancing the depth of analysis of today's systems. The goal of the ESAIR'14 workshop remains to advance the general research agenda on this core problem, with an explicit focus on one of the most challenging aspects to address in the coming years. The main remaining challenge is on the user's side: the potential of rich document annotations can only be realized if matched by more articulate queries exploiting these powerful retrieval cues, and a more dynamic approach is emerging by exploiting new forms of query autosuggest. How can the query suggestion paradigm be used to encourage searchers to articulate longer queries, with concepts and relations linking their statement of request to existing semantic models? How do entity results and social network data in "graph search" change the classic division between searchers and information and lead to extreme personalization: are you the query? How do we leverage transaction logs and recommendation, and how adaptive should we make the system? What are the privacy ramifications and the UX aspects: how do we avoid creeping out users?

  • 8.
    Ardestani, Shahrzad
    et al.
    KTH, School of Computer Science and Communication (CSC), Centres, Centre for High Performance Computing, PDC.
    Håkansson, Carl Johan
    KTH, School of Computer Science and Communication (CSC), Centres, Centre for High Performance Computing, PDC.
    Laure, Erwin
    KTH, School of Computer Science and Communication (CSC), Computational Science and Technology (CST). KTH, School of Computer Science and Communication (CSC), Centres, Centre for High Performance Computing, PDC.
    Livenson, I.
    Stranak, P.
    Dima, E.
    Blommesteijn, D.
    Van De Sanden, M.
    B2SHARE: An open eScience data sharing platform (2015). In: Proceedings - 11th IEEE International Conference on eScience, IEEE, 2015, p. 448-453. Conference paper (Refereed)
    Abstract [en]

    Scientific data sharing is becoming an essential service for data-driven science and can significantly improve the scientific process by making reliable and trustworthy data available, thereby reducing redundant work and providing insights on related research and recent advancements. For data sharing services to be useful in the scientific process, they need to fulfill a number of requirements that cover not only discovery of, and access to, data, but also the integrity and reliability of published data. B2SHARE, developed by the EUDAT project, provides such a data sharing service to scientific communities. For communities that wish to download, install and maintain their own service, it is also available as software. B2SHARE is developed with a focus on user-friendliness, reliability, and trustworthiness, and can be customized for different organizations and use cases. In this paper we discuss the design, architecture, and implementation of B2SHARE. We show its usefulness in the scientific process with case studies in the biodiversity field.

  • 9.
    Asker, Lars
    et al.
    Stockholms universitet, Institutionen för data- och systemvetenskap.
    Boström, Henrik
    Stockholms universitet, Institutionen för data- och systemvetenskap.
    Karlsson, Isak
    Stockholms universitet, Institutionen för data- och systemvetenskap.
    Papapetrou, Panagiotis
    Stockholms universitet, Institutionen för data- och systemvetenskap.
    Zhao, Jing
    Stockholms universitet, Institutionen för data- och systemvetenskap.
    Mining Candidates for Adverse Drug Interactions in Electronic Patient Records (2014). In: PETRA '14 Proceedings of the 7th International Conference on Pervasive Technologies Related to Assistive Environments, PETRA'14, New York: ACM Press, 2014, article id 22. Conference paper (Refereed)
    Abstract [en]

    Electronic patient records provide a valuable source of information for detecting adverse drug events. In this paper, we explore two different but complementary approaches to extracting useful information from electronic patient records with the goal of identifying candidate drugs, or combinations of drugs, to be further investigated for suspected adverse drug events. We propose a novel filter-and-refine approach that combines sequential pattern mining and disproportionality analysis. The proposed method is expected to identify groups of possibly interacting drugs suspected of causing certain adverse drug events. We perform an empirical investigation of the proposed method using a subset of the Stockholm electronic patient record corpus. The data used in this study consist of all diagnoses and medications for a group of patients diagnosed with at least one heart-related diagnosis during the period 2008-2010. The study shows that the method is indeed able to detect combinations of drugs that occur more frequently for patients with cardiovascular diseases than for patients in a control group, providing opportunities for finding candidate drugs that cause adverse drug effects through interaction.
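
    A minimal sketch of the disproportionality step described above, in Python; the counts are invented, and the reporting odds ratio (ROR) is one common disproportionality measure, not necessarily the paper's exact choice.

        def reporting_odds_ratio(a, b, c, d):
            """a: event with drug pair, b: no event with drug pair,
            c: event without drug pair, d: no event without drug pair."""
            return (a / b) / (c / d)

        # Hypothetical 2x2 contingency counts for one drug pair and one event.
        ror = reporting_odds_ratio(a=30, b=970, c=50, d=8950)
        print(f"ROR = {ror:.2f}")  # > 1 suggests the pair co-occurs with the event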

  • 10.
    Asker, Lars
    et al.
    Stockholms universitet, Institutionen för data- och systemvetenskap.
    Boström, Henrik
    Stockholms universitet, Institutionen för data- och systemvetenskap.
    Papapetrou, Panagiotis
    Stockholms universitet, Institutionen för data- och systemvetenskap.
    Persson, Hans
    Identifying Factors for the Effectiveness of Treatment of Heart Failure: A Registry Study (2016). In: IEEE 29th International Symposium on Computer-Based Medical Systems: CBMS 2016, IEEE Computer Society, 2016. Conference paper (Refereed)
    Abstract [en]

    An administrative health register containing health care data for over 2 million patients will be used to search for factors that can affect the treatment of heart failure. In the study, we will measure the effects of employed treatment for various groups of heart failure patients, using different measures of effectiveness. Significant deviations in effectiveness of treatments of the various patient groups will be reported, and factors that may help explain the effect of treatment will be analyzed. Identification of the most important factors that may help explain the observed deviations between the different groups will be derived through generation of predictive models, for which variable importance can be calculated. The findings may affect recommended treatments as well as highlighting deviations from national guidelines.
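
    A short sketch of deriving variable importance from a predictive model, as described above, assuming Python and scikit-learn; the synthetic data stands in for register variables and treatment outcomes.

        import numpy as np
        from sklearn.ensemble import RandomForestClassifier

        rng = np.random.default_rng(0)
        X = rng.normal(size=(500, 5))                  # e.g. age, dose, comorbidities
        y = (X[:, 0] + 0.5 * X[:, 2] > 0).astype(int)  # outcome driven by vars 0 and 2

        model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)
        for i, imp in enumerate(model.feature_importances_):
            print(f"variable {i}: importance {imp:.3f}")  # vars 0 and 2 rank highest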

  • 11.
    Asker, Lars
    et al.
    Stockholms universitet, Institutionen för data- och systemvetenskap.
    Papapetrou, Panagiotis
    Stockholms universitet, Institutionen för data- och systemvetenskap.
    Boström, Henrik
    Stockholms universitet, Institutionen för data- och systemvetenskap.
    Learning from Swedish Healthcare Data (2016). In: Proceedings of the 9th ACM International Conference on PErvasive Technologies Related to Assistive Environments, Association for Computing Machinery (ACM), 2016, Vol. 29, article id 47. Conference paper (Refereed)
    Abstract [en]

    We present two ongoing projects aimed at learning from health care records. The first project, DADEL, is focusing on high-performance data mining for detecting adverse drug events in healthcare, and uses electronic patient records covering seven years of patient record data from the Stockholm region in Sweden. The second project is focusing on heart failure and on understanding the differences in treatment between various groups of patients. It uses a Swedish administrative health register containing health care data for over two million patients.

  • 12.
    B. da Silva Jr., Jose Mairton
    KTH, School of Electrical Engineering and Computer Science (EECS), Network and Systems engineering.
    Optimization and Fundamental Insights in Full-Duplex Cellular Networks (2019). Doctoral thesis, comprehensive summary (Other academic)
    Abstract [en]

    The next generations of cellular networks are expected to provide explosive data rates and very low latencies. To meet such demands, one of the promising wireless transmission candidates is in-band full-duplex communications, which enable wireless devices to simultaneously transmit and receive on the same frequency channel. Full-duplex communications have the potential to double the spectral efficiency and reduce the transmission delays when compared to current half-duplex systems that either transmit or receive on the same frequency channel. Until recently, full-duplex communications have been hindered by the interference that leaks from the transmitter to its own receiver, the so-called self-interference. However, advances in digital and analog self-interference suppression techniques are making it possible to reduce the self-interference to manageable levels, and thereby make full-duplex a realistic candidate for advanced wireless systems.

    Although in-band full-duplex promises to double the data rates of existing wireless technologies, its deployment in cellular networks must be gradual due to the large number of legacy devices operating in half-duplex mode. When half-duplex devices are deployed in full-duplex cellular networks, the user-to-user interference may become the performance bottleneck. In this new interference situation, techniques such as user pairing, frequency channel assignment, power control, beamforming, and antenna splitting become even more important than before, because they are essential to mitigate both the user-to-user interference and the residual self-interference. Moreover, the introduction of full-duplex in cellular networks must comply with current multi-antenna systems and, possibly, transmissions in the millimeter-wave bands. In these new scenarios, no comprehensive analysis is available to understand the trade-offs in the performance of full-duplex cellular networks.

    This thesis investigates the optimization of, and fundamental insights into, the design of spectrally efficient and fair mechanisms in full-duplex cellular networks. The novel analysis proposed in this thesis suggests new solutions for maximizing full-duplex performance in the sub-6 GHz and millimeter-wave bands. The investigations are based on an optimization theory approach that includes distributed and nonconvex optimization with mixed integer-continuous variables, and novel extensions of Fast-Lipschitz optimization. The analysis sheds light on fundamental questions such as which antenna architecture should be used and whether full-duplex in the millimeter-wave band is feasible. The results establish fundamental insights into the role of user pairing, frequency assignment, power control and beamforming; reveal the special behaviour between the self-interference and user-to-user interference; analyse the trade-offs between antenna sharing and splitting for uplink/downlink signal separation; and investigate the role of practical beamforming design in full-duplex millimeter-wave systems. This thesis may provide input to future standardization processes for full-duplex communications.
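
    As a rough illustration of the half-duplex versus full-duplex trade-off discussed above, a simplified textbook-style two-link rate model (illustrative notation, not the thesis's) can be written as:

        % Half-duplex splits time between uplink (h) and downlink (g);
        % full-duplex transmits simultaneously but suffers residual
        % self-interference power \beta P after cancellation.
        \[
        R_{\mathrm{HD}} = \tfrac{1}{2}\log_2\!\Big(1+\frac{P|h|^2}{N_0}\Big)
                        + \tfrac{1}{2}\log_2\!\Big(1+\frac{P|g|^2}{N_0}\Big),
        \qquad
        R_{\mathrm{FD}} = \log_2\!\Big(1+\frac{P|h|^2}{\beta P+N_0}\Big)
                        + \log_2\!\Big(1+\frac{P|g|^2}{\beta P+N_0}\Big)
        \]

    With perfect cancellation (beta = 0) removing the 1/2 pre-log factor doubles the rate; as beta grows, the residual self-interference erodes the gain, which is precisely the trade-off such optimization must manage.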

  • 13.
    Bandali, Benjamin
    KTH, School of Technology and Health (STH), Medical Engineering, Computer and Electronic Engineering.
    Availability and perceived availability with interaction design: Cost-effective availability model for a multinational company (2014). Independent thesis Basic level (university diploma), 10 credits / 15 HE credits. Student thesis
    Abstract [en]

    Measuring web-service uptime has always been a key metric for improving interaction and system integration. Availability is a common metric in the field of statistics, where the goal is to attract customers, but perhaps more importantly to give users and providers a way to improve the services. Since availability differs a lot depending on the type of service, company and usage, a common problem is to define what availability really is.

    The thesis gives the reader an introduction to availability and also explains the reasons why it may vary, which relates to the theory of interaction design behind a service. Though availability is a metric that can be calculated in many different ways, the result is very complex to understand for the public that is interested in it.

    The goal of this thesis is to give the reader an understanding of, and guidelines for, how to define perceived availability based on system availability, but also to present a method for defining, calculating and presenting the metric in a user-friendly procedure. The result consists of a cost-effective model for perceived availability, tested at a multinational company.
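
    For reference, the standard steady-state availability formula (a textbook definition; the thesis's cost-effective model for perceived availability is its own construction) is:

        % A = fraction of time the service is up; e.g. MTTF = 999 h and
        % MTTR = 1 h give A = 0.999, i.e. "three nines" of availability.
        \[
        A = \frac{\mathrm{MTTF}}{\mathrm{MTTF} + \mathrm{MTTR}}
        \]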

  • 14. Barbosa, Amanda
    et al.
    Santana, Alixandre
    Hacks, Simon
    KTH, School of Electrical Engineering and Computer Science (EECS).
    Stein, Niels von
    A Taxonomy for Enterprise Architecture Analysis Research (2019). Conference paper (Refereed)
  • 15. Ben-Nun, J.
    et al.
    Farhi, N.
    Llewellyn, M.
    Riva, B.
    Rosen, A.
    Ta-Shma, A.
    Wikström, Douglas
    KTH, School of Computer Science and Communication (CSC), Theoretical Computer Science, TCS.
    A new implementation of a dual (paper and cryptographic) voting system (2012). Conference paper (Refereed)
    Abstract [en]

    We report on the design and implementation of a new cryptographic voting system, designed to retain the "look and feel" of standard, paper-based voting used in our country, Israel, while enhancing security with the end-to-end verifiability guaranteed by cryptographic voting. Our system is dual ballot and runs two voting processes in parallel: one is electronic while the other is paper-based and similar to the traditional process used in Israel. Consistency between the two processes is enforced by means of a new, specially tailored paper ballot format. We examined the practicality and usability of our protocol through implementation and field testing in two elections: the first being a student council election with over 2000 voters, the second a political party's election for choosing their leader. We present our findings, some of which were extracted from a survey we conducted during the first election. Overall, voters trusted the system and found it comfortable to use.

  • 16.
    Berezkin, Nikita
    et al.
    KTH, School of Engineering Sciences in Chemistry, Biotechnology and Health (CBH), Biomedical Engineering and Health Systems, Health Informatics and Logistics.
    Heidari, Ahmed
    KTH, School of Engineering Sciences in Chemistry, Biotechnology and Health (CBH), Biomedical Engineering and Health Systems, Health Informatics and Logistics.
    Berika receptdata med innehållshanteringssystem [Enriching recipe data with a content management system] (2019). Independent thesis Basic level (university diploma), 10 credits / 15 HE credits. Student thesis
    Abstract [en]

    The problem today is that people do not eat climate-smart food; as a result, food supplies will not suffice, and what we eat may worsen the greenhouse effect. People do not have the time or knowledge to cook climate-smart food. A solution is to use a Content Management System (CMS), which processes a selected type of data in a specific way and then stores it. This report addresses the basics and the making of a CMS in a recommendation system for a user. The system suggests more climate-smart food alternatives to meet the individual's personal needs. The result was that, with the help of data from various sources, an ingredient in a recipe could be enriched with additional information such as nutritional value, allergies, and whether it is vegetarian. Through tests such as performance tests of the CMS execution time, parsing accuracy, and product matching accuracy, a better result was achieved. Most of the ingredients in the recipes became enriched, which leads to more climate-smart food alternatives that are better for the environment. The accuracy refers to the matching of ingredients in the recipe to the names of products in the business. The next step was to enrich the recipes using the enriched ingredients.
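
    A tiny sketch of matching recipe ingredients to product names, as described above, using only Python's standard library; the product list is a made-up placeholder, not the report's actual data, and fuzzy string matching is one plausible technique, not necessarily the one used.

        from difflib import get_close_matches

        products = ["Whole Milk 1L", "Oat Milk Barista", "Butter 500g", "Crushed Tomatoes"]
        index = {p.lower(): p for p in products}

        def match_ingredient(ingredient):
            # Return the closest product name, or None if nothing is similar enough.
            hits = get_close_matches(ingredient.lower(), index.keys(), n=1, cutoff=0.4)
            return index[hits[0]] if hits else None

        print(match_ingredient("milk"))     # 'Whole Milk 1L'
        print(match_ingredient("saffron"))  # None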

  • 17.
    Bishop, Adrian N.
    et al.
    KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS. KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP.
    Fidan, Baris
    Anderson, Brian D. O.
    Dogancay, Kutluyil
    Pathirana, Pubudu N.
    Optimality analysis of sensor-target localization geometries (2010). In: Automatica, ISSN 0005-1098, E-ISSN 1873-2836, Vol. 46, no 3, p. 479-492. Article in journal (Refereed)
    Abstract [en]

    The problem of target localization involves estimating the position of a target from multiple noisy sensor measurements. It is well known that the relative sensor-target geometry can significantly affect the performance of any particular localization algorithm. The localization performance can be explicitly characterized by certain measures, for example, by the Cramer-Rao lower bound (which is equal to the inverse Fisher information matrix) on the estimator variance. In addition, the Cramer-Rao lower bound is commonly used to generate a so-called uncertainty ellipse which characterizes the spatial variance distribution of an efficient estimate, i.e. an estimate which achieves the lower bound. The aim of this work is to identify those relative sensor-target geometries which result in a measure of the uncertainty ellipse being minimized. Deeming such sensor-target geometries to be optimal with respect to the chosen measure, the optimal sensor-target geometries for range-only, time-of-arrival-based and bearing-only localization are identified and studied in this work. The optimal geometries for an arbitrary number of sensors are identified and it is shown that an optimal sensor-target configuration is not, in general, unique. The importance of understanding the influence of the sensor-target geometry on the potential localization performance is highlighted via formal analytical results and a number of illustrative examples.
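
    For context, for range-only localization with independent Gaussian noise of variance sigma^2, the Fisher information and the Cramer-Rao bound mentioned above take the standard form (textbook notation, not copied from the paper):

        % s_i are sensor positions, p the target, u_i unit bearing vectors.
        \[
        u_i = \frac{p - s_i}{\lVert p - s_i \rVert}, \qquad
        I(p) = \frac{1}{\sigma^{2}} \sum_{i=1}^{N} u_i u_i^{\top}, \qquad
        \operatorname{cov}(\hat{p}) \succeq I(p)^{-1}
        \]

    An optimal geometry then minimizes a scalar measure of I(p)^{-1}, for example its trace or its determinant, whose square root is proportional to the area of the uncertainty ellipse.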

  • 18. Bonivento, A.
    et al.
    Fischione, Carlo
    KTH, School of Electrical Engineering (EES), Automatic Control.
    Sangiovanni-Vincentelli, A.
    Randomized protocol stack for ubiquitous networks in indoor environment (2006). In: 2006 3rd IEEE Consumer Communications and Networking Conference, CCNC 2006, 2006, Vol. 1, p. 152-156. Conference paper (Refereed)
    Abstract [en]

    We present a novel protocol architecture for ubiquitous networks. Our solution is based on randomized routing, MAC and duty-cycling protocols that leverage node density for performance and reliability. We show how the three layers can be jointly optimized for energy efficiency and we present a completely distributed algorithm that allows the network to reach the optimal working point and adapt to traffic variations with negligible overhead. Finally, we present a set of simulation results that support our mathematical model.

  • 19. Borozanov, Vasil
    et al.
    Hacks, Simon
    Silva, Nuno
    Using Machine Learning Techniques for Evaluating the Similarity of Enterprise Architecture Models (2019). Conference paper (Refereed)
  • 20.
    Boström, Gustav
    KTH, School of Information and Communication Technology (ICT), Computer and Systems Sciences, DSV.
    A case study on estimating the software engineering properties of implementing database encryption as an aspect (2005). In: SPLAT 05: Papers, 2005. Conference paper (Refereed)
  • 21.
    Boström, Gustav
    KTH, School of Information and Communication Technology (ICT), Computer and Systems Sciences, DSV.
    Aspects in the user interface: the case of access control. Article in journal (Other academic)
  • 22.
    Boström, Gustav
    KTH, Superseded Departments, Computer and Systems Sciences, DSV.
    Database Encryption as an Aspect (2004). In: AOSD'04 International Conference on Aspect-Oriented Software Development: Papers, 2004. Conference paper (Refereed)
    Abstract [en]

    Encryption is an important method for implementing confidentiality in information systems. Unfortunately, applying encryption effectively can be quite complicated. Encryption, as well as other security concerns, is also often spread out in an application, making implementation difficult. This crosscutting nature of encryption makes it a potentially ideal candidate for implementation using AOP. In this article we provide an example of how database encryption was applied using AOP with AspectJ on a real-life healthcare database application. Although the attempt was promising with regard to modularity, amount of effort and security engineering, it also revealed problems related to substring queries that need to be solved to make the approach really useful.
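
    The paper's implementation uses AspectJ (Java); as a language-neutral illustration of the same crosscutting idea, a Python decorator can play the role of around-advice that transparently encrypts a field before it reaches the database layer. The function and field names are hypothetical, and the cryptography package is assumed.

        # Illustrative only: a decorator standing in for AspectJ around-advice.
        from cryptography.fernet import Fernet

        KEY = Fernet.generate_key()

        def encrypt_field(field):
            """Crosscutting concern: encrypt one field on every save call."""
            def wrap(save):
                def advised(record):
                    record = dict(record)
                    record[field] = Fernet(KEY).encrypt(record[field].encode())
                    return save(record)  # proceed() in AOP terms
                return advised
            return wrap

        @encrypt_field("ssn")
        def save_record(record):
            print("persisting:", record)  # stands in for the database write

        save_record({"name": "Ada", "ssn": "19121210-1234"})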

  • 23.
    Boström, Gustav
    KTH, School of Computer Science and Communication (CSC), Numerical Analysis and Computer Science, NADA.
    Simplifying development of secure software: Aspects and Agile methods (2006). Licentiate thesis, comprehensive summary (Other scientific)
    Abstract [en]

    Reducing the complexity of building secure software systems is an important goal as increased complexity can lead to more security flaws. This thesis aims at helping to reduce this complexity by investigating new programming techniques and software development methods for implementing secure software. We provide case studies on the use and effects of applying Aspect-oriented software development to Confidentiality, Access Control and Quality of Service implementation. We also investigate how eXtreme Programming can be used for simplifying the secure software development process by comparing it to the security engineering standards Common Criteria and the Systems Security Engineering Capability Maturity Model. We also explore the relationship between Aspect-oriented programming and Agile software development methods, such as eXtreme Programming.

  • 24.
    Boström, Gustav
    et al.
    KTH, School of Information and Communication Technology (ICT), Computer and Systems Sciences, DSV.
    Wäyrynen, Jaana
    Henkel, Martin
    KTH, School of Information and Communication Technology (ICT), Computer and Systems Sciences, DSV.
    Aspects in the Agile toolbox (2005). In: SPLAT 05: Papers, 2005. Conference paper (Refereed)
  • 25.
    Boström, Henrik
    Stockholms universitet, Institutionen för data- och systemvetenskap.
    Concurrent Learning of Large-Scale Random Forests (2011). In: Scandinavian Conference on Artificial Intelligence, IOS Press, 2011. Conference paper (Refereed)
    Abstract [en]

    The random forest algorithm belongs to the class of ensemble learning methods that are embarrassingly parallel, i.e., the learning task can be straightforwardly divided into subtasks that can be solved independently by concurrent processes. A parallel version of the random forest algorithm has been implemented in Erlang, a concurrent programming language originally developed for telecommunication applications. The implementation can be used for generating very large forests, or handling very large datasets, in a reasonable time frame. This allows for investigating potential gains in predictive performance from generating large-scale forests. An empirical investigation on 34 datasets from the UCI repository shows that forests of 1000 trees significantly outperform forests of 100 trees with respect to accuracy, area under ROC curve (AUC) and Brier score. However, increasing the forest sizes to 10 000 or 100 000 trees does not give any further significant performance gains.
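
    The paper's implementation is in Erlang; as a sketch of the same embarrassingly parallel idea in Python, scikit-learn grows the trees of a random forest concurrently across processes via n_jobs. The dataset is synthetic, not one of the UCI datasets used in the study.

        from sklearn.datasets import make_classification
        from sklearn.ensemble import RandomForestClassifier

        X, y = make_classification(n_samples=2000, n_features=20, random_state=0)

        # Each of the 1000 trees is grown independently, so they can be
        # built concurrently on all available cores (n_jobs=-1).
        forest = RandomForestClassifier(n_estimators=1000, n_jobs=-1, random_state=0)
        forest.fit(X, y)
        print(forest.score(X, y))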

  • 26.
    Boström, Henrik
    et al.
    Högskolan i Skövde, Institutionen för kommunikation och information.
    Andler, Sten F.
    Högskolan i Skövde, Institutionen för kommunikation och information.
    Brohede, Marcus
    Högskolan i Skövde, Institutionen för kommunikation och information.
    Johansson, Ronnie
    Högskolan i Skövde, Institutionen för kommunikation och information.
    Karlsson, Alexander
    Högskolan i Skövde, Institutionen för kommunikation och information.
    van Laere, Joeri
    Högskolan i Skövde, Institutionen för kommunikation och information.
    Niklasson, Lars
    Högskolan i Skövde, Institutionen för kommunikation och information.
    Nilsson, Marie
    Högskolan i Skövde, Institutionen för kommunikation och information.
    Persson, Anne
    Högskolan i Skövde, Institutionen för kommunikation och information.
    Ziemke, Tom
    Högskolan i Skövde, Institutionen för kommunikation och information.
    On the Definition of Information Fusion as a Field of Research (2007). Report (Other academic)
    Abstract [en]

    A more precise definition of the field of information fusion can be of benefit to researchers within the field, who may use such a definition when motivating their own work and evaluating the contributions of others. Moreover, it can enable researchers and practitioners outside the field to more easily relate their own work to the field and more easily understand the scope of the techniques and methods developed in the field. Previous definitions of information fusion are reviewed from that perspective, including definitions of data and sensor fusion, and their appropriateness as definitions for the entire research field is discussed. Based on the strengths and weaknesses of existing definitions, a novel definition is proposed, which is argued to effectively fulfill the requirements that can be put on a definition of information fusion as a field of research.

  • 27.
    Boström, Henrik
    et al.
    Stockholms universitet, Institutionen för data- och systemvetenskap.
    Dalianis, Hercules
    Stockholms universitet, Institutionen för data- och systemvetenskap.
    De-identifying health records by means of active learning (2012). In: ICML 2012 workshop on Machine Learning for Clinical Data Analysis 2012, 2012. Conference paper (Refereed)
    Abstract [en]

    An experiment on classifying words in Swedish health records as belonging to one of eight protected health information (PHI) classes, or to the non-PHI class, by means of active learning has been conducted, in which three selection strategies were evaluated in conjunction with random forests: the commonly employed approach of choosing the most uncertain examples, choosing randomly, and choosing the most certain examples. Surprisingly, random selection outperformed choosing the most uncertain examples with respect to all ten considered performance metrics. Moreover, choosing the most certain examples outperformed random selection with respect to nine out of ten metrics.
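
    A compact sketch of the three selection strategies compared above, assuming Python and scikit-learn; the data is synthetic and binary, not the Swedish health-record corpus with its nine classes.

        import numpy as np
        from sklearn.datasets import make_classification
        from sklearn.ensemble import RandomForestClassifier

        X, y = make_classification(n_samples=1000, random_state=0)
        labeled = list(range(20))                     # small labeled seed set
        pool = [i for i in range(len(X)) if i not in labeled]

        model = RandomForestClassifier(random_state=0).fit(X[labeled], y[labeled])
        margin = np.abs(model.predict_proba(X[pool])[:, 1] - 0.5)  # 0 = most uncertain

        most_uncertain = [pool[i] for i in np.argsort(margin)[:10]]
        most_certain = [pool[i] for i in np.argsort(margin)[-10:]]
        random_pick = list(np.random.default_rng(0).choice(pool, 10, replace=False))
        print(most_uncertain, most_certain, random_pick)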

  • 28. Bowers, John
    et al.
    Hellström, Sten-Olof
    Tobiasson, Helena
    Taxén, Gustav
    KTH, Superseded Departments, Numerical Analysis and Computer Science, NADA.
    Designing mixed media artefacts for public settings (2004). In: Cooperative Systems Design. Scenario-Based Design of Collaborative Systems / [ed] Darses, F., Simone, C. and Zacklad, M., Amsterdam: IOS Press, 2004, p. 195-210. Conference paper (Refereed)
    Abstract [en]

    This paper describes how principles which are emerging from social-scientific studies of people's interaction with mixed media artefacts in public places have been used to support the development of two installations, the second of which is a long-term museum exhibit. Our principles highlight the design of ‘emergent collaborative value’, ‘layers of noticeability’ and ‘structures of motivation’ to create an ‘ecology of participation’ in installations. We describe how our first installation was used as a ‘research vehicle’ that guided and shaped the design of the museum installation. We also provide an account of how people interact with our installations and how this analysis has shaped their design. The paper closes with some general remarks about the challenges there are for the design of collaborative installations and the extent to which we have met them.

  • 29.
    Braun, Stefan
    et al.
    KTH, School of Electrical Engineering (EES), Microsystem Technology (Changed name 20121201).
    Oberhammer, Joachim
    KTH, School of Electrical Engineering (EES), Microsystem Technology (Changed name 20121201).
    Stemme, Göran
    KTH, School of Electrical Engineering (EES), Microsystem Technology (Changed name 20121201).
    MEMS single-chip 5x5 and 20x20 double-switch arrays for telecommunication networks (2007). In: IEEE 20th International Conference on Micro Electro Mechanical Systems, 2007, MEMS, New York: IEEE, 2007, p. 811-814. Conference paper (Refereed)
    Abstract [en]

    This paper reports on a microelectromechanical switch array with up to 20x20 double switches, packaged on a single chip and utilized for main distribution frames in copper-wire networks. The device includes 5x5 or 20x20 double switches, allowing for an any-to-any interconnection of an input line to a specific output line. The switches are based on an electrostatic S-shaped film actuator with the contact moving between a top and a bottom electrode. The device is fabricated in two parts and is designed to be assembled using selective adhesive wafer bonding, resulting in a wafer-scale package of the switch array. The 5x5 switch arrays have a size of 6.7x6.4 mm² and the 20x20 arrays are 14x10 mm² large. The switch actuation voltages for closing/opening the switches, averaged over an array, were measured to be 21.2 V / 15.3 V for the 5x5 array and 93.2 V / 37.3 V for the 20x20 array. The total impedance on the 5x5 array varies between 0.126 Ω and 0.564 Ω at a measurement current of 1 mA. The resistance of the switch contacts within the 5x5 array was determined to be 0.216 Ω with a standard deviation of 0.155 Ω.

  • 30.
    Braun, Stefan
    et al.
    KTH, School of Electrical Engineering (EES), Microsystem Technology (Changed name 20121201).
    Oberhammer, Joachim
    KTH, School of Electrical Engineering (EES), Microsystem Technology (Changed name 20121201).
    Stemme, Göran
    KTH, School of Electrical Engineering (EES), Microsystem Technology (Changed name 20121201).
    MEMS single-chip microswitch array for re-configuration of telecommunication networks (2006). In: 2006 European Microwave Conference: Vols 1-4, New York: IEEE, 2006, p. 315-318. Conference paper (Refereed)
    Abstract [en]

    This paper reports on a micro-electromechanical (MEMS) switch array embedded and packaged on a single chip. The switch array is utilized for the automated re-configuration of the physical layer of copper-wire telecommunication networks. A total of 25 individually controllable double-switches are arranged in a 6.7 x 6.4 mm² large 5x5 switch matrix allowing for any configuration of independently connecting the line-pairs of the five input channels to any line-pair of the five output channels. The metal-contact switch array is embedded in a single-chip package, together with 4 metal layers for routing the signal and control lines and with a total of 35 I/O contact pads. The MEMS switches are based on an electrostatic S-shaped thin-membrane actuator with the switching contact bar rolling between a top and a bottom electrode. This special switch design allows for a low actuation voltage (21.23 V) to close the switches and for high isolation. The total signal line resistances of the routing network vary from 0.57 Ω to 0.98 Ω. The contact resistance of the gold contacts is 0.216 Ω.

  • 31.
    Braun, Stefan
    et al.
    KTH, School of Electrical Engineering (EES), Microsystem Technology (Changed name 20121201).
    Oberhammer, Joachim
    KTH, School of Electrical Engineering (EES), Microsystem Technology (Changed name 20121201).
    Stemme, Göran
    KTH, School of Electrical Engineering (EES), Microsystem Technology (Changed name 20121201).
    Smart individual switch addressing of 5×5 and 20×20 MEMS double-switch arrays (2007). In: TRANSDUCERS and EUROSENSORS '07 - 4th International Conference on Solid-State Sensors, Actuators and Microsystems, IEEE, 2007, p. 153-156. Conference paper (Other academic)
    Abstract [en]

    This paper presents a smart row/column addressing scheme for large MEMS microswitch arrays, utilizing the pull-in/pull-out hysteresis of their electrostatic actuators to efficiently reduce the number of control lines. Single-chip 20 x 20 double-switch arrays with 400 individually programmable switch elements have been fabricated and the smart addressing scheme was successfully evaluated. The reproducibility of the actuation voltages within the array is very important for this addressing scheme, and therefore the influence of effects such as isolation-layer charging on the pull-in voltages has also been investigated.
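
    An illustrative toy model in Python of how such hysteresis-based addressing can work (my own sketch, not the paper's scheme): bias levels keep every unselected cell inside the hysteresis band (between pull-out and pull-in), so those cells hold their state, while only the selected cross-point exceeds pull-in. All voltage values are invented placeholders.

        V_PI, V_PO = 90.0, 40.0              # pull-in / pull-out thresholds (V)
        ROW_SEL, COL_SEL = 100.0, 0.0        # selected-line drive levels
        ROW_UNSEL, COL_UNSEL = 60.0, 15.0    # every unselected cell then sees
                                             # 45, 60 or 85 V: inside the band

        def apply_bias(state, row_v, col_v):
            for i, vr in enumerate(row_v):
                for j, vc in enumerate(col_v):
                    v = abs(vr - vc)
                    if v >= V_PI:
                        state[i][j] = True   # pull-in: switch closes
                    elif v <= V_PO:
                        state[i][j] = False  # pull-out: switch opens
                    # otherwise hysteresis holds the previous state

        n = 5
        state = [[False] * n for _ in range(n)]
        rows = [ROW_UNSEL] * n; rows[2] = ROW_SEL  # select row 2
        cols = [COL_UNSEL] * n; cols[3] = COL_SEL  # select column 3
        apply_bias(state, rows, cols)
        print(state[2][3], state[0][0])            # True False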

  • 32.
    Briat, Corentin
    et al.
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Optimization and Systems Theory.
    Seuret, A.
    Stability criteria for asynchronous sampled-data systems - A fragmentation approach (2011). In: IFAC Proc. Vol. (IFAC-PapersOnline), 2011, no PART 1, p. 1313-1318. Conference paper (Refereed)
    Abstract [en]

    The stability analysis of asynchronous sampled-data systems is studied. The approach is based on a recent result which allows one to study, in an equivalent way, the quadratic stability of asynchronous sampled-data systems in a continuous-time framework via the use of peculiar functionals satisfying a necessary boundary condition. The method developed here is an extension of previous results using a fragmentation technique inspired by recent advances in time-delay systems theory. The approach leads to a tractable convex feasibility problem involving a small number of finite-dimensional LMIs. The approach is finally illustrated through several examples.

  • 33. Bruederle, Daniel
    et al.
    Petrovici, Mihai A.
    Vogginger, Bernhard
    Ehrlich, Matthias
    Pfeil, Thomas
    Millner, Sebastian
    Gruebl, Andreas
    Wendt, Karsten
    Mueller, Eric
    Schwartz, Marc-Olivier
    de Oliveira, Dan Husmann
    Jeltsch, Sebastian
    Fieres, Johannes
    Schilling, Moritz
    Mueller, Paul
    Breitwieser, Oliver
    Petkov, Venelin
    Muller, Lyle
    Davison, Andrew P.
    Krishnamurthy, Pradeep
    KTH, School of Computer Science and Communication (CSC), Computational Biology, CB.
    Kremkow, Jens
    Lundqvist, Mikael
    KTH, School of Computer Science and Communication (CSC), Computational Biology, CB.
    Muller, Eilif
    Partzsch, Johannes
    Scholze, Stefan
    Zuehl, Lukas
    Mayr, Christian
    Destexhe, Alain
    Diesmann, Markus
    Potjans, Tobias C.
    Lansner, Anders
    KTH, School of Computer Science and Communication (CSC), Computational Biology, CB.
    Schueffny, Rene
    Schemmel, Johannes
    Meier, Karlheinz
    A comprehensive workflow for general-purpose neural modeling with highly configurable neuromorphic hardware systems2011In: Biological Cybernetics, ISSN 0340-1200, E-ISSN 1432-0770, Vol. 104, no 4-5, p. 263-296Article in journal (Refereed)
    Abstract [en]

    In this article, we present a methodological framework that meets novel requirements emerging from upcoming types of accelerated and highly configurable neuromorphic hardware systems. We describe in detail a device with 45 million programmable and dynamic synapses that is currently under development, and we sketch the conceptual challenges that arise from taking this platform into operation. More specifically, we aim to establish this neuromorphic system as a flexible and neuroscientifically valuable modeling tool that can be used by non-hardware experts. We consider various functional aspects to be crucial for this purpose, and we introduce a consistent workflow with detailed descriptions of all involved modules that implement the suggested steps: the integration of the hardware interface into the simulator-independent model description language PyNN; a fully automated translation between the PyNN domain and appropriate hardware configurations; an executable specification of the future neuromorphic system that can be seamlessly integrated into this biology-to-hardware mapping process as a test bench for all software layers and possible hardware design modifications; and an evaluation scheme that deploys models from a dedicated benchmark library, compares the results generated by virtual or prototype hardware devices with reference software simulations, and analyzes the differences. The integration of these components into one hardware-software workflow provides an ecosystem for ongoing preparative studies that support the hardware design process and represents the basis for the maturity of the model-to-hardware mapping software. The functionality and flexibility of the latter are demonstrated with a variety of experimental results.
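
    To give a flavor of what a simulator-independent PyNN model description looks like, here is a minimal sketch using the public PyNN API; the populations, weights and the NEST backend are stand-ins, not the paper's benchmark models. Retargeting the same script to neuromorphic hardware amounts to swapping the backend module.

        import pyNN.nest as sim  # backend choice; a hardware backend could be used instead

        sim.setup(timestep=0.1)

        exc = sim.Population(80, sim.IF_cond_exp())   # excitatory neurons
        inh = sim.Population(20, sim.IF_cond_exp())   # inhibitory neurons

        sim.Projection(exc, inh, sim.FixedProbabilityConnector(0.1),
                       synapse_type=sim.StaticSynapse(weight=0.004),
                       receptor_type='excitatory')
        sim.Projection(inh, exc, sim.FixedProbabilityConnector(0.1),
                       synapse_type=sim.StaticSynapse(weight=0.02),
                       receptor_type='inhibitory')

        exc.record('spikes')
        sim.run(1000.0)                 # one second of biological time
        data = exc.get_data()           # Neo Block with the recorded spike trains
        sim.end()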

  • 34.
    Buschle, Markus
    KTH, School of Electrical Engineering (EES), Industrial Information and Control Systems.
    Tool Support for Enterprise Architecture Analysis: with application in cyber security2014Doctoral thesis, monograph (Other academic)
    Abstract [en]

    In today’s companies, business processes and information technology are interwoven. Old and new systems as well as off-the-shelf products and tailored solutions are used. This results in heterogeneous, often complex IT landscapes. The impact of changes and the affected systems are difficult to identify. However, volatile business environments and changing customer requests require organizations to adapt quickly and to frequently make decisions about the modifications of their information technology.

    IT management aims at generating value from the usage of information technology. One frequently used IT management approach is Enterprise Architecture. Company-wide models are used to obtain a holistic picture. These models are usually created using Enterprise Architecture modeling tools. These tools frequently have strong documentation capabilities. However, they often lack advanced analysis functionality. Specifically, such tools do not offer sufficient support for the analysis of system properties, such as cyber security, availability or interoperability. The ability to analyze a set of possible scenarios and predict the properties of the modeled systems would be valuable for decision-making. Changes or extensions could be evaluated before their implementation. In other domains, for example, in architecture in its classical meaning or in the development of machines, the analysis of models is a common practice. Typically, CAD tools are used to perform analysis and support decision-making. It is thereby possible to investigate the stability of buildings or the performance of engines without the need for empirical testing.

    The contribution of the research work documented in this thesis is a software tool with a particular focus on the analysis of Enterprise Architecture models and thereby support for decision-making. This tool combines state-of-the-art Enterprise Architecture tooling with advanced analysis capabilities that, until now, were only offered by modeling tools for other domains. The presented tool possesses two components. One component allows the creation of a metamodel capturing Enterprise Architecture analysis theory, for example, relevant concepts in the context of cyber security and how they relate to each other. The other component supports the instantiation of the metamodel into an Enterprise Architecture model. Once a model is in place, it can be analyzed with regards to the previously specified theory so that, for instance, a cyber security evaluation can be conducted.

    The analysis tool was partly developed within the context of a larger research project on cyber security analysis. However, the tool is not restricted to applications within this field. It can be used for the evaluation of numerous system properties. Several authors contributed to the tool, both on an implementation level and in the development and design of the tool’s features. The performed research followed the Design Science methodology. First, the objectives of a tool for Enterprise Architecture analysis were defined. Next, an artifact was designed and developed in terms of a software tool. This tool was then demonstrated and evaluated against the objectives. Lastly, the results were communicated to both academic and non-academic audiences.
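
    A toy illustration of the two-component idea described above (the actual tool's metamodel language and analysis engine are far richer; every name below is hypothetical):

        # Component 1: a metamodel -- a class, its attributes, and one
        # derivation rule capturing a scrap of "analysis theory".
        metamodel = {
            "Server": {"attrs": ["patched", "firewalled"]},
            # toy theory: a server is 'hardened' only if patched AND firewalled
            "rules": {"hardened": lambda e: e["patched"] and e["firewalled"]},
        }

        # Component 2: an instantiated EA model conforming to the metamodel.
        model = [
            {"name": "web01", "type": "Server", "patched": True,  "firewalled": True},
            {"name": "db01",  "type": "Server", "patched": False, "firewalled": True},
        ]

        # Analysis: evaluate the metamodel's rule over every instance.
        for entity in model:
            verdict = metamodel["rules"]["hardened"](entity)
            print(f'{entity["name"]}: hardened={verdict}')
        # -> web01: hardened=True / db01: hardened=False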

  • 35.
    Buschle, Markus
    et al.
    KTH, School of Electrical Engineering (EES), Industrial Information and Control Systems.
    Ekstedt, Mathias
    KTH, School of Electrical Engineering (EES), Industrial Information and Control Systems.
    Grunow, S.
    Hauder, M.
    Matthes, F.
    Roth, S.
    Automating enterprise architecture documentation using an enterprise service bus2012In: 18th Americas Conference on Information Systems 2012, AMCIS 2012: Volume 6, 2012, 2012, p. 4213-4226Conference paper (Refereed)
    Abstract [en]

    Currently, the documentation of Enterprise Architectures (EA) requires manual collection of data, resulting in an error-prone, expensive, and time-consuming process. Recent approaches seek to automate and improve EA documentation by employing the productive system environment of organizations. In this paper, we investigate a specific Enterprise Service Bus (ESB), considered the nervous system of an enterprise interconnecting business applications and processes, as an information source. We evaluate the degree of coverage to which data of a productive system can be used for EA documentation. A vendor-specific ESB data model is reverse-engineered, and transformation rules for three representative EA information models are derived. These transformation rules are employed to perform automated model transformations, taking a first step towards automated EA documentation. We evaluate our approach using a productive ESB system from a leading enterprise in the fashion industry.
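
    The kind of transformation rule meant here can be pictured as follows; the ESB export format and the single mapping rule are invented for illustration, since the vendor-specific data model is not reproduced in the abstract.

        esb_export = [  # hypothetical records harvested from an ESB registry
            {"kind": "service", "name": "OrderService", "consumer": "WebShop"},
            {"kind": "service", "name": "StockService", "consumer": "OrderService"},
        ]

        def to_ea_model(records):
            """One simple rule: ESB services become application components,
            and consumer links become 'uses' relations in the EA model."""
            components, relations = set(), []
            for rec in records:
                if rec["kind"] == "service":
                    components.update({rec["name"], rec["consumer"]})
                    relations.append((rec["consumer"], "uses", rec["name"]))
            return components, relations

        components, relations = to_ea_model(esb_export)
        print(relations)  # [('WebShop', 'uses', 'OrderService'), ...]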

  • 36.
    Buschle, Markus
    et al.
    KTH, School of Electrical Engineering (EES), Industrial Information and Control Systems.
    Quartel, Dick
    Novay.
    Extending the method of Bedell for Enterprise Architecture2011In: Proceedings of the Enterprise Distributed Object Computing Conference Workshops (EDOCW), 2011Conference paper (Refereed)
    Abstract [en]

    Whether and where IT investments should be made can be determined with the help of IT portfolio valuation instruments. An interesting approach is Bedell's method. However, this method originates from a time when enterprises were structured according to the classical silo architecture. Nowadays the importance of IT has increased, and IT systems and services are more and more interwoven with the business. Enterprise Architecture is a model-based approach that takes into consideration how companies are structured today and how they apply IT. In order to use Enterprise Architecture models in conjunction with Bedell's method, the method needs to be extended. This paper presents an updated version of Bedell's method that exploits Enterprise Architecture models, in particular the information they contain about the alignment of IT support with the business. In addition, the paper illustrates the application of the method using an example.
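
    As a rough flavor of the importance/effectiveness scoring that Bedell's method builds on (the scales, the score formula and the numbers below are illustrative stand-ins, not the method's exact definitions):

        # importance (1-10) of each IT system to a business activity, and how
        # effectively (1-10) the system currently supports that activity
        systems = {
            "CRM":      {"importance": 9, "effectiveness": 4},
            "Intranet": {"importance": 3, "effectiveness": 8},
        }

        for name, s in systems.items():
            # high importance but low effectiveness signals an investment candidate
            gap = s["importance"] * (10 - s["effectiveness"])
            print(f"{name}: investment-priority score = {gap}")
        # CRM scores 54, Intranet 6: improving CRM support comes first.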

  • 37.
    Cajander, Åsa
    et al.
    Uppsala University, Sweden.
    Grünloh, Christiane
    KTH, School of Computer Science and Communication (CSC), Media Technology and Interaction Design, MID. CUAS Cologne University of Applied Sciences, Germany.
    Rexhepi, Hanife
    Högskolan i Skövde, Sweden.
    Worlds Apart - Doctors’ Technological Frames and Online Medical Records2015In: Schriften aus der Fakultät Wirtschaftsinformatik und Angewandte Informatik der Otto-Friedrich-Universität Bamberg (22), University of Bamberg Press , 2015, p. 357-369Conference paper (Refereed)
    Abstract [en]

    The ability of individuals to access and use their online medical records serves as one of the cornerstones of national efforts to increase patient empowerment and improve health outcomes. However, the launch of online medical records in Uppsala County, Sweden, has been criticized by the medical profession and the local doctors’ union. The aim of this paper is therefore to present the results from an exploratory study where interviews with two oncologists are analysed and discussed based on the theory of Technological Frames and Patient Empowerment. The results indicate that medical doctors have different assumptions and perspectives that affect their use of technology and how they view patient empowerment in everyday clinical work.

  • 38.
    Cakici, Baki
    KTH, School of Information and Communication Technology (ICT), Communication: Services and Infrastucture, Software and Computer Systems, SCS.
    Disease surveillance systems2011Licentiate thesis, comprehensive summary (Other academic)
    Abstract [en]

    Recent advances in information and communication technologies have made the development and operation of complex disease surveillance systems technically feasible, and many systems have been proposed to interpret diverse data sources for health-related signals. Implementing these systems for daily use and efficiently interpreting their output, however, remains a technical challenge.

    This thesis presents a method for understanding disease surveillance systems structurally, examines four existing systems, and discusses the implications of developing such systems. The discussion is followed by two papers. The first paper describes the design of a national outbreak detection system for daily disease surveillance, currently in use at the Swedish Institute for Communicable Disease Control; the source code has been licensed under the GNU GPL v3 and is freely available. The second paper discusses methodological issues in computational epidemiology and presents the lessons learned from a software development project in which a spatially explicit micro-meso-macro model for the entire Swedish population was built based on registry data.

  • 39. Camenisch, J.
    et al.
    Papadimitratos, Panagiotis
    KTH, School of Electrical Engineering and Computer Science (EECS), Network and Systems engineering.
    Preface2018In: 17th International Conference on Cryptology and Network Security, CANS 2018, Springer Verlag , 2018Conference paper (Refereed)
  • 40. Cao, J.
    et al.
    Yang, L.
    Zheng, X.
    Liu, B.
    Zhao, L.
    Ni, X.
    Dong, F.
    Mao, Bo
    KTH, School of Architecture and the Built Environment (ABE), Urban Planning and Environment, Geodesy and Geoinformatics.
    Social attribute based web service information publication mechanism in delay tolerant network2011In: Proc. - IEEE Int. Conf. Comput. Sci. Eng., CSE Int. Symp. Pervasive Syst., Algorithms, Networks, I-SPAN IEEE Int. Conf. IUCC, 2011, p. 435-442Conference paper (Refereed)
    Abstract [en]

    The intermittent connectivity between nodes and their limited resources greatly impair the effectiveness of service information publication in Delay Tolerant Networks (DTN). To overcome this problem, a multi-layer cooperative service information publication mechanism is proposed in this paper. First, the social interaction network and the service overlay network are established in the form of abstract weighted graphs. Then, a community division algorithm is used to analyze the social characteristics of service interaction, and a social-attribute-based DTN model, S-DTN, is constructed. Finally, the carrier nodes for information publication are selected from the neighbor set by computing a utility function based on node context, and a multi-layer cooperative mechanism is proposed to achieve effective service information publication in DTN. The experimental results indicate that the proposed publication mechanism achieves nearly the same success ratio as Epidemic Routing, with lower delay and network load. Additionally, it shows better performance in overall metrics than PRoPHET routing.
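
    The carrier-selection step can be pictured with a toy utility function over neighbor context; the attributes and weights below are invented for illustration, as the abstract does not give the actual function.

        neighbors = [  # hypothetical per-neighbor context collected on contact
            {"id": "n1", "same_community": True,  "energy": 0.9, "contact_freq": 0.7},
            {"id": "n2", "same_community": False, "energy": 0.5, "contact_freq": 0.9},
        ]

        def utility(n, w_comm=0.5, w_energy=0.2, w_freq=0.3):
            """Score a neighbor as a carrier for service-information publication."""
            return (w_comm * n["same_community"]
                    + w_energy * n["energy"]
                    + w_freq * n["contact_freq"])

        # hand the message to the top-k neighbors by utility (k = 1 here)
        carriers = sorted(neighbors, key=utility, reverse=True)[:1]
        print([c["id"] for c in carriers])  # ['n1']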

  • 41. Carlsson, Lars
    et al.
    Ahlberg, Ernst
    Boström, Henrik
    Stockholms universitet, Institutionen för data- och systemvetenskap.
    Johansson, Ulf
    Linusson, Henrik
    Modifications to p-Values of Conformal Predictors2015In: Statistical Learning and Data Sciences: Third International Symposium, SLDS 2015, Egham, UK, April 20-23, 2015, Proceedings / [ed] Alexander Gammerman, Vladimir Vovk, Harris Papadopoulos, Springer, 2015, Vol. 9047, p. 251-259Conference paper (Refereed)
    Abstract [en]

    The original definition of a p-value in a conformal predictor can sometimes lead to overly conservative prediction regions when the number of training or calibration examples is small. The situation can be improved by using a modification to define an approximate p-value. Two modified p-values are presented that converge to the original p-value as the number of training or calibration examples goes to infinity.

    Numerical experiments empirically support the use of a p-value we call the interpolated p-value for conformal prediction. The interpolated p-value appears to produce prediction sets whose error rate corresponds well to the prescribed significance level.
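
    The standard conformal p-value is computed as below; the "interpolated" variant shown alongside is only a plausible stand-in (linear interpolation of the empirical survival function), since the abstract does not spell out the paper's exact definition.

        import bisect

        def conformal_p(cal_scores, test_score):
            """Standard conformal p-value: the fraction of calibration
            nonconformity scores at least as large as the test score,
            counting the test example itself."""
            n_ge = sum(1 for a in cal_scores if a >= test_score)
            return (n_ge + 1) / (len(cal_scores) + 1)

        def interpolated_p(cal_scores, test_score):
            """A stand-in 'interpolated' p-value: linearly interpolate the
            empirical survival function between adjacent calibration
            scores (boundary cases handled crudely)."""
            xs, n = sorted(cal_scores), len(cal_scores)
            if test_score <= xs[0]:
                return 1.0
            if test_score >= xs[-1]:
                return 1 / (n + 1)
            i = bisect.bisect_left(xs, test_score)  # xs[i-1] < t <= xs[i]
            frac = (test_score - xs[i - 1]) / (xs[i] - xs[i - 1])
            hi = (n - i + 2) / (n + 1)  # standard p at t = xs[i-1]
            lo = (n - i + 1) / (n + 1)  # standard p just above xs[i-1]
            return hi + frac * (lo - hi)

        scores = [0.1, 0.4, 0.7, 0.9]       # calibration nonconformity scores
        print(conformal_p(scores, 0.55))    # 0.6
        print(interpolated_p(scores, 0.55)) # 0.7 (smoothed between 0.8 and 0.6)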

  • 42.
    Cen, Jinkang
    KTH, School of Information and Communication Technology (ICT).
    Design of Indoor Positioning System Based on IEEE 802.15.4a Ultra-wideband Technology2013Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE creditsStudent thesis
    Abstract [en]

    The Global Positioning System (GPS) has revolutionized the way we navigate and obtain location-based information over the last decade. Unfortunately, the accuracy of civilian GPS remains at the meter level and it does not work well in indoor environments, which is a major drawback for applications such as autonomous vehicles and mobile robots. Ultra-wideband (UWB) is one of the most promising technologies to solve this problem. UWB has a large bandwidth and is quite robust to fading and multipath effects; it is therefore capable of centimeter-level positioning accuracy in both outdoor and indoor scenarios. IEEE 802.15.4a, released in 2007, adopted UWB and specified its physical layer for accurate positioning in WPANs (Wireless Personal Area Networks). Apart from the capability of accurate positioning, solutions based on this standard have quite low power consumption and low cost. In this thesis work, a positioning system based on IEEE 802.15.4a has been designed. A few practical constraints have been taken into account in designing the system, such as performance, cost, power consumption, and governmental regulations. To reduce system complexity and communication channel occupancy, TDOA (Time Difference of Arrival) has been chosen as the ranging protocol, and the system has been designed accordingly. Main components have been selected and PCBs (printed circuit boards) have been designed as well. The design work covered both hardware and software. The proposed system is believed to be able to achieve a positioning accuracy of ±20 centimeters.
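
    A generic TDOA solver of the kind such a system needs might look like this: a textbook nonlinear least-squares formulation with made-up anchor positions, not the thesis's actual implementation.

        import numpy as np
        from scipy.optimize import least_squares

        C = 299_792_458.0                     # speed of light, m/s
        anchors = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0], [10.0, 10.0]])

        def residuals(p, tdoa):
            """Range-difference residuals w.r.t. anchor 0 for candidate point p."""
            d = np.linalg.norm(anchors - p, axis=1)
            return (d[1:] - d[0]) - C * tdoa

        # Synthetic measurement from a true position (noise-free, for brevity).
        true_pos = np.array([3.0, 4.0])
        d = np.linalg.norm(anchors - true_pos, axis=1)
        tdoa = (d[1:] - d[0]) / C             # arrival-time differences, seconds

        est = least_squares(residuals, x0=np.array([5.0, 5.0]), args=(tdoa,))
        print(est.x)                          # ~[3.0, 4.0]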

  • 43. Chen, Y.
    et al.
    Xie, L.
    Li, J.
    Lu, Zhonghai
    KTH, School of Information and Communication Technology (ICT), Electronic Systems.
    A deadlock-free fault-tolerant routing algorithm based on pseudo-receiving mechanism for networks-on-chip of CMP2011In: 2011 International Conference on Multimedia Technology, ICMT 2011, 2011, p. 2825-2828Conference paper (Refereed)
    Abstract [en]

    As CMOS technology scales down into the nanometer domain, fault tolerance is becoming a challenge for NoCs. The turn model provides a simple and efficient systematic approach to the development of deadlock-free routing algorithms. In this paper, we propose a pseudo-receiving mechanism, based on the support of the local processor's cache, to enable otherwise prohibited turns while remaining deadlock-free. We present a fault-tolerant routing algorithm based on this pseudo-receiving mechanism for 2D meshes. The routing algorithm is livelock-free at the cost of disabling a few non-faulty links or nodes. The algorithm is applied to a single-cycle fixed output-buffered router. Experimental results show that it achieves high performance even under high fault rates.
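
    For background on the turn-model restriction being worked around, here is plain XY routing on a 2D mesh (a generic illustration, not the paper's pseudo-receiving algorithm): by routing fully in X before Y, all Y-to-X turns are prohibited, which guarantees deadlock freedom but is exactly what makes detours around faults hard.

        def xy_route(src, dst):
            """Return the hop sequence of XY routing from src to dst."""
            x, y = src
            path = [src]
            while x != dst[0]:                 # route fully in X first
                x += 1 if dst[0] > x else -1
                path.append((x, y))
            while y != dst[1]:                 # then in Y; Y->X turns never occur
                y += 1 if dst[1] > y else -1
                path.append((x, y))
            return path

        print(xy_route((0, 0), (2, 3)))
        # [(0,0), (1,0), (2,0), (2,1), (2,2), (2,3)] -- if link (2,1)-(2,2)
        # were faulty, avoiding it would require a prohibited Y->X turn.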

  • 44. Clemm, A.
    et al.
    Granville, L. Z.
    Stadler, Rolf
    KTH, School of Electrical Engineering (EES), Communication Networks.
    Shaping the network management Research agenda-report on DSOM 20072008In: Journal of Network and Systems Management, ISSN 1064-7570, Vol. 16, no 2, p. 223-225Article in journal (Refereed)
    Abstract [en]

    The 18th IFIP/IEEE International Workshop on Distributed Systems: Operations and Management (DSOM 2007) was held in San Jose, California, USA, from October 29-31, 2007. The aim of the DSOM workshops is to bring together researchers from industry and academia in the areas of network, systems, and service management, in order to discuss recent advances and foster growth. The workshops have a single-track program in order to enable intense interaction among participants. DSOM 2007 continued its tradition of giving a platform to papers that address general topics related to the management of distributed systems. It included sessions on decentralized and peer-to-peer management, fault detection and diagnosis, service accounting and auditing, problem detection and mitigation, and web services and management. DSOM 2008 will be held September 22-26, 2008, on Samos Island, Greece. The theme will be Managing Large-Scale Service Deployment, which takes up a key aspect of current research in network management.

  • 45.
    Collin, Mikael
    et al.
    KTH, School of Information and Communication Technology (ICT), Electronic, Computer and Software Systems, ECS.
    Brorsson, Mats
    KTH, School of Information and Communication Technology (ICT), Electronic, Computer and Software Systems, ECS.
    Improving Code Density of Embedded Software using a 2-level Dictionary Code Compression Architecture2008In: 2008 13TH ASIA-PACIFIC COMPUTER SYSTEMS ARCHITECTURE CONFERENCE, NEW YORK: IEEE , 2008, p. 284-291Conference paper (Refereed)
    Abstract [en]

    Dictionary code compression has been proposed to reduce the energy consumed in the instruction fetch path of processors or to reduce program footprint in memory. With this technique, instructions, or instruction sequences, in the binary code are replaced with short code words that at run-time are expanded into the original instructions using a dictionary inside the data path. We present here a new method that aims to further improve code density compared to previously proposed dictionary code compression techniques. It is a 2-level approach capable of handling compression of both individual instructions and code sequences of 2-16 instructions. Our proposed approach is more flexible and has a better dynamic compression ratio and lower fetch-path energy consumption than previously studied 1-level approaches. The energy consumed in the instruction fetch path is reduced by up to 56% compared to using uncompressed instructions.
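
    The core replace-with-codeword idea can be sketched in a few lines (single-level and over whole instructions only; the paper's 2-level scheme additionally compresses sequences of 2-16 instructions, and the instruction strings below are made up):

        from collections import Counter

        program = ["add r1,r2,r3", "ld r4,0(r1)", "add r1,r2,r3",
                   "st r4,4(r1)", "add r1,r2,r3"]

        # Build the dictionary from the most frequent instructions.
        DICT_SIZE = 2
        dictionary = [ins for ins, _ in Counter(program).most_common(DICT_SIZE)]

        # Compress: frequent instructions become short code words ("C0", "C1",
        # ...); everything else stays uncompressed.
        index = {ins: f"C{i}" for i, ins in enumerate(dictionary)}
        compressed = [index.get(ins, ins) for ins in program]

        # Decompress at "fetch time" by looking each code word up again.
        decode = {cw: ins for ins, cw in index.items()}
        restored = [decode.get(tok, tok) for tok in compressed]
        assert restored == program
        print(compressed)  # ['C0', 'C1', 'C0', 'st r4,4(r1)', 'C0']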

  • 46.
    Dalianis, Hercules
    et al.
    Stockholms universitet, Institutionen för data- och systemvetenskap.
    Boström, Henrik
    Stockholms universitet, Institutionen för data- och systemvetenskap.
    Releasing a Swedish Clinical Corpus after Removing all Words – De-identification Experiments with Conditional Random Fields and Random Forests2012In: Proceedings of the Third Workshop on Building and Evaluating Resources for Biomedical Text Mining (BioTxtM 2012), 2012, p. 45-48Conference paper (Refereed)
    Abstract [en]

    Patient records contain valuable information in the form of both structured data and free text; however, this information is sensitive since it can reveal the identity of patients. In order to allow new methods and techniques to be developed and evaluated on real-world clinical data without revealing such sensitive information, researchers could be given access to de-identified records without protected health information (PHI), such as names, telephone numbers, and so on. One approach to minimizing the risk of revealing PHI when releasing text corpora from such records is to include only features of the words instead of the words themselves. Such features may include part of speech, word length, and so on, from which the sensitive information cannot be derived. In order to investigate what performance losses can be expected when replacing specific words with features, an experiment with two state-of-the-art machine learning methods, conditional random fields and random forests, is presented, comparing their ability to support de-identification, using the Stockholm EPR PHI corpus as a benchmark. The results indicate severe performance losses when the actual words are removed, leading to the conclusion that the chosen features are not sufficient for the suggested approach to be viable.
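
    The word-to-feature replacement can be pictured as follows; the exact feature set is a stand-in (the paper mentions, e.g., part of speech and word length), and the example tokens are invented since the Stockholm EPR PHI corpus is not public.

        def features(word):
            """Replace a word by shape features from which its surface form
            cannot be recovered."""
            return {
                "length": len(word),
                "is_capitalized": word[:1].isupper(),
                "is_digit": word.isdigit(),
                "has_hyphen": "-" in word,
            }

        sentence = ["Patient", "Anna", "Berg", "admitted", "2011-03-02"]
        released = [features(w) for w in sentence]   # no surface forms released
        print(released[1])  # {'length': 4, 'is_capitalized': True, ...}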

  • 47. Danielsson, Mats
    et al.
    Ekenberg, Love
    Hansson, Karin
    Idefelt, Jim
    Larsson, Aron
    Pahlman, Mona
    Riabacke, Ari
    Sundgren, David
    Cross-disciplinary research in analytic decision support systems2006In: ITI 2006: Proceedings of the 28th International Conference on Information Technology Interfaces / [ed] LuzarStiffler V, Dobric VH, New York: IEEE , 2006, p. 123-128Conference paper (Refereed)
    Abstract [en]

    A main problem in decision support contexts is that unguided decision making is difficult and can lead to inefficient decision processes and undesired consequences. Therefore, decision support systems (DSSs) are of prime concern to any organization and there have been numerous approaches to delivering decision support from, e.g., computational, mathematical, financial, philosophical, psychological, and sociological angles. A key observation, however, is that effective and efficient decision making is not easily achieved by using methods from one discipline only. This paper describes some efforts made by the DECIDE Research Group to approach DSS development and decision making tools in a cross-disciplinary way.

  • 48. Daudi, M.
    et al.
    Hauge, Jannicke
    KTH, School of Technology and Health (STH). University of Bremen, Germany.
    Thoben, K. -D
    On analysis of trust dynamics in supply chain collaboration2016In: ILS 2016 - 6th International Conference on Information Systems, Logistics and Supply Chain, International Conference on Information Systems, Logistics and Supply Chain , 2016Conference paper (Refereed)
    Abstract [en]

    Trust is an essential asset in supporting Supply Chain Collaboration (SCC), and it is a complex construct of a dynamic nature. This dynamic behavior stems from trust's ability to change form or state over time. Due to this dynamicity, SCC requires that the partners have a clear understanding of how trust changes throughout the lifetime of their alliances. This understanding is necessary for building and maintaining trustworthy relationships in dynamic environments. However, the authors have found no framework that sufficiently describes trust dynamics in SCC. Thus, this research presents a first approach toward a holistic framework describing trust dynamics by considering distinct dimensions, forms, states and roles of trust. The framework, describing the aspects contributing to trust dynamics, is applied in an industrial case involving change events that give rise to trust dynamics.

  • 49. Davoli, Paolo
    et al.
    Monari, Matteo
    KTH, School of Computer Science and Communication (CSC), Numerical Analysis and Computer Science, NADA.
    Collaborative tools' quality in web-based learning systems -: A model of user perceptions2006In: Advances in Information Systems Development: Bridging the Gap Between Academia and Industry, 2006, p. 313-324Conference paper (Refereed)
    Abstract [en]

    The importance of collaborative tools is increasing in e-learning practice, both in educational institutions and in enterprises. E-learning is nowadays much more than file downloading: in both distance and blended learning, group interactions are showing their didactic relevance. Specific contexts and needs have to be taken into account when evaluating didactic collaborative tools, since such tools have distinctive characteristics. For instance, e-learning platforms are not pure groupware, but didactic systems hosting both groupware facilities and single-user features.

  • 50.
    Deegalla, Sampath
    et al.
    Stockholms universitet, Institutionen för data- och systemvetenskap.
    Boström, Henrik
    Stockholms universitet, Institutionen för data- och systemvetenskap.
    Classification of Microarrays with kNN: Comparison of Dimensionality Reduction Methods2007In: Intelligent Data Engineering and Automated Learning - IDEAL 2007 / [ed] Hujun Yin, Peter Tino, Emilio Corchado, Will Byrne, Xin Yao, Berlin, Heidelberg: Springer Verlag , 2007, p. 800-809Conference paper (Refereed)
    Abstract [en]

    Dimensionality reduction can often improve the performance of the k-nearest neighbor classifier (kNN) for high-dimensional data sets, such as microarrays. The effect of the choice of dimensionality reduction method on the predictive performance of kNN for classifying microarray data is an open issue, and four common dimensionality reduction methods, Principal Component Analysis (PCA), Random Projection (RP), Partial Least Squares (PLS) and Information Gain (IG), are compared on eight microarray data sets. It is observed that all dimensionality reduction methods result in more accurate classifiers than what is obtained from using the raw attributes. Furthermore, it is observed that both PCA and PLS reach their best accuracies with fewer components than the other two methods, and that RP needs far more components than the others to outperform kNN on the non-reduced data set. None of the dimensionality reduction methods can be concluded to generally outperform the others, although PLS is shown to be superior on all four binary classification tasks, but the main conclusion from the study is that the choice of dimensionality reduction method can be of major importance when classifying microarrays using kNN.
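
    A minimal sketch of the compared setup using scikit-learn (the dataset and the component count are stand-ins; the study uses eight microarray data sets and also evaluates RP, PLS and IG):

        from sklearn.datasets import load_breast_cancer
        from sklearn.decomposition import PCA
        from sklearn.model_selection import cross_val_score
        from sklearn.neighbors import KNeighborsClassifier
        from sklearn.pipeline import make_pipeline

        X, y = load_breast_cancer(return_X_y=True)
        knn = KNeighborsClassifier(n_neighbors=3)

        # kNN on the raw attributes vs. kNN after PCA to 10 components
        raw = cross_val_score(knn, X, y, cv=5).mean()
        pca = cross_val_score(make_pipeline(PCA(n_components=10), knn),
                              X, y, cv=5).mean()
        print(f"raw: {raw:.3f}  PCA-10: {pca:.3f}")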
