kth.se Publications
1 - 50 of 482
  • 1.
    Abbasi, Abdul Ghafoor
    KTH, School of Information and Communication Technology (ICT), Communication Systems, CoS.
    CryptoNET: Generic Security Framework for Cloud Computing Environments (2011). Doctoral thesis, monograph (Other academic)
    Abstract [en]

    The area of this research is security in distributed environments such as cloud computing and network applications. The specific focus was the design and implementation of a high-assurance network environment comprising various secure and security-enhanced applications. “High Assurance” means that

    - our system is guaranteed to be secure,
    - it is verifiable to provide the complete set of security services,
    - we prove that it always functions correctly, and
    - we justify our claim that it cannot be compromised without user neglect and/or consent.

    We do not know of any equivalent research results or even commercial security systems with such properties. Based on that, we claim several significant research and development contributions to the state-of-the-art of computer network security.

    In the last two decades there have been many activities and contributions to protect data, messages and other resources in computer networks, to provide privacy of users, reliability, availability and integrity of resources, and to provide other security properties for network environments and applications. Governments, international organizations, private companies and individuals have invested a great deal of time, effort and money to install and use various security products and solutions. However, in spite of all these needs, activities, ongoing efforts, and all current solutions, it is a general belief that security in today's networks and applications is not adequate.

    At the moment there are two general approaches to the security of network applications. One approach is to enforce isolation of users, network resources, and applications. In this category we have solutions like firewalls, intrusion-detection systems, port scanners, spam filters, virus detection and elimination tools, etc. The goal is to protect resources and applications by isolation after their installation in the operational environment. The second approach is to apply methodology, tools and security solutions already in the process of creating network applications. This approach includes methodologies for secure software design, ready-made security modules and libraries, rules for the software development process, and formal and strict testing procedures. The goal is to create secure applications even before their operational deployment. Current experience clearly shows that both approaches have failed to provide an adequate level of security, in which users would be guaranteed to deploy and use secure, reliable and trusted network applications.

    In the current situation, therefore, it is obvious that a new approach and new thinking towards creating strongly protected and guaranteed secure network environments and applications are needed. In our research we have taken an approach completely different from the two mentioned above. Our first principle is to use cryptographic protection of all application resources. Based on this principle, in our system data in local files and database tables are encrypted, messages and control parameters are encrypted, and even software modules are encrypted. The principle is that if all resources of an application are always encrypted, i.e. “enveloped in a cryptographic shield”, then

    - its software modules are not vulnerable to malware and viruses,
    - its data are not vulnerable to illegal reading and theft,
    - all messages exchanged in a networking environment are strongly protected, and
    - all other resources of an application are also strongly protected.

    Thus, we strongly protect applications and their resources before they are installed, after they are deployed, and also all the time during their use.

    Furthermore, our methodology to create such systems and to apply total cryptographic protection was based on the design of security components in the form of generic security objects. First, each of those objects, whether a data object or a functional object, is itself encrypted. If an object is a data object, representing a file, database table, communication message, etc., its encryption means that its data are protected all the time. If an object is a functional object, like cryptographic mechanisms, an encapsulation module, etc., this principle means that its code cannot be damaged by malware. Protected functional objects are decrypted only on the fly, before being loaded into main memory for execution. Each of our objects is complete in terms of its content (data objects) and its functionality (functional objects), each supports multiple functional alternatives, all of them provide transparent handling of security credentials and management of security attributes, and they are easy to integrate with individual applications. In addition, each object is designed and implemented using well-established security standards and technologies, so the complete system, created as a combination of those objects, is itself compliant with security standards and, therefore, interoperable with existing security systems.

    By applying our methodology, we first designed enabling components for our security system. They are collections of simple and composite objects that also mutually interact in order to provide various security services. The enabling components of our system are:  Security Provider, Security Protocols, Generic Security Server, Security SDKs, and Secure Execution Environment. They are all mainly engine components of our security system and they provide the same set of cryptographic and network security services to all other security–enhanced applications.

    Furthermore, for our individual security objects and also for larger security systems, in order to prove their structural and functional correctness, we applied a deductive scheme for the verification and validation of security systems. We used the following principle: “if individual objects are verified and proven to be secure, if their instantiation, combination and operations are secure, and if the protocols between them are secure, then the complete system, created from such objects, is also verifiably secure”. Data and attributes of each object are protected and secure, and they can only be accessed by authenticated and authorized users in a secure way. This means that the structural security properties of objects can be verified upon their installation. In addition, each object is maintained and manipulated within our secure environment, so each object is protected and secure in all its states, even after its closing state, because the original objects are encrypted and their data and states stored in a database or in files are also protected.

    Formal validation of our approach and our methodology was performed using a threat model. We analyzed our generic security objects individually and identified various potential threats for their data, attributes, actions, and various states. We also evaluated the behavior of each object against potential threats and established that our approach provides better protection than some alternative solutions against the various threats mentioned. In addition, we applied the threat model to our composite generic security objects and secure network applications, and we proved that the deductive approach provides a better methodology for designing and developing secure network applications. We also quantitatively evaluated the performance of our generic security objects and found that the system developed using our methodology performs cryptographic functions efficiently.

    We have also solved some additional important aspects required for the full scope of security services for network applications and cloud environments: manipulation and management of cryptographic keys, execution of encrypted software, and even secure and controlled collaboration of our encrypted applications in cloud computing environments. During our research we created a set of development tools and also a development methodology which can be used to create cryptographically protected applications. The same resources and tools are also used as a run-time supporting environment for the execution of our secure applications. We call such a total cryptographic protection system for the design, development and run-time of secure network applications the CryptoNET system. The CryptoNET security system is structured in the form of components categorized in three groups: Integrated Secure Workstation, Secure Application Servers, and Security Management Infrastructure Servers. Furthermore, our enabling components provide the same set of security services to all components of the CryptoNET system.

    The Integrated Secure Workstation is designed and implemented in the form of a collaborative secure environment for users. It protects local IT resources, messages and operations for multiple applications. It comprises the four most commonly used PC applications as client components: Secure Station Manager (equivalent to Windows Explorer), Secure E-Mail Client, Secure Web Browser, and Secure Documents Manager. For their security extensions, these four client components use functions and credentials of the enabling components in order to provide standard security services (authentication, confidentiality, integrity and access control) and also additional, extended security services, such as transparent handling of certificates, use of smart cards, a Strong Authentication protocol, a Security Assertion Markup Language (SAML) based Single-Sign-On protocol, secure sessions, and other security functions.

    Secure Application Servers are the components of our secure network applications: Secure E-Mail Server, Secure Web Server, Secure Library Server, and Secure Software Distribution Server. These servers provide application-specific services to client components. Some of the common security services provided by Secure Application Servers to client components are the Single-Sign-On protocol, secure communication, and user authorization. In our system application servers are installed in a domain, but they can also be installed in a cloud environment as services. Secure Application Servers are designed and implemented using the concept and implementation of the Generic Security Server, which provides extended security functions using our engine components. By adopting this approach, the same set of security services is available to each application server.

    Security Management Infrastructure Servers provide domain-level and infrastructure-level services to the components of the CryptoNET architecture. They are standard security servers, known as cloud security infrastructure, deployed as services in our domain-level cloud environment.

    The CryptoNET system is complete in terms of the functions and security services that it provides. It is internally integrated, so that the same cryptographic engines are used by all applications. And finally, it is completely transparent to users: it applies its security services without expecting any special interventions by users. In this thesis, we developed and evaluated the secure network applications of our CryptoNET system and applied the threat model to their validation and analysis. We found that the deductive scheme of using our generic security objects is effective for the verification and testing of secure, protected and verifiably secure network applications.

    Based on all these theoretical research and practical development results, we believe that our CryptoNET system is completely and verifiably secure and, therefore, represents a significant contribution to the current state-of-the-art of computer network security.

    Download full text (pdf)
    FULLTEXT01
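
    As a rough illustration of the "always encrypted at rest, decrypted only on use" principle described in the abstract above (not the thesis's actual CryptoNET code), the following Python sketch wraps a data object's payload with the Fernet cipher from the third-party cryptography package; the class name and payload are invented for the example.

```python
# Illustrative sketch only: a data object that keeps its payload encrypted at rest
# and decrypts it only transiently when accessed. Assumes the third-party
# "cryptography" package is installed (pip install cryptography).
from cryptography.fernet import Fernet


class EncryptedDataObject:
    """A minimal 'generic security object' for data: content is stored encrypted."""

    def __init__(self, key: bytes, plaintext: bytes):
        self._fernet = Fernet(key)
        # The object never keeps the plaintext; only the ciphertext is stored.
        self._ciphertext = self._fernet.encrypt(plaintext)

    def read(self) -> bytes:
        # Decrypt "on the fly" only at the moment of use.
        return self._fernet.decrypt(self._ciphertext)


if __name__ == "__main__":
    key = Fernet.generate_key()        # in CryptoNET terms, handled by key management
    obj = EncryptedDataObject(key, b"example record #42")
    print(obj.read())                  # b'example record #42'
```
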
  • 2. Abrams, M. B.
    et al.
    Bjaalie, J. G.
    Das, S.
    Egan, G. F.
    Ghosh, S. S.
    Goscinski, W. J.
    Grethe, J. S.
    Hellgren Kotaleski, Jeanette
    KTH, School of Electrical Engineering and Computer Science (EECS), Computer Science, Computational Science and Technology (CST).
    Ho, E. T. W.
    Kennedy, D. N.
    Lanyon, L. J.
    Leergaard, T. B.
    Mayberg, H. S.
    Milanesi, L.
    Mouček, R.
    Poline, J. B.
    Roy, P. K.
    Strother, S. C.
    Tang, T. B.
    Tiesinga, P.
    Wachtler, T.
    Wójcik, D. K.
    Martone, M. E.
    A Standards Organization for Open and FAIR Neuroscience: the International Neuroinformatics Coordinating Facility (2021). In: Neuroinformatics, ISSN 1539-2791, E-ISSN 1559-0089. Article in journal (Refereed)
    Abstract [en]

    There is great need for coordination around standards and best practices in neuroscience to support efforts to make neuroscience a data-centric discipline. Major brain initiatives launched around the world are poised to generate huge stores of neuroscience data. At the same time, neuroscience, like many domains in biomedicine, is confronting the issues of transparency, rigor, and reproducibility. Widely used, validated standards and best practices are key to addressing the challenges in both big and small data science, as they are essential for integrating diverse data and for developing a robust, effective, and sustainable infrastructure to support open and reproducible neuroscience. However, developing community standards and gaining their adoption is difficult. The current landscape is characterized both by a lack of robust, validated standards and a plethora of overlapping, underdeveloped, untested and underutilized standards and best practices. The International Neuroinformatics Coordinating Facility (INCF), an independent organization dedicated to promoting data sharing through the coordination of infrastructure and standards, has recently implemented a formal procedure for evaluating and endorsing community standards and best practices in support of the FAIR principles. By formally serving as a standards organization dedicated to open and FAIR neuroscience, INCF helps evaluate, promulgate, and coordinate standards and best practices across neuroscience. Here, we provide an overview of the process and discuss how neuroscience can benefit from having a dedicated standards body.

  • 3.
    Ahltorp, Magnus
    et al.
    KTH.
    Skeppstedt, Maria
    Department of Computer and Systems Sciences (DSV), Stockholm University, Sweden.
    Kitajima, S.
    Graduate School of Information Science and Technology, Hokkaido University, Japan.
    Rzepka, R.
    Graduate School of Information Science and Technology, Hokkaido University, Japan.
    Araki, K.
    Graduate School of Information Science and Technology, Hokkaido University, Japan.
    Medical vocabulary mining using distributional semantics on Japanese patient blogs (2014). In: SMBM 2014 - Proceedings of the 6th International Symposium on Semantic Mining in Biomedicine, University of Aveiro, 2014, p. 57-62. Conference paper (Refereed)
    Abstract [en]

    Random indexing has previously been successfully used for medical vocabulary expansion for Germanic languages. In this study, we used this approach to extract medical terms from a Japanese patient blog corpus. The corpus was segmented into semantic units by a semantic role labeller, and different pre-processing and parameter settings were then evaluated. The evaluation showed that similar settings are suitable for Japanese as for previously explored Germanic languages, and that distributional semantics is equally useful for semi-automatic expansion of Japanese medical vocabularies as for medical vocabularies in Germanic languages.
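
    The random-indexing idea behind this kind of vocabulary expansion can be sketched in a few lines of Python. The toy corpus, window size and vector dimensionality below are illustrative assumptions, not the paper's Japanese blog data or parameter settings.

```python
# Minimal random-indexing sketch: each context word gets a sparse ternary index
# vector; a term's context vector is the sum of the index vectors of the words it
# co-occurs with; distributionally similar terms end up with similar vectors.
import numpy as np

DIM, NONZERO = 300, 8
rng = np.random.default_rng(0)
index_vectors = {}

def index_vector(word):
    if word not in index_vectors:
        v = np.zeros(DIM)
        pos = rng.choice(DIM, size=NONZERO, replace=False)
        v[pos] = rng.choice([-1.0, 1.0], size=NONZERO)
        index_vectors[word] = v
    return index_vectors[word]

def context_vectors(sentences, window=2):
    vecs = {}
    for sent in sentences:
        for i, term in enumerate(sent):
            ctx = sent[max(0, i - window):i] + sent[i + 1:i + 1 + window]
            vec = vecs.setdefault(term, np.zeros(DIM))
            for c in ctx:
                vec += index_vector(c)
    return vecs

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

corpus = [["headache", "after", "taking", "medication"],
          ["migraine", "after", "taking", "medication"],
          ["bought", "new", "shoes", "yesterday"]]
vecs = context_vectors(corpus)
print(cosine(vecs["headache"], vecs["migraine"]))   # high: shared contexts
print(cosine(vecs["headache"], vecs["shoes"]))      # low: unrelated contexts
```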

  • 4.
    Aid, Graham
    KTH, School of Industrial Engineering and Management (ITM), Industrial Ecology.
    Industrial Ecology Methods within Engagement Processes for Industrial Resource Management (2013). Licentiate thesis, comprehensive summary (Other academic)
    Abstract [en]

    The global use of resources such as materials, energy, and water has surpassed sustainable levels by many accounts. The research presented here was explicitly normative in its aim to improve the understanding of, and make sustainable change toward, highly systemic issues of resource management. The core methods chosen to work toward this aim were bottom-up action research procedures (including stakeholder engagement processes) and industrial ecology analysis tools. These methods were employed and tested in pragmatic combination through two of the author’s case study projects. The first case study, performed between 2009 and 2012, employed a multi-stakeholder process aimed at improving the cycling of construction and demolition waste in the Stockholm region. The second case study produced a strategic tool (Looplocal) built for facilitating more efficient regional industrial resource networks. While the highly participative aim of the cases required a larger contribution of resources than that of more closed studies, it is arguable that this approach improved the efficacy of reaching the project aims.

    Download full text (pdf)
    Aid - Industrial Ecology Methods within Engagement Processes for Industrial Resource Management
  • 5.
    Aid, Graham
    et al.
    KTH, School of Industrial Engineering and Management (ITM), Industrial Ecology.
    Brandt, Nils
    KTH, School of Industrial Engineering and Management (ITM), Industrial Ecology.
    Lysenkovac, Mariya
    Smedberg, Niklas
    KTH, School of Industrial Engineering and Management (ITM), Industrial Ecology.
    Looplocal: a Heuristic Visualization Tool for the Strategic Facilitation of Industrial Symbiosis (2012). In: Greening of Industry Network Proceedings / [ed] Leo Baas, 2012. Conference paper (Refereed)
    Abstract [en]

    Industrial symbiosis (IS) developments have been differentiated as ‘self organized’, ‘facilitated’, and ‘planned’. This article introduces a tool built with the objective of supporting the strategic facilitation of IS. ‘Looplocal’ is a visualization tool built to assist in 1) identifying regions prone to new industrial symbiosis activities, 2) marketing potential exchanges to key actors, and 3) helping aspiring facilitators assess the various strategies and social methodologies available for the initial phases of a facilitated industrial symbiosis venture. The tool combines life cycle inventory (LCI) data, waste statistics, and national industrial data (including geographic, activity, economic, and contact information) to perform a heuristic analysis of raw material and energy inputs and outputs (wastes). Along with an extensive list of ‘waste to raw material’ substitutions (which may be direct, combined, or upgraded) gathered from IS uncovering studies, IS organizations, and waste and energy professionals, heuristic regional output-to-input ‘matching’ can be visualized. On a national or regional scale the tool gives a quick overview of which regions could be the most interesting ones in which to prioritize resources for IS facilitation. Focusing in on a regional level, the tool visualizes the potential structure of the network in that region (centralized, decentralized, or distributed), allowing a facilitator to adapt the networking approach correspondingly. The tool also visualizes potential IS transfer information, along with key stakeholder data. The authors have performed a proof-of-concept run of this tool in the ‘industrially disperse’ context of Sweden. In its early stages of application, the method has proven capable of identifying regions worth the investment of facilitators’ resources. The material focus and customization possibilities of the tool suggest a wide spectrum of potential facilitators: from waste management companies (using the tool for strategic market analysis) to national or regional authorities looking to lower negative environmental impacts, to ‘sustainable’ industry sectors looking to strengthen market positioning. In conjunction with proper long-term business models, such a tool could itself be reusable over the evolution of facilitation activities and aims.
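
    The heuristic output-to-input matching at the core of such a tool can be illustrated with a minimal sketch. The substitution table, facility names and regions below are invented for illustration; the real tool works from LCI data, waste statistics and national industrial registers.

```python
# Toy sketch of "waste output -> raw material input" matching within a region.
substitutions = {               # waste stream -> raw materials it can replace
    "fly ash": {"cement clinker"},
    "waste heat": {"district heating fuel"},
    "sawdust": {"wood pellets feedstock"},
}

facilities = [
    {"name": "Power plant A", "region": "Mälardalen", "outputs": ["fly ash", "waste heat"], "inputs": []},
    {"name": "Cement plant B", "region": "Mälardalen", "outputs": [], "inputs": ["cement clinker"]},
    {"name": "Sawmill C", "region": "Norrland", "outputs": ["sawdust"], "inputs": []},
    {"name": "Pellet plant D", "region": "Mälardalen", "outputs": [], "inputs": ["wood pellets feedstock"]},
]

def potential_exchanges(facilities, substitutions):
    """Yield (supplier, receiver, waste, substituted input) tuples within one region."""
    for supplier in facilities:
        for waste in supplier["outputs"]:
            for receiver in facilities:
                if receiver is supplier or receiver["region"] != supplier["region"]:
                    continue
                for needed in receiver["inputs"]:
                    if needed in substitutions.get(waste, set()):
                        yield supplier["name"], receiver["name"], waste, needed

for match in potential_exchanges(facilities, substitutions):
    print(match)
# ('Power plant A', 'Cement plant B', 'fly ash', 'cement clinker')
```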

  • 6.
    Akay, Altug
    et al.
    KTH, School of Technology and Health (STH), Health Systems Engineering, Systems Safety and Management.
    Dragomir, A.
    Department of Biomedical Engineering, University of Houston, Houston, TX, US.
    Erlandsson, Björn-Erik
    KTH, School of Technology and Health (STH), Health Systems Engineering, Systems Safety and Management.
    A novel data-mining approach leveraging social media to monitor and respond to outcomes of diabetes drugs and treatment (2013). In: 2013 IEEE Point-of-Care Healthcare Technologies (PHT), New York: IEEE, 2013, p. 264-266. Conference paper (Refereed)
    Abstract [en]

    A novel data-mining method was developed to gauge the experiences of patients with diabetes mellitus with medical devices and drugs. Self-organizing maps were used to analyze forum posts numerically to better understand user opinion of medical devices and drugs. The end result is a word-list compilation that correlates certain positive and negative word cluster groups with medical drugs and devices. This novel data-mining method could open new avenues of research into rapid data collection, feedback, and analysis that would enable improved outcomes and solutions for public health.

  • 7.
    Akay, Altug
    et al.
    KTH, School of Technology and Health (STH), Health Systems Engineering, Systems Safety and Management.
    Dragomir, A
    Erlandsson, Björn-Erik
    KTH, School of Technology and Health (STH), Health Systems Engineering, Systems Safety and Management.
    A Novel Data-Mining Approach Leveraging Social Media to Monitor Consumer Opinion of Sitagliptin (2015). In: IEEE journal of biomedical and health informatics, ISSN 2168-2194, E-ISSN 2168-2208, Vol. 19, no 1, p. 389-396. Article in journal (Refereed)
    Abstract [en]

    A novel data mining method was developed to gauge the experience of patients with diabetes mellitus type 2 with the drug Sitagliptin (trade name Januvia). To this end, we devised a two-step analysis framework. Initial exploratory analysis using self-organizing maps was performed to determine structures based on user opinions among the forum posts. The result was a compilation of user clusters and their correlated (positive or negative) opinion of the drug. Subsequent modeling using network analysis methods was used to determine influential users among the forum members. These findings can open new avenues of research into rapid data collection, feedback, and analysis that can enable improved outcomes and solutions for public health, as well as important feedback for the manufacturer.
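
    A minimal, self-contained sketch of the self-organizing-map step used in this line of work is shown below; the two-dimensional "opinion" vectors are invented stand-ins for the word-based forum-post features used in the study.

```python
# Self-contained toy self-organizing map (SOM): posts with similar feature vectors
# are mapped to nearby nodes on a 2-D grid, which is how opinion clusters emerge.
import numpy as np

def train_som(data, grid=(4, 4), iters=500, lr0=0.5, sigma0=1.5, seed=0):
    rng = np.random.default_rng(seed)
    rows, cols = grid
    weights = rng.random((rows, cols, data.shape[1]))
    coords = np.stack(np.meshgrid(np.arange(rows), np.arange(cols), indexing="ij"), axis=-1)
    for t in range(iters):
        x = data[rng.integers(len(data))]
        # best-matching unit = grid node whose weight vector is closest to the sample
        bmu = np.unravel_index(np.argmin(((weights - x) ** 2).sum(axis=2)), (rows, cols))
        lr = lr0 * np.exp(-t / iters)
        sigma = sigma0 * np.exp(-t / iters)
        dist2 = ((coords - np.array(bmu)) ** 2).sum(axis=2)
        h = np.exp(-dist2 / (2 * sigma ** 2))[..., None]   # neighbourhood function
        weights += lr * h * (x - weights)
    return weights

def winner(weights, x):
    flat = np.argmin(((weights - x) ** 2).sum(axis=2))
    return tuple(int(i) for i in np.unravel_index(flat, weights.shape[:2]))

# toy vectors: [positive-word score, negative-word score] per forum post
posts = np.array([[0.9, 0.1], [0.8, 0.2], [0.1, 0.9], [0.2, 0.8]])
w = train_som(posts)
print([winner(w, p) for p in posts])   # the two opinion groups map to different grid regions
```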

  • 8.
    Al-Battat, Ahmed
    et al.
    KTH, School of Technology and Health (STH), Medical Engineering, Computer and Electronic Engineering.
    Anwer, Noora
    KTH, School of Technology and Health (STH), Medical Engineering, Computer and Electronic Engineering.
    Utvärdering utifrån ett mjukvaruutveckling perspektiv av ramverk för SharePoint [Evaluation, from a software development perspective, of a framework for SharePoint] (2017). Independent thesis Basic level (university diploma), 10 credits / 15 HE credits. Student thesis
    Abstract [en]

    The functionality was tested with two different tests, which showed that the product is suitable for use in an intranet within a company or an organization. There are great benefits in using an intranet as a tool for sharing information: a good intranet contributes to a better flow of information and effective cooperation. SharePoint is a platform for intranets with interactive features that makes the job easier for staff and for the company. The framework Omnia is a solution designed for Microsoft SharePoint 2013. This essay evaluates how Omnia acts as a framework and what the product is suitable for. The Omnia framework is evaluated carefully in an independent assessment carried out during this essay. The evaluation is based on scientific studies grounded in qualitative and quantitative research methodology. The main areas of the evaluation are system performance, scalability, architecture and functionality. A test prototype, an employee vacation request application, is developed during the process using the Omnia development framework. The framework Omnia is considered to be suitable for the development of interactive web-based applications for SharePoint. The architecture of the system meets the requirements for scalable systems because it is based on a tiered architecture. The system also has good performance, but it needs to be improved if the number of users exceeds one thousand. The functionality of the product is well suited to its intended usage.

    Download full text (pdf)
    Utvärdering utifrån ett mjukvaruutveckling perspektiv av ramverk för SharePoint
  • 9.
    Almlof, Jonas
    et al.
    Ericsson AB, Stockholm, Sweden..
    Llosera, Gemma Vall
    Ericsson AB, Stockholm, Sweden..
    Arvidsson, Elisabet
    KTH, School of Engineering Sciences (SCI), Applied Physics.
    Björk, Gunnar
    KTH, School of Engineering Sciences (SCI), Applied Physics.
    Creating and detecting specious randomness (2023). In: EPJ Quantum Technology, ISSN 2662-4400, Vol. 10, no 1, article id 1. Article in journal (Refereed)
    Abstract [en]

    We present a new test of non-randomness that checks both the lower and the upper critical limit of a χ²-statistic. While checking the upper critical value has been employed by other tests, we argue that the lower critical value should also be examined for non-randomness. To this end, we prepare a binary sequence in which all possible bit strings of a certain length occur the same number of times and demonstrate that such sequences pass a well-known suite of tests for non-randomness. We show that such sequences can be compressed, and therefore are somewhat predictable and thus not fully random. The presented test can detect such non-randomness, and its novelty rests on analysing fixed-length bit string frequencies that lie closer to the a priori probabilities than could be expected by chance alone.
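
    The core idea of also checking the lower χ² critical value can be illustrated with a short sketch. The block length and significance level below are illustrative choices, not the authors' exact test or parameters.

```python
# Two-sided chi-square frequency check on fixed-length bit strings: a statistic below
# the lower critical value means the block frequencies are "too uniform"
# (suspiciously close to the a priori probabilities), which is the kind of specious
# randomness described in the abstract.
from collections import Counter
from scipy.stats import chi2

def two_sided_chi2_test(bits: str, block_len: int = 4, alpha: float = 0.01):
    blocks = [bits[i:i + block_len] for i in range(0, len(bits) - block_len + 1, block_len)]
    counts = Counter(blocks)
    n_cells = 2 ** block_len
    expected = len(blocks) / n_cells
    stat = sum((counts.get(format(v, f"0{block_len}b"), 0) - expected) ** 2 / expected
               for v in range(n_cells))
    lower = chi2.ppf(alpha / 2, df=n_cells - 1)
    upper = chi2.ppf(1 - alpha / 2, df=n_cells - 1)
    return stat, not (lower <= stat <= upper)   # True = non-randomness flagged

# A sequence in which every 4-bit block occurs exactly once: the statistic is 0,
# far below the lower critical value, so it is flagged despite its "perfect" balance.
too_uniform = "".join(format(v, "04b") for v in range(16))
print(two_sided_chi2_test(too_uniform))
```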

  • 10.
    Almlöf, Jonas
    et al.
    Ericsson AB.
    Vall Llosera, Gemma
    Ericsson AB.
    Arvidsson, Elisabet
    KTH, School of Engineering Sciences (SCI).
    Björk, Gunnar
    KTH, School of Engineering Sciences (SCI), Applied Physics, Quantum and Biophotonics.
    Creating and detecting specious randomness. Manuscript (preprint) (Other academic)
    Abstract [en]

    We present a new test of non-randomness that checks both the lower and the upper critical limit of a χ²-statistic. While checking the upper critical value has been employed by other tests, we argue that the lower critical value should also be examined for non-randomness. To this end, we prepare a binary sequence in which all possible bit strings of a certain length occur the same number of times and demonstrate that such sequences pass a well-known suite of tests for non-randomness. We show that such sequences can be compressed, and therefore are somewhat predictable and thus not fully random. The presented test can detect such non-randomness, and its novelty rests on analysing fixed-length bit string frequencies that lie closer to the a priori probabilities than could be expected by chance alone.

    Download full text (pdf)
    fulltext
  • 11. Alonso, O.
    et al.
    Kamps, J.
    Karlgren, Jussi
    KTH, School of Computer Science and Communication (CSC), Theoretical Computer Science, TCS.
    Seventh workshop on exploiting semantic annotations in information retrieval (ESAIR’14) (2014). In: CIKM 2014 - Proceedings of the 2014 ACM International Conference on Information and Knowledge Management, Association for Computing Machinery (ACM), 2014, p. 2094-2095. Conference paper (Refereed)
    Abstract [en]

    There is an increasing amount of structure on the Web as a result of modern Web languages, user tagging and annotation, emerging robust NLP tools, and an ever growing volume of linked data. These meaningful, semantic annotations hold the promise to significantly enhance information access by enhancing the depth of analysis of today’s systems. The goal of the ESAIR’14 workshop remains to advance the general research agenda on this core problem, with an explicit focus on one of the most challenging aspects to address in the coming years. The main remaining challenge is on the user’s side: the potential of rich document annotations can only be realized if matched by more articulate queries exploiting these powerful retrieval cues, and a more dynamic approach is emerging that exploits new forms of query autosuggest. How can the query suggestion paradigm be used to encourage searchers to articulate longer queries, with concepts and relations linking their statement of request to existing semantic models? How do entity results and social network data in "graph search" change the classic division between searchers and information and lead to extreme personalization: are you the query? How can we leverage transaction logs and recommendation, and how adaptive should we make the system? What are the privacy ramifications and the UX aspects: how do we avoid creeping out users?

  • 12. Andersen, T. C. K.
    et al.
    Aagaard, A.
    Magnusson, Mats
    KTH, School of Industrial Engineering and Management (ITM), Machine Design (Dept.), Integrated Product Development.
    Exploring business model innovation in SMEs in a digital context: Organizing search behaviours, experimentation and decision-making (2022). In: Creativity and Innovation Management, ISSN 0963-1690, E-ISSN 1467-8691, Vol. 31, no 1, p. 19-34. Article in journal (Refereed)
    Abstract [en]

    In today’s business environment, digitalization plays a key role in establishing competitive advantage and developing new business models. However, little is known about the business model innovation (BMI) processes and practices of small and medium-sized enterprises (SMEs) in their digital venturing. Thus, the aim of this paper is to address this research gap by investigating the process activities of SMEs in effectively building new business models through digitalization. Through a case study of 18 SMEs, document studies and 36 interviews, we explore the BMI processes during the case companies' digital transformation. The research results identify four critical BMI process activities: (1) assessing the environment in search of new opportunities, (2) conveying a sense of urgency, (3) exploring and testing new opportunities through experimentation and (4) handling decision-making with a combination of intuition and data. Finally, our study reveals managerial implications related to data-driven decision-making during BMI, constituting four managerial dilemmas: (1) prognosis and scenario-driven search myopia, (2) timing and sustainability, (3) radical shift from traditional experimentation to data-based methods and (4) using intuition versus data-driven decision-making.

  • 13. Andrienko, G.
    et al.
    Andrienko, N.
    Boldrini, C.
    Caldarelli, G.
    Cintia, P.
    Cresci, S.
    Facchini, A.
    Giannotti, F.
    Gionis, Aristides
    KTH, School of Electrical Engineering and Computer Science (EECS), Computer Science, Theoretical Computer Science, TCS.
    Guidotti, R.
    Mathioudakis, M.
    Muntean, C. I.
    Pappalardo, L.
    Pedreschi, D.
    Pournaras, E.
    Pratesi, F.
    Tesconi, M.
    Trasarti, R.
    (So) Big Data and the transformation of the city (2020). In: International Journal of Data Science and Analytics, ISSN 2364-415X. Article in journal (Refereed)
    Abstract [en]

    The exponential increase in the availability of large-scale mobility data has fueled the vision of smart cities that will transform our lives. The truth is that we have just scratched the surface of the research challenges that should be tackled in order to make this vision a reality. Consequently, there is an increasing interest among different research communities (ranging from civil engineering to computer science) and industrial stakeholders in building knowledge discovery pipelines over such data sources. At the same time, this widespread data availability also raises privacy issues that must be considered by both industrial and academic stakeholders. In this paper, we provide a wide perspective on the role that big data have in reshaping cities. The paper covers the main aspects of urban data analytics, focusing on privacy issues, algorithms, applications and services, and georeferenced data from social media. In discussing these aspects, we leverage, as concrete examples and case studies of urban data science tools, the results obtained in the “City of Citizens” thematic area of the Horizon 2020 SoBigData initiative, which includes a virtual research environment with mobility datasets and urban analytics methods developed by several institutions around Europe. We conclude the paper outlining the main research challenges that urban data science has yet to address in order to help make the smart city vision a reality.

  • 14.
    Andrén, Samuel
    et al.
    KTH, School of Electrical Engineering and Computer Science (EECS), Computer Science, Network and Systems Engineering.
    Lindström, Erik
    KTH, School of Electrical Engineering and Computer Science (EECS), Computer Science, Network and Systems Engineering.
    Hugosson, Alice
    KTH, School of Electrical Engineering and Computer Science (EECS), Computer Science, Network and Systems Engineering.
    Rönnqvist, Sofia
    KTH, School of Electrical Engineering and Computer Science (EECS), Computer Science, Network and Systems Engineering.
    Lagerström, Robert
    KTH, School of Electrical Engineering and Computer Science (EECS), Computer Science, Network and Systems Engineering.
    Hacks, Simon
    KTH, School of Electrical Engineering and Computer Science (EECS), Computer Science, Network and Systems Engineering.
    Assessing Alignment Between Business and IT Strategy: A Case Study (2020). In: Proceedings of the Forum at Practice of Enterprise Modeling 2020, co-located with the 13th IFIP WG 8.1 Working Conference on the Practice of Enterprise Modeling (PoEM 2020), CEUR-WS, 2020, Vol. 2793, p. 1-12. Conference paper (Refereed)
    Abstract [en]

    Strategic alignment between business and IT is a topic of high importance to modern businesses, but it remains problematic to implement structured methods to improve and assess alignment in many organisations. This study investigates how organisations can better leverage published strategic alignment theory and methods, finding that previous research has not sufficiently considered the different dimensions of strategy and that such considerations would help enterprises improve strategic alignment. The study proposes a framework for understanding strategic alignment in hierarchical business-led organisations, exemplified in a case study of Trafikförvaltningen, the Stockholm public transport authority.

  • 15.
    Apiola, Mikko
    et al.
    Univ Turku, Dept Comp, Turku 20500, Finland..
    Lopez-Pernas, Sonsoles
    Univ Politecn Madrid, ETSI Sistemas Informat, Dept Sistemas Informat, Madrid 28031, Spain..
    Saqr, Mohammed
    Univ Eastern Finland, Sch Comp, Joensuu 80101, Finland..
    Pears, Arnold
    KTH, School of Industrial Engineering and Management (ITM), Learning.
    Daniels, Mats
    Uppsala Univ, Dept Informat Technol, S-75105 Uppsala, Sweden..
    Malmi, Lauri
    Aalto Univ, Dept Comp Sci, Aalto 00076, Finland..
    Tedre, Matti
    Univ Eastern Finland, Sch Comp, Joensuu 80101, Finland..
    From a National Meeting to an International Conference: A Scientometric Case Study of a Finnish Computing Education Conference (2022). In: IEEE Access, E-ISSN 2169-3536, Vol. 10, p. 66576-66588. Article in journal (Refereed)
    Abstract [en]

    Computerisation and digitalisation are shaping the world in fundamental and unpredictable ways, which highlights the importance of computing education research (CER). As part of understanding the roots of CER, it is crucial to investigate the evolution of CER as a research discipline. In this paper we present a case study of a Finnish CER conference called Koli Calling, which was launched in 2001 and has become a central publication venue of CER. We use data from 2001 to 2020 and investigate the evolution of Koli Calling’s scholarly communities, zooming in on its publication habits and internationalisation process. We explore the narrative of the development and scholarly agenda behind changes in the conference submission categories from the perspective of some of the conference chairs over the years. We then take a quantitative perspective, analysing the conference publications based on a comprehensive bibliometric analysis. The outcomes include a classification of important research clusters of authors in the community of conference contributors. Interestingly, we find traces of important events in the historical development of CER. In particular, we find clusters emerging from specific research capacity building initiatives, and we can trace how these connect research spanning the world CER community from Finland to Sweden and then further to the USA, Australia and New Zealand. This paper makes a strategic contribution to the understanding of the evolution of CER as a research discipline, from the perspective of one central event and publication venue, providing a broad perspective on the role of the conference in connecting research clusters and establishing an international research community. This work contributes insights about researchers in one specific CER community and how they shape the future of computing education.
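
    One elementary building block of such a bibliometric analysis, constructing a weighted co-authorship graph and extracting author communities, can be sketched as follows. The paper list and author names are invented, and the community-detection choice (greedy modularity in networkx) is an assumption for illustration rather than the method used in the study.

```python
# Toy co-authorship graph: edge weights count co-authored papers; communities
# approximate the "research clusters of authors" mentioned in the abstract.
from itertools import combinations
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

papers = [
    ["Virtanen", "Niemi", "Korhonen"],
    ["Virtanen", "Niemi"],
    ["Andersson", "Berg"],
    ["Andersson", "Berg", "Korhonen"],
]

G = nx.Graph()
for authors in papers:
    for a, b in combinations(authors, 2):
        # accumulate an edge weight equal to the number of co-authored papers
        w = G.get_edge_data(a, b, default={"weight": 0})["weight"]
        G.add_edge(a, b, weight=w + 1)

clusters = greedy_modularity_communities(G, weight="weight")
print([sorted(c) for c in clusters])
```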

  • 16.
    Arda Yilal, Serkan
    KTH, School of Electrical Engineering and Computer Science (EECS).
    Prediction of Persistence to Treatment for Patients with Rheumatoid Arthritis using Deep Learning (2023). Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE credits. Student thesis
    Abstract [en]

    Rheumatoid Arthritis is an inflammatory joint disease that is one of the most common autoimmune diseases in the world. The treatment usually starts with a first-line treatment called Methotrexate, but it is often insufficient. One of the most common second-line treatments is Tumor Necrosis Factor inhibitors (TNFi). Although some patients respond to TNFi, it carries a risk of side effects, including infections. Hence, the ability to predict patient responses to TNFi becomes important for choosing the correct treatment. This work presents a new approach to predict whether patients were still on TNFi one year after they started, using a generative neural network architecture called the Variational Autoencoder (VAE). We combined a VAE and a classifier neural network to create a supervised learning model called Supervised VAE (SVAE), trained on two versions of a tabular dataset containing Swedish register data. The datasets consist of 7341 patient records, and our SVAE achieved an AUROC score of 0.615 on validation data. Nevertheless, compared to machine learning models previously used for the same prediction task, the SVAE achieved higher scores than decision trees and elastic net but lower scores than random forest and gradient-boosted decision trees. Despite the regularization effect that VAEs provide during classification training, the scores achieved by the SVAEs tested during this thesis were lower than the acceptable discrimination level.

    Download full text (pdf)
    fulltext
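
    A minimal PyTorch sketch of a supervised VAE of this kind (encoder, reparameterized latent code, decoder, and a classifier head trained jointly) is given below. Layer sizes, loss weights and the random "register data" are placeholders, not the thesis's actual architecture, hyperparameters or dataset.

```python
# Minimal "supervised VAE" sketch: a VAE whose latent code also feeds a classifier
# head, trained with reconstruction + KL + classification loss.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SupervisedVAE(nn.Module):
    def __init__(self, n_features: int, latent_dim: int = 16):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(n_features, 64), nn.ReLU())
        self.mu = nn.Linear(64, latent_dim)
        self.logvar = nn.Linear(64, latent_dim)
        self.decoder = nn.Sequential(nn.Linear(latent_dim, 64), nn.ReLU(),
                                     nn.Linear(64, n_features))
        self.classifier = nn.Linear(latent_dim, 1)   # persistence yes/no

    def forward(self, x):
        h = self.encoder(x)
        mu, logvar = self.mu(h), self.logvar(h)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)   # reparameterization
        return self.decoder(z), self.classifier(z).squeeze(-1), mu, logvar

def svae_loss(x, y, recon, logits, mu, logvar, beta=1.0, gamma=1.0):
    recon_loss = F.mse_loss(recon, x)
    kl = -0.5 * torch.mean(1 + logvar - mu.pow(2) - logvar.exp())
    clf_loss = F.binary_cross_entropy_with_logits(logits, y)
    return recon_loss + beta * kl + gamma * clf_loss

# one illustrative training step on random stand-in data
model = SupervisedVAE(n_features=30)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
x, y = torch.randn(64, 30), torch.randint(0, 2, (64,)).float()
recon, logits, mu, logvar = model(x)
loss = svae_loss(x, y, recon, logits, mu, logvar)
loss.backward()
opt.step()
```
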
  • 17.
    Ardestani, Shahrzad
    et al.
    KTH, School of Computer Science and Communication (CSC), Centres, Centre for High Performance Computing, PDC.
    Håkansson, Carl Johan
    KTH, School of Computer Science and Communication (CSC), Centres, Centre for High Performance Computing, PDC.
    Laure, Erwin
    KTH, School of Computer Science and Communication (CSC), Computational Science and Technology (CST). KTH, School of Computer Science and Communication (CSC), Centres, Centre for High Performance Computing, PDC.
    Livenson, I.
    Stranak, P.
    Dima, E.
    Blommesteijn, D.
    Van De Sanden, M.
    B2SHARE: An open eScience data sharing platform (2015). In: Proceedings - 11th IEEE International Conference on eScience, IEEE, 2015, p. 448-453. Conference paper (Refereed)
    Abstract [en]

    Scientific data sharing is becoming an essential service for data-driven science and can significantly improve the scientific process by making reliable and trustworthy data available, thereby reducing redundant work and providing insights on related research and recent advancements. For data sharing services to be useful in the scientific process, they need to fulfill a number of requirements that cover not only discovery of and access to data, but also the integrity and reliability of published data. B2SHARE, developed by the EUDAT project, provides such a data sharing service to scientific communities. For communities that wish to download, install and maintain their own service, it is also available as software. B2SHARE is developed with a focus on user-friendliness, reliability, and trustworthiness, and can be customized for different organizations and use cases. In this paper we discuss the design, architecture, and implementation of B2SHARE. We show its usefulness in the scientific process with some case studies in the biodiversity field.

  • 18.
    Arvidsson, Niklas
    et al.
    KTH, School of Industrial Engineering and Management (ITM), Industrial Economics and Management (Dept.).
    Backteman, Richard
    KTH, School of Industrial Engineering and Management (ITM), Industrial Economics and Management (Dept.), Sustainability, Industrial Dynamics & Entrepreneurship.
    Expensive expense management: The role of the organization in service automation. Manuscript (preprint) (Other academic)
    Abstract [en]

    This paper studies the role of organizations in office automation and complements previous studies focusing on technology and jobs. Our case is Expense Management, where technology is available for repetitive tasks highly susceptible to automation and where the cost-saving potential is high. The study includes data from a survey with 162 respondents and 16 interviews. The results show that companies with a large workforce favor automation, that ease of the learning process stimulates it, and that high turnover has a negative effect on automation. We conclude that the role of organizations and network effects should receive more attention in future research.

  • 19.
    Asker, Lars
    et al.
    Stockholms universitet, Institutionen för data- och systemvetenskap.
    Boström, Henrik
    Stockholms universitet, Institutionen för data- och systemvetenskap.
    Karlsson, Isak
    Stockholms universitet, Institutionen för data- och systemvetenskap.
    Papapetrou, Panagiotis
    Stockholms universitet, Institutionen för data- och systemvetenskap.
    Zhao, Jing
    Stockholms universitet, Institutionen för data- och systemvetenskap.
    Mining Candidates for Adverse Drug Interactions in Electronic Patient Records (2014). In: PETRA '14 Proceedings of the 7th International Conference on Pervasive Technologies Related to Assistive Environments, New York: ACM Press, 2014, article id 22. Conference paper (Refereed)
    Abstract [en]

    Electronic patient records provide a valuable source of information for detecting adverse drug events. In this paper, we explore two different but complementary approaches to extracting useful information from electronic patient records with the goal of identifying candidate drugs, or combinations of drugs, to be further investigated for suspected adverse drug events. We propose a novel filter-and-refine approach that combines sequential pattern mining and disproportionality analysis. The proposed method is expected to identify groups of possibly interacting drugs suspected of causing certain adverse drug events. We perform an empirical investigation of the proposed method using a subset of the Stockholm electronic patient record corpus. The data used in this study consist of all diagnoses and medications for a group of patients diagnosed with at least one heart-related diagnosis during the period 2008-2010. The study shows that the method is indeed able to detect combinations of drugs that occur more frequently for patients with cardiovascular diseases than for patients in a control group, providing opportunities for finding candidate drugs that cause adverse drug effects through interaction.
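
    The disproportionality ("refine") step can be illustrated with a reporting-odds-ratio computation on a 2x2 contingency table. The counts below are invented, and the actual filter step in the paper uses sequential pattern mining over the Stockholm EPR corpus rather than this simplified calculation.

```python
# Reporting odds ratio (ROR) for a candidate drug combination and an adverse event,
# computed from a 2x2 contingency table of patients.
import math

def reporting_odds_ratio(a, b, c, d):
    """
    a: exposed to the drug combination, event present
    b: exposed, event absent
    c: not exposed, event present
    d: not exposed, event absent
    Returns (ROR, lower, upper) with a 95% confidence interval.
    """
    ror = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    lower = math.exp(math.log(ror) - 1.96 * se)
    upper = math.exp(math.log(ror) + 1.96 * se)
    return ror, lower, upper

ror, lo, hi = reporting_odds_ratio(a=40, b=960, c=100, d=8900)
print(f"ROR = {ror:.2f}, 95% CI [{lo:.2f}, {hi:.2f}]")
# a lower CI bound above 1 suggests the combination is a candidate for further review
```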

  • 20.
    Asker, Lars
    et al.
    Stockholms universitet, Institutionen för data- och systemvetenskap.
    Boström, Henrik
    Stockholms universitet, Institutionen för data- och systemvetenskap.
    Papapetrou, Panagiotis
    Stockholms universitet, Institutionen för data- och systemvetenskap.
    Persson, Hans
    Identifying Factors for the Effectiveness of Treatment of Heart Failure: A Registry Study (2016). In: IEEE 29th International Symposium on Computer-Based Medical Systems: CBMS 2016, IEEE Computer Society, 2016. Conference paper (Refereed)
    Abstract [en]

    An administrative health register containing health care data for over 2 million patients will be used to search for factors that can affect the treatment of heart failure. In the study, we will measure the effects of the employed treatment for various groups of heart failure patients, using different measures of effectiveness. Significant deviations in the effectiveness of treatments of the various patient groups will be reported, and factors that may help explain the effect of treatment will be analyzed. The most important factors that may help explain the observed deviations between the different groups will be identified through the generation of predictive models, for which variable importance can be calculated. The findings may affect recommended treatments as well as highlight deviations from national guidelines.

  • 21.
    Asker, Lars
    et al.
    Stockholms universitet, Institutionen för data- och systemvetenskap.
    Papapetrou, Panagiotis
    Stockholms universitet, Institutionen för data- och systemvetenskap.
    Boström, Henrik
    Stockholms universitet, Institutionen för data- och systemvetenskap.
    Learning from Swedish Healthcare Data (2016). In: Proceedings of the 9th ACM International Conference on Pervasive Technologies Related to Assistive Environments, Association for Computing Machinery (ACM), 2016, Vol. 29, article id 47. Conference paper (Refereed)
    Abstract [en]

    We present two ongoing projects aimed at learning from health care records. The first project, DADEL, focuses on high-performance data mining for detecting adverse drug events in healthcare, and uses electronic patient records covering seven years of patient record data from the Stockholm region in Sweden. The second project focuses on heart failure and on understanding the differences in treatment between various groups of patients. It uses a Swedish administrative health register containing health care data for over two million patients.

  • 22.
    B. da Silva Jr., Jose Mairton
    KTH, School of Electrical Engineering and Computer Science (EECS), Computer Science, Network and Systems Engineering.
    Optimization and Fundamental Insights in Full-Duplex Cellular Networks (2019). Doctoral thesis, comprehensive summary (Other academic)
    Abstract [en]

    The next generations of cellular networks are expected to provide explosive data rate transmissions and very low latencies. To meet such demands, one of the promising wireless transmission candidates is in-band full-duplex communications, which enable wireless devices to simultaneously transmit and receive on the same frequency channel. Full-duplex communications have the potential to double the spectral efficiency and reduce transmission delays when compared to current half-duplex systems that either transmit or receive on the same frequency channel. Until recently, full-duplex communications have been hindered by the interference that leaks from the transmitter to its own receiver, the so-called self-interference. However, advances in digital and analog self-interference suppression techniques are making it possible to reduce the self-interference to manageable levels, and thereby make full-duplex a realistic candidate for advanced wireless systems.

    Although in-band full-duplex promises to double the data rates of existing wireless technologies, its deployment in cellular networks must be gradual due to the large number of legacy devices operating in half-duplex mode. When half-duplex devices are deployed in full-duplex cellular networks, the user-to-user interference may become the performance bottleneck. In this new interference situation, techniques such as user pairing, frequency channel assignment, power control, beamforming, and antenna splitting become even more important than before, because they are essential to mitigate both the user-to-user interference and the residual self-interference. Moreover, the introduction of full-duplex in cellular networks must comply with current multi-antenna systems and, possibly, transmissions in the millimeter-wave bands. In these new scenarios, no comprehensive analysis is available to understand the trade-offs in the performance of full-duplex cellular networks.

    This thesis investigates the optimization of, and fundamental insights into, the design of spectrally efficient and fair mechanisms in full-duplex cellular networks. The novel analysis proposed in this thesis suggests new solutions for maximizing full-duplex performance in the sub-6 GHz and millimeter-wave bands. The investigations are based on an optimization theory approach that includes distributed and nonconvex optimization with mixed integer-continuous variables, and novel extensions of Fast-Lipschitz optimization. The analysis sheds light on fundamental questions such as which antenna architecture should be used and whether full-duplex in the millimeter-wave band is feasible. The results establish fundamental insights into the role of user pairing, frequency assignment, power control and beamforming; reveal the particular interplay between the self-interference and the user-to-user interference; analyse the trade-offs between antenna sharing and splitting for uplink/downlink signal separation; and investigate the role of practical beamforming design in full-duplex millimeter-wave systems. This thesis may provide input to future standardization processes for full-duplex communications.

    Download full text (pdf)
    MairtonBarros_Doctoral_Thesis
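
    A back-of-the-envelope illustration (a textbook-style rate comparison, not the thesis's system model) shows why the level of residual self-interference after cancellation decides whether full-duplex actually beats half-duplex; all numbers below are invented.

```python
# Full-duplex approaches twice the half-duplex rate only if the residual
# self-interference after cancellation stays close to the noise floor.
import math

def db_to_lin(db):
    return 10 ** (db / 10)

snr_db = 20                    # signal-to-noise ratio of the useful link
half_duplex = 0.5 * math.log2(1 + db_to_lin(snr_db))   # link active half the time

for si_suppression_db in (60, 80, 100, 110):
    # residual self-interference relative to noise, assuming the own transmitter is
    # 110 dB above the noise floor before cancellation (illustrative assumption)
    residual = db_to_lin(110 - si_suppression_db)
    sinr = db_to_lin(snr_db) / (1 + residual)
    full_duplex = math.log2(1 + sinr)
    print(f"{si_suppression_db} dB suppression: FD {full_duplex:.2f} vs HD {half_duplex:.2f} bit/s/Hz")
```
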
  • 23.
    Baalsrud Hauge, Jannicke
    et al.
    KTH, School of Industrial Engineering and Management (ITM), Sustainable production development, Avancerad underhållsteknik och produktionslogistik. BIBA – Bremer Institut für Produktion und Logistik GmbH, Bremen, 28359, Germany.
    Duin, H.
    Kammerlohr, V.
    Göbl, B.
    Using a Participatory Design Approach for Adapting an Existing Game Scenario – Challenges and Opportunities (2020). In: Serious Games: Joint International Conference, JCSG 2020, Stoke-on-Trent, UK, November 19–20, 2020, Proceedings, Springer Nature, 2020, Vol. 12434, p. 204-218. Conference paper (Refereed)
    Abstract [en]

    Designing Serious Games (SGs) is a complex process that often puts game play in a central role during design. As a result, the game mechanics can create unwanted tangential outcomes. Further challenges emerge from the time constraints of delivering a purposeful product that meets the requirements of the target group while maintaining a low budget. The re-use of game components and a participatory design may help overcome these challenges. This paper presents and reports on a case study integrating the re-use and re-purposing of a game engine while involving the future users in the early phase of the design process.

  • 24.
    Baalsrud Hauge, Jannicke
    et al.
    KTH, School of Industrial Engineering and Management (ITM), Sustainable production development, Avancerad underhållsteknik och produktionslogistik.
    Soebke, Heinrich
    Bauhaus Univ Weimar, Goethepl 7-8, D-99423 Weimar, Germany..
    Broeker, Thomas
    Nuremberg Inst Technol, Nurnberg, Germany..
    Lim, Theodore
    Heriot Watt Univ, Edinburgh, Midlothian, Scotland..
    Luccini, Angelo Marco
    Succubus Interact, Nantes, France..
    Kornevs, Maksims
    KTH, School of Engineering Sciences in Chemistry, Biotechnology and Health (CBH), Biomedical Engineering and Health Systems, Health Informatics and Logistics.
    Meijer, Sebastiaan
    KTH, School of Engineering Sciences in Chemistry, Biotechnology and Health (CBH), Biomedical Engineering and Health Systems, Health Informatics and Logistics.
    Current Competencies of Game Facilitators and Their Potential Optimization in Higher Education: Multimethod Study (2021). In: JMIR Serious Games, E-ISSN 2291-9279, Vol. 9, no 2, article id e25481. Article in journal (Refereed)
    Abstract [en]

    Background: Serious games can be a powerful learning tool in higher education. However, the literature indicates that the learning outcome in a serious game depends on the facilitators' competencies. Although professional facilitators in commercial game-based training have undergone specific instruction, facilitators in higher education cannot rely on such formal instruction, as game facilitation is only an occasional part of their teaching activities. Objective: This study aimed to address the actual competencies of occasional game facilitators and their perceived competency deficits. Methods: Having many years of experience as professional and occasional facilitators, we (n=7) defined requirements for the occasional game facilitator using individual reflection and focus discussion. Based on these results, guided interviews were conducted with additional occasional game facilitators (n=4) to check and extend the requirements. Finally, a group of occasional game facilitators (n=30) answered an online questionnaire based on the results of the requirement analysis and existing competency models. Results: Our review produced the following questions: Which competencies are needed by facilitators and what are their training needs? What do current training courses for occasional game facilitators in higher education look like? How do the competencies of occasional game facilitators differ from other competencies required in higher education? The key findings of our analysis are that a mix of managerial and technical competencies is required for facilitating serious games in higher educational contexts. Further, there is a limited or no general competence model for game facilitators, and casual game facilitators rarely undergo any specific, formal training. Conclusions: The results identified the competencies that game facilitators require and a demand for specific formal training. Thus, the study contributes to the further development of a competency model for game facilitators and enhances the efficiency of serious games.

  • 25.
    Baalsrud Hauge, Jannicke
    et al.
    KTH, School of Industrial Engineering and Management (ITM), Sustainable production development, Avancerad underhållsteknik och produktionslogistik. BIBA – Bremer Institut für Produktion und Logistik GmbH BremenGermany.
    Stefan, I.
    Improving Learning Outcome by Re-using and Modifying Gamified Lessons Paths (2020). In: Improving Learning Outcome by Re-using and Modifying Gamified Lessons Paths, Springer Nature, 2020, Vol. 12434, p. 150-163. Conference paper (Refereed)
    Abstract [en]

    A main challenge for teachers is to provide good educational offers that appear both appealing and motivating to students learning the content according to the curriculum. Educational games are thought to be a good complementary way of providing this learning environment but, so far, adapting educational games to a specific context is not only costly but also requires a lot of knowledge related to game design. This article provides some examples of how gamified lesson paths can be changed in a simple way and how different components can be re-used, in order to save costs and time and to improve the overall quality of the learning experience.

  • 26.
    Baalsrud Hauge, Jannicke
    et al.
    KTH, School of Industrial Engineering and Management (ITM), Sustainable production development. BIBA Bremer Inst Prod & Logist GmbH, Hochschulring 20, D-28359 Bremen, Germany.;Royal Inst Technol, Kvarnbergagt 12, S-15181 Södertälje, Sweden..
    Stefan, Ioana Andreea
    Adv Technol Syst, Str Tineretului 1, Targoviste 130029, Romania..
    Sallinen, Niina
    LAB Univ Appl Sci, Mukkulankatu 19, Lahti 15210, Finland..
    Hauge, Jakob A. H. Baalsrud
    BIBA Bremer Inst Prod & Logist GmbH, Hochschulring 20, D-28359 Bremen, Germany..
    Accessibility Considerations in the Design of Serious Games for Production and Logistics2021In: Advances In Production Management Systems: Artificial Intelligence For Sustainable And Resilient Production Systems, APMS 2021, Pt Iv / [ed] Dolgui, A Bernard, A Lemoine, D VonCieminski, G Romero, D, Springer Nature , 2021, p. 510-519Conference paper (Refereed)
    Abstract [en]

    Digital accessibility has been the focus of initiatives, policies and standards at the European and international levels over the last decade. However, the adoption of accessibility guidelines and the development of accessible resources and applications remain limited, and education is a primary example of the multiple challenges that must be addressed. This research has highlighted the main barriers that should be overcome in order to make digital educational games accessible for learners with disabilities, and it has brought forward the critical need to personalize game contexts and analytics to meet the specific profiles of learners with disabilities. Building upon the outcomes of two case studies, the authors propose a game analytics framework for learners with disabilities, in an effort to streamline game design processes that target accessibility.

  • 27.
    Baalsrud Hauge, Jannicke
    et al.
    KTH, School of Industrial Engineering and Management (ITM), Sustainable production development, Advanced Maintenance and Production Logistics.
    Söbke, H.
    Stefan, I. A.
    Stefan, A.
    Applying and Facilitating Serious Location-Based Games2020In: 19th IFIP TC 14 International Conference on Entertainment Computing, ICEC 2020, Springer Science and Business Media Deutschland GmbH , 2020, p. 104-109Conference paper (Refereed)
    Abstract [en]

    The popularity of location-based games continues unabated and is benefiting from the increasing use of mobile end devices and advantageous general conditions, such as the Internet of Things and the Smart City paradigm. This enormous potential of engagement should also be tapped for serious location-based games, i.e. the use of location-based games beyond the purpose of entertainment. The workshop “Applying and Facilitating Serious Location-based Games” aims to contribute to the development of this potential. In the article, the theoretical basis for this workshop is derived and corresponding frameworks are presented.

  • 28.
    Bandali, Benjamin
    KTH, School of Technology and Health (STH), Medical Engineering, Computer and Electronic Engineering.
    Availability and perceived availability with interaction design: Cost-effective availability model for a multinational company2014Independent thesis Basic level (university diploma), 10 credits / 15 HE creditsStudent thesis
    Abstract [en]

    Measuring web-service uptime has always been a key way to improve interaction and system integration. Availability is a common metric in the field of statistics, where the goal is to attract customers but, perhaps more importantly, to serve the users and providers who can improve the services through this metric. Since availability differs a lot depending on the type of service, company and usage, a common problem is to define what availability really is.

    The thesis gives the reader an introduction to availability and also explains why it may vary, including the role of the interaction design behind a service. Although availability is a metric that can be calculated in many different ways, the result is often hard to understand for the public that is interested in it.

    The goal of this thesis is to give the reader an understanding of, and guidelines for, how to define perceived availability based on system availability, and also to present a method of defining, calculating and presenting the metric in a user-friendly way. The result is a cost-effective model for perceived availability, tested at a multinational company.
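    Since the abstract stresses that availability can be calculated in many different ways, one common baseline is the steady-state formula based on mean time between failures (MTBF) and mean time to repair (MTTR). The snippet below is a minimal sketch of that textbook definition only; it is not the cost-effective perceived-availability model developed in the thesis.

```python
# Minimal sketch of the textbook steady-state availability formula,
# A = MTBF / (MTBF + MTTR); illustrative only, not the thesis model.

def availability(mtbf_hours: float, mttr_hours: float) -> float:
    """Return steady-state availability as a fraction between 0 and 1."""
    return mtbf_hours / (mtbf_hours + mttr_hours)

if __name__ == "__main__":
    # Example: a service failing on average every 720 h, taking 2 h to restore.
    print(f"Availability: {availability(720, 2):.4%}")  # about 99.72 %
```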

    Download full text (pdf)
    Degree Project - Availability and perceived availability with interaction design
  • 29. Barbosa, Amanda
    et al.
    Santana, Alixandre
    Hacks, Simon
    Research Group Software Construction, RWTH Aachen University, Aachen, Germany.
    Stein, Niels von
    A Taxonomy for Enterprise Architecture Analysis Research2019Conference paper (Refereed)
    Download full text (pdf)
    fulltext
  • 30.
    Belitz-Hellwich, Wiebke
    KTH, School of Architecture and the Built Environment (ABE), Civil and Architectural Engineering, Structural Engineering and Bridges.
    An Ontology-Based Platform for Information Integration: Supporting Sustainable Smart Transportation Infrastructure2023Licentiate thesis, comprehensive summary (Other academic)
    Abstract [en]

    Transportation and transportation infrastructure are major contributors to global emissions. Even though most of these emissions can be attributed to the fuel consumption of vehicles, some of them are due to pavement construction and maintenance. Improving the condition and extending the life span of transportation infrastructure is an integral part of helping reduce infrastructure-related emissions. A better understanding of the mutual effect of traffic and infrastructure condition is needed to reach this goal. The aim of this thesis is to synchronize logistics operations and pavement condition in a port environment. For this purpose, a logistics simulation model and a Finite Element (FE) model are connected through a system of ontologies that facilitates information exchange between the models. The suggested approach consists of two connected ontologies (one for logistics operations and one for pavements), interfaces between the models and the ontologies, as well as an application that runs the exchange. Additionally, performance tests have been carried out to judge the impact of the integration on the run time of the individual applications.
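    As a rough illustration of what connecting two models "through a system of ontologies" can look like in practice, the sketch below uses the rdflib library to let a logistics model write load events into a shared RDF graph that a pavement model then queries. The namespace and terms (ex:ContainerTruckPass, ex:axleLoadTonnes) are invented placeholders, not the ontologies developed in the thesis.

```python
# Illustrative sketch (not the thesis implementation): two simulation models
# exchanging data through a shared RDF graph using rdflib. The namespace and
# terms below are hypothetical placeholders.
from rdflib import Graph, Literal, Namespace, RDF

EX = Namespace("http://example.org/port-ontology#")

# Logistics-model side: record a vehicle passage as triples.
g = Graph()
g.bind("ex", EX)
passage = EX["passage-001"]
g.add((passage, RDF.type, EX.ContainerTruckPass))
g.add((passage, EX.axleLoadTonnes, Literal(11.5)))

# Pavement (FE) model side: query the shared graph for load events.
results = g.query(
    """
    PREFIX ex: <http://example.org/port-ontology#>
    SELECT ?load WHERE { ?p a ex:ContainerTruckPass ; ex:axleLoadTonnes ?load . }
    """
)
total_load = sum(float(row.load) for row in results)
print(f"Accumulated axle load handed to the FE model: {total_load} t")
```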

    Download full text (pdf)
    fulltext
  • 31. Ben-Nun, J.
    et al.
    Farhi, N.
    Llewellyn, M.
    Riva, B.
    Rosen, A.
    Ta-Shma, A.
    Wikström, Douglas
    KTH, School of Computer Science and Communication (CSC), Theoretical Computer Science, TCS.
    A new implementation of a dual (paper and cryptographic) voting system2012Conference paper (Refereed)
    Abstract [en]

    We report on the design and implementation of a new cryptographic voting system, designed to retain the "look and feel" of standard, paper-based voting used in our country Israel while enhancing security with end-to-end verifiability guaranteed by cryptographic voting. Our system is dual ballot and runs two voting processes in parallel: one is electronic while the other is paper-based and similar to the traditional process used in Israel. Consistency between the two processes is enforced by means of a new, specially-tailored paper ballot format. We examined the practicality and usability of our protocol through implementation and field testing in two elections: the first being a student council election with over 2000 voters, the second a political party's election for choosing their leader. We present our findings, some of which were extracted from a survey we conducted during the first election. Overall, voters trusted the system and found it comfortable to use.

  • 32.
    Bentersten, William
    KTH, School of Engineering Sciences in Chemistry, Biotechnology and Health (CBH), Biomedical Engineering and Health Systems, Health Informatics and Logistics.
    Upptäcka kritiska ändringar i JSON-meddelanden i webb-API:er [Detecting breaking changes in JSON messages in web APIs]2019Independent thesis Advanced level (professional degree), 10 credits / 15 HE creditsStudent thesis
    Abstract [en]

    One way of developing web applications is in two parts, where one part is an API and the other part is the client. This report focuses on JSON APIs and on finding a solution for identifying breaking changes in JSON messages before they affect their intended client in undesirable ways.

    A case study has been carried out at a company that develops their web applications in two parts. The result is a web application (a tool) that solves the problem by recording API requests that are then replayed against different versions of the API. Version-tagged responses are collected and compared with each other in several respects.

    The web application (the tool) succeeds in identifying breaking changes in JSON messages. This is verified using a test API, which in turn verifies the thesis’ hypothesis.

    To test an API, whose underlying application is stateful, that application is expected to be reset to a standardized state before each use of the tool. This is a limitation.

    There is potential for future development in getting the tool to work against authenticated APIs.
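    The comparison step described above can be pictured with a small sketch: given two versions of the same JSON response, flag keys that disappeared or changed type. This is a minimal illustration with hypothetical data, not the tool built in the thesis, which also records and replays real API requests.

```python
# Minimal sketch (not the thesis tool): flag breaking changes between two
# versions of the same JSON response, here defined as removed keys or
# changed value types.
from typing import Any

def breaking_changes(old: Any, new: Any, path: str = "$") -> list[str]:
    issues: list[str] = []
    if isinstance(old, dict) and isinstance(new, dict):
        for key, old_val in old.items():
            if key not in new:
                issues.append(f"{path}.{key}: key removed")
            else:
                issues.extend(breaking_changes(old_val, new[key], f"{path}.{key}"))
    elif type(old) is not type(new):
        issues.append(f"{path}: type changed from {type(old).__name__} to {type(new).__name__}")
    return issues

if __name__ == "__main__":
    v1 = {"id": 1, "name": "Alice", "address": {"city": "Stockholm"}}
    v2 = {"id": "1", "name": "Alice", "address": {}}
    print("\n".join(breaking_changes(v1, v2)))
    # $.id: type changed from int to str
    # $.address.city: key removed
```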

    Download full text (pdf)
    fulltext
  • 33.
    Berezkin, Nikita
    et al.
    KTH, School of Engineering Sciences in Chemistry, Biotechnology and Health (CBH), Biomedical Engineering and Health Systems, Health Informatics and Logistics.
    Heidari, Ahmed
    KTH, School of Engineering Sciences in Chemistry, Biotechnology and Health (CBH), Biomedical Engineering and Health Systems, Health Informatics and Logistics.
    Berika receptdata med innehållshanteringssystem [Enriching recipe data with a content management system]2019Independent thesis Basic level (university diploma), 10 credits / 15 HE creditsStudent thesis
    Abstract [en]

    The problem today is that people do not eat climate-smart food; as a result, the food supply will not suffice, and what we eat may contribute to the greenhouse effect. People often lack the time or knowledge to cook climate-smart food. A solution is to use a Content Management System (CMS). A Content Management System processes a selected type of data in a specific way before it is stored. This report addresses the basics and the construction of a CMS used in a recommendation system for a user. The system suggests more climate-smart food alternatives adapted to the individual's personal needs. The result was that, with the help of data from various sources, an ingredient in a recipe could be enriched with additional information such as nutritional value, allergens, and whether it is vegetarian. Through tests such as performance tests of the CMS execution time, parsing accuracy, and product matching accuracy, a better result was achieved. Most of the ingredients in the recipes became enriched, which leads to more climate-smart food alternatives that are better for the environment. Accuracy here refers to the matching of ingredients in a recipe to the names of products offered by the business. The next step was to enrich the recipes using the enriched ingredients.
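    The enrichment step, matching recipe ingredients against product data and attaching nutritional information, can be sketched roughly as below. The catalogue entries and the simple fuzzy matching are invented for illustration; they are not the CMS built in the thesis.

```python
# Illustrative sketch only (not the thesis CMS): enrich recipe ingredients by
# fuzzy-matching them against a small, made-up product catalogue that carries
# nutrition and vegetarian flags.
from difflib import get_close_matches

CATALOGUE = {
    "crushed tomatoes": {"kcal_per_100g": 32, "vegetarian": True},
    "minced beef": {"kcal_per_100g": 241, "vegetarian": False},
    "red lentils": {"kcal_per_100g": 116, "vegetarian": True},
}

def enrich(ingredient: str) -> dict:
    """Attach catalogue data to an ingredient name if a close match exists."""
    match = get_close_matches(ingredient.lower(), list(CATALOGUE), n=1, cutoff=0.6)
    return {"ingredient": ingredient, **(CATALOGUE[match[0]] if match else {})}

for row in (enrich(i) for i in ["Crushed tomatoes", "Red lentils", "Saffron"]):
    print(row)  # "Saffron" stays unenriched because no close product match exists
```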

    Download full text (pdf)
    Enriching Recipe Data using Content Management System
  • 34.
    Bieser, Jan C. T.
    et al.
    KTH, School of Architecture and the Built Environment (ABE), Sustainable development, Environmental science and Engineering, Strategic Sustainability Studies.
    Kriukelyte, Erika
    KTH, School of Architecture and the Built Environment (ABE), Sustainable development, Environmental science and Engineering, Strategic Sustainability Studies.
    The digitalization of passenger transport: Technologies, applications and potential implications for greenhouse gas emissions2021Report (Other academic)
    Abstract [en]

    To meet internationally agreed climate protection targets, a drastic reduction of passenger transport greenhouse gas (GHG) emissions is required. The “Avoid-Shift-Improve” approach suggests meeting future transport demand by avoiding unnecessary travel, shifting travel to more environmentally-friendly transport modes and improving the environmental performance of transport modes. Digital applications can contribute to either an increase or a decrease of passenger transport GHG emissions, e.g. by avoiding travel, increasing travel or shifting travel to more GHG-intensive or GHG-efficient transport modes. In view of the large number of digital applications in passenger transport and their uncertain impacts on GHG emissions, the aim of this report is to present a review of (1) digital technologies that are used in passenger transport, (2) applications that are supported by digital technologies and (3) their potential impacts on GHG emissions.

    We identified nine central categories of digital technologies that shape passenger transport, namely (mobile) end user devices and apps, telecommunication networks, cloud computing, artificial intelligence and big data, geospatial technologies, digital sensors, computer graphics, automation and robotics and blockchain. These technologies support various applications in passenger transport which can be categorized into digital traveler information systems (e.g. trip planning and booking apps), digital shared mobility services (e.g. car or ride sharing), digitally-enabled transport modes that would not exist without digital technologies (e.g. virtual mobility, taxi drones), digital in-vehicle applications (e.g. automated driving), and digital applications for traffic and infrastructure management (e.g. traffic simulations and mobility pricing).

    All described applications can have reducing and increasing effects on GHG emissions. Main levers to reduce GHG emissions are (1) a reduction of number of vehicles produced (e.g. through vehicle sharing), (2) a reduction of total travel distances (e.g. through virtual mobility), (3) an increase in the attractiveness of and shift to more GHG-efficient transport modes (e.g. through multimodal mobility platforms), (4) an increase in the utilization of transport modes and a reduction of vehicle kilometers traveled (e.g. through ride sharing), and (5) an increase in the fuel efficiency of vehicles (e.g. through automated driving systems).

    In a real-life setting, the impacts of digital applications depend on the interplay between the applications and their design, existing travel patterns and the policy framework in place. In order to put digital applications in passenger transport at the service of climate protection, applications and policies have to be aligned in a way that promotes GHG-reducing levers. Otherwise, there is a risk that these applications lead to an increase in GHG emissions, e.g. by inducing additional travel or promoting more GHG-intensive transport modes.

    Future research should empirically assess the impacts of digital applications on passenger transport and identify the conditions under which decarbonization potentials will materialize. This will support policy makers and market actors to jointly create conditions under which offering digital applications in passenger transport contributes to a net GHG emission reduction and is economically-feasible.

    Download full text (pdf)
    Bieser and Kriukelyte_Digitalization of passenger transport
  • 35.
    Bishop, Adrian N.
    et al.
    KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS. KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP.
    Fidan, Baris
    Anderson, Brian D. O.
    Dogancay, Kutluyil
    Pathirana, Pubudu N.
    Optimality analysis of sensor-target localization geometries2010In: Automatica, ISSN 0005-1098, E-ISSN 1873-2836, Vol. 46, no 3, p. 479-492Article in journal (Refereed)
    Abstract [en]

    The problem of target localization involves estimating the position of a target from multiple noisy sensor measurements. It is well known that the relative sensor-target geometry can significantly affect the performance of any particular localization algorithm. The localization performance can be explicitly characterized by certain measures, for example, by the Cramer-Rao lower bound (which is equal to the inverse Fisher information matrix) on the estimator variance. In addition, the Cramer-Rao lower bound is commonly used to generate a so-called uncertainty ellipse which characterizes the spatial variance distribution of an efficient estimate, i.e. an estimate which achieves the lower bound. The aim of this work is to identify those relative sensor-target geometries which result in a measure of the uncertainty ellipse being minimized. Deeming such sensor-target geometries to be optimal with respect to the chosen measure, the optimal sensor-target geometries for range-only, time-of-arrival-based and bearing-only localization are identified and studied in this work. The optimal geometries for an arbitrary number of sensors are identified and it is shown that an optimal sensor-target configuration is not, in general, unique. The importance of understanding the influence of the sensor-target geometry on the potential localization performance is highlighted via formal analytical results and a number of illustrative examples.
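    For readers unfamiliar with the quantities named in the abstract, the standard relationship between estimator covariance, the Cramér-Rao lower bound and the Fisher information matrix can be written as follows. This is textbook background in generic notation, not an excerpt from the paper.

```latex
% Textbook background in generic notation (not the paper's own derivation):
% an unbiased estimator \hat{\mathbf{p}} of the target position \mathbf{p}
% from measurements \mathbf{z} satisfies
\operatorname{cov}(\hat{\mathbf{p}}) \;\succeq\; \mathbf{I}(\mathbf{p})^{-1},
\qquad
\mathbf{I}(\mathbf{p}) \;=\;
\mathbb{E}\!\left[\,
  \nabla_{\mathbf{p}} \ln f(\mathbf{z}\mid\mathbf{p})\,
  \nabla_{\mathbf{p}} \ln f(\mathbf{z}\mid\mathbf{p})^{\top}
\right].
% A sensor-target geometry is "optimal" when it minimizes a scalar measure of
% \mathbf{I}(\mathbf{p})^{-1}, e.g. its trace or determinant; in the plane, the
% area of the uncertainty ellipse grows with \sqrt{\det\mathbf{I}(\mathbf{p})^{-1}}.
```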

  • 36. Bonivento, A.
    et al.
    Fischione, Carlo
    KTH, School of Electrical Engineering (EES), Automatic Control.
    Sangiovanni-Vincentelli, A.
    Randomized protocol stack for ubiquitous networks in indoor environment2006In: 2006 3rd IEEE Consumer Communications and Networking Conference, CCNC 2006, 2006, Vol. 1, p. 152-156Conference paper (Refereed)
    Abstract [en]

    We present a novel protocol architecture for ubiquitous networks. Our solution is based on randomized routing, MAC and duty-cycling protocols that leverage node density for performance and reliability. We show how the three layers can be jointly optimized for energy efficiency, and we present a completely distributed algorithm that allows the network to reach the optimal working point and adapt to traffic variations with negligible overhead. Finally, we present a set of simulation results that support our mathematical model.

  • 37. Borozanov, Vasil
    et al.
    Hacks, Simon
    Silva, Nuno
    Using Machine Learning Techniques for Evaluating the Similarity of Enterprise Architecture Models2019Conference paper (Refereed)
    Download full text (pdf)
    fulltext
  • 38.
    Boström, Gustav
    KTH, School of Information and Communication Technology (ICT), Computer and Systems Sciences, DSV.
    A case study on estimating the software engineering properties of implementing database encryption as an aspect2005In: SPLAT 05: Papers, 2005Conference paper (Refereed)
  • 39.
    Boström, Gustav
    KTH, School of Information and Communication Technology (ICT), Computer and Systems Sciences, DSV.
    Aspects in the user interface: the case of access controlArticle in journal (Other academic)
  • 40.
    Boström, Gustav
    KTH, Superseded Departments (pre-2005), Computer and Systems Sciences, DSV.
    Database Encryption as an Aspect2004In: AOSD'04 International Conference on Aspect-Oriented Software Development: Papers, 2004Conference paper (Refereed)
    Abstract [en]

    Encryption is an important method for implementing confidentiality in information systems. Unfortunately, applying encryption effectively can be quite complicated. Encryption, as well as other security concerns, is also often spread out in an application, making implementation difficult. This crosscutting nature of encryption makes it a potentially ideal candidate for implementation using AOP. In this article we provide an example of how database encryption was applied using AOP with AspectJ on a real-life healthcare database application. Although the attempt was promising with regards to modularity, amount of effort and security engineering, it also revealed problems related to substring queries that need to be solved to make the approach really useful.
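    To illustrate the crosscutting idea in a language-neutral way (the paper itself uses AspectJ, not Python), the sketch below wraps a persistence call with a decorator playing the role of around advice, encrypting a sensitive field without touching the business code. The function names and the toy XOR "cipher" are hypothetical stand-ins, not the paper's implementation.

```python
# Decorator standing in for AOP "around advice": encrypt a field before the
# wrapped persistence call runs. Names and the XOR "cipher" are toy examples.
import functools

def encrypt_field(field: str):
    def aspect(persist):
        @functools.wraps(persist)
        def around(record: dict):
            record = dict(record)  # do not mutate the caller's copy
            record[field] = "".join(chr(ord(c) ^ 0x2A) for c in record[field])
            return persist(record)
        return around
    return aspect

@encrypt_field("diagnosis")
def save_patient(record: dict) -> None:
    print("stored:", record)  # stand-in for the actual database write

save_patient({"id": 7, "diagnosis": "hypertension"})
```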

  • 41.
    Boström, Gustav
    KTH, School of Computer Science and Communication (CSC), Numerical Analysis and Computer Science, NADA.
    Simplifying development of secure software: Aspects and Agile methods2006Licentiate thesis, comprehensive summary (Other scientific)
    Abstract [en]

    Reducing the complexity of building secure software systems is an important goal as increased complexity can lead to more security flaws. This thesis aims at helping to reduce this complexity by investigating new programming techniques and software development methods for implementing secure software. We provide case studies on the use and effects of applying Aspect-oriented software development to Confidentiality, Access Control and Quality of Service implementation. We also investigate how eXtreme Programming can be used for simplifying the secure software development process by comparing it to the security engineering standards Common Criteria and the Systems Security Engineering Capability Maturity Model. We also explore the relationship between Aspect-oriented programming and Agile software development methods, such as eXtreme Programming.

    Download full text (pdf)
    FULLTEXT01
  • 42.
    Boström, Gustav
    et al.
    KTH, School of Information and Communication Technology (ICT), Computer and Systems Sciences, DSV.
    Wäyrynen, Jaana
    Henkel, Martin
    KTH, School of Information and Communication Technology (ICT), Computer and Systems Sciences, DSV.
    Aspects in the Agile toolbox2005In: SPLAT 05: Papers, 2005Conference paper (Refereed)
  • 43.
    Boström, Henrik
    Stockholms universitet, Institutionen för data- och systemvetenskap.
    Concurrent Learning of Large-Scale Random Forests2011In: Scandinavian Conference on Artificial Intelligence, IOS Press , 2011Conference paper (Refereed)
    Abstract [en]

    The random forest algorithm belongs to the class of ensemble learning methods that are embarrassingly parallel, i.e., the learning task can be straightforwardly divided into subtasks that can be solved independently by concurrent processes. A parallel version of the random forest algorithm has been implemented in Erlang, a concurrent programming language originally developed for telecommunication applications. The implementation can be used for generating very large forests, or handling very large datasets, in a reasonable time frame. This allows for investigating potential gains in predictive performance from generating large-scale forests. An empirical investigation on 34 datasets from the UCI repository shows that forests of 1000 trees significantly outperform forests of 100 trees with respect to accuracy, area under the ROC curve (AUC) and Brier score. However, increasing the forest sizes to 10 000 or 100 000 trees does not give any further significant performance gains.
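    The "embarrassingly parallel" structure described above can be sketched roughly as follows. The paper's implementation is in Erlang, so this Python version with a trivial placeholder learner only illustrates how tree construction is farmed out to concurrent processes and combined by voting.

```python
# Rough illustration of the embarrassingly parallel structure (the paper's
# implementation is in Erlang): each process "trains" one tree on its own
# bootstrap sample; predictions are combined by majority vote. The placeholder
# learner below is not a real decision tree.
import random
from collections import Counter
from concurrent.futures import ProcessPoolExecutor

def train_tree(args):
    data, seed = args
    rng = random.Random(seed)
    bootstrap = [rng.choice(data) for _ in data]          # sample with replacement
    labels = Counter(label for _, label in bootstrap)     # placeholder "tree"
    return labels.most_common(1)[0][0]                    # predicts the majority class

def random_forest_vote(data, n_trees=100, workers=4):
    with ProcessPoolExecutor(max_workers=workers) as pool:
        votes = pool.map(train_tree, ((data, seed) for seed in range(n_trees)))
        return Counter(votes).most_common(1)[0][0]        # forest = majority of trees

if __name__ == "__main__":
    toy = [((x,), x > 5) for x in range(10)]
    print("Forest vote:", random_forest_vote(toy))
```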

    Download full text (pdf)
    fulltext
  • 44.
    Boström, Henrik
    et al.
    Högskolan i Skövde, Institutionen för kommunikation och information.
    Andler, Sten F.
    Högskolan i Skövde, Institutionen för kommunikation och information.
    Brohede, Marcus
    Högskolan i Skövde, Institutionen för kommunikation och information.
    Johansson, Ronnie
    Högskolan i Skövde, Institutionen för kommunikation och information.
    Karlsson, Alexander
    Högskolan i Skövde, Institutionen för kommunikation och information.
    van Laere, Joeri
    Högskolan i Skövde, Institutionen för kommunikation och information.
    Niklasson, Lars
    Högskolan i Skövde, Institutionen för kommunikation och information.
    Nilsson, Marie
    Högskolan i Skövde, Institutionen för kommunikation och information.
    Persson, Anne
    Högskolan i Skövde, Institutionen för kommunikation och information.
    Ziemke, Tom
    Högskolan i Skövde, Institutionen för kommunikation och information.
    On the Definition of Information Fusion as a Field of Research2007Report (Other academic)
    Abstract [en]

    A more precise definition of the field of information fusion can be of benefit to researchers within the field, who may use such a definition when motivating their own work and evaluating the contribution of others. Moreover, it can enable researchers and practitioners outside the field to more easily relate their own work to the field and more easily understand the scope of the techniques and methods developed in the field. Previous definitions of information fusion are reviewed from that perspective, including definitions of data and sensor fusion, and their appropriateness as definitions for the entire research field is discussed. Based on strengths and weaknesses of existing definitions, a novel definition is proposed, which is argued to effectively fulfill the requirements that can be put on a definition of information fusion as a field of research.

    Download full text (pdf)
    FULLTEXT01
  • 45.
    Boström, Henrik
    et al.
    Stockholms universitet, Institutionen för data- och systemvetenskap.
    Dalianis, Hercules
    Stockholms universitet, Institutionen för data- och systemvetenskap.
    De-identifying health records by means of active learning2012In: ICML 2012 workshop on Machine Learning for Clinical Data Analysis 2012, 2012Conference paper (Refereed)
    Abstract [en]

    An experiment on classifying words in Swedish health records as belonging to one of eight protected health information (PHI) classes, or to the non-PHI class, by means of active learning has been conducted, in which three selection strategies were evaluated in conjunction with random forests: the commonly employed approach of choosing the most uncertain examples, choosing randomly, and choosing the most certain examples. Surprisingly, random selection outperformed choosing the most uncertain examples with respect to ten considered performance metrics. Moreover, choosing the most certain examples outperformed random selection with respect to nine out of ten metrics.
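    The three selection strategies compared above can be sketched on top of any probabilistic classifier. The snippet uses synthetic data and scikit-learn's random forest purely for illustration; it is not the paper's Swedish health-record corpus or pipeline.

```python
# Sketch of the three active-learning selection strategies compared above,
# using a random forest on synthetic data (scikit-learn assumed installed).
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
X_labeled, y_labeled = rng.normal(size=(50, 5)), rng.integers(0, 2, 50)
X_pool = rng.normal(size=(200, 5))                      # unlabeled candidate examples

model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_labeled, y_labeled)
proba = model.predict_proba(X_pool)
margin = np.abs(proba[:, 0] - proba[:, 1])              # small margin = uncertain

batch = 10
most_uncertain = np.argsort(margin)[:batch]             # classic uncertainty sampling
most_certain = np.argsort(margin)[-batch:]              # the strategy that did best in the abstract
random_pick = rng.choice(len(X_pool), size=batch, replace=False)

print("uncertain:", most_uncertain)
print("certain:  ", most_certain)
print("random:   ", random_pick)
```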

  • 46. Bowers, John
    et al.
    Hellström, Sten-Olof
    Tobiasson, Helena
    Taxén, Gustav
    KTH, Superseded Departments (pre-2005), Numerical Analysis and Computer Science, NADA.
    Designing mixed media artefacts for public settings2004In: Cooperative Systems Design. Scenario-Based Design of Collaborative Systems / [ed] Darses, F., Simone, C. and Zacklad, M., Amsterdam: IOS Press , 2004, p. 195-210Conference paper (Refereed)
    Abstract [en]

    This paper describes how principles which are emerging from social-scientific studies of people’s interaction with mixed media artefacts in public places have been used to support the development of two installations, the second of which is a long-term museum exhibit. Our principles highlight the design of ‘emergent collaborative value’, ‘layers of noticeability’ and ‘structures of motivation’ to create an ‘ecology of participation’ in installations. We describe how our first installation was used as a ‘research vehicle’ that guided and shaped the design of the museum installation. We also provide an account of how people interact with our installations and how this analysis has shaped their design. The paper closes with some general remarks about the challenges there are for the design of collaborative installations and the extent to which we have met them.

  • 47.
    Braun, Stefan
    et al.
    KTH, School of Electrical Engineering (EES), Microsystem Technology.
    Oberhammer, Joachim
    KTH, School of Electrical Engineering (EES), Microsystem Technology.
    Stemme, Göran
    KTH, School of Electrical Engineering (EES), Microsystem Technology.
    MEMS single-chip 5x5 and 20x20 double-switch arrays for telecommunication networks2007In: IEEE 20th International Conference on Micro Electro Mechanical Systems, 2007. MEMS, New York: IEEE , 2007, p. 811-814Conference paper (Refereed)
    Abstract [en]

    This paper reports on a microelectromechanical switch array with up to 20x20 double switches, packaged on a single chip and utilized for main distribution frames in copper-wire networks. The device includes 5x5 or 20x20 double switches, allowing for an any-to-any interconnection of any input line to the specific output line. The switches are based on an electrostatic S-shaped film actuator with the contact moving between a top and a bottom electrode. The device is fabricated in two parts and is designed to be assembled using selective adhesive wafer bonding, resulting in a wafer-scale package of the switch array. The 5x5 switch arrays have a size of 6.7x6.4 mm(2) and the 20x20 arrays are 14x10 mm(2) large. The switch actuation voltages for closing/opening the switches, averaged over an array, were measured to be 21.2 V / 15.3 V for the 5x5 array and 93.2 V / 37.3 V for the 20x20 array. The total impedance on the 5x5 array varies between 0.126 Omega and 0.564 Omega at a measurement current of 1 mA. The resistance of the switch contacts within the 5x5 array was determined to be 0.216 Omega with a standard deviation of 0.155 Omega.

  • 48.
    Braun, Stefan
    et al.
    KTH, School of Electrical Engineering (EES), Microsystem Technology.
    Oberhammer, Joachim
    KTH, School of Electrical Engineering (EES), Microsystem Technology.
    Stemme, Göran
    KTH, School of Electrical Engineering (EES), Microsystem Technology.
    MEMS single-chip microswitch array for re-configuration of telecommunication networks2006In: 2006 European Microwave Conference: Vols 1-4, New York: IEEE , 2006, p. 315-318Conference paper (Refereed)
    Abstract [en]

    This paper reports on a micro-electromechanical (MEMS) switch array embedded and packaged on a single chip. The switch array is utilized for the automated re-configuration of the physical layer of copper-wire telecommunication networks. A total of 25 individually controllable double-switches are arranged in a 6.7 x 6.4 mm(2) large 5x5 switch matrix allowing for any configuration of independently connecting the line-pairs of the five input channels to any line-pair of the five output channels. The metal-contact switch array is embedded in a single chip package, together with 4 metal layers for routing the signal and control lines and with a total of 35 I/O contact pads. The MEMS switches are based on an electrostatic S-shaped thin membrane actuator with the switching contact bar rolling between a top and a bottom electrode. This special switch design allows for low actuation voltage (21.23 V) to close the switches and for high isolation. The total signal line resistances of the routing network vary from 0.57 Omega to 0.98 Omega. The contact resistance of the gold contacts is 0.216 Omega.

  • 49.
    Braun, Stefan
    et al.
    KTH, School of Electrical Engineering (EES), Microsystem Technology.
    Oberhammer, Joachim
    KTH, School of Electrical Engineering (EES), Microsystem Technology.
    Stemme, Göran
    KTH, School of Electrical Engineering (EES), Microsystem Technology.
    Smart individual switch addressing of 5×5 and 20×20 MEMS double-switch arrays2007In: TRANSDUCERS and EUROSENSORS '07 - 4th International Conference on Solid-State Sensors, Actuators and Microsystems, IEEE , 2007, p. 153-156Conference paper (Other academic)
    Abstract [en]

    This paper presents a smart row / column addressing scheme for large MEMS microswitch arrays, utilizing the pull-in / pull-out hysteresis of their electrostatic actuators to efficiently reduce the number of control lines. Single-chip 20 x 20 double-switch arrays with 400 individually programmable switch elements have been fabricated and the smart addressing scheme was successfully evaluated. The reproducibility of the actuation voltages within the array is very important for this addressing scheme and therefore the influence of effects such as isolation layer charging on the pull-in voltages has also been investigated.

  • 50.
    Briat, Corentin
    et al.
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Optimization and Systems Theory.
    Seuret, A.
    Stability criteria for asynchronous sampled-data systems - A fragmentation approach2011In: IFAC Proc. Vol. (IFAC-PapersOnline), 2011, no PART 1, p. 1313-1318Conference paper (Refereed)
    Abstract [en]

    The stability analysis of asynchronous sampled-data systems is studied. The approach is based on a recent result which makes it possible to study, in an equivalent way, the quadratic stability of asynchronous sampled-data systems in a continuous-time framework via the use of particular functionals satisfying a necessary boundary condition. The method developed here is an extension of previous results using a fragmentation technique inspired by recent advances in time-delay systems theory. The approach leads to a tractable convex feasibility problem involving a small number of finite-dimensional LMIs. The approach is finally illustrated through several examples.
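    As background to the terms used in the abstract, a basic (non-fragmented) quadratic-stability condition for a linear sampled-data loop can be stated as follows. This is standard material in generic notation, not the looped-functional construction developed in the paper.

```latex
% Standard background, not the paper's construction. For the sampled-data loop
%   \dot{x}(t) = A x(t) + B K x(t_k),  t \in [t_k, t_{k+1}),
%   h_k = t_{k+1} - t_k \in [h_{\min}, h_{\max}],
% the state propagates between sampling instants as x(t_{k+1}) = \Lambda(h_k) x(t_k),
\Lambda(h) \;=\; e^{A h} + \int_{0}^{h} e^{A s}\,\mathrm{d}s \; B K ,
% and quadratic stability for arbitrary (asynchronous) sampling sequences holds
% if there exists P \succ 0 such that
\Lambda(h)^{\top} P \, \Lambda(h) - P \;\prec\; 0
\quad \text{for all } h \in [h_{\min}, h_{\max}] .
% Functional-based methods such as the fragmentation approach above replace this
% parameterized condition by a finite set of tractable LMIs.
```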
