kth.se Publications (DiVA)
1–50 of 436 hits
  • 1.
    Aarno, Daniel
    KTH, School of Computer Science and Communication (CSC), Numerical Analysis and Computer Science, NADA.
    Intention recognition in human machine collaborative systems (2007). Licentiate thesis, monograph (Other scientific)
    Abstract [en]

    Robot systems have been used extensively during the last decades to provide automation solutions in a number of areas. The majority of the currently deployed automation systems are limited in that the tasks they can solve are required to be repetitive and predictable. One reason for this is the inability of today's robot systems to understand and reason about the world. Therefore, the robotics and artificial intelligence research communities have made significant research efforts to produce more intelligent machines. Although significant progress has been made towards achieving robots that can interact in a human environment, there is currently no system that comes close to achieving the reasoning capabilities of humans.

    In order to reduce the complexity of the problem some researchers have proposed an alternative to creating fully autonomous robots capable of operating in human environments. The proposed alternative is to allow fusion of human and machine capabilities. For example, using teleoperation a human can operate at a remote site, which may not be accessible for the operator for a number of reasons, by issuing commands to a remote agent that will act as an extension of the operator’s body.

    Segmentation and recognition of operator generated motions can be used to provide appropriate assistance during task execution in teleoperative and human-machine collaborative settings. The assistance is usually provided in a virtual fixture framework where the level of compliance can be altered online in order to improve the performance in terms of execution time and overall precision. Acquiring, representing and modeling human skills are key research areas in teleoperation, programming-by-demonstration and human-machine collaborative settings. One of the common approaches is to divide the task that the operator is executing into several sub-tasks in order to provide manageable modeling.

    This thesis is focused on two aspects of human-machine collaborative systems: classification of an operator's motion into a predefined state of a manipulation task, and assistance during a manipulation task based on virtual fixtures. The particular applications considered consist of manipulation tasks where a human operator controls a robotic manipulator in a cooperative or teleoperative mode.

    A method for online task tracking using adaptive virtual fixtures is presented. Rather than executing a predefined plan, the operator has the ability to avoid unforeseen obstacles and deviate from the model. To allow this, the probability of following a certain trajectory (sub-task) is estimated and used to automatically adjust the compliance of a virtual fixture, thus providing an online decision of how to fixture the movement.

    A layered hidden Markov model is used to model human skills. A gestem classifier that classifies the operator’s motions into basic action-primitives, or gestemes, is evaluated. The gestem classifiers are then used in a layered hidden Markov model to model a simulated teleoperated task. The classification performance is evaluated with respect to noise, number of gestemes, type of the hidden Markov model and the available number of training sequences. The layered hidden Markov model is applied to data recorded during the execution of a trajectory-tracking task in 2D and 3D with a robotic manipulator in order to give qualitative as well as quantitative results for the proposed approach. The results indicate that the layered hidden Markov model is suitable for modeling teleoperative trajectory-tracking tasks and that the layered hidden Markov model is robust with respect to misclassifications in the underlying gestem classifiers.

    Download full text (pdf)
    FULLTEXT01
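
A minimal sketch (Python, with invented parameters) of the kind of pipeline the abstract above describes: a low-level gestem classifier produces discrete motion labels, an HMM over trajectory sub-tasks is filtered forward to estimate which sub-task the operator is following, and that belief is mapped to a virtual-fixture compliance level. The two-state task model, the observation matrix and the compliance mapping are illustrative assumptions, not the thesis implementation.

```python
import numpy as np

# Illustrative two-sub-task model: hidden states are trajectory sub-tasks,
# observations are gesteme labels emitted by a low-level motion classifier.
A = np.array([[0.9, 0.1],        # sub-task transition probabilities
              [0.1, 0.9]])
B = np.array([[0.7, 0.2, 0.1],   # P(gesteme | sub-task 0)
              [0.1, 0.2, 0.7]])  # P(gesteme | sub-task 1)
pi = np.array([0.5, 0.5])

def filter_subtask(gestemes):
    """HMM forward filtering: P(sub-task_t | gesteme_1..t)."""
    alpha = pi * B[:, gestemes[0]]
    alpha /= alpha.sum()
    for g in gestemes[1:]:
        alpha = (alpha @ A) * B[:, g]
        alpha /= alpha.sum()
    return alpha

def fixture_compliance(p_follow, c_min=0.1, c_max=1.0):
    """Map the probability that the operator follows the fixtured sub-task to
    a compliance level: high confidence -> stiff guidance, low -> free motion."""
    return c_max - (c_max - c_min) * p_follow

observed = [0, 0, 1, 0, 2, 2]    # gesteme labels from the low-level classifier
belief = filter_subtask(observed)
print(belief, fixture_compliance(belief[0]))
```

The layered model in the thesis stacks such HMMs, with the lower layer supplying gesteme probabilities to the upper, task-level layer; the sketch only shows the upper layer driven by hard gesteme labels.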
  • 2.
    Adler, Jeremy
    et al.
    The Wenner-Gren Institute, Stockholm University, Sweden.
    Bergholm, Fredrik
    KTH, School of Computer Science and Communication (CSC), Numerical Analysis and Computer Science, NADA.
    Pagakis, Stamatis N.
    Biomedical Research Foundation Academy of Athens, Athens, Greece.
    Parmryd, Ingela
    The Wenner-Gren Institute, Stockholm University, Sweden.
    Noise and colocalization in fluorescence microscopy: solving a problem (2008). In: Microscopy and Microanalysis, ISSN 1431-9276, E-ISSN 1435-8115, Vol. 22, no 5. Article in journal (Refereed)
  • 3. Agarwal, A.
    et al.
    Dowling, A. P.
    Shin, H. -C
    Graham, W.
    Sefi, Sandy
    KTH, School of Computer Science and Communication (CSC), Numerical Analysis and Computer Science, NADA.
    A ray tracing approach to calculate acoustic shielding by the silent aircraft airframe (2006). In: Collection of Technical Papers - 12th AIAA/CEAS Aeroacoustics Conference, 2006, p. 2799-2818. Conference paper (Refereed)
    Abstract [en]

    The Silent Aircraft is in the form of a flying wing with a large wing planform and a propulsion system that is embedded in the rear of the airframe with intakes on the upper surface of the wing. Thus a large part of the forward-propagating noise from the intake ducts is expected to be shielded from observers on the ground by the wing. Acoustic shielding effects can be calculated by solving an external acoustic scattering problem for a moving aircraft. In this paper, acoustic shielding effects of the Silent Aircraft airframe are quantified by a ray-tracing method. The dominant frequencies from the noise spectrum of the engines are sufficiently high for ray theory to yield accurate results. It is shown that for low-Mach number homentropic flows, a condition satisfied approximately by the Silent Aircraft during take-off and approach, the acoustic rays propagate in straight lines. Thus, from Fermat's principle it is clear that classical Geometrical Optics and Geometrical Theory of Diffraction solutions are applicable to this moving-body problem as well. The total amount of acoustic shielding at an observer located in the shadow region is calculated by adding the contributions from all the diffracted rays (edge-diffracted and creeping rays) and then subtracting the result from the incident field without the airframe. Experiments on a model-scale geometry have been conducted in an anechoic chamber to test the applicability of the ray-tracing technique. The three-dimensional ray-tracing solver is validated by comparing the numerical solution with analytical high-frequency asymptotic solutions for canonical shapes.

  • 4. Agarwal, Anurag
    et al.
    Dowling, Ann P.
    Shin, Ho-Chul
    Graham, Will
    Sefi, Sandy
    KTH, School of Computer Science and Communication (CSC), Numerical Analysis and Computer Science, NADA.
    Ray-tracing approach to calculate acoustic shielding by a flying wing airframe (2007). In: AIAA Journal, ISSN 0001-1452, E-ISSN 1533-385X, Vol. 45, no 5, p. 1080-1090. Article in journal (Refereed)
    Abstract [en]

    The "silent aircraft" is in the form of a flying wing with a large wing planform and a propulsion system that is embedded in the rear of the airframe with intakes on the upper surface of the wing. Thus a large part of the forward-propagating noise from the intake ducts is expected to be shielded from observers on the ground by the wing. Acoustic shielding effects can be calculated by solving an external acoustic scattering problem for a moving aircraft. In this paper, acoustic shielding effects of the silent aircraft airframe are quantified by a ray-tracing method. The dominant frequencies from the noise spectrum of the engines are sufficiently high for ray theory to yield accurate results. It is shown that, for low-Mach number homentropic flows, a condition satisfied approximately during takeoff and approach, the acoustic rays propagate in straight lines. Thus, from Fermat's principle it is clear that classical geometrical optics and geometrical theory of diffraction solutions are applicable to this moving-body problem as well. The total amount of acoustic shielding at an observer located in the shadow region is calculated by adding the contributions from all the diffracted rays (edge-diffracted and creeping rays) and then subtrading the result from the incident field without the airframe. The three-dimensional ray-tracing solver is validated by comparing the numerical solutions with analytical high-frequency asymptotic solutions for canonical shapes. Experiments on a model-scale geometry have been conducted in an anechoic chamber to test the applicability of the ray-tracing technique. The results confirm the accuracy of the approach, which is then applied to a CAD representation of a prototype silent aircraft design. As expected, the flying wing configuration provides very significant ground shielding (in excess of 10 dB at all locations) of a source above the airframe.

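A small illustration (Python) of how a shadow-zone shielding level can be expressed once the diffracted-ray contributions are known: sum the complex ray amplitudes and compare the result with the incident field that would exist without the airframe. The amplitudes and phases below are invented; the GTD and creeping-ray diffraction coefficients that actually produce them are not reproduced here.

```python
import numpy as np

def shielding_dB(p_incident, diffracted):
    """Shielding at a shadow-zone observer: level of the incident field
    (no airframe) relative to the total field, which in the shadow region
    is the coherent sum of edge-diffracted and creeping-ray contributions."""
    p_total = sum(diffracted)                      # complex pressure amplitudes
    return 20.0 * np.log10(abs(p_incident) / abs(p_total))

# Illustrative numbers only (not taken from the paper):
p_inc = 1.0 + 0.0j
rays = [0.12 * np.exp(1j * 2.1), 0.08 * np.exp(-1j * 0.7)]   # two diffracted rays
print(f"shielding = {shielding_dB(p_inc, rays):.1f} dB")
```
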
  • 5.
    Aktug, Irem
    KTH, School of Computer Science and Communication (CSC), Numerical Analysis and Computer Science, NADA.
    State space representation for verification of open systems (2006). Licentiate thesis, monograph (Other scientific)
    Abstract [en]

    When designing an open system, there might be no implementation available for certain components at verification time. For such systems, verification has to be based on assumptions on the underspecified components. In this thesis, we present a framework for the verification of open systems through explicit state space representation.

    We propose Extended Modal Transition Systems (EMTS) as a suitable structure for representing the state space of open systems when assumptions on components are written in the modal μ-calculus. EMTSs are based on the Modal Transition Systems (MTS) of Larsen. This representation supports state space exploration based verification techniques, and provides an alternative formalism for graphical specification. In interactive verification, it enables proof reuse and facilitates visualization for the user guiding the verification process.

    We present a two-phase construction from process algebraic open system descriptions to such state space representations. The first phase deals with component assumptions, and is essentially a maximal model construction for the modal μ-calculus that makes use of a powerset construction for the fixed point cases. In the second phase, the models obtained are combined according to the structure of the open system to form the complete state space. The construction is sound and complete for systems with a single unknown component and sound for those without dynamic process creation. We suggest a tableau-based proof system for establishing open system properties of the state space representation. The proof system is sound and it is complete for modal μ-calculus formulae with only prime subformulae.

    A complete framework based on the state space representation is offered for the automatic verification of open systems. The process begins with specifying the open system by a process algebraic term with assumptions. Then, the state space representation is extracted from this description using the construction described above. Finally, open system properties can be checked on this representation using the proof system.

    Download full text (pdf)
    FULLTEXT01
  • 6.
    Aktug, Irem
    et al.
    KTH, School of Computer Science and Communication (CSC), Numerical Analysis and Computer Science, NADA.
    Gurov, Dilian
    KTH, School of Computer Science and Communication (CSC), Numerical Analysis and Computer Science, NADA.
    State Space Representation for Verification of Open Systems (2006). In: Algebraic Methodology And Software Technology, Proceedings / [ed] Johnson, M; Vene, V, Berlin: Springer, 2006, p. 5-20. Conference paper (Refereed)
    Abstract [en]

    When designing an open system, there might be no implementation available for certain components at verification time. For such systems, verification has to be based on assumptions on the underspecified components. When component assumptions are expressed in Hennessy-Milner logic (HML), the state space of open systems can be naturally represented with modal transition systems (MTS), a graphical specification language equiexpressive with HML. Having an explicit state space representation supports state space exploration based verification techniques. Besides, in interactive verification it enables proof reuse and facilitates visualization for the user guiding the verification process. As an intuitive representation of system behavior, it aids debugging when proof generation fails in automatic verification.

    However, HML is not expressive enough to capture temporal assumptions. For this purpose, we extend MTSs to represent the state space of open systems where component assumptions are specified in the modal μ-calculus. We present a two-phase construction from process algebraic open system descriptions to such state space representations. The first phase deals with component assumptions, and is essentially a maximal model construction for the modal μ-calculus. In the second phase, the models obtained are combined according to the structure of the open system to form the complete state space. The construction is sound and complete for systems with a single unknown component and sound for those without dynamic process creation. For establishing open system properties based on the representation, we present a proof system which is sound and complete for prime formulae.

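To make the kind of component assumption concrete (the formula is a generic illustration, not an example taken from the paper), a temporal safety assumption on an unknown component can be written in the modal μ-calculus as

\[
\nu X.\; \bigl([\mathit{err}]\,\mathsf{ff} \;\wedge\; [a]X \;\wedge\; [b]X\bigr),
\]

a greatest fixed point stating that no err-action is ever enabled along any sequence of a- and b-transitions. HML has no fixed-point operators, so assumptions of this temporal kind are exactly what forces the extension from MTSs to the richer structures described above.
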
  • 7.
    Anderlind, Eva
    et al.
    KTH, School of Computer Science and Communication (CSC), Human - Computer Interaction, MDI.
    Noz, Marilyn E.
    New York University, Department of Radiology.
    Sallnäs Pysander, Eva-Lotta
    KTH, School of Computer Science and Communication (CSC), Numerical Analysis and Computer Science, NADA.
    Lind, Bengt K.
    Karolinska Institute, Department of Medical Radiation Physics.
    Maguire, Gerald Q. Jr.
    KTH, School of Information and Communication Technology (ICT), Communication Systems, CoS.
    Will haptic feedback speed up medical imaging? An application to radiation treatment planning (2008). In: Acta Oncologica, ISSN 0284-186X, E-ISSN 1651-226X, Vol. 47, no 1, p. 32-37. Article in journal (Refereed)
    Abstract [en]

    Haptic technology enables us to incorporate the sense of touch into computer applications, providing an additional input/output channel. The purpose of this study was to examine if haptic feedback can help physicians and other practitioners to interact with medical imaging and treatment planning systems. A haptic application for outlining target areas (a key task in radiation therapy treatment planning) was implemented and then evaluated via a controlled experiment with ten subjects. Even though the sample size was small, and the application only a prototype, results showed that haptic feedback can significantly increase (p < 0.05) the speed of outlining target volumes and organs at risk. No significant differences were found regarding precision or perceived usability. This promising result warrants further development of a full haptic application for this task. Improvements to the usability of the application as well as to the forces generated have been implemented and an experiment with more subjects is planned.

  • 8.
    Andersson, Mats
    KTH, School of Computer Science and Communication (CSC), Numerical Analysis and Computer Science, NADA.
    Frekventa användares bruk och uppfattning av webben [Frequent users' use and perception of the Web] (2005). In: Human IT, ISSN 1402-1501, E-ISSN 1402-151X, Vol. 8, no 1, p. 1-49. Article in journal (Refereed)
    Abstract [en]

    The Web is becoming an essential appliance for people in contemporary society. But although the Web is easy to define in technical terms, it is much harder to describe in terms of aims and what it can afford. This paper presents a study on frequent users' usage and experiences of the Web and is based on a social constructive perspective. Notes from diaries were used in so-called stimulated recall interviews. The study was conducted with inspiration from phenomenography and resulted in the identification of four aspects of what the Web can afford: the aspect of reference; the aspect of distance eliminator; the aspect of overview; and the social aspect. The results also give a picture of usage in everyday life. The respondents' knowledge of how to use the Web is diverse, which implies that such knowledge should not be neglected or taken for granted. Another conclusion is that a technologically deterministic view of Web usage can be questioned.

  • 9.
    Andersson, Mats
    et al.
    KTH, School of Computer Science and Communication (CSC), Numerical Analysis and Computer Science, NADA.
    Persson, Christian
    KTH, School of Computer Science and Communication (CSC), Numerical Analysis and Computer Science, NADA.
    Meeting the users need for knowledge: a concept of a learning domain. Manuscript (preprint) (Other academic)
  • 10.
    Andersson, Samuel A.
    et al.
    KTH, School of Computer Science and Communication (CSC), Numerical Analysis and Computer Science, NADA.
    Lagergren, Jens
    KTH, School of Computer Science and Communication (CSC), Numerical Analysis and Computer Science, NADA.
    Motif Yggdrasil: Sampling from a tree mixture model (2006). In: Research In Computational Molecular Biology, Proceedings / [ed] Apostolico, A; Guerra, C; Istrail, S; Pevzner, P; Waterman, M, 2006, Vol. 3909, p. 458-472. Conference paper (Refereed)
    Abstract [en]

    In phylogenetic foot-printing, putative regulatory elements are found in upstream regions of orthologous genes by searching for common motifs. Motifs in different upstream sequences are subject to mutations along the edges of the corresponding phylogenetic tree; consequently, taking advantage of the tree in the motif search is an appealing idea. We describe the Motif Yggdrasil sampler, the first Gibbs sampler based on a general tree that uses unaligned sequences. Previous tree-based Gibbs samplers have assumed a star-shaped tree or partially aligned upstream regions. We give a probabilistic model describing upstream sequences with regulatory elements and build a Gibbs sampler with respect to this model. We apply the collapsing technique to eliminate the need to sample nuisance parameters, and give a derivation of the predictive update formula. The use of the tree achieves a substantial increase in the nucleotide-level correlation coefficient both for synthetic data and for 37 bacterial lexA genes.

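A toy site sampler (Python) for the star-topology special case that the Motif Yggdrasil sampler generalizes: one motif occurrence per sequence, columns treated independently, and, for brevity, no background model for non-motif positions. The sequences and motif width are made up; the tree-aware likelihood and the collapsed predictive update of the paper are not reproduced.

```python
import random
from collections import Counter

ALPHABET = "ACGT"

def column_counts(seqs, starts, w, skip):
    """Position-specific nucleotide counts from all motif occurrences except `skip`."""
    counts = [Counter() for _ in range(w)]
    for i, (s, p) in enumerate(zip(seqs, starts)):
        if i != skip:
            for j in range(w):
                counts[j][s[p + j]] += 1
    return counts

def site_probability(seq, pos, counts, w, pseudo=1.0):
    """Probability of the window starting at `pos` under the current motif model."""
    total = sum(counts[0].values()) + pseudo * len(ALPHABET)
    p = 1.0
    for j in range(w):
        p *= (counts[j][seq[pos + j]] + pseudo) / total
    return p

def gibbs_motif(seqs, w, iters=200, seed=0):
    """Gibbs sampling of motif start positions, resampling one sequence at a time."""
    rng = random.Random(seed)
    starts = [rng.randrange(len(s) - w + 1) for s in seqs]
    for _ in range(iters):
        for i, s in enumerate(seqs):
            counts = column_counts(seqs, starts, w, skip=i)
            weights = [site_probability(s, p, counts, w)
                       for p in range(len(s) - w + 1)]
            starts[i] = rng.choices(range(len(weights)), weights=weights)[0]
    return starts

seqs = ["ACGTACGTTTACGA", "TTTACGAACGTACG", "GGACGTACGATTTT"]   # toy sequences
print(gibbs_motif(seqs, w=6))
```
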
  • 11. Andrews, George
    et al.
    Eriksson, Henrik
    KTH, School of Computer Science and Communication (CSC), Numerical Analysis and Computer Science, NADA.
    Petrov, Fedor
    Romik, Dan
    Integrals, partitions and MacMahon's theorem (2007). In: Journal of Combinatorial Theory, Series A (Print), ISSN 0097-3165, E-ISSN 1096-0899, Vol. 114, no 3, p. 545-554. Article in journal (Refereed)
    Abstract [en]

    In two previous papers, the study of partitions with short sequences has been developed both for its intrinsic interest and for a variety of applications. The object of this paper is to extend that study in various ways. First, the relationship of partitions with no consecutive integers to a theorem of MacMahon and mock theta functions is explored independently. Secondly, we derive in a succinct manner a relevant definite integral related to the asymptotic enumeration of partitions with short sequences. Finally, we provide the generating function for partitions with no sequences of length K and part exceeding N.

  • 12.
    Appelgren, Ester
    KTH, School of Computer Science and Communication (CSC), Numerical Analysis and Computer Science, NADA.
    The influence of media convergence on strategies in newspaper production (2005). Licentiate thesis, comprehensive summary (Other scientific)
    Abstract [en]

    Convergence implies that previously unalike areas come together, approaching a common goal. Media convergence, a subordinate concept of convergence, has become a common term for a range of processes within the production of media content, its distribution and consumption. The concept of media convergence has achieved buzzword status in many contexts due to its widespread use.

    The concept is not new and has been discussed by researchers in many academic fields and from several different points of view. This thesis will discuss media convergence as an ongoing process and not an end state.

    Newspapers are one of many so-called publishing channels that provide information and entertainment. They have traditionally been printed on paper, but today’s digital technology makes it possible to provide newspapers through a number of different channels. The current strategy used by newspaper companies involves a process of convergence mainly regarding multiple publishing. A newspaper company interested in publishing content through multiple channels has to adapt its production workflow to produce content not only for the traditional printed edition, but also for the other channels.

    In this thesis, a generalized value chain involving four main stages illustrates the production workflow at a newspaper company in relation to the convergence processes. The four stages are creation, packaging, distribution and consumption of content.

    The findings of the thesis are based on studies of the newspaper industry in Sweden and reflect specific newspaper companies, their strategies, production workflow and ventures from 2002 to 2005. The methods used have been case studies, literature studies and scenarios.

    Some of the conclusions of the thesis indicate that convergence processes have steered the newspaper companies’ development towards multiple channel publishing. Advancing technology and mergers between companies have contributed to the processes of convergence. However, the new publishing channels have been described as threatening to the traditional printed editions since they compete for consumers’ time and advertising revenues. Convergence of technology has made it possible to store, edit and publish material over many different networks using the same tools and the same database system. If the content is stored in a neutral format, it can be packaged and used in many different types of publishing channels. However, according to the studied newspapers, a fully automated workflow for all publishing channels is undesirable and impossible to achieve with the existing technology, standards and organizational structure.

    This licentiate thesis will discuss some of the strategies behind multiple channel publishing, production workflows and market conditions to detect how the newspaper industry is coping with media convergence.

    Download full text (pdf)
    FULLTEXT01
  • 13.
    Appelö, Daniel
    KTH, School of Computer Science and Communication (CSC), Numerical Analysis and Computer Science, NADA.
    Absorbing Layers and Non-Reflecting Boundary Conditions for Wave Propagation Problems (2005). Doctoral thesis, comprehensive summary (Other scientific)
    Abstract [en]

    The presence of wave motion is the defining feature in many fields of application, such as electromagnetics, seismics, acoustics, aerodynamics, oceanography and optics. In these fields, accurate numerical simulation of wave phenomena is important for the enhanced understanding of basic phenomena, but also in the design and development of various engineering applications.

    In general, numerical simulations must be confined to truncated domains, much smaller than the physical space where the wave phenomena take place. To truncate the physical space, artificial boundaries, and corresponding boundary conditions, are introduced. There are four main classes of methods that can be used to truncate problems on unbounded or large domains: boundary integral methods, infinite element methods, non-reflecting boundary condition methods and absorbing layer methods.

    In this thesis, we consider different aspects of non-reflecting boundary conditions and absorbing layers. In paper I, we construct discretely non-reflecting boundary conditions for a high order centered finite difference scheme. This is done by separating the numerical solution into spurious and physical waves, using the discrete dispersion relation.

    In paper II-IV, we focus on the perfectly matched layer method, which is a particular absorbing layer method. An open issue is whether stable perfectly matched layers can be constructed for a general hyperbolic system.

    In paper II, we present a stable perfectly matched layer formulation for 2 x 2 symmetric hyperbolic systems in (2 + 1) dimensions. We also show how to choose the layer parameters as functions of the coefficient matrices to guarantee stability.

    In paper III, we construct a new perfectly matched layer for the simulation of elastic waves in anisotropic media. We present theoretical and numerical results, showing that the stability properties of the present layer are better than those of previously suggested layers.

    In paper IV, we develop general tools for constructing PMLs for first order hyperbolic systems. We present a model with many parameters which is applicable to all hyperbolic systems, and which we prove is well-posed and perfectly matched. We also use an automatic method, derived in paper V, for analyzing the stability of the model and establishing energy inequalities. We illustrate our techniques with applications to Maxwell's equations, the linearized Euler equations, as well as arbitrary 2 x 2 systems in (2 + 1) dimensions.

    In paper V, we use the method of Sturm sequences for bounding the real parts of roots of polynomials, to construct an automatic method for checking Petrowsky well-posedness of a general Cauchy problem. We prove that this method can be adapted to automatically symmetrize any well-posed problem, producing an energy estimate involving only local quantities.

    Download full text (pdf)
    FULLTEXT01
  • 14.
    Appelö, Daniel
    et al.
    KTH, School of Computer Science and Communication (CSC), Numerical Analysis and Computer Science, NADA.
    Hagstrom, Thomas
    Department of Mathematics and Statistics, University of New Mexico.
    Kreiss, Gunilla
    KTH, School of Computer Science and Communication (CSC), Numerical Analysis and Computer Science, NADA.
    Perfectly matched layers for hyperbolic systems: General formulation, well-posedness and stability (2006). In: SIAM Journal on Applied Mathematics, ISSN 0036-1399, E-ISSN 1095-712X, Vol. 67, no 1, p. 1-23. Article in journal (Refereed)
    Abstract [en]

    Since its introduction the perfectly matched layer (PML) has proven to be an accurate and robust method for domain truncation in computational electromagnetics. However, the mathematical analysis of PMLs has been limited to special cases. In particular, the basic question of whether or not a stable PML exists for arbitrary wave propagation problems remains unanswered. In this work we develop general tools for constructing PMLs for first order hyperbolic systems. We present a model with many parameters, which is applicable to all hyperbolic systems and which we prove is well-posed and perfectly matched. We also introduce an automatic method for analyzing the stability of the model and establishing energy inequalities. We illustrate our techniques with applications to Maxwell's equations, the linearized Euler equations, and arbitrary 2 x 2 systems in (2 + 1) dimensions.

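For orientation, the classical modal view of a PML that the paper generalizes can be stated briefly (this is the standard frequency-domain construction, not the paper's parametrized formulation). For a constant-coefficient hyperbolic system

\[
u_t + A\,u_x + B\,u_y = 0,
\]

a layer in the x-direction replaces, in the frequency domain,

\[
\partial_x \;\longmapsto\; \Bigl(1 + \frac{\sigma(x)}{i\omega}\Bigr)^{-1}\partial_x,
\qquad \sigma \ge 0,\quad \sigma = 0 \text{ outside the layer},
\]

so a plane-wave mode \(\hat{u}\,e^{i(\omega t - k_x x - k_y y)}\) acquires the factor \(\exp\bigl(-\tfrac{k_x}{\omega}\int^x \sigma\,ds\bigr)\) inside the layer while matching the interior solution exactly at the interface where \(\sigma = 0\). Modes whose phase and group velocities in x have opposite signs are amplified rather than damped by this factor, which is the source of the stability problems that the general parametrized layer and the automatic stability analysis of the paper are designed to address.
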
  • 15.
    Appelö, Daniel
    et al.
    KTH, School of Computer Science and Communication (CSC), Numerical Analysis and Computer Science, NADA.
    Kreiss, Gunilla
    KTH, School of Computer Science and Communication (CSC), Numerical Analysis and Computer Science, NADA.
    A New Absorbing Layer for Elastic Waves (2006). In: Journal of Computational Physics, ISSN 0021-9991, E-ISSN 1090-2716, Vol. 215, no 2, p. 642-660. Article in journal (Refereed)
    Abstract [en]

    A new perfectly matched layer (PML) for the simulation of elastic waves in anisotropic media on an unbounded domain is constructed. Theoretical and numerical results, showing that the stability properties of the present layer are better than previously suggested layers, are presented. In addition, the layer can be formulated with fewer auxiliary variables than the split-field PML.

  • 16.
    Arnborg, Stefan
    KTH, School of Computer Science and Communication (CSC), Numerical Analysis and Computer Science, NADA.
    A Survey of Bayesian Data Mining (2003). In: Data Mining: Opportunities and Challenges, chapter 1 / [ed] J Wang, Idea Group Publishing, 2003. Chapter in book (Refereed)
    Abstract [en]

    Data Mining: Opportunities and Challenges presents an overview of the state of the art approaches in this new and multidisciplinary field of data mining. The primary objective of this book is to explore the myriad issues regarding data mining, specifically focusing on those areas that explore new methodologies or examine case studies. This book contains numerous chapters written by an international team of forty-four experts representing leading scientists and talented young scholars from seven different countries.

  • 17.
    Arnborg, Stefan
    et al.
    KTH, School of Computer Science and Communication (CSC), Numerical Analysis and Computer Science, NADA.
    Agartz, Ingrid
    Hall, Håkan
    Jönsson, Erik
    Sillen, Anna
    Sedvall, Göran
    Data Mining in Schizophrenia Research - preliminary analysis (2002). Conference paper (Refereed)
    Abstract [en]

    We describe methods used and some results in a study of schizophrenia in a population of affected and unaffected participants, called patients and controls. The subjects are characterized by diagnosis, genotype, brain anatomy (MRI), laboratory tests on blood samples, and basic demographic data. The long term goal is to identify the causal chains of processes leading to disease. We describe a number of preliminary findings, which confirm earlier results on deviations of brain tissue volumes in schizophrenia patients, and also indicate new effects that are presently under further investigation. More importantly, we discuss a number of issues in selection of methods from the very large set of tools in data mining and statistics.

  • 18.
    Arnborg, Stefan
    et al.
    KTH, School of Computer Science and Communication (CSC), Numerical Analysis and Computer Science, NADA.
    Sjödin, G
    Bayes rules! (2000). Conference paper (Refereed)
    Abstract [en]

    Of the many justifications of Bayesianism, most imply some assumption that is not very compelling, like the differentiability or continuity of some auxiliary function. We show how such assumptions can be replaced by weaker assumptions for finite domains. The new assumptions are a non-informative refinement principle and a concept of information independence. These assumptions are weaker than those used in alternative justifications, which is shown by their inadequacy for infinite domains. They are also more compelling. The normative claim of Bayesianism is that every type of uncertainty should be described as probability. Bayesianism has been quite controversial in both the statistics and the uncertainty management communities. It developed as subjective Bayesianism, in [5, 11]. Recently, the information based family of justifications, initiated in [3] and continued in [1], has been discussed in [12, 6, 13]. We will try to find assumptions that are strong enough to s...

  • 19.
    Arnborg, Stefan
    et al.
    KTH, School of Computer Science and Communication (CSC), Numerical Analysis and Computer Science, NADA.
    Sjödin, G
    On the foundations of Bayesianism (2000). Conference paper (Refereed)
    Abstract [en]

    We discuss precise assumptions entailing Bayesianism in the line of investigations started by Cox, and relate them to a recent critique by Halpern. We show that every finite model which cannot be rescaled to probability violates a natural and simple refinability principle. A new condition, separability, was found sufficient and necessary for rescalability of infinite models. We finally characterize the acceptable ways to handle uncertainty in infinite models based on Cox's assumptions. Certain closure properties must be assumed before all the axioms of ordered fields are satisfied. Once this is done, a proper plausibility model can be embedded in an ordered field containing the reals, namely either standard probability (field of reals) for a real-valued plausibility model, or extended probability (field of reals and infinitesimals) for an ordered plausibility model. The end result is that if our assumptions are accepted, all reasonable uncertainty management schemes must be based on sets of extended probability distributions and Bayes conditioning.

  • 20.
    Aroyo, Lora
    et al.
    Technische Universiteit Eindhoven.
    Dolog, Peter
    University of Hannover.
    Houben, Geert-Jan
    Vrije Universiteit Brussel.
    Kravcik, Milos
    OTEC, Open University, The Netherlands.
    Naeve, Ambjörn
    KTH, School of Computer Science and Communication (CSC), Numerical Analysis and Computer Science, NADA.
    Nilsson, Mikael
    KTH, School of Computer Science and Communication (CSC), Centres, Center for Useroriented IT Design, CID.
    Wild, Fridolin
    Vienna University of Economics, and Business Administration.
    Interoperability in personalized adaptive learning (2006). In: Educational Technology & Society, ISSN 1176-3647, E-ISSN 1436-4522, Vol. 9, no 2, p. 4-18. Article in journal (Refereed)
    Abstract [en]

    Personalized adaptive learning requires semantic-based and context-aware systems to manage the Web knowledge efficiently as well as to achieve semantic interoperability between heterogeneous information resources and services. The technological and conceptual differences can be bridged either by means of standards or via approaches based on the Semantic Web. This article deals with the issue of semantic interoperability of educational contents on the Web by considering the integration of learning standards, Semantic Web, and adaptive technologies to meet the requirements of learners. The state of the art and the main challenges in this field are discussed, including metadata access and design issues relating to adaptive learning. Additionally, a way to integrate several original approaches is proposed.

  • 21. Arsenlis, Athanasios
    et al.
    Cai, Wei
    Tang, Meijie
    Rhee, Moono
    Oppelstrup, Tomas
    KTH, School of Computer Science and Communication (CSC), Numerical Analysis and Computer Science, NADA.
    Hommes, Gregg
    Pierce, Tom G.
    Bulatov, Vasily V.
    Enabling strain hardening simulations with dislocation dynamics (2007). In: Modelling and Simulation in Materials Science and Engineering, ISSN 0965-0393, E-ISSN 1361-651X, Vol. 15, p. 553-595. Article in journal (Refereed)
    Abstract [en]

    Numerical algorithms for discrete dislocation dynamics simulations are investigated for the purpose of enabling strain hardening simulations of single crystals on massively parallel computers. The algorithms investigated include the O(N) calculation of forces, the equations of motion, time integration, adaptive mesh refinement, the treatment of dislocation core reactions and the dynamic distribution of data and work on parallel computers. A simulation integrating all these algorithmic elements using the Parallel Dislocation Simulator (ParaDiS) code is performed to understand their behaviour in concert and to evaluate the overall numerical performance of dislocation dynamics simulations and their ability to accumulate percent of plastic strain.

  • 22.
    Artman, Henrik
    et al.
    KTH, School of Computer Science and Communication (CSC), Numerical Analysis and Computer Science, NADA.
    Markensten, Erik
    KTH, School of Computer Science and Communication (CSC), Numerical Analysis and Computer Science, NADA.
    Comparative Analysis of Usability Design in E-service Procurement (2004). In: Proceedings of Ecommerce, 2004, p. 345-350. Conference paper (Other academic)
    Abstract [en]

    This paper analyzes three case studies of e-service procurement. The comparative analysis focuses on contract conditions, design scope, steer group, process and results. The cases are set up differently, but the design scope was similar and focused on user centered design and business requirements. The cases departed more from a business perspective on the organizational objectives of the e-service than from an orthodox usability and user interface design perspective. The cases also differ regarding steer-group organization and work process. The first case study had only one person serving both as project leader and steer group, while the other cases had a group of persons representing a large part of the organization, as well as usability professionals that defined the system requirements. As for the process, the former project worked in a more ad hoc oriented way, taking care of problems as they appeared, while the latter worked with a structured user centered design methodology with a strong focus on tracing the business goals. The results of the projects were two heavily delayed projects and one that was completed on time. The results suggest that it is important to define requirements concretely by making prototypes as part of the systems acquisition, and that the procurement steer group should be active and engaged throughout the project. A main conclusion is that industrial procurement projects should learn from other design disciplines such as architecture, industrial design and movie production, in making sketches, blueprints and pre-production as deliverables from the systems acquisition, in order to make the systems development more focused, productive and effective.

  • 23.
    Artman, Henrik
    et al.
    KTH, School of Computer Science and Communication (CSC), Numerical Analysis and Computer Science, NADA.
    Ramberg, Robert
    SU, DSV.
    Sundholm, Hillevi
    SU, DSV.
    Cerratto-Pargman, Teresa
    SU, DSV.
    Action Context and Target Context Representations: A Case Study on Collaborative Design Learning (2005). In: CSCL 2005: Computer Supported Collaborative Learning 2005: The Next 10 Years, Proceedings / [ed] Koschmann, T, Lawrence Erlbaum Associates, 2005, p. 1-7. Conference paper (Refereed)
    Abstract [en]

    This paper focuses on the concept of representations produced in the context of collaborative design, more specifically on the interplay between collaborative creation of sketches (design proposals), and argumentation and negotiation processes taking place in the design activity. The question raised in this paper is how sketches produced during a design session reflect and mediate dialogues and argumentation in the design activity and how the sketches feed into an envisioned use context or vice versa. The concepts of action context and target context representations are introduced and used to illustrate shifts of focus during a design session. We have studied a group of students working on a design task in an interactive space for two weeks. The purpose of the study was to investigate how an environment meant to support collaborative work and learning supports collaborative and creative learning of interaction design. The results indicate that students attending a course on interaction design did not pay enough attention to target representations. Furthermore, the results suggest that "action context representations" to a large extent occupy student activities as a result of either complex technology or of the students' thrust to do something instrumental. We suggest that pedagogical programs for collaborative learning of design may relieve some of the mapping, or interplay, of design proposals and the target context representation.

  • 24.
    Artman, Henrik
    et al.
    KTH, School of Computer Science and Communication (CSC), Numerical Analysis and Computer Science, NADA.
    Zällh, Susanne
    KTH, School of Computer Science and Communication (CSC), Numerical Analysis and Computer Science, NADA.
    Finding a way to Usability: Procurement of a taxi dispatch system (2005). In: Cognition, Technology & Work, ISSN 1435-5558, E-ISSN 1435-5566, Vol. 7, no 3, p. 141-155. Article in journal (Refereed)
    Abstract [en]

    Despite the extensive work on human-computer interaction regarding methods of involving users and designing for high degrees of usability, there is surprisingly little published on how procurer organizations understand, reason about, and require usability. This study focuses on how one taxi company dealt with usability requirements when procuring a new dispatch system. We have conducted ten interviews with various stakeholders in the company and analyzed related documentation in order to discover the process. The case shows how the concept of usability matured over time. The taxi company dealt with requirement elicitation by developing prototypes in small reference groups. They did no formal analysis of the operators' cooperation with each other at the operator central, but they did include experienced users, which created implicit scenarios. The supplier company did not focus on the efficiency of the operators or, for that matter, the cooperative demands of the operator central in their original design, which became evident when the procurer organization requested a redesign that emphasized user tasks. This indicates, on one hand, the extent to which procurers must understand usability and cooperation to procure good systems design and, on the other hand, the extent to which designers must understand business and activity processes in order to design good systems.

  • 25.
    Arvestad, Lars
    KTH, School of Computer Science and Communication (CSC), Numerical Analysis and Computer Science, NADA.
    Efficient methods for estimating amino acid replacement rates (2006). In: Journal of Molecular Evolution, ISSN 0022-2844, E-ISSN 1432-1432, Vol. 62, no 6, p. 663-673. Article in journal (Refereed)
    Abstract [en]

    Replacement rate matrices describe the process of evolution at one position in a protein and are used in many applications where proteins are studied with an evolutionary perspective. Several general matrices have been suggested and have proved to be good approximations of the real process. However, there are data for which general matrices are inappropriate, for example, special protein families, certain lineages in the tree of life, or particular parts of proteins. Analysis of such data could benefit from adaption of a data-specific rate matrix. This paper suggests two new methods for estimating replacement rate matrices from independent pairwise protein sequence alignments and also carefully studies Muller-Vingron's resolvent method. Comprehensive tests on synthetic datasets show that both new methods perform better than the resolvent method in a variety of settings. The best method is furthermore demonstrated to be robust on small datasets as well as practical on very large datasets of real data. Neither short nor divergent sequence pairs have to be discarded, making the method economical with data. A generalization to multialignment data is suggested and used in a test on protein-domain family phylogenies, where it is shown that the method offers family-specific rate matrices that often have a significantly better likelihood than a general matrix.

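A minimal numerical sketch (Python) of the quantity such estimators are built on: the likelihood of a pairwise alignment under a reversible rate matrix Q, using replacement probabilities P(t) = exp(Qt). The three-letter alphabet, rate matrix and alignment are invented for illustration; the paper's estimators (and the resolvent method they are compared against) are not reproduced.

```python
import numpy as np
from scipy.linalg import expm

# Toy 3-letter alphabet with stationary distribution pi and a reversible
# rate matrix Q (rows sum to zero). Purely illustrative values.
pi = np.array([0.5, 0.3, 0.2])
S = np.array([[0.0, 1.0, 2.0],
              [1.0, 0.0, 1.5],
              [2.0, 1.5, 0.0]])          # symmetric exchangeabilities
Q = S * pi[None, :]
np.fill_diagonal(Q, -Q.sum(axis=1))

def pair_log_likelihood(pairs, Q, pi, t):
    """log P(alignment | Q, t) for independent aligned sites (a, b)."""
    P = expm(Q * t)                       # replacement probabilities over time t
    return sum(np.log(pi[a] * P[a, b]) for a, b in pairs)

alignment = [(0, 0), (0, 1), (2, 2), (1, 1), (0, 2)]   # aligned site pairs
ts = np.linspace(0.05, 2.0, 40)
ll = [pair_log_likelihood(alignment, Q, pi, t) for t in ts]
print("ML divergence time ~", ts[int(np.argmax(ll))])
```

Estimating Q itself amounts to maximizing such likelihoods jointly over many sequence pairs, each with its own divergence time.
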
  • 26.
    Arvestad, Lars
    et al.
    KTH, School of Computer Science and Communication (CSC), Numerical Analysis and Computer Science, NADA.
    Visa, N.
    Lundeberg, Joakim
    KTH, School of Biotechnology (BIO), Gene Technology.
    Wieslander, L.
    Savolainen, Peter
    KTH, School of Biotechnology (BIO), Gene Technology.
    Expressed sequence tags from the midgut and an epithelial cell line of Chironomus tentans: annotation, bioinformatic classification of unknown transcripts and analysis of expression levels (2005). In: Insect Molecular Biology (Print), ISSN 0962-1075, E-ISSN 1365-2583, Vol. 14, no 6, p. 689-695. Article in journal (Refereed)
    Abstract [en]

    Expressed sequence tags (ESTs) were generated from two Chironomus tentans cDNA libraries, constructed from an embryo epithelial cell line and from larva midgut tissue. 8584 5'-end ESTs were generated and assembled into 3110 tentative unique transcripts, providing the largest contribution of C. tentans sequences to public databases to date. Annotation using BLAST gave 1975 (63.5%) transcripts with a significant match in the major gene/protein databases, 1170 with a best match to Anopheles gambiae and 480 to Drosophila melanogaster. 1091 transcripts (35.1%) had no match to any database. Studies of open reading frames suggest that at least 323 of these contain a coding sequence, indicating that a large proportion of the genes in C. tentans belong to previously unknown gene families.

  • 27.
    Avramidis, Stefanos
    KTH, School of Computer Science and Communication (CSC), Numerical Analysis and Computer Science, NADA.
    Simulation and parameter estimation of spectrophotometric instruments (2009). Independent thesis, Advanced level (degree of Master (Two Years)), 20 credits / 30 HE credits. Student thesis
    Abstract [en]

    The paper and the graphics industries use two instruments with different optical geometry (d/0 and 45/0) to measure the quality of paper prints. The instruments have been reported to yield incompatible measurements and even rank samples differently in some cases, causing communication problems between these sectors of industry.

    A preliminary investigation concluded that the inter-instrument difference could be significantly influenced by external factors (background, calibration, heterogeneity of the medium). A simple methodology for eliminating these external factors and thereby minimizing the instrument differences has been derived. The measurements showed that, when the external factors are eliminated, and there is no fluorescence or gloss influence, the inter-instrument difference becomes small, depends on the instrument geometry, and varies systematically with the scattering, absorption, and transmittance properties of the sample.

    A detailed description of the impact of the geometry on the results has been presented regarding a large sample range. Simulations with the radiative transfer model DORT2002 showed that the instruments' measurements follow the physical radiative transfer model except in cases of samples with extreme properties. The conclusion is that the physical explanation of the geometrical inter-instrument differences is based on the different degree of light permeation from the two geometries, which eventually results in a different degree of influence from near-surface bulk scattering. It was also shown that the d/0 instrument fulfils the assumptions of a diffuse field of reflected light from the medium only for samples that resemble the perfect diffuser, but it yields an anisotropic field of reflected light when there is significant absorption or transmittance. In the latter case, the 45/0 proves to be less anisotropic than the d/0.

    In the process, the computational performance of DORT2002 has been significantly improved. After the modification of DORT2002 in order to include the 45/0 geometry, the Gauss-Newton optimization algorithm for the solution of the inverse problem was qualified as the most appropriate one, after testing different optimization methods for performance, stability and accuracy. Finally, a new homotopic initial-value algorithm for routine tasks (spectral calculations) was introduced, which resulted in a further three-fold speedup of the whole algorithm.

    Download full text (pdf)
    FULLTEXT01
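
A generic Gauss-Newton iteration (Python) of the kind the thesis selects for the inverse problem: adjust model parameters until simulated reflectance matches measurements in the least-squares sense. The forward model below is a stand-in toy function, not DORT2002, and the finite-difference Jacobian is only for brevity.

```python
import numpy as np

def gauss_newton(residual, x0, tol=1e-10, max_iter=50, eps=1e-6):
    """Gauss-Newton for nonlinear least squares: minimize ||residual(x)||^2."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        r = residual(x)
        J = np.empty((r.size, x.size))
        for j in range(x.size):                  # finite-difference Jacobian column
            xp = x.copy()
            xp[j] += eps
            J[:, j] = (residual(xp) - r) / eps
        step, *_ = np.linalg.lstsq(J, -r, rcond=None)
        x = x + step
        if np.linalg.norm(step) < tol:
            break
    return x

# Stand-in forward model: "reflectance" at a few wavelengths as a function of
# two parameters (think scattering and absorption coefficients).
lam = np.linspace(0.0, 1.0, 8)
def forward(p):
    s, k = p
    return s * np.exp(-k * lam) / (1.0 + k)

measured = forward([2.0, 0.7]) + 0.001 * np.sin(7 * lam)   # synthetic data
print(gauss_newton(lambda p: forward(p) - measured, x0=[1.0, 0.1]))
```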
  • 28. Babiuc, M. C.
    et al.
    Kreiss, Heinz-Otto
    KTH, School of Computer Science and Communication (CSC), Numerical Analysis and Computer Science, NADA.
    Winicour, Jeffrey
    Constraint-preserving Sommerfeld conditions for the harmonic Einstein equations (2007). In: Physical Review D, ISSN 1550-7998, Vol. 75, no 4, 044002. Article in journal (Refereed)
    Abstract [en]

    The principal part of the Einstein equations in the harmonic gauge consists of a constrained system of 10 curved space wave equations for the components of the space-time metric. A new formulation of constraint-preserving boundary conditions of the Sommerfeld type for such systems has recently been proposed. We implement these boundary conditions in a nonlinear 3D evolution code and test their accuracy.

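For orientation only (the scalar analogue, not the constraint-preserving conditions of the paper, which act on the harmonic-gauge metric components), a Sommerfeld-type boundary condition for an outgoing wave field u on a sphere of radius r = R reads

\[
\Bigl(\partial_t + \partial_r + \frac{1}{r}\Bigr)\,u\,\Big|_{r=R} = 0,
\]

which is exact for the leading u ~ f(t - r)/r behaviour of outgoing solutions of the flat-space wave equation and therefore absorbs such waves without reflection.
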
  • 29. Babuska, Ivo
    et al.
    Nobile, Fabio
    Tempone, Raul
    KTH, School of Computer Science and Communication (CSC), Numerical Analysis and Computer Science, NADA.
    A stochastic collocation method for elliptic partial differential equations with random input data (2007). In: SIAM Journal on Numerical Analysis, ISSN 0036-1429, E-ISSN 1095-7170, Vol. 45, no 3, p. 1005-1034. Article in journal (Refereed)
    Abstract [en]

    In this paper we propose and analyze a stochastic collocation method to solve elliptic partial differential equations with random coefficients and forcing terms (input data of the model). The input data are assumed to depend on a finite number of random variables. The method consists in a Galerkin approximation in space and a collocation in the zeros of suitable tensor product orthogonal polynomials (Gauss points) in the probability space and naturally leads to the solution of uncoupled deterministic problems as in the Monte Carlo approach. It can be seen as a generalization of the stochastic Galerkin method proposed in [I. Babuska, R. Tempone, and G. E. Zouraris, SIAM J. Numer. Anal., 42 (2004), pp. 800-825] and allows one to treat easily a wider range of situations, such as input data that depend nonlinearly on the random variables, diffusivity coefficients with unbounded second moments, and random variables that are correlated or even unbounded. We provide a rigorous convergence analysis and demonstrate exponential convergence of the probability error with respect to the number of Gauss points in each direction in the probability space, under some regularity assumptions on the random input data. Numerical examples show the effectiveness of the method.

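A minimal 1D sketch (Python) of the collocation idea: solve the deterministic problem independently at Gauss quadrature points in the stochastic variable and recover statistics by quadrature. A single uniform random variable, a finite-difference solver and a toy coefficient are illustrative simplifications; the paper treats tensor-product Gauss points in several random variables with a Galerkin discretization in space.

```python
import numpy as np

def solve_elliptic(a_of_x, f_of_x, n=200):
    """Finite-difference solve of -(a u')' = f on (0,1) with u(0) = u(1) = 0."""
    x = np.linspace(0.0, 1.0, n + 1)
    h = x[1] - x[0]
    a_mid = a_of_x(0.5 * (x[:-1] + x[1:]))            # coefficient at cell midpoints
    main = (a_mid[:-1] + a_mid[1:]) / h**2
    off = -a_mid[1:-1] / h**2
    A = np.diag(main) + np.diag(off, 1) + np.diag(off, -1)
    u_inner = np.linalg.solve(A, f_of_x(x[1:-1]))
    return x, np.concatenate(([0.0], u_inner, [0.0]))

# Random diffusivity a(x, y) = 1 + 0.5*y with y ~ Uniform(-1, 1):
# collocate at Gauss-Legendre points and average with the quadrature weights.
nodes, weights = np.polynomial.legendre.leggauss(5)
f = lambda x: np.ones_like(x)
mean_u = 0.0
for y, w in zip(nodes, weights):
    x, u = solve_elliptic(lambda s: (1.0 + 0.5 * y) * np.ones_like(s), f)
    mean_u = mean_u + 0.5 * w * u          # 0.5 is the density of Uniform(-1, 1)
print("E[u](0.5) ~", mean_u[len(x) // 2])
```

Each collocation point yields an uncoupled deterministic solve, exactly as in Monte Carlo, but the Gauss weights give spectral accuracy in the random variable when the solution depends smoothly on it.
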
  • 30. Bannon, L.
    et al.
    Benford, S.
    Bowers, John
    KTH, School of Computer Science and Communication (CSC), Numerical Analysis and Computer Science, NADA.
    Heath, C.
    Hybrid design creates innovative museum experiences (2005). In: Communications of the ACM, ISSN 0001-0782, E-ISSN 1557-7317, Vol. 48, no 3, p. 62-65. Article in journal (Refereed)
    Abstract [en]

    Museums which rely on simple text panels for providing information to visitors about museum artifacts are discussed. The study involved extensive fieldwork, audio-visual recording, interviews and discussion with curators, museum educators and exhibit designers. The radio frequency identification (RFID)-tagged paper enabled visitors to assemble a coherent experience from their interactions with different installations. It is suggested that ubiquitous technologies should be assembled and combined with other media to form interactive and collaborative systems in the museum.

  • 31. Becker, Lee B.
    et al.
    Hollifield, C. Ann
    Jacobsson, Adam
    Jacobsson, Eva-Maria
    KTH, School of Computer Science and Communication (CSC), Numerical Analysis and Computer Science, NADA.
    Vlad, Tudor
    IS MORE ALWAYS BETTER? Examining the adverse effects of competition on media performance (2009). In: Journalism Studies, ISSN 1461-670X, E-ISSN 1469-9699, Vol. 10, no 3, p. 368-385. Article in journal (Refereed)
    Abstract [en]

    While classic market economic theory argues that competition among media is better for consumers, preliminary research in emerging media markets suggests otherwise. High levels of competition in markets with limited advertising revenues may lead to poorer journalistic performance. This study tests that argument using secondary analysis of data from a purposive sample of countries where measures of news media performance and market competition exist. The authors find a curvilinear relationship between competition and the quality of the journalistic product, with moderate competition leading to higher-quality journalism products and higher levels of competition leading to journalistic products that do not serve society well. The implications of the findings for media assistance initiatives are discussed.

  • 32.
    Bergholm, Fredrik
    et al.
    KTH, School of Computer Science and Communication (CSC), Numerical Analysis and Computer Science, NADA.
    Adler, Jeremy
    Parmryd, Ingela
    Analysis of Bias in the Apparent Correlation Coefficient Between Image Pairs Corrupted by Severe Noise2010In: Journal of Mathematical Imaging and Vision, ISSN 0924-9907, E-ISSN 1573-7683, Vol. 37, no 3, p. 204-219Article in journal (Refereed)
    Abstract [en]

    The correlation coefficient r is a measure of similarity used to compare regions of interest in image pairs. In fluorescence microscopy there is a basic tradeoff between the degree of image noise and the frequency with which images can be acquired, and therefore the ability to follow dynamic events. The correlation coefficient r is commonly used in fluorescence microscopy for colocalization measurements, when the relative distributions of two fluorophores are of interest. Unfortunately, r is known to be biased, understating the true correlation when noise is present. A better measure of correlation is needed. This article analyses the expected value of r and arrives at expected-value formulas, a procedure for evaluating the bias of r. A Taylor series of so-called invariant factors is analyzed in detail. These formulas indicate ways to correct r and thereby obtain a corrected value, free from the influence of noise, that is on average accurate (unbiased). One possible correction is the attenuation-corrected correlation coefficient R, introduced heuristically by Spearman (in Am. J. Psychol. 15:72-101, 1904). An ideal correction formula in terms of expected values is derived. For large samples R tends towards the ideal correction formula and the true noise-free correlation. Correlation measurements using simulations based on the types of noise found in fluorescence microscopy images illustrate both the power of the method and the variance of R. We conclude that the correction formula is valid and is particularly useful for making correct analyses from very noisy datasets.
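
    A small simulation (an illustrative sketch, not the authors' code or their expected-value formulas; the signal model, noise level and the use of two replicate acquisitions per channel are assumptions made here) shows the bias of r under heavy noise and how Spearman's attenuation correction R = r_xy / sqrt(r_xx * r_yy), computed from replicate measurements, recovers the noise-free correlation on average:

        import numpy as np

        rng = np.random.default_rng(0)
        n = 10_000                      # number of pixels in the region of interest

        # True, noise-free signals with a known correlation of about 0.8.
        true_x = rng.normal(size=n)
        true_y = 0.8 * true_x + 0.6 * rng.normal(size=n)

        def noisy(signal, sigma):
            return signal + rng.normal(scale=sigma, size=signal.shape)

        def corr(a, b):
            return np.corrcoef(a, b)[0, 1]

        # Two independent acquisitions (replicates) of each channel, heavy noise.
        x1, x2 = noisy(true_x, 2.0), noisy(true_x, 2.0)
        y1, y2 = noisy(true_y, 2.0), noisy(true_y, 2.0)

        r_naive = corr(x1, y1)                   # biased towards zero by the noise
        r_xx, r_yy = corr(x1, x2), corr(y1, y2)  # reliability of each channel
        R = r_naive / np.sqrt(r_xx * r_yy)       # attenuation-corrected estimate

        print(f"true r      = {corr(true_x, true_y):.3f}")
        print(f"naive r     = {r_naive:.3f}")
        print(f"corrected R = {R:.3f}")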

  • 33. Bergström, L.
    et al.
    Edlund, C.
    KTH, School of Computer Science and Communication (CSC), Numerical Analysis and Computer Science, NADA. Stockholm University, Sweden.
    Fairbairn, M.
    Järemo, Anna Karin
    KTH, School of Computer Science and Communication (CSC), Numerical Analysis and Computer Science, NADA.
    Kreiss, Gunilla
    KTH, School of Computer Science and Communication (CSC), Numerical Analysis and Computer Science, NADA.
    Pieri, L.
    Signals of WIMP annihilation into electrons at the galactic center2005In: Proceedings of the 29th International Cosmic Ray Conference, Vol 4: OG 2.1, 2.2 & 2.3, Tata Institute of Fundamental Research, 2005, p. 57-60Conference paper (Refereed)
    Abstract [en]

    Photons from the annihilation of dark matter in the center of our Galaxy are expected to provide a promising way to determine the nature and distribution of the dark matter itself. These photons can be produced directly, through successive decays of annihilation products, or radiated from electrons and positrons. This results in multi-wavelength photon production whose expected intensity can be compared to observational data. Assuming that the Lightest Supersymmetric Particle makes up the dark matter, we derive the expected photon signal for a given dark matter model and compare it with presently available data.

  • 34.
    Besong, Donald O.
    KTH, School of Computer Science and Communication (CSC), Numerical Analysis and Computer Science, NADA.
    Derivation of a segregation-mixing equation for particles in a fluid medium2009In: Applied mathematics and mechanics, ISSN 0253-4827, E-ISSN 1573-2754, Vol. 30, no 6, p. 765-770Article in journal (Refereed)
    Abstract [en]

    The main purpose of this work is to show that the gravity term of the segregation-mixing equation for fine mono-disperse particles in a fluid can be derived from first principles (i.e., elementary physics). Our derivation of the gravity-driven flux of particles leads to the simplest case of the Richardson and Zaki correlation. The Stokes velocity also appears naturally from the physical parameters of the particles and fluid in the course of the derivation. Such a derivation from first-principles physics has not been presented before. It is applicable at small concentrations of fine particles.
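
    For orientation, the standard textbook expressions involved (general results quoted here for reference, not the paper's own derivation or notation) are the Stokes settling velocity of a small sphere and the Richardson and Zaki hindered-settling law:

        v_s = \frac{2\,(\rho_p - \rho_f)\,g\,r^2}{9\,\mu},
        \qquad
        u(c) = v_s\,(1 - c)^{n},
        \qquad
        J_{\mathrm{grav}} = c\,u(c) = c\,v_s\,(1 - c)^{n}

    with particle radius r, particle and fluid densities \rho_p and \rho_f, dynamic viscosity \mu, particle volume fraction c, and exponent n approximately 4.65 in the Stokes (creeping-flow) regime; in the dilute limit c -> 0 the gravity-driven flux reduces to c v_s.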

  • 35.
    Bigert, Johnny
    et al.
    KTH, School of Computer Science and Communication (CSC), Numerical Analysis and Computer Science, NADA.
    Sjöbergh, Jonas
    KTH, School of Computer Science and Communication (CSC), Numerical Analysis and Computer Science, NADA.
    Knutsson, Ola
    KTH, School of Computer Science and Communication (CSC), Numerical Analysis and Computer Science, NADA.
    Sahlgren, M.
    Unsupervised evaluation of parser robustness2005In: COMPUTATIONAL LINGUISTICS AND INTELLIGENT TEXT PROCESSING / [ed] Gelbukh, A, 2005, Vol. 3406, p. 142-154Conference paper (Refereed)
    Abstract [en]

    This article describes an automatic evaluation procedure for NLP system robustness under the strain of noisy and ill-formed input. The procedure requires no manual work or annotated resources. It is language and annotation scheme independent and produces reliable estimates of the robustness of NLP systems. The only requirement is an estimate of the NLP system's accuracy. The procedure was applied to five parsers and one part-of-speech tagger on Swedish text. To establish the reliability of the procedure, a comparative evaluation involving annotated resources was carried out on the tagger and three of the parsers.
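
    The idea can be sketched in a few lines of Python (an illustration in the spirit of the procedure, not the authors' implementation; `toy_tagger`, the noise model and the example sentence are stand-ins invented here, and a real evaluation would call the parser or tagger under test and use its estimated accuracy): noise is injected at increasing rates and the system's output on degraded text is compared with its output on the clean text.

        import random

        random.seed(1)
        ALPHABET = "abcdefghijklmnopqrstuvwxyz"

        def inject_noise(text, rate):
            # Replace each alphabetic character with a random one with probability `rate`.
            return "".join(random.choice(ALPHABET) if c.isalpha() and random.random() < rate else c
                           for c in text)

        def toy_tagger(text):
            # Stand-in for the NLP system under test: one crude 'tag' per token.
            return ["N" if tok.endswith("s") else "V" if tok.endswith("ed") else "X"
                    for tok in text.split()]

        def agreement(tags_a, tags_b):
            same = sum(a == b for a, b in zip(tags_a, tags_b))
            return same / max(len(tags_a), 1)

        text = "the parsers handled noisy and ill formed sentences surprisingly well"
        clean = toy_tagger(text)
        for rate in (0.02, 0.05, 0.10, 0.20):
            degraded = toy_tagger(inject_noise(text, rate))
            print(f"noise rate {rate:.2f}: agreement with clean output {agreement(clean, degraded):.2f}")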

  • 36. Birin, Hadas
    et al.
    Gal-Or, Zohar
    Elias, Isaac
    KTH, School of Computer Science and Communication (CSC), Numerical Analysis and Computer Science, NADA.
    Tuller, Tamir
    Inferring models of rearrangements, recombinations, and horizontal transfers by the minimum evolution criterion2007In: Algorithms in Bioinformatics, Proceedings / [ed] Giancarlo, R; Hannenhalli, S, 2007, Vol. 4645, p. 111-123Conference paper (Refereed)
    Abstract [en]

    The evolution of viruses is very rapid and, in addition to local point mutations (insertions, deletions, substitutions), it also includes frequent recombinations, genome rearrangements, and horizontal transfer of genetic material. Evolutionary analysis of viral sequences is therefore a complicated matter for two main reasons: first, due to horizontal gene transfers (HGTs) and recombinations, the right model of evolution is a network and not a tree; second, due to genome rearrangements, an alignment of the input sequences is not guaranteed. Since contemporary methods for inferring phylogenetic networks require aligned sequences as input, they cannot deal with viral evolution. In this work we present the first computational approach that deals with both genome rearrangements and horizontal gene transfers and does not require a multiple alignment as input. We formalize a new set of computational problems involved in analyzing such complex models of evolution, investigate their computational complexity, and devise algorithms for solving them. Moreover, we demonstrate the viability of our methods on several synthetic datasets as well as biological datasets.

  • 37.
    Bjurstedt, Anders
    KTH, School of Computer Science and Communication (CSC), Numerical Analysis and Computer Science, NADA.
    Converging technologies in prepress from 1980 to 20032005In: Proceedings of the Technical Association of the Graphic Arts, TAGA, 2005, p. 238-257Conference paper (Refereed)
    Abstract [en]

    The author suggests that there have been three paradigm shifts during the 20th century. The first came at the turn of the 19th and 20th centuries, when the first modern typesetting technology was introduced. The new technology, the first major step since Gutenberg's invention of movable type in the 15th century, became the most important contribution to mass-market circulation of newspapers, magazines, textbooks, books and other publications in the years that followed. A supervening social necessity for change was urgent, and there was no suppression from competing technologies. Previously, newspapers were very thin, because manual typesetting, which was slow and expensive, made it impossible to produce more than a few pages every day. Books and textbooks were expensive to produce, and only a minority of the population could afford to buy them. With the new technology, textbooks became available in large circulations, which, together with school reforms in most industrialized countries, quickly spread knowledge and information among their citizens.

    Line-casting technology remained more or less unchanged for the major part of the 20th century, and only a few technical changes, such as the introduction of punched paper tape after World War II, improved productivity. A major quality concern was the excessive wear of the brass matrices, which made frequent and expensive maintenance necessary. In the beginning of the 1950s, many attempts were made to replace hot metal with other methods, such as phototypesetting. The first attempts were little more than emulations of the line-casting machines, but soon other technologies were introduced. A major step forward came when the first affordable computers reached the market, such as the PDP-8 from DEC in 1964 and later the PDP-11 in 1970. Again, a supervening necessity was created because competition among publishers was extremely hard, but this time there were many forces that wanted to suppress the new technology. The strongest was the traditionally powerful labour unions, in particular those organizing newspaper production on Fleet Street but also in Sweden and Denmark. Their influence started to diminish during the second paradigm shift and was more or less gone a decade later.

    With the arrival of computerized composition systems for newspapers and other publishers, the first step towards the second paradigm shift was taken: the transfer from all-analogue technology for producing text (as hot metal), line work and images to digital technology. Colour separations made on electronic drum scanners became standard procedure during the 1970s. A major breakthrough occurred when Scitex Corp. showed the first colour page make-up system (CEPS), the Response system, which was quickly followed by the other major suppliers, Dr Hell and Crosfield Electronics, both leading suppliers of digital drum scanners. The graphic arts industry went digital. A supervening necessity evolved towards the end of the 1980s, when publishers were looking for cheaper production methods. The systems of the major suppliers were extremely expensive, and there was still no simple way of exchanging digital information between different systems. The law of suppression held up the introduction of the third paradigm shift, but the Apple Mac and Adobe PostScript slowly became the major technologies in the digital age of publishing.

    Today, Apple is still in the market, small but influential in the publishing world, but Adobe, the inventor of PostScript and PDF technology, is the new giant on the world market. Never before has a single company held such a position in the graphic arts industry. Previously, many customers complained about the lack of competition and industry standards in the front-end market; now, however, Adobe has created a de facto world standard with the PDF process, which is also backed by ISO. A new monopoly in front-end technology has been created by default. This is like falling from the frying pan into the fire! Many observers, mostly those outside the graphic arts industry, believe that the next paradigm shift is imminent. The Internet might be the first paradigm shift of the 21st century. The web may have a great impact on the graphic arts industry, but this has yet to be proven.

  • 38.
    Bjurstedt, Anders
    KTH, School of Computer Science and Communication (CSC), Numerical Analysis and Computer Science, NADA.
    Gravure vs. Web-offset!: a changing world in publication printing 1986-20062007Doctoral thesis, comprehensive summary (Other scientific)
    Abstract [en]

    The European publication printing industry and its markets have undergone profound structural changes between 1986 and 2006. This thesis is an investigation of these changes, of how the publication industry has been affected, and of the balance between publication gravure and commercial heat-set web-offset. The publication printing market grew substantially during 1986-2006: the volume of paper increased by a factor of about 2.5, from 5 million tons to 13 million tons. In 1986, gravure was the dominant publication printing technique. Since then, however, web-offset printing has grown substantially, and the process today has a much larger share of the European publication market. This domination is also reflected in the investments in new printing capacity since 2000, of which 70-75% has gone to commercial heat-set web-offset press manufacturers.

    This thesis focuses on the reasons why the balance between the two competing publication printing techniques, gravure and web-offset, changed between 1986 and 2006. It also studies the main driving forces behind the development of these techniques and their related processes, as well as their competitive strengths. Is gravure a printing process suitable only for very large runs, huge volumes and large markets? The changes in the European media market have affected the two major segments of the publication market: magazine and catalogue printing. In the magazine market, print runs in the medium-to-large title segments have decreased, and catalogues have changed from a single, thick catalogue to thinner, more targeted catalogues.

    This thesis is based on two studies. The first, focused on market requirements and techno-economic comparisons of gravure and web-offset in 1985-1986, was carried out by the author as Secretary General of the European Rotogravure Association (ERA); the second, in 2005-2006, investigated the present situation on the European publication markets. The methodologies used in the investigations were questionnaires (the original 1985-86 questionnaires were also used in 2005-2006), surveys, literature studies and a substantial number of interviews with representatives of print buyers (publishers and catalogue producers), printers and all the major suppliers to the industry.

    Given these changes, how can the competitiveness of publication gravure be improved, and what strategies should a publication gravure printer adopt in order to survive in a very competitive European market? With shorter runs in very fast-running gravure presses, the turn-around time in the cylinder-engraving department becomes critical. A Double Ender gravure press for paginations of 16-64 pages, with an alternative of up to 96 pages, where only four cylinders are needed, in combination with high-speed laser engraving of the cylinders, may be the answer.

  • 39.
    Bjurstedt, Anders
    KTH, School of Computer Science and Communication (CSC), Numerical Analysis and Computer Science, NADA.
    The European publication printing industry: an industry in profound changes2005Licentiate thesis, comprehensive summary (Other scientific)
    Download full text (pdf)
    FULLTEXT01
  • 40.
    Björkman, Mårten
    et al.
    KTH, School of Computer Science and Communication (CSC), Numerical Analysis and Computer Science, NADA.
    Eklundh, Jan-Olof
    KTH, School of Computer Science and Communication (CSC), Numerical Analysis and Computer Science, NADA.
    Foveated Figure-Ground Segmentation and Its Role in Recognition2005In: BMVC 2005 - Proceedings of the British Machine Vision Conference 2005 / [ed] William Clocksin, Andrew Fitzgibbon, Philip Torr, British Machine Vision Association, BMVA , 2005, p. 819-828Conference paper (Refereed)
    Abstract [en]

    Figure-ground segmentation and recognition are two interrelated processes. In this paper we present a method for foveated segmentation and evaluate it in the context of a binocular real-time recognition system. Segmentation is solved as a binary labeling problem using priors derived from the results of a simplistic disparity method. Doing so, we are able to cope with situations where the disparity range is very wide, situations that have rarely been considered but appear frequently for narrow-field camera sets. Segmentation and recognition are then integrated into a system able to locate, attend to and recognise objects in typical cluttered indoor scenes. Finally, we try to answer two questions: is recognition really helped by segmentation, and what is the benefit of multiple cues for recognition?

    Download full text (pdf)
    fulltext
  • 41. Björn, Anders
    et al.
    Riesel, Hans
    KTH, School of Computer Science and Communication (CSC), Numerical Analysis and Computer Science, NADA.
    Factors of generalized Fermat numbers (vol 67, pg 441, 1998)2005In: Mathematics of Computation, ISSN 0025-5718, E-ISSN 1088-6842, Vol. 74, no 252, p. 2099-2099Article in journal (Refereed)
    Abstract [en]

    We note that three factors are missing from Table 1 in "Factors of generalized Fermat numbers" by A. Björn and H. Riesel, published in Math. Comp. 67 (1998), 441-446.

  • 42. Björn, Anders
    et al.
    Riesel, Hans
    KTH, School of Computer Science and Communication (CSC), Numerical Analysis and Computer Science, NADA.
    FACTORS OF GENERALIZED FERMAT NUMBERS (vol 67, pg 441, 1998): Table errata 22011In: Mathematics of Computation, ISSN 0025-5718, E-ISSN 1088-6842, Vol. 80, no 275, p. 1865-1866Article in journal (Refereed)
    Abstract [en]

    We note that one more factor is missing from Table 1 in Björn-Riesel, "Factors of generalized Fermat numbers", Math. Comp. 67 (1998), 441-446, in addition to the three already reported in Björn-Riesel, Table errata to "Factors of generalized Fermat numbers", Math. Comp. 74 (2005), p. 2099.

  • 43.
    Blomqvist, Ulf
    KTH, School of Computer Science and Communication (CSC), Numerical Analysis and Computer Science, NADA.
    Mediated peer (to peer) learning2006Licentiate thesis, comprehensive summary (Other scientific)
    Abstract [en]

    Peer learning means learning from and with each other. Collaboration and co-operation in a friendly environment are, however, neither easy nor obvious for students attending university. Different methods and technological solutions can nevertheless be implemented to facilitate and improve peer learning as well as dialogue and reflection.

    The aims of this thesis were to study the implementation and use of innovative methods and technologies and their effects on the learning process in mediated peer learning in higher education, as well as methods for facilitating peer learning through students' individual and group reflection. The aim was also to study end-user involvement in the development processes.

    Dialogue sheets, i.e. large sheets of paper with questions (in this case about learning and reflection) printed around the perimeter to support and guide the dialogue, have been investigated as a medium. Furthermore, the use of peer-to-peer (P2P) technology as a mediator in learning has also been studied. The use of P2P technology in learning can be encapsulated in the expression peer-to-peer learning, hence the title "Mediated peer (to peer) learning". In addition, the development of content-based services in the 3G market has also been studied, introducing a proposed general interpretation of how technology evolution affects the players in a certain market. Dialogue sheets and P2P technology are but two examples of media enhancing peer learning. Many other forms of media can of course enhance peer learning as well, but as computers and the Internet are considered to be the media into which all previous media converge, the thesis starts with the "oldest" medium, paper, and ends with the "newest" medium, the Internet.

    The conclusions of this thesis can be summarised as follows:

    • The future of learning involves various media enhancing the learning experience. The development and evolution of these media should be the result of cooperation and interaction between learners, teachers, and the university. Failing to cooperate can cause serious problems for the universities.

    • By building and maintaining an infrastructure, both analogue and digital, the learning institutions can enable flexible learning, including peer learning, utilising multiple media forms, and also support learners' individual learning styles, i.e. promote the learner-centric approach to learning, as well as increase the need for and appreciation of teachers as guides and mentors.

    • By promoting various forms of mediated learning, including P2P technology solutions, teachers and universities can contribute to defusing P2P in the public debate, since socially unquestionable activities can then be associated with the technology. They also foster students' respect for others' intellectual rights, and can promote alternative copyright schemes, such as Creative Commons.

    Download full text (pdf)
    FULLTEXT01
  • 44.
    Bogdan, Cristian
    et al.
    KTH, School of Computer Science and Communication (CSC), Human - Computer Interaction, MDI.
    Bowers, John
    KTH, School of Computer Science and Communication (CSC), Numerical Analysis and Computer Science, NADA.
    Tuning in: Challenging design for communities through a field study of radio amateurs2007In: Communities and Technologies 2007, 2007, p. 439-461Conference paper (Refereed)
    Abstract [en]

    As illustrated by the emerging field of Communities and Technologies, the topic of community, whether further qualified by ‘virtual’ (Rheingold 1993), ‘on line’ or ‘networked’ (Schuler 1996), has become a major focus for field study, design, technical infrastructural provision, as well as social, psychological and economic theorising. Let us review some early examples of this ‘turn to community’. (1999) discuss the ‘network communities of SeniorNet’, an organisation that supports people over the age of 50 in the use of computer networking technologies. The SeniorNet study highlights the complex ‘collage’ of participation and interaction styles that community members sustain, many of which go beyond conventional understandings of older people, their practices and relations to technology. While the members of SeniorNet are geographically dispersed, (1996) describe the ‘Blacksburg Electronic Village’, a local community computing initiative centred around Blacksburg, Virginia, USA. As long ago as 1994, (1994) claimed the existence of over 100 such projects in the US with very diverse aims and experiences but all concerned to be responsive to a community’s needs while exploiting the Internet and the technical developments it has made possible. For their part, (2001) offer some generic infrastructural tools for community computing, including support for ‘identity management’.

  • 45.
    Boström, Gustav
    KTH, School of Computer Science and Communication (CSC), Numerical Analysis and Computer Science, NADA.
    Simplifying development of secure software: Aspects and Agile methods2006Licentiate thesis, comprehensive summary (Other scientific)
    Abstract [en]

    Reducing the complexity of building secure software systems is an important goal as increased complexity can lead to more security flaws. This thesis aims at helping to reduce this complexity by investigating new programming techniques and software development methods for implementing secure software. We provide case studies on the use and effects of applying Aspect-oriented software development to Confidentiality, Access Control and Quality of Service implementation. We also investigate how eXtreme Programming can be used for simplifying the secure software development process by comparing it to the security engineering standards Common Criteria and the Systems Security Engineering Capability Maturity Model. We also explore the relationship between Aspect-oriented programming and Agile software development methods, such as eXtreme Programming.
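
    As a rough illustration of the aspect-oriented idea applied to access control (a generic sketch, not code or examples from the thesis; the roles, function names and the use of a Python decorator in place of a full AOP weaver are assumptions made here), a cross-cutting security check can be kept out of the business logic and applied declaratively:

        from functools import wraps

        current_user_roles = {"alice": {"admin"}, "bob": {"user"}}

        def requires_role(role):
            # Cross-cutting access-control "advice" applied to any function it wraps.
            def aspect(func):
                @wraps(func)
                def wrapper(user, *args, **kwargs):
                    if role not in current_user_roles.get(user, set()):
                        raise PermissionError(f"{user} lacks role '{role}'")
                    return func(user, *args, **kwargs)
                return wrapper
            return aspect

        @requires_role("admin")
        def delete_record(user, record_id):
            # Core business logic stays free of security checks.
            return f"record {record_id} deleted by {user}"

        if __name__ == "__main__":
            print(delete_record("alice", 42))   # permitted
            try:
                delete_record("bob", 42)        # denied by the access-control aspect
            except PermissionError as e:
                print("denied:", e)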

    Download full text (pdf)
    FULLTEXT01
  • 46.
    Bowers, John
    et al.
    KTH, School of Computer Science and Communication (CSC), Numerical Analysis and Computer Science, NADA.
    Bannon, L.
    Fraser, M.
    Hindmarsh, J.
    Benford, S.
    Heath, C.
    Taxén, Gustav
    KTH, School of Computer Science and Communication (CSC), Numerical Analysis and Computer Science, NADA.
    Ciolfi, L.
    From the disappearing computer to living exhibitions: Shaping interactivity in museum settings2007In: The Disappearing Computer: Interaction Design, System Infrastructures and Applications for Smart Environments / [ed] Norbert Streitz, Achilles Kameas, Irene Mavrommati, Springer, 2007, p. 30-49Chapter in book (Refereed)
  • 47. Brooks, A.
    et al.
    Kaupp, T.
    Makarenko, A.
    Williams, S.
    Orebäck, Anders
    KTH, School of Computer Science and Communication (CSC), Numerical Analysis and Computer Science, NADA.
    Orca: A component model and repository2007In: Software Engineering for Experimental Robotics, Springer, 2007, p. 231-251Conference paper (Refereed)
    Abstract [en]

    This chapter describes Orca: an open-source project which applies Component-Based Software Engineering principles to robotics. It provides the means for defining and implementing interfaces such that components developed independently are likely to be interoperable. In addition, it provides a repository of free reusable components. Orca attempts to be widely applicable by imposing minimal design constraints. The chapter describes lessons learned while using Orca and steps taken to improve the framework based on those lessons. The improvements revolve around middleware issues and problems encountered while scaling to larger distributed systems. Results are presented from systems that were implemented.
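
    The component idea itself is easy to sketch (a generic illustration only; this is not Orca's actual interface definitions, middleware or API, and the `RangeSensor` interface and class names are invented here): components agree on a narrow interface so that independently developed implementations can be swapped without touching their consumers.

        from abc import ABC, abstractmethod
        from typing import List

        class RangeSensor(ABC):
            # Hypothetical interface that a sensor component must provide.
            @abstractmethod
            def read(self) -> List[float]: ...

        class FakeLaser(RangeSensor):
            # One independently developed implementation (here just canned data).
            def read(self) -> List[float]:
                return [1.2, 1.1, 0.9, 1.4]

        class ObstacleDetector:
            # A consumer component that depends only on the interface, not the implementation.
            def __init__(self, sensor: RangeSensor, threshold: float = 1.0):
                self.sensor, self.threshold = sensor, threshold
            def obstacle_ahead(self) -> bool:
                return min(self.sensor.read()) < self.threshold

        if __name__ == "__main__":
            detector = ObstacleDetector(FakeLaser())   # swap in any RangeSensor implementation
            print("obstacle ahead:", detector.obstacle_ahead())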

  • 48. Brooks, A.
    et al.
    Kaupp, T.
    Makarenko, A.
    Williams, S.
    Orebäck, Anders
    KTH, School of Computer Science and Communication (CSC), Numerical Analysis and Computer Science, NADA. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Towards component-based robotics2005In: 2005 IEEE/RSJ International Conference on Intelligent Robots and Systems, IROS, 2005, p. 3567-3572Conference paper (Refereed)
    Abstract [en]

    This paper gives an overview of Component-Based Software Engineering (CBSE), motivates its application to the field of mobile robotics, and proposes a particular component model. CBSE is an approach to system-building that aims to shift the emphasis from programming to composing systems from a mixture of off-the-shelf and custom-built software components. This paper argues that robotics is particularly well-suited for and in need of component-based ideas. Furthermore, now is the right time for their introduction. The paper introduces Orca - an open-source component-based software engineering framework proposed for mobile robotics with an associated repository of free, reusable components for building mobile robotic systems.

  • 49. Bryant, D.
    et al.
    Lagergren, Jens
    KTH, School of Computer Science and Communication (CSC), Numerical Analysis and Computer Science, NADA.
    Compatibility of unrooted phylogenetic trees is FPT2006In: Theoretical Computer Science, ISSN 0304-3975, E-ISSN 1879-2294, Vol. 351, no 3, p. 296-302Article in journal (Refereed)
    Abstract [en]

    A collection T_1, T_2, ..., T_k of unrooted, leaf-labelled (phylogenetic) trees, all with different leaf sets, is said to be compatible if there exists a tree T such that each tree T_i can be obtained from T by deleting leaves and contracting edges. Determining compatibility is NP-hard, and the fastest algorithm to date has a worst-case complexity of around Omega(n^k) time, n being the number of leaves. Here, we present an O(n f(k)) algorithm, proving that compatibility of unrooted phylogenetic trees is fixed-parameter tractable (FPT) with respect to the number k of trees.

  • 50.
    Brynielsson, Joel
    KTH, School of Computer Science and Communication (CSC), Numerical Analysis and Computer Science, NADA.
    A Decision-Theoretic Framework Using Rational Agency2002In: Proceedings of the 11th Conference on Computer-Generated Forces and Behavioral Representation, 2002, p. 459-463Conference paper (Refereed)
    Abstract [en]

    Maximizing expected utility has been the foremost method for decision-making for centuries and has been applied to numerous decision tasks of various kinds. The idea of using utility matrices in a tree structure to predict behavior among intelligent agents is, however, new, with several contributions during the last decade. We have investigated such a decision-theoretic framework, the Recursive Modeling Method, originally applied within intelligent agents. This framework includes a data structure that holds information regarding the surrounding environment, and a model for computation that takes advantage of that data structure. The data structure is based on utility matrices used for storing information regarding preferences, the environment and other agents. These utility matrices are organized in a tree structure that also contains probability distributions representing beliefs regarding the current situation picture. The probability distributions are used recursively together with the utility matrices in order to solve decision tasks. We conclude that the investigated framework needs to be extended to be fully functional for Command and Control decision-making. We therefore outline an extended framework in which we introduce the "attribute domain", which is used throughout the model. The main idea is to keep track of different utility variables, one for each attribute, throughout the recursive process, so that the information can be used for various decision tasks. We believe that different utility functions will be used from time to time and that the utility therefore cannot be combined into one single variable. Instead, the data structure must be designed to hold sets containing one utility value for each attribute, rather than one single utility value describing all kinds of profit.
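
    The non-recursive core of such a framework, choosing an action by expected utility from a utility matrix and a belief over another agent's actions, can be sketched as follows (an illustrative example only; the action names, payoffs and belief are invented here, and the Recursive Modeling Method additionally nests such models recursively):

        import numpy as np

        actions = ["advance", "hold", "retreat"]        # our candidate decisions
        opponent_actions = ["attack", "defend"]         # the other agent's options

        # utility[i, j] = payoff of our action i when the opponent plays action j
        utility = np.array([[ 4.0, -2.0],
                            [ 1.0,  1.0],
                            [-1.0,  3.0]])

        belief = np.array([0.3, 0.7])                   # P(opponent plays each action)

        expected = utility @ belief                     # expected utility of each of our actions
        best = int(np.argmax(expected))
        for a, eu in zip(actions, expected):
            print(f"E[U | {a}] = {eu:.2f}")
        print("best action:", actions[best])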
