Results 301 - 350 of 476
  • 301.
    Maboudi Afkham, Heydar
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP.
    Improving Image Classification Performance using Joint Feature Selection (2014). Doctoral thesis, monograph (Other academic)
    Abstract [en]

    In this thesis, we focus on the problem of image classification and investigate how its performance can be systematically improved. Improving the performance of different computer vision methods has been the subject of many studies. While different studies take different approaches to achieve this improvement, in this thesis we address this problem by investigating the relevance of the statistics collected from the image.

    We propose a framework for gradually improving the quality of an already existing image descriptor. In our studies, we employ a descriptor which is composed of the responses of a series of discriminative components for summarizing each image. As we will show, this descriptor has an ideal form in which all categories become linearly separable. While reaching this form is not possible, we will argue that by replacing a small fraction of these components, it is possible to obtain a descriptor which is, on average, closer to this ideal form. To do so, we initially identify which components do not contribute to the quality of the descriptor and replace them with more robust ones. As we will show, this replacement has a positive effect on the quality of the descriptor.

    While there are many ways of obtaining more robust components, we introduce a joint feature selection problem to obtain image features that retain class-discriminative properties while simultaneously generalising over within-class variations. Our approach is based on the concept of a joint feature, where several small features are combined in a spatial structure. The proposed framework automatically learns the structure of the joint constellations in a class-dependent manner, improving the generalisation and discrimination capabilities of the local descriptor while still retaining a low-dimensional representation.

    The joint feature selection problem discussed in this thesis belongs to a specific class of latent variable models that assumes each labeled sample is associated with a set of different features, with no prior knowledge of which feature is the most relevant. Deformable Part Models (DPMs) are good examples of such models. These models are usually considered to be expensive to train and very sensitive to the initialization. Here, we focus on the learning of such models by introducing a topological framework and show how it is possible both to reduce the learning complexity and to produce more robust decision boundaries. We will also argue how our framework can be used to produce robust decision boundaries without exploiting the dataset bias or relying on accurate annotations.

    To examine the hypothesis of this thesis, we evaluate different parts of our framework on several challenging datasets and demonstrate how our framework is capable of gradually improving the performance of image classification by collecting more robust statistics from the image and improving the quality of the descriptor.
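    As an aside, the component-replacement step summarised above (identify the components that do not contribute to the descriptor and swap them for more robust ones) can be sketched with a toy scoring criterion. The between-class separation score below is a hypothetical proxy for illustration only, not the measure used in the thesis:

    ```python
    import numpy as np

    def weakest_components(X, y, fraction=0.1):
        """Rank descriptor components by a simple between-class separation
        score (variance of the per-class means over the pooled variance)
        and return the indices of the weakest `fraction`, i.e. the
        candidates for replacement. Illustrative proxy only."""
        classes = np.unique(y)
        class_means = np.array([X[y == c].mean(axis=0) for c in classes])
        score = class_means.var(axis=0) / (X.var(axis=0) + 1e-12)
        k = max(1, int(fraction * X.shape[1]))
        return np.argsort(score)[:k]  # indices of the lowest-scoring components
    ```

    On synthetic data where one component is pure noise and another separates the classes, the noise component is flagged first.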

  • 302.
    Madry, Marianna
    et al.
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP. KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS.
    Maboudi Afkham, Heydar
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP. KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS.
    Ek, Carl Henrik
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP. KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS.
    Carlsson, Stefan
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP. KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS.
    Kragic, Danica
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP. KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS.
    Extracting essential local object characteristics for 3D object categorization (2013). In: 2013 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), IEEE conference proceedings, 2013, pp. 2240-2247. Conference paper (Refereed)
    Abstract [en]

    Most object classes share a considerable amount of local appearance, and often only a small number of features are discriminative. The traditional approach to representing an object is based on a summarization of the local characteristics obtained by counting the number of feature occurrences. In this paper we propose the use of a recently developed technique for summarization that, rather than looking at the quantity of features, encodes their quality to learn a description of an object. Our approach is based on extracting and aggregating only the essential characteristics of an object class for a task. We show how the proposed method significantly improves on previous work in 3D object categorization. We discuss the benefits of the method in other scenarios such as robot grasping. We provide extensive quantitative and qualitative experiments comparing our approach to the state of the art to justify the described approach.

  • 303.
    Madry, Marianna
    et al.
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP.
    Song, Dan
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP.
    Ek, Carl Henrik
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP.
    Kragic, Danica
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP.
    "Robot, bring me something to drink from": object representation for transferring task-specific grasps (2013). In: IEEE International Conference on Robotics and Automation (ICRA 2012), Workshop on Semantic Perception, Mapping and Exploration (SPME), St. Paul, MN, USA, May 13, 2012. Conference paper (Refereed)
    Abstract [en]

    In this paper, we present an approach for task-specific object representation which facilitates transfer of grasp knowledge from a known object to a novel one. Our representation encompasses: (a) several visual object properties, (b) object functionality and (c) task constraints, in order to provide a suitable goal-directed grasp. We compare various features describing complementary object attributes to evaluate the balance between the discrimination and generalization properties of the representation. The experimental setup is a scene containing multiple objects. Individual object hypotheses are first detected, categorized and then used as the input to a grasp reasoning system that encodes the task information. Our approach allows us not only to find objects in a real-world scene that afford a desired task, but also to generate and successfully transfer task-based grasps within and across object categories.

  • 304.
    Madry, Marianna
    et al.
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP. KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS.
    Song, Dan
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP. KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS.
    Kragic, Danica
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP. KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS.
    From object categories to grasp transfer using probabilistic reasoning (2012). In: 2012 IEEE International Conference on Robotics and Automation (ICRA), IEEE Computer Society, 2012, pp. 1716-1723. Conference paper (Refereed)
    Abstract [en]

    In this paper we address the problem of grasp generation and grasp transfer between objects using categorical knowledge. The system is built upon i) an active scene segmentation module, capable of generating object hypotheses and segmenting them from the background in real time, ii) an object categorization system integrating 2D and 3D cues, and iii) a probabilistic grasp reasoning system. Individual object hypotheses are first generated, categorized and then used as the input to a grasp generation and transfer system that encodes task, object and action properties. The experimental evaluation compares the individual 2D and 3D categorization approaches with the integrated system, and it demonstrates the usefulness of the categorization in task-based grasping and grasp transfer.

  • 305.
    Mahbod, A.
    et al.
    Romania.
    Ellinger, I.
    Romania.
    Ecker, R.
    Romania.
    Smedby, Örjan
    KTH, Skolan för kemi, bioteknologi och hälsa (CBH), Medicinteknik och hälsosystem, Medicinsk avbildning.
    Wang, Chunliang
    KTH, Skolan för kemi, bioteknologi och hälsa (CBH), Medicinteknik och hälsosystem, Medicinsk avbildning.
    Breast Cancer Histological Image Classification Using Fine-Tuned Deep Network Fusion (2018). In: 15th International Conference on Image Analysis and Recognition, ICIAR 2018, Springer, 2018, pp. 754-762. Conference paper (Refereed)
    Abstract [en]

    Breast cancer is the most common cancer type in women worldwide. Histological evaluation of breast biopsies is a challenging task even for experienced pathologists. In this paper, we propose a fully automatic method to classify breast cancer histological images into four classes, namely normal, benign, in situ carcinoma and invasive carcinoma. The proposed method takes normalized hematoxylin and eosin stained images as input and gives the final prediction by fusing the output of two residual neural networks (ResNets) of different depth. These ResNets were first pre-trained on ImageNet images, and then fine-tuned on breast histological images. We found that our approach outperformed a previously published method by a large margin when applied to the BioImaging 2015 challenge dataset, yielding an accuracy of 97.22%. Moreover, the same approach provided an excellent classification performance, with an accuracy of 88.50%, when applied to the ICIAR 2018 grand challenge dataset using 5-fold cross-validation.
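    The prediction-time fusion described in this abstract (combining the outputs of two fine-tuned ResNets of different depth) is a form of late fusion. A minimal sketch, assuming plain averaging of class probabilities, which may differ from the authors' exact fusion rule:

    ```python
    import numpy as np

    def softmax(z):
        """Row-wise softmax, numerically stabilised."""
        z = z - z.max(axis=1, keepdims=True)
        e = np.exp(z)
        return e / e.sum(axis=1, keepdims=True)

    def fuse_predictions(logits_a, logits_b):
        """Late fusion of two classifiers (e.g. ResNets of different
        depth): average their class-probability outputs and take the
        argmax over the four classes. Generic sketch, not the paper's
        published code."""
        probs = (softmax(logits_a) + softmax(logits_b)) / 2
        return probs.argmax(axis=1)
    ```

    Averaging probabilities (rather than logits) keeps each network's confidence calibration on a common scale before the vote.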

  • 306.
    Maki, Atsuto
    et al.
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP.
    Bretzner, Lars
    Eklundh, Jan-Olof
    Local Fourier Phase and Disparity Estimates: An Analytical Study (1995). In: International Conference on Computer Analysis of Images and Patterns, 1995, Vol. 970, pp. 868-873. Conference paper (Refereed)
  • 307.
    Maki, Atsuto
    et al.
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP.
    Nordlund, Peter
    Eklundh, Jan-Olof
    A computational model of depth-based attention (1996). In: International Conference on Pattern Recognition, 1996, pp. 734-739. Conference paper (Refereed)
  • 308.
    Maki, Atsuto
    et al.
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP.
    Nordlund, Peter
    Eklundh, Jan-Olof
    Attentional Scene Segmentation: Integrating Depth and Motion (2000). In: Computer Vision and Image Understanding, Vol. 78, No. 3, pp. 351-373. Journal article (Refereed)
  • 309.
    Maki, Atsuto
    et al.
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP.
    Uhlin, Tomas
    Disparity Selection in Binocular Pursuit (1995). In: IEICE Transactions on Information and Systems, ISSN 0916-8532, E-ISSN 1745-1361, Vol. E78-D, No. 12, pp. 1591-1597. Journal article (Refereed)
  • 310.
    Maki, Atsuto
    et al.
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP.
    Uhlin, Tomas
    Eklundh, Jan-Olof
    A Direct Disparity Estimation Technique for Depth Segmentation (1996). In: IAPR Workshop on Machine Vision Applications, 1996, pp. 530-533. Conference paper (Refereed)
  • 311.
    Maki, Atsuto
    et al.
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP.
    Uhlin, Tomas
    Eklundh, Jan-Olof
    Disparity Selection in Binocular Pursuit (1994). In: IAPR Workshop on Machine Vision Applications, 1994, pp. 182-185. Conference paper (Refereed)
  • 312.
    Maki, Atsuto
    et al.
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP.
    Uhlin, Tomas
    Eklundh, Jan-Olof
    Phase-Based Disparity Estimation in Binocular Tracking (1993). In: Scandinavian Conference on Image Analysis, 1993. Conference paper (Refereed)
  • 313.
    Mancini, M.
    et al.
    Bresin, Roberto
    KTH, Skolan för datavetenskap och kommunikation (CSC), Tal, musik och hörsel, TMH.
    Pelachaud, C.
    From acoustic cues to an expressive agent (2006). In: Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) / [ed] Gibet, S; Courty, N; Kamp, JF, 2006, Vol. 3881, pp. 280-291. Conference paper (Refereed)
    Abstract [en]

    This work proposes a new way of providing feedback on expressivity in music performance. Starting from studies on the expressivity of music performance, we developed a system in which visual feedback is given to the user using a graphical representation of a human face. The first part of the system, previously developed by researchers at KTH Stockholm and at Uppsala University, allows the real-time extraction and analysis of acoustic cues from the music performance. The extracted cues are: sound level, tempo, articulation, attack time, and spectrum energy. From these cues the system provides a high-level interpretation of the emotional intention of the performer, which is classified as one basic emotion, such as happiness, sadness, or anger. We have implemented an interface between that system and the embodied conversational agent Greta, developed at the University of Rome "La Sapienza" and the University of Paris 8. We model the expressivity of the facial animation of the agent with a set of six dimensions that characterize the manner of behavior execution. In this paper we first describe a mapping between the acoustic cues and the expressivity dimensions of the face. Then we show how to determine the facial expression corresponding to the emotional intention resulting from the acoustic analysis, using music sound level and tempo characteristics to control the intensity and the temporal variation of muscular activation.
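    The cue-to-emotion stage described above can be caricatured as a simple rule on two normalised cues. The thresholds and the two-cue restriction below are illustrative assumptions; the actual system uses five cues and a richer analysis stage:

    ```python
    def classify_emotion(sound_level, tempo):
        """Toy rule-based mapping from two acoustic cues (each normalised
        to [0, 1]) to a basic emotion label. Thresholds are invented for
        illustration; they are not taken from the paper."""
        if sound_level < 0.4 and tempo < 0.4:
            return "sadness"    # soft and slow
        if sound_level > 0.7 and tempo > 0.6:
            return "anger"      # loud and fast
        return "happiness"      # moderate level and tempo
    ```

    A real classifier would be trained on labelled performances rather than hand-set thresholds; the sketch only shows the shape of the cue-to-label mapping.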

  • 314.
    Mancini, Massimiliano
    et al.
    Sapienza Univ Rome, Rome, Italy.;Fdn Bruno Kessler, Trento, Italy..
    Karaoǧuz, Hakan
    KTH, Skolan för elektroteknik och datavetenskap (EECS), Robotik, perception och lärande, RPL.
    Ricci, Elisa
    Fdn Bruno Kessler, Trento, Italy.;Univ Trento, Trento, Italy..
    Jensfelt, Patric
    KTH, Skolan för elektroteknik och datavetenskap (EECS), Robotik, perception och lärande, RPL.
    Caputo, Barbara
    Sapienza Univ Rome, Rome, Italy.;Italian Inst Technol, Milan, Italy..
    Kitting in the Wild through Online Domain Adaptation (2018). In: 2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS) / [ed] Maciejewski, A. A., et al., IEEE, 2018, pp. 1103-1109. Conference paper (Refereed)
    Abstract [en]

    Technological developments call for increasing perception and action capabilities of robots. Among other skills, vision systems that can adapt to any possible change in the working conditions are needed. Since these conditions are unpredictable, we need benchmarks that allow us to assess the generalization and robustness capabilities of our visual recognition algorithms. In this work we focus on robotic kitting in unconstrained scenarios. As a first contribution, we present a new visual dataset for the kitting task. Differently from standard object recognition datasets, we provide images of the same objects acquired under various conditions where camera, illumination and background are changed. This novel dataset allows for testing the robustness of robot visual recognition algorithms to a series of different domain shifts, both in isolation and combined. Our second contribution is a novel online adaptation algorithm for deep models, based on batch-normalization layers, which continuously adapts a model to the current working conditions. Differently from standard domain adaptation algorithms, it does not require any image from the target domain at training time. We benchmark the performance of the algorithm on the proposed dataset, showing its capability to close the gap between the performance of a standard architecture and that of its counterpart adapted offline to the given target domain.

  • 315.
    Mancini, Maurizio
    et al.
    Bresin, Roberto
    KTH, Skolan för datavetenskap och kommunikation (CSC), Tal, musik och hörsel, TMH, Musikakustik.
    Pelachaud, Catherine
    A virtual head driven by music expressivity (2007). In: IEEE Transactions on Audio, Speech, and Language Processing, ISSN 1558-7916, E-ISSN 1558-7924, Vol. 15, No. 6, pp. 1833-1841. Journal article (Refereed)
    Abstract [en]

    In this paper, we present a system that visualizes the expressive quality of a music performance using a virtual head. We provide a mapping through several parameter spaces: on the input side, we have elaborated a mapping between values of acoustic cues and emotion as well as expressivity parameters; on the output side, we propose a mapping between these parameters and the behaviors of the virtual head. This mapping ensures a coherency between the acoustic source and the animation of the virtual head. After presenting some background information on behavior expressivity of humans, we introduce our model of expressivity. We explain how we have elaborated the mapping between the acoustic and the behavior cues. Then, we describe the implementation of a working system that controls the behavior of a human-like head that varies depending on the emotional and acoustic characteristics of the musical execution. Finally, we present the tests we conducted to validate our mapping between the emotive content of the music performance and the expressivity parameters.

  • 316.
    Martínez-Gómez, Jesus
    et al.
    University of Castilla-La Mancha.
    Caputo, Barbara
    University of Rome La Sapienza.
    Cazorla, Miguel
    University of Alicante.
    Christensen, Henrik I.
    University of California, San Diego.
    Fornoni, Marco
    Idiap Research Institute and EPFL.
    García-Varea, Ismael
    University of Castilla-La Mancha.
    Pronobis, Andrzej
    University of Washington, United States.
    Where Are We After Five Editions? Robot Vision Challenge, a Competition that Evaluates Solutions for the Visual Place Classification Problem (2015). In: IEEE Robotics & Automation Magazine, ISSN 1070-9932, E-ISSN 1558-223X, Vol. 22, No. 4, pp. 147-156. Journal article (Refereed)
    Abstract [en]

    This article describes the Robot Vision challenge, a competition that evaluates solutions for the visual place classification problem. Since its origin, this challenge has been proposed as a common benchmark where worldwide proposals are measured using a common overall score. Each new edition of the competition introduced novelties, both in the type of input data and in the sub-objectives of the challenge. All the techniques used by the participants have been gathered and published to make them accessible for future developments. The legacy of the Robot Vision challenge includes data sets, benchmarking techniques, and wide experience in place classification research, which is reflected in this article.

  • 317.
    Marzinotto, Alejandro
    et al.
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP.
    Colledanchise, Michele
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP.
    Smith, Christian
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP. KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS.
    Ögren, Petter
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP.
    Towards a Unified Behavior Trees Framework for Robot Control (2014). In: Robotics and Automation (ICRA), 2014 IEEE International Conference on, IEEE Robotics and Automation Society, 2014, pp. 5420-5427. Conference paper (Refereed)
    Abstract [en]

    This paper presents a unified framework for Behavior Trees (BTs), a plan representation and execution tool. The available literature lacks the consistency and mathematical rigor required for robotic and control applications. Therefore, we approach this problem in two steps: first, reviewing the most popular BT literature exposing the aforementioned issues; second, describing our unified BT framework along with equivalence notions between BTs and Controlled Hybrid Dynamical Systems (CHDSs). This paper improves on the existing state of the art as it describes BTs in a more accurate and compact way, while providing insight about their actual representation capabilities. Lastly, we demonstrate the applicability of our framework to real systems scheduling open-loop actions in a grasping mission that involves a NAO robot and our BT library.
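    For readers unfamiliar with BTs, the core Sequence/Fallback tick semantics that such a framework formalises can be sketched in a few lines of generic code (an illustration of standard BT semantics, not the authors' BT library):

    ```python
    # Standard BT return statuses.
    SUCCESS, FAILURE, RUNNING = "SUCCESS", "FAILURE", "RUNNING"

    class Sequence:
        """Ticks children left to right; returns the first non-SUCCESS
        status, or SUCCESS if every child succeeds."""
        def __init__(self, *children):
            self.children = children
        def tick(self):
            for child in self.children:
                status = child.tick()
                if status != SUCCESS:
                    return status
            return SUCCESS

    class Fallback:
        """Ticks children left to right; returns the first non-FAILURE
        status, or FAILURE if every child fails."""
        def __init__(self, *children):
            self.children = children
        def tick(self):
            for child in self.children:
                status = child.tick()
                if status != FAILURE:
                    return status
            return FAILURE

    class Action:
        """Leaf node wrapping a callable that returns a status."""
        def __init__(self, fn):
            self.fn = fn
        def tick(self):
            return self.fn()
    ```

    For example, `Fallback(Action(lambda: FAILURE), Sequence(Action(lambda: SUCCESS), Action(lambda: SUCCESS))).tick()` returns `SUCCESS`: the fallback skips its failing first child and the sequence succeeds.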

  • 318.
    Marzinotto, Alejandro
    et al.
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP.
    Stork, Johannes A.
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP.
    Dimarogonas, Dino V.
    KTH, Skolan för elektro- och systemteknik (EES), Reglerteknik.
    Kragic Jensfelt, Danica
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP.
    Cooperative grasping through topological object representation (2015). In: IEEE-RAS International Conference on Humanoid Robots, IEEE Computer Society, 2015, pp. 685-692. Conference paper (Refereed)
    Abstract [en]

    We present a cooperative grasping approach based on a topological representation of objects. Using point cloud data, we extract loops on objects suitable for generating entanglement. We use the Gauss Linking Integral to derive controllers for multi-agent systems that generate hooking grasps on such loops while minimizing the entanglement between robots. The approach copes well with noisy point cloud data, and it is computationally simple and robust. We demonstrate the method by performing object grasping and transportation, through a hooking maneuver, with two coordinated NAO robots.
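    The Gauss Linking Integral used above has a standard discrete approximation for two closed polylines, built from segment midpoints and segment vectors. The sketch below is the generic formula, not the paper's controller code:

    ```python
    import numpy as np

    def linking_number(curve1, curve2):
        """Discrete Gauss linking integral between two closed polylines.
        curve1, curve2: (N, 3) arrays of vertices; the curves are closed
        implicitly (last vertex connects back to the first)."""
        m1 = (curve1 + np.roll(curve1, -1, axis=0)) / 2   # segment midpoints
        d1 = np.roll(curve1, -1, axis=0) - curve1         # segment vectors
        m2 = (curve2 + np.roll(curve2, -1, axis=0)) / 2
        d2 = np.roll(curve2, -1, axis=0) - curve2
        r = m1[:, None, :] - m2[None, :, :]               # pairwise offsets
        cross = np.cross(d1[:, None, :], d2[None, :, :])
        norm3 = np.linalg.norm(r, axis=-1) ** 3
        return np.sum(np.einsum('ijk,ijk->ij', r, cross) / norm3) / (4 * np.pi)
    ```

    For two unit circles forming a Hopf link, the value converges to ±1 as the discretisation is refined; unlinked loops give a value near 0.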

  • 319.
    Masson, Clément
    KTH, Skolan för datavetenskap och kommunikation (CSC).
    Direction estimation using visual odometry (2015). Independent thesis, Advanced level (professional degree), 20 credits / 30 HE credits. Student thesis
    Abstract [en]

    This Master's thesis tackles the problem of measuring objects' directions from a motionless observation point. A new method based on a single rotating camera, requiring knowledge of only two (or more) landmarks' directions, is proposed. In a first phase, multi-view geometry is used to estimate camera rotations and key elements' directions from a set of overlapping images. Then, in a second phase, the direction of any object can be estimated by resectioning the camera associated with a picture showing this object. A detailed description of the algorithmic chain is given, along with test results on both synthetic data and real images taken with an infrared camera.
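    The rotation-from-two-directions step underlying the resectioning phase can be illustrated with the classic TRIAD construction; this is a standard textbook method shown for illustration, not necessarily the estimator used in the thesis:

    ```python
    import numpy as np

    def triad(w1, w2, c1, c2):
        """TRIAD attitude estimate: rotation R with R @ c ≈ w, from two
        direction pairs known in the world frame (w1, w2) and observed in
        the camera frame (c1, c2). With exact inputs the recovery is
        exact; with noisy inputs it favours the first pair."""
        def frame(a, b):
            t1 = a / np.linalg.norm(a)
            t2 = np.cross(a, b)
            t2 = t2 / np.linalg.norm(t2)
            return np.column_stack([t1, t2, np.cross(t1, t2)])
        return frame(w1, w2) @ frame(c1, c2).T
    ```

    Once R is known, any newly observed camera-frame direction maps to the world frame as `R @ c`, which is the essence of resectioning from known landmark directions.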

  • 320.
    Menghi, C.
    et al.
    Garcia, S.
    Pelliccione, P.
    Tumova, Jana
    KTH.
    Multi-robot LTL planning under uncertainty (2018). In: 22nd International Symposium on Formal Methods, FM 2018, Held as Part of the Federated Logic Conference, FLoC 2018, Springer, 2018, Vol. 10951, pp. 399-417. Conference paper (Refereed)
    Abstract [en]

    Robot applications are increasingly based on teams of robots that collaborate to perform a desired mission. Such applications call for decentralized techniques that allow for tractable automated planning. Another aspect that current robot applications must consider is partial knowledge about the environment in which the robots are operating, and the uncertainty associated with the outcome of the robots' actions. Current planning techniques used for teams of robots that perform complex missions do not systematically address these challenges: (1) they are either based on centralized solutions and hence not scalable, (2) they consider rather simple missions, such as A-to-B travel, or (3) they do not work in partially known environments. We present a planning solution that decomposes the team of robots into subclasses, considers missions given in temporal logic, and at the same time works when only partial knowledge of the environment is available. We prove the correctness of the solution and evaluate its effectiveness on a set of realistic examples.

  • 321.
    Miao, Li
    et al.
    Bekiroglu, Yasemin
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP.
    Kragic, Danica
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP.
    Billard, Aude
    Learning of Grasp Adaptation through Experience and Tactile Sensing (2014). Conference paper (Refereed)
    Abstract [en]

    To perform robust grasping, a multi-fingered robotic hand should be able to adapt its grasping configuration, i.e., how the object is grasped, to maintain the stability of the grasp. Such a change of grasp configuration is called grasp adaptation, and it depends on the controller, the employed sensory feedback and the type of uncertainties inherent in the problem. This paper proposes a grasp adaptation strategy to deal with uncertainties about physical properties of objects, such as the object weight and the friction at the contact points. Based on an object-level impedance controller, a grasp stability estimator is first learned in the object frame. Once a grasp is predicted to be unstable by the stability estimator, a grasp adaptation strategy is triggered according to the similarity between the new grasp and the training examples. Experimental results demonstrate that our method improves the grasping performance on novel objects with physical properties different from those used for training.

  • 322.
    Mitsunaga, Noriaki
    et al.
    Osaka Kyoiku University.
    Smith, Christian
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP. KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS.
    Kanda, Takayuki
    Advanced Telecommunications Research International.
    Ishiguro, Hiroshi
    Osaka University.
    Hagita, Norihiro
    Advanced Telecommunications Research International.
    Adapting Nonverbal Behavior Parameters to be Preferred by Individuals (2012). In: Human-Robot Interaction in Social Robotics / [ed] Takayuki Kanda and Hiroshi Ishiguro, Boca Raton, FL, USA: CRC Press, 2012, pp. 312-324. Chapter in book, part of anthology (Other academic)
  • 323.
    Mohanty, Sumit
    et al.
    KTH, Skolan för informations- och kommunikationsteknik (ICT).
    Hong, Ayoung
    Alcantara, Carlos
    Petruska, Andrew J.
    Nelson, Bradley J.
    Stereo Holographic Diffraction Based Tracking of Microrobots (2018). In: IEEE Robotics and Automation Letters, ISSN 2377-3766, E-ISSN 1949-3045, Vol. 3, No. 1, pp. 567-572. Journal article (Refereed)
    Abstract [en]

    Three-dimensional (3-D) tracking of microrobots is demonstrated using stereo holographic projections. The method detects the lateral position of a microrobot in two orthogonal in-line holography images and triangulates to obtain the 3-D position in an observable volume of 1 cm³. The algorithm is capable of processing holograms at 25 Hz on a desktop computer and has an accuracy of 24.7 μm and 15.2 μm in the two independent directions and 7.3 μm in the shared direction of the two imaging planes. This is the first use of stereo holograms to track an object in real time, and it does not rely on the computationally expensive process of holographic reconstruction.
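    The triangulation step, combining lateral detections from two orthogonal views that share one axis, reduces to a few lines. The axis conventions here are assumptions for illustration, not taken from the paper:

    ```python
    import numpy as np

    def triangulate(xz_view, yz_view):
        """Combine lateral detections from two orthogonal in-line
        holography views into a 3-D position. Assumed conventions:
        view 1 (optical axis along y) reports (x, z); view 2 (optical
        axis along x) reports (y, z). The shared z coordinate, measured
        by both views, is averaged."""
        x, z1 = xz_view
        y, z2 = yz_view
        return np.array([x, y, (z1 + z2) / 2])
    ```

    Averaging the shared coordinate is one simple way to combine the two measurements; a weighted average would let the more accurate view dominate.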

  • 324.
    Mondal, Jaydeb
    et al.
    Indian Stat Inst, Machine Intelligence Unit, 203 BT Rd, Kolkata 700108, India..
    Kundu, Malay Kumar
    Indian Stat Inst, Machine Intelligence Unit, 203 BT Rd, Kolkata 700108, India..
    Das, Sudeb
    Videonet Technol Pvt Ltd, Salt Lake City 700091, UT, India..
    Chowdhury, Manish
    KTH, Skolan för teknik och hälsa (STH).
    Video shot boundary detection using multiscale geometric analysis of NSCT and least squares support vector machine (2018). In: Multimedia Tools and Applications, ISSN 1380-7501, E-ISSN 1573-7721, Vol. 77, No. 7, pp. 8139-8161. Journal article (Refereed)
    Abstract [en]

    The fundamental step in video content analysis is the temporal segmentation of a video stream into shots, known as Shot Boundary Detection (SBD). A sudden transition from one shot to another is known as an Abrupt Transition (AT), whereas a transition that occurs over several frames is called a Gradual Transition (GT). A unified framework for the simultaneous detection of both AT and GT has been proposed in this article. The proposed method uses the multiscale geometric analysis of the Non-Subsampled Contourlet Transform (NSCT) for feature extraction from the video frames. The dimension of the feature vectors generated using NSCT is reduced through principal component analysis to achieve computational efficiency and performance improvement simultaneously. Finally, a cost-efficient Least Squares Support Vector Machine (LS-SVM) classifier is used to classify the frames of a given video sequence, based on the feature vectors, into No-Transition (NT), AT and GT classes. A novel, efficient method of training-set generation is also proposed, which not only reduces the training time but also improves the performance. The performance of the proposed technique is compared with several state-of-the-art SBD methods on TRECVID 2007 and TRECVID 2001 test data. The empirical results show the effectiveness of the proposed algorithm.
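    The PCA stage mentioned above (reducing the NSCT feature vectors before the LS-SVM classifier) can be sketched generically via an SVD of the centred feature matrix; this is textbook PCA, not the authors' code:

    ```python
    import numpy as np

    def pca_reduce(features, n_components):
        """Project feature vectors onto their top principal components.
        features: (n_samples, n_features) array. Returns an
        (n_samples, n_components) array ordered by explained variance."""
        mean = features.mean(axis=0)
        centered = features - mean
        # Right singular vectors of the centred data are the principal axes.
        _, _, vt = np.linalg.svd(centered, full_matrices=False)
        return centered @ vt[:n_components].T
    ```

    The first returned column carries the largest variance, so truncating to a handful of components keeps most of the descriptor energy while shrinking the classifier's input dimension.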

  • 325.
    Naber, Adam
    et al.
Chalmers Univ Technol, Biomechatron & Neurorehabil Lab, Dept Elect Engn, Gothenburg, Sweden.
    Karayiannidis, Yiannis
    KTH, Skolan för elektroteknik och datavetenskap (EECS), Robotik, perception och lärande, RPL.
    Ortiz-Catalan, Max
Chalmers Univ Technol, Biomechatron & Neurorehabil Lab, Dept Elect Engn, Gothenburg, Sweden; Integrum AB, S-43137 Molndal, Sweden.
Universal, Open Source, Myoelectric Interface for Assistive Devices (2018). In: 2018 15th International Conference on Control, Automation, Robotics and Vision (ICARCV), IEEE, 2018, pp. 1585-1589. Conference paper (Refereed)
    Abstract [en]

We present an integrated, open-source platform for the control of assistive vehicles. The system is vehicle-agnostic and can be controlled using a myoelectric interface that translates muscle contractions into vehicular commands. A modular shared-control system was used to enhance safety and ease of use, and three collision avoidance systems were included and verified both on an included test platform and on a quadcopter operating in a simulated environment. Seven subjects performed the experiments and rated the user experience of the system under each of the provided collision avoidance systems, with positive results. Qualitative tests with the quadcopter validated the proposed system and shared-control techniques. This open-source platform for shared control between humans and machines integrates decoding of motor volition with control engineering to expedite further investigation into the operation of mobile robots.

  • 326. Nalpantidis, L.
    et al.
    Kragic Jensfelt, Danica
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP. KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS.
    Kostavelis, I.
    Gasteratos, A.
Theta-disparity: An efficient representation of the 3D scene structure (2015). In: 13th International Conference on Intelligent Autonomous Systems, IAS 2014, Springer, 2015, Vol. 302, pp. 795-806. Conference paper (Refereed)
    Abstract [en]

We propose a new representation of 3D scene structure, named theta-disparity. The proposed representation is a 2D angular depth histogram that is calculated from a disparity map. It models the structure of the prominent objects in the scene and reveals their radial distribution relative to a point of interest. The proposed representation is analyzed and used as a basic attention mechanism to autonomously resolve two different robotic scenarios. The method is efficient due to its low computational complexity. We show that the method can be successfully used for the planning of different tasks in the industrial and service robotics domains, e.g., object grasping, manipulation, plane extraction, path detection, and obstacle avoidance.
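The abstract describes the representation only at a high level; the following is a minimal pure-Python sketch of the idea (an angle-vs-disparity histogram around a point of interest). Function and parameter names (`theta_disparity`, `n_angle_bins`, `max_disp`) are assumptions for illustration, not the authors' implementation.

```python
import math

def theta_disparity(disparity_map, ref_point, n_angle_bins=8, n_disp_bins=4, max_disp=64):
    """Sketch of a 2D angular depth histogram relative to a point of interest.

    Each valid disparity pixel votes into a bin indexed by its angular position
    around ref_point and its quantised disparity; peaks then indicate the
    radial distribution of prominent scene structures.
    """
    hist = [[0] * n_disp_bins for _ in range(n_angle_bins)]
    ry, rx = ref_point
    for y, row in enumerate(disparity_map):
        for x, d in enumerate(row):
            if d <= 0 or (y == ry and x == rx):
                continue  # skip invalid disparities and the reference pixel itself
            theta = math.atan2(y - ry, x - rx) % (2 * math.pi)
            a = min(int(theta / (2 * math.pi) * n_angle_bins), n_angle_bins - 1)
            dbin = min(int(d / max_disp * n_disp_bins), n_disp_bins - 1)
            hist[a][dbin] += 1
    return hist
```

A peak-picking step over `hist` would then provide the attention cues the abstract mentions (e.g. the dominant direction and depth of an object to grasp or avoid).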

  • 327.
    Nalpantidis, Lazaros
    Production and Management Engineering Dept., Democritus University of Thrace, Greece.
Study and implementation of stereo vision systems for robotic applications (2010). Doctoral thesis, monograph (Other academic)
    Abstract [en]

Stereo vision has been chosen by natural selection as the most common way to estimate the depth of objects. A pair of two-dimensional images is enough to retrieve the third dimension of the scene under observation. This method is important not only for living creatures but for sophisticated machine systems as well. During the last years robotics has made significant progress, and the state of the art is now about achieving autonomous behaviors. In order to accomplish the target of robots being able to move and act autonomously, accurate representations of their environments are required. Both of these fields, stereo vision and autonomous robotic behaviors, have been at the center of this PhD thesis. The issue of robots using machine stereo vision is not a new one. The number and significance of the researchers involved, as well as the publishing rate of relevant scientific papers, indicate an issue that is interesting and still open to solutions and fresh ideas rather than a banal, solved one. The motivation of this PhD thesis has been the observation that the combination of stereo vision and autonomous robots is usually performed in a simplistic manner, by simultaneously using two independent technologies. This situation stems from the fact that the two technologies have evolved independently and within different scientific communities. Stereo vision has mainly evolved within the field of computer vision. On the other hand, autonomous robots are a branch of the robotics and mechatronics field. Methods that have been proposed within the frame of computer vision are generally not satisfactory for use in robotic applications. This is because an autonomous robot imposes strict constraints concerning the demanded speed of calculations and the available computational resources. Moreover, their inefficiency is commonly due to factors related to the environments and the conditions of operation.
As a result, the algorithms used, in this case the stereo vision algorithms, should take these factors into consideration during their development. The required compromises have to retain the functionality of the integrated system. The objective of this PhD thesis is the development of stereo vision systems customized for use in autonomous robots. Initially, a literature survey was conducted concerning stereo vision algorithms and corresponding robotic applications. The survey revealed the state of the art in the specific field and pointed out issues that had not yet been answered in a satisfactory manner. Afterwards, novel stereo vision algorithms were developed, which satisfy the demands posed by robotic systems and propose solutions to the open issues indicated by the literature survey. Finally, systems that embody the proposed algorithms and address open issues in robotic applications have been developed. Within this dissertation, various computational tools and ideas originating from different scientific fields have been used for the first time and combined in a novel way. Biologically and psychologically inspired methods have been employed, such as the logarithmic response law (Weber-Fechner law) and the gestalt laws of perceptual organization (proximity, similarity and continuity). Furthermore, sophisticated computational methods have been used, such as 2D and 3D cellular automata and fuzzy inference systems for computer vision applications. Additionally, ideas from the field of video coding have been incorporated in stereo vision applications. The resulting methods have been applied to basic computer vision depth extraction applications and even to advanced autonomous robotic behaviors. In more detail, the possibility of implementing effective hardware-implementable stereo correspondence algorithms has been investigated.
Specifically, an algorithm that combines rapid execution, a simple and straightforward structure, as well as high quality of results is presented. These features render it an ideal candidate for hardware implementation and for real-time applications. The algorithm utilizes Gaussian aggregation weights and 3D cellular automata in order to achieve high-quality results. This algorithm comprised the basis of a multi-view stereo vision system. The final depth map is produced as a result of a certainty assessment procedure. Moreover, a new hierarchical correspondence algorithm is presented, inspired by motion estimation techniques originally used in video encoding. The algorithm performs a 2D correspondence search using a similar hierarchical search pattern, and the intermediate results are refined by 3D cellular automata. This algorithm can process uncalibrated and non-rectified stereo image pairs, maintaining the computational load within reasonable levels. It is well known that non-ideal environmental conditions, such as differentiations in illumination depending on the viewpoint, heavily affect the performance of stereo algorithms. In this PhD thesis a new illumination-invariant pixel dissimilarity measure is presented that can substitute the established intensity-based ones. The proposed measure can be adopted by almost any of the existing stereo algorithms, enhancing them with its robust features. The algorithm using the proposed dissimilarity measure has outperformed all the other examined algorithms, exhibiting tolerance to illumination differentiations and robust behavior. Moreover, a novel stereo correspondence algorithm is presented that incorporates many biologically and psychologically inspired features into an adaptive weighted sum of absolute differences framework. In addition to ideas already exploited, such as the utilization of color information and the gestalt laws of proximity and similarity, new ones have been adopted.
The algorithm introduces the use of circular support regions, the gestalt law of continuity, as well as the psychophysically-based logarithmic response law. All the aforementioned perceptual tools act complementarily inside a straightforward computational algorithm. Furthermore, stereo correspondence algorithms have been further exploited as the basis of more advanced robotic behaviors. Vision-based obstacle avoidance algorithms for autonomous mobile robots are presented. These algorithms avoid, as much as possible, computationally complex processes. The only sensor required is a stereo camera. The algorithms consist of two building blocks. The first one is a stereo algorithm able to provide reliable depth maps of the scenery at frame rates suitable for a robot to move autonomously. The second building block is either a simple decision-making algorithm or a fuzzy logic-based one, which analyzes the depth maps and deduces the most appropriate direction for the robot to avoid any existing obstacles. Finally, a visual Simultaneous Localization and Mapping (SLAM) algorithm suitable for indoor applications is proposed. The algorithm is focused on computational effectiveness, and the only sensor used is a stereo camera placed onboard a moving robot. The algorithm processes the acquired images, calculating the depth of the scenery, detecting occupied areas and progressively building a map of the environment. The stereo vision-based SLAM algorithm embodies a custom-tailored stereo correspondence algorithm, the robust scale- and rotation-invariant feature detection and matching "Speeded Up Robust Features" (SURF) method, a computationally effective v-disparity image calculation scheme, a novel map-merging module, as well as a sophisticated cellular automata-based enhancement stage.

  • 328.
    Nalpantidis, Lazaros
    et al.
    Production and Management Engineering Dept., Democritus University of Thrace, Greece.
    Amanatiadis, Angelos
    Electrical and Computer Engineering Dept., Democritus University of Thrace, Greece.
    Sirakoulis, Georgios Ch.
    Electrical and Computer Engineering Dept., Democritus University of Thrace, Greece.
    Gasteratos, Antonios
    Production and Management Engineering Dept., Democritus University of Thrace, Greece.
Efficient hierarchical matching algorithm for processing uncalibrated stereo vision images and its hardware architecture (2011). In: IET Image Processing, ISSN 1751-9659, E-ISSN 1751-9667, Vol. 5, No. 5, pp. 481-492. Article in journal (Refereed)
    Abstract [en]

In motion estimation, the sub-pixel matching technique involves searching sub-sample positions as well as integer-sample positions between the image pairs, choosing the one that gives the best match. Based on this idea, this work proposes an estimation algorithm which performs a 2-D correspondence search using a hierarchical search pattern. The intermediate results are refined by 3-D cellular automata (CA). The disparity value is then defined using the distance of the matching position. Therefore, the proposed algorithm can process uncalibrated and non-rectified stereo image pairs, maintaining the computational load within reasonable levels. Additionally, a hardware architecture of the algorithm is deployed. Its performance has been evaluated on both synthetic and real self-captured image sets. Its attributes make the proposed method suitable for autonomous outdoor robotic applications.
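The hierarchical 2-D search the abstract describes is in the spirit of the logarithmic block search used in video motion estimation. A minimal pure-Python sketch of that generic idea follows; block size, step schedule, and the SAD cost are assumptions for illustration, not the paper's exact pattern, and the CA refinement stage is omitted.

```python
import math

def sad(left, right, lx, ly, rx, ry, b):
    # Sum of absolute differences over a b x b block (indices assumed in range).
    return sum(abs(left[ly + j][lx + i] - right[ry + j][rx + i])
               for j in range(b) for i in range(b))

def hierarchical_match(left, right, lx, ly, block=2, step=4):
    """Coarse-to-fine 2-D search around (lx, ly): test the neighbours at the
    current step, keep the best offset, halve the step and repeat. The length
    of the final offset plays the role of the disparity value."""
    bx = by = 0  # best offset found so far
    h, w = len(right), len(right[0])
    while step >= 1:
        best = None
        for dy in (-step, 0, step):
            for dx in (-step, 0, step):
                rx, ry = lx + bx + dx, ly + by + dy
                if 0 <= rx <= w - block and 0 <= ry <= h - block:
                    c = sad(left, right, lx, ly, rx, ry, block)
                    if best is None or c < best[0]:
                        best = (c, bx + dx, by + dy)
        _, bx, by = best
        step //= 2
    return bx, by, math.hypot(bx, by)
```

Because the search is genuinely two-dimensional, it tolerates the vertical offsets present in uncalibrated, non-rectified pairs, while visiting far fewer candidates than an exhaustive window search.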

  • 329.
    Nalpantidis, Lazaros
    et al.
    Production and Management Engineering Dept., Democritus University of Thrace, Greece.
    Amanatiadis, Angelos
    Electrical and Computer Engineering Dept., Democritus University of Thrace, Greece.
    Sirakoulis, Georgios Ch.
    Electrical and Computer Engineering Dept., Democritus University of Thrace, Greece.
    Kyriakoulis, Nikolaos
    Production and Management Engineering Dept., Democritus University of Thrace, Greece.
    Gasteratos, Antonios
    Production and Management Engineering Dept., Democritus University of Thrace, Greece.
Dense disparity estimation using a hierarchical matching technique from uncalibrated stereo vision (2009). In: IST: 2009 IEEE International Workshop on Imaging Systems and Techniques, New York: IEEE, 2009, pp. 422-426. Conference paper (Refereed)
    Abstract [en]

    In motion estimation, the sub-pixel matching technique involves the search of sub-sample positions as well as integer-sample positions between the image pairs, choosing the one that gives the best match. Based on this idea, the proposed disparity estimation algorithm performs a 2-D correspondence search using a hierarchical search pattern. The disparity value is then defined using the distance of the matching position. Therefore, the proposed algorithm can process non-rectified stereo image pairs, maintaining the computational load within reasonable levels.

  • 330.
    Nalpantidis, Lazaros
    et al.
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP. KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS.
    Björkman, Mårten
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP. KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS.
    Kragic, Danica
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP. KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS.
YES - YEt another object Segmentation: exploiting camera movement (2012). In: Intelligent Robots and Systems (IROS), 2012 IEEE/RSJ International Conference on, IEEE, 2012, pp. 2116-2121. Conference paper (Refereed)
    Abstract [en]

We address the problem of object segmentation in image sequences where no a priori knowledge of objects is assumed. We take advantage of robots' ability to move, gathering multiple images of the scene. Our approach starts by extracting edges, uses a polar domain representation, and performs integration over time based on a simple dilation operation. The proposed system can be used to provide reliable initial segmentation of unknown objects in scenes of varying complexity, allowing for recognition, categorization or physical interaction with the objects. The experimental evaluation on both a self-captured and a publicly available dataset shows the efficiency and stability of the proposed method.

  • 331.
    Nalpantidis, Lazaros
    et al.
    Production and Management Engineering Dept., Democritus University of Thrace, Greece.
    Chrysostomou, Dimitrios
    Production and Management Engineering Dept., Democritus University of Thrace, Greece.
    Gasteratos, Antonios
    Production and Management Engineering Dept., Democritus University of Thrace, Greece.
Obtaining reliable depth maps for robotic applications from a quad-camera system (2009). In: Intelligent Robotics and Applications, Proceedings, Berlin: Springer Berlin/Heidelberg, 2009, Vol. 5928 LNAI, pp. 906-916. Conference paper (Other academic)
    Abstract [en]

Autonomous navigation behaviors in robotics often require reliable depth maps. The use of vision sensors is the most popular choice in such tasks. On the other hand, accurate vision-based depth computing methods suffer from long execution times. This paper proposes a novel quad-camera based system able to calculate a single depth map of a scene quickly and accurately. The four cameras are placed on the corners of a square. Thus, three differently oriented stereo pairs result when considering a single reference image (namely a horizontal, a vertical and a diagonal pair). The proposed system utilizes a custom-tailored, simple, rapidly executed stereo correspondence algorithm applied to each stereo pair. This way, the computational load is kept within reasonable limits. A reliability measure is used in order to validate each point of the resulting disparity maps. Finally, the three disparity maps are fused together according to their reliabilities: the maximum reliability is chosen for every pixel. The final output of the proposed system is a highly reliable depth map which can be used for higher-level robotic behaviors.
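The per-pixel fusion step described at the end of the abstract (keep, for every pixel, the disparity whose reliability is highest) can be sketched in a few lines of pure Python. The function name and the list-of-lists representation are assumptions for illustration; the paper's reliability measure itself is not reproduced here.

```python
def fuse_disparity_maps(maps, reliabilities):
    """Fuse several disparity maps per pixel: for each pixel keep the
    disparity whose accompanying reliability score is highest, as described
    for the horizontal / vertical / diagonal pairs."""
    h, w = len(maps[0]), len(maps[0][0])
    fused = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            best = max(range(len(maps)), key=lambda k: reliabilities[k][y][x])
            fused[y][x] = maps[best][y][x]
    return fused
```

With three input pairs, each pixel of the fused map is backed by the most trustworthy of the three independent estimates, which is what makes the final depth map more reliable than any single pair's output.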

  • 332.
    Nalpantidis, Lazaros
    et al.
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP.
    Gasteratos, A.
Stereo vision depth estimation methods for robotic applications (2013). In: Robotics: Concepts, Methodologies, Tools, and Applications, IGI Global, 2013, Vol. 3, pp. 1461-1481. Chapter in book, part of anthology (Other academic)
    Abstract [en]

Vision is undoubtedly the most important sense for humans. Apart from many other low- and higher-level perception tasks, stereo vision has been proven to provide remarkable results when it comes to depth estimation. As a result, stereo vision is a rather popular and prosperous subject among the computer and machine vision research community. Moreover, the evolution of robotics and the demand for vision-based autonomous behaviors has posed new challenges that need to be tackled. Autonomous operation of robots in real working environments, given limited resources, requires effective stereo vision algorithms. This chapter presents suitable depth estimation methods based on stereo vision and discusses potential robotic applications.

  • 333.
    Nalpantidis, Lazaros
    et al.
    Production and Management Engineering Dept., Democritus University of Thrace, Greece.
    Gasteratos, Antonios
    Production and Management Engineering Dept., Democritus University of Thrace, Greece.
Biologically and psychophysically inspired adaptive support weights algorithm for stereo correspondence (2010). In: Robotics and Autonomous Systems, ISSN 0921-8890, E-ISSN 1872-793X, Vol. 58, No. 5, pp. 457-464. Article in journal (Refereed)
    Abstract [en]

In this paper a novel stereo correspondence algorithm is presented. It incorporates many biologically and psychologically inspired features into an adaptive weighted sum of absolute differences (SAD) framework in order to determine the correct depth of a scene. In addition to ideas already exploited, such as the utilization of color information and the gestalt laws of proximity and similarity, new ones have been adopted. The presented algorithm introduces the use of circular support regions, the gestalt law of continuity, as well as the psychophysically-based logarithmic response law. All the aforementioned perceptual tools act complementarily inside a straightforward computational algorithm applicable to robotic applications. The results of the algorithm have been evaluated and compared to those of similar algorithms.

  • 334.
    Nalpantidis, Lazaros
    et al.
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP.
    Gasteratos, Antonios
    Production and Management Engineering Dept., Democritus University of Thrace, Greece.
Stereo vision depth estimation methods for robotic applications (2011). In: Depth Map and 3D Imaging Applications: Algorithms and Technologies / [ed] A. S. Malik, T.-S. Choi, and H. Nisar, IGI Global, 2011, pp. 397-417. Chapter in book, part of anthology (Refereed)
    Abstract [en]

Vision is undoubtedly the most important sense for humans. Apart from many other low- and higher-level perception tasks, stereo vision has been proven to provide remarkable results when it comes to depth estimation. As a result, stereo vision is a rather popular and prosperous subject among the computer and machine vision research community. Moreover, the evolution of robotics and the demand for vision-based autonomous behaviors has posed new challenges that need to be tackled. Autonomous operation of robots in real working environments, given limited resources, requires effective stereo vision algorithms. This chapter presents suitable depth estimation methods based on stereo vision and discusses potential robotic applications.

  • 335.
    Nalpantidis, Lazaros
    et al.
    Production and Management Engineering Dept., Democritus University of Thrace, Greece.
    Gasteratos, Antonios
    Production and Management Engineering Dept., Democritus University of Thrace, Greece.
Stereo vision for robotic applications in the presence of non-ideal lighting conditions (2010). In: Image and Vision Computing, ISSN 0262-8856, E-ISSN 1872-8138, Vol. 28, No. 6, pp. 940-951. Article in journal (Refereed)
    Abstract [en]

    Many robotic and machine-vision applications rely on the accurate results of stereo correspondence algorithms. However, difficult environmental conditions, such as differentiations in illumination depending on the viewpoint, heavily affect the stereo algorithms' performance. This work proposes a new illumination-invariant dissimilarity measure in order to substitute the established intensity-based ones. The proposed measure can be adopted by almost any of the existing stereo algorithms, enhancing it with its robust features. The performance of the dissimilarity measure is validated through experimentation with a new adaptive support weight (ASW) stereo correspondence algorithm. Experimental results for a variety of lighting conditions are gathered and compared to those of intensity-based algorithms. The algorithm using the proposed dissimilarity measure outperforms all the other examined algorithms, exhibiting tolerance to illumination differentiations and robust behavior.

  • 336.
    Nalpantidis, Lazaros
    et al.
    Production and Management Engineering Dept., Democritus University of Thrace, Greece.
    Gasteratos, Antonios
    Production and Management Engineering Dept., Democritus University of Thrace, Greece.
Stereovision-based fuzzy obstacle avoidance method (2011). In: International Journal of Humanoid Robotics, ISSN 0219-8436, Vol. 8, No. 1, pp. 169-183. Article in journal (Refereed)
    Abstract [en]

This work presents a stereovision-based obstacle avoidance method for autonomous mobile robots. The decision about the direction at each movement step is based on a fuzzy inference system. The proposed method provides an efficient solution that uses a minimum of sensors and avoids computationally complex processes. The only sensor required is a stereo camera. First, a custom stereo algorithm provides reliable depth maps of the environment at frame rates suitable for a robot to move autonomously. Then, a fuzzy decision-making algorithm analyzes the depth maps and deduces the most appropriate direction for the robot to avoid any existing obstacles. The proposed methodology has been tested on a variety of self-captured outdoor images and the results are presented and discussed.
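The decision step the abstract describes (analyze a depth map, deduce a steering direction) can be illustrated with a toy version: split the depth map into left / centre / right thirds, grade each third's "clearance" with a linear membership function, and steer toward the clearest one. The region split, the thresholds `near` and `far`, and the function name are assumptions; the paper's actual fuzzy inference system is richer than this sketch.

```python
def steering_direction(depth_map, near=1.0, far=3.0):
    """Pick a steering direction from a depth map (larger depth = farther).

    Each third of the image gets a clearance score in [0, 1] via a linear
    (triangular-edge) membership between `near` and `far`; the direction
    with the highest average clearance wins."""
    h, w = len(depth_map), len(depth_map[0])
    third = w // 3
    regions = {"left": range(0, third),
               "centre": range(third, 2 * third),
               "right": range(2 * third, w)}

    def clear(d):  # membership degree of 'clear' for one depth reading
        if d <= near:
            return 0.0
        if d >= far:
            return 1.0
        return (d - near) / (far - near)

    score = {name: sum(clear(depth_map[y][x]) for y in range(h) for x in cols)
                   / (h * len(cols))
             for name, cols in regions.items()}
    return max(score, key=score.get)
```

A full fuzzy controller would combine several such memberships ("near", "far", "very near") through rules and defuzzify to a continuous heading; the max-clearance pick above is the degenerate single-rule case.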

  • 337.
    Nalpantidis, Lazaros
    et al.
    Production and Management Engineering Dept., Democritus University of Thrace, Greece.
    Kalomiros, John
    Informatics and Communications Dept., Technological Educational Institute of Serres, Greece.
    Gasteratos, Antonios
    Production and Management Engineering Dept., Democritus University of Thrace, Greece.
Robust 3D vision for robots using dynamic programming (2011). In: 2011 IEEE International Conference on Imaging Systems and Techniques, IST 2011 - Proceedings, VDE Verlag GmbH, 2011, pp. 89-93. Conference paper (Refereed)
    Abstract [en]

In this paper a new stereo vision method is presented that combines a lightness-invariant pixel dissimilarity measure with a dynamic programming depth estimation framework. This method uses concepts such as a proper projection of the HSL colorspace for lightness tolerance, as well as Gestalt-based adaptive support weight aggregation and a dynamic programming optimization scheme. The robust behavior of this method is suitable for the working environments of outdoor robots, where non-ideal lighting conditions often occur. Such problematic conditions heavily affect the efficiency of robot vision algorithms in exploration, military and security applications. The proposed algorithm is presented and applied to standard image sets.

  • 338.
    Nalpantidis, Lazaros
    et al.
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP.
    Kostavelis, I.
    Gasteratos, A.
Intelligent stereo vision in autonomous robot traversability estimation (2013). In: Robotics: Concepts, Methodologies, Tools, and Applications, IGI Global, 2013, Vol. 1, pp. 350-365. Chapter in book, part of anthology (Other academic)
    Abstract [en]

Traversability estimation is the process of assessing whether a robot is able to move across a specific area. Autonomous robots need such an ability to automatically detect and avoid non-traversable areas, and stereo vision is commonly used towards this end, constituting a reliable solution under a variety of circumstances. This chapter discusses two different intelligent approaches to assess the traversability of the terrain in front of a stereo vision-equipped robot. First, an approach based on a fuzzy inference system is examined; then another approach is considered, which extracts geometrical descriptions of the scene depth distribution and uses a trained support vector machine (SVM) to assess the traversability. The two methods are presented and discussed in detail.

  • 339.
    Nalpantidis, Lazaros
    et al.
    Production and Management Engineering Dept., Democritus University of Thrace, Greece.
    Kostavelis, Ioannis
    Production and Management Engineering Dept., Democritus University of Thrace, Greece.
    Gasteratos, Antonios
    Production and Management Engineering Dept., Democritus University of Thrace, Greece.
Stereovision-based algorithm for obstacle avoidance (2009). In: Intelligent Robotics and Applications, Proceedings / [ed] Xie, M; Xiong, Y; Xiong, C; Liu, H; Hu, Z, Springer Berlin/Heidelberg, 2009, Vol. 5928 LNAI, pp. 195-204. Conference paper (Refereed)
    Abstract [en]

    This work presents a vision-based obstacle avoidance algorithm for autonomous mobile robots. It provides an efficient solution that uses a minimum of sensors and avoids, as much as possible, computationally complex processes. The only sensor required is a stereo camera. The proposed algorithm consists of two building blocks. The first one is a stereo algorithm, able to provide reliable depth maps of the scenery in frame rates suitable for a robot to move autonomously. The second building block is a decision making algorithm that analyzes the depth maps and deduces the most appropriate direction for the robot to avoid any existing obstacles. The proposed methodology has been tested on sequences of self-captured outdoor images and its results have been evaluated. The performance of the algorithm is presented and discussed.

  • 340.
    Nalpantidis, Lazaros
    et al.
    Production and Management Engineering Dept., Democritus University of Thrace, Greece.
    Sirakoulis, Georgios Ch.
    Electrical and Computer Engineering Dept., Democritus University of Thrace, Greece.
    Carbone, Andrea
    Computer and System Sciences, Sapienza University of Rome, Italy.
    Gasteratos, Antonios
    Production and Management Engineering Dept., Democritus University of Thrace, Greece.
Computationally effective stereovision SLAM (2010). In: 2010 IEEE International Conference on Imaging Systems and Techniques, IST 2010 - Proceedings, IEEE, 2010, pp. 458-463. Conference paper (Refereed)
    Abstract [en]

    In this paper a visual Simultaneous Localization and Mapping (SLAM) algorithm suitable for indoor area measurement applications is proposed. The algorithm is focused on computational effectiveness. The only sensor used is a stereo camera placed onboard a moving robot. The algorithm processes the acquired images calculating the depth of the scenery, detecting occupied areas and progressively building a map of the environment. The stereo vision-based SLAM algorithm embodies a custom-tailored stereo correspondence algorithm, the robust scale and rotation invariant feature detection and matching Speeded Up Robust Features (SURF) method, a computationally effective v-disparity image calculation scheme, a novel map-merging module, as well as a sophisticated Cellular Automata (CA)-based enhancement stage. The proposed algorithm is suitable for autonomously mapping and measuring indoor areas using robots. The algorithm is presented and experimental results for self-captured image sets are provided and analyzed.
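One of the pipeline components named above, the v-disparity image, has a compact standard form: one disparity histogram per image row. A minimal pure-Python sketch follows (the function name and `max_disp` parameter are assumptions); it is the generic construction, not the paper's specific computationally effective scheme.

```python
def v_disparity(disparity_map, max_disp=16):
    """Build a v-disparity image: one histogram row per image row, where
    column d counts how many pixels in that row have (integer) disparity d.
    In such an image the ground plane appears as a slanted line, while
    vertical obstacles appear as near-vertical segments."""
    out = []
    for row in disparity_map:
        hist = [0] * max_disp
        for d in row:
            di = int(d)
            if 0 <= di < max_disp:  # ignore invalid / out-of-range disparities
                hist[di] += 1
        out.append(hist)
    return out
```

Detecting occupied areas then reduces to line fitting in this much smaller image, which is one reason v-disparity schemes suit computationally constrained SLAM front-ends.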

  • 341.
    Nalpantidis, Lazaros
    et al.
    Production and Management Engineering Dept., Democritus University of Thrace, Greece.
    Sirakoulis, Georgios Ch.
    Electrical and Computer Engineering Dept., Democritus University of Thrace, Greece.
    Gasteratos, Antonios
    Production and Management Engineering Dept., Democritus University of Thrace, Greece.
A dense stereo correspondence algorithm for hardware implementation with enhanced disparity selection (2008). In: Artificial Intelligence: Theories, Models and Applications, SETN 2008 / [ed] Darzentas, J; Vouros, GA; Arnellos, A, Springer Berlin/Heidelberg, 2008, Vol. 5138 LNAI, pp. 365-370. Conference paper (Refereed)
    Abstract [en]

In this paper an effective, hardware-oriented stereo correspondence algorithm, able to produce dense disparity maps of improved fidelity, is presented. The proposed algorithm combines rapid execution, a simple and straightforward structure, as well as comparatively high quality of results. These features render it an ideal candidate for hardware implementation and for real-time applications. The proposed algorithm utilizes the Absolute Differences (AD) as matching cost and aggregates the results inside support windows, assigning Gaussian-distributed weights to the support pixels based on their Euclidean distance. The resulting Disparity Space Image (DSI) is further refined by Cellular Automata (CA) acting in all three dimensions of the DSI. The algorithm is applied to typical as well as to self-recorded real-life image sets. The disparity maps obtained are presented and quantitatively examined.
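The cost computation the abstract describes (absolute differences aggregated over a support window with Gaussian weights on the pixels' Euclidean distance) can be sketched as follows in pure Python. Parameter names (`radius`, `sigma`) and the border handling are assumptions for illustration; the CA refinement of the DSI is omitted.

```python
import math

def gaussian_aggregated_cost(left, right, x, y, d, radius=1, sigma=1.0):
    """Matching cost for pixel (x, y) at disparity d: absolute differences
    aggregated over a square support window, each support pixel weighted by
    a Gaussian of its Euclidean distance from the window centre."""
    h, w = len(left), len(left[0])
    cost = 0.0
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            yy, xl, xr = y + dy, x + dx, x + dx - d
            if 0 <= yy < h and 0 <= xl < w and 0 <= xr < w:  # clip at borders
                weight = math.exp(-(dx * dx + dy * dy) / (2 * sigma * sigma))
                cost += weight * abs(left[yy][xl] - right[yy][xr])
    return cost
```

A winner-take-all stage would then pick, per pixel, the disparity `d` minimizing this cost over the search range; because the weights depend only on fixed window geometry, they can be precomputed, which is what makes the scheme hardware-friendly.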

  • 342.
    Nalpantidis, Lazaros
    et al.
    Production and Management Engineering Dept., Democritus University of Thrace, Greece.
    Sirakoulis, Georgios Ch.
    Electrical and Computer Engineering Dept., Democritus University of Thrace, Greece.
    Gasteratos, Antonios
    Production and Management Engineering Dept., Democritus University of Thrace, Greece.
Non-probabilistic cellular automata-enhanced stereo vision simultaneous localization and mapping (2011). In: Measurement Science and Technology, ISSN 0957-0233, E-ISSN 1361-6501, Vol. 22, No. 11, article 114027. Article in journal (Refereed)
    Abstract [en]

    In this paper, a visual non-probabilistic simultaneous localization and mapping (SLAM) algorithm suitable for area measurement applications is proposed. The algorithm uses stereo vision images as its only input and processes them calculating the depth of the scenery, detecting occupied areas and progressively building a map of the environment. The stereo vision-based SLAM algorithm embodies a stereo correspondence algorithm that is tolerant to illumination differentiations, the robust scale- and rotation-invariant feature detection and matching speeded-up robust features method, a computationally effective v-disparity image calculation scheme, a novel map-merging module, as well as a sophisticated cellular automata-based enhancement stage. A moving robot equipped with a stereo camera has been used to gather image sequences and the system has autonomously mapped and measured two different indoor areas.

  • 343.
    Nalpantidis, Lazaros
    et al.
    Production and Management Engineering Dept., Democritus University of Thrace, Greece.
    Sirakoulis, Georgios Ch.
    Electrical and Computer Engineering Dept., Democritus University of Thrace, Greece.
    Gasteratos, Antonios
    Production and Management Engineering Dept., Democritus University of Thrace, Greece.
    Review of stereo matching algorithms for 3D vision (2007). In: 16th International Symposium on Measurement and Control in Robotics, 2007. Conference paper (Refereed)
    Abstract [en]

    Stereo vision, which yields depth information about a scene, is of great importance in the fields of machine vision, robotics and image analysis. Consequently, several algorithms have been proposed to address the problem of matching points between the two images of a stereo pair. In this paper, an explicit and up-to-date analysis of the existing stereo matching methods is presented in full detail. The algorithms found in the literature can be grouped into those producing sparse output and those giving a dense result, while the latter can be classified as local (area-based) and global (energy-based). The presented algorithms are discussed in terms of speed, accuracy, coverage, time consumption and disparity range. Comparative test results concerning different image sizes as well as different stereo data sets are presented. Furthermore, the use of advanced computational intelligence techniques, such as neural networks and cellular automata, in the development and application of such algorithms is also considered. However, because the resulting depth calculation is a computationally demanding procedure, most of the presented algorithms perform poorly in real-time applications. In this direction, the development of real-time stereo matching algorithms that can be efficiently implemented in dedicated hardware is of great interest in the contexts of 3D reconstruction, simultaneous localization and mapping (SLAM), virtual reality, robot navigation and control. Some possible hardware implementations of stereo matching algorithms for real-time applications are also discussed in detail.

  • 344.
    Nalpantidis, Lazaros
    et al.
    Production and Management Engineering Dept., Democritus University of Thrace, Greece.
    Sirakoulis, Georgios Ch.
    Electrical and Computer Engineering Dept., Democritus University of Thrace, Greece.
    Gasteratos, Antonios
    Production and Management Engineering Dept., Democritus University of Thrace, Greece.
    Review of stereo vision algorithms: From software to hardware (2008). In: International Journal of Optomechatronics, ISSN 1559-9612, Vol. 2, no. 4, pp. 435-462. Review article (Refereed)
    Abstract [en]

    Stereo vision, which yields depth information about a scene, is of great importance in the fields of machine vision, robotics and image analysis. In this article, an explicit and up-to-date analysis of the existing stereo matching methods is presented. The presented algorithms are discussed in terms of speed, accuracy, coverage, time consumption, and disparity range. Toward real-time operation, the development of stereo matching algorithms suitable for efficient hardware implementation is highly desirable. Implementations of stereo matching algorithms in hardware for real-time applications are also discussed in detail.

  • 345.
    Nilsson, John-Olof
    et al.
    KTH, Skolan för elektro- och systemteknik (EES), Signalbehandling. KTH, Skolan för elektro- och systemteknik (EES), Centra, ACCESS Linnaeus Centre.
    Händel, Peter
    KTH, Skolan för elektro- och systemteknik (EES), Signalbehandling. KTH, Skolan för elektro- och systemteknik (EES), Centra, ACCESS Linnaeus Centre.
    Recursive Bayesian Initialization of Localization Based on Ranging and Dead Reckoning (2013). In: Intelligent Robots and Systems (IROS), 2013 IEEE/RSJ International Conference on, IEEE conference proceedings, 2013, pp. 1399-1404. Conference paper (Refereed)
    Abstract [en]

    The initialization of the state estimation in a localization scenario based on ranging and dead reckoning is studied. Specifically, we treat a cooperative localization setup and consider the problem of recursively arriving at a unimodal state estimate with sufficiently low covariance that covariance-based filters can subsequently be used to estimate an agent's state. The initialization of the position of an anchor node is a special case of this. A number of simplifications/assumptions are made such that the estimation problem can be seen as that of estimating the initial agent state given a deterministic surrounding and dead reckoning. This problem is solved by means of a particle filter, and it is described how continual state and covariance estimates are derived from the solution. Finally, simulations are used to illustrate the characteristics of the method, and experimental data are briefly presented.
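    The core particle-filter step in such range-based initialization can be illustrated with a minimal sketch: reweight 2D position particles by the likelihood of a range measurement to a known anchor, and read off the mean and covariance that a covariance-based filter would be handed over once the cloud is unimodal. This is a generic sketch under assumed Gaussian range noise, not the paper's method; all names and parameters are illustrative.

    ```python
    import numpy as np

    def pf_range_update(particles, weights, anchor, range_meas, sigma_r):
        """One measurement update: reweight 2D position particles by the
        likelihood of a range measurement to a known anchor position."""
        d = np.linalg.norm(particles - anchor, axis=1)         # predicted ranges
        like = np.exp(-0.5 * ((range_meas - d) / sigma_r) ** 2)
        weights = weights * like
        return weights / weights.sum()                          # renormalize

    def estimate(particles, weights):
        """Mean and covariance of the weighted particle cloud; once the
        covariance is small enough, hand over to a covariance-based filter."""
        mean = weights @ particles
        centred = particles - mean
        cov = (weights[:, None] * centred).T @ centred
        return mean, cov
    ```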

  • 346.
    Nilsson, John-Olof
    et al.
    KTH, Skolan för elektro- och systemteknik (EES), Signalbehandling. KTH, Skolan för elektro- och systemteknik (EES), Centra, ACCESS Linnaeus Centre.
    Kristensen, Johan
    KTH, Skolan för elektro- och systemteknik (EES), Signalbehandling. KTH, Skolan för elektro- och systemteknik (EES), Centra, ACCESS Linnaeus Centre.
    Händel, Peter
    KTH, Skolan för elektro- och systemteknik (EES), Signalbehandling. KTH, Skolan för elektro- och systemteknik (EES), Centra, ACCESS Linnaeus Centre.
    A unifying approach to feature point orientation assignment. Manuscript (preprint) (Other academic)
    Abstract [en]

    A general structure of an orientation measure for orientation assignment of 2D image feature points is heuristically motivated. The measure is discretized and general approximation methods are presented. Orientation assignment methods found in the literature are shown to exploit special cases of the general measure, which thereby provides a unifying framework for them. An analytical robustness analysis is conducted, giving a number of desirable robustness properties of the measure components. Following this, a detailed treatment of implementation issues such as gradient sampling and binning is given, and based on the desirable properties and other implementation considerations, specific measure components with implementations are suggested. Together, this constitutes a novel feature point orientation assignment method, which we have called RAID. We argue that this method is considerably less expensive than comparable methods in the literature, and by means of a quantitative perturbation analysis, a significantly improved orientation assignment repeatability is demonstrated compared with the available methods found in the literature.
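    For context, one of the classic special cases the abstract says the general measure unifies is histogram-of-gradients orientation assignment (as used in SIFT-style pipelines). A minimal sketch of that special case, not of RAID itself, with illustrative bin count and patch handling:

    ```python
    import numpy as np

    def dominant_orientation(patch, bins=36):
        """Classic orientation assignment: histogram gradient directions over
        a patch, weighted by gradient magnitude, and return the centre of
        the peak bin (in radians, [0, 2*pi))."""
        gy, gx = np.gradient(patch.astype(float))      # image gradients
        mag = np.hypot(gx, gy)                         # magnitude weights
        ang = np.arctan2(gy, gx) % (2 * np.pi)         # direction in [0, 2*pi)
        hist, edges = np.histogram(ang, bins=bins, range=(0, 2 * np.pi),
                                   weights=mag)
        k = int(np.argmax(hist))
        return 0.5 * (edges[k] + edges[k + 1])         # centre of the peak bin
    ```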

     

  • 347.
    Nilsson, John-Olof
    et al.
    KTH, Skolan för elektro- och systemteknik (EES), Signalbehandling. KTH, Skolan för elektro- och systemteknik (EES), Centra, ACCESS Linnaeus Centre.
    Kristensen, Johan
    KTH, Skolan för elektro- och systemteknik (EES), Signalbehandling. KTH, Skolan för elektro- och systemteknik (EES), Centra, ACCESS Linnaeus Centre.
    Händel, Peter
    KTH, Skolan för elektro- och systemteknik (EES), Signalbehandling. KTH, Skolan för elektro- och systemteknik (EES), Centra, ACCESS Linnaeus Centre.
    Fast scale-space approximation by cascaded box filters and integer signal representation. Manuscript (preprint) (Other academic)
    Abstract [en]

    A method for computationally inexpensive approximation of a set of scale levels in a Gaussian scale-space is presented. Box filters of different widths and orientations are cascaded to approximate the Gaussian kernels. The signal is downsampled at higher scales to reduce the number of samples and thereby the computational cost. Integer signal representation is used throughout the filtering, and the signal is downshifted as required to stay within the numerical range of the representation. The filtering requires only add (and subtract) and shift operations. An optimization problem is formulated for designing the filter cascade, and a branch-and-bound technique is used to solve it. The level of approximation versus the computational cost is studied, and based on a qualitative comparison with state-of-the-art approximation methods, it is concluded that the presented method shows unprecedentedly low computational cost and unique properties for low-cost Gaussian scale-space approximation.
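    The underlying idea, that a cascade of box filters approaches a Gaussian by the central limit theorem, can be sketched in a few lines. This floating-point 1D sketch illustrates the principle only; the paper's method works on 2D signals with integer arithmetic, downsampling, and an optimized cascade design, none of which is shown here.

    ```python
    import numpy as np

    def box_filter(signal, width):
        """Moving-average (box) filter; in an integer implementation this
        needs only additions, subtractions and a shift."""
        kernel = np.ones(width) / width
        return np.convolve(signal, kernel, mode='same')

    def cascaded_box(signal, widths):
        """Cascade several box filters. By the central limit theorem the
        composite kernel approaches a Gaussian whose variance is the sum
        of the individual box variances (width**2 - 1) / 12."""
        out = signal
        for w in widths:
            out = box_filter(out, w)
        return out
    ```

    Filtering a unit impulse through, e.g., three width-5 boxes yields a bell-shaped composite kernel of variance 3 * (25 - 1) / 12 = 6, i.e. an approximation of a Gaussian with sigma about 2.45.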

     

  • 348.
    Nilsson, John-Olof
    et al.
    KTH, Skolan för elektro- och systemteknik (EES), Signalbehandling. KTH, Skolan för elektro- och systemteknik (EES), Centra, ACCESS Linnaeus Centre.
    Zachariah, Dave
    KTH, Skolan för elektro- och systemteknik (EES), Signalbehandling. KTH, Skolan för elektro- och systemteknik (EES), Centra, ACCESS Linnaeus Centre.
    Skoog, Isaac
    KTH, Skolan för elektro- och systemteknik (EES), Signalbehandling. KTH, Skolan för elektro- och systemteknik (EES), Centra, ACCESS Linnaeus Centre.
    Händel, Peter
    KTH, Skolan för elektro- och systemteknik (EES), Signalbehandling. KTH, Skolan för elektro- och systemteknik (EES), Centra, ACCESS Linnaeus Centre.
    Cooperative localization by dual foot-mounted inertial sensors and inter-agent ranging (2013). In: EURASIP Journal on Advances in Signal Processing, ISSN 1687-6172, E-ISSN 1687-6180, Vol. 164. Journal article (Refereed)
    Abstract [en]

    The implementation challenges of cooperative localization by dual foot-mounted inertial sensors and inter-agent ranging are discussed, and work on the subject is reviewed. System architecture and sensor fusion are identified as key challenges. A partially decentralized system architecture based on step-wise inertial navigation and step-wise dead reckoning is presented. This architecture is argued to reduce the computational cost and required communication bandwidth by around two orders of magnitude while incurring only negligible information loss in comparison with a naive centralized implementation. This makes a joint global state estimation feasible for up to a platoon-sized group of agents. Furthermore, robust and low-cost sensor fusion for the considered setup, based on state space transformation and marginalization, is presented. The transformation and marginalization are used to give the necessary flexibility for the presented sampling-based updates for the inter-agent ranging and the ranging-free fusion of the two feet of an individual agent. Finally, characteristics of the suggested implementation are demonstrated with simulations and a real-time system implementation.

  • 349.
    Nilsson, Ulrik
    et al.
    Department of Autonomous Systems Swedish Defence Research Agency.
    Ögren, Petter
    Department of Autonomous Systems Swedish Defence Research Agency.
    Thunberg, Johan
    Swedish Defence Research Institute (FOI).
    Towards Optimal Positioning of Surveillance UGVs (2009). In: / [ed] Hirsch, MJ; Commander, CW; Pardalos, PM; Murphey, R, Springer Berlin/Heidelberg, 2009, pp. 221-233. Conference paper (Refereed)
    Abstract [en]

    Unmanned Ground Vehicles (UGVs) equipped with surveillance cameras present a flexible complement to the numerous stationary sensors used in security applications today. However, to take full advantage of the flexibility and speed offered by a group of UGV platforms, a fast way to compute desired camera locations to cover an area or a set of buildings, e.g., in response to an alarm, is needed. Building upon earlier results in terrain guarding and sensor placement, we propose a way to find candidate guard positions that satisfy a large set of view-angle and range constraints simultaneously. Since the original problem is NP-complete, we do not seek to find the true optimal set of guard positions. Instead, a near-optimal subset of the candidate points is chosen using a scheme with a known approximation ratio of O(log(n)). A number of examples are presented to illustrate the approach.
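    The O(log(n)) approximation mentioned in the abstract is the guarantee of the standard greedy set-cover heuristic: repeatedly pick the candidate position covering the most still-uncovered targets. A generic sketch of that heuristic (not the paper's specific scheme; candidate coverage sets are assumed to be precomputed from the view-angle and range constraints):

    ```python
    def greedy_set_cover(universe, candidates):
        """Greedy set cover: 'universe' is the set of targets to guard,
        'candidates' maps each candidate guard position to the set of
        targets it covers. Returns a chosen subset of positions with the
        classic O(log n) approximation guarantee."""
        uncovered = set(universe)
        chosen = []
        while uncovered:
            # pick the candidate covering the most uncovered targets
            best = max(candidates, key=lambda c: len(candidates[c] & uncovered))
            if not candidates[best] & uncovered:
                raise ValueError("remaining targets not coverable")
            chosen.append(best)
            uncovered -= candidates[best]
        return chosen
    ```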

  • 350.
    Nordström, Marcus
    et al.
    KTH, Skolan för teknikvetenskap (SCI), Matematik (Inst.), Matematisk statistik.
    Hult, Henrik
    KTH, Skolan för teknikvetenskap (SCI), Matematik (Inst.), Matematisk statistik.
    Maki, Atsuto
    KTH, Skolan för elektroteknik och datavetenskap (EECS), Robotik, perception och lärande, RPL.
    Löfman, Fredrik
    Raysearch Labs, Stockholm, Sweden..
    Pareto Dose Prediction Using Fully Convolutional Networks Operating in 3D (2018). In: Medical physics (Lancaster), ISSN 0094-2405, Vol. 45, no. 6, pp. E176-E176. Journal article (Other academic)