101 - 150 of 470
  • 101. Ek, Carl Henrik
    et al.
    Rihan, J.
    Torr, P.
    Rogez, G.
    Lawrence, Neil D.
    Ambiguity modeling in latent spaces (2008). In: MACHINE LEARNING FOR MULTIMODAL INTERACTION, PROCEEDINGS / [ed] Popescu-Belis, A; Stiefelhagen, R, Berlin: Springer-Verlag, 2008, pp. 62-73. Conference paper (Refereed)
    Abstract [en]

    We are interested in the situation where we have two or more representations of an underlying phenomenon. In particular we are interested in the scenario where the representations are complementary. This implies that a single individual representation is not sufficient to fully discriminate a specific instance of the underlying phenomenon; it also means that each representation is an ambiguous representation of the other complementary spaces. In this paper we present a latent variable model capable of consolidating multiple complementary representations. Our method extends canonical correlation analysis by introducing additional latent spaces that are specific to the different representations, thereby explaining the full variance of the observations. These additional spaces, explaining representation-specific variance, separately model the variance in a representation ambiguous to the other. We develop a spectral algorithm for fast computation of the embeddings and a probabilistic model (based on Gaussian processes) for validation and inference. The proposed model has several potential application areas; we demonstrate its use for multi-modal regression on a benchmark human pose estimation data set.

  • 102.
    Ek, Carl Henrik
    et al.
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP. KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS.
    Song, Dan
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP. KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS.
    Huebner, Kai
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP. KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS.
    Kragic, Danica
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP. KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS.
    Exploring affordances in robot grasping through latent structure representation (2010). In: The 11th European Conference on Computer Vision (ECCV 2010), 2010. Conference paper (Refereed)
  • 103.
    Ek, Carl Henrik
    et al.
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP. KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS.
    Song, Dan
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP. KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS.
    Huebner, Kai
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP. KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS.
    Kragic, Danica
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP. KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS.
    Task Modeling in Imitation Learning using Latent Variable Models (2010). In: 2010 10th IEEE-RAS International Conference on Humanoid Robots, Humanoids 2010, 2010, pp. 458-553. Conference paper (Refereed)
    Abstract [en]

    An important challenge in robotic research is learning and reasoning about different manipulation tasks from scene observations. In this paper we present a probabilistic model capable of modeling several different types of input sources within the same model. Our model is capable of inferring the task using only partial observations. Further, our framework allows the robot, given partial knowledge of the scene, to reason about which information streams to acquire in order to disambiguate the state-space the most. We present results for task classification and also reason about the discriminative power of different features for different classes of tasks.

  • 104.
    Ek, Carl Henrik
    et al.
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP.
    Song, Dan
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP.
    Kragic, Danica
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP.
    Learning Conditional Structures in Graphical Models from a Large Set of Observation Streams through efficient Discretisation (2011). In: IEEE International Conference on Robotics and Automation, Workshop on Manipulation under Uncertainty, 2011. Conference paper (Refereed)
  • 105. Ek, Carl Henrik
    et al.
    Torr, Phil
    Lawrence, Neil D.
    Gaussian process latent variable models for human pose estimation (2007). In: MACHINE LEARNING FOR MULTIMODAL INTERACTION / [ed] Belis, AP; Renals, S; Bourlard, H, 2007, pp. 132-143. Conference paper (Refereed)
    Abstract [en]

    We describe a method for recovering 3D human body pose from silhouettes. Our model is based on learning a latent space using the Gaussian Process Latent Variable Model (GP-LVM) [1] encapsulating both pose and silhouette features. Our method is generative; this allows us to model the ambiguities of a silhouette representation in a principled way. We learn a dynamical model over the latent space which allows us to disambiguate between ambiguous silhouettes by temporal consistency. The model has only two free parameters and has several advantages over both regression approaches and other generative methods. In addition to the application shown in this paper, the suggested model is easily extended to multiple observation spaces without constraints on type.

  • 106.
    Ekberg, Peter
    et al.
    KTH, Skolan för industriell teknik och management (ITM), Industriell produktion.
    Daemi, Bita
    KTH, Skolan för industriell teknik och management (ITM), Industriell produktion.
    Mattsson, Lars
    KTH, Skolan för industriell teknik och management (ITM), Industriell produktion.
    3D precision measurements of meter sized surfaces using low cost illumination and camera techniques (2017). In: Measurement Science and Technology, ISSN 0957-0233, E-ISSN 1361-6501, Vol. 28, no. 4, article id 045403. Journal article (Refereed)
    Abstract [en]

    Using dedicated stereo camera systems and structured light is a well-known method for measuring the 3D shape of large surfaces. However, the problem is not trivial when high accuracy, in the range of a few tens of microns, is needed. Many error sources need to be handled carefully in order to obtain high quality results. In this study, we present a measurement method based on low-cost camera and illumination solutions combined with high-precision image analysis and a new approach in camera calibration and 3D reconstruction. The setup consists of two ordinary digital cameras and a Gobo projector as a structured light source. A matrix of dots is projected onto the target area. The two cameras capture the images of the projected pattern on the object. The images are processed by advanced subpixel resolution algorithms prior to the application of the 3D reconstruction technique. The strength of the method lies in a different approach for calibration, 3D reconstruction, and high-precision image analysis algorithms. Using a 10 mm pitch pattern of the light dots, the method is capable of reconstructing the 3D shape of surfaces. The precision (1σ repeatability) in the measurements is < 10 µm over a volume of 60 x 50 x 10 cm³ at a hardware cost of ~2% of available advanced measurement techniques. The expanded uncertainty (95% confidence level) is estimated to be 83 µm, with the largest uncertainty contribution coming from the absolute length of the metal ruler used as reference.

  • 107. Eklundh, Jan-Olof
    et al.
    Uhlin, Tomas
    Nordlund, Peter
    Maki, Atsuto
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP.
    Active Vision and Seeing Robots (1996). In: International Symposium on Robotics Research, 1996. Conference paper (Refereed)
  • 108. Eklundh, Jan-Olof
    et al.
    Uhlin, Tomas
    Nordlund, Peter
    Maki, Atsuto
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP.
    Developing an Active Observer (1995). In: Asian Conference on Computer Vision, 1995, Vol. 1035, pp. 181-190. Conference paper (Refereed)
  • 109.
    Ekström, Johan
    KTH, Skolan för datavetenskap och kommunikation (CSC).
    Obstacle avoidance for platforms in three-dimensional environments (2016). Independent thesis, advanced level (Master's degree), 20 credits / 30 HE credits. Student thesis (Degree project)
    Abstract [en]

    Collision avoidance is a well-researched field. Despite this, research on collision avoidance methods in three dimensions is surprisingly sparse. For platforms that can navigate three-dimensional space, such as multirotor-based drones, such methods will become more common.

    This thesis presents a collision avoidance method intended for three-dimensional space. First, the dimensionality of the three-dimensional space is reduced by projecting obstacle observations onto a two-dimensional spherical sheet, in the form of a depth map that retains information about direction and distance to obstacles. Next, the platform's dimensions are taken into account by applying a post-processing step to the depth map. Finally, with knowledge of the motion model, a verification step uses the information in the depth map to ensure that the platform does not collide with any obstacles, by disallowing control inputs that lead to collisions. If, after the verification step, several candidate control inputs remain that lead to velocity vectors close to a desired velocity vector, a heuristic cost function, which weighs the similarity in direction and magnitude between the resulting vector and the desired velocity vector, is used to select one of them.

    Evaluation of the method shows that platforms can maintain distance to obstacles. However, further work is proposed to improve the reliability of the method and to evaluate it in the real world.

  • 110.
    Ekvall, Staffan
    et al.
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP.
    Aarno, Daniel
    KTH, Skolan för datavetenskap och kommunikation (CSC), Numerisk Analys och Datalogi, NADA.
    Kragic, Danica
    KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS.
    Online task recognition and real-time adaptive assistance for computer-aided machine control (2006). In: IEEE Transactions on Robotics, ISSN 1552-3098, E-ISSN 1941-0468, Vol. 22, no. 5, pp. 1029-1033. Journal article (Refereed)
    Abstract [en]

    Segmentation and recognition of operator-generated motions are commonly facilitated to provide appropriate assistance during task execution in teleoperative and human-machine collaborative settings. The assistance is usually provided in a virtual fixture framework where the level of compliance can be altered online, thus improving the performance in terms of execution time and overall precision. However, the fixtures are typically inflexible, resulting in a degraded performance in cases of unexpected obstacles or incorrect fixture models. In this paper, we present a method for online task tracking and propose the use of adaptive virtual fixtures that can cope with the above problems. Here, rather than executing a predefined plan, the operator has the ability to avoid unforeseen obstacles and deviate from the model. To allow this, the probability of following a certain trajectory (subtask) is estimated and used to automatically adjust the compliance, thus providing the online decision of how to fixture the movement.

  • 111.
    Englesson, Erik
    et al.
    KTH, Skolan för elektroteknik och datavetenskap (EECS), Robotik, perception och lärande, RPL.
    Azizpour, Hossein
    KTH, Skolan för elektroteknik och datavetenskap (EECS), Robotik, perception och lärande, RPL.
    Efficient Evaluation-Time Uncertainty Estimation by Improved Distillation (2019). Conference paper (Refereed)
  • 112. Erkent, Ozgur
    et al.
    Karaoguz, Hakan
    KTH, Skolan för datavetenskap och kommunikation (CSC), Robotik, perception och lärande, RPL.
    Bozma, H. Isil
    Hierarchically self-organizing visual place memory (2017). In: Advanced Robotics, ISSN 0169-1864, E-ISSN 1568-5535, Vol. 31, no. 16, pp. 865-879. Journal article (Refereed)
    Abstract [en]

    A hierarchically organized visual place memory enables a robot to associate with its respective knowledge efficiently. In this paper, we consider how this organization can be done by the robot on its own throughout its operation and introduce an approach that is based on the agglomerative method SLINK. The hierarchy is obtained from a single link cluster analysis that is carried out based on similarity in the appearance space. As such, the robot can incrementally incorporate the knowledge of places into its visual place memory over the long term. The resulting place memory has an order-invariant hierarchy that enables both storage and construction efficiency. Experimental results obtained under the guided operation of the robot demonstrate that the robot is able to organize its place knowledge and relate to it efficiently. This is followed by experimental results under autonomous operation in which the robot evolves its visual place memory completely on its own.

  • 113.
    Fagerström, Daniel
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP.
    Spatio-Temporal Scale-Space Theory (2011). Doctoral thesis, comprehensive summary (Other academic)
    Abstract [en]

    This thesis addresses two important topics in developing a systematic space-time geometric approach to real-time, low-level motion vision. The first one concerns measuring of image flow, while the second one focuses on how to find low level features.

    We argue for studying motion vision in terms of space-time geometry rather than in terms of two (or a few) consecutive image frames. The use of Galilean geometry and Galilean similarity geometry for this purpose is motivated and the relevant geometrical background is reviewed.

    In order to measure the visual signal in a way that respects the geometry of the situation and the causal nature of time, we argue that a time causal Galilean spatio-temporal scale-space is needed. The scale-space axioms are chosen so that they generalize popular axiomatizations of spatial scale-space to spatio-temporal geometries.

    To be able to derive the scale-space, an infinitesimal framework for scale-spaces that respects a more general class of Lie groups (compared to previous theory) is developed and applied.

    Perhaps surprisingly, we find that with the chosen axiomatization, a time causal Galilean scale-space is not possible as an evolution process on space and time. However, it is possible on space and memory. We argue that this actually is a more accurate and realistic model of motion vision.

    While the derivation of the time causal Galilean spatio-temporal scale-spaces requires some exotic mathematics, the end result is as simple as one could possibly hope for and a natural extension of spatial scale-spaces. The unique infinitesimally generated scale-space is an ordinary diffusion equation with drift on memory and a diffusion equation on space. The drift is used for velocity adaptation, the "velocity adaptation" part of Galilean geometry (the Galilean boost), and the temporal scale-space acts as memory.

    Lifting the restriction of infinitesimally generated scale spaces, we arrive at a new family of scale-spaces. These are generated by a family of fractional differential evolution equations that generalize the ordinary diffusion equation. The same type of evolution equations have recently become popular in research in e.g. financial and physical modeling.

    The second major topic in this thesis is extraction of features from an image flow. A set of low-level features can be derived by classifying basic Galilean differential invariants. We proceed to derive invariants for two main cases: when the spatio-temporal gradient cuts the image plane and when it is tangent to the image plane. The former case corresponds to isophote curve motion and the latter to creation and disappearance of image structure, a case that is not well captured by the theory of optical flow.

    The Galilean differential invariants that are derived are equivalent to curl, divergence, deformation and acceleration. These invariants are normally calculated in terms of optical flow, but here they are instead calculated directly from the spatio-temporal image.

  • 114. Ferri, Stefania
    et al.
    Pauwels, Karl
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP.
    Rizzolatti, Giacomo
    Orban, Guy
    Stereoscopically Observing Manipulative Actions (2016). In: Cerebral Cortex, ISSN 1047-3211, E-ISSN 1460-2199. Journal article (Refereed)
    Abstract [en]

    The purpose of this study was to investigate the contribution of stereopsis to the processing of observed manipulative actions. To this end, we first combined the factors “stimulus type” (action, static control, and dynamic control), “stereopsis” (present, absent) and “viewpoint” (frontal, lateral) into a single design. Four sites in premotor, retro-insular (2) and parietal cortex operated specifically when actions were viewed stereoscopically and frontally. A second experiment clarified that the stereo-action-specific regions were driven by actions moving out of the frontoparallel plane, an effect amplified by frontal viewing in premotor cortex. Analysis of single voxels and their discriminatory power showed that the representation of action in the stereo-action-specific areas was more accurate when stereopsis was active. Further analyses showed that the 4 stereo-action-specific sites form a closed network converging onto the premotor node, which connects to parietal and occipitotemporal regions outside the network. Several of the specific sites are known to process vestibular signals, suggesting that the network combines observed actions in peripersonal space with gravitational signals. These findings have wider implications for the function of premotor cortex and the role of stereopsis in human behavior.

  • 115.
    Folkesson, John
    et al.
    Massachusetts Institute of Technology, Cambridge, MA.
    Christensen, Henrik
    Georgia Institute of Technology, Atlanta, GA.
    SIFT Based Graphical SLAM on a Packbot (2008). In: Springer Tracts in Advanced Robotics, ISSN 1610-7438, E-ISSN 1610-742X, Vol. 42, pp. 317-328. Journal article (Refereed)
    Abstract [en]

    We present an implementation of Simultaneous Localization and Mapping (SLAM) that uses infrared (IR) camera images collected at 10 Hz from a Packbot robot. The Packbot has a number of challenging characteristics with regard to vision based SLAM. The robot travels on tracks, which causes the odometry to be poor, especially while turning. The IMU is of relatively low quality as well, making the drift in the motion prediction greater than on conventional robots. In addition, the very low placement of the camera and its fixed orientation looking forward is not ideal for estimating motion from the images. Several novel ideas are tested here. Harris corners are extracted from every 5th frame and used as image features for our SLAM. Scale Invariant Feature Transform (SIFT) descriptors are formed from each of these. These are used to match image features over these 5-frame intervals. Lucas-Kanade tracking is done to find corresponding pixels in the frames between the SIFT frames. This allows a substantial computational savings over doing SIFT matching every frame. The epipolar constraints between all these matches that are implied by the dead-reckoning are used to further test the matches and eliminate poor features. Finally, the features are initialized on the map at once using an inverse depth parameterization, which eliminates the delay in initialization of the 3D point features.

  • 116.
    Folkesson, John
    et al.
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP.
    Jensfelt, Patric
    KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS. KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP.
    Christensen, Henrik
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP.
    Graphical SLAM using vision and the measurement subspace (2005). In: 2005 IEEE/RSJ INTERNATIONAL CONFERENCE ON INTELLIGENT ROBOTS AND SYSTEMS, VOLS 1-4, IEEE conference proceedings, 2005, pp. 325-330. Conference paper (Refereed)
    Abstract [en]

    In this paper we combine a graphical approach for simultaneous localization and mapping, SLAM, with a feature representation that addresses symmetries and constraints in the feature coordinates, the measurement subspace, M-space. The graphical method has the advantages of delayed linearizations and soft commitment to feature measurement matching. It also allows large maps to be built up as a network of small local patches, star nodes. This local map net is then easier to work with. The formation of the star nodes is explicitly stable and invariant with all the symmetries of the original measurements. All linearization errors are kept small by using a local frame. The construction of this invariant star is made clearer by the M-space feature representation. The M-space allows the symmetries and constraints of the measurements to be explicitly represented. We present results using both vision and laser sensors.

  • 117.
    Fredenberg, Erik
    et al.
    KTH, Skolan för teknikvetenskap (SCI), Fysik, Medicinsk avbildning.
    Hemmendorff, Magnus
    Cederström, Björn
    KTH, Skolan för teknikvetenskap (SCI), Fysik, Medicinsk avbildning.
    Åslund, Magnus
    Danielsson, Mats
    KTH, Skolan för teknikvetenskap (SCI), Fysik, Medicinsk avbildning.
    Contrast-enhanced spectral mammography with a photon-counting detector (2010). In: Medical Physics (Lancaster), ISSN 0094-2405, Vol. 37, no. 5, pp. 2017-2029. Journal article (Refereed)
    Abstract [en]

    Purpose: Spectral imaging is a method in medical x-ray imaging to extract information about the object constituents by the material-specific energy dependence of x-ray attenuation. In particular, the detectability of a contrast agent can be improved over a lumpy background. We have investigated a photon-counting spectral imaging system with two energy bins for contrast-enhanced mammography. System optimization and the potential benefit compared to conventional non-energy-resolved imaging was studied.

    Methods: A framework for system characterization was set up that included quantum and anatomical noise, and a theoretical model of the system was benchmarked to phantom measurements.

    Results: It was found that optimal combination of the energy-resolved images corresponded approximately to minimization of the anatomical noise, and an ideal-observer detectability index could be improved by more than a factor of two compared to absorption imaging in the phantom study. In the clinical case, an improvement close to 80% was predicted for an average glandularity breast, and a factor of eight for dense breast tissue. Another 70% was found to be within reach for an optimized system.

    Conclusions: Contrast-enhanced spectral mammography is feasible and beneficial with the current system, and there is room for additional improvements.

  • 118.
    Frennert, Susanne
    et al.
    Lund University, Sweden.
    Eftring, Håkan
    Lund University, Sweden.
    Östlund, Britt
    Lund University, Sweden.
    Using attention cards to facilitate active participation in eliciting old adults' requirements for assistive robots (2013). In: Proceedings - IEEE International Workshop on Robot and Human Interactive Communication, 2013, pp. 774-779. Conference paper (Refereed)
    Abstract [en]

    Engaging old users in the exploration of future product concepts can be challenging. It is of great value to find ways to actively involve them in the design of novel technologies intended for them, particularly when they have no prior experience of the technology in question. One obstacle in this process is that many old people do not identify themselves as being old or they think that it (the technology) would be good for others but not themselves. This paper presents a card method to overcome this obstacle. A full-day workshop with three internal focus groups was run with 14 participants. Based on our experience, we propose a way in which active participation in the process of eliciting user requirements for assistive robots from old users with no prior experience of assistive robots can be carried out.

  • 119.
    Frid, Emma
    et al.
    KTH, Skolan för datavetenskap och kommunikation (CSC), Medieteknik och interaktionsdesign, MID.
    Bresin, Roberto
    KTH, Skolan för datavetenskap och kommunikation (CSC), Medieteknik och interaktionsdesign, MID.
    Alexanderson, Simon
    KTH, Skolan för elektroteknik och datavetenskap (EECS).
    Perception of Mechanical Sounds Inherent to Expressive Gestures of a NAO Robot - Implications for Movement Sonification of Humanoids (2018). In: Proceedings of the 15th Sound and Music Computing Conference / [ed] Anastasia Georgaki and Areti Andreopoulou, Limassol, Cyprus, 2018. Conference paper (Refereed)
    Abstract [en]

    In this paper we present a pilot study carried out within the project SONAO. The SONAO project aims to compensate for limitations in robot communicative channels with an increased clarity of Non-Verbal Communication (NVC) through expressive gestures and non-verbal sounds. More specifically, the purpose of the project is to use movement sonification of expressive robot gestures to improve Human-Robot Interaction (HRI). The pilot study described in this paper focuses on mechanical robot sounds, i.e. sounds that have not been specifically designed for HRI but are inherent to robot movement. Results indicated a low correspondence between perceptual ratings of mechanical robot sounds and emotions communicated through gestures. In general, the mechanical sounds themselves appeared not to carry much emotional information compared to video stimuli of expressive gestures. However, some mechanical sounds did communicate certain emotions, e.g. frustration. In general, the sounds appeared to communicate arousal more effectively than valence. We discuss potential issues and possibilities for the sonification of expressive robot gestures and the role of mechanical sounds in such a context. Emphasis is put on the need to mask or alter sounds inherent to robot movement, using for example blended sonification.

  • 120.
    Fukui, Kazuhiro
    et al.
    Tsukuba University, Japan.
    Maki, Atsuto
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP.
    Difference subspace and its generalization for subspace-based methods (2015). In: IEEE Transactions on Pattern Analysis and Machine Intelligence, ISSN 0162-8828, E-ISSN 1939-3539, Vol. 37, no. 11, pp. 2164-2177. Journal article (Refereed)
    Abstract [en]

    Subspace-based methods are known to provide a practical solution for image set-based object recognition. Based on the insight that local shape differences between objects offer a sensitive cue for recognition, this paper addresses the problem of extracting a subspace representing the difference components between class subspaces generated from each set of object images independently of each other. We first introduce the difference subspace (DS), a novel geometric concept between two subspaces as an extension of a difference vector between two vectors, and describe its effectiveness in analyzing shape differences. We then generalize it to the generalized difference subspace (GDS) for multi-class subspaces, and show the benefit of applying this to subspace and mutual subspace methods, in terms of recognition capability. Furthermore, we extend these methods to kernel DS (KDS) and kernel GDS (KGDS) by a nonlinear kernel mapping to deal with cases involving larger changes in viewing direction. In summary, the contributions of this paper are as follows: 1) a DS/KDS between two class subspaces characterizes shape differences between the two respectively corresponding objects, 2) the projection of an input vector onto a DS/KDS realizes selective visualization of shape differences between objects, and 3) the projection of an input vector or subspace onto a GDS/KGDS is extremely effective at extracting differences between multiple subspaces, and therefore improves object recognition performance. We demonstrate validity through shape analysis on synthetic and real images of 3D objects as well as extensive comparison of performance on classification tests with several related methods; we study the performance in face image classification on the Yale face database B+ and the CMU Multi-PIE database, and hand shape classification of multi-view images.

  • 121. Ge, Q.
    et al.
    Shen, F.
    Jing, X. -Y
    Wu, F.
    Xie, S. -P
    Yue, D.
    Li, Haibo
    KTH, Skolan för datavetenskap och kommunikation (CSC), Medieteknik och interaktionsdesign, MID.
    Active contour evolved by joint probability classification on Riemannian manifold (2016). In: Signal, Image and Video Processing, ISSN 1863-1703, E-ISSN 1863-1711, Vol. 10, no. 7, pp. 1257-1264. Journal article (Refereed)
    Abstract [en]

    In this paper, we present an active contour model for image segmentation based on a nonparametric distribution metric without any intensity a priori of the image. A novel nonparametric distance metric, which is called joint probability classification, is established to drive the active contour avoiding the instability induced by multimodal intensity distribution. Considering an image as a Riemannian manifold with spatial and intensity information, the contour evolution is performed on the image manifold by embedding geometric image feature into the active contour model. The experimental results on medical and texture images demonstrate the advantages of the proposed method.

  • 122. Ge, Qi
    et al.
    Jing, Xiao-Yuan
    Wu, Fei
    Wei, Zhi-Hui
    Xiao, Liang
    Shao, Wen-Ze
    Yue, Dong
    Li, Hai-Bo
    KTH, Skolan för datavetenskap och kommunikation (CSC), Medieteknik och interaktionsdesign, MID.
    Structure-Based Low-Rank Model With Graph Nuclear Norm Regularization for Noise Removal2017Ingår i: IEEE Transactions on Image Processing, ISSN 1057-7149, E-ISSN 1941-0042, Vol. 26, nr 7, s. 3098-3112Artikel i tidskrift (Refereegranskat)
    Abstract [en]

    Nonlocal image representation methods, including group-based sparse coding and block-matching 3-D filtering, have shown great performance in low-level vision tasks. The nonlocal prior is extracted from each group consisting of patches with similar intensities. Grouping patches based on intensity similarity, however, introduces disturbance and inaccuracy into the estimation of the true image. To address this problem, we propose a structure-based low-rank model with graph nuclear norm regularization. We exploit the local manifold structure inside a patch and group the patches by the distance metric of manifold structure. With the manifold structure information, a graph nuclear norm regularization is established and incorporated into a low-rank approximation model. We then prove that the graph-based regularization is equivalent to a weighted nuclear norm and that the proposed model can be solved by a weighted singular-value thresholding algorithm. Extensive experiments on additive white Gaussian noise removal and mixed noise removal demonstrate that the proposed method achieves better performance than several state-of-the-art algorithms.
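    The weighted singular-value thresholding step mentioned in the abstract has a compact closed form: soft-threshold each singular value by its own weight and rebuild the matrix. A minimal NumPy sketch (the function name and the uniform weight choice are illustrative, not the paper's):

```python
import numpy as np

def weighted_svt(Y, weights):
    """Weighted singular-value thresholding: shrink each singular value
    of Y by its corresponding weight, clamp at zero, and reconstruct."""
    U, s, Vt = np.linalg.svd(Y, full_matrices=False)
    s_shrunk = np.maximum(s - np.asarray(weights, dtype=float), 0.0)
    return U @ (s_shrunk[:, None] * Vt)   # equals U @ diag(s_shrunk) @ Vt

# Larger weights shrink (and eventually zero out) the weaker components,
# which is how such models suppress noise while keeping dominant structure.
rng = np.random.default_rng(0)
Y = rng.standard_normal((6, 4))
X = weighted_svt(Y, weights=np.full(4, 0.5))
```

With a uniform weight this reduces to the standard nuclear-norm proximal operator; per-component weights let strong structure pass through while weak (noisy) components are suppressed more aggressively.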

  • 123.
    Ghadirzadeh, Ali
    et al.
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP.
    Maki, Atsuto
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP.
    Björkman, Mårten
    KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS. KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP.
    A Sensorimotor Approach for Self-Learning of Hand-Eye Coordination2015Ingår i: IEEE/RSJ International Conference onIntelligent Robots and Systems, Hamburg, September 28 - October 02, 2015, IEEE conference proceedings, 2015, s. 4969-4975Konferensbidrag (Refereegranskat)
    Abstract [en]

    This paper presents a sensorimotor contingencies (SMC) based method to fully autonomously learn to perform hand-eye coordination. We divide the task into two visuomotor subtasks, visual fixation and reaching, and implement these on a PR2 robot assuming no prior information on its kinematic model. Our contributions are three-fold: i) grounding a robot in the environment by exploiting SMCs in the action planning system, which eliminates the need for prior knowledge of the kinematic or dynamic models of the robot; ii) using a forward model to search for proper actions to solve the task by minimizing a cost function, instead of training a separate inverse model, to speed up training; iii) encoding 3D spatial positions of a target object based on the robot’s joint positions, thus avoiding calibration with respect to an external coordinate system. The method is capable of learning the task of hand-eye coordination from scratch using fewer than 20 sensorimotor pairs that are iteratively generated at real-time speed. In order to examine the robustness of the method while dealing with nonlinear image distortions, we apply a so-called retinal mapping image deformation to the input images. Experimental results show that the method succeeds even under considerable image deformations.

  • 124.
    Gu, Song
    et al.
    Chengdu Aeronaut Polytech, Dept Aeronaut Engn, Chengdu 610100, Sichuan, Peoples R China..
    Wang, Lihui
    KTH, Skolan för industriell teknik och management (ITM), Industriell produktion, Produktionssystem.
    Hao, Wei
    Chengdu Aeronaut Polytech, Dept Aeronaut Engn, Chengdu 610100, Sichuan, Peoples R China..
    Du, Yingjie
    Chengdu Aeronaut Polytech, Dept Aeronaut Engn, Chengdu 610100, Sichuan, Peoples R China..
    Wang, Jian
    Chengdu Aeronaut Polytech, Dept Aeronaut Engn, Chengdu 610100, Sichuan, Peoples R China..
    Zhang, Weirui
    Chengdu Aeronaut Polytech, Dept Aeronaut Engn, Chengdu 610100, Sichuan, Peoples R China..
    Online Video Object Segmentation via Boundary-Constrained Low-Rank Sparse Representation2019Ingår i: IEEE Access, E-ISSN 2169-3536, Vol. 7, s. 53520-53533Artikel i tidskrift (Refereegranskat)
    Abstract [en]

    Graph-cut-based algorithms are adopted in many video object segmentation systems because different terms can be probabilistically fused in a single framework. Constructing spatio-temporal coherences is an important stage in segmentation systems. However, computing a key term with good discriminative power often involves many cascaded steps, and inaccurate output from one step propagates to the next, leading to inaccurate segmentation. In this paper, a key term computed within a single framework, referred to as boundary-constrained low-rank sparse representation (BCLRSR), is proposed to achieve accurate segmentation. By treating the elements as linear combinations of dictionary templates, low-rank sparse optimization is adopted to achieve spatio-temporal saliency. To add spatial information to the low-rank sparse model, a boundary constraint is adopted in the framework as a Laplacian regularization. A BCLRSR saliency is then obtained from the representation coefficients, which measure the similarity between the elements in the current frame and the ones in the dictionary. Finally, the object is segmented by minimizing the energy function, which is formulated from the spatio-temporal coherences. Experiments on public datasets show that our proposed algorithm outperforms the state-of-the-art methods.

  • 125.
    Guo, Meng
    et al.
    KTH, Skolan för elektro- och systemteknik (EES), Reglerteknik. KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS. KTH, Skolan för elektro- och systemteknik (EES), Centra, ACCESS Linnaeus Centre.
    Dimarogonas, Dimos V.
    KTH, Skolan för elektro- och systemteknik (EES), Reglerteknik. KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS. KTH, Skolan för elektro- och systemteknik (EES), Centra, ACCESS Linnaeus Centre.
    Multi-agent plan reconfiguration under local LTL specifications2015Ingår i: The international journal of robotics research, ISSN 0278-3649, E-ISSN 1741-3176, Vol. 34, nr 2, s. 218-235Artikel i tidskrift (Refereegranskat)
    Abstract [en]

    We propose a cooperative motion and task planning scheme for multi-agent systems where the agents have independently assigned local tasks, specified as linear temporal logic formulas. These tasks contain hard and soft sub-specifications. A least-violating initial plan is synthesized first for the potentially infeasible task and the partially-known workspace. This discrete plan is then implemented by the potential-field-based navigation controllers. While the system runs, each agent updates its knowledge about the workspace via its sensing capability and shares this knowledge with its neighbouring agents. Based on the knowledge update, each agent verifies and revises its motion plan in real time. It is ensured that the hard specification is always fulfilled for safety and the satisfaction for the soft specification is improved gradually. The design is distributed as only local interactions are assumed. The overall framework is demonstrated by a case study and an experiment.

  • 126.
    Guo, Meng
    et al.
    KTH, Skolan för elektro- och systemteknik (EES), Centra, ACCESS Linnaeus Centre.
    Tumova, Jana
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP.
    Dimarogonas, Dimos V.
    KTH, Skolan för elektro- och systemteknik (EES), Reglerteknik.
    Hybrid control of multi-agent systems under local temporal tasks and relative-distance constraints2016Ingår i: Proceedings of the IEEE Conference on Decision and Control, IEEE conference proceedings, 2016, s. 1701-1706Konferensbidrag (Refereegranskat)
    Abstract [en]

    In this paper, we propose a distributed hybrid control strategy for multi-agent systems where each agent has a local task specified as a Linear Temporal Logic (LTL) formula and at the same time is subject to relative-distance constraints with its neighboring agents. The local tasks capture the temporal requirements on individual agents' behaviors, while the relative-distance constraints impose requirements on the collective motion of the whole team. The proposed solution relies only on relative-state measurements among the neighboring agents without the need for explicit information exchange. It is guaranteed that the local tasks given as syntactically co-safe or general LTL formulas are fulfilled and that the relative-distance constraints are satisfied at all times. The approach is demonstrated with computer simulations.

  • 127.
    Gårding, Jonas
    et al.
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP.
    Lindeberg, Tony
    KTH, Skolan för datavetenskap och kommunikation (CSC), Beräkningsbiologi, CB.
    CanApp: The Candela Application Library1989Rapport (Övrigt vetenskapligt)
    Abstract [en]

    This paper describes CanApp, the Candela Application Library. CanApp is a software package for image processing and image analysis. Most of the subroutines in CanApp are available both as stand-alone programs and C subroutines.

    CanApp currently comprises some 50 programs and 75 subroutines, and these numbers are expected to grow continuously as a result of joint efforts of the members of the CVAP group at the Royal Institute of Technology in Stockholm.

    CanApp is currently installed and running under UNIX on Sun workstations.

  • 128.
    Gårding, Jonas
    et al.
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP.
    Lindeberg, Tony
    KTH, Tidigare Institutioner, Numerisk analys och datalogi, NADA.
    Direct computation of shape cues using scale-adapted spatial derivative operators1996Ingår i: International Journal of Computer Vision, ISSN 0920-5691, E-ISSN 1573-1405, Vol. 17, nr 2, s. 163-191Artikel i tidskrift (Refereegranskat)
    Abstract [en]

    This paper addresses the problem of computing cues to the three-dimensional structure of surfaces in the world directly from the local structure of the brightness pattern of either a single monocular image or a binocular image pair. It is shown that starting from Gaussian derivatives of order up to two at a range of scales in scale-space, local estimates of (i) surface orientation from monocular texture foreshortening, (ii) surface orientation from monocular texture gradients, and (iii) surface orientation from the binocular disparity gradient can be computed without iteration or search, and by using essentially the same basic mechanism. The methodology is based on a multi-scale descriptor of image structure called the windowed second moment matrix, which is computed with adaptive selection of both scale levels and spatial positions. Notably, this descriptor comprises two scale parameters: a local scale parameter describing the amount of smoothing used in derivative computations, and an integration scale parameter determining over how large a region in space the statistics of regional descriptors is accumulated. Experimental results for both synthetic and natural images are presented, and the relation with models of biological vision is briefly discussed.
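    The two-scale descriptor is straightforward to sketch: Gaussian derivatives at the local (differentiation) scale, then Gaussian averaging of their outer products at the integration scale. A minimal sketch using SciPy (function and parameter names are ours; the sigmas are standard deviations in pixels):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def windowed_second_moment(img, local_scale, integration_scale):
    """Windowed second moment matrix of a 2-D image: per-pixel 2x2
    matrix of smoothed products of first-order Gaussian derivatives."""
    # Gaussian derivatives at the local scale (order is per axis: rows, cols)
    Lx = gaussian_filter(img, local_scale, order=(0, 1))
    Ly = gaussian_filter(img, local_scale, order=(1, 0))
    # average the outer product over a window at the integration scale
    mu = np.empty(img.shape + (2, 2))
    mu[..., 0, 0] = gaussian_filter(Lx * Lx, integration_scale)
    mu[..., 0, 1] = mu[..., 1, 0] = gaussian_filter(Lx * Ly, integration_scale)
    mu[..., 1, 1] = gaussian_filter(Ly * Ly, integration_scale)
    return mu

# On a pattern varying only along x, the descriptor concentrates its
# energy in the (0, 0) component, as expected.
x = np.arange(64, dtype=float)
img = np.tile(np.sin(0.4 * x), (64, 1))   # varies along axis 1 only
mu = windowed_second_moment(img, 1.0, 3.0)
```

The two sigmas play the roles described in the abstract: `local_scale` controls smoothing in the derivative computation, `integration_scale` controls the size of the region over which the statistics are accumulated.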

  • 129.
    Gårding, Jonas
    et al.
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP.
    Lindeberg, Tony
    KTH, Tidigare Institutioner, Numerisk analys och datalogi, NADA.
    Direct estimation of local surface shape in a fixating binocular vision system1994Ingår i: Computer Vision — ECCV '94: Third European Conference on Computer Vision Stockholm, Sweden, May 2–6, 1994 Proceedings, Volume I, Springer Berlin/Heidelberg, 1994, s. 365-376Konferensbidrag (Refereegranskat)
    Abstract [en]

    This paper addresses the problem of computing cues to the three-dimensional structure of surfaces in the world directly from the local structure of the brightness pattern of a binocular image pair. The geometric information content of the gradient of binocular disparity is analyzed for the general case of a fixating vision system with symmetric or asymmetric vergence, and with either known or unknown viewing geometry. A computationally inexpensive technique which exploits this analysis is proposed. This technique allows a local estimate of surface orientation to be computed directly from the local statistics of the left and right image brightness gradients, without iterations or search. The viability of the approach is demonstrated with experimental results for both synthetic and natural gray-level images.

  • 130.
    Göbelbecker, Moritz
    et al.
    University of Freiburg.
    Hanheide, Marc
    University of Lincoln.
    Gretton, Charles
    University of Birmingham.
    Hawes, Nick
    University of Birmingham.
    Pronobis, Andrzej
    KTH, Skolan för datavetenskap och kommunikation (CSC), Robotik, perception och lärande, RPL.
    Aydemir, Alper
    KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS. KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP.
    Kristoffer, Sjöö
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP. KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS.
    Zender, Hendrik
    DFKI, Saarbruecken.
    Dora: A Robot that Plans and Acts Under Uncertainty2012Ingår i: Proceedings of the 35th German Conference on Artificial Intelligence (KI’12), 2012Konferensbidrag (Refereegranskat)
    Abstract [en]

    Dealing with uncertainty is one of the major challenges when constructing autonomous mobile robots. The CogX project addressed key aspects of this by developing and implementing mechanisms for self-understanding and self-extension -- i.e. awareness of gaps in knowledge, and the ability to reason and act to fill those gaps. We discuss our robot Dora, a showcase outcome of that project: Dora can perform a variety of search tasks in unexplored environments by exploiting probabilistic knowledge representations while retaining efficiency through a fast planning system.

  • 131.
    Güler, Püren
    et al.
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP.
    Bekiroglu, Yasemin
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP.
    Gratal, Xavi
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP.
    Pauwels, Karl
    Kragic, Danica
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP.
    What's in the Container?: Classifying Object Contents from Vision and Touch2014Ingår i: 2014 IEEE/RSJ International Conference on Intelligent Robots and Systems  (IROS 2014), IEEE , 2014, s. 3961-3968Konferensbidrag (Refereegranskat)
    Abstract [en]

    Robots operating in household environments need to interact with food containers of different types. Whether a container is filled with milk, juice, yogurt or coffee may affect the way robots grasp and manipulate the container. In this paper, we concentrate on the problem of identifying what kind of content is in a container based on tactile and/or visual feedback in combination with grasping. In particular, we investigate the benefits of using unimodal (visual or tactile) or bimodal (visual-tactile) sensory data for this purpose. We direct our study toward cardboard containers that are empty or filled with liquid or solid content. The motivation for using grasping rather than shaking is that we want to investigate the content prior to applying manipulation actions to a container. Our results show that we achieve comparable classification rates with unimodal data and that the visual and tactile data are complementary.

  • 132.
    Güler, Rezan
    et al.
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP. KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS. KTH, Skolan för bioteknologi (BIO), Proteinteknologi.
    Pauwels, Karl
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP. KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS.
    Pieropan, Alessandro
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP. KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS.
    Kjellström, Hedvig
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP. KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS.
    Kragic, Danica
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP. KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS.
    Estimating the Deformability of Elastic Materials using Optical Flow and Position-based Dynamics2015Ingår i: Humanoid Robots (Humanoids), 2015 IEEE-RAS 15th International Conference on, IEEE conference proceedings, 2015, s. 965-971Konferensbidrag (Refereegranskat)
    Abstract [en]

    Knowledge of the physical properties of objects is essential in a wide range of robotic manipulation scenarios. A robot may not always be aware of such properties prior to interaction. If an object is incorrectly assumed to be rigid, it may exhibit unpredictable behavior when grasped. In this paper, we use vision based observation of the behavior of an object a robot is interacting with and use it as the basis for estimation of its elastic deformability. This is estimated in a local region around the interaction point using a physics simulator. We use optical flow to estimate the parameters of a position-based dynamics simulation using meshless shape matching (MSM). MSM has been widely used in computer graphics due to its computational efficiency, which is also important for closed-loop control in robotics. In a controlled experiment we demonstrate that our method can qualitatively estimate the physical properties of objects with different degrees of deformability.
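    The shape-matching core of MSM mentioned above reduces to a classical step: find the rotation that best maps the rest configuration of a point cluster onto its current (deformed) configuration. A generic Procrustes-style sketch (this is a standard formulation, not the authors' code; names are ours):

```python
import numpy as np

def shape_matching_rotation(rest, current):
    """Best-fit rotation mapping centered rest positions onto centered
    current positions, via SVD with a reflection correction. This is the
    optimal-rotation step used in meshless shape matching (MSM)."""
    q = rest - rest.mean(axis=0)       # centered rest configuration
    p = current - current.mean(axis=0) # centered deformed configuration
    A = p.T @ q                        # 3x3 cross-covariance of the clouds
    U, _, Vt = np.linalg.svd(A)
    R = U @ Vt
    if np.linalg.det(R) < 0:           # avoid reflections
        U[:, -1] *= -1
        R = U @ Vt
    return R

# Usage: a rigidly rotated and translated cloud yields back the rotation.
rng = np.random.default_rng(1)
rest = rng.standard_normal((10, 3))
ang = 0.3
R_true = np.array([[np.cos(ang), -np.sin(ang), 0.0],
                   [np.sin(ang),  np.cos(ang), 0.0],
                   [0.0,          0.0,         1.0]])
current = rest @ R_true.T + np.array([1.0, 2.0, 3.0])
R = shape_matching_rotation(rest, current)
```

In a position-based dynamics loop, the goal positions derived from this rotation pull the deformed cloud back toward its rest shape; how strongly they do so is what a deformability estimate such as the one in the paper controls.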

  • 133. Halawani, A.
    et al.
    Li, Haibo
    KTH, Skolan för datavetenskap och kommunikation (CSC), Medieteknik och interaktionsdesign, MID. Nanjing University of Posts and Telecommunications.
    Template-based search: A tool for scene analysis2016Ingår i: Proceeding - 2016 IEEE 12th International Colloquium on Signal Processing and its Applications, CSPA 2016, IEEE conference proceedings, 2016, s. 1-6Konferensbidrag (Refereegranskat)
    Abstract [en]

    This paper proposes a simple and yet effective technique for shape-based scene analysis, in which detection and/or tracking of specific objects or structures in the image is desirable. The idea is based on using predefined binary templates of the structures to be located in the image. The template is matched to contours in a given edge image to locate the designated entity. These templates are allowed to deform in order to deal with variations in the structure's shape and size. Deformation is achieved by dividing the template into segments. The dynamic programming search algorithm is used to accomplish the matching process, achieving very robust results in cluttered and noisy scenes in the applications presented.

  • 134. Halawani, Alaa
    et al.
    Li, Haibo
    KTH, Skolan för datavetenskap och kommunikation (CSC), Medieteknik och interaktionsdesign, MID.
    100 lines of code for shape-based object localization2016Ingår i: Pattern Recognition, ISSN 0031-3203, E-ISSN 1873-5142, Vol. 60, s. 458-472Artikel i tidskrift (Refereegranskat)
    Abstract [en]

    We introduce a simple and effective concept for localizing objects in densely cluttered edge images based on shape information. The shape information is characterized by a binary template of the object's contour, provided to search for object instances in the image. We adopt a segment-based search strategy, in which the template is divided into a set of segments. In this work, we propose our own segment representation that we call one-pixel segment (OPS), in which each pixel in the template is treated as a separate segment. This is done to achieve high flexibility that is required to account for intra-class variations. OPS representation can also handle scale changes effectively. A dynamic programming algorithm uses the OPS representation to realize the search process, enabling a detailed localization of the object boundaries in the image. The concept's simplicity is reflected in the ease of implementation, as the paper's title suggests. The algorithm works directly with very noisy edge images extracted using the Canny edge detector, without the need for any preprocessing or learning steps. We present our experiments and show that our results outperform those of very powerful, state-of-the-art algorithms.
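    The segment-based dynamic programming search can be illustrated with a toy version: each template pixel (a one-pixel segment) is assigned to an edge pixel, and a pairwise term keeps neighbouring assignments geometrically consistent with the template's local offsets. This is a generic sketch of the idea only; the function name and the exact cost are ours, not the paper's:

```python
import numpy as np

def match_template_dp(template, edges, lam=1.0):
    """Toy OPS-style matching: assign each template point to an edge
    pixel so that consecutive assignments preserve the template's local
    offsets, solved exactly by Viterbi-style dynamic programming."""
    template = np.asarray(template, dtype=float)
    edges = np.asarray(edges, dtype=float)
    T, M = len(template), len(edges)
    cost = np.zeros(M)                     # best cost ending at each edge pixel
    back = np.zeros((T, M), dtype=int)     # backpointers for each stage
    for k in range(1, T):
        step = template[k] - template[k - 1]   # expected local offset
        new_cost = np.empty(M)
        for j in range(M):
            # deviation between the edge offset and the template offset
            trans = cost + lam * np.linalg.norm((edges[j] - edges) - step, axis=1)
            back[k, j] = int(np.argmin(trans))
            new_cost[j] = trans[back[k, j]]
        cost = new_cost
    j = int(np.argmin(cost))               # backtrack the best assignment
    path = [j]
    for k in range(T - 1, 0, -1):
        j = int(back[k, j])
        path.append(j)
    return path[::-1], float(cost.min())

# A purely translated copy of the template among distractor edge pixels
# is recovered with zero matching cost.
template = [(0, 0), (1, 0), (2, 0), (2, 1)]
edges = [(100, 100)] + [(x + 5, y + 3) for x, y in template]
path, c = match_template_dp(template, edges)
```

Because every template pixel is its own segment, the assignment can absorb local shape variation; the paper's full algorithm adds scale handling and operates on Canny edge maps directly.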

  • 135. Halawani, Alaa
    et al.
    Rehman, Shafiq Ur
    Li, Haibo
    KTH, Skolan för datavetenskap och kommunikation (CSC), Medieteknik och interaktionsdesign, MID.
    Active vision for tremor disease monitoring2015Ingår i: 6TH INTERNATIONAL CONFERENCE ON APPLIED HUMAN FACTORS AND ERGONOMICS (AHFE 2015) AND THE AFFILIATED CONFERENCES, AHFE 2015, Elsevier, 2015, s. 2042-2048Konferensbidrag (Refereegranskat)
    Abstract [en]

    The aim of this work is to introduce a prototype for monitoring tremor diseases using computer vision techniques. While vision has been previously used for this purpose, the system we are introducing differs intrinsically from other traditional systems. The essential difference is characterized by the placement of the camera on the user's body rather than in front of it, and thus reversing the whole process of motion estimation. This is called active motion tracking. Active vision is simpler in setup and achieves more accurate results compared to traditional arrangements, which we refer to as "passive" here. One main advantage of active tracking is its ability to detect even tiny motions using its simple setup, and that makes it very suitable for monitoring tremor disorders.

  • 136.
    Hamid Muhammed, Hamed
    Uppsala universitet.
    Characterizing and Estimating Fungal Disease Severity in Wheat2004Konferensbidrag (Övrigt vetenskapligt)
  • 137.
    Hamid Muhammed, Hamed
    Uppsala universitet.
    Hyperspectral Image Generation, Processing and Analysis2005Doktorsavhandling, monografi (Övrigt vetenskapligt)
  • 138.
    Hammarwall, David
    et al.
    KTH, Skolan för elektro- och systemteknik (EES), Signalbehandling. KTH, Skolan för elektro- och systemteknik (EES), Centra, ACCESS Linnaeus Centre.
    Ottersten, Björn
    KTH, Skolan för elektro- och systemteknik (EES), Signalbehandling. KTH, Skolan för elektro- och systemteknik (EES), Centra, ACCESS Linnaeus Centre.
    Spatial transmit processing using long-term channel statistics and pilot signaling on selected antennas2006Ingår i: 2006 Fortieth Asilomar Conference on Signals, Systems and Computers, 2006, s. 762-766Konferensbidrag (Refereegranskat)
    Abstract [en]

    In wireless high performance systems utilizing smart antenna transmission techniques, increased pilot signaling becomes problematic when more transmit antennas are added. Herein, we propose a scheme where the pilot signaling is restricted to a subset of the transmit antennas, and the total signal strength of these antennas is fed back to the transmitter. This potentially reduces the required pilot signaling and feedback so it becomes comparable to that of single antenna systems. By combining the feedback with channel statistics, known to the transmitter, substantial spatial information is gained. Herein, this information is used to develop elaborate scheduling and beamforming techniques.

  • 139.
    Hang, Kaiyu
    et al.
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP. KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS.
    Li, Miao
    Stork, Johannes A.
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP. KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS.
    Bekiroglu, Yasemin
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP. KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS.
    Billard, Aude
    Kragic, Danica
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP. KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS.
    Hierarchical Fingertip Space for Synthesizing Adaptable Fingertip Grasps2014Konferensbidrag (Refereegranskat)
  • 140.
    Haustein, Joshua
    et al.
    KTH, Skolan för datavetenskap och kommunikation (CSC), Robotik, perception och lärande, RPL. KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS.
    Hang, Kaiyu
    KTH, Skolan för datavetenskap och kommunikation (CSC), Robotik, perception och lärande, RPL. KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS.
    Kragic, Danica
    KTH, Skolan för datavetenskap och kommunikation (CSC), Robotik, perception och lärande, RPL. KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS.
    Integrating motion and hierarchical fingertip grasp planning2017Ingår i: 2017 IEEE International Conference on Robotics and Automation (ICRA), Institute of Electrical and Electronics Engineers (IEEE), 2017, s. 3439-3446, artikel-id 7989392Konferensbidrag (Refereegranskat)
    Abstract [en]

    In this work, we present an algorithm that simultaneously searches for a high quality fingertip grasp and a collision-free path for a robot hand-arm system to achieve it. The algorithm combines a bidirectional sampling-based motion planning approach with a hierarchical contact optimization process. Rather than tackling these problems in a decoupled manner, the grasp optimization is guided by the proximity to collision-free configurations explored by the motion planner. We implemented the algorithm for a 13-DoF manipulator and show that it is capable of efficiently planning reachable high quality grasps in cluttered environments. Further, we show that our algorithm outperforms a decoupled integration in terms of planning runtime.

  • 141. Hawes, N
    et al.
    Ambrus, Rares
    KTH, Skolan för datavetenskap och kommunikation (CSC), Robotik, perception och lärande, RPL.
    Bore, Nils
    KTH, Skolan för datavetenskap och kommunikation (CSC), Robotik, perception och lärande, RPL.
    Folkesson, John
    KTH, Skolan för datavetenskap och kommunikation (CSC), Robotik, perception och lärande, RPL.
    Jensfelt, Patric
    KTH, Skolan för datavetenskap och kommunikation (CSC), Robotik, perception och lärande, RPL.
    Hanheide, Marc
    et al.,
    The STRANDS Project Long-Term Autonomy in Everyday Environments2017Ingår i: IEEE robotics & automation magazine, ISSN 1070-9932, E-ISSN 1558-223X, Vol. 24, nr 3, s. 146-156Artikel i tidskrift (Refereegranskat)
  • 142. Hawes, N.
    et al.
    Brenner, M.
    Sjöö, Kristoffer
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP. KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS.
    Planning as an architectural control mechanism2008Konferensbidrag (Refereegranskat)
    Abstract [en]

    We describe recent work on PECAS, an architecture for intelligent robotics that supports multi-modal interaction.

  • 143. Hilgen, Gerrit
    et al.
    Sorbaro, Martino
    KTH, Skolan för teknikvetenskap (SCI), Fysik, Beräkningsbiofysik.
    Pirmoradian, Sahar
    Muthmann, Jens-Oliver
    Kepiro, Ibolya Edit
    Ullo, Simona
    Ramirez, Cesar Juarez
    Encinas, Albert Puente
    Maccione, Alessandro
    Berdondini, Luca
    Murino, Vittorio
    Sona, Diego
    Zanacchi, Francesca Cella
    Sernagor, Evelyne
    Hennig, Matthias Helge
    Unsupervised Spike Sorting for Large-Scale, High-Density Multielectrode Arrays2017Ingår i: Cell reports, ISSN 2211-1247, E-ISSN 2211-1247, Vol. 18, nr 10, s. 2521-2532Artikel i tidskrift (Refereegranskat)
    Abstract [en]

    We present a method for automated spike sorting for recordings with high-density, large-scale multielectrode arrays. Exploiting the dense sampling of single neurons by multiple electrodes, we use an efficient, low-dimensional representation of detected spikes, consisting of estimated spatial spike locations and dominant spike-shape features, for fast and reliable clustering into single units. Millions of events can be sorted in minutes, and the method is parallelized and scales better than quadratically with the number of detected spikes. Performance is demonstrated using recordings with a 4,096-channel array and validated using anatomical imaging, optogenetic stimulation, and model-based quality control. A comparison with semi-automated, shape-based spike sorting exposes significant limitations of conventional methods. Our approach demonstrates that it is feasible to reliably isolate the activity of up to thousands of neurons and that dense, multi-channel probes substantially aid reliable spike sorting.

  • 144.
    Hjelm, Martin
    KTH, Skolan för elektroteknik och datavetenskap (EECS), Robotik, perception och lärande, RPL.
    Holistic Grasping: Affordances, Grasp Semantics, Task Constraints2019Doktorsavhandling, monografi (Övrigt vetenskapligt)
    Abstract [en]

    Most of us grasp objects more than a thousand times a day without giving it much thought, whether we are driving a car or drinking coffee. Teaching robots similar manipulation skills has been a goal of robotics research for decades.

    The reason for the slow progress lies mainly in robots' underdeveloped sensorimotor systems. Robot hands are often inflexible and lack the capacity for the complex configurations of human hands. Haptic sensors are rudimentary, with far lower resolution and tactile sensitivity than humans.

    Current research has therefore concentrated on engineering solutions that focus on the stability of the final grasp. This means formulating complex functions and search strategies that describe the interaction between the robot's fingers and the surface of the object. Given the variation in materials, shapes, and capacity to deform, it seems inconceivable to analytically formulate such a general hand-to-shape function. Many researchers have instead turned to data-driven learning methods, as does this thesis.

    Humans clearly have an ability to match hand to shape. However, how we grasp an object is determined primarily by what we intend to do with it. We have an internal a priori understanding of how the action, the material, and the object dynamics govern the grasping process. We also have a deeper understanding of how shape and material relate to our own hand.

    We tie all these aspects together: our understanding of what an object can be used for, how that use affects our interaction with it, and how our hand can be shaped and placed to achieve the goal of the manipulation. For us, the grasping process is not just a hand-to-shape function but a holistic process in which all parts of the chain are equally important for the outcome. This thesis is thus about how to incorporate such a process into a robot's planning of manipulation.

    We approach the holistic process through three interconnected modules. The first lets the robot detect affordances and understand which parts of an object are important for enabling the interaction, a form of conceptualization of the affordance. The second module concerns learning grasp semantics, how shape relates to the capability of the robot's own hand. Finally, the last module focuses on teaching the robot how the goal of the interaction constrains the possible grasps on the object. We explore these three parts through the notion of affinity, which translates directly to the idea of learning a representation that places similar kinds of entities, that is, objects, grasps, and goals, close to each other in the representation space.

    We show that affinity-based representations help the robot reason about which parts of an object are important for inference, which grasps and goals are similar, and how the different categories relate to each other. Finally, an affinity-based approach helps us tie all the parts together in a demonstration of a holistic grasping process.

  • 145.
    Hjelm, Martin
    et al.
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP. KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS.
    Ek, Carl Henrik
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP. KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS.
    Detry, Renaud
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP. KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS.
    Kjellström, Hedvig
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP. KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS.
    Kragic, Danica
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP. KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS.
    Sparse Summarization of Robotic Grasping Data, 2013. In: 2013 IEEE International Conference on Robotics and Automation (ICRA), New York: IEEE, 2013, pp. 1082-1087. Conference paper (Refereed)
    Abstract [en]

    We propose a new approach for learning a summarized representation of high dimensional continuous data. Our technique consists of a Bayesian non-parametric model capable of encoding high-dimensional data from complex distributions using a sparse summarization. Specifically, the method marries techniques from probabilistic dimensionality reduction and clustering. We apply the model to learn efficient representations of grasping data for two robotic scenarios.
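The "sparse summarization" described above pairs probabilistic dimensionality reduction with non-parametric clustering, so that the number of summary prototypes is inferred rather than fixed in advance. A rough, hard-assignment illustration of the latter idea is DP-means (Kulis & Jordan, 2012); this is not the paper's Bayesian model, only a sketch of how the cluster count can follow from the data.

```python
import numpy as np

def dp_means(X, lam, iters=25):
    """DP-means clustering: like k-means, but a point whose squared
    distance to every current centre exceeds `lam` opens a new cluster,
    so the number of summary prototypes is chosen from the data.
    A hard-assignment stand-in for a Bayesian non-parametric mixture."""
    centers = [X.mean(axis=0)]
    labels = np.zeros(len(X), dtype=int)
    for _ in range(iters):
        for i, x in enumerate(X):
            d = ((np.asarray(centers) - x) ** 2).sum(axis=1)
            if d.min() > lam:
                centers.append(x.copy())      # open a new cluster at x
                labels[i] = len(centers) - 1
            else:
                labels[i] = int(np.argmin(d))
        # move each non-empty centre to the mean of its members
        centers = [X[labels == j].mean(axis=0) if np.any(labels == j)
                   else c for j, c in enumerate(centers)]
    return labels, centers
```

The penalty `lam` plays the role that the concentration parameter plays in a Dirichlet-process mixture: larger values yield fewer, coarser prototypes.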

  • 146.
    Hjelm, Martin
    et al.
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP. KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS.
    Ek, Carl Henrik
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP. KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS.
    Detry, Renaud
    Kragic, Danica
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP. KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS.
    Learning Human Priors for Task-Constrained Grasping, 2015. In: COMPUTER VISION SYSTEMS (ICVS 2015), Springer Berlin/Heidelberg, 2015, pp. 207-217. Conference paper (Refereed)
    Abstract [en]

    An autonomous agent using man-made objects must understand how the task conditions grasp placement. In this paper we formulate task-based robotic grasping as a feature learning problem. Using a human demonstrator to provide examples of grasps associated with a specific task, we learn a representation such that similarity in task is reflected by similarity in features. The learned representation discards parts of the sensory input that are redundant for the task, allowing the agent to ground and reason about the features relevant to the task. Synthesized grasps for an observed task on previously unseen objects can then be filtered and ordered by matching against learned instances, without the need for an analytically formulated metric. We show on a real robot how our approach uses the learned representation to synthesize and perform valid task-specific grasps on novel objects.
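The matching step in the abstract, filtering and ordering synthesized grasps by similarity to demonstrated instances in the learned feature space, reduces in its simplest form to nearest-neighbour ranking. The sketch below assumes grasps have already been mapped to feature vectors; `rank_candidates` and its plain Euclidean distance are illustrative stand-ins, not the paper's learned representation.

```python
import numpy as np

def rank_candidates(demos, candidates):
    """Order candidate grasps for a task by distance to the nearest
    demonstrated grasp in feature space. `demos` and `candidates` are
    arrays of grasp feature vectors; the feature extraction itself
    (the learned representation) is assumed to have happened already."""
    # pairwise distances: (n_candidates, n_demos)
    d = np.linalg.norm(candidates[:, None, :] - demos[None, :, :], axis=2)
    score = d.min(axis=1)        # distance to the closest demonstration
    order = np.argsort(score)    # best (most demo-like) candidates first
    return order, score
```

Candidates far from every demonstration can then be filtered out with a simple threshold on `score`, without an explicitly formulated task metric.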

  • 147.
    Hyttinen, Emil
    KTH, Skolan för datavetenskap och kommunikation (CSC), Robotik, perception och lärande, RPL. KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS.
    Adaptive Grasping Using Tactile Sensing, 2017. Licentiate thesis, monograph (Other academic)
    Abstract [en]

    Grasping novel objects is challenging, both because the robot does not have complete information about the objects and because of the inherent uncertainty of real-world applications. Feedback from tactile sensors is important for grasping objects not encountered before. In our research we study how tactile sensor information can be used to improve the grasping of novel objects. Since it is hard to extract relevant object properties and derive appropriate actions from tactile sensing, we have used machine learning to teach the robot suitable behaviours. We have shown that tactile-based estimates of grasp stability can be improved by also using a coarse approximation of the object's shape. We have also devised a method that guides local grasp adjustments, built on our grasp stability estimation method. These adjustments are found by simulating tactile sensor data for grasps in the vicinity of the current grasp. We present several experiments that demonstrate the applicability of our methods. The thesis concludes with a discussion of our results and suggestions for possible topics of further research.
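The local-adjustment idea in the abstract, simulating tactile readings for grasps near the current one and keeping the neighbour with the best predicted stability, can be sketched as a simple search. Every name here (`stability`, `simulate_tactile`, `shape_class`) is a hypothetical stand-in for the thesis's learned stability estimator, tactile simulator, and coarse shape approximation.

```python
import numpy as np

def adjust_grasp(g0, stability, simulate_tactile, shape_class, deltas):
    """Local grasp adjustment: evaluate predicted stability for grasps
    in the vicinity of the current grasp g0 (using simulated tactile
    data) and return the best one. `stability(tactile, shape_class)`
    and `simulate_tactile(grasp)` are assumed, hypothetical callables."""
    best, best_s = g0, stability(simulate_tactile(g0), shape_class)
    for d in deltas:                       # candidate local adjustments
        g = g0 + d
        s = stability(simulate_tactile(g), shape_class)
        if s > best_s:
            best, best_s = g, s
    return best, best_s
```

In the toy usage below, the "tactile reading" is just the grasp pose and stability peaks at a target pose, which suffices to show the greedy neighbourhood search.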

  • 148.
    Irfan, Bahar
    et al.
    Univ Plymouth, Ctr Robot & Neural Syst, Plymouth, Devon, England..
    Ramachandran, Aditi
    Yale Univ, Social Robot Lab, New Haven, CT 06520 USA..
    Spaulding, Samuel
    MIT, Personal Robots Grp, Media Lab, Cambridge, MA 02139 USA..
    Glas, Dylan F.
    Huawei, Futurewei Technol, Santa Clara, CA USA..
    Leite, Iolanda
    KTH, Skolan för elektroteknik och datavetenskap (EECS), Robotik, perception och lärande, RPL.
    Koay, Kheng Lee
    Univ Hertfordshire, Adapt Syst Res Grp, Hatfield, Herts, England..
    Personalization in Long-Term Human-Robot Interaction, 2019. In: HRI '19: 2019 14TH ACM/IEEE INTERNATIONAL CONFERENCE ON HUMAN-ROBOT INTERACTION, IEEE, 2019, pp. 685-686. Conference paper (Refereed)
    Abstract [en]

    For practical reasons, most human-robot interaction (HRI) studies focus on short-term interactions between humans and robots. However, such studies do not capture the difficulty of sustaining engagement and interaction quality across long-term interactions. Many real-world robot applications will require repeated interactions and relationship-building over the long term, and personalization and adaptation to users will be necessary to maintain user engagement and to build rapport and trust between the user and the robot. This full-day workshop brings together perspectives from a variety of research areas, including companion robots, elderly care, and educational robots, in order to provide a forum for sharing and discussing innovations, experiences, works-in-progress, and best practices which address the challenges of personalization in long-term HRI.

  • 149.
    Jacobsson, Mattias
    SICS.
    Play, Belief and Stories about Robots: A Case Study of a Pleo Blogging Community, 2009. In: Proceedings of RO-MAN 2009, NEW YORK: IEEE, 2009, pp. 830-835. Conference paper (Other academic)
    Abstract [en]

    We present an analysis of user-provided content collected from online blogs and forums about the robotic artifact Pleo. Our primary goal is to explore stories about how human-robot interaction manifests itself in actual real-world contexts. To assess these types of communicative media, we use a method based on virtual ethnography that specifically addresses how the data is produced and how it should be interpreted. Results indicate that people generally stage and perform the interaction and approach it playfully, which is further emphasized by the way they communicate their stories through the blogging practice. Finally, we argue that these resources are indeed essential for understanding and designing long-term human-robot relationships.

  • 150.
    Jacobsson, Mattias
    et al.
    SICS.
    Fernaeus, Ylva
    KTH, Skolan för datavetenskap och kommunikation (CSC), Medieteknik och interaktionsdesign, MID.
    Cramer, Henriette
    Ljungblad, Sara
    Crafting against robotic fakelore: on the critical practice of artbot artists, 2013. In: Conference on Human Factors in Computing Systems - Proceedings, Association for Computing Machinery (ACM), 2013, Vol. 2013, pp. 2019-2028. Conference paper (Refereed)
    Abstract [en]

    We report on topics raised in encounters with a series of robotics-oriented artworks, which we interpreted as a general critique of what could be framed as robotic fakelore, or mythology. We do this based on interviews held with artists within the ArtBots community, and discuss how their approach relates and contributes to the discourse of HCI. In our analysis we outline a rough overview of issues emerging in the interviews and reflect on the broader questions they may pose to our research community.
