Results 251 - 300 of 407
  • 251.
    Pacchierotti, Elena
    et al.
    KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS.
    Christensen, Henrik I.
    KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS. KTH, Skolan för datavetenskap och kommunikation (CSC), Numerisk Analys och Datalogi, NADA.
    Jensfelt, Patric
    KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS. KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP.
    Embodied social interaction for service robots in hallway environments (2006). In: Field and Service Robotics / [ed] Corke, P; Sukkarieh, S, Berlin: Springer-Verlag, 2006, Vol. 25, pp. 293-304. Conference paper (Refereed)
    Abstract [en]

    A key aspect of service robotics for everyday use is motion in close proximity to humans. It is essential that the robot exhibits a behavior that signals safety of motion and awareness of the persons in the environment. To achieve this, there is a need to define control strategies that are perceived as socially acceptable by users who are not familiar with robots. In this paper, a system for navigation in a hallway is presented, in which the rules of proxemics are used to define the interaction strategies. The experimental results show that the approach contributes to establishing effective spatial interaction patterns between the robot and a person.
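The entry above describes using proxemic rules to shape a robot's hallway navigation. As a hedged illustration only (not the authors' controller), the following Python sketch classifies a robot-person distance into Hall's proxemic zones and derives a simple lateral-shift signal from it; the zone thresholds are the commonly cited Hall distances, while the `lateral_shift` rule and its parameters are hypothetical.

```python
import numpy as np

# Hall's proxemic zone boundaries in metres (intimate/personal/social);
# the shifting rule below is an illustrative stand-in, not the paper's controller.
INTIMATE, PERSONAL, SOCIAL = 0.45, 1.2, 3.6

def proxemic_zone(distance_m: float) -> str:
    """Classify the robot-person distance into a proxemic zone."""
    if distance_m < INTIMATE:
        return "intimate"
    if distance_m < PERSONAL:
        return "personal"
    if distance_m < SOCIAL:
        return "social"
    return "public"

def lateral_shift(person_xy: np.ndarray, robot_xy: np.ndarray,
                  corridor_half_width: float = 1.0) -> float:
    """Return a lateral offset (metres) that signals intent early:
    start moving aside while the person is still in the social zone."""
    distance = float(np.linalg.norm(person_xy - robot_xy))
    zone = proxemic_zone(distance)
    if zone == "public":
        return 0.0                      # no need to deviate yet
    # Shift more aggressively the closer the encounter gets.
    strength = {"social": 0.4, "personal": 0.8, "intimate": 1.0}[zone]
    return strength * corridor_half_width

if __name__ == "__main__":
    # Person 3 m ahead in the corridor: social zone, begin a 0.4 m shift.
    print(lateral_shift(np.array([3.0, 0.0]), np.array([0.0, 0.0])))
```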

  • 252. Paetzel, M.
    et al.
    Hupont, I.
    Varni, G.
    Chetouani, M.
    Peters, Christopher
    KTH, Skolan för datavetenskap och kommunikation (CSC), High Performance Computing and Visualization (HPCViz). KTH, Skolan för datavetenskap och kommunikation (CSC), Beräkningsvetenskap och beräkningsteknik (CST).
    Castellano, G.
    Exploring the link between self-assessed mimicry and embodiment in HRI (2017). In: ACM/IEEE International Conference on Human-Robot Interaction, IEEE Computer Society, 2017, pp. 245-246. Conference paper (Refereed)
    Abstract [en]

    This work explores the relationship between a robot's embodiment and people's ability to mimic its behavior. It presents a study in which participants were asked to mimic a 3D mixed-embodied robotic head and a 2D version of the same character. Quantitative and qualitative analyses were performed on questionnaire data. The quantitative results show no significant influence of the character's embodiment on the self-assessed ability to mimic it, while the qualitative results indicate a preference for mimicking the robotic head.

  • 253.
    Panahandeh, Ghazaleh
    et al.
    KTH, Skolan för elektro- och systemteknik (EES), Signalbehandling. KTH, Skolan för elektro- och systemteknik (EES), Centra, ACCESS Linnaeus Centre.
    Guo, Chao X.
    University of Minnesota, Minneapolis.
    Jansson, Magnus
    KTH, Skolan för elektro- och systemteknik (EES), Signalbehandling. KTH, Skolan för elektro- och systemteknik (EES), Centra, ACCESS Linnaeus Centre.
    Roumeliotis, Stergios I.
    University of Minnesota, Minneapolis.
    Observability analysis of a vision-aided inertial navigation system using planar features on the ground (2013). In: 2013 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), IEEE conference proceedings, 2013, pp. 4187-4194. Conference paper (Refereed)
    Abstract [en]

    In this paper, we present an observability analysis of a vision-aided inertial navigation system (VINS) in which the camera is downward looking and observes a single point feature on the ground. In our analysis, the full INS parameter vector (including position, velocity, rotation, and inertial sensor biases) as well as the 3D position of the observed point feature are considered as state variables. In particular, we prove that the system has only three unobservable directions corresponding to global translations along the x and y axes, and rotations around the gravity vector. Hence, compared to general VINS, an advantage of using only ground features is that the vertical translation becomes observable. The findings of the theoretical analysis are validated through real-world experiments.

  • 254.
    Panahandeh, Ghazaleh
    et al.
    KTH, Skolan för elektro- och systemteknik (EES), Signalbehandling. KTH, Skolan för elektro- och systemteknik (EES), Centra, ACCESS Linnaeus Centre.
    Guo, Chao X.
    University of Minnesota, Minneapolis.
    Jansson, Magnus
    KTH, Skolan för elektro- och systemteknik (EES), Signalbehandling. KTH, Skolan för elektro- och systemteknik (EES), Centra, ACCESS Linnaeus Centre.
    Roumeliotis, Stergios I.
    University of Minnesota, Minneapolis.
    Observability Analysis of a Vision-aided Inertial Navigation System Using Planar Features on the Ground: Supplemental Material (2013). Report (Other academic)
    Abstract [en]

    In this paper, we present an observability analysis of a vision-aided inertial navigation system (VINS) in which the camera is downward looking and observes a single point feature on the ground. In our analysis, the full INS parameter vector (including position, velocity, rotation, and inertial sensor biases) as well as the 3D position of the observed point feature are considered as state variables. In particular, we prove that the system has only three unobservable directions corresponding to global translations along the x and y axes, and rotations around the gravity vector. Hence, compared to general VINS, an advantage of using only ground features is that the vertical translation becomes observable. The findings of the theoretical analysis are validated through real-world experiments.

  • 255.
    Panahandeh, Ghazaleh
    et al.
    KTH, Skolan för elektro- och systemteknik (EES), Signalbehandling. KTH, Skolan för elektro- och systemteknik (EES), Centra, ACCESS Linnaeus Centre.
    Jansson, Magnus
    KTH, Skolan för elektro- och systemteknik (EES), Signalbehandling. KTH, Skolan för elektro- och systemteknik (EES), Centra, ACCESS Linnaeus Centre.
    Vision-aided inertial navigation based on ground plane feature detection (2014). In: IEEE/ASME Transactions on Mechatronics, ISSN 1083-4435, E-ISSN 1941-014X, Vol. 19, no. 4, pp. 1206-1215. Journal article (Refereed)
    Abstract [en]

    In this paper, a motion estimation approach is introduced for a vision-aided inertial navigation system. The system consists of a ground-facing monocular camera mounted on an inertial measurement unit (IMU) to form an IMU-camera sensor fusion system. The motion estimation procedure fuses inertial data from the IMU and planar features on the ground captured by the camera. The main contribution of this paper is a novel closed-form measurement model based on the image data and IMU output signals. In contrast to existing methods, our algorithm is independent of the underlying vision algorithm for image motion estimation such as optical flow algorithms for camera motion estimation. The algorithm has been implemented using an unscented Kalman filter, which propagates the current and the last state of the system updated in the previous measurement instant. The validity of the proposed navigation method is evaluated both by simulation studies and by real experiments.

  • 256.
    Panahandeh, Ghazaleh
    et al.
    KTH, Skolan för elektro- och systemteknik (EES), Signalbehandling.
    Jansson, Magnus
    KTH, Skolan för elektro- och systemteknik (EES), Signalbehandling. KTH, Skolan för elektro- och systemteknik (EES), Centra, ACCESS Linnaeus Centre.
    Händel, Peter
    KTH, Skolan för elektro- och systemteknik (EES), Signalbehandling. KTH, Skolan för elektro- och systemteknik (EES), Centra, ACCESS Linnaeus Centre.
    Observability Analysis of Mirror-Based IMU-Camera Self-Calibration (2013). In: IPIN 2013: 4th International Conference on Indoor Positioning and Indoor Navigation, 2013. Conference paper (Refereed)
  • 257. Parasuraman, Ramviyas
    et al.
    Caccamo, Sergio
    KTH, Skolan för datavetenskap och kommunikation (CSC), Robotik, perception och lärande, RPL. KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS.
    Båberg, Fredrik
    KTH, Skolan för datavetenskap och kommunikation (CSC), Robotik, perception och lärande, RPL. KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS.
    Ögren, Petter
    KTH, Skolan för datavetenskap och kommunikation (CSC), Robotik, perception och lärande, RPL. KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS.
    Neerincx, Mark
    A New UGV Teleoperation Interface for Improved Awareness of Network Connectivity and Physical Surroundings (2017). In: Journal of Human-Robot Interaction, E-ISSN 2163-0364, Vol. 6, no. 3, pp. 48-70. Journal article (Refereed)
    Abstract [en]

    A reliable wireless connection between the operator and the teleoperated unmanned ground vehicle (UGV) is critical in many urban search and rescue (USAR) missions. Unfortunately, as was seen in, for example, the Fukushima nuclear disaster, the networks available in areas where USAR missions take place are often severely limited in range and coverage. Therefore, during mission execution, the operator needs to keep track of not only the physical parts of the mission, such as navigating through an area or searching for victims, but also the variations in network connectivity across the environment. In this paper, we propose and evaluate a new teleoperation user interface (UI) that includes a way of estimating the direction of arrival (DoA) of the radio signal strength (RSS) and integrating the DoA information in the interface. The evaluation shows that using the interface results in more objects found and fewer aborted missions due to connectivity problems, compared to a standard interface. The proposed interface is an extension of an existing interface centered on the video stream captured by the UGV. But instead of just showing the network signal strength in terms of percent and a set of bars, the additional DoA information is added as a color bar surrounding the video feed. With this information, the operator knows which movement directions are safe, even when moving in regions close to the connectivity threshold.

  • 258.
    Parasuraman, Ramviyas
    et al.
    Purdue Univ, W Lafayette, IN 47906 USA..
    Ögren, Petter
    KTH, Skolan för elektroteknik och datavetenskap (EECS), Robotik, perception och lärande, RPL.
    Min, Byung-Cheol
    Purdue Univ, W Lafayette, IN 47906 USA..
    Kalman Filter Based Spatial Prediction of Wireless Connectivity for Autonomous Robots and Connected Vehicles (2018). In: 2018 IEEE 88th Vehicular Technology Conference (VTC-Fall), IEEE, 2018. Conference paper (Refereed)
    Abstract [en]

    This paper proposes a new Kalman filter based online framework to estimate spatial wireless connectivity in terms of received signal strength (RSS), which is composed of the path loss and the shadow fading variance of a wireless channel in autonomous vehicles. The path loss is estimated using a localized least squares method, and the shadowing effect is predicted with an empirical (exponential) variogram. A discrete Kalman filter is used to fuse these two models in a state space formulation. The approach is unique in the sense that it is online and does not require the exact source location to be known a priori. We evaluated the method using real-world measurement datasets from both indoor and outdoor environments. The results show significant performance improvements compared to state-of-the-art methods using Gaussian processes or Kriging interpolation algorithms. We are able to achieve a mean prediction accuracy of up to 96% when predicting RSS as far as 20 meters ahead along the robot's trajectory.
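The entry above combines a path-loss model with a Kalman filter to predict RSS along a trajectory. The sketch below is a simplified stand-in, not the authors' formulation: it fits a log-distance path-loss model by least squares and runs a scalar Kalman filter on the shadow-fading deviation from that model, rather than using a variogram. All function names, noise values and data are illustrative assumptions.

```python
import numpy as np

def fit_path_loss(d, rss):
    """Least-squares fit of the log-distance model rss = a - 10*n*log10(d).
    Returns (a, n): reference power and path-loss exponent."""
    A = np.column_stack([np.ones_like(d), -10.0 * np.log10(d)])
    coeffs, *_ = np.linalg.lstsq(A, rss, rcond=None)
    return coeffs[0], coeffs[1]

def kalman_rss(rss_meas, d, a, n, q=0.5, r=4.0):
    """Scalar Kalman filter on the shadow-fading deviation from the path-loss
    model: the deviation is a random walk, corrected by each RSS measurement."""
    model = a - 10.0 * n * np.log10(d)
    x, p = 0.0, 10.0                      # deviation estimate and its variance
    filtered = []
    for z, m in zip(rss_meas, model):
        p = p + q                         # predict: random-walk uncertainty growth
        k = p / (p + r)                   # Kalman gain
        x = x + k * ((z - m) - x)         # update with the measured deviation
        p = (1.0 - k) * p
        filtered.append(m + x)            # filtered RSS = model + estimated deviation
    return np.array(filtered)

if __name__ == "__main__":
    d = np.linspace(1.0, 30.0, 60)
    true_rss = -40.0 - 10 * 2.2 * np.log10(d)
    meas = true_rss + np.random.normal(0, 2.0, d.shape)   # shadow fading as noise
    a, n = fit_path_loss(d, meas)
    est = kalman_rss(meas, d, a, n)
    print(f"fitted path-loss exponent n = {n:.2f}")
```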

  • 259. Patel, M.
    et al.
    Ek, Carl Henrik
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP.
    Kyriazis, N.
    Argyros, A.
    Miro, J. V.
    Kragic, Danica
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP.
    Language for learning complex human-object interactions (2013). In: 2013 IEEE International Conference on Robotics and Automation (ICRA), IEEE Computer Society, 2013, pp. 4997-5002. Conference paper (Refereed)
    Abstract [en]

    In this paper we use a Hierarchical Hidden Markov Model (HHMM) to represent and learn complex activities/tasks performed by humans/robots in everyday life. Action primitives are used as a grammar to represent complex human behaviour and to learn the interactions and behaviour of humans/robots with different objects. The main contribution is the use of a probabilistic model capable of representing behaviours at multiple levels of abstraction to support the proposed hypothesis. The hierarchical nature of the model allows decomposition of the complex task into simple action primitives. The framework is evaluated with data collected for tasks of everyday importance performed by a human user.

  • 260.
    Pauwels, Karl
    et al.
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP.
    Kragic, Danica
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP.
    Scaling Up Real-time Object Pose Tracking to Multiple Objects and Active Cameras (2015). In: IEEE International Conference on Robotics and Automation: Workshop on Scaling Up Active Perception, 2015. Conference paper (Refereed)
    Abstract [en]

    We present an overview of our recent work on real-time model-based object pose estimation. We have developed an approach that can simultaneously track the pose of a large number of objects using multiple active cameras. It combines dense motion and depth cues with proprioceptive information to maintain a 3D simulated model of the objects in the scene and the robot operating on them. A constrained optimization method allows for an efficient fusion of the multiple dense cues obtained from each camera into this scene representation. This work is publicly available as a ROS software module for real-time object pose estimation called SimTrack.

  • 261.
    Pauwels, Karl
    et al.
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP.
    Rubio, Leonardo
    Ros, Eduardo
    Real-time Pose Detection and Tracking of Hundreds of Objects (2015). In: IEEE Transactions on Circuits and Systems for Video Technology (Print), ISSN 1051-8215, E-ISSN 1558-2205. Journal article (Refereed)
    Abstract [en]

    We propose a novel model-based method for tracking the six-degrees-of-freedom (6DOF) pose of a very large number of rigid objects in real-time. By combining dense motion and depth cues with sparse keypoint correspondences, and by feeding back information from the modeled scene to the cue extraction process, the method is both highly accurate and robust to noise and occlusions. A tight integration of the graphical and computational capability of graphics processing units (GPUs) allows the method to simultaneously track hundreds of objects in real-time. We achieve pose updates at framerates around 40 Hz when using 500,000 data samples to track 150 objects using images of resolution 640x480. We introduce a synthetic benchmark dataset with varying objects, background motion, noise and occlusions that enables the evaluation of stereo-vision-based pose estimators in complex scenarios. Using this dataset and a novel evaluation methodology, we show that the proposed method greatly outperforms state-of-the-art methods. Finally, we demonstrate excellent performance on challenging real-world sequences involving multiple objects being manipulated.

  • 262. Peltason, J.
    et al.
    Siepmann, F. H. K.
    Spexard, T. P.
    Wrede, B.
    Hanheide, M.
    Topp, Elin A.
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP.
    Mixed-initiative in human augmented mapping (2009). In: ICRA: 2009 IEEE International Conference on Robotics and Automation, IEEE, 2009, pp. 2146-2153. Conference paper (Refereed)
    Abstract [en]

    In scenarios that require a close collaboration and knowledge transfer between inexperienced users and robots, the "learning by interacting" paradigm goes hand in hand with appropriate representations and learning methods. In this paper we discuss a mixed initiative strategy for robotic learning by interacting with a user in a joint map acquisition process. We propose the integration of an environment representation approach into our interactive learning framework. The environment representation and mapping system supports both user driven and data driven strategies for the acquisition of spatial information, so that a mixed initiative strategy for the learning process is realised. We evaluate our system with test runs according to the scenario of a guided tour, extending the area of operation from structured laboratory environment to less predictable domestic settings.

  • 263. Peng, Dongdong
    et al.
    Folkesson, John
    KTH, Skolan för elektroteknik och datavetenskap (EECS), Intelligenta system, Robotik, perception och lärande, RPL.
    Xu, Chao
    Robust Particle Filter Based on Huber Function for Underwater Terrain Aided Navigation (2019). In: IET Radar, Sonar & Navigation, ISSN 1751-8784, E-ISSN 1751-8792. Journal article (Refereed)
    Abstract [en]

    Terrain aided navigation is a promising technique for determining the location of an underwater vehicle by matching terrain measurements against a known map. The particle filter is a natural choice for terrain aided navigation because of its ability to handle nonlinear, multimodal problems. However, the terrain measurements are vulnerable to outliers, which will cause the particle filter to degrade or even diverge. Modifying the Gaussian likelihood function by using robust cost functions is a way to reduce the effect of outliers on an estimate. We propose to use the Huber function to modify the measurement model used to set importance weights in a particle filter. We verify our method in simulations of multi-beam sonar on a real underwater digital map. The results demonstrate that the proposed method is more robust to outliers than the standard particle filter.
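The entry above replaces the Gaussian measurement likelihood of a particle filter with a Huber-based one to resist outlier terrain measurements. The following minimal sketch shows that idea in isolation: Huber-cost log-weights over multi-beam depth residuals. It is not the paper's implementation; the noise scale, Huber threshold and data are illustrative.

```python
import numpy as np

def huber_rho(e, delta=1.0):
    """Huber cost: quadratic near zero, linear in the tails (robust to outliers)."""
    a = np.abs(e)
    return np.where(a <= delta, 0.5 * e**2, delta * (a - 0.5 * delta))

def huber_weights(depth_meas, depth_pred, sigma=1.0, delta=1.5):
    """Importance weights for a particle filter: replace the Gaussian
    log-likelihood -0.5*(e/sigma)^2 with the Huber cost of the normalised residual."""
    e = (depth_meas[None, :] - depth_pred) / sigma   # residuals, particles x beams
    logw = -np.sum(huber_rho(e, delta), axis=1)      # robust log-weight per particle
    logw -= logw.max()                               # numerical stabilisation
    w = np.exp(logw)
    return w / w.sum()

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    depth_pred = rng.normal(50.0, 0.5, size=(200, 8))    # 200 particles, 8 beams
    depth_meas = np.full(8, 50.0)
    depth_meas[3] = 70.0                                 # one outlier beam
    w = huber_weights(depth_meas, depth_pred)
    print("max weight:", w.max(), "sum:", w.sum())
```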

  • 264.
    Pernestål Brenden, Anna
    et al.
    KTH, Skolan för industriell teknik och management (ITM), Centra, Integrated Transport Research Lab, ITRL.
    Kristoffersson, Ida
    VTI.
    Mattsson, Lars-Göran
    KTH, Skolan för arkitektur och samhällsbyggnad (ABE), Transportvetenskap, Transportplanering, ekonomi och teknik.
    Future scenarios for self-driving vehicles in Sweden (2017). Report (Other academic)
    Abstract [en]

    The development of Self-Driving Vehicles (SDVs) is fast, and new pilots and tests are released every week. SDVs are predicted to have the potential to change mobility, human life and society.

    In literature, both negative and positive effects of SDVs are listed (Litman 2015; Fagnant and Kockelman 2015). Among the positive effects are increased traffic throughput leading to less congestion, improved mobility for people without a driver’s license, decreased need for parking spaces, and SDV as an enabler for shared mobility. On the other hand, SDVs are expected to increase the consumption of transport which leads to an increase in total vehicle kilometers travelled. This effect is further reinforced by empty vehicles driving around. This will increase the number of vehicles on the streets and lead to more congestion and increased energy usage. Since the SDV technology is expensive, segregation may be a consequence of the development. In addition there are several challenges related to for example legislation, standardization, infrastructure investments, privacy and security. The question is not if, but rather when SDVs will be common on our streets and roads, and if they will change our way of living, and if so, how?

    As we are in a potential mobility shift, and decisions made today will affect the future development, understanding possibilities and challenges for the future is important for many stakeholders. To this end a scenario-based future study was performed to derive a common platform for initiation of future research and innovation projects concerning SDVs in Sweden. This study will also be used in the ongoing governmental investigation about future regulations for SDVs on Swedish roads (Bjelfvenstam 2016). A third motivation for the study is to shed light on how demography, geography and political landscape can affect the development of new mobility services.

    Since there are many different forces that drive the development, often uncertain and sometimes in conflict with each other, a scenario planning approach was chosen. In previous studies, different types of predictions have been derived. Most of them are made by US scholars and are therefore naturally focused on the development in the US. The culture, both with respect to urban planning and public transport is different in Europe compared to the US.

    The work has been performed by an expert group and a smaller analysis team. The expert group has involved nearly 40 persons from 20 transport organizations, including public authorities, lawyers, city planners, researchers, transport service suppliers, and vehicle manufacturers. The expert group met three times, each time focusing on a specific theme: 1) trend analysis, 2) defining scenario axes of uncertainty, and 3) consequence analysis. The analysis team, consisting of the present three authors and two future strategists, has analyzed, refined and condensed the material from the expert group.

    During the project certain trends and strategic uncertainties were identified by the expert group. The uncertainties that were identified as most important for the development of SDVs in Sweden are: 1) whether the sharing economy becomes a new norm or not, and 2) whether city planners, authorities and politicians will be proactive in the development of cities and societies or not, especially regarding the transportation system. This led to four scenarios: A) “Same, same but all the difference” – a green, individualistic society, B) “Sharing is the new black” – a governmentally driven innovation society based on sharing, C) “Follow the path” – an individualistic society based on development in the same direction as today, and D) “What you need is what you get” – a commercially driven innovation society where sharing is a key.

    In the paper, we describe the scenarios and the process to derive them in more detail. We also present an analysis of the consequences for the development of SDVs in the four scenarios, including predictions concerning pace of development, level of self-driving, fleet size, travel demand and vehicle kilometers travelled. The paper also includes a discussion and comparison with other studies on the development of SDVs in the US, Europe and Asia.

  • 265. Pervaiz, Salman
    et al.
    Deiab, Ibrahim
    Wahba, Essam
    Rashid, Amir
    KTH, Skolan för industriell teknik och management (ITM), Industriell produktion.
    Nicolescu, Mihai
    KTH, Skolan för industriell teknik och management (ITM), Industriell produktion.
    A numerical and experimental study to investigate convective heat transfer and associated cutting temperature distribution in single point turning (2018). In: The International Journal of Advanced Manufacturing Technology, ISSN 0268-3768, E-ISSN 1433-3015, Vol. 94, no. 1-4, pp. 897-910. Journal article (Refereed)
    Abstract [en]

    During the metal cutting operation, heat generation at the cutting interface and the resulting heat distribution among tool, chip, workpiece, and cutting environment has a significant impact on the overall cutting process. Tool life, rate of tool wear, and dimensional accuracy of the machined surface are linked with the heat transfer. In order to develop a precise numerical model for machining, convective heat transfer coefficient is required to simulate the effect of a coolant. Previous literature provides a large operating range of values for the convective heat transfer coefficients, with no clear indication about the selection criterion. In this study, a coupling procedure based on finite element (FE) analysis and computational fluid dynamics (CFD) has been suggested to obtain the optimum value of the convective heat transfer coefficient. In this novel methodology, first the cutting temperature was attained from the FE-based simulation using a logical arbitrary value of convective heat transfer coefficient. The FE-based temperature result was taken as a heat source point on the solid domain of the cutting insert and computational fluid dynamics modeling was executed to examine the convective heat transfer coefficient under similar condition of air interaction. The methodology provided encouraging results by reducing error from 22 to 15% between the values of experimental and simulated cutting temperatures. The methodology revealed encouraging potential to investigate convective heat transfer coefficients under different cutting environments. The incorporation of CFD modeling technique in the area of metal cutting will also benefit other peers working in the similar areas of interest.

  • 266. Petersson, L.
    et al.
    Fletcher, L.
    Zelinsky, A.
    Barnes, N.
    Arnell, Fredrik
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP.
    Towards safer roads by integration of road scene monitoring and vehicle control (2006). In: The International Journal of Robotics Research, ISSN 0278-3649, E-ISSN 1741-3176, Vol. 25, no. 1, pp. 53-72. Journal article (Refereed)
    Abstract [en]

    In this paper we introduce the Smart Cars Project at the Australian National University/National ICT Australia, together with a discussion and an example of a driver assistance system. We present a framework for interactive driver assistance systems that includes techniques for fast-speed sign detection and classification, obstacle detection and tracking applied to pedestrian detection, and lane departure warning. In addition, the driver's actions are monitored. The integrated system uses information extracted from the road scene (speed signs, position within the lane, relative position to other cars, etc.) together with information about the driver's state, such as eye gaze and head pose, to issue adequate warnings. A touch screen monitor is used to present relevant information and allow the driver to interact with the system. The research is focused around robust algorithms that are able to run on-line. Results of on-line speed sign detection and pedestrian detection are presented in the context of a driver assistance system.

  • 267.
    Piovan, Giulia
    et al.
    UCSB.
    Shames, Iman
    KTH, Skolan för elektro- och systemteknik (EES), Reglerteknik. KTH, Skolan för elektro- och systemteknik (EES), Centra, ACCESS Linnaeus Centre.
    Fidan, Baris
    University of Waterloo.
    Bullo, Francesco
    UCSB.
    Anderson, Brian
    Australian National University.
    On Frame and Orientation Localization for Relative Sensing Networks (2013). In: Automatica, ISSN 0005-1098, E-ISSN 1873-2836, Vol. 49, no. 1, pp. 206-213. Journal article (Refereed)
    Abstract [en]

    We develop a novel localization theory for networks of nodes that measure each other's bearing, i.e., we assume that nodes do not have the ability to perform measurements expressed in a common reference frame. We begin with some basic definitions of frame localizability and orientation localizability. Based on some key kinematic relationships, we characterize orientation localizability for planar networks with angle-of-arrival sensing. We then address the orientation localization problem in the presence of noisy measurements. Our first algorithm computes a least-squares estimate of the unknown node orientations in a ring network given angle-of-arrival sensing. For arbitrary connected graphs, our second algorithm exploits kinematic relationships among the orientations of nodes in loops in order to reduce the effect of noise. We establish the convergence of the algorithm, and through some simulations we show that the algorithm reduces the mean-square error due to the noisy measurements in a way that is comparable to the amount of noise reduction obtained by the classic least-squares estimator. We then consider networks in 3-dimensional space and we explore necessary and sufficient conditions for orientation localizability in the noiseless case.

  • 268. Piwek, P.
    et al.
    Masthoff, J.
    Bergenstråhle, Malin
    KTH, Skolan för teknikvetenskap (SCI), Farkost och flyg.
    Reference and gestures in dialogue generation: Three studies with embodied conversational agents (2005). In: AISB'05 Convention - Proceedings of the Joint Symposium on Virtual Social Agents: Social Presence Cues for Virtual Humanoids, Empathic Interaction with Synthetic Characters, Mind Minding Agents, 2005, pp. 53-60. Conference paper (Refereed)
    Abstract [en]

    This paper reports on three studies into social presence cues which were carried out in the context of the NECA (Net-environment for Embodied Emotional Conversational Agents) project and the EPOCH network. The first study concerns the generation of referring expressions. We adopted an existing algorithm for generating referring expressions such that it could run according to an egocentric and a neutral strategy. In an evaluation study, we found that the two strategies were correlated with the perceived friendliness of the speaker. In the second and the third study, we evaluated the gestures that were generated by the NECA system. In this paper, we briefly summarize the most salient results of these two studies. They concern the effect of gestures on perceived quality of speech and information retention.

  • 269. Pokorny, Florian T.
    et al.
    Goldberg, Ken
    Kragic, Danica
    Topological Trajectory Clustering with Relative Persistent Homology (2016). In: 2016 IEEE International Conference on Robotics and Automation (ICRA), Institute of Electrical and Electronics Engineers (IEEE), 2016, pp. 16-23. Conference paper (Refereed)
    Abstract [en]

    Cloud Robotics techniques based on Learning from Demonstrations suggest promising alternatives to manual programming of robots and autonomous vehicles. One challenge is that demonstrated trajectories may vary dramatically: it can be very difficult, if not impossible, for a system to learn control policies unless the trajectories are clustered into meaningful consistent subsets. Metric clustering methods, based on a distance measure, require quadratic time to compute a pairwise distance matrix and do not naturally distinguish topologically distinct trajectories. This paper presents an algorithm for topological clustering based on relative persistent homology, which, for a fixed underlying simplicial representation and discretization of trajectories, requires only linear time in the number of trajectories. The algorithm incorporates global constraints formalized in terms of the topology of sublevel or superlevel sets of a function and can be extended to incorporate probabilistic motion models. In experiments with real automobile and ship GPS trajectories as well as pedestrian trajectories extracted from video, the algorithm clusters trajectories into meaningful consistent subsets and, as we show in an experiment with ship trajectories, results in a faster and more efficient clustering than a metric clustering by Frechet distance.

  • 270.
    Pokorny, Florian T.
    et al.
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP. KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS.
    Hang, Kaiyu
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP. KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS.
    Kragic, Danica
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP. KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS.
    Grasp Moduli Spaces (2013). In: Proceedings of Robotics: Science and Systems (RSS 2013), 2013. Conference paper (Refereed)
    Abstract [en]

    We present a new approach for modelling grasping using an integrated space of grasps and shapes. In particular, we introduce an infinite dimensional space, the Grasp Moduli Space, which represents shapes and grasps in a continuous manner. We define a metric on this space allowing us to formalize ‘nearby’ grasp/shape configurations and we discuss continuous deformations of such configurations. We work in particular with surfaces with cylindrical coordinates and analyse the stability of a popular L¹ grasp quality measure Q_l under continuous deformations of shapes and grasps. We experimentally determine bounds on the maximal change of Q_l in a small neighbourhood around stable grasps with grasp quality above a threshold. In the case of surfaces of revolution, we determine stable grasps which correspond to grasps used by humans and develop an efficient algorithm for generating those grasps in the case of three contact points. We show that sufficiently stable grasps stay stable under small deformations. For larger deformations, we develop a gradient-based method that can transfer stable grasps between different surfaces. Additionally, we show in experiments that our gradient method can be used to find stable grasps on arbitrary surfaces with cylindrical coordinates by deforming such surfaces towards a corresponding ‘canonical’ surface of revolution.

  • 271.
    Pokorny, Florian T.
    et al.
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP.
    Hawasly, M.
    Ramamoorthy, S.
    Topological trajectory classification with filtrations of simplicial complexes and persistent homology (2016). In: The International Journal of Robotics Research, ISSN 0278-3649, E-ISSN 1741-3176, Vol. 35, no. 1-3, pp. 204-223. Journal article (Refereed)
    Abstract [en]

    In this work, we present a sampling-based approach to trajectory classification which enables automated high-level reasoning about topological classes of trajectories. Our approach is applicable to general configuration spaces and relies only on the availability of collision free samples. Unlike previous sampling-based approaches in robotics which use graphs to capture information about the path-connectedness of a configuration space, we construct a multiscale approximation of neighborhoods of the collision free configurations based on filtrations of simplicial complexes. Our approach thereby extracts additional homological information which is essential for a topological trajectory classification. We propose a multiscale classification algorithm for trajectories in configuration spaces of arbitrary dimension and for sets of trajectories starting and ending in two fixed points. Using a cone construction, we then generalize this approach to classify sets of trajectories even when trajectory start and end points are allowed to vary in path-connected subsets. We furthermore show how an augmented filtration of simplicial complexes based on an arbitrary function on the configuration space, such as a costmap, can be defined to incorporate additional constraints. We present an evaluation of our approach in 2-, 3-, 4- and 6-dimensional configuration spaces in simulation and in real-world experiments using a Baxter robot and motion capture data.
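The entry above builds filtrations of simplicial complexes over collision-free samples and reads off persistent homology to distinguish trajectory classes. As a small, hedged fragment of that pipeline only, the sketch below computes a Vietoris-Rips filtration and its 1-dimensional persistence intervals with the gudhi library on a toy 2D free space with one circular obstacle; the trajectory classification step itself is not reproduced, and all sizes and thresholds are illustrative.

```python
import numpy as np
import gudhi  # pip install gudhi

# Collision-free configuration samples around a circular obstacle (toy 2D example).
rng = np.random.default_rng(1)
samples = rng.uniform(-2, 2, size=(400, 2))
samples = samples[np.linalg.norm(samples, axis=1) > 0.7]   # drop points inside the obstacle

# Build a Vietoris-Rips filtration on the free samples and compute persistence.
rips = gudhi.RipsComplex(points=samples, max_edge_length=1.0)
st = rips.create_simplex_tree(max_dimension=2)
st.compute_persistence()

# Long-lived 1-dimensional features (loops) indicate topologically distinct
# ways for trajectories to pass around obstacles.
h1 = st.persistence_intervals_in_dimension(1)
print("H1 intervals (birth, death):", h1)
```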

  • 272.
    Pokorny, Florian T.
    et al.
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP.
    Hawasly, Majd
    University of Edinburgh, School of Informatics, IPAB.
    Ramamoorthy, Subramanian
    University of Edinburgh, School of Informatics, IPAB.
    Multiscale Topological Trajectory Classification with Persistent Homology (2014). In: Proceedings of Robotics: Science and Systems, 2014. Conference paper (Refereed)
    Abstract [en]

    Topological approaches to studying equivalence classes of trajectories in a configuration space have recently received attention in robotics since they allow a robot to reason about trajectories at a high level of abstraction. While recent work has approached the problem of topological motion planning under the assumption that the configuration space and obstacles within it are explicitly described in a noise-free manner, we focus on trajectory classification and present a sampling-based approach which can handle noise, which is applicable to general configuration spaces and which relies only on the availability of collision free samples. Unlike previous sampling-based approaches in robotics which use graphs to capture information about the path-connectedness of a configuration space, we construct a multiscale approximation of neighborhoods of the collision free configurations based on filtrations of simplicial complexes. Our approach thereby extracts additional homological information which is essential for a topological trajectory classification. By computing a basis for the first persistent homology groups, we obtain a multiscale classification algorithm for trajectories in configuration spaces of arbitrary dimension. We furthermore show how an augmented filtration of simplicial complexes based on a cost function can be defined to incorporate additional constraints. We present an evaluation of our approach in 2, 3, 4 and 6 dimensional configuration spaces in simulation and using a Baxter robot.

  • 273.
    Pokorny, Florian T.
    et al.
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP. KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS.
    Kragic, Danica
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP. KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS.
    Classical Grasp Quality Evaluation: New Theory and Algorithms (2013). In: 2013 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), IEEE, 2013, pp. 3493-3500. Conference paper (Refereed)
    Abstract [en]

    This paper investigates theoretical properties of a well-known L¹ grasp quality measure Q whose approximation Q_l^- is commonly used for the evaluation of grasps and where the precision of Q_l^- depends on an approximation of a cone by a convex polyhedral cone with l edges. We prove the Lipschitz continuity of Q and provide an explicit Lipschitz bound that can be used to infer the stability of grasps lying in a neighbourhood of a known grasp. We think of Q_l^- as a lower bound estimate to Q and describe an algorithm for computing an upper bound Q^+. We provide worst-case error bounds relating Q and Q_l^-. Furthermore, we develop a novel grasp hypothesis rejection algorithm which can exclude unstable grasps much faster than current implementations. Our algorithm is based on a formulation of the grasp quality evaluation problem as an optimization problem, and we show how our algorithm can be used to improve the efficiency of sampling based grasp hypotheses generation methods.

  • 274.
    Pokorny, Florian T.
    et al.
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP. KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS.
    Stork, Johannes A.
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP. KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS.
    Kragic, Danica
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP. KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS.
    Grasping Objects with Holes: A Topological Approach (2013). In: 2013 IEEE International Conference on Robotics and Automation (ICRA), New York: IEEE, 2013, pp. 1100-1107. Conference paper (Refereed)
    Abstract [en]

    This work proposes a topologically inspired approach for generating robot grasps on objects with `holes'. Starting from a noisy point-cloud, we generate a simplicial representation of an object of interest and use a recently developed method for approximating shortest homology generators to identify graspable loops. To control the movement of the robot hand, a topologically motivated coordinate system is used in order to wrap the hand around such loops. Finally, another concept from topology -- namely the Gauss linking integral -- is adapted to serve as evidence for secure caging grasps after a grasp has been executed. We evaluate our approach in simulation on a Barrett hand using several target objects of different sizes and shapes and present an initial experiment with real sensor data.

  • 275.
    Popović, Mila
    et al.
    The Maersk Mc-Kinney Möller Institute, University of Southern Denmark.
    Kootstra, Gert
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP.
    Jørgensen, Jimmy Alison
    The Maersk Mc-Kinney Möller Institute, University of Southern Denmark.
    Kragic, Danica
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP.
    Krüger, Norbert
    The Maersk Mc-Kinney Möller Institute, University of Southern Denmark.
    Grasping Unknown Objects using an Early Cognitive Vision System for General Scene Understanding (2011). In: Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), IEEE, 2011, pp. 987-994. Conference paper (Refereed)
    Abstract [en]

    Grasping unknown objects based on real-world visual input is a challenging problem. In this paper, we present an Early Cognitive Vision system that builds a hierarchical representation based on edge and texture information, which is a sparse but powerful description of the scene. Based on this representation we generate edge-based and surface-based grasps. The results show that the method generates successful grasps, that the edge and surface information are complementary, and that the method can deal with more complex scenes. We furthermore present a benchmark for visual-based grasping.

  • 276.
    Pouech, Jérémy
    KTH, Skolan för datavetenskap och kommunikation (CSC).
    Failure Detection and Classification for Industrial Robots (2015). Independent thesis, Advanced level (professional degree), 20 credits / 30 HE credits. Student thesis
    Abstract [en]

    In industrial robotics, the detection of failures is a key part of the robot's program for reaching a robust assembly process. However, setting up such a detection is time-consuming, very specific to a given robotic operation, and involves programming by engineers. In response to this problem, the thesis presented in this paper proposes an algorithm that makes the creation of a failure detection and classification method generic and semi-automatic. Based on machine learning, the algorithm is able to learn how to differentiate between a success and a failure scenario given a series of examples. Moreover, the proposed method makes the teaching of failure detection/classification accessible to any operator without special programming skills. After the programming of movements for a standard behavior, a training set of sensory acquisitions is recorded while the robot blindly performs operation cycles. Depending on the nature of the sensors, the gathered signals can be binary inputs, images, sounds, or other information measured by specific sensors (force, lighting, temperature, ...). These signals contain specific patterns or signatures for successes and failures. The set of training examples is then analyzed by the clustering algorithm OPTICS. The algorithm provides an intuitive representation based on similarities between signals, which helps an operator find the patterns that differentiate success and failure situations. The labels extracted from this analysis are then used to learn a classification function. This function is able to efficiently classify a new signal among the success and failure cases encountered in the training period and thus provide relevant feedback to the robot program. A recovery can then easily be defined by an operator to fix the flaw.
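The thesis entry above describes a two-stage recipe: cluster per-cycle sensor signatures with OPTICS so an operator can label success/failure groups, then learn a classifier for on-line detection. The sketch below illustrates that general pattern with scikit-learn on synthetic 1-D signals; the feature vector, data and classifier choice are assumptions for illustration, not the thesis implementation.

```python
import numpy as np
from sklearn.cluster import OPTICS
from sklearn.svm import SVC

def features(signal: np.ndarray) -> np.ndarray:
    """Crude per-cycle feature vector from a 1-D sensor signal (illustrative only)."""
    return np.array([signal.mean(), signal.std(), signal.max(), signal.min()])

# Synthetic training cycles: 40 "success" signals and 10 "failure" signals with a spike.
rng = np.random.default_rng(0)
success = [rng.normal(0.0, 0.1, 200) for _ in range(40)]
failure = [np.concatenate([rng.normal(0.0, 0.1, 150), rng.normal(2.0, 0.3, 50)])
           for _ in range(10)]
X = np.array([features(s) for s in success + failure])

# Unsupervised step: OPTICS groups similar cycles; an operator inspects and labels the clusters.
clusters = OPTICS(min_samples=5).fit(X).labels_
print("cluster labels per cycle:", clusters)

# Supervised step: once labels are assigned, train a classifier for on-line detection.
y = np.array([0] * 40 + [1] * 10)          # operator-provided labels (success=0, failure=1)
clf = SVC().fit(X, y)
print("prediction for a new failure-like cycle:",
      clf.predict(features(rng.normal(2.0, 0.3, 200)).reshape(1, -1)))
```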

  • 277.
    Pronobis, Andrzej
    et al.
    KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS. KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP.
    Caputo, Barbara
    KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS. KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP.
    Jensfelt, Patric
    KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS. KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP.
    Christensen, Henrik I.
    KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS. KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP.
    A discriminative approach to robust visual place recognition (2006). In: 2006 IEEE/RSJ International Conference on Intelligent Robots and Systems, Vols 1-12, New York: IEEE, 2006, pp. 3829-3836. Conference paper (Refereed)
    Abstract [en]

    An important competence for a mobile robot system is the ability to localize and perform context interpretation. This is required to perform basic navigation and to facilitate local specific services. Usually localization is performed based on a purely geometric model. Through use of vision and place recognition a number of opportunities open up in terms of flexibility and association of semantics to the model. To achieve this the present paper presents an appearance based method for place recognition. The method is based on a large margin classifier in combination with a rich global image descriptor. The method is robust to variations in illumination and minor scene changes. The method is evaluated across several different cameras, changes in time-of-day and weather conditions. The results clearly demonstrate the value of the approach.
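The entry above pairs a rich global image descriptor with a large-margin classifier for place recognition. The sketch below shows only the skeleton of that recipe, assuming a much cruder descriptor (per-channel intensity histograms) and a linear SVM on synthetic images; it is not the paper's descriptor or classifier.

```python
import numpy as np
from sklearn.svm import LinearSVC
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

def global_descriptor(image: np.ndarray, bins: int = 16) -> np.ndarray:
    """Simple global descriptor: per-channel intensity histograms, concatenated."""
    hists = [np.histogram(image[..., c], bins=bins, range=(0, 255), density=True)[0]
             for c in range(image.shape[-1])]
    return np.concatenate(hists)

# Synthetic stand-in data: two "places" with different dominant appearance.
rng = np.random.default_rng(0)
imgs_a = rng.normal(90, 25, size=(50, 32, 32, 3)).clip(0, 255)
imgs_b = rng.normal(170, 25, size=(50, 32, 32, 3)).clip(0, 255)
X = np.array([global_descriptor(im) for im in np.concatenate([imgs_a, imgs_b])])
y = np.array([0] * 50 + [1] * 50)

# Large-margin classifier on the global descriptors.
model = make_pipeline(StandardScaler(), LinearSVC()).fit(X, y)
print("training accuracy:", model.score(X, y))
```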

  • 278.
    Pérez Mejías, Carlos
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP.
    Design of a telepresence interface for direct teleoperation of robots: The synergy between Virtual Reality and Free Look Control (2016). Independent thesis, Advanced level (degree of Master (Two Years)), 20 credits / 30 HE credits. Student thesis
    Abstract [en]

    In direct teleoperation, the interface is vital for controlling a robot. Often it is reduced to a simple controller and the feedback provided by a camera stream on a monitor, which leads to poor results.

    A telepresence system combined with Free Look Control is proposed to improve the results in terms of situational awareness, usability and comfort. The telepresence system provides a sense of depth to the operator in several ways. Free Look Control replaces Tank Control as the control mode; the robot can be driven in any direction and the operator takes control of the camera. A synergy is found when both features are implemented together, as their advantages are amplified. In addition, a multi-camera setup, calibrated automatically, is created in order to build the 3D environment shown to the operator.

    The two control modes are tested and compared by several people. The outcome shows how the inclusion of these characteristics visibly improves the results of the teleoperation.

  • 279.
    Rai, Akshara
    et al.
    Carnegie Mellon Univ, Sch Comp Sci, Robot Inst, Pittsburgh, PA 15213 USA..
    Antonova, Rika
    KTH, Skolan för elektroteknik och datavetenskap (EECS), Robotik, perception och lärande, RPL.
    Song, Seungmoon
    Carnegie Mellon Univ, Sch Comp Sci, Robot Inst, Pittsburgh, PA 15213 USA..
    Martin, William
    Carnegie Mellon Univ, Sch Comp Sci, Robot Inst, Pittsburgh, PA 15213 USA..
    Geyer, Hartmut
    Carnegie Mellon Univ, Sch Comp Sci, Robot Inst, Pittsburgh, PA 15213 USA..
    Atkeson, Christopher
    Carnegie Mellon Univ, Sch Comp Sci, Robot Inst, Pittsburgh, PA 15213 USA..
    Bayesian Optimization Using Domain Knowledge on the ATRIAS Biped (2018). In: 2018 IEEE International Conference on Robotics and Automation (ICRA), IEEE, 2018, pp. 1771-1778. Conference paper (Refereed)
    Abstract [en]

    Robotics controllers often consist of expert-designed heuristics, which can be hard to tune in higher dimensions. Simulation can aid in optimizing these controllers if parameters learned in simulation transfer to hardware. Unfortunately, this is often not the case in legged locomotion, necessitating learning directly on hardware. This motivates using data-efficient learning techniques like Bayesian Optimization (BO) to minimize collecting expensive data samples. BO is a black-box data-efficient optimization scheme, though its performance typically degrades in higher dimensions. We aim to overcome this problem by incorporating domain knowledge, with a focus on bipedal locomotion. In our previous work, we proposed a feature transformation that projected a 16-dimensional locomotion controller to a 1-dimensional space using knowledge of human walking. When optimizing a human-inspired neuromuscular controller in simulation, this feature transformation enhanced sample efficiency of BO over traditional BO with a Squared Exponential kernel. In this paper, we present a generalized feature transform applicable to non-humanoid robot morphologies and evaluate it on the ATRIAS bipedal robot, in both simulation and hardware. We present three different walking controllers and two are evaluated on the real robot. Our results show that this feature transform captures important aspects of walking and accelerates learning on hardware and simulation, as compared to traditional BO.
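The entry above uses Bayesian Optimization with a domain-knowledge feature transform to tune locomotion controllers sample-efficiently. The sketch below is a minimal, generic BO loop (Gaussian process with a squared-exponential kernel and expected improvement) in which a placeholder `feature_transform` projects controller parameters before the kernel is applied; the transform, objective and all constants are illustrative assumptions, not the paper's neuromuscular transform or controller.

```python
import numpy as np
from scipy.stats import norm
from scipy.spatial.distance import cdist

def feature_transform(x):
    """Placeholder for a domain-knowledge projection of controller parameters
    to a low-dimensional feature (the paper's transform is walking-specific)."""
    return np.sum(x, axis=1, keepdims=True)

def sq_exp(A, B, ell=1.0):
    """Squared-exponential kernel between row vectors of A and B."""
    return np.exp(-0.5 * cdist(A, B, "sqeuclidean") / ell**2)

def gp_posterior(Xf, y, Xq, noise=1e-4):
    """GP posterior mean and variance at query features Xq given data (Xf, y)."""
    K = sq_exp(Xf, Xf) + noise * np.eye(len(Xf))
    Ks = sq_exp(Xf, Xq)
    mu = Ks.T @ np.linalg.solve(K, y)
    v = np.linalg.solve(K, Ks)
    var = 1.0 - np.sum(Ks * v, axis=0)
    return mu, np.maximum(var, 1e-12)

def expected_improvement(mu, var, best):
    """EI for minimisation: expected decrease below the best observed cost."""
    s = np.sqrt(var)
    z = (best - mu) / s
    return s * (z * norm.cdf(z) + norm.pdf(z))

def cost(x):
    """Toy objective standing in for an expensive walking-cost evaluation."""
    return np.sum((x - 0.3) ** 2, axis=1)

rng = np.random.default_rng(0)
X = rng.uniform(0, 1, size=(5, 4))             # 5 initial 4-D controller parameter sets
y = cost(X)
cand = rng.uniform(0, 1, size=(2000, 4))       # candidate pool for the acquisition step
for _ in range(15):                            # BO iterations
    mu, var = gp_posterior(feature_transform(X), y, feature_transform(cand))
    x_next = cand[np.argmax(expected_improvement(mu, var, y.min()))]
    X = np.vstack([X, x_next])
    y = np.append(y, cost(x_next[None, :]))
print("best cost found:", y.min())
```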

  • 280.
    Rakesh, Krishnan
    et al.
    KTH, Skolan för datavetenskap och kommunikation (CSC), Robotik, perception och lärande, RPL. KTH, Skolan för teknikvetenskap (SCI), Centra, BioMEx.
    Björsell, N.
    Smith, Christian
    KTH, Skolan för datavetenskap och kommunikation (CSC), Robotik, perception och lärande, RPL. KTH, Skolan för teknikvetenskap (SCI), Centra, BioMEx.
    Segmenting humeral submovements using invariant geometric signatures (2017). In: 2017 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Institute of Electrical and Electronics Engineers (IEEE), 2017, pp. 6951-6958, article id 8206619. Conference paper (Refereed)
    Abstract [en]

    Discrete submovements are the building blocks of any complex movement. When robots collaborate with humans, extraction of such submovements can be very helpful in applications such as robot-assisted rehabilitation. Our work aims to segment these submovements based on the invariant geometric information embedded in segment kinematics. Moreover, this segmentation is achieved without any explicit kinematic representation. Our work demonstrates the usefulness of this invariant framework in segmenting a variety of humeral movements, which are performed at different speeds across different subjects. Our results indicate that this invariant framework has high computational reliability despite the inherent variability in human motion.

  • 281.
    Rakesh, Krishnan
    et al.
    KTH, Skolan för elektroteknik och datavetenskap (EECS), Robotik, perception och lärande, RPL. Department of Electronics, Mathematics and Natural Sciences, University of Gävle, Gävle, Sweden.
    Cruciani, Silvia
    KTH, Skolan för elektroteknik och datavetenskap (EECS), Robotik, perception och lärande, RPL.
    Gutierrez-Farewik, Elena
    KTH, Skolan för teknikvetenskap (SCI), Mekanik.
    Björsell, Niclas
    Department of Electronics, Mathematics and Natural Sciences, University of Gävle, Gävle, Sweden.
    Smith, Christian
    KTH, Skolan för elektroteknik och datavetenskap (EECS), Robotik, perception och lärande, RPL.
    Reliably Segmenting Motion Reversals of a Rigid-IMU Cluster Using Screw-Based Invariants (2018). Conference paper (Refereed)
    Abstract [en]

    Human-robot interaction (HRI) is moving towards the human-robot synchronization challenge. In robots like exoskeletons, this challenge translates to the reliable motion segmentation problem using wearable devices. Therefore, our paper explores the possibility of segmenting the motion reversals of a rigid-IMU cluster using screw-based invariants. Moreover, we evaluate the reliability of this framework with regard to the sensor placement, speed and type of motion. Overall, our results show that the screw-based invariants can reliably segment the motion reversals of a rigid-IMU cluster.

  • 282.
    Rakesh, Krishnan
    et al.
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP. Department of Electronics, Mathematics and Natural Sciences, University of Gävle, Gävle, Sweden.
    Niclas, Björsell
    Department of Electronics, Mathematics and Natural Sciences, University of Gävle, Gävle, Sweden.
    Christian, Smith
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP.
    Invariant Spatial Parametrization of Human Thoracohumeral Kinematics: A Feasibility Study (2016). Conference paper (Refereed)
    Abstract [en]

    In this paper, we present a novel kinematic framework using hybrid twists that has the potential to improve the reliability of estimated human shoulder kinematics. This is important as the functional aspects of the human shoulder are evaluated using the information embedded in thoracohumeral kinematics. Our results successfully demonstrate that our approach is invariant to the body-fixed coordinate definition, is singularity free, and has high repeatability, thus resulting in flexible, user-specific kinematic tracking not restricted to bony landmarks.

  • 283.
    Rao, Akhila
    et al.
    KTH.
    Ben Abdesslem, F.
    Lindgren, A.
    Ziviani, A.
    Team communication strategy for collaborative exploration by autonomous vehicles (2016). In: 2016 IEEE International Conference on Communications, ICC 2016, Institute of Electrical and Electronics Engineers (IEEE), 2016, article id 7511087. Conference paper (Refereed)
    Abstract [en]

    Exploring a large area can be conveniently performed by a team of small autonomous vehicles for different applications, such as search and rescue, cleaning, or lawn mowing. The efficiency and performance of such autonomous exploration depends on the exploration algorithm implemented by the vehicles, and can be enhanced with a better communication and collaboration strategy within the team. In this paper, a new algorithm is proposed and evaluated where vehicles with a limited communication range pro-actively seek their teammates to exchange information about the explored area. Simulations show that this approach allows the vehicles to finish the exploration and return to their base station 18% faster, without consuming more energy.

  • 284.
    Reynaga Barba, Valeria
    KTH, Skolan för datavetenskap och kommunikation (CSC).
    Detecting Changes During the Manipulation of an Object Jointly Held by Humans and Robots2015Independent thesis Advanced level (degree of Master (Two Years)), 20 poäng / 30 hpOppgave
    Abstract [en]

    In recent decades, research and development in the field of robotics have grown rapidly. This growth has resulted in the emergence of service robots that need to be able to physically interact with humans for different applications. One of these applications involves robots and humans cooperating in handling an object together. In such cases, there is usually an initial arrangement of how the robot and the humans hold the object, and the arrangement stays the same throughout the manipulation task. Real-world scenarios often require that the initial arrangement changes throughout the task; it is therefore important that the robot is able to recognize these changes and act accordingly. We consider a setting where a robot holds a large flat object with one or two humans. The aim of this research project is to detect the change in the number of agents grasping the object using only force and torque information measured at the robot's wrist. The proposed solution involves defining a transition sequence of four steps that the humans should perform to go from the initial scenario to the final one. The force and torque information is used to estimate the grasping point of the agents with a Kalman filter. While the humans are going from one scenario to the other, the estimated point changes according to the step of the transition the humans are in. These changes are used to track the steps in the sequence using a hidden Markov model (HMM). Tracking the steps in the sequence means knowing how many agents are grasping the object. To evaluate the method, humans that were not involved in the training of the HMM were asked to perform two tasks: a) perform the previously defined sequence as is, and b) perform a deviation of the sequence. The results of the method show that it is possible to detect the change between one human and two humans holding the object using only force and torque information.
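    The sequence-tracking step can be illustrated with a small hand-rolled hidden Markov model: a left-to-right HMM over the four transition steps, filtered with the forward algorithm over symbolic observations derived from changes of the estimated grasping point. The state set, observation symbols and probabilities below are placeholders for illustration only, not values from the thesis.

    # Minimal, hypothetical sketch of left-to-right HMM filtering over the four
    # transition steps, with discretized grasp-point changes as observations.
    import numpy as np

    # Left-to-right transition matrix over 4 steps (stay or advance).
    A = np.array([[0.9, 0.1, 0.0, 0.0],
                  [0.0, 0.9, 0.1, 0.0],
                  [0.0, 0.0, 0.9, 0.1],
                  [0.0, 0.0, 0.0, 1.0]])
    # Emission probabilities for 3 symbolic observations
    # (0: grasp point stable, 1: shifting, 2: jump) -- purely illustrative.
    B = np.array([[0.7, 0.2, 0.1],
                  [0.2, 0.6, 0.2],
                  [0.1, 0.3, 0.6],
                  [0.6, 0.3, 0.1]])
    pi = np.array([1.0, 0.0, 0.0, 0.0])

    def forward_filter(obs):
        """Return P(step | observations so far) for each time index."""
        alpha = pi * B[:, obs[0]]
        alpha /= alpha.sum()
        beliefs = [alpha]
        for o in obs[1:]:
            alpha = (alpha @ A) * B[:, o]
            alpha /= alpha.sum()
            beliefs.append(alpha)
        return np.array(beliefs)

    # Example: a stable phase, then shifting, then a jump in the grasp estimate.
    print(forward_filter([0, 0, 1, 1, 2, 0]).argmax(axis=1))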

  • 285.
    Ringh, Axel
    et al.
    KTH, Skolan för teknikvetenskap (SCI), Matematik (Inst.), Optimeringslära och systemteori.
    Karlsson, Johan
    KTH, Skolan för teknikvetenskap (SCI), Matematik (Inst.), Optimeringslära och systemteori.
    Lindquist, Anders
    KTH, Skolan för teknikvetenskap (SCI), Matematik (Inst.), Optimeringslära och systemteori. Shanghai Jiao Tong University, China.
    The Multidimensional Circulant Rational Covariance Extension Problem: Solutions and Applications in Image Compression2016Inngår i: 2015 54th IEEE Conference on Decision and Control (CDC), 2015, Institute of Electrical and Electronics Engineers (IEEE), 2016, s. 5320-5327Konferansepaper (Fagfellevurdert)
    Abstract [en]

    Rational functions play a fundamental role in systems engineering for modelling, identification, and control applications. In this paper we extend the framework by Lindquist and Picci for obtaining such models from the circulant trigonometric moment problems, from the one-dimensional to the multidimensional setting in the sense that the spectrum domain is multidimensional. We consider solutions to weighted entropy functionals, and show that all rational solutions of certain bounded degree can be characterized by these. We also consider identification of spectra based on simultaneous covariance and cepstral matching, and apply this theory for image compression. This provides an approximation procedure for moment problems where the moment integral is over a multidimensional domain, and is also a step towards a realization theory for random fields.
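    For orientation, the kind of problem described above can be sketched in the one-dimensional circulant case (the multidimensional version replaces the discrete circle by a grid). This is a schematic form under our own notational assumptions, not the authors' exact formulation: given covariances c_k, seek a positive spectral density \Phi on the discrete unit circle matching the moments, for instance via a weighted entropy-like functional with positive weight P,

    \[ c_k = \frac{1}{N}\sum_{j=0}^{N-1} \Phi(\zeta^{j})\,\zeta^{-jk}, \qquad \zeta = e^{i 2\pi/N}, \quad |k| \le n, \]

    \[ \max_{\Phi > 0} \; \frac{1}{N}\sum_{j=0}^{N-1} P(\zeta^{j}) \log \Phi(\zeta^{j}) \quad \text{subject to the moment constraints above.} \]

    According to the abstract, all rational solutions of a certain bounded degree can be characterized by functionals of this weighted-entropy type.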

  • 286.
    Rixon Fuchs, Louise
    et al.
    KTH, Skolan för elektroteknik och datavetenskap (EECS), Robotik, perception och lärande, RPL.
    Gällström, Andreas
    Folkesson, John
    KTH, Skolan för elektroteknik och datavetenskap (EECS), Robotik, perception och lärande, RPL.
    Object Recognition in Forward Looking Sonar Images using Transfer Learning2018Konferansepaper (Fagfellevurdert)
    Abstract [en]

    Forward Looking Sonars (FLS) are a typical choice of sonar for autonomous underwater vehicles. They are most often the main sensor for obstacle avoidance and can be used for monitoring, homing, following and docking as well. Those tasks require discrimination between noise and various classes of objects in the sonar images. Robust recognition of sonar data still remains a problem, but if solved it would enable more autonomy for underwater vehicles providing more reliable information about the surroundings to aid decision making. Recent advances in image recognition using Deep Learning methods have been rapid. While image recognition with Deep Learning is known to require large amounts of labeled data, there are data-efficient learning methods using generic features learned by a network pre-trained on data from a different domain. This enables us to work with much smaller domain-specific datasets, making the method interesting to explore for sonar object recognition with limited amounts of training data. We have developed a Convolutional Neural Network (CNN) based classifier for FLS-images and compared its performance to classification using classical methods and hand-crafted features.

  • 287.
    Rixon Fuchs, Louise
    et al.
    KTH, Skolan för elektroteknik och datavetenskap (EECS), Intelligenta system, Robotik, perception och lärande, RPL.
    Gällström, Andreas
    Sonar System Design Saab Dynamics, Linköping, Sweden.
    Folkesson, John
    KTH, Skolan för elektroteknik och datavetenskap (EECS), Intelligenta system, Robotik, perception och lärande, RPL.
    Object Recognition in Forward Looking Sonar Images using Transfer Learning2018Inngår i: AUV 2018 - 2018 IEEE/OES Autonomous Underwater Vehicle Workshop, Proceedings, Institute of Electrical and Electronics Engineers Inc. , 2018Konferansepaper (Fagfellevurdert)
    Abstract [en]

    Forward Looking Sonars (FLS) are a typical choice of sonar for autonomous underwater vehicles. They are most often the main sensor for obstacle avoidance and can be used for monitoring, homing, following and docking as well. Those tasks require discrimination between noise and various classes of objects in the sonar images. Robust recognition of sonar data still remains a problem, but if solved it would enable more autonomy for underwater vehicles providing more reliable information about the surroundings to aid decision making. Recent advances in image recognition using Deep Learning methods have been rapid. While image recognition with Deep Learning is known to require large amounts of labeled data, there are data-efficient learning methods using generic features learned by a network pre-trained on data from a different domain. This enables us to work with much smaller domain-specific datasets, making the method interesting to explore for sonar object recognition with limited amounts of training data. We have developed a Convolutional Neural Network (CNN) based classifier for FLS-images and compared its performance to classification using classical methods and hand-crafted features.
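    The transfer-learning recipe described in both records can be sketched as follows: a CNN pre-trained on a different domain serves as a frozen feature extractor, and a small classical classifier is trained on the resulting generic features. The concrete backbone (an ImageNet ResNet-18 from torchvision), the preprocessing, and the SVM head below are assumptions for illustration, not the network or features used in the paper.

    # Hypothetical sketch: frozen ImageNet backbone as a generic feature extractor
    # for FLS image snippets, plus a small classical classifier on top.
    import torch
    import torchvision.models as models
    import torchvision.transforms as T
    from sklearn.svm import SVC

    # ImageNet-pretrained backbone (older torchvision versions use pretrained=True).
    backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
    backbone.fc = torch.nn.Identity()   # drop the ImageNet classification head
    backbone.eval()

    preprocess = T.Compose([
        T.ToTensor(),                            # HxW sonar snippet -> 1xHxW tensor
        T.ConvertImageDtype(torch.float32),
        T.Resize((224, 224)),
        T.Lambda(lambda x: x.repeat(3, 1, 1)),   # grayscale -> 3 channels
    ])

    @torch.no_grad()
    def extract_features(sonar_snippets):
        """sonar_snippets: list of HxW numpy arrays cut from FLS images."""
        batch = torch.stack([preprocess(img) for img in sonar_snippets])
        return backbone(batch).numpy()           # one 512-d feature vector per snippet

    # Train a small classifier on the frozen features (placeholder dataset names):
    # clf = SVC(kernel="rbf").fit(extract_features(train_snippets), train_labels)
    # predictions = clf.predict(extract_features(test_snippets))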

  • 288.
    Rixon Fuchs, Louise
    et al.
    KTH, Skolan för elektroteknik och datavetenskap (EECS), Intelligenta system, Robotik, perception och lärande, RPL. Saab, SE-581 88 Linköping, Sweden.
    Larsson, Christer
    Saab, SE-581 88 Linköping, Sweden;Department of Electrical and Information Technology, Lund University.
    Gällström, Andreas
    Saab, SE-581 88 Linköping, Sweden;Department of Electrical and Information Technology, Lund University.
    Deep learning based technique for enhanced sonar imaging2019Konferansepaper (Annet vitenskapelig)
    Abstract [en]

    Several beamforming techniques can be used to enhance the resolution of sonar images. Beamforming techniques can be divided into two types: data-independent beamforming such as the delay-and-sum beamformer, and data-dependent methods known as adaptive beamformers. Adaptive beamformers can often achieve higher resolution, but are more sensitive to errors. In synthetic aperture sonar (SAS), signals from several consecutive pings are added coherently to achieve the same effect as having a longer array; in general, a longer array gives higher image resolution. SAS processing typically requires high navigation accuracy and physical array overlap between pings. This restriction on displacement between pings limits the area coverage rate for the vehicle carrying the SAS. In this paper, we investigate the possibility of enhancing sonar images from single-ping measurements. This is done by using state-of-the-art techniques from image-to-image translation, namely the conditional generative adversarial network (cGAN) Pix2Pix. The cGAN learns a mapping from an input image to an output image as well as a loss function to train the mapping. We test our concept by training a cGAN on simulated data, going from a short array (low resolution) to a longer array (high resolution). The method is evaluated using measured SAS data collected by Saab with the experimental platform Sapphires in the freshwater Lake Vättern.
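    A heavily simplified sketch of the Pix2Pix-style objective (conditional adversarial loss plus an L1 term between generated and target images) is given below. The tiny convolutional stand-ins are assumptions; Pix2Pix itself uses a U-Net generator and a PatchGAN discriminator, and all sonar-specific data handling is omitted.

    # Hypothetical sketch of a conditional GAN (Pix2Pix-style) training step:
    # short-array (low-resolution) sonar image -> long-array-like image.
    import torch
    import torch.nn as nn

    G = nn.Sequential(nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
                      nn.Conv2d(16, 1, 3, padding=1))   # generator stand-in
    D = nn.Sequential(nn.Conv2d(2, 16, 3, padding=1), nn.ReLU(),
                      nn.Conv2d(16, 1, 3, padding=1))   # patch discriminator stand-in
    bce, l1 = nn.BCEWithLogitsLoss(), nn.L1Loss()
    opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
    opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)

    def train_step(x_low, y_high, lam=100.0):
        """x_low: simulated short-array image, y_high: matching long-array image."""
        # Discriminator: real pair vs. generated pair, conditioned on the input.
        fake = G(x_low)
        d_real = D(torch.cat([x_low, y_high], dim=1))
        d_fake = D(torch.cat([x_low, fake.detach()], dim=1))
        loss_d = bce(d_real, torch.ones_like(d_real)) + bce(d_fake, torch.zeros_like(d_fake))
        opt_d.zero_grad(); loss_d.backward(); opt_d.step()
        # Generator: fool the discriminator and stay close to the target (L1 term).
        d_fake = D(torch.cat([x_low, fake], dim=1))
        loss_g = bce(d_fake, torch.ones_like(d_fake)) + lam * l1(fake, y_high)
        opt_g.zero_grad(); loss_g.backward(); opt_g.step()
        return loss_d.item(), loss_g.item()

    # Toy usage on random tensors:
    # train_step(torch.randn(4, 1, 64, 64), torch.randn(4, 1, 64, 64))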

  • 289.
    Romero, Javier
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP. KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS.
    From Human to Robot Grasping2011Doktoravhandling, monografi (Annet vitenskapelig)
    Abstract [en]

    Imagine that a robot fetched this thesis for you from a book shelf. How do you think the robot would have been programmed? One possibility is that experienced engineers had written low level descriptions of all imaginable tasks, including grasping a small book from this particular shelf. A second option would be that the robot tried to learn how to grasp books from your shelf autonomously, resulting in hours of trial-and-error and several books on the floor. In this thesis, we argue in favor of a third approach where you teach the robot how to grasp books from your shelf through grasping by demonstration. It is based on the idea of robots learning grasping actions by observing humans performing them. This imposes minimum requirements on the human teacher: no programming knowledge and, in this thesis, no need for special sensory devices. It also maximizes the amount of sources from which the robot can learn: any video footage showing a task performed by a human could potentially be used in the learning process. And hopefully it reduces the amount of books that end up on the floor.

    This document explores the challenges involved in the creation of such a system. First, the robot should be able to understand what the teacher is doing with their hands. This means it needs to estimate the pose of the teacher's hands by visually observing them, in the absence of markers or any other input devices which could interfere with the demonstration. Second, the robot should translate the human representation acquired in terms of hand poses to its own embodiment. Since the kinematics of the robot are potentially very different from the human one, defining a similarity measure applicable to very different bodies becomes a challenge. Third, the execution of the grasp should be continuously monitored to react to inaccuracies in the robot perception or changes in the grasping scenario. While visual data can help correct the reaching movement to the object, tactile data enables accurate adaptation of the grasp itself, thereby adjusting the robot's internal model of the scene to reality. Finally, acquiring compact models of human grasping actions can help in both perceiving human demonstrations more accurately and executing them in a more human-like manner. Moreover, modeling human grasps can provide us with insights about what makes an artificial hand design anthropomorphic, assisting the design of new robotic manipulators and hand prostheses.

    All these modules try to solve particular subproblems of a grasping by demonstration system. We hope the research on these subproblems performed in this thesis will both bring us closer to our dream of a learning robot and contribute to the multiple research fields where these subproblems are coming from.

  • 290. Romero, Javier
    et al.
    Feix, Thomas
    Ek, Carl Henrik
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP. KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS.
    Kjellström, Hedvig
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP. KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS.
    Kragic, Danica
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP. KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS.
    Extracting Postural Synergies for Robotic Grasping2013Inngår i: IEEE Transactions on robotics, ISSN 1552-3098, E-ISSN 1941-0468, Vol. 29, nr 6, s. 1342-1352Artikkel i tidsskrift (Fagfellevurdert)
    Abstract [en]

    We address the problem of representing and encoding human hand motion data using nonlinear dimensionality reduction methods. We build our work on the notion of postural synergies being typically based on a linear embedding of the data. In addition to addressing the encoding of postural synergies using nonlinear methods, we relate our work to control strategies of combined reaching and grasping movements. We show the drawbacks of the (commonly made) causality assumption and propose methods that model the data as being generated from an inferred latent manifold to cope with the problem. Another important contribution is a thorough analysis of the parameters used in the employed dimensionality reduction techniques. Finally, we provide an experimental evaluation that shows how the proposed methods outperform the standard techniques, both in terms of recognition and generation of motion patterns.
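    For contrast with the nonlinear methods studied in the paper, the snippet below extracts two-dimensional synergy coordinates from hand joint-angle data with a linear embedding (PCA) and with a generic nonlinear stand-in (KernelPCA), including the inverse mapping used to generate postures from latent coordinates. The data, dimensions and the choice of KernelPCA are illustrative assumptions only, not the authors' method.

    # Illustrative sketch: linear vs. (generic) nonlinear synergy extraction.
    import numpy as np
    from sklearn.decomposition import PCA, KernelPCA

    rng = np.random.default_rng(0)
    joint_angles = rng.normal(size=(500, 20))   # placeholder: 500 grasps x 20 DoF

    linear = PCA(n_components=2).fit(joint_angles)
    nonlinear = KernelPCA(n_components=2, kernel="rbf",
                          fit_inverse_transform=True).fit(joint_angles)

    z_lin = linear.transform(joint_angles)      # 2-D linear synergy coordinates
    z_nl = nonlinear.transform(joint_angles)    # 2-D nonlinear latent coordinates

    # Generating a hand posture back from latent coordinates (the mapping one
    # would use to drive a robot hand with few control dimensions):
    posture_lin = linear.inverse_transform(z_lin[:1])
    posture_nl = nonlinear.inverse_transform(z_nl[:1])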

  • 291.
    Rotkowitz, Mikael
    et al.
    KTH, Skolan för elektro- och systemteknik (EES), Reglerteknik.
    Lall, S.
    A characterization of convex problems in decentralized control2005Inngår i: IEEE Transactions on Automatic Control, ISSN 0018-9286, E-ISSN 1558-2523, Vol. 50, nr 12, s. 1984-1996Artikkel i tidsskrift (Fagfellevurdert)
    Abstract [en]

    We consider the problem of constructing optimal decentralized controllers. We formulate this problem as one of minimizing the closed-loop norm of a feedback system subject to constraints on the controller structure. We define the notion of quadratic invariance of a constraint set with respect to a system, and show that if the constraint set has this property, then the constrained minimum-norm problem may be solved via convex programming. We also show that quadratic invariance is necessary and sufficient for the constraint set to be preserved under feedback. These results are developed in a very general framework, and are shown to hold in both continuous and discrete time, for both stable and unstable systems, and for any norm. This notion unifies many previous results identifying specific tractable decentralized control problems, and delineates the largest known class of convex problems in decentralized control. As an example, we show that optimal stabilizing controllers may be efficiently computed in the case where distributed controllers can communicate faster than their dynamics propagate. We also show that symmetric synthesis is included in this classification, and provide a test for sparsity constraints to be quadratically invariant, and thus amenable to convex synthesis.
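    The sparsity test mentioned at the end of the abstract can be illustrated numerically with binary patterns: with S encoding which controller entries may be nonzero and P the plant sparsity, quadratic invariance reduces to checking that the boolean product S P S stays inside S. The snippet below states this commonly cited form as a sketch; consult the paper for the exact conditions.

    # Numeric sketch of a quadratic-invariance check for sparsity constraints.
    # S[i, j] = 1 if controller entry (i, j) may be nonzero; P[j, k] = 1 if the
    # corresponding plant entry is nonzero.
    import numpy as np

    def is_quadratically_invariant(S, P):
        """True if the pattern of K1 @ P @ K2 stays inside S for all K1, K2 in S,
        which reduces to checking the boolean product S P S against S."""
        SPS = (S @ P @ S) > 0
        return bool(np.all(~SPS | (S > 0)))

    # Lower-triangular controller structure with a lower-triangular plant
    # (a classic quadratically invariant pair):
    S = np.tril(np.ones((3, 3), dtype=int))
    P = np.tril(np.ones((3, 3), dtype=int))
    print(is_quadratically_invariant(S, P))    # True

    # A fully decentralized (diagonal) controller with a dense plant is not QI:
    print(is_quadratically_invariant(np.eye(3, dtype=int),
                                     np.ones((3, 3), dtype=int)))  # False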

  • 292.
    Rotkowitz, Mikael
    et al.
    KTH, Skolan för elektro- och systemteknik (EES), Reglerteknik.
    Lall, S
    A characterization of convex problems in decentralized control2006Inngår i: IEEE Transactions on Automatic Control, ISSN 0018-9286, E-ISSN 1558-2523, Vol. 51, nr 2, s. 274-286Artikkel i tidsskrift (Fagfellevurdert)
    Abstract [en]

    We consider the problem of constructing optimal decentralized controllers. We formulate this problem as one of minimizing the closed-loop norm of a feedback system subject to constraints on the controller structure. We define the notion of quadratic invariance of a constraint set with respect to a system, and show that if the constraint set has this property, then the constrained minimum-norm problem may be solved via convex programming. We also show that quadratic invariance is necessary and sufficient for the constraint set to be preserved under feedback. These results are developed in a very general framework, and are shown to hold in both continuous and discrete time, for both stable and unstable systems, and for any norm. This notion unifies many previous results identifying specific tractable decentralized control problems, and delineates the largest known class of convex problems in decentralized control. As an example, we show that optimal stabilizing controllers may be efficiently computed in the case where distributed controllers can communicate faster than their dynamics propagate. We also show that symmetric synthesis is included in this classification, and provide a test for sparsity constraints to be quadratically invariant, and thus amenable to convex synthesis.

  • 293.
    Samuelsson, Johan
    et al.
    KTH, Skolan för teknikvetenskap (SCI), Matematik (Inst.), Optimeringslära och systemteori.
    Gustavi, Tove
    KTH, Skolan för teknikvetenskap (SCI), Matematik (Inst.), Optimeringslära och systemteori.
    Karasalo, Maja
    KTH, Skolan för teknikvetenskap (SCI), Matematik (Inst.), Optimeringslära och systemteori.
    Hu, Xiaoming
    KTH, Skolan för teknikvetenskap (SCI), Matematik (Inst.), Optimeringslära och systemteori.
    Robust formation adaptation for mobile platforms with noisy sensor information2006Inngår i: 2006 IEEE/RSJ International Conference on Intelligent Robots and Systems, Vols 1-12, NEW YORK: IEEE , 2006, s. 2527-2532Konferansepaper (Fagfellevurdert)
    Abstract [en]

    In this paper, stability and formation adaptation of mobile multi-agent systems with limited sensor information are studied. A globally stable control for line formations with varying separations and bearing angles between the agents is evaluated in simulations and experiments with Khepera robots. The control algorithm only requires information available from on-board sensors, although stability is improved if communication and sharing of information between the agents is possible. In addition, the control only needs a coarse estimate of the actual target speed.

  • 294.
    Sandberg, Henrik
    et al.
    KTH, Skolan för elektro- och systemteknik (EES), Reglerteknik. KTH, Skolan för elektro- och systemteknik (EES), Centra, ACCESS Linnaeus Centre.
    Amin, Saurabh
    Johansson, Karl Henrik
    KTH, Skolan för elektro- och systemteknik (EES), Reglerteknik. KTH, Skolan för elektro- och systemteknik (EES), Centra, ACCESS Linnaeus Centre.
    Cyberphysical security in networked control systems: An introduction to the issue2015Inngår i: IEEE CONTR SYST MAG, ISSN 1066-033X, Vol. 35, nr 1, s. 20-23Artikkel i tidsskrift (Fagfellevurdert)
    Abstract [en]

    This special issue provides an introduction to cyberphysical security of networked control systems (NCSs) and summarizes recent progress in applying fundamentals of systems theory and decision sciences to this new and increasingly promising area. NCS applications range from large-scale industrial applications to critical infrastructures such as water, transportation, and electricity networks. The security of NCSs naturally depends on the integration of cyber and physical dynamics and on different ways in which they are affected by the actions of human decision makers. Thus, problems in this area lie at the intersection of control systems and computer security. The six articles that constitute this special issue approach cyberphysical security from a variety of perspectives, including control theory, optimization, and game theory. They cover a range of topics such as models of attack and defense, risk assessment, attack detection and identification, and secure control design. A common theme among these contributions is an emphasis on the development of a principled approach to cyberphysical security of NCS.

  • 295.
    Schilling, Fabian
    et al.
    KTH, Skolan för datavetenskap och kommunikation (CSC), Robotik, perception och lärande, RPL.
    Chen, Xi
    KTH, Skolan för datavetenskap och kommunikation (CSC), Robotik, perception och lärande, RPL.
    Folkesson, John
    KTH, Skolan för datavetenskap och kommunikation (CSC), Robotik, perception och lärande, RPL.
    Jensfelt, Patric
    KTH, Skolan för datavetenskap och kommunikation (CSC), Robotik, perception och lärande, RPL.
    Geometric and visual terrain classification for autonomous mobile navigation2017Inngår i: 2017 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), IEEE, 2017, artikkel-id 8206092Konferansepaper (Fagfellevurdert)
    Abstract [en]

    In this paper, we present a multi-sensory terrain classification algorithm with a generalized terrain representation using semantic and geometric features. We compute geometric features from lidar point clouds and extract pixel-wise semantic labels from a fully convolutional network that is trained using a dataset with a strong focus on urban navigation. We use data augmentation to overcome the biases of the original dataset and apply transfer learning to adapt the model to new semantic labels in off-road environments. Finally, we fuse the visual and geometric features using a random forest to classify the terrain traversability into three classes: safe, risky and obstacle. We implement the algorithm on our four-wheeled robot and test it in novel environments including both urban and off-road scenes which are distinct from the training environments and under summer and winter conditions. We provide experimental results to show that our algorithm can perform accurate and fast prediction of terrain traversability in a mixture of environments with a small set of training data.
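    The fusion stage can be sketched as follows: per-cell geometric features from the lidar and per-cell semantic label probabilities from the fully convolutional network are concatenated and classified into the three traversability classes with a random forest. Feature names, dimensions and the random data below are placeholders, not the paper's exact design.

    # Hypothetical sketch of the feature-fusion and classification stage only.
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier

    rng = np.random.default_rng(0)
    n_cells = 1000
    geometric = rng.normal(size=(n_cells, 4))      # e.g. slope, roughness, step height, density
    semantic = rng.dirichlet(np.ones(5), n_cells)  # e.g. class probabilities from the FCN
    X = np.hstack([geometric, semantic])
    y = rng.integers(0, 3, n_cells)                # 0: safe, 1: risky, 2: obstacle

    clf = RandomForestClassifier(n_estimators=100).fit(X[:800], y[:800])
    print("held-out accuracy (random placeholder data):", clf.score(X[800:], y[800:]))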

  • 296.
    Schillinger, Philipp
    KTH, Skolan för elektro- och systemteknik (EES), Reglerteknik. Bosch Center for Artificial Intelligence.
    Specification Decomposition and Formal Behavior Generation in Multi-Robot Systems2017Licentiatavhandling, monografi (Annet vitenskapelig)
    Abstract [en]

    While autonomous robot systems are becoming increasingly common, their usage is still mostly limited to rather simple tasks. This primarily results from the need for manually programming the execution plans of the robots. Instead, as shown in this thesis, their behavior can be automatically generated from a given goal specification. This forms the basis for providing formal guarantees regarding optimality and satisfaction of the mission goal specification and creates the opportunity to deploy these robots in increasingly sophisticated scenarios. Well-defined robot capabilities of comparably low complexity can be developed independently from a specific high-level goal and then, using a behavior planner, be automatically composed to achieve complex goals in a verifiably correct way. Considering multiple robots introduces significant additional planning complexity. Not only do actions need to be planned, but also the allocation of parts of the mission to the individual robots needs to be considered. Classically, either planning and allocation are treated as two independent problems, which requires solving an exponential number of planning problems, or the formulation of a joint team model leads to a product state space between the robots. The resulting exponential complexity prevents most existing approaches from being practically useful in more complex and realistic scenarios. In this thesis, an approach is presented to utilize the interplay of allocation and planning, which avoids the exponential complexity for independently executable parts of the mission specification. Furthermore, an approach is presented to identify these independent parts automatically when only being given a single goal specification for the team. This bears the potential of improving the efficiency of finding an optimal solution and is a significant step towards the application of formal multi-robot behavior planning to real-world problems. The effectiveness of the proposed methods is therefore illustrated in experiments based on an existing office environment and in realistic scenarios.

  • 297.
    Schillinger, Philipp
    KTH, Skolan för elektroteknik och datavetenskap (EECS), Reglerteknik. Bosch Center for Artificial Intelligence.
    Specification Decomposition and Formal Behavior Generation in Multi-Robot Systems2019Doktoravhandling, monografi (Annet vitenskapelig)
    Abstract [en]

    Autonomous robot systems are becoming increasingly common in service applications and industrial scenarios. However, their use is still mostly limited to rather simple tasks. This primarily results from the considerable effort that is required to manually program the execution plans of the robots. In this thesis, we discuss how the behavior of robots can be automatically generated from a given goal specification. This forms the basis for providing formal guarantees regarding optimality and satisfaction of the mission goal specification and creates the opportunity to deploy these robots in increasingly sophisticated scenarios. Well-defined robot capabilities of comparably low complexity can be developed independently from a specific high-level goal and a behavior planner can then automatically compose them to achieve complex goals in a verifiably correct way.

    Intelligent coordination of a robot team can highly improve the performance of a system, but at the same time, considering multiple robots introduces significant additional planning complexity. To address the complexity, a framework is proposed to efficiently plan actions for multi-robot systems. The generated behavior of the robots is guaranteed to fulfill complex, temporally extended, formal mission specifications posed to the team as a whole. To achieve this, several challenges are addressed such as decomposition of a specification into tasks, allocation of tasks to robots, planning of actions to execute the assigned tasks, and coordination of action execution. This enables the combination of heterogeneous robots for automating tasks in a wide range of practically relevant applications.

    The proposed methods determine efficient actions for each robot in the sense that these actions are optimal in the absence of execution uncertainty and otherwise improve the solution performance over time based on online observations. First, to plan optimal actions, an approach called Simultaneous Task Allocation and Planning is proposed to utilize the interplay of allocation and planning based on automatically identified, independently executable tasks. Second, to improve performance in the presence of stochastic actions, a Hierarchical LTL-Task MDP is proposed to combine auction-based allocation with reinforcement learning to achieve the desired performance with feasible computational effort. Both approaches of the presented framework are evaluated in the considered use case areas of service robotics and factory automation. The results of this thesis enable planning correct-by-construction behavior from expressive specifications in more complex and realistic scenarios than possible with previous approaches.

  • 298.
    Schillinger, Philipp
    et al.
    KTH, Skolan för elektroteknik och datavetenskap (EECS), Centra, Centrum för autonoma system, CAS. KTH, Skolan för elektroteknik och datavetenskap (EECS), Centra, ACCESS Linnaeus Centre. Bosch Ctr Artificial Intelligence, Renningen, Germany..
    Buerger, Mathias
    Bosch Ctr Artificial Intelligence, Renningen, Germany..
    Dimarogonas, Dimos V.
    KTH, Skolan för elektroteknik och datavetenskap (EECS), Reglerteknik. KTH, Skolan för elektroteknik och datavetenskap (EECS), Centra, ACCESS Linnaeus Centre.
    Auctioning over Probabilistic Options for Temporal Logic-Based Multi-Robot Cooperation under Uncertainty2018Inngår i: 2018 IEEE INTERNATIONAL CONFERENCE ON ROBOTICS AND AUTOMATION (ICRA), IEEE Computer Society, 2018, s. 7330-7337Konferansepaper (Fagfellevurdert)
    Abstract [en]

    Coordinating a team of robots to fulfill a common task is still a demanding problem. This is even more the case when considering uncertainty in the environment, as well as temporal dependencies within the task specification. Multi-robot cooperation from a single goal specification requires mechanisms for decomposing the goal as well as efficient planning for the team. However, planning action sequences offline is insufficient in real-world applications. Rather, due to uncertainties, the robots also need to closely coordinate during execution and adjust their policies when additional observations are made. The framework presented in this paper enables the robot team to cooperatively fulfill tasks given as temporal logic specifications while explicitly considering uncertainty and incorporating observations during execution. We demonstrate the effectiveness of our ROS implementation of this approach in a case study scenario.
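    To make the auction idea in the title concrete, the sketch below runs a plain sequential single-item auction: each robot bids its expected cost for every remaining task, the lowest bid wins, and the winner's load is updated. This is a generic textbook mechanism shown for illustration; the paper's auctioning over probabilistic options, temporal logic constraints and online replanning are not represented.

    # Generic sequential-auction sketch (not the paper's algorithm).
    def sequential_auction(robots, tasks, expected_cost):
        """expected_cost(robot, task, assigned) -> float, given the robot's current load."""
        assignment = {r: [] for r in robots}
        remaining = list(tasks)
        while remaining:
            bids = [(expected_cost(r, t, assignment[r]), r, t)
                    for r in robots for t in remaining]
            cost, winner, task = min(bids)
            assignment[winner].append(task)
            remaining.remove(task)
        return assignment

    # Toy usage: two robots, bids grow with the number of tasks already assigned.
    robots = ["r1", "r2"]
    tasks = ["deliver_A", "inspect_B", "deliver_C"]
    base = {("r1", "deliver_A"): 1, ("r1", "inspect_B"): 5, ("r1", "deliver_C"): 2,
            ("r2", "deliver_A"): 4, ("r2", "inspect_B"): 1, ("r2", "deliver_C"): 3}
    print(sequential_auction(robots, tasks,
                             lambda r, t, load: base[(r, t)] + len(load)))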

  • 299.
    Schillinger, Philipp
    et al.
    KTH, Skolan för elektroteknik och datavetenskap (EECS), Reglerteknik. Bosch Center for Artificial Intelligence.
    Buerger, Mathias
    Bosch Center for Artificial Intelligence.
    Dimarogonas, Dimos V.
    KTH, Skolan för elektroteknik och datavetenskap (EECS), Reglerteknik.
    Improving Multi-Robot Behavior Using Learning-Based Receding Horizon Task Allocation2018Konferansepaper (Fagfellevurdert)
    Abstract [en]

    Planning efficient and coordinated policies for a team of robots is a computationally demanding problem, especially when the system faces uncertainty in the outcome or duration of actions. In practice, approximation methods are usually employed to plan reasonable team policies in an acceptable time. At the same time, many typical robotic tasks include a repetitive pattern. On the one hand, this multiplies the increased cost of inefficient solutions. But on the other hand, it also provides the potential for improving an initial, inefficient solution over time. In this paper, we consider the case that a single mission specification is given to a multi-robot system, describing repetitive tasks which allow the robots to parallelize work. We propose here a decentralized coordination scheme which enables the robots to decompose the full specification, execute distributed tasks, and improve their strategy over time.

  • 300.
    Schillinger, Philipp
    et al.
    KTH, Skolan för elektroteknik och datavetenskap (EECS), Centra, ACCESS Linnaeus Centre. Bosch Ctr Artificial Intelligence, Robert Bosch Campus 1, DE-71272 Renningen, Germany.
    Buerger, Mathias
    Bosch Ctr Artificial Intelligence, Robert Bosch Campus 1, DE-71272 Renningen, Germany..
    Dimarogonas, Dimos V.
    KTH, Skolan för elektroteknik och datavetenskap (EECS), Centra, ACCESS Linnaeus Centre.
    Simultaneous task allocation and planning for temporal logic goals in heterogeneous multi-robot systems2018Inngår i: The international journal of robotics research, ISSN 0278-3649, E-ISSN 1741-3176, Vol. 37, nr 7, s. 818-838Artikkel i tidsskrift (Fagfellevurdert)
    Abstract [en]

    This paper describes a framework for automatically generating optimal action-level behavior for a team of robots based on temporal logic mission specifications under resource constraints. The proposed approach optimally allocates separable tasks to available robots, without requiring a priori an explicit representation of the tasks or the computation of all task execution costs. Instead, we propose an approach for identifying sub-tasks in an automaton representation of the mission specification and for simultaneously allocating the tasks and planning their execution. The proposed framework avoids the need to compute a combinatorial number of possible assignment costs, where each computation itself requires solving a complex planning problem. This can improve computational efficiency compared with classical assignment solutions, in particular for on-demand missions where task costs are unknown in advance. We demonstrate the applicability of the approach with multiple robots in an existing office environment and evaluate its performance in several case study scenarios.
