1 - 4 of 4
  • 1.
    Chen, Xi; Ghadirzadeh, Ali; Folkesson, John; Björkman, Mårten; Jensfelt, Patric
    KTH, School of Electrical Engineering and Computer Science (EECS), Robotics, Perception and Learning, RPL.
    Deep Reinforcement Learning to Acquire Navigation Skills for Wheel-Legged Robots in Complex Environments. 2018. In: 2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 2018. Conference paper (Refereed).
    Abstract [en]

    Mobile robot navigation in complex and dynamic environments is a challenging but important problem. Reinforcement learning approaches fail to solve these tasks efficiently because of the sparse rewards, temporal complexity, and high-dimensional sensorimotor spaces inherent in such problems. We present a novel approach that uses deep reinforcement learning to train action policies that equip wheel-legged robots with navigation skills. The policy maps height-map image observations to motor commands that drive the robot to a target position while avoiding obstacles. We propose to acquire this multifaceted navigation skill by learning and exploiting a number of manageable navigation behaviors, and we introduce a domain randomization technique to improve the versatility of the training samples. We experimentally demonstrate significant improvements in data efficiency, success rate, robustness to irrelevant sensory data, and the quality of the resulting maneuvering skills.
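    The abstract above describes a policy that maps height-map observations to motor commands, trained with domain randomization. The sketch below is a minimal illustration of that interface only, not the authors' implementation; the network architecture, observation size, action dimension, and the randomize_terrain helper are all hypothetical.

        # Minimal sketch (assumptions: a 64x64 height-map observation and a
        # 2-D motor command; sizes and names are illustrative, not from the paper).
        import torch
        import torch.nn as nn

        class HeightMapPolicy(nn.Module):
            """Maps a height-map observation to a motor command."""
            def __init__(self, action_dim: int = 2):
                super().__init__()
                self.encoder = nn.Sequential(
                    nn.Conv2d(1, 16, kernel_size=5, stride=2), nn.ReLU(),
                    nn.Conv2d(16, 32, kernel_size=3, stride=2), nn.ReLU(),
                    nn.Flatten(),
                )
                # a 64x64 input yields 30x30 and then 14x14 feature maps with these strides
                self.head = nn.Sequential(
                    nn.Linear(32 * 14 * 14, 128), nn.ReLU(),
                    nn.Linear(128, action_dim), nn.Tanh(),  # bounded motor commands
                )

            def forward(self, height_map: torch.Tensor) -> torch.Tensor:
                return self.head(self.encoder(height_map))

        def randomize_terrain(height_map: torch.Tensor) -> torch.Tensor:
            """Hypothetical domain randomization: perturb terrain heights so the
            policy does not overfit to a single simulated environment."""
            scale = 1.0 + 0.1 * torch.randn(1)
            noise = 0.02 * torch.randn_like(height_map)
            return height_map * scale + noise

        policy = HeightMapPolicy()
        obs = randomize_terrain(torch.rand(1, 1, 64, 64))  # batch of one observation
        action = policy(obs)  # e.g. forward and angular velocity in [-1, 1]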

  • 2.
    Klamt, Tobias; Chen, Xi; Karaoğuz, Hakan; Jensfelt, Patric; Behnke, Sven; et al.
    Klamt and Behnke: University of Bonn, Autonomous Intelligent Systems, Bonn, Germany.
    Chen, Karaoğuz and Jensfelt: KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Robotics, Perception and Learning, RPL.
    Flexible Disaster Response of Tomorrow: Final Presentation and Evaluation of the CENTAURO System. 2019. In: IEEE Robotics & Automation Magazine, ISSN 1070-9932, E-ISSN 1558-223X, Vol. 26, no. 4, p. 59-72. Article in journal (Refereed).
    Abstract [en]

    Mobile manipulation robots have great potential for roles in support of rescuers on disaster-response missions. Robots can operate in places too dangerous for humans and therefore can assist in accomplishing hazardous tasks while their human operators work at a safe distance. We developed a disaster-response system that consists of the highly flexible Centauro robot and suitable control interfaces, including an immersive telepresence suit and support-operator controls offering different levels of autonomy.

  • 3.
    Schilling, Fabian; Chen, Xi; Folkesson, John; Jensfelt, Patric
    KTH, School of Computer Science and Communication (CSC), Robotics, perception and learning, RPL.
    Geometric and visual terrain classification for autonomous mobile navigation. 2017. In: 2017 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), IEEE, 2017, article id 8206092. Conference paper (Refereed).
    Abstract [en]

    In this paper, we present a multi-sensory terrain classification algorithm with a generalized terrain representation based on semantic and geometric features. We compute geometric features from lidar point clouds and extract pixel-wise semantic labels from a fully convolutional network trained on a dataset with a strong focus on urban navigation. We use data augmentation to overcome the biases of the original dataset and apply transfer learning to adapt the model to new semantic labels in off-road environments. Finally, we fuse the visual and geometric features using a random forest to classify terrain traversability into three classes: safe, risky, and obstacle. We implement the algorithm on our four-wheeled robot and test it in novel environments, including both urban and off-road scenes distinct from the training environments, under both summer and winter conditions. We provide experimental results showing that our algorithm can predict terrain traversability accurately and quickly in a mixture of environments with a small set of training data.
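    The fusion step described above lends itself to a compact illustration: per-cell geometric features and semantic class probabilities are concatenated and passed to a random forest that outputs one of the three traversability classes. The sketch below is a minimal example under assumed feature choices and shapes; it is not the paper's exact pipeline, and the dummy labels stand in for real annotated terrain data.

        # Fusion of geometric and semantic features with a random forest
        # classifying terrain cells into {safe, risky, obstacle}.
        # Feature choices, shapes, and labels are illustrative assumptions.
        import numpy as np
        from sklearn.ensemble import RandomForestClassifier

        rng = np.random.default_rng(0)
        n_cells = 1000

        # Hypothetical geometric features per cell: height variance, slope, roughness.
        geometric = rng.random((n_cells, 3))
        # Hypothetical semantic probabilities per cell (e.g. road, grass, vegetation, other).
        semantic = rng.dirichlet(np.ones(4), size=n_cells)

        X = np.hstack([geometric, semantic])   # fused feature vector per cell
        y = rng.integers(0, 3, size=n_cells)   # 0 = safe, 1 = risky, 2 = obstacle (dummy labels)

        clf = RandomForestClassifier(n_estimators=100, random_state=0)
        clf.fit(X, y)                          # in practice: fit on labeled training terrain

        traversability = clf.predict(X[:5])    # predicted class per terrain cell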

  • 4.
    Sun, Xu; Chen, Xi; Yan, Min; Qiu, Min; Thylen, Lars; Wosinski, Lech
    Sun: KTH, School of Information and Communication Technology (ICT).
    Chen: KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP.
    Yan, Qiu and Wosinski: KTH, School of Information and Communication Technology (ICT), Materials- and Nano Physics, Optics and Photonics, OFO.
    All-Optical Switching Using a Hybrid Plasmonic Donut Resonator With Photothermal Absorber. 2016. In: IEEE Photonics Technology Letters, ISSN 1041-1135, E-ISSN 1941-0174, Vol. 28, no. 15, p. 1609-1612. Article in journal (Refereed).
    Abstract [en]

    A novel hybrid plasmonic (HP) donut resonator integrated with a photothermal plasmonic absorber has been developed, which can be used as a compact all-optical switch or modulator. The radius of the fabricated HP donut resonator is 1.8 µm, with a resonant wavelength around 1550 nm and a quality factor (Q factor) around 600. The photothermal plasmonic absorber is integrated directly above the HP device and can absorb as much as 75% of the impinging optical power at a wavelength of 1064 nm. Since the absorber is in tight contact with the Si ridge of the HP waveguide, the absorbed optical power efficiently heats the Si ridge and thereby shifts the resonant wavelength of the HP donut resonator through the thermal expansion of Si. Experimental results show that the power required for 15 dB amplitude switching is only 10 mW, with rise and fall response times of around 18 and 14 µs, respectively.
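    The figures quoted in this abstract allow a quick back-of-envelope check, sketched below. The linewidth estimate uses the standard relation FWHM ≈ λ/Q, and the switching-energy estimate assumes the absorbed pump power is constant over the rise time; both assumptions are mine, not statements from the paper.

        # Back-of-envelope numbers derived from the figures quoted in the abstract.
        lambda_res_nm = 1550.0   # resonant wavelength (nm)
        quality_factor = 600.0   # loaded Q factor
        pump_power_mW = 10.0     # pump power for 15 dB switching
        absorption = 0.75        # fraction of the 1064 nm pump that is absorbed
        rise_time_us = 18.0      # rise time (microseconds)

        linewidth_nm = lambda_res_nm / quality_factor        # ~2.6 nm FWHM
        absorbed_power_mW = absorption * pump_power_mW       # ~7.5 mW heating the Si ridge
        switch_energy_nJ = absorbed_power_mW * rise_time_us  # mW x us = nJ, ~135 nJ per transition

        print(f"resonance linewidth ~ {linewidth_nm:.2f} nm")
        print(f"absorbed pump power ~ {absorbed_power_mW:.1f} mW")
        print(f"switching energy    ~ {switch_energy_nJ:.0f} nJ")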
