Publications (10 of 56)
Cheng, X., Yang, B., Liu, G., Olofsson, T. & Li, H. (2018). A variational approach to atmospheric visibility estimation in the weather of fog and haze. Sustainable cities and society, 39, 215-224
2018 (English). In: Sustainable cities and society, ISSN 2210-6707, Vol. 39, p. 215-224. Article in journal (Refereed). Published.
Abstract [en]

Real-time atmospheric visibility estimation in foggy and hazy weather plays a crucial role in ensuring traffic safety. Because they overcome inherent drawbacks of traditional optical estimation methods, such as limited sampling volume and high cost, vision-based approaches have received increasing attention in recent research on atmospheric visibility estimation. Based on the classical Koschmieder's formula, atmospheric visibility estimation is carried out by extracting an inherent extinction coefficient. In this paper we present a variational framework to handle the time-varying nature of the extinction coefficient and develop a novel algorithm that extracts the extinction coefficient through piecewise functional fitting of observed luminance curves. The developed algorithm is validated and evaluated on a large database of road traffic video from the Tongqi expressway (in China). The test results are encouraging and show that the proposed algorithm achieves an estimation error rate of 10%. More significantly, this is the first time the effectiveness of Koschmieder's formula in atmospheric visibility estimation has been validated with a large dataset, containing more than two million luminance curves extracted from real-world traffic video surveillance data.
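Koschmieder's formula, on which this estimation rests, relates the observed luminance at distance d to the extinction coefficient k via L(d) = L0·exp(-k·d) + Lf·(1 - exp(-k·d)), with visibility V = -ln(eps)/k for a contrast threshold eps (eps = 0.02 gives the familiar V ≈ 3.912/k). The sketch below is a minimal illustration of that relationship, not the paper's piecewise variational algorithm: it fits (L0, Lf, k) to a noisy synthetic luminance curve by least squares; the function names and the SciPy-based fitting are assumptions made for illustration.

```python
# A minimal sketch (not the paper's variational algorithm): fit Koschmieder's
# model L(d) = L0*exp(-k*d) + Lf*(1 - exp(-k*d)) to an observed luminance
# curve and convert the extinction coefficient k to a visibility distance.
import numpy as np
from scipy.optimize import curve_fit

def koschmieder(d, L0, Lf, k):
    """Luminance at distance d: object luminance L0 decays toward airlight Lf."""
    att = np.exp(-k * d)
    return L0 * att + Lf * (1.0 - att)

def estimate_visibility(distances, luminances, contrast_threshold=0.02):
    """Fit (L0, Lf, k) by least squares and return visibility = -ln(eps)/k."""
    p0 = (luminances[0], luminances[-1], 0.01)  # crude initial guess
    (L0, Lf, k), _ = curve_fit(koschmieder, distances, luminances, p0=p0,
                               bounds=([0, 0, 1e-6], [np.inf, np.inf, 1.0]))
    return -np.log(contrast_threshold) / k  # ~3.912/k for the 2% threshold

# Synthetic example: simulate fog with k = 0.02 m^-1 (visibility ~ 196 m).
d = np.linspace(10, 300, 60)
L = koschmieder(d, 80.0, 200.0, 0.02) + np.random.normal(0, 1.0, d.size)
print(f"estimated visibility: {estimate_visibility(d, L):.0f} m")
```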

Place, publisher, year, edition, pages
Elsevier, 2018
Keywords
Atmospheric visibility estimation, Computer vision, Fog and haze, Piecewise stationary time series, Variational approach
National Category
Infrastructure Engineering
Identifiers
urn:nbn:se:kth:diva-227594 (URN), 10.1016/j.scs.2018.02.001 (DOI), 000433169800020 (ISI), 2-s2.0-85042790582 (Scopus ID)
Note

QC 20180521

Available from: 2018-05-21. Created: 2018-05-21. Last updated: 2018-07-02. Bibliographically approved.
Khan, M. S., Halawani, A., Rehman, S. U. & Li, H. (2018). Action Augmented Real Virtuality: A Design for Presence. IEEE Transactions on Cognitive and Developmental Systems, 10(4), 961-972
2018 (English). In: IEEE Transactions on Cognitive and Developmental Systems, ISSN 2379-8920, Vol. 10, no 4, p. 961-972. Article in journal (Refereed). Published.
Abstract [en]

This paper addresses the question of how to design a video teleconferencing setup that increases the experience of spatial and social presence. Traditional video teleconferencing setups fail to convey the nonverbal behaviors that humans express in face-to-face communication, which diminishes the experience of presence. To address this issue, we first present a conceptual framework of presence for video teleconferencing. We introduce a modern presence concept called real virtuality and propose a new way of achieving it based on body or artifact actions that increase the feeling of presence; we name this concept presence through actions. Using this concept, we present the design of a novel action-augmented real virtuality prototype that addresses the challenges related to the design of an action prototype, action embodiment, and face representation. Our action prototype is a telepresence mechatronic robot (TEBoT), and action embodiment is achieved through a head-mounted display (HMD). The face representation solves the problem of face occlusion introduced by the HMD. The novel combination of HMD, TEBoT, and face representation algorithm has been tested in a real video teleconferencing scenario for its ability to solve the challenges related to spatial and social presence. We performed a user study in which participants were asked to experience our novel setup and compare it with a traditional video teleconferencing setup. The results show that the action capabilities increase not only the feeling of spatial presence but also the feeling of social presence of a remote person among local collaborators.

Place, publisher, year, edition, pages
Institute of Electrical and Electronics Engineers (IEEE), 2018
Keywords
Actions, embodied telepresence system, embodiment, face occlusion, face retrieval, perception, real virtuality, telepresence, virtual reality, WebRTC
National Category
Interaction Technologies
Identifiers
urn:nbn:se:kth:diva-240755 (URN), 10.1109/TCDS.2018.2828865 (DOI), 000452636400012 (ISI), 2-s2.0-85045736549 (Scopus ID)
Note

QC 20190107

Available from: 2019-01-07. Created: 2019-01-07. Last updated: 2019-01-07. Bibliographically approved.
Xie, S., Yang, C., Zhang, Z. & Li, H. (2018). Scatter Artifacts Removal Using Learning-Based Method for CBCT in IGRT System. IEEE Access, 6, 78031-78037
2018 (English). In: IEEE Access, E-ISSN 2169-3536, Vol. 6, p. 78031-78037. Article in journal (Refereed). Published.
Abstract [en]

Cone-beam computed tomography (CBCT) has shown enormous potential in recent years, but it is limited by severe scatter artifacts. This paper proposes a scatter-correction algorithm based on a deep convolutional neural network to reduce artifacts for CBCT in an image-guided radiation therapy (IGRT) system. A two-step registration method, which is essential to our algorithm, is implemented to preprocess data before training. Testing results on real data acquired from the IGRT system demonstrate the ability of our approach to learn the artifact distribution. Furthermore, the proposed method can be applied to enhance performance in applications such as dose estimation and segmentation.
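The abstract does not specify the network, so the following is only a hedged sketch of learning-based scatter correction in the spirit described: a small residual CNN trained to map a scatter-corrupted CBCT slice toward a registered artifact-free reference (mirroring the registration preprocessing the paper mentions). The architecture, loss, and all names are assumptions, not the paper's published model.

```python
# A minimal sketch of learning-based scatter correction (architecture and
# training details are assumptions): a small CNN predicts the scatter
# artifact in a CBCT slice and subtracts it, supervised by a registered
# artifact-free reference slice.
import torch
import torch.nn as nn

class ScatterCorrectionCNN(nn.Module):
    def __init__(self, channels=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, channels, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(channels, 1, 3, padding=1),
        )

    def forward(self, x):
        # Residual learning: estimate the artifact, then remove it.
        return x - self.net(x)

model = ScatterCorrectionCNN()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.MSELoss()

# Dummy batch: registered (CBCT, reference) slice pairs, shape (N, 1, H, W).
cbct = torch.rand(4, 1, 128, 128)
reference = torch.rand(4, 1, 128, 128)

optimizer.zero_grad()
loss = loss_fn(model(cbct), reference)
loss.backward()
optimizer.step()
print(f"training loss: {loss.item():.4f}")
```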

Place, publisher, year, edition, pages
Institute of Electrical and Electronics Engineers (IEEE), 2018
Keywords
CBCT, scatter correction, image registration, deep CNN
National Category
Media and Communication Technology
Identifiers
urn:nbn:se:kth:diva-241235 (URN), 10.1109/ACCESS.2018.2884704 (DOI), 000454607600001 (ISI), 2-s2.0-85058116792 (Scopus ID)
Note

QC 20190117

Available from: 2019-01-17. Created: 2019-01-17. Last updated: 2019-01-17. Bibliographically approved.
Zhu, B., Hedman, A. & Li, H. (2017). Designing Digital Mindfulness: Presence-In and Presence-With versus Presence-Through. In: Proceedings of the 2017 ACM SIGCHI Conference on Human Factors in Computing Systems (CHI '17). Paper presented at the 2017 ACM SIGCHI Conference on Human Factors in Computing Systems (CHI '17) (pp. 2685-2695). Association for Computing Machinery
2017 (English). In: Proceedings of the 2017 ACM SIGCHI Conference on Human Factors in Computing Systems (CHI '17), Association for Computing Machinery, 2017, p. 2685-2695. Conference paper, Published paper (Refereed).
Abstract [en]

The digital health and wellbeing movement has led to the development of digital mindfulness applications that aim to help people become mindful. In this paper we suggest a broad scheme for classifying and ordering apps intended to support mindfulness. This scheme consists of four levels of what we here term digital mindfulness. One crucial aspect of the fourth level is that artifacts at this level allow for what we term presence-with and presence-in, as opposed to presence-through, which occurs at the first three levels. We articulate our four levels, along with specific design qualities, through concrete examples of existing mindfulness apps and through research through design (RtD) work conducted with design fiction examples. We then use a working design case prototype to further illustrate the possibilities of presence-with and presence-in. We hope our four-level digital mindfulness framework will be useful to other researchers in discussing and planning the design of their own mindfulness apps and digital artifacts.

Place, publisher, year, edition, pages
Association for Computing Machinery, 2017
Keywords
Digital mindfulness, design, presence, interaction, wellbeing, attention, awareness, being, research through design, aesthetics
National Category
Computer and Information Sciences
Identifiers
urn:nbn:se:kth:diva-225813 (URN), 10.1145/3025453.3025590 (DOI), 000426970502061 (ISI)
Conference
The 2017 ACM SIGCHI Conference on Human Factors in Computing Systems (CHI '17)
Note

QC 20180409

Available from: 2018-04-09. Created: 2018-04-09. Last updated: 2018-04-09. Bibliographically approved.
Zhu, B., Hedman, A., Feng, S., Li, H. & Osika, W. (2017). Designing, Prototyping and Evaluating Digital Mindfulness Applications: A Case Study of Mindful Breathing for Stress Reduction. Journal of Medical Internet Research, 19(6), Article ID e197.
2017 (English). In: Journal of Medical Internet Research, ISSN 1439-4456, E-ISSN 1438-8871, Vol. 19, no 6, article id e197. Article in journal (Refereed). Published.
Abstract [en]

Background: During the past decade, there has been a rapid increase in interactive apps designed for health and wellbeing. Yet little research has been published on frameworks for the design and evaluation of digital mindfulness-facilitating technologies. Moreover, many existing digital mindfulness applications are purely software based, leaving room for further exploration and assessment of designs that make more use of the physical qualities of artifacts.

Objective: The study aimed to develop and test a new physical digital mindfulness prototype designed for stress reduction.

Methods: In this case study, we designed, developed, and evaluated HU, a physical digital mindfulness prototype designed for stress reduction. In the first phase, we used vapor and light to support mindful breathing and invited 25 participants, recruited through snowball sampling, to test HU. In the second phase, we added sonification. We deployed a package of probes, such as photos, diaries, and cards, to collect data from users who explored HU in their homes. Thereafter, we evaluated our installation in a pilot study using both self-assessed stress levels and heart rate (HR) and heart rate variability (HRV) measures, in order to measure stress resilience effects. After the experiment, we performed a semistructured interview to reflect on HU and investigate the design of digital mindfulness apps for stress reduction.

Results: The results of the first phase showed that 22 of 25 participants (88%) claimed vapor and light could be effective ways of promoting mindful breathing, with vapor potentially supporting mindful breathing better than light (especially for mindfulness beginners). In addition, a majority of the participants mentioned sound as an alternative medium. In the second phase, we found that participants thought HU could work well for stress reduction. We compared the effect of silent HU (light and vapor without sound) and sonified HU on 5 participants. Subjective stress levels improved significantly with both silent and sonified HU. The mean HR with silent HU was significantly lower than at resting baseline and with sonified HU, and the mean root mean square of successive differences (RMSSD) with silent HU was significantly higher than at resting baseline. The differences between our objective and subjective assessments were intriguing and prompted us to investigate them further.

Conclusions: Our evaluation indicated that HU could facilitate relaxed breathing and stress reduction. There was a difference in outcome between the physiological measures of stress and the subjective reports of stress, as well as large intervariability among study participants. Our conclusion is that stress reduction tools should be customized, and that designing mindfulness technology for stress reduction is a complex process requiring the cooperation of designers, human-computer interaction (HCI) experts, and clinicians.
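For reference, RMSSD (root mean square of successive differences) is a standard short-term HRV measure computed from consecutive RR (inter-beat) intervals; higher values generally reflect stronger parasympathetic (relaxation) activity. The short sketch below shows the standard calculation only; the study's actual acquisition and analysis pipeline is not reproduced.

```python
# RMSSD: root mean square of successive differences of RR (inter-beat)
# intervals, a standard short-term heart rate variability measure.
import numpy as np

def rmssd(rr_intervals_ms):
    """RR intervals in milliseconds -> RMSSD in milliseconds."""
    diffs = np.diff(np.asarray(rr_intervals_ms, dtype=float))
    return np.sqrt(np.mean(diffs ** 2))

rr = [812, 845, 790, 860, 830, 815]  # example RR series in ms
print(f"RMSSD = {rmssd(rr):.1f} ms")
```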

Place, publisher, year, edition, pages
JMIR Publications, Inc., 2017
Keywords
respiration, biofeedback, mindfulness, stress, device design, sound, light, breathing, heart rate, relaxation
National Category
Health Sciences
Identifiers
urn:nbn:se:kth:diva-213815 (URN), 10.2196/jmir.6955 (DOI), 000408350400001 (ISI), 28615157 (PubMedID), 2-s2.0-85021836442 (Scopus ID)
Note

QC 20170911

Available from: 2017-09-11. Created: 2017-09-11. Last updated: 2017-11-29. Bibliographically approved.
Ge, Q., Shen, F., Jing, X.-Y., Wu, F., Xie, S.-P., Yue, D. & Li, H. (2016). Active contour evolved by joint probability classification on Riemannian manifold. Signal, Image and Video Processing, 10(7), 1257-1264
2016 (English). In: Signal, Image and Video Processing, ISSN 1863-1703, E-ISSN 1863-1711, Vol. 10, no 7, p. 1257-1264. Article in journal (Refereed). Published.
Abstract [en]

In this paper, we present an active contour model for image segmentation based on a nonparametric distribution metric, requiring no a priori intensity model of the image. A novel nonparametric distance metric, called joint probability classification, is established to drive the active contour while avoiding the instability induced by multimodal intensity distributions. Treating the image as a Riemannian manifold carrying both spatial and intensity information, we perform the contour evolution on the image manifold by embedding geometric image features into the active contour model. Experimental results on medical and texture images demonstrate the advantages of the proposed method.

Place, publisher, year, edition, pages
Springer London, 2016
Keywords
Active contour, Image segmentation, Joint probability classification, Nonparametric distribution, Riemannian manifold
National Category
Computer Vision and Robotics (Autonomous Systems)
Identifiers
urn:nbn:se:kth:diva-187086 (URN), 10.1007/s11760-016-0891-8 (DOI), 000382363300010 (ISI), 2-s2.0-84964010146 (Scopus ID)
Note

QC 20161208

Available from: 2016-05-17. Created: 2016-05-17. Last updated: 2018-01-10. Bibliographically approved.
Zhu, B., Hedman, A. & Li, H. (2016). Design digital mindfulness for personal wellbeing. In: Proceedings of the 28th Australian Computer-Human Interaction Conference, OzCHI 2016. Paper presented at the 28th Australian Computer-Human Interaction Conference, OzCHI 2016, 29 November 2016 through 2 December 2016 (pp. 626-627). Association for Computing Machinery, Inc
2016 (English). In: Proceedings of the 28th Australian Computer-Human Interaction Conference, OzCHI 2016, Association for Computing Machinery, Inc, 2016, p. 626-627. Conference paper, Published paper (Refereed).
Abstract [en]

The digital health and wellbeing movement has led to the development of what we here call digital mindfulness applications: applications that allow people to improve their psychological wellbeing. Approaches to digital mindfulness vary greatly, and as a researcher it can be difficult to gain an overview of the field and decide what to focus on in one's own research. Here we describe four levels of digital mindfulness with examples and focus on the larger question of how to design for digital mindfulness. We conclude with a set of general issues that we hope will generate further discussion and research in the field of digital mindfulness.

Place, publisher, year, edition, pages
Association for Computing Machinery, Inc, 2016
Keywords
Design, Digital mindfulness, Mental health, Wellbeing, Interactive computer systems, Psychological well-being, Human computer interaction
National Category
Computer and Information Sciences
Identifiers
urn:nbn:se:kth:diva-207486 (URN), 10.1145/3010915.3011841 (DOI), 2-s2.0-85012014179 (Scopus ID), 9781450346184 (ISBN)
Conference
28th Australian Computer-Human Interaction Conference, OzCHI 2016, 29 November 2016 through 2 December 2016
Note

QC 20170612

Available from: 2017-06-12. Created: 2017-06-12. Last updated: 2018-01-13. Bibliographically approved.
Li, B., Li, H. & Söderström, U. (2016). Distinctive curves features. Electronics Letters, 52(3), 197-U83
2016 (English). In: Electronics Letters, ISSN 0013-5194, E-ISSN 1350-911X, Vol. 52, no 3, p. 197-U83. Article in journal (Refereed). Published.
Abstract [en]

Curves and lines are geometrical, abstract features of an image. Whereas interest points are more limited, curves and lines provide much more information about the image structure. However, research on curve and line detection is very fragmented, and the concept of scale space has not yet been well integrated into it. Keypoints (e.g. SIFT, SURF, ORB) are a successful concept for representing features (e.g. blobs, corners) in scale space. Stimulated by the keypoint concept, we propose a method that extracts distinctive curves (DICU) in scale space, including lines as a special form of curve feature. A curve feature can be represented by three keypoints (two end points and one middle point). A good way to test the quality of detected curves is to analyse their repeatability under various image transformations, so we evaluate DICU on the standard Oxford benchmark, computing the overlap error by averaging the overlap errors of the three keypoints on each curve. Experimental results show that DICU achieves good repeatability compared with other state-of-the-art methods. To match curve features, a relatively uncomplicated way is to combine the local descriptors of the three keypoints on each curve.
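The matching recipe in the abstract, combining local descriptors of a curve's two endpoints and middle point, can be illustrated with a hedged sketch. The curve detector itself is not reproduced, and the choice of ORB descriptors below is an assumption for illustration; the abstract does not fix a particular descriptor.

```python
# A hedged sketch of the abstract's matching recipe: represent each curve by
# three keypoints (two endpoints + middle point), describe each with a local
# descriptor, and score curve pairs by the summed descriptor distance.
import cv2
import numpy as np

def curve_keypoints(polyline, size=31):
    """Endpoints and middle point of a curve given as an (N, 2) point array."""
    pts = [polyline[0], polyline[len(polyline) // 2], polyline[-1]]
    return [cv2.KeyPoint(float(x), float(y), size) for x, y in pts]

def describe_curve(gray, polyline, orb):
    """ORB descriptors at the three curve keypoints (None if filtered out)."""
    _, desc = orb.compute(gray, curve_keypoints(polyline))
    return desc  # (3, 32) uint8 descriptors in the typical case

def curve_distance(desc_a, desc_b):
    """Sum of Hamming distances over the three keypoint descriptors."""
    return sum(cv2.norm(a, b, cv2.NORM_HAMMING) for a, b in zip(desc_a, desc_b))

orb = cv2.ORB_create()
img = np.random.randint(0, 255, (240, 320), dtype=np.uint8)  # stand-in image
curve1 = np.array([[50, 60], [80, 90], [120, 100], [160, 130]])
curve2 = np.array([[52, 61], [82, 92], [121, 103], [158, 131]])
d1, d2 = describe_curve(img, curve1, orb), describe_curve(img, curve2, orb)
if d1 is not None and d2 is not None:
    print("curve distance:", curve_distance(d1, d2))
```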

Place, publisher, year, edition, pages
Institution of Engineering and Technology, 2016
National Category
Electrical Engineering, Electronic Engineering, Information Engineering
Identifiers
urn:nbn:se:kth:diva-183319 (URN), 10.1049/el.2015.3495 (DOI), 000369674000014 (ISI), 2-s2.0-84956854574 (Scopus ID)
Note

QC 20160319

Available from: 2016-03-09. Created: 2016-03-07. Last updated: 2017-11-30. Bibliographically approved.
Khan, M. S., Réhman, S. U., Söderström, U., Halawani, A. & Li, H. (2016). Face-off: A face reconstruction technique for virtual reality (VR) scenarios. In: 14th European Conference on Computer Vision, ECCV 2016. Paper presented at ECCV 2016, 8 October 2016 through 16 October 2016 (pp. 490-503). Springer
2016 (English). In: 14th European Conference on Computer Vision, ECCV 2016, Springer, 2016, p. 490-503. Conference paper, Published paper (Refereed).
Abstract [en]

Virtual reality (VR) headsets occlude a significant portion of the human face. Yet the real human face is required in many VR applications, for example video teleconferencing. This paper proposes a wearable-camera-based solution to reconstruct the real face of a person wearing a VR headset. Our solution is built on asymmetrical principal component analysis (aPCA). A user-specific training model is built using aPCA from full-face, lip, and eye-region information. During the testing phase, the lower face region and partial eye information are used to reconstruct the wearer's face. The online testing session consists of two phases: (i) a calibration phase and (ii) a reconstruction phase. In the former, a small calibration step aligns the test information with the training data, while the latter uses half-face information to reconstruct the full face from the aPCA-based trained data. The proposed approach is validated with qualitative and quantitative analysis.
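As a hedged illustration of the general idea behind reconstruction from a partially observed face (not the paper's exact aPCA formulation): learn a PCA basis from full-face training vectors, estimate the coefficients of a test face from only its visible lower half by least squares, and synthesize the occluded half from those coefficients. All dimensions and data below are synthetic stand-ins.

```python
# A hedged sketch of PCA-based reconstruction from a half-occluded face
# (the paper's aPCA formulation differs in detail).
import numpy as np

rng = np.random.default_rng(0)
H, W, n_train, n_components = 32, 32, 200, 20
faces = rng.random((n_train, H * W))             # stand-in training faces

mean = faces.mean(axis=0)
_, _, Vt = np.linalg.svd(faces - mean, full_matrices=False)
basis = Vt[:n_components]                        # (k, H*W) principal axes

visible = np.zeros(H * W, dtype=bool)
visible.reshape(H, W)[H // 2:, :] = True         # HMD occludes the upper half

test_face = rng.random(H * W)                    # stand-in test face
obs = test_face[visible]

# Least-squares coefficients from visible pixels only, then full synthesis.
coeffs, *_ = np.linalg.lstsq(basis[:, visible].T, obs - mean[visible], rcond=None)
reconstruction = mean + coeffs @ basis           # full H*W face estimate

err = np.linalg.norm(reconstruction[~visible] - test_face[~visible])
print(f"reconstruction error on occluded half: {err:.3f}")
```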

Place, publisher, year, edition, pages
Springer, 2016
Keywords
Face reconstruction, Oculus, PCA, Virtual reality, VR headset, Wearable setup, Calibration, Computer vision, Wearable technology, Qualitative and quantitative analysis, Test information, Video teleconferencing, Wearable cameras, Principal component analysis
National Category
Computer Sciences
Identifiers
urn:nbn:se:kth:diva-195544 (URN), 10.1007/978-3-319-46604-0_35 (DOI), 2-s2.0-84989829345 (Scopus ID), 9783319466033 (ISBN)
Conference
14th European Conference on Computer Vision, ECCV 2016, 8 October 2016 through 16 October 2016
Note

QC 20161121

Available from: 2016-11-21. Created: 2016-11-03. Last updated: 2018-01-13. Bibliographically approved.
Li, H. & Hedman, A. (2016). Harnessing Crowds to Avert or Mitigate Acts of Terrorism: A Collective Intelligence Call for Action. In: Brynielsson, J. & Johansson, F. (Eds.), 2016 European Intelligence and Security Informatics Conference (EISIC). Paper presented at the European Intelligence and Security Informatics Conference (EISIC), Aug 17-19, 2016, Uppsala, Sweden (pp. 203-203). IEEE
2016 (English). In: 2016 European Intelligence and Security Informatics Conference (EISIC) / [ed] Brynielsson, J. & Johansson, F., IEEE, 2016, p. 203-203. Conference paper, Published paper (Refereed).
Abstract [en]

This paper proposes averting or mitigating acts of terrorism through non-traditional means of surveillance and control: the use of crowdsourcing (collective intelligence) and the development of a new class of anti-terror mobile apps. The proposed class of anti-terror apps is based on two dimensions: the individual and the central. By individual, we mean the individual app user; by central, we mean a central organizational locus of coordination and control in the fight against terrorism. Such a central locus could be a governmental agency or a national/international security organization active in the fight against terrorism.

Place, publisher, year, edition, pages
IEEE, 2016
Series
European Intelligence and Security Informatics Conference, ISSN 2572-3723
Keywords
terrorism, mitigation, aversion, apps
National Category
Computer and Information Sciences
Identifiers
urn:nbn:se:kth:diva-215867 (URN), 10.1109/EISIC.2016.057 (DOI), 000411272300046 (ISI), 2-s2.0-85017257428 (Scopus ID), 978-1-5090-2857-3 (ISBN)
Conference
European Intelligence and Security Informatics Conference (EISIC), Aug 17-19, 2016, Uppsala, Sweden
Note

QC 20171018

Available from: 2017-10-18. Created: 2017-10-18. Last updated: 2018-06-19. Bibliographically approved.
Identifiers
ORCID iD: orcid.org/0000-0003-3779-5647
