Publications (10 of 64)
Cheng, X., Yang, B., Tan, K., Isaksson, E., Li, L., Hedman, A., . . . Li, H. (2019). A Contactless Measuring Method of Skin Temperature based on the Skin Sensitivity Index and Deep Learning. Applied Sciences, 9(7), Article ID 1375.
2019 (English) In: Applied Sciences, E-ISSN 2076-3417, Vol. 9, no. 7, article id 1375. Article in journal (Refereed). Published.
Abstract [en]

Featured Application: The NISDL method proposed in this paper can be used for real-time contactless measurement of human skin temperature, which reflects the thermal comfort status of the human body and can be used to control HVAC devices. Abstract: In human-centered intelligent buildings, real-time measurement of human thermal comfort plays a critical role and supplies feedback control signals for building heating, ventilation, and air conditioning (HVAC) systems. Owing to the challenges of intra- and inter-individual differences and subtle skin variations, no satisfactory solution for thermal comfort measurement has emerged to date. In this paper, a contactless measuring method based on a skin sensitivity index and deep learning (NISDL) is proposed to measure skin temperature in real time. A new evaluation index, the skin sensitivity index (SSI), is defined to overcome individual differences and subtle skin variations. To demonstrate the effectiveness of the proposed SSI, two multi-layer deep learning frameworks (NISDL methods I and II) were designed, with DenseNet201 used to extract features from skin images. The partly personal saturation temperature (NIPST) algorithm and a deep learning variant without SSI (DL) were used for comparison. In total, 1.44 million images were used for validation. The results show that 55.62% and 52.25% of the errors of NISDL methods I and II, respectively, fall within (0 °C, 0.25 °C), whereas the corresponding proportion for NIPST is 35.39%.
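
As a concrete illustration of the pipeline the abstract describes, the sketch below uses DenseNet201 purely as a feature extractor with a small regression head mapping skin-image features to a temperature estimate. This is a minimal sketch under our own assumptions (head width, input resolution, frozen backbone); it is not the authors' released code.

```python
import torch
import torch.nn as nn
from torchvision.models import densenet201

# Sketch only: DenseNet201 as a frozen feature extractor with a small
# regression head predicting skin temperature. The head width, input size,
# and training setup are our assumptions, not details from the paper.
backbone = densenet201(weights=None)   # load pretrained weights in practice
backbone.classifier = nn.Identity()    # expose the 1920-d DenseNet201 features

head = nn.Sequential(
    nn.Linear(1920, 256),
    nn.ReLU(),
    nn.Linear(256, 1),                 # predicted skin temperature (°C)
)

def predict_temperature(batch: torch.Tensor) -> torch.Tensor:
    """batch: (N, 3, 224, 224) normalized skin-image tensor."""
    with torch.no_grad():
        features = backbone(batch)
    return head(features)
```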

Place, publisher, year, edition, pages
MDPI, 2019
Keywords
contactless measurements, skin sensitivity index, thermal comfort, subtleness magnification, deep learning, piecewise stationary time series
National Category
Computer Vision and Robotics (Autonomous Systems)
Identifiers
urn:nbn:se:kth:diva-254118 (URN)
10.3390/app9071375 (DOI)
000466547500110 (ISI)
2-s2.0-85064083775 (Scopus ID)
Note

QC 20190624

Available from: 2019-06-24. Created: 2019-06-24. Last updated: 2019-06-24. Bibliographically approved.
Cheng, X., Yang, B., Hedman, A., Olofsson, T., Li, H. & Van Gool, L. (2019). NIDL: A pilot study of contactless measurement of skin temperature for intelligent building. Energy and Buildings, 198, 340-352
2019 (English) In: Energy and Buildings, ISSN 0378-7788, E-ISSN 1872-6178, Vol. 198, p. 340-352. Article in journal (Refereed). Published.
Abstract [en]

Human thermal comfort measurement plays a critical role in providing feedback signals for building energy efficiency. A contactless measuring method based on subtleness magnification and deep learning (NIDL) was designed to achieve a comfortable, energy-efficient built environment. The method relies on skin feature data, e.g., subtle motion and texture variation, and a 315-layer deep neural network that constructs the relationship between skin features and skin temperature. A physiological experiment was conducted to collect feature data (1.44 million samples) and validate the algorithm. A contactless measurement algorithm based on a partly-personalized saturation temperature model (NIPST) was used for performance comparison. The results show that the mean and median errors of NIDL are 0.476 °C and 0.343 °C, corresponding to accuracy improvements of 39.07% and 38.76%, respectively.
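
The reported 0.476 °C and 0.343 °C figures are mean and median absolute errors of the predicted skin temperature. A minimal sketch of how such statistics are computed (array names are ours):

```python
import numpy as np

def temperature_errors(pred_c: np.ndarray, true_c: np.ndarray) -> tuple[float, float]:
    """Mean and median absolute error (°C) between predictions and ground truth."""
    err = np.abs(pred_c - true_c)
    return float(err.mean()), float(np.median(err))
```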

Place, publisher, year, edition, pages
Elsevier, 2019
Keywords
Contactless method, Thermal comfort measurement, Vision-based subtleness magnification, Deep learning, Intelligent building
National Category
Building Technologies
Identifiers
urn:nbn:se:kth:diva-255723 (URN)
10.1016/j.enbuild.2019.06.007 (DOI)
000477091800027 (ISI)
Note

QC 20190814

Available from: 2019-08-14. Created: 2019-08-14. Last updated: 2019-08-14. Bibliographically approved.
Xie, S., Zheng, X., Shao, W.-Z., Zhang, Y.-D., Lv, T. & Li, H. (2019). Non-Blind Image Deblurring Method by the Total Variation Deep Network. IEEE Access, 7, 37536-37544
2019 (English) In: IEEE Access, E-ISSN 2169-3536, Vol. 7, p. 37536-37544. Article in journal (Refereed). Published.
Abstract [en]

Many non-blind image deblurring methods exist, especially those based on the total variation (TV) model; however, how to choose the regularization parameters adaptively remains a major open problem. We propose a novel method based on a TV-inspired deep network that learns the regularization parameters adaptively. Combining deep learning with prior knowledge, we set up a TV-based deep network in which the regularization parameters, such as biases and weights, are updated automatically by the network, avoiding sophisticated manual tuning. Experimental results show that our proposed network significantly outperforms several other methods with respect to detail retention and anti-noise performance. At the same time, it achieves the same effect with a minimal training set, thus speeding up computation.
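
To make the unrolling idea concrete, here is a generic smoothed-TV scheme in which the per-stage regularization weight and step size are learnable network parameters rather than hand-chosen constants. This is our sketch of the general technique, shown for a denoising data term with the blur operator omitted for brevity; it is not the authors' 2019 architecture.

```python
import torch
import torch.nn as nn

def tv_grad(u: torch.Tensor, eps: float = 1e-3) -> torch.Tensor:
    """Gradient of a smoothed isotropic TV term for an (N, 1, H, W) batch."""
    dx = u - torch.roll(u, 1, dims=-1)
    dy = u - torch.roll(u, 1, dims=-2)
    mag = torch.sqrt(dx * dx + dy * dy + eps)
    px, py = dx / mag, dy / mag
    return (px - torch.roll(px, -1, dims=-1)) + (py - torch.roll(py, -1, dims=-2))

class UnrolledTV(nn.Module):
    """Fixed number of gradient steps on ||u - f||^2 / 2 + lam * TV(u),
    with the weight lam and step size tau learned per stage."""

    def __init__(self, n_stages: int = 10):
        super().__init__()
        self.lam = nn.Parameter(torch.full((n_stages,), 0.1))
        self.tau = nn.Parameter(torch.full((n_stages,), 0.2))

    def forward(self, f: torch.Tensor) -> torch.Tensor:
        u = f.clone()
        for lam, tau in zip(self.lam, self.tau):
            u = u - tau * ((u - f) + lam * tv_grad(u))
        return u
```

Training such a network end-to-end on pairs of degraded and clean images is what replaces the manual parameter search the abstract refers to.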

Place, publisher, year, edition, pages
IEEE, 2019
Keywords
Non-blind image deblurring, total variation model, deep learning
National Category
Electrical Engineering, Electronic Engineering, Information Engineering
Identifiers
urn:nbn:se:kth:diva-249827 (URN)
10.1109/ACCESS.2019.2891626 (DOI)
000463637800001 (ISI)
Note

QC 20190423

Available from: 2019-04-23. Created: 2019-04-23. Last updated: 2019-04-23. Bibliographically approved.
Shao, W.-Z., Ge, Q., Wang, L.-Q., Lin, Y.-Z., Deng, H.-S. & Li, H. (2019). Nonparametric Blind Super-Resolution Using Adaptive Heavy-Tailed Priors. Journal of Mathematical Imaging and Vision, 61(6), 885-917
2019 (English) In: Journal of Mathematical Imaging and Vision, ISSN 0924-9907, E-ISSN 1573-7683, Vol. 61, no. 6, p. 885-917. Article in journal (Refereed). Published.
Abstract [en]

Single-image nonparametric blind super-resolution is a fundamental image restoration problem that has been largely ignored over the past decades by the computational photography and computer vision communities. Interestingly, learning-based single-image super-resolution (SR) has developed rapidly since the rise of sparse representation in the mid-2000s and especially of representation learning in the 2010s, wherein the high-resolution image is generally assumed to be blurred by a bicubic or Gaussian kernel. However, this parametric assumption on the form of blur kernels does not hold in most practical applications, because in real low-resolution imaging a high-resolution image can undergo complex blur processes, e.g., Gaussian-shaped kernels of varying sizes, ellipse-shaped kernels of varying orientations, and curvilinear kernels of varying trajectories. This paper is mainly motivated by one of our previous works, Shao and Elad (in: Zhang (ed) ICIG 2015, Part III, Lecture Notes in Computer Science, Springer, Cham, 2015). Specifically, we take one step further and present a type of adaptive heavy-tailed image priors, which yield a new regularized formulation for nonparametric blind super-resolution. The new priors can be expressed and understood as a generalized integration of the normalized sparsity measure and relative total variation. Although the proposed priors appear simple, their core merit is their practical capability for the challenging task of nonparametric blur kernel estimation in both super-resolution and deblurring. Harnessing the priors, a higher-quality intermediate high-resolution image becomes attainable, and therefore more accurate blur kernel estimation can be accomplished. Extensive experiments on both synthetic and real-world blurred low-resolution images convincingly demonstrate the comparable or even superior performance of the proposed algorithm. Meanwhile, the proposed priors also prove applicable to blind image deblurring, a degenerate case of nonparametric blind SR.
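
For reference, the two measures the abstract says the prior generalizes can be written down compactly: the normalized sparsity measure is the l1/l2 ratio of image gradients, and relative total variation compares a windowed sum of absolute gradients against the absolute value of the windowed gradient sum. The sketch below computes simplified versions (our reading of the cited measures, with RTV restricted to the x-direction for brevity; not code from the paper).

```python
import numpy as np
from scipy.ndimage import uniform_filter

def normalized_sparsity(img: np.ndarray) -> float:
    """l1/l2 ratio of image gradients; lower values favor sharper images."""
    g = np.concatenate([np.diff(img, axis=0).ravel(),
                        np.diff(img, axis=1).ravel()])
    return float(np.abs(g).sum() / (np.linalg.norm(g) + 1e-12))

def relative_tv_x(img: np.ndarray, win: int = 3, eps: float = 1e-3) -> float:
    """Relative total variation, x-direction only."""
    gx = np.gradient(img, axis=1)
    d = uniform_filter(np.abs(gx), size=win)  # windowed total variation
    l = np.abs(uniform_filter(gx, size=win))  # windowed inherent variation
    return float((d / (l + eps)).mean())
```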

Place, publisher, year, edition, pages
Springer, 2019
Keywords
Super-resolution, Blind deconvolution, Camera shake deblurring, Discriminative models, Convolutional neural networks, Normalized sparsity, Relative total variation
National Category
Computer Vision and Robotics (Autonomous Systems)
Identifiers
urn:nbn:se:kth:diva-255571 (URN)
10.1007/s10851-019-00876-1 (DOI)
000475765100008 (ISI)
2-s2.0-85062626918 (Scopus ID)
Note

QC 20190802

Available from: 2019-08-02. Created: 2019-08-02. Last updated: 2019-08-02. Bibliographically approved.
Cheng, X., Yang, B., Liu, G., Olofsson, T. & Li, H. (2018). A variational approach to atmospheric visibility estimation in the weather of fog and haze. Sustainable Cities and Society, 39, 215-224
2018 (English) In: Sustainable Cities and Society, ISSN 2210-6707, Vol. 39, p. 215-224. Article in journal (Refereed). Published.
Abstract [en]

Real-time atmospheric visibility estimation in foggy and hazy weather plays a crucial role in ensuring traffic safety. Overcoming the inherent drawbacks of traditional optical estimation methods, such as limited sampling volume and high cost, vision-based approaches have received much attention in recent research on atmospheric visibility estimation. Based on the classical Koschmieder formula, atmospheric visibility is estimated by extracting an inherent extinction coefficient. In this paper we present a variational framework that handles the time-varying nature of the extinction coefficient, and we develop a novel algorithm for extracting the extinction coefficient through piecewise functional fitting of observed luminance curves. The algorithm is validated and evaluated on a large database of road traffic video from the Tongqi expressway in China. The test results are very encouraging, showing that the proposed algorithm achieves an estimation error rate of 10%. More significantly, this is the first time the effectiveness of Koschmieder's formula for atmospheric visibility estimation has been validated on such a large dataset, containing more than two million luminance curves extracted from real-world traffic video surveillance data.
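
For context, Koschmieder's formula relates visibility V to the extinction coefficient sigma as V = -ln(eps) / sigma, where eps is the contrast threshold (0.05 under the CIE convention; eps = 0.02 gives the familiar V = 3.912/sigma). The paper's contribution lies in estimating sigma from luminance curves; once sigma is known, the conversion is a one-liner (a sketch, not the paper's code):

```python
import math

def visibility_m(sigma_per_m: float, eps: float = 0.05) -> float:
    """Meteorological visibility (m) from the extinction coefficient (1/m)."""
    return -math.log(eps) / sigma_per_m
```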

Place, publisher, year, edition, pages
Elsevier, 2018
Keywords
Atmospheric visibility estimation, Computer vision, Fog and haze, Piecewise stationary time series, Variational approach
National Category
Infrastructure Engineering
Identifiers
urn:nbn:se:kth:diva-227594 (URN)
10.1016/j.scs.2018.02.001 (DOI)
000433169800020 (ISI)
2-s2.0-85042790582 (Scopus ID)
Note

QC 20180521

Available from: 2018-05-21. Created: 2018-05-21. Last updated: 2018-07-02. Bibliographically approved.
Khan, M. S., Halawani, A., Rehman, S. U. & Li, H. (2018). Action Augmented Real Virtuality: A Design for Presence. IEEE Transactions on Cognitive and Developmental Systems, 10(4), 961-972
2018 (English) In: IEEE Transactions on Cognitive and Developmental Systems, ISSN 2379-8920, Vol. 10, no. 4, p. 961-972. Article in journal (Refereed). Published.
Abstract [en]

This paper addresses the important question of how to design a video teleconferencing setup that increases the experience of spatial and social presence. Traditional video teleconferencing setups fail to convey the nonverbal behaviors that humans express in face-to-face communication, which results in a diminished experience of presence. To address this issue, we first present a conceptual framework of presence for video teleconferencing. We introduce a modern presence concept called real virtuality and propose a new way of achieving it based on body or artifact actions to increase the feeling of presence, a concept we name presence through actions. Using this concept, we present the design of a novel action-augmented real virtuality prototype that addresses the challenges of designing an action prototype, action embodiment, and face representation. Our action prototype is a telepresence mechatronic robot (TEBoT), and action embodiment is achieved through a head-mounted display (HMD). The face representation solves the problem of face occlusion introduced by the HMD. The novel combination of HMD, TEBoT, and face representation algorithm has been tested in a real video teleconferencing scenario for its ability to address the challenges of spatial and social presence. We performed a user study in which participants experienced our novel setup and compared it with a traditional video teleconferencing setup. The results show that the action capabilities increase not only the feeling of spatial presence but also the feeling of social presence of a remote person among local collaborators.

Place, publisher, year, edition, pages
Institute of Electrical and Electronics Engineers (IEEE), 2018
Keywords
Actions, embodied telepresence system, embodiment, face occlusion, face retrieval, perception, real virtuality, telepresence, virtual reality, WebRTC
National Category
Interaction Technologies
Identifiers
urn:nbn:se:kth:diva-240755 (URN)
10.1109/TCDS.2018.2828865 (DOI)
000452636400012 (ISI)
2-s2.0-85045736549 (Scopus ID)
Note

QC 20190107

Available from: 2019-01-07. Created: 2019-01-07. Last updated: 2019-01-07. Bibliographically approved.
Yan, J., Lu, G., Li, H. & Wang, S. (2018). Bimodal emotion recognition based on facial expression and speech. Journal of Nanjing University of Posts and Telecommunications, 38(1), 60-65
2018 (English) In: Journal of Nanjing University of Posts and Telecommunications, ISSN 1673-5439, Vol. 38, no. 1, p. 60-65. Article in journal (Refereed). Published.
Abstract [en]

In future artificial intelligence, computer-based emotion recognition will play an increasingly important role. For bimodal emotion recognition from facial expression and speech, a feature fusion method based on sparse canonical correlation analysis is presented. Firstly, emotion features are extracted from facial expressions and speech, respectively. Then, sparse canonical correlation analysis is used to fuse the bimodal emotion features. Finally, a K-nearest neighbor classifier is used for emotion recognition. The experimental results show that the bimodal method based on sparse canonical correlation analysis achieves a better recognition rate than either speech or facial expression alone.
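
A hedged sketch of the fusion pipeline follows. The paper uses sparse canonical correlation analysis, for which scikit-learn has no implementation, so plain CCA stands in here; the feature matrices, label array, and component count are illustrative.

```python
import numpy as np
from sklearn.cross_decomposition import CCA
from sklearn.neighbors import KNeighborsClassifier

def fuse_and_classify(X_face, X_speech, y, X_face_test, X_speech_test, n_comp=10):
    """Project both modalities into a shared correlated space, concatenate,
    and classify with K-nearest neighbors."""
    cca = CCA(n_components=n_comp).fit(X_face, X_speech)
    U, V = cca.transform(X_face, X_speech)              # training projections
    Ut, Vt = cca.transform(X_face_test, X_speech_test)  # test projections
    knn = KNeighborsClassifier(n_neighbors=5)
    knn.fit(np.hstack([U, V]), y)                       # fused bimodal features
    return knn.predict(np.hstack([Ut, Vt]))
```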

Place, publisher, year, edition, pages
Journal of Nanjing Institute of Posts and Telecommunications, 2018
Keywords
Bimodal emotion recognition, Facial expression, Sparse canonical correlation analysis, Speech
National Category
Computer and Information Sciences
Identifiers
urn:nbn:se:kth:diva-247225 (URN)
10.14132/j.cnki.1673-5439.2018.01.007 (DOI)
2-s2.0-85056809780 (Scopus ID)
Note

QC 20190403

Available from: 2019-04-03. Created: 2019-04-03. Last updated: 2019-04-03. Bibliographically approved.
Shao, W., Lin, Y., Bao, B., Wang, L., Ge, Q. & Li, H. (2018). Blind deblurring using discriminative image smoothing. In: 1st Chinese Conference on Pattern Recognition and Computer Vision, PRCV 2018. Paper presented at PRCV 2018, 23-26 November 2018 (pp. 490-500). Springer Verlag
2018 (English) In: 1st Chinese Conference on Pattern Recognition and Computer Vision, PRCV 2018, Springer Verlag, 2018, p. 490-500. Conference paper, Published paper (Refereed).
Abstract [en]

This paper aims to exploit the full potential of gradient-based methods, exploring a simple, robust yet discriminative image prior for blind deblurring. The contributions are threefold. First, a pure gradient-based heavy-tailed model is proposed as a generalized integration of the normalized sparsity and the relative total variation. Second, a plug-and-play algorithm is deduced to alternately estimate the intermediate sharp image and the nonparametric blur kernel. With this numerical scheme, image estimation is simplified to an image smoothing problem. Finally, extensive experiments are performed on synthetic benchmark datasets and real blurry images in various scenarios, with comparisons against state-of-the-art approaches. The experimental results demonstrate the effectiveness and robustness of the proposed method.

Place, publisher, year, edition, pages
Springer Verlag, 2018
Keywords
Blind deblurring, Discriminative prior, Low-illumination, Computer vision, Gradient-based method, Image estimation, Low illuminations, Numerical scheme, State-of-the-art approach, Synthetic benchmark, Image enhancement
National Category
Media Engineering
Identifiers
urn:nbn:se:kth:diva-247475 (URN)
10.1007/978-3-030-03398-9_42 (DOI)
2-s2.0-85057075196 (Scopus ID)
9783030033972 (ISBN)
Conference
PRCV 2018, 23-26 November 2018
Note

QC 20190502

Available from: 2019-05-03. Created: 2019-05-03. Last updated: 2019-05-03. Bibliographically approved.
Xie, S., Yang, C., Zhang, Z. & Li, H. (2018). Scatter Artifacts Removal Using Learning-Based Method for CBCT in IGRT System. IEEE Access, 6, 78031-78037
2018 (English) In: IEEE Access, E-ISSN 2169-3536, Vol. 6, p. 78031-78037. Article in journal (Refereed). Published.
Abstract [en]

Cone-beam computed tomography (CBCT) has shown enormous potential in recent years, but it suffers from severe scatter artifacts. This paper proposes a scatter-correction algorithm based on a deep convolutional neural network to reduce artifacts for CBCT in an image-guided radiation therapy (IGRT) system. A two-step registration method, which is essential to our algorithm, is implemented to preprocess the data before training. Test results on real data acquired from the IGRT system demonstrate the ability of our approach to learn the artifact distribution. Furthermore, the proposed method can enhance performance in downstream applications such as dose estimation and segmentation.
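
As a rough illustration of the correction step, the sketch below is a small residual CNN that predicts a scatter-artifact map and subtracts it from the input slice. The paper's actual network and its two-step registration preprocessing are not reproduced here; the depth, channel count, and single-channel input are our assumptions.

```python
import torch
import torch.nn as nn

class ScatterNet(nn.Module):
    """Residual CNN: predicts the scatter-artifact component of a CBCT slice
    and returns the corrected slice."""

    def __init__(self, ch: int = 32):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(1, ch, 3, padding=1), nn.ReLU(),
            nn.Conv2d(ch, ch, 3, padding=1), nn.ReLU(),
            nn.Conv2d(ch, 1, 3, padding=1),  # predicted artifact map
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return x - self.body(x)  # scatter-corrected slice
```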

Place, publisher, year, edition, pages
IEEE, 2018
Keywords
CBCT, scatter correction, image registration, deep CNN
National Category
Media and Communication Technology
Identifiers
urn:nbn:se:kth:diva-241235 (URN)
10.1109/ACCESS.2018.2884704 (DOI)
000454607600001 (ISI)
2-s2.0-85058116792 (Scopus ID)
Note

QC 20190117

Available from: 2019-01-17. Created: 2019-01-17. Last updated: 2019-01-17. Bibliographically approved.
Zhu, B., Hedman, A. & Li, H. (2017). Designing Digital Mindfulness: Presence-In and Presence-With versus Presence-Through. In: Proceedings of the 2017 ACM SIGCHI Conference on Human Factors in Computing Systems (CHI '17). Paper presented at CHI '17 (pp. 2685-2695). ACM
2017 (English) In: Proceedings of the 2017 ACM SIGCHI Conference on Human Factors in Computing Systems (CHI '17), ACM, 2017, p. 2685-2695. Conference paper, Published paper (Refereed).
Abstract [en]

The digital health and wellbeing movement has led to the development of digital mindfulness applications that aim to help people become mindful. In this paper we suggest a broad scheme for classifying and ordering apps intended to support mindfulness. The scheme consists of four levels of what we term digital mindfulness. One crucial aspect of the fourth level is that artifacts at this level allow for what we term presence-with and presence-in, as opposed to presence-through, which occurs at the first three levels. We articulate our four levels, along with specific design qualities, through concrete examples of existing mindfulness apps and through research through design (RtD) work conducted with design fiction examples. We then use a working design-case prototype to further illustrate the possibilities of presence-with and presence-in. We hope our four-level digital mindfulness framework will prove useful to other researchers in discussing and planning the design of their own mindfulness apps and digital artifacts.

Place, publisher, year, edition, pages
ACM, 2017
Keywords
Digital mindfulness, design, presence, interaction, wellbeing, attention, awareness, being, research through design, aesthetics
National Category
Computer and Information Sciences
Identifiers
urn:nbn:se:kth:diva-225813 (URN)
10.1145/3025453.3025590 (DOI)
000426970502061 (ISI)
Conference
The 2017 ACM SIGCHI Conference on Human Factors in Computing Systems (CHI '17)
Note

QC 20180409

Available from: 2018-04-09. Created: 2018-04-09. Last updated: 2018-04-09. Bibliographically approved.
Projects
Green Video Sharing [2008-06212_VR]; Umeå University
Green Video Sharing [2008-08035_VR]; Umeå University
Is Wyner-Ziv coding a core technique enabling next generation face recognition technology for large-scale face image retrieval? [2009-04489_VR]; Umeå University
Identifiers
ORCID iD: orcid.org/0000-0003-3779-5647
