1 - 6 of 6
  • 1.
    Brudfors, Mikael
    et al.
    KTH, School of Computer Science and Communication (CSC).
    Seitel, Alexander
    University of British Columbia.
    Rasoulian, Abtin
    University of British Columbia.
    Lasso, Andras
    Queen's University, Canada.
    Lessoway, Victoria
    Woman's Hospital, Vancouver, Canada.
    Osborn, Jill
    St Paul's Hospital, Vancouver, Canada.
    Maki, Atsuto
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP.
    Rohling, Robert
    University of British Columbia.
    Abolmaesumi, Purang
    University of British Columbia.
    Towards real-time, tracker-less 3D ultrasound guidance for spine anaesthesia. 2015. In: International Journal of Computer Assisted Radiology and Surgery, ISSN 1861-6410, E-ISSN 1861-6429, Vol. 10, no 6, p. 855-865. Article in journal (Refereed)
    Abstract [en]

    Purpose: Epidural needle insertions and facet joint injections play an important role in spine anaesthesia. The main challenge of safe needle insertion is the deep location of the target, resulting in a narrow and small insertion channel close to sensitive anatomy. Recent approaches utilizing ultrasound (US) as a low-cost and widely available guiding modality are promising but have yet to become routinely used in clinical practice due to the difficulty in interpreting US images, their limited view of the internal anatomy of the spine, and/or inclusion of cost-intensive tracking hardware which impacts the clinical workflow. Methods: We propose a novel guidance system for spine anaesthesia. An efficient implementation allows us to continuously align and overlay a statistical model of the lumbar spine on the live 3D US stream without making use of additional tracking hardware. The system is evaluated in vivo on 12 volunteers. Results: The in vivo study showed that the anatomical features of the epidural space and the facet joints could be continuously located, at a volume rate of 0.5 Hz, within an accuracy of 3 and 7 mm, respectively. Conclusions: A novel guidance system for spine anaesthesia has been presented which augments a live 3D US stream with detailed anatomical information of the spine. Results from an in vivo study indicate that the proposed system has potential for assisting the physician in quickly finding the target structure and planning a safe insertion trajectory in the spine.
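    To make the per-volume alignment step concrete, here is a minimal, hypothetical sketch of fitting a spine model point cloud to bone-surface points extracted from one incoming 3D US volume, using plain rigid ICP (nearest neighbours plus a Kabsch/SVD fit). The published system aligns a statistical (deformable) lumbar spine model and runs a more elaborate pipeline; the function and variable names below are illustrative only.

```python
# Simplified, hypothetical sketch: rigid ICP alignment of a spine model
# point cloud to bone-surface points from one incoming 3D US volume.
# The paper fits a *statistical* (deformable) model; the shape-adaptation
# step is omitted here for brevity.
import numpy as np
from scipy.spatial import cKDTree

def best_rigid_transform(src, dst):
    """Least-squares rigid transform (Kabsch/SVD) mapping src onto dst."""
    c_src, c_dst = src.mean(axis=0), dst.mean(axis=0)
    H = (src - c_src).T @ (dst - c_dst)
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # avoid reflection
    R = Vt.T @ D @ U.T
    t = c_dst - R @ c_src
    return R, t

def align_model_to_volume(model_pts, surface_pts, n_iter=30):
    """Iteratively match model points to their nearest US bone-surface points."""
    R_total, t_total = np.eye(3), np.zeros(3)
    tree = cKDTree(surface_pts)
    pts = model_pts.copy()
    for _ in range(n_iter):
        _, idx = tree.query(pts)                    # nearest-neighbour correspondences
        R, t = best_rigid_transform(pts, surface_pts[idx])
        pts = pts @ R.T + t
        R_total, t_total = R @ R_total, R @ t_total + t
    return R_total, t_total
```

    In a streaming setting, the result for one volume would presumably seed the alignment of the next, which is what keeps the overlay continuous at the reported 0.5 Hz volume rate.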

  • 2.
    Qin, Chunxia
    et al.
    Shanghai Jiao Tong Univ, Sch Biomed Engn, Shanghai, Peoples R China.;Shanghai Jiao Tong Univ, Sch Mech Engn, Room 805,Dongchuan Rd 800, Shanghai 200240, Peoples R China..
    Cao, Zhenggang
    Shanghai Jiao Tong Univ, Sch Mech Engn, Room 805,Dongchuan Rd 800, Shanghai 200240, Peoples R China..
    Fan, Shengchi
    Shanghai Jiao Tong Univ, Shanghai Peoples Hosp 9, Sch Med, Shanghai, Peoples R China..
    Wu, Yiqun
    Shanghai Jiao Tong Univ, Shanghai Peoples Hosp 9, Sch Med, Shanghai, Peoples R China..
    Sun, Yi
    Katholieke Univ Leuven, Fac Med, Dept Imaging & Pathol, OMFS IMPATH Res Grp, Louvain, Belgium.;Univ Hosp Leuven, Dept Oral & Maxillofacial Surg, Louvain, Belgium..
    Politis, Constantinus
    Katholieke Univ Leuven, Fac Med, Dept Imaging & Pathol, OMFS IMPATH Res Grp, Louvain, Belgium.;Univ Hosp Leuven, Dept Oral & Maxillofacial Surg, Louvain, Belgium..
    Wang, Chunliang
    KTH, School of Engineering Sciences in Chemistry, Biotechnology and Health (CBH), Biomedical Engineering and Health Systems, Medical Imaging.
    Chen, Xiaojun
    Shanghai Jiao Tong Univ, Sch Mech Engn, Room 805,Dongchuan Rd 800, Shanghai 200240, Peoples R China..
    An oral and maxillofacial navigation system for implant placement with automatic identification of fiducial points. 2019. In: International Journal of Computer Assisted Radiology and Surgery, ISSN 1861-6410, E-ISSN 1861-6429, Vol. 14, no 2, p. 281-289. Article in journal (Refereed)
    Abstract [en]

    Purpose: The surgical navigation system (SNS) has become an important tool in surgery. However, the complicated and tedious manual selection of fiducial points on preoperative images for registration affects operational efficiency to a large extent. In this study, an oral and maxillofacial navigation system named BeiDou-SNS with automatic identification of fiducial points was developed and demonstrated. Methods: To solve the fiducial selection problem, a novel method for automatic localization of titanium screw markers in preoperative images is proposed, based on a sequence of two local mean-shift segmentations that includes removal of metal artifacts. The operation of the BeiDou-SNS consists of the following key steps: the selection of fiducial points, the calibration of surgical instruments, and the registration of patient space and image space. Eight patient cases with titanium screws as fiducial markers were analyzed to assess the accuracy of the automatic fiducial point localization algorithm. Finally, a complete phantom experiment of zygomatic implant placement surgery was performed to evaluate the overall performance of BeiDou-SNS. Results and conclusion: The Euclidean distances between fiducial marker positions selected automatically and those selected manually by an experienced dentist ranged from 0.373 to 0.847 mm across all eight cases. Four implants were inserted into the 3D-printed model under the guidance of BeiDou-SNS, and the maximal deviations between the actual and planned implants were 1.328 mm at the entry point and 2.326 mm at the end point, while the angular deviation ranged from 1.094 to 2.395 degrees. The results demonstrate that the oral surgical navigation system with automatic identification of fiducial points can meet the requirements of clinical surgery.
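    As a rough illustration of what automatic identification of fiducial points can look like in code, the sketch below thresholds very bright (metal) voxels, labels connected components, keeps screw-sized blobs, and returns their centroids in physical coordinates. The paper's actual method is the two-stage local mean-shift segmentation with metal-artifact removal, which is not reproduced here; the threshold and size limits are illustrative assumptions.

```python
# Hypothetical, simplified stand-in for the paper's fiducial localization:
# threshold metal-bright voxels, label connected components, keep
# screw-sized blobs, and return their centroids in physical coordinates.
import numpy as np
from scipy import ndimage

def find_screw_candidates(ct_hu, spacing, hu_threshold=2500.0,
                          min_mm3=2.0, max_mm3=60.0):
    """ct_hu: CT volume in Hounsfield units; spacing: voxel size (z, y, x) in mm."""
    voxel_mm3 = float(np.prod(spacing))
    mask = ct_hu > hu_threshold                   # titanium is far above bone HU
    labels, n = ndimage.label(mask)
    centroids = []
    for lab in range(1, n + 1):
        size_mm3 = (labels == lab).sum() * voxel_mm3
        if min_mm3 <= size_mm3 <= max_mm3:        # keep screw-sized blobs only
            c_vox = ndimage.center_of_mass(mask, labels, lab)
            centroids.append(np.array(c_vox) * np.array(spacing))
    return centroids
```

    The centroids found this way would then feed the paired-point registration between image space and patient space.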

  • 3.
    Wang, Chunliang
    et al.
    Linköping University Hospital, Sweden.
    Frimmel, Hans
    Persson, Anders
    Smedby, Örjan
    Linköping University Hospital, Sweden.
    An interactive software module for visualizing coronary arteries in CT angiography. 2008. In: International Journal of Computer Assisted Radiology and Surgery, ISSN 1861-6410, E-ISSN 1861-6429, Vol. 3, no 1-2, p. 11-18. Article in journal (Refereed)
    Abstract [en]

    Object: A new software module for coronary artery segmentation and visualization in CT angiography (CTA) datasets is presented, which aims to interactively segment coronary arteries and visualize them in 3D with maximum intensity projection (MIP) and volume rendering (VRT). Materials and Methods: The software was built as a plug-in for the open-source PACS workstation OsiriX. The main segmentation function is based on an optimized "virtual contrast injection" algorithm, which uses fuzzy connectedness of the vessel lumen to separate the contrast-filled structures from each other. The software was evaluated in 42 clinical coronary CTA datasets acquired with 64-slice CT using isotropic voxels of 0.3-10.5 mm. Results: The median processing time was 6.4 min, and 100% of main branches (right coronary artery, left circumflex artery and left anterior descending artery) and 86.9% (219/252) of visible minor branches were intact. Visually correct centerlines were obtained automatically in 94.7% (321/339) of the intact branches. Conclusion: The new software is a promising tool for coronary CTA post-processing, providing good overviews of the coronary arteries with limited user interaction on low-end hardware, and could potentially make the coronary CTA diagnosis procedure more time-efficient than the thin-slab technique.
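    The "virtual contrast injection" idea rests on fuzzy connectedness: a path is as strong as its weakest link, and a voxel's connectedness to a seed is the strongest path that reaches it. The sketch below is a minimal single-seed version with a purely intensity-homogeneity affinity; the published algorithm is considerably more elaborate (it competes several seeds so that touching contrast-filled structures are separated), so treat this only as an illustration of the propagation scheme.

```python
# Minimal sketch of fuzzy-connectedness propagation from a single seed.
# Affinity here depends only on intensity similarity between neighbours.
import heapq
import numpy as np

def fuzzy_connectedness(volume, seed, sigma=50.0):
    """Return a map of fuzzy connectedness (0..1) of every voxel to the seed."""
    conn = np.zeros(volume.shape, dtype=np.float32)
    conn[seed] = 1.0
    heap = [(-1.0, seed)]                          # max-heap via negated strength
    offsets = [(1, 0, 0), (-1, 0, 0), (0, 1, 0), (0, -1, 0), (0, 0, 1), (0, 0, -1)]
    while heap:
        neg_k, c = heapq.heappop(heap)
        if -neg_k < conn[c]:
            continue                               # stale heap entry
        for off in offsets:
            d = tuple(np.add(c, off))
            if any(i < 0 or i >= s for i, s in zip(d, volume.shape)):
                continue
            # affinity is high when neighbouring intensities are similar
            aff = np.exp(-((float(volume[c]) - float(volume[d])) ** 2)
                         / (2.0 * sigma ** 2))
            k = min(-neg_k, aff)                   # path strength = weakest link
            if k > conn[d]:
                conn[d] = k
                heapq.heappush(heap, (-k, d))
    return conn
```

    Thresholding the returned map (or, in the published multi-seed variant, assigning each voxel to the seed it is most strongly connected to) yields the vessel segmentation.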

  • 4.
    Wang, Chunliang
    et al.
    KTH, School of Technology and Health (STH), Medical Engineering, Medical Imaging. Linköping University, Sweden; Sectra AB, Sweden.
    Lundström, C.
    CT scan range estimation using multiple body parts detection: let PACS learn the CT image content. 2016. In: International Journal of Computer Assisted Radiology and Surgery, ISSN 1861-6410, E-ISSN 1861-6429, Vol. 11, no 2, p. 317-325. Article in journal (Refereed)
    Abstract [en]

    Purpose: The aim of this study was to develop an efficient CT scan range estimation method based on analysis of the image data itself instead of metadata, making it possible to quantitatively compare the scan ranges of two studies. Methods: In our study, 3D stacks are first projected to 2D coronal images via a ray casting-like process. Trained 2D body part classifiers are then used to recognize different body parts in the projected image. The detected candidate regions go through a structure grouping process to eliminate false-positive detections. Finally, the scale and position of the patient relative to the projected figure are estimated from the detected body parts via structural voting. The start and end lines of the CT scan are projected onto a standard human figure. The position readout is normalized so that the bottom of the feet represents 0.0 and the top of the head 1.0. Results: Classifiers for 18 body parts were trained using 184 CT scans. The final application was tested on 136 randomly selected heterogeneous CT scans. Ground truth was generated by asking two human observers to mark the start and end positions of each scan on the standard human figure. Compared with the human observers, the mean absolute error of the proposed method is 1.2 % (max: 3.5 %) and 1.6 % (max: 5.4 %) for the start and end positions, respectively. Conclusion: We proposed a scan range estimation method using multiple body part detection and relative structure position analysis. In our preliminary tests, the proposed method delivered promising results.
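    Two of the steps in the abstract are simple enough to sketch directly: the ray casting-like coronal projection and the normalization of the scan range onto the 0.0 (feet) to 1.0 (head) body scale. The body-part classifiers and the structural voting, which would supply the feet and head positions used below, are assumed to exist elsewhere; the names are hypothetical.

```python
# Hedged sketch of the coronal projection and the 0..1 scan-range mapping.
import numpy as np

def coronal_projection(ct_volume):
    """Collapse a (z, y, x) CT stack along the antero-posterior (y) axis."""
    return ct_volume.max(axis=1)                   # MIP-style coronal image

def normalized_scan_range(scan_top_mm, scan_bottom_mm, feet_pos_mm, head_pos_mm):
    """Map scan start/end (patient-axis mm, increasing feet -> head) to the 0..1 scale."""
    height = head_pos_mm - feet_pos_mm
    start = (scan_bottom_mm - feet_pos_mm) / height
    end = (scan_top_mm - feet_pos_mm) / height
    return start, end
```

    With both studies expressed on the same 0..1 scale, their scan ranges can be compared directly, which is the stated purpose of the method.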

  • 5.
    Wang, Chunliang
    et al.
    Linköping University, Sweden.
    Ritter, Felix
    Smedby, Örjan
    Linköping University, Sweden.
    Making the PACS workstation a browser of image processing software: a feasibility study using inter-process communication techniques. 2010. In: International Journal of Computer Assisted Radiology and Surgery, ISSN 1861-6410, E-ISSN 1861-6429, Vol. 5, no 4, p. 411-419. Article in journal (Refereed)
    Abstract [en]

    Purpose: To enhance the functional expandability of a picture archiving and communication system (PACS) workstation and to facilitate the integration of third-party image-processing modules, we propose a browser-server style method. Methods: In the proposed solution, the PACS workstation shows the front-end user interface defined in an XML file while the image processing software runs in the background as a server. Inter-process communication (IPC) techniques allow an efficient exchange of image data, parameters, and user input between the PACS workstation and stand-alone image-processing software. Using a predefined communication protocol, neither the PACS workstation developer nor the image processing software developer needs detailed information about the other system, yet seamless integration between the two systems can still be achieved, and the IPC procedure is completely transparent to the end user. Results: A browser-server style solution was built between OsiriX (PACS workstation software) and MeVisLab (image processing software). Ten example image-processing modules were easily added to OsiriX by converting existing MeVisLab image processing networks. Image data transfer using shared memory added <10 ms of processing time, whereas the other IPC methods took 1-5 s in our experiments. Conclusion: The browser-server style communication based on IPC techniques is an appealing method that allows PACS workstation developers and image processing software developers to cooperate while focusing on their different interests.
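    The shared-memory transfer that the paper measured at under 10 ms can be imitated in toy form: one process (standing in for the PACS workstation) places an image in a shared-memory block, and a second process (standing in for the image-processing server) maps the same block instead of receiving a copy. This is only a sketch of the general IPC idea, not the actual OsiriX/MeVisLab protocol or its XML-defined user interface.

```python
# Toy shared-memory exchange between a "workstation" and a "processing server".
import numpy as np
from multiprocessing import Process, shared_memory

def processing_server(shm_name, shape, dtype):
    """Map the shared block and work on the image without copying it."""
    shm = shared_memory.SharedMemory(name=shm_name)
    image = np.ndarray(shape, dtype=dtype, buffer=shm.buf)
    print("server received image, mean intensity:", image.mean())
    shm.close()

if __name__ == "__main__":
    image = np.random.randint(0, 4096, size=(512, 512), dtype=np.int16)
    shm = shared_memory.SharedMemory(create=True, size=image.nbytes)
    np.ndarray(image.shape, dtype=image.dtype, buffer=shm.buf)[:] = image
    p = Process(target=processing_server, args=(shm.name, image.shape, image.dtype))
    p.start(); p.join()
    shm.close(); shm.unlink()
```

    Because only the block's name travels between processes, the cost is essentially independent of image size, which is why shared memory beat the socket-style IPC methods by two to three orders of magnitude in the reported experiments.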

  • 6.
    Wang, Chunliang
    et al.
    Linköping University, Linköping, Sweden.
    Smedby, Örjan
    Linköping University, Linköping, Sweden.
    Integrating automatic and interactive methods for coronary artery segmentation: let the PACS workstation think ahead. 2010. In: International Journal of Computer Assisted Radiology and Surgery, ISSN 1861-6410, E-ISSN 1861-6429, Vol. 5, no 3, p. 275-285. Article in journal (Refereed)
    Abstract [en]

    Purpose: To present newly developed software that can provide fast coronary artery segmentation and accurate centerline extraction for later lesion visualization and quantitative measurement, while minimizing user interaction. Methods: Previously reported fully automatic and interactive methods for coronary artery extraction were optimized and integrated into a user-friendly workflow. The user's waiting time is saved by running the non-supervised coronary artery segmentation and centerline tracking in the background as soon as the images are received. When the user opens the data, the software provides an intuitive interactive analysis environment. Results: The average overlap between the centerline created in our software and the reference standard was 96.0%. The average distance between them was 0.38 mm. The automatic procedure runs for 1.4-2.5 min as a single-threaded application in the background. Interactive processing takes 3 min on average. Conclusion: In preliminary experiments, the software achieved higher efficiency than the earlier interactive method, and reasonable accuracy compared to manual vessel extraction.
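    The "think ahead" idea described in the Methods is essentially background pre-computation keyed by study: start the unsupervised segmentation as soon as the data arrives, and hand the user the finished (or still-running) result when the study is opened. A minimal sketch, with segment_coronaries() as a hypothetical placeholder for the automatic segmentation and centerline tracking:

```python
# Background pre-computation pattern: segment while the data waits to be opened.
from concurrent.futures import ThreadPoolExecutor

_executor = ThreadPoolExecutor(max_workers=2)
_pending = {}                                     # study_id -> Future

def segment_coronaries(study_id):
    """Placeholder for the non-supervised segmentation + centerline tracking."""
    ...

def on_study_received(study_id):
    """Called when images arrive from the scanner/PACS: start work immediately."""
    _pending[study_id] = _executor.submit(segment_coronaries, study_id)

def on_study_opened(study_id):
    """Called when the user opens the dataset; blocks only if still running."""
    return _pending[study_id].result()
```

    Since the reported automatic stage takes 1.4-2.5 min and typically finishes before the radiologist opens the study, the perceived waiting time approaches zero in this scheme.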
