Real-time 3D echocardiography (RT3DE) has been proven to be an accurate tool for left ventricular (LV) volume assessment. However, identification of the LV endocardium remains a challenging task, mainly because of the low tissue/blood contrast of the images combined with typical artifacts. Several semi- and fully automatic algorithms have been proposed for segmenting the endocardium in RT3DE data in order to extract relevant clinical indices, but a systematic and fair comparison of such methods has so far been impossible due to the lack of a publicly available common database. Here, we introduce a standardized evaluation framework to reliably evaluate and compare the performance of algorithms developed to segment the LV border in RT3DE. A database consisting of 45 multivendor cardiac ultrasound recordings acquired at different centers, with corresponding reference measurements from 3 experts, is made available. The algorithms from nine research groups were quantitatively evaluated and compared using the proposed online platform. The results showed that the best methods produce promising results with respect to the experts' measurements for the extraction of clinical indices, and that they offer good segmentation precision in terms of mean distance error in the context of the experts' variability range. The platform remains open for new submissions.
Background and purpose: Damage to the blood-brain barrier with subsequent contrast enhancement is a hallmark of glioblastoma. Non-enhancing tumor invasion into the peritumoral edema is, however, not usually visible on conventional magnetic resonance imaging. New quantitative techniques using relaxometry offer additional information about tissue properties. The aim of this study was to evaluate the longitudinal relaxation rate R1, the transverse relaxation rate R2, and proton density in the peritumoral edema in a group of patients with malignant glioma before surgery, to assess whether relaxometry can detect changes not visible on conventional images. Methods: In a prospective study, 24 patients with suspected malignant glioma were examined before surgery. A standard MRI protocol was used with the addition of a quantitative MR method (MAGIC), which measured R1, R2, and proton density. The diagnosis of malignant glioma was confirmed after biopsy/surgery. In 19 patients, synthetic MR images were then created from the MAGIC scan, and ROIs were placed in the peritumoral edema to obtain the quantitative values. Dynamic susceptibility contrast perfusion was used to obtain relative cerebral blood volume (rCBV) data of the peritumoral edema. Voxel-based statistical analysis was performed using a mixed linear model. Results: R1, R2, and rCBV decrease with increasing distance from the contrast-enhancing part of the tumor. There is a significant increase in the R1 gradient after contrast agent injection (P < .0001). There is a heterogeneous pattern of relaxation values in the peritumoral edema adjacent to the contrast-enhancing part of the tumor. Conclusion: Quantitative analysis with relaxometry of peritumoral edema in malignant gliomas detects tissue changes not visualized on conventional MR images. The finding of decreasing R1 and R2 means shorter relaxation times closer to the tumor, which could reflect tumor invasion into the peritumoral edema. However, these findings need to be validated in the future.
Several preprocessing methods are applied to the automatic classification of interstitial lung disease (ILD). The proposed methods are used on the inputs to an established convolutional neural network in order to investigate the effect of these preprocessing techniques on slice-level classification accuracy. Experimental results demonstrate that combining the proposed preprocessing methods with a deep learning approach outperforms feeding the original images to the network without preprocessing.
Cortical bone plays a major role in the mechanical competence of bone, and its analysis requires accurate segmentation methods. Level set methods are among the state of the art for segmenting medical images. However, traditional implementations of this method are computationally expensive. This drawback was recently tackled through the so-called coherent propagation extension of the classical algorithm, which has decreased computation times dramatically. In this study, we assess the potential of this technique for segmenting cortical bone in interactive time in 3D images acquired through high-resolution peripheral quantitative computed tomography (HR-pQCT). The obtained segmentations are used to estimate cortical thickness and cortical porosity of the investigated images. Cortical thickness and cortical porosity are computed using sphere fitting and mathematical morphology operations, respectively. Qualitative comparison between the segmentations of our proposed algorithm and a previously published approach on six image volumes reveals superior smoothness properties of the level set approach. While the proposed method yields results similar to previous approaches in regions where the boundary between trabecular and cortical bone is well defined, it yields more stable segmentations in challenging regions, which results in more stable estimates of cortical bone parameters. The proposed technique takes a few seconds to compute, which makes it suitable for clinical settings.
The accuracy of analyses for studying the three-dimensional trabecular bone microstructure relies on the quality of the segmentation between trabecular bone and bone marrow. Such segmentation is challenging for images from computed tomography modalities that can be used in vivo, owing to their low contrast and resolution. For this purpose, we propose in this paper a granulometry-based segmentation method. In a first step, the trabecular thickness is estimated using gray-scale granulometry, which is generated by applying the morphological opening operation with ball-shaped structuring elements of different diameters. This process mimics the traditional sphere-fitting method used for estimating trabecular thickness in segmented images. The residual obtained after computing the granulometry is compared to the original gray-scale value in order to obtain a measure of how likely it is that a voxel belongs to trabecular bone. A threshold is applied to obtain the final segmentation. Six histomorphometric parameters were computed on 14 segmented bone specimens imaged with cone-beam computed tomography (CBCT), considering micro-computed tomography (micro-CT) as the ground truth. Otsu's thresholding and Automated Region Growing (ARG) segmentation methods were used for comparison. For three parameters (Tb.N, Tb.Th and BV/TV), the proposed segmentation algorithm yielded the highest correlations with micro-CT, while for the remaining three (Tb.Nd, Tb.Tm and Tb.Sp), its performance was comparable to ARG. The method also yielded the strongest average correlation (0.89). When Tb.Th was computed directly from the gray-scale images, the correlation was superior to that of the binary-based methods. The results suggest that the proposed algorithm can be used for studying trabecular bone in vivo through CBCT.
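To make the granulometry idea above concrete, here is a minimal Python sketch under stated assumptions: bright trabeculae on a darker marrow background, illustrative radii and threshold, and scipy/skimage morphology standing in for whatever implementation the paper used. It is one plausible reading of the method, not the published code.

```python
# Hedged sketch of a gray-scale granulometry segmentation, assuming bright
# trabeculae on a darker background; radii and threshold are placeholders.
import numpy as np
from scipy import ndimage
from skimage.morphology import ball

def granulometry_segmentation(volume, radii=(1, 2, 3, 4), threshold=0.5):
    volume = volume.astype(np.float32)
    opened = volume.copy()
    thickness = np.zeros_like(volume)      # per-voxel fitted sphere diameter
    for r in radii:
        # openings by balls of increasing radius form a granulometry
        opened = ndimage.grey_opening(opened, footprint=ball(r))
        # voxels that keep a high value after the opening fit a sphere of
        # diameter ~2r+1, mimicking sphere fitting in binary images
        thickness[opened > 0.5 * volume.max()] = 2 * r + 1
    # residual vs. original value: how much intensity the openings removed,
    # i.e., how "thin and bright" (trabecula-like) the voxel is
    likelihood = (volume - opened) / np.maximum(volume, 1e-6)
    return likelihood > threshold, thickness
```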
Content-Based Medical Image Retrieval (CBMIR) is an important research field in the context of medical data management. In this paper we propose a novel CBMIR system for the automatic retrieval of radiographic images. Our approach employs a Convolutional Neural Network (CNN) to obtain high-level image representations that enable a coarse retrieval of images corresponding to a query image. The retrieved set of images is refined via a non-parametric estimation of putative classes for the query image, which are used to filter out potential outliers in favour of more relevant images belonging to those classes. The refined set of images is finally re-ranked using the Edge Histogram Descriptor, a low-level edge-based image descriptor that captures finer similarities between the retrieved set of images and the query image. To improve the computational efficiency of the system, we employ dimensionality reduction via Principal Component Analysis (PCA). Experiments were carried out to evaluate the effectiveness of the proposed system on medical data from the "Image Retrieval in Medical Applications" (IRMA) benchmark database. The obtained results show the effectiveness of the proposed CBMIR system in the field of medical image retrieval.
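The coarse-to-fine retrieval pipeline can be illustrated with a short sketch. Everything here is an assumption for illustration: the CNN features and labels are given as arrays, the putative-class cut-off is a simple heuristic, and the EHD re-ranking is only indicated via a hypothetical `ehd_distance` function.

```python
# Hedged sketch of the retrieval pipeline: PCA-reduced CNN features for
# coarse retrieval, class filtering, then (indicated only) EHD re-ranking.
import numpy as np
from sklearn.decomposition import PCA

def retrieve(query_feat, db_feats, db_labels, k_coarse=100, k_final=10):
    # dimensionality reduction for efficiency, fitted on the database features
    pca = PCA(n_components=64).fit(db_feats)
    q, X = pca.transform(query_feat[None, :])[0], pca.transform(db_feats)
    # coarse retrieval: nearest neighbours of the query in feature space
    coarse = np.argsort(np.linalg.norm(X - q, axis=1))[:k_coarse]
    # non-parametric estimate of putative classes from the coarse hits,
    # used to filter out likely outliers
    classes, counts = np.unique(db_labels[coarse], return_counts=True)
    putative = classes[counts >= 0.5 * counts.max()]      # heuristic cut-off
    refined = coarse[np.isin(db_labels[coarse], putative)]
    # final re-ranking by a low-level edge descriptor would go here, e.g.:
    # refined = sorted(refined, key=lambda i: ehd_distance(query_img, db_imgs[i]))
    return refined[:k_final]
```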
Variations in the shape and appearance of anatomical structures in medical images are often relevant radiological signs of disease, and automatic tools can help automate parts of their otherwise manual assessment. A cloud-based evaluation framework is presented in this paper, including results of benchmarking current state-of-the-art medical imaging algorithms for anatomical structure segmentation and landmark detection: the VISCERAL Anatomy benchmarks. Participants implement their algorithms in virtual machines in the cloud, where only the training data are accessible to them; the benchmark administrators then run the algorithms privately on an unseen common test set to compare their performance objectively. Overall, 120 computed tomography and magnetic resonance patient volumes were manually annotated to create a standard Gold Corpus containing a total of 1295 structures and 1760 landmarks. Ten participants contributed automatic algorithms for the organ segmentation task, and three for the landmark localization task. Different algorithms obtained the best scores in the four available imaging modalities and for subsets of anatomical structures. The annotation framework, resulting data set, evaluation setup, results and performance analysis from the three VISCERAL Anatomy benchmarks are presented in this article. Both the VISCERAL data set and the Silver Corpus, generated by fusing the participant algorithms' outputs on a larger set of non-manually-annotated medical images, are available to the research community.
Perceptual organisation techniques aim at mimicking the human visual system for extracting salient information from noisy images. Tensor voting has been one of the most versatile of these methods, with many different applications both in computer vision and in medical image analysis. Its strategy consists of propagating local information encoded through tensors by means of perception-inspired rules. Although it has been used for more than a decade, there are still many unsolved theoretical issues that have made it challenging to apply to new problems, especially in the analysis of medical images.
The main aim of this chapter is to review the current state of the research in tensor voting, to summarise its present challenges, and to describe the new trends that we foresee will drive the research in this field in the next few years. Also, we discuss extensions of tensor voting that could lead to potential performance improvements and that could make it suitable for further medical applications.
Tensor voting is a technique that uses perceptual rules to group points in a set of input data. Its main advantage lies in its ability to robustly extract geometrical shapes like curves and surfaces from point clouds, even in noisy scenarios. Following the original formulation, this is achieved by exploiting the relative positioning of those points with respect to each other. With this in mind, it is not straightforward to apply original tensor voting to greyscale images: due to the underlying voxel grid, digital images have all data measurements at regularly sampled positions, so the pure spatial position of data points relative to each other does not provide useful information unless the measured intensity value is considered in addition. To account for this, previous approaches employing tensor voting on scalar images have followed mainly two ideas. One is to define, in a preprocessing step, a subset of voxels that are likely to resemble a desired structure like curves or surfaces in the original image, and to use only those points for initialisation in tensor voting. In other methods, the encoding step is modified, e.g., by using estimates of local orientation for initialisation. In contrast to these approaches, another idea is to embed all input information, that is, position in combination with intensity value, into a 4D space and perform classic tensor voting on that. In doing so, it is neither necessary to rely on a preprocessing step for estimating local orientation features, nor to employ assumptions within the encoding step, as all data points are initialised with unit ball tensors. Alternatively, the intensity dimension can be partially included by considering it in the weighting function of tensor voting while still employing 3D tensors for the voting. Considering the advantage of a shorter computation time for the latter approach, it is of interest to investigate the differences between these two approaches. Although different methods have employed an ND implementation of tensor voting before, the actual interpretation of its output, that is, the estimation of a local hypersurface at each point, depends on the application at hand. As we are especially interested in the analysis of blood vessels in CT angiography data, we study the feasibility of detecting tubular structures and estimating their orientation entirely within the proposed framework, and we compare the two mentioned approaches with a special focus on these aspects. In this chapter we first provide the formulation of both approaches, followed by the application-specific interpretations of the shape of the 4D output tensors. Based on that, we compare the information inferred by both methods from synthetic as well as medical image data, focusing on the application of blood vessel analysis.
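As a rough illustration of the 4D embedding discussed above, the sketch below turns a gray-scale volume into a 4D point cloud initialised with unit ball tensors. The scale factor balancing the intensity axis against the spatial axes is an assumption, and the voting itself is not shown.

```python
# Hedged sketch of the 4D embedding: every voxel becomes a point
# (z, y, x, s * intensity) with a 4x4 unit ("ball") tensor, so no local
# orientation has to be estimated before voting. `intensity_scale` is an
# assumed balancing parameter, not a value from the chapter.
import numpy as np

def embed_4d(volume, intensity_scale=1.0):
    # all voxel coordinates, in the same (row-major) order as volume.ravel()
    coords = np.argwhere(np.ones(volume.shape, dtype=bool))
    points = np.column_stack([coords, intensity_scale * volume.ravel()])
    # unit ball tensors: no orientation assumptions at initialisation
    tensors = np.broadcast_to(np.eye(4), (len(points), 4, 4)).copy()
    return points.astype(np.float64), tensors
```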
Among the various diffusion MRI techniques, diffusion tensor imaging (DTI) is still the most commonly used in clinical practice to investigate connectivity and fibre anatomy in the human brain. Besides its apparent advantages of a short acquisition time and noise robustness compared to other techniques, it suffers from the major weakness of assuming a single-fibre model in each voxel. This constitutes a problem for DTI fibre tracking algorithms in regions with crossing fibres. Methods approaching this problem in a postprocessing step employ diffusion-like techniques to correct the directional information. We propose an extension of tensor voting in which information from voxels with a single fibre is used to infer orientation distributions in multi-fibre voxels. The method is able to resolve multiple fibre orientations by clustering tensor votes instead of adding them up. Moreover, a new vote casting procedure is proposed which is appropriate even for small neighbourhoods. To account for the locality of DTI data, we use a small neighbourhood for distributing information at a time, but apply the algorithm iteratively to close larger gaps. The method shows promising results both in synthetic cases and for processing DTI data of the human brain.
STUDY DESIGN: Cross-sectional study. BACKGROUND: Findings of fat infiltration in the cervical multifidus, as a sign of degenerative morphometric changes due to whiplash injury, need to be verified. OBJECTIVES: To develop a method using water/fat magnetic resonance imaging (MRI) to investigate fat infiltration and cross-sectional area of the multifidus muscle in individuals with whiplash-associated disorders (WAD) compared to healthy controls. METHODS: Fat infiltration and cross-sectional area in the multifidus muscles spanning the C4 to C7 segmental levels were investigated by manual segmentation using water/fat-separated MRI in 31 participants with WAD and 31 controls, matched for age and sex. RESULTS: Based on average values for data spanning C4 to C7, participants with severe disability related to WAD had 38% greater muscular fat infiltration compared to healthy controls (P = .03) and 45% greater fat infiltration compared to those with mild to moderate disability related to WAD (P = .02). There were no significant differences between those with mild to moderate disability and healthy controls. No significant differences between groups were found for multifidus cross-sectional area. Significant differences were observed for both cross-sectional area and fat infiltration between segmental levels. CONCLUSION: Participants with severe disability after a whiplash injury had higher fat infiltration in the multifidus compared to controls and to those with mild to moderate disability secondary to WAD. Earlier reported findings using T1-weighted MRI were reproduced using refined imaging technology. The results of the study also indicate a risk when segmenting single cross-sectional slices, as both cross-sectional area and fat infiltration differ between cervical levels.
Trabecular bone structure has been shown to impact bone strength and fracture risk. In vitro, this structure can be measured by micro-computed tomography (micro-CT). For clinical use, it would be valuable if multi-slice computed tomography (MSCT) could be used to analyse trabecular bone structure. One important step in the analysis is image volume segmentation. Previous segmentation techniques have either been computer-resource intensive or produced sub-optimal results when used on MSCT data. This paper proposes a new segmentation method that tries to balance good results against computational complexity. Material: Fourteen human radius specimens were scanned with MSCT and segmented using the proposed method as well as two segmentation methods previously used to segment trabecular bone (Otsu and Automated Region Growing (ARG)). The proposed method (named FCH) uses a combination of feature-space clustering, edge detection and hysteresis thresholding. For evaluation, we computed correlations with the reference method micro-CT for 7 structure parameters and measured segmentation time. Results: Correlations with micro-CT were highest for FCH in 3 cases, highest for ARG in 3 cases, and in general lower for Otsu. Both FCH and ARG had correlations higher than 0.80 for all parameters except trabecular thickness and trabecular termini. FCH was 60 times slower than Otsu, but 5 times faster than ARG. Discussion: The high correlations with micro-CT suggest that, with a suitable segmentation method, it might be possible to analyse trabecular bone structure using MSCT. The proposed segmentation method may represent a useful balance between speed and accuracy.
Purpose: The aim of this work was to quantify the extent of lipid-rich necrotic core (LRNC) and intraplaque hemorrhage (IPH) in atherosclerotic plaques. Methods: Patients scheduled for carotid endarterectomy underwent four-point Dixon and T1-weighted magnetic resonance imaging (MRI) at 3 Tesla. Fat and R2* maps were generated from the Dixon sequence at the acquired voxel size of 0.60 × 0.60 × 0.70 mm. MRI and three-dimensional (3D) histology volumes of the plaques were registered. The registration matrix was applied to segmentations denoting LRNC and IPH in 3D histology to split plaque volumes into regions with and without LRNC and IPH. Results: Five patients were included. Regarding volumes of LRNC identified by 3D histology, the average fat fraction by MRI was significantly higher inside LRNC than outside: 12.64 ± 0.2737% versus 9.294 ± 0.1762% (mean ± standard error of the mean [SEM]; P < 0.001). The same was true for IPH identified by 3D histology; R2* inside versus outside IPH was 71.81 ± 1.276 s⁻¹ versus 56.94 ± 0.9095 s⁻¹ (mean ± SEM; P < 0.001). There was a strong correlation between the cumulative fat and the volume of LRNC from 3D histology (R² = 0.92), as well as between cumulative R2* and IPH (R² = 0.94). Conclusion: Quantitative mapping of fat and R2* from Dixon MRI reliably quantifies the extent of LRNC and IPH.
Vascular segmentation plays an important role in the assessment of peripheral arterial disease. The segmentation is very challenging, especially for arteries with severe stenosis or complete occlusion. We present a cascading algorithm for vascular centerline tree detection that specializes in detecting centerlines in diseased peripheral arteries. It takes a three-dimensional computed tomography angiography (CTA) volume and returns a vascular centerline tree, which can be used for accelerating and facilitating the vascular segmentation. The algorithm consists of four levels, two of which detect healthy arteries of varying sizes and two that specialize in different types of vascular pathology: severe calcification and occlusion. We perform four main steps at each level: appropriate parameters for each level are selected automatically, a set of centrally located voxels is detected, these voxels are connected together based on connection criteria, and the resulting centerline tree is pruned of spurious branches. The proposed method was tested on 25 CTA scans of the lower limbs, achieving an average overlap rate of 89% and an average detection rate of 82%. The average execution time using four CPU cores was 70 s, and the technique was successful also in detecting very distal artery branches, e.g., in the foot.
Acute respiratory distress syndrome (ARDS) is associated with a high mortality rate in intensive care units. To lower the number of fatal cases, it is necessary to customize the mechanical ventilator parameters according to the patient's clinical condition. For this, lung segmentation is required to assess aeration and alveolar recruitment, and airway segmentation may be used to reach a more accurate lung segmentation. In this paper, we seek to improve lung segmentation results by proposing a novel automatic airway-tree segmentation that addresses the heterogeneity of ARDS pathology by handling various lung intensities differently. The method detects a simplified airway skeleton and thereby obtains a set of seed points, together with an approximate radius and intensity range related to each of the points. These seeds are the input for an onion-kernel region-growing segmentation algorithm in which knowledge about radius and intensity range restricts possible leakage into the parenchyma. The method was evaluated qualitatively on 70 thoracic computed tomography volumes of subjects with ARDS, acquired at significantly different mechanical ventilation conditions. It found a large proportion of airway branches, including tiny, poorly aerated bronchi. Quantitative evaluation was performed indirectly and showed that the resulting airway segmentation provides important anatomic landmarks whose correspondences are needed to help a registration-based segmentation of the lungs in difficult ARDS cases where the lung boundary contrast is completely missing. The proposed method takes an average of 43 s to process a thoracic volume, which makes it valuable for clinical use.
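The seeded, leakage-restricted growth can be pictured with a small sketch. The 6-connectivity, the Euclidean radius cap and the per-seed intensity window are assumptions standing in for the paper's onion-kernel formulation.

```python
# Illustrative sketch of region growing from one skeleton seed, constrained
# by a per-seed intensity window [lo, hi] and an approximate radius; these
# constraints limit leakage into the surrounding parenchyma.
import numpy as np
from collections import deque

def grow_from_seed(volume, seed, radius, lo, hi):
    grown = np.zeros(volume.shape, dtype=bool)
    grown[seed] = True
    queue = deque([seed])
    offsets = [(1,0,0), (-1,0,0), (0,1,0), (0,-1,0), (0,0,1), (0,0,-1)]
    while queue:
        z, y, x = queue.popleft()
        for dz, dy, dx in offsets:                 # 6-connected neighbours
            p = (z + dz, y + dy, x + dx)
            if any(c < 0 or c >= s for c, s in zip(p, volume.shape)):
                continue
            inside = np.linalg.norm(np.subtract(p, seed)) <= radius
            if not grown[p] and inside and lo <= volume[p] <= hi:
                grown[p] = True
                queue.append(p)
    return grown
```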
Vascular diseases are a common cause of death, particularly in developed countries. Computerized image analysis tools play a potentially important role in diagnosing and quantifying vascular pathologies. Given the size and complexity of modern angiographic data acquisitions, fast, automatic and accurate vascular segmentation is a challenging task. In this paper we introduce a fully automatic, high-speed vascular skeleton extraction algorithm that is intended as a first step in a complete vascular tree segmentation program. The method takes an unprocessed 3D computed tomography angiography (CTA) scan as input and produces a graph in which the nodes are centrally located artery voxels and the edges represent connections between them. The algorithm works in two passes, where the first pass is designed to extract the skeleton of large arteries and the second pass focuses on smaller vascular structures. Each pass consists of three main steps. The first step sets proper parameters automatically using Gaussian curve fitting. In the second step, different filters are applied to detect voxels (nodes) that are part of arteries. In the last step, the nodes are connected in order to obtain a continuous centerline tree for the entire vasculature. Detected structures that do not belong to the arteries are removed in a final anatomy-based analysis. The proposed method is computationally efficient, with an average execution time of 29 s, and has been tested on a set of CTA scans of the lower limbs, achieving an average overlap rate of 97% and an average detection rate of 71%.
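The automatic parameter selection via Gaussian curve fitting might look roughly like the following, where a Gaussian is fitted to the volume's intensity histogram and a threshold is derived from the fitted peak; the function names and the n-sigma rule are illustrative assumptions, not the paper's exact procedure.

```python
# Sketch: fit a Gaussian to the dominant intensity peak of a CTA volume and
# derive a candidate-artery threshold from the fitted mean and width.
import numpy as np
from scipy.optimize import curve_fit

def gaussian(x, a, mu, sigma):
    return a * np.exp(-0.5 * ((x - mu) / sigma) ** 2)

def auto_threshold(volume, n_sigma=3.0, bins=256):
    hist, edges = np.histogram(volume, bins=bins)
    centers = 0.5 * (edges[:-1] + edges[1:])
    # initial guess: the dominant histogram peak (background/soft tissue)
    p0 = (hist.max(), centers[np.argmax(hist)], volume.std())
    (a, mu, sigma), _ = curve_fit(gaussian, centers, hist, p0=p0)
    # voxels brighter than the fitted peak by n_sigma fitted widths are
    # kept as candidate contrast-filled artery voxels
    return mu + n_sigma * abs(sigma)
```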
This chapter focuses on skeleton detection for clinical evaluation of blood vessel systems. In clinical evaluation, there is a need for fast and accurate segmentation algorithms that can reliably provide vessel measurements and additional information for clinicians to decide the diagnosis. Since blood vessels have a characteristic tubular shape, their segmentation can be accelerated and facilitated by first identifying the rough vessel centerlines, which can be seen as a special case of image skeleton extraction. A segmentation algorithm can then use the resulting skeleton as a seed region. The proposed method takes an unprocessed 3D computed tomography angiography (CTA) scan as input and generates a connected graph of centrally located arterial voxels. The method works on two levels: large arteries are captured on the first level, and small arteries are added on the second. Experimental results show that the method achieves a high overlap rate and an acceptable detection rate. The high computational efficiency of the method opens the possibility of interactive clinical use.
Recent advances in computed tomography angiography provide high-resolution 3D images of the vessels. However, automated and fast methods are needed to process the increased amount of generated data. In this work, we propose a fast method for vascular skeleton extraction which can be combined with a segmentation algorithm to accelerate the vessel delineation. The algorithm detects central voxels (nodes) of potential vessel regions in the orthogonal CT slices and uses a convolutional neural network (CNN) to identify the true vessel nodes. The nodes are gradually linked together to generate an approximate vascular skeleton. The CNN classifier yields a precision of 0.81 and a recall of 0.83 for medium-sized vessels and produces a qualitatively evaluated enhanced representation of vascular skeletons.
We present a coverage segmentation method for extracting thin structures in three-dimensional images. The proposed method is an improved extension of our coverage segmentation method for 2D thin structures. We suggest an implementation that keeps memory consumption and processing time low, thereby making the method applicable to real CTA data. The method needs a reliable crisp segmentation as input and uses information from linear unmixing together with the crisp segmentation to create a high-resolution crisp reconstruction of the object, which can then be used as a final result or down-sampled to a coverage segmentation at the starting image resolution. The quantitative and qualitative analyses performed confirm excellent performance of the proposed method, both on synthetic and on real data, in particular in terms of robustness to noise.
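As a minimal illustration of the linear-unmixing step, the sketch below inverts a two-class linear mixture model using pure-class intensities estimated from the crisp segmentation. Estimating the pure intensities by simple class means is an assumption for illustration.

```python
# Hedged sketch of coverage values from linear unmixing: a boundary voxel's
# intensity is modeled as a linear mix of pure object and pure background
# intensities, so the coverage fraction follows by inverting that model.
import numpy as np

def coverage_from_unmixing(volume, crisp):
    i_obj = volume[crisp].mean()     # pure object intensity (from crisp seg.)
    i_bg = volume[~crisp].mean()     # pure background intensity
    alpha = (volume - i_bg) / (i_obj - i_bg)   # mixture model inversion
    return np.clip(alpha, 0.0, 1.0)  # per-voxel coverage in [0, 1]
```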
Segmenting brain tissue from MR scans is thought to be highly beneficial for brain abnormality diagnosis, prognosis monitoring, and treatment evaluation. Many automatic and semi-automatic methods have been proposed in the literature to reduce the need for user intervention, but the level of accuracy in most cases is still inferior to that of manual segmentation. We propose a new brain segmentation method that integrates volumetric shape models into a supervised artificial neural network (ANN) framework. This is done by running a preliminary level-set based statistical shape fitting process guided by the image intensity, and then passing the signed distance maps of several key structures to the ANN as feature channels, in addition to the conventional spatial and intensity image features. This so-called shape context information is expected to help the ANN learn local adaptive classification rules instead of applying universal rules directly on the local appearance features. The proposed method was tested on a public dataset available within the open MICCAI grand challenge MRBrainS13. The obtained average Dice coefficients were 84.78%, 88.47%, 82.76%, 95.37% and 97.73% for gray matter (GM), white matter (WM), cerebrospinal fluid (CSF), brain (WM + GM) and intracranial volume, respectively. Compared with other methods tested on the same dataset, the proposed method achieved competitive results with comparatively short training time.
We aim at reconstructing the superficial vessels of the brain; ultimately, they will serve to guide deformation methods that compensate for brain shift. A pipeline for three-dimensional (3-D) vessel reconstruction using three monochrome complementary metal-oxide-semiconductor (CMOS) cameras has been developed. Vessel centerlines are manually selected in the images. Using the properties of the Hessian matrix, the centerline points are assigned direction information. For correspondence matching, a combination of methods was used. The process starts with epipolar and spatial coherence constraints (geometrical constraints), followed by relaxation labeling and an iterative filtering in which the 3-D points are compared to surfaces obtained using the thin-plate spline with a decreasing relaxation parameter. Finally, the points are shifted to their local centroid position. Evaluation on virtual, phantom, and experimental images, including intraoperative data from patient experiments, shows that, with appropriate camera positions, the error estimates (root-mean-square error and mean error) are approximately 1 mm.
In this study we present a non-rigid point set registration method for 3D curves (composed of 3D point sets). The method was evaluated on the task of registering 3D superficial vessels of the brain, where it was used to match vessel centerline points. It consists of a combination of Coherent Point Drift (CPD) and Thin-Plate Spline (TPS) semilandmarks. CPD is used to perform the initial matching of the centerline 3D points, while the semilandmark method iteratively relaxes/slides the points. For the evaluation, a magnetic resonance angiography (MRA) dataset was used. Deformations were applied to the extracted vessel centerlines to simulate brain bulging and sinking, using a TPS deformation in which a few control points were manipulated to obtain the desired transformation (T1). Once the correspondences are known, the corresponding points are used to define a new TPS deformation (T2). The errors are measured in the deformed space, by transforming the original points using T1 and T2 and measuring the distance between them. To simulate cases where the deformed vessel data is incomplete, parts of the reference vessels were cut away and then deformed. Furthermore, anisotropic normally distributed noise was added. The results show that the error estimates (root-mean-square error and mean error) are below 1 mm, even in the presence of noise and incomplete data.
During percutaneous coronary intervention, stents are placed in narrowings of the arteries to restore normal blood flow. Despite improvements in stent design, deployment techniques and drug-eluting coatings, restenosis and stent thrombosis remain a significant problem. Population-based stent design informed by statistical shape analysis may improve clinical outcomes. Computed tomographic (CT) coronary angiography scans from 211 patients with a zero calcium score, no stenoses and no intermediate artery were used to create statistical shape models of 446 major coronary artery bifurcations (left main, first diagonal, obtuse marginal and right coronary crux). Coherent point drift was used for registration. Principal component analysis shape scores were tested against clinical risk factors, quantifying the importance of shape features recognised as relevant in intervention, including size, angles and curvature. Significant differences were found in (1) vessel size and bifurcation angle between the left main and the other bifurcations; (2) inlet and curvature angle between the right coronary crux and the other bifurcations; and (3) size and bifurcation angle by sex. Hypertension, smoking history and diabetes did not appear to have an association with shape. Physiological diameter laws were compared, with the Huo-Kassab model having the best fit. Bifurcation coronary anatomy can be partitioned into clinically meaningful modes of variation showing significant shape differences. A computational atlas of normal coronary bifurcation shape, where disease is common, may aid in the design of new stents and deployment techniques by providing data for bench-top testing and computational modelling of blood flow and vessel wall mechanics.
Aims: The aim of this study was to define the shape variations, including diameters and angles, of the major coronary artery bifurcations. Methods and results: Computed tomographic angiograms from 300 adults with a zero calcium score and no stenoses were segmented for centreline and luminal models. A computational atlas was constructed, enabling automatic quantification of 3D angles, diameters and lengths of the coronary tree. The diameter (mean ± SD) of the left main coronary artery was 3.5 ± 0.8 mm and its length 10.5 ± 5.3 mm. The left main bifurcation angle (distal angle or angle B) was 89 ± 21° for cases with, and 75 ± 23° for those without, an intermediate artery (p < 0.001). Analogous measurements of diameter and angle were tabulated for the other major bifurcations (left anterior descending/diagonal, circumflex/obtuse marginal and right coronary crux). Novel 3D angle definitions are proposed and analysed. Conclusions: A computational atlas of normal coronary artery anatomy provides distributions of diameters, lengths and bifurcation angles as well as more complex shape analysis. These data define normal anatomical variation, facilitating stent design, selection and optimal treatment strategy. The population models are necessary for accurate computational flow dynamics, can be 3D printed for bench testing of bifurcation stents and deployment strategies, and can aid in the discussion of different approaches to the treatment of coronary bifurcations.
Vesselness filters aim at enhancing tubular structures in medical images. The most popular vesselness filters are based on eigenanalysis of the Hessian matrix computed at different scales. However, Hessian-based methods have well-known limitations, most of them related to the use of second-order derivatives. In this paper, we propose an alternative strategy in which ring-like patterns are sought in the local orientation distribution of the gradient. The method takes advantage of the symmetry properties of ring-like patterns in the spherical harmonics domain. For bright vessels, gradients not pointing towards the center are filtered out from every local neighborhood in a first step; the opposite criterion is used for dark vessels. Afterwards, structuredness, evenness and uniformness measurements are computed from the power spectrum in spherical harmonics of both the original and the half-zeroed orientation distribution of the gradient. Finally, the features are combined into a single vesselness measurement. Alternatively, a structure tensor that is suitable for vesselness can be estimated before the analysis in spherical harmonics. The two proposed methods are called the Ring Pattern Detector (RPD) and the Filtered Structure Tensor (FST), respectively. Experimental results on computed tomography angiography data show that the proposed filters perform better than the state of the art.
We recently proposed a method for estimating vesselness based on the detection of ring patterns in the local distribution of the gradient. This method has a better performance than other state-of-the-art algorithms. However, the original implementation of the method applies the spherical harmonics transform locally, which is time consuming. In this paper we propose an equivalent formulation of the method based on higher-order tensors. A linear mapping between the spherical harmonics transform and higher-order orientation tensors is used in order to reduce the complexity of the method. With the new implementation, the analysis of computed tomography angiography data can be performed 2.6 times faster than with the original implementation.
The apparent stiffness tensor is an important mechanical parameter for characterizing trabecular bone. Previous studies have modeled this parameter as a function of the mechanical properties of the tissue, bone density and a second-order fabric tensor, which encodes both the anisotropy and the orientation of trabecular bone. Although these models yield strong correlations between observed and predicted stiffness tensors, there is still room for reducing accuracy errors. In this paper we propose a model that uses fourth-order instead of second-order fabric tensors. First, the totally symmetric part of the stiffness tensor is assumed proportional to the fourth-order fabric tensor in the logarithmic scale. Second, the asymmetric part of the stiffness tensor is derived from relationships among components of the harmonic tensor decomposition of the stiffness tensor. Fourth-order versions of the mean intercept length (MIL), generalized MIL (GMIL) and global structure tensor were computed from micro-computed tomography images of 264 femur specimens. The predicted tensors were compared to the stiffness tensors computed with the micro finite element method (micro-FE), which was considered the gold standard, yielding strong correlations (R² above 0.962). The GMIL tensor yielded the best results among the tested fabric tensors. Compared to the model by Zysset and Curnier (1995) with the second-order MIL tensor, the proposed model reduced the Frobenius error, the geodesic error and the error of the norm by 3.75%, 0.07% and 3.16%, respectively. From these results, fourth-order fabric tensors are a good alternative to the more expensive micro-FE stiffness predictions.
Percutaneous transluminal coronary angioplasty (PTCA) requires X-ray imaging with a high radiation dose and a high concentration of contrast media, leading to a risk of radiation-induced injury and nephropathy. These drawbacks can be reduced by using lower doses of X-rays and contrast media, at the price of noisier PTCA images. In this paper, convolutional neural networks were used to denoise low-dose PTCA-like images, built by adding artificial noise to high-dose images. MSE- and SSIM-based loss functions were tested and compared, visually and quantitatively, for different types and levels of noise. The results showed promising performance on the denoising task.
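The two objectives compared above can be sketched as follows; the SSIM here is a simplified single-scale version built with average pooling, and the tensors are placeholders rather than actual PTCA data or the study's network outputs.

```python
# Toy sketch contrasting an MSE objective with a simplified SSIM objective
# for image denoising; window size and constants follow common conventions.
import torch
import torch.nn.functional as F

def ssim_loss(x, y, c1=0.01 ** 2, c2=0.03 ** 2):
    # local statistics via 7x7 average pooling (single-scale SSIM)
    mu_x, mu_y = (F.avg_pool2d(t, 7, 1, 3) for t in (x, y))
    var_x = F.avg_pool2d(x * x, 7, 1, 3) - mu_x ** 2
    var_y = F.avg_pool2d(y * y, 7, 1, 3) - mu_y ** 2
    cov = F.avg_pool2d(x * y, 7, 1, 3) - mu_x * mu_y
    ssim = ((2 * mu_x * mu_y + c1) * (2 * cov + c2)) / (
        (mu_x ** 2 + mu_y ** 2 + c1) * (var_x + var_y + c2))
    return 1 - ssim.mean()            # 0 when images are identical

denoised, clean = torch.rand(2, 1, 1, 64, 64)   # placeholder image pair
mse_objective = F.mse_loss(denoised, clean)      # MSE-based loss
ssim_objective = ssim_loss(denoised, clean)      # SSIM-based loss
```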
Background: For optimizing and evaluating image quality in medical imaging, one can use visual grading experiments, in which observers rate some aspect of image quality on an ordinal scale. Several regression methods are available for analyzing the grading data, and this study aimed at empirically comparing such techniques, in particular when including random effects in the models, which is appropriate for observers and patients. Methods: Data were taken from a previous study in which 6 observers graded or ranked the image quality of four imaging protocols, differing in radiation dose and image reconstruction method, in 40 patients. The models tested included linear regression, the proportional odds model for ordinal logistic regression, the partial proportional odds model, the stereotype logistic regression model and rank-order logistic regression (for ranking data). In the first two models, random effects as well as fixed effects could be included; in the remaining three, only fixed effects. Results: In general, the goodness of fit (AIC and McFadden's pseudo-R²) showed small differences between the models with fixed effects only. For the mixed-effects models, higher AIC and lower pseudo-R² were obtained, which may be related to the different number of parameters in these models. The estimated potential for dose reduction by new image reconstruction methods varied only slightly between models. Conclusions: The authors suggest that the most suitable approach may be to use ordinal logistic regression, which can handle both ordinal data and random effects appropriately.
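For readers who want to try the recommended approach, a proportional odds (ordinal logistic) model can be fitted with statsmodels as sketched below. The file and column names are hypothetical, and the observer/patient random effects discussed above are omitted, as they would require a dedicated mixed ordinal model.

```python
# Sketch: proportional odds model of visual grading scores vs. protocol.
import pandas as pd
from statsmodels.miscmodels.ordinal_model import OrderedModel

df = pd.read_csv("gradings.csv")                         # hypothetical data
df["grade"] = pd.Categorical(df["grade"], ordered=True)  # ordinal response
# dummy-code the imaging protocol as fixed effects
exog = pd.get_dummies(df["protocol"], drop_first=True, dtype=float)
model = OrderedModel(df["grade"], exog, distr="logit")   # proportional odds
result = model.fit(method="bfgs", disp=False)
print(result.summary())
```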
Segmentation of various structures from chest radiographs is often performed as an initial step in computer-aided diagnosis/detection (CAD) systems. In this study, we implemented a multi-task fully convolutional network (FCN) to simultaneously segment multiple anatomical structures, namely the lung fields, the heart, and the clavicles, in standard posterior-anterior chest radiographs. This is done by adding multiple fully connected output nodes on top of a single FCN and using different objective functions for the different structures, rather than training multiple FCNs or using a single FCN with a combined objective function for multiple classes. In our preliminary experiments, we found that the proposed multi-task FCN not only reduces the training and running time compared to treating the multi-structure segmentation problems separately, but also helps the deep neural network converge faster and deliver better segmentation results on some challenging structures, like the clavicles. The proposed method was tested on a public database of 247 posterior-anterior chest radiographs and achieved comparable or higher accuracy on most of the structures when compared with state-of-the-art segmentation methods.
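A toy PyTorch sketch of the multi-head design follows: one shared fully convolutional trunk with one output head and one loss per structure, instead of several networks or a single combined multi-class objective. The two-layer trunk is a placeholder, not the network used in the study.

```python
# Hedged sketch of a multi-task FCN: shared trunk, per-structure heads,
# and a separate binary objective per structure summed for backprop.
import torch
import torch.nn as nn

class MultiTaskFCN(nn.Module):
    def __init__(self, structures=("lungs", "heart", "clavicles")):
        super().__init__()
        self.trunk = nn.Sequential(              # shared feature extractor
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 16, 3, padding=1), nn.ReLU(),
        )
        # one 1x1-convolution output head per anatomical structure
        self.heads = nn.ModuleDict({s: nn.Conv2d(16, 1, 1) for s in structures})

    def forward(self, x):
        f = self.trunk(x)
        return {s: head(f) for s, head in self.heads.items()}

model = MultiTaskFCN()
x = torch.randn(2, 1, 256, 256)                  # placeholder radiographs
targets = {s: torch.randint(0, 2, (2, 1, 256, 256)).float()
           for s in ("lungs", "heart", "clavicles")}
# different objective per structure, summed for a single backward pass
loss = sum(nn.functional.binary_cross_entropy_with_logits(o, targets[s])
           for s, o in model(x).items())
loss.backward()
```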
A new level-set based interactive segmentation framework is introduced, in which the algorithm learns the intensity distributions of the tumor and the surrounding tissue from a line segment drawn by the user from the middle of the lesion towards its border. This information is used to design a likelihood function, which is then incorporated into the level-set framework as an external speed function guiding the segmentation. The endpoint of the input line segment sets a limit to the propagation of the 3D region: when the zero level set crosses this point, the propagation is forced to stop. Finally, a fast level set algorithm with coherent propagation is used to solve the level set equation in real time. This allows the user to instantly see the 3D result while adjusting the position of the line segment to tune the parameters implicitly. The "fluctuating" character of the coherent propagation also enables the contour to coherently follow the mouse cursor's motion when the user fine-tunes the position of the contour on the boundary, where the learned likelihood function may not change much. Preliminary results suggest that radiologists can easily learn how to use the proposed segmentation tool and perform relatively accurate segmentation in much less time than with the conventional slice-by-slice manual procedure.
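The learning step can be sketched as follows, under the assumption that intensities sampled along the user's line are split into lesion-side and background-side samples and modeled with single Gaussians; the log-likelihood ratio then acts as the external speed. The sampling and modeling details are assumptions, not the paper's exact formulation.

```python
# Sketch: external speed from a log-likelihood ratio of two Gaussians
# fitted to intensities sampled along the user-drawn line segment.
import numpy as np

def likelihood_speed(volume, inner_samples, outer_samples):
    mu_in, sd_in = inner_samples.mean(), inner_samples.std() + 1e-6
    mu_out, sd_out = outer_samples.mean(), outer_samples.std() + 1e-6

    def log_gauss(x, mu, sd):
        return -0.5 * ((x - mu) / sd) ** 2 - np.log(sd)

    # positive where lesion-like intensities dominate (contour expands),
    # negative in background-like regions (contour shrinks)
    return log_gauss(volume, mu_in, sd_in) - log_gauss(volume, mu_out, sd_out)
```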
To improve the accuracy of multi-organ segmentation, we propose a model-based segmentation framework that utilizes the local phase information from paired quadrature filters to delineate the organ boundaries. Conventional local phase analysis based on local orientation has the drawback of outputting the same phase for black-to-white and white-to-black edges. This ambiguity can mislead the segmentation when the borders of two organs are too close. Using the gradient of the signed distance map of a statistical shape model, we can distinguish between these two types of edges and prevent the segmented region from leaking into a neighboring organ. In addition, we propose a level-set solution that integrates both edge-based (represented by local phase) and region-based speed functions. Compared with previously proposed methods, the current method uses locally adaptive weighting factors based on the confidence of the phase map (the energy of the quadrature filters), instead of a global weighting factor, to combine these two forces. In our preliminary studies, the proposed method outperformed conventional methods in terms of accuracy in a number of organ segmentation tasks.
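The locally adaptive combination can be written in a few lines; all inputs are assumed to be precomputed maps, and normalising the filter energy to a [0, 1] confidence is an illustrative choice rather than the paper's exact weighting.

```python
# Sketch: combine edge- and region-based speeds with a locally adaptive
# weight derived from the quadrature filter energy (phase confidence).
import numpy as np

def combined_speed(edge_speed, region_speed, filter_energy):
    w = filter_energy / (filter_energy.max() + 1e-12)   # confidence in [0, 1]
    # where phase confidence is high the edge force dominates,
    # elsewhere the region force takes over
    return w * edge_speed + (1.0 - w) * region_speed
```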
In this report, a novel automatic heart and vessel segmentation method is proposed. The heart segmentation pipeline consists of three major steps: heart localization using landmark detection, heart isolation using a statistical shape model, and myocardium segmentation using learning-based voxel classification and local phase analysis. In our preliminary tests, the proposed method achieved encouraging results.