1 - 50 of 63
  • 1.
    Boeck, M.
    et al.
    KTH.
    Forsgren, A.
    KTH.
    Eriksson, K.
    KTH.
    Karlsson, J.
    KTH.
    Robust Model Predictive Control for Adaptive Radiation Therapy. 2017. In: Medical physics (Lancaster), ISSN 0094-2405, Vol. 44, no. 6, p. 3143. Article in journal (Other academic)
  • 2.
    Bokrantz, Rasmus
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Optimization and Systems Theory.
    Multicriteria optimization for volumetric-modulated arc therapy by decomposition into a fluence-based relaxation and a segment weight-based restriction. 2012. In: Medical physics (Lancaster), ISSN 0094-2405, Vol. 39, no. 11, p. 6712-6725. Article in journal (Refereed)
    Abstract [en]

    Purpose: To develop a method for inverse volumetric-modulated arc therapy (VMAT) planning that combines multicriteria optimization (MCO) with direct machine parameter optimization. The ultimate goal is to provide an efficient and intuitive method for generating high quality VMAT plans. Methods: Multicriteria radiation therapy treatment planning amounts to approximating the relevant treatment options by a discrete set of plans, and selecting the combination thereof that strikes the best possible balance between conflicting objectives. This approach is applied to two decompositions of the inverse VMAT planning problem: a fluence-based relaxation considered at a coarsened gantry angle spacing and under a regularizing penalty on fluence modulation, and a segment weight-based restriction in a neighborhood of the solution to the relaxed problem. The two considered variable domains are interconnected by direct machine parameter optimization toward reproducing the dose-volume histogram of the fluence-based solution. Results: The dose distribution quality of plans generated by the proposed MCO method was assessed by direct comparison with benchmark plans generated by a conventional VMAT planning method. The results for four patient cases (prostate, pancreas, lung, and head and neck) are highly comparable between the MCO plans and the benchmark plans: Discrepancies between studied dose-volume statistics for organs at risk were, with the exception of the kidneys of the pancreas case, within 1 Gy or 1 percentage point. Target coverage of the MCO plans was comparable with that of the benchmark plans, but with a small tendency toward a shift from conformity to homogeneity. Conclusions: MCO allows tradeoffs between conflicting objectives encountered in VMAT planning to be explored in an interactive manner through search over a continuous representation of the relevant treatment options.
Treatment plans selected from such a representation are of comparable dose distribution quality to conventionally optimized VMAT plans.
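The "continuous representation" the abstract navigates is a convex combination of a discrete plan database; a minimal sketch of that interpolation step, with hypothetical two-voxel dose vectors (not data from the paper):

```python
# Sketch of Pareto-surface navigation by convex combination of a small
# plan database, as in multicriteria optimization (MCO) planning.

def navigate(plans, weights):
    """Interpolate voxel doses as a convex combination of precomputed plans.

    plans   -- list of dose vectors (one per anchor plan)
    weights -- nonnegative weights summing to one (the navigation sliders)
    """
    assert abs(sum(weights) - 1.0) < 1e-9 and all(w >= 0 for w in weights)
    n_vox = len(plans[0])
    return [sum(w * p[v] for w, p in zip(weights, plans)) for v in range(n_vox)]

# Two anchor plans trading target dose against an organ-at-risk dose:
plan_a = [70.0, 30.0]   # [target Gy, OAR Gy]
plan_b = [66.0, 15.0]
mix = navigate([plan_a, plan_b], [0.25, 0.75])
print(mix)  # → [67.0, 18.75]
```

Real plans are full 3D dose distributions and the navigated mix must still be converted to deliverable machine parameters, which is what the paper's direct machine parameter optimization step handles.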

  • 3.
    Bokrantz, Rasmus
    et al.
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Optimization and Systems Theory. RaySearch Laboratories, Sweden.
    Miettinen, Kaisa
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Optimization and Systems Theory. University of Jyväskylä, Finland.
    Projections onto the Pareto surface in multicriteria radiation therapy optimization. 2015. In: Medical physics (Lancaster), ISSN 0094-2405, Vol. 42, no. 10, p. 5862-5870. Article in journal (Refereed)
    Abstract [en]

    Purpose: To eliminate or reduce the error to Pareto optimality that arises in Pareto surface navigation when the Pareto surface is approximated by a small number of plans. Methods: The authors propose to project the navigated plan onto the Pareto surface as a postprocessing step to the navigation. The projection attempts to find a Pareto optimal plan that is at least as good as or better than the initial navigated plan with respect to all objective functions. An augmented form of projection is also suggested where dose-volume histogram constraints are used to prevent the projection from violating some clinical goal. The projections were evaluated with respect to planning for intensity modulated radiation therapy delivered by step-and-shoot or sliding window, and for spot-scanned intensity modulated proton therapy. Retrospective plans were generated for a prostate and a head and neck case. Results: The projections led to improved dose conformity and better sparing of organs at risk (OARs) for all three delivery techniques and both patient cases. The mean dose to OARs decreased by 3.1 Gy on average for the unconstrained form of the projection and by 2.0 Gy on average when dose-volume histogram constraints were used. No consistent improvements in target homogeneity were observed. Conclusions: There are situations when Pareto navigation leaves room for improvement in OAR sparing and dose conformity, for example, if the approximation of the Pareto surface is coarse or the problem formulation has too permissive constraints. A projection onto the Pareto surface can identify an inaccurate Pareto surface representation and, if necessary, improve the quality of the navigated plan.
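The core idea, finding a plan at least as good in every objective, can be sketched over a finite candidate set; the paper solves a continuous optimization problem instead, and all numbers below are hypothetical:

```python
def project_onto_pareto(candidates, navigated):
    """Replace the navigated plan by a candidate that is at least as good
    in every objective (lower is better) and best in total; keep the
    navigated plan if no candidate dominates it."""
    feasible = [c for c in candidates
                if all(ci <= ni for ci, ni in zip(c, navigated))]
    return min(feasible, key=sum) if feasible else navigated

# Hypothetical two-objective values (target dose error, OAR dose):
plans = [(3.0, 10.0), (2.0, 9.0), (4.0, 5.0)]
print(project_onto_pareto(plans, (3.0, 9.5)))  # → (2.0, 9.0)
```

The "augmented" form in the paper additionally imposes dose-volume histogram constraints, which in this sketch would simply shrink the feasible list further.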

  • 4.
    Bornefalk, Hans
    KTH, School of Engineering Sciences (SCI), Physics.
    Implications of unchanged detection criteria with CAD as second reader of mammograms. 2006. In: Medical physics (Lancaster), ISSN 0094-2405, Vol. 33, no. 4, p. 922-929. Article in journal (Refereed)
    Abstract [en]

    In this paper we address the use of computer-aided detection (CAD) systems as second readers in mammography. The approach is based on Bayesian decision theory and its implication for the choice of optimal operating points. The choice of a certain operating point along an ROC curve corresponds to a particular tradeoff between false positives and missed cancers. By minimizing a total risk function given this tradeoff, we determine optimal decision thresholds for the radiologist and CAD system when CAD is used as a second reader. We show that under very general circumstances, the performance of the sequential system is improved if the decision threshold of the latent human decision variable is increased compared to what it would have been in the absence of the CAD system. This means that an initial stricter decision criterion should be applied by the radiologist when CAD is used as a second reader than otherwise. First and foremost, the results in this paper should be interpreted qualitatively, but an attempt is made at quantifying the effect by tuning the model to a prospective study evaluating the use of CAD as a second reader. By making some necessary and plausible assumptions, we are able to estimate the effect of the resulting suboptimal operating point. In this study of 12 860 women, we estimate that a 15% reduction in callbacks for masses could have been achieved with only about a 1.5% relative decrease in sensitivity compared to that without using a stricter initial criterion by the radiologist. For microcalcifications the corresponding values are 7% and 0.2%. (c) 2006 American Association of Physicists in Medicine.
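In Bayesian decision theory the optimal operating point is a likelihood-ratio threshold set by costs and prevalence; a minimal sketch of that relation, with made-up cost and prevalence values (not the paper's tuned model):

```python
def optimal_lr_threshold(p_disease, cost_fp, cost_fn):
    """Likelihood-ratio threshold that minimises the Bayes risk:
    recall the patient only when LR(x) exceeds this value."""
    return (cost_fp * (1.0 - p_disease)) / (cost_fn * p_disease)

# With CAD acting as a safety net behind the radiologist, the effective
# cost of a radiologist miss drops, so the optimal first-reader
# threshold rises (a stricter initial criterion), as the paper argues:
t_alone = optimal_lr_threshold(0.005, 1.0, 100.0)
t_with_cad = optimal_lr_threshold(0.005, 1.0, 60.0)
print(t_alone, t_with_cad)  # the second threshold is strictly higher
```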

  • 5.
    Bornefalk, Hans
    KTH, School of Engineering Sciences (SCI), Physics, Medical Imaging.
    Task-based weights for photon counting spectral x-ray imaging. 2011. In: Medical physics (Lancaster), ISSN 0094-2405, Vol. 38, no. 11, p. 6065-6073. Article in journal (Refereed)
    Abstract [en]

    Purpose: To develop a framework for taking the spatial frequency composition of an imaging task into account when determining optimal bin weight factors for photon counting energy sensitive x-ray systems. A second purpose of the investigation is to evaluate the possible improvement compared to using pixel based weights. Methods: The Fourier based approach of imaging performance and detectability index d′ is applied to pulse height discriminating photon counting systems. The dependency of d′ on the bin weight factors is made explicit, taking into account both differences in signal and noise transfer characteristics across bins and the spatial frequency dependency of interbin correlations from reabsorbed scatter. Using a simplified model of a specific silicon detector, d′ values for a high and a low frequency imaging task are determined for optimal weights and compared to pixel based weights. Results: The method successfully identifies bins where a large point spread function degrades detection of high spatial frequency targets. The method is also successful in determining how to downweight highly correlated bins. Quantitative predictions for the simplified silicon detector model indicate that improvements in the detectability index when applying task-based weights instead of pixel based weights are small for high frequency targets, but could be in excess of 10% for low frequency tasks where scatter-induced correlation otherwise degrades detectability. Conclusions: The proposed method makes the spatial frequency dependency of complex correlation structures between bins and their effect on the system detective quantum efficiency easier to analyze and allows optimizing bin weights for given imaging tasks. A potential increase in detectability of double digit percents in silicon detector systems operated at typical CT energies (100 kVp) merits further evaluation on a real system. The method is noted to be of higher relevance for silicon detectors than for cadmium (zinc) telluride detectors.
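A stripped-down version of the frequency-domain detectability index, ignoring the interbin correlations the paper models; all bin signals and variances below are invented for illustration:

```python
import math

def dprime(weights, signal, noise_var):
    """Frequency-domain detectability index for an energy-bin-weighted
    image. signal[b][f] is the expected signal difference of bin b at
    frequency f, noise_var[b][f] its noise variance; bins are assumed
    uncorrelated in this sketch."""
    d2 = 0.0
    for f in range(len(signal[0])):
        s = sum(w * signal[b][f] for b, w in enumerate(weights))
        v = sum(w * w * noise_var[b][f] for b, w in enumerate(weights))
        d2 += s * s / v
    return math.sqrt(d2)

# Two bins, two spatial frequencies: bin 1 is much noisier at the high
# frequency, so downweighting it helps this high-frequency task.
sig = [[1.0, 1.0], [1.0, 0.5]]
var = [[1.0, 1.0], [1.0, 4.0]]
print(dprime([0.5, 0.5], sig, var) < dprime([0.8, 0.2], sig, var))  # → True
```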

  • 6.
    Bornefalk, Hans
    KTH, School of Engineering Sciences (SCI), Physics, Medical Imaging.
    XCOM intrinsic dimensionality for low-Z elements at diagnostic energies. 2012. In: Medical physics (Lancaster), ISSN 0094-2405, Vol. 39, no. 2, p. 654-657. Article in journal (Refereed)
    Abstract [en]

    Purpose: To determine the intrinsic dimensionality of linear attenuation coefficients (LACs) from XCOM for elements with low atomic number (Z = 1-20) at diagnostic x-ray energies (25-120 keV). H0(q), the hypothesis that the space of LACs is spanned by q bases, is tested for various q-values. Methods: Principal component analysis is first applied and the LACs are projected onto the first q principal component bases. The residuals of the model values vs XCOM data are determined for all energies and atomic numbers. Heteroscedasticity invalidates the prerequisite of i.i.d. errors necessary for bootstrapping residuals. Instead wild bootstrap is applied, which, by not mixing residuals, allows the effect of the non-i.i.d. residuals to be reflected in the result. Credible regions for the eigenvalues of the correlation matrix for the bootstrapped LAC data are determined. If subsequent credible regions for the eigenvalues overlap, the corresponding principal component is not considered to represent true data structure but noise. If this happens for eigenvalues l and l + 1, for any l <= q, H0(q) is rejected. Results: The largest value of q for which H0(q) is nonrejectable at the 5%-level is q = 4. This indicates that the statistically significant intrinsic dimensionality of low-Z XCOM data at diagnostic energies is four. Conclusions: The method presented allows determination of the statistically significant dimensionality of any noisy linear subspace. Knowledge of such significant dimensionality is of interest for any method making assumptions on intrinsic dimensionality and evaluating results on noisy reference data. For LACs, knowledge of the low-Z dimensionality might be relevant when parametrization schemes are tuned to XCOM data. For x-ray imaging techniques based on the basis decomposition method (Alvarez and Macovski, Phys. Med. Biol. 21, 733-744, 1976), an underlying dimensionality of two is commonly assigned to the LAC of human tissue at diagnostic energies.
The finding of a higher statistically significant dimensionality thus raises the question whether a higher assumed model dimensionality (now feasible with the advent of multibin x-ray systems) might also be practically relevant, i.e., if better tissue characterization results can be obtained.
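The wild bootstrap's defining step is that each residual stays attached to its own observation and only its sign is randomised, so heteroscedasticity survives resampling; a minimal sketch with invented fitted values and residuals:

```python
import random

def wild_bootstrap_sample(fitted, residuals, rng):
    """One wild-bootstrap replicate: each residual keeps its position and
    is multiplied by a random Rademacher weight (+1 or -1), preserving
    the variance pattern across observations."""
    return [f + r * rng.choice((-1.0, 1.0))
            for f, r in zip(fitted, residuals)]

rng = random.Random(0)
fitted = [2.0, 5.0, 9.0]
resid = [0.1, 0.4, 0.2]
replicate = wild_bootstrap_sample(fitted, resid, rng)
# Every replicate value differs from its fit by exactly |its own residual|:
print([abs(y - f) for y, f in zip(replicate, fitted)])
```

In the paper, many such replicates feed a PCA whose eigenvalue credible regions are then checked for overlap.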

  • 7.
    Bornefalk, Hans
    et al.
    KTH, School of Engineering Sciences (SCI), Physics.
    Bornefalk-Hermansson, Anna
    On the comparison of FROC curves in mammography CAD systems. 2005. In: Medical physics (Lancaster), ISSN 0094-2405, Vol. 32, no. 2, p. 412-417. Article in journal (Refereed)
    Abstract [en]

    We present a novel method for assessing the performance of computer-aided detection systems on unseen cases at a given sensitivity level. The sampling error introduced when training the system on a limited data set is captured as the uncertainty in determining the system threshold that would yield a certain predetermined sensitivity on unseen data sets. By estimating the distribution of system thresholds, we construct a confidence interval for the expected number of false positive markings per image at a given sensitivity. We present two alternative procedures for estimating the probability density functions needed for the construction of the confidence interval. The first is based on the common assumption of Poisson distributed number of false positive markings per image. This procedure also relies on the assumption of independence between false positives and sensitivity, an assumption that can be relaxed with the second procedure, which is nonparametric. The second procedure uses the bootstrap applied to the data generated in the leave-one-out construction of the FROC curve, and is a fast and robust way of obtaining the desired confidence interval. Standard FROC curve analysis does not account for the uncertainty in setting the system threshold, so this method should allow for a more fair comparison of different systems. The resulting confidence intervals are surprisingly wide. For our system a conventional FROC curve analysis yields 0.47 false positive markings per image at 90% sensitivity. The 90% confidence interval for the number of false positive markings per image is (0.28, 1.02) with the parametric procedure and (0.27, 1.04) with the nonparametric bootstrap. Due to its computational simplicity and its allowing more fair comparisons between systems, we propose this method as a complement to the traditionally presented FROC curves. (C) 2005 American Association of Physicists in Medicine.
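The nonparametric procedure boils down to a percentile bootstrap; a sketch of that step on hypothetical per-image false-positive counts (the paper bootstraps the full leave-one-out FROC data, not just these counts):

```python
import random

def bootstrap_ci(values, stat, n_boot=2000, alpha=0.10, seed=1):
    """Nonparametric percentile-bootstrap confidence interval for
    stat(values), e.g. the mean number of false positives per image
    at a fixed sensitivity."""
    rng = random.Random(seed)
    reps = sorted(stat([rng.choice(values) for _ in values])
                  for _ in range(n_boot))
    return reps[int(n_boot * alpha / 2)], reps[int(n_boot * (1 - alpha / 2)) - 1]

# Hypothetical false-positive counts per image at the chosen threshold:
fp_per_image = [0, 1, 0, 2, 0, 1, 0, 0, 3, 1, 0, 0, 1, 0, 2]
mean = lambda xs: sum(xs) / len(xs)
lo, hi = bootstrap_ci(fp_per_image, mean)
print(lo <= mean(fp_per_image) <= hi)  # → True
```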

  • 8.
    Bornefalk, Hans
    et al.
    KTH, School of Engineering Sciences (SCI), Physics.
    Lewin, John H.
    Danielsson, Mats
    Lundqvist, Mats
    KTH, School of Engineering Sciences (SCI), Physics.
    Improved dual-energy imaging with electronic spectrum splitting. In: Medical physics (Lancaster), ISSN 0094-2405. Article in journal (Refereed)
  • 9.
    Böck, Michelle
    et al.
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Optimization and Systems Theory. RaySearch Labs AB, Sweden.
    Eriksson, Kjell
    Forsgren, Anders
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Optimization and Systems Theory.
    Hardemark, Bjorn
    Toward robust adaptive radiation therapy strategies. 2017. In: Medical physics (Lancaster), ISSN 0094-2405, Vol. 44, no. 6, p. 2054-2065. Article in journal (Refereed)
    Abstract [en]

    Purpose: To set up a framework combining robust treatment planning with adaptive re-optimization in order to maintain high treatment quality, to respond to interfractional geometric variations and to identify those patients who will benefit the most from an adaptive fractionation schedule. Methods: The authors propose robust adaptive strategies based on stochastic minimax optimization for a series of simulated treatments on a one-dimensional patient phantom. The plan applied during the first fractions should be able to handle anticipated systematic and random errors. Information on the individual geometric variations is gathered at each fraction. At scheduled fractions, the impact of the measured errors on the delivered dose distribution is evaluated. For a patient having received a dose that does not satisfy specified plan quality criteria, the plan is re-optimized based on these individually measured errors. The re-optimized plan is then applied during subsequent fractions until a new scheduled adaptation becomes necessary. In this study, three different adaptive strategies are introduced and investigated. (a) In the first adaptive strategy, the measured systematic and random error scenarios and their assigned probabilities are updated to guide the robust re-optimization. (b) In the second strategy, the degree of conservativeness is adapted in response to the measured dose delivery errors. (c) In the third strategy, the uncertainty margins around the target are recalculated based on the measured errors. The simulated treatments are subjected to systematic and random errors that are either similar to the anticipated errors or unpredictably larger in order to critically evaluate the performance of these three adaptive strategies. Results: According to the simulations, robustly optimized treatment plans provide sufficient treatment quality for those treatment error scenarios similar to the anticipated error scenarios. 
Moreover, combining robust planning with adaptation leads to improved organ-at-risk protection. In case of unpredictably larger treatment errors, the first strategy in combination with at most weekly adaptation performs best at notably improving treatment quality in terms of target coverage and organ-at-risk protection in comparison with a non-adaptive approach and the other adaptive strategies. Conclusion: The authors present a framework that provides robust plan re-optimization or margin adaptation of a treatment plan in response to interfractional geometric errors throughout the fractionated treatment. According to the simulations, these robust adaptive treatment strategies are able to identify candidates for an adaptive treatment, thus giving the opportunity to provide individualized plans, and improve their treatment quality through adaptation. The simulated robust adaptive framework is a guide for further development of optimally controlled robust adaptive therapy models.
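The minimax idea behind the robust strategies can be sketched on a toy 1D setting like the paper's phantom; the candidate "plans", scenarios, and cost numbers below are all hypothetical:

```python
def minimax_plan(candidates, scenarios, objective):
    """Robust (minimax) plan selection: choose the candidate whose
    worst-case objective over the anticipated error scenarios is
    smallest."""
    return min(candidates,
               key=lambda p: max(objective(p, s) for s in scenarios))

# Toy 1D setting: a "plan" is a margin m around the target, a scenario
# is a geometric shift e; missing the target costs 10, margin costs m.
def objective(m, e):
    return m + (0.0 if m >= abs(e) else 10.0)

print(minimax_plan([0, 1, 2, 3, 4], [-2.0, 1.0, 3.0], objective))  # → 3
```

Adaptation, in this picture, means re-running the selection at scheduled fractions with the scenario set (or its probabilities) updated from the measured errors, which is the paper's first strategy.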

  • 10.
    Carlsson, F
    et al.
    KTH. RaySearch Laboratories AB, Stockholm, Sweden.
    Forsgren, Anders
    KTH, Superseded Departments (pre-2005), Mathematics.
    Iterative regularization of the IMRT optimization problem. 2005. In: Medical physics (Lancaster), ISSN 0094-2405, Vol. 32, no. 6, p. 2140. Article in journal (Other academic)
  • 11.
    Carlsson, F
    et al.
    Rehbinder, H
    Forsgren, Anders
    KTH, Superseded Departments (pre-2005), Mathematics.
    Lof, J
    On the use of curvature information in IMRT optimization. 2004. In: Medical physics (Lancaster), ISSN 0094-2405, Vol. 31, no. 6, p. 1906. Article in journal (Other academic)
  • 12.
    Carlsson, Fredrik
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Optimization and Systems Theory.
    Combining segment generation with direct step-and-shoot optimization in intensity-modulated radiation therapy. 2008. In: Medical physics (Lancaster), ISSN 0094-2405, Vol. 35, no. 9, p. 3828-3838. Article in journal (Refereed)
    Abstract [en]

    A method for generating a sequence of intensity-modulated radiation therapy step-and-shoot plans with increasing number of segments is presented. The objectives are to generate high-quality plans with few, large and regular segments, and to make the planning process more intuitive. The proposed method combines segment generation with direct step-and-shoot optimization, where leaf positions and segment weights are optimized simultaneously. The segment generation is based on a column generation approach. The method is evaluated on a test suite consisting of five head-and-neck cases and five prostate cases, planned for delivery with an Elekta SLi accelerator. The adjustment of segment shapes by direct step-and-shoot optimization improves the plan quality compared to using fixed segment shapes. The improvement in plan quality when adding segments is larger for plans with few segments. Eventually, adding more segments contributes very little to the plan quality, but increases the plan complexity. Thus, the method provides a tool for controlling the number of segments and, indirectly, the delivery time. This can support the planner in finding a sound trade-off between plan quality and treatment complexity.
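The column-generation control flow can be sketched abstractly; segment pricing and weight optimization are stubbed out here with hypothetical stand-ins, since the real versions involve the full dose model:

```python
def column_generation(pool, price, optimize_weights, max_segments):
    """Sketch of a column-generation loop for step-and-shoot planning:
    repeatedly add the candidate segment with the most negative reduced
    cost, then re-optimize all segment weights (and, in the paper, the
    leaf positions) jointly."""
    pool, chosen, weights = list(pool), [], []
    while pool and len(chosen) < max_segments:
        best = min(pool, key=price)
        if price(best) >= 0:        # no remaining segment improves the plan
            break
        pool.remove(best)
        chosen.append(best)
        weights = optimize_weights(chosen)
    return chosen, weights

# Hypothetical segments identified by their reduced cost alone:
reduced_cost = {"a": -3.0, "b": -1.0, "c": 2.0}
chosen, w = column_generation(reduced_cost, reduced_cost.get,
                              lambda segs: [1.0] * len(segs), 2)
print(chosen)  # → ['a', 'b']
```

The `max_segments` cap is exactly the control knob the abstract highlights: it trades plan quality against delivery complexity.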

  • 13.
    Carlsson, Fredrik
    et al.
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Optimization and Systems Theory.
    Forsgren, Anders
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Optimization and Systems Theory.
    Iterative regularization in intensity-modulated radiation therapy optimization. 2006. In: Medical physics (Lancaster), ISSN 0094-2405, Vol. 33, no. 1, p. 225-234. Article in journal (Refereed)
    Abstract [en]

    A common way to solve intensity-modulated radiation therapy (IMRT) optimization problems is to use a beamlet-based approach. The approach is usually employed in a three-step manner: first a beamlet-weight optimization problem is solved, then the fluence profiles are converted into stepand-shoot segments, and finally postoptimization of the segment weights is performed. A drawback of beamlet-based approaches is that beamlet-weight optimization problems are ill-conditioned and have to be regularized in order to produce smooth fluence profiles that are suitable for conversion. The purpose of this paper is twofold: first, to explain the suitability of solving beamlet-based IMRT problems by a BFGS quasi-Newton sequential quadratic programming method with diagonal initial Hessian estimate, and second, to empirically show that beamlet-weight optimization problems should be solved in relatively few iterations when using this optimization method. The explanation of the suitability is based on viewing the optimization method as an iterative regularization method. In iterative regularization, the optimization problem is solved approximately by iterating long enough to obtain a solution close to the optimal one, but terminating before too much noise occurs. Iterative regularization requires an optimization method that initially proceeds in smooth directions and makes rapid initial progress. Solving ten beamlet-based IMRT problems with dose-volume objectives and bounds on the beamlet-weights, we find that the considered optimization method fulfills the requirements for performing iterative regularization. After segment-weight optimization, the treatments obtained using 35 beamlet-weight iterations outperform the treatments obtained using 100 beamlet-weight iterations, both in terms of objective value and of target uniformity. We conclude that iterating too long may in fact deteriorate the quality of the deliverable plan.
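The regularizing effect of early termination is easiest to see on the classical Landweber iteration rather than the BFGS method the paper uses; a toy ill-conditioned system (invented numbers) shows the mechanism:

```python
def landweber(A, b, tau, iters):
    """Landweber iteration x <- x + tau * A^T (b - A x). Components along
    large singular values converge quickly, components along small ones
    (where noise lives) converge slowly, so stopping early acts as a
    regularizer -- the same mechanism the paper identifies in a
    quasi-Newton method."""
    m, n = len(A), len(A[0])
    x = [0.0] * n
    for _ in range(iters):
        r = [b[i] - sum(A[i][j] * x[j] for j in range(n)) for i in range(m)]
        x = [x[j] + tau * sum(A[i][j] * r[i] for i in range(m))
             for j in range(n)]
    return x

# Singular values 1 and 0.1; true solution is [1, 1].
A, b = [[1.0, 0.0], [0.0, 0.1]], [1.0, 0.1]
x5 = landweber(A, b, tau=1.0, iters=5)
x1000 = landweber(A, b, tau=1.0, iters=1000)
print(x5, x1000)  # smooth component is exact after one step; the other lags
```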

  • 14.
    Cederström, Björn
    et al.
    KTH, School of Engineering Sciences (SCI), Physics.
    Fredenberg, Erik
    KTH, School of Engineering Sciences (SCI), Physics.
    The influence of anatomical noise on optimal beam quality in mammography. 2014. In: Medical physics (Lancaster), ISSN 0094-2405, Vol. 41, no. 12, p. 121903. Article in journal (Refereed)
    Abstract [en]

    Purpose: Beam-quality optimization in digital mammography traditionally considers detection of a target obscured by quantum noise in a homogeneous background. This does not correspond well to the clinical imaging task because real mammographic images contain a complex superposition of anatomical structures, resulting in anatomical noise that may dominate over quantum noise. The purpose of this paper is to assess the influence on optimal beam quality in mammography when anatomical noise is taken into account. Methods: The detectability of microcalcifications and masses was quantified using a theoretical ideal-observer model that included quantum noise as well as anatomical noise and a simplified model of a photon-counting mammography system. The outcome was experimentally verified using two types of simulated tissue phantoms. Results: The theoretical model showed that the detectability of tumors and microcalcifications behaves differently with respect to beam quality and dose. The results for small microcalcifications were similar to what traditional optimization methods yield, which is to be expected because quantum noise dominates over anatomical noise at high spatial frequencies. For larger tumors, however, low-frequency anatomical noise was the limiting factor. Because anatomical structure noise has similar energy dependence as tumor contrast, the optimal x-ray energy was found to be higher and the useful energy region was wider than traditional methods suggest. A simplified scalar model was able to capture this behavior using a fitted noise mixing parameter. The phantom measurements confirmed these theoretical results. Conclusions: It was shown that since quantum noise constitutes only a small fraction of the noise, the dose could be reduced substantially without sacrificing tumor detectability. 
Furthermore, when anatomical noise is included, the tube voltage can be increased well beyond what is conventionally considered optimal and used clinically, without loss of image quality. However, no such conclusions can be drawn for the more complex mammographic imaging task as a whole. (C) 2014 American Association of Physicists in Medicine.
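The central effect, that anatomical noise dominates at low spatial frequencies and therefore penalizes large targets more, can be sketched with a power-law noise model; every parameter value below is illustrative, not fitted to the paper's phantoms:

```python
def detectability(signal, freqs, quantum_noise, anat_coeff, beta=3.0):
    """Ideal-observer detectability with quantum plus power-law anatomical
    noise N_a(f) = anat_coeff / f**beta (a common model form)."""
    d2 = sum(s * s / (quantum_noise + anat_coeff / f ** beta)
             for s, f in zip(signal, freqs))
    return d2 ** 0.5

freqs = [0.1, 0.5, 1.0, 2.0]          # cycles/mm
mass = [1.0, 0.3, 0.1, 0.0]           # large, low-frequency target
calc = [0.0, 0.1, 0.3, 1.0]           # small, high-frequency target

# Ratio of detectability with vs without anatomical noise, per task:
loss_mass = detectability(mass, freqs, 1.0, 0.01) / detectability(mass, freqs, 1.0, 0.0)
loss_calc = detectability(calc, freqs, 1.0, 0.01) / detectability(calc, freqs, 1.0, 0.0)
print(loss_mass < loss_calc)  # → True: the mass task loses far more
```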

  • 15.
    Crosbie, J. C.
    et al.
    Rogers, P. A. W.
    Stevenson, A. W.
    Hall, C. J.
    Lye, J. E.
    Nordström, Terese
    KTH.
    Midgley, S. M.
    Lewis, R. A.
    Reference dosimetry at the Australian Synchrotron's imaging and medical beamline using free-air ionization chamber measurements and theoretical predictions of air kerma rate and half value layer. 2013. In: Medical physics (Lancaster), ISSN 0094-2405, Vol. 40, no. 6, p. 062103. Article in journal (Refereed)
    Abstract [en]

    Purpose: Novel, preclinical radiotherapy modalities are being developed at synchrotrons around the world, most notably stereotactic synchrotron radiation therapy and microbeam radiotherapy at the European Synchrotron Radiation Facility in Grenoble, France. The imaging and medical beamline (IMBL) at the Australian Synchrotron has recently become available for preclinical radiotherapy and imaging research with clinical trials, a distinct possibility in the coming years. The aim of this present study was to accurately characterize the synchrotron-generated x-ray beam for the purposes of air kerma-based absolute dosimetry. Methods: The authors used a theoretical model of the energy spectrum from the wiggler source and validated this model by comparing the transmission through copper absorbers (0.1-3.0 mm) against real measurements conducted at the beamline. The authors used a low energy free air ionization chamber (LEFAC) from the Australian Radiation Protection and Nuclear Safety Agency and a commercially available free air chamber (ADC-105) for the measurements. The dimensions of these two chambers are different from one another requiring careful consideration of correction factors. Results: Measured and calculated half value layer (HVL) and air kerma rates differed by less than 3% for the LEFAC when the ion chamber readings were corrected for electron energy loss and ion recombination. The agreement between measured and predicted air kerma rates was less satisfactory for the ADC-105 chamber, however. The LEFAC and ADC measurements produced a first half value layer of 0.405 ± 0.015 and 0.412 ± 0.016 mm Cu, respectively, compared to the theoretical prediction of 0.427 ± 0.012 mm Cu. The theoretical model based upon a spectrum calculator derived a mean beam energy of 61.4 keV with a first half value layer of approximately 30 mm in water. 
    Conclusions: In this study the authors showed their ability to verify the predicted air kerma rate and x-ray attenuation curve on the IMBL using a simple experimental method, namely HVL measurements. The HVL measurements strongly support the predicted x-ray beam spectrum, which in turn has a profound effect on x-ray dosimetry.
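Extracting a first HVL from a measured transmission curve is a simple interpolation; a sketch on a synthetic curve with a known answer (the copper thicknesses and attenuation coefficient below are invented, not the beamline's data):

```python
import math

def half_value_layer(thicknesses, transmissions):
    """First HVL by log-linear interpolation between the two absorber
    thicknesses whose measured transmissions bracket 0.5 (exact for a
    monoenergetic beam, a good local approximation otherwise)."""
    pts = list(zip(thicknesses, transmissions))
    for (t0, m0), (t1, m1) in zip(pts, pts[1:]):
        if m0 >= 0.5 >= m1:
            frac = (math.log(0.5) - math.log(m0)) / (math.log(m1) - math.log(m0))
            return t0 + frac * (t1 - t0)
    raise ValueError("transmission of 0.5 not bracketed by the data")

# Synthetic transmission curve with a known HVL of 0.4 mm Cu:
mu = math.log(2) / 0.4
t = [0.1, 0.2, 0.3, 0.5, 0.7]
m = [math.exp(-mu * x) for x in t]
print(round(half_value_layer(t, m), 6))  # → 0.4
```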

  • 16.
    Danielsson, Mats
    KTH, School of Engineering Sciences (SCI), Physics, Medical Imaging.
    Challenges and Opportunities with Photon Counting CT. 2012. In: Medical physics (Lancaster), ISSN 0094-2405, Vol. 39, no. 6, p. 3989. Article in journal (Other academic)
  • 17.
    Danielsson, Mats
    KTH, School of Engineering Sciences (SCI), Physics, Physics of Medical Imaging.
    MO-D-210A-01: Photon Counting Detectors for Mammography. 2009. In: Medical physics (Lancaster), ISSN 0094-2405, Vol. 36, no. 6, p. 2699. Article in journal (Refereed)
    Abstract [en]

    Mammography is currently one of the most common x-ray imaging examinations. More than 100 million women worldwide are screened every year, and early detection of breast cancer through mammography has proven to be a key to significantly reduced mortality. The requirements on spatial resolution as well as contrast resolution are very high in order to detect and diagnose the cancer. Moreover, because of the large number of women going through this procedure and the fact that more than 99% are healthy, it also becomes very important to minimize the radiation dose. Photon counting may be one way to meet the demands, and mammography is the first modality in x-ray imaging to implement photon counting detectors. FDA approval is still pending, but such detectors are currently in routine clinical use in more than 15 countries. Photon counting enables discrimination of all electronic noise and a more optimal use of the information in each x-ray. The absence of electronic noise is particularly important in low dose applications; in tomosynthesis, for example, a number of exposures from different angles are required, and since the dose in each projection is just a fraction of the total dose for a mammogram, the sensitivity to electronic noise will increase. Using the spectral information for each x-ray, it is in principle possible to deduce the elemental composition of an object in the breast. This could for example be used to enhance microcalcifications relative to soft tissue and to differentiate water from fat in cysts. Recently, contrast mammography has attracted significant attention. In this application iodine is used as a contrast medium to visualize the vascular structure. As in breast MRI, the cancer stands out because of the leaky vessels resulting from its angiogenesis. A photon counting detector gives a unique opportunity to image the iodine through spectral imaging by adjusting one of the thresholds to its K-edge. Challenges for photon counting in mammography are the high rates of x-rays: generating the required flux at the source and handling the rates at the detector without pile-up. Even more difficult to handle is the charge sharing between detector pixels which, if not corrected for, will compromise the energy information. The current status of photon counting detectors in mammography will be described, together with strategies to overcome the pitfalls. Future possibilities with spectral imaging in mammography will also be investigated, and examples from ongoing clinical trials will be given. Learning Objectives: 1. Status of photon counting detectors in mammography. 2. Pitfalls and opportunities with photon counting detectors for mammography. 3. Future applications based on spectral detectors for mammography.

  • 18.
    Danielsson, Mats
    KTH, School of Engineering Sciences (SCI), Physics, Physics of Medical Imaging.
    Philips Microdose Mammography - the Technology and Physics Behind the First FDA Approved Photon Counting X-Ray Imaging System2012In: Medical physics (Lancaster), ISSN 0094-2405, Vol. 39, no 6, p. 4017-4017Article in journal (Other academic)
    Abstract [en]

    Purpose: To validate the use of 4D‐Computed Tomography (4D‐CT) for pre‐treatment evaluation of fractional regional ventilation in patients with lung cancer by benchmarking its performance against scintigraphy V/Q imaging, the current gold standard. The second aim is to further corroborate the results of 4D‐CT estimation of lung aeration against the results of Pulmonary Function Testing (PFT). Methods: Scintigraphy V/Q and 4D‐CT studies were acquired in four lung cancer patients prior to treatment with radiation therapy. PFTs were acquired in 3 out of the 4 patients. 4D‐CT images were used to create 3D fractional regional ventilation maps by applying a 'mass correction' and subtracting the spatially matched end‐exhale and end‐inhale images. Ventilation maps were then collapsed in the anterior‐posterior dimension to create a coronal 2D projection image consistent with the scintigraphy V/Q images. The left and right lung fields were isolated on the projection image and divided into 3 sections of equal height. Summation of the signal intensity in each of the sections was carried out on the maps, analogous to the analysis performed on V/Q scans, and statistically compared using Kendall's tau rank correlation. Results: The non‐parametric Kendall's tau estimate ranged between 0.87 and 0.95 for N=4, with corresponding p‐values ranging between 0.005 and 0.0002. Mean functional residual capacities (FRC) from the PFTs (N=3) versus calculated FRCs were 2.7 ± 0.6 L and 2.4 ± 0.7 L, and the null hypothesis could not be rejected (p = 0.61). The mean fractional regional ventilation versus the ratio of tidal volume to FRC was 0.24 ± 0.11 versus 0.22 ± 0.08, and the null hypothesis could not be rejected (p = 0.73). Conclusions: There was a strong correlation between 4D‐CT and scintigraphy V/Q. The similarity between the calculated and measured FRCs further validates the utility of 4D‐CT and supports its use in evaluating lung ventilation in patients with pulmonary neoplasms.
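The section-wise comparison described in this abstract (summed signal per lung section, compared with Kendall's tau rank correlation) can be sketched in a few lines of pure Python. The regional values below are hypothetical illustrations, not the study's data:

```python
from itertools import combinations

def kendall_tau(x, y):
    """Kendall's tau-a rank correlation: (concordant - discordant) / total pairs."""
    assert len(x) == len(y)
    concordant = discordant = 0
    for i, j in combinations(range(len(x)), 2):
        product = (x[i] - x[j]) * (y[i] - y[j])
        if product > 0:
            concordant += 1      # both modalities rank the pair the same way
        elif product < 0:
            discordant += 1      # the modalities disagree on this pair
    n_pairs = len(x) * (len(x) - 1) // 2
    return (concordant - discordant) / n_pairs

# Hypothetical summed intensities for six lung sections (upper/middle/lower,
# left and right), one list per modality.
ct_ventilation = [0.10, 0.18, 0.25, 0.09, 0.16, 0.22]
vq_scintigraphy = [0.12, 0.17, 0.24, 0.08, 0.15, 0.24]
print(kendall_tau(ct_ventilation, vq_scintigraphy))  # → 0.9333...
```

Note that this is the tau-a variant, which simply ignores tied pairs; library implementations (e.g., in statistics packages) often use a tie-corrected variant instead.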

  • 19.
    Danielsson, Mats
    KTH, School of Engineering Sciences (SCI), Physics, Physics of Medical Imaging.
    TH‐A‐217BCD‐01: Challenges and Opportunities with Photon Counting CT2012In: Medical physics (Lancaster), ISSN 0094-2405, Vol. 39, no 6, p. 3989-3989Article in journal (Refereed)
    Abstract [en]

    There is currently a large interest in photon counting CT detector research in both academia and industry. There are several detector systems and strategies to handle major challenges such as the very high count‐rate while the energy information for each photon is retained. Another challenge is cross talk, which may compromise the energy estimation for the photons and can cause double counts, an effect that worsens with smaller pixel size. If implemented in the clinic, photon counting CT will likely enable a dose reduction where this is important, for example in pediatric CT. Photon counting CT will also make possible quantitative measurements, energy weighting, and/or tissue decomposition techniques that can be of great importance for a number of imaging tasks.

  • 20.
    Engberg, Lovisa
    et al.
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Optimization and Systems Theory.
    Forsgren, Anders
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Optimization and Systems Theory.
    Eriksson, Kjell
    Hardemark, Bjorn
    Explicit optimization of plan quality measures in intensity-modulated radiation therapy treatment planning2017In: Medical physics (Lancaster), ISSN 0094-2405, Vol. 44, no 6, p. 2045-2053Article in journal (Refereed)
    Abstract [en]

    Purpose: To formulate convex planning objectives of treatment plan multicriteria optimization with explicit relationships to the dose-volume histogram (DVH) statistics used in plan quality evaluation. Methods: Conventional planning objectives are designed to minimize the violation of DVH statistics thresholds using penalty functions. Although successful in guiding the DVH curve towards these thresholds, conventional planning objectives offer limited control of the individual points on the DVH curve (doses-at-volume) used to evaluate plan quality. In this study, we abandon the usual penalty-function framework and propose planning objectives that more closely relate to DVH statistics. The proposed planning objectives are based on mean-tail-dose, resulting in convex optimization. We also demonstrate how to adapt a standard optimization method to the proposed formulation in order to obtain a substantial reduction in computational cost. Results: We investigated the potential of the proposed planning objectives as tools for optimizing DVH statistics through juxtaposition with the conventional planning objectives on two patient cases. Sets of treatment plans with differently balanced planning objectives were generated using either the proposed or the conventional approach. Dominance in the sense of better distributed doses-at-volume was observed in plans optimized within the proposed framework. Conclusion: The initial computational study indicates that the DVH statistics are better optimized and more efficiently balanced using the proposed planning objectives than using the conventional approach.

  • 21.
    Engström, Emma
    KTH, School of Architecture and the Built Environment (ABE), Land and Water Resources Engineering, Environmental Management and Assessment.
    Comparison of power spectra for tomosynthesis projections and reconstructed images2009In: Medical physics (Lancaster), ISSN 0094-2405, Vol. 36, no 5, p. 1753-1758Article in journal (Refereed)
    Abstract [en]

    Burgess [Med. Phys. 28, 419-437 (2001)] showed that the power spectrum of mammographic breast background follows a power law and that lesion detectability is affected by the power-law exponent beta, which measures the amount of structure in the background. Following the study of Burgess, the authors measured and compared the power-law exponent of mammographic backgrounds in tomosynthesis projections and reconstructed slices to investigate the effect of tomosynthesis imaging on background structure. Our data set consisted of 55 patient cases. For each case, regions of interest (ROIs) were extracted from both projection images and reconstructed slices. The periodogram of each ROI was computed by taking the squared modulus of the Fourier transform of the ROI. The power-law exponent was determined for each periodogram and averaged across all ROIs extracted from all projections or reconstructed slices for each patient data set. For the projections, the mean beta averaged across the 55 cases was 3.06 (standard deviation of 0.21), while it was 2.87 (0.24) for the corresponding reconstructions. The difference in beta for a given patient between the projection ROIs and the reconstructed ROIs, averaged across the 55 cases, was 0.194, which was statistically significant (p < 0.001). The 95% CI for the difference between the mean value of beta for the projections and reconstructions was [0.170, 0.218]. The results are consistent with the observation that the amount of breast structure in the tomosynthesis slice is reduced compared to projection mammography and that this may lead to improved lesion detectability.
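The beta-estimation step described in this abstract (determining the power-law exponent of a periodogram) reduces to a least-squares slope in log-log coordinates. A minimal sketch on synthetic, noiseless periodogram values (hypothetical, not the study's data):

```python
import math

def fit_power_law_exponent(freqs, power):
    """Least-squares slope of log(power) vs log(freq); returns beta for a
    periodogram following P(f) ~ c * f**(-beta)."""
    lx = [math.log(f) for f in freqs]
    ly = [math.log(p) for p in power]
    n = len(lx)
    mx, my = sum(lx) / n, sum(ly) / n
    slope = (sum((a - mx) * (b - my) for a, b in zip(lx, ly))
             / sum((a - mx) ** 2 for a in lx))
    return -slope   # P ~ f**(-beta), so beta is minus the log-log slope

# Synthetic periodogram with beta = 3.0 (in the range reported above).
freqs = [0.05 * k for k in range(1, 41)]
power = [2.0 * f ** -3.0 for f in freqs]
print(fit_power_law_exponent(freqs, power))  # ≈ 3.0
```

On real periodograms the fit is typically restricted to a frequency band where the power law holds, since low-frequency windowing effects and the high-frequency noise floor both bias the slope.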

  • 22.
    Fredenberg, Erik
    et al.
    KTH, School of Engineering Sciences (SCI), Physics, Medical Imaging.
    Cederström, Björn
    KTH, School of Engineering Sciences (SCI), Physics, Medical Imaging.
    Åslund, Magnus
    KTH, School of Engineering Sciences (SCI), Physics.
    Nillius, Peter
    KTH, School of Engineering Sciences (SCI), Physics, Medical Imaging.
    Danielsson, Mats
    KTH, School of Engineering Sciences (SCI), Physics, Medical Imaging.
    An efficient pre-object collimator based on an x-ray lens2009In: Medical physics (Lancaster), ISSN 0094-2405, Vol. 36, no 2, p. 626-633Article in journal (Refereed)
    Abstract [en]

    A multiprism lens (MPL) is a refractive x-ray lens with one-dimensional focusing properties. If used as a pre-object collimator in a scanning system for medical x-ray imaging, it reduces the divergence of the radiation and improves photon economy compared to a slit collimator. Potential advantages include shorter acquisition times, reduced tube loading, or improved resolution. We present the first images acquired with an MPL in a prototype for a scanning mammography system. The lens showed a gain of flux of 1.32 compared to a slit collimator at equal resolution, or a gain in resolution of 1.31–1.44 at equal flux. We expect the gain of flux in a clinical setup with an optimized MPL and a custom-made absorption filter to reach 1.67, or a 1.45–1.54 gain in resolution.

  • 23.
    Fredenberg, Erik
    et al.
    KTH, School of Engineering Sciences (SCI), Physics, Medical Imaging.
    Danielsson, Mats
    KTH, School of Engineering Sciences (SCI), Physics, Medical Imaging.
    Stayman, J. Webster
    Siewerdsen, Jeffrey H.
    Åslund, Magnus
    Ideal-observer detectability in photon-counting differential phase-contrast imaging using a linear-systems approach2012In: Medical physics (Lancaster), ISSN 0094-2405, Vol. 39, no 9, p. 5317-5335Article in journal (Refereed)
    Abstract [en]

    Purpose: To provide a cascaded-systems framework based on the noise-power spectrum (NPS), modulation transfer function (MTF), and noise-equivalent number of quanta (NEQ) for quantitative evaluation of differential phase-contrast imaging (Talbot interferometry) in relation to conventional absorption contrast under equal-dose, equal-geometry, and, to some extent, equal-photon-economy constraints. The focus is a geometry for photon-counting mammography. Methods: Phase-contrast imaging is a promising technology that may emerge as an alternative or adjunct to conventional absorption contrast. In particular, phase contrast may increase the signal-difference-to-noise ratio compared to absorption contrast because the difference in phase shift between soft-tissue structures is often substantially larger than the absorption difference. We have developed a comprehensive cascaded-systems framework to investigate Talbot interferometry, which is a technique for differential phase-contrast imaging. Analytical expressions for the MTF and NPS were derived to calculate the NEQ and a task-specific ideal-observer detectability index under assumptions of linearity and shift invariance. Talbot interferometry was compared to absorption contrast at equal dose, and using either a plane wave or a spherical wave in a conceivable mammography geometry. The impact of source size and spectrum bandwidth was included in the framework, and the trade-off with photon economy was investigated in some detail. Wave-propagation simulations were used to verify the analytical expressions and to generate example images. Results: Talbot interferometry inherently detects the differential of the phase, which led to a maximum in NEQ at high spatial frequencies, whereas the absorption-contrast NEQ decreased monotonically with frequency. 
Further, phase contrast detects differences in density rather than atomic number, and the optimal imaging energy was found to be a factor of 1.7 higher than for absorption contrast. Talbot interferometry with a plane wave increased detectability for 0.1-mm tumor and glandular structures by a factor of 3-4 at equal dose, whereas absorption contrast was the preferred method for structures larger than ~0.5 mm. Microcalcifications are small, but differ from soft tissue in atomic number more than density, which favors absorption contrast, and Talbot interferometry was barely beneficial at all within the resolution limit of the system. Further, Talbot interferometry favored detection of "sharp" as opposed to "smooth" structures, and favored discrimination tasks by about 50% compared to detection tasks. The technique was relatively insensitive to spectrum bandwidth, whereas the projected source size was more important. If equal photon economy was added as a restriction, phase-contrast efficiency was reduced so that the benefit for detection tasks almost vanished compared to absorption contrast, but discrimination tasks were still improved by close to a factor of 2 at the resolution limit. Conclusions: Cascaded-systems analysis enables comprehensive and intuitive evaluation of phase-contrast efficiency in relation to absorption contrast under requirements of equal dose, equal geometry, and equal photon economy. The benefit of Talbot interferometry was highly dependent on the task, in particular detection versus discrimination tasks, and on target size, shape, and material. Requiring equal photon economy weakened the benefit of Talbot interferometry in mammography.
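As background to the cascaded-systems quantities named in this abstract, a commonly used Fourier-domain form of a task-specific ideal-observer detectability index built from the MTF and NPS is (one standard textbook form, not necessarily the paper's exact definition):

```latex
d'^2 = \iint \left| \Delta F(u,v) \right|^2
       \frac{\mathrm{MTF}^2(u,v)}{\mathrm{NPS}(u,v)} \, \mathrm{d}u \, \mathrm{d}v ,
```

where $\Delta F(u,v)$ is the Fourier transform of the task's signal difference and the ratio $\mathrm{MTF}^2/\mathrm{NPS}$ is proportional to the NEQ. The frequency dependence of this integrand is what makes the comparison task-dependent: a differential phase signal with an NEQ peaking at high frequencies favors small, sharp targets, as described above.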

  • 24.
    Fredenberg, Erik
    et al.
    KTH, School of Engineering Sciences (SCI), Physics, Medical Imaging.
    Hemmendorff, Magnus
    Cederström, Björn
    KTH, School of Engineering Sciences (SCI), Physics, Medical Imaging.
    Åslund, Magnus
    Danielsson, Mats
    KTH, School of Engineering Sciences (SCI), Physics, Medical Imaging.
    Contrast-enhanced spectral mammography with a photon-counting detector2010In: Medical physics (Lancaster), ISSN 0094-2405, Vol. 37, no 5, p. 2017-2029Article in journal (Refereed)
    Abstract [en]

    Purpose: Spectral imaging is a method in medical x-ray imaging to extract information about the object constituents by the material-specific energy dependence of x-ray attenuation. In particular, the detectability of a contrast agent can be improved over a lumpy background. We have investigated a photon-counting spectral imaging system with two energy bins for contrast-enhanced mammography. System optimization and the potential benefit compared to conventional non-energy-resolved imaging was studied.

    Methods: A framework for system characterization was set up that included quantum and anatomical noise, and a theoretical model of the system was benchmarked to phantom measurements.

    Results: It was found that optimal combination of the energy-resolved images corresponded approximately to minimization of the anatomical noise, and an ideal-observer detectability index could be improved more than a factor of two compared to absorption imaging in the phantom study. In the clinical case, an improvement close to 80% was predicted for an average glandularity breast, and a factor of eight for dense breast tissue. Another 70% was found to be within reach for an optimized system.

    Conclusions: Contrast-enhanced spectral mammography is feasible and beneficial with the current system, and there is room for additional improvements.
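The bin combination described in this abstract (weighting the two energy-resolved images so that the anatomical background approximately cancels) can be illustrated with a log-subtraction sketch. The attenuation coefficients, bin labels, and thicknesses below are hypothetical, not values from the paper:

```python
import math

# Hypothetical linear attenuation coefficients (1/cm) in two energy bins;
# the high bin is assumed to lie above the iodine K-edge.
MU_TISSUE = {"lo": 0.80, "hi": 0.45}
MU_IODINE = {"lo": 2.00, "hi": 6.00}

def bin_signal(t_tissue, t_iodine, bin_, i0=1e5):
    """Noise-free Beer-Lambert transmission in one energy bin (thicknesses in cm)."""
    return i0 * math.exp(-MU_TISSUE[bin_] * t_tissue - MU_IODINE[bin_] * t_iodine)

def weighted_log_subtraction(t_tissue, t_iodine):
    """Combine the bins so that tissue-thickness variation cancels exactly."""
    w = MU_TISSUE["hi"] / MU_TISSUE["lo"]   # weight that nulls the tissue term
    return (-math.log(bin_signal(t_tissue, t_iodine, "hi"))
            + w * math.log(bin_signal(t_tissue, t_iodine, "lo")))

# Tissue thickness varies 4 cm -> 6 cm, iodine fixed: combined signal unchanged.
same_iodine = abs(weighted_log_subtraction(4.0, 0.01)
                  - weighted_log_subtraction(6.0, 0.01)) < 1e-9
print(same_iodine)  # → True: the anatomical background cancels
```

In a real system the weight is chosen from measured spectra rather than single attenuation values, and quantum noise makes the cancellation a minimization of anatomical noise rather than an exact null, as the abstract notes.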

  • 25.
    Fredriksson, Albin
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Optimization and Systems Theory.
    A characterization of robust radiation therapy treatment planning methods-from expected value to worst case optimization2012In: Medical physics (Lancaster), ISSN 0094-2405, Vol. 39, no 8, p. 5169-5181Article in journal (Refereed)
    Abstract [en]

    Purpose: To characterize a class of optimization formulations used to handle systematic and random errors in radiation therapy, and to study the differences between the methods within this class. Methods: The class of robust methods that can be formulated as minimax stochastic programs is studied. This class generalizes many previously used methods, ranging between optimization of the expected and the worst case objective value. The robust methods are used to plan intensity-modulated proton therapy (IMPT) treatments for a case subject to systematic setup and range errors, random setup errors with and without uncertain probability distribution, and combinations thereof. As reference, plans resulting from a conventional method that uses a margin to account for errors are shown. Results: For all types of errors, target coverage robustness increased with the conservativeness of the method. For systematic errors, best case organ at risk (OAR) doses increased and worst case doses decreased with the conservativeness. Accounting for random errors of fixed probability distribution resulted in heterogeneous dose. The heterogeneities were reduced when uncertainty in the probability distribution was accounted for. Doing so, the OAR doses decreased with the conservativeness. All robust methods studied resulted in more robust target coverage and lower OAR doses than the conventional method. Conclusions: Accounting for uncertainties is essential to ensure plan quality in complex radiation therapy such as IMPT. The utilization of more information than conventional in the optimization can lead to robust target coverage and low OAR doses. Increased target coverage robustness can be achieved by more conservative methods.
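The spectrum from expected-value to worst-case (minimax) optimization characterized in this abstract can be illustrated with a toy one-dimensional setup problem. The penalty function, scenario set, and probabilities below are invented for illustration only:

```python
# Toy robust planning problem: choose a field half-width x that covers a
# target of half-width 1.0 cm under setup errors s, with a mild cost on x
# standing in for healthy-tissue dose.
SCENARIOS = [-0.5, 0.0, 0.5]   # possible setup errors (cm), hypothetical
PROBS = [0.1, 0.8, 0.1]        # their assumed probabilities

def penalty(x, s):
    """Heavy cost for missing the shifted target, mild cost for field width."""
    underdose = max(0.0, (1.0 + abs(s)) - x)
    return 10.0 * underdose + 3.0 * x

def worst_case(x):
    return max(penalty(x, s) for s in SCENARIOS)

def expected(x):
    return sum(p * penalty(x, s) for p, s in zip(PROBS, SCENARIOS))

grid = [round(0.8 + 0.01 * k, 2) for k in range(121)]   # candidate widths
x_minimax = min(grid, key=worst_case)
x_expect = min(grid, key=expected)
print(x_minimax, x_expect)  # → 1.5 1.0
```

The minimax plan widens the field until even the extreme scenarios are covered, while the expected-value plan only covers the likely nominal scenario, mirroring the abstract's observation that target coverage robustness increases with the conservativeness of the method.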

  • 26. Fredriksson, Albin
    et al.
    Forsgren, Anders
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Optimization and Systems Theory.
    Hardemark, Bjorn
    Maximizing the probability of satisfying the clinical goals in radiation therapy treatment planning under setup uncertainty2015In: Medical physics (Lancaster), ISSN 0094-2405, Vol. 42, no 7, p. 3992-3999Article in journal (Refereed)
    Abstract [en]

    Purpose: This paper introduces a method that maximizes the probability of satisfying the clinical goals in intensity-modulated radiation therapy treatments subject to setup uncertainty. Methods: The authors perform robust optimization in which the clinical goals are constrained to be satisfied whenever the setup error falls within an uncertainty set. The shape of the uncertainty set is included as a variable in the optimization. The goal of the optimization is to modify the shape of the uncertainty set in order to maximize the probability that the setup error will fall within the modified set. Because the constraints enforce the clinical goals to be satisfied under all setup errors within the uncertainty set, this is equivalent to maximizing the probability of satisfying the clinical goals. This type of robust optimization is studied with respect to photon and proton therapy applied to a prostate case and compared to robust optimization using an a priori defined uncertainty set. Results: Slight reductions of the uncertainty sets resulted in plans that satisfied a larger number of clinical goals than optimization with respect to a priori defined uncertainty sets, both within the reduced uncertainty sets and within the a priori, nonreduced, uncertainty sets. For the prostate case, the plans taking reduced uncertainty sets into account satisfied 1.4 (photons) and 1.5 (protons) times as many clinical goals over the scenarios as the method taking a priori uncertainty sets into account. Conclusions: Reducing the uncertainty sets enabled the optimization to find better solutions with respect to the errors within the reduced as well as the nonreduced uncertainty sets and thereby achieve higher probability of satisfying the clinical goals. This shows that asking for a little less in the optimization sometimes leads to better overall plan quality.

  • 27.
    Fredriksson, Albin
    et al.
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Optimization and Systems Theory.
    Forsgren, Anders
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Optimization and Systems Theory.
    Hardemark, Björn
    Minimax optimization for handling range and setup uncertainties in proton therapy2011In: Medical physics (Lancaster), ISSN 0094-2405, Vol. 38, no 3, p. 1672-1684Article in journal (Refereed)
    Abstract [en]

    Purpose: Intensity modulated proton therapy (IMPT) is sensitive to errors, mainly due to high stopping power dependency and steep beam dose gradients. Conventional margins are often insufficient to ensure robustness of treatment plans. In this article, a method is developed that takes the uncertainties into account during the plan optimization. Methods: Dose contributions for a number of range and setup errors are calculated and a minimax optimization is performed. The minimax optimization aims at minimizing the penalty of the worst case scenario. Any optimization function from conventional treatment planning can be utilized by the method. By considering only scenarios that are physically realizable, the unnecessary conservativeness of other robust optimization methods is avoided. Minimax optimization is related to stochastic programming by the more general minimax stochastic programming formulation, which enables accounting for uncertainties in the probability distributions of the errors. Results: The minimax optimization method is applied to a lung case, a paraspinal case with titanium implants, and a prostate case. It is compared to conventional methods that use margins, single field uniform dose (SFUD), and material override (MO) to handle the uncertainties. For the lung case, the minimax method and the SFUD with MO method yield robust target coverage. The minimax method yields better sparing of the lung than the other methods. For the paraspinal case, the minimax method yields more robust target coverage and better sparing of the spinal cord than the other methods. For the prostate case, the minimax method and the SFUD method yield robust target coverage and the minimax method yields better sparing of the rectum than the other methods. Conclusions: Minimax optimization provides robust target coverage without sacrificing the sparing of healthy tissues, even in the presence of low density lung tissue and high density titanium implants. 
Conventional methods using margins, SFUD, and MO do not utilize the full potential of IMPT and deliver unnecessarily high doses to healthy tissues.

  • 28.
    Gorniak, Richard J.
    et al.
    New York University.
    Farrell, Edward J.
    IBM Thomas J. Watson Research Center.
    Kramer, Elissa L.
    New York University, Department of Radiology.
    Maguire Jr., Gerald Q.
    Columbia University, Department of Computer Science.
    Noz, Marilyn E.
    New York University, Department of Radiology.
    Reddy, David P.
    Accuracy of an Interactive Registration Technique Applied to Thallium-201 SPECT and MR Brain Images1997In: Medical physics (Lancaster), ISSN 0094-2405, Vol. 24, no 8, p. 1354-Article in journal (Refereed)
  • 29.
    Gorniak, Richard J.
    et al.
    New York University.
    Kramer, Elissa L.
    New York University, Department of Radiology.
    Maguire Jr., Gerald Q.
    KTH, School of Information and Communication Technology (ICT), Microelectronics and Information Technology, IMIT.
    Noz, Marilyn E.
    New York University, Department of Radiology.
    Schettino, C. J.
    New York University, Department of Radiology.
    Zeleznik, Michael P.
    Saya Systems Inc., Salt Lake City, UT, USA.
    Evaluation of a Semi-Automatic 3D Fusion Technique Applied to Thallium-201 SPECT and MRI Brain/Frame Volume Data Sets2001In: Medical physics (Lancaster), ISSN 0094-2405, Vol. 28, no 8, p. 1190-Article in journal (Refereed)
  • 30.
    Grönberg, Fredrik
    et al.
    KTH, School of Engineering Sciences (SCI), Physics, Physics of Medical Imaging.
    Danielsson, Mats
    KTH, School of Engineering Sciences (SCI), Physics, Physics of Medical Imaging.
    Sjölin, Martin
    KTH, School of Engineering Sciences (SCI), Physics, Physics of Medical Imaging.
    Count statistics of nonparalyzable photon-counting detectors with nonzero pulse length2018In: Medical physics (Lancaster), ISSN 0094-2405, Vol. 45, no 8, p. 3800-3811Article in journal (Refereed)
    Abstract [en]

    Purpose: Photon-counting detectors are expected to be the next big step in the development of medical computed tomography (CT). Accurate modeling of the behavior of photon-counting detectors in both low and high count rate regimes is important for accurate image reconstruction and detector performance evaluations. The commonly used ideal nonparalyzable (delta pulse) model is built on crude assumptions that make it unsuitable for predicting the behavior of photon-counting detectors at high count rates. The aim of this work is to present an analytical count statistics model that better describes the behavior of photon-counting detectors with nonzero pulse length. Methods: An analytical statistical count distribution model for nonparalyzable detectors with nonzero pulse length is derived using tools from statistical analysis. To validate the model, a nonparalyzable photon-counting detector is simulated using Monte Carlo methods and compared against the model. Image performance metrics are computed using the Fisher information metric, and a comparison between the proposed model, approximations of the proposed model, and the ideal nonparalyzable model is presented and analyzed. Results: It is shown that the presented model agrees well with the results from the Monte Carlo simulation and is stable for varying x-ray beam qualities. It is also shown that a simple Gaussian approximation of the distribution can be used to accurately model the behavior and performance of nonparalyzable detectors with nonzero pulse length. Furthermore, the comparison of performance metrics shows that the proposed model predicts a very different behavior than the ideal nonparalyzable detector model, suggesting that the proposed model can fill an important gap in the understanding of pileup effects. Conclusions: An analytical model for the count statistics of a nonparalyzable photon-counting detector with nonzero pulse length is presented. The model agrees well with results obtained from Monte Carlo simulations and can be used to improve, speed up, and simplify the modeling of photon-counting detectors.
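For context, the ideal nonparalyzable (delta pulse) baseline that this abstract argues is too crude at high rates can be sketched with a short Monte Carlo and checked against the classical formula m = n / (1 + n·τ). The rate and dead-time values are arbitrary:

```python
import random

def simulate_nonparalyzable(rate, dead_time, duration, seed=1):
    """Count Poisson arrivals under an ideal nonparalyzable dead time: an
    event is recorded only if it arrives at least `dead_time` after the
    previously *recorded* event. Returns the recorded count rate."""
    rng = random.Random(seed)
    t, last_recorded, counts = 0.0, -float("inf"), 0
    while True:
        t += rng.expovariate(rate)   # exponential interarrival times
        if t > duration:
            break
        if t - last_recorded >= dead_time:
            counts += 1
            last_recorded = t
    return counts / duration

true_rate, tau = 5.0, 0.1
observed = simulate_nonparalyzable(true_rate, tau, duration=2000.0)
ideal = true_rate / (1.0 + true_rate * tau)   # classical nonparalyzable formula
print(round(observed, 2), round(ideal, 2))    # simulated vs analytical rate
```

The delta-pulse model above assumes the pulse occupies no time of its own; the paper's point is that a nonzero pulse length changes the count statistics, so this formula (and simulation) should be read as the baseline being improved on, not as the proposed model.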

  • 31. Haider, J. M.
    et al.
    Kramer, Elissa L.
    New York University, Department of Radiology.
    Maguire Jr., Gerald Q.
    KTH, School of Information and Communication Technology (ICT), Microelectronics and Information Technology, IMIT.
    Millan, Evelyn
    New York University.
    Noz, Marilyn E.
    New York University, Department of Radiology.
    Orbach, D. B.
    Zeleznik, Michael P.
    Saya Systems Inc., Salt Lake City, UT, USA.
    Problems with two automatic methods for SPECT-SPECT and SPECT-MRI Volume Matching2002In: Medical physics (Lancaster), ISSN 0094-2405, Vol. 29, no 8, p. 1944-Article in journal (Refereed)
  • 32.
    Horii, Steven C.
    et al.
    New York University.
    Noz, Marilyn E.
    New York University.
    Maguire Jr., Gerald Q.
    Schimpf, James H.
    New York University.
    Zeleznik, Michael P.
    Hitchner, Lewis E.
    University of Utah.
    Baxter, Brent S.
    University of Utah.
    A Unified Digital Image Distribution and Processing System1983In: Medical physics (Lancaster), ISSN 0094-2405, Vol. 10, no 1, p. 129-129Article in journal (Refereed)
  • 33.
    Horii, Steven C.
    et al.
    New York University.
    Noz, Marilyn E.
    New York University, Department of Radiology.
    Maguire Jr., Gerald Q.
    Columbia University, Computer Science.
    Schimpf, James H.
    New York University.
    Zeleznik, Michael P.
    New York University.
    Hitchner, Lewis E.
    University of Utah.
    Baxter, Brent S.
    University of Utah.
    A Unified Digital Image Distribution and Processing System1982In: Medical physics (Lancaster), ISSN 0094-2405, Vol. 10, no 1, p. 129-Article in journal (Refereed)
  • 34.
    Hormozan, Yashar
    et al.
    KTH, School of Information and Communication Technology (ICT), Materials- and Nano Physics, Material Physics, MF.
    Sychugov, Ilya
    KTH, School of Information and Communication Technology (ICT), Materials- and Nano Physics, Material Physics, MF.
    Linnros, Jan
    KTH, School of Information and Communication Technology (ICT), Materials- and Nano Physics, Material Physics, MF.
    High-resolution x-ray imaging using a structured scintillator2016In: Medical physics (Lancaster), ISSN 0094-2405, Vol. 43, no 2, p. 696-701Article in journal (Refereed)
    Abstract [en]

    Purpose: In this study, the authors introduce a new generation of finely structured scintillators with a very high spatial resolution (a few micrometers) compared to conventional scintillators, yet maintaining a thick absorbing layer for improved detectivity. Methods: Their concept is based on a 2D array of high-aspect-ratio pores, fabricated on silicon by ICP etching with spacings (pitches) of a few micrometers, followed by oxidation of the pore walls. The pores were subsequently filled by melting of powdered CsI(Tl) as the scintillating agent. In order to couple the secondary photons emitted from the back of the scintillator array to a CCD device with a larger pixel size than the pore pitch, an open optical microscope with adjustable magnification was designed and implemented. By imaging a sharp edge, the authors were able to calculate the modulation transfer function (MTF) of this finely structured scintillator. Results: The x-ray images of individually resolved pores suggest that they have been almost uniformly filled, and the MTF measurements show the feasibility of imaging with a spatial resolution of a few microns, as set by the scintillator pore size. Compared to existing techniques utilizing CsI needles as a structured scintillator, their results imply an almost sevenfold improvement in resolution. Finally, high-resolution images taken with their detector are presented. Conclusions: The presented work successfully demonstrates the functionality of their detector concept for high-resolution imaging, and further fabrication developments are likely to result in higher quantum efficiencies.
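The edge-based MTF measurement mentioned in this abstract follows a standard recipe: differentiate the edge-spread function (ESF) into a line-spread function (LSF), then take the normalized magnitude of its Fourier transform. A minimal sketch with a hypothetical edge profile (not the paper's data):

```python
import cmath

def mtf_from_edge(esf, pitch):
    """Edge-spread function -> line-spread function (finite difference) ->
    MTF (normalized DFT magnitude). Returns frequencies and MTF values."""
    lsf = [b - a for a, b in zip(esf, esf[1:])]
    n = len(lsf)
    freqs, mtf = [], []
    for k in range(n // 2 + 1):                       # up to Nyquist
        val = sum(l * cmath.exp(-2j * cmath.pi * k * i / n)
                  for i, l in enumerate(lsf))
        freqs.append(k / (n * pitch))                 # cycles per unit length
        mtf.append(abs(val))
    mtf0 = mtf[0]
    return freqs, [m / mtf0 for m in mtf]             # normalize to MTF(0) = 1

# Hypothetical edge profile sampled at a 5-micron pitch (arbitrary units).
esf = [0, 0, 0, 1, 4, 9, 15, 19, 20, 20, 20]
freqs, mtf = mtf_from_edge(esf, pitch=5e-3)           # pitch in mm
```

Real measurements typically use an angled (slanted) edge to oversample the ESF beyond the pixel pitch, and apply noise smoothing before differentiation; both refinements are omitted here.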

  • 35.
    Hsieh, Scott S.
    et al.
    Univ Calif Los Angeles, Dept Radiol Sci, Los Angeles, CA 90024 USA..
    Sjölin, Martin
    KTH, School of Engineering Sciences (SCI), Physics, Physics of Medical Imaging.
    Digital count summing vs analog charge summing for photon counting detectors: A performance simulation study2018In: Medical physics (Lancaster), ISSN 0094-2405, Vol. 45, no 9, p. 4085-4093Article in journal (Refereed)
    Abstract [en]

    Purpose: Charge sharing is a significant problem for CdTe-based photon counting detectors (PCDs) and can cause high-energy photons to be misclassified as one or more low-energy events. Charge sharing is especially problematic in PCDs for CT because the high flux necessitates small pixels, which increase the magnitude of charge sharing. Analog charge summing (ACS) is a powerful solution to reduce spectral distortion arising from charge sharing but may be difficult to implement. We investigate correction of the signal after digitization by the comparator (digital count summing), which is only able to correct a subset of charge sharing events but may have implementation advantages. We compare and quantify the relative performance of digital and analog charge summing in simulations. Methods: Transport of photons in CdTe was modeled using Monte Carlo simulations. Energy deposited in the CdTe substrate was converted to electrical charges of a predetermined shape, and all charge within a detector pixel was assumed to be perfectly collected. In ACS, the maximum charge received over any 2×2 block of pixels was grouped together prior to digitization. In digital count summing (DCS), the charge was digitized in each pixel, and subsequently, adjacent pixels that detected events grouped their charge to record a single, higher-energy event. All simulations were performed at the limit of low flux (no pileup). The default tube voltage was 120 kVp, the object thickness was 20 cm of water, the pixel pitch was 250 μm, and the charge cloud was modeled as a Gaussian with σ = 40 μm. Variation of these parameters was examined in a sensitivity analysis. Results: Detectors that used no correction, DCS, and ACS misclassified 51%, 39%, and 15% of incident photons, respectively. For iodine basis material imaging, DCS exhibited 100% greater dose efficiency than the uncorrected detector, and ACS exhibited an additional 111% greater dose efficiency than DCS. For a nonspectral task, the dose-efficiency improvement, as estimated by the improvement in zero-frequency detective quantum efficiency DQE(0), was 10% for DCS compared to the uncorrected detector and 10% for ACS compared to DCS. A sensitivity analysis showed that DCS generally achieved half the benefit of ACS over a range of conditions, although the benefit was markedly less if the charge cloud was instead modeled as a small sphere. Conclusions: Summing of counts after digitization may be a simpler alternative to summing of charge prior to digitization due to the relative complexity of analog circuit design. Over most conditions studied, it provides roughly half the benefit of ACS and may offer certain implementation advantages.
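The DCS/ACS distinction described in the Methods can be illustrated with a deliberately simplified 1-D toy model (not the paper's Monte Carlo): a Gaussian charge cloud split between a pixel and its neighbor, with misclassification compared across no correction, DCS, and ACS. The pixel pitch and cloud σ follow the abstract's defaults; the photon energy, comparator threshold, and classification window are invented illustrative values.

```python
import math
import random

PIXEL = 250.0   # pixel pitch, μm (abstract's default)
SIGMA = 40.0    # Gaussian charge-cloud sigma, μm (abstract's default)
E_IN = 60.0     # incident photon energy, keV (illustrative)
THRESH = 6.0    # per-pixel counting threshold, keV (assumed)
TOL = 2.0       # window for "correctly classified", keV (assumed)

def _phi(z):
    """Standard normal CDF."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def simulate(n_photons, seed=1):
    """Misclassification fraction for each read-out scheme."""
    rng = random.Random(seed)
    miss = {"none": 0, "dcs": 0, "acs": 0}
    for _ in range(n_photons):
        x = rng.uniform(0.0, PIXEL)                # interaction point
        leak = _phi(-min(x, PIXEL - x) / SIGMA)    # charge crossing nearer edge
        e0, e1 = E_IN * (1.0 - leak), E_IN * leak  # hit pixel / neighbor
        # no correction: each pixel above threshold records its own event
        counts = [e for e in (e0, e1) if e > THRESH]
        ok_none = len(counts) == 1 and abs(counts[0] - E_IN) < TOL
        # DCS: sum only events that individually crossed the threshold,
        # so sub-threshold charge in the neighbor stays lost
        ok_dcs = bool(counts) and abs(sum(counts) - E_IN) < TOL
        # ACS: sum charge before digitization, then threshold once
        total = e0 + e1
        ok_acs = total > THRESH and abs(total - E_IN) < TOL
        for key, ok in (("none", ok_none), ("dcs", ok_dcs), ("acs", ok_acs)):
            miss[key] += 0 if ok else 1
    return {k: v / n_photons for k, v in miss.items()}
```

Because this toy model assumes perfect charge collection, ACS always recovers the full photon energy; DCS fails exactly when the neighbor's share stays below the comparator threshold, which is the subset of events the abstract says DCS cannot correct.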

  • 36. Isaksson, Markus
    et al.
    Jaldén, Joakim
    KTH, School of Electrical Engineering (EES), Signal Processing.
    Murphy, M. J.
    On using an adaptive neural network to predict lung tumor motion during respiration for radiotherapy applications2005In: Medical physics (Lancaster), ISSN 0094-2405, Vol. 32, no 12, p. 3801-3809Article in journal (Refereed)
  • 37.
    Kastrinos, F.
    et al.
    New York University.
    Maguire Jr., Gerald Q.
    Columbia University, Computer Science.
    Noz, Marilyn E.
    New York University, Department of Radiology.
    Statistical Evaluation of SPECT Scans vs. Structure/Function Image Fusion1990In: Medical physics (Lancaster), ISSN 0094-2405, Vol. 17, p. 42-Article in journal (Refereed)
  • 38.
    Larsson, Daniel H.
    et al.
    KTH, School of Engineering Sciences (SCI), Applied Physics, Biomedical and X-ray Physics.
    Lundström, Ulf
    KTH, School of Engineering Sciences (SCI), Applied Physics, Biomedical and X-ray Physics.
    Westermark, Ulrica K.
    Arsenian Henriksson, Marie
    Burvall, Anna
    KTH, School of Engineering Sciences (SCI), Applied Physics, Biomedical and X-ray Physics.
    Hertz, Hans M.
    KTH, School of Engineering Sciences (SCI), Applied Physics, Biomedical and X-ray Physics.
    First application of liquid-metal-jet sources for small-animal imaging: High-resolution CT and phase-contrast tumor demarcation2013In: Medical physics (Lancaster), ISSN 0094-2405, Vol. 40, no 2, p. 021909-Article in journal (Refereed)
    Abstract [en]

    Purpose: Small-animal studies require images with high spatial resolution and high contrast due to the small scale of the structures. X-ray imaging systems for small animals are often limited by the microfocus source. Here, the authors investigate the applicability of liquid-metal-jet x-ray sources for such high-resolution small-animal imaging, both in tomography based on absorption and in soft-tissue tumor imaging based on in-line phase contrast. Methods: The experimental arrangement consists of a liquid-metal-jet x-ray source, the small-animal object on a rotating stage, and an imaging detector. The source-to-object and object-to-detector distances are adjusted for the preferred contrast mechanism. Two different liquid-metal-jet sources are used, one circulating a Ga/In/Sn alloy and the other an In/Ga alloy for higher penetration through thick tissue. Both sources are operated at 40-50 W electron-beam power with ~7 μm x-ray spots, providing high spatial resolution in absorption imaging and high spatial coherence for the phase-contrast imaging. Results: High-resolution absorption imaging is demonstrated on mice with CT, showing 50 μm bone details in the reconstructed slices. High-resolution phase-contrast soft-tissue imaging shows clear demarcation of mm-sized tumors at much lower dose than is required in absorption. Conclusions: This is the first application of liquid-metal-jet x-ray sources for whole-body small-animal x-ray imaging. In absorption, the method allows high-resolution tomographic skeletal imaging with potential for significantly shorter exposure times due to the power scalability of liquid-metal-jet sources. In phase contrast, the authors use a simple in-line arrangement to show distinct tumor demarcation of few-mm-sized tumors. This is, to their knowledge, the first small-animal tumor visualization with a laboratory phase-contrast system.
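The distance adjustment mentioned in the Methods trades magnification against source blur through two elementary geometric relations; a minimal sketch, where the ~7 μm spot from the abstract is used only as an example value and the distances are arbitrary:

```python
def magnification(r1_mm, r2_mm):
    """Geometric magnification M = (R1 + R2) / R1 for source-to-object
    distance R1 and object-to-detector distance R2."""
    return (r1_mm + r2_mm) / r1_mm

def source_blur_um(source_size_um, r1_mm, r2_mm):
    """Penumbral blur at the detector from a finite source size s:
    blur = s * R2 / R1 (grows as the detector moves away)."""
    return source_size_um * r2_mm / r1_mm
```

For example, at R1 = R2 = 200 mm a 7 μm spot gives 2× magnification with 7 μm of source blur at the detector, i.e. 3.5 μm referred back to the object.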

  • 39.
    Larsson, Jakob C.
    et al.
    KTH, School of Engineering Sciences (SCI), Applied Physics, Biomedical and X-ray Physics.
    Lundström, Ulf
    KTH, School of Engineering Sciences (SCI), Applied Physics, Biomedical and X-ray Physics.
    Hertz, Hans M.
    KTH, School of Engineering Sciences (SCI), Applied Physics, Biomedical and X-ray Physics.
    Characterization of scintillator-based detectors for few-ten-keV high-spatial-resolution x-ray imaging2016In: Medical physics (Lancaster), ISSN 0094-2405, Vol. 43, no 6Article in journal (Refereed)
    Abstract [en]

    Purpose: High-spatial-resolution x-ray imaging in the few-ten-keV range is becoming increasingly important in several applications, such as small-animal imaging and phase-contrast imaging. The detector properties critically influence the quality of such imaging. Here the authors present a quantitative comparison of scintillator-based detectors for this energy range and at high spatial frequencies. Methods: The authors determine the modulation transfer function, noise power spectrum (NPS), and detective quantum efficiency for Gadox, needle CsI, and structured CsI scintillators of different thicknesses and at different photon energies. An extended analysis of the NPS allows for direct measurements of the scintillator effective absorption efficiency and effective light yield, as well as providing an alternative method to assess the underlying factors behind the detector properties. Results: There is a substantial difference in performance between the scintillators depending on the imaging task, but in general the CsI-based scintillators perform better than the Gadox scintillators. At low energies (16 keV), a thin needle CsI scintillator has the best performance at all frequencies. At higher energies (28-38 keV), the thicker needle CsI scintillators and the structured CsI scintillator all have very good performance. The needle CsI scintillators have higher absorption efficiencies, but the structured CsI scintillator has higher resolution. Conclusions: The choice of scintillator is greatly dependent on the imaging task. The presented comparison and methodology will assist imaging scientists in optimizing their high-resolution few-ten-keV imaging systems for best performance.
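The three figures of merit measured here combine in the standard way, DQE(f) = MTF(f)² / (q · NNPS(f)); a minimal sketch of that combination, assuming the NPS has already been normalized by the squared mean signal (NNPS) and q is the incident photon fluence:

```python
def dqe(mtf, nnps, fluence_per_mm2):
    """DQE(f) = MTF(f)^2 / (q * NNPS(f)).

    mtf, nnps        -- sequences sampled at the same frequencies f
    fluence_per_mm2  -- incident photon fluence q (photons/mm^2)
    """
    return [m * m / (fluence_per_mm2 * n) for m, n in zip(mtf, nnps)]
```

An ideal photon-counting detector has NNPS(f) = 1/q at every frequency, so DQE(f) collapses to MTF(f)², a handy sanity check on measured data.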

  • 40.
    Maguire Jr., Gerald Q.
    et al.
    Columbia University, Department of Computer Science.
    Horii, Steven C.
    New York University.
    Schimpf, James H.
    New York University.
    Noz, Marilyn E.
    New York University.
    Zeleznik, Michael P.
    Baxter, Brent S.
    University of Utah.
    Hitchner, Lewis E.
    University of Utah.
    A Digital Radiology Department1982In: Medical physics (Lancaster), ISSN 0094-2405, Vol. 9, no 4, p. 636-Article in journal (Refereed)
  • 41.
    Maguire Jr., Gerald Q.
    et al.
    Columbia University.
    Noz, Marilyn E.
    New York University.
    Critical Analysis of and Software Implementation Strategies for the ACR-NEMA Standard1986In: Medical physics (Lancaster), ISSN 0094-2405, Vol. 13, no 4, p. 572-572Article in journal (Refereed)
  • 42.
    Maguire Jr., Gerald Q.
    et al.
    Columbia University, Department of Computer Science.
    Noz, Marilyn E.
    New York University.
    Image formats: five years after the {AAPM} standard for digital image interchange1989In: Medical physics (Lancaster), ISSN 0094-2405, Vol. 16, no 5, p. 818-823Article in journal (Refereed)
    Abstract [en]

    The publication of AAPM Report No. 10 was the first attempt to standardize image formats in the medical imaging community. Since then, three other groups have formed (CART, the Scandinavian collaboration for Computer Assisted Radiation Therapy treatment planning; ACR–NEMA, a collaboration whose purpose is to formulate a standard digital interface to medical imaging equipment; and the COST B2 Nuclear Medicine Project, a European collaboration whose purpose is to define a format for digital image exchange in Nuclear Medicine). The AAPM format uses key-value pairs in plain text to keep track of all information associated with a particular image. The radiation oncology community in the U.S. has been defining key-value pairs for use with CT, nuclear medicine, and magnetic resonance (MR) images. The COST B2 Nuclear Medicine Project has also adopted this format and, together with the Australian/New Zealand Society of Nuclear Medicine Technical Standards Sub-Committee (which has likewise adopted it), has defined an initial set of key-value pairs for Nuclear Medicine images. Additionally, both ACR–NEMA and CART have been defining fields for use with the same types of images. The CART collaboration has introduced a database which is available electronically but is maintained by a group of individuals. ACR–NEMA operates through committee meetings. The COST B2 Nuclear Medicine Project operates through electronic (and, where necessary, postal) mail. To ensure a consistent set of field names in such a rapidly developing arena requires the use of a server rather than a committee. Via a server, a person would inquire whether a particular field had been defined. If so, the defined name would be returned. If not, the person would be given the opportunity to define the field. The next inquiry would return the previously defined field. As new modalities are added to the imaging repertoire, it would be easier and faster to ensure the consistency and adequacy of the database; e.g., in the present version of its standard, the ACR–NEMA fields are adequate for CT, but there are very few fields suitable for describing the parameters associated with nuclear medicine and MR images.
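The query-then-define server behavior the abstract proposes can be sketched in a few lines; the class name, method names, and example field names are invented for illustration:

```python
class FieldRegistry:
    """Toy model of the proposed field-name server: look a concept up
    first, and only the first definition for a concept ever sticks, so
    every later inquirer receives the same, consistent field name."""

    def __init__(self):
        self._fields = {}

    def lookup(self, concept):
        """Return the defined field name, or None if not yet defined."""
        return self._fields.get(concept)

    def define(self, concept, proposed_name):
        """Define the field only if absent; always return the canonical name."""
        return self._fields.setdefault(concept, proposed_name)
```

A second `define` for the same concept silently returns the first name instead of creating a duplicate, which is the consistency guarantee a committee process cannot give at this pace.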

  • 43.
    Maguire Jr., Gerald Q.
    et al.
    Columbia University, Department of Computer Science.
    Schimpf, James H.
    New York University.
    Horii, Steven C.
    New York University.
    Noz, Marilyn E.
    New York University.
    A Unified Digital Image Display and Processing System1981In: Medical physics (Lancaster), ISSN 0094-2405, Vol. 8, no 5, p. 740-Article in journal (Refereed)
  • 44.
    Maguire Jr., Gerald Q.
    et al.
    KTH, School of Information and Communication Technology (ICT), Communication Systems, CoS.
    Schimpf, James H.
    New York University.
    Horii, Steven C.
    New York University.
    Noz, Marilyn E.
    New York University, Department of Radiology.
    Application of Image Chain Analysis Techniques to Gamma Camera MTF Measurements1979In: Medical physics (Lancaster), ISSN 0094-2405, Vol. 6, p. 335-Article in journal (Refereed)
  • 45.
    Nillius, Peter
    et al.
    KTH, School of Engineering Sciences (SCI), Physics, Physics of Medical Imaging.
    Klamra, Wlodek
    KTH, School of Engineering Sciences (SCI), Physics, Particle and Astroparticle Physics.
    Sibczynski, Pawel
    Sharma, Diksha
    Danielsson, Mats
    KTH, School of Engineering Sciences (SCI), Physics, Physics of Medical Imaging.
    Badano, Aldo
    Light output measurements and computational models of microcolumnar CsI scintillators for x-ray imaging2015In: Medical physics (Lancaster), ISSN 0094-2405, Vol. 42, no 2, p. 600-605Article in journal (Refereed)
    Abstract [en]

    Purpose: The authors report on measurements of light output and spatial resolution of microcolumnar CsI:Tl scintillator detectors for x-ray imaging. In addition, the authors discuss the results of simulations aimed at analyzing the results of synchrotron and sealed-source exposures with respect to the contributions of light transport to the total light output. Methods: The authors measured light output from a 490-μm CsI:Tl scintillator screen using two setups. First, the authors used a photomultiplier tube (PMT) to measure the response of the scintillator to sealed-source exposures. Second, the authors performed imaging experiments with a 27-keV monoenergetic synchrotron beam and a slit to calculate the total signal generated in terms of optical photons per keV. The results of both methods are compared to simulations obtained with hybrid MANTIS, a coupled x-ray, electron, and optical photon Monte Carlo transport package. The authors report line response (LR) and light output for a range of linear absorption coefficients and describe a model that fits both the light output and the blur measurements. Comparing the experimental results with the simulations, the authors obtained an estimate of the absorption coefficient for the model that provides good agreement with the experimentally measured LR. Finally, the authors report light output simulation results and their dependence on scintillator thickness and reflectivity of the backing surface. Results: The slit images from the synchrotron were analyzed to obtain a total light output of 48 keV⁻¹, while measurements using the fast PMT instrument setup and sealed sources gave a light output of 28 keV⁻¹. The authors attribute the difference in light output estimates between the two methods to the difference in time constants between the camera and PMT measurements. Simulation structures were designed to match the light output measured with the camera while providing good agreement with the measured LR, resulting in a bulk absorption coefficient of 5×10⁻⁵ μm⁻¹. Conclusions: The combination of experimental measurements for microcolumnar CsI:Tl scintillators using sealed-source and synchrotron exposures with results obtained via simulation suggests that the time course of the emission might play a role in experimental estimates. The procedure yielded an experimentally derived linear absorption coefficient for microcolumnar CsI:Tl of 5×10⁻⁵ μm⁻¹. To the authors' knowledge, this is the first time this parameter has been validated against experimental observations. The measurements also offer insight into the relative role of optical transport on the effective optical yield of the scintillator with microcolumnar structure.
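The fitted bulk absorption coefficient can be read as a Beer-Lambert survival factor for optical photons crossing the screen; a deliberately simplified 1-D sketch (the paper's hybrid MANTIS model tracks full 3-D optical transport, column walls, and the backing reflectivity):

```python
import math

def surviving_fraction(path_um, alpha_per_um=5e-5):
    """Fraction of optical photons surviving a straight path of the given
    length under Beer-Lambert bulk absorption; the default alpha is the
    5e-5 μm^-1 value fitted in the paper."""
    return math.exp(-alpha_per_um * path_um)
```

For a photon traversing the full 490 μm screen thickness this gives roughly 98% survival, i.e. bulk absorption alone removes only a few percent of the light; the measured camera/PMT discrepancy must come from elsewhere, consistent with the time-constant explanation above.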

  • 46.
    Nordström, Marcus
    et al.
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematical Statistics.
    Hult, Henrik
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematical Statistics.
    Maki, Atsuto
    KTH, School of Electrical Engineering and Computer Science (EECS), Robotics, perception and learning, RPL.
    Löfman, Fredrik
    Raysearch Labs, Stockholm, Sweden..
    Pareto Dose Prediction Using Fully Convolutional Networks Operating in 3D2018In: Medical physics (Lancaster), ISSN 0094-2405, Vol. 45, no 6, p. E176-E176Article in journal (Other academic)
  • 47.
    Nordström, Marcus
    et al.
    KTH. RaySearch Labs, Stockholm, Sweden..
    Soderberg, J.
    RaySearch Labs, Stockholm, Sweden..
    Shusharina, N.
    Massachusetts Gen Hosp, Boston, MA 02114 USA..
    Edmunds, D.
    Massachusetts Gen Hosp, Boston, MA 02114 USA..
    Lofman, F.
    RaySearch Labs, Stockholm, Sweden..
    Hult, Henrik
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematical Statistics.
    Maki, Atsuto
    KTH, School of Electrical Engineering and Computer Science (EECS), Robotics, Perception and Learning, RPL.
    Bortfeld, T.
    Massachusetts Gen Hosp, Boston, MA 02114 USA..
    Interactive Deep Learning-Based Delineation of Gross Tumor Volume for Postoperative Glioma Patients2019In: Medical physics (Lancaster), ISSN 0094-2405, Vol. 46, no 6, p. E426-E427Article in journal (Other academic)
  • 48.
    Noz, Marilyn E.
    et al.
    New York University.
    Erdman, William A.
    MIDDLESEX GEN UNIV HOSP,RUTGERS MED SCH,NEW BRUNSWICK,NJ 08901 .
    Salviani, J. A.
    Maguire Jr., Gerald Q.
    Schimpf, James H.
    New York University.
    Horii, Steven C.
    New York University.
    Simple Image Acquisition and Analysis Protocols in an Automated Nuclear-Medicine Department1983In: Medical physics (Lancaster), ISSN 0094-2405, Vol. 10, no 4, p. 549-Article in journal (Other academic)
  • 49. Oden, Jakob
    et al.
    Zimmerman, Jens
    Bujila, Robert
    KTH, School of Engineering Sciences (SCI), Physics. Karolinska University Hospital, Sweden.
    Nowik, Patrik
    Poludniowski, Gavin
    Technical Note: On the calculation of stopping-power ratio for stoichiometric calibration in proton therapy2015In: Medical physics (Lancaster), ISSN 0094-2405, Vol. 42, no 9, p. 5252-5257Article in journal (Refereed)
    Abstract [en]

    Purpose: The quantitative effects of assumptions made in the calculation of stopping-power ratios (SPRs) are investigated, for stoichiometric CT calibration in proton therapy. The assumptions investigated include the use of the Bethe formula without correction terms, Bragg additivity, the choice of I-value for water, and the data source for elemental I-values. Methods: The predictions of the Bethe formula for SPR (no correction terms) were validated against more sophisticated calculations using the SRIM software package for 72 human tissues. A stoichiometric calibration was then performed at our hospital. SPR was calculated for the human tissues using either the assumption of simple Bragg additivity or the Seltzer-Berger rule (as used in ICRU Reports 37 and 49). In each case, the calculation was performed twice: First, by assuming the I-value of water was an experimentally based value of 78 eV (value proposed in Errata and Addenda for ICRU Report 73) and second, by recalculating the I-value theoretically. The discrepancy between predictions using ICRU elemental I-values and the commonly used tables of Janni was also investigated. Results: Errors due to neglecting the correction terms to the Bethe formula were calculated at less than 0.1% for biological tissues. Discrepancies greater than 1%, however, were estimated due to departures from simple Bragg additivity when a fixed I-value for water was imposed. When the I-value for water was calculated in a consistent manner to that for tissue, this disagreement was substantially reduced. The difference between SPR predictions when using Janni's or ICRU tables for I-values was up to 1.6%. Experimental data used for materials of relevance to proton therapy suggest that the ICRU-derived values provide somewhat more accurate results (root-mean-square-error: 0.8% versus 1.6%). 
Conclusions: The conclusions from this study are that (1) the Bethe formula can be safely used for SPR calculations without correction terms; (2) simple Bragg additivity can be reasonably assumed for compound materials; (3) if simple Bragg additivity is assumed, then the I-value for water should be calculated in a consistent manner to that of the tissue of interest (rather than using an experimentally derived value); (4) the ICRU Report 37 I-values may provide a better agreement with experiment than Janni's tables.
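The quantities the abstract compares can be sketched with the uncorrected Bethe formula and ln-additivity for the I-value; the 150 MeV proton energy is an illustrative choice, and all function names are invented:

```python
import math

MEC2_EV = 0.511e6    # electron rest energy, eV
MPC2_MEV = 938.272   # proton rest energy, MeV

def beta_squared(kinetic_mev):
    """Relativistic beta^2 for a proton of the given kinetic energy."""
    gamma = 1.0 + kinetic_mev / MPC2_MEV
    return 1.0 - 1.0 / gamma ** 2

def bethe_log(beta2, i_ev):
    """Energy- and I-dependent part of the Bethe stopping power,
    without shell, density, or Barkas correction terms:
    ln(2 m_e c^2 beta^2 / (I (1 - beta^2))) - beta^2."""
    return math.log(2.0 * MEC2_EV * beta2 / (i_ev * (1.0 - beta2))) - beta2

def i_value_bragg_ev(electron_fractions):
    """Simple Bragg additivity: ln I = sum_e w_e ln I_e over electron
    fractions w_e and elemental I-values I_e (in eV)."""
    return math.exp(sum(w * math.log(i) for w, i in electron_fractions))

def spr(rel_electron_density, i_medium_ev, i_water_ev=78.0, kinetic_mev=150.0):
    """Stopping-power ratio to water from the uncorrected Bethe formula:
    SPR = (rho_e / rho_e,w) * L(beta^2, I_m) / L(beta^2, I_w)."""
    b2 = beta_squared(kinetic_mev)
    return rel_electron_density * bethe_log(b2, i_medium_ev) / bethe_log(b2, i_water_ev)
```

The sketch makes the paper's third conclusion concrete: because the I-values of medium and water enter only through the ratio of the two logarithmic terms, computing both in the same (Bragg-additive) manner cancels much of the error that a mixed experimental/theoretical choice introduces.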

  • 50.
    Persson, Mats
    et al.
    KTH, School of Engineering Sciences (SCI), Physics, Physics of Medical Imaging. Royal Inst Technol, Dept Phys, SE-10691 Stockholm, Sweden..
    Bujila, Robert
    Karolinska Univ Hosp, Unit Xray Phys, Sect Imaging Phys Solna, Dept Med Phys, SE-17176 Stockholm, Sweden..
    Nowik, Patrik
    Karolinska Univ Hosp, Unit Xray Phys, Sect Imaging Phys Solna, Dept Med Phys, SE-17176 Stockholm, Sweden..
    Andersson, Henrik
    Karolinska Univ Hosp, Unit Xray Phys, Sect Imaging Phys Solna, Dept Med Phys, SE-17176 Stockholm, Sweden..
    Kull, Love
    Sunderby Hosp, Med Radiat Phys, SE-97180 Lulea, Sweden..
    Andersson, Jonas
    Umea Univ, Radiat Phys, Dept Radiat Sci, SE-90185 Umea, Sweden..
    Bornefalk, Hans
    KTH, School of Engineering Sciences (SCI), Physics. Royal Inst Technol, Dept Phys, SE-10691 Stockholm, Sweden..
    Danielsson, Mats
    KTH, School of Engineering Sciences (SCI), Physics, Physics of Medical Imaging. Royal Inst Technol, Dept Phys, SE-10691 Stockholm, Sweden..
    Upper limits of the photon fluence rate on CT detectors: Case study on a commercial scanner2016In: Medical physics (Lancaster), ISSN 0094-2405, Vol. 43, no 7, p. 4398-4411Article in journal (Refereed)
    Abstract [en]

    Purpose: The highest photon fluence rate that a computed tomography (CT) detector must be able to measure is an important parameter. The authors calculate the maximum transmitted fluence rate in a commercial CT scanner as a function of patient size for standard head, chest, and abdomen protocols. Methods: The authors scanned an anthropomorphic phantom (Kyoto Kagaku PBU-60) with the reference CT protocols provided by AAPM on a GE LightSpeed VCT scanner and noted the tube current applied with the tube current modulation (TCM) system. By rescaling this tube current using published measurements on the tube current modulation of a GE scanner [N. Keat, "CT scanner automatic exposure control systems," MHRA Evaluation Report 05016, ImPACT, London, UK, 2005], the authors could estimate the tube current that these protocols would have resulted in for other patient sizes. An ECG-gated chest protocol was also simulated. Using measured dose rate profiles along the bowtie filters, the authors simulated imaging of anonymized patient images with a range of sizes on a GE VCT scanner and calculated the maximum transmitted fluence rate. In addition, the 99th and the 95th percentiles of the transmitted fluence rate distribution behind the patient are calculated, and the effect of omitting projection lines passing just below the skin line is investigated. Results: The highest transmitted fluence rates on the detector for the AAPM reference protocols with centered patients are found for head images and for intermediate-sized chest images, both with a maximum of 3.4 × 10⁸ mm⁻² s⁻¹ at 949 mm distance from the source. Miscentering the head by 50 mm downward increases the maximum transmitted fluence rate to 5.7 × 10⁸ mm⁻² s⁻¹. The ECG-gated chest protocol gives fluence rates up to 2.3 × 10⁸ to 3.6 × 10⁸ mm⁻² s⁻¹ depending on miscentering. Conclusions: The fluence rate on a CT detector reaches 3 × 10⁸ to 6 × 10⁸ mm⁻² s⁻¹ in standard imaging protocols, with the highest rates occurring for ECG-gated chest and miscentered head scans. These results will be useful to developers of CT detectors, in particular photon counting detectors. © 2016 American Association of Physicists in Medicine.
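The strong patient-size dependence of the transmitted rate follows from simple exponential attenuation; a crude single-energy estimate (the μ ≈ 0.2 cm⁻¹ for water near 70 keV is an assumed effective value, and beam hardening, scatter, and the bowtie profile are all ignored):

```python
import math

def transmitted_rate(unattenuated_rate_mm2_s, water_cm, mu_per_cm=0.2):
    """Fluence rate behind a water-equivalent path of the given length,
    Beer-Lambert attenuation at one assumed effective energy."""
    return unattenuated_rate_mm2_s * math.exp(-mu_per_cm * water_cm)
```

Each extra ~3.5 cm of water roughly halves the rate under these assumptions, which is why miscentering, which shortens the path through the patient for some rays, pushes the maximum transmitted rate up so sharply.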
