kth.se Publications
Showing 1-50 of 320 results
  • 1. Abbak, Ramazan A.
    et al.
    Sjöberg, Lars E.
    KTH, School of Architecture and the Built Environment (ABE), Urban Planning and Environment, Geodesy and Geoinformatics.
    Ellmann, Artu
    Ustun, Aydin
    A precise gravimetric geoid model in a mountainous area with scarce gravity data: a case study in central Turkey. 2012. In: Studia Geophysica et Geodaetica, ISSN 0039-3169, E-ISSN 1573-1626, Vol. 56, no 4, p. 909-927. Article in journal (Refereed)
    Abstract [en]

    In mountainous regions with scarce gravity data, gravimetric geoid determination is a difficult task that needs special attention to obtain reliable results satisfying the demands, e.g., of engineering applications. The present study investigates a procedure for combining a suitable global geopotential model and available terrestrial data in order to obtain a precise regional geoid model for the Konya Closed Basin (KCB). The KCB is located in the central part of Turkey, where a very limited amount of terrestrial gravity data is available. Various data sources, such as the Turkish digital elevation model with 3″ × 3″ resolution, a recently published satellite-only global geopotential model from the Gravity Recovery and Climate Experiment (GRACE) satellite and ground gravity observations, are combined in the least-squares sense by the modified Stokes' formula. The new gravimetric geoid model is compared with Global Positioning System (GPS)/levelling at the control points, resulting in Root Mean Square (RMS) differences of ±6.4 cm and 1.7 ppm in the absolute and relative senses, respectively. This regional geoid model appears to be more accurate than the Earth Gravitational Model 2008, which is the best global model over the target area, with RMS differences of ±8.6 cm and 1.8 ppm in the absolute and relative senses, respectively. These results show that the accuracy of a regional gravimetric model can be augmented by the combination of a global geopotential model and local terrestrial data in mountainous areas, even though the quality and resolution of the primary terrestrial data are not satisfactory for the geoid modelling procedure.

  • 2.
    Abdelmajid, Yezeed
    KTH, School of Architecture and the Built Environment (ABE), Urban Planning and Environment, Geodesy and Geoinformatics.
    Investigation and Comparison of 3D Laser Scanning Software Packages. 2012. Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE credits. Student thesis
    Abstract [en]

    Laser scanning technology has become an important tool in many engineering projects and applications. The output of laser measurement is the point cloud, which is processed in a way that makes it suitable for different applications. Processing of point cloud data is achieved through laser scanning software packages. Depending on the field of application, these packages offer many different kinds of functions and methods. The main processing tasks in a laser scanning software package include registration, modelling and texture mapping. An investigation and comparison of two laser scanning processing packages (Leica Cyclone and InnovMetric PolyWorks) is performed in this study. The theoretical and mathematical backgrounds of the above functions are presented and discussed. The available methods and functions used by each of the packages for these tasks are addressed and discussed. Using sample data, these functions are trialled and their results are compared and analyzed.

    The registration tests give the same results in both packages for registration using targets. Although the results of cloud-to-cloud registration show some deviation from the target registration results, they are closer to each other across the two packages than to the target registration results. This indicates the efficiency of cloud-to-cloud methods in averaging the total registration error over all used points, unlike target registration methods.

    The modelling tests show larger differences in the accuracy of the generated models between the two packages. For both fitting and surface construction methods, PolyWorks showed better results and capabilities for three-dimensional modelling. Finally, the advantages and disadvantages of each package are presented in relation to the tasks and methods used, and a review of data exchange abilities is given.

  • 3.
    Abdollahzadeh, Makan
    et al.
    KTH, School of Architecture and the Built Environment (ABE), Urban Planning and Environment, Geodesy and Geoinformatics.
    Najafi-Alamdari, Mehdi
    Geodesy, K. N. Toosi University of Technology.
    Application of Molodensky's Method for Precise Determination of Geoid in Iran. 2011. In: Journal of Geodetic Science, ISSN 2081-9919, E-ISSN 2081-9943, Vol. 1, no 3, p. 259-270. Article in journal (Refereed)
    Abstract [en]

    Determination of the geoid with high accuracy is a challenging task among geodesists. Its precise determination is usually carried out by combining a global geopotential model with terrestrial gravity anomalies measured in the region of interest, along with some topographic information. In this paper, Molodensky's approach is used for precise determination of the height anomaly. To do this, the optimum combination of global geopotential models with validated terrestrial surface gravity anomalies and some deterministic modification schemes are investigated. Special attention is paid to the strict modelling of the geoidal height and height anomaly difference. The accuracy of the determined geoid is tested on 513 points of the Iranian height network, whose geoidal heights are determined by GPS observations.

  • 4.
    Abshirini, Ehsan
    et al.
    KTH, School of Architecture and the Built Environment (ABE), Urban Planning and Environment, Geodesy and Geoinformatics.
    Koch, Daniel
    KTH, School of Architecture and the Built Environment (ABE), Architecture.
    Visibility Analysis, Similarity and Dissimilarity in General Trends of Building Layouts and their Functions. 2013. In: Proceedings of Ninth International Space Syntax Symposium / [ed] Young Ook Kim, Hoon Tae Park, Kyung Wook Seo, Seoul: Sejong University Press, 2013, p. 11:1-11:15. Conference paper (Refereed)
    Abstract [en]

    Visibility analysis is one of the key methods in space syntax theory. It concerns the visual information conveyed to observers from any location in space that is directly visible to the observer without obstruction. Visibility, simply defined as what we can see, not only affects the spatial function of buildings, but also relates to the perception of buildings by inhabitants and visitors. In this paper we present the results of visibility analysis applied to a sample of building layouts of different sizes and functions from a variety of places and periods. The main aim of this paper is to statistically explore the general trends of building layouts and show if and how visibility properties such as connectivity, clustering coefficient, mean depth, entropy, and integration values can distinguish among different functions of buildings. Our findings reveal significant correlation coefficients among global properties of visibility, for which we consider the mean value of the properties, a similarity suggesting that they are not intensively manipulated by architecture. On the other hand, there are correlations, weaker than the previous ones but still significant, among local properties of visibility, for which we consider the (max-min) value of the properties, suggesting that social, cultural or other physical parameters distinguish buildings individually. We also show that functions such as ‘museum’ and ‘veterinary’ are relatively well-clustered, while functions such as ‘ancient’ and ‘shopping’ show high diversity.
    In addition, using a decision tree model we show that, in our sample, functions such as ‘museum’ and ‘library’ are more predictable than functions such as ‘hospital’ and ‘shopping.’ All of this means that, at least in our sample, the usability and applicability of well-clustered and well-predicted functions have been predominant in shaping their interior spaces; conversely, in diverse and unpredictable functions, the pragmatic solutions of people’s daily life developed in material culture affect the visual properties of their interior spaces.

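The visibility-graph measures analysed in the paper above (connectivity, clustering coefficient) have simple graph-theoretic definitions. As a generic illustration of how they are computed on a small adjacency structure (the example graph, node labels and function names are ours, not the paper's):

```python
def clustering_coefficient(adj, v):
    """Clustering coefficient of node v in an undirected graph:
    the fraction of v's neighbour pairs that are themselves connected."""
    nbrs = adj[v]
    k = len(nbrs)
    if k < 2:
        return 0.0
    links = sum(1 for a in nbrs for b in nbrs if a < b and b in adj[a])
    return 2.0 * links / (k * (k - 1))

# Tiny visibility graph: nodes are viewpoints, edges mean mutual visibility.
adj = {
    0: {1, 2, 3},
    1: {0, 2},
    2: {0, 1},
    3: {0},
}
connectivity = {v: len(n) for v, n in adj.items()}   # degree = "connectivity"
print(connectivity[0])                 # 3
print(clustering_coefficient(adj, 0))  # one of three neighbour pairs linked
```

Mean depth, entropy and integration are likewise derived from shortest-path distances on this graph; the same adjacency structure would feed those computations.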
  • 5.
    Agampatian, Razmik
    KTH, School of Architecture and the Built Environment (ABE), Urban Planning and Environment, Geodesy and Geoinformatics.
    Using GIS to measure walkability: A Case study in New York City. 2014. Independent thesis Advanced level (degree of Master (Two Years)), 80 credits / 120 HE credits. Student thesis
    Abstract [en]

    Obesity has become a global epidemic due to changes in society and in behavioral patterns of communities over the last decades. The decline in physical activity is one of the major contributors to the global obesity epidemic. Thus programs, plans and policies that promote walking could be a possible solution against obesity and its comorbidities. That is because walking is the simplest and most common form of physical activity among adults, regardless of age, sex, ethnic group, education or income level.

    The characteristics of the built environment may be significant factors affecting people’s decision to walk. Measurable characteristics can therefore help determine the extent to which the built environment affects people, and can also provide indirect evidence of the state of population health in the area under study. Geographic Information Systems are increasingly accepted for the analysis and assessment of potential associations between measures of the built environment and walking. Composite measures, also known as Walkability Indices, are a promising method to measure the degree to which an area provides opportunities to walk to various destinations.

    The main objective of this research is to develop a method to model walkability, drawing partially from previously developed Walkability Indices and walkability measures, and eventually to suggest an improved Walkability Index composed of 6 parameters: i) Residential Density, ii) Diversity – Entropy Index, iii) Connectivity, iv) Proximity, v) Environmental Friendliness, vi) Commercial Density – FAR. The chosen spatial unit of analysis is the Census Tract level. The method of buffering, which defines spatial units around geocoded locations at a given distance, is also employed in an attempt to improve previously used methods. The study area is New York City (NYC).

    The results imply that Manhattan is the most walkable Borough, while Staten Island is the least walkable. It is also suggested that NYC has a centripetal structure, meaning that the historical center and the entire island of Manhattan are more developed and more walkable, followed by the adjacent areas of the neighboring Boroughs of the Bronx, Brooklyn and Queens. The farthest areas of NYC’s periphery are consistently found to have the lowest walkability. Additionally, neighborhoods that are extremely homogeneous in terms of land use and do not include a considerable number of commercial parcels score very low. Hence, Census Tracts that are characterized mainly by industrial land use, or that contain large transportation infrastructures (e.g. ports, airports, large train stations) or even large metropolitan parks, display limited walkability.

    The results and findings coincide to a satisfactory extent with those of previous studies. However, the comparison is simple and based only on easily observed patterns. As a result, the validity of the new Walkability Index may need further assessment due to limitations and lack of data.

    All types of limitations have been identified, including limitations in data and in methodology. Suggestions for further research include possible additional parameters that could be employed in our Walkability Indices (e.g. crime rate, and a separate parameter for parks and green areas), and further research into whether the components of a Walkability Index should be weighted or not. In general, Walkability Indices are promising GIS applications that still need further research and development.

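Of the six parameters above, the Diversity (Entropy Index) component is commonly computed as a normalized Shannon entropy over the land-use shares in each spatial unit. A minimal sketch under that assumption (the function name and example shares are illustrative, not taken from the thesis):

```python
import math

def entropy_index(shares):
    """Normalized Shannon entropy of land-use shares in a spatial unit.

    Returns 0.0 for a single land use and 1.0 for a perfectly even mix.
    `shares` are area proportions summing to 1; zero shares are skipped.
    """
    k = len(shares)
    if k < 2:
        return 0.0
    h = sum(-p * math.log(p) for p in shares if p > 0)
    return h / math.log(k)

# A tract split evenly among four land uses is maximally diverse:
print(entropy_index([0.25, 0.25, 0.25, 0.25]))  # 1.0
# A single-use tract scores zero:
print(entropy_index([1.0, 0.0, 0.0, 0.0]))      # 0.0
```

Normalizing by log(k) keeps the index in [0, 1] regardless of how many land-use categories are distinguished, which makes tracts with different category counts comparable.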
  • 6.
    Ahmad, Muhammad Nabi
    KTH, School of Architecture and the Built Environment (ABE), Urban Planning and Environment, Geodesy and Geoinformatics.
    Development of MetroMan with Open Source & Commercial web GIS technologies. 2011. Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE credits. Student thesis
  • 7.
    Alizadeh Khameneh, Mohammad Amin
    KTH, School of Architecture and the Built Environment (ABE), Urban Planning and Environment, Geodesy and Geoinformatics.
    Tree Detection and Species Identification using LiDAR Data. 2013. Independent thesis Advanced level (degree of Master (Two Years)), 30 credits / 45 HE credits. Student thesis
    Abstract [en]

    The importance of single-tree-based information for forest management and related industries in countries like Sweden, approximately 65% of which is covered by forest, is the motivation for developing algorithms for tree detection and species identification in this study. Most of the previous studies in this field are based on aerial and spectral images, and less attention has been paid to detecting trees and identifying their species using laser points and clustering methods.

    In the first part of this study, two main approaches to clustering (hierarchical and K-means) are compared qualitatively in detecting 3-D ALS points that pertain to individual tree clusters. Further tests are performed on test sites using the supervised K-means algorithm, in which the initial clustering points are defined as seed points. These points, which represent the top point of each tree, are detected from the cross-section analysis of the test area. Comparing the three methods (hierarchical, ordinary K-means and supervised K-means), the supervised K-means approach shows the best result for clustering single-tree points. An average accuracy of 90% is achieved in detecting trees. Comparing the result of the thesis algorithms with results from the DPM software, developed by the Visimind Company for analysing LiDAR data, shows more than an 85% match in detecting trees.

    Identification of tree species is the second issue of this thesis work. For this analysis, 118 trees are extracted as reference trees of three species, spruce, pine and birch, which are the dominant species in Swedish forests. In total, six methods, including best-fitted 3-D shapes (cone, sphere and cylinder) based on the least-squares method, point density, hull ratio and slope changes of the tree outer surface, are developed for identifying these species. The methods are applied to all extracted reference trees individually. For aggregating the results of all these methods, a fuzzy logic system is used because of its good reputation in combining fuzzy sets with no distinct boundaries. The best model obtained from the fuzzy system provides 73%, 87% and 71% accuracy in identifying the birch, spruce and pine trees, respectively. The overall accuracy in species categorization is 77%, and this percentage increases when only coniferous and deciduous types are classified: classifying spruce and pine as coniferous versus birch as deciduous yielded 84% accuracy.

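The supervised (seeded) K-means step described in the abstract above can be sketched as follows: treetop points detected from the cross-section analysis initialize the cluster centres, each laser return is assigned to its nearest centre, and the centres are recomputed iteratively. This is a generic illustration with synthetic data, not the thesis code:

```python
import numpy as np

def seeded_kmeans(points, seeds, n_iter=20):
    """K-means on 3-D points with treetop seeds as the initial centres.

    points : (N, 3) array of laser returns
    seeds  : (K, 3) array of detected treetop points
    Returns the (K, 3) centres and an (N,) cluster label per point.
    """
    centres = seeds.astype(float).copy()
    for _ in range(n_iter):
        # Assign every point to its nearest centre (Euclidean distance).
        d = np.linalg.norm(points[:, None, :] - centres[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        # Move each centre to the mean of its assigned points.
        for k in range(len(centres)):
            if np.any(labels == k):
                centres[k] = points[labels == k].mean(axis=0)
    return centres, labels

# Two synthetic "trees": point clusters around x = 0 and x = 10.
rng = np.random.default_rng(0)
tree_a = rng.normal([0, 0, 5], 0.5, size=(50, 3))
tree_b = rng.normal([10, 0, 7], 0.5, size=(50, 3))
pts = np.vstack([tree_a, tree_b])
seeds = np.array([[0, 0, 8], [10, 0, 10]])  # crude treetop guesses
centres, labels = seeded_kmeans(pts, seeds)
```

Seeding with treetops fixes the number of clusters and starts each centre inside its own tree, which is why this variant separates neighbouring crowns better than unseeded K-means.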
  • 8. Bachmann, A.
    et al.
    Borgelt, C.
    Gidófalvi, Gyözö
    KTH, School of Architecture and the Built Environment (ABE), Urban Planning and Environment, Geodesy and Geoinformatics.
    Incremental frequent route based trajectory prediction. 2013. In: IWCTS 2013 - 6th ACM SIGSPATIAL International Workshop on Computational Transportation Science, Association for Computing Machinery (ACM), 2013, p. 49-54. Conference paper (Refereed)
    Abstract [en]

    Recent technological trends enable modern traffic prediction and management systems in which the analysis and prediction of the movements of objects is essential. To this end, the present paper proposes IncCCFR, a novel incremental approach for managing, mining, and predicting the incrementally evolving trajectories of moving objects. In addition to reduced mining and storage costs, a key advantage of the incremental approach is its ability to combine multiple temporally relevant mining results from the past to capture temporal and periodic regularities in movement. The approach and its variants are empirically evaluated on a large real-world data set of moving-object trajectories originating from a fleet of taxis, illustrating that detailed closed frequent routes can be efficiently discovered and used for prediction.

  • 9.
    Bagherbandi, Mohammad
    KTH, School of Architecture and the Built Environment (ABE), Urban Planning and Environment, Geodesy and Geoinformatics.
    A comparison of three gravity inversion methods for crustal thickness modelling in Tibet plateau. 2012. In: Journal of Asian Earth Sciences, ISSN 1367-9120, E-ISSN 1878-5786, Vol. 43, no 1, p. 89-97. Article in journal (Refereed)
    Abstract [en]

    Crustal thickness can be determined by gravimetric methods based on different assumptions, e.g. isostatic hypotheses. Here we compare three gravimetric inversion methods to estimate the Moho depth: two Moho models based on the Vening Meinesz-Moritz hypothesis and one using Parker-Oldenburg's algorithm, investigated in the Tibet plateau. The results are compared with CRUST2.0, and it is shown that the Moho depths estimated from the Vening Meinesz-Moritz model are better than those from Parker-Oldenburg's algorithm.

  • 10.
    Bagherbandi, Mohammad
    KTH, School of Architecture and the Built Environment (ABE), Urban Planning and Environment, Geodesy and Geoinformatics.
    An Isostatic Earth Crustal Model and Its Applications. 2011. Doctoral thesis, monograph (Other academic)
    Abstract [en]

     The Mohorovičič discontinuity (Moho), which is the surface separating the Earth’s crust from the mantle, is of great interest among geoscientists. The Moho depth can be determined by seismic and gravimetric methods. The seismic methods are expensive, time-consuming and suffer from lack of global coverage of data, while the gravimetric methods use inexpensive and mostly already available global and regional data based on an isostatic model. The main reasons for studying an isostatic model are on one hand the gaps and uncertainties of the seismic models, and, on the other hand, the generous availability of gravity data from global models for the gravimetric-isostatic model. In this study, we present a new gravimetric-isostatic Moho model, called the Vening Meinesz-Moritz (VMM) model. Also, a combined Moho model based on seismic and gravimetric models is presented.

    Classical isostatic hypotheses assume that the topographic potential is fully compensated at all wavelengths, which is not the case in reality. We found that the maximum degree of compensation for the topographic potential based on the new Moho model is 60, corresponding to a resolution of about 330 km. Other (dynamic) isostatic effects (such as temporal compensation, plate tectonics, post-glacial rebound, etc.) should be considered as well, but are disregarded in this thesis. Numerical results imply that the dynamic phenomena affect mostly the long wavelengths.

    The VMM model is applied for different purposes. The Moho density contrast is an important parameter for estimating the Moho depth, and we present a technique to simultaneously estimate Moho depth and density contrast by the VMM and seismic models. Another application is the recovery of gravity anomaly from Satellite Gravity Gradiometry (SGG) data by a smoothing technique, and we show that the VMM model performs better than the Airy-Heiskanen isostatic model. We achieved an rms difference of 4 mGal for the gravity anomaly estimated from simulated GOCE data in comparison with EGM08, and this result is better than direct downward continuation of the data without smoothing. We also present a direct method to recover Moho depth from the SGG mission, and we show that the recovered Moho is more or less of the same quality as that obtained from terrestrial gravimetric data (with an rms error of 2 km). Moreover, a strategy is developed for creating substitutes for missing GOCE data in Antarctica, where there is a polar gap of such data.

    The VMM model is further used for constructing a Synthetic Earth Gravity Model (SEGM). The topographic-isostatic potential is simple to apply for the SEGM, and the latter can be an excellent tool to fill data gaps, extending the EGMs to higher degrees and validating a recovery technique of the gravity field from a satellite mission. Regional and global tests of the SEGM yield a relative error of less than 3 % vs. EGM08 to degree 2160.

     

  • 11.
    Bagherbandi, Mohammad
    KTH, School of Architecture and the Built Environment (ABE), Urban Planning and Environment, Geodesy and Geoinformatics.
    Combination of seismic and an isostatic crustal thickness models using Butterworth filter in a spectral approach. 2012. In: Journal of Asian Earth Sciences, ISSN 1367-9120, E-ISSN 1878-5786, Vol. 59, no SI, p. 240-248. Article in journal (Refereed)
    Abstract [en]

    In this study, using a Butterworth filter, a combined crustal thickness model based on seismic and isostatic-gravimetric models is presented in the spectral domain. The Vening Meinesz-Moritz isostatic model and a seismic model obtained from sparse seismic data are the two models used in this study. The filter helps to join the two models without any jump at the overlap degree in the spectral domain. The main motivations of this study are (a) presenting a higher resolution for the crustal thickness and (b) removing non-isostatic effects from the isostatic model. The result obtained from the combined model is a synthetic Earth crustal model up to degree 180 (equivalent resolution 1° × 1°). In spite of the differences between the seismic and isostatic-gravimetric models in some parts of the Earth, the test computations show a satisfactory agreement between the results provided. Numerical results show that this method of combination agrees with the seismic crustal thickness (about 2.0 km RMS difference).
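The spectral combination described above can be illustrated with per-degree weights from a low-pass Butterworth filter: below the overlap degree the seismic model dominates, above it the gravimetric model takes over, with no jump in between. The cut-off degree and filter order below are arbitrary illustrative choices, not the paper's values:

```python
def butterworth_weight(n, n0, order=4):
    """Low-pass Butterworth weight for spherical-harmonic degree n.

    The weight tends to 1 well below the cut-off degree n0 and to 0
    well above it, passing smoothly through 0.5 exactly at n = n0.
    """
    return 1.0 / (1.0 + (n / n0) ** (2 * order))

def blend(seismic_coeff, gravimetric_coeff, n, n0=60, order=4):
    """Per-degree blend of two models: seismic at long wavelengths,
    gravimetric-isostatic at short wavelengths."""
    w = butterworth_weight(n, n0, order)
    return w * seismic_coeff + (1.0 - w) * gravimetric_coeff

# At the cut-off degree the two models contribute equally:
print(butterworth_weight(60, 60))  # 0.5
```

Because the weight varies continuously with degree, the combined spectrum has no discontinuity at the overlap degree, which is the property the abstract highlights.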

  • 12.
    Bagherbandi, Mohammad
    KTH, School of Architecture and the Built Environment (ABE), Urban Planning and Environment, Geodesy and Geoinformatics.
    Global earth isostatic model using smoothed Airy-Heiskanen and Vening Meinesz hypotheses. 2012. In: Earth Science Informatics, ISSN 1865-0473, Vol. 5, no 2, p. 93-104. Article in journal (Refereed)
    Abstract [en]

    Isostatic hypotheses are used for different purposes in geophysics and geodesy. Earth crustal thickness modelling is more complicated than the classical isostatic models. In this study we modify the Airy-Heiskanen model, utilizing a smoothing factor, into a regional/global isostatic model through a modern solution of the gravimetric-isostatic Vening Meinesz model and CRUST2.0. In Airy-Heiskanen's theory there is no correlation between neighbouring crustal columns, while this must be the case in reality due to the elasticity of the Earth. The idea is to keep the simplicity of the Airy-Heiskanen model, which needs only topographic data, while changing it into a regional/global isostatic model. The isostatic assumption for compensating the topographic potential is incomplete, as there are other geophysical phenomena that should be considered. Using the isostatic hypothesis for determining the depth of the crust causes some disturbing signals in the gravity anomaly (approximately 285 mGal), which influence the crustal thickness determination. In this paper a simple method is used for removing these effects. Spherical harmonic potential coefficients of the topographic compensation masses are used for modifying Airy-Heiskanen's model in a least-squares adjustment procedure by estimating the smoothing factor. The numerical analysis shows that below degree 10 the modified Airy-Heiskanen and Vening Meinesz models are close together. Smoothing factors for modifying the Airy-Heiskanen model vary from 0.75 to 0.64 between degrees 200 and 2159.

  • 13.
    Bagherbandi, Mohammad
    KTH, School of Architecture and the Built Environment (ABE), Urban Planning and Environment, Geodesy and Geoinformatics.
    Impact of compensating mass on the topographic mass - A study using isostatic and non-isostatic Earth crustal models. 2012. In: Acta Geodaetica et Geophysica Hungarica, ISSN 1217-8977, E-ISSN 1587-1037, Vol. 47, no 1, p. 29-51. Article in journal (Refereed)
    Abstract [en]

    The Earth's topographic masses are compensated by isostatic adjustment. According to the isostatic hypothesis, a mountain is compensated by a mass deficiency beneath it, where the crust is floating on the viscous mantle. To study the impact of the compensating mass on the topographic mass, a crustal thickness (Moho boundary) model is needed. A new gravimetric-isostatic model to estimate the Moho depth, the Vening Meinesz-Moritz model, and two well-known Moho models (CRUST2.0 and Airy-Heiskanen) are used in this study. Not all topographic masses can be compensated by a simple isostatic assumption; other compensation mechanisms should be considered. In fact, small topographic masses can be supported by the elasticity of the larger masses and deeper Earth layers. We discuss this issue applying spatial and spectral analyses. Here we investigate the influence of the crustal thickness and its density in compensating the topographic potential. This study shows that the compensating potential is larger than the topographic potential at low frequencies, whereas at high frequencies it is smaller. The study also illustrates that the Vening Meinesz-Moritz model compensates the topographic potential better than the other models, which makes it more suitable for interpolation of the gravity field in comparison with the two other models. In this study, two methods are presented to determine the percentage of the topographic potential compensated by the isostatic model. Numerical studies show that about 75% and 57% of the topographic potentials are compensated by the potential beneath them in Iran and Tibet, respectively. In addition, correlation analysis shows that there is a linear relation between the topography above sea level and the underlying topographic masses at low frequencies in the crustal models. Our investigation shows that about 580 +/- 7.4 metres (on average) of the topographic heights are not compensated by the variable crustal root and density.

  • 14.
    Bagherbandi, Mohammad
    KTH, School of Architecture and the Built Environment (ABE), Urban Planning and Environment, Geodesy and Geoinformatics.
    Moholso: A MATLAB program to determine crustal thickness by an isostatic and a global gravitational model. 2012. In: Computers & Geosciences, ISSN 0098-3004, E-ISSN 1873-7803, Vol. 44, p. 177-183. Article in journal (Refereed)
    Abstract [en]

    This paper focuses on the modeling of the boundary between the Earth's crust and upper mantle using a gravimetric-isostatic model. Here a MATLAB code is presented based on the gravimetric-isostatic model, i.e. the Vening Meinesz-Moritz model. Inverse problems in isostasy consist of making the isostatic anomalies zero under a certain isostatic hypothesis. The Vening Meinesz-Moritz problem is to determine the Moho depth such that the compensating attraction totally compensates the Bouguer gravity anomaly on the Earth's surface, implying that the isostatic anomaly vanishes there. The main idea is easy, but the theoretical analysis is somewhat difficult. Here a practical method to recover the Moho depth from the gravity data is used in the MATLAB code (Moholso.m) based on the Vening Meinesz-Moritz method. The code is designed with different sub-codes. The body of the main code uses the vectorization technique, because this technique increases the speed of the code. One important limitation of the code is overflow and underflow at higher degrees in the fully normalized associated Legendre function; this problem occurs in the subroutine applied in this study and limits the numerical study to degrees 1800-2000.
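The overflow/underflow limitation mentioned at the end of the abstract is easy to reproduce: the sectorial terms of the fully normalized associated Legendre functions decay like sinᵐθ, so at high degree and small co-latitude they underflow double precision. A Python sketch of the standard forward-column recursion (an illustration of the numerical issue, not the MATLAB code itself):

```python
import math

def legendre_norm(nmax, theta):
    """Fully normalized associated Legendre functions Pnm(cos theta)
    up to degree nmax, by the standard forward-column recursion.

    Returns a dict {(n, m): value}. For small sin(theta) and large m the
    sectorial terms Pmm ~ sin(theta)**m underflow to 0.0, which is the
    kind of numerical limitation the abstract refers to at high degrees.
    """
    t, u = math.cos(theta), math.sin(theta)
    P = {(0, 0): 1.0}
    # Sectorial seed values Pmm (m = 1 is a special case).
    for m in range(1, nmax + 1):
        f = math.sqrt(3.0) if m == 1 else math.sqrt((2*m + 1) / (2*m))
        P[(m, m)] = f * u * P[(m - 1, m - 1)]
    # Remaining terms by the two-term recursion in degree n.
    for m in range(0, nmax):
        for n in range(m + 1, nmax + 1):
            a = math.sqrt((2*n - 1) * (2*n + 1) / ((n - m) * (n + m)))
            b = 0.0 if n == m + 1 else math.sqrt(
                (2*n + 1) * (n + m - 1) * (n - m - 1)
                / ((n - m) * (n + m) * (2*n - 3)))
            P[(n, m)] = a * t * P[(n - 1, m)] - b * P.get((n - 2, m), 0.0)
    return P
```

For example, `legendre_norm(400, 0.01)[(400, 400)]` underflows to exactly 0.0, while at moderate degrees the values satisfy the per-degree identity that the squared terms for degree n sum to 2n + 1.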

  • 15.
    Bagherbandi, Mohammad
    et al.
    KTH, School of Architecture and the Built Environment (ABE), Urban Planning and Environment, Geodesy and Geoinformatics.
    Eshagh, Mehdi
    KTH, School of Architecture and the Built Environment (ABE), Urban Planning and Environment, Geodesy and Geoinformatics.
    Crustal thickness recovery using an isostatic model and GOCE data. 2012. In: Earth Planets and Space, ISSN 1343-8832, E-ISSN 1880-5981, Vol. 64, no 11, p. 1053-1057. Article in journal (Refereed)
    Abstract [en]

    One of the GOCE satellite mission goals is to study the Earth's interior structure, including its crustal thickness. A gravimetric-isostatic Moho model, based on the Vening Meinesz-Moritz (VMM) theory and GOCE gradiometric data, is determined beneath Iran's continental shelf and surrounding seas. The terrestrial gravimetric data of Iran are also used in a nonlinear inversion for recovering a Moho model applying the VMM model. The newly computed Moho models are compared with the Moho data taken from CRUST2.0. The root-mean-square (RMS) differences between the CRUST2.0 Moho model and the models recovered from GOCE and from the terrestrial gravimetric data are 3.8 km and 4.6 km, respectively.

  • 16.
    Bagherbandi, Mohammad
    et al.
    KTH, School of Architecture and the Built Environment (ABE), Urban Planning and Environment, Geodesy and Geoinformatics.
    Eshagh, Mehdi
    Recovery of Moho's undulations based on the Vening Meinesz-Moritz theory from satellite gravity gradiometry data: A simulation study. 2012. In: Advances in Space Research, ISSN 0273-1177, E-ISSN 1879-1948, Vol. 49, no 6, p. 1097-1111. Article in journal (Refereed)
    Abstract [en]

    In the gravimetric approach to determine the Moho depth an isostatic hypothesis can be used, and the Vening Meinesz-Moritz isostatic hypothesis is the most recent theory for this purpose. Here, this theory is further developed so that satellite gravity gradiometry (SGG) data are used for recovering the Moho depth through a nonlinear integral inversion procedure. The kernels of its forward and inverse problems show that the inversion should be done in an area about 5 degrees larger than the desired one to reduce the effect of the spatial truncation error of the integral formula. Our numerical study shows that the effect of this error on the recovered Moho depths can reach 6 km in Persia, which is very significant. The iterative Tikhonov regularization in combination with either generalized cross-validation or the quasi-optimal criterion for estimating the regularization parameter seems to be suitable, and the solution is semi-convergent up to the third iteration. Also, the Moho depth recovered from the simulated SGG data is more or less the same as that obtained from the terrestrial gravimetric data, with a root mean square error of 2 km, and the two are statistically consistent.
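Zeroth-order Tikhonov regularization, of the kind referred to above, replaces the unstable least-squares solution with x = (AᵀA + λI)⁻¹Aᵀb. A minimal illustrative sketch (not the authors' implementation) for a two-unknown overdetermined system, solved by Cramer's rule:

```python
def tikhonov_2x2(A, b, lam):
    """Zeroth-order Tikhonov solution x = (A^T A + lam*I)^(-1) A^T b for an
    overdetermined linear system with two unknowns, via Cramer's rule."""
    n11 = sum(r[0] * r[0] for r in A) + lam   # normal matrix + lam*I
    n22 = sum(r[1] * r[1] for r in A) + lam
    n12 = sum(r[0] * r[1] for r in A)
    c1 = sum(r[0] * y for r, y in zip(A, b))  # right-hand side A^T b
    c2 = sum(r[1] * y for r, y in zip(A, b))
    det = n11 * n22 - n12 * n12
    return ((c1 * n22 - c2 * n12) / det, (n11 * c2 - n12 * c1) / det)

# nearly collinear columns -> ill-conditioned normal equations
A = [[1.0, 1.001], [1.0, 0.999], [1.0, 1.0]]
b = [2.001, 1.999, 2.0]
x_unreg = tikhonov_2x2(A, b, 0.0)   # exact data: recovers (1, 1)
x_reg = tikhonov_2x2(A, b, 1.0)     # regularization damps the solution norm
```

Choosing λ by generalized cross-validation or the quasi-optimal criterion, as in the abstract, amounts to repeating this solve over a grid of λ values and minimizing the respective criterion.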

  • 17.
    Bagherbandi, Mohammad
    et al.
    KTH, School of Architecture and the Built Environment (ABE), Urban Planning and Environment, Geodesy and Geoinformatics.
    Eshagh, Mehdi
    KTH, School of Architecture and the Built Environment (ABE), Urban Planning and Environment, Geodesy and Geoinformatics.
    Sjöberg, Lars E.
    KTH, School of Architecture and the Built Environment (ABE), Urban Planning and Environment, Geodesy and Geoinformatics.
    Multi-objective versus single-objective models in geodetic network optimization2009In: Nordic Journal of Surveying and Real Estate Research, ISSN 1459-5877, E-ISSN 2341-6599, Vol. 6, no 1, p. 7-20Article in journal (Refereed)
  • 18.
    Bagherbandi, Mohammad
    et al.
    KTH, School of Architecture and the Built Environment (ABE), Urban Planning and Environment, Geodesy and Geoinformatics.
    Sjoberg, Lars E.
    KTH, School of Architecture and the Built Environment (ABE), Urban Planning and Environment, Geodesy and Geoinformatics.
    Comparison of crustal thickness from two gravimetric-isostatic models and CRUST2.02011In: Studia Geophysica et Geodaetica, ISSN 0039-3169, E-ISSN 1573-1626, Vol. 55, no 4, p. 641-666Article in journal (Refereed)
    Abstract [en]

    The Mohorovičić discontinuity is the boundary between the Earth's crust and mantle. Several isostatic hypotheses exist for estimating the crustal thickness and density variation of the Earth's crust from gravity anomalies. The goal of this article is to compare the Airy-Heiskanen and Vening Meinesz-Moritz (VMM) gravimetric models for determining Moho depth with the seismic Moho (CRUST2.0 or SM) model. Numerical comparisons are performed globally as well as for some geophysically interesting areas, such as Fennoscandia, Persia, Tibet, Canada and Chile. These are among the most complicated areas in view of rough topography (Tibet, Persia, Peru and Chile), post-glacial rebound (Fennoscandia and Canada) and tectonic activity (Persia). The mean Moho depth provided by CRUST2.0 is 22.9 +/- 0.1 km. Using a constant Moho density contrast of 0.6 g/cm(3), the corresponding mean values for the Airy-Heiskanen and VMM isostatic models become 25.0 +/- 0.04 km and 21.6 +/- 0.08 km, respectively. By assuming density contrasts of 0.5 g/cm(3) and 0.35 g/cm(3) for continental and oceanic regions, respectively, the VMM model yields a mean Moho depth of 22.6 +/- 0.1 km. For this model the global rms difference to CRUST2.0 is 7.2 km, while the corresponding difference between the Airy-Heiskanen model and CRUST2.0 is 11 km. Also for regional studies, Moho depths were estimated by selecting different density contrasts. Therefore, one conclusion from the study is that the global compensation of the VMM method significantly improves the agreement with CRUST2.0 vs. the local compensation model of Airy-Heiskanen. Also, the latter model cannot be correct in regions with ocean depth larger than 9 km (e.g., outside Chile), as it may yield negative Moho depths; this problem does not occur with the VMM model. A second conclusion is that a realistic variation of the density contrast between continental and oceanic areas yields a better fit of the VMM model to CRUST2.0.
    The study suggests that the VMM model can primarily be used to densify the CRUST2.0 Moho model in many regions by taking advantage of dense gravity data. Finally, we also found that the gravimetric terrain correction affects the determination of the Moho depth by less than about 2 km in the mean for the test regions. Hence, for most practical applications of the VMM model the simple Bouguer gravity anomaly is sufficient.
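The negative-Moho behaviour of the Airy-Heiskanen model noted above follows directly from its local balance: a root t = (ρc/Δρ)·h under topography and an antiroot ((ρc - ρw)/Δρ)·d under an ocean of depth d. A toy sketch with illustrative parameter values (25 km normal crust; only the 0.6 g/cm³ contrast matches a value quoted in the abstract):

```python
def airy_moho_depth(h_m, t0=25_000.0, rho_c=2670.0, rho_w=1030.0, drho=600.0):
    """Airy-Heiskanen Moho depth (m) for topographic height h_m (> 0 on land,
    < 0 for ocean depth), with normal crustal thickness t0.
    Parameter values are illustrative."""
    if h_m >= 0:
        return t0 + (rho_c / drho) * h_m           # root under topography
    return t0 + ((rho_c - rho_w) / drho) * h_m     # antiroot under oceans

print(airy_moho_depth(-8000.0))   # deep ocean: Moho still (barely) positive
print(airy_moho_depth(-9500.0))   # depth > ~9 km: unphysical negative Moho
```

With these numbers the antiroot grows at about 2.7 km per km of ocean depth, so the crust is "used up" at roughly 9 km depth, matching the failure mode described in the abstract.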

  • 19.
    Bagherbandi, Mohammad
    et al.
    KTH, School of Architecture and the Built Environment (ABE), Urban Planning and Environment, Geodesy and Geoinformatics. KTH, School of Architecture and the Built Environment (ABE), Real Estate and Construction Management, Geodesy and Satellite Positioning.
    Sjöberg, Lars
    KTH, School of Architecture and the Built Environment (ABE), Urban Planning and Environment, Geodesy and Geoinformatics. KTH, School of Architecture and the Built Environment (ABE), Real Estate and Construction Management, Geodesy and Satellite Positioning.
    Non-isostatic effects on crustal thickness: A study using CRUST2.0 in Fennoscandia2012In: Physics of the Earth and Planetary Interiors, ISSN 0031-9201, E-ISSN 1872-7395, Vol. 200, p. 37-44Article in journal (Refereed)
    Abstract [en]

    The crustal thickness can be determined from gravimetric-isostatic and seismic models. Modelling crustal thickness by a gravimetric-isostatic model suffers from some problems: the isostatic assumption for compensating the topographic potential is incomplete, as there are other effects which should be considered. Using the isostatic hypothesis for determining the depth of the crust introduces disturbing signals, non-isostatic effects, which influence the crustal thickness determination. Isostatic and non-isostatic compensations are the main issues in this paper. We present three methods to overcome the problem of the disturbing signals, namely truncating the spherical harmonic expansion, determining a non-isostatic correction using a seismic crustal thickness model (e.g., CRUST2.0), and combining the isostatic and seismic models by a least-squares adjustment method. The estimated non-isostatic effects vary between 65.2 and 391.8 mGal in Fennoscandia. The root mean square difference of the crustal thickness obtained from the gravimetric-isostatic model and CRUST2.0 improves up to six times (from 6.15 to 0.97 km) when the non-isostatic effects are considered.
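In its simplest scalar form, a least-squares combination of an isostatic and a seismic estimate, as mentioned above, is an inverse-variance weighted mean. A sketch with hypothetical numbers (not values from the paper):

```python
def combine(x_grav, var_grav, x_seis, var_seis):
    """Inverse-variance weighted mean: the simplest least-squares adjustment
    of two independent estimates of the same quantity."""
    w_g, w_s = 1.0 / var_grav, 1.0 / var_seis
    return (w_g * x_grav + w_s * x_seis) / (w_g + w_s)

# hypothetical Moho depths (km) with variances (km^2)
print(combine(38.0, 4.0, 34.0, 1.0))  # 34.8: pulled toward the better-known estimate
```

The full adjustment in the paper operates on whole models rather than scalars, but the weighting principle is the same.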

  • 20.
    Bagherbandi, Mohammad
    et al.
    KTH, School of Architecture and the Built Environment (ABE), Urban Planning and Environment, Geodesy and Geoinformatics.
    Sjöberg, Lars E.
    University of Gävle, Sweden.
    A synthetic Earth gravity model based on a topographic-isostatic model2012In: Studia Geophysica et Geodaetica, ISSN 0039-3169, E-ISSN 1573-1626, Vol. 56, no 4, p. 935-955Article in journal (Refereed)
    Abstract [en]

    The Earth's gravity field is related to the topographic potential in medium and higher degrees, which is isostatically compensated. Hence, topographic-isostatic (TI) data are indispensable for extending an available Earth Gravitational Model (EGM) to higher degrees. Here we use TI harmonic coefficients to construct a Synthetic Earth Gravitational Model (SEGM) that extends the EGMs to higher degrees. To achieve a high-quality SEGM, a global geopotential model (EGM96) is used to describe the low degrees, whereas the medium and high degrees are obtained from the TI or topographic potential. This study differs from others in that it uses a new gravimetric-isostatic model for determining the TI potential. We test different alternatives based on TI or only topographic data to determine the SEGM. Although the topography is isostatically compensated only to about degree 40-60, our study shows that using a compensation model improves the SEGM for the higher-degree harmonics in comparison with using only topographic data. This is because the TI data suit the applied Butterworth filter, which bridges the known EGM and the new high-degree potential field, better than the topographic data alone.
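A Butterworth-type spectral filter of the kind mentioned above can be sketched as a per-degree weight; the cutoff degree and order here are illustrative assumptions, not the paper's values.

```python
def butterworth(n, n0=60.0, order=4):
    """Butterworth-type spectral weight for spherical-harmonic degree n:
    close to 1 below the cutoff degree n0, rolling off smoothly above it.
    Cutoff and order are illustrative assumptions."""
    return 1.0 / (1.0 + (n / n0) ** (2 * order))

# blending per degree n:  C_blend(n) = w(n) * C_EGM(n) + (1 - w(n)) * C_TI(n)
weights = [butterworth(n) for n in (0, 30, 60, 120, 600)]
```

The weight equals 0.5 exactly at the cutoff degree and decays rapidly above it, so the low degrees come from the EGM and the high degrees from the TI field, with a smooth transition in between.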

  • 21.
    Bagherbandi, Mohammad
    et al.
    KTH, School of Architecture and the Built Environment (ABE), Urban Planning and Environment, Geodesy and Geoinformatics.
    Sjöberg, Lars E
    KTH, School of Architecture and the Built Environment (ABE), Urban Planning and Environment, Geodesy and Geoinformatics.
    Determination of crustal thickness by Vening Meinesz-Moritz hypothesis and its geodetic applications2010Conference paper (Other (popular science, discussion, etc.))
  • 22.
    Bagherbandi, Mohammad
    et al.
    KTH, School of Architecture and the Built Environment (ABE), Urban Planning and Environment, Geodesy and Geoinformatics.
    Sjöberg, Lars E.
    KTH, School of Architecture and the Built Environment (ABE), Urban Planning and Environment, Geodesy and Geoinformatics. University of Gävle, Sweden .
    Improving gravimetric-isostatic models of crustal depth by correcting for non-isostatic effects and using CRUST2.02013In: Earth-Science Reviews, ISSN 0012-8252, E-ISSN 1872-6828, Vol. 117, p. 29-39Article in journal (Refereed)
    Abstract [en]

    The principle of isostasy is important in different fields of geosciences. Using an isostatic hypothesis for estimating the crustal thickness suffers from the more or less incomplete isostatic model and from the fact that the observed gravity anomaly is generated not only by the topographic/isostatic signal but also by non-isostatic effects (NIEs). In most applications of isostatic models the NIEs are disregarded. In this paper, we study how some isostatic models related to Vening Meinesz's isostatic hypothesis can be improved by considering the NIEs. The isostatic gravity anomaly needs a correction for the NIEs, which varies from as much as 494 mGal to -308 mGal. The result shows that by adding this correction the global crustal thickness estimate improves by about 50% with respect to the global model CRUST2.0, i.e. the root mean square differences of the crustal thickness of the best Vening Meinesz type model and CRUST2.0 are 6.9 and 3.2 km before and after the improvement, respectively. As a result, a new global model of crustal thickness using the Vening Meinesz and CRUST2.0 models is generated. A comparison with an independent African crustal depth model shows an improvement of the new model by 6.8 km vs. CRUST2.0 (i.e. rms differences of 3.0 and 9.8 km, respectively). A comparison between oceanic lithosphere age and the NIEs is also discussed in this study. One application of this study is to improve crustal depth in areas where CRUST2.0 data are sparse or unreliable and to densify the resolution vs. the CRUST2.0 model. Another application is to infer the viscosity of the mantle from the NIE signal at various locations around the Earth, for understanding complete, over- and under-compensation of the topography.

  • 23.
    Bagherbandi, Mohammad
    et al.
    KTH, School of Architecture and the Built Environment (ABE), Urban Planning and Environment, Geodesy and Geoinformatics.
    Tenzer, R.
    Geoid-to-quasigeoid separation computed using the GRACE/GOCE global geopotential model GOCO02S -A case study of Himalayas and Tibet2013In: Terrestrial, Atmospheric and Oceanic Science, ISSN 1017-0839, E-ISSN 2223-8964, Vol. 24, no 1, p. 59-68Article in journal (Refereed)
    Abstract [en]

    The geoid-to-quasigeoid correction has traditionally been computed approximately as a function of the planar Bouguer gravity anomaly and the topographic height. Recent numerical studies based on newly developed theoretical models, however, indicate that computing this correction using the approximate formula yields large errors, especially in mountainous regions with computation points at high elevations. In this study we investigate these approximation errors in a study area comprising the Himalayas and Tibet, where this correction reaches its global maximum. Since GPS-levelling and terrestrial gravity datasets in this part of the world are not (freely) available, global gravitational models (GGMs) are used to compute this correction utilizing the expressions for a spherical harmonic analysis of the gravity field. The computation can be done using the GGM coefficients taken from the Earth Gravitational Model 2008 (EGM08), complete to degree 2160 of spherical harmonics. Recent studies based on a regional accuracy assessment of GGMs have shown that the combined GRACE/GOCE solutions provide a substantial improvement of the Earth's gravity field at medium wavelengths of spherical harmonics compared to EGM08. We address this aspect in the numerical analysis by comparing the gravity field quantities computed using the satellite-only combined GRACE/GOCE model GOCO02S against the EGM08 results. The numerical results reveal that errors in the geoid-to-quasigeoid correction computed using the approximate formula can reach as much as ~1.5 m. We also demonstrate that the expected improvement of the GOCO02S gravity field quantities at medium wavelengths (within the frequency band approximately between degrees 100 and 250) compared to EGM08 is as much as ±60 mGal and ±0.2 m in terms of gravity anomalies and geoid/quasigeoid heights, respectively.
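The classical planar approximation referred to above is, in its simplest textbook form, N - ζ ≈ Δg_B·H/γ̄. A stdlib sketch with illustrative high-mountain numbers (not values from the paper):

```python
def geoid_quasigeoid_sep(bouguer_anomaly_mgal, height_m, gamma=9.81):
    """Classical planar approximation  N - zeta ~ dg_B * H / gamma  (metres),
    with the Bouguer anomaly in mGal (1 mGal = 1e-5 m/s^2) and H in metres.
    Illustrative only; this is the approximate formula the paper critiques."""
    return bouguer_anomaly_mgal * 1e-5 * height_m / gamma

# e.g. a -500 mGal Bouguer anomaly at 5000 m elevation
print(round(geoid_quasigeoid_sep(-500.0, 5000.0), 2))  # -2.55 (metres)
```

The separation thus grows with both the anomaly and the height, which is why the approximation errors quoted in the abstract peak over the Himalayas and Tibet.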

  • 24.
    Bagherbandi, Mohammad
    et al.
    KTH, School of Architecture and the Built Environment (ABE), Urban Planning and Environment, Geodesy and Geoinformatics.
    Tenzer, Robert
    Comparative analysis of Vening-Meinesz Moritz isostatic models using the constant and variable crust-mantle density contrast - a case study of Zealandia2013In: Journal of Earth System Science, E-ISSN 0973-774X, Vol. 122, no 2, p. 339-348Article in journal (Refereed)
    Abstract [en]

    We compare three different numerical schemes of treating the Moho density contrast in gravimetric inverse problems for finding the Moho depths. The results are validated using the global crustal model CRUST2.0, which is determined purely from seismic data. Firstly, the gravimetric recovery of the Moho depths is realized by solving Moritz's generalization of the Vening-Meinesz inverse problem of isostasy while a constant Moho density contrast is adopted. The Pratt-Hayford isostatic model is then used to estimate the variable Moho density contrast. This variable Moho density contrast is subsequently used to determine the Moho depths. Finally, the combined least-squares approach is applied to estimate jointly the Moho depths and density contrast based on an a priori error model. The EGM2008 global gravity model and the DTM2006.0 global topographic/bathymetric model are used to generate the isostatic gravity anomalies. The comparison of numerical results reveals that the optimal isostatic inverse scheme should take into consideration both the variable depth and density of compensation. This is achieved by applying the combined least-squares approach for a simultaneous estimation of both Moho parameters. We demonstrate that the result obtained using this method has the best agreement with the CRUST2.0 Moho depths. The numerical experiments are conducted at the regional study area of New Zealand's continental shelf.

  • 25.
    Bagherbandi, Mohammad
    et al.
    KTH, School of Architecture and the Built Environment (ABE), Urban Planning and Environment, Geodesy and Geoinformatics.
    Tenzer, Robert
    Sjöberg, Lars E.
    KTH, School of Architecture and the Built Environment (ABE), Urban Planning and Environment, Geodesy and Geoinformatics.
    Moho depth uncertainties in the Vening-Meinesz Moritz inverse problem of isostasy2014In: Studia Geophysica et Geodaetica, ISSN 0039-3169, E-ISSN 1573-1626, Vol. 58, no 2, p. 227-248Article in journal (Refereed)
    Abstract [en]

    We formulate an error propagation model based on solving the Vening Meinesz-Moritz (VMM) inverse problem of isostasy. The system of observation equations in the VMM model defines the relation between the isostatic gravity data and the Moho depth by means of a second-order Fredholm integral equation of the first kind. The corresponding error model (derived in a spectral domain) functionally relates the Moho depth errors to the commission errors of the used gravity and topographic/bathymetric models. The error model also incorporates the non-isostatic bias, which describes the disagreement, mainly of a systematic nature, between the isostatic and seismic models. The error analysis is conducted at the study area of the Tibetan Plateau and Himalayas, with the world's largest crustal thickness. The Moho depth uncertainties due to errors of the currently available global gravity and topographic models are estimated to be typically up to 1-2 km, provided that the GOCE gravity gradient observables improved the medium-wavelength gravity spectra. The errors due to disregarding sedimentary basins can locally exceed ~2 km. The largest errors (which cause a systematic bias between isostatic and seismic models) are attributed to unmodeled mantle heterogeneities (including the core-mantle boundary) and other geophysical processes. These errors are mostly less than 2 km under significant orogens (Himalayas, Ural), but can reach up to ~10 km under the oceanic crust.

  • 26.
    Bagherbandi, Mohammad
    et al.
    KTH, School of Architecture and the Built Environment (ABE), Urban Planning and Environment, Geodesy and Geoinformatics. University of Gävle, Sweden.
    Tenzer, Robert
    Sjöberg, Lars E.
    KTH, School of Architecture and the Built Environment (ABE), Urban Planning and Environment, Geodesy and Geoinformatics.
    Abrehdary, Majid
    KTH, School of Architecture and the Built Environment (ABE), Urban Planning and Environment, Geodesy and Geoinformatics.
    On the residual isostatic topography effect in the gravimetric Moho determination2015In: Journal of Geodynamics, ISSN 0264-3707, E-ISSN 1879-1670, Vol. 83, p. 28-36Article in journal (Refereed)
    Abstract [en]

    In classical isostatic models, a uniform crustal density is typically assumed, disregarding the crustal density heterogeneities. This assumption, however, yields large errors in the Moho geometry determined from gravity data, because the actual topography is not fully isostatically compensated. Moreover, the sub-crustal density structures and additional geodynamic processes contribute to the overall isostatic balance. In this study we investigate the effects of unmodelled density structures and geodynamic processes on the gravity anomaly and the Moho geometry. For this purpose, we define the residual isostatic topography as the difference between the actual topography and the isostatic topography, which is computed utilizing the Vening Meinesz-Moritz isostatic theory. We show that the isostatic gravity bias due to the disagreement between the actual and isostatically compensated topography varies between 382 and 596 mGal. This gravity bias corresponds to a Moho correction term of 16 to 25 km. Numerical results reveal that the application of this Moho correction to the gravimetrically determined Moho depths significantly improves the RMS fit of our result with some published global seismic and gravimetric Moho models. We also demonstrate that the isostatic equilibrium at long-to-medium wavelengths (up to degree of about 40) is mainly controlled by a variable Moho depth, while the topographic mass balance at a higher-frequency spectrum is mainly attained by a variable crustal density.

  • 27.
    Bagherbandi, Mohammad
    et al.
    KTH, School of Architecture and the Built Environment (ABE), Urban Planning and Environment, Geodesy and Geoinformatics.
    Tenzer, Robert
    Sjöberg, Lars E.
    KTH, School of Architecture and the Built Environment (ABE), Urban Planning and Environment, Geodesy and Geoinformatics.
    Novak, Pavel
    Improved global crustal thickness modeling based on the VMM isostatic model and non-isostatic gravity correction2013In: Journal of Geodynamics, ISSN 0264-3707, E-ISSN 1879-1670, Vol. 66, p. 25-37Article in journal (Refereed)
    Abstract [en]

    In classical isostatic models for a gravimetric recovery of the Moho parameters (i.e., Moho depths and density contrast), the isostatic gravity anomalies are usually defined based on the assumption that the topographic mass surplus and the ocean mass deficiency are compensated within the Earth's crust. As shown in this study, this assumption yields large disagreements between isostatic and seismic Moho models. To assess the effects not accounted for in classical isostatic models, we conduct a number of numerical experiments using available global gravity and crustal structure models. First, we compute the gravitational contributions of mass density contrasts due to ice and sediments, and subsequently evaluate the respective changes in the Moho geometry. Residual differences between the gravimetric and seismic Moho models are then used to predict a remaining non-isostatic gravity signal, which is mainly attributed to unmodeled density structures and other geophysical phenomena. We utilize three recently developed computational schemes in our numerical studies. The apparatus of spherical harmonic analysis and synthesis is applied in forward modeling of the isostatic gravity disturbances. The Moho depths are estimated globally on a 1 arc-deg equiangular grid by solving the Vening-Meinesz Moritz inverse problem of isostasy. The same estimation model is applied to evaluate the differences between the isostatic and seismic models. We demonstrate that the application of the ice and sediment density contrast stripping gravity corrections is essential for a more accurate determination of the Moho geometry. We also show that the application of the additional non-isostatic correction further improves the agreement between the Moho models derived from gravity and seismic data. Our conclusions are based on comparing the gravimetric results with the CRUST2.0 global crustal model compiled from results of seismic surveys.

  • 28.
    Bagherbandi, Mohammad
    et al.
    KTH, School of Architecture and the Built Environment (ABE), Urban Planning and Environment, Geodesy and Geoinformatics.
    Sjöberg, Lars E.
    KTH, School of Architecture and the Built Environment (ABE), Urban Planning and Environment, Geodesy and Geoinformatics.
    Modelling the density contrast and depth of the Moho discontinuity by seismic and gravimetric-isostatic methods with an application to Africa2012In: Journal of African Earth Sciences, ISSN 1464-343X, Vol. 68, p. 111-120Article in journal (Refereed)
    Abstract [en]

    The crustal thickness (Moho depth) is of interest in several geoscience applications, such as geography, geophysics and geodesy. Usually the crustal depth and density variations are estimated from seismic survey data. As such data collection is very time-consuming and expensive, an attractive option is to use a gravimetric/isostatic model. In this case, realistic estimates of the crustal density and the Moho density contrast (MDC) are important. In this study, we first use the seismic crustal thickness of the CRUST2.0 model as a known parameter, in combination with gravimetric data, to estimate the crust-mantle density contrast by the isostatic model of Vening Meinesz-Moritz. We present different models to estimate the MDC and its impact on the modelling of the gravimetric-isostatic Moho depth. The theory is applied to estimate the Moho depth of the African continental crust using different models for the MDC: (a) a constant value (0.6 g/cm(3)), (b) Pratt-Hayford's model, and (c) CRUST2.0 as input to three gravimetric/isostatic models based on the Vening Meinesz-Moritz theory. The isostatic models agree with the regional seismic model to within 5.8-7.1 km in rms at a resolution of 2 degrees x 2 degrees, and the smallest rms difference at a resolution of 1 degree x 1 degree is 7.2 km. For comparison, the rms differences of CRUST2.0 and the regional seismic model are 8.8 and 9.1 km at resolutions of 2 degrees (interpolated) and 1 degree, respectively. The result suggests that the gravimetric/isostatic Moho model can be used to densify the CRUST2.0 Moho geometry and to improve it in areas with poor data.

  • 29.
    Balazadegan Sarvrood, Yashar
    et al.
    KTH, School of Architecture and the Built Environment (ABE), Urban Planning and Environment, Geodesy and Geoinformatics.
    Amin, Md Nurul
    KTH, School of Architecture and the Built Environment (ABE), Urban Planning and Environment, Geodesy and Geoinformatics.
    Server Based Real Time GPS-IMU Integration Aided by Fuzzy Logic Based Map Matching Algorithm for Car Navigation2011Independent thesis Advanced level (degree of Master (Two Years)), 30 credits / 45 HE creditsStudent thesis
    Abstract [en]

    The stand-alone Global Positioning System (GPS) or an Integrated GPS and Dead Reckoning Systems (such as Inertial Navigation System or Odometer and magnetometer) have been widely used for vehicle navigation. An essential process in such an application is to map match the position obtained from GPS (or/and other sensors) on a digital road network map.

    GPS positioning is relatively accurate in open-sky conditions, but not in dense urban canyons, where GPS is affected by signal blockage and multipath. High-sensitivity GPS (HS GPS) receivers can increase availability, but are affected by multipath and cross-correlation due to weak-signal tracking. An inertial navigation system can be used to bridge GPS gaps; however, position and velocity results in such conditions are typically biased. Therefore, fuzzy-logic-based map matching is mostly used, because it can take noisy, imprecise input and yield crisp (i.e. numerically accurate) output. Fuzzy logic can be applied effectively to map match the output from a high-sensitivity GPS receiver or an integrated GPS and INS in urban canyons because of its inherent tolerance to imprecise inputs.

    In this thesis, stand-alone GPS positioning and integrated GPS and Inertial Measurement Unit (IMU) positioning, aided by fuzzy-logic-based map matching, are performed for Stockholm urban and suburban areas. A comparison is carried out between map matching for stand-alone GPS and for integrated GPS and IMU. The stand-alone GPS-aided map matching algorithm identifies 96.4% of correct links for the rural area, 92.6% for the urban area (car test) and 93.4% for the bus test in the urban area. The integrated GPS and IMU-aided map matching algorithm identifies 97.3% of correct links for the rural area, 94.4% for the urban area (car test) and 94.4% for the bus test in the urban area. Integrated GPS and IMU produces a better vehicle azimuth than stand-alone GPS, especially at low speed. Furthermore, there are five more fuzzy rules, based on the gyro rate, in the integrated GPS and IMU map matching algorithm; therefore, it shows better map matching results. GPS blackouts happen rarely in Stockholm, because there are not many tall buildings in the city. Therefore, integrated GPS and IMU aided by map matching shows only a small improvement over stand-alone GPS aided by map matching.
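The fuzzy-logic map matching described above scores candidate road links from imprecise inputs such as distance and heading difference. A toy sketch with hypothetical triangular membership functions and a single fuzzy AND rule (not the thesis' actual rule base):

```python
def tri(x, a, b, c):
    """Triangular fuzzy membership function peaking at b, zero outside [a, c]."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def link_score(dist_m, heading_diff_deg):
    """Toy rule: a link is likely correct if the position fix is NEAR it AND
    the vehicle heading is ALIGNED with it (min acts as fuzzy AND).
    Membership shapes and thresholds are illustrative assumptions."""
    near = tri(dist_m, -30.0, 0.0, 30.0)
    aligned = tri(heading_diff_deg, -45.0, 0.0, 45.0)
    return min(near, aligned)

# the closer, better-aligned candidate link wins
print(link_score(5.0, 10.0) > link_score(25.0, 40.0))  # True
```

A real system such as the one in the thesis would add further rules (e.g. on speed and gyro rate, as the abstract notes) and defuzzify the aggregated output before selecting a link.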

  • 30.
    Ban, Yifang
    KTH, School of Architecture and the Built Environment (ABE), Urban Planning and Environment, Geodesy and Geoinformatics.
    Assessing the Impact of Landscape Dynamics on the Terrestrial Biodiversity Using Multisensor Remote Sensing Project #: DNR 151/05 & DNR 151/05:2: A Project Report Submitted to the Swedish National Space Board2010Report (Other academic)
  • 31.
    Ban, Yifang
    KTH, School of Architecture and the Built Environment (ABE), Urban Planning and Environment, Geodesy and Geoinformatics.
    ENVISAT ASAR Dual-Polarization Temporal Backscatter Profiles of Urban Land Covers2005In: The 9th International Symposium on Physical Measurements and Signatures in Remote Sensing (ISPMSRS), 2005Conference paper (Other academic)
  • 32.
    Ban, Yifang
    et al.
    KTH, School of Architecture and the Built Environment (ABE), Urban Planning and Environment, Geodesy and Geoinformatics.
    Fredman, David
    Jonsson, Martin
    Svensson, Leif
    Multi-Criteria Evaluations for Improved Placement of Defibrillators: Preliminary Results2013In: Circulation, ISSN 0009-7322, E-ISSN 1524-4539, Vol. 128, no 22, p. 78-Article in journal (Other academic)
  • 33.
    Ban, Yifang
    et al.
    KTH, School of Architecture and the Built Environment (ABE), Urban Planning and Environment, Geoinformatics.
    Jacob, Alexander
    KTH, School of Architecture and the Built Environment (ABE), Urban Planning and Environment, Geodesy and Geoinformatics. KTH, School of Architecture and the Built Environment (ABE), Urban Planning and Environment, Geoinformatics.
    Fusion of multitemporal spaceborne SAR and optical data for urban mapping and urbanization monitoring2016In: Remote Sensing and Digital Image Processing, ISSN 1567-3200, p. 107-123Article in journal (Refereed)
    Abstract [en]

    The overall objective of this research is to evaluate multitemporal spaceborne SAR and optical data for urban land-cover mapping and urbanization monitoring. Multitemporal Sentinel-1A SAR and historical ERS SAR and ENVISAT ASAR data, as well as HJ-1B multispectral data, were acquired over Beijing, Chengdu and Nanchang, China, where rapid urbanization has taken place. KTH-SEG, a novel object-based classification method, is adopted for urban land-cover mapping, while the KTH-Pavia Urban Extractor, a robust algorithm, is improved for urban extent extraction and urbanization monitoring. The research demonstrates that, for urban land-cover classification, the fusion of multitemporal SAR and optical data is superior to SAR or optical data alone. The second-best classification result is achieved using fusion of 4-date SAR and one HJ-1B image. The results indicate that a carefully selected multitemporal SAR dataset and its fusion with optical data can produce nearly as good classification accuracy as the whole multitemporal dataset. The results also show that KTH-SEG, the edge-aware region growing and merging segmentation algorithm, is effective for classification of SAR, optical and fused data. KTH-SEG outperforms eCognition, the commonly used commercial software, for image segmentation and classification of linear features. For urban extent extraction, single-date and multitemporal SAR data, including ERS SAR, ENVISAT ASAR and Sentinel-1A SAR, achieved very promising results in all study areas using the improved KTH-Pavia Urban Extractor. The results showed that urban areas as well as small towns and villages can be well extracted using multitemporal Sentinel-1A SAR data, while major urban areas can be well extracted using a single-date, single-polarization SAR image. The results clearly demonstrate that multitemporal SAR data are a cost- and time-effective way of monitoring spatiotemporal patterns of urbanization.

  • 34.
    Ban, Yifang
    et al.
    KTH, School of Architecture and the Built Environment (ABE), Urban Planning and Environment, Geodesy and Geoinformatics.
    Jacob, Alexander
    KTH, School of Architecture and the Built Environment (ABE), Urban Planning and Environment, Geodesy and Geoinformatics.
    Object-Based Fusion of Multitemporal Multiangle ENVISAT ASAR and HJ-1B Multispectral Data for Urban Land-Cover Mapping2013In: IEEE Transactions on Geoscience and Remote Sensing, ISSN 0196-2892, E-ISSN 1558-0644, Vol. 51, no 4, p. 1998-2006Article in journal (Refereed)
    Abstract [en]

    The objectives of this research are to develop robust methods for segmentation of multitemporal synthetic aperture radar (SAR) and optical data and to investigate the fusion of multitemporal ENVISAT advanced synthetic aperture radar (ASAR) and Chinese HJ-1B multispectral data for detailed urban land-cover mapping. Eight-date multiangle ENVISAT ASAR images and one-date HJ-1B charge-coupled device image acquired over Beijing in 2009 are selected for this research. The edge-aware region growing and merging (EARGM) algorithm is developed for segmentation of SAR and optical data. Edge detection using a Sobel filter is applied on SAR and optical data individually, and a majority voting approach is used to integrate all edge images. The edges are then used in a segmentation process to ensure that segments do not grow over edges. The segmentation is influenced by minimum and maximum segment sizes as well as the two homogeneity criteria, namely, a measure of color and a measure of texture. The classification is performed using support vector machines. The results show that our EARGM algorithm produces better segmentation than eCognition, particularly for built-up classes and linear features. The best classification result (80%) is achieved using the fusion of eight-date ENVISAT ASAR and HJ-1B data. This represents 5%, 11%, and 14% improvements over eCognition, HJ-1B, and ASAR classifications, respectively. The second best classification is achieved using fusion of four-date ENVISAT ASAR and HJ-1B data (78%). The result indicates that fewer multitemporal SAR images can achieve similar classification accuracy if multitemporal multiangle dual-look-direction SAR data are carefully selected.
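    The edge-integration step described above — per-image Sobel edge detection followed by majority voting across all dates — can be sketched as below. This is a minimal NumPy illustration, not the authors' EARGM implementation; the threshold and image sizes are arbitrary.

    ```python
    import numpy as np

    def sobel_edges(img, thresh):
        """Binary edge map from Sobel gradient magnitude (pure NumPy)."""
        kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
        ky = kx.T
        pad = np.pad(np.asarray(img, dtype=float), 1, mode="edge")
        h, w = pad.shape[0] - 2, pad.shape[1] - 2
        mag = np.zeros((h, w))
        for i in range(h):
            for j in range(w):
                win = pad[i:i + 3, j:j + 3]
                mag[i, j] = np.hypot((win * kx).sum(), (win * ky).sum())
        return mag > thresh

    def majority_vote_edges(edge_maps):
        """A pixel is kept as an edge if more than half of the per-image
        edge maps (SAR dates and the optical image) mark it."""
        stack = np.stack(edge_maps).astype(int)
        return stack.sum(axis=0) > stack.shape[0] / 2.0
    ```

    In the segmentation stage, regions are then grown so that they never cross the voted edge pixels, which is what keeps segment borders aligned with linear features.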

  • 35.
    Ban, Yifang
    et al.
    KTH, School of Architecture and the Built Environment (ABE), Urban Planning and Environment, Geodesy and Geoinformatics.
    Jacob, Alexander
    KTH, School of Architecture and the Built Environment (ABE), Urban Planning and Environment, Geodesy and Geoinformatics.
    Gamba, Paolo
    University of Pavia, Pavia, Italy.
    Spaceborne SAR Data for Global Urban Mapping at 30m Resolution Utilizing a Robust Urban Extractor2015In: ISPRS journal of photogrammetry and remote sensing (Print), ISSN 0924-2716, E-ISSN 1872-8235, Vol. 103Article in journal (Refereed)
    Abstract [en]

    With more than half of the world population now living in cities and 1.4 billion more people expected to move into cities by 2030, urban areas pose significant challenges to the local, regional and global environment. Timely and accurate information on the spatial distribution and temporal changes of urban areas is therefore needed to support sustainable development and environmental change research. The objective of this research is to evaluate spaceborne SAR data for improved global urban mapping using a robust processing chain, the KTH-Pavia Urban Extractor. The proposed processing chain includes urban extraction based on spatial indices and Grey Level Co-occurrence Matrix (GLCM) textures, an existing method, and several improvements, i.e., SAR data preprocessing, enhancement and post-processing. ENVISAT Advanced Synthetic Aperture Radar (ASAR) C-VV data at 30 m resolution were selected over 10 global cities and a rural area from six continents to demonstrate the robustness of the improved method. The results show that the KTH-Pavia Urban Extractor is effective in extracting urban areas and small towns from ENVISAT ASAR data, and that built-up areas can be mapped at 30 m resolution with very good accuracy using only one or two SAR images. These findings indicate that operational global urban mapping is possible with spaceborne SAR data, especially with the launch of Sentinel-1, which provides SAR data with global coverage, operational reliability and quick data delivery.
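    The GLCM texture computation mentioned in the abstract can be illustrated as below — a single-displacement co-occurrence matrix and its contrast feature. This is a sketch, not the paper's processing chain; the image is assumed to be pre-quantized and the displacement and grey-level count are illustrative.

    ```python
    import numpy as np

    def glcm(img, dx=1, dy=0, levels=8):
        """Normalized grey-level co-occurrence matrix for one pixel
        displacement (dx, dy); img must be pre-quantized to [0, levels)."""
        g = np.zeros((levels, levels))
        h, w = img.shape
        for i in range(h - dy):
            for j in range(w - dx):
                g[img[i, j], img[i + dy, j + dx]] += 1
        return g / g.sum()

    def glcm_contrast(g):
        """Contrast = sum over (i, j) of (i - j)^2 * p(i, j): high over
        speckled built-up areas, low over smooth surfaces such as water."""
        n = g.shape[0]
        i, j = np.meshgrid(np.arange(n), np.arange(n), indexing="ij")
        return float(((i - j) ** 2 * g).sum())
    ```

    Texture features like this, computed per pixel over a moving window, are what separate the bright, heterogeneous built-up areas from other high-backscatter targets in the SAR image.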

  • 36.
    Ban, Yifang
    et al.
    KTH, School of Architecture and the Built Environment (ABE), Urban Planning and Environment, Geodesy and Geoinformatics.
    Jian, L.
    KTH, School of Architecture and the Built Environment (ABE), Urban Planning and Environment, Geodesy and Geoinformatics.
    Kazi, I.
    KTH, School of Architecture and the Built Environment (ABE), Urban Planning and Environment, Geodesy and Geoinformatics.
    Ihse, M.
    Stockholm University.
    Synergy of ENVISAT ASAR and MERIS Data for Landuse/Land-Cover Mapping: Earsel symposium, Warsaw, Poland2006Other (Other academic)
  • 37.
    Ban, Yifang
    et al.
    KTH, School of Architecture and the Built Environment (ABE), Urban Planning and Environment, Geodesy and Geoinformatics.
    Yousif, Osama A.
    KTH, School of Architecture and the Built Environment (ABE), Urban Planning and Environment, Geodesy and Geoinformatics.
    Multitemporal Spaceborne SAR Data for Urban Change Detection in China2012In: IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, ISSN 1939-1404, E-ISSN 2151-1535, Vol. 5, no 4, p. 1087-1094Article in journal (Refereed)
    Abstract [en]

    The objective of this research is to examine effective methods for urban change detection using multitemporal spaceborne SAR data in two rapidly expanding cities in China. One scene of ERS-2 SAR C-VV imagery was acquired over Beijing in 1998 and over Shanghai in 1999, and one scene of ENVISAT ASAR C-VV imagery was acquired on near-anniversary dates in 2008 over each city. To compare the SAR images from different dates, a modified ratio operator that takes into account both positive and negative changes was developed to derive a change image. A generalized version of the Kittler-Illingworth minimum-error thresholding algorithm was then tested to automatically classify the change image into two classes, change and no change. Various probability density functions, such as the lognormal, generalized Gaussian, Nakagami ratio and Weibull ratio, were investigated to model the distributions of the change and no-change classes. The results showed that the Kittler-Illingworth algorithm applied to the modified ratio image is very effective in detecting temporal changes in urban areas using SAR images. The lognormal and Nakagami density models achieved the best results. The Kappa coefficients of these methods were 0.82 and 0.71 for Beijing and Shanghai respectively, while the false alarm rates were 2.7% and 4.75%. The findings indicated that the change accuracies obtained using the Kittler-Illingworth algorithm vary depending on how well the assumed conditional class density function fits the histograms of the change and no-change classes.
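    The thresholding step can be sketched as follows. Two simplifications are assumed here: a plain log-ratio image stands in for the paper's modified ratio operator, and the classic two-Gaussian form of the Kittler-Illingworth criterion stands in for the generalized version with the density models tested in the paper.

    ```python
    import numpy as np

    def log_ratio(img1, img2, eps=1e-6):
        """Symmetric change image: positive and negative backscatter
        changes map to positive and negative log-ratio values."""
        return np.log((img2 + eps) / (img1 + eps))

    def kittler_illingworth(values, n_bins=256):
        """Minimum-error threshold assuming two Gaussian classes
        (change / no change)."""
        hist, edges = np.histogram(values, bins=n_bins)
        p = hist / hist.sum()
        mids = 0.5 * (edges[:-1] + edges[1:])
        best_t, best_j = mids[0], np.inf
        for t in range(1, n_bins - 1):
            p1, p2 = p[:t].sum(), p[t:].sum()
            if p1 < 1e-9 or p2 < 1e-9:
                continue
            m1 = (p[:t] * mids[:t]).sum() / p1
            m2 = (p[t:] * mids[t:]).sum() / p2
            s1 = np.sqrt((p[:t] * (mids[:t] - m1) ** 2).sum() / p1)
            s2 = np.sqrt((p[t:] * (mids[t:] - m2) ** 2).sum() / p2)
            if s1 < 1e-9 or s2 < 1e-9:
                continue
            # Kittler-Illingworth criterion J(T) for two Gaussian classes
            jt = 1.0 + 2.0 * (p1 * np.log(s1) + p2 * np.log(s2)) \
                     - 2.0 * (p1 * np.log(p1) + p2 * np.log(p2))
            if jt < best_j:
                best_j, best_t = jt, mids[t]
        return best_t
    ```

    On a clearly bimodal change image the minimizer of J(T) lands between the two class modes, which is why no manual threshold tuning is needed.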

  • 38.
    Batsos, Epameinondas
    et al.
    KTH, School of Architecture and the Built Environment (ABE), Urban Planning and Environment, Geodesy and Geoinformatics.
    Rabbi, Atta
    KTH, School of Architecture and the Built Environment (ABE), Urban Planning and Environment, Geodesy and Geoinformatics.
    Clustering and cartographic simplification of point data set2012Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE creditsStudent thesis
    Download full text (pdf)
    fulltext
  • 39.
    Bednjanec, Martina
    KTH, School of Architecture and the Built Environment (ABE), Urban Planning and Environment, Geodesy and Geoinformatics.
    Calibration of ALS Intensity Data2011Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE creditsStudent thesis
    Abstract [en]

    With general advancements in computer technology and the development of direct geo-referencing technology, such as GPS, airborne laser scanning systems came into widespread use, especially after 2002. In spite of a relatively high cost of purchase, the systems proved to be cost effective, providing fast, large-volume 3D geospatial data acquisition with unprecedented accuracy and relatively modest processing complexity. Country-wide collection of laser scanning data, mainly for DTM derivation, is becoming an attractive possibility for mapping. Since 2009, the Swedish Government has approved and financed the project of developing the New National Elevation Model (NNH) for Sweden, with a view to monitoring climate change and other environmental impacts. The National Land Survey of Sweden, which is commissioned to carry out the project, is offering this highly accurate scanned data at lower cost to secondary users, such as companies specialized in forestry applications. Besides the geospatial data (X, Y, Z), laser systems additionally record the received signal intensity for each measurement. So far, intensity values have just been an additional variable, not used extensively, but in recent years many efforts have been made to understand and interpret these values. The wider use of intensity data is hindered by the lack of techniques to calibrate it, so that it represents values proportional to the scattering characteristics of the target. In the scope of this thesis it was examined which properties influence intensity values and to what degree. Previously proposed calibration methods were summarized and the most suitable one was implemented, based on the data and instruments of the NNH project. The results proved to be good both empirically and visually, with reduced intensity variations over the same targets. The potential of using this corrected data is presented, in applications such as surface classification, automatic object recognition, multi-temporal analysis, and more.
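    A common range and incidence-angle correction of the kind examined in such calibration work can be sketched as below. The radar-equation-based model is standard in the ALS literature, but the reference range is illustrative and this is not necessarily the exact model implemented in the thesis.

    ```python
    import math

    def calibrate_intensity(raw, range_m, inc_deg, ref_range=1000.0):
        """Normalize raw ALS intensity to a reference range and to normal
        incidence using the common model
        I_cal = I_raw * (R / R_ref)^2 / cos(incidence).
        ref_range is an illustrative value, not the NNH figure."""
        return raw * (range_m / ref_range) ** 2 / math.cos(math.radians(inc_deg))
    ```

    After this correction, the same target observed at different flying heights yields comparable intensity values, which is the precondition for using intensity in classification.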

    Download full text (pdf)
    Calibration_of_ALS_Intensity_Data
  • 40.
    Bergsjö, Joline
    KTH, School of Architecture and the Built Environment (ABE), Urban Planning and Environment, Geodesy and Geoinformatics.
    Object based change detection in urban area using KTH-SEG2014Independent thesis Basic level (degree of Bachelor), 10 credits / 15 HE creditsStudent thesis
    Abstract [en]

    Today more and more people are moving to cities around the world. This puts a lot of strain on the infrastructure as the cities grow in both width and height. Remote sensing is an effective tool for monitoring this ongoing change, and ways to make it even more effective, accurate and easier to use are constantly sought after.

    One way to monitor change is object-based change detection. The idea has been around since the seventies, but it wasn't until the early 2000s, when Blaschke and Strobl (2001) introduced it as a solution to the issues with pixel-based analysis, that it became popular with remote sensing analysts around the world.

    KTH-SEG, developed at KTH Geoinformatics, is designed to segment images in order to perform object-based analysis; it can also be used for classification.

    In this thesis, object-based change detection over an area of Shanghai is carried out. Two different approaches are used: post-classification analysis and the creation of change detection images. The maps are assessed using the maximum likelihood report in the Geomatica software.

    The segmentation and classification are done using KTH-SEG; training areas and ground-truth polygons are drawn in ArcGIS, and pre-processing and other operations are carried out using Geomatica.

    KTH-SEG offers a number of changeable settings that allow the segmentation to suit the image at hand. It is easy to use and produces well-defined classification maps that are usable for change detection.

    The results are evaluated in order to estimate the efficiency of object-based change detection in urban areas, and KTH-SEG is appraised as a segmentation and classification tool.

    The results show that the post-classification approach is superior to the change detection images. Whether the poor result of the change detection images is caused by parameters other than the object-based approach cannot be determined.

    Download full text (pdf)
    Bergsjö 2014 KEX
  • 41.
    Bin, Jiang
    et al.
    Geomatics, University of Gävle.
    Xintao, Liu
    KTH, School of Architecture and the Built Environment (ABE), Urban Planning and Environment, Geodesy and Geoinformatics.
    Jia, Tao
    KTH, School of Architecture and the Built Environment (ABE), Urban Planning and Environment, Geodesy and Geoinformatics.
    Scaling of Geographic Space as a Universal Rule for Mapping or Cartographic Generalization2011Article in journal (Other academic)
    Abstract [en]

    Mapping, or cartographic generalization in particular, is a process of producing maps at different levels of detail by retaining essential properties of the underlying geographic space. In this paper, we explore how the mapping or cartographic generalization process can be guided by the underlying scaling of geographic space. Scaling refers to the fact that in a large geographic area small objects are far more common than large ones. In the corresponding probability density function, this scaling is reflected as a heavy-tailed distribution such as a power law, lognormal or exponential distribution. In essence, any heavy-tailed distribution consists of the head of the distribution (with a low percentage of objects) and the tail of the distribution (with a high percentage of objects). We therefore suggest that the objective of the mapping process is to retain the objects in the head yet eliminate those in the tail. We applied this selection principle to several generalization experiments, and found that the scaling of geographic space can indeed be a universal rule for mapping and cartographic generalization. We further relate the universal rule to Töpfer's radical law (or trained cartographers' decision making in general), and illustrate several advantages of the universal rule compared to the radical law.
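    The head/tail selection principle can be sketched as follows: given a heavy-tailed set of object sizes, retain the minority above the mean (the head) and eliminate the rest. A minimal illustration; the arithmetic mean is one common split criterion, not necessarily the only one used in the paper's experiments.

    ```python
    def head_tail_split(values):
        """Split a heavy-tailed value set at its arithmetic mean: the head
        (few, large objects) is retained on the generalized map, the tail
        (many small objects) is eliminated."""
        mean = sum(values) / len(values)
        head = [v for v in values if v > mean]
        tail = [v for v in values if v <= mean]
        return head, tail
    ```

    For a heavy-tailed input the head is always the smaller partition, which is what makes the split a natural selection rule for generalization.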

  • 42.
    Blänning, Erik
    et al.
    KTH, School of Architecture and the Built Environment (ABE), Urban Planning and Environment, Geodesy and Geoinformatics.
    Ivarsson, Caroline
    KTH, School of Architecture and the Built Environment (ABE), Urban Planning and Environment, Geodesy and Geoinformatics.
    Assessment of Placing of Field Hospitals After the 2010 Haiti EarthquakeUsing Geospatial Data2012Independent thesis Basic level (degree of Bachelor), 10 credits / 15 HE creditsStudent thesis
    Abstract [en]

    When natural disasters such as earthquakes happen, there is a need for an efficient method to support humanitarian aid organizations in the decision-making process. One such decision is the placement of Foreign Field Hospitals (FFHs) to assist with medical help. To support such a decision, a lot of different information and data needs to be gathered and combined. The main objectives of this thesis are to collect existing data published shortly after the 2010 earthquake in Haiti as well as data published up to two months after the earthquake. The data is then evaluated for its adequacy for analysis, and the result of the analysis is compared to the actual placements of the field hospitals after the 2010 earthquake. The method used in this analysis is Multi-Criteria Evaluation (MCE). Data regarding population, elevation, roads, land use, damage, climate, water, health facility locations and airport location are collected and weighted relative to each other using the Analytic Hierarchy Process (AHP), with weights retrieved from a questionnaire sent out to Non-Governmental Organizations (NGOs) and countries involved in the disaster relief. The result obtained from the MCE is a final suitability map depicting areas that are suitable according to the different factors. Data availability was an issue for the thesis project, due to the lack of data published shortly after the earthquake. Some of the data used in the analysis does not have a sufficient level of detail. Still, an analysis can be performed in which suitable areas are obtained. The suitable locations found in the analysis agree well in most cases with where the actual FFHs were placed; however, a few locations are not in proximity to the suitable areas. A few of the locations were in areas exposed to frequent floods. Even though the data availability and quality leave something to be desired, the analysis method shows promising results for future research. The approach could help aggregate information from different sources and provide support in pre-dispatch organization, by having a set of suitable locations ready on arrival.
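    The AHP weighting step described above can be illustrated with a small sketch: criterion weights are the principal eigenvector of a pairwise comparison matrix, approximated here by power iteration, with Saaty's consistency ratio as a sanity check. The 3-criterion matrix in the test is hypothetical, not taken from the thesis questionnaire.

    ```python
    import numpy as np

    def ahp_weights(pairwise, iters=100):
        """Criterion weights as the principal eigenvector of a pairwise
        comparison matrix, computed via power iteration."""
        A = np.asarray(pairwise, dtype=float)
        w = np.ones(A.shape[0])
        for _ in range(iters):
            w = A @ w
            w /= w.sum()
        return w

    def consistency_ratio(pairwise, weights):
        """Saaty's consistency ratio; judgment matrices below ~0.10 are
        conventionally accepted (random index table truncated here)."""
        A = np.asarray(pairwise, dtype=float)
        n = A.shape[0]
        lam = float(((A @ weights) / weights).mean())
        ci = (lam - n) / (n - 1)
        ri = {3: 0.58, 4: 0.90, 5: 1.12, 6: 1.24, 7: 1.32, 8: 1.41, 9: 1.45}[n]
        return ci / ri
    ```

    The weights are then applied as multipliers to the normalized factor rasters in the MCE overlay to produce the suitability map.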

    Download full text (pdf)
    fulltext
  • 43.
    Bobrinskaya, Maria
    KTH, School of Architecture and the Built Environment (ABE), Urban Planning and Environment, Geodesy and Geoinformatics.
    Remote Sensing for Analysis of Relationships between Land Cover and Land Surface Temperature in Ten Megacities2012Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE creditsStudent thesis
    Abstract [en]

    Urbanization is one of the most significant phenomena of anthropogenic influence on the Earth's environment. One of the principal results of urbanization is the creation of megacities, with their local climate and high impact on the surrounding area. The design and evolution of an urban area lead to higher absorption of solar radiation and heat storage, which is the foundation of the urban heat island phenomenon. Remote sensing data is a valuable source of information for urban climatology studies. The main objective of this thesis research is to examine the relationship between land use and land cover types and the corresponding land surface temperature, as well as the urban heat island effect and changes in these factors over a 10-year period. Ten megacities around the world were included in this study, namely Beijing (China), Delhi (India), Dhaka (Bangladesh), Los Angeles (USA), London (UK), Mexico City (Mexico), Moscow (Russia), New York City (USA), Sao Paulo (Brazil) and Tokyo (Japan).

    Landsat satellite data were used to extract land use/land cover information and its changes for the abovementioned cities. Land surface temperature was retrieved from Landsat thermal images. The relationship between land surface temperature and land use/land cover classes, as well as the normalized difference vegetation index (NDVI), was analyzed.

    The results indicate that land surface temperature can be related to land use/land cover classes in most cases. Vegetated and undisturbed natural areas have lower surface temperatures than developed urban areas with little vegetation. However, the cities show different trends, both in terms of the size and the spatial distribution of the urban heat island. Also, megacities in developed countries tend to grow at a slower pace and thus face weaker urban heat island effects than megacities in developing countries.

    Download full text (pdf)
    fulltext
  • 44. Bozic, B.
    et al.
    Fan, Huaan
    KTH, School of Architecture and the Built Environment (ABE), Urban Planning and Environment, Geodesy and Geoinformatics.
    Milosavljevic, Z.
    Establishment of the MGI EDM calibration baseline2013In: Survey review - Directorate of Overseas Surveys, ISSN 0039-6265, E-ISSN 1752-2706, Vol. 45, no 331, p. 263-268Article in journal (Refereed)
    Abstract [en]

    This paper deals with the estimation of the quality of the baseline for the calibration of distance measurement devices, which was established by the Serbian Military Geographic Institute for military use. The basic characteristics of the baseline are explained, and a plan for checking the baseline quality is proposed. The measurements realised so far can be grouped into two phases. The measurements have been processed, and estimates of the distances of this length standard have been obtained. The standard deviations of the least squares estimates of the lengths were better than 0.3 mm in each epoch. This precision offers the possibility of checking all measurement devices with a minimum calibration uncertainty of ±(1 mm + 1 ppm). The stability of the pillars is also analysed. The conventional deformation analysis method was applied to three datasets and the results obtained by evaluating them are shown.

  • 45.
    Bronder, Axel
    et al.
    KTH, School of Architecture and the Built Environment (ABE), Urban Planning and Environment, Geodesy and Geoinformatics.
    Persson, Erik
    KTH, School of Architecture and the Built Environment (ABE), Urban Planning and Environment, Geodesy and Geoinformatics.
    Design, Implementation and Evaluation of a Mobile GIS Solution for a Land Registration Project in Lesotho2013Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE creditsStudent thesis
    Abstract [en]

    This thesis describes in detail how a mobile Geographical Information System (GIS) was designed, implemented and evaluated for the prevailing circumstances of a land regularisation project in Lesotho in Southern Africa. The GIS was developed as an application for the Android platform, primarily with the tablet-computer format in mind, to be used for land registration field work.

    The main purpose of the paper is to determine whether an ad hoc mobile GIS solution can improve the efficiency of the field work as well as the integrity of the data collected in the field work of the land regularisation project in Lesotho. The aim is also to evaluate the performance and usefulness of modern tablet computers in this context.

    The application was developed and tested on site in Lesotho on two tablet computers borrowed from Samsung Electronics AB in Sweden. After the development phase, the solution was implemented on tablet computers of a different model for the remainder of the land regularisation project in Lesotho.

    The design process started with a field visit where the work-flow of the project was analysed. From this, a needs analysis was formed together with the management staff on site, which served as a base for the development process. The development and implementation were then performed with continuous communication with, and evaluation by, the personnel of the project. As the development progressed, the solution was also tested and evaluated continuously in the field work.

    Not only did the solution perform well both software- and hardware-wise, despite strong sun at high altitude and a lack of internet connection in Lesotho, it also exceeded the expectations of the staff. According to the evaluation, the solution significantly improved the work environment for the field workers of the project and raised efficiency. A unified management staff concludes in the evaluation of this paper that they will consider using tablet computers together with an ad hoc application for the field work of their next project.

    Download full text (pdf)
    MScThesis_Geoinformatics_AxelBronder_ErikPersson
  • 46.
    Bronder, Axel Viking
    KTH, School of Architecture and the Built Environment (ABE), Urban Planning and Environment, Geodesy and Geoinformatics.
    Visualization of a SWEPOS Coordinate Analysis2011Independent thesis Basic level (degree of Bachelor), 10 credits / 15 HE creditsStudent thesis
    Abstract [en]

    Lantmäteriet, the Swedish Mapping, Cadastral and Land Registration Authority, is responsible for the operation and maintenance of SWEPOS and SWEREF 99 (the Swedish official reference frame) and therefore also responsible for control and verification of the data provided by the reference stations. Clas-Göran Persson at Lantmäteriet has created a new controlling procedure, including software, to analyze the position uncertainties of the SWEPOS stations using various statistical methods. The software evaluates the daily calculated coordinate data of the stations subsequently; it will not be installed directly on the actual stations. The primary goal is to study the stability of SWEPOS from a SWEREF 99 point of view, not to analyze the SWEPOS real-time service. The controlling procedure is referred to as "the CGP Program". The CGP Program is a toolbox of statistical methods created in MATLAB, determining standard deviation, correlation, distribution (outliers) and more. Its main purpose is to determine whether the SWEPOS data consists of uniform, uncorrelated, normally distributed deviations, known as white noise, or not. The purpose of my thesis is, based on this new controlling procedure, to create a graphical overview of the current status of the SWEPOS network for Lantmäteriet. Instead of making a thorough analysis of each station, the maps created in this thesis visualize the outcome on an overall basis and identify the stations and areas of interest for further analysis. Together with representatives from Lantmäteriet we decided that three different map types were of interest. All the maps were to be based on SWEPOS data from 2010, analyzed by the CGP Program, and visualized on a nationwide basis. They differ in their cartographic appearance and they all describe different characteristics of the SWEPOS stations. Conclusions from the maps and the numerical analyses:

    * There is a clear "winter effect", most obvious in the height coordinate. Removal of the snow period results in lower standard deviations and fewer unwanted systematic effects.

    * The Northing coordinate has a slightly higher standard deviation than the Easting coordinate. The standard deviation in Height is around 50% larger than the horizontal standard deviation.

    * There is no evidence for physical movements, when comparing the official SWEREF 99 coordinates with the 2010 positions.
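    A basic check of whether a daily coordinate series behaves as white noise, of the kind the CGP Program's toolbox performs, can be sketched with the lag-1 sample autocorrelation. This is a simple stand-in for the program's MATLAB tests, not its actual code.

    ```python
    def lag1_autocorrelation(series):
        """Lag-1 sample autocorrelation of a daily coordinate series:
        near zero for white noise, markedly non-zero when systematic
        effects (e.g. a winter/snow signal) are present."""
        n = len(series)
        mean = sum(series) / n
        num = sum((series[i] - mean) * (series[i + 1] - mean)
                  for i in range(n - 1))
        den = sum((x - mean) ** 2 for x in series)
        return num / den
    ```

    A strongly positive value on the height component, for instance, would be consistent with the slowly varying winter effect noted in the conclusions.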

    Download full text (pdf)
    fulltext
  • 47. Cao, J.
    et al.
    Yang, L.
    Zheng, X.
    Liu, B.
    Zhao, L.
    Ni, X.
    Dong, F.
    Mao, Bo
    KTH, School of Architecture and the Built Environment (ABE), Urban Planning and Environment, Geodesy and Geoinformatics.
    Social attribute based web service information publication mechanism in delay tolerant network2011In: Proc. - IEEE Int. Conf. Comput. Sci. Eng., CSE Int. Symp. Pervasive Syst., Algorithms, Networks, I-SPAN IEEE Int. Conf. IUCC, 2011, p. 435-442Conference paper (Refereed)
    Abstract [en]

    The intermittence of the connections between nodes and limited resources greatly impair the effectiveness of service information publication in a Delay Tolerant Network (DTN). To overcome this problem, a multi-layer cooperative service information publication mechanism is proposed in this paper. Firstly, the social interaction network and the service overlay network are established in the form of abstract weighted graphs. Then, a community division algorithm is used to analyze the social characteristics of service interaction, and the social-attribute-based DTN model S-DTN is constructed. Finally, the carrier nodes for information publication are selected from the neighbor set by computing a utility function based on node context, and a multi-layer cooperative mechanism is proposed to achieve effective service information publication in the DTN. The experimental results indicate that the proposed publication mechanism achieves nearly the same success ratio as Epidemic Routing, with lower delay and network load. Additionally, it shows better performance in overall metrics than Prophet Routing.

  • 48.
    Chekole, Solomon Dargie
    KTH, School of Architecture and the Built Environment (ABE), Urban Planning and Environment, Geodesy and Geoinformatics.
    Surveying with GPS, total station and terrestrial laser scanner: a comparative study2014Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE creditsStudent thesis
    Download full text (pdf)
    Solomon Thesis
  • 49.
    Danila, Uliana
    KTH, School of Architecture and the Built Environment (ABE), Urban Planning and Environment, Geodesy and Geoinformatics.
    Mold2012: a new gravimetric quasigeoid model over Moldova2012Licentiate thesis, monograph (Other academic)
    Abstract [en]

    In order to be able to use the operational Moldavian GNSS Positioning System MOLDPOS efficiently for the determination of normal heights in surveying engineering, e.g. during the construction of a road, an accurate quasigeoid model is needed. The main goal of this thesis is to present a new gravimetric quasigeoid model for Moldova (Mold2012), which has been determined by applying the Least Squares Modification of Stokes’ formula with Additive corrections (LSMSA), also called the KTH method. Due to limited coverage of gravity data, the integration area is often limited to a small spherical cap around the computation point, which leads to a truncation error for geoid height. Molodensky et al. (1962) showed that the truncation error can be reduced by the modification of Stokes’ formula, where the measured gravity data are combined with the low-frequency component of the geoid from a Global Gravitational Model (GGM). The LSMSA technique combines the GGM and the terrestrial data in an optimum way.

    In order to find the most suitable modification approach or cap size it is necessary to compare the gravimetric height anomalies with the GPS/levelling derived height anomalies, and for this purpose we use a GPS/levelling dataset that consists of 1042 points with geodetic coordinates in the MOLDREF99 reference system and normal heights at the same points given in the height system Baltic 77.

    The magnitude of the additive corrections varies within an interval from -0.6 cm to -4.3 cm over the area of Moldova. The quasigeoid model which results from combining the ITG-Grace02s solution (with n = M = 170, ψ0 = 3° and σΔg = 10 mGal) and the solution obtained from the modified Stokes’ formula together with the additive correction gives the best fit for the GPS/levelling data with a standard deviation (STD) of ±7.8 cm. The evaluation of the computed gravimetric quasigeoid is performed by comparing the gravimetric height anomalies with the GPS/levelling derived height anomalies for 1042 points.

    However, the above heterogeneous data include outliers, and in order to find and eliminate these, a corrector surface model is used. This surface provides a connection to the local vertical when the GNSS technique is used. After the elimination of the suspicious outliers (170 points) according to a 2-RMS test, a new corrective surface was computed based on the remaining 872 GPS/levelling points, and the STD of residuals became ±4.9 cm. The STD value for the residuals according to the order of the levelling network for the Mold2012 fitted to the local vertical datum is 3.8 cm for the I-order, 4.3 cm for the II-order, 4.5 cm for the III-order and 5.0 cm for the IV-order levelling network. But the STD of the residuals for the 18 control points indicates a better result where the STD is 3.6 cm and RMS is 3.9 cm and the min and max value of residuals is -5.3 cm and 9.0 cm, respectively.

    The STD of the differences in height anomaly is not just the standard error of the height anomalies (the quasigeoid model); it also contains the standard errors of the GPS heights and of the normal heights. Assuming that the latter STDs are 3 cm and 3.5 cm, respectively, the STD of Mold2012 is estimated at 1.7 cm.
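    The 2-RMS outlier test used above to screen the GPS/levelling residuals can be sketched as below — one pass shown; in practice the corrective surface is recomputed after the flagged points are removed.

    ```python
    def two_rms_filter(residuals):
        """Classify residuals by the 2-RMS test: values whose magnitude
        exceeds twice the RMS of the whole set are flagged as outliers."""
        rms = (sum(r * r for r in residuals) / len(residuals)) ** 0.5
        kept = [r for r in residuals if abs(r) <= 2 * rms]
        rejected = [r for r in residuals if abs(r) > 2 * rms]
        return kept, rejected
    ```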

    Download full text (pdf)
    fulltext
  • 50.
    Demšar, Urska
    KTH, School of Architecture and the Built Environment (ABE), Urban Planning and Environment, Geodesy and Geoinformatics.
    A strategy for observing soil moisture by remote sensing in the Murray-Darling basin2005In: Proceedings 2005: The 8th AGILE International Conference on Geographic Information Science, AGILE 2005, 2005Conference paper (Refereed)
    Abstract [en]

    The article presents a proposal for a strategy for integrating data from the new environmental satellites SMOS, HYDROS and ALOS in a system for monitoring soil moisture in the Murray-Darling basin in Australia. The proposal was developed as one of the results of the team project STREAM during the Summer Session Programme 2004 of the International Space University.
