Visualisation and Generalisation of 3D City Models
KTH, School of Architecture and the Built Environment (ABE), Urban Planning and Environment, Geoinformatik och Geodesi.
2011 (English). Doctoral thesis, comprehensive summary (Other academic)
Abstract [en]

3D city models are widely used in applications such as urban planning, traffic control and disaster management. Efficient visualisation of 3D city models at different levels of detail (LODs) is one of the pivotal technologies supporting these applications. In this thesis, a framework is proposed for visualising 3D city models online. Generalisation methods are then studied and tailored to create 3D city scenes at different scales dynamically. Multiple representation structures are designed to preserve the generalisation results at the different levels. Finally, the quality of the generalised 3D city models is evaluated by measuring their visual similarity with the original models.

 

In the proposed online visualisation framework, City Geography Markup Language (CityGML) is used to represent the city models; 3D scenes in Extensible 3D (X3D) are then generated from the CityGML data and dynamically delivered to the client side, where they are visualised in Web Graphics Library (WebGL) supported browsers using the X3D Document Object Model (X3DOM) technique. The proposed framework can be implemented in mainstream browsers without specific plug-ins, but it only supports online visualisation of 3D city models over small areas. For visualisation of large data volumes, generalisation methods and multiple representation structures are required.
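To make the conversion step concrete, the sketch below (Python with lxml; the helper name and the hard-coded wall coordinates are illustrative assumptions, not the thesis implementation) turns a CityGML-style gml:posList polygon into an X3D IndexedFaceSet element of the kind X3DOM renders directly in a WebGL-capable browser.

from lxml import etree

def poslist_to_x3d_shape(poslist: str) -> etree._Element:
    """Convert a gml:posList coordinate string (x y z triplets) into an X3D Shape."""
    coords = [float(v) for v in poslist.split()]
    n_points = len(coords) // 3
    shape = etree.Element("Shape")
    face = etree.SubElement(
        shape, "IndexedFaceSet",
        coordIndex=" ".join(str(i) for i in range(n_points)) + " -1")
    etree.SubElement(face, "Coordinate", point=" ".join(str(c) for c in coords))
    return shape

# One rectangular wall polygon, serialised as an X3D fragment for the client side.
wall = poslist_to_x3d_shape("0 0 0  10 0 0  10 0 5  0 0 5")
print(etree.tostring(wall, pretty_print=True).decode())

In the framework described above, fragments of this kind would be generated on the server from the CityGML data and fetched incrementally by the browser as the view changes.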

 

To reduce the 3D data volume and increase visualisation efficiency, various generalisation methods are investigated. On the city block level, aggregation and typification methods are improved to simplify the 3D city models. On the street level, buildings are selected according to their visual importance, and the results are stored in indexes for dynamic visualisation. On the building level, a new LOD, the shell model, is introduced: it is the exterior shell of the LOD3 model, in which objects such as windows, doors and smaller façade elements are projected onto the walls. On the facade level, especially for textured 3D buildings, image processing and analysis methods are employed to compress the texture.

 

After the generalisation processes on the different levels, multiple representation data structures are required to store the generalised models for dynamic visualisation. On the city block level, the CityTree, a novel structure representing groups of buildings, is tested for building aggregation; according to the results, using the CityTree reduces the creation time of the generalised 3D city models by more than 50%. In addition, a Minimum Spanning Tree (MST) is employed to detect linear building group structures in the city models, which are then typified with different strategies. On the building and street levels, a visible building index is created along the roads to support building selection. On the facade level, the TextureTree, a structure representing building facade textures, is created based on texture segmentation.

 

Different generalisation strategies lead to different outcomes, so it is critical to evaluate the quality of the generalised models. Visually salient features of the textured building models, such as size, colour and height, are employed to calculate the visual difference between the original and the generalised models, and visual similarity is the criterion for building selection at the street-view level. In this thesis, visual similarity is evaluated both locally and globally. On the local level, the projection area and the colour difference between the original and the generalised models are considered. On the global level, the visual features of the 3D city models are represented by Attributed Relational Graphs (ARG), and their similarity distances are calculated with the Nested Earth Mover's Distance (NEMD) algorithm.
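To illustrate the global comparison, the sketch below (Python; the feature names, neighbourhood threshold and input data are assumptions, and a plain one-dimensional Wasserstein distance on a single node attribute stands in for the NEMD algorithm actually used in the thesis) builds an attributed relational graph per model and compares one feature distribution between the original and the generalised model.

import networkx as nx
from scipy.stats import wasserstein_distance

def build_arg(buildings):
    """buildings: list of dicts with 'id', 'area', 'height', 'colour' and centre 'xy'."""
    g = nx.Graph()
    for b in buildings:
        g.add_node(b["id"], area=b["area"], height=b["height"], colour=b["colour"])
    # Edges encode the spatial relation (centre distance) between nearby buildings.
    for a in buildings:
        for b in buildings:
            if a["id"] < b["id"]:
                d = ((a["xy"][0] - b["xy"][0]) ** 2 + (a["xy"][1] - b["xy"][1]) ** 2) ** 0.5
                if d < 50.0:                      # assumed neighbourhood threshold
                    g.add_edge(a["id"], b["id"], dist=d)
    return g

def feature_distance(g1, g2, attr="height"):
    """Stand-in for NEMD: Wasserstein distance between one node-attribute distribution."""
    return wasserstein_distance([d[attr] for _, d in g1.nodes(data=True)],
                                [d[attr] for _, d in g2.nodes(data=True)])

original = build_arg([
    {"id": "b1", "area": 120.0, "height": 9.0, "colour": (200, 180, 160), "xy": (0.0, 0.0)},
    {"id": "b2", "area": 80.0, "height": 6.0, "colour": (190, 170, 150), "xy": (20.0, 0.0)},
])
generalised = build_arg([
    {"id": "g1", "area": 200.0, "height": 8.0, "colour": (195, 175, 155), "xy": (10.0, 0.0)},
])
print(feature_distance(original, generalised))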

 

The overall contribution of this thesis is that 3D city models are generalised at different scales (block, street, building and facade) and the results are stored in multiple representation structures for efficient dynamic visualisation, especially online visualisation.

Place, publisher, year, edition, pages
Stockholm: KTH Royal Institute of Technology, 2011. xii, 104 p.
Series
Trita-SOM, ISSN 1653-6126; 2011:19
Keyword [en]
3D city models, visualisation, generalisation, multiple representation structure, similarity evaluation, aggregation, typification, shell model, street index, texture compression, texture segmentation
National Category
Other Environmental Engineering
Research subject
SRA - ICT
Identifiers
URN: urn:nbn:se:kth:diva-48174
ISBN: 978-91-7501-189-9 (print)
OAI: oai:DiVA.org:kth-48174
DiVA: diva2:456906
Public defence
2011-12-02, D2, Lindstedtsvägen 5, Entreplan, KTH, Stockholm, 10:00 (English)
Projects
ViSuCity
Note
QC 20111116. Available from: 2011-11-16. Created: 2011-11-16. Last updated: 2011-11-16. Bibliographically approved.
List of papers
1. Online Visualisation of a 3D City Model Using CityGML and X3DOM
2011 (English). In: Cartographica, ISSN 0317-7173, E-ISSN 1911-9925, Vol. 46, no. 2, pp. 109-114. Article in journal (Refereed). Published
Abstract [en]

This article proposes a novel framework for online visualization of 3D city models. CityGML is used to represent the city models, from which 3D scenes in X3D are generated, dynamically updated to the client side with AJAX and visualized in WebGL-supported browsers with X3DOM. The experimental results show that the proposed framework can easily be implemented in widely supported mainstream browsers and can efficiently support online visualization of 3D city models in small areas. For the visualization of large volumes of data, generalization methods and multiple-representation data structures should be studied in future research.

Keyword
3D modelling, visual language for GIS, Web mapping
National Category
Other Computer and Information Science
Research subject
SRA - ICT
Identifiers
urn:nbn:se:kth:diva-48156 (URN)
10.3138/carto.46.2.109 (DOI)
2-s2.0-79955632348 (Scopus ID)
Note

QC 20111116

Available from: 2011-11-16. Created: 2011-11-16. Last updated: 2017-12-08. Bibliographically approved.
2. A Multiple Representation Data Structure for Dynamic Visualisation of Generalised 3D City Models
2011 (English). In: ISPRS Journal of Photogrammetry and Remote Sensing (Print), ISSN 0924-2716, E-ISSN 1872-8235, Vol. 66, no. 2, pp. 198-208. Article in journal (Refereed). Published
Abstract [en]

In this paper, a novel multiple representation data structure for dynamic visualisation of 3D city models, called CityTree, is proposed. To create a CityTree, the ground plans of the buildings are generated and simplified. Then, the buildings are divided into clusters by the road network and one CityTree is created for each cluster. The leaf nodes of the CityTree represent the original 3D objects of each building, and the intermediate nodes represent groups of close buildings. By utilizing the CityTree, it is possible to have dynamic zoom functionality in real time. The CityTree methodology is implemented in a framework where the original city model is stored in CityGML and the CityTree is stored as X3D scenes. A case study confirms the applicability of the CityTree for dynamic visualisation of 3D city models.
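As an informal sketch of the structure (Python; the field names and the area-based traversal rule are assumptions made for illustration, not the paper's exact design), a CityTree node can be modelled as follows: leaf nodes carry individual buildings, intermediate nodes carry aggregated groups, and traversal stops higher up in the tree the coarser the requested zoom level is.

from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class CityTreeNode:
    footprint_area: float                      # ground-plan area covered by this node
    building_id: Optional[str] = None          # set only on leaf nodes
    children: List["CityTreeNode"] = field(default_factory=list)

    def select(self, min_area: float) -> List["CityTreeNode"]:
        """Nodes to render at the current zoom: stop descending once a node's
        footprint is too small to be worth splitting further."""
        if not self.children or self.footprint_area < min_area:
            return [self]
        selected: List["CityTreeNode"] = []
        for child in self.children:
            selected.extend(child.select(min_area))
        return selected

block = CityTreeNode(300.0, children=[
    CityTreeNode(180.0, building_id="b1"),
    CityTreeNode(120.0, building_id="b2"),
])
print(len(block.select(min_area=400.0)))   # 1: viewed from far away, the block is one aggregate
print(len(block.select(min_area=100.0)))   # 2: viewed up close, the individual buildings

Because only the traversal depth changes between zoom levels, switching between the aggregated and the detailed representation does not require re-running the generalisation, which is what enables zooming in real time.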

National Category
Other Computer and Information Science
Identifiers
urn:nbn:se:kth:diva-24707 (URN)
10.1016/j.isprsjprs.2010.08.001 (DOI)
000288628000006 ()
2-s2.0-79951727298 (Scopus ID)
Note
QC 20110530. Available from: 2010-09-23. Created: 2010-09-23. Last updated: 2017-12-12. Bibliographically approved.
3. Detection and typification of linear structures for dynamic visualization of 3D city models
2012 (English). In: Computers, Environment and Urban Systems, ISSN 0198-9715, E-ISSN 1873-7587, Vol. 36, no. 3, pp. 233-244. Article in journal (Refereed). Published
Abstract [en]

Cluttering is a fundamental problem in 3D city model visualization. In this paper, a novel method for removing cluttering by typification of linear building groups is proposed. The method works in static as well as dynamic visualization of 3D city models. It starts by converting building models in higher Levels of Detail (LoDs) into LoD1 with ground plan and height. Then a Minimum Spanning Tree (MST) is generated according to the distances between the building ground plans. Based on the MST, linear building groups are detected for typification. The typification level of a building group is determined by its distance to the viewpoint as well as its viewing angle. Next, the selected buildings are removed and the remaining ones are adjusted in each group separately. To preserve the building features and their spatial distribution, an Attributed Relational Graph (ARG) and the Nested Earth Mover's Distance (NEMD) are used to evaluate the difference between the original building objects and the generalized ones. The experimental results indicate that our method can reduce the number of buildings while preserving the visual similarity of the urban areas.
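A hedged sketch of the detection step (Python with SciPy; the distance threshold and the use of centroids rather than full ground plans are simplifying assumptions): build the MST over building centre points and keep the short edges, whose connected chains are the candidate linear groups.

import numpy as np
from scipy.sparse.csgraph import minimum_spanning_tree
from scipy.spatial.distance import pdist, squareform

def short_mst_edges(centroids: np.ndarray, max_edge: float = 30.0):
    """centroids: (n, 2) array of building centre points; returns (i, j, length) edges."""
    dists = squareform(pdist(centroids))            # dense pairwise distance matrix
    mst = minimum_spanning_tree(dists).toarray()    # nonzero entries are MST edge lengths
    n = len(centroids)
    return [(i, j, mst[i, j])
            for i in range(n) for j in range(n)
            if 0 < mst[i, j] <= max_edge]

# Three buildings in a row plus one outlier: only the two edges along the row survive.
print(short_mst_edges(np.array([[0.0, 0.0], [12.0, 1.0], [25.0, 0.0], [100.0, 80.0]])))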

Keyword
3D city models, Typification, Dynamic visualization, Minimum spanning tree, Similarity measurement
National Category
Other Computer and Information Science
Identifiers
urn:nbn:se:kth:diva-48161 (URN)
10.1016/j.compenvurbsys.2011.10.001 (DOI)
000303078500004 ()
2-s2.0-84858800046 (Scopus ID)
Note
QC 20120525. Available from: 2011-11-16. Created: 2011-11-16. Last updated: 2017-12-08. Bibliographically approved.
4. Shell model representation as a substitute of LOD3 for 3D modeling in CityGML
2011 (English). In: Geo-spatial Information Science, ISSN 1009-5020, Vol. 14, no. 2, pp. 78-84. Article in journal (Refereed). Published
Abstract [en]

The OGC standard for 3D city modeling, CityGML, is widely used in an increasing number of applications. It defines five consecutive Levels of Detail (LoD0 to LoD4, with increasing accuracy and structural complexity), of which LoD3 includes all exterior appearances and geometrical details and consequently requires much storage space. A new LoD, the shell model, is introduced: it consists of the exterior shell of the LoD3 model, onto whose walls the opening objects such as windows and doors, as well as smaller façade objects, are projected. In this paper, a user survey is presented. The results of this survey show that the shell model gives users almost the same visual impression as the LoD3 model. Furthermore, algorithms are developed to extract the shell model from the LoD3 model. Experiments show that the shell model can reduce the storage of the original LoD3 model by up to 90%. Therefore, on the one hand it can be used as a substitute for a LoD3 model for visualization on small displays; on the other hand, it can be treated as a sub-level of detail (SLoD3) in CityGML, since it retains almost the same amount of information but requires much less storage space.
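The projection idea can be sketched as follows (Python; the plane representation by a point and a unit normal, and the sample coordinates, are illustrative assumptions rather than the paper's algorithm): every vertex of an opening object is mapped orthogonally onto the plane of its wall, so the opening survives as a flat feature of the exterior shell while its separate 3D geometry can be discarded.

import numpy as np

def project_onto_wall(vertex, wall_point, wall_normal):
    """Orthogonal projection of a 3D vertex onto the wall plane."""
    v = np.asarray(vertex, dtype=float)
    p = np.asarray(wall_point, dtype=float)
    n = np.asarray(wall_normal, dtype=float)
    n = n / np.linalg.norm(n)
    return v - np.dot(v - p, n) * n

# A window corner recessed 0.2 m behind a wall lying in the plane y = 0:
print(project_onto_wall([1.0, -0.2, 1.5], [0.0, 0.0, 0.0], [0.0, 1.0, 0.0]))   # -> [1. 0. 1.5]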

Keyword
shell model, 3D building, CityGML, generalization, user survey
National Category
Other Environmental Engineering
Identifiers
urn:nbn:se:kth:diva-48163 (URN)
10.1007/s11806-011-0445-8 (DOI)
2-s2.0-79956017582 (Scopus ID)
Conference
Joint ISPRS Workshop on 3D City Modelling and Applications and the 6th 3D GeoInfo, 3DCMA 2011; Wuhan; China; 26 June 2011 through 28 June 2011
Note

QC 20111116

Available from: 2011-11-16. Created: 2011-11-16. Last updated: 2017-01-10. Bibliographically approved.
5. Real time visualisation of 3D city models in street view based on visual salience
(English). In: International Journal of Geographical Information Science, ISSN 1365-8816, E-ISSN 1365-8824. Article in journal (Refereed). Submitted
Abstract [en]

Street-level visualization is an important application of 3D city models. Its main challenges are the cluttering of the detailed buildings and the rendering performance. In this paper, a novel method for street-level visualization based on visual salience evaluation is proposed. The basic idea of the method is to preserve the salient buildings in a view and remove the non-salient ones. The method consists of a pre-processing stage and a real-time visualization stage. The pre-processing starts by converting 3D building models in higher Levels of Detail (LoDs) into LoD1 with simplified ground plans. Then a number of index view points are created along the streets; these indexes store both the positions and the directions of the sight lines. A visual salience value is computed for each visible simplified building in the respective index. The salience of a visible building is calculated from the visual difference between the original and generalized models. We propose and evaluate three methods for visual salience: local difference, global difference and minimum projection area. The real-time visualization process starts by mapping the observer to its closest indexes; the street view is then generated from the building information stored in the indexes. A user study shows that the local visual salience gives better results than the global difference and the minimum projection area, and the proposed method can reduce the number of loaded buildings by 90% while still preserving the visual similarity with the original models.
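The real-time lookup can be sketched as follows (Python with SciPy; the index positions, the stored building lists and the use of a KD-tree are assumptions for illustration, since the abstract does not prescribe a specific spatial index): the observer is snapped to the nearest pre-computed index view point and the buildings recorded as salient from that point are loaded.

import numpy as np
from scipy.spatial import cKDTree

# (x, y) positions of index view points along a street, and the building IDs judged
# salient from each of them during pre-processing (all values assumed).
index_points = np.array([[0.0, 0.0], [50.0, 0.0], [100.0, 0.0]])
salient_buildings = {0: ["b12", "b17"], 1: ["b17", "b23"], 2: ["b23", "b41"]}

tree = cKDTree(index_points)

def buildings_for_observer(xy):
    _, idx = tree.query(xy)            # nearest index view point
    return salient_buildings[idx]

print(buildings_for_observer([42.0, 3.0]))   # -> ['b17', 'b23']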

Keyword
3D city models, street level visualization, selection, visual salience
National Category
Other Computer and Information Science
Identifiers
urn:nbn:se:kth:diva-48168 (URN)
Note

QS 20120328

Available from: 2011-11-16. Created: 2011-11-16. Last updated: 2017-12-08. Bibliographically approved.
6. Generalisation of textured 3D city models using image compression and multiple representation data structure
2013 (English). In: ISPRS Journal of Photogrammetry and Remote Sensing (Print), ISSN 0924-2716, E-ISSN 1872-8235, Vol. 79, pp. 68-79. Article in journal (Refereed). Published
Abstract [en]

Texture is an essential part of 3D building models and often accounts for a large proportion of the data volume, which makes dynamic visualization difficult. To compress the texture of 3D building models for dynamic visualization at different scales, a multi-resolution texture generalization method is proposed, consisting of two steps: texture image compression and texture colouring. In the first step, the texture images are compressed in both the horizontal and the vertical direction using the wavelet transform. In the second step, a TextureTree is created to store the building colour texture for dynamic visualization from different distances. To generate the TextureTree, texture images are iteratively segmented by horizontal and vertical dividing zones, e.g. edges or background from edge detection, until each section is essentially of one colour. The texture in each section is then represented by its main colour, and the TextureTree is created based on the colour differences between adjacent sections. In dynamic visualization, the suitable compressed texture images or TextureTree nodes are selected to generate the 3D scenes, based on the angle and the distance between the user viewpoint and the building surface. The experimental results indicate that the wavelet-based image compression and the proposed TextureTree can effectively represent the visual features of the textured buildings with much less data.
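The first step can be illustrated with a one-level 2-D discrete wavelet transform (Python with PyWavelets; the Haar wavelet, the single decomposition level and the random placeholder image are assumptions, since the abstract does not specify them): keeping only the low-frequency approximation halves the texture resolution in both directions.

import numpy as np
import pywt

texture = np.random.rand(256, 256)                  # placeholder greyscale facade texture
approx, (horiz, vert, diag) = pywt.dwt2(texture, "haar")
print(texture.shape, "->", approx.shape)            # (256, 256) -> (128, 128)

Repeating the transform on the approximation yields progressively coarser versions of the texture, from which a suitable one is picked according to the viewing distance and angle.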

Keyword
Three-dimensional Building model, Texture Compression, Multiresolution Image, Multiple Representation Data Structures, Dynamic Visualization
National Category
Remote Sensing
Other Computer and Information Science
Identifiers
urn:nbn:se:kth:diva-48170 (URN)
10.1016/j.isprsjprs.2013.02.008 (DOI)
000318889600006 ()
2-s2.0-84875254960 (Scopus ID)
Note

QS 20120328. Updated from Submitted to Published. 20130627

Available from: 2011-11-16. Created: 2011-11-16. Last updated: 2017-12-08. Bibliographically approved.

Open Access in DiVA

fulltext (2374 kB), 4283 downloads
File information
File name: FULLTEXT02.pdf
File size: 2374 kB
Checksum: SHA-512
717676a25c5d78f107ba61fc4c6b3252c427d99f98ab66b4884442ac56373b735fd39e024edbb70c6ecb5878b094e0d5847f8346228bc12dbc4595777f9a7458
Type: fulltext
Mimetype: application/pdf

Search in DiVA

By author/editor
Mao, Bo
By organisation
Geoinformatik och Geodesi
Other Environmental Engineering
