Transactions in GIS
Notable scientific publications
* Data are for reference only
It is well known that the grid cell size of a raster digital elevation model has significant effects on derived terrain variables such as slope, aspect, plan and profile curvature or the wetness index. In this paper the quality of DEMs derived from the interpolation of photogrammetrically derived elevation points in Alberta, Canada, is tested. DEMs with grid cell sizes ranging from 100 to 5 m were interpolated from 100 m regularly spaced elevation points and numerous surface‐specific point elevations using the ANUDEM interpolation method. In order to identify the grid resolution that matches the information content of the source data, three approaches were applied: density analysis of point elevations, an analysis of cumulative frequency distributions using the Kolmogorov‐Smirnov test and the root mean square slope measure. Results reveal that the optimum grid cell size is between 5 and 20 m, depending on terrain complexity and terrain derivative. Terrain variables based on 100 m regularly sampled elevation points are compared to an independent high‐resolution DEM used as a benchmark. Subsequent correlation analysis reveals that only elevation and local slope have a strong positive relationship while all other terrain derivatives are not represented realistically when derived from a coarse DEM. Calculations of root mean square errors and relative root mean square errors further quantify the quality of terrain derivatives.
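A minimal sketch of the resolution-comparison idea described above: derive slope from a DEM at two grid cell sizes and compare the slope distributions with a two-sample Kolmogorov‐Smirnov test, plus a root mean square slope summary. The synthetic DEM, the block-averaging coarsener, and the cell sizes are illustrative assumptions, not the authors' ANUDEM workflow.

```python
import numpy as np
from scipy.stats import ks_2samp

def slope_deg(dem: np.ndarray, cell: float) -> np.ndarray:
    """Local slope (degrees) from finite differences at the given cell size."""
    dzdy, dzdx = np.gradient(dem, cell)
    return np.degrees(np.arctan(np.hypot(dzdx, dzdy)))

def coarsen(dem: np.ndarray, factor: int) -> np.ndarray:
    """Aggregate a DEM to a coarser grid by block averaging."""
    h, w = (dem.shape[0] // factor) * factor, (dem.shape[1] // factor) * factor
    blocks = dem[:h, :w].reshape(h // factor, factor, w // factor, factor)
    return blocks.mean(axis=(1, 3))

rng = np.random.default_rng(0)
dem_5m = rng.normal(0, 1, (400, 400)).cumsum(axis=0).cumsum(axis=1)  # synthetic terrain
slope_fine = slope_deg(dem_5m, cell=5.0)
slope_coarse = slope_deg(coarsen(dem_5m, 20), cell=100.0)            # 5 m -> 100 m grid

stat, p = ks_2samp(slope_fine.ravel(), slope_coarse.ravel())         # distribution match
rms_fine = np.sqrt((slope_fine ** 2).mean())
rms_coarse = np.sqrt((slope_coarse ** 2).mean())
print(f"KS statistic = {stat:.3f}; RMS slope: {rms_fine:.2f} vs {rms_coarse:.2f} deg")
```

A large KS statistic at a coarse grid signals that the derivative's distribution no longer matches the information content of the finer source data.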
SLEUTH is a computational simulation model that uses adaptive cellular automata to simulate the way cities grow and change their surrounding land uses. It has long been known that models are of most value when calibrated, and that back‐casting (testing against known prior data) is an effective calibration method. SLEUTH's calibration uses the brute force method: every possible combination of its control parameters is tried, and the outcomes are tested for their success at replicating prior data. Previous SLEUTH calibration work has suggested several rules for keeping the brute force procedure tractable, most of which leave out many of the possible parameter combinations. In this research, we instead attempt to create the complete set of possible outcomes with the goal of examining them to select the optimum from among the millions of possibilities. The self‐organizing map (SOM) was used as a data reduction method to isolate the best parameter sets and to indicate which of the existing 13 calibration metrics used in SLEUTH are necessary to arrive at the optimum. As a result, a new metric is proposed that will be of value in future SLEUTH applications. The new measure combines seven of the current measures (eight if land use is modeled) and is recommended as a way to make SLEUTH applications more directly comparable and to give superior modeling and forecasting results.
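A hedged sketch of brute-force calibration as described above: every combination of the control parameters is simulated and scored against observed historical data. The five parameter names follow SLEUTH's usual growth coefficients, but the coarse step grid and the placeholder scoring function are assumptions standing in for the actual cellular automaton runs and the 13 calibration metrics.

```python
from itertools import product

PARAM_STEPS = range(0, 101, 25)  # coarse 0-100 sampling of each coefficient
PARAM_NAMES = ("diffusion", "breed", "spread", "slope_resistance", "road_gravity")

def run_model(params: dict) -> float:
    """Placeholder: a real run would execute the CA simulation and score the
    output against known prior data (back-casting). Here we fake a score."""
    target = {"diffusion": 50, "breed": 25, "spread": 75,
              "slope_resistance": 25, "road_gravity": 50}
    return -sum((params[k] - target[k]) ** 2 for k in params)

def brute_force_calibrate():
    best_score, best_params = float("-inf"), None
    for values in product(PARAM_STEPS, repeat=len(PARAM_NAMES)):
        params = dict(zip(PARAM_NAMES, values))
        score = run_model(params)
        if score > best_score:
            best_score, best_params = score, params
    return best_params, best_score

best, score = brute_force_calibrate()  # 5^5 = 3,125 combinations at this step size
print(best, score)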
The last decade has seen a renaissance in spatial modeling. Increased computational power and the greater availability of spatial data have aided the creation of new modeling techniques for studying and predicting the growth of cities and urban areas. Cellular automata are one modeling technique that has become widely used and cited in the literature; yet some very basic questions about the use of these models remain unanswered, specifically regarding the spatial resolution used during calibration and how it can impact model forecasts. Using the SLEUTH urban growth model (
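An illustrative sketch (not from the paper) of why calibration resolution matters: aggregating a binary urban/non-urban raster to coarser cells with a majority rule changes the mapped urban extent, which is one way coarse input data can bias what a cellular automaton is calibrated against. The cell sizes and the random mask are assumptions.

```python
import numpy as np

def majority_coarsen(urban: np.ndarray, factor: int) -> np.ndarray:
    """Aggregate factor x factor blocks; a block is urban if >= 50% of its cells are."""
    h, w = (urban.shape[0] // factor) * factor, (urban.shape[1] // factor) * factor
    blocks = urban[:h, :w].reshape(h // factor, factor, w // factor, factor)
    return blocks.mean(axis=(1, 3)) >= 0.5

rng = np.random.default_rng(1)
urban_30m = rng.random((600, 600)) < 0.2          # synthetic 30 m urban mask
for f in (1, 2, 4, 8):                            # 30 m, 60 m, 120 m, 240 m grids
    frac = majority_coarsen(urban_30m, f).mean()
    print(f"{30 * f:>4} m grid: urban fraction = {frac:.3f}")
```

With scattered urban cells, the majority rule erases most of them at coarse resolutions, so a model calibrated on the coarse grid sees a different city than one calibrated on the fine grid.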
The most common mass transit modes in metropolitan cities include buses, subways, and taxicabs, each of which contributes to an interconnected complex network that delivers urban dwellers to their destinations. Understanding the intertwined usage of these three transit modes at different places and times allows for better sensing of urban mobility and the built environment. In this article, we leverage a comprehensive data collection of bus, metro, and taxicab ridership from Shenzhen, China to unveil the spatio‐temporal interplay between different mass transit modes. To achieve this goal, we develop a novel spectral clustering framework that imposes spatio‐temporal similarities between mass transit mode usage in urban space and differentiates urban spaces associated with distinct ridership patterns of mass transit modes. Five resulting categories of urban spaces are identified and interpreted with auxiliary knowledge of the city's metro network and land‐use functionality. In general, different categorized urban spaces are associated with different accessibility levels (such as high‐, medium‐, and low‐ranked) and different urban functionalities (such as residential, commercial, leisure‐dominant, and home–work balanced). The results indicate that different mass transit modes cooperate or compete based on demographic and socioeconomic attributes of the underlying urban environments. Our proposed analytical framework provides a novel and effective way to explore the mass transit system and the functional heterogeneity in cities. It demonstrates great potential for assisting policymakers and municipal managers in optimizing public transportation facility allocation and city‐wide daily commuting distribution.
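A minimal sketch of the kind of clustering used here: each spatial unit gets an hourly ridership profile per mode (bus, metro, taxi), and units are grouped by profile similarity. The synthetic data, the RBF affinity, and scikit-learn's off-the-shelf SpectralClustering are assumptions; the article's framework additionally imposes spatial similarity constraints, which this sketch omits.

```python
import numpy as np
from sklearn.cluster import SpectralClustering
from sklearn.preprocessing import normalize

rng = np.random.default_rng(42)
n_units, hours, modes = 300, 24, 3                 # spatial units x hours x {bus, metro, taxi}
ridership = rng.poisson(lam=20, size=(n_units, hours * modes)).astype(float)

profiles = normalize(ridership)                    # compare temporal shape, not total volume
model = SpectralClustering(n_clusters=5,           # five categories, as in the article
                           affinity="rbf", gamma=1.0, random_state=0)
labels = model.fit_predict(profiles)
print(np.bincount(labels))                         # units per category of urban space
```

The resulting labels would then be interpreted against metro-network and land-use knowledge, as the abstract describes.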
The research community of
The concept of Volunteered Geographic Information (VGI) has recently emerged from the new Web 2.0 technologies. The OpenStreetMap project is currently the most significant example of a system based on VGI. It aims to produce free vector geographic databases using contributions from Internet users. Spatial data quality becomes a key consideration in this context of freely downloadable geographic databases. This article studies the quality of French OpenStreetMap data. It extends the work of Haklay to France, provides a larger set of spatial data quality element assessments (i.e. geometric, attribute, semantic and temporal accuracy, logical consistency, completeness, lineage, and usage), and uses different methods of quality control. The outcome of the study raises issues such as the heterogeneity of processes, scales of production, and compliance with standardized and accepted specifications. In order to improve data quality, a balance has to be struck between the contributors' freedom and their respect of specifications. The development of appropriate solutions to provide this balance is an important research issue in the domain of user‐generated content.
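A hedged illustration of one quality element assessed in such studies, geometric accuracy: compare a matched OpenStreetMap feature against a reference dataset. The buffer-overlap approach (common in work following Haklay) and the toy geometries are assumptions; the article evaluates many more elements (attribute and semantic accuracy, completeness, lineage, and so on) with other methods.

```python
from shapely.geometry import LineString

def buffer_overlap(osm_line: LineString, ref_line: LineString, tol_m: float) -> float:
    """Share of the OSM line's length falling within tol_m of the reference line."""
    within = osm_line.intersection(ref_line.buffer(tol_m))
    return within.length / osm_line.length

# Toy road centerlines in a projected (metric) coordinate system:
ref = LineString([(0, 0), (100, 0), (200, 10)])    # authoritative reference geometry
osm = LineString([(0, 2), (100, 3), (200, 14)])    # volunteered geometry
print(f"{buffer_overlap(osm, ref, tol_m=5.0):.1%} of the OSM line lies within 5 m")
```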
Disabled people, especially the blind and vision‐impaired, are challenged by many transitory hazards in urban environments such as construction barricades, temporary fencing across walkways, and obstacles along curbs. These hazards present a problem for navigation, because they typically appear in an unplanned manner and are seldom included in databases used for accessibility mapping. Tactile maps are a traditional tool used by blind and vision‐impaired people for navigation through urban environments, but such maps are not automatically updated with transitory hazards. As an alternative approach to static content on tactile maps, we use volunteered geographic information (VGI) and an Open Source system to provide updates of local infrastructure. These VGI updates, contributed via voice, text message, and e‐mail, use geographic descriptions containing place names to describe changes to the local environment. After they have been contributed and stored in a database, we georeference VGI updates with a detailed gazetteer of local place names including buildings, administrative offices, landmarks, roadways, and dormitories. We publish maps and alerts showing transitory hazards, including location‐based alerts delivered to mobile devices. Our system is built with several technologies including PHP, JavaScript, AJAX, the Google Maps API, PostgreSQL, an Open Source database, and PostGIS, PostgreSQL's spatial extension. This article provides insight into the integration of user‐contributed geospatial information into a comprehensive system for use by the blind and vision‐impaired, focusing on currently developed methods for geoparsing and georeferencing using a gazetteer.
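A simplified sketch of the geoparsing and georeferencing step described above: scan a contributed report for known place names and attach gazetteer coordinates. The in-memory dictionary and naive substring matching stand in for the article's PostgreSQL/PostGIS gazetteer, and the place names and coordinates are hypothetical.

```python
GAZETTEER = {  # hypothetical campus place names -> (lon, lat)
    "smith hall": (-77.8600, 40.7982),
    "main library": (-77.8628, 40.7970),
    "curtin road": (-77.8590, 40.8001),
}

def geoparse(message: str):
    """Return (place, (lon, lat)) pairs for gazetteer names found in a VGI report."""
    text = message.lower()
    return [(name, coords) for name, coords in GAZETTEER.items() if name in text]

report = "Construction fencing blocks the sidewalk between Smith Hall and Curtin Road."
for place, (lon, lat) in geoparse(report):
    print(f"transitory hazard near {place}: {lon:.4f}, {lat:.4f}")
```

The georeferenced hazards would then feed map rendering and location-based alerts, as the abstract outlines.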
Recent literature has reported inaccuracies associated with some popular home range estimators such as kernel density estimation, especially when applied to point patterns of complex shapes. This study explores the use of characteristic hull polygons (CHPs) as a new method of home range estimation. CHPs are special bounding polygons created in GIS that can have concave edges, be composed of disjoint regions, and contain areas of unoccupied space within their interiors. CHPs are created by constructing the Delaunay triangulation of a set of points and then removing a subset of the resulting triangles. Here, CHPs consisting of 95% of the smallest triangles, measured in terms of perimeter, are applied for home range estimation. First, CHPs are applied to simulated animal locational data conforming to five point pattern shapes at three sample sizes. Then, the method is applied to black‐footed albatross (Phoebastria nigripes) locational data.
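A sketch of the CHP construction described above: Delaunay-triangulate the animal locations, drop the 5% of triangles with the largest perimeters, and union the rest. Using scipy and shapely is an assumption; the study builds CHPs within a GIS. Note the result can be concave, disjoint, and contain interior holes, exactly as the abstract describes.

```python
import numpy as np
from scipy.spatial import Delaunay
from shapely.geometry import Polygon
from shapely.ops import unary_union

def characteristic_hull(points: np.ndarray, keep: float = 0.95):
    """Characteristic hull polygon: union of the `keep` fraction of Delaunay
    triangles with the smallest perimeters."""
    tri = Delaunay(points)
    triangles = [Polygon(points[s]) for s in tri.simplices]
    triangles.sort(key=lambda t: t.length)          # a polygon's .length is its perimeter
    kept = triangles[: int(keep * len(triangles))]  # keep the 95% smallest triangles
    return unary_union(kept)                        # may be disjoint, with holes

rng = np.random.default_rng(7)
locs = rng.random((200, 2))                         # synthetic animal locations
home_range = characteristic_hull(locs)
print(f"estimated home-range area: {home_range.area:.3f}")
```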