impervious surface identification and extraction from orthoimagery. This document describes the
workflow of the process undertaken. It is intended for knowledgeable users familiar with the
technical aspects of land use modeling in an ArcGIS Pro environment.
This guidance is prepared as a revision to the impervious surface mapping tutorial
2
by Esri's Learn
ArcGIS team. It will serve as an accompanying document to the GI feasibility and
inventory mapping. To facilitate its use, this guidance is divided into sections that describe
each stage of the flowchart. In addition, guidance for the pre-processing stage (before
classification is run) and the post-processing stage (after classification: map composition) is
also included.
The minimum computational system required to initiate the process includes:
● Windows 10, 64-bit or higher
● A multicore central processing unit (CPU), at least an Intel i5/i7 series or 10th-generation
processor
● 16 GB of RAM or more
● Sufficient available disk space
In addition, having a solid-state drive (SSD) and dedicated graphics processing unit (GPU) will
expedite the computation significantly. Although the process can run directly on the CPU, it will
take longer to run, and the workflow can overwhelm the CPU.
Classification and Segmentation
Land Use/Land Cover (LULC) data are an important input for ecological, hydrological, and
agricultural models. Most LULC classifications are created using pixel-based analysis of
remotely sensed imagery, in either a supervised or an unsupervised manner. These pixel-based
procedures examine the spectral properties
3
of each pixel of interest without considering the
associated spatial or contextual information. Using pixel-based classification on high-resolution
imagery may produce a "salt and pepper" effect, which contributes to inaccuracy (Gao and Mas,
2008). In contrast, segmentation, an object-based approach, produces spectrally homogeneous
objects, where every pixel in the image is assigned the label of a corresponding class. The object-based
image analysis aggregates pixels based on a segmentation algorithm such as the mean shift
function which groups neighboring pixels that are similar in color, shape, and spectral
characteristics together (Blaschke, 2010). The two processes can be compared as follows:
● Pixel-based classification: Classification is performed on a pixel-by-pixel basis, using
only the spectral information available for that specific pixel (i.e., values of pixels within
the locality are ignored).
● Object-based segmentation: Classification is done on a localized group of pixels,
considering the spatial properties of land-based features as they relate to each other.
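The contrast above can be sketched in plain Python on a toy grid. This is an illustrative stand-in, not ArcGIS Pro code: the hand-drawn `segments` array substitutes for a real Mean Shift run, and the reflectance values, threshold, and class names are hypothetical. It shows how an isolated noisy pixel produces a "salt and pepper" speck under pixel-based classification but is absorbed by its segment's mean under an object-based approach.

```python
# Toy 5x5 single-band "image": bright pixels (>= 0.5) should read as
# impervious, dark pixels as pervious. The 0.6 at row 2, column 4 is a
# single noisy bright pixel inside the dark region.
image = [
    [0.9, 0.9, 0.8, 0.1, 0.2],
    [0.9, 0.8, 0.9, 0.2, 0.1],
    [0.8, 0.9, 0.8, 0.1, 0.6],   # 0.6 = sensor noise
    [0.9, 0.8, 0.9, 0.2, 0.1],
    [0.9, 0.9, 0.8, 0.1, 0.2],
]

def classify_pixelwise(img, threshold=0.5):
    """Label each pixel independently, from its own value only."""
    return [["impervious" if v >= threshold else "pervious" for v in row]
            for row in img]

def classify_objectwise(img, segments, threshold=0.5):
    """Label every pixel in a segment from that segment's mean value."""
    sums, counts = {}, {}
    for r, row in enumerate(img):
        for c, v in enumerate(row):
            s = segments[r][c]
            sums[s] = sums.get(s, 0.0) + v
            counts[s] = counts.get(s, 0) + 1
    label = {s: "impervious" if sums[s] / counts[s] >= threshold
             else "pervious" for s in sums}
    return [[label[segments[r][c]] for c in range(len(row))]
            for r, row in enumerate(img)]

# Hand-drawn segmentation: columns 0-2 form one object, columns 3-4
# another (a real mean shift run would derive these groups from color,
# shape, and spectral similarity).
segments = [[0, 0, 0, 1, 1] for _ in range(5)]

pixel_map = classify_pixelwise(image)
object_map = classify_objectwise(image, segments)

print(pixel_map[2][4])   # "impervious" — the isolated salt-and-pepper speck
print(object_map[2][4])  # "pervious"  — smoothed by the segment's mean
```

In ArcGIS Pro the equivalent grouping step is the Segment Mean Shift tool, which replaces the hand-drawn `segments` above with objects derived from the imagery itself.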
In supervised image classification, the user trains computer algorithms to extract features from
an image. These training samples are drawn as polygons, rectangles, or points, and the
computer learns and scans the rest of the image to identify similar features. However, the
3. Spectral resolution describes the ability of a sensor to define fine wavelength intervals.
2. https://learn.arcgis.com/en/projects/calculate-impervious-surfaces-from-spectral-imagery/
4. This material is based upon work supported by the Department of Energy and the Michigan Energy Office (MEO) under Award Number EE00007478
as part of the Catalyst Communities program. Find this document and more about the CLC Fellowship that supported this project at
graham.umich.edu/clcf.