
Friday, July 29, 2016

Exercise 9: Create a Script Tool

The Objective

The goal of this exercise was to write a script that can be saved and used as a tool. The tool creates a Swiss hillshade from a raster dataset.

The Process

The first step in writing this script is to import the usual modules and extensions. The "GetParameterAsText" function is then used to set up the tool's input variables. Next, the tool's processing workflow is wrapped in a try/except statement so that errors are reported back to the user. The script is then run and placed into a toolbox. Lastly, the Add Script wizard is run to add the final touches to the tool's functionality and user interface (i.e., help descriptions, credits, etc.).
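The parameter wiring can be sketched without ArcGIS: arcpy.GetParameterAsText(i) simply returns the tool's i-th parameter as a string (empty if the parameter was never supplied). A minimal stand-in, assuming plain command-line arguments in place of the wizard's parameter list (the function and sample paths here are hypothetical):

```python
import sys

def get_parameter_as_text(index, argv=None):
    """Stand-in for arcpy.GetParameterAsText: script-tool parameters
    arrive positionally, as strings, in the order they are defined in
    the Add Script wizard; a missing parameter comes back as ''."""
    argv = sys.argv if argv is None else argv
    # argv[0] is the script path, so parameter 0 lives at argv[1]
    try:
        return str(argv[index + 1])
    except IndexError:
        return ""

# Simulated tool invocation (paths are hypothetical)
args = ["swiss_hillshade.py", "C:/data/dem.tif", "C:/scratch", "bearlake"]
input_dem = get_parameter_as_text(0, args)  # first parameter: input DEM
file_name = get_parameter_as_text(2, args)  # third parameter: base file name
missing = get_parameter_as_text(5, args)    # parameter never supplied
```

Because the parameters are positional, the order set in the wizard must match the order of the GetParameterAsText calls in the script.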

The Results

#-------------------------------------------------------------------------------
# Name:        Ex9
# Purpose:     Create a script tool
#
# Author:      Zach Miller
#
# Date:        7/28/2016
#-------------------------------------------------------------------------------

#ensure the program is working
print "script initialized"

#import system modules
import os
import shutil
import time
import datetime

#Import arcpy module
import arcpy
from arcpy import env
from arcpy.sa import *
from arcpy.ddd import *
arcpy.env.overwriteOutput = True
arcpy.CheckOutExtension('3D')
arcpy.CheckOutExtension('spatial')

input_DEM = arcpy.GetParameterAsText(0)

scratchPath = arcpy.GetParameterAsText(1)
fileName = arcpy.GetParameterAsText(2)
filtered_Hillshade = arcpy.GetParameterAsText(3)
aerial_Perspective = arcpy.GetParameterAsText(4)

Z_factor = "1"

print "Creating local variables! " ,datetime.datetime.now().strftime("%H:%M:%S")
# Local variables:
divide_By = "5"
dEM_div_5 = os.path.join(scratchPath,fileName) + "_divHS"
default_Hillshade = os.path.join(scratchPath,fileName) + "_HS"

try:
    # Process: Divide
    print "dividing input DEM by 5 " ,datetime.datetime.now().strftime("%H:%M:%S")
    arcpy.Divide_3d(input_DEM, divide_By, dEM_div_5)

    # Process: Hillshade
    print "Create default hillshade" ,datetime.datetime.now().strftime("%H:%M:%S")
    arcpy.HillShade_3d(input_DEM, default_Hillshade, "315", "45", "NO_SHADOWS", Z_factor)

    # Process: Focal Statistics
    print "Create a median filter for the hillshade effect " ,datetime.datetime.now().strftime("%H:%M:%S")
    outFocalStatistics = FocalStatistics(default_Hillshade, NbrRectangle(4, 4, "CELL"), "MEDIAN", "NODATA")
    outFocalStatistics.save(filtered_Hillshade)

    # Process: Plus
    print "Add the divided dem to the default hillshade, creating an aerial perspective ",datetime.datetime.now().strftime("%H:%M:%S")
    arcpy.Plus_3d(dEM_div_5, default_Hillshade, aerial_Perspective)

    print "The script is complete"

except arcpy.ExecuteError:
    #report an error message
    arcpy.AddError("Something didn't work")

    #report any geoprocessing error messages the tool generated
    arcpy.AddMessage(arcpy.GetMessages(2))


Sources

All sources provided by Dr. Christina Hupy.

Thursday, July 28, 2016

Exercise 8: Raster Processing with Booleans and Loops

The Objectives

The goal of this exercise was to clip scanned raster files and merge them together using Booleans and loops.

The Process


  • Import modules.
  • Create variables, set up the workspace, and create empty lists.
  • Create a spatial reference object (one tile index's coordinate system was used).
  • Set up a loop.
  • Add files matching the specified criteria to a raster list.
  • Set up processing loops for each raster.
  • Format the raster name.
  • Project all of the rasters.
  • Create a polygon of each raster's footprint.
  • Add the name of the raster to a new field.
  • Create a list of tiles for a merge function.
  • Merge the tiles.
  • Use Boolean if statements to control which "for in" loops run.
  • Clip the rasters.
  • Merge the rasters together.
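The name-formatting step above can be sketched as a plain string routine, assuming quad file names follow the "WI_&lt;name&gt;_&lt;id&gt;_&lt;year&gt;.tif" pattern used in this exercise (the sample file name is hypothetical):

```python
import os

def format_topo_name(filename):
    """Mirror the name-formatting steps: drop the 'WI_' prefix, keep
    the quad name up to the underscore that follows it, and swap
    spaces for underscores so the name is safe in a geodatabase."""
    base = os.path.splitext(os.path.basename(filename))[0]
    shortened = base[3:]                  # skip the 'WI_' prefix
    end_index = shortened.index("_") + 3  # underscore after the quad name
    name1 = base[3:end_index]
    return name1.replace(" ", "_")

# hypothetical scanned-topo file name
name = format_topo_name("WI_Kettle Moraine_503021_1955.tif")
```

Note that os.path.splitext is used rather than rstrip: rstrip strips a set of trailing characters, not a suffix, so rstrip(".tif") would also eat a name that happens to end in "t", "i", or "f".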

The Results

#-------------------------------------------------------------------------------
# Name:        Ex8
# Purpose:     Processes rasters with booleans and loops
#
# Author:      Zach Miller
#
# Created:     7/28/2016
#-------------------------------------------------------------------------------

# ensure the program is working
print "script initialized"

# import system modules
import os
import shutil
import time
import datetime
import arcpy
from arcpy import env
from arcpy.sa import *
from arcpy.ddd import *
#-------------------------------------------------------------------------------

#Allow the script to overwrite existing files
arcpy.env.workspace = r"Q:\StudentCoursework\CHupy\GEOG.491.801.2167\MILLERZM\Exercise_8\Topos"
arcpy.env.overwriteOutput = True
arcpy.env.outputCoordinateSystem = arcpy.SpatialReference("WGS 1984 UTM Zone 16N")
arcpy.CheckOutExtension('3D')
arcpy.CheckOutExtension('spatial')

#create variables
print "Obtaining a List of rasters" ,datetime.datetime.now().strftime("%H:%M:%S")

workspace = r"Q:\StudentCoursework\CHupy\GEOG.491.801.2167\MILLERZM\Exercise_8\Topos"

geodatabasePath = r"Q:\StudentCoursework\CHupy\GEOG.491.801.2167\MILLERZM\Exercise_8\Ex8.gdb"

#make a feature dataset in your geodatabase using the WGS84 UTM Zone 16N coordinate system
featureDatasetPath = r"Q:\StudentCoursework\CHupy\GEOG.491.801.2167\MILLERZM\Exercise_8\Ex8.gdb\WisconsinTiles"

#Make a scratch geodatabase for the clipped topos
clipGeodatabasePath = r"Q:\StudentCoursework\CHupy\GEOG.491.801.2167\MILLERZM\Exercise_8\Ex8.gdb"

rasters = []
tileList = []

mergedFolderLocation = r"Q:\StudentCoursework\CHupy\GEOG.491.801.2167\MILLERZM\Exercise_8\Ex8.gdb"
mergedName = "KettleMoraine_Merged"

field = "FILENAME"

clipTileIndex = r"Q:\StudentCoursework\CHupy\GEOG.491.801.2167\MILLERZM\Exercise_8\Ex8.gdb\WisconsinTiles\Wisconsin_24kTileIndex"

createIndex = True
clipRasters = True
mosaicRasters = True

mergedTiles = os.path.join(featureDatasetPath,mergedName)+ "_TileFootprints"
#-------------------------------------------------------------------------------

#Create spatial reference object, using the coordinate system of the tile index
sr=arcpy.Describe(clipTileIndex).spatialReference
#-------------------------------------------------------------------------------

#If the boolean for create index is true, run the following code.
if createIndex:

    #The walk function traverses a workspace, and finds files with the specified filetype
    #in this instance, it finds TIF files. It contains the directory path, directory name, and file names within the 'walk' object
    walk = arcpy.da.Walk(workspace, type="TIF")
#-------------------------------------------------------------------------------
    #i, j, and k are variables that will be used to track the progress of the loop
    i = 0
    j = 0
    k = 0
#-------------------------------------------------------------------------------

    for dirpath, dirnames, filenames in walk:
        for filename in filenames:
            #The script will iterate through all of the subfolders, and add all valid tifs to the list 'rasters'
            rasters.append(os.path.join(dirpath, filename))
            #For each valid raster, 1 is added to i; the final value of i is the total number of valid files
            i = i+1
#-------------------------------------------------------------------------------

    #The following operations will be performed on all of the rasters in 'rasters'
    for raster in rasters:
#-------------------------------------------------------------------------------

        #for the first iteration, j = 0, so k will equal the total number of valid rasters.
        #for subsequent iterations, it will count down until no rasters remain.
        k = i-j
        print str(k) + " rasters remaining"
#-------------------------------------------------------------------------------

        #Format the name of the topo; splitext drops the extension, then spaces are replaced with underscores
        outName = os.path.splitext(os.path.basename(raster))[0]
        print outName
        outName = str(outName)
        outName2 = outName.replace(" ","_")

        print outName2

        #By splitting the string using the index of the 3rd character, we skip the "WI_" in the filename
        shortened=str(outName[3:])
        #Then we use the string 'index' function to identify the index of the next underscore in the string (it directly follows the end of the filename!)
        endIndex=shortened.index("_")+3

        #using these two new indices, we make a new string of only the file's name
        name1 = str(outName[3:endIndex])
        #use the 'replace' string function to completely remove the spaces from the resulting filenames
        name2 = name1.replace(" ","_")

        rasterProj = os.path.join(clipGeodatabasePath,name2) + "_Proj"
        rasterTile = os.path.join(featureDatasetPath,name2)+"_Footprint"
#-------------------------------------------------------------------------------

        print "Projecting " + name2 ,datetime.datetime.now().strftime("%H:%M:%S")
        arcpy.ProjectRaster_management(raster,rasterProj,"","BILINEAR")
#-------------------------------------------------------------------------------

        #Create a polygon of the topo's footprint, including the collar
        print "Calculating raster domain"
        arcpy.RasterDomain_3d(rasterProj,rasterTile,"POLYGON")
#-------------------------------------------------------------------------------

        #add field to hold the projected raster's path name in the polygon's attribute table
        print "Adding a field"
        arcpy.AddField_management(rasterTile,field,"TEXT")

        #The following expression will be used to set the new field's value to the projected raster's name, using an update cursor
        expression = str(rasterProj)

        print "Calculating field" ,datetime.datetime.now().strftime("%H:%M:%S")
        with arcpy.da.UpdateCursor(rasterTile,field) as cursor:
            for row in cursor:
                row[0] = expression
                #Update row must be used for the changes to take effect!
                cursor.updateRow(row)
        del cursor
#-------------------------------------------------------------------------------

        #adding the tile to a list of tiles for a merge function
        tileList.append(rasterTile)

        j = j+1

        print ""
#-------------------------------------------------------------------------------

    #merging the tiles to make a complete tile index
    print "Merging the tiles" ,datetime.datetime.now().strftime("%H:%M:%S")
    arcpy.Merge_management(tileList,mergedTiles)
#-------------------------------------------------------------------------------

if clipRasters:
    #creating a cursor to iterate through the rasters, tile-by-tile
    print "Clipping the collars off of the rasters " ,datetime.datetime.now().strftime("%H:%M:%S")
    cursorForTopos = arcpy.da.SearchCursor(mergedTiles, field)
    rastersToMosaic=[]

    for row in cursorForTopos:
#-------------------------------------------------------------------------------

        #take the name from the "FILENAME" field
        inputRaster = str(row[0])

        #obtain only the base filename from the previous string
        inputRasterName = os.path.splitext(os.path.basename(inputRaster))[0]

        clipBoundary="ClipBoundary"
        arcpy.MakeFeatureLayer_management(clipTileIndex,clipBoundary)

        rasterBoundary="RasterBoundary"
        rasterNameField=arcpy.AddFieldDelimiters(mergedTiles,field)

        #In Python, '%s' acts as a placeholder within a string; the % operator then substitutes a value into it.
        #Here the input raster path is wrapped in single quotes so it can be used in a SQL query.
        value = "'%s'" % inputRaster

        #The next line allows for the creation of a sql query by using string substitution to replace both of the placeholders with the
        #string name field and the newly formatted 'inputRaster' value
        layerCreationSQLExpression = "%s = %s" % (rasterNameField, value)

        #This line uses the newly created sql statement to select only the tile that matches the cursor value, which is incredibly
        #important for future steps.
        arcpy.MakeFeatureLayer_management(mergedTiles,rasterBoundary,layerCreationSQLExpression)

        #This line selects the feature from the state tile index that has its center in the previously selected polygon
        arcpy.SelectLayerByLocation_management(clipBoundary,"HAVE_THEIR_CENTER_IN",rasterBoundary,"","NEW_SELECTION")

        outputRaster = os.path.join(clipGeodatabasePath,inputRasterName)+"_Clip"

        # This logic exists to identify if the raster has already been generated for a specific tile, and skips the operation if it has already been performed
        if arcpy.Exists(outputRaster):
            print inputRasterName + " has already been clipped"
            rastersToMosaic.append(outputRaster)
        else:
            # The raster will be clipped by the boundary identified using the select layer by location function.
            print "Clipping " + inputRasterName ,datetime.datetime.now().strftime("%H:%M:%S")
            arcpy.Clip_management(inputRaster,"",outputRaster,clipBoundary,"","ClippingGeometry","NO_MAINTAIN_EXTENT")
            print inputRasterName + " Clipped " ,datetime.datetime.now().strftime("%H:%M:%S")
            rastersToMosaic.append(outputRaster)

    #This del is essential; otherwise a schema lock may remain on the geodatabase, preventing access to its contents until ArcMap is restarted.
    del cursorForTopos
#-------------------------------------------------------------------------------

    if mosaicRasters:
        print "Mosaicking Topos" ,datetime.datetime.now().strftime("%H:%M:%S")
        arcpy.MosaicToNewRaster_management(rastersToMosaic,mergedFolderLocation,mergedName,sr,"8_BIT_UNSIGNED","",3)
        print "Mosaicking Complete" ,datetime.datetime.now().strftime("%H:%M:%S")

print "Script Complete!" ,datetime.datetime.now().strftime("%H:%M:%S")


Sources

All sources provided by Dr. Christina Hupy
USGS National Map - http://nationalmap.gov/

Wednesday, July 27, 2016

Exercise 7: Risk Model for Landslide Susceptibility in Oregon

The Objectives

The goal of this assignment was to combine raster files from previous exercises with major roads and landslide point shapefiles to create a landslide susceptibility map depicting risk levels along major roads. This exercise used the fishnet and zonal statistics functions to obtain the resulting map.

The Process

In order to accomplish this task, rasters from exercises 5 and 6 were used, along with roads and landslide point shapefiles. First, the standard modules and extensions were imported and variables were created. A square-kilometer fishnet was then created to be combined with rasters and other data later on, forming a grid for the landslide susceptibility of roads. Next, a road buffer was created and intersected with the fishnet. Then, the slope and landcover rasters were reclassified into five classes, each assigned a different risk factor. Lastly, zonal statistics were used to find median values for the unit of analysis, and the resulting table was joined with the analysis layer.
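The remap strings that drive the reclassification are just semicolon-separated "min max newValue" triples. A rough sketch of how such a string classifies a raw slope value (the helper functions are illustrative, and boundary handling in the actual Reclassify tool may differ slightly):

```python
def parse_remap(remap):
    """Parse an arcpy-style range remap string
    ('min max newValue;min max newValue;...') into numeric triples."""
    entries = []
    for part in remap.split(";"):
        lo, hi, val = part.split()
        entries.append((float(lo), float(hi), int(val)))
    return entries

def classify(value, entries):
    """Return the new value whose range contains the raw value,
    or None (NODATA) if no range matches."""
    for lo, hi, val in entries:
        if lo <= value <= hi:
            return val
    return None

# first three classes of the slope remap used in the script
slope_remap = "0 1.484953 1;1.484954 6.184316 2;6.184317 10.883696 3"
table = parse_remap(slope_remap)
```

Multiplying the two reclassified rasters then yields a combined risk value per cell, since each input cell already holds a 1-5 risk class.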

The Results
#-------------------------------------------------------------------------------
# Name:        Ex7
# Purpose:     Generates Risk Model for Landslides in Oregon
#
# Author:      Zach Miller
#
# Date:        7/26/2016
#-------------------------------------------------------------------------------

# ensure the program is working
print "script initialized"

# import system modules
import os
import shutil
import time
import datetime
import arcpy
from arcpy import env
from arcpy.sa import *
from arcpy.ddd import *

# Allow the script to overwrite existing files
arcpy.env.workspace = r"Q:\StudentCoursework\CHupy\GEOG.491.801.2167\MILLERZM\Ex6.gdb"
arcpy.env.overwriteOutput = True
arcpy.env.outputCoordinateSystem = r"Q:\StudentCoursework\CHupy\GEOG.491.801.2167\MILLERZM\Ex5\Ex5.gdb\OregonBoundary"

#NAD_1983_HARN_Oregon_Statewide_Lambert
arcpy.CheckOutExtension('3D')
arcpy.CheckOutExtension('spatial')
#-------------------------------------------------------------------------------

#create variables
geodatabasePath = r"Q:\StudentCoursework\CHupy\GEOG.491.801.2167\MILLERZM\Ex6.gdb"
featureDatasetPath = r"Q:\StudentCoursework\CHupy\GEOG.491.801.2167\MILLERZM\Ex6.gdb\OregonFCs"

majorRoads = r"Q:\StudentCoursework\CHupy\GEOG.491.801.2167\MILLERZM\Ex6.gdb\OregonFCs\MajorRoads_NW"
landslidePts = r"Q:\StudentCoursework\CHupy\GEOG.491.801.2167\MILLERZM\Ex6.gdb\OregonFCs\LandslidePoints_NW"
precipData = r"Q:\StudentCoursework\CHupy\GEOG.491.801.2167\MILLERZM\Ex6.gdb\NW_Oregon_PrecipitationData_800m"
slopeData = r"Q:\StudentCoursework\CHupy\GEOG.491.801.2167\MILLERZM\Ex5\Ex5.gdb\NW_Oregon_Slope_30m"
landUseData = r"Q:\StudentCoursework\CHupy\GEOG.491.801.2167\MILLERZM\Ex6.gdb\NLCD2011_Oregon_NW"

boundaryName = "ProcessingBoundary"
fishnetName = "Oregon_NW_1kmFishnet"
roadsBufferName = "MajorRoads_NW_Buffered"
roadsBuffIntName = "MajorRoads_NW_Buffer_1kmGrid"
riskRasterName = "NW_Oregon_LandslideRisk"
pointName = os.path.basename(landslidePts).rstrip(os.path.splitext(landslidePts)[1])

processingBoundary = os.path.join(featureDatasetPath,boundaryName)
outputFishnet = os.path.join(featureDatasetPath,fishnetName)
roadBufferOutput = os.path.join(featureDatasetPath,roadsBufferName)
roadBuffGridOutput = os.path.join(featureDatasetPath,roadsBuffIntName)
slopeReclassRaster = slopeData + "_Reclass"
lulcReclassRaster = landUseData + "_Reclass"

landslideRiskRaster = os.path.join(geodatabasePath,riskRasterName)

landslideRiskStatistics = landslideRiskRaster + "_Stats"

#field names
bufferedRoadsUniqueIDField = "FID_Oregon_NW_1kmFishnet"
#-------------------------------------------------------------------------------

#Create processing boundary by extracting the footprint from the slope raster
print "Creating Processing boundary!" ,datetime.datetime.now().strftime("%H:%M:%S")
arcpy.RasterDomain_3d(slopeData,processingBoundary,"POLYGON")

#Create fishnet (Grid) with 1km spacing
#variables for fishnet
desc = arcpy.Describe(processingBoundary)
gridSize = 1000.0
print "Creating Fishnet!" ,datetime.datetime.now().strftime("%H:%M:%S")
arcpy.CreateFishnet_management(outputFishnet,str(desc.extent.lowerLeft),str(desc.extent.XMin) + " " + str(desc.extent.YMax+10),gridSize,gridSize,"0","0",str(desc.extent.upperRight),"NO_LABELS",processingBoundary,"POLYGON")
#-------------------------------------------------------------------------------

# Process: Buffer
arcpy.Buffer_analysis(majorRoads, roadBufferOutput, "100 Meters", "FULL", "ROUND", "ALL", "", "PLANAR")
#-------------------------------------------------------------------------------

# Process: Intersect
print "Intersecting!" ,datetime.datetime.now().strftime("%H:%M:%S")
arcpy.Intersect_analysis([roadBufferOutput,outputFishnet], roadBuffGridOutput, "ALL", "", "INPUT")
#-------------------------------------------------------------------------------

#This string specifies different value ranges and the values these ranges will be assigned.
remapStringSlope = "0 1.484953 1;1.484954 6.184316 2;6.184317 10.883696 3;10.883697 15.583077 4;15.583078 29.681219 5;29.681220 34.380600 4; 34.380601 39.079981 3;39.079982 43.779362 2;43.779363 90 1"

#This string specifies risk values to the different nlcd classes based on their values.
remapStringNLCD = "11 1;21 4;22 4;23 3;24 1;31 2;41 4;42 5;43 5;52 5;71 4;81 3;82 3;90 3;95 1"
#-------------------------------------------------------------------------------

print "Reclassifying slope into risk classes " ,datetime.datetime.now().strftime("%H:%M:%S")
arcpy.Reclassify_3d(slopeData,"VALUE",remapStringSlope,slopeReclassRaster,"NODATA")

print "Reclassifying landuse into risk classes " ,datetime.datetime.now().strftime("%H:%M:%S")
arcpy.Reclassify_3d(landUseData,"VALUE",remapStringNLCD,lulcReclassRaster,"NODATA")
#-------------------------------------------------------------------------------

print "Multiplying the reclassified rasters to obtain a risk value"
arcpy.Times_3d(slopeReclassRaster,lulcReclassRaster,landslideRiskRaster)
#-------------------------------------------------------------------------------

# Process: Zonal Statistics as Table
arcpy.gp.ZonalStatisticsAsTable_sa(roadBuffGridOutput, bufferedRoadsUniqueIDField, landslideRiskRaster, landslideRiskStatistics, "DATA", "MEDIAN")
#-------------------------------------------------------------------------------

# Process: Join Field
arcpy.JoinField_management(roadBuffGridOutput, bufferedRoadsUniqueIDField, landslideRiskStatistics, bufferedRoadsUniqueIDField, "Median")

print "Script is complete"

Sources



All sources provided by Dr. Christina Hupy
Oregon Spatial Data Library
USGS National Map

Monday, July 25, 2016

Exercise 6: Analyzing Raster Data

The Objectives

The goals of this exercise were to use the raster files created in exercise 5, in combination with the precipitation and land use rasters provided in this exercise, to calculate landslide susceptibility in the study area. Ultimately, this produced an Excel table with fields including slope, precipitation, slide area, and land cover for landslides within the study area.

The Process

In order to produce this table, raster files (and their data tables) from exercise 5 were used, along with the new precipitation and land use rasters. First, the standard modules and extensions were imported and the workspace was set up. Next, variables were created; this is where the rasters were brought into the script to be manipulated. Output files were then named and added to a list of feature classes. Field names for the newly created feature classes' attribute tables were then defined. The Select tool was used to narrow down the desired landslide types, and the selected records were written to a point feature class. After that, a buffer distance field and a slide length field were created; the buffer was then executed and the resulting values added to the fields. An UpdateCursor was used to manage the values in the slope field. Next, summary statistics were calculated for all of the landslide variables. Lastly, these values were tabulated to correlate land cover type with landslide area, and the resulting table was exported to Microsoft Excel for further analysis.
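Two of the field calculations above reduce to small formulas: the buffer distance is the mean of slide length and width converted from feet to meters, and the update cursor prefers the zonal (buffered) mean slope, falling back to the point-sampled slope when the zonal value is null. A minimal sketch (function names are illustrative, not from the script):

```python
def buffer_distance_m(length_ft, width_ft):
    """Buffer distance used in the script: the mean of slide length
    and width, converted from feet to meters (1 ft = 0.3048 m)."""
    return (length_ft + width_ft) / 2.0 * 0.3048

def choose_slope(point_slope, buffered_slope):
    """UpdateCursor rule from the script: prefer the zonal (buffered)
    mean slope, falling back to the point-sampled value when the
    zonal result is null (None)."""
    return point_slope if buffered_slope is None else buffered_slope
```

The null fallback matters because very small buffers can fail to cover the center of any 30 m slope cell, leaving the zonal statistic empty.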

The Results

#-------------------------------------------------------------------------------
# Name:        Ex6
# Purpose:     Analyzes Raster Data
#
# Author:      Zach Miller
#
# Date:        7/22/2016
#-------------------------------------------------------------------------------

# ensure the program is working
print "script initialized"

# import system modules
import os
import shutil
import time
import datetime
import arcpy
from arcpy import env
from arcpy.sa import *
from arcpy.ddd import *

# Allow the script to overwrite existing files
arcpy.env.workspace = r"Q:\StudentCoursework\CHupy\GEOG.491.801.2167\MILLERZM\Ex6.gdb"
arcpy.env.overwriteOutput = True
arcpy.env.outputCoordinateSystem = r"Q:\StudentCoursework\CHupy\GEOG.491.801.2167\MILLERZM\Ex5\Ex5.gdb\OregonBoundary"

#NAD_1983_HARN_Oregon_Statewide_Lambert
arcpy.CheckOutExtension('3D')
arcpy.CheckOutExtension('spatial')
#-------------------------------------------------------------------------------

#create variables
geodatabasePath = r"Q:\StudentCoursework\CHupy\GEOG.491.801.2167\MILLERZM\Ex6.gdb"
featureDatasetPath = r"Q:\StudentCoursework\CHupy\GEOG.491.801.2167\MILLERZM\Ex6.gdb\OregonFCs"
otherDataFolder = r"Q:\StudentCoursework\CHupy\GEOG.491.801.2167\MILLERZM\Ex6.gdb"
landslidePts = r"Q:\StudentCoursework\CHupy\GEOG.491.801.2167\MILLERZM\Ex6.gdb\OregonFCs\LandslidePoints_NW"
precipData = r"Q:\StudentCoursework\CHupy\GEOG.491.801.2167\MILLERZM\Ex6.gdb\NW_Oregon_PrecipitationData_800m"
landUseData = r"Q:\StudentCoursework\CHupy\GEOG.491.801.2167\MILLERZM\Ex6.gdb\NLCD2011_Oregon_NW"
slopeData = r"Q:\StudentCoursework\CHupy\GEOG.491.801.2167\MILLERZM\Ex5\Ex5.gdb\NW_Oregon_Slope_30m"

pointName = os.path.basename(landslidePts).rstrip(os.path.splitext(landslidePts)[1])

pointSelectOutput = os.path.join(geodatabasePath,pointName)+"_Selected"
pointsWithLandUse = os.path.join(geodatabasePath,pointName) + "_LU"
pointsWithSlope = os.path.join(geodatabasePath,pointName) + "_Slope"
pointsWithPrecip = os.path.join(geodatabasePath,pointName) + "_Precip"

pointsBuffered = pointSelectOutput + "_Buffer"

bufferedSlopeStatistics = pointsBuffered + "_Slope"
bufferedSlideTypeSummary = pointsBuffered + "_Summary_SlideType"
bufferedLandUseSummary = pointsBuffered + "_Summary_LULC"
bufferedBothSummary = pointsBuffered + "_Summary_Both"
buffered_TabulateLULC = pointsBuffered + "_TabulateLULC"

tabulateLULC_ExcelTable = os.path.join(otherDataFolder,os.path.basename(buffered_TabulateLULC).rstrip(os.path.splitext(buffered_TabulateLULC)[1]))+".xls"

createdFCs = []
createdFCs.append(pointSelectOutput)
createdFCs.append(pointsWithLandUse)
createdFCs.append(pointsWithSlope)
createdFCs.append(pointsWithPrecip)

# field names
uniqueIDField = "UNIQUE_ID"
bufferDistanceField = "BufferDistance"
slideLengthmFieldName = 'Slide_Length_m'
rasterValueFieldName = "RASTERVALU"
lulcFieldName = "NLCD_2011"
moveClassFieldName = "MOVE_CLASS"
pointSlopeFieldName = 'Slope_Pt'
pointPrecipFieldName = 'PRECIP_mm'
bufferedSlopeFieldName = 'Slope_Buffered_Mean'
calculatedSlopeFieldName = 'Slope_Mean_Calculated'
#-------------------------------------------------------------------------------

# Process: Select
print "Selecting!" ,datetime.datetime.now().strftime("%H:%M:%S")
arcpy.Select_analysis(landslidePts, pointSelectOutput, "LENGTH_ft <> 0 AND WIDTH_ft <> 0 AND MOVE_CLASS IN ( 'Debris Flow', 'Earth Flow', 'Flow')")
#-------------------------------------------------------------------------------

# add land use class information to the landslide points only for their specific locations
ExtractValuesToPoints(pointSelectOutput,landUseData,pointsWithLandUse,"NONE","ALL")
arcpy.AlterField_management(pointsWithLandUse,rasterValueFieldName,"LULC_Value","LULC Value")

# Add slope information to the landslide points using interpolation
ExtractMultiValuesToPoints(pointsWithLandUse,[[slopeData,pointSlopeFieldName],[precipData, pointPrecipFieldName]],"BILINEAR")
#-------------------------------------------------------------------------------

# Process: Add Buffer Field
arcpy.AddField_management(pointsWithLandUse, bufferDistanceField, "FLOAT", "", "", "", "", "NULLABLE", "NON_REQUIRED", "")

# Process: Calculate Field
arcpy.CalculateField_management(pointsWithLandUse, bufferDistanceField, "(!LENGTH_ft!+ !WIDTH_ft!)/2 *0.3048", "PYTHON_9.3", "")

# Process: Add Slide Length Field
arcpy.AddField_management(pointsWithLandUse, slideLengthmFieldName, "FLOAT", "","", "", "", "NULLABLE", "NON_REQUIRED", "")

# Process: Calculate Field
arcpy.CalculateField_management(pointsWithLandUse,slideLengthmFieldName,"!LENGTH_ft!*0.3048", "PYTHON_9.3", "")

# Process: Buffer (2)
print "Buffering!", datetime.datetime.now().strftime("%H:%M:%S")
arcpy.Buffer_analysis(pointsWithLandUse, pointsBuffered, bufferDistanceField, "FULL","ROUND", "NONE", "", "PLANAR")
#-------------------------------------------------------------------------------

# Process: Zonal Statistics as Table
arcpy.gp.ZonalStatisticsAsTable_sa(pointsBuffered, uniqueIDField, slopeData, bufferedSlopeStatistics, "DATA", "ALL")

# Process: Join Field
arcpy.JoinField_management(pointsBuffered, uniqueIDField, bufferedSlopeStatistics, uniqueIDField, "Min;Max;Range;Mean;Std")

#alter the joined field name so we can easily identify it later
arcpy.AlterField_management(pointsBuffered,"MEAN",bufferedSlopeFieldName,bufferedSlopeFieldName)

# Process: Add Field for the search and update cursor
arcpy.AddField_management(pointsBuffered, calculatedSlopeFieldName, "FLOAT", "", "", "", "", "NULLABLE", "NON_REQUIRED", "")

# Process: Add Field for slide length
arcpy.AddField_management(pointsBuffered, slideLengthmFieldName, "FLOAT", "", "", "", "", "NULLABLE", "NON_REQUIRED", "")

# calculate field for slide length
arcpy.CalculateField_management(pointsBuffered, slideLengthmFieldName, "!LENGTH_ft!*0.3048", "PYTHON_9.3", "")
#-------------------------------------------------------------------------------

# Create and use an update cursor to set the slope value to the buffered value for all that are not null, and the point value for features that have a null buffered value
print "Setting values based on buffered and point slope!",datetime.datetime.now().strftime("%H:%M:%S")
listOfFields=[pointSlopeFieldName,bufferedSlopeFieldName,calculatedSlopeFieldName]

# The update cursor takes a list of fields and allows for the values to be changed by specific functions
with arcpy.da.UpdateCursor(pointsBuffered,listOfFields) as cursor:

    # iterate row by row (so each value will be calculated independently)
    for row in cursor:
        # if the buffered slope value [1] is null (None), set the value for the calculated slope [2] to the point value [0]
        # (FYI: list values use indices 0, 1, 2, etc.)
        if(row[1]== None):
            #print "Setting value to the point's slope value"
            row[2] = row[0]
            cursor.updateRow(row)
        # if the buffered slope value [1] is valid, set the value for the calculated slope [2] to the buffered value [1]
        else:
            #print "Setting value to the buffer's slope value"
            row[2] = row[1]
            cursor.updateRow(row)
del cursor
#-------------------------------------------------------------------------------

# Calculate summary statistics for precip, slope, and slide area, based on movement class
print "Calculating summary statistics for precip, slope, and slide area, based on movement class" ,datetime.datetime.now().strftime("%H:%M:%S")
arcpy.Statistics_analysis(pointsBuffered,bufferedSlideTypeSummary,[[pointPrecipFieldName,"MEAN"],[pointSlopeFieldName,"MEAN"],[bufferDistanceField,"MEAN"],[slideLengthmFieldName,"MEAN"]],moveClassFieldName)

# Calculate summary statistics for precip, slope, and slide area, based on LULC class
print "Calculating summary statistics for precip, slope, and slide area, based on LULC",datetime.datetime.now().strftime("%H:%M:%S")
arcpy.Statistics_analysis(pointsBuffered,bufferedLandUseSummary,[[pointPrecipFieldName,"MEAN"],[pointSlopeFieldName,"MEAN"],[bufferDistanceField,"MEAN"],[slideLengthmFieldName,"MEAN"]],lulcFieldName)

# Calculate summary statistics for precip, slope, and slide area, based on movement class and LULC class
print "Calculating summary statistics for precip, slope, and slide area, based on LULC and movement class",datetime.datetime.now().strftime("%H:%M:%S")
arcpy.Statistics_analysis(pointsBuffered,bufferedBothSummary,[[pointPrecipFieldName,"MEAN"],[pointSlopeFieldName,"MEAN"],[bufferDistanceField,"MEAN"],[slideLengthmFieldName,"MEAN"]],[lulcFieldName,moveClassFieldName])
#-------------------------------------------------------------------------------

#Tabulate area is used to identify how much of each buffer falls within different LULC classes
print "Tabulating the area of each LULC class for each buffered point"
TabulateArea(pointsBuffered,uniqueIDField,landUseData,lulcFieldName,buffered_TabulateLULC,15)

#Export the table to excel for additional analysis!
arcpy.TableToExcel_conversion(buffered_TabulateLULC,tabulateLULC_ExcelTable)

for fc in createdFCs:
    if arcpy.Exists(fc):
        print fc + " exists, and will now be deleted"
        arcpy.Delete_management(fc)

print "Script is complete"

Sources

All sources provided by Dr. Christina Hupy

Wednesday, July 20, 2016

Exercise 5: Preparing Raster Data For Analysis Using Raster Lists and Loops

The Objectives

The goal of this assignment was to set up a loop for formatting, projecting, clipping, hill shading, and calculating a list of raster files.

The Process

To complete this task, a series of commands was written as a Python script. First, the arcpy, os, time, datetime, and shutil modules were imported, along with the Spatial Analyst and 3D Analyst extensions. Then, a workspace was set up and overwrite was enabled to allow for data manipulation and saving: "Q:\StudentCoursework\CHupy\GEOG.491.801.2167\MILLERZM\Ex5\oregon". After that, the output coordinate system was established and variables were created. Next, raster files were listed from the workspace, pulling only .img files. A "for in" loop was then used to run each raster in the list through all of the manipulations (ie. project, clip, hillshade, slope) established in the variable-creation step. This serves the same role as the iterator tool in ModelBuilder used in exercise 2. Lastly, the manipulated raster files were merged with a mosaic operation.
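The name-formatting step inside the loop (strip the directory and extension, then slice off a leading prefix) can be sketched with the standard library alone; the sample tile name below is hypothetical:

```python
import os

def format_raster_name(raster_path, prefix_len=3):
    """Strip the directory and extension, then slice off the leading characters."""
    base = os.path.splitext(os.path.basename(raster_path))[0]
    return base[prefix_len:]

# A made-up tile name following the img<coordinates> pattern
print(format_raster_name("img45123a1.img"))  # -> "45123a1"
```

Using `os.path.splitext` removes exactly the extension; stripping it character-by-character (e.g. with `rstrip(".img")`) would also eat trailing letters of the base name, which is why the splitext form is preferred.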

The Results

The script generated four shape files and four raster files to be used together.

#-------------------------------------------------------------------------------
# Name:        Exercise 5
# Purpose:     Prepares Raster data for analysis
#
# Author:      Zach Miller
#
# Date:        7/20/2016
#-------------------------------------------------------------------------------

#ensure the program is working
print "script initialized"

#import system modules
import os
import time
import datetime
import arcpy
from arcpy import env
import shutil
from arcpy.sa import *
from arcpy.ddd import *

# Allow the script to overwrite existing files
arcpy.env.workspace = r"Q:\StudentCoursework\CHupy\GEOG.491.801.2167\MILLERZM\Ex5\oregon"
arcpy.env.overwriteOutput = True

#The next line needs to be set to your EX5 geodatabase's Oregon Boundary feature class
arcpy.env.outputCoordinateSystem = r"Q:\StudentCoursework\CHupy\GEOG.491.801.2167\MILLERZM\Ex5\Ex5.gdb\OregonBoundary"

#"NAD_1983_HARN_Oregon_Statewide_Lambert" projection
arcpy.CheckOutExtension('3D')
arcpy.CheckOutExtension('spatial')
#-------------------------------------------------------------------------------

#create variables
print "Obtaining a List of rasters" ,datetime.datetime.now().strftime("%H:%M:%S")

#The workspace is the folder holding the rasters
workspace = r"Q:\StudentCoursework\CHupy\GEOG.491.801.2167\MILLERZM\Ex5\oregon"
geodatabasePath = r"Q:\StudentCoursework\CHupy\GEOG.491.801.2167\MILLERZM\Ex5\Ex5.gdb"

# Set the boundary used to clip all of the rasters to the Oregon State boundary
clipBoundary = r"Q:\StudentCoursework\CHupy\GEOG.491.801.2167\MILLERZM\Ex5\Ex5.gdb\OregonBoundary"

#This list will hold all of the clipped rasters for a future mosaicing operation
clipList = []

#This list is similar to the clip list, but is for hillshades
hsList = []

#This list is for slope rasters
slopeList = []

#The following names will be used for the three mosaic operations: Elevation, Hillshade, and Slope
mosaicedDEM = "NW_Oregon_Elevation_30m"
mosaicedHS = "NW_Oregon_Hillshade_30m"
mosaicedSlope = "NW_Oregon_Slope_30m"
#-------------------------------------------------------------------------------

#create a list of rasters with .img extension
print "Obtaining a list of rasters"
rasters = arcpy.ListRasters("","IMG")
#-------------------------------------------------------------------------------

#Set up "for in" loop
for raster in rasters:
    print "Formatting the raster's name"
    #-------------------------------------------------------------------------------
    #using os.path.basename and os.path.splitext, we obtain just the base name of the raster, without the file extension or source path.
    outName = os.path.splitext(os.path.basename(raster))[0]
    print "Unformatted name: " + outName
    outName = str(outName)

    #We use a string 'slice' to remove the first three characters from the filename...essentially removing the img prefix, and leaving only the coordinates
    fileName = str(outName[3:])
    print "Formatted name: " + fileName

    #Next, using the newly reformatted filename, we set all of the names for the new rasters we are to create.
    rasterProj = os.path.join(geodatabasePath,fileName) + "_Proj"
    rasterClip = os.path.join(geodatabasePath,fileName) + "_Clip"
    rasterHS = os.path.join(geodatabasePath,fileName) + "_HS"
    rasterSlope = os.path.join(geodatabasePath,fileName) + "_Slope"

    #print the projected-raster path to make sure the name formatting is working.
    print rasterProj
    #-------------------------------------------------------------------------------

    #We already set the coordinate system using the 'output coordinate system' environment setting so we don't need to worry about it here.
    # Be sure to use "BILINEAR" or "CUBIC" resampling when projecting elevation and other continuous datasets, as they will otherwise have incorrect values.
    print "Projecting " + fileName , datetime.datetime.now().strftime("%H:%M:%S")
    arcpy.ProjectRaster_management(raster,rasterProj,"","BILINEAR")
    #-------------------------------------------------------------------------------

    #The projected raster will be clipped to the state boundary, and "NO_MAINTAIN_EXTENT" is set so the alignment of the pixels isn't changed by the clip function
    print "Clipping " + fileName
    arcpy.Clip_management(rasterProj,"",rasterClip,clipBoundary,"","ClippingGeometry","NO_MAINTAIN_EXTENT")
    clipList.append(rasterClip)
    #-------------------------------------------------------------------------------

    #run a hillshade on the clipped rasters
    print "Hillshading " + fileName
    arcpy.HillShade_3d(rasterClip,rasterHS)
    hsList.append(rasterHS)
    #-------------------------------------------------------------------------------

    #calculate slope on the clipped rasters
    print "Calculating slope of " + fileName
    arcpy.Slope_3d(rasterClip,rasterSlope)
    slopeList.append(rasterSlope)

print "loop is complete"
#-------------------------------------------------------------------------------

#merging the tiles
print "Mosaicing the clipped DEMS"
arcpy.MosaicToNewRaster_management(clipList,geodatabasePath,mosaicedDEM,"","32_BIT_FLOAT","",1)
print "Mosaicing the hillshaded DEMS"
arcpy.MosaicToNewRaster_management(hsList,geodatabasePath,mosaicedHS,"","8_BIT_UNSIGNED","",1)
print "Mosaicing the slope rasters"
arcpy.MosaicToNewRaster_management(slopeList,geodatabasePath,mosaicedSlope,"","32_BIT_FLOAT","",1)

print "script is complete" ,datetime.datetime.now().strftime("%H:%M:%S")

Sources

All sources provided by Dr. Christina Hupy

Monday, July 18, 2016

Exercise 4: Working With Fields and Selections

The Objectives
The goal of this exercise was to add and calculate fields as well as select features using SQL statements in Python. This was done by using the script (and data) from exercise three and modifying some of the tools used.

The Process

First, the system modules needed for this exercise were imported. Next, variables were set up. Then, the 'add field', 'calculate field', and 'select' tools were used to compute each polygon's area in square kilometers and keep only those larger than 2 km². Finally, compactness was calculated for the selected polygons and the exercise was complete.
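The two field calculations in this exercise amount to a unit conversion and a shape ratio. A minimal sketch, assuming Shape_Area is in square meters and Shape_Length (the perimeter) is in meters, as in the script below:

```python
def area_km2(shape_area_m2):
    """Convert square meters to square kilometers, as in the Area_km2 field."""
    return shape_area_m2 / 1000000.0

def compactness(shape_area_m2, shape_length_m):
    """Area divided by perimeter squared; higher values mean more compact shapes."""
    return shape_area_m2 / (shape_length_m * shape_length_m)

# A 2 km x 2 km square: area 4 km2, compactness 1/16
print(area_km2(4000000.0))             # -> 4.0
print(compactness(4000000.0, 8000.0))  # -> 0.0625
```

Because compactness divides area by perimeter squared, the ratio is unitless, and a square always scores 1/16 regardless of its size; long, skinny polygons score lower.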

The Results
#-------------------------------------------------------------------------------
# Name:        Ex4
# Purpose:     Calculate a Field and Apply an SQL Statement
#
# Author:      Zach Miller
#
# Date:        7/18/2016
#-------------------------------------------------------------------------------

#Ex4

#ensuring the program is working
print "Script initialized"

#import system models
import arcpy
from arcpy import env
import os
import time
import datetime

#allow the script to overwrite existing files
arcpy.env.overwriteOutput = True

#create variables
print "Create Variables",datetime.datetime.now().strftime("%H:%M:%S")

#input geodatabase
ex3geodatabasePath = r"Q:\StudentCoursework\CHupy\GEOG.491.801.2167\MILLERZM\BlankResults_Ex3.gdb"

#This filename will be used for the subsequent 'dissolve output' fc name
intersectName = "IntersectedFcs"
dissolveOutput = os.path.join(ex3geodatabasePath,intersectName)+"_Dissolved"

#outputVariables
selectName = "DissolveFC_Selected"
finalSelect = os.path.join(ex3geodatabasePath,selectName)
print "Adding a field to the dissolved fcs",datetime.datetime.now().strftime("%H:%M:%S")

# Process: Add Field
arcpy.AddField_management(dissolveOutput, "Area_km2", "FLOAT", "", "", "", "","NULLABLE", "NON_REQUIRED", "")
print "Calculating the area in km2" ,datetime.datetime.now().strftime("%H:%M:%S")

# Process: Calculate Field
arcpy.CalculateField_management(dissolveOutput, "Area_km2", "[Shape_Area] / 1000000", "VB", "")
print "Selecting only polygons with areas greater than 2 km2",datetime.datetime.now().strftime("%H:%M:%S")

# Process: Select (4)
arcpy.Select_analysis(dissolveOutput, finalSelect, "Area_km2 >2")
print "Adding a field for calculating compactness",datetime.datetime.now().strftime("%H:%M:%S")

# Process: Add Field (2)
arcpy.AddField_management(finalSelect, "Compactness_Float", "FLOAT", "", "", "", "","NULLABLE", "NON_REQUIRED", "")
print "Calculating compactness" ,datetime.datetime.now().strftime("%H:%M:%S")

# Process: Calculate Field (2)
arcpy.CalculateField_management(finalSelect, "Compactness_Float", "[Shape_Area] / ([Shape_Length] * [Shape_Length] )", "VB", "")


print "Calculations complete" ,datetime.datetime.now().strftime("%H:%M:%S")

Sources

All sources provided by Dr. Christina Hupy

Exercise 3: Intro to Geoprocessing With Python

The Objectives

The goal of this assignment was to become familiar with importing modules to set up a script, creating smart variables, and exporting geoprocessing models as scripts. This task was accomplished by simply recreating the steps done in the first exercise of the class, through scripting in Python.

The Process

First, a connection had to be made to the model created in exercise one; this was done by exporting the model as a .py file and saving it alongside a geodatabase created for exercise three. Then, the ArcPy and standard Python modules needed to write the script were imported, including os, time, and datetime. Next, variables were created by joining file names to their respective geodatabase paths, and tools were used to manipulate the data. Once the data was clipped, buffered, selected, etc., it was exported to the new geodatabase created for this exercise. Lastly, the intermediate data was cleaned up and the script generated a new file within the geodatabase.
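The variable-creation pattern described above builds each full output path by joining a geodatabase path with a layer name and a suffix. A small sketch of that pattern; the geodatabase path here is illustrative, not the course share:

```python
import os

def gdb_path(geodatabase, name, suffix=""):
    """Join a geodatabase path with a feature-class name and an optional suffix."""
    return os.path.join(geodatabase, name) + suffix

gdb = r"Q:\Example\Results.gdb"  # made-up path for illustration
print(gdb_path(gdb, "SnowDepth_in", "_Clip"))
```

Building names this way means changing the geodatabase location only requires editing one variable, rather than every hard-coded string in the script.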

The Results

#-------------------------------------------------------------------------------
# Name:        Ex3
# Purpose:     Finds ideal areas for ski resorts
#
# Author:      Zach Miller
#
# Date:        7/15/2016
#-------------------------------------------------------------------------------
# Ensuring the program is working
print "Script initialized"

#import system models
import arcpy
from arcpy import env
import os
import time
import datetime
arcpy.env.overwriteOutput = True

#create variables
print "Create Variables",datetime.datetime.now().strftime("%H:%M:%S")

#input geodatabase
ex1geodatabasePath = r"Q:\StudentCoursework\CHupy\GEOG.491.801.2167\MILLERZM\Ex1.gdb"

#input names
nfsLand = "NFS_Land"
snowDepth = "SnowDepth_in"
temp = "Temperature_F"
studyArea = "StudyArea"
airports = "Airports"

#linking geodatabase with name
nfsInput = os.path.join(ex1geodatabasePath,nfsLand)
snowDepthInput = os.path.join(ex1geodatabasePath,snowDepth)
tempInput = os.path.join(ex1geodatabasePath,temp)
studyAreaInput = os.path.join(ex1geodatabasePath,studyArea)
airportsInput = r"Q:\StudentCoursework\CHupy\GEOG.491.801.2167\MILLERZM\Ex1.gdb\Airports_Project"

#output variables
ex3geodatabasePath = r"Q:\StudentCoursework\CHupy\GEOG.491.801.2167\MILLERZM\BlankResults_Ex3.gdb"

#clipped
nfsLandClip = os.path.join(ex3geodatabasePath,nfsLand)+"_Clip"
snowDepthClip = os.path.join(ex3geodatabasePath,snowDepth)+"_Clip"
tempClip = os.path.join(ex3geodatabasePath,temp)+"_Clip"
airportsClip = os.path.join(ex3geodatabasePath,airports)+"_Clip"

#selected
snowDepthSelected = os.path.join(ex3geodatabasePath,snowDepth)+"_Selected"
tempSelected = os.path.join(ex3geodatabasePath,temp)+"_Selected"
airportsSelected = os.path.join(ex3geodatabasePath,airports)+"_Selected"

#buffer output
airportsBuffered = os.path.join(ex3geodatabasePath,airports)+"_Buffered"

#intersect output
intersectName = "IntersectedFcs"
intersectOutput = os.path.join(ex3geodatabasePath,intersectName)

#dissolve output
dissolveOutput = os.path.join(ex3geodatabasePath,intersectName)+"_Dissolved"
finalSelect = os.path.join(ex3geodatabasePath,intersectName)+"_MoreThan2km2"

#begin processing
print "Starting to Process",datetime.datetime.now().strftime("%H:%M:%S")
print "Clipping fcs to within the study area",datetime.datetime.now().strftime("%H:%M:%S")

#clip all of the feature classes to the study area

#clip snow depth
arcpy.Clip_analysis(snowDepthInput,studyAreaInput,snowDepthClip,"")

#clip temp
arcpy.Clip_analysis(tempInput,studyAreaInput,tempClip,"")

#clip nfs land
arcpy.Clip_analysis(nfsInput,studyAreaInput,nfsLandClip,"")

#clip airports
arcpy.Clip_analysis(airportsInput,studyAreaInput,airportsClip,"")

#select meaningful values from the clipped classes
print "Selecting only meaningful values from the clipped classes",datetime.datetime.now().strftime("%H:%M:%S")
arcpy.Select_analysis(snowDepthClip,snowDepthSelected,"gridcode>=250")

#process: select (2)
arcpy.Select_analysis(tempClip,tempSelected,"gridcode<32")

#process: select (3)
arcpy.Select_analysis(airportsClip,airportsSelected,"OwnerType='Pu' AND hasTower='Y'")
print "Buffering selected airports",datetime.datetime.now().strftime("%H:%M:%S")

#process: buffer
arcpy.Buffer_analysis(airportsSelected,airportsBuffered,"40 Miles","FULL","ROUND","ALL","","PLANAR")
print "Intersecting the fcs",datetime.datetime.now().strftime("%H:%M:%S")

#process: Intersect (2)
arcpy.Intersect_analysis([snowDepthSelected,tempSelected,nfsLandClip,airportsBuffered],intersectOutput,"ALL","","INPUT")
print "Dissolving the intersected fcs",datetime.datetime.now().strftime("%H:%M:%S")

#process: Dissolve (2)
arcpy.Dissolve_management(intersectOutput,dissolveOutput,"","","SINGLE_PART","DISSOLVE_LINES")

print "Script is complete"

Sources

All sources provided by Dr. Christina Hupy

Tuesday, July 12, 2016

Exercise 2: Advanced Module Builder

The Objectives

The goal of this assignment was to learn more advanced skills of model builder such as inline variable substitution and iteration. In the scenario for this assignment, the task at hand was to characterize ski runs from three different ski resorts by their skill level. The data provided included feature classes from Vail, Beaver Creek, and Breckenridge ski resorts, as well as slope, elevation, and aspect raster files.

The Process

First, an iterator was used to run the model multiple times, producing shapefiles for each of the feature classes. From there, a buffer of 10 meters was placed around each feature class. Next, three "zonal statistics as table" tools were used to summarize the three raster files provided (aspect, slope, and elevation), producing three tables. These tables were then joined (using the "join" tool), first with the original buffer output, then daisy-chained through the outputs that followed until all three zonal statistics tables were included. This put everything into one table. Lastly, the "select" tool allowed queries to be used to create four feature classes: beginner, intermediate, advanced, and expert classifications.
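The final "select" queries above bucket each ski run into a skill class based on the joined statistics. As a hedged sketch of that classification step, here is a simple threshold function; the slope cutoffs are invented for illustration and are not the values used in the exercise:

```python
def classify_run(mean_slope_deg):
    """Bucket a ski run by its mean slope in degrees.
    The thresholds below are hypothetical, chosen only to illustrate the idea."""
    if mean_slope_deg < 15:
        return "beginner"
    elif mean_slope_deg < 25:
        return "intermediate"
    elif mean_slope_deg < 35:
        return "advanced"
    return "expert"

for slope in (10, 20, 30, 40):
    print(slope, classify_run(slope))
```

In the actual model the same partitioning is expressed as four SQL queries in four "select" tools, one per output feature class, rather than a single function.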

The Results

This model produced data that could be put into visual form in a cohesive manner. The data is easy to understand and gives the viewer a good idea of the layout of the ski slopes. For example, in the south-central region of the resort there is a cluster of expert and advanced runs that one might perceive to be a ridge in the mountain.

Sources

All sources provided by Dr. Christina Hupy

Monday, July 11, 2016

Exercise 1: Review Geoprocessing

The Objectives

The goal of this assignment was to reintroduce geoprocessing skills learned in Introduction to GIS. This included reviewing how to make a data flow model in Model Builder, as well as using ArcCatalog and ArcMap. In the scenario for this assignment, there was a motive to find suitable areas for a new ski resort expansion in the United States' Rocky Mountain region. The set of criteria for suitable locations included: being within designated National Forest lands, having an average snowfall of more than 240 inches, having an average winter temperature below 32 degrees Fahrenheit, being within 40 miles of a commercial public airport, and having an area of greater than 2 square kilometers.

The Process

The tools used to obtain this data included: project, clip, select, buffer, intersect, dissolve, add field, and calculate field. First, the (correctly projected) "airports", "national forested lands", "snow depth (inches)", and "temperature" shapefiles were clipped to the "Study Area" shapefile. From there, attributes were selected for the "airports", "temperature", and "snow depth" shapefiles. For "airports", the attributes were being a public airport and having a control tower, and a 40 mile buffer was then placed around each selected location. For "temperature", the attribute was having an average winter temperature below 32 degrees Fahrenheit. For "snow depth", the attribute was having an average annual snowfall greater than 240 inches. Once these shapefiles matched the assignment's scenario, they were intersected to bring all of the elements together. Next, the dissolve tool was used to get rid of interior lines in each layer. Then, the "add field" and "calculate field" tools were used to create a new field in the attribute table named "Areakm2". The areas greater than two square kilometers were then selected using the "select" tool. Lastly, compactness was calculated using the "add field" and "calculate field" tools.
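The selection criteria listed above can be read as a single predicate over candidate sites. A hedged sketch with hypothetical field names (these are not fields from the course data):

```python
def is_suitable(site):
    """Apply the ski-resort criteria: NFS land, >240 in snowfall, <32 F winters,
    within 40 miles of a public airport, and more than 2 km2 in area."""
    return (site["on_nfs_land"]
            and site["snowfall_in"] > 240
            and site["winter_temp_f"] < 32
            and site["airport_dist_mi"] <= 40
            and site["area_km2"] > 2)

candidate = {"on_nfs_land": True, "snowfall_in": 300, "winter_temp_f": 28,
             "airport_dist_mi": 25, "area_km2": 3.5}
print(is_suitable(candidate))  # -> True
```

In the ModelBuilder workflow each clause of this predicate is applied by a separate tool (clip, select, buffer, intersect), with the intersect step effectively taking the logical AND of the layers.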

The Results


This scenario produced results with suitable areas located in Montana, Idaho, Utah, Colorado, New Mexico, and Arizona, but none in Wyoming. Other factors, such as elevation and proximity to existing ski resorts, might have to be added to find even more accurate and appropriate locations for the specific task at hand.

Sources

All sources provided by Dr. Christina Hupy