A set of ArcGIS tools that assist with sampling and scoring spatial data by enabling proportional allocations, density sampling, and different scoring methods. Documentation for each tool in the scripts folder and toolbox is provided in the sections below.
This tool is intended to provide a way to use a sampling geography to calculate proportional averages or sums based on the percentage of an intersection covered by the sampling geography. The output is the sampling geography with fields sampled from the base features.
The goal of this script is to enable analysis of demographic or other area-based data using arbitrary sampling polygons.
Parameter | Explanation | Data Type |
---|---|---|
Sampling_Features | Dialog Reference: The sampling features are the features you want to associate proportional averages or sums with, drawn from the attributes in the base features. The output will look like this input polygon layer with new fields. | Feature Layer |
Base Features | Dialog Reference: The base features have the attributes being sampled by the polygon sampling features. | Multiple Value |
Output Features | Dialog Reference: The output feature class is a copy of the sampling features with new sum and average fields. | Multiple Value |
Sum Fields | Dialog Reference: Fields to proportionally sum (based on the overlapping areas between the sampling and base features) from the base to the sampling features. | Multiple Value |
Mean Fields | Dialog Reference: Fields to proportionally average (based on the overlapping areas between the sampling and base features) from the base to the sampling features. | Multiple Value |
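A minimal sketch of the area-weighting idea behind this tool (not the tool's exact implementation), assuming hypothetical paths and a single sum field, and assuming the Intersect output uses its usual `FID_<input name>` fields to point back at the inputs:

```python
import arcpy
import pandas as pd

sampling_fc = r"C:\data\demo.gdb\sampling_polygons"  # hypothetical path
base_fc = r"C:\data\demo.gdb\census_blocks"          # hypothetical path
sum_field = "POP"                                    # hypothetical field to sum

# Record the full area of every base feature before intersecting.
base_area = {oid: area for oid, area in
             arcpy.da.SearchCursor(base_fc, ["OID@", "SHAPE@AREA"])}

# Intersect the sampling and base features so each piece knows both parents.
intersect_fc = arcpy.analysis.Intersect([sampling_fc, base_fc],
                                        "in_memory/intersect_pieces")[0]

# Assumed FID field names written by the Intersect tool.
fid_samp = "FID_" + arcpy.Describe(sampling_fc).baseName
fid_base = "FID_" + arcpy.Describe(base_fc).baseName

# Each piece contributes value * (piece area / original base feature area).
rows = []
with arcpy.da.SearchCursor(intersect_fc,
                           [fid_samp, fid_base, sum_field, "SHAPE@AREA"]) as cur:
    for samp_id, base_id, value, piece_area in cur:
        rows.append((samp_id, value * piece_area / base_area[base_id]))

df = pd.DataFrame(rows, columns=["sampling_id", "weighted_value"])
proportional_sums = df.groupby("sampling_id")["weighted_value"].sum()
print(proportional_sums.head())
```

Proportional means follow the same pattern, with the weighted values divided by the summed area shares per sampling polygon.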
This script is intended to aid a density-based network/vector analysis process by computing KDEs, associating them with a target vector file, and computing percentile scores of non-zero/null density scores. This helps with cartography and analysis on networks and other vector data.
The goal of this script is to assist in creating clean density maps using networks and to assist with planning prioritization processes by scoring those chosen densities according to multiple weights in a single step. This tool leverages memory workspaces only usable in ArcGIS Pro, and it will no longer operate in ArcMap.
Parameter | Explanation | Data Type |
---|---|---|
Input_Feature_Class | Dialog Reference: Feature class of point values that will be used to compute kernel densities. If the fields already exist, they will be updated by the tool. | Feature Class |
Weight_Fields | Dialog Reference: Density feature class fields that are used to both weight and filter kernel density estimates. Each kernel density is computed on non-null values, but a weight of 0 will still be treated as non-existent data. | Fields |
Input_Target_Vector | Dialog Reference: This is the target network/vector that the kernel densities will be associated with. Zero values will be turned into nulls. | Feature Class |
Add_Percentiles (Optional) | Dialog Reference: If true, this will add a percentile calculation for every weight field. | Boolean |
Cell_Size, Search_Radius, and Unit Area Factor | Dialog Reference: These are the KDE control parameters that the tool will use to compute the kernel densities of all the weighted elements in the input feature class. You can find more information in the Kernel Density tool's documentation. | Multiple Values |
Barrier Features | Dialog Reference: The dataset that defines the barriers for KDE estimation (impacts shortest distances). The barriers can be a feature layer of polyline or polygon features. | Multiple Values |
Intermediate Raster | Dialog Reference: The output save location for intermediate raster files from the kernel density. | Raster Dataset |
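The core of this workflow could look roughly like the sketch below. This is an illustrative outline, not the tool's code: a weighted kernel density is computed, then mean density is summarized onto the target vector with zonal statistics as one plausible association step. All paths, field names, and KDE parameters are placeholders.

```python
import arcpy
from arcpy.sa import KernelDensity, ZonalStatisticsAsTable

arcpy.CheckOutExtension("Spatial")

points = r"C:\data\demo.gdb\crash_points"      # hypothetical input points
network = r"C:\data\demo.gdb\street_network"   # hypothetical target vector
weight_field = "SEVERITY"                      # hypothetical weight field

# Kernel density weighted by the chosen field (cell size and search radius are
# illustrative stand-ins for the KDE control parameters).
density = KernelDensity(points, weight_field, cell_size=50, search_radius=400)
density.save(r"C:\data\demo.gdb\kde_severity")  # intermediate raster location

# Summarize mean density under each network segment; the resulting table can
# then be joined back to the network on OBJECTID.
ZonalStatisticsAsTable(network, "OBJECTID", density,
                       r"C:\data\demo.gdb\kde_zonal", "DATA", "MEAN")
```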
This ArcGIS scripting tool is designed to take selected fields and add a new field with a Z score for each of the selected fields.
The goal of this script is to add new fields with standardized Z Scores for every field selected. The Z Scores are based on the values of each column, so they will change depending on the extent of the current data set.
Parameter | Explanation | Data Type |
---|---|---|
Input_Feature_Class | Dialog Reference: This is the selected input feature class that will have new fields with Z scores calculated and joined to it. If the fields already exist, they will be updated by the tool. Python Reference: The tool uses the ExtendTable function from the arcpy.da module to join a structured numpy array of column-wise Z scores to the feature class. | Feature Layer |
Fields_to_Standarize | Dialog Reference: These are the fields that will have their Z scores calculated within a pandas data frame, converted to a structured numpy array, and then joined to the input feature class based on the object ID. The fields added will be in the form of "Zscore_"+%FieldName%. If a field of that form already exists in the table, it will be updated. Python Reference: Generally the fields are selected from the feature class, converted into a numpy array, then into a pandas data frame, then back to a structured numpy array to be joined based on the object ID. This tool assumes there is an object ID to join on. | Multiple Value |
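A simplified sketch of that join pattern, assuming hypothetical paths and fields (rows with nulls are simply skipped here, whereas the tool's own null handling may differ):

```python
import arcpy
import numpy as np
import pandas as pd

fc = r"C:\data\demo.gdb\parcels"        # hypothetical feature class
fields = ["LAND_VALUE", "AREA_SQFT"]    # hypothetical fields to standardize
oid = arcpy.Describe(fc).OIDFieldName

# Pull the fields into pandas (rows with nulls are skipped in this sketch).
arr = arcpy.da.FeatureClassToNumPyArray(fc, [oid] + fields, skip_nulls=True)
df = pd.DataFrame(arr)

# Column-wise Z scores: (value - column mean) / column standard deviation.
zscore_cols = []
for f in fields:
    col = "Zscore_" + f
    df[col] = (df[f] - df[f].mean()) / df[f].std()
    zscore_cols.append(col)

# Build a structured array keyed on the object ID and join it back.
out_dtype = [(oid, "<i4")] + [(c, "<f8") for c in zscore_cols]
out_array = np.array(
    list(df[[oid] + zscore_cols].itertuples(index=False, name=None)),
    dtype=out_dtype)
arcpy.da.ExtendTable(fc, oid, out_array, oid, append_only=False)
```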
This ArcGIS scripting tool is designed to take selected fields and add a new field with a percentile score for each of the selected fields.
The goal of this script is to add new fields with percentile scores for every field selected. The percentile scores are based on the values of each column, so they will change depending on the extent of the current data set.
Parameter | Explanation | Data Type |
---|---|---|
Input_Feature_Class | Dialog Reference: This is the selected input feature class that will have new fields with percentiles calculated and joined to it. If the fields already exist, they will be updated by the tool. Python Reference: The tool uses the ExtendTable function from the arcpy.da module to join a structured numpy array of column-wise percentile scores to the feature class. | Feature Layer |
Percentile_Fields | Dialog Reference: These are the fields on which the percentile scores added to the input feature class will be based. Python Reference: Generally the fields are selected from the feature class, converted into a numpy array, then into a pandas data frame, then back to a structured numpy array to be joined based on the object ID. This tool assumes there is an object ID to join on. The percentile scores are percent ranks computed with the pandas rank function. | Multiple Value |
Other Parameters* | Dialog Reference: This tool has a host of other parameters, including parameters to invert scores (e.g., change a high-to-low ranking to low-to-high), change the ranking method (average vs. max), designate values to fill null scores, and choose relative ranking field groups. These parameters are documented in the tool metadata. Python Reference: Generally the fields are selected from the feature class, converted into a numpy array, then into a pandas data frame, then back to a structured numpy array to be joined based on the object ID. This tool assumes there is an object ID to join on. | Multiple Value |
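The percent-rank core of the scoring could be sketched as follows with the pandas rank function; the series values, null fill value, and inversion choice are illustrative only:

```python
import pandas as pd

values = pd.Series([10, 25, 25, 40, None, 90], name="TRAFFIC")  # hypothetical

# Percent ranks of non-null values; ties handled with the "average" method.
pct = values.rank(pct=True, method="average")

# Inverting scores can be done by ranking in descending order instead.
inverted = values.rank(pct=True, method="average", ascending=False)

# Fill null scores with a designated value (0 here, as an example).
pct = pct.fillna(0.0)
print(pct.tolist())
```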
This tool is designed to perform min-max scaling on specified fields within an input feature class. By applying this scaling technique, fields are linearly normalized between a defined minimum and maximum value. Additionally, users have the option to set percentiles that can adjust what is considered the minimum or maximum, allowing for more flexible scaling based on percentile scores.
The primary objective of this function is to facilitate the scaling of field values in a feature class, such that the values fall within a specified target range. This can be especially useful when comparing or visualizing datasets with different scales or units.
Parameter | Explanation | Data Type |
---|---|---|
Input Feature Class | Dialog Reference: This is the selected input feature class that will have new fields with linearly normalized scores joined to it. If the fields already exist, they will be updated by the tool. | String |
Input Fields | Dialog Reference: List of fields to be scaled between either the min-max or some percentile band. | List |
Minimum Percentile | Dialog Reference: Minimum percentile for scaling. Replaces the minimum. | Float (optional) |
Maximum Percentile | Dialog Reference: Maximum percentile for scaling. Replaces the maximum. | Float (optional) |
Target Minimum Score | Dialog Reference: Minimum value of the target range for scaling. | Float |
Target Maximum Score | Dialog Reference: Maximum value of the target range for scaling. | Float |
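A small sketch of the scaling math, using illustrative values, an assumed 5th/95th percentile band, and a 0-100 target range:

```python
import numpy as np

values = np.array([3.0, 7.0, 12.0, 55.0, 400.0])  # hypothetical field values

min_pct, max_pct = 5, 95          # optional percentile band
target_min, target_max = 0, 100   # target range

# Percentile values replace the raw minimum and maximum when provided.
lo = np.percentile(values, min_pct)
hi = np.percentile(values, max_pct)

# Clip to the percentile band, then scale linearly into the target range.
clipped = np.clip(values, lo, hi)
scaled = (clipped - lo) / (hi - lo) * (target_max - target_min) + target_min
print(scaled)
```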
This tool is designed to calculate a weighted index for an input feature class using specified variable weights. The output is the original feature class with an additional field representing the computed weighted index.
The goal of this script is to enable analysis of spatial data by applying weighted calculations to multiple attributes based on user-defined weights.
Parameter | Explanation | Data Type |
---|---|---|
Input Feature Class | Dialog Reference: The input feature class containing the attributes to be weighted and combined into a weighted index. The output will include a new field with the calculated index. | Feature Layer |
Input Variable Weight Value String | Dialog Reference: A string representing the value table of variables and their associated weights. Each entry should include the variable name and its weight. | String |
Output Field Name | Dialog Reference: The name of the output field where the computed weighted index will be stored. This field will be added to the input feature class. | String |
Null Fill Value | Dialog Reference: The value used to fill null entries in the input variables before computing the weighted index. This ensures no missing data affects the calculations. | Float |
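A minimal sketch of the weighted index calculation, with hypothetical field names, weights, and null fill value:

```python
import pandas as pd

# Hypothetical input variables pulled from a feature class.
df = pd.DataFrame({"access_score": [0.2, None, 0.9],
                   "safety_score": [0.5, 0.7, None]})

weights = {"access_score": 0.6, "safety_score": 0.4}  # hypothetical weights
null_fill = 0.0                                       # null fill value

# Fill nulls first, then compute the weighted sum across the variables.
filled = df.fillna(null_fill)
df["weighted_index"] = sum(filled[field] * weight
                           for field, weight in weights.items())
print(df)
```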
This scripting tool is designed to take selected fields and add fields that classify records based on their unique combinations of values using numpy.
The goal of this script is to add group fields based on a selection of fields chosen in the tool. Two fields will be added: one with a number representing the group ID (which can be dissolved or summarized on), and another with a string containing the query used to isolate that group. The names of the fields are based on the base name parameter.
Parameter | Explanation | Data Type |
---|---|---|
Input_Feature_Class | Dialog Reference: This is the selected input feature class that will have new group fields joined to it. If the fields already exist, they will be updated by the tool. Python Reference: The tool uses the ExtendTable function from the arcpy.da module to join a structured numpy array of column-wise group IDs to the feature class. | Feature Layer |
Fields_to_Group | Dialog Reference: These are the fields you want unique group categories of. They can be used to make a unique ID out of several different field attributes. Python Reference: Uses dynamic query creation to generate isolated numpy arrays to join to the input table. | Multiple Value |
Base_Name | Dialog Reference: This is the string that is prepended to the new field names. The field name will be this base name with either "Num" or "String" appended to the end. Python Reference: The field names will be validated based on the workspace. | String |
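A small sketch of the grouping idea using pandas on illustrative fields (the actual tool operates on the feature class via numpy arrays and dynamic queries):

```python
import pandas as pd

# Hypothetical attribute table with two fields to group on.
df = pd.DataFrame({"ZONING": ["R1", "R1", "C2", "C2"],
                   "FLOOD":  ["Y",  "N",  "Y",  "Y"]})

group_fields = ["ZONING", "FLOOD"]
base_name = "Group"  # base name parameter

# Numeric group ID for each unique combination (suitable for dissolves).
df[base_name + "Num"] = df.groupby(group_fields, sort=False).ngroup() + 1

# Query string that would isolate each combination.
df[base_name + "String"] = df.apply(
    lambda row: " AND ".join("{} = '{}'".format(f, row[f]) for f in group_fields),
    axis=1)
print(df)
```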