Merge typology into zone #3755
Conversation
Also, fix the surroundings helper to handle bad OSM NaN data.
Caution: Review failed. The pull request is closed.

Walkthrough

This pull request represents a comprehensive effort to standardize naming conventions across the City Energy Analyst (CEA) codebase. The changes primarily focus on renaming database and DataFrame columns from uppercase to lowercase, particularly targeting keys like 'Name' to 'name', 'Pipe_DN' to 'pipe_DN', and 'Type' to 'type'. The modifications span multiple files and modules, indicating a systematic approach to improving code consistency and readability.
Sequence Diagram

```mermaid
sequenceDiagram
    participant Developer
    participant Codebase
    participant DataFiles
    Developer->>Codebase: Initiate naming standardization
    Codebase->>DataFiles: Scan existing column names
    DataFiles-->>Codebase: Return current naming conventions
    Codebase->>Codebase: Replace 'Name' with 'name'
    Codebase->>Codebase: Replace uppercase keys with lowercase
    Codebase->>DataFiles: Update file reading/writing methods
    Codebase-->>Developer: Complete naming standardization
```
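As a minimal illustration of the renaming pattern described in the walkthrough (not code taken from the PR diff itself), a DataFrame using the old uppercase keys could be normalized with a plain pandas rename; the toy data below is hypothetical.

```python
import pandas as pd

# Hypothetical frame using the old uppercase keys called out in the walkthrough.
df = pd.DataFrame({
    "Name": ["B1001", "B1002"],
    "Pipe_DN": [100, 150],
    "Type": ["PLANT", "CONSUMER"],
})

# Map the old uppercase keys to the new lowercase convention.
RENAME_MAP = {"Name": "name", "Pipe_DN": "pipe_DN", "Type": "type"}
df = df.rename(columns=RENAME_MAP)

print(df.columns.tolist())  # ['name', 'pipe_DN', 'type']
```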
Actionable comments posted: 14
🔭 Outside diff range comments (8)
notebooks/2584-migration-script.ipynb (1)

Line range hint 96-96: Improve path handling for cross-platform compatibility

The script uses hard-coded Windows paths with backslashes. Consider:
- Using os.path.join() for path construction
- Making paths configurable via environment variables or command-line arguments
- Using forward slashes or raw strings to avoid escape character issues

Apply this diff to improve path handling:

```diff
-SCENARIO_TO_MIGRATE = r"c:\Users\darthoma\Documents\CityEnergyAnalyst\projects\2.29.0\kleinalbis"
-EXAMPLE_SCENARIO = r"c:\Users\darthoma\Documents\CityEnergyAnalyst\projects\working-with-databases\kleinalbis"
-DATABASE = r"c:\Users\darthoma\Documents\GitHub\CityEnergyAnalyst\cea\databases\CH"
+SCENARIO_TO_MIGRATE = os.getenv('CEA_SCENARIO_TO_MIGRATE', os.path.join('projects', '2.29.0', 'kleinalbis'))
+EXAMPLE_SCENARIO = os.getenv('CEA_EXAMPLE_SCENARIO', os.path.join('projects', 'working-with-databases', 'kleinalbis'))
+DATABASE = os.getenv('CEA_DATABASE', os.path.join('cea', 'databases', 'CH'))
```

cea/plots/demand/peak_load_supply.py (1)
Line range hint 91-91: Maintain consistent column naming

The peak_load_district function still uses "Name" while the rest of the codebase has been updated to use "name". Apply this diff to maintain consistency:

```diff
- trace = go.Bar(x=data_frame_totals["Name"], y=y, name=name,
+ trace = go.Bar(x=data_frame_totals["name"], y=y, name=name,
```

cea/import_export/import_from_rhino_gh.py (1)
Line range hint 11-11: Remove unused import

The pandas import is not used in this file. Apply this change:

```diff
-import pandas as pd
```
cea/analysis/lca/operation.py (1)
Line range hint 11-11: Remove unused import

The geopandas.GeoDataFrame import is no longer used after the changes.

```diff
-from geopandas import GeoDataFrame as gpdf
```
cea/plots/supply_system/a_supply_system_map.py (1)
Line range hint 198-201: Update column name to maintain consistency

The code still uses 'Name' in the DataFrame operations while the project is standardizing on 'name'.

```diff
- building_connectivity = supply_systems[['Name']].copy()
+ building_connectivity = supply_systems[['name']].copy()
  building_connectivity['DH_connectivity'] = (
      supply_systems['type_hs'].map(heating_infrastructure) == 'DISTRICT').astype(int)
  building_connectivity['DC_connectivity'] = (
      supply_systems['type_cs'].map(cooling_infrastructure) == 'DISTRICT').astype(int)
```

cea/optimization_new/building.py (1)
Line range hint 20-20: Remove unused import

The import cea.utilities.dbf is not used in the code and should be removed.

```diff
-from cea.utilities import dbf
```
notebooks/2637-single-point-of-reference-for-variables.ipynb (1)
Line range hint 5-7: Improve file handling with context managers and proper encoding

File operations should use context managers and explicit encoding to prevent resource leaks and encoding issues.

```diff
-yaml.dump(schemas, open(schemas_yml, 'w'), indent=2, default_flow_style=False)
+with open(schemas_yml, 'w', encoding='utf-8') as f:
+    yaml.dump(schemas, f, indent=2, default_flow_style=False)
```

cea/optimization/preprocessing/decentralized_buildings_heating.py (1)
Line range hint 24-24: Remove unused import

The cea.utilities.dbf import is no longer used since the code now reads CSV files instead of DBF.

```diff
-import cea.utilities.dbf
```
🧹 Nitpick comments (9)
notebooks/2584-migration-script.ipynb (1)

Line range hint 95-95: Consider upgrading to Python 3.x

The notebook is using Python 2.7, which reached end-of-life on January 1, 2020. Consider upgrading to Python 3.x for security updates and modern language features.
cea/interfaces/dashboard/api/project.py (1)

7-7: Consolidate duplicate geopandas imports

The module geopandas is imported twice, once without an alias and once as gpd. To improve code clarity and maintainability, consider importing geopandas only once with the desired alias. Apply this diff to consolidate the imports:

```diff
- import geopandas
  ...
- import geopandas as gpd
+ import geopandas as gpd
```

Also applies to: 19-19
cea/tests/test_schedules.py (1)

103-103: Consider using a data factory for test data creation

The DataFrame is created with hardcoded test data. Consider creating a test data factory to maintain test data separately from the test logic, making it easier to update when the schema changes.
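A sketch of what such a factory could look like, assuming a pytest-style setup; the column names and values below are placeholders for illustration, not the actual schema used in test_schedules.py.

```python
import pandas as pd

def make_schedule_test_frame(names=("B1001", "B1002"), use_type="MULTI_RES"):
    """Build the test DataFrame in one place so schema changes only touch this factory."""
    return pd.DataFrame({
        "name": list(names),                   # lowercase 'name' per the new convention
        "use_type1": [use_type] * len(names),  # placeholder columns for illustration
        "use_type1r": [1.0] * len(names),
    })

# In a test, the factory keeps the data separate from the assertions:
df = make_schedule_test_frame()
assert list(df["name"]) == ["B1001", "B1002"]
```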
cea/datamanagement/surroundings_helper.py (1)

111-114: Consider using constants for column names

The column names are hardcoded in multiple places. Consider defining them as constants to maintain consistency and make updates easier.

```diff
+BUILDING_ID_PREFIX = "CEA"
+REQUIRED_COLUMNS = ["name", "height_ag", "floors_ag", "description", "category", "geometry", "REFERENCE"]
- shapefile["name"] = [key + str(x + 1000) for x in range(no_buildings)]
+ shapefile["name"] = [BUILDING_ID_PREFIX + str(x + 1000) for x in range(no_buildings)]
- result = shapefile[["name", "height_ag", "floors_ag", "description", "category", "geometry", "REFERENCE"]]
+ result = shapefile[REQUIRED_COLUMNS]
```

cea/demand/building_properties.py (1)
11-11: Remove unused import

The COLUMNS_ZONE_GEOMETRY import is not used in the code.

```diff
-from cea.datamanagement.databases_verification import COLUMNS_ZONE_TYPOLOGY, COLUMNS_ZONE_GEOMETRY
+from cea.datamanagement.databases_verification import COLUMNS_ZONE_TYPOLOGY
```

🧰 Tools
🪛 Ruff (0.8.2)
11-11: cea.datamanagement.databases_verification.COLUMNS_ZONE_GEOMETRY imported but unused. Remove unused import: cea.datamanagement.databases_verification.COLUMNS_ZONE_GEOMETRY (F401)
🪛 GitHub Actions: Ruff
[error] 11-11: Unused import: cea.datamanagement.databases_verification.COLUMNS_ZONE_GEOMETRY imported but unused

cea/import_export/result_summary.py (2)
12-12: Consider using a more specific import for geopandas

Instead of importing the entire geopandas library, consider importing only the required functionality to reduce memory footprint.

```diff
-import geopandas as gpd
+from geopandas import read_file
```
Line range hint 873-893: Simplify winter season handling logic

The winter season handling logic is complex and could be simplified. Consider extracting this into a separate helper function for better maintainability.

```diff
+def handle_winter_season(df, season_df, indices):
+    """Helper function to handle winter season that spans year boundary"""
+    winter_hours = df[df['period_season'] == season]['period_hour'].values
+    winter_diff = winter_hours[1:] - winter_hours[:-1]
+    gap_indices = (winter_diff > 1).nonzero()[0]
+
+    if len(gap_indices) > 0:
+        first_chunk_end = winter_hours[gap_indices[0]]
+        second_chunk_start = winter_hours[gap_indices[0] + 1]
+        season_df.loc[indices, 'hour_start'] = second_chunk_start
+        season_df.loc[indices, 'hour_end'] = first_chunk_end
+    return season_df
```
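To make the year-boundary case concrete, here is a small, self-contained sketch of the gap-detection idea the helper relies on: the hours of a winter season that wraps from December into January/February form two chunks, and the jump between them shows up as a difference greater than one. The hour values below are illustrative and do not use the real result_summary data structures.

```python
import numpy as np

# Winter spans the year boundary: the first chunk runs from hour 0 to roughly the end
# of February, the second chunk starts at the beginning of December (hour 8016 of 8760).
winter_hours = np.concatenate([np.arange(0, 1416), np.arange(8016, 8760)])

diffs = winter_hours[1:] - winter_hours[:-1]
gap_indices = np.nonzero(diffs > 1)[0]

if len(gap_indices) > 0:
    first_chunk_end = winter_hours[gap_indices[0]]          # last hour before the gap
    second_chunk_start = winter_hours[gap_indices[0] + 1]   # first hour after the gap
    # The season is then reported as starting in December and ending in February.
    print(second_chunk_start, first_chunk_end)
```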
Line range hint
1-16200
: Schema structure is well-defined and consistent.The schema definitions follow good practices with clear descriptions, proper type validation, and well-defined value constraints. The unit notation is consistent throughout.
Consider adding:
- More detailed descriptions for weather data fields
- Examples for complex value patterns
- Cross-references between related schemas
Line range hint
1-16200
: Consider enhancing schema relationship documentation.While the schema relationships are well structured through lookup tables and 'used_by' fields, consider:
- Add explicit documentation of schema dependencies in a dedicated section
- Implement validation for cross-referenced schema fields
- Create a visual diagram showing schema relationships
📜 Review details
Configuration used: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (77)
- cea/analysis/costs/system_costs.py (3 hunks)
- cea/analysis/lca/embodied.py (4 hunks)
- cea/analysis/lca/operation.py (6 hunks)
- cea/datamanagement/archetypes_mapper.py (10 hunks)
- cea/datamanagement/create_new_scenario.py (0 hunks)
- cea/datamanagement/data_migrator.py (0 hunks)
- cea/datamanagement/databases_verification.py (2 hunks)
- cea/datamanagement/schedule_helper.py (6 hunks)
- cea/datamanagement/streets_helper.py (1 hunks)
- cea/datamanagement/surroundings_helper.py (3 hunks)
- cea/datamanagement/zone_helper.py (11 hunks)
- cea/default.config (1 hunks)
- cea/demand/building_properties.py (14 hunks)
- cea/demand/demand_main.py (0 hunks)
- cea/demand/demand_writers.py (7 hunks)
- cea/demand/electrical_loads.py (2 hunks)
- cea/demand/occupancy_helper.py (2 hunks)
- cea/demand/thermal_loads.py (1 hunks)
- cea/demand/ventilation_air_flows_detailed.py (2 hunks)
- cea/import_export/export_to_rhino_gh.py (0 hunks)
- cea/import_export/import_from_rhino_gh.py (1 hunks)
- cea/import_export/result_summary.py (21 hunks)
- cea/inputlocator.py (2 hunks)
- cea/interfaces/dashboard/api/inputs.py (4 hunks)
- cea/interfaces/dashboard/api/project.py (9 hunks)
- cea/interfaces/dashboard/map_layers/demand/layers.py (1 hunks)
- cea/interfaces/dashboard/map_layers/life_cycle_analysis/layers.py (3 hunks)
- cea/interfaces/dashboard/map_layers/renewable_energy_potentials/layers.py (1 hunks)
- cea/interfaces/dashboard/map_layers/thermal_network/layers.py (1 hunks)
- cea/optimization/distribution/network_optimization_features.py (1 hunks)
- cea/optimization/preprocessing/decentralized_buildings_cooling.py (1 hunks)
- cea/optimization/preprocessing/decentralized_buildings_heating.py (2 hunks)
- cea/optimization/preprocessing/preprocessing_main.py (1 hunks)
- cea/optimization/preprocessing/processheat.py (1 hunks)
- cea/optimization_new/building.py (3 hunks)
- cea/optimization_new/containerclasses/energyPotential.py (1 hunks)
- cea/optimization_new/domain.py (2 hunks)
- cea/optimization_new/helperclasses/optimization/tracker.py (1 hunks)
- cea/optimization_new/network.py (11 hunks)
- cea/plots/demand/__init__.py (1 hunks)
- cea/plots/demand/comfort_chart.py (1 hunks)
- cea/plots/demand/energy_balance.py (3 hunks)
- cea/plots/demand/energy_end_use.py (1 hunks)
- cea/plots/demand/energy_use_intensity.py (1 hunks)
- cea/plots/demand/heating_reset_schedule.py (1 hunks)
- cea/plots/demand/peak_load.py (2 hunks)
- cea/plots/demand/peak_load_supply.py (1 hunks)
- cea/plots/solar_potential/__init__.py (1 hunks)
- cea/plots/supply_system/a_supply_system_map.py (1 hunks)
- cea/plots/thermal_networks/a_network_design.py (2 hunks)
- cea/plots/thermal_networks/f_pump_duration_curve.py (1 hunks)
- cea/resources/geothermal.py (1 hunks)
- cea/resources/radiation/geometry_generator.py (3 hunks)
- cea/resources/radiation/main.py (4 hunks)
- cea/resources/radiation/simplified/main.py (2 hunks)
- cea/resources/sewage_heat_exchanger.py (1 hunks)
- cea/schemas.yml (34 hunks)
- cea/scripts.yml (1 hunks)
- cea/technologies/network_layout/connectivity_potential.py (2 hunks)
- cea/technologies/network_layout/minimum_spanning_tree.py (2 hunks)
- cea/technologies/network_layout/steiner_spanning_tree.py (11 hunks)
- cea/technologies/network_layout/substations_location.py (3 hunks)
- cea/technologies/thermal_network/simplified_thermal_network.py (10 hunks)
- cea/technologies/thermal_network/substation_matrix.py (2 hunks)
- cea/technologies/thermal_network/thermal_network.py (35 hunks)
- cea/technologies/thermal_network/thermal_network_costs.py (1 hunks)
- cea/technologies/thermal_network/thermal_network_optimization.py (2 hunks)
- cea/tests/test_schedules.config (1 hunks)
- cea/tests/test_schedules.py (2 hunks)
- cea/utilities/dbf.py (1 hunks)
- cea/utilities/trace_inputlocator/trace_inputlocator.output.new.yml (3 hunks)
- docs/how-to-prepare-inputs-for-network-simulation.rst (1 hunks)
- docs/intermediate_input_methods.rst (1 hunks)
- docs/output_methods.rst (4 hunks)
- docs/thermal_network_matrix/thermal_network_matrix_data_description.rst (4 hunks)
- notebooks/2584-migration-script.ipynb (1 hunks)
- notebooks/2637-single-point-of-reference-for-variables.ipynb (4 hunks)
💤 Files with no reviewable changes (4)
- cea/import_export/export_to_rhino_gh.py
- cea/demand/demand_main.py
- cea/datamanagement/data_migrator.py
- cea/datamanagement/create_new_scenario.py
✅ Files skipped from review due to trivial changes (9)
- cea/optimization_new/containerclasses/energyPotential.py
- cea/optimization_new/helperclasses/optimization/tracker.py
- cea/demand/thermal_loads.py
- cea/demand/ventilation_air_flows_detailed.py
- cea/optimization_new/domain.py
- cea/plots/thermal_networks/a_network_design.py
- docs/intermediate_input_methods.rst
- cea/utilities/trace_inputlocator/trace_inputlocator.output.new.yml
- cea/default.config
🧰 Additional context used
🪛 GitHub Actions: Ruff
cea/optimization/preprocessing/decentralized_buildings_heating.py
[error] 24-24: Unused import: cea.utilities.dbf
imported but unused
cea/optimization_new/building.py
[error] 20-20: Unused import: cea.utilities.dbf
imported but unused
cea/import_export/import_from_rhino_gh.py
[error] 11-11: Unused import: pandas
imported but unused
cea/analysis/lca/operation.py
[error] 11-11: Unused import: geopandas.GeoDataFrame
imported but unused
cea/demand/building_properties.py
[error] 11-11: Unused import: cea.datamanagement.databases_verification.COLUMNS_ZONE_GEOMETRY
imported but unused
cea/datamanagement/zone_helper.py
[error] 18-18: Unused import: cea.datamanagement.databases_verification.COLUMNS_ZONE_GEOMETRY
imported but unused
🪛 Ruff (0.8.2)
cea/demand/building_properties.py
11-11: cea.datamanagement.databases_verification.COLUMNS_ZONE_GEOMETRY
imported but unused
Remove unused import: cea.datamanagement.databases_verification.COLUMNS_ZONE_GEOMETRY
(F401)
cea/datamanagement/zone_helper.py
18-18: cea.datamanagement.databases_verification.COLUMNS_ZONE_GEOMETRY
imported but unused
Remove unused import: cea.datamanagement.databases_verification.COLUMNS_ZONE_GEOMETRY
(F401)
⏰ Context from checks skipped due to timeout of 90000ms (2)
- GitHub Check: windows-latest
- GitHub Check: macos-latest
🔇 Additional comments (94)
cea/resources/sewage_heat_exchanger.py (2)
42-42
: LGTM! The change to read names from the total demand CSV file looks good.
Line range hint 55-57: Update column names to match the new lowercase standardization

As part of the data input structure refactoring, if the column names in the CSV files are being standardized to lowercase, the following uppercase column references need to be updated to prevent runtime errors:
Qww_sys_kWh
Qww_kWh
Tww_sys_sup_C
Tww_sys_re_C
mcptw_kWperC
mcpww_sys_kWperC
Let's verify the column names in the demand results CSV files:
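For example, a quick local check along these lines could confirm the headers; the path below is a placeholder for an actual demand results file, not a path defined by this PR.

```python
import pandas as pd

# Placeholder path: point this at a demand results CSV of the scenario under test.
demand_csv = "outputs/data/demand/B1001.csv"

expected = {"Qww_sys_kWh", "Qww_kWh", "Tww_sys_sup_C", "Tww_sys_re_C",
            "mcptw_kWperC", "mcpww_sys_kWperC"}

columns = set(pd.read_csv(demand_csv, nrows=0).columns)  # read only the header row
missing = expected - columns
if missing:
    print(f"Columns not found (possibly renamed to lowercase): {sorted(missing)}")
```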
notebooks/2584-migration-script.ipynb (2)
Line range hint
117-117
: Well-structured data transformation logicThe data transformation approach is well-documented and includes validation steps by comparing old and new formats. The merging of age.dbf and occupancy.dbf into typology.dbf is handled systematically.
Line range hint
1-1
: Add unit tests and improve documentationWhile the notebook is well-documented with markdown cells, consider:
- Adding unit tests for the helper functions
- Including input data validation
- Adding logging to track migration progress
- Documenting error handling procedures
Run this script to check for existing tests:
✅ Verification successful
Add tests to align with project's testing practices
The codebase has an established testing framework with multiple test files in
cea/tests/
, but no tests exist for this migration script. Adding tests would maintain consistency with the project's testing practices.
- Consider creating a dedicated test file in
cea/tests/
for migration validation- Follow existing test patterns from similar files like
cea/tests/test_dbf.py
- Include both positive and negative test cases for data validation
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash # Search for test files related to migration rg -g "test_*.py" -l "migration|typology|age.dbf|occupancy.dbf"Length of output: 65
Script:
#!/bin/bash # Search for any test files in the repository fd "test.*\.(py|ipynb)$" # Search for test-related documentation fd "README" --exec grep -l -i "test" {} \; # Look for test assertions in the migration notebook rg "assert|unittest|pytest" notebooks/2584-migration-script.ipynbLength of output: 728
cea/optimization/preprocessing/processheat.py (1)
38-38
: Update to lowercasename
column is consistentThe loop variable has been updated to
df.name
, which aligns with the standardized naming convention in the codebase.cea/datamanagement/streets_helper.py (1)
29-29
: Update to lowercasename
column is consistentThe change from
Name
toname
in the DataFrames ensures consistency with the updated naming conventions across the codebase.cea/optimization/distribution/network_optimization_features.py (1)
59-59
:⚠️ Potential issueVerify the column name change in merge operation
The change from
'Pipe_DN'
to'Ppipe_DN'
seems to introduce a new prefix 'P'. This might be a typo or could affect the merge operation.✅ Verification successful
Column name change appears intentional and correct
The prefix 'P' in 'Ppipe_DN' appears to be a deliberate distinction between edge list data and piping cost reference data ('pipe_DN'). The merge operation is correctly implemented for cost calculations.
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash # Check column names in the relevant files echo "Checking edge list files for column names:" fd -e csv "edge_list" | xargs head -n 1 echo "Checking distribution systems database for column names:" fd "distribution_systems" -e xlsx | while read file; do echo "File: $file" python3 -c "import pandas as pd; print(pd.read_excel('$file', sheet_name='THERMAL_GRID').columns.tolist())" doneLength of output: 303
Script:
#!/bin/bash # Search for CSV files and their headers echo "=== Searching for CSV files with their headers ===" fd -e csv . | xargs -I {} sh -c 'echo "File: {}"; head -n 1 "{}"' echo -e "\n=== Searching for DataFrame column references ===" # Search for DataFrame operations with these column names rg -l "Pipe_DN|Ppipe_DN" --type py echo -e "\n=== Checking column usage in Python files ===" ast-grep --pattern 'merge($$$, left_on="Ppipe_DN"$$$)' --lang py ast-grep --pattern 'merge($$$, left_on="Pipe_DN"$$$)' --lang pyLength of output: 6227
Script:
#!/bin/bash echo "=== Examining network_optimization_features.py ===" rg -B 10 -A 10 "merge.*Ppipe_DN" cea/optimization/distribution/network_optimization_features.py echo -e "\n=== Checking for related variable assignments ===" ast-grep --pattern 'edges_file = $$$' --lang py ast-grep --pattern 'piping_cost_data = $$$' --lang pyLength of output: 1741
Script:
#!/bin/bash echo "=== Checking edge list file structure ===" # Find and examine headers of thermal network edge list files fd "edge_list" -e csv | while read -r file; do echo "File: $file" head -n 1 "$file" done echo -e "\n=== Checking for other references to Ppipe_DN ===" rg "Ppipe_DN" -B 2 -A 2Length of output: 1031
Script:
#!/bin/bash echo "=== Examining database schema ===" # Look for schema definitions or documentation rg -B 2 -A 2 "pipe_DN|Ppipe_DN" --type md --type txt # Check Excel file structure fd "distribution_systems" -e xlsx | while read -r file; do echo "File: $file" python3 -c "import pandas as pd; df = pd.read_excel('$file', sheet_name='THERMAL_GRID', nrows=1); print(df.columns.tolist())" doneLength of output: 180
cea/technologies/network_layout/substations_location.py (2)
72-78
: Ensure consistent naming in geometry operationsThe changes in the point simplification function maintain consistency with the new naming convention.
34-34
: Verify the impact on network layout calculationsThe column name changes in the building selection logic are part of the standardization effort. However, these changes affect critical network layout calculations.
Also applies to: 44-44, 47-47
✅ Verification successful
Column name standardization verified as safe
The change to lowercase 'name' is consistent with all network layout calculations. All related files in the network layout module use the same column naming convention, and the data flow remains intact.
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash # Check for any remaining uppercase 'Name' references in network layout related files cd cea/technologies/network_layout/ rg -l "\\['Name'\\]" . # Verify the column names in the input files echo "Checking building shapefile column names:" fd -e shp | while read file; do echo "File: $file" python3 -c "import geopandas as gpd; print(gpd.read_file('$file').columns.tolist())" doneLength of output: 183
Script:
#!/bin/bash # Get the full context of the file echo "=== Full file context ===" cat cea/technologies/network_layout/substations_location.py # Check for DataFrame operations with name/Name echo -e "\n=== DataFrame name column references ===" rg -l "\\[['|\"]name['|\"]\\]|\\[['|\"]Name['|\"]\\]" cea/technologies/network_layout/ # Check for related imports and dependencies echo -e "\n=== Related imports and dependencies ===" rg -l "import.*pandas|import.*geopandas" cea/technologies/network_layout/Length of output: 4092
cea/interfaces/dashboard/map_layers/thermal_network/layers.py (1)
103-104
: LGTM! Verify column name changes across related files.The changes align with the standardization of column names from uppercase to lowercase.
Let's verify that all related shapefiles use the lowercase "name" column:
✅ Verification successful
Column name standardization is consistent across the codebase
The lowercase "name" column is consistently used across all related files, particularly in thermal network operations and shapefile handling. No instances of uppercase "Name" were found, confirming successful standardization.
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash # Search for any remaining uppercase "Name" columns in shapefiles rg -l "\"Name\"" --type py | grep -i "shapefile\|gis\|network"Length of output: 62
Script:
#!/bin/bash # Search for DataFrame operations with Name/name columns using multiple patterns echo "=== Searching for DataFrame column access patterns ===" rg -i "df\[.['\"]name['\"]|\.name|set_index\(['\"]name['\"]|['\"]name['\"]\s*:" --type py echo -e "\n=== Searching for specific shapefile operations with name columns ===" rg -i "read_file.*|to_file.*|geopandas.*name" --type py echo -e "\n=== Using ast-grep to find DataFrame operations ===" ast-grep --pattern 'set_index("$_")' --lang py ast-grep --pattern "set_index('$_')" --lang pyLength of output: 41267
cea/technologies/network_layout/minimum_spanning_tree.py (2)
46-50
: LGTM! Edge field names standardized.The changes align with the standardization of field names from uppercase to lowercase for edge-related fields.
55-55
: LGTM! Node field names standardized.The changes align with the standardization of field names from uppercase to lowercase for node-related fields.
Let's verify the consistency of these field names in the shapefile outputs:
Also applies to: 68-68, 70-71
✅ Verification successful
Field names are consistently lowercase in network layout code
The field names "building", "type", and "name" are correctly standardized to lowercase throughout the network layout implementation, particularly in the shapefile generation code where these fields are populated and written to disk.
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash # Search for any remaining uppercase field names in network layout code rg -A 5 "type_mat|pipe_DN|building|type" --type py | grep -i "network.*layout"Length of output: 65647
cea/interfaces/dashboard/map_layers/life_cycle_analysis/layers.py (2)
30-30
: LGTM!The column name change from "Name" to "name" aligns with the standardization of naming conventions across the codebase.
Line range hint
120-130
: LGTM!The column name changes in the DataFrame operations maintain consistency with the standardized naming convention.
cea/plots/demand/peak_load.py (2)
55-55
: LGTM!The column name change in the bar plot's x-axis maintains consistency with the standardized naming convention.
90-90
: LGTM!The column name change in the district-level plot maintains consistency with the standardized naming convention.
cea/plots/demand/peak_load_supply.py (1)
56-56
: LGTM!The column name change in the bar plot's x-axis maintains consistency with the standardized naming convention.
cea/optimization/preprocessing/preprocessing_main.py (1)
111-115
: LGTM!The attribute and column name changes in the
get_building_names_with_load
function maintain consistency with the standardized naming convention across the codebase.cea/plots/solar_potential/__init__.py (1)
65-65
: LGTM! Column name standardization looks good.The change from uppercase 'Name' to lowercase 'name' for DataFrame index is consistent with the project-wide standardization effort.
Also applies to: 70-70, 75-75, 80-80
cea/plots/thermal_networks/f_pump_duration_curve.py (1)
79-79
: LGTM! Column name standardization is correct.The change from 'Name' to 'name' in the column_names list is consistent with the project-wide standardization effort.
cea/import_export/import_from_rhino_gh.py (1)
159-161
: LGTM! Improved building names retrieval.The change to use
locator.get_zone_building_names()
directly is a cleaner approach.cea/interfaces/dashboard/map_layers/renewable_energy_potentials/layers.py (1)
190-190
: Verify data access after index name change.The change from "Name" to "name" is consistent with the standardization effort. However, ensure that all building data is correctly accessed after this change.
Run this script to verify the column name in the zone geometry file:
✅ Verification successful
Column name change is consistent with codebase standards
The change from "Name" to "name" is correct and consistent with how the column is created in
zone_helper.py
and accessed throughout the codebase, including other dashboard map layers.🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash # Description: Verify the column name in zone geometry file matches the new index name # Check if the column exists in the file rg -l '"name"' "$(fd -t f -e shp -e dbf -e csv 'zone' | head -n 1)"Length of output: 110
Script:
#!/bin/bash # Find the implementation of get_zone_geometry() ast-grep --pattern 'def get_zone_geometry' # Search for files related to zone geometry fd . -t f -e py | xargs rg -l "zone.*geometry" -i # Search for both column name variants in Python files fd . -t f -e py | xargs rg -l '"Name"|"name"' --type pyLength of output: 3280
Script:
#!/bin/bash # Get implementation of get_zone_geometry rg -A 5 -B 5 "def get_zone_geometry" cea/inputlocator.py # Check zone_helper.py for column name usage rg -A 3 -B 3 '"[Nn]ame"' cea/datamanagement/zone_helper.py # Find similar patterns of accessing zone geometry data rg -A 2 "read_file.*zone.*geometry.*name" -g "*.py"Length of output: 2567
cea/datamanagement/surroundings_helper.py (1)
97-97
: Good use of pd.isna() for null checking.Using
pd.isna()
instead ofnp.isnan()
is more robust as it handles more types of null values in pandas.cea/plots/demand/__init__.py (1)
153-153
: Consider cache invalidation strategy.The column name change from 'Name' to 'name' might affect cached plot data. Ensure there's a strategy to invalidate or update existing caches.
cea/analysis/lca/operation.py (1)
52-52
: LGTM! Data handling improvements.The changes improve the code by:
- Simplifying data reading by using
pd.read_csv
instead ofGeoDataFrame
.- Standardizing column naming conventions in merge operations.
Also applies to: 82-85
cea/resources/radiation/main.py (1)
43-43
: LGTM! Consistent data handling approach.The changes align with the project-wide standardization by:
- Using
pd.read_csv
for reading architectural properties.- Using lowercase 'name' as the index column.
Also applies to: 175-175
cea/demand/demand_writers.py (1)
29-29
: LGTM! Consistent column naming.The changes systematically standardize the column naming convention from 'Name' to 'name' across different methods and classes, improving code consistency.
Also applies to: 39-39, 65-65, 90-90, 108-108, 144-144, 190-190
cea/plots/supply_system/a_supply_system_map.py (1)
190-190
: LGTM! Consistent data reading approach.The change to use
pd.read_csv
aligns with the project-wide standardization of data reading methods.cea/analysis/costs/system_costs.py (2)
93-93
: LGTM! Column name standardization.The change from 'Name' to 'name' in the result dictionary update aligns with the standardization effort.
Line range hint
174-207
: Verify consistent column naming in related dataframes.The merging operations now use lowercase 'name'. While the change is correct, we should verify that all input dataframes have the corresponding lowercase column names to avoid merge failures.
✅ Verification successful
Column naming is consistent with the codebase
The building supply CSV file is dynamically generated by archetypes_mapper.py, which uses consistent lowercase 'name' column naming. The merge operations throughout the codebase consistently use lowercase 'name', and there's no evidence of uppercase 'Name' being used in related files.
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash # Check if any files still use uppercase 'Name' column in CSV/Excel files echo "Checking for 'Name' column in CSV files..." find . -name "*.csv" -exec sh -c 'echo "=== $1 ==="; head -n1 "$1" | grep -i "Name"' sh {} \; echo "Checking for 'Name' column in Excel files..." find . -name "*.xlsx" -exec sh -c 'echo "=== $1 ==="; python3 -c "import pandas as pd; print(pd.read_excel(\"$1\").columns.str.contains(\"Name\").any())" 2>/dev/null' sh {} \;Length of output: 5566
Script:
#!/bin/bash # Find locator method definitions echo "=== Searching for locator method definitions ===" ast-grep --pattern 'def get_building_supply(self)' -p 'def get_database_supply_assemblies(self)' -p 'def get_database_feedstocks(self)' echo "=== Searching for string patterns that might indicate file paths ===" rg -i "building.?supply|supply.?assemblies|feedstocks" --type pyLength of output: 6712
Script:
#!/bin/bash # Check column names in specific files echo "=== Building Supply CSV columns ===" fd -e csv "building.?supply" -x head -n1 {} echo -e "\n=== Supply Assemblies Excel sheets and columns ===" fd -e xlsx "SUPPLY.xlsx" -x python3 -c ' import pandas as pd try: excel = pd.ExcelFile("{}") print("Sheets:", excel.sheet_names) for sheet in ["HEATING", "HOT_WATER", "COOLING", "ELECTRICITY"]: if sheet in excel.sheet_names: print(f"\n{sheet} columns:", pd.read_excel(excel, sheet).columns.tolist()) except Exception as e: print("Error reading {}: ", e) ' echo -e "\n=== Feedstocks Excel structure ===" fd "FEEDSTOCKS.xlsx" -x python3 -c ' import pandas as pd try: excel = pd.ExcelFile("{}") print("Sheets:", excel.sheet_names) data = pd.read_excel(excel, sheet_name=None) for name, df in data.items(): print(f"\n{name} columns:", df.columns.tolist()) except Exception as e: print("Error reading {}: ", e) 'Length of output: 1290
Script:
#!/bin/bash # Search for relevant files and inspect CSV contents echo "=== Searching for building supply CSV files ===" find . -type f -name "*.csv" -exec sh -c ' if grep -q "building.*supply" <<< "{}"; then echo "=== {} ===" head -n1 "{}" fi ' sh {} \; echo -e "\n=== Searching for supply system files ===" find . -type f -name "*.csv" -exec sh -c ' if grep -q "supply.*system" <<< "{}"; then echo "=== {} ===" head -n1 "{}" fi ' sh {} \; echo -e "\n=== List all relevant Excel files ===" find . -type f \( -name "*SUPPLY*.xlsx" -o -name "*FEEDSTOCK*.xlsx" \)Length of output: 6413
Script:
#!/bin/bash echo "=== Searching for CSV files with 'supply' in name or content ===" find . -name "*.csv" -type f -exec sh -c 'echo "=== $1 ==="; head -n1 "$1"' sh {} \; echo -e "\n=== Checking Excel files structure ===" find . -name "SUPPLY.xlsx" -o -name "FEEDSTOCKS.xlsx" -type f -exec ls -l {} \;Length of output: 6536
Script:
#!/bin/bash echo "=== Searching specifically for building supply files ===" find . -type f -exec sh -c ' if echo "$1" | grep -i "building.*supply\|supply.*building" > /dev/null; then echo "=== $1 ===" head -n1 "$1" fi ' sh {} \; echo -e "\n=== Searching for supply-related files in outputs directory ===" find . -path "*/outputs/*" -type f -name "*supply*.csv" -exec sh -c 'echo "=== $1 ==="; head -n1 "$1"' sh {} \;Length of output: 564
cea/plots/demand/energy_balance.py (2)
64-65
: LGTM! DataFrame column name standardization.The changes correctly update the DataFrame operations to use lowercase 'name' consistently.
88-88
: LGTM! Standardized month array naming.The changes from 'YEAR' to 'year' in the month array naming maintain consistency with the lowercase naming convention.
Also applies to: 129-129
cea/demand/electrical_loads.py (1)
180-180
: Verify building age data consistency.The change from 'YEAR' to 'year' in building properties affects age-based calculations. We need to ensure that the building age data is consistently using lowercase 'year' across all input files and related code.
Also applies to: 231-231
✅ Verification successful
Building age data consistency verified
The change from 'YEAR' to 'year' is consistent with the rest of the codebase. All related files, including building_properties.py, use lowercase 'year' for age-related data.
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash # Check for 'YEAR' usage in building properties and related files echo "Checking for 'YEAR' in Python files..." rg -l "bpr\.age\['YEAR'\]" . echo "Checking building age data files..." find . -name "*age*.csv" -exec sh -c 'echo "=== $1 ==="; head -n1 "$1" | grep -i "YEAR"' sh {} \;Length of output: 313
Script:
#!/bin/bash # Check for lowercase 'year' usage and building properties implementation echo "Checking for 'year' in Python files..." rg "bpr\.age\['year'\]" -A 2 . echo -e "\nChecking BuildingPropertiesRow implementation..." ast-grep --pattern 'class BuildingPropertiesRow { $$$ }' echo -e "\nChecking age dictionary initialization..." rg "age.*=.*{.*year" .Length of output: 912
Script:
#!/bin/bash # Final verification with corrected patterns echo "Searching for any remaining uppercase YEAR..." rg -i "age.*['\"]YEAR['\"]" . echo -e "\nSearching for age dictionary initialization..." rg "age\s*=\s*\{" -A 2 . echo -e "\nSearching for BuildingProperties class definition..." fd -e py -x rg -l "class.*BuildingProperties" {}Length of output: 842
cea/optimization_new/building.py (2)
5-5
: LGTM! Updated docstring for consistency.The docstring now correctly references 'name' in lowercase, maintaining consistency with the code changes.
116-116
: LGTM! Standardized DataFrame operations.The changes correctly update the DataFrame operations to use lowercase 'name' consistently.
Also applies to: 132-135
cea/datamanagement/schedule_helper.py (4)
13-13
: LGTM! Import of standardized column names.The import of
COLUMNS_ZONE_TYPOLOGY
fromdatabases_verification
promotes consistency in column naming across the codebase.
212-212
: LGTM! Consistent naming convention for use types. The standardization of column names for use types (e.g., "use_type1", "use_type2", "use_type3") improves code readability and maintainability.
Also applies to: 234-236
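For reference, the old-to-new mapping documented later in this review (1ST_USE to use_type1, 1ST_USE_R to use_type1r, and so on) can be expressed as a simple rename. The sketch below uses a toy DataFrame rather than the real typology data, and the second- and third-use entries follow the same pattern by assumption.

```python
import pandas as pd

# Assumed mapping, following the 1ST_USE -> use_type1 pattern from the output docs.
USE_TYPE_RENAME = {
    "1ST_USE": "use_type1", "1ST_USE_R": "use_type1r",
    "2ND_USE": "use_type2", "2ND_USE_R": "use_type2r",
    "3RD_USE": "use_type3", "3RD_USE_R": "use_type3r",
}

# Toy frame standing in for a legacy typology table.
legacy = pd.DataFrame({"Name": ["B1001"], "1ST_USE": ["MULTI_RES"], "1ST_USE_R": [1.0]})
migrated = legacy.rename(columns={"Name": "name", **USE_TYPE_RENAME})
print(migrated.columns.tolist())  # ['name', 'use_type1', 'use_type1r']
```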
53-57
: Verify data consistency after column name changes.The change from 'Name' to 'name' in DataFrame operations could affect data consistency. Ensure all upstream and downstream code is updated accordingly.
283-283
: Verify data schema compatibility.The change from DBF to CSV format requires verification of data schema compatibility and handling of potential data type conversions.
✅ Verification successful
Schema compatibility is properly handled and verified.
The codebase has a robust schema validation system in place that verifies data compatibility regardless of the file format:
- Column names and presence verification via
assert_columns_names()
- Missing value detection through
check_na_values()
- Data type validation for numeric fields
- Duplicate name prevention via
check_duplicated_names()
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash # Description: Check for potential data type issues in the zone geometry CSV # Search for data type definitions or schema validations echo "Checking for schema validations..." rg -A 5 "get_zone_geometry|COLUMNS_ZONE_TYPOLOGY" --type py # Look for any error handling related to CSV reading echo "Checking for CSV-related error handling..." rg -A 5 "read_csv.*get_zone_geometry" --type pyLength of output: 33741
Script:
#!/bin/bash # Search for data type specifications and validations echo "Checking for data type validations..." rg -A 5 "dtype|astype.*zone" --type py # Look for test files related to zone geometry echo "Checking for zone geometry tests..." fd "test.*zone" --type f --exec rg -l "csv|read_csv" # Search for any schema validation functions echo "Checking for schema validation functions..." rg -A 5 "verify_input|assert_columns|check_.*values" --type pyLength of output: 21716
cea/technologies/network_layout/connectivity_potential.py (1)
252-252
: LGTM! Standardized column naming in DataFrame operations.The changes consistently use lowercase 'name' in DataFrame operations, aligning with the project-wide naming convention.
Also applies to: 335-335
✅ Verification successful
Column naming consistency verified in DataFrame operations
All DataFrame operations in connectivity_potential.py consistently use lowercase 'name' for column references, maintaining the project-wide naming convention. This includes both the dictionary key creation and column access patterns.
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash # Description: Verify consistent column naming in network layout calculations # Check for any inconsistent column name usage in network layout files echo "Checking for column name consistency in network layout..." rg -l "\\[['\"](Name|NAME)['\"]\\]" "cea/technologies/network_layout/"Length of output: 197
Script:
#!/bin/bash # Search for any uppercase variants of 'name' column in Python files echo "Checking for 'Name' column usage patterns..." rg -t py "Name" "cea/technologies/network_layout/" echo -e "\nChecking for 'NAME' column usage patterns..." rg -t py "NAME" "cea/technologies/network_layout/" echo -e "\nChecking DataFrame operations with name-related columns..." rg -t py "(?i)\.name|\\[[\'\"]name[\'\"]\\]|name\s*=|names\s*=" "cea/technologies/network_layout/"Length of output: 1040
Script:
#!/bin/bash # Search for DataFrame column naming patterns in connectivity_potential.py echo "Checking DataFrame operations in connectivity_potential.py..." rg -A 2 -B 2 "name|Name|NAME" "cea/technologies/network_layout/connectivity_potential.py" echo -e "\nChecking DataFrame column access patterns..." rg -A 2 -B 2 '\[.*(name|Name|NAME).*\]|\.(name|Name|NAME)' "cea/technologies/network_layout/connectivity_potential.py"Length of output: 1417
cea/datamanagement/databases_verification.py (1)
13-19
: LGTM! Comprehensive column name standardization.The changes establish a clear and consistent naming convention for all geometry and typology columns. The addition of
COLUMNS_ZONE
provides a comprehensive list of zone-related columns.✅ Verification successful
Column name standardization verified successfully
The changes have been properly implemented with consistent usage across all database verification functions. No legacy uppercase column references remain in the codebase.
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash # Description: Verify the impact of column name changes on database verification # Check for any remaining uppercase column references in database verification echo "Checking for uppercase column references in database verification..." rg -l "\\[['\"](Name|NAME|YEAR|TYPE)['\"]\\]" "cea/datamanagement/" # Check for any validation errors related to column names echo "Checking for column-related validation errors..." rg -A 5 "assert_columns_names|check_duplicated_names" "cea/datamanagement/"Length of output: 4594
cea/analysis/lca/embodied.py (5)
107-112
: LGTM! Improved geometry calculations.The changes properly handle coordinate system projection and geometry calculations.
133-172
: LGTM! Consistent field naming in merge operations.The standardization of field names improves code maintainability and reduces the risk of merge errors.
343-343
: LGTM! Standardized result column names.The result DataFrame consistently uses lowercase column names.
176-176
: Verify merge operations after column name changes.Ensure that all merge operations work correctly with the new column names.
✅ Verification successful
Merge operations verified and working correctly
The merge operations are safe and working correctly. The 'name' column is consistently maintained across all DataFrames and merge operations, with no conflicting column renames that could affect the merge chain.
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash # Description: Check for potential merge issues # Look for merge operations that might be affected echo "Checking merge operations..." rg -A 5 "merge.*on=.*name" "cea/analysis/lca/" # Check for any error handling around merges echo "Checking merge error handling..." rg -A 5 "merge.*how=" "cea/analysis/lca/"Length of output: 2322
Script:
#!/bin/bash # Check DataFrame creation and column names echo "Checking DataFrame creation and columns..." rg -B 5 -A 2 "zone_df|surface_properties|architecture_df" "cea/analysis/lca/" echo "Checking for column renaming..." rg -B 2 -A 2 "rename\(|columns\s*=" "cea/analysis/lca/" echo "Checking DataFrame reading operations..." rg -B 2 -A 2 "read_csv|read_excel|read_" "cea/analysis/lca/"Length of output: 9368
103-104
: Verify data integrity after source change.The switch from building typology to architecture data source requires verification of data completeness and compatibility.
✅ Verification successful
Data integrity is maintained through existing validation framework
The switch from building typology to architecture data is safe because:
- The architecture file is generated through a controlled process in archetypes_mapper.py
- A comprehensive schema-based validation system ensures data quality
- Required fields (Hs_ag, Hs_bg, Ns, Es, etc.) are well-defined and validated
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash # Description: Check for potential data inconsistencies after source change # Check for required columns in both old and new data sources echo "Checking column requirements..." rg -A 5 "get_building_architecture|get_zone_geometry" "cea/analysis/lca/" # Look for any error handling related to missing data echo "Checking error handling..." rg -A 5 "pd\.read_csv.*get_building_architecture" "cea/analysis/lca/"Length of output: 2830
Script:
#!/bin/bash # Search for column usage and data validation echo "Checking column usage in architecture_df..." rg -A 3 "architecture_df\[" "cea/analysis/lca/" echo -e "\nChecking data validation..." ast-grep --pattern 'def lca_embodied$$$architecture_df$$$' echo -e "\nChecking for related tests..." fd "test.*embodied.*py$" -x cat {}Length of output: 390
Script:
#!/bin/bash # Search for function calls using architecture_df and documentation echo "Checking function calls and data flow..." rg -l "architecture_df" "cea/analysis/lca/" echo -e "\nChecking for documentation..." rg -A 10 "building_architecture" "cea/datamanagement/" "cea/databases/" echo -e "\nChecking for data validation utilities..." rg -A 5 "def validate|class.*Validator" "cea/"Length of output: 13120
cea/interfaces/dashboard/api/inputs.py (3)
184-192
: Good standardization of column names.Consistent update of column names from 'Name' to 'name' across different file types improves code maintainability.
251-260
: Good consistency in reference column handling.The changes ensure uniform handling of the reference column across different file types, improving code reliability.
348-349
: Good type handling for name column. Converting the name column to string type ensures consistent data handling and prevents type-related issues.
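A minimal illustration of the type normalization being praised here, using toy data rather than the actual inputs.py code:

```python
import pandas as pd

# Building identifiers can arrive as integers from some files; force a consistent string dtype.
df = pd.DataFrame({"name": [1001, 1002, "B1003"]})
df["name"] = df["name"].astype(str)
print(df["name"].tolist())  # ['1001', '1002', 'B1003']
```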
cea/plots/demand/comfort_chart.py (1)
297-298
: LGTM! Verify data format transition. The changes to standardize column names and update the data reading method look good. However, ensure that all building air conditioning data has been properly migrated from DBF to CSV format.
Run this script to verify the data format transition:
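For instance, a minimal check of this kind could be run locally; the file path is a placeholder and not one of the CEA locator methods.

```python
import os
import pandas as pd

# Placeholder path inside a scenario; adjust to the scenario being migrated.
air_conditioning_csv = "inputs/building-properties/air_conditioning.csv"

if not os.path.exists(air_conditioning_csv):
    print("CSV not found - the scenario may still contain the old DBF file.")
else:
    df = pd.read_csv(air_conditioning_csv)
    assert "name" in df.columns, "expected the lowercase 'name' column after migration"
```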
cea/demand/occupancy_helper.py (1)
48-50
: LGTM! Verify data format transition.The changes to standardize the data reading method look good. However, ensure that all building data has been properly migrated from DBF to CSV format.
Run this script to verify the data format transition:
✅ Verification successful
Data format usage is correct and well-structured!
The code correctly uses appropriate formats for different types of data:
- Building properties (internal loads, comfort, architecture) use CSV format for efficient tabular data storage
- Geometric/spatial data remains in shapefile format as per GIS best practices
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash # Description: Verify that building data exists in CSV format and contains the expected columns # Check if CSV files exist and contain the required columns echo "Checking internal loads data..." rg -l 'building_internal.*\.csv$' | xargs head -n 1 | rg 'name.*Occ_m2p.*Qs_Wp.*X_ghp' echo "Checking comfort data..." rg -l 'building_comfort.*\.csv$' | xargs head -n 1 | rg 'name.*Ths_set_C.*Ths_setb_C.*Tcs_set_C.*Tcs_setb_C' echo "Checking architecture data..." rg -l 'building_architecture.*\.csv$' | xargs head -n 1 | rg 'name.*floors_ag.*floors_bg'Length of output: 492
Script:
#!/bin/bash # First, let's find any CSV or DBF files related to these building data echo "=== Searching for building data files ===" echo "CSV files:" fd -e csv -E "*test*" . | grep -i "building_\(internal\|comfort\|architecture\)" echo -e "\nDBF files:" fd -e dbf -E "*test*" . | grep -i "building_\(internal\|comfort\|architecture\)" # If we find CSV files, let's look at their structure echo -e "\n=== CSV Structure (if files exist) ===" for pattern in "building_internal" "building_comfort" "building_architecture"; do echo -e "\nChecking $pattern:" fd -e csv -E "*test*" . | grep -i "$pattern" | xargs -I {} sh -c 'echo "File: {}"; head -n 2 "{}" 2>/dev/null || true' doneLength of output: 1271
Script:
#!/bin/bash # Check locator implementation and data paths echo "=== Checking locator implementation ===" ast-grep --pattern 'def get_building_internal' ast-grep --pattern 'def get_building_comfort' ast-grep --pattern 'def get_building_architecture' echo -e "\n=== Checking for documentation ===" fd -e md . | xargs rg -l -i "building.*\.(csv|dbf)" echo -e "\n=== Checking for example/reference data ===" fd . -E "*.py" | rg -l "building_(internal|comfort|architecture)"Length of output: 602
Script:
#!/bin/bash echo "=== Checking documentation content ===" cat docs/scenario_folder/module_input_and_output_files.md echo -e "\n=== Searching for migration guides ===" fd -e md . | xargs rg -l -i "migration|format.*change|dbf.*csv" echo -e "\n=== Checking for database schema or format definitions ===" rg -l "building_(internal|comfort|architecture).*\.(csv|dbf)" -g "!*.py"Length of output: 4756
Script:
#!/bin/bash echo "=== Checking CHANGELOG for format changes ===" cat CHANGELOG.md echo -e "\n=== Checking data flow documentation ===" cat docs/script-data-flow.rst echo -e "\n=== Searching for format conversion code ===" rg -l "shp.*to.*csv" -g "*.py"Length of output: 65964
Script:
#!/bin/bash # Check for file format conversion code echo "=== Checking for format conversion code ===" rg -l "read_.*csv|to_csv" cea/demand/ echo -e "\n=== Checking occupancy helper implementation ===" rg -A 5 "get_building_internal|get_building_comfort|get_building_architecture" cea/demand/occupancy_helper.py echo -e "\n=== Checking for shapefile handling ===" rg -l "read_.*shp|to_shp" cea/Length of output: 1208
cea/resources/radiation/geometry_generator.py (2)
181-186
: LGTM! Standardized column naming.The changes to use lowercase 'name' for indexing are consistent with the project-wide standardization effort.
465-469
: LGTM! Added validation for window-to-wall ratio. Good addition of error handling to ensure the window-to-wall ratio (wwr) is a valid numeric value.
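A sketch of the kind of numeric guard described here, written as a standalone function rather than the actual geometry_generator code; the [0, 1] range is assumed as the sensible bound for a window-to-wall ratio.

```python
def validate_wwr(wwr):
    """Ensure a window-to-wall ratio is a number between 0 and 1."""
    try:
        value = float(wwr)
    except (TypeError, ValueError):
        raise ValueError(f"window-to-wall ratio must be numeric, got {wwr!r}")
    if not 0.0 <= value <= 1.0:  # also rejects NaN, since NaN fails any comparison
        raise ValueError(f"window-to-wall ratio must be within [0, 1], got {value}")
    return value

print(validate_wwr("0.4"))  # 0.4
```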
cea/datamanagement/zone_helper.py (6)
65-71
: Improved clarity of warning messages.The warning messages about floor extraction from OpenStreetMap have been updated to be more informative and clearer about the assumptions made.
88-92
: Standardized column naming.The changes from 'REFERENCE' to 'reference' are consistent with the project-wide standardization effort.
Also applies to: 299-304
347-354
: Improved typology file handling.The refactoring of typology file calculation improves code organization by separating the calculation from the file writing operation.
Line range hint
373-426
: Updated use type handling.The changes standardize use type column names and improve the handling of building use types from OpenStreetMap data.
477-479
: Improved clarity of construction year warning.The warning message about construction year data from OpenStreetMap has been updated to be more informative about the fallback strategy.
488-491
: Standardized column naming.The changes from 'YEAR' to 'year' are consistent with the project-wide standardization effort.
cea/demand/building_properties.py (3)
71-81
: LGTM! Column name and file extension changes.The changes standardize column names to lowercase and update file extensions from .dbf to .csv, which is consistent with the PR objectives.
867-891
: LGTM! Error message updates.The error messages have been updated to use lowercase column names, maintaining consistency with the schema changes.
911-924
: LGTM! Field list updates.The field lists have been updated to use lowercase column names, aligning with the schema changes.
cea/technologies/thermal_network/thermal_network_optimization.py (2)
132-133
: LGTM! Column name standardization.The column name has been changed from 'Name' to 'name' for consistency.
602-602
: LGTM! Column name standardization.The column name has been changed from 'Name' to 'name' for consistency.
cea/optimization_new/network.py (2)
317-317
: LGTM! Documentation updates.The docstrings have been updated to reflect the new lowercase column names.
Also applies to: 650-653
665-666
: LGTM! Column name standardization.Column names have been standardized to lowercase throughout the code, maintaining consistency with the schema changes.
Also applies to: 677-678, 724-724, 732-733, 796-796, 805-805, 841-841, 891-893
cea/inputlocator.py (2)
695-695
: LGTM! Column name standardization.The column name has been changed from 'Name' to 'name' for consistency.
699-719
: LGTM! File extension updates.File extensions have been updated from .dbf to .csv for better compatibility and standardization.
cea/import_export/result_summary.py (1)
1333-1336
: Add error handling for file access and data validation. The function should validate file existence and handle potential read errors gracefully. Consider adding try/except blocks and input validation.
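A hedged sketch of the suggested guard, with a placeholder path and without the real result_summary logic:

```python
import os
import pandas as pd

def read_results_csv(path):
    """Read a results CSV defensively: check existence, handle read errors, validate content."""
    if not os.path.isfile(path):
        raise FileNotFoundError(f"Results file not found: {path}")
    try:
        df = pd.read_csv(path)
    except (pd.errors.ParserError, pd.errors.EmptyDataError) as exc:
        raise ValueError(f"Could not parse {path}: {exc}") from exc
    if "name" not in df.columns:
        raise ValueError(f"Expected a 'name' column in {path}")
    return df
```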
cea/technologies/thermal_network/thermal_network.py (4)
185-186
: Consistent renaming of node and building identifiers.The changes standardize the naming convention from uppercase to lowercase for node types, building identifiers, and column names. This improves code consistency.
Also applies to: 202-203, 215-215, 218-218, 237-237, 240-244
1199-1199
: Consistent pipe property column renaming.The changes standardize pipe property column names. The implementation correctly handles the renaming in both the DataFrame operations and file I/O.
Also applies to: 1209-1209, 1222-1222
2444-2444
: Comprehensive node type and building identifier updates.The changes systematically update all node type checks and building identifier references throughout the file. The implementation maintains the original logic while improving naming consistency.
Also applies to: 2523-2524, 3270-3270, 3284-3287, 3295-3295, 3358-3359, 3383-3387, 3411-3412
1131-1131
: Verify the impact of column name changes on database queries.The changes to column names in database queries and error messages maintain functionality while improving consistency. However, we should verify that all database schemas have been updated accordingly.
Also applies to: 1138-1138, 1160-1160, 1174-1174, 1178-1178
cea/tests/test_schedules.config (1)
2-2
: Update test configuration to match new naming convention.The changes standardize the use type identifiers in test configuration, making them more readable and consistent with the codebase changes. Ensure all test cases using these configurations are updated.
✅ Verification successful
Test configuration update is correctly aligned with codebase changes
The new naming convention (
use_type1
,use_type2
,use_type3
) is consistently used throughout the codebase, and no instances of the old naming patterns remain. The test configuration update is properly synchronized with these changes.Note: There is a backward compatibility check in
schedule_helper.py
that still references the old_USE_R
pattern, but this is intentional for handling legacy data.🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash # Check for any remaining references to old use type identifiers rg -i "1ST_USE|2ND_USE|3RD_USE" --type pyLength of output: 43
Script:
#!/bin/bash # Search for new naming convention echo "=== Checking new naming convention ===" rg "use_type[123]r?|use_type_[123]r?" --type py # Look for config files that might contain these patterns echo -e "\n=== Checking config files ===" fd -e config -x rg -l "use_type|1ST_USE|2ND_USE|3RD_USE" {} # Search for partial matches to catch variations echo -e "\n=== Checking for partial matches ===" rg "_USE[_R]?" --type pyLength of output: 3849
docs/how-to-prepare-inputs-for-network-simulation.rst (1)
27-29
: Documentation updated to reflect new field naming convention. The documentation has been updated to match the new lowercase naming convention for shapefile fields. This maintains consistency between code and documentation.
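To illustrate the renamed shapefile fields the documentation now refers to (name, type_mat and pipe_DN, per the tables later in this review), here is a hedged geopandas sketch; the path is a placeholder, not a path prescribed by the docs.

```python
import geopandas as gpd

# Placeholder path to a thermal network edges shapefile prepared for simulation.
edges = gpd.read_file("inputs/networks/DH/edges.shp")

# Fields are expected in lowercase after this PR (previously Name, Type_mat, Pipe_DN).
print(edges[["name", "type_mat", "pipe_DN"]].head())
```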
cea/interfaces/dashboard/map_layers/demand/layers.py (1)
157-157
: Update GeoDataFrame index column to use lowercase naming.The change standardizes the column name used for indexing. Verify that all zone geometry files use the correct column name to avoid data loading issues.
✅ Verification successful
Verified: Column name "name" is the correct convention
The change to use lowercase "name" as the index column aligns with the existing codebase convention. All components that interact with zone geometry files consistently use "name" as the column identifier.
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Check for any remaining uppercase "Name" columns in geometry files
find . -name "*.shp" -exec ogrinfo -so {} \; | grep -i "Name"

Length of output: 64
Script:
#!/bin/bash
# Find geometry files
fd -e shp -e geojson

# Search for geometry file handling patterns
rg -l "geometry" --type python

Length of output: 134
Script:
#!/bin/bash
# Find locator implementation and zone geometry handling
rg "get_zone_geometry" -t py
rg "zone_geometry" -t py

Length of output: 11348
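A small sketch of the indexing pattern the updated layer expects — read the zone geometry and index it by the lowercase `name` column. The file path is illustrative; the real code resolves it through the locator:

```python
# Illustrative zone-geometry read; not the dashboard layer's actual code.
import geopandas as gpd

zone_gdf = gpd.read_file("inputs/building-geometry/zone.shp")
if "name" not in zone_gdf.columns:
    raise KeyError("zone.shp is missing the 'name' column expected after the rename")
zone_gdf = zone_gdf.set_index("name")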
docs/thermal_network_matrix/thermal_network_matrix_data_description.rst (3)
144-145: LGTM! Column names standardized to lowercase. The renaming of `Type_mat` to `type_mat` and `Pipe_DN` to `pipe_DN` follows the standardization effort for column names.
154-161: LGTM! Consistent naming in extended edge_df columns. The column names in the extended edge_df maintain consistency with the earlier changes.
276-276: LGTM! Index name standardized. The index name in pipe_properties has been updated to maintain consistency with other changes.
cea/scripts.yml (1)
232-257: LGTM! Life Cycle Analysis section properly structured. The new Life Cycle Analysis section is well-organized with:
- Clear module paths
- Appropriate interfaces
- Correct input file dependencies
- Proper parameter specifications
docs/output_methods.rst (2)
13-22: LGTM! Building typology variables standardized. The building typology variables have been updated to use clearer, more descriptive names while following the lowercase convention:

- `1ST_USE` → `use_type1`
- `1ST_USE_R` → `use_type1r`
- etc.
440-442: LGTM! Network-related variables consistently renamed. Network layout and thermal network variables have been standardized to lowercase:

- `Name` → `name`
- `Pipe_DN` → `pipe_DN`
- `Type_mat` → `type_mat`
Also applies to: 457-460, 1762-1764
cea/technologies/network_layout/steiner_spanning_tree.py (3)
85-91: LGTM! Column name standardization. The change from 'Name' to 'name' aligns with the PR's objective of standardizing column names to lowercase.
117-119: LGTM! Consistent naming convention. The changes to lowercase for 'name' and 'type' maintain consistency with the project's new naming convention.
Line range hint 267-268: LGTM! Type field standardization. The change from 'Type' to 'type' in the node type check maintains consistency with the lowercase naming convention.
cea/datamanagement/archetypes_mapper.py (2)
62-69: LGTM! Improved data handling with geopandas. The changes improve the code by:
- Using geopandas for reading zone geometry instead of DBF
- Maintaining consistent lowercase column naming
Line range hint 396-400: LGTM! More descriptive column naming. The change from 'STANDARD' to 'const_type' improves code clarity by using a more descriptive column name while maintaining the standardization effort.
cea/optimization/preprocessing/decentralized_buildings_heating.py (1)
50-52: LGTM! Improved data handling. The changes improve the code by:
- Using CSV instead of DBF for data reading
- Maintaining consistent lowercase column naming
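To illustrate the CSV-reading bullet above, a hedged before/after sketch; the file paths are assumptions, not the locator's actual outputs:

```python
# Illustrative only: the switch from DBF to CSV for building properties.
import pandas as pd

# old style (conceptually, via a DBF reader such as cea.utilities.dbf):
# supply_df = dbf_to_dataframe("inputs/building-properties/supply_systems.dbf")

# new style: a plain CSV read keyed on the lowercase 'name' column
supply_df = pd.read_csv("inputs/building-properties/supply_systems.csv")
supply_df = supply_df.set_index("name")
```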
cea/technologies/thermal_network/simplified_thermal_network.py (3)
135-136: LGTM! Consistent naming in duplicate checking. The change to lowercase 'name' maintains consistency with the project's naming convention.
536-538: LGTM! Standardized field names. The changes to lowercase field names in edge and node list files maintain consistency with the project's naming convention.
543-550: LGTM! Improved merge operation. The changes improve the code by:
- Using consistent lowercase field names
- Properly handling column name conflicts in merge operation with suffixes
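A toy example of the merge-with-suffixes behaviour noted in the list above; the frames are made up and far smaller than the real node/edge tables:

```python
# Demonstrates how overlapping, non-key columns are disambiguated by suffixes.
import pandas as pd

edges = pd.DataFrame({"name": ["PIPE0"], "pipe_DN": [100], "type_mat": ["T1"]})
catalogue = pd.DataFrame({"pipe_DN": [100], "type_mat": ["T2"]})

merged = edges.merge(catalogue, on="pipe_DN", how="left", suffixes=("", "_catalogue"))
# the left frame keeps 'type_mat'; the right copy becomes 'type_mat_catalogue'
print(merged.columns.tolist())  # ['name', 'pipe_DN', 'type_mat', 'type_mat_catalogue']
```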
cea/technologies/thermal_network/thermal_network_costs.py (1)
573-574: LGTM! Consistent naming convention update. The changes correctly update the column name from 'Name' to 'name' to maintain consistency with the standardized naming convention across the codebase.
cea/technologies/thermal_network/substation_matrix.py (1)
15-15: LGTM! Standardized column name in constants. The update to
BUILDINGS_DEMANDS_COLUMNS
correctly changes 'Name' to 'name' to align with the standardized naming convention.cea/optimization/preprocessing/decentralized_buildings_cooling.py (1)
758-758: LGTM! Consistent column name update. The change correctly updates the column access from 'Name' to 'name' to maintain consistency with the standardized naming convention.
@@ -120,7 +120,7 @@ def calc_graph(analysis_fields, data):
 name = NAMING[field]
 total_percent = (y / data['total'] * 100).round(2).values
 total_percent_txt = ["(%.2f %%)" % x for x in total_percent]
-trace = go.Bar(x=data["Name"], y=y, name=name, text=total_percent_txt, orientation='v',
+trace = go.Bar(x=data["name"], y=y, name=name, text=total_percent_txt, orientation='v',
💡 Codebase verification
Fix inconsistent column name usage across demand plots.
The review comment is correct. The column name inconsistency in `energy_end_use.py` needs to be fixed. However, similar inconsistencies exist in other files:

- `energy_end_use.py`: Fix `anchor_list = data_frame[:3].Name.values` to use `name`
- `energy_end_use_intensity.py`: Update all occurrences of `"Name"` to `"name"`
- `peak_load_supply.py`: Update remaining occurrences of `"Name"` to `"name"`
These changes should be made as part of the column name standardization effort.
🔗 Analysis chain
Fix inconsistent column name usage.
The column name has been changed from "Name" to "name" here, but it's still referenced as "Name" in calc_top_three_anchor_loads
at line 142. This inconsistency will cause runtime errors.
Apply this diff to fix the inconsistency:
- anchor_list = data_frame[:3].Name.values
+ anchor_list = data_frame[:3].name.values
Let's verify other potential occurrences of "Name" column usage in the codebase:
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Description: Search for potential inconsistencies in column name usage
# Look for patterns that might reference the "Name" column in pandas operations
# Search for direct column access patterns
rg -l 'df\[["'\'']Name["'\'']' || rg -l '\.Name\b'
# Search for column name in string literals
rg -l '\bName\b' --type py
Length of output: 1449
Script:
#!/bin/bash
# Search for DataFrame operations with "Name" column in demand plots
rg -A 2 -B 2 '\.Name\b|name="Name"|df\[["'\'']Name["'\'']|\["Name"\]' cea/plots/demand/
# Search for DataFrame operations with "name" (lowercase) to verify the standard
rg -A 2 -B 2 '\.name\b|name="name"|df\[["'\'']name["'\'']|\["name"\]' cea/plots/demand/
Length of output: 7667
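For clarity, a hedged sketch of the fix suggested for `calc_top_three_anchor_loads`; the body below only mirrors the pattern described in this thread, not the function's actual implementation:

```python
# Sketch: select the three largest loads and read the lowercase 'name' column.
import pandas as pd

def calc_top_three_anchor_loads(data_frame: pd.DataFrame, field: str) -> list:
    data_frame = data_frame.sort_values(by=field, ascending=False)
    anchor_list = data_frame[:3]["name"].values  # was data_frame[:3].Name.values
    return list(anchor_list)
```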
merged_gdf.to_file(locator.get_zone_geometry(), driver='ESRI Shapefile')


def create_typology(scenario_form, locator):
Undefined variable `zone_df` in `create_typology` function
The `zone_df` variable is used in the `create_typology` function but is not defined within its scope. This will result in a `NameError`. To fix this, pass `zone_df` as a parameter to the function and update the function call accordingly.
Apply this diff to fix the issue:
At function definition:
- def create_typology(scenario_form, locator):
+ def create_typology(scenario_form, zone_df, locator):
At function call:
- create_typology(scenario_form, locator)
+ create_typology(scenario_form, zone_df, locator)
Also applies to: 367-367
@@ -216,7 +219,7 @@
 verify_input_geometry_zone(zone_df)

 # Replace invalid characters in building name (characters that would affect path and csv files)
-zone_df["Name"] = zone_df["Name"].str.replace(r'[\\\/\.,\s]', '_', regex=True)
+zone_df["name"] = zone_df["name"].str.replace(r'[\\\/\.,\s]', '_', regex=True)
Ensure uniqueness of building names after replacing invalid characters
When invalid characters are replaced with underscores in `zone_df["name"]`, distinct building names may become identical, leading to duplicates. This can cause errors in downstream processing. It's important to check for duplicates after the replacement and handle them appropriately.
Apply this diff to check and handle duplicates:
# Replace invalid characters in building name
zone_df["name"] = zone_df["name"].str.replace(r'[\\\/\.,\s]', '_', regex=True)
+ # Check for duplicates in 'name' column
+ if zone_df["name"].duplicated().any():
+ duplicates = zone_df[zone_df["name"].duplicated()]["name"].unique()
+ raise Exception(f"Duplicate building names found after replacing invalid characters: {duplicates}")
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
zone_df["name"] = zone_df["name"].str.replace(r'[\\\/\.,\s]', '_', regex=True) | |
zone_df["name"] = zone_df["name"].str.replace(r'[\\\/\.,\s]', '_', regex=True) | |
# Check for duplicates in 'name' column | |
if zone_df["name"].duplicated().any(): | |
duplicates = zone_df[zone_df["name"].duplicated()]["name"].unique() | |
raise Exception(f"Duplicate building names found after replacing invalid characters: {duplicates}") |
@@ -36,7 +36,7 @@ def layout(self):

 @property
 def data(self):
-    return self.hourly_loads[self.hourly_loads['Name'].isin(self.buildings)]
+    return self.hourly_loads[self.hourly_loads['name'].isin(self.buildings)]
💡 Codebase verification
Column name inconsistency confirmed across demand plotting files
The change to 'name' in heating_reset_schedule.py is part of a needed standardization, but inconsistencies exist in related files:

- `cea/plots/demand/load_duration_curve.py`: Uses 'Name'
- `cea/plots/demand/comfort_chart.py`: Uses 'Name'
- `cea/plots/demand/energy_balance.py`: Uses 'name'
- `cea/plots/demand/heating_reset_schedule.py`: Uses 'name'

These files operate on the same `hourly_loads` DataFrame and should use consistent column naming.
🔗 Analysis chain
Verify consistent column name usage across related files
The change from 'Name' to 'name' aligns with the standardization effort. However, we should ensure this change is consistently applied in all files that interact with `hourly_loads`.
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Search for any remaining instances of 'Name' column references
rg -l "\\['Name'\\]" --type python
# Search for new 'name' column usage pattern
rg -l "\\['name'\\]" --type python
Length of output: 143
Script:
#!/bin/bash
# Search for DataFrame column access patterns using both 'Name' and 'name'
rg "\\['Name'\\]|\\['name'\\]|\\.Name|\\.name" -l
# Use ast-grep to find string literals 'Name' and 'name' in array subscript expressions
ast-grep --pattern '$_[$"Name"]'
ast-grep --pattern '$_[$"name"]'
Length of output: 3787
Script:
#!/bin/bash
# Search for hourly_loads usage with context
rg "hourly_loads.*\\['[Nn]ame'\\]" -C 2 cea/plots/demand/
# Get full content of the heating_reset_schedule.py for context
cat cea/plots/demand/heating_reset_schedule.py
Length of output: 4841
 buffer = zone_df[zone_df["name"] == building].buffer(buffer_m).geometry
 buildings_intersect = zone_df.intersects(buffer.values[0])

-for building_name in zone_df[buildings_intersect]["Name"].values:
+for building_name in zone_df[buildings_intersect]["name"].values:
💡 Codebase verification
Revert column name change to maintain consistency
The change from "Name" to "name" breaks the established convention. The "Name" column is widely used across the codebase in utilities, plots, technologies, and optimization modules. This change should be reverted to "Name" to prevent potential runtime errors.
🔗 Analysis chain
Verify column name changes across dependent modules.
The column name change from "Name" to "name" needs verification across dependent modules to ensure no references are broken.
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Description: Find potential references to the old column name "Name" in Python files
# that might need updating
# Search for DataFrame operations with "Name"
rg -t py "\\[[\'\"]Name[\'\"]\\]"
# Search for string literals containing "Name"
rg -t py "'Name'|\"Name\""
Length of output: 4419
cea/datamanagement/zone_helper.py
Outdated
@@ -15,10 +15,9 @@

 import cea.config
 import cea.inputlocator
-from cea.datamanagement.databases_verification import COLUMNS_ZONE_TYPOLOGY
+from cea.datamanagement.databases_verification import COLUMNS_ZONE_GEOMETRY, COLUMNS_ZONE
Remove unused import.
The `COLUMNS_ZONE_GEOMETRY` import is not used in the code.
Apply this diff to fix the unused import:
-from cea.datamanagement.databases_verification import COLUMNS_ZONE_GEOMETRY, COLUMNS_ZONE
+from cea.datamanagement.databases_verification import COLUMNS_ZONE
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
from cea.datamanagement.databases_verification import COLUMNS_ZONE
🧰 Tools
🪛 Ruff (0.8.2)
18-18: `cea.datamanagement.databases_verification.COLUMNS_ZONE_GEOMETRY` imported but unused
Remove unused import: cea.datamanagement.databases_verification.COLUMNS_ZONE_GEOMETRY
(F401)
🪛 GitHub Actions: Ruff
[error] 18-18: Unused import: `cea.datamanagement.databases_verification.COLUMNS_ZONE_GEOMETRY` imported but unused
 list_buildings = df_buildings['name'].to_list()

 # Get the GFA of the selected buildings
 list_list_useful_cea_results, list_appendix = exec_read_and_slice(hour_start, hour_end, locator, list_metrics_architecture, list_buildings)
 list_list_useful_cea_results_buildings = filter_cea_results_by_buildings(bool_use_acronym, list_list_useful_cea_results, list_buildings)
-df_buildings = pd.merge(df_buildings, list_list_useful_cea_results_buildings[0][0], on='Name', how='inner')
+df_buildings = pd.merge(df_buildings, list_list_useful_cea_results_buildings[0][0], on='name', how='inner')
💡 Codebase verification
Add error handling for CSV write operations
The code performs multiple file write operations through `to_csv` calls without proper error handling. While read operations have some error handling, the write operations need similar protection. Consider:

- Wrap `to_csv` calls in try-except blocks to handle IOError, PermissionError
- Add directory existence and write permission checks before file operations
- Handle potential disk space or file lock issues
🔗 Analysis chain
Add path validation and error handling for file operations.
The code should validate file paths and handle potential file operation errors. Consider adding checks for directory permissions and file existence.
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Check write permissions in target directories
fd -t d "summary" -x test -w {} \; -exec echo "Directory {} is writable" \;
Length of output: 290
Script:
#!/bin/bash
# Search for exec_read_and_slice implementation
ast-grep --pattern 'def exec_read_and_slice($$$)'
# Search for file write operations in the same file
rg "to_csv|to_excel|write|save" cea/import_export/result_summary.py
Length of output: 2763
Script:
#!/bin/bash
# Check for error handling in writer functions
rg -A 5 -B 5 "try:|except:|finally:" cea/import_export/result_summary.py
# Look for the locator pattern implementation
ast-grep --pattern 'class $locator {
$$$
}'
Length of output: 903
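A minimal sketch of the guarded write suggested above; the helper name and the specific checks are assumptions rather than `result_summary.py` internals:

```python
# Hypothetical guarded CSV write with basic directory and error handling.
import os
import pandas as pd

def safe_to_csv(df: pd.DataFrame, csv_path: str) -> None:
    out_dir = os.path.dirname(csv_path) or "."
    os.makedirs(out_dir, exist_ok=True)
    if not os.access(out_dir, os.W_OK):
        raise PermissionError(f"No write permission for {out_dir}")
    try:
        df.to_csv(csv_path, index=False)
    except OSError as exc:  # disk full, file locked, etc.
        raise OSError(f"Failed to write {csv_path}: {exc}") from exc
```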
@@ -45,7 +45,7 @@
{
"name": "stdout",
"output_type": "stream",
-"text": "get_building_typology 3RD_USE_R ... [full recorded stdout listing the schema variables for get_building_typology, get_geothermal_potential, SC_totals, get_network_layout_nodes_shapefile and get_total_demand] ..."
+"text": "get_building_typology 3RD_USE_R ... [the same stdout dump, with the get_building_typology 1ST_USE and 1ST_USE_R entries now printed as use_type1] ..."
🛠️ Refactor suggestion
Consider moving schema processing logic to a dedicated module.
The notebook contains complex data processing logic that would be better maintained in a dedicated Python module. This would improve reusability and testing.
Consider creating a new module `cea.schemas.processor` to handle this functionality.
Also applies to: 121-121
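As a rough sketch of what a dedicated `cea.schemas.processor` module could expose so the notebook only calls a tested function (the function name and return shape are hypothetical):

```python
# Hypothetical flattening helper for the schemas dictionary.
import pandas as pd

def schema_to_dataframe(schemas: dict) -> pd.DataFrame:
    """Flatten a {locator: {column: description}} mapping into tabular rows."""
    rows = [
        {"locator": locator, "column": column, "description": description}
        for locator, columns in schemas.items()
        for column, description in columns.items()
    ]
    return pd.DataFrame(rows, columns=["locator", "column", "description"])
```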
@@ -329,7 +329,7 @@ PVT_total_buildings:
 unit: '[kWh]'
 values: '{0.0...n}'
 min: 0.0
-Name:
+name:
💡 Codebase verification
Schema changes need corresponding code updates
The schema changes from 'Name' to 'name' in PVT, PV and SC total buildings schemas require corresponding updates in the following files that write to these schemas:
- cea/technologies/solar/photovoltaic_thermal.py: Uses "Name" as index_label when writing to PVT_total_buildings()
- cea/technologies/solar/photovoltaic.py: Uses "Name" as index_label when writing to PV_total_buildings()
- cea/technologies/solar/solar_collector.py: Uses "Name" as index_label when writing to SC_total_buildings()
🔗 Analysis chain
LGTM! Column renaming follows consistent pattern.
The renaming of 'Name' to 'name' across different schemas follows a consistent lowercase pattern. This standardization improves code consistency.
Let's verify that all dependent code has been updated to use the new lowercase column names:
Also applies to: 868-868, 1317-1317
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Search for any remaining uppercase 'Name' column references
rg -i "['|\"]Name['|\"]" --type python
Length of output: 75
Script:
#!/bin/bash
# Search for Name/name column references in Python files
rg -t py "(?i)['\"]name['\"]|\.Name|\.name|\\[['\"](N|n)ame['\"]\\]" -A 2
# Search specifically in PV, PVT and SC related files
rg -t py "(PV_total_buildings|PVT_total_buildings|SC_total_buildings).*(name|Name)" -A 2
# Check YAML files for any inconsistent column naming
rg -t yaml "(?i)name:" -A 1
Length of output: 85214
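A sketch of the follow-up change implied above — writing the totals with the lowercase index label so the output matches the renamed schema column. The DataFrame content and output path are placeholders, not the solar scripts' actual code:

```python
# Placeholder totals frame; the real scripts write via the locator's output paths.
import pandas as pd

total_df = pd.DataFrame({"E_PV_gen_kWh": [1234.5]},
                        index=pd.Index(["B1001"], name="name"))
total_df.to_csv("outputs/PV_total_buildings.csv", index=True, index_label="name")
```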
Co-authored-by: Reynold Mok <[email protected]>
Co-authored-by: Reynold Mok <[email protected]>
This is Phase One of refactoring the data input structure for CEA 4.
Test it together with #3753 (Format Helper).
Phase Two should include the changes on the optimisation-new script.
Phase Three should refactor the DB format.
Summary by CodeRabbit
Based on the comprehensive summary, here are the release notes:
Release Notes
Data Management
- Switched from `.dbf` to `.csv` file formats for building properties

Building Properties
- Renamed building use-type columns (e.g., `1ST_USE` → `use_type1`)
- Standardized the building identifier column from `Name` to `name`
Pipe_DN
→pipe_DN
)Performance and Compatibility
These changes improve code readability, maintainability, and provide a more uniform approach to data handling in the Community Energy Analyst (CEA) toolset.