Minor housekeeping updates #1246

Open · wants to merge 14 commits into `dev`
3 changes: 1 addition & 2 deletions .github/PULL_REQUEST_TEMPLATE/release.md
@@ -2,8 +2,7 @@ Prepare for release of HDMF [version]

### Before merging:
- [ ] Make sure all PRs to be included in this release have been merged to `dev`.
- [ ] Major and minor releases: Update dependency ranges in `pyproject.toml` and minimums in
`requirements-min.txt` as needed.
- [ ] Major and minor releases: Update dependency ranges in `pyproject.toml` as needed.
- [ ] Check legal file dates and information in `Legal.txt`, `license.txt`, `README.rst`, `docs/source/conf.py`,
and any other locations as needed
- [ ] Update `pyproject.toml` as needed
11 changes: 0 additions & 11 deletions .github/dependabot.yml
@@ -1,16 +1,5 @@
version: 2
updates:
# disable checking python requirements files because there are too
# many updates and dependabot will not ignore requirements-min.txt
# until https://github.com/dependabot/dependabot-core/issues/2883 is resolved
# workaround is to continue updating these files manually

# - package-ecosystem: "pip"
# directory: "/"
# schedule:
# # Check for updates to requirements files and pyproject.toml every week
# interval: "weekly"

- package-ecosystem: "github-actions"
directory: "/"
schedule:
6 changes: 3 additions & 3 deletions .github/workflows/project_action.yml
@@ -12,10 +12,10 @@ jobs:
steps:
- name: GitHub App token
id: generate_token
uses: tibdex/github-app-token@v2
uses: actions/create-github-app-token@v1
with:
app_id: ${{ secrets.APP_ID }}
private_key: ${{ secrets.APP_PEM }}
app-id: ${{ secrets.APP_ID }}
private-key: ${{ secrets.APP_PEM }}

- name: Add to Developer Board
env:
2 changes: 1 addition & 1 deletion .github/workflows/ruff.yml
@@ -8,4 +8,4 @@ jobs:
- name: Checkout repo
uses: actions/checkout@v4
- name: Run ruff
uses: chartboost/ruff-action@v1
uses: astral-sh/ruff-action@v3
6 changes: 6 additions & 0 deletions CHANGELOG.md
@@ -1,5 +1,11 @@
# HDMF Changelog

## [Unreleased]

### Changed
- Removed `requirements-min.txt` in favor of the `min-reqs` optional dependency group in `pyproject.toml`. @rly [#1246](https://github.com/hdmf-dev/hdmf/pull/1246)
- Updated GitHub actions and ruff configuration. @rly [#1246](https://github.com/hdmf-dev/hdmf/pull/1246)

## HDMF 4.0.0 (January 22, 2025)

### Breaking changes
6 changes: 3 additions & 3 deletions docs/source/software_process.rst
@@ -45,8 +45,9 @@ pyproject.toml_ contains a list of package dependencies and their version ranges
running HDMF. As a library, upper bound version constraints create more harm than good in the long term (see this
`blog post`_) so we avoid setting upper bounds on requirements.

When setting lower bounds, make sure to specify the lower bounds in both pyproject.toml_ and
requirements-min.txt_. The latter is used in automated testing to ensure that the package runs
When setting lower bounds, make sure to specify the lower bounds in the ``[project] dependencies`` key and
``[project.optional-dependencies] min-reqs`` key in pyproject.toml_.
The latter is used in automated testing to ensure that the package runs
correctly using the minimum versions of dependencies.

Minimum requirements should be updated manually if a new feature or bug fix is added in a dependency that is required
@@ -56,7 +57,6 @@ minimum version to be as high as it is.

.. _pyproject.toml: https://github.com/hdmf-dev/hdmf/blob/dev/pyproject.toml
.. _blog post: https://iscinumpy.dev/post/bound-version-constraints/
.. _requirements-min.txt: https://github.com/hdmf-dev/hdmf/blob/dev/requirements-min.txt

--------------------
Testing Requirements
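The `min-reqs` extra pins each runtime dependency to its exact lower bound so automated tests can run against the oldest supported versions. A minimal sketch (a hypothetical helper, not part of HDMF's tooling) that verifies an environment resolved to those pins:

```python
# Hypothetical helper, not part of HDMF's tooling: after installing the
# `min-reqs` extra, confirm the core dependencies resolved to the exact pins.
from importlib.metadata import version

MIN_PINS = {
    "h5py": "3.1.0",
    "jsonschema": "3.2.0",
    "numpy": "1.19.3",
    "pandas": "1.2.0",
    "ruamel.yaml": "0.16.0",
}

for name, pinned in MIN_PINS.items():
    installed = version(name)  # raises PackageNotFoundError if not installed
    status = "OK" if installed == pinned else "MISMATCH"
    print(f"{name}: pinned={pinned}, installed={installed} [{status}]")
```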
24 changes: 16 additions & 8 deletions pyproject.toml
@@ -14,7 +14,7 @@ authors = [
description = "A hierarchical data modeling framework for modern science data standards"
readme = "README.rst"
requires-python = ">=3.9"
license = {text = "BSD-3-Clause"}
license = "BSD-3-Clause"
classifiers = [
"Programming Language :: Python",
"Programming Language :: Python :: 3.9",
@@ -29,6 +29,7 @@ classifiers = [
"Intended Audience :: Science/Research",
"Topic :: Scientific/Engineering :: Medical Science Apps.",
]
# make sure to update min-reqs dependencies below when these lower bounds change
dependencies = [
"h5py>=3.1.0",
"jsonschema>=3.2.0",
@@ -39,6 +40,7 @@ dependencies = [
dynamic = ["version"]

[project.optional-dependencies]
# make sure to update min-reqs dependencies below when these lower bounds change
tqdm = ["tqdm>=4.41.0"]
zarr = ["zarr>=2.12.0,<3"]
sparse = ["scipy>=1.7"]
@@ -72,6 +74,18 @@ docs = [
# all possible dependencies
all = ["hdmf[tqdm,zarr,sparse,termset,test,docs]"]

# minimum requirements of project dependencies for testing (see .github/workflows/run_all_tests.yml)
min-reqs = [
"h5py==3.1.0",
"jsonschema==3.2.0",
"numpy==1.19.3",
"pandas==1.2.0",
"ruamel.yaml==0.16.0",
"scipy==1.7.0",
"tqdm==4.41.0",
"zarr==2.12.0",
]

[project.urls]
"Homepage" = "https://github.com/hdmf-dev/hdmf"
"Bug Tracker" = "https://github.com/hdmf-dev/hdmf/issues"
@@ -141,13 +155,8 @@ omit = [
# force-exclude = "src/hdmf/common/hdmf-common-schema|docs/gallery"

[tool.ruff]
lint.select = ["E", "F", "T100", "T201", "T203"]
lint.select = ["E", "F", "T100", "T201", "T203", "C901"]
exclude = [
".git",
".tox",
"__pycache__",
"build/",
"dist/",
"src/hdmf/common/hdmf-common-schema",
"docs/source/conf.py",
"src/hdmf/_due.py",
@@ -160,7 +169,6 @@ line-length = 120
[tool.ruff.lint.per-file-ignores]
"docs/gallery/*" = ["E402", "T201"]
"src/*/__init__.py" = ["F401"]
"setup.py" = ["T201"]
"test_gallery.py" = ["T201"]

[tool.ruff.lint.mccabe]
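`C901` enables ruff's mccabe complexity check: roughly, each decision point (`if`/`elif`, loop, boolean operator, exception handler) adds one to a function's complexity, and functions above the configured `max-complexity` (the value is not shown in this hunk) are flagged. A minimal sketch, assuming a hypothetical threshold of 3:

```python
# Illustration only: mccabe adds one complexity point per decision point.
# With a hypothetical max-complexity of 3, ruff would flag this function
# with C901 because its four branches push it past the threshold.
def classify(x: int) -> str:
    if x < 0:
        return "negative"
    elif x == 0:
        return "zero"
    elif x < 10:
        return "small"
    elif x < 100:
        return "medium"
    else:
        return "large"
```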
10 changes: 0 additions & 10 deletions requirements-min.txt

This file was deleted.
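With `requirements-min.txt` gone, minimum-version environments are built from the `min-reqs` extra instead (see `.github/workflows/run_all_tests.yml`). A hedged sketch of the equivalent install step, assuming a local checkout; CI's exact invocation may differ:

```python
# Sketch only: replaces the old `pip install -r requirements-min.txt` step.
# Assumes the current working directory is an HDMF checkout.
import subprocess
import sys

subprocess.check_call([sys.executable, "-m", "pip", "install", ".[min-reqs]"])
```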

2 changes: 1 addition & 1 deletion src/hdmf/backends/hdf5/h5tools.py
@@ -1086,7 +1086,7 @@ def write_link(self, **kwargs):
self.__set_written(builder)
return link_obj

@docval({'name': 'parent', 'type': Group, 'doc': 'the parent HDF5 object'}, # noqa: C901
@docval({'name': 'parent', 'type': Group, 'doc': 'the parent HDF5 object'},
{'name': 'builder', 'type': DatasetBuilder, 'doc': 'the DatasetBuilder to write'},
{'name': 'link_data', 'type': bool,
'doc': 'If not specified otherwise link (True) or copy (False) HDF5 Datasets', 'default': True},
4 changes: 2 additions & 2 deletions src/hdmf/build/objectmapper.py
@@ -184,7 +184,7 @@ def no_convert(cls, obj_type):
"""
cls.__no_convert.add(obj_type)

@classmethod # noqa: C901
@classmethod
def convert_dtype(cls, spec, value, spec_dtype=None): # noqa: C901
"""
Convert values to the specified dtype. For example, if a literal int
@@ -276,7 +276,7 @@ def __check_convert_numeric(cls, value_type):
np.issubdtype(value_dtype, np.integer)):
raise ValueError("Cannot convert from %s to 'numeric' specification dtype." % value_type)

@classmethod # noqa: C901
@classmethod
def __check_edgecases(cls, spec, value, spec_dtype): # noqa: C901
"""
Check edge cases in converting data to a dtype
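The `# noqa: C901` markers above move from the `@classmethod` line to the `def` line because ruff, like flake8, honors a `noqa` comment only on the line where the diagnostic is reported, and mccabe reports function complexity on the `def` line. A minimal sketch with an illustrative class and signature:

```python
# Illustrative only. Suppression placed on the decorator line is ignored;
# on the `def` line it silences the C901 complexity diagnostic.
class Mapper:
    @classmethod
    def convert_dtype(cls, spec, value, spec_dtype=None):  # noqa: C901
        ...
```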
113 changes: 58 additions & 55 deletions src/hdmf/common/resources.py
@@ -544,6 +544,53 @@
if len(missing_terms)>0:
return {"missing_terms": missing_terms}

def _validate_object(self, container, attribute, field, file):
if attribute is None: # Trivial Case
relative_path = ''
object_field = self._check_object_field(file=file,
container=container,
relative_path=relative_path,
field=field)
else: # DataType Attribute Case
attribute_object = getattr(container, attribute) # returns attribute object
if isinstance(attribute_object, AbstractContainer):
relative_path = ''
object_field = self._check_object_field(file=file,
container=attribute_object,
relative_path=relative_path,
field=field)
else: # Non-DataType Attribute Case:
obj_mapper = self.type_map.get_map(container)
spec = obj_mapper.get_attr_spec(attr_name=attribute)
parent_spec = spec.parent # return the parent spec of the attribute
if parent_spec.data_type is None:
while parent_spec.data_type is None:
parent_spec = parent_spec.parent # find the closest parent with a data_type
parent_cls = self.type_map.get_dt_container_cls(data_type=parent_spec.data_type, autogen=False)
if isinstance(container, parent_cls):
parent = container
# We need to get the path of the spec for relative_path
absolute_path = spec.path
relative_path = absolute_path[absolute_path.find('/')+1:]
object_field = self._check_object_field(file=file,
container=parent,
relative_path=relative_path,
field=field)
else:
msg = 'Container not the nearest data_type'
raise ValueError(msg)

else:
parent = container # container needs to be the parent
absolute_path = spec.path
relative_path = absolute_path[absolute_path.find('/')+1:]
# this slice removes everything prior to the container on the absolute_path
object_field = self._check_object_field(file=file,
container=parent,
relative_path=relative_path,
field=field)
return object_field


@docval({'name': 'container', 'type': (str, AbstractContainer), 'default': None,
'doc': ('The Container/Data object that uses the key or '
'the object_id for the Container/Data object that uses the key.')},
@@ -558,7 +605,7 @@
{'name': 'file', 'type': HERDManager, 'doc': 'The file associated with the container.',
'default': None},
)
def add_ref(self, **kwargs):
def add_ref(self, **kwargs): # noqa: C901
"""
Add information about an external reference used in this file.

@@ -630,52 +677,7 @@
msg = 'This entity already exists. Ignoring new entity uri'
warn(msg, stacklevel=3)

#################
# Validate Object
#################
if attribute is None: # Trivial Case
relative_path = ''
object_field = self._check_object_field(file=file,
container=container,
relative_path=relative_path,
field=field)
else: # DataType Attribute Case
attribute_object = getattr(container, attribute) # returns attribute object
if isinstance(attribute_object, AbstractContainer):
relative_path = ''
object_field = self._check_object_field(file=file,
container=attribute_object,
relative_path=relative_path,
field=field)
else: # Non-DataType Attribute Case:
obj_mapper = self.type_map.get_map(container)
spec = obj_mapper.get_attr_spec(attr_name=attribute)
parent_spec = spec.parent # return the parent spec of the attribute
if parent_spec.data_type is None:
while parent_spec.data_type is None:
parent_spec = parent_spec.parent # find the closest parent with a data_type
parent_cls = self.type_map.get_dt_container_cls(data_type=parent_spec.data_type, autogen=False)
if isinstance(container, parent_cls):
parent = container
# We need to get the path of the spec for relative_path
absolute_path = spec.path
relative_path = absolute_path[absolute_path.find('/')+1:]
object_field = self._check_object_field(file=file,
container=parent,
relative_path=relative_path,
field=field)
else:
msg = 'Container not the nearest data_type'
raise ValueError(msg)
else:
parent = container # container needs to be the parent
absolute_path = spec.path
relative_path = absolute_path[absolute_path.find('/')+1:]
# this regex removes everything prior to the container on the absolute_path
object_field = self._check_object_field(file=file,
container=parent,
relative_path=relative_path,
field=field)
object_field = self._validate_object(container, attribute, field, file)

#######################################
# Validate Parameters and Populate HERD
@@ -1000,7 +1002,7 @@

@classmethod
@docval({'name': 'path', 'type': str, 'doc': 'The path to the zip file.'})
def from_zip(cls, **kwargs):
def from_zip(cls, **kwargs): # noqa: C901
"""
Method to read in zipped tsv files to populate HERD.
"""
@@ -1075,11 +1077,12 @@
msg = "Key Index out of range in EntityKeyTable. Please check for alterations."
raise ValueError(msg)


er = HERD(files=files,
keys=keys,
entities=entities,
entity_keys=entity_keys,
objects=objects,
object_keys=object_keys)
er = HERD(
files=files,
keys=keys,
entities=entities,
entity_keys=entity_keys,
objects=objects,
object_keys=object_keys
)
return er
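For reference, the `relative_path` computation inside `_validate_object` is a plain string slice that drops everything up to and including the first `/` of the spec's absolute path. A worked example with a hypothetical path:

```python
# Worked example (hypothetical spec path): strip everything up to and
# including the first '/' so the path becomes relative to the data type.
absolute_path = "MyTable/my_column/unit"
relative_path = absolute_path[absolute_path.find('/') + 1:]
print(relative_path)  # -> "my_column/unit"
```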