Merge pull request #31 from fidelity/fix_ci
Fix CI issues, remove support for Python 3.7 and add support for Python 3.11, 3.12
AshishPvjs authored Sep 13, 2024
2 parents 644ee54 + 59cf762 commit d3217fb
Showing 15 changed files with 58 additions and 28 deletions.
4 changes: 2 additions & 2 deletions .github/workflows/ci.yml
@@ -15,7 +15,7 @@ jobs:
runs-on: ${{ matrix.os }}
strategy:
matrix:
- python-version: ["3.7", "3.8", "3.9", "3.10"]
+ python-version: ["3.8", "3.9", "3.10", "3.11", "3.12"]
os: [ubuntu-latest, macos-latest, windows-latest]
steps:
- uses: actions/checkout@v2
@@ -27,7 +27,7 @@ jobs:
- name: Check
shell: bash
run: |
- python3 -m pip install --upgrade pip
+ python3 -m pip install --upgrade pip setuptools
pip install -e .
python3 -m unittest discover -v tests
python3 setup.py install
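Note on the setuptools addition: starting with Python 3.12, setuptools is no longer pre-installed in fresh environments, so the `python3 setup.py install` step above would fail without it. A minimal sketch (not part of this diff) to surface the failure mode locally:

# Minimal sketch, not part of this commit: in a fresh Python 3.12
# environment, setuptools is no longer pre-installed, so importing it
# (as setup.py does) fails until it is installed explicitly.
try:
    import setuptools
    print("setuptools available:", setuptools.__version__)
except ImportError:
    print("setuptools missing; run: python3 -m pip install --upgrade pip setuptools")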
11 changes: 11 additions & 0 deletions CHANGELOG.txt
@@ -2,6 +2,17 @@
CHANGELOG
=========

+ -------------------------------------------------------------------------------
+ Sep 06, 2024 2.1.0
+ -------------------------------------------------------------------------------
+ Major
+ - Remove support for Python 3.7 and add support for Python 3.11 and Python 3.12.
+ - Update CI test environment to drop Python 3.7 and add Python 3.11, Python 3.12.
+ - Fix typos in docstrings for the fairness metrics' get_score method.
+ - Update S3 link in evalrs LFM_DATASET_PATH.
+ - Add Probabilistic Fairness Metric calculation example in quick start.
+ - Add setuptools in GitHub workflow to address setuptools no longer being pre-installed in Python 3.12.
+
-------------------------------------------------------------------------------
Jan 25, 2023 2.0.1
-------------------------------------------------------------------------------
2 changes: 1 addition & 1 deletion README.md
@@ -185,7 +185,7 @@ print('F1 score is', f1_score.get_score(predictions, labels))

## Installation

- Jurity requires **Python 3.7+** and can be installed from PyPI using ``pip install jurity`` or by building from source as shown in [installation instructions](https://fidelity.github.io/jurity/install.html).
+ Jurity requires **Python 3.8+** and can be installed from PyPI using ``pip install jurity`` or by building from source as shown in [installation instructions](https://fidelity.github.io/jurity/install.html).

## Citation

2 changes: 1 addition & 1 deletion docs/_sources/install.rst.txt
@@ -15,7 +15,7 @@ Installation
Requirements
------------

- The library requires Python **3.6+** and depends on standard packages such as ``pandas, numpy``
+ The library requires Python **3.8+** and depends on standard packages such as ``pandas, numpy``
The ``requirements.txt`` lists the necessary packages.

Install via pip
2 changes: 1 addition & 1 deletion docs/install.html
@@ -93,7 +93,7 @@
</div>
<section id="requirements">
<span id="id1"></span><h2>Requirements<a class="headerlink" href="#requirements" title="Permalink to this heading"></a></h2>
- <p>The library requires Python <strong>3.6+</strong> and depends on standard packages such as <code class="docutils literal notranslate"><span class="pre">pandas,</span> <span class="pre">numpy</span></code>
+ <p>The library requires Python <strong>3.8+</strong> and depends on standard packages such as <code class="docutils literal notranslate"><span class="pre">pandas,</span> <span class="pre">numpy</span></code>
The <code class="docutils literal notranslate"><span class="pre">requirements.txt</span></code> lists the necessary packages.</p>
</section>
<section id="install-via-pip">
24 changes: 24 additions & 0 deletions docs/quick.html
@@ -109,6 +109,30 @@ <h2>Calculate Fairness Metrics</h2>
</pre></div>
</div>
</section>
+ <section id="probabilistic-fairness-metric">
+ <h2>Calculate Probabilistic Fairness Metric</h2>
+ # Import binary fairness metrics from Jurity
+ from jurity.fairness import BinaryFairnessMetrics
+
+ # Instead of 0/1 deterministic membership at individual level
+ # consider likelihoods of membership to protected classes for each sample
+ binary_predictions = [1, 1, 0, 1]
+ memberships = [[0.2, 0.8], [0.4, 0.6], [0.2, 0.8], [0.9, 0.1]]
+
+ # Metric
+ metric = BinaryFairnessMetrics.StatisticalParity()
+ print("Binary Fairness score: ", metric.get_score(binary_predictions, memberships))
+
+ # Surrogate membership: consider access to surrogate membership at the group level.
+ surrogates = [0, 2, 0, 1]
+ print("Binary Fairness score: ", metric.get_score(binary_predictions, memberships, surrogates))
+ </section>
<section id="fit-and-apply-bias-mitigation">
<h2>Fit and Apply Bias Mitigation</h2>
# Import binary fairness metrics and mitigation
2 changes: 1 addition & 1 deletion docsrc/install.rst
@@ -15,7 +15,7 @@ Installation
Requirements
------------

- The library requires Python **3.6+** and depends on standard packages such as ``pandas, numpy``
+ The library requires Python **3.8+** and depends on standard packages such as ``pandas, numpy``
The ``requirements.txt`` lists the necessary packages.

Install via pip
2 changes: 1 addition & 1 deletion evalrs/evaluation/utils.py
@@ -17,7 +17,7 @@
from datetime import datetime


- LFM_DATASET_PATH="https://cikm-evalrs-dataset.s3.us-west-2.amazonaws.com/evalrs_dataset.zip"
+ LFM_DATASET_PATH="https://evarl-2022-public-dataset.s3.us-east-1.amazonaws.com/evalrs_dataset.zip"

TOP_K_CHALLENGE = 100
LEADERBOARD_TESTS = [
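For context, the relocated dataset can be fetched with the standard library alone; a minimal sketch (not part of this commit), assuming the new S3 object remains publicly readable:

# Download the EvalRS dataset archive from the updated S3 location.
# Sketch only; the URL is taken from the diff above.
import urllib.request

LFM_DATASET_PATH = "https://evarl-2022-public-dataset.s3.us-east-1.amazonaws.com/evalrs_dataset.zip"
urllib.request.urlretrieve(LFM_DATASET_PATH, "evalrs_dataset.zip")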
2 changes: 1 addition & 1 deletion jurity/_version.py
@@ -2,4 +2,4 @@
# Copyright FMR LLC <[email protected]>
# SPDX-License-Identifier: Apache-2.0

- __version__ = "2.0.1"
+ __version__ = "2.1.0"
7 changes: 3 additions & 4 deletions jurity/fairness/average_odds.py
@@ -47,11 +47,10 @@ def get_score(labels: Union[List, np.ndarray, pd.Series],
Parameters
----------
- labels: labels: Union[List, np.ndarray, pd.Series]
-     Ground truth labels for each row (0/1).
+ labels: Union[List, np.ndarray, pd.Series]
+     Ground truth labels for each row (0/1).
  predictions: Union[List, np.ndarray, pd.Series]
-     Binary predictions from some black-box classifier (0/1).
-     Binary prediction for each sample from a binary (0/1) lack-box classifier.
+     Binary prediction for each sample from a binary (0/1) black-box classifier.
memberships: Union[List, np.ndarray, pd.Series, List[List], pd.DataFrame],
Membership attribute for each sample.
If deterministic, it is a binary label for each sample [0, 1, 0, .., 1]
7 changes: 3 additions & 4 deletions jurity/fairness/equal_opportunity.py
@@ -39,11 +39,10 @@ def get_score(labels: Union[List, np.ndarray, pd.Series],
Parameters
----------
- labels: labels: Union[List, np.ndarray, pd.Series]
-     Ground truth labels for each row (0/1).
+ labels: Union[List, np.ndarray, pd.Series]
+     Ground truth labels for each row (0/1).
  predictions: Union[List, np.ndarray, pd.Series]
-     Binary predictions from some black-box classifier (0/1).
-     Binary prediction for each sample from a binary (0/1) lack-box classifier.
+     Binary prediction for each sample from a binary (0/1) black-box classifier.
memberships: Union[List, np.ndarray, pd.Series, List[List], pd.DataFrame],
Membership attribute for each sample.
If deterministic, it is a binary label for each sample [0, 1, 0, .., 1]
7 changes: 3 additions & 4 deletions jurity/fairness/fnr_difference.py
@@ -44,11 +44,10 @@ def get_score(labels: Union[List, np.ndarray, pd.Series],
Parameters
----------
- labels: labels: Union[List, np.ndarray, pd.Series]
-     Ground truth labels for each row (0/1).
+ labels: Union[List, np.ndarray, pd.Series]
+     Ground truth labels for each row (0/1).
  predictions: Union[List, np.ndarray, pd.Series]
-     Binary predictions from some black-box classifier (0/1).
-     Binary prediction for each sample from a binary (0/1) lack-box classifier.
+     Binary prediction for each sample from a binary (0/1) black-box classifier.
memberships: Union[List, np.ndarray, pd.Series, List[List], pd.DataFrame],
Membership attribute for each sample.
If deterministic, it is a binary label for each sample [0, 1, 0, .., 1]
7 changes: 3 additions & 4 deletions jurity/fairness/predictive_equality.py
@@ -48,11 +48,10 @@ def get_score(labels: Union[List, np.ndarray, pd.Series],
Parameters
----------
- labels: labels: Union[List, np.ndarray, pd.Series]
-     Ground truth labels for each row (0/1).
+ labels: Union[List, np.ndarray, pd.Series]
+     Ground truth labels for each row (0/1).
  predictions: Union[List, np.ndarray, pd.Series]
-     Binary predictions from some black-box classifier (0/1).
-     Binary prediction for each sample from a binary (0/1) lack-box classifier.
+     Binary prediction for each sample from a binary (0/1) black-box classifier.
memberships: Union[List, np.ndarray, pd.Series, List[List], pd.DataFrame],
Membership attribute for each sample.
If deterministic, it is a binary label for each sample [0, 1, 0, .., 1]
3 changes: 1 addition & 2 deletions jurity/fairness/statistical_parity.py
@@ -40,8 +40,7 @@ def get_score(predictions: Union[List, np.ndarray, pd.Series],
Parameters
----------
  predictions: Union[List, np.ndarray, pd.Series]
-     Binary predictions from some black-box classifier (0/1).
-     Binary prediction for each sample from a binary (0/1) lack-box classifier.
+     Binary prediction for each sample from a binary (0/1) black-box classifier.
memberships: Union[List, np.ndarray, pd.Series, List[List], pd.DataFrame],
Membership attribute for each sample.
If deterministic, it is a binary label for each sample [0, 1, 0, .., 1]
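All five docstring fixes above touch the same get_score interface; a minimal usage sketch under the deterministic-membership convention the docstrings describe (values are illustrative). StatisticalParity omits labels; the other four metrics also take ground-truth labels first, per their signatures above.

# Sketch of the documented get_score call; values are illustrative.
from jurity.fairness import BinaryFairnessMetrics

predictions = [1, 1, 0, 1]   # binary predictions from a black-box classifier
memberships = [0, 0, 0, 1]   # deterministic binary membership label per sample

metric = BinaryFairnessMetrics.StatisticalParity()
print("Statistical parity:", metric.get_score(predictions, memberships))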
4 changes: 2 additions & 2 deletions setup.py
@@ -26,10 +26,10 @@
packages=setuptools.find_packages(exclude=["*.tests", "*.tests.*", "tests.*", "tests"]),
classifiers=[
"License :: OSI Approved :: Apache Software License",
- "Programming Language :: Python :: 3.6",
+ "Programming Language :: Python :: 3.8",
"Operating System :: OS Independent",
],
project_urls={"Source": "https://github.com/fidelity/jurity"},
install_requires=required,
- python_requires=">=3.6"
+ python_requires=">=3.8"
)
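A quick runtime check of the version bump (sketch only; assumes the package was reinstalled from this commit, and imports the _version module shown in the diff above):

# Sketch: verify the bumped version at runtime after reinstalling.
from jurity._version import __version__

assert __version__ == "2.1.0"
print(__version__)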
