Yes, it would be nicer to pre-process the chunk labels with np.unique(), as scikit-learn does. It is potentially a performance hit, though, so we might want to let users skip it.
To solve this, it should be enough to replace the current max() computation with a unique() computation; I don't think the performance difference would be significant. In any case, this issue arises in the chunk mean centering, which is going to be deprecated in the future.
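A minimal sketch of that preprocessing, assuming chunk labels arrive as an integer array where negative values mean "not in any chunk" (the helper name remap_chunks is hypothetical, not part of metric-learn):

```python
import numpy as np

def remap_chunks(chunks):
    """Map arbitrary non-negative chunk ids to contiguous ids 0..n-1.

    Negative entries (points that belong to no chunk) are kept as -1,
    mirroring how scikit-learn canonicalizes class labels in y.
    """
    chunks = np.asarray(chunks)
    out = np.full(chunks.shape, -1, dtype=int)
    mask = chunks >= 0
    # np.unique with return_inverse=True gives each label its rank among
    # the sorted unique labels, which is exactly the contiguous 0-based id.
    _, inverse = np.unique(chunks[mask], return_inverse=True)
    out[mask] = inverse
    return out

# Chunk ids 3, 5 and 9 become 0, 1 and 2; the -1 entry is untouched.
remapped = remap_chunks([3, 3, 5, 5, 9, 9, -1])
```

Already-contiguous labels pass through unchanged, so the remapping is idempotent and safe to apply unconditionally.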
Description
RCA chunks are expected to start at zero and increase one by one; a warning is raised if they do not start at zero or have a gap. Although these warnings are raised, they do not affect the result of the RCA fit.
Maybe we should be more user-friendly and allow the user to specify chunk ids as arbitrary non-negative integers (a negative id is interpreted as "not in a chunk"), even if they do not start at 0 and are not contiguous, just like we (and sklearn) do for methods that are fitted on a classic class vector y.
Steps/Code to Reproduce
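The original reproduction snippet is not preserved here. As a stand-in, this numpy-only sketch (the variable names are illustrative) builds chunk labels of the kind described above and checks the two conditions the warning guards:

```python
import numpy as np

# Chunk assignment with arbitrary ids: they start at 3 and skip 4 and 6,
# with -1 marking a point that belongs to no chunk.
chunks = np.array([3, 3, 5, 5, 7, 7, -1])

used = np.unique(chunks[chunks >= 0])             # array([3, 5, 7])
starts_at_zero = bool(used[0] == 0)               # False -> would warn
is_contiguous = bool(np.all(np.diff(used) == 1))  # False -> would warn
```

Both conditions fail for this input, so fitting RCA on it triggers the warnings below even though the fit itself is unaffected.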
Expected Results
No unexpected warnings are thrown.
Actual Results
The following warnings are thrown:
Versions
Linux-5.0.0-37-generic-x86_64-with-Ubuntu-18.04-bionic
Python 3.6.9 (default, Nov 7 2019, 10:44:02)
[GCC 8.3.0]
NumPy 1.18.1
SciPy 1.4.1
Scikit-Learn 0.22.1
Metric-Learn 0.5.0