Support input check for pool operator #10532
Open
Dmovic wants to merge 24 commits into master from support_input_check
+121
−0
Changes shown are from 21 of the 24 commits.

Commits (24):
- 3614247 add adaptive avg pool input check (Dmovic)
- 148146f add max unpool input check (Dmovic)
- e8784e3 add adaptive max pool input check (Dmovic)
- 64deba5 fix max pool2d (Dmovic)
- 2a24cd7 auto format by CI (oneflow-ci-bot)
- ef5005c add Tensor shape check (Dmovic)
- 33766ca auto format by CI (oneflow-ci-bot)
- a1daa40 add rand shape check (Dmovic)
- 2481e92 fix non negative (Dmovic)
- 65dceb2 add zeros ones shape check (Dmovic)
- ad0986a auto format by CI (oneflow-ci-bot)
- 170a197 auto format by CI (oneflow-ci-bot)
- 656879e add check non negative function (Dmovic)
- 0a78cdc Merge branch 'support_input_check' of https://github.com/Oneflow-Inc/… (Dmovic)
- b0881a8 update check non negative (Dmovic)
- 0bc50c9 update check negative (Dmovic)
- 4548aac inline check non negative (Dmovic)
- 314a3cc add empty shape check (Dmovic)
- df929ec update adaptive none (Dmovic)
- 3438b2f auto format by CI (oneflow-ci-bot)
- 4ce4bab add randn shape check (Dmovic)
- 4dc75dc update check function (Dmovic)
- f0e5e24 auto format by CI (oneflow-ci-bot)
- 0d56eba Merge branch 'master' into support_input_check (Dmovic)
@@ -674,6 +674,10 @@ def __init__(self, output_size: _size_1_t) -> None:
        super().__init__()
        assert output_size is not None, "'output_size' cannot be NoneType"
        self.output_size = _single(output_size)
        assert len(self.output_size) == 1, "'output_size' should contain one int"
        assert (
            self.output_size[0] is None or self.output_size[0] >= 0
        ), f"elements of output_size must be greater than or equal to 0, but got {self.output_size}"

    def forward(self, x):
        assert (
@@ -741,6 +745,10 @@ def __init__(self, output_size, data_format=None) -> None:
        super().__init__()
        assert output_size is not None, "'output_size' cannot be NoneType"
        self.output_size = _pair(output_size)
        assert len(self.output_size) == 2, "'output_size' must contain 2 elements"
        assert (self.output_size[0] is None or self.output_size[0] >= 0) and (
            self.output_size[1] is None or self.output_size[1] >= 0
        ), f"elements of output_size must be greater than or equal to 0, but got {self.output_size}"
        if data_format:
            if not data_format in ["channels_first", "channels_last"]:
                raise ValueError(
@@ -824,6 +832,12 @@ def __init__(self, output_size) -> None:
        super().__init__()
        assert output_size is not None, "'output_size' cannot be NoneType"
        self.output_size = _triple(output_size)
        assert len(self.output_size) == 3, "'output_size' must contain 3 elements"
        assert (
            (self.output_size[0] is None or self.output_size[0] >= 0)
            and (self.output_size[1] is None or self.output_size[1] >= 0)
            and (self.output_size[2] is None or self.output_size[2] >= 0)
        ), f"elements of output_size must be greater than or equal to 0, but got {self.output_size}"

    def forward(self, x):
        assert (
@@ -892,6 +906,9 @@ def forward(self, input):
        assert (
            len(input.shape) == 3 and len(self.output_size) == 1
        ), "the length of 'output_size' does not match the input size, 1 expected"
        assert (
            self.output_size[0] is None or self.output_size[0] >= 0
        ), f"elements of output_size must be greater than or equal to 0, but got {self.output_size}"
        new_output_size = _generate_output_size(input.shape, self.output_size)
        return flow.nn.functional.adaptive_max_pool1d(
            input, self.output_size, self.return_indices
@@ -964,6 +981,10 @@ def forward(self, input):
        assert (
            len(input.shape) == 4
        ), f"expected 4-dimensional tensor, but got {len(input.shape)}-dimensional tensor"
        assert len(self.output_size) == 2, "'output_size' must contain 2 elements"
        assert (self.output_size[0] is None or self.output_size[0] >= 0) and (
            self.output_size[1] is None or self.output_size[1] >= 0
        ), f"elements of output_size must be greater than or equal to 0, but got {self.output_size}"
        new_output_size = _generate_output_size(input.shape, self.output_size)
        return flow.nn.functional.adaptive_max_pool2d(
            input, self.output_size, self.return_indices, self.channel_pos
@@ -1019,12 +1040,55 @@ def forward(self, input):
        assert (
            len(input.shape) == 5
        ), f"expected 5-dimensional tensor, but got {len(input.shape)}-dimensional tensor"
        assert len(self.output_size) == 3, "'output_size' must contain 3 elements"
        assert (
            (self.output_size[0] is None or self.output_size[0] >= 0)
            and (self.output_size[1] is None or self.output_size[1] >= 0)
            and (self.output_size[2] is None or self.output_size[2] >= 0)
        ), f"elements of output_size must be greater than or equal to 0, but got {self.output_size}"
        new_output_size = _generate_output_size(input.shape, self.output_size)
        return flow.nn.functional.adaptive_max_pool3d(
            input, self.output_size, self.return_indices
        )
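In the adaptive pooling paths above, `_generate_output_size` presumably resolves `None` entries of `output_size` by falling back to the corresponding spatial dimension of the input. A minimal sketch of that assumed behavior (this is an illustration, not OneFlow's actual implementation):

```python
def generate_output_size(input_shape, output_size):
    """Sketch of the assumed _generate_output_size behavior: for an
    (N, C, *spatial) input, each None entry in output_size is replaced
    by the matching spatial dimension of the input."""
    spatial = input_shape[2:]
    return tuple(s if o is None else o for s, o in zip(spatial, output_size))


print(generate_output_size((1, 3, 8, 8), (None, 4)))  # → (8, 4)
```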
def _unpool_output_size_check(
    input,
    kernel_size: List[int],
    stride: List[int],
    padding: List[int],
    output_size: Optional[List[int]],
) -> List[int]:
    input_size = input.size()
    default_size = []
    for d in range(len(kernel_size)):
        default_size.append(
            (input_size[-len(kernel_size) + d] - 1) * stride[d]
            + kernel_size[d]
            - 2 * padding[d]
        )
    if output_size is None:
        ret = default_size
    else:
        if len(output_size) == len(kernel_size) + 2:
            output_size = output_size[2:]
        if len(output_size) != len(kernel_size):
            raise ValueError(
                "output_size should be a sequence containing "
                f"{len(kernel_size)} or {len(kernel_size) + 2} elements, but it has a length of '{len(output_size)}'"
            )
        for d in range(len(kernel_size)):
            min_size = default_size[d] - stride[d]
            max_size = default_size[d] + stride[d]
            if not (min_size < output_size[d] < max_size):
                raise ValueError(
                    f'invalid output_size "{output_size}" (dim {d} must be between {min_size} and {max_size})'
                )

        ret = output_size
    return ret
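The default size computed by `_unpool_output_size_check` above is the inverse of the pooling arithmetic, per spatial dimension: `out_d = (in_d - 1) * stride_d + kernel_d - 2 * padding_d`. A self-contained sketch of just that formula (names here are illustrative):

```python
def default_unpool_size(input_spatial, kernel_size, stride, padding):
    """Default unpooled output size per dimension:
    (in - 1) * stride + kernel - 2 * padding."""
    return [
        (i - 1) * s + k - 2 * p
        for i, k, s, p in zip(input_spatial, kernel_size, stride, padding)
    ]


# A 1-D input of length 4 pooled with kernel 2, stride 2 unpools to length 8.
print(default_unpool_size([4], [2], [2], [0]))  # → [8]
# An explicit output_size is then accepted only within one stride of the
# default, i.e. strictly between 8 - 2 = 6 and 8 + 2 = 10 in this case.
```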
class MaxUnpool1d(Module):
    r"""Computes a partial inverse of :class:`MaxPool1d`.
@@ -1100,6 +1164,27 @@ def __init__(
        self.padding = padding

    def forward(self, x, indices, output_size=None):
        kernel_size = _single(self.kernel_size)
        if self.stride is not None:
            _stride = _single(self.stride)
        else:
            _stride = kernel_size
        padding = _single(self.padding)
        check_output_size = _unpool_output_size_check(
            x, kernel_size, _stride, padding, output_size
        )
        assert (
            len(check_output_size) == 1
        ), f"There should be exactly one element in output_size, but got {len(check_output_size)}"
        assert (
            indices.dtype == flow.int64
        ), f"elements in indices should be type int64 but got: {indices.dtype}"
        assert (
            len(x.size()) == 2 or len(x.size()) == 3
        ), f"Input to max_unpooling1d should be a 2d or 3d Tensor, but got {len(x.size())} dimensions"
        assert (
            x.size() == indices.size()
        ), "Expected shape of indices to be same as that of the input tensor"
        return flow._C.max_unpool1d(
            x, indices, self.kernel_size, self.stride, self.padding, output_size
        )

Review comment on `kernel_size = _single(self.kernel_size)` (translated): "Encapsulate this repeated logic into a function."
@@ -1188,6 +1273,27 @@ def __init__(
        self.padding = padding

    def forward(self, x, indices, output_size=None):
        kernel_size = _pair(self.kernel_size)
        if self.stride is not None:
            _stride = _pair(self.stride)
        else:
            _stride = kernel_size
        padding = _pair(self.padding)
        check_output_size = _unpool_output_size_check(
            x, kernel_size, _stride, padding, output_size
        )
        assert (
            len(check_output_size) == 2
        ), f"There should be exactly two elements in output_size, but got {len(check_output_size)}"
        assert (
            indices.dtype == flow.int64
        ), f"elements in indices should be type int64 but got: {indices.dtype}"
        assert (
            len(x.size()) == 3 or len(x.size()) == 4
        ), f"Input to max_unpooling2d should be a 3d or 4d Tensor, but got {len(x.size())} dimensions"
        assert (
            x.size() == indices.size()
        ), "Expected shape of indices to be same as that of the input tensor"
        return flow._C.max_unpool2d(
            x, indices, self.kernel_size, self.stride, self.padding, output_size
        )
@@ -1266,6 +1372,27 @@ def __init__(
        self.padding = padding

    def forward(self, x, indices, output_size=None):
        kernel_size = _triple(self.kernel_size)
        if self.stride is not None:
            _stride = _triple(self.stride)
        else:
            _stride = kernel_size
        padding = _triple(self.padding)
        check_output_size = _unpool_output_size_check(
            x, kernel_size, _stride, padding, output_size
        )
        assert (
            len(check_output_size) == 3
        ), f"There should be exactly three elements in output_size, but got {len(check_output_size)}"
        assert (
            indices.dtype == flow.int64
        ), f"elements in indices should be type int64 but got: {indices.dtype}"
        assert (
            len(x.size()) == 4 or len(x.size()) == 5
        ), f"Input to max_unpooling3d should be a 4d or 5d Tensor, but got {len(x.size())} dimensions"
        assert (
            x.size() == indices.size()
        ), "Expected shape of indices to be same as that of the input tensor"
        return flow._C.max_unpool3d(
            x, indices, self.kernel_size, self.stride, self.padding, output_size
        )
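The shape and dtype checks in the unpool `forward` methods exist because unpooling scatters each pooled value back to the flat position recorded in `indices`, so the two tensors must align element for element. A toy pure-Python 1-D version illustrates the semantics (a conceptual sketch, not OneFlow's kernel):

```python
def max_unpool1d_toy(values, indices, output_len):
    """Toy 1-D max-unpool: scatter each pooled value to the recorded
    position; all other positions stay zero."""
    assert len(values) == len(indices), (
        "Expected shape of indices to be same as that of the input tensor"
    )
    out = [0] * output_len
    for v, i in zip(values, indices):
        out[i] = v
    return out


# MaxPool1d(kernel=2) over [1, 3, 2, 5] yields values [3, 5] at flat
# indices [1, 3]; unpooling restores them to their original positions.
print(max_unpool1d_toy([3, 5], [1, 3], 4))  # → [0, 3, 0, 5]
```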
Review comment (translated): Rename this to `CheckShapeNonNegative`.