Let's support constant folding of the ResizeBilinear Op

What

Let's support constant folding of the ResizeBilinear Op.

Why

The DETR model's positional embedding code generates such a pattern:

import math
import torch.nn.functional as F


def get_abs_pos(abs_pos, has_cls_token, hw):
    """
    Calculate absolute positional embeddings. If needed, resize embeddings
    and remove cls_token dimension for the original embeddings.

    Args:
        abs_pos (Tensor): absolute positional embeddings with (1, num_position, C).
        has_cls_token (bool): If true, has 1 embedding in abs_pos for cls token.
        hw (Tuple): size of input image tokens.

    Returns:
        Absolute positional embeddings after processing with shape (1, H, W, C)
    """
    h, w = hw
    if has_cls_token:
        abs_pos = abs_pos[:, 1:]
    xy_num = abs_pos.shape[1]
    size = int(math.sqrt(xy_num))
    assert size * size == xy_num

    if size != h or size != w:
        new_abs_pos = F.interpolate(
            abs_pos.reshape(1, size, size, 1024).permute(0, 3, 1, 2),
            size=(h, w),
            mode="nearest",
        )  # ResizeNearestNeighbor was generated here
        return new_abs_pos.permute(0, 2, 3, 1)
    else:
        return abs_pos.reshape(1, h, w, 1024)
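Below is a minimal sketch (not converter code, and the shapes are illustrative assumptions) of why this pattern is foldable: abs_pos is a frozen weight and hw is fixed at export time, so the resize subgraph produced by get_abs_pos above depends only on constants and could be precomputed and emitted as a single Const node in place of the ResizeBilinear / ResizeNearestNeighbor op.

import torch

# Assumed constant inputs: a frozen (1, 1024, 1024) positional embedding
# (a 32 x 32 token grid with C = 1024) and a fixed target grid of 50 x 50.
abs_pos = torch.zeros(1, 1024, 1024)
folded = get_abs_pos(abs_pos, has_cls_token=False, hw=(50, 50))

# `folded` depends only on constants, so the resize subgraph could be replaced
# by this precomputed tensor during constant folding.
print(folded.shape)  # torch.Size([1, 50, 50, 1024])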