fix to_layout sharded bug #17820

Open

wants to merge 8 commits into main
Conversation

@nardoTT (Contributor) commented Feb 11, 2025

Ticket

Link to GitHub Issue #17706

Problem description

to_layout does not change the memory configuration of sharded tensors on the untilize-with-unpadding path, even when a different memory configuration is requested; the output memory config stays sharded in L1.
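
For reference, a rough repro sketch of the affected call pattern, written against the ttnn Python API; the shapes, core grid, and memory configs are illustrative assumptions and are not taken from the linked issue:

```python
import torch
import ttnn

device = ttnn.open_device(device_id=0)

# Height-sharded tensor in L1; shape and core grid are illustrative only.
sharded_config = ttnn.create_sharded_memory_config(
    shape=(1, 1, 64, 64),
    core_grid=ttnn.CoreGrid(y=2, x=1),
    strategy=ttnn.ShardStrategy.HEIGHT,
)
t = ttnn.from_torch(
    torch.randn(1, 1, 64, 64),
    dtype=ttnn.bfloat16,
    layout=ttnn.TILE_LAYOUT,
    device=device,
    memory_config=sharded_config,
)

# Untilize while requesting an interleaved DRAM memory config.
out = ttnn.to_layout(t, ttnn.ROW_MAJOR_LAYOUT, memory_config=ttnn.DRAM_MEMORY_CONFIG)

# Before this fix, the output could still report the sharded L1 config here.
print(out.memory_config())

ttnn.close_device(device)
```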

What's changed

Change the memory config when the requested configuration differs from the input tensor's memory configuration (see the conceptual sketch below).
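
Conceptually, the change amounts to a check like the one below after the sharded untilize-with-unpadding output is produced. The actual fix lives in the C++ to_layout implementation; this Python-level helper is only an illustrative sketch, and the helper name is hypothetical:

```python
import ttnn


def apply_requested_memory_config(output_tensor, requested_memory_config):
    """Hypothetical helper mirroring the fix: honor the caller's memory config."""
    # If the caller asked for a memory config different from the one the
    # output ended up with, convert it instead of silently leaving the
    # tensor sharded in L1.
    if (
        requested_memory_config is not None
        and requested_memory_config != output_tensor.memory_config()
    ):
        return ttnn.to_memory_config(output_tensor, requested_memory_config)
    return output_tensor
```

With a check like this in place, to_layout should return a tensor whose memory config matches the requested one rather than keeping the sharded L1 configuration.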

Checklist

@nardoTT changed the title from "to_layout sharded keeps the tensor in L1" to "fix to_layout sharded bug" on Feb 11, 2025
Labels: None yet
Projects: None yet
2 participants