
Simplify autoformat facilities in preparation for removal #17896

Open

wants to merge 1 commit into main
Conversation

@ayerofieiev-tt (Member) commented Feb 15, 2025

Ticket

Link to Github Issue

Problem description

Autoformat is a convoluted mechanism that we want to remove. Because it is convoluted, it is hard to remove in one step. I want to clean it up a bit as I try to better understand how it works.

What's changed

Added comments and removed things that are clearly not needed.

Checklist

@sjameelTT (Contributor) left a comment:

You may want to file issues with op owners asking for removal, to help out here.

@@ -117,7 +117,7 @@ Tensor AutoFormat::format_input_tensor(
padded_shape.to_array_4D(),
tt::tt_metal::Array4D({0, 0, 0, 0}),
pad_value,
false,
false, /* multicore */
A contributor left a comment:
I think it is better to put it before false, i.e. /* multicore */ false, otherwise it looks like it relates to the next parameter.

A contributor left a comment:
Should be /*multicore=*/false,
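
For illustration, a minimal sketch of the call from the diff above with the comment moved in front of the value, in the /*name=*/value style suggested here (the surrounding arguments are copied from the diff):

padded_shape.to_array_4D(),
tt::tt_metal::Array4D({0, 0, 0, 0}),
pad_value,
/*multicore=*/false,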

ttnn::Shape(unpadded_shape), pad_c, pad_n, pad_h, pad_w);
[](const std::array<uint32_t, 4>& unpadded_shape) -> std::vector<uint32_t> {
auto result =
ttnn::operations::experimental::auto_format::AutoFormat::pad_to_tile_shape(ttnn::Shape(unpadded_shape));
A contributor left a comment:
I'd create a standalone function. Might be useful in C++ too.
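
A minimal sketch of what such a standalone helper could look like; the free-function form and its placement in the auto_format namespace are illustrative assumptions, not part of this PR:

namespace ttnn::operations::experimental::auto_format {

// Hypothetical free function so call sites do not have to spell out the class
inline ttnn::Shape pad_to_tile_shape(const std::array<uint32_t, 4>& unpadded_shape) {
    return AutoFormat::pad_to_tile_shape(ttnn::Shape(unpadded_shape));
}

}  // namespace ttnn::operations::experimental::auto_format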

ttnn::Shape AutoFormat::pad_to_tile_shape(const ttnn::Shape& unpadded_shape) {
using namespace tt::constants;
auto rank = unpadded_shape.rank();
TT_ASSERT(rank >= 1, "rank of shape to pad to tile shape must be at least 1.");
A contributor left a comment:
TT_FATAL might be better.
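
For reference, a minimal sketch of the check with TT_FATAL swapped in, keeping the same condition and message (this assumes the usual tt-metal convention that TT_FATAL fires in all builds, while TT_ASSERT is typically compiled out of release builds):

auto rank = unpadded_shape.rank();
// TT_FATAL reports the failure in every build, not only when asserts are enabled
TT_FATAL(rank >= 1, "rank of shape to pad to tile shape must be at least 1.");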

@@ -100,6 +100,11 @@ Tensor tensor_to_device(

Tensor tensor_cpu(const Tensor& input_tensor, bool blocking, QueueId cq_id) {
ZoneScoped;
A contributor left a comment:
You can move ZoneScoped down or the graph tracker up.
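
A rough sketch of the suggested ordering, with the graph-tracker logic indicated only by a placeholder comment since the actual calls are not shown in this diff:

Tensor tensor_cpu(const Tensor& input_tensor, bool blocking, QueueId cq_id) {
    // graph-tracker handling added by this PR would sit here, ahead of the profiler zone
    ZoneScoped;
    // ... rest of the function unchanged
}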

@dmakoviichuk-tt (Contributor) left a comment:

A few minor comments

auto a_pad_shape = ttnn::operations::experimental::auto_format::AutoFormat::pad_to_tile_shape(
temp.get_padded_shape(), false, false, true, true);
auto a_pad_shape =
ttnn::operations::experimental::auto_format::AutoFormat::pad_to_tile_shape(temp.get_padded_shape());

nit: can we introduce something like using ttnn::operations::experimental::auto_format::AutoFormat as an alias, to avoid these long lines?
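
A minimal sketch of the kind of alias the suggestion points at; the alias names here are illustrative:

// Option 1: alias the namespace
namespace auto_format = ttnn::operations::experimental::auto_format;
auto a_pad_shape = auto_format::AutoFormat::pad_to_tile_shape(temp.get_padded_shape());

// Option 2: alias the class itself
using AutoFormat = ttnn::operations::experimental::auto_format::AutoFormat;
auto b_pad_shape = AutoFormat::pad_to_tile_shape(temp.get_padded_shape());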

}

bool AutoFormat::legal_tile_shape(const ttnn::Shape& shape) {
return (shape[2] % tt::constants::TILE_HEIGHT == 0 && shape[3] % tt::constants::TILE_WIDTH == 0);


nit, and more a question than an assertion: shape[2] and shape[3] seem to carry meaning, so wouldn't it be better to use named constants, e.g. shape[HEIGHT] % tt::constants::TILE_HEIGHT?
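
A minimal sketch of what named index constants could look like; the constant names are hypothetical, not taken from the codebase:

// Hypothetical names for the H/W dimension indices of a 4D (N, C, H, W) shape
constexpr int HEIGHT_DIM = 2;
constexpr int WIDTH_DIM = 3;

bool AutoFormat::legal_tile_shape(const ttnn::Shape& shape) {
    return shape[HEIGHT_DIM] % tt::constants::TILE_HEIGHT == 0 &&
           shape[WIDTH_DIM] % tt::constants::TILE_WIDTH == 0;
}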

@dgomezTT left a comment:

Looks good to me; just a few comments/questions, but overall it makes sense.
