
[Bufferization] Enable OneShot #10

Open · wants to merge 8 commits into cpu-proto from dmitriim/enable_oneshot_buffer
Conversation

@Devjiu commented Nov 29, 2023

This commit enables one-shot bufferization and the associated passes.
The number of mallocs is reduced, but correctness should still be verified
on a larger set of nets/layers.

in shape: torch.Size([100, 128, 128])
w shape: (8192, 16384)
b shape: (8192,)

vanilla PyTorch

1.20438 ms = (1.3544 + 1.2414 + 1.1625 + 1.2506 + 1.1141 + 1.1030 + 1.1984 + 1.2178 + 1.2174 + 1.1842)/10

torch_mlir

4.60667 ms = (5.7596 + 4.8763 + 6.0515 + 4.1456 + 4.3975 + 4.2410 + 4.2244 + 4.1725 + 4.2603 + 3.9380)/10

torch_mlir oneshot

3.86755 ms = (3.8856 + 3.5017 + 3.6128 + 3.5422 + 4.7873 + 3.6021 + 3.5291 + 3.5989 + 3.6128 + 5.0030)/10
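
The averages above are over 10 timed runs. A minimal sketch of such a harness follows (the warmup policy and exact setup are assumptions, not taken from this PR):

```python
import time

import torch

# Hypothetical timing harness matching the "(t1 + ... + t10)/10"
# reporting above; the warmup count is an assumption.
def bench(fn, x, runs=10, warmup=3):
    with torch.no_grad():
        for _ in range(warmup):
            fn(x)
        times_ms = []
        for _ in range(runs):
            t0 = time.perf_counter()
            fn(x)
            times_ms.append((time.perf_counter() - t0) * 1e3)
    return times_ms, sum(times_ms) / len(times_ms)
```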
The model's forward pass:

x = self.flatten(x)
x = self.linear1(x)
x = self.relu(x)
x = self.linear2(x)

MLP(128 * 128, 1024):
current: 6 memref.alloc() ops
one-shot: 3 memref.alloc() ops
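
Putting the pieces together, a minimal sketch of the benchmarked module (the hidden width of 8192 is inferred from the reported w/b shapes; the constructor signature is an assumption):

```python
import torch
from torch import nn

# Hypothetical reconstruction of the benchmarked MLP; hidden=8192 is
# inferred from w shape (8192, 16384) and b shape (8192,) above.
class MLP(nn.Module):
    def __init__(self, in_features, out_features, hidden=8192):
        super().__init__()
        self.flatten = nn.Flatten()
        self.linear1 = nn.Linear(in_features, hidden)
        self.relu = nn.ReLU()
        self.linear2 = nn.Linear(hidden, out_features)

    def forward(self, x):
        x = self.flatten(x)
        x = self.linear1(x)
        x = self.relu(x)
        x = self.linear2(x)
        return x

model = MLP(128 * 128, 1024)
x = torch.randn(100, 128, 128)  # matches "in shape" above
y = model(x)                    # -> torch.Size([100, 1024])
```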

kurapov-peter and others added 3 commits November 10, 2023 15:14
* Add conda env yaml

* Add mlp example

* Add dumps to mlp example (enabled by default)
Signed-off-by: Ilya Enkovich <[email protected]>
Co-authored-by: Laurent Montigny <[email protected]>
Add lowering pass and backend to experiment with matmul kernels. (#4)

* Add lowering pass and backend to experiment with matmul kernels.

Signed-off-by: Ilya Enkovich <[email protected]>

* Use MKL for matmul kernels.

Signed-off-by: Ilya Enkovich <[email protected]>

---------

Signed-off-by: Ilya Enkovich <[email protected]>
@Devjiu force-pushed the dmitriim/enable_oneshot_buffer branch 2 times, most recently from 68c5230 to ee73567 on November 29, 2023 at 20:53
ienkovich and others added 5 commits December 1, 2023 18:36
This commit introduces FuseLinalgOpsPass, which folds a transpose of
either matmul argument into the matmul for the Linalg dialect. The
torch-to-linalg conversion has also been changed so that `aten::mm` is
converted to `linalg::matmul` instead of `linalg::generic`.

Signed-off-by: Dmitrii Makarenko <[email protected]>
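
For intuition on where such transposes come from, a hedged PyTorch-level illustration (not code from this repo): a linear layer computes x @ W.T, so its lowering naturally produces a transpose feeding a matmul.

```python
import torch
from torch import nn

# Illustrative only: the transposed weight below is the kind of matmul
# argument the pass folds into the Linalg matmul op.
lin = nn.Linear(16384, 8192, bias=False)
x = torch.randn(100, 16384)
y = torch.mm(x, lin.weight.t())  # aten::mm with a transposed argument
assert torch.allclose(lin(x), y)
```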
This commit enables one-shot bufferization and the associated passes.
The number of mallocs is reduced, but correctness should still be verified
on a larger set of nets/layers.

Signed-off-by: Dmitrii Makarenko <[email protected]>
@Devjiu force-pushed the dmitriim/enable_oneshot_buffer branch from ee73567 to be5ba89 on December 1, 2023 at 17:46
@@ -34,6 +34,7 @@ def _build_lowering_pipeline(opts: TestOptions):
     "func.func(linalg-fuse-elementwise-ops)",
     "convert-shape-to-std",
     # Bufferize.
+    "one-shot-bufferize",
     "func.func(scf-bufferize)",
     "func.func(tm-tensor-bufferize)",


Shouldn't one-shot replace all of these?

@Devjiu (Author)

Not all of them; one-shot mostly does the dependency analysis at an early stage. AFAIU the subsequent bufferization passes convert the dialect-specific constructs to buffers, so we still need them.
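
For reference, a minimal sketch of driving such a textual pipeline through the upstream MLIR Python bindings, assuming those bindings are available and that all listed passes (including the torch-mlir ones) are registered in the build:

```python
from mlir.ir import Context, Module
from mlir.passmanager import PassManager

# Sketch: join pass names from the diff above into one textual pipeline
# and run it. Assumes upstream MLIR Python bindings and that the
# torch-mlir passes (e.g. tm-tensor-bufferize) are registered.
passes = [
    "one-shot-bufferize",
    "func.func(scf-bufferize)",
    "func.func(tm-tensor-bufferize)",
]
with Context():
    module = Module.parse("module {}")  # placeholder IR
    pm = PassManager.parse("builtin.module(" + ",".join(passes) + ")")
    pm.run(module.operation)
```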
