[Bufferization] Enable OneShot #10
base: cpu-proto
Conversation
* Add conda env yaml
* Add mlp example
* Add dumps to mlp example (enabled by default)
Signed-off-by: Ilya Enkovich <[email protected]>
Co-authored-by: Laurent Montigny <[email protected]>
…nels. (#4)
* Add lowering pass and backend to experiment with matmul kernels.
  Signed-off-by: Ilya Enkovich <[email protected]>
* Use MKL for matmul kernels.
  Signed-off-by: Ilya Enkovich <[email protected]>
---------
Signed-off-by: Ilya Enkovich <[email protected]>
force-pushed from 68c5230 to ee73567
Signed-off-by: Ilya Enkovich <[email protected]>
This commit introduces FuseLinalgOpsPass, which merges a transpose on either matmul argument into the matmul in the Linalg dialect. The torch-to-linalg conversion has also been changed so that `aten::mm` is converted to `linalg::matmul` instead of `linalg::generic`.
Signed-off-by: Dmitrii Makarenko <[email protected]>
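For illustration, here is a minimal PyTorch sketch (hypothetical module and shapes, not taken from the repo) of the pattern this pass targets: an `aten::mm` whose weight argument is transposed, which after torch-to-linalg lowering yields a transpose feeding `linalg::matmul` that FuseLinalgOpsPass can fold.

```python
# Hypothetical example of the transpose-feeding-matmul pattern.
import torch

class MatmulWithTranspose(torch.nn.Module):
    def __init__(self, out_features: int, in_features: int):
        super().__init__()
        # Weight stored as (out, in), as in nn.Linear, so the matmul needs w^T.
        self.w = torch.nn.Parameter(torch.randn(out_features, in_features))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # aten::mm with a transposed argument.
        return torch.mm(x, self.w.t())

m = MatmulWithTranspose(16, 8)
print(m(torch.randn(4, 8)).shape)  # torch.Size([4, 16])
```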
This commit enables one-shot bufferization and associated passes. The number of mallocs is reduced, but correctness should be verified on a larger set of nets/layers.
Signed-off-by: Dmitrii Makarenko <[email protected]>
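As a rough way to check the malloc-reduction claim, a hedged sketch follows (a hypothetical helper, not part of the repo, assuming the IR dumps added to the mlp example are available): count `memref.alloc`/`memref.alloca` ops in the dumped MLIR before and after the change.

```python
# Hypothetical helper: compare allocation counts in dumped MLIR files.
import re
import sys

def count_allocs(mlir_path: str) -> int:
    with open(mlir_path) as f:
        text = f.read()
    # memref.alloc / memref.alloca are the ops that typically become heap or
    # stack allocations after lowering.
    return len(re.findall(r"\bmemref\.alloca?\b", text))

if __name__ == "__main__":
    for path in sys.argv[1:]:
        print(f"{path}: {count_allocs(path)} allocations")
```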
force-pushed from ee73567 to be5ba89
@@ -34,6 +34,7 @@ def _build_lowering_pipeline(opts: TestOptions):
     "func.func(linalg-fuse-elementwise-ops)",
     "convert-shape-to-std",
     # Bufferize.
+    "one-shot-bufferize",
     "func.func(scf-bufferize)",
     "func.func(tm-tensor-bufferize)",
Shouldn't one-shot replace all of these?
Not all of them; at this early stage it mostly does dependency analysis. AFAIU the subsequent bufferization passes convert dialect-specific constructs to buffers, so we still need them.
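A sketch of that ordering, using only the pass names visible in the diff above (the real `_build_lowering_pipeline` contains more passes, and joining with commas is an assumption about how the repo assembles the pipeline string):

```python
# Sketch: one-shot-bufferize runs first (analysis plus core tensor->memref
# rewriting), while the dialect-specific bufferization passes that follow
# still handle constructs it does not cover in this pipeline.
bufferization_passes = [
    "one-shot-bufferize",
    "func.func(scf-bufferize)",
    "func.func(tm-tensor-bufferize)",
]
pipeline = ",".join(bufferization_passes)  # assumed comma-joined pass pipeline string
print(pipeline)
```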
This commit enables one-shot bufferization and associated passes. The number of mallocs is reduced, but correctness should be verified on a larger set of nets/layers.
in shape: torch.Size([100, 128, 128])
w shape: (8192, 16384)
b shape: (8192,)
vanilla pytorch
torch_mlir
torch_mlir oneshot
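For reference, a hypothetical harness for the "vanilla pytorch" row only, reconstructed from the shapes printed above (the flattening of the [100, 128, 128] input to [100, 16384] is an assumption; the torch_mlir and torch_mlir oneshot rows would run the same module through this repo's compile path, which is not reproduced here):

```python
# Hypothetical timing harness for the vanilla PyTorch baseline.
import time
import torch

def bench(fn, warmup=3, iters=10):
    # Average wall-clock seconds per iteration after a short warmup.
    for _ in range(warmup):
        fn()
    start = time.perf_counter()
    for _ in range(iters):
        fn()
    return (time.perf_counter() - start) / iters

x = torch.randn(100, 128, 128)         # in shape from the log above
linear = torch.nn.Linear(16384, 8192)  # w shape (8192, 16384), b shape (8192,)
flat = x.reshape(100, -1)              # assumed flattening to 16384 input features
print("vanilla pytorch: %.4f s/iter" % bench(lambda: linear(flat)))
```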