Add llama and resnet emitc tests #2144

Merged: 1 commit, merged on Feb 16, 2025
2 changes: 1 addition & 1 deletion include/ttmlir/Dialect/TTNN/Utils/TransformUtils.h
@@ -13,7 +13,7 @@

namespace mlir::tt::ttnn::utils {
// Get or insert device for the given operation.
GetDeviceOp getOrInsertDevice(mlir::PatternRewriter &rewriter,
GetDeviceOp getOrInsertDevice(mlir::RewriterBase &rewriter,
mlir::Operation *op);

// Helper method to insert a ToLayoutOp to convert the input operand to the
45 changes: 44 additions & 1 deletion lib/Conversion/TTNNToEmitC/TTNNToEmitC.cpp
@@ -604,6 +604,49 @@ class RepeatOpConversionPattern
}
};

// RepeatInterleave op conversion pattern
//
class RepeatInterleaveOpConversionPattern
: public TTNNToEmitCBaseOpConversionPattern<ttnn::RepeatInterleaveOp> {
public:
using TTNNToEmitCBaseOpConversionPattern<
ttnn::RepeatInterleaveOp>::TTNNToEmitCBaseOpConversionPattern;

LogicalResult
matchAndRewrite(ttnn::RepeatInterleaveOp repeatInterleaveOp,
ttnn::RepeatInterleaveOp::Adaptor adaptor,
ConversionPatternRewriter &rewriter) const override {
// Create operands vector
//
llvm::SmallVector<Value, 2> operands{
adaptor.getOperands()[0],
};

// Create ArrayAttr object holding attributes and pointers to operands
//
ArrayAttr arrayAttrs = rewriter.getArrayAttr({
rewriter.getIndexAttr(0), // input tensor
repeatInterleaveOp.getRepeatsAttr(), repeatInterleaveOp.getDimAttr(),
repeatInterleaveOp.getMemoryConfig().has_value()
Contributor commented:

Nit: `has_value()` is not needed.

Contributor Author replied:

Sure, but I prefer it this way, as it makes it obvious what is being checked.

? (operands.push_back(ttnn_to_emitc::utils::createMemoryConfigOp(
rewriter,
repeatInterleaveOp.getMemoryConfigAttr(),
repeatInterleaveOp.getLoc())
->getResult(0)),
mlir::cast<Attribute>(rewriter.getIndexAttr(1)))
@azecevicTT (Contributor) commented on Feb 7, 2025:

Is this cast really needed? Also, I struggle to follow the logic here; you could move the work related to the `?:` operator before the `ArrayAttr` creation.

Contributor Author replied:

The cast is indeed needed; without it, the error is `Incompatible operand types ('IntegerAttr' and 'emitc::OpaqueAttr')`. I remember that you and I actually looked into this while working on the OnesOp conversion!

I'll leave it like this, since it's closer to the automated approach we discussed, with the idea that this op (and OnesOp, which has a same-style implementation) will be among the first reworked into that approach.

: ttnn_to_emitc::utils::createStdNullopt(
rewriter), // ttnn::MemoryConfig
});

rewriter.replaceOpWithNewOp<emitc::CallOpaqueOp>(
repeatInterleaveOp,
this->getTypeConverter()->convertType(repeatInterleaveOp.getType()),
this->convertOpName(repeatInterleaveOp), arrayAttrs, nullptr, operands);

return success();
}
};

// GetDeviceOp conversion pattern
//
namespace {
@@ -1320,7 +1363,7 @@ void populateTTNNToEmitCPatterns(mlir::MLIRContext *ctx,
//
patterns.add<TransposeOpConversionPattern, ConcatOpConversionPattern,
ReshapeOpConversionPattern, RepeatOpConversionPattern,
DefaultOpConversionPattern<ttnn::RepeatInterleaveOp>,
RepeatInterleaveOpConversionPattern,
DefaultOpConversionPattern<ttnn::SliceOp>,
DefaultOpConversionPattern<ttnn::PermuteOp>,
DefaultOpConversionPattern<ttnn::PadOp>>(typeConverter, ctx);
18 changes: 16 additions & 2 deletions lib/Dialect/TTNN/Transforms/Passes.cpp
@@ -9,6 +9,7 @@
#include "ttmlir/Dialect/TTNN/IR/TTNNOps.h"
#include "ttmlir/Dialect/TTNN/IR/TTNNOpsAttrs.h"
#include "ttmlir/Dialect/TTNN/IR/TTNNOpsTypes.h"
#include "ttmlir/Dialect/TTNN/Utils/TransformUtils.h"
#include "ttmlir/Dialect/TTNN/Utils/Utils.h"

#include "mlir/Analysis/Liveness.h"
@@ -218,11 +219,24 @@ class TTNNCreateInputGenerators

// Create a new tensor
//
mlir::Value tensorValue = rewriter.create<ttnn::OnesOp>(
ttnn::OnesOp onesOp = rewriter.create<ttnn::OnesOp>(
forwardFuncOp->getLoc(), tensorType, shapeAttr, dTypeAttr,
tensorLayoutAttr, nullptr, nullptr);

generatedTensors.push_back(tensorValue);
// If tensor is meant to be on device, add ToDevice op
//
if (layoutAttr.isDeviceBufferType()) {
ttnn::GetDeviceOp device =
ttnn::utils::getOrInsertDevice(rewriter, onesOp);

mlir::Value tensorOnDevice = rewriter.create<ttnn::ToDeviceOp>(
forwardFuncOp->getLoc(), tensorType, onesOp.getResult(),
device.getResult(), nullptr);

generatedTensors.push_back(tensorOnDevice);
} else {
generatedTensors.push_back(onesOp.getResult());
}
}

// Return the generated tensors
2 changes: 1 addition & 1 deletion lib/Dialect/TTNN/Utils/TransformUtils.cpp
@@ -11,7 +11,7 @@
namespace mlir::tt::ttnn::utils {
// Gets or inserts a GetDeviceOp at the top of the current block of the given
// operation.
GetDeviceOp getOrInsertDevice(PatternRewriter &rewriter, Operation *op) {
GetDeviceOp getOrInsertDevice(RewriterBase &rewriter, Operation *op) {
Block *block = op->getBlock();
for (auto &op : block->getOperations()) {
if (auto deviceOp = dyn_cast<ttnn::GetDeviceOp>(op)) {
2,667 changes: 2,667 additions & 0 deletions test/ttmlir/EmitC/TTNN/models/llama_prefill.mlir

Large diffs are not rendered by default.

508 changes: 508 additions & 0 deletions test/ttmlir/EmitC/TTNN/models/resnet.mlir

Large diffs are not rendered by default.

10 changes: 10 additions & 0 deletions test/ttmlir/EmitC/TTNN/tensor/repeat_interleave.mlir
@@ -0,0 +1,10 @@
// RUN: ttmlir-opt --ttir-to-ttnn-backend-pipeline="system-desc-path=%system_desc_path%" %s > %t.mlir
// RUN: ttmlir-translate --ttnn-to-flatbuffer %t.mlir > %basename_t.ttnn
// RUN: ttmlir-opt --ttnn-modify-signatures-for-dylib --convert-ttnn-to-emitc %t.mlir > %t2.mlir
// RUN: ttmlir-translate --mlir-to-cpp %t2.mlir > %basename_t.cpp

func.func @repeat_interleave(%arg0: tensor<4x6xf32>) -> tensor<4x24xf32> {
%0 = tensor.empty() : tensor<4x24xf32>
%1 = "ttir.repeat_interleave"(%arg0, %0) {repeats = 4 : ui32, dim = 1 : si32} : (tensor<4x6xf32>, tensor<4x24xf32>) -> tensor<4x24xf32>
return %1 : tensor<4x24xf32>
}
1 change: 0 additions & 1 deletion tools/ttnn-standalone/ci_compile_dylib.py
@@ -145,7 +145,6 @@ def compile_shared_object(cpp_file_path, output_dir):
destination_path = os.path.join(output_dir, output_file_name)
shutil.copy2(compiled_so_path, destination_path)
print(f" Successfully copied compiled file to {destination_path}.")
os.remove(source_cpp_path)
except subprocess.CalledProcessError as e:
print(f" Error during build process: {e}")
print(e.stderr)
1 change: 1 addition & 0 deletions tools/ttnn-standalone/ttnn-precompiled.hpp
@@ -13,6 +13,7 @@
#include "operations/creation.hpp"
#include "operations/data_movement/concat/concat.hpp"
#include "operations/data_movement/repeat/repeat.hpp"
#include "operations/data_movement/repeat_interleave/repeat_interleave.hpp"
#include "operations/data_movement/transpose/transpose.hpp"
#include "operations/eltwise/binary/binary.hpp"
#include "operations/eltwise/binary/binary_composite.hpp"