Convert directory fbcode/aitemplate to use the Ruff Formatter (#1033)
c10::nullopt -> std::nullopt (#1032)
Break down test_group_fused_layernorm_sigmoid_mul to avoid timeout (#…)
Allow Passing in Size of Dynamic Dimensions to Inference Function (#1025)
Add reciprocal operator (#1023)
Use the full header file path to fix the build (#1022)
Add max_acc_splits (#1017)
Process separate q/k/v weights in MHA converter (#1020)
Prevent erroneous deduping of the full op (#1018)
Make V100 tests runnable on {A,H}100 in Sandcastle CI (#1019)
add eager mode check for NaN and Inf (#1015)
add cuda_graph for mts_gpu_benchmark (#1012)
Handle size-0 output for split (#1011)
allow concatenating empty tensors (#1010)
Use torch.all to check tensor equality (#1008)
c10::optional -> std::optional in aiplatform/distributed_inference/py…
Fix minimizer tests (#1006)
Disable AITModel in fx2ait on AMD
standalone find_batch_size_dim (#1005)
Range operator lowering minimizer
Update OSS black linter version to 24.4.0 (#1001)
Skip fuse_mm_elementwise fusion with model output in the middle (#1000)
Fixed a couple of dynamic shape detection issues (#996)
apply Black 2024 style in fbcode (11/16)
ait: Explicitly throw when indexing a boolean tensor for masking (#992)
allow binary ops where only one arg is an immutable IntVarTensor (#987)
add support for permute call with tuple args (#986)