[Test Mv] Move collective/fleet to test dir #51987
Conversation
Your PR was submitted successfully. Thank you for contributing to this open-source project!
❌ This PR was not created using the PR template. You can refer to this Demo.
* test_logit_op; add cudaKernel to replace eigen impl; bf16 unit test CI
* add nms3 register output defs; remove nms from set
* …ddle#50021) extract common methods to reuse; add header for transpose ops; fused_transpose; split big function; transpose2 tests; apply extra attributes; add pbtxt file; update pbtxt; merge develop; add more strict op compats; code style; remove mkldnn_data_type; unify SetOutMemDescWithReshape2FuseSupport; adjust quantize-dequantize for transpose; remove appendact; transpose2 quantization; fix int8 tests; adjust transpose_op to current develop; delete fusion code from transpose_kernel; add fused transpose to NHWC unittest; change order
* Add fused_feed_forward pass for semi-automatic static graph training; add fused_feedforward property in parallel_executor.cc; polish fused feed_forward pass code, supporting the use_dropout1 and use_dropout2 options; support model parallel in the fused_feedforward pass
* fix distribute_fpn_proposals; fix bug
* …ddle#51687) support 0-d tensor for element-wise unary ops; fix python code style check; fix approval check; support 0-d tensor for onednn softmax and logsoftmax kernels; fix comments; fix some unittests
* test_get_kernel; add invoke signature; change reduce_max; change frobenius_norm; reset reduce_max according to composite and change reduce_all; fix the bug when Scalar(*); fix 'scalar when support_tensor'; change code according to review; change 'keep_signature' to 'manual_signature' and add some error info; support optimizers autogen; change sgd yaml; change generate signature; fix test/cpp/new_executor/CM; reset signature generated function; change signature function
* allow return none when stop_gradient=True; remove useless code; refine code; fix test cast; add more tests; fix error msg in pylayer
* …0926) finish pr; skip cpu test for logical; change test style; fix error
* …and unnecessary <list/tuple> passed to <list/tuple>() (PaddlePaddle#51928) autofix; add select config; autofix C410; add C410 select
* add nanmedian output defs; remove the multiclass_nms3 momentum
* …ddle#51532) add fp16 and bf16 for temporalshift; add fp16 and bf16 for complex; add fp16 and bf16 for conj; fix bugs; update complex_kernel.h, temporal_shift_grad_kernel.h, and temporal_shift_kernel.h
* Add bf16 support for elementwise_pow; update ut
* add patterns; update rule-based tuner; add forward sub-program completion; add unittest; add bwd sub-program completion
* update; fix test
* unify add_position_encoding; unify affine_channel; unify alloc_float_status; unify allreduce; unify alltoall; unify anchor_generator; unify ascend_trigger; fix bug; fix test
* delete old dygraph xpu op test
* first commit; fix bugs; remove useless sync
* …dle#51975) remove fluid deps in fused_linear_param_grad_add_kernel; fix compile error; fix ut error; follow comments
* add meshgrid composite rule; add into CMakeLists; optimize code; fix meshgrid op; update test
* ….cc (PaddlePaddle#51676) delete prim flag for matmul_2_grad; add new setgradoutmeta for matmul_double_grad_node; modify test and delete log; deal with review
* …X mode API {func_name}' (PaddlePaddle#51991) [Polish Log] Polish Tensor operants' log: 'OperantsManager reusing XXX mode API {func_name}'; make API name more precise
* …ddlePaddle#52019) [CodeStyle] update ruff config with `#` style to reduce conflicts; revert C403-C406; remove B905; test=document_fix
* [Dy2St] Fix clone-for-test state problem; clean code
* …tidea/Paddle into move_collective_feet_to_test
Sorry, after repeated discussion this PR has not yet met the merge criteria. Please read the PaddlePaddle native operator development specification; you are welcome to submit a new PR. We are closing this PR for now. Thank you for your contribution.
PR types
Others
PR changes
Others
Describe
Move collective/fleet to test dir
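The change itself is a directory relocation. The sketch below illustrates the kind of move the PR title describes; the paths and the `test_fleet_example.py` file name are assumptions for illustration, not taken from the actual diff, and the real PR also updates the matching test registration (e.g. CMake/config files):

```shell
# Hypothetical sketch: recreate the old layout, then relocate it.
# Paths are illustrative assumptions based on the PR title.
mkdir -p python/paddle/fluid/tests/unittests/collective/fleet
touch python/paddle/fluid/tests/unittests/collective/fleet/test_fleet_example.py

# Move the whole collective/fleet test suite under the top-level test dir.
mkdir -p test/collective
mv python/paddle/fluid/tests/unittests/collective/fleet test/collective/fleet

# The tests now live under test/collective/fleet.
ls test/collective/fleet
```

In the repository itself this would typically be done with `git mv` so history is preserved.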
New PR