Fix gatherv overflow when the combined packed mesh size exceeds 2^31. #117

Merged · 10 commits · Aug 6, 2024

Conversation

@Algiane (Member) commented Aug 5, 2024

Fix gatherv overflow when the combined packed mesh size exceeds 2^31.

In this case, the array of displacements provided to MPI_Gatherv overflows INT32_MAX, while the function expects an array of int32_t: once the cumulative size of the packed meshes preceding a rank exceeds 2^31 - 1 bytes, that rank's displacement no longer fits in an int.

This PR follows on from PRs #113 and #114, which are credited to @mpotse and @wissambouymedj.

Fix

The MPI_Gatherv call has been replaced by MPI_Send / MPI_Recv calls.

Validation

  • The attached piece of code provides a copy-paste of the Gatherv call used in ParMmg, together with the implemented Send/Recv fix.
  • If built with -DENABLE_GATHERV, the program calls the Gatherv function; otherwise it calls the Send/Recv ones. The integer type of the displs array and of the buf_idx variable has to be chosen when building the program (-DINT_TYPE=int or -DINT_TYPE=size_t). A possible build-and-run sequence is sketched after the listing.
  • When run on 8 MPI processes with INT_TYPE defined to size_t, we can observe the Gatherv failure while the Send/Recv messages succeed.
#include <mpi.h>
#include <stdio.h>
#include <limits.h>
#include <stdint.h>  /* int8_t, SIZE_MAX */
#include <string.h>
#include <assert.h>
#include <stdlib.h>


#define MPI_MERGEMESH_TAG 1


int main(void) {
    int msg_size = INT_MAX-1;

    // Initialize the MPI environment
    MPI_Init(NULL, NULL);

    // Get the number of processes
    int nprocs;
    MPI_Comm_size(MPI_COMM_WORLD, &nprocs);

    // Get the rank of the process
    int myrank;
    MPI_Comm_rank(MPI_COMM_WORLD, &myrank);

    // Get the name of the processor
    char processor_name[MPI_MAX_PROCESSOR_NAME];
    int name_len;
    MPI_Get_processor_name(processor_name, &name_len);

    // From here, the code is copied from ParMmg and modified only to simulate
    // the gathering of a very large char array with gatherv function (if the
    // program is build with -DENABLE_GATHERV option) or MPI_Send/Recv ones
    size_t     pack_size_tot,next_disp;

    int        *rcv_pack_size;
    int        root,pack_size;
    char       *rcv_buffer,val,*buffer;

    INT_TYPE *displs,buf_idx;

    // Fake initialization of parameters and arrays on each proc
    root = 0;

    int8_t A = 65;
    if ( myrank == root ) {
      pack_size = 10;
      val = A;
    }
    else {
      int8_t rank_unit = myrank % 10;
      pack_size = msg_size;
      val = A + rank_unit;
    }

    /* Set buffer to suitable value */
    buffer = (char*)malloc(pack_size);
    rcv_pack_size = (int*)malloc(nprocs*sizeof(int));
    displs = (INT_TYPE*)malloc(nprocs*sizeof(INT_TYPE));

    memset(buffer,val,pack_size);

    printf(".. On rank %d, buffer is filled with %.5s\n",myrank,buffer);

    MPI_Gather(&pack_size,1,MPI_INT,rcv_pack_size,1,MPI_INT,root,MPI_COMM_WORLD);

    if ( myrank == root ) {
      displs[0] = 0;
      for ( int k=1; k<nprocs; ++k ) {

        next_disp = displs[k-1] + rcv_pack_size[k-1];
        if(next_disp>INT_MAX){
          /* The displacements argument to MPI_Gatherv() is an array of int
         * (signed) so the number of elements must be smaller than 2^31.
         * To get around this we must pack more data in a single element
         * or use multiple messages.
         */
          fprintf(stderr, "\n\n  ## Warning: INT MAX overflow in displs array \n\n");
          // MPI_Abort(MPI_COMM_WORLD, 1);      /* error detected only on root */
        }
        displs[k] = next_disp;
      }
      /* On root, we will gather all the meshes in rcv_buffer so we have to
       * compute the total pack size */
      pack_size_tot        = (size_t)(displs[nprocs-1])+(size_t)(rcv_pack_size[nprocs-1]);
      assert ( pack_size_tot < SIZE_MAX && "SIZE_MAX overflow" );

      /* root will write directly in the suitable position of rcv_buffer */
      buf_idx = displs[root];
    }
    else {
      /* on ranks other than root we just need to store the local mesh so buffer
       * will be of size pack_size */
      pack_size_tot = pack_size;
      /* we will write the mesh at the starting position */
      buf_idx = 0;
    }

    rcv_buffer = (char *)malloc(pack_size_tot);

#ifndef ENABLE_GATHERV
    if ( myrank == root ) {
      /* With send/recv we have to copy buffer in rcv_buffer on rank root */
      memcpy(rcv_buffer,buffer,pack_size);

      for ( int i = 0; i < nprocs; ++i ) {
        if ( i != myrank ) {
          printf(" -- rank %d rcv %d data from %d at address %zu\n",root,rcv_pack_size[i],i,displs[i]);
          MPI_Recv(rcv_buffer + displs[i], rcv_pack_size[i], MPI_CHAR, i,
                   MPI_MERGEMESH_TAG, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        }
      }
    } else {
      printf(" -- rank %d send a char array of size %d with value %c to %d\n",myrank,pack_size,buffer[0],root);
      MPI_Send(buffer, pack_size, MPI_CHAR, root, MPI_MERGEMESH_TAG,MPI_COMM_WORLD);
    }
#else
    MPI_Gatherv ( buffer,pack_size,MPI_CHAR,
                  rcv_buffer,rcv_pack_size,
                  displs,MPI_CHAR,root,MPI_COMM_WORLD );
#endif

    if ( myrank == root ) {
      printf("\n\n ==> Program succeed !!! gathered buffer %.5s ... %.5s ...\n",
             rcv_buffer,&rcv_buffer[pack_size_tot - rcv_pack_size[nprocs-1] - 2]);
    }

    free ( buffer );
    free ( rcv_buffer );

    // Finalize the MPI environment.
    MPI_Finalize();

    return 0;
}
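
For reference, here is a possible build-and-run sequence; the file name gatherv_overflow.c, the mpicc/mpirun wrappers and the optimization flag are assumptions for illustration. Note that, with these message sizes, each non-root rank allocates roughly 2 GB and root roughly 15 GB more, so the run needs a machine (or several nodes) with enough memory.

# Send/Recv version, with size_t displacements and buffer index
mpicc -O2 -DINT_TYPE=size_t gatherv_overflow.c -o test_sendrecv
mpirun -np 8 ./test_sendrecv

# Gatherv version with the same integer type: expected to warn/fail once
# the displacements exceed INT_MAX
mpicc -O2 -DENABLE_GATHERV -DINT_TYPE=size_t gatherv_overflow.c -o test_gatherv
mpirun -np 8 ./test_gatherv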

Todo

Packing the mesh inside a char array implies that we need large integers to store the message size. An alternative that was implemented in the past (and was working, I think) used hierarchical MPI_Struct datatypes to describe the C structures used in the mesh (points, xpoints, tetra...) as well as the mesh itself. It should make the mesh communication easier and reduce the size of the integers used to express the message size, because the MPI structures are larger than chars, which are the minimal unit with the current approach. A minimal sketch of such a derived datatype is given after the issue link below.

See issue #118
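
As an illustration only, here is a minimal sketch of such a derived datatype, assuming a toy point structure (toy_point, its fields and the helper name are hypothetical and do not match the actual ParMmg structures). Counts passed to the MPI calls are then expressed in points rather than bytes, so a 32-bit count or displacement goes sizeof(toy_point) times further before overflowing.

#include <mpi.h>
#include <stddef.h>   /* offsetof */

/* Hypothetical point structure, for illustration only. */
typedef struct {
  double c[3];   /* coordinates */
  int    ref;    /* reference   */
  int    tag;    /* tag         */
} toy_point;

/* Build an MPI datatype matching toy_point: 3 doubles followed by 2 ints. */
static MPI_Datatype create_point_type(void) {
  int          blocklen[2] = { 3, 2 };
  MPI_Aint     disp[2]     = { offsetof(toy_point, c), offsetof(toy_point, ref) };
  MPI_Datatype types[2]    = { MPI_DOUBLE, MPI_INT };
  MPI_Datatype tmp, point_type;

  MPI_Type_create_struct(2, blocklen, disp, types, &tmp);
  /* Resize to sizeof(toy_point) so any trailing padding is accounted for. */
  MPI_Type_create_resized(tmp, 0, sizeof(toy_point), &point_type);
  MPI_Type_commit(&point_type);
  MPI_Type_free(&tmp);
  return point_type;
}

int main(void) {
  MPI_Init(NULL, NULL);
  MPI_Datatype point_type = create_point_type();
  /* point_type can now be used as the datatype argument of MPI_Send,
   * MPI_Recv or MPI_Gatherv, with counts expressed in points. */
  MPI_Type_free(&point_type);
  MPI_Finalize();
  return 0;
}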

wissambouymedj and others added 6 commits June 13, 2024 17:34
…in PMMG_gather_parmesh exceeds INT_MAX

  This problem occurs when the combined size of the first np-1 compressed mesh parts exceeds INT_MAX.
  This commit introduces an error message for this circumstance, which replaces an error on a negative allocation size.
  It also shows the allocation size in the error message for allocation failures, so that this kind of problem is easier to diagnose.
@Algiane Algiane added the kind: bug Something isn't working label Aug 5, 2024
@Algiane Algiane self-assigned this Aug 5, 2024

codecov bot commented Aug 5, 2024

Codecov Report

All modified and coverable lines are covered by tests ✅

Project coverage is 63.10%. Comparing base (babaf07) to head (6dac366).

Additional details and impacted files
@@             Coverage Diff             @@
##           develop     #117      +/-   ##
===========================================
+ Coverage    63.07%   63.10%   +0.02%     
===========================================
  Files           46       46              
  Lines        18955    18960       +5     
  Branches      3542     3544       +2     
===========================================
+ Hits         11956    11964       +8     
+ Misses        6073     6071       -2     
+ Partials       926      925       -1     


Algiane added 3 commits August 6, 2024 11:54
  - If run using 1 MPI process, the rcv_buffer variable has to be filled with the packed parmesh;
  - the pointer to the buffer provided to the mpipack_parmesh function is modified by that function, so it cannot be used anymore after the call.
@Algiane Algiane force-pushed the wissambouymedj-feature/fix-centralized-output branch from 7351df6 to 84be939 on August 6, 2024 09:54
@Algiane Algiane merged commit 5cbeb4d into develop Aug 6, 2024
42 checks passed
@Algiane Algiane deleted the wissambouymedj-feature/fix-centralized-output branch August 21, 2024 18:39