
Misc pending tasks associated with refactoring in branch release-0.2 #39

Open
23 of 30 tasks
fverdugo opened this issue Oct 7, 2021 · 11 comments

fverdugo (Member) commented Oct 7, 2021

Merging into master PR #50

  • Update README
  • Remove src/OLD, test/OLD, compile
  • Add NEWS.md

High priority

Medium priority

  • Nicer user API for PETScSolver (to be done in GridapPETSc). In particular, better handling of the lifetime of PETSc objects.
  • Implement distributed models from gmsh (in GridapGmsh) @fverdugo
  • Implement lu! for PSparseMatrix following the same strategy as with \ (see the sketch after this list)
  • Move mpi tests to their own CI job @amartinhuertas PR Separating Seq and MPI tests in Github actions #47
  • Automatically generate sysimage to reduce mpi tests runtime @amartinhuertas PR Adding sysimage generation to mpi tests #48
  • Think about how the user can define the local and global vector types in the FESpace
  • Showing the NLsolve.jl solver trace only on the master MPI rank leads to a deadlock.
  • Periodic BCs
  • Implement ZeroMeanFESpace
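
Regarding the lu! item above, a minimal sketch of the idea, assuming a caching wrapper written outside PartitionedArrays (the helper cached_lu! is hypothetical; only lu/lu! from LinearAlgebra/SuiteSparse for sparse matrices is assumed):

```julia
using LinearAlgebra, SparseArrays

# Hypothetical helper (NOT the PartitionedArrays API): factorize once and
# reuse the factorization object on later calls, mirroring how a cached
# factorization could back `\` for PSparseMatrix.
function cached_lu!(cache, A::SparseMatrixCSC)
    if cache === nothing
        return lu(A)          # first call: symbolic + numeric factorization
    else
        return lu!(cache, A)  # later calls: numeric refactorization only
    end
end

A = sprand(10, 10, 0.5) + 10I
F = cached_lu!(nothing, A)  # factorize
x = F \ rand(10)            # solve with the cached factors
F = cached_lu!(F, A)        # refactorize in place, e.g. after A's values change
```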

Low priority

  • Implement a lazier initialization of the matrix exchanger in PartitionedArrays, since the exchanger is not always needed.
  • Overlap the compression of the sparse matrix with the communication of the rhs assembly
  • Implement another strategy to represent local matrices in PartitionedArrays with a data layout compatible with PETSc
  • interpolate_everywhere not available in GridapDistributed.jl, only interpolate. Solved in PR Dirichlet values #74
amartinhuertas (Member) commented Oct 15, 2021

Let me document here a loose set of issues I find while going through GridapDistributed.jl (so as not to forget them; these are temporary, I may add new items on the fly, etc.):

  • [Minor; solved in PR Rhs assembly alone #40] DistributedSparseMatrixAssembler does not have a strategy member variable, and I think it should, as required by the get_assembly_strategy function (among others).
  • [Important] If you use the FullyAssembledRows() strategy in DistributedSparseMatrixAssembler but do not pass the ghost cells in the assembly raw data, the code does not complain. The same happens conversely if one uses SubAssembledRows() together with raw data that includes ghost cells. I think this is VERY dangerous and prone to errors. We should try to find a solution (see the sketch after this list).
  • Why are the assembly objects called Allocation and NOT Allocator?
  • Similarly, why is the function in GridapDistributed.jl called local_views and not local_view? Why the plural?
  • More issues to come ...
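
Regarding the [Important] item above, a hedged sketch of the invariant that currently goes unchecked (constructor names taken from this thread and the v0.2-era GridapDistributed API; treat exact signatures as assumptions):

```julia
# FullyAssembledRows() needs contributions from the ghost layer, while
# SubAssembledRows() must receive owned cells only; today the code does not
# complain when the pairing is wrong.
Ω_ghost = Triangulation(with_ghost, model)  # owned + ghost cells
Ω_owned = Triangulation(no_ghost, model)    # owned cells only

# Consistent pairings:
#   FullyAssembledRows()  <->  Ω_ghost
#   SubAssembledRows()    <->  Ω_owned
```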

fverdugo changed the title from "Misc pending tasks associated with refactoring in branch parrays_and_gridap_0.17_integration" to "Misc pending tasks associated with refactoring in branch release-0.2" on Oct 15, 2021
fverdugo (Member, Author) commented

Hi @amartinhuertas!

I have opened a new branch https://github.com/gridap/GridapDistributed.jl/tree/release-0.2 from the current state.

We can use this branch as the code that "is working" and do intermediate developments somewhere else.

fverdugo (Member, Author) commented

@amartinhuertas I have classified the pending tasks as high/medium/low priority. (Move tasks from one tier to another if you find it necessary.)

For me, we can merge the branch once the high-priority tasks are fixed. The missing low/medium-priority ones can be moved to separate issues.

amartinhuertas (Member) commented

> For me, we can merge the branch once the high-priority tasks are fixed. The missing low/medium-priority ones can be moved to separate issues.

Ok. I have added a new high-priority task. Agreed.

fverdugo (Member, Author) commented

@amartinhuertas I am adding a nonlinear test and I have found a bug associated with the in-place assembly functions assemble_xxx!. I am fixing it.

amartinhuertas (Member) commented

> @amartinhuertas I am adding a nonlinear test and I have found a bug associated with the in-place assembly functions assemble_xxx!. I am fixing it.

Yes, indeed: I added @test_broken macro calls in FESpacesTests.jl in relation to this. You will find them in the rhs_assembly_branch. When you fix the bug, you can replace them with @test.
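
For reference, this is how the swap works with Julia's Test standard library (the predicate name is made up for illustration):

```julia
using Test

# While the bug is open: @test_broken records an expected failure, and the
# suite errors if the expression unexpectedly starts passing.
@test_broken inplace_assembly_is_correct()  # hypothetical check

# After the fix, flip it to a regular test so regressions are caught:
# @test inplace_assembly_is_correct()
```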

fverdugo (Member, Author) commented

Hi @amartinhuertas

I have added the nonlinear example. In particular,

  • the in-place assembly functions are working.
  • one can use the parallel assembly strategy to build a valid triangulation; e.g., Triangulation(with_ghost,model) and Triangulation(FullyAssembledRows(),model) are equivalent (see the sketch below).
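
A short usage sketch of the equivalence mentioned above (assuming the v0.2-era constructors named in this thread):

```julia
# Both triangulations cover the same cells (owned + ghost); the second form
# derives the cell portion from the assembly strategy.
Ω1 = Triangulation(with_ghost, model)
Ω2 = Triangulation(FullyAssembledRows(), model)
```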

oriolcg (Member) commented Jan 17, 2022

Hi @amartinhuertas, @fverdugo, what are the developments needed for periodic BCs?

fverdugo (Member, Author) commented

At least, the ghost layer has to be built taking periodicity into account. And perhaps other developments are needed.

oriolcg (Member) commented Feb 2, 2022

Hi @amartinhuertas and @fverdugo, the last task of the list can be checked off once PR #74 is accepted. Also, I suggest adding another task to the list: support ZeroMeanFESpace. I guess that this is only a matter of selecting a single local space in which to constrain a DOF, and of storing the global volume instead of the local volume as a member of the struct. What do you think about this?

amartinhuertas (Member) commented

> the last task of the list can be checked off once PR #74 is accepted.

Great, @oriolcg! Thanks for your work!

> support ZeroMeanFESpace

Sure. Task added.

> I guess that this is only a matter of selecting a single local space in which to constrain a DOF, and of storing the global volume instead of the local volume as a member of the struct. What do you think about this?

I indeed implemented this in GridapDistributed.jl v0.1.0. See https://github.com/gridap/GridapDistributed.jl/blob/v0.1.0/src/ZeroMeanDistributedFESpaces.jl and https://github.com/gridap/GridapDistributed.jl/blob/v0.1.0/test/ZeroMeanDistributedFESpacesTests.jl. I guess it should be a matter of re-writing this using the new code organization in v0.2.0.
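
A minimal sketch of the global-volume part of that idea, using MPI.jl directly (the local-volume helper is hypothetical; the actual v0.1.0 code linked above relies on the library's own communication layer):

```julia
using MPI
MPI.Init()
comm = MPI.COMM_WORLD

vol_local = owned_cells_volume(local_model)     # hypothetical helper
vol_global = MPI.Allreduce(vol_local, +, comm)  # same global volume on all ranks

# Constrain the mean-fixing DOF in a single local space only, e.g. on rank 0.
fix_dof_here = MPI.Comm_rank(comm) == 0
```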
