
Concurrent subdomain-wise assembly? #86

Open
prj- opened this issue Nov 24, 2022 · 7 comments

prj- commented Nov 24, 2022

Sorry for these very naive questions; I don't know the inner workings of Gridap, so they may be off the mark.

  1. When you assemble in parallel, do you use ghost elements, or do you perform partial assembly and then sum the contributions at the interfaces?
  2. I am guessing there is some underlying data decomposition of the initial mesh depending on the number of MPI processes. Is it possible to assemble a different variational formulation on just the "local" portion of the initial mesh?

prj- commented Nov 29, 2022

Should I ask this question somewhere else, @fverdugo? (Sorry to ping you out of the blue; you seem to be the latest contributor to this repo.)


prj- commented Dec 6, 2022

Is this interface still maintained, @principejavier @amartinhuertas?

amartinhuertas (Member) commented

Hi @prj-! Thanks for your interest in this repo. Apologies, I missed this issue. The interface is still maintained. For questions about the Gridap ecosystem, it is better to use the Gitter channel or the Slack workspace; we tend to be more responsive there.

I will answer your questions in a later post, but in the meantime you could take a look at the GridapDistributed repo and the associated JOSS paper, as these give you an overall picture of how distributed-memory computations are handled in the Gridap ecosystem.


prj- commented Dec 6, 2022

Thank you very much! Excellent, I see in the JOSS paper that you are using ghost cells. Is there an easy way to assemble a local matrix on the local cells plus the ghost cells? Kind of like what you do for BDDC, but with overlap. My end goal is to add an interface to PCHPDDM, and for that I need both a local Mat and an IS that maps the local numbering to the global numbering.

amartinhuertas (Member) commented

Is there an easy way to assemble a local matrix on the local cells + ghost cells?

Yes. The following pseudocode (I did not execute it, so it may contain typos) builds the local_matrices instance of
type XXXPData (see the PartitionedArrays.jl package for more details), with the local matrices being assembled on each
process. Note that I am using the sequential backend (instead of mpi) just so that the code can be debugged serially, e.g., with the Julia debugger built into the Julia extension for VSCode.

using Gridap
using GridapDistributed
using PartitionedArrays
partition = (2,2)
prun(sequential,partition) do parts
  domain = (0,1,0,1)
  mesh_partition = (4,4)
  model = CartesianDiscreteModel(parts,domain,mesh_partition)
  order = 2
  u((x,y)) = (x+y)^order
  f(x) = -Δ(u,x)
  reffe = ReferenceFE(lagrangian,Float64,order)
  V = TestFESpace(model,reffe,dirichlet_tags="boundary")
  U = TrialFESpace(u,V)
  # Map over the per-part local models and FE spaces so that each
  # process assembles its own local matrix (the argument order must
  # match: U.spaces -> U, V.spaces -> V).
  local_matrices = map_parts(model.models, U.spaces, V.spaces) do model, U, V
    Ω = Triangulation(model)
    dΩ = Measure(Ω,2*order)
    a(u,v) = ∫( ∇(v)⋅∇(u) )dΩ
    l(v) = ∫( v*f )dΩ
    op = AffineFEOperator(a,l,U,V)
    op.op.matrix # return the locally assembled sparse matrix
  end
end
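
For reference, a quick sanity check (a sketch, to be placed inside the prun do-block above, right before its closing end; it assumes the PartitionedArrays v0.2-era API used in the snippet) is to visit each part and print the local matrix size, which should be square and sized by the number of local free DoFs, i.e., owned plus ghost:

map_parts(local_matrices) do A
  @show size(A) # square, with rows/columns for owned + ghost free DoFs
end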

I need both a local Mat and an IS which maps the local numbering to the global numbering.

Regarding the local-to-global numbering mapping of DoFs, the gids member variable of V (of type DistributedSingleFieldFESpace) should provide you with the information required. This is a variable of type PRange (see the PartitionedArrays.jl package and fverdugo/PartitionedArrays.jl#63 for more details).
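
For instance, a minimal sketch of how to extract that mapping per part (untested; it assumes the PartitionedArrays v0.2-era layout in which each part of the PRange partition carries a lid_to_gid vector, so the field names may differ in other versions):

# Inside the prun do-block above: V.gids is a PRange whose partition
# stores one index set per part (assumed field name: lid_to_gid).
local_to_global = map_parts(V.gids.partition) do indices
  indices.lid_to_gid # maps local DoF ids to global DoF ids
end

A PETSc IS for PCHPDDM could then be built from each part's local_to_global vector.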


prj- commented Dec 7, 2022

Thanks for the detailed explanation. I propose leaving the issue open until I have sorted everything out, but if you prefer to close it (and then maybe I'll open a new one), that's OK.

amartinhuertas (Member) commented

OK. We can leave the issue open and discuss everything that is needed here. Note that GitHub also offers a Discussions tab; for future discussions (if any), we can use that tool as well.
