Notable Differences Between eWoms 2.2 and eWoms 2.3
===================================================
- The namespace for all eWoms classes has been changed from "Dumux" to
  "Ewoms". This makes it easy to provide distribution packages for
  both Dumux and eWoms that do not clash, allows Dumux and eWoms
  classes to be used within the same piece of code, and helps to
  avoid confusion about the relation between Dumux and eWoms. To
  adapt your code to the change, you may follow the procedure below:
- MAKE A BACKUP OF YOUR PROJECT. Although the following steps should
usually work without manual intervention, there might still be
    unforeseen problems!
- Go to the root directory of your project:
cd $YOUR_PROJECT_ROOT
  - Rename the 'dumux' directory (if present) to 'ewoms'. If you use
git for revision control of your project, this should be done
using
git mv dumux ewoms
- Run the following to adapt the contents of the files of your
project:
# adapt c++ sources
find -name "*.hh" -o -name "*.cc" | \
xargs sed -i "
s/Dumux::/Ewoms::/g;
s/namespace Dumux/namespace Ewoms/;
s/\(include.*\)dumux\(.*\)/\1ewoms\2/;
        s/DUMUX/EWOMS/g"
# adapt the build system
find -name "Makefile.am" -o -name "configure.ac" | \
xargs sed -i "s/dumux/ewoms/g"
- Run dunecontrol for your project and re-compile.
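    For example, assuming your application module is checked out next
    to dune-common and is named 'yourmodule' (both are placeholders;
    adjust to your setup):
      ./dune-common/bin/dunecontrol --only=yourmodule all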
- The implicit models have been reorganized:
- Discretizations now reside in the ewoms/disc directory
- The vertex-centered finite volume discretization has been renamed
from "box" to "vcfv" to make its name more descriptive and more
consistent with further discretizations. The names of all
    properties, classes and files which were prefixed by 'Box' are
now prefixed by 'Vcfv'.
- Control files for creating Debian/Ubuntu packages have been
added. To build these packages, just run 'debuild -us -uc' in the
top-level directory of the source tree. (This only works for Debian
or Ubuntu based distributions, obviously.)
- The autotools-based build system is now non-recursive. This makes
  parallel builds (i.e. 'make -j$NUM_CORES check') more efficient. New
  tests need to be added to the top-level Makefile.am instead of the
  Makefile.am of the directory in which the test resides.
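  For instance, registering a new test could look roughly like this
  (a sketch using standard automake idioms; the target name, the file
  path and the exact variables used by the eWoms Makefile.am are
  placeholders):
    check_PROGRAMS += test_mynewmodel
    test_mynewmodel_SOURCES = tests/test_mynewmodel.cc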
- The decoupled models have been parallelized. Dynamic load balancing
  is not yet supported.
- The concept of Newton controllers has been abandoned. The
  functionality previously implemented by them has been folded into
  the NewtonMethod classes, which now use static polymorphism. The
  reason for this change is that it makes the Newton solver use the
  same design as the rest of eWoms, and that the original reason for
  splitting the policy from the algorithmic part of the Newton method
  no longer applied. (That reason was to be able to implement
  different policies with zero overhead, but all implemented Newton
  controllers were just derived from NewtonController.) Also, the
  following properties have been renamed during this exercise:
- NewtonRelTolerance -> NewtonRelativeTolerance
- NewtonAbsTolerance -> NewtonAbsoluteTolerance
- NewtonMaxSteps -> NewtonMaxIterations
- NewtonTargetSteps -> NewtonTargetIterations
- NewtonMaxRelError -> NewtonMaxRelativeError
- NewtonSatisfyAbsAndRel -> NewtonSatisfyAbsoluteAndRelative
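  For example, the renamed relative tolerance can be set in the
  problem file along these lines (a sketch; SET_SCALAR_PROP is the
  property macro used elsewhere in eWoms, and the value is arbitrary):
    SET_SCALAR_PROP(YourProblemTypeTag, NewtonRelativeTolerance, 1e-8);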
- A fully-implicit discrete fracture model for two-phase problems has
been added.
- The header file containing the eWoms parameter system has been
renamed from "parameters.hh" to "parametersystem.hh" to make it
consistent with the naming scheme of the property system.
Notable Differences Between DuMuX 2.1 and eWoms 2.2
===================================================
- The minimum required version of the Dune core modules is now
2.2. This means that eWoms will no longer compile with Dune 2.1.
- eWoms now compiles with the LLVM/Clang compiler. The minimum
  required version is Clang 3.1.
- Support for compilers that don't support the 'auto' keyword and
  initializer lists has been dropped. As a consequence, eWoms will not
compile with GCC prior to version 4.4.
- The abstractions between problems, flow models, discretization
schemes and solvers have been overhauled for the fully-implicit models:
- Problems only specify a given set-up. This means that problems
usually do not manipulate primary variables or fluxes directly.
- All "spatial parameters" classes have been merged with their
    respective problems. This was done because most methods of the
problems already specified spatially dependent parameters.
  - The API of the problems is no longer specific to any flow model or
    spatial or temporal discretization. This is achieved by passing
    so-called "context objects" plus a space and a time index to all
    of the problem's methods which potentially depend on space or
    time. All context objects provide a generic set of methods
    (e.g. pos() and globalSpaceIdx()) as well as some
    discretization-specific ones. The problems and the physical
    models are not supposed to know what a given space and time index
    mean.
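    As an illustration, a problem method might now look roughly like
    this (a sketch; the exact signature may differ):
      template <class Context>
      Scalar temperature(const Context &context, int spaceIdx, int timeIdx) const
      {
          // the position could be retrieved via
          // context.pos(spaceIdx, timeIdx) if the set-up required it
          return 283.15; // spatially constant temperature [K]
      }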
- The boundary conditions for fully-implicit models are now weakly
imposed. For this reason, the problem's methods boundaryTypes(),
dirichlet(), and neumann() are superseded by the method
boundary(). This method gets an object of the new BoundaryRateVector
class, which can be used to specify fluxes for a given boundary
  segment. It also provides methods called setInFlow(), setOutFlow()
  and setFreeFlow() which specify in-flow, out-flow and free-flow
  boundary conditions given a fluid state for the boundary segment.
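  A boundary() method might thus look roughly as follows (a sketch;
  onInlet_() and inletFluidState_ are hypothetical helpers, and the
  exact signatures may differ):
    template <class Context>
    void boundary(BoundaryRateVector &values, const Context &context,
                  int spaceIdx, int timeIdx) const
    {
        if (onInlet_(context.pos(spaceIdx, timeIdx)))
            // weakly impose free flow using a given fluid state
            values.setFreeFlow(context, spaceIdx, timeIdx, inletFluidState_);
        else
            values.setNoFlow(); // assumed no-flow convenience method
    }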
- As a consequence of the previous item, Dirichlet boundary conditions
  are no longer implemented as constraint degrees of freedom. If the
  problem requires constraints, the "EnableConstraints" property
  needs to be set to "true" and the problem needs to overload the
  constraints() method. In contrast to the previous implementation,
  eWoms now also supports constraint degrees of freedom in the
  interior of the domain. ("Dirichlet" conditions were previously
  only possible at finite volumes adjacent to the grid boundary.)
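  A sketch of what this could look like in the problem file (the
  constraints() signature shown is an assumption):
    SET_BOOL_PROP(YourProblemTypeTag, EnableConstraints, true);

    template <class Context>
    void constraints(Constraints &constraints, const Context &context,
                     int spaceIdx, int timeIdx) const
    { /* fix the primary variables of selected degrees of freedom */ }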
- The parameter system has received a major overhaul: "Runtime"
  parameters have been abandoned, as have parameter groups. Runtime
  parameters have bitten the dust because of their flawed semantics
  (all parameters are always specified at run-time); groups were
  removed because they caused confusion when specifying parameters on
  the command line (e.g. the correct way to specify the relative
  tolerance of the Newton solver was '--newton.rel-tolerance' instead
  of '--newton-rel-tolerance'). Also, all parameters must now be
  registered using the REGISTER_PARAM macro before they are
  accessed. This makes it possible to print a canonical list of all
  parameters and their descriptions when the program is run with
  "--help", and also to print a comprehensive list with the values of
  all parameters at the beginning of a simulation.
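  Registration could look roughly like this (a sketch; the argument
  order of the macro is an assumption):
    REGISTER_PARAM(TypeTag, Scalar, NewtonRelativeTolerance,
                   "The relative tolerance of the Newton method");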
- The fully-implicit compositional models 1p2c, 2p2c(ni) and 3p3c(ni)
  are superseded by the 'pvs' model, a generic model based on primary
  variable switching. The model supports an arbitrary number of fluid
  phases and chemical components, but it still requires all fluid
  phases to be ideal mixtures.
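  A problem using the 'pvs' model derives its type tag from the
  model's one, e.g. (a sketch using the property-system macros and
  the BoxPvs type tag mentioned below):
    NEW_TYPE_TAG(YourProblemTypeTag, INHERITS_FROM(BoxPvs));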
- The fully-implicit multi-phase models which assume immiscibility --
  i.e. 1p and 2p -- are superseded by the 'immiscible' model, which
  supports an arbitrary number of fluid phases.
- A fully-implicit black-oil model and the corresponding fluid system
  have been added. The black-oil model is a three-phase,
  three-component, partially miscible model that is widely used in
  petroleum engineering. The black-oil fluid system also allows
  full-fledged compositional models (MpNc, 3p3c) to be used with
  black-oil parameters.
- The fully-implicit non-isothermal models (2pni, 2p2cni, 3p3cni,
stokes2cni) have been replaced by a generic implementation of the
energy equation. As a consequence, you have to derive your problem's
  type tag from the one of the isothermal model (i.e. BoxNcp, BoxPvs,
  BoxImmiscible or BoxStokes) and set the property 'EnableEnergy' to
true. The latter can be done by adding the following line to the
problem file:
SET_BOOL_PROP(YourProblemTypeTag, EnableEnergy, true);
- Many bugs have been shaken out of the MPI parallelization code. As
  a result, the fully-implicit models should work much better for
  parallel computations, and the parallelization is no longer
  considered experimental.
- Support for using pluggable convergence criteria has been added to
  the linear solvers. The eWoms default criterion now considers the
  reduction of the weighted maximum defect of the linear system of
  equations instead of the reduction of the two-norm of the
defect. This leads to major performance and stability improvements
in many cases.
- A back-end for the parallel algebraic multi-grid linear solver
provided by DUNE-ISTL has been added for the fully implicit finite
volume models. It is still considered to be slightly experimental
and it can be used by specifying the property
SET_TYPE_PROP(ProblemTypeTag, LinearSolver, Dumux::BoxParallelAmgSolver<TypeTag>);
in the problem file.
- All fully-implicit models now use the generic M-phase material laws
  which (in principle) allow capillary pressure and relative
  permeability relations to depend on absolute pressure, temperature
  and phase composition. Also, these material laws do not require the
  flow model to make assumptions about the wettability of the fluids.
- Heat conduction laws have been introduced. They work analogously to
  the capillary pressure/relative permeability laws.
- The primary variables are not "dumb" vectors anymore, but can also
store "pseudo primary variables" like the phase state.
- 'Rate vectors' have been introduced. They make it possible to
  specify fluxes and source terms independently of the underlying
  flow model and also support specifying volumetric rates.
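  For instance, a volumetric source term might be specified along
  these lines (a sketch; the method name, its arguments and the
  values are assumptions):
    RateVector rate(0.0);
    rate.setVolumetricRate(fluidState, /*phaseIdx=*/0, /*volume=*/1e-3);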
- Writing VTK output files has been centralized. This simplifies the
  flow models and allows much finer-grained control over which
  quantities get written to disk. Writing a quantity can be enabled
  by passing e.g. the "--vtk-write-temperature=1" parameter to the
  executable.
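  For example (the executable name is a placeholder):
    ./yourapplication --vtk-write-temperature=1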
- Flow models now specify names for the conservation equations and
primary variables which they use. This significantly improves the
comprehensibility of the VTK output when the convergence behaviour of
the Newton method is written to disk.
- Most physical models now only support a single formulation
(i.e. choice of primary variables and conservation equations). This
makes them more readable and improves test coverage. This change was
  motivated by the fact that, since problems can easily be specified
  independently of the flow model, supporting multiple formulations no
  longer makes much sense.
- An h-adaptive semi-implicit model using an MPFA L-method was
added. It simulates two-phase and two-phase, two-component flow on
unstructured grids in two dimensions. See test/decoupled/2p(2c) for
an example.
Notable Differences Between DuMuX 2.0 and DuMuX 2.1
===================================================
- The thermodynamics framework has been overhauled:
  - The programming interfaces for fluid systems, fluid states and
    components have been formalized and cleaned up.
- Fluid systems now have the option to cache computationally
expensive parameters if they are needed for several relations.
  - Fluid systems are no longer in charge of computing the chemical
    equilibrium.
- Fluid states are now centralized infrastructure instead of being
model-specific.
- Constraint solvers (which simplify solving thermodynamic
constraints) have been introduced.
- Outflow boundary conditions have been implemented for the
fully-implicit models 1p2c, 2p2c(ni) and stokes(2cni).
- The problem and spatial parameter base classes also provide optional
model-independent interfaces. These methods only get the position in
global coordinates as argument and are named *AtPos()
(e.g. boundaryTypesAtPos()). This allows an easy transfer of problem
definitions between implicit and sequential models.
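  For instance, instead of a model-specific boundaryTypes() method a
  problem may implement something along these lines (a sketch
  following the naming scheme above; eps_ is a hypothetical
  tolerance):
    void boundaryTypesAtPos(BoundaryTypes &values,
                            const GlobalPosition &globalPos) const
    {
        if (globalPos[0] < eps_)
            values.setAllDirichlet();
        else
            values.setAllNeumann();
    }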
- The following fully-implicit models have been added:
- 3p3c, 3p3cni: Isothermal and non-isothermal three-phase,
three-component models for flow and transport in porous media
based on primary variable switching.
  - MpNc: A model for an arbitrary number of phases M > 0 and
    components N >= M - 1 >= 1 for flow and transport in porous
    media. This model also comes with an energy and a molecular
    diffusion module.
- stokes, stokes2c, stokes2cni: Models for the plain Stokes
equation as well as isothermal and non-isothermal Stokes models
for two-component fluids.
- The sequentially-coupled models have been overhauled:
- A common structure for cell centered standard finite volume
implementations has been introduced.
  - The data structures were overhauled to avoid large clumps of data
in large-scale simulations: Each cell stores data in its own
storage object.
  - The overly large assemble() methods have been split into
    sub-methods getStorage(), getFlux(), etc. This improves class
    inheritance and reduces code duplication.
  - The conceptual separation of the "VariableClass" (central
infrastructure), data storage, transport model and pressure model
has been improved.
  - More infrastructure is now shared with the implicit models
(e.g. the BoundaryTypes). This results in significant performance
improvements and makes debugging easier.
- The 2padaptive sequentially coupled model has been added. This model
implements a grid-adaptive finite volume scheme for immiscible
two-phase flow in porous media on non-conforming quadrilateral
grids.
- The dependencies for the external dune-pdelab and boost packages
have been removed.
- The build system has received major improvements:
- There is now much better test coverage of build-time dependencies
on packages for the default autotools-based build system.
- Experimental support for building DuMuX using CMake has been much
improved. In the long run, CMake is projected to become the
default build system.
- All headers can now be included without any preconditions.
- DuMuX now compiles without warnings if the -pedantic flag is used
  with GCC.
- Specifying run-time parameters is now possible. The mechanism
  allows parameters to be read from parameter files or to be given
  directly on the command line, and fallback parameter input files
  have been added for each test application. As a consequence,
  applications can now be run without specifying any command line
  arguments.
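  An application might thus be started like this (a sketch; the exact
  spelling of the parameter-file flag may differ):
    ./test_application -ParameterFile test_application.input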
- The DuMuX property system has been fine-tuned:
  - Encapsulating property names with the PTAG() macro is no longer
    required for the GET_PROP* macros (but is still allowed).
  - Setting property defaults has been deprecated.
  - All properties defined for a type tag can now be printed. Their
    values and the locations in the source where they were specified
    are included in the output.
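    The printing could be triggered, e.g., via (a sketch; the exact
    function is an assumption):
      Dumux::Properties::print<TypeTag>();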
- Using quadruple precision math has been made possible for GCC 4.6 or newer:
- To use it, the configure option '--enable-quad' needs to be added
and the type of scalar values needs to be changed to 'quad'. This
can be done in the problem file using
SET_TYPE_PROP(YourProblemTypeTag, Scalar, quad);
    It should be noted that performance is very poor when using
    quadruple precision arithmetic; this feature is primarily meant
    as a debugging tool to quickly check whether there are machine
    precision related convergence problems.
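    The '--enable-quad' switch can, for example, be passed via the
    CONFIGURE_FLAGS variable of a dunecontrol options file:
      CONFIGURE_FLAGS="--enable-quad"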