This project is a collection of examples for model calibration, together with a discussion of how to connect the calibration process to real data and to parameterized simulation problems. This also requires structuring the information that is written back into the database and defining metadata so that the information can be retrieved later.
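As a concrete (if hypothetical) starting point, a single calibration entry might carry metadata along the following lines. Every field name and value below is an illustrative assumption, not an agreed schema:

```python
# Hypothetical sketch of a calibration metadata record.
# All field names are illustrative assumptions, not a fixed schema.
calibration_record = {
    "model": {
        "type": "constitutive model for concrete",
        "n_model_parameters": 12,
        "n_hyperparameters": 5,
    },
    "procedure": {
        "method": "variational Bayes",       # e.g. deterministic optimization, sampling, Bayes
        "calibration_tool": "TensorFlow",
        "forward_model_tool": "FEniCS",
        "workflow": "https://example.org/workflows/42",  # link to the WORKFLOW definition
    },
    "experiments": [
        # one entry per experimental dataset used in the calibration
        {"dataset": "https://example.org/data/lab-a/007", "lab": "Lab A", "owner": "..."},
    ],
    "provenance": {
        "performed_by": "...",
        "institution": "KIT",
        "year": 2020,
    },
}
```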

Model calibration competency questions

To decide what metadata we have to store, we should formulate competency questions and think about what potential answers would look like.

  • Return all data with more than 10 model parameters and 5 hyperparameters, using a variational Bayesian approach solved with TensorFlow (calibration) and a FEniCS forward model, produced in 2020 at KIT (see the query sketch after this list).

  • Return all data in the area of constitutive models for concrete that is based on at least 20 different experimental datasets from different labs.

  • Point me to all specific data that was used in the calibration of the following model parameters.

  • How was the model set up (link to a specific WORKFLOW), including the procedure (deterministic optimization, sampling method, Bayes, etc.)?

  • How was the forward model chosen, and what guided the decision?

  • Who performed the model calibration, and what is the underlying experimental process (e.g., raw data, owner/agent, instances thereof)?
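One way to check whether the metadata sketched above is sufficient is to dry-run the competency questions against it. A minimal sketch for the first question, assuming a list of records shaped like `calibration_record`:

```python
# Sketch of answering the first competency question against a list of
# records shaped like calibration_record above (all names hypothetical).
def more_than_10_params_vb_tf_fenics_kit_2020(records):
    return [
        r for r in records
        if r["model"]["n_model_parameters"] > 10
        and r["model"]["n_hyperparameters"] == 5
        and r["procedure"]["method"] == "variational Bayes"
        and r["procedure"]["calibration_tool"] == "TensorFlow"
        and r["procedure"]["forward_model_tool"] == "FEniCS"
        and r["provenance"]["institution"] == "KIT"
        and r["provenance"]["year"] == 2020
    ]
```

In practice such a filter would become a query against the actual database (e.g. a SPARQL query if the metadata is modeled as an ontology), but even this dry run makes explicit which fields the metadata must carry to answer the question.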
