Wednesday, November 6, 2019

ER 2019 - Keynote - Next Generation Modeling Environments - By Barbara Weber

*My comments are marked with an asterisk

In order to make conceptual modeling easier and more efficient, we need to understand who is behind the model. Is it a novice or an expert modeler?

In her work, she applied the Cheetah Experimental Platform, which uses eye tracking and log mining. They also experiment with think-aloud protocols to capture the reasoning behind the modeler's interactions with the model.

She also uses quantitative techniques to understand how patterns are used in models.

----

Context detection:

  • Based on objective measures instead of self-assessment of expertise
  • Unobtrusive towards the modeler
  • Applicable to online settings (works on intermediate models, not necessarily complete ones, and the features can be computed efficiently)
  • Independent of a specific modeling tool
----

Automatic detection of the modeling phase
  • Again, she took inspiration from Software Engineering, building on previously existing work
  • A comparison of code comprehension, code review, and prose review in terms of brain activation, using fMRI.
Analysis of the fMRI data showed largely distinct neural representations for the different activities. This suggests a need for phase-specific support.

Regarding BPM:
- Flexible processes
  • Repeated execution of different phases, iteratively
  • From that, she aimed at automatically inferring the BP modeling process
They mapped this to the preexisting results of the SE experiments, linking each phase to the expected fMRI image.
Example phases: Problem Understanding, Method Finding, Modeling, Reconciliation.
  • Detection of the modeling and reconciliation phases by looking at model interactions
  • Detection of the other phases by following the eye-tracking movements (see the sketch below)
*In the talk, she explained step by step how this experiment happened. And it is very interesting! Worth taking a look at the slides!
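
*To make the detection idea concrete, here is a minimal Python sketch of how a time window of a modeling session could be mapped to one of the four phases. The data format and the heuristics are entirely hypothetical (this is not the Cheetah platform's API, and the real work uses trained models rather than hand-written rules):

    from dataclasses import dataclass

    @dataclass
    class Window:
        create_edits: int      # elements added to the model in this window
        layout_edits: int      # move/rename/reroute operations on existing elements
        gaze_on_task: float    # fraction of fixations on the task description
        gaze_on_canvas: float  # fraction of fixations on the modeling canvas

    def classify_phase(w: Window) -> str:
        # Model interaction separates modeling from reconciliation...
        if w.create_edits > 0 and w.create_edits >= w.layout_edits:
            return "Modeling"
        if w.layout_edits > 0:
            return "Reconciliation"
        # ...while gaze behavior separates the phases that have no interaction.
        if w.gaze_on_task > w.gaze_on_canvas:
            return "Problem Understanding"
        return "Method Finding"

    print(classify_phase(Window(3, 1, 0.2, 0.8)))  # -> Modeling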

The accuracy of the phase detection was around 80%, which is very encouraging.

Of course, this is not something you can directly reapply in another setting, but you can draw inspiration from it. And it is already very interesting to use such results to help students learn conceptual modeling more effectively.

She highlights that a lot of conceptual modeling data is already available, which gives us an opportunity to better understand conceptual modeling activities, among other things.

----

Towards neuro-adaptive modeling environments

She said that this part is a bit deeper and more complicated.

This can give rise to neuro-adaptive systems that adjust themselves to the user's current mental state.
The idea is not to substitute the human modeler, but rather to support her in a better, more personalized, and more adaptive way.

She used cognitive load as an example of a mental state, but handling other mental states is analogous.

There is a relationship between cognitive load and poor decisions.
Physiological and behavioral measures correlate with specific mental states.
E.g. 
  • heart rate variability
  • eye tracking
  • EEG
  • Galvanic Skin Response
The trend is toward machine-learning-based algorithms, with increasingly multi-modal approaches.
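
*As a rough illustration of that trend, the sketch below fuses features from the measures listed above into one standard classifier. Everything here is synthetic and assumed (the feature choice, the labels, the data); it only shows the shape of a multi-modal pipeline, not Weber's actual setup:

    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler

    rng = np.random.default_rng(0)
    n = 200
    # One row per time window: [HRV (e.g., RMSSD), mean pupil diameter,
    # EEG theta/alpha power ratio, GSR peak count] -- all synthetic here.
    X = rng.normal(size=(n, 4))
    # Synthetic "high cognitive load" labels, just to make the sketch runnable.
    y = (0.8 * X[:, 2] + 0.5 * X[:, 3] - 0.4 * X[:, 0]
         + rng.normal(scale=0.5, size=n)) > 0

    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
    model = make_pipeline(StandardScaler(), LogisticRegression())
    model.fit(X_train, y_train)
    print(f"held-out accuracy: {model.score(X_test, y_test):.2f}")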

*She presented a Neuro-adaptive platform - very comprehensive and insightful!

So far, there are systems that can detect a mental state and tell the user "you are stressed, please relax". But next-generation modeling support systems must do more than that, and to do so, they have to understand much more about the modeler's context.
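
*A hypothetical sketch of what "more than that" could look like: the adaptation combines the detected mental state with the modeling context instead of issuing a generic warning. All names and thresholds below are invented for illustration:

    def adapt_environment(cognitive_load: float, phase: str) -> str:
        """Map (mental state, modeling context) to a context-aware adaptation."""
        if cognitive_load > 0.7 and phase == "Modeling":
            return "collapse completed sub-processes to reduce visual clutter"
        if cognitive_load > 0.7 and phase == "Problem Understanding":
            return "highlight the task fragment the modeler is currently reading"
        if cognitive_load > 0.7:
            return "suggest a short break"
        return "no adaptation needed"

    # A real system would run this periodically, fed by a load estimator and a
    # phase detector; here we just call it once with example values.
    print(adapt_environment(cognitive_load=0.9, phase="Modeling"))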

She gave some examples in her slides: for instance, a test-driven modeling suite based on hybrid artifacts combining declarative process models with test cases. Another example is DCR-HR.

----

Q&A

You have to take care of at least three aspects:
- the person and how expert she is
- the task 
- the artifact
Regarding model size, it is true that complex processes lead to big models, but there are also cases in which people develop big models by mistake, and perhaps the tool can help there.

Regarding sub-processes, she has a PhD student who works on hierarchical models and moves away from the general assumption that hierarchies are always good; hierarchies may also lead to problems in the model.

She mentioned that the test-driven modeling suite has been embedded in a commercial tool.


