Wednesday, September 24, 2014

My talk at FOIS 2014

An Ontological Interpretation of Non-Functional Requirements
Renata Guizzardi, Feng-Lin Li, Alex Borgida, Giancarlo Guizzardi, Jennifer Horkoff and John Mylopoulos

My talk was in Day 1, session 2. I really enjoyed the feedback I got! What was most rewarding was to realize that people with different backgrounds showed interest and understood what I was talking about. In conferences like FOIS, where diversity is in its soul, my main worry was communicating the work well to this broad audience. For that, something really important is to make a good introduction, presenting some definitions (like Requirements Engineering, for instance) and illustrating the problems that ambiguous definitions of NFRs create in real life, thus motivating our work of ontological clarification. Following Giancarlo’s suggestion, I also made sure to state that our focus was on applied ontology, i.e. on using theoretical work with the aim of producing practical implications in RE, rather than proposing a new theory.

Another challenge in preparing the presentation was the fact that we have a dense paper. There were many things to talk about, and selecting the “must haves” was not easy. I ended up leaving out important things, like the distinction between NFR and softgoal, and the formal syntax to specify requirements. I decided to focus on the definition of NFRs as quality goals and on the explanation of vague NFRs, quality constraints and gradable NFRs. Thus, I talked about the important operations of NFR refinement and operationalization. I also discussed and exemplified the technique we use to analyze the satisfaction of gradable NFRs. Before concluding the talk, following another piece of advice from Giancarlo, I also mentioned the empirical work and our related paper, so that it was clear that some evaluation (at least preliminary) of the use of the proposed ontological concepts has been done.

Oscar Pastor was the chair of the session and he conducted the discussion very well. Indeed, I was happy to see that there were a lot of people willing to ask questions, all of them very interesting. Here are some of these questions:

1) Nicola Guarino pointed out that he agrees with our view of NFRs as requirements associated with qualities. However, he said (and it is true!) that I talked very little about the bearers of these qualities, and he asked me what they were.

Response: I took advantage of the fact that Xiaowei had presented before me in the session to mention that these qualities could inhere in all the elements he had explained, i.e. functionalities of the system-to-be, parts of the system-to-be, executions of the system-to-be etc. I would also like to remind people that the notion of information system is broad, so requirements (both functional and non-functional) may also be process requirements, ontology requirements etc., i.e. not necessarily about an automated system. But I understood from Nicola’s comment that in future publications we must dedicate a section to discussing the bearers, and I even think that from the experiment we have examples to illustrate all kinds of bearers, so it will be a very interesting discussion.

2) Nicholas Asher asked why I first made the requirement crisp and then made it gradable. He also asked questions regarding the process of requirements elicitation.

Response: Nicholas gave me a great opportunity to mention some problems in RE practice. Indeed, I had to tell him that in reality many NFRs remain vague until the end of the process and unfortunately are never dealt with, being the cause of many problems. It is also a pity that people classify everything that is vague as an NFR, without deeper reflection on whether that requirement refers to function or quality. He thought the problem was very interesting! In a sense, I think his question also brings us other things to think about regarding how to talk about gradability. In the paper, we made this choice: first refining, then making the requirement gradable. But in reality, a stakeholder may also say something like “the search result should be returned in around 30s”, which is already gradable, without ever being crisp. Indeed, that is probably what happens most of the time. So I guess for a journal paper we can invert things and explain it like this. One thing that I told Nicholas in the coffee break, and that should also be said more explicitly, is that there are requirements which by their nature are crisp, while others are gradable or would at least profit from our decision of making the crisp border in the quality region thicker (i.e. gradable). As time is limited during the presentation, it is not possible to clarify all these things, but I think this point is clear in the paper!

3) Kit Fine: Kit commented that what we are proposing for gradability may be solved with a kind of supervaluation, which is his recent work. He also mentioned a terminological problem with the term “vague”.

Response: I think (and Giancarlo is even more convinced) that this is something we should look at. Regarding terminology, this is indeed something else to take into account because, since the morning, during Kit’s keynote speech, it was clear that for philosophers (and also linguists, as Asher agrees with him on this) vagueness is more connected to the gradability we are proposing than to what we call vague NFR. For vague NFR, they suggested we substitute the term underspecified NFR.

4) Carlos Azevedo (NEMO student): Given the NFR stated as “the search result should be returned in 30s”, is there also a way, using the gradable NFR satisfaction calculation technique you propose, to come up with the result that 5s is even better than 30s? I ask that because, although as requirements engineers we establish a maximal desired limit for this kind of requirement, if the result is better than this limit, this should be acknowledged.

Response: I’d have to check, but I guess not, because both 30s and 5s would be within the gradable region and the gradable membership function would return 1 for both. But that would be desirable indeed! We must think about it.
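To make Carlos’s point concrete, here is a minimal sketch of how a gradable membership function for this NFR might look. The trapezoidal shape, the function name and the 10s tolerance are my own illustrative assumptions, not the paper’s actual formulation:

```python
# Illustrative sketch (not the paper's formulation): a trapezoidal membership
# function for the gradable NFR "search results returned in ~30s".
# Values up to the crisp limit (30s) are fully satisfactory (degree 1.0);
# beyond it, satisfaction degrades linearly until a tolerance bound is hit.

def membership(response_time: float, limit: float = 30.0, tolerance: float = 10.0) -> float:
    """Degree of satisfaction in [0, 1] for a response-time NFR."""
    if response_time <= limit:
        return 1.0                      # inside the desired quality region
    if response_time >= limit + tolerance:
        return 0.0                      # beyond the "thickened" border
    # linear decay across the gradable border of the quality region
    return 1.0 - (response_time - limit) / tolerance

# Both 5s and 30s get degree 1.0 -- the function cannot say 5s is "better",
# which is exactly the limitation Carlos pointed out.
print(membership(5.0))    # 1.0
print(membership(30.0))   # 1.0
print(membership(35.0))   # 0.5
```

Acknowledging that 5s is better than 30s would require a function that keeps increasing below the limit instead of plateauing at 1.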

5) Mara Abel: Isn’t this technique of gradable NFR satisfaction calculation too expensive, computationally speaking? What justifies it, given that you could simply use the Euclidean distance measure to do this?

Response: Giancarlo asked to respond to this one and highlighted the fact that we considered fuzzy logic, but we think our proposal is better because it explains where the numbers used come from, while in fuzzy logic they are usually arbitrary. He also highlighted the fact that, in general, stakeholders are aware of the prototypes, and this makes the technique we are proposing a natural candidate to solve the problem.



 

Tuesday, September 23, 2014

FOIS 2014 - Short Paper Session - Chair: João Paulo



1) Emilio Sanfilippo
Events and Activities: Is there an Ontology behind BPMN?
Emilio Sanfilippo, Stefano Borgo and Claudio Masolo

Unfortunately, I missed most of this presentation. But it seems to have very interesting practical value. Basically, it builds on the concept of perdurant. I have a question, which I will ask Emilio, about the differentiation between the process execution and the process model, which in UFO we accomplish by including the concept of Description as a Social Object.

2) David Aveiro
An Ontology for the τ-theory of Enterprise Engineering
Jan Dietz, David Aveiro, João Pombinho and Jan Hoogervorst

Interesting discussion on function (affordances) vs. construction (looking at things as they really are).
He proposes to objectify value because this is a central concept and, for enterprises, value has properties.
Value and experience.
Value is seen as a relation between function and capacity.
It is important not to provide only one solution, but to analyze alternatives.
Conclusions:
affordances, purpose and value bridge the ontological and teleological perspectives, bringing a high degree of objectification to the modeling approaches;
separation of concerns of affordance;
targets the so-often-neglected “why” dimension of the Zachman framework.

My comment in the session: opportunity for collaboration! We are both (along with JP) targeting research on the “why” dimension of EA.

Here is a figure of the ontology:

 

FOIS 2014 - Day 1 - session 3


1) Michael Gruninger
Mathematical Foundations for Participation Ontologies (competition paper)
Carmen Chui and Michael Gruninger

Representing activity occurrence using geometry: occurrences as lines, time points as points and objects as planes.

They looked at several ontologies of participation, including TSL, Gangemi's ontology and DOLCE. They could verify them using this method, prove they are isomorphic up to a point, and show that one is more expressive than the others for specific things.

It is a highly formal method based on axiom verification, also strongly grounded in mathematics. This may introduce bias (according to a questioner).

In my view, the work seems interesting, and Gruninger made a very consistent and lively presentation (also with demonstrations) that highlighted the strengths of this approach and proved some of his points.

Worth investigating further!

2) Nicolas Troquard

A formal theory for conceptualizing artefacts and tool manipulations
Nicolas Troquard

An approach heavily based on logic -- formalization is its essence.
It sounds interesting at first sight, but I disagree with some underlying assumptions of the work. For instance, seeing an artifact as an agent (????). He justified that by saying that the artifact is "doing" something for someone and "has a purpose". Come on, we are at an ontology conference, there are better ways to account for those things!!!

Also, he said something like "I like to think of artifacts as agents". But hey, ontologically, what are artifacts? Either they are agents or not... independently of how we like to think about them. According to UFO, they are social objects, and that distinguishes them from intentional substances, who act, have intentions (internal commitments) and perceive events. In summary, for me artifacts are very far from being agents.

Then, he also presented another confusing concept of artifact as an anti-rigid concept, saying that all artifacts have been non-artifacts in the past. All? Why?

FOIS 2014 - Day 1 - Session 2


1) Fabiano Ruy
An Ontological Analysis of the ISO/IEC 24744 Metamodel
Fabiano B. Ruy, Ricardo A. Falbo, Monalessa P. Barcellos and Giancarlo Guizzardi
 
Ontological analysis of an ISO standard (24744) – the analysis focused on one part of the standard, SEMDM.

There are other parts, AFOS (Foundational Ontology for Standards) among them, as a future initiative. Our view at NEMO is that the grounding should be in the project from the beginning, not a future perspective.

He presented UFO (A, B and C), focusing on the concepts needed to explain the foundation he made.

He showed the existing UML model of what is called the Endeavor level (the paper also brings the analysis of another level). Then, he moved deeper into the concepts of the Endeavor level to find issues and recommend the reengineering of the model, based on UFO-supported ontological analysis.

He proposed a number of recommendations, very useful ones! For example, before, Producer grouped agents and objects under the same concept, and now this is fixed; subtypes were regrouped, forming different partitions, etc.

Reading the paper is a great idea!

2) Xiaowei Wang
Towards an Ontology of Software: a Requirements Engineering Perspective
Xiaowei Wang, Nicola Guarino, Giancarlo Guizzardi and John Mylopoulos


Motivation: using ontological clarification to solve problems arising, for example, as software changes. Discussing identity, we can find criteria to determine: is the software the same one after some changes, or is it a new one?
Related work: Oberle (2006) – differentiates code, execution and copy; Irmak (2013) – software as artifact.
He discussed the underlying theory of Zave and Jackson. Then he showed a view of the model, having on the left-hand side a hierarchy of artifacts (such as software system, software program, software product etc.), linked to the “how to do” label; on the right-hand side, concepts related to purpose, such as specification, execution etc., linked to “what to do”. Then, he detailed the ontological concepts from this figure.
Software as a bridge between abstract and concrete. Interesting metaphor!
Interesting question on the separation between the program and the intent. According to the questioner (and I agree), correct syntax in a program does not happen by chance. Xiaowei referred to the terminology ambiguities to justify the work. Nicola added that even when the code is intentionally crafted, the program may remain the same when the code changes. The questioner agreed that there should be a distinction among these concepts, but he disagreed that the code is not somehow connected to intent.
FOIS 2014 - Notes on the Technical Sessions - Day 1 

Michael Gruninger

A Sideways Look at Upper Ontologies
Michael Gruninger, Torsten Hahmann, Megan Katsumi and Carmen Chui

A method based on axiomatization and proof to compare and connect different upper ontologies.

1) How can we understand which ontological commitments an upper ontology makes?

This is done by decomposing the upper ontology into a set of generic ontologies which are its modules, e.g. an ontology of betweenness, a timepoint ontology.

He claimed they proved (not shown here) that DOLCE is reducible to a set of more traditional mathematical ontologies.

There is a repository in his lab that puts together these different ontologies. He showed some excerpts represented as graphs, for instance, different perspectives on betweenness and timepoint ontologies.

Ontological commitments vs. Ontological choices
*ask for his slides (very interesting distinction!)

Every generic ontology has a set of commitments and a set of choices (the choices are materialized by axioms that constrain the ontology).

2) How can an upper ontology be partially reused?

He presents a model (kind of architectural model), which he calls sideways view, providing a kind of method of how this can be done. But for me, it was too fast to understand.

3) How can we partially integrate an upper ontology that agrees with only...

Identifying relationships between two distinct ontological repositories.

When there are no mappings between two upper ontologies, he uses a reference model based on mathematical ontologies.

4) How can we integrate multiple extensions of an upper ontology when they are mutually inconsistent with each other?

Generalized differences capture the ontological commitments and choices that are made by one ontology but not by the other. It is a set of entailments that are missing in one of the ontologies.

This is important to understand the limits within which we can share.



Must the ontologies be heavily formalized? Just enough, he answered. The more axioms the better, but you can already reach some mapping with incomplete formalization, because usually there needs to be some room for flexibility (i.e. the ontology should allow different models).

How much does the formalism commitment influence their view?

Kit Fine – A New Theory of Vagueness
1st keynote @FOIS 2014

His previous theory stated that if you have a statement with vague terms in it, you should first make all terms very precise, and then you say the statement is true if every possible interpretation of the combination of these precise terms is true.

There are other theories and all of them have problems.

2008 – started working on this new version

A predicate is vague = the predicate is indeterminate.

Global vs. local indeterminacy: the former says a predicate does not work for a range of cases, while the latter says it does not work for a particular case.

There is a temptation to define global indeterminacy based on a collection of local indeterminacies. But this should not be the case.

Local indeterminacy -> existence of borderline cases

There is a radical view that claims these borderline cases do not exist, so local indeterminacy does not exist; for instance, because of the difficulty in differentiating borderline cases from borderline-borderline cases and so on.

But indeterminacy exists! – [** he concluded this after some theoretical exploration of logic and some examples. **]

There is a difference between a predicate being partially defined and being indeterminate.

[** unfortunately, for lack of knowledge on philosophical theories from my side, I could not capture much more on this talk **]

Challenges in Commercializing Expert Knowledge Authoring

Talk by Vinay Chaudhri @LogOnto

Development Process (cyclic, including these steps)

·         Determining relevance and pre-planning

·         Reaching consensus

·         Encoding planning

·         Encoding

·         Key term review

·         Question-based testing

Team: 90% biologists 10% technical staff

 

Challenge 1: Long-term innovation / it requires Culture change

Ontology-based question answering is too radical a change for high school education

·         Ontology-based Q/A is not a commonplace technology even for bioinformatics researchers

·         Education innovations usually begin at graduate level and trickle down to lower grade levels.

Challenge 2: Publishers are too daunted

They were asked if the method was generalizable to other domains. One biology book is too small.

 

Challenge 3: Further research

We do not have ontology designs for capturing all of textbook knowledge

·         For example, see our FOIS paper on content modeling challenges

·         We can currently model only 40-50% of textbook knowledge

·         We need sustained ontology research to capture greater fractions of textbook knowledge.

Challenge 4: Product-focused R&D

How much of the textbook do we actually need to capture?

·         What is the minimal viable representation?

·         How much of the representation can be incrementally added?

Should the answer be limited to just the chapter studied?

Challenge 5: Need non-profit driven funding

·         Academic research sources

·         Foundation and philanthropic support – people passionate about education and knowledge advancement

What do you need to commercialize a product:

A few paying customers – venture capital will take you more seriously

Minimum viable product (the minimum features you must have to make it useful) – eventually adapt to particular client requirements

Mara says: perhaps the challenge is the domain: schools only innovate to differentiate themselves from others (not being out-of-date). A difficult market in which to sell innovation.

Observations from the technical paper presentations @Onto.com/ODISE 2014


1) Work on the FIBO ontology (Mike Bennett):
Interesting practical work in industry. In the end, he raised the issue of the need for a consistent methodology to evaluate ontologies.
The speaker mentioned the OQuaRE matrix – an evaluation method that proposes some criteria for ontology evaluation.
We talked a bit about having a benchmark for ontology evaluation. Sergio de Cesare reminded us that evaluating an ontology is not a one-time thing. We must evaluate the ontology over a long period of time and using different datasets. If after 10 years the ontology is still working on new and big-enough datasets, then it is stable; otherwise, either it must be updated or the models should be updated.

2) Comments by Nicola Guarino at Maria Claudia Cavalcanti’s talk

-          Take the terms that are more common sense and assume them as primitives, focusing first on the more controversial terms

-          When working with the government, rely on the law (the used terms are usually founded on some normative act that should be considered when developing the ontology)

-          Usually, what the system technicians choose as terms is very distant from what is written in these laws.

-          Tiago Prince mentioned that even these laws are ambiguous by choice, so as not to close all doors (the law needs to be flexible), but even with these ambiguities, they help a lot.

Casanova Keynote @Onto.com/ODISE 2014 – An Algebra of Lightweight Ontologies

This presentation plus the papers about this work are available at: http://www.inf.puc-rio.br/~casanova/
How may an application answer a question?

A: By matching the external schema with the database conceptual schema
Matching has to be a non-problem.

What do we mean by a combination of fragments of one or more domain ontologies? The rest of the talk is about combining ontology frameworks.

Music Ontology – popular ontology used to explain the formalism

When you found the Music Ontology on the FOAF ontology, Person and Organization are disjoint classes, but the classes of the Music Ontology that are mapped into Person and Organization are not disjoint in the Music Ontology, so you must add that axiom. In OWL, you cannot assume any such constraint holds by default; you must constrain the language yourself.
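To illustrate the point about disjointness, here is a toy sketch, assuming class memberships can be represented as plain Python sets (all class and individual names are made up, and this is not full OWL semantics). It shows why nothing is flagged until a disjointness axiom is stated explicitly:

```python
# Toy illustration: class extensions as sets of individual names.
# Without an explicit disjointness axiom, an individual asserted to be
# both a Person and an Organization raises no inconsistency.

from typing import Dict, Set, List, Tuple

def check_disjointness(
    extensions: Dict[str, Set[str]],
    disjoint_pairs: List[Tuple[str, str]],
) -> List[str]:
    """Return individuals that violate a declared disjointness axiom."""
    violations: List[str] = []
    for a, b in disjoint_pairs:
        # an individual in both classes violates the axiom (a disjointWith b)
        violations.extend(extensions.get(a, set()) & extensions.get(b, set()))
    return violations

extensions = {"Person": {"mick"}, "Organization": {"mick", "emi"}}

# No disjointness axiom declared: nothing is flagged.
print(check_disjointness(extensions, []))                            # []

# After adding the axiom (as one must when founding the Music Ontology
# on FOAF), the overlap is detected.
print(check_disjointness(extensions, [("Person", "Organization")]))  # ['mick']
```

This mirrors the talk's point: the mapped classes only become inconsistent with the intended model once the axiom is written down.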

He presents a table with lightweight constraints that can be expressed using DL and then shows them on a graph. The conceptual model previously shown, with both FOAF and the Music Ontology, gains new relations with the introduction of the lightweight constraints.

Question: are these constraints recurrent? How did you come up with them? Or were these constraints introduced for this particular case?

The IMPLIES method computes the constraints.

Projection, deprecation, union, intersection and difference are operations that may be performed over ontologies.

His work is targeted at: a) constructing an external schema (union, projection, deprecation); b) comparing ontologies (intersection, difference); c) creating a mediated schema (intersection). This talk focuses on a).

He projects the FOAF ontology, adds the classes of the Music Ontology, and then performs a union operation to connect the two ontologies.

He presents a table summarizing the constraints in natural language.
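Very loosely, the operations above can be pictured as set operations over an ontology's constraints. The sketch below is my own simplification, assuming an ontology is just a set of constraint triples (all names are illustrative); it is not Casanova's actual formalism:

```python
# Loose sketch of the ontology-algebra idea: treating a lightweight ontology
# as a set of constraints, union/intersection/difference become set
# operations, and projection keeps only constraints over a chosen vocabulary.
# This is my simplification, not Casanova's formalism; names are made up.

foaf = {("Person", "disjointWith", "Organization"),
        ("Agent", "subsumes", "Person")}
music = {("MusicArtist", "subClassOf", "Agent"),
         ("Agent", "subsumes", "Person")}

union = foaf | music     # external schema combining both ontologies
common = foaf & music    # constraints shared by the two ontologies
diff = foaf - music      # constraints of FOAF missing from Music

def project(ontology, vocabulary):
    """Keep only constraints whose subject and object are in the vocabulary."""
    return {(s, p, o) for (s, p, o) in ontology
            if s in vocabulary and o in vocabulary}

print(len(union))                                  # 3
print(common)                                      # the shared constraint
print(project(foaf, {"Person", "Organization"}))   # only the disjointness
```

The appeal of the algebra, as I understood it, is that building an external schema reduces to composing a few such well-defined operations.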

[** this was as far as I got – I could not watch the end of the talk but I bet it was very interesting **]