Thursday, September 10, 2015

[Ontobras 2015] Keynote: Developing Ontologies for Land Cover and Land Use Data - Gilberto Câmara

Motivation
Seeking to connect the knowledge from different fields, e.g. economics, geography, etc. How do we connect such knowledge? How do we understand the cultural aspects of each field? This requires a huge effort and is a challenge in the Ontology field.

Quest for the perfect map:
The Ontologist's (Barry Smith-like) point of view:
The world is divided into cells. Each cell has a single class. There is a correct classification: the more our classification coincides with the ideal one, the better.

But...
It is very hard to use this point of view in practice. The map's cells have a specific granularity, which depends on the instrument used to measure the world (a satellite, the eye, or another measurement instrument). And given this granularity, it is impossible to assign a unique classification to each cell.

Land use: the arrangements, activities and inputs people undertake in a certain land cover type to produce, change or maintain it.
Classification: non-managed forest, cattle production, temporary agriculture, shifting cultivation.

The definition depends on a debate among real people. In this case, the speaker was in a helicopter with Marina Silva, the governor of Amazonas, military personnel and other people. Each one was using a particular definition of how much land should be covered to characterise a forest (and differentiate it from a deforested area).

There is something missing in the ontological debate: change, time, movement.
Representing change is very hard!

Land trajectories: the transformation of land cover due to actions of land use. He showed graphs indicating that the same area was a forest from 2000 to 2004, a pasture from 2004 to 2006 ...

E.g. of Land trajectory graph and the relation to event
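As a concrete illustration (my own sketch, not the speaker's actual model), a land trajectory can be encoded as a sequence of (start_year, end_year, land_cover) intervals, with events derived as the moments where the cover class changes:

```python
# A land trajectory as a sequence of intervals (a toy encoding of my own).
# The two intervals below mirror the example given in the talk.
trajectory = [
    (2000, 2004, "forest"),
    (2004, 2006, "pasture"),
    # ... further intervals as the series continues
]

def change_events(trajectory):
    """Yield (year, from_cover, to_cover) at each transition between intervals."""
    for (_, _, cover), (start, _, next_cover) in zip(trajectory, trajectory[1:]):
        yield (start, cover, next_cover)  # each interval starts where the previous ends

print(list(change_events(trajectory)))
# [(2004, 'forest', 'pasture')]
```

This also mirrors the object/event distinction noted below: the land parcel (object) exists throughout, while the transitions (events) occur at specific moments.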


How does our brain represent time?
... by means of events: relevant moments of change
Book: "Why time speeds up when you get older" - dutch author

Objects exist and events occur (*it sounds like the distinction between endurant and event).

Our brain does not process change well: we sense big and small changes differently, our perceptions change over time, etc. This makes it really hard to perceive time (which affects the perception of land trajectories). And it is even harder to describe a whole trajectory with a single category that covers all of its parts.

Land cover vs. Land use
Land cover - endurant (*according to Giancarlo, it is the role of the land).
Land use - event.

Opportunity: because we now have (and in the future will have even more) data to represent the states of the world, we are better able to understand land trajectories. The challenge is to make sense of the data, and for that, ontological thinking is essential.

*brilliant slide!


"In theory, there is no difference between theory and practice. In practice, there is." (Yogi Berra)

Conclusions:
Managing change is a major challenge for the scientific community. Big data creates new challenges. We need ontological thinking for understanding data.

Wednesday, September 9, 2015

[Ontobras 2015] Keynote: Large-scale Semantic Web Reasoning - by Grigoris Antoniou


1st Keynote Ontobras

Intro: Big Data is commonly associated with Data Mining and Machine learning.
- Uncover hidden patterns, thus new insights
- Mostly statistical approaches

He claims semantics and reasoning are also relevant:
1. Semantic interoperability
2. Decision making
3. Data cleaning
4. Inferring high-level knowledge from data

1. Why do we need Semantic Interoperability?
- To create added value through the combination of different, independently maintained sources. E.g. health (combine healthcare, social and economic data to better predict problems and to derive interventions)
- Combine historical and current data
Problems: complexity and dynamicity

2. Make sense of the huge amounts of data:
- turn it into action
- be able to explain decisions – transparency and increased confidence
- be able to deal with imperfect, missing or conflicting data
- all in the remit of Knowledge Representation (KR)
- e.g. alert of a possibly dangerous situation for an elderly person when certain conditions are met
Potential domains: smart cities, intelligent environments, ambient assisted living, intelligent healthcare (including remote monitoring), disaster management

But can we deliver?

The problem:
- traditional approaches work in centralized memory
- but we cannot load big data (or the Web) into centralized memory, nor are we expected to be able to do so in the future.
To the rescue: new computational paradigms
- developed in the past decade as part of high-performance computing, cloud computing, etc.
- developed independently of the Semantic Web (SW) and KR, but we can use them.
What follows:
- Basic RDFS reasoning on MapReduce
- Computationally simple nonmonotonic reasoning on MapReduce
- A computationally complex ontology repair approach using Signal/Collect (can we apply these approaches to exponential reasoning tasks?)
Problems:
- Load balancing
- High I/O cost
- Program complexity
MapReduce:
- Introduced by Google in 2004
- Computation is expressed only with map and reduce operations (popular implementation: Hadoop)
In the rest of the talk, he proposed solutions using MapReduce.
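To give a feel for how RDFS reasoning maps onto this paradigm, here is a minimal sketch, my own illustration rather than the speaker's actual system (real implementations, such as WebPIE, run iterated passes on Hadoop over far larger data): it encodes one RDFS inference rule, rdfs9 (type propagation through rdfs:subClassOf), as a single map/reduce pass in plain Python.

```python
# A minimal sketch (not the speaker's system) of rule rdfs9:
#   (s rdf:type C1) and (C1 rdfs:subClassOf C2)  =>  (s rdf:type C2)
# expressed as one map/reduce pass over triples. Real systems iterate
# such passes until no new triples are derived (a fixpoint).
from collections import defaultdict

RDF_TYPE = "rdf:type"
SUBCLASS = "rdfs:subClassOf"

def map_phase(triples):
    """Key each triple on the class that joins the two rule patterns."""
    for s, p, o in triples:
        if p == RDF_TYPE:
            yield o, ("instance", s)     # join on the class o
        elif p == SUBCLASS:
            yield s, ("superclass", o)   # join on the subclass s

def reduce_phase(pairs):
    """Group by key, then join instances with superclasses."""
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    for values in groups.values():
        instances = [v for tag, v in values if tag == "instance"]
        supers = [v for tag, v in values if tag == "superclass"]
        for inst in instances:
            for sup in supers:
                yield (inst, RDF_TYPE, sup)  # newly inferred triple

triples = [("ex:rex", RDF_TYPE, "ex:Dog"),
           ("ex:Dog", SUBCLASS, "ex:Animal")]
print(list(reduce_phase(map_phase(triples))))
# [('ex:rex', 'rdf:type', 'ex:Animal')]
```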

Still missing / Future Work:
- Which parallelization architectures are more appropriate?
- There are no agreed benchmarks for large-scale reasoning.
- More complex types of reasoning should be considered, such as spatiotemporal reasoning and exponential reasoning.
- Extreme reasoning: make reasoning with big data work in real time (there are new systems, like Apache Storm)

Wednesday, September 24, 2014

My talk at FOIS 2014

An Ontological Interpretation of Non-Functional Requirements
Renata Guizzardi, Feng-Lin Li, Alex Borgida, Giancarlo Guizzardi, Jennifer Horkoff and John Mylopoulos

My talk was at Day 1 - session 2. I really enjoyed the feedback I got! What was most rewarding was to realize that people with different backgrounds showed interest and understood what I was talking about. In conferences like FOIS, where diversity is in its soul, my main worry was to communicate the work well to this broad audience. For that, something really important is to make a good introduction, presenting some definitions (like Requirements Engineering, for instance) and illustrating the problems that ambiguous definitions of NFRs create in real life, thus motivating our work of ontological clarification. Following Giancarlo's suggestion, I also made sure to state that our focus was on applied ontology, i.e. on using theoretical works with the aim of producing practical implications in RE, rather than proposing a new theory.

Another challenge in preparing the presentation was the fact that we have a dense paper. There are too many things to talk about, and selecting the "must haves" was not so easy. I ended up leaving out important things, like the distinction between NFR and softgoal, and the formal syntax to specify requirements. I decided to focus on the definition of NFRs as quality goals and on the explanation regarding vague NFRs, quality constraints and gradable NFRs. Thus, I talked about the important operations of NFR refinement and operationalization. And I also discussed and exemplified the technique we use to analyze the satisfaction of gradable NFRs. Before concluding the talk, following another suggestion from Giancarlo, I also mentioned the empirical work and our related paper, so that it was clear that some evaluation (at least preliminary) of the use of the proposed ontological concepts has been done.

Oscar Pastor was the chair of the session and he conducted the discussion very well. Indeed, I was happy to see that there were a lot of people willing to ask questions, all of them very interesting. Here are some of these questions:

1) Nicola Guarino pointed out that he agrees with our view of NFRs as requirements associated with qualities. However, he said (and it is true!) that I talked very little about the bearers of these qualities, and he asked me what they were.

Response: I profited from the fact that Xiaowei had presented before me in the session to mention that these qualities could inhere in all those elements he had explained, i.e. functionalities of the system-to-be, parts of the system-to-be, executions of the system-to-be, etc. Now I would also like to remind people that the notion of information system is broad, so requirements (both functional and non-functional) may also be process requirements, ontology requirements, etc., i.e. not necessarily about an automated system. But I understood from Nicola's comment that in future publications we must dedicate a section to discussing the bearers, and I even think that from the experiment we have examples to illustrate all kinds of bearers, so it will be a very interesting discussion.

2) Nicholas Asher asked why I made the requirement crisp and then made it gradable. He also asked questions regarding the process of requirements elicitation.

Response: Nicholas gave me a great opportunity to mention some problems in RE practice. Indeed, I had to tell him at this point that in reality many NFRs remain vague until the end of the process and unfortunately are never dealt with, being the cause of many problems. Also, it is a pity that people classify all that is vague as NFR, without deeper reflection on whether that requirement refers to function or quality. He thought the problem was very interesting! In a sense, I think his question also brings us some other things to think about regarding how to talk about gradability. In the paper, we took this choice: first refining, then making the requirement gradable. But in reality, a stakeholder may also say something like "the search result should be returned in around 30s", which is already gradable, without ever being crisp. Most of the time, indeed, that is probably what happens. So I guess for a journal paper we can invert things and explain it like this. One thing that I told Nicholas in the coffee break, and that should also be said more explicitly, is that there are requirements which by their nature are crisp, while others are gradable, or at least would profit from our decision of making the crisp border of the quality region thicker (i.e. gradable). As time is limited during the presentation, it is not possible to clarify all these things, but I think this point is clear in the paper!

Kit Fine: Kit commented that what we are proposing for gradability may be solved with a kind of supervaluation, which is the topic of his recent work. He also mentioned a terminological problem with the term vague.

Response: I think (and Giancarlo is even more convinced) that this is something we should look at. Regarding terminology, this is indeed something else to take into account because, since the morning, during Kit's keynote speech, it was clear that for philosophers (and also for linguists, as Asher agrees with him on this), vagueness is more connected to the gradability we are proposing than to what we call a vague NFR. For vague NFR, they suggested we substitute the term underspecified NFR.

Carlos Azevedo (NEMO student): Given the NFR stated as "the search result should be returned in 30s", is there also a way, using the gradable NFR satisfaction calculation technique you propose, to come up with the result that 5s is even better than 30s? I ask that because, although as requirements engineers we establish a maximal desired limit for this kind of requirement, if the result is better than this limit, this should be acknowledged.

Response: I'd have to check, but I guess not, because both 30s and 5s would be within the gradable region, and both would yield 1 for the gradable membership function (see the sketch below). But that would be desirable indeed! We must think about it.
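To make this concrete, here is a minimal sketch, my own reading rather than the paper's exact formulation, of a membership function for the gradable NFR "the search result should be returned in around 30s"; the 40s tolerance bound is a made-up illustration.

```python
# A toy membership function for a gradable NFR (my own sketch, not the
# paper's formulation). Values up to the 30s limit fully satisfy the NFR
# (degree 1.0); between the limit and a tolerance bound the degree decays
# linearly across the "thickened" border of the quality region; beyond the
# tolerance bound the NFR is not satisfied at all (degree 0.0).
def satisfaction_degree(response_time_s, limit=30.0, tolerance=40.0):
    if response_time_s <= limit:
        return 1.0                 # fully inside the desired quality region
    if response_time_s >= tolerance:
        return 0.0                 # clearly unsatisfactory
    return (tolerance - response_time_s) / (tolerance - limit)

print(satisfaction_degree(5))    # 1.0 -- same degree as 30s, as noted above
print(satisfaction_degree(30))   # 1.0
print(satisfaction_degree(35))   # 0.5
print(satisfaction_degree(45))   # 0.0
```

This also shows why 5s and 30s come out the same: both sit in the fully satisfied region, so acknowledging results better than the limit would require a different shape of function.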

Mara Abel: Isn't this technique of gradable NFR satisfaction calculation too expensive, computationally speaking? What justifies it, given that you could simply use the Euclidean distance measure to do this?

Response: Giancarlo asked to respond to this one and highlighted that we considered fuzzy logics, but we think our proposal is better because it explains where the numbers used come from, while in fuzzy logics they are usually arbitrary. He also noted that, in general, stakeholders are aware of the prototypes, and this makes the technique we are proposing a natural candidate to solve the problem.


Tuesday, September 23, 2014

FOIS 2014 - Short Paper Session - Chair: João Paulo



1) Emilio Sanfilippo
Events and Activities: Is there an Ontology behind BPMN?
Emilio Sanfilippo, Stefano Borgo and Claudio Masolo

Unfortunately, I missed most of this presentation. But it seems to have a very interesting practical value. Basically, it builds on the concept of perdurant. I have a question, which I will ask Emilio, about the differentiation between the process execution and the process model, which in UFO we accomplish by including the concept of Description as a Social Object.

2) David Aveiro
An Ontology for the τ-theory of Enterprise Engineering
Jan Dietz, David Aveiro, João Pombinho and Jan Hoogervorst

Interesting discussion on function (affordances) vs. construction (looking at things as they really are).
He proposes to objectify value, because this is a central concept and, for enterprises, value has properties.
Value and experience.
Value is seen as a relation between function and capacity.
Important not to provide only one solution - analyze alternatives.
Conclusions:
- affordances, purpose and value bridge the ontological and teleological perspectives, bringing a high degree of objectivization to the modeling approaches;
- separation of concerns of affordance;
- target the so-often-neglected why dimension of the Zachman framework.

My comment in the session: Opportunity of collaboration! We are both (and JP) targeting research on the why dimension of EA.

Here is a figure of the ontology:


FOIS 2014 - Day 1 - session 3


1) Michael Gruninger
Mathematical Foundations for Participation Ontologies (competition paper)
Carmen Chui and Michael Gruninger

Representing activity occurrence using geometry: occurrences as lines, time points as points, and objects as planes.

They looked at several ontologies of participation, including TSL, Gangemi's ontology and DOLCE. They could verify them using this method, prove that they are isomorphic up to a point, and show that one is more expressive than the others for specific things.

It is a highly formal method based on axiom verification, also deeply grounded in mathematics. This may introduce bias (according to a questioner).

In my view, the work seems interesting, and Gruninger gave a very consistent and lively presentation (also with demonstrations) that highlighted the strengths of this approach and proved some of his points.

Worth investigating further!

2) Nicolas Troquard

A formal theory for conceptualizing artefacts and tool manipulations
Nicolas Troquard

An approach highly based on logic -- formalization is its essence.
It sounds interesting at first sight, but I disagree with some underlying assumptions of the work. For instance, seeing an artifact as an agent (????). He justified that by saying that the artifact is "doing" something for someone and "has a purpose". Come on, we are at an ontology conference, there are better ways to account for those things!!!

Also, he said something like "I like to think of artifacts as agents". But hey, ontologically, what are artifacts? Either they are agents or not... independently of how we like to think about them. According to UFO, they are social objects, and that distinguishes them from intentional substances, who act, have intentions (internal commitments) and perceive events. In summary, for me artifacts are very far from being agents.

Then, he also presented another confusing concept of artifact as an anti-rigid concept, saying that all artifacts have been non-artifacts in the past. All? Why?

FOIS 2014 - Day 1 - Session 2


1) Fabiano Ruy
An Ontological Analysis of the ISO/IEC 24744 Metamodel
Fabiano B. Ruy, Ricardo A. Falbo, Monalessa P. Barcellos and Giancarlo Guizzardi
 
Ontological analysis of an ISO standard (ISO/IEC 24744) – the analysis focused on one part of the standard, the SEMDM.

There are other parts, among them AFOS (a foundational ontology for standards), planned as a future initiative. Our view at NEMO is that the grounding should be in the project from the beginning, not a future perspective.

He presented UFO (A, B and C), focusing on the concepts he would need to explain the foundation he made.

He showed the existing UML model of what is called the Endeavor Level (the paper also brings the analysis of another level). Then, he moved deeper into the presented concepts of the Endeavor Level to find the issues and recommend the reengineering of the model, based on UFO-supported ontological analysis.

He proposed a number of recommendations, very useful ones! For example, before, Producer grouped agents and objects under the same concept, and now this is fixed; subtypes were regrouped forming different partitions, etc.

Reading the paper is a great idea!

2) Xiaowei Wang
Towards an Ontology of Software: a Requirements Engineering Perspective
Xiaowei Wang, Nicola Guarino, Giancarlo Guizzardi and John Mylopoulos


Motivation: using ontological clarification to solve problems arising, for example, as software changes. Discussing identity, we can find criteria to determine: after some changes, is the software the same one, or is it a new one?
Related work: Oberle (2006) – differentiates code, execution and copy; Irmak (2013) – software as artifact.
He discussed the underlying theory of Zave and Jackson. Then he showed a view of the model, having on the left-hand side a hierarchy of artifacts (such as software system, software program, software product, etc.) linked with the "how to do" label; on the right-hand side, concepts related to the purpose, such as specification, execution, etc., linked to "what to do". Then, he detailed the ontological concepts from this figure.
Software as a bridge between the abstract and the concrete. Interesting metaphor!
Interesting question on the separation between the program and the intent. According to the questioner (and I agree), correct syntax in a program does not happen by chance. Xiaowei referred to the terminology ambiguities to justify the work. Nicola added that even when the code is intentionally crafted, the program may remain the same when the code changes. The questioner agrees that there should be a distinction among these concepts, but he disagrees that the code is not somehow connected to intent.

FOIS 2014 - Notes on the Technical Sessions - Day 1

Michael Gruninger

A Sideways Look at Upper Ontologies
Michael Gruninger, Torsten Hahmann, Megan Katsumi and Carmen Chui

A method based on axiomatization and proof to compare and connect different upper ontologies.

1) How can we understand which ontological commitments an upper ontology makes?

This is done by decomposing the upper ontology into a set of generic ontologies which are its modules, e.g. an ontology of betweenness, a timepoint ontology.

He claimed they proved (not shown here) that DOLCE is reducible to a set of more traditional mathematical ontologies.

There is a repository in his lab that puts together these different ontologies. He showed some excerpts represented by graphs, for instance different perspectives on betweenness and timepoint ontologies.

Ontological commitments vs. Ontological choices
*ask for his slides (very interesting distinction!)

Every generic ontology has a set of commitments and a set of choices (the choices are materialized by axioms that constrain the ontology).

2) How can an upper ontology be partially reused?

He presented a model (a kind of architectural model), which he calls the sideways view, providing a kind of method for how this can be done. But for me, it went too fast to understand.

3) How can we partially integrate an upper ontology that agrees with only...

Identifying relationships between two distinct ontological repositories.

When there are no mappings between two upper ontologies, he uses a reference model based on mathematical ontologies.

4) How can we integrate multiple extensions of an upper ontology when they are mutually inconsistent with each other?

Generalized differences - capture the ontological commitments and choices made by one ontology but not by the other: a set of entailments that are missing in one of the ontologies.

This is important for understanding the limits within which we can share.



Must the ontologies be heavily formalized? Just enough, he answered. The more axioms the better, but you can already reach some mapping with incomplete formalization, because usually there needs to be some room for flexibility (i.e. the ontology should allow different models).

How much does the formalism commitment influence their view?