Wednesday, May 23, 2018

Notes on the RE PhD Course - University of Trento - Class: Requirements Prioritization - Angelo Susi


*Taken while Angelo projected the Day 3 slides for a recap

Luciano Baresi – micro-services (pieces of code that are very well suited to agile team development – they have a precise goal – not the same as a micro-goal, simply a precise one)
Here is one of his interesting papers

For that to work, you need:
-      Reliable information/documentation about the micro-service
-      A well-developed interface to enable interconnection.

Release Plan

*The PhD work by Azevedo, C. seems to be of particular interest.

We must apply algorithms that are agnostic to the problem. The same algorithms that have existed for many years (e.g. genetic algorithms, Markov chain-based algorithms, etc.) may be used to solve new problems.
These algorithms are sensitive to small changes, so you must FIRST understand the problem.

In terms of agnostic algorithms, the role of human intuition works like this: the algorithms give you the RIGHT QUESTIONS to ask the stakeholders in order to find out the information you need.

What to do about the consistency of human information? People may lie, not know the answer, or be bound by ethics not to disclose some information. Angelo says there are existing decision-making algorithms that enable consistency checks using mathematics.

Pareto Optimality in multi-criteria decision making
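Not from the slides, but a minimal sketch of what Pareto optimality means when requirements are scored on two criteria (here a made-up value and cost, in Python): a requirement is Pareto-optimal if no other requirement is at least as good on both criteria and strictly better on at least one.

# Hypothetical example: find the Pareto-optimal requirements over (value, cost).
# A requirement is dominated if another one has value >= its value and
# cost <= its cost, with at least one strict inequality.

def pareto_front(requirements):
    """Return the names of the requirements not dominated by any other."""
    front = []
    for name, value, cost in requirements:
        dominated = any(
            v >= value and c <= cost and (v > value or c < cost)
            for n, v, c in requirements if n != name
        )
        if not dominated:
            front.append(name)
    return front

reqs = [("R1", 8, 5), ("R2", 6, 2), ("R3", 4, 4), ("R4", 9, 9)]
print(pareto_front(reqs))  # ['R1', 'R2', 'R4'] -- R3 is dominated by R2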

*send to Angelo the reference by … at CIbSE 2018

Interesting discussion on the kinds of approaches to solve the problem: the kinds of algorithms and the trade-off between fully automated and human-assisted approaches.

Very interesting slide on RE Prioritization Works in Literature 

Setting requirements with actual measures 

Characteristics of requirement B (on slide; see the sketch after this list): 
-      Title
-      Description
-      Cost of implementation
-      Risk
-      Value for stakeholder
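As a reading aid (my own sketch, not from the slide), these attributes map naturally onto a small record type; the field names and scales are my own choice:

from dataclasses import dataclass

@dataclass
class Requirement:
    title: str
    description: str
    cost: float   # estimated cost of implementation
    risk: float   # e.g. a 1-9 scale, or a label such as "high risk" mapped to a number
    value: float  # value for the stakeholder

r = Requirement("Single sign-on", "Allow login via the corporate identity provider",
                cost=5.0, risk=3.0, value=8.0)

In practice (see the next paragraph) several of these fields may simply be unknown.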

In practice, there is only partial knowledge about each requirement. In company A, for example, there are written requirements (text-based) in a worksheet, and for roughly 1 requirement in 20 there is an extra piece of information given by the manager, e.g. “high risk”.

Analytic Hierarchy Process (AHP)
Very interesting decision-making algorithm;
Angelo explained it really well!
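For reference, AHP builds a matrix of pairwise comparisons ("how much more important is requirement i than requirement j?"), derives priority weights from the matrix's principal eigenvector, and computes a consistency ratio — an example of the mathematical consistency check on human judgments mentioned earlier. A minimal sketch with made-up judgments (not Angelo's example), using numpy:

import numpy as np

# Pairwise comparisons for 3 requirements on Saaty's 1-9 scale:
# A[i, j] = how much more important requirement i is than requirement j.
A = np.array([
    [1.0, 3.0, 5.0],
    [1/3, 1.0, 2.0],
    [1/5, 1/2, 1.0],
])

# Priority weights = normalized principal eigenvector of A.
eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)
weights = np.abs(eigvecs[:, k].real)
weights /= weights.sum()

# Consistency ratio: CI = (lambda_max - n) / (n - 1), CR = CI / RI;
# judgments are usually considered acceptable when CR < 0.1.
n = A.shape[0]
lambda_max = eigvals.real[k]
CI = (lambda_max - n) / (n - 1)
RI = 0.58  # Saaty's random index for n = 3
CR = CI / RI
print(weights, CR)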

*RankBoost (Freund, Iyer, Schapire and Singer, 1998) – according to Angelo, a very well-written paper. I found a 2003 paper by this group.

Very interesting machine learning algorithm (named CBRank) by Angelo, Anna and Paolo Avesani. Paper in IEEE Transactions on Software Engineering, 2013. A previous version was published at the RE Conference. 

To learn, the ML algorithm (CBRank) compares its own results to rankings done by users (called domain knowledge in the algorithm). The principle is to combine machine-based and human-based knowledge.
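I did not note the details of CBRank itself, so what follows is only a rough, hypothetical sketch of that principle — not the published algorithm. Both sources produce ordered pairs (a, b) meaning "a should be ranked before b", and a final ranking is obtained by counting weighted wins:

from collections import Counter

def rank_from_pairs(requirements, human_pairs, machine_pairs, human_weight=2.0):
    """Aggregate pairwise preferences (a, b) = 'a before b' into a total order.
    Hypothetical sketch: human (domain-knowledge) pairs are trusted more than
    machine-predicted pairs, so they get a higher weight in the win count."""
    wins = Counter({r: 0.0 for r in requirements})
    for a, b in human_pairs:
        wins[a] += human_weight
    for a, b in machine_pairs:
        wins[a] += 1.0
    return sorted(requirements, key=lambda r: wins[r], reverse=True)

reqs = ["R1", "R2", "R3"]
human = [("R2", "R1")]                  # elicited from stakeholders
machine = [("R1", "R3"), ("R2", "R3")]  # predicted by the learned model
print(rank_from_pairs(reqs, human, machine))  # ['R2', 'R1', 'R3']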

Disadvantage (according to Mirheidari): linear learning. With new/updated domain knowledge, you must of course update the pair sampling. However, you must wait for the system to learn, because it only works well after many iterations (say, 100).
*Angelo replies: OK, but we can also change the algorithm, which is an old one. However, if you read the 1998 paper, you will see that they already talked about user feedback at that time.

*Mirheidari's presentation in the class:
A paper about detecting problems in decision-making regarding security.
Categorization of over 100 works.
Ref: Seyed Ali Mirheidari, Sajjad Arshad, Rasool Jalili. Alert Correlation Algorithms: A Survey and Taxonomy. In: CSS 2013.

Similarity-based algorithms
Knowledge-based algorithms:
  1)   Pre-requisite and consequence
  2)   Scenario
Statistical-based algorithms

They discussed the advantages and disadvantages of these works based on 5 metrics related to their own work:
-      Algorithm capability
-      Algorithm accuracy
-      Algorithm computation power
-      Required knowledge base (KB)
-      Algorithm extendibility and flexibility.

*After the survey, he designed a hybrid approach to maximize the probability of detecting the attack.

Another good survey (Seyed says it is even better than his): Sadoddin and Ghorbani (2006). Alert correlation survey: framework and techniques. In: PST'06.

Empirical Study
Angelo gave a very good description of their empirical study to validate CBRank. 

Search-based approach - Angelo explained an approach based on an Interactive Genetic Algorithm (IGA).

In this algorithm, an individual is a complete ranking of the requirements.

Getting information from the user works by restricting the population (in other words, increasing the number of constraints). An important point here is to minimize bothering the user while still eliciting from her the right kind of information to make the algorithm better (better meaning faster, more accurate, and with fewer conflicts).
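A minimal sketch of my own (not the approach from the slides) of how such an individual and its user-driven fitness might look: the individual is a permutation of the requirements, and each answer elicited from the user becomes one more precedence constraint, so asking one more question simply adds one more constraint to the fitness. The loop below is deliberately simplified to a single individual rather than a full population-based GA:

import random

def violations(individual, constraints):
    """Count user-elicited precedence constraints (a, b) = 'a before b'
    that this individual (a permutation of the requirements) violates."""
    position = {req: i for i, req in enumerate(individual)}
    return sum(1 for a, b in constraints if position[a] > position[b])

def mutate(individual):
    """Swap two positions - a simple mutation operator for permutations."""
    i, j = random.sample(range(len(individual)), 2)
    child = list(individual)
    child[i], child[j] = child[j], child[i]
    return child

reqs = ["R1", "R2", "R3", "R4"]
constraints = [("R2", "R1"), ("R3", "R4")]  # answers collected from the user so far
best = list(reqs)
for _ in range(200):
    candidate = mutate(best)
    if violations(candidate, constraints) <= violations(best, constraints):
        best = candidate
print(best, violations(best, constraints))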

The production-of-individuals phase may also work well for generating test cases for the algorithm.

After the production of individuals, you may have conflicting individuals, and this is made explicit so that we can ask the users' preferences regarding such conflicts.
