La Elicitación de Requisitos No Funcionales en el Contexto de la Industria del Software: Un Reporte de Experiencias
Sandra Lorena Buitron Ruiz and Francisco J. Pino
Goal: find out how small and medium-sized organizations conduct NFR (non-functional requirement) elicitation activities using the MERliNN framework.
They start by understanding the Business Processes. They use a method called ERNF, which is part of the applied framework.
This method has three dimensions: Knowledge, Technical, and Organizational.
They applied the study to 9 projects in 4 companies, in the Health, Services, and Education sectors.
She presented a list of observations gathered during the application of the project.
Then she presented a list of lessons learned from that application.
Among the limitations, she mentioned that the companies' lack of a process for eliciting NFRs limited the efficiency of applying the method, as did the limited availability of the project leaders.
She concluded by saying that several companies intend to adopt RE activities to elicit and analyze NFRs, but current practice is very ad hoc. They need tools to help automate the application of the method.
As future work, she mentioned a few goals to improve MERliNN.
Ontología para la Especificación de Casos de Uso, Casos de Prueba y su Trazabilidad
Marcela Vegetti, María Luciana Roldán, Marcelo Marciszack, Silvio Gonnet and Horacio Leone
Tools supporting Use Case specification and Test Case Management are not integrated. They propose an ontology of requirements and test cases to support that integration, improving traceability between the produced artifacts.
Methodology:
- requirements specification
- semi-formal conceptualization using UML class diagrams
- implementation and formalization
- evaluation in terms of the specified requirements
For the requirements specification, they defined the ontology scope, elicited competency questions, and then identified the concepts related to each competency question.
She showed two UML class diagrams, one for Use Cases and another one for Test Case Management.
Part of the work reuses previous work of the group on use cases consistent analysis. They propose the application of an analogous technique to define the test cases. Thus, the concepts regarding such work have been included in the ontology, serving as a way to integrate Use Cases and Test Cases.
She showed a UML class diagram focusing on the artifacts for which traceability is required. This model is composed of concepts modeling the artifacts and the type of traceability that may be provided between them.
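The idea of modeling both the artifacts and the type of traceability between them can be sketched in code. This is a minimal illustration of ours, not the authors' ontology; all names here (Artifact, TraceLink, "verifies") are hypothetical.

```python
from dataclasses import dataclass

# Hypothetical sketch: artifacts (use cases, test cases) plus typed
# trace links between them, so queries can follow a given link type.
@dataclass(frozen=True)
class Artifact:
    kind: str   # e.g. "UseCase" or "TestCase"
    name: str

@dataclass(frozen=True)
class TraceLink:
    source: Artifact
    target: Artifact
    link_type: str  # e.g. "verifies", "derived_from"

def targets_of(links, artifact, link_type):
    """All artifacts reached from `artifact` via links of `link_type`."""
    return [l.target for l in links
            if l.source == artifact and l.link_type == link_type]

uc = Artifact("UseCase", "Withdraw cash")
tc = Artifact("TestCase", "TC-01 Withdraw with sufficient balance")
links = [TraceLink(tc, uc, "verifies")]
print([a.name for a in targets_of(links, tc, "verifies")])  # ['Withdraw cash']
```

An ontology-based implementation would express the same structure as classes and object properties, letting a reasoner answer such queries instead of hand-written functions.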
The ontology was implemented using Protégé.
As validation, she gave some reasoning examples enabled by the ontology that match the elicited competency questions.
Other ontologies in other software engineering sub-areas may be integrated with the developed ontology to provide traceability between other software engineering artifacts.
Q&A:
Regarding the competency questions, she said that they were elicited by the authors of the work, based on their previous knowledge of the domains being modeled. They used general question grouping, refining, and abstracting to reach the final set of questions. Beatriz mentioned that this process can leave out some important questions.
* Competency questions remain the main technique used for requirements elicitation in ontology engineering. However, this is not very efficient. Indeed, the ontology engineering community usually ignores RE findings that could help elicit, analyze, and document requirements. We should fix that!
* There have been works from NEMO:
1) using goal modeling to come up with competency questions for ontology development;
2) using the results of a systematic literature mapping to support the definition of competency questions.
Do users talk about the software in my product? Analyzing user reviews on IoT products
Kamonphop Srisopha, Pooyan Behnamghader and Barry Boehm
Two challenges in investigating product reviews:
- huge volume of data (though velocity might not be as high)
- one cannot assume that sentences relate only to software (they may also cover hardware, service, etc.)
What kind of information can we get out of a user review: user background, hardware complaint, product general evaluation, general evaluation compared with a competing product, software feature request, software praise, and software praise compared with a competing product.
Goal of this study: investigate whether and how much users talk about the software in an IoT product, and how this can benefit software engineering:
RQ1: How can IoT product reviews be categorized?
RQ2: How much information in the studied product reviews is relevant for software engineers?
RQ3: How effectively can machine learning techniques classify such reviews?
1) To answer RQ1:
Their study looked at data at sentence-level granularity.
Top-level categorization (based on the domain): to come up with the categories for the dataset, he conducted a study based on his own categorization and the categorizations of 52 master's students.
Second-level categorization (based on what kind of information messages provide): for the message types, he applied an existing user-feedback taxonomy classifying the types of messages found in product reviews.
Four types considered: Information Giving, Inquiry, Feature Request and Problem Discovery
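To make the four message types concrete, here is a deliberately naive keyword heuristic of ours. The cue words are an assumption for illustration only; the paper uses a manually labeled dataset and machine learning, not rules like these.

```python
# Illustrative only: toy keyword cues for the four message types from the
# talk. These cues are our assumption, not the authors' classification.
KEYWORDS = {
    "Inquiry": ("how do i", "does it", "can it", "?"),
    "Feature Request": ("would be nice", "wish it", "should add", "please add"),
    "Problem Discovery": ("crash", "bug", "doesn't work", "error"),
}

def classify_sentence(sentence: str) -> str:
    s = sentence.lower()
    for label, cues in KEYWORDS.items():
        if any(cue in s for cue in cues):
            return label
    return "Information Giving"  # default bucket

print(classify_sentence("The app crashes every time I open it"))  # Problem Discovery
print(classify_sentence("Please add dark mode"))                  # Feature Request
```

Even a toy like this shows why sentence-level granularity matters: a single review can mix a hardware complaint, a software problem report, and a feature request in adjacent sentences.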
2) To answer RQ2:
They made a manual classification of the dataset according to the categories defined in 1).
He showed some results of his findings and also discussed some related work.
3) To answer RQ3:
They used a text-processing technique based on the vector-space model (with TF-IDF weights). They also used Word2Vec, which builds on the distributional hypothesis: if two words appear in the same contexts, they have similar semantics. The approach places words in a vector space, so that semantically related words end up close to each other.
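A minimal sketch of the TF-IDF vector-space idea, assuming toy review sentences of our own invention (the authors' actual pipeline and dataset differ, and Word2Vec embeddings are not reproduced here):

```python
import math
from collections import Counter

# Toy corpus (our assumption, for illustration only).
docs = [
    "the app software update broke wifi",
    "great hardware but the software is buggy",
    "fast shipping and great service",
]
tokenized = [d.split() for d in docs]
vocab = sorted({w for doc in tokenized for w in doc})

def idf(word):
    # Inverse document frequency: rarer words weigh more.
    df = sum(word in doc for doc in tokenized)
    return math.log(len(tokenized) / df)

def tfidf_vector(doc):
    # One weight per vocabulary word: term frequency times IDF.
    tf = Counter(doc)
    return [tf[w] / len(doc) * idf(w) for w in vocab]

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

vecs = [tfidf_vector(d) for d in tokenized]
# Reviews 0 and 1 both mention "software", so they should be closer
# to each other than review 0 is to the shipping review 2.
print(cosine(vecs[0], vecs[1]) > cosine(vecs[0], vecs[2]))  # True
```

These document vectors are what a downstream classifier would consume; Word2Vec replaces the sparse per-word dimensions with dense learned embeddings.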
He presented his findings on the use of these methods.
Q&A:
* I believe there is room for semantic-based approaches (ontology-based reasoning) in connection with the machine learning techniques. He said he had not thought about it but thinks it could be useful.
He wants to look into Sentiment Analysis and into an ontology on software quality to help improve their categorization.