Tuesday, April 8, 2025

REFSQ'25 Keynote: "Designing Software Means Shaping Digital Society" by Markus Oermann

He starts by talking about the two ticks that let us know whether the person "read" our message. The feature is meant for transparency (that's what they claim). But without eye tracking in place, you don't actually know whether the person read the message, only that the app was opened.

This feature has been shaping society ever since it was created. It pressures us to respond to messages immediately! And people are not happy if we don't. It is similar to the panopticon: a prisoner under a guard tower is constantly being watched, and this shapes the prisoner's behavior. For the company, the model is clear: they want to maximize engagement.

Society also shapes technology. WE pressed WhatsApp to offer the option of turning the two ticks off, and it added that possibility for individual chats.

2010 - Facebook gave you the chance to tag someone once the system recognized your friends' faces

They claimed they had introduced a new form of communication, a new interaction model, improving users' communication. But at the same time, Facebook was also creating a huge database for training face recognition software. In a sense, they got users to unknowingly perform free labor (it's a kind of slavery). As a result, they developed their face recognition algorithm and profited from it greatly. Only AFTER THAT did they introduce an "off" button for users who don't want to be tagged.

2021 - users pressed Facebook again and, finally, Facebook agreed to shut this feature down permanently

Nudge

A strategic architectural choice that leads consumers to take the action expected by the company

Nudging is also used to regulate users. Shall we do "opt-in" or "opt-out"? So why do we need institutions at all, if we can easily regulate people with technology? Nudging is based on behavioral economics.
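As a concrete illustration of the opt-in/opt-out choice as a nudge, here is a minimal sketch (my own, hypothetical, not from the talk): the only difference between the two designs is the default value the designer pre-fills, and since most users never change defaults, that value decides the outcome.

```python
# Hypothetical signup form: the designer's choice of default IS the nudge.

def signup(name: str, newsletter_default: bool, user_changed_it: bool = False) -> dict:
    """Most users never touch a pre-filled checkbox, so the default wins."""
    subscribed = (not newsletter_default) if user_changed_it else newsletter_default
    return {"name": name, "newsletter": subscribed}

# Opt-in design: box unchecked by default -> few users end up subscribed.
print(signup("alice", newsletter_default=False))  # {'name': 'alice', 'newsletter': False}

# Opt-out design: box pre-checked by default -> most users stay subscribed.
print(signup("alice", newsletter_default=True))   # {'name': 'alice', 'newsletter': True}
```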

Look at the book's subtitle: "Improving Decisions About Health, Wealth, and Happiness"


Two very similar books: one trying to do "good", the other teaching how to manipulate people through nudging


The COMPAS case: after the publication of the paper demonstrating bias, the company changed its name (Northpointe became Equivant) but continues to distribute the system


Error rate bias: the bias arises from an unequal distribution of the system's systematic errors, to the detriment of a group that has to bear the associated social costs. Why does this happen? See the slide below (very interesting!!):
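To make the definition concrete, here is a minimal numeric sketch (hypothetical counts of my own, not from the talk or the slide): two groups can get the same overall accuracy from a system while one of them absorbs far more of the false positives, i.e. the social cost.

```python
# Hypothetical confusion-matrix counts for two groups scored by a
# risk-assessment tool (invented numbers, for illustration only).
# fp = wrongly flagged as "high risk": this group bears the social cost.
groups = {
    "group_a": {"tp": 30, "fn": 20, "tn": 45, "fp": 5},
    "group_b": {"tp": 45, "fn": 5, "tn": 30, "fp": 20},
}

for name, c in groups.items():
    accuracy = (c["tp"] + c["tn"]) / sum(c.values())
    fpr = c["fp"] / (c["fp"] + c["tn"])  # share of harmless people flagged
    print(f"{name}: accuracy={accuracy:.2f}, false-positive rate={fpr:.2f}")

# group_a: accuracy=0.75, false-positive rate=0.10
# group_b: accuracy=0.75, false-positive rate=0.40
```

Same accuracy, so the system looks "fair" in aggregate, yet group_b carries four times the rate of false accusations: exactly the unequal distribution of systematic errors described above.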

Social problem: these systems are optimized only in the sense of awarding benefits in the most restrictive way possible, and pre-existing biases are reinforced as "self-fulfilling prophecies" become established.

Moral Responsibility
The control condition and the epistemic condition have been the basis of moral responsibility since Aristotle. This view still shapes our society.

- Functionalities: requirements
- Norms: rules
- Values: what we live for

Work from the regulatory literature: "The Collingridge Dilemma of Technology Impact Assessment" (early in development, a technology's impacts are hard to predict but its design is easy to change; once the impacts become clear, the technology is entrenched and hard to change)


Research and development is becoming more and more political. Politicians discuss it more and more, sometimes without even acknowledging this on a conscious level.

Regulators/lawmakers have become aware of the fact that:
- technology itself is a resource to establish governance
- regulation is far more effective when it kicks in while there is still design flexibility
- relevant choices for society are made early in the process

R&D of digital technologies is becoming a subject of public politics, materialized in laws and regulations.

Responsibility regarding legal compliance - Art. 25 GDPR: Privacy by Design (data protection by design and by default).
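As a sketch of what "by design and by default" can mean in practice (my own illustration, not from the talk): privacy-relevant settings default to the most protective value, so protection does not depend on users finding a toggle, and any data sharing requires an explicit opt-in.

```python
from dataclasses import dataclass

@dataclass
class AccountSettings:
    # Data protection by default: the most protective value is what
    # users get without doing anything (field names are hypothetical).
    profile_public: bool = False
    share_usage_analytics: bool = False
    keep_location_history: bool = False

settings = AccountSettings()    # protective defaults out of the box
settings.profile_public = True  # sharing requires an explicit user action
```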

AI Act, Art. 3(1): "machine-based system designed to operate with varying levels of autonomy that may exhibit adaptiveness after deployment and that, for explicit or implicit objectives, infers from the input it receives how to generate outputs such as predictions, content..."

Scope: Art. 2 AI Act = a very broad marketplace principle.
It regulates European players but also foreign players that act in the European market.
The big techs claim they make their products GDPR-compliant because it was more convenient than the alternatives: (a) leaving the EU market, or (b) creating specific products for Europe.

Direct quote from the talk: "There is a big tension between the EU regulation view and the US regulation view. How will this power play end? I don't know; we can discuss it later."

- Minimal risk: spam filters, AI in games
- Limited risk: AI systems such as chatbots
- High-risk AI systems: these should be the main focus of Requirements Engineering research, according to him.

He says that further guidelines are expected to clarify the AI Act, but they won't come out before 2026. This is the normal process of creating a new regulation: it starts very abstract, and clarifications come slowly. So right now, it is hard to distinguish high-risk systems by intended use vs. non-intended use.

I asked how hopeful we can be about laws being ethical if our society is not really ethical, given that people are the creators of laws (as well as the developers of systems).
He said we should care for liberal institutions to make sure we keep the democratic, autonomous models of our societies.

For high-risk systems, applicability will start in August 2026. When the guidelines come in (around February 2026), we will be able to define more clearly what these systems are.
