
Data, AI and Health: how to manage a that cares for everyone


Formal Metadata

Title
Data, AI and Health: how to manage a that cares for everyone
Number of parts
45
License
CC Attribution 3.0 Germany:
You may use, modify and reproduce, distribute and make the work or its content publicly available, in unchanged or modified form, for any legal purpose, provided that you credit the author/rights holder in the manner specified by them.
Production year
2022
Production location
Wageningen

Content Metadata

Genre
Discussion/Interview
Abstract
Dr. Gemma Galdon-Clavell is a leading voice on technology ethics and algorithmic accountability. She is the founder and CEO of Eticas Consulting, where she is responsible for leading the management, strategic direction and execution of the Eticas vision (https://www.eticasconsulting.com/). One of the sectors where AI has been embedded most rapidly, especially in the past few years, is healthcare. The advantages that might come from this are enormous in terms of time management and efficiency, so enormous that we are missing the point: caring. It is critical to apply ethics and oversight when handling individuals' data that is so crucial and impactful. During this session, we will go through the complexity behind these data and systems and the best practices to ensure success.

Transcript: English (automatically generated)
So as I was saying, most of you have probably seen me either face to face in Toulouse, I believe it was, or in other remote meetings.
I've been one of the ethics advisors for the MOOD project, focusing on issues of data protection. And the idea today is to give you a more general picture of what the issues are. In the MOOD project, we've been very much focused on compliance with Horizon 2020 regulations. Here, I will discuss this, but I also
want to go more broadly into why these issues are relevant. So one of the things that is important to understand first and foremost is that when people discuss artificial intelligence and the current state of technology and how technology has the potential to change how we
live, how we love, how we work, how we research social issues, how we research health issues, one of the things that is often overlooked is that, for all the potential that we have in machine learning and data analysis, the most relevant thing that has happened in the last few years to make
the current moment of artificial intelligence possible is personal data. So the big changes of the last few years are not technological. We haven't had massive technological breakthroughs recently. What has changed is that for the first time in history, we have information about what most people do
all the time in their private and professional lives all over the world. And this is what is changing the possibilities of the technical tools that we already have. So machine learning is nothing new. AI is nothing new. What is new is that we can use these systems
with actual data from citizens. So we don't need synthetic data or dummy data. We actually have information on what people do all the time with everything, when they work, how they feel, but also how the illnesses evolve, how secondary effects make themselves evident
to researchers. So what is at the heart of this technology revolution is personal data. And personal data is like something that is radioactive. It gives you a lot of power. And without that radioactivity, you couldn't get things running, but it can also be a liability.
And if you don't manage it well, you can cause a lot of harm. So what we need to understand is that when we have personal data, we have a potential risk for liability. So we should never collect that data without knowing what we wanna use it for because it is like having something radioactive in your hands. And what the commission wants to ensure
is that any project that is funded with public money takes into account the need to protect this data. So we have clearly issues of personal data and privacy that are linked to this new technological moment, but not only. What is happening right now is that systems
that up until recently had been used in the entertainment or leisure sectors are now moving towards high risk sectors. And so algorithms and data systems that we initially developed, even big data,
that we initially developed for environments where the risks were very low, think the algorithms for Netflix to decide which film to suggest you watch after you've just watched the film. That is AI. And what that algorithm does is it's trained on the decisions
that millions of people have made in the past on what to watch after the same film or series that you just watched. And it gives you a recommendation. So we initially, as a society and also as technologists, we started working with the current state of AI in an environment that was very low risk. If you make a mistake,
if the recommendation that Netflix gives you is not the best or it's not really what you feel like watching, it's not a big deal. But what happens when you implement those same systems, very basic AI systems that only use training data from the past, look for patterns
and look to reproduce those patterns? What happens when we export those algorithmic systems into high risk environments? And what we are seeing is that what we did with leisure and entertainment fields doesn't scale, doesn't go easily into high risk environments.
What we are realizing is that this is the case in high-risk environments such as health, but not only health; the Commission labels as high-risk environments anything related to social services, health, of course, labor, security. So these are the more obvious spaces
where fundamental rights or individual opportunities are at stake. So when we transpose those AI systems into high risk scenarios, what we're finding is that a lot of issues emerge. And so what was good enough for the entertainment and leisure field is not good enough
for the health or education sectors. And let me just give you some examples. And one example that is very much linked to health. One of the only audits, algorithmic audits that we have been able to access in the health domain is the audit for a system used by a network
of a hundred US hospitals that was used to prioritize people in the emergency room. And so these hundred hospitals have been implementing, for the last few years, an algorithm that helps them make decisions on who to see first in the emergency room. When this system was audited,
what the researchers found was that the system had been trained on financial data, not medical data. Therefore, the decision was not about who had a more critical medical condition, but about whose condition would be more expensive to treat.
The interesting thing is that this didn't happen because there was bad faith. It happened because oftentimes those that develop the algorithms have no understanding of what they are automating. And so they just optimize for the data they are given. In the case of these hundred hospitals,
because they were part of an insurance company, the data that the insurance company had was financial data. And so oftentimes algorithms use the data they have and not the data that they need. And again, if you make these kinds of mistakes with Netflix, with GPS, with anything linked to leisure,
not high stakes, high risk environments, you can afford to make those mistakes. You can afford to train an algorithm on the wrong data. Your decisions will be less accurate, but they may be good enough. When you make the same mistake in a high risk environment, you're basically killing people.
You are having huge impacts on people's chances of recovering from an illness. And you're also providing a really bad service. The same audit found not only that there was a major problem with the training data; it also found that the system was systematically discriminating against black Americans.
Again, this was an algorithm only deployed in a hundred US hospitals. Why did that happen? Because what the training data said was that in the past, black Americans had been treated worse, had experienced more delays in treatment,
and also the money spent on them was less than that spent on white people for the same diseases. And so the system learned that, that was what the training data said and reproduced that because in the absence of an audit,
that's what these systems do. And again, if data has historical inaccuracies, you may be able to afford to have this kind of discrimination in leisure or entertainment settings, but you can't afford to re-victimize populations
just because you were not careful enough to check your data for systemic bias and systemic discrimination. And so in just one of the few audits that we have in the medical sector, we are already seeing how this exporting of AI systems used in other environments is not scaling well.
The systems that we see oftentimes implemented in the medical field are too simple for the challenges that we have before us. High-risk AI needs high-risk algorithms, algorithms that incorporate the concerns of a specific high-risk space, but also algorithms that minimize error rates,
false positives and false negatives. What happens, and most of you are also technical people, is that if you change the error rate that you can accept, the whole system changes. And so something that can be done with a 30% error rate is impossible
with a 5% error rate, for obvious reasons. And so oftentimes the technologies that we need in the medical field are different from the technologies that have been developed in other spaces. So oftentimes the providers of technology for the health sector will not be the best ones
if their expertise comes from other low-risk environments. So we have, on the one hand, the issue of personal data and privacy, which is really important, but we're also seeing how issues of discrimination and bad coding of data can lead to lots of new problems. And the Commission is aware of this.
And so in the current framework program, we have additional obligations in terms of making sure that all partners go through these issues and ensure that anything that is developed with public funding understands not only privacy and personal data risks, but also other social impacts related
to the use of AI systems. So I guess what you should always bear in mind is that if you are dealing with personal data and/or AI, you have something radioactive in your hands. And that is great because it gives you power, and it's bad because it gives you liability.
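As a minimal sketch of the point about error rates (the scores, labels and thresholds below are invented for illustration, not taken from the talk or any real triage system), tightening the acceptable false-positive rate on the same model simply moves the decision threshold and pushes errors onto the false-negative side, which is why a system that is tolerable at a 30% error rate can be unusable when only 5% is acceptable:

```python
# Illustrative only: made-up risk scores from a triage-style model and the
# true labels (1 = genuinely urgent). None of these numbers come from the talk.
scores = [0.95, 0.90, 0.85, 0.80, 0.70, 0.60, 0.55, 0.40, 0.30, 0.20]
labels = [1,    1,    0,    1,    0,    1,    0,    0,    1,    0]

def error_rates(threshold):
    """False-positive and false-negative rates when flagging scores >= threshold."""
    flagged = [s >= threshold for s in scores]
    fp = sum(1 for f, y in zip(flagged, labels) if f and y == 0)
    fn = sum(1 for f, y in zip(flagged, labels) if not f and y == 1)
    return fp / labels.count(0), fn / labels.count(1)

for threshold in (0.50, 0.75):  # a permissive and a strict operating point
    fpr, fnr = error_rates(threshold)
    print(f"threshold {threshold:.2f}: false positives {fpr:.0%}, false negatives {fnr:.0%}")
```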
Now, how to address, how to minimize the risk for liability? The first thing is to understand when it is that you have personal data. And we see a lot of issues around this. Oftentimes technical groups don't have a good understanding of what personal data is. Personal data is not only your name and your address.
Personal data is any information that may lead to your identification. So an IP address is also personal data. GPS data is also personal data. So there's all these things that are not obviously personal data, but could lead to us being identified.
So anything that could lead to us being identified should be treated as personal data. So if you have this kind of personal data, not just names and addresses, but anything that could link someone to their identity, you are dealing with personal data. And if you add health data, then you are dealing with sensitive personal data.
And sensitive personal data is subject to additional obligations in the current legislation at the EU level. If you are managing sensitive personal data, you have radioactive material, great,
very, very, very toxic. And so you need to be very, very careful. So the first step is to identify whether you actually are dealing with personal data and with sensitive personal data. Once data is labeled as sensitive and personal, you need to take a series of steps. What I would recommend here is that if you are not interested in people,
if you don't care if whoever lives in that place is called John or Sophie, if that's not the focus of your research, anonymize as soon as possible. So make sure that your systems only have anonymize or robustly pseudonymize data.
You will get rid of a lot of liability if you do that. So do not go for what most people do, which is I just want as much information as I can. So if I can know your name and address, if I can know your IP address, if I can know your geolocation, I'll collect it just because I can. This is something from the past.
What we have done with data in the past, we cannot do anymore. We cannot do it under GDPR, and we cannot do it under the six pieces of legislation around data that are currently being discussed in the EU Parliament. So you wanna make sure that you deal with this toxic asset, this radioactivity, and that you remove the radioactive element
if you don't need it. Of course, sometimes you will need it, and that is fine, and we can find ways for you to use it. But if you don't need it, remove liability at step one. So either collect it and anonymize it immediately, or do it as soon as possible. If you do that, and you've seen that in the MOOD project, we have been able to clear many tasks in the project
because actually these things were minimized and we took steps to ensure that data was anonymized at the first possibility. If you anonymize, all your troubles pretty much go away in what concerns personal data.
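As a minimal sketch of what "anonymize or robustly pseudonymize as early as possible" can look like in code (the field names and the keyed-hash approach here are illustrative assumptions, not a prescription from the talk or the MOOD project):

```python
import hashlib
import hmac
import os

# Hypothetical incoming record: the field names are illustrative, not from any real dataset.
record = {
    "name": "Jane Doe",
    "email": "jane.doe@example.org",
    "ip_address": "192.0.2.17",
    "postcode": "00100",
    "symptom_onset": "2022-05-03",
}

# Keep the key outside the research dataset; anyone who only sees the data
# cannot reverse the pseudonyms without it.
PSEUDONYM_KEY = os.environ.get("PSEUDONYM_KEY", "replace-with-a-real-secret").encode()

def pseudonymize(value: str) -> str:
    """Keyed hash: stable for linkage within the project, not reversible from the data alone."""
    return hmac.new(PSEUDONYM_KEY, value.encode(), hashlib.sha256).hexdigest()[:16]

def minimize(raw: dict) -> dict:
    """Drop direct identifiers, coarsen quasi-identifiers, keep only the research fields."""
    return {
        "subject_id": pseudonymize(raw["email"]),  # linkage key, not an identity
        "region": raw["postcode"][:3],             # coarsened location
        "symptom_onset": raw["symptom_onset"],     # the variable actually needed
    }

print(minimize(record))  # no name, email or raw IP address in the research copy
```

Destroying or rotating the key later pushes the pseudonymous IDs towards effective anonymity; whether that is sufficient in a given case is a question for the DPO and the data management plan.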
If you cannot anonymize personal data, you need to go to your DPO. All your institutions should have a data protection officer, who needs to help you come up with a data management plan. And the data management plan is what you will need to use in case something goes wrong. If at some point anyone complains about how you are using their data or what you have done,
the only thing that will save you is proving that you actually took a risk-based approach, that you identified that you had personal data or sensitive personal data, and that you had a data management plan that ensured that the way you cared for that data was in line with the current legislation. Which basically means: make sure that you protect the data, that you do not use it for things
other than what you collected it for, that it is kept in safe places, that there is someone who is responsible for that data set, that if people want to access that information, they can, based on their consent, and, this is very important, that you are not reusing personal data. If you collect personal data for one project, you can use it for that project only,
unless in the consent form you made it explicit that you were gonna use it for another project. But this is important: in the consent form, you can say, I wanna be using this data for this project and another project, but you need to know what that other project is about. So you cannot say, I'm gonna be using this for project A
and then a project B that I may develop in five years' time. You cannot do that. You need to say, I'm gonna be using this for project A and B. The projects need to be already defined. If they are not defined, you cannot gather consent, because people don't know what they're consenting to. And so if you gather consent for A and an indeterminate B,
that consent would not be valid and you would be in trouble. So, very important. And again, this is very much common sense. People's data speak about very intimate parts of themselves. The data that you are collecting may be no risk for you. If that data is gathered by another entity,
it could be high risk. Like things that are collected for health reasons, what if that data fell into an insurance company? What would an insurance company do with that data? So in your environment, it may feel like the data is completely safe, but if you don't protect it well enough, that data may end up being sold or stolen by a third party who then uses it to harm
or to have an impact on the life of that person or those people. So again, it's just about thinking, how can I make sure, since I am dealing with this, which is radioactive, how can I make sure that I avoid spills, that I avoid leaks, that I keep everything contained and secure
and that I have explained and justified why I need radioactive material? So that's for the personal data. Consent, a data management plan and security in keeping that information are paramount. Then, if you are gonna use AI tools,
you need to make sure that your AI tools are not discriminating against specific groups. And this is something that we find all the time. I don't see you all, so I don't know how many women are in this session, but you should know that every time we women go to the bank, we receive between 10 and 20 times fewer services than men.
And that is only because we are underrepresented in the data sets. And again, if you are underrepresented in a training data set for Netflix or GPS, that is fine.
But if you are underrepresented to access a mortgage or a loan or medical treatment, then that is a problem. So you should always test your systems for fairness. If you are using AI systems, you need to identify who the different collectives, who the different groups in your data sets are
and do tests, perform tests to ensure that specific groups are not affected disproportionately by the systems that you are using. And we have lots of examples in the medical field. The medical field is one of the fields where we're seeing more and more uptake of AI. We are finding systems that can derive gender
even when we as scientists haven't found a way to see gender in X-rays of people. AI systems can find gender. And so if they can find gender, they can also discriminate against specific genders.
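As a minimal sketch of the kind of per-group test being described (the group labels, outcomes and the 80% rule-of-thumb threshold below are illustrative assumptions, not something from the talk), you can compare how often each group receives the favourable decision and flag large gaps for investigation:

```python
from collections import defaultdict

# Hypothetical audit sample: (group, decision), where 1 means the person
# received the favourable outcome (e.g. was prioritised or approved).
decisions = [
    ("group_a", 1), ("group_a", 1), ("group_a", 0), ("group_a", 1),
    ("group_b", 0), ("group_b", 1), ("group_b", 0), ("group_b", 0),
]

totals, favourable = defaultdict(int), defaultdict(int)
for group, outcome in decisions:
    totals[group] += 1
    favourable[group] += outcome

rates = {group: favourable[group] / totals[group] for group in totals}
print("favourable-outcome rate per group:", rates)

# Rule-of-thumb check: the worst-served group should receive the favourable
# outcome at least ~80% as often as the best-served group.
ratio = min(rates.values()) / max(rates.values())
print(f"min/max ratio = {ratio:.2f}", "-> investigate" if ratio < 0.8 else "-> no large gap")
```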
So you need to understand what the different clusters are, the different groups that your AI will come up with, and see whether for any of those groups there is a risk of discrimination because, in the past, those groups have been treated differently. So this is the main thing that you need to do. What we would always recommend
is that you always audit your systems, that you make sure that, before you implement an AI system and once a year throughout its deployment, you have an independent third party that comes and audits your system and gives you guarantees that the system works well. Because otherwise it's very, very likely
that you will never have certainty as to how the system actually works. And you may be relying on a system that makes bad decisions, not only from a medical perspective, but also from a legal perspective and in terms of your ability to provide a good service or to get to good results. So these are for me the two things that I think you should bear in mind the most.
Issues of personal data and issues of automation. And now, because I know that with these issues the main thing is the specifics, I wanna stop my presentation here and open a space for discussion, to see whether you have examples or things that we can address, concrete examples where we can see how to manage that data
in ways that are acceptable for the legislation and also acceptable for the population. So if you've had examples of issues that you've come across, either with personal data or with AI systems, please feel free to share them here or ask questions. I'm very happy to address them. Thank you, Gemma.
It was a really interesting talk. This is a little bit outside of personal data, but it's an experience that we have in Finland, and it's my usual favorite disease, tick-borne encephalitis. MOOD colleagues are gonna be sick of it in the end, I think. So, tick-borne encephalitis is a very, very focal disease.
So usually it's transmitted by ticks in a very delimited area. We have a National Infectious Disease Register, which contains a lot of personal data. And thanks to that data, we have place of residence, for example, which we use to assess the risk in different areas
and provide vaccine recommendations. And there is a benefit to having that place of residence. The problem with tick-borne encephalitis being a very, very focal disease is that progressively, and it's possibly gonna happen more and more, we will end up saying that you have to be vaccinated
if you live in the neighborhood or the vicinity of this park in Helsinki, or this village close to that forest. It's not really a direct risk to the personal data, but do you think there is a risk that, in the near future,
we might not be able to do that? Because it has a benefit to the population, but it has a risk too. I'm just opening the discussion there. Yeah, over. In my experience, with the right understanding of the risks, you can do everything.
It's just about understanding those risks and mitigating them. So in this case, my main question would be: is this disease stigmatizing? Is there any kind of social stigma linked to having this disease? With AIDS, for instance, having AIDS was a huge problem, and continues to be a huge problem, and it is also a disease that points to your sexual
customs or preferences. Is that the same case with the disease that you're discussing? There is no stigma at all associated with this disease. It just means that you like to hang out in the forest. But as a medical doctor, initially, for me,
secrecy applies to everything, not only to what is stigmatizing. Of course there is no risk of direct identification, but there is a balance when there are only a few inhabitants in a very small area: from the moment
that we recommend a vaccine for an area which has maybe 300 persons only, this is gonna be problematic. It goes with personalized medicine. Personalized medicine is also saying people like you are more likely to be at risk. And if we say in the open that people who have this feature are more at risk of this disease,
then there's a balance that's complicated. And I'm worried that in the future, by wanting to be too protective, we might miss some opportunities for preventing other diseases in the broader group. But it's just an open discussion. The legislation is not going in this direction. The legislation is not going in the direction
of forbidding things, of not allowing you to do things. What the legislation is saying is: because this is radioactive, you need to take precautions. And so what you need to do in this case is document. So you need to document what the disease is about, why you are deciding to focus on a few people, and you can do a cost-benefit analysis. Like, if we gave vaccines to everyone,
it would be very costly, but also the secondary effects would reveal themselves in a much larger population that doesn't really need the vaccine. Then you can also make sure that the personal data is handled in a centralized way, so that it's not GPs or individual doctors that have access to this information, but you only have like a central space
that has this data and sends letters and communicates to these people: you have been offered to be vaccinated because of this, this, this, and this. And be very clear about why they have been picked to be vaccinated, so that they can also feel empowered to decide whether, actually, I'm registered as a resident here, but I haven't lived there in a long, long time,
because that may also happen. You may have registries that are not good enough. You may have people living there who are not registered. So there's all these things that you need to take into account. But the important thing here is to document. So any decision that you make, do the exercise that you were doing, there's pros and cons, but document it. So it's not only in your mind,
but it's actually a formal document that accompanies the policy. And if anything goes wrong, you can say, these are the things that we took into account. You may not agree with them and we can revisit them, but at least we made the exercise. The problem we're having right now is that people are doing things like this, like emailing people who may have AIDS,
or we have it now with monkey pox. You know, just not taking precautions or being careful about the implications of what they are doing. So the problem right now is that no one is documenting. If you are documenting and making the exercise of putting on a scale, the pros and cons,
you are already ahead of everybody else and you are in line with what the regulation requires. One of the things that we are finding is that oftentimes lawyers in organizations discourage anything that has to do with personal data out of fear, when actually that is not the spirit or the letter of GDPR.
GDPR says: you can do it, just make sure you cover your bases. And I wanna be able to verify that you have covered your bases. So just you saying, oh, don't worry, I know what I'm doing, is not enough. I want the actual report. I want the documentation of all the procedures or all the meetings you've had to make sure that your decisions have been approved,
sometimes even by civil society representatives. If you wanna cover all your bases, I would say you can even approach some local medical boards and talk to them about the choices that you are making, get their validation and then go ahead with the policy. So again, it's only about covering your bases. We can do everything with data, and the law is very clear about this,
as long as we protect people. And of course, some things cannot be done while protecting people and that's when we have problems. But that's a very small subset of cases. In most cases, we can find ways to make sure that you can use the data in ways that protect the public. Thank you. Thank you for that point.
Elena, did you have a question or something, a comment? Yes, always. So first of all, I would like to thank you a lot for your contribution in the MOOD project. And even though we don't meet so often,
but still your work and Anne's work have an impact; even if it's a small impact, for me it's an important impact. So I can share my experience. You know that at CIRAD, with Mathieu Roche, we have actually developed a tool which is called PadiWeb. So it's a tool that monitors Google News.
And we promote it very much in the MOOD project. So the point of the tool, of course, is not to monitor people's opinions, is not to monitor politicians and what they say in the media, no. It's about discovering news that talks about outbreaks.
However, very often in the news, there is a name. So the Minister of Health said that there is the first case of monkey pox, for example. Or similar information, the Ministry of Agriculture, or Emmanuel Macron, he did a ban, lockdown measures, et cetera, et cetera.
So do you still think that this is personal information that should be anonymized? We did that, actually; very, very recently we did that. And I can say it's thanks to your insisting and your emails, so that people really, really understand.
So yes, please. Yes, yes, it is personal data, and there is personal data in there. And this is something we've encountered a lot with, for instance, doing analysis of court records. So people say, oh, I have all these court records. And of course you have the name of the perpetrator, and sometimes also the victim, in court records.
If they published it, why can't I? Well, you're not them. And it's the same with the media. The media, because of freedom of information, have some legal coverage that you as researchers don't have. And so when you are moving that information from a purely informative context
into a research context, the guidelines that apply to you are the guidelines of research and you need to anonymize. You don't have the same affordances that the media have because you are not the media. And so it is really important to always remember this.
Whenever you are moving data from one place to the next, again, everything changes. There are issues of misuse of data, the legal framework may change. And so the fact that something is available doesn't mean that you can use it. And it's like this: the fact that you see
a car doesn't mean you can get in and drive it. It's the same thing. Yes, you see it. Yes, it seems available, but you need to take the precautions that apply to your specific exercise when dealing with that data. Because in your processing of that data, you are generating new risks.
And so when a newspaper reports about monkeypox, that day they have like 200 pieces on different things, and one of them is monkeypox. So you may hear about one case. If you are processing all that data and gathering all the information on monkeypox in the same place, if someone wants to attack monkeypox actors,
you are facilitating that space. So you have changed the nature of that space. And yes, you have used public information, but you have created a space that is more open to misuse. And that's why you need to implement additional ways of protecting that data. And again, it's not a difficult exercise
and you are not interested in the people. You don't care who they are, or even where they are located, for purposes of dissemination. Maybe you want to know where they're located for your own records, but then you also need to discriminate: am I doing this exercise for internal reasons?
If so, I can delete the name but keep the location. But if I'm doing this for external purposes, I want to put it on my website, then I would remove the name and maybe also the location. Because if people see that a lot of people with monkeypox are in Germany, that may become an issue. So again, it's always about the context and thinking: what if it was my name?
Would I want to see my name in that space? And if you wouldn't want to see your name in that space, then maybe you can find ways to anonymize it. And again, this doesn't harm science. You can still analyze that data; you just cannot make it public in a way that makes it easy for others to analyze it in ways that are detrimental to people's rights. Yes, thank you.
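As a minimal sketch of the kind of name redaction being discussed for news text (this assumes spaCy and its small English model are installed; it is not how PadiWeb actually does it, just one way the step could look):

```python
import spacy

# Assumes the model has been installed with: python -m spacy download en_core_web_sm
nlp = spacy.load("en_core_web_sm")

def redact_person_names(text: str) -> str:
    """Replace person names detected by the NER model before the text is stored for research."""
    doc = nlp(text)
    redacted = text
    # Work backwards so earlier character offsets remain valid after each replacement.
    for ent in sorted(doc.ents, key=lambda e: e.start_char, reverse=True):
        if ent.label_ == "PERSON":
            redacted = redacted[:ent.start_char] + "[REDACTED]" + redacted[ent.end_char:]
    return redacted

article = "Health Minister Jane Smith confirmed the first monkeypox case in the region."
print(redact_person_names(article))
```

Automatic NER will miss some names, so a manual spot check, or a stricter rule for small or sensitive collections, is still worth keeping in the protocol.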
So yes, that's what we are doing. So we are very happy because we are not just pure modelers, scientists, epidemiologists, but we really are motivated to solve as much as we can, of course, with our competencies, issues regarding data protection.
And also show others how it can be done. Yeah, well, I'm happy to share it with the others. And then another thing that we did is that, because it's a tool, it gives access to a certain number of registered users. So we have also created a protocol on the use and the protection of the data from the users of PadiWeb,
because I have access to the system and I need to show that I know how to protect the data on all the users: telephone numbers, emails, what they do with our data, et cetera. So this is what we are currently doing. Very good.
They need to know the things that would empower them to make a decision. So I think there's a template that we share with projects that I can share with you. Again, there are like seven things that you need to make sure are incorporated.
Like, you need to describe the project. You need to have a DPO that is identified. You need to state the time during which the data is gonna be retained. If you are collecting personal data, you need to explain how it's gonna be anonymized. You need to tell people how to exercise their access rights to that data.
There's like a set of things; it could actually be like a checklist. And I'm all in favor of consent forms that are very, very short and very to the point, because for people to understand what is happening, they don't need three pages. They probably will not read three pages. So you wanna make sure they understand. So you tell them: this project is about this,
funded by this, this is what we're gonna do. You've been contacted because of this. The interview or the exercise is gonna last for that much time. We're gonna be recording it, taping it, whatever it is that you are doing. We're gonna be processing your data in this way, that way or this other way.
We will anonymize it at this time, ideally at the very beginning. So you don't even collect their personal data, but if you collect it, we'll anonymize it at this step. And if you ever have questions about this exercise, you can contact the DPO and you can also exercise your access,
rectification and deletion rights before the DPO. And that's it. And you ask for a signature of the person and the date. That's it. I also see a lot of consent forms that then ask for a lot of data from the person giving consent, which just kind of defeats the purpose.
Well, you don't need their name and the date of birth and address. You just need a signature, not even the name. Like if anything goes wrong, the signature is enough to see whether it was that person. So just signature and date. Like do not abuse people's data when you are fulfilling a step
that is designed precisely to protect people's data. But again, for me, one page, one page and a half is more than enough. And one thing that I encourage in more complex projects is to have a table at the end so that people can agree and disagree over certain things. So they may agree to take part, but not to be video recorded.
So they should be able to say: yes, I agree to take part. Do you agree to being video recorded? No. Do you agree for your data to be used in scientific papers without being anonymized? No. So ask; provide what we call granular consent, in which people are empowered to make decisions about how you use that data.
So have a grid at the very end with the different activities that you wanna do with the data, and people can say yes or no. And then you act on the basis of that information. Thank you. So basically, we always need to give the option to say no, of course, to each item.
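As a minimal sketch of how such a granular consent grid could be captured and acted on (the items below are invented examples, not a template from the project):

```python
# Hypothetical granular consent record: one explicit yes/no per planned activity.
consent = {
    "participate_in_interview": True,
    "audio_recording": True,
    "video_recording": False,
    "quote_with_full_name": False,
}

def allowed(activity: str) -> bool:
    """Only proceed with activities the participant explicitly agreed to; anything not asked defaults to no."""
    return consent.get(activity, False)

if allowed("video_recording"):
    print("start video recording")
elif allowed("audio_recording"):
    print("participate with audio recording only")
else:
    print("participate with written notes only")
```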
But oftentimes consent forms are like, oh, so if you don't accept all this, you're out. When actually the fact that someone is not video recorded, they may still be valuable to the exercise. So give them a chance to participate without the video recording. Or if they don't wanna be quoted with their full name afterwards, maybe you don't need to put them with their full name.
So give them the opportunity to be quoted anonymously in the future. And that allows for you to have access to these people. So it makes it easier for you to build the trust with more people who may not be comfortable with giving you their information, especially in health settings, that is quite important. Being very clear about why are you collecting this data?
How is it gonna impact me? Am I signing something that can harm me? If you give them this kind of control, you're also building trust in the process. I'm a big advocate for seeing consent, not as a burden, but as an opportunity to have a conversation with the people that we wanna engage. And it's actually a moment where we can build that trust and not see it as like a tick box exercise
or just sign this because we just have to do this consent thing. Actually explain it, make sure that they understand, and build that trust. That will mean that they'll be more comfortable doing the exercise. And therefore they will provide you with more of the information that you need for your research. Yeah, exactly. Often, as a user, for example, or a person that uses the internet,
I am very confused about what this is. Why am I doing it? And it's always very blurred, which I think is then a bad exercise, right? And most of the time we don't know what our rights are and what the GDPR is in the EU.
So I think it's very important to provide a very clear, simplified, but informative form. Yeah, and don't forget that laws are just our way of coding social values. So don't get stuck on the law. We are trying to protect people.
We are doing all this to protect people. So never forget this. And when lawyers come to you saying, oh, you need to add three more pages to this consent form, this doesn't cover everything. We're like, no, three more pages do not protect people because that goes against clarity. So bear in mind what we are trying to protect here. We're not trying to protect the law.
We're trying to protect people. And so always follow that principle. That's what I do in my practice. And I find that in my projects, people are a lot more creative and a lot more innovative in engaging with these kinds of things and kind of breaking away from this bureaucratic process of gathering consent and doing the data management,
but actually seeing it as an opportunity to get things right, to get everyone on the same page, discussing what it is they wanna do with this project. What data do we actually need and actually sharing an understanding of what data is useful and what data is just a liability that you don't want. So actually addressing these things proactively
at the very beginning contributes to better science. That's very much my experience, and I'm convinced that this is the case. We shared templates at the very beginning of the project that have then been adapted and readapted. And I would say, I don't think that the consent forms are the best; I think they are too long. But if you go back to those initial templates, I think that's a good example.
And again, this is recorded, so in what I described, I was just mentioning all the steps that you need to follow. So just make them yours, adapt them to your project. I counted them once; it was like seven steps, the things that need to be incorporated, and that's it.
It's an exercise of trust building and transparency. Thank you. And the more we do, the more we can get better at it. Definitely. And have more examples from different projects and different settings. Yes, thanks.