Something has gone terribly wrong with the academic administration of privacy and ethics

Today, I was asked to submit a data processing plan for a project that uses anonymized survey data anyone can download from the Internet (3% of the data) and computer-simulated data (the other 97%). Admittedly, the European Social Survey (and similar surveys) was collected from humans, but it was released only after careful anonymization, and the data collection itself was done with careful ethics considerations in mind. So, when I download and analyze this data, why do I have to complete bureaucratic paperwork? (And if I do, why is the administrator tasked with ensuring that I do it, rather than with facilitating the process to the level of: “I prepared a template based on what you told me; sign here, please, and let me know if you have questions”? Come at the problem with a service mentality, not an enforcement mentality.)

I taught an AI workshop at a prominent Western European university a couple of weeks ago. We discussed how NotebookLM can be used for taking notes and summarizing research papers (even into a podcast you can listen to on the way to school to catch up on the topic you need to be an expert on that day). It can even be used to analyze texts qualitatively. We had an extensive discussion about the GDPR and what data export means. We discussed how pasting sensitive or personal information into that box and sending it to Google’s servers might actually be illegal, when it is illegal, and so on. And what can you do if you have personal data you cannot analyze with NotebookLM and need an alternative? I believe this is an incredibly important thing for a researcher to know.
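For the curious, here is a minimal sketch of one such alternative: running a model locally so the data never leaves your machine. It assumes an Ollama server running on localhost and a model (the name “llama3.1” below is purely illustrative) already pulled onto the laptop; this is one option among several, not what we necessarily used in the workshop.

```python
# Minimal sketch: summarize sensitive text with a locally hosted model via
# Ollama's REST API. Assumes `ollama serve` is running on this machine and a
# model such as "llama3.1" has been pulled (both are assumptions for
# illustration). Nothing is sent to an external server.
import requests

def summarize_locally(text: str, model: str = "llama3.1") -> str:
    """Ask a model running on localhost to summarize text; data stays local."""
    response = requests.post(
        "http://localhost:11434/api/generate",
        json={
            "model": model,
            "prompt": f"Summarize the following interview transcript:\n\n{text}",
            "stream": False,  # return a single JSON object, not a token stream
        },
        timeout=300,
    )
    response.raise_for_status()
    return response.json()["response"]

if __name__ == "__main__":
    print(summarize_locally("Respondent: I moved here in 2019 because..."))
```

The specific tool matters less than the point: researchers handling personal data should know that local options like this exist, and that knowledge comes from education, not from blocklists.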

During the next break, one of my students told me that they could not access the NotebookLM website. It turns out that someone at their institution had decided to ban this tool (which is helpful for a million things) on university laptops because it uploads data to US-based servers. The explanation given was that the local GDPR officer wanted to ensure that no personal data is uploaded. I asked whether Gmail (which also uploads to US-based servers) was blocked, and what about Google Docs? Needless to say, neither is blocked. Google’s search engine (which has a search box one could put personal data into) wasn’t blocked either. And the laptop was probably running Chrome.

A person who doesn’t understand these things is more likely to email personal data via Gmail, upload it to Google Drive, or edit it in Google Docs than to paste it into NotebookLM. Wouldn’t both scholars and the end goal of ethical, privacy-law-compliant research be better served by education rather than academic policing? Yes, you can lock down people’s computers to the point of complete unusability (as another Western European institution is currently doing with a friend’s laptop, not even allowing him to install R packages). But the only result will be that people buy and use personal machines instead. What a waste of resources.

Now I am talking to another prominent university (one that is currently advertising a full-professor position in AI in the social sciences), and they told me that under no circumstances can people use OpenAI or Google AI products at the school. (I am sure Claude would be in the same category if they knew what it was.) That’s great. So, what exactly can I use to research AI? Or is the high salary there so I can buy enough personal machines? (I got the impression this is the case. Some regulations exist, and some people may violate those regulations. The rules, it seems, are different for the equals and the more equals.)

Please allow me to go on the record and, without much additional elaboration, say this is absolutely ridiculous.

I recommend:

1. Organizations educate and trust their employees, rather than police them.
2. Decrease the bureaucratic burden on researchers by reassigning the administrators currently tasked with policing them to assisting them instead.
3. Instill a culture of ethical research not through rules, regulations, and enforcement, but through emphasis on (and training in) ethical research.

On this last point, the world of academia has gone completely off the rails. At this point, scientists are trained in how to pass ethics reviews, not in the ethics of research. This produces a culture of box-checking without consideration of the actual ethics of the work. (And often those boxes make absolutely no sense.) Recently, a good friend and colleague and I were discussing research that, if conducted and successful, could have massive societal consequences, some of them potentially bad. An ethicist friend was walking by, so we grabbed them to join the conversation. It took us a few minutes to get them to snap out of the “you need to do X, Y, and Z, and then there should not be any issues with the institutional review” mode of operation (admittedly, the question they surely receive most often) and to engage with the actual question: what are the ethics of doing this research?

Also, recently, I was talking to a colleague about the potential ethical implications of his work. His response was about what he needed to do to pass the ethics review. Then he proclaimed that he doesn’t care about ethics. I had to object strongly, because I knew he cared about ethics. I know him well, and nobody has greater personal and scholarly integrity. What he meant was that he hates that ethics has become this mundane, meaningless bureaucratic burden, and he does not want to talk about it. (We then had a good discussion about the project and its ethical implications.) I hope this highlights the problem: research ethics has collapsed into the ethics-review blob, and integrity is defined through its enforcement. This is bad. Very bad.

Our current academic practices, in both privacy and ethics, are stupid. Please start trusting researchers. When they know what they are doing, they do the right thing. And the ever-expanding bureaucracies of academia could benefit from a service mentality rather than an enforcement one. That makes for more and better research and a much better work environment.

We need change!