AI on the Cheap

I am repeatedly asked whether ChatGPT is worth paying for, what other products I would recommend, and, if not ChatGPT, which product one should subscribe to.

Here’s the answer.

Yes. ChatGPT is worth paying for; it is even worth paying $200 a month for… for some people. Whether you are one of these people is the real question.

So let’s start here. There’s ChatGPT, but there are also several competing products with both free and paid tiers: Anthropic’s Claude AI, Google’s Gemini, Mistral’s Le Chat (EU-based, so GDPR, privacy, etc.), xAI’s Grok, and Perplexity. With the exception of Mistral, these are US-based services. And there are several additional free ones worth mentioning: Meta (Facebook) has a free chatbot, which may or may not be available in your country; the Technology Innovation Institute’s Falcon Chat (from the United Arab Emirates, great for Arabic) is free; and there are a number of Chinese products, like DeepSeek Chat or Baidu’s Ernie Chat. There’s even a free Hungarian Puli Chat developed specifically for Hungarian use by the ELTE Research Centre for Linguistics.

If you are wondering whether you should subscribe to a paid product, you should already be using all of these chats (especially the ones with a paid tier) regularly. Try them out. Compare. See which gives you better answers. Which one do you like best for your own use cases? I can give you a few of my impressions. Gemini is a good workhorse, and its research team was second to none even before the ChatGPT era. Claude AI is pushing AI safety and ethical development, which I like; I also think its models are the best writers. Mistral’s Le Chat is weaker than the others, but its coding model impressed me. It is also cheaper, and if privacy and GDPR are important to you, this is an EU-based company. xAI’s growth has been impressive: they built the biggest data center in almost no time and caught up to the rest, and their most advanced models take interesting approaches (though that tier costs $300 per month). Grok is trained on Twitter (X) data, and Elon Musk is at the helm, so buyer beware; I personally don’t have enough experience with it. Perplexity, for the most part, does not use its own models; they buy the AI from others. Their product is generally well liked, and their paid tier lets you use many different models, though I would expect those models to be limited in some ways. Personally, Perplexity did not impress me at first. But now I have a Pro plan, and they released a whole AI browser (which is cool, and they did this before OpenAI released Atlas), so I am using it more and more. It is growing on me. If I did not already have subscriptions to Claude and Gemini (I just cancelled ChatGPT, actually, though I am not sure how long that will last), I would be all over Perplexity, specifically because I can use many different models in their chat.

Ethan Mollick, someone whose name you should know if you care about the AI world, once said that you need to use an AI chat product for 10 hours before you really know what it is capable of and how to use it effectively.

So, if you have not put your 10 hours in on the above products, you do not need a subscription.

Once you have put in your 10 hours, you will also know the limits of these free products. You will have preferences. You will know which one you like and which one you don’t. That will answer your next question. Which product should I subscribe to?

But if you really want better-than-free AI on the cheap, there are two options. Mistral is a bit cheaper than the others, and if you are in “Education” (student or faculty), you are eligible for their discount, which brings the price down to 6 EUR (+VAT), or even less with the annual plan. Again, I don’t think their models are as good as the others; I see more hallucinations and factual errors, though their coding and Linux tech support skills impressed me over the past few weeks.

The other, slightly cheaper way to get one of these products is through Revolut. Right now, Revolut offers a Perplexity subscription as part of their Premium (and above) tiers. If you already subscribe to a Premium Revolut plan, you can get this for free. And if not, a Premium plan is much cheaper than the subscription itself. If you sign up because of this post, please use my invite link.

Something has gone terribly wrong with the academic administration of privacy and ethics

Today, I was asked to submit a data processing plan for a project that uses anonymized survey data anyone can download from the Internet (3%) and computer-simulated data (97%). Admittedly, the European Social Survey (and similar datasets) were collected from human respondents, but they were released only after careful anonymization, and the data collection was done with careful ethics considerations in mind. So, when I download and analyze this data, why do I have to complete bureaucratic paperwork? (And if I do, why is the administrator tasked with ensuring that I do it not also tasked with facilitating the process, to the level of “I prepared a template based on what you said; sign here, please, and let me know if you have questions”? Come at the problem with a service mentality, not an enforcement one.)

I taught an AI workshop at a prominent Western European university a couple of weeks ago. We discussed how NotebookLM can be used for notes and for summarizing research papers (even into a podcast you can listen to on the way to school to catch up on the topic you need to be an expert on that day). It can even be used to analyze texts qualitatively. We had an extensive discussion about GDPR and what data export means. We discussed how pasting sensitive or personal info into that box and sending it to Google’s servers might actually be illegal, when it is illegal, and so on. And what can you do if you have personal data you cannot analyze with NotebookLM and need an alternative? I believe this is an incredibly important thing to know as a researcher.

During the next break, one of my students told me that they could not access this website. It turns out that someone at their institution had decided that this tool (which is helpful for a million things) should be banned on university laptops because it uploads to US-based servers. The explanation given was that the local GDPR officer wanted to ensure that no personal data is uploaded. I asked whether Gmail (which also uploads to US-based servers) is blocked, and what about Google Docs? Neither is blocked, needless to say. Google’s search engine (which has a search box that one could put personal data into) wasn’t blocked either. And the computer was probably running Chrome.

A person who doesn’t understand these things is more likely to email personal data with Gmail, upload it to their Google Drive, or edit it in Google Docs than to paste it into NotebookLM. Wouldn’t both scholars and the end goal of ethical research that does not violate privacy laws be better served by education rather than academic policing? I mean, you can lock down people’s computers to the point of complete unusability (as another Western European institution is currently doing with a friend’s laptop, not even allowing him to install R packages). But the only result will be that people buy and use personal machines instead. What a waste of resources.

Now I am talking to another prominent university (one that is currently advertising a full-professor position in AI in the social sciences), and they told me that under no circumstances can people use OpenAI or Google AI products at the school. (I am sure Claude would be in the same category if they knew what that was.) That’s great. So, what exactly can I use to research AI? Or is the high salary there so I can buy enough personal machines? (I got the impression this is the case. Some regulations exist, and some people are allowed to violate them. Rules may be different for the equals and the more equal.)

Please allow me to go on the record and, without much additional elaboration, say this is absolutely ridiculous.

I recommend:

1. Organizations educate and trust their employees, rather than police them.
2. Decrease the bureaucratic burden on researchers by reassigning the bureaucrats who were tasked with policing them to assisting them instead.
3. Instill a culture of ethical research not through rules, regulations, and enforcement, but through emphasis on (and training in) ethical research.

On this last point, the world of academia has gone completely off the rails. At this point, scientists are trained in how to pass ethics reviews, not in the ethics of research. This produces a culture of box-checking without any consideration of the actual ethics of the work. (And often those boxes make absolutely no sense.) Recently, a good friend and colleague and I were discussing research that, if conducted and if it worked, could have massive societal consequences, some of them potentially bad. An ethicist friend was walking by, so we grabbed the person to include them in the conversation. It took us a few minutes to get the person to snap out of the “you need to do X, Y, and Z, and then there should not be any issues with the institutional review” mode of operation (admittedly, the question they surely receive most often) and engage with the real question: what are the ethics of doing this research?

Also, recently, I was talking to a colleague about the potential ethical implications of his work. His response was about what he needed to pass the ethics review. Then he proclaimed that he doesn’t care about ethics. I had to object strongly, because I knew he cared about ethics. I know the person well, and nobody has greater personal and scholarly integrity. What he meant was that he hates that ethics has become this mundane, meaningless bureaucratic burden, and he does not want to talk about it. (Then we had a good discussion about the project and its ethical implications.) I hope this highlights the problem well: research ethics has become this ethics-review blob, and integrity is defined through its enforcement. This is bad. Very bad.

Our current academic practices, in both privacy and ethics, are stupid. Please start trusting researchers. When they know what they are doing, they do the right thing. And the ever-expanding bureaucracies of academia could benefit from a service mentality rather than an enforcement one. It makes for more and better research and a much better work environment.

We need change!