More on AI Literature Reviews in the Social Sciences

Recently I needed to throw together an overview of the literature on a specific topic. The application was, surprisingly, non-scientific: people who work in the actual world of politics asked me what the literature says about a certain topic. This gave me another opportunity to test AI tools and their capabilities, especially since the topic was not something I was neck-deep in. Simply put, without AI I would have had to do the literature discovery from scratch using something like my classic literature review technique.

If you haven’t read my old piece on AI literature reviews, it mostly still stands. Start there. Not here.

To make a long story short, after trying many things that worked very poorly (lots of inaccuracies and hallucinations), I finally cracked it and found a fairly good way of doing this. The downside is that it is incredibly work-intensive. Maybe not as work-intensive as doing the classic literature review from scratch, but it really pushed the boundaries of what is and is not possible with AI today. All in all, I had a lot of fun doing it once I got past the excruciatingly long and painful preparatory phase.

I tried all the tools in my old piece, and then some, to try to automate this literature review process. Unfortunately, even scite_ is becoming less and less capable of social science research as they put up more and more guardrails. Its writing is shittier and shittier. Article discovery is, unfortunately, quite poor, as (at least without tweaking) it does not find the most relevant literature. It is still the best thing for discovery, but what it writes up is inadequate. (It is good enough for the natural sciences, but not for social science.) I also tried more creative approaches, but all of my attempts led to the AI model just making shit up.

But I recently heard on a podcast, one of the three early April 2024 episodes of The Ezra Klein Show, that current models can keep around a book's worth of information "in mind" while chatting. And since I now have a Claude AI subscription, I figured I should put this to the test. I grabbed a whole year of a relevant topical journal and figured I would try to chat with the model about the articles published in the journal that year. (Yes: download them one at a time, merge the PDFs, and upload.) But I was told it was too much. So I did this for half a year instead. It worked OK, but it was very slow, and Claude 3 told me I could only ask a few questions before it cut me off. Clearly, I hit the limits well before it was useful. And what I really wanted to do was chat about 6-10 years of issues.

So I tried ripping the PDFs into plain text, but I ended up hitting similar limits very quickly, and it didn't seem to work too well anyway. Looking at what came out of the automatic pdf-to-text conversion, I am not shocked the AI didn't know what to do with it.
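(For the curious, here is a minimal sketch of what I mean by merging PDFs and ripping them into plain text, written in R since that is my tool of choice. The pdftools and qpdf packages, the folder, and the file names are illustrative assumptions, not a claim about the exact code I ran.)

```r
# Illustrative sketch only: merging a year of journal PDFs and ripping them to text.
# Folder and file names are made up.
library(pdftools)  # pdf_text()
library(qpdf)      # pdf_combine()

pdfs <- list.files("journal_2023", pattern = "\\.pdf$", full.names = TRUE)

# Option 1: merge everything into a single PDF to upload to the chat interface.
pdf_combine(input = pdfs, output = "journal_2023_merged.pdf")

# Option 2: rip each article to plain text and glue the pages together.
txt <- vapply(
  pdfs,
  function(f) paste(pdf_text(f), collapse = "\n"),
  character(1)
)
writeLines(paste(txt, collapse = "\n\n"), "journal_2023.txt")
```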

Here's what worked. I just decided to do the article discovery mostly manually. Fortunately, here the task was either to find everything in the top journals about topic X, or to glance through everything in the topical journals on X that could be used for purpose A. So this was done by hand, with searches on the journals' webpages augmented with some specialized Google Scholar work. For the topical journals, I read the titles of entire issues. I also grabbed a few things the journals recommended along the way in other journals they publish. (In fact, I had already been doing this for months.) On top of that, I used scite_ and Perplexity.

After all this, if an article looked like it could be relevant, I took its citation, abstract, and discussion/conclusion and manually pasted them into a text file, separating articles with #####. This took forever. (Why, in 2024, I had to manually copy and paste 60 article abstracts, I have no idea. I guess I do have an idea. Moving on…) But once I had a document with around 60 articles (around 82k words), I was able to upload it to Claude AI and have a good discussion about what is in there. (Pro tip: never leave a DOI out of a citation if you ever do this. Ask me how I know.)
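(If your notes are less manual than mine were, something like this sketch could assemble the #####-separated document and, crucially, flag any article missing a DOI. The notes/ folder layout with one plain-text file per article is hypothetical; I really did do this by copy-paste.)

```r
# Hedged sketch: build the corpus file from per-article note files (hypothetical layout).
files <- list.files("notes", pattern = "\\.txt$", full.names = TRUE)

pieces <- vapply(
  files,
  function(f) paste(readLines(f, warn = FALSE), collapse = "\n"),
  character(1)
)

# Flag any article whose entry lacks a DOI (learn from my mistake: the model
# happily invents DOI links for entries that are missing one).
missing_doi <- files[!grepl("doi", pieces, ignore.case = TRUE)]
if (length(missing_doi) > 0) {
  warning("No DOI found in: ", paste(missing_doi, collapse = ", "))
}

# One big document, articles separated by #####, ready to upload.
writeLines(paste(pieces, collapse = "\n\n#####\n\n"), "lit_corpus.txt")
```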

Yes, every prompt was super slow, and I hit the limits after about 6-8 prompts, at which point it cut me off for a few hours. And this was with the paid Claude AI subscription. So I had to be thoughtful about what I asked. I rolled up my sleeves and used all my prompt engineering (and even more prompt engineering) skills not to waste a single prompt. (If the latter link was helpful to you, sign up for their newsletter. I am not sure I was supposed to give this to you without you signing up. There are many more cool pages where this came from. And if you are at all AI curious, the newsletter is a great quick read every day.) And sometimes I had to wait until the next prompt… for hours. But it worked. To be sure, first I asked for a bibliography. (That was correct.) I even got correct quotes when I asked it to go step by step: first identify the main points it would make, then find good quotes in the uploaded txt file to support those claims, and then write up the lit review. Prompts for a full lit review with a few subheadings to guide it yielded something much shorter than I wanted (despite size specifications). Going section by section did the trick.

After going through and checking everything (the themes the articles were cited for, quotes, cites, bibliography), I can honestly say this worked very well, especially for the narrow purpose at hand. The only issue I had was made-up DOI links where the uploaded document left the DOI out. The literature discovery was practically manual labor, and the prep work for the uploaded book-length document was mind-numbing, but the results are very good, and overall, doing it this way took about a fifth of the time, even including double- and triple-checking everything.

If it weren't so excruciatingly slow, and if I weren't running up against limits, it would be amazing to just have a conversation about the pieces with the AI. Have a discussion with the papers or the paper authors. In fact, I am now convinced journals should train their own models (GPTs?) simply on their own text and allow subscribers to have conversations about the content.

It’s Not Just Trump

You can find the replication information HERE for the Public Opinion Quarterly article "It's Not Just Trump" by Levente Littvay, Jennifer McCoy, and Gábor Simonovits. You will find two files there: one replicating the analysis with the Americas Barometer (AB_rep.zip) and one for the original data collection (Orig_Data_rep.zip). In the latter, you will find two scripts: one reproducing the analysis in the article and one for the analysis (without independents) presented in the Supplementary Materials. All analyses were done with the most recent version of R, with the most up-to-date packages, on January 25, 2024.

Can AI write literature reviews for social science articles? Nope. Not yet.

(Updated October 17, 2023 – still not yet.)

I invested a decent amount of time over the past weeks trying to figure out if there’s a good workflow to follow on AI-assisted article discovery, systematization, and literature review writing for social science academic work.

TL;DR: Nope!

I did this by writing a piece where I was not so knowledgeable about the two dependent variables; I was quite familiar with the four independent ones. I needed to review the literature on eight sets of relationships and on the dependent variables in general.

Most LLM tools just make shit up, and verifying what is real and what is not, and whether the real citations actually say what the AI says they say, takes longer (and is substantially more frustrating) than my old-fashioned discovery approach, which you can read about here.

There are OK tools for discovery, but they often fail to find the most relevant pieces. Still, I do recommend you check these out:

  • elicit.org (the new beta is not yet fully functional – but it may be worth buying credits now as it will be more expensive later, I suspect)
  • scite.ai (worth subscribing probably – I did and paid out of pocket, though I didn’t pay full price as I was able to find a coupon.)
  • consensus.app (another one that pays attention to not making stuff up. Good free tier for now – added a month after the others)

Not LLM based, but another useful literature review tool I came across in the process was this: researchrabbitapp.com (And I understand this is free.) Check also: connectedpapers.com

No tool I found was useful for rewriting (messy) notes on relevant articles into a clean literature review draft. It either added stuff it should not have, got the message, flow, logic, or relationships tested wrong, or failed to recover all the citations from links and notes. I tried ChatGPT, Claude, Llama 2 (of which I run several versions on my own computer), scite, elicit, and Bing's and Google's beta search tools. None of them worked very well. Grammarly (with a subscription), along with a citation manager and some good old-fashioned writing, was quicker and more useful in turning messy notes and bullet points into good writing. (Well, I wouldn't say good writing, but clear and adequately bland writing for academia.)

This said, I had luck with claude.ai for summarizing more organized notes and for summarizing PDFs I uploaded, which it allows (including articles I was wondering whether I should read fully). Unfortunately, you need to be in the UK or US to use it (or have a VPN that makes it think you are in the UK or the US). Out of all the tools, I liked Claude's writing the most when I told it to summarize my notes on the literature in the style of a concise social science literature review.

This note will likely be outdated in 3-6 months (even with this one-month update). The day this changes is the day LLMs solve the hallucination problem and can effectively cross-reference links with the text they write, or the day Google Scholar (or one of the others) implements a good LLM. This is not far away at all. I honestly thought we were there already, hence the experiment. But we're not.

How to write a survey?

OK, so you figured out you need to conduct a survey. What’s next?

Everyone's gut reaction is: let's start writing questions. But if you want to do it right, this is exactly what you should not do. In fact, resist all urges to even think about what questions to ask. To write a better survey, the first question you need to answer is:

1. What am I interested in finding out?

It is very important that you do not phrase the answer to this as a survey question. You never really cared how people respond to any particular question anyway, right? You care about what is behind those responses. Let's say you want to know whether a customer likes a product. Maybe you want to know if they are likely to refer you, as a service provider, to others. Ask yourself what demographic information is relevant to your study. Make an exhaustive list of what you want to find out.

Now you write questions? No! You still should not. Rather, ask yourself…

2. What will you do with this information?

You have to think about this question in two different ways, two different sequential steps.

a. How will I analyze these data? Develop an analytical strategy. Will I look at (or present) a histogram? Do I want to see an association between two of the constructs defined in point 1? Are older or younger people more likely to refer my services to others? Men or women? Compile an exhaustive list of such questions that you want to answer with the data. Once you have asked your questions, come up with an analytical strategy. Do I just want some descriptive information (like a cross-tabulation), or do I need to go all the way to an instrumental variable regression model to causally ascertain the relationship between a key independent variable and the dependent variable? (For the latter, you may also need to find three or four plausible instruments.) Have an analytical strategy in mind. It could be as simple as calculating a mean, it could be a simple inferential statistic like a correlation or a two-sample t-test, or it could be something complex. Just make sure you have a preliminary analytical strategy. (A minimal sketch of what the simple end of this looks like in R follows point b below.)

b. Know what you will do with the information. Devise action strategies. If I see that young people do not refer my service to friends, I will develop a marketing strategy that nudges young people to do this. Maybe you are not the person taking action on the survey; then devise a recommendation strategy instead. If you work for a client (even if it is in-house), push them hard on devising this strategy before you start the survey. The better they know what they want to do with the data, the better chance you have of writing a useful survey for them.
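(Here is the promised minimal sketch of what the simple end of such an analytical strategy can look like in R. The data are simulated and the variable names made up; this is illustration, not analysis.)

```r
# A minimal sketch of simple analytical strategies from step 2a, on simulated data.
set.seed(1)
n <- 400
survey <- data.frame(
  age_group    = sample(c("younger", "older"), n, replace = TRUE),
  gender       = sample(c("man", "woman"), n, replace = TRUE),
  would_refer  = sample(c("yes", "no"), n, replace = TRUE),
  satisfaction = round(runif(n, 1, 7))   # a 1-7 satisfaction rating
)

# Descriptive: a cross-tabulation of referral by age group.
table(survey$age_group, survey$would_refer)

# Simple inferential: are older or younger respondents more likely to refer?
chisq.test(table(survey$age_group, survey$would_refer))

# Two-sample t-test: do men and women differ in satisfaction?
t.test(satisfaction ~ gender, data = survey)
```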

To aid steps 1 and 2 (which you may need to go back and forth on a bit), it is a good idea to draw things out. It is OK to go back and forth between your current step and the previous one, but don't go back any further than that.

3. Take the constructs you identified as crucial and figure out how best to operationalize them. Chances are, this is the stage where the constructs (which, if you are following these steps closely, are drawn up in a web of relationships of interest) start turning into survey questions. It is OK to ask multiple survey questions to tap a single construct. Keep an eye on (and, if need be, modify) the analytical strategy developed in step 2 as you start operationalizing. Maybe you thought you would look at a correlation, but it turns out a simple yes-no question is the best way to ask about something. Then you will only have a dichotomy, not the continuous variable a correlation calls for, so you may need to adjust your analytical strategy (see the short sketch below). At this stage, don't go back to step 1 anymore.
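(And here is the short sketch of that adjustment, again on made-up data: a planned correlation works on a 1-7 referral likelihood item, but once the item becomes a yes-no question, a two-sample t-test is the more natural choice.)

```r
# Sketch of adjusting the analytical strategy when an item ends up dichotomous.
set.seed(2)
n <- 400
satisfaction <- round(runif(n, 1, 7))                              # 1-7 rating
refer_scale  <- pmin(7, pmax(1, satisfaction + rpois(n, 1) - 1))   # 1-7 likelihood
refer_yes_no <- ifelse(refer_scale >= 5, "yes", "no")              # dichotomous version

cor.test(satisfaction, refer_scale)    # original plan: correlation of two scales
t.test(satisfaction ~ refer_yes_no)    # adjusted plan: compare the two yes/no groups
```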

Follow conventional rules of questionnaire design. Make sure you are asking questions, not just throwing words at the respondent: "Gender:" should be "What is your gender?" The survey process is a conversation; don't break the basic conversational rules. Make sure the response categories you offer are unique, mutually exclusive, and actually answer the question you ask. Label all your response categories, and there is no need to throw the numbers you will use in the analysis at people. Unless you are a survey researcher or quantitative social scientist (which you probably are, or are slowly becoming if you got this far), it is wholly unnatural to map a conversation onto some numeric space, so don't make people do it. Also, don't even bother them with your numeric mapping. And – very important – make sure the response categories actually answer the question: if the question starts with "how many," the answer is never "strongly agree" or "disagree."

Remember that bipolar scales should be no wider than 7 points (11 for experts – but good luck labeling all of them…) and unipolar scales no wider than 5 points (7 for experts). Don't let your respondents just run through tables of questions with identical response options; they will lose attention. It is better to write question-specific response categories.

Write at a grade level that is around 3-5 years lower than the lower end of your population. Don't use big words, homonyms, heteronyms, or jargon that may not be understood by the respondent. Ask one question at a time (the words "and" and "or" are usually red flags in survey questions). You can offer a "don't know" option; just remember that it encourages people not to engage with the survey, not to think about it. (And men, on average, are less likely to admit to not knowing something anyway, so even if your goal is to find out whether people don't know something, your results will be biased no matter what, so why bother.) This is hardly an exhaustive list, and there are a million more pointers and great survey question writing tutorials online. Read through a few. What I see less of are tutorials demonstrating this broader process that looks beyond the question writing and that, IMHO, is absolutely necessary to get quality responses you can effectively use.

Finally, please remember that most people hate surveys. This process ensures that you only ask what is necessary: things you know you need and know what to do with. The longer a survey is, the worse the data quality will be. Off the bat, fewer people will take a survey that seems long to them. (It is a good idea to tell people up front how long the survey will take, so they can make sure they have enough time when they do take it; some people may never take a survey because they don't know whether they will have time, unless you give them a ballpark estimate.) People's attention spans are more and more limited these days. After 10 minutes, you can forget about respondents paying much attention, which will come at the expense of data quality. This process ensures no unnecessary questions are asked.

When you start a survey by writing questions, you become fond of those questions and are more likely to ask them (or to hesitate about cutting them later). This is why it is especially important to first know what research question you need answered, and only then start designing the survey questions that will help you answer it.

Of course, there will always be that stakeholder who comes and says we should also ask question XYZ… and sometimes they have good ideas with obvious implications. But most of the time, this is not the case. The best weapon against such a proposal is a demonstration of the thoughtful development described above. You come back to them showing how and why every question is in the survey and ask them: now, with this in mind, why do you want to ask that too? They will either improve your design on the spot or back down. Either way, it is a win-win.

Books

As I signed my name to the third piece of paper recently (or was it the fourth?), I wondered where that person went who swore they would never write a book. Now, with this signature, I have three book contracts and even an edited volume forthcoming soon.

So let's talk about these a bit. It is good to take stock of what I am doing.

Hawkins K, Carlin R, Littvay L, Rovira Kaltwasser C (eds.) (forthcoming) The Ideational Approach to Populism: Concept, Theory, and Analysis. Extremism and Democracy series, Routledge.

I am very much looking forward to this one. A little over a year ago I wrote an opinion piece in Nature where I argued for the importance of systematic comparative analysis. This book is our (Team Populism‘s) first attempt at this and, while not perfect, I am pretty happy with the outcome.

Then there is the almost-finished book with my former students Bruno Castanho Silva and Constantin Manuel Bosancianu on Multilevel Structural Equation Modeling. There is no book dedicated to the subject to date. There are a few good book chapters, but we wanted to do something more accessible to a less technical crowd (and since Bruno asked me if we could just write this equation in landscape, I am not sure we succeeded). We pitched the idea to SAGE's Little Green Book series a while back, and they liked it. We managed to workshop earlier draft chapters at the 2016 and 2017 ECPR Summer Schools in Methods and Techniques, and in a few weeks we will do it again. Hopefully it will be a complete draft by then. That depends solely on my writing and editing superpowers. (Yeah, I should be writing that and not blog posts.)

(On a side note, how inept I am at up-to-date quantitative methods technology will become clear at the bottom of this post, but I am proud to see that my former students' webpages are on GitHub – a platform I very much need to learn how to use. (And they aren't the only ones to have their pages on GitHub.) Not having the time to learn everything I want to learn is, I hope, compensated somewhat by pushing my students down rabbit holes they need to tumble down to get (well) ahead of where I am. #FeelingOldUnder40. I know, I know, I am embarrassing…)

Riding on the success of this idea, and after years of discussions about what we should collaborate on, Jochen Mayerl and I sat down after the 2016 ECPR Summer School and spent two days discussing (and pounding out) what a short introductory Structural Equation Modeling book would look like if we wrote one. Fortunately, the same SAGE series, which was visibly missing a modern SEM book from its 170+ book repertoire, sounded convinced. We spent some time a year later working out some of the details, but the work starts early next year, and hopefully we will have a draft to pilot at the 2018 ECPR Summer School.

But even before that, Cambridge University Press approached Kirk Hawkins to write a short book for their new Elements series on how the 2016 US election (read: Donald Trump, but we think also Bernie Sanders) fits into the comparative world of populism. Americanists in the US can be quite detached from the world of comparative politics and it is clear that current events caught them (as much as everyone else) by surprise. Americanists want to look into Trump and populism, so instead of reinventing the wheel (or rather, further confusing an already massively muddled concept – there is a lot of that going on nowadays), maybe placing the US in a comparative context where populism has been studied for decades is not a bad idea. And the Elements series looks like a great medium to do this. This also gives some opportunities to show off systematic comparative research (which is still the exception in populism studies).

The irony of Kirk (the American) asking me (the Hungarian) to write a book with him on American politics (not his field, much more mine) should not escape anyone. Between the two of us, I think we have sufficient chops in systematic comparative research, at the level of both elites and masses, with a focus on Latin America, Europe, and, of course, the US, to do this right. This is going to be fun. I am very much looking forward to it. And it is going to be tough, as deadlines are tight. They want the book by the end of January. (February might actually be doable. At least these things are short, which, in the case of the SEM book, may be more a challenge than an asset, but…)

There is another reason I am looking forward to this. Recently, a medical researcher colleague asked me to help with some stats. I figured I would finally do this the right way. I have been teaching R and good coding practices to my students for years, but I never internalized them myself. I figured I would do this right, for once. (I won't open SPSS, etc.) It was so much fun. For most of my political science work, I rarely get to open a stats package. I rarely open R without realizing it is way outdated and I should upgrade. When I have to run something, it is usually in Mplus, and I can usually get a collaborator to give me clean, Mplus-ready data (or I am in a rush and do it in SPSS, though it has been a while). Now I have a project where I can and will do everything myself and do it right. I am looking forward to learning more about visualization in R. I did a workshop with Martin Molder but still never had to open ggplot2.

So I have three books to write before the start of my sabbatical (assuming the request to leave next fall for a year goes through). What will I do on my sabbatical? I guess I have ideas. CEU's Comparative Populism Project will start to produce data. There are multiple other grant proposals and grant calls in the pipeline with the potential to keep me busy doing populism research (or, at least, scientific busy work with hopes of generating something useful for research, like data). Maybe another book? Definitely articles. I do have a plan in the back of my head to write one more stats book, this one on experimental design for political and social scientists. We will see if that happens. I need to do a lot more research on what is out there before I commit to writing even a proposal for this.

Our article on Populism and Belief in Conspiracy Theories is up on pre-release

Our article with Bruno Castanho Silva and Fede Vegetti in the Swiss Political Science Review special issue on populism is titled: The Elite Is Up to Something: Exploring the Relation Between Populism and Belief in Conspiracy Theories.

Abstract: We explore the relationship between populist attitudes and conspiratorial beliefs on the individual level with two studies using American samples. First, we test whether and what kinds of conspiratorial beliefs predict populist attitudes. Our results show that belief in conspiracies with greedy, but not necessarily purely evil, elites are associated with populism. Second, we test whether having a conspiratorial mentality is associated with all separate sub-dimensions of populist attitudes – people-centrism, anti-elitism, and a good-versus-evil view of politics. Results show a relation only with the first two, confirming the common tendency of both discourses to see the masses as victims on elites’ hands. These findings contribute to research on the correlates of populism at the individual level, which is essential to understanding why this phenomenon is so strong in contemporary democracies.

CEU Intellectual Themes Initiative interdisciplinary project on text analysis

In addition to the Comparative Populism project, another ITI project I am involved with also got funded in the most recent round. I am happy to announce that the Text Analysis Across Disciplines project, led by Tijana Krstić and Jessie Labov, starts on Monday.

Now, any advice on where I can learn the ins and outs of the trade? Please send it on Twitter or Facebook (email also works). (Not that I don't run a methods school, but on those exact days I tend to be pretty busy, so I need other options.)

More about the project:

Text analysis can mean different things to different audiences: from close reading of literary texts, to critical reflections on historical sources, to the computational analysis of big data using text mining techniques. The project seeks to forge a continuum among these diverse disciplinary approaches to text analysis: from the analog to the digital, from the historical to the contemporary, from pure research to public outreach. This project has grown out of the work of the Digital Humanities Initiative (DHI), an 18-month exploratory project funded by the Humanities Initiative. After surveying the CEU community and spending the 2016-2017 year consulting with faculty, staff, and students in virtually every department and program on campus, the project team identified text analysis as the one area of digital research that is much in demand but critically absent from CEU's curricula and research profile. Therefore, the project will offer courses, master classes, project incubation, and several public events to demonstrate the crucial role of text analysis in 'small,' 'medium,' and 'big data' research. In order to introduce the techniques and methodologies specific to text analysis at CEU, the team will draw on working partnerships that the DHI has established with several Hungarian institutions (ELTE, the Petofi Literary Museum, and the Institute for Literary Studies and the Institute for Historical Research at the Academy of Sciences), as well as the Europe-wide DARIAH network. The primary goal of the TANAD project is to establish two university-wide courses and several grant-worthy projects that bridge the work of the Just Data initiative with ongoing text-based research in departments and programs across the university.

CEU’s Intellectual Themes Initiative Funds Our Comparative Populism Project

We are grateful to CEU's Intellectual Themes Initiative for funding our project. Now we have a lot of work to do over the next two years. Announcement penned by Erin Jenne:

We are excited to announce our two-year interdisciplinary CEU grant project, Comparative Populism, which will launch in September 2017 and is conducted by Levente Littvay, Bruno Castanho E Silva, Rosario Aguilar Pariente, Constantin Iordachi, Nick Sitter, Zsolt Enyedi, Elissa Helms, Balazs Vedres, Judit Sandor, Matt Singer, Norbert Sabic, Federico Vegetti, and (hopefully) several CEU students, with external support from multiple scholars, including Team Populism.

This project brings together CEU and international scholars working on topics related to populism across different disciplinary traditions. The aim is to build a comparative database on countries across Europe covering the varieties of populist politics and policies across the region from the end of the Cold War to the present, and to explore the connections between populism on the one hand and gender, law, foreign policy, and party politics on the other. By joining the different methodological skills and perspectives across the different academic units, the project team can arrive at a multi-faceted understanding of why populism manifests more strongly in some countries than in others in the same region, why it takes on a social conservative dimension in some places and a more nationalist/nativist dimension in others, and how all of this connects to gender, the law, foreign policy, public administration, and party systems.

Find out more HERE!

Evaluation of English as a Medium of Instruction Outcomes

Hanging out with the good Jose L. Arco-Tirado in Granada, working on the impact of English as a Medium of Instruction (EMI) on university GPA. It is kind of hard, as the people who go into an English-language-instruction school in Spain are not exactly a random sample. (Spoiler alert: they are much better students and end up with better GPAs than average.) But what if we try to construct a reasonable comparison group? The results flip. Not such good news for EMI education (though I still don't care: my kid still gets three languages spoken to her at home, and I still want to send her to a school where she can learn a fourth).

Not to bore anyone with the details, but we used matching to get closer to a true causal inference (not that we expect to fully succeed, but at least we get a more reasonable comparison; see the sketch below). Propensity matching sucks (and no, we did not try higher-order interactions and machine learning to find the best model); as usual, Coarsened Exact Matching (as implemented, without tweaking) throws away most of the treatment group (not exactly something we can afford to do here); and, as anyone could have predicted, Genetic Matching is still the method that saves the day, with incredible balance and super interesting findings. (I guess the findings were the same with simple nearest-neighbor propensity matching and tended in this direction with CEM as well, but were insignificant, probably due to the very low sample size.)
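(For the curious, this is roughly the kind of thing one would run in R with the MatchIt package; the variable names and the students data frame are hypothetical, and this off-the-shelf code is a sketch, not our exact specification. Genetic matching additionally needs the Matching and rgenoud packages installed.)

```r
# Hedged sketch: three matching approaches on hypothetical EMI data.
library(MatchIt)

# emi: 1 if the student was in English-medium instruction, 0 otherwise;
# the covariates are pre-treatment student characteristics.
m_ps  <- matchit(emi ~ hs_gpa + entry_score + gender + ses,
                 data = students, method = "nearest")   # nearest-neighbor propensity
m_cem <- matchit(emi ~ hs_gpa + entry_score + gender + ses,
                 data = students, method = "cem")       # coarsened exact matching
m_gen <- matchit(emi ~ hs_gpa + entry_score + gender + ses,
                 data = students, method = "genetic")   # genetic matching

summary(m_gen)   # check covariate balance (and how much of the sample survives)

# Estimate the EMI "effect" on university GPA in the genetically matched sample.
matched <- match.data(m_gen)
summary(lm(uni_gpa ~ emi, data = matched, weights = weights))
```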

I want to spend more time in Granada. Maybe next year when we’ll have enough data to try a regression discontinuity design. (Or any other time is fine…)