More on AI Literature Reviews in the Social Sciences.

Recently I needed to throw together an overview of the literature for a specific topic. The application was, surprisingly, non-scientific. People who work in the actual world of politics asked me what the literature says about a certain topic. This gave me another opportunity to test AI tools and their capabilities, especially since the topic was not something I was neck-deep in. Put simply, if it weren't for AI, I would have had to do literature discovery from scratch using something like my classic literature review technique.

If you haven't read my old piece on AI literature reviews, start there, not here. It mostly still stands.

To make a long story short, after trying many things that worked very poorly (lots of inaccuracies and hallucinations), I finally cracked it and found a fairly good way of doing this. The downside is that it is incredibly work-intensive. Maybe not as work-intensive as doing the classic literature review from scratch, but it really pushed the boundaries of what is and is not possible with AI today. All in all, I had a lot of fun doing it once I got past the excruciatingly long and painful preparatory phase.

I tried all the tools in my old piece and then some to try to automate this literature review process. Unfortunately, even scite_ is becoming less and less capable of social science research as they put up more and more guardrails. Its writing is shittier and shittier. Article discovery is, unfortunately, quite poor as (at least without tweaking) it does not find the most relevant literature. It's still the best thing for discovery, but what it writes up is inadequate. (It is good enough for the natural sciences, but not for social science.) I also tried more creative approaches, but all of my attempts led to the AI model just making shit up.

But I recently heard on a podcast (one of the three early April 2024 episodes of The Ezra Klein Show) that current models can keep around a book's worth of information “in mind” while chatting. And since I now have a Claude AI subscription, I figured I should put this to the test. I grabbed a whole year of a relevant topical journal and tried to chat with the model about the articles published in the journal that year. (Yes. Download one at a time. Merge the PDFs and upload.) But I was told the full year was too much, so I did this for half a year. It worked OK, but it was very slow, and Claude 3 told me I could only ask a few questions before it cut me off. Clearly, I hit the limits well before it was useful. And what I really wanted was to chat about 6-10 years of issues.
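If you want to script the merge step rather than clicking around in a PDF editor, here is a minimal sketch using the pypdf library. The folder and file names are placeholders, not what I actually used.

```python
# Minimal sketch of merging downloaded journal issues into one PDF.
# Assumes the issue PDFs are already in a local folder (name is hypothetical).
from pathlib import Path
from pypdf import PdfWriter  # pip install pypdf

writer = PdfWriter()
for pdf_path in sorted(Path("journal_issues").glob("*.pdf")):
    writer.append(str(pdf_path))  # append every page of each issue, in order

with open("merged_issues.pdf", "wb") as f:
    writer.write(f)
```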

So I tried ripping the PDFs into plain text, but I ended up hitting similar limits very quickly, and it didn't seem to work too well either. Looking at what came out of the automated PDF-to-text conversion, I am not shocked the AI didn't know what to do with it.
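For reference, here is roughly what that conversion looks like as a sketch, again with pypdf and a hypothetical file name. The raw output tends to be messy (broken columns, stray headers and footers), which is likely part of why the model struggled with it.

```python
# Rough sketch of the PDF-to-text attempt using pypdf's built-in extraction.
from pypdf import PdfReader  # pip install pypdf

reader = PdfReader("merged_issues.pdf")  # hypothetical file name
text = "\n".join(page.extract_text() or "" for page in reader.pages)

with open("merged_issues.txt", "w", encoding="utf-8") as f:
    f.write(text)
```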

Here's what worked. I decided to do article discovery mostly "manually." Fortunately, here the task was either to find everything in the top journals about topic X or to glance through everything in the X topical journals that could be used for purpose A. So this was done manually with searches within the journals' webpages, augmented with some specialized Google Scholar work. For the topical journals, I read the titles of entire issues. I also grabbed a few things that the journals recommended along the way in other journals they publish. (In fact, I had already been doing this for months.) On top of that, I used scite_ and Perplexity.

After all this, if an article looked like it could be relevant, I took its citation, abstract, and discussion/conclusion and manually pasted them into a text file, separating articles with #####. This took forever. (Why, in 2024, I had to manually copy and paste 60 article abstracts, I have no idea. I guess I do have an idea. Moving on…) But once I had a document with around 60 articles (around 82k words), I was able to upload it to Claude AI and have a good discussion about what is in there. (Pro tip: never leave a DOI out of a citation if you ever do this. Ask me how I know.)
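If you keep the pieces in a spreadsheet instead of pasting straight into a text file, a few lines of Python can assemble the ##### separated document for you. This is a sketch under assumptions: the CSV file name and column names are hypothetical, and the block layout is just one reasonable way to do it.

```python
# Sketch: build the upload document from a spreadsheet with one row per article.
# Hypothetical columns: citation, doi, abstract, conclusion.
import csv

SEPARATOR = "\n#####\n"
blocks = []

with open("articles.csv", newline="", encoding="utf-8") as f:
    for row in csv.DictReader(f):
        blocks.append(
            f"{row['citation']}\nDOI: {row['doi']}\n\n"   # never leave the DOI out
            f"ABSTRACT: {row['abstract']}\n\n"
            f"DISCUSSION/CONCLUSION: {row['conclusion']}"
        )

with open("lit_review_corpus.txt", "w", encoding="utf-8") as f:
    f.write(SEPARATOR.join(blocks))
```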

Yes, every prompt was super slow, and I was hitting limits after about 6-8 prompts, at which point it cut me off for a few hours. And this was with the paid Claude AI subscription. So I had to be thoughtful about what I asked. I rolled up my sleeves and used all my prompt engineering (and even more prompt engineering) skills not to waste a single prompt. (If the latter link was helpful to you, sign up for their newsletter. I am not sure I was supposed to give it to you without you signing up. There are many more cool pages where this came from. And if you are at all AI curious, the newsletter is a great quick read every day.) And sometimes I had to wait hours until the next prompt. But it worked. To be sure, I first asked for a bibliography. (That was correct.) I even got correct quotes when I asked it to go step by step: first identify the main points it would make, then find good quotes in the uploaded txt file to support those claims, and then write up the lit review. Prompting for a full lit review with a few subheadings as guidance yielded something much shorter than I wanted (despite size specifications). Going section by section did the trick.
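To make the step-by-step approach concrete, here is a paraphrase of the prompt sequence, written out as a sketch. The wording, the placeholder subtopic, and the target length are illustrative, not my exact prompts.

```python
# Paraphrased sketch of the staged prompt sequence (not verbatim).
prompts = [
    # 1. Sanity check that the model can see every article in the upload.
    "List a full bibliography of every article in the uploaded file, "
    "using the citations and DOIs exactly as given.",
    # 2. Plan before writing: points first, then supporting quotes.
    "For the section on <subtopic>, first identify the main points you will make, "
    "then find quotes from the uploaded file that support each point. "
    "Do not write the section yet.",
    # 3. Write one section at a time, grounded in the plan and quotes.
    "Now write that section of the literature review at <target length>, "
    "citing articles by author and year and using only the quotes you identified.",
]
```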

After going through and checking everything (themes the articles were cited for, quotes, cites, bibliography), I can honestly say this worked very well, especially for the narrow purpose. The only issue I had was made-up DOI links when the uploaded document left the DOI out. The literature discovery was practically manual labor, and the prep work for the uploaded book-length document was mind-numbing, but the results are very good, and overall, doing it this way took about a fifth of the time, even including double- and triple-checking everything.

If it weren't so excruciatingly slow, and if I weren't running up against limits, it would be amazing to just have a conversation about the pieces with the AI. Have a discussion with the papers or paper authors. In fact, I am now convinced journals should train their own models (custom GPTs?) simply on their own text and allow subscribers to have conversations about the content.