
Aaron Tay

@aarontay.bsky.social

I'm librarian + blogger from Singapore Management University. Social media, bibliometrics, analytics, academic discovery tech.

3,363 Followers  |  342 Following  |  2,998 Posts  |  Joined: 05.07.2023

Posts by Aaron Tay (@aarontay.bsky.social)

But even that's a minority, I think. The free Perplexity for the uni campus (now abruptly terminated) was very successful here, based on the usage stats I've seen

09.03.2026 03:11 — 👍 0    🔁 0    💬 0    📌 0

The *perception* of being monitored by schools is probably what students fear, students being students. There's a lot of fear they will get punished for "using it wrong". They're probably less worried about monitoring by OpenAI, Google etc.

09.03.2026 03:10 — 👍 1    🔁 0    💬 1    📌 0
Can BM25 be a probability? BM25 odds vs probabilities: a tour of Bayesian BM25 and what it means for hybrid search calibration.

[Read] Can BM25 be a probability? softwaredoug.com/blog/2026/03...

08.03.2026 04:49 — 👍 1    🔁 0    💬 0    📌 0
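The linked post is about turning BM25 scores into probabilities. A minimal sketch of one standard approach to score calibration (Platt scaling, which is not necessarily the Bayesian treatment the post describes; the parameter values below are made-up placeholders, not fitted values):

```python
import math

def bm25_to_probability(score: float, a: float = -0.15, b: float = 1.2) -> float:
    # Platt scaling: squash a raw BM25 score through a logistic
    # function, p = 1 / (1 + exp(a * score + b)).
    # a and b here are placeholder values; in practice they would be
    # fit on labelled (score, relevant?) pairs from your own corpus.
    return 1.0 / (1.0 + math.exp(a * score + b))
```

Once BM25 scores live on a probability scale, they can be combined with a dense-retriever score on a common footing, which is the "hybrid search calibration" angle the post title mentions.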

But undermind workspace/projects is of course way easier...

07.03.2026 17:02 — 👍 2    🔁 0    💬 0    📌 0

I am still trying it, but it looks like an attempt to match using Claude Code to build your own research agent. Using Claude Code is of course way more flexible; Undermind Projects at the end of the day is limited to just one search tool, while with CC you can add pretty much any tool (easiest with an API) (14)

07.03.2026 17:01 — 👍 3    🔁 0    💬 1    📌 0

The deep search can't actually check against the reference list, so you need to use the generalist or report generator agent to do that and then generate the report. Or, even after generating the report from a search, use the generalist to ask it to check & remove references already cited (13)

07.03.2026 16:59 — 👍 3    🔁 0    💬 1    📌 0

The reports seem to be correct, but I think you cannot rely on the "Deep search" to do anything too smart, as it is a fixed workflow, whereas the things the agent does in the chat window are agentic. So, for example, it may generate a report claiming these are papers that could have been cited by X but (12)

07.03.2026 16:58 — 👍 2    🔁 0    💬 1    📌 0

Finally, there is a last catch-all "Generalist" agent which you can use. Ultimately, I was curious: can you make it answer queries like "find papers that paper X should have cited but did not"? What about other arbitrary tasks, like taking 2 papers in the library & checking the overlap of their citations? Reports shown (11)

07.03.2026 16:55 — 👍 3    🔁 0    💬 1    📌 0

Here's the report generated that shows papers that cite the top cited papers in each category in my library (10)

07.03.2026 16:51 — 👍 3    🔁 0    💬 1    📌 0

Take the earlier search I ran, which was to take the top cited paper of each category as a seed paper and find papers that cite those. The completed search report allows you to generate a report over the papers found only in that search (9)

07.03.2026 16:45 — 👍 3    🔁 0    💬 1    📌 0

The other agent is the report writer agent. This works just about as you'd expect, and works on either a search you have run or papers in the paper library (8)

07.03.2026 16:40 — 👍 3    🔁 0    💬 1    📌 0

You can also refer to specific papers in your paper library using their cite keys, like Tra24b. Often the chat agent itself can get part of the answer without running a deep search, but you can do that if you want (7)

07.03.2026 16:31 — 👍 3    🔁 0    💬 1    📌 0

From there you can launch the search. The unique thing here is you can refer to things in your paper library; so, for example, you can ask it to look at papers in your paper library by category, take the highest cited paper in each category and... search for papers that cite it... (6)

07.03.2026 16:26 — 👍 3    🔁 0    💬 1    📌 0
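The step described here (group the library by category, take the top-cited paper in each as a seed for a citing-papers search) can be sketched in plain Python. The dict shapes and field names below are hypothetical illustrations, not Undermind's actual data model:

```python
def top_cited_per_category(library):
    # library: list of dicts like
    #   {"id": ..., "category": ..., "cited_by_count": ...}
    # Returns the single most-cited paper per category.
    best = {}
    for paper in library:
        cat = paper["category"]
        if cat not in best or paper["cited_by_count"] > best[cat]["cited_by_count"]:
            best[cat] = paper
    return best

seeds = top_cited_per_category([
    {"id": "A", "category": "screening", "cited_by_count": 120},
    {"id": "B", "category": "screening", "cited_by_count": 45},
    {"id": "C", "category": "appraisal", "cited_by_count": 80},
])
# Each seed paper would then feed a "papers citing X" search.
```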

The one you will use the most is the "search architect"; it works as you expect, guiding you with the characteristic Undermind probing of what exactly you want. A slight difference here is that it can access your past searches, generated reports and paper library to add context. (5)

07.03.2026 16:21 — 👍 3    🔁 0    💬 1    📌 0

One of my major critiques of many "AI research assistant" tools of the time, including Undermind, was that they ran fixed workflows rather than being truly agentic. Undermind Projects now becomes more agentic by offering 3 agents you can mix and match (4) aarontay.substack.com/p/how-agenti...

07.03.2026 16:14 — 👍 4    🔁 0    💬 1    📌 0

Because Undermind search generally works best for narrow, focused areas, if you have a large topic like the impact of LLMs on systematic review, you can run multiple search queries, e.g. the impact of LLMs on boolean search generation, on title/abstract screening, and on critical appraisal, separately (3)

07.03.2026 16:12 — 👍 3    🔁 0    💬 1    📌 0

You are no longer limited to a single Undermind search run. In this example, I ran multiple searches using Undermind's "search architect" agent, and all the papers found were classified into major research topics and subtopics and can be viewed in the paper library (2)

07.03.2026 16:10 — 👍 4    🔁 0    💬 1    📌 0

Undermind.ai has now added a new Projects feature. This not only allows you to create workspaces for your projects, but also makes more agentic workflows possible (1)

07.03.2026 16:07 — 👍 10    🔁 1    💬 3    📌 1

I'm not saying some people won't think like that. Faculty will be well off enough to pay for their own. Students? At most they will be afraid they might be flagged for uses that affect their grades. But to say that a large number would think like that and not use it? Sorry, I need more evidence.

07.03.2026 11:32 — 👍 1    🔁 0    💬 1    📌 0

I think it's just a bunch of wishful thinking to fit the hoped-for view that gen AI use is down. I also suspect tools are getting better, or students are smarter at checking hallucinated references, so this librarian sees less of it and took it as lower usage of gen AI

07.03.2026 11:10 — 👍 1    🔁 0    💬 0    📌 0

I could even argue we as librarians are MORE prone to overextend, because we feel pressure to rush to whatever is the perceived new crisis & act like we are experts who "teach users". How many librarians, months after ChatGPT launched, suddenly became "experts" in LLMs? (6)

07.03.2026 06:09 — 👍 6    🔁 0    💬 0    📌 1

The whole self-congratulatory, I-told-you-so narrative arc, coupled with accusations of an "illusion of competence" in others, shows all the hallmarks of the smug "I know best because I am a librarian" tone that is very distasteful. From where I sit, librarians are as prone to the curse as anyone (5)

07.03.2026 06:06 — 👍 2    🔁 0    💬 1    📌 0

The librarian clearly has some beef with the institution's IT unit and isn't shy about writing about it, but it affects the objectivity of the piece. There are other things I can point out that are wrong, but frankly what bothers me is the tone of the whole piece (4)

07.03.2026 05:57 — 👍 1    🔁 0    💬 1    📌 0

There's a point that some students are worried about being monitored when using institution-sponsored systems, but I don't see evidence this is a huge factor for rejection. Also, the author says these users prefer to use their own paid or free accounts, which doesn't support the claim that gen AI use is receding (3)

07.03.2026 05:53 — 👍 1    🔁 1    💬 2    📌 1

That gen AI use among students is receding is a delusional take. As evidence it points to the quit-ChatGPT movement, but this is a rejection of OpenAI, not gen AI as a whole. Likely people switched to Gemini and especially Claude (which jumped to #1 on the App Store) (2)

07.03.2026 05:49 — 👍 1    🔁 0    💬 2    📌 0

Looking at an utterly delusional librarian Substack post and thinking about whether I want to respond. There's a lot I agree with in the piece, e.g. in terms of the trust users have in the library and how often we see things other institutional units may not notice. But equally there's this delusional view (1)

07.03.2026 05:44 — 👍 3    🔁 2    💬 1    📌 0
Helen King (@pubtechradar) Worth spending time with this article from @scott cunningham: https://causalinf.substack.com/p/claude-code-27-research-and-publishing According to Scott's thinking: Economists currently submit a...

Worth spending time with this article from Scott Cunningham:

Summary and my thoughts here: substack.com/@pubtechrada...

Original article here:
substack.com/home/post/p-...

06.03.2026 10:28 — 👍 1    🔁 1    💬 0    📌 0

Congrats, I just reinvented the SR (systematic review).

07.03.2026 02:52 — 👍 3    🔁 0    💬 0    📌 0

I've read a ton of theory on search algos; there's a bewildering number of techniques. Take just combining & reranking results from multiple sources: it's hard to know what really works, so maybe the easiest approach is to get everything (or as much as you can), throw away the obviously non-relevant, and let a human see the rest?

07.03.2026 02:52 — 👍 1    🔁 0    💬 1    📌 0
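For the "combining & reranking from multiple sources" problem mentioned above, one widely used baseline is reciprocal rank fusion (RRF), which needs only rank positions from each source and no score calibration at all. A minimal sketch (the doc IDs and source lists are hypothetical):

```python
from collections import defaultdict

def reciprocal_rank_fusion(rankings, k=60):
    # rankings: a list of ranked doc-id lists, one per source.
    # Each doc contributes 1 / (k + rank) from every list it appears in;
    # k = 60 is the conventional damping constant from the RRF paper.
    scores = defaultdict(float)
    for ranking in rankings:
        for rank, doc_id in enumerate(ranking, start=1):
            scores[doc_id] += 1.0 / (k + rank)
    return sorted(scores, key=scores.get, reverse=True)

fused = reciprocal_rank_fusion([
    ["p1", "p2", "p3"],   # e.g. results from a keyword search
    ["p2", "p4", "p1"],   # e.g. results from a semantic search
])
# Docs returned by both sources ("p2", "p1") rise to the top.
```

The appeal here is exactly the point of the post: RRF sidesteps the "which scores are comparable" question entirely, at the cost of ignoring score magnitudes.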

The more I play with Claude Code to combine sources to create the ultimate search agent, the more I realise the wisdom of the traditional systematic review. Why? (1)

07.03.2026 02:48 — 👍 2    🔁 0    💬 1    📌 0