Five maps so far. Is someone doing this by hand, the hard way? I figured it was an AI someone programmed, but if it’s an individual or small team, big respect. Very neat project.
They provide links to their GitHub, which explains their whole methodology. This is a scientific effort and is as transparent and well-documented as a project can be. They provide the code so you can understand the exact mechanics at play, or just fork the project if you want to take the work in a different direction.
It’s a great project and long overdue. I personally think scientific journals are incredibly outdated and haven’t been necessary for a couple of decades. Just put your work on a stable website and cut out the parasites at the journals.
GitHub links show 2 contributors. Cool project.
AI would probably be pretty useful for this. You’d have to assume most of the “answers” are in the abstracts, so you could just build one to scrape academic texts. Use RAG so it doesn’t hallucinate, maybe. Idk if that violates some T&C nonsense that doing it by hand doesn’t, though.
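For the curious, here’s a rough sketch of what that could look like (untested, and it assumes the public arXiv API for abstracts plus sentence-transformers for retrieval; `ask_llm` is a stub for whatever model you’d plug in):

```python
# Rough sketch of the scraper + RAG idea: fetch abstracts, retrieve the ones
# most relevant to a question, and hand only those to an LLM as context.
import urllib.parse
import xml.etree.ElementTree as ET

import requests
from sentence_transformers import SentenceTransformer, util

ATOM = "{http://www.w3.org/2005/Atom}"

def fetch_abstracts(query: str, max_results: int = 50) -> list[str]:
    """Pull abstracts from arXiv's public Atom feed for a search query."""
    url = ("http://export.arxiv.org/api/query?search_query=all:"
           + urllib.parse.quote(query) + f"&max_results={max_results}")
    root = ET.fromstring(requests.get(url, timeout=30).text)
    return [e.findtext(f"{ATOM}summary", "").strip()
            for e in root.iter(f"{ATOM}entry")]

def retrieve(question: str, abstracts: list[str], k: int = 5) -> list[str]:
    """Keep only the k abstracts most similar to the question."""
    model = SentenceTransformer("all-MiniLM-L6-v2")
    scores = util.cos_sim(model.encode(question, convert_to_tensor=True),
                          model.encode(abstracts, convert_to_tensor=True))[0]
    return [abstracts[int(i)] for i in scores.argsort(descending=True)[:k]]

def ask_llm(prompt: str) -> str:
    """Stub: swap in your LLM of choice (API call, local model, whatever)."""
    raise NotImplementedError

def answer(question: str) -> str:
    # Grounding the model in retrieved abstracts means it can only "answer"
    # from text that was actually fetched, which is what curbs hallucination.
    context = "\n\n".join(retrieve(question, fetch_abstracts(question)))
    return ask_llm(f"Answer strictly from these abstracts:\n\n{context}\n\n"
                   f"Question: {question}")
```

The retrieval step is doing the real work there: the model never sees anything except abstracts that were actually scraped, so its answer can be traced back to real papers.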
It’s cool that it shows all the papers and not just some abstract metric or a yes/no answer.
It’s still only five topics, and you really just have to trust the devs that the info is accurate and not biased.
They provide direct quotes from the papers that support their scoring and also direct links to the full papers.
It’s super easy to just check their conclusions. I followed up on several papers, both yes and no, on the vax question. There was no skullduggery; every paper I looked at was represented fairly in the scoring.
As in other scientific efforts, this is not just a ‘trust me, bro’ situation. They provide references.
Not what I really meant. My point was that one has to trust them to actually provide suitable and representative coverage of all the papers released on the subject.
Something I’ve seen in some PubMed meta-analyses is that they list the search terms and inclusion/exclusion criteria they used; something along those lines, maybe?
I see, thanks for clarifying.
I think that concern is partly covered by their scoring. If a bad-faith actor put together a distorted collection of papers that favored their conclusions but weren’t widely cited, those papers would show up as very small circles.
So it would be visually apparent either that they were being dishonest in gathering the research, or that the question hasn’t yet been studied widely enough for this tool to be useful.
The more I think about this, the more I love this project and their way of displaying the state of consensus on a question.
Please add a section about nature! Global warming, deforestation, and other human effects on nature.
You can suggest new maps. They ask for links to papers, so if this is something you’re passionate about, it helps to have some recent papers handy, especially review papers. Reviews seem to get more points in their scoring scheme.
I love this project too and have a personal passion for neurobiology studies on the benefits of yoga. When I have a couple of hours, I’ll submit a map suggestion for that topic.
This is awesome, but things like how the paper score is calculated should probably be on the website itself, not some scrolling and a couple of clicks away on the GitHub page. Maybe a single link or a pop-up on the chart itself?