AI and HealthCare, Author Interviews, Cannabis, Pharmacology, Technology / 28.08.2025
When the Math Doesn’t Add Up, Can AI Do the First Pass to Improve Biomedical Research?
Dr. Dobbins, PharmD
MedicalResearch.com Interview with:
Duncan Dobbins, PharmD, MHI
Geisinger College of Health Sciences
Scranton, Pennsylvania
MedicalResearch.com: What prompted this commentary, and what did you find?
Response: In theory, there could be a drug interaction between immunotherapy and medical cannabis. A small (N=102) observational report from Israel appeared to find that immunotherapies worked much less well in cancer patients who also used medical cannabis.1 However, a follow-up report2 took about two weeks of manually rechecking the math and data analysis, and several discrepancies emerged between the methods and the results. Two-tailed tests were listed in the methods, yet one-tailed p values appeared in the results. Arithmetic errors, some traceable to unconventional “floor” rounding, affected key percentages. Multiple p values in Table 1 (21 out of 22) could not be reproduced with the stated tests. Finally, smoking status, a key confounder, was not reported. Taken together, these issues complicate interpretation and show how small computational slips can cascade into larger inferential uncertainty. For the follow-up report, I was asked, “Do you think AI could have double-checked this math?”
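To illustrate the kinds of checks involved (not the actual study data), here is a minimal Python sketch of two of the arithmetic issues mentioned: how “floor” rounding can shift a reported percentage by a point, and how reporting a one-tailed p value halves the apparent p for a symmetric test statistic. All counts and the z value below are hypothetical.

```python
import math

# Minimal sketch of two arithmetic rechecks; all numbers are hypothetical,
# not taken from the study in question.

def pct_half_up(k, n):
    """Percentage with conventional half-up rounding to the nearest integer."""
    return math.floor(100 * k / n + 0.5)

def pct_floor(k, n):
    """Percentage with 'floor' rounding: always truncated downward."""
    return math.floor(100 * k / n)

def one_tailed_p(z):
    """Upper-tail p value for a standard normal test statistic."""
    return 0.5 * math.erfc(z / math.sqrt(2))

def two_tailed_p(z):
    """Two-tailed p value: twice the one-tailed value for a symmetric statistic."""
    return 2 * one_tailed_p(z)

# A hypothetical 55 of 102 patients: the rounding convention alone moves the figure.
print(pct_half_up(55, 102))          # 54
print(pct_floor(55, 102))            # 53

# A hypothetical z of 1.96: the one-tailed value is half the two-tailed one.
print(round(one_tailed_p(1.96), 3))  # 0.025
print(round(two_tailed_p(1.96), 3))  # 0.05
```

Checks like these are mechanical, which is exactly why they are plausible candidates for automated, AI-assisted first-pass review.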