Evidence first.
Then words.
This page explains the sources we use, the grades we assign, and the caveats we carry when the evidence is thinner than we would like.
Our research process
Every article starts with a question readers are actually asking, not a keyword we want to rank for. Then we go find the best available evidence.
The process, step by step:
1. Identify the question or claim.
2. Search primary sources: PubMed, ClinicalTrials.gov, FDA labels, regulatory databases.
3. Search secondary sources: Europe PMC, systematic review databases, official health agency pages.
4. Grade the evidence for each claim.
5. Write the article with inline source links, evidence grades, caveats, and a "last reviewed" date.
6. Flag any claims where the evidence is thin, conflicting, or preliminary.
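In code terms, the primary-source step can be scripted against NCBI's public E-utilities endpoint. The endpoint and its parameters are real; the example query term is ours, and a production pipeline would add an API key and rate limiting. A minimal sketch:

```python
from urllib.parse import urlencode

# Real NCBI E-utilities search endpoint (PubMed and other databases).
EUTILS_ESEARCH = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi"

def pubmed_search_url(question_terms: str, max_results: int = 20) -> str:
    """Build an esearch URL for a reader question.

    The query string passed in is illustrative; in practice each
    claim gets its own targeted query with field tags.
    """
    params = {
        "db": "pubmed",          # search the PubMed database
        "term": question_terms,  # the question, phrased as a query
        "retmax": max_results,   # cap on returned PMIDs
        "retmode": "json",
        "sort": "relevance",
    }
    return f"{EUTILS_ESEARCH}?{urlencode(params)}"

# Example (hypothetical query): fetch this URL to get matching PMIDs.
url = pubmed_search_url('semaglutide AND "weight loss"[Title/Abstract]')
```

Fetching the returned URL yields a JSON list of PMIDs, which then feed the grading step.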
We do not start with a conclusion and then find sources to support it. The evidence leads. Sometimes that means the answer is "we don't know yet," and we say that.
Source hierarchy
We weight sources differently depending on what they are and what they can reliably tell you.
Tier 1 (primary research and regulatory sources): PubMed/NCBI, Europe PMC, ClinicalTrials.gov, DailyMed/FDA SPL, openFDA, CDC/NHANES, USDA FoodData Central. These are our anchor sources. When we make a claim about what a study found or what a label says, it traces back here.
Tier 2 (supplementary and context sources): RxNorm/RxNav (drug name normalization), PubChem (compound properties), ChEMBL (bioactivity data), NIH ODS (supplement safety), NIH RePORTER (funded research projects), FDA warning letters (enforcement actions). These support context, comparison tables, and background. We do not treat them as standalone clinical evidence.
Tier 3 (question discovery only): Reddit threads, social media posts, influencer content, clinic marketing, press releases. We use these to understand what people are asking and what claims are circulating. We never cite them as scientific evidence.
Evidence grades in practice
Our grades (A through D) correspond to what the source can actually support. Here is how they show up in articles:
Grade A example: "Tirzepatide produced greater average weight loss than semaglutide in the SURMOUNT-5 head-to-head trial, a Grade A finding backed by a large randomized comparison."
Grade B example: "Observational data from a 2024 retrospective cohort study (n=12,400) suggest that patients who discontinue GLP-1 therapy regain approximately two-thirds of lost weight within 12 months, though this has not been confirmed in randomized discontinuation trials. Grade B."
Grade C example: "A 2024 case series (n=7) described hair thinning in patients on high-dose semaglutide. This is a preliminary signal from a very small sample. Grade C. The mechanism is unclear and the incidence is not established."
Grade D reference example: "Social media discussions about 'stacking' GLP-1s with off-label peptides have increased. These practices are not supported by published clinical evidence and may carry unknown risks."
The grade is always stated. If we cannot assign a grade because the evidence is too thin, we say that instead of inflating it.
Special caveats
Some sources require extra care. We flag these in every article where they appear.
FAERS (adverse event reports): The FDA Adverse Event Reporting System collects voluntary reports of suspected adverse events. These reports are valuable for signal detection but have specific limitations: reports are voluntary and incomplete; a report does not mean the drug caused the event; raw report counts cannot be compared across drugs to determine which is more dangerous. When we reference FAERS data, we include the qualifier that these are voluntary reports that do not establish causation.
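When we pull FAERS data, we use the openFDA drug/event endpoint. The endpoint and field names below are real (they come from openFDA's documented examples); the specific drug and limit are illustrative. A sketch of the query construction:

```python
from urllib.parse import quote

# Real openFDA endpoint for FAERS adverse event reports.
OPENFDA_EVENT = "https://api.fda.gov/drug/event.json"

def faers_reaction_counts_url(drug_name: str, limit: int = 10) -> str:
    """Build a query counting reported reactions for one drug.

    Caveat carried into every use of this data: FAERS reports are
    voluntary, a report does not establish causation, and raw
    counts must never be compared across drugs to rank danger.
    """
    search = quote(f'patient.drug.medicinalproduct:"{drug_name}"')
    count = "patient.reaction.reactionmeddrapt.exact"  # MedDRA reaction term
    return f"{OPENFDA_EVENT}?search={search}&count={count}&limit={limit}"

# Example: top reported reaction terms for one drug (signal detection only).
url = faers_reaction_counts_url("semaglutide")
```

The response is a ranked list of reaction terms with report counts, which we treat strictly as signals to investigate, never as incidence rates.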
Preprints: Preprints are research papers posted publicly before peer review. When we reference a preprint, we label it as a preprint in the text and the source list, note that findings may change after peer review, and avoid building strong claims on preprint data alone.
Animal and in-vitro studies: Studies done in animals or cell cultures can suggest mechanisms but do not prove that the same thing happens in humans. We state clearly that the study was preclinical, do not imply that results translate directly to human outcomes, and rate these Grade C at most.
Compounded products: Compounded semaglutide, tirzepatide, and other prescription drugs are not FDA-approved versions. We distinguish them from FDA-approved branded products, reference FDA warning letters about specific compounded products when relevant, and do not make safety or efficacy claims about compounded versions based on data from the approved versions.
What to ask a clinician
When an article covers a treatment, drug, or supplement, we include a section with productive questions a reader can bring to their healthcare provider. These are conversation starters, not recommendations.
We frame each item as a question, not a directive. We do not say "you should ask about" or "make sure your doctor knows." We do not suggest specific answers the provider should give.
Example questions for a GLP-1 article:
- Is this medication appropriate for my health history?
- What are the most common side effects, and how long do they typically last?
- What happens if I need to stop taking it?
- Are there interactions with my current medications or supplements?
- What kind of monitoring would I need?
Source retrieval and review dates
Every article includes:
- the sources consulted, with stable identifiers,
- the date we last accessed or pulled data from each source,
- a "last reviewed" date showing when an editor last checked claims against current sources, and
- a target date for the next scheduled review (90 or 180 days, depending on topic activity).
These dates appear near the bottom of every article. If a source has been updated since our last retrieval, we note the gap and prioritize an update.
When the evidence changes
Our articles are not one-time publications. When new evidence arrives that materially affects a claim, we update the article and note what changed, when, and why. If a Grade A claim is downgraded (or upgraded), we explain the reason. If a regulatory action affects a covered topic, we update within 72 hours when possible. Major corrections appear prominently at the top of the article, not just at the bottom.
We would rather be transparent about what changed than pretend we were right all along.
This methodology is a living document. Last updated: May 2026. Questions about our research process: editorial@glowdiary.org.