Research Transparency

How we earn your confidence

Last updated: February 18th, 2026

Quick Summary


We believe the organizations and communities we serve deserve to know exactly how their data is collected, processed, and analyzed. Transparency isn't a compliance checkbox; it's how we build trust.

How We Practice Transparency

Every Warren project produces a methodology disclosure statement covering 11 standardized elements defined by the American Association for Public Opinion Research (AAPOR). These disclosures document who sponsored and conducted the research, how participants were recruited, how data was collected and processed, and the limitations of each study.


We publish these disclosures within each project dashboard so that clients, participants, and reviewers can verify our methods.


Methodology Framework


All Nesolagus surveys are designed in accordance with our Survey Methodology Framework, a comprehensive document that establishes standards for conversational survey design, bias prevention, ethical framing, validation protocols, and data quality assurance. The framework is grounded in established survey research literature and behavioral science principles.



Download Methodology Framework (PDF)


AI in Our Process

Where AI helps


We use AI (Anthropic Claude) to assist with instrument design: generating discovery questions, drafting technical survey briefs, and constructing survey instruments from approved specifications. All AI-assisted outputs are reviewed and approved by the project lead before deployment.


Where AI stops


No AI, large language model, or machine learning system is used in any post-collection stage. All data processing, theme detection, archetype classification, quality scoring, and narrative analysis are performed through deterministic, rule-based methods. No respondent data is transmitted to any external AI service.
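
To illustrate what "deterministic, rule-based" means in practice, here is a minimal sketch of keyword-based theme detection. The themes, keywords, and function name are hypothetical and do not reproduce Warren's actual rules.

    # Illustrative only: a deterministic, rule-based theme detector.
    # The same input always yields the same output: no model weights,
    # no sampling, no calls to any external service.
    THEME_RULES = {
        "affordability": ("cost", "price", "expensive", "afford"),
        "communication": ("newsletter", "email", "updates"),
        "community": ("neighborhood", "local", "community"),
    }

    def detect_themes(response_text: str) -> list[str]:
        """Return every theme whose keyword list matches the response."""
        text = response_text.lower()
        return [theme for theme, keywords in THEME_RULES.items()
                if any(keyword in text for keyword in keywords)]

    print(detect_themes("I love supporting local arts, but ticket cost is a barrier."))
    # ['affordability', 'community']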


A detailed AI usage audit documenting every instance of AI involvement in the Warren platform is available upon request.



Published Methodology Disclosures


Greater Hartford Arts Council Donor Insights Survey

Description: Conversational donor intelligence survey conducted August–October 2025. 157 completed surveys from approximately 2,000 contacts. Web-based deployment via the Warren platform.

🔗 See Disclosure here

_______________________________________________________


Westover Student Experience Survey

Description: Conversational student experience survey conducted December 2025–January 2026. 28 completed surveys from approximately 200 invited students. Web-based deployment via the Warren platform.

🔗 See Disclosure here


How We Handle Your Data


Every dataset passes through Warren's five-stage data processing pipeline. Every transformation is logged in a structured audit trail. No data imputation is performed; we never fill in missing responses or infer answers on behalf of participants. Two illustrative sketches of these stages follow the list below.


  1. Row-Level Validation
    Empty rows removed, duplicates detected, timestamps verified for impossible dates and out-of-range durations.

  2. Field-Level Cleaning
    Whitespace normalized, encoding fixed. Narrative text is always preserved exactly as the participant entered it.

  3. Spam and Low-Effort Detection
    Blocklist filtering, repetition detection, and minimum effort thresholds flag responses for review.

  4. Response Normalization
    Semantically equivalent responses grouped under canonical labels. Multi-select values parsed into structured arrays.

  5. Quality Scoring
    Each response scored 0-100 on length, effort, and coherence. Every action logged with original value, new value, reason, and confidence score.
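
To make the audit trail concrete, here is a minimal sketch of how a stage 4 grouping could be logged, assuming a simple lookup table. The AuditEntry fields mirror the list above; the class, table, and function names are hypothetical, not Warren's actual schema.

    from dataclasses import dataclass

    @dataclass
    class AuditEntry:
        # Mirrors the logged fields named in stage 5 above; the class itself
        # is a hypothetical stand-in, not Warren's actual schema.
        field: str
        original_value: str
        new_value: str
        reason: str
        confidence: float  # 0.0-1.0

    # Hypothetical canonical-label table for a contact-preference question.
    CANONICAL_LABELS = {"e-mail": "email", "E-mail": "email", "electronic mail": "email"}

    def normalize_field(field: str, value: str, log: list[AuditEntry]) -> str:
        """Map a known variant to its canonical label, logging any change."""
        canonical = CANONICAL_LABELS.get(value, value)
        if canonical != value:
            log.append(AuditEntry(field, value, canonical,
                                  reason="grouped under canonical label",
                                  confidence=1.0))  # exact table lookup
        return canonical

    log: list[AuditEntry] = []
    normalize_field("preferred_contact", "e-mail", log)
    print(log[0].original_value, "->", log[0].new_value)  # e-mail -> email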
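
And a minimal sketch of a stage 5 scoring rule, again deterministic by construction; the point weights are invented for illustration and are not Warren's actual rubric.

    def quality_score(narrative: str) -> int:
        """Deterministic 0-100 score built only from observable signals."""
        words = narrative.lower().split()
        length_points = min(len(words), 40)           # up to 40 points for length
        effort_points = min(len(set(words)) * 2, 40)  # up to 40 for varied vocabulary
        coherence_points = 20 if narrative.strip().endswith((".", "!", "?")) else 10
        return length_points + effort_points + coherence_points

    print(quality_score("We give because the arts kept our kids engaged after school."))  # 53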


Our Boundaries


✔️ We do not sell respondent data

✔️ We do not use respondent data to train AI models

✔️ We do not impute missing responses

✔️ We do not claim probability sampling when using non-probability methods

✔️ We do not report margins of error for non-probability samples

✔️ We do not use AI to process, code, or analyze respondent data


Questions About Our Methods?

We welcome scrutiny. If you have questions about how we conduct research, process data, or analyze results, we're happy to discuss. Contact us here.