
Methodology Framework
All Nesolagus surveys are designed in accordance with our Survey Methodology Framework, a comprehensive document that establishes standards for conversational survey design, bias prevention, ethical framing, validation protocols, and data quality assurance. The framework is grounded in established survey research literature and behavioral science principles.
AI in Our Process
Where AI helps
We use AI (Anthropic Claude) to assist with instrument design: generating discovery questions, drafting technical survey briefs, and constructing survey instruments from approved specifications. All AI-assisted outputs are reviewed and approved by the project lead before deployment.
Where AI stops
No AI, large language model, or machine learning system is used in any post-collection stage. All data processing, theme detection, archetype classification, quality scoring, and narrative analysis are performed through deterministic, rule-based methods. No respondent data is transmitted to any external AI service.
A detailed AI usage audit documenting every instance of AI involvement in the Warren platform is available upon request.
Published Methodology Disclosures
Greater Hartford Arts Council Donor Insights Survey
Description: Conversational donor intelligence survey conducted August - October 2025. 157 completed surveys from approximately 2,000 contacts. Web-based deployment via the Warren platform.
🔗 See Disclosure here
_______________________________________________________
Westover Student Experience Survey
Description: Conversational student experience survey conducted December 2025 - January 2026. 28 completed surveys from approximately 200 invited students. Web-based deployment via the Warren platform.
🔗 See Disclosure here
How We Handle Your Data
Every dataset passes through Warren's five-stage data processing pipeline. Every transformation is logged in a structured audit trail. No data imputation is performed: we never fill in missing responses or infer answers on behalf of participants.
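The pipeline-plus-audit-trail pattern described above can be sketched as follows. This is an illustrative Python sketch, not Warren's actual implementation; all names (`AuditEntry`, `run_pipeline`, `trim_fields`) and field choices are assumptions made for the example:

```python
from dataclasses import dataclass

@dataclass
class AuditEntry:
    """One logged transformation (names/fields are illustrative)."""
    stage: str
    row_id: int
    field_name: str
    original: str
    new: str
    reason: str
    confidence: float

def run_pipeline(rows, stages):
    """Apply each deterministic stage in order, accumulating audit entries."""
    audit_log = []
    for stage in stages:
        rows = stage(rows, audit_log)
    return rows, audit_log

# Example stage: normalize whitespace, logging every change it makes.
def trim_fields(rows, audit_log):
    cleaned = []
    for i, row in enumerate(rows):
        new_row = {}
        for key, value in row.items():
            trimmed = value.strip()
            if trimmed != value:
                audit_log.append(AuditEntry(
                    stage="field_cleaning", row_id=i, field_name=key,
                    original=value, new=trimmed,
                    reason="whitespace normalized", confidence=1.0))
            new_row[key] = trimmed
        cleaned.append(new_row)
    return cleaned

rows, log = run_pipeline([{"name": "  Ada "}], [trim_fields])
```

Because every stage is a plain function over the data plus a log, no step can silently alter a response: the original value, new value, and reason travel together in the audit record.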
Row-Level Validation
Empty rows removed, duplicates detected, timestamps verified for impossible dates and out-of-range durations.
Field-Level Cleaning
Whitespace normalized, encoding fixed. Narrative text is always preserved exactly as the participant entered it.
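The row-level checks above are simple, deterministic rules. A minimal sketch, assuming hypothetical row fields (`fields`, `submitted_at`, `duration_seconds`) and illustrative duration thresholds:

```python
from datetime import datetime

def validate_rows(rows):
    """Drop empty rows; flag duplicates, impossible dates, bad durations."""
    seen = set()
    kept, flagged = [], []
    for row in rows:
        # Empty row: no field contains any non-whitespace content.
        if not any(v.strip() for v in row["fields"].values()):
            continue
        # Duplicate: identical field contents already seen.
        key = tuple(sorted(row["fields"].items()))
        if key in seen:
            flagged.append((row, "duplicate"))
            continue
        seen.add(key)
        # Impossible date: submitted in the future.
        if datetime.fromisoformat(row["submitted_at"]) > datetime.now():
            flagged.append((row, "impossible timestamp"))
            continue
        # Out-of-range duration (10 s - 2 h is an assumed example window).
        if not (10 <= row["duration_seconds"] <= 7200):
            flagged.append((row, "out-of-range duration"))
            continue
        kept.append(row)
    return kept, flagged
```

Flagged rows carry a human-readable reason rather than being silently discarded, so a reviewer can see exactly why each response was set aside.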
Spam and Low-Effort Detection
Blocklist filtering, repetition detection, and minimum effort thresholds flag responses for review.
Response Normalization
Semantically equivalent responses grouped under canonical labels. Multi-select values parsed into structured arrays.
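Both stages reduce to lookup tables and thresholds. A sketch of what such rules might look like; the blocklist terms, word minimum, canonical labels, and `;` delimiter are all illustrative assumptions:

```python
import re

BLOCKLIST = {"asdf", "test"}          # assumed example placeholder terms
MIN_WORDS = 3                          # assumed minimum-effort threshold
CANONICAL = {                          # assumed example canonical labels
    "hartford": "Hartford, CT",
    "hartford ct": "Hartford, CT",
}

def flag_low_effort(text):
    """Return a flag reason, or None if the response passes."""
    words = text.lower().split()
    if any(w in BLOCKLIST for w in words):
        return "blocklist term"
    if len(words) < MIN_WORDS:
        return "below minimum effort"
    if len(words) > 1 and len(set(words)) == 1:
        return "repetition"
    return None

def normalize(value):
    """Map semantically equivalent responses to one canonical label."""
    key = re.sub(r"[^\w\s]", "", value).strip().lower()
    return CANONICAL.get(key, value)

def parse_multiselect(raw):
    """Split a delimited multi-select string into a structured list."""
    return [v.strip() for v in raw.split(";") if v.strip()]
```

Note that flagging marks a response for human review; the rules never rewrite or delete narrative text.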
Quality Scoring
Each response scored 0-100 on length, effort, and coherence. Every action logged with original value, new value, reason, and confidence score.
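A 0-100 score over length, effort, and coherence can be computed entirely from deterministic signals. The weights and proxies below are illustrative assumptions, not Warren's actual scoring formula:

```python
def quality_score(text):
    """Score a response 0-100 (illustrative weights and thresholds)."""
    words = text.split()
    # Length: up to 40 points, saturating at 50 words (assumed cap).
    length = min(len(words) / 50, 1.0) * 40
    # Effort: vocabulary variety, up to 30 points.
    effort = min(len(set(w.lower() for w in words)) / max(len(words), 1), 1.0) * 30
    # Coherence: crude proxy, assumed here as sentence-final punctuation.
    coherence = 30 if text.strip().endswith((".", "!", "?")) else 15
    return round(length + effort + coherence)
```

Because every input to the score is a countable property of the text, the same response always receives the same score, and each component can be logged alongside the transformation it justified.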
Our Boundaries
✔️ We do not sell respondent data
✔️ We do not use respondent data to train AI models
✔️ We do not impute missing responses
✔️ We do not claim probability sampling when using non-probability methods
✔️ We do not report margins of error for non-probability samples
✔️ We do not use AI to process, code, or analyze respondent data
Questions About Our Methods?
We welcome scrutiny. If you have questions about how we conduct research, process data, or analyze results, we're happy to discuss. Contact us here.