

❶ Data Collection Strategy
This study employed a web-based conversational survey (chatbot) as its primary data collection method. The survey was designed as a structured conversational experience delivered through Nesolagus LLC's proprietary platform, Warren. Rather than completing a static questionnaire, participants engaged in a guided dialogue with a chatbot persona representing Amanda Roy, CEO of the Greater Hartford Arts Council (GHAC). The conversational design incorporated dynamic branching logic, multimedia elements (including a video welcome from the CEO), and adaptive response pathways based on participant input.
The survey included a mix of question types: single-select multiple choice, multi-select checkboxes, 5-point Likert-type semantic differential scales, open-ended text responses, and forced-ranking exercises. The instrument was designed to collect both quantitative data (connection type, engagement levels, communication preferences, demographic characteristics) and rich qualitative narratives (personal stories about arts engagement, perceived barriers, future vision for the arts ecosystem). Narrative responses were collected via text input, with the platform also supporting video and audio response capture for select questions.
❷ Who Sponsored the Research and Who Conducted It
Sponsor
Greater Hartford Arts Council (GHAC), Hartford, CT. GHAC directly contracted and funded this research.
Conducted By
Nesolagus, LLC (nesolagus.com), a relationship intelligence company specializing in consent-first conversational survey research. Aaron Lyles, Founder and CEO, served as project lead. Survey design, data collection, and analysis were conducted by Nesolagus. The Warren survey tool used for data collection in this study was developed in collaboration with a technical partner under a prior arrangement. Nesolagus has since independently rebuilt the platform. The research methodology, instrument design, analytical framework, and all client-facing deliverables were created solely by Nesolagus.
❸ Measurement Tools / Instruments
The survey instrument was a 20-block conversational script delivered via the Warren chatbot platform. The instrument was developed through a formal discovery workshop process with GHAC stakeholders (conducted June 11, 2025) and iteratively refined based on project objectives outlined in a signed Scope of Work. The full conversational script, including all question text, response options, branching logic, transition messages, and multimedia cues, is available upon request.
AI-Assisted Instrument Development
The survey instrument was designed by the project lead through a collaborative discovery workshop process and iterative development sessions with a technical partner. AI (Anthropic Claude) was consulted during question refinement but did not generate the instrument. The conversational script was manually authored and transferred into the Warren survey platform for deployment.
Opening Sequence
A 45–60 second video welcome from Amanda Roy (CEO), followed by a text-based introduction establishing purpose, confidentiality, estimated time (8–10 minutes), and consent. Participants were offered “Let’s start” or “Tell me more first” before proceeding. A secondary informational message was provided for those selecting “Tell me more.” Continuing constituted informed consent per the posted Terms of Service and Privacy Policy.
Question Types and Distribution
Single-select multiple choice (connection type, arts importance, relationship preferences), multi-select with randomized option order (arts ecosystem connections, communication preferences, barriers to engagement, ecosystem priorities), 5-point semantic differential scales (GHAC perception across multiple dimensions), open-ended text with optional video/audio response (personal stories, barriers, future vision), and forced-ranking exercises (priority areas).
Contextual Framing and Branching
Dynamic response messages acknowledged participant selections before subsequent questions (e.g., current supporters received different affirmation language than first-time contacts). The instrument included conditional routing based on connection type, supporter status, and specific response selections. All question text used neutral, non-leading language reviewed against the Nesolagus Survey Methodology Framework v2.0, a 66-page methodology document grounded in behavioral science principles and established survey design standards.
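Conditional routing of this kind can be sketched as a small rule table. The connection types, affirmation messages, and block names below are hypothetical illustrations, not the actual Warren configuration or GHAC survey content:

```python
# Illustrative sketch of rule-based conversational routing.
# All keys and messages here are hypothetical examples.
ROUTES = {
    "current_supporter": {
        "affirmation": "Thank you for your continued support!",
        "next_block": "supporter_deep_dive",
    },
    "first_time_contact": {
        "affirmation": "Welcome -- we're glad you're here.",
        "next_block": "introduction_block",
    },
}

def route(connection_type: str) -> dict:
    """Return the acknowledgment message and next block for a respondent."""
    # Unrecognized connection types fall back to a generic path.
    return ROUTES.get(connection_type, {
        "affirmation": "Thanks for sharing.",
        "next_block": "general_path",
    })
```

The same table-driven approach extends naturally to routing on supporter status or specific response selections.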
Demographic Collection
Optional demographic questions (age range, gender identity, racial/ethnic background, donation range, ZIP code) were placed at the end of the survey, preceded by a consent gate asking willingness to share demographic information and an explanation of purpose. Participants could decline the entire demographics section or skip individual questions.
❹ Population Under Study
The target population was individuals on GHAC’s donor and newsletter email lists, approximately 2,000 contacts. This population included current donors, lapsed donors, former workplace giving campaign participants, individuals connected through artists or arts organizations, and newsletter subscribers who may not have had a direct financial relationship with GHAC. The population was defined by inclusion on GHAC’s existing contact lists, which covered individuals with some prior connection to the organization across the 34-town Greater Hartford, Connecticut region.
It should be noted that GHAC’s contact lists were not fully organized or recently updated at the time of deployment. Some contacts may have been inactive, had outdated email addresses, or no longer had a relationship with GHAC. One secondary objective of the survey was to help GHAC assess the current state and reach of their existing contact database.
❺ Method Used to Generate and Recruit the Sample
Sampling Method
This was a non-probability sample. The survey was distributed to approximately 2,000 contacts on GHAC’s existing donor and newsletter email lists. No probability-based selection was employed; the full list was solicited (attempted census of the list population).
Recruitment Method
Participants were contacted exclusively via email campaign. There was no social media campaign, no placement on the GHAC website, and no paid advertising or public solicitation. Email invitations included a direct link to the chatbot experience.
Eligibility
Any individual who received the email invitation and had internet access was eligible to participate. No demographic quotas or screening criteria were applied.
Cooperation Strategies
The email invitation featured a personal appeal from the CEO. The chatbot opened with a video message from Amanda Roy establishing trust and framing participation as a listening exercise rather than a fundraising appeal (“This isn’t about asking for donations or adding you to another mailing list”). The conversational format was designed to create a psychologically safe environment using principles of conversational affirmation, trust-based framing, and progressive disclosure. No financial compensation or incentives were offered.
❻ Method(s) and Mode(s) of Data Collection
Collection
Data were collected via a single mode: self-administered web-based conversational survey (chatbot). The chatbot was delivered through the Warren platform, accessible via standard web browsers on desktop and mobile devices. The survey incorporated text input, button selection, multi-select checkboxes, slider scales, video/audio response recording (via VideoAsk integration for select narrative questions), and video playback. The survey was offered in English only. The estimated completion time was 8–10 minutes. The conversational interface presented one question or message at a time in a chat-style format, with auto-advancing transitions between contextual messages.
❼ Dates of Data Collection Used to Generate and Recruit the Sample
Data collection took place from August 18, 2025 through late October 2025, a period of approximately 10 weeks.
❽ Sample Sizes and Discussion of the Precision of the Results
Contacts Solicited | ~2,000
Surveys Started | 457
Surveys Completed | 157
Completion Rate | 34.4% (157 of 457 starts)
Response Rate | ~7.9% (157 of ~2,000 contacts)
Narrative Responses | ~260
Demographic Opt-in | 78%
As this was a non-probability sample, traditional margins of sampling error are not applicable and are not reported. Results are descriptive and should be interpreted as representing the views of those who chose to participate, not as statistically generalizable estimates of the full donor/contact population. The primary analytical value of this study lies in the qualitative narrative data and the identification of donor engagement archetypes and behavioral patterns rather than point estimates of population parameters.
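The completion and response rates above follow directly from the reported counts; a minimal check of the arithmetic:

```python
# Verifies the headline rates against the raw counts reported above.
contacts_solicited = 2000   # approximate list size
started = 457
completed = 157

completion_rate = completed / started * 100           # = 34.35...%, reported as 34.4%
response_rate = completed / contacts_solicited * 100  # = 7.85%, reported as ~7.9%

assert abs(completion_rate - 34.4) < 0.1
assert abs(response_rate - 7.9) < 0.1
```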
❾ How the Data Were Weighted
No weighting was applied to the data. The study was designed as a qualitative-dominant mixed-methods inquiry focused on narrative insights, donor archetype identification, and relationship mapping rather than population-representative quantitative estimation. Given the non-probability sampling approach and the qualitative emphasis of the research objectives, weighting was neither appropriate nor attempted.
❿ How the Data Were Processed and Procedures to Ensure Data Quality
Data processing was handled through the Warren platform’s configurable data cleaning pipeline, which operates at the point of data import and produces both cleaned data and a complete audit trail documenting all transformations. The pipeline consists of five stages:
Stage 1: Row-Level Validation
Empty rows with no substantive data were detected and removed. Duplicate responses were identified by matching session IDs and near-identical response content. Timestamps were validated for impossible dates and out-of-range durations.
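The row-level checks above can be sketched as follows. The field names (`session_id`, `duration_sec`) and duration bounds are assumptions, not the actual Warren schema, and near-identical content matching is simplified here to exact session-ID deduplication:

```python
# Illustrative sketch of Stage 1 row-level validation.
def validate_rows(rows, min_duration=30, max_duration=7200):
    seen = set()
    kept = []
    for row in rows:
        # Remove empty rows with no substantive data beyond an ID.
        if not any(v for k, v in row.items() if k != "session_id"):
            continue
        # Remove duplicate responses by session ID.
        if row["session_id"] in seen:
            continue
        seen.add(row["session_id"])
        # Flag out-of-range completion durations for review.
        if not min_duration <= row.get("duration_sec", 0) <= max_duration:
            row["flagged"] = True
        kept.append(row)
    return kept
```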
Stage 2: Field-Level Cleaning
Whitespace was normalized (trimmed, multiple spaces collapsed). Case normalization was applied selectively: categorical response fields were normalized for consistency, while open-ended narrative text was preserved as entered. UTF-8 encoding issues, smart quote inconsistencies, and character encoding artifacts were repaired.
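A simplified sketch of this field-level cleaning, assuming the common smart-quote cases; the actual pipeline's repair rules are configurable and more extensive:

```python
import re

# Illustrative sketch of Stage 2 field-level cleaning.
SMART_QUOTES = {"\u2018": "'", "\u2019": "'", "\u201c": '"', "\u201d": '"'}

def clean_field(value: str, categorical: bool = False) -> str:
    v = value.strip()
    v = re.sub(r"\s+", " ", v)          # collapse repeated whitespace
    for smart, plain in SMART_QUOTES.items():
        v = v.replace(smart, plain)     # repair smart-quote artifacts
    if categorical:
        v = v.lower()                   # normalize case for categorical fields only
    return v                            # narrative text keeps its original case
```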
Stage 3: Spam and Low-Effort Detection
A configurable blocklist was applied to flag known spam patterns. Repetition detection flagged responses consisting of repeated characters or phrases. A minimum effort threshold was enforced per field type based on character count and word count. Responses falling below quality thresholds were flagged for review and excluded from analysis.
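These checks can be sketched with a tiny blocklist and a repetition regex; the terms and thresholds below are illustrative stand-ins for the pipeline's configurable rules:

```python
import re

# Illustrative sketch of Stage 3 spam and low-effort detection.
BLOCKLIST = {"asdf", "lorem ipsum"}   # hypothetical spam patterns

def flag_low_effort(text: str, min_words: int = 3) -> bool:
    t = text.strip().lower()
    if any(term in t for term in BLOCKLIST):
        return True                       # known spam pattern
    if re.fullmatch(r"(.+?)\1{2,}", t):
        return True                       # same character/phrase repeated 3+ times
    if len(t.split()) < min_words:
        return True                       # below minimum effort threshold
    return False
```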
Stage 4: Response Normalization
For categorical fields with open text entry (e.g., race/ethnicity), semantically equivalent responses were grouped under canonical labels using configurable normalization rules. Multi-select values stored as semicolon-separated strings were parsed into structured arrays for analysis.
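A minimal sketch of both operations; the canonical-label rules shown are illustrative examples of the configurable mapping, not the actual rule set:

```python
# Illustrative sketch of Stage 4 response normalization.
CANONICAL = {
    "african american": "Black or African American",
    "black": "Black or African American",
    "caucasian": "White",
    "white": "White",
}

def normalize_category(raw: str) -> str:
    # Map semantically equivalent open-text entries to one canonical label.
    return CANONICAL.get(raw.strip().lower(), raw.strip())

def parse_multiselect(raw: str) -> list:
    # Multi-select values arrive as semicolon-separated strings.
    return [part.strip() for part in raw.split(";") if part.strip()]
```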
Stage 5: Quality Scoring
Each response received a per-response quality score (0–100) based on response length, apparent effort, and coherence. Completion status was tracked (partial vs. full), and completion scoring recorded how far each respondent progressed through the survey. Analysis was conducted on completed surveys (n=157).
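The scoring logic can be sketched as a deterministic rule stack; the components and weights below are hypothetical, not the production thresholds:

```python
# Illustrative sketch of Stage 5 per-response quality scoring (0-100).
def quality_score(text: str, completed: bool) -> int:
    words = text.split()
    score = 0
    score += min(len(words), 40)                               # length component, capped
    score += 30 if len(set(words)) > len(words) // 2 else 10   # crude variety/effort proxy
    score += 30 if completed else 10                           # completion component
    return min(score, 100)
```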
The pipeline produced a structured audit log documenting every action taken, including filtering, normalization, modification, and flagging, with the original value, new value, reason, confidence score, and an indication of whether each action was overridable by a human reviewer.
Post-Collection Analysis Methods and AI Disclosure
No AI, machine learning, or large language model was used in any stage of data processing, qualitative coding, or quantitative analysis. This was verified through a code-level audit of the entire Warren platform codebase conducted February 18, 2026.
Theme detection uses deterministic keyword matching against predefined lexicons (~120 positive and ~110 negative sentiment terms across 12 universal and 8 GHAC-specific theme categories). Not machine learning classification.
Sentiment scoring uses positive/negative word-count methods against predefined word lists. Not a machine learning sentiment model.
Respondent segmentation (archetype classification) uses rule-based keyword pattern matching in JSON configuration. Not machine learning clustering or classification.
Quality scoring uses configurable deterministic rules (response length, word count, character count thresholds). Not machine learning prediction.
Spam and gibberish detection uses blocklist terms, regex pattern matching, and repetition detection. Not machine learning content filtering.
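The word-count sentiment method above can be sketched in a few lines. The word lists here are tiny illustrations; the actual lexicons contain roughly 120 positive and 110 negative terms:

```python
# Minimal sketch of deterministic word-count sentiment scoring.
# No machine learning model is involved at any point.
POSITIVE = {"love", "inspiring", "vibrant", "wonderful"}   # illustrative subset
NEGATIVE = {"expensive", "inaccessible", "disappointing"}  # illustrative subset

def sentiment_score(text: str) -> int:
    words = text.lower().split()
    # Net count: positive hits minus negative hits.
    return sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)
```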
Report narratives, executive summaries, and strategic recommendations were drafted collaboratively by the project lead with AI assistance (Claude Code) during dashboard development, then reviewed and approved by the project lead. No AI processes respondent data at runtime to generate text.
All analysis code and methodology documentation is available for audit upon request.
⓫ Limitations of the Design and Data Collection
As with all research, this study has limitations that should be considered when interpreting the results:
Non-probability sample
Results reflect the views of those who chose to participate and cannot be generalized to the full population of GHAC donors, supporters, or the Greater Hartford community. Self-selection bias may be present.
Contact list quality
GHAC’s email lists were not fully updated at the time of the study. The effective reach may be substantially lower than the approximately 2,000 contacts solicited.
Single distribution channel
Distributed exclusively via email with no social media promotion or website placement.
Completion attrition
Of 457 who started, 157 completed (34.4%). Completers may differ from drop-outs, introducing potential non-response bias.
Mode effects
The conversational chatbot format represents a novel survey modality. Response patterns may differ from traditional survey formats.
Qualitative emphasis
Designed primarily for qualitative insights and archetype identification. Quantitative findings should be interpreted as directional.
Qualitative analysis review
Thematic analysis was conducted by the project lead in collaboration with GHAC stakeholders. Formal independent analyst verification using the organization’s Human Verification Checklist protocol was not performed for this project. This protocol is being implemented for future engagements.