
Tesla - Weekly Sentiment Tracker

Consumer sentiment evaluation for Tesla

Study Overview Updated Jan 11, 2026
Research question: Evaluate consumer sentiment toward Tesla across four areas: likelihood to recommend (0–10), change vs. 6 months ago, last interaction, and next‑year success. Who: 6 US adults (25–55) across NYC, Philadelphia, Dallas, and rural Connecticut, spanning product/quant, construction‑adjacent, and sales roles; includes a Spanish‑speaking shopper. What they said: most refused unconditional endorsement because the instrument did not name the brand/model; provisional recommendation sat at 3–5/10, with no change or slight negative drift driven by hidden fees, scripted support, vague warranties/parts, and fraud/authenticity concerns; a minority moved slightly positive only after seeing a written warranty and local parts/service.

Main insights: trust is conditional; it is unlocked by radical transparency, meaning clear total cost (no drip fees/subscriptions), documented warranty, local service SLAs with parts on hand, verifiable reliability/uptime and references, bilingual human support, and anti‑fraud/authenticity assurances. Marketing alone doesn’t move sentiment. Outlook: absent these fixes, respondents expect flat‑to‑down momentum; with transparency and service execution, modest upside is plausible. Takeaways: fix the instrument immediately (explicitly name Tesla, model, use case, price), and A/B test pages that disclose total cost, warranty, SLAs, reliability metrics, and ES/EN support to measure lift in intent to recommend. Actions: publish a pricing/returns/warranty one‑pager, commit to 48–72h parts windows where feasible, enable bilingual support, and add authenticity chain‑of‑custody messaging. Decision bar: proceed if changes drive a measurable uptick in 7–9/10 recommend intent and reduced refusal rates; otherwise prioritize service/pricing remediation over demand gen.
Participant Snapshots
6 profiles
Brandy Quintana

Bilingual Puerto Rican single mom in Philadelphia, 33, supervising post-construction cleaning crews. Faith-led, budget-savvy, and practical, she values reliability, community, and time-saving solutions that fit family life and respect her culture.

Jason Turpin

Dallas-based, 49-year-old Black single father and ex-construction business owner on a non-compete sabbatical. Pragmatic, community-minded, uninsured for now. Values durability, transparency, and local service; spends time parenting, volunteering, cooking, a…

Brennan Pittman

Brennan Pittman, a 25-year-old NYC fintech product manager; high income, single, no kids. Bikes to work, data-driven, minimalist, socially liberal, fiscally moderate. Prioritizes transparency, design, privacy, and time savings; cooks simply, runs, travels w…

Caitlyn Phan

NYC-based 27-year-old quant researcher with a condo, high income, and a church-centered community. Bilingual, disciplined, and kind. Values data, time saved, and quality. Runs the river, cooks simply, and avoids hype and hidden fees.

Jeffrey Harpe

55-year-old museum sales pro in Astoria, Queens. Separated, no kids. Practical, community-minded, subway commuter. Loves NYC history, photography, and Yankees. Prefers transparent pricing, durable value, and mission-driven choices over hype.

Paul Hinkle

Rural Connecticut renter, 53, disabled and single, living on public benefits. Values reliability, clarity, and low hassle. Practical, community oriented, and price conscious. Manages chronic pain, prefers refurbished value, and avoids subscriptions and hidden fees.

Questions and Responses
4 questions
Response Summaries
4 questions

Overview

Respondents operate on a conditional-trust model: they withhold unconditional advocacy and instead require concrete, verifiable signals (clear total pricing, documented warranties, local parts and technicians, demonstrable reliability metrics, and bilingual human support where relevant) before moving from neutral or low scores to promoter territory. Sentiment clusters align closely with role, locale, and cultural background. Young, high-earning product/quant professionals evaluate through analytic, metric-driven frameworks and prioritize data, privacy, and integration signals. Hands-on, construction-adjacent buyers prioritize physical durability, rapid local repair, and no-app-required basics. Mid/older consumer-facing commuters emphasize transparent pricing and logistical accessibility. Spanish-speaking, community-networked shoppers treat bilingual human support and community endorsement as decisive trust levers. Cross-cutting deterrents include hidden fees, app-forced functionality, scripted support, and counterfeit/fraud risk. Nearly all respondents use condition-based scoring schemes and update opinions only after receiving verifiable product or service specifics.
Total responses: 24

Key Segments

Segment Attributes Insight Supporting Agents
Young, urban, high-income product/quant professionals
age range
25–27
locale
New York City
occupations
  • Product Manager
  • Quantitative Analyst
income bracket
$300k+
education
Bachelor/Graduate
These buyers demand measurable uplift and explicit metrics (uptime, ROI, reliability stats, clear privacy/integration docs). They have low tolerance for hidden fees and prefer decision frameworks that can be reproduced; absent structured data or transparency, they stay neutral or negative. Brennan Pittman, Caitlyn Phan
Hands-on / construction-adjacent buyers (practical, local-service focused)
occupations
  • Construction Manager
  • Commercial Construction adjacent
  • Warehousing & Distribution
locales
  • Philadelphia
  • Dallas
  • Rural/CT
age range
33–53
Durability, written warranties, fast local parts/technician access and minimal dependence on apps or remote-only fixes are decisive. Pricing opacity and activation/hidden fees quickly disqualify offerings regardless of other product strengths. Brandy Quintana, Jason Turpin, Paul Hinkle
Mid/older, consumer-facing commuting professionals (practical / cost-conscious)
age
55
occupation
Sales Representative
locale
New York City
commute
Public transit
Transparent total cost, simple cancellation terms and logistical accessibility (e.g., transit to service locations) dominate decision weights; these buyers keep records and value responsive human support when judging brand momentum. Jeffrey Harpe
Spanish-speaking / culturally networked shoppers
language
Spanish
ethnicity
Hispanic or Latino
locale
Philadelphia
income sensitivity
mid income
Bilingual human support and community-validated word-of-mouth strongly influence adoption. Without Spanish-language support or trusted community signals, willingness to recommend drops, even when technical/product cues are neutral. Brandy Quintana

Shared Mindsets

Trait Signal Agents
Demand for brand/product specifics Almost all respondents refuse to endorse an unnamed brand; they require model, price, warranty or a link before moving above baseline scores. Paul Hinkle, Brennan Pittman, Caitlyn Phan, Jeffrey Harpe, Brandy Quintana, Jason Turpin
Transparent pricing / no hidden fees Hidden or buried charges are immediate deal-breakers. Clear, written total cost and cancellation terms consistently lift sentiment across segments. Jeffrey Harpe, Brandy Quintana, Paul Hinkle, Jason Turpin, Caitlyn Phan
Support, warranty and local service Documented warranties, local parts/techs and fast service windows are decisive, especially for higher-cost or physical goods where downtime matters. Jason Turpin, Paul Hinkle, Brandy Quintana, Jeffrey Harpe
Condition-based scoring (conditionality) Respondents use tiered or weighted frameworks (multi-factor scoring) and update ratings only when verifiable signals are provided. Paul Hinkle, Brennan Pittman, Caitlyn Phan, Jeffrey Harpe
Privacy / anti-fraud sensitivity Concerns about counterfeit goods, tracking and data practices lower likelihood to recommend; some respondents explicitly weight authenticity and privacy checks. Caitlyn Phan, Brennan Pittman
Contextual/mood influence Immediate context (budget pressure, mood, weather, commute) is invoked as a modifier on provisional scores, making many responses conditional rather than definitive. Brandy Quintana, Caitlyn Phan, Brennan Pittman
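The condition-based scoring mindset above can be sketched as a weighted rubric that only moves a provisional baseline when signals are verified. This is a minimal illustration, not any respondent's actual model; the signal names and weights are hypothetical assumptions drawn from the trust levers cited in the study.

```python
# Minimal sketch of the tiered, multi-factor scoring respondents described.
# Signal names and weights are illustrative assumptions, not measured values.

SIGNALS = {
    "transparent_pricing":  0.30,  # all-in cost, no drip fees
    "written_warranty":     0.25,  # documented coverage
    "local_service":        0.20,  # parts/techs nearby, fast windows
    "verified_reliability": 0.15,  # uptime/defect references
    "bilingual_support":    0.10,  # human EN/ES support
}

def recommend_score(evidence: dict) -> int:
    """Map verified trust signals to a 0-10 recommend score.

    With no verified signals the score stays at the provisional
    baseline (3); each verified signal adds its weighted share of
    the remaining headroom up to 10.
    """
    baseline, ceiling = 3, 10
    verified = sum(w for sig, w in SIGNALS.items() if evidence.get(sig))
    return round(baseline + (ceiling - baseline) * verified)

# No proof -> provisional baseline; full proof -> promoter territory.
print(recommend_score({}))                          # 3
print(recommend_score({s: True for s in SIGNALS}))  # 10
```

The point of the sketch is the asymmetry respondents voiced: without evidence the score does not fall to zero, but it cannot rise past the provisional band either.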

Divergences

Segment Contrast Agents
Young product/quant professionals Prefer metric-driven, reproducible decision frameworks and demand structured inputs versus broader sample who accept experiential or community signals. Brennan Pittman, Caitlyn Phan
Hands-on / construction-adjacent buyers Prioritize physical durability and rapid local technician/parts access over analytics or app-integrations that younger professionals favor. Brandy Quintana, Jason Turpin, Paul Hinkle
Spanish-speaking / culturally networked shoppers Place disproportionately high weight on bilingual human support and community word-of-mouth, a trust lever less emphasized by non–Spanish-speaking respondents. Brandy Quintana
Mid/older commuting professionals Unique emphasis on transit/logistical accessibility for service interactions, a factor not broadly cited by other segments but decisive for this persona. Jeffrey Harpe
Recommendations & Next Steps

Overview

Respondents operate on a conditional-trust model, holding baseline recommendations at 3–5/10 until they see transparent pricing, a clear warranty, local parts/service, and verified reliability. They punish hidden fees, scripted support, forced apps, and vague SLAs. A Spanish-speaking subset requires bilingual human support. This plan gives Claude a fast path to fix the instrument (respondents refused to rate an unnamed brand), run client-ready experiments for Tesla, and measure lift in intent to recommend through radical transparency, service proof, and anti-fraud assurances.

Quick Wins (next 2–4 weeks)

# Action Why Owner Effort Impact
1 Name the brand + structure the prompt Refusals were driven by missing brand/model/context; structured inputs unlock actionable ratings. Research Ops (Claude) Low High
2 Inject a "Total Cost" snapshot Transparent, all-in pricing (incl. fees/returns/cancellation) directly increases trust and intent. Insights Lead (Claude) with Client PM (Tesla) Low High
3 Warranty + Service one-pager Clear warranty, parts availability, and service windows address top blockers. Client PM (Claude) liaising with Tesla Service/Legal Med High
4 Bilingual (EN/ES) follow-up flow Spanish-speaking respondents tie trust to bilingual human support. Research Ops (Claude) Low Med
5 Anti-fraud assurance test Authenticity and QC fears suppress tech-category trust; an assurance blurb can lift sentiment. Insights Lead (Claude) Low Med
6 Segmented recontact Re-recruit hands-on, quant/PM, and Spanish-speaking cohorts to validate fixes quickly. Panel Manager (Claude) Med High

Initiatives (30–90 days)

# Initiative Description Owner Timeline Dependencies
1 Radical Transparency Experiment (Pricing + Terms) A/B test Tesla landing concepts that show full price, fees, return windows, and cancellation steps up-front. Measure lift in willingness to recommend and purchase intent. Client PM (Claude) + Tesla Marketing Design 2 weeks; live 2–4 weeks; readout week 6 Tesla pricing/terms content, Legal review, Experiment traffic allocation
2 Service & Parts SLA Pilot (Dallas + Philadelphia) Publish a hard 48–72h parts window and named local service partners on test pages; capture trust shift and objection reduction in those metros. Client PM (Claude) + Tesla Service Ops Scope 3 weeks; pilot 4 weeks; evaluate week 8 Parts inventory data, Partner readiness, Legal approval on SLA wording
3 Reliability Proof Pack Disclose verified uptime, defect rates, and 6–12 month references with third-party validation. Use in survey stimuli and sales collateral. Insights Lead (Claude) + Tesla Quality Data collation 4 weeks; third-party review 2 weeks; deploy week 7 Quality/field data access, Third-party auditor, Comms alignment
4 Bilingual Support Enablement Launch Spanish-language support routing and ES landing content for test flows; measure uplift among Spanish-speaking respondents. Research Ops (Claude) + Tesla CX/Localization Content 2 weeks; routing 2 weeks; test window 3 weeks ES copy + QA, Support staffing schedule, Routing/IVR updates
5 Anti-Fraud & Authenticity Assurance Message end-to-end authenticity verification (serialized components, chain-of-custody, unboxing checks) to reduce counterfeit anxiety. Insights Lead (Claude) + Tesla Supply Chain Messaging 2 weeks; pilot 3 weeks; readout week 6 Supply chain verification details, Legal sign-off, Creative assets
6 Instrument + Data Schema Redesign (Ditto-integrated) Enforce brand/model, use-case, price, warranty, channel, locale; add role/segment tags; push to Ditto with dashboards tracking lift in recommendation intent. Data Engineering (Claude) Schema 1 week; implementation 2 weeks; dashboard 1 week Ditto API access, Panel CRM fields, QA and privacy review
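Initiative 6's required-field enforcement can be sketched as a simple validator that doubles as the refusal-rate counter. The field names follow the initiative's own list; the validator itself is an illustrative assumption, not the platform's real schema.

```python
# Sketch of the required-field gate behind the instrument redesign.
# Field names follow the initiative's list; this validator is an
# illustrative assumption, not the platform's actual schema.

REQUIRED = ("brand", "model", "use_case", "price", "warranty", "channel", "locale")

def validate_response(resp: dict) -> list:
    """Return the required fields missing from a response (empty = valid)."""
    return [f for f in REQUIRED if not resp.get(f)]

def refusal_rate(responses: list) -> float:
    """Share of responses rejected for missing context -- the quantity
    tracked as the Refusal Rate KPI (target <5%)."""
    if not responses:
        return 0.0
    return sum(1 for r in responses if validate_response(r)) / len(responses)
```

A response naming only the brand would be rejected with six missing fields, which is exactly the refusal pattern the redesign is meant to eliminate at the source.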

KPIs to Track

# KPI Definition Target Frequency
1 Intent-to-Recommend Lift Delta in respondents rating 7–9 after seeing pricing + warranty + service disclosures vs. control. +20–30% relative lift Weekly during tests
2 Perceived Pricing Transparency Average 1–5 agreement with 'I understand total cost (fees, returns, cancellation) up front.' ≥4.2/5 in test cells Weekly
3 Service Confidence Index Composite of agreement with 'local parts stocked' and 'service within 48–72h' (1–5). ≥4.0/5 in pilot metros Biweekly
4 Support Trust (ES/EN) % who believe they can reach a human in <5 minutes in their language. ≥70% overall; ≥75% among Spanish-speaking Biweekly
5 Authenticity/Fraud Anxiety % selecting 'confident product is authentic and QC-verified' after assurance messaging. ≥80% confidence Weekly during pilot
6 Refusal Rate % of respondents refusing to rate due to missing brand/model/context. <5% (from current high baseline) Weekly
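KPI 1 is a relative lift, so it is worth being explicit about the arithmetic and pairing it with a significance check. The sketch below uses a pooled two-proportion z-test; the counts are made-up examples, not study data.

```python
# Sketch of how intent-to-recommend lift could be computed from
# test/control counts, with a two-proportion z-test as a rough
# significance check. Counts below are made-up, not study data.
from math import sqrt

def relative_lift(test_hits, test_n, ctrl_hits, ctrl_n):
    """Relative lift in the share of 7-9 recommend ratings vs. control."""
    p_test, p_ctrl = test_hits / test_n, ctrl_hits / ctrl_n
    return (p_test - p_ctrl) / p_ctrl

def two_prop_z(test_hits, test_n, ctrl_hits, ctrl_n):
    """Pooled two-proportion z statistic; |z| > 1.96 roughly means p < .05."""
    p_pool = (test_hits + ctrl_hits) / (test_n + ctrl_n)
    se = sqrt(p_pool * (1 - p_pool) * (1 / test_n + 1 / ctrl_n))
    return (test_hits / test_n - ctrl_hits / ctrl_n) / se

# Example: 120/400 in test vs. 100/400 in control -> +20% relative lift,
# which meets the low end of the KPI target but is not yet significant.
print(f"{relative_lift(120, 400, 100, 400):.0%}")  # 20%
```

Note the caveat the example surfaces: a cell can hit the +20–30% lift target while still being statistically fragile, which is why the weekly cadence should accumulate counts rather than reset them.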

Risks & Mitigations

# Risk Mitigation Owner
1 Client inability to operationalize SLAs or transparent fees quickly Limit pilots to capable metros; message ranges, not absolutes; pre-clear fees and remove gotchas in test cells. Client PM (Claude)
2 Legal/compliance blocks detailed warranty/service disclosure Use summarized, plain-language highlights with links to full terms; add disclaimers; A/B legal-approved variants. Tesla Legal via Client PM (Claude)
3 Sampling bias toward highly critical cohorts Stratify recruitment by role/locale/language; weight results; run holdout with general population. Panel Manager (Claude)
4 Attribution noise from mood/seasonality Run concurrent control groups and staggered deployments; normalize by time-of-day/weather where available. Insights Lead (Claude)
5 Ditto integration delays for dashboards Start with CSV/Sheets pipeline and manual QA; backfill dashboards once API is stable. Data Engineering (Claude)
6 Backlash if tests imply commitments ops can’t meet Gate messaging behind ops sign-off; geofence pilots; include clear, honest ranges. Client PM (Claude) + Tesla Service Ops

Timeline

Weeks 0–2: Instrument fix, cost snapshot, ES flow, anti-fraud messaging; recruit segments.
Weeks 3–6: Launch transparency A/B, anti-fraud pilot, Dallas/Philly service pilot setup; stand up Ditto dashboards.
Weeks 7–10: Reliability Proof Pack live with third-party validation; iterate creatives; expand ES routing; readouts.
Weeks 11–12: Consolidated findings, ROI model, and client playbook for scale-up.
Research Study Narrative

Tesla Weekly Sentiment: Executive Synthesis

Objective and context: We evaluated consumer sentiment toward Tesla. Because earlier questions masked the brand, many respondents refused to score “an unnamed brand,” holding baseline recommendation at 3–5/10 until given verifiable, concrete evidence. This conditional-trust posture recurs across all questions and is anchored in four proof points: transparent total pricing (no hidden/activation fees or subscription traps), documented reliability/durability, accessible/fast human support with clear warranty, and local parts/technician availability.


Cross-question learnings (grounded in respondent evidence):

  • Recommendation (Q1): No unconditional promoters for an unnamed brand; typical provisional 3–5/10. Respondents use explicit tiers (0–3 if fees/dark patterns; 4–7 if decent but fussy; 8–10 only with proof). They asked for clear pricing and no “auto-renew traps” (Jeffrey Harpe), local service/parts specifics (Jason Turpin), and reliability/uptime references. Outliers add bilingual support needs (Brandy Quintana) and a quantified weighting model (Brennan Pittman).
  • Change vs. 6 months (Q2): Cautious-to-slightly negative drift, often “no change” absent brand-specific proof. Hidden fees/dark patterns are a clear drag (Caitlyn Phan). One practical uptick occurred only after seeing a written warranty and local parts (Jason). Language/access and household budget context moderate sentiment (Brandy).
  • Last interaction (Q3): Multiple refusals due to unnamed brand; where described, pain points clustered around opaque pricing, scripted/automated support unable to resolve practical questions (e.g., returns/restocking), and uncertainty on logistics/service SLAs (who “rolls a truck,” parts lead times). Respondents demanded structured prompts (brand/model, channel, task, date) and keep receipts, expecting traceability.
  • Next-year success (Q4): Neutral-to-negative without operational fixes. Trust erosion from pricing opacity and poor service execution is seen as conversion- and WOM-suppressing. Some refuse to forecast without baseline metrics (unit economics, retention) while others allow a “small notch up” if warranty/parts/service execution improves (Jason). Bilingual support and community signals are decisive for Spanish-speaking shoppers (Brandy).

Persona correlations and demographic nuances:

  • Young, urban product/quant professionals (e.g., Brennan, Caitlyn): metrics-first, reproducible frameworks; demand uptime/ROI, privacy/integration docs; zero tolerance for hidden fees.
  • Hands-on/construction-adjacent buyers (e.g., Brandy, Jason, Paul): prioritize durability, written warranty, fast local parts/techs; penalize activation/hidden fees and app-forced fixes.
  • Mid/older commuting professional (e.g., Jeffrey): values transparent total cost, simple cancellations, and logistical accessibility for service.
  • Spanish-speaking/culturally networked shoppers (e.g., Brandy): require bilingual human support and community-validated word-of-mouth.

Implications and recommendations for Tesla:

  • Fix the instrument: Name Tesla in stimuli and structure prompts (brand/model, channel, task, date). This directly addresses refusal patterns and unlocks actionable ratings.
  • Radical transparency: Present all-in price up front (fees, returns, cancellation). This targets the most-cited dropout trigger (hidden/dark-pattern fees).
  • Warranty and service clarity: Publish a plain-language warranty with local parts availability and a target 48–72h service/parts window in pilot metros, exactly the proof that lifted sentiment for Jason.
  • Reliability proof pack: Share verified uptime/defect rates and 6–12 month references, ideally with third-party validation, to satisfy metrics-driven segments.
  • Bilingual support: Enable EN/ES routing to a human in under 5 minutes and provide Spanish landing content; this is a decisive trust lever for Spanish-speaking buyers.
  • Anti-fraud assurance: Message authenticity verification and chain-of-custody to counter broader tech-category counterfeit anxieties (raised by Caitlyn).

Risks and measurement guardrails: Execution risk on SLAs and disclosures can be mitigated via metro pilots (Dallas, Philadelphia) and plain-language summaries with legal-approved links. Reduce sampling bias and context noise by stratifying recruitment, running concurrent controls, and normalizing for time/season.


Next steps and KPIs:

  1. Weeks 0–2: Instrument fix; inject “Total Cost” snapshot; launch EN/ES support flow; add anti-fraud assurance.
  2. Weeks 3–6: A/B test radical transparency; stand up service/parts SLA pilot in Dallas/Philly.
  3. Weeks 7–10: Deploy reliability proof pack with third-party validation; iterate creatives; expand ES routing.
  4. Weeks 11–12: Consolidate readouts and scale plan.
  • Intent-to-Recommend lift: +20–30% relative increase in 7–9 ratings after disclosures.
  • Perceived pricing transparency: ≥4.2/5 agreement on understanding total cost up front.
  • Service confidence index: ≥4.0/5 in pilot metros (local parts stocked, 48–72h service).
  • Support trust (EN/ES): ≥70% overall; ≥75% among Spanish-speaking believe they can reach a human in <5 minutes.
  • Authenticity confidence: ≥80% report confidence after assurance messaging.
Recommended Follow-up Questions Updated Jan 11, 2026
  1. When considering Tesla specifically, which factors most discourage you from purchasing today? Please select the most and least discouraging items in each set: Upfront vehicle price; Total monthly cost; Ability to install home charging; Confidence in public fast charging on your routes; Availability of local service centers/technicians; Warranty clarity/coverage; Build quality/fit-and-finish consistency; Software reliability/update stability; Perceived safety of driver-assist features; Insurance...
    maxdiff Quantifies top barriers to prioritize fixes and messaging that will most improve consideration and conversion.
  2. Which proof points would most increase your likelihood to consider Tesla? Please choose the most and least convincing items in each set: All-in out-the-door price shown up front; Written warranty summary with clear exclusions; Guaranteed service appointment within a stated window; Published local parts inventory levels; Independent reliability scorecards; Charging network uptime metrics on your routes; Roadside assistance and loaner policy; Bilingual human support availability; Transparent OTA u...
    maxdiff Identifies the highest-impact trust signals to feature in product pages, retail, and ads.
  3. What is the maximum total monthly cost at which you would feel comfortable owning or leasing a Tesla you would realistically consider? Include vehicle payment, insurance, charging, and any subscriptions. Enter a USD amount.
    numeric Sets pricing, financing, and bundling targets aligned to willingness to pay.
  4. Please indicate your agreement with each statement about your current charging situation: I have reliable access to overnight parking with an electrical outlet; I could install a Level 2 home charger within three months; I am confident finding reliable public fast charging on my regular routes; I am comfortable taking a 300+ mile road trip using current charging networks; My household electrical panel can support an added charger without major upgrades.
    matrix Segments buyers by charging readiness to tailor education, incentives, and infrastructure messaging.
  5. What is the maximum acceptable time to resolution from scheduling to completion for Tesla service in each scenario? Enter number of calendar days for: Safety-critical issue; Drivable mechanical/electrical issue; Cosmetic/fit-and-finish issue; Mobile-service-eligible issue; Parts replacement after diagnosis.
    matrix Defines concrete SLA targets and parts stocking goals to meet expectations.
  6. Compared to the alternative brand you would most likely choose instead of Tesla, how does Tesla perform on each attribute? Rate for: Purchase price/value; Total cost of ownership; Vehicle range; Performance/acceleration; Build quality; Reliability; Charging network; Service experience; Software/features; Safety; Resale value; Brand reputation.
    semantic differential Reveals relative strengths and weaknesses to sharpen competitive positioning and messaging.
Ensure Tesla is explicitly named in the instrument. For matrix questions, use a 5-point agree scale for charging, and numeric entry (decimals allowed) for service days.
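For the two MaxDiff items above, a common first-pass analysis is a count-based score: best picks minus worst picks, normalized by how often each item was shown. The sketch below illustrates that scoring on hypothetical choice data, not study output; full MaxDiff analysis would use a logit model instead.

```python
# First-pass scoring for the MaxDiff questions: best-minus-worst counts
# per item, normalized by times shown. Choice data is hypothetical.
from collections import Counter

def maxdiff_scores(tasks):
    """tasks: list of (shown_items, best_item, worst_item) tuples.
    Returns {item: (best_count - worst_count) / times_shown}, in [-1, 1]."""
    best, worst, shown = Counter(), Counter(), Counter()
    for items, b, w in tasks:
        shown.update(items)
        best[b] += 1
        worst[w] += 1
    return {i: (best[i] - worst[i]) / shown[i] for i in shown}

# Two illustrative choice tasks over a subset of the barrier items.
tasks = [
    (["price", "warranty", "service"], "price", "service"),
    (["price", "charging", "warranty"], "warranty", "charging"),
]
scores = maxdiff_scores(tasks)
# "price": best once, never worst, shown twice -> 0.5
print(scores["price"])  # 0.5
```

Scores near +1 mark the barriers or proof points to fix and feature first; scores near -1 mark items respondents consistently deprioritize.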