Artificial intelligence is reshaping American life faster than most people feel prepared for, and public opinion is still catching up. Slingshot Strategies' new "focus poll" — combining the depth of a focus group with the scale of a national poll — takes the temperature on where Americans stand on AI.
For this survey, fielded from March 23 to April 3, 2026, 1,087 U.S. adults answered open-ended questions about AI aloud, with their spoken responses captured by microphone, then transcribed, categorized, and analyzed.
01 / Overall feelings about AI
Americans are using AI and see its potential, but concern is widespread.
The question asked
"Broadly speaking, can you say how you feel about artificial intelligence?"
33%, a plurality, fall into the "concerned but mixed" camp. The most common worries: lack of regulation, job loss, and the erosion of truth through misinformation and deepfakes. A few respondents noted, unprompted, that even the people building AI don't seem to feel in control of it.
How respondents talked about AI
Voices on AI broadly
02 / AI companies
Most respondents have little to say about specific AI companies because they don't know enough to differentiate them.
The question asked
"There are a lot of companies working in AI: xAI (maker of Grok), Microsoft (maker of Copilot), OpenAI (maker of ChatGPT), Palantir, Anthropic (maker of Claude), and Google (maker of Gemini). Do you have strong feelings, positive or negative, about any of these companies?"
Where opinions formed, they centered on ethical conduct, data privacy and surveillance practices, and whether companies prioritize profit over public good. Anthropic drew positive mentions from several respondents for its stance on military and surveillance uses. Palantir and xAI drew the most negative mentions.
Voices on the companies
03 / Elon Musk and Grok
Strong feelings about Musk — in both directions — dominate responses about his role in the industry.
The question asked
"xAI, which makes Grok, is owned by Elon Musk. What do you think about Elon Musk owning Grok?"
Whether respondents end up positive or negative on Grok has less to do with the product than with how they already feel about its owner.
The spectrum of reactions
[Chart: reactions from positive to negative in six bands of 25%, 19%, 9%, 13%, 6%, and 15% (strong negative view).]
04 / The Grok sexualized material incident
Responses here were more viscerally negative than anywhere else in the survey, and the reaction crossed the usual dividing lines.
The question asked
"There was recently an incident involving Grok, xAI's model. Elon Musk's artificial intelligence chatbot, Grok, created and then publicly shared at least 1.8 million sexualized images of women, many without their consent, and more than 23,000 sexualized images of children. How do you feel about this?"
The dominant reaction was outrage — respondents called it disgusting, said it should be criminal, and many directed blame at Musk personally. Equally striking: a meaningful share of respondents had never heard of it.
34%
Outrage and disgust — the largest single response
13%
Hadn't heard about the incident at all
A chorus, in their words
05 / Anthropic's standoff
Anthropic's refusal drew genuine praise across party lines, with some treating the blacklisting as validation rather than punishment.
The question asked
"Anthropic, the company that makes the AI tool Claude, refused to allow the military to use its technology for mass surveillance of Americans or for fully autonomous weapons. The Department of War then blacklisted Anthropic and signed a deal with rival company OpenAI instead. What do you think about AI being used in war?"
Broader opposition to AI in warfare — and to domestic surveillance of Americans especially — was the dominant note throughout.
Views on AI in war
"OpenAI are traitors to humanity and Anthropic should get a medal."
— Man, 27, Democrat, Black, OH
More voices on AI in war
06 / AI and military targeting
A plurality of respondents expressed support, a shift from the more skeptical responses to abstract questions about AI in war.
The question asked
"The U.S. military says it used AI to process large amounts of data and identify over 2,000 targets in Iran, including 1,000 in the first 24 hours, a pace that would not have been possible without AI. How do you feel about the military using AI in this way?"
The gap between abstract and specific framing mattered. The most powerful dissents invoked civilian casualties, including the bombing of a girls' school, as evidence that AI targeting is already failing.
Voices on AI in war (specific)
07 / Job displacement
The responses on job displacement were among the most personally specific in the survey.
The question asked
"In an interview with CBS's 60 Minutes, an AI executive said plainly: AI could wipe out half of all entry-level white-collar jobs and spike unemployment to 10% to 20% in the next one to five years. How does this make you feel about AI?"
People connected the question directly to their own work, their households, and their sense of economic stability. The dominant feeling was fear, with some calling for regulation and a smaller share expressing resignation that the process is already too far along to stop.
48%
expressed fear of AI job replacement or hard opposition to it, including 14% who said AI shouldn't be used to replace workers at all.
How respondents reacted to the job displacement scenario
Voices on job displacement
08 / Trusting Musk on AI risk
Musk's warnings about AI's dangers landed with many respondents — but a majority didn't trust him to manage the risk.
The question asked
"Elon Musk has said, 'AI is a fundamental existential risk for human civilization, and I don't think people fully appreciate that.' He has declared that the danger of AI is 'much greater than the danger of nuclear warheads, by a lot.' Do you trust Elon Musk to manage the risks posed by AI?"
Several respondents raised, unprompted, the contradiction of warning about AI's dangers while continuing to build it. A significant minority pushed back, saying Musk's intelligence and track record make him better positioned than most to manage the risk.