Sam Altman, CEO of OpenAI, and Lisa Su, CEO of Advanced Micro Devices, testify during the Senate Commerce, Science and Transportation Committee hearing titled "Winning the AI Race: Strengthening U.S.
Capabilities in Computing and Innovation," in the Hart building on Thursday, May 8, 2025. Tom Williams | CQ-Roll Call, Inc. | Getty Images

In a sweeping interview last week, OpenAI CEO Sam Altman addressed a plethora of moral and ethical questions regarding his company and the ChatGPT AI model.
"Look, I don't sleep that well at night.
There's a lot of stuff that I feel a lot of weight on, but probably nothing more than the fact that every day, hundreds of millions of people talk to our model," Altman told former Fox News host Tucker Carlson in a nearly hour-long interview.
"I don't actually worry about us getting the big moral decisions wrong," Altman said, though he admitted "maybe we will get those wrong too." Rather, he said he loses the most sleep over the "very small decisions" on model behavior, which can ultimately have big repercussions.

These decisions tend to center around the ethics that inform ChatGPT, and what questions the chatbot does and doesn't answer.
Here's an outline of some of those moral and ethical dilemmas that appear to be keeping Altman awake at night.

How does ChatGPT address suicide?

According to Altman, the most difficult issue the company is grappling with recently is how ChatGPT approaches suicide, in light of a lawsuit from a family who blamed the chatbot for their teenage son's suicide.

The CEO said that out of the thousands of people who commit suicide each week, many of them could possibly have been talking to ChatGPT in the lead-up.

"They probably talked about [suicide], and we probably didn't save their lives," Altman said candidly.
"Maybe we could have said something better. Maybe we could have been more active.
Maybe we could have provided a little bit better advice: hey, you need to get this help."

Last month, the parents of Adam Raine filed a product liability and wrongful death suit against OpenAI after their son died by suicide at age 16.
In the lawsuit, the family said that "ChatGPT actively helped Adam explore suicide methods."

Soon after, in a blog post titled "Helping people when they need it most," OpenAI detailed plans to address ChatGPT's shortcomings when handling "sensitive situations," and said it would keep improving its technology to protect people who are at their most vulnerable.
How are ChatGPT's ethics determined?

Another large topic broached in the sit-down interview was the ethics and morals that inform ChatGPT and its stewards.
While Altman described the base model of ChatGPT as trained on the collective experience, knowledge and learnings of humanity, he said that OpenAI must then align certain behaviors of the chatbot and decide what questions it won't answer.
"This is a really hard problem. We have a lot of users now, and they come from very different life perspectives...
But on the whole, I have been pleasantly surprised with the model's ability to learn and apply a moral framework."

When pressed on how certain model specifications are decided, Altman said the company had consulted "hundreds of moral philosophers and people who thought about ethics of technology and systems."

One example he gave of a model specification was that ChatGPT will avoid answering questions on how to make biological weapons if prompted by users.

"There are examples of where society has an interest that is in significant tension with user freedom," Altman said, though he added the company "won't get everything right, and also needs the input of the world" to help make these decisions.

How private is ChatGPT?

Another big discussion topic was the concept of user privacy regarding chatbots, with Carlson arguing that generative AI could be used for "totalitarian control."

In response, Altman said one piece of policy he has been pushing for in Washington is "AI privilege," which refers to the idea that anything a user says to a chatbot should be completely confidential.
"When you talk to a doctor about your health or a lawyer about your legal problems, the government cannot get that information, right?...
I think we should have the same concept for AI."

According to Altman, that would allow users to consult AI chatbots about their medical history and legal problems, among other things. Currently, U.S. officials can subpoena the company for user data, he added.

"I think I feel optimistic that we can get the government to understand the importance of this," he said.
Will ChatGPT be used in military operations?

Asked by Carlson if ChatGPT would be used by the military to harm humans, Altman didn't provide a direct answer.

"I don't know the way that people in the military use ChatGPT today... but I suspect there's a lot of people in the military talking to ChatGPT for advice."

Later, he added that he wasn't sure "exactly how to feel about that."

OpenAI was one of the AI companies that received a $200 million contract from the U.S. Department of Defense to put generative AI to work for the U.S. military. The firm said in a blog post that it would provide the U.S. government access to custom AI models for national security, support and product roadmap information.

Just how powerful is OpenAI?

Carlson, in his interview, predicted that on its current trajectory, generative AI, and by extension Sam Altman, could amass more power than any other person, going so far as to call ChatGPT a "religion."

In response, Altman said he used to worry a lot about the concentration of power that could result from generative AI, but he now believes that AI will result in "a huge up leveling" of all people.
"What's happening now is tons of people use ChatGPT and other chatbots, and they're all more capable.
They're all kind of doing more.
They're all able to achieve more, start new businesses, come up with new knowledge, and that feels pretty good."

However, the CEO said he thinks AI will eliminate many jobs that exist today, especially in the short term.