A lot of what I do professionally is help investors look at a founding team before they commit. The question is always the same: who is this person, and what happens when things get hard?
The New Yorker piece on Sam Altman is the most detailed public case study I've read in years. Altman is not unusual as a type. The pattern is too familiar. I've seen versions of it in scale-ups of 30 people, 80 people. What makes this different is the scale of the consequences.
So I ran through the article using the same framework I use in due diligence. Six categories. What I see in each one. And then the question that interests me most: why do experienced investors keep missing this, and what can we actually learn from it?
A note on limits: this is not a verdict on Altman as a person. I've seen him in person only once, at a Y Combinator event years ago. Beyond that, my material is this article and the many interviews I've watched since his Y Combinator days. I start from intuition: his decisions, his behavior, his micro-expressions. I would describe him as, let's say, an interesting person. What I can speak to is the pattern, because I've spent twelve years watching similar patterns in smaller companies, where the consequences are contained to 50 or 200 people.
Six categories. One clear picture.
The framework I use maps founder behavior across six categories. They're not personality traits. They're behavioral patterns that either protect or undermine an organization as it scales. Here's how Altman scores on each one, based only on what's documented in the article.
Two categories score at the top of the range: Integrity at 95% and Control at 90%. These two are structurally connected. High integrity risk means the information coming from the top is unreliable. High control risk means the structures that should catch that problem have already been removed or circumvented. When both are high together, the organization loses its ability to self-correct. That is the most dangerous combination I encounter in due diligence work, and the one most difficult to see from the outside.
The lying system
This is where the article is most specific. The Ilya Memos were about seventy pages of documentation. They open with a list: "Sam exhibits a consistent pattern of..." The first item is "Lying."
The incidents are specific and cross-checked. He told Murati that GPT-4 safety features had been approved. They hadn't. He cited the company's general counsel as his authority. The general counsel, asked directly over Slack, replied: "ugh... confused where sam got that impression."
He told Amodei he had "good authority" from a senior executive that Amodei's team was plotting a coup. When Daniela Amodei brought that executive into the room, the executive denied saying anything. Altman then said: "I didn't even say that." Daniela replied: "You just said that."
What makes this pattern so hard to act on is the mechanism Altman uses when confronted. He says he doesn't recall. Across many conversations for the article, that phrase appears again and again. He doesn't remember the Microsoft merger clause. He doesn't recall the threat to Murati's reputation. He has a different version of the coup conversation.
What I see in smaller companies is the same mechanism, just at a different scale. It's never an outright denial. It's someone who consistently has a slightly different version of what was said, and that version always lands in their favor. After enough rounds of this, the people around them start doubting their own memory. That's the real effect of the pattern. And it's one of the hardest things to name clearly in a reference call, because each individual instance sounds like a misunderstanding.
Structures built to be dissolved
The clearest structural expression of Altman's control instinct is what happened immediately after his firing. He negotiated the composition of the board that would investigate him. He texted Nadella: "would you do this: bret, larry summers, adam as the board and me as ceo and then bret handles the investigation."
The investigation produced no written report. Oral briefings only, to the two men he had effectively chosen. One board member described the suggestion that all members received those briefings as "an absolute, outright lie."
A former researcher described the pattern directly: "He sets up structures that, on paper, constrain him in the future. But then, when the future comes and it comes time to be constrained, he does away with whatever the structure was."
The pattern across years: Nonprofit charter. Merge-and-assist clause. 20% compute for superalignment. Safety approval protocols. Investigation with independent board. Each was real enough to attract good people and satisfy investors, and each was dissolved when it became inconvenient.
Every critic eventually left
The people who initially joined OpenAI were exceptional in the real sense of the word. Sutskever, Amodei, Murati. Serious, brilliant people who made real sacrifices to be there. Their presence gave the whole thing credibility. When investors ran reference checks, they found exactly who you'd want to find.
What the reference checks couldn't easily surface was that Altman's relationship with each of these people eventually reached the same endpoint: exhaustion, distrust, departure. The talent rotated through. Each departure had an explanation. The pattern was invisible unless you looked at the full sequence.
When I do reference calls, I come back to one question: what happened to the last person who raised a serious concern? Not what that person is like as a colleague. What happened to them. Did they stay? Did they leave quietly? Did the founder's version of that departure match what everyone else saw? That one question tells me more in 60 seconds than two hours of asking about leadership style and company culture.
Five of the six people shown raised explicit concerns. Every single one of them departed. You could explain any individual departure as the normal friction of a fast-growing company. But when five out of six people who raised serious concerns left, and several went on to found safety-focused competitors, that is not friction. It is the system working exactly as designed. The departure is the mechanism, not a failure of it.
Why smart people kept missing it
This is what I think about most. Whether Sam Altman is a bad person is a question for courts and philosophers. The more useful question is: what is it about this specific pattern that consistently bypasses experienced investors and board members?
I see six things at work. Together they form a kind of trap. Each mechanism on its own is manageable. Together they make the pattern almost invisible until it's too late to act on it without enormous cost.
I see versions of three or four of these mechanisms in almost every team due diligence I do. The hardest combination to see through is product credibility combined with sunk cost. When results are real and visible, and when the financial exposure is already large, the cost of acting on concerns starts to exceed the cost of not acting. That is when boards stop doing due diligence and start managing exposure instead.
In every team due diligence I do, I check for one thing early on: is there anyone who regularly disagrees with the founder in meetings? If I can't find one person who does that, and the company has been running for two or more years, that's a signal. Friction isn't useful in itself. Silence at that stage is usually not harmony. It's learned behavior. People have figured out that pushing back costs more than staying quiet.
Sound familiar? Check a founder in your portfolio.
The same six dimensions used in this analysis are built into a free, anonymous scan. Three minutes. You get a risk profile, red flags by category, and a practical observation checklist.
Want to know what this kind of assessment looks like in practice?
I do behavioral due diligence as part of team assessments for investors. The framework in this article is what I use. If you're about to make a significant bet on a founding team and you want a second perspective before you do, that's what I'm here for.
The dots only connect when you see the sequence
This is what Sutskever understood when he spent months compiling seventy pages. Each incident, taken alone, has a plausible innocent explanation. A misremembered conversation. A miscommunication. Normal founder-employee friction. Reasonable people could disagree about any one of them.
Put them on a timeline, organize them by category, and the shape becomes undeniable. The shape is no longer "this person made some mistakes." It's "this person operates this way, consistently, across years, across organizations, across relationships."
The visualization below maps documented incidents from the article across time and the six behavioral categories. Watch what happens when they all appear together.
What the map makes visible is something you cannot see in a single reference check. Every row has incidents. No time period is clean. Integrity and Control incidents appear in every phase shown, from 2007 through 2024. The pattern is not a late-stage problem that emerged when the stakes got high. It is consistent across the entire documented history. One conversation is an incident. Seventeen years is a system.
The structure failed. Not the person.
This is usually how it goes. These situations rarely start with a villain. What you get instead is a series of reasonable people making individually defensible choices, until the distance traveled is too large to trace back.
The board tried to act. It had no communications team, no investor relationships, no legal war chest. Altman had all of these. The lesson isn't that the board was wrong. The lesson is that oversight structures that look strong on paper are often fragile in practice, especially when the person being overseen has spent years building the dependencies that make acting on concerns more costly than not acting.
The concern isn't whether OpenAI's CEO is trustworthy. The concern is that the structures designed to manage that question have been quietly hollowed out. And the institutions that should have caught this each had a reasonable explanation for why this particular moment was not the moment to act.
What I take from this, practically speaking, is that the standard due diligence checklist is not designed to catch this pattern. It catches embezzlement. It catches sexual harassment. It catches obvious criminality. It does not catch systematic information management over years, executed with sufficient charm and genuine results.
The investors who call me after a problem has surfaced all say the same thing: the signals were there. They just looked like something else at the time. One person leaving. One board discussion that went nowhere. One reference call that felt slightly off but didn't break any obvious rules. My job is to help them look at those signals together, before they commit, while the pattern is still forming rather than already set.
What better due diligence actually looks like
Last year I ran 18 team due-diligence engagements for investors. Five of them resulted in a cancelled investment.
In each of those five cases, the pattern looked different on the surface. Different sectors, different stages, different founder profiles. But there was a consistent gap between what the founder said and what the people around them actually experienced. That gap only became visible when I looked at incidents across time rather than one by one.
The six categories in the emaho framework give you the structure. But the real question is whether you look at incidents over time rather than evaluating each one in isolation. One misremembered conversation is nothing. Seven misremembered conversations across three organizations, all in the speaker's favor, is a pattern.
The other thing I'd look at: what happens to people who raise serious concerns? Do they stay or do they leave? And do their departures have explanations that consistently favor the founder's narrative?
Those two questions, pattern across time and fate of critics, will catch most of what standard reference checks miss.
See the dynamics in your own team
The patterns in this article don't only exist at OpenAI. They show up in teams of 8 and teams of 80. The emaho platform maps personality types, role dynamics, and collaboration patterns across your full team, so you can see what's actually there before it becomes a problem.
You've read the analysis.
Now run it on your own portfolio.
The six dimensions in this article are the same six dimensions in the scan. Free, anonymous, three minutes. You get a risk profile, a radar chart, identified red flags, and an observation checklist you can take into your next conversation with the founder.