Regular readers of this Substack will recall that I have never been bullish on OpenAI. Last Fall, I warned that they might someday be known as the WeWork of AI, and in late January I wrote an essay about some of the strong headwinds they faced, despite their seeming invincibi … | Continue reading
Wake up. If a single bug can take down airlines, banks, retailers, media outlets, and more, what on earth makes you think we are ready for AGI? The world needs to up its software game massively. We need to invest in improving software reliability and methodology, not rushing out … | Continue reading
It’s been a long while since I have felt really positive about most of what I have been reading and seeing in AI. Not because I inherently dislike AI, but because I feel like so many folks have been brainwashed by the “scale is all you need” notion and “AGI in 2025” (or 2027 or 2 … | Continue reading
Last night was a travesty, but that is just the beginning of our problems | Continue reading
He’s still down for 2029, same as ever | Continue reading
The increasingly delayed countdown to GPT-5 | Continue reading
Further evidence that AGI is not imminent | Continue reading
State Senator Scott Wiener and others in California have proposed a bill, SB-1047, that would impose some modest (to my taste) restraints on AI. It doesn’t call for a private right of action, which would allow individual citizens to sue AI companies for a wide set of reasons; it doesn’t … | Continue reading
Investors appear to have taken note | Continue reading
A memo for future intellectual historians | Continue reading
Passing along this scoop from Kevin Roose: Roose supplied a gift link: https://x.com/kevinroose/status/1797992577255518480?s=61 The letter itself, cosigned by Bengio, Hinton, and Russell, can be found here https://righttowarn.ai. I fully endorse its four recommendations: | Continue reading
Perhaps no week of AI drama will ever match the week in which Sam got fired and rehired, but the writers for the AI reality series we are all watching just don’t quit. For one thing, the bad press about Sam Altman and OpenAI, who once seemingly could do no wrong, just keeps coming … | Continue reading
The wall is reliability. | Continue reading
OpenAI’s new board just made its loyalties clear. We should all be worried. | Continue reading
Helen Toner finally explains what the board was thinking | Continue reading
In case you haven’t heard, there’s a brawl happening over at X, Musk vs LeCun. For days I tried to resist commenting, but so many people (friends, reporters, etc) keep asking me for my opinion I have decided to oblige. The first thing that I will tell you is that each of those no … | Continue reading
Update to my last essay, “What should we learn from OpenAI’s mistakes and broken promises?” Sam Altman, 2016: “We’re planning a way to allow wide swaths of the world to elect representatives to a new governance board. Because if I weren’t in on this I’d be, like, Why do these fuc … | Continue reading
It’s increasingly clear that OpenAI has not been consistently candid. What follows from that? | Continue reading
As recently as November 2023, OpenAI promised in their filing as a nonprofit exempt from income tax to make AI that “benefits humanity … unconstrained by a need to generate financial return”. The first step towards that should be a question about product – are the products w … | Continue reading
A sticky example / Up with which I will not chuck | Continue reading
Sam Altman is trending and virtually every comment is trashing him. | Continue reading
Another of AI’s bitter lessons | Continue reading
Even on little things, Sam is not consistently candid. | Continue reading
Your loyal correspondent goes gaga for Goodall | Continue reading
Safety seems to be taking a back seat | Continue reading
GPT-4o hot take: • The speech synthesis is terrific, reminds me of Google Duplex (which never took off), but • If OpenAI had GPT-5, they would have shown it. • They don’t have GPT-5 after 14 months of trying. • The most important figure in the blogpost is attached below. And the … | Continue reading
An AI Soap Opera in the making? | Continue reading
Fear, The Denial of Uncertainties, and Hype | Continue reading
Social media was bad. Adding AI into the mix could easily get a lot worse. | Continue reading
The backstory behind Taming Silicon Valley | Continue reading
So many people are confused about the relation between human cognitive errors and LLM hallucinations that I wrote this short explainer: Humans say things that aren't true for many different reasons • Sometimes they lie • Sometimes they misremember things | Continue reading
“The definition of insanity is doing the same thing over and over again while expecting different results.” If all we had was ChatGPT, we could say, hmm “maybe hallucinations are just a bug”, and fantasize that they weren’t hard to fix. If all we had was Gemini, we could say, hmm … | Continue reading
Some thoughts occasioned by Meta’s new model and a bad week in the stock market | Continue reading
A remembrance from Doug Hofstadter | Continue reading
It’s kind of surreal to compare some of the talks at TED yesterday with reality. Microsoft’s Mustafa Suleyman promised that hallucinations would be cured “soon”, yet my X feed is still filled with examples like these from Princeton Professor Aleksandra Korolova: | Continue reading
How well did last year’s talks hold up? | Continue reading
How Generative AI plays on human cognitive vulnerability | Continue reading
The conventional wisdom, well captured recently by Ethan Mollick, is that LLMs are advancing exponentially. A few days ago, in a very popular blog post, Mollick claimed that “the current best estimates of the rate of improvement in Large Language models show capabilities doubling e … | Continue reading
Why Elon was probably wise not to take the bet | Continue reading
An open letter to Elon Musk | Continue reading
A bunch of small but important items in recently breaking AI news: Tesla settled their lawsuit with the Huang family over Walter Huang’s death, for an undisclosed amount of money. “Although Huang’s family acknowledges he was distracted while the car was driving, they argued Tesla … | Continue reading
With apologies, there was a broken link in my last post (now corrected in the online version of the post). The important new arXiv article can be found here https://arxiv.org/abs/2404.04125. Thank you all for your support! –Gary | Continue reading
A new result casts serious doubt on the viability of scaling | Continue reading
You may have read yesterday’s New York Times report by Cade Metz and others on how many of the biggest AI companies have been cutting ethical corners in a race to gather as much data as possible (“OpenAI, Google and Meta ignored corporate policies, altered their own rules and dis … | Continue reading
AI is making shit up, and that made-up stuff is trending on X | Continue reading
A couple days ago I reported a survey saying that most IT professionals are worried about the security of LLMs. They have every right to be. There seems to be an endless number of ways of attacking them. In my forthcoming book, Taming Silicon Valley, I describe two examples. The f … | Continue reading