No, this article was not written by ChatGPT
Decades of stumbling over and over in the same places. When will we ever learn?
Some things that have and haven’t changed over the last few years
Some things we all need to watch out for.
Grady Booch and I discuss
All the recent goodwill and enthusiasm could evaporate fast
Five reasons why including ChatGPT in your list of authors is a bad idea
I don’t usually write about business deals, much less about rumors about business deals, but this one has me scratching my head, and is actually super relevant to how people on the inside, both at Microsoft and OpenAI, are viewing the future of AI.
The Errors They Make: Why We Need to Document Them, and What We Have Decided to Do About It
A time capsule of AI thought leaders in 2022 gives us a lot to think about, going forward
Maybe not
What comes after ChatGPT? 7 predictions for 2023
New systems like ChatGPT are enormously entertaining, and even mind-boggling, but also unreliable, and potentially dangerous.
It's not just monkeys and typewriters. It's more interesting than that.
Hint: It’s not all about scaling
How could we tell?
5 reasons why Large Language Models like GPT-3 couldn’t save Alexa
How MetaAI’s Galactica just jumped the shark
Here are 5 things you might want to consider
In May, in a tweet that gave rise to this very Substack, DeepMind executive Nando de Freitas declared AGI victory, possibly prematurely, shouting “It’s all about scale now! The Game is Over!”: de Freitas was arguing that AI doesn’t need a paradigm shift; it just needs more data, …
If you believe in innateness, you don’t believe in learning. If you believe in learning, you should oppose innateness. The more things are learned, the less we need innateness. If you believe in innateness, you have to believe in innate, domain-specific knowledge.
The trouble with too much benefit of the doubt
MetaAI’s new text-to-video software Make-A-Video is straight-up amazing. It’s also, as ever, stuck in the liminal space between superintelligence and super-lost
What was missing from Tesla’s new Optimus demo was perhaps even more important than what was there
The Musk Bros, whoever they are, have high hopes for Friday’s update on Tesla’s Optimus robot: My own followers are (perhaps not surprisingly) a bit less sanguine: With tongue slightly in cheek, but genuine skepticism, here are my Top 5 reasons why my own expectations are quite low:
spoiler alert: not very
AI has undeniably made progress; systems like DALL-E and Stable Diffusion are impressive, surprising, and fun to play with. We can all agree on that. But can the current state of AI be criticized? A lot of recent media accounts have tried to stick AI critics in a little box. Scie …
Tools like Stable Diffusion and DALL-E are inarguably brilliant at drawing images. But how well do they understand the world?
Clever Hans and how corporate AI plays the media, in 2022
From a showmanship standpoint, Google’s new robot project PaLM-SayCan is incredibly cool. Humans talk, and a humanoid robot listens, and acts. In the best case, the robot can read between the lines, moving beyond the kind of boring direct speech (“bring me pretzels from the kitch …
Sure, kids imitate their parents, but that’s just a small part of the story
and why (a) large language models are so prone to fabricating misinformation and (b) DALL-E is no super-genius
What nearly everyone got wrong about DALL-E & Google’s Imagen, and why when it comes to AI hype, you still can't believe what you read
An Enthusiastic Adversarial Collaboration with @Sl8rv
And will it survive if it doesn’t?
No, LaMDA is not sentient. Not even slightly.
Probably so. A response to Scott Alexander’s essay, “Somewhat Contra Marcus On AI Scaling”
Tracking the evolution of large language models
Are large language models a good model of *human* language?
Should we really expect artificial general intelligence in 2029?