26 Comments
Bill Brown

I recommend you look at Gary Marcus' work, which is at garymarcus.substack or @GaryMarcus, specifically at his comments about ChatGPT (and any LLM, really) being a "pastiche" that frequently suffers from "hallucinations." To my mind, any probabilistic synthesizer is going to fail to comprehend meaning, and that failure will show up in the work.

In other words, the world is not suffering a dearth of pabulum.

aaron segal

Feels like your productivity assumptions are conducted in a vacuum. Not shocked, given your view that therapy (a largely intuitive, highly personal relationship, a unique interaction with every patient) will be disrupted by generative AI, a hyper-rational technology possessing, by definition, no intuition. So you don't stop to think about WHY we are using current digital technologies for vapid and unproductive purposes. It was not a guaranteed outcome, but in a world of declining connection and meaning, that was the outcome. Tech like AI coupled with VR not only doesn't address this most fundamental problem, it intensifies it. In a vacuum, ChatGPT improves TFP, but in reality it will likely drive us further into the arms of even more powerful new technologies offering compelling and cacophonous distractions (such as VR).

Michael W. Green

It’s possible you’re right. I do not see it in a vacuum, however. I generally think of AI as augmenting, rather than replacing, human skill. An AI-based therapy assistant that can much more accurately (and privately) track my human interactions continuously, versus a once-a-week, hour-long session with a human therapist, would offer obvious benefits. Just imagine each of us having a longitudinal study of our moods, reactions, choices, etc. It can be dystopian, but it can also be extraordinarily liberating. It will be our choice, imho.

RBAR

Whom you ask and how you ask a question help determine the answer. Many answers aren't objective; they are reinforcements of social and cultural norms.

The Retired Bass Player

Another thoughtful reading experience. Perhaps like almost all things touched by fallible humanity, the 4th Industrial Revolution tools will bring forth both positive and negative consequences. Of course, the positivity and negativity will be assessed from the viewpoints of fallible humans. That last bit is the trickiest part of the tools and their use. Humans are so messy.

Sandy

Bring it... adapt or die! Doomsayers and fear-gurus, go ahead and shout! Is there pain in this process of evolution? Absolutely. But there is a "threshold" to everything: processes and systems change certain dynamics, but at that point of balance they no longer change psychology! This is especially true in economic policy!!! The world will slowly, incrementally become a better place! Embrace and appreciate, or just go away...

Brian Scaletta

I wonder if Robert Gordon's "Rise and Fall of American Growth" could be on the book club list? I see you reference it quite often, and I've never looked at a washing machine the same way since. How do we square Richard Duncan's idea that America needs to invest more with Gordon's pessimism about the future of America's growth and debt? Both authors talk about AI. Maybe you could start a paper club and get Ole Peters on, ha-ha.

Chris Wilson

Open the pod bay doors, HAL.

Blaine Dahl

ChatGPT wrote a non-political limerick for former President Donald Trump for me. I asked it to write a limerick for President Obama, which it did; then I asked it to write one for Trump, and it did that too. It also wrote a "poem about Trump's favorable attributes" upon request, and it was quite complimentary of Trump as well. I always enjoy your articles.

The Unhedged Capitalist

Dr. Strangechat or: How I learned to stop worrying and love the AI.

I'm not entirely sure I understand your argument. You're suggesting the AI should conform to our biases so that we're liable to trust it and use it more, and thus, as a society, we get a large productivity boost? With the crux of your argument being that most won't use the AI until it tells them what they want to hear?

On a different note, Gurri ruined V for Vendetta for me. Most see it as a triumph of the people over a totalitarian state, but in his book Gurri re-framed the movie as a contest between the center, which, however corrupt, at least provides some stability, authority, and societal foundation, and the edge, which can only ever break things down; it can't build. The end of the movie sees the edge tearing it all down, but then Gurri asks: what happens after that?

I think about that a lot. Everywhere the edge attacks and wants to destroy, but has no plan for putting something new into the smouldering crater.

Michael W. Green

Agree. We are often enamored of "burn it the f down," with the idea that we will miraculously rebuild. It is much harder to rebuild than to maintain.

Simon

I think, and forgive me, Mike, if I am wrong, that if you were, let's say, a Trump supporter and an AI was programmed not to talk to you about Trump, that could be equated with you being a lumberjack and the AI not being allowed to talk about wood.

If we look at whom we choose to interact with, it's usually people who at least talk about, if not 100% agree with, the things we want to talk about. If an AI can't discuss the things that are important to us (even to disagree with our point of view), you're not going to "want" to interact with it.

Having said all that, what difference does it make to the lumberjack if his task-specific AI therapist wants to talk about wood or not?

So now we have two types of AI, maybe. From a ubiquitous standpoint (no specific task; AI as friend/companion), AIs need to be able to interact with us, and this would be difficult or impossible if they are "biased" in certain ways. So maybe it's to each his or her own personal AI, tailored to match (or not be blocked from discussing) your biases.

From a non-ubiquitous (or mission-specific) standpoint, bias in AIs with, let's say, specific scientific tasks (such as therapy) would be immaterial, as they would follow the science. I hope. So that raises the question: does bias in mission-specific AIs really matter?

Mino Vivaldi

If ChatGPT, or other forms of AI, can teach humans to become critical thinkers, then we will have a great leap forward as humans using these technologies.

The dark side is there; it is possible. But I look at this technology with great hope that it will free us from the mundane. Still, the productivity gains need to be shared to lift up the lowest tiers of society from poverty and ignorance.

Michael W. Green

100% agree with this statement: "the productivity gains need to be shared to lift up the lowest tiers of society from poverty and ignorance."

I will continue to write on this subject.

Andy Fately

A very interesting take; however, it is not clear to me that human civilization is ready to adapt to this process in the near or medium term. In fact, I would expect that ChatGPT will wind up being seen as just another tool the elite use to maintain their superior outcomes.

That said, I think your concept of bias, and how it is important to each of us, is quite perceptive.

Michael W. Green

Ready or not, here we come! And thank you.

Emma

Very insightful and balanced article, thank you! Though I’m puzzled that productivity declined in the early 2000s; I would have thought the widespread adoption of computers and the internet increased productivity significantly. Interesting!

Michael W. Green

Subject for future discussion

NickBallesteros

This essay really speaks to me in terms of the future it envisions with AI: not one of robot overlords, nor of a hyper-tribal cyberpunk collapse into anarchy, but a humanistic one, one promoting synthesis and integration, a model of understanding premised on genuine welfare and on broadly distributed human sympathy over individualistic ego. And that is refreshing.

While I do not want to diminish that ideal, or that potential, in the slightest, I would like to point out that to get there we may first need to confront some of the deceptions occurring in the industry and create incentives that broaden the field of study in the ways highlighted by scientists such as Gary Marcus.

Marcus has pointed out how, in lab situations, these AIs have actually counseled suicide. That isn't a hyperbolic or alarmist cry of 'danger' but rather an illustration that these models are absurdist. Are they actually intelligent? Or are they just very clever mimics?

Again, Marcus' assertion is that what they’re doing is essentially just following statistics on the frequency of words (they're not really following ideas and concepts), and that is nothing like language or cognition.

Here's a paraphrase from a talk he gave recently (with Noam Chomsky, the linguist and fellow skeptic):

"The relation between syntax and semantics for example between those and pragmatics if you build a language production system for example in the classical sense, you start with the meaning you want to express and you translate that into words. Or in language comprehension you start with the sentence you want to understand, you translate that into a meaning. Well GPT does not do that. What GPT does is it hears a sequence of words and it predicts the next word. But lets say you talk to someone and you know what they might say, well GPT might produce something dramatically interesting, but whether it winds up giving you what you want is an entirely different matter. And if you want to extract from it a model of the world that’s a completely different problem...

There's this politics that goes back fifty years about using neural networks vs a knowledge based approach. It turns out the knowledge based approach has some value but it had some trouble thirty years ago, so now we're doing all this stuff without knowledge, which means you have all the knowledge in Wikipedia and nobody really knows how to put it into these systems. And because they are statistical mimics you can ask it question like 'who's the President of the United States?' and they might say Donald Trump because in their data set there are more examples of Donald Trump than Joe Biden, where if you want to reason and say 'now Biden is the President, Trump was the President before', i.e. you want to use your understanding of sequences, of events in time, these models can't.

Cognitive scientists think about this stuff, they talk about discourse models, putting together what it is that we're talking about, how were talking about it. There's lots of reason to think that when we learn language we’re not starting from a blank slate. And the failures that we’re seeing in DallE are showing you that you start with a blank slate that just accumulates statistics, you don’t really understand language.

Or another example there's some papers that show the same thing in vision - there's some models that label activities - so it can say 'you're nodding your head' or 'those people are sitting out there', it can do some visual labeling. And the myth is that the right things will emerge ....which is kind of magic... it will emerge when you give enough data to these systems that they will understand the world.

So we built a benchmark based on what 10-months old or 4-months old understand of basic physics : like things drop or if things are hidden you can still find them eventually. And these systems don't understand that at all... its the complete failure of the empiricist hypothesis that if we give a lot of data that cognition will emerge.

Kant said we start with time and space and causality he’s probably right, we probably start with those things. And over and over, people keep pursuing this hypothesis that ignores the cognitive science around that and says we’ll just use all the data and because it works 75 percent of the time they think they're making progress, but sometimes making progress 75 percent of the way its not enough. Getting close doesn’t seem to solve the problem."
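To make the "statistical mimic" point concrete, here is a minimal, hypothetical sketch of a frequency-based next-word predictor. The toy corpus and everything in it are invented for illustration; real LLMs use neural networks over tokens, not raw bigram counts, but the failure mode Marcus describes is the same in kind:

```python
from collections import Counter, defaultdict

# Toy corpus: the older statement simply appears more often.
# A pure frequency model has no notion of which statement is current.
corpus = [
    "the president is trump",
    "the president is trump",
    "the president is trump",
    "the president is biden",
]

# Count bigram frequencies: each word maps to a Counter of next words.
bigrams = defaultdict(Counter)
for sentence in corpus:
    words = sentence.split()
    for prev, nxt in zip(words, words[1:]):
        bigrams[prev][nxt] += 1

def predict_next(word: str) -> str:
    """Return the continuation seen most often after `word`."""
    return bigrams[word].most_common(1)[0][0]

print(predict_next("is"))  # -> "trump": frequency wins, not recency
```

The sketch answers with whatever dominated its data, because nothing in it represents events ordered in time; that is the gap between predicting words and modeling the world.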

Michael W. Green

Love your comments. Thank you!

It’s quite difficult to convey measured skepticism of progress, but I think you do it admirably.

It’s a great point on knowledge-based AI, or what we used to call “expert systems.” I was in business school during this era and the early factory-automation work, and I saw some of these challenges firsthand as an operations research major. The “nice” part about services-based AI tools is that they augment human capability rather than replace it; we’re at the forklift stage, not the lights-out factory.

But man… forklifts sure do a lot of heavy lifting, don’t they?

MSA

Thanks for sharing.

ChatGPT, is this you???

Bertrand Russell’s Problems of Philosophy tells us about the dress issue, but with a table. So I guess the existing fix for learning about bias, i.e. philosophy, has failed, so onward to AI.

So you think Keynes is going to be right about man turning to metaphysical pursuits; he just got the timing wrong? Just to be clear, why is this time different, in terms of AI replacing humans, relative to our previous near-death experiences?

Mentioning dishwashers reminded me of an article I read a few years ago in The Atlantic (that I obviously can’t find now). The main point was that these inventions didn’t create free time, because we wanted more stuff, so we just worked more (paraphrasing from memory, oh dear). So again, why now? Why wouldn’t this time (if realised) be reinvested into pursuing further material gains?

Michael W. Green

I am familiar with that Atlantic article and the supporting research. Absolute bunk imho. From the article: "Technology made it much easier to clean a house to 1890s standards. But by mid-century, Americans didn't want that old house." To imagine this is not a gain -- getting something you want while spending almost no more time on it -- is absurd. And it is a very narrow focus on full-time housewives -- a disappearing breed! I agree with the article's concern on stress of raising children, but this is part of the promise of AI to improve education markedly.

Don

I agree with you on the positive productivity, and by extension economic, outcomes that AI is likely to generate (the use of computer models in weather forecasting being an excellent example).

What I am less clear about is how this will be beneficial socially and politically, at least in the short term.

I agree that "our argument is not with bias, it's the fear that machines will not respect ours (and by extension, respect us)." However, I think the easiest way to deal with that hurdle is likely to be multiple partisan AI programs, much as we have Fox News and MSNBC, or pre-Elon Twitter and Truth Social, and thus an even more polarized political discourse than we already have.

But perhaps I may be missing something in your argument.

NickBallesteros

I think Michael is suggesting that an AI with fidelity to Asimov's Three Laws of Robotics would be immunized against any possibility of becoming capricious, functioning much as Locke and the Enlightenment liberals envisioned state authority: legitimate only when based on natural right.

John Howard

Thank you for all your thought-provoking work on the impact of passive investing; I was thrilled when you started this Substack. I keep hearing from many financial folks that stock picking will matter again in the coming years. Do you agree, or has the power of passive remained unchanged, such that it will continue to dominate how stocks perform?
