26 Comments
Feb 5, 2023 · Liked by Michael W. Green

I recommend you look at Gary Marcus' work, which is at garymarcus.substack or @GaryMarcus, specifically at his comments about ChatGPT (and any LLM, really) being a "pastiche" that frequently suffers from "hallucinations." To my mind, any probabilistic synthesizer is going to suffer from an incomprehension of meaning and the failure to incorporate it into the work.

In other words, the world is not suffering a dearth of pabulum.

Feb 5, 2023 · Liked by Michael W. Green

Feels like your productivity assumptions are conducted in a vacuum. Not shocked, given your view that therapy - a largely intuitive, highly personal relationship, unique to every patient - will be disrupted by generative AI, which is hyper-rational and, definitionally, possesses no intuition. So you don't stop to think WHY we are using current digital technologies for vapid and unproductive purposes. It was not a guaranteed outcome, but in a world of declining connection and meaning, that was the outcome. Tech like AI coupled with VR not only doesn't address this most fundamental problem, it intensifies it. In a vacuum ChatGPT improves TFP, but in reality it will likely drive us further into the arms of even more powerful new technologies offering compelling and cacophonous distractions (such as VR).

Feb 6, 2023 · Liked by Michael W. Green

Who you ask, and how you ask a question, helps determine the answer. Many answers aren't objective; they are reinforcements of social and cultural norms.

Feb 5, 2023 · Liked by Michael W. Green

Another thoughtful reading experience. Perhaps like almost all things touched by fallible humanity, the 4th Industrial Revolution tools will bring forth both positive and negative consequences. Of course, the positivity and negativity will be assessed from the viewpoints of fallible humans. That last bit is the trickiest part of the tools and their use. Humans are so messy.

Feb 5, 2023 · Liked by Michael W. Green

Bring it... Adapt or die! Doomsayers/FearGurus, go ahead and shout! Is there pain in this process of evolution? Absolutely, but there is a "threshold" to everything, where processes/systems change certain dynamics, yet at that point of balance it no longer changes psychology! This is especially true in economic policy!!! The world will slowly, incrementally become a better place! Embrace & appreciate, or just go away...


I wonder if Robert Gordon's "Rise and Fall of American Growth" can be on the book club list? I see you reference it quite often. I've never looked at a washing machine the same. How do we square Richard Duncan's idea that America needs to invest more against Robert's pessimism on the future of America's growth and debt? AI is talked about by both authors. Maybe you could start a paper club and get Ole Peters on, ha-ha.


Open the pod bay doors, HAL.


ChatGPT wrote a non-political limerick about former President Donald Trump for me. I asked it to write a limerick for President Obama, which it did; then I asked for one for Trump, and it wrote one for him too. It also wrote a "poem about Trump's favorable attributes" upon request, and was quite complimentary of Trump as well. I always enjoy your articles.


Dr. Strangechat or: How I learned to stop worrying and love the AI.

I'm not entirely sure I understand your argument. You're suggesting the AI should conform to our biases so that we're liable to trust it, use it more, and thus as a society we get a large productivity boost? But the crux of your argument is that most won't use the AI until it tells us what we want to hear?

On a different note, Gurri ruined V for Vendetta for me. Most see it as a triumph of the people over a totalitarian state, but in his book Gurri re-framed the movie as a contest between the center, which however corrupt at least provides some stability/authority/societal foundation, and the edge, which can only ever break down, never build. The end of the movie sees the edge tearing it all down, but then Gurri asks: what happens after that?

I think about that a lot. Everywhere the edge attacks and wants to destroy, but has no plan for putting something new into the smouldering crater.


If ChatGPT, or other forms of AI, can teach humans to become critical thinkers, then we will have a great leap forward as humans using these technologies.

The dark side is there, it is possible, but I look at this technology with great hope that it will free us from the mundane. The productivity gains, though, need to be shared to lift the lowest tiers of society out of poverty and ignorance.


A very interesting take; however, it is not clear to me that human civilization is ready to adapt to this process in the near or medium term. In fact, I would expect that ChatGPT will wind up being seen as just another tool of the elite, used to maintain their superior outcomes.

That said, I think your concept of bias, and how it is important to each of us, is quite perceptive.


Very insightful and balanced article, thank you! Though I'm puzzled that productivity declined in the early 2000s. I would have thought the widespread adoption of computers & the internet increased productivity significantly. Interesting!


This essay really speaks to me in terms of the future it envisions with AI; not one of robot overlords, nor of a hyper-tribal cyberpunk collapse into anarchy, but a humanistic one. One promoting synthesis and integration, a model of understanding premised on genuine welfare, on broadly distributed human sympathy, over individualistic ego. And that is refreshing.

While I do not want to diminish or take away from that ideal, or that potential, in the slightest, I'd just like to point out that to get there, we may first need to confront some of the deceptions occurring in the industry and create incentives that broaden the field of study in the ways highlighted by scientists such as Gary Marcus.

Marcus has pointed out that in lab situations, these AIs have actually counseled suicide. That isn't a hyperbolic or alarmist call-out of 'danger' but rather an illustration that these models are absurdist. Are they actually intelligent? Or are they just very clever mimics?

Again, Marcus' assertion is that what they're doing is essentially just following statistics on the frequency of words - they're not really following ideas and concepts - and that is nothing like language or cognition.

Here's a paraphrase from a talk he gave recently (with Noam Chomsky, the linguist and fellow skeptic):

"Take the relation between syntax and semantics, for example, or between those and pragmatics. If you build a language production system in the classical sense, you start with the meaning you want to express and you translate that into words. Or in language comprehension, you start with the sentence you want to understand and you translate that into a meaning. Well, GPT does not do that. What GPT does is hear a sequence of words and predict the next word. But let's say you talk to someone and you know what they might say; well, GPT might produce something dramatically interesting, but whether it winds up giving you what you want is an entirely different matter. And if you want to extract from it a model of the world, that's a completely different problem...

There's this politics that goes back fifty years about using neural networks vs. a knowledge-based approach. It turns out the knowledge-based approach has some value, but it had some trouble thirty years ago, so now we're doing all this stuff without knowledge, which means you have all the knowledge in Wikipedia and nobody really knows how to put it into these systems. And because they are statistical mimics, you can ask a question like 'Who's the President of the United States?' and they might say Donald Trump, because in their data set there are more examples of Donald Trump than Joe Biden. Whereas if you want to reason and say 'Now Biden is the President; Trump was the President before' - i.e. you want to use your understanding of sequences of events in time - these models can't.

Cognitive scientists think about this stuff; they talk about discourse models, putting together what it is that we're talking about and how we're talking about it. There's lots of reason to think that when we learn language we're not starting from a blank slate. And the failures that we're seeing in DALL-E are showing you that if you start with a blank slate that just accumulates statistics, you don't really understand language.

Or another example: there are some papers that show the same thing in vision. There are models that label activities - so one can say 'you're nodding your head' or 'those people are sitting out there'; it can do some visual labeling. And the myth is that the right things will emerge... which is kind of magic... that when you give enough data to these systems, they will understand the world.

So we built a benchmark based on what 10-month-olds or 4-month-olds understand of basic physics: that things drop, or that if things are hidden you can still find them eventually. And these systems don't understand that at all... it's the complete failure of the empiricist hypothesis that if we give them a lot of data, cognition will emerge.

Kant said we start with time and space and causality, and he's probably right; we probably start with those things. And over and over, people keep pursuing this hypothesis that ignores the cognitive science around that and says we'll just use all the data. And because it works 75 percent of the time, they think they're making progress, but sometimes making progress 75 percent of the way is not enough. Getting close doesn't seem to solve the problem."
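The "statistics on word frequency" mechanism Marcus describes can be made concrete with a toy sketch. This is not how GPT actually works internally (GPT uses a learned neural model, not raw counts); it is a deliberately crude bigram counter, but it reproduces the 'Who's the President?' failure mode from the quote: the prediction tracks what was frequent in the training data, not what is true now.

```python
from collections import Counter, defaultdict

# Toy training data in which "Trump" follows "is" more often than "Biden" does,
# mimicking a corpus with more pre-2021 text than post-2021 text.
corpus = ("the president is Trump . the president is Trump . "
          "the president is Biden .").split()

# Count how often each word follows each preceding word (a bigram model).
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict_next(word):
    """Return the statistically most frequent successor of `word`."""
    return following[word].most_common(1)[0][0]

# The model answers from frequency, with no notion of time or events:
print(predict_next("is"))  # prints "Trump" - the more frequent continuation
```

A system like this has no representation of "before" or "now"; swapping in more data only shifts the counts, which is exactly the empiricist bet Marcus is questioning.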

Feb 5, 2023 · edited Feb 5, 2023

Thanks for sharing.

ChatGPT, is this you???

Bertrand Russell's Problems of Philosophy tells us about the dress issue, except it's a table. So I guess the existing fix for learning about bias, i.e. philosophy, has failed, so onward to AI.

So you think Keynes is going to be right in terms of man pursuing metaphysical pursuits, he just got the timing wrong? Just to be clear, why is this time different in terms of AI replacing humans, relative to our previous near-death experiences?

Mentioning dishwashers reminded me of an article I read a few years ago in The Atlantic (that I obviously can't find now); the main point was that these inventions didn't create free time, because we wanted more stuff, so we just worked more (paraphrasing from memory - oh dear). So again, why now? Why wouldn't this time (if realised) be reinvested into pursuing further material gains?


I agree with you on the positive productivity, and by extension economic, outcomes that AI is likely to generate (the use of computer models in weather forecasting being an excellent example).

What I am less clear about is how this will be beneficial socially and politically, at least in the short term.

I agree that "Our argument is not with bias, it's the fear that machines will not respect ours (and by extension, respect us)." However, I think the easiest way to deal with that hurdle is likely to be multiple partisan AI programs, much as we have Fox News and MSNBC, or pre-Elon Twitter and Truth Social, which would lead us to an even more polarized political discourse than we already have.

But perhaps I may be missing something in your argument.


Thank you for all your thought-provoking work on the impact of passive investing; I was thrilled when you started this Substack. I keep hearing from many financial folks that stock picking will matter again in the coming years. Do you agree, or has the power of passive remained unchanged, and will it continue to dominate how stocks perform?
