"You can simultaneously believe in the miracle of capitalism and technology to improve living standards, while simultaneously rejecting the laissez faire attitude towards society and culture."
Or... you can realize that Capitalism's fundamental values are a-humanist and unable to account for the things you - and most people - *really* value. And you can then decide to devote your time, energy, intelligence, and resources to figuring out a system that values, promotes, and supports what we *actually* need as humans.
We've had a series of "this is the best system we've found so far" moments throughout history. I'm sure there were some serfs in earlier eras who might have argued that serfdom was better than being a hunter-gatherer or whatever. But we ultimately found better systems. Why would we not be looking for one now, when the failures of Capitalism are so evident, and increasingly so? And indeed when such failures are arguably reaching a fatal tipping point (climate change, inequality, etc.)?
I know I'm probably tilting at windmills a bit here, given the fundamental orientation of this newsletter and you as an author. But that's precisely why I feel like this is a leverage point: as a clearly thoughtful person who explicitly values things that Capitalism does not, while simultaneously being one of its most ardent defenders, you have a great opportunity to hone the blade of transcapitalism (or post-capitalism). 😄
One of my more radical techno-optimist beliefs is that Marx's dream is only possible with ASI. We need something smarter than humans to pull off central planning.
Yesss, I agree! But it won't just happen, we need to steer a course that direction. I'd love to read your full take on this. Might even get me to subscribe. 😉
This is so beautifully written, Evan. Technology is a way of revealing. On the flip side, I think one thing these advances will teach us is what it truly means to be human. Like you rightly said, many people are lonely, and this will only increase. And maybe we'd start to pay attention to human-to-human connections again. Small pockets of this movement are already appearing.
Thank you for this thoughtful piece. Heidegger's QCT seems to have had a fairly limited reach in the way people talk about AI, and it's nice to see someone else digging into it. Your reading, with its focus on "enframing," offered a fresh take for someone like me who usually focuses on the "bringing forth" parts of QCT.
Your description of how AI can "enframe" our understanding of a given concept is apt, though I wish you had gone further into the ways engaging with an AI interface can modify our thoughts and behavior. Heidegger focused a lot in that essay on the relationship man has to nature, and living with the effects of the industrial revolution seems to have shown him what it means to suddenly be capable of thinking of a hillside or mountain as a "resource," as you described.
In some ways our own age is bleaker. The commoditization of consumer attention through advertising pushes our technology toward ever more engaging design, a pressure further intensified by the endless hunger for data on which to train AI models. These pressures lead to interfaces that optimize for attention capture and the production of user data - perhaps another "standing reserve" in the age of AI.
If the design of apps optimizing for user engagement weren't enough to tilt you, consider the effect of such interactions on a user - then consider how deeply exposed we are to having our behavior modified by our interactions with AI. Given that we are being changed by AI (mostly, I suspect, in ways that cripple our capacity for critical thought), the follow-up question to me is whether the interface and AI that I might use are aligned with my own goals and interests. Do I trust this tool to change me?
My own answer is a resounding "No." But rather than bemoan the coming of AI or turn mis-Anthropic, I would rather build something aligned with my values and interests, something that prioritizes transparency, control, and an invitation to engage with complexity. I choose to design an interface to AI that allows the improvements in our technology to be reflected back onto us, to take my fear and apprehension at what the AI might "enframe" for me and redefine that enframing, and instead to "bring forth" the kind of person I would like to become.
PS
I think often of Plato's Meno (~95d), where Socrates points out a seeming contradiction in quotations from Theognis. The tricky passage hinges on an obscure conjugation of the Greek "to teach," but one reading is that we can only learn virtue by being placed in an environment that facilitates our teaching ourselves to be virtuous. The promise I see in AI is that it might provide the circumstances for such opportunities to practice virtue.
"You can simultaneously believe in the miracle of capitalism and technology to improve living standards, while simultaneously rejecting the laissez faire attitude towards society and culture."
Or... you can realize that Capitalism's fundamental values are a-humanist and unable to account for the things you - and most people - *really* value. And you can decide then to devote your time, energy, intelligence, and resources to figuring out a system to value and promote and support what we *actually* need as humans.
We've had a series of "this is the best system we've found so far" throughout history. I'm sure there were some serfs in earlier eras who might have argued that serfdom was better than being a hunter-gatherer or whatever. But we ultimately found better systems. Why would we not be looking for one now, when the failures of Capitalism are so evident, and increasingly so? And indeed when such failures are arguably reaching a fatal tipping point (climate change, inequality, etc.).
I know I'm probably tilting at windmills a bit here with the fundamental orientation of this newsletter and you as an author. But that's precisely why I feel like this is a leverage point: you as a clearly thoughtful person who explicitly values things that Capitalism does not, and being simultaneously one of its most ardent defenders, this is a great opportunity to hone the blade of transcapitalism (or post-capitalism). 😄
One of my more radical techno-optimist beliefs is that Marx's dream is only possible with ASI. We need something smarter than humans to pull off central planning.
Yesss, I agree! But it won't just happen, we need to steer a course that direction. I'd love to read your full take on this. Might even get me to subscribe. 😉
This is so beautifully written, Evan. Technology is a way of revealing. On the flip side, I think one thing these advances will teach us is what it truly means to be human. Like you rightly said, many people are lonely, and this will only increase. And maybe we’d start to pay attention to human to human connections again. Small pockets of this movements are already appearing
Love this so much! Looking forward to more installments
You meant to write Schumpeter's creative destruction.
Ugh good catch! Fixing