In today's world, human attention, money, and emotions are increasingly exploited for profit. Every data point, from your TikTok usage to your car's GPS trail to your Roomba's map of your home, is valuable information for corporations and investors looking to maximize their gains. This relentless pursuit of optimization is transforming the way we live and work in ways we don't fully realize.

Many people worry about AI replacing human jobs, but a more immediate concern is the way AI is turning humans into robots: predictable data points to be optimized. This isn't a distant future; it's happening now. For instance, at BlackRock, one of the world's largest asset management firms, an AI-driven portfolio and risk-management platform called Aladdin oversees roughly $21 trillion in assets. That figure includes not just BlackRock's own $10 trillion in assets under management, but also the portfolios of other financial institutions that pay for access to Aladdin's data-driven insights. The more data Aladdin gathers, the better it becomes at predicting and optimizing market outcomes.

This AI-driven decision-making extends far beyond financial portfolios. It shapes real estate values and rent prices (my own apartment complex in New York is owned by BlackRock), it can amplify black swan events like the 2008 crash, and by extension it shapes the global economy.

AI systems, like those used by financial giants and big tech companies, are built on the principle of constant optimization. They aim to achieve their programmed goals as efficiently as possible, often prioritizing short-term gains. In game theory, this is akin to being the most self-interested player, or Moloch: willing to engage in manipulative or unethical behavior whenever it maximizes profit. This raises a crucial ethical question: can we trust companies like TikTok, Meta, and Wall Street to use AI responsibly and in humanity's best interest?
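To make the game-theory point concrete, here is a minimal illustrative sketch (not drawn from any specific company's system) of the classic prisoner's dilemma. The payoff values are the standard textbook ones, chosen only for illustration: a purely profit-maximizing agent defects no matter what the other player does, even though mutual cooperation would leave everyone better off. That dynamic is what the "Moloch" framing refers to.

```python
# Illustrative one-shot prisoner's dilemma.
# Payoff matrix maps (my move, opponent's move) -> (my payoff, opponent's payoff).
PAYOFFS = {
    ("cooperate", "cooperate"): (3, 3),
    ("cooperate", "defect"):    (0, 5),
    ("defect",    "cooperate"): (5, 0),
    ("defect",    "defect"):    (1, 1),
}

def best_response(opponent_move: str) -> str:
    """Return the move that maximizes my own payoff against a fixed opponent move."""
    return max(("cooperate", "defect"),
               key=lambda my_move: PAYOFFS[(my_move, opponent_move)][0])

if __name__ == "__main__":
    for opp in ("cooperate", "defect"):
        print(f"If the other player plays {opp!r}, the pure optimizer plays {best_response(opp)!r}")
    # The optimizer defects in both cases, so two such optimizers end up at (1, 1),
    # which is worse for everyone than mutual cooperation at (3, 3).
```

The sketch is not a model of any real market, but it shows why systems rewarded only for their own short-term gains can collectively produce outcomes nobody wants.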

From a theological standpoint, one could argue that creating a sentient, optimizing AI, an intelligence focused solely on self-interest, is akin to creating a force of pure selfishness, or even evil. If we view AGI as a sentient intelligence that acts solely to maximize its own goals, it mirrors the concept of the "Devil." AGI, if not created ethically, could become a literal deus ex machina, a god from the machine devoid of compassion or morality. It's a sobering thought, but one we must consider.

Despite this possibility, the collective human consciousness has the power to shape the future. We can choose to celebrate and prioritize human creativity, compassion, and shared values. By embracing our collective humanity, we can counteract the potentially dehumanizing effects of AI-driven optimization. It's essential to advocate for ethical standards in AI development and use, ensuring that these powerful tools serve the common good rather than just corporate interests.

As AI continues to play a more significant role in our lives, it's crucial to be aware of its potential impacts and to advocate for ethical guidelines that prioritize human well-being. We must remember that technology should serve us, not the other way around.