As AI becomes ever more deeply woven into modern life, the ethical challenges surrounding its use grow increasingly critical. Nowhere is this more evident than in social media, where artificial intelligence is being used to create fake accounts, skew public discourse, and promote harmful content for the sake of engagement. Platforms like Facebook and X (formerly Twitter) have become battlegrounds for attention, with engagement-driven algorithms often amplifying the worst aspects of human behavior.
The fact that AI can be used to manipulate social media and manufacture the illusion of popularity is not just a technical glitch; it is a reflection of how unchecked algorithms can harm societies and contribute to the spread of misinformation, hate, and division. As Yuval Noah Harari suggests, corporations should be held accountable for the consequences of the algorithms they deploy, just as humans are held accountable for their actions. The question we must ask is: how do we ensure that AI, and the algorithms it powers, align with ethical standards that promote truth, compassion, and societal well-being?
The Illusion of Popularity: AI-Generated Fake Accounts
One of the most concerning developments in the AI-social media nexus is the rise of fake accounts, or bot accounts, that impersonate real users and generate content designed to sway public opinion. These AI-generated accounts can comment, like, share, and retweet, creating the illusion that certain opinions or pieces of content are far more popular than they truly are. This distortion can lead to a dangerous feedback loop where users are more likely to believe and engage with content that appears to be endorsed by a large number of people—when in reality, much of that engagement is fake.
Fake accounts can promote divisive content, create echo chambers, and even influence elections by making particular ideas or opinions seem more mainstream or widely accepted than they actually are. These bots can comment on news articles to inflate the visibility of extremist views, or flood social media feeds with disinformation that skews public perception of important issues.
The impact is clear: when people see a post that has thousands of likes, shares, or comments, they are more likely to believe it is credible or worth engaging with—even if it was artificially boosted by bots. This can tilt the scales of public discourse in favor of harmful, misleading, or false narratives.
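To see how small the manipulation can be relative to its apparent effect, here is a minimal Python sketch (all account names, actions, and counts are invented for illustration) that separates a post's engagement into organic and suspected-bot activity:

```python
from collections import Counter

def engagement_breakdown(events, suspected_bots):
    """Split a post's engagement events into organic vs. suspected-bot counts.

    events: list of (account_id, action) tuples, e.g. ("user42", "like")
    suspected_bots: set of account ids flagged by some upstream heuristic
    """
    organic, automated = Counter(), Counter()
    for account, action in events:
        (automated if account in suspected_bots else organic)[action] += 1
    return organic, automated

# Toy data: three coordinated accounts generate the bulk of the "popularity".
events = (
    [("bot_a", "like"), ("bot_a", "share")] * 400
    + [("bot_b", "like")] * 500
    + [("bot_c", "share")] * 300
    + [("human_1", "like"), ("human_2", "comment")] * 50
)
organic, automated = engagement_breakdown(events, {"bot_a", "bot_b", "bot_c"})
print("organic:  ", dict(organic))    # {'like': 50, 'comment': 50}
print("automated:", dict(automated))  # {'like': 900, 'share': 700}
```

In this toy example, three coordinated accounts produce roughly 94% of the post's visible engagement, which is exactly the illusion described above: to a casual scroller, the post looks overwhelmingly popular.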
Algorithms Amplifying Harmful Content
AI-driven algorithms play a central role in determining what content users see on social media. Platforms like Facebook and X are designed to maximize user engagement, whether that means more likes, more comments, or more time spent on the platform. Unfortunately, these algorithms often prioritize sensational, controversial, or inflammatory content, because those are the posts that generate the most engagement. Content that stirs anger, fear, or outrage gets shared more often, leading algorithms to push it further up users' feeds.
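No platform publishes its ranking formula, but the underlying incentive can be caricatured in a few lines of Python (the weights and fields are entirely hypothetical, not any platform's actual model):

```python
def engagement_score(post):
    """A caricature of engagement-first ranking: reward whatever drives
    interaction, with no regard for the emotion that produced it."""
    return post["likes"] + 2 * post["comments"] + 3 * post["shares"]

feed = [
    {"id": "calm_explainer", "likes": 120, "comments": 10, "shares": 5},
    {"id": "outrage_bait",   "likes": 90,  "comments": 200, "shares": 150},
]
ranked = sorted(feed, key=engagement_score, reverse=True)
print([p["id"] for p in ranked])  # ['outrage_bait', 'calm_explainer']
```

Under a scoring rule like this, the outrage post wins even though fewer people may genuinely value it; the design choice to reward raw interaction is what does the amplifying.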
This phenomenon is nothing new. As Harari points out, when Gutenberg's printing press was invented, its potential to spread knowledge was quickly matched by its use for sensationalist content. Some of the press's early bestsellers were not scientific or philosophical texts but witch-hunting manuals and other fear-inducing narratives, because that is what sold. Similarly, today's algorithms favor content that evokes strong emotions, often at the cost of promoting reasoned, balanced discourse.
The result is a social media landscape where hate speech, divisive rhetoric, and misinformation are not just allowed to exist—they are actively promoted by the very algorithms that drive these platforms.
The Need for Corporate Accountability
Harari suggests that just as individuals are held accountable for impersonating professionals like doctors or surgeons, corporations should be held responsible for the outcomes of their algorithms. If an AI algorithm is designed with the sole intention of increasing engagement, but ends up promoting violence, hatred, or fear, the company that created it should be held accountable for the harm it causes.
Currently, many social media companies shirk this responsibility, arguing that they are simply platforms for free speech. However, the algorithms they use to determine what content gets prioritized and seen are not neutral—they are designed with specific goals in mind, such as maximizing engagement. When those goals lead to real-world harm, whether through the spread of disinformation, the incitement of violence, or the amplification of divisive content, the companies behind these algorithms should be held accountable.
This shift in responsibility is critical if we are to create a healthier, more balanced digital space. Just as doctors are held to ethical standards in their treatment of patients, tech companies must be held to ethical standards in the creation and deployment of their algorithms.
The Path Forward: Ethical AI Use and Transparent Algorithms
To address these issues, we need transparency, accountability, and a shift in priorities. Corporations should be required to disclose when content is AI-generated or when bots are influencing online discourse. Users should have the right to know when they are engaging with real people and when they are interacting with AI-generated content. This transparency would help combat the manipulation of public opinion and restore trust in digital spaces.
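At the data level, such disclosure could be as simple as a provenance record attached to every post. The sketch below is hypothetical; the field names and categories are invented for illustration and do not correspond to any real platform's schema:

```python
from dataclasses import dataclass
from enum import Enum

class Origin(Enum):
    HUMAN = "human"
    AI_ASSISTED = "ai_assisted"
    AI_GENERATED = "ai_generated"
    AUTOMATED_ACCOUNT = "automated_account"

@dataclass(frozen=True)
class ProvenanceLabel:
    """A disclosure record attached to each post and shown to users.
    Fields and categories are illustrative, not any platform's real schema."""
    post_id: str
    origin: Origin
    disclosed_by: str  # e.g. "self_reported" or "platform_detected"

label = ProvenanceLabel("post_123", Origin.AI_GENERATED, "platform_detected")
print(f"{label.post_id}: {label.origin.value} ({label.disclosed_by})")
```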
Furthermore, we need to rethink the design of AI algorithms to prioritize truth, compassion, and balance over engagement and profit. This might involve tweaking algorithms to promote more nuanced, informative content and ensuring that hate speech and divisive rhetoric are deprioritized, rather than amplified.
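Continuing the hypothetical scoring sketch from above, one way to express that shift in priorities is to damp the engagement score with a moderation signal (the toxicity values and damping constant are invented; real moderation pipelines are far more involved):

```python
def adjusted_score(post, toxicity, damping=10.0):
    """Engagement score damped by a moderation signal.

    toxicity: 0.0 (benign) to 1.0 (clearly harmful), assumed to come from a
    separate, hypothetical classifier; the damping constant is a policy choice.
    """
    raw = post["likes"] + 2 * post["comments"] + 3 * post["shares"]
    return raw / (1.0 + damping * toxicity)

outrage = {"likes": 90, "comments": 200, "shares": 150}  # raw score: 940
explainer = {"likes": 120, "comments": 10, "shares": 5}  # raw score: 155
print(adjusted_score(outrage, toxicity=0.9))     # 94.0   -> now ranked lower
print(adjusted_score(explainer, toxicity=0.05))  # ~103.3 -> now ranked higher
```

Nothing about the engagement signal changes; the ranking simply stops treating outrage-driven interaction as the only thing worth optimizing.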
If we are to harness AI for the highest good, we must align its development and use with the principles of Dharma—compassion, non-harm, and truth. By holding corporations accountable for the consequences of their algorithms and ensuring that AI-driven content is transparent, we can begin to mitigate the harm that AI currently contributes to the digital landscape.
Conclusion: A Call for Ethical AI in Social Media
The potential for AI to manipulate public discourse through fake accounts and harmful algorithms is a problem that cannot be ignored. If we want AI to serve humanity in positive and meaningful ways, we must hold corporations responsible for the algorithms they create and use. By insisting on transparency, accountability, and ethical standards, we can guide AI toward a future where it amplifies the best of human values—rather than the worst.
The responsibility to ensure that AI is used ethically lies with all of us. As Harari suggests, it is time for corporations and developers to face the consequences of the tools they build. The stakes are too high to allow AI to be used without checks and balances, and we must act now to ensure that AI serves the greater good, rather than distorting reality for profit.
To learn more about Yuval Noah Harari’s views on AI, technology, and the future of humanity, check out his latest book Nexus.
The following interview with Yuval Noah Harari on YouTube discusses his new book Nexus, which explores the history of information networks and the challenges posed by artificial intelligence. Harari argues that the way these networks are built predisposes us to use their power unwisely, and that we need to be more mindful of the potential dangers of AI. He also calls for greater regulation of the tech industry to prevent the misuse of AI. Harari's insights are thought-provoking and timely, and the interview provides a valuable overview of his book.
🙏🕊️🙏
Thank you 🙏