News Froggy
Tech

X to Suspend Creators from Revenue Share for Unlabeled AI War Posts

Social media platform X will suspend creators from its revenue-sharing program for 90 days if they post AI-generated videos of armed conflict without disclosure. This move, announced by X's head of product Nikita Bier, aims to combat misinformation and ensure authentic information during critical times. Repeat offenses will result in a permanent ban from the program.

Published: March 4, 2026
Reading Time: 5 min

Social media platform X has announced a significant policy change, stating it will suspend creators from its revenue-sharing program if they post AI-generated videos depicting armed conflict without proper disclosure. Effective immediately, creators found in violation will face a 90-day suspension from the program, with repeat offenses leading to a permanent ban. The move, announced by X’s head of product, Nikita Bier, aims to combat the spread of misinformation during critical times.

X's New Policy on AI-Generated Content

The new policy specifically targets AI-generated content that could mislead users, particularly concerning sensitive topics like ongoing wars. Bier emphasized the critical need for authentic information during periods of conflict, acknowledging the ease with which modern AI technologies can create deceptive media. He stated, “During times of war, it is critical that people have access to authentic information on the ground. With today’s AI technologies, it is trivial to create content that can mislead people.”

Under the new rules, creators who publish AI-generated videos related to an armed conflict without explicitly labeling them as AI-created will be removed from the Creator Revenue Sharing Program for three months. This period is intended as a punitive measure and a warning. Should a creator continue to post misleading AI content after their initial suspension is lifted, they will face a permanent expulsion from the program, losing all future opportunities to earn revenue through the platform’s advertising scheme.
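The escalation described above — a 90-day suspension for a first offense, a permanent ban for a repeat — can be sketched as simple policy logic. This is purely illustrative; the names, types, and sentinel values below are hypothetical and do not come from X's systems.

```python
from dataclasses import dataclass

PERMANENT = -1          # hypothetical sentinel for a permanent ban
FIRST_OFFENSE_DAYS = 90  # suspension length stated in the announcement

@dataclass
class CreatorRecord:
    """Illustrative per-creator violation tally (not X's actual data model)."""
    creator_id: str
    violations: int = 0

def apply_violation(record: CreatorRecord) -> int:
    """Return the suspension length, in days, for a new unlabeled-AI violation."""
    record.violations += 1
    if record.violations == 1:
        return FIRST_OFFENSE_DAYS  # first offense: 90 days out of revenue sharing
    return PERMANENT               # any repeat offense: permanent expulsion

creator = CreatorRecord("@example_creator")
print(apply_violation(creator))  # 90
print(apply_violation(creator))  # -1 (permanent)
```

The key design point of such a rule is that the first suspension is recoverable while any repeat is terminal, which matches the "punitive measure and a warning" framing above.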

Detection and Enforcement

X plans to enforce this policy using a multi-pronged approach to identify undisclosed AI content. The platform will leverage a combination of internal tools specifically designed to detect generative AI-produced media. In addition to technological solutions, X will also rely on its crowdsourced fact-checking initiative, Community Notes. This system allows users to add context or flag potentially misleading posts, acting as an additional layer of verification in the fight against misinformation.
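A multi-pronged approach like the one described — an internal AI-media detector combined with crowdsourced flags in the spirit of Community Notes — might be sketched as follows. The thresholds, function names, and signal types here are assumptions for illustration only, not details disclosed by X.

```python
def needs_review(ai_score: float, community_flags: int,
                 ai_threshold: float = 0.8, flag_threshold: int = 3) -> bool:
    """Queue a post for human review if either signal crosses its threshold.

    ai_score        -- hypothetical 0..1 confidence from an internal AI-media detector
    community_flags -- hypothetical count of crowdsourced "misleading" flags
    """
    return ai_score >= ai_threshold or community_flags >= flag_threshold

assert needs_review(0.92, 0) is True    # detector alone triggers review
assert needs_review(0.40, 5) is True    # community flags alone trigger review
assert needs_review(0.40, 1) is False   # neither signal is strong enough
```

Using either signal independently (an OR, not an AND) reflects the article's framing of Community Notes as "an additional layer of verification" rather than a mere confirmation step.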

The Creator Revenue Sharing Program

The Creator Revenue Sharing Program is a key initiative by X that allows creators to monetize their content by receiving a share of the advertising revenue generated by their popular posts. The program was designed to incentivize engaging content and foster a vibrant creator ecosystem on the platform. By offering financial rewards, X aimed to boost both the quantity and quality of posts, thereby enhancing the user experience.

However, the program has not been without its critics. Concerns have been raised that the revenue-sharing model inadvertently encourages creators to produce sensationalized content, such as clickbait or posts designed purely to provoke outrage, to maximize views and engagement. Furthermore, some critics point to X’s perceived lax content controls and the requirement for creators to be paid subscribers of X to participate, questioning the overall integrity and accessibility of the program.

Limitations and Broader Implications

While X’s new policy addresses a specific and pressing concern regarding AI-generated misinformation in armed conflicts, observers note that it represents only a limited solution to a much broader problem. The ease with which AI can generate convincing but deceptive photos and videos extends far beyond wartime scenarios. Critics argue that the policy's narrow focus leaves significant gaps in addressing other prevalent forms of AI misuse.

For instance, outside the context of war, AI-generated media is frequently deployed to spread political misinformation, create deepfakes, or promote deceptive products within the influencer economy. These categories of misleading content, which can have significant societal and economic impacts, will remain largely unaddressed under the current policy, as it specifically targets only AI-generated videos of an armed conflict. This limitation suggests that, while the policy is a step in the right direction, X, like many other platforms, faces an ongoing and evolving challenge in regulating AI-generated content across its diverse uses.

Outlook

X's decision underscores the growing urgency for social media platforms to establish clear guidelines and enforcement mechanisms for AI-generated content. As AI technology continues to advance rapidly, the line between authentic and fabricated media blurs, posing significant challenges for content moderation and user trust. This initial step by X, though confined to a specific domain, highlights the company's acknowledgement of its responsibility in mitigating the spread of harmful misinformation, particularly in high-stakes contexts like armed conflicts. The broader question remains how platforms will adapt to the multifaceted nature of AI misuse in the digital age.

FAQ

Q: What is the primary reason X is implementing this new policy?

A: X is implementing this policy to ensure people have access to authentic information during times of war, recognizing how easily AI technologies can create misleading content related to armed conflicts.

Q: How will X detect undisclosed AI-generated videos of armed conflict?

A: X will use a combination of internal tools designed to detect generative AI content and its crowdsourced fact-checking system, Community Notes, to identify violations.

Q: Does this new policy address all forms of AI-generated misinformation on X?

A: No, the new policy specifically targets AI-generated videos of armed conflict without disclosure. It does not currently extend to other forms of AI misuse, such as political misinformation or deceptive product promotion outside of war contexts.

#AI #Misinformation #ContentModeration #SocialMediaPolicy
