News Froggy
Tech

X to Suspend Creators from Revenue Share for Unlabeled AI War Posts

Social media platform X will suspend creators from its revenue-sharing program for 90 days if they post AI-generated videos of armed conflict without disclosure. This move, announced by X's head of product, Nikita Bier, aims to combat misinformation and ensure authentic information during critical times. Repeat offenses will result in a permanent ban from the program.

Published: March 4, 2026
Reading Time: 5 min

Social media platform X has announced a significant policy change, stating it will suspend creators from its revenue-sharing program if they post AI-generated videos depicting armed conflict without proper disclosure. Effective immediately, creators found in violation will face a 90-day suspension from the program, with repeat offenses leading to a permanent ban. The move, announced by X’s head of product, Nikita Bier, aims to combat the spread of misinformation during critical times.

X's New Policy on AI-Generated Content

The new policy specifically targets AI-generated content that could mislead users, particularly concerning sensitive topics like ongoing wars. Bier emphasized the critical need for authentic information during periods of conflict, acknowledging the ease with which modern AI technologies can create deceptive media. He stated, “During times of war, it is critical that people have access to authentic information on the ground. With today’s AI technologies, it is trivial to create content that can mislead people.”

Under the new rules, creators who publish AI-generated videos related to an armed conflict without explicitly labeling them as AI-created will be removed from the Creator Revenue Sharing Program for three months. This period is intended as a punitive measure and a warning. Should a creator continue to post misleading AI content after their initial suspension is lifted, they will face a permanent expulsion from the program, losing all future opportunities to earn revenue through the platform’s advertising scheme.

Detection and Enforcement

X plans to enforce this policy using a multi-pronged approach to identify undisclosed AI content. The platform will leverage a combination of internal tools specifically designed to detect generative AI-produced media. In addition to technological solutions, X will also rely on its crowdsourced fact-checking initiative, Community Notes. This system allows users to add context or flag potentially misleading posts, acting as an additional layer of verification in the fight against misinformation.

The Creator Revenue Sharing Program

The Creator Revenue Sharing Program is a key initiative by X, allowing creators to monetize their content by receiving a share of the advertising revenue generated by their popular posts. The program was designed with the intention of incentivizing engaging content and fostering a vibrant creator ecosystem on the platform. By offering financial rewards, X aimed to boost the quantity and quality of posts, thereby enhancing user experience.

However, the program has not been without its critics. Concerns have been raised that the revenue-sharing model inadvertently encourages creators to produce sensationalized content, such as clickbait or posts designed purely to provoke outrage, to maximize views and engagement. Furthermore, some critics point to X’s perceived lax content controls and the requirement for creators to be paid subscribers of X to participate, questioning the overall integrity and accessibility of the program.

Limitations and Broader Implications

While X’s new policy addresses a specific and pressing concern regarding AI-generated misinformation in armed conflicts, observers note that it represents only a limited solution to a much broader problem. The ease with which AI can generate convincing but deceptive photos and videos extends far beyond wartime scenarios. Critics argue that the policy's narrow focus leaves significant gaps in addressing other prevalent forms of AI misuse.

For instance, outside the context of war, AI-generated media is frequently deployed to spread political misinformation, create deepfakes, or promote deceptive products within the influencer economy. These categories of misleading content, which can have significant societal and economic impacts, remain largely unaddressed under the current policy, which targets only "AI-generated videos of an armed conflict." While the policy is a step in the right direction, X, like many other platforms, faces an ongoing and evolving challenge in regulating AI-generated content across its diverse uses.

Outlook

X's decision underscores the growing urgency for social media platforms to establish clear guidelines and enforcement mechanisms for AI-generated content. As AI technology continues to advance rapidly, the line between authentic and fabricated media blurs, posing significant challenges for content moderation and user trust. This initial step by X, though confined to a specific domain, highlights the company's acknowledgement of its responsibility in mitigating the spread of harmful misinformation, particularly in high-stakes contexts like armed conflicts. The broader question remains how platforms will adapt to the multifaceted nature of AI misuse in the digital age.

FAQ

Q: What is the primary reason X is implementing this new policy?

A: X is implementing this policy to ensure people have access to authentic information during times of war, recognizing how easily AI technologies can create misleading content related to armed conflicts.

Q: How will X detect undisclosed AI-generated videos of armed conflict?

A: X will use a combination of internal tools designed to detect generative AI content and its crowdsourced fact-checking system, Community Notes, to identify violations.

Q: Does this new policy address all forms of AI-generated misinformation on X?

A: No, the new policy specifically targets AI-generated videos of armed conflict without disclosure. It does not currently extend to other forms of AI misuse, such as political misinformation or deceptive product promotion outside of war contexts.

#AI #Misinformation #ContentModeration #SocialMediaPolicy

Related articles

Apple Redesigned Smartwatches: Import Ban Averted, Feature Stays
Review
Engadget, Apr 19

Verdict: A Sigh of Relief for Apple Watch Buyers Apple has successfully navigated a significant legal challenge, as the US International Trade Commission (ITC) recently ruled against imposing a second import ban on its

Tech
NYT Technology, Apr 19

Analysis: Hundreds of Fake Pro-Trump Avatars Emerge on Social Media

A network of hundreds of AI-generated pro-Trump influencer accounts has surged across TikTok, Instagram, Facebook, and YouTube ahead of midterm elections. These fake personas rapidly post political content, seemingly aiming to sway conservative voters. President Trump has even reposted content from one such artificial account.

Google Wallet: The Unexpected Essential for My Digital Life
Review
Android Authority, Apr 18

For many of us, Google's suite of apps forms the bedrock of our digital existence. Calendar organizes our schedule, Chrome is our window to the web, Gmail

Anthropic CEO Meets White House Amid AI Hacking Fears
Tech
Washington Post Technology, Apr 18

Anthropic's CEO met with the White House Chief of Staff over national security concerns about the Mythos AI model, which can automate cyberattacks, prompting an urgent government assessment.

Tech
NYT Technology, Apr 18

Analysis: Cerebras, an A.I. Chip Maker, Files to Go Public as Tech

AI chip maker Cerebras has refiled for an initial public offering (IPO), revealing a 75% revenue surge to $510 million and a $238 million profit last year. The move positions Cerebras amid a burgeoning wave of tech IPOs, including anticipated listings from SpaceX, OpenAI, and Anthropic.

Boosting LLM Accuracy: Building a Context Hub Relevance Engine
Programming
freeCodeCamp, Apr 18

Context Hub (`chub`) addresses LLM limitations by providing coding agents with curated, versioned documentation and skills via a CLI, augmented by local annotations and maintainer feedback. This article explores `chub`'s workflow and content model, then demonstrates building a companion relevance engine. This engine uses an additive reranking layer with extracted signals to significantly improve search accuracy for shorthand queries without altering `chub`'s core design.
