Social media platform X has introduced a new enforcement measure targeting creators who post AI-generated war footage without proper disclosure. Under the updated policy, accounts that fail to clearly label conflict-related videos produced with artificial intelligence risk losing access to the platform's revenue-sharing program for 90 days.
AI Disclosure Policy Targets Conflict Content
The decision was outlined by Nikita Bier, X’s head of product, who emphasized the importance of preserving authenticity during wartime events. He noted that rapid advancements in generative AI tools have made it increasingly easy to fabricate realistic battlefield scenes, raising concerns about misinformation spreading during sensitive geopolitical moments.
Unlike standard moderation practices such as labeling or removing posts, the new approach directly links compliance to monetization. Creators who repeatedly violate the disclosure requirement may face permanent removal from X’s revenue-sharing system.
Enforcement Through Community Notes and Metadata Signals
X stated that enforcement may be triggered through Community Notes, metadata analysis, or other technical signals indicating AI-generated content. The rule specifically applies to videos depicting armed conflicts and does not constitute a blanket restriction on all AI-generated media shared on the platform.
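X has not published the technical details of these signals. As a rough illustration only, a provenance-based check might look at embedded content-credential metadata, such as the IPTC "digital source type" field used in C2PA-style manifests, to flag unlabeled clips for review. The function and key names below are hypothetical; the real enforcement pipeline is not public.

```python
# Hypothetical sketch of a metadata-based disclosure check.
# The "digitalSourceType" key and its values are modeled loosely on
# IPTC/C2PA provenance vocabulary; X's actual signals are unknown.

# Source-type values that hint the media was produced by a generative model
AI_SOURCE_HINTS = {
    "trainedAlgorithmicMedia",
    "compositeWithTrainedAlgorithmicMedia",
}

def needs_disclosure_review(metadata: dict, creator_labeled: bool) -> bool:
    """Return True when provenance metadata suggests AI generation
    but the creator has not labeled the post."""
    if creator_labeled:
        # Creator already disclosed; nothing to enforce.
        return False
    source_type = metadata.get("digitalSourceType", "")
    return source_type in AI_SOURCE_HINTS

# Example: an unlabeled clip whose provenance marks it as AI-generated
meta = {"digitalSourceType": "trainedAlgorithmicMedia"}
print(needs_disclosure_review(meta, creator_labeled=False))  # True
```

In practice such a check would be only one signal among several, combined with Community Notes and other detection methods before any monetization penalty is applied.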
The policy arrives amid heightened tensions in the Middle East, where online misinformation has become a growing concern alongside real-world military developments.