
Commons:Village pump/Proposals

From Wikimedia Commons, the free media repository

Shortcuts: COM:VP/P • COM:VPP

Welcome to the Village pump proposals section

This page is used for proposals relating to the operations, technical issues, and policies of Wikimedia Commons; it is distinguished from the main Village pump, which handles community-wide discussion of all kinds. The page may also be used to advertise significant discussions taking place elsewhere, such as on the talk page of a Commons policy. Recent sections with no replies for 30 days and sections tagged with {{Section resolved|1=--~~~~}} may be archived; for old discussions, see the archives; the latest archive is Commons:Village pump/Proposals/Archive/2026/01.

Please note
  • One of Wikimedia Commons’ basic principles is: "Only free content is allowed." Please do not ask why unfree material is not allowed on Wikimedia Commons or suggest that allowing it would be a good thing.
  • Have you read the FAQ?

 
SpBot archives all sections tagged with {{Section resolved|1=~~~~}} after 5 days and sections whose most recent comment is older than 30 days.

Ratify Commons:AI images of identifiable people as a guideline


Following the discussion at Commons:Village_pump/Proposals/Archive/2025/09#Ban_AI_generated_or_edited_images_of_real_people, I prepared Commons:AI images of identifiable people.

I am now seeking to have it officially adopted as a guideline.

@GPSLeo, Josve05a, JayCubby, Dronebogus, Jmabel, Grand-Duc, Pi.1415926535, Túrelio, Raymond, Isderion, Smial, Adamant1, Infrogmation, Omphalographer, Bedivere, Masry1973, and Ooligan: I believe this is everyone that participated in the original discussion. Please feel free to ping anyone if I missed them.

Cheers, The Squirrel Conspiracy (talk) 22:28, 7 December 2025 (UTC)Reply

 Support. Omphalographer (talk) 23:15, 7 December 2025 (UTC)Reply
  •  Strong oppose The original proposal had "AI generated photos where the description states that the photo shows an actual person are not allowed", but this new proposal now has the much more restrictive "Images of identifiable people created by AI are not allowed on Commons unless at least one of the following criteria are met" [posted by the person or reliable sources cover it]. I don't know if the voters here all know about this. I think it should be changed. There are two main issues:
Example File:King Tutankhamun brought to life using AI.gif (display was disabled)
  1. Information graphics and art such as caricatures relating to public officials, e.g. an information graphic or artwork pointing out problems with Trump's behavior, claims and policies.
  2. It doesn't seem to exclude identifiable historic people. AI images can often make sense, especially when there is little or no free media available of the person. An example is on the right.
I think the votes were cast hastily, without proper deliberation and without consideration of potential uses. A policy this indiscriminate and restrictive additionally seems to violate the existing policies COM:SCOPE, COM:INUSE and COM:NOTCENSORED. A constructive approach would be to edit the proposed policy, but I would probably still tend toward oppose because I see no need for this – we should strive to stay as unbiased and uncensored as possible and delete files based on whether deletion is warranted per set or per case. People could introduce more and more restrictions, and soon you'll find yourself in a situation where you can't even upload an image critical of Trump anymore per policy (and with wider adoption of AI tools by society, this is what this policy will already achieve to a large extent).
Prototyperspective (talk) 15:29, 8 December 2025 (UTC)Reply
It's bold of you to assume that everyone above you voted "hastily without proper deliberation and without consideration of potential uses". More likely, I think, is that the other participants simply disagree with you.
  • Regarding the first point: "The image in question is the subject of non-trivial coverage by reliable sources" already covers the use case of "caricatures relating to public officials". The series of images that File:Trump’s arrest (2).jpg belongs to, for example, is permissible under this guideline. This guideline would not permit a random user's AI image caricature of Trump, but even without this guideline, it would be deleted as personal art.
  • Regarding the second point: "It doesn't seem to exclude identifiable historic people.", that is working as designed. If it's a notable depiction, it'll be covered by "non-trivial coverage by reliable sources". If it's a random user's AI image of a historic figure, even without this guideline, it would be deleted as personal art. Keep in mind that the image you posted does not depict King Tut. It depicts what a probability engine thinks the prompter is looking for - a young boy with Arabic features in pharaoh attire. It has no way of knowing if any of what it did is accurate. This is why some projects have already banned most AI images.
The Squirrel Conspiracy (talk) 16:45, 8 December 2025 (UTC)Reply
Sincerely, that "Tutankhamun" image is disgusting AI slop. I can see why it is necessary to have all these (non-notable) depictions banned. If someone wants to share their (prompted) art, there are venues such as Tumblr, DeviantArt and Twitter (or whatever Elon Musk has decided to call it). Bedivere (talk) 16:52, 8 December 2025 (UTC)Reply
Nothing about it is disgusting. "why it is necessary to have these all (non notable) depictions banned" – ok: so why? Prototyperspective (talk) 17:05, 8 December 2025 (UTC)Reply
They are fictional reconstructions produced by a model, not representations of an actual person, making them potentially misleading and outside COM:SCOPE. Allowing non-notable AI depictions would open the door to massive amounts of invented imagery serving no educational purpose. Notable cases are covered by the exception. Bedivere (talk) 22:07, 8 December 2025 (UTC)Reply
So if a public broadcast documentary shows some well-known historical figure, does that mean the segment is non-educational and the documentary so badly disgusting because they're showing a historical person differently than he or she may have looked? Prototyperspective (talk) 22:40, 8 December 2025 (UTC)Reply
In that case, the key would be that the recreation would most likely be a human creation or representation, not something created by an algorithm. Bedivere (talk) 00:57, 9 December 2025 (UTC)Reply
"to assume that…" – I didn't do so, if you read my comment; this is a false statement. "already covers the use case of 'caricatures relating to public officials'" – No, it doesn't. It means caricatures and critical works are reserved for the privileged few who got reported on in major publications. What chaos if we allowed common citizens to release critical art and information graphics, right? "it would be deleted as personal art" – No, it wouldn't (necessarily). It depends on how educational/useful it is. "a young boy with Arabic features in pharaoh attire" – Exactly, and such things can be useful and interesting, especially if engineered to closely match data about the given person. "no way of knowing if any of what it did is accurate" – not the AI, but the prompter. Prototyperspective (talk) 17:11, 8 December 2025 (UTC)Reply
  •  Support, with the addendum that publications on behalf of someone should also be permitted. --Carnildo (talk) 23:15, 8 December 2025 (UTC)Reply
  •  Support Infrogmation of New Orleans (talk) 01:21, 9 December 2025 (UTC)Reply
  •  Support the proposal and also  Support whacking User:Prototyperspective with a wet trout Apocheir (talk) 04:08, 9 December 2025 (UTC)Reply
    Re trout: if I made an error, point out which one by addressing it (ideally refuting it).
    Why do educational documentaries use fictional depictions of historical people if such depictions can't be educationally useful? These are banned by this proposal as well. I always support truly considering and addressing the points raised in every kind of community decision-making, especially when it involves volunteers.
    • Another point I didn't mention earlier: the policy rationalizes itself with "When dealing with photographs of people, we are required to consider the legal and moral rights of the subject […] Commons has long held that files that pose such legal or moral concerns", but why would that not apply to paintings or non-AI digital art of identifiable people? And does this really apply to neutral depictions of ancient historical people? There is no need for this policy considering the very low number of such files Commons currently has.
    Prototyperspective (talk) 10:24, 9 December 2025 (UTC)Reply
    Personal art about notable people was always not allowed as being out of scope. That it was only handled through the regular scope rules was never a problem because of the small number of such uploads. Now, with the AI tools available, there are many more such uploads. To avoid long discussions and case-by-case decisions, we need this new stricter guideline. GPSLeo (talk) 11:28, 12 December 2025 (UTC)Reply
    • "Personal art about notable people was always not allowed as being out of scope" – False. Personal art by non-contributors is speedily deleted, so this is an additional reason why there is no need for this proposed policy. Other than that, I don't know of such a policy, especially not one that clarifies what is meant by "personal art".
    • "Now with the AI tools available there are much more of such uploads" – Arguably false. There aren't many – currently just 99 in the category. That's roughly the number of files uploaded to Commons every two minutes or so.
    • Moreover, a significant fraction of them are COM:INUSE, underlining that these files can also be useful on Wikimedia projects, despite the fact that the ones we have are not close to what is possible with these tools in terms of quality (and accuracy, if data on appearance is available). But Commons isn't just there for wikiprojects; it's also for e.g. documentary makers, who often show fictional imagery of historical people (as stated earlier, which I could prove by linking to several such documentaries with example timestamps).
    • "To avoid long discussions and case by case decisions, we need this new stricter guideline" – For personal art by non-contributors and hoaxes, files can already be speedily deleted without discussion. For files that are of low quality or not useful, there generally are no lengthy discussions. Enabling users to discuss whether a file should be deleted is a point of COM:NOTCENSORED, which this proposed policy would, as far as I can see, invalidate in its current title/proposition. There are a lot of things where one may prefer not to enable discussion. I still see no need for a stricter guideline.
    Prototyperspective (talk) 11:41, 12 December 2025 (UTC)Reply
  •  Oppose The page refers to "legal and moral" rights as a justification but doesn't cover cases where the legal and moral rights have expired. If there's another good reason to exclude pictures of, say, Cleopatra or Genghis Khan, the policy needs to spell it out. -Nard (Hablemonos) (Let's talk) 17:27, 11 December 2025 (UTC)Reply
    Editorial standards are moral rights too. We seldom make editorial decisions for other wikis on Commons, but here it is needed to protect our project. Having AI-generated images of historical personalities, used to show how a person looked, is against good journalistic standards. We still allow such images if created in the context of a relevant art project or scientific paper. But we do not want every user to be able to just upload such content. GPSLeo (talk) 11:37, 12 December 2025 (UTC)Reply
    "used to show how this person looked like" – This is not the only use case for such imagery. An example I gave is a documentary film about, say, Ancient Egypt, and I noted I could provide evidence that such documentaries usually do include fictional imagery of historical people. "is against good journalistic standards" – Commons is not censored based on proposed "journalistic standards". Prototyperspective (talk) 16:12, 15 December 2025 (UTC)Reply
    I think the point is that living people have certain rights that dead people cannot have, and this proposal's main justification lies there. Editorial standards seem to be secondary to the proposal. whym (talk) 23:41, 5 January 2026 (UTC)Reply
    Editorial standards are not moral rights; they're standards used by a certain organization. I see no evidence that journalistic standards exclude the use of tools to show how someone might have looked. Wikipedia certainly uses much worse: random images produced by people who had no idea how the person may have looked, but made with paint and not computers.--Prosfilaes (talk) 03:24, 8 January 2026 (UTC)Reply
    • FWIW, those have a certain value in terms of showing how someone was perceived in a given era. For example, all images of biblical figures are from people who had never seen them (unless we count visionaries as actual witnesses). A painting of Jesus by a notable artist has an historical significance that an AI image of Jesus does not, though it would be purely coincidental for either to be a good likeness. - Jmabel ! talk 03:47, 8 January 2026 (UTC)Reply
  •  Support --ReneeWrites (talk) 23:11, 13 December 2025 (UTC)Reply
  •  Strong oppose No reason provided why this is needed when Commons:Scope already exists. --Trade (talk) 15:59, 15 December 2025 (UTC)Reply
    @Trade I assume you mean  Support, otherwise the context is not clear for us :) --PantheraLeo1359531 😺 (talk) 16:03, 15 December 2025 (UTC)Reply
  •  Oppose for its treatment of dead, especially long-dead, people. AI images of living people are problematic. AI pictures of King Tut are not. That rule goes much too far in telling the other projects that depend on us what they may use as illustrations.--Prosfilaes (talk) 07:13, 17 December 2025 (UTC)Reply
 Support with Jmabel's caveat.   — 🇺🇦Jeff G. please ping or talk to me🇺🇦 01:54, 18 December 2025 (UTC)Reply
 Strong support. I don't think we should be hosting deepfakes of any kind, to prevent the spread of misinformation and out of respect towards the person being depicted, among many other ethical and social considerations. It's moon (talk) 03:35, 27 December 2025 (UTC) – Edited on 07:21, 29 December 2025 (UTC) Reply
 Support Surely one benefit of this guideline is that it will deter those who attempt to get around copyright violations by using AI-generated portraits. However, considering that the Commons version may differ from or even conflict with those of other communities, images that do not comply with this guideline should also be excluded from the COM:INUSE rules. 0x0a (talk) 17:51, 28 December 2025 (UTC)Reply
@0x0a: that last (about this trumping INUSE) sounds like you are making a different proposal than the one about which everyone above has expressed their opinion. - Jmabel ! talk 01:46, 29 December 2025 (UTC)Reply
Um, I kinda believe INUSE also needs to be updated accordingly, so I opened a new discussion at
👉︎ Commons_talk:Project_scope#Proposed_change:_excluding_images_do_not_comply_with_COM:AIP_from_COM:INUSE_rules -- 0x0a (talk) 10:12, 29 December 2025 (UTC)Reply
@0x0a: I disagree. Part of the point of the guideline is to not use deepfakes and inaccurate representations of identifiable people, not just on Commons but across all Wikimedia projects. Therefore all images that don't meet the proposed guideline should, in my opinion, get deleted once the guideline gets ratified, regardless of whether they are currently in use on other projects or not (with perhaps the only exceptions being images used to illustrate the concept of a deepfake or similar itself → and even in those cases, they should probably still have been published by the person they depict). It's moon (talk) 10:58, 29 December 2025 (UTC) – Edited on 12:13, 29 December 2025 (UTC)Reply
Frankly, I don't know which of my statements you disagree with. I clearly support this proposal and have already opened a revision discussion at Commons_talk:Project_scope regarding the part that conflicts with the guideline. 0x0a (talk) 14:50, 29 December 2025 (UTC)Reply
Whoops, I misread INUSE – I thought you were saying that images used on other projects should be kept, which I disagreed with, but I realize now you were saying they should get deleted, so it turns out we both agree. It's moon (talk) 16:05, 29 December 2025 (UTC)Reply
  • I think the oppose votes, even if they are in the minority, raise valid points about living people versus long-dead people. I’d suggest focusing on living people (and perhaps the recently deceased) for now. This is not to say anything goes for images of the dead; it would just be left undetermined in the meantime. I think that a narrower focus would allow us to ratify some important and non-controversial part of the proposal quickly with broader support. We can continue working on the rest and additively revise the policy after that. whym (talk) 11:38, 5 January 2026 (UTC)Reply
  •  Oppose This seems overthought. Take the bit that's important, tweak it, and add it to COM:PIP: "AI images of identifiable people are not allowed on Commons unless they have been published with the subject's permission or the image itself is the subject of significant public commentary in reputable sources."
    There's no need to rehash a moral framework, define what a person is, or legislate interactions with overarching standards like SCOPE or DW. There's no need to add technical issues related to things like upscaling; wherever that needs to go, it's not specific to identifiable people. There's no need to try to define a boundary between substantially AI-edited and AI-generated. No need to get into what counts as a good source.
    The operative bit above sets the standard and people can sort out the finer details in vivo. GMGtalk 14:18, 5 January 2026 (UTC)Reply
  •  Comment Regarding AI images of long-dead people: while not necessarily problematic when it comes to the legal and moral rights of the subjects, there are other factors that make these images unsuitable for an educational project like Commons. The example of Tutankhamun illustrates this perfectly. We have multiple forensic studies that reconstruct Tutankhamun's appearance based on the actual structure of his skull and mummy (see [1], [2], [3], [4], [5], [6]). However, files such as File:King Tutankhamun brought to life using AI.gif are problematic because they are historically inaccurate, overly idealized misrepresentations. This further shows how generative AI can and will make false assumptions about historical subjects and introduce misinformation. It's moon (talk) 14:50, 5 January 2026 (UTC)Reply
    What if a Wikibooks chapter wants to discuss misinformation using AI-generated Tutankhamun images as illustrations? whym (talk) 23:38, 5 January 2026 (UTC)Reply
    I had seen that study before my earlier post with that gif, FYI, and I'm well aware of scientific facial reconstruction.
    • First of all, you're making the false assumption that the educational function of media showing ancient people is primarily or even only to educate people on exactly how the given people looked. That is not necessarily the case, probably not even usually. If I wanted to make an educational podcast video about King Tutankhamun talking about historical facts and the peculiarity of his young age, it would be more interesting if it had some visuals. Such an animation, even if not accurate in the tiniest details, would help the listener visualize and better imagine what is being talked about, plus it makes them take in more information, as the content is not dull and boring but exciting. An example here is the Fall of Civilizations podcast that I sometimes enjoy listening to. It also has some visuals on YouTube – do you think it's accurate to the last detail? Example: Ep 18 Fall of the Pharaohs (1.1 M views), such as its depiction of Ramesses. (Btw, I made some educational podcasts in the past and went to Commons to find free media to use, which was often so gappy that I had to first upload relevant media here from elsewhere; see how AI media can sometimes be useful for podcast and documentary making, depending on various factors such as how it's contextualized etc.)
    • It depends on how the file is used. If it's used in a Wikipedia article where the text implies or the caption says basically 'This is how Tutankhamun exactly looked like' then it's problematic. But the problem there is how it's used, not that it's on Commons.
    • The gif actually looks quite similar to the scientific reconstruction. Maybe you think it's of utmost importance that even the tiniest facial details are exactly accurate in any depiction and everything else is "misinformation". But that's not what matters to many people or in many contexts, such as when the media is not contextualized as a very realistic restoration and the subject is just, e.g., the young age of Tutankhamun. Moreover, most paintings, especially historic and ancient ones, are very inaccurate.
    • The question is not whether there are studies that reconstruct a given person's face – and for most notable long-dead people there aren't any – but whether the media is on Commons / free-licensed. There's basically one person (big thanks to him) who creates (static) restorations of notable people – ~150 files in Category:Works by Cícero Moraes – and sometimes (probably fewer than that) some free-licensed image in a study or elsewhere to import. For many notable subjects there are no media. Key here is that just because a file is on Commons doesn't mean it has to or needs to be used. Lastly, AI tools can be leveraged to create scientifically accurate free-licensed depictions of people: one can prompt with descriptions of the scientific reconstruction and additionally select and adjust the results until one has a result where the appearance matches that of the scientific reconstruction.
    Prototyperspective (talk) 00:22, 6 January 2026 (UTC)Reply
    @Prototyperspective: I think that most regulars understand the proposed policy not as a tool for an absolute prohibition of AI-generated depictions of (long-dead) persons, but rather as a quality-assurance tool to stem any influx of such imagery without a clear-cut use case. I as a supporter certainly do.
    I see the current situation as "upload first, ask later", without robust tools for editorial oversight of AI-generated imagery. It's kind of similar to "shall issue" states in the US with regard to firearm laws and concealed carry. I think that most supporters are advocating for the alternative of "ask yourself first if AI is useful, then if yes, upload", the default being "don't upload" (or delete by due process if uploaded anyway). Such a mindset with regard to AI slop and AI-generated imagery in general would be a robust tool for the needed curating. To return to the concealed-carry example: we should switch from a "shall issue" to a "may issue" style of permit. This implies that, of course, an AI-generated Tutankhamun image with a demonstrated solid use case (like the Wikibooks example above your post) can and may stay. I'm advocating that such AI imagery imperatively needs a worked-out context in its description (prompt, use case, ideally the sources) besides the demonstrated need of actual use somewhere; otherwise it's liable to get deleted.
    Lastly, you wrote "AI tools here can be leveraged to create scientifically accurate free-licensed depictions of people: one can prompt with descriptions of the scientific reconstruction and additionally select and adjust the results until one has a result where the appearance matches that of the scientific reconstruction." As it stands now, the tools available to the general public (ChatGPT, DALL-E, Stable Diffusion...) are built in a way to generate eye candy (as you wrote on the German forum; I could also refer to de:Klickibunti), not scientifically sound media, as that is likely what the general public, their users, expects. Some software that is specifically made for scientific reproductions (like forensic face generation, digital aging or similar) won't be within the purview of this policy. Regards, Grand-Duc (talk) 18:22, 6 January 2026 (UTC)Reply
    Reasonable point, but I disagree: there is no flood of AI imagery, and this proposed policy probably won't be much help with this non-problem (if it even were a problem); it's redundant due to the policies COM:SCOPE and COM:DIGNITY while in direct contradiction with COM:NOTCENSORED and, as explained above, COM:SCOPE, and the minor potential benefits are not worth the inconsistency and problems that come with this proposed policy. People can already nominate any such files, or many at once, for deletion.
    The Tutankhamun animation has two educational use cases I can readily think of, and we shouldn't assume we can or need to be able to readily think of all potential use cases:
    1. as part of some video or page about Tutankhamun where the animation is not contextualized as being precise to the last facial wrinkle but just some rough AI visualization, e.g. showing his young age;
    2. as an illustration of how AI tools can be used to visualize people, such as ancient people, in a moving (non-static) format (even if some say the quality is low).
    "are built in a way to generate eye candy" – I know they are not built to make what I described easy. That doesn't mean they can't be used for it. People could, for example, learn about this use case and its current issues and adjust these tools or use them in sophisticated ways to create better-quality results of that type. "Some software that is specifically made for scientific reproductions" – I'm not talking about other software, though. The current models can already be used for this; it's just not easy. Many people think using AI tools is always easy, but it isn't – maybe the way most people use them is simple, but some people use them in more sophisticated ways that need a lot of skill and expertise. I outlined roughly how these tools, including just standard Stable Diffusion etc., can be used for reproductions of scientific accuracy, and you seem to have overread or ignored that. This can already be done; I'm just not skilled enough with these tools, and also not motivated enough to spend my time and effort on it to prove it to you right now. My prior low-effort uploads relating to this are more about (enabling) communicating the concept and idea – this again can lead to people fleshing out this application for higher-quality results by adjusting or building tools and developing workflows. But again, not for every application does each facial detail matter, such as for the podcast linked above, where at least one ancient person is also depicted without scientific-precision-level accuracy (btw, typo: it has 11 M views, not 1.1 M). Prototyperspective (talk) 19:19, 6 January 2026 (UTC)Reply
    You repeatedly claim that editors overread or fail to deliberate whenever they disagree with your views ([1], [2], [3]).
    My stance is that we need to build policies based on how AI is currently being used, not how it could or may theoretically be used. I'm not against changing the policy later down the line if we see a change in AI accuracy or a tendency to a more responsible usage, but for now we have to address the current reality. It's moon (talk) 21:20, 6 January 2026 (UTC)Reply
    Your claims are ad hominem argumentation, and I will not stand for them.   — 🇺🇦Jeff G. please ping or talk to me🇺🇦 21:42, 6 January 2026 (UTC)Reply
    @Jeff G.: Could you clarify on who you are replying to? It's moon (talk) 22:00, 6 January 2026 (UTC)Reply
    @It's moon: I was replying to Prototyperspective, referencing your characterization of their claims. Sorry for not specifying that, I thought my indentation was clear.   — 🇺🇦Jeff G. please ping or talk to me🇺🇦 22:49, 6 January 2026 (UTC)Reply
    Understood, thanks. It's moon (talk) 23:05, 6 January 2026 (UTC)Reply
    Absurd claim; if you ignore all I said in my comment imo it's better to not comment at all. Prototyperspective (talk) 22:54, 6 January 2026 (UTC)Reply
    @Prototyperspective: Better for you, maybe. I didn't ignore it, I agreed with @It's moon's characterization of it. I asked you nicely in this edit 16:09, 7 November 2024 (UTC) to stop with the insults and displaying your pro-AI bias. Now, I am warning you: if you do it again, I am going to report you.   — 🇺🇦Jeff G. please ping or talk to me🇺🇦 23:21, 6 January 2026 (UTC)Reply
    I'm not insulting anybody and didn't make any ad hominem argument, and am nicely asking you to please not accuse me of things I'm not doing, thanks. Prototyperspective (talk) 23:29, 6 January 2026 (UTC)Reply
    @Prototyperspective Did you or did you not write "you ignore all I said in my comment" 22:54, 6 January 2026 (UTC)?   — 🇺🇦Jeff G. please ping or talk to me🇺🇦 23:36, 6 January 2026 (UTC)Reply
    This is not an insult. It was a rational point: your comment did not address nor relate to anything I wrote (where, btw, imo a constructive rational response would be to prove me wrong by pointing to the specific text segment to which your comment does relate, if there was any; but there isn't any "ad hominem" in there, let alone is it all just that). With "ignore" I meant you didn't address any of it, which of course one can do, but I'm also free to point that out even if you disagree with that assessment. Prototyperspective (talk) 23:44, 6 January 2026 (UTC)Reply
    Re. "there is no flood of AI imagery" – my experience speaks otherwise. I've seen a ton of clearly AI-generated images uploaded to Commons, including a substantial number of AI-generated or heavily AI-retouched images of people. Omphalographer (talk) 21:51, 6 January 2026 (UTC)Reply
    How is that a flood? People upload floods of mundane low-resolution photos of all sorts, repetitive large mundane photos, and so on – probably hundreds per day on average. There are just a few thousand AI files; 1089 in AI-generated humans – that's next to nothing on Commons. And the depictions of historic/ancient people are an order of magnitude below that. Prototyperspective (talk) 23:00, 6 January 2026 (UTC)Reply
    The vast majority of new AI-generated uploads are deleted, most often under CSD F10. The files which end up categorized - and particularly those which are placed in those "AI-generated by subject" categories - are a small fraction of what's coming in. Omphalographer (talk) 23:22, 6 January 2026 (UTC)Reply
    Good point, but in my experience (from regularly tracking all new AI uploads for over a year and categorizing probably more than half of AI-related files) it's not a small fraction, but maybe around as many as are still on Commons.
    If one makes a comparatively large effort to delete low-quality AI media, then it can seem like a flood, but there's days where not even one AI image got uploaded, and people, I think, aren't making a comparable effort to find and delete low-quality drawings and mundane low-resolution photos. I think we just keep disagreeing on that point, but it's not central to my arguments above – especially since you also say these files are already speedily deleted, so this new policy is not needed, especially not in this indiscriminate, harsh and unjustified shape. Prototyperspective (talk) 23:38, 6 January 2026 (UTC)Reply
    Re. "there's days where not even one AI image got uploaded" – not recently! There are typically somewhere on the order of 50 to 100 AI-generated images uploaded every day. Omphalographer (talk) 21:28, 9 January 2026 (UTC)Reply
    "I think that most regulars understand the proposed policy not as tool for an absolute prohibition of AI generated depictions of (long-dead) persons, but rather as a quality-assurance tool to stem any influx of such imagery without clear-cut use case." Er, what? No, we don't use policy that says these things are "not allowed" and then argue it's fine because it's not an absolute prohibition. Policy should say exactly what it means; laws saying that X is not allowed, with people in the know getting the wink and nod from people also in the know, are a good way to piss off users.--Prosfilaes (talk) 03:24, 8 January 2026 (UTC)Reply
  •  Support Without having read this whole discussion, I've looked at the proposed guideline as it stands today, and I agree with the proposal. It is quite restrictive, but I think we need to be restrictive handling such AI-generated images. We should always be extremely cautious and only allow a selection of such images where there is a very good reason for each individual image to host it at all. Gestumblindi (talk) 09:53, 6 January 2026 (UTC)Reply
    One of the controversial points that emerged in the discussion is whether we are legally required to protect dead people's dignity in the same way as that of living people. What do you think? whym (talk) 10:36, 7 January 2026 (UTC)Reply
    @Whym: Well, legally required? That's a question we could discuss in great detail, as it very much depends on the jurisdiction. Germany, for example, has quite strong postmortal personality rights at least for recently deceased people, while Switzerland doesn't have quite the same concept. I don't know how this is in the US; if we applied the same principles as for copyright, we could require an image (be it real or AI generated) to not infringe postmortal personality rights in the US and in its country of origin... But I think regarding AI generated images, that's a point we don't even need to discuss, as the moral and scope issues should be enough to refrain from hosting such images in most cases. Gestumblindi (talk) 18:49, 7 January 2026 (UTC)Reply
    Yeah, it seems like there is a territory specific component to be considered regarding the living vs dead issue.
    The current proposal's main justification, as it is written, seems to be the moral rights of the people depicted, though. (It's in the first paragraphs.) If there are other, more important rationales, I think the proposal needs to be revised to more clearly include them and argue based on them. Without such (major) revision, I think it would make a more solid argument if we stick with living people within this iteration. whym (talk) 01:20, 11 January 2026 (UTC)Reply
  •  Support Strakhov (talk) 18:27, 6 January 2026 (UTC)Reply
  •  Support Ternera (talk) 14:02, 7 January 2026 (UTC)Reply
  •  Support Chorchapu (talk) 01:38, 14 January 2026 (UTC)Reply

Leveraging Reddit content with automatic release requests and a queue

[edit]

I’ve been thinking that it would be great to automate some of the outreach that we do via VRT—normally we cold-email copyright holders for material after someone has already identified it as worth uploading to Commons.

Reddit is full of subreddits where people post their own photos—of living things, artifacts, locations, food, etc. Lots of these photos are of a high quality and carry encyclopedic value. What if highly-voted original content on Reddit automatically received a comment explaining Wikimedia licensing and asking the author if they would be willing to release? They could agree right there in the comments. Then the post could be added to a queue/pool where volunteers could verify the license and evaluate the usefulness of the image before uploading to Commons.

I think this would really open a bottleneck in the relicensing of images across the web for Commons. The logistics need to be thought through, but I think it’s achievable through coordinating with subreddit mods to allow these comments (independent bots are a normal part of Reddit, and the API allows reasonable use for free). Zanahary (talk) 17:48, 14 December 2025 (UTC)Reply

Sounds good :) --PantheraLeo1359531 😺 (talk) 19:11, 14 December 2025 (UTC)Reply
But the process should be as convenient as possible for the posters, otherwise it could deter them from considering the licensing --PantheraLeo1359531 😺 (talk) 19:13, 14 December 2025 (UTC)Reply
Yes, I imagine they could just reply “yes” to the automated comment Zanahary (talk) 20:32, 14 December 2025 (UTC)Reply
Unfortunately, I'm unsure just saying "yes" would constitute a release and ensure they have a reasonable understanding. A message like "I release this photo under CC-BY-4.0" would be an appropriate release. All the Best -- Chuck Talk 00:32, 15 December 2025 (UTC)Reply
+1 --PantheraLeo1359531 😺 (talk) 08:34, 15 December 2025 (UTC)Reply
That's fine, too. I think that part of the implementation should be simple, with guidance from the copyright-dedicated volunteers and VRT here. Zanahary (talk) 08:37, 15 December 2025 (UTC)Reply
 Support that would be great; I thought about this too, hence my post at Commons talk:Permission requests#Example site to find useful media to ask for permission. I think this would be most useful for data graphics rather than random photos, or perhaps only feasible for a subset of images, because mods and/or admins wouldn't allow more frequent posts. If everything that gets more than a threshold of upvotes gets such a request, there would be lots of problematic and/or low-quality media but that may be worth it if this allows scaling this up. Maybe just do this for /r/DataIsBeautiful at first instead of more widely and then consider more subreddits if it works well, one at a time.
Somebody would need to build the bot, and then this would need to be accepted by the respective subreddits' mods – I think these two things would be the main challenges.
Regarding how to declare permission, the comment could say something like "If you're willing to license your image this way, please state so by commenting with 'I the creator of this work license it under CCBY4.0'" and also allow for replies that have the license icon embedded in the image. So far, there's just 3 files from that sub and few files in Category:Images from Reddit. Often, info from the post needs to be included in the file description. Btw, I'm still waiting for a clarification of a chart creator whose file I uploaded here whether it shows the share of operating systems used in user reports or for users who made reports in that month. Prototyperspective (talk) 01:19, 16 December 2025 (UTC)Reply
@Prototyperspective, the sorts of image subreddits are those like r/Whatisthisbug where people generally upload their own photos, or subreddits where original photographs are marked as such in a machine-readable way through flairs. I think this would avoid the copyright violations we could expect from promoting the poster of any image on reddit past an upvote threshold to say “I release!” Zanahary (talk) 08:24, 16 December 2025 (UTC)Reply
Good idea! However, it doesn't relate to anything I wrote – probably you thought that with "If everything that gets more than a threshold of upvotes gets such a request, there would be lots of problematic and/or low-quality media but that may be worth it" I (also) meant copyvios, but I meant things like low-resolution pics of bugs, echo-chamber posts, misinfo posts, educationally useless posts, etc. As said, I think this depends on which subreddits the media is sourced from and, secondly, it may be worth it and easily manageable as the total number of uploads would probably still be fairly low and these uploads deletable via DRs. Prototyperspective (talk) 13:56, 16 December 2025 (UTC)Reply
 Oppose per Grand-Duc.   — 🇺🇦Jeff G. please ping or talk to me🇺🇦 13:38, 16 December 2025 (UTC)Reply
That's what en:Wikipedia:Images from social media, or elsewhere (which uses Wikipedia as a host, for its familiarity to non-Wikimedians) is for. Feel free to post a link to that, in Reddit discussions or elsewhere. Andy Mabbett (Pigsonthewing); Talk to Andy; Andy's edits 21:03, 26 December 2025 (UTC)Reply
Doing that manually at scale is infeasible.
Actually, no support or discussion is needed here to build & launch a bot that comments with this link and a question whether they license the file under CCBY automatically in relevant reddit posts as long as the respective subreddits' mods are okay with that. Prototyperspective (talk) 10:13, 4 January 2026 (UTC)Reply
 Oppose per Grand-Duc. I oppose automating this process, considering how much of the content on Reddit is reposts. ReneeWrites (talk) 21:13, 26 December 2025 (UTC)Reply
This would only be added to posts that are original content which are marked with [OC]. Prototyperspective (talk) 10:10, 4 January 2026 (UTC)Reply

Getting Redditors to agree with having their photos on Commons is easy. Getting them through VRT is the hard part--Trade (talk) 14:56, 15 December 2025 (UTC)Reply

That’s kind of the key here: they just have to comment right there under their own posts, in response to the standard comment about licensing, “I release this post under the CC BY SA 4.0 license”. There’s no VRT processing needed! Zanahary (talk) 16:20, 15 December 2025 (UTC)Reply
 Oppose, because of the automation thing. Automating requests makes for a lack of review in regard to questions about COM:SCOPE and FOP. I'd support the idea of creating a tool to post such licensing requests, as long as the requests are only individually triggered by human contributors. The actual implementation could e.g. be a browser addon or a local gadget/tool like "video2commons". Regards, Grand-Duc (talk) 02:00, 16 December 2025 (UTC)Reply
In the context of what I wrote above: there is no FOP for data graphics and virtually all above a certain threshold are educationally useful. Additionally, it would be good to take into account real-world-practice where a tiny fraction of files that can be deleted via DRs is worth having lots of files to begin with. Prototyperspective (talk) 02:25, 16 December 2025 (UTC)Reply
Yes to your last point. People are automatically given the option on Flickr, iNaturalist, and YouTube to opt in to licensing their uploads under a Wikimedia-compatible license. This automated solicitation for Reddit would be much the same. Not all uploads released freely are necessarily uploaded to Commons; it’s up to volunteers to consider scope. Zanahary (talk) 08:27, 16 December 2025 (UTC)Reply
I took a peek at graphics posted in that "Data is beautiful" subreddit. I sincerely doubt that any of them are suitable for Commons, as a data source statement is not given every time and you often barely have any context. That's a good example of "not educationally useful" (barring teaching how to abuse statistics to guide public opinion) and thus being out of scope. Such graphical datasets are handled by Wikipedia editions themselves (there's a MediaWiki extension for that IIRC), directly based upon any referenced data source.
@Zanahary: Flickr and Youtube made an executive decision to offer the Creative Commons licenses by themselves. I don't know whether the companies were lobbied by the WMF to do so, it could very well have been a thing of gaining a good standing, independent of the Wikimedia movement. If a thing like that, changing a company policy, is your aim, it's far beyond the scope of a Commons VP proposal. Regards, Grand-Duc (talk) 10:20, 16 December 2025 (UTC)Reply
Take a look at r/WhatIsThisBug for an example of the sort of subreddit that I had in mind.
The Flickr/YouTube/Soundcloud/iNat examples are meant as a parallel to show that just prompting uploaders on other websites to release their work under a WM-compatible license doesn't necessarily lead to a flood of inappropriate material on Commons, as it's still up to the Commons community to judge media works for scope and copyright. Zanahary (talk) 14:17, 16 December 2025 (UTC)Reply
The subreddit "What is this bug" looks and feels, sorry, like crap. When I accessed it, I saw only imagery where the image quality is too low (lack of sharpness) for any sensible import here. The images also apparently lack any sound description and localisation, data needed nowadays for scientific motifs. Furthermore, such images are more often than not shot in densely populated areas. This entails that Commons is likely to already have coverage of those motifs in decent quality.
That subreddit may serve for another purpose: reducing the population in Category:Unidentified insects by country et al....
I'm actually tempted to upgrade my oppose to a strong one, this Reddit import idea really looks like a stillbirth for me. Regards, Grand-Duc (talk) 18:03, 16 December 2025 (UTC)Reply
I really don’t get the opposition. Low quality images can be found everywhere, including on websites that we draw a huge volume of media works from and in the rolls of own-work uploads people make directly to Commons. Casting a wide net to encourage copyright holders to release with a license compatible with Commons can, in my view, only make it easier to get quality works for Commons, and doesn’t risk tricking Commons volunteers into shoving junk onto the project. What negative outcome do you imagine from this project? Zanahary (talk) 21:39, 16 December 2025 (UTC)Reply
I thought you're proposing also a bot that uploads files where the permission was given instead of leaving it to volunteers. Whether the permission request in the comments is just for enabling upload or makes the file either automatically or most-likely uploaded makes a great difference to the OC posters on reddit. I don't think many would give permission just to enable a hypothetical Commons upload, at most it would be granted if upload is basically a given.
@Grand-Duc: Most posts there have the data sources specified in the comments, at least the top-voted ones. Instead of the link above, it's better to look at this. There are lots of huge gaps in data graphics and many of these would be useful. So I think your claim that they don't have the datasources is false at least for a reasonable cut-off votes threshold where a bot requests permission. And there's of course also other subreddits where datasources don't matter because it's just a photo. Prototyperspective (talk) 14:06, 16 December 2025 (UTC)Reply
No, I think the public nature of the disclosure would lead to trolling if stuff got automatically uploaded on release. I don't see why an otherwise interested redditor would turn down the chance to release their work when told "In order for images to be usable on Wikimedia projects, including Wikipedia, they need to be licensed under XYZ conditions ... do you agree to release this post under a Wikimedia-compatible license, so that it can be used on Wikimedia projects?" Zanahary (talk) 14:19, 16 December 2025 (UTC)Reply
Ok well, that's also an approach. I don't see how trolling is a concern though (there is a votes threshold) and I don't know how you envision the review queue to work. An idea would be for the bot to update a report page with a table where one column has the external link to the permission-granted post and then users manually mark rows as done. Prototyperspective (talk) 15:13, 16 December 2025 (UTC)Reply
 Weak oppose. Conceptually this isn't a million miles away from User:Red panda bot, which drags in tens of thousands of Flickr photos on the grounds that they were promoted to "Flickr Explore". The bot has a step of human curation, but it's not very scrupulous: it sometimes uploads images with FOP, packaging or AI issues.
"They could agree right there in the comments. Then the post could be added to a queue/pool where volunteers could verify the license and evaluate the usefulness of the image before uploading to Commons." This sequence does sound like it's potentially going to waste the time of Reddit users, if a bot sometimes asks them to consider and confirm a licence and then ... we decide not to import their image because nobody here wants it, or there's a copyright problem, or we already have better images of the same thing, or we do want it but it took us six months to upload it. It'd undermine the bot's purpose if it started to be ignored or resented in the subreddits that it patrolled.
Grand-Duc's suggestion of a tool to process individual human requests is a good one. Perhaps something like User:CommonsDelinker/commands where a Commons user can add a Reddit URL to a list, and a bot will go off and ask permission (and maybe even upload the file if it gets it) before pinging the requester with an update. Belbury (talk) 19:04, 16 December 2025 (UTC)Reply
 Oppose per Grand-Duc and Belbury -- Ooligan (talk) 00:34, 7 January 2026 (UTC)Reply

A page that shows all work on Commons by a given creator

[edit]
in a sortable table, like Paintings by Wassily Kandinsky (Sum of all paintings), but automatic and for every Wikidata-linked creator

The Creator template/tag is great, but there's no easy way to load up all (and only) works on Commons with a given creator tag. I propose a Works:Creator Name, WorksByCreator:Creator Name, CreatorWorks:Creator Name, or similar page that shows all (and only) works on Commons tagged with that creator. The Edinburgh Early Photography Archive (talk) 21:58, 24 December 2025 (UTC)Reply

@The Edinburgh Early Photography Archive: That's what categories are for, e.g. Category:Works by Samuel Alexander Walker. Sam Wilson 23:23, 24 December 2025 (UTC)Reply
That requires placing things in categories separately. It doesn't leverage the Creator tag. It won't show you anything tagged with the creator that hasn't also been placed in the category (if the category exists at all). The Edinburgh Early Photography Archive (talk) 06:51, 25 December 2025 (UTC)Reply
Or Special:Search/insource:"Creator:Creator Name", assuming you mean specifically things tagged with the "Creator" template for that person. E.g. Special:Search/insource:"Creator:Asahel Curtis". - Jmabel ! talk 00:36, 25 December 2025 (UTC)Reply
Oh yeah, good point. Another way could be 'what links here', e.g. Special:WhatLinksHere/Creator:Samuel_Alexander_Walker. Sam Wilson 03:26, 25 December 2025 (UTC)Reply
Doing a manual search is not elegant or user-friendly, and again does not leverage the Creator tag properly. Part of the point of the Creator tag must surely be to tie together separate works on Commons with the same creator, in a straightforward and easily-viewable way, not using a complicated search function. Even if it must depend on a complicated search function, there should be a link to it on the Creator template. I should be able to look at a Creator section and easily open up "Other works by this creator". The Edinburgh Early Photography Archive (talk) 06:53, 25 December 2025 (UTC)Reply
hastemplate is a neater search method: Special:Search/hastemplate:"Creator:Asahel Curtis".
@The Edinburgh Early Photography Archive has a good point, that the creator template should include this link. RoyZuo (talk) 14:56, 29 December 2025 (UTC)Reply
@The Edinburgh Early Photography Archive: Hi, All files by a given creator should be in the category for this creator. Please see Category:Works by Wassily Kandinsky for a prolific artist with hundreds of files ordered by style of works, and then by date, name, museum, source, subject, genre, etc. That's the main reason for the categories. An unsorted list of works would not be useful, but there is Paintings by Wassily Kandinsky for a list of all works with a Wikidata item. Yann (talk) 09:42, 25 December 2025 (UTC)Reply
This again depends on people using the categories properly (which may not even exist). It might be a bit overwhelming for very prolific creators, but it would be very useful for less prolific ones. And it could be sorted in various ways, for example by title. It could be presented in a table like Paintings by Wassily Kandinsky. That page isn't overwhelming, and it's generated/updated automatically by a bot anyway, so why not an automatic page for any Creator? The Edinburgh Early Photography Archive (talk) 09:49, 25 December 2025 (UTC)Reply
Your assumption with new users does not make sense. Creating a creator page is much more complex than creating a category. Creator pages are an outdated way to store information about creators anyway. This is now done through Wikidata and these pages are not needed anymore. GPSLeo (talk) 10:41, 25 December 2025 (UTC)Reply
I just meant that in some cases a Creator page already exists but a "Works by" category does not. I was not aware that Creator pages were outdated. When I look at Creator:Samuel Alexander Walker it looks like it's generated entirely from Wikidata? The Edinburgh Early Photography Archive (talk) 10:44, 25 December 2025 (UTC)Reply
Actually, having a category is required when creating a Creator template. If the category doesn't exist, a warning is given. Yann (talk) 12:07, 25 December 2025 (UTC)Reply
My apologies! I was not aware of this. Thank you to everyone for all of the very useful comments. I suppose my question is still, if there is a very useful, Wikidata-based, bot-generated page like Paintings by Wassily Kandinsky, with all works in a table, easily viewable and sortable in many different ways (unlike "Works by" category pages), then why isn't there the same exact (but automatically generated) page, with a nice big sortable table, for any Wikidata-linked creator? I feel that there should be. That's my proposal (taking into account all of the previous comments). The Edinburgh Early Photography Archive (talk) 12:13, 25 December 2025 (UTC)Reply
Please see related discussion in the thread above at Commons:Village pump/Proposals#Are "Sum of all paintings" project galleries welcome on wiki commons?. Thanks. Tvpuppy (talk) 14:24, 25 December 2025 (UTC)Reply
Thank you, I see how that discussion is basically what I'm asking about. There seems to be support for the Sum of All Paintings tables in general. In response to that discussion, I would say (1) Why only paintings and not all works? (2) Why bot-generated and not automatic? (3) Why only some selected creators? Who decides which creators deserve a convenient sortable table of all of their works? Why Wassily Kandinsky but not Hilma af Klint? It should be universal for any creator. "Works:Creator Name" could load up a convenient sortable table of all works for any creator, without a human deciding if they or their type of work "qualify". The Edinburgh Early Photography Archive (talk) 14:32, 25 December 2025 (UTC)Reply
Why only paintings and not all works? Because that is the project that a group of volunteers took on. If you think their scope is too narrow, and you can come in with some resources, they might be open to widening it. If you can't come in with resources, I for one do not recommend approaching them with "why aren't you doing more?"
Why bot-generated and not automatic? Because volunteers can write bots, but cannot modify core wiki code. If you or your organization would like to give a grant to the Wikimedia Foundation to put something like this in the core code, it might be worth discussing. But in terms of funding from WMF, Commons has been something of a red-headed stepchild, and if funds were to become available for a feature of the community's choosing, I cannot imagine this being among the top dozen. See meta:Community Wishlist/Wishes; sort by "votes".
Who decides… The volunteers working on this. I imagine they'd be open to almost any artist being included; someone would have to enter their works into Wikidata (either by hand or by an automated intake of a database compatible with CC-0); also, for Commons at least, there is not much point to doing this for artists who have few or no works yet in the public domain, since we cannot host images of those except insofar as we may have a few via free licenses or freedom of panorama. - Jmabel ! talk 19:46, 25 December 2025 (UTC)Reply
I didn’t realise you needed to donate money for something to be changed or improved. Thank you for explaining. The Edinburgh Early Photography Archive (talk) 19:57, 25 December 2025 (UTC)Reply
In terms of getting WMF resources for Commons (or any other project) that the Foundation hasn't chosen to allocate, yes, you probably do, and only they can modify the underlying software. I believe that, like most foundations, they would consider targeted grants to do a specific piece of work that they consider positive but not otherwise a top priority. Otherwise, someone coming in more or less from the outside and saying "spend your money differently" isn't going to have much influence.
But when I said "If you can't come in with resources" above, I mainly meant additional volunteers to do the work. The Foundation provides very little support for Commons, mainly server hosting and the use of mediawiki and wikibase software; there have at times been as many as perhaps half a dozen WMF FTEs (FTE => "full-time equivalent", the equivalent of having one dedicated full-time employee) devoted to Commons. At the moment, it would surprise me if the FTE equivalent is more than 2. We do a lot of things with bots that would, in theory, better be core functions because (as I said above) volunteers can write bots. - Jmabel ! talk 20:42, 25 December 2025 (UTC)Reply
Thank you so much for your patience and for explaining further. I hope my questions didn't come off as questioning the efforts of volunteers at Sum of all paintings, or the scope of their project. I think their work is fantastic and very useful. I only meant to propose that a convenient table-based view of all (Wikidata-linked) works on Commons by any given (Wikidata-linked) creator should be considered for future core functionality. I appreciate your explanation of why it isn't core functionality currently, and the challenges in having a change like this made, especially when there are other items that the community feel are more important. Thank you for highlighting possible ways forward as well. The Edinburgh Early Photography Archive (talk) 12:54, 26 December 2025 (UTC)Reply
You can create a page like "Sum of all paintings" for any artist. Paintings are more easily managed, copyright wise. We can copy them from any source if the paintings are in the public domain. That's not the case for other kinds of works, like statues, where the photographer gets a copyright. Yann (talk) 15:09, 26 December 2025 (UTC)Reply
Thank you, this is probably the best way forward for me! My interest is early photographers who are already fully in the public domain eg Creator:John_Moffat or Creator:Samuel Alexander Walker. So just to confirm, I could create a "Photographs by Samuel Alexander Walker" page with all of his Wikidata-tagged works in a sortable table (and possibly add a link to it on his Creator page somehow), and that would be allowed? The Edinburgh Early Photography Archive (talk) 17:54, 26 December 2025 (UTC)Reply
Seems reasonable to me, as long as we have images for a reasonable number of them. (I wouldn't want to see something like this on Commons for someone where we list, say 2000 works for which we have only 5 of them with images, and don't have any clear prospect of getting many more; not sure exactly where the cutoff would be.) - Jmabel ! talk 18:03, 26 December 2025 (UTC)Reply

Feedback requested: A free online tool for extracting images from video

[edit]

Hi everyone,

I noticed that some users struggle with extracting high-quality images from video files for upload. I have developed a free online tool called Video To JPG (https://videotojpg.com) that helps extract JPG frames from videos.

It is free to use and I believe it could be helpful for editors who want to create thumbnails or extract specific frames from video content.

I would appreciate any feedback from the community:

1. Is this tool useful for your workflow?

2. Are there any features I should add to make it more Commons-friendly (e.g., PNG output, specific metadata)?

Note: I am the developer of this tool and I am posting here to seek consensus before adding it to any help pages, to avoid conflict of interest.

Thank you! Charlesding2024 (talk) 05:14, 28 December 2025 (UTC)Reply

This would be really helpful! I would look at Commons:CropTool as a model for how to integrate this sort of tool into Commons. Basically, it's loaded as an option in the left bar when viewing a compatible file, and automatically uploads the new extraction with correct metadata and a link to the file from which it was sourced. Zanahary (talk) 15:36, 29 December 2025 (UTC)Reply
Also, if it's not open-source, I don't see any Wikimedia project adopting it. I don't know if that's a rule. Zanahary (talk) 15:37, 29 December 2025 (UTC)Reply
@Charlesding2024: You could add wikitext output that indicates the source and the timecode within the source, for easier review.   — 🇺🇦Jeff G. please ping or talk to me🇺🇦 20:03, 29 December 2025 (UTC)Reply
@Jeff G. Thanks for the wikitext suggestion! I would love to implement this to make the review process smoother.
Could you provide an example of the specific format or templates you would prefer? For instance, should I generate a full {{Information}} template, or just a specific tag like {{Extracted from}} with the timecode?
Having a sample of your desired output would help me ensure it fits the Commons workflow perfectly. Charlesding2024 (talk) 07:04, 30 December 2025 (UTC)Reply
@Charlesding2024: I suppose you could start with File:Elephants Dream s6 both.jpg, which is used as an illustration of the "Audiovisual works" section of COM:SCREENSHOT. I don't know of a standard method of indicating the timing other than "at 1m02s" (as distinguished from "at 1h02m"). {{Extracted from}} does not seem to fit your client-side focus.   — 🇺🇦Jeff G. please ping or talk to me🇺🇦 11:20, 30 December 2025 (UTC)Reply
@Jeff G. Thanks for the inspiration! Since my tool is designed for batch extraction (users often pick multiple frames at once), I realized a single text box wouldn't suffice.
I plan to implement a comprehensive solution covering three aspects to fit the Commons workflow:
  1. Smart Filenames: Extracted images will be automatically named with the timestamp (e.g., MyVideo_at_1m02s.jpg). This ensures the metadata is preserved in the file itself when uploading.
  2. Gallery View Copy: In the results grid, I will add a small 'Copy Source' button under each image, generating the specific text you suggested: Source: Extracted from [Filename] at [Time].
  3. Batch Metadata Export: For heavy users, I'll add an option to copy/export a text list of all source info for the current batch at once.
I can implement all of these features. Does this sound like a solid workflow for editors? If so, I'll get to work on it immediately! Charlesding2024 (talk) 16:25, 4 January 2026 (UTC)Reply
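The "Smart Filenames" idea in point 1 could be sketched like this in plain JavaScript. The function names are illustrative only, not the tool's actual API; the format follows the "1m02s" / "1h02m" convention mentioned earlier in the thread:

```javascript
// Hypothetical sketch: encode a frame's position in the video into its
// file name, e.g. "MyVideo_at_1m02s.jpg". Not videotojpg.com's real code.
function formatTimecode(totalSeconds) {
  const s = Math.floor(totalSeconds);
  const h = Math.floor(s / 3600);
  const m = Math.floor((s % 3600) / 60);
  const sec = s % 60;
  const pad = (n) => String(n).padStart(2, "0");
  // Include hours only when needed, so short clips read "1m02s",
  // not "0h01m02s".
  return h > 0 ? `${h}h${pad(m)}m${pad(sec)}s` : `${m}m${pad(sec)}s`;
}

function frameFilename(videoName, seconds) {
  const base = videoName.replace(/\.[^.]+$/, ""); // strip the extension
  return `${base}_at_${formatTimecode(seconds)}.jpg`;
}
```

For example, `frameFilename("MyVideo.mp4", 62)` would yield `MyVideo_at_1m02s.jpg`, keeping the source position in the name even if the file is later separated from its description.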
@Jeff G. I've just updated the tool based on your feedback! You can try the new features live on VideoToJPG.com.
Implementation Details:
  • Smart UI: I added a "Src" dropdown at the bottom of each thumbnail. Users can now choose between "Plain Text" (standard) or "Wikitext" (for linked files). I made this selectable to avoid forcing "red links" if the source video isn't on Commons.
  • Batch Support: The ZIP download now includes a separate source_info_wiki.txt alongside the standard metadata, making batch uploads easier.
A quick question on placement: Now that the tool is optimized for the Commons workflow, do you think it would be appropriate to list it on the Commons:Tools page?
If so, would you recommend placing it under the "Upload media" section or "Maintenance"? I'd value your advice on where it would be most discoverable for editors. Charlesding2024 (talk) 09:37, 5 January 2026 (UTC)Reply
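For what it's worth, the "Src" dropdown output described above could be sketched as follows (a hypothetical helper, not the site's actual code): the wikitext variant links the source file on Commons, while the plain-text variant avoids red links when the video is not hosted there.

```javascript
// Illustrative sketch of the plain-text vs. wikitext source line.
// The exact wording and function name are assumptions.
function sourceLine(videoName, timecode, asWikitext) {
  return asWikitext
    ? `Extracted from [[:File:${videoName}]] at ${timecode}`
    : `Extracted from ${videoName} at ${timecode}`;
}
```

The leading colon in `[[:File:...]]` makes the wikitext render as a link to the file page rather than embedding the video itself.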
Aside from the open source requirement that Zanahary has already noted, a PNG lossless output would be sweet, I've used ffmpeg for that before, maybe you could look into integrating ffmpeg. JPEG is a visually lossy format: you always get some artifacts / loss of detail when converting an image to JPEG. It's moon (talk) 20:51, 29 December 2025 (UTC)Reply
@It's moon Thank you for the detailed feedback regarding image quality and artifacts.
You are absolutely right about JPEG. I am happy to let you know that I already support lossless PNG output. You can try the specific feature here: https://videotojpg.com/video-to-png.
Regarding the open-source mention: I plan to open-source the core processing logic in the future to facilitate community review.
Also, similar to your ffmpeg suggestion, my tool runs entirely client-side (in the browser). This means it achieves local processing speed and privacy without needing to upload files to a server. Charlesding2024 (talk) 07:25, 30 December 2025 (UTC)Reply
That is really cool! I will definitely use your tool. Another area where you could improve it further, if you are interested in more feedback, would be to support video URLs as input (and even YouTube URLs if you wanted to take it to the next level). A lot of tools on MediaWiki allow users to provide a file URL instead of needing to download something to reupload it. It is really useful to prevent junk from piling up on users' hard drives. It's moon (talk) 08:10, 30 December 2025 (UTC)Reply
@It's moon I'm glad to hear that you find the tool useful!
Regarding the URL input (and YouTube) support: That is indeed a very convenient feature to save disk space.
However, since my tool runs entirely client-side, fetching videos from external URLs (especially YouTube) is technically challenging due to browser CORS (Cross-Origin Resource Sharing) restrictions. It usually requires a backend server to proxy the data, which I am currently avoiding to keep the tool private and cost-free.
That said, I will investigate if there are any client-side solutions to load videos from CORS-enabled sources (like Wikimedia Commons files) directly! Charlesding2024 (talk) 08:33, 30 December 2025 (UTC)Reply
@It's moon Update: I've added support for direct URL loading in v1.4.2. Since this is client-side, it requires the source to support CORS. I also added a fallback dialog to guide users when CORS blocks the request. Thanks again for the idea! Charlesding2024 (talk) 05:38, 3 January 2026 (UTC)Reply
Nice work! It's moon (talk) 05:45, 3 January 2026 (UTC)Reply
I don't see what the advantages over just creating screen stills in your local video player are. Why would one use this website instead of just taking stills in a video player? In MPV and VLC, I think one just presses ctrl+s to save the full-resolution still. Prototyperspective (talk) 10:17, 4 January 2026 (UTC)Reply
@Prototyperspective: You raise a valid point regarding local players like VLC.
However, the key feature that distinguishes this tool is its built-in blur detection.
When extracting frames from a video (especially one with motion), it is often difficult to tell by eye which specific frame is the sharpest. My tool analyzes the frames and provides a sharpness score, helping editors objectively identify and save the highest-quality frame possible.
This is something standard video players don't usually offer, and it's specifically designed to help upload better-quality images to Commons. Charlesding2024 (talk) 11:25, 4 January 2026 (UTC)Reply
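(For readers curious what a "sharpness score" can mean in practice: VideoToJPG's actual scoring method isn't published, but a common metric for this kind of blur detection is the variance of the Laplacian — sharper frames have stronger edge responses, hence higher variance. A minimal pure-Python sketch, purely illustrative and not the tool's implementation:)

```python
# Illustrative sketch of a common sharpness metric: variance of the Laplacian.
# NOTE: this is an assumed/typical approach, NOT the scoring VideoToJPG uses.

def laplacian_variance(gray, width, height):
    """Score a grayscale frame given as a row-major list of 0-255 values.

    Higher variance of the Laplacian response = more edges = sharper frame.
    """
    responses = []
    for y in range(1, height - 1):
        for x in range(1, width - 1):
            i = y * width + x
            # 4-neighbour discrete Laplacian kernel: [[0,1,0],[1,-4,1],[0,1,0]]
            lap = (gray[i - 1] + gray[i + 1] +
                   gray[i - width] + gray[i + width] - 4 * gray[i])
            responses.append(lap)
    mean = sum(responses) / len(responses)
    return sum((r - mean) ** 2 for r in responses) / len(responses)

# A "sharp" frame (checkerboard, strong edges) vs. a flat, featureless frame.
sharp = [255 if (x + y) % 2 == 0 else 0 for y in range(8) for x in range(8)]
flat = [128] * 64
print(laplacian_variance(sharp, 8, 8) > laplacian_variance(flat, 8, 8))  # True
```

Ranking each extracted frame by such a score and surfacing the top results is all that "objectively identify the sharpest frame" requires; no reference image is needed.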

Should category:hentai (very NSFW), category:fan service (kind of NSFW) and to a lesser extent category:ecchi (NSFW) be deprecated in favor of only using more objective categories?

[edit]

I’ve had an issue with these categories for a while, even though I’m pretty sure I created category:ecchi a long time ago. They all lack an agreed-upon definition and are redundant with each other. Hentai is a western neologism for anime-styled porn; that meaning doesn’t exist in Japan (beyond similarly informal terms like “H-manga”), and even in the west it’s a pretty informal category that means different things to different people. Fan service isn’t even necessarily sexy; it’s just stuff gratuitously added to please the audience, which could be anything. Ecchi is at least sort of defined as a genre basically akin to “sex comedy”, but I’m not sure you could objectively determine whether an individual image is “ecchi”, given it also just means “sexy in a fun, playful, not super explicit way”.
Because of these issues I propose that Category:Hentai in anime and manga (nsfw) be deleted; that category:Hentai be restricted to files/cats about western hentai providers like Fakku and maybe category:Ahegao (nsfw-ish); that category:fan service be deleted entirely; and that category:ecchi be redirected to category:ecchi anime and manga and only be used as a category for files/categories about works in the genre. We also have categories like Category:Nude or partially nude people in anime and manga (nsfw), Category:People having sex in anime and manga (nsfw), Category:Swimwear in anime and manga (kind of nsfw), and Category:Lingerie in anime and manga (kind of nsfw) that cover the same scope without the overlap and subjectivity issues. Dronebogus (talk) 22:21, 5 January 2026 (UTC)Reply

Have you considered bringing this up at COM:CFI COM:CFD? I think it belongs there. If you want visibility, you can start a CFD discussion and then (maybe after a while) post here a pointer. whym (talk) 01:20, 11 January 2026 (UTC)Reply
I don’t think COM:CFI is the right link Dronebogus (talk) 01:51, 11 January 2026 (UTC)Reply
Fixed. whym (talk) 02:49, 11 January 2026 (UTC)Reply
I considered that, but there’s so many categories that would be affected by this in different but overlapping ways. Plus CFD has a backlog that stretches to the moon and back Dronebogus (talk) 04:25, 11 January 2026 (UTC)Reply
My experience has been that CfD is mostly effective as a way of gaining consensus for simple proposed actions like renaming, merging, or deleting categories. It's less effective in more complex situations, or where the nominator isn't sure of what to do. Omphalographer (talk) 22:07, 11 January 2026 (UTC)Reply

Adding a thing in block notices

[edit]

Hello everyone, I would like the community to discuss a suggestion made by @0x0a at COM:ANU. For my part, I happen to agree with them. I quote: some new users may not be aware of our blocking policy and our block message box doesn't explicitly state that creating a new account during the block period is not allowed, which might lead them into an endless cycle of block and block evasion. I found it necessary to clearly state this rule in the block message box. I would say that we can adjust the block notices to state that the user shouldn't create a new account, as that will only lead to further blocks and bans for socking. Shaan SenguptaTalk 14:08, 7 January 2026 (UTC)Reply

I second this motion. 0x0a (talk) 14:35, 7 January 2026 (UTC)Reply
Votes/Comments
 Support It removes plausible deniability. JayCubby (talk) 15:47, 9 January 2026 (UTC)Reply
 Support For anyone with common sense, sure. But a lot of users who get blocked seem to lack common sense - apparently it isn't as common as the phrase would imply? - so we might as well spell it out. Omphalographer (talk) 08:26, 10 January 2026 (UTC)Reply
As to why it's not obvious, I wonder if the root cause might be an assumption that most account suspensions are automated (which can be true for other platforms that new users are more familiar with). If so, we might want to let them know that humans (rather than a big, faceless and glitchy automated system) block accounts here, and that they are expected to engage with those humans when they want to be unblocked. whym (talk) 01:10, 11 January 2026 (UTC)Reply
@Whym, would you like to suggest a draft? Or maybe @Tvpuppy, you did good work with the DR notice. Anyone else is also invited, since this has been supported so far. Shaan SenguptaTalk 05:09, 11 January 2026 (UTC)Reply

Text renovation workbench

[edit]

Current text in {{Blocked}}:

You have been blocked from editing Commons for a duration of TIME for the following reason: REASON.

If you wish to make useful contributions, you may do so after the block expires. If you believe this block is unjustified, you may add UNBLOCK REQUEST below this message explaining clearly why you should be unblocked. See also the block log. For more information, see Appealing a block.

I suggest the following additions (in italics here):

You have been blocked from editing Commons for a duration of TIME for the following reason: REASON. A human reviewed your contributions and found them against Commons' rules.

If you wish to make useful contributions, you may do so after the block expires. Creating a new account while this block is in force is in itself a blockable offense and can lead to a permanent exclusion! Do not try to game the system. If you believe this block is unjustified, you may add UNBLOCK REQUEST below this message explaining clearly why you should be unblocked. See also the block log. For more information, see Appealing a block.

— Preceding unsigned comment added by Grand-Duc (talk • contribs) 03:38, 12 January 2026 (UTC)Reply

Alternative suggestions for the italicized passages:
An administrator has reviewed your contributions and found them to be against Commons' rules.
Creating a new account while this block is in force is itself a blockable offense and may lead to permanent exclusion from Commons.
  • However, neither that nor the wording above works for an indef-block, where we need something more like Creating a new account while this block is in force is itself a blockable offense and makes it very unlikely that your block will ever be rescinded.
  • And when we block accounts for being sockpuppets, even that is not on the mark; in that case we either can omit this or need something clarifying that this sockpuppet account will almost certainly never be unblocked.
Jmabel ! talk 05:53, 12 January 2026 (UTC)Reply
The point made by Whym above at Revision #1145790913, with I wonder if the root cause might be an assumption that most account suspensions are automated (which can be true for other platforms that new users are more familiar with), stirred me. I think it will be worthwhile to underline that humans do the blocking; it's not necessarily clear that something called an administrator is actually human, going by experiences in social network or online game environments.
Indeed, I did not think about sockpuppets. But Jmabel's suggestion is, in my opinion, a sound starting point to work on or adapt outright. About socks: either a boolean switch "sock Y/N" would be needed, or isn't there {{Sockpuppet}} available already? Grand-Duc (talk) 07:02, 12 January 2026 (UTC)Reply
{{Sockpuppet}} goes on the user page, not the user talk page, and is not addressed to the user themself but to admins and others acting in a quasi-administrative capacity. - Jmabel ! talk 20:42, 12 January 2026 (UTC)Reply

Change expectations of (and criteria for becoming) a license reviewer

[edit]
  1. Should we change the criteria for becoming a license reviewer by striking "be familiar with restrictions that may apply, such as freedom of panorama" and replacing it with "show basic competency in copyright restrictions, such as by having a history of importing files which are not copyright violations or tagging copyright violations for deletion"?
  2. And should we change the procedure of license review to emphasize that checking whether a user is really the copyright holder is a step license reviewers generally only need to take when they see signs of a possible copyright violation, and is not always necessary for every image?

Rationale: We currently have a massive backlog of items needing a license review. At the present moment, it is over 80,000 items in the surface category alone, with tens of thousands more in the subcategories, and growing. We currently have very stringent rules requiring license reviewers to essentially certify items as free of copyright violations. This has led to license reviews of files taking much longer (requiring extremely thorough investigations), and led to fewer people being trusted with the right.

This all neglects the original purpose of the right, which was to create a record showing that an item was uploaded at the specific location under the specified license, in case the item is later deleted. The purpose was never to certify an item as copyright-violation free. This means that many items in our backlogs may end up needing to be deleted if the item is deleted at the external website, while at the same time the size of the backlog means that copyright violations that could be caught are ignored anyway. Keep in mind that we created license reviewer bots that handle this task on certain websites, with nearly no ability to check for copyright violations in the same fashion, and they have been granted this user right (so it isn't as though the license review confirmation ever truly confirmed an item was copyright-violation free).

Original discussion here. Aplucas0703 (talk) 17:36, 14 January 2026 (UTC)Reply

tagging copyright violations => "accurately tagging copyright violations"? - Jmabel ! talk 18:56, 14 January 2026 (UTC)Reply
 Support Completely agree with Aplucas0703 (no objections against adding "accurately"). Maybe the preceding discussion should be mentioned? Gestumblindi (talk) 19:04, 14 January 2026 (UTC)Reply
 Partial support - the first point looks useful. And it doesn't feel like a change, only like an alternative wording which actually better describes the needed prerequisites for the job. Do we need to go through a full RfC for that wording change? Oppose the rationale, second paragraph. I don't want to see humans restricting themselves to bot-like tasks. So, I do not get what you want / propose with your second point. What would be the exact change you're advocating for, Aplucas0703? Regards, Grand-Duc (talk) 01:18, 15 January 2026 (UTC)Reply
The purpose of point 2 is to speed up license reviews by clarifying that a license review is not intended to be an extensive check for a copyright violation, but rather a check that a file was uploaded under the stated license. License reviewers may choose to do a deeper check if they see clear red flags of copyright violations. They are not expected to catch every possible copyright violation or to certify an item as copyright-free, as others down the road are expected to be able to find such violations.

The reason for this leniency is that the current backlog is so large that many copyright violations aren't being checked anyway, in addition to files not getting even a basic license review (which could be helpful if a discussion about a file arises and the file was deleted at the source in the meantime). We both want what is best for preserving the integrity of copyright on Commons, so I actually think this is better overall for copyright in that regard, since we can't expect one person to do it perfectly. This places more faith in the community as a whole to find copyright violations and use the information gathered in the basic license review to help them decide. Aplucas0703 (talk) 02:08, 15 January 2026 (UTC)Reply
If we are going to limit the license review task to simply verifying that the source claimed to offer the license, we probably will want to adjust Template:LicenseReview to allow a status that effectively means something like "I confirmed that the site says it offers the license, but someone with copyright expertise ought to have a closer look because it feels a little fishy." - Jmabel ! talk 06:33, 15 January 2026 (UTC)Reply

URAA DRs by country

[edit]

I've opened this discussion about the creation of a new categorization of URAA requests by country of origin. Friniate (talk) 17:38, 18 January 2026 (UTC)Reply

no need to ask, you can just do it. Bedivere (talk) 17:44, 18 January 2026 (UTC)Reply