moderation: user should open issue if needed #4
base: main
Conversation
That's not a bad idea as long as the blocked user gets a notification about the issue. My opinion is that a moderator log isn't as useful as a new place to move the discussion to.
I did see on https://docs.github.com/en/communities/maintaining-your-safety-on-github/blocking-a-user-from-your-organization that it says:
IIRC you said that option didn't work in testing (on another repo)? Perhaps we should test it again? If that would work and we could include a link to the issue somehow in their message, then it could be quite nice. Feels like the ordering might be back-to-front though; i.e. the issue will only be created after the ban, by which point we've missed the moment we needed the issue link. This is making me wonder whether a tool of the style suggested by harding might be worthwhile. That way the tool could create the issue first, then ban and include a link to the issue in the ban automagically?
Yeah, the block action itself does optionally send a notification, but it only includes a super vague explanation which the moderator cannot edit. It also might have to go through whatever notification options or filters the user has. That's why I think a comment in the thread the user is already looking at might be the best way to redirect the "what about censorship" conversation.
Ah I see. It's a shame to replace a spam comment with a generic (spam?) comment in a thread, but it seems like there's no real way around that then.
Yes, I agree as well, but if the user is blocked then the moderator comment is ideally the end.
I think the exact contrary: each moderation action should open an issue, ensure it's public, and be done manually. Unless it's automated ChatGPT-like spam, I would recommend that the moderators take the time to write an issue with context, personalized elements, and an explanation of the rationale for why the decision was taken in the particular instance. There are reasons why judges in courts of justice among liberal democracies have done this for centuries: it not only avoids mistakes, it additionally ensures the people taking the decision are better able to understand it. Doing it in public also reduces the odds of the moderation actions being characterized as "blackmail" by external observers (whether they are courts of justice, bitcoin journalists, or bitcoin opinion on reddit) and of exposing yourself. Hiding behind a private IRC-like bot might work once, maybe twice, but it's easy to break that kind of veil if you know how to hit.
@ariard I think we are largely in agreement with you, and also with the current moderation transparency guidelines, that an issue will be opened for banning a user for anything other than "obvious spam". This will be both manual and particular to the specific incident. From the guidelines:
So far, I am not aware of any non-"obvious spam" bans being handed out, so in some ways it remains to be seen how this plays out. In the case that you literally mean:
... then I think on balance I disagree. I don't believe that requiring a new issue be opened for every hidden or deleted comment will be helpful or productive for anyone involved. There are already 3 clauses in the guidelines which cover actions to be taken in the case of locking a thread and hiding/deleting comments:
Following these guidelines means there will be a transparent record available of moderation considerations/rationale, but for (hidden/deleted) comments it will be inline with the OP, whereas non-spam bans will be recorded in a manually-created issue over here. To me, this feels like a good balance: transparency is maintained in all events, it should not be overly burdensome on moderators to moderate, and contributors/readers should be able to find information on why comments were hidden (inline with the thread they are reading) and why someone may have received a temporary ban (in a linked issue in this repository). There are no plans currently to use anything like the kinds of helper-bot being discussed above to perform actions anonymously, which I hope addresses the concerns in your final paragraph.
@willcl-ark Thanks for your answer.
I think this is where I disagree with you too. I strongly recommend that each moderation action should open an issue and ensure it is public. With the exception of ChatGPT or bot-like spam, each moderation action should be done in public, including for hidden / deleted comments. Otherwise, in my view, moderators become arbitrators of what kind of technical information is valid to be said in the context of the bitcoin core repository: a distortion of their role, which is rather to ensure respect, civility, and spirited exchange in the repository (according to the moderation guidelines, and at least the reason some contributors ACKed the principle in the proposal issue). From there, you're only one step from hiding / deleting purely technical comments that merely represent a minority view among the contributors, with no appeal of such actions available to the authors of those comments. I have many times seen maintainers hide / delete purely technical comments, not because those comments constituted an ad hominem, but more likely because they were contrary to their own technical positions (or they were in an apparent conflict of interest). Constraining the moderators to argue in public why, in each case, comments are hidden or deleted is a form of protection to ensure moderators are not letting their own technical opinions unduly influence their moderation decisions, and to make it more apparent if they have interests at stake (as inconsistent decisions driven by personal interests are more likely to stand out). As such, both for reasons of respect for minority technical views and of conflict-of-interest prevention, I can only recommend that the moderation guidelines be modified to withdraw the following formulation: "Or, to avoid disrupting the flow of the conversation, it could be sent in a private message to the poster or added as a postscript to the original comment".
In addition, with the current formulation, moderators have their skin at risk of being legally sued for "blackmail", as there can be situations lacking procedural form and publicity in their actions. If any litigation over the moderation rules themselves happens in the future, it is likely a judge will recognize the group of bitcoin contributors as a de facto organization and require that "disciplinary" actions follow formalized proceedings (which, e.g., private messages on whatever text app are not).
This is an open-source janitorial role; if someone thinks moderation is a burden far too heavy for their shoulders, they can always resign or ask for more contributors to be appointed moderators. The worst outcome would be for a moderator, under time pressure, to neglect justifying why a moderation decision was taken and by this behavior slowly destroy the project culture. Once again, publicity of all moderation decisions should be the norm, and I can only invite the moderators to read Tom Bingham's excellent "The Rule of Law" (it covers, e.g., the use of extra-judicial torture at Guantanamo and across centuries of history, and why respect for the rule of law, i.e. public proceedings and formalization, matters in society).
Thanks Antoine for your clear reply, it helps me understand your concerns better. I would like to try and summarise again where I think we both stand; please correct me if I'm wrong:

Obvious spam
I think that we both, along with the current moderation policy, agree that an issue should not be required to be opened for these cases, and IMO that's a good thing!

Borderline spam
I think this is where you and the current moderation policy disagree? Currently the procedure and transparency pertaining to these "borderline spam" comments can be found at https://github.com/bitcoin-core/meta/blob/main/MODERATION-GUIDELINES.md#moderation-procedure and https://github.com/bitcoin-core/meta/blob/main/MODERATION-GUIDELINES.md#moderation-transparency. To summarize, current moderator guidance does not require that an issue be opened in these cases, calling instead for moderators to (optionally) leave a comment on why another comment has been hidden and/or deleted in the case of a "mixed" comment (containing both useful and inflammatory information), or, for a comment which does not add any value at all, taking no action as it should be "self explanatory" why it was moderated.

Banning accounts

(edit to follow: I pressed ctrl+enter by mistake and posted too early 😢)
Discussion

It seems to me (and please, correct me if I have misunderstood) that you are advocating for some of the "borderline spam" comments, which can currently be hidden/deleted without requiring an issue to be opened, to require an issue to be opened for transparency and accountability reasons. If my understanding now is correct, then it perhaps does not seem as overly burdensome to moderators as I first suspected; however, I have not yet had any experience with any "overheating" issues... In my previous comment, I was worried that you were advocating for a literal "issue per action" (including spam etc.), which would be disproportionate in my opinion.

To think about the implementation of your proposal more, I wonder if, rather than requiring a new issue per ("borderline spam") moderation action, we could group moderation actions by (bitcoin/bitcoin) pr/issue number, opening a single issue in this repo for each bitcoin issue, and simply adding a new comment per moderation action? Having a single issue opened here per bitcoin/bitcoin issue/pr seems like it would make things easier to find for folks that want to see the "full discussion" including moderations, as all moderations would be grouped per topic?

One other thought I have on this: if folks do want "100% transparency" with regards to canonically "what was posted" (before it may have been moderated), I'd recommend subscribing to the bitcoin/bitcoin repo, in which case GitHub will send you an email with the content of every new comment (and the rest!). Of course, this does not address the concern over what a web-page viewer may see following moderations, but it does mean there is a record of exactly what may have been removed (it doesn't capture edits to comments), which could be used to hold moderators accountable if necessary.
I fell foul of this system myself in posting this very message, by hitting ctrl+enter too early. Let me know what you think of the above, and I do appreciate your concern for transparency in this process, along with the safety of the moderators themselves!
Thanks Will for the clear reply too.

About "obvious spam", I think we're mostly in agreement, though it can be good to

About "borderline spam", this is where I think we're more in disagreement. First, I think the definition you're proposing, which is drawing from the moderation

Same with "may simply not add any value beyond that which was already discussed in

So in my opinion, for borderline spam there should be an issue opened by default

About "banning accounts", I don't think we should do that, as one could mount an

On the implementation of the proposal, I think yes, this can be reasonable to

For the automatic copy-pasting of everything which has been opened on the github

About the safety of moderators - I certainly don't wish to have moderators exposed to
I think asking moderators to open an issue with a title and description every time they take an action may be overkill. I would like to limit the amount of off-topic comments in the software repo, including meta comments by moderators, but I think this is maybe the simplest solution: if a moderator blocks a user (even if it's just for 24 hours), that moderator should leave a comment pointing that user here, where they are not blocked (this is a different repo in a different organization). Then at least the user and moderator can discuss anything if necessary without bothering the software contributors or maintainers.