Improving quality of the assistant responses with collective editing #3444
autosquash started this conversation in Ideas
Replies: 1 comment
-
I would like to add that the use of autocorrectors such as LanguageTool (which supports multiple languages; I use it because my mother tongue is Spanish) or Grammarly (English only) should be mandatory, or at least strongly recommended. I always use one when entering any kind of text, to avoid grammatical and spelling mistakes. Both autocorrectors can be integrated as browser add-ons and correctly detect the text fields in Open Assistant. In my opinion, they could be suggested in the following places: the home page, the data entry page, and Discord.
-
I would like to propose a feature that I think would be quite useful for developing better responses from the assistant. In the absence of a better name, I have called it "collective editing." The idea is to let users construct responses based on existing ones by editing them. These edits would not automatically be considered "better"; they would be added to the pool of possible responses and ranked by other users. This would mean using collective intelligence to arrive at the best responses.
Examples of editing:
Correcting a wrong fact in an otherwise good response.
Adding information to a response.
Correcting spelling, grammar, or style mistakes.
Adding a source or reference.
Some advantages of this approach:
The ability for each user to contribute their strengths, such as knowledge of a topic or writing ability, to complement one another instead of competing.
It would also make it possible to respond provisionally to a request with incomplete data or a basic outline, and have other users with more knowledge complete the response. This would free up time for more expert users and create more opportunities for collaboration for those who are less knowledgeable on a topic.
In practice, it seems that many of the current high-quality responses are created by ChatGPT, directly or edited by the user who submits them. Enabling subsequent editing would not only improve these responses and correct specific errors but also give them a more "human" touch, avoiding the more obvious markers of ChatGPT.
Implementation could include the option to edit an existing response and then submit it whenever the user is acting as an assistant, labeling a particular response, or establishing the order between several responses. In each of these cases, there would be an option to "improve" the response.
Possible downsides to consider:
Possible excessive proliferation of proposals with minimal variations between them. This could be reduced by quickly discarding previous versions once an "improved edit" receives superior ratings from a small number of users (for example, 3) and no inferior ratings. In that case, the new version would be considered better, and the old one would be removed from the system (unless it has some utility as a relative counterexample).
Technical difficulties in implementing the proposal: they are likely to be minor compared to the benefits provided.
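To make the retirement rule above concrete, here is a minimal sketch in Python. All names (`ResponseVersion`, `record_comparison`, `SUPERIOR_THRESHOLD`) are hypothetical and not part of the Open Assistant codebase; this only illustrates the proposed "3 superior ratings and no inferior ratings" condition, not how the real ranking pipeline would store votes.

```python
from dataclasses import dataclass

# Hypothetical threshold from the proposal: how many users must rank
# the edit above its parent before the parent is retired.
SUPERIOR_THRESHOLD = 3

@dataclass
class ResponseVersion:
    text: str
    superior_votes: int = 0   # users who ranked this edit above its parent
    inferior_votes: int = 0   # users who ranked this edit below its parent
    retired: bool = False

def record_comparison(edit: ResponseVersion, parent: ResponseVersion,
                      edit_preferred: bool) -> None:
    """Record one user's ranking of an edit against the response it was
    derived from, retiring the parent once the edit is clearly preferred."""
    if edit_preferred:
        edit.superior_votes += 1
    else:
        edit.inferior_votes += 1
    # Retirement rule: enough superior ratings and not a single inferior one.
    if edit.superior_votes >= SUPERIOR_THRESHOLD and edit.inferior_votes == 0:
        parent.retired = True
```

For example, after three users prefer the edit and none prefers the original, the original would be marked as retired; a single inferior rating keeps both versions in the pool for further comparison.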