Making the translations fetched, granular and consistent #1063
Replies: 3 comments 1 reply
-
There are a few questions that I think we need to answer before getting into the technical implementation. What are the goals regarding localization? Examples could be: …
What are our requirements for translations? Examples of possible requirements could be: …
Legacy: https://beta.shapeshift.com has an implementation of localization that we can look at as an example. I used to run a browser with …
-
gm, we are currently looking into FOSS CL (continuous localization) solutions. One in particular we are looking at is Mozilla's Pontoon platform: https://shapeshift-cl.herokuapp.com/. However, we are not 100% set on that platform, and I'm constantly on the lookout for one that could meet our needs even better. Whatever direction the engineering workstream chooses, we have full confidence that careful thought and reasoning was put into it, and we will support it. Lastly, please keep us in the loop so that we can adjust accordingly :)

edit: I am also in full support of future-proofing the platform with an i18n and l10n plugin built specifically for React. At the moment Airbnb's Polyglot is doing its job, but I believe there are more advanced plugins that would make your job much easier as the ShapeShift platform grows into who knows what in the future.
-
Related discussion: #1114
-
Currently, we only have translations in big `messages-<xy>.json` files. It was originally a single file, but we now have multiple files, one for each language. That brings a first question:
1. How do we want to keep the translations consistent?
As we add more translations to `messages-en.json`, we are accumulating missing translations in the other files. The values are obviously going to be missing until we get translations done, but we are also missing the keys. IMO, the best way to handle this currently (if this is something we want) is a simple script that diffs the English JSON against the others and adds the missing entries as `"TODO"`.
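A minimal sketch of the core of such a script (TypeScript; the `fillMissing` name and the `"TODO"` placeholder are illustrative, not an existing helper in the repo):

```typescript
// Nested shape of a messages-<xy>.json file
type Messages = { [key: string]: string | Messages }

// Fill any key present in the English messages but missing from another
// locale with a "TODO" placeholder, keeping existing translations intact.
export const fillMissing = (en: Messages, other: Messages): Messages => {
  const result: Messages = { ...other }
  for (const [key, value] of Object.entries(en)) {
    if (typeof value === 'string') {
      // Leaf key: keep an existing translation, otherwise mark it pending
      if (typeof result[key] !== 'string') result[key] = 'TODO'
    } else {
      // Nested namespace: recurse, creating the sub-object if needed
      const child = result[key]
      result[key] = fillMissing(value, typeof child === 'object' ? child : {})
    }
  }
  return result
}

// e.g. a messages-fr.json that is missing "common.cancel":
const en = { common: { save: 'Save', cancel: 'Cancel' } }
const fr = { common: { save: 'Enregistrer' } }
console.log(JSON.stringify(fillMissing(en, fr)))
// {"common":{"save":"Enregistrer","cancel":"TODO"}}
```

Wiring this up would then just be a matter of reading each `messages-<xy>.json`, running it against the English file, and writing the result back.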
Now, another issue is that this "single" file (duplicated across languages) is getting bigger and bigger and harder to maintain, and it is very easy for contributors to add translations under a path that is semantically wrong for the current domain. Which brings me to a second issue:
2. How do we want to make translations more granular?
A naive approach would be to have smaller JSON files per domain and construct the root JSON out of them, à la `combineReducers()`.
I'm sure there are better approaches, and @cjthompson had one with plugins in mind. Finally, after agreeing on solutions for these first two points, there is a last one we should take into consideration:
@0xdef1cafe mentioned that we might use a third-party service provider in the future for translations.
3. How can we make translations compliant with a third-party service provider?
Obviously, this is all dependent on the service used. I haven't yet dug into the existing options and didn't find any mention of this on oneDAO, so I'm leaving this open for discussion and ideation.