Collaborative translation
One idea to promote the translation of scientific articles is to create a collaborative translation tool. Producing a good translation requires several skills: proficiency in two languages and knowledge of the topic. Such a combination is easier to find in a team, and working in a team is motivating. People regularly make partial translations and stop once they understand enough for their own purposes. It would be great if others could finish the job.
There are already many translation tools, but none really fit this use case. Machine translation works quite well nowadays for many language pairs, but scientific texts can trip the system up, and accuracy is very important. A machine translation is therefore at best a first draft that an expert should review. Once that quality is good enough, people will no longer need a collaborative translation tool, nor a translation published in a repository and findable via our translations database.
There are computer-aided translation (CAT) packages to help (teams of) professional translators. These systems work with office file formats and HTML, and they tend to be proprietary and thus cannot be improved to fit our use case. They break the text up into segments (paragraphs) to be translated piece by piece. They do have many tricks that would be worthwhile to implement in a collaborative system as well, such as databases of phrases to ensure they are translated consistently. Their data formats can also serve as inspiration. The only exception to the proprietary rule seems to be OmegaT, a FOSS project coded in Java. A remaining problem could be that such systems are intended for professionals and may have a steep learning curve.
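One such trick is a translation memory: a database mapping recurring source phrases to their approved translations, so terminology stays consistent across segments and translators. A minimal sketch of the idea (the glossary and example sentence are invented, not data from any real CAT tool):

```python
# Minimal translation-memory sketch: exact-match phrase lookup.
# The glossary below is an invented example, not a real CAT database.

def apply_translation_memory(segment: str, memory: dict[str, str]) -> str:
    """Replace known source phrases with their approved translations."""
    for source_phrase, target_phrase in memory.items():
        segment = segment.replace(source_phrase, target_phrase)
    return segment

memory = {
    "confidence interval": "intervalo de confianza",
    "null hypothesis": "hipótesis nula",
}

draft = apply_translation_memory(
    "We reject the null hypothesis given the 95% confidence interval.", memory
)
print(draft)
```

Real CAT tools go further (fuzzy matching, reuse of whole previously translated segments), but even an exact-match phrase table already prevents the same term from drifting between translators.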
Tools for the translation of software packages (their user interfaces and documentation) are often collaborative, tend to be easy to use, and are sometimes even somewhat gamified. In addition, these systems are often free software and could thus be improved. They may therefore come closest to a collaborative tool for scientific articles, but they only work with nicely structured text files for input and output, while scientific articles have equations, tables, references, and figures, and will often not even be available in a text format, but only as some sort of PDF file.
So a collaborative translation tool would live on the internet and combine features of the computer-aided translation packages and the software translation tools, while having additional tooling to parse article PDFs into a text format.
Single Source Publishing https://github.com/singlesourcepub/community/wiki/Announcement-Blog
pandoc, OCR, Zettlr
Scientific Markdown could serve as the translation format (it is plain text, so it would work well with existing software built for code). This is a nice collaborative scientific Markdown pad: https://mur2.co.uk/editor
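Because Markdown is plain text, splitting an article into translation segments is straightforward. A minimal sketch, assuming blank-line-separated blocks as the segmentation rule (real articles would need special handling for equations, tables, and code blocks):

```python
import re

def segment_markdown(text: str) -> list[str]:
    """Split a Markdown document into paragraph-level translation segments.
    Blocks are separated by one or more blank lines; each block becomes
    one segment to be translated (and discussed) independently."""
    blocks = re.split(r"\n\s*\n", text.strip())
    return [b.strip() for b in blocks if b.strip()]

article = """# Results

The effect was significant ($p < 0.05$).

See Table 1 for details."""

segments = segment_markdown(article)
print(segments)  # three segments: heading, paragraph, paragraph
```

Keeping the segments as plain Markdown means the inline math and references travel along with the text, and existing diff and version-control tooling works unchanged.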
The collaborative tool should allow for communication between translators in general (for coordination of the work and community building) as well as discussions on specific translated sentences. Preferably this communication would work for two people as well as for entire classes jointly translating an article. Users should be able to upload partial translations, and there should be a page showing partial translations where people can help out.
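A sketch of what the underlying data model for partial translations might look like; all names and fields here are hypothetical, not from any existing tool:

```python
from dataclasses import dataclass, field

# Hypothetical data model for a partial translation: each source segment
# carries its draft translation, a status, and a discussion thread, so a
# page can show which segments still need help. Field names are invented.

@dataclass
class Segment:
    source: str
    translation: str = ""
    status: str = "untranslated"   # untranslated | draft | reviewed
    comments: list[str] = field(default_factory=list)

@dataclass
class Translation:
    title: str
    segments: list[Segment]

    def progress(self) -> float:
        """Fraction of segments that are at least drafted."""
        done = sum(1 for s in self.segments if s.status != "untranslated")
        return done / len(self.segments)

doc = Translation("Example article", [
    Segment("First paragraph.", "Primer párrafo.", "draft"),
    Segment("Second paragraph."),
])
print(doc.progress())  # 0.5
```

Attaching the discussion thread to the segment rather than to the whole document is what makes sentence-level debates possible while a progress page only needs the status fields.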
For many translations it would save a lot of time to have a first draft produced by machine translation. This draft should be checked by humans for accuracy; a scientific article does not have to be beautiful prose, but it does have to be clear. The user feedback in interactive machine translation can be used to improve the system and make it better at translating scientific works. The latter would require running the machine translation ourselves. It was suggested that this may require considerable resources (memory, computing power, and bandwidth); maybe these could be obtained by collaborating with the European Open Science Cloud.
Points to ponder
Translator acknowledgement

davidpomerenke made a useful comment on the Open Science Feed that would be helpful for handling the input:
I've recently coded an unpublished project on scientific citation mining, and for that purpose I had looked a bit into tools for converting PDFs to more useful formats. I ended up using Grobid, which converts the PDF to a very detailed XML format. The format is not a word processing format though, but a format specifically for representing scientific documents. I don't know if it would, for example, contain tags about bold or italicized text. The tool works really well, but since you probably cannot use the output XML format directly, it will need some postprocessing, which would be relatively simple with XML parsing libraries.

An alternative is pdfextract by Crossref. They probably use this to build their own large database. It also works really well and gives you some JSON that would probably need less postprocessing than Grobid. I didn't use it for some minor technical reason that I forgot.

pdffigures2 is from the team behind Semantic Scholar, and they probably use it to extract the figures that they show in their search engine. It only extracts figures and their captions and nothing else. I don't recall whether the other tools can also extract figures, but if not, then this will be a perfect supplement. Another alternative that's on my list but that I didn't try is Cermine. There are some more tools that specialize in mining only the citations, but I found them to be less powerful (although perhaps more performant) than Grobid.

Many publishers also publish a supplementary HTML version these days, which may be an acceptable format or at least easy to convert to other formats with pandoc. I have also seen that authors upload the LaTeX source along with the PDF on Arxiv, but I don't know how common that is. Another current project which is not directly related to your question but which you may find cool is ScholarPhi, where they try to annotate PDFs with useful semantic information.
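The postprocessing of Grobid's TEI XML mentioned above can indeed be kept simple. A sketch of extracting body paragraphs with the Python standard library; the embedded XML is a hand-made miniature for illustration, not real Grobid output:

```python
import xml.etree.ElementTree as ET

# Hand-made miniature of Grobid-style TEI output (real files are far richer).
tei = """<TEI xmlns="http://www.tei-c.org/ns/1.0">
  <text>
    <body>
      <div>
        <head>Introduction</head>
        <p>First paragraph of the article.</p>
        <p>Second paragraph of the article.</p>
      </div>
    </body>
  </text>
</TEI>"""

NS = {"tei": "http://www.tei-c.org/ns/1.0"}

def extract_paragraphs(tei_xml: str) -> list[str]:
    """Collect the text of every <p> element inside the TEI body."""
    root = ET.fromstring(tei_xml)
    return ["".join(p.itertext()) for p in root.findall(".//tei:body//tei:p", NS)]

paragraphs = extract_paragraphs(tei)
print(paragraphs)
```

The extracted paragraphs could then be fed directly into the segment list of the collaborative tool.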