Aleksandr Migunov's AI Experiments

Meta (formerly Facebook) invested millions of dollars in developing an AI model to translate Facebook posts into a wide variety of languages. The model is called No Language Left Behind (NLLB), and Aleksandr has learned how to use it specifically for Bible translation. All AI models require a significant amount of training data, and NLLB needs about one third of the Bible to already exist in a language. Since the New Testament is about one third of the Bible, any language that has the New Testament and wants the Old Testament is a candidate for NLLB. To use NLLB, the source language must have the complete Bible, and the target language must have at least the New Testament. Aleksandr fine-tunes NLLB on the two New Testaments, and NLLB then produces a draft translation of the Old Testament.

The quality of the drafts depends on many factors, but the two main ones are i) how closely related the source and target languages are, and ii) how similar the translation styles of the two New Testaments are (literal vs. dynamic). If the two languages are closely related and the two New Testaments are similar in style, the draft Old Testament that NLLB produces will usually be very good. The drafts always contain numerous errors, but mother-tongue translators are generally able to correct them quickly. The Ayta Mag-indi Old Testament books drafted by NLLB are so good that the mother-tongue translators are editing them at a rate of three or four chapters per day.
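The key preparation step in the workflow above is pairing up the two New Testaments verse by verse so the model can learn from aligned examples. The sketch below illustrates that idea in Python; the function name, the verse-reference keys, and the dictionary format are illustrative assumptions, not part of Aleksandr's actual pipeline.

```python
def build_parallel_corpus(source_nt, target_nt):
    """Pair verses that exist in both New Testaments.

    source_nt / target_nt: hypothetical dicts mapping a verse
    reference (e.g. "JHN 3:16") to the verse text in that language.
    Returns a list of (source_text, target_text) training pairs,
    sorted by reference so the output is reproducible.
    """
    # Only verses present in BOTH translations can serve as training pairs.
    shared_refs = sorted(source_nt.keys() & target_nt.keys())
    return [(source_nt[ref], target_nt[ref]) for ref in shared_refs]


# Toy example with placeholder text (not real verse data):
source = {"JHN 3:16": "source verse A", "JHN 3:17": "source verse B"}
target = {"JHN 3:16": "target verse A", "MAT 1:1": "target verse C"}
pairs = build_parallel_corpus(source, target)
```

Only JHN 3:16 appears in both dictionaries here, so `pairs` contains a single training pair. In a real run, the resulting pairs would be fed to a standard sequence-to-sequence fine-tuning setup for the NLLB model.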

Computational linguists have developed a technique for evaluating the quality of a translation; the result of that test is called a BLEU score (BiLingual Evaluation Understudy). Drafts that are helpful to mother-tongue translators generally have a BLEU score of at least 35; drafts scoring below 35 usually contain so many errors that they aren't helpful to translators. For the Ayta Mag-indi project in the Philippines, NLLB produced drafts with a BLEU score of 76.7, meaning that those drafts are of very high quality. A map summarizing the results of Aleksandr's experiments is shown below. As more Old Testament books are drafted and evaluated, we'll update this map.
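In essence, BLEU compares a machine-produced draft against a human reference translation by counting overlapping word sequences (n-grams). The following is a minimal, simplified sketch of that idea in Python, assuming a single reference, whitespace tokenization, and no smoothing; production tools apply additional refinements, so scores like the 76.7 above come from those standard implementations, not from this toy version.

```python
import math
from collections import Counter

def ngram_counts(tokens, n):
    # Count every contiguous sequence of n tokens.
    return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

def bleu(candidate, reference, max_n=4):
    """Simplified BLEU on a 0-100 scale for one candidate/reference pair."""
    cand = candidate.split()
    ref = reference.split()
    log_precisions = []
    for n in range(1, max_n + 1):
        cand_ng = ngram_counts(cand, n)
        ref_ng = ngram_counts(ref, n)
        # Clipped overlap: each n-gram counts at most as often as in the reference.
        overlap = sum(min(count, ref_ng[g]) for g, count in cand_ng.items())
        total = max(sum(cand_ng.values()), 1)
        if overlap == 0:
            return 0.0  # no overlap at this n-gram order -> score collapses to 0
        log_precisions.append(math.log(overlap / total))
    # Brevity penalty: punish candidates shorter than the reference.
    bp = 1.0 if len(cand) >= len(ref) else math.exp(1 - len(ref) / max(len(cand), 1))
    return 100 * bp * math.exp(sum(log_precisions) / max_n)
```

A perfect match scores 100, a draft sharing no words with the reference scores 0, and partially correct drafts fall in between, which is what makes thresholds like "at least 35" meaningful.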

World map for AI Projects