12 Jul

Machine Translation Quality for English into Dutch
The premise of the recent Wall Street Journal article centers on the question, “Is the language barrier about to become a thing of the past?” The article suggests that ten years from now, near-perfect simultaneous speech machine translation will arrive, with even the ability to recreate the speaker’s voice in the translated output. According to Alec Ross, this advance in the language industry can bring the world closer together.
Is machine translation quality good enough?
At present, machine translation quality is widely deemed unreliable. Only languages with similar grammatical structures can be translated accurately by machine. Even then, it is still advisable to seek a native speaker’s translation for complex sentences. For example, a complex sentence in Chinese translated into English by machine may not produce accurate results, because the machine can only follow the logic its programming encodes.
Progress on machine translation quality is slow because every language has its own structure. In German, for example, the verb often falls at the end of the sentence, whereas in English, verbs usually appear early. Needing at least a full sentence before an accurate translation is possible makes real-time translation extremely difficult to program. Ross notes that any in-your-ear translation device would require at least a sentence-long delay to translate accurately. Given the current state of the art, even a speech-to-speech program with a sentence-long delay would be a huge achievement.
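As a toy illustration of this word-order problem (the mini lexicon and the reordering rule below are invented for this sketch, not taken from any real translation system), compare a word-by-word “streaming” translator with one that waits for the full sentence before reordering:

```python
# Toy illustration of why verb-final word order breaks word-by-word
# "streaming" translation. The tiny lexicon is invented for this example.
LEXICON = {
    "ich": "I", "habe": "have", "das": "the",
    "buch": "book", "gelesen": "read",
}

def streaming_translate(words):
    """Emit a translation for each German word as soon as it arrives."""
    return [LEXICON[w] for w in words]

def sentence_translate(words):
    """Wait for the whole sentence, then reorder: in the German perfect
    tense the participle sits at the end, while English wants it right
    after the auxiliary verb."""
    out = [LEXICON[w] for w in words]
    if words[-1] == "gelesen":            # participle found at the end
        participle = out.pop()            # move it next to "have"
        out.insert(out.index("have") + 1, participle)
    return out

german = ["ich", "habe", "das", "buch", "gelesen"]
print(" ".join(streaming_translate(german)))  # "I have the book read"
print(" ".join(sentence_translate(german)))   # "I have read the book"
```

The streaming version scrambles the English because the verb it needs arrives last; the sentence-level version can reorder, but only at the cost of the sentence-long delay Ross describes.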
However, this is not the only bottleneck. Several factors suggest it may take more than ten years before we have truly good machine translation quality. Here’s why:
Varied Accents. Because current machines require a standard reference pronunciation, accents are accurately interpreted only by skilled human interpreters, and even they struggle at times. A good example of how much this area needs improvement is Siri’s inability to recognize some words spoken with a thick accent.
Dialects. A dialect is a “particular form of language peculiar to a specific region or social group.” The sheer number of dialects for every language makes the task difficult, though not impossible: machines can be programmed to learn dialects, but acquiring and separating the raw data is extremely hard.
Required Incorporation of Culture. Language and culture are inseparable. Our daily speech contains subtle references to aspects of our culture, sometimes making it incomprehensible to outsiders even if they understand the denotation of every word. Cultural difference is an important consideration when translating documents: in some language pairs, the literal translation of a sentence can mean something very different from what the speaker intended.
Context. Closely tied to culture, the context in which a sentence is used weighs heavily in determining its meaning. For example, in a Chinese office setting, “I’ll think about it” often means “no.” Interpreting context draws on the speaker’s background, beliefs, and emotions at the moment of speaking. Sarcasm is another example: it is extremely difficult to detect in text. In this regard, you can expect some, or even most, of the information to be lost when a text is translated automatically.
Tone and Body Language. These are determining factors that can indicate the context of a sentence. The hardest part of acquiring this data is that it is heard and seen, then processed together with what was said. While machines may be programmed to identify body language, tone, entangled as it is with accents and other vocal identifiers, is harder to determine and would still need human intervention for checking.
That said, an automatic speech-to-speech translation machine is not impossible given the technological advances we are experiencing, but it cannot happen without sophisticated artificial intelligence. Translating a language accurately requires a level of cognitive function that present computers do not have: an intelligent application of the speaker’s background, accent, context, culture, and gestures, together with in-depth knowledge of both the source language and the language the content will be translated into. Otherwise, the machine will not be able to translate the content accurately. Requiring human perception means requiring artificial intelligence that is as close to human as possible.
These bottlenecks, however, only mark where the field can still improve; they do not make the goal impossible. Improvements over the next ten years should be significant. Just look at how far we have come since Google released Google Translate in 2006: machine translation quality has steadily improved.
Machine Translation Quality
If machines were as good at comprehending our intent as humans are, we would rely on them far more. We humans perceive and interpret things differently from machines. Turning a word into its exact-meaning equivalent in another language may be the machine’s idea of accuracy, but it often fails to serve the purpose of the translation.
Take localization, for example: when an app or website is localized, some terms and words are not translated into their exact equivalents but are instead changed to a version that is more acceptable locally, even if it is not the same word as the original. This is why machine translation quality cannot yet be compared to human translation quality. Technology may have made things easier and more convenient, yet some things, translation quality among them, remain firmly tied to humans and their brains.
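The localization point can be sketched in code (the string table, locale codes, and phrase choices below are invented for illustration, not drawn from any real localization framework): a literal dictionary lookup is overridden by a locale-specific table whenever a culturally better-fitting phrase exists.

```python
# Toy sketch of localization overriding literal translation.
# The phrase tables and locale codes are invented for this example.
LITERAL = {"shopping_cart": "Einkaufswagen"}     # literal word-for-word gloss
LOCALE_OVERRIDES = {
    "de-DE": {"shopping_cart": "Warenkorb"},     # locally preferred phrase
}

def localize(key, locale):
    """Prefer a locale-specific phrase; fall back to the literal translation."""
    return LOCALE_OVERRIDES.get(locale, {}).get(key, LITERAL[key])

print(localize("shopping_cart", "de-DE"))  # Warenkorb (localized choice)
print(localize("shopping_cart", "de-AT"))  # Einkaufswagen (literal fallback)
```

The design choice here mirrors what human localizers do: the literal translation is only a default, and a human-curated override wins whenever local convention calls for a different word.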