Believe it or not, the foundations of machine translation are far older than you might expect, dating back to the ninth century.
Machine translation has its beginnings in cryptography: its theoretical basis was established by Al-Kindi, an Arab cryptographer. His four pillars of frequency analysis, probability, statistics and cryptanalysis still underpin machine translation today.
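To get a feel for the oldest of those pillars, here is a minimal Python sketch of frequency analysis, the letter-counting technique attributed to Al-Kindi. The sample ciphertext is a hypothetical Caesar-shifted sentence, purely for illustration:

```python
from collections import Counter

def letter_frequencies(text: str) -> list[tuple[str, float]]:
    """Rank the letters in a text by relative frequency,
    the core of Al-Kindi-style frequency analysis."""
    letters = [c for c in text.lower() if c.isalpha()]
    counts = Counter(letters)
    total = len(letters)
    return [(letter, count / total) for letter, count in counts.most_common()]

# In a simple substitution cipher, the most frequent ciphertext
# letter likely stands for the most frequent letter of the
# plaintext language (e.g. 'e' in English).
ciphertext = "Wkh txlfn eurzq ira mxpsv ryhu wkh odcb grj"  # hypothetical sample
for letter, freq in letter_frequencies(ciphertext)[:5]:
    print(f"{letter}: {freq:.2%}")
```

Compare the ranking against the known letter frequencies of the suspected plaintext language and you have the beginnings of a decryption, or, as Al-Kindi saw, of a translation.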
Modern interest in machine translation also grew out of cryptography during the Second World War. Methods were refined throughout the Cold War for translating between Russian and English, mostly as a triage tool to decide whether material was worth sending to a human translator: if an article seemed sensitive, it was passed to a real linguist; if not, it was discarded. Despite these advances, the 1966 ALPAC report deemed machine translation more expensive, less accurate and slower than human translation.
The most recent, and currently most successful, type of machine translation is neural machine translation. It draws on machine learning, translating by comparing corpora of existing text. While it's quick, it can still make mistakes and miss context; we've all seen some silly Google Translate mistakes over the years. Neural machine translation also relies on a bank of existing data, and it's worth bearing in mind that if you use Google's translation tool, your text will be stored in Google's cloud to be drawn on again and again. That's fine if it's just your French homework (cheater!), but if you require confidentiality, a translator bound by an NDA is a safer choice.
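If you're curious to try neural machine translation yourself without sending your text to the cloud, here is a minimal sketch using the open-source Hugging Face transformers library and one publicly available English-to-French model (the example sentence is our own):

```python
# pip install transformers sentencepiece torch
from transformers import pipeline

# Load a pretrained neural machine translation model.
# Helsinki-NLP/opus-mt-en-fr is a publicly available
# English-to-French model trained on existing parallel
# corpora: the "bank of existing data" mentioned above.
translator = pipeline("translation", model="Helsinki-NLP/opus-mt-en-fr")

result = translator("Machine translation is fast, but context still matters.")
print(result[0]["translation_text"])
```

Because the model runs locally, your text stays on your own machine, unlike a cloud service; though, as above, the output still needs a human eye for anything that matters.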
At MTT, we occasionally offer machine translation post-editing, a good way of balancing the speed of machine translation with the expertise of human translators. However, we still find that using real translators produces the best results.
Contact our friendly team on +44 1562 748 778 or email us at [email protected] for all of your translation needs!