Global Content Archives | Rubric

Ian Henderson
June 3, 2019

As beneficial as an acquisition, consolidation, or merger (M&A) can be for an organization, it can also cause a great deal of instability and stress. This is particularly true for managers and others who are trying to oversee the process.

Even under the best of circumstances, you’ll need a plan to help navigate the merger, and even then you are still likely to come across challenges. Considering that the M&A process can have an impact across an entire company, how is it likely to affect the processes, scope, technology, and staff involved in localization projects? We unpack this further with a brief look at one of our clients who underwent multiple mergers.

How multiple mergers impacted a client

Our team was brought in to assist a multinational software corporation that had undergone several mergers. With each merger, new products with new languages were added to their portfolio. What had started out as a single-language human resources management product ended up requiring translation into 43 languages. As each merger added a new layer of complexity, our client ultimately decided that it was more cost effective to create a new product from scratch — one that was designed with localization and multiple languages in mind. A new strategy was also required to deal with the growing complications, the most notable being the inconsistent use of terminology.

Even changes that may seem insignificant, such as referring to employees as staff, colleagues or teammates, can have a huge impact on a company or, in this scenario, on the relevance of its multilingual HR product.

Understanding M&As and localization teams

During M&As, affected teams may ask a number of questions, including:

  • Which markets shall be prioritized moving forward?
  • Which brands and products will be marketed where?
  • How much are we going to translate for each market?
  • Which languages will the newly formed company focus on for localization?

These M&A questions are affected by the degree to which the prospective companies are merging, which is in turn determined by whether an acquisition, consolidation, or merger is taking place.

Tips for localizing after an acquisition, merger, or consolidation

Once you have established the extent of the M&A, you’ll need to implement these four steps:

  1. Any knowledge about the brand should be documented and stored in one place for reference. Information about a product can often be fragmented and scattered, even within an organization. This is equally applicable to the language and terminology used for a product. When a merger takes place, this information must be centralized to avoid problems further down the line.
  2. Once information is stored in one place, a review will be required to compile a comprehensive cross-company brand glossary and style guide. Both organizations will bring their own preferences and style, so the review will ensure there is no misalignment between the two. Make sure to involve product owners, writers, legal, marketing, and translation team managers to achieve a consensus.
  3. Translation memories will also be impacted, and the newly merged company will need to assess whether legacy translation memories should be penalized moving forward. In our blog, From a Million Words to Fifty Thousand, you can learn more about the purpose and benefits a translation memory offers your organization. In this context, penalizing refers to lowering the match rate a stored term or phrase receives within the translation memory system. If a term or phrase has a 100% match, it can be pulled through automatically to replace the source term or phrase. However, if the company has decided post-merger that the stored translation is potentially no longer relevant, it can be penalized within the system so that it no longer appears as a 100% match, allowing a translator to step in and assess the situation (see the sketch after this list).
  4. The newly merged company will need to assess which tools and suppliers are kept on board for the new translation process. You can learn more about adding new systems to your company here. Companies often rely on different tools to get work done. To ensure that no problems arise during the translation process (for example, incompatibility between tools), companies need to identify which tools and suppliers will be used, and then standardize their systems. Clients should seek the assistance of a trusted global partner to help them with this process.
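To make penalization more concrete, here is a minimal sketch in Python. The segments, the exact-match scoring, and the 20-point legacy penalty are all invented for illustration; in practice, commercial TM tools expose penalties as a configuration setting rather than code.

# Minimal sketch of penalizing matches from a legacy translation memory.
# The segments, scoring, and penalty value are hypothetical examples.
LEGACY_PENALTY = 20  # percentage points deducted from legacy-TM matches

def best_match(source_segment, tm_entries, is_legacy_tm):
    """Return the best TM entry and its (possibly penalized) match score."""
    best, best_score = None, 0
    for entry in tm_entries:
        score = 100 if entry["source"] == source_segment else 0  # exact matches only, for brevity
        if is_legacy_tm:
            score = max(score - LEGACY_PENALTY, 0)  # a 100% match becomes 80%
        if score > best_score:
            best, best_score = entry, score
    return best, best_score

legacy_tm = [{"source": "Add a new employee", "target": "Ajouter un nouveau salarié"}]
match, score = best_match("Add a new employee", legacy_tm, is_legacy_tm=True)
print(score)  # 80: the translation is suggested but no longer inserted automatically

With the penalty in place, a translator always reviews suggestions coming from the legacy memory before they reach the merged company’s content.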

Implementing these steps will help ensure that a merger or acquisition doesn’t dramatically impact the performance of your localization teams.

A localization and translation partner to help you through the M&A process

While these steps will help your localization and translation teams transition through this period, it’s always better to avoid or minimize these problems beforehand. The right content partner with experience in the global arena can help you achieve this. Rubric is a customer-centric Global Content Partner with years of experience developing and managing localization and translation strategies for multinational companies.

To find out more on how we can ensure your content localization and translation proceeds smoothly, no matter the circumstances, contact us today. If you need to keep up to date with the latest on localization systems that can help your business navigate mergers, be sure to subscribe to our blog.


Ian Henderson
May 27, 2019

We’re swimming in more ‘Big Data’ than we know what to do with. Here’s the good news: the sheer amount of information on hand can give you an accurate idea of who your customer is and what should drive your globalization strategy, both now and in the future.

Define your data goals from the outset

A good question to ask is: what content are people using, and where? With so much content being consumed digitally, businesses now have a number of data-rich channels to analyze. These include eLearning resources, eCommerce platforms, and website content. Usage data can be a treasure trove of information, but if you don’t know what to look for it can be overwhelming. To use Big Data effectively, define the questions you want the data to answer from the outset, then set a baseline from which you can measure the impact or change. From then on it’s an iterative process, whereby the success of your content is reviewed and localization strategy adjusted with each new metric gathered.

Get to know your user profiles

What content are your users consuming? Where are they based?

A user profile is a set of data that gives you an overview of browsing habits, as well as personal specifics such as gender, age, and location. This information provides an accurate idea of who is consuming different pieces of content. Additionally, by monitoring the success of existing translations and how they affect product sales, a business can ascertain which localization strategies work in a specific region, and which do not.

For example, a business may launch a campaign with the express purpose of gathering data about the user base in a specific market. This rich data could then be used for further expansion, as you would have a good idea of what content is received well and what doesn’t resonate with the audience. You can then localize accordingly, selecting which content requires translation into which languages.

Ensure your global strategy doesn’t overreach your budget

When it comes to budgeting for localization, you need to consider the value of your content.

For example: if a company finds that users only look at 20% of their product content, it would not make budgetary sense to translate the remaining 80%. So while high-value content demands high-quality translation, it may be sufficient in terms of user experience and expectations for the remaining 80% of low-value content to be machine translated using a service such as Google’s machine translation.

Gauging opinion with social media

The internet revolutionized the consumer–brand relationship. So much so, that entire careers are now built on managing, analyzing, and reacting to social media metrics. Where in the past a business would have to send out a survey for feedback, opinion is now easily gleaned, tracked, and measured from consumers’ comments and shares.

This insight is invaluable for brand expansion, gauging an audience’s opinion of competitors, and identifying ownable, niche areas. For brands that are already entrenched in a market, audience opinion and sentiment is crucial for growth. For example, should your organization offer a full multilingual customer service, or would simply localizing online product reviews be of greater benefit?

Back up your data-driven decisions with a trusted Global Content Partner

When it comes to localization strategy, a good rule of thumb is to have a baseline in place, with a target in mind, and to adjust as you go. Collecting usage data can help to determine which content (web pages, user manuals, and product information) should be localized, and into which languages. Tracking data also helps to identify which content is delivering results. It’s an iterative process that can be improved with the help of the localization expertise of a Global Content Partner, like Rubric. Check out more in our blog that mentions Facebook data.


Ian Henderson
May 14, 2019

Technical writing invariably involves a great deal of content reuse. If you’ve ever authored technical documents across multiple products and projects for the same organization, you’ve undoubtedly found yourself repeating elements of text and style many times over.

Streamlining this content reuse can be one of the best ways to improve the efficiency of your authoring and localization processes. And, with the right tools and strategy, it’s easier than you might think.

The Darwin Information Typing Architecture (DITA) is an open standard, XML-based architecture for writing and publishing technical documents, and it was built from the ground up to support content reuse. DITA encourages a modular approach to technical writing where topics – the basic units of information within DITA – are capable of standing alone and being reused in many different documents. The focus is on content rather than layout, with the goal of maximizing reuse to save time and resources.
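To illustrate the modular approach, a DITA topic might look something like the following. This is a minimal, hypothetical concept topic: the element names come from the DITA standard, but the IDs and content are invented for this example.

<concept id="wifi_setup_overview">
  <title>Wi-Fi setup overview</title>
  <conbody>
    <p>The speaker connects to your home network over Wi-Fi.</p>
    <p id="support_note">If you cannot connect, contact support.</p>
  </conbody>
</concept>

Because the topic stands alone, it can be referenced from any number of DITA maps, and the same unit of content appears in every deliverable that needs it.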

DITA was originally developed by IBM almost 20 years ago. It has received numerous updates since then, and it is experiencing a renaissance with the release of new tools and Lightweight DITA – a simplified version for those that do not require the full feature set, or prefer to work in HTML5 or Markdown.

Switching from traditional word editors to DITA can seem like a daunting prospect, but if used correctly, DITA is an invaluable tool that drives effective writing and localization. That’s why we’ve put together this article to give you some tips on how to get started.

 

The right tools

The first stage in any DITA implementation is choosing your tooling. If you’re new to the architecture and looking to explore its potential, the DITA Open Toolkit is an excellent starting point for experimentation. It’s a free, open-source publishing engine, and it actually serves as the foundation for much of the DITA software ecosystem – including many of the most popular, proprietary authoring and content management applications.

Oxygen XML Editor 21.0 interface

When you’re ready to implement DITA in earnest, tools such as Oxygen XML Editor are the natural next step. This kind of software provides an easy-to-use visual interface for creating and editing technical documentation, much like a typical word processor. But unlike a word processor, these tools come with built-in DITA support, enabling writers to manage their modular content units and effortlessly reuse them via content references.

Content References can be used to pull a huge variety of previously-created content into a new project. This can range from a single phrase, to a topic, to an entire collection of connected content.
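As a hypothetical illustration of a content reference, a warning paragraph written once in a shared topic can be pulled into any other topic that needs it. The file name and IDs below are invented for this example; the conref syntax itself is standard DITA.

<!-- shared/warnings.dita -->
<topic id="warnings_topic">
  <title>Shared warnings</title>
  <body>
    <p id="mains_warning">Disconnect the device from the mains before servicing.</p>
  </body>
</topic>

<!-- Any other topic reuses the paragraph via a content reference -->
<p conref="shared/warnings.dita#warnings_topic/mains_warning"/>

If the wording of the warning changes in the shared topic, every document that references it picks up the change automatically at publish time, and once translated, the translation is reused in the same way.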

 

Don’t let localization be an afterthought

The benefits of DITA aren’t limited to the initial authoring process – it can also significantly streamline localization. The key here is to make sure that you factor in localization right from the outset.

Content created in DITA can be easily converted to XLIFF for translation. But before you get to that point, there are a number of things you can do to make your content more localization-friendly:

  • Write in International English rather than American or British English. Avoid colloquial expressions, idioms, and overly complex sentences.
  • Determine whether there is anything that should not be translated, such as lists of parameters and part numbers. Most DITA tools will give you the option to flag this content for exclusion, which can make a huge difference to localization costs by reducing the scope of work (see the snippet after this list).
  • In cases where you need to customize your content for different products within a range – or for different outputs for the same product (e.g. PDF manual vs online help manual) – use DITA’s conditional text feature to clearly indicate which content should vary, and in what way.
  • Develop a glossary to precisely define terms, especially acronyms and abbreviations.
  • Consider using a controlled language (for instance, Simplified Technical English) with a limited vocabulary and fixed style guidelines. This will improve the consistency of your content and minimize the risk of ambiguity for localization service providers.
  • Use the SVG format for images that include annotations or callout text. SVG graphics are the easiest to edit with computer-assisted translation tools.
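To make the exclusion and conditional text points above more concrete, here is a small hypothetical snippet. The translate and product attributes are standard DITA; the part number and product values are invented for this example.

<p>Enter the part number <ph translate="no">PN-4821-B</ph> in the order form.</p>
<p product="speaker-pro">Connect the speaker using the optical input.</p>
<p product="speaker-lite">Connect the speaker using the 3.5 mm input.</p>

At publish time, a DITAVAL filter file selects which product values are included in each output, while the translate="no" flag tells downstream translation tools to leave the part number untouched.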

Following these suggestions from the start of a project will enable you to move seamlessly from the initial content creation to localization. And once the localization is complete, you will be able to use a DITA publishing engine to generate deliverables for each of your target languages with just a few simple commands. Authors simply have to create and follow well-defined layout rules, and DITA takes care of the rest.

An additional advantage to using DITA for localization is that after a topic has been translated once, it does not have to be translated again – reducing both cost and turnaround times in localization when content is reused.

 

Leverage the experts

Working with experienced specialists is the best way to guarantee a smooth DITA adoption and avoid localization complications. At Rubric, our experts know DITA inside and out, and they are ready to provide their best practice expertise to help you plan your DITA implementation strategy.

Send us some of your own collateral and we can advise on DITA best practices! Simply attach some of your source documents to your email and Ian Henderson, our CTO, will reach out with some tips and guidance to help you embed structured authoring and simplify your content management.

Stay tuned for the next couple of weeks as we cover Content Authoring, Product Information Management (PIM) systems and other topics that can help drive your localization strategy.



Rubric is pleased to announce that we’ll be attending the Society for Technical Communication’s (STC) 2019 Summit! Now in its 66th year, the STC Summit is the premier conference for the technical communication world.

STC is the largest and oldest professional association dedicated to the advancement of technical communication. The expo brings together our peers for in-depth discussions and presentations on key trends, issues, and cutting-edge solutions.

What is technical communication?

Technical communication simplifies technical or specialized subject matter, such as medical procedures or computer algorithms. STC defines the benefits:

“The value that technical communicators deliver is twofold: They make information more useable and accessible to those who need that information, and in doing so, they advance the goals of the companies or organizations that employ them.”

Who is Rubric?

Founded in 1994, Rubric is a trusted Global Content Partner with a track record of helping multinational companies achieve their global strategy goals via targeted translation for multilingual markets. Some of our clients include Amway, AccuWeather, and Toshiba.

Last year, and with the help of CSA Research, Rubric underwent a re-brand that saw the company pivot to a consumer-centric strategy centered around Global Content. Our new descriptor — your ‘trusted Global Content Partner’ — was born from this shift in focus. As a trusted Global Content Partner, we thrive on collaboration with our clients to solve the challenges and complexities of Global Content. Rubric offers a wide spectrum of localization solutions for organizations that want to market their services globally. This includes translating product and training manuals, ensuring digital content is aligned to a region’s language, as well as market research and guidance from asset ideation through to delivery.

Why did Rubric opt for this model? While traditional translation services may save a company costs, the strategies employed do not deliver the long-term transformational ROI that a trusted Global Content Partner can offer. In fact, by shepherding content from creation to translation to market release, we have proven that a company will save on costly reworks down the line.

What kind of clients do we partner with?

From localizing Amway’s multimedia training collateral to delivering a new level of global weather hyper-localization for AccuWeather, Rubric has delivered translation solutions to some of the world’s largest organizations. We offer solutions in the technology, manufacturing, and software spheres for companies that want their products and services translated for multilingual markets.

Who you’ll meet at the STC Summit

Our management team will be manning the booth — make sure to say hi, they’re looking forward to meeting you!

  • Ian Henderson, Chairman and Chief Technology Officer: Ian is the co-founder of Rubric and has devoted more than 20 years to Rubric’s growth. His foresight and communication prowess have been instrumental in helping clients reap the rewards of globalization and benefit from agile workflows, while still guaranteeing the integrity of their content.
  • Françoise Henderson, Chief Executive Officer: Françoise is the co-founder of Rubric. With over 20 years of experience in corporate management and translation, her leadership of Rubric’s worldwide operations and strategy has proven invaluable. Under her guidance, we’ve generated agile KPI-driven globalization workflows for clients and reduced time-to-market across multiple groups.

Where you’ll find Rubric at the STC Summit

Come meet us at Booth #304 and see some examples of our Global Content Partner strategy in action!

In the meantime, connect with us on social media:

Facebook

Twitter

LinkedIn

Here’s to a memorable STC Summit 2019!



Whether you’re dealing with an aircraft, an industrial robot, or a pump, when it comes to configuring and maintaining machinery there is no room for error. Mistakes during installation or servicing can lead to equipment failure, accidents, and even fatalities. That’s why it’s so important for technical documentation to be clear and concise, with no room for misunderstanding.

But achieving this level of clarity can be a major challenge, especially when you factor in language barriers: even though English is the prevailing language for technical documentation, engineers and end-users are not always native speakers.

Standardize and simplify

The proven solution to this problem is Simplified Technical English (STE). Originally developed for the aerospace industry, STE is a controlled language that utilizes a limited vocabulary where each word has a single, clearly defined meaning. By keeping word usage and linguistic construction simple and consistent, STE minimizes the potential for misunderstandings.

Today, STE is seeing growing popularity outside of aviation. In the manufacturing sector in particular, businesses and technical communicators are increasingly seeing the advantages of employing a preexisting, standardized framework for their technical writing. Internal style guides and glossaries are not new concepts, but developing them from scratch and keeping them up-to-date can be immensely time-consuming. In contrast, STE is a premade and proven system, and organizations can easily adopt it with just minor customizations to suit their industry.

But how can you tell if STE is right for your business? Well, consider this question: Is proper understanding of your installation and maintenance documentation critical to safety? If your answer is “yes”, then STE is almost certainly a good fit.

From simplified English to simplified translation

STE is an excellent way to make your technical documentation more consistent and easier to understand for non-native English speakers, but it isn’t always enough on its own. Sometimes you will need to go one step further and translate your content.

Target audience is the biggest factor here. If your end user doesn’t speak English at a high enough level, or at all, then translation is obviously the best option. This situation is especially common in B2C scenarios where the customer base is wider and potentially more varied.

When localizing technical documentation, STE still offers the ideal starting point, since the benefits of STE (reduced ambiguity, improved clarity and consistency) are passed on to the localized content.

By localizing your existing STE style guide and glossary for each of your target languages, you can maintain the same degree of clarity in your translations as you have in your source content. This approach reduces ambiguity from the localization process, minimizing the risk of misunderstanding or error and resulting in a higher quality, easy-to-use end product.

Additionally, authoring source content in a concise and standardized way will enable you to make the best use of translation memory (TM) technology. TM systems automatically provide suggestions to translators by remembering past translations. And when sentence construction, word usage and grammar are kept consistent in the source language, the potential for TM leveraging – and the resulting time and cost savings – goes up significantly.

Benefits for writers, readers, and businesses

Improved end-user safety is the main reason for adopting STE and standardized translations, but it is far from the only benefit. The approach we have described in this blog can make life easier for technical writers, translators, and customers, while also delivering considerable savings to your business.

  • For writers, using the same controlled language across all projects makes content creation much more straightforward. With this methodology, technical communicators typically make fewer errors, spend less time worrying about word choice, and gain more benefit from TM systems – all of which help them to work more effectively and productively.
  • From the business perspective, STE keeps content concise and wordcount low. This leads to lower translation volumes and lower costs, especially when combined with improved TM system utilization.
  • Last but not least, implementing these standards will greatly enhance the consistency of technical documentation across your company as a whole. When instructions are always written in the same way, repeat customers will have a far easier time safely getting to grips with new products.

Partnering for success

Adopting STE principles in other languages can seem daunting, but that’s where Rubric comes in. Our expert team will work with you to develop bespoke localization style guides and advise on how to embed best practice terminology processes into your business.

Our experts can also help inform your decisions on the tooling and architecture used in your localization process. These choices will have a major, multiplicative effect on the quality of your content and the efficiency of your processes – so the earlier you involve us, the better!

If you’d like to learn more about STE or our localization services, we’ll be at the STC 2019 Technical Communication Summit & Expo in Denver, CO next week. Come visit us at Booth #304 from May 5th-8th – we’d love to meet you! If you aren’t in the Denver area, be sure to follow us on social media for the latest updates.


Dominic Spurling
April 24, 2019

From a software engineering perspective, the localization process can be an entropy-increasing stage in your devops pipeline.

Localization tools need to extract a snapshot of the user experience, usually from resource files, and generate translated equivalents without adversely affecting the integrity of the application. User interface strings must be unpicked from (sometimes deeply nested) mark-up and presented to translators, who prepare target language strings, which must be ready to nest back into place within identically structured mark-up.

The tendency for small inconsistencies in the source to become large ones in target-language files, and for non-breaking anomalies to become breaking ones: this is entropy in UI localization projects.

At Rubric, we use a mix of automated tests and manual checks by both linguists and engineers, to help minimize this effect. Below I’ll work through a typical example to show how you can help your global content partner by minimizing entropy at the start of the process. (Look out for the inconsistencies in the original source.)

An example resource file

The following XML is based on a typical resource file for an Android app:

<strings>
	<check_mobile_devices_wifi>
		<![CDATA[Check your mobile device’s Wi-Fi settings and make sure your mobile device is connected to your home network##REPLACE_WITH_HOME_NETWORK##.<br /><br />Or, if you still can't connect, click START OVER.]]>
	</check_mobile_devices_wifi>
	<we_are_here_to_help>
		<![CDATA[We&rsquo;re here to help]]>
	</we_are_here_to_help>
	<firmware_system_setup>
		<![CDATA[How would you like to connect your speaker to your network?]]>
	</firmware_system_setup>
</strings>

 

Step 1 – Identify content type and unwrap nested formats

The file is first put through an Android strings XML parser to extract the value of each key. The content type within the CDATA sections (HTML) is identified and handed off to a secondary parser.

  • Note: there are two right single quotation marks in the example above. One of them is HTML-encoded as &rsquo; but the other is a literal character. This is an example of an inconsistency which could lead to problems down the line.
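A rough sketch of what this first step might look like in Python, using only the standard library. The file name and the content-type heuristic are simplified assumptions made for illustration, not Rubric’s production tooling.

import re
import xml.etree.ElementTree as ET

# Parse the Android strings resource shown above and extract each key's value.
tree = ET.parse("strings.xml")
for element in tree.getroot():
    key = element.tag
    value = (element.text or "").strip()
    # Crude content-type detection: values containing markup or entities are
    # handed off to a secondary, HTML-aware parser; the rest stay plain text.
    if re.search(r"<[A-Za-z!/]", value) or "&" in value:
        content_type = "html"
    else:
        content_type = "plain"
    print(key, content_type)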

Step 2 – Parse HTML and protect tags and placeholders

Here the HTML entities are decoded (as in the second key above) and the HTML tags and application-specific placeholders are protected.
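Continuing the sketch, step 2 might look roughly like this. The placeholder pattern and the token format are assumptions made for illustration.

import html
import re

def protect(segment):
    """Decode HTML entities, then lock tags and placeholders behind opaque tokens."""
    segment = html.unescape(segment)  # &rsquo; becomes a literal ’
    protected = {}
    pattern = re.compile(r"(##[A-Z_]+##|<[^>]+>)")  # app placeholders and HTML tags

    def lock(match):
        token = "{{%d}}" % len(protected)  # e.g. {{0}}, {{1}}, ...
        protected[token] = match.group(0)
        return token

    return pattern.sub(lock, segment), protected

text, tokens = protect("your home network##REPLACE_WITH_HOME_NETWORK##.<br /><br />Or, click START OVER.")
# text   -> "your home network{{0}}.{{1}}{{2}}Or, click START OVER."
# tokens -> maps each {{n}} back to the original tag or placeholder for step 4

The translator only ever sees the tokens, so the mark-up cannot be damaged while the text around it is translated.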

Step 3 – Present translatable strings to translators

Translations are pre-populated from translation memory where possible and the translator fills any gaps which remain. The protected placeholders and tags (such as ##REPLACE_WITH_HOME_NETWORK##) cannot be altered by the translator but may be re-arranged if required by the sentence structure of the target language.

Step 4 – Write out target files

This is often the most technically complex part of the process where inconsistencies in the source can become amplified. The translated segments are processed (through each of the above steps in reverse), eventually reconstituting the original format.

First, placeholders and tags are re-injected, and special characters are re-encoded or escaped.
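Continuing the sketch from step 2, the re-injection could look something like this; the token format and encoding choices are simplified assumptions rather than Rubric’s actual pipeline.

def reinject(translated, tokens):
    """Swap the opaque tokens back for the original tags and placeholders."""
    for token, original in tokens.items():
        translated = translated.replace(token, original)
    return "<![CDATA[" + translated + "]]>"

french = reinject("Comment souhaitez-vous connecter l’enceinte à votre réseau?", {})
# Whether the apostrophe in "l’enceinte" is then written out literally or re-encoded
# as &rsquo; depends on how the matching source string was encoded, which is exactly
# where the ambiguity described below creeps in.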

The escaped single quote will probably not do any harm if it is decoded at the right points further down your devops pipeline. However, if the source structure is internally consistent (less entropy!) this kind of ambiguity can be avoided.

Finally, the translated strings are re-injected into the original markup:

<strings>
  <check_mobile_devices_wifi>
    <![CDATA[Vérifiez les paramètres Wi-Fi de votre périphérique mobile pour vous assurer que ce dernier est connecté à votre réseau domestique##REPLACE_WITH_HOME_NETWORK##.<br /><br />Si vous ne pouvez toujours pas vous connecter, cliquez sur RECOMMENCER.]]>
  </check_mobile_devices_wifi>  
  <we_are_here_to_help>
    <![CDATA[Nous sommes là pour vous aider]]>
  </we_are_here_to_help>
  <firmware_system_setup>
    <![CDATA[Comment souhaitez-vous connecter l&rsquo;enceinte à votre réseau?]]>
  </firmware_system_setup>  
</strings>

 

How you can help your Global Content partner

As well as providing source files which are structured in a consistent way, there are a couple of other ways in which you can help optimize the localization process and enhance the quality of the end product:

  • Provide a complete set of files with every localization request

    At Rubric, we typically run diff reports at the end of every localization project in order to review changes in the English source and compare those against changes in the target files. This helps us to pick up any unexpected changes (for example, escaped characters introduced in error). Working with a complete set of files for each revision simplifies the diff process and makes reports easier to analyze.

  • Say something when you find anomalies

    If you find that you are having to apply fixes to localized resource files, please tell your Global Content partner, as this will enable them to correct any misconfigurations.

*First image of a black hole courtesy of the Event Horizon Telescope (EHT) network.



Right now, the primary content strategy for businesses should be video marketing. Static images and text-only posts are no longer enough to resonate with your audience online. Video is an unequivocal marketing revolution and brands need to embrace this format wholeheartedly to reap its global rewards.

These stats offer eye-opening highlights:

  • Social videos are shared 1200% more than text and images, combined.
  • 5 billion YouTube videos are consumed every day.
  • Video can raise email click-through rates by 200–300%.
  • 6 billion video adverts are consumed online every year.
  • 45% of users watch over an hour of Facebook or YouTube videos a week.
  • 500 million people use Instagram Stories every day.

These statistics can’t be ignored. And when these lines of communication are so easily accessible, brands need to ensure their content resonates across linguistic boundaries. To achieve this, you need a solid video localization strategy.

The different kinds of video localization on the market

Over and above budget, turnaround time, and production value, video localization should be determined by the end-user. Additionally, it’s imperative that video localization is factored into the authoring process as early as possible (whether it’s your content partner managing the localization or an internal team).

  • Subtitles

    Is the video intended for someone scrolling through their social media feeds? If so, you may want to consider adding subtitles — 85% of Facebook videos and two thirds of Snapchat videos are watched on mute. If the piece of content is intended for multilingual audiences and your timings are tight, adding subtitles is a quick method for getting your message out there. According to research, subtitles improve comprehension, meaning your messaging is far more likely to be understood and remembered when using closed captions.

  • Voiceover

    Does your video contain a lot of information or is it intended for research purposes? If so, you may want to open your contact list and get your favorite voiceover artist into the studio. Voiceovers lend themselves to multimedia assets such as eLearning courses, product and marketing videos, and instructional pieces because they allow the user to pause, rewind, and study at their leisure.

  • Simple User Interface

    TechSmith explains: “It can be difficult to onboard users to new and complex interfaces and workflows. Too much information can easily overwhelm the user and make it difficult to keep the focus on the essential feature or functionality.”

    Enter the Simple User Interface (SUI) and our collaboration with TechSmith, the industry leader in screen recording and screen capture. Essentially, SUI involves removing or simplifying unnecessary elements in favor of essential, recognizable iconography that multilingual markets can easily understand. A SUI is an excellent visual aid for quick, uncluttered user education because it takes cognitive overload out of the equation.

    For this reason, Rubric has teamed with TechSmith to make presentation easier through a marriage of simple visualization cues and scalable localization techniques.

Video Localization best practices

Consider the following best practices when laying down your video localization foundation:

  • Inform your localization strategy by learning what your customers need and expect from video content.
  • Aim for collaborative video localization from the get-go by commissioning the skills of a trusted Global Content Partner. Rubric’s partnership with TechSmith has resulted in high-quality marketing, tutorial, and onboarding videos that wouldn’t have been possible had they been attempted in silos.
  • Design your video with localization in mind by keeping things simple: use iconography instead of text, universal examples, and a simplified user interface.
  • Ensure that you’re giving your end-user the information, context, and guidance they need.

In the end, whichever type of video localization you choose, it needs to account for your target market’s cultural nuances. For example:

  • Does your subtitle lexicon include slang and other unique colloquialisms?
  • Does your voiceover artist employ a cadence that your targeted audience will understand, enjoy, and respond to?
  • Does your SUI use similar visual cues as the market it’s intended for?

 

Rubric is a customer-centric, Global Content Partner. We partner with multinational companies, like TechSmith, to help them achieve their global strategy goals. We’re pushing the boundaries of video localization and experimenting with new, innovative technologies for greater resonance across multilingual markets. We live for collaboration. We’d like to do the same for you. Rubric’s two-day workshop will analyze actual video and content examples from your business to advise on the localization strategies you should be implementing to maximize your reach.



How translation memory cuts costs and elevates Global Content

As digital information expands, translation memory (TM) evolves with it. And today, TM systems are the most used translation applications in the world. A TM system is a complex undertaking that requires a particular skill set.

What is translation memory? In short, translation memory is a comprehensive database that recycles previous translations to be used in new text. By leveraging past translations, a translator can assess whether an automatically generated suggestion is appropriate for the text they’re adapting.

Uwe Reinke of Cologne University of Applied Sciences explains it as such:

“The idea behind its core element, the actual “memory” or translation archive, is to store the originals and their human translations of e-content in a computer system, broken down into manageable units, generally one sentence long. Over time, enormous collections of sentences and their corresponding translations are built up in the systems.”

This process not only saves time and effort, but maintains a high level of quality and consistency across Global Content projects.
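As a highly simplified illustration of the mechanism, here is a sketch in Python. The stored segments, the French renderings, the similarity scoring, and the 75% threshold are all invented for this example and are not how any particular TM product is implemented.

from difflib import SequenceMatcher

# A translation memory stores previous source/target pairs, usually one sentence long.
tm = {
    "Mostly sunny with a gentle breeze.": "Généralement ensoleillé avec une brise légère.",
    "Cloudy in the afternoon.": "Nuageux dans l'après-midi.",
}

def suggest(new_segment, threshold=0.75):
    """Propose the closest stored translation for a new segment, if it is close enough."""
    best_source, best_score = None, 0.0
    for source in tm:
        score = SequenceMatcher(None, new_segment, source).ratio()
        if score > best_score:
            best_source, best_score = source, score
    if best_score >= threshold:
        return tm[best_source], round(best_score * 100)  # a fuzzy match for the translator to review
    return None, 0  # no usable match; the segment is translated from scratch

print(suggest("Mostly sunny with a light breeze."))  # proposes the stored translation with a high match score

The translator then decides whether the suggestion fits the new context, which is where the human judgment mentioned above comes in.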

The key benefits of translation memory

  • Cumulative savings

    A TM database “learns” from previous projects. When you begin a new one, the new text is segmented and analyzed against past translations to produce matches in your database. Over time, the accumulation of translation memory “knowledge” decreases costs on future translations, while expanding the depth of your text database.

  • Quick Turnaround

    Rubric was tasked with delivering a new level of weather personalization and global localization with AccuWeather’s Universal Forecast Database. From English to Korean, Rubric was able to reduce 1,000,000 words for translation to just 50,000. And though it took a year, we consider the completion of a project this vast to be a quick turnaround. For further information about AccuWeather, keep reading.

  • Superior translations

    TM also aids in a translator’s accuracy and output. By aligning your business’s vocabulary, tone, and style, you give a translator the foundation they need to produce high quality translations.

The role of machine translation in translation memory

Simply put, machine translation (MT) is the automation of the translation process by computer. Where translation memory requires a human translator, machine translation is used in combination with TM to hasten project delivery without the need for human input.

There are a number of MT engines available:

  • Generic

    Google Translate, Bing, and similar are grouped here. These platforms provide quick translations to millions of people around the world and can be purchased by companies for API-integration into their systems.

  • Customizable

    An MT engine that can be tailored to improve the accuracy of a business’s vocabulary within a specific field, be it medical, legal, or financial. Customizable MT can factor in a company’s own style and lexicon too.

  • Adaptive

    Introduced by Lilt in 2016, followed by SDL a year later, adaptive MT has greatly improved a translator’s output and is expected to challenge TM in the coming years.

In all cases, MT will attempt to create translated sentences from what it’s learned. For example, it may parse two or three TM matches and automatically combine them to complete a sentence. The result is often the kind of garbled, ungrammatical translation Google Translate produces at times. Because of this risk, a human translator should be available to audit and edit the results for project success.

Gaining efficiencies from large, repetitive texts such as product catalogues is an art that Rubric excels at. We analyze and filter texts to break down the component phrases and reduce the unique text for translation. Here’s how we introduce the human element into the act of translation.

How does Rubric use translation memory?

We briefly mentioned our involvement in AccuWeather’s Universal Forecast Database. Through content analysis and manipulation, we were able to translate an exhaustive database of weather phrases that combine to form forecasts such as “sunny, mostly clear, with changing clouds in the afternoon”. Because the component phrase ‘sunny’ was repeated in the file thousands of times, we wanted to ensure we leveraged one translation for all of the repetitions to save costs. We achieved this by translating the above example phrase and ‘sunny’ separately.

Translators were then able to focus on the unique component phrases, while checking them against full weather forecast phrases for grammatical accuracy. With this approach we were able to reduce the scope of the database project from 1,000,000 words to around 50,000. The resultant savings in both cost and time were staggering.
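A stripped-down sketch of the underlying idea follows. The component phrases and the French renderings are invented for illustration (the real project ran from English into Korean), and the actual work involved far more linguistic care than simple string substitution.

# Forecasts are built from repeating component phrases, so each component is
# translated once and the full forecasts are recomposed from those translations.
component_translations = {  # each unique component is translated once by a human translator
    "sunny": "ensoleillé",
    "mostly clear": "généralement dégagé",
    "with changing clouds in the afternoon": "avec des passages nuageux dans l'après-midi",
}

forecasts = [
    "sunny, mostly clear, with changing clouds in the afternoon",
    "sunny, with changing clouds in the afternoon",
]

for forecast in forecasts:
    parts = [component_translations[part.strip()] for part in forecast.split(",")]
    print(", ".join(parts))

# Only the unique components are sent for translation; every repetition of "sunny"
# across thousands of forecast strings reuses the same translation, and translators
# review the recomposed forecasts for grammatical accuracy.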

Previous translations where the source text is identical to the new text, or partially matches it, can also be stored in translation memory. In either case, the TM will propose any matching database entries for the translator to use as they see fit.

TM can also be programmed to store translations by product. This is vital when you have a new product and want to prioritize the order in which multiple product TMs are consulted, so you can assess which existing translations are the most appropriate: for example, using Windows XP terminology versus Windows 8, or Android terminology against iOS.

Rubric is a customer-centric, Global Content Partner. We partner with multinational companies to help them achieve their global strategy goals. Need help expanding globally? A trusted Global Content Partner will guide, expand, and strengthen the quality and impact of your translation. Sign up for a two-day workshop where we’ll analyze actual content examples from your business to show you how we can house, maintain and manipulate your TMs in a structured, consistent way across markets.



With 5G on the horizon and approaching at speed, AI, machine learning, and voice search will soon have a network to match their processing potential. But what do lightning-quick transfer times and cutting-edge comms tech mean for international brands? Let’s find out.

How artificial intelligence is changing global communication

Raconteur reports that “with the help of parallel text datasets such as Wikipedia, European Parliament proceedings and telephone transcripts from South Asia, machine-learning has now reached the point where translation tools rival their human counterparts.”

No longer the stuff of science fiction, artificial intelligence is powering text-to-speech and speech-to-text functionality across leading platforms and devices. In fact, Google and Amazon are in the midst of a battle to see who emerges as the king of speech technology. Google Cloud has just updated its AI-powered speech tools, meaning that brands and businesses can get access to additional voices and languages:

  • Google Text-to-Speech:

    The product now supports 21 languages, on top of 31 new voices courtesy of WaveNet, a deep neural network that generates raw audio for realistic, natural-sounding voices.

  • Google Speech-to-Text:

    The customer usage data attained through data logging has enhanced Google’s models, enabling video transcription that has 64% fewer transcription errors.

Similar to Google’s Text-to-Speech, Amazon’s Polly is currently turning “text into lifelike speech using deep learning”. While Amazon’s Transcribe falls short of Google Speech’s supported languages, its custom vocabulary offering makes up for it. It’s a fair call to say that both products are equally competitive at the moment.

This leap in translation technology has remarkable implications for online translations and face-to-face communication. In fact, Skype’s Meeting Broadcast is already trialing real-time translation for video meetings, bringing us closer to demolishing the language barrier.

Consumers are demanding localized video content

Not so long ago — in a world of dial-up modems and 56k speeds — static visuals and reams of text were the only viable forms of content delivery. Fast-forward to today’s hyper-fast connection speeds and you have a fertile environment for the video format to thrive. Indeed, you’d be hard pressed to find a social media post or webpage without an easy-to-digest embedded video. In fact, social media video generates 1200% more shares than text and image content combined.

With video now the most popular means of content-consumption online, users are demanding authentic localization from brands. Some considerations:

  • A well-delivered voiceover from a native language speaker conveys the cadence and emotional weight or subtlety of local communication in the region you’re targeting. However, it can be costly to translate and record dialogue for every country you’re delivering the video to.
  • Given that 85% of video on Facebook is watched with the sound off, it makes sense for a business to invest in high-quality subtitles. By accurately voicing (pun intended) the nuances of a country’s language through text, you go a long way towards fostering brand loyalty. Consumers are far more likely to choose a brand that’s taken the time and effort to craft content that’s unique to their region.

Your voice is the command

While we’re already witnessing the rise of Voice Search, it’s predicted that 30% of all website sessions will be without a screen by 2020. Now whether or not that comes to pass, there’s no arguing that Siri, Alexa, and similar have emerged as communicative powerhouses that demand attention.

And with great power comes SEO responsibility. Currently 20% of all Google searches are voice-based. And with this statistic expected to rise exponentially, Google is already ploughing resources into voice search optimization for more accurate website ranking, starting with conditioning users to use voice on mobile phones. To get the best results, it’s important to localize your content and SEO for a particular region so that native speakers can find your product or service with ease.

Make technology your friend with an optimized Global Content strategy

As video, text, and speech technology evolves to facilitate the quick translation of multiple languages, it’s vital your Global Content is aligned with innovation and correctly reworked for its intended markets. A Global Content Partner has the experience and expertise to tailor and optimize your messaging to the regions you’re targeting.

If you think your organization might benefit from our managed Global Content services, be sure to sign up for a two-day workshop. In the session, we’ll use actual data and examples from your business to show you exactly what’s working in your processes and what can be improved.

