Products Archives | Rubric

Rebecca Metcalf
September 2, 2019

For companies with growing product portfolios, keeping track of product information can be a major headache. When you have content related to hundreds or even thousands of products spread across multiple systems and countless deliverables, it becomes all too easy for inconsistencies and mistakes to slip in.

This is where Product Information Management (PIM) systems come in. A PIM system acts as a single source of truth for all product-related information, helping businesses ensure that all of the data and content they publish is consistent and up-to-date.

Among the most valuable benefits of PIM systems – and perhaps the most compelling reason for adopting one – is that they can drive immense savings through content reuse. Modular content can be authored once and then redeployed in any context that requires the same information, and that also means it only needs to be translated once.

 

What is a PIM system?

PIM systems are essentially large databases for centrally storing, managing, and updating information at a product-specific level. A PIM system isn’t just for technical data: it contains all of the content associated with the product – including names, feature and benefit descriptions, product images, specifications, pricing, and more.

Other authoring and content management systems in your infrastructure can integrate directly with the PIM system and dynamically pull the most up-to-date information on-demand. This takes all the complexity out of managing product info. There’s no need to author or update the same thing multiple times, and there’s no risk of publishing inconsistent or inaccurate content. The latter is particularly valuable in industries such as manufacturing, where the accuracy of technical documentation goes hand in hand with user safety.

Last but not least, by enabling businesses to author and translate content just once (rather than multiple times across content types), PIM systems can lead to significant time and cost savings for global content strategies. Content authors can work far more efficiently since they don’t have to spend time tracking down or rewriting content. And for localization, the ability to reuse text that has already been translated can massively reduce the overall translation volume – which is the primary contributor to cost.

 

Creating content and authoring for a PIM system

To make the most of a PIM system, it’s crucial to bear reuse and localization in mind from the get-go when structuring and authoring content.

First and foremost, writers should always refer to the appropriate style guide and glossary. The PIM system is the golden source of information for your products, so it’s paramount that content structures, terminology, units of measurement, and tone are internally consistent. Following a style guide and glossary will make text easier to understand for consumers and translators, leading to a better end user experience and higher quality localization.

Secondly, to maximize reusability and reduce the potential for linguistic issues when content is used in different contexts, we recommend authoring content in small, standalone components. These small chunks of content are easier to fit into different contexts than longer sentences or paragraphs. Bear in mind that whilst numeric content and measurements may be suitable for insertion into flowing text, textual content is better suited to standalone fields. These considerations should be factored into the structure of PIM content and the authoring of all content that will use PIM data.

 

Example

Without the use of a PIM system, product features may be written as an independent statement. For example:

Bore Size from 8″ to 16″

When using a PIM system, however, PIM fields should be structured to allow the statement to be broken down into smaller, reusable chunks. This maximizes reuse and ensures the data works in different contexts. The above statement could be broken down as:

English Content    PIM field
Bore Size          {Bore Size_name}
8″                 {Bore Size_Min}
16″                {Bore Size_Max}

The new statement would then be created by combining fields and automatically pulling in the relevant data:

{Bore Size_name}: {Bore Size_Min} – {Bore Size_Max}

Which would produce the following in English:

Bore Size: 8″ – 16″
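
For illustration, here is a minimal sketch of this kind of field-based assembly. The record and field names below are hypothetical stand-ins (normalized to plain identifiers); real PIM systems have their own field-naming and templating conventions.

```python
# Sketch: composing a product statement from modular PIM fields.
# The record below is a hypothetical stand-in for a PIM lookup.
pim_record = {
    "bore_size_name": "Bore Size",
    "bore_size_min": "8\u2033",   # 8 inches (double prime)
    "bore_size_max": "16\u2033",  # 16 inches
}

# The template references fields instead of hard-coding text, so an
# update to the PIM record propagates to every context that uses it.
template = "{bore_size_name}: {bore_size_min} \u2013 {bore_size_max}"

statement = template.format_map(pim_record)
print(statement)  # Bore Size: 8″ – 16″
```

Because the template only references fields, translating the field values once is enough to regenerate the statement in every target language.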

The same data could also be used in different contexts – in product specification tables, for example. PIM data that is reusable and internationalized saves time and ensures the information is accurate.

 

Choosing a PIM system

PIM systems are rapidly gaining popularity, and the number of solutions on the market is growing fast. The best option for your business will depend on your specific needs, but regardless of industry, we believe that there are two core features that you should always look for in a good PIM system:

  1. The ability to integrate with your authoring and content management systems. As explained above, this will eliminate the need to manually update product information in each system. Instead, they will all dynamically pull data from a single source of truth. Once you have a PIM system that integrates with existing processes and technologies, you can maximize ROI by ensuring that all your content teams are using it. PIM systems deliver the greatest value when they are used throughout the business as part of a holistic strategy.
  2. The ability to export and re-import content for translation. Exporting text in a structured, editable format (such as XML or XLIFF) will enable your LSP to work on your content using their preferred tools, leading to faster turnaround times and lower costs. Once translation is complete, it should be easy to import the new content into the PIM system.
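
As a rough sketch of what such a round trip involves (the XML layout here is illustrative, not any particular PIM system’s or XLIFF’s actual schema):

```python
import xml.etree.ElementTree as ET

# Sketch: exporting PIM fields to a structured XML file for translation,
# then re-importing the translated values. The format is invented for
# illustration; real systems export XLIFF or a documented XML schema.
fields = {"bore_size_name": "Bore Size", "material_name": "Stainless steel"}

# Export: one <field> element per PIM entry, keyed by id.
root = ET.Element("export")
for key, value in fields.items():
    el = ET.SubElement(root, "field", id=key)
    el.text = value
exported = ET.tostring(root, encoding="unicode")

# ...the LSP translates the text in their preferred tools (simulated
# here with a simple string replacement)...
translated = exported.replace("Stainless steel", "Acier inoxydable")

# Re-import: read the translated values back, keyed by field id.
imported = {el.get("id"): el.text for el in ET.fromstring(translated)}
print(imported["material_name"])  # Acier inoxydable
```

The key point is that the structure survives the round trip untouched, so the translated values drop straight back into the PIM system.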

 

Learn more

Whether you’re already facing challenges managing your product information, or looking to take control before things get out of hand, adopting a PIM system can be an excellent way to reduce complexity, cut costs, and unlock new growth opportunities. Subscribe to the Rubric Blog to learn more about PIM systems and discover other ways to optimize your global content strategy. Our next article will dive into HMI app and hardware translation.


Rebecca Metcalf
July 1, 2019

This week we have a guest co-author, Michael Hall from Yaskawa. As Manager of Technical Communications, he’s responsible for all aspects of technical document production for the U.S. market. Learn more below:

 

The destruction of the Mars Climate Orbiter is a notorious example of what can happen when numerical standards get confused. A programming error in one piece of software produced numeric results in United States Customary Units (USCS) instead of the intended Metric (SI). This error caused the $300 million Orbiter to approach Mars at the wrong trajectory and be destroyed during orbital insertion. This was an error of software development rather than documentation, but the same principle applies – don’t let your technical writing lead to the next $300 million mistake!

Accuracy is the cornerstone of technical writing. Engineers and end-users depend on precise documentation to safely maintain and operate equipment. Mistakes in writing or localization can put lives in danger or lead to costly equipment damage. This is especially true of numeric content, particularly measurements, where even a single digit or symbol out of place can lead to wildly incorrect assumptions or calculations – with potentially catastrophic results.

With that in mind, ensuring the accuracy of numerals and measurements should be a top priority for every technical writer and translator, and for every organization where users depend on accurate technical documentation. In this article, we’ll take a look at some key considerations and best practices to guarantee numeric accuracy in documentation.

 

Know your standards

It almost goes without saying, but you should always be following the appropriate numerical standard – a system of measurement that clearly defines the name, symbol, and quantity of each unit.

The most common standard today is the International System of Units (SI), the modern form of the metric system. However, note that applicable standards do vary by industry and country. The most important example of this is the United States, where SI is used for science and medicine, but consumers and the manufacturing sector typically use the USCS instead.

The United Kingdom is another notably peculiar case. In the UK, SI is the official system, yet imperial units are still widely used in everyday life and in specific circumstances, such as road signage.

When creating technical documentation, ensure you are following the correct standard for your target audience and industry, and make it clear from the outset which system you are using. If multiple standards are included in a single document, consider which should appear first.

 

Keep it consistent

Accuracy and consistency go hand in hand, especially when translation and localization come into play. In practice, this means it’s vital to handle all numerals and measurements in exactly the same way throughout each document.

Following a standard is a good first step, but manufacturers may have unique or specific requirements for numeric content in localized documents.

As best practice, we strongly advise that technical writers go a step further by creating a style guide that clearly lays out rules for dealing with numeric content. A good style guide is often the result of collaboration between the manufacturer and the language service provider.

For example, a style guide will help linguists with rules for rounding or unit conversion (the latter being particularly important when the audiences for localized documentation use different numerical standards).
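
For instance, a conversion rule might state that inch values are converted to millimetres and rounded to one decimal place, half up. A sketch of such a rule (the policy itself is illustrative, not drawn from any particular standard):

```python
from decimal import Decimal, ROUND_HALF_UP

# Sketch of a style-guide rule: "inches are converted to millimetres
# and rounded to one decimal place, half up". The policy is illustrative.
MM_PER_INCH = Decimal("25.4")

def inches_to_mm(value: str) -> str:
    """Convert an inch value to millimetres per the style-guide rule."""
    mm = Decimal(value) * MM_PER_INCH
    return str(mm.quantize(Decimal("0.1"), rounding=ROUND_HALF_UP))

print(inches_to_mm("8"))     # 203.2
print(inches_to_mm("0.25"))  # 6.4  (6.35 rounds half up)
```

Using decimal arithmetic rather than floats matters here: a conversion rule is only useful if every linguist applying it gets exactly the same result.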

You should work with marketing, legal teams, and your translation provider to create style guides for each locale that you are targeting, taking into account all industry-specific standards and safety regulations. Building style guides can be a daunting prospect, as they must contain more than rules for numeric content: capitalization, word choice, and punctuation are also key components of a good style guide. Writing source English content to a language standard such as Simplified Technical English (ASD-STE100) can also dramatically simplify the process.

 

Check, re-check, and check again

Even the most careful writers following the most well-defined standards and style guides will occasionally make mistakes. That’s why it’s crucial to use a robust system of checks to flag any inconsistencies – both in the original and localized texts. Ask your language service provider about the quality control processes they apply when translating or localizing your documents. These checks can be largely automated, but it’s always worth including at least one human review.

When dealing with multiple languages, we suggest paying particular attention to the original version. Identifying issues in the original will help to anticipate and prevent issues in translation, while any mistakes or inconsistencies in the original will likely be carried over into localized versions.

 

Localize in bulk – but be careful!

Localizing your numeric content all at once can be an excellent way to ensure consistency and save time. That being said, be very careful about making global changes. Anyone who has ever used a “replace all” function knows how easy it is to inadvertently create gibberish words when making sweeping changes to text. It is similarly easy to accidentally break numbers and codes – and these mistakes can be much more difficult to spot. For this reason, we recommend only making global changes to numerals, and not to measurements. If global changes are made, we recommend a thorough comparison of the target against the source to expose any unintended results, followed by adjusting the global replacement routine to eliminate those errors.
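
To illustrate the difference between a sweeping change and a targeted one, here is a sketch. The part-number pattern is a simplified, hypothetical heuristic, not a general-purpose rule:

```python
import re

text = "Set the gap to 0.5 mm. Order part AB-10.200 if worn."

# A naive replace-all converts every full stop, corrupting both the
# sentence punctuation and the part number:
naive = text.replace(".", ",")
# -> "Set the gap to 0,5 mm, Order part AB-10,200 if worn,"

# Safer: shield part numbers (here, a simplified letter-prefixed code
# pattern) behind tokens, convert decimal separators, then restore.
part_pattern = re.compile(r"\b[A-Z]+-[\d.]+\b")
parts = part_pattern.findall(text)
protected = part_pattern.sub("\x00PART\x00", text)
converted = re.sub(r"(?<=\d)\.(?=\d)", ",", protected)
for part in parts:
    converted = converted.replace("\x00PART\x00", part, 1)

print(converted)  # Set the gap to 0,5 mm. Order part AB-10.200 if worn.
```

The targeted version only touches a dot sitting between digits and leaves shielded codes alone – exactly the kind of constraint a global replacement routine should encode.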

Also, remember that not all numeric content actually needs to be localized. Product and part numbers, for instance, will typically remain the same in all versions. Excluding this content from translation can significantly reduce the word count and, by extension, the cost. With the right tools and authoring architectures (such as the Darwin Information Typing Architecture (DITA)), it’s easy to flag specific numeric content for exclusion.

 

Putting it into practice

At Rubric, we recently put these principles into practice while designing a localization process for our client, Yaskawa America, Inc. – the world’s largest manufacturer of AC Inverter Drives, Servo and Motion Control, and Robotics Automation Systems.

Yaskawa’s technical documentation includes a large amount of numeric content. They use leading-edge database publishing tools to enforce strict control and consistency of content for product instructions, and they require the same level of quality when those instructions are sent for translation. To maximize accuracy and consistency, we created a process that enables translators to identify and work on all the appropriate numerals at once by extracting the data from DITA and scalable vector graphics (SVG) files – numeric content that does not require input from translators is excluded. We also implemented automated checks to flag any anomalies after translation and established a final human review process to ensure quality.

These steps have relieved Yaskawa from the time-consuming burden of checking numeric content post-translation. Yaskawa’s quality assurance process for language translation enables them to review and approve final publications much more quickly.

If you’re interested in achieving similar results, consider partnering with Rubric. We’ve spent almost 25 years localizing technical manuals, developing bespoke tools, and building trust – we’ll work with you to transform your localization approach with robust processes for each project. Subscribe to our blog below to get the latest updates on translation and localization, and how they affect your business.


Ian A. Henderson
June 19, 2019

The Internet of Things (IoT) is an interconnected universe of devices, data, and software. Simply put, IoT connects physical devices — TVs, fridges, headphones, etc. — to the internet via sensors that send data to cloud networks for transformation into useful information. From experiential marketing technology that enhances event management, to an app that works with the thermostat in your office to keep you comfortable, the sky’s the limit when it comes to what IoT can do.

Essentially, IoT expands the reach of the internet to improve our everyday lives with data. It’s showing no signs of slowing down, either: the market is on target to deliver over $3 trillion annually by 2026. But how do IoT and its global, border-leaping connectivity affect localization and translation?

IoT’s evolution is affecting localization on a global scale

The Internet of Things is always evolving, making it tough to decide what needs to be translated and what is superfluous. Add a product’s ever-changing lifecycle into the mix, and localization for global markets can quickly become overwhelming.

One of the first questions to consider is what your device will interact with: do your headphones rely on Google Assistant or Amazon Alexa? Do those services support your markets — and if not, do you localize your content in anticipation of those services catching up in that particular language? Make sure to consider the timeline for future updates of your product and the resolution of any mismatch in language availability to ensure a positive user experience.

IoT requires fast, accurate translations

The speed of device and UX interaction directly impacts translation, and consistency is crucial to ensure that devices remain compatible. For example, how can Alexa play a song through a smart headset if the voice prompts are incorrect? Cloud-based products like Alexa are developed at such a pace that manufacturers of third-party devices have to scramble to keep up with language updates and additions.

IoT’s constant evolution is also changing the product development cycle. This quick delivery of digital information means that Global Content Partners are having to become more agile, and their tools more automated to keep up with the ecosystem’s time-critical translation workflows. Thanks to the collaboration of industry professionals, translation technology has developed apace with the world of AI and IoT. Expert knowledge of how to leverage TMS, CAT tools, and machine translation (MT) is essential for tackling the volume and speed of IoT development. In addition, localization technology like automated translation can save time and money throughout a product’s lifecycle by populating text with pre-existing translations. This targeted automation also gives your linguists the time they need to focus on the more demanding, high-level localization tasks.
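
At its simplest, this kind of pre-population is an exact-match lookup against a translation memory. A sketch (real CAT tools also perform fuzzy matching and in-context checks):

```python
# Sketch: pre-populating strings from a translation memory (TM).
# Exact-match lookup only; real CAT tools also perform fuzzy matching.
translation_memory = {
    "Power on": "Mise sous tension",
    "Connect to Wi-Fi": "Connexion au Wi-Fi",
}

source_strings = ["Power on", "Connect to Wi-Fi", "Pair your headset"]

pretranslated = {}
needs_translator = []
for s in source_strings:
    if s in translation_memory:
        pretranslated[s] = translation_memory[s]  # reused at no cost
    else:
        needs_translator.append(s)                # new work for linguists

print(needs_translator)  # ['Pair your headset']
```

Every string resolved from the TM is one a linguist never has to touch, which is where the time and cost savings come from.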

Strategic planning

A further way to meet quick turnaround times is to integrate your localization process into the development cycle from the beginning. By doing so, translators, engineers, and other stakeholders can analyze the product’s requirements, advise on the way forward, and align with your Translation Management System (TMS).

Businesses would do well to bring their Global Content Partners into the fold early-on for advice and guidance. Early collaboration opens the channels of communication necessary for iterative localization throughout the product’s lifecycle.

Localization is more than just translation. It’s a strategic foundation from which to deploy critical, targeted translations to your global markets. And just as localization is more than translation, a trusted Global Content Partner is more than an LSP. An experienced Global Content Partner like Rubric will analyze your organization’s global markets and content, and then advise on a localization strategy to achieve your global goals.

 

Don’t forget to subscribe to our blog below to get the latest updates on translation, localization and how things like IoT can affect your business’ strategy.


Rebecca Metcalf
May 14, 2019

Technical writing invariably involves a great deal of content reuse. If you’ve ever authored technical documents across multiple products and projects for the same organization, you’ve undoubtedly found yourself repeating elements of text and style many times over.

Streamlining this content reuse can be one of the best ways to improve the efficiency of your authoring and localization processes. And, with the right tools and strategy, it’s easier than you might think.

The Darwin Information Typing Architecture (DITA) is an open standard, XML-based architecture for writing and publishing technical documents, and it was built from the ground up to support content reuse. DITA encourages a modular approach to technical writing where topics – the basic units of information within DITA – are capable of standing alone and being reused in many different documents. The focus is on content rather than layout, with the goal of maximizing reuse to save time and resources.

DITA was originally developed by IBM almost 20 years ago. It has received numerous updates since then, and it is experiencing a renaissance with the release of new tools and Lightweight DITA – a simplified version for those who do not require the full feature set, or who prefer to work in HTML5 or Markdown.

Switching from traditional word editors to DITA can seem like a daunting prospect, but if used correctly, DITA is an invaluable tool that drives effective writing and localization. That’s why we’ve put together this article to give you some tips on how to get started.

 

The right tools

The first stage in any DITA implementation is choosing your tooling. If you’re new to the architecture and looking to explore its potential, the DITA Open Toolkit is an excellent starting point for experimentation. It’s a free, open-source publishing engine, and it actually serves as the foundation for much of the DITA software ecosystem – including many of the most popular, proprietary authoring and content management applications.

Oxygen XML Editor 21.0 interface

When you’re ready to implement DITA in earnest, tools such as Oxygen XML Editor are the natural next step. This kind of software provides an easy-to-use visual interface for creating and editing technical documentation, much like a typical word processor. But unlike a word processor, these tools come with built-in DITA support, enabling writers to manage their modular content units and effortlessly reuse them via content references.

Content references can be used to pull a huge variety of previously created content into a new project. This can range from a single phrase, to a topic, to an entire collection of connected content.

 

Don’t let localization be an afterthought

The benefits of DITA aren’t limited to the initial authoring process – it can also significantly streamline localization. The key here is to make sure that you factor in localization right from the outset.

Content created in DITA can be easily converted to XLIFF for translation. But before you get to that point, there are a number of things you can do to make your content more localization-friendly:

  • Write in International English rather than American or British English. Avoid colloquial expressions, idioms, and overly complex sentences.
  • Determine whether there is anything that should not be translated, such as lists of parameters and part numbers. Most DITA tools will give you the option to flag this content for exclusion, which can make a huge difference to localization costs by reducing the scope of work.
  • In cases where you need to customize your content for different products within a range – or for different outputs for the same product (e.g. PDF manual vs online help manual) – use DITA’s conditional text feature to clearly indicate which content should vary, and in what way.
  • Develop a glossary to precisely define terms, especially acronyms and abbreviations.
  • Consider using a controlled language (for instance, Simplified Technical English) with a limited vocabulary and fixed style guidelines. This will improve the consistency of your content and minimize the risk of ambiguity for localization service providers.
  • Use the SVG format for images that include annotations or callout text. SVG graphics are the easiest to edit with computer-assisted translation tools.
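
As a sketch of how flagged content reduces translation scope, the snippet below counts translatable words in a DITA-like topic while skipping elements marked translate="no" (the topic content is invented for illustration):

```python
import xml.etree.ElementTree as ET

# Sketch: counting translatable words in a DITA-like topic, skipping
# elements flagged translate="no" (e.g. a part number). Invented content.
topic = ET.fromstring("""
<topic>
  <title>Replacing the filter</title>
  <p>Order replacement part <ph translate="no">AB-10.200</ph>
     before starting.</p>
</topic>
""")

def translatable_words(el):
    count = 0
    if el.get("translate") != "no":
        if el.text:
            count += len(el.text.split())
        for child in el:
            count += translatable_words(child)
    if el.tail:  # tail text belongs to the parent's content, so always count it
        count += len(el.tail.split())
    return count

print(translatable_words(topic))  # 8 ("AB-10.200" is excluded)
```

Since translation is typically priced per word, excluding flagged elements before the word count is taken translates directly into lower costs.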

Following these suggestions from the start of a project will enable you to move seamlessly from the initial content creation to localization. And once the localization is complete, you will be able to use a DITA publishing engine to generate deliverables for each of your target languages with just a few simple commands. Authors simply have to create and follow well-defined layout rules, and DITA takes care of the rest.

An additional advantage to using DITA for localization is that after a topic has been translated once, it does not have to be translated again – reducing both cost and turnaround times in localization when content is reused.

 

Leverage the experts

Working with experienced specialists is the best way to guarantee a smooth DITA adoption and avoid localization complications. At Rubric, our experts know DITA inside and out, and they are ready to provide their best practice expertise to help you plan your DITA implementation strategy.

Send us some of your own collateral and we can advise on DITA best practices! Attach some of your source documents to an email and Ian Henderson, our CTO, will reach out with some tips and guidance to help you embed structured authoring and simplify your content management.

Stay tuned for the next couple of weeks as we cover Content Authoring, Product Information Management (PIM) systems and other topics that can help drive your localization strategy.


Dominic Spurling
April 24, 2019

From a software engineering perspective, the localization process can be an entropy-increasing stage in your devops pipeline.

Localization tools need to extract a snapshot of the user experience, usually from resource files, and generate translated equivalents without adversely affecting the integrity of the application. User interface strings must be unpicked from (sometimes deeply nested) mark-up and presented to translators, who prepare target language strings, which must be ready to nest back into place within identically structured mark-up.

The tendency for small inconsistencies in the source to become large ones in target-language files, and for non-breaking anomalies to become breaking ones – this is entropy in UI projects.

At Rubric, we use a mix of automated tests and manual checks by both linguists and engineers to help minimize this effect. Below I’ll work through a typical example to show how you can help your global content partner by minimizing entropy at the start of the process. (Look out for the inconsistencies in the original source.)

An example resource file

The following XML is based on a typical resource file for an Android app:

<strings>
	<check_mobile_devices_wifi>
		<![CDATA[Check your mobile device’s Wi-Fi settings and make sure your mobile device is connected to your home network##REPLACE_WITH_HOME_NETWORK##.<br /><br />Or, if you still can't connect, click START OVER.]]>
	</check_mobile_devices_wifi>
	<we_are_here_to_help>
		<![CDATA[We&rsquo;re here to help]]>
	</we_are_here_to_help>
	<firmware_system_setup>
		<![CDATA[How would you like to connect your speaker to your network?]]>
	</firmware_system_setup>
</strings>

 

Step 1 – Identify content type and unwrap nested formats

The file is first put through an Android strings XML parser to extract the value of each key. Content within CDATA sections is identified as HTML and handed off to a secondary parser.

  • Note: the source contains two right single quotation marks. One of them is HTML-encoded as &rsquo; but the other is a literal character. This is an example of an inconsistency which could lead to problems down the line.
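
A minimal sketch of this first step, using Python’s standard-library XML parser (which exposes CDATA content as plain text):

```python
import xml.etree.ElementTree as ET

# Sketch: extracting each key's value from an Android strings resource.
# ElementTree exposes CDATA content as plain text, so no special
# handling is needed to read it (entities inside CDATA stay literal).
resource_xml = (
    "<strings>"
    "<we_are_here_to_help><![CDATA[We&rsquo;re here to help]]></we_are_here_to_help>"
    "<firmware_system_setup><![CDATA[How would you like to connect "
    "your speaker to your network?]]></firmware_system_setup>"
    "</strings>"
)

strings = {el.tag: el.text for el in ET.fromstring(resource_xml)}
print(strings["we_are_here_to_help"])  # We&rsquo;re here to help
```

Note that the &rsquo; entity survives this step untouched – because it sits inside CDATA, the XML parser treats it as literal text, which is exactly why a secondary HTML pass is needed.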

Step 2 – Parse HTML and protect tags and placeholders

Here the HTML entities are decoded, and HTML tags and application-specific placeholders are protected.
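
A sketch of this protection step; the token format and the placeholder pattern are illustrative choices, not the exact scheme we use:

```python
import html
import re

# Sketch: decode HTML entities, then shield tags and ##...## placeholders
# behind opaque index tokens so translators cannot alter them. The token
# format ({{0}}, {{1}}, ...) is an illustrative choice.
value = "We&rsquo;re here to help##REPLACE_WITH_HOME_NETWORK##.<br /><br />"

decoded = html.unescape(value)  # &rsquo; -> ’ (now consistent)

protected = []

def shield(match):
    protected.append(match.group(0))
    return "{{%d}}" % (len(protected) - 1)

# One pass protects both HTML tags and application placeholders.
pattern = re.compile(r"<[^>]+>|##[A-Z_]+##")
translatable = pattern.sub(shield, decoded)

print(translatable)  # We’re here to help{{0}}.{{1}}{{2}}
print(protected)     # ['##REPLACE_WITH_HOME_NETWORK##', '<br />', '<br />']
```

The translator sees only the translatable string with inert tokens; the original tags and placeholders are restored from the `protected` list when the target file is written out.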

Step 3 – Present translatable strings to translators

Translations are pre-populated from translation memory where possible and the translator fills any gaps which remain. The protected placeholders cannot be altered by the translator, but they may be re-arranged if required by the sentence structure of the target language.

Step 4 – Write out target files

This is often the most technically complex part of the process where inconsistencies in the source can become amplified. The translated segments are processed (through each of the above steps in reverse), eventually reconstituting the original format.

First, placeholders and tags are re-injected and special characters are re-encoded or escaped.

The escaped single quote will probably not do any harm if it is decoded at the right points down the line in your devops pipeline. However, if the source structure is internally consistent (less entropy!), this kind of ambiguity can be avoided.

Finally, the translated strings are re-injected into the original markup:

<strings>
  <check_mobile_devices_wifi>
    <![CDATA[Vérifiez les paramètres Wi-Fi de votre périphérique mobile pour vous assurer que ce dernier est connecté à votre réseau domestique##REPLACE_WITH_HOME_NETWORK##.<br /><br />Si vous ne pouvez toujours pas vous connecter, cliquez sur RECOMMENCER.]]>
  </check_mobile_devices_wifi>  
  <we_are_here_to_help>
    <![CDATA[Nous sommes là pour vous aider]]>
  </we_are_here_to_help>
  <firmware_system_setup>
    <![CDATA[Comment souhaitez-vous connecter l&rsquo;enceinte à votre réseau?]]>
  </firmware_system_setup>  
</strings>

 

How you can help your Global Content partner

As well as providing source files which are structured in a consistent way, there are a couple of other ways in which you can help optimize the localization process and enhance the quality of the end product:

  • Provide a complete set of files with every localization request

    At Rubric, we typically run diff reports at the end of every localization project in order to review changes in the English source and compare those against changes in the target files. This helps us to pick up any unexpected changes (for example, escaped characters introduced in error). Working with a complete set of files for each revision simplifies the diff process and makes reports easier to analyze.

  • Say something when you find anomalies

    If you find that you are having to apply fixes to localized resource files, please tell your Global Content partner, as this will enable them to correct any misconfigurations.




In 2014, Amway introduced The Voice as a platform for its independent business owners to communicate, collaborate, and share ideas. But while The Voice looked great on paper, Amway quickly realized that facilitating clear and productive communication was more complicated than they had initially expected. Amway Business Owners (ABOs) come from all corners of the globe, speaking a combined total of over 60 languages.



Expanding into global markets is an exciting prospect for any business. It holds the promise of reaching new customers, driving profitability, and adding international depth to the brand’s reputation. But as with every opportunity, there is a degree of risk and uncertainty. Will your brand messaging be effective at addressing local cultural sensibilities? Will local markets respond to your product design and packaging?

