In this article, we will design and implement an end-to-end Translations service, its supporting infrastructure, and client integrations for mobile, web, and backend services. We will support both application-content and user-content. We will start with requirements and then walk through the system design component by component:
- Admin Translations Portal
- Backend Service, APIs, and Kafka Integration
- Client Pipeline Integration
- Client Translations Sidecar/Agent
- Client Translations Libraries
- Performance Analysis and Conclusion
Internationalization is a common need across industries to prepare products for global audiences. A key element of this is translating text into the locale (region + language) of the user. Text may come from application-content, which is mostly static (e.g. button text, marketing content), or from user-content, which is highly dynamic (e.g. an API response of book titles). We expect a given translation to take up to 48 hours for translators to complete.
- Support hundreds of locales, geographic regions, and languages
- Support thousands of translations on a given page
- Support application-content, user-content, static, and dynamic text
- Low-latency rendering of content without negative performance impact
- Admin portal for translators to translate text, including contextual info for high quality
Admin Translations Portal
The first component we will design is the administrative portal for translators to translate text into a given locale and language. This will be a Single Page Application (SPA) using OpenID Connect (OIDC) for authentication and authorization. The frontend will be written in TypeScript with React.
There are three pages: a login page using OIDC, a list page with rows of text needing translation, including a search box and filters, and a detail page providing a form for translators to view contextual info and add translated text.
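The list page's search and filter behavior can be sketched as a pure helper, which keeps the logic testable outside the React component. Field names here are assumptions based on the list view described above.

```typescript
// Hypothetical shape of a row on the translations list page.
interface TranslationRow {
  id: string;
  title: string;
  status: "needs-translation" | "translated";
  updatedAt: string; // ISO-8601 timestamp
}

// Applies the search box (substring match on title) and the status filter.
function filterRows(
  rows: TranslationRow[],
  query: string,
  status?: TranslationRow["status"]
): TranslationRow[] {
  const q = query.toLowerCase();
  return rows.filter(
    (r) => r.title.toLowerCase().includes(q) && (!status || r.status === status)
  );
}
```

The React list page would call `filterRows` on each keystroke and render the result as table rows.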
Backend Translations Service
The translations service will provide APIs for viewing, adding, and updating translations. It will leverage a database for persisting translations. It will also integrate with a Kafka pub/sub channel for publishing updates which clients may subscribe to.
- List of translation objects containing title, status, and date updated
- Adds new required translations in bulk and returns UUIDs
- Translation object containing title, status, date updated, contextual info, link if available, and any other metadata
- Upserts translation in DB and pushes event to Kafka
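The API surface above can be sketched as TypeScript shapes. Field and type names are illustrative assumptions; the article fixes only which data each endpoint carries.

```typescript
type TranslationStatus = "needs-translation" | "translated";

// Object returned by the list and detail endpoints.
interface Translation {
  id: string;              // UUID assigned by the service
  title: string;           // original (default English) text
  status: TranslationStatus;
  updatedAt: string;       // ISO-8601 timestamp
  context?: string;        // contextual info shown to translators
  link?: string;           // link to where the text appears, if available
  translatedText?: string; // present once a translator completes the work
}

// Bulk-add request/response for enqueuing newly required translations.
interface BulkAddRequest {
  locale: string;          // e.g. "fr-CA" (region + language)
  items: { title: string; context?: string }[];
}
interface BulkAddResponse {
  ids: string[];           // one UUID per submitted item
}

// Illustrative instance of a translation awaiting work:
const example: Translation = {
  id: "hypothetical-uuid-0001",
  title: "Checkout",
  status: "needs-translation",
  updatedAt: "2024-05-01T12:00:00Z",
};
```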
A database will be critical for storing and querying translations. This data is not highly relational and will require significant scale. Access patterns will consist of key lookups and paginated lists with filtering. We will leverage Amazon DynamoDB. Keys will use UUIDs, and objects will contain region, language, contextual info, translation status, original text, and optionally translated text.
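A minimal sketch of the item layout and the key-lookup access pattern follows. The table name and attribute names are assumptions; the article fixes only that keys are UUIDs and which attributes each object carries.

```typescript
// Hypothetical DynamoDB item for the translations table.
interface TranslationItem {
  id: string;                                // partition key: translation UUID
  region: string;
  language: string;
  context?: string;                          // contextual info for translators
  status: "needs-translation" | "translated";
  originalText: string;
  translatedText?: string;                   // present once translated
}

// Params for the key-lookup access pattern (GetItem). The paginated, filtered
// list pattern would be served by a global secondary index, e.g. on status.
function getItemParams(id: string): { TableName: string; Key: { id: string } } {
  return { TableName: "Translations", Key: { id } };
}
```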
The backend service could run in a multitude of compute environments ranging from serverless Lambdas, to containers running on Fargate or Kubernetes (EKS), to EC2 virtual machines. Given we require low latency, significant scale, and want to minimize operational load, we will go with Fargate containerized instances for now. AWS Fargate provides a containers-as-a-service platform for quickly deploying containers with autoscaling and minimal operational load. A load balancer will distribute requests across backend containers.
For sending translations to clients we could leverage several patterns such as clients polling the Translations service or publishing events via Pub/Sub. Given the lengthy and highly-variable amount of time (up to 48hrs) between events for a given translation UUID (e.g. needs-translation, translated) and the large number of clients, Pub/Sub is a good design pattern here.
There are numerous Pub/Sub services we could leverage, such as Apache Kafka, Redis Pub/Sub, and even Amazon SNS. Kafka is an event-streaming platform well suited to publishing, processing, and subscribing to events. It provides a highly durable, disk-based, replicated Pub/Sub model, which we will leverage here. Each time text is translated, an event will be published with the relevant UUID.
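The shape of such an event can be sketched as below. The topic name and payload fields are assumptions; only the message construction is modeled here so the sketch stays self-contained, but a kafkajs-style producer would pass the result straight to `producer.send(...)`.

```typescript
// The message envelope a Kafka producer expects: a topic plus keyed messages.
interface PubSubMessage {
  topic: string;
  messages: { key: string; value: string }[];
}

// Builds the event published when a translation completes. Keying by the
// translation UUID keeps all events for one translation on one partition,
// preserving their order for subscribers.
function translationEvent(
  uuid: string,
  locale: string,
  translatedText: string
): PubSubMessage {
  return {
    topic: "translations.updated", // topic name is an assumption
    messages: [
      { key: uuid, value: JSON.stringify({ uuid, locale, translatedText }) },
    ],
  };
}
```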
Client Pipeline Integration
With the backend Translations service squared away, we will turn our attention to clients. The first step to consider is enqueuing translations. Translators may take up to 48 hours to translate text into a specified locale, so we want to start the translation process as soon as possible.
For application-content, ideally this process starts as soon as mobile or frontend engineers commit code containing new translation tokens. Tokens will include default English text, a description, and options for pluralization flexibility. For enqueuing, we can use either Git hooks or integration into the CI/CD pipeline. We will go with the latter: every time a new commit is merged, the continuous integration pipeline will make a POST call to the Translations service to bulk-add the newly required translations. In the response, the Translations service will return a list of UUIDs, which will be persisted to disk.
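The CI step above might look roughly like the following. The endpoint path, field names, and output filename are assumptions; the request-body construction is separated out so it can be tested without a live service.

```typescript
import { writeFileSync } from "node:fs";

// A translation token as committed by mobile/frontend engineers.
interface Token {
  name: string;
  defaultText: string;    // default English text
  description?: string;   // contextual info for translators
}

// Builds the bulk-add request body from the tokens found in the new commit.
function bulkAddBody(locales: string[], tokens: Token[]) {
  return {
    locales,
    items: tokens.map((t) => ({ title: t.defaultText, context: t.description })),
  };
}

// Hypothetical CI entry point: POST the new tokens, then persist the returned
// UUIDs to disk for later lookup. Requires Node 18+ for the global fetch.
async function enqueue(serviceUrl: string, tokens: Token[]): Promise<string[]> {
  const res = await fetch(`${serviceUrl}/translations`, { // path is an assumption
    method: "POST",
    headers: { "content-type": "application/json" },
    body: JSON.stringify(bulkAddBody(["fr-FR", "de-DE"], tokens)),
  });
  const { ids } = (await res.json()) as { ids: string[] };
  writeFileSync("translation-ids.json", JSON.stringify(ids));
  return ids;
}
```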
Backend services enqueue user-content immediately once they finish processing a request. The user-content is presumably persisted to a database owned by the client service, and a POST call is made to the Translations service to bulk-add the newly required translations. In the response, the Translations service will return a list of UUIDs, which will likewise be persisted.
Client Translations Sidecar/Agent
Next, we discuss the client integration providing the retrieval and rendering of translated text for clients.
There are three types of clients we must consider: (1) mobile engineers building Android and iOS mobile applications, (2) backend engineers building services with client data needing translation, and (3) frontend engineers building web applications. Let’s discuss each in more detail.
Mobile clients are easiest in that integrating translated text can occur offline, with different packages/bundles published to separate geographies via the Android and Apple app marketplaces. During the packaging and publishing workflow, a new step will be added which takes tokenized translations, pulls localized translated text, and persists it as file(s) in the mobile application package/bundle.
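The file-generation part of that build step can be sketched as a pure function. The iOS `Localizable.strings` format shown is one concrete target; an Android build would emit `strings.xml` instead.

```typescript
// One resolved translation: the token committed by engineers plus the
// localized text pulled from the Translations service.
interface ResolvedTranslation {
  token: string;
  text: string;
}

// Emits the body of an iOS-style Localizable.strings file for one locale.
function toStringsFile(entries: ResolvedTranslation[]): string {
  return entries.map((e) => `"${e.token}" = "${e.text}";`).join("\n");
}
```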
Web and Backend Service Clients
Both web and backend service clients will require a separate process to subscribe to translations as they are updated and persist them as file(s) on the server. The client consists of two components: (1) an agent with local cache which subscribes to Kafka Pub/Sub events and (2) a library for integrating translations into the app or service. For web clients, this will be on the server which serves frontend web content.
The agent subscribes to Kafka Pub/Sub events for the relevant translation UUIDs. As the Translations service completes a translation, it publishes an event for the relevant UUID, and all subscribing clients receive the translated text, which is then persisted to a local cache.
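The agent's event handling can be sketched as below. The cache is modeled as an in-memory Map for simplicity; a real agent would persist it to disk (e.g. SQLite or flat files), and the event fields are assumptions.

```typescript
// The payload carried by a Kafka translation event.
interface TranslationEvent {
  uuid: string;
  locale: string;
  translatedText: string;
}

// Applies one Pub/Sub event to the local cache, keyed by UUID + locale so one
// translation can carry text for many locales.
function applyEvent(cache: Map<string, string>, ev: TranslationEvent): void {
  cache.set(`${ev.uuid}:${ev.locale}`, ev.translatedText);
}

// With a kafkajs-style consumer, the wiring would look roughly like:
//   await consumer.subscribe({ topic: "translations.updated" });
//   await consumer.run({
//     eachMessage: async ({ message }) => {
//       applyEvent(cache, JSON.parse(message.value!.toString()));
//     },
//   });
```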
Client Translations Libraries
The final component is the client library. The client library is responsible for integrating already translated text into the mobile app, web app, or backend service response. For mobile apps this happens at build time, whereas for web and backend service clients this happens continuously as translation events are received via Kafka and persisted to a local cache.
When the web or backend service receives a new request and prepares its response, the client library interpolates translated text, replacing translation tokens with matching translation text. For web apps, this is via a custom React component. For mobile apps and backend services, this is via per-language clients.
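The interpolation step itself can be sketched as a small pure function. The `{{name}}` token syntax is an assumption; real client libraries vary.

```typescript
// Replaces translation tokens in a template with text from the local cache.
// Tokens with no cached translation are left intact rather than rendered blank.
function interpolate(template: string, cache: Map<string, string>): string {
  return template.replace(/\{\{(\w+)\}\}/g, (match, token) => cache.get(token) ?? match);
}
```

For example, with `cache.set("welcome", "Bienvenue")`, calling `interpolate("{{welcome}}, Ada!", cache)` yields `"Bienvenue, Ada!"`, while an unknown token such as `{{missing}}` passes through unchanged.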
Performance Analysis and Conclusion
This design ensures high performance for an excellent customer experience. By enqueueing new translations as early as possible we ensure quick turnaround time. By using Kafka Pub/Sub we minimize load on the backend Translations service. By using local client caches, clients can depend on low-latency interpolation of translated text. Together this provides a highly-performant translations solution.
That covers, at a high level, the components required to build an end-to-end translations solution. We discussed the admin translations portal, backend translations service, client pipeline integration, client sidecar/agent, and client libraries.