    Bytes Speak All Languages: Cross-Script Name Retrieval via Contrastive Learning

    By Editor Times Featured, April 26, 2026


    When a screening system checks a name against a watchlist, it faces a silent failure mode that no one talks about. Type “Владимир Путин” into a system indexed on “Vladimir Putin” and most name-matching approaches return nothing. The two strings share zero characters, so edit distance is meaningless, phonetic codes fail (they assume Latin), and BM25 gives up entirely.

    This isn’t an obscure edge case. Immigration databases, hospital record systems, and financial compliance pipelines deal with it every day. And yet, the dominant approaches to the problem are either classical (edit distance, Soundex variants) or heavyweight (fine-tune a multilingual LLM on a few hundred manually labeled pairs). In this post, I’ll walk you through how we trained a compact transformer encoder from scratch on raw UTF-8 bytes, with no tokenizer, no pretrained backbone, and no script detection, to solve cross-script phonetic name retrieval. We achieved 0.775 MRR and 0.897 R@10 across 8 non-Latin scripts, reducing the performance gap between Latin and non-Latin queries by 10x over the best classical baseline.

    The full code is on GitHub. This post covers the ideas and the engineering.

    Why is this hard?

    The problem sits at the intersection of three things that don’t cooperate:

    Scripts are disjoint symbol sets. “Schwarzenegger” and “שוורצנגר” (Hebrew) have no shared characters. Edit distance, the go-to for fuzzy matching, produces a maximum-distance score every time a script boundary is crossed. Phonetic hashing (Double Metaphone, Soundex) encodes approximate English pronunciation, so it is useless for non-Latin queries by design.

    Romanization is not a function. The Chinese name written as “张” maps to Zhang, Chang, and Cheung depending on dialect, romanization standard, and historical convention. The Korean “박” maps to Park, Pak, and Bak. Any approach that normalizes to a canonical Latin form (like ICU transliterate) gets the right answer for one convention and fails for the others.

    Names carry no semantic context. Dense retrieval methods like DPR and BGE-M3 are powerful for sentence-level tasks because the surrounding words provide semantic grounding. For a 2-word person name there is no context to compensate for surface mismatch. Chari et al. (2025) showed that even strong multilingual retrievers degrade severely when queries are transliterated rather than written in their native script.

    The insight behind our approach: every Unicode character decomposes deterministically into 1 to 4 bytes from a fixed 256-symbol alphabet. “Владимир” and “Vladimir” are different byte sequences, but a model trained contrastively on enough phonetic pairs can learn to map them to nearby vectors. The vocabulary is universal by construction.
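
    As a quick illustration (a minimal sketch using only the Python standard library, not code from the repo), here is what that universal byte alphabet looks like in practice:

    for name in ["Vladimir", "Владимир", "كاثرين"]:
        byte_ids = list(name.encode("utf-8"))
        print(f"{name!r}: {len(byte_ids)} bytes, values {byte_ids[:8]} ...")
    # "Vladimir" is 8 bytes, "Владимир" is 16 (each Cyrillic letter takes 2 bytes),
    # and every value falls in 0..255, so a 256-entry embedding table covers all scripts.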

    Building Training Data at Scale

    You can’t train this model without data, and there is no dataset of 4 million cross-script phonetic name pairs lying around. We built one with a 4-stage LLM pipeline.

    Data generation pipeline (Image by author)

    Stage 1: Stratified sampling from Wikidata

    We started with 2 million person-name entities from Wikidata, which provides canonical English names plus partial cross-script labels (some entities have Russian or Arabic names in their Wikidata record, most don’t). Naively sampling from this produces a dataset dominated by English-only names. We stratified by script-coverage bucket (0, 1-2, 3-4, 5+ non-English labels) and sampled proportionally within each bucket, yielding 119,040 entities with balanced coverage.
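
    A sketch of that stratification step, assuming each Wikidata record carries a labels dict keyed by language code; the helper names and the equal per-bucket quota are illustrative simplifications, not the exact pipeline code:

    import random
    from collections import defaultdict

    def coverage_bucket(entity: dict) -> str:
        # count non-English labels to decide the script-coverage bucket
        n = sum(1 for lang in entity["labels"] if lang != "en")
        if n == 0:
            return "0"
        if n <= 2:
            return "1-2"
        if n <= 4:
            return "3-4"
        return "5+"

    def stratified_sample(entities: list[dict], total: int, seed: int = 13) -> list[dict]:
        rng = random.Random(seed)
        buckets = defaultdict(list)
        for e in entities:
            buckets[coverage_bucket(e)].append(e)
        quota = total // len(buckets)          # simplification: equal quota per bucket
        sample = []
        for items in buckets.values():
            sample.extend(rng.sample(items, min(quota, len(items))))
        return sample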

    Stage 2: Phonetic Latin variants (Llama-3.1-8B)

    For each English anchor name, we asked Llama-3.1-8B-Instruct to generate 4 phonetic spelling variants, the kinds of mishearings and misspellings real people produce. The prompt was strict:

    Generate 4 DISTINCT phonetic spelling variants of this name
    as it sounds when spoken: "Catherine"

    Rules:
    - Each variant must be spelled differently from all others and from the original
    - Simulate how different people might mishear or misspell the name phonetically
    - Do NOT use nicknames, abbreviations, or shortened forms
    - Do NOT change language (stay in Latin script)

    Return a JSON array of exactly 4 strings, no explanation:
    ["variant1", "variant2", ...]

    Result for “Catherine”: ["Kathryn", "Katerin", "Kathrin", "Katharine"]
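
    Before a response like this enters the dataset it still has to pass the prompt's own rules; a hedged sketch of that validation step (the function name and the extended-Latin range check are mine, not necessarily what the repo does):

    import json

    def parse_variants(raw: str, original: str, expected: int = 4) -> list[str] | None:
        """Accept the LLM response only if it satisfies the prompt's rules."""
        try:
            variants = json.loads(raw)
        except json.JSONDecodeError:
            return None
        if not (isinstance(variants, list) and len(variants) == expected):
            return None
        seen = {original.lower()}
        for v in variants:
            if not isinstance(v, str) or v.lower() in seen:
                return None          # must differ from the original and from each other
            if any(ord(ch) > 0x024F for ch in v if not ch.isspace()):
                return None          # must stay in (extended) Latin script
            seen.add(v.lower())
        return variants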

    Stage 3: Cross-script transliteration (Qwen3-30B)

    For each English name and each of its Latin variants, we generated transliterations into 8 scripts: Arabic, Russian, Chinese, Japanese, Hebrew, Hindi, Greek, Korean. We used Qwen3-Coder-30B-A3B-Instruct-FP8:

    {
      "Catherine": {"ar": "كاثرين", "ru": "Катрин", "he": "קתרין", ...},
      "Kathryn":   {"ar": "كاثرين", "ru": "Катрин", ...},
      "Katharine": {"ar": "...", "ru": "...", ...}
    }

    Every stage is independently resumable: it reads existing output, builds a set of already-processed entity IDs, and skips them. A crash loses at most one in-flight batch.
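
    A sketch of that resumability pattern, assuming JSONL output keyed by an entity_id field (the file layout and helper names are illustrative, not the repo's exact code):

    import json
    from pathlib import Path

    def run_stage(input_path: str, output_path: str, process_batch, batch_size: int = 32):
        out = Path(output_path)
        done = set()
        if out.exists():                         # resume: collect already-processed IDs
            with out.open() as f:
                done = {json.loads(line)["entity_id"] for line in f}
        pending = []
        with open(input_path) as f:
            for line in f:
                record = json.loads(line)
                if record["entity_id"] not in done:
                    pending.append(record)
        with out.open("a") as f:
            for i in range(0, len(pending), batch_size):
                for result in process_batch(pending[i:i + batch_size]):
                    f.write(json.dumps(result, ensure_ascii=False) + "\n")
                f.flush()                        # a crash loses at most this batch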

    Stage 4: Merge and tag

    The final stage merges Wikidata ground-truth labels with LLM output, deduplicates, and tags each positive pair by type:

    • phonetic: a Latin spelling variant of the English anchor (“Catherine” → “Kathryn”)
    • script: a direct transliteration into a non-Latin script (“Catherine” → “كاثرين”)
    • mixed: a phonetic Latin variant that was then transliterated (“Katharine” → “كاثرين”)

    Positives are stored per entity; negatives are not stored at all, they are mined dynamically during training. Splits are assigned at the entity level (80/10/10, deterministic MD5 hash of the entity ID) so all variants of an identity go to a single partition.
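
    The split assignment can be recomputed from the entity ID alone; a minimal sketch of that deterministic 80/10/10 hashing (the exact bucketing thresholds are my assumption):

    import hashlib

    def split_for(entity_id: str) -> str:
        """Deterministic 80/10/10 split from an MD5 hash of the Wikidata entity ID."""
        bucket = int(hashlib.md5(entity_id.encode("utf-8")).hexdigest(), 16) % 100
        if bucket < 80:
            return "train"
        return "val" if bucket < 90 else "test"

    # Every variant of the same entity lands in the same partition, whatever its script.
    print(split_for("Q42"))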

    Final dataset: 119,040 entities, 4.67 million positive pairs.


    The Model

    The encoder is genuinely small: 6 transformer layers, 8 attention heads, hidden dim 256, FFN dim 1024, dropout 0.1, max length 256 bytes. Total parameters: ~4M.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F
    from transformers import PreTrainedModel

    class ByteLevelEncoder(PreTrainedModel):
        def __init__(self, config: ByteEncoderConfig):
            super().__init__(config)
            self.embedding = nn.Embedding(
                config.vocab_size,   # 256 raw UTF-8 byte values
                config.hidden_dim,
                padding_idx=config.pad_token_id,
            )
            self.pos_embedding = nn.Embedding(config.max_len, config.hidden_dim)

            encoder_layer = nn.TransformerEncoderLayer(
                d_model=config.hidden_dim,
                nhead=config.n_heads,
                dim_feedforward=config.ffn_dim,
                dropout=config.dropout,
                batch_first=True,
                norm_first=True,   # pre-norm: more stable when training from scratch
            )
            self.transformer = nn.TransformerEncoder(
                encoder_layer, num_layers=config.n_layers,
                enable_nested_tensor=False,
            )

        def forward(self, input_ids, attention_mask):
            B, L = input_ids.shape
            positions = torch.arange(L, device=input_ids.device).unsqueeze(0)
            x = self.embedding(input_ids) + self.pos_embedding(positions)
            padding_mask = ~attention_mask  # TransformerEncoder uses True = ignore
            x = self.transformer(x, src_key_padding_mask=padding_mask)
            # mean pool over real tokens only
            mask_f = attention_mask.unsqueeze(-1).float()
            pooled = (x * mask_f).sum(dim=1) / mask_f.sum(dim=1).clamp(min=1)
            return F.normalize(pooled, p=2, dim=-1)  # unit vectors

    Why pre-norm (norm_first=True)? When training a transformer from scratch (no pretrained initialization), pre-norm stabilizes gradient flow early in training. Post-norm tends to diverge unless you are careful with learning-rate warmup and initialization. In a fine-tuning scenario you probably don’t need to think about this, but here it mattered.

    The output is a unit vector in 256 dimensions. Cosine similarity equals the inner product on unit vectors, so retrieval is just a dot product.
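
    To make that concrete, a hedged sketch of how a query would be embedded and scored against a pre-encoded corpus. The byte tokenizer below (pad ID 0, 256-byte cap) is my reading of the config above; model is assumed to be a ByteLevelEncoder instance and corpus_vecs an assumed (N, 256) tensor of encoded names:

    import torch

    def to_byte_ids(name: str, max_len: int = 256, pad_id: int = 0):
        ids = list(name.encode("utf-8"))[:max_len]
        mask = [True] * len(ids) + [False] * (max_len - len(ids))
        ids = ids + [pad_id] * (max_len - len(ids))
        return torch.tensor([ids]), torch.tensor([mask])

    ids, mask = to_byte_ids("Владимир Путин")
    with torch.no_grad():
        query_vec = model(ids, mask)            # (1, 256), L2-normalized
    scores = query_vec @ corpus_vecs.T          # cosine similarity == dot product
    top10 = scores.topk(10).indices             # candidate watchlist entries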


    Training: InfoNCE and Hard Negative Mining

    The InfoNCE loss

    The loss is standard: an (anchor, positive) pair should have a high inner product; the anchor’s inner product with every other positive in the batch (the in-batch negatives) should be low.

    def infonce_loss(anchor, positive, temperature=0.07):
        # anchor, positive: (B, D), L2-normalized
        logits = (anchor @ positive.T) / temperature  # (B, B)
        labels = torch.arange(len(anchor), device=anchor.device)  # diagonal = correct
        return F.cross_entropy(logits, labels)

    With batch size 256 and temperature 0.07, that is 255 negatives per anchor per step. The temperature controls how peaked the distribution is: too high and the loss ignores hard negatives, too low and training becomes unstable.

    Why in-batch negatives aren’t enough

    In-batch negatives are cheap but shallow: they are random names from the dataset, which tend to be easy to separate. A model that has been training for a few hundred steps can distinguish “Catherine” from “Zhao Wei” effortlessly. What it struggles with is “Katarina” vs “Katherine”: names that are phonetically close but refer to different people. These are the cases where the gradient signal is actually informative.

    This is the motivation for ANCE (Approximate Nearest Neighbor Negative Contrastive Estimation): periodically rebuild a FAISS index from the current model’s embeddings, then, for each anchor, find its current nearest non-matching neighbors and use those as negatives. They are hard precisely because the model currently thinks they are similar.

    ANCE schedule plot (Image by author)

    The hard negative schedule

    class ANCEBatchSampler(Sampler):
        def _current_mix_ratio(self) -> float:
            if self._step < self.warmup or self.index is None:
                return 0.0
            steps_past_warmup = self._step - self.warmup
            # ramp from 0 → target_mix_ratio over mix_ramp_steps
            return min(
                self.target_mix_ratio,
                self.target_mix_ratio * steps_past_warmup / max(1, self.mix_ramp_steps)
            )

    During the first 200 steps: random batches only. The model has no meaningful structure yet; a FAISS index over near-random embeddings would produce useless hard negatives.

    After step 200: the FAISS index is rebuilt periodically from fresh embeddings (every refresh_every steps). Each batch is built by taking a seed anchor, finding its nearest neighbors in the current index, filling n_hard = batch_size * mix_ratio slots with those neighbors, and padding the rest with random samples. The mix ratio ramps linearly from 0 to 0.7 over 500 steps after warmup, so the transition is gradual; a sketch of the index-refresh side follows below.
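
    The sketch assumes unit-norm float32 embeddings, so inner product equals cosine; these are illustrative standalone functions, not the sampler's exact methods:

    import faiss
    import numpy as np

    def build_ip_index(embeddings: np.ndarray) -> faiss.IndexFlatIP:
        # embeddings: (N, D) float32 unit vectors from the current checkpoint
        index = faiss.IndexFlatIP(embeddings.shape[1])
        index.add(embeddings)
        return index

    def hard_negative_rows(index, embeddings, entity_ids, anchor_row: int, k: int):
        # over-fetch, then drop rows belonging to the anchor's own entity
        _, nn = index.search(embeddings[anchor_row:anchor_row + 1], k + 8)
        return [int(j) for j in nn[0] if entity_ids[j] != entity_ids[anchor_row]][:k]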

    The training loop:

    for batch in train_loader:
        anchor   = model(batch["anchor"].to(device), batch["anchor_mask"].to(device))
        positive = model(batch["positive"].to(device), batch["positive_mask"].to(device))
        loss = loss_fn(anchor, positive)
        loss.backward()
        optimizer.step()
        optimizer.zero_grad()
        scheduler.step()
        global_step += 1

        if global_step % refresh_every == 0:
            embs, ids = encode_all(model, train_ds, train_batch_size, device)
            train_sampler.update_index(embs, ids)

    Evaluation

    The retrieval setup is a standard dense IR evaluation. The corpus is all 11,974 test-split anchor names, each encoded to a unit vector and stored in a FAISS FlatIP index. Each positive variant in the test set is issued as a query; retrieval succeeds if the correct anchor appears in the top-k results.

    We report MRR, R@1, R@5, R@10, and NDCG@10, broken down three ways: overall, by query type, and by script.
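
    A minimal sketch of that evaluation loop, covering MRR and R@k only (variable names are mine; the corpus and query vectors come from the encoder, and each query carries the ID of its true anchor):

    import faiss
    import numpy as np

    def evaluate(corpus_vecs, corpus_ids, query_vecs, query_gold_ids, k: int = 10):
        index = faiss.IndexFlatIP(corpus_vecs.shape[1])   # exact inner-product search
        index.add(corpus_vecs)
        _, nn = index.search(query_vecs, k)
        reciprocal_ranks, hits = [], 0
        for gold, row in zip(query_gold_ids, nn):
            retrieved = [corpus_ids[j] for j in row]
            if gold in retrieved:
                reciprocal_ranks.append(1.0 / (retrieved.index(gold) + 1))
                hits += 1
            else:
                reciprocal_ranks.append(0.0)              # MRR truncated at rank k
        return {"MRR": float(np.mean(reciprocal_ranks)),
                f"R@{k}": hits / len(query_gold_ids)}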

    Overall results:

    Overall performance comparison across retriever systems

    The classical baselines (Levenshtein, Double Metaphone, BM25) cluster at MRR ~0.09. This looks terrible, but it is an artifact of what is being measured: 70% of the evaluation queries are cross-script (script or mixed type), on which these methods score near zero because they share no characters with the Latin-indexed names. On Latin-only queries, Levenshtein achieves 0.894 MRR, a perfectly respectable number for a classical baseline.

    Why overall MRR misleads

    The mixed type is both the hardest and the most common (70% of queries): the query is a phonetic variant of the anchor that was then transliterated into a non-Latin script (“Katharine” → “كاثرين”, English anchor “Catherine”). Breaking down by query type reveals where each method actually fails.

    Performance comparison of all testing scenarios (Image by author)
    Comparison of performance against the best traditional methods

    The model must handle phonetic variation and script change simultaneously. Transliterate, which applies a fixed canonical romanization, drops to 0.485 here because a fixed mapping cannot account for phonetic variants in the query.

    The byte encoder maintains strong performance across all three types (0.937 / 0.827 / 0.738). The contrastive training signal, which sees all three pair types, successfully aligns phonetically equivalent byte sequences regardless of script.

    The script gap

    Script gap comparison

    The script gap is the R@10 difference between Latin and non-Latin queries. Classical baselines have gaps of 0.88 to 0.94: they retrieve well within Latin script but fail entirely across script boundaries. The byte encoder reduces this to 0.096.

    Importantly, the model also improves Latin R@10 from 0.944 to 0.983. The contrastive objective generalizes within-script as well as across scripts.

    The remaining gap (0.096) is almost entirely explained by two scripts:

    Performance comparison across languages

    Scripts with consistent romanization conventions (Arabic, Russian, Hebrew, Hindi, Greek) reach above 0.95. Chinese (0.666) and Korean (0.728) are the outliers. Both have severe romanization ambiguity: “张” maps to Zhang, Chang, and Cheung; “박” maps to Park, Pak, and Bak. The LLM-generated training data contains all of these as positives for the same entity, which produces a conflicting gradient signal. The model cannot fully resolve where in the embedding space a name belongs when its romanization is genuinely ambiguous.

    Notice also that BM25 performs slightly better on Chinese and Korean than the other baselines. This is not because BM25 understands phonetics. When the query is already in the target script (Chinese querying a Chinese-indexed corpus), identical CJK characters can appear in both query and document, producing incidental character n-gram overlap. This effect disappears for true cross-script retrieval (Latin query, CJK corpus) and should not be mistaken for phonetic matching.

    FAISS index ablation

    Performance comparison across indexing techniques

    HNSW matches exact-search recall (0.896 vs 0.897 R@10) at 5.7x lower latency. For deployment, HNSW is the choice: the small recall penalty is negligible and the latency improvement compounds at scale. IVF-PQ cuts index size by 96% at a 6.4% R@10 penalty, which is worth considering if you are indexing millions of entities and memory is constrained.

    At 11,974 entities the difference between 0.03 ms and 0.17 ms is academic. At 50 million entities in a real deployment, HNSW’s recall advantage over IVF-Flat becomes more pronounced as the number of index partitions grows.
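
    For reference, a sketch of how the three index types compared above would be built with FAISS. The parameters (M=32, efSearch=64, 1024 lists, 32x8-bit PQ codes) are illustrative defaults rather than the exact ablation settings, and corpus_vecs is the assumed (N, 256) float32 matrix of unit-norm name embeddings, so L2 ranking matches cosine ranking:

    import faiss

    d = 256                                    # embedding dimension

    flat = faiss.IndexFlatIP(d)                # exact search baseline
    flat.add(corpus_vecs)

    hnsw = faiss.IndexHNSWFlat(d, 32)          # graph-based ANN, 32 links per node
    hnsw.hnsw.efSearch = 64                    # recall/latency knob at query time
    hnsw.add(corpus_vecs)

    quantizer = faiss.IndexFlatL2(d)
    ivfpq = faiss.IndexIVFPQ(quantizer, d, 1024, 32, 8)  # 1024 lists, 32 sub-quantizers x 8 bits
    ivfpq.train(corpus_vecs)                   # PQ codebooks need a training pass
    ivfpq.add(corpus_vecs)
    ivfpq.nprobe = 16                          # lists scanned per query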


    What doesn’t work (and why)

    The model fails to fully close the gap on Chinese and Korean, and the reason is worth dwelling on. The pipeline generates non-Latin variants only by transliterating from Latin: “Catherine” → Latin variant → Arabic/Chinese/etc. It never generates native-script spelling variation. Alternative Arabic orthographies, Korean spacing conventions, or variant Chinese character forms that refer to the same name never appear in the training data. The model learns to map Latin byte sequences to non-Latin byte sequences, but it has not seen non-Latin spelling variation within a single script.

    This is a known limitation. The fix would be a fifth pipeline stage: given a generated Chinese or Arabic name, ask the LLM to produce native-script phonetic variants of it. We didn’t do this, so the model is likely underperforming on queries that reflect real-world native-script variation.

    A second limitation: 99.5% of positive pairs are LLM-generated, and the evaluation uses the same LLM-generated pairs. If the LLM systematically mistransliterates a class of names, the training and evaluation signal would be wrong in the same direction, and we would not catch it. The 0.5% Wikidata ground truth provides a sanity check, but not a complete one.


    Key takeaways

    Byte-level tokenization is an underused tool for multilingual tasks. It eliminates out-of-vocabulary tokens by construction, requires no language-specific tokenizer, and gives you a universal 256-symbol vocabulary that covers every Unicode character. For tasks where surface form matters more than semantics, like name matching, it is a natural fit.

    LLMs are a viable data engine for low-resource retrieval tasks. We generated 4.67 million positive pairs across 8 scripts using two open-weight models. The pipeline is 4 stages, each independently resumable. The approach generalizes to other low-resource entity-matching problems where ground-truth labels are scarce but a capable LLM can synthesize realistic variation.

    ANCE hard negative mining matters. The transition from random negatives to ANN-mined hard negatives noticeably sharpens the embedding space. Without it, the model would learn to separate easy cases (different names in the same script) but struggle on the hard ones (phonetically similar names across scripts).

    Report results by query type and script, not just overall MRR. An overall MRR of 0.775 masks huge variation: 0.937 on phonetic queries, 0.738 on mixed. A system that looks mediocre on headline metrics may be near-perfect for one use case and broken for another.


    The code, dataset pipeline, trained checkpoint, and evaluation scripts are at github.com/vedant-jumle/cross-language-phonetic-text-alignment.

    Note about Wikidata: Wikidata is released under CC0 1.0 Universal (public domain), with no restrictions on use, including commercial use.


