
    Demystifying Cosine Similarity | Towards Data Science

By Editor Times Featured · August 10, 2025


Cosine similarity is a commonly used metric for operationalizing tasks such as semantic search and document comparison in the field of natural language processing (NLP). Introductory NLP courses often provide only a high-level justification for using cosine similarity in such tasks (as opposed to, say, Euclidean distance) without explaining the underlying mathematics, leaving many data scientists with a rather vague understanding of the subject matter. To address this gap, the following article lays out the mathematical intuition behind the cosine similarity metric and shows how this can help us interpret results in practice, with hands-on examples in Python.

Note: All figures and formulas in the following sections were created by the author of this article.

Mathematical Intuition

The cosine similarity metric is based on the cosine function that readers may recall from high school math. The cosine function exhibits a repeating wavelike pattern, a full cycle of which is depicted in Figure 1 below for the range 0 <= x <= 2*pi. The Python code used to produce the figure is also included for reference.

import numpy as np
import matplotlib.pyplot as plt

# Define the x range from 0 to 2*pi
x = np.linspace(0, 2 * np.pi, 500)
y = np.cos(x)

# Create the plot
plt.figure(figsize=(8, 4))
plt.plot(x, y, label='cos(x)', color='blue')

# Add notches on the x-axis at multiples of pi/2
notch_positions = [0, np.pi/2, np.pi, 3*np.pi/2, 2*np.pi]
notch_labels = ['0', 'pi/2', 'pi', '3*pi/2', '2*pi']
plt.xticks(ticks=notch_positions, labels=notch_labels)

# Add custom horizontal gridlines only at y = -1, 0, 1
for y_val in [-1, 0, 1]:
    plt.axhline(y=y_val, color='grey', linestyle='--', linewidth=0.5)

# Add vertical gridlines at the specified x-values
for x_val in notch_positions:
    plt.axvline(x=x_val, color='grey', linestyle='--', linewidth=0.5)

# Customize the plot
plt.xlabel("x")
plt.ylabel("cos(x)")

# Final layout and display
plt.tight_layout()
plt.show()
Figure 1: Cosine Function

The function parameter x denotes an angle in radians (e.g., the angle between two vectors in an embedding space), where pi/2, pi, 3*pi/2, and 2*pi are 90, 180, 270, and 360 degrees, respectively.

To understand why the cosine function can serve as a useful basis for designing a vector similarity metric, notice that the basic cosine function, without any functional transformations as shown in Figure 1, has maxima at x = 2*a*pi, minima at x = (2*b + 1)*pi, and roots at x = (c + 1/2)*pi for some integers a, b, and c. In other words, if x denotes the angle between two vectors, cos(x) returns the largest value when the vectors point in the same direction, the smallest value when the vectors point in opposite directions, and 0 when the vectors are orthogonal to each other.
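These three properties are easy to verify numerically (a quick check added here for illustration, separate from the figure code above):

import numpy as np

# cos(x) at the three angle values discussed above
print(np.cos(0))          # 1.0  -> vectors point in the same direction
print(np.cos(np.pi / 2))  # ~0.0 -> vectors are orthogonal
print(np.cos(np.pi))      # -1.0 -> vectors point in opposite directions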

This behavior of the cosine function neatly captures the interplay between two key concepts in NLP: semantic overlap (conveying how much meaning is shared between two texts) and semantic polarity (capturing the oppositeness of meaning in texts). For example, the texts “I liked this movie” and “I enjoyed this film” would have high semantic overlap (they express essentially the same meaning despite using different words) and low semantic polarity (they do not express opposite meanings). Now, if the embedding vectors for two words happen to encode both semantic overlap and polarity, then we would expect synonyms to have cosine similarity approaching 1, antonyms to have cosine similarity approaching -1, and unrelated words to have cosine similarity approaching 0.

In practice, we will typically not know the angle x directly. Instead, we must derive the cosine value from the vectors themselves. Given two vectors U and V, each with n elements, the cosine of the angle between these vectors (which is precisely the cosine similarity metric) is computed as the dot product of the vectors divided by the product of the vector magnitudes:
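$$\cos(\theta) = \frac{U \cdot V}{\lVert U \rVert \, \lVert V \rVert} = \frac{\sum_{i=1}^{n} u_i v_i}{\sqrt{\sum_{i=1}^{n} u_i^2} \, \sqrt{\sum_{i=1}^{n} v_i^2}}$$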

The above formula for the cosine of the angle between two vectors can be derived from the so-called Cosine Rule, as demonstrated in the segment between minutes 12 and 18 of this video:

A neat proof of the Cosine Rule itself is provided in this video:

The following Python implementation of cosine similarity explicitly operationalizes the formula presented above, without relying on any black-box, third-party packages:

import math

def cosine_similarity(U, V):
    if len(U) != len(V):
        raise ValueError("Vectors must be of the same length.")

    # Compute dot product and magnitudes
    dot_product = sum(u * v for u, v in zip(U, V))
    magnitude_U = math.sqrt(sum(u ** 2 for u in U))
    magnitude_V = math.sqrt(sum(v ** 2 for v in V))

    # Zero-vector handling to avoid division by zero
    if magnitude_U == 0 or magnitude_V == 0:
        raise ValueError("Cannot compute cosine similarity for zero-magnitude vectors.")

    return dot_product / (magnitude_U * magnitude_V)
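As a quick sanity check, the function behaves as the theory predicts on simple parallel, orthogonal, and antiparallel vectors (the vectors below are arbitrary toy examples, not from the original article):

# Parallel vectors (same direction, different magnitudes)
print(cosine_similarity([1, 2], [2, 4]))    # 1.0

# Orthogonal vectors
print(cosine_similarity([1, 0], [0, 3]))    # 0.0

# Antiparallel vectors (opposite directions)
print(cosine_similarity([1, 2], [-1, -2]))  # -1.0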

Readers can refer to this article for a more efficient Python implementation of the cosine distance metric (defined as 1 minus cosine similarity) using the NumPy and SciPy packages.
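For illustration, a minimal sketch of such a vectorized implementation might look as follows (the input vectors here are arbitrary placeholders):

import numpy as np
from scipy.spatial.distance import cosine  # cosine distance = 1 - cosine similarity

U = np.array([1.0, 2.0, 3.0])
V = np.array([2.0, 4.0, 6.0])

# Cosine similarity computed directly with NumPy
sim_numpy = np.dot(U, V) / (np.linalg.norm(U) * np.linalg.norm(V))

# The same quantity obtained via SciPy's cosine distance
sim_scipy = 1 - cosine(U, V)

print(sim_numpy, sim_scipy)  # both 1.0, since U and V are parallel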

Finally, it is worth comparing the mathematical intuition of cosine similarity (or distance) with that of Euclidean distance, which measures the linear distance between two vectors and can also serve as a vector similarity metric. Specifically, the lower the Euclidean distance between two vectors, the higher their semantic similarity is likely to be. The Euclidean distance between two vectors U and V (each of length n) can be computed using the following formula:
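$$d(U, V) = \sqrt{\sum_{i=1}^{n} (u_i - v_i)^2}$$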

Below is the corresponding Python implementation:

import math

def euclidean_distance(U, V):
    if len(U) != len(V):
        raise ValueError("Vectors must be of the same length.")

    # Compute sum of squared differences
    sum_squared_diff = sum((u - v) ** 2 for u, v in zip(U, V))

    # Take the square root of the sum
    return math.sqrt(sum_squared_diff)

Notice that, since the elementwise differences in the Euclidean distance formula are squared, the resulting metric will always be a non-negative number: zero if the vectors are identical, positive otherwise. In the NLP context, this means that Euclidean distance will not reflect semantic polarity in quite the same way as cosine distance does. Moreover, as long as two vectors point in the same direction, the cosine of the angle between them will remain the same regardless of the vector magnitudes. By contrast, the Euclidean distance metric is affected by differences in vector magnitude, which can lead to misleading interpretations in practice (e.g., two texts of differing lengths may yield a high Euclidean distance despite being semantically similar). As such, cosine similarity is the preferred metric in many NLP scenarios, where determining vector (that is, semantic) directionality is the primary concern. The example below makes this contrast concrete.
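Using the two functions defined above, the following toy example pairs two vectors that point in the same direction but differ greatly in magnitude, loosely standing in for embeddings of a short and a long text on the same topic:

# Same direction, very different magnitudes
U = [1, 2, 3]
V = [10, 20, 30]

print(cosine_similarity(U, V))   # 1.0    -> maximal similarity
print(euclidean_distance(U, V))  # ~33.67 -> large distance despite same direction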

Theory versus Practice

In a practical NLP scenario, the interpretation of cosine similarity hinges on the extent to which the vector embedding encodes polarity in addition to semantic overlap. In the following hands-on example, we will examine the similarity between two given words using a pretrained embedding model that does not encode polarity (all-MiniLM-L6-v2) and one that does (distilbert-base-uncased-finetuned-sst-2-english). We will also use more efficient implementations of cosine similarity and Euclidean distance by leveraging functions provided by the SciPy package.

from scipy.spatial.distance import cosine as cosine_distance
from sentence_transformers import SentenceTransformer
from transformers import AutoTokenizer, AutoModel
import torch

# Words to embed
words = ["movie", "film", "good", "bad", "spoon", "car"]

# Load pre-trained embedding models from Hugging Face
model_1 = SentenceTransformer("sentence-transformers/all-MiniLM-L6-v2")
model_2_name = "distilbert-base-uncased-finetuned-sst-2-english"
model_2_tokenizer = AutoTokenizer.from_pretrained(model_2_name)
model_2 = AutoModel.from_pretrained(model_2_name)

# Generate embeddings for model 1
embeddings_1 = dict(zip(words, model_1.encode(words)))

# Generate embeddings for model 2
inputs = model_2_tokenizer(words, padding=True, truncation=True, return_tensors="pt")
with torch.no_grad():
    outputs = model_2(**inputs)
    embedding_vectors_model_2 = outputs.last_hidden_state.mean(dim=1)
embeddings_2 = {word: vector for word, vector in zip(words, embedding_vectors_model_2)}

# Compute and print cosine similarity (1 - cosine distance) for both embedding models
print("Cosine similarity for embedding model 1:")
print("movie", "\t", "film", "\t", 1 - cosine_distance(embeddings_1["movie"], embeddings_1["film"]))
print("good", "\t", "bad", "\t", 1 - cosine_distance(embeddings_1["good"], embeddings_1["bad"]))
print("spoon", "\t", "car", "\t", 1 - cosine_distance(embeddings_1["spoon"], embeddings_1["car"]))
print()

print("Cosine similarity for embedding model 2:")
print("movie", "\t", "film", "\t", 1 - cosine_distance(embeddings_2["movie"], embeddings_2["film"]))
print("good", "\t", "bad", "\t", 1 - cosine_distance(embeddings_2["good"], embeddings_2["bad"]))
print("spoon", "\t", "car", "\t", 1 - cosine_distance(embeddings_2["spoon"], embeddings_2["car"]))
print()

    Output:

Cosine similarity for embedding model 1:
movie 	 film 	 0.8426464702276286
good 	 bad 	 0.5871497042685934
spoon 	 car 	 0.22919675707817078

Cosine similarity for embedding model 2:
movie 	 film 	 0.9638281550070811
good 	 bad 	 -0.3416433451550165
spoon 	 car 	 0.5418748837234599

The words “movie” and “film”, which are typically used as synonyms, have cosine similarity close to 1, suggesting high semantic overlap as expected. The words “good” and “bad” are antonyms, and we see this reflected in the negative cosine similarity result when using the second embedding model, which is known to encode semantic polarity. Finally, the words “spoon” and “car” are semantically unrelated, and the corresponding orthogonality of their vector embeddings is indicated by their cosine similarity results being closer to zero than those for “movie” and “film”.

    The Wrap

The cosine similarity between two vectors is based on the cosine of the angle they form, and, unlike metrics such as Euclidean distance, is not sensitive to differences in vector magnitudes. In theory, cosine similarity should be close to 1 if the vectors point in the same direction (indicating high similarity), close to -1 if the vectors point in opposite directions (indicating high dissimilarity), and close to 0 if the vectors are orthogonal (indicating unrelatedness). However, the precise interpretation of cosine similarity in a given NLP scenario depends on the nature of the embedding model used to vectorize the textual data (e.g., whether the embedding model encodes polarity in addition to semantic overlap).



    Source link
