    How to Reduce Your Power BI Model Size by 90%

By Editor Times Featured · May 26, 2025 · 21 Mins Read


What makes Power BI so fast and powerful when it comes to performance? So powerful that it performs complex calculations over millions of rows in the blink of an eye.

In this article, we'll dig deep to discover what is "under the hood" of Power BI: how your data is being stored, compressed, queried, and finally, brought back to your report. Once you finish reading, I hope you will get a better understanding of the hard work happening in the background, and appreciate the importance of creating an optimal data model in order to get maximum performance from the Power BI engine.

First look under the hood — Formula Engine and Storage Engine

First, I want you to meet the VertiPaq engine, the "brain & muscles" of the system behind not only Power BI, but also Analysis Services Tabular and Excel Power Pivot. Truth be told, VertiPaq represents just one part of the storage engine within the Tabular model, besides DirectQuery, which we'll discuss separately in one of the next articles.

When you send a query to get data for your Power BI report, here is what happens:

• The Formula Engine (FE) accepts the request, processes it, generates the query plan, and finally executes it
• The Storage Engine (SE) pulls the data out of the Tabular model to satisfy the request issued within the query generated by the Formula Engine

The Storage Engine works in two different ways to retrieve the requested data: VertiPaq keeps a snapshot of the data in memory. This snapshot can be refreshed from time to time from the original data source.

On the contrary, DirectQuery doesn't store any data. It just forwards the query straight to the data source for every single request.

Photo by RKTW extend on Unsplash

Data in the Tabular model is usually stored either as an in-memory snapshot (VertiPaq) or in DirectQuery mode. However, there is also the possibility of implementing a hybrid Composite model, which relies on both architectures in parallel.

Formula Engine — the "Brain" of Power BI

As I already stressed, the Formula Engine accepts the query, and since it is able to "understand" DAX (and MDX too, but that is out of the scope of this series), it "translates" the DAX into a specific query plan, consisting of the physical operations that need to be executed to get the results back.

These physical operations can be joins between multiple tables, filtering, or aggregations. It's important to know that the Formula Engine works in a single-threaded way, which means that requests to the Storage Engine are always sent sequentially.

Storage Engine — the "Muscles" of Power BI

Once the query has been generated and executed by the Formula Engine, the Storage Engine comes onto the scene. It physically goes through the data stored within the Tabular model (VertiPaq) or goes directly to a different data source (SQL Server, for example, if DirectQuery storage mode is in place).

When it comes to specifying the storage engine for a table, there are three possible options to choose from:

• Import mode — based on VertiPaq. Table data is stored in memory as a snapshot. Data can be refreshed periodically
• DirectQuery mode — data is retrieved from the data source at query time. Data resides in its original source before, during, and after the query execution
• Dual mode — a combination of the first two options. Data from the table is loaded into memory, but at query time it can also be retrieved directly from the source

As opposed to the Formula Engine, which doesn't support parallelism, the Storage Engine can work asynchronously.

Meet the VertiPaq Storage Engine

Now that we have drawn the big picture, let me explain in more detail what VertiPaq does in the background to boost the performance of our Power BI reports.

When we choose Import mode for our Power BI tables, VertiPaq performs the following actions:

• Reads the data source, transforms the data into a columnar structure, and encodes and compresses the data within each of the columns
• Establishes a dictionary and index for each of the columns
• Prepares and establishes relationships
• Computes all calculated columns and calculated tables, and compresses them

The two main characteristics of VertiPaq are:

1. VertiPaq is a columnar database
2. VertiPaq is an in-memory database
Image by author

As you can see in the illustration above, columnar databases store and compress data differently from traditional row-store databases. Columnar databases are optimized for vertical data scanning, which means that every column is structured in its own way and physically separated from the other columns!

Without going into a deep analysis of the advantages and disadvantages of row-store vs. column-store databases, since that would require a separate series of articles, let me just pinpoint a few key differentiators in terms of performance.

With columnar databases, single-column access is fast and effective. Once a computation starts to involve multiple columns, things become more complex, because the intermediate results need to be temporarily stored in some way.

Simply put, columnar databases are more CPU-intensive, while row-store databases increase I/O, because of the many scans of useless data.

So far, we have painted the big picture of the architecture that enables Power BI to fully shine as an ultimate BI tool. Now, we're ready to dive deeper into specific architectural features and consequently leverage this knowledge to get the most out of our Power BI reports, by tuning our data model to extract the maximum from the underlying engine.

Inside VertiPaq in Power BI — Compress for success!

Photo by Kaboompics at Pexels

As you may recall from the previous part of this article, we scratched the surface of VertiPaq, the powerful storage engine that is "responsible" for the blazing-fast performance of most of your Power BI reports (whenever you are using Import mode or a Composite model).

3, 2, 1…Fasten your seatbelts!

One of the key characteristics of VertiPaq is that it is a columnar database. We learned that columnar databases store data optimized for vertical scanning, which means that every column has its own structure and is physically separated from the other columns.

That fact enables VertiPaq to apply different types of compression to each of the columns independently, choosing the optimal compression algorithm based on the values in that specific column.

Compression is achieved by encoding the values within the column. But before we dive deeper into a detailed overview of the encoding techniques, keep in mind that this architecture is not exclusive to Power BI — in the background is a Tabular model, which is also "under the hood" of Analysis Services Tabular and Excel Power Pivot.

Value Encoding

This is the most desirable value encoding type, since it works exclusively with integers and, therefore, requires less memory than, for example, working with text values.

How does this look in reality? Let's say we have a column containing the number of phone calls per day, and the values in this column vary from 4,000 to 5,000. What VertiPaq does is find the minimum value in this range (4,000) as a starting point, then calculate the difference between this value and all the other values in the column, storing that difference as the new value.

Image by author

At first glance, 3 bits per value might not seem like a significant saving, but multiply this by millions or even billions of rows and you will appreciate the amount of memory saved.
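To see where those 3 bits come from, here is a minimal T-SQL sketch of the same idea (dailyCalls and callCount are hypothetical names, used purely for illustration; VertiPaq does this internally, not via SQL):

SELECT callCount
    ,callCount - MIN(callCount) OVER () AS storedOffset -- the offset is what gets stored
FROM dailyCalls

Raw values up to 5,000 need 13 bits (2^13 = 8,192), while the offsets 0–1,000 fit into 10 bits (2^10 = 1,024), which is exactly the 3 bits saved per value.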

As I already stressed, Value Encoding is applied exclusively to integer data type columns (the currency data type is also stored as an integer).

Hash Encoding (Dictionary Encoding)

This is probably the compression type most frequently used by VertiPaq. Using Hash encoding, VertiPaq creates a dictionary of the distinct values within one column and afterward replaces the "real" values with index values from the dictionary.

Here is an example to make things clearer:

Image by author

As you may notice, VertiPaq identified the distinct values within the Subjects column, built a dictionary by assigning indexes to those values, and finally stored the index values as pointers to the "real" values. I assume you are aware that integer values require far less memory space than text, so that's the logic behind this type of data compression.
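To picture the mapping itself, here is a minimal T-SQL sketch that mimics what dictionary encoding produces (subject and chats are hypothetical names matching the illustration above; again, VertiPaq does this internally, not via SQL):

SELECT subject
    ,DENSE_RANK() OVER (ORDER BY subject) - 1 AS dictionaryIndex -- integer pointer stored instead of the text
FROM chats

Equal values receive the same index, so the column only needs to hold small integers, while the dictionary holds each distinct text value exactly once.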

Moreover, since it can build a dictionary for any data type, VertiPaq is practically data type independent!

This brings us to another key takeaway: no matter whether your column is of text, bigint, or float data type, from the VertiPaq perspective it's all the same — it needs to create a dictionary for each of those columns, which means that all of them will provide the same performance, both in terms of speed and memory space allocated! Of course, this assumes that there are no significant differences in dictionary sizes between the columns.

So, it's a myth that the data type of a column affects its size within the data model. On the contrary, the number of distinct values within the column, known as its cardinality, is what mostly influences column memory consumption.

RLE (Run-Length Encoding)

The third algorithm (RLE) creates a kind of mapping table containing ranges of repeating values, thus avoiding storing every single (repeated) value separately.

Again, looking at an example will help you better understand this concept:

Image by author

In real life, VertiPaq doesn't store the Start values, because it can quickly calculate where the next node begins by summing the previous Count values.
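Expressed as a T-SQL sketch, the run table that RLE keeps would look roughly like this, assuming the data arrives sorted so that each value forms one contiguous run (subject and chats are the same hypothetical names as above):

SELECT subject
    ,COUNT(*) AS [Count] -- length of each run; Start is just the running sum of the previous counts
FROM chats
GROUP BY subject
ORDER BY subject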

As powerful as it may look at first glance, the RLE algorithm is highly dependent on the ordering within the column. If the data is stored the way you see in the example above, RLE will perform great. However, if your data buckets are smaller and rotate more frequently, then RLE is not an optimal solution.

One more thing to keep in mind regarding RLE: in reality, VertiPaq doesn't store the data exactly the way it's shown in the illustration above. First, it performs Hash encoding and creates a dictionary of the subjects, and then it applies the RLE algorithm, so the final logic, in its most simplified form, would be something like this:

Image by author

So, RLE occurs after Value or Hash Encoding, in those scenarios where VertiPaq "thinks" it makes sense to compress the data further (that is, when the data is ordered in such a way that RLE would achieve better compression).

Re-Encoding considerations

No matter how "smart" VertiPaq is, it can also make some bad decisions based on incorrect assumptions. Before I explain how re-encoding works, let me briefly walk through the process of data compression for a specific column:

• VertiPaq scans a sample of rows from the column
• If the column data type is not an integer, it will look no further and use Hash encoding
• If the column is of integer data type, some additional parameters are evaluated: if the numbers in the sample increase linearly, VertiPaq assumes that it is probably a primary key and chooses Value encoding
• If the numbers in the column are reasonably close to each other (the number range is not very wide, like in our example above with 4,000–5,000 phone calls per day), VertiPaq will use Value encoding. On the contrary, when values fluctuate significantly within the range (for example, between 1,000 and 1,000,000), then Value encoding doesn't make sense, and VertiPaq will apply the Hash algorithm

However, it can sometimes happen that VertiPaq chooses the algorithm based on the sample data, but then some outlier pops up and the column needs to be re-encoded from scratch.

Let's use our previous example with the number of phone calls: VertiPaq scans the sample and chooses to apply Value encoding. Then, after processing 10 million rows, it suddenly finds a value of 500,000 (it could be an error, or whatever). Now, VertiPaq re-evaluates its choice, and it may decide to re-encode the column using the Hash algorithm instead. Of course, that would affect the whole process in terms of the time needed for reprocessing.

Finally, here is the list of parameters (in order of importance) that VertiPaq considers when choosing which algorithm to use:

• Number of distinct values in the column (cardinality)
• Data distribution in the column — a column with many repeating values can be compressed better than one containing frequently changing values (RLE can be applied)
• Number of rows in the table
• Column data type — affects only the dictionary size

Reducing the data model size by 90% — a real story!

Now that we have laid the theoretical ground for understanding the architecture behind the VertiPaq storage engine, and the types of compression it uses to optimize your Power BI data model, it's the right moment to get our hands dirty and apply our knowledge to a real-life case!

Starting point = 776 MB

Our data model is quite simple, yet memory-intensive. We have one fact table (factChat), which contains data about live support chats, and one dimension table (dimProduct), which relates to the fact table. The fact table has around 9 million rows, which shouldn't be a big deal for Power BI, but the table was imported as-is, without any additional optimization or transformation.

Image by author

Now, this pbix file consumes a whopping 776 MB! You can't believe it? Just take a look:

Image by author

Just remember this picture! Of course, I don't need to tell you how much time this report needs to load or refresh, and how sluggish our calculations are because of the file size.

…and it's even worse!

Moreover, it's not just the 776 MB that takes up our memory, since memory consumption is calculated taking into account the following factors:

• PBIX file
• Dictionary (you learned about the dictionary in the earlier sections of this article)
• Column hierarchies
• User-defined hierarchies
• Relationships

Now, if I open Task Manager, go to the Details tab, and find the msmdsrv.exe process, I can see that it burns more than 1 GB of memory!

Oh, man, that really hurts! And we haven't even interacted with the report yet! So, let's see what we can do to optimize our model…

Rule #1 — Import only those columns you really need

The first and most important rule is: keep in your data model only those columns you really need for the report!

That being said, do I really need both the chatID column, which is a surrogate key, and the sourceID column, which is a primary key from the source system? Both of these values are unique, so even if I need to count the total number of chats, I would still be fine with only one of them.
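As a quick sanity check of that claim, here is a hypothetical T-SQL sketch; since both keys are unique per chat, either one yields the same total:

SELECT COUNT(chatID) AS totalChats -- COUNT(sourceID) would return the exact same number
FROM factChat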

Image by author

So, I'll remove the sourceID column and check how the file looks now:

Image by author

By removing just one unnecessary column, we saved more than 100 MB! Let's examine what else can be removed without taking a deeper look (we'll come to that later, I promise).

Do we really need both the original start time of the chat and the UTC time, one stored as a Date/Time/Timezone type, the other as Date/Time, and both going down to a second level of precision?!

Let me get rid of the original start time column and keep only the UTC values.

Image by author

Another 100 MB of wasted space gone! By removing just two columns we don't need, we reduced the size of our file by 30%!

Now, that was without even looking into the details of the memory consumption. Let's now fire up DAX Studio, my favorite tool for troubleshooting Power BI reports. As I've already stressed a few times, this tool is a MUST if you plan to work seriously with Power BI — and it's completely free!

One of the features in DAX Studio is VertiPaq Analyzer, a very useful tool built by Marco Russo and Alberto Ferrari from sqlbi.com. When I connect to my pbix file with DAX Studio, here are the numbers related to my data model size:

Image by author

Here I can see which are the most expensive columns in my data model and decide whether I can discard some of them, or whether I need to keep them all.

At first glance, I have a few candidates for removal — the sessionReferrer and referrer columns have high cardinality and therefore can't be optimally compressed. Moreover, as these are text columns that need to be encoded with the Hash algorithm, you can see that their dictionary size is extremely large! If you take a closer look, you will notice that these two columns take up almost 40% of the table size!

After checking with my report users whether they need any of these columns, or maybe just one of them, I got confirmation that they don't perform any analysis on them. So, why on Earth should we bloat our data model with them?!

Another strong candidate for removal is the LastEditDate column. This column just shows the date and time when the record was last edited in the data warehouse. Again, I checked with the report users, and they didn't even know this column existed!

I removed these three columns, and here is the result:

Image by author

Oh, God, we halved the size of our data model just by removing a few unnecessary columns.

Truth be told, there are a few more columns that could be dismissed from the data model, but let's now focus on other techniques for data model optimization.

Rule #2 — Reduce the column cardinality!

As you may recall from the earlier part of the article, the rule of thumb is: the higher the cardinality of a column, the harder it is for VertiPaq to optimally compress the data. Especially if we are not working with integer values.

Let's take a deeper look at the VertiPaq Analyzer results:

Image by author

As you can see, even though the chatID column has higher cardinality than the datetmStartUTC column, it takes up almost 7 times less memory! Since it is a surrogate key with integer values, VertiPaq applies Value encoding, and the size of the dictionary is irrelevant. On the other hand, Hash encoding is applied to the date/time column with its high cardinality, so the dictionary size is significantly larger.

There are multiple techniques for reducing column cardinality, such as splitting columns. Here are a few examples of using this technique.

For integer columns, you can split them into two even columns using division and modulo operations. In our case, it would be:

SELECT chatID / 1000 AS chatID_div
    ,chatID % 1000 AS chatID_mod
    -- ...remaining columns...
FROM factChat

This optimization technique must be performed on the source side (in this case, by writing a T-SQL statement). If we used calculated columns instead, there would be no benefit at all, since the original column would have to be stored in the data model first.
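Note that nothing is lost by the split: whenever the original key is needed, it can be reassembled on the fly. Here is a sketch under the same divisor assumption of 1,000:

SELECT chatID_div * 1000 + chatID_mod AS chatID -- reconstructs the original value
FROM factChat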

A similar technique can bring significant savings when you have decimal values in the column. You can simply split the values before and after the decimal point, as explained in this article.
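For illustration, a decimal amount column with two decimal places could be split like this (a hypothetical sketch: factSales and amount are made-up names, and non-negative amounts are assumed):

SELECT FLOOR(amount) AS amount_whole -- integer part
    ,CAST(ROUND((amount - FLOOR(amount)) * 100, 0) AS int) AS amount_cents -- fraction as a whole number
FROM factSales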

Since we don't have any decimal values, let's focus on our problem — optimizing the datetmStartUTC column. There are multiple valid options for optimizing this column. The first is to check whether your users need granularity finer than the day level (in other words, whether you can remove hours, minutes, and seconds from your data).
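If day-level grain turns out to be enough, the fix is a one-line change at the source, something like this sketch (the rest of the column list is omitted):

SELECT CAST(datetmStartUTC AS date) AS dateStartUTC -- drops hours, minutes, and seconds
    -- ...remaining columns...
FROM factChat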

Let's check what savings this solution would bring:

Image by author

The first thing we notice is that our file is now 271 MB, about a third of what we started with. The VertiPaq Analyzer results show that this column is now almost perfectly optimized, going from taking up 62% of our data model to just barely over 2.5%! That's huge!

Image by author

However, it turned out that the day-level grain was not fine enough, and my users needed to analyze figures at the hour level. OK, so we could at least get rid of minutes and seconds, which would also decrease the cardinality of the column.

So, I imported the values rounded to the hour:

SELECT chatID
    ,dateadd(hour, datediff(hour, 0, datetmStartUTC), 0) AS datetmStartUTC
    ,customerID
    ,userID
    ,ipAddressID
    ,productID
    ,countryID
    ,userStatus
    ,isUnansweredChat
    ,totalMsgsOp
    ,totalMsgsUser
    ,userTimezone
    ,waitTimeSec
    ,waitTimeoutSec
    ,chatDurationSec
    ,sourceSystem
    ,topic
    ,usaccept
    ,transferUserID
    ,languageID
    ,waitFirstClick
FROM factChat

It turned out that my users also didn't need the chatVariables column for analysis, so I removed it from the data model as well.

Finally, after disabling the Auto Date/Time option under Data Load in the Options dialog, my data model size was around 220 MB! However, one thing still bothered me: the chatID column was still occupying almost a third of my table. And this is just a surrogate key, which isn't even used in any of the relationships within my data model.

Image by author

So, here I was weighing two different solutions: the first was to simply remove this column and aggregate the number of chats, counting them with a GROUP BY clause (see the sketch further below). In any case, there would be no benefit in keeping the chatID column at all, since it isn't used anywhere in our data model. Once I removed it from the model, let's check the pbix file size one last time:

Image by author

Please recall the number we started at: 776 MB! So, I managed to reduce my data model size by almost 90%, by applying some simple techniques that enabled the VertiPaq storage engine to compress the data much more optimally.
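For completeness, the aggregation alternative mentioned above would have looked something like this hypothetical sketch (with an abbreviated column list):

SELECT datetmStartUTC
    ,productID
    -- ...other remaining attributes...
    ,COUNT(*) AS totalChats -- pre-aggregated chat count replaces the chatID key
FROM factChat
GROUP BY datetmStartUTC, productID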

And this was a real use case that I faced during the last year!

General rules for reducing data model size

To conclude, here is the list of general rules you should keep in mind when trying to reduce the data model size:

• Keep only those columns your users need in the report! Just sticking to this one single rule will save you an unbelievable amount of space, I assure you…
• Try to optimize column cardinality whenever possible. The golden rule here is: test, test, test…and if there is a significant benefit from, for example, splitting one column into two, or substituting a decimal column with two whole-number columns, then do it! But also keep in mind that your measures will need to be rewritten to handle these structural changes in order to display the expected results. So, if your table isn't big, or if you would have to rewrite hundreds of measures, maybe it's not worth splitting the column. As I said, it depends on your specific scenario, and you should carefully evaluate which solution makes more sense
• Same as for columns, keep only those rows you need: for example, maybe you don't need to import data from the last 10 years, but only 5! That will also reduce your data model size. Talk to your users and ask them what they really need before blindly putting everything into your data model
• Aggregate your data whenever possible! That means fewer rows and lower cardinality — all the good things you are aiming to achieve! If you don't need hour, minute, or second levels of granularity, don't import them! Aggregations in Power BI (and the Tabular model in general) are an important and wide topic, which is out of the scope of this series, but I strongly recommend you check out Phil Seamark's blog and his series of posts on creative aggregation usage
• Avoid DAX calculated columns whenever possible, since they are not optimally compressed. Instead, try to push all calculations to the data source (a SQL database, for example) or perform them in the Power Query editor
• Use proper data types (for example, if your data granularity is at the day level, there is no need to use the Date/Time data type; the Date data type will suffice)
• Disable the Auto Date/Time option for data loading (this will remove a bunch of automatically created date tables in the background)

Conclusion

Now that you have learned the basics of the VertiPaq storage engine and the different techniques it uses for data compression, I wanted to wrap up this article by showing you a real-life example of how we can "help" VertiPaq (and, consequently, Power BI) get the best out of report performance and optimal resource consumption.

Thanks for reading, I hope you enjoyed the article!



    Source link
