Never miss a new edition of The Variable, our weekly newsletter featuring a top-notch selection of editors’ picks, deep dives, community news, and more.
With the end of the year just a few weeks away, neither our authors nor our readers are showing any signs of slowing down.
We’re thrilled to have published some of our strongest articles of the year in the past month: practical guides on LLM workflows and resources on career growth, Python-focused tutorials, and deep dives on recently launched tools, among other standout topics. Read on to catch up with (or revisit) November’s most-read stories.
Graph RAG vs SQL RAG
Which database paradigm delivers more accurate and insightful results? Reinhard Sellmair sets out to evaluate the performance of two types of RAG systems by pitting GraphRAG and SQL RAG against each other, using the same dataset and questions.
LLM-Powered Time-Series Analysis
In the second part of Sara Nobrega’s popular series, we learn about the prompts we need for advanced model development (think ARIMA and LSTM).
How to Build Machine Learning Projects That Help You Get Hired
Not all ML portfolios are created equal. Egor Howell shares time-tested insights on what works — and what doesn’t.
Other November Highlights
Don’t miss our other top reads from the past month, tackling NumPy, multimodal RAG, marimo notebooks, and many other topics — both evergreen and cutting-edge.
NumPy for Absolute Beginners: A Project-Based Approach to Data Analysis, by Ibrahim Salami
Understanding Convolutional Neural Networks (CNNs) Through Excel, by Angela Shi
Run Python Up to 150× Faster with C, by Thomas Reid
How to Build an Over-Engineered Retrieval System, by Ida Silfverskiöld
Building a Multimodal RAG That Responds with Text, Images, and Tables from Sources, by Partha Sarkar
Why I’m Making the Switch to marimo Notebooks, by Parul Pandey
Your Next ‘Large’ Language Model Might Not Be Large After All, by Moulik Gupta
In Case You Missed It: Our Latest Author Q&As
We love sharing our authors’ expertise, career insights, and perspectives on recent developments in the world of data science and AI. Here are our most recent Author Spotlights.
- “Systems thinking helps me put the big picture front and center”
Shuai Guo on deep research agents, analytical AI vs. LLM-based agents, and systems thinking.
- “The success of an AI product depends on how intuitively users can interact with its capabilities”
Janna Lipenkova on AI strategy, AI products, and how domain knowledge can change the entire shape of an AI solution.
Meet Our New Authors
We hope you take the time to explore the excellent work from the latest cohort of TDS contributors:
- Jure Leskovec, a Stanford professor of computer science and entrepreneur, explains why LLMs aren’t a one-size-fits-all solution for companies.
- Sherin Sunny, a senior engineer at Walmart, walked us through the creation of a computer vision project aimed at detecting leaves.
- Manuel Franco de la Peña introduced us to ShaTS, a novel Shapley-based explainability method specifically designed for time-series models, which he co-created.
We love publishing articles from new authors, so if you’ve recently written an interesting project walkthrough, tutorial, or theoretical reflection on any of our core topics, why not share it with us?
We’d Love Your Feedback, Authors!
Are you a current TDS author? We invite you to fill out a 5-minute survey so we can improve the publishing process for all contributors.