    4 YAML Files Instead of PySpark: How We Let Analysts Build Data Pipelines Without Engineers



    It used to take us three weeks to ship a single data pipeline. Today, an analyst with zero Python experience does it in a day. Here’s how we got there.

    I’m Kiril Kazlou, a data engineer at Mindbox. Our team regularly recalculates business metrics for clients — which means we’re constantly building data marts for billing and analytics, pulling from dozens of different sources.

    For a long time, we relied on PySpark for all our data processing. The problem? You can’t really work with PySpark without Python skills. Every new pipeline required a developer. And that meant waiting — sometimes for weeks.

    In this post, I’ll walk you through how we built an internal data platform where an analyst or product manager can spin up a regularly updated pipeline by writing just four YAML files.

    Why PySpark Was Slowing Us Down

    Let me illustrate the pain with a textbook example — calculating MAU (Monthly Active Users).

    On the surface, this looks like a simple SQL task: COUNT(DISTINCT customerId) across a few tables over a time window. But because of all the infrastructure overhead — PySpark, Airflow DAG setup, Spark resource allocation, testing — we had to hand it off to developers. The result? A full week just to ship a MAU counter.
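
    The query itself is almost trivial. A minimal sketch of what the analyst actually needs — with illustrative table and column names, not our real schema — looks something like this:

        SELECT COUNT(DISTINCT customerId) AS mau
        FROM (
            SELECT customerId, eventDate FROM visits
            UNION ALL
            SELECT customerId, eventDate FROM orders
        ) e
        WHERE e.eventDate >= CURRENT_DATE - INTERVAL '1' MONTH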

    Every new metric took one to three weeks to ship. And every single time, the process looked the same:

    1. An analyst defined the business requirements, found an available developer, and handed over the context.
    2. The developer clarified the details, wrote PySpark code, went through code review, configured the DAG, and deployed.

    What we really wanted was for analysts and product managers — the people who understand the business logic best and are fluent in SQL and YAML — to handle this themselves. No Python. No PySpark.

    How pipelines were built with PySpark

    What We Replaced PySpark With: YAML and SQL Are All You Need

    To take a declarative approach, we split our data layer into three parts and picked the right tool for each:

    • dlt (data load tool) — ingests data from external APIs and databases into object storage. Configured entirely through a YAML file. No code required.
    • dbt (data build tool) on Trino — transforms data using pure SQL. It links models via ref(), automatically builds a dependency graph, and handles incremental updates.
    • Airflow + Cosmos — orchestrates the pipelines. The Airflow DAG is auto-generated from dag.yaml and the dbt project.

    We were already using Trino as a query engine for ad-hoc queries and had it plugged into Superset for BI. It had already proven itself: for queries with standard logic, it processed large datasets faster and with fewer resources than Spark. On top of that, Trino natively supports federated access to multiple data stores from a single SQL query. For 90% of our pipelines, Trino was a perfect fit.
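
    Federation means a single Trino query can join tables that live in different catalogs. A minimal illustration — the catalog, schema, and table names here are made up, not our production setup:

        SELECT o.customerId, o.amount, c.segment
        FROM postgresql.sales.orders AS o
        JOIN hive.analytics.customer_segments AS c
          ON o.customerId = c.customerId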

    Diagram of the new pipeline workflow: an analyst writes YAML configs and SQL models directly. dbt and Trino handle execution automatically through Airflow. No developer involvement required. The full process takes one day.
    After: analyst-owned pipelines with dbt + Trino

    How We Load Data: dlt.yaml

    The first YAML file describes where and how to load data for downstream processing. Here’s a real-world example — loading billing data from an internal API:

    product: sg-team
    function: billing
    schema: billing_tarification

    dag:
      dag_id: dlt_billing_tarification
      schedule: "0 4 * * *"
      description: "Daily refresh of tarification data"
      tags:
        - billing

    alerts:
      enabled: true
      severity: warning

    source:
      type: rest_api
      client:
        base_url: "https://internal-api.example.com"
        auth:
          type: bearer
          token: dlt-billing.token
      resources:
        - name: tarification_data
          endpoint:
            path: /tarificationData
            method: POST
            json:
              firstPeriod: "{{ previous_month_date }}"
              lastPeriod: "{{ previous_month_date }}"
              pricingPlanLine: CurrentPlan
          write_disposition: replace
          processing_steps:
            - map: dlt_custom.billing_tarification_data.map

        - name: charges_raw
          columns:
            staffUserName:
              data_type: text
              nullable: true
          endpoint:
            path: /data-feed/charges
            method: POST
            json:
              firstPeriod: "{{ previous_month_date }}"
              lastPeriod: "{{ previous_month_date }}"
          write_disposition: replace

        - name: discounts_raw
          endpoint:
            path: /data-feed/discounts
            method: POST
            json:
              firstPeriod: "{{ previous_month_date }}"
              lastPeriod: "{{ previous_month_date }}"
          write_disposition: replace

    This config defines several resources pulled from a single API. For each one, we specify the endpoint, request parameters, and a write strategy — in our case, replace means “overwrite every time.” You can also add processing steps, define column types, and configure alerts.

    The entire config is 40 lines of YAML. Without dlt, each connector would be a Python script handling requests, pagination, retries, serialization to Delta Table format, and uploads to storage.

    How We Transform Data With SQL: dbt_project.yaml and sources.yaml

    The next step is configuring the dbt model. With Trino, that means SQL queries.

    Here’s an example of how we set up the MAU calculation. This is what event preparation from a single source looks like:

    -- int_mau_events_visits.sql (simplified)
    {{ config(materialized='table') }}

    WITH period AS (
        -- Rolling window: last 5 months up to the current month
        SELECT
            YEAR(CURRENT_DATE - INTERVAL '5' MONTH) AS start_year,
            MONTH(CURRENT_DATE - INTERVAL '5' MONTH) AS start_month,
            YEAR(CURRENT_DATE) AS end_year,
            MONTH(CURRENT_DATE) AS end_month
    ),

    events AS (
        -- Pull visit events within the period window
        SELECT src._tenant, src.unmergedCustomerId,
               'visits' AS src_type, src.endpoint
        FROM {{ source('final', 'customerstracking_visits') }} src
        CROSS JOIN period p
        WHERE src.unmergedCustomerId IS NOT NULL
          AND /* ...timestamp filtering by year/month bounds... */
    ),

    events_with_customer AS (
        -- Resolve merged customer IDs
        SELECT e._tenant,
               COALESCE(mc.mergedCustomerId, e.unmergedCustomerId) AS customerId,
               e.src_type, e.endpoint
        FROM events e
        LEFT JOIN {{ ref('int_merged_customers') }} mc
          ON e._tenant = mc._tenant
          AND e.unmergedCustomerId = mc.unmergedCustomerId
    )

    -- Keep only actual (non-deleted) customers
    SELECT ewc._tenant, ewc.customerId, ewc.src_type, ewc.endpoint
    FROM events_with_customer ewc
    WHERE EXISTS (
        SELECT 1 FROM {{ ref('int_actual_customers') }} ac
        WHERE ewc._tenant = ac._tenant
          AND ewc.customerId = ac.customerId
    )

    All 10 event sources follow the exact same pattern. The only differences are the source table and the filters. The models then merge into a single stream:

    -- int_mau_events.sql (union of all sources)
    SELECT * FROM {{ ref('int_mau_events_inapps_targetings') }}
    UNION ALL
    SELECT * FROM {{ ref('int_mau_events_inapps_clicks') }}
    UNION ALL
    SELECT * FROM {{ ref('int_mau_events_visits') }}
    UNION ALL
    SELECT * FROM {{ ref('int_mau_events_orders') }}
    -- ...plus 6 more sources

    And finally, the data mart where everything gets aggregated:

    -- mau_period_datamart.sql
    {{ config(
        materialized='incremental',
        incremental_strategy='merge',
        unique_key=['_tenant', 'start_year', 'start_month', 'end_year', 'end_month']
    ) }}

    -- Read the look-back window from the dag.yaml parameter (defaults to 5 months)
    {%- set months_back = var('months_back', 5) | int -%}

    WITH period AS (
        SELECT
            YEAR(CURRENT_DATE - INTERVAL '{{ months_back }}' MONTH) AS start_year,
            MONTH(CURRENT_DATE - INTERVAL '{{ months_back }}' MONTH) AS start_month,
            YEAR(CURRENT_DATE) AS end_year,
            MONTH(CURRENT_DATE) AS end_month
    ),
    events_resolved AS (
        SELECT * FROM {{ ref('int_mau_events') }}
    ),
    metrics_by_tenant AS (
        SELECT
            er._tenant,
            COUNT(DISTINCT CASE WHEN src_type = 'visits'
                  THEN customerId END) AS CustomersTracking_Visits,
            COUNT(DISTINCT CASE WHEN src_type = 'orders'
                  THEN customerId END) AS ProcessingOrders_Orders,
            COUNT(DISTINCT CASE WHEN src_type = 'mailings'
                  THEN customerId END) AS Mailings_MessageStatuses,
            -- ...other metrics
            COUNT(DISTINCT customerId) AS MAU
        FROM events_resolved er
        GROUP BY er._tenant
    )
    SELECT m.*, p.start_year, p.start_month, p.end_year, p.end_month
    FROM metrics_by_tenant m
    CROSS JOIN period p

    For the data mart configuration, we use incremental_strategy='merge'. dbt automatically generates the merge query, substituting the unique_key for the upsert. There is no need to implement incremental loading by hand.
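
    Conceptually, the compiled incremental run boils down to a MERGE of roughly this shape — a simplified sketch, not the literal SQL dbt emits:

        MERGE INTO mau_period_datamart AS t
        USING new_rows AS s           -- "new_rows" stands in for the model's SELECT above
        ON  t._tenant     = s._tenant
        AND t.start_year  = s.start_year
        AND t.start_month = s.start_month
        AND t.end_year    = s.end_year
        AND t.end_month   = s.end_month
        WHEN MATCHED THEN
            UPDATE SET MAU = s.MAU    -- ...plus the other metric columns
        WHEN NOT MATCHED THEN
            INSERT (_tenant, MAU, start_year, start_month, end_year, end_month)
            VALUES (s._tenant, s.MAU, s.start_year, s.start_month, s.end_year, s.end_month)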

    To tie the models into a single project, we set up dbt_project.yaml:

    name: mau_period
    version: '1.0.0'

    models:
      mau_period:
        +on_table_exists: replace
        +on_schema_change: append_new_columns

    And sources.yaml, which describes the input tables:

    sources:
      - name: final
        database: data_platform
        schema: final
        tables:
          - name: inapps_targetings_v2
          - name: inapps_clicks_v2
          - name: customerstracking_visits
          - name: processingorders_orders
          - name: cdp_mergedcustomers_v2
          # ...

    The result is the same business logic we had in PySpark, but in pure SQL: sources.yaml replaces typedspark schemas, {{ ref() }} and {{ source() }} replace .get_table(), and automatic execution order via the dependency graph replaces manual Spark resource tuning.
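
    To illustrate what that swap looks like in practice, the source() call from the model above compiles, roughly, to a fully qualified Trino table name built from the catalog and schema declared in sources.yaml:

        -- In the dbt model:
        SELECT src.unmergedCustomerId
        FROM {{ source('final', 'customerstracking_visits') }} src

        -- What dbt sends to Trino after compilation:
        SELECT src.unmergedCustomerId
        FROM data_platform.final.customerstracking_visits src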

    How We Configure Airflow: dag.yaml

    The fourth configuration file defines when and how Airflow runs the pipeline:

    product: sg-team
    function: billing
    schema: mau
    schedule: "15 21 * * *"  # every day at 00:15 MSK

    params:
      - name: start_date
        description: "Start date (YYYY-MM-DD). Leave empty for auto"
        default: ""
      - name: end_date
        description: "End date (YYYY-MM-DD). Leave empty for auto"
        default: ""
      - name: months_back
        description: "Months to look back (default: 5)"
        default: 5

    alerts:
      enabled: true
      severity: warning

    Then our Python script parses dag.yaml and dbt_project.yaml and uses the Cosmos library to generate a fully functional Airflow DAG. This is the only piece of Python code in the entire setup. It’s written once and works for every dbt project. Here’s the key part:

    def _build_dbt_project_dags(project_path: Path, environ: dict) -> list[DbtDag]:
        config_dict = yaml.safe_load(dag_config_path.read_text())
        config = DagConfig.model_validate(config_dict)

        # YAML params → Airflow Params
        params = {}
        operator_vars = {}
        for param in config.params:
            params[param.name] = Param(
                default=param.default if param.default is not None else "",
                description=param.description,
            )
            operator_vars[param.name] = f"{{{{ params.{param.name} }}}}"

        # Cosmos creates the DAG from the dbt project
        with DbtDag(
            dag_id=f"dbt_{project_path.name}",
            schedule=config.schedule,
            params=params,
            project_config=ProjectConfig(dbt_project_path=project_path),
            profile_config=ProfileConfig(
                profile_name="default",
                target_name=project_name,
                profile_mapping=TrinoLDAPProfileMapping(
                    conn_id="trino_default",
                    profile_args={
                        "database": profile_database,
                        "schema": profile_schema,
                    },
                ),
            ),
            operator_args={"vars": operator_vars},
        ) as dag:
            # Create the schema before running the models
            create_schema = SQLExecuteQueryOperator(
                task_id="create_schema",
                conn_id="trino_default",
                sql=f"CREATE SCHEMA IF NOT EXISTS {profile_database}.{profile_schema} ...",
            )
            # Attach it to the root tasks of the dbt graph
            for unique_id, _ in dag.dbt_graph.filtered_nodes.items():
                task = dag.tasks_map[unique_id]
                if not task.upstream_task_ids:
                    create_schema >> task

    Cosmos reads manifest.json from the dbt project, parses the model dependency graph, and creates a separate Airflow task for each model. Task dependencies are built automatically based on the ref() calls in the SQL.

    How Analysts Build Pipelines Without Developers

    Now, when an analyst needs a new recurring pipeline, they can put it together in a few steps:

    Step 1. Create a folder in the repo: dbt-projects/my_new_pipeline/.

    Step 2. If external data ingestion is needed, write a YAML config for dlt.

    Step 3. Write SQL models in the models/ folder and describe the sources in sources.yaml.

    Step 4. Create dbt_project.yaml and dag.yaml.

    Step 5. Push to Git, go through review, merge.

    CI/CD builds the dbt project and ships the artifacts to S3. Airflow reads the DAG files from there, and Cosmos parses the dbt project and generates the task graph. On schedule, dbt runs the models on Trino in the correct order. The end result is an updated data mart in the warehouse, accessible through Superset.

    What Changed After the Migration

    Before-and-after comparison showing pipeline delivery time dropping from one to three weeks under PySpark to one day with the YAML-based stack, and pipeline ownership shifting from developers to analysts.
    What changed: from weeks to one day, from developers to analysts

    For analysts to build pipelines on their own, they need to understand the ref() and source() concepts, the difference between table and incremental materialization, and the basics of Git. We ran a few internal workshops and put together step-by-step guides for each task type.
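
    The materialization choice, for instance, comes down to a single config block at the top of a model. A minimal illustration of the two options (the unique_key here is just an example):

        -- Full rebuild on every run:
        {{ config(materialized='table') }}

        -- Incremental: only new or changed rows are merged into the existing table:
        {{ config(
            materialized='incremental',
            incremental_strategy='merge',
            unique_key=['_tenant']
        ) }}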

    Why the New Stack Doesn’t Fully Replace PySpark

    For about 10% of our pipelines, PySpark is still the only option — when a transformation simply doesn’t fit into SQL. dbt supports Jinja macros, but that’s no substitute for full-blown Python. And it would be dishonest to skip over the limitations of the new tools.

    dlt + Delta: experimental upsert support. We use the Delta format in our storage layer. dlt’s Delta connector is marked as experimental, so the merge strategy didn’t work out of the box. We had to find workarounds — in some cases we used replace instead of merge (sacrificing incrementality), and in others we wrote custom processing_steps.

    Trino’s limited fault tolerance. Trino does have a fault-tolerance mechanism, but it works by writing intermediate results to S3. At our terabyte-scale data volumes, that is impractical — the sheer number of S3 operations makes it prohibitively expensive. Without fault tolerance enabled, if a Trino worker goes down, the entire query fails. Spark, by contrast, restarts just the failed task. We addressed this with DAG-level retries and by decomposing heavy models into chains of intermediate ones.

    UDFs and custom logic. In Spark, you can write custom logic in Python right inside the pipeline — super convenient. With the new architecture, that is much harder. dbt on top of Trino doesn’t help here: Jinja only generates SQL, and dbt’s Python models only work with Snowflake, Databricks, and BigQuery. You can write UDFs in Trino, but only in Java — with all the overhead that entails: a separate repo, a build pipeline, deploying JARs across all workers. So when a transformation doesn’t fit into SQL, you either end up with an unmaintainable SQL monster or a standalone script that breaks the lineage.

    What’s Next: Tests, Model Templates, and Training

    Better testing. We had solid pipeline testing in PySpark, but the new architecture is still catching up. Recent dbt versions introduced unit testing — you can now validate SQL model logic against mock data without spinning up the full pipeline. We want to add dbt tests both at the model level and as a separate monitoring layer.

    Reusable templates for common patterns. Many of our dbt models look alike. A single config could describe a dozen models with the same pattern — only the source table and filters differ. We plan to extract the shared logic into dbt macros; a sketch of the idea follows below.
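
    A hypothetical macro along these lines — the names are illustrative, not code we have shipped — would collapse each per-source event model to a single call:

        -- macros/mau_events_from.sql (hypothetical)
        {% macro mau_events_from(source_table, src_type) %}
        SELECT src._tenant, src.unmergedCustomerId,
               '{{ src_type }}' AS src_type, src.endpoint
        FROM {{ source('final', source_table) }} src
        WHERE src.unmergedCustomerId IS NOT NULL
        {% endmacro %}

        -- int_mau_events_visits.sql would then shrink to:
        {{ config(materialized='table') }}
        {{ mau_events_from('customerstracking_visits', 'visits') }}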

    Expanding the platform’s user base. We want more engineers and analysts to work with data independently. We’re planning regular internal training sessions, documentation, and onboarding guides so new users can get up to speed quickly and start building their own models.

    If your team is stuck in the same “analysts wait for developers” loop, I’d love to hear how you’re solving it. Connect with me on LinkedIn and let’s compare notes.


    All images in this article are by the author unless otherwise noted.


