Talend DI components / TDI-40374

Advanced Bulk Load for Snowflake


Details

    • Type: Epic
    • Status: Closed
    • Priority: Minor
    • Resolution: Fixed
    • Affects Version/s: None
    • Fix Version/s: 7.1.1
    • Component/s: None

    Description

      We are trying to differentiate ourselves from the competition regarding Snowflake and to add value to our subscription releases.

      Our respective CEOs have discussed our ability to create a unique differentiator by providing idempotent Snowflake load capabilities, i.e. loading with error handling.

      Here is the requirement on what is to be achieved.

      As a user, I am looking to load data into Snowflake with a quality of service (QoS) that assures consistent data across the multi-region Snowflake instance(s). Due to the data volumes, extended load time, and geographic distribution, it is quite possible that faults at the ETL server, network, or even regional infrastructure layers may occur. I would like to be able to configure a Talend bulk load component that ensures fault tolerance for any failure mode, so that the end-to-end bulk load operation can be assured of high availability.

      The first step is to design a job that provides idempotent functionality as a POC, and then to migrate from the POC to integrating the feature into the Snowflake component.
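
      A minimal sketch of the POC idea, assuming a hypothetical ORDERS table and an external stage named orders_stage: Snowflake's COPY INTO is already idempotent at the file level, because the target table's load metadata records every staged file loaded (retained for 64 days) and files seen before are skipped unless FORCE = TRUE. A job that fails mid-load can therefore simply re-issue the same COPY statement on restart.

      import java.sql.Connection;
      import java.sql.DriverManager;
      import java.sql.ResultSet;
      import java.sql.SQLException;
      import java.sql.Statement;
      import java.util.Properties;

      public class IdempotentCopyPoc {
          public static void main(String[] args) throws SQLException {
              Properties props = new Properties();
              props.put("user", System.getenv("SNOWFLAKE_USER"));
              props.put("password", System.getenv("SNOWFLAKE_PASSWORD"));
              props.put("db", "MY_DB");          // hypothetical database
              props.put("schema", "PUBLIC");
              props.put("warehouse", "LOAD_WH"); // hypothetical warehouse

              // Placeholder account URL; replace with the real account locator.
              String url = "jdbc:snowflake://myaccount.snowflakecomputing.com";

              try (Connection con = DriverManager.getConnection(url, props);
                   Statement stmt = con.createStatement()) {
                  // Re-running this statement after a mid-load failure does not
                  // duplicate rows: files already present in the table's load
                  // metadata are skipped instead of being reloaded.
                  String copy = "COPY INTO ORDERS "
                              + "FROM @orders_stage/batch_2018_11_01/ "
                              + "FILE_FORMAT = (TYPE = CSV) "
                              + "ON_ERROR = 'ABORT_STATEMENT'";
                  try (ResultSet rs = stmt.executeQuery(copy)) {
                      // One result row per processed file, with its load status.
                      while (rs.next()) {
                          System.out.println(rs.getString("file") + " -> "
                                  + rs.getString("status"));
                      }
                  }
              }
          }
      }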

      Specific design questions for prototype phase

      • Does the ETL process need to be aware / configured with multiple Snowflake endpoints in an active/passive manner?
      • How is state preserved across failures? (One option is sketched after this list.)
      • Attention should be paid to how notification of progress/failure is received by Talend processes; are any callbacks to the Talend process required?
      • How is accurate progress captured upon restoration of service? Idempotent patterns allow for resubmission but for some use cases where the unit of work was successful this may not be necessary.
      • Depending on the failure point, a network outage could prevent access to S3. Is this failure mode in scope? It becomes a separate use case of reliable publishing to S3 and is likely to be out of the POC.
      • Test strategy: what is the scope of testing appropriate for this phase?
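
      On the state question above, one option that avoids a separate checkpoint store is to treat Snowflake's own load history as the source of truth. A sketch against the same hypothetical ORDERS table, using the INFORMATION_SCHEMA.COPY_HISTORY table function (the 24-hour look-back window is an arbitrary choice):

      import java.sql.Connection;
      import java.sql.ResultSet;
      import java.sql.SQLException;
      import java.sql.Statement;
      import java.util.HashSet;
      import java.util.Set;

      public class LoadProgress {
          // Returns the staged file names the target table has already absorbed,
          // according to Snowflake's load history. A restarted job can diff this
          // set against the files it produced and resubmit only the missing
          // units of work, which also answers the "accurate progress" question.
          static Set<String> loadedFiles(Connection con, String table) throws SQLException {
              // Table name is assumed trusted here; a real component would
              // validate it rather than concatenate.
              String sql = "SELECT file_name, status "
                         + "FROM TABLE(INFORMATION_SCHEMA.COPY_HISTORY("
                         + "TABLE_NAME => '" + table + "', "
                         + "START_TIME => DATEADD(HOUR, -24, CURRENT_TIMESTAMP())))";
              Set<String> loaded = new HashSet<>();
              try (Statement stmt = con.createStatement();
                   ResultSet rs = stmt.executeQuery(sql)) {
                  while (rs.next()) {
                      if ("Loaded".equalsIgnoreCase(rs.getString("status"))) {
                          loaded.add(rs.getString("file_name"));
                      }
                  }
              }
              return loaded;
          }
      }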

      Based on the POC, it appears that we would need to support writing to S3 and then issuing a COPY from S3 into Snowflake.
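
      A sketch of that two-phase flow, assuming a hypothetical my-bulk-bucket bucket, the orders_stage stage from the earlier sketch, and the AWS SDK for Java v1 alongside the Snowflake JDBC driver. The S3 key is derived deterministically from the batch identifier, so a retried upload overwrites the same object rather than creating a duplicate, and the COPY that follows stays file-level idempotent:

      import com.amazonaws.services.s3.AmazonS3;
      import com.amazonaws.services.s3.AmazonS3ClientBuilder;

      import java.io.File;
      import java.sql.Connection;
      import java.sql.SQLException;
      import java.sql.Statement;

      public class S3ThenCopy {
          static void loadBatch(Connection con, String batchId, File csv) throws SQLException {
              // Deterministic key: retrying after a failure rewrites the same
              // object, so the staging area never accumulates duplicates.
              String key = "orders/" + batchId + ".csv";

              AmazonS3 s3 = AmazonS3ClientBuilder.defaultClient();
              s3.putObject("my-bulk-bucket", key, csv);

              // orders_stage is assumed to point at s3://my-bulk-bucket/orders/.
              // Scoping the COPY to this batch's file keeps the unit of work
              // small and independently retryable.
              try (Statement stmt = con.createStatement()) {
                  stmt.execute("COPY INTO ORDERS FROM @orders_stage/" + batchId
                             + ".csv FILE_FORMAT = (TYPE = CSV)");
              }
          }
      }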

      Failure Modes

      Processes and Services

      • Talend ETL job
      • S3 (might be transparent if we do not write directly to S3)
      • Snowflake Region Access
      • Network

      Processing Stages

      • Pre-S3 Write (connectivity errors?)
      • During S3 Write
      • After S3 Write before Bulk Load
      • During Bulk Load
      • After Bulk Load before Job End
      • Snowflake Replication
        Stage                  Talend  S3  Snowflake  Network
        Pre-S3 Write             X                       X
        During S3 Write          X     X                 X
        After S3 Write           X     X                 X
        During Bulk Load         X     X       X         X
        After Bulk Load          X             X         X
        Snowflake Replication                  X
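
      Most of the X cells above are transient faults, and because both the S3 write and the COPY are safe to repeat, one retry wrapper can cover them; a prolonged regional outage is the exception and needs failover or operator intervention. A minimal sketch:

      import java.util.concurrent.Callable;

      public class Retry {
          // Runs an idempotent unit of work (an S3 write or a COPY statement)
          // with exponential backoff, capped at one minute between attempts.
          static <T> T withBackoff(Callable<T> work, int maxAttempts) throws Exception {
              long delayMs = 1_000;
              for (int attempt = 1; ; attempt++) {
                  try {
                      return work.call();
                  } catch (Exception e) {
                      if (attempt >= maxAttempts) {
                          throw e; // exhausted: surface the failure to the job
                      }
                      Thread.sleep(delayMs);
                      delayMs = Math.min(delayMs * 2, 60_000);
                  }
              }
          }
      }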

      Feedback from presales: we would need to investigate the encryption mechanism when referencing the S3 file in the staging area for Snowflake to load it.
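
      As a starting point for that investigation, the encryption declaration lives on the stage definition, so it largely comes down to which ENCRYPTION type the staged files use. A hypothetical example (stage name, bucket, and key alias are placeholders; credentials elided):

      import java.sql.Connection;
      import java.sql.SQLException;
      import java.sql.Statement;

      public class StageEncryption {
          static void createEncryptedStage(Connection con) throws SQLException {
              // AWS_SSE_KMS covers server-side encryption with a customer-managed
              // KMS key; files encrypted client-side would instead declare
              // TYPE = 'AWS_CSE' with a MASTER_KEY so Snowflake can decrypt them.
              String ddl = "CREATE OR REPLACE STAGE orders_stage "
                         + "URL = 's3://my-bulk-bucket/orders/' "
                         + "CREDENTIALS = (AWS_KEY_ID = '...' AWS_SECRET_KEY = '...') "
                         + "ENCRYPTION = (TYPE = 'AWS_SSE_KMS' KMS_KEY_ID = 'aws/key-alias')";
              try (Statement stmt = con.createStatement()) {
                  stmt.execute(ddl);
              }
          }
      }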


People

    Assignee: Unassigned
    Reporter: benjamin boutros (bboutros)
    Votes: 0
    Watchers: 4
