Understanding this page

On this page, we’ll walk through an example of a DryMerge workflow. We’ll first describe the workflow, discuss the key DryMerge concepts it showcases, show the code, and finally break down its key sections with explanations.

Hello World

Hello World (In DryMerge)

Let’s build a simple API that returns “Hello World” when you hit the endpoint. Then we can add to it to make it a little more interesting. Alternatively, you can start playing around with workflow ideas with Dry Make GPT.

Create a new file

In your current directory, create a new file with the .yaml extension. At the top, specify your organization:

organization: DryMerge

Add your first node :)

For now, let’s make a simple node. We’ll call it hello-world and it’ll return “Hello World” when you hit the endpoint. There are three components we need to define for any API node: id, request, and dependencies. Let’s think through them one by one.

Define the node

Nodes are defined within the nodes field and are given names and types by their id. The id is the key we use to refer to your API node, in the format [name].[type]. We can define ours like this:

nodes:
    hello-world.api:
        ...

Note that the ... just indicates in the documentation that this is part of a bigger YAML object; it doesn’t have any special meaning :)

Define the request schema

The request schema is a YAML object that defines the HTTP request the node will make. Here, you can specify body, url, headers, method, or params. For example, a simple GET request to fetch the time could look like this:

nodes:
    get-time.api:
        request:
            url: https://worldtimeapi.org/api/timezone/America/New_York
            method: GET

Our node will be simple. There’s a neat endpoint we set up at https://std.drymerge.com/testing/reflect that just echoes your request body back to you. With that knowledge, we can get it to send Hello World! back to us! Try defining the core of the request schema yourself, then see how well it matches the version we set up below:

  ...
    request:
        url: https://std.drymerge.com/testing/reflect
        method: POST
        headers:
            Content-Type: application/json
        body:
            dry_string: 'Hello World!'
  ...

Define the dependency schema

The dependency schema is a YAML object that defines the node’s dependencies. This node doesn’t need any, but let’s show how we could check the time before calling it.

Here, we say to call the node with id get-time.api before we call our Hello World API. We’ve also aliased the response as time, which we can use to access any part of the response body.

nodes:
    hello-world.api:
        request:
            ...
        dependencies:
            time:
                id: get-time.api

Put it all together

Now that we have all the components, we can put it all together into one big YAML file. It’ll look like this:

organization: DryMerge
version: 1
nodes:
  hello-world.api:
    request:
      url: https://std.drymerge.com/testing/reflect
      method: POST
      headers:
        Content-Type: application/json
      body:
        dry_string: 'Hello World! (at {{context.time.datetime}} )'
    dependencies:
      time:
        id: get-time.api
  get-time.api:
    request:
      url: https://worldtimeapi.org/api/timezone/America/New_York
      method: GET

Save this YAML file and keep track of its path.

Upload your node

Now we can upload our node to the DryMerge engine, after which we can run it from anywhere. Simply use the dry up command like this:

dry up <path-to-your-yaml>

You should get a success message! If not, make sure that you’re authenticated and that you’ve provided the right path to your YAML file.

Call your node

This is the whole point of a node, right? Let’s call it! There are two ways to call your node: via the CLI or via the API.

Call your node via the CLI

If you’re using the CLI, you can use the dry run command to run your node like this:

dry run hello-world.api

Call your node via the API

Both the CLI and browser make underlying calls to the DryMerge API. You can make one of these calls yourself if you want to! One way to do it is via curl, like this:

curl --location --request POST 'https://api.drymerge.com/execute/DryMerge/~/hello-world.api' \
--header 'Content-Type: application/json' --data-raw '{}'

Other ways to call your node

You could also call your node from an API testing tool like Postman or Insomnia, or from a programming language like Python or JavaScript. The possibilities are endless!
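For instance, here’s a minimal sketch of the same call in Python using only the standard library. The URL mirrors the curl example above; add whatever authentication headers your setup requires.

```python
import json
import urllib.request

API_URL = "https://api.drymerge.com/execute/DryMerge/~/hello-world.api"

def build_request() -> urllib.request.Request:
    # POST an empty JSON body, just like the curl example.
    return urllib.request.Request(
        API_URL,
        data=json.dumps({}).encode(),
        headers={"Content-Type": "application/json"},
        method="POST",
    )

# response = urllib.request.urlopen(build_request())  # sends the request
```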

Support Bot

Workflow Description

This workflow is a simple support bot that dynamically responds to user emails. The workflow is triggered by a user’s email. It then consults data stores containing information about the user, their history, and their account. Finally, it sends a response email to the user using an LLM.

Key Concepts

  1. Trigger: The workflow is triggered by an email. The trigger is defined in the trigger section of the workflow.
  2. Multi-Language Functions: This workflow employs function nodes to hook into existing code and form a workflow over multiple languages.
  3. Imports and Templates: Several of the nodes referenced in this workflow are imported via templates, demonstrating code reuse.
  4. Transformations: The workflow uses transformations to convert data from one format to another.
  5. Maps: The workflow uses a map to apply a node to every member of an iterable.
  6. Hydration: The workflow uses JSON templating to hydrate values dynamically.

Code

DryMerge YAML

First, we have the YAML file that defines the workflow.

organization: DryMerge
version: 1
infra:
  # Set up a trigger to react to a new gmail.
  monitor-email.import:
    use:
      - id: DryMerge/google/new-gmail.template
        hydrate:
          access_token: '{{secrets.google_access_token}}'
          action:
            id: google/send-email.api

nodes:
  # Send the actual email response back.
  email-back.import:
    use:
      - id: DryMerge/google/send-email.template
        but:
          <google/send-email-initializer.merge>.dependencies.request-content:
            id: extract-email.merge
          <google/send-email-initializer.merge>.dependencies.response:
            id: respond-to-user.api
        hydrate:
          access_token: '{{secrets.google_access_token}}'
        replace:
          '{{context.to}}': '{{context.request-content.from}}'
          '{{context.subject}}': 'Re: {{context.request-content.subject}}'
          '{{context.body}}': '{{context.response}}'

  # Node that composes a response to the user using the OpenAI API.
  respond-to-user.import:
    use:
      - id: openai/gpt-3.api
        as: respond-to-user.api
        but:
          # Set the system instructions to the response-instruction.merge node's output.
          <respond-to-user.api>.dependencies.meta:
            id: response-instruction.merge
          # Set the full context to the response-context.merge node's output.
          <respond-to-user.api>.dependencies.prompt:
            id: response-context.merge

  # Merge the data from the extracted email, the user-contextual-data.fn, and the request-urgency.fn nodes.
  response-context.merge:
    merge:
      # Merge the user contextual data and request urgency into the prompt.
      dry_string: "You're a customer service representative named Edward. This is a service request that you are to respond to: {{context.request-content}}. Here's pertinent user data to help you respond: {{context.user-context}}. The request's urgency is {{context.request-urgency}}."
    dependencies:
      request-content:
        id: extract-email.merge
      user-context:
        id: user-contextual-data.fn
      request-urgency:
        id: request-urgency.fn

  # Insert some simple data describing how to respond to the user.
  response-instruction.merge:
    merge: 'Please respond to the user as a customer service representative. Attempt to resolve the issue. If you cannot resolve the issue, escalate the issue to a manager (with user instructions).'


  # Get user data, including purchase history/past interactions from our python service.
  user-contextual-data.fn:
    name: 'user-data'
    with:
      dry_string: '{{context.request-content.from}}'
    proxy:
      id: python-worker.queue
    dependencies:
      request-content:
        id: extract-email.merge

  # Use our TypeScript code to assign an urgency request to the customer.
  request-urgency.fn:
    name: 'request-urgency'
    with:
      email_content:
        dry_string: 'Subject: {{context.request-content.subject}} Body: {{context.request-content.body}}'
      customer_history:
        dry_value: '{{context.user-context}}'
    proxy:
      id: ts-worker.queue
    dependencies:
      request-content:
        id: extract-email.merge
      user-context:
        id: user-contextual-data.fn

  # Get the latest new email for the sake of example; we could do it for every email with more maps.
  extract-email.merge:
    merge:
      dry_value: '{{context.request-content.0}}'
    dependencies:
      request-content:
        id: extract-email-content.api

  # Node that extracts the email content from the gmail API.
  extract-email-content.api:
    request:
      url:
        # Fetch the email content from the map id.
        dry_string: 'https://www.googleapis.com/gmail/v1/users/me/messages/{{map.id}}'
      method: GET
      headers:
        Authorization:
          dry_string: 'Bearer {{secrets.google_access_token}}'

    # Map over every new email in this meta field.
    map:
      field: 'meta.compare.diff'

    # 3 quick transformations to extract readable data from the API response.
    transform:
      - query:
          expression: |
            {
              from: payload.headers[?name=='From'].value | [0],
              subject: payload.headers[?name=='Subject'].value | [0],
              body: payload.parts[?mimeType=='text/plain'].body.data | [0]
            }
      - convert:
          from: base64
          to: string
          field: body
      - regex:
          pattern: '<(.+?)>'
          field: from

Auxiliary Python

Next, we have the Python code that is used to fetch user data.

import random
from datetime import datetime, timedelta
from drymerge import DryClient
from drymerge.identity import DryId


def get_customer_history(customer_email):
    # Seed the random number generator for consistency
    random.seed(customer_email)

    # Initialize customer history with some base data
    customer_history = {
        "purchase_history": [],
        "support_history": []
    }

    # Adding random additional purchase and support history
    for _ in range(random.randint(1, 5)):
        days_ago = random.randint(1, 365)
        purchase_date = (datetime.now() - timedelta(days=days_ago)).strftime("%Y-%m-%d")
        customer_history["purchase_history"].append({
            "product_id": str(random.randint(1000, 9999)),
            "product_name": random.choice(["Headphones", "Keyboard", "Monitor"]),
            "purchase_date": purchase_date
        })

        issue_date = (datetime.now() - timedelta(days=days_ago)).strftime("%Y-%m-%d")
        customer_history["support_history"].append({
            "date": issue_date,
            "issue": random.choice(["Software issue", "Hardware malfunction", "Inquiry about product"])
        })

    return customer_history


client = DryClient(
    api_key="2730036d8f3978ec80f0d821e8cf2d86d5fcb9e1140cbf2cbbe2f314e45606eb5126d25d0c30277847ba6d1d8fe132ba",
    proxy_name=DryId("python-worker", "queue"))

client.route("user-data", get_customer_history)
client.start()

Auxiliary TypeScript

And finally, we have the TypeScript code that assigns an urgency level to the customer’s request.


import { parse, format } from 'date-fns';
import {DryClient, DryId} from 'drymerge'
interface Request {
    email_content: string;
    customer_history: CustomerHistory;
}

interface CustomerHistory {
    support_history?: SupportIssue[];
    purchase_history?: Purchase[];
}

interface SupportIssue {
    date: string;
}

interface Purchase {
    purchase_date: string;
}

function calculateUrgency(request: Request): string {
    let urgencyScore = 0;
    const emailContent = request.email_content;
    const customerHistory = request.customer_history;

    // Increase urgency if the email content contains urgent words
    const urgentWords = ['urgent', 'asap', 'immediately', 'help'];
    for (const word of urgentWords) {
        if (emailContent.toLowerCase().includes(word)) {
            urgencyScore += 2;
        }
    }

    // Increase urgency based on recent unresolved support issues
    let recentIssues = 0;
    for (const issue of customerHistory.support_history || []) {
        const issueDate = parse(issue.date, 'yyyy-MM-dd', new Date());
        const daysSinceIssue = Math.floor((Date.now() - issueDate.getTime()) / (1000 * 60 * 60 * 24));
        if (daysSinceIssue < 30) {
            recentIssues += 1;
        }
    }

    if (recentIssues > 3) {
        urgencyScore += 2;
    } else if (recentIssues > 0) {
        urgencyScore += 1;
    }

    // Adjust urgency based on the customer's purchase history in the last year
    let purchasesLastYear = 0;
    for (const purchase of customerHistory.purchase_history || []) {
        const purchaseDate = parse(purchase.purchase_date, 'yyyy-MM-dd', new Date());
        const daysSincePurchase = Math.floor((Date.now() - purchaseDate.getTime()) / (1000 * 60 * 60 * 24));
        if (daysSincePurchase < 365) {
            purchasesLastYear += 1;
        }
    }

    if (purchasesLastYear > 5) {
        urgencyScore += 1;
    }

    // Determine urgency level based on the calculated score
    if (urgencyScore >= 3) {
        return 'High Urgency';
    } else if (urgencyScore >= 1) {
        return 'Medium Urgency';
    } else {
        return 'Low Urgency';
    }
}

const client = new DryClient(
    '2730036d8f3978ec80f0d821e8cf2d86d5fcb9e1140cbf2cbbe2f314e45606eb5126d25d0c30277847ba6d1d8fe132ba',
    false,
    new DryId('ts-worker', 'queue'),
);

client.route('request-urgency', calculateUrgency);

client.start();

Key Section Breakdown

The Trigger

We see in the first section of the DryMerge workflow, under infra, that we import a template for a new-email trigger. Note the two configurables passed in through hydration.

  1. The access token, {{secrets.google_access_token}}, is a secret that is encrypted and stored in the DryMerge database. It is used to authenticate the workflow with the Gmail API. You can add a secret via the CLI with dry secret upsert --name <name> --value <value>.
  2. The action, google/send-email.api, is the action that is triggered when a new email is received. This action is defined in the nodes section of the workflow.

Multi-Language Functions

In the Python and TypeScript sections, we see existing functions connected to the DryMerge workflow layer via the SDK client. A couple of notes:

  1. In both Python and TypeScript, we link to a queue with DryId("python-worker", "queue") and new DryId('ts-worker', 'queue'), respectively. Similarly, in the DryMerge YAML, we link to the same queue in our .fn function nodes with python-worker.queue and ts-worker.queue. This is the first step in linking functions to the workflow layer.
  2. We also specify names of the functions in the SDK client: client.route("user-data", get_customer_history) and client.route('request-urgency', calculateUrgency). Those names are also specified in the DryMerge YAML: name: 'user-data' and name: 'request-urgency'.

Imports and Templates

To actually send off the email, we import a template for the google/send-email node. When importing this template, you can see two new sections: a but section and a replace section.

  1. The replace section replaces arbitrary strings in the template with strings we want; perfect for making arguments match the format you receive them in. To view the arguments used in a template so that you know what you can replace, use the CLI: dry summarize DryMerge/google/send-email.template (or whatever the template’s name is).
  2. The but section allows us to index into and change the JSON structure of the template. In this case, we are changing the dependencies of the google/send-email-initializer.merge node to reflect the output of the extract-email.merge and respond-to-user.api nodes. Note the <> brackets around the node name: they ensure the engine knows we’re indexing into the key google/send-email-initializer.merge as a whole, rather than into google/send-email-initializer and then the merge field of that object.

Transformations

If you check the extract-email-content.api node, you’ll see that we use a transformation to convert the response from the Gmail API into a more readable format. We use the query transformation to extract the from, subject, and body fields from the response. We then use the convert transformation to convert the body field from base64 to a string. Finally, we use the regex transformation to extract the email address from the from field. These transformations are applied in order!
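To make the pipeline concrete, here’s a rough Python equivalent of those three transformations. The Gmail message shape is simplified, and this is an illustration of the logic, not the engine’s implementation:

```python
import base64
import re

def transform(message: dict) -> dict:
    """Mimic extract-email-content.api's transform chain by hand:
    1. query: pick from/subject/body out of the message payload,
    2. convert: base64-decode the body,
    3. regex: pull the bare address out of the 'From' header."""
    headers = {h["name"]: h["value"] for h in message["payload"]["headers"]}
    body_b64 = next(
        (p["body"]["data"] for p in message["payload"]["parts"]
         if p["mimeType"] == "text/plain"),
        "",
    )
    result = {
        "from": headers.get("From", ""),
        "subject": headers.get("Subject", ""),
        "body": base64.urlsafe_b64decode(body_b64).decode(),
    }
    match = re.search(r"<(.+?)>", result["from"])
    if match:
        result["from"] = match.group(1)
    return result
```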

Maps

We use a map to extract content from every email passed from the trigger in extract-email-content.api. Note that we use the meta templating field: this field is passed down by the DryMerge infrastructure that called the workflow. In this case, it comes from the email trigger; the diff contains all the new emails received since the last time we checked.
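As a rough mental model (not the engine’s actual implementation), a map fans one node out over an iterable and collects the results:

```python
def apply_map(node, items):
    # Conceptual sketch: the engine runs the node once per item in the
    # mapped field (here, meta.compare.diff), exposing each item as
    # {{map.*}} and collecting the results.
    return [node(item) for item in items]

# e.g. "fetch" each new email by its id
summaries = apply_map(lambda email: f"fetched {email['id']}",
                      [{"id": "a1"}, {"id": "b2"}])
```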

Hydration

We use hydration all over the place here. Note the four namespaces: context, map, meta, and secrets.

  1. context is used to access data passed by the user during the API call or generated by node dependencies.
  2. map is used to access the current item in an iterable created by the map keyword in a node definition.
  3. meta is used to access data passed by the DryMerge infrastructure that called the workflow. In this case, it comes from the email trigger; the diff contains all the new emails received since the last time we checked.
  4. secrets is used to access secrets stored in the DryMerge database. You can add a secret via the CLI with dry secret upsert --name <name> --value <value>.
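As a mental model, hydration is template substitution over those namespaces. Here’s a minimal sketch of the idea; the real engine supports richer expressions than this:

```python
import re

def hydrate(template: str, scopes: dict) -> str:
    """Replace {{scope.path}} references with values looked up in
    nested dicts, e.g. {{context.time.datetime}}. Illustrative only."""
    def lookup(match):
        value = scopes
        for part in match.group(1).split("."):
            value = value[part]
        return str(value)
    return re.sub(r"\{\{([\w.-]+)\}\}", lookup, template)
```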

Search Nodes

Workflow Description

The workflows described here utilize search nodes to extract specific data points from dynamic or unstructured content. They illustrate how to configure search nodes to identify and return structured data, such as capital cities from country information or dining capacity from restaurant details.

Key Concepts

  1. Search Definition: Each search node defines what to extract, such as capital cities or currencies, from the content provided.
  2. Content Provisioning: The content field is used to specify the source of the unstructured data, which can be a dynamic context variable.
  3. Dependency Management: Search nodes can depend on other API nodes, leveraging their responses as input for the search.
  4. Search Parameters: Within a search node, various parameters such as name, description, and type define the structured data to be extracted.
  5. Hydration: Similar to other nodes, search nodes can dynamically hydrate values based on the workflow’s context or previous nodes’ outputs.

Code

DryMerge YAML

Here is the YAML file defining the search nodes for extracting country and dining information.

organization: DryMerge
version: 1
nodes:
  country.search:
    content:
      dry_value: '{{context.country}}'
    search:
      - name: 'capital'
        description: "What's the capital of the country?"
        type: string
      - name: 'currency'
        description: "What currency does the country use?"
        type: string
    dependencies:
      country:
        id: country.api
        args:
          name: 'Germany'

  country.api:
    request:
      url: 'https://restcountries.com/v3.1/name/Germany'
      method: GET

  dining-info.search:
    content:
      dry_value: '{{context.restaurant}}'
    search:
      - name: 'address'
        description: "What's the physical address of the restaurant? Street/directions, etc."
        type: string
      - name: 'phone'
        description: "What's the phone number of the restaurant?"
        type: string
        array: true
      - name: 'capacity'
        description: "What's the maximum capacity (in seated people) an event at this restaurant can hold? Look for things related to capacity and seating"
        type: number
    dependencies:
      restaurant:
        id: restaurant.api

  restaurant.api:
    request:
      url: https://onemarket.com/private-dining/
      method: GET

Key Section Breakdown

Search Nodes

  • country.search and dining-info.search: These are the search nodes that define what data to extract from the given content. They both utilize a dry_value to specify the context from which to extract information, which is the country or restaurant data.

Search Parameters

  • Each search node contains a list of search parameters. For instance, country.search looks for the capital and currency of a given country, while dining-info.search seeks to find address, phone number, and capacity of a restaurant.
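For intuition, a successful run of country.search returns an object shaped by its search parameters; for Germany, the result would look something like this (values illustrative):

```json
{
  "capital": "Berlin",
  "currency": "Euro"
}
```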

Dependencies

  • country.api and restaurant.api: These nodes are API calls that the search nodes depend on. They fetch the necessary data from external sources, which is then processed by the search nodes.

API Nodes

  • The API nodes (country.api and restaurant.api) perform GET requests to specified URLs to retrieve country details and restaurant information, respectively.

Content Provisioning

  • The content within the search nodes is provided directly by the dry_value field, referencing context.country or context.restaurant as the source for search.

Dynamic Configuration

  • The search nodes can be dynamically configured using arguments passed to their dependencies. For example, in country.api, the country name ‘Germany’ is passed as an argument to fetch specific country details.

A user can input a country name, and the country.search node will extract structured data such as the capital city and the currency used in the country. This is particularly useful for applications that need to present users with concise information without manual data entry.

In the dining-info.search node, a user can provide the name of a restaurant, and the node will extract details such as the address, phone numbers, and seating capacity. This can be leveraged in applications related to event planning or restaurant booking systems to automate the gathering of venue details.

Summarize Commits Workflow

Workflow Description

This workflow in DryMerge is designed to trigger on new GitHub commits and send a summary of the commit changes to a Slack channel. It’s a powerful example of how DryMerge can be integrated with version control and communication platforms to streamline workflows.

Key Concepts

  1. Trigger: The workflow is initiated by a new commit in a GitHub repository.
  2. Integration with External Services: It demonstrates integration with GitHub for commit data and Slack for messaging.
  3. Data Processing and Summarization: Uses the OpenAI API to generate summaries of the commits.
  4. Hydration and Templating: Shows the use of dynamic data injection (hydration) into workflow nodes.

Code

DryMerge YAML

Here’s the YAML configuration for the workflow:

organization: DryMerge
version: 1
infra:
  # Node that triggers on a new GitHub commit.
  commit-trigger.import:
    use:
      - id: github/new-commit.template
        hydrate:
          identifier: ''
          owner: 'DryMergeInc'
          repo: 'graphapi'
          branch: 'staging'
          access_token: '{{secrets.github_access_token}}'
          action:
            id: slack/send-message-for-new-commit.api

nodes:
  # Send a Slack message to the channel with a summary of new commit changes.
  send-summary.import:
    use:
      - id: slack/send-message.template
        hydrate:
          identifier: '-for-new-commit'
          access_token: '{{secrets.slack_access_token}}'
          channel_name: 'ai-reports'
          message_content: '{{context.message}}'
        but:
          <slack/send-message-for-new-commit.api>.dependencies.message:
            id: generate-commit-summary.api

  # Node that composes a response using the OpenAI API.
  summarize-changes.import:
    use:
      - id: openai/gpt-3.api
        as: generate-commit-summary.api
        but:
          <generate-commit-summary.api>.dependencies.meta:
            id: summary-instructions.merge
          <generate-commit-summary.api>.dependencies.prompt:
            id: response-context.merge

  # Node defining system prompt for OpenAI API.
  summary-instructions.merge:
    merge: 'Please summarize the changes provided in the commit(s) below.'

  # Merge the data from the commit diff and file system changes.
  response-context.merge:
    merge:
      dry_string: "Here's a summary of the commit(s): {{context.fetch-commit-diff}}"
    dependencies:
      fetch-commit-diff:
        id: fetch-commit-diff.api

  # Node fetching commit changes using GitHub API.
  fetch-commit-diff.api:
    request:
      url:
        dry_string: 'https://api.github.com/repos/DryMergeInc/graphapi/commits/{{map.sha}}'
      method: GET
      headers:
        Authorization:
          dry_string: 'Bearer {{secrets.github_access_token}}'
        User-Agent:
          dry_string: 'DryMerge'
    map:
      field: 'meta.compare.diff'

Key Section Breakdown

Slack Message Sending

The send-summary.import node is responsible for sending the commit summary to a Slack channel. It imports a Slack message sending template and is configured to post in the ‘ai-reports’ channel. The message content is dynamically set to the output of the commit summary generation.

Commit Summary Generation

The summarize-changes.import node leverages the OpenAI API to generate a concise summary of the commits. It imports an OpenAI GPT-3 template and is configured to use the output from the summary-instructions.merge and response-context.merge nodes as input.

Summary Instructions

The summary-instructions.merge node provides a structured prompt for the OpenAI API, instructing it to summarize the changes in the commit.

Response Context Compilation

The response-context.merge node merges data from the commit diff, fetched by the fetch-commit-diff.api node, to provide a detailed context for the summary generation.

Fetching Commit Changes

The fetch-commit-diff.api node fetches detailed information about each commit using the GitHub API. It’s configured to fetch data for commits on the ‘staging’ branch of the specified repository. The node uses a map to process each commit in the list provided by the trigger.

Workflow Functionality

This DryMerge workflow efficiently automates the process of notifying a Slack channel about new commits in a GitHub repository. It not only notifies but also provides a detailed summary of the changes, making it easier for team members to stay updated with the development progress without having to manually check each commit.

Use Case

Such a workflow is particularly useful in continuous integration/continuous deployment (CI/CD) environments, where keeping track of changes and quickly communicating them across the team is crucial for smooth operations.