
Agent Integrations

Information on how to integrate agentic frameworks with Dapr runtime

What are agent integrations in Dapr?

Dapr augments and enhances other agentic frameworks by providing the critical features they need to run in production, such as durable workflow execution, state management, pub/sub messaging, and built-in observability.

With Dapr, developers writing AI systems using the framework of their choice enjoy accelerated development via the Dapr APIs and gain confidence taking agentic systems into production.

1 - CrewAI

Dapr first-class integrations with CrewAI Agents

What is the Dapr CrewAI integration?

Dapr provides CrewAI agents with first-class integrations, ranging from agent session management to connecting agents via pub/sub and orchestrating agentic workflows.

1.1 - CrewAI Workflows

How to run CrewAI agents with durable, fault-tolerant execution using Dapr Workflows

Overview

Dapr Workflows make it possible to run CrewAI agents reliably, durably, and with built-in resiliency.
By orchestrating CrewAI tasks with the Dapr Workflow engine, developers can:

  • Ensure long-running CrewAI work survives crashes and restarts.
  • Get automatic checkpoints, retries, and state recovery.
  • Run each CrewAI task as a durable activity.
  • Observe execution through tracing, metrics, and structured logs.

This guide walks through orchestrating multiple CrewAI tasks using Dapr Workflows, ensuring completed steps are not re-run even if the process restarts.

Getting Started

Initialize Dapr locally to set up a self-hosted environment for development. This process installs the Dapr sidecar binaries, provisions the workflow engine, and prepares a default components directory. For full details, see the guide on initializing Dapr locally.

Initialize Dapr:

dapr init

Verify that containers with the daprio/dapr, openzipkin/zipkin, and redis images are running:

docker ps

Create a Python venv

python -m venv .venv
source .venv/bin/activate     # Windows: .venv\Scripts\activate

Install Dependencies

pip install dapr dapr-ext-workflow crewai

Create a Workflow to Run CrewAI Tasks

Create a file named crewai_workflow.py and paste the following:

from dapr.ext.workflow import (
    WorkflowRuntime,
    DaprWorkflowContext,
    WorkflowActivityContext,
    DaprWorkflowClient,
)
from crewai import Agent, Task, Crew

wfr = WorkflowRuntime()

# ------------------------------------------------------------
# 1. Define Agent, Tasks, and Task Dictionary
# ------------------------------------------------------------
agent = Agent(
    role="Research Analyst",
    goal="Research and summarize impactful technology updates.",
    backstory="A skilled analyst who specializes in researching and summarizing technology topics.",
)

tasks = {
    "latest_ai_news": Task(
        description="Find the latest news about artificial intelligence.",
        expected_output="A 3-paragraph summary of the top 3 stories.",
        agent=agent,
    ),
    "ai_startup_launches": Task(
        description="Summarize the most impactful AI startup launches in the last 6 months.",
        expected_output="A list summarizing 2 AI startups with links.",
        agent=agent,
    ),
    "ai_policy_updates": Task(
        description="Summarize the newest AI government policy and regulation updates.",
        expected_output="A bullet-point list summarizing the latest policy changes.",
        agent=agent,
    ),
}

# ------------------------------------------------------------
# 2. Activity — runs ONE task by name
# ------------------------------------------------------------
@wfr.activity(name="run_task")
def run_task_activity(ctx: WorkflowActivityContext, task_name: str):
    print(f"Running CrewAI task: {task_name}", flush=True)

    task = tasks[task_name]

    # Create a Crew for just this one task
    temp_crew = Crew(agents=[agent], tasks=[task])

    # kickoff() works across CrewAI versions
    result = temp_crew.kickoff()

    return str(result)

# ------------------------------------------------------------
# 3. Workflow — orchestrates tasks durably
# ------------------------------------------------------------
@wfr.workflow(name="crewai_multi_task_workflow")
def crewai_workflow(ctx: DaprWorkflowContext):
    print("Starting multi-task CrewAI workflow", flush=True)

    latest_news = yield ctx.call_activity(run_task_activity, input="latest_ai_news")
    startup_summary = yield ctx.call_activity(run_task_activity, input="ai_startup_launches")
    policy_updates = yield ctx.call_activity(run_task_activity, input="ai_policy_updates")

    return {
        "latest_news": latest_news,
        "startup_summary": startup_summary,
        "policy_updates": policy_updates,
    }

# ------------------------------------------------------------
# 4. Runtime + Client (entry point)
# ------------------------------------------------------------
if __name__ == "__main__":
    wfr.start()

    client = DaprWorkflowClient()
    instance_id = "crewai-multi-01"

    client.schedule_new_workflow(
        workflow=crewai_workflow,
        input=None,
        instance_id=instance_id
    )

    # Allow generous time for the three LLM-backed tasks to complete
    state = client.wait_for_workflow_completion(instance_id, timeout_in_seconds=300)
    print(state.serialized_output)

    wfr.shutdown()

This script starts a workflow that runs three CrewAI tasks in sequence: gathering the latest AI news, summarizing recent startup launches, and summarizing policy updates.

Create the Workflow Database Component

Dapr Workflows persist durable state in any Dapr state store that supports actors. Create a directory named components, then create the file workflowstore.yaml inside it:

apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: workflowstore
spec:
  type: state.redis
  version: v1
  metadata:
  - name: redisHost
    value: localhost:6379
  - name: redisPassword
    value: ""
  - name: actorStateStore
    value: "true"

This component stores:

  • Code execution checkpoints
  • Execution history
  • Deterministic resumption state
  • Final output data
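Redis is used here, but any actor-capable state store works. For example, a PostgreSQL-backed workflow store might look like the following sketch (the connection string values are placeholders):

```yaml
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: workflowstore
spec:
  type: state.postgresql
  version: v1
  metadata:
  - name: connectionString
    value: "host=localhost user=postgres password=example port=5432 database=dapr"
  - name: actorStateStore
    value: "true"
```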

Set a CrewAI LLM Provider

CrewAI needs an LLM configuration or token to run. See instructions here.

For example, to set up OpenAI:

export OPENAI_API_KEY=sk-...

Run the Workflow

Launch the CrewAI workflow using the Dapr CLI:

dapr run \
  --app-id crewaiwf \
  --dapr-grpc-port 50001 \
  --resources-path ./components \
  -- python3 ./crewai_workflow.py

As the workflow runs, each CrewAI task is executed as a durable activity. If the process crashes, the workflow resumes exactly where it left off. You can try this by killing the process after the first activity completes and then rerunning the command above with the same app ID.

Open Zipkin to view workflow traces:

http://localhost:9411

2 - OpenAI Agents

Dapr first-class integrations for OpenAI Agents

What is the Dapr OpenAI Agents integration?

Dapr provides OpenAI agents with first-class integrations, ranging from agent session management to connecting agents via pub/sub and orchestrating agentic workflows. The Dapr OpenAI integration is an extension to the OpenAI Agents Python SDK that developers can use to augment OpenAI agents with the various Dapr APIs.

2.1 - Agent Sessions

How to use Dapr to reliably and securely manage agent state

Overview

By using Dapr to manage the state and session data for OpenAI agents, you can store agent state in any of the databases supported by Dapr, including key/value stores, caches, and SQL databases. Developers also get built-in tracing, metrics, and resiliency policies that make agent session data operate reliably in production.

Getting Started

Initialize Dapr locally to set up a self-hosted environment for development. This process fetches and installs the Dapr sidecar binaries, runs essential services as Docker containers, and prepares a default components folder for your application. For detailed steps, see the official guide on initializing Dapr locally.

To initialize the Dapr control plane containers and create a default configuration file, run:

dapr init

Verify you have container instances with daprio/dapr, openzipkin/zipkin, and redis images running:

docker ps

Install Dependencies

pip install openai-agents dapr

Create an OpenAI Agent

Let’s create a simple OpenAI agent. Put the following in a file named openai_agent.py:

import asyncio
from agents import Agent, Runner
from agents.extensions.memory.dapr_session import DaprSession

async def main():
    agent = Agent(
        name="Assistant",
        instructions="Reply very concisely.",
    )

    session = DaprSession.from_address(
        session_id="123",
        state_store_name="statestore"
    )

    result = await Runner.run(agent, "What city is the Golden Gate Bridge in?", session=session)
    print(result.final_output)

    result = await Runner.run(agent, "What state is it in?", session=session)
    print(result.final_output)

    result = await Runner.run(agent, "What's the population?", session=session)
    print(result.final_output)

asyncio.run(main())

Set an OpenAI API key

export OPENAI_API_KEY=sk-...

Create a Python venv

python -m venv .venv
source .venv/bin/activate  # On Windows: .venv\Scripts\activate

Create the database component

The component file is how Dapr connects to your database. The full list of supported databases can be found here. Create a components directory and add this file in it:

statestore.yaml:

apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: statestore
spec:
  type: state.redis
  version: v1
  metadata:
  - name: redisHost
    value: localhost:6379
  - name: redisPassword
    value: ""

Run the Agent

Now run the local Dapr process and your Python script using the Dapr CLI:

dapr run --app-id openaisessions --dapr-grpc-port 50001 --resources-path ./components -- python3 ./openai_agent.py

Open http://localhost:9411 to view the traces and dependency graph.

You can see the session data stored in Redis by running the following command in redis-cli:

hgetall "123:messages"

Next Steps

Now that you have an OpenAI agent using Dapr to manage agent sessions, explore what more you can do with the State API and how to enable resiliency policies for enhanced reliability.
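As a starting point, a resiliency spec placed alongside your components could retry failed state store calls. The following is a sketch; the policy name and intervals are illustrative:

```yaml
apiVersion: dapr.io/v1alpha1
kind: Resiliency
metadata:
  name: sessionresiliency
spec:
  policies:
    retries:
      sessionRetry:
        policy: exponential
        maxInterval: 15s
        maxRetries: 5
  targets:
    components:
      statestore:
        outbound:
          retry: sessionRetry
```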

Read more about OpenAI agent sessions and Dapr here.