Seismic data analysis is an essential component of energy exploration, but configuring complex processing workflows has traditionally been a time-consuming and error-prone challenge. Halliburton’s Seismic Engine, a cloud-native application for seismic data processing, is a powerful tool that previously required manual configuration of approximately 100 specialized tools to create workflows. This process was not only time-consuming but also required deep expertise, potentially limiting the accessibility and efficiency of the software.
To address this challenge, Halliburton partnered with the AWS Generative AI Innovation Center to develop an AI-powered assistant for Seismic Engine. The solution uses Amazon Bedrock, Amazon Bedrock Knowledge Bases, Amazon Nova, and Amazon DynamoDB to transform complex workflow creation into conversations. Geoscientists and data scientists can configure processing tools through natural language interaction instead of manual configuration.
In this post, we’ll explore how we built a proof-of-concept that converts natural language queries into executable seismic workflows while providing a question-answering capability for Seismic Engine tools and documentation. We’ll cover the technical details of the solution, share evaluation results showing workflow acceleration of up to 95%, and discuss key learnings that can help other organizations enhance their complex technical workflows with generative AI.
Our collaboration with AWS has been instrumental in accelerating subsurface interpretation workflows. By integrating Amazon Bedrock services with Halliburton Landmark’s DS365 Seismic Engine, we were able to reduce traditionally time‑consuming workflow‑building tasks by an order of magnitude. This generative AI–powered workflow assistant not only improves efficiency and accuracy but also makes our advanced geophysical tools more accessible to a broader range of users. The scalable cloud‑native architecture on AWS has enabled us to deliver a seamless, conversational experience that fundamentally improves productivity across subsurface workflows.
— Phillip Norlund, Manager of Subsurface Technologies, Halliburton Landmark
— Slim Bouchrara, Senior Product Owner of Subsurface R&D, Halliburton Landmark
Solution overview
Our project aimed to address two key objectives: transforming natural language queries into executable seismic workflows, and providing an intelligent question and answer (Q&A) system for Seismic Engine documentation. To achieve this, we developed a solution using Amazon Bedrock that enables geoscientists to interact with complex seismic tools through natural conversation.

The backbone of our system is a FastAPI application deployed on AWS App Runner, which handles user queries through a streaming interface. When a user submits a query, an intent router powered by Amazon Nova Lite analyzes the request to determine whether it's seeking workflow generation or technical information. For Q&A requests, the system uses Amazon Bedrock Knowledge Bases with Amazon OpenSearch Serverless to provide relevant answers from indexed documentation. For workflow requests, a generation agent using Anthropic's Claude on Amazon Bedrock creates YAML workflows by selecting from 82 available Seismic Engine tools.
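To make the streaming interface concrete, the following is a minimal sketch of a FastAPI endpoint that streams chunks back to the client as server-sent events. The endpoint path, payload shape, and placeholder chunk source are our own illustrative assumptions, not Halliburton's actual implementation.

```python
# Illustrative sketch: a minimal FastAPI streaming endpoint in the style
# described above. The /chat path, payload shape, and the placeholder chunk
# source are assumptions for illustration only.
from typing import AsyncIterator


def format_sse(data: str) -> str:
    """Frame a chunk as a server-sent event so the client can render it incrementally."""
    return f"data: {data}\n\n"


def create_app():
    # Deferred imports so this module also loads where FastAPI isn't installed
    from fastapi import FastAPI
    from fastapi.responses import StreamingResponse

    app = FastAPI()

    @app.post("/chat")
    async def chat(payload: dict):
        query = payload.get("query", "")

        async def stream() -> AsyncIterator[str]:
            # In the real system, these chunks would come from the intent
            # router and the downstream Q&A or workflow-generation agent.
            for chunk in ("Processing: ", query):
                yield format_sse(chunk)

        return StreamingResponse(stream(), media_type="text/event-stream")

    return app
```

Streaming each chunk as it arrives, rather than waiting for the full response, is what gives users the immediate feedback described above.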
To maintain context and enable multi-turn conversations, we integrated Amazon DynamoDB for chat history and interaction logging. The system supports streaming responses for both Q&A and workflow generation, providing immediate feedback to users as the system processes their requests. This architecture allows complex technical workflows to be created and modified through natural conversation, while maintaining the precise control required for seismic data processing. The following diagram illustrates the solution architecture.
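A sketch of the chat-history logging could look like the following; the table name, key schema, and attribute names here are illustrative assumptions, not the actual schema used in the solution.

```python
# Hypothetical sketch of chat-history persistence in Amazon DynamoDB.
# Table name, key schema, and attribute names are illustrative only.
import time
import uuid


def build_turn_item(session_id: str, role: str, text: str) -> dict:
    """Build a DynamoDB item keyed by session and sorted by timestamp for replay order."""
    return {
        "session_id": session_id,   # partition key: one conversation per session
        "ts": str(time.time_ns()),  # sort key: preserves turn order for multi-turn context
        "turn_id": str(uuid.uuid4()),
        "role": role,               # "user" or "assistant"
        "text": text,
    }


def log_turn(item: dict, table_name: str = "chat_history") -> None:
    """Persist one conversation turn so later queries can reuse the history."""
    import boto3  # deferred so the sketch imports without AWS dependencies

    boto3.resource("dynamodb").Table(table_name).put_item(Item=item)
```

Keying by session and sorting by timestamp lets the application fetch a conversation's turns in order with a single query when building context for the next request.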
Query routing and intent classification
After the user's query is provided to the system, the intent router classifies the query by calling Amazon Nova Lite through the Amazon Bedrock API. The large language model (LLM) is prompted to produce one of three intent labels: "Workflow_Generation", "QnA", or "General_Question".

The "Workflow_Generation" label routes queries related to workflow generation, including reading or loading datasets, data processing operations, and other requests involving manipulation of specific datasets. The "QnA" label covers questions about specific tools, requests for sample workflows, and questions about Seismic Engine documentation. The "General_Question" label is reserved for queries unrelated to Seismic Engine operations or workflows.

In our implementation, Amazon Nova Lite performed the routing task efficiently, offering a good balance between accuracy and latency.
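The routing step can be sketched as follows. The prompt wording is our own illustrative version, and the model ID may vary by Region; neither reflects the exact production implementation.

```python
# Illustrative sketch of the intent-routing step. The prompt wording and
# model ID are assumptions, not Halliburton's actual implementation.
INTENT_LABELS = {"Workflow_Generation", "QnA", "General_Question"}

ROUTER_PROMPT = (
    "Classify the user query into exactly one label: "
    "Workflow_Generation, QnA, or General_Question. "
    "Respond with the label only.\n\nQuery: {query}"
)


def parse_intent(model_text: str) -> str:
    """Normalize model output to a known label, defaulting to General_Question."""
    label = model_text.strip()
    return label if label in INTENT_LABELS else "General_Question"


def classify_intent(query: str) -> str:
    """Route a query by calling Amazon Nova Lite through the Bedrock Converse API."""
    import boto3  # deferred so the sketch imports without AWS dependencies

    client = boto3.client("bedrock-runtime")
    response = client.converse(
        modelId="us.amazon.nova-lite-v1:0",  # inference profile ID may vary by Region
        messages=[{"role": "user",
                   "content": [{"text": ROUTER_PROMPT.format(query=query)}]}],
        inferenceConfig={"maxTokens": 10, "temperature": 0.0},
    )
    return parse_intent(response["output"]["message"]["content"][0]["text"])
```

Falling back to "General_Question" when the model returns anything unexpected keeps unrecognized output from reaching the workflow-generation path.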
Question answering implementation
The Q&A component handles Seismic Engine-related queries by using Amazon Bedrock Knowledge Bases, a fully managed service for end-to-end Retrieval Augmented Generation (RAG) workflows. We chose Amazon Bedrock Knowledge Bases because it alleviates the operational overhead of managing vector databases, chunking strategies, and embedding pipelines. As a fully managed service, it handles infrastructure scaling, security, and maintenance automatically, so our team could focus on solution development rather than RAG infrastructure operations. The service provides native support for multiple chunking strategies, including hierarchical chunking, which maintains parent-child relationships to balance granular retrieval with broader document context.

The data sources include tool documentation markdown files and Seismic Engine manuals stored in Amazon S3. We kept tool documentation files unchunked because they're relatively short, preserving complete context for individual tools. For longer documents, such as the Seismic Engine manuals, we used hierarchical chunking with default settings. We use Amazon Titan Text Embeddings V2 for embedding generation and OpenSearch Serverless as the vector database. The system also stores metadata such as file names, URLs, and document types for each indexed item for downstream use.

For both retrieval and response generation, we use the Amazon Bedrock Knowledge Bases retrieve_and_generate API with Claude 3.5 Haiku as the model. The system supports multi-turn conversations by maintaining session context, and responses are formatted with inline citations for enhanced traceability.
Note: This solution was developed and evaluated using Claude 3.5 Sonnet V2 and Claude 3.5 Haiku. Since then, these models have been succeeded by Claude Sonnet 4.5 and most recently Claude Sonnet 4.6, as well as Claude Haiku 4.5, all available through Amazon Bedrock. The solution architecture supports model upgrades without code changes, so that you can use the latest model capabilities.
This approach enables our system to provide context-aware, relevant answers to user queries about Seismic Engine tools and workflows.
Workflow generation
For queries classified as “Workflow_Generation”, our solution uses LLM agents to convert natural language into executable YAML workflows. The agent is bound with 82 tools available on Seismic Engine. Based on the user’s query and tool specifications that define inputs, parameters, and outputs, the agent selects appropriate tools, determines their correct execution order, and generates a YAML workflow that addresses the user’s requirements. The following figure illustrates the workflow generation process.
We used both Claude 3.5 Sonnet V2 and Claude 3.5 Haiku in our implementation, orchestrated through the LangChain framework for agent management and tool binding. The models are provided with detailed tool descriptions and specifications so that they can understand each tool's capabilities and requirements. When generating workflows, the system considers not only the explicit requirements in the user's query but also includes necessary default parameters when specific values aren't provided.

The workflow generation process supports multi-turn conversations, so users can modify previously generated workflows through natural language requests. Using conversation history stored in Amazon DynamoDB, the LLM can either generate a new workflow or modify an existing one according to the user's current query.
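To illustrate the default-parameter behavior described above, the following sketch validates a generated workflow (already parsed from YAML into a dict) against tool specifications and fills in defaults the model omitted. The tool specs here are invented examples, not real Seismic Engine tools.

```python
# Illustrative sketch: post-generation validation of an LLM-generated workflow
# against tool specs, with defaults applied for omitted parameters.
# These specs are invented examples, not real Seismic Engine tools.
TOOL_SPECS = {
    "read_segy": {"required": {"path"}, "defaults": {"endian": "big"}},
    "bandpass_filter": {"required": {"low_hz", "high_hz"}, "defaults": {"taper": 0.1}},
}


def validate_workflow(workflow: dict) -> dict:
    """Check each step against its tool spec and apply defaults for omitted params."""
    for step in workflow.get("steps", []):
        spec = TOOL_SPECS.get(step.get("tool"))
        if spec is None:
            raise ValueError(f"Unknown tool: {step.get('tool')}")
        params = step.setdefault("params", {})
        missing = spec["required"] - params.keys()
        if missing:
            raise ValueError(f"{step['tool']} missing params: {sorted(missing)}")
        for key, value in spec["defaults"].items():
            params.setdefault(key, value)  # fill defaults the model omitted
    return workflow
```

Validating against the same specifications the agent was prompted with catches hallucinated tool names or missing required parameters before a workflow reaches execution.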
Evaluation
To evaluate our solution's effectiveness, we created a comprehensive test dataset of query-workflow pairs, consisting of both simple and medium complexity workflows. These were derived from real historical workflows and validated by subject matter experts to verify they accurately represent typical user requests.
Workflow generation results
| Model | Complexity | Success Rate | Mean Generation Time (s) | Median Generation Time (s) |
|---|---|---|---|---|
| Claude Haiku 3.5 | simple | 84% | 8.3 | 5.9 |
| Claude Haiku 3.5 | medium | 90% | 12.4 | 9.1 |
| Claude Sonnet 3.5 V2 | simple | 86% | 11.2 | 11.5 |
| Claude Sonnet 3.5 V2 | medium | 97% | 15.8 | 16.6 |
Both models demonstrated strong performance, with Claude Sonnet 3.5 V2 showing superior success rates, particularly for medium complexity workflows. The system delivers responses through streaming, providing users with immediate feedback as the workflow is generated, with complete workflows delivered within 5.9-16.6 seconds. Claude Haiku 3.5 offers faster generation times, providing a trade-off option between speed and accuracy.
Comparison to baseline performance
| User Type | % Success | % Failure | Time to Build Simple Flow (min) | Time to Build Complex Flow (min) |
|---|---|---|---|---|
| New User | 70% | 20% | 4 | 20 |
| Experienced User | 85% | 10% | 2 | 5 |
| Our Solution | 84-97% | 3-16% | 0.13-0.26 | 0.21-0.28 |
Our generative AI solution demonstrates the following improvements:
- Success rates of 84-97% surpass both new and experienced users.
- Workflow creation time is reduced from minutes to seconds, representing over a 95% time reduction.
These results demonstrate that users across experience levels can cut workflow creation time by over 95% while maintaining or exceeding the accuracy of manual workflow creation.
Conclusion
In this post, we showed how we used Amazon Bedrock to transform complex technical processes into natural conversations. By implementing an AI-powered assistant with integrated Q&A capabilities, we achieved workflow generation success rates of 84-97% while reducing creation time by over 95% compared to manual processes. The system's ability to handle both simple and medium complexity workflows, combined with its contextual understanding of Seismic Engine tools, demonstrates how generative AI can improve industrial software usability without compromising accuracy.
This approach also generalizes well to other domains with complex, multi-step agentic workflows requiring specialized tool knowledge and configuration. As next steps, consider exploring multi-agent architectures using frameworks like Strands Agents SDK with Amazon Bedrock AgentCore for improved accuracy through specialized sub-agents.
About the authors
Yuan Tian
Yuan is an Applied Scientist at the AWS Generative AI Innovation Center, where he architects and implements generative AI solutions such as agentic systems for customers across healthcare, life sciences, finance, and energy. He brings an interdisciplinary background combining machine learning with computational biology, and holds a Ph.D. in Immunology from the University of Alabama at Birmingham.
Di Wu
Di is a Deep Learning Architect at AWS Generative AI Innovation Center, specializing in GenAI, AI agents, and model customization. He works with enterprise customers across diverse industries to architect and deliver production-ready AI solutions, including healthcare data analyst agents, travel booking voice agents, and database deep research agents. Outside of work, Di enjoys reading and writing.
Gan Luan
Gan is an Applied Scientist on the AWS Generative AI Innovation and Delivery team. He is passionate about leveraging generative AI techniques to help customers solve real-world business problems.
Haochen Xie
Haochen is a Senior Data Scientist at AWS Generative AI Innovation Center. He is an ordinary person.
Hayley Park
Hayley is an Applied Scientist at the AWS Generative AI Innovation Center, where she helps companies tackle real business problems by building generative AI applications. Before joining AWS GenAI, she worked on voice and language experiences across the Alexa Kids and Fire TV SLU teams. She holds a Ph.D. in Computational Linguistics from the University of Illinois at Urbana-Champaign, where her research focused on computational methods for low-resource languages, as well as an M.S. in Statistics.
Baishali Chaudhury
Baishali is an Applied Scientist at the Generative AI Innovation Center at AWS, where she focuses on advancing Generative AI solutions for real-world applications. She has a strong background in computer vision, machine learning, and AI for healthcare. Baishali holds a PhD in Computer Science from University of South Florida and PostDoc from Moffitt Cancer Centre.
Jared Kramer
Jared is an Applied Science Manager at AWS Generative AI Innovation Center.
Arun Ramanathan
Arun is a Senior Generative AI Strategist at AWS Generative AI Innovation Center.

