LocalStack is a versatile tool that spins up a local cloud environment, enabling developers to test cloud applications without relying on actual cloud services. One powerful feature is the ability to use custom Extensions to mock various cloud services, including the OpenAI API.
The LocalStack OpenAI Extension allows you to mock the OpenAI API for integration testing purposes. While not all endpoints are supported yet, the crucial ones are covered, facilitating comprehensive testing of your application’s core OpenAI functionality.
Installing the LocalStack OpenAI Extension
- Install LocalStack by following the instructions in the official documentation.
python3 -m pip install localstack
- Install the OpenAI Extension:
LOCALSTACK_AUTH_TOKEN=... localstack extensions install "git+https://github.com/localstack/localstack-extensions/#egg=localstack-extension-openai&subdirectory=openai"
- Start LocalStack with the OpenAI Extension enabled:
LOCALSTACK_AUTH_TOKEN=... localstack start
Once the extension is installed and LocalStack is running, you can access the OpenAI mock API at http://openai.localhost.localstack.cloud:4566
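To sanity-check that the mock is reachable, you can call the chat completions route directly. A minimal sketch using curl, assuming LocalStack with the extension is already running (the mock does not require a real key, so a placeholder such as "test" is used here):

```shell
curl -s http://openai.localhost.localstack.cloud:4566/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer test" \
  -d '{"model": "gpt-4", "messages": [{"role": "user", "content": "Hello!"}]}'
```

If the extension is up, this returns a JSON response in the same shape as the real chat completions API.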
Supported endpoints
The LocalStack OpenAI Extension currently supports the following endpoints:
- Chat completion
- Engines Listing
- Transcribe
- Translate
- Generate Image URL
- Generate Image Base64
- Embeddings
- Fine Tuning
- Files
- Moderations
Writing an OpenAI Integration Test with LocalStack
With the LocalStack OpenAI Extension set up, you can write integration tests that mock the OpenAI API. Here’s an example in Python using the openai library:
import openai

# Point the client at the LocalStack mock instead of the live API.
openai.organization = "org-test"
openai.api_key = "test"
openai.base_url = "http://openai.localhost.localstack.cloud:4566/v1/"

def test_chat_completion():
    completion = openai.chat.completions.create(
        model="gpt-4",
        messages=[
            {"role": "system", "content": "You are a helpful assistant."},
            {"role": "user", "content": "Hello!"},
        ],
    )
    assert len(completion.choices) > 0
In this example, you’re configuring the openai library to use the LocalStack mock instead of the live API by setting the organization, api_key, and base_url parameters accordingly.
The test_chat_completion function demonstrates how to call the OpenAI chat completions API and assert the expected behavior. By mocking the API with LocalStack, you can test your application’s interaction with the OpenAI API without incurring costs or relying on the live API’s availability.
This approach allows you to write comprehensive integration tests that cover various scenarios and edge cases, ensuring your application’s robustness and reliability when working with the OpenAI API.
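One convenient pattern for such a test suite is to make the mock URL the default while allowing an override, so the same tests can occasionally be pointed at a different endpoint. A minimal sketch, where the OPENAI_BASE_URL variable name is chosen here for illustration:

```python
import os

# Default to the LocalStack mock; override via an environment variable
# to run the same tests against a different endpoint (e.g. the live API).
DEFAULT_MOCK_URL = "http://openai.localhost.localstack.cloud:4566/v1/"

def resolve_base_url() -> str:
    """Return the OpenAI base URL the tests should target."""
    return os.environ.get("OPENAI_BASE_URL", DEFAULT_MOCK_URL)
```

A test setup can then assign openai.base_url = resolve_base_url(), keeping the mock as the default for CI while leaving a switch for occasional runs against the live service.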
Conclusion
Mocking the OpenAI API with LocalStack can significantly improve the efficiency and reliability of your integration testing process. By providing a local, lightweight environment that mimics the behavior of the actual API, LocalStack lets you write thorough tests without incurring usage costs or depending on the availability of the live service.
By incorporating LocalStack into your development workflow, you can streamline your integration testing, catch issues early, and ensure a smoother deployment process for your OpenAI-powered applications.