Code Coverage Challenges with LocalStack and Coverage.py

Hi,

I am trying to obtain a trace of the code that gets executed when I make API calls to LocalStack, but I have encountered several challenges.

To achieve this, I use Coverage.py (which LocalStack itself uses) to collect coverage. I have tried the following steps:

  1. I used the coverage run command to execute a script that calls main from localstack.cli: coverage run -p localstack_start start (a sketch of this wrapper script appears after the settings below).
  2. Additionally, I configured coverage.py using the following settings:
concurrency = multiprocessing
relative_files = True
sigterm = True
debug = trace
debug_file = cov-log
source = /home/.local/lib/python3.10/site-packages/localstack
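
For reference, the wrapper script is just a thin entry point (simplified sketch; the settings above sit under the [run] section of my .coveragerc):

    # localstack_start.py -- calls main from localstack.cli, so that
    # "coverage run -p localstack_start start" traces the LocalStack CLI entry point
    from localstack.cli import main

    if __name__ == "__main__":
        main()  # click picks up the "start" subcommand from sys.argv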

However, I have encountered some challenges with this approach:

  1. Coverage.py only seems to trace the modules that are imported when calling main from localstack.cli. It doesn’t trace any other parts of the code, and the coverage remains low. The code that may get invoked by API calls is not traced at all.
  2. To improve the tracing, I tried forcibly importing all of LocalStack’s modules, which did help increase the coverage to around 23%; that 23% is essentially just the import-time code of every module (a sketch of this step is shown below). However, the coverage does not increase further when I make API calls to LocalStack; it stays stuck at 23%.
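
The forced-import step is roughly this (simplified):

    # Force-import every localstack submodule so coverage.py registers their files.
    # Only module-level (import-time) code runs here, which is why coverage
    # plateaus at roughly the share of lines executed at import time.
    import importlib
    import pkgutil

    import localstack

    for module_info in pkgutil.walk_packages(localstack.__path__, prefix="localstack."):
        try:
            importlib.import_module(module_info.name)
        except Exception:
            pass  # some modules need optional dependencies or a live environment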

I believe there might be a simpler way to achieve my goal, or perhaps I am missing something crucial in my approach.

Any expertise and insights would be greatly appreciated.

Hi Anna, thanks for posting this message. It’s certainly an interesting use case.

Can you post the contents of your startup script? It’s possible that you are invoking localstack start (or the equivalent Python code), which in turn runs LocalStack in a Docker container. You can run LocalStack directly on the host, but this is more complex and requires more dependencies. See the development environment setup guide in our docs for how to do this.


Another possibility: if you can find a test that executes the code you are interested in tracing, have a look at this PR, which uses “dynamic contexts” in coverage.py to annotate which tests execute which lines of code in LocalStack. Note: you don’t need to run the tests to get this information; the PR shows how to download the required .coverage database yourself. This may not help you trace the code, but it should at least give you some pointers.
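
I don’t remember the exact wiring in that PR, but on the coverage.py side dynamic contexts boil down to a single setting, something like:

    # .coveragerc -- record which test executed which line ("dynamic contexts")
    [run]
    dynamic_context = test_function

    [html]
    show_contexts = True  # show the recorded contexts in the HTML report

The resulting .coverage SQLite database then stores, per executed line, the name of the test that ran it, which you can query directly.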

If you can run the tests, you can write your own test that executes the behaviour you want to trace.
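
For tracing purposes it doesn’t even need to use our internal test fixtures; a plain boto3 test against a locally running LocalStack is enough (sketch, assuming the default edge port 4566):

    # test_trace_sqs.py -- minimal pytest that drives the code path you want to trace
    import boto3

    def test_create_queue():
        sqs = boto3.client(
            "sqs",
            endpoint_url="http://localhost:4566",  # default LocalStack edge endpoint
            region_name="us-east-1",
            aws_access_key_id="test",
            aws_secret_access_key="test",
        )
        queue_url = sqs.create_queue(QueueName="trace-me")["QueueUrl"]
        assert "trace-me" in queue_url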

In general, all service requests are handled by a class called a Provider that implements a specific service. For example, see the SQS provider method for create_queue. From there you can follow the code execution path yourself.
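
Paraphrasing the pattern (this is a rough sketch, not the actual code in localstack/services/sqs/provider.py):

    # Rough sketch of the provider pattern -- each AWS operation maps to one
    # method on the service's Provider class; parameters arrive already parsed.
    class SqsProvider:
        def create_queue(self, context, queue_name, attributes=None, tags=None, **kwargs):
            # implement the operation and return the response shape for CreateQueue
            ...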


Thanks, Simon! Running directly on the host resolved all my issues.

I have a follow-up question. I am writing some tests with the aim of exhaustively covering all operations (valid paths and error paths) supported by a certain service, e.g. S3. Can you point me to the directories that include the implementation of all the S3 APIs, for example, so that I can scope my coverage to them? Thanks for your help!

Hi @anna, sorry for the long delay in replying; I need to configure my Discourse notifications better!

We publish detailed API method coverage in our docs section on feature coverage. You can filter by the service you are interested in. Or, if you want a machine-readable version, you can look at our extracted data files (e.g. for S3).

If you want to do this work yourself, the code for all of the LocalStack services is organised into the localstack/services/<service name>/provider.py files, e.g. for KMS (the layout is slightly different for S3, as we have multiple versions). Each Python method in the <service>Provider class corresponds to an AWS API call, though some will be missing, as we often fall back on moto to provide functionality.
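
So if you want to scope coverage to a single service, you can point coverage.py at that service’s package, e.g. something like:

    # .coveragerc -- measure only the S3 service implementation
    [run]
    source = localstack.services.s3

    # or, when reporting on an existing .coverage data file:
    #   coverage report --include="*/localstack/services/s3/*"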
