
How to use `--store-durations` in GitHub Actions?

See original GitHub issue

Let me first thank you for this great tool, which makes it super easy to decrease testing time in GitHub Actions.

I have some questions about how the new feature combining --store-durations with --groups is supposed to be used to update test timings while running the split test suite during CI. I couldn’t find any documentation, but I assume the idea is to do as suggested by @sondrelg in https://github.com/jerry-git/pytest-split/issues/11#issuecomment-850256162 and basically use the GitHub actions/cache action to cache the .test_durations file.
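
Concretely, my reading of that approach would look something like the sketch below (the cache key scheme and the number of splits here are only illustrative, not taken from the linked comment):

jobs:
  test:
    runs-on: ubuntu-latest
    strategy:
      matrix:
        group: [1, 2, 3, 4, 5, 6]
    steps:
      - uses: actions/checkout@v2
      # ...Python and dependency setup omitted...

      # restore .test_durations at the start of the job; the post step of
      # actions/cache then tries to save it again when the job finishes
      - name: Cache test durations
        uses: actions/cache@v2
        with:
          path: .test_durations
          key: test-durations-${{ github.sha }}
          restore-keys: test-durations-

      - name: Run tests
        run: pytest --splits 6 --group ${{ matrix.group }} --store-durations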

I see two problems with that approach, both stemming from the fact that, as far as I understand, the cache is restored at the beginning of the job and saved at the end.

  1. If several groups run concurrently, they all restore the same durations file at the start (from the previous run, if available), but each one then tries to save the cache when it finishes. The net effect is that only the slowest group’s durations persist, because the faster groups’ entries get overwritten almost immediately by the slower ones. I believe this causes all test durations from the faster groups to be overestimated (they end up with no recorded duration, so the average duration of the slowest group is used instead). On average, the groups with unestimated tests will still finish faster, and it’s not clear to me that the durations converge to accurate values over several runs, as the slowest group will probably remain the slowest.
  2. I don’t think there’s any guarantee that a job for one group can’t start after the job for a different group in the same run has already finished. If that happens, the first job will already have updated the durations in the cache before the second one starts, which can produce a different split into groups and cause some tests to run twice or not at all.

Unless I misunderstood how this whole thing works, I think the more robust approach would be to upload the per-group durations as artifacts, and then have an additional job in the workflow, depending on the group runs, that consolidates the separate artifacts into one durations file and caches that. That would solve problem 1, since all group durations are taken into account, and problem 2, since caching only happens after all test runs finish. The missing piece for such a strategy is a tool that can combine the test durations from the different groups, plus a way to annotate in the .test_durations file whether a duration was updated in the current run, so the tool knows which durations to put into the combined file.

Issue Analytics

  • State: open
  • Created: 2 years ago
  • Reactions: 2
  • Comments: 18 (14 by maintainers)

Top GitHub Comments

michamos commented, Jun 17, 2021 (7 reactions)

Actually, I managed to make it work correctly (afaict) with the current implementation, by modifying @sondrelg’s combining script to also take the previous version of the test durations into account and only update the values that changed. I also managed to use the GitHub Actions cache to persist the test durations across runs.

This is the (simplified) workflow I use:

  test:
    # additional config, like the matrix, omitted here
    steps:
      # test setup omitted here

      - name: Get durations from cache
        uses: actions/cache@v2
        with:
          path: .test_durations
          # the key must never match, even when restarting workflows, as that
          # would cause durations to get out of sync between groups; the
          # combined durations will be loaded via restore-keys if available
          key: test-durations-split-${{ github.run_id }}-${{ github.run_number }}-${{ matrix.group }}
          restore-keys: |
            test-durations-combined-${{ github.sha }}
            test-durations-combined

      - name: Run tests
        run: pytest --splits 6 --group ${{ matrix.group }} --store-durations

      - name: Upload partial durations
        uses: actions/upload-artifact@v2
        with:
          name: split-${{ matrix.group }}
          path: .test_durations

  update_durations:
    name: Combine and update integration test durations
    runs-on: ubuntu-latest
    needs: test
    steps:
      - name: Checkout
        uses: actions/checkout@v2

      - name: Get durations from cache
        uses: actions/cache@v2
        with:
          path: .test_durations
          # the key won't match during the first run for a given commit, but a
          # restore-key will if there's a previously stored durations file,
          # so the cache will both be loaded and stored
          key: test-durations-combined-${{ github.sha }}
          restore-keys: test-durations-combined

      - name: Download artifacts
        uses: actions/download-artifact@v2

      - name: Combine test durations
        uses: ./.github/actions/combine-durations
        with:
          split-prefix: split-

The tricky point when using actions/cache is that if the key matches an existing cache entry, the cache is restored but not updated at the end of the job (docs). So we need to guarantee a cache miss on the key and rely on restore-keys instead for loading the cache.

For clarity, I’ve split combine-durations into its own action. This might eventually become a GitHub action in a separate repo. The action.yml contains

name: Combine durations
description: Combine pytest-split durations from multiple groups

inputs:
  durations-path:
    description: The path to the durations file (must match `--durations-path` arg to pytest)
    required: false
    default: .test_durations
  split-prefix:
    description: The prefix of the per-group durations artifacts (must match the artifact names)
    required: true

runs:
  using: composite
  steps:
    - name: Combine durations
      shell: bash
      run: >
        python3 $GITHUB_ACTION_PATH/combine_durations.py ${{ inputs.split-prefix }} ${{ inputs.durations-path }}

and the combine_durations.py script is

import json
import sys
from pathlib import Path

split_prefix = sys.argv[1]
durations_path = Path(sys.argv[2])

# each group's artifact is downloaded into a directory named
# "<split_prefix><group>" containing that group's durations file
split_paths = Path(".").glob(f"{split_prefix}*/{durations_path.name}")
try:
    previous_durations = json.loads(durations_path.read_text())
except FileNotFoundError:
    previous_durations = {}
new_durations = previous_durations.copy()

for path in split_paths:
    durations = json.loads(path.read_text())
    # only take over durations that actually changed in this run, so a group
    # that merely carried over an old estimate doesn't overwrite a fresher
    # timing coming from the group that really ran the test
    new_durations.update(
        {
            name: duration
            for (name, duration) in durations.items()
            if previous_durations.get(name) != duration
        }
    )

durations_path.parent.mkdir(parents=True, exist_ok=True)
durations_path.write_text(json.dumps(new_durations))

It would be good if (a less quick-and-dirty version of) this script became part of pytest-split itself, as it depends on implementation details of the test durations storage format.
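
To illustrate what the script expects and produces, here is a small local check with made-up numbers (it assumes the durations file is a flat JSON object mapping test ids to seconds, which is what the script above relies on, and that combine_durations.py sits next to this snippet):

import json
import subprocess
import sys
from pathlib import Path
from tempfile import TemporaryDirectory

# hypothetical data: a previous combined file plus two fresh group runs
previous = {"test_a": 1.0, "test_b": 2.0, "test_c": 3.0}
group_1 = {"test_a": 1.5, "test_b": 2.0}  # test_a got slower
group_2 = {"test_c": 3.0, "test_d": 4.0}  # test_d is new

script = Path(__file__).with_name("combine_durations.py")  # assumed location

with TemporaryDirectory() as tmp:
    tmp = Path(tmp)
    (tmp / ".test_durations").write_text(json.dumps(previous))
    for name, durations in [("split-1", group_1), ("split-2", group_2)]:
        (tmp / name).mkdir()
        (tmp / name / ".test_durations").write_text(json.dumps(durations))

    subprocess.run(
        [sys.executable, str(script), "split-", ".test_durations"],
        cwd=tmp,
        check=True,
    )
    print(json.loads((tmp / ".test_durations").read_text()))
    # -> {'test_a': 1.5, 'test_b': 2.0, 'test_c': 3.0, 'test_d': 4.0}

Only test_a is overwritten (its duration changed) and test_d is added; the unchanged entries from the previous combined file are kept as-is.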

sondrelg commented, Jun 26, 2021 (1 reaction)

If you set up the functionality with argparse, you can use the Poetry scripts feature to make it runnable under any command name you want 🙂 https://python-poetry.org/docs/pyproject/#scripts
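
For example (the module path and command name here are hypothetical, not an existing pytest-split entry point), you would point a Poetry script at an argparse-based main() in pyproject.toml:

[tool.poetry.scripts]
combine-durations = "pytest_split.combine_durations:main"

where a minimal main() might look like

# hypothetical pytest_split/combine_durations.py
import argparse


def main() -> None:
    parser = argparse.ArgumentParser(
        description="Combine pytest-split durations from multiple groups"
    )
    parser.add_argument("split_prefix", help="prefix of the per-group artifact directories")
    parser.add_argument("durations_path", nargs="?", default=".test_durations")
    args = parser.parse_args()
    ...  # apply the merging logic from the script above to args.split_prefix / args.durations_path

After `poetry install`, this would be runnable as `combine-durations split- .test_durations` (or whatever name you pick on the left-hand side of the scripts entry).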
