Have benchmark actions leave a comment in PRs

See original GitHub issue

Leaving myself a note here that’ll be easy to find as I work on improving the benchmark experience:

Per comment by @developit in #2560:

This is absolutely stellar.

Two thoughts:

  • Do you think it would be better to consolidate the build so that it builds once and uses that same build for all benchmarks? Done.
  • Also, I wonder if we could have it post the benchmark results as a comment? Done.

For the comment posting, here are some helper functions I have that use actions-toolkit:

/** Create a status check, and return a function that updates (completes) it. */
async function createCheck(github, context) {
    const check = await github.checks.create({
        ...context.repo,
        name: 'Benchmarks',
        head_sha: context.payload.pull_request.head.sha,
        status: 'in_progress',
    });

    return async details => {
        await github.checks.update({
            ...context.repo,
            check_run_id: check.data.id,
            completed_at: new Date().toISOString(),
            status: 'completed',
            ...details
        });
    };
}


/** Create a PR comment, or update one if it already exists */
async function postOrUpdateComment(github, context, commentMarkdown) {
    const commentInfo = {
        ...context.repo,
        issue_number: context.issue.number
    };

    const comment = {
        ...commentInfo,
        body: commentMarkdown + '\n\n<sub>preact-benchmarks-action</sub>' // used to update this comment later
    };

    startGroup(`Updating PR comment`);
    let commentId;
    try {
        const comments = (await github.issues.listComments(commentInfo)).data;
        // search from newest to oldest for a previous comment left by this action
        for (let i = comments.length; i--; ) {
            const c = comments[i];
            if (c.user.type === 'Bot' && /<sub>[\s\n]*preact-benchmarks-action/.test(c.body)) {
                commentId = c.id;
                break;
            }
        }
    } catch (e) {
        console.log('Error checking for previous comments: ' + e.message);
    }

    if (commentId) {
        try {
            await github.issues.updateComment({
                ...context.repo,
                comment_id: commentId,
                body: comment.body
            });
        } catch (e) {
            // fall through and create a new comment if the update failed
            commentId = null;
        }
    }

    if (!commentId) {
        try {
            await github.issues.createComment(comment);
        } catch (e) {
            console.log(`Error creating comment: ${e.message}`);
        }
    }
    endGroup();
}

Usage:

// example action.js

import { getInput, startGroup, endGroup, setFailed, setOutput } from '@actions/core';
import { GitHub, context } from '@actions/github';

// placeholder: the real action computes these from the benchmark runs
async function runBenchmarks(github, context) {
    return {
        markdown: 'full-page markdown result',
        summary: 'quick one-line stats summary'
    };
}

(async () => {
    const token = process.env.GITHUB_TOKEN || getInput('repo-token');
    const github = token ? new GitHub(token) : {};

    // if there's no GITHUB_TOKEN, just log to stdout
    let finish = details => console.log(details);
    if (token) {
        finish = await createCheck(github, context);
    }

    try {
        // call the actual function that does the work:
        const result = await runBenchmarks(github, context);
        // { markdown: string, summary?: string }

        if (!result) {
            throw Error('No benchmark results.');
        }

        if (token) {
            await postOrUpdateComment(github, context, `
                🚀 Benchmark results for ${context.payload.pull_request.head.sha.substring(0,7)}:
                ${result.markdown}
                <sub>(${new Date().toUTCString()})</sub>
            `.trim().replace(/^\s+/gm, ''));
        }

        await finish({
            conclusion: 'success',
            output: {
                title: `Benchmark Results`,
                summary: result.summary
            }
        });
    } catch (e) {
        setFailed(e.message);

        await finish({
            conclusion: 'failure',
            output: {
                title: 'Benchmarks failed',
                summary: `Error: ${e.message}`
            }
        });
    }
})();

Issue Analytics

  • State: closed
  • Created: 3 years ago
  • Comments: 5 (4 by maintainers)

Top GitHub Comments

1 reaction
raj-al-ghul commented, Aug 11, 2020

@andrewiggins thanks for your helper functions; I was attempting to do something similar in one of my own repos, and it was great to see a reference.

FYI, you can use a Markdown comment (an HTML <!-- --> comment) to mark the comment the bot creates instead of visible text; it just lends itself to a slightly cleaner look.
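
As a rough sketch of that idea against the postOrUpdateComment helper above (the marker string here is only an example, not what the action actually uses):

// Hypothetical variant of postOrUpdateComment: tag the bot comment with an
// invisible HTML comment instead of a visible <sub> footer.
const COMMENT_MARKER = '<!-- benchmark-results-comment -->';

const comment = {
    ...commentInfo,
    // Invisible once GitHub renders the comment, but still present in the
    // raw body so the lookup below can find it.
    body: commentMarkdown + '\n\n' + COMMENT_MARKER
};

// ...and in the loop that searches for the action's previous comment:
if (c.user.type === 'Bot' && c.body.includes(COMMENT_MARKER)) {
    commentId = c.id;
    break;
}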

Also, I'd imagine you'd want to run the benchmark on push and not just on create? In that case, context.issue.number seems to be undefined.
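
One possible way to handle that, sketched against the same Octokit client used in the helpers above (untested; listPullRequestsAssociatedWithCommit may require a preview media type on older client versions):

// Hypothetical helper (not part of the original action): on `push` events
// there is no pull_request payload, so resolve the PR from the pushed commit.
async function getIssueNumber(github, context) {
    // pull_request / issue events already carry a number.
    if (context.issue.number) {
        return context.issue.number;
    }

    // For push events, ask GitHub which pull requests contain this commit.
    const prs = await github.repos.listPullRequestsAssociatedWithCommit({
        ...context.repo,
        commit_sha: context.sha
    });

    const openPr = prs.data.find(pr => pr.state === 'open');
    return openPr && openPr.number;
}

postOrUpdateComment could then receive the resolved number instead of reading context.issue.number itself.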

1 reaction
developit commented, Jun 14, 2020

Updates to comments don’t send an email, so I’m also a fan of that approach. If we wanted to avoid it sending an email at all, the status check gets its own page with arbitrary Markdown, and can also bubble up a simple stats line as the description shown when clicking on a check’s green checkmark. I believe checks also post properly from fork branches, which is nice.
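
For reference, a rough sketch of what that could look like with the finish helper from the example above; output.text is the Checks API field that carries the long-form Markdown (the values here are just the result fields returned by runBenchmarks):

// Sketch: keep the one-line stats in `summary` (shown beside the check) and
// put the full benchmark Markdown in `text`, which is rendered on the
// check run's own page.
await finish({
    conclusion: 'success',
    output: {
        title: 'Benchmark Results',
        summary: result.summary,
        text: result.markdown
    }
});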

I do think benches should only run if tests pass, yeah. When tests fail we can’t really trust the benches, so they become additional knock-on failures that don’t add their own info.
