
24 posts tagged with "mergestat"


Patrick DeVivo

Recently, MergeStat added support for collecting data from GitHub Actions. We wanted a way to start asking questions about our CI pipelines, and very soon we were able to leverage MergeStat to surface useful results.

Finding Our Most Expensive CI Pipelines

Last year we received a GitHub bill that surprised us. We had an expensive charge related to GitHub Actions minutes, even though most of our repos are public (Actions usage is free for public repos). We ended up finding a couple of experimental workflows in private repos that were the culprits of the higher-than-usual costs.

To find these, we exported our raw Actions usage data into a spreadsheet, aggregated the minutes, and ordered by the most expensive repos. This wasn't too difficult, but it made us think about how valuable it could be to always have this data on hand, with the ability to query it ad-hoc (vs needing to run an export, load a spreadsheet and run some formulas every time).

Now that we have GitHub Actions data syncing with MergeStat, we're able to do this:

-- find our longest running CI jobs
SELECT
  EXTRACT(EPOCH FROM (public.github_actions_workflow_runs.updated_at - public.github_actions_workflow_runs.run_started_at))::integer/60 AS minutes_to_run,
  public.repos.repo,
  public.github_actions_workflow_runs.*
FROM public.github_actions_workflow_runs
INNER JOIN public.repos ON public.github_actions_workflow_runs.repo_id = public.repos.id
ORDER BY 1 DESC

Screenshot of SQL query for finding long running GitHub Actions in the MergeStat app
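The ad-hoc spreadsheet export described earlier can also be replaced with a query. As a sketch (reusing the same schema as the query above), total Actions minutes can be aggregated per repo to surface the most expensive ones:

```sql
-- total GitHub Actions minutes per repo (a sketch; columns follow the query above)
-- note: wall-clock run duration is only an approximation of billable minutes
SELECT
  public.repos.repo,
  SUM(EXTRACT(EPOCH FROM (public.github_actions_workflow_runs.updated_at - public.github_actions_workflow_runs.run_started_at))::integer/60) AS total_minutes
FROM public.github_actions_workflow_runs
INNER JOIN public.repos ON public.github_actions_workflow_runs.repo_id = public.repos.id
GROUP BY public.repos.repo
ORDER BY total_minutes DESC
```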

Improving DevEx by Finding Long Running Tests

In a very related use case, a partner reached out asking how to use MergeStat to query for long running tests. Long running tests can be a significant bottleneck in a software-delivery pipeline, especially at scale. This impacts developer experience and the rate at which teams can deliver value to production.

We were able to tweak the query above to answer this question. In short, we filtered for workflow runs whose name included the string test, which was good enough for the situation at hand:

SELECT
  EXTRACT(EPOCH FROM (public.github_actions_workflow_runs.updated_at - public.github_actions_workflow_runs.run_started_at))::integer/60 AS minutes_to_run,
  public.repos.repo,
  public.github_actions_workflow_runs.*
FROM public.github_actions_workflow_runs
INNER JOIN public.repos ON public.github_actions_workflow_runs.repo_id = public.repos.id
WHERE public.github_actions_workflow_runs.name ILIKE '%test%'
ORDER BY 1 DESC

Next Up

These two examples hopefully convey the value of having this type of CI/CD data on hand and accessible with SQL. Since this data can sit next to pull requests, commits, code, issues and more, we can augment these queries with additional dimensions to get deeper insights into engineering operations overall.

In addition, we're working on some exciting features that will make accessing and using this data much easier. In particular, saved queries, charts, dashboards, reports and alerts will make it possible to use this data more effectively.

  • Visualize which repos or teams are using the most CI minutes.
  • Send a Slack alert when a CI run takes longer than expected (configurable).
  • Track improvements in CI runtimes across different periods and teams, and use them to celebrate wins or communicate progress upward.

Regardless, MergeStat's mission continues to be in service of data agility for software engineering. Check us out!

Join our Slack

Our community Slack is a great place to find help and ask questions. We're always happy to chat about MergeStat there 🎉!

Patrick DeVivo

The GIF on our landing page shows a SQL query executing in a MergeStat application:

GIF showing a query surfacing Go mod versions

If you look closely, you'll notice that the query has something to do with go.mod files. In particular, it's this query:

-- Go version counts across all repos
SELECT
  SUBSTRING(git_files.contents FROM 'go ([0-9]+\.[0-9]+)') AS dependencies_go_version,
  COUNT(1) AS version_count
FROM git_files
INNER JOIN repos ON repos.id = git_files.repo_id
WHERE git_files.path LIKE '%go.mod' AND SUBSTRING(git_files.contents FROM 'go ([0-9]+\.[0-9]+)') IS NOT NULL
GROUP BY dependencies_go_version
ORDER BY version_count DESC

which parses out the go directive in any go.mod file it finds, looking for the declared Go version.

A go directive indicates that a module was written assuming the semantics of a given version of Go. The version must be a valid Go release version: a positive integer followed by a dot and a non-negative integer (for example, 1.9, 1.14).
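For reference, here's what that directive looks like in a typical go.mod file (the module path and dependency below are hypothetical):

```
module github.com/example/project

go 1.18

require github.com/spf13/cobra v1.6.1
```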

Why?

Cool - but how is this useful? The value we are trying to convey in this query is MergeStat's ability to surface data across many Git repositories. This in particular should resonate with anyone who's worked on a team responsible for lots of codebases across an organization: DevEx, DevSecOps, Platform, etc.

Aggregating Go versions across many repos is useful in ensuring a consistent experience for Go developers. This may be important for an internal DevEx or Platform team to understand, or for anyone responsible for the maintenance of many open-source, public Go projects.

It could also serve as a proxy measure for how "up to date" codebases are, and whether they can use the latest features of the language. For instance, generics were introduced in Go 1.18, and it may be important to know how much of your Go source code is able to make use of them by looking at go.mod files.
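As a sketch of that generics example (assuming the same git_files schema as the query above, and that all versions have major version 1), repos can be bucketed by whether their declared Go version is at least 1.18:

```sql
-- count go.mod files declaring Go 1.18 or later (a sketch; assumes a "1.x" version)
SELECT
  (split_part(SUBSTRING(git_files.contents FROM 'go ([0-9]+\.[0-9]+)'), '.', 2)::int >= 18) AS generics_capable,
  COUNT(1)
FROM git_files
WHERE git_files.path LIKE '%go.mod'
  AND SUBSTRING(git_files.contents FROM 'go ([0-9]+\.[0-9]+)') IS NOT NULL
GROUP BY 1
```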

Looking at Public HashiCorp go.mod Files

Let's take a look at the go.mod files in some public Git repos. HashiCorp is famously a large Go shop, and many of their core projects are in Go - so let's take a look at all their public repos!

Here's how I used MergeStat to do so:

Run MergeStat

First, I grabbed a server from our friends at Equinix Metal. I intend to use the MergeStat docker-compose set-up, outlined here.

I SSH into my new server and run the following:

git clone https://github.com/mergestat/mergestat
cd mergestat
docker-compose up

This brings up an instance of MergeStat on port :3300 on the IP of my new server, where I can log in with the default credentials (postgres / password).

Setup Repos & Syncs

Next, I add a GitHub PAT so that I can:

  1. Add a repo auto import for the hashicorp GitHub organization
  2. Use the GITHUB_REPO_METADATA sync type (the GIT_FILES sync does not require GitHub authentication, since it uses the native Git protocol)

Screenshot of adding a repo auto import

I make sure to enable the GitHub Repo Metadata and Git Files sync types.

Screenshot of the repo auto import once added

Now, the file contents and metadata from the GitHub API should begin syncing for all public HashiCorp repos 🎉. I can run this query to keep track of the initial sync:

SELECT count(*) FROM mergestat.repo_syncs WHERE last_completed_repo_sync_queue_id IS NULL

Once the result is 0, I'll know that the first run of all the syncs I've defined is complete.

Executing Queries

Once the initial data sync is complete, I can start executing some queries. Let's start by determining how many of the 977 repos GitHub identifies as Go repos:

SELECT count(*), primary_language FROM github_repo_info
GROUP BY primary_language
ORDER BY count(*) DESC

Chart of HashiCorp repos by language

Looks like 431 are picked up as Go codebases.

Next, let's run the original query from above, but with a small amendment. I'll change the LIKE '%go.mod' to LIKE 'go.mod', to only identify go.mod files in the root of a repo. This avoids picking up vendored Go modules.
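Written out, the amended query looks like this:

```sql
-- Go version counts, only for go.mod files at a repo root
SELECT
  SUBSTRING(git_files.contents FROM 'go ([0-9]+\.[0-9]+)') AS dependencies_go_version,
  COUNT(1) AS version_count
FROM git_files
INNER JOIN repos ON repos.id = git_files.repo_id
WHERE git_files.path LIKE 'go.mod' AND SUBSTRING(git_files.contents FROM 'go ([0-9]+\.[0-9]+)') IS NOT NULL
GROUP BY dependencies_go_version
ORDER BY version_count DESC
```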

HashiCorp Go versions

And there you have it! Looks like most public HashiCorp repos are on 1.18.

You may notice that the sum of all repos in this query is 390, not 431. This is likely because not all Go repos have a go.mod file.

Operationalize It

Because MergeStat makes use of PostgreSQL, queries like this can be hooked up to BI tools and internal dashboards. This could be a useful way to track a migration effort, for instance, or alert on when a repo begins using a "non-standard" version of Go for your organization.

Further, the data behind these queries can be augmented with commit and team data to add additional dimensions.


Patrick DeVivo

Recently, I came across this tweet from Nicolas Carlo:

Nicolas Carlo tweet about finding hotspots in a git repo

Hotspots in a (git) codebase can be surfaced with the following:

git log --format=format: --name-only --since=12.month \
| egrep -v '^$' \
| sort \
| uniq -c \
| sort -nr \
| head -50

This defines hotspots as the files most frequently modified in the last year (by number of commits).

This bash script looks a lot like what both MergeStat and MergeStat Lite can surface, but using SQL 🎉!

MergeStat Example

MergeStat can be used to surface this list as well:

SELECT file_path, count(*)
FROM git_commits
INNER JOIN git_commit_stats ON (git_commits.repo_id = git_commit_stats.repo_id AND git_commits.hash = git_commit_stats.commit_hash)
INNER JOIN repos ON git_commits.repo_id = repos.id
WHERE repo LIKE '%mergestat/mergestat' -- limit to a specific repo
  AND git_commits.parents < 2 -- ignore merge commits
  AND author_when > now() - '1 year'::interval
GROUP BY file_path
ORDER BY count(*) DESC
LIMIT 50

Screenshot of MergeStat Example

MergeStat Lite Example

MergeStat Lite (our CLI) can be run against a git repo on disk to surface the same set of file paths:

SELECT
  file_path, count(*)
FROM commits, stats('', commits.hash)
WHERE commits.author_when > date('now', '-12 month')
  AND commits.parents < 2 -- ignore merge commits
GROUP BY file_path
ORDER BY count(*) DESC
LIMIT 50

Screenshot of MergeStat Lite Example

Why bother?

As Nicolas Carlo points out, identifying hotspots in a codebase is an effective way to determine which files are worth examining as candidates for a refactor.

The SQL queries above can be modified to better suit your needs. For example:

  • Filter for particular file types by extension (maybe you only care about hotspots in .go files, for example)
  • Filter out particular directories
  • Modify the time frame
  • Surface hotspots across multiple repositories
  • Filter hotspots based on authors
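For instance, limiting the MergeStat Lite query to Go files is a one-line change (a sketch of the first idea above):

```sql
-- hotspots, restricted to .go files
SELECT file_path, count(*)
FROM commits, stats('', commits.hash)
WHERE commits.author_when > date('now', '-12 month')
  AND commits.parents < 2 -- ignore merge commits
  AND file_path LIKE '%.go' -- only consider Go files
GROUP BY file_path
ORDER BY count(*) DESC
LIMIT 50
```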

Patrick DeVivo

We’re proud to announce that MergeStat has raised over $1.2m in pre-seed funding led by OSS Capital, with participation from Caffeinated Capital and prominent angel investors.

At MergeStat, our mission is to enable operational analytics for software engineering teams via powerful, open-source tooling and systems.

We believe there is tremendous value in connecting SQL with the systems used to build and ship software. Engineering organizations are often “black boxes” driven by intuition and gut-feel decisions. By enabling access to the processes and artifacts of software engineering as data, MergeStat helps organizations become data-driven in the ways most relevant to their needs.

Our PostgreSQL approach is based on applying SQL to the data sources involved in building and shipping software. SQL not only grants the ability to ask ad-hoc questions, but also brings compatibility with many BI, visualization, and data platforms.

We’re working with a variety of organizations using MergeStat to answer questions about the software-development-lifecycle, in the tools they prefer. To date, we’ve worked with teams using our SQL approach to answer questions around:

  • Audit and compliance
  • Code and dependency version sprawl
  • Engineering metrics and transparency (DORA metrics)
  • Code pattern monitoring and reporting
  • Developer on-boarding
  • Vulnerability reporting
  • …and more

“MergeStat is bringing the lingua franca of data to code: applying SQL to the modern SDLC will unlock tremendous value for all software developers. We are greatly honored to partner with Patrick DeVivo on this important mission!”

– Joseph Jacks, Founder and General Partner at OSS Capital

Over the past year, MergeStat has grown into an amazing team with a growing open-source presence. We’re very excited for our next chapter, as we begin working with more organizations looking to answer questions about how their engineering teams operate.

Investors

Patrick DeVivo

If you’ve spent any time around code, you’ll know that // TODO comments are a fact of life. They're a common pattern, regardless of how you feel about them! For instance, the Kubernetes codebase has over 2,000, and Linux has over 3,000.

In the MergeStat codebase, we use them as a low-effort way to track small, technical debt items. Sometimes, a future fix or refactor might not warrant a ticket (it’s too small and too specific to a piece of implementation detail), but it’s still worth making a note. TODO comments are a good fit for this because they are:

  • Low effort - very easy to add and remove, just leave a comment in the code
  • Safe from context switching - no need to switch into a ticketing system, stay in the editor and near the relevant code
  • Tracked in version control - so you have a history and audit trail

We recently connected with Ivan Smirnov, CTO of Elude, on this topic, and were excited to learn about his enthusiasm for tracking TODO comments 🙂. He shared that during his time at Google, there was an internal system which aggregated TODOs across codebases, as a way of surfacing parts of code worth returning to. He missed having a similar solution in his role at Elude.

Luckily, we were able to help with a MergeStat + Grafana based solution! Elude operates a self-hosted instance of MergeStat (using our Kubernetes helm chart), and connects to its PostgreSQL database with a Grafana instance. We collaborated on putting together a starting “TODO Tracker” dashboard, which is available in our examples as a Grafana export:

Screenshot of Grafana board tracking TODOs

"Elude currently uses TODO comments as a low friction mechanism to track technical debt. MergeStat is the missing link that allows us to turn these comments into trackable, actionable tasks." – Ivan Smirnov, Elude CTO

The SQL involved looks something like this:


SELECT
  git_blame.line,
  git_blame.author_name,
  git_blame.author_email,
  git_blame.author_when,
  REPLACE(repos.repo, 'https://github.com/', '') AS repo,
  repos.repo || '/blob/main/' || git_blame.path || '#L' || git_blame.line_no AS url -- generate a link to the line in GitHub
FROM git_blame
INNER JOIN repos ON repos.id = git_blame.repo_id
WHERE git_blame.line LIKE '%TODO%'
ORDER BY git_blame.author_when ASC

and should be fairly easy to customize to different needs:

  • Only apply to certain repos
  • Filter out certain file paths by pattern
  • Look for FIXME and BUG comments as well
  • Parse out "assignees" (i.e. TODO(patrickdevivo))
  • etc...
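As a sketch of the "assignees" idea (using PostgreSQL's regex form of SUBSTRING, and assuming the same git_blame schema as above):

```sql
-- extract an optional assignee from comments like TODO(patrickdevivo)
SELECT
  SUBSTRING(git_blame.line FROM 'TODO\(([A-Za-z0-9_-]+)\)') AS assignee,
  COUNT(*)
FROM git_blame
WHERE git_blame.line LIKE '%TODO%'
GROUP BY assignee
ORDER BY count(*) DESC
```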

If you're interested in taking a look at your own // TODO comments, go ahead and try out a local instance of MergeStat!


Patrick DeVivo

Today we're excited to announce the availability of in-app SQL querying in the latest versions of MergeStat! 🎉

Illustration of the Query UI

One of the more significant pain points noted by our early adopters has been the need for external tools to access the data MergeStat provides, even for simple, one-off queries. Many users (ourselves included) lean on existing data products such as Grafana, Metabase and Superset to consume MergeStat data (in dashboards, alerts and reports).

With the addition of our query interface (in the Queries tab of our management console), MergeStat users can now execute SQL directly in our app.

Query results can be copied or downloaded as CSV or JSON, for quick use in other tools and contexts. We believe this is a significant step forward in our app's functionality, and we will continue to invest in this area of our management console. Keep a lookout for additional features, including:

  • Saved and example queries
  • Inline schema documentation and editor auto-completions
  • Basic charting and data visualization
  • Query execution history

It's important to note that, as always, the MergeStat PostgreSQL database can be connected to directly by BI tools, desktop SQL clients, or SQL drivers.

Check out the latest MergeStat release to get started!


Patrick DeVivo

Today we're very happy to announce a new flavor of MergeStat! If you've been following our progress over the last year, you'll know that our mission is to enable operational analytics for software engineering teams. You'll also know that our approach has been heavily based on SQLite (and its virtual table mechanism) to bring data from git repositories (source code, metadata, the GitHub API, etc) into a SQL context for flexible querying and analytics.

MergeStat Management Console Illustration

This has enabled initial use cases in many interesting domains related to the software-development-lifecycle, from audit and compliance, to code querying, to release time metrics, to technical debt surfacing, and much, much more.

For the past several months, we’ve been working on an approach in service of this mission that uses a different mechanism for enabling that SQL context. That mechanism is based on syncing data into a Postgres database for downstream querying, vs using a local SQLite based approach.

This grants us two significant advantages:

  1. More compatibility with the ecosystem of data (and BI/visualization) tools
  2. Much faster query time for data spread across multiple API sources

More Compatibility

We love SQLite, but the fact of the matter is that it’s much easier to integrate a data visualization or BI product (open-source or not) with a Postgres server (or Postgres compatible server) than with a SQLite database file. A central part of our mission has always been not just enabling SQL, but allowing our SQL solution to play well with the wide array of existing tools that companies, teams and individuals are already using.

We want anyone to be able to query MergeStat data from tools like Metabase, Grafana, Tableau, Superset, etc. Postgres compatibility takes us a step in that direction, and we’re already working with early users integrating their SDLC data with these types of tools, via MergeStat.

Faster Queries

Our SQLite virtual-table based approach is an effective way to query a local data source (such as a git repository on disk). However, it begins to fall short when it comes to querying over large sets of data spread across many pages in a web API (or multiple APIs). What ends up happening is queries spend much more time waiting for HTTP requests to finish than actually executing SQL. This means at query time, the user is forced to wait for data collection to occur. Depending on the scope of the data involved, this can potentially take a very long time. With our new approach, data is synced and maintained in a background process, so that data collection (from potentially slow sources) occurs out-of-band from user queries.

What’s Next?

Our SQLite approach isn’t going anywhere. We still intend to invest in that and continue building it into a valuable open-source CLI that anyone can use. It’s been renamed MergeStat Lite and now lives at mergestat/mergestat-lite.

Our new approach now lives at mergestat/mergestat. Our documentation has been updated to reflect these changes, and we couldn’t be more excited to share more in the coming days and weeks.

We have some great features planned, and will be spending more time showcasing various use-cases from our growing community.


Patrick DeVivo

For organizations using GitHub Enterprise, the GitHub audit log is now supported as a table in the MergeStat CLI, enabling SQL queries against audit log events 🎉. Check it out!

The new table-valued function is github_org_audit_log, and may be queried like so:

SELECT * FROM github_org_audit_log('mergestat') -- replace with your org name

The function returns the following columns (type is TEXT unless otherwise specified):

  • id
  • entry_type
  • action
  • actor_type
  • actor_login
  • actor_ip
  • actor_location (JSON)
  • created_at (DATETIME)
  • operation_type
  • user_login
GitHub Enterprise Only

This feature only works with GitHub Enterprise organizations, since the audit log API is only available to enterprise accounts.

This enables some interesting use-cases for GitHub enterprise users looking to monitor, alert and report on events from the GitHub audit log. We're working with some early users of this table, and will publish some guides and examples in the near future 🔜.

The JSON column (actor_location) can be accessed with the SQLite JSON operators.

SELECT actor_location->>'country' FROM github_org_audit_log('mergestat')
+----------------------------+
| ACTOR_LOCATION->>'COUNTRY' |
+----------------------------+
| United States |
+----------------------------+
| United States |
+----------------------------+
| Belgium |
+----------------------------+
| ... |
+----------------------------+
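Aggregations work as well. For instance, a simple GROUP BY (a sketch; adjust the org name) can show which audit log actions occur most often:

```sql
-- most common audit log actions in an org
SELECT action, count(*)
FROM github_org_audit_log('mergestat') -- replace with your org name
GROUP BY action
ORDER BY count(*) DESC
```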

It's important to note the following from the GitHub API docs as well:

GitHub audit log notes


Patrick DeVivo

SQLite 3.38.0 (released 2022-02-22) added support for built-in JSON functions (no longer a compile-time option) and two new JSON operators, -> and ->>, similar to ones available in MySQL and PostgreSQL.

MergeStat is a SQL interface to data in git repositories, based on SQLite. As such, we can now take advantage of these new operators in our queries, wherever JSON data is returned.

Note: To install the MergeStat CLI via Homebrew, run:

brew tap mergestat/mergestat
brew install mergestat

Or see other options

package.json

An easy source of JSON to start with is the package.json file present in most JavaScript/TypeScript codebases. Let's take a look at the one in the popular tailwindlabs/tailwindcss repo.

SELECT contents->>'name', contents->>'license' FROM files WHERE path = 'package.json'
+-------------------+----------------------+
| CONTENTS->>'NAME' | CONTENTS->>'LICENSE' |
+-------------------+----------------------+
| tailwindcss | MIT |
+-------------------+----------------------+

Here we use contents->>'name' and contents->>'license' to pull out the name and license fields from the package.json JSON file.

Let's take this up a level, and look at the name and license fields for all repos in an org where a package.json file is present. Sticking with tailwindlabs (and now adding a GITHUB_TOKEN when we execute the following):

-- CTE (https://www.sqlite.org/lang_with.html)
-- to select all the package.json files from repos in the tailwindlabs GitHub org
WITH package_json_files AS (
  SELECT
    name,
    github_repo_file_content('tailwindlabs', repo.name, 'package.json') AS package_json
  FROM github_org_repos('tailwindlabs') repo
)
SELECT
  name AS repo_name,
  package_json->>'name',
  package_json->>'license'
FROM package_json_files
WHERE package_json IS NOT NULL
Note: This query may take some time to run, as it makes a fair number of GitHub API requests.

+-----------------------------+-----------------------------+--------------------------+
| REPO_NAME | PACKAGE_JSON->>'NAME' | PACKAGE_JSON->>'LICENSE' |
+-----------------------------+-----------------------------+--------------------------+
| tailwindcss | tailwindcss | MIT |
+-----------------------------+-----------------------------+--------------------------+
| webpack-starter | NULL | NULL |
+-----------------------------+-----------------------------+--------------------------+
| tailwindcss.com | NULL | NULL |
+-----------------------------+-----------------------------+--------------------------+
| tailwindcss-plugin-examples | NULL | NULL |
+-----------------------------+-----------------------------+--------------------------+
| tailwindcss-intellisense | root | NULL |
+-----------------------------+-----------------------------+--------------------------+
| tailwindcss-playground | NULL | NULL |
+-----------------------------+-----------------------------+--------------------------+
| tailwindcss-custom-forms | @tailwindcss/custom-forms | MIT |
+-----------------------------+-----------------------------+--------------------------+
| tailwindcss-typography | @tailwindcss/typography | MIT |
+-----------------------------+-----------------------------+--------------------------+
| heroicons | heroicons | MIT |
+-----------------------------+-----------------------------+--------------------------+
| tailwindui-vue | @tailwindui/vue | MIT |
+-----------------------------+-----------------------------+--------------------------+
| blog.tailwindcss.com | tailwind-blog | NULL |
+-----------------------------+-----------------------------+--------------------------+
| tailwindui-react | @tailwindui/react | MIT |
+-----------------------------+-----------------------------+--------------------------+
| heroicons.com | NULL | NULL |
+-----------------------------+-----------------------------+--------------------------+
| play.tailwindcss.com | play.tailwindcss.com | NULL |
+-----------------------------+-----------------------------+--------------------------+
| headlessui | headlessui | MIT |
+-----------------------------+-----------------------------+--------------------------+
| tailwind-play-api | NULL | NULL |
+-----------------------------+-----------------------------+--------------------------+
| tailwindcss-aspect-ratio | @tailwindcss/aspect-ratio | MIT |
+-----------------------------+-----------------------------+--------------------------+
| tailwindcss-forms | @tailwindcss/forms | MIT |
+-----------------------------+-----------------------------+--------------------------+
| tailwindcss-line-clamp | @tailwindcss/line-clamp | MIT |
+-----------------------------+-----------------------------+--------------------------+
| tailwindcss-jit | @tailwindcss/jit | MIT |
+-----------------------------+-----------------------------+--------------------------+
| prettier-plugin-tailwindcss | prettier-plugin-tailwindcss | NULL |
+-----------------------------+-----------------------------+--------------------------+

We see that most repos with a package.json file also have a name, while some do not (NULL). All the declared licenses appear to be MIT.

Dependencies

Finally, let's take a look at all the dependencies declared in package.json files across (JavaScript/TypeScript) codebases in an org. This time, let's try on the freeCodeCamp GitHub org.

WITH package_json_files AS (
  SELECT
    name,
    github_repo_file_content('freeCodeCamp', repo.name, 'package.json') AS package_json
  FROM github_org_repos('freeCodeCamp') repo
  WHERE (primary_language = 'TypeScript' OR primary_language = 'JavaScript')
)
SELECT
  count(*),
  json_group_array(name) AS repo_names,
  dep.key
FROM package_json_files, json_each(package_json->'dependencies') dep
WHERE package_json IS NOT NULL
GROUP BY dep.key
ORDER BY count(*) DESC

This query will show the most frequently used dependencies within a GitHub org (as extracted from the dependencies JSON value of package.json files). Here are the top 10 results from the above query, on freeCodeCamp:

  1. dotenv (35)
  2. express (31)
  3. body-parser (26)
  4. react-dom (16)
  5. react (16)
  6. lodash (16)
  7. cors (14)
  8. mongoose (12)
  9. chai (12)
  10. passport (11)

Patrick DeVivo

We had a user looking for a way to track merged GitHub pull requests that were unapproved (when merged into a main branch). This is typically done with an "Admin Override" when merging a PR, or from loose branch protection rules allowing for unapproved merges into a branch.

GitHub PR admin override

Allowing repo administrators to merge without a review can be a useful shortcut for "emergency" hotfixes that avoid a potentially time-consuming code review cycle. This of course depends on the habits and culture of your engineering organization.

Too many unapproved merges, however, is probably a sign of something wrong (again, depending on the organization and codebase). For instance, a high volume of "emergency fixes" shipping without any review, or developers simply short-circuiting the review cycle to get code merged/deployed quickly.

Surfacing Unapproved, Merged PRs

The following MergeStat query will list all unapproved pull requests merged into the main branch.

SELECT
  number,
  title,
  date(created_at) AS created_at,
  date(merged_at) AS merged_at,
  author_login,
  review_decision,
  merged
FROM github_prs('mergestat', 'mergestat') -- set to your own repo
WHERE
  merged = true
  AND review_decision <> 'APPROVED'
  AND base_ref_name = 'main' -- set to your own branch
ORDER BY created_at DESC

This will yield a table that looks something like (in our web app):

Unapproved PRs in mergestat/mergestat

If you'd like to know which GitHub users most frequently merge without an approval, you can run the following:

SELECT
  count(*), merged_by
FROM github_prs('mergestat', 'mergestat')
WHERE
  merged = true
  AND review_decision <> 'APPROVED'
  AND base_ref_name = 'main'
GROUP BY merged_by
ORDER BY count(*) DESC

which will display an ordered list of the most common "unapproved mergers" in a codebase.

Additional Ideas

  1. Filter by created_at or merged_at if you only care about recently merged PRs
  2. Query for unapproved (and merged) PRs across an entire GitHub org (rather than a single repo)
  3. Create a Slack/Discord bot that alerts when an unapproved PR is merged, or shares a monthly report
  4. Track in a dashboard (a line chart) to record instances over time
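As a sketch of the first idea above, the merged_at column can be filtered with SQLite's date functions to only count recently merged PRs:

```sql
-- only unapproved PRs merged in the last 90 days (a sketch)
SELECT count(*)
FROM github_prs('mergestat', 'mergestat')
WHERE merged = true
  AND review_decision <> 'APPROVED'
  AND base_ref_name = 'main'
  AND merged_at > date('now', '-90 day')
```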