> What I've noticed in practice however, is that occasionally, this process will allow an upgrade to a dependency that will pass the automated build and test step, but introduce the wildest runtime error into the application. Usually at the time when we aim to deliver something.

Sounds like dependabot is very useful for uncovering insufficient test coverage or missing integration tests :)

A more useful bot would be named Undependabot. It cuts the complexity caused by dependency bloat by suggesting the removal of random dependencies and linking to http://vanilla-js.com/. It flags any PR that introduces new dependencies, estimates the long-term cost in time and money of those dependencies, and reports the time and bandwidth they add to each (CI) build. It awards badges to developers and teams that reduce their dependencies or have few or none. It rejects any PR introducing (transitive) dependencies on packages from jonschlinkert.
Same feelings. I like the idea but in practice I don’t trust it.

Or rather, I don’t trust package maintainers to adhere to semver. I prefer to manually go through dependencies updating one at a time and reading the change logs. I usually do this in batch. Peace of mind is worth more than the hour saved every week or two.

I do really like the tool that flags security issues with packages though.

I've been using Depfu for a while and I think it handles some of the biggest pain points with Dependency spam.

- Package releases don't get PRs in their first 24 hours unless they are for security issues, so you don't get noise if there's a yank or a quick patch for a bug in the latest release

- You can set development (or production!) packages to only update once a week

- Packages that are known to have a very frequent release cadence (AWS SDK subcomponents, looking at you) get pushed to a much slower PR pace so that you only update them 2x/month, etc.

- This might be fixed now, but it had much nicer auto-merge behavior for releases that passed CI.

- With Yarn, it can run `yarn-deduplicate` after updates to trim down shared dependency bloat.

FWIW we still use Dependabot for security patches only because they seem to get picked up a few hours earlier. We also have much tighter lock rules on some JS packages which seem to make breaking changes on patch/minor releases.
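In case it helps anyone replicate the security-patches-only part: my understanding from the Dependabot docs is that setting the open PR limit to zero suppresses routine version-bump PRs while security updates still come through. A sketch, assuming an npm project at the repo root:

```yaml
# .github/dependabot.yml -- version updates off, security PRs still arrive
version: 2
updates:
  - package-ecosystem: "npm"
    directory: "/"
    schedule:
      interval: "daily"
    # A limit of 0 suppresses ordinary version-update PRs;
    # security updates are raised regardless of this limit.
    open-pull-requests-limit: 0
```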

My strategy is to blindly merge developer dependencies like linter tools etc., with the rationale that if CI succeeds, the dependency didn't break anything. Obviously, that is not completely guaranteed but the risk feels small enough that it's acceptable.

Any update to production dependencies I will want to test manually unless it happens to be a part of the code where I feel exceptionally confident in our test coverage.
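The "blindly merge dev dependencies if CI is green" policy can be automated with GitHub's own metadata action, if you trust it enough. A sketch (action versions assumed; `--auto` only merges once all required checks pass):

```yaml
# .github/workflows/dependabot-automerge.yml
name: Dependabot auto-merge
on: pull_request

permissions:
  contents: write
  pull-requests: write

jobs:
  automerge:
    runs-on: ubuntu-latest
    if: github.actor == 'dependabot[bot]'
    steps:
      - name: Fetch Dependabot metadata
        id: meta
        uses: dependabot/fetch-metadata@v2
      # Only auto-merge development dependencies; production
      # bumps still get a human review.
      - name: Auto-merge dev dependencies
        if: steps.meta.outputs.dependency-type == 'direct:development'
        run: gh pr merge --auto --squash "$PR_URL"
        env:
          PR_URL: ${{ github.event.pull_request.html_url }}
          GH_TOKEN: ${{ secrets.GITHUB_TOKEN }}
```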

Has anyone tried Renovatebot[1]? It's open source and endorsed by OpenSSF and Google. The main advantage I see is batched updates, which reduce the dependency update spam a bit.

1: https://renovatebot.com/
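For the curious, the batching is done through Renovate's `packageRules`. A minimal sketch that groups all non-major updates into one weekly PR (option names from memory, so double-check against the docs):

```json
{
  "$schema": "https://docs.renovatebot.com/renovate-schema.json",
  "extends": ["config:recommended"],
  "packageRules": [
    {
      "matchUpdateTypes": ["minor", "patch"],
      "groupName": "all non-major updates"
    }
  ],
  "schedule": ["before 6am on monday"]
}
```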

My main gripe is that Dependabot can end up raising multiple PRs for the same dependency bump in the same repo (especially with Dockerfiles). I really wish I could tell it to do rollups e.g. `@dependabot rollup #1234 #1235 #1236` or something like that.

To save having to do multiple rounds of merge PR, rebase next PR, wait for CI... I end up doing my own rollup PRs by merging the various Dependabot branches. At least Dependabot is smart enough to close all of the original PRs when the rollup is merged.

I have not used Dependabot myself, but I wrote a tool that runs in a nightly CI job and creates or updates a single PR per repo with all of the dependency updates. You can then merge those with the push of a button. I think that's better than the more granular approach of having one PR per update.

This still lets me know when an update to ANY internal or external dependency will break the build, so that I can take a look.

My tool only works with Nix Flakes, so it updates dependencies of Nix packages, but you often have other language-specific package management tools wrapped inside of Nix (e.g. Nix calling cargo). I think it would be a cool extension to add a configurable language-specific shell command whose results are added to that single PR as well. That could document the exact update workflow a dev should go through while also regularly exercising it.
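For anyone who wants the same single-rolling-PR pattern without a bespoke tool, something similar can be sketched with off-the-shelf actions (action names and versions are assumptions, not an endorsement of specific releases):

```yaml
# Hypothetical nightly job: refresh flake inputs, keep one rolling PR
name: update-flake-inputs
on:
  schedule:
    - cron: "0 3 * * *"

jobs:
  update:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: cachix/install-nix-action@v27
      # Bump every input in flake.lock to its latest revision
      - run: nix flake update
      # Reusing the same branch makes the action update the
      # existing PR instead of opening a new one each night.
      - uses: peter-evans/create-pull-request@v6
        with:
          branch: nightly-deps
          title: "Nightly dependency updates"
          commit-message: "chore: update flake inputs"
```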

https://github.com/serokell/update-daemon is a better tool than mine that does the same thing and I have been thinking about adding the feature there. I'd be interested in discussing that idea further, maybe I'll open an issue for that.

At a previous job, we loved the idea of Dependabot, but in practice it didn't match the way we work or review PRs. And as you said, just because a test passed doesn't mean that the update is 100% safe.

So instead, we identified our critical dependencies, then added a dependency-update task to the list of tech-debt tasks we handle manually every week.

To address this issue, we designed a static analysis that checks whether an upgrade is likely to break the application. Here are some details of the work: "Effective Static Checking of Library Updates" https://dl.acm.org/doi/abs/10.1145/3236024.3275535

When using the analysis a PR for upgrading the dependencies would look like this - https://github.com/tmroberts56/java-maven/pull/3

> occasionally, this process will allow an upgrade to a dependency that will pass the automated build and test step, but introduce the wildest runtime error into the application

That will happen whether you use Dependabot or another approach. If you don't test this manually and don't have enough test coverage, you will fail there eventually.

Dependabot helps you fail here early when you have N packages that changed and it's easy to figure out what happened, rather than wait until you're upgrading 10*N packages together and have to isolate things manually.

I’ve only really experienced Dependabot for libraries, rather than applications, and in that context I don’t really understand why it exists or why people use it.

If you’re making a library and it bumps the lower version bounds of your dependencies, it’s doing something that’s actively slightly harmful. It’d be fine to notify you about major/incompatible version bumps, maybe even try them to see if they work (though certainly don’t apply the change automatically—a major version bump on a dependency is regularly a breaking change even if your tests still pass). But minor/patch/compatible releases that are already accepted by the version specifier? You shouldn’t be bumping those. Test against them if you like, maybe update your testing lockfile, but generally speaking the versions of libraries are an application concern, and your library should not be trying to dictate it.

(I must admit that the whole concept of compatible ranges is in practice slightly broken through and through, though it seldom causes trouble. I would love to have tooling that actively minimised dependency versions: “1.2.3? Turns out the newest functionality you’re depending on is from 1.1.0, so I’ll reduce the spec to ‘1.1’.” This would, of course, need to be paired with checking that you don’t accidentally use newer functionality—at the least, running your tests on minimum versions, preferably stronger API checks. Overall it’s the sort of thing that is difficult to make robust by any means in the likes of JavaScript and Python, but which could sanely be done in Rust, and there’s enough interest that I think something will probably happen within the next decade.)
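A partial version of this already exists in Rust via an unstable Cargo flag that resolves every dependency to the lowest version the manifest allows, instead of the highest. A hedged CI-step sketch (the flag is nightly-only and has known rough edges):

```yaml
# Hypothetical CI step: test against minimum dependency versions
- name: Check minimal versions
  run: |
    # -Z minimal-versions rewrites the lockfile to the lowest
    # versions permitted by Cargo.toml's version requirements.
    cargo +nightly update -Z minimal-versions
    # Then run the test suite against that minimized lockfile.
    cargo test --locked
```

This catches the case where you depend on functionality newer than your declared lower bound, which is exactly the "accidentally use newer functionality" failure mode described above.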

But applications? Sure, this is where Dependabot has an actual sane purpose. But presuming people tend to have it configured in similar ways to how they configure it in libraries… ugh. I’d still rather just do roughly what it does manually from time to time.

Software went from DLL Hell [1] to Dependency Hell [2]. Only now there is a dependency war so it is much more dangerous, and so many more dependencies.

Dependabot has to be used because of the threats. However everyone automating and moving to latest is also a threat. SolarWinds/VMWare/USGov hack [3] was all related to CI builds and automated "trust", ended up infecting tens of thousands of systems that thought they were secure with SOC2. SOC2 ends up making enterprises "trust" many third parties. What happens when dependabot is an attack vector as well...

The log4j/Log4Shell [4] issue shows how long exploits can go on without detection or automated fixes. Node is filled with dependency issues and that is just the known exploits besides all the "telemetry". [5]

Any third party or dependency is a potential attack vector, and dependency saturation is adding lots of tedium to shipping. So much time goes to just updating libs it is a bit of a tragic comedy.

[1] https://en.wikipedia.org/wiki/DLL_Hell

[2] https://en.wikipedia.org/wiki/Dependency_hell

[3] https://en.wikipedia.org/wiki/2020_United_States_federal_gov...

[4] https://en.wikipedia.org/wiki/Log4Shell

[5] https://en.wikipedia.org/wiki/Npm_(software)#Notable_breakag...

We don't use it on our Node app because it would be too noisy. Another noisy library was the AWS SDK for .NET. In that case, I actually use a wildcard version range `3.7.*` to always have the latest patch release and cut down on Dependabot noise.
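Concretely, the floating range goes straight into the project file; NuGet restores the newest matching patch release at restore time. (The package name here is just illustrative.)

```xml
<!-- Floating patch version: restore resolves the newest 3.7.x -->
<ItemGroup>
  <PackageReference Include="AWSSDK.Core" Version="3.7.*" />
</ItemGroup>
```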

We have it on for everything else (.NET/Nuget, Ruby, Docker, GitHub Actions, Git Submodule). It's great at keeping us up to date with little effort. For major (and some minor) releases, I'll read release notes to see if there's anything to watch out for.

It's especially useful for updating the git repo we use as a submodule in 5 other repos. Submodules were a source of frequent merge conflicts before as different developers updated it in their PRs. Now we pretty much don't think about it, and Dependabot keeps it up to date with our latest database models and shared libraries.

edit: We have it set to weekly for everything but the submodule, which is daily.
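For reference, the mixed schedule is just a per-ecosystem setting in dependabot.yml. An abridged sketch (directories assumed):

```yaml
# .github/dependabot.yml (abridged) -- daily submodule, weekly elsewhere
version: 2
updates:
  - package-ecosystem: "gitsubmodule"
    directory: "/"
    schedule:
      interval: "daily"
  - package-ecosystem: "nuget"
    directory: "/"
    schedule:
      interval: "weekly"
```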

I use it in 2 different ways depending on the project.

1) For more important projects I have high test coverage (over 95%) and integration tests. In this setting, I have dependabot set to check once a week and the PRs get automatically merged if everything works. I have never had a problem with this setup, if the update breaks something, the tests fail and I get to fix it manually.

2) For less important projects without CD, I usually merge dev dependencies and minor versions by hand if there isn't something obviously wrong in the CI. Yeah, there's a risk it breaks something, but I can fix that before a release - obviously these are projects that only get worked on occasionally. Major versions I usually check manually before merging.

I think the ultimate answer is good testing as part of CI. Either you trust your tests, and then I think it's OK to auto-merge, or you don't, and then all bets are off...

Someone knowledgeable of the codebase reviews changelogs of individual packages. They merge all simple cases and flag any breaking changes and anything that affects features used in the codebase. These can then be tested/fixed by others not so deeply familiar with the codebase, which fuels knowledge transfer.

For people complaining that Dependabot is too noisy, it is easy to configure it to only open PRs for major versions.
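For example, ignoring everything below a major bump looks roughly like this (npm assumed):

```yaml
version: 2
updates:
  - package-ecosystem: "npm"
    directory: "/"
    schedule:
      interval: "weekly"
    ignore:
      # Skip PRs for minor and patch bumps on every dependency;
      # only major version updates get a PR.
      - dependency-name: "*"
        update-types:
          - "version-update:semver-minor"
          - "version-update:semver-patch"
```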

I am a big fan of Dependabot, but maximizing its effectiveness is highly dependent on the culture of the team/codebase.

I set it up on a repo, then delete the emails I receive from it.

Dependabot made me move some side projects away from React cos it really highlighted how many dependencies I was bringing in without realising.

Disabling it on a repository is one of the first things I do. I tend not to like how noisy it is on Node projects. Instead, I have scheduled tasks which audit periodically.

It would be really helpful if Dependabot mentioned how much bigger my build size would be relative to the main branch.

Dependabot only works reliably with lots of integration tests.

I ignore everything it says; for me it's just noise.