Or rather, I don't trust package maintainers to adhere to semver. I prefer to go through dependencies manually, updating one at a time and reading the changelogs, usually in batches. The peace of mind is worth more than the hour saved every week or two.
I do really like the tool that flags security issues with packages though.
- Package releases don't get PRs in their first 24 hours unless they are for security issues, so you don't get noise if there's a yank or a quick patch for a bug in the latest release
- You can set development (or production!) packages to only update once a week
- Packages that are known to have a very frequent release cadence (AWS SDK subcomponents, looking at you) get pushed to a much slower PR pace so that you only update them 2x/month, etc.
- This might be fixed now, but it had much nicer auto-merge behavior for releases that passed CI.
- With Yarn, it can run `yarn-deduplicate` after updates to trim down shared dependency bloat.
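The tool isn't named above, but the features listed map closely onto Renovate's options; a sketch of a `renovate.json` expressing them (the option names are Renovate's, the package pattern and schedules are assumptions — exact cadence syntax varies):

```json
{
  "minimumReleaseAge": "1 day",
  "packageRules": [
    {
      "matchDepTypes": ["devDependencies"],
      "schedule": ["before 5am on monday"]
    },
    {
      "matchPackagePatterns": ["^@aws-sdk/"],
      "schedule": ["before 4am on the first day of the month"]
    }
  ],
  "automerge": true,
  "postUpdateOptions": ["yarnDedupeHighest"]
}
```

`minimumReleaseAge` gives the 24-hour quarantine, per-package `schedule` rules slow down noisy publishers, and `postUpdateOptions` handles the Yarn dedupe step after each update.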
FWIW we still use Dependabot for security patches only because they seem to get picked up a few hours earlier. We also have much tighter lock rules on some JS packages which seem to make breaking changes on patch/minor releases.
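A sketch of what "tighter lock rules" can look like in a `package.json` (package names are hypothetical): dropping the caret pins the exact version, so even a patch release only arrives when someone bumps it deliberately:

```json
{
  "dependencies": {
    "well-behaved-lib": "^3.2.0",
    "breaks-on-patch-releases": "2.4.1"
  }
}
```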
Any update to production dependencies I will want to test manually unless it happens to be a part of the code where I feel exceptionally confident in our test coverage.
To save having to do multiple rounds of merge PR, rebase next PR, wait for CI... I end up doing my own rollup PRs by merging the various Dependabot branches. At least Dependabot is smart enough to close all of the original PRs when the rollup is merged.
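The rollup itself is plain git; here is a sketch that simulates the workflow in a scratch repository (branch and file names are invented) — in a real repo you would fetch the actual `dependabot/*` branches and merge those:

```shell
set -e
cd "$(mktemp -d)"
git init -q -b main
git config user.email "dev@example.com"
git config user.name "Dev"
echo base > base.txt && git add . && git commit -qm "base"

# Simulate two open Dependabot branches, each bumping one dependency
git checkout -qb dependabot/npm/axios-1.6.0 && echo a > a.txt && git add . && git commit -qm "bump axios"
git checkout -q main
git checkout -qb dependabot/npm/lodash-4.17.21 && echo b > b.txt && git add . && git commit -qm "bump lodash"
git checkout -q main

# The rollup: one branch with every Dependabot branch merged in
git checkout -qb deps-rollup
for branch in $(git for-each-ref --format='%(refname:short)' refs/heads/dependabot/); do
  git merge --no-edit -q "$branch"   # stop and resolve by hand on conflict
done
```

Opening a single PR from `deps-rollup` then triggers CI once, and Dependabot closes the per-package PRs when it merges.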
This still lets me know when an update to ANY internal or external dependency will break the build, so that I can take a look.
My tool only works with Nix Flakes, so it updates dependencies of Nix packages, but you often have other language-specific package management tools wrapped inside of Nix (e.g. Nix calling cargo). I think it would be a cool extension to add a configurable language-specific shell command whose results get added to that single PR as well. That could document the exact update workflow a dev should go through while also regularly exercising it.
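The proposed extension might be configured with something like this (entirely hypothetical keys — the feature doesn't exist yet): a per-ecosystem command run after the flake inputs are bumped, with the resulting changes committed onto the same PR:

```toml
# Hypothetical sketch: run each language-specific updater inside the Nix dev
# shell after flake inputs are bumped; commit whatever changed to the same PR.
[post-update-commands]
rust = "nix develop --command cargo update"
node = "nix develop --command npm update"
```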
https://github.com/serokell/update-daemon is a better tool than mine that does the same thing, and I have been thinking about adding the feature there. I'd be interested in discussing that idea further; maybe I'll open an issue for it.
So instead, we identified our critical dependencies, then added a dependency-update task to the list of tech-debt tasks we handle manually every week.
Using that analysis, a PR upgrading the dependencies looks like this: https://github.com/tmroberts56/java-maven/pull/3
That will then happen whether you use dependabot or another approach. If you don't test this manually and don't have enough test coverage, you will fail there eventually.
Dependabot helps you fail here early when you have N packages that changed and it's easy to figure out what happened, rather than wait until you're upgrading 10*N packages together and have to isolate things manually.
If you’re making a library and it bumps the lower version bounds of your dependencies, it’s doing something that’s actively slightly harmful. It’d be fine to notify you about major/incompatible version bumps, maybe even try them to see if they work (though certainly don’t apply the change automatically—a major version bump on a dependency is regularly a breaking change even if your tests still pass). But minor/patch/compatible releases that are already accepted by the version specifier? You shouldn’t be bumping those. Test against them if you like, maybe update your testing lockfile, but generally speaking the versions of libraries are an application concern, and your library should not be trying to dictate it.
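For example (hypothetical package name), a caret range in a library's `package.json` already admits every compatible newer release, so raising the floor only narrows consumers' choices:

```json
{
  "dependencies": {
    "left-util": "^1.1.0"
  }
}
```

`^1.1.0` already resolves to 1.2.3 on a fresh install; bumping the manifest to `^1.2.3` tells downstream applications they may no longer use 1.1.x, which is exactly the slight harm described above.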
(I must admit that the whole concept of compatible ranges is in practice slightly broken through and through, though it seldom causes trouble. I would love to have tooling that actively minimised dependency versions: “1.2.3? Turns out the newest functionality you’re depending on is from 1.1.0, so I’ll reduce the spec to ‘1.1’.” This would, of course, need to be paired with checking that you don’t accidentally use newer functionality—at the least, running your tests on minimum versions, preferably stronger API checks. Overall it’s the sort of thing that is difficult to make robust by any means in the likes of JavaScript and Python, but which could sanely be done in Rust, and there’s enough interest that I think something will probably happen within the next decade.)
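Rust already has one building block for this: nightly cargo's `-Z minimal-versions` resolves every dependency to the lowest version the specs allow, which CI can then test against. The missing piece is shrinking the declared bound itself; a toy sketch (all names hypothetical) of that calculation:

```python
# Hypothetical sketch of "dependency minimisation": given the version that
# first introduced each API a project actually uses, the declared lower bound
# can shrink to the newest of those, regardless of what the lockfile holds.

def parse(version: str) -> tuple[int, ...]:
    """Parse a dotted version string into a comparable tuple."""
    return tuple(int(part) for part in version.split("."))

def minimal_lower_bound(apis_used: dict[str, str]) -> str:
    """Return the oldest version that still provides every API used.

    That is the maximum over the introduction versions of the used APIs.
    """
    return max(apis_used.values(), key=parse)

# Lockfile says 1.2.3, but the newest feature used arrived in 1.1.0,
# so the spec can safely be relaxed to ">=1.1".
bound = minimal_lower_bound({"connect": "1.0.0", "stream_json": "1.1.0"})
```

As the comment above notes, this only stays honest if CI also runs the test suite against those minimum versions, so accidental use of newer functionality is caught.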
But applications? Sure, this is where Dependabot has an actual sane purpose. But presuming people tend to have it configured in similar ways to how they configure it in libraries… ugh. I’d still rather just do roughly what it does manually from time to time.
Dependabot has to be used because of the threats, but everyone automating and moving to latest is also a threat. The SolarWinds/VMware/US-government hack [3] was rooted in CI builds and automated "trust", and it ended up infecting tens of thousands of systems that thought they were secure with SOC 2. SOC 2 ends up making enterprises "trust" many third parties. What happens when Dependabot is an attack vector as well...
The log4j/Log4Shell [4] issue shows how long exploits can go on without detection or automated fixes. Node is filled with dependency issues, and those are just the known exploits, to say nothing of all the "telemetry". [5]
Any third party or dependency is a potential attack vector, and dependency saturation is adding lots of tedium to shipping. So much time goes to just updating libs it is a bit of a tragic comedy.
[1] https://en.wikipedia.org/wiki/DLL_Hell
[2] https://en.wikipedia.org/wiki/Dependency_hell
[3] https://en.wikipedia.org/wiki/2020_United_States_federal_gov...
[4] https://en.wikipedia.org/wiki/Log4Shell
[5] https://en.wikipedia.org/wiki/Npm_(software)#Notable_breakag...
We have it on for everything else (.NET/NuGet, Ruby, Docker, GitHub Actions, Git Submodule). It's great at keeping us up to date with little effort. For major (and some minor) releases, I'll read release notes to see if there's anything to watch out for.
It's especially useful for updating the git repo we use as a submodule in 5 other repos. Submodules were a source of frequent merge conflicts before as different developers updated it in their PRs. Now we pretty much don't think about it, and Dependabot keeps it up to date with our latest database models and shared libraries.
edit: We have it set to weekly for everything but the submodule, which is daily.
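A `.github/dependabot.yml` sketch matching that setup (ecosystem names are Dependabot's documented values; the directories are assumptions):

```yaml
version: 2
updates:
  - package-ecosystem: "gitsubmodule"
    directory: "/"
    schedule:
      interval: "daily"
  - package-ecosystem: "nuget"
    directory: "/"
    schedule:
      interval: "weekly"
  - package-ecosystem: "bundler"
    directory: "/"
    schedule:
      interval: "weekly"
  - package-ecosystem: "github-actions"
    directory: "/"
    schedule:
      interval: "weekly"
```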
1) For more important projects I have high test coverage (over 95%) and integration tests. In this setting, I have dependabot set to check once a week and the PRs get automatically merged if everything works. I have never had a problem with this setup, if the update breaks something, the tests fail and I get to fix it manually.
2) For less important projects without CD, I usually merge dev dependencies and minor versions by hand if there isn't something obviously wrong in the CI. Yeah, there's a risk it breaks something, but I can fix that before a release - obviously these are projects that only get worked on occasionally. Major versions I usually check manually before merging.
I think the ultimate answer is good testing as part of CI. Either you trust your tests, in which case I think it's OK to auto-merge, or you don't, and then all bets are off...
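One common way to wire up the merge-if-CI-passes behaviour is a workflow close to GitHub's documented pattern, sketched below; it assumes branch protection with required status checks is enabled, so `--auto` waits for green before merging:

```yaml
name: Dependabot auto-merge
on: pull_request
permissions:
  contents: write
  pull-requests: write
jobs:
  automerge:
    runs-on: ubuntu-latest
    if: github.actor == 'dependabot[bot]'
    steps:
      - name: Fetch Dependabot metadata
        id: metadata
        uses: dependabot/fetch-metadata@v2
        with:
          github-token: "${{ secrets.GITHUB_TOKEN }}"
      # Auto-merge everything except major bumps, which get a manual look
      - name: Enable auto-merge
        if: steps.metadata.outputs.update-type != 'version-update:semver-major'
        run: gh pr merge --auto --merge "$PR_URL"
        env:
          PR_URL: ${{ github.event.pull_request.html_url }}
          GH_TOKEN: ${{ secrets.GITHUB_TOKEN }}
```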
I am a big fan of Dependabot, but maximizing its effectiveness is highly dependent on the culture of the team/codebase.
Sounds like dependabot is very useful for uncovering insufficient test coverage or missing integration tests :)