This seems like a really bad idea to me. I could understand, and perhaps get behind, using something like this to find the optimal version of a package for a given project, but unexpected differences between your development environment and production are a common source of outages.
It also requires using a different package manager called Thamos: https://thoth-station.ninja/docs/developers/thamos/. This tool then outputs requirements files compatible with Pipenv, pip, or pip-tools (though notably not Poetry).
That being said, all of the examples and config seem very centered on ML use cases, with the Thamos config accepting settings for OS, CPU, and CUDA versions. Is variance in performance between otherwise-compatible versions of ML packages really that big a problem?
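For illustration, here's a rough sketch of what a Thamos config (`.thoth.yaml`) might look like; the key names are reconstructed from the Thoth docs and may not be exact:

```yaml
# Rough sketch of a .thoth.yaml; key names approximate the Thoth docs.
requirements_format: pipenv

runtime_environments:
  - name: production
    operating_system:
      name: fedora
      version: "35"
    python_version: "3.9"
    cuda_version: "11.2"        # hardware/accelerator hints for the resolver
    recommendation_type: stable
```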
> The Python Packaging Authority (PyPA), along with the Python community, is working on an endpoint to provide the dependency information.
So what is the `requires_dist` key in e.g. https://pypi.org/pypi/Django/3.2/json ?
(My experimental dependency locking tool Pipimi (https://github.com/akx/pipimi/blob/f055b0c0/pipimi.py#L43-L5...) uses that endpoint.)
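For reference, a minimal sketch (standard library only) of reading dependency info from that endpoint:

```python
import json
from urllib.request import urlopen

# Fetch the PyPI JSON metadata for Django 3.2.
with urlopen("https://pypi.org/pypi/Django/3.2/json") as resp:
    data = json.load(resp)

# info.requires_dist holds the declared dependencies as PEP 508 strings,
# e.g. 'asgiref (<4,>=3.3.2)'; it can be null for packages that declare none.
for requirement in data["info"]["requires_dist"] or []:
    print(requirement)
```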