What I Found:
After some research, I discovered that if AutoGPT is running in a Docker container, the problem may be a networking issue: by default, Docker gives containers an isolated network namespace, so `localhost` inside the container refers to the container itself, not to the host machine where Ollama is listening.
Solution:
Instead of using `localhost`, replace it with `host.docker.internal` in the connection string:

`http://host.docker.internal:11434/`
This workaround is discussed in this GitHub issue: Ollama Issue #703.
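To illustrate the idea, here is a minimal sketch (my own, not from AutoGPT's codebase) that picks the right base URL depending on whether the code appears to be running inside a container. The `/.dockerenv` check is a common heuristic, not a guarantee on every container runtime:

```python
import os

def ollama_base_url(port: int = 11434) -> str:
    """Return the Ollama base URL, swapping localhost for
    host.docker.internal when running inside a Docker container."""
    # /.dockerenv exists inside most Docker containers; this is a
    # heuristic, not an official API.
    in_docker = os.path.exists("/.dockerenv")
    host = "host.docker.internal" if in_docker else "localhost"
    return f"http://{host}:{port}/"
```

Note that on Linux hosts, `host.docker.internal` is not resolvable by default; as far as I know, passing `--add-host=host.docker.internal:host-gateway` to `docker run` (Docker 20.10+) makes it available there as well.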
Questions:
- Is using `host.docker.internal` the best solution for this scenario?
- Are there other networking configurations or best practices to make AutoGPT work seamlessly with Ollama?