Building My Digital Brain Part 5: Setup

This is part 5 of my series on building my own digital brain.

Today I’m providing a walkthrough of the digital brain I built for myself using Open WebUI.

I’m also providing step-by-step instructions for how I set it up.

The setup is 100% free, and the steps can be followed regardless of technical experience.

Let me know what you think!

Thanks
Dave


Got it all set up. Great guide, @davewaring - thanks!

I even managed to load up DuckDuckGo as the search engine instead of Google, totally rad.

I have a question. Why does Docker (in my case, the app) have to be open in order for http://localhost:3000 to connect?

I just tried to access localhost on port 3000 and Firefox was unable to connect. Searching around and scouring the Open WebUI forum, I didn’t find much in terms of a solution.

I opened the Docker app, and localhost:3000 worked.

Is this a bug? Feature? Is there a workaround so I don’t have to open the Docker app to access the webUI?


Thanks @buddy_eric and welcome to the community!

Yep, Docker has to be running for it to work.

Docker is essentially a platform that allows you to run applications in isolated environments called “containers.”

Running things this way makes them easier to set up and maintain than they would be otherwise.

So if Docker is not running, then the container hosting Open WebUI on your computer is not running either, and nothing is listening on localhost:3000.
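If you want to see this for yourself, here's a quick way to check whether anything is actually listening on a port (a minimal Python sketch; localhost and port 3000 are just the values from the guide):

```python
import socket

def port_open(host: str, port: int, timeout: float = 1.0) -> bool:
    """Return True if something accepts TCP connections on host:port."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# With Docker (and therefore the Open WebUI container) stopped, nothing is
# listening on port 3000, so this returns False and the browser shows
# "unable to connect". Start Docker and the container, and it returns True.
print(port_open("localhost", 3000))
```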

Any other questions or issues, let me know!

Dave

Is it possible [on a Mac] to set a script that runs when OS loads to get Docker and Ollama running in the background?

(I also noticed that when Ollama is NOT running, even if Docker is, the WebUI will load but no models are available. So if such a script is possible, I would want to include running Ollama as well.)

Hey Eric,

It looks like it’s straightforward to get Docker running at startup, but a little more involved to do the same for Ollama.

I asked my Open WebUI digital brain how to do it, and this is what I got back.

Let me know what you think!

Thanks
Dave
To ensure that Ollama and Docker run automatically at startup on your Mac, you’ll generally need to set them up as login items or use startup scripts. Below is a step-by-step guide to achieve this:

For Docker (via GUI):

Docker Desktop for Mac comes with an easy “auto-start” setting that you can enable via the user interface.

  1. Open Docker Desktop:

    • Go to your Applications folder and open Docker.
  2. Enable Auto Startup:

    • Click on the Docker icon in the Mac’s menu bar (top-right corner).
    • Choose Preferences or Settings.
    • In the menu, navigate to General settings and check the box labeled Start Docker Desktop when you log in.

This ensures Docker Desktop starts automatically when you log in to your Mac.

For Ollama (via Terminal):

Ollama requires a CLI-based approach to automatically run at startup. You can achieve this by creating a launch agent, defined in a launchd plist file.

Step 1: Create the Launch Agent File

You’ll need to create a property list (plist) file that defines how Ollama should be run at startup.

  1. Open the Terminal app.
  2. Use a command-line text editor (like nano) to create a com.ollama.startup.plist file in the ~/Library/LaunchAgents directory:
nano ~/Library/LaunchAgents/com.ollama.startup.plist
  3. In the nano editor, enter the following content into the plist file. Replace /path/to/ollama with the actual path where Ollama is installed (you can find it by running which ollama):
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
  <dict>
    <key>Label</key>
    <string>com.ollama.startup</string>

    <key>ProgramArguments</key>
    <array>
      <string>/path/to/ollama</string>
      <string>serve</string> <!-- Or the necessary command to start Ollama. -->
    </array>

    <key>RunAtLoad</key>
    <true/>
  </dict>
</plist>
  4. Once done, press CTRL + O to save the file, then CTRL + X to exit.

Step 2: Load the Launch Agent

Now, load the created plist file to make it active:

launchctl load ~/Library/LaunchAgents/com.ollama.startup.plist

This will ensure that the Ollama process starts immediately and again each time you log in.

Checking if Everything Works

  • Restart your Mac and confirm that Docker automatically starts (you can check this by looking at the menu bar for the Docker icon).
  • Confirm that Ollama is running by checking for your processes in the Terminal:
ps aux | grep ollama

Managing Startup Items (Optional)

You can add login items manually in System Settings under General > Login Items if needed, but for command-line utilities like Ollama, using launchd is preferred.

You should now have both Docker and Ollama set up to start automatically when you log into macOS.
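As an alternative to hand-editing the XML above, the same launch agent can be generated with Python's standard plistlib module. This is just a sketch: the Ollama path is a placeholder you'd replace with the output of which ollama, and the write line is commented out so you can inspect the output first.

```python
import plistlib
from pathlib import Path

# Placeholder: replace with the real path reported by `which ollama`.
OLLAMA_PATH = "/path/to/ollama"

launch_agent = {
    "Label": "com.ollama.startup",
    "ProgramArguments": [OLLAMA_PATH, "serve"],
    "RunAtLoad": True,
}

plist_path = Path.home() / "Library" / "LaunchAgents" / "com.ollama.startup.plist"
# Uncomment on your Mac to actually write the file, then load it with launchctl:
# plist_path.write_bytes(plistlib.dumps(launch_agent))
print(plistlib.dumps(launch_agent).decode())
```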


Outstanding! Will give this a try.

Great, would appreciate it if you’d come back and share your experience with the community after doing so!

Dave

I apologize for the delay in response. It seems I did not need to do this after all, as my Ollama already runs automatically on OS startup. I’m not sure how I missed that. I had convinced myself that the local LLM models weren’t loading, despite having run Docker, but after a controlled experiment I found Ollama was in fact loading correctly, and the models populated within WebUI.

If I took an extra step/modified settings during the Ollama install, that made it run at OS startup, I don’t remember. I believe it runs automatically on startup out of the box. So good luck and godspeed to those who do not want autorun at startup!

Thanks Dave.

Eric

Awesome, glad you figured it out, and thanks for letting us know!

Dave

Eh, sorry to bring this back up.

I realized what was not loading at boot: the [necessary] ngrok socket connection. That doesn’t start automatically; it requires me to run the command below from Terminal, and the tunnel is severed if/when I close Terminal.

ngrok http --url=my-ngrok-url.ngrok-free.app 3000

What this means: only if my computer is on with the OS loaded, Docker and Ollama are up and running, and the ngrok socket is established, can I access the WebUI remotely via the web. Put another way, the circumstances under which I’m out and about and can actually access the WebUI remotely will probably be extremely rare.

I need a dedicated machine.

Eric

No worries, Eric, thanks for sharing your experience so we can all learn together.

Yes, I think getting up and running on a laptop is a good first step, but as soon as it’s something you want to use on a daily basis, dedicating hardware to it or hosting it in a virtual private cloud is the way to go.
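In the meantime, one possible workaround (untested on my end) would be a launch agent for ngrok, analogous to the Ollama one earlier in the thread, so the tunnel survives closing Terminal. The /path/to/ngrok and the com.ngrok.startup label are placeholders; KeepAlive tells launchd to restart the tunnel if it dies:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
  <dict>
    <key>Label</key>
    <string>com.ngrok.startup</string>

    <key>ProgramArguments</key>
    <array>
      <string>/path/to/ngrok</string>
      <string>http</string>
      <string>--url=my-ngrok-url.ngrok-free.app</string>
      <string>3000</string>
    </array>

    <key>RunAtLoad</key>
    <true/>

    <key>KeepAlive</key>
    <true/>
  </dict>
</plist>
```

As with the Ollama agent, you’d save this under ~/Library/LaunchAgents and activate it with launchctl load.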

I talk more about the tradeoffs of this from a privacy and ownership perspective here: Building My Digital Brain Part 2: Ownership & Control


Thanks for sharing, Dave!
