dockcheck is a simple CLI tool for keeping track of and updating your containers. Selective semi- or fully automatic updates, notifications on new versions, and much more.

Another 6 months have passed and a bunch of updates have been made. The most recent ones are multi-threaded/asynchronous checks to greatly increase speed, notifications on new dockcheck releases for those who run scheduled unattended checks, macOS and BSD compatibility changes, a Prometheus exporter to expose stats to e.g. Grafana, and more.

I’m happy to see the project still being used and improved by its users, as I thought other great tools (dockge, wud, watchtower and others) would have replaced it by now.

As it’s been a while, I’ll try to list the features:

  • Checks all your containers for new image updates, without pulling.
  • Manually select which containers to check, or check them all.
  • Either run it to auto-update everything, or just list the results without updating anything.
  • Hook it into notifications on new updates.
    • Templates: Synology DSM, mSMTP, Apprise, ntfy.sh, Gotify, Pushbullet, Telegram, Matrix, Pushover, Discord.
    • Enrich notifications with URLs to container release notes.
  • Optionally export metrics to Prometheus to graph how many images have updates available.
  • Other misc options, such as:
    • Use labels to only update containers with the label set.
    • Use an N-days-old option to only update images whose release has been stable for N days.
    • Auto-prune dangling images.
    • Include stopped containers.
    • Exclude specific containers.

I’ve got to thank this community for contributing donations, ideas, surfacing issues, testing and PRs. It’s a joy!

  • suicidaleggroll@lemm.ee

    This is a great tool, thanks for the continued support.

    Personally, I don’t actually use dockcheck to perform updates; I only use its update-check functionality, along with a custom plugin which, in cooperation with a Python script of mine, serves a REST API listing all containers on all of my systems with available updates. That then gets pulled into Homepage using their custom API widget to make something like this: https://imgur.com/a/tAaJ6xf

    So at a glance I can see any containers that have updates available, then I can hop into Dockge to actually apply them on my own schedule.

    • mag37@lemmy.mlOP

      Thank you! Oh! That’s pretty cool, do you mind sharing bits of how this is done? Would be nice to incorporate into a notify-template in the future.

      • suicidaleggroll@lemm.ee

        Sure, it’s a bit hack-and-slash, but not too bad. Honestly the dockcheck portion is already pretty complete, I’m not sure what all you could add to improve it. The custom plugin I’m using does nothing more than dump the array of container names with available updates to a comma-separated list in a file. In addition to that I also have a wrapper for dockcheck which does two things:

        1. dockcheck plugins only run when there’s at least one container with available updates, so the wrapper is used to handle the case when there are none.
        2. Some containers aren’t handled by dockcheck because they use their own management system; two examples are bitwarden and mailcow. The wrapper script can be modified as needed to support those as well, but that has to be done one-off, since there’s no general-purpose way to check for updates on containers that insist on doing things their own custom way.

        Basically there are 5 steps to the setup:

        1. Enable Prometheus metrics in Docker (this is only needed for the running/stopped counts; if those aren’t needed, it can be skipped). To do that, add the following to /etc/docker/daemon.json (create it if necessary) and restart Docker:
        {
          "metrics-addr": "127.0.0.1:9323"
        }
        

        Once Docker is restarted, you should be able to run curl http://localhost:9323/metrics and see a dump of Prometheus metrics.
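        The running/stopped counts that the main Python script later in this comment extracts come from the engine_daemon_container_states_containers metric in that dump. As a minimal, self-contained sketch of that parsing (the sample payload and counts below are made up for illustration):

```python
# Sketch: extract container-state counts from Docker's Prometheus metrics
# text. SAMPLE is an illustrative excerpt; the real output from
# http://localhost:9323/metrics contains many more metric families.
SAMPLE = """\
engine_daemon_container_states_containers{state="paused"} 0
engine_daemon_container_states_containers{state="running"} 12
engine_daemon_container_states_containers{state="stopped"} 3
"""

def parse_container_states(text):
    """Return a dict like {'paused': 0, 'running': 12, 'stopped': 3}."""
    counts = {}
    for line in text.splitlines():
        if line.startswith('engine_daemon_container_states_containers'):
            # Line format: metric_name{state="running"} 12
            state = line.split('state="')[1].split('"')[0]
            counts[state] = int(line.split()[1])
    return counts

print(parse_container_states(SAMPLE))  # {'paused': 0, 'running': 12, 'stopped': 3}
```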

        2. Clone dockcheck, and create a custom plugin for it at dockcheck/notify.sh:
        send_notification() {
            Updates=("$@")
            UpdToString=$(printf ", %s" "${Updates[@]}")
            UpdToString=${UpdToString:2}

            # Dump a comma-separated list of containers with updates to a file
            File=updatelist_local.txt
            echo -n "$UpdToString" > "$File"
        }
        
        3. Create a wrapper for dockcheck:
        #!/bin/bash

        # Run from the directory this script lives in
        cd "$(dirname "$0")" || exit 1

        ./dockcheck/dockcheck.sh -mni

        # The notify.sh plugin only fires when updates exist, so fall back
        # to "None" when it didn't write the list file.
        if [[ -f updatelist_local.txt ]]; then
            mv updatelist_local.txt updatelist.txt
        else
            echo -n "None" > updatelist.txt
        fi
        

        At this point you should be able to run your wrapper script, and at the end you’ll have the file “updatelist.txt”, which will contain either a comma-separated list of all containers with available updates, or “None” if there are none. Add the script to cron to run on whatever cadence you want; I use 4 hours.
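        If you want to consume updatelist.txt from your own tooling, the format is trivial to parse. A small sketch (the helper name read_update_list is mine, not part of dockcheck):

```python
# Sketch: parse the contents of updatelist.txt as written by the wrapper,
# which is either a comma-separated list of container names or "None".
def read_update_list(contents):
    """Return a list of container names with pending updates."""
    contents = contents.strip()
    if not contents or contents == "None":
        return []
    return [name.strip() for name in contents.split(",")]

print(read_update_list("nginx, grafana, homepage"))  # ['nginx', 'grafana', 'homepage']
print(read_update_list("None"))                      # []
```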

        4. The main Python script:
        #!/usr/bin/python3

        from flask import Flask, jsonify

        import os
        import time
        import requests

        app = Flask(__name__)

        # Listen addresses for docker metrics
        dockerurls = ['http://127.0.0.1:9323/metrics']

        # Other dockerstats servers
        staturls = []

        # File containing list of pending updates
        updatefile = '/path/to/updatelist.txt'

        @app.route('/metrics', methods=['GET'])
        def get_tasks():
            running = 0
            stopped = 0
            updates = ""

            # Collect running/stopped counts from the local Docker metrics endpoint(s)
            for url in dockerurls:
                response = requests.get(url)

                if response.status_code == 200:
                    for line in response.text.split("\n"):
                        if 'engine_daemon_container_states_containers{state="running"}' in line:
                            running += int(line.split()[1])
                        if 'engine_daemon_container_states_containers{state="paused"}' in line:
                            stopped += int(line.split()[1])
                        if 'engine_daemon_container_states_containers{state="stopped"}' in line:
                            stopped += int(line.split()[1])

            # Fold in the stats from any other dockerstats servers
            for url in staturls:
                response = requests.get(url)

                if response.status_code == 200:
                    apidata = response.json()
                    running += int(apidata['results']['running'])
                    stopped += int(apidata['results']['stopped'])
                    if apidata['results']['updates'] != "None":
                        updates += ", " + apidata['results']['updates']

            # Read the local update list, but only trust it if it's fresh
            if os.path.isfile(updatefile):
                st = os.stat(updatefile)
                age = time.time() - st.st_mtime
                if age < 86400:
                    with open(updatefile, "r") as f:
                        temp = f.readline()
                    if temp != "None":
                        updates += ", " + temp
                else:
                    updates += ", Error"
            else:
                updates += ", Error"

            if not updates:
                updates = "None"
            else:
                updates = updates[2:]

            status = {
                'running': running,
                'stopped': stopped,
                'updates': updates
            }
            return jsonify({'results': status})

        if __name__ == '__main__':
            app.run(host='0.0.0.0')
        

        The neat thing about this program is that it’s nestable: if you run steps 1–4 independently on all of your Docker servers (assuming you have more than one), you can pick one machine to be the “master” and update the “staturls” variable to point to the others, allowing it to collect all of the data from the other copies of itself into its own output. If the output only needs to be accessed from localhost, you can change the host variable in app.run to 127.0.0.1 to lock it down.

        Once this is running, you should be able to run curl http://localhost:5000/metrics and see the running and stopped container counts and available updates for the current machine and any other machines you’ve added to “staturls”. You can then turn this program into a service, or launch it @reboot in cron or in /etc/rc.local, whatever fits your management style, to start it up on boot.

        Note that it verifies the age of the updatelist.txt file before using it; if it’s more than a day old, it likely means something is wrong with the dockcheck wrapper script or similar, and rather than using the stale output, the REST API will print “Error” to let you know something is wrong.
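        To illustrate the nesting, here is a small standalone sketch of the aggregation logic, using hypothetical example payloads in the JSON shape the Flask app returns:

```python
# Sketch: how a "master" instance folds child /metrics responses into its
# own totals. The payloads here are hypothetical examples of the JSON
# shape returned by the Flask app above.
child_responses = [
    {'results': {'running': 12, 'stopped': 1, 'updates': 'nginx, grafana'}},
    {'results': {'running': 5,  'stopped': 0, 'updates': 'None'}},
]

running = stopped = 0
updates = []
for payload in child_responses:
    r = payload['results']
    running += int(r['running'])
    stopped += int(r['stopped'])
    # "None" is the sentinel for "no pending updates" on that host
    if r['updates'] != 'None':
        updates.append(r['updates'])

merged = {'running': running, 'stopped': stopped,
          'updates': ', '.join(updates) if updates else 'None'}
print(merged)  # {'running': 17, 'stopped': 1, 'updates': 'nginx, grafana'}
```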

        5. Finally, the Homepage custom API widget to pull the data into the dashboard:
                widget:
                  type: customapi
                  url: http://localhost:5000/metrics
                  refreshInterval: 2000
                  display: list
                  mappings:
                    - field:
                        results: running
                      label: Running
                      format: number
                    - field:
                        results: stopped
                      label: Stopped
                      format: number
                    - field:
                        results: updates
                      label: Updates
        
        • mag37@lemmy.mlOP

          That’s really nice! Thank you so much for the writeup.

          Would you mind if I added this as a discussion (crediting you and this post!) in the GitHub project? Or, if you’d like, you could copy-paste it yourself to get the credit and be a part of the discussion.