
Thursday, May 30, 2019

Keeping an eye on my dad

It's probably not what you think...

My dad lives alone in an apartment. He is 92 years old and a little wobbly on his feet. We already lost my mom a few years ago, after she wandered around (advanced stage of Alzheimer's) during the night and fell and broke her hip while in a nursing home. Recently, a lady next door to him was found after 14 hours of lying on the floor. She fell and could not get to the alarm button, or could not operate it. My dad has one of those buttons too, and he claims to have it on him all the time, which I doubt a bit. A number of years ago, an aunt of my wife fell badly in her home and broke her arm. She could not get to the telephone, and was only found after almost two days. I want to spare him the same ordeal.

A little while ago, I stumbled on the website of a guy who is an expert on Computer Vision. He publishes many applications that use a camera and software to do recognition and tracking of objects. This piqued my interest. I had an RPi camera in my stash for at least two years, with the intent of fabricating a burglar alarm. The applications I found were using Motion, but I didn't think much of it, so the camera was left in the box.

Here is the website where I found many interesting applications and from which I learned a lot :

In the process of learning, I downloaded the code and tried many posts. After a while I zoomed in on a post that counted people.

It looked like the right thing for me. The reason I picked this particular application is as follows:
I want to measure movement, not presence. Imagine the situation where my dad sits in a chair, or falls and lies on the ground. Knowing that he's there does not help him, nor me. I want to know that he is moving around in the apartment.

So tracking him crossing an imaginary line seemed to be the right approach. I changed the code from the application to use a vertical line he needs to cross, and I went through several iterations to get it to work. The problem I ran into was speed. I'm using an RPi Model 3 B+, and that is simply not fast enough to reliably recognize my dad as a "person" and track him.
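To make the line-crossing idea concrete, here is a minimal sketch of the logic in plain Python. The helper names are mine, not from the final script: a person counts as "left" or "right" only when the whole bounding box has cleared the vertical centerline, and a change of side counts as one movement.

```python
# Sketch of the crossing logic; these helpers are an illustration,
# not code from the final application.

def side_of_line(startX, endX, centerline):
    """Return 'left'/'right' when the box is fully on one side, else None."""
    position = (startX + endX) // 2  # horizontal middle of the bounding box
    if position < centerline and endX < centerline:
        return "left"
    if position > centerline and startX > centerline:
        return "right"
    return None  # still straddling the line

def count_movements(boxes, centerline):
    """Count side changes for a sequence of (startX, endX) detections."""
    movements = 0
    old_side = None
    for (startX, endX) in boxes:
        side = side_of_line(startX, endX, centerline)
        if side is not None and side != old_side:
            movements += 1
            old_side = side
    return movements
```

For example, boxes at (10, 100), (450, 600) and (10, 100) with the centerline at 400 register three movements.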

I tried many things to speed things up, but the RPi was already running flat out on all cylinders (4 cores). Just when I was about to shelve the project, I found another post on the website that rejuvenated my interest. This post was about the recently launched device from Google, the Coral TPU USB accelerator.
Here is that post :

I read all I could find about this device on the internet, which is not much at the moment. There was a very impressive Youtube video where Google folks showed what this device could do, and so I bit the bullet and ordered one.

It arrived after a few days, and I quickly installed the driver software and tried the demo programs. They all worked flawlessly, so my initial doubts were quickly diminishing.

I then started a new app using this device, while retaining some of the structure and elements I used before. I was amazed at the speed. There is no longer a need to do recognition every 40 frames and use a lighter tracking method on the CPU in between. The Coral runs so fast that I can use recognition on every frame and have a very smooth display of movement. I use htop version 2.2 to track the activity on the cores, and I also track the core temperature and speed. It now runs with less than 40% load, and the core temperature never goes much above 60-70 degrees C. When my kit is installed in a cabinet, it may get warmer, so I'm using a temperature driven fan. This is described on this forum as well. By the way, I'm running the Coral at the normal speed; there is no need to go faster.
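The fan control itself is not shown in this post, but the logic is simple. Here is a sketch with hysteresis, so the fan does not rapidly toggle around a single threshold. The 55/65 degree thresholds are my own choices for illustration, not values from my build.

```python
# Temperature-driven fan sketch. The thresholds are assumptions;
# the actual fan circuit and values are not part of this post.

def fan_state(temp_c, fan_on, on_above=65.0, off_below=55.0):
    """Hysteresis: on above 65 C, off below 55 C, otherwise keep the state."""
    if temp_c >= on_above:
        return True
    if temp_c <= off_below:
        return False
    return fan_on  # in the dead band: keep the previous state

# In a loop, feed it the CPU temperature (e.g. parsed from
# "/opt/vc/bin/vcgencmd measure_temp") and drive the fan GPIO pin accordingly.
```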

Needless to say, I am very enthusiastic about this solution.

As you can imagine, I gently introduced my dad to the idea of using a device that would track his movement. He knows I'm not taking video or invading his personal space. He was fine with the idea, and I have shown him a few early prototypes to get him used to it. After I finished the version with the Coral TPU, I took the kit over to his apartment for a first tryout there, to see if I could really do what I wanted to do. I also needed to find a nice spot for the RPi and the camera.

Things worked very well right from the start, so we were both enthusiastic. I also found a nice place to put the stuff. He has a flat screen TV that is located in a cabinet. On the back is an unused screw hole that is meant to mount a wall frame to the TV. I can use that to mount the RPi case and the Coral TPU, and then use double sided tape to attach the camera enclosure on top of the TV. The location is perfect. The camera faces an area in his living room that he needs to walk through to get to the kitchen, front door, or bathroom.

I'm currently running full-blown tests in my own "lab", to make sure everything works.
I'll be updating things as I get further into the deployment.

Have fun!

Here are some explanations of the program and its features.

Movements are tracked, and when there is no movement for a longer period, I sound a beeper with an increasing loudness to alert my dad that he needs to cross the eye, or I will get a message. The reason I'm doing this is to limit the number of messages, and if something happens to him, he knows that help will be on the way pretty soon.
The message I get will be an SMS to my smartphone. I can then first call him to see what is going on, or alert the local help desk in the apartment building. The next step is for me to get over there as fast as I can. I can be there in 10-15 minutes. If I can't, I can call my brother or sister and let them know what is going on.

The tiny beeper is connected through a transistor to a GPIO pin and is fed by the 5V supply of the P1 connector.

There is no need to run this program the whole day. I decided to activate the application several times during the day, when he is most likely to be home. I use cron to start and stop the application. When the app is running, activities get logged into a file. At the end of every day this logfile is emailed to me. Initially I will collect a lot of data in this file, but I can reduce it (set DEBUG to False in the configuration file) when the test phase is over.

I use a configuration file for key parameters that make it easy for me to change them while I'm at his place.
The file is a simple JSON formatted structure. The name of the file is conf.json:

{
    "DEBUG": true,
    "DAEMON": true,
    "SMS": true,
    "alarm_time": 60,
    "window": 3500,
    "confidence": 0.8
}

DEBUG = True : activate a lot of logging
DAEMON = True : the program runs normally, activated by cron, and there is no screen output. With DAEMON = False, you can see the activity on the console during testing.
SMS = True : will send out SMS messages; False will turn that off
alarm_time = number of seconds the beeper will sound just before the SMS goes out. Crossing the eye will reset the counter.
window = number of seconds in which a movement must be registered, otherwise the alarm phase will start.
confidence = the recognition threshold for the model. I want it to be pretty high to avoid stray recognitions.
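The way window and alarm_time work together can be summarized in a few lines. This helper is just an illustration of the timing; the real script keeps the same state in its main loop and alarm thread:

```python
# Illustration of the window / alarm_time interaction (not code from the script).

def watch_phase(idle_seconds, window=3500, alarm_time=60):
    """Phase given the seconds since the last registered movement."""
    if idle_seconds <= window:
        return "ok"           # movement was seen recently enough
    if idle_seconds <= window + alarm_time:
        return "alarm"        # beeper sounds; crossing the eye resets everything
    return "sms"              # no reaction during the alarm phase: SMS goes out
```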


I use systemd and cron to start and stop the dad_watch script, and just cron for the send email scripts.
Note that I use the root version of cron!
In order to do that, use sudo su and then crontab -e
Here is a list of the cron entries:

0 8 * * * /bin/systemctl start dad_watch.service
0 9 * * * /bin/systemctl stop dad_watch.service

0 11 * * * /bin/systemctl start dad_watch.service
0 12 * * * /bin/systemctl stop dad_watch.service

0 13 * * * /bin/systemctl start dad_watch.service
0 14 * * * /bin/systemctl stop dad_watch.service

0 16 * * * /bin/systemctl start dad_watch.service
0 17 * * * /bin/systemctl stop dad_watch.service

0 20 * * * /bin/systemctl start dad_watch.service
0 21 * * * /bin/systemctl stop dad_watch.service

0 22 * * * /usr/bin/python3 /home/pi/


Here is the systemd profile located here :  /etc/systemd/system/dad_watch.service

# This service runs the dad_watch python script.
# When the script crashes, it is automatically restarted.
# If it crashes too many times, it will be forced to fail, or you can let systemd reboot.
# After editing this file, run: systemctl daemon-reload

[Unit]
Description=Installing dad_watch script

[Service]
ExecStart=/usr/bin/python3 /home/pi/
# restart the script when it crashes
Restart=on-failure

# The number of times the service is restarted within a time period can be set.
# If that condition is met, the RPi can be rebooted.
# Actions can be none|reboot|reboot-force|reboot-immediate
# StartLimitAction=none

# The following are defined in the /etc/systemd/system.conf file and are
# global for all services.
# They can also be set on a per-service basis here;
# if they are not defined here, they fall back to the system.conf values.
# StartLimitIntervalSec=
# StartLimitBurst=

[Install]
WantedBy=multi-user.target



Here is a quick and dirty email program that I use to send me the log file:

# Name:
# Purpose:     send an email with a file attachment
# Author:      paulv
# Created:     25-04-2019
# Copyright:   (c) paulv 2019
# Licence:     <your licence>

import smtplib
from email.mime.multipart import MIMEMultipart
from email.mime.text import MIMEText

def send_mail():
    fromaddr = "email address"
    toaddr = "your email address"
    msg = MIMEMultipart()
    msg['From'] = fromaddr
    msg['To'] = toaddr
    msg['Subject'] = "Dad_watch activity log file"
    body = "Today's log file"

    filename = "/home/pi/dad_watch.log"
    with open(filename, 'r') as f:
        attachment = MIMEText(
    attachment.add_header('Content-Disposition', 'attachment', filename=filename)

    msg.attach(MIMEText(body, 'plain'))
    msg.attach(attachment)  # attach the log file itself as well
    server = smtplib.SMTP('')  # fill in your SMTP server (and port) here
    server.login(fromaddr, "your password")
    text = msg.as_string()
    server.sendmail(fromaddr, toaddr, text)
    server.quit()

def main():
    send_mail()

if __name__ == '__main__':
    main()

Here is the current version of the main Python script.
Note that you must install the imutils and the twilio modules both as user pi (or whatever you use) and also as root; otherwise Python cannot find the modules.
I use Twilio to send an SMS to my mobile. You can sign up for a free account here :

# Name:
# Purpose:     Watching the activity of my dad, and send an SMS as warning when
#              no activity has been detected for a while.
# Author:      Paul Versteeg, based on idea and blog from Adrian Rosebrock
# Modified:
# Created:     26-05-2019
# Copyright:   (c) Paul Versteeg
# Licence:     <your licence>

# import the necessary packages
from edgetpu.detection.engine import DetectionEngine
from imutils.video import VideoStream
from imutils.video import FPS
from PIL import Image
import argparse
import imutils    #pip3 install imutils @ sudo for running as root
import warnings
import datetime
import json
import time
import cv2
import numpy as np
import os
import os.path
import sys
import shlex
import subprocess
import traceback
from threading import Thread
import logging
import logging.handlers
from twilio.rest import Client  # pip3 install twilio & sudo for running as root
import RPi.GPIO as GPIO
from multiprocessing import Process, Queue
import signal

#filter warnings, load the configuration file
conf = json.load(open("/home/pi/conf.json"))

# load the configuration variables
DEBUG = conf["DEBUG"]
DAEMON = conf["DAEMON"]
SMS = conf["SMS"]
alarm_time = conf["alarm_time"]
window = conf["window"]
confidence = conf["confidence"]

if DEBUG :
    print ("DEBUG is {}".format(DEBUG))
    print ("DAEMON is {}".format(DAEMON))
    print("SMS is {}".format(SMS))
    print("confidence {}".format(confidence))
    print("alarm_time is {}".format(alarm_time))
    print("window is {}".format(window))

ALARM = False          # alarm flag
ALARM_RUNNING = False  # set by the alarm thread while it is active

#--logger definitions
# save daily logs for 7 days
# the logfile will be mailed daily
LOG_FILENAME = "/home/pi/dad_watch.log"
LOG_LEVEL = logging.INFO  # Could be e.g. "DEBUG", "ERROR" or "WARNING"
logger = logging.getLogger(__name__)
logger.setLevel(LOG_LEVEL)
handler = logging.handlers.TimedRotatingFileHandler(LOG_FILENAME, when="midnight", backupCount=7)
formatter = logging.Formatter('%(asctime)s %(levelname)-8s %(message)s')
handler.setFormatter(formatter)
logger.addHandler(handler)

class MyLogger():
    '''
    A class that can be used to capture stdout and stderr and put it in the log.
    '''

    def __init__(self, level, logger):
        '''Needs a logger and a logger level.'''
        self.logger = logger
        self.level = level

    def write(self, message):
        # Only log if there is a message (not just a new line)
        if message.rstrip() != "":
            self.logger.log(self.level, message.rstrip())

    def flush(self):
        pass  # do nothing -- just to handle the attribute for now

# --Replace stdout and stderr with logging to file so we can run it as a daemon
# and still see what is going on
if DAEMON :  # only redirect when running as a daemon
    sys.stdout = MyLogger(logging.INFO, logger)
    sys.stderr = MyLogger(logging.ERROR, logger)

def beep():
    # short audible tick; the pulse width is an assumption, the original timing line was lost
    GPIO.output(17, GPIO.HIGH)
    time.sleep(0.1)
    GPIO.output(17, GPIO.LOW)

def get_cpu_temp():
    # get the cpu temperature
    # need to use the full path, otherwise root cannot find it
    cmd = "/opt/vc/bin/vcgencmd measure_temp"
    args = shlex.split(cmd)
    output, error = subprocess.Popen(args, stdout = subprocess.PIPE, \
                    stderr= subprocess.PIPE).communicate()

    # strip the temperature out of the returned string
    # the returned string is in the form : b"temp=43.9'C\n"
    # if your localization is set to US, you get the temp in Fahrenheit,
    # so you need to adapt the stripping somewhat
    cpu_temp = float(output[5:9]) # for Celsius
    return cpu_temp

def init():
    global labels, model, vs, fps

    # set the GPIO mode
    GPIO.setmode(GPIO.BCM)  # BCM numbering is an assumption; the original line was lost
    GPIO.setup(17, GPIO.OUT)

    # the cpu temp must be at the lowest, so report it.
    write_log("info", "CPU temp is {} degrees C".format(get_cpu_temp()))

    # initialize the labels dictionary
    print("parsing class labels...")
    labels = {}
    # loop over the class labels file
    for row in open("/home/pi/edgetpu_models/coco_labels.txt"):
        # unpack the row and update the labels dictionary
        (classID, label) = row.strip().split(maxsplit=1)
        labels[int(classID)] = label.strip()

    # load the Google Coral object detection model
    print("loading Coral model...")
    model = DetectionEngine("/home/pi/edgetpu_models/mobilenet_ssd_v2_coco_quant_postprocess_edgetpu.tflite")

    # initialize the video stream and allow the camera sensor to warmup
    print("starting video stream...")
    vs = VideoStream(usePiCamera=True).start()
    # let the camera sensor warm-up
    time.sleep(2.0)

    if not DAEMON :
        # start the frames per second throughput estimator
        fps = FPS().start()

    if DEBUG : beep()

def write_log(mode="info", msg=""):
    '''
    Create a log file with information during the running of this app.
    Log information with a qualifier to easier sort it.
    '''

    if mode == "info" :
        logger.info(msg)
    elif mode == "debug" :
        logger.debug(msg)
    elif mode == "error" :
        logger.error(msg)
    else :
        # catch all
        print ("---catch all---", mode)

def send_sms(msg='no txt'):

    account_sid = 'yours'
    auth_token = 'yours'

    try:
        client = Client(account_sid, auth_token)

        message = client.messages.create(
                 body = msg,
                 from_='you get this from twilio',
                 to='your mobile number')

        if DEBUG : write_log("info", message.sid)

    except Exception as e:
        write_log("error", "Unexpected Exception in send_sms() : \n{0}".format(e))


def sound_alarm():
    '''
    Function to create a separate thread to start an alarm beep.
    The alarm will be stopped when movement has been detected again or
    when the total alarm time has been exceeded.
    '''
    global alarm_thread, ALARM_RUNNING, ALARM, SMS

    try:
        class alarm_threadclass(Thread):

            def run(self):
                global alarm_thread, ALARM_RUNNING, ALARM, SMS
                try:
                    start_time =
                    i = 1
                    while ALARM :
                        ALARM_RUNNING = True
                        GPIO.output(17, GPIO.HIGH)
                        time.sleep(0.5)  # beep cadence; the original timing lines were lost
                        GPIO.output(17, GPIO.LOW)
                        time.sleep(0.5)
                        i += 1
                        max_alarm = ( - start_time).seconds

                        if (max_alarm > conf["alarm_time"]) :
                            write_log("info", "*** no movement during alarm, sending SMS")
                            if SMS :
                                send_sms("no movement during alarm phase")
                                SMS = False # send only one SMS per session

                    ALARM_RUNNING = False
                    write_log("info", "alarm thread ended")
                    ALARM = False

                except Exception as e:
                    print("Unexpected Exception in sound_alarm :\n{0} ".format(e))
                    write_log("error", "Unexpected Exception in sound_alarm :\n{0} ".format(e))

        alarm_thread = alarm_threadclass()
        alarm_thread.setDaemon(True) # so a ctrl-C can terminate it
        if not ALARM_RUNNING :
            alarm_thread.start() # start the thread
            write_log("info", "sound_alarm thread started")

    except Exception as e:
        write_log("error", "Unexpected Exception in sound_alarm() : \n{0}".format(e))

def sig_handler(signum=None, frame=None):
    '''
    This function will catch the most important system signals, but NOT a shutdown!

    This handler catches the following signals from the OS:
        SIGHUP = (1) SSH Terminal logout
        SIGINT = (2) Ctrl-C
        SIGQUIT = (3) ctrl-\\
        IOerror = (5) when terminating the SSH connection (input/output error)
        SIGTERM = (15) Daemon terminate (daemon --stop): is coming from systemd
    However, it cannot catch SIGKILL = (9), the kill -9 or the shutdown procedure
    '''

    try:
        if DEBUG : write_log("info", "Sig_handler called with signal : {0}".format(signum))
        if signum == 1 :
            return # ignore SSH logout termination

        write_log("info", "Sig_handler is terminating the application")
        # the cpu temp must be at the highest, so report it.
        write_log("info", "CPU temp is {} degrees C".format(get_cpu_temp()))

        if DEBUG : beep()  # audible cue on termination (assumed; the original line was lost)

        if not DAEMON:
            # stop the fps timer and display the collected results
            fps.stop()
            print("[INFO] elapsed time: {:.2f}".format(fps.elapsed()))
            print("[INFO] approx. FPS: {:.2f}".format(fps.fps()))

        # do a bit of cleanup
        GPIO.output(17, GPIO.LOW) # force the beeper to quit

        os._exit(0) # force the exit to the OS

    except Exception as e: # IOerror 005 when terminating the SSH connection
        write_log("info", "Unexpected Exception in sig_handler() : \n{0}".format(e))

def main():
    global ALARM

    write_log("info", "\n\n   ***** Starting up")

    # initialize the frame dimensions (we'll set them as soon as we read
    # the first frame from the video)
    W = None
    H = None
    startX = 0
    startY = 0
    endX = 0
    endY = 0

    dir = None  # direction

    side = None
    old_side = None
    movement = 0


    # setup a catch for the following signals: signal.SIGINT = ctrl-c
    for sig in (signal.SIGTERM, signal.SIGINT, signal.SIGHUP, signal.SIGQUIT):
        signal.signal(sig, sig_handler)

    # setup the movement timers
    lastSeen =
    lastSeen_ts =

    try:
        # loop over the frames from the video stream
        while True:
            # setup the presence timers
            timestamp =
            lastSeen_ts =
            interval = (lastSeen_ts - lastSeen).seconds

            # grab the frame from the threaded video stream and resize it
            # to have a maximum width of 500 or 800 pixels during testing
            frame =

            if DAEMON :
                frame = imutils.resize(frame, width=500)
            else :
                frame = imutils.resize(frame, width=800)

            orig = frame.copy()

            # if the frame dimensions are empty, set them
            if W is None or H is None:
                (H, W) = frame.shape[:2]
                if DEBUG : print("[INFO] H= {} W= {}".format(H,W))
                # Create a dividing line in the center of the frame
                # it is used to determine movement of the objects
                centerline = W // 2

            if (interval > (conf["window"] + 100) and ALARM == True) :
                write_log("info", "reset the timer interval")
                lastSeen =
                ALARM = False

            # check if we went past the movement timing deadline...
            if ((interval > conf["window"]) and (ALARM == False)):
                ts = timestamp.strftime("%A %d %B %Y %H:%M:%S")
                write_log("info", "*** no movement alarm at {}".format(ts))
                ALARM = True
                sound_alarm()  # start the escalating beeper (call reconstructed; it was lost in formatting)

            # prepare the frame for object detection by converting it
            # from BGR to RGB channel ordering and then from a NumPy
            # array to PIL image format
            frame = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
            frame = Image.fromarray(frame)

            # do the heavy lifting on the Coral USB tpu
            results = model.DetectWithImage(frame, threshold=conf["confidence"],
                keep_aspect_ratio=True, relative_coord=False)

            # loop over the results
            for r in results:
                # use the index to get at the proper label
                label = labels[r.label_id]
                # we're only interested in persons
                if label != "person" :
                    continue

                # extract the bounding box and predicted class label
                box = r.bounding_box.flatten().astype("int")
                (startX, startY, endX, endY) = box

                # calculate the middle of the object
                position = (startX + endX) // 2
                # The centerline line is in the middle of the frame (W // 2)
                # Determine if the object movement is across the centerline
                # If the middle of the object AND the right-hand side (endX) has
                # crossed, there was a complete move to the left.
                # If the middle of the object AND the left-hand side (startX) has
                # crossed, there was a complete move to the right.

                if position < centerline and endX < centerline :
                    side = "left"
                    # print("moving left")
                if position > centerline and startX > centerline :
                    side = "right"
                    # print("moving right")

                # if there is a change in the side, record it as a movement
                if side != old_side:
                    movement += 1
                    if DEBUG : beep()
                    if DEBUG : print("[INFO] movement {}".format(movement))
                    write_log("info", "movement {}".format(movement))

                    # reset the counter & alarm flag
                    lastSeen =
                    if (ALARM == True) :
                        ALARM = False

                old_side = side

                if not DAEMON :
                    # draw the bounding box and label on the image
                    cv2.rectangle(orig, (startX, startY), (endX, endY),
                        (0, 255, 0), 2)
                    y = startY - 15 if startY - 15 > 15 else startY + 15
                    text = "{}: {:.2f}%".format(label, r.score * 100)
                    cv2.putText(orig, text, (startX, y),
                        cv2.FONT_HERSHEY_SIMPLEX, 0.5, (0, 255, 0), 2)

            if not DAEMON :
                # show the output frame and wait for a key press
                cv2.imshow("Frame", orig)
                key = cv2.waitKey(1) & 0xFF

                # if the `q` key was pressed, break from the loop
                if key == ord("q"):
                    break
                # update the FPS counter
                fps.update()

    except KeyboardInterrupt:
        if (DEBUG) and (not DAEMON): print ("\nCtrl-C Terminating")
        GPIO.output(17, GPIO.LOW)

    except Exception as e:
        sys.stderr.write("Got exception: %s" % e)
        if (DEBUG) and (not DAEMON): print(traceback.format_exc())
        GPIO.output(17, GPIO.LOW)

    finally:
        if not DAEMON :
            # stop the timer and display FPS information
            fps.stop()
            print("[INFO] elapsed time: {:.2f}".format(fps.elapsed()))
            print("[INFO] approx. FPS: {:.2f}".format(fps.fps()))
        # do a bit of cleanup
        GPIO.output(17, GPIO.LOW)

if __name__ == '__main__':
    init()
    main()
