Drifting off in Loiter Mode
I'm running DroneKit on a Raspberry Pi connected to a Pixhawk 4.
Yesterday I was testing some code that makes the drone take off, move to a local position, and loiter for 30 seconds while detecting objects below it using a camera and TensorFlow. The vehicle took off correctly and moved to the local position, but when it switched to loiter mode it kept drifting away until it hit a tree branch and the motors shut off completely (although power to the vehicle was not lost). It appeared as if it hadn't switched to loiter at all and had just continued to drift in the same manner as when it flew to the relative position. This was very confusing, as I had previously run my code in SITL with FlightGear to visualize the drone's movements, and it behaved exactly as expected.
Additionally, when it began to drift off, I triggered the Ctrl-C failsafe I'd written into my Python code, which did nothing. In previous tests (simulated and with the real vehicle) the failsafe had worked every time. My theory is that the Raspberry Pi lost its connection to the Pixhawk; however, I did not get the typical 'no heartbeat' message in the terminal running my Python code. It's worth noting that I often get some drift in loiter mode, but I attribute it to the poor satellite reception at my house (I usually get ~9 sats), and the vehicle normally corrects itself after drifting a few feet. The drift I observed here was also faster than the usual drift. I'd also calibrated my sensors before running this mission.
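One thing I'm considering is watching DroneKit's `Vehicle.last_heartbeat` myself instead of relying on the 'no heartbeat' message. Here is a minimal watchdog sketch, assuming the `vehicle` object from the script below; the 5-second threshold and function name are arbitrary, and of course a LAND command can't reach the Pixhawk if the link is truly dead, so the autopilot-side failsafes are still the real safety net:

```python
# Minimal sketch: companion-side link watchdog.
# Assumes `vehicle` is an already-connected dronekit Vehicle.
import threading
import time
from dronekit import VehicleMode

def heartbeat_watchdog(vehicle, timeout_s=5):
    """Command LAND if the newest MAVLink heartbeat is older than timeout_s."""
    while True:
        # Vehicle.last_heartbeat is the age in seconds of the last heartbeat received
        if vehicle.last_heartbeat > timeout_s:
            print('Link stale for %.1f s - commanding LAND' % vehicle.last_heartbeat)
            vehicle.mode = VehicleMode("LAND")
        time.sleep(1)

watchdog = threading.Thread(target=heartbeat_watchdog, args=(vehicle,))
watchdog.daemon = True  # don't block interpreter exit
watchdog.start()
```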
Here's the Python 2 code I'm running that handles all the heavy lifting (TensorFlow communicates with the Python 2 code over a socket on localhost:12000 to send it the required data):
```python
#!/usr/bin/env python
# -*- coding: utf-8 -*-
"""
© Copyright 2015-2016, 3D Robotics.
simple_goto.py: GUIDED mode "simple goto" example (Copter Only)
Demonstrates how to arm and takeoff in Copter and how to navigate to points using Vehicle.simple_goto.
Full documentation is provided at http://python.dronekit.io/examples/simple_goto.html
"""
from __future__ import print_function  # make print(...) behave the same under Python 2

# Set up option parsing to get socket options
import argparse
import socket
import errno
import time
import signal
import sys
import subprocess
import getpass

from object_detection import tcpclientsimple
from dronekit import connect, VehicleMode, LocationGlobalRelative
from pymavlink import mavutil

parser = argparse.ArgumentParser(description='Print out vehicle state information')
parser.add_argument('--socket', help="use socket?")
parser.add_argument('--connect', help="vehicle connection target string")
args = parser.parse_args()
useSocket = args.socket


def safeReceive(msglen=3):
    """Receive up to msglen bytes, swallowing socket errors."""
    recv = ''
    try:
        recv = server_socket.receive(msglen)
    except socket.error as serr:
        print('encountered error:', serr)
    return recv


def receiveClassName():
    """Read a 3-digit length prefix, then the class name itself."""
    server_socket = tcpclientsimple.socketClient()
    server_socket.connect('', 12000)
    recvlen = int(safeReceive())
    print('recvlen:', recvlen)
    time.sleep(0.25)
    server_socket = tcpclientsimple.socketClient()
    server_socket.connect('', 12000)
    return safeReceive(recvlen)


# Connect to socket and wait for init
if useSocket:
    print('using socket')
    server_socket = tcpclientsimple.socketClient()
    server_socket.connect('', 12000)
    print('connected to socket. waiting for init (0x1)....')
    recv = safeReceive()
    while recv != '0x1':
        server_socket = tcpclientsimple.socketClient()
        server_socket.connect('', 12000)
        time.sleep(0.2)
        if safeReceive() == '0x1':
            break
    print("started")


def signal_handler(sig, frame):
    """Ctrl+C failsafe: command LAND, then shut everything down."""
    print('Ctrl+C!')
    print('Emergency landing...')
    vehicle.mode = VehicleMode("LAND")
    vehicle.flush()
    print('closing vehicle..')
    vehicle.close()
    time.sleep(0.25)
    # Shut down simulator if it was started.
    if sitl is not None:
        print('closing SITL')
        sitl.stop()
    sys.exit(0)


# Use the --connect argument if given, otherwise fall back to the Pi's serial port
connection_string = args.connect or "/dev/ttyAMA0"
sitl = None

# EMERGENCY LAND ON CTRL+C
signal.signal(signal.SIGINT, signal_handler)

# Connect to the Vehicle
print('Connecting to vehicle on: %s' % connection_string)
if connection_string == "udp:127.0.0.1:14550":
    vehicle = connect(connection_string, wait_ready=True, heartbeat_timeout=180)
    print('connected to SITL')
else:
    vehicle = connect(connection_string, wait_ready=True, baud=57600, heartbeat_timeout=180)
    print('connected to real vehicle')


def arm_and_takeoff(aTargetAltitude):
    """
    Arms vehicle and fly to aTargetAltitude.
    """
    print("Basic pre-arm checks")
    # Don't try to arm until autopilot is ready
    while not vehicle.is_armable:
        print(" Waiting for vehicle to initialise...")
        time.sleep(1)

    print("Arming motors")
    # Copter should arm in GUIDED mode
    vehicle.mode = VehicleMode("GUIDED")
    while not vehicle.mode.name == 'GUIDED':  # Wait until mode has changed
        print(" Waiting for mode change ...")
        time.sleep(1)
    vehicle.armed = True

    # Confirm vehicle armed before attempting to take off
    while not vehicle.armed:
        print(" Waiting for arming...")
        time.sleep(1)

    print("Taking off!")
    vehicle.simple_takeoff(aTargetAltitude)  # Take off to target altitude

    # Wait until the vehicle reaches a safe height before processing the goto
    # (otherwise the command after Vehicle.simple_takeoff will execute immediately).
    while True:
        print(" Altitude: ", vehicle.location.global_relative_frame.alt)
        # Break and return from function just below target altitude.
        if vehicle.location.global_relative_frame.alt >= aTargetAltitude * 0.95:
            print("Reached target altitude")
            break
        time.sleep(1)


def goto_position_target_local_ned(north, east, down):
    """
    Send SET_POSITION_TARGET_LOCAL_NED command to request the vehicle fly to a specified
    location in the North, East, Down frame.
    """
    msg = vehicle.message_factory.set_position_target_local_ned_encode(
        0,        # time_boot_ms (not used)
        0, 0,     # target system, target component
        mavutil.mavlink.MAV_FRAME_BODY_NED,  # frame
        0b0000111111111000,  # type_mask (only positions enabled)
        north, east, down,   # x, y, z positions (North, East, Down in the MAV_FRAME_BODY_NED frame)
        0, 0, 0,  # x, y, z velocity in m/s (not used)
        0, 0, 0,  # x, y, z acceleration (not supported yet, ignored in GCS_Mavlink)
        0, 0)     # yaw, yaw_rate (not supported yet, ignored in GCS_Mavlink)
    # send command to vehicle
    vehicle.send_mavlink(msg)
    vehicle.flush()


def set_home(aLocation, aCurrent=1):
    """
    Send MAV_CMD_DO_SET_HOME command to set the Home location to either the current location
    or a specified location.
    For more information see:
    http://copter.ardupilot.com/common-mavlink-mission-command-messages-mav_cmd/#mav_cmd_do_set_home
    """
    # create the MAV_CMD_DO_SET_HOME command
    msg = vehicle.message_factory.command_long_encode(
        0, 0,      # target system, target component
        mavutil.mavlink.MAV_CMD_DO_SET_HOME,  # command
        0,         # confirmation
        aCurrent,  # param 1: 1 to use current position, 2 to use the entered values.
        0, 0, 0,   # params 2-4
        aLocation.lat,
        aLocation.lon,
        aLocation.alt)
    # send command to vehicle
    vehicle.send_mavlink(msg)
    vehicle.flush()


# Wait for camera to initialize
print('Waiting for camera to init (0x2)')
server_socket = tcpclientsimple.socketClient()
server_socket.connect('', 12000)
recv = safeReceive()
while recv != '0x2':
    server_socket = tcpclientsimple.socketClient()
    server_socket.connect('', 12000)
    recv = safeReceive()

# Set home to current location
print("Setting home location")
set_home(vehicle.location.global_relative_frame)
time.sleep(1)

arm_and_takeoff(2)

print("Set default/target airspeed to 3")
vehicle.airspeed = 3

# Sleep for 5 seconds before moving
time.sleep(5)

print('Going to localNED: (1, 1, -1)')
goto_position_target_local_ned(1, 1, -1)

# Allow 7 seconds to move
time.sleep(7)

# Set the channel override, which avoids crashing when the mode is changed to loiter
print("setting throttle channel to 1500 via channel overrides")
vehicle.channels.overrides['3'] = 1500

print("Setting LOITER mode...")
vehicle.mode = VehicleMode("LOITER")
while not vehicle.mode.name == 'LOITER':  # Wait until mode has changed
    print(" Waiting for mode change ...")
    time.sleep(1)

# Open filewriter
f = open("objectLog.txt", "w")
f.write("***Detected Objects*** \n")
f.close()

# Wait for first frame
print('Waiting for first frame (0x3)')
server_socket = tcpclientsimple.socketClient()
server_socket.connect('', 12000)
recv = safeReceive()
while recv != '0x3':
    server_socket = tcpclientsimple.socketClient()
    server_socket.connect('', 12000)
    recv = safeReceive()
vehicle.channels.overrides['3'] = 1500

# Start a timer for loiter mode
start_time = time.time()

# Loiter for 30 seconds, refreshing the throttle override on every pass
print('Mode: ', str(vehicle.mode.name))
print('Press Ctrl+C to land...')
f = open("objectLog.txt", "a")
while time.time() - start_time < 30:
    if useSocket == 'true':
        # wait for signal
        print('Waiting for className receive signal (0x4)')
        server_socket = tcpclientsimple.socketClient()
        server_socket.connect('', 12000)
        recv = safeReceive()
        while recv != '0x4':
            vehicle.channels.overrides['3'] = 1500
            server_socket = tcpclientsimple.socketClient()
            server_socket.connect('', 12000)
            recv = safeReceive()
        # receive the class name: 3-digit length prefix first, then the name
        server_socket = tcpclientsimple.socketClient()
        server_socket.connect('', 12000)
        recvlen = int(safeReceive())
        print('recvlen:', recvlen)
        time.sleep(0.25)
        server_socket = tcpclientsimple.socketClient()
        server_socket.connect('', 12000)
        receivedName = safeReceive(recvlen)
        print(receivedName)
        f.write(receivedName)
        f.write("\n")
    vehicle.channels.overrides['3'] = 1500
    signal.signal(signal.SIGINT, signal_handler)

# Close filewriter
f.close()

print("Setting LAND mode...")
vehicle.mode = VehicleMode("LAND")
vehicle.flush()
print("Completed")

# Close vehicle object before exiting script
print("Close vehicle object")
vehicle.close()

# Shut down simulator if it was started.
if sitl is not None:
    sitl.stop()
```
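One thing worth noting about the script above: the throttle override is only refreshed once per pass through the loiter loop, and the loop blocks on `safeReceive()` in between, so a stall on the TensorFlow side could leave the override stale. A possible fix is to refresh it from a background thread; a rough sketch, assuming the `vehicle` object from the script (the names and the 0.5 s period are arbitrary):

```python
# Sketch: keep the channel-3 (throttle) override fresh from a background thread
# so a blocking socket call in the main loop can't leave it stale.
import threading
import time

def keep_throttle_override(vehicle, stop_event, pwm=1500, period_s=0.5):
    """Re-send the throttle override until stop_event is set."""
    while not stop_event.is_set():
        vehicle.channels.overrides['3'] = pwm  # mid-stick throttle
        time.sleep(period_s)

stop_event = threading.Event()
refresher = threading.Thread(target=keep_throttle_override, args=(vehicle, stop_event))
refresher.daemon = True
refresher.start()
# ... blocking loiter/detection loop runs here ...
stop_event.set()
vehicle.channels.overrides = {}  # clear all overrides when finished
```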
And here's the Python 3 code I'm using that runs TensorFlow:
```python
######## Picamera Object Detection Using Tensorflow Classifier #########
#
# Author: Evan Juras
# Date: 4/15/18
# Description:
# This program uses a TensorFlow classifier to perform object detection.
# It loads the classifier and uses it to perform object detection on a Picamera feed.
# It draws boxes and scores around the objects of interest in each frame from
# the Picamera. It also can be used with a webcam by adding "--usbcam"
# when executing this script from the terminal.
## Some of the code is copied from Google's example at
## https://github.com/tensorflow/models/blob/master/research/object_detection/object_detection_tutorial.ipynb
## and some is copied from Dat Tran's example at
## https://github.com/datitran/object_detector_app/blob/master/object_detection_app.py
## but I changed it to make it more understandable to me.

# Bind to socket 12000
import socket
import time

HOST = ''
PORT = 12000

print('Binding to TCP socket on ', end='')
if HOST == '':
    print('127.0.0.1', end='')
else:
    print(HOST, end='')
print(':', PORT, '.....', sep='')

server_socket = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server_socket.bind((HOST, PORT))


def sendOverSocket(msg, timeout=3):
    """Accept one connection and send msg, giving up after `timeout` seconds."""
    server_socket.settimeout(timeout)
    try:
        server_socket.listen()
        conn, addr = server_socket.accept()
        with conn:
            conn.sendall(bytes(msg, "utf-8"))
    except socket.timeout:
        print('request timed out')


def sendClassNameOverSocket(className):
    # Send the length of the incoming class name over the socket,
    # zero-padded to three digits so the receiver can read a fixed-size prefix
    print('after zfill', str(len(str(className))).zfill(3))
    sendOverSocket(str(len(str(className))).zfill(3), 5)
    # then send the actual class name over the socket
    sendOverSocket(str(className), 5)


# Send init string
print('successfully bound to socket')
print('sending init (0x1)')
sendOverSocket('0x1')

# Import packages
import os
import cv2
import numpy as np
from picamera.array import PiRGBArray
from picamera import PiCamera
import tensorflow as tf
import sys
import math

# Set up camera constants
IM_WIDTH = 1280
IM_HEIGHT = 720
#IM_WIDTH = 640   # Use smaller resolution for
#IM_HEIGHT = 480  # slightly faster framerate

# Select camera type (if user enters --usbcam when calling this script,
# a USB webcam will be used)
camera_type = 'picamera'

# This is needed since the working directory is the object_detection folder.
sys.path.append('..')

# Import utilities
from utils import label_map_util
from utils import visualization_utils as vis_util
print('imported utilities')

# Name of the directory containing the object detection module we're using
MODEL_NAME = 'ssdlite_mobilenet_v2_coco_2018_05_09'

# Grab path to current working directory
CWD_PATH = os.getcwd()

# Path to frozen detection graph .pb file, which contains the model that is used
# for object detection.
PATH_TO_CKPT = os.path.join(CWD_PATH, MODEL_NAME, 'frozen_inference_graph.pb')

# Path to label map file
PATH_TO_LABELS = os.path.join(CWD_PATH, 'data', 'mscoco_label_map.pbtxt')

# Number of classes the object detector can identify
NUM_CLASSES = 90

## Load the label map.
# Label maps map indices to category names, so that when the convolution
# network predicts `5`, we know that this corresponds to `airplane`.
# Here we use internal utility functions, but anything that returns a
# dictionary mapping integers to appropriate string labels would be fine
label_map = label_map_util.load_labelmap(PATH_TO_LABELS)
categories = label_map_util.convert_label_map_to_categories(label_map, max_num_classes=NUM_CLASSES, use_display_name=True)
category_index = label_map_util.create_category_index(categories)

# Load the Tensorflow model into memory.
detection_graph = tf.Graph()
with detection_graph.as_default():
    od_graph_def = tf.GraphDef()
    with tf.gfile.GFile(PATH_TO_CKPT, 'rb') as fid:
        serialized_graph = fid.read()
        od_graph_def.ParseFromString(serialized_graph)
        tf.import_graph_def(od_graph_def, name='')
    sess = tf.Session(graph=detection_graph)
print('loaded tensorflow resources')

# Define input and output tensors (i.e. data) for the object detection classifier
# Input tensor is the image
image_tensor = detection_graph.get_tensor_by_name('image_tensor:0')
# Output tensors are the detection boxes, scores, and classes
# Each box represents a part of the image where a particular object was detected
detection_boxes = detection_graph.get_tensor_by_name('detection_boxes:0')
# Each score represents level of confidence for each of the objects.
# The score is shown on the result image, together with the class label.
detection_scores = detection_graph.get_tensor_by_name('detection_scores:0')
detection_classes = detection_graph.get_tensor_by_name('detection_classes:0')
# Number of objects detected
num_detections = detection_graph.get_tensor_by_name('num_detections:0')

# Initialize frame rate calculation
frame_rate_calc = 1
freq = cv2.getTickFrequency()
font = cv2.FONT_HERSHEY_SIMPLEX

# Initialize camera and perform object detection.
# The camera has to be set up and used differently depending on if it's a
# Picamera or USB webcam.
# I know this is ugly, but I basically copy+pasted the code for the object
# detection loop twice, and made one work for Picamera and the other work
# for USB.


def load_image_into_numpy_array(image):
    (im_width, im_height) = image.size
    return np.array(image.getdata()).reshape(
        (im_height, im_width, 3)).astype(np.uint8)


def GetClassName(data):
    # Return the name of the first (highest-scoring) detected class
    for cl in data:
        return cl['name']


print('initializing camera...')

### Picamera ###
if camera_type == 'picamera':
    # Initialize Picamera and grab reference to the raw capture
    camera = PiCamera()
    camera.resolution = (IM_WIDTH, IM_HEIGHT)
    camera.rotation = 180
    camera.framerate = 10
    rawCapture = PiRGBArray(camera, size=(IM_WIDTH, IM_HEIGHT))
    rawCapture.truncate(0)
    i = 0

    # Send init camera message
    print('telling dronekit that camera is ready (0x2)')
    sendOverSocket('0x2')

    for frame1 in camera.capture_continuous(rawCapture, format="bgr", use_video_port=True):
        t1 = cv2.getTickCount()

        # Acquire frame and expand frame dimensions to have shape: [1, None, None, 3]
        # i.e. a single-column array, where each item in the column has the pixel RGB value
        frame = np.copy(frame1.array)
        frame.setflags(write=1)
        frame_expanded = np.expand_dims(frame, axis=0)

        # Perform the actual detection by running the model with the image as input
        (boxes, scores, classes, num) = sess.run(
            [detection_boxes, detection_scores, detection_classes, num_detections],
            feed_dict={image_tensor: frame_expanded})

        if i == 0:
            # Send first frame indicator
            print('sending first frame indicator (0x3)')
            sendOverSocket('0x3')

        height, width, channels = frame.shape
        print('height ', height, ' width ', width)
        print('detection_boxes', detection_boxes)
        print('boxes', boxes)
        ymin = boxes[0][0][0] * height
        xmin = boxes[0][0][1] * width
        ymax = boxes[0][0][2] * height
        xmax = boxes[0][0][3] * width
        xcenter = (xmin + xmax) / 2
        ycenter = (ymin + ymax) / 2
        refAngle = 0
        pi = math.pi
        # Guard against dividing by zero when the object sits on the image's vertical centerline
        if xcenter != 0.5 * width:
            refAngle = np.arctan((ycenter - (0.5 * height)) / (xcenter - (0.5 * width)))
        # map reference angle if in the wrong quadrant
        #if refAngle > 0 and xcenter < 0 and ycenter < 0:
        #    refAngle -= math.pi
        #if refAngle < 0 and xcenter < 0 and ycenter > 0:
        #    refAngle += math.pi
        i += 1

        # Draw box on center of object
        boxes[0][1][0] = (ycenter / height) - 0.001
        boxes[0][1][1] = (xcenter / width) - 0.001
        boxes[0][1][2] = (ycenter / height) + 0.001
        boxes[0][1][3] = (xcenter / width) + 0.001
        #scores[0] = 0.75
        #classes[0][1] = classes[0][0]

        # Draw box at center of frame
        boxes[0][2][0] = 0.5 - 0.001
        boxes[0][2][1] = 0.5 - 0.001
        boxes[0][2][2] = 0.5 + 0.001
        boxes[0][2][3] = 0.5 + 0.001
        #classes[0][2] = classes[0][0]

        # Draw the results of the detection (aka 'visualize the results')
        vis_util.visualize_boxes_and_labels_on_image_array(
            frame,
            np.squeeze(boxes),
            np.squeeze(classes).astype(np.int32),
            np.squeeze(scores),
            category_index,
            use_normalized_coordinates=True,
            line_thickness=8,
            min_score_thresh=0.40)

        data = [category_index.get(value) for index, value in enumerate(classes[0]) if scores[0, index] > 0.6]

        # Output class name to terminal and file
        print(GetClassName(data))

        # Send class name over the TCP socket
        print('sending class name signal (0x4)')
        sendOverSocket('0x4')
        print('sending object class name over socket')
        print('sending', str(GetClassName(data)))
        if GetClassName(data) is not None:
            sendClassNameOverSocket(GetClassName(data))
        else:
            sendClassNameOverSocket('None')

        cv2.putText(frame, "FPS: {0:.2f}".format(frame_rate_calc), (30, 50), font, 1, (255, 255, 0), 2, cv2.LINE_AA)

        # All the results have been drawn on the frame, so it's time to display it.
        cv2.imshow('Object detector', frame)

        t2 = cv2.getTickCount()
        time1 = (t2 - t1) / freq
        frame_rate_calc = 1 / time1

        # Press 'q' to quit
        if cv2.waitKey(1) == ord('q'):
            break

        rawCapture.truncate(0)

    camera.close()

cv2.destroyAllWindows()
```
Any help with this is much appreciated!
Edit: Sorry for the bad code formatting; I'm not really sure how to get it to display correctly…
Thanks for the advice. I'll rebuild the quad and try to reproduce the issue with a simple takeoff -> loiter -> land mission, writing the Python script's debugging output to a logfile and hoping the Pixhawk's logs don't get corrupted. I'll also video it for good measure so I only have to try to reproduce this once hahaha.
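For the logfile, the standard `logging` module can mirror output to both a file and the terminal; a minimal sketch (the filename is just an example):

```python
# Sketch: send the script's debug output to both a logfile and the console.
import logging

logging.basicConfig(
    filename='mission_debug.log',  # example filename
    filemode='w',
    level=logging.DEBUG,
    format='%(asctime)s %(levelname)s %(message)s')
# Also echo INFO and above to the terminal
console = logging.StreamHandler()
console.setLevel(logging.INFO)
logging.getLogger('').addHandler(console)

logging.info('entering LOITER')  # use in place of bare print() calls
```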
I got a crash in simple loiter as well; it was due to high HDOP and GPS glitches. If you are flying close to surfaces that can reflect GPS signals (buildings) and have bad GPS reception, loiter can result in inconsistent behavior from time to time. I had flights that were fine for 5 minutes and then turned crazy around 10. You should also have an emergency switch on your remote controller (e.g. a switch to Stabilize mode).
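A rough companion-side guard along those lines, using DroneKit's `vehicle.gps_0` (the thresholds are illustrative, not tuned; `eph` is taken straight from GPS_RAW_INT, which is HDOP x 100 on common firmwares):

```python
# Sketch: only allow a mode change when GPS quality looks sane.
# Assumes `vehicle` is an already-connected dronekit Vehicle.
from dronekit import VehicleMode

def gps_looks_ok(vehicle, min_sats=10, max_eph=140):
    gps = vehicle.gps_0
    print('fix=%s sats=%s eph=%s' % (gps.fix_type, gps.satellites_visible, gps.eph))
    return (gps.fix_type is not None and gps.fix_type >= 3      # 3D fix
            and gps.satellites_visible >= min_sats
            and gps.eph is not None and gps.eph <= max_eph)

# e.g. before switching modes:
if gps_looks_ok(vehicle):
    vehicle.mode = VehicleMode("LOITER")
else:
    vehicle.mode = VehicleMode("LAND")
```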