How to use the dlib.train_simple_object_detector function in dlib

To help you get started, we’ve selected a few dlib examples based on popular ways the library is used in public projects.
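As the snippets below show, dlib.train_simple_object_detector accepts either in-memory training data (a list of images plus a parallel list of lists of dlib.rectangle boxes) or the path to an imglab-style XML file, together with a simple_object_detector_training_options object. The following is only a minimal sketch of the two call styles; the file names and box coordinates are placeholders, not taken from the projects below.

import dlib

options = dlib.simple_object_detector_training_options()

# Style 1: in-memory images and boxes; returns a detector object you can save.
img = dlib.load_rgb_image("example.jpg")  # placeholder image path
boxes = [[dlib.rectangle(left=10, top=10, right=90, bottom=90)]]
detector = dlib.train_simple_object_detector([img], boxes, options)
detector.save("detector.svm")

# Style 2: an imglab XML file; the trained detector is written straight to disk.
dlib.train_simple_object_detector("training.xml", "detector.svm", options)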


github apollos / opencv-practice / object_detection_made_easy / train_detector.py
	imageID = imageID.replace(".jpg", "")
	p = "{}/annotation_{}.mat".format(args["annotations"], imageID)
	annotations = loadmat(p)["box_coord"]

	# loop over the annotations and add each annotation to the list of bounding
	# boxes (int() replaces Python 2's long(), which no longer exists in Python 3)
	bb = [dlib.rectangle(left=int(x), top=int(y), right=int(w), bottom=int(h))
			for (y, h, x, w) in annotations]
	boxes.append(bb)

	# add the image to the list of images
	images.append(io.imread(imagePath))

# train the object detector
print("[INFO] training detector...")
detector = dlib.train_simple_object_detector(images, boxes, options)

# dump the classifier to file
print("[INFO] dumping classifier to file...")
detector.save(args["output"])

# visualize the learned HOG filter of the detector
win = dlib.image_window()
win.set_image(detector)
dlib.hit_enter_to_continue()
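The snippet above trains and saves the detector but never runs it on an image. A minimal sketch of loading the saved file and detecting objects, reusing the snippet's args["output"] path and imagePath variable, might look like this (an assumption about how the detector would be used, not part of the original script):

# load the serialized detector and run it on a test image
detector = dlib.simple_object_detector(args["output"])
image = io.imread(imagePath)
rects = detector(image)  # returns a list of dlib.rectangle detections
for r in rects:
	print("found object at [{}, {}, {}, {}]".format(
		r.left(), r.top(), r.right(), r.bottom()))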
github mit-nlp / MITIE / python_examples / train_object_detector.py
# You just need to put your images into a list.
images = [io.imread(faces_folder + '/2008_002506.jpg'),
          io.imread(faces_folder + '/2009_004587.jpg')]
# Then for each image you make a list of rectangles which give the pixel
# locations of the edges of the boxes.
boxes_img1 = ([dlib.rectangle(left=329, top=78, right=437, bottom=186),
               dlib.rectangle(left=224, top=95, right=314, bottom=185),
               dlib.rectangle(left=125, top=65, right=214, bottom=155)])
boxes_img2 = ([dlib.rectangle(left=154, top=46, right=228, bottom=121),
               dlib.rectangle(left=266, top=280, right=328, bottom=342)])
# Then aggregate those lists of boxes into one big list and call
# train_simple_object_detector().
boxes = [boxes_img1, boxes_img2]

detector2 = dlib.train_simple_object_detector(images, boxes, options)
# We could save this detector to disk by uncommenting the following.
#detector2.save('detector2.svm')

# Now let's look at its HOG filter!
win_det.set_image(detector2)
dlib.hit_enter_to_continue()

# Note that you don't have to use the XML based input to
# test_simple_object_detector().  If you have already loaded your training
# images and bounding boxes for the objects then you can call it as shown
# below.
print("\nTraining accuracy: {}".format(
    dlib.test_simple_object_detector(images, boxes, detector2)))
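Note that this snippet references an options object created earlier in the same example file; a hedged reconstruction of typical settings for this face-detection example (the exact values are assumptions) would be:

options = dlib.simple_object_detector_training_options()
options.add_left_right_image_flips = True  # faces are left/right symmetric
options.C = 5             # SVM C parameter; larger values fit the training data harder
options.num_threads = 4
options.be_verbose = True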
github menpo / menpodetect / menpodetect / dlib / train.py
    image_pixels = [menpo_image_to_uint8(im) for im in images]

    if num_threads is None:
        import multiprocessing

        num_threads = multiprocessing.cpu_count()

    options = dlib.simple_object_detector_training_options()
    options.epsilon = epsilon
    options.add_left_right_image_flips = add_left_right_image_flips
    options.be_verbose = verbose_stdout
    options.C = C
    options.detection_window_size = detection_window_size
    options.num_threads = num_threads

    return dlib.train_simple_object_detector(image_pixels, rectangles, options)
github marando / pycatfd / lib / Trainer.py
def train_object_detector(self):
        self.__print_training_message('object detector')
        opt = dlib.simple_object_detector_training_options()
        opt.add_left_right_image_flips = True
        opt.C = 5
        opt.num_threads = self.cpu_cores
        opt.be_verbose = True
        opt.detection_window_size = self.window_size ** 2
        dlib.train_simple_object_detector(self.xml, DETECTOR_SVM, opt)
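Here detection_window_size is the area of the sliding window in pixels (dlib's default is 6400, roughly an 80x80 window), which is why the snippet squares self.window_size. A small sketch with an explicit value, assuming objects around 64x64 pixels:

opt = dlib.simple_object_detector_training_options()
opt.detection_window_size = 64 * 64  # target sliding-window area in pixels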
github mit-nlp / MITIE / python_examples / train_object_detector.py
# Tell the code how many CPU cores your computer has for the fastest training.
options.num_threads = 4
options.be_verbose = True


training_xml_path = os.path.join(faces_folder, "training.xml")
testing_xml_path = os.path.join(faces_folder, "testing.xml")
# This function does the actual training.  It will save the final detector to
# detector.svm.  The input is an XML file that lists the images in the training
# dataset and also contains the positions of the face boxes.  To create your
# own XML files you can use the imglab tool which can be found in the
# tools/imglab folder.  It is a simple graphical tool for labeling objects in
# images with boxes.  To see how to use it read the tools/imglab/README.txt
# file.  But for this example, we just use the training.xml file included with
# dlib.
dlib.train_simple_object_detector(training_xml_path, "detector.svm", options)



# Now that we have a face detector we can test it.  The first statement tests
# it on the training data.  It will print the precision, recall, and then the
# average precision.
print("")  # Print blank line to create gap from previous output
print("Training accuracy: {}".format(
    dlib.test_simple_object_detector(training_xml_path, "detector.svm")))
# However, to get an idea if it really worked without overfitting we need to
# run it on images it wasn't trained on.  The next line does this.  Happily, we
# see that the object detector works perfectly on the testing images.
print("Testing accuracy: {}".format(
    dlib.test_simple_object_detector(testing_xml_path, "detector.svm")))
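Once trained and evaluated, the detector saved in detector.svm can be loaded and run on new images. The sketch below assumes a folder of .jpg test images and reuses the faces_folder, os, and io names from this example; it is an illustration, not part of the original script.

import glob

detector = dlib.simple_object_detector("detector.svm")
win = dlib.image_window()
for f in glob.glob(os.path.join(faces_folder, "*.jpg")):
    img = io.imread(f)
    dets = detector(img)  # list of dlib.rectangle detections
    win.clear_overlay()
    win.set_image(img)
    win.add_overlay(dets)
    dlib.hit_enter_to_continue()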