roboflow/supervision 0.24.0

Released: 2024-10-05 04:25:47

Supervision 0.24.0 is here! This release brings many new features and improvements, including an F1 score metric, enhancements to LineZone, EasyOCR support, NCNN support, and the best Cookbook to date! You can also try out our annotators directly in the browser. Check out the release notes to find out more!

📢 Announcements

Changelog

🚀 Added

import supervision as sv
from supervision.metrics import F1Score

# Model predictions and ground-truth targets as sv.Detections
predictions = sv.Detections(...)
targets = sv.Detections(...)

# Accumulate with update(), then compute the final result
f1_metric = F1Score()
f1_result = f1_metric.update(predictions, targets).compute()

print(f1_result)
print(f1_result.f1_50)                # F1 at IoU threshold 0.50
print(f1_result.small_objects.f1_50)  # same, restricted to small objects

[Image: SAHI-principle InferenceSlicer in action]

[Image: embedded workflow example]
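
For context on the slicer shown above, here is a minimal sketch of how a SAHI-style InferenceSlicer is typically wired up. The callback below is a placeholder that returns empty detections; in practice it would run your model on each slice and convert the output to sv.Detections.

import numpy as np
import supervision as sv
import cv2

image = cv2.imread("<SOURCE_IMAGE_PATH>")

def callback(image_slice: np.ndarray) -> sv.Detections:
    # Placeholder: run your model on the slice and return sv.Detections,
    # e.g. sv.Detections.from_ultralytics(model(image_slice)[0])
    return sv.Detections.empty()

slicer = sv.InferenceSlicer(callback=callback)
detections = slicer(image)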

import supervision as sv
import cv2

image = cv2.imread("<SOURCE_IMAGE_PATH>")

# Define the counting line by its start and end points
line_zone = sv.LineZone(
    start=sv.Point(0, 100),
    end=sv.Point(50, 200)
)

# New LineZoneAnnotator options: orient the label along the line,
# hide the text box, and place the label away from the line center
line_zone_annotator = sv.LineZoneAnnotator(
    text_orient_to_line=True,
    display_text_box=False,
    text_centered=False
)

annotated_frame = line_zone_annotator.annotate(
    frame=image.copy(), line_counter=line_zone
)

sv.plot_image(annotated_frame)

https://github.com/user-attachments/assets/d7694b81-26ca-4236-bc66-af3d9e79d367

import supervision as sv
import cv2

image = cv2.imread("<SOURCE_IMAGE_PATH>")

line_zone = sv.LineZone(
    start=sv.Point(0, 100),
    end=sv.Point(50, 200)
)

# New LineZoneAnnotatorMulticlass draws per-class crossing counts
# for one or more line zones
line_zone_annotator = sv.LineZoneAnnotatorMulticlass()

annotated_frame = line_zone_annotator.annotate(
    frame=image.copy(), line_zones=[line_zone]
)

sv.plot_image(annotated_frame)

https://github.com/user-attachments/assets/b109f5bd-6ae7-473b-b4e8-910a869736b4
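
The annotators above only draw the line; counting happens when the zone is triggered with tracked detections. A minimal sketch, assuming an Ultralytics YOLO model and ByteTrack as illustrative choices (neither is required by LineZone):

import cv2
import supervision as sv
from ultralytics import YOLO  # illustrative model choice

model = YOLO("yolov8n.pt")
tracker = sv.ByteTrack()
line_zone = sv.LineZone(start=sv.Point(0, 100), end=sv.Point(50, 200))

frame = cv2.imread("<SOURCE_IMAGE_PATH>")
detections = sv.Detections.from_ultralytics(model(frame)[0])
detections = tracker.update_with_detections(detections)

# trigger() reports which detections crossed the line and updates the counters
crossed_in, crossed_out = line_zone.trigger(detections)
print(line_zone.in_count, line_zone.out_count)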

import supervision as sv
import easyocr
import cv2

image = cv2.imread("<SOURCE_IMAGE_PATH>")

# Run EasyOCR, then convert its results with the new from_easyocr connector
reader = easyocr.Reader(["en"])
result = reader.readtext("<SOURCE_IMAGE_PATH>", paragraph=True)
detections = sv.Detections.from_easyocr(result)

box_annotator = sv.BoxAnnotator(color_lookup=sv.ColorLookup.INDEX)
label_annotator = sv.LabelAnnotator(color_lookup=sv.ColorLookup.INDEX)

annotated_image = image.copy()
annotated_image = box_annotator.annotate(scene=annotated_image, detections=detections)
annotated_image = label_annotator.annotate(scene=annotated_image, detections=detections)

sv.plot_image(annotated_image)

[Image: EasyOCR example]

import numpy as np
import supervision as sv

# Oriented boxes are given as four (x, y) corner points
boxes_true = np.array([[[1, 0], [0, 1], [3, 4], [4, 3]]])
boxes_detection = np.array([[[1, 1], [2, 0], [4, 2], [3, 3]]])

ious = sv.oriented_box_iou_batch(boxes_true, boxes_detection)
print("IoU between true and detected boxes:", ious)

Note: the IoU is approximated as mask IoU.

[Image: approximated OBB overlap]
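
To illustrate what the mask-IoU approximation means, here is a hedged sketch (not supervision's internal code) that rasterizes two oriented boxes onto a small grid and computes the IoU of the resulting binary masks; the grid size and helper name are illustrative.

import numpy as np
import cv2

def approx_obb_iou(corners_a: np.ndarray, corners_b: np.ndarray, size: int = 200) -> float:
    # Rasterize each oriented box (4 corner points) onto a small binary mask
    scale = (size - 1) / max(corners_a.max(), corners_b.max(), 1)
    mask_a = np.zeros((size, size), dtype=np.uint8)
    mask_b = np.zeros((size, size), dtype=np.uint8)
    cv2.fillPoly(mask_a, [(corners_a * scale).astype(np.int32)], 1)
    cv2.fillPoly(mask_b, [(corners_b * scale).astype(np.int32)], 1)
    # Mask IoU: intersection area over union area
    intersection = np.logical_and(mask_a, mask_b).sum()
    union = np.logical_or(mask_a, mask_b).sum()
    return float(intersection) / union if union > 0 else 0.0

print(approx_obb_iou(
    np.array([[1, 0], [0, 1], [3, 4], [4, 3]], dtype=float),
    np.array([[1, 1], [2, 0], [4, 2], [3, 3]], dtype=float),
))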

import cv2
from ncnn.model_zoo import get_model
import supervision as sv

image = cv2.imread("<SOURCE_IMAGE_PATH>")

# Load a YOLOv8 model from the NCNN model zoo
model = get_model(
    "yolov8s",
    target_size=640,
    prob_threshold=0.5,
    nms_threshold=0.45,
    num_threads=4,
    use_gpu=True,
)

# Convert NCNN results with the new from_ncnn connector
result = model(image)
detections = sv.Detections.from_ncnn(result)
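
As with the other connectors, the resulting detections can be drawn with the standard annotators. A short sketch continuing from the snippet above (image and detections already defined there):

# Draw the NCNN detections on the original image
box_annotator = sv.BoxAnnotator()
annotated_image = box_annotator.annotate(scene=image.copy(), detections=detections)
sv.plot_image(annotated_image)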

🌱 Changed

🔧 Fixed

✅ No deprecations this time!

❌ Removed

🏆 Contributors

@onuralpszr (Onuralp SEZER), @joaomarcoscrs (João Marcos Cardoso Ramos da Silva), @jcruz-ferreyra (Juan Cruz), @patel-zeel (Zeel B Patel), @grzegorz-roboflow (Grzegorz Klimaszewski), @Kadermiyanyedi (Kader Miyanyedi), @ediardo (Eddie Ramirez), @CharlesCNorton, @ethanwhite (Ethan White), @josephofiowa (Joseph Nelson), @tibeoh (Thibault Itart-Longueville), @SkalskiP (Piotr Skalski), @LinasKo (Linas Kondrackis)

Thank you to Pexels for providing fantastic images and videos!
