0.24.0
Release date: 2024-10-05 04:25:47
Supervision 0.24.0 is here! We've added many new changes, including the F1 score, enhancements to LineZone, EasyOCR support, NCNN support, and the best Cookbook to date! You can also try out our annotators directly in the browser. Check out the release notes to find out more!
📢 Announcements
- Supervision is celebrating Hacktoberfest! Whether you're a newcomer to open source or a veteran contributor, we welcome you to join us in improving `supervision`. You can grab any issue without an assigned contributor: Hacktoberfest Issues Board. We'll be adding many more issues next week! 🎉
- We recently launched the Model Leaderboard. Come check how the latest models perform! It is also open source, so you can contribute to it as well! 🚀
Changelog
🚀 Added
- Added F1 score as a new metric for detection and segmentation. The F1 score balances precision and recall, providing a single metric for model evaluation. #1521
```python
import supervision as sv
from supervision.metrics import F1Score

predictions = sv.Detections(...)
targets = sv.Detections(...)

f1_metric = F1Score()
f1_result = f1_metric.update(predictions, targets).compute()

print(f1_result)
print(f1_result.f1_50)
print(f1_result.small_objects.f1_50)
```
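For reference, the F1 score is the harmonic mean of precision and recall, so a model only scores well when both are high. The snippet below is not part of the release; it is just a minimal illustration of that arithmetic.

```python
# Illustration only: F1 is the harmonic mean of precision and recall.
precision = 0.8
recall = 0.6
f1 = 2 * precision * recall / (precision + recall)
print(f1)  # ~0.686
```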
- Added new cookbook: Small Object Detection with SAHI. This cookbook provides a detailed guide on using `InferenceSlicer` for small object detection, and is one of the best cookbooks we've ever seen. Thank you @ediardo! #1483
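As a rough sketch (not from the release notes), `InferenceSlicer` wraps any detection callback and runs it over overlapping tiles of the image, merging the per-tile results. The example below assumes an Ultralytics YOLOv8 model as the callback.

```python
import cv2
import numpy as np
import supervision as sv
from ultralytics import YOLO

model = YOLO("yolov8s.pt")

def callback(image_slice: np.ndarray) -> sv.Detections:
    # Run the model on a single tile and convert the result to Detections.
    result = model(image_slice)[0]
    return sv.Detections.from_ultralytics(result)

slicer = sv.InferenceSlicer(callback=callback)

image = cv2.imread("<SOURCE_IMAGE_PATH>")
detections = slicer(image)
```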
- You can now try supervision annotators on your own images. Check out the annotator docs. The preview is powered by an Embedded Workflow. Thank you @joaomarcoscrs! #1533
- Enhanced `LineZoneAnnotator`, allowing the labels to align with the line, even when it's not horizontal. Also, you can now disable the text background and choose to draw labels off-center, which minimizes overlaps for multiple `LineZone` labels. Thank you @jcruz-ferreyra! #854
```python
import supervision as sv
import cv2

image = cv2.imread("<SOURCE_IMAGE_PATH>")
line_zone = sv.LineZone(
    start=sv.Point(0, 100),
    end=sv.Point(50, 200)
)
line_zone_annotator = sv.LineZoneAnnotator(
    text_orient_to_line=True,
    display_text_box=False,
    text_centered=False
)
annotated_frame = line_zone_annotator.annotate(
    frame=image.copy(), line_counter=line_zone
)
sv.plot_image(annotated_frame)
```
https://github.com/user-attachments/assets/d7694b81-26ca-4236-bc66-af3d9e79d367
- Added per-class counting capabilities to `LineZone` and introduced `LineZoneAnnotatorMulticlass` for visualizing the counts per class. This feature allows tracking of individual classes crossing a line, enhancing the flexibility of use cases like traffic monitoring or crowd analysis. #1555
```python
import supervision as sv
import cv2

image = cv2.imread("<SOURCE_IMAGE_PATH>")
line_zone = sv.LineZone(
    start=sv.Point(0, 100),
    end=sv.Point(50, 200)
)
line_zone_annotator = sv.LineZoneAnnotatorMulticlass()
frame = line_zone_annotator.annotate(
    frame=image.copy(), line_zones=[line_zone]
)
sv.plot_image(frame)
```
https://github.com/user-attachments/assets/b109f5bd-6ae7-473b-b4e8-910a869736b4
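The snippet above only draws the counter; the per-class counts accumulate as `LineZone.trigger` is called with tracked detections frame by frame. A rough sketch of that loop (not from the release notes, assuming an Ultralytics model and `ByteTrack` to supply tracker IDs) could look like this:

```python
import supervision as sv
from ultralytics import YOLO

model = YOLO("yolov8s.pt")
tracker = sv.ByteTrack()
line_zone = sv.LineZone(start=sv.Point(0, 100), end=sv.Point(50, 200))
line_zone_annotator = sv.LineZoneAnnotatorMulticlass()

for frame in sv.get_video_frames_generator("<SOURCE_VIDEO_PATH>"):
    detections = sv.Detections.from_ultralytics(model(frame)[0])
    detections = tracker.update_with_detections(detections)
    # trigger() updates the zone's crossing counts, including the new per-class counts
    line_zone.trigger(detections)
    annotated_frame = line_zone_annotator.annotate(
        frame=frame.copy(), line_zones=[line_zone]
    )
```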
- Added `from_easyocr`, allowing integration of OCR results into the supervision framework. EasyOCR is an open-source optical character recognition (OCR) library that can read text from images. Thank you @onuralpszr! #1515
```python
import supervision as sv
import easyocr
import cv2

image = cv2.imread("<SOURCE_IMAGE_PATH>")

reader = easyocr.Reader(["en"])
result = reader.readtext("<SOURCE_IMAGE_PATH>", paragraph=True)
detections = sv.Detections.from_easyocr(result)

box_annotator = sv.BoxAnnotator(color_lookup=sv.ColorLookup.INDEX)
label_annotator = sv.LabelAnnotator(color_lookup=sv.ColorLookup.INDEX)

annotated_image = image.copy()
annotated_image = box_annotator.annotate(scene=annotated_image, detections=detections)
annotated_image = label_annotator.annotate(scene=annotated_image, detections=detections)

sv.plot_image(annotated_image)
```
- Added the `oriented_box_iou_batch` function to `detection.utils`. This function computes Intersection over Union (IoU) for oriented or rotated bounding boxes (OBB), making it easier to evaluate detections with non-axis-aligned boxes. Thank you @patel-zeel! #1502
```python
import numpy as np
import supervision as sv

boxes_true = np.array([[[1, 0], [0, 1], [3, 4], [4, 3]]])
boxes_detection = np.array([[[1, 1], [2, 0], [4, 2], [3, 3]]])
ious = sv.oriented_box_iou_batch(boxes_true, boxes_detection)
print("IoU between true and detected boxes:", ious)
```
Note: the IoU is approximated as mask IoU.
- Extended `PolygonZoneAnnotator` to allow setting opacity when drawing zones, providing enhanced visualization by filling the zone with adjustable transparency. Thank you @grzegorz-roboflow! #1527
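As a rough sketch (not from the release notes, and assuming the new transparency parameter is named `opacity`), a filled zone might be drawn like this:

```python
import cv2
import numpy as np
import supervision as sv

image = cv2.imread("<SOURCE_IMAGE_PATH>")

polygon = np.array([[100, 100], [400, 100], [400, 300], [100, 300]])
polygon_zone = sv.PolygonZone(polygon=polygon)

zone_annotator = sv.PolygonZoneAnnotator(
    zone=polygon_zone,
    color=sv.Color.RED,
    opacity=0.3,  # assumed name of the new transparency parameter
)
annotated_image = zone_annotator.annotate(scene=image.copy())
sv.plot_image(annotated_image)
```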
- Added `from_ncnn`, a connector for NCNN. It is a powerful object detection framework from Tencent, written from the ground up in C++, with no third-party dependencies. Thank you @onuralpszr! #1524
```python
import cv2
from ncnn.model_zoo import get_model
import supervision as sv

image = cv2.imread("<SOURCE_IMAGE_PATH>")
model = get_model(
    "yolov8s",
    target_size=640,
    prob_threshold=0.5,
    nms_threshold=0.45,
    num_threads=4,
    use_gpu=True,
)
result = model(image)
detections = sv.Detections.from_ncnn(result)
```
🌱 Changed
- Supervision now depends on `opencv-python` rather than `opencv-python-headless`. #1530
- Fixed broken or outdated links in documentation and notebooks, improving navigation and ensuring accuracy of references. Thanks to @capjamesg for identifying these issues. #1523
- Enabled and fixed Ruff rules for code formatting, including changes like avoiding unnecessary iterable allocations and using `Optional` for default mutable arguments. #1526
🔧 Fixed
- Updated the COCO 101-point Average Precision algorithm to correctly interpolate precision, providing a more precise calculation of average precision without averaging out intermediate values. #1500
- Resolved miscellaneous issues highlighted when building documentation, mostly whitespace adjustments and type inconsistencies. Updated documentation for clarity, fixed formatting issues, and added an explicit version for `mkdocstrings-python`. #1549
- Clarified documentation around the `overlap_ratio_wh` argument deprecation in `InferenceSlicer`. #1547
✅ No deprecations this time!
❌ Removed
- The `frame_resolution_wh` parameter in `PolygonZone` has been removed due to deprecation.
- The "headless" and "desktop" installation extras have been removed, as they are no longer needed. `pip install supervision[headless]` will install the base library and warn about the non-existent extra.
🏆 Contributors
@onuralpszr (Onuralp SEZER), @joaomarcoscrs (João Marcos Cardoso Ramos da Silva), @jcruz-ferreyra (Juan Cruz), @patel-zeel (Zeel B Patel), @grzegorz-roboflow (Grzegorz Klimaszewski), @Kadermiyanyedi (Kader Miyanyedi), @ediardo (Eddie Ramirez), @CharlesCNorton, @ethanwhite (Ethan White), @josephofiowa (Joseph Nelson), @tibeoh (Thibault Itart-Longueville), @SkalskiP (Piotr Skalski), @LinasKo (Linas Kondrackis)
Thank you to Pexels for providing fantastic images and videos!