# 0.23.0

Released: 2024-08-29 01:45:41
Latest roboflow/supervision release: 0.24.0 (2024-10-05 04:25:47)
## 🚀 Added
- `BackgroundOverlayAnnotator` annotates the background of your image! #1385

https://github.com/user-attachments/assets/c1f3ce11-08c1-4648-9176-4e7920b91a8a

(video by Pexels)
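A minimal usage sketch, assuming the standard annotator interface (`annotate(scene, detections)`); the `color` and `opacity` parameter names below are assumptions, so check the API reference for the exact signature:

```python
import cv2
import supervision as sv
from inference import get_model

image = cv2.imread("dog.jpeg")
model = get_model(model_id="yolov8n-640")
detections = sv.Detections.from_inference(model.infer(image)[0])

# color / opacity are assumed parameter names; verify against the docs
background_annotator = sv.BackgroundOverlayAnnotator(
    color=sv.Color.BLACK,
    opacity=0.5,
)
annotated_image = background_annotator.annotate(
    scene=image.copy(),
    detections=detections,
)
```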
- We're introducing metrics, which currently support `xyxy` boxes and masks. Over the next few releases, `supervision` will focus on adding more metrics, allowing you to evaluate your model performance. We plan to support not just boxes and masks, but oriented bounding boxes as well! #1442

> [!TIP]
> Help in implementing metrics is very welcome! Keep an eye on our issue board if you'd like to contribute!
```python
import supervision as sv
from supervision.metrics import MeanAveragePrecision

predictions = sv.Detections(...)
targets = sv.Detections(...)

map_metric = MeanAveragePrecision()
map_result = map_metric.update(predictions, targets).compute()

print(map_result)
print(map_result.map50_95)
print(map_result.large_objects.map50_95)
map_result.plot()
```
Here's a very basic way to compare model results:

<details>
<summary>📊 Example code</summary>
```python
import supervision as sv
from supervision.metrics import MeanAveragePrecision
from inference import get_model
import matplotlib.pyplot as plt

# !wget https://media.roboflow.com/notebooks/examples/dog.jpeg
image = "dog.jpeg"

model_1 = get_model("yolov8n-640")
model_2 = get_model("yolov8s-640")
model_3 = get_model("yolov8m-640")
model_4 = get_model("yolov8l-640")

results_1 = model_1.infer(image)[0]
results_2 = model_2.infer(image)[0]
results_3 = model_3.infer(image)[0]
results_4 = model_4.infer(image)[0]

detections_1 = sv.Detections.from_inference(results_1)
detections_2 = sv.Detections.from_inference(results_2)
detections_3 = sv.Detections.from_inference(results_3)
detections_4 = sv.Detections.from_inference(results_4)

# Treat the largest model's detections as pseudo ground truth
# and score the smaller models against it.
map_n_metric = MeanAveragePrecision().update([detections_1], [detections_4]).compute()
map_s_metric = MeanAveragePrecision().update([detections_2], [detections_4]).compute()
map_m_metric = MeanAveragePrecision().update([detections_3], [detections_4]).compute()

labels = ["YOLOv8n", "YOLOv8s", "YOLOv8m"]
map_values = [map_n_metric.map50_95, map_s_metric.map50_95, map_m_metric.map50_95]

plt.title("YOLOv8 Model Comparison")
plt.bar(labels, map_values)
ax = plt.gca()
ax.set_ylim([0, 1])
plt.show()
```

</details>
- Added the `IconAnnotator`, which allows you to place icons on your images. #930

https://github.com/user-attachments/assets/ff80acf5-67f2-4c20-a3fe-b63cac07ae31

(Video by Pexels, icons by Icons8)
```python
import cv2
import supervision as sv
from inference import get_model

# Load the image as an array so that `image.copy()` below works
# (a path string has no `.copy()` method).
image = cv2.imread(<SOURCE_IMAGE_PATH>)
icon_dog = <DOG_PNG_PATH>
icon_cat = <CAT_PNG_PATH>

model = get_model(model_id="yolov8n-640")
results = model.infer(image)[0]
detections = sv.Detections.from_inference(results)

# Pick an icon for each detection based on its class name.
icon_paths = []
for class_name in detections.data["class_name"]:
    if class_name == "dog":
        icon_paths.append(icon_dog)
    elif class_name == "cat":
        icon_paths.append(icon_cat)
    else:
        icon_paths.append("")

icon_annotator = sv.IconAnnotator()
annotated_frame = icon_annotator.annotate(
    scene=image.copy(),
    detections=detections,
    icon_path=icon_paths
)
```
- Segment Anything 2 was released this month. While you can load its results via `from_sam`, we've also added support to `from_ultralytics` for loading the results if you ran it with Ultralytics. #1354
```python
import cv2
import supervision as sv
from ultralytics import SAM

image = cv2.imread("...")
model = SAM("mobile_sam.pt")
results = model(image, bboxes=[[588, 163, 643, 220]])
detections = sv.Detections.from_ultralytics(results[0])

polygon_annotator = sv.PolygonAnnotator()
mask_annotator = sv.MaskAnnotator()
annotated_image = mask_annotator.annotate(image.copy(), detections)
annotated_image = polygon_annotator.annotate(annotated_image, detections)

sv.plot_image(annotated_image, (12, 12))
```
SAM2 with our annotators:
https://github.com/user-attachments/assets/6a98d651-2596-43e9-b485-ea6f0de4fffa
- `TriangleAnnotator` and `DotAnnotator` contour color customization #1458 (see the sketch below)
- `VertexLabelAnnotator` for keypoints now has a `text_color` parameter #1409
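A hedged sketch of the contour customization; the `outline_color` and `outline_thickness` parameter names are assumptions based on PR #1458, so check the annotator docs for the exact names:

```python
import cv2
import supervision as sv
from inference import get_model

image = cv2.imread("dog.jpeg")
model = get_model(model_id="yolov8n-640")
detections = sv.Detections.from_inference(model.infer(image)[0])

# outline_color / outline_thickness are assumed parameter names
triangle_annotator = sv.TriangleAnnotator(
    color=sv.Color.YELLOW,
    outline_color=sv.Color.BLACK,
    outline_thickness=2,
)
annotated_image = triangle_annotator.annotate(
    scene=image.copy(),
    detections=detections,
)
```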
## 🌱 Changed
- Updated `sv.Detections.from_transformers` to support the `transformers v5` functions. This includes the `DetrImageProcessor` methods `post_process_object_detection`, `post_process_panoptic_segmentation`, `post_process_semantic_segmentation`, and `post_process_instance_segmentation`. #1386
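As an illustration, here is one way to load `post_process_object_detection` output, sketched against the DETR checkpoint `facebook/detr-resnet-50` (the model choice and threshold are illustrative, not part of this release):

```python
import torch
import supervision as sv
from PIL import Image
from transformers import DetrForObjectDetection, DetrImageProcessor

image = Image.open("dog.jpeg")
processor = DetrImageProcessor.from_pretrained("facebook/detr-resnet-50")
model = DetrForObjectDetection.from_pretrained("facebook/detr-resnet-50")

inputs = processor(images=image, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# Post-process to xyxy boxes at the original image resolution.
target_sizes = torch.tensor([image.size[::-1]])
results = processor.post_process_object_detection(
    outputs, threshold=0.5, target_sizes=target_sizes
)[0]

detections = sv.Detections.from_transformers(results, id2label=model.config.id2label)
```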
- `InferenceSlicer` now features an `overlap_wh` parameter, making it easier to compute slice sizes when handling overlapping slices. #1434 (The original text said `overlap_ratio_wh` here, but that parameter is deprecated below; `overlap_wh` is the new one.)
```python
import cv2
import numpy as np
import supervision as sv
from inference import get_model

image_with_small_objects = cv2.imread("...")
model = get_model("yolov8n-640")

def callback(image_slice: np.ndarray) -> sv.Detections:
    print("image_slice.shape:", image_slice.shape)
    result = model.infer(image_slice)[0]
    return sv.Detections.from_inference(result)

slicer = sv.InferenceSlicer(
    callback=callback,
    slice_wh=(128, 128),
    overlap_wh=(26, 26),  # overlap in pixels, roughly the old 0.2 ratio of a 128 px slice
)

detections = slicer(image_with_small_objects)
```
## 🛠️ Fixed
- Annotator type fixes #1448
- A new way of seeking to a specific video frame, for cases where other methods don't work #1348
- `plot_image` now clearly states the size is in inches. #1424
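For reference, a minimal call; `size` is the matplotlib figure size in inches:

```python
import cv2
import supervision as sv

image = cv2.imread("dog.jpeg")

# size is the figure size in inches, not pixels
sv.plot_image(image, size=(12, 12))
```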
## ⚠️ Deprecated
- `overlap_filter_strategy` in `InferenceSlicer.__init__` is deprecated and will be removed in `supervision-0.27.0`. Use `overlap_filter` instead.
- `overlap_ratio_wh` in `InferenceSlicer.__init__` is deprecated and will be removed in `supervision-0.27.0`. Use `overlap_wh` instead.
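A before/after migration sketch; the callback and values are illustrative, and `sv.OverlapFilter` is the enum both the old and new filter parameters accept:

```python
import numpy as np
import supervision as sv

def callback(image_slice: np.ndarray) -> sv.Detections:
    ...  # run your model on the slice

# Before (deprecated, removed in supervision-0.27.0):
slicer = sv.InferenceSlicer(
    callback=callback,
    overlap_ratio_wh=(0.2, 0.2),
    overlap_filter_strategy=sv.OverlapFilter.NON_MAX_SUPPRESSION,
)

# After:
slicer = sv.InferenceSlicer(
    callback=callback,
    overlap_wh=(26, 26),  # pixels, not a ratio
    overlap_filter=sv.OverlapFilter.NON_MAX_SUPPRESSION,
)
```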
## ❌ Removed
- The `track_buffer`, `track_thresh`, and `match_thresh` parameters in `ByteTrack` are deprecated and were removed as of `supervision-0.23.0`. Use `lost_track_buffer`, `track_activation_threshold`, and `minimum_matching_threshold` instead.
- The `triggering_position` parameter in `sv.PolygonZone` was removed as of `supervision-0.23.0`. Use `triggering_anchors` instead.
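A migration sketch for the renamed `ByteTrack` parameters (the values shown are just illustrative defaults):

```python
import supervision as sv

# Before (no longer accepted as of supervision-0.23.0):
# tracker = sv.ByteTrack(track_buffer=30, track_thresh=0.25, match_thresh=0.8)

# After:
tracker = sv.ByteTrack(
    lost_track_buffer=30,
    track_activation_threshold=0.25,
    minimum_matching_threshold=0.8,
)
```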
## 🏆 Contributors
@shaddu, @onuralpszr (Onuralp SEZER), @Kadermiyanyedi (Kader Miyanyedi), @xaristeidou (Christoforos Aristeidou), @Gk-rohan (Rohan Gupta), @Bhavay-2001 (Bhavay Malhotra), @arthurcerveira (Arthur Cerveira), @J4BEZ (Ju Hoon Park), @venkatram-dev, @eric220, @capjamesg (James), @yeldarby (Brad Dwyer), @SkalskiP (Piotr Skalski), @LinasKo (LinasKo)