## 0.20.0
Release date: 2024-04-25 04:49:04
Latest roboflow/supervision release: 0.24.0 (2024-10-05 04:25:47)
### 🚀 Added
- `sv.KeyPoints` to provide initial support for pose estimation and broader keypoint detection models. (#1128)
- `sv.EdgeAnnotator` and `sv.VertexAnnotator` to enable rendering of results from keypoint detection models. (#1128)
```python
import cv2
import supervision as sv
from ultralytics import YOLO

image = cv2.imread(<SOURCE_IMAGE_PATH>)
model = YOLO('yolov8l-pose')

# Run pose estimation and load the keypoints into supervision.
result = model(image, verbose=False)[0]
keypoints = sv.KeyPoints.from_ultralytics(result)

# Draw the skeleton edges connecting the detected keypoints.
edge_annotator = sv.EdgeAnnotator(color=sv.Color.GREEN, thickness=5)
annotated_image = edge_annotator.annotate(image.copy(), keypoints)
```
```python
import cv2
import supervision as sv
from ultralytics import YOLO

image = cv2.imread(<SOURCE_IMAGE_PATH>)
model = YOLO('yolov8l-pose')

result = model(image, verbose=False)[0]
keypoints = sv.KeyPoints.from_ultralytics(result)

# Draw a dot at each detected keypoint.
vertex_annotator = sv.VertexAnnotator(color=sv.Color.GREEN, radius=10)
annotated_image = vertex_annotator.annotate(image.copy(), keypoints)
```
### 🌱 Changed
- `sv.LabelAnnotator` by adding an additional `corner_radius` argument that allows for rounding the corners of the bounding box. (#1037)
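A minimal sketch of the rounded corners, following the Ultralytics pattern used in the examples above; the detection checkpoint and the `corner_radius=10` value are illustrative choices, with the keyword taken exactly as named in this note.

```python
import cv2
import supervision as sv
from ultralytics import YOLO

image = cv2.imread(<SOURCE_IMAGE_PATH>)
model = YOLO('yolov8l')

result = model(image, verbose=False)[0]
detections = sv.Detections.from_ultralytics(result)

# corner_radius (as named in this release note) rounds the corners
# of the label box; 10 is an illustrative value.
label_annotator = sv.LabelAnnotator(corner_radius=10)
annotated_image = label_annotator.annotate(image.copy(), detections)
```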
- `sv.PolygonZone` such that the `frame_resolution_wh` argument is no longer required to initialize `sv.PolygonZone`. (#1109)
> [!WARNING]
> The `frame_resolution_wh` parameter in `sv.PolygonZone` is deprecated and will be removed in `supervision-0.24.0`.
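A minimal sketch of the simplified initialization; the square polygon and the detection source are illustrative, and `zone.trigger` returns a boolean array marking which detections fall inside the zone.

```python
import cv2
import numpy as np
import supervision as sv
from ultralytics import YOLO

image = cv2.imread(<SOURCE_IMAGE_PATH>)
model = YOLO('yolov8l')

result = model(image, verbose=False)[0]
detections = sv.Detections.from_ultralytics(result)

# frame_resolution_wh is no longer required; the polygon alone defines the zone.
polygon = np.array([[100, 100], [400, 100], [400, 400], [100, 400]])
zone = sv.PolygonZone(polygon=polygon)

# Boolean mask with one entry per detection: True if the detection is in the zone.
in_zone = zone.trigger(detections=detections)
```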
- `sv.get_polygon_center` to calculate a more accurate polygon centroid. (#1084)
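A minimal sketch of the centroid helper; the L-shaped polygon is an illustrative case where a plain average of the vertices differs noticeably from the true centroid.

```python
import numpy as np
import supervision as sv

# An L-shaped polygon: the plain vertex mean lands near (1.67, 1.67),
# while the area centroid is near (1.36, 1.36).
polygon = np.array([[0, 0], [4, 0], [4, 1], [1, 1], [1, 4], [0, 4]])

center = sv.get_polygon_center(polygon=polygon)  # returns an sv.Point
print(center.x, center.y)
```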
- `sv.Detections.from_transformers` by adding support for Transformers segmentation models and extracting class names. (#1069)
```python
import torch
import supervision as sv
from PIL import Image
from transformers import DetrImageProcessor, DetrForSegmentation

processor = DetrImageProcessor.from_pretrained("facebook/detr-resnet-50-panoptic")
model = DetrForSegmentation.from_pretrained("facebook/detr-resnet-50-panoptic")

image = Image.open(<SOURCE_IMAGE_PATH>)
inputs = processor(images=image, return_tensors="pt")

with torch.no_grad():
    outputs = model(**inputs)

# Post-process to the original image resolution, then load the
# segmentation results (including class names) into supervision.
width, height = image.size
target_size = torch.tensor([[height, width]])
results = processor.post_process_segmentation(
    outputs=outputs, target_sizes=target_size)[0]
detections = sv.Detections.from_transformers(results, id2label=model.config.id2label)

mask_annotator = sv.MaskAnnotator()
label_annotator = sv.LabelAnnotator(text_position=sv.Position.CENTER)

annotated_image = mask_annotator.annotate(
    scene=image, detections=detections)
annotated_image = label_annotator.annotate(
    scene=annotated_image, detections=detections)
```
### 🛠️ Fixed
- `sv.ByteTrack.update_with_detections`, which was removing segmentation masks while tracking. Now `ByteTrack` can be used alongside segmentation models. (#787)
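A minimal sketch of the fixed behavior, again following the Ultralytics pattern; the `yolov8l-seg` checkpoint is an illustrative choice.

```python
import cv2
import supervision as sv
from ultralytics import YOLO

image = cv2.imread(<SOURCE_IMAGE_PATH>)
model = YOLO('yolov8l-seg')

result = model(image, verbose=False)[0]
detections = sv.Detections.from_ultralytics(result)

tracker = sv.ByteTrack()

# With this fix, detections.mask survives tracking while tracker_id
# is populated, so segmentation and tracking can be combined.
detections = tracker.update_with_detections(detections)
```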
### 🏆 Contributors
@onuralpszr (Onuralp SEZER), @rolson24 (Raif Olson), @xaristeidou (Christoforos Aristeidou), @jeslinpjames (Jeslin P James), @Griffin-Sullivan (Griffin Sullivan), @PawelPeczek-Roboflow (Paweł Pęczek), @pirnerjonas (Jonas Pirner), @sharingan000, @macc-n, @LinasKo (Linas Kondrackis), @SkalskiP (Piotr Skalski)