0.19.0
Released: 2024-03-15 20:04:59
Latest roboflow/supervision release: 0.24.0 (2024-10-05 04:25:47)
## 🧑‍🍳 Cookbooks
Supervision Cookbooks - a curated, open-source collection crafted by the community, offering practical examples, comprehensive guides, and walkthroughs for using Supervision alongside diverse computer vision models. (#860)
## 🚀 Added
- `sv.CSVSink`, allowing for straightforward saving of image, video, or stream inference results to a `.csv` file. (#818)
```python
import supervision as sv
from ultralytics import YOLO

model = YOLO(<SOURCE_MODEL_PATH>)
csv_sink = sv.CSVSink(<RESULT_CSV_FILE_PATH>)
frames_generator = sv.get_video_frames_generator(<SOURCE_VIDEO_PATH>)

with csv_sink:
    for frame in frames_generator:
        result = model(frame)[0]
        detections = sv.Detections.from_ultralytics(result)
        csv_sink.append(detections, custom_data={<CUSTOM_LABEL>: <CUSTOM_DATA>})
```
https://github.com/roboflow/supervision/assets/26109316/621588f9-69a0-44fe-8aab-ab4b0ef2ea1b
- `sv.JSONSink`, allowing for straightforward saving of image, video, or stream inference results to a `.json` file. (#819)
```python
import supervision as sv
from ultralytics import YOLO

model = YOLO(<SOURCE_MODEL_PATH>)
json_sink = sv.JSONSink(<RESULT_JSON_FILE_PATH>)
frames_generator = sv.get_video_frames_generator(<SOURCE_VIDEO_PATH>)

with json_sink:
    for frame in frames_generator:
        result = model(frame)[0]
        detections = sv.Detections.from_ultralytics(result)
        json_sink.append(detections, custom_data={<CUSTOM_LABEL>: <CUSTOM_DATA>})
```
- `sv.mask_iou_batch`, allowing computation of Intersection over Union (IoU) between two sets of masks. (#847)
- `sv.mask_non_max_suppression`, allowing Non-Maximum Suppression (NMS) to be performed on segmentation predictions. (#847)
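To illustrate what these utilities compute, here is a minimal pure-NumPy sketch of pairwise mask IoU (a simplified stand-in, not the library implementation; prefer `sv.mask_iou_batch` in practice):

```python
import numpy as np

def mask_iou(masks_a: np.ndarray, masks_b: np.ndarray) -> np.ndarray:
    """Pairwise IoU between two sets of boolean masks.

    masks_a: (N, H, W), masks_b: (M, H, W) -> (N, M) IoU matrix.
    """
    a = masks_a.reshape(len(masks_a), -1).astype(bool)
    b = masks_b.reshape(len(masks_b), -1).astype(bool)
    intersection = (a[:, None, :] & b[None, :, :]).sum(axis=2)
    union = (a[:, None, :] | b[None, :, :]).sum(axis=2)
    return intersection / np.maximum(union, 1)

# Two 4x4 masks: identical pair -> IoU 1.0, disjoint pair -> IoU 0.0.
m1 = np.zeros((4, 4), dtype=bool); m1[:2, :2] = True
m2 = np.zeros((4, 4), dtype=bool); m2[2:, 2:] = True
iou = mask_iou(np.stack([m1, m2]), np.stack([m1, m2]))
print(iou)  # [[1. 0.] [0. 1.]]
```

Mask-based NMS then suppresses lower-confidence predictions whose mask IoU with a kept prediction exceeds the threshold, analogous to box NMS.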
- `sv.CropAnnotator`, allowing users to annotate the scene with scaled-up crops of detections. (#888)
```python
import cv2
import supervision as sv
from inference import get_model

image = cv2.imread(<SOURCE_IMAGE_PATH>)
model = get_model(model_id="yolov8n-640")

result = model.infer(image)[0]
detections = sv.Detections.from_inference(result)

crop_annotator = sv.CropAnnotator()
annotated_frame = crop_annotator.annotate(
    scene=image.copy(),
    detections=detections
)
```
https://github.com/roboflow/supervision/assets/26109316/0a5b67ce-55e7-4e26-9495-a68f9ad97ec7
## 🌱 Changed
- `sv.ByteTrack.reset`, allowing users to clear the tracker state, enabling the processing of multiple video files in sequence. (#827)
- `sv.LineZoneAnnotator`, allowing the in/out count to be hidden using the `display_in_count` and `display_out_count` properties. (#802)
- `sv.ByteTrack` input arguments and docstrings updated to improve readability and ease of use. (#787)
> [!WARNING]
> The `track_buffer`, `track_thresh`, and `match_thresh` parameters in `sv.ByteTrack` are deprecated and will be removed in `supervision-0.23.0`. Use `lost_track_buffer`, `track_activation_threshold`, and `minimum_matching_threshold` instead.
- `sv.PolygonZone`, now accepting a list of specific box anchors that must be inside the zone for a detection to be counted. (#910)
> [!WARNING]
> The `triggering_position` parameter in `sv.PolygonZone` is deprecated and will be removed in `supervision-0.23.0`. Use `triggering_anchors` instead.
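To illustrate the idea behind anchor-based triggering, here is a minimal NumPy sketch that checks which anchor points fall inside a polygon via ray casting (a simplified stand-in; use the actual `sv.PolygonZone` API in practice):

```python
import numpy as np

def points_in_polygon(points: np.ndarray, polygon: np.ndarray) -> np.ndarray:
    """Ray-casting point-in-polygon test.

    points: (N, 2) xy anchor points, polygon: (M, 2) xy vertices -> (N,) bool.
    """
    x, y = points[:, 0], points[:, 1]
    inside = np.zeros(len(points), dtype=bool)
    m = len(polygon)
    for i in range(m):
        x1, y1 = polygon[i]
        x2, y2 = polygon[(i + 1) % m]
        # An edge is crossed if the horizontal ray from the point passes
        # between its endpoints' y-coordinates, left of the intersection.
        crosses = (y1 > y) != (y2 > y)
        x_cross = (x2 - x1) * (y - y1) / (y2 - y1 + 1e-12) + x1
        inside ^= crosses & (x < x_cross)
    return inside

zone = np.array([[0, 0], [10, 0], [10, 10], [0, 10]])
anchors = np.array([[5, 5], [15, 5]])  # e.g. bottom-center anchors of two boxes
print(points_in_polygon(anchors, zone))  # [ True False]
```

With `triggering_anchors`, a detection counts as inside the zone only when the selected box anchors (e.g. bottom-center) satisfy such a test.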
- Annotators adding support for Pillow images. All Supervision annotators can now accept an image as either a NumPy array or a Pillow `Image`; they automatically detect its type, draw annotations, and return the output in the same format as the input. (#875)
## 🛠️ Fixed
- `sv.DetectionsSmoother` removing `tracking_id` from `sv.Detections`. (#944)
- `sv.DetectionDataset` which, after changes introduced in `supervision-0.18.0`, failed to load datasets in YOLO, PASCAL VOC, and COCO formats.
## 🏆 Contributors
@onuralpszr (Onuralp SEZER), @LinasKo (Linas Kondrackis), @LeviVasconcelos (Levi Vasconcelos), @AdonaiVera (Adonai Vera), @xaristeidou (Christoforos Aristeidou), @Kadermiyanyedi (Kader Miyanyedi), @NickHerrig (Nick Herrig), @PacificDou (Shuyang Dou), @iamhatesz (Tomasz Wrona), @capjamesg (James Gallagher), @sansyo, @SkalskiP (Piotr Skalski)