
v1.21.0

argilla-io/argilla

Released: 2023-12-21 22:45:33


🔆 Release highlights

Draft queue

We’ve added a new queue in the Feedback Task UI so that you can save your drafts and have them all together in a separate view. This allows you to save your responses and come back to them before submission.

Note that responses are no longer autosaved; to save your changes you will need to click “Save as draft” or use the shortcut ⌘ + S (macOS) or Ctrl + S (other).

Improved shortcuts

We’ve been working to improve the keyboard shortcuts within the Feedback Task UI to make them more productive and user-friendly.

You can now select labels in Label and Multi-label questions using the number keys on your keyboard. To see which number corresponds to each label, show or hide the helpers by holding ⌘ (macOS) or Ctrl (other) for 2 seconds; the numbers will then appear next to the corresponding labels.

We’ve also simplified shortcuts for navigation and actions, so that they use as few keys as possible.

Check all available shortcuts here.

New metrics module

We've added a new module to analyze annotations, both in terms of agreement between annotators and in terms of data and model drift monitoring.

Agreement metrics

Easily measure the inter-annotator agreement to explore the quality of the annotation guidelines and consistency between annotators:

import argilla as rg
from argilla.client.feedback.metrics import AgreementMetric
feedback_dataset = rg.FeedbackDataset.from_argilla("...", workspace="...")
metric = AgreementMetric(dataset=feedback_dataset, question_name="question_name")
agreement_metrics = metric.compute("alpha")
#>>> agreement_metrics
#[AgreementMetricResult(metric_name='alpha', count=1000, result=0.467889)]

Read more here.
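To build intuition for what such agreement numbers measure, here is a minimal, library-free sketch of a chance-corrected agreement score (Cohen's kappa) between two annotators. This is an illustration only, not Argilla's implementation; the annotator names and labels are invented for the example:

```python
from collections import Counter

def cohens_kappa(labels_a, labels_b):
    """Chance-corrected agreement between two annotators' label lists.

    1.0 means perfect agreement; 0.0 means agreement at chance level.
    """
    assert len(labels_a) == len(labels_b) and labels_a
    n = len(labels_a)
    # Fraction of records on which the two annotators agree
    observed = sum(x == y for x, y in zip(labels_a, labels_b)) / n
    # Agreement expected by chance, from each annotator's label distribution
    counts_a, counts_b = Counter(labels_a), Counter(labels_b)
    expected = sum(
        counts_a[label] * counts_b[label]
        for label in set(labels_a) | set(labels_b)
    ) / (n * n)
    return (observed - expected) / (1 - expected)

# Two hypothetical annotators labelling the same 6 records
ann1 = ["pos", "pos", "neg", "neg", "pos", "neg"]
ann2 = ["pos", "neg", "neg", "neg", "pos", "pos"]
print(round(cohens_kappa(ann1, ann2), 3))  # 0.333
```

Argilla's AgreementMetric computes Krippendorff's alpha (the "alpha" passed to compute above), which generalizes this idea to multiple annotators and missing responses.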

Model metrics

You can use ModelMetric to monitor model performance in terms of data and model drift:

import argilla as rg
from argilla.client.feedback.metrics import ModelMetric
feedback_dataset = rg.FeedbackDataset.from_argilla("...", workspace="...")
metric = ModelMetric(dataset=feedback_dataset, question_name="question_name")
annotator_metrics = metric.compute("accuracy")
#>>> annotator_metrics
#{'00000000-0000-0000-0000-000000000001': [ModelMetricResult(metric_name='accuracy', count=3, result=0.5)], '00000000-0000-0000-0000-000000000002': [ModelMetricResult(metric_name='accuracy', count=3, result=0.25)], '00000000-0000-0000-0000-000000000003': [ModelMetricResult(metric_name='accuracy', count=3, result=0.5)]}

Read more here.
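Conceptually, the per-annotator accuracy shown above compares each annotator's responses against the model's suggestions. A minimal sketch of that bookkeeping, with invented annotator IDs and a hypothetical (annotator, suggestion, response) tuple format rather than Argilla's actual record structure:

```python
from collections import defaultdict

def accuracy_per_annotator(responses):
    """Fraction of records where each annotator's response matches the
    model suggestion. `responses` is a list of
    (annotator_id, suggested_label, response_label) tuples."""
    hits = defaultdict(int)
    totals = defaultdict(int)
    for annotator, suggestion, response in responses:
        totals[annotator] += 1
        hits[annotator] += suggestion == response
    return {annotator: hits[annotator] / totals[annotator] for annotator in totals}

responses = [
    ("ann-1", "pos", "pos"), ("ann-1", "neg", "pos"),
    ("ann-2", "pos", "pos"), ("ann-2", "neg", "neg"),
]
print(accuracy_per_annotator(responses))  # {'ann-1': 0.5, 'ann-2': 1.0}
```

A drop in this accuracy over time can signal drift: either the incoming data has changed or the model's suggestions are getting worse.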

List aggregation support for TermsMetadataProperty

You can now pass a list of terms within a record’s metadata that will be aggregated and filterable as part of a TermsMetadataProperty.

Here is an example:

import argilla as rg

dataset = rg.FeedbackDataset(
    fields = ...,
    questions = ...,
    metadata_properties = [rg.TermsMetadataProperty(name="annotators")]
)

record = rg.FeedbackRecord(
    fields = ...,
    metadata = {"annotators": ["user_1", "user_2"]}
)

Reindex from CLI

Reindex all entities in your Argilla instance (datasets, records, responses, etc.) with a simple CLI command.

argilla server reindex

This is useful when you are working with an existing feedback dataset and want to update the search engine info.

Changelog 1.21.0

Added

Changed

Fixed

Removed
