diff --git a/docs/source/en/task_summary.mdx b/docs/source/en/task_summary.mdx
index 17be519605888e..08e36507a8971e 100644
--- a/docs/source/en/task_summary.mdx
+++ b/docs/source/en/task_summary.mdx
@@ -1077,16 +1077,15 @@ The following examples demonstrate how to use a [`pipeline`] and a model and tok
 >>> from transformers import pipeline
 
 >>> vision_classifier = pipeline(task="image-classification")
->>> vision_classifier(
+>>> result = vision_classifier(
 ...     images="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/pipeline-cat-chonk.jpeg"
 ... )
-[{'label': 'lynx, catamount', 'score': 0.4403027892112732},
- {'label': 'cougar, puma, catamount, mountain lion, painter, panther, Felis concolor',
-  'score': 0.03433405980467796},
- {'label': 'snow leopard, ounce, Panthera uncia',
-  'score': 0.032148055732250214},
- {'label': 'Egyptian cat', 'score': 0.02353910356760025},
- {'label': 'tiger cat', 'score': 0.023034192621707916}]
+>>> print("\n".join([f"Class {d['label']} with score {round(d['score'], 4)}" for d in result]))
+Class lynx, catamount with score 0.4335
+Class cougar, puma, catamount, mountain lion, painter, panther, Felis concolor with score 0.0348
+Class snow leopard, ounce, Panthera uncia with score 0.0324
+Class Egyptian cat with score 0.0239
+Class tiger cat with score 0.0229
 ```
 
 The general process for using a model and feature extractor for image classification is:
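
For quick verification outside the patch, the updated snippet can be run end-to-end as the minimal sketch below. It assumes the default `image-classification` checkpoint is downloaded by the pipeline; the exact scores depend on the checkpoint and library version, so the values shown in the diff are illustrative rather than guaranteed.

```python
from transformers import pipeline

# Build the default image-classification pipeline (downloads the default checkpoint).
vision_classifier = pipeline(task="image-classification")

# Classify the sample image referenced in the docs.
result = vision_classifier(
    images="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/pipeline-cat-chonk.jpeg"
)

# Print each label with its score rounded to four decimals, matching the new doc output.
print("\n".join(f"Class {d['label']} with score {round(d['score'], 4)}" for d in result))
```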