In MonkeyLearn, each classification or extraction task counts as a query, whether it's performed through the API or the GUI.

A single API request can consume multiple queries (classifications or extractions). For example, if a classifier module is called with 150 texts, it performs 150 classifications and consumes 150 queries.
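To make the accounting concrete, here is a minimal sketch in Python. The payload shape and the one-query-per-text rule follow the text above; the helper function and sample texts are purely illustrative, not part of MonkeyLearn's client library:

```python
# Sketch: one batch API request carrying many texts consumes
# one query per text, per the rule described above.

def queries_consumed(texts):
    """Each text in a batch classification request costs one query."""
    return len(texts)

texts = ["Great product!", "Terrible support.", "Average experience."]

# The request body for a batch classification call would carry all
# the texts at once (payload shape is an assumption; check the API
# Reference for the exact format):
payload = {"data": texts}

print(queries_consumed(texts))  # 3 texts -> 3 classifications -> 3 queries
```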

Not every API request consumes queries: getting module details, training, deploying, uploading samples, and so on, do not consume queries.

Every call to the API returns an X-Query-Limit-Request-Queries response header that tells you how many queries that call consumed. The API Reference documentation also details how many queries each API call consumes.
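For example, you could read that header from an HTTP response like this. The header name comes from the text above; the headers dictionary here is simulated so the sketch is self-contained (a real call would read `response.headers` from whatever HTTP library you use):

```python
def queries_used(headers):
    """Read the per-call query cost from the response headers.

    Returns 0 if the header is absent (e.g. for calls that
    do not consume queries).
    """
    return int(headers.get("X-Query-Limit-Request-Queries", 0))

# Simulated response headers for a batch call with 150 texts:
headers = {"X-Query-Limit-Request-Queries": "150"}

print(queries_used(headers))  # -> 150
```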

Pipelines consume the same number of queries as running all of the classifications and extractions independently. The advantage is that a pipeline is much simpler, much faster, and produces less API overhead.
