Result
Results are always returned in JSON format.
Paging
By default, the result contains only 25 items.
If more items exist, you'll see a next
link at the beginning of the response:
{
"next": "https://restapi.tpondemand.com/api/v2/userstories?where=(Effort>0)&take=25&skip=25",
"items": [
{"resourceType":"UserStory","id":194,"name":"New Story"},
{"resourceType":"UserStory","id":182,"name":"Import Tasks from CSV"},
{"resourceType":"UserStory","id":180,"name":"Highlight important Tasks"},
{"resourceType":"UserStory","id":178,"name":"Effort for Tasks"},
{"resourceType":"UserStory","id":177,"name":"Export Tasks into CSV"},
{"resourceType":"UserStory","id":176,"name":"Add Task"},
{"resourceType":"UserStory","id":175,"name":"Tag users"},
{"resourceType":"UserStory","id":174,"name":"ToDo list"},
{"resourceType":"UserStory","id":171,"name":"Add User"},
{"resourceType":"UserStory","id":166,"name":"Prototype"},
{"resourceType":"UserStory","id":165,"name":"Delete Task"},
{"resourceType":"UserStory","id":164,"name":"Email settings"},
{"resourceType":"UserStory","id":162,"name":"Print Tasks"},
{"resourceType":"UserStory","id":161,"name":"Create tasks from new email"},
{"resourceType":"UserStory","id":160,"name":"Email plugin"},
{"resourceType":"UserStory","id":159,"name":"Basic REST API"},
{"resourceType":"UserStory","id":156,"name":"Advanced REST API"},
{"resourceType":"UserStory","id":155,"name":"Facebook integration"},
{"resourceType":"UserStory","id":154,"name":"G+ integration"},
{"resourceType":"UserStory","id":153,"name":"Integrate social media"},
{"resourceType":"UserStory","id":152,"name":"Create custom Theme"},
{"resourceType":"UserStory","id":151,"name":"Install Wordpress on production server"},
{"resourceType":"UserStory","id":142,"name":"Create custom Theme for blog"},
{"resourceType":"UserStory","id":141,"name":"Install Wordpress for blogs"},
{"resourceType":"UserStory","id":140,"name":"Prepare Server-Side integration"}
]
}
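For illustration, a client can retrieve every item by following next links until the field is absent. A minimal Python sketch, assuming a fetch_json callable that wraps your HTTP client - the function names here are illustrative, not part of the API:

```python
def iter_all_items(first_url, fetch_json):
    """Follow 'next' links until a page without one is returned.

    fetch_json is any callable that takes a URL and returns the
    decoded JSON page (e.g. a thin wrapper around requests.get).
    """
    url = first_url
    while url:
        page = fetch_json(url)
        for item in page.get("items", []):
            yield item
        url = page.get("next")  # absent on the last page
```

Because this is a generator, items can be processed one page at a time without holding the whole collection in memory.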
Page size is controlled with the take
parameter. The maximum page size is 1000:
/api/v2/userstories?where=(Effort>0)&take=1000
If you need to pull more than 1000 items, you have to use paging and go through all the pages of the response using the take
and skip
parameters:
Page 1:
/api/v2/userstories?where=(Effort>0)&prettify&take=1000
Page 2:
/api/v2/userstories?where=(Effort>0)&prettify&take=1000&skip=1000
Page 3:
/api/v2/userstories?where=(Effort>0)&prettify&take=1000&skip=2000
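The take/skip arithmetic above can be wrapped in a small helper. A sketch in Python - the function name and the use of urlencode are illustrative assumptions, not part of the API:

```python
from urllib.parse import urlencode

def page_url(base, where, page, page_size=1000):
    """Build the URL for a given zero-based page using take/skip.

    The API caps take at 1000, so larger result sets need
    one request per page, with skip advancing by page_size.
    """
    params = {"where": where, "take": page_size, "skip": page * page_size}
    return base + "?" + urlencode(params)
```

Note that urlencode percent-encodes the where expression; the server accepts both the encoded and the literal form.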
Aggregation
It's possible to use the following aggregations on a root collection:
- count
- sum
- average
- min
- max
For example, get the count of all user stories:
/api/v2/userstory?result=Count
81
Get the sum, average, min, and max effort across all user stories:
/api/v2/userstory?result={sum:sum(effort),average:average(effort),min:min(effort),max:max(effort)}
{
"sum": 798.0000,
"average": 9.851851,
"min": 0.0000,
"max": 27.0000
}
Get the effort sum from all user stories that have tasks:
/api/v2/userstory?where=(Tasks.count!=0)&result=sum(effort)
70.0000
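The result expression is just alias:aggregation(field) pairs inside braces, so it can be assembled programmatically. A small Python sketch - aggregate_param is a hypothetical helper, not part of the API:

```python
def aggregate_param(**aggs):
    """Build an APIv2 result expression, e.g.
    {sum:sum(effort),average:average(effort)},
    from alias=expression keyword pairs."""
    inner = ",".join(f"{alias}:{expr}" for alias, expr in aggs.items())
    return "{" + inner + "}"
```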
APIv2 uses a LINQ expression parser whose syntax is very close to standard .NET syntax.
Streaming
There are several disadvantages to using paging for retrieving large numbers of entities:
- as mentioned above, there is a hard limit of 1000 entities per page,
- if you have to use multiple requests to retrieve entities continuously, you may see missing or duplicated results if entities were created or deleted in between page requests,
- using take/skip may prevent some performance optimizations under the hood.
For scenarios where you need to retrieve the entire set of entities from Targetprocess (e.g. for external integration purposes), it's possible to use the streaming service API.
The Streaming API performs paged requests under the hood, and resulting items are added to the response JSON as soon as they are retrieved from Targetprocess. That means you can use this API to implement JSON streaming and start processing the results as they come, before the request has fully executed.
Streaming service requests use a different base endpoint:
/svc/tp-apiv2-streaming-service/stream/userstory?where=(Effort>0)
The API itself is mostly compatible with APIv2, i.e. it's possible to specify the entity type, select, filter and/or where, but there are a few differences:
- take and skip parameters are ignored, since the entire set of entities is retrieved,
- result is ignored - aggregations are handled by APIv2 on the entire set of data and are unaffected by paging, so you can use regular APIv2 requests if you need result,
- orderby is ignored - all entities are always ordered by entity ID: this helps us optimize streaming and ensures no results are duplicated or missing as a result of modifications happening in parallel,
- all retrieved entities will also include an __id field containing the entity ID - you can safely ignore it when processing results,
- there is a rate limit of at most 10 concurrent requests per account.
Much like APIv2 is not a replacement for APIv1, Streaming isn't meant to be used as a replacement for APIv2 (especially since it doesn't support all of its features, like result and orderby). It's primarily intended for external integrations that require retrieval of large amounts of data from Targetprocess.
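To actually benefit from streaming, the client must parse items out of the response before the body is complete. A minimal Python sketch of such an incremental consumer, assuming the items arrive inside a JSON array (the exact response shape and your chunk source, e.g. an HTTP client's chunk iterator, are assumptions here):

```python
import json

def stream_items(chunks):
    """Yield objects from the first JSON array in a streamed
    response as soon as each object is complete.

    chunks is any iterable of text fragments, e.g. decoded
    chunks read incrementally from the streaming endpoint.
    """
    decoder = json.JSONDecoder()
    buf = ""
    started = False  # have we reached the opening '[' yet?
    for chunk in chunks:
        buf += chunk
        if not started:
            idx = buf.find("[")
            if idx < 0:
                continue
            buf = buf[idx + 1:]
            started = True
        while True:
            buf = buf.lstrip(", \t\r\n")
            if not buf or buf[0] == "]":
                break
            try:
                obj, end = decoder.raw_decode(buf)
            except ValueError:
                break  # current item incomplete; wait for more chunks
            yield obj
            buf = buf[end:]
```

This is only a sketch: a production consumer would use a proper incremental JSON parser (e.g. the ijson library) rather than scanning for the first bracket.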
Note on authentication
The Streaming API supports Access Tokens (i.e. the access_token
URL parameter) and Basic authentication.
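As a sketch of both options in Python (the helper names are hypothetical; access_token and the standard Basic scheme are the only parts taken from the docs above):

```python
import base64
from urllib.parse import urlencode

def token_url(base, token, **params):
    """Append an access_token parameter to a request URL."""
    params["access_token"] = token
    return base + "?" + urlencode(params)

def basic_auth_header(user, password):
    """Build a standard HTTP Basic Authorization header."""
    creds = base64.b64encode(f"{user}:{password}".encode()).decode()
    return {"Authorization": "Basic " + creds}
```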