
rest - How do streaming resources fit within the RESTful paradigm?

With a RESTful service you can create, read, update, and delete resources. This all works well when you're dealing with something like database records - but how does it translate to streaming data? (Or does it?) For instance, in the case of video, it seems silly to treat each frame as a resource that I should query one at a time. Rather, I would set up a socket connection and stream a series of frames. But does this break the RESTful paradigm? What if I want to be able to rewind or fast-forward the stream? Is this possible within the RESTful paradigm? So: how do streaming resources fit within the RESTful paradigm?

As a matter of implementation, I am getting ready to create such a streaming data service, and I want to make sure I'm doing it the "best way". I'm sure this problem's been solved before. Can someone point me to good material?


1 Answer


I did not manage to find material about truly RESTful streaming - most results are about delegating streaming to another service (which is not a bad solution). So I'll do my best to tackle it myself; note that streaming is not my domain, but I'll try to add my two cents.

When it comes to streaming, I think we need to separate the problem into two independent parts:

  1. access to media resources (metadata)
  2. access to the medium/stream itself (binary data)

1.) Access to media resources
This is pretty straightforward and can be handled in a clean, RESTful way. As an example, let's say we have an XML-based API that gives us access to a list of streams:

GET /media/

<?xml version="1.0" encoding="UTF-8" ?>
<media-list uri="/media">
    <media uri="/media/1" />
    <media uri="/media/2" />
    ...
</media-list>

...and also to individual streams:

GET /media/1

<?xml version="1.0" encoding="UTF-8" ?>
<media uri="/media/1">
    <id>1</id>
    <title>Some video</title>
    <stream>rtsp://example.com/media/1.3gp</stream>
</media>
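
As a rough illustration, here is a minimal client sketch in Python (using the requests library and the standard ElementTree parser) that walks this hypothetical API: it fetches the media list and then follows each linked media resource. The host name is an assumption.

import requests
import xml.etree.ElementTree as ET

BASE = "http://example.com"  # hypothetical host serving the API above

def list_media():
    """Fetch /media/ and yield the URIs of the individual media resources."""
    resp = requests.get(f"{BASE}/media/")
    resp.raise_for_status()
    root = ET.fromstring(resp.content)      # <media-list uri="/media">
    for media in root.findall("media"):
        yield media.get("uri")              # e.g. "/media/1"

def get_media(uri):
    """Fetch one media resource and return its title and stream link."""
    resp = requests.get(f"{BASE}{uri}")
    resp.raise_for_status()
    root = ET.fromstring(resp.content)      # <media uri="/media/1">
    return root.findtext("title"), root.findtext("stream")

for uri in list_media():
    title, stream = get_media(uri)
    print(f"{title}: {stream}")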

2.) Access to the medium/stream itself
This is the more problematic bit. You already pointed out one option in your question, and that is to allow access to frames individually via a RESTful API. Even though this might work, I agree with you that it's not a viable option.

I think that there is a choice to be made between:

  1. delegating streaming to a dedicated service via a specialized streaming protocol (e.g. RTSP)
  2. utilizing options available in HTTP

I believe the former to be the more efficient choice, although it requires a dedicated streaming service (and/or hardware). It might be a bit on the edge of what is considered RESTful; however, note that our API is RESTful in all other aspects: even though the dedicated streaming service does not adhere to the uniform interface (GET/POST/PUT/DELETE), our API does. It gives us proper control over resources and their metadata via GET/POST/PUT/DELETE, and it provides links to the streaming service (thus adhering to the connectedness aspect of REST).

The latter option - streaming over HTTP - might not be as efficient as the above, but it is definitely possible. Technically, it's not that different from allowing access to any other form of binary content via HTTP. In this case our API would provide a link to the binary resource accessible over HTTP, and would also tell the client the size of the resource:

GET /media/1

<?xml version="1.0" encoding="UTF-8" ?>
<media uri="/media/1">
    <id>1</id>
    <title>Some video</title>
    <bytes>1048576</bytes>
    <stream>/media/1.3gp</stream>
</media>

The client can access the resource via HTTP by using GET /media/1.3gp. One option is for the client to download the whole resource - HTTP progressive download. A cleaner alternative would be for the client to access the resource in chunks by utilizing HTTP Range headers. For fetching the second 128 KB chunk of a file that is 1 MB large, the client request would then look like this:

GET /media/1.3gp
...
Range: bytes=131072-262143
...

A server which supports ranges would then respond with the Content-Range header, followed by the partial representation of the resource:

HTTP/1.1 206 Partial Content
...
Content-Range: bytes 131072-262143/1048576
Content-Length: 131072
...
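
To make the server side concrete, here is a rough sketch, using only the Python standard library, of how a resource server might honour such a Range request and produce the 206 response above. The file name and port are illustrative assumptions, and a real media server would handle much more (open-ended and multiple ranges, caching headers, chunked reads).

import os
import re
from http.server import BaseHTTPRequestHandler, HTTPServer

MEDIA_FILE = "1.3gp"  # hypothetical media file on disk

class RangeHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        size = os.path.getsize(MEDIA_FILE)
        # Only closed ranges ("bytes=start-end") are handled in this sketch.
        match = re.match(r"bytes=(\d+)-(\d+)", self.headers.get("Range") or "")
        if not match:
            # No usable Range header: serve the whole file (progressive download).
            self.send_response(200)
            self.send_header("Content-Length", str(size))
            self.end_headers()
            with open(MEDIA_FILE, "rb") as f:
                self.wfile.write(f.read())
            return
        start, end = int(match.group(1)), int(match.group(2))
        if start >= size:
            self.send_response(416)  # Requested Range Not Satisfiable
            self.end_headers()
            return
        end = min(end, size - 1)
        length = end - start + 1
        self.send_response(206)  # Partial Content
        self.send_header("Content-Range", "bytes %d-%d/%d" % (start, end, size))
        self.send_header("Content-Length", str(length))
        self.end_headers()
        with open(MEDIA_FILE, "rb") as f:
            f.seek(start)
            self.wfile.write(f.read(length))

HTTPServer(("", 8080), RangeHandler).serve_forever()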

Note that our API has already told the client the exact size of the file in bytes (1 MB). If the client did not know the size of the resource, it should first call HEAD /media/1.3gp to determine the size; otherwise it risks a server response of 416 Requested Range Not Satisfiable.
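
A minimal client-side sketch of this flow in Python (again using the requests library) might look like the following. The URL, output file name, and chunk size are assumptions, and it presumes the server reports Content-Length on HEAD responses:

import requests

URL = "http://example.com/media/1.3gp"   # hypothetical stream location
CHUNK = 256 * 1024                       # 256 KB per request

# Determine the total size first (needed if the API did not report it).
size = int(requests.head(URL).headers["Content-Length"])

with open("1.3gp", "wb") as out:
    for start in range(0, size, CHUNK):
        end = min(start + CHUNK, size) - 1
        resp = requests.get(URL, headers={"Range": f"bytes={start}-{end}"})
        resp.raise_for_status()          # expect 206 Partial Content
        out.write(resp.content)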

