I am really struggling to understand the best way to stream real-time output of ffmpeg to an HTML5 client using node.js, as there are a number of variables at play and I don't have a lot of experience in this space, having spent many hours trying different combinations.

My use case is:

1) An IP video camera RTSP H.264 stream is picked up by FFMPEG and remuxed into a mp4 container using the following FFMPEG settings in node, output to STDOUT. This is only run on the initial client connection, so that partial content requests don't try to spawn FFMPEG again.
liveFFMPEG = child_process.spawn("ffmpeg", [
    "-i", "rtsp://admin:12345@192.168.1.234:554",
    "-vcodec", "copy",
    "-f", "mp4",
    "-reset_timestamps", "1",
    "-movflags", "frag_keyframe+empty_moov",
    "-"  // output to stdout
], { detached: false });
2) I use the node http server to capture the STDOUT and stream that back to the client upon a client request. When the client first connects, I spawn the above FFMPEG command line and then pipe the STDOUT stream into the HTTP response:
liveFFMPEG.stdout.pipe(resp);
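Putting 1) and 2) together, the server side is roughly along these lines (a simplified sketch, not my verbatim code: the camera URL and port 8080 are placeholders, error handling is omitted, and it is shown here with a plain 200 response):

var http = require("http");
var child_process = require("child_process");

var liveFFMPEG = null;  // spawned once and shared, so later requests don't re-spawn

http.createServer(function (req, resp) {
    if (!liveFFMPEG) {
        liveFFMPEG = child_process.spawn("ffmpeg", [
            "-i", "rtsp://admin:12345@192.168.1.234:554",
            "-vcodec", "copy",
            "-f", "mp4",
            "-reset_timestamps", "1",
            "-movflags", "frag_keyframe+empty_moov",
            "-"  // output to stdout
        ], { detached: false });
    }
    resp.writeHead(200, { "Content-Type": "video/mp4" });
    liveFFMPEG.stdout.pipe(resp);
}).listen(8080);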
I have also used the stream data event to write the FFMPEG data to the HTTP response, but it makes no difference:
liveFFMPEG.stdout.on("data", function (data) {
    resp.write(data);
});
I use the following HTTP headers (which are also used, and work, when streaming pre-recorded files):
var range = req.headers.range;  // e.g. "bytes=0-" from the client

var total = 999999999;  // fake a large file
var partialstart = 0;
var partialend = total - 1;
if (range !== undefined) {
    var parts = range.replace(/bytes=/, "").split("-");
    partialstart = parts[0];
    partialend = parts[1];
}

var start = parseInt(partialstart, 10);
var end = partialend ? parseInt(partialend, 10) : total;  // fake a large file if no range request
var chunksize = (end - start) + 1;

resp.writeHead(206, {
    'Transfer-Encoding': 'chunked',
    'Content-Type': 'video/mp4',
    'Content-Length': chunksize,  // large size to fake a file
    'Accept-Ranges': 'bytes ' + start + "-" + end + "/" + total
});
3) The client has to use the HTML5 video tag.

I have no problem streaming playback (using fs.createReadStream with 206 HTTP partial content) to the HTML5 client of a video file previously recorded with the above FFMPEG command line (but saved to a file instead of STDOUT), so I know the FFMPEG stream is correct, and I can even see the video live stream correctly in VLC when connecting to the node HTTP server.
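For comparison, the recorded-file case that works is conventional 206 range serving, roughly like this (again a simplified sketch; the file name and port are placeholders, and the header details may differ slightly from my code above):

var http = require("http");
var fs = require("fs");

http.createServer(function (req, resp) {
    var path = "recorded.mp4";  // placeholder file name
    var total = fs.statSync(path).size;
    var range = req.headers.range;
    var start = 0, end = total - 1;
    if (range) {
        var parts = range.replace(/bytes=/, "").split("-");
        start = parseInt(parts[0], 10);
        if (parts[1]) end = parseInt(parts[1], 10);
    }
    resp.writeHead(206, {
        'Content-Range': 'bytes ' + start + "-" + end + "/" + total,
        'Accept-Ranges': 'bytes',
        'Content-Length': (end - start) + 1,
        'Content-Type': 'video/mp4'
    });
    fs.createReadStream(path, { start: start, end: end }).pipe(resp);
}).listen(8080);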
However trying to stream live from FFMPEG via node HTTP seems to be a lot harder as the client will display one frame then stop. I suspect the problem is that I am not setting up the HTTP connection to be compatible with the HTML5 video client. I have tried a variety of things like using HTTP 206 (partial content) and 200 responses, putting the data into a buffer then streaming with no luck, so I need to go back to first principles to ensure I'm setting this up the right way.
Here is my understanding of how this should work; please correct me if I'm wrong:
1) FFMPEG should be set up to fragment the output and use an empty moov (FFMPEG frag_keyframe and empty_moov mov flags). This means the client does not rely on the moov atom, which is typically at the end of the file and isn't relevant when streaming (there is no end of file), but it also means no seeking is possible, which is fine for my use case.
2) Even though I use MP4 fragments and an empty MOOV, I still have to use HTTP partial content, as the HTML5 player will wait until the entire stream is downloaded before playing, and a live stream never ends, so that is unworkable.
3) I don't understand why piping the STDOUT stream to the HTTP response doesn't work when streaming live yet if I save to a file I can stream this file easily to HTML5 clients using similar code. Maybe it's a timing issue as it takes a second for the FFMPEG spawn to start, connect to the IP camera and send chunks to node, and the node data events are irregular as well. However the bytestream should be exactly the same as saving to a file, and HTTP should be able to cater for delays.
4) When checking the network log from the HTTP client while streaming a MP4 file created by FFMPEG from the camera, I see there are 3 client requests: a general GET request for the video, to which the HTTP server returns about 40Kb, then a partial content request with a byte range for the last 10K of the file, then a final request for the bits in the middle not yet loaded. Maybe the HTML5 client, once it receives the first response, is asking for the last part of the file to load the MP4 MOOV atom? If this is the case it won't work for streaming, as there is no MOOV atom at the end and no end of file.
5) When checking the network log while trying to stream live, I get an aborted initial request with only about 200 bytes received, then a re-request, again aborted after 200 bytes, and a third request which is only 2K long. I don't understand why the HTML5 client would abort the request, as the bytestream is exactly the same as the one I can successfully use when streaming from a recorded file. It also seems node isn't sending the rest of the FFMPEG stream to the client, yet I can see the FFMPEG data in the .on("data") event routine, so it is getting to the node HTTP server.
6) Although I think piping the STDOUT stream to the HTTP response buffer should work, do I have to build an intermediate buffer and stream that will allow the HTTP partial content client requests to work properly, like they do when the server (successfully) reads a file? I think this is the main reason for my problems, but I'm not exactly sure how best to set that up in Node. And I don't know how to handle a client request for the data at the end of the file, as there is no end of file.
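For example, the kind of intermediate buffer I can imagine is fanning FFMPEG's STDOUT out to each connected client through a PassThrough stream (a hypothetical sketch only; liveFFMPEG is the process spawned above, and this still wouldn't answer the partial content question):

var stream = require("stream");
var clients = [];  // one PassThrough per connected client

liveFFMPEG.stdout.on("data", function (chunk) {
    clients.forEach(function (client) {
        client.write(chunk);  // fan the live data out to every client buffer
    });
});

function addClient(resp) {
    var buffer = new stream.PassThrough();
    clients.push(buffer);
    buffer.pipe(resp);
    resp.on("close", function () {
        clients.splice(clients.indexOf(buffer), 1);  // drop disconnected clients
    });
}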
7) Am I on the wrong track in trying to handle 206 partial content requests, and should this work with normal 200 HTTP responses? HTTP 200 responses work fine for VLC, so I suspect the HTML5 video client will only work with partial content requests?
As I'm still learning this stuff, it's difficult to work through the various layers of this problem (FFMPEG, node, streaming, HTTP, HTML5 video), so any pointers will be greatly appreciated. I have spent hours researching on this site and the net, and I have not come across anyone who has been able to do real time streaming in node, but I can't be the first, and I think this should be able to work (somehow!).
This is a very common misconception. There is no live HTML5 video support (except for HLS on iOS and Mac Safari). You may be able to 'hack' it using a webm container, but I would not expect that to be universally supported. What you are looking for is included in the Media Source Extensions, where you can feed the fragments to the browser one at a time. But you will need to write some client side javascript.
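A rough sketch of that client side piece, assuming the server pushes fMP4 fragments over a WebSocket at /live (the URL, the codec string and the fragment framing are all assumptions you would need to match to your actual stream):

// Hypothetical MSE client: appends fMP4 fragments to a <video> element.
// The WebSocket URL and codec string must match the real stream
// (check the codec with ffprobe, for example).
var video = document.querySelector("video");
var mediaSource = new MediaSource();
video.src = URL.createObjectURL(mediaSource);

mediaSource.addEventListener("sourceopen", function () {
    var sourceBuffer = mediaSource.addSourceBuffer('video/mp4; codecs="avc1.42E01E"');
    var queue = [];  // fragments waiting while the SourceBuffer is busy

    sourceBuffer.addEventListener("updateend", function () {
        if (queue.length > 0) {
            sourceBuffer.appendBuffer(queue.shift());
        }
    });

    var ws = new WebSocket("ws://" + location.host + "/live");
    ws.binaryType = "arraybuffer";
    ws.onmessage = function (event) {
        if (sourceBuffer.updating || queue.length > 0) {
            queue.push(event.data);  // preserve append order
        } else {
            sourceBuffer.appendBuffer(event.data);
        }
    };
});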