Best approach to real-time HTTP streaming to an HTML5 video client

I'm really stuck trying to understand the best way to stream real-time output from ffmpeg to an HTML5 client using node.js, as there are a number of variables at play and I don't have a lot of experience in this space, having spent many hours trying different combinations.

My use case is:

1) An IP video camera RTSP H.264 stream is picked up by FFMPEG and remuxed into an mp4 container using the following FFMPEG settings in node, output to STDOUT. This is only run on the initial client connection, so that partial content requests don't try to spawn FFMPEG again.

liveFFMPEG = child_process.spawn("ffmpeg", [
                "-i", "rtsp://admin:12345@192.168.1.234:554" , "-vcodec", "copy", "-f",
                "mp4", "-reset_timestamps", "1", "-movflags", "frag_keyframe+empty_moov", 
                "-"   // output to stdout
                ],  {detached: false});

2) I use the node http server to capture the STDOUT and stream it back to the client on a client request. When the client first connects I spawn the above FFMPEG command line, then pipe the STDOUT stream into the HTTP response.

liveFFMPEG.stdout.pipe(resp);

I have also used the stream data event to write the FFMPEG data to the HTTP response, but it makes no difference:

liveFFMPEG.stdout.on("data", function(data) {
        resp.write(data);
});

I use the following HTTP headers (which are also used, and work, when streaming pre-recorded files):

var total = 999999999         // fake a large file
var partialstart = 0
var partialend = total - 1

if (range !== undefined) {
    var parts = range.replace(/bytes=/, "").split("-"); 
    var partialstart = parts[0]; 
    var partialend = parts[1];
} 

var start = parseInt(partialstart, 10); 
var end = partialend ? parseInt(partialend, 10) : total;   // fake a large file if no range request

var chunksize = (end-start)+1; 

resp.writeHead(206, {
                  'Transfer-Encoding': 'chunked'
                 , 'Content-Type': 'video/mp4'
                 , 'Content-Length': chunksize // large size to fake a file
                 , 'Accept-Ranges': 'bytes ' + start + "-" + end + "/" + total
});

3) The client has to use the HTML5 video tag.

I have no problems streaming playback (using fs.createReadStream and 206 HTTP partial content) to HTML5 clients of a video file previously recorded with the above FFMPEG command line (but saved to a file instead of STDOUT), so I know the FFMPEG stream is correct, and I can even see the live video stream correctly in VLC when connecting to the node HTTP server.
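For reference, the pre-recorded case that works is roughly this (a simplified sketch; the recorded.mp4 file name and port 8080 are placeholders, not my actual setup):

var fs = require('fs');
var http = require('http');

http.createServer(function (req, resp) {
    var path = "recorded.mp4";                       // placeholder file name
    var total = fs.statSync(path).size;
    var range = req.headers.range;

    if (range) {
        // Serve the requested byte range with a 206 partial content response
        var parts = range.replace(/bytes=/, "").split("-");
        var start = parseInt(parts[0], 10);
        var end = parts[1] ? parseInt(parts[1], 10) : total - 1;
        resp.writeHead(206, {
            'Content-Range': 'bytes ' + start + '-' + end + '/' + total,
            'Accept-Ranges': 'bytes',
            'Content-Length': (end - start) + 1,
            'Content-Type': 'video/mp4'
        });
        fs.createReadStream(path, { start: start, end: end }).pipe(resp);
    } else {
        // No range header: send the whole file
        resp.writeHead(200, { 'Content-Length': total, 'Content-Type': 'video/mp4' });
        fs.createReadStream(path).pipe(resp);
    }
}).listen(8080);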

However trying to stream live from FFMPEG via node HTTP seems to be a lot harder as the client will display one frame then stop. I suspect the problem is that I am not setting up the HTTP connection to be compatible with the HTML5 video client. I have tried a variety of things like using HTTP 206 (partial content) and 200 responses, putting the data into a buffer then streaming with no luck, so I need to go back to first principles to ensure I'm setting this up the right way.

Here is my understanding of how this should work, please correct me if I'm wrong:

1) FFMPEG should be set up to fragment the output and use an empty moov (FFMPEG frag_keyframe and empty_moov mov flags). This means the client does not rely on the moov atom, which is typically at the end of the file and isn't relevant when streaming (there is no end of file), but it also means no seeking is possible, which is fine for my use case.

2) Even though I use MP4 fragments and an empty MOOV, I still have to use HTTP partial content, as the HTML5 player will otherwise wait until the entire stream is downloaded before playing, which never happens with a live stream, so it is unworkable.

3) I don't understand why piping the STDOUT stream to the HTTP response doesn't work when streaming live yet if I save to a file I can stream this file easily to HTML5 clients using similar code. Maybe it's a timing issue as it takes a second for the FFMPEG spawn to start, connect to the IP camera and send chunks to node, and the node data events are irregular as well. However the bytestream should be exactly the same as saving to a file, and HTTP should be able to cater for delays.

4) When checking the network log from the HTTP client when streaming an MP4 file created by FFMPEG from the camera, I see there are 3 client requests: a general GET request for the video, for which the HTTP server returns about 40Kb, then a partial content request with a byte range for the last 10K of the file, then a final request for the bits in the middle not yet loaded. Maybe the HTML5 client, once it receives the first response, is asking for the last part of the file to load the MP4 MOOV atom? If that is the case it won't work for streaming, as there is no MOOV atom at the end and no end of file.

5) When checking the network log when trying to stream live, I get an aborted initial request with only about 200 bytes received, then a re-request again aborted with 200 bytes and a third request which is only 2K long. I don't understand why the HTML5 client would abort the request as the bytestream is exactly the same as I can successfully use when streaming from a recorded file. It also seems node isn't sending the rest of the FFMPEG stream to the client, yet I can see the FFMPEG data in the .on event routine so it is getting to the FFMPEG node HTTP server.

6) Although I think piping the STDOUT stream to the HTTP response buffer should work, do I have to build an intermediate buffer and stream that will allow the HTTP partial content client requests to properly work like it does when it (successfully) reads a file? I think this is the main reason for my problems however I'm not exactly sure in Node how to best set that up. And I don't know how to handle a client request for the data at the end of the file as there is no end of file.
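Something along these lines is what I have in mind (a rough sketch only: keep the most recent FFMPEG chunks in memory and replay them to each new client, although which bytes actually need replaying, e.g. the initial ftyp/moov plus whole fragments, is exactly what I am unsure about):

var recentChunks = [];
var bufferedBytes = 0;
var MAX_BUFFERED_BYTES = 2 * 1024 * 1024;   // keep roughly the last 2MB of FFMPEG output

liveFFMPEG.stdout.on("data", function (chunk) {
    recentChunks.push(chunk);
    bufferedBytes += chunk.length;
    while (bufferedBytes > MAX_BUFFERED_BYTES && recentChunks.length > 1) {
        bufferedBytes -= recentChunks.shift().length;
    }
});

// Called for each new HTTP client request
function attachClient(resp) {
    resp.writeHead(200, { 'Content-Type': 'video/mp4' });
    recentChunks.forEach(function (chunk) { resp.write(chunk); });               // replay buffered history
    liveFFMPEG.stdout.on("data", function (chunk) { resp.write(chunk); });       // then keep writing live data
}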

7) Am I on the wrong track trying to handle 206 partial content requests, and should this just work with normal 200 HTTP responses? HTTP 200 responses work fine for VLC, so I suspect the HTML5 video client only works with partial content requests?
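For completeness, the plain 200 variant I have tried looks roughly like this (no range handling, just chunked transfer; liveFFMPEG is the process spawned in step 1):

var http = require('http');

http.createServer(function (req, resp) {
    resp.writeHead(200, {
        'Content-Type': 'video/mp4',
        'Transfer-Encoding': 'chunked'
    });
    liveFFMPEG.stdout.pipe(resp);    // same pipe as before, just without 206/Content-Length
}).listen(8080);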

As I'm still learning this stuff it's difficult to work through the various layers of this problem (FFMPEG, node, streaming, HTTP, HTML5 video), so any pointers will be greatly appreciated. I have spent hours researching on this site and the net, and I have not come across anyone who has been able to do real-time streaming in node, but I can't be the first, and I think this should be able to work (somehow!).

阳光梅L 2020/03/23 14:22:40

This is a very common misconception. There is no live HTML5 video support (except for HLS on iOS and Mac Safari). You may be able to 'hack' it using a webm container, but I would not expect that to be universally supported. What you are looking for is covered by the Media Source Extensions, where you can feed the fragments to the browser one at a time. But you will need to write some client-side javascript.
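Roughly, that client-side javascript looks something like this (a minimal sketch; the codec string and the ws:// transport are illustrative assumptions and have to match what your muxer actually produces):

var video = document.querySelector('video');
var mediaSource = new MediaSource();
video.src = URL.createObjectURL(mediaSource);

mediaSource.addEventListener('sourceopen', function () {
    // Codec string is an assumption; it must match the fragmented MP4 coming from the server.
    var sourceBuffer = mediaSource.addSourceBuffer('video/mp4; codecs="avc1.42E01E"');
    var queue = [];

    sourceBuffer.addEventListener('updateend', function () {
        if (queue.length > 0 && !sourceBuffer.updating) {
            sourceBuffer.appendBuffer(queue.shift());
        }
    });

    // Fragments pushed from the server over a websocket (the ws:// URL is a placeholder).
    var ws = new WebSocket('ws://localhost:8086/live');
    ws.binaryType = 'arraybuffer';
    ws.onmessage = function (event) {
        if (!sourceBuffer.updating && queue.length === 0) {
            sourceBuffer.appendBuffer(event.data);
        } else {
            queue.push(event.data);      // never append while a previous append is still processing
        }
    };
});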

LGil 2020/03/23 14:22:40

Try binaryjs. It's just like socket.io, but the only thing it does well is streaming audio and video. Binaryjs, google it.

宝儿 2020/03/23 14:22:39

How about a JPEG solution: just have the server hand out JPEGs one by one to the browser, then use the canvas element to draw those JPEGs? http://thejackalofjavascript.com/rpi-live-streaming/
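Client side this can be as simple as the following sketch (the /snapshot.jpg URL and the 100 ms polling interval are placeholders; the linked article handles the transport a bit differently):

var canvas = document.querySelector('canvas');
var ctx = canvas.getContext('2d');

function drawNextFrame() {
    var img = new Image();
    img.onload = function () {
        ctx.drawImage(img, 0, 0, canvas.width, canvas.height);
        setTimeout(drawNextFrame, 100);                  // roughly 10 frames per second
    };
    img.src = '/snapshot.jpg?ts=' + Date.now();          // cache-bust each frame
}
drawNextFrame();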

神奇JinJin 2020/03/23 14:22:39

I wrote an HTML5 video player around the Broadway h264 codec (a JavaScript decoder) that can play live (zero-delay) h264 video in all browsers (desktop, iOS, etc.).

The video stream is sent to the client over a websocket, decoded frame by frame and displayed in a canvas (using webgl for acceleration).

See https://github.com/131/h264-live-player on github.
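Server side, the transport amounts to roughly this (a sketch using the ws npm package and raw h264 out of ffmpeg; the actual server lives in the repo above, and these flags are only an illustration):

var spawn = require('child_process').spawn;
var WebSocket = require('ws');

var wss = new WebSocket.Server({ port: 8085 });

var ffmpeg = spawn('ffmpeg', [
    '-i', 'rtsp://admin:12345@192.168.1.234:554',
    '-vcodec', 'copy',               // keep the camera's h264 (Broadway needs baseline profile)
    '-f', 'h264', '-'                // raw Annex-B h264 to stdout
]);

ffmpeg.stdout.on('data', function (chunk) {
    // Fan the raw h264 out to every connected browser
    wss.clients.forEach(function (client) {
        if (client.readyState === WebSocket.OPEN) {
            client.send(chunk);
        }
    });
});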

老丝 2020/03/23 14:22:39

Thanks everyone, especially szatmary, as this is a complex question with many layers, all of which need to be working before you can stream live video. To clarify my original question and the HTML5 video vs. Flash angle: my use case strongly prefers HTML5 because it is generic, simple to implement on the client, and future-proof. Flash is a distant second best, so we are sticking with HTML5 for this question.

I learned a lot through this exercise, and agree that live streaming is much harder than VOD (which works well with HTML5 video). But I did get this working satisfactorily for my use case, and the solution turned out to be very simple after chasing more complex options like MSE, Flash, and elaborate buffering schemes in Node. The problem was that FFMPEG was corrupting the fragmented MP4; I had to tweak the FFMPEG parameters, and the standard node stream pipe redirection over HTTP that I used originally was all that was needed.

In MP4 there is a 'fragmentation' option that splits the mp4 into much smaller fragments which carry their own index, and this is what makes live streaming of mp4 viable. However, it is not possible to seek back into the stream (OK for my use case), and later versions of FFMPEG support fragmentation.

Note that timing can be a problem; in my solution I have a lag of between 2 and 6 seconds caused by the remuxing (effectively FFMPEG has to receive the live stream, remux it, then send it to node for serving over HTTP). Not much can be done about this, however in Chrome the video does try to catch up as much as it can, which makes the video a bit jumpy but more current than IE11 (my preferred client).

Rather than explaining how the code works here, please check out the commented GIST (the client-side code isn't included; it is a standard HTML5 video tag pointing at the node http server address). The GIST is here: https://gist.github.com/deandob/9240090

I haven't been able to find similar examples of this use case, so I hope the above explanation and code help others, especially as I have learned so much from this site and still consider myself a beginner!

Although this is the answer to my specific question, I have selected szatmary's answer as the accepted one, as it is the most comprehensive.

小胖 2020/03/23 14:22:39

Take a look at this solution. As far as I know, Flashphoner allows playing a live audio + video stream in a pure HTML5 page.

They use MPEG1 video and G.711 audio codecs for playback. The hack is rendering the decoded video to an HTML5 canvas element and playing the decoded audio through the HTML5 audio context.

老丝阿飞 2020/03/23 14:22:39

One way to live-stream an RTSP-based webcam to HTML5 clients (it involves a re-encoding step, so there is some quality loss, and it needs some CPU power):

  • Set up an Icecast server (it could be on the same machine as your web server, or on the machine that receives the RTSP stream from the cam)
  • On the machine receiving the stream from the camera, don't use FFMPEG but gstreamer instead. It is able to receive and decode the RTSP stream, re-encode it and stream it to the Icecast server. Example pipeline (video only, no audio):

    gst-launch-1.0 rtspsrc location=rtsp://192.168.1.234:554 user-id=admin user-pw=123456 ! rtph264depay ! avdec_h264 ! vp8enc threads=2 deadline=10000 ! webmmux streamable=true ! shout2send password=pass ip=<IP_OF_ICECAST_SERVER> port=12000 mount=cam.webm
    

=> You can then use the <video> tag with the URL of the icecast stream (http://127.0.0.1:12000/cam.webm), and it will work in every browser and device that supports webm.
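Client side, in javascript terms that is nothing more than the following (the 127.0.0.1 URL assumes the Icecast server runs on the same machine):

var video = document.createElement('video');
video.src = 'http://127.0.0.1:12000/cam.webm';   // Icecast mount point from the pipeline above
video.autoplay = true;
document.body.appendChild(video);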

Mandy 2020/03/23 14:22:39

Take a look at the JSMPEG project. It implements a great idea: decoding MPEG in the browser using JavaScript. Bytes from the encoder (FFMPEG, for example) can be transferred to the browser using WebSockets or Flash, for example. I think that if the community catches up, it will be the best HTML5 live video streaming solution available for now.
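If I remember the project's README correctly, the client side boils down to something like this (the websocket URL and canvas id are placeholders, and you still need a small relay that pushes the FFMPEG output into the websocket):

var player = new JSMpeg.Player('ws://localhost:8084', {
    canvas: document.getElementById('video-canvas')   // decoded MPEG frames are drawn here
});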