How to write an OpenCV Mat to a GStreamer pipeline? - C++

I want to add some OpenCV processing to a GStreamer pipeline and then send it over udpsink.
I'm able to read frames from GStreamer like this:
// may add some plugins to the pipeline later
cv::VideoCapture cap("v4l2src ! video/x-raw, framerate=30/1, width=640, height=480, format=RGB ! videoconvert ! appsink");
cv::Mat frame;
while (true) {
    cap >> frame;
    // do some processing to the frame
}
But what I can't figure out is how to pass the processed frames to the following pipeline: appsrc ! x264enc ! mpegtsmux ! udpsink host=localhost port=5000
I've tried
cv::VideoWriter writer = cv::VideoWriter("appsrc ! x264enc ! mpegtsmux ! udpsink host=localhost port=5000", 0, (double)30, cv::Size(640, 480), true);
writer << processedFrame;
However, the receiver side receives nothing. (I use the pipeline gst-launch-1.0 udpsrc port=5000 ! tsparse ! tsdemux ! h264parse ! avdec_h264 ! videoconvert ! ximagesink sync=false as the receiver.)
My question is, can I pass processed opencv Mat to a gstreamer pipeline and let it do some encoding, and then send over network through udpsink? If yes, how do I achieve this?
Side question, is there any way I can debug a VideoWriter? Such as checking if frames are actually written into it.
Note that I'm using OpenCV 2.4.12 and GStreamer 1.2 on Ubuntu 14.04.
Any help is appreciated!
EDIT:
To provide more info, I tested the following code, and it gave GStreamer Plugin: Embedded video playback halted; module appsrc0 reported: Internal data flow error.
#include <stdio.h>
#include <opencv2/highgui/highgui.hpp>
#include <opencv2/opencv.hpp>
int main(int argc, char *argv[]) {
    cv::VideoCapture cap("v4l2src ! video/x-raw, framerate=30/1, width=640, height=480, format=RGB ! videoconvert ! appsink");
    if (!cap.isOpened()) {
        printf("=ERR= can't create capture\n");
        return -1;
    }
    cv::VideoWriter writer;
    // problem here
    writer.open("appsrc ! video/x-raw, framerate=30/1, width=640, height=480, format=RGB ! autovideoconvert ! ximagesink sync=false", 0, (double)30, cv::Size(640, 480), true);
    if (!writer.isOpened()) {
        printf("=ERR= can't create writer\n");
        return -1;
    }
    cv::Mat frame;
    int key;
    while (true) {
        cap >> frame;
        if (frame.empty()) {
            printf("no frame\n");
            break;
        }
        writer << frame;
        key = cv::waitKey(30);
    }
    cv::destroyWindow("video");
}
Apparently there's something wrong with the appsrc pipeline, but I have no idea what went wrong because the pipeline gst-launch-1.0 v4l2src ! video/x-raw, framerate=30/1, width=640, height=480, format=RGB ! videoconvert ! ximagesink sync=false works fine.

After hours of searching and testing, I finally got the answer.
The key is to use only videoconvert after appsrc, no need to set caps. Therefore, a writer pipeline would look like appsrc ! videoconvert ! x264enc ! mpegtsmux ! udpsink host=localhost port=5000.
Following is sample code that reads images from a GStreamer pipeline, does some OpenCV image processing, and writes the frames back to the pipeline.
With this method, you can easily add any OpenCV process to a GStreamer pipeline.
// Compile with: $ g++ opencv_gst.cpp -o opencv_gst `pkg-config --cflags --libs opencv`
#include <stdio.h>
#include <opencv2/highgui/highgui.hpp>
#include <opencv2/opencv.hpp>
int main(int argc, char** argv) {
    // Original gstreamer pipeline:
    // == Sender ==
    // gst-launch-1.0 v4l2src
    //   ! video/x-raw, framerate=30/1, width=640, height=480, format=RGB
    //   ! videoconvert
    //   ! x264enc noise-reduction=10000 tune=zerolatency byte-stream=true threads=4
    //   ! mpegtsmux
    //   ! udpsink host=localhost port=5000
    //
    // == Receiver ==
    // gst-launch-1.0 -ve udpsrc port=5000
    //   ! tsparse ! tsdemux
    //   ! h264parse ! avdec_h264
    //   ! videoconvert
    //   ! ximagesink sync=false

    // first part of sender pipeline
    cv::VideoCapture cap("v4l2src ! video/x-raw, framerate=30/1, width=640, height=480, format=RGB ! videoconvert ! appsink");
    if (!cap.isOpened()) {
        printf("=ERR= can't create video capture\n");
        return -1;
    }

    // second part of sender pipeline (port must match the receiver, 5000)
    cv::VideoWriter writer;
    writer.open("appsrc ! videoconvert ! x264enc noise-reduction=10000 tune=zerolatency byte-stream=true threads=4 ! mpegtsmux ! udpsink host=localhost port=5000",
                0, (double)30, cv::Size(640, 480), true);
    if (!writer.isOpened()) {
        printf("=ERR= can't create video writer\n");
        return -1;
    }

    cv::Mat frame;
    int key;
    while (true) {
        cap >> frame;
        if (frame.empty())
            break;
        /* Process the frame here */
        writer << frame;
        key = cv::waitKey(30);
    }
}
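On the side question about debugging a VideoWriter: writer.isOpened() only tells you whether the pipeline could be constructed at all. To watch what the elements do at runtime, raise GStreamer's own log level with the GST_DEBUG environment variable when launching the program (standard GStreamer behavior, nothing OpenCV-specific):
$ GST_DEBUG=3 ./opencv_gst
Level 3 shows errors and warnings from every element; higher levels get progressively noisier.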
Hope this helps. ;)

OK, this is too long for a comment.. it's not an answer, but a few hints:
1a, Use netcat to check what is received on the receiver side..
It's simple:
shell> nc -l 5000 -u
Then check what's being printed when you send something to the receiver.. nc is set to dump everything to the screen..
1b, You can try VLC as the receiver and check the debug messages (located in Tools > Messages - or hit Ctrl+M). Set the log level to 2 (debug).. Then open a network stream and use udp://@:5000 as the URL..
Btw you could test it with RTP with the pipe:
appsrc ! x264enc ! mpegtsmux ! rtpmp2tpay ! udpsink host=localhost port=5000
which is rtp://@:5000 in VLC then..
2, Check what's flowing after appsrc by using the identity element (very helpful.. I use it often to debug pipe problems):
change your pipe to (note the identity element; the -v flag belongs to gst-launch only and cannot go inside the pipeline string):
cv::VideoWriter writer = cv::VideoWriter("appsrc ! identity silent=false ! x264enc ! mpegtsmux ! udpsink host=localhost port=5000", 0, (double)30, cv::Size(640, 480), true);
Then run your code and check its output.. it should list the incoming buffers from appsrc
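You can also see what identity prints without touching any code; a quick standalone check (using videotestsrc in place of your camera):
shell> gst-launch-1.0 -v videotestsrc ! identity silent=false ! fakesink
Every buffer passing through identity is logged, which is exactly the evidence you want that a source is producing data.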
3, To the code you posted as an update - no, I meant to use the caps= attribute for the caps, but maybe there is no difference (note that the inner quotes must be escaped in C++):
writer.open("appsrc caps=\"video/x-raw, framerate=30/1, width=640, height=480, format=RGB\" ! autovideoconvert ! ximagesink sync=false", 0, (double)30, cv::Size(640, 480), true);

Related

gstreamer rtsp stream not readable using VLC, but readable from another gstreamer instance

How can I create a simple C++ program using OpenCV to stream over RTSP so that it can be seen using VLC?
I have been looking at many examples but none of them work.
Thanks
For instance:
int main()
{
    VideoCapture cap(0);
    if (!cap.isOpened()) {
        cerr << "VideoCapture not opened" << endl;
        exit(-1);
    }
    VideoWriter writer(
        "appsrc ! videoconvert ! video/x-raw,format=YUY2,width=640,height=480,framerate=30/1 ! jpegenc ! rtpjpegpay ! udpsink host=127.0.0.1 port=5000",
        0,   // fourcc
        30,  // fps
        Size(640, 480),
        true); // isColor
    if (!writer.isOpened()) {
        cerr << "VideoWriter not opened" << endl;
        exit(-1);
    }
    while (true) {
        Mat frame;
        cap.read(frame);
        writer.write(frame);
    }
    return 0;
}
The video feed can be read using the command line
gst-launch-1.0 -v udpsrc port=5000
! application/x-rtp, media=video, clock-rate=90000, encoding-name=JPEG, payload=26
! rtpjpegdepay
! jpegdec
! xvimagesink sync=0
However, it cannot be opened with VLC using the rtsp://127.0.0.1:5000 URL.
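(Why the rtsp:// URL cannot work: the pipeline above sends plain RTP over UDP; there is no RTSP server for VLC to talk to. One common workaround, assuming the sender above, is to describe the stream in a small .sdp file and open that file in VLC instead:
v=0
o=- 0 0 IN IP4 127.0.0.1
s=OpenCV JPEG stream
c=IN IP4 127.0.0.1
m=video 5000 RTP/AVP 26
The answer below takes a different route and puts a real RTSP server in between.)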
I got a solution.
Here is an improved version of the code:
#include <iostream>
#include <opencv2/imgproc.hpp>
#include <opencv2/highgui.hpp>
using namespace cv;
int main()
{
    VideoCapture cap("/home/salinas/Descargas/Minions/Minions.avi"); // video file input
    if (!cap.isOpened())
    {
        std::cout << "Video Capture Fail" << std::endl;
        return 0;
    }
    VideoWriter writer;
    // Write this string on one line to be sure!!
    writer.open("appsrc ! videoconvert ! videoscale ! video/x-raw,width=640,height=480 ! x264enc speed-preset=veryfast tune=zerolatency bitrate=800 ! rtspclientsink location=rtsp://localhost:8554/mystream ",
                0, 20, Size(640, 480), true);
    Mat img;
    while (cap.read(img))
    {
        cv::resize(img, img, Size(640, 480));
        cv::imshow("raw", img); // comment this line out if you don't need a preview window
        writer << img;
        cv::waitKey(25);
    }
    return 0;
}
Now, the problem is that this is not directly read by a program like VLC. You need to run, at the same time, an instance of rtsp-simple-server (you can download the binaries without dependencies here).
It seems like the OpenCV writer sends the data to rtsp-simple-server, which redirects the stream to the RTSP clients that request it.
Finally, go to VLC and open the URL rtsp://localhost:8554/mystream
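(For reference, the run order that should work under this setup, assuming the program above is compiled to a binary called streamer - the name is arbitrary:
./rtsp-simple-server
./streamer
vlc rtsp://localhost:8554/mystream
rtsp-simple-server listens on port 8554 by default, which is what the rtspclientsink location above points at.)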

GStreamer HLS Stream cannot be read

I'm trying to create an HLS stream using OpenCV and GStreamer in Linux (Ubuntu 20.10).
OpenCV was successfully installed with GStreamer support.
I have created a simple application with the help of these two tutorials:
http://4youngpadawans.com/stream-live-video-to-browser-using-gstreamer/
How to use Opencv VideoWriter with GStreamer?
The code is the following:
#include <string>
#include <iostream>
#include <opencv2/core.hpp>
#include <opencv2/highgui.hpp>
#include <opencv2/videoio/videoio_c.h>
using namespace std;
using namespace cv;
int main()
{
    VideoCapture cap;
    if (!cap.open(0, CAP_V4L2))
        return 0;
    VideoWriter writer(
        "appsrc ! videoconvert ! videoscale ! video/x-raw,width=640,height=480 ! x264enc ! mpegtsmux ! hlssink playlist-root=http://192.168.1.42:8080 location=/home/sem/hls/segment_%05d.ts target-duration=5 max-files=5 playlist-location=/home/sem/hls/playlist.m3u8 ",
        0,
        20,
        Size(800, 600),
        true);
    if (!writer.isOpened()) {
        std::cout << "VideoWriter not opened" << endl;
        exit(-1);
    }
    for (;;)
    {
        Mat frame;
        cap >> frame;
        if (frame.empty()) break;       // end of video stream
        writer.write(frame);
        imshow("this is you, smile! :)", frame);
        if (waitKey(10) == 27) break;   // stop capturing by pressing ESC
    }
}
The HTTP server was started using the Python command
python3 -m http.server 8080
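(Note that http.server serves the current working directory, so it has to be started from the folder hlssink writes to for the segments to be reachable, e.g.
cd /home/sem/hls && python3 -m http.server 8080
otherwise the URLs under playlist-root will return 404.)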
At first look everything is fine. The streamer creates all the needed files (the playlist and the .ts segments):
(screenshot: folder with the HTTP server content)
(screenshot: server response to the requests)
But if I try to play the stream, it does not work:
(screenshot: opening the stream in a browser)
Playing with the VLC player does not work either (green screen).
Could someone give me a hint about what I'm doing wrong?
Thanks in advance!
Check what stream format is created. Check what color format you push into the pipeline. If it's RGB, chances are you create a non-4:2:0 stream, which has very limited decoder support.
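(A sketch of that fix against the pipeline above: pin the format to I420 between videoconvert and the encoder, and videoconvert will do the RGB-to-4:2:0 conversion for you:
appsrc ! videoconvert ! videoscale ! video/x-raw,format=I420,width=640,height=480 ! x264enc ! mpegtsmux ! hlssink ...
x264enc then receives a format that every mainstream decoder accepts.)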
Thanks Florian,
I tried to change the format, but that was not the problem.
First, what has to be done is to take the real frame rate from the capture device:
int fps = cap.get(CV_CAP_PROP_FPS);
VideoWriter writer(
"appsrc ! videoconvert ! videoscale ! video/x-raw, width=640, height=480 ! x264enc ! mpegtsmux ! hlssink playlist-root=http://192.168.1.42:8080 location=/home/sem/hls/segment_%05d.ts target-duration=5 max-files=5 playlist-location=/home/sem/hls/playlist.m3u8 ",
0,
fps,
Size(640, 480),
true);
Second, the frame size has to be the same everywhere it is mentioned.
The captured frame must therefore also be resized:
resize(frame, frame, Size(640,480));
writer.write(frame);
After these changes, the chunks generated by GStreamer can be opened in a local player and the video works. Unfortunately, remote access still fails. :(

GStreamer Video Streaming and Receiving

I need to receive video on a laptop from a camera device which is connected to a video frame grabber (supports GStreamer). On the laptop I need to process the video using OpenCV and then stream it out in RTSP format. How do I receive a video and then stream it over RTSP using GStreamer and C++? Kindly share example code.
This is my code: sender.cpp runs on laptop #1 and receiver.cpp runs on laptop #2. I can't see the video at the receiver end. Additionally, I have attached my build information for your reference.
Sender.cpp
cv::VideoCapture video;
cv::Mat frame;
cv::VideoWriter videoOut;
videoOut.open("appsrc ! videoconvert ! video/x-raw, format=YUY2,width=640,height=480,framerate=30/1 ! jpegenc ! rtpjpegpay ! udpsink host=192.168.1.200 port=5000", cv::CAP_GSTREAMER, 0, 30, cv::Size(640,480), true);
if (video.open(0)) {
    while (video.isOpened()) {
        video >> frame;
        cv::putText(frame, "Processed Video", cv::Point(100,80), cv::FONT_HERSHEY_PLAIN, 2, cv::Scalar(0,0,255), 2);
        if (videoOut.isOpened())
            videoOut.write(frame);
        else
            cout << "Writer Not Opened";
        cv::imshow("Sender", frame);
        cv::waitKey(25);
    }
}
else {
    cout << "Camera Not Opened";
}
Receiver.cpp
cv::VideoCapture cap("udpsrc port=5000 ! application/x-rtp, encoding-name=JPEG, payload=26 ! rtpjpegdepay ! jpegdec ! videoconvert ! appsink sync=true async=true", cv::CAP_GSTREAMER);
if (cap.isOpened()) {
    cv::Mat m;
    cout << "Video Init";
    while (1) {
        cout << "Video Streaming";
        cap >> m;
        cv::imshow("Receiver", m);
        cv::waitKey(10);
    }
}
else
    cout << "Cap Not Opening";
To receive the video using GStreamer, make sure you build OpenCV with GStreamer support. Once you do that, simply create the pipeline for GStreamer and pass it as an argument to the cv::VideoCapture() object, like so:
std::string videoAddress = "udpsrc port=50004 ! application/x-rtp, encoding-name=JPEG, payload=26 ! rtpjpegdepay ! jpegdec ! videoconvert ! appsink sync=true async=true";
cv::VideoCapture *camera = new cv::VideoCapture();
bool cameraOpened = camera->open(videoAddress, cv::CAP_GSTREAMER);
I'm not sure how to help with the second part of your question.

Sending frames with VideoWriter; can't catch them again (OpenCV 3.1, C++)

I am trying to write a simple video streaming application that performs the following tasks:
Get a frame from the camera (this part is working);
Modify the frame;
Send it to a GStreamer pipeline.
Code:
VideoWriter writer;
writer.open("appsrc ! rtpvrawpay ! udpsink host=localhost port=5000", 0, 30, cv::Size(IMAGE_WIDTH, IMAGE_HEIGHT), true);
while (true) {
    // get frame etc.
    writer.write(frame);
}
VLC player can't see anything with the command:
vlc -vvv rtp://@localhost:5000
I tried:
cv::VideoCapture cap("udpsrc port=5000 ! tsparse ! videoconvert ! appsink");
But it didn't start (no error log, just didn't get any frame).
I am using OpenCV 3.1, and I have read the support documentation for GStreamer.
What can be wrong?
Before using OpenCV's GStreamer API, it's important that you have a working pipeline using GStreamer's command-line tool.
Sender:
Working pipeline:
gst-launch-1.0 -v v4l2src \
! video/x-raw, framerate=30/1, width=640, height=480, format=BGR \
! videoconvert ! video/x-raw, format=I420, width=640, height=480, framerate=30/1 \
! rtpvrawpay ! udpsink host=127.0.0.1 port=5000
OpenCV code:
bool sender()
{
    VideoCapture cap = VideoCapture("v4l2src ! video/x-raw, framerate=30/1, width=640, height=480, format=BGR ! appsink", cv::CAP_GSTREAMER);
    VideoWriter out = VideoWriter("appsrc ! videoconvert ! video/x-raw, format=I420, width=640, height=480, framerate=30/1 ! rtpvrawpay ! udpsink host=127.0.0.1 port=5000", CAP_GSTREAMER, 0, 30, Size(640, 480));
    if (!cap.isOpened() || !out.isOpened())
    {
        cout << "VideoCapture or VideoWriter not opened" << endl;
        return false;
    }
    Mat frame;
    while (true)
    {
        cap.read(frame);
        if (frame.empty())
            break;
        /* Modify frame here */
        out.write(frame);
        imshow("frame", frame);
        if (waitKey(1) == 'q')
            break;
    }
    cap.release();
    out.release();
    return true;
}
Receiver:
gst-launch-1.0 -v udpsrc port=5000 \
! "application/x-rtp, media=(string)video, clock-rate=(int)90000, encoding-name=(string)RAW, sampling=(string)YCbCr-4:2:0, depth=(string)8, width=(string)640, height=(string)480, payload=(int)96" \
! rtpvrawdepay ! xvimagesink
The problem was that my OpenCV version didn't support GStreamer for VideoWriter. I changed it to 3.3.0 and it works.
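(An easy way to check this on any build, using the standard cv::getBuildInformation() call - the "Video I/O" section of its output shows whether GStreamer was compiled in:
#include <iostream>
#include <opencv2/opencv.hpp>
int main() {
    // Dumps the compile-time configuration; look for the GStreamer
    // line under "Video I/O" (YES/NO).
    std::cout << cv::getBuildInformation() << std::endl;
    return 0;
}
If it says NO, VideoWriter will simply fail to open any GStreamer pipeline string.)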

How to stream only audio/video from mpeg file using gstreamer?

I am new to GStreamer. I want to stream only the audio or only the video on an RTP port. I tried the following command to stream only the video from an MPEG file:
$ gst-launch-1.0 -v rtpbin name=rtpbin filesrc location=KRSNA.mpg ! decodebin ! 'video/mpeg' ! rtpmpvpay ! udpsink host=127.0.0.1 port=9078
Following is the verbose output for the command:
Setting pipeline to PAUSED ...
Pipeline is PREROLLING ...
/GstPipeline:pipeline0/GstDecodeBin:decodebin0/GstTypeFindElement:typefind.GstPad:src: caps = video/mpeg, systemstream=(boolean)true, mpegversion=(int)1
/GstPipeline:pipeline0/GstDecodeBin:decodebin0/GstTypeFindElement:typefind.GstPad:src: caps = video/mpeg, systemstream=(boolean)true, mpegversion=(int)1
/GstPipeline:pipeline0/GstDecodeBin:decodebin0/GstMpegPSDemux:mpegpsdemux0.GstPad:sink: caps = video/mpeg, systemstream=(boolean)true, mpegversion=(int)1
/GstPipeline:pipeline0/GstDecodeBin:decodebin0/GstMultiQueue:multiqueue0.GstPad:sink_0: caps = video/mpeg, mpegversion=(int)1, systemstream=(boolean)false
/GstPipeline:pipeline0/GstDecodeBin:decodebin0/GstMultiQueue:multiqueue0.GstPad:sink_0: caps = video/mpeg, mpegversion=(int)1, systemstream=(boolean)false
/GstPipeline:pipeline0/GstDecodeBin:decodebin0/GstMpegvParse:mpegvparse0.GstPad:sink: caps = video/mpeg, mpegversion=(int)1, systemstream=(boolean)false
/GstPipeline:pipeline0/GstDecodeBin:decodebin0/GstMultiQueue:multiqueue0.GstPad:sink_1: caps = audio/mpeg, mpegversion=(int)1
/GstPipeline:pipeline0/GstDecodeBin:decodebin0/GstMultiQueue:multiqueue0.GstPad:sink_1: caps = audio/mpeg, mpegversion=(int)1
/GstPipeline:pipeline0/GstDecodeBin:decodebin0/GstMultiQueue:multiqueue0.GstPad:sink_1: caps = audio/mpeg, mpegversion=(int)1
/GstPipeline:pipeline0/GstDecodeBin:decodebin0/GstMpegAudioParse:mpegaudioparse0.GstPad:src: caps = audio/mpeg, mpegversion=(int)1, mpegaudioversion=(int)1, layer=(int)2, rate=(int)48000, channels=(int)2, parsed=(boolean)true
/GstPipeline:pipeline0/GstDecodeBin:decodebin0/GstMpg123AudioDec:mpg123audiodec0.GstPad:sink: caps = audio/mpeg, mpegversion=(int)1, mpegaudioversion=(int)1, layer=(int)2, rate=(int)48000, channels=(int)2, parsed=(boolean)true
/GstPipeline:pipeline0/GstDecodeBin:decodebin0/GstMpegvParse:mpegvparse0.GstPad:src: caps = video/mpeg, mpegversion=(int)1, systemstream=(boolean)false, parsed=(boolean)true, width=(int)720, height=(int)576, framerate=(fraction)25/1, pixel-aspect-ratio=(fraction)16/15, codec_data=(buffer)000001b32d0240830c352398
/GstPipeline:pipeline0/GstDecodeBin:decodebin0/avdec_mpeg2video:avdec_mpeg2video0.GstPad:sink: caps = video/mpeg, mpegversion=(int)1, systemstream=(boolean)false, parsed=(boolean)true, width=(int)720, height=(int)576, framerate=(fraction)25/1, pixel-aspect-ratio=(fraction)16/15, codec_data=(buffer)000001b32d0240830c352398
Redistribute latency...
/GstPipeline:pipeline0/GstDecodeBin:decodebin0/GstMpg123AudioDec:mpg123audiodec0.GstPad:src: caps = audio/x-raw, format=(string)S16LE, layout=(string)interleaved, rate=(int)48000, channels=(int)2, channel-mask=(bitmask)0x0000000000000003
/GstPipeline:pipeline0/GstDecodeBin:decodebin0/avdec_mpeg2video:avdec_mpeg2video0.GstPad:src: caps = video/x-raw, format=(string)I420, width=(int)720, height=(int)576, pixel-aspect-ratio=(fraction)16/15, interlace-mode=(string)progressive, colorimetry=(string)bt601, framerate=(fraction)25/1
/GstPipeline:pipeline0/GstDecodeBin:decodebin0/GstMultiQueue:multiqueue0: max-size-buffers = 5
/GstPipeline:pipeline0/GstDecodeBin:decodebin0/GstMultiQueue:multiqueue0: max-size-time = 0
/GstPipeline:pipeline0/GstDecodeBin:decodebin0/GstMultiQueue:multiqueue0: max-size-bytes = 2097152
/GstPipeline:pipeline0/GstDecodeBin:decodebin0/GstMultiQueue:multiqueue0: max-size-buffers = 5
/GstPipeline:pipeline0/GstDecodeBin:decodebin0/GstMultiQueue:multiqueue0: max-size-time = 0
/GstPipeline:pipeline0/GstDecodeBin:decodebin0/GstMultiQueue:multiqueue0: max-size-bytes = 2097152
/GstPipeline:pipeline0/GstCapsFilter:capsfilter1: caps = video/mpeg
/GstPipeline:pipeline0/GstCapsFilter:capsfilter2: caps = video/mpeg
/GstPipeline:pipeline0/GstDecodeBin:decodebin0/GstMpg123AudioDec:mpg123audiodec0.GstPad:src: caps = audio/x-raw, format=(string)S16LE, layout=(string)interleaved, rate=(int)48000, channels=(int)2, channel-mask=(bitmask)0x0000000000000003
/GstPipeline:pipeline0/GstDecodeBin:decodebin0.GstDecodePad:src_1: caps = audio/x-raw, format=(string)S16LE, layout=(string)interleaved, rate=(int)48000, channels=(int)2, channel-mask=(bitmask)0x0000000000000003
/GstPipeline:pipeline0/GstDecodeBin:decodebin0.GstDecodePad:src_1.GstProxyPad:proxypad8: caps = audio/x-raw, format=(string)S16LE, layout=(string)interleaved, rate=(int)48000, channels=(int)2, channel-mask=(bitmask)0x0000000000000003
/GstPipeline:pipeline0/GstDecodeBin:decodebin0/avdec_mpeg2video:avdec_mpeg2video0.GstPad:src: caps = video/x-raw, format=(string)I420, width=(int)720, height=(int)576, pixel-aspect-ratio=(fraction)16/15, interlace-mode=(string)progressive, colorimetry=(string)bt601, framerate=(fraction)25/1
/GstPipeline:pipeline0/GstDecodeBin:decodebin0.GstDecodePad:src_0: caps = video/x-raw, format=(string)I420, width=(int)720, height=(int)576, pixel-aspect-ratio=(fraction)16/15, interlace-mode=(string)progressive, colorimetry=(string)bt601, framerate=(fraction)25/1
/GstPipeline:pipeline0/GstDecodeBin:decodebin0.GstDecodePad:src_0.GstProxyPad:proxypad9: caps = video/x-raw, format=(string)I420, width=(int)720, height=(int)576, pixel-aspect-ratio=(fraction)16/15, interlace-mode=(string)progressive, colorimetry=(string)bt601, framerate=(fraction)25/1
ERROR: from element /GstPipeline:pipeline0/GstDecodeBin:decodebin0/GstMpegPSDemux:mpegpsdemux0: Internal data stream error.
Additional debug info:
gstmpegdemux.c(2871): gst_flups_demux_loop (): /GstPipeline:pipeline0/GstDecodeBin:decodebin0/GstMpegPSDemux:mpegpsdemux0:
stream stopped, reason not-linked
ERROR: pipeline doesn't want to preroll.
Setting pipeline to NULL ...
Freeing pipeline ...
How can I stream only the audio or only the video from an MPEG file using GStreamer?
Any help/pointers appreciated ...
Please try not using decodebin. Use the compatible demuxer element and then the required parser and decoder elements.
If you want to play an elementary stream (only video / only audio) out of a container format, use the required elements (video elements for the video stream, audio elements for the audio stream).
gst-launch-0.10 filesrc location=sample.mkv ! matroskademux ! h264parse ! ffdec_h264 ! ffmpegcolorspace ! autovideosink
The above pipeline, for example, will give the video output ignoring the audio, and the same can be achieved for the audio; a sketch follows below.
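For instance, assuming the file carries a Vorbis audio track (the element names here are the 0.10-era ones, to match the pipeline above):
gst-launch-0.10 filesrc location=sample.mkv ! matroskademux ! vorbisdec ! audioconvert ! autoaudiosink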
Hope this helps, also look here
gst-launch-0.10 filesrc location=c:/test.mpg ! mpegpsdemux ! mpegvideoparse ! mpeg2dec ! rtpvrawpay ! udpsink port=9078
This pipeline works for me, although I had to substitute mpegpsdemux for tsdemux, since I only had the former installed.