I need to stream video from one point in time to another. I capture the video from a device with an SD card, and I need to get just a subpart of the recording stored on the SD card.
I use GetReplayUri, which gives me the URL of the video stream. It doesn't let me request a subpart of the video; it only returns the whole recording.
AddUsernameTokenDigest(deviceReplayBindingProxy, NULL, GetUser(), GetPwd(), deltaT);
if (deviceReplayBindingProxy->GetReplayUri(&GetReplayUri_tmp, GetReplayUriResponse_tmp) != SOAP_OK)
{
    return NC_INTERNAL_ERROR;
}
std::string RTSP_url{ "" };
RTSP_url += GetReplayUriResponse_tmp.Uri;
RTSP_url += "?start=" + boost::lexical_cast<std::string>(startTime) + "&end=" + boost::lexical_cast<std::string>(endTime);
It doesn't work at all. Any suggestions?
It's possible with an RTSP PLAY command, using the Range header.
Example:
PLAY rtsp://192.168.0.1/path/to/recording RTSP/1.0
CSeq: 123
Session: 12345678
Require: onvif-replay
Range: clock=20090615T114900.440Z-20090615T115000Z
Rate-Control: no
For full reference see https://www.onvif.org/specs/stream/ONVIF-Streaming-Spec.pdf
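For illustration, here is a minimal C++ sketch of how such a PLAY request could be assembled before writing it to the RTSP socket. The header values mirror the example above; the function name is just a placeholder, not part of any library:

```cpp
#include <sstream>
#include <string>

// Assemble an ONVIF replay PLAY request like the example above.
// The URL, CSeq, session id, and clock range are caller-supplied.
std::string build_replay_play_request(const std::string& url,
                                      int cseq,
                                      const std::string& session,
                                      const std::string& start_clock,
                                      const std::string& end_clock)
{
    std::ostringstream req;
    req << "PLAY " << url << " RTSP/1.0\r\n"
        << "CSeq: " << cseq << "\r\n"
        << "Session: " << session << "\r\n"
        << "Require: onvif-replay\r\n"
        << "Range: clock=" << start_clock << "-" << end_clock << "\r\n"
        << "Rate-Control: no\r\n"
        << "\r\n";
    return req.str();
}
```

Note that the Range values are UTC timestamps in the clock= format shown above, not byte offsets or the ?start=... query parameters tried in the question.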
I'm looking for some help/direction/suggestions. I have my Discord bot's say command here; it's taken me a while to get it this far and working. What I'm looking for is to adapt the current code so that I can check whether the word after ".say" is a "#channel" and, if so, send the message there. I.e. ".say #feedback your welcome" would result in the bot saying "your welcome" in the feedback channel; otherwise it just sends the message within the same channel. I hope I have explained myself properly, but this is over my head and I've tried to research as much as possible. Thanks in advance for your time.
case 'say':
    if (!message.member.roles.cache.find(r => r.name === 'Moderator') && !message.member.roles.cache.find(r => r.name === 'Staff')) {
        return message.channel.send("You don't have the required permissions").then(msg => {
            msg.delete({ timeout: 5000 });
        });
    } else if (message.content.startsWith(".say")) {
        if (message.deletable) {
            message.delete();
        }
        if (!args[1]) {
            return message.channel.send("Nothing to say").then(msg => {
                msg.delete({ timeout: 5000 });
            });
        }
        message.channel.send(args.slice(1).join(" "));
    }
To check whether a channel is mentioned in the incoming message, you can use the Message.mentions property, which returns a MessageMentions object.
MessageMentions.channels returns a Collection of the GuildChannels mentioned in the message.
So, to check whether at least one channel is mentioned in the message:
if (message.mentions.channels.size > 0) {
    // There is at least 1 channel mentioned in the message.
} else {
    // No channel mentioned in the message.
}
To send a message to the first channel mentioned:
message.mentions.channels.first().send("message");
I'm new to Arduino and C++, and I'm working on an Arduino project for school. I have a Firebase Cloud Function and I want to call it from the Arduino. I start a connection with the server and send it a query:
const String query = "QUERY";
if (client.connect(server, 80)) {
    Serial.print("Connected to ");
    Serial.println(client.remoteIP());
    // Make an HTTP request:
    client.println(query);
    client.println("Host: SERVER-HOST-NAME");
    client.println("Connection: close");
    client.println();
}
Now this works fine if the query string is declared in one piece:
String query = String("GET /cloudFunctionName?dataThatShouldBeVariable=FixedData HTTP/1.1");
Example:
String query = String("GET /updateDatabase?temp=10 HTTP/1.1");
When using the method above, the cloud function triggers and the database gets updated. But I can't use it in production because I have some variable data, so what I did was:
String query = String("GET /cloudFunctionName?dataThatShouldBeVariable=" + String(VariableData) + " HTTP/1.1");
Example:
String query = String("GET /updateDatabase?temp=" + String(10) + " HTTP/1.1");
Now when I print the two strings I get the same result, but when using the second method the cloud function doesn't get triggered and the database doesn't get updated.
I believe the problem is with the query variable, not with the server or the hardware, but I couldn't find an answer. I've been working on this for hours and I'm really frustrated and don't know what to do. Please tell me what the problem is and how to fix it.
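For what it's worth, concatenation bugs like this are often easier to rule out by formatting the request line into a fixed-size buffer with snprintf instead of chaining String objects (which can also fragment the heap on small boards). Here is a minimal sketch in standard C++, where std::string stands in for Arduino's String and the /updateDatabase path is just the example from the question:

```cpp
#include <cstdio>
#include <string>

// Format the HTTP request line into a fixed-size buffer instead of
// concatenating String objects. The path and parameter name are the
// ones from the example above.
std::string build_query(int temp) {
    char buf[96];
    std::snprintf(buf, sizeof(buf), "GET /updateDatabase?temp=%d HTTP/1.1", temp);
    return std::string(buf);
}
```

On the Arduino itself you would pass buf straight to client.println(); the std::string wrapper here is only for testing off-device.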
I need to play a specific part of a video (e.g. from 10 sec to 20 sec).
So I use Alamofire with a custom Range header to download just this part of the video:
Alamofire.download(videoUrl,
                   method: .get,
                   headers: ["Range": String(format: "bytes=%d-%d", startByte, endByte)],
                   to: destination).response { (response) in
    completionHandler(response.destinationURL)
}
So far so good; I can see the downloaded file.
But when I try to play it, AVPlayer fails to do so. I'm using
videoAsset.loadValuesAsynchronously(forKeys: [ObserverContexts.urlAssetDurationKey, ObserverContexts.urlAssetPlayableKey])
to load those specific keys asynchronously, but the duration value fails to load:
let durationStatus = self.videoAsset.statusOfValue(forKey: ObserverContexts.urlAssetDurationKey, error: &durationError)
guard durationStatus == .loaded else {
    Logger.log.error("durationStatus not loaded: \(self.videoId)")
    self.delegate?.onLoadError(error: .durationStatusNotLoaded)
    return
}
So this error gets triggered and I can't play the video.
I'm no expert in video files, but I think the video file has a header, and partially downloading the file ruins this header so the player can't play it.
Any suggestion or idea (even the simplest) would be appreciated.
Thanks in advance.
EDIT: There is definitely something wrong with the video header. If I download the first 10 seconds of the video, the video plays for 10 seconds, BUT the reported end time/duration is wrong. How can I fix this?
I don't think iOS or Swift provides a feature for downloading a video from somewhere in the middle and then playing it. You can download the video file contents as raw bytes and then save them under any file name with any extension.
But a video file contains a lot of data in addition to the bytes you downloaded, such as the header, type, duration, etc. If your downloaded file doesn't contain that data, your video will be unplayable.
In this case I think (99% sure) you will have to use native code, not just Swift, to decode the video from a specific time to your desired time.
A lot of free libraries are available; give them a try. One such library is ffmpeg. Check this as well.
All the best.
This is not the answer to the original question, but I managed to achieve my goal by using AVFoundation. Now I can download the exact part I want, and it is playable:
let downloadUrl = URL(string: videoUrl)!
let asset = AVURLAsset(url: downloadUrl)
let composition = AVMutableComposition()
let videoTrack = composition.addMutableTrack(withMediaType: AVMediaTypeVideo, preferredTrackID: Int32(kCMPersistentTrackID_Invalid))
let audioTrack = composition.addMutableTrack(withMediaType: AVMediaTypeAudio, preferredTrackID: Int32(kCMPersistentTrackID_Invalid))
let tracks = asset.tracks(withMediaType: AVMediaTypeVideo)
guard let assetVideoTrack = tracks.first else {
    Logger.log.debug("video track is not available: \(videoId)")
    completionHandler(nil)
    return
}
let startCM = CMTime(seconds: startSecond, preferredTimescale: 1)
//let endCM = CMTime(seconds: startSecond + duration, preferredTimescale: 1)
//EDITED
let endCM = CMTime(seconds: duration, preferredTimescale: 1)
let timeRange = CMTimeRangeMake(startCM, endCM)
do {
    try videoTrack.insertTimeRange(timeRange, of: assetVideoTrack, at: CMTimeMake(0, 1))
} catch let errorInsertingVideo {
    Logger.log.debug("can not get video time range: \(videoId) -- \(errorInsertingVideo.localizedDescription)")
    completionHandler(nil)
    return
}
videoTrack.preferredTransform = assetVideoTrack.preferredTransform
guard let assetAudioTrack: AVAssetTrack = asset.tracks(withMediaType: AVMediaTypeAudio).first else {
    Logger.log.debug("audio track is not available: \(videoId)")
    completionHandler(nil)
    return
}
do {
    try audioTrack.insertTimeRange(timeRange, of: assetAudioTrack, at: CMTimeMake(0, 1))
} catch let errorInsertingAudio {
    Logger.log.debug("can not get audio time range: \(videoId) -- \(errorInsertingAudio.localizedDescription)")
    completionHandler(nil)
    return
}
guard let destinationUrl = self.getDestinationUrl(videoId) else {
    // can not access video cache dir
    Logger.log.error("can not access video cache dir")
    completionHandler(nil)
    return
}
let exportSession: AVAssetExportSession = AVAssetExportSession(asset: composition, presetName: AVAssetExportPresetPassthrough)!
exportSession.outputURL = destinationUrl
exportSession.timeRange = timeRange
exportSession.outputFileType = AVFileTypeQuickTimeMovie
exportSession.exportAsynchronously(completionHandler: {
    switch exportSession.status {
    case .completed:
        let output = exportSession.outputURL!
        completionHandler(output)
    case .failed:
        completionHandler(nil)
    default:
        completionHandler(nil)
    }
})
With this trick I can download exactly 10 seconds (or any duration) of a video starting from any second in the middle (or at the start) of the video.
EDIT: OK, this code can NOT download a middle part; it only works if you start from the beginning of the video.
EDIT 2: Fix added.
In summary
File.writeFile() creates a 0-byte PNG file when trying to write a Blob made from base64 data.
In my application, I am trying to create a file that consists of base64 data stored in the db. The rendered equivalent of the data is a small anti-aliased graph curve in black on a transparent background (never more than 300 x 320 pixels) that has previously been created and stored from a canvas element. I have independently verified that the stored base64 data is indeed correct by rendering it with one of the various base64 encoders/decoders available online.
Output from "Ionic Info"
--------------------------------
Your system information:
Cordova CLI: 6.3.1
Gulp version: CLI version 3.9.1
Gulp local:
Ionic Framework Version: 2.0.0-rc.2
Ionic CLI Version: 2.1.1
Ionic App Lib Version: 2.1.1
Ionic App Scripts Version: 0.0.39
OS:
Node Version: v6.7.0
--------------------------------
The development platform is Windows 10, and I've been testing directly on a Samsung Galaxy S7 and S4 so far.
I know that the base64 data has to be converted into binary data (as a Blob) first, as File does not yet support writing base64 directly into an image file. I found various techniques to do this, and the code which seems to suit my needs best (and reflects a similar way I would have done it in Java) is illustrated below:
Main code from constructor:
this.platform.ready().then(() => {
    this.graphDataService.getDataItem(this.job.id).then((data) => {
        console.log("getpic:");
        let imgWithMeta = data.split(",");
        // base64 data
        let imgData = imgWithMeta[1].trim();
        // content type
        let imgType = imgWithMeta[0].trim().split(";")[0].split(":")[1];
        console.log("imgData:", imgData);
        console.log("imgMeta:", imgType);
        console.log("aftergetpic:");
        // this.fs is correctly set to cordova.file.externalDataDirectory
        let folderpath = this.fs;
        let filename = "dotd_test.png";
        File.resolveLocalFilesystemUrl(this.fs).then((dirEntry) => {
            console.log("resolved dir with:", dirEntry);
            this.savebase64AsImageFile(dirEntry.nativeURL, filename, imgData, imgType);
        });
    });
});
Helper to convert base64 to Blob:
// convert base64 to Blob
b64toBlob(b64Data, contentType, sliceSize) {
    //console.log("data packet:", b64Data);
    //console.log("content type:", contentType);
    //console.log("slice size:", sliceSize);
    let byteCharacters = atob(b64Data);
    let byteArrays = [];
    for (let offset = 0; offset < byteCharacters.length; offset += sliceSize) {
        let slice = byteCharacters.slice(offset, offset + sliceSize);
        let byteNumbers = new Array(slice.length);
        for (let i = 0; i < slice.length; i++) {
            byteNumbers[i] = slice.charCodeAt(i);
        }
        let byteArray = new Uint8Array(byteNumbers);
        byteArrays.push(byteArray);
    }
    console.log("size of bytearray before blobbing:", byteArrays.length);
    console.log("blob content type:", contentType);
    let blob = new Blob(byteArrays, {type: contentType});
    // alternative way WITHOUT chunking the base64 data
    // let blob = new Blob([atob(b64Data)], {type: contentType});
    return blob;
}
save the image with File.writeFile()
// save the image with File.writeFile()
savebase64AsImageFile(folderpath, filename, content, contentType) {
    // Convert the base64 string to a Blob
    let data: Blob = this.b64toBlob(content, contentType, 512);
    console.log("file location attempt is:", folderpath + filename);
    File.writeFile(
        folderpath,
        filename,
        data,
        {replace: true}
    ).then(
        _ => console.log("write complete")
    ).catch(
        err => console.log("file create failed:", err)
    );
}
I have tried dozens of different decoding techniques, but the effect is the same. However, if I hardcode simple text data into the writeFile() section, like so:
File.writeFile(
    folderpath,
    "test.txt",
    "the quick brown fox jumps over the lazy dog",
    {replace: true}
)
A text file IS created correctly in the expected location with the text string above in it.
However, I've noticed that whether the file is the 0-byte PNG or the working text file above, in both cases the ".then()" clause of the File promise never fires.
Additionally, I swapped the above method and used the Ionic 2 native Base64-To-Gallery library to create the images, which worked without a problem. However, having the images in the user's picture gallery or camera roll is not an option for me as I do not wish to risk a user's own pictures while marshalling / packing / transmitting / deleting the data-rendered images. The images should be created and managed as part of the app.
User marcus-robinson seems to have experienced a similar issue outlined here, but it was across all file types, and not just binary types as seems to be the case here. Also, the issue seems to have been closed:
https://github.com/driftyco/ionic/issues/5638
Anybody experiencing something similar, or possibly spot some error I might have caused? I've tried dozens of alternatives but none seem to work.
I had similar behaviour saving media files, which worked perfectly on iOS. Nonetheless, I had the issue of 0-byte file creation on some Android devices in release builds (dev builds worked perfectly). After a very long search, I arrived at the following solution:
I moved the polyfills.js script tag to the top of index.html in the Ionic project, before the cordova.js tag. With this re-ordering, the issue is somehow resolved.
So the order should look like:
<script src="build/polyfills.js"></script>
<script type="text/javascript" src="cordova.js"></script>
Works on Ionic 3 and Ionic 4.
The credits go to 1
I got that working with most of your code:
this.file.writeFile(this.file.cacheDirectory, "currentCached.jpeg", this.b64toBlob(src, "image/jpg", 512) ,{replace: true})
The only difference i had was:
let byteCharacters = atob(b64Data.replace(/^data:image\/(png|jpeg|jpg);base64,/, ''));
instead of your
let byteCharacters = atob(b64Data);
Note: I did not use any other trimming etc., like the techniques you used in your constructor.
So I'm trying to create a C++ web server with services and stuff. It is alive here; this is how to compile it in 3 lines under a regular user on Linux, and here is its svn.
To redirect users I use this function:
void http_utils::send_found_302( std::string redirect_location, boost::shared_ptr<boost::asio::ip::tcp::socket> socket, boost::shared_ptr<http_response> response )
{
    response->status = 302;
    response->description = "Found";
    response->headers.insert(std::pair<std::string, std::string>("Location", redirect_location));
    response->send(*socket);
}
In Chrome, Safari, and IE I can register and log in to my server. But FF... FF allows me to register the user in the DB (meaning it sends the correct request), but when the server tries to redirect it to the page I want, it shows me a sad page =(
So, an example with pictures: we input credentials:
We hit submit... and it gets stuck connecting forever...:
When we try to use the URL it was trying to go to, we see that it got part of the "Found" response but has not redirected itself...( If we log in with Chrome everything works correctly; also, if we just follow that URL in Chrome we get redirected where needed and see this image:
I created a simple user, demo_user@gmail.com with pass 123456, so you can try it out...
So what is wrong with the way I redirect? What should be added to the response to make it work in FF?
At the end of the day I made this:
void http_utils::send_found_302( const std::string & redirect_location, boost::shared_ptr<boost::asio::ip::tcp::socket> socket, boost::shared_ptr<http_response> response )
{
    /* If there were no Firefox (and probably other dull browsers) we would do this (Chrome, IE, Safari tested):
     *
     * \code
     * response->status = 302;
     * response->description = "Found";
     * response->headers.insert(std::pair<std::string, std::string>("Location", redirect_location));
     * response->send(*socket);
     * \endcode
     *
     * but indeed there are.
     *
     * We could also create a simple HTML redirection page - that would work anywhere.
     * \code
     * std::ostringstream data_stream;
     * data_stream << "<!DOCTYPE HTML PUBLIC \"-//W3C//DTD HTML 4.0 Transitional//EN\"><html><head><script type=\"text/javascript\">location.replace(\""
     *             << redirect_location << "\");</script><noscript><meta http-equiv=\"refresh\" content=\"0; url= "
     *             << redirect_location << "\"></noscript></head><body><p>Please turn on JavaScript!</p><a href=\""
     *             << redirect_location << "\"><h1>Content awaits you here.</h1></a></body></html>";
     * response->headers.insert(std::pair<std::string, std::string>("Cache-Control", "max-age=0"));
     * response->headers.insert(std::pair<std::string, std::string>("Pragma", "no-cache"));
     * http_utils::send(data_stream.str(), socket, response);
     * \endcode
     *
     * So we decided to mix: an HTML page and HTTP redirection.
     */
    response->description = "Found";
    response->headers.insert(std::pair<std::string, std::string>("Location", redirect_location));
    response->headers.insert(std::pair<std::string, std::string>("Content-Location", redirect_location));
    response->headers.insert(std::pair<std::string, std::string>("Refresh", std::string("0; url=" + redirect_location)));
    response->headers.insert(std::pair<std::string, std::string>("Cache-Control", "max-age=0"));
    response->headers.insert(std::pair<std::string, std::string>("Pragma", "no-cache"));
    std::ostringstream data_stream;
    data_stream << "<!DOCTYPE HTML PUBLIC \"-//W3C//DTD HTML 4.0 Transitional//EN\"><html><head><script type=\"text/javascript\">location.replace(\""
                << redirect_location << "\");</script><noscript><meta http-equiv=\"refresh\" content=\"0; url= "
                << redirect_location << "\"></noscript></head><body><p>Please turn on JavaScript!</p><a href=\""
                << redirect_location << "\"><h1>Content awaits you here.</h1></a></body></html>";
    http_utils::send(302, data_stream.str(), socket, response);
}
Let me explain: current FF did not like my 302 and 303 redirects... so the simple solution was to move the battle to the other side, the HTML side. I created simple code that returns an HTML page that auto-redirects the user, or at least presents him with the correct link. I tested it, and all worked as desired (with short local links). Then I added a solution that sends not only the HTML page but also the relocation headers. This should work anywhere, and if the browser is smart it will not load the contents of the redirection page at all...)
So now everything works. =) And thanks to KayEss's answer I have made all links absolute now... Why not?)
For best portability the Location header needs to be an absolute URL. It looks like you're trying to send a relative one.
I.e.:
Location: http://example.com/path/image.png
Instead of:
Location: image.png
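A small C++ sketch of this fix-up, assuming the server knows its own scheme and host (the function name and the example.com host are illustrative, not from the project's code):

```cpp
#include <string>

// Turn a relative redirect target into an absolute URL suitable for
// the Location header. The scheme and host are supplied by the server.
std::string absolute_location(const std::string& scheme,
                              const std::string& host,
                              const std::string& target)
{
    // Already absolute: pass through unchanged.
    if (target.rfind("http://", 0) == 0 || target.rfind("https://", 0) == 0)
        return target;
    // Root-relative path vs. bare relative path.
    if (!target.empty() && target[0] == '/')
        return scheme + "://" + host + target;
    return scheme + "://" + host + "/" + target;
}
```

With this, a relative target like image.png becomes http://example.com/path-rooted/image.png style absolute form before the response is sent, which is what older browsers expect.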