Is it possible to avoid the use of passwords when using the SAS Metadata Batch Export Tool?
I am building a feature in my STP web app (SAS 9.2, IWA, Kerberos) for auto-exporting metadata items. As per the documentation, the ExportPackage utility requires credentials either directly (the -user and -password options) or via a connection profile (-profile).
Logging onto the application server as sassrv, the contents of my connection profile are as follows:
#Properties file updated on: Thu Mar 12 16:35:07 GMT 2015 !!!!! DO NOT EDIT !!!!!!!
#Thu Mar 12 16:35:07 GMT 2015
Name=SAS
port=8561
InternalAccount=false
host=DEV-SASMETA.somecompany.int
AppServer.Default=A5MNZZZZ.AR000666
AllowLocalPasswords=true
authenticationdomain=DefaultAuth
SingleSignOn=true
Running my code, however, results in the following:
44 +%put Batch tool located at: &platform_object_path;
Batch tool located at: C:\Program Files\SAS/SASPlatformObjectFramework/9.2
45 +filename inpipe pipe
46 + " ""&platform_object_path\ExportPackage"" -profile MyProfile
47 + -package 'C:\Temp\TestPackage.spk' -objects '/SomeFolder/ARCHIVE(Folder)' -includeDep -subprop";
48 +data _null_;
49 + infile inpipe;
50 + input; putlog _infile_;
51 +run;
NOTE: The infile INPIPE is:
Unnamed Pipe Access Device,
PROCESS="C:\Program Files\SAS/SASPlatformObjectFramework/9.2\ExportPackage" -profile MyProfile -package 'C:\Temp\TestPackage.spk' -objects '/SomeFolder/ARCHIVE(Folder)' -includeDep
-subprop,
RECFM=V,LRECL=256
The export process has failed. The native implementation module for the security package could not be found in the path.
For more information, view the export log file: C:\Users\sassrv\AppData\Roaming\SAS\Logs\Export_150427172003.log
NOTE: 2 records were read from the infile INPIPE.
The minimum record length was 112.
The maximum record length was 121.
The log file was empty.
Presumably my options here are limited to:
Requesting the user password from the front end
Using a system account in the connection profile
Using a system account in the -user & -password options
??
I verified this some time ago with SAS Technical Support; it's currently not possible.
I entered a SASware Ballot idea for it: https://communities.sas.com/t5/SASware-Ballot-Ideas/Add-Integrated-Windows-Authentication-IWA-support-to-Batch/idi-p/220474
Please vote!
This was initially resolved by embedding a username / password in the profile, but it now works by taking the (modified) template profile below and adding the host= / port= parameters dynamically at runtime (so the same profile can be used in different environments).
IWA is now used to connect to the metadata server!
# This file is used by Release Management for connecting to the metadata server
# The host= and port= parameters are provided by the application at runtime
SingleSignOn=true
AllowLocalPasswords=true
InternalAccount=false
SecurityPackageList=Negotiate,NTLM
SecurityPackage=Negotiate
An important thing I discovered (not in the SAS documentation) is that you can substitute the profile name with an absolute path to a profile (.swa file) in the ExportPackage command.
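For illustration only (the .swa path below is a made-up example), the command from the question could then point directly at the generated profile file instead of a profile name:
"C:\Program Files\SAS/SASPlatformObjectFramework/9.2\ExportPackage" -profile "C:\Temp\MyProfile.swa" -package 'C:\Temp\TestPackage.spk' -objects '/SomeFolder/ARCHIVE(Folder)' -includeDep -subprop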
Edit (one year later):
As pointed out by @Stig Eide, the error does seem to relate to 32-bit vs 64-bit JREs. I also came across this issue in DI Studio today and solved it by copying the sspiauth.dll files as described here.
Related
"failureReason": "Job validation failed: Request field config is
invalid, expected an estimated total output size of at most 400 GB
(current value is 1194622697155 bytes).",
The actual input file was only 8 seconds long. It was created using the Safari MediaRecorder API on macOS.
"failureReason": "Job validation failed: Request field
config.editList[0].startTimeOffset is 0s, expected start time less
than the minimum duration of all inputs for this atom (0s).",
The actual input file was 8 seconds long. It was created using the desktop Chrome MediaRecorder API, with mimeType "webm; codecs=vp9", on macOS.
Note that Stack Overflow wouldn't allow me to include the tag google-cloud-transcoder suggested by "Getting Support" https://cloud.google.com/transcoder/docs/getting-support?hl=sr
Like Faniel mentioned, your first issue is that your video was less than 10 seconds, which is below the minimum of 10 seconds for the API.
Your second issue is that the "Duration" information is likely missing from the EBML headers of your .webm file. When you record with MediaRecorder, the duration of your video is set to N/A in the file headers, as it is not known in advance. This means the Transcoder API will treat the length of your video as Infinity / 0. Some consider this a bug in Chromium.
To confirm this is your issue you can use ts-ebml or ffprobe to inspect the headers of your video. You can also use these tools to repair the headers. Read more about this here and here
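As a sketch of that check (file names are illustrative, and the repair step uses an ffmpeg stream-copy remux, which is one common way to get a correct duration written; it is not the only option):
ffprobe -show_format recording.webm
(a MediaRecorder capture typically reports duration=N/A in the [FORMAT] section)
ffmpeg -i recording.webm -c copy repaired.webm
(no re-encoding; the remuxed file gets proper duration metadata)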
Also, try running the Transcoder API with this demo .webm, which has its duration information set correctly.
This Google documentation states that the input file's length must be at least 5 seconds in duration and should be stored in Cloud Storage (for example, gs://bucket/inputs/file.mp4). A job validation error can occur when the inputs are not properly packaged and don't contain duration metadata, or contain incorrect duration metadata. When the inputs are not properly packaged, we can explicitly specify startTimeOffset and endTimeOffset in the job config to set the correct duration. If the estimated total output size, computed from the duration reported by ffprobe (in seconds), is more than 400 GB, it can result in a job validation error. We can use the following formula to estimate the output size.
estimatedTotalOutputSizeInBytes = bitrateBps * outputDurationInSec / 8;
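As a rough worked example with made-up numbers: a 60-second output at 2 Mbps gives 2000000 * 60 / 8 = 15000000 bytes (about 15 MB). A reported value like 1194622697155 bytes therefore usually points to a bogus duration (e.g. the Infinity / 0 case described above) rather than a genuinely huge output.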
Thanks for the question and feedback. The Transcoder API currently has a minimum duration of 10 seconds which may be why the job wasn't successful.
I'm using a custom algorithm shipped with a Docker image on a p2 instance with AWS SageMaker (a bit similar to https://github.com/awslabs/amazon-sagemaker-examples/blob/master/advanced_functionality/scikit_bring_your_own/scikit_bring_your_own.ipynb).
At the end of the training process, I try to write my model to the output directory that is mounted via SageMaker (as in the tutorial), like this:
import os  # needed for os.path.join below

model_path = "/opt/ml/model"
model.save(os.path.join(model_path, 'model.h5'))
Unfortunately, the model apparently grows too big over time and I get the following error:
RuntimeError: Problems closing file (file write failed: time = Thu Jul
26 00:24:48 2018
00:24:49 , filename = 'model.h5', file descriptor = 22, errno = 28,
error message = 'No space left on device', buf = 0x1a41d7d0, total
write[...]
So all my hours of GPU time are wasted. How can I prevent this from happening again? Does anyone know what the size limit is for models stored on SageMaker-mounted directories?
When you train a model with Estimators, the storage defaults to 30 GB, which may not be enough. You can use the train_volume_size param on the constructor to increase this value. Try a large-ish number (like 100 GB) and see how big your model is. In subsequent jobs, you can tune the value down to something closer to what you actually need.
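A minimal sketch with the v1 SDK parameter names of that era; the image, role, bucket, and instance type below are all placeholders:

from sagemaker.estimator import Estimator

# Every identifier below is a placeholder -- substitute your own container image, role, and S3 paths.
estimator = Estimator(
    image_name='123456789012.dkr.ecr.us-east-1.amazonaws.com/my-algorithm:latest',
    role='arn:aws:iam::123456789012:role/MySageMakerRole',
    train_instance_count=1,
    train_instance_type='ml.p2.xlarge',
    train_volume_size=100,  # GB of EBS storage attached to the training instance
    output_path='s3://my-bucket/output',
)
estimator.fit('s3://my-bucket/training-data')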
Storage costs $0.14 per GB-month of provisioned storage. Partial usage is prorated, so giving yourself some extra room is a cheap insurance policy against running out of storage.
In the SageMaker Jupyter notebook, you can check free space on the filesystem(s) by running !df -h. For a specific path, try something like !df -h /opt.
I'm using OSMF's Strobe Media Playback player to try to play files from AWS CloudFront/S3.
The bucket is called ct.recorder. The CloudFront distribution is called 1dm7svtk8jb00c.cloudfront.net, and its origin is ct.recorder.
The video within the bucket is called vid_test001
I've tried initializing the player with rtmp://s34osaecrafusl.cloudfront.net/cfx/st/vid_test001
But that doesn't work.
I get Connection attempt rejected by FMS server. Connection failed.
I've also tried it with .flv at the end, but that doesn't work either.
Am I not linking to the file properly, or is it my player?
Well, I had an entire answer written up, speculating that it was related to bucket permissions, and now I'm scratching that answer and posting this, instead. :)
$ rtmpdump -r rtmp://s34osaecrafusl.cloudfront.net/cfx/st/vid_test001.flv -o testfile.flv
RTMPDump v2.4
(c) 2010 Andrej Stepanchuk, Howard Chu, The Flvstreamer Team; license: GPL
Connecting ...
WARNING: HandShake: client signature does not match!
INFO: Connected...
Starting download at: 0.000 kB
INFO: Metadata:
INFO: duration 13.82
INFO: videocodecid 2.00
INFO: audiocodecid 6.00
INFO: canSeekToEnd FALSE
INFO: createdby AMS 5
INFO: creationdate Tue Dec 03 13:41:46 2013
1190.238 kB / 13.82 sec (100.0%)
Download complete
This actually works for me, both with and without the .flv on the end, and the resulting file is a 7-second video of a guy looking at a webcam.
Using "smplayer" for Windows, I can connect to cloudfront with the rtmp:// url and stream the video, but it only works without the .flv on the end, using:
MPlayer Redxii-SVN-r36243-4.6.3 (C) 2000-2013 MPlayer Team
Custom build by Redxii, http://smplayer.sourceforge.net
Compiled against FFmpeg version N-52798-gf5846dc
Build date: Sun May 5 23:51:25 EDT 2013
This doesn't quite answer your question of why it isn't working, except to say that your player seems to be lying to you about "Connection attempt rejected by FMS server", because, at least from here, it's fine, except for this part, and I don't know what it means:
WARNING: HandShake: client signature does not match!
However, that could just be a distraction.
It looks as if it's going to be your player... so trying other players would be worthwhile.
It is, of course, possible that there's a regional issue involving the particular edge location inside CloudFront that you access from your location, which could be significantly different from the one I'm hitting, since it's geographically... but if another player works where you are, then you may have the answer you're looking for. Firing up Wireshark and analyzing the protocol exchange could be an interesting exercise also.
Afterthought: the extra slash in your path could also be blowing something's mind, since an RTMP url apparently consists of two distinct components, "application"/"stream_name" and the point of delineation may be ambiguous at some level to some component in the chain. If cloudfront thinks the "application" is "cfx" and the stream is "st/vid_test001" but the client assumes the "application" is "cfx/st" with stream name "vid_test001" it seems like there could be some potential for interoperability trouble there. This is wild speculation, but perhaps worth experimentation, too.
The embed parameter urlIncludesFMSApplicationInstance needs to be set to true.
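One way to pass it (a sketch only; this relies on the standard flashvars-in-query-string mechanism, and the encoded URL is simply the one from the question) is to append it alongside src when embedding StrobeMediaPlayback.swf:
StrobeMediaPlayback.swf?src=rtmp%3A%2F%2Fs34osaecrafusl.cloudfront.net%2Fcfx%2Fst%2Fvid_test001&urlIncludesFMSApplicationInstance=true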
I have C++ code in which I invoke SQL*Loader using system(). When SQL*Loader executes while running the code, I get the messages shown below, which I want to disable:
SQL*Loader: Release 10.2.0.1.0 - Production on Thu Mar 14 14:11:25 2013
Copyright (c) 1982, 2005, Oracle. All rights reserved.
Commit point reached - logical record count 20
Commit point reached - logical record count 40
Commit point reached - logical record count 60
Commit point reached - logical record count 80
Remember that the system function uses the shell to execute the command. So you can use normal shell redirection:
system("/some/program > /dev/null");
You can use the silent=ALL option to suppress these messages:
system("/orahomepath/bin/sqlldr silent=ALL ...")
See also SQL*Loader Command-Line Reference:
As SQL*Loader executes, you also see feedback messages on the screen, for example:
Commit point reached - logical record count 20
You can suppress these messages by specifying SILENT with one or more values:
...
ALL - Implements all of the suppression values: HEADER, FEEDBACK, ERRORS, DISCARDS, and PARTITIONS.
Depending on the sqlldr implementation, you might still end up with some output either way; if you need complete silence, see the answer from @Joachim about shell redirection.
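Combining the two suggestions (the control file name here is just a placeholder):
// silent=ALL suppresses SQL*Loader's own messages; the redirection catches anything that still slips through
system("/orahomepath/bin/sqlldr control=load.ctl silent=ALL > /dev/null 2>&1");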
Do you know of a way (via script or program) to find out if, e.g., a WMI script runs from a remote PC1 and performs some tasks on another PC2, while I am sitting at a third PC, PC3?
Assume that all PCs belong to the same network and domain and have Windows XP installed.
The reason for this is that I administer a small network and I think that one student shuts down the PC where another student works, via WMI scripting.
Is there a way to monitor (via script or program) such a thing, without disabling WMI remote access?
Thanks, everybody.
You can get the credentials used to perform the shutdown by looking at verbose WMI logs.
1) Enable verbose WMI logging
Run 'Wmimgmt.msc' (also available under My Computer > 'Manage' > 'Services and Applications' > 'WMI Control')
Select 'WMI Control (Local)', right click --> select 'Properties'
Select 'Logging' Tab, set 'Logging level' to Verbose
2) Look at the WMI log files (default location: %WINDIR%\system32\wbemLogs) to see a record of remote access and the actions taken. Specifically, look at wbemcore.log.
Example: When I logged in remotely I saw the following entry [<domain> and <username> here were the real ones used for the remote connection]:
(Thu Aug 13 <time>) : DCOM connection from <domain>\<username>
at authentiction level Packet, AuthnSvc = 9, AuthzSvc = 1, Capabilities = 0
Then, to execute the WMI method the student would need to GetObject Win32_OperatingSystem, which showed up like this:
(Thu Aug 13 <time>): CALL CWbemNamespace::GetObject
BSTR ObjectPath = win32_operatingsystem
long lFlags = 0
And finally you'd look for executing the Win32Shutdown method, which should log something like this:
(Thu Aug 13 <time>) : CALL CWbemNamespace::ExecMethodAsync
BSTR ObjectPath = Win32_OperatingSystem
BSTR MethodName = Win32Shutdown
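Once verbose logging is enabled, a quick way to check for that pattern (assuming the default log location) is:
findstr /i "Win32Shutdown" %WINDIR%\system32\wbemLogs\wbemcore.log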