WinSCP error while performing directory Sync - azure-webjobs

I've developed a .NET console application to run as a WebJob under Azure App Service.
This console app uses WinSCP to transfer files from the App Service filesystem to an on-prem FTP server.
The job is failing with the error below:
Upload of "D:\ ...\log.txt" failed: WinSCP.SessionRemoteException: Error deleting file 'log.txt'. After resumable file upload the existing destination file must be deleted. If you do not have permissions to delete file destination file, you need to disable resumable file transfers.
Here is the code snippet I use to perform the directory sync (I've disabled deletion):
var syncResult = session.SynchronizeDirectories(SynchronizationMode.Remote, localFolder, remoteFolder, false, false);
Any clues on how to disable resumable file transfers?

Use TransferOptions.ResumeSupport:
var transferOptions = new TransferOptions();
transferOptions.ResumeSupport.State = TransferResumeSupportState.Off;

var syncResult =
    session.SynchronizeDirectories(
        SynchronizationMode.Remote, localFolder, remoteFolder, false, false,
        SynchronizationCriteria.Time, transferOptions);
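As a small follow-up sketch (assuming the syncResult variable from the call above), you can also check the result so that individual transfer failures surface as exceptions instead of passing silently:

// Throws if any individual file transfer in the synchronization failed.
syncResult.Check();

// Optionally log what was uploaded.
foreach (TransferEventArgs upload in syncResult.Uploads)
{
    Console.WriteLine("Uploaded: " + upload.FileName);
}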

Related

Copy file from windows remote server to GCS bucket using Airflow

file_path = "\\wfs8-XXXXX\XXXX"
The file is on a remote server path. I am using Cloud Composer to automate my data pipeline,
so how will I be able to copy the files from the remote Windows server to a GCS bucket using Composer?
I tried to use LocalFilesystemToGCSOperator, but I am not able to provide any connection option to connect to the Windows remote server. Please advise.
upload_file = LocalFilesystemToGCSOperator(
    task_id="upload_file",
    src=PATH_TO_UPLOAD_FILE,
    dst=DESTINATION_FILE_LOCATION,
    bucket=BUCKET_NAME,
)
In this case you can use the SFTPToGCSOperator operator, for example:
copy_file_from_sftp_to_gcs = SFTPToGCSOperator(
    task_id="file-copy-sftp-to-gcs",
    sftp_conn_id="<your-connection>",
    source_path=f"{FILE_LOCAL_PATH}/{OBJECT_SRC_1}",
    destination_bucket=BUCKET_NAME,
)
You have to configure the SFTP connection in Airflow; you can check this topic for an example.

How do I connect to an Oracle cloud database within a C++ application?

I have a cloud database with Oracle and I'm trying to learn how to execute SQL commands within my C++ application on Windows.
Here's what I've tried.
1) Using Instant Client (OCCI)
I have tried to follow these docs posted by Oracle
I downloaded Instant Client, unzipped it, and put it under a directory called Oracle
I downloaded the SDK for Instant Client and put it under the same directory
I downloaded the wallet files and put them under the network/admin directory
I added the Oracle/instant_client directory to the PATH variable and created a user variable called ORACLE_HOME set to the same directory
Created a TNS_ADMIN user variable and set it to network/admin
Created an empty Visual Studio project, added the SDK to the additional include directories, and added the SDK to the additional library directories under the project properties
Added the .lib files to the additional dependencies under the project properties (Linker -> Input)
Then I wrote this code
Environment* env;
Connection* conn;
env = Environment::createEnvironment(Environment::DEFAULT);
conn = env->createConnection ("username", "password", "(DESCRIPTION=(ADDRESS=(PROTOCOL=tcp) (HOST=myserver111) (PORT=5521))(CONNECT_DATA = (SERVICE_NAME = bjava21)))");
env->terminateConnection(conn);
Environment::terminateEnvironment(env);
It compiles, runs, and builds the environment successfully, but an error occurs when it tries to create a connection.
2) Using ODBC
Downloaded the ODBC Oracle Driver
Added a new data source name to the system dsn
Tested connection successfully
Abandoned efforts as I couldn't find helpful and simple docs to help me with my project
3) Using Oracle Developer Tools for Visual Studio
I downloaded the developer tools and added them to visual studio
Connected to my Oracle database using Server Explorer
Was able to see my tables and data and modify them using the Server Explorer
Was unable to find docs or add code that allowed me to execute SQL queries
Update
I have removed ORACLE_HOME and TNS_ADMIN as environment variables
I've tried to connect to my remote database using the Instant Client SDK, but no luck.
I have tried the following and nothing has worked:
createConnection("username", "password", "(DESCRIPTION=(ADDRESS=(PROTOCOL=tcp) (HOST=myserver111) (PORT=5521))(CONNECT_DATA = (SERVICE_NAME = bjava21)))")
createConnection("username", "password", "//host:[port][/service name]")
createConnection("username", "password", "xxx_low")
createConnection("username", "password", "protocol://host:port/service_name?wallet_location=/my/dir&retry_count=N&retry_delay=N")
createConnection("username", "password", "username/password#xxx_low")
I'm able to connect with SQL*Plus in a variety of ways but not in my C++ application.
Error While Debugging:
Unhandled exception in exe: Microsoft C++ exception: oracle::occi::SQLException at memory location
Full Code
#include <occi.h>

using namespace oracle::occi;

int main() {
    Environment* env;
    Connection* conn;
    env = Environment::createEnvironment(Environment::DEFAULT);
    conn = env->createConnection("username", "password", "(DESCRIPTION=(ADDRESS=(PROTOCOL=tcp) (HOST=myserver111) (PORT=5521))(CONNECT_DATA = (SERVICE_NAME = bjava21)))");
    env->terminateConnection(conn);
    Environment::terminateEnvironment(env);
    return 0;
}
With Instant Client in general:
don't set ORACLE_HOME. This can have side effects.
you don't need to set TNS_ADMIN since you put the unzipped network files in the default directory.
For cloud:
In the app, use one of the network aliases from the tnsnames.ora file (e.g. xxxx_low). You can see the descriptors have extra info that your hard-coded descriptor is missing.
ODBC will be exactly the same. Once you have the wallet files extracted to the default network/admin subdirectory you just need to connect with the DB credentials and use a network alias from the tnsnames.ora file.
More info is in my blog post How to connect to Oracle Autonomous Cloud Databases.
Official doc is in Connect to Autonomous Database Using Oracle Database Tools

WebSphere 8.5.5 Error: PLGC0049E: The propagation of the plug-in configuration file failed for the Web server

I just installed a new WebSphere 8.5.5 ESB on Linux CentOS 7.
I did the whole installation as the root user.
Then I did the following steps to create a web service:
1) Create a server with user wasadmin
2) Generate the plugin
3) Propagate the plugin
In the last step I get the error:
PLGC0049E: The propagation of the plug-in configuration file failed for the Web server. test2lsoa01-02Node01Cell.XXXXXXXXX-node.IHSWebserver.
Error A problem was encountered transferring the designated file. Make sure the file exists and has correct access permissions.
The file /u01/apps/IBM/WebSphere/profiles/ApplicationServerProfile1/config/cells/test2lsoa01-02Node01Cell/nodes/XXXXX-node/servers/IHSWebserver/plugin-cfg.xml exists.
As a test, I gave plugin-cfg.xml chmod 777 permissions.
Still, the error is not going away.
Can someone help?
User wsadmin would be the user attempting to move the file. Ensure that ID can access /u01/apps/IBM/WebSphere/profiles/ApplicationServerProfile1/config/cells/test2lsoa01-02Node01Cell/nodes/XXXXX-node/servers/IHSWebserver/plugin-cfg.xml. There is also a target directory (in the web server installation that plugin-cfg.xml is being moved to); ensure that wsadmin has write access to that target location if propagating using node sync. If using IHS admin, ensure that the userid/password defined in the web server definition has write access to the target location.
A good test would be to access the source plugin-cfg.xml using the wsadmin userid and attempt to manually move the file to the target location with the appropriate ID (based upon the use of node sync or IHS admin).

Does AWS CPP S3 SDK support "Transfer acceleration"

I enabled "Transfer acceleration" on my bucket. But I dont see any improvement in speed of Upload in my C++ application. I have waited for more than 20 minutes that is mentioned in AWS Documentation.
Does the SDK support "Transfer acceleration" by default or is there a run time flag or compiler flag? I did not spot anything in the SDK code.
thanks
Currently, there isn't a configuration option that simply turns on transfer acceleration. You can, however, use the endpoint override in the client configuration to set the accelerated endpoint.
What I did to enable (working) transfer acceleration:
Set "Transfer Acceleration" to enabled in the bucket configuration on the AWS panel.
Add the permission s3:PutAccelerateConfiguration to the IAM user that I use inside my C++ application.
Add the following code to the S3 transfer configuration (bucket_ is your bucket name; the final URL must match the one shown in the AWS panel under "Transfer Acceleration"):
Aws::Client::ClientConfiguration config;
/* other configuration options */
config.endpointOverride = bucket_ + ".s3-accelerate.amazonaws.com";
Ask the bucket for acceleration before the transfer (docs here):
auto s3Client = Aws::MakeShared<Aws::S3::S3Client>("Uploader",
    Aws::Auth::AWSCredentials(id_, key_), config);

Aws::S3::Model::PutBucketAccelerateConfigurationRequest bucket_accel;
bucket_accel.SetAccelerateConfiguration(
    Aws::S3::Model::AccelerateConfiguration().WithStatus(
        Aws::S3::Model::BucketAccelerateStatus::Enabled));
bucket_accel.SetBucket(bucket_);
s3Client->PutBucketAccelerateConfiguration(bucket_accel);
You can check in the detailed logs of the AWS SDK that your code is using the accelerated endpoint, and you can also check that before the transfer starts there is a call to /?accelerate (info).
What worked for me:
Enabling S3 Transfer Acceleration within the AWS console.
When configuring the client, use only the accelerated endpoint:
clientConfig->endpointOverride = "s3-accelerate.amazonaws.com";
@gabry - your solution was extremely close. I think the reason it wasn't working for me was perhaps due to SDK changes since it was originally posted, as the change is relatively small, or maybe because I am constructing put object templates for requests used with the transfer manager.
Looking through the logs (Debug level), I saw that the SDK automatically concatenates the bucket used in transferManager::UploadFile() with the overridden endpoint. I was getting unresolved host errors as the requested host looked like:
[DEBUG] host: myBucket.myBucket.s3-accelerate.amazonaws.com
This way I could still keep the same S3_BUCKET macro name while only selectively calling this when instantiating a new configuration for upload.
e.g.
...
auto putTemplate = new Aws::S3::Model::PutObjectRequest();
putTemplate->SetStorageClass(STORAGE_CLASS);
transferConfig->putObjectTemplate = *putTemplate;

auto multiTemplate = new Aws::S3::Model::CreateMultipartUploadRequest();
multiTemplate->SetStorageClass(STORAGE_CLASS);
transferConfig->createMultipartUploadTemplate = *multiTemplate;

transferMgr = Aws::Transfer::TransferManager::Create(*transferConfig);
auto transferHandle = transferMgr->UploadFile(localFile, S3_BUCKET, s3File);
...

Azure WebJob FileTrigger Path 'D:\home\data\...' does not exist

I've created a WebJob to read files from Azure Files when they are created.
When I run it locally it works, but it doesn't when I publish the WebJob.
My Main() function is:
static void Main()
{
    string connection = "DefaultEndpointsProtocol=https;AccountName=MYACCOUNTNAME;AccountKey=MYACCOUNTKEY";
    JobHostConfiguration config = new JobHostConfiguration(connection);
    var filesConfig = new FilesConfiguration();
    if (config.IsDevelopment)
    {
        config.UseDevelopmentSettings();
        filesConfig.RootPath = @"c:\temp\files";
    }
    config.UseFiles(filesConfig);
    var host = new JobHost(config);
    // The following code ensures that the WebJob will be running continuously
    host.RunAndBlock();
}
The function to be triggered when the file is created is:
public void TriggerTest([FileTrigger(@"clients\{name}", "*.txt", WatcherChangeTypes.Created)] Stream file, string name, TextWriter log)
{
    log.WriteLine(name + " received!");
    // ...
}
And the error I get when the WebJob is published is:
[08/17/2016 00:15:31 > 4df213: ERR ] Unhandled Exception: System.InvalidOperationException: Path 'D:\home\data\clients' does not exist.
The idea is to make the WebJob trigger when new files are created in the "clients" folder of Azure Files.
Can someone help me?
Based on your requirement, I tested it on my side and reproduced your issue.
Unhandled Exception: System.InvalidOperationException: Path 'D:\home\data\clients' does not exist
When you publish the WebJob, FilesConfiguration.RootPath will be set to the "D:\HOME\DATA" directory when running in an Azure Web App. You can refer to the source code:
https://github.com/Azure/azure-webjobs-sdk-extensions/blob/master/src/WebJobs.Extensions/Extensions/Files/Config/FilesConfiguration.cs
As the following tutorial mentions, FilesConfiguration.RootPath should be set to a valid directory.
https://azure.microsoft.com/en-us/blog/extensible-triggers-and-binders-with-azure-webjobs-sdk-1-1-0-alpha1
Please check and make sure that the specified directory exists in the Web App that hosts your WebJob.
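For example, a rough sketch of how the Main method could set the root path for both cases, assuming the watched "clients" subfolder has already been created under D:\home\data (e.g. via the Kudu console):

var filesConfig = new FilesConfiguration();
if (config.IsDevelopment)
{
    config.UseDevelopmentSettings();
    filesConfig.RootPath = @"c:\temp\files";
}
else
{
    // In Azure, the default root is D:\home\data; the FileTrigger path
    // "clients\{name}" is resolved relative to RootPath, so
    // D:\home\data\clients must exist (or point RootPath at a folder that does).
    filesConfig.RootPath = @"D:\home\data";
}
config.UseFiles(filesConfig);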
Trigger when new files are created in the "clients" folder of Azure Files via WebJob
As far as I know, there are two triggers for Azure Storage:
QueueTrigger - when an Azure Queue message is enqueued.
BlobTrigger - when an Azure Blob is uploaded.
The new WebJobs SDK provides a file trigger which can trigger functions based on file events.
However, a file trigger can monitor file additions/changes in a particular directory, but there seems to be no trigger for monitoring file additions/changes on Azure File Storage.
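For illustration, a minimal sketch of what a blob-triggered function could look like with the WebJobs SDK, reusing "clients" from the question as a hypothetical blob container name (note this watches Blob Storage, not Azure Files):

public static void ProcessClientFile(
    [BlobTrigger("clients/{name}")] Stream file, // fires when a blob shows up in the 'clients' container
    string name,
    TextWriter log)
{
    log.WriteLine(name + " received!");
}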
In the Azure environment, WebJobs are stored in the local folder known as "D:\home", and "D:\local" is the local folder used by the web hooks. I needed a folder for temporary use: downloading a file from an SFTP server and then reading the file back from that local temporary location to consume it in my application.
I used "D:\local\Temp" as the temporary folder, which the code creates after checking whether it exists. After creating the folder, the code downloads a file from the server and stores it in this location, then reads it from the same location and finally deletes the file from that temporary folder.
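A rough sketch of that pattern (the file name below is a placeholder, and the actual SFTP download and processing steps are omitted):

using System.IO;

// Ensure the temporary folder exists before using it.
string tempFolder = @"D:\local\Temp";
if (!Directory.Exists(tempFolder))
{
    Directory.CreateDirectory(tempFolder);
}

// Placeholder file name; in practice this comes from the SFTP server.
string tempFile = Path.Combine(tempFolder, "download.tmp");

// ... download from the SFTP server to tempFile and consume it here ...

// Clean up the temporary file when done.
if (File.Exists(tempFile))
{
    File.Delete(tempFile);
}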