Realm unable to sync data among different devices - amazon-web-services

The following is my Realm sync configuration. I install the app on several devices, but data is not synced between them. My Realm Object Server runs on an AWS EC2 instance.
Realm.init(getApplicationContext());
SyncCredentials syncCredentials = SyncCredentials.usernamePassword("username", "password", false);
SyncUser user = SyncUser.login(syncCredentials, serverUrl());
SyncConfiguration config = new SyncConfiguration.Builder(user, realmUrl())
        .name("auth")
        .schemaVersion(schemaVersion)
        .build();
Realm.setDefaultConfiguration(config);

Related

Copy file from windows remote server to GCS bucket using Airflow

file_path = "\\wfs8-XXXXX\XXXX"
The files are on a remote Windows server path, and I am using Cloud Composer to automate my data pipeline.
How will I be able to copy the files from the remote Windows server to a GCS bucket using Composer?
I tried to use LocalFilesystemToGCSOperator, but I am not able to provide any connection option to connect to the remote Windows server. Please advise.
upload_file = LocalFilesystemToGCSOperator(
    task_id="upload_file",
    src=PATH_TO_UPLOAD_FILE,
    dst=DESTINATION_FILE_LOCATION,
    bucket=BUCKET_NAME,
)
In this case you can use the SFTPToGCSOperator, for example:
copy_file_from_sftp_to_gcs = SFTPToGCSOperator(
    task_id="file-copy-sftp-to-gcs",
    sftp_conn_id="<your-connection>",
    source_path=f"{FILE_LOCAL_PATH}/{OBJECT_SRC_1}",
    destination_bucket=BUCKET_NAME,
)
You have to configure the SFTP connection in Airflow; you can check this topic for an example.
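If it helps, here is a minimal sketch of registering such an SFTP connection programmatically; the connection id, host, and credentials below are hypothetical, the usual alternative is adding the connection through the Airflow UI (Admin → Connections) or the airflow connections CLI, and the remote Windows server must expose an SSH/SFTP endpoint (e.g. OpenSSH Server for Windows):
from airflow import settings
from airflow.models import Connection

# Hypothetical SFTP connection pointing at the remote Windows host.
conn = Connection(
    conn_id="sftp_windows",   # referenced by sftp_conn_id in the operator
    conn_type="sftp",
    host="wfs8-example",      # hypothetical hostname
    login="svc_user",         # hypothetical user
    password="secret",        # hypothetical password
    port=22,
)

# Persist the connection in the Airflow metadata database.
session = settings.Session()
session.add(conn)
session.commit()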

WinSCP error while performing directory Sync

I've developed a .NET console application to run as a WebJob under Azure App Service.
This console app is using WinSCP to transfer files from the App Service filesystem to an on-prem FTP server.
The job is failing with the error below:
Upload of "D:\ ...\log.txt" failed: WinSCP.SessionRemoteException: Error deleting file 'log.txt'. After resumable file upload the existing destination file must be deleted. If you do not have permissions to delete file destination file, you need to disable resumable file transfers.
Here is the code snippet I use to perform the directory sync (I've disabled deletion):
var syncResult = session.SynchronizeDirectories(SynchronizationMode.Remote, localFolder, remoteFolder, false, false);
Any clues on how to disable resumable file transfers?
Use TransferOptions.ResumeSupport:
var transferOptions = new TransferOptions();
transferOptions.ResumeSupport.State = TransferResumeSupportState.Off;
var syncResult =
    session.SynchronizeDirectories(
        SynchronizationMode.Remote, localFolder, remoteFolder, false, false,
        transferOptions);

Google Cloud IoT - Single MQTT client instance for all devices in a registry

I am able to publish events to a device in my Cloud IoT registry via an MQTT client created this way (using Paho for Python):
self.__client = mqtt.Client(
    client_id='projects/{}/locations/{}/registries/{}/devices/{}'.format(
        project_id, cloud_region, registry_id, device_id))
Now I'm wondering if I can create an MQTT client that is able to publish events for multiple devices by setting the client id at the registry level (i.e. not specifying the device id):
self.__client = mqtt.Client(
    client_id='projects/{}/locations/{}/registries/{}'.format(
        project_id, cloud_region, registry_id))
This client is not able to connect, even though I've added a CA certificate to the registry.
My question is: can a single MQTT Client instance publish events to a set of devices defined in a registry?
Should I use a gateway instead?
No, you can't send messages to a registry like this.
The way you'd want to do this is either 1) use a gateway, as you say: send one message, then spread it to the devices locally; or 2) grab the list of devices in the registry using the DeviceManagerClient() and iterate over them, sending the message to each device in a loop.
For fetching the list of devices in a registry, check out https://cloud.google.com/iot/docs/samples/device-manager-samples#list_devices_in_a_registry. Here is the snippet for Python:
from google.cloud import iot_v1

# project_id = 'YOUR_PROJECT_ID'
# cloud_region = 'us-central1'
# registry_id = 'your-registry-id'
print("Listing devices")
client = iot_v1.DeviceManagerClient()
registry_path = client.registry_path(project_id, cloud_region, registry_id)
devices = list(client.list_devices(request={"parent": registry_path}))
for device in devices:
    print("Device: {} : {}".format(device.num_id, device.id))
# return devices  (only needed when this snippet is wrapped in a function)
So in that for device in devices loop you can call your code to create the MQTT client and send the message you want to the specified device, roughly as sketched below.
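For illustration, here is a minimal sketch of that loop; publish_to_device is a hypothetical helper that wraps the per-device Paho client construction shown in the question (plus the usual JWT authentication and publish call), and the payload is made up:
payload = b'{"hello": "world"}'  # made-up payload
for device in devices:
    # Reuse the client_id pattern from the question, one connection per device.
    device_client_id = 'projects/{}/locations/{}/registries/{}/devices/{}'.format(
        project_id, cloud_region, registry_id, device.id)
    # Hypothetical helper: builds mqtt.Client(client_id=device_client_id),
    # authenticates against the MQTT bridge, and publishes the payload.
    publish_to_device(device_client_id, payload)
Note that connecting as each device requires that device's own credentials (a JWT signed with its private key), so this only works if those keys are available to the publisher.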

AWS DocumentDB connection problem with TLS

When TLS is disabled, I can connect successfully through my Lambda function using the same code as shown here - https://docs.aws.amazon.com/documentdb/latest/developerguide/connect.html#w139aac29c11c13b5b7
However, when I enable TLS and use the TLS-enabled code sample from the above link, my Lambda function times out. I've downloaded the rds-combined-ca-bundle.pem file with wget and I am deploying it along with my code to AWS Lambda.
This is the code where my execution stops and times out:
caFilePath = "rds-combined-ca-bundle.pem"
var connectionStringTemplate = "mongodb://%s:%s@%s:27017/dbname?ssl=true&sslcertificateauthorityfile=%s"
var connectionURI = fmt.Sprintf(connectionStringTemplate, secret["username"], secret["password"], secret["host"], caFilePath)
fmt.Println("Connection String", connectionURI)
client, err := mongo.NewClient(options.Client().ApplyURI(connectionURI))
if err != nil {
    log.Fatalf("Failed to create client: %v", err)
}
I don't see any errors in the CloudWatch logs after the "Connection String" print.
I suspect it's an issue with your VPC design. See "Connecting to an Amazon DocumentDB Cluster from Outside an Amazon VPC" (check the last paragraph):
https://docs.aws.amazon.com/documentdb/latest/developerguide/connect-from-outside-a-vpc.html
The link below also gives detailed instructions:
https://blog.webiny.com/connecting-to-aws-documentdb-from-a-lambda-function-2b666c9e4402
Can you try creating a Lambda test function using Python and see if you're having the same issue?
import pymongo
import sys
##Create a MongoDB client, open a connection to Amazon DocumentDB as a replica set and specify the read preference as secondary preferred
client = pymongo.MongoClient('mongodb://<dbusername>:<dbpassword>@mycluster.node.us-east-1.docdb.amazonaws.com:27017/?ssl=true&ssl_ca_certs=rds-combined-ca-bundle.pem&replicaSet=rs0&readPreference=secondaryPreferred')
##Specify the database to be used
db = client.test
##Specify the collection to be used
col = db.myTestCollection
##Insert a single document
col.insert_one({'hello':'Amazon DocumentDB'})
##Find the document that was previously written
x = col.find_one({'hello':'Amazon DocumentDB'})
##Print the result to the screen
print(x)
##Close the connection
client.close()

Moving files directly from S3 to FTP

I have a media-based web application running on AWS (EC2 Windows), and I'm trying to achieve scalability by putting the app and web servers in an auto scaling group.
My problem is that I need to move the media storage to S3 so that I can share it with the different app server clusters. But I also have to move these media files from S3 to different FTP servers. For that I currently have to download the files from S3 to the app server and then do the FTP upload, which takes too much time. Note that I am using ColdFusion as the application server.
Now I have two options to solve this:
Mount the S3 bucket on the EC2 instances (I know that is not recommended, and I'm not sure it would improve the speed of the FTP upload).
Use a Lambda function to upload files directly from S3 to the FTP servers.
I cannot use a separate EBS volume for each EC2 instance because:
The storage volume is huge and it would result in high cost.
I would need to sync the media storage across the different EBS volumes attached to the EC2 instances.
EFS is not an option as I'm using Windows storage.
Can anyone suggest a better solution?
That is pretty easy with Python:
from ftplib import FTP
from socket import _GLOBAL_DEFAULT_TIMEOUT
import urllib.request

class FtpCopier(FTP):

    source_address = None
    timeout = _GLOBAL_DEFAULT_TIMEOUT

    # host     → ftp host name / ip
    # user     → ftp login user
    # password → ftp password
    # port     → ftp port
    # encoding → ftp server encoding
    def __init__(self, host, user, password, port=21, encoding='utf-8'):
        self.host = host
        self.user = user
        self.password = password
        self.port = port
        self.connect(self.host, self.port)
        self.login(self.user, self.password, '')
        self.encoding = encoding

    # url           → any web URL (for example S3)
    # to_path       → ftp server full path (check that ftp destination folders exist)
    # chunk_size_mb → data read chunk size
    def transfer(self, url, to_path, chunk_size_mb=10):
        chunk_size_mb = chunk_size_mb * 1048576  # 1024*1024
        file_handle = urllib.request.urlopen(url)
        self.storbinary("STOR %s" % to_path, file_handle, chunk_size_mb)
Usage example:
ftp = FtpCopier("some_host.com", "user", "p#ssw0rd")
ftp.transfer("https://bucket.s3.ap-northeast-2.amazonaws.com/path/file.jpg", "/path/new_file.jpg")
But remember that Lambda execution time is limited to 15 minutes, so a timeout may occur before the file transfer completes. I recommend using ECS Fargate instead of Lambda; that allows the process to keep running as long as you want.
If the S3 file is not public, use presigned URLs to access it via urllib:
aws s3 presign s3://bucket/path/file.jpg --expires-in 604800
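The same URL can also be generated from Python with boto3 and fed straight into the FtpCopier above; a minimal sketch (the bucket name and key are placeholders):
import boto3

# Generate a presigned GET URL for a private S3 object
# (boto3 equivalent of the CLI command above).
s3 = boto3.client("s3")
url = s3.generate_presigned_url(
    "get_object",
    Params={"Bucket": "bucket", "Key": "path/file.jpg"},
    ExpiresIn=604800,  # 7 days, same as --expires-in above
)

ftp = FtpCopier("some_host.com", "user", "p#ssw0rd")
ftp.transfer(url, "/path/new_file.jpg")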