I have a pipeline setup wherein I have 3 main stages:
1) take input from a zipped file, unzip this file in S3, run some basic verification on each file to guarantee its integrity, then move to step 2
2) kick off 2 simultaneous processing tasks on separate EC2 instances (parallelizing this step saves us a lot of time, so we need it for efficiency's sake). Each EC2 instance runs data processing steps on some of the files in S3 which were unzipped in step 1; the files required are different for each instance.
3) after the 2 simultaneous processes are both done, spin up another EC2 instance to do the final data processing. Once this is done, run a cleanup job to remove the unzipped files from S3, leaving only the original zip file in its place.
So, one of the problems we're running into is that we have 4 EC2 instances that run this pipeline process, but there are some global parameters we would like each EC2 instance to have access to. If we were running on a single instance, we could of course use shell variables to accomplish this, but we really need the separate instances for efficiency. Currently our best idea is to store a flat file in the S3 bucket which holds these global variables and just read it on initialization and write back to it if the values need to change. This feels clunky, and it seems like there should be a better way, but we can't figure one out yet. I saw there's a way to set parameters which can be accessed at any stage of the pipeline, but it looks like you can only set these at the pipeline level, not at the granularity of each run of the pipeline. Does anyone have any resources that could help here? Much appreciated.
We were able to solve this by using DynamoDB to keep track of variables/state. The pipeline itself doesn't have any mechanism to do this, other than parameter values, which unfortunately only work per pipeline, not per job. You'll need to set up a DynamoDB table and then use the pipeline job ID to keep track of state, connecting via the CLI tools or some SDK.
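As a minimal sketch of that idea, assuming a DynamoDB table named pipeline_state with a string partition key run_id (the table name, key and attribute names are illustrative, not part of the original setup), each EC2 instance could read and write shared parameters like this:

```python
# Illustrative sketch (boto3): per-run shared state in DynamoDB, keyed by
# the pipeline job/run ID. Table and attribute names are assumptions.
import boto3

dynamodb = boto3.resource("dynamodb")
table = dynamodb.Table("pipeline_state")  # partition key: run_id (string)

def put_param(run_id, name, value):
    # Store or overwrite one global parameter for this pipeline run.
    table.update_item(
        Key={"run_id": run_id},
        UpdateExpression="SET #n = :v",
        ExpressionAttributeNames={"#n": name},
        ExpressionAttributeValues={":v": value},
    )

def get_param(run_id, name):
    # Read the parameter back from any other EC2 instance in the run.
    item = table.get_item(Key={"run_id": run_id}).get("Item", {})
    return item.get(name)
```

Each instance only needs the run ID (passed in at launch) to share state, which avoids the read/write-back dance with a flat file in S3.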
I have two AWS ElastiCache instances. One of the instances (let's say instance A) has very important data sets and connections on it; downtime is unacceptable. Because of this, instead of doing a normal migration (i.e. preventing the source from taking new writes, getting a dump of it, and restoring it to the new instance), I'm trying to sync instance A's data to another ElastiCache instance (let's say instance B). As I said, this process should be downtime-free. To do that, I tried RedisShake, but because AWS restricts certain commands (bgsave, config, replicaof, slaveof, sync, etc.), RedisShake does not work with AWS ElastiCache. It gives the error below.
2022/04/04 11:58:42 [PANIC] invalid psync response, continue, ERR unknown command `psync`, with args beginning with: `?`, `-1`,
[stack]:
2 github.com/alibaba/RedisShake/redis-shake/common/utils.go:252
github.com/alibaba/RedisShake/redis-shake/common.SendPSyncContinue
1 github.com/alibaba/RedisShake/redis-shake/dbSync/syncBegin.go:51
github.com/alibaba/RedisShake/redis-shake/dbSync.(*DbSyncer).sendPSyncCmd
0 github.com/alibaba/RedisShake/redis-shake/dbSync/dbSyncer.go:113
github.com/alibaba/RedisShake/redis-shake/dbSync.(*DbSyncer).Sync
... ...
I've tried rump for this as well, but it isn't stable enough to handle any important process. First of all, it doesn't run as a background process: when the first sync finishes, it exits with signal: exit done, so it won't pick up ongoing changes after that first pass.
Second, it only recognizes created/modified keys/values in each run. For example, on the first run the key apple equals pear and is synced to the destination as-is, but when I deleted the key apple and its value in the source and ran the rump sync script again, it was not deleted in the destination. So it isn't truly syncing the source and the destination. Plus, the last commit to the rump GitHub repo was about 3 years ago; it seems like a somewhat outdated project to me.
After all this information and these attempts, my question is: is there a way to sync two ElastiCache for Redis instances? As I said, there is no room for downtime in my case. If anyone with this kind of experience has a bulletproof suggestion, it would be much appreciated. I looked but unfortunately didn't find anything.
Thank you very much,
Best Regards.
If those two ElastiCache Redis clusters exist in the same account but in different regions, you can consider using an AWS ElastiCache Global Datastore.
https://docs.aws.amazon.com/AmazonElastiCache/latest/red-ug/Redis-Global-Datastores-Console.html
It has some restrictions: only certain regions and node types are supported, and both clusters must have the same configuration in terms of number of nodes, etc.
Limitations - https://docs.aws.amazon.com/AmazonElastiCache/latest/red-ug/Redis-Global-Datastores-Getting-Started.html
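If you prefer to script it rather than use the console, a hedged boto3 sketch might look like this (the replication-group IDs and region are placeholders; check the limitations page above for supported regions and node types):

```python
# Sketch: create a Global Datastore from an existing primary replication
# group via boto3 instead of the console. IDs and region are placeholders.
import boto3

elasticache = boto3.client("elasticache", region_name="us-east-1")

elasticache.create_global_replication_group(
    GlobalReplicationGroupIdSuffix="instance-a-global",        # placeholder suffix
    GlobalReplicationGroupDescription="Replicate instance A to another region",
    PrimaryReplicationGroupId="instance-a-replication-group",  # placeholder ID
)
```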
Otherwise, there's a simple brute-force mechanism you could code yourself, I believe (a rough sketch follows the note below).
Create a client EC2 instance (let's call it the Sync-er) that subscribes to a pub/sub channel on your ElastiCache Redis instance A.
Whenever there is new data, the Sync-er issues the corresponding WRITE commands against ElastiCache Redis instance B.
NOTE - You'll have to make sure that the clusters are in connectable VPCs.
ElastiCache is only reachable from resources within its VPC. If your instance A and instance B are in different VPCs, you'll have to peer them or connect them via Transit Gateway.
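A rough sketch of that Sync-er loop with redis-py is below. It assumes keyspace notifications are enabled on instance A via its parameter group (e.g. notify-keyspace-events set to "KEA"), the hostnames are placeholders, and it only handles simple SET and DEL events; a production version would need to cover more commands, TTLs and error handling:

```python
# Sync-er sketch: listen for keyspace events on instance A and replay the
# corresponding writes on instance B. Hostnames are placeholders.
import redis

src = redis.Redis(host="instance-a.xxxxxx.cache.amazonaws.com", port=6379)
dst = redis.Redis(host="instance-b.xxxxxx.cache.amazonaws.com", port=6379)

pubsub = src.pubsub()
# __keyevent@0__:<command> channels carry the name of the affected key.
pubsub.psubscribe("__keyevent@0__:set", "__keyevent@0__:del")

for msg in pubsub.listen():
    if msg["type"] != "pmessage":
        continue  # skip subscription confirmations
    key = msg["data"]
    if msg["channel"].endswith(b":del"):
        dst.delete(key)
    else:
        value = src.get(key)     # re-read the value from A...
        if value is not None:
            dst.set(key, value)  # ...and replay the write on B
```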
I have the following need - the code needs to call some APIs, get some data, and store it in a database (a flat file will do for our purpose). Since the APIs give access to a huge number of records, we want to split the work into 30 parts, each part scraping a certain section of the data from the APIs. We want these 30 scrapers to run on 30 different machines - and for that, we have a Python program that does the following:
Call the API, get the data, based on parameters (which part of the API to call)
Dump it to a local flat file.
And then later, we will merge the output from the 30 files into one giant DB.
The question is - which AWS tool should we use for our purpose? We can use EC2 instances, but we have to keep the EC2 console open on our desktop, where we connect to run the Python program, and it is not feasible to keep 30 connections open on my laptop. It is very complicated to get remote desktop on those machines, so logging in there, starting the job and then disconnecting is also not feasible.
What we want is this - start the tasks (one each on 30 machines), let them run and finish by themselves, and if possible notify me (or I can myself check for health periodically).
Can anyone guide me which AWS tool suits our purpose, and how?
"We can use EC2 instance, but we have to keep the EC2 console open on
our desktop where we connect to it to run the Python program"
That just means you are running the script wrong, and you need to look into running it as a service.
In general you need to look into queueing these tasks up in SQS and then triggering either EC2 auto scaling or Lambda functions, depending on whether your script can run within the Lambda runtime restrictions.
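For example, a small boto3 sketch that queues the 30 partitions as SQS messages might look like this (the queue name and message fields are assumptions; each worker, whether a Lambda function or an EC2 instance in an auto-scaling group, would pull one message, run the scraper for that part, and delete the message):

```python
# Sketch: enqueue one SQS message per scraping partition.
import json
import boto3

sqs = boto3.client("sqs")
queue_url = sqs.get_queue_url(QueueName="scraper-tasks")["QueueUrl"]  # placeholder name

for part in range(1, 31):
    sqs.send_message(
        QueueUrl=queue_url,
        MessageBody=json.dumps({"part": part}),  # which slice of the API to scrape
    )
```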
This seems like a good application for Step Functions. Step Functions let you orchestrate multiple Lambda functions, Glue jobs, and other services into a business process. You could write Lambda functions that call the API endpoints and store the results in S3. Once all the data is gathered, your state machine could trigger a Lambda function, Glue job, or something else that processes the data into your database. Step Functions help with error handling and retries and make it easy to monitor your process.
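As an illustration only, a Lambda handler that a Step Functions Map state could invoke once per partition might look like the sketch below; the bucket name, event shape and the fetch_part() helper are hypothetical placeholders rather than anything from the original answer:

```python
# Sketch of a per-partition Lambda handler invoked by a Step Functions
# Map state; it scrapes one slice of the API and stores the raw result in S3.
import json
import boto3

s3 = boto3.client("s3")

def fetch_part(part):
    # Hypothetical stand-in for the real API client for partition `part`.
    return [{"part": part, "example": True}]

def handler(event, context):
    part = event["part"]           # 1..30, supplied by the state machine
    records = fetch_part(part)
    s3.put_object(
        Bucket="scraper-results",  # placeholder bucket
        Key=f"raw/part-{part}.json",
        Body=json.dumps(records),
    )
    return {"part": part, "count": len(records)}
```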
I want to use the AWS Data Pipeline service and have created some pipelines using the manual JSON-based mechanism, which uses the AWS CLI to create, put and activate the pipeline.
My question is: how can I automate the editing or updating of the pipeline if something changes in the pipeline definition? Things I can imagine changing include the schedule time, the addition or removal of Activities or Preconditions, references to DataNodes, resource definitions, etc.
Once the pipeline is created, we cannot edit quite a few things as mentioned here in the official doc: http://docs.aws.amazon.com/datapipeline/latest/DeveloperGuide/dp-manage-pipeline-modify-console.html#dp-edit-pipeline-limits
This makes me believe that if I want to automate pipeline updates, I would have to delete and re-create/activate a new pipeline. If so, the next question is: how can I create an automated process which identifies the previous version's ID, deletes it and creates a new one? Essentially I am trying to build a release-management flow where the configuration JSON file is released and deployed automatically.
Most commands, like activate, delete, list-runs, put-pipeline-definition etc., take the pipeline-id, which is not known until a new pipeline is created. I am unable to find anything that remains constant across updates or re-creation (the unique-id and name parameters of the create-pipeline command are consistent, but I can't use them for the tasks mentioned above; I need the pipeline-id for that).
Of course I can try writing shell scripts that grep and search the output, but is there a better way? Is there some other information I am missing?
Thanks a lot.
You cannot edit schedules completely or change references, so creating/deleting pipelines seems to be the best way for your scenario.
You'll need the pipeline-id to delete a pipeline. Is it not possible to keep a record of that somewhere? You can have a file with the last used id stored locally or in S3 for instance.
Some other ways I can think of are:
If you have only one pipeline in the account, you can list-pipelines and use the only result.
If you have the pipeline name, you can list-pipelines and find the id.
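As a small sketch of that second idea with boto3 (the pipeline name "my-pipeline" is a placeholder), a deploy script could resolve the id by name and then delete/recreate the pipeline:

```python
# Sketch: look up a Data Pipeline's id by its name, then delete it so a
# new definition can be created and activated.
import boto3

dp = boto3.client("datapipeline")

def find_pipeline_id(name):
    marker = ""
    while True:
        resp = dp.list_pipelines(marker=marker)
        for p in resp["pipelineIdList"]:
            if p["name"] == name:
                return p["id"]
        if not resp.get("hasMoreResults"):
            return None
        marker = resp["marker"]

pipeline_id = find_pipeline_id("my-pipeline")  # placeholder name
if pipeline_id:
    dp.delete_pipeline(pipelineId=pipeline_id)
    # ...then create-pipeline, put-pipeline-definition and activate the new one
```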
I've followed the "Getting Started Guide" (the two-shards / two-replicas scenario) and everything worked perfectly.
Then I started using the Collections API, which is the preferred way of managing collections, shards and replication.
I launched two instances locally (afterwards with AWS, same problem)
I created a new collection with two shards using the following command:
/admin/collections?action=CREATE&name=collection1&numShards=2&collection.configName=collection
This successfully created two shards, one on each instance.
Then I launched another instance, expecting it to automatically set itself as a replica for the first shard, just like in the example. That didn't happen.
Is there something I'm missing?
There were two ways I was able to achieve this:
Manually, using the Collections API, I added a replica to shard1 and then another to shard2 (sketched below).
This is not good enough, as I will need this to happen automatically with Auto Scaling; otherwise I would have to micro-manage each server's "role" - which replicas of which shards of which collections it's handling - which complicates things a lot.
The second way, which I couldn't find any documentation for, is to launch an instance with a folder named "collectionX" containing a file named core.properties, with the following line in it:
collection=collection1
Each instance I launched that way was automatically added as a replica in a round-robin way.
(This also works with several collections.)
That's actually not a bad way at all, as I can pass parameters when I launch an AMI/instance in AWS.
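For reference, the manual first approach can be scripted against the Collections API too; this is just a sketch, assuming a Solr version that supports the ADDREPLICA action, with the host/port, collection and shard names as placeholders:

```python
# Sketch: add one replica to each shard of collection1 via the Collections API.
import requests

base = "http://localhost:8080/solr/admin/collections"
for shard in ("shard1", "shard2"):
    resp = requests.get(base, params={
        "action": "ADDREPLICA",
        "collection": "collection1",
        "shard": shard,
    })
    resp.raise_for_status()
```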
Thanks everyone.
Amir
1) You are running the wrong command; the complete command is as follows:
curl 'http://localhost:8080/solr/admin/collections?action=CREATE&name=corename&numShards=2&replicationFactor=2&maxShardsPerNode=10'
Here I have specified the replication factor, which is what causes replicas to be created for your shards.
I have a simple pure C program, that takes an integer as input, runs for a while (let's say an hour) and then returns me a text file. I want to run this program 1000 times with input integers from 1 to 1000.
Currently I'm running this program in parallel (4 processors), which takes 250 hours. The program is such that it fits on an AWS micro instance (I've tested it). Would it be possible to use 1000 micro instances in AWS to do the whole job in one hour (at a cost of ~$20, i.e. $0.02 per instance)?
If it is possible, does anybody have some guidelines on how to do that?
If it is not, does anybody have an also low-budget alternative to that?
Thanks a lot!
In order to achieve this, you will need to:
Create an S3 bucket to store your bootstrapping scripts, application, input data and output data.
Custom lightweight AMI: you might want to create a custom lightweight AMI that knows how to download the bootstrapping script (a launch sketch follows this list).
Bootstrapping script: downloads your software from the S3 bucket, parses a custom instance tag containing the integer [1..1000] and downloads any additional data.
Your application: does the actual processing.
End-of-processing script: uploads the result to another S3 results bucket and terminates the instance; you might also want to send an SNS notification to communicate the end-of-processing status.
If you need to consolidate the results, you might want to create another instance and use it as a coordinator, waiting for all "end of processing" notifications before finishing. In that case you might also consider Amazon's Hadoop MapReduce engine in the future, since it will do almost all of this heavy lifting for you.
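A rough boto3 sketch of the launch step mentioned above: each instance gets a work-item tag carrying its integer and a user-data script that pulls the bootstrapping script from S3. The AMI ID, bucket and script path are placeholders, and a real run has to respect the per-region instance limits mentioned below:

```python
# Sketch: launch tagged micro instances that bootstrap themselves from S3.
import boto3

ec2 = boto3.resource("ec2")

user_data = """#!/bin/bash
aws s3 cp s3://my-bucket/bootstrap.sh /tmp/bootstrap.sh   # placeholder bucket/script
bash /tmp/bootstrap.sh
"""

for i in range(1, 1001):
    ec2.create_instances(
        ImageId="ami-0123456789abcdef0",   # placeholder custom AMI
        InstanceType="t2.micro",
        MinCount=1,
        MaxCount=1,
        UserData=user_data,
        TagSpecifications=[{
            "ResourceType": "instance",
            "Tags": [{"Key": "work-item", "Value": str(i)}],  # the integer 1..1000
        }],
    )
```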
You didn't specify what language you'd like to do this in, and since the app you use to deploy your program doesn't have to be in the same language, I would recommend C#. There are a number of examples around the web on how to programmatically spawn new Amazon instances using the SDK. For example:
How to start an Amazon EC2 instance programmatically in .NET
You can create your own AMIs with the program already present, but that might end up being a pain if you want to make adjustments, since it will require you to recreate the entire AMI. I'd recommend creating an extra instance, or simply hosting the program at a location accessible from a public URL. Then I would create some kind of service, installed on the AMI ahead of time, that lets me specify the URL to download the app from along with whatever command-line parameters I want for that particular instance.
Hope this helps.
Be careful: by default you can only spawn 20 instances per region. You need to ask Amazon to raise the limit in order to use more instances.
If you want low cost but don't care about delay, you should use spot instances.
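For completeness, a hedged boto3 sketch of a spot request (the max price, AMI and instance type are placeholders):

```python
# Sketch: request spot capacity instead of on-demand instances.
import boto3

ec2 = boto3.client("ec2")

ec2.request_spot_instances(
    SpotPrice="0.005",               # max price per instance-hour (placeholder)
    InstanceCount=20,                # stays within the default per-region limit
    LaunchSpecification={
        "ImageId": "ami-0123456789abcdef0",  # placeholder AMI
        "InstanceType": "t1.micro",
    },
)
```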