How to connect Jenkins and AWS DynamoDB - amazon-web-services

I want to store values from Jenkins environment variables in AWS DynamoDB. Could anyone help me connect Jenkins and DynamoDB, either through manual configuration or through a Jenkins shell command?
Thank you in advance.

You can use the AWS CLI for this, launching the command from your Jenkins pipeline code.
More info here:
Using the AWS CLI with DynamoDB
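For example, a minimal sketch of such a pipeline shell step, assuming a hypothetical table named jenkins-builds with a build_id partition key (and a Jenkins instance/agent IAM role that allows dynamodb:PutItem), could look like this:

    #!/bin/bash
    # Sketch: write Jenkins environment variables to DynamoDB from a pipeline "sh" step.
    # Table name "jenkins-builds" and key "build_id" are hypothetical; adjust --region as needed.
    aws dynamodb put-item \
      --table-name jenkins-builds \
      --region eu-west-1 \
      --item "{
        \"build_id\": {\"S\": \"${JOB_NAME}-${BUILD_NUMBER}\"},
        \"job_name\": {\"S\": \"${JOB_NAME}\"},
        \"build_url\": {\"S\": \"${BUILD_URL}\"}
      }"

JOB_NAME, BUILD_NUMBER and BUILD_URL are standard Jenkins build environment variables; any other values you export in the pipeline can be written the same way.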

Related

AWS Glue interactive session in SageMaker notebooks via lifecycle configurations

I am trying to work with a Glue interactive session in a SageMaker notebook by configuring the glue-conda-pyspark kernel via AWS lifecycle configurations. It worked earlier when creating the notebook instance. Now the instance is running with the configuration, but I am no longer able to see the conda Glue PySpark kernel in the kernel list. Could anybody help with the create script and start script needed to run the notebook with glue-pyspark?
I am configuring it using this AWS doc: https://docs.aws.amazon.com/glue/latest/dg/interactive-sessions-sagemaker.html#is-sagemaker-existing
I also took help from the AWS GitHub sample scripts: https://github.com/aws-samples/amazon-sagemaker-notebook-instance-lifecycle-config-samples/blob/master/scripts/install-conda-package-single-environment/on-start.sh
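In case it is useful, a stripped-down on-start.sh sketch along the lines of the linked doc and sample might look like the following; the environment name, Python version and kernel path are assumptions, so please verify them against the referenced AWS documentation:

    #!/bin/bash
    # Rough sketch of a lifecycle-configuration on-start script (based on the linked
    # AWS doc/sample). Environment name, Python version and package paths are assumptions.
    set -e
    sudo -u ec2-user -i <<'EOF'
    source /home/ec2-user/anaconda3/bin/activate
    # Create (or reuse) a dedicated conda env and install the Glue interactive sessions kernels
    conda create --name glue_pyspark python=3.9 -y || true
    conda activate glue_pyspark
    pip install --upgrade jupyter boto3 aws-glue-sessions
    # Register the Glue PySpark kernel so it appears in the notebook kernel list
    SITE_PACKAGES=$(pip show aws-glue-sessions | grep Location | awk '{print $2}')
    jupyter kernelspec install "$SITE_PACKAGES/aws_glue_interactive_sessions_kernel/glue_pyspark" --user
    conda deactivate
    EOF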

Using AWS CodePipeline to create an API Gateway

Is it possible to create a new REST API Gateway using CodePipeline? I already have a Terraform script to create the pipeline, but I want to know if there is a way to create a pipeline that will take my script and propagate it from a dev-environment API Gateway to a test environment. I am trying to automate the pipeline so it can run the script for me once the code is updated in the CodeCommit repository.
Any suggestions would be greatly appreciated.
To run a script from AWS CodePipeline, you can use an AWS CodeBuild action in one of your CodePipeline stages.
With CodeBuild you can specify the list of commands you want to run, such as installing and running Terraform.
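As a rough sketch, the shell commands such a CodeBuild action could run from the build phase of its buildspec might be as simple as the following; the Terraform version, variable names and workspace layout are placeholders:

    #!/bin/bash
    # Sketch of build-phase commands for a CodeBuild action that applies a Terraform
    # script defining the API Gateway. Version, paths and -var values are placeholders.
    set -e
    # Standard CodeBuild images do not ship Terraform, so install a pinned release first
    curl -sSLo terraform.zip https://releases.hashicorp.com/terraform/1.5.7/terraform_1.5.7_linux_amd64.zip
    unzip -o terraform.zip -d /usr/local/bin
    terraform init
    terraform plan -var "environment=test" -out=tfplan
    terraform apply -input=false tfplan

Running the same project with different variable values (for example environment=dev, then environment=test) in two pipeline stages is one way to promote the API Gateway configuration between environments.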

How can I deploy (create/update/delete) CloudFormation templates from Jenkins to my AWS environment?

I have Jenkins installed on an AWS EC2 instance. My desired end state is that whenever I commit CloudFormation templates to my Bitbucket repo, Jenkins automatically creates/updates/deletes the CloudFormation stack.
My idea was to use AWS CloudFormation CLI commands in the Jenkinsfile after installing the AWS CLI on the server. Is there a better way of approaching this? I am new to DevOps.
You could try the AWS CloudFormation Plugin, but it is up for adoption and has not been updated in three years.
I would say your approach of using the AWS CloudFormation CLI commands is safer.
Using CLI commands in your Jenkins pipelines is a good practice.
I am a fan of setting up Jenkins pipelines with the S3 artifact manager, so pipeline artifacts such as CloudFormation templates are automatically available from S3. From there, just execute the CloudFormation stack in a Jenkins task.
If you're hosting Jenkins in AWS, it is also nice to add an IAM role to the instance to control which API actions Jenkins is allowed to run, and to use a plugin like CloudBees AWS CLI for your pipeline tasks.
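For example, a minimal sketch of the CLI calls a Jenkins sh step could run after checking out the repo (the stack and template names are placeholders, and the instance role needs matching CloudFormation permissions):

    #!/bin/bash
    # Sketch: deploy a CloudFormation template from a Jenkins "sh" step.
    # "deploy" creates the stack if it does not exist and updates it through a
    # change set otherwise; the names below are placeholders.
    aws cloudformation deploy \
      --template-file templates/my-stack.yml \
      --stack-name my-app-dev \
      --capabilities CAPABILITY_NAMED_IAM \
      --no-fail-on-empty-changeset
    # Deleting a stack is a separate call, e.g. in a cleanup job:
    # aws cloudformation delete-stack --stack-name my-app-dev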

Where to put an application property file for a Spark application running on AWS EMR

I am submitting a Spark application JAR to EMR as a job, and it uses a property file. I could put the file in S3 and, while creating the EMR cluster, download and copy it to some location on the EMR nodes. Is this the best way to do it, and can it be done at bootstrap time when creating the cluster itself?
Check the following screenshot.
Under "Edit software settings" you can add your own configuration, either inline as JSON or as a JSON file stored in an S3 location, and use it to pass configuration parameters to the EMR cluster at creation time. For more details, please check the following links:
Amazon EMR Cluster Configurations
Configuring Applications
AWS CLI
Hope this helps.
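To make that concrete, a sketch combining both options (a configuration JSON plus a bootstrap action that copies the property file from S3 onto each node) could look like this; the bucket names, paths, instance types and release label are placeholders:

    #!/bin/bash
    # Sketch: create an EMR cluster that applies a configuration JSON and runs a
    # bootstrap action copying an application property file from S3. All names are placeholders.
    #
    # copy-app-properties.sh (uploaded to s3://mybucket/bootstrap/) would contain:
    #   aws s3 cp s3://mybucket/config/app.properties /home/hadoop/app.properties
    aws emr create-cluster \
      --name "spark-app-cluster" \
      --release-label emr-6.10.0 \
      --applications Name=Spark \
      --instance-type m5.xlarge --instance-count 3 \
      --use-default-roles \
      --configurations file://configurations.json \
      --bootstrap-actions Path=s3://mybucket/bootstrap/copy-app-properties.sh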

Run Spark on AWS EMR by passing credentials

I am new to EMR and tried to launch a Spark job as a step using something like command-runner.jar spark-submit --deploy-mode cluster --class com.xx.xx.className s3n://mybuckets/spark-jobs.jar
However, the Spark job needs credentials as environment variables. My question is: what is the best way to pass the credentials as environment variables to the Spark jobs?
Thanks!
Have a look here: AWS EMR 4.0 - How can I add a custom JAR step to run shell commands, and here: http://docs.aws.amazon.com/emr/latest/ReleaseGuide/emr-hadoop-script.html
Try running the step with arguments like this: /usr/bin/spark-submit --deploy-mode cluster --class
I came to your question while googling the solution for myself. Right now, as a temporary solution, I am passing the credentials as command-line parameters. In the future, I am thinking of adding a custom bootstrap script that will fetch the data from a service and create the ~/.aws/credentials and config files.
I hope this helps; if you have discovered any other option, please post it here.
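For what it is worth, one way to get values into the driver and executors as environment variables in cluster mode is Spark's spark.yarn.appMasterEnv.* and spark.executorEnv.* configuration properties. A sketch is below; the variable names are placeholders, and an instance profile / IAM role is generally preferable to shipping long-lived keys this way:

    #!/bin/bash
    # Sketch: pass environment variables to a Spark job submitted as an EMR step.
    # In cluster mode the driver runs inside YARN, so spark.yarn.appMasterEnv.* sets
    # variables for the driver and spark.executorEnv.* sets them for the executors.
    spark-submit \
      --deploy-mode cluster \
      --conf spark.yarn.appMasterEnv.MY_SERVICE_USER="$MY_SERVICE_USER" \
      --conf spark.yarn.appMasterEnv.MY_SERVICE_PASSWORD="$MY_SERVICE_PASSWORD" \
      --conf spark.executorEnv.MY_SERVICE_USER="$MY_SERVICE_USER" \
      --conf spark.executorEnv.MY_SERVICE_PASSWORD="$MY_SERVICE_PASSWORD" \
      --class com.xx.xx.className \
      s3n://mybuckets/spark-jobs.jar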