Kinesis agent failing to start

I am trying to set up the Kinesis Agent on an Amazon EC2 instance, where it is supposed to come preinstalled.
But when I run the command:
sudo service aws-kinesis-agent start
It gives an error. Can someone help?
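A useful first step (a sketch; these are the Kinesis Agent's default paths on Amazon Linux, so adjust them if your install differs) is to read the agent's own log and sanity-check its configuration:
# The agent writes its startup errors here
tail -n 50 /var/log/aws-kinesis-agent/aws-kinesis-agent.log
# The agent refuses to start if this file is not valid JSON
python -m json.tool /etc/aws-kinesis/agent.json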

Related

Running AWS ECS Task Attached (Not Detached)

Is there an easy way to run an ECS Task attached, or to follow the logs only while the container is running (i.e. detach after displaying all of the associated logs)?
Using the AWS CLI (1.17.0) and ecs-cli (1.21.0), I have gotten decently close with the following two commands:
aws ecs run-task --cluster "mycluster" --task-definition testhelloworldjob --launch-type FARGATE --network-configuration etc.etc.etc.
ecs-cli logs --task-id {TASK_ID_HERE_FROM_OUTPUT_OF_PREVIOUS_COMMAND} --follow
I currently have two issues with the above approach:
There is a race condition: the logs are not available while the task is in a pre-"running" state. Instead of ecs-cli logs waiting for the logs to exist, an error is thrown immediately.
Even after waiting for the task to be in a running state and issuing ecs-cli logs, the command refuses to detach even AFTER the task is finished and in a post-running status.
For the first issue I could poll until the task is past the activating/pending status before calling logs. For the second issue I could write some kind of threaded call that polls and stops following the log once the container in question is no longer running... but there has to be an easier way? A sketch of that polling approach follows below.
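A minimal sketch of that polling approach using the AWS CLI's built-in ECS waiters (the cluster, task definition, and placeholder network configuration are the ones from the commands above; treat the --query path as an assumption to verify against your CLI version):
# Start the task and capture its ARN ("etc.etc.etc." stands in for the real network configuration)
TASK_ARN=$(aws ecs run-task --cluster "mycluster" \
  --task-definition testhelloworldjob --launch-type FARGATE \
  --network-configuration etc.etc.etc. \
  --query 'tasks[0].taskArn' --output text)

# Block until the task reaches RUNNING, then follow its logs in the background
aws ecs wait tasks-running --cluster "mycluster" --tasks "$TASK_ARN"
ecs-cli logs --task-id "${TASK_ARN##*/}" --follow &

# Block until the task stops, then detach by killing the log follower
aws ecs wait tasks-stopped --cluster "mycluster" --tasks "$TASK_ARN"
kill %1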
To clarify, I am coming from numerous other container orchestration tools/technologies that seemingly support this very seamlessly. Here are some examples of tools and their associated commands that would yield my intended results:
Docker CLI:
docker run hello-world
Docker-Compose Yaml:
docker-compose up
K8s Kubectl Yaml:
kubectl apply -f ./hello-k8.yaml && kubectl logs --follow hello-world
I think ecs-cli is the best option available at the moment.
Apart from that, you can change the log driver of the AWS ECS task to syslog and then watch the log file from the terminal after SSHing into the EC2 container instance where it is running.
Another thing you can do is SSH into the EC2 container instance where it was running before, run the container of that AWS ECS task yourself using docker run, and once the testing is done, stop and remove that container and get the task started again via AWS ECS.
Note: You can use AWS SSM Session Manager in order to avoid using an EC2 key pair and adding an inbound rule for SSH.
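For example (a sketch; the instance ID is a placeholder, and it assumes the instance runs the SSM agent with an instance profile that allows Session Manager, plus the session-manager-plugin on your machine):
aws ssm start-session --target i-0123456789abcdef0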

kops can't update aws cluster. RequestTimeout

I have an issue with a k8s cluster on AWS.
I tried to create a cluster on AWS.
The first step was creation:
kops create cluster --name=kubernetes.xarva.stream --state=s3://kops-bucket-pnz-se-kube1 --node-count=2 --node-size=t2.micro --master-size=t2.micro --dns-zone=kubernetes.xarva.stream --zones=eu-central-1a
and it was successfully completed. But when I tried to update the cluster with the command:
kops update cluster kubernetes.xarva.stream --state=s3://kops-bucket-pnz-se-kube1 --yes
I got this error:
error writing completed cluster spec: error writing configuration file s3://kops-bucket-pnz-se-kube1/kubernetes.xarva.stream/cluster.spec: error writing s3://kops-bucket-pnz-se-kube1/kubernetes.xarva.stream/cluster.spec: RequestTimeout: Your socket connection to the server was not read from or written to within the timeout period. Idle connections will be closed.
status code: 400, request id: 2***********3, host id: *********************************************6
On the S3 bucket I found the configuration, so it seems that it's a read issue.
Has anybody faced this problem? Any ideas how to solve it?
Thanks
RequestTimeout: Your socket connection to the server was not read from or written to within the timeout period. Idle connections will be closed.
This error says you have a problem with access to AWS S3 from the place where you have installed kops.
Make sure you have access to S3 and try again, or try to do this from another place.
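A quick way to verify S3 access from that machine (a sketch against the state bucket from the question):
# A hang or timeout here points at the network path to S3, not at kops
aws s3 ls s3://kops-bucket-pnz-se-kube1/
# Round-trip a small object to confirm writes work as well as reads
echo test | aws s3 cp - s3://kops-bucket-pnz-se-kube1/probe.txt
aws s3 rm s3://kops-bucket-pnz-se-kube1/probe.txt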

Logs to AWS Cloudwatch from Docker Containers

I have a few Docker containers running with docker-compose on an AWS EC2 instance, and I am looking to get the logs sent to AWS CloudWatch. I was also having issues getting the logs from Docker containers to CloudWatch from my Mac running Sierra, so I've moved over to EC2 instances running the Amazon Linux AMI.
My docker-compose file:
version: '2'
services:
  scraper:
    build: ./Scraper/
    logging:
      driver: "awslogs"
      options:
        awslogs-region: "eu-west-1"
        awslogs-group: "permission-logs"
        awslogs-stream: "stream"
    volumes:
      - ./Scraper/spiders:/spiders
When I run docker-compose up I get the following error:
scraper_1 | WARNING: no logs are available with the 'awslogs' log driver
but the container is running. No logs appear on the AWS CloudWatch stream. I have assigned an IAM role to the EC2 instance that the Docker containers run on.
I am at a complete loss now as to what I should be doing and would appreciate any advice.
The awslogs driver works without using ECS.
You need to configure the AWS credentials (the user should have the appropriate IAM permissions for CloudWatch Logs).
I used this tutorial, it worked for me: https://wdullaer.com/blog/2016/02/28/pass-credentials-to-the-awslogs-docker-logging-driver-on-ubuntu/
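In outline, that tutorial passes the credentials to the Docker daemon itself through a systemd drop-in (a sketch for systemd-based hosts such as the Ubuntu setup the post targets; the key values are placeholders):
sudo mkdir -p /etc/systemd/system/docker.service.d
sudo tee /etc/systemd/system/docker.service.d/aws-credentials.conf <<'EOF'
[Service]
Environment="AWS_ACCESS_KEY_ID=<your-access-key-id>"
Environment="AWS_SECRET_ACCESS_KEY=<your-secret-access-key>"
EOF
# Reload systemd and restart the daemon so the awslogs driver picks up the credentials
sudo systemctl daemon-reload
sudo systemctl restart docker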
I was getting the same error, but when I checked CloudWatch I was able to see the logs there. Did you check whether the log group has been created in CloudWatch? Docker doesn't support console logging when we use custom logging drivers.
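If the log group does not exist yet, it can be created up front (a sketch using the group name and region from the compose file above):
# Pre-create the CloudWatch Logs group the awslogs driver writes to
aws logs create-log-group --log-group-name permission-logs --region eu-west-1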
The section on limitations here says that the docker logs command is only available for the json-file and journald drivers, and that's true for built-in drivers.
When trying to get logs from a driver that doesn't support reading, nothing hangs for me; docker logs prints this:
Error response from daemon: configured logging driver does not support reading
There are three main steps involved:
Create an IAM role/user
Install the CloudWatch agent
Modify the docker-compose file or docker run command
I have referenced an article here with steps to send Docker logs to AWS CloudWatch.
The AWS logs driver you are using, awslogs, is for use with EC2 Container Service (ECS). It will not work on plain EC2. See the documentation.
I would recommend creating a single-node ECS cluster. Be sure the EC2 instance(s) in that cluster have a role, and that the role provides permissions to write to CloudWatch Logs.
From there, anything in your container that logs to stdout will be captured by the awslogs driver and streamed to CloudWatch Logs.
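For the permissions piece, attaching the AWS-managed CloudWatch Logs policy to the instance role is the quickest route (a sketch; ecsInstanceRole is a placeholder for whatever role your instances actually use):
aws iam attach-role-policy \
  --role-name ecsInstanceRole \
  --policy-arn arn:aws:iam::aws:policy/CloudWatchLogsFullAccess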

Kubernetes run on AWS

I've been struggling with configuring Kubernetes for many hours and I don't know how to move forward.
What I did :
I created a few services using Spring Cloud
I created Docker images for each service
I pushed those images to Docker Hub
I launched the cluster on AWS by running
export KUBERNETES_PROVIDER=aws; wget -q -O - https://get.k8s.io | bash
The command kubectl cluster-info shows that it actually works.
I created Kubernetes pods for each service. The command kubectl get pods
shows that all pods have the status Running.
The problem is that when I log in to my AWS account I don't see any running instances, although I can see kubernetes-staging created in my S3 bucket.
My goal is to actually access my service, not on localhost. How can I do it?
You should be able to see instances, of course. As @kichik mentioned, check whether your AWS console is using the same region as the deployment scripts.
To use your services/applications, the next step is to expose them to the public with Kubernetes services, as described here and here.
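For example, a Service of type LoadBalancer puts an AWS ELB in front of the pods (a sketch; the deployment name and ports are placeholders for your own services):
kubectl expose deployment hello-world --type=LoadBalancer --port=80 --target-port=8080
# The ELB's DNS name appears under EXTERNAL-IP once it has been provisioned
kubectl get service hello-world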

Linking github repository with my Amazon EC2 Instances AWS

I am new to GitHub and AWS. I want to deploy my code (a simple 'hello world' HTML page) directly from my GitHub repository onto my EC2 instance. I was following this tutorial: http://docs.aws.amazon.com/codedeploy/latest/userguide/github-integ-tutorial.html However, I am struggling on step 4.
It says: after you have 'launched the instance and verified the AWS CodeDeploy agent is running, go to the next step'.
But how do I verify the AWS CodeDeploy agent is running? It says to follow this link, but I am lost with it: http://docs.aws.amazon.com/codedeploy/latest/userguide/how-to-run-agent.html#how-to-run-agent-install-windows (Windows Server)
Where do I put these commands? And do I need the AWS SDK first?
Thanks
You can check whether the CodeDeploy agent is running with the command:
sudo service codedeploy-agent status
If the command returns an error, the AWS CodeDeploy agent is not installed. Install it as described in "To install, uninstall, or reinstall the AWS CodeDeploy agent for Amazon Linux or RHEL".
If the AWS CodeDeploy agent is installed and running, you should see a message like "The AWS CodeDeploy agent is running."
If you see a message like "error: No AWS CodeDeploy agent running", start the service by running the following two commands, one at a time:
sudo service codedeploy-agent start
sudo service codedeploy-agent status
See http://docs.aws.amazon.com/codedeploy/latest/userguide/how-to-run-agent.html if you want info for another OS type.
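Since the question mentions Windows Server: per the linked docs, the equivalent check there is run from an elevated command prompt (a sketch; codedeployagent is the service name the Windows installer registers):
powershell.exe -Command Get-Service -Name codedeployagent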