I'm trying to run Elasticsearch on Docker (specifically on AWS ECS). If I don't configure a volume it works correctly, but every time I restart the container I lose all the data.
I can't figure out how to configure the volume.
What I tried:
In the task definition I configured the volume with "Name=esdata1" and "source path=/usr/share/elasticsearch/data".
Inside the container definition, in the "Storage and Logging" section, I configured the mount point with "source volume=esdata1" and "container path=/usr/share/elasticsearch/data".
Now when I launch the container it fails with an "access denied" error when Elasticsearch tries to write to "/usr/share/elasticsearch/data". So in the Security section I configured "user=ec2-user", but then the container will not even launch (it stays in "status=created"). What should I do? I guess the issue is that the user inside the container must be the same as the one on the host. The user on the host is "ec2-user", and I don't know how to proceed.
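A likely fix sketch, assuming the official Elasticsearch image (which runs as UID/GID 1000 inside the container): make the host source path writable by that user instead of changing the container user. The path below is a stand-in for the real host path from the task definition.

```shell
# Sketch: the official Elasticsearch image runs as UID/GID 1000, so the
# host source path must be writable by that user. ES_DATA is a stand-in
# for the real host path configured in the ECS task definition.
ES_DATA=${ES_DATA:-/tmp/esdata1}
mkdir -p "$ES_DATA"
chown -R 1000:1000 "$ES_DATA" 2>/dev/null || true  # requires root on the host
ls -ld "$ES_DATA"
```

With the host directory owned by UID 1000, the "user" field in the container definition can be left at its default.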
Edit:
I'm now able to persist data with this configuration:
docker inspect:
"Mounts": [
    {
        "Name": "elasticsearch_data",
        "Source": "/var/lib/docker/volumes/elasticsearch_data/_data",
        "Destination": "/usr/share/elasticsearch/data",
        "Driver": "local",
        "Mode": "z",
        "RW": true,
        "Propagation": "rprivate"
    }
]
Now the data persists if I stop the container or reboot the host. My only remaining concern is that the folder "/var/lib/docker/volumes/elasticsearch_data/_data" is located on the OS volume and not on the bigger Docker volume. From the AWS docs:
Amazon ECS-optimized AMIs from version 2015.09.d and later launch with
an 8 GiB volume for the operating system that is attached at /dev/xvda
and mounted as the root of the file system. There is an additional 22
GiB volume that is attached at /dev/xvdcz that Docker uses for image
and metadata storage. The volume is configured as a Logical Volume
Management (LVM) device and it is accessed directly by Docker via the
devicemapper back end.
How can I persist data on /dev/xvdcz?
Thanks very much
Your source path is the path on the host instance where the data is written, in your case elasticsearch_data. You need to point the source path to a folder that exists on the disk you want to use on the EC2 instance.
So attach an EBS disk to the instance. Mount the disk in a place like /data/es and set your source path to that folder.
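A minimal sketch of those steps, assuming the attached volume shows up as /dev/xvdh (device name and filesystem type are assumptions; check with lsblk on the instance):

```shell
# Sketch: prepare an attached EBS volume as the ECS host source path.
# /dev/xvdh and /data/es are example names, not verified values.
DEVICE=/dev/xvdh
MOUNT_POINT=/data/es
# mkfs -t ext4 "$DEVICE"             # destructive: only run on a fresh volume
mkdir -p "$MOUNT_POINT"              # this becomes the task definition source path
# mount "$DEVICE" "$MOUNT_POINT"
# chown -R 1000:1000 "$MOUNT_POINT"  # Elasticsearch image runs as UID/GID 1000
echo "source path: $MOUNT_POINT"
```

The mkfs/mount lines are commented out because they require root and a real attached device; on the instance they would run once at setup time.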
But remember that to run ES properly you will probably need a cluster of connected machines and automated backups. Consider using the managed Elasticsearch Service from Amazon if you plan to host critical data. It does not sound like you have a very robust setup here.
Related
I'm using:
Packer v1.31.1
Amazon Linux 2 Base AMI
Concept
When I bake my AMI with Packer, I want to create an encrypted EBS volume that contains the contents of the JENKINS_HOME path. My current thinking is that I should be able to create an EBS volume that I can mount at a particular path on the Linux filesystem (/var/jenkins_home/ in this case).
What I've done so far
I've added the below snippet to my packer template, to create the EBS Volume.
"ami_block_device_mappings": [
    {
        "device_name": "/dev/sdh",
        "encrypted": true,
        "volume_size": 10,
        "volume_type": "gp2"
    }
]
Questions
Am I approaching this problem in the right way?
If so, how do you map an EBS volume to a path on the host?
1. Am I approaching this problem in the right way?
Yes, this is a good solution.
2. If so, how do you map an EBS Volume to a path on the host?
Add an entry to /etc/fstab (see fstab(5)), and then mount it.
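As a sketch: the /dev/sdh mapping above typically appears as /dev/xvdh on Amazon Linux 2 (the device name and filesystem type here are assumptions, not verified values).

```shell
# Sketch: persist the mount across reboots via an fstab entry. This writes
# to a stand-in file; on the real instance the line goes in /etc/fstab.
# mkfs -t xfs /dev/xvdh                  # format once (requires root, destructive)
FSTAB=/tmp/fstab.example
echo "/dev/xvdh /var/jenkins_home xfs defaults,nofail 0 2" >> "$FSTAB"
# mkdir -p /var/jenkins_home && mount /var/jenkins_home
cat "$FSTAB"
```

The nofail option keeps the instance bootable even if the volume is missing, which is a sensible default for secondary data volumes.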
I am using the following chef command to create a folder on /mnt
directory '/mnt/node/deploy' do
  owner 'ubuntu'
  group 'ubuntu'
  mode '0755'
  recursive true
  action :create
end
This is a part of a recipe which is invoked via packer to create an AWS AMI. ubuntu is the user that I use to deploy my code to a provisioned machine.
When I launch an EC2 instance using the AMI, this folder is not created on the machine. What could be the problem? I see no errors when the AMI is created.
Update -1
These are the logs. I tried using root.
amazon-ebs: * directory[/mnt/node/deploy] action create
amazon-ebs: - create new directory /mnt/node/deploy
amazon-ebs: - change mode from '' to '0755'
amazon-ebs: - change owner from '' to 'root'
amazon-ebs: - change group from '' to 'root'
I see that EC2 is mounting ephemeral storage on /mnt.
I want to create these folders on the ephemeral storage.
I unmounted /mnt, but did not see the folders there.
Packer runs Chef before creating the image. So, if I understand you correctly:
Chef creates the directory on the instance's ephemeral storage.
Packer creates the AMI.
You start the AMI and the directory does not exist on the ephemeral storage.
AFAIK that's expected behavior. The directory was created on a partition that is ephemeral, and such partitions are not expected to survive.
Summarizing: when you create an AWS AMI, it does not include the ephemeral storage, only the EBS volumes. Ephemeral partitions are always empty at startup. If you want to retain that directory, it must be on an EBS volume.
If you still want to use the /mnt directory, you can avoid mounting the ephemeral storage with the ami_block_device_mappings option:
"ami_block_device_mappings": [
    {
        "device_name": "/dev/sdb",
        "no_device": true
    }
],
And do the same for launch_block_device_mappings.
Another solution could be to run your Chef cookbook again in the newly created instance.
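A lighter-weight sketch of that idea: recreate the directory at boot via EC2 user data instead of re-running the whole cookbook. The path and user come from the question; treating this as a user-data script is an assumption.

```shell
#!/bin/bash
# Hypothetical EC2 user-data sketch: AMIs never capture instance-store
# contents, so recreate the directory on every boot instead of at bake time.
DEPLOY_DIR=${DEPLOY_DIR:-/mnt/node/deploy}
mkdir -p "$DEPLOY_DIR"
chown ubuntu:ubuntu "$DEPLOY_DIR" 2>/dev/null || true  # the ubuntu user may not exist everywhere
chmod 0755 "$DEPLOY_DIR"
echo "prepared $DEPLOY_DIR"
```

User data runs after the ephemeral storage is mounted on /mnt, so the directory lands on the instance store as intended.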
amazon-ebs is the name of the packer builder:
amazon-ebs - Create EBS-backed AMIs by launching a source AMI and
re-packaging it into a new AMI after provisioning. If in doubt, use
this builder, which is the easiest to get started with.
It runs the whole machine as EBS-backed so it can convert the EBS volume into an AMI later.
This is not related to Chef.
I am running a t2.micro ec2 instance on us-west-2a and instance's state is all green.
When I access my website, it stops loading once in a while. Even if I reboot the instance, the website still doesn't load. When I stop the instance and then relaunch it, it shows "1/2 status checks failed".
ALARM TYPE: awsec2-i-20aaa52c-High-Network-Out
I also faced the same type of issue.
EC2 instances were failing instance status checks after a stop/start. Looking at the system logs, I could confirm that the system was having a kernel panic and was unable to boot from the root volume.
So I launched a temporary EC2 instance and attached the EBS root volume of each affected instance to it. There we modified the GRUB configuration file so it would load a previous kernel.
The following commands:
1. Mount the EBS volume as a secondary volume into the /mnt folder: sudo mount /dev/xvdf1 /mnt
2. Back up the grub.cfg file: sudo cp /mnt/boot/grub2/grub.cfg grub.cfg_backup
3. Edit the grub.cfg file: sudo vim /mnt/boot/grub2/grub.cfg
4. Comment out (with #) all the lines of the first menu entry, which loads the new kernel.
Then attach the original EBS volume back to the original EC2 instance; it should then boot successfully.
http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/TroubleshootingInstances.html#FilesystemKernel
AWS Beanstalk can run applications from Docker containers.
As mentioned in the docs (http://docs.aws.amazon.com/elasticbeanstalk/latest/dg/create_deploy_docker_image.html), it's possible to define directory mappings to the EC2 instance's volume in Dockerrun.aws.json:
"Volumes": [
    {
        "HostDirectory": "/var/app/mydb",
        "ContainerDirectory": "/etc/mysql"
    }
]
but is it possible to mount a specific EBS volume?
For example, I need to run a database in a Docker container and deploy it with Beanstalk. Clearly I need persistence of the data, backup/restore for the database, etc.
You can mount EBS volumes on any Beanstalk environment. This volume will be available on the EC2 instances.
You can do this using ebextensions option settings. Create a file in your app source .ebextensions/01-ebs.config with the following contents:
option_settings:
  - namespace: aws:autoscaling:launchconfiguration
    option_name: BlockDeviceMappings
    value: /dev/sdj=:100,/dev/sdh=snap-51eef269,/dev/sdb=ephemeral0
The format of the mapping is <device name>=<volume>, with multiple mappings specified as a single comma-separated string. This example attaches to every instance in the Auto Scaling group an empty 100-GB Amazon EBS volume, an Amazon EBS volume created from the snapshot snap-51eef269, and an instance store volume.
Read more details about this option setting here.
Read more about ebextensions here.
Once you have mounted the EBS volume for your beanstalk environment instances, you can use the volume mapping as above to map directories per your need.
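Note that the BlockDeviceMappings option only attaches the block device; formatting and mounting it could be sketched in another ebextensions file. The file name, device name (/dev/xvdj, corresponding to the /dev/sdj mapping above), and mount point here are assumptions, not verified values:

```
# .ebextensions/02-mount.config (hypothetical sketch)
commands:
  01_format:
    # format only if the device has no filesystem yet (mkfs is destructive)
    command: "blkid /dev/xvdj || mkfs -t ext4 /dev/xvdj"
  02_mkdir:
    command: "mkdir -p /var/app/mydb"
  03_mount:
    # skip if already mounted (commands run on every deployment)
    command: "mountpoint -q /var/app/mydb || mount /dev/xvdj /var/app/mydb"
```

The guards make the commands idempotent, since ebextensions commands run on every deployment, not just at instance launch.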
I believe the leg100/docker-ebs-attach Docker container does what you want, i.e. makes a particular existing EBS volume available. You can either copy the .py file and the relevant Dockerfile statements, or create a multi-container EB setup and mount the volume from this container.
BTW, I tried mounting a new EBS volume as proposed by Rohit (plus the commands to format and mount it) and it works, but Docker does not see the mount until the Docker daemon is restarted.
I'm trying to use ElasticBeanstalk for an app with some big initial space requirements. It exceeds the default 8GB capacity of the EBS disk on the EC2 instances.
I tried to increase the disk capacity by using a bigger EC2 instance type. For instance, I used an m3.large, which AWS tells me should provide me with 1x32GB of storage.
However, when the Beanstalk environment launches, it still only shows 8GB. I tried to run the "resize2fs" command on the instance, but it didn't expand the volume to anything over 8GB.
Does anyone know how to get bigger instance storage on ElasticBeanstalk environments?
There is a better way to do this now, using RootVolumeType and RootVolumeSize in the aws:autoscaling:launchconfiguration namespace. Details are in the AWS Elastic Beanstalk option settings documentation.
Following is the relevant section from my cloudformation script to create elastic beanstalk
{
    "Namespace": "aws:autoscaling:launchconfiguration",
    "OptionName": "RootVolumeType",
    "Value": "gp2"
},
{
    "Namespace": "aws:autoscaling:launchconfiguration",
    "OptionName": "RootVolumeSize",
    "Value": "25"
}
This can be easily achieved through ebextensions also.
Example of solution for Elastic Beanstalk with ebextensions config:
application-root-dir/.ebextensions/001-filesystem.config:
option_settings:
  aws:autoscaling:launchconfiguration:
    RootVolumeType: gp2
    RootVolumeSize: "64"
The 8 GB disk you are seeing is an EBS root volume mounted on /. That's the same no matter what instance type you use, which is why it still only shows 8 GB. The 32 GB of storage is ephemeral storage attached to the instance (not EBS). It's possible that it's not mounting automatically but it's certainly there.
Two options for you:
You can try to get that 32 GB ephemeral storage mounted.
You can create and mount a separate EBS volume of whatever size you
need:
http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ebs-creating-volume.html
http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ebs-using-volumes.html
Either way, you will need to make whatever changes are required to point to this new storage, wherever you mount it in the filesystem. That can be done by changing your configuration or by creating symlinks from the old location to the new filesystem.
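The symlink approach can be sketched like this (the paths are stand-ins for illustration; the real ones would be your app's data directory and the new mount point):

```shell
# Move existing data onto the new volume and leave a symlink behind so
# existing configuration keeps working. Real paths would be something like
# /var/app/data -> /data/bigdisk/data.
OLD=/tmp/app-data             # original location (stand-in)
NEW=/tmp/bigdisk/app-data     # directory on the newly mounted volume (stand-in)
mkdir -p "$OLD" "$(dirname "$NEW")"
echo "hello" > "$OLD/file.txt"
mv "$OLD" "$NEW"              # relocate the data
ln -s "$NEW" "$OLD"           # old path now points at the new volume
cat "$OLD/file.txt"           # → hello
```

Stop the application before moving the data so nothing writes to the old path mid-move.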