Pre-warmed Java Docker image with class data sharing - Dockerfile

I want to create a Docker image that runs a Java service with OpenJ9's Class Data Sharing feature to improve startup performance.
I want to create the class cache while building the image, using a multi-stage Docker build.
I saw a few mentions of pre-warming a Docker image like this online:
https://github.com/barecode/adopt-openj9-spring-boot/blob/master/Dockerfile.openj9.warmed
However, I'm not able to recreate it. Here is my Dockerfile:
FROM adoptopenjdk/openjdk11-openj9:alpine as base
ADD libs/ /libs
ADD service.jar /service.jar
RUN mkdir /hi
WORKDIR /hi
RUN ls /
RUN java -Xshareclasses:name=mycache -Xshareclasses:cacheDir=/hi -Xshareclasses -jar /usr/share/app/service.jar &
RUN sleep 5
RUN ls -la /hi
FROM adoptopenjdk/openjdk11-openj9:alpine-jre
COPY --from=base libs/ /usr/share/app/libs
COPY --from=base service.jar /usr/share/app/service.jar
RUN /bin/sh -c 'ps aux | grep java | grep service | awk '{print $2}' | xargs kill -1'
#RUN java -Xshareclasses:listAllCaches
ENTRYPOINT ["java","-jar", "-Xshareclasses" , "-Xtune:virtualized", "-XX:+UseContainerSupport", "/usr/share/app/service.jar"]
My problem is that when I run
RUN java -Xshareclasses:name=mycache -Xshareclasses:cacheDir=/hi -Xshareclasses -jar /usr/share/app/service.jar &
and then expect the cache file to be saved in /hi, the file isn't there.
Any help will be appreciated.
Thanks.

OpenJ9 only honours the last -Xshareclasses option provided. This makes it easy to override earlier options on the command line when developing or debugging, since in some environments it's hard to modify the existing command-line arguments. In your RUN line, the final bare -Xshareclasses therefore replaces the name and cacheDir settings that came before it.
Change the command to:
RUN java -Xshareclasses:name=mycache,cacheDir=/hi -jar /usr/share/app/service.jar &
and the cache will be created in the /hi directory.
For example:
# java -Xshareclasses:name=mycache,cacheDir=/hi -version
openjdk version "11.0.4" 2019-07-16
OpenJDK Runtime Environment AdoptOpenJDK (build 11.0.4+11)
Eclipse OpenJ9 VM AdoptOpenJDK (build openj9-0.15.1, JRE 11 Linux amd64-64-Bit Compressed References 20190717_286 (JIT enabled, AOT enabled)
OpenJ9 - 0f66c6431
OMR - ec782f26
JCL - fa49279450 based on jdk-11.0.4+11)
# ls /hi
C290M11F1A64P_mycache_G37
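If you want the warmed cache to survive into the final image, the same fix can be carried through the multi-stage build. The following is only a minimal sketch under a few assumptions: the warm-up run and the wait are combined into a single RUN layer (a process backgrounded in one RUN instruction does not survive into the next), the 30-second sleep is a guess at the service's startup time, and the cache directory is copied into the runtime stage and opened read-only there.

FROM adoptopenjdk/openjdk11-openj9:alpine as base
ADD libs/ /usr/share/app/libs
ADD service.jar /usr/share/app/service.jar
# Warm the cache and wait in the same layer so the JVM has populated /hi
# before the layer is committed (the 30s figure is an assumption).
RUN mkdir /hi && \
    (java -Xshareclasses:name=mycache,cacheDir=/hi -jar /usr/share/app/service.jar &) && \
    sleep 30 && \
    ls -la /hi

FROM adoptopenjdk/openjdk11-openj9:alpine-jre
COPY --from=base /usr/share/app /usr/share/app
COPY --from=base /hi /hi
# Reuse the pre-built cache; the readonly sub-option avoids runtime writes.
ENTRYPOINT ["java", "-Xshareclasses:name=mycache,cacheDir=/hi,readonly", "-Xtune:virtualized", "-jar", "/usr/share/app/service.jar"]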

Related

Running a Qt GUI in a docker container

So, I have a C++ GUI based on Qt5 which I want to run from inside a Docker container.
When I try to start it with
docker run --rm -it my_image
this results in the error output
qt.qpa.xcb: could not connect to display localhost:10.0
qt.qpa.plugin: Could not load the Qt platform plugin "xcb" in "" even though it was found.
This application failed to start because no Qt platform plugin could be initialized. Reinstalling the application may fix this problem.
Available platform plugins are: eglfs, linuxfb, minimal, minimalegl, offscreen, vnc, xcb.
So I searched for how to do this. I found GUI Qt Application in a docker container, and based on that called it with
QT_GRAPHICSSYSTEM="native" docker run -it -e DISPLAY=$DISPLAY -v /tmp/.X11-unix:/tmp/.X11-unix my_image
which resulted in the same error.
Then I found Can you run GUI applications in a Docker container?.
The accepted answer in there seems to be specific to certain applications such as Firefox?
Scrolling further down, I found a solution that tells me to set X11UseLocalhost no in sshd_config and then call it like
docker run -v $HOME:/hosthome:ro -e XAUTHORITY=/hosthome/.Xauthority -e DISPLAY=$(echo $DISPLAY | sed "s/^.*:/$(hostname -i):/") my_image
this produces a slight variation of the error above:
qt.qpa.xcb: could not connect to display 127.0.1.1:13.0
qt.qpa.plugin: Could not load the Qt platform plugin "xcb" in "" even though it was found.
This application failed to start because no Qt platform plugin could be initialized. Reinstalling the application may fix this problem.
Available platform plugins are: eglfs, linuxfb, minimal, minimalegl, offscreen, vnc, xcb.
Following another answer, I added ENV DISPLAY :0 to my Dockerfile and called it with
xhost +
XSOCK=/tmp/.X11-unix/X0
docker run -v $XSOCK:$XSOCK my_image
This time, the first line of my error was qt.qpa.xcb: could not connect to display :0.
Then I tried another answer, added
RUN export uid=0 gid=0 && \
    mkdir -p /home/developer && \
    echo "developer:x:${uid}:${gid}:Developer,,,:/home/developer:/bin/bash" >> /etc/passwd && \
    echo "developer:x:${uid}:" >> /etc/group && \
    mkdir /etc/sudoers.d/ && \
    echo "developer ALL=(ALL) NOPASSWD: ALL" > /etc/sudoers.d/developer && \
    chmod 0440 /etc/sudoers.d/developer && \
    chown ${uid}:${gid} -R /home/developer
to my Dockerfile and called docker run -ti --rm -e DISPLAY=$DISPLAY -v /tmp/.X11-unix:/tmp/.X11-unix my_image, again same error.
I also tried several of the ways described in http://wiki.ros.org/docker/Tutorials/GUI, same error.
Am I doing something wrong? Note that I'm working on a remote machine via SSH, with X11 forwarding turned on of course (and the application works just fine outside of Docker). Also note that what I'm writing is a client-server application, and the server part, which needs no GUI elements but shares most of the source code, works just fine from its container.
I'm hoping for a solution that doesn't require changing the host system, since the reason I use Docker in the first place is so that users of my application can get it running without much hassle.
You have multiple errors that are covering each other. First of all, make sure you have the correct libraries installed. If your Docker image is Debian-based, that usually means a line like
RUN apt-get update && \
    apt-get install -y libqt5gui5 && \
    rm -rf /var/lib/apt/lists/*
ENV QT_DEBUG_PLUGINS=1
Note the environment variable QT_DEBUG_PLUGINS. This will make the output much more helpful and cite any missing libraries. In the now very verbose output, look for something like this:
Cannot load library /usr/local/lib/python3.9/site-packages/PySide2/Qt/plugins/platforms/libqxcb.so: (libxcb-icccm.so.4: cannot open shared object file: No such file or directory)
The part in parentheses (here libxcb-icccm.so.4) is the missing library file; you can find the package it's in with your distribution's package manager (e.g. dpkg -S libxcb-icccm.so.4 on Debian).
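As a concrete illustration of that lookup (the package name and path below are what dpkg typically reports on Debian/Ubuntu; treat them as assumptions for your distribution):

# Map the missing .so from the QT_DEBUG_PLUGINS output to a package, then install it
dpkg -S libxcb-icccm.so.4
# libxcb-icccm4:amd64: /usr/lib/x86_64-linux-gnu/libxcb-icccm.so.4
apt-get update && apt-get install -y libxcb-icccm4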
Next, start docker like this (can be one line, separated for clarity):
docker run \
-e "DISPLAY=$DISPLAY" \
-v "$HOME/.Xauthority:/root/.Xauthority:ro" \
--network host \
YOUR_APP_HERE
Make sure to replace /root with the guest user's HOME directory.
Advanced graphics (e.g. games, 3D modelling etc.) may also require mounting of /dev with -v /dev:/dev. Note that this will give the guest root access to your hard disks though, so you may want to copy/recreate the devices in a more fine-grained way.
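Putting the pieces together, a sketch of a Dockerfile and matching run command could look like the following. The base image, the appuser account, and the binary path are assumptions for illustration; adjust them to your project.

# Dockerfile (Debian-based base image assumed)
FROM debian:bullseye-slim
RUN apt-get update && \
    apt-get install -y --no-install-recommends libqt5gui5 && \
    rm -rf /var/lib/apt/lists/*
# Verbose plugin diagnostics while debugging; drop once it works
ENV QT_DEBUG_PLUGINS=1
COPY my_qt_app /usr/local/bin/my_qt_app
RUN useradd -m appuser
USER appuser
CMD ["my_qt_app"]

docker run --rm -it \
    -e "DISPLAY=$DISPLAY" \
    -v "$HOME/.Xauthority:/home/appuser/.Xauthority:ro" \
    --network host \
    my_image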
On the host system, allow X connections from Docker with xhost +local:root.
Then start your container:
docker run -it \
    -e DISPLAY=$DISPLAY \
    -v /tmp/.X11-unix:/tmp/.X11-unix \
    --name my_app \
    --rm \
    <your_image>

Is there any way I can run commands through `docker-compose` - for example, instantiate containers, run commands, and stop containers?

I suppose the title says it all.
The case is that I want to run a set of Odoo unit tests. What I have in mind is:
1. Run docker-compose up -d --build
2. Run the Odoo unit tests
3. Stop all running containers from step 1
Aside from making a shell script that runs these commands (commands to run Odoo Unit Tests), are there any more practical methods?
# init containers
docker-compose up -d --build
# run tests
container_id=$(docker ps | grep image_name | awk '{print $1}')
docker exec -it $container_id bash -c 'cmd to run unit tests'
# shutdown containers
docker-compose down
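If you'd rather not grep for the container ID, roughly the same flow can be written against the compose service name with docker-compose exec (the web service name and the test command are placeholders):

#!/bin/sh
set -e
docker-compose up -d --build
# 'web' is an assumed service name; -T disables TTY allocation so this also works in CI
docker-compose exec -T web bash -c 'cmd to run unit tests'
docker-compose down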
You can use the docker-compose run command to run Odoo tests. E.g.:
docker-compose run yourodooservice \
    --test-enable \
    --stop-after-init \
    -d yourdatabase \
    -i your_module
You can find more info for run parameters and restrictions from documentation at https://docs.docker.com/compose/reference/run/.
A good article about Odoo testing can be found at https://link.medium.com/TNN7jLbUoY. It also covers considerations when using docker and docker-compose.
I normally prefer running tests with pure docker rather than docker-compose; I find it easier to manage the dependencies that way. If compose fits better into your test and development workflow, I can see that as a valid choice as well.
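For reference, a pure-docker run of the same tests might look roughly like this; the image name, the database container, the module name, and the exact Odoo invocation all depend on how your image is built, so treat every name below as an assumption:

# assumes a PostgreSQL container named 'db' is already running
docker run --rm \
    --link db:db \
    my_odoo_image \
    odoo --test-enable --stop-after-init -d test_db -i my_module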

WebP support with AWS ElasticBeanstalk

I'm trying to add support for the WebP format with EB, but it's not working as expected...
I created a .config file in .ebextensions with this:
commands:
  01-command:
    command: wget https://storage.googleapis.com/downloads.webmproject.org/releases/webp/libwebp-0.5.0.tar.gz
  02-command:
    command: tar xvzf libwebp-0.5.0.tar.gz
  03-command:
    command: cd libwebp-0.5.0
  04-command:
    command: ./configure
  05-command:
    command: make
  06-command:
    command: sudo make install
But when deploying I got this error:
ERROR: Command failed on instance. Return code: 127 Output: /bin/sh: ./configure: No such file or directory.
Am I doing something wrong?
(environment: 64bit Amazon Linux 2015.09 v2.0.6 running PHP 5.6)
You need to execute the install post-deployment. (Incidentally, the error you're seeing happens because each commands entry runs in its own shell, so the cd libwebp-0.5.0 from one command doesn't carry over to the ./configure in the next.) AWS hasn't really documented how to execute commands post-deployment, so I'll do so here.
commands:
  create_post_dir:
    command: "mkdir /opt/elasticbeanstalk/hooks/appdeploy/post"
    ignoreErrors: true
files:
  "/opt/elasticbeanstalk/hooks/appdeploy/post/99_install_libwebp.sh":
    mode: "000755"
    owner: root
    group: root
    content: |
      #!/usr/bin/env bash
      . /opt/elasticbeanstalk/support/envvars
      cd $EB_CONFIG_APP_CURRENT
      wget https://storage.googleapis.com/downloads.webmproject.org/releases/webp/libwebp-0.5.0.tar.gz
      tar xvzf libwebp-0.5.0.tar.gz
      cd libwebp-0.5.0
      sudo ./configure
      sudo make
      sudo make install
As I mentioned, AWS has not really documented that you can actually execute scripts on ElasticBeanstalk post-deployment. If you take a look in the eb-commandprocessor.log file you will see that eb looks for AppDeployPreHook (4 of 6) and AppDeployPostHook (1 of 2). It will look something like this:
[2016-04-13T14:15:22.955Z] DEBUG [8851] : Loaded 6 actions for stage 0.
[2016-04-13T14:15:22.955Z] INFO [8851] : Running 1 of 6 actions: InfraWriteConfig...
[2016-04-13T14:15:22.962Z] INFO [8851] : Running 2 of 6 actions: DownloadSourceBundle...
[2016-04-13T14:15:23.606Z] INFO [8851] : Running 3 of 6 actions: EbExtensionPreBuild...
[2016-04-13T14:15:24.229Z] INFO [8851] : Running 4 of 6 actions: AppDeployPreHook...
[2016-04-13T14:15:28.469Z] INFO [8851] : Running 5 of 6 actions: EbExtensionPostBuild...
[2016-04-13T14:15:28.970Z] INFO [8851] : Running 6 of 6 actions: InfraCleanEbextension...
[2016-04-13T14:15:28.974Z] INFO [8851] : Running stage 1 of command CMD-AppDeploy...
[2016-04-13T14:15:28.974Z] DEBUG [8851] : Loaded 2 actions for stage 1.
[2016-04-13T14:15:28.974Z] INFO [8851] : Running 1 of 2 actions: AppDeployEnactHook...
[2016-04-13T14:15:29.600Z] INFO [8851] : Running 2 of 2 actions: AppDeployPostHook...
[2016-04-13T14:16:42.048Z] INFO [8851] : Running AddonsAfter for command CMD-AppDeploy...
That little "AppDeployPostHook" tells us that it is searching for scripts to run post-deployment. You can find the eb deployment scripts in the /opt/elasticbeanstalk directory on the server. If you ssh in and ls that directory you'll find hooks, which is what we're looking for; cd hooks and you'll find the appdeploy directory, and inside appdeploy an ls shows two directories, pre and enact. This seems mundane but is really useful, because now we know where eb looks for the scripts it runs. Since the AppDeployPreHook scripts execute from the "pre" directory, we know we'll need a "post" directory to execute a command post-deployment with that AppDeployPostHook eb is running. Now that we know what to do, we can start writing our commands.
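On the instance, that exploration looks something like this (the listing reflects the pre and enact directories described above; post does not exist yet, which is why the first command below creates it):

ls /opt/elasticbeanstalk/hooks/appdeploy
# enact  pre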
create_post_dir The first step is to actually create the "post" directory on the server using the mkdir command. mkdir "/opt/elasticbeanstalk/hooks/appdeploy/post" will do that for us, so we'll use that as the command.
files The files config allows us to create a file in a directory via ElasticBeanstalk. Pretty convenient for our purposes! The first line of the files action gives us the name of the file to create. We'll create a shell script to execute our commands, and you can call it whatever you want, but I'd start with 99 and go onwards. We'll call the shell script we're creating "99_install_libwebp.sh".
File settings The next three lines set the file settings. Make sure root owns them and that they're 000755'd.
File contents This is the content of the file we're creating. Straightforward: put your shell script in there and you're good to go.
Load environment vars We opted to load the eb environment variables so our script can know where the current version of the app is. It's usually in /var/app/current, but it could be elsewhere depending on a variety of factors. We'll use the environment variables to make life a bit easier for us.
Change to our current app directory We're going to cd to the current app directory so we can do what we're here to do.
Get the package we want Use wget to get the libwebp we want.
Unpack the package Self-explanatory.
Change to the package directory Now that we've unpacked the package, we can change into the package directory.
Do what we need to do We can now run our ./configure, make, and make install.
That's it. You can use the stealthy AppDeployPostHook to run pretty much any post deployment command that you need. Super useful if you need to install packages, restart services, or do anything else post deployment.
I added the code I deployed to Github, for easy reference too. https://github.com/hephalump/testphp
Note: I did this successfully on a slightly different environment. I used ElasticBeanstalk to deploy a new PHP application using the latest environment version, which is PHP 5.6 on 64bit Amazon Linux 2016.03 v2.1.0; the environment type you are using was not available as an option to me... Actually, this was the only version with PHP 5.6 that was available to me, so I went with that.
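To check the result, you can ssh into an instance after a deployment and verify that the tools are on the path (this assumes libwebp's default /usr/local install prefix):

which cwebp
cwebp -version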

How to customize the docker run command on Elastic Beanstalk?

Here's the thing: I need to tell Docker not to containerize the container's networking, because it needs to connect to a MongoDB that is inside a VPN (an enterprise private DB).
There is a Docker flag that lets me do exactly that: --net=host. Reference here.
So, for example, when running the container on my local machine, I will do something like:
docker run --rm -it --net=host [image-name]:[version] bash -il
And that command will do the trick. Thanks to that, I can connect to the "private" MongoDB.
So, my question is: is there a way to customize the docker run command of a Single Docker Environment on Elastic Beanstalk so I can add --net=host?
I have tried using container_commands in the config.yml file to add that instruction, but I don't think it does what I need. Here is a snippet:
container_commands:
  00-test_command:
    command: bundle exec thin --net=host
  01-networking-fix:
    command: "docker run --rm -it --net=host [image-name]:[version] bash -il"
I ended up fixing it with two container commands
container_commands:
  00_fix_networking:
    command: sed -i 's/docker run -d/docker run --net=host -d/' /opt/elasticbeanstalk/hooks/appdeploy/enact/00run.sh
  01_fix_docker_ip:
    command: sed -i 's/server $EB_CONFIG_NGINX_UPSTREAM_IP/server localhost/' /opt/elasticbeanstalk/hooks/appdeploy/enact/01flip.sh
Update:
I also had to fix the Upstart script. Unfortunately, I didn't write down what I did because I didn't end up needing to alter the docker run command. You would do a files directive for (I think) /etc/init/docker. AWS edits the Nginx configuration in the same manner as in 01flip.sh in that file as well.
Explanation:
In the 64bit Amazon Linux 2015.03 v2.0.2 running Docker 1.7.1 platform version, the file you need to edit is /opt/elasticbeanstalk/hooks/appdeploy/enact/00run.sh. This file is now far more complex than Samar's version so I didn't want to put the actual contents in there. However, the change is basically the same. There's the line that starts with
docker run -d
I fixed it with a container command:
container_commands:
  00_fix_networking:
    command: sed -i 's/docker run -d/docker run --net=host -d/' /opt/elasticbeanstalk/hooks/appdeploy/enact/00run.sh
This successfully adds the --net=host argument but now there's another problem. The system ends up with an invalid Nginx directive. Using --net=host means that when you run docker inspect <container id> there is no IP address in the NetworkSettings. AWS uses this to create the server directive for Nginx and ends up generating server :<some port you chose> (before adding --net=host it would look like server <ip>:<port>). I needed to patch that file, too. It's generated in /opt/elasticbeanstalk/hooks/appdeploy/enact/01flip.sh.
  01_fix_docker_ip:
    command: sed -i 's/server $EB_CONFIG_NGINX_UPSTREAM_IP/server localhost/' /opt/elasticbeanstalk/hooks/appdeploy/enact/01flip.sh
While Elastic Beanstalk is generally well suited to applications that work with a standard set of configurations, it's difficult to customize and to keep things in sync with the updates AWS makes to EB stacks. Having said that, I've done something like the below, which is a bit hacky but works fine.
files:
  "/opt/elasticbeanstalk/hooks/appdeploy/pre/04run.sh":
    mode: "000755"
    owner: root
    group: root
    encoding: plain
    content: |
      # script content of original 04run.sh along with modification on docker run cmd
      # e.g. I injected multi-ports here
      docker run -d \
        "${EB_CONFIG_DOCKER_ENV_ARGS[@]}" \
        "${EB_CONFIG_DOCKER_VOLUME_MOUNTS[@]}" \
        "${EB_CONFIG_DOCKER_ENTRYPOINT_ARGS[@]}" \
        "${PORT_ARGS[@]}" \
        $EB_CONFIG_DOCKER_IMAGE_STAGING \
        "${EB_CONFIG_DOCKER_COMMAND_ARGS[@]}" 2>&1 | tee /tmp/docker_run.log | tee $EB_CONFIG_DOCKER_STAGING_APP_FILE
This is not very neat; at the least, I have to make sure that it does not break with updates to the Elastic Beanstalk stack. The above is for the Docker 1.5 stack, but you can do something similar with the version you're running.
Note that the latest version of the AWS stack (with Docker 1.7.1) has a slightly different pre-deploy setup. You'll need to update the file at the location: /opt/elasticbeanstalk/hooks/appdeploy/enact/00run.sh
commands:
  00001_add_privileged:
    cwd: /tmp
    command: 'sed -i "s/docker run -d/docker run --privileged -d/" /opt/elasticbeanstalk/hooks/appdeploy/enact/00run.sh'
or, for example, if you want to pass args to your Docker image:
commands:
  00001_modify_docker_run:
    cwd: /tmp
    command: 'sed -i "s/\$EB_CONFIG_DOCKER_IMAGE_STAGING/\$EB_CONFIG_DOCKER_IMAGE_STAGING -gzip -enable-url-source/" /opt/elasticbeanstalk/hooks/appdeploy/enact/00run.sh'

Amazon Elastic Beanstalk - Change Timezone

I'm running an EC2 instance through AWS Elastic Beanstalk. Unfortunately it has the incorrect timezone - it's 2 hours earlier than it should be, because the timezone is set to UTC. What I need is GMT+1.
Is there a way to set up the .ebextensions configuration to force the EC2 instance to use the right timezone?
Yes, you can.
Just create a file .ebextensions/00-set-timezone.config with the following content:
commands:
  set_time_zone:
    command: ln -f -s /usr/share/zoneinfo/Australia/Sydney /etc/localtime
This assumes you are using the default Amazon Linux AMI image. If you use some other Linux distribution, just change the command to whatever that distribution requires to set the timezone.
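After the next deployment you can verify the change on the instance; a quick check (the zone here is just the example from above) is:

date                    # should now print local time for the configured zone
ls -l /etc/localtime    # should point at /usr/share/zoneinfo/Australia/Sydney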
This is a response from AWS Business Support, and it works!
---- Original message ----
How can I change the timezone of an environment, or rather of the instances of the environment, in Elastic Beanstalk to UTC/GMT -3 hours (Buenos Aires, Argentina)?
I'm currently using Amazon Linux 2016.03. Thanks in advance for your help.
Regards.
---------- Response ----------
Hello, thank you for contacting AWS Support regarding modifying your Elastic Beanstalk instances' time zone to use UTC/GMT -3 hours (Buenos Aires, Argentina). Please see below for steps on how to perform this modification.
The example below shows how to modify the timezone for an Elastic Beanstalk environment using .ebextensions on Amazon Linux:
Create a .ebextensions folder in the root of your application
Create a .config file, for example 00-set-timezone.config, and add the content below in YAML formatting.
container_commands:
  01changePHP:
    command: sed -i '/PHP_DATE_TIMEZONE/ s/UTC/America\/Argentina\/Buenos_Aires/' /etc/php.d/environment.ini
  01achangePHP:
    command: sed -i '/aws.php_date_timezone/ s/UTC/America\/Argentina\/Buenos_Aires/' /etc/php.d/environment.ini
  02change_AWS_PHP:
    command: sed -i '/PHP_DATE_TIMEZONE/ s/UTC/America\/Argentina\/Buenos_Aires/' /etc/httpd/conf.d/aws_env.conf
  03php_ini_set:
    command: sed -i '/date.timezone/ s/UTC/America\/Argentina\/Buenos_Aires/' /etc/php.ini
commands:
  01remove_local:
    command: "rm -rf /etc/localtime"
  02link_Buenos_Aires:
    command: "ln -s /usr/share/zoneinfo/America/Argentina/Buenos_Aires /etc/localtime"
  03restart_http:
    command: sudo service httpd restart
Deploy the application to Elastic Beanstalk, including the .ebextensions, and the timezone will change as per the above.
I hope that helps
Regards!
If you are running Windows in your EB environment:
Create a folder named .ebextensions in the root of your project.
Inside that folder create a file named timezone.config.
In that file add the following:
commands:
  set_time_zone:
    command: tzutil /s "Central Standard Time"
Set the time zone as needed.
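If you're unsure of the exact zone name Windows expects, tzutil can list the valid ones:

tzutil /l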
I'm using a custom .ini file in the php.d folder, along with the regular recommendations from http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/set-time.html#change_time_zone:
The sed command rewrites only the first line of /etc/sysconfig/clock, since the second line (UTC=true) should be left alone, per the AWS documentation above.
# .ebextensions/02-timezone.config
files:
  /etc/php.d/webapp.ini:
    mode: "000644"
    owner: root
    group: root
    content: |
      date.timezone="Europe/Amsterdam"
commands:
  01_set_ams_timezone:
    command:
      - sed -i '1 s/UTC/Europe\/Amsterdam/g' /etc/sysconfig/clock
      - ln -sf /usr/share/zoneinfo/Europe/Amsterdam /etc/localtime
Changing the time zone of EC2 with Elastic Beanstalk is simple:
Create a .ebextensions folder in the root
Add a file with a filename ending in .config (e.g. timezone.config)
Inside the file:
container_commands:
  time_zone:
    command: ln -f -s /usr/share/zoneinfo/America/Argentina/Buenos_Aires /etc/localtime
Then you're done.
Note that container_commands is different from commands; the documentation states:
commands run before the application and web server are set up and the application version file is extracted.
That's the reason your time zone command doesn't work: at that point the server hasn't been set up yet.
container_commands run after the application and web server have been set up and the application version file has been extracted, but before the application version is deployed.
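To make the ordering concrete, here is a pared-down sketch that uses both phases, modelled on the support answer above (the zone and the php.ini path are just the examples already used there; the key names are placeholders):

commands:                      # run before the application and web server are set up
  01_set_system_clock:
    command: ln -f -s /usr/share/zoneinfo/America/Argentina/Buenos_Aires /etc/localtime
container_commands:            # run after setup, just before the app version is deployed
  01_set_php_timezone:
    command: sed -i '/date.timezone/ s/UTC/America\/Argentina\/Buenos_Aires/' /etc/php.ini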
If you are running a Java/Tomcat container, just put the JVM option in the configuration:
-Duser.timezone=America/Sao_Paulo
Possible values: any tz database time zone.
Moving to AWS Linux 2 was challenging. It took me a while to work out how to do this easily in .ebextensions.
I wrote up the simple solution in another Stack Overflow question, but for anyone needing instant gratification, add the following commands to the file .ebextensions/xxyyzz.config:
container_commands:
  01_set_bne:
    command: "sudo timedatectl set-timezone Australia/Brisbane"
  # separate entry so both commands run; a repeated 'command' key under one entry would be dropped
  02_restart_cron:
    command: "sudo systemctl restart crond.service"
These workarounds only fix the timezone for applications. But when you have any system services, such as a cron run, they look at /etc/sysconfig/clock, and that is always UTC. If you tail the cron logs or aws-sqsd logs you will notice the timestamps are still 2 hrs behind - in my case. And a change to the clock setting needs a reboot in order to take effect - which is not an option to consider if you have autoscaling in place or want to use ebextensions to change the system clock's config.
Amazon is aware of this issue and I don't think they have resolved it yet.
If your EB application is using the Java/Tomcat container, you can add the JVM timezone Option to the Procfile configuration. Example:
web: java -Duser.timezone=Europe/Berlin -jar application.jar
Make sure to add all configuration options before the -jar option, otherwise they are ignored.
In .ebextensions I added the below for PHP:
container_commands:
  00_changePHP:
    command: sed -i '/;date.timezone =/c\date.timezone = \"Australia/Sydney\"' /etc/php.ini
  01_changePHP:
    command: sed -i '/date.timezone = UTC/c\date.timezone = \"Australia/Sydney\"' /etc/php.d/aws.ini
  02_set_tz_AEST:
    command: "sudo timedatectl set-timezone Australia/Sydney"
  03_restart_cron:
    command: "sudo systemctl restart crond.service"
commands:
  01remove_local:
    command: "rm -rf /etc/localtime"
  02change_clock:
    command: sed -i 's/\"UTC\"/\"Australia\/Sydney\"/g' /etc/sysconfig/clock
  03link_Australia_Sydney:
    command: "ln -f -s /usr/share/zoneinfo/Australia/Sydney /etc/localtime"
    cwd: /etc
Connect to the AMI (Amazon Linux instance) via PuTTY or SSH and execute the commands below:
sudo rm /etc/localtime
sudo ln -sf /usr/share/zoneinfo/Europe/Istanbul /etc/localtime
sudo reboot
The procedure above is simply:
remove localtime,
update the timezone,
reboot
Please note that I've changed my timezone to Turkey's local time; you can find your timezone by listing the zoneinfo directory with the command below:
ls /usr/share/zoneinfo
or just check timezone abbreviations via Wikipedia:
http://en.wikipedia.org/wiki/Category:Tz_database
You can also check out the related Amazon AWS documentation:
http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/set-time.html
Note: I'm not sure whether this is best practice (probably not), but I've applied the procedure written above and it works for me.