I have a Django app deployed on ECS. On my first deployment I needed to run fixtures, a management command, inside the container.
I want to be able to run fixtures conditionally, not only on the first deployment. One approach I was considering is to maintain a variable in my environment and run fixtures from entrypoint.sh accordingly.
Is this a good way to go about it? Also, what are some other standard ways to do the same?
Let me know if I have missed any details you might need to understand my problem.
You probably need to handle it in your entrypoint.sh script. As far as my experience goes, you won't be able to run commands conditionally on ECS without a script.
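A minimal entrypoint.sh sketch of that approach. The variable name RUN_FIXTURES and the fixture file are assumptions; adjust them to your setup:

```shell
#!/bin/sh
set -e

# Run fixtures only when the RUN_FIXTURES env var (set in the ECS task
# definition) is "1" -- the variable name is an assumption, pick your own.
if [ "$RUN_FIXTURES" = "1" ]; then
    echo "Loading fixtures..."
    python manage.py loaddata initial_data.json  # hypothetical fixture file
fi

# Hand off to the container's main command (e.g. gunicorn)
exec "$@"
```

Flipping the variable in the task definition then controls whether fixtures run on the next deployment, without rebuilding the image.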
I want to add unit testing for my Ansible playbook. I am new to this and have tried a few things, but didn't understand much. How can I start on this and write a test case properly?
Here is a simple example:
yum:
  name: httpd
  state: present
Ansible is not a programming language but a tool that checks whether the state you describe is aligned with the state of the node you run it against. So you cannot unit test your tasks: in a certain way, they are already tests by themselves. The underlying ansible binary that runs those tasks has unit tests itself, used during its development.
Your example above asks Ansible to test whether httpd is present on the target machine; it will return ok if this is the case, changed if it had to install the package to fulfill the requirement, or an error if something went wrong.
Meanwhile, the fact that you cannot unit test your Ansible code does not mean no tests are possible at all. You can perform basic static checks with yamllint and ansible-lint. To go further, you will have to run your playbook/role/collection against a test target node.
This has become quite easy with CI systems that let you spawn virtual machines or docker containers from scratch and run your script to check that no error is fired, that the --check option passes successfully, that idempotency is obeyed (i.e. nothing should change on a second run with the same parameters), and that everything works as expected (e.g. in your case above, port 80 is open and you get the default Apache web page).
You can write those kinds of tests yourself (running against localhost in a test VM, for example). This Mac App Store CLI role by geerlingguy uses such tests through Travis CI, as an example.
You can also use existing tools that help you write those tests in a more structured way, like molecule. Here are some example roles using it, if you are interested:
Redis role by geerlingguy
nexus3-oss role by ThoTeam [1]
[1] Note for transparency: I am the maintainer of this example repository
I'm looking to automatically deploy my app once we release a new version. We use CircleCI, so firing these commands shouldn't be a big deal.
cf login -a https://api.lyra-836.appcloud.swisscom.com -u myuser -p secret
cf push myapp
However, I don't want to expose my personal credentials (Passport account) in our git repository. Is it possible to generate an API key for that purpose?
How do you handle that? I might also need to SSH into the instance to run some migration scripts after the deployment; the same question applies there.
Currently, Swisscom's Application Cloud does not offer technical accounts, but you can easily create an additional account. Then add it to your org/space as a developer and it should be able to fulfill your needs.
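If your user has sufficient permissions, granting the extra account the developer role can be scripted with the cf CLI (the user, org, and space names below are placeholders):

```shell
# Placeholders: replace the user, org, and space with your own values.
DEPLOY_USER="ci-deploy@example.com"

cf set-space-role "$DEPLOY_USER" myorg myspace SpaceDeveloper
```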
CircleCI documentation has a section about handling secrets: Using CircleCI Environment Variables
Setting environment variables for all commands without adding them to git
Occasionally, you'll need to add an API key or some other secret as an environment variable. You might not want to add the value to your git history. Instead, you can add environment variables using the Project settings > Environment Variables page of your project.
This documentation describes how to store encrypted stuff within your VCS.
If you prefer to keep your sensitive environment variables checked into git, but encrypted, you can follow the process outlined at circleci/encrypted-files.
Currently on deployment I get:
Hook /opt/elasticbeanstalk/hooks/preinit/30directories.sh failed
I want to remove the hook entirely using .ebextensions, I am currently using:
/.ebextensions/01-remove-unused.config
commands:
  removeunused:
    command: "rm -f /opt/elasticbeanstalk/hooks/preinit/30directories.sh"
    ignoreErrors: true
files:
  "/opt/elasticbeanstalk/hooks/preinit/30directories.sh":
    mode: "000755"
    owner: root
    group: root
    content: |
      #!/usr/bin/env bash
      ls
I'm not sure how relevant this is, but what platform of ElasticBeanstalk are you using? For 64bit Amazon Linux 2016.09 v2.3.0 running Docker 1.11.2 specifically (and maybe other platforms), I don't believe that there is any way to do this the way that you are describing.
Unfortunately, the preinit scripts are executed well before ElasticBeanstalk injects .ebextensions into your environment, and they are only run when a fresh instance is started. To confirm this, you can inspect /var/log/eb-activity.log on a freshly deployed ElasticBeanstalk instance, which shows you everything related to the bootstrapping process that AWS logs for you. Search for Initialization/PreInitStage0/PreInitHook in this log file, and then also search for .ebextensions; you will see that the preinit scripts are indeed executed before almost everything else, and the .ebextensions files come much later. (For what it's worth, this blog post might further help you understand which hooks get run at which times.)
What you could potentially do is configure an .ebextensions script to execute before all other non-preinit hook scripts and re-execute (and potentially undo changes from) all of the preinit scripts. However, I would guess that this would be more trouble than it is worth, as there are likely unintended side effects that could come from it.
At any rate, these are my findings trying to do something similar. Hopefully, this helps (despite the fact that I haven't technically solved your problem)!
We continuously build our apps with Jenkins and deploy them to our different spaces:
...
cf login -a https://api.lyra-836.appcloud.swisscom.com -u ...
cf target -s development
cf push scs-flux-monitoring-development
...
Now we have noticed that the push sometimes targets the wrong space when installing the app. We think this is because another Jenkins job is doing a parallel push. As far as we can see, .cf/config.json stores the name of the space, and when another cf target is called, all pushes use that new target.
Has anyone else seen this behaviour? Any suggestions on how to solve it?
Kind regards
Josef
There are a couple options you could go with:
Use a CI solution that doesn't allow shared state between different jobs. Just as Cloud Foundry uses containers to isolate apps, there are CI solutions out there that use containers to isolate builds. One great example is Concourse CI, which is actually the main solution used by the core Cloud Foundry development teams.
Have every Jenkins job use a different location for CF_HOME so they don't all share ~jenkins/.cf:
$ cf help | grep CF_HOME
CF_HOME=path/to/dir/ Override path to default config directory
I have been trying to find out the best way to run background jobs using PHP on AWS Elastic Beanstalk, and after many hours searching on Google and SO, I believe that one good solution is using SWF and activity workers.
I found this example buried in the aws-sdk-for-php: https://github.com/amazonwebservices/aws-sdk-for-php/tree/master/_samples/AmazonSimpleWorkflow/cron
The read-me file says:
To run this sample, you need to execute three scripts from the command line in separate terminal/console windows
and
Note that the start_cron_example_workflow.php script will exit quickly
while the decider and activity worker scripts keep running until you
manually terminate them.
The decider and activity worker will loop "forever", and running these in EB is what I'm having trouble with.
In my .ebextensions directory I have a file that executes these files:
container_commands:
  01background_task:
    command: "php -f start_cron_example_activity_workers.php"
  02background_task:
    command: "php -f start_cron_example_workflow_workers.php"
But I get the following error messages:
ERROR
Failed to deploy application version.
ERROR
Some instances have not responded to commands. Responses were not received from [i-a5417ed4].
Any way I can do this using config files? How can I make this work in AWS EB without introducing a single point of failure?
Thank you.
You might consider using a service like IronWorker: it is specifically designed for what you are trying to do and will probably work better than putting together your own solution on a micro instance.
I have not used Iron.io yet, but I have been evaluating it, as I am looking to move my stuff over to AWS and need cron jobs handled as well.
Have you taken a look at Fat Controller? It can daemonise anything. There's documentation and examples on the website: http://fat-controller.sourceforge.net