When does an MWAA environment update its requirements, and is it an automatic or manual process? - amazon-web-services

Pretty much title.
Suppose I edit the requirements.txt file to add a new package or to change package versions. When will the environment apply these changes, and what manual steps do I have to take (if any) to make that happen?
The AWS documentation on the procedure states that one must simply go through the 'Edit' settings, but is that necessary in my case, where I didn't even change the filename?

In the MWAA settings you don't change the name of requirements.txt (the filename stays the same); you change which version of the file the environment points to. Choose the latest version of the file in the 'Edit' flow and save it. As soon as you save, the MWAA environment restarts automatically to update and install the packages listed in requirements.txt, so applying the change is a manual step.
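If you script your deployments, the same update can be triggered from the AWS CLI; a minimal sketch, assuming the bucket is versioned, the requirements S3 path is already configured on the environment, and the bucket and environment names below are placeholders:
# Upload the new requirements.txt and point the environment at that object version.
aws s3 cp requirements.txt s3://my-mwaa-bucket/requirements.txt
LATEST=$(aws s3api list-object-versions --bucket my-mwaa-bucket --prefix requirements.txt \
  --query 'Versions[?IsLatest].[VersionId]' --output text)
aws mwaa update-environment --name my-mwaa-environment --requirements-s3-object-version "$LATEST"
The update-environment call is what kicks off the restart and reinstall described above.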

Related

Does every installed policy have to come from a package? (express-gateway)

I read the documentation and I wonder whether the only way to install policies (under a plugin) is from a package.
I know that I can create a local package and install it as a file, but I wonder if I missed a way to just create my specific policy (for example, under a 'policies' folder, next to 'config') and register it with the gateway, without any npm involvement.
So is there a way, or did I miss something?
You can definitely load your plugins and policies directly without having to go through npm.
All you need to do is specify the package file location in the plugin definition, in the system.config.yml file.
You can see an example of this technique here.
I hope that clarifies things!
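For illustration, the plugin definition can point at a local file rather than an installed npm package; a rough sketch of the relevant system.config.yml section, where the plugin name and path are made up and the exact keys should be checked against the express-gateway plugin docs:
plugins:
  my-local-plugin:
    package: ../plugins/my-local-plugin/manifest.js
The manifest file referenced there is where the plugin registers its policies with the gateway.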

After editing a locally installed package on Heroku, it resets

I've noticed that once the packages listed in "requirements.txt" have been installed, they are not installed again every time I push changes to the Heroku application I'm working on, so I assumed those files were no longer being modified.
I then changed a file in /app/.heroku/python/lib/python2.7/site-packages/target_library/target_file, but when I do git push the file goes back to its original state, even though the library is not being installed again.
Is there a way to keep the libraries from being reset, or any workaround?
Based on the last answer:
"or fork the library on GitHub and install the forked version."
Here are the few steps I tested, and it worked for me:
1- Fork the package repo on GitHub.
2- Edit it and change whatever you need.
3- Now remove the original package name from your requirements.txt and replace it with git+https://github.com/your-github-username/forked-edited-package.git
Now it should simply install the edited package to your Heroku dyno when you deploy the project.
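For example, the relevant requirements.txt line might go from the plain PyPI name to the fork (the package, username, and branch names here are placeholders):
# before
some-package==1.2.3
# after
git+https://github.com/your-github-username/some-package.git@my-fix
pip understands the git+https form, so on the next deploy Heroku installs your edited code instead of the PyPI release.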
No, this can't possibly work. Heroku will always install the packages directly from PyPI and won't know anything about your modifications. I don't know why you say they aren't installed again - on the contrary, they are.
Are you sure you really need to do this? It's a fairly unusual thing to do. If you are sure you do, then the only thing to do is either to copy the files into your own project, or fork the library on GitHub and install the forked version.

How to name a BOSH release tarball?

Using: bosh create release --final --with-tarball --version <release version>
I get a package with the name <release version>.tgz.
However, it's not named the way I want, and since the documentation on the command line is lacking and I didn't write the command that automates this, it would be helpful if someone could pick apart exactly what these flags and commands do.
Googling again while I wait just in case I missed something!
$ bosh create release --help
Usage: bosh [options]
    --force              bypass git dirty state check
    --final              create final release
    --with-tarball       create release tarball
    --dry-run            stop before writing release manifest
    --name NAME          specify a custom release name
    --version VERSION    specify a custom version number (ex: 1.0.0 or 1.0-beta.2+dev.10)
A BOSH release is a way to package up software (source code and already-compiled binaries) so it can be deployed in a distributed fashion and managed by a BOSH director. In other words, once you have a BOSH director running, you can give it a release (or several), plus a manifest describing what you want the distributed deployment to look like (or several manifests), and the director will facilitate everything: deploying, upgrading, failure recovery, etc.
To create your own BOSH release, all your bits must live in a git repository that's structured in a special way. Given such a repo, you can run bosh create release from the root of the repository to produce the artifact you then later upload to the Director when you actually want to deploy.
--force: Normally the BOSH CLI will complain if your git repo is dirty, i.e. it thinks you're about to build a release with some unintentional changes. Use this flag to skip this check. Note that once you've uploaded a release to a Director, you can say bosh releases and it will tell you the names, versions, and git commit SHAs of the uploaded releases. This can be nice if you have a release on a Director and don't know exactly where it came from: you can at least see the SHA, so you can check out the repo at that SHA. If you built the release from a dirty repo, you'll see a small + next to the SHA in the bosh releases output, so then you don't really know how that release was made.
--with-tarball: The primary artifact created when you do bosh create release is a YAML file that describes all the packages and jobs that make up your release. When you do bosh upload release, it will determine which of these jobs and packages already exist on the director, put the rest in a tarball, and upload it to the Director. If you pass the --with-tarball flag during create release, it will put everything in a tarball. This is only useful if you want that tarball for some purpose other than immediately uploading to a Director, i.e. if you want to put the tarball in some shared location so that other people (or perhaps other steps in a CI pipeline) can use the tarball without having to re-run bosh create release or even check out the repo for that matter.
--final: The YAML file described above is usually something you don't bother checking in. However, if you build a "final" release, it will place the YAML file in a different directory which you do want to check in. When creating a final release, it will make sure your blobs are also synced with a "final blobstore", so that someone checking out your repo will be able to deterministically build the same final release, because they will also get the "official" blobs from the final blobstore. Final release versions, blobs, etc. are meant to be globally unique, so that anyone using this release gets something deterministic when they use a final version of the release. "Final version" means something perhaps like "major version". This is in contrast with "Dev versions" where two developers could both be working with something called version 18+dev.20 and actually have totally different bits.
--name: This is not the name of the generated file, it's the name of the release itself. I.e. it's a piece of metadata in the YAML file mentioned above. If you upload the release and do bosh releases, you will see this name. When you are writing a deployment manifest to actually deploy the stuff in the release, you will refer to it by this name.
--version: Similar to name, this is the version of the release. If you don't specify your own version, BOSH will determine the version for you based on the previous version, plus whether or not you added the --final flag. If the previous version were 18+dev.20 then with --final the new version would be 19 and without, the new version would be 18+dev.21.
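Putting the name and version flags together with the ones above, a typical invocation might look like this (the release name and version are made up for illustration):
bosh create release --final --with-tarball --name my-release --version 1.0.0
bosh upload release
bosh releases    # lists the name, version and commit SHA of each uploaded release
The last command is where the name, version, and SHA discussed above become visible on the Director.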
The bosh create release command does not let you choose a location or name for the resulting tarball. You could open an issue here if that's a feature you need. However, in most use cases, if you're just building the release in order to upload it to a Director, you don't need the file: bosh upload release will upload the right thing. In fact, you don't even need to pass --with-tarball in this case. On the other hand, if you need to know where the tarball is, because you're going to upload it to some shared location for instance, you can script it like this:
# Capture the create-release output so the tarball path can be parsed out of it.
CF_RELEASE_OUT="${TMPDIR}/create-release.out"
bosh -n create release --with-tarball --version "$VERSION" | tee -a "$CF_RELEASE_OUT"
# The "Release tarball" line reports the path as its fourth space-separated field.
TARBALL=$(grep -a "Release tarball" "$CF_RELEASE_OUT" | cut -d " " -f4)

c:\ınetpub\wwwroot\mysite\website\sitecore\shell\override is invalid

I installed Sitecore 6.4, but after login I get this error:
The directory name c:\ınetpub\wwwroot\mysite\website\sitecore\shell\override is invalid.
Description: An unhandled exception occurred during the execution of the current web request. Please review the stack trace for more information about the error and where it originated in the code.
I uninstalled Sitecore and installed it again, but the result is the same. Can someone help me, please?
By default this folder is not created when you do a fresh install of Sitecore. I have had this many times; essentially you must manually create the folder, and also ensure the app pool identity has write permissions to it. If you have your Visual Studio solution open, also close and reopen it, as the change will not be picked up if you are running WebDev.
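As a rough sketch of those two steps from an elevated command prompt (the site path and app pool name below are placeholders for your own):
mkdir c:\inetpub\wwwroot\mysite\website\sitecore\shell\override
icacls c:\inetpub\wwwroot\mysite\website\sitecore\shell\override /grant "IIS AppPool\MySiteAppPool":(OI)(CI)M
On older setups the worker process may run as NETWORK SERVICE or a custom account instead, so grant write access to whatever identity the application pool actually uses.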
I ran into this problem as well. My problem was that I had my project committed to Git and I was trying to pull files from Git to my local machine to set up the project.
The problem with Git is that it doesn't commit empty folders, so \website\sitecore\shell\override was not committed to the repo, and when I pulled, the folder didn't exist on my local machine either.
Creating the folder manually resolved the issue.
As mentioned by @pranav-shah, git doesn't support adding empty folders, so if you are using git and doing clean builds it is likely you are running into this problem.
To get around it you can just create an empty file in the override folder. I recommend following the suggestion in this answer and calling it .keep
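For example, from the repository root (adjust the path to your own layout):
mkdir -p website/sitecore/shell/override
touch website/sitecore/shell/override/.keep
git add website/sitecore/shell/override/.keep
With the .keep file committed, the folder comes back on every clean clone or build.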
Whenever I run into this, it's the app pool identity missing write permissions to the folder. It often applies to the following folders too, under the sitecore directory:
* shell\controls\debug
* shell\applications\debug
(I think there's one more but too tired to remember right now).
If you run the installer it normally takes care of these issues. Also be sure to read the manual installation steps in the Sitecore documentation, available on the Sitecore Developer Network.

Tomcat + Hudson and testing a Django Application

I'm using Hudson for the expected purpose of testing our Django application. In initial testing, I would deploy Hudson using the war method:
java -jar hudson.war
This worked great. However, we wanted to run the Hudson instance on Tomcat for stability and more flexibility around security.
Now, with Hudson running on Tomcat, it does not seem to recognize previously-recognized Python tools like virtualenv. Here's the output from a test:
+ bash ./config/testsuite/hudson-build.sh
./config/testsuite/hudson-build.sh: line 5: virtualenv: command not found
./config/testsuite/hudson-build.sh: line 6: ./ve/bin/activate: No such file or directory
./config/testsuite/hudson-build.sh: line 7: pip: command not found
virtualenv and pip were both installed using sudo easy_install. Where are they?
virtualenv: /usr/local/bin/virtualenv
pip: /usr/local/bin/pip
Hudson now runs under the tomcat6 user. If I su to the tomcat6 user and check for virtualenv, it is found. Thus, I am at a loss as to why Hudson doesn't see it.
I tried taking the commands out of the script and placing them line by line into the shell-execute box in Hudson, and I still get the same issue.
Any ideas? Cheers.
You can configure your environment variables globally via Manage Hudson -> Environment Variables, or per machine via Machine -> Configure -> Environment Variables (or per build with the Setenv plugin). It sounds like you may need to set the PATH and PYTHONPATH appropriately; at least that's the simple solution.
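Alternatively, a minimal "Execute shell" build step (or the top of hudson-build.sh) can make the PATH explicit, assuming the /usr/local/bin locations reported above:
# Make sure the Tomcat/Hudson user sees the same tools as your login shell.
export PATH=/usr/local/bin:$PATH
virtualenv ve
. ./ve/bin/activate
pip install -r requirements.txt
The requirements.txt line is just a stand-in for whatever hudson-build.sh actually does once the virtualenv is active.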
Edited to add: I feel as though the following is a bit of a rant, though not really directed at you or your situation. I think that you already have the right mindset here since you're using virtualenv and pip in the first place -- and it's not unreasonable for you to say, "we expect our build machines to have virtualenv and pip installed in /usr/local," and be done with it. Take the rest as you will...
While the PATH is a simple thing to set up, having different build environments (or relying on a user's environment) is an integration "smell". If you depend on a certain environment in your build, then you should either verify the environment or explicitly set it up as part of the build. I put environment setup in the build scripts rather than in Hudson.
Maybe your only assumption is that virtualenv and pip are in the PATH (because those are good tools for managing other dependencies), but build assumptions tend to grow and get forgotten (until you need to set up a new machine or user). I find it useful to either have explicit checks, or refer to explicit executable paths that are part of my defined build environment. It is especially useful to have an explicitly defined environment when you have legacy builds or if you depend on specific versions of your build tools.
As part of builds where I've had environment problems (especially on Windows with cygwin), I print the environment as the first build step. (But I tend to be a little paranoid proactive.)
I don't mean to sound so preachy; I'm just trying to share my perspective.
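As a concrete version of the print-the-environment suggestion, the first shell build step can be as small as:
# Dump the environment so PATH problems show up in the console log.
env | sort
Comparing that output between the java -jar run and the Tomcat run usually shows exactly which variables went missing.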
Just to add to Dave Bacher's comment:
If you set your path in .profile, it is most likely not executed when running Tomcat. The .profile (or whatever the name is on your system) is only executed when you have a login shell. To set the necessary environment variables, you have to use a different set of files. Sometimes they are called .env, and they exist at the global and user level. In my environment (AIX), the user-level .env file can have a different name (the name is set in the ENV variable, either in the global environment file (e.g. /etc/environment) or by a parameter when starting the shell).
Disclaimer: This is for the IBM AIX ksh, but should be the same for ksh on other systems.
P.S. I just found a nice explanation for .profile and .env from the HP site. Notice that they speak of a login shell (!) when they speak about the execution of the .profile file.