I use Yeoman to build Ember applications.
I want to duplicate an application, changing the directory and the application name (which means changing the name in all Ember views, controllers, etc.).
At the moment I'm doing everything manually, is there a better way of doing this?
There is no built-in tool in Yeoman, but if your app name is unique, you can just replace it in all your files by running something like this:
git ls-files | egrep '\.(js|html)$' | xargs sed -i 's/OldAppName/NewAppName/g'
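If the project isn't tracked by git, a find-based variant works the same way. Here's a self-contained sketch (OldAppName/NewAppName and the file contents are placeholders; the GNU sed `-i` flag is assumed):

```shell
# Create a scratch project so the example can run anywhere
tmpdir=$(mktemp -d)
echo 'App.OldAppName = Ember.Application.create();' > "$tmpdir/app.js"

# Preview which files would be touched before rewriting anything
grep -rl 'OldAppName' "$tmpdir"

# Rewrite in place (GNU sed; on macOS/BSD use: sed -i '' ...)
find "$tmpdir" \( -name '*.js' -o -name '*.html' \) -print0 |
  xargs -0 sed -i 's/OldAppName/NewAppName/g'

grep -o 'NewAppName' "$tmpdir/app.js"   # → NewAppName
rm -rf "$tmpdir"
```

Running the preview step first lets you confirm the app name really is unique before committing to a global replace.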
When working with a ColdFusion server you can access the CFIDE/administrator to set config values, which update the XML files in cfusion/lib/ (e.g. neo-runtime.xml, neo-mail.xml, etc.).
I'd like to automate a deployment process that includes setting these administrator values so that I don't have to log in and manually set them for each new box that shares settings. I'm unsure of the best way to go about it.
Some thoughts I had are:
Replacing the full files with ones containing my custom settings. I've done this for local development, but it may not be an ideal method due to CF hot-fixes potentially adding/removing/changing attributes.
A script to read the WDDX XML file and replace the attribute values. I'm having trouble finding information about how to do this.
Has anyone done anything like this before? Or does anyone have any recommendations on how to best go about this?
At one company, we checked all the neo-*.xml files into source control, with a set for each environment. Devs only had access to the dev settings, and we could quickly deploy a local development environment with all the correct settings for new employees.
but it may not be an ideal method due to CF hot-fixes potentially adding/removing/changing attributes.
You have to keep up with those changes and migrate each environment appropriately.
While I was there, we upgraded from 8 to 9, from 9 to 11, and from 11 to 2016. Environments had to be mixed, as it took time to verify that the applications worked with each new version of CF. Each server got the correct XML files for its environment, and scripts would copy updates as needed. We had something like 55 servers in production running 8 instances each, so this scaled well.
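That copy step can be as simple as keeping one directory of neo-*.xml files per environment in source control and copying the right set into place at deploy time. A minimal sketch, where the directory names and the ENV variable are assumptions (a scratch directory stands in for the checkout and the server):

```shell
# Scratch layout standing in for the source-control checkout and the server
tmpdir=$(mktemp -d)
mkdir -p "$tmpdir/config/dev" "$tmpdir/config/prod" "$tmpdir/cfusion/lib"
echo '<wddxPacket version="1.0"/>' > "$tmpdir/config/dev/neo-runtime.xml"
echo '<wddxPacket version="1.0"/>' > "$tmpdir/config/prod/neo-runtime.xml"

# Pick the environment for this box and copy its settings into place
ENV=dev
cp "$tmpdir/config/$ENV/"neo-*.xml "$tmpdir/cfusion/lib/"

ls "$tmpdir/cfusion/lib"   # → neo-runtime.xml
rm -rf "$tmpdir"
```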
There is a very useful tool developed by Ortus Solutions for this kind of automation called CFConfig, which can be installed with their CommandBox command-line utility. The tool isn't only capable of setting administrator configuration: it can also export/import settings to and from a JSON file (cfconfig.json). It might be what you need.
Here is the link to their docs
https://cfconfig.ortusbooks.com/introduction/getting-started-guide
CFConfig worked perfectly for my needs. I marked @AndreasRu's answer as accepted for introducing me to that tool! I'm just adding this response with some additional detail for posterity.
Install CommandBox as part of deployment script
Install CFConfig as part of deployment script
Use CFConfig to export a config.json file from an existing box that will share settings with the new deployment. Store this json file in source control for each type/env of box.
Use CFConfig to import the config.json as part of deployment script
Here's a simple example of what this looks like on Debian:
# Installs CommandBox
curl -fsSl https://downloads.ortussolutions.com/debs/gpg | apt-key add -
echo "deb https://downloads.ortussolutions.com/debs/noarch /" | tee -a /etc/apt/sources.list.d/commandbox.list
apt-get update && apt-get install apt-transport-https commandbox
# Installs CFConfig module
box install commandbox-cfconfig
# Import config settings
box cfconfig import from=/<path-to-config>/config.json to=/opt/ColdFusion/cfusion/ toFormat=adobe#11.0.19
What is the best way to reset the database along with the migrations?
Here is what I have tried.
Deleting all migrations and dropping all the database tables, then running:
php bin/console doctrine:mi:diff
php bin/console doctrine:mi:mi
That does not work.
I also tried running a single version at a time, like so:
php bin/console doctrine:migrations:migrate 'DoctrineMigrations\Version20200722104913'
I tried this:
php bin/console doctrine:database:drop --force
php bin/console doctrine:database:create
php bin/console doctrine:mi:mi
The Problem (in detail):
Everything I do leads me to the same result.
Doctrine still thinks I have some tables which are long gone (no longer mapped as entities).
That's why I have this error:
An exception occurred while executing 'DROP TABLE
greetings_card_category':
SQLSTATE[42S02]: Base table or view not found: 1051 Unknown table
'symfony.greetings_card_category'
I also get this warning
[WARNING] You have 6 previously executed migrations in the database that are not registered migrations.
In my migrations Directory I only have two migrations:
Version20200722104913.php
Version20200722143619.php
Here is the status if it somehow helps.
bin/console do:mi:status
| Versions   | Previous             | DoctrineMigrations\Version20200717093052 |
|            | Current              | DoctrineMigrations\Version20200722150530 |
|            | Next                 | DoctrineMigrations\Version20200722104913 |
|            | Latest               | DoctrineMigrations\Version20200722143619 |
|------------|----------------------|------------------------------------------|
| Migrations | Executed             | 6                                        |
|            | Executed Unavailable | 6                                        |
|            | Available            | 2                                        |
|            | New                  | 2                                        |
At this point I would really just love to have 1 clean database and 1 migration.
How to achieve this?
Be aware that having one migration script isn't the best way if you work in a team or if the application has already been deployed. But if you are the only developer, or you want to rebase some commits, you can do it.
If you really want to have only one migration script, here is a solution. First of all, though, it is always a bad idea to drop data and tables manually, because of the migration table that Doctrine uses. This table records the current state of your database, so if you drop tables manually, your database becomes unsynchronized with the migration table and your scripts will fail. To get past your current error, truncate the migration table. If that isn't enough, drop the migration table; if that still isn't enough, drop and recreate the database (for the last time, because below is a solution that avoids this kind of mismatch).
Step 1: Downgrade your database via doctrine:migrations:migrate
php bin/console doctrine:migrations:migrate first -n
At this step your database is empty. (The trick is the keyword "first".)
Step 2: Delete (after making a backup) all files in the migrations directory
Step 3: Create a new migration
php bin/console make:migration
(or you can use symfony console doctrine:migrations:diff if you do not use the MakerBundle)
Step 4: Verify the generated file and edit it manually if necessary.
Step 5: Upgrade your database
php bin/console doctrine:migrations:migrate -n
Having one migration isn't the best way, especially when you work in a team and develop features in different branches. With a single migration it is easy to mess up and do something wrong, as in your case. So it's fine to have many migrations.
As for your error: you can manually edit your migrations and fix the errors, then run diff and migrate (if needed); or you can drop your database, remove all migrations, create a new one, and then create further migrations as you change the code.
In my Ember.js project, I have the following files:
public/img/pixels.png 0640
public/img/vector.svg 0640
I'm deploying this inside a Docker nginx container. COPY'd files automatically belong to root, and nginx reads files as the nginx user.
After doing ember build -prod in a Docker container, I have the following files:
img/pixels-d72816e93259890d380ddf05acb748e7.png 0644
img/vector.svg 0640
Notice how the hashed file automatically changed from 0640 to 0644 so it is readable. The other one, however, did not: it is copied, but not readable.
In this Ember app, all references to pixels.png work fine, but references to vector.svg result in unavailable images.
What causes Ember to add the read bit for some, but not others?
How can I force Ember to set the a+r permission for all files it copies from public?
Should Ember do this automatically? (e.g. is this a bug?)
As discussed in the issue created in the Ember CLI repository after your question here, it should be considered a bug that these files end up with different permissions. I would recommend normalizing the permissions in a separate step after building the Ember.js application (ember build -prod), e.g. RUN chmod -R a+r /usr/share/nginx/html (rather than forcing 0644 on everything, which would also clobber directory modes).
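A self-contained sketch of that normalization step (the file names mirror the question; in a Dockerfile this would run right after the build). Files and directories are treated separately, since a blanket `chmod -R 0644` would strip the execute bit from directories and make them untraversable:

```shell
# Reproduce the mismatch: one asset copied without the read bit for "other"
tmpdir=$(mktemp -d)
mkdir -p "$tmpdir/img"
touch "$tmpdir/img/vector.svg"
chmod 0640 "$tmpdir/img/vector.svg"

# Make every regular file world-readable, leaving directory modes alone
find "$tmpdir" -type f -exec chmod a+r {} +

stat -c '%a' "$tmpdir/img/vector.svg"   # → 644
rm -rf "$tmpdir"
```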
I have an app written in Go, which I attempted to deploy to EB.
When trying to access it, I get an Error 502 from nginx, presumably because the app is not running.
Looking at logs, I get a lot of errors like
14:01:29 build.1 | application.go:10:2: cannot find package "github.com/aws/aws-sdk-go/aws" in any of:
14:01:29 build.1 | /opt/elasticbeanstalk/lib/go/src/github.com/aws/aws-sdk-go/aws (from $GOROOT)
14:01:29 build.1 | /var/app/current/src/github.com/aws/aws-sdk-go/aws (from $GOPATH)
Despite the fact that I have all of my dependencies included in the application bundle under a vendor subdirectory. How come EB does not use vendoring? According to the dashboard, it is running Go 1.9, so vendoring should be supported.
You need to set your GOPATH in your EB environment to the root of your project directory, assuming there is a src directory where your vendor directory is located.
For instance, pretend this is your project structure:
app/
    src/
        vendor/
And pretend that project is located in ~/home, which makes its location ~/home/app.
Then your GOPATH should be set to ~/home/app. Go will attempt to access the dependencies through $GOPATH/src/vendor.
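Concretely, with GOPATH pointed at the project root, the pre-modules Go toolchain looks for the dependency at the path below. A self-contained sketch of the layout (directory names are assumptions mirroring the error message in the question):

```shell
tmpdir=$(mktemp -d)

# Project root acts as GOPATH; the vendored dependency lives under src/vendor
mkdir -p "$tmpdir/app/src/vendor/github.com/aws/aws-sdk-go/aws"
export GOPATH="$tmpdir/app"

# This is where Go resolves the "(from $GOPATH)" candidate for the import
ls -d "$GOPATH/src/vendor/github.com/aws/aws-sdk-go/aws"
rm -rf "$tmpdir"
```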
But if this were the kind of structure you were using before, you would have needed GOPATH set this way during local development as well; if you aren't already doing that, I imagine you're using a different kind of setup. This solution, however, will work as long as your project is structured as described above.
I'm trying to come up with a sensible setup for a build written using SCons, which relies on quite a lot of applications being accessible in a Unix-like way, using Unix-like paths and so on. However, when I try to use the SCons plugin or the Git plugin in Jenkins, it invokes the tools using something like cmd /c git.exe, and this will certainly fail, because Git was installed under Cygwin and is only known to the Cygwin shell, not to CMD. But even if I could make git and the rest available to cmd.exe, other problems arise: the Cygwin version of Git expects paths to have forward slashes and treats backslashes as escape characters. Idiotic Windows file-system-related issues kick in too (I can't give Jenkins permission to delete my own files!).
So, is there a way to somehow make Jenkins only use Cygwin shell, and never cmd.exe? Or should I be prepared to run some Linux in a VM to have this handled?
You could configure Jenkins to execute a Cygwin shell with a specific command, as follows:
c:\cygwin\bin\mintty --hold always --exec /cygdrive/c/path/to/bash/script.sh
Where script.sh will execute all the commands needed for the Jenkins execution.
Just for the record here's what I ended up doing:
Added a user SYSTEM to Cygwin, mkpasswd -u SYSTEM
Edited /etc/passwd, adding the newly created user's home directory to the record. Now it looks something like this:
SYSTEM:*:18:544:,S-1-5-18:/home/SYSTEM:
Copied my own user's configuration settings such as .netrc, .ssh and so on into the SYSTEM home. Then, from Windows Explorer, through an array of popups, I claimed ownership of all of these files for the SYSTEM user. One by one! I love Microsoft!
In Jenkins I now run a wrapper for my build that sets some other environment variables etc. by calling c:\cygwin\bin\bash --login -i /path/to/script/script
Gave it up because of other configuration difficulties and made the Jenkins service run under my user rather than SYSTEM. Here's a blog post on how to do it: http://antagonisticpleiotropy.blogspot.co.il/2012/08/running-jenkins-in-windows-with-regular.html but, basically, you need to open Windows services, find the Jenkins service, open its properties, go to the "Log On" tab and change the user to "this user".
One way to do this is to start your "execute shell" build steps with
#!c:\cygwin\bin\bash --login
The trick is of course that it resets your current directory so you need to
cd `cygpath $WORKSPACE`
to get back to the workspace.
Adding to thon56's good answer: this is helpful: "set -ex"
#!c:\cygwin\bin\bash --login
cd `cygpath $WORKSPACE`
set -ex
Details:
-e to exit on error. This is important if you want your jobs to fail on error.
-x to echo command to the screen, if desired.
You can also use #!c:\cygwin\bin\bash --login -ex, but that echos a lot of login steps that you most likely don't care to see.
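A quick self-contained illustration of what the two flags do (the script body is a made-up stand-in for a build step):

```shell
script=$(mktemp)
cat > "$script" <<'EOF'
set -ex
echo "building"
false            # non-zero exit status: -e aborts the script here
echo "never reached"
EOF

bash "$script"
echo "exit code: $?"   # → exit code: 1
rm -f "$script"
```

The `-x` trace goes to stderr, so only "building" and the exit code appear on stdout; the final echo never runs inside the script because `-e` stopped it at the first failure.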