Hi all, I have gone through many related questions on this problem but I am still unable to find a solution. I have installed Zend Server and now I want to install PHPUnit. I installed PEAR and then installed PHPUnit.
My Zend Server is installed at
C:\xyz\zend\ZendServer
My pear is installed at
C:\xyz\zend\ZendServer\bin\PEAR
And PHPUnit is installed at
C:\xyz\zend\ZendServer\bin\PEAR\pear\PHPUnit
I have added the PEAR path and even the PHPUnit path to the environment PATH variable. Then I opened php.ini, located at
C:\xyz\zend\ZendServer\etc
and set include_path as
include_path = ".;c:\php\includes;c:\xyz\zend\ZendServer\bin\PEAR\pear;c:\xyz\zend\ZendServer\bin\PEAR\pear\PHPUnit"
Now when I run the command at cmd to create a Zend project, the project is created, but I see this note too:
Testing Note: PHPUnit was not found in your include_path, therefore no testing action will be created.
Can someone please tell me what I am doing wrong and where to set this include path?
Best Regards :-)
Let's do some old school debugging :)
$ zf create project ./one
Creating project at /path/to/one
Note: This command created a web project, for more information setting up your VHOST, please see docs/README
Testing Note: PHPUnit was not found in your include_path, therefore no testing actions will be created.
No PHPUnit!
Locate path/zend-framework/bin/zf.php and add near the top:
var_dump(get_include_path());
Now let's see what the include path looks like:
$ zf create project ./two
string(32) ".:/usr/share/php"
Creating project at /path/to/two
Note: This command created a web project, for more information setting up your VHOST, please see docs/README
Testing Note: PHPUnit was not found in your include_path, therefore no testing actions will be created
My PHPUnit isn't in the /usr/share/php directory. Let's resolve that by adding PHPUnit to the include path.
e.g. If PHPUnit is in /path/to/phpunit, open the php.ini file and add it to the include path.
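For example, with PHPUnit installed under /path/to/phpunit, the relevant php.ini line would end up looking something like this:
include_path = ".:/usr/share/php:/path/to/phpunit"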
Third time's a charm:
$ zf create project ./three
string(56) ".:/usr/share/php:/path/to/phpunit"
Creating project at /path/to/three
Note: This command created a web project, for more information setting up your VHOST, please see docs/README
If you've edited the correct php.ini, the var_dump() you added to zf.php will now echo the include path with whatever you modified it to. In my case it was correct, so PHPUnit is now found.
Now remove the debug code from zf.php.
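One extra tip: if you're not sure which php.ini the CLI is actually loading (it is often a different file from the one the web server uses), you can ask PHP directly:
$ php --ini
That lists the configuration file path and the loaded configuration file, so you know which one to edit.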
In short: why index.android.bundle is not uploaded to the Sentry server when following Expo's guide
I created a GitHub issue after testing this with a clean repository, and there I describe the problem better and in more detail. The main problem could be the script I'm using. I will link the issue here:
https://github.com/expo/sentry-expo/issues/313
Hello.
I'm using the latest sentry-expo, which correctly sends errors to the Sentry server.
I have followed the documentation from https://docs.expo.dev/guides/using-sentry/#uploading-source-maps-for-updates
On new builds, index.android.bundle and the .map file are uploaded to Sentry.
But when I make an update by running eas update and following the sentry-cli releases... script as documented in the Expo guide, the android-'hash'.map file is uploaded but index.android.bundle is not.
Therefore dist differs between the .js and .map files, and Sentry issues don't contain source map information:
Source code was not found (see Troubleshooting for JavaScript)
Url app:///index.android.bundle
But if I change index.android.bundle to index.android.bundle.js in the sentry-cli --rewrite command, the bundle is uploaded, but issues still show the same information, probably because the uploaded archive is ~/index.android.bundle.js while the issue expects ~/index.android.bundle.
package versions:
"#sentry/react-native": "4.9.0",
"expo": "~47.0.8",
"sentry-expo": "~6.0.0",
I'll add that I'm on Windows and couldn't get sentry-cli releases to work as documented in the expo-sentry tutorial. I used this script:
cross-env ./node_modules/@sentry/cli/bin/sentry-cli releases --org 'organization name' --project 'project name' files 'release name' upload-sourcemaps --dist 'Android Update ID' --rewrite dist/bundles/index.android.bundle dist/bundles/android-'hash'.map
Thank you for all the help!
The android-*.js bundle file simply needed to be renamed to index.android.bundle, not to index.android.bundle.js. Now source maps are showing correctly.
The Expo documentation showed everything correctly, but my own understanding added the need for .js in the file naming. The bundle file without any extension works correctly.
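For anyone hitting the same thing, this is roughly what the working combination looks like on my side. The org, project, release, and hash values are placeholders, and the rename can just as well be done by hand in Explorer:
ren dist\bundles\android-'hash'.js index.android.bundle
cross-env ./node_modules/@sentry/cli/bin/sentry-cli releases --org 'organization name' --project 'project name' files 'release name' upload-sourcemaps --dist 'Android Update ID' --rewrite dist/bundles/index.android.bundle dist/bundles/android-'hash'.map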
When working with a ColdFusion server, you can access the CFIDE/administrator to set config values, which updates the cfusion/lib/ XML files (e.g. neo-runtime.xml, neo-mail.xml, etc.).
I'd like to automate a deployment process that includes setting these administrator values so that I don't have to log in and manually set them for each new box that shares settings. I'm unsure of the best way to go about it.
Some thoughts I had are:
Replacing the full files with ones containing my custom settings. I've done this for local development, but it may not be an ideal method due to CF hot-fixes potentially adding/removing/changing attributes.
A script to read the WDDX XML file and replace the attribute values. I'm having trouble finding information on how to do this.
Has anyone done anything like this before? Or does anyone have any recommendations on how to best go about this?
At one company, we checked all the neo-*.xml files into source control, with a set for each environment. Devs only had access to the dev settings, and we could quickly deploy a local development environment with all the correct settings for new employees.
but it may not be an ideal method due to CF hot-fixes potentially adding/removing/changing attributes.
You have to keep up with those changes and migrate each environment appropriately.
While I was there, we upgraded from 8 to 9, from 9 to 11, and from 11 to 2016. Environments had to be mixed, as it took time to verify the applications worked with each new version of CF. Each server got the correct XML files for its environment, and scripts would copy updates as needed. We had something like 55 servers in production running 8 instances each, so this scaled well.
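As a rough sketch of what that copy step can look like (the repository and instance paths here are made up for illustration):
# push the checked-in XML settings for this environment onto an instance
cp /path/to/config-repo/prod/neo-*.xml /opt/coldfusion/instance1/lib/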
There is a very useful tool developed by Ortus Solutions for this kind of automation called CFConfig, which can be installed with their CommandBox command line utility. This tool isn't only capable of setting administrator configurations: it can also export/import settings to a JSON file (cfconfig.json). It might be what you need.
Here is the link to their docs
https://cfconfig.ortusbooks.com/introduction/getting-started-guide
CFConfig worked perfectly for my needs. I marked @AndreasRu's answer as accepted for introducing me to that tool! I'm just adding this response with some additional detail for posterity.
Install CommandBox as part of deployment script
Install CFConfig as part of deployment script
Use CFConfig to export a config.json file from an existing box that will share settings with the new deployment. Store this json file in source control for each type/env of box.
Use CFConfig to import the config.json as part of deployment script
Here's a simple example of what this looks like on Debian:
# Installs CommandBox
curl -fsSl https://downloads.ortussolutions.com/debs/gpg | apt-key add -
echo "deb https://downloads.ortussolutions.com/debs/noarch /" | tee -a /etc/apt/sources.list.d/commandbox.list
apt-get update && apt-get install apt-transport-https commandbox
# Installs CFConfig module
box install commandbox-cfconfig
# Import config settings
box cfconfig import from=/<path-to-config>/config.json to=/opt/ColdFusion/cfusion/ toFormat=adobe@11.0.19
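For reference, the config.json itself came from a cfconfig export run against an existing, correctly configured box; with the same assumed paths and CF version as above, that step looks roughly like:
# Export config settings from an existing server
box cfconfig export from=/opt/ColdFusion/cfusion/ fromFormat=adobe@11.0.19 to=/<path-to-config>/config.json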
I'm trying to figure out how to get code coverage working with @angular/cli, but so far I'm not having much luck.
I started a new project using the Angular CLI. Basically all I did was ng new test-coverage, and once everything was installed in my new project folder, I ran ng test --code-coverage. The tests ran successfully, but nothing resembling code coverage was displayed in the browser.
Am I missing some dependencies or something else? Any help will be appreciated.
EDIT:
R. Richards and Rachid Oussanaa were right: the file does get generated, and I can access it by opening the index.html.
Now I'm wondering, is there a way I could integrate that into a node command so that the file opens right after the tests are run?
Here's what you can do:
Install opn-cli, which is a CLI for the popular opn package, a cross-platform tool used to open files in their default apps.
Run npm install -D opn-cli to install it as a dev dependency.
In package.json, add a script under scripts as follows:
"scripts": {
...
"test-coverage": "ng test --code-coverage --single-run && opn ./coverage/index.html"
}
Now run npm run test-coverage.
This will run the script we defined. Here is an explanation of that script:
ng test --code-coverage --single-run will run tests, with coverage, only ONCE, hence --single-run
&& basically executes the second command if the first succeeds
opn ./coverage/index.html will open the file regardless of platform.
This is driving me nuts.
I had a working .ebextensions config file in my Project which was working fine.
Recently my single instance failed and a new one was initiated. My configuration failed to run, so I tried to troubleshoot what went wrong. I didn't find anything suspicious, so I just created a new .config with a very simple command, but it still fails!
I validated my config file with an online yaml validator.
I connected to the instance through Remote Desktop and saw that the .ebextensions folder is actually created within wwwroot and then disappears, meaning that it was successfully picked up by Elastic Beanstalk.
I also granted all permissions to everyone on the test folder just to make sure this is not the reason.
Whether I try the old configuration or this test command, it just does not work and Elastic Beanstalk simply ignores it!
Any info on what might be wrong is appreciated.
commands:
01_Dowork:
command: mkdir kakarot
cwd: c:\\testdir
waitForCompletion: 0
I think everything under 01_DoWork needs to be indented (command, cwd, waitForCompletion). Also, make sure you're using spaces and not tabs.
Check the properties on your config file in VS. It should be (I think) both 'Content' and 'Copy if Newer'. Also, make sure that it gets packaged into the msdeploy package. It's a .zip file in/below your obj directory.
The command will error out if it's already succeeded, so you would want to either ignore errors or add this. I found this syntax on another SO post but don't know who to credit for it :-/. The errorlevel from the test will cause your command not to run if the directory already exists.
test: test ! -d c:\\testdir\\kakarot
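Putting the indentation fix and the test together, a sketch of the whole config (same paths as above) would look something like this:
commands:
  01_Dowork:
    command: mkdir kakarot
    cwd: c:\\testdir
    test: test ! -d c:\\testdir\\kakarot
    waitForCompletion: 0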
If you're creating a package.zip (that inside has a deploy manifest json file plus the actual site.zip content) for a Windows deployment, it appears the .ebextensions directory needs to be inside package.zip, alongside the manifest json, not inside the site.zip, contrary to the current documentation.
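In other words, the package.zip layout that worked looks roughly like this (the .config file name is just an example; the manifest is the deploy manifest json mentioned above):
package.zip
    aws-windows-deployment-manifest.json
    .ebextensions/
        01-commands.config
    site.zip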
In my use case I am setting up a single go test run which executes all _test.go files in all packages in the project folder. I tried to achieve this using go test ./... from the src folder of the project:
/project-name
    /src
        /mypack
        /dao
        /util
When I try to run the tests, it asks me to install the packages that are used by the imported packages. For example, if I import "github.com/go-sql-driver/mysql", it might itself use another package such as github.com/golang/protobuf/proto. I did not manually import the proto package, and the application runs without me installing it manually. But when I run the tests, it fails, even though the individual package tests succeed. Do I have to install all the packages listed in the go test ./... error manually?
Could anyone help me on this?
You need to run go get -t ./... first to get all test deps.
From the go get documentation:
The -t flag instructs get to also download the packages required to
build the tests for the specified packages.
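So, from the project's src folder, something like this should do it:
$ go get -t ./...
$ go test ./...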